As I said, you had to ZIP and attach the file. The forum won't accept files over 2MB as a precaution against abuse. Since your log was 2.4MB, the forum rejected the attachment, not because of a bug, but because that's exactly what I told it to do; it's not a bug, it's a feature! See why I asked you to "ZIP and attach" instead of just "attach" the log file? Semantics count ;)
Regarding your issue, I can see what is going on right here:
DEBUG |110807 14:34:47|S3 -- Uploading part 3 of Tc8us_HCV3igon3DeLkDJ6jyMmxbTIMjfI4odNzAXFiA7xbMctWTX9oVbHWkNDJYnztSJIs34EDDV_3qfoBgPQ--
WARNING |110807 14:51:55|Failed to process file /administrator/components/com_akeeba/backup/site-abm.andreso.net-20110807-183320.zip
WARNING |110807 14:51:55|Error received from the post-processing engine:
WARNING |110807 14:51:55|AEUtilAmazons3::uploadMultipart(): [56] Recv failure: Connection timed out
DEBUG |110807 14:51:55|Not removing processed file /administrator/components/com_akeeba/backup/site-abm.andreso.net-20110807-183320.zip
DEBUG |110807 14:51:55|----- Finished operation 1 ------
DEBUG |110807 14:51:55|Successful Smart algorithm on AECoreDomainFinalization
DEBUG |110807 14:51:55|Kettenrad :: More work required in domain 'finale'
DEBUG |110807 14:51:55|====== Finished Step number 11 ======
DEBUG |110807 14:51:55|*** Engine steps batching: Break flag detected.
DEBUG |110807 14:51:55|*** Batching of engine steps finished. I will now return control to the caller.
DEBUG |110807 14:51:55|No need to sleep; execution time: 1028422.27411 msec; min. exec. time: 2000 msec
DEBUG |110807 14:51:55|Saving Kettenrad instance backend
ERROR |110807 14:51:55|The process was aborted on user's request
What this tells me: Akeeba Backup is trying to perform a multipart upload of the backup archive. The first two parts transfer successfully. The third part starts uploading at 14:34:47, but the connection finally drops at 14:51:55, a full 17 minutes and 8 seconds later, due to a timeout on Amazon's end. Of course, that causes a timeout error on your server as well (Apache aborts the request if the script doesn't finish processing it within 180 seconds), which results in the "process was aborted on user's request" message. At this point the backup itself is complete, but transferring it to S3 has failed, so the whole process is reported as failed because of the timeout. It shouldn't cause the server to go down, though; that part has to be a mod_fcgid misconfiguration or bug.
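To make the failure mode clearer, here is a minimal sketch of what a multipart upload to S3 looks like. This is written in Python with boto3 purely for illustration (Akeeba Backup's own engine is PHP, and the bucket, key, and function names below are hypothetical); the point is that each part is a separate long-running request, and any one of them can stall past the server's timeout:

```python
import boto3
from botocore.config import Config

# A per-request timeout roughly comparable to the server's limits. Part 3 in
# the log above ran for over 17 minutes before the connection dropped.
s3 = boto3.client("s3", config=Config(connect_timeout=10, read_timeout=180))

def upload_in_parts(bucket, key, filename, part_size=5 * 1024 * 1024):
    """Upload a file to S3 in fixed-size parts (S3 multipart upload)."""
    mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
    parts = []
    try:
        with open(filename, "rb") as f:
            part_number = 1
            while True:
                data = f.read(part_size)
                if not data:
                    break
                # If this call stalls longer than read_timeout, botocore
                # raises an exception -- the equivalent of the cURL error 56
                # ("Recv failure") in the log above.
                resp = s3.upload_part(
                    Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
                    PartNumber=part_number, Body=data,
                )
                parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
                part_number += 1
        s3.complete_multipart_upload(
            Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
            MultipartUpload={"Parts": parts},
        )
    except Exception:
        # Abandon the incomplete upload so S3 doesn't keep the orphaned parts.
        s3.abort_multipart_upload(Bucket=bucket, Key=key,
                                  UploadId=mpu["UploadId"])
        raise
```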
Don't worry, though, we have a workaround! Go to Akeeba Backup's configuration page, then:
1. Click the Configure button next to the Archiver Engine.
2. Set the part size for split archives to 10MB.
3. Click the Configure button next to the Data Processing Engine.
4. Check the "Disable multipart uploads" option (the sketch below shows what this changes under the hood).
5. Save the configuration settings and retry the backup.
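For the curious, "Disable multipart uploads" means each (now much smaller) archive part is sent as one plain PUT request instead of going through S3's multipart protocol. A sketch, again using boto3 for illustration, with hypothetical bucket and file names:

```python
import boto3

s3 = boto3.client("s3")

# Each 10MB archive part is small enough to be sent as a single PUT request,
# so no one connection stays open long enough to hit a timeout.
for name in ["site.zip", "site.z01", "site.z02"]:
    with open(name, "rb") as f:
        s3.put_object(Bucket="my-backup-bucket", Key=name, Body=f)
```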
Your backup will now be split into seven files (.zip, .z01, ..., .z06) and transferred to S3 without timeout issues. You can try higher values to reduce the number of archive parts; you could even try 100MB so that you get a single part, but that may cause timeout issues while transferring the archive to S3. Well, the only way to find out is trial and error.
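If you want to predict the part count before experimenting, it's simple arithmetic; a tiny sketch, assuming you know your archive's size:

```python
import math

def part_count(archive_bytes: int, part_size_bytes: int) -> int:
    """Number of split-archive files produced for a given part size."""
    return math.ceil(archive_bytes / part_size_bytes)

# Your archive is somewhere in the 60-70MB range, since a 10MB part size
# produces seven files (.zip plus .z01 through .z06):
print(part_count(65 * 1024 * 1024, 10 * 1024 * 1024))  # -> 7
```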
Nicholas K. Dionysopoulos
Lead Developer and Director
🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!