Support

Akeeba Backup for Joomla!

#9147 S3 Thirs part failure to upload

Posted in ‘Akeeba Backup for Joomla! 4 & 5’
This is a public ticket

Everybody will be able to see its contents. Do not include usernames, passwords or any other sensitive information.

Environment Information

Joomla! version
n/a
PHP version
n/a
Akeeba Backup version
n/a

Latest post by nicholas on Wednesday, 30 November 2011 03:09 CST

toomanylogins
Mandatory information about my setup:

Have I read the related troubleshooter articles above before posting (which pages?)? Yes
Have I searched the forum before posting? Yes
Have I read the documentation before posting (which pages?)? Yes
Joomla! version: (1.5.22)
PHP version: (5.3.2)
MySQL version: (5.1.4)
Host: (optional, but it helps us help you)
Akeeba Backup version: (3.3.2)

EXTREMELY IMPORTANT: Please attach your Akeeba Backup log file in order for us to help you with any backup or restoration issue.

Description of my issue:

Good morning,

I have a large backup which I'm uploading to Amazon S3 twice a week. The backup is 2.3 GB and is split into 1 GB parts. The first part, file name xxx.jpa (300 MB), always uploads okay. The second part, file name xxx.j01 (1 GB), always uploads okay. The third part, file name xxx.j02, always fails to upload, and the file remains in the /administrator/components/akeeba/backup directory on our server.
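For readers following along: a split Akeeba archive shares a single base name, with one .jpa file plus numbered .j01, .j02, ... parts. A small hypothetical helper (not Akeeba code) that lists the expected part file names for a given part count:

```python
def akeeba_part_names(base: str, parts: int) -> list[str]:
    """Hypothetical helper: file names of a split Akeeba archive.

    A split backup consists of base.jpa plus numbered parts
    base.j01, base.j02, ... (parts - 1 numbered files in total).
    """
    return [f"{base}.jpa"] + [f"{base}.j{i:02d}" for i in range(1, parts)]

# The three-part backup described above:
print(akeeba_part_names("xxx", 3))  # ['xxx.jpa', 'xxx.j01', 'xxx.j02']
```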

I have tried both enabling and disabling the multipart upload feature in the Amazon S3 configuration, and it doesn't seem to make any difference. The backup always fails to upload the last part.

I have attached a log file. I realise this is probably some form of timeout problem, but the first two parts always upload, so this doesn't make sense.

thanks
Paul


toomanylogins
Sorry about the typo in the subject; it should read "third part", but I couldn't find how to edit a post in the forum to correct it.

nicholas
Akeeba Staff
Manager
Hi Paul,

I can see that in this case you are using the multi-part upload, and that's what is causing the failure. The thing is, multi-part upload doesn't seem to be the most stable thing in the world, as it tends to halt pretty frequently on Amazon's end. Let's try the following:
1. Upgrade to Akeeba Backup 3.3.6
2. Try disabling the multi-part upload for Profile #2 (warning: do not edit Profile #1, it won't work ;) ). Double-check that you disabled multi-part uploads for Profile #2!
3. If this doesn't work, try lowering the part size to something like 500 MB, 250 MB or 100 MB.

If all else fails, please ZIP and attach your new log file.

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

toomanylogins
Hello Nicholas,
thank you for the quick reply. I think I am confused by the wording of the S3 configuration. Currently I have "Disable multipart upload" deselected. I have tried both options, selected and deselected, with the same result. I will update to 3.3.6 and see what happens.
Regards
Paul

nicholas
Akeeba Staff
Manager
I think you are editing the wrong profile. Your backup which doesn't work is using profile #2. On that profile, multi-part upload is enabled (the checkbox is NOT checked).

Go to your site's back-end, Components, Akeeba Backup. In the drop-down above the buttons, select your second profile. The page reloads. It should now show which profile is active, with #2 printed next to its name. Now click on the Configuration button and take a look at the "Disable multi-part upload" checkbox. Is it checked? If not, check it and click on Save, then retry the backup.

toomanylogins
Hello Nicholas,

I upgraded to the latest version as suggested, but something strange is going on here. I changed the archive part size from 1 GB to 750 MB and selected the "Disable multipart upload" option. Last night the backup proceeded with the same result: two archives uploaded to S3 and one remained on the server. The really strange thing is that the archive part size is still 1 GB.

Please see attached log file and screenshots of the config, the file on the server and the uploaded files to Amazon.

Thank you for your help.

Regards
Paul

nicholas
Akeeba Staff
Manager
Paul,

You are either editing the configuration of the wrong backup profile or you are overriding the configuration in your CRON command line. Can you please send me a PM with Super Administrator access to your site so that I can take a look and see if I can rule out the first suspicion?

toomanylogins
Hello Nicholas,

Re your comments below, the schedule is as follows:

Profile #1 = Joomla plus database = 30 MB
Profile #3 = incremental images folder = 50 MB
Profile #2 runs last, as this is the big one

I will change the timings as you said and see what happens.

Regards
Paul

I see something strange. Every day, there are three automatic backups taking place:
Profile #3 (files incremental) then Profile #2 (site files) then Profile #1 (full backup, most likely only database)

Given your backup times, in order to avoid cross-talk issues between those profiles, you need to schedule these as follows:
Profile #3 with Profile #2: about 1 hour apart
Profile #2 with Profile #1: about 3 hours apart
If you try to start them all at the same time, or one of them starts while another is still in progress, all hell breaks loose.
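As an illustration only, a crontab implementing that spacing could look like the following. The start times, PHP binary path and site path are assumptions; Akeeba ships a CLI backup script with a --profile switch, but verify the exact path on your own install:

```shell
# Hypothetical schedule: Profile #3 first, Profile #2 one hour later,
# Profile #1 three hours after that, so no two backups overlap.
0 1 * * * /usr/bin/php /path/to/site/cli/akeeba-backup.php --profile=3
0 2 * * * /usr/bin/php /path/to/site/cli/akeeba-backup.php --profile=2
0 5 * * * /usr/bin/php /path/to/site/cli/akeeba-backup.php --profile=1
```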


toomanylogins
Hello Nicholas,
Please see attached log after implementing your recommendations. This time round the backup produced three archives but only one was successfully transferred to S3.

site-xxx-20111114-040001.jpa (727 MB): transferred okay
site-xxx-20111114-040001.j01 (786 MB): did not transfer to S3
site-xxx-20111114-040001.j02 (786 MB): did not transfer to S3

The site is hosted at Webfusion, and their support confirms they have a very fast connection to S3.

Are there any other parameters I can try?
regards
Paul

nicholas
Akeeba Staff
Manager
Let's see what the log has to say:
DEBUG |111114 04:12:43|S3 -- Legacy (single part) upload of site-XXXXXX-20111114-040001.j01

WARNING |111114 04:20:17|Failed to process file /administrator/components/com_akeeba/backup/site-XXXXXX.j01

WARNING |111114 04:20:17|Error received from the post-processing engine:

WARNING |111114 04:20:17|AEUtilAmazons3::putObject(): [56] Failure when receiving data from the peer


I honestly believe that the part size you are currently using is right on the verge of causing an Amazon S3 timeout. As I wrote 4 days ago regarding the part size:
If this doesn't work, try lowering the part size to something like 500 MB, 250 MB or 100 MB

Please do that.

toomanylogins
hello Nicholas,

I reduced the part size to 500 MB, and last night's backup failed completely, with none of the 4 parts (2 GB in total) uploaded successfully to S3. I do not understand this. When the part size was 1 GB and it was a single-file upload, i.e. the total backup was less than 1 GB, it worked flawlessly, even when the part reached 900 MB. The problem only started when we went to multi-file uploads. Therefore, if this were a part size/timeout issue, the original single-file backups would have failed, as they were larger. Reducing the part size is not fixing the problem, and it doesn't add up. I have attached the latest log information.

Regards
Paul

nicholas
Akeeba Staff
Manager
I'm looking at the timestamps in your log:
DEBUG |111117 04:04:28|S3 -- Legacy (single part) upload of site-qms.completepicture.co.uk-20111117-040002.jpa

WARNING |111117 04:09:00|Failed to process file /administrator/components/com_akeeba/backup/site-qms.completepicture.co.uk-20111117-040002.jpa

It looks like the connection speed to S3 has dropped since the last time you tried. Last time it was about 150 MB/minute; now it is certainly under 100. Just try working backwards a little: start with a part size of 20 MB and then progress to 50, 100, 200, 300 MB until you find which one works reliably.
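Rate estimates like these can be reproduced from the log timestamps themselves. A quick sketch of the arithmetic, using the 786 MB .j01 part and the timestamps quoted earlier in this thread (the transfer aborted at the second timestamp, so this is the implied rate up to the failure):

```python
from datetime import datetime

def mb_per_minute(size_mb: float, start: str, end: str) -> float:
    """Throughput implied by two HH:MM:SS log timestamps for one part file."""
    fmt = "%H:%M:%S"
    elapsed = (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds()
    return size_mb / (elapsed / 60)

# 786 MB between 04:12:43 and 04:20:17 (from the earlier log excerpt):
print(round(mb_per_minute(786, "04:12:43", "04:20:17")))  # 104
```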

toomanylogins
Hello Nicholas,

I reviewed the changes that I made to the configuration of profile #2 and decided to leave the part size at 500 MB and remove the quota management. It is now working perfectly. Therefore I think the issue is the quota management on S3 when using multiple archives. To test this, I'm going to increase the archive size to 1 GB this evening and see what happens.

Regards
Paul

nicholas
Akeeba Staff
Manager
This is a coincidence. Your logs tell me that all previous backups failed when they were trying to upload the backup archives to S3. This code runs BEFORE the quota code. The quota code (which only does a few non-data-intensive DELETEs) was succeeding. If the two problems were somehow linked, then it should be the quotas failing (as they come AFTER the upload), not the uploads. The effect can never precede the cause ;)

toomanylogins
Hello Nicholas,

Thank you for your comments. This may seem like a coincidence, but it is repeatable, so something is wrong. Last night I re-enabled the remote file quota and the backup failed, as I expected. To recap, the scenario is as follows:

- Original part size 1 GB: always works when the backup is less than 1 GB, i.e. a single-file upload.
- Enabled remote quotas: backup still works for a single file.
- Backup size greater than 1 GB: backup always fails on the third file, i.e. it manages to upload 2 files.
- Decreased part size to 750 MB and then 500 MB: backup still fails.
- Disabled remote quota, part size 500 MB: backup works, 4 files uploaded.
- Increased part size to 750 MB: backup continues to work.
- Enabled remote quota, part size 750 MB: backup fails, 2 files uploaded.

I have attached a log for the successful backup of 22-11 with a 750 MB part size, and the log for 23-11, which failed. There is also a screenshot of the S3 container showing the files uploaded, and a similar one of the files remaining on the server.

Regards
Paul

toomanylogins
Hello Nicholas,

A further anomaly: the "Administer Backup Files" option in the control panel shows all 4 parts of the backup from 23-11 as being on the S3 remote server. This is not the case; only 2 are on the remote server, as 2 parts did not upload. Please see screenshots.

Regards
Paul

nicholas
Akeeba Staff
Manager
Hello Paul,

The Administer Backup Files page doesn't know whether all or just a few of the files have been uploaded. If no error was reported during the upload, it assumes all parts were uploaded. The workaround would be to query Amazon S3 every time you load the Administer Backup Files page, wasting your money and risking timeout errors.

Regarding the uploads, I still can't see how enabling the remote file quotas has anything to do with the upload. After the backup is complete, the Finalize domain runs and the following actions are executed, in this exact order:
- Remove the temporary files which are stored in the output directory
- The stats table is updated with the execution time
- The total filesize of the backup is updated in the stats table
- The post-processing (upload to cloud storage) is performed
- The local quotas are applied. Note: this only removes files on YOUR server
- The remote quotas are applied. Note: this only removes files on the REMOTE server, i.e. Amazon S3
- An email is sent to notify you of the backup completion, if this feature is enabled

The order of these tasks is defined in administrator/components/com_akeeba/akeeba/core/domain/finalization.php lines 49 to 57. As you can see on the same file, lines 270-417, the post-processing (upload to S3) doesn't care about quotas. The first time the quotas are being evaluated is lines 495-699 (the apply_quotas() method) which runs AFTER the upload of files.

Therefore any quota settings CANNOT interfere with the upload, because these settings ARE NOT read AT ALL until AFTER the upload code has run. It is IMPOSSIBLE that a single integer, stored in the database and not used ANYWHERE in a block of code, can have any effect at all in that block of code. Computers are the dumbest things in the Universe. They will only do what you tell them to do. They don't have a mind of their own. We are not telling your server to use the quota value until AFTER the upload, therefore it can't just think on its own and decide to use it nonetheless! That would be absurd. As a result, the quota setting and the upload to S3 cannot be linked.

If you don't believe me, please do have another developer take a look at the code. I just pointed out the exact file and line numbers where the code runs.
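The ordering argument above can be illustrated with a minimal sketch (hypothetical, not Akeeba's actual code) of a sequential finalization pipeline: the quota steps can only run after the upload step, so if the upload fails, the quota code is never even reached, let alone able to influence the upload.

```python
def finalize(steps):
    """Run (name, fn) finalization steps in order; stop at the first failure."""
    executed = []
    for name, step in steps:
        executed.append(name)
        if not step():
            break
    return executed

# Simulate the order described above, with the upload step failing:
ran = finalize([
    ("remove temp files",   lambda: True),
    ("update stats",        lambda: True),
    ("upload to S3",        lambda: False),  # simulated upload failure
    ("apply local quotas",  lambda: True),
    ("apply remote quotas", lambda: True),
    ("send email",          lambda: True),
])
print(ran)  # ['remove temp files', 'update stats', 'upload to S3']
```

Note that the quota steps never appear in the executed list: they sit strictly downstream of the upload.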

toomanylogins
Hello Nicholas

I have no doubt you're right regarding your code; however, something strange is happening. I'm going to leave the backup running for a week to make sure it is definitely not a part size problem. Then I will turn on the remote quota and see what happens. I'll keep you posted.
Regards
Paul

nicholas
Akeeba Staff
Manager
Yes, please do! This thing is very strange. I know it can't be the quota setting, but I can't make heads or tails of your results. Better leave it running for a week and let's see what's going on.

user34641
I have a similar issue relating to S3. It just says:

AEUtilAmazons3::startMultipart(): [SignatureDoesNotMatch] The request signature we calculated does not match the signature you provided. Check your key and signing method.

So I can't get anything up to S3. It seems like it is saying my key is wrong, but I copy and paste it, so I don't see how.

nicholas
Akeeba Staff
Manager
You have a completely different problem. Please start a new thread. If we start discussing two unrelated issues on the same thread, I can promise you that I'll get confused and reply the wrong thing to each one of you and we'll all be very unhappy.

Support Information

Working hours: We are open Monday to Friday, 9am to 7pm Cyprus timezone (EET / EEST). Support is provided by the same developers writing the software, all of whom live in Europe. You can still file tickets outside of our working hours, but we cannot respond to them until we're back at the office.

Support policy: We would like to kindly inform you that when using our support you have already agreed to the Support Policy which is part of our Terms of Service. Thank you for your understanding and for helping us help you!