Support

Akeeba Backup for Joomla!

#21062 Rackspace

Posted in ‘Akeeba Backup for Joomla! 4 & 5’
This is a public ticket

Everybody will be able to see its contents. Do not include usernames, passwords or any other sensitive information.

Environment Information

Joomla! version
n/a
PHP version
n/a
Akeeba Backup version
n/a

Latest post on Friday, 05 December 2014 17:20 CST

omikron
Where are we at with RackSpace CloudFiles? They no longer list any issues with the API, but my backups still fail to transfer there.

The other ticket that refers to this issue references a broken link: https://www.akeebabackup.com/download/akeeba-backup/4-0-2.html

And the new version doesn't mention RackSpace CloudFiles.

omikron
Also, I have contacted RackSpace support and this is what they had to say:

They claim that these issues are resolved; however, there are some weird issues that still appear to take place intermittently, in which case the only way around that at this time seems to be retrying.

nicholas
Akeeba Staff
Manager
That's also my experience with them. Sometimes it works, sometimes it doesn't. Just to make sure I'm not an idiot with a bad implementation of the API, I also checked with CyberDuck, which had been consistently working with CloudFiles. Nope, it has the same problems too. I also checked CyberDuck's log: they issue exactly the same commands as our own API and we both get the same results. Sometimes it works, sometimes it doesn't.

I have pretty much crossed off CloudFiles from my list of reliable, cost-effective cloud storage providers. I recommend using Amazon S3 or Dropbox instead.

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

omikron
Currently CyberDuck is connecting fine, but when backing up a site I get:

Upload of your archive failed.

400 Bad request

Your browser sent an invalid request.

nicholas
Akeeba Staff
Manager
There's no guarantee you hit the same server when uploading through CyberDuck and Akeeba Backup at the same time. Two weeks ago I had Akeeba Backup and CyberDuck open on my local machine at the same time. Sometimes both worked. Sometimes both failed. A very few times Akeeba Backup worked but CyberDuck failed. Some other times Akeeba Backup failed and CyberDuck worked. I gave up.


omikron
Are there any detailed logs being generated that I can give to RackSpace for troubleshooting, especially since they are under the impression that it's fixed?

I host my VPS with RackSpace and would rather not have to move my file hosting to Amazon.

nicholas
Akeeba Staff
Manager
I've just tested it on my laptop, using both Akeeba Backup AND CyberDuck. I ran each upload three times. In each case I tried to upload a small (114Kb) backup. One upload made with Akeeba Backup failed, the other two worked. Two uploads made with CyberDuck failed, one succeeded.

Just for the heck of it, I tried uploading a 4Mb file as well with Akeeba Backup. On the first try it worked. On the second try (and a different filename), it failed. I waited for a minute and tried again, now it worked and it was super fast. Tried connecting with CyberDuck, timeout error. What the...?!

I don't care what RackSpace thinks about their service, it's unreliable. Sometimes it works, some other times it times out. I triple checked the API implementation, it's correct. I also checked CyberDuck's log, apparently we are sending the SAME commands to the API. Whether it works or not seems to depend on a roll of the dice on RackSpace's servers.


omikron
With my tests, 10 out of 10 succeed with CyberDuck, all with varying file sizes. With Akeeba Backup, 10 out of 10 fail.

I should have had at least one success, shouldn't I? Up until a couple of weeks ago it never failed; now all my sites fail 100% of the time.

nicholas
Akeeba Staff
Manager
I ran another three tests today and they all worked (Akeeba Backup and CyberDuck). If you don't believe me I can record a video. It is NOT our software that is to blame here. In fact, our software sends THE EXACT SAME commands as CyberDuck. Oh, believe me, you didn't spend an entire weekend poring over the RackSpace API, our integration and CyberDuck's logs like I did...

So this leaves us with the question of why all tests fail on your site. The only plausible explanation is that your host has a firewall which cuts off communications with the RackSpace CloudFiles servers. No, no. Before you start shouting "but I'm using RackSpace servers, you insensitive clod", please note that the location of your server is irrelevant. We're talking about a firewall (e.g. iptables, ufw and so on) running on your server instance, preventing software running inside your server instance from reaching anything outside it.


omikron
Does the API use port 80, 443, or something else?

omikron
This is the error I am receiving;

[141009 11:05:36] Authenticating to CloudFiles 
[141009 11:05:36] PHP NOTICE on line 631 in file <root>/administrator/components/com_akeeba/akeeba/plugins/utils/cloudfiles.php: 
[141009 11:05:36] curl_setopt(): CURLOPT_SSL_VERIFYHOST with value 1 is deprecated and will be removed as of libcurl 7.28.1. It is recommended to use value 2 instead 
[141009 11:05:37] Uploading site-www.nbparksfoundation.org-20141009-110407.jpa 
[141009 11:05:37] PHP NOTICE on line 631 in file <root>/administrator/components/com_akeeba/akeeba/plugins/utils/cloudfiles.php: 
[141009 11:05:37] curl_setopt(): CURLOPT_SSL_VERIFYHOST with value 1 is deprecated and will be removed as of libcurl 7.28.1. It is recommended to use value 2 instead 
[141009 11:05:37] Failed to process file <root>/administrator/components/com_akeeba/backup/site-www.nbparksfoundation.org-20141009-110407.jpa 
[141009 11:05:37] Error received from the post-processing engine: 
[141009 11:05:37] <html><body><h1>400 Bad request</h1> \n Your browser sent an invalid request. \n </body></html> \n 

nicholas
Akeeba Staff
Manager
That notice is normal. We can't set that option to 2 just yet due to some crappy hosts out there.

As for the error you are receiving, it's not in RackSpace's documentation. I also can't reproduce it – all the failures I get are timeout errors. I have to look into my crystal ball to help you. I suppose that the problem is in your username, API key, container name or directory name.

Check that your username, API key, container name and directory are correctly spelled and that there are no leading or trailing whitespace characters such as spaces or newlines. Do note that they are case-sensitive.

Make sure your directory uses a forward slash as the path separator, even if you're using Windows. This is correct:
/this/is/a/correct/path
This is an INVALID path:
\this\is\an'invalid\path
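
A quick way to sanity-check the values before pasting them in is to trim stray whitespace and normalize the separators. A minimal sketch in Python (the function name is just for illustration, it is not part of Akeeba Backup):

```python
def normalize_remote_directory(raw: str) -> str:
    """Trim stray whitespace and convert Windows-style backslashes
    to the forward slashes the remote storage API expects."""
    cleaned = raw.strip()                  # leading/trailing spaces or newlines break requests
    cleaned = cleaned.replace("\\", "/")   # use forward slashes even on Windows
    return cleaned

print(normalize_remote_directory("  \\this\\is\\a\\path \n"))  # → /this/is/a/path
```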


omikron
I think I may have figured it out... the one website I have that doesn't have Admin Tools installed works fine...

So it seems like it could be my Admin Tools firewall or an .htaccess rule blocking it.

omikron
Never mind, it is not Admin Tools.

This particular site was still uploading fine to RackSpace, then I updated to 4.0.5 and it is now giving the same behavior.

omikron
I can't remember which version the site mentioned above was on, but I found another site that is still working, and it is on version 3.11.4.

nicholas
Akeeba Staff
Manager
3.11.4 is using the RackSpace authentication API version 1, which RackSpace has declared deprecated and bound for removal in the coming weeks. Other than that, the API implementation is exactly the same. I know that their v2 authentication API is the cause of our problems. The thing is, I'd like to keep using their VERY STABLE v1 authentication API, but they decided to remove it. Why? Beats me!


omikron
I have tried installing Akeeba Backup on a few different Joomla hosting demos and they all get the same "400 Bad request" error every time when backing up to RackSpace.

You can see here:

http://fiftyfoot.demojoomla.com/administrator/
fiftyfoot/CkpT>ey6NVNnn2WNwvE3


nicholas
Akeeba Staff
Manager
After spending the last three and a half hours on this I can share my findings.

1. The official CloudFiles API documentation is wrong! The hostname they tell you to use (storage.clouddrive.com) is wrong. You have to read the hostname from the authentication stage, which, thankfully, Akeeba Backup already does. Also, the Content-Length header is not optional, it's mandatory. If you do not set it, the API returns an error. Thankfully, I was feeling paranoid enough when I implemented the API, so Akeeba Backup already includes this header. This wrong information in their official API documentation cost me two hours of trying to figure out why nothing worked at all when I modified my code to match their documentation.
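
The Content-Length point above can be illustrated with a small sketch. This is not Akeeba Backup's actual code, just a hypothetical Python helper showing the headers an object PUT needs under the behavior described (the storage hostname itself comes from the authentication response, not from a hardcoded value):

```python
def build_upload_headers(auth_token: str, payload: bytes) -> dict:
    """Headers for a CloudFiles-style object PUT. Despite what the
    official docs implied, Content-Length must be set explicitly or
    the API rejects the request."""
    return {
        "X-Auth-Token": auth_token,                # token from the authentication stage
        "Content-Length": str(len(payload)),       # mandatory, contrary to the docs
        "Content-Type": "application/octet-stream",
    }

headers = build_upload_headers("example-token", b"archive bytes")
print(headers["Content-Length"])  # → 13
```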

2. The CloudFiles transfer speed is abysmal. It is about 1Mb/minute (not per second; I really mean per minute). Since the timeout limit in cURL is 100 seconds, this means that you cannot transfer files larger than about 1.7Mb without getting a timeout. This is the problem you have. If you set the part size for split archives to 1Mb it will work. However, you should note that a 150Mb file will take about two and a half hours to transfer. The same file takes about 3 minutes to transfer to Amazon S3.
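
The arithmetic behind those numbers, sketched in Python (the speed figure is the measurement reported above, not a constant of the service):

```python
# Measured throughput and the cURL timeout from the tests above
speed_mb_per_min = 1.0
curl_timeout_sec = 100

# Largest upload that can finish before cURL times out
max_part_mb = speed_mb_per_min * curl_timeout_sec / 60
print(f"max part size: {max_part_mb:.2f} Mb")   # ≈ 1.67 Mb

# Time to push a 150 Mb backup at that speed
hours = (150 / speed_mb_per_min) / 60
print(f"150 Mb upload: {hours:.1f} hours")      # 2.5 hours
```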

FYI, using the previous version of their API, transfers to CloudFiles were still slow, but bearable. On my live server I would get 128Kb per second with CloudFiles instead of 1Mb per second with Amazon S3. It was 8 times slower, but bearable. With the new CloudFiles API version the speed difference has increased more than six-fold over the previous, already bad, situation, making CloudFiles about 50x slower than S3. Uploading to CloudFiles has the same speed as writing to a 5 ¼" floppy disk. Remember those? I had one in my 80286 twenty-two years ago, but I digress...

Even though I will lift the cURL timeout for the next version of Akeeba Backup, be advised that part sizes over 2Mb are likely to fail due to timeouts coming from your web server, i.e. Apache gives up waiting for the request to complete or (most likely) PHP kills the backup script due to its maximum execution time.

TL;DR

CloudFiles is abysmally slow: about 50 times slower than Amazon S3 and Dropbox. I strongly recommend AGAINST using it. If you do choose to use it, you will have to use a part size of 1Mb, which is terribly inconvenient, and even then the upload will take so long that it runs into timeout limits on your own server. Both the abysmal upload speed of CloudFiles and your server's timeout limits are beyond our control and cannot be fixed on our end. There's really nothing more we can do.


omikron
Well you have officially convinced me to move to S3. I will probably be happier in the long run.

nicholas
Akeeba Staff
Manager
Oh, absolutely. The pricing is the same and the speed is nearly 50x faster, so... :)


omikron
So far the switch to S3 is going well; however, I am experiencing one issue. When I try to transfer to Amazon S3 some backups that never got copied to RackSpace because of the issue, the transfer shows as succeeding but the files don't actually get uploaded. When I try to download one, it gives this error:

NoSuchKey: The specified key does not exist.
Key: mcadoos.com/site-www.mcadoos.com-20141028-000004.jpa
RequestId: E636706D9D3D0937
HostId: T7Lyaux7TtZifuanke/8BCDl4Zd09M4Bxzzc4iQzxjEJPFJbwZCT+39+wGPxFWycmu1l9r0RUfk=

Backups that happen from a cron job copy to S3 fine, so is this a timeout issue? Any ideas how I can resolve it?

nicholas
Akeeba Staff
Manager
Some hosts do not play very well with Amazon S3's multi-part upload feature which allows us to upload a very big archive file in 5Mb chunks. In this case you will have to follow Plan B which is to have Akeeba Backup split the archive file in small chunks, one file per chunk, and then upload each of those chunks in one go. This is a two-legged solution.

For the first leg of the solution, please go to Akeeba Backup's Configuration page and find the Data Processing Engine drop down. Click the Configure button next to it. A new pane opens below. Check the Disable multipart uploads option and make sure that the Process each part immediately option is not checked.

Now, for the second leg, we have to do some trial and error. Still in Akeeba Backup's Configuration page, find the Archiver Engine drop-down and click on the Configure button next to it. A new pane opens below. Find the Part size for split archives option and select the 49.99Mb option. Try a new backup. If it crashes while uploading files to Amazon S3, go back to this option and try smaller values, e.g. 20, 10, 5, 2 or even 1, trying a new backup after setting each one of those values.
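
Conceptually, the split-archive approach just cuts the archive into fixed-size pieces that each upload in one go. A rough Python illustration of the idea (not Akeeba Backup's actual code):

```python
def split_archive(data: bytes, part_mb: int) -> list:
    """Cut an archive into fixed-size parts, each small enough to
    upload in a single request."""
    part_bytes = part_mb * 1024 * 1024
    return [data[i:i + part_bytes] for i in range(0, len(data), part_bytes)]

# A 12 Mb archive at a 5 Mb part size yields parts of 5, 5 and 2 Mb
parts = split_archive(b"\0" * (12 * 1024 * 1024), part_mb=5)
print([len(p) // (1024 * 1024) for p in parts])  # → [5, 5, 2]
```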

For your reference: https://www.akeebabackup.com/documentation/troubleshooter/abamazons3.html


System Task
system
This ticket has been automatically closed. All tickets which have been inactive for a long time are automatically closed. If you believe that this ticket was closed in error, please contact us.

Support Information

Working hours: We are open Monday to Friday, 9am to 7pm Cyprus timezone (EET / EEST). Support is provided by the same developers writing the software, all of whom live in Europe. You can still file tickets outside of our working hours, but we cannot respond to them until we're back at the office.

Support policy: We would like to kindly inform you that when using our support you have already agreed to the Support Policy which is part of our Terms of Service. Thank you for your understanding and for helping us help you!