Support

Akeeba Backup for Joomla!

#8988 S3 Upload fails and in addition kills server with mod_fcgid

Posted in ‘Akeeba Backup for Joomla! 4 & 5’
This is a public ticket

Everybody will be able to see its contents. Do not include usernames, passwords or any other sensitive information.

Environment Information

Joomla! version
n/a
PHP version
n/a
Akeeba Backup version
n/a

Latest post by nicholas on Friday, 16 September 2011 13:37 CDT

biopgoo
Mandatory information about my setup:

Have I read the related troubleshooter articles above before posting (which pages?)? Yes
Have I searched the forum before posting? No
Have I read the documentation before posting (which pages?)? Yes, S3 backup
Joomla! version: (unknown) 1.5.23
PHP version: (unknown) 5.3.x
MySQL version: (unknown) 5.1.x
Host: (optional, but it helps us help you)

http://abm.andreso.net

Akeeba Backup version: (unknown) 3.3.2


EXTREMELY IMPORTANT: Please attach your Akeeba Backup log file in order for us to help you with any backup or restoration issue.

Description of my issue:

I have been trying for several days to get Joomla backups to be saved to Amazon S3.

I am using a WHM VPS with a 2 GHz CPU and 4 GB RAM, and I am unable to get backups to work. To decrease server load I enabled mod_fcgid.

I have made many attempts to back up abm.andreso.net without luck. As post-processing starts, the server dies for about ten minutes. It does not even respond to ping.

I have tried to switch to suPHP, but the Joomla interface tells me that mod_fcgid is still enabled. I have informed my hosting provider that mod_fcgid is still enabled.

I have not been able to make a backup of my Joomla sites for about a month.

Love

nicholas
Akeeba Staff
Manager
Hi!

The fact that your server stops responding to pings means that something happens which consumes all available CPU resources on your server. The post-processing can't really do that, as we just ask PHP's cURL extension to upload the backup archive to Amazon S3 (this uses practically no CPU resources, guaranteed!). This leads me to believe that the mod_fcgid implementation on your server is buggy and brings the server down. Unfortunately, I cannot assist you with server configuration issues :(

If, however, you believe this is not a server issue, please ZIP and attach your backup log file to your next post, as this will allow me to provide a better informed answer. Right now I am only speculating about what is going on, based on your description of the issue. The log file will let me see much more clearly what is happening.

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

biopgoo
Hello,

My mistake, I was positive I had attached the log file. I was pretty sure I saw three file upload boxes.

I have a standard WHM 11.30.2 mod_fcgid compiled into Apache, so I doubt that it is buggy. I had problems getting it to accept installation files, as the amount of data that can be sent by POST is ridiculously small by default. Right now I am stuck, as the server has just died again. I have the backups running from cron. No Amazon S3 backup has completed for more than a month.

There is something about standard mod_fcgid that keeps it from working with Akeeba Backup.

I do not know how to find out the mod_fcgid version, nor how to get rid of mod_fcgid. Changing the PHP processing engine does not work: I have repeatedly told WHM to use suPHP, and the PHP processor is still mod_fcgid.

WHM is WHM 11.30.2 (build 1) with CENTOS 5.6 x86_64 on a 2GHz VPS with 4GB RAM

Love

biopgoo
The log file is below

nicholas
Akeeba Staff
Manager
Nothing got attached. Please try uploading the file to a reputable file hosting service like Dropbox or Windows Live SkyDrive and paste a link here. Thank you!


nicholas
Akeeba Staff
Manager
Ah, I forgot something important. Using suPHP causes PHP to run in CGI mode, which means that mod_fcgid will be used unless you disable the FastCGI configuration option (the existence and location of that option seems to depend on the cPanel version you're using). I'd say you might be better off trying to disable suPHP and see if that works, then try disabling FastCGI with suPHP enabled and see if that works too.


biopgoo
What annoys me is that I do not know how to find out the mod_fcgid version. I have read in the cPanel forums something about a new version with new and improved configuration options.

I have told my hosting provider I do not know how to get rid of mod_fcgid. I suspect the only solution is to recompile Apache with EasyApache.

Let's see if I managed to attach the log file.

Love

biopgoo
Something is buggy with the forum software.

nicholas
Akeeba Staff
Manager
As I said, you had to ZIP and attach the file. The forum won't accept files over 2 MB as a precaution against abuse. Since your log was 2.4 MB, the forum rejected the attachment not because of a bug, but because that's what I told it to do; it's not a bug, it's a feature! See why I told you to "ZIP and attach" instead of just "attach" the log file? Semantics count ;)
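As an aside, the reason zipping helps so much here is that backup logs are extremely repetitive text, which DEFLATE compresses very well. A self-contained sketch with made-up log data (the file name and line content are hypothetical) illustrates the kind of shrinkage to expect:

```python
import io
import zipfile

# A toy stand-in for a ~2.4 MB backup log: many near-identical lines.
log_text = ("DEBUG |110807 14:51:55|----- example log line -----\n" * 50_000).encode()

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("akeeba-backup.log", log_text)

# The zipped size is a small fraction of the original text size.
print(len(log_text), len(buf.getvalue()))
```

A real log won't compress quite this well (timestamps and file names vary), but it will still easily drop below the 2 MB attachment limit.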

Regarding your issue, I can see what is going on, right here:
DEBUG |110807 14:34:47|S3 -- Uploading part 3 of Tc8us_HCV3igon3DeLkDJ6jyMmxbTIMjfI4odNzAXFiA7xbMctWTX9oVbHWkNDJYnztSJIs34EDDV_3qfoBgPQ--

WARNING |110807 14:51:55|Failed to process file /administrator/components/com_akeeba/backup/site-abm.andreso.net-20110807-183320.zip

WARNING |110807 14:51:55|Error received from the post-processing engine:

WARNING |110807 14:51:55|AEUtilAmazons3::uploadMultipart(): [56] Recv failure: Connection timed out

DEBUG |110807 14:51:55|Not removing processed file /administrator/components/com_akeeba/backup/site-abm.andreso.net-20110807-183320.zip

DEBUG |110807 14:51:55|----- Finished operation 1 ------

DEBUG |110807 14:51:55|Successful Smart algorithm on AECoreDomainFinalization

DEBUG |110807 14:51:55|Kettenrad :: More work required in domain 'finale'

DEBUG |110807 14:51:55|====== Finished Step number 11 ======

DEBUG |110807 14:51:55|*** Engine steps batching: Break flag detected.

DEBUG |110807 14:51:55|*** Batching of engine steps finished. I will now return control to the caller.

DEBUG |110807 14:51:55|No need to sleep; execution time: 1028422.27411 msec; min. exec. time: 2000 msec

DEBUG |110807 14:51:55|Saving Kettenrad instance backend

ERROR |110807 14:51:55|The process was aborted on user's request



What this tells me: Akeeba Backup is trying to perform a multi-part upload of the backup archive. The first two parts transfer successfully. The third part starts uploading at 14:34:47, but the connection ultimately hangs up at 14:51:55, 17 minutes and 8 seconds later, due to a timeout at Amazon's end. Of course, that causes a timeout error on your server (Apache aborts the request if the script doesn't finish processing it within 180 seconds), which results in the "process was aborted on user's request" message. At this point the backup itself is complete, but transferring it to S3 has failed, so the whole process is reported as failed due to the timeout. It shouldn't cause the server to go down, though. That part has to be a mod_fcgid misconfiguration or bug.
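The elapsed time can be checked directly from the log timestamps. This small Python sketch (timestamps copied from the log excerpt above) confirms the 17 minutes and 8 seconds figure, which also agrees with the "execution time: 1028422.27411 msec" line in the log:

```python
from datetime import datetime

# Timestamps taken from the log excerpt (same day, HH:MM:SS)
start = datetime.strptime("14:34:47", "%H:%M:%S")  # part 3 upload begins
end = datetime.strptime("14:51:55", "%H:%M:%S")    # connection times out

elapsed = (end - start).total_seconds()
print(elapsed)  # 1028.0 seconds, i.e. 17 minutes and 8 seconds
```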

Don't worry, though, we have a workaround! Go to Akeeba Backup's configuration page. Click on the Configure button next to the Archiver Engine and set the part size for split archives to 10 Megabytes. Then click the Configure button next to the Data Processing Engine and check the "Disable multipart uploads" option. Save the configuration settings and retry the backup.

Your backup will now be split into seven files (.zip, .z01, ..., .z06) and transferred to S3 without hitting the timeout. You can try higher values to reduce the number of archive parts; for instance, 100 MB would give you a single part, but it may cause timeout issues while transferring the archive to S3. The only way to find out is trial and error.
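As a rough sketch of the arithmetic behind the split (the 64 MB archive size here is hypothetical, and exactly which physical part carries the plain .zip extension is an assumption based on the usual split-ZIP naming scheme):

```python
import math

def split_part_extensions(archive_bytes: int, part_bytes: int) -> list[str]:
    """Extensions of a split ZIP set: .z01 .. .zNN plus one .zip part.
    (Which physical part gets the .zip extension is assumed here.)"""
    n_parts = math.ceil(archive_bytes / part_bytes)
    return [f".z{i:02d}" for i in range(1, n_parts)] + [".zip"]

# A hypothetical 64 MB backup with a 10 MB part size yields seven parts.
parts = split_part_extensions(64 * 1024 * 1024, 10 * 1024 * 1024)
print(len(parts), parts)  # 7 parts: .z01 through .z06 plus .zip
```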


biopgoo
I disabled FastCGI with EasyApache. No luck with the configuration change: the server dies uploading the third 10 MB part. The .zip and the .z01 reach S3 with no problem.

I attach three log files: one from the front end with multi-part uploads disabled, one from the command line, and one from the back end with a 10 MB part size.

Is there any PHP extension that is incompatible with Akeeba?

Love

biopgoo
There must be some misconfiguration in my Apache server or php.ini which causes the load to skyrocket once 20 MB have been transferred.

One of the backups was done on the command line, so it is possible that Apache is innocent.

Love

nicholas
Akeeba Staff
Manager
There seems to be an issue with your server's communications with Amazon S3. By default, Akeeba Backup uses HTTPS (secure) communications. In some cases, the OpenSSL library handling them drops the ball. Until today this would only cause an upload failure; it's the first time I have heard of CPU overuse, but I can't rule it out. Let's try working around it. Go to Components, Akeeba Backup, Configuration, click on the Configure button next to the Data Processing Engine, and uncheck (clear) the Use SSL field. This falls back to plain HTTP communications.

If that still fails, you will have to talk to your host. They might have a very picky firewall on the server or the network which gets tripped by something in the data being sent to Amazon S3, causing this strange issue.


biopgoo
The PHP configure command is below

I have SSL uploads to S3 disabled. Both the administrator backup.php and altbackup.php scripts kill the server when the S3 timeout is reached.

I attach the front end log file, which I believe is generated by altbackup.php

Could you please provide me with a download link to the latest 2.x release? I seem to have deleted my copy. I want to check if that version works.

My hosting provider disabled the firewall for my server. No difference.


'./configure' '--disable-fileinfo' '--disable-pdo' '--disable-phar' '--enable-bcmath' '--enable-calendar' '--enable-ftp' '--enable-gd-native-ttf' '--enable-intl' '--enable-libxml' '--enable-magic-quotes' '--enable-mbstring' '--enable-soap' '--enable-sockets' '--enable-ucd-snmp-hack' '--enable-wddx' '--enable-zip' '--prefix=/usr' '--with-bz2' '--with-curl=/opt/curlssl/' '--with-curlwrappers' '--with-freetype-dir=/usr' '--with-gd' '--with-gettext' '--with-icu-dir=/usr' '--with-imap=/opt/php_with_imap_client/' '--with-imap-ssl=/usr' '--with-jpeg-dir=/usr' '--with-kerberos' '--with-libdir=lib64' '--with-libexpat-dir=/usr' '--with-libxml-dir=/opt/xml2' '--with-libxml-dir=/opt/xml2/' '--with-mcrypt=/opt/libmcrypt/' '--with-mysql=/usr' '--with-mysql-sock=/var/lib/mysql/mysql.sock' '--with-mysqli=/usr/bin/mysql_config' '--with-openssl=/usr' '--with-openssl-dir=/usr' '--with-pcre-regex=/opt/pcre' '--with-pic' '--with-png-dir=/usr' '--with-snmp' '--with-xmlrpc' '--with-xpm-dir=/usr' '--with-xsl=/opt/xslt/' '--with-zlib' '--with-zlib-dir=/usr'

biopgoo
A site with Akeeba Backup 3.2.7 hosted on my server has backed up every single time without problems. A site with Akeeba Backup Pro 3.3.1 backed up once yesterday. I cannot reproduce that, as I now get the errno 103 SSL error.

I attach logs from front end and from back end for this backup process.

Love

biopgoo
Seems like 3.2.7 still kills the server. I set a 512 KB part size and .z19 was uploaded to S3.

The web site that was successfully backed up and uploaded is about 25 MB.

I attach screen capture and log file

biopgoo
New log file with Akeeba Pro 3.3.1

It died uploading the 20th part at 11:50:14

I attach back end log file and screen capture of the moment the server died.

biopgoo
svn860 with a 512 KB part size completed successfully. With a single-part upload it fails.

nicholas
Akeeba Staff
Manager
Based on all of the information you provided, the post-processing code does not seem to be the issue. The post-processing engine for S3 has not changed between 3.2.7 and the latest SVN. Moreover, the random nature of the crashes, even with the same version, indicates that the problem is on the server. I am beginning to suspect that your server's PHP cURL implementation suffers from a bug I have not met yet. I would suggest recompiling PHP and, if possible, upgrading it to the latest version.


biopgoo
Seems like s3cmd kills the server too. I have been able to reproduce it and have informed the hosting provider.

First time it died
195297280 of 232353494    84% in  610s   312.29 kB/s

Second time it died
65597440 of 232353494    28% in  205s   312.12 kB/s

nicholas
Akeeba Staff
Manager
This looks like a more sinister configuration issue with your server. s3cmd is written in Python, not PHP, so the problem is not isolated to PHP and/or Akeeba Backup. It looks like packet filtering on your server causes the CPU to go overboard and bring down the entire machine. I suggest taking this issue to your host so that they can help you with the server setup.


biopgoo
It is a VPS, so I do not have any control over the parent server's network configuration.

I have given the hosting provider an ultimatum: fix S3 uploads soon or I will find another VPS provider. I have spent far more time researching this than it would take me to migrate all my sites to a new VPS.

Love

nicholas
Akeeba Staff
Manager
I agree with that. If the host can't fix it, well, it's a free market and there are plenty of competing hosts (e.g. Rochen) who can deliver a better product at a similar price.


biopgoo
I believe the problem is that all available bandwidth is being used by the backup process. I throttled s3cmd to 100 KB/s, and now I am able to run several s3cmd put operations in parallel.

Does Akeeba Backup send the files to S3 as fast as possible, using as much bandwidth as it can so the transfer finishes in as little time as possible?

I remember my network connection on my home PC would become unresponsive when I used all my upload bandwidth.

Love,

nicholas
Akeeba Staff
Manager
Hi!

Akeeba Backup does not attempt to limit the bandwidth it uses in any way. We try to shovel as much data as possible so that the upload finishes as soon as possible. All servers I have seen automatically share the available bandwidth fairly among all processes. I am wondering why this is not the case on your server. Any chance of your host giving us some insight?
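For reference, the kind of client-side throttling described above can be sketched with simple sleep-based pacing. This is a hypothetical illustration with made-up chunk sizes and rates; it is not how Akeeba Backup or s3cmd actually implement it:

```python
import time

def send_throttled(data: bytes, chunk_size: int, rate_bytes_per_s: float):
    """Yield chunks of `data`, sleeping after each one so that the
    average throughput stays at or below `rate_bytes_per_s`."""
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        yield chunk
        time.sleep(len(chunk) / rate_bytes_per_s)

# Pace a small buffer at ~100 KB/s in 10 KB chunks (hypothetical numbers).
sent = b"".join(send_throttled(b"x" * 50_000, 10_000, 100_000))
```

The data still arrives intact, just spread out over time, which leaves headroom for other processes sharing the same uplink.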


biopgoo
My advice is to stay away from MyHosting, which was the too-good-to-be-true VPS provider I tried for a month. My server's connection would die for several minutes every time the bandwidth exceeded 1 MB/s. It seems this was caused by the hardware firewall.

Now I have switched to another VPS provider, and I was really happy to see a backup progressing at over 10 MB/s.

As for the rest, I have found something strange in the table exclusion section. akb_ad_agency_settings is marked red, while #_ad_agency_stat was white. akb_session is red as well. While trying to reduce resource consumption, I once saw the latter table being backed up record by record.

The version I have installed is Akeeba Pro 3.3.4

Love

nicholas
Akeeba Staff
Manager
Sorry for the late reply. Somehow my mail software decided that the email notification for this thread was spam :(

I can't replicate the automatic exclusion (the red icon) of the table you mentioned. The only thing I can think of is that you have set up exclusions under "RegEx Database Table Exclusions" which inadvertently filter out this table. Tables filtered this way appear in red on the regular (manual) exclusion page.
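To illustrate how a regular-expression filter can catch more tables than intended, here is a hypothetical sketch (the table names and patterns are made up, and this is not Akeeba Backup's actual filtering code):

```python
import re

# Hypothetical table list and user-defined exclusion patterns.
tables = ["akb_ad_agency_settings", "akb_ad_agency_stat",
          "akb_session", "akb_content"]
exclusion_patterns = [r"akb_ad_agency_set", r"akb_session"]

# An unanchored pattern matches anywhere in the name, so
# "akb_ad_agency_set" also catches "akb_ad_agency_settings".
excluded = [t for t in tables
            if any(re.search(p, t) for p in exclusion_patterns)]
print(excluded)  # ['akb_ad_agency_settings', 'akb_session']
```

This is why a sloppy pattern can turn a table red on the manual exclusion page without you ever clicking its icon.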

The session table's contents are automatically excluded in order to avoid stored session mismatches, which cause strange errors for your visitors when you restore the site. That red icon is more than justified; it would be a bug if it weren't there.

Regarding the inclusion of a table you have excluded, I can't replicate it here. When I exclude a table completely (left icon), it's not being backed up. When I exclude its contents (right icon) only its structure is backed up, not its contents.

I would suggest re-downloading Akeeba Backup Professional's package from https://www.AkeebaBackup.com/latest and installing it again on your site, without uninstalling your existing Akeeba Backup copy. Then take a look at the RegEx filters and remove any which might be blocking the ad agency tables. Moreover, make sure that you have not also included your site's main database under the "Multiple Databases Definitions" feature, as this would back up your tables twice (and would explain why the excluded tables seem to be included in the backup).


Support Information

Working hours: We are open Monday to Friday, 9am to 7pm Cyprus timezone (EET / EEST). Support is provided by the same developers writing the software, all of whom live in Europe. You can still file tickets outside of our working hours, but we cannot respond to them until we're back at the office.

Support policy: We would like to kindly inform you that when using our support you have already agreed to the Support Policy which is part of our Terms of Service. Thank you for your understanding and for helping us help you!