
#20453 ftp upload

Posted in ‘Akeeba Backup for Joomla! 4 & 5’

Environment Information

Joomla! version
n/a
PHP version
n/a
Akeeba Backup version
n/a

Latest post by dlb on Friday, 18 July 2014 08:24 CDT

hsojhsojhsoj
Here is my log file: dependentmedia.com/pass/akeeba/AkeebaBackupDebugLog.txt
Here is a screenshot of the message: dependentmedia.com/pass/akeeba/ScreenShot.png

I see that the backup works and uploads when I lower the "Part size for split archives" setting to 10 MB or below. The problem is that I want the backup in one part, not many.

I have raised the max_execution_time to 12000
I have raised the max_input_time to 12000
I have raised the memory_limit to 8g
I have raised the post_max_size to 2g
I have raised the upload_max_filesize to 4g
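
For reference, the overrides above would look something like this in a per-site php.ini or .user.ini file (a sketch only; whether such overrides are honoured depends on the host):

; assumed per-site PHP overrides – the host's own limits may still take precedence
max_execution_time = 12000
max_input_time = 12000
memory_limit = 8G
post_max_size = 2G
upload_max_filesize = 4G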

The entire backup process takes less than 5 minutes when it works.
previous ticket #20433

dlb
Increasing your limits is a good thing, but you may not be able to override your host's settings. If you say 12000 and your host says 60, guess who wins?

You can check the FTP log on the destination server; that may give you more information about what went wrong with the file transfer.

A single backup file is a wonderful thing, but sometimes it just won't work. That is why we can split the archive into multiple parts.


Dale L. Brackin
Support Specialist


🇺🇸 English: native


Please keep in mind my timezone and cultural differences when reading my replies. Thank you!


🕐 My time zone is EST (UTC -5) (Philadelphia, PA)

hsojhsojhsoj
Plesk error logs showed timeouts:
mod_fcgid: read data timeout in 45 seconds, referer: http://adiofamilychiro.com/administrator/index.php?option=com_akeeba&view=backup

To fix these I modified /etc/httpd/conf.d/fcgid.conf.

So the FastCGI configuration was changed from:
FcgidIdleTimeout 40
FcgidMaxRequestLen 1073741824
FcgidProcessLifeTime 30
FcgidMaxProcesses 20
FcgidMaxProcessesPerClass 8
FcgidMinProcessesPerClass 0
FcgidConnectTimeout 30
FcgidIOTimeout 45
FcgidInitialEnv RAILS_ENV production
FcgidIdleScanInterval 10

to

FcgidIdleTimeout 4000
FcgidMaxRequestLen 1073741824
FcgidProcessLifeTime 3000
FcgidMaxProcesses 20
FcgidMaxProcessesPerClass 8
FcgidMinProcessesPerClass 0
FcgidConnectTimeout 300
FcgidIOTimeout 1200
FcgidInitialEnv RAILS_ENV production
FcgidIdleScanInterval 10
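
(Presumably Apache was also restarted for the new values to take effect; on a typical Plesk/CentOS server that would be something along the lines of the commands below – exact names depend on the distribution and init system.)

# assumed commands – adjust for your distribution / init system
service httpd restart    # full restart
# or
apachectl graceful       # graceful restart, finishes current requests first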

Now the Plesk error logs do not show timeout errors. However, the backup process still does not successfully upload the file. It is close, though: it sends more than 110 MB before the warning appears.

Here is the Plesk error log:
dependentmedia.com/pass/akeeba/error_log.txt

nicholas
Akeeba Staff
Manager
You are still changing the PHP time limit, not the Apache time limit. This is unnecessary. When you check the "Set an infinite PHP time limit" box in the configuration page you have already made sure that PHP doesn't time out on your server. The problem is that Apache won't wait for the 10 minutes it takes for your backup to upload; instead it spits out a gateway timeout error.

While you could set the Apache time limit to 2000 seconds, it would be a terrible idea as a single runaway PHP script (bugs do happen!) would bring down your server with 100% CPU usage. I have been there. It's not fun. It's a royal pain in the rear to fix that on a live host.

Frankly, using a PHP application through a web browser to transfer a very large amount of data to another server in a single step is unwise and not recommended. You can, however, transfer a very large backup archive with a backup running through a CRON job, but only if you use the native CLI backup script (cli/akeeba-backup.php). This is exactly why I wrote it. The only limit is PHP's file handling, meaning that you can only use part sizes up to 2GB (beyond that PHP may not be able to write to or read from the backup archive – a limitation of the PHP language due to its use of the platform-specific signed integer for its internal integer type, if you care to know the gory details). In any case, please use the akeeba-backup.php CLI script instead of the web interface.
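
For illustration, a nightly CRON entry running the CLI script could look like the line below; the PHP CLI binary path and the site path are assumptions and differ per server, so check your host's documentation for the correct PHP CLI executable.

# assumed example – replace the PHP CLI binary and site path with your server's values
0 3 * * * /usr/bin/php /home/USER/public_html/cli/akeeba-backup.php > /dev/null 2>&1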

There is a reasoning behind my suggestion:

- If you want a single backup archive no matter what, I assume you do that because it's more convenient when you are transferring it from Dropbox to some other local storage. Most likely that's best implemented as a scheduled backup, and a CRON job running the native CLI backup script is THE way to do that.

- If you want to transfer your site to another server by means of a backup archive, you shouldn't be going through Dropbox. Going through Dropbox means that you have to wait for Dropbox to sync, then FTP the backup to the new site. That's a waste of time. Instead, you can use a small part size (5 to 10 MB) and the Upload to Remote FTP post-processing engine, with the "Upload kickstart.php" option ticked. Taking a backup will then transfer the backup archive parts and kickstart.php to the new site, meaning that you only need to run Kickstart from your browser once the backup is done. No download/upload required, and you can even run this process from a tablet or a smartphone!

So, for all practical purposes, what you are trying to do makes little to no sense and of course won't work, due to well-thought-out server restrictions. If you do lift those restrictions you jeopardise the stability of your server, ending up shooting yourself in the foot in the long run.

If you have questions about the best way to move forward, please post back with your use case. I will be able to describe the best solution when I know exactly what you are trying to achieve.

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

hsojhsojhsoj
You guys are awesome, love your enthusiasm.

If I run a cron job for many sites at once then my server will slow down. Thus, I would like to have a list that runs the backups one after the next, not concurrently. I currently run this list of backups using Akeeba's free version and things work great. I thought it sure would be nice if the files also uploaded to a second location, all grouped into a single folder, so the Pro version was purchased thinking this was possible. Oops: I cannot have the files upload to another server, because they have to be split. And many large sites, times many splits, times many backups, equals a mess.

So, how can cron jobs be queued in a list? Or do you have any other cool ideas?
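
(A minimal sketch of the sequential idea described above, with hypothetical paths: a single CRON entry can run a small wrapper script, so each site's CLI backup starts only after the previous one finishes.)

#!/bin/sh
# hypothetical wrapper script – site paths are placeholders for your own sites
PHP=/usr/bin/php
$PHP /home/user/site1/cli/akeeba-backup.php
$PHP /home/user/site2/cli/akeeba-backup.php
$PHP /home/user/site3/cli/akeeba-backup.php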

dlb
I upload mine into folders split by site, then by date+time. The date+time folders are created automatically by variables used in the upload path in the backup configuration. It doesn't matter how many parts are in the backup; they all go into one folder. I have them set up with quotas, so old backups automatically expire and are deleted and I don't have to go in and manually find and delete them.

For one of my sites, the backups needed to be accessible by others in case I get hit by a beer truck. I set up separate credentials for the site-level folder so the backup for that site could be recovered by the organization if necessary. That's on Amazon S3 cloud storage.

There is no right answer. That's just what works for me.


Dale L. Brackin
Support Specialist


🇺🇸 English: native


Please keep in mind my timezone and cultural differences when reading my replies. Thank you!


🕐 My time zone is EST (UTC -5) (Philadelphia, PA)

hsojhsojhsoj
The "date+time folders are created automatically by variables used in the upload path in the backup configuration," sounds great.
I see this page about variables:
https://www.akeebabackup.com/documentation/akeeba-backup-documentation/configuration.html

But I also see this page about folders:
"The FTP engine, unlike the other post-processing engines, does not support creating directories, therefore you can't use the variables, such as [DATE], [TIME] and [HOST] in the directory's name. "

https://www.akeebabackup.com/support/akeeba-backup-3x/8580-solved-upload-to-ftp-server-useing-wildcards-ie-host-or-date.html

Could you paste your path here as an example?
I also have not found the "upload path in the backup configuration". I have an "Initial directory" setting under "Upload to Remote FTP server". Will that work?

dlb
"The FTP engine, unlike the other post-processing engines, does not support creating directories, therefore you can't use the variables, such as [DATE], [TIME] and [HOST] in the directory's name. "
I didn't know that. I'll have to check with Nicholas to see if it still applies. There isn't anything in the docs one way or the other.

For what it's worth, my directory looks like this:
/mysite.com/[DATE]-[TIME]
But the S3 setup is a little different from FTP.


Dale L. Brackin
Support Specialist


🇺🇸 English: native


Please keep in mind my timezone and cultural differences when reading my replies. Thank you!


🕐 My time zone is EST (UTC -5) (Philadelphia, PA)

dlb
Nicholas confirms that we can't use variables in the FTP setup.

We can still create a folder for each site on the target FTP server and point each site's uploads there. That cleans things up a little bit, though not as neatly as would be possible on a cloud server.


Dale L. Brackin
Support Specialist


🇺🇸 English: native


Please keep in mind my timezone and cultural differences when reading my replies. Thank you!


🕐 My time zone is EST (UTC -5) (Philadelphia, PA)

Support Information

Working hours: We are open Monday to Friday, 9am to 7pm Cyprus time zone (EET / EEST). Support is provided by the same developers who write the software, all of whom live in Europe. You can still file tickets outside of our working hours, but we cannot respond to them until we're back at the office.

Support policy: We would like to kindly inform you that when using our support you have already agreed to the Support Policy which is part of our Terms of Service. Thank you for your understanding and for helping us help you!