I do NOT appreciate your passive-aggressive tone at all. You never filed a pre-sales request, and you did indicate that you are fully informed about the product. I don't accept responsibility for your actions. This is VERY insulting; it's a borderline ToS violation and can get your account flagged and your ticket closed. Next time, file a pre-sales request if you are not sure. That's why pre-sales requests are free of charge (I felt that was self-explanatory but, alas).
Regarding my previous reply, it's not "my" solution! It's fundamentally how your web server is set up and how web servers, in general, work. On top of that, it's how the file transfer protocols you chose to use (FTP and WebDAV) work. Both require you to be able to send the entire file in a single request. Due to the way servers and PHP itself work, you MUST load the entire file into memory for this to happen. You do understand that we NEITHER write PHP itself, NOR do we set up your server, correct?
Therefore, the real issue you have is a limitation on your server, namely the memory_limit in php.ini, i.e. the maximum amount of memory a PHP script can use at one time. If you set the PHP memory limit to 2.5GB instead of the current 256MB, and you have enough available RAM on your server, you will not have a problem using WebDAV, ASSUMING that the remote end accepts file uploads that big (extremely unlikely; WebDAV was NOT designed for that kind of volume). I consider this solution to be far more ridiculous than lowering the part size for split archives, so I didn't even recommend it.
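To make the memory point concrete, here is a minimal sketch of the kind of single-request upload I'm describing, assuming a plain WebDAV PUT done with cURL from PHP. The host, paths and file names are invented for illustration; this is NOT our actual code.

```php
<?php
// Naive WebDAV PUT: the whole archive part is read into RAM before it is
// handed to cURL, so a 2GB part needs at least 2GB of memory_limit headroom
// (plus whatever the rest of the script uses).
$body = file_get_contents('/path/to/site-backup.jpa'); // entire file in memory

$ch = curl_init('https://webdav.example.net/backups/site-backup.jpa');
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT');
curl_setopt($ch, CURLOPT_POSTFIELDS, $body);           // the in-memory string is sent as the request body
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
```

Raising the limit would mean editing php.ini, e.g. setting memory_limit = 2560M instead of memory_limit = 256M, and restarting PHP. That is exactly the kind of server-level change I consider a worse idea than using a smaller part size.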
A quick aside here: if your problem is file organization, you do realize that you can use variables in the directory name, right? It's in the documentation. You could use a directory name like /backups/[DATE] to keep all files from the same date under the same folder, a different folder for each calendar day. This makes the point about the number of files completely moot.
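If it helps to picture it, the variable expansion amounts to roughly the following. The names and date format are purely illustrative, not the actual implementation.

```php
<?php
// Hypothetical illustration of a [DATE] variable in a directory template.
$template  = '/backups/[DATE]';
$directory = str_replace('[DATE]', gmdate('Ymd'), $template);
// e.g. /backups/20240101, one folder per calendar day
```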
However, as I explicitly stated above, you can always use Amazon S3 WITHOUT using a small part size for split archives. Amazon S3's API allows us to split the large, 2GB backup parts into smaller chunks (100MB works great from the CLI, as I already said). Yes, I told you explicitly in my previous reply. And yes, THAT is the BEST solution for a rapidly growing site like yours. I can tell you that after having had many long chats with the administrator of a site with a 64GB backup, growing by 1GB every month. When it comes to storing obscene amounts of data efficiently, there's nothing better than Amazon S3.
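For reference, this is roughly what chunked uploading to Amazon S3 looks like using the MultipartUploader from the official AWS SDK for PHP. The bucket, key and file paths are placeholders, and this is a sketch of the technique, not our implementation.

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\MultipartUploader;
use Aws\Exception\MultipartUploadException;

// Placeholder region; credentials come from the default provider chain.
$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

// Each 2GB backup part is sent in 100MB chunks, so no single request ever
// has to hold the whole archive part.
$uploader = new MultipartUploader($s3, '/path/to/site-backup.jpa', [
    'bucket'    => 'example-backup-bucket',
    'key'       => 'backups/20240101/site-backup.jpa',
    'part_size' => 100 * 1024 * 1024,
]);

try {
    $result = $uploader->upload();
    echo "Upload complete: {$result['ObjectURL']}\n";
} catch (MultipartUploadException $e) {
    echo 'Upload failed: ' . $e->getMessage() . "\n";
}
```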
The full list of remote storage providers which support chunked transfers of big files is:
- Upload to Amazon S3
- Upload to Dropbox (v1 API)
- Upload to Dropbox (v2 API)
- Upload to Microsoft OneDrive
However, due to pricing considerations, I would only recommend Amazon S3 for a site the size of yours.
Finally, do bear in mind that even if we made FTP/SFTP work, they would still not be a good choice for your site because of how PHP handles FTP/SFTP uploads. That's why I didn't even offer to debug this issue for you; it's an exercise in futility. You'd still need to use a small part size for split archives to let PHP itself manage to wrangle the file, assuming that the remote end wouldn't throw a fit over the amount of data being shoved at it in a very limited amount of time (most FTP servers do come with some basic protection features to prevent abuse).
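If you were to insist on FTP anyway, the only workable pattern is the one I just described: small split archive parts, pushed one at a time. Here is a rough sketch with PHP's stock FTP functions; the host, credentials and paths are invented for illustration, and this is not something I'm offering to debug or support.

```php
<?php
// Upload split archive parts (.jpa, .j01, .j02, ...) one at a time so that
// each individual transfer stays small.
$parts = glob('/path/to/backups/site-backup.j*');

$ftp = ftp_connect('ftp.example.net');
ftp_login($ftp, 'username', 'password');
ftp_pasv($ftp, true); // passive mode plays nicer with most firewalls

foreach ($parts as $part) {
    if (!ftp_put($ftp, '/remote/backups/' . basename($part), $part, FTP_BINARY)) {
        echo 'Failed to upload ' . basename($part) . "\n";
    }
}

ftp_close($ftp);
```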
I am closing this ticket since I've already given you the optimal solution, twice. Use Amazon S3. You won't be disappointed.
Nicholas K. Dionysopoulos
Lead Developer and Director
🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!