OK, it's process limits, as I suspected. There's nothing else for us to do about the limits themselves; the alt script is the way around them.
The difference is that the alt script goes through the web server to run the backup. Instead of one big, resource-hungry process running the entire backup, the server now sees a series of small, short-lived processes, so it won't kill them. The alt script itself consumes almost no resources; it just sits there, waiting for each backup step process to finish before starting the next.
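To illustrate the idea, here is a minimal sketch in Python of the kind of loop the alt script runs. The URL and the JSON fields (`done`, `error`) are hypothetical placeholders, not the actual backup API; the point is simply that each HTTP request does one small chunk of work and finishes well before the server's limits kick in.

```python
import json
import time
import urllib.request

# Hypothetical front-end backup URL; the real one comes from your
# backup software's configuration. Each request runs ONE short step.
BACKUP_URL = "https://example.com/index.php?option=com_backup&task=step"

def run_step(url: str) -> dict:
    """Fire a single backup-step request and decode its JSON status."""
    with urllib.request.urlopen(url, timeout=60) as resp:
        return json.load(resp)

def run_backup() -> None:
    # The controlling script itself does almost no work: it just loops,
    # asking the web server to perform one small step at a time.
    while True:
        status = run_step(BACKUP_URL)
        if status.get("done"):       # hypothetical status field
            print("Backup finished")
            break
        if status.get("error"):      # hypothetical status field
            raise RuntimeError(status["error"])
        time.sleep(1)  # brief pause so we don't hammer the server

if __name__ == "__main__":
    run_backup()
```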
The only caveat with this solution concerns transferring files to remote storage. If you want to do that, you will need to use smaller part sizes (around 10M) or a transfer engine that supports uploading backup archives to remote storage in chunks (e.g. Amazon S3, Dropbox, OneDrive, Google Drive, ...). If you try to transfer a big file, say 50M, in one go (like FTP does), you will get a timeout from the web server that runs the backup step processes.
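For context on why the chunked engines work, here is a minimal sketch, assuming Python and the boto3 library, of a multipart upload to Amazon S3 that sends an archive in roughly 10M pieces. Each piece is its own short request, so no single backup step runs long enough to hit the web server's timeout. The bucket, key, and file names are placeholders.

```python
import boto3

PART_SIZE = 10 * 1024 * 1024  # ~10M parts, matching the advice above

def multipart_upload(path: str, bucket: str, key: str) -> None:
    """Upload a large archive to S3 in small parts, one request each."""
    s3 = boto3.client("s3")
    upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
    parts = []
    try:
        with open(path, "rb") as fh:
            part_number = 1
            while True:
                chunk = fh.read(PART_SIZE)
                if not chunk:
                    break
                # Each part is a separate short-lived request, so it
                # fits comfortably inside the server's time limits.
                result = s3.upload_part(
                    Bucket=bucket, Key=key,
                    PartNumber=part_number,
                    UploadId=upload["UploadId"],
                    Body=chunk,
                )
                parts.append({"ETag": result["ETag"],
                              "PartNumber": part_number})
                part_number += 1
        s3.complete_multipart_upload(
            Bucket=bucket, Key=key,
            UploadId=upload["UploadId"],
            MultipartUpload={"Parts": parts},
        )
    except Exception:
        # Abandon the partial upload so S3 doesn't keep the pieces around.
        s3.abort_multipart_upload(Bucket=bucket, Key=key,
                                  UploadId=upload["UploadId"])
        raise

# Example with placeholder names:
# multipart_upload("site-backup.jpa", "my-backup-bucket",
#                  "backups/site-backup.jpa")
```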
Armed with that knowledge, I think you'll have no problem running your backups!
Nicholas K. Dionysopoulos
Lead Developer and Director
🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!