Kickstart is the script which extracts the backup archive. You are well past that step, since you are currently using ANGIE, the restoration script which is included in the backup archive during the backup process. So, no, there is no problem with Kickstart.
As for ANGIE itself, no, there is no problem with ANGIE. There might be, however, a problem with the actual host you are restoring the site to. Replacing data in the database is a process which requires a lot of resources. After all, for each selected table we have to go through all of its records. For each record we have to pull all of its fields. We then have to check whether each field contains serialized data and, if so, parse it, make the replacements and encode it back to serialized data. If it's plain text data we do a much faster simple text replacement. Then we have to write everything back.
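The loop described above can be sketched roughly like this (Python used purely for illustration; ANGIE itself is written in PHP, and `decode`, `encode` and `deep_replace` are hypothetical stand-ins for a real serialized-data parser):

```python
def looks_serialized(value: str) -> bool:
    # Cheap heuristic used only for this sketch: PHP-serialized values
    # start with a type marker such as 's:', 'a:', 'O:', 'i:' ...
    return len(value) > 2 and value[1] == ":" and value[0] in "sabidO"

def replace_in_rows(rows, old, new, decode, encode, deep_replace):
    """rows: the records of one table, as a list of dicts.
    decode/encode/deep_replace are hypothetical stand-ins for a real
    serialized-data parser and a recursive replacement routine."""
    for row in rows:                          # every record of the table...
        for field, value in row.items():      # ...and every field of it
            if not isinstance(value, str):
                continue
            if looks_serialized(value):
                data = decode(value)          # the slow part: full tokenization
                row[field] = encode(deep_replace(data, old, new))
            else:
                row[field] = value.replace(old, new)  # fast plain-text path
    return rows                               # then everything is written back
```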
The thing is that all other WordPress data replacement solutions do NOT parse serialized data. They rely on very precarious, outright wrong regular expressions which work on maybe 80% of serialized data. On some sites they just destroy your data. ANGIE does not suffer from that problem. However, parsing (tokenizing) serialized data is inherently SLOW: it takes about 10 seconds per Megabyte. This means that you can very easily get a timeout error if you try replacing everything at once.
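To see why the regex shortcut destroys data while a tokenizer does not, here is a toy sketch (my illustration, not ANGIE's actual code) handling only the simplest PHP-serialized type, a flat string `s:<length>:"<payload>";`. The tokenizer reads each payload by its declared length and re-encodes it with a corrected length; it is ASCII-only for brevity, whereas PHP actually counts bytes:

```python
def naive_replace(serialized: str, old: str, new: str) -> str:
    """What regex-style tools effectively do: swap the text but leave
    the length prefix alone."""
    return serialized.replace(old, new)

def tokenizing_replace(serialized: str, old: str, new: str) -> str:
    """Read each s:N:"...": token by its declared length N, replace inside
    the payload, then re-encode the token with the corrected length."""
    out, i = [], 0
    while i < len(serialized):
        if serialized.startswith("s:", i):
            colon = serialized.index(":", i + 2)
            n = int(serialized[i + 2:colon])
            start = colon + 2                       # skip the ':"'
            payload = serialized[start:start + n].replace(old, new)
            out.append('s:%d:"%s";' % (len(payload), payload))
            i = start + n + 2                       # skip the closing '";'
        else:
            out.append(serialized[i])
            i += 1
    return "".join(out)

value = 's:23:"http://old-site.example";'
naive_replace(value, "old-site", "my-new-site")
# -> 's:23:"http://my-new-site.example";'  length says 23, payload is 26: broken
tokenizing_replace(value, "old-site", "my-new-site")
# -> 's:26:"http://my-new-site.example";'  length corrected: still valid
```

PHP's unserialize() rejects the first result outright, which is exactly how the regex-based tools silently corrupt options and plugin data.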
We deal with that the same way we do with the backup process itself: we split it in chunks. However, unlike the backup process, we cannot offer a great deal of control over the timing settings or let you skip over anything. Therefore you may end up making so many requests per unit of time that the server temporarily blacklists your IP. Or you may simply end up using too much CPU or memory because, well, replacing ALL data in the ENTIRE database requires a lot of work and memory.
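The chunking can be pictured with this minimal sketch (an assumed shape for illustration only; ANGIE's actual step/return protocol differs): each request processes work until a time budget is exhausted, then hands the remainder back so the browser can fire the next request.

```python
import time

def run_in_chunks(work_items, process, budget_seconds=5.0):
    """Process as many items as fit in the time budget, then return the
    remaining items for the next HTTP request (hypothetical shape)."""
    start = time.monotonic()
    done = 0
    for item in work_items:
        process(item)
        done += 1
        if time.monotonic() - start >= budget_seconds:
            break                      # budget spent: stop, reply to client
    return work_items[done:]           # leftover work for the next request
```

Each of those repeated requests is what can trip a server's denial-of-service protections, which is the failure mode described below.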
So, technically, it's your server that has the problem. You can sidestep the issue by temporarily disabling server-side protections against denial of service attacks such as mod_security2 or mod_evasive on Apache. This helps with the auto-blacklisting of IPs. But if the problem is CPU usage there's not much you can do.
Why do other data replacement solutions "work"? Because they don't really work :) Instead of parsing serialized data they just use a one-size-fits-all regular expression to try and modify the data. If the replacement text has a different character count than the original, this breaks nested serialized objects, which in turn breaks a lot of third party plugins' data in the database. But since regular expressions are faster than full tokenization of serialized data, they do sidestep most of the reasons leading to the server interrupting the data replacement, so your site mostly works. I don't want us to ship that kind of half-baked, half-working solution.
Also, you might wonder why our Joomla!, Drupal etc. restoration scripts work much better than the WordPress one. The reason is simple: no other CMS relies on a data format as broken and completely unfit for purpose (serialized data) as WordPress does. Therefore no other CMS needs a resource-intensive operation to move it to a different domain.
Also note that what I said about serialized data is not my arbitrary opinion. Serialized data has a very scary and troubling history of security issues which, two years ago, affected ALL CMS. The only CMS that did exactly nothing to address this issue was, guess what, WordPress :( I'm saying that as the person who worked around the impact of that PHP security issue in Joomla! itself back in Christmas 2015. Also note that the developers of PHP itself have announced that they won't fix any more security issues in serialize. That's pretty much the biggest reason why I moved my blog back to Joomla! in 2015: I saw the gaping holes and the complete disregard for security in WordPress, so I decided not to use it for any of my own sites. As you understand, writing security software comes with having a rather big bullseye painted on my sites. I can't risk it ;)
But I digress. The very short answer to your question is that yes, there is a correlation between the size of the site's database and the likelihood of the data replacement failing. I would recommend NOT selecting all of the site's tables in the Replace Data page. Select a subset of the tables, ideally only the ones which contain serialized data. For non-serialized data, e.g. the links in third party content items, images in a web shop etc., you can always safely use third party scripts. Somewhere in my mile-long to-do list I have an item for writing such a script, to be released under the Akeeba brand. Other things have taken priority for now.
If you feel the need for clarifications, feel free to ask me :)
Nicholas K. Dionysopoulos
Lead Developer and Director
🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!