Amazon S3 cPanel Backup Transporter: retry failed uploads in chunks rather than re-uploading the entire file.
Open Discussion
As an administrator, I would like the cPanel Backup Transport code for Amazon S3 buckets to retry uploads in chunks rather than re-uploading the entire file.
Currently, if a large file fails to upload to Amazon S3 (or times out, for whatever reason), the code retries with the entire file.
A third-party product such as s3cmd (version 1.6.0), by contrast, retries the upload of just the failed chunk after a short timeout. The cPanel code does not appear to attempt this: the backup transporter retries entire files when the whole process fails, but the S3 transporter does not retry per-chunk the way the current version of s3cmd does.
An option should exist to retry in chunks as well as the entire file.
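The requested behaviour can be sketched in outline. This is a hypothetical helper, not the actual cPanel transporter code: the function names (`read_chunk`, `upload_chunk`), retry count, and backoff policy are all assumptions for illustration. The point is simply that a transient failure on one part triggers a short-delay retry of that part alone, and only repeated failure of a part aborts the transfer for a full-file retry.

```python
import time


def upload_file_in_chunks(read_chunk, upload_chunk, num_chunks,
                          max_retries=3, backoff=1.0):
    """Upload a file part by part, retrying only the failed part.

    read_chunk(part)         -> bytes for that part (hypothetical callback)
    upload_chunk(part, data) -> ETag string; raises IOError on a
                                transient failure (hypothetical callback)
    """
    etags = []
    for part in range(1, num_chunks + 1):
        data = read_chunk(part)
        for attempt in range(1, max_retries + 1):
            try:
                # Retry the individual PUT, not the whole multipart upload.
                etags.append(upload_chunk(part, data))
                break
            except IOError:
                if attempt == max_retries:
                    # Give up on per-chunk retries; the caller may then
                    # fall back to the existing whole-file retry.
                    raise
                time.sleep(backoff * attempt)  # short linear backoff
    return etags
```

With, say, 1000 parts and one flaky PUT, this re-sends a few megabytes instead of the entire 10-20GB archive.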
We have recently come across a situation where such a feature would be very beneficial. We are having intermittent issues with Amazon S3, where random chunks fail to transfer. This restarts the entire transfer process for any affected account, delaying it by hours and incurring wasted S3 data transfer costs.
Where 10-20GB backup files are concerned, the chance of a backup failing is relatively high, considering there are between 500 and 1000 chunks, each with its own individual PUT request.