
To avoid load on the server, there is a proposal to find a way to generate the backup directly on the remote server.

NeeDle shared this idea 11 years ago
Needs Review

It would be great to add the ability to generate a backup of user ?panel folders directly on a remote server, in one step.


On our local server, the backup scripts are prioritized so that the load stays no higher than two while they run. Generating the backup directly on the remote server would reduce the load average, thereby reducing resource use on the local machine.

Best Answer

It is unlikely this feature will be further considered at this time. The only feasible way of generating a backup on a remote server to reduce load would be to transfer all data over to the remote server and have the remote server perform all of the data modification, archival, and compression.


The implications of this are dubious at best since the volume of data you would need to transfer would be significantly higher than that of the locally finished archive, extending transfer times and overall backup completion times. This also means developing an entirely new transfer system/daemon from the ground up (to run on the remote server), which may result in worse overall backup times and performance anyway.
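As a rough illustration of why shipping raw data costs more than shipping the finished archive, here is a toy comparison (Python, with hypothetical file contents; real backup sets vary, but text, config, and log files typically compress well):

```python
import io
import tarfile

# Toy stand-in for a user's home directory: a few compressible text files.
# (Hypothetical data, purely for illustration.)
files = {f"home/user/file{i}.log": b"repetitive log line\n" * 1000
         for i in range(5)}

# Remote-side archiving would have to move every raw byte over the wire.
raw_size = sum(len(data) for data in files.values())

# Local archiving moves only the finished, compressed archive.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name, data in files.items():
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
archive_size = len(buf.getvalue())

print(f"raw bytes to transfer (remote-side archiving): {raw_size}")
print(f"archive bytes to transfer (local archiving):   {archive_size}")
```

For compressible data the locally finished archive is a small fraction of the raw volume, which is exactly the transfer-time penalty described above.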


There is also the consideration that incremental backups would still have to compare files local <-> remote and issue a stat() call for each, which would in and of itself contribute to load anyway.
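To make the point concrete: an incremental comparison has to stat() every candidate file on the local machine to decide whether it changed, regardless of where the archive ends up. A minimal sketch, assuming a size-plus-mtime comparison against a stored manifest (one common approach, not necessarily this product's exact logic):

```python
import os
import tempfile

def changed_files(root, manifest):
    """Return paths under root whose (size, mtime) differ from the manifest.

    Every candidate file is stat()'ed locally -- this per-file work happens
    on the local machine no matter where the resulting archive is written.
    """
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)  # the unavoidable local stat() call
            key = os.path.relpath(path, root)
            if manifest.get(key) != (st.st_size, int(st.st_mtime)):
                changed.append(key)
    return sorted(changed)

# Demo with a throwaway directory (hypothetical layout).
root = tempfile.mkdtemp()
for name in ("a.txt", "b.txt"):
    with open(os.path.join(root, name), "w") as f:
        f.write("data")

# First pass: empty manifest, so everything counts as changed.
first = changed_files(root, {})

# Record the current state, then re-check: nothing has changed.
manifest = {}
for key in first:
    st = os.stat(os.path.join(root, key))
    manifest[key] = (st.st_size, int(st.st_mtime))
second = changed_files(root, manifest)

print(first)
print(second)
```

Even when the second pass finds nothing to back up, it has already performed one stat() per file, so the local load contribution remains.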


In short, I do not see a viable method by which to accomplish this feature request. If you, or anybody else, can volunteer suggestions on feasible ways to attain this, we are of course interested in any input offered.

Replies (2)



I understand. It is unlikely this function will be developed in-house, then. All that remains is to wait for support for remote incremental backups :) Thanks!

Replies have been locked on this page!