I've been working on finding a way to back up VMs to an object storage like S3 or B2 Backblaze, and I've finally found a solution that is quite stable and performs well. If someone is interested, I'm doing it this way: I made a script that, at the end of the backup process (using a script hook on nf), moves the backup with rclone to the object storage (I'm using B2 Backblaze, which performs very fast, but it works perfectly with anything S3-compatible; tested with Scaleway), then deletes the VM backup on local storage to free disk space. Then I made a second script, running weekly from cron, that deletes the oldest backups in the object storage, keeping just the most recent ones.

What is nice is that I can restore a VM directly by mounting the object storage in a local folder using rclone mount or s3fs (rclone mount performs better), mapping that folder into Proxmox as a directory storage, and restoring whichever backup I need, without copying the backup file locally first. Using a 1 Gbps internet connection I can restore directly from B2 Backblaze at more than 500 Mbps. That's pretty fast for object storage. If someone needs help or wants to give me suggestions to improve this idea, please let me know!

Old thread, but I thought I'd chime in, although I only have a small amount of info to offer here. I had tried to use B2's Lifecycle policy myself to delete old files, and failed; in the end I wrote my own script using the b2 CLI tool to do so. So the info provided about how to successfully do it using Lifecycle rules is really interesting - thanks. However, B2's Lifecycle rules can't take account of the fact that you may have a stored local backup of a file that's also being synced (not just moved) to B2. In my case I have some backups of VM templates that are stored locally and never deleted, which are also stored on B2 via the sync process. If you allow B2's Lifecycle rules to delete a file after X days, then the next time you do a sync you'll be uploading that local backup to B2 all over again. So every X days, you'll be re-syncing that locally stored file, over and over. Sure, if the file is small it makes little difference, but if it is huge, it makes a big difference. Hours and hours of difference, potentially.

Another thing I wanted to comment on was speed. I use s3cmd to sync files rather than rclone, though rclone looks rather interesting as an alternative: it is a simple binary, so I could potentially run it directly on a node rather than in a container on a node (unlike, at least previously, s3cmd, which is a whole mess of Python). What I found with s3cmd was that it was very flexible: it lets you set the number of threads, the chunk size, and a bandwidth limit, and by experimenting with all three settings I was able to get good transfer speeds, with minimal pre-transfer delay, without causing too high a load on the server. It took days of experimenting to get a result I was reasonably happy with. Some of my backups are 500 GB, others 100 to 200 GB, and it was the larger ones that needed the most careful tuning; a larger chunk size than the default was critical. The native b2 CLI tool wasn't as flexible, and I was getting terrible transfer speeds and the load was way too high, no matter what I did.
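The original poster's move-then-prune-then-mount workflow can be sketched in shell. Everything here is an assumption layered on that description, not the poster's actual scripts: the rclone remote name `b2remote:vm-backups`, the retention count `KEEP`, and the use of vzdump's hook-script mechanism (a `backup-end` phase with the archive path exported in `$TARGET`) should all be verified against your own Proxmox and rclone setup.

```shell
#!/bin/sh
# Sketch of the workflow described above. The remote name, bucket,
# and KEEP=4 are illustrative assumptions, not measured values.

REMOTE="b2remote:vm-backups"
KEEP=4

# 1) vzdump hook: after each backup finishes, move the dump to B2
#    (rclone move uploads, then deletes the local copy).
#    Assumed wiring: a hook script configured for vzdump that is
#    called with the phase as $1 and the archive path in $TARGET.
backup_hook() {
    phase="$1"
    if [ "$phase" = "backup-end" ]; then
        rclone move "$TARGET" "$REMOTE/"
    fi
}

# 2) Weekly cron: delete the oldest remote backups, keeping the
#    newest $KEEP. vzdump filenames embed a timestamp, so lexical
#    order matches chronological order. The selection logic is a
#    separate function so it can be tested on its own.
pick_victims() {
    keep="$1"
    # stdin: one filename per line; prints all but the newest $keep.
    # Note: "head -n -N" is a GNU coreutils extension.
    sort | head -n -"$keep"
}

prune_remote() {
    rclone lsf "$REMOTE" | pick_victims "$KEEP" | while read -r f; do
        rclone deletefile "$REMOTE/$f"
    done
}

# 3) Restore path: mount the bucket read-only, then point a Proxmox
#    "directory" storage at the mountpoint so a restore can read the
#    archive without downloading it first.
mount_remote() {
    mkdir -p /mnt/b2-backups
    rclone mount --read-only "$REMOTE" /mnt/b2-backups &
}
```

Factoring the prune step around `rclone lsf` (plain filename listing) keeps the retention logic independent of B2: the same script works against any rclone remote, which matches the poster's note that the approach is S3-generic.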
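The reply's three s3cmd knobs (threads, chunk size, bandwidth limit) have direct equivalents in rclone, the alternative the reply mentions. The flag names below are real rclone options, but the local path, remote name, and every number are placeholder assumptions to experiment from for multi-hundred-GB archives, not recommendations:

```shell
# Starting point for tuning a large (100-500 GB) sync to a B2 remote.
#   --transfers      parallel transfers (the "threads" knob)
#   --b2-chunk-size  upload chunk size; raising it above the default
#                    is what helped most with very large archives
#   --bwlimit        bandwidth cap, to keep server load reasonable
rclone sync /var/lib/vz/dump b2remote:vm-backups \
  --transfers 4 --b2-chunk-size 200M --bwlimit 30M
```

As in the thread, the right values depend on file sizes, link speed, and how much load the host can tolerate, so expect to iterate on all three together.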