I was planning to use rsync to ship several TB of stuff from my old NAS to my new one soon. Since we're already talking about rsync, I guess I may as well ask if this is the right way to go?
I couldn't tell you if it's the right way, but I used it on my RPi 4 to sync 4 TB of stuff from my Plex drive to a backup, and set up a script to have it check/mirror daily. It took a day and a half to copy, and now it syncs in minutes at most when there's new data.
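For reference, a daily mirror job along those lines can be a one-line script plus a cron entry; a minimal sketch, with made-up paths and schedule:

```sh
#!/bin/sh
# Mirror the Plex library to the backup drive (all paths are hypothetical).
# -a        archive mode: preserve permissions, timestamps, symlinks, etc.
# --delete  remove files from the mirror that were deleted at the source
# Schedule with cron, e.g.: 30 3 * * * /usr/local/bin/plex-mirror.sh
rsync -a --delete --log-file=/var/log/plex-mirror.log \
    /mnt/plex/ /mnt/backup/plex/
```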
It depends. `rsync` is fine, but to clarify a little further...

If you think you'll stop the transfer and want it to resume (and some data might have changed), then yep, `rsync` is best.

But if you're just doing a one-off bulk transfer in a single run, then you could use other tools like `xcopy`/`scp`, or, if you've mounted the remote NAS at a local mount point, just plain old `cp`.

The reason is that `rsync` has to work out what's at the other end for each file, so it's doing some back-and-forth communication each time, which, as someone else pointed out, can load the CPU and reduce throughput.

(From memory, I think Raspberry Pis don't handle large transfers over `scp` well... I seem to recall a buffer gets saturated and the throughput drops off after a minute or so.)

Also, on a local network there's probably no point in using encryption or compression options, especially for photos/videos/music: you're just loading the CPU again to work out that the data can't be compressed any further.
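To make that concrete, here's roughly what the two approaches look like; the mount points are hypothetical:

```sh
# One-off bulk copy over a locally mounted share: no per-file negotiation.
cp -a /mnt/old-nas/media /mnt/new-nas/

# Resumable alternative: --partial keeps half-transferred files so an
# interrupted run can pick up where it left off. No -z, since photos,
# video, and music won't compress any further anyway.
rsync -a --partial --progress /mnt/old-nas/media/ /mnt/new-nas/media/
```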
Yes, it's the right way to go.

rsync over ssh is best, and works as long as rsync is installed on both systems.

On low-end CPUs you can max out the CPU before maxing out the network. If you want to get fancy, you can use rsync over an unencrypted remote shell like `rsh`, but I would only do that if the computers were directly connected to each other by one Ethernet cable.
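As a sketch of what the unencrypted variants look like; the hostnames are made up, and the daemon option (a common alternative to `rsh` these days) needs a module configured in rsyncd.conf on the receiving side:

```sh
# Over an unencrypted remote shell (rsh must be available on both ends):
rsync -a -e rsh /data/ newnas:/backup/data/

# Or talk to an rsync daemon on the receiver instead: the plain rsync
# protocol on TCP port 873, no remote shell involved at all.
rsync -a /data/ rsync://newnas/backup/data/
```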
Use borg/borgmatic for your backups. Use rsync to send your differentials to your secondary & offsite backup storage.
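A minimal sketch of that split, assuming a borg repo on local backup storage and an offsite host reachable over ssh (all paths are placeholders):

```sh
# One-time: create an encrypted borg repository.
borg init --encryption=repokey /mnt/backup/borg-repo

# Each run adds a deduplicated archive named by host and timestamp.
borg create /mnt/backup/borg-repo::'{hostname}-{now}' /home /etc

# Then mirror the repo (already encrypted at rest) to offsite storage.
rsync -a --delete /mnt/backup/borg-repo/ offsite:/backups/borg-repo/
```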
The thing I hate most about rsync is that I always fumble to get the right syntax and flags.
This is a problem because once it's working I never have to touch it again, since it just works and keeps working. I never use it often enough to memorize the usage.
I feel this too. I have a couple of "spells" that work wonders, written in a literal small notebook along with other one-liners collected over the years. It's my spell book lol.
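In the same spirit, the two rsync "spells" that seem to cover most cases (paths are placeholders):

```sh
# Dry run first: -n shows what would change without touching anything.
rsync -avn --delete src/ dest/

# The real run: archive mode, verbose, human-readable sizes, progress.
rsync -avh --progress --delete src/ dest/

# The gotcha worth writing down: a trailing slash on src/ means "the
# contents of src"; src without the slash copies the directory itself.
```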
I've been using borg because of the backend encryption and because the deduplication and snapshot features are really nice. It could be interesting to have cross-archive deduplication but maybe I can get something like that by reorganizing my backups. I do use rsync for mirroring and organizing downloads, but not really for backups. It's a synchronization program as the name implies, not really intended for backups.
Tangentially, I don’t see people talk about rclone a lot, which is like rsync for cloud storage.
It’s awesome for moving things from one provider to another, for example.
@calliope It’s also great for local or remote backups over ssh, smb, etc.
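For anyone who hasn't tried it, the rclone invocations are short once remotes are set up with `rclone config`; the remote names below are made up:

```sh
# Copy everything from one provider to another:
rclone copy onedrive:backups gdrive:backups --progress

# Or mirror a local folder to a remote reached over sftp:
rclone sync /srv/photos nas-sftp:photos --progress
```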
I tried to use it via Tailscale but it disconnects very easily - is that to be expected?
I used to use rsnapshot, which is a thin wrapper around rsync to make it incremental, but moved to restic and never looked back. Much easier and encrypted by default.
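For reference, the whole restic workflow is only a few commands; the repo path below is hypothetical, and restic prompts for a password and encrypts by default:

```sh
# One-time: create the encrypted repository.
restic -r /mnt/backup/restic-repo init

# Each run produces an incremental, deduplicated snapshot.
restic -r /mnt/backup/restic-repo backup /home /etc

# Apply a retention policy and reclaim space from expired snapshots.
restic -r /mnt/backup/restic-repo forget --keep-daily 7 --keep-weekly 4 --prune
```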
I think there are better alternatives for backup, like Kopia and restic. Even Seafile. You want protection against ransomware, storage compression, encryption, versioning, sync on write, and block deduplication.
Comparing Seafile to rsync reminds me of the old "Space Pen" folk tale.
I need a breakdown like this for Rclone. I've got 1 TB of OneDrive free and nothing to do with it.
I'd love to set up a home server and back up some stuff to it.
> slow

`rsync` is pretty fast, frankly. Once it's run once, if you have `-a` or `-t` passed, it'll synchronize mtimes. If the modification time and file size match, by default `rsync` won't look at a file further, so subsequent runs will be pretty fast. You can't really beat that for speed unless you have some sort of monitoring system in place (like filesystem-level support for identifying modifications).
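An easy way to see that quick-check behaviour in action (directories hypothetical):

```sh
# Second run over unchanged data: mtime and size match, so it returns fast.
rsync -a src/ dest/

# Show exactly what rsync thinks changed, without transferring anything:
rsync -a --itemize-changes --dry-run src/ dest/

# Distrust mtimes? Force full checksums (much slower: reads every file).
rsync -a --checksum src/ dest/
```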
Yeah, more often than not I notice the bottleneck is the storage drive itself, not rsync.
rsync for backups? I guess it depends on what kind of backup
for redundant backups of my data and configs that I still have a live copy of, I use restic; it compresses extremely well
I have used rsync to permanently move something to another drive though
Maybe I am missing something but how does it handle snapshots?
I use rsync all the time, but only for moving data around effectively. Not for backups, though, as it doesn't (AFAIK) handle snapshots.
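For what it's worth, this is exactly the gap rsnapshot (mentioned above) fills: it wraps rsync's `--link-dest` option, where each run looks like a full copy but unchanged files are hard links into the previous snapshot. A rough sketch, with hypothetical paths:

```sh
# Snapshot-style backup: unchanged files cost (almost) no extra space.
today=$(date +%F)
rsync -a --delete --link-dest=/mnt/backup/latest /data/ "/mnt/backup/$today/"
# Repoint "latest" at the snapshot we just made.
ln -sfn "/mnt/backup/$today" /mnt/backup/latest
```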