
OK, strike my last: it looks like it will take a very, very long time to tar up the public folder.

Any suggestions on how to tar 100 GB of small files?

Without taking years to do?
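(One thing that sometimes helps here, as a sketch rather than a recommendation: with huge numbers of small files the single-threaded gzip step is often the bottleneck, so handing compression to pigz keeps all cores busy. The paths below are placeholders, and this assumes GNU tar and pigz are installed.)

    # pipe an uncompressed tar stream into pigz for multi-threaded compression
    # /srv/mastodon/public and /backup/public.tar.gz are placeholder paths
    tar -cf - /srv/mastodon/public | pigz -p "$(nproc)" > /backup/public.tar.gz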

@omnipotens do you need to tar it up, or can you just rsync it to the destination?
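(For reference, a minimal rsync sketch with placeholder source and destination paths; -a preserves permissions and timestamps, and a second run only transfers files that changed. --info=progress2 assumes rsync 3.1 or newer.)

    # initial copy; re-running later only sends new or changed files
    rsync -a --info=progress2 /srv/mastodon/public/ backuphost:/backup/public/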

@HexDSL I've been running a wc for 4 hours just to see how many files there are. So, thoughts?

@omnipotens maybe Syncthing? It should be able to handle that sort of volume. I would not compress it, just sync it out.

@HexDSL I tried Syncthing before, and with that large a number of files it eats up too much CPU.

@omnipotens
Any chance it can be moved to ZFS? It won't really help with the sheer size of the backup files, but it does make generating compressed, encrypted backups much easier. Those in turn could be made available as torrents, with checksums publicly available. Provided the goal is simply allowing anyone to download a copy of, or a portion of, the instance for safekeeping, this could be done pretty easily.
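(A rough sketch of that ZFS workflow, with hypothetical pool and dataset names: snapshot the dataset, stream it with zfs send, compress it, and publish a checksum alongside the torrent. This assumes the data already lives on a ZFS dataset and that zstd is installed.)

    # take a read-only snapshot of the dataset holding the public folder
    zfs snapshot tank/mastodon@backup1
    # stream the snapshot, compress it on all cores, and record a checksum
    zfs send tank/mastodon@backup1 | zstd -T0 > /backup/mastodon-backup1.zfs.zst
    sha256sum /backup/mastodon-backup1.zfs.zst > /backup/mastodon-backup1.zfs.zst.sha256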

@nergal ext4, yes, on an LVM volume group. rsync takes too long. The wc: I was just trying to get an idea of the number of files, so I ran find . | wc, and even that is taking hours and hours. I've never tried dumping the filesystem, but that would grab the whole fs and not just the folder, and it would increase the backup size.
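(If the count only needs to be a ballpark, a slightly cheaper variant is to skip printing full path names; this assumes GNU find. The directory traversal itself is still the slow part, though.)

    # print one character per regular file and count the characters
    find . -type f -printf '.' | wc -c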

@omnipotens Have you tried any of the things mentioned in this SO post yet? stackoverflow.com/questions/26

Maybe one of them will be the magic bullet you need.
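(One pattern that often comes up for jobs like this, not necessarily from that post: stream the tar archive straight to the destination over ssh instead of writing a 100 GB file locally first. The hostname and paths below are placeholders.)

    # stream the archive over ssh so no intermediate file is written locally
    tar -cf - public | ssh backuphost 'cat > /backup/public.tar'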
