NiJo has asked for the wisdom of the Perl Monks concerning the following question:

For my backup project I'm looking for code and suggestions for an rsync workalike that is properly modular and pure Perl.

I don't want File::Rsync or anything else that just wraps the rsync binary. librsync is simply dead code and of no use to me. fsync's code is too ugly to start with. I won't be doing 1:1 copies of directory trees, but need different mappings from content to file name (and back) on both sides. The idea is to consider files with identical SHA1 checksums equal and not in need of transfer. I know it's wrong, but "good enough".

What I want covers at least some of rsync's main parts:
1) Directory traversal (File::Find::Rule or my own code)
2) Algorithm similar to the rsync tech report
3) Transport protocol (rsync compatibility not required)
4) State storage on the server
5) Scalable server
6) Untrusted clients

Backups in the GB range with 1e5 files on each client should be doable without too much latency. Part 1 can be considered done. I have untested code for part 2 that uses a sum of digits as the rolling checksum and MD5/SHA1 as the strong checksum.
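For reference, the weak rolling checksum from the rsync tech report can be sketched in a few lines of Perl. This is a hedged illustration of the general technique (an Adler-32-style variant with illustrative names), not the untested sum-of-digits code mentioned above:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Weak rolling checksum in the style of the rsync tech report:
# a = sum of bytes, b = sum of the running sums, both mod 2^16.
my $M = 65536;

# Checksum a whole block from scratch.
sub weak_sum {
    my ($data) = @_;
    my ($a, $b) = (0, 0);
    for my $byte (unpack 'C*', $data) {
        $a = ($a + $byte) % $M;
        $b = ($b + $a)    % $M;
    }
    return ($a, $b, ($b << 16) | $a);
}

# Slide the window one byte: drop $old, append $new, window length $len.
sub roll {
    my ($a, $b, $old, $new, $len) = @_;
    $a = ($a - $old + $new) % $M;
    $b = ($b - $len * $old + $a) % $M;
    return ($a, $b, ($b << 16) | $a);
}
```

Rolling the window one byte is O(1), which is what lets rsync test every offset of a large file cheaply before falling back to the strong checksum.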

I need suggestions for the transport protocol. An abandoned implementation used RPC::PlClient, but I have no clue whether XML-RPC, SOAP or anything else is worth the additional effort. A compressed and/or encrypted channel (e.g. SSL or SSH) would be a plus.

State storage means to me stat() data, possibly ACL info, and whatever is needed to recover a client file. rsync uses the file system via stat(). The abandoned implementation of my backup application used a SQL database, but it was infeasible due to the load from first-time clients. Imagine 1e5 inserts hitting the database, each with some latency. A new implementation would use one SQLite database per client that is "r"synced with the server. Bloom::Filter plus some tricks on top of it will give fast and _accurate_ knowledge of which files are already on the server.
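A minimal hand-rolled sketch of the Bloom filter idea (illustrative, not the CPAN Bloom::Filter API; sizes and names are assumptions): the client asks "might this checksum already be on the server?" locally, and only the rare false positives need a server-side recheck:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5);

# Tuning knobs for the sketch: a 1 Mbit filter with 4 hash functions.
my $bits   = 2**20;
my $k      = 4;
my $filter = "\0" x ($bits / 8);

sub _positions {
    my ($key) = @_;
    # Derive $k bit positions from one MD5 digest (4 x 32-bit words).
    my @words = unpack 'N*', md5($key);
    return map { $words[$_ % @words] % $bits } 0 .. $k - 1;
}

sub bloom_add {
    vec($filter, $_, 1) = 1 for _positions($_[0]);
}

sub bloom_check {
    # Returns 1 for "maybe present" (false positives possible),
    # 0 for "definitely absent".
    vec($filter, $_, 1) || return 0 for _positions($_[0]);
    return 1;
}
```

The "definitely absent" answer is the valuable one here: a first-time client can skip the server round trip for every file the filter rejects.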

The server should be simple to set up and low on resource usage. Standalone servers are greatly preferred.

As anybody can hack a client, the server needs to prevent harm and privacy issues for other clients, e.g. by rechecking checksums before releasing files to the common pool.

Do you have helpful suggestions or hints to (unfinished) code? Especially picking the right transport protocol is key to breaking up the superb but monolithic rsync.

Thank you very much.

Replies are listed 'Best First'.
Re: rsync workalike
by bluto (Curate) on Aug 12, 2004 at 20:00 UTC
    I don't have too many suggestions. One problem is that you are very specific about what you want to implement, but you aren't explaining the scope of what you are actually trying to do.

    For example, when I think of the term "backup" I don't tend to think "rsync", since backups tend to keep multiple versions of a file, and the files eventually tend to be aggregated onto cheaper tertiary storage (like tape) for scalability/price reasons. Storing these as individual objects on the server's disk and/or in a database is not necessarily efficient (i.e. not "low on resource usage"). Another example: you aren't defining how many files a server will handle, how many clients per server, etc.

    If you really need something like rsync, one thing you might try is to bundle up many small files on the client end into a single reasonably sized group (e.g. with tar) and work with that as a unit. Depending on the file sizes you are dealing with, this could cut the number of database entries and/or checksums down by a factor of 100 or more. Another nice thing about this is that you force the client to do most of the work -- the server doesn't need to stat 100 files. If a small file changes on the client, you could recreate the entire bundle, since the total size of the bundle would be reasonable.
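    A hedged sketch of that bundling idea in Perl, using the core Archive::Tar and Digest::SHA modules (the function name and layout are illustrative, not from any existing tool):

```perl
use strict;
use warnings;
use Archive::Tar;
use Digest::SHA qw(sha1_hex);
use Cwd qw(getcwd);

# Bundle the plain files of one directory into a single tar unit and
# checksum the bundle, instead of tracking each member separately.
sub bundle_dir {
    my ($dir, $out) = @_;
    opendir my $dh, $dir or die "opendir $dir: $!";
    my @files = grep { -f "$dir/$_" } readdir $dh;
    closedir $dh;

    # Add members by relative name so the archive isn't tied to $dir.
    my $old = getcwd;
    chdir $dir or die "chdir $dir: $!";
    my $tar = Archive::Tar->new;
    $tar->add_files(@files);
    chdir $old or die "chdir $old: $!";
    $tar->write($out);

    # One strong checksum per bundle instead of per file.
    open my $fh, '<:raw', $out or die "open $out: $!";
    local $/;
    return sha1_hex(<$fh>);
}
```

Note that the bundle checksum changes whenever any member (or its metadata in the tar header) changes, which matches the "recreate the whole bundle" approach above.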

    FWIW, I've heard of someone creating fairly scalable bundles like this by simply bundling the files in the same directory (i.e. not descending the tree recursively, so that there is one bundle per directory). One nice thing about this is that the bundle file has the same parent directory path as every file in it, so you don't have to play games trying to hunt down which bundle a file is located in.

      Thank you very much for your suggestions. I was after help on replacing rsync more than on my application, which was only meant to serve as background. Simplified, I want to do something like:

      rsync-copy --source /path/file --template remote:/pool/<old_checksum> --target remote:/pool/`md5sum /path/file`

      for many files. The goal is to create a centralized smart backup application (e.g. for home use). Think of a full backup as fast as "updatedb" on Linux/Unix. It has to work across low bandwidth links, e.g. analog phone lines. I want to exploit the fact that many clients share most of their data, just in different locations. That's mostly the operating system and applications. Clients are too dissimilar for simple imaging, but they share 90% of their data within the same OS distribution.
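      A minimal sketch of the content-addressed naming the example command implies, assuming SHA1 and a /pool/<checksum> layout (the path and function name are illustrative):

```perl
use strict;
use warnings;
use Digest::SHA;   # core since Perl 5.10; Digest::SHA1 on older perls

# Map a client file to its name in the server-side common pool:
# files with equal SHA1 are considered equal, so they share one
# pool entry and never need to be transferred twice.
sub pool_name {
    my ($path) = @_;
    open my $fh, '<:raw', $path or die "open $path: $!";
    my $sha = Digest::SHA->new(1);   # SHA-1
    $sha->addfile($fh);
    return '/pool/' . $sha->hexdigest;
}
```

Given that mapping, "sync" degenerates to: compute the local digest, ask the server whether /pool/<digest> exists, and upload only on a miss.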

      I don't have many details about the file count on the server (1e6 distinct checksums?), the number of clients (1e3?) or the distribution of file sizes. Despite the large numbers, server DB scalability, performance and disk space should not be major issues. One DB per client scales well. Of course the client needs to do 99% of the work.

      For example, when I think of the term "Backup" I don't tend to think "rsync" since backups tend to keep multiple versions of a file,

      There's actually quite a nice technique that uses rsync for disk-based backups and takes advantage of Unix hard links to keep multiple revisions fairly efficiently.
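      The hard-link part of that technique -- cloning the previous snapshot as hard links so that a subsequent rsync rewrites only changed files -- can be sketched in Perl (directory names are illustrative; this ignores symlinks and permissions for brevity):

```perl
use strict;
use warnings;
use File::Find;
use File::Path qw(make_path);

# Recreate the tree of $prev under $next, hard-linking every file.
# Unchanged files then cost no extra disk space; rsync replaces
# changed files in $next with fresh copies, leaving $prev intact.
sub link_snapshot {
    my ($prev, $next) = @_;
    find(sub {
        my $rel = substr $File::Find::name, length $prev;
        if (-d) {
            make_path("$next$rel");
        } else {
            link $File::Find::name, "$next$rel"
                or die "link $File::Find::name: $!";
        }
    }, $prev);
}

# afterwards, e.g.: system 'rsync', '-a', '--delete', "$src/", $next;
```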

•Re: rsync workalike
by merlyn (Sage) on Aug 12, 2004 at 20:29 UTC
Re: rsync workalike
by adrianh (Chancellor) on Aug 15, 2004 at 13:43 UTC

    You might want to look at:

    • psync - a Perl rsync-ish system for Mac OS
    • Unison - a bi-directional file-synchronization tool for Unix and Windows. Written in OCaml, but the code is clean.
      The psync script from MacOSX::File uses stat() attributes to make full copies into the same directory structure stored elsewhere. Unison mostly deals with source code but seems to contain a home-grown rsync workalike. There are several other tools that do rsync + hard-linking of unchanged files.

      BackupPC (http://sourceforge.net/projects/backuppc/) has a concept very similar to what I want. It is written in Perl and has a nice CGI interface. The disadvantages include using Windows shares, tar or rsync as the transport protocol and, AFAIK, checksumming only parts of a file.

Re: rsync workalike
by NiJo (Friar) on Aug 15, 2004 at 20:36 UTC
    Thank you all for the hints on other programs. But let me ask the main question another way:

    How would you reimplement rsync (at the single-file sync level) in Perl? What modules would you use for a scalable, standardized high-level transport protocol?