tar pipe

Occasionally I find myself needing to transfer a large number of files between systems without being able to use rsync. In those cases I fall back on what I call a "tar pipe". Here's how it works.

[user@source:~/]$ tar -zcf - /path/to/files/to/tar | ssh destination tar -zxf -

Here we are telling the "tar" command to create a new archive ("-c"), gzip the contents ("-z"), and write the file ("-f") to standard output ("-"). We pipe the output from tar into ssh, which runs a second instance of tar on the destination system to extract ("-x") the gzipped ("-z") contents, reading the file ("-f") from standard input ("-").
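The same pattern can be demonstrated entirely locally by piping one tar into another, which is a handy way to test it before involving ssh. This is a sketch using hypothetical paths (/tmp/tarpipe-src and /tmp/tarpipe-dst); the "-C" flag makes tar change into the given directory first, so the archive holds relative paths and extracts cleanly wherever you point the receiving side.

```shell
# Set up a hypothetical source directory with one file.
mkdir -p /tmp/tarpipe-src /tmp/tarpipe-dst
echo "hello" > /tmp/tarpipe-src/file.txt

# The tar pipe, minus ssh: the first tar archives to stdout,
# the second extracts from stdin into a different directory.
# In the remote case, the right-hand side would be:
#   ssh destination tar -zxf - -C /some/dir
tar -zcf - -C /tmp/tarpipe-src . | tar -zxf - -C /tmp/tarpipe-dst
```

Using "-C" on both sides also avoids archiving absolute paths, which would otherwise be recreated verbatim on the destination.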

One of the added advantages of this method is that the tar file is never actually written to disk on either system, which avoids redundant disk I/O. It also streams data from the source end to the remote end rather than performing blocking copies. This works well when you have a stable connection, like an ssh pipe. If your connection isn't stable, however, you will want to resort to other methods to verify that everything transferred properly.
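One way to do that verification is to compare checksums of every file on both ends. The sketch below uses hypothetical /tmp paths and a local tar pipe so it is self-contained; in the real remote case you would run the destination half of each command through ssh.

```shell
# Hypothetical source and destination directories.
mkdir -p /tmp/verify-src /tmp/verify-dst
printf 'data\n' > /tmp/verify-src/a.txt

# Transfer via a local tar pipe (substitute
# `ssh destination tar -zxf - -C /tmp/verify-dst` when remote).
tar -zcf - -C /tmp/verify-src . | tar -zxf - -C /tmp/verify-dst

# Checksum every file on each side, sort so the lists line up,
# then diff them. No output from diff means the trees match.
(cd /tmp/verify-src && find . -type f -exec sha256sum {} + | sort) > /tmp/src.sums
(cd /tmp/verify-dst && find . -type f -exec sha256sum {} + | sort) > /tmp/dst.sums
diff /tmp/src.sums /tmp/dst.sums
```

sha256sum ships with GNU coreutils; on systems without it, `shasum -a 256` is a common substitute.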

There really isn't much to this, but I'm documenting it here for coworkers who occasionally need to do it.