A large-file compression program
rzip [OPTIONS] <files...>
rzip is a file compression program designed to do particularly well on very large files containing long distance redundancy.
Here is a summary of the options to rzip.
-0          fastest (worst) compression
-6          default compression
-9          slowest (best) compression
-d          decompress
-o filename specify the output file name
-S suffix   specify compressed suffix (default '.rz')
-f          force overwrite of any existing files
-k          keep existing files
-P          show compression progress
-V          show version
-h  Print an options summary page
-V  Print the rzip version number
-0..9  Set the compression level from 0 to 9. The default is to use level 6, which is a reasonable compromise between speed and compression. The compression level is also strongly related to how much memory rzip uses, so if you are running rzip on a machine with limited amounts of memory then you will probably want to choose a smaller level.
-d  Decompress. If this option is not used then rzip looks at the name used to launch the program. If it contains the string 'runzip' then the -d option is automatically set (a minimal sketch of this kind of name check follows the options list below).
-o filename  Set the output file name. If this option is not set then the output file name is chosen based on the input name and the suffix. The -o option cannot be used if more than one file name is specified on the command line.
-S suffix  Set the compression suffix. The default is '.rz'.
-f  If this option is not specified then rzip will not overwrite any existing files. If you set this option then rzip will silently overwrite any files as needed.
-k  If this option is not specified then rzip will delete the source file after successful compression or decompression. When this option is specified then the source files are not deleted.
-P  If this option is specified then rzip will show the percentage progress while compressing.
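As an aside on the -d auto-detection described above, the name a program was launched under is available to it as argv[0], so the check amounts to a substring search. The following is a minimal, hypothetical C sketch of that technique; it is not rzip's actual source, and real option handling is omitted.

    /* Hypothetical sketch: enable decompress mode when invoked under a
     * name containing "runzip" (e.g. via a symlink).  Not rzip's real code. */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        int decompress = 0;

        /* argv[0] is the name used to launch the program */
        if (argc > 0 && strstr(argv[0], "runzip") != NULL)
            decompress = 1;

        /* ...real option parsing would still let -d turn this on... */

        printf("decompress mode: %s\n", decompress ? "on" : "off");
        return 0;
    }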
Just install rzip in your search path.
rzip operates in two stages. The first stage finds and encodes large chunks of duplicated data over potentially very long distances (up to nearly a gigabyte) in the input file. The second stage is to use a standard compression algorithm (bzip2) to compress the output of the first stage.
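To make the two stages concrete, here is a small, self-contained C sketch. It illustrates the idea only and is not rzip's actual algorithm, encoding or file format: stage one is reduced to a crude duplicate counter over fixed 4 KiB blocks, and stage two hands the whole buffer to libbzip2's one-shot API. The input data is made up for the example; compile with -lbz2.

    /* A toy illustration of the two-stage idea above -- NOT rzip's actual
     * algorithm or file format.  Stage 1 is reduced to a crude long-range
     * duplicate-block counter; stage 2 calls libbzip2.  Link with -lbz2. */
    #include <bzlib.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BLOCK 4096

    /* djb2-style hash over one fixed-size block (illustrative only) */
    static unsigned long block_hash(const unsigned char *p)
    {
        unsigned long h = 5381;
        for (size_t i = 0; i < BLOCK; i++)
            h = h * 33 + p[i];
        return h;
    }

    int main(void)
    {
        /* Hypothetical input: 8MB containing the same 4 KiB pattern over
         * and over, i.e. redundancy spread across a long distance. */
        size_t len = 8u * 1024 * 1024;
        unsigned char *buf = malloc(len);
        if (!buf)
            return 1;
        for (size_t i = 0; i < len; i++)
            buf[i] = (unsigned char)((i % BLOCK) * 31);

        /* Stage 1 (toy): remember the first offset seen for each block hash
         * and count later blocks that duplicate an earlier one. */
        enum { TABLE = 1 << 16 };
        static long seen[TABLE];
        for (size_t i = 0; i < TABLE; i++)
            seen[i] = -1;

        size_t dups = 0;
        for (size_t off = 0; off + BLOCK <= len; off += BLOCK) {
            unsigned long h = block_hash(buf + off) % TABLE;
            if (seen[h] >= 0 &&
                memcmp(buf + (size_t)seen[h], buf + off, BLOCK) == 0)
                dups++;            /* rzip would emit a back-reference here */
            else
                seen[h] = (long)off;
        }
        printf("stage 1: %zu of %zu blocks duplicate earlier data\n",
               dups, len / BLOCK);

        /* Stage 2: bulk compression with libbzip2's one-shot buffer API
         * (rzip applies this to the stage-1 output, not the raw input). */
        unsigned int out_len = (unsigned int)(len + len / 100 + 600);
        char *out = malloc(out_len);
        if (!out) {
            free(buf);
            return 1;
        }
        int rc = BZ2_bzBuffToBuffCompress(out, &out_len, (char *)buf,
                                          (unsigned int)len, 9, 0, 0);
        if (rc == BZ_OK)
            printf("stage 2: bzip2 compressed %zu bytes to %u bytes\n",
                   len, out_len);

        free(out);
        free(buf);
        return 0;
    }

The point of the sketch is the division of labour: the long-range matching of the first stage removes redundancy that lies far outside bzip2's own window, and bzip2 then compresses what remains.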
The key difference between rzip and other well-known compression algorithms is its ability to take advantage of very long distance redundancy. The deflate algorithm used in gzip, for example, uses a maximum history buffer of 32k. The block sorting algorithm used in bzip2 is limited to 900k of history. The history buffer in rzip can be up to 900MB long, several orders of magnitude larger than that of gzip or bzip2.
It is quite common these days to need to compress files that contain long distance redundancies. For example, when compressing a set of home directories, several users might have copies of the same file, or of quite similar files. It is also common to have a single file that contains large duplicated chunks over long distances, such as PDF files containing repeated copies of the same image. Most compression programs are unable to take advantage of this redundancy, and so achieve a much lower compression ratio than rzip does.
The ideas behind rzip were first implemented in 1998 while I was working on rsync. That version was too slow to be practical, and was replaced by this version in 2003.
Unlike most Unix compression programs, rzip cannot compress or decompress to or from standard input or standard output. This is a consequence of rzip's algorithm, which has to refer back to data up to hundreds of megabytes earlier rather than working through a small fixed window, and it cannot easily be fixed.
Thanks to the following people for their contributions to rzip:
Paul Russell for many suggestions and the Debian packaging
The authors of bzlib for an excellent library
rzip was written by Andrew Tridgell (http://samba.org/~tridge/).
If you wish to report a problem or make a suggestion then please email [email protected]
rzip is released under the GNU General Public License version 2 or later. Please see the file COPYING for license details.