At other sites a secondary tape backup will be required. This tape
backup can be done perhaps weekly from the BackupPC pool file system.
-One comment: in the US in particular, permanent backups of things like
-email are becoming strongly discouraged by lawyers because of discovery
-prior to possible litigation. Using BackupPC without tape backup allows
-recent file changes or losses to be restored, but without keeping a
-history more than a month or two old (although this doesn't avoid the
-problem of old emails languishing in user's email folders forever).
-
=back
=head2 Resources
=item Mail lists
-Two BackupPC mailing lists exist for announcements (backuppc-announce)
-and reporting information, asking questions, discussing development or
-any other topic relevant to BackupPC (backuppc-users).
+Three BackupPC mailing lists exist: one for announcements (backuppc-announce),
+one for developers (backuppc-devel), and a general user list for support,
+questions or any other topic relevant to BackupPC (backuppc-users).
-You are encouraged to subscribe to either the backuppc-announce
-or backuppc-users mail list on sourceforge.net at either:
+You can subscribe to these lists by visiting:
http://lists.sourceforge.net/lists/listinfo/backuppc-announce
http://lists.sourceforge.net/lists/listinfo/backuppc-users
+ http://lists.sourceforge.net/lists/listinfo/backuppc-devel
The backuppc-announce list is moderated and is used only for
important announcements (eg: new versions). It is low traffic.
-You only need to subscribe to one list: backuppc-users also
-receives any messages on backuppc-announce.
+You only need to subscribe to one of backuppc-announce or
+backuppc-users: backuppc-users also receives any messages posted to
+backuppc-announce.
+
+The backuppc-devel list is only for developers who are working on BackupPC.
+Do not post questions or support requests there; detailed technical
+discussions, however, do belong on this list.
To post a message to the backuppc-users list, send an email to
=item Other Programs of Interest
If you want to mirror linux or unix files or directories to a remote server
-you should consider rsync, L<http://rsync.samba.org>. BackupPC uses
+you should consider rsync, L<http://rsync.samba.org>. BackupPC now uses
rsync as a transport mechanism; if you are already an rsync user you
can think of BackupPC as adding efficient storage (compression and
pooling) and a convenient user interface to rsync.
Unison is a utility that can do two-way, interactive synchronization.
See L<http://www.cis.upenn.edu/~bcpierce/unison>.
-Two popular open source packages that do tape backup are
-Amanda (L<http://www.amanda.org>) and
-afbackup (L<http://sourceforge.net/projects/afbackup>).
+Three popular open source packages that do tape backup are
+Amanda (L<http://www.amanda.org>),
+afbackup (L<http://sourceforge.net/projects/afbackup>), and
+Bacula (L<http://www.bacula.org>).
Amanda can also backup WinXX machines to tape using samba.
These packages can be used as back ends to BackupPC to backup the
BackupPC server data to tape.
+Various programs and scripts use rsync to provide hardlinked backups.
+See, for example, Mike Rubel's site (L<http://www.mikerubel.org>),
+J. W. Schultz's dirvish (L<http://www.pegasys.ws/dirvish>),
+and John Bowman's rlbackup (L<http://www.math.ualberta.ca/imaging/rlbackup>).
+BackupPC provides many additional features, such as compressed storage,
+hardlinking any matching files (rather than just files with the same name),
+and storing special files without root privileges. But these other scripts
+provide simple and effective solutions and are worthy of consideration.
+
=back
=head2 Road map
=item *
-Adding support for rsync as a transport method, in addition to
-smb and tar. This will give big savings in network traffic for
-linux/unix clients. I haven't decided whether to save the pool file
-rsync checksums (that would double the number of files in the pool, but
-eliminate most server disk reads), or recompute them every time. I expect
-to use native rsync on the client side. On the server, rsync would
-need to understand the compressed file format, the file name mangling
-and the attribute files, so I will either have to add features to rsync
-or emulate rsync on the server side in perl.
+Adding hardlink support to rsync.
+
+=item *
+
+Adding block and file checksum caching to rsync. This will significantly
+increase performance since the server doesn't have to read each file
+(twice) to compute the block and file checksums.
=item *
=item *
-Resuming incomplete completed full backups. Useful if a machine
+Allow editing of config parameters via the CGI interface. Users should
+have permission to edit a subset of the parameters for their clients.
+Additionally, allow an optional self-service capability so that users
+can sign up and setup their own clients with no need for IT support.
+
+=item *
+
+Add backend SQL support for various BackupPC metadata, including
+configuration parameters, client lists, and backup and restore
+information. At installation time the backend data engine will
+be specified (eg: MySQL, ascii text etc).
+
+=item *
+
+Disconnect the notion of a physical host and a backup client.
+Currently there is a one-to-one match between physical hosts
+and backup clients. Instead, the current notion of a host
+should be replaced by a backup client. Each backup client
+corresponds to a physical host. A physical host could have
+several backup clients. This is useful for backing up
+different types of data, or backing up different portions
+of a machine with different frequencies or settings.
+
+(Note: this has already been implemented in 2.0.0.)
+
+=item *
+
+Resuming incomplete full backups. Useful if a machine
(eg: laptop) is disconnected from the network during a backup,
-or if the user manually stops a backup. This would work by
-excluding directories that were already complete.
+or if the user manually stops a backup. This would be supported
+initially for rsync. The partial dump would be kept, and be
+browsable. When the next dump starts, an incremental against
+the partial dump would be done to make sure it was up to date,
+and then the rest of the full dump would be done.
+
+=item *
+
+Replacing smbclient with the perl module FileSys::SmbClient. This
+gives much more direct control of the smb transfer, allowing
+incrementals to depend on any attribute change (eg: existence, mtime,
+file size, uid, gid), and better support for include and exclude.
+Currently smbclient incrementals only depend upon mtime, so
+deleted files or renamed files are not detected. FileSys::SmbClient
+would also allow resuming of incomplete full backups in the
+same manner as rsync will.
+
+=item *
+
+Support --listed-incremental or --incremental for tar,
+so that incrementals will depend upon any attribute change (eg: existence,
+mtime, file size, uid, gid), rather than just mtime. This will allow
+tar to be as capable as FileSys::SmbClient and rsync.
+
+=item *
+
+For rsync (and smb when FileSys::SmbClient is supported, and tar when
+--listed-incremental is supported) support multi-level incrementals.
+In fact, since incrementals will now be more "accurate", you could
+choose to never do full dumps (except the first time), or at a
+minimum do them infrequently: each incremental would depend upon
+the last, giving a continuous chain of differential dumps.
+
+=item *
+
+Add a backup browsing feature that shows backup history by file.
+So rather than a single directory view, it would be a table showing
+the files (down) and the backups (across). The internal hardlinks
+encode which files are identical across backups. You could immediately
+see which files changed on which backups.
=item *
around 15-20%, which isn't spectacular, and likely not worth the
implementation effort. The program xdelta (v1) on SourceForge (see
L<http://sourceforge.net/projects/xdelta>) uses an rsync algorithm for
-doing efficient binary file deltas.
+doing efficient binary file deltas. Rather than using an external
+program, File::RsyncP will eventually get the necessary delta
+generation code from rsync.
=back
=item *
-Perl modules Compress::Zlib, Archive::Zip and Rsync. Try "perldoc
+Perl modules Compress::Zlib, Archive::Zip and File::RsyncP. Try "perldoc
Compress::Zlib" and "perldoc Archive::Zip" to see if you have these
modules. If not, fetch them from L<http://www.cpan.org> and see the
instructions below for how to build and install them.
-The Rsync module is available from L<http://backuppc.sourceforge.net>.
-You'll need to install the Rsync module if you want to use Rsync as
-a transport method.
+The File::RsyncP module is available from L<http://perlrsync.sourceforge.net>
+or CPAN. You'll need to install the File::RsyncP module if you want to use
+Rsync as a transport method.
=item *
1.13.7 at a minimum, with version 1.13.20 or higher recommended. Use
"tar --version" to check your version. Various GNU mirrors have the newest
versions of tar, see for example L<http://www.funet.fi/pub/gnu/alpha/gnu/tar>.
-As of July 2002 the latest versons is 1.13.25.
+As of February 2003 the latest version is 1.13.25.
=item *
version 2.5.5 on each client machine. See L<http://rsync.samba.org>.
Use "rsync --version" to check your version.
-For BackupPC to use Rsync you will also need to install the perl Rsync
-module, which is available from L<http://backuppc.sourceforge.net>.
+For BackupPC to use Rsync you will also need to install the perl
+File::RsyncP module, which is available from
+L<http://perlrsync.sourceforge.net>. Version 0.31 is required.
=item *
=head2 Step 2: Installing the distribution
-First off, to enable compression, you will need to install Compress::Zlib
-from L<http://www.cpan.org>. It is optional, but strongly recommended.
+First off, there are three perl modules you should install.
+These are all optional, but highly recommended:
+
+=over 4
+
+=item Compress::Zlib
+
+To enable compression, you will need to install Compress::Zlib
+from L<http://www.cpan.org>.
+You can run "perldoc Compress::Zlib" to see if this module is installed.
+
+=item Archive::Zip
+
To support restore via Zip archives you will need to install
-Archive::Zip, also from L<http://www.cpan.org>. You can run
-"perldoc Compress::Zlib" to see if this module is installed.
-Finally, you will need the Rsync module. To build and install these
-packages you should run these commands:
+Archive::Zip, also from L<http://www.cpan.org>.
+You can run "perldoc Archive::Zip" to see if this module is installed.
+
+=item File::RsyncP
+
+To use rsync and rsyncd with BackupPC you will need to install File::RsyncP.
+You can run "perldoc File::RsyncP" to see if this module is installed.
+File::RsyncP is available from L<http://perlrsync.sourceforge.net>.
+Version 0.31 is required.
+
+=back
+
+To build and install these packages, fetch the tar.gz file and
+then run these commands:
    tar zxvf Archive-Zip-1.01.tar.gz
    cd Archive-Zip-1.01
    perl Makefile.PL
    make
    make test
    make install
+The same sequence of commands can be used for each module.
+
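+As a convenience, a quick shell loop (this sketch assumes perl is on
+your path) can report which of the three optional modules are present:
+
```shell
# Report which of the three optional perl modules are installed.
for m in Compress::Zlib Archive::Zip File::RsyncP; do
    if perl -M"$m" -e 1 2>/dev/null; then
        echo "$m is installed"
    else
        echo "$m is NOT installed"
    fi
done
```
+
+Any module reported as missing can be built and installed with the
+commands above.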
Now let's move onto BackupPC itself. After fetching
BackupPC-__VERSION__.tar.gz, run these commands as root:
As an environment variable BPC_SMB_PASSWD set before BackupPC starts.
If you start BackupPC manually the BPC_SMB_PASSWD variable must be set
-manually first. For backward compatability for v1.5.0 and prior, the
+manually first. For backward compatibility for v1.5.0 and prior, the
environment variable PASSWD can be used if BPC_SMB_PASSWD is not set.
Warning: on some systems it is possible to see environment variables of
running processes.
=item Host name
-If this host is a static IP address this must the machine's IP host name
-(ie: something that can be looked up using nslookup or DNS). If this is
-a host with a dynamic IP address (ie: DHCP flag is 1) then the host
-name must be the netbios name of the machine. The host name should
-be in lower case.
+This is typically the host name or NetBIOS name of the client machine
+and should be in lower case. The host name can contain spaces (escape
+with a backslash), but this is not recommended.
+
+Please read the section L<How BackupPC Finds Hosts|how backuppc finds hosts>.
+
+In certain cases you might want several distinct clients to refer
+to the same physical machine. For example, you might have a database
+you want to backup, and you want to bracket the backup of the database
+with shutdown/restart using $Conf{DumpPreUserCmd} and $Conf{DumpPostUserCmd}.
+But you also want to backup the rest of the machine while the database
+is still running. In that case you can specify two different clients in
+the hosts file, using any mnemonic name (eg: myhost_mysql and myhost), and
+use $Conf{ClientNameAlias} in myhost_mysql's config.pl to specify the
+real host name of the machine.
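+As a sketch, myhost_mysql's config.pl might look like the following
+(the mysql stop/start commands, paths, and the use of the $sshPath
+substitution here are illustrative assumptions, not prescribed by
+BackupPC):
+
```perl
# pc/myhost_mysql/config.pl -- back up just the database area of myhost
$Conf{ClientNameAlias} = 'myhost';    # real host name of the machine

# Illustrative: quiesce the database around the dump
$Conf{DumpPreUserCmd}  = '$sshPath -q -x -l root myhost /etc/init.d/mysql stop';
$Conf{DumpPostUserCmd} = '$sshPath -q -x -l root myhost /etc/init.d/mysql start';
```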
=item DHCP flag
-Set to 0 if this host has a static IP address (meaning it can be looked
-up by name in the DNS). If the host's IP address is dynamic (eg, it is
-assigned by DHCP) then set this flag to 1.
+Starting with v2.0.0 the way hosts are discovered has changed and now
+in most cases you should specify 0 for the DHCP flag, even if the host
+has a dynamically assigned IP address.
+Please read the section L<How BackupPC Finds Hosts|how backuppc finds hosts>
+to understand whether you need to set the DHCP flag.
+
+You only need to set DHCP to 1 if your client machine doesn't
+respond to the NetBIOS multicast request:
+
+ nmblookup myHost
-The hosts with dhcp = 1 are backed up as follows. If you have
-configured a DHCP address pool ($Conf{DHCPAddressRanges}) then
-BackupPC will check the NetBIOS name of each machine in the
-range. Any hosts that have a valid NetBIOS name (ie: matching
-an entry in the hosts file) will be backed up.
+but does respond to a request directed to its IP address:
+ nmblookup -A W.X.Y.Z
+
+If you do set DHCP to 1 on any client you will need to specify the
+range of DHCP addresses to search in $Conf{DHCPAddressRanges}.
+
+Note also that the $Conf{ClientNameAlias} feature does not work for
+clients with DHCP set to 1.
+
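+The format of $Conf{DHCPAddressRanges} is a list of address ranges;
+for example (the subnet and bounds here are illustrative):
+
```perl
# Probe 192.168.10.20 through 192.168.10.250 with nmblookup -A
$Conf{DHCPAddressRanges} = [
    { ipAddrBase => '192.168.10', first => 20, last => 250 },
];
```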
=item User name
This should be the unix login/email name of the user who "owns" or uses
receive email or be allowed to stop/start/browse/restore backups
for this host. Administrators will still have full permissions.
+=item More users
+
+Additional user names, separated by commas and with no white space,
+can be specified. These users will also have full permission in
+the CGI interface to stop/start/browse/restore backups for this host.
+These users will not be sent email about this host.
+
=back
The first non-comment line of the hosts file is special: it contains
Here's a simple example of a hosts file:
- host dhcp user
- farside 0 craig
- larson 1 gary
-
-The range of DHCP addresses to search is specified in
-$Conf{DHCPAddressRanges}.
+ host dhcp user moreUsers
+ farside 0 craig jim,dave
+ larson 1 gary andy
=head2 Step 5: Client Setup
=item WinXX
The preferred setup for WinXX clients is to set $Conf{XferMethod} to "smb".
+(Actually, for v2.0.0, rsyncd is the better method for WinXX if you are
+prepared to run rsync/cygwin on your WinXX client. More information
+about this will be provided via the FAQ.)
You need to create shares for the data you want to backup.
Open "My Computer", right click on the drive (eg: C), and
=item Linux/Unix
The preferred setup for linux/unix clients is to set $Conf{XferMethod}
-to "tar".
+to "rsync", "rsyncd" or "tar".
-You can use either smb or tar for linux/unix machines. Smb requires that
-the Samba server (smbd) be run to provide the shares. Since the smb
+You can use either rsync, smb, or tar for linux/unix machines. Smb requires
+that the Samba server (smbd) be run to provide the shares. Since the smb
protocol can't represent special files like symbolic links and fifos,
-tar is the recommended transport method for linux/unix machines.
+tar and rsync are the better transport methods for linux/unix machines.
(In fact, by default samba makes symbolic links look like the file or
directory that they point to, so you could get an infinite loop if a
symbolic link points to the current or parent directory. If you really
need to use Samba shares for linux/unix backups you should turn off the
"follow symlinks" samba config setting. See the smb.conf manual page.)
-The rest of this section describes the tar setup.
+The requirements for each Xfer Method are:
+
+=over 4
+
+=item tar
You must have GNU tar on the client machine. Use "tar --version"
or "gtar --version" to verify. The version should be at least
-1.13.7, and 1.13.20 or greater is recommended.
+1.13.7, and 1.13.20 or greater is recommended. Tar is run on
+the client machine via rsh or ssh.
-For linux/unix machines you should no backup "/proc". This directory
+The relevant configuration settings are $Conf{TarClientPath},
+$Conf{TarShareName}, $Conf{TarClientCmd}, $Conf{TarFullArgs},
+$Conf{TarIncrArgs}, and $Conf{TarClientRestoreCmd}.
+
+=item rsync
+
+You should have at least rsync 2.5.5, and the latest version 2.5.6
+is recommended. Rsync is run on the remote client via rsh or ssh.
+
+The relevant configuration settings are $Conf{RsyncClientPath},
+$Conf{RsyncClientCmd}, $Conf{RsyncClientRestoreCmd}, $Conf{RsyncShareName},
+$Conf{RsyncArgs}, $Conf{RsyncRestoreArgs} and $Conf{RsyncLogLevel}.
+
+=item rsyncd
+
+You should have at least rsync 2.5.5, and the latest version 2.5.6
+is recommended. In this case the rsync daemon should be running on
+the client machine and BackupPC connects directly to it.
+
+The relevant configuration settings are $Conf{RsyncdClientPort},
+$Conf{RsyncdUserName}, $Conf{RsyncdPasswd}, $Conf{RsyncdAuthRequired},
+$Conf{RsyncShareName}, $Conf{RsyncArgs}, $Conf{RsyncRestoreArgs}
+and $Conf{RsyncLogLevel}. In the case of rsyncd, $Conf{RsyncShareName}
+is the name of an rsync module (ie: the thing in square brackets in
+rsyncd's conf file -- see rsyncd.conf), not a file system path.
+
+=back
+
+For linux/unix machines you should not backup "/proc". This directory
contains a variety of files that look like regular files but they are
special files that don't need to be backed up (eg: /proc/kcore is a
regular file that contains physical memory). See $Conf{BackupFilesExclude}.
(eg: backing up /dev/hda5 just saves the block-special file information,
not the contents of the disk).
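For example, a minimal exclude list might be (adjust the paths to your
system; this sketch assumes the whole machine is backed up as a single
share):

```perl
# Skip pseudo files under /proc and device special files under /dev
$Conf{BackupFilesExclude} = ['/proc', '/dev'];
```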
+Alternatively, rather than backup all the file systems as a single
+share ("/"), it is easier to restore a single file system if you backup
+each file system separately. To do this you should list each file system
+mount point in $Conf{TarShareName} or $Conf{RsyncShareName}, and add the
+--one-file-system option to $Conf{TarClientCmd} or add --one-file-system
+to $Conf{RsyncArgs}. In this case there
+is no need to exclude /proc explicitly since it looks like a different
+file system.
+
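+A sketch of this setup for rsync (the mount points are illustrative,
+and appending to the default $Conf{RsyncArgs} this way is an assumption
+about your config.pl):
+
```perl
# Back up each file system as its own share
$Conf{RsyncShareName} = ['/', '/var', '/home'];

# Stay within each file system; /proc then looks like another
# file system and is skipped automatically
$Conf{RsyncArgs} = [ @{$Conf{RsyncArgs}}, '--one-file-system' ];
```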
Next you should decide whether to run tar over ssh, rsh or nfs. Ssh is
the preferred method. Rsh is not secure and therefore not recommended.
Nfs will work, but you need to make sure that the BackupPC user (running
files. Ssh is setup so that BackupPC on the server (an otherwise low
privileged user) can ssh as root on the client, without being prompted
for a password. There are two common versions of ssh: v1 and v2. Here
-are some instructions for one way to setup ssh v2:
+are some instructions for one way to set up ssh. (Check which version
+of SSH you have by typing "ssh" or "man ssh".)
+
+=over 4
+
+=item OpenSSH Instructions
=over 4
=item Key generation
-As root on the client machine, use ssh2-keygen to generate a
+As root on the client machine, use ssh-keygen to generate a
+public/private key pair, without a pass-phrase:
+
+ ssh-keygen -t rsa -N ''
+
+This will save the public key in ~/.ssh/id_rsa.pub and the private
+key in ~/.ssh/id_rsa.
+
+=item BackupPC setup
+
+Repeat the above steps for the BackupPC user (__BACKUPPCUSER__) on the server.
+Make a copy of the public key to make it recognizable, eg:
+
+ ssh-keygen -t rsa -N ''
+ cp ~/.ssh/id_rsa.pub ~/.ssh/BackupPC_id_rsa.pub
+
+See the ssh and sshd manual pages for extra configuration information.
+
+=item Key exchange
+
+To allow BackupPC to ssh to the client as root, you need to place
+BackupPC's public key into root's list of authorized keys on the client.
+Append BackupPC's public key (BackupPC_id_rsa.pub) to root's
+~/.ssh/authorized_keys2 file on the client:
+
+ touch ~/.ssh/authorized_keys2
+ cat BackupPC_id_rsa.pub >> ~/.ssh/authorized_keys2
+
+You should edit ~/.ssh/authorized_keys2 and add further specifiers,
+eg: "from", to limit which hosts can login using this key. For example,
+if your BackupPC host is called backuppc.my.com, there should be
+one line in ~/.ssh/authorized_keys2 that looks like:
+
+ from="backuppc.my.com" ssh-rsa [base64 key, eg: ABwBCEAIIALyoqa8....]
+
+=item Fix permissions
+
+You will probably need to make sure that all the files
+in ~/.ssh have no group or other read/write permission:
+
+ chmod -R go-rwx ~/.ssh
+
+You should do the same thing for the BackupPC user on the server.
+
+=item Testing
+
+As the BackupPC user on the server, verify that this command:
+
+ ssh -l root clientHostName whoami
+
+prints
+
+ root
+
+You might be prompted the first time to accept the client's host key and
+you might be prompted for root's password on the client. Make sure that
+this command runs cleanly with no prompts after the first time. You
+might need to check /etc/hosts.equiv on the client. Look at the
+man pages for more information. The "-v" option to ssh is a good way
+to get detailed information about what fails.
+
+=back
+
+=item SSH2 Instructions
+
+=over 4
+
+=item Key generation
+
+As root on the client machine, use ssh-keygen2 to generate a
public/private key pair, without a pass-phrase:
ssh-keygen2 -t rsa -P
+or:
+
+ ssh-keygen -t rsa -N ''
+
+(This command might just be called ssh-keygen on your machine.)
+
This will save the public key in /.ssh2/id_rsa_1024_a.pub and the private
key in /.ssh2/id_rsa_1024_a.
man pages for more information. The "-v" option to ssh2 is a good way
to get detailed information about what fails.
-=item ssh version 1 instructions
+=back
+
+=item SSH version 1 Instructions
The concept is identical and the steps are similar, but the specific
commands and file names are slightly different.
up. You should make sure that $Conf{CgiImageDirURL} is the correct
URL for the image directory.
+=head2 How BackupPC Finds Hosts
+
+Starting with v2.0.0 the way hosts are discovered has changed. In most
+cases you should specify 0 for the DHCP flag in the conf/hosts file,
+even if the host has a dynamically assigned IP address.
+
+BackupPC (starting with v2.0.0) looks up hosts with DHCP = 0 in this manner:
+
+=over 4
+
+=item *
+
+First DNS is used to look up the IP address given the client's name
+using perl's gethostbyname() function. This should succeed for machines
+that have fixed IP addresses that are known via DNS. You can manually
+see whether a given host has a DNS entry according to perl's
+gethostbyname() function with this command:
+
+ perl -e 'print(gethostbyname("myhost") ? "ok\n" : "not found\n");'
+
+=item *
+
+If gethostbyname() fails, BackupPC then attempts a NetBIOS multicast to
+find the host. Provided your client machine is configured properly,
+it should respond to this NetBIOS multicast request. Specifically,
+BackupPC runs a command of this form:
+
+ nmblookup myhost
+
+If this fails you will see output like:
+
+ querying myhost on 10.10.255.255
+ name_query failed to find name myhost
+
+If this succeeds you will see output like:
+
+ querying myhost on 10.10.255.255
+ 10.10.1.73 myhost<00>
+
+Depending on your netmask you might need to specify the -B option to
+nmblookup. For example:
+
+ nmblookup -B 10.10.1.255 myhost
+
+If necessary, experiment to find an nmblookup command that returns the
+IP address of the client given its name. Then update
+$Conf{NmbLookupFindHostCmd} with any necessary options to nmblookup.
+
+=back
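+For example, if the -B form above is what works on your network, the
+setting might look like this (the broadcast address is illustrative;
+$nmbLookupPath and $host are assumed to be BackupPC's command
+substitution variables):
+
```perl
$Conf{NmbLookupFindHostCmd} = '$nmbLookupPath -B 10.10.1.255 $host';
```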
+
+Hosts that have the DHCP flag set to 1 are discovered as follows:
+
+=over 4
+
+=item *
+
+A DHCP address pool ($Conf{DHCPAddressRanges}) needs to be specified.
+BackupPC will check the NetBIOS name of each machine in the range using
+a command of the form:
+
+ nmblookup -A W.X.Y.Z
+
+where W.X.Y.Z is each candidate address from $Conf{DHCPAddressRanges}.
+Any host that has a valid NetBIOS name returned by this command (ie:
+matching an entry in the hosts file) will be backed up. You can
+modify the specific nmblookup command if necessary via $Conf{NmbLookupCmd}.
+
+=item *
+
+You only need to use this DHCP feature if your client machine doesn't
+respond to the NetBIOS multicast request:
+
+ nmblookup myHost
+
+but does respond to a request directed to its IP address:
+
+ nmblookup -A W.X.Y.Z
+
+=back
+
=head2 Other installation topics
=over 4
true for backups in v1.4.0 and above. False for all backups prior
to v1.4.0.
+=item xferMethod
+
+Set to the value of $Conf{XferMethod} when this dump was done.
+
+=item level
+
+The level of this dump. A full dump is level 0; currently incrementals
+are level 1. When multi-level incrementals are supported this will
+reflect each dump's incremental level.
+
=back
=item restores
space if all the files in a directory have the same attributes across
multiple backups, which is common.
+=head2 Optimizations
+
+BackupPC doesn't care about the access time of files in the pool
+since it saves attribute meta-data separately from the files. Since
+BackupPC mostly does reads from disk, maintaining the access time of
+files generates a lot of unnecessary disk writes. So, provided
+BackupPC has a dedicated data disk, you should consider mounting
+BackupPC's data directory with the noatime attribute (see mount(1)).
+
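+For example, an /etc/fstab entry for a dedicated data disk might look
+like this (the device, mount point and file system type are
+illustrative):
+
```
# mount the BackupPC data disk with noatime
/dev/sdb1   /data/BackupPC   ext3   defaults,noatime   1 2
```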
=head2 Limitations
BackupPC isn't perfect (but it is getting better). Here are some
=head1 Copyright
-Copyright (C) 2001-2002 Craig Barratt
+Copyright (C) 2001-2003 Craig Barratt
=head1 Credits
+Xavier Nicollet, with additions from Guillaume Filion, added the
+internationalization (i18n) support to the CGI interface for v2.0.0.
+
Ryan Kucera contributed the directory navigation code and images
for v1.5.0. He also contributed the first skeleton of BackupPC_restore.
Guillaume Filion wrote BackupPC_zipCreate and added the CGI support
for zip download, in addition to some CGI cleanup, for v1.5.0.
-Several people have reported bugs or made useful suggestions; see the
-ChangeLog.
+Many people have reported bugs, made useful suggestions and helped
+with testing; see the ChangeLog and the mail lists.
Your name could appear here in the next version!