This document describes the ASA extensions for BackupPC 3.2.0
Written by Dobrica Pavlinusic 2011-01-27

This is the second iteration of adding search over arbitrary filename
substrings and archiving to CD/DVD media, with tracking of copies and creation
of an additional md5sum file on each medium for easy verification of burned
media.

ASA maintains its data in PostgreSQL and KinoSearch (for fast part-of-filename
matching). Since the full-text index is single-writer, requests to update it
have to be serialized.

The implementation is based on BackupPC's archive host feature, using the
_search_archive.pl configuration file located at
/etc/BackupPC/pc/_search_archive.pl

The archive host gives us serialization and hooks around each run, but it
lacked incremental tar creation, which is essential because we want to burn an
ever-growing archive to CD/DVD media. This is implemented using the new global
configuration directive TarCreateIncremental.

Integrating through BackupPC hooks and an archive host also provides the
following advantages:
 - the web interface for the archive host contains our log messages
 - all updates are invoked automatically at the end of each run, so the
   system is always up to date

BackupPC can dump multiple machines in parallel, so it can invoke our
_search_archive host and its index update while an update for a different
machine is still in progress. In that case the archive host rejects the
request, but the next invocation for the same host fixes the problem
automatically.

To be sure that all pending archives are indexed, you can also run a cron job
which invokes _search_archive on all pending increments (see the example
crontab entry below):

  /BackupPC_ASA_ArchiveStart _search_archive backuppc

You can also force archival of particular pending backups from a single host
by appending hostname(s), or hostname:num to archive an individual increment,
as shown in the second example below.

Alternatively, you can use the _search_archive web interface to invoke
increment creation and indexing.
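For the cron-based catch-up described above, a crontab entry might look like
this (running as the backuppc user, the hourly schedule, and the installation
path under /srv/BackupPC are assumptions based on this setup; adjust them to
your installation):

  # pick up and index any pending increments once an hour
  15 * * * *  /srv/BackupPC/bin/BackupPC_ASA_ArchiveStart _search_archive backuppc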
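To force archival of specific hosts from the command line, append them to the
same invocation; for example (the hostnames alpha and beta and the increment
number 3 are placeholders):

  # archive all pending increments of alpha, and only increment 3 of beta
  /BackupPC_ASA_ArchiveStart _search_archive backuppc alpha beta:3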
There are two global options which have to be set for all hosts:

#
# /etc/BackupPC/config.pl
#

# invoke archiving after each dump - ASA extension (DumpPostCmd fires too early)
$Conf{DumpPostFinishCmd} = '/srv/BackupPC/bin/BackupPC_ASA_ArchiveStart _search_archive backuppc $host';

# dump only incremental changes in tars, not the whole content - ASA extension
$Conf{TarCreateIncremental} = 1;

You can manually trigger all pending backups using:

  /BackupPC_ASA_ArchiveStart _search_archive backuppc

This will start the archive host _search_archive, which will run its
configuration:

#
# /etc/BackupPC/pc/_search_archive.pl
#

$Conf{ArchiveDest} = '/data/BackupPC/_search_archive';
$Conf{ArchiveComp} = 'gzip';
$Conf{CompressLevel} = 9;

# archive media size (in bytes)
#$Conf{ArchiveMediaSize} = 4200 * 1024 * 1024; # DVD
$Conf{ArchiveMediaSize} = 630 * 1024 * 1024; # CD
#$Conf{ArchiveMediaSize} = 1440 * 1024; # floppy
#$Conf{ArchiveMediaSize} = 42 * 1024 * 1024; # FIXME

# A size in megabytes to split the archive into parts at.
# This is useful where the file size of the archive might exceed the
# capacity of the removable media. For example, specify 700 if you are using CDs.
#$Conf{ArchiveSplit} = 650;
$Conf{ArchiveSplit} = 100; # FIXME small testing chunks

# The amount of parity data to create for the archive using the par2 utility.
# In some cases, corrupted archives can be recovered from parity data.
$Conf{ArchivePar} = 30;
# par2cmdline 0.4 with Intel Threading Building Blocks 2.2, from
# http://chuchusoft.com/par2_tbb/download.html
$Conf{ParPath} = '/srv/par2cmdline-0.4-tbb-20100203-lin64/par2';

# use parallel gzip (speedup on multi-core machines)
$Conf{GzipPath} = '/usr/bin/pigz';
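Once a medium has been burned, the md5sum file and the par2 parity data on it
can be used to check it, as mentioned in the introduction. A verification
session might look like this (the mount point and file names are illustrative;
the actual names depend on the generated archive parts):

  cd /media/cdrom
  md5sum -c archive.md5              # compare burned files against checksums
  par2 verify archive.tar.gz.par2    # check archive parts against parity data
  par2 repair archive.tar.gz.par2    # attempt recovery if verification fails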