=head1 WebPAC - Search engine or data-warehouse manual
It's quite hard to explain concisely what webpac is. It's a mix between a
search engine and a data-warehousing application. Let's look at that in
detail...
WebPAC was originally written to search CDS/ISIS records using C<swish-e>.
Since then, however, it has adopted various other input formats and added
support for alphabetical lists (earlier described as indexes).
As this concept evolved, we decided on the following work-flow:

 source file    CDS/ISIS, MARC, Excel, robots, ...

 1 | apply import normalisation rules (xml)

 intermediate   this data is re-formatted source data, converted
 data           to chunks based on tag names from import_xml

 2 | apply output filter (TT2)

 data           search engine, HTML, OAI, RDBMS

 3 | filter using query in REST format
 4 | apply output filter (TT2)

 client         Web browser, SOAP
=head2 Normalisation and Intermediate data
This is the first step in working with your data.

You create one-to-one mappings from source data records to documents in
webpac. You can split or merge data from input records, apply filters
(perl subroutines), use lookups within the same source file, or do simple
evaluations while producing output.
All of that is controlled by the C<import_xml> configuration file. You will
want to create fine-grained chunks of data (like separate first and last
names), which will later be used to produce output. You can think of the
conversion process as applying the C<import_xml> recipe to every input
record.
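To make the idea concrete, here is a purely hypothetical sketch of such a
recipe; the element names and the ISIS field and subfields (C<700^a>,
C<700^b>) are illustrative assumptions, not the actual C<import_xml>
schema:

 <!-- hypothetical recipe fragment: split one repeatable ISIS field
      into fine-grained chunks named last_name and first_name -->
 <tag name="last_name">
   <isis>700^a</isis>
 </tag>
 <tag name="first_name">
   <isis>700^b</isis>
 </tag>

Each chunk produced this way can later be addressed by its tag name from
an output filter.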
Each tag within the recipe creates one new record for as long as there are
fields in the input format (which can be repeatable) that satisfy at least
one of its mappings.
Users of older webpac should note that this file no longer contains any
formatting or specification of output type, and that the granularity of
each tag should be increased accordingly.
=head2 Output filters

Now that we have a normalised record, we can create some output. You can
create HTML from it, produce data files for a search engine, or insert the
records into an RDBMS.
The twist is that the application of output filters can be recursive,
allowing you to query data generated in the previous step. This enables
you to represent lists or trees from source data that has structure. It
also requires producing structured data in step 2, which can then be
filtered and queried in steps 3 and 4 to produce the final output.
Note that in step 4 you can also query the intermediate data, not just the
data produced in step 2.
Output filters use Template Toolkit 2, so you have the full power of a
simple procedural language (loops, conditions) and handy built-in
functions for transforming data.
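As an illustration, an output filter that renders the fine-grained chunks
as an HTML list might look like the following Template Toolkit 2 sketch;
the chunk names (C<personal_name>, C<last_name>, C<first_name>) are
assumptions for the example, not names webpac defines:

 [%# iterate over repeatable personal_name chunks of one record %]
 <ul>
 [% FOREACH name IN record.personal_name %]
   <li>[% name.last_name %], [% name.first_name %]</li>
 [% END %]
 </ul>

The same mechanism, with a different template, produces data files for the
search engine instead of HTML.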
=head2 REST Query Format
A design decision was to use a REST query format. This has the benefit of
simplicity and the ability to create a unique URL for every piece of
content within webpac. A simple query looks like this:

 http://webpac/search/html/personal_name/Joe%20Doe/AND/year/LT%201995
This REST query can be broken down as follows:

=over 4

=item http://webpac/

Hostname on which the service is running. Not required if doing lookups.

=item search

Name of the output filtering method. This will specify the search engine.

=item html

Template that will be used to produce output.

=item personal_name/Joe%20Doe...

URL-encoded query string. It is specific to the filtering method used.

=back
You can easily produce an RSS feed for the same query using the following
REST URL:

 http://webpac/search/rss/personal_name/Joe%20Doe/AND/year/LT%201995

Yes, it really is that simple. As it should be.
=head1 Technical stuff
The following text covers harder-core technical details of how webpac is
implemented.
=head2 Search engine

We use the Hyper Estraier search engine through the C<pgestraier>
PostgreSQL bindings. It should be relatively easy to plug in another
engine if the need arises.
=head2 Data Warehouse
In a nutshell, webpac has evolved to support hybrid data as input, which
means it has become a kind of data-warehouse application. It doesn't
support roll-up and drill-down operations directly, but they can be
emulated in the intermediate data step or the output step.
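As a sketch of such emulation, a roll-up by year could be done in an
output filter. This is an illustrative Template Toolkit 2 fragment; the
C<records> list and the C<year> chunk are assumptions for the example:

 [%# hypothetical roll-up: count records per year in the output step %]
 [% SET totals = {} %]
 [% FOREACH r IN records %]
   [% SET y = r.year %]
   [% SET totals.$y = totals.$y + 1 %]
 [% END %]
 [% FOREACH y IN totals.keys.sort %]
   [% y %]: [% totals.$y %]
 [% END %]

Drill-down is then just the reverse: a REST query that narrows the same
data back to a single year.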