X-Git-Url: http://git.rot13.org/?p=pgpool-online-recovery;a=blobdiff_plain;f=README.md;h=435fac50e705e1ffe46c0975464e411732e418b0;hp=e0d24766191b6866949dae5294b1df0bfcb3f90a;hb=f193b9c6a67c8cb4d4f56f562d33b1cfb2422fef;hpb=f34c3d21ad92aa00147a25b72ec483e3ef9be9f6

diff --git a/README.md b/README.md
index e0d2476..435fac5 100644
--- a/README.md
+++ b/README.md
@@ -52,7 +52,8 @@ The installation steps are simple. You just need to copy provided bash scripts a
 PS : All similar old files must be backed up to be able to rollback in case of risk (e.g: cp -p /etc/pgpool2/pgpool.conf /etc/pgpool2/pgpool.conf.backup).
 
 Make sure that :
 - All scripts are executable and owned by the proper users.
-- /var/lib/postgresql/9.1/archive directory is created (used to archive WAL files). This directory must be owned by postgres user !
+- /var/lib/postgresql/9.1/archive directory is created (used to archive WAL files). This folder must be owned by postgres user !
+- Do not forget to edit pg_hba.conf on each postgres server to allow access from the cluster's nodes.
 
 Not enough ! It remains only the configuration steps and we'll be done :)
 
@@ -89,6 +90,7 @@ To do, just follow these steps :
 	health_check_password = 'postgrespass'
 	# Failover command
 	failover_command = '/path/to/failover.sh %d %H %P /tmp/trigger_file'
+
 3- In failover.sh script, specify the proper ssh private key to postgres user to access new master node via SSH.
 
 	ssh -i /var/lib/postgresql/.ssh/id_rsa postgres@$new_master "touch $trigger_file"
 
@@ -124,15 +126,42 @@ At his stage slave node is connected to master and both of them are connected to
 Tests
 =====
 
-Test PCP interface:
-	pcp_node_info
-	pcp_detach_node
-	pcp_attach_node
+Test PCP interface (as root) :
+
+	# retrieves the node information
+	pcp_node_info 10 localhost 9898 postgres "postgres-pass" "postgres-id"
+	# detaches a node from pgpool
+	pcp_detach_node 10 localhost 9898 postgres "postgres-pass" "postgres-id"
+	# attaches a node to pgpool
+	pcp_attach_node 10 localhost 9898 postgres "postgres-pass" "postgres-id"
+
+After starting pgpool, try to test these two scenarios :
+
+**1. When a slave falls down** :
+
+Open the pgpool log file with 'tail -f /var/log/pgpool2/pgpool.log'.
+
+Stop the slave node with '/etc/init.d/postgres stop'.
+
+After exceeding health_check_period, you should see this log message :
+
+	[INFO] Slave node is down. Failover not triggred !
+
+Now, start the slave failback process (as root) :
+
+	# ./online-recovery.sh
+
+**2. When the master falls down** :
+
+As before, open the pgpool log file.
+
+Stop the master node with '/etc/init.d/postgres stop'.
+
+After exceeding health_check_period, you should see this log message :
-After starting the postgres master node you should see the following log message in /var/log/postgresql/postgresql-9.1-main.log :
+	[INFO] Master node is down. Performing failover...
-In the postgres master log file you should see :
+Start the failback process (as root) to switch the master (new slave) and slave (new master) roles :
-We assume that pgpool log file is /var/log/pgpool2/pgpool.log. After setting up it's convenient config file and restarting it out shoud see this message in log file :
-
+	# ./online-recovery.sh
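
The new pg_hba.conf note in the first hunk does not show what the entries look like. As a rough, hypothetical illustration only (the 192.168.1.0/24 subnet and md5 auth method are placeholders, not taken from this repository), the idea is to allow normal and replication connections from the other cluster nodes and from the pgpool host :

	# hypothetical pg_hba.conf entries ; adjust addresses and auth method to your own cluster
	host    all             all             192.168.1.0/24          md5
	host    replication     postgres        192.168.1.0/24          md5

Reload postgres (e.g. '/etc/init.d/postgresql reload') on each node after editing the file.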
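For context, the failover_command and ssh lines visible in the second hunk suggest a failover.sh along the following lines. This is only a sketch, assuming pgpool's %d (failed node id), %H (new master host) and %P (old primary node id) placeholders plus the trigger file path arrive as positional arguments ; the real script shipped in this repository may differ :

	#!/bin/bash
	# sketch matching failover_command = '/path/to/failover.sh %d %H %P /tmp/trigger_file'
	failed_node_id=$1   # %d : id of the node that went down
	new_master=$2       # %H : hostname of the new master candidate
	old_primary_id=$3   # %P : id of the old primary node
	trigger_file=$4     # trigger file to create on the new master
	
	# A dead slave needs no promotion ; only promote when the primary itself failed
	# (this mirrors the "[INFO] Slave node is down. Failover not triggred !" message above).
	if [ "$failed_node_id" = "$old_primary_id" ]; then
	    ssh -i /var/lib/postgresql/.ssh/id_rsa postgres@$new_master "touch $trigger_file"
	fi
	exit 0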