This simple project aims to automate and simplify the online recovery of a failed pgpool backend node in master/slave mode.
This version is a work in progress using CentOS 7 and upstream packages. It doesn't require the psmisc package, so a minimal CentOS 7 installation is sufficient for the scripts to run, since systemd is used to manage postgresql-9.6 installed in /var/lib/pgsql/9.6/data/.

Hardware configuration is 3 nodes:

10.200.1.60 edozvola-pgpool
10.200.1.61 edozvola-db-01
10.200.1.62 edozvola-db-02

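Assuming name resolution is handled via /etc/hosts rather than DNS (an assumption; either works), each node would carry entries like:

```
# /etc/hosts on each node (sketch)
10.200.1.60 edozvola-pgpool
10.200.1.61 edozvola-db-01
10.200.1.62 edozvola-db-02
```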
Deployment script ./t/0-init-cluster.sh assumes that the machine from which it's run is 10.200.1.1, which is added
in pg_hba.conf as authorized to deploy the cluster. You can run it with:

make init

This will destroy all databases on all nodes, archive logs, etc., so don't do this if you need your old data later.
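For reference, the pg_hba.conf entry authorizing the deploy machine could look like this (a sketch; the `trust` auth method is an assumption, adjust it to your own policy):

```
# allow the deployment host to connect to all databases as any user
host    all    all    10.200.1.1/32    trust
```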

On the other hand, this will also set up the whole cluster, and you can examine its status using:

make

If you edited local files, push changes to all nodes using:

make push

To restart all services (pgpool and postgresql) do:

make restart

If you want to see the systemd status of pgpool and replication, just type:

make status

If installing on an existing streaming replication setup, you will need to tell pgpool where the current master is with:

echo 0 > /tmp/postgres_master

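The file holds the backend node id of the current master as used by these scripts. With the layout above, id 0 would correspond to the first backend, presumably edozvola-db-01 (an assumption; check backend_hostname0 in your pgpool.conf):

```shell
# Record backend node id 0 as the current master; the failover scripts
# read this file to know which backend currently holds that role.
# Node id 0 is assumed to be backend_hostname0 in pgpool.conf.
echo 0 > /tmp/postgres_master
cat /tmp/postgres_master
```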
You can also force a re-check of the nodes by removing the status file and restarting pgpool:

rm /var/log/pgpool_status
systemctl restart pgpool

Requirements
============
There are two requirements for these scripts to work.
* The first one is [pgpool-II](http://www.pgpool.net) (v3.6.5) available for [Centos7 from upstream](http://www.pgpool.net/yum/rpms/3.6/redhat/rhel-7-x86_64/pgpool-II-pg96-3.6.5-1pgdg.rhel7.x86_64.rpm). We assume that pgpool-II is installed, set up in master/slave mode with load balancing, and manageable via the PCP interface.
* The second one is obviously the Postgres server (v9.6), also available for [Centos7 from upstream](https://yum.postgresql.org/9.6/redhat/rhel-7-x86_64/pgdg-redhat96-9.6-3.noarch.rpm).
There are several tutorials about setting up pgpool-II and postgres servers with [Streaming Replication](http://wiki.postgresql.org/wiki/Streaming_Replication), and this readme is far from being a howto for configuring both of them.
Installation and configuration
==============================
**recovery.conf** : A config file used by the postgres slave for the streaming replication process.
**failover.sh** : This script will be executed automatically when a pgpool backend node (postgres node) goes down. It'll promote the standby node (slave) to master (new master).
**online-recovery.sh** : This is the bash script which you'll execute manually in order to :
* Reboot, sync and reattach slave node to pgpool if it fails.
The installation steps are simple. You just need to copy the provided bash scripts and config files as follows.
**In pgpool node** :
* Copy pgpool.conf to /etc/pgpool-II/. This step is optional; if you skip it, you have to edit the default pgpool.conf file so that it matches the config file we provide.
* Copy failover.sh into /etc/pgpool-II/ and online-recovery.sh to the same directory or another directory that is easily accessible.
**In the master and slave postgres nodes** :
* Copy the streaming-replication.sh script into /var/lib/pgsql/ (postgres homedir).
* Copy the postgresql.conf.master and postgresql.conf.slave files to /var/lib/pgsql/9.6/data/.
* Finally copy recovery.conf into /var/lib/pgsql/9.6/data/.
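As a sketch, the recovery.conf for a 9.6 standby might contain something like the following (the host and user values are assumptions for this cluster; note that trigger_file matches the path passed to failover.sh in the failover_command setting):

```
standby_mode = 'on'
primary_conninfo = 'host=edozvola-db-01 port=5432 user=postgres'
trigger_file = '/tmp/trigger_file'
```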
PS : All similar old files must be backed up so you can roll back in case of problems (e.g. cp -p /etc/pgpool-II/pgpool.conf /etc/pgpool-II/pgpool.conf.backup).
Make sure that :
- All scripts are executable and owned by the proper users.
- The /var/lib/pgsql/9.6/archive directory is created (used to archive WAL files). This folder must be owned by the postgres user!
- Do not forget to edit pg_hba.conf on each postgres server to allow access to the cluster's nodes.
Almost there! Only the configuration steps remain and we'll be done :)
health_check_period = 30
health_check_user = 'postgres'
health_check_password = 'postgrespass'
# - Special commands -
follow_master_command = 'echo %M > /tmp/postgres_master'
# Failover command
failover_command = '/path/to/failover.sh %d %H %P /tmp/trigger_file'
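For clarity, here is a sketch of how pgpool expands the placeholders before invoking the script (meanings per the pgpool-II documentation: %d = failed node id, %H = new master host, %P = old primary node id; the concrete values below are made up, and the script path assumes it was copied to /etc/pgpool-II/ as described above):

```shell
# Simulate the command pgpool would run after backend 1 fails and
# edozvola-db-01 becomes the new master (illustrative values only).
failed_node_id=1
new_master_host=edozvola-db-01
old_primary_id=0
echo "/etc/pgpool-II/failover.sh $failed_node_id $new_master_host $old_primary_id /tmp/trigger_file"
```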
After starting pgpool, try to test these two scenarios:
**1. When a slave goes down** :
Open the pgpool log: 'journalctl -u pgpool -f'.
Stop the slave node: 'sudo systemctl stop postgresql-9.6'.
After exceeding health_check_period, you should see this log message :
# ./online-recovery.sh
**2. When a master goes down** :
Again, open the pgpool log file.