This simple project automates and simplifies the online recovery process of a failed pgpool backend node in master/slave mode.
This version is a work in progress using CentOS 7 and upstream packages. It doesn't require the psmisc package, so a CentOS 7 minimal installation is sufficient for the scripts to run, since systemd is used to manage postgresql-9.6 installed in /var/lib/pgsql/9.6/data/
Hardware configuration is 3 nodes:

    10.200.1.60 edozvola-pgpool
    10.200.1.61 edozvola-db-01
    10.200.1.62 edozvola-db-02
If installing on top of an existing streaming replication setup, you will need to tell pgpool where the current master is with:

    echo 0 > /tmp/postgres_master
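The number written to /tmp/postgres_master is the pgpool backend id of the current master (assuming backend ids follow the order of the node list above, 0 = edozvola-db-01, 1 = edozvola-db-02). So if the second database node is currently the master:

```shell
# /tmp/postgres_master holds the backend id of the current master;
# write 1 if the second backend (edozvola-db-02) is the master.
echo 1 > /tmp/postgres_master
```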
You can also force a re-check of all nodes by removing the status file and restarting pgpool:

    rm /var/log/pgpool_status
    systemctl restart pgpool
The deployment script ./t/0-init-cluster.sh assumes that the machine from which it's run is 10.200.1.1, which is added
in pg_hba.conf as authorized to deploy the cluster. You can run it with:

    ./t/0-init-cluster.sh
This will destroy all databases on all nodes, archived logs, etc., so don't do this if you need your old data later.
You can also edit all local files and push them to all nodes using:
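The sync command itself is not included in this chunk of the README; as a rough sketch (the file list, target directory and use of rsync are assumptions), pushing the edited files could look like the helper below, which only prints the commands so you can review them before executing:

```shell
# Hypothetical helper: print the rsync commands that would push local
# config files and scripts to every node (review before executing).
NODES="10.200.1.60 10.200.1.61 10.200.1.62"
FILES="pgpool.conf failover.sh online-recovery.sh streaming-replication.sh"

push_cmds() {
    for node in $NODES; do
        echo "rsync -av $FILES root@$node:/etc/pgpool-II/"
    done
}

push_cmds          # prints one rsync invocation per node
# push_cmds | sh   # actually run them
```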
To restart all services (pgpool and postgresql) do:
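The restart command is likewise not shown in this chunk; a minimal sketch (node roles and the use of root ssh access are assumptions based on the layout above) that restarts pgpool on the pooler node and PostgreSQL on the database nodes, again printing the commands for review first:

```shell
# Assumed layout: pgpool runs on 10.200.1.60, postgresql-9.6 on the
# two db nodes. restart_cmds prints the commands instead of running them.
restart_cmds() {
    echo "ssh root@10.200.1.60 systemctl restart pgpool"
    for node in 10.200.1.61 10.200.1.62; do
        echo "ssh root@$node systemctl restart postgresql-9.6"
    done
}

restart_cmds         # review the generated commands
# restart_cmds | sh  # actually run them
```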
There are two requirements for these scripts to work.
* The first one is [pgpool-II](http://www.pgpool.net) (v3.6.5), available for [CentOS 7 from upstream](http://www.pgpool.net/yum/rpms/3.6/redhat/rhel-7-x86_64/pgpool-II-pg96-3.6.5-1pgdg.rhel7.x86_64.rpm).
* The second one is obviously the Postgres server (v9.6), also for [CentOS 7 from upstream](https://yum.postgresql.org/9.6/redhat/rhel-7-x86_64/pgdg-redhat96-9.6-3.noarch.rpm)
There are several tutorials about setting up pgpool-II and Postgres servers with [Streaming Replication](http://wiki.postgresql.org/wiki/Streaming_Replication), and this readme is far from being a howto for configuring both of them.
Installation and configuration
==============================

What about the given scripts and config files?
**pgpool.conf** : This is a sample config file for pgpool that activates master/slave mode, load balancing, backend health checks, failover, ...
**postgresql.conf.master** : A config file for the postgres master node.

**postgresql.conf.slave** : A config file for the postgres slave node.

**recovery.conf** : A config file used by the postgres slave for the streaming replication process.
**failover.sh** : This script is executed automatically when a pgpool backend node (postgres node) goes down. It switches the standby node (slave) to master (new master).
**online-recovery.sh** : This is the bash script which you'll execute manually in order to:
* Reboot, sync and reattach the slave node to pgpool if it fails.
* Set up a new master and a new slave, then sync and reattach them to pgpool if the current master fails.

This script remotely invokes streaming-replication.sh (on the new slave node) to start the [online recovery process](http://www.postgresql.org/docs/8.1/static/backup-online.html) on the standby node.

PS : When a node (master or slave) fails, pgpool keeps running and the databases remain available; pgpool simply detaches the failed node for data consistency reasons.
**streaming-replication.sh** : This script can be executed manually to synchronize a slave node with a given master node (the master name/ip must be passed as an argument to streaming-replication.sh). Otherwise, this same script is triggered by online-recovery.sh via ssh during the failback process.
The installation steps are simple. You just need to copy the provided bash scripts and config files as follows.
**In the pgpool node** :
* Copy pgpool.conf to /etc/pgpool-II/. This is optional; alternatively, edit the default pgpool.conf so that it looks like the config file we provide.
* Copy failover.sh into /etc/pgpool-II/, and online-recovery.sh to the same directory or any other easily accessible one.
**In the master and slave postgres nodes** :
* Copy the streaming-replication.sh script into /var/lib/pgsql/ (the postgres homedir).
* Copy the postgresql.conf.master and postgresql.conf.slave files to /var/lib/pgsql/9.6/data/.
* Finally, copy recovery.conf into /var/lib/pgsql/9.6/data/.
PS : All similar old files should be backed up first so you can roll back if needed (e.g. cp -p /etc/pgpool-II/pgpool.conf /etc/pgpool-II/pgpool.conf.backup).
Make sure that:
- All scripts are executable and owned by the proper users.
- The /var/lib/pgsql/9.6/archive directory is created (used to archive WAL files). This directory must be owned by the postgres user!
- Do not forget to edit pg_hba.conf on each postgres server to allow access to the cluster's nodes.
That's not all! Only the configuration steps remain and we'll be done :)
To do so, just follow these steps :
1- First of all, make sure you have created a postgres user on the pgpool node with SSH access to all Postgres nodes. All cluster nodes have to be able to ssh to each other. You can put a "config" file with the "StrictHostKeyChecking=no" option under the .ssh/ directory of the postgres user. This is a best practice (especially when automating a bunch of operations) that allows postgres to ssh to a remote machine for the first time without being prompted to validate the host key with a Yes/No question.
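For example, a minimal ~/.ssh/config for the postgres user on each node might look like this (the host pattern and key path are assumptions based on the addresses above):

```
Host 10.200.1.*
    User postgres
    IdentityFile ~/.ssh/id_rsa
    StrictHostKeyChecking no
```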
2- On the pgpool node, set up the pgpool.conf file, in particular these parameters :
    # Controls various backend behavior, for instance master and slave(s).
    backend_hostname0 = 'master.foo.bar'
    backend_data_directory0 = '/var/lib/pgsql/9.6/data/'
    backend_flag0 = 'ALLOW_TO_FAILOVER'
    backend_hostname1 = 'slave.foo.bar'
    backend_data_directory1 = '/var/lib/pgsql/9.6/data/'
    backend_flag1 = 'ALLOW_TO_FAILOVER'

    num_init_children = 32

    # Master/slave mode and load balancing (replication mode must be off)
    load_balance_mode = on
    master_slave_mode = on
    master_slave_sub_mode = 'stream'

    # Health check (must be set up to detect postgres server status up/down)
    health_check_period = 30
    health_check_user = 'postgres'
    health_check_password = 'postgrespass'

    # - Special commands -
    follow_master_command = 'echo %M > /tmp/postgres_master'
    failover_command = '/path/to/failover.sh %d %H %P /tmp/trigger_file'
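The actual failover.sh ships with this project; as a sketch of what it receives, pgpool expands %d to the id of the failed node, %H to the hostname of the new master and %P to the id of the old primary. The core decision can be modeled like this (a simplified illustration, not the shipped script; the command is printed rather than executed):

```shell
# Simplified model of failover.sh's core decision: promote the standby
# only when the node that failed was the primary.
decide() {
    failed_node_id=$1
    new_master=$2
    old_primary_id=$3
    trigger_file=$4
    if [ "$failed_node_id" = "$old_primary_id" ]; then
        # touching the trigger file on the new master promotes it
        echo "ssh postgres@$new_master touch $trigger_file"
    else
        echo "standby failed, no promotion needed"
    fi
}

decide 0 slave.foo.bar 0 /tmp/trigger_file
```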
3- In the failover.sh script, specify the proper ssh private key for the postgres user to access the new master node via SSH.

    ssh -i /var/lib/pgsql/.ssh/id_rsa postgres@$new_master "touch $trigger_file"
4- Same for online-recovery.sh: you just have to change the postgres private key if needed; the rest of the parameters are set automatically when the script runs. Magic, eh! :)
5- Change the primary_conninfo access parameters (pointing to the master) in the recovery.conf file on the slave side :

    primary_conninfo = 'host=master-or-slave.foo.bar port=5432 user=postgres password=nopass'
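For reference, a complete recovery.conf for 9.6 streaming replication typically also sets standby_mode and the trigger file (the trigger file path matches the failover_command above; the restore_command line is an assumption based on the archive directory mentioned earlier):

```
standby_mode = 'on'
primary_conninfo = 'host=master-or-slave.foo.bar port=5432 user=postgres password=nopass'
trigger_file = '/tmp/trigger_file'
restore_command = 'cp /var/lib/pgsql/9.6/archive/%f %p'
```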
6- Rename recovery.conf to recovery.done on the master side.
7- Set up the postgres master node (after backing up postgresql.conf) :

    cp -p postgresql.conf.master postgresql.conf
    systemctl restart postgresql-9.6
8- Set up the postgres slave node (after backing up postgresql.conf) :

    cp -p postgresql.conf.slave postgresql.conf
9- Start the first slave synchronization with the master by executing streaming-replication.sh as the postgres user :

    ./streaming-replication.sh master.foo.bar
10- Finally, restart pgpool :

    systemctl restart pgpool
At this stage the slave node is connected to the master and both of them are connected to pgpool. If the master goes down, pgpool detaches it from the pool and performs the failover process (the slave becomes the master) automatically.
Test the PCP interface (as root) :
    # retrieve node 0 information
    pcp_node_info -h localhost -p 9898 -U postgres -n 0
    # detach node 0 from pgpool
    pcp_detach_node -h localhost -p 9898 -U postgres -n 0
    # reattach node 0 to pgpool
    pcp_attach_node -h localhost -p 9898 -U postgres -n 0
After starting pgpool, try to test these two scenarios :
**1. When a slave goes down** :

Open the pgpool log: 'journalctl -u pgpool -f'

Stop the slave node: 'sudo systemctl stop postgresql-9.6'

After health_check_period is exceeded, you should see this log message :

    [INFO] Slave node is down. Failover not triggered!

Now start the slave failback process (as root) :

    # ./online-recovery.sh
**2. When the master goes down** :

Again, open the pgpool log file.

Stop the master node: 'sudo systemctl stop postgresql-9.6'

After health_check_period is exceeded, you should see this log message :

    [INFO] Master node is down. Performing failover...

Start the failback process (as root) to switch the master (new slave) and slave (new master) roles :

    # ./online-recovery.sh