Backup and recovery
PGD is designed to be a distributed, highly available system. If one or more nodes of a cluster are lost, the best way to replace them is to clone new nodes directly from the remaining nodes.
The role of backup and recovery in PGD is to provide for disaster recovery (DR), such as in the following situations:
- Loss of all nodes in the cluster
- Significant, uncorrectable data corruption across multiple nodes as a result of application error or security breach
Backup
pg_dump
You can use pg_dump, sometimes referred to as logical backup, with PGD as you would normally.
pg_dump dumps both local and global sequences as if they were local sequences. This behavior is intentional: it allows a PGD schema to be dumped and ported to other PostgreSQL databases. It also means that sequence-kind metadata is lost at the time of dump, so a restore effectively resets all sequence kinds to the value of `bdr.default_sequence_kind` at the time of restore.
To create a post-restore script that resets the precise sequence kind for each sequence, you might use an SQL script like the following sketch, which assumes the `bdr.sequences` catalog view and the `bdr.alter_sequence_set_kind()` function provided by the BDR extension:
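```sql
-- Generates one statement per non-local sequence; save the output and
-- run it after the restore to reapply each sequence's original kind
SELECT format(
         'SELECT bdr.alter_sequence_set_kind(%L::regclass, %L);',
         nspname || '.' || relname,
         seqkind)
FROM bdr.sequences
WHERE seqkind != 'local';
```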
If pg_dump is run using `bdr.crdt_raw_value = on`, then you can reload the dump only with `bdr.crdt_raw_value = on`.
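For example, because pg_dump and psql connect through libpq, one way to set this is with `PGOPTIONS` (a sketch; database and file names are illustrative):

```shell
# Dump with raw CRDT values
PGOPTIONS='-c bdr.crdt_raw_value=on' pg_dump -d bdrdb -f dump.sql

# The resulting dump can then be reloaded only with the same setting
PGOPTIONS='-c bdr.crdt_raw_value=on' psql -d targetdb -f dump.sql
```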
Technical Support recommends the use of physical backup techniques for backup and recovery of PGD.
Physical backup
You can take physical backups of a node in an EDB Postgres Distributed cluster using standard PostgreSQL software, such as Barman.
You can perform a physical backup of a PGD node using the same procedure that applies to any PostgreSQL node. A PGD node is just a PostgreSQL node running the BDR extension.
Consider these specific points when applying PostgreSQL backup techniques to PGD:
- PGD operates at the level of a single database, while a physical backup includes all the databases in the instance. Plan your databases to allow them to be easily backed up and restored.
- Backups make a copy of just one node. In the simplest case, every node has a copy of all data, so you need to back up only one node to capture all data. However, the goal of PGD isn't met if the site containing that single copy goes down, so the minimum is at least one node backup per site (with many copies, and so on). Each node might also have unreplicated local data, or the definition of replication sets might be complex so that all nodes don't subscribe to all replication sets. In these cases, backup planning must also include plans for how to back up any unreplicated local data and a backup of at least one node that subscribes to each replication set.
Eventual consistency
The nodes in an EDB Postgres Distributed cluster are eventually consistent but not entirely consistent. A physical backup of a given node provides point-in-time recovery capabilities limited to the states actually assumed by that node.

The following example shows how two nodes in the same EDB Postgres Distributed cluster might not (and usually don't) go through the same sequence of states.
Consider a cluster with two nodes `N1` and `N2` that's initially in state `S`. If transaction `W1` is applied to node `N1`, and at the same time a non-conflicting transaction `W2` is applied to node `N2`, then node `N1` goes through the following states: `S`, then `S + W1`, then `S + W1 + W2`.

Node `N2` goes through the following states: `S`, then `S + W2`, then `S + W1 + W2`.

That is, node `N1` never assumes state `S + W2`, and node `N2` likewise never assumes state `S + W1`. However, both nodes end up in the same state `S + W1 + W2`. Considering this situation might affect how you decide on your backup strategy.
Point-in-time recovery (PITR)
The previous example showed that the changes are also inconsistent in time. `W1` and `W2` both occur at time `T1`, but the change `W1` isn't applied to `N2` until `T2`.
PostgreSQL PITR is designed around the assumption of changes arriving
from a single master in COMMIT order. Thus, PITR is possible by
scanning through changes until one particular point in time (PIT) is reached.
With this scheme, you can restore one node to a single point in time from its viewpoint, for example, `T1`. However, that state doesn't include other data from other nodes that committed near that time but hadn't yet arrived on the node. As a result, the recovery might be considered to be partially inconsistent, or at least consistent for only one replication origin.
To request this, use the standard syntax:
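```
# The timestamp shown is illustrative
recovery_target_time = '2024-05-01 12:00:00+00'
```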
PGD allows for changes from multiple masters, all recorded in the WAL log for one node, separately identified using replication origin identifiers.
PGD allows PITR of all or some replication origins to a specific point in time, providing a fully consistent viewpoint across all subsets of nodes.
Thus, for multiple origins, you can view the WAL stream as containing multiple streams all mixed up into one larger stream. There's still just one PIT, but that's reached at different points for each origin separately.
The WAL stream is read until requested origins have found their PIT. All changes are applied up until that point, except that any transaction records aren't marked as committed for an origin after the PIT on that origin is reached.
You end up with one LSN "stopping point" in WAL, but you also have one single timestamp applied consistently, just as you do with single-origin PITR.
Once you reach the defined PIT, a later one might also be set to allow the recovery to continue, as needed.
After the desired stopping point is reached, if the recovered server will be promoted, shut it down first. Move the LSN forward to an LSN value higher than used on any timeline on this server using `pg_resetwal`. This approach ensures that there are no duplicate LSNs produced by logical decoding.
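For example, a minimal sketch (the WAL segment name is illustrative; pick a value safely beyond anything reported by `pg_controldata`):

```shell
# Run with the server stopped; advances the next WAL segment past
# anything this server has used on any timeline
pg_resetwal -l 000000020000000A00000000 "$PGDATA"
```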
In the specific example shown, `N1` is restored to `T1`. It also includes changes from other nodes that were committed by `T1`, even though they weren't applied on `N1` until later.
To request multi-origin PITR, use the standard syntax in the `recovery.conf` file:
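```
# The timestamp shown is illustrative
recovery_target_time = '2024-05-01 12:00:00+00'
```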
You need to specify the list of replication origins that are restored to `T1` in one of two ways.
You can use a separate `multi_recovery.conf` file by way of a new parameter, `recovery_target_origins`:
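```
# '*' selects all replication origins
recovery_target_origins = '*'
```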
Or you can specify the origin subset as a list in `recovery_target_origins`:
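```
# The origin identifiers shown are illustrative
recovery_target_origins = '1,3'
```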
The local WAL activity recovery to the specified `recovery_target_time` is always performed implicitly. For origins that aren't specified in `recovery_target_origins`, recovery can stop at any point, depending on when the target for the list mentioned in `recovery_target_origins` is achieved.
In the absence of the `multi_recovery.conf` file, the recovery defaults to the original PostgreSQL PITR behavior, which is designed around the assumption of changes arriving from a single master in COMMIT order.
Note
This feature is available only with EDB Postgres Extended.
Barman doesn't create a `multi_recovery.conf` file.
Restore
While you can take a physical backup with the same procedure as a standard PostgreSQL node, it's slightly more complex to restore the physical backup of a PGD node.
EDB Postgres Distributed cluster failure or seeding a new cluster from a backup
The most common use case for restoring a physical backup involves the failure or replacement of all the PGD nodes in a cluster, for instance in the event of a data center failure.
You might also want to perform this procedure to clone the current contents of an EDB Postgres Distributed cluster to seed a QA or development instance.
In that case, you can restore PGD capabilities based on a physical backup of a single PGD node, optionally plus WAL archives:
- If you still have some PGD nodes live and running, fence off the host you restored the PGD node to, so it can't connect to any surviving PGD nodes. This practice ensures that the new node doesn't confuse the existing cluster.
- Restore a single PostgreSQL node from a physical backup of one of the PGD nodes.
- If you have WAL archives associated with the backup, create a suitable `recovery.conf` and start PostgreSQL in recovery to replay up to the latest state. You can specify an alternative `recovery_target` here if needed.
- Start the restored node, or promote it to read/write if it was in standby recovery. Keep it fenced from any surviving nodes!
- Clean up any leftover PGD metadata that was included in the physical backup.
- Fully stop and restart the PostgreSQL instance.
- Add further PGD nodes with the standard procedure based on the `bdr.join_node_group()` function call (see the sketch after this list).
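For example, a minimal sketch of adding a node (node names and connection strings are illustrative):

```sql
-- Run on the new, freshly initialized node: register it locally, then
-- join it to the existing cluster through a live member's DSN
SELECT bdr.create_node(
    node_name := 'node2',
    local_dsn := 'host=node2 dbname=bdrdb port=5432'
);

SELECT bdr.join_node_group(
    join_target_dsn := 'host=node1 dbname=bdrdb port=5432'
);
```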
Cleanup of PGD metadata
To clean up leftover PGD metadata:
- Drop the PGD node using `bdr.drop_node` (see the sketch after this list).
- Fully stop and restart PostgreSQL (important!).
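For example, a minimal sketch (the node name is illustrative; `cascade := true` is assumed here to remove dependent metadata):

```sql
-- Run on the restored node only, never on a live cluster member
SELECT bdr.drop_node(node_name := 'node1', cascade := true);
```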
Cleanup of replication origins
You must explicitly remove replication origins with a separate step because they're recorded persistently in a system catalog. They're therefore included in the backup and in the restored instance. They aren't removed automatically when dropping the BDR extension, because they aren't explicitly recorded as its dependencies.
To track progress of incoming replication in a crash-safe way, PGD creates one replication origin for each remote master node. Therefore, for each node in the previous cluster, run this once:
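```sql
-- The origin name shown is illustrative; PGD origin names encode the
-- database, node group, and node name
SELECT pg_replication_origin_drop('bdr_bdrdb_mygroup_node1');
```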
You can list replication origins as follows:
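```sql
SELECT * FROM pg_replication_origin;
```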
Those created by PGD are easily recognized by their name.
Cleanup of replication slots
If a physical backup was created with `pg_basebackup`, replication slots are omitted from the backup.
Some other backup methods might preserve replication slots, likely in outdated or invalid states. Once you restore the backup, use this to drop all replication slots:
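```sql
SELECT pg_drop_replication_slot(slot_name)
FROM pg_replication_slots;
```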
If you have a reason to preserve some slots, you can add a `WHERE slot_name LIKE 'bdr%'` clause, but this is rarely useful.
Warning
Never run this on a live PGD node.