Upgrading v5

Because EDB Postgres Distributed consists of multiple software components, the upgrade strategy depends partially on the components that are being upgraded.

In general, you can upgrade the cluster with almost zero downtime by using an approach called rolling upgrade. Using this approach, nodes are upgraded one by one, and the application connections are switched over to already upgraded nodes.

You can also stop all nodes, perform the upgrade on all nodes, and only then restart the entire cluster. This approach is the same as with a standard PostgreSQL setup. This strategy of upgrading all nodes at the same time avoids running with mixed versions of software and therefore is the simplest. However, it incurs some downtime and we don't recommend it unless you can't perform the rolling upgrade for some reason.

To upgrade an EDB Postgres Distributed cluster:

  1. Plan the upgrade.
  2. Prepare for the upgrade.
  3. Upgrade the server software.
  4. Check and validate the upgrade.

Upgrade planning

There are broadly two ways to upgrade each node:

  • Upgrade the server software in place on the existing node.
  • Join a new node running the newer version of the software and then remove the node running the older version.

You can use both of these approaches in a rolling manner.

Rolling upgrade considerations

While the cluster is going through a rolling upgrade, mixed versions of software are running in the cluster. For example, suppose nodeA is running PGD 3.7.16 while nodeB and nodeC are running 4.1.0. In this state, replication and group management use the protocol and features from the oldest version (3.7.16 in this example), so any new features provided by the newer version that require changes in the protocol are disabled. Once all nodes are upgraded to the same version, the new features are enabled.
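During a rolling upgrade, you can check whether the cluster is still running mixed versions. A minimal sketch, assuming the bdr.monitor_group_versions() monitoring function available in recent BDR and PGD releases:

-- Returns a warning with details while nodes run different software
-- versions, and OK once every node is on the same release.
SELECT * FROM bdr.monitor_group_versions();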

Similarly, when a cluster with WAL-decoder-enabled nodes is going through a rolling upgrade, the WAL decoder on a node running a higher version of PGD produces LCRs (logical change records) with a higher pglogical version, while the WAL decoder on a node running a lower version produces LCRs with a lower pglogical version. As a result, WAL senders on nodes running the higher PGD version don't use LCRs because of the mismatch in protocol versions, while WAL senders on nodes running the lower version can continue to use them. Once all PGD nodes are on the same PGD version, WAL senders use LCRs again.

A rolling upgrade starts with a cluster with all nodes at a prior release. It then proceeds by upgrading one node at a time to the newer release, until all nodes are at the newer release. More than two versions of any component must never be running at the same time. This requirement means that you must not start the new upgrade until the previous upgrade process fully finishes on all nodes.

An upgrade process can take more time when caution is needed to reduce business risk. However, we don't recommend running mixed versions of the software indefinitely.

While you can use a rolling upgrade for upgrading a major version of the software, we don't support mixing PostgreSQL, EDB Postgres Extended, and EDB Postgres Advanced Server in one cluster. So you can't use this approach to change the Postgres variant.

Warning

Downgrades of EDB Postgres Distributed aren't supported. Reverting to an earlier version requires manually rebuilding the cluster.

Rolling server software upgrades

A rolling upgrade is the process of upgrading the server software on each node in the cluster, one after another, while keeping the rest of the cluster operating.

The actual procedure depends on whether the Postgres component is being upgraded to a new major version.

During the upgrade process, you can switch the application over to a node that's currently not being upgraded to provide continuous availability of the database for applications.

Rolling upgrade using node join

The other method to upgrade the server software is to join a new node to the cluster and later drop one of the existing nodes running the older version of the software.

For this approach, the procedure is always the same. However, because it includes node join, a potentially large data transfer is required.

Take care not to use features that are available only in the newer Postgres versions until all nodes are upgraded to the newer and same release of Postgres. This is especially true for any new DDL syntax that was added to a newer release of Postgres.

Note

bdr_init_physical makes a byte-by-byte copy of the source node, so you can't use it while upgrading from one major Postgres version to another. In fact, currently bdr_init_physical requires that even the PGD version of the source and the joining node be exactly the same. You can't use it for rolling upgrades by way of the node join method. Instead, use a logical join.
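As an illustration of the logical join method, the following is a minimal sketch. The node names, connection strings, and group name are placeholders, and the exact function signatures can vary between PGD versions, so check the reference documentation for your release.

-- On the new node, which already runs the newer software,
-- register the local node with its own connection string.
SELECT bdr.create_node(
    node_name := 'node_new',
    local_dsn := 'host=node_new port=5432 dbname=appdb'
);

-- Join the existing group through any node that's already a member.
SELECT bdr.join_node_group(
    join_target_dsn := 'host=node_a port=5432 dbname=appdb',
    node_group_name := 'mygroup'
);

-- Later, once application traffic has moved off the old node,
-- part it from the group.
SELECT bdr.part_node(node_name := 'node_old');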

Upgrading a CAMO-enabled cluster

Upgrading a CAMO-enabled cluster requires upgrading CAMO groups one by one while disabling the CAMO protection for the group being upgraded and reconfiguring it using the new commit scope-based settings.

The following approach is recommended for upgrading two BDR nodes that constitute a CAMO pair to PGD 5.0:

  • Ensure that bdr.enable_camo remains off for transactions on either of the two nodes, or redirect clients away from the two nodes. Removing the CAMO pairing while clients are still attempting to use CAMO leads to errors and prevents further transactions.
  • Uncouple the pair by deconfiguring CAMO, either by resetting bdr.camo_origin_for and bdr.camo_partner_of (when upgrading from BDR 3.7.x) or by using bdr.remove_camo_pair (on BDR 4.x).
  • Upgrade the two nodes to PGD 5.0.
  • Create a dedicated node group for the two nodes and move them into that node group.
  • Create a commit scope for this node group and thus the pair of nodes to use CAMO.
  • Reactivate CAMO protection by either setting a default_commit_scope or by changing the clients to explicitly set bdr.commit_scope instead of bdr.enable_camo for their sessions or transactions. (A SQL sketch of these reconfiguration steps follows this list.)
  • If necessary, allow clients to connect to the CAMO protected nodes again.
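The following is a minimal SQL sketch of these reconfiguration steps, assuming a CAMO pair node_a and node_b that belongs to a top-level group top_group and is being upgraded from BDR 4.x. The group, scope, and rule names are placeholders, and functions such as bdr.remove_camo_pair, bdr.switch_node_group, bdr.add_commit_scope, and bdr.alter_node_group_option can have different signatures in your release, so check the reference documentation.

-- On BDR 4.x, before the upgrade: uncouple the pair.
SELECT bdr.remove_camo_pair(
    node_group := 'top_group',
    left_node  := 'node_a',
    right_node := 'node_b'
);

-- After both nodes run PGD 5: create a dedicated subgroup and move
-- both nodes into it (run switch_node_group on each node to move).
SELECT bdr.create_node_group(
    node_group_name   := 'camo_pair_group',
    parent_group_name := 'top_group'
);
SELECT bdr.switch_node_group(node_group_name := 'camo_pair_group');

-- Define a CAMO commit scope for the new subgroup.
SELECT bdr.add_commit_scope(
    commit_scope_name := 'camo_scope',
    origin_node_group := 'camo_pair_group',
    rule              := 'ALL (camo_pair_group) ON durable CAMO'
);

-- Optionally make it the group default instead of having clients set
-- bdr.commit_scope for each session or transaction.
SELECT bdr.alter_node_group_option(
    node_group_name := 'camo_pair_group',
    config_key      := 'default_commit_scope',
    config_value    := 'camo_scope'
);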

Upgrade preparation

Each major release of the software contains several changes that might affect compatibility with previous releases. These changes might affect the Postgres configuration and deployment scripts, as well as applications using PGD. We recommend reviewing these changes and making any needed adjustments in advance of the upgrade.

See individual changes mentioned in the release notes and any version-specific upgrade notes.

Server software upgrade

Upgrading EDB Postgres Distributed on individual nodes happens in place. You don't need to back up and restore when upgrading the BDR extension.

BDR extension upgrade

The BDR extension upgrade process consists of a few steps.

Stop Postgres

During the upgrade of binary packages, it's usually best to stop the running Postgres server first. Doing so ensures that mixed versions don't get loaded in case of an unexpected restart during the upgrade.

Upgrade packages

The first step in the upgrade is to install the new version of the BDR packages, which installs both the new binaries and the extension SQL scripts. This step is specific to the operating system.

Start Postgres

Once packages are upgraded, you can start the Postgres instance. The BDR extension is upgraded upon start when the new binaries detect the older version of the extension.
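To confirm that the extension upgrade took place, you can check the installed extension version in the catalog:

SELECT extname, extversion FROM pg_extension WHERE extname = 'bdr';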

Postgres upgrade

The process of in-place upgrade of Postgres depends on whether you're upgrading to a new minor version of Postgres or to a new major version.

Minor version Postgres upgrade

Upgrading to a new minor version of Postgres is similar to upgrading the BDR extension. Stopping Postgres, upgrading packages, and starting Postgres again is typically all that's needed.

However, sometimes more steps, like reindexing, might be recommended for specific minor version upgrades. Refer to the release notes of the version of Postgres you're upgrading to.
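For example, if the release notes for the target version recommend rebuilding certain indexes, that's a local operation you run on each node after upgrading it. The index name here is a placeholder:

-- Rebuild an index without blocking writes (available in Postgres 12 and later).
REINDEX INDEX CONCURRENTLY my_schema.my_index;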

Major version Postgres upgrade

Upgrading to a new major version of Postgres is more complicated than upgrading to a minor version.

EDB Postgres Distributed provides a bdr_pg_upgrade command line utility, which you can use to do in-place Postgres major version upgrades.

Note

When upgrading to a new major version of any software, including Postgres, the BDR extension, and others, it's always important to ensure that your application is compatible with the target version of the software you're upgrading to.

Upgrade check and validation

After you upgrade your PGD node, you can verify the current version of the binary:

SELECT bdr.bdr_version();

Always check the monitoring after upgrading a node to confirm that the upgraded node is working as expected.
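To compare versions across the whole cluster in one step, you can run the same check on every node, for example with bdr.run_on_all_nodes:

-- Returns a JSON document with one entry per node, containing each
-- node's response to the query.
SELECT bdr.run_on_all_nodes($$ SELECT bdr.bdr_version() $$);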

Moving from HARP to PGD Proxy

HARP can coexist with the new connection management configuration for a time. This means you can:

  • Upgrade a whole pre-5 cluster to a PGD 5 cluster.
  • Set up the connection routing (a sketch follows below).
  • Replace the HARP Proxy with PGD Proxy.
  • Move application connections to PGD Proxy instances.
  • Remove the HARP Manager from all servers.

We strongly recommend doing this as soon as possible after upgrading nodes to PGD 5. HARP isn't certified for long-term use with PGD 5.
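As an illustration of the connection routing step in the list above, a minimal sketch on PGD 5 might look like the following. The group and proxy names are placeholders, and the exact group options and functions, such as bdr.alter_node_group_option and bdr.create_proxy, can differ between releases, so check the PGD Proxy documentation.

-- Enable Raft and proxy routing on the subgroup the proxies serve.
SELECT bdr.alter_node_group_option('group_a', 'enable_raft', 'true');
SELECT bdr.alter_node_group_option('group_a', 'enable_proxy_routing', 'true');

-- Register a proxy that routes connections to the subgroup's write leader.
SELECT bdr.create_proxy(proxy_name := 'proxy_a1', node_group := 'group_a');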

TPA provides some useful tools for this and will eventually provide a single-command upgrade path between PGD 4 and PGD 5.