EDB Postgres for Kubernetes Plugin v1
EDB Postgres for Kubernetes provides a plugin for kubectl
to manage a cluster in Kubernetes.
The plugin also works with oc
in an OpenShift environment.
Install
You can install the cnp
plugin using a variety of methods.
Note
For air-gapped systems, installation via package managers, using previously downloaded files, may be a good option.
Via the installation script
Using the Debian or RedHat packages
In the releases section of the GitHub repository, you can navigate to any release of interest (pick the same or newer release than your EDB Postgres for Kubernetes operator), and in it you will find an Assets section. In that section are pre-built packages for a variety of systems. As a result, you can follow standard practices and instructions to install them in your systems.
Debian packages
For example, let's install the 1.18.1 release of the plugin for an Intel-based
64-bit server. First, we download the right `.deb` file:
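The exact download URL and file name come from the release's Assets section; the ones below are illustrative only.

```sh
# Illustrative only: copy the real link for your architecture from the Assets section
wget https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.18.1/kubectl-cnp_1.18.1_linux_x86_64.deb
```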
Then, install from the local file using `dpkg`:
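For example, using the illustrative file name from the previous step:

```sh
# Install the downloaded package (requires root privileges)
sudo dpkg -i kubectl-cnp_1.18.1_linux_x86_64.deb
```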
RPM packages
As in the example for `.deb` packages, let's install the 1.18.1 release for an
Intel 64-bit machine. Note the `--output` flag to provide a file name:
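As before, the URL and file name below are illustrative; copy the real asset link from the release page.

```sh
# Download the .rpm asset, choosing the output file name explicitly
curl -L https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.18.1/kubectl-cnp_1.18.1_linux_x86_64.rpm \
  --output kube-plugin.rpm
```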
Then install with `yum`, and you're ready to use:
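For example:

```sh
# Install the local RPM package (requires root privileges)
sudo yum localinstall kube-plugin.rpm
```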
Supported Architectures
EDB Postgres for Kubernetes Plugin is currently built for the following operating system and architectures:
- Linux
- amd64
- arm 5/6/7
- arm64
- s390x
- ppc64le
- macOS
- amd64
- arm64
- Windows
- 386
- amd64
- arm 5/6/7
- arm64
Use
Once the plugin is installed and deployed, you can start using it like this:
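The plugin is invoked as a `kubectl` sub-command; placeholders are shown in angle brackets:

```sh
kubectl cnp <command> <args...>
```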
Generation of installation manifests
The cnp
plugin can be used to generate the YAML manifest for the
installation of the operator. This option would typically be used if you want
to override some default configurations such as number of replicas,
installation namespace, namespaces to watch, and so on.
For details and available options, run:
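For instance (the `install generate` sub-command path follows the plugin's conventions; double-check it with `kubectl cnp --help`):

```sh
kubectl cnp install generate --help
```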
The main options are:
- `-n`: the namespace in which to install the operator (by default: `postgresql-operator-system`)
- `--replicas`: number of replicas in the deployment
- `--version`: minor version of the operator to be installed, such as `1.17`. If a minor version is specified, the plugin will install the latest patch version of that minor version. If no version is supplied the plugin will install the latest `MAJOR.MINOR.PATCH` version of the operator.
- `--watch-namespace`: comma separated string containing the namespaces to watch (by default all namespaces)
An example of the generate
command, which will generate a YAML manifest that
will install the operator, is as follows:
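A sketch of that invocation, saving the generated manifest to a local file:

```sh
# Generate the operator manifest with the options described below
kubectl cnp install generate \
  -n king \
  --version 1.17 \
  --replicas 3 \
  --watch-namespaces "albert, bb, freddie" \
  > operator.yaml
```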
The flags in the above command have the following meaning:
- `-n king`: install the CNP operator into the `king` namespace
- `--version 1.17`: install the latest patch version for minor version 1.17
- `--replicas 3`: install the operator with 3 replicas
- `--watch-namespaces "albert, bb, freddie"`: have the operator watch for changes in the `albert`, `bb` and `freddie` namespaces only
Status
The status
command provides an overview of the current status of your
cluster, including:
- general information: name of the cluster, PostgreSQL's system ID, number of instances, current timeline and position in the WAL
- backup: point of recoverability, and WAL archiving status as returned by the `pg_stat_archiver` view from the primary - or designated primary in the case of a replica cluster
- streaming replication: information taken directly from the `pg_stat_replication` view on the primary instance
- instances: information about each Postgres instance, taken directly by each instance manager; in the case of a standby, the `Current LSN` field corresponds to the latest write-ahead log location that has been replayed during recovery (replay LSN)
Important
The status information above is taken at different times and at different
locations, resulting in slightly inconsistent returned values. For example,
the `Current Write LSN` location in the main header might differ from the
`Current LSN` field in the instances status, as the two are sampled at
different times.
You can also get a more verbose version of the status by adding
`--verbose` or just `-v`.
The command also supports output in `yaml` and `json` format.
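For example, assuming a cluster named `cluster-example` (a placeholder):

```sh
# Overview of the cluster status
kubectl cnp status cluster-example

# More verbose output
kubectl cnp status cluster-example --verbose
```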
Promote
This command promotes a pod in the cluster to primary, so you
can start maintenance work or test a switchover situation in your cluster:
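For example, to promote the pod `cluster-example-2` of the `cluster-example` cluster (placeholder names):

```sh
kubectl cnp promote cluster-example cluster-example-2
```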
Or you can use the instance node number to promote:
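Using the same placeholder names:

```sh
# Promote instance number 2 of the cluster
kubectl cnp promote cluster-example 2
```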
Certificates
Clusters created using the EDB Postgres for Kubernetes operator work with a CA to sign a TLS authentication certificate.
To get a certificate, you need to provide a name for the secret to store the credentials, the cluster name, and a user for this certificate:
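A sketch with placeholder names (`cluster-cert`, `cluster-example`, `app`); the flag names are an assumption, so verify them with `kubectl cnp certificate --help`:

```sh
# Create a TLS client certificate for user "app" of cluster "cluster-example",
# stored in a secret named "cluster-cert" (flag names are assumptions)
kubectl cnp certificate cluster-cert \
  --cnp-cluster cluster-example \
  --cnp-user app
```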
After the secret is created, you can get it using `kubectl`:
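For example:

```sh
kubectl get secret cluster-cert
```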
You can view its content in plain text using the following commands:
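For instance, assuming `jq` is available:

```sh
# Decode every key of the secret's data map
kubectl get secret cluster-cert -o json | jq -r '.data | map_values(@base64d)'
```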
Restart
The kubectl cnp restart
command can be used in two cases:
- requesting the operator to orchestrate a rollout restart for a certain cluster. This is useful to apply configuration changes to cluster dependent objects, such as ConfigMaps containing custom monitoring queries.
- requesting a single instance restart, either in-place if the instance is the cluster's primary, or by deleting and recreating the pod if it is a replica.
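A sketch of both forms, with `cluster-example` and `cluster-example-2` as placeholder names:

```sh
# Rollout restart of the whole cluster
kubectl cnp restart cluster-example

# Restart of a single instance
kubectl cnp restart cluster-example cluster-example-2
```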
If the in-place restart is requested but the change cannot be applied without a switchover, the switchover will take precedence over the in-place restart. A common case for this is a minor upgrade of the PostgreSQL image.
Note
If you want ConfigMaps and Secrets to be automatically reloaded
by instances, you can add a label with key `k8s.enterprisedb.io/reload`
to them.
Reload
The kubectl cnp reload
command requests the operator to trigger a reconciliation
loop for a certain cluster. This is useful to apply configuration changes
to cluster dependent objects, such as ConfigMaps containing custom monitoring queries.
The following command will reload all configurations for a given cluster:
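For example, for a cluster named `cluster-example` (a placeholder):

```sh
kubectl cnp reload cluster-example
```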
Maintenance
The `kubectl cnp maintenance` command helps to modify one or more clusters
across namespaces and set the maintenance window values. It will change
the following fields:
- .spec.nodeMaintenanceWindow.inProgress
- .spec.nodeMaintenanceWindow.reusePVC
It accepts `set` and `unset` as arguments, setting `inProgress` to `true` in case of `set` and to `false` in case of `unset`.
By default, reusePVC
is always set to false
unless the --reusePVC
flag is passed.
The plugin will ask for confirmation, showing the list of clusters to modify and their new values; if accepted, the action will be applied to all the clusters in the list.
If you want to put all the PostgreSQL clusters in your Kubernetes cluster in maintenance mode, you just need to run the following command:
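A sketch of the invocation; the `--all-namespaces` flag name is an assumption to verify with `kubectl cnp maintenance --help`:

```sh
# Put every cluster, in every namespace, in maintenance mode
kubectl cnp maintenance set --all-namespaces
```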
The plugin will then display the list of all the clusters to update before asking for confirmation.
Report
The kubectl cnp report
command bundles various pieces
of information into a ZIP file.
It aims to provide the needed context to debug problems
with clusters in production.
It has two sub-commands: operator
and cluster
.
report Operator
The operator
sub-command requests the operator to provide information
regarding the operator deployment, configuration and events.
Important
All confidential information in Secrets and ConfigMaps is REDACTED.
The Data map will show the keys but the values will be empty.
The flag `-S` / `--stopRedaction` will defeat the redaction and show the
values. Use it only at your own risk: this will share private data.
Note
By default, operator logs are not collected, but you can enable operator
log collection with the `--logs` flag.
- deployment information: the operator Deployment and operator Pod
- configuration: the Secrets and ConfigMaps in the operator namespace
- events: the Events in the operator namespace
- webhook configuration: the mutating and validating webhook configurations
- webhook service: the webhook service
- logs: logs for the operator Pod (optional, off by default) in JSON-lines format
The command will generate a ZIP file containing various manifests in YAML format
(by default, but settable to JSON with the `-o` flag).
Use the -f
flag to name a result file explicitly. If the -f
flag is not used, a
default time-stamped filename is created for the zip file.
Note
The report plugin obeys kubectl
conventions, and will look for objects constrained
by namespace. The CNP Operator will generally not be installed in the same
namespace as the clusters.
E.g. the default installation namespace is `postgresql-operator-system`, so you will
typically need to run the report against that namespace explicitly. For example:
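A sketch of the invocation against the default installation namespace:

```sh
kubectl cnp report operator -n postgresql-operator-system
```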
With the -f
flag set:
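For example (the file name is a placeholder):

```sh
kubectl cnp report operator -n postgresql-operator-system -f reportRedacted.zip
```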
Unzipping the file will produce a time-stamped top-level folder to keep the directory tidy. For example:
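Using the placeholder file name from the previous example:

```sh
unzip reportRedacted.zip
```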
If you activated the `--logs` option, you'd see an extra subdirectory containing the operator logs.
Note
The plugin will try to get the PREVIOUS operator's logs, which is helpful when investigating restarted operators. In all cases, it will also try to get the CURRENT operator logs. If current and previous logs are available, it will show them both.
If the operator hasn't been restarted, you'll still see the ====== Begin …
and ====== End …
guards, with no content inside.
You can verify that the confidential information is REDACTED by default: the Data maps of the collected Secrets and ConfigMaps show the keys, but the values are empty.
With the -S
(--stopRedaction
) option activated, secrets are shown:
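For example (again with a placeholder file name):

```sh
# WARNING: the resulting report will contain secrets in clear text
kubectl cnp report operator -n postgresql-operator-system -f reportNonRedacted.zip -S
```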
You'll get a reminder that you're about to view confidential information.
report Cluster
The cluster
sub-command gathers the following:
- cluster resources: the cluster information, same as
kubectl get cluster -o yaml
- cluster pods: pods in the cluster namespace matching the cluster name
- cluster jobs: jobs, if any, in the cluster namespace matching the cluster name
- events: events in the cluster namespace
- pod logs: logs for the cluster Pods (optional, off by default) in JSON-lines format
- job logs: logs for the Pods created by jobs (optional, off by default) in JSON-lines format
The cluster
sub-command accepts the -f
and -o
flags, as the operator
does.
If the -f
flag is not used, a default timestamped report name will be used.
Note that the cluster information does not contain configuration Secrets / ConfigMaps,
so the `-S` flag is disabled.
Note
By default, cluster logs are not collected, but you can enable cluster
log collection with the --logs
flag
Usage:
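A sketch of the invocation, with placeholders in angle brackets:

```sh
kubectl cnp report cluster <clusterName> [flags]
```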
Note that, unlike the operator
sub-command, for the cluster
sub-command you
need to provide the cluster name, and very likely the namespace, unless the cluster
is in the default one.
For example:
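Here, the cluster and namespace names are placeholders:

```sh
kubectl cnp report cluster example -f report.zip -n example_namespace
```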
Remember that you can use the `--logs` flag to add the pod and job logs to the ZIP. For example:
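Using the same placeholder names as above:

```sh
kubectl cnp report cluster example -n example_namespace --logs
```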
OpenShift support
The `report operator` sub-command will automatically detect if the cluster is
running on OpenShift, and will also gather the Cluster Service Version and the
Install Plan, adding them to the zip under the `openshift` sub-folder.
Note
The namespace becomes very important on OpenShift. The default namespace for OpenShift in CNP is "openshift-operators". Many (most) clients will use a different namespace for the CNP operator.
For example, running the report against the default OpenShift namespace:
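A sketch (use `oc` instead of `kubectl` if you prefer):

```sh
kubectl cnp report operator -n openshift-operators
```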
You can find the OpenShift-related files in the `openshift` sub-folder of the unzipped report.
Destroy
The kubectl cnp destroy
command helps remove an instance and all the
associated PVCs from a Kubernetes cluster.
The optional --keep-pvc
flag, if specified, allows you to keep the PVCs,
while removing all metadata.ownerReferences
that were set by the instance.
Additionally, the k8s.enterprisedb.io/pvcStatus
label on the PVCs will change from
ready
to detached
to signify that they are no longer in use.
Running the command again without the `--keep-pvc` flag will remove the
detached PVCs.
Usage:
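A sketch of the invocation, with placeholders in angle brackets:

```sh
kubectl cnp destroy <CLUSTER_NAME> <INSTANCE_ID>
```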
The following example removes the cluster-example-2
pod and the associated
PVCs:
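Here, `2` is the instance number embedded in the pod name:

```sh
kubectl cnp destroy cluster-example 2
```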
Cluster hibernation
Sometimes you may want to suspend the execution of an EDB Postgres for Kubernetes Cluster
while retaining its data, then resume its activity at a later time. We've
called this feature cluster hibernation.
Hibernation is only available via the kubectl cnp hibernate [on|off]
commands.
Hibernating an EDB Postgres for Kubernetes cluster means destroying all the resources generated by the cluster, except the PVCs that belong to the PostgreSQL primary instance.
You can hibernate a cluster with:
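For example, for a cluster named `cluster-example` (a placeholder):

```sh
kubectl cnp hibernate on cluster-example
```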
This will:
- shut down every PostgreSQL instance
- detach the PVCs containing the data of the primary instance, and annotate them with the latest database status and the latest cluster configuration
- delete the `Cluster` resource, including every generated resource, except the aforementioned PVCs
When hibernated, an EDB Postgres for Kubernetes cluster is represented by just a group of
PVCs, in which the one containing the `PGDATA` is annotated with the latest
available status, including content from `pg_controldata`.
Warning
A cluster having fenced instances cannot be hibernated, as fencing is part of the hibernation procedure too.
In case of error, the operator will not be able to revert the procedure. You can still force the operation with:
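Using the same placeholder cluster name; the `--force` flag name is an assumption to verify with `kubectl cnp hibernate --help`:

```sh
# Force hibernation even if the operator cannot guarantee a clean procedure
kubectl cnp hibernate on cluster-example --force
```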
A hibernated cluster can be resumed with:
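For example:

```sh
kubectl cnp hibernate off cluster-example
```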
Once the cluster has been hibernated, it's possible to show the last configuration and the status that PostgreSQL had after it was shut down. That can be done with:
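For example; the `status` sub-command name is an assumption to verify with `kubectl cnp hibernate --help`:

```sh
kubectl cnp hibernate status cluster-example
```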
Benchmarking the database with pgbench
Pgbench can be run against an existing PostgreSQL cluster with the following command:
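A sketch, assuming a cluster named `cluster-example`; the arguments after `--` are assumed to be passed through to `pgbench` itself:

```sh
# Run a short pgbench session against the cluster
kubectl cnp pgbench cluster-example -- --time 30 --client 1 --jobs 1
```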
Refer to the Benchmarking pgbench section for more details.
Benchmarking the storage with fio
fio can be run on an existing storage class with the following command:
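A sketch, with a placeholder job name; check `kubectl cnp fio --help` for the options selecting the storage class and PVC size:

```sh
kubectl cnp fio fio-job
```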
Refer to the Benchmarking fio section for more details.
Requesting a new base backup
The kubectl cnp backup
command requests a new physical base backup for
an existing Postgres cluster by creating a new Backup
resource.
The following example requests an on-demand backup for a given cluster:
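For example, for a cluster named `cluster-example` (a placeholder):

```sh
kubectl cnp backup cluster-example
```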
The created backup will be named after the request time.
By default, a newly created backup will use the backup target policy defined
in the cluster to choose which instance to run on. You can also use the `--backup-target`
option to override this policy. Please refer to Backup and Recovery
for more information about backup targets.
Launching psql
The kubectl cnp psql
command starts a new PostgreSQL interactive front-end
process (psql) connected to an existing Postgres cluster, as if you were running
it from the actual pod. This means that you will be using the postgres
user.
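For example, to connect to the primary of a cluster named `cluster-example` (a placeholder):

```sh
kubectl cnp psql cluster-example
```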
Important
As you will be connecting as the `postgres` user, in production environments this
method should be used with extreme care, by authorized personnel only.
By default, the command will connect to the primary instance. The user can
select to work against a replica by using the --replica
option:
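For example:

```sh
kubectl cnp psql --replica cluster-example
```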
This command will start `kubectl exec`, and the `kubectl` executable must be
reachable in your `PATH` for it to work correctly.
Note
When connecting to instances running on OpenShift, you must explicitly
pass a username to the psql
command, because of a security measure built into
OpenShift:
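A sketch; the arguments after `--` are assumed to be passed through to `psql`:

```sh
kubectl cnp psql cluster-example -- -U postgres
```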
Snapshotting a Postgres cluster
The kubectl cnp snapshot
creates consistent snapshots of a Postgres
Cluster
by:
- choosing a replica Pod to work on
- fencing the replica
- taking the snapshot
- unfencing the replica
Warning
A cluster already having a fenced instance cannot be snapshotted.
At the moment, this command can be used only for clusters having at least one
replica: that replica will be shut down by the fencing procedure to ensure that
the snapshot is consistent (cold backup). As the development of
declarative support for Kubernetes' `VolumeSnapshot` API continues,
this limitation will be removed, allowing you to take online backups
as business continuity requires.
Important
Even though the procedure will shut down a replica, the primary Pod will not be involved.
The kubectl cnp snapshot
command requires the cluster name:
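For example:

```sh
kubectl cnp snapshot cluster-example
```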
The `VolumeSnapshot` resource will be created with an empty
`VolumeSnapshotClass` reference. That resource is intended to be used with the
`VolumeSnapshotClass` configured as the default.
A specific VolumeSnapshotClass
can be requested via the -c
option:
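For example, with a placeholder class name:

```sh
kubectl cnp snapshot cluster-example -c my-snapshot-class
```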