Want to see what it takes to get the EDB Postgres for Kubernetes Operator up and running? This section demonstrates the following:

- Installing the EDB Postgres for Kubernetes Operator
- Deploying a three-node PostgreSQL cluster
- Installing and using the kubectl-cnp plugin
- Testing failover to verify the resilience of the cluster

It will take roughly 5-10 minutes to work through.
This demo is interactive
You can follow along right in your browser. Once the environment initializes, you'll see a terminal open at the bottom of the screen.
Once k3d is ready, we need to start a cluster:
k3d cluster create

Output
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
INFO[0001] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.5.1'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.26.4-k3s1'
INFO[0003] Starting Node 'k3d-k3s-default-tools'
INFO[0006] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0007] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.5.1'
INFO[0010] Using the k3d-tools node to gather environment information
INFO[0010] HostIP: using network gateway 172.17.0.1 address
INFO[0010] Starting cluster 'k3s-default'
INFO[0010] Starting servers...
INFO[0010] Starting Node 'k3d-k3s-default-server-0'
INFO[0015] All agents already running.
INFO[0015] Starting helpers...
INFO[0015] Starting Node 'k3d-k3s-default-serverlb'
INFO[0022] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0024] Cluster 'k3s-default' created successfully!
INFO[0025] You can now use it like this:
kubectl cluster-info

This will create the Kubernetes cluster, and you will be ready to use it.
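k3d also switches your kubeconfig to the new cluster's context. If you ever need to confirm which cluster kubectl is talking to, check the current context (k3d prefixes the cluster name, so for the default cluster it should be k3d-k3s-default):

kubectl config current-context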
Verify that it works with the following command:
kubectl get nodes

Output
NAME                       STATUS   ROLES                  AGE   VERSION
k3d-k3s-default-server-0   Ready    control-plane,master   17s   v1.26.4+k3s1

You will see one node called k3d-k3s-default-server-0. If the status isn't yet "Ready", wait a few seconds and run the command above again.
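Rather than re-running the command by hand, you can also block until the node reports Ready; this is plain kubectl, nothing k3d-specific:

kubectl wait --for=condition=Ready node/k3d-k3s-default-server-0 --timeout=120s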
Install EDB Postgres for Kubernetes

Now that the Kubernetes cluster is running, you can proceed with the EDB Postgres for Kubernetes installation as described in the "Installation and upgrades" section:
kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.20.2.yaml

Output
namespace/postgresql-operator-system created
customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/clusters.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/poolers.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/scheduledbackups.postgresql.k8s.enterprisedb.io created
serviceaccount/postgresql-operator-manager created
clusterrole.rbac.authorization.k8s.io/postgresql-operator-manager created
clusterrolebinding.rbac.authorization.k8s.io/postgresql-operator-manager-rolebinding created
configmap/postgresql-operator-default-monitoring created
service/postgresql-operator-webhook-service created
deployment.apps/postgresql-operator-controller-manager created
mutatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-validating-webhook-configuration created

And then verify that it was successfully installed:
kubectl get deploy -n postgresql-operator-system postgresql-operator-controller-manager

Output
NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
postgresql-operator-controller-manager   1/1     1            1           52s
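If the deployment isn't ready on your first check, you can wait for it explicitly rather than polling; again, this is standard kubectl:

kubectl rollout status deployment postgresql-operator-controller-manager -n postgresql-operator-system --timeout=120s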
Deploy a PostgreSQL cluster

As with any other deployment in Kubernetes, to deploy a PostgreSQL cluster you need to apply a configuration file that defines your desired Cluster. The cluster-example.yaml sample file defines a simple Cluster using the default storage class to allocate disk space:
cat <<EOF > cluster-example.yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  primaryUpdateStrategy: unsupervised
  storage:
    size: 1Gi
EOF
There's more
For more detailed information about the available options, please refer to the "API Reference" section.
In order to create the 3-node PostgreSQL cluster, you need to run the following command:
kubectl apply -f cluster-example.yaml

Output

cluster.postgresql.k8s.enterprisedb.io/cluster-example created

You can check that the pods are being created with the get pods command. It'll take a bit to initialize, so if you run that immediately after applying the cluster configuration, you'll see the status as Init: or PodInitializing:
kubectl get pods

Output
NAME                             READY   STATUS            RESTARTS   AGE
cluster-example-1-initdb-sdr25   0/1     PodInitializing   0          20s

...give it a minute, and then check on it again:
kubectl get pods

Output
NAME                READY   STATUS    RESTARTS   AGE
cluster-example-1   1/1     Running   0          47s
cluster-example-2   1/1     Running   0          24s
cluster-example-3   1/1     Running   0          8s
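If you'd rather watch the pods converge than poll, add -w. The label selector here is an assumption: it relies on the k8s.enterprisedb.io/cluster label that the operator applies to the instance pods it manages, so verify it against your pods' labels first.

kubectl get pods -w -l k8s.enterprisedb.io/cluster=cluster-example

Press Ctrl-C to stop watching.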
Now we can check the status of the cluster:

kubectl get cluster cluster-example -o yaml

Output
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"postgresql.k8s.enterprisedb.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"cluster-example","namespace":"default"},"spec":{"instances":3,"primaryUpdateStrategy":"unsupervised","storage":{"size":"1Gi"}}}
creationTimestamp: "2023-07-28T16:14:08Z"
generation: 1
name: cluster-example
namespace: default
resourceVersion: "1115"
uid: 70e054ae-b487-41e3-941b-b7c969f950be
spec:
affinity:
podAntiAffinityType: preferred
topologyKey: ""
bootstrap:
initdb:
database: app
encoding: UTF8
localeCType: C
localeCollate: C
owner: app
enableSuperuserAccess: true
failoverDelay: 0
imageName: quay.io/enterprisedb/postgresql:15.3
instances: 3
logLevel: info
maxSyncReplicas: 0
minSyncReplicas: 0
monitoring:
customQueriesConfigMap:
- key: queries
name: postgresql-operator-default-monitoring
disableDefaultQueries: false
enablePodMonitor: false
postgresGID: 26
postgresUID: 26
postgresql:
parameters:
archive_mode: "on"
archive_timeout: 5min
dynamic_shared_memory_type: posix
log_destination: csvlog
log_directory: /controller/log
log_filename: postgres
log_rotation_age: "0"
log_rotation_size: "0"
log_truncate_on_rotation: "false"
logging_collector: "on"
max_parallel_workers: "32"
max_replication_slots: "32"
max_worker_processes: "32"
shared_memory_type: mmap
shared_preload_libraries: ""
wal_keep_size: 512MB
wal_receiver_timeout: 5s
wal_sender_timeout: 5s
syncReplicaElectionConstraint:
enabled: false
primaryUpdateMethod: restart
primaryUpdateStrategy: unsupervised
resources: {}
startDelay: 30
stopDelay: 30
storage:
resizeInUseVolumes: true
size: 1Gi
switchoverDelay: 40000000
status:
certificates:
clientCASecret: cluster-example-ca
expirations:
cluster-example-ca: 2023-10-26 16:09:09 +0000 UTC
cluster-example-replication: 2023-10-26 16:09:09 +0000 UTC
cluster-example-server: 2023-10-26 16:09:09 +0000 UTC
replicationTLSSecret: cluster-example-replication
serverAltDNSNames:
- cluster-example-rw
- cluster-example-rw.default
- cluster-example-rw.default.svc
- cluster-example-r
- cluster-example-r.default
- cluster-example-r.default.svc
- cluster-example-ro
- cluster-example-ro.default
- cluster-example-ro.default.svc
serverCASecret: cluster-example-ca
serverTLSSecret: cluster-example-server
cloudNativePostgresqlCommitHash: c42ca1c2
cloudNativePostgresqlOperatorHash: 1d51c15adffb02c81dbc4e8752ddb68f709699c78d9c3384ed9292188685971b
conditions:
- lastTransitionTime: "2023-07-28T16:15:29Z"
message: Cluster is Ready
reason: ClusterIsReady
status: "True"
type: Ready
- lastTransitionTime: "2023-07-28T16:15:29Z"
message: velero addon is disabled
reason: Disabled
status: "False"
type: k8s.enterprisedb.io/velero
- lastTransitionTime: "2023-07-28T16:15:29Z"
message: external-backup-adapter addon is disabled
reason: Disabled
status: "False"
type: k8s.enterprisedb.io/externalBackupAdapter
- lastTransitionTime: "2023-07-28T16:15:30Z"
message: external-backup-adapter-cluster addon is disabled
reason: Disabled
status: "False"
type: k8s.enterprisedb.io/externalBackupAdapterCluster
- lastTransitionTime: "2023-07-28T16:15:30Z"
message: kasten addon is disabled
reason: Disabled
status: "False"
type: k8s.enterprisedb.io/kasten
configMapResourceVersion:
metrics:
postgresql-operator-default-monitoring: "788"
currentPrimary: cluster-example-1
currentPrimaryTimestamp: "2023-07-28T16:14:48.609086Z"
healthyPVC:
- cluster-example-1
- cluster-example-2
- cluster-example-3
instanceNames:
- cluster-example-1
- cluster-example-2
- cluster-example-3
instances: 3
instancesReportedState:
cluster-example-1:
isPrimary: true
timeLineID: 1
cluster-example-2:
isPrimary: false
timeLineID: 1
cluster-example-3:
isPrimary: false
timeLineID: 1
instancesStatus:
healthy:
- cluster-example-1
- cluster-example-2
- cluster-example-3
latestGeneratedNode: 3
licenseStatus:
isImplicit: true
isTrial: true
licenseExpiration: "2023-08-27T16:14:08Z"
licenseStatus: Implicit trial license
repositoryAccess: false
valid: true
managedRolesStatus: {}
phase: Cluster in healthy state
poolerIntegrations:
pgBouncerIntegration: {}
pvcCount: 3
readService: cluster-example-r
readyInstances: 3
secretsResourceVersion:
applicationSecretVersion: "760"
clientCaSecretVersion: "756"
replicationSecretVersion: "758"
serverCaSecretVersion: "756"
serverSecretVersion: "757"
superuserSecretVersion: "759"
targetPrimary: cluster-example-1
targetPrimaryTimestamp: "2023-07-28T16:14:09.501164Z"
timelineID: 1
topology:
instances:
cluster-example-1: {}
cluster-example-2: {}
cluster-example-3: {}
nodesUsed: 1
successfullyExtracted: true
writeService: cluster-example-rw
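That's a lot of YAML for a quick health check. Individual fields can be pulled out with jsonpath; both paths below are visible in the status block above:

kubectl get cluster cluster-example -o jsonpath='{.status.currentPrimary}{"\n"}{.status.phase}{"\n"}'

This prints the current primary (cluster-example-1) and the phase (Cluster in healthy state).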
Important
The immutable infrastructure paradigm requires that you always point to a specific version of the container image. Never use tags like latest or 13 in a production environment, as it might lead to unpredictable scenarios in terms of update policies and version consistency in the cluster.
Install the kubectl-cnp plugin

EDB Postgres for Kubernetes provides a plugin for kubectl to manage a cluster in Kubernetes, along with a script to install it:
curl -sSfL \
https://github.com/EnterpriseDB/kubectl-cnp/raw/main/install.sh | \
sudo sh -s -- -b /usr/local/bin

Output
EnterpriseDB/kubectl-cnp info checking GitHub for latest tag
EnterpriseDB/kubectl-cnp info found version: 1.20.2 for v1.20.2/linux/x86_64
EnterpriseDB/kubectl-cnp info installed /usr/local/bin/kubectl-cnp

The cnp command is now available in kubectl.
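Since it installs as a kubectl subcommand, the standard help flag lists everything the plugin can do:

kubectl cnp --help

The status subcommand gives a far more readable summary of the cluster than the raw YAML above: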
kubectl cnp status cluster-example

Output
Cluster Summary
Name: cluster-example
Namespace: default
System ID: 7260903692491026447
PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3
Primary instance: cluster-example-1
Status: Cluster in healthy state
Instances: 3
Ready instances: 3
Current Write LSN: 0/6054B60 (Timeline: 1 - WAL File: 000000010000000000000006)
Certificates Status
Certificate Name              Expiration Date                Days Left Until Expiration
----------------              ---------------                --------------------------
cluster-example-ca            2023-10-26 16:09:09 +0000 UTC  89.99
cluster-example-replication   2023-10-26 16:09:09 +0000 UTC  89.99
cluster-example-server        2023-10-26 16:09:09 +0000 UTC  89.99
Continuous Backup status
Not configured
Streaming Replication status
Name               Sent LSN   Write LSN  Flush LSN  Replay LSN  Write Lag  Flush Lag  Replay Lag  State      Sync State  Sync Priority
----               --------   ---------  ---------  ----------  ---------  ---------  ----------  -----      ----------  -------------
cluster-example-2  0/6054B60  0/6054B60  0/6054B60  0/6054B60   00:00:00   00:00:00   00:00:00    streaming  async       0
cluster-example-3  0/6054B60  0/6054B60  0/6054B60  0/6054B60   00:00:00   00:00:00   00:00:00    streaming  async       0
Unmanaged Replication Slot Status
No unmanaged replication slots found
Instances status
Name               Database Size  Current LSN  Replication role  Status  QoS         Manager Version  Node
----               -------------  -----------  ----------------  ------  ---         ---------------  ----
cluster-example-1  29 MB          0/6054B60    Primary           OK      BestEffort  1.20.2           k3d-k3s-default-server-0
cluster-example-2  29 MB          0/6054B60    Standby (async)   OK      BestEffort  1.20.2           k3d-k3s-default-server-0
cluster-example-3  29 MB          0/6054B60    Standby (async)   OK      BestEffort  1.20.2           k3d-k3s-default-server-0
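Before breaking anything, it's worth noting how applications connect. The serverAltDNSNames we saw earlier include the cluster-example-rw (read-write), cluster-example-ro (read-only replicas), and cluster-example-r (any instance) services, and credentials for the app database live in a Kubernetes secret. The cluster-example-app name below follows the operator's usual <cluster>-app convention, so treat it as an assumption to verify with kubectl get secrets:

kubectl get secret cluster-example-app -o jsonpath='{.data.username}' | base64 -d; echo
kubectl get secret cluster-example-app -o jsonpath='{.data.password}' | base64 -d; echo

In-cluster clients would then point at host cluster-example-rw, port 5432, with those credentials.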
Testing failover

As our status checks show, we're running two replicas - if something happens to the primary instance of PostgreSQL, the cluster will fail over to one of them. Let's demonstrate this by killing the primary pod:

kubectl delete pod --wait=false cluster-example-1

Output
pod "cluster-example-1" deleted This simulates a hard shutdown of the server - a scenario where something has gone wrong.
Now if we check the status...
kubectl cnp status cluster-example

Output
Cluster Summary
Name: cluster-example
Namespace: default
System ID: 7260903692491026447
PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3
Primary instance: cluster-example-2
Status: Failing over Failing over from cluster-example-1 to cluster-example-2
Instances: 3
Ready instances: 2
Current Write LSN: 0/7001000 (Timeline: 2 - WAL File: 000000020000000000000007)
Certificates Status
Certificate Name              Expiration Date                Days Left Until Expiration
----------------              ---------------                --------------------------
cluster-example-ca            2023-10-26 16:09:09 +0000 UTC  89.99
cluster-example-replication   2023-10-26 16:09:09 +0000 UTC  89.99
cluster-example-server        2023-10-26 16:09:09 +0000 UTC  89.99
Continuous Backup status
Not configured
Streaming Replication status
Not available yet
Unmanaged Replication Slot Status
No unmanaged replication slots found
Instances status
Name               Database Size  Current LSN  Replication role      Status  QoS         Manager Version  Node
----               -------------  -----------  ----------------      ------  ---         ---------------  ----
cluster-example-2  29 MB          0/7001000    Primary               OK      BestEffort  1.20.2           k3d-k3s-default-server-0
cluster-example-3  29 MB          0/70000A0    Standby (file based)  OK      BestEffort  1.20.2           k3d-k3s-default-server-0

...the failover process has begun, with the second pod promoted to primary. Once the failed pod has restarted, it will become a replica of the new primary:
kubectl cnp status cluster-example

Output
Cluster Summary
Name: cluster-example
Namespace: default
System ID: 7260903692491026447
PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3
Primary instance: cluster-example-2
Status: Cluster in healthy state
Instances: 3
Ready instances: 3
Current Write LSN: 0/7004D60 (Timeline: 2 - WAL File: 000000020000000000000007)
Certificates Status
Certificate Name              Expiration Date                Days Left Until Expiration
----------------              ---------------                --------------------------
cluster-example-ca            2023-10-26 16:09:09 +0000 UTC  89.99
cluster-example-replication   2023-10-26 16:09:09 +0000 UTC  89.99
cluster-example-server        2023-10-26 16:09:09 +0000 UTC  89.99
Continuous Backup status
Not configured
Streaming Replication status
Name               Sent LSN   Write LSN  Flush LSN  Replay LSN  Write Lag  Flush Lag  Replay Lag  State      Sync State  Sync Priority
----               --------   ---------  ---------  ----------  ---------  ---------  ----------  -----      ----------  -------------
cluster-example-1  0/7004D60  0/7004D60  0/7004D60  0/7004D60   00:00:00   00:00:00   00:00:00    streaming  async       0
Unmanaged Replication Slot Status
No unmanaged replication slots found
Instances status
Name               Database Size  Current LSN  Replication role      Status  QoS         Manager Version  Node
----               -------------  -----------  ----------------      ------  ---         ---------------  ----
cluster-example-2  29 MB          0/7004D60    Primary               OK      BestEffort  1.20.2           k3d-k3s-default-server-0
cluster-example-1  29 MB          0/7004D60    Standby (async)       OK      BestEffort  1.20.2           k3d-k3s-default-server-0
cluster-example-3  29 MB          0/70000A0    Standby (file based)  OK      BestEffort  1.20.2           k3d-k3s-default-server-0
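Deleting the pod simulated an unplanned failure. When you want to change primaries on purpose (for example, ahead of node maintenance), the plugin's promote subcommand asks the operator to perform a controlled switchover to the named instance instead:

kubectl cnp promote cluster-example cluster-example-1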
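When you're done experimenting, cleanup takes two commands: the first removes the Cluster resource along with the pods and volumes it owns, and the second tears down the k3d environment:

kubectl delete cluster.postgresql.k8s.enterprisedb.io cluster-example
k3d cluster delete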
Further reading

This is all it takes to get a PostgreSQL cluster up and running - but of course there's a lot more possible, and certainly much more that is prudent to consider before you ever deploy in a production environment!