Installing Crunchy Postgres Operator v5 on EKS
In my previous post, I described how to deploy Crunchy Postgres Operator (PGO) v4 on Kubernetes and use it to achieve disaster recovery and high availability. The new major version, v5, was released last year, and the installation methods have changed significantly: in v5 you manage clusters with the standard kubectl command instead of the pgo client used in v4. This post describes how to install PGO v5 and create a PostgreSQL cluster on Amazon EKS using Kustomize.
Prerequisites
Before you start, make sure you have installed the following tools:
- AWS CLI
- eksctl
- kubectl
- An AWS access key ID and secret access key with access to S3
- An Amazon S3 bucket (in this tutorial we create a bucket named my-postgres-bucket and specify the bucket name in the PostgresCluster YAML file; see the sample command after this list)
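If you don't already have a bucket, you can create one with the AWS CLI. A minimal sketch, assuming the my-postgres-bucket name is still available in your account (S3 bucket names are globally unique):
$ aws s3api create-bucket \
    --bucket my-postgres-bucket \
    --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2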
Creating an Amazon EKS cluster
Create a three-node Amazon EKS cluster with Kubernetes version 1.21 (the latest version supported by EKS at the time of writing) in the us-west-2 region.
$ eksctl create cluster \
--name my-cluster \
--version 1.21 \
--region us-west-2 \
--nodes 3
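Cluster creation takes several minutes. When eksctl finishes, confirm that all three nodes are ready:
$ kubectl get nodes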
Deploying PGO
Clone the Crunchy Data examples repository, which contains the Kustomize manifests used in this post:
$ git clone https://github.com/CrunchyData/postgres-operator-examples.git
$ cd postgres-operator-examples/
Deploy PGO using the following command:
$ kubectl apply -k kustomize/install
The PGO pod is deployed in the postgres-operator namespace:
$ kubectl get pods -n postgres-operator
NAME READY STATUS RESTARTS AGE
pgo-59c4f987b6-9pqxv 1/1 Running 0 15s
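You can also confirm that the postgresclusters custom resource definition was installed:
$ kubectl get crd postgresclusters.postgres-operator.crunchydata.com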
Creating a PostgreSQL Cluster
We will create a PostgreSQL cluster with the following characteristics:
- One master and two replicas
- Synchronous replication
- Automated failover
- WAL archiving
- Backups and WAL files stored in both a persistent volume and S3
Copy the bundled s3 example and review the PostgresCluster manifest:
$ cp -r kustomize/s3 kustomize/my-postgres
$ cat kustomize/my-postgres/postgres.yaml
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: my-postgres
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:centos8-14.2-0
  postgresVersion: 14
  instances:
    - name: pg-1
      replicas: 3
      dataVolumeClaimSpec:
        accessModes:
        - "ReadWriteOnce"
        resources:
          requests:
            storage: 10Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:centos8-2.36-1
      configuration:
      - secret:
          name: pgo-s3-creds
      global:
        repo2-path: /pgbackrest/postgres-operator/my-postgres/repo2
      repos:
      - name: repo1
        volume:
          volumeClaimSpec:
            accessModes:
            - "ReadWriteOnce"
            resources:
              requests:
                storage: 10Gi
      - name: repo2
        s3:
          bucket: "my-postgres-bucket"
          endpoint: "s3.us-west-2.amazonaws.com"
          region: "us-west-2"
  patroni:
    dynamicConfiguration:
      synchronous_mode: true
  users:
    - name: postgres
    - name: testuser
      databases:
        - testdb
The important fields are:
- spec.instances.replicas: the number of replicas.
- spec.backups.pgbackrest.repos.name: a repository to store backup and WAL files; multiple repositories can be specified using the repoN format.
- spec.backups.pgbackrest.repos.s3: the S3 bucket details.
- spec.instances.dataVolumeClaimSpec: the PVC for PostgreSQL data.
- spec.backups.pgbackrest.repos.volume.volumeClaimSpec: the PVC for backup and WAL files.
- spec.patroni.dynamicConfiguration.synchronous_mode: enables or disables synchronous replication.
- spec.users: the PostgreSQL users and the databases each user can access. The postgres user is created as a superuser.
Register your credentials for AWS S3. Note that you must specify the correct repository name in repoN format (repo2 in this example).
$ cp kustomize/my-postgres/s3.conf{.example,}
$ vi kustomize/my-postgres/s3.conf
[global]
repo2-s3-key=<YOUR_AWS_S3_KEY>
repo2-s3-key-secret=<YOUR_AWS_S3_KEY_SECRET>
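For reference, the copied directory also includes a kustomization.yaml that turns s3.conf into the pgo-s3-creds Secret referenced by spec.backups.pgbackrest.configuration. The exact contents may differ between repository versions, but it looks roughly like this:
namespace: postgres-operator

secretGenerator:
- name: pgo-s3-creds
  files:
  - s3.conf

generatorOptions:
  disableNameSuffixHash: true

resources:
- postgres.yaml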
Deploy the PostgreSQL cluster:
$ kubectl apply -k kustomize/my-postgres
Check the instance pods and their roles:
$ kubectl get pods -n postgres-operator --selector=postgres-operator.crunchydata.com/instance-set \
    -L postgres-operator.crunchydata.com/role
NAME READY STATUS RESTARTS AGE ROLE
my-postgres-pg-1-7wm4-0 3/3 Running 0 6m8s replica
my-postgres-pg-1-l957-0 3/3 Running 0 6m8s replica
my-postgres-pg-1-wbx9-0 3/3 Running 0 6m8s master
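You can also inspect the cluster custom resource itself; kubectl describe shows detailed status, including the pgBackRest repositories:
$ kubectl get postgrescluster -n postgres-operator
$ kubectl describe postgrescluster my-postgres -n postgres-operator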
Connecting to PostgreSQL using psql
Once all of the pods are up and running, you can connect to the master pod and verify the replication status. First, set up a port forward to the master pod:
$ PG_CLUSTER_MASTER_POD=$(kubectl get pod -n postgres-operator -o name -l postgres-operator.crunchydata.com/cluster=my-postgres,postgres-operator.crunchydata.com/role=master)
$ kubectl -n postgres-operator port-forward "${PG_CLUSTER_MASTER_POD}" 5432:5432 &
PGO stores the connection credentials in a Secret named my-postgres-pguser-postgres for the user postgres.
$ PG_CLUSTER_USER_SECRET_NAME=my-postgres-pguser-postgres
$ PGPASSWORD=$(kubectl get secrets -n postgres-operator "${PG_CLUSTER_USER_SECRET_NAME}" -o go-template='{{.data.password | base64decode}}') \
PGUSER=$(kubectl get secrets -n postgres-operator "${PG_CLUSTER_USER_SECRET_NAME}" -o go-template='{{.data.user | base64decode}}') \
psql -h localhost
Handling connection for 5432
psql (14.2)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-ECDSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=# select application_name, state, sync_state from pg_stat_replication;
application_name | state | sync_state
-------------------------+-----------+------------
my-postgres-pg-1-l957-0 | streaming | sync
my-postgres-pg-1-7wm4-0 | streaming | async
(2 rows)
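With synchronous_mode enabled, Patroni keeps one replica synchronous and the other asynchronous, as shown above. You can connect as testuser in the same way; following PGO's <cluster>-pguser-<user> Secret naming convention, its credentials live in the Secret my-postgres-pguser-testuser (a sketch, assuming the port forward is still running):
$ PGPASSWORD=$(kubectl get secrets -n postgres-operator my-postgres-pguser-testuser -o go-template='{{.data.password | base64decode}}') \
PGUSER=testuser \
psql -h localhost -d testdb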
Verify automatic failover
PGO uses Patroni to achieve high availability. If the master pod fails, Patroni triggers a failover by promoting a replica, and PGO recreates the failed pod.
Let's verify automatic failover.
First, let's check the current status.
$ kubectl get pods -n postgres-operator --selector=postgres-operator.crunchydata.com/instance-set \
-L postgres-operator.crunchydata.com/role
NAME READY STATUS RESTARTS AGE ROLE
my-postgres-pg-1-7wm4-0 3/3 Running 0 49m replica
my-postgres-pg-1-l957-0 3/3 Running 0 49m replica
my-postgres-pg-1-wbx9-0 3/3 Running 0 49m master
Now delete the master pod to simulate a failure:
$ PG_CLUSTER_MASTER_POD=$(kubectl get pod -n postgres-operator -o name -l postgres-operator.crunchydata.com/cluster=my-postgres,postgres-operator.crunchydata.com/role=master)
$ kubectl delete ${PG_CLUSTER_MASTER_POD} -n postgres-operator
Within about a minute, one of the replicas is promoted to master and the deleted pod is recreated as a replica:
$ kubectl get pods -n postgres-operator --selector=postgres-operator.crunchydata.com/instance-set \
    -L postgres-operator.crunchydata.com/role
NAME READY STATUS RESTARTS AGE ROLE
my-postgres-pg-1-7wm4-0 3/3 Running 0 51m replica
my-postgres-pg-1-l957-0 3/3 Running 0 51m master
my-postgres-pg-1-wbx9-0 3/3 Running 0 54s replica
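Finally, you can confirm that pgBackRest has written backup and WAL files to the S3 repository. A quick check with the AWS CLI, assuming the bucket and repo2-path from the manifest above:
$ aws s3 ls s3://my-postgres-bucket/pgbackrest/postgres-operator/my-postgres/repo2/ --recursive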