Do you want proper ingress handling for your Postgres databases running on Kubernetes? With external DNS registration and signed certificates? Then you're in the right place.
Recently I wanted to handle ingress to my Postgres clusters the proper way: I want my clusters to be reachable externally, with DNS name registration and signed Let's Encrypt certificates that are injected directly into the Postgres pod, so the instance can use them for SSL connections.
Prerequisites
This article requires that you have a working cert-manager and external-dns setup running in your Kubernetes environment. You also need the Zalando Operator and your Postgres instances configured to follow this tutorial. You can omit external-dns and signed certificates, but I will not cover how to remove all the parameters that make them required here. If you don't use the Zalando Postgres Operator, you can still do ingress, but this involves far more components than for Operator-managed clusters. You can find an example setup (which still uses postgresql CRDs) in my Github repo here.
Creating those Postgres instances
As always, I’ve prepared every line of code in my Github repo, so if you want a quick start, stop by there. Stay here if you want to proceed step by step.
The Zalando Postgres Operator can create Kubernetes LoadBalancer services for us, i.e. components reachable via a public IP address. It also adds an external-dns annotation to this service, so external-dns will take care of the DNS name registration. To get a proper DNS name that belongs to our managed DNS zone, you need to configure the Zalando Operator accordingly via the postgres-operator ConfigMap. The following parameters are relevant to get the right annotation:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-operator
data:
  ...
  db_hosted_zone: your.dns.zone.com
  master_dns_name_format: '{cluster}.{namespace}.{hostedzone}'
  replica_dns_name_format: '{cluster}-repl.{namespace}.{hostedzone}'
  master_legacy_dns_name_format: ""
  replica_legacy_dns_name_format: ""
  ...
The resulting DNS annotation will later look something like this: external-dns.alpha.kubernetes.io/hostname: postgres-cluster01.cluster01.your.dns.zone.com
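The operator builds that hostname by filling in the {cluster}, {namespace} and {hostedzone} placeholders from master_dns_name_format. A quick sketch of the substitution (purely illustrative — the operator does this internally, in Go):

```python
def render_dns_name(fmt: str, cluster: str, namespace: str, hostedzone: str) -> str:
    # Mimics how the placeholders in master_dns_name_format are filled in.
    return fmt.format(cluster=cluster, namespace=namespace, hostedzone=hostedzone)

name = render_dns_name(
    "{cluster}.{namespace}.{hostedzone}",
    cluster="postgres-cluster01",
    namespace="cluster01",
    hostedzone="your.dns.zone.com",
)
print(name)  # postgres-cluster01.cluster01.your.dns.zone.com
```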
With DNS out of the way, we can proceed to create the namespaces where our Postgres clusters should live. In this demo we will use two namespaces, cluster01 and cluster02.
kind: Namespace
apiVersion: v1
metadata:
  name: cluster01
---
kind: Namespace
apiVersion: v1
metadata:
  name: cluster02
Create both namespaces via kubectl apply -f namespace.yaml.
Now we create two Certificate resources which will be handled by cert-manager. In this demo we only use Let's Encrypt staging certificates (specified by the issuerRef). You can change them to whatever certificate issuer you have configured in your environment.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: postgres-cluster01.cluster01.your.dns.zone.com
  namespace: cluster01
spec:
  dnsNames:
  - postgres-cluster01.cluster01.your.dns.zone.com
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: letsencrypt-staging
  secretName: postgres-cluster01-cluster01-your-dns-zone-com
  usages:
  - digital signature
  - key encipherment
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: postgres-cluster02.cluster02.your.dns.zone.com
  namespace: cluster02
spec:
  dnsNames:
  - postgres-cluster02.cluster02.your.dns.zone.com
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: letsencrypt-staging
  secretName: postgres-cluster02-cluster02-your-dns-zone-com
  usages:
  - digital signature
  - key encipherment
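A note on the secretName values above: in this walkthrough they are simply the DNS names with the dots replaced by dashes. That is a convention of this article, not a cert-manager requirement — any valid Secret name works. Expressed as a tiny helper:

```python
def secret_name(dns_name: str) -> str:
    # Convention used for the manifests in this article:
    # turn the certificate's DNS name into a valid Secret name.
    return dns_name.replace(".", "-")

print(secret_name("postgres-cluster01.cluster01.your.dns.zone.com"))
# postgres-cluster01-cluster01-your-dns-zone-com
```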
After a while, you should see your certificates in state Ready.
kubectl get certificate --all-namespaces
NAMESPACE NAME READY SECRET AGE
cluster01 postgres-cluster01.cluster01.your.dns.zone.com True postgres-cluster01-cluster01-your-dns-zone-com 31m
cluster02 postgres-cluster02.cluster02.your.dns.zone.com True postgres-cluster02-cluster02-your-dns-zone-com 31m
Now we create the actual Postgres clusters, including a LoadBalancer service and the certificates created above. To get the Operator to create this kind of service, use the enableMasterLoadBalancer parameter. For this demo I also used an allowed source IP range of 0.0.0.0/0, i.e. any source. You probably want to change that and narrow down the source IPs from which your Postgres instance can be reached:
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
name: postgres-cluster01
namespace: cluster01
spec:
teamId: "postgres"
volume:
size: 5Gi
numberOfInstances: 1
users:
demouser: # database owner
- superuser
- createdb
databases:
demo: demouser # dbname: owner
postgresql:
version: "14"
tls:
secretName: postgres-cluster01-cluster01-your-dns-zone-com
spiloFSGroup: 103
enableMasterLoadBalancer: true
allowedSourceRanges:
- 0.0.0.0/0
---
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
name: postgres-cluster02
namespace: cluster02
spec:
teamId: "postgres"
volume:
size: 5Gi
numberOfInstances: 1
users:
demouser: # database owner
- superuser
- createdb
databases:
demo: demouser # dbname: owner
postgresql:
version: "14"
tls:
secretName: postgres-cluster02-cluster02-your-dns-zone-com
spiloFSGroup: 103
enableMasterLoadBalancer: true
allowedSourceRanges:
- 0.0.0.0/0
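Before narrowing allowedSourceRanges, it can be worth sanity-checking that your client address actually falls inside the planned CIDR. A quick check with Python's stdlib ipaddress module (the addresses here are hypothetical placeholders):

```python
import ipaddress

# Hypothetical: the range you plan to allow and a client IP to verify.
allowed = ipaddress.ip_network("203.0.113.0/24")
client = ipaddress.ip_address("203.0.113.42")

# True if this client would pass the load balancer's source filter.
print(client in allowed)  # True
```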
Once again, apply the manifest with kubectl apply -f postgresql.yaml to create the CRD resources. After some time, the Zalando Postgres Operator should have created the desired clusters for us. Check out the created service, especially the external-dns annotation mentioned above:
kubectl get service -n cluster01 postgres-cluster01 -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: postgres-cluster01.cluster01.your.dns.zone.com
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
  creationTimestamp: "2023-05-25T16:14:17Z"
  labels:
    application: spilo
    cluster-name: postgres-cluster01
    spilo-role: master
    team: postgres
  name: postgres-cluster01
  namespace: cluster01
  resourceVersion: "118102972"
  uid: 78dac507-857d-4fec-8217-d06b6d46da3d
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.96.162.109
  clusterIPs:
  - 10.96.162.109
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerSourceRanges:
  - 127.0.0.1/32
  ports:
  - name: postgresql
    nodePort: 30596
    port: 5432
    protocol: TCP
    targetPort: 5432
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.168.2.xxx
This annotation should prompt your external-dns deployment to become active and register the DNS name with your DNS service. We are nearly there; let's see if we can connect to the Postgres clusters and what an openssl certificate check tells us:
openssl s_client -starttls postgres -connect postgres-cluster02.cluster02.your.dns.zone.com:5432 -showcerts
You should be able to see the staging certificate in the output.
With that we are already done 🙂
Philip