Another short set of instructions on how to deploy Portainer on your Kubernetes cluster. The special sauce is that we don't use the NodePort service type, in favour of an Ingress ruleset — a scenario the official documentation doesn't cover without Helm.
Portainer can come in handy in your homelab to manage your Kubernetes or Docker installations / deployments. When checking out the official documentation, you can choose from different scenarios on how to deploy Portainer to your cluster — but not the combination of going without Helm while using an Ingress Controller to route your access. Therefore, I wrote this quick tutorial. If you know what you're doing and don't want to read along, you may shortcut this tutorial and head directly to the deployment manifests in my GitHub repository.
Basically, this tutorial is a step-by-step guide through the official deployment manifest that can be found here, with the exceptions already mentioned. The first step, obviously, is to create a Namespace for Portainer, which can be done with the following manifest:
apiVersion: v1
kind: Namespace
metadata:
  name: portainer
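If you save each manifest to its own file, you can apply it with kubectl right away. The filename below is just an assumption — use whatever naming scheme you like:

# Hypothetical filename; adjust to your own layout.
kubectl apply -f portainer-namespace.yaml

# Verify that the namespace exists.
kubectl get namespace portainer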
Following that, we create the ServiceAccount and the ClusterRoleBinding:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: portainer-sa-clusteradmin
  namespace: portainer
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
    app.kubernetes.io/version: "ce-latest-ee-2.10.0"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: portainer
  labels:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
    app.kubernetes.io/version: "ce-latest-ee-2.10.0"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    namespace: portainer
    name: portainer-sa-clusteradmin
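As an optional sanity check (not part of the official manifest), you can ask the API server whether the ServiceAccount really received cluster-admin rights:

# Should print "yes" once the ClusterRoleBinding is applied.
kubectl auth can-i '*' '*' --as=system:serviceaccount:portainer:portainer-sa-clusteradmin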
Now we specify a PersistentVolumeClaim. You could also create the claim within the same manifest file as the Deployment, but to keep things sorted, let's do it this way.
kind: "PersistentVolumeClaim"
apiVersion: "v1"
metadata:
name: portainer
namespace: portainer
labels:
io.portainer.kubernetes.application.stack: portainer
app.kubernetes.io/name: portainer
app.kubernetes.io/instance: portainer
app.kubernetes.io/version: "ce-latest-ee-2.10.0"
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "1Gi"
Now we come to the special magic sauce of this tutorial: the Service definition. Instead of a NodePort service type, we use a ClusterIP service type. This will not work without an Ingress Controller (like Nginx in my case) in place. But having Ingress is more fun: you can combine it with cert-manager to get signed Let's Encrypt certificates for your service. If you want a tutorial on installing Nginx as an Ingress Controller, have a look here.
apiVersion: v1
kind: Service
metadata:
  name: portainer-service
  namespace: portainer
  labels:
    io.portainer.kubernetes.application.stack: portainer
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
    app.kubernetes.io/version: "ce-latest-ee-2.10.0"
spec:
  # ClusterIP is the default type anyway; stated explicitly here for clarity.
  type: ClusterIP
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
      name: http
    - port: 9443
      targetPort: 9443
      protocol: TCP
      name: https
  selector:
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
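Since a ClusterIP service is not reachable from outside the cluster, a port-forward is a handy way to test it before the Ingress is in place — once the Deployment (shown further down) is running:

# Temporarily forward the HTTPS port of the service to localhost;
# Portainer is then reachable at https://localhost:9443 (self-signed certificate).
kubectl -n portainer port-forward service/portainer-service 9443:9443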
In the Ingress manifest, you need to adapt some configuration to your setup:
- Change the annotation to the ingress class you're using.
- You may or may not have cert-manager or Certbot configured, so reference the ClusterIssuer for signed certificates according to your needs. In this demo, I only use the Let's Encrypt staging issuer.
- Adjust the hostname to your setup.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: portainer
  namespace: portainer
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
  tls:
    - hosts:
        - portainer.mydomain.de
      secretName: portainer.mydomain.de
  rules:
    - host: portainer.mydomain.de
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: portainer-service
                port:
                  number: 9443
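After applying, you can inspect the Ingress and test the route. Since I use the Let's Encrypt staging issuer here, the certificate is deliberately untrusted, hence the -k flag in the sketch below (and the hostname must of course resolve to your ingress controller):

# The ADDRESS column shows where your ingress controller exposes the rule.
kubectl -n portainer get ingress portainer

# Staging certificates are untrusted on purpose, so skip verification for the test.
curl -k https://portainer.mydomain.de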
And last but not least, you need to set up the actual Portainer Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer
  namespace: portainer
  labels:
    io.portainer.kubernetes.application.stack: portainer
    app.kubernetes.io/name: portainer
    app.kubernetes.io/instance: portainer
    app.kubernetes.io/version: "ce-latest-ee-2.10.0"
spec:
  replicas: 1
  strategy:
    type: "Recreate"
  selector:
    matchLabels:
      app.kubernetes.io/name: portainer
      app.kubernetes.io/instance: portainer
  template:
    metadata:
      labels:
        app.kubernetes.io/name: portainer
        app.kubernetes.io/instance: portainer
    spec:
      nodeSelector: {}
      serviceAccountName: portainer-sa-clusteradmin
      volumes:
        - name: "data"
          persistentVolumeClaim:
            claimName: portainer
      containers:
        - name: portainer
          image: "portainer/portainer-ce:latest"
          imagePullPolicy: Always
          args:
            - '--tunnel-port=30776'
          volumeMounts:
            - name: data
              mountPath: /data
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
            - name: https
              containerPort: 9443
              protocol: TCP
            - name: tcp-edge
              containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: 9443
              scheme: HTTPS
          readinessProbe:
            httpGet:
              path: /
              port: 9443
              scheme: HTTPS
          resources: {}
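Putting it all together, the apply sequence could look like this, assuming each manifest was saved to its own file (the filenames are placeholders):

kubectl apply -f portainer-namespace.yaml
kubectl apply -f portainer-rbac.yaml
kubectl apply -f portainer-pvc.yaml
kubectl apply -f portainer-service.yaml
kubectl apply -f portainer-ingress.yaml
kubectl apply -f portainer-deployment.yaml

# Wait until the pod is up and passing its readiness probe.
kubectl -n portainer rollout status deployment/portainer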
With all manifests deployed in that order, you should reach Portainer under the specified URL (here https://portainer.mydomain.de). Happy containering 🙂
Philip
Works perfectly. Thanks 🙂
Thanks for the great guide!!!!
Perhaps you could help me with something. Here you are using a PVC that you define in the pvc.yaml file. I am very new to Kubernetes, and although I have installed it, I have difficulty using the storage. I tried creating a PVC and PV called nfs, and I created an NFS share on my TrueNAS. Could you help me use your YAML files with the NFS share that I have created?
Hello Mike,
in order to use NFS for a persistent volume, you need a StorageClass deployed on your Kubernetes cluster which handles NFS volumes. You can find an example deployment for NFS in my GitHub repo here. It deploys NFS-subdir to your cluster. You need to configure the StorageClass according to your environment.
When you've done that, you can adjust the PVC manifest to use nfs-client as the StorageClass, like so:

spec:
  storageClassName: "nfs-client"
  volumeName: foo-pv
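A quick way to confirm the class is available before applying the adjusted claim (assuming the nfs-client name from above):

# Should list the nfs-client class deployed by NFS-subdir.
kubectl get storageclass nfs-client

# After applying the adjusted PVC, check that it binds.
kubectl -n portainer get pvc portainer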
Hope this helps.
Philip