Oracle wants to participate in the open source movement lately, presenting itself as “an Open Source company”. Part of this effort is the Oracle Database Operator for Kubernetes. It’s an early version (0.1.0) that I’ve tested out, so it offers only a limited set of features as of now. Let’s put it through its paces, shall we?
Making a database “Cloud ready” often means running it on Kubernetes. If you want to deploy it at large scale as a Cloud provider, you most certainly rely on a so-called Operator which handles operational tasks for you (e.g. deploying all needed components, checking cluster states, configuring backups…). Take PostgreSQL, which has a broad set of Operators to choose from (not all of them are really open source, though). So I think Oracle decided to present the Oracle Database as a “Cloud ready” database as well, and they wrote an operator for it. I suspect that Oracle itself does not rely on this operator for its own Oracle Cloud services, but I’m getting ahead of myself.
The current version (0.1.0) of the Oracle Database Operator was released in October 2021, with no real updates since then in the main branch of the repository (you can find it here). This is not a good sign in my opinion. But let’s take a look at the features of the operator. You are able to deploy an Oracle Database as an
- Autonomous Database, a
- Single Database or a
- Sharded Database.
The first one is a database that runs in the Oracle Cloud. Assuming I understood the documentation correctly, you should also be able to bind (register) already deployed Autonomous Databases in the Operator, giving you some kind of manageability. The second one is a single database which runs on your “on-prem” infrastructure or Kubernetes environment; this is the one I tried out in this article. The latter one deploys your database setup as shards, i.e. many data-dependent databases that can be distributed over the globe. Again, that’s what I understood from the documentation; I haven’t tried it out myself yet.
The deployment of the operator is quite simple. I created a kustomization overlay for it anyway; you can find it in my GitHub repository, along with the sample manifests I describe below. Using my repository, the deployment is done with the following command:
kubectl apply -k base
After waiting a few minutes, you should find a running pod with the name
oracle-database-operator-controller-manager in the
oracle-database-operator-system namespace. That was really easy. There are some configurations you might want to change; I decided that I don’t need a redundant operator deployment and therefore scaled the
replicas down to
1. For a full manifest of all operator components, have a look here.
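To verify the operator is up (and, if you like, scale it down to a single replica as I did), the standard kubectl commands are enough; the deployment and namespace names below are the ones from the operator’s default manifests:

```shell
# Check that the controller manager pod is running
kubectl get pods -n oracle-database-operator-system

# Scale the operator down to one replica (no redundant deployment needed here)
kubectl scale deployment oracle-database-operator-controller-manager \
  -n oracle-database-operator-system --replicas=1
```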
The easy deployment is a real “pro” on the list for the operator. Let’s see if the deployment of a database is just as easy. Short answer? It isn’t. The manifest for the actual database provisioning is quite simple; what I don’t like is that there is no public repository offered by Oracle to pull the container images from. Either you have to build the container images yourself (which is reasonable sooner or later anyway, in my opinion), or you need a user account on the Oracle container registry. You need to specify these credentials as a Kubernetes regcred secret.
In the manifests folder of my repo, you can find a demo manifest
demo-database-ee.yaml. As you can see, I create the regcred secret
oracle-regcred in this repository. You can create your own secret and replace the template using the following command:
kubectl create secret docker-registry oracle-regcred --docker-server=container-registry.oracle.com --docker-username=<username> --docker-password=<password> --dry-run=client -o yaml > docker-secret.yaml
Copy the content of the docker-secret.yaml into the manifest as shown here:
kind: Namespace
apiVersion: v1
metadata:
  name: oracle-database
---
apiVersion: v1
kind: Secret
metadata:
  name: system.orademo
  namespace: oracle-database
stringData:
  password: "supersecret"
---
apiVersion: v1
data:
  .dockerconfigjson: <BASE64_ENCODED_CONFIG>
kind: Secret
metadata:
  creationTimestamp: null
  name: oracle-regcred
  namespace: oracle-database
type: kubernetes.io/dockerconfigjson
---
apiVersion: database.oracle.com/v1alpha1
kind: SingleInstanceDatabase
metadata:
  name: orademo
  namespace: oracle-database
spec:
  ## Use only alphanumeric characters for sid
  sid: ORADEMO
  ## A source database ref to clone from, leave empty to create a fresh database
  cloneFrom: ""
  ## NA if cloning from a SourceDB (cloneFrom is set)
  edition: enterprise
  ## Should refer to SourceDB secret if cloning from a SourceDB (cloneFrom is set)
  ## Secret containing SIDB password mapped to secretKey
  ## This secret will be deleted after creation of the database unless keepSecret is set to true
  adminPassword:
    secretName: system.orademo
    secretKey: password
    keepSecret: true
  ## NA if cloning from a SourceDB (cloneFrom is set)
  charset: AL32UTF8
  ## NA if cloning from a SourceDB (cloneFrom is set)
  pdbName: orademopdb1
  ## Enable/Disable Flashback
  flashBack: false
  ## Enable/Disable ArchiveLog
  archiveLog: false
  ## Enable/Disable ForceLogging
  forceLog: false
  ## NA if cloning from a SourceDB (cloneFrom is set)
  ## Specify both sgaSize and pgaSize (in MB) or don't specify both
  ## Specify Non-Zero value to use
  initParams:
    cpuCount: 0
    processes: 100
    sgaTarget: 1024
    pgaAggregateTarget: 512
  ## Database image details
  ## Database can be patched by updating the RU version/image
  ## Major version changes are not supported
  image:
    pullFrom: container-registry.oracle.com/database/enterprise:19.3.0.0
    pullSecrets: oracle-regcred
  ## size : Minimum size of pvc | class : PVC storage Class
  ## AccessMode can only accept one of ReadWriteOnce, ReadWriteMany
  ## Below mentioned storageClass/accessMode applies to OCI block volumes.
  ## Update appropriately for other types of persistent volumes.
  persistence:
    size: 10Gi
    storageClass: "local-path" # Adapt this to your storage class of choice
    accessMode: "ReadWriteOnce"
  ## Type of service. Applicable on cloud environments only
  ## if loadBalService : false, service type = "NodePort". else "LoadBalancer"
  loadBalancer: true
  ## Deploy only on nodes having required labels. Format label_name : label_value
  ## Leave empty if there is no such requirement.
  ## Uncomment to use
  # nodeSelector:
  #   failure-domain.beta.kubernetes.io/zone: bVCG:PHX-AD-1
  #   pool: sidb
  ## Count of Database Pods. Applicable only for "ReadWriteMany" AccessMode
  replicas: 1
Apply the manifest like so:
kubectl apply -f demo-database-ee.yaml
namespace/oracle-database created
secret/system.orademo created
secret/oracle-regcred created
singleinstancedatabase.database.oracle.com/orademo created
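While waiting, you can watch the status of the custom resource itself; the operator reports the provisioning state there (resource kind and names as used in the manifest above):

```shell
# Watch the SingleInstanceDatabase resource until the operator reports it ready
kubectl get singleinstancedatabases.database.oracle.com -n oracle-database -w

# Or inspect events and status conditions in detail
kubectl describe singleinstancedatabase orademo -n oracle-database
```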
And now it’s time to wait, a long time (at least in Cloud terms). Pulling down the container image takes ages, with the enterprise image being nearly 8 GB in size. The size is one reason why I would advise you to create your own container images; the other reason is that you will probably need many combinations of images with different release updates, release update revisions or one-off patches included. At least that’s my experience in large-scale environments. There is always one bug or another which needs to be fixed with a one-off patch that is not yet included in a release update. After the image pull is done, the pod is created, and with it, the database on first start. This again takes quite a long time. In sum, it took me about 25 minutes to get a running Oracle Database on my Kubernetes cluster. You may follow the progress in the container logs or by shelling into the container and looking under
/opt/oracle/cfgtoollogs/dbca/<ORA_SID>. There you’ll find all the logs created during database creation.
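Following the progress can, for example, look like this (the pod name is whatever the operator generated for you; check with kubectl get pods):

```shell
# Stream the container logs of the database pod
kubectl logs -f -n oracle-database <orademo-pod-name>

# Or shell into the container and list the DBCA logs directly
kubectl exec -it -n oracle-database <orademo-pod-name> -- \
  ls /opt/oracle/cfgtoollogs/dbca/ORADEMO
```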
Logging in to the database can be done either by shelling into the container or via the LoadBalancer service we specified during deployment. If you want to set up a load balancer on your “self-hosted” Kubernetes cluster, you can find instructions here (German only).
kubectl get service -n oracle-database orademo
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                         AGE
orademo   LoadBalancer   10.98.122.205   192.168.2.240   1521:31150/TCP,5500:30817/TCP   3h2m
Let’s create a tnsnames.ora entry for our database on another system with an Oracle Client installed:
K8S =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.2.240)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORADEMOPDB1)
    )
  )
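If you don’t want to maintain a tnsnames.ora at all, EZConnect works just as well against the LoadBalancer IP (same host, port and service name as in the entry above):

```shell
# EZConnect syntax: user@//host:port/service_name
sqlplus system@//192.168.2.240:1521/ORADEMOPDB1
```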
And connect to the PDB with the password we specified in the secret:
sqlplus system@K8S

SQL*Plus: Release 21.0.0.0.0 - Production on Sat May 21 22:54:04 2022
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Enter password:
Last Successful login time: Sat May 21 2022 20:06:02 +02:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL>
Phew, done! And it wasn’t that hard, was it? No, but there are some drawbacks I still have to mention. When you take a detailed look at the manifest for the
SingleInstanceDatabase, you might already have spotted some. But let’s start with the positives:
- You can clone a new instance / database from an already existing one, that’s neat.
- You can implicitly create a LoadBalancer service when provisioning an instance
- You can specify (some) parameters during creation of the instance
- You can specify to enable / disable flashback and archiving
Now the things that Oracle has to improve in my opinion:
- You can only specify some parameters during instance creation
- There is no possibility to specify separate volumes for datafiles and redo logs
- There is no possibility to mirror redo logs
- There seems to be no possibility to set a retention for archive logs; you have to housekeep this on your own
- No backup scheduling or configuration possible
- No user management possible in the CRDs
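The missing archive log retention, for example, means you have to clean up yourself. A minimal sketch, assuming archiving is enabled and using a hypothetical pod name, would be a periodic RMAN job like this (the deletion window is of course your choice):

```shell
# Delete archive logs older than one day via RMAN inside the database container
kubectl exec -n oracle-database <orademo-pod-name> -- bash -c \
  "echo \"DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-1';\" | rman target /"
```

You could wrap this in a Kubernetes CronJob to run it on a schedule, which is exactly the kind of thing I would expect the operator to offer eventually.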
There is also a parameter
replicas in the manifest. I haven’t tested it out yet, so I’m not entirely sure what it means. I assume it’s not a Data Guard replication that is set up when you specify a value greater than 1, because the comments say that the volume claim’s access mode needs to be ReadWriteMany in that case.
In the end, I appreciate that Oracle provides an open source operator, but it’s not that refined yet and I’m afraid it will stay in this state for some time. I hope Oracle proves me wrong.
I am using the Oracle database operator and local-path-provisioner to create a database. Using dynamic persistence I can create the database normally, but with static persistence I am unable to create it. I first create the PVC using the example pvc.yaml and pod.yaml, then I change the volumeName in singleinstancedatabase.yaml to volv, but the creation fails. I have tried pvc-041b78a4-74c4-4ad5-8097-74163bd9af44_default_local-path-pvc, local-path-pvc and volume-test, but all were unsuccessful.
Type     Reason            Age    From               Message
Warning  FailedScheduling  2m35s  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
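The “unbound immediate PersistentVolumeClaims” message usually means the PVC never bound to a PV at all. Before touching the operator manifest, I would check the binding itself (substitute the PVC name from your setup):

```shell
# Check whether the PVC actually bound to a persistent volume
kubectl get pvc -n oracle-database
kubectl get pv

# The events usually tell you why binding failed (wrong storageClassName,
# accessMode mismatch, or a volumeName pointing to an already-bound PV)
kubectl describe pvc <pvc-name> -n oracle-database
```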