The Problem
In KubeZero (an open-source, out-of-the-box Platform Orchestrator with GitOps, designed for multi-environment Cloud Native setups), virtual clusters are created using vCluster. The main GitOps tool in KubeZero is Argo CD, so we needed to automate provisioning the cluster and adding it to Argo CD.
If you have used Argo CD before, you probably know that it provides a method for declarative setup (as you would expect for GitOps), where you can add new Kubernetes cluster credentials by storing them in Secrets, just like repositories and repository credentials.
However, to automate that, you need a way to extract the vCluster credentials and format them as an Argo CD cluster config. There are many ways to do that; I prefer a declarative method: the External Secrets Operator, namely its PushSecret and ClusterSecretStore resources.
The flow is simple: when a Kubernetes cluster is created via vCluster, the cluster credentials are stored as a Secret object in the same namespace as the virtual cluster. Then, using PushSecret's templating capabilities, External Secrets Operator reads that Secret, reformats it, and pushes it to the Argo CD cluster via a ClusterSecretStore.
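For context, the PushSecret below pushes to a ClusterSecretStore named kubezero-management. A minimal sketch of what such a store could look like, using External Secrets Operator's Kubernetes provider to write into the host cluster where Argo CD runs (the namespace and ServiceAccount names here are assumptions for illustration, not taken from the KubeZero repo):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: kubezero-management
spec:
  provider:
    kubernetes:
      # Namespace to push the Secrets into (where Argo CD runs; name assumed).
      remoteNamespace: argo-cd
      server:
        # Target the host cluster's API server.
        url: https://kubernetes.default.svc
        caProvider:
          type: ConfigMap
          name: kube-root-ca.crt
          key: ca.crt
          namespace: default
      auth:
        serviceAccount:
          # A ServiceAccount with permission to manage Secrets
          # in the target namespace (names assumed).
          name: external-secrets
          namespace: external-secrets
```

The important part is that the store is cluster-scoped, so a PushSecret in any virtual cluster's namespace can reference it.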
vCluster supports multiple installation methods. We use the vCluster Helm chart, so the PushSecret is created within the Helm chart to further automate it. Using Helm here is not mandatory; you can use any other installation method you like.
Assuming you deploy the virtual cluster using the vCluster (v4.3.0) Helm chart, you just need this extra Helm values file (copied here from the KubeZero repo):
```yaml
---
experimental:
  deploy:
    host:
      manifestsTemplate: |
        ---
        # Push the vCluster credentials to KubeZero ClusterSecretStore,
        # which will save it as a Secret in the KubeZero namespace to be used
        # as an Argo CD cluster config (just a secret with a specific label).
        # https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#clusters
        apiVersion: external-secrets.io/v1alpha1
        kind: PushSecret
        metadata:
          name: argo-cd-{{ .Release.Name }}-credentials
          namespace: {{ .Release.Name }}
        spec:
          refreshInterval: 5m
          secretStoreRefs:
            - name: kubezero-management
              kind: ClusterSecretStore
          selector:
            secret:
              name: vc-{{ .Release.Name }}
          data:
            - match:
                secretKey: name
                remoteRef:
                  remoteKey: argo-cd-{{ .Release.Name }}-credentials
                  property: name
            - match:
                secretKey: server
                remoteRef:
                  remoteKey: argo-cd-{{ .Release.Name }}-credentials
                  property: server
            - match:
                secretKey: config
                remoteRef:
                  remoteKey: argo-cd-{{ .Release.Name }}-credentials
                  property: config
          template:
            engineVersion: v2
            metadata:
              annotations:
                managed-by: external-secrets
              labels:
                argocd.argoproj.io/secret-type: cluster
            data:
              name: {{ .Release.Name }}
              server: https://{{ .Release.Name }}.{{ .Release.Namespace }}.svc:443
              config: |
                {
                  "tlsClientConfig": {
                    "insecure": false,
                    "caData": "{{ printf "{{ index . "certificate-authority" | b64enc }}" }}",
                    "certData": "{{ printf "{{ index . "client-certificate" | b64enc }}" }}",
                    "keyData": "{{ printf "{{ index . "client-key" | b64enc }}" }}",
                    "serverName": "{{ .Release.Name }}"
                  }
                }
```
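For completeness, deploying the virtual cluster with that values file could look like the following. This is just a sketch: the release name, namespace, and values file name are placeholders, while the chart name and repo are the upstream vCluster ones (pin the chart version you use, v4.3.0 in this post):

```
# Hypothetical release "k0" in namespace "mgmt-demo"; the values file
# above is assumed to be saved as vcluster-values.yaml.
helm upgrade --install k0 vcluster \
  --repo https://charts.loft.sh \
  --namespace mgmt-demo --create-namespace \
  --values vcluster-values.yaml
```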
That will create the reformatted Secret object in the Argo CD namespace, where the Argo CD controller will read it as a cluster config because of the label argocd.argoproj.io/secret-type: cluster. The actual output will be something like this:
```yaml
apiVersion: v1
kind: Secret
metadata:
  annotations:
    managed-by: external-secrets
  labels:
    argocd.argoproj.io/secret-type: cluster
  name: argo-cd-k0-credentials
  namespace: argo-cd
# The base64 is decoded for the sake of the example.
data:
  name: argo-cd-k0
  server: https://argo-cd-k0.mgmt-demo.svc:443
  config: |
    {
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded from vCluster secret>",
        "certData": "<base64 encoded from vCluster secret>",
        "keyData": "<base64 encoded from vCluster secret>",
        "serverName": "argo-cd-k0"
      }
    }
```
That's it! Enjoy, and don't forget to star the KubeZero project on GitHub :-)
I'm not sure if this was a hack or an undocumented feature at the time, but I can find it in the GitHub Actions docs now.
A while back, I needed to copy a short multiline file between GitHub Actions jobs, and I didn't want to bother with the extra steps of stashing/unstashing artifacts, so I found that you can define a multiline GitHub Actions output variable!
It was as easy as this:
```yaml
jobs:
  job1:
    runs-on: ubuntu-latest
    # Expose the step output at the job level so other jobs
    # can read it via "needs.job1.outputs.JSON_RESPONSE".
    outputs:
      JSON_RESPONSE: ${{ steps.set-json.outputs.JSON_RESPONSE }}
    steps:
      - name: Set multiline value in bash
        id: set-json
        run: |
          # The curly brackets are just Bash syntax to group commands
          # and are not mandatory.
          {
            echo 'JSON_RESPONSE<<EOF'
            cat my-file.json
            echo EOF
          } >> "$GITHUB_OUTPUT"
```
Of course, you need to be sure that the delimiter EOF doesn't occur within the value.
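To be safe against such collisions, you can generate a random delimiter instead of a fixed EOF (the GitHub Actions docs recommend this for untrusted values). Here is a quick local sketch, using a temp file as a stand-in for the real $GITHUB_OUTPUT file that the runner provides:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the file GitHub Actions provides via $GITHUB_OUTPUT.
GITHUB_OUTPUT="$(mktemp)"

# Random delimiter, so the value can never accidentally contain it.
delimiter="ghadelimiter_$(od -An -N8 -tx1 /dev/urandom | tr -d ' \n')"

{
  echo "JSON_RESPONSE<<${delimiter}"
  printf '%s\n' '{"hello": "multiline"}'
  echo "${delimiter}"
} >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"
```

The heredoc-style `NAME<<DELIMITER` syntax is the same as in the workflow above; only the delimiter generation is extra.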
Then you can call that again as:
```yaml
[...]
  job2:
    needs: job1
    runs-on: ubuntu-latest
    steps:
      - name: Get multiline value in bash
        run: |
          echo "${{ needs.job1.outputs.JSON_RESPONSE }}"
```
That's it! Enjoy! ♾️
Springer Nature has a platform for Research Communities, so after I published my paper Building a Modern Data Platform Based on the Data Lakehouse Architecture and Cloud-Native Ecosystem, I posted a blog post about the paper titled:
The blog post summarizes the paper's highlights along with some personal background.
Enjoy :-)
Finally, after months of hard work, I have published my first research paper in a double-blind peer-reviewed scientific journal by the international publisher Springer Nature 🙌
The paper is titled:
Building a Modern Data Platform Based on the Data Lakehouse Architecture and Cloud-Native Ecosystem
This research paper is the result of several months of work and is based on my master's thesis, which was published in 2023 (I received a Master of Science with Distinction in Data Engineering from Edinburgh Napier University).
The paper presents a practical application for data management without vendor lock-in, in addition to ensuring platform extensibility and incorporating modern concepts such as Cloud-Native, Cloud-Agnostic, and DataOps.
Why is this paper important? Because data is the backbone of Artificial Intelligence! In today's world, control over data means political and economic independence.
I would like to extend my sincere gratitude to the research team who contributed to this work, supported me, and shared their knowledge to help bring this paper to the highest quality. It was a truly enriching experience on many levels! 🙌
The research group chose these quotes from our respective languages/cultures to emphasize the importance of perseverance and diligence:
“عِندَ الصَّباحِ يَحمَدُ القومُ السُّرَى”
(In the morning, the people praise the night's journey)
Arabic Proverb
“Αρχή ήμισυ παντός”
(The beginning is half of everything)
Greek Proverb
“Is obair latha tòiseachadh”
(Beginning is a day's work)
Scottish Gaelic Proverb
I will write a community blog post about it soon :-)
Two days ago (20.02.2025), it was a pleasure to participate in the Open Source Summit 2025 in KSA.
My session was about contributing to open source and how it helps you become a better DevOps engineer. In fact, the best DevOps engineers I have encountered possess T-shaped skills, which require diving into many areas, even outside of daily work topics.
It was nice to reflect on all those years of professional work and open-source contributions 🤩
Hello, my name is Ahmed AbouZaid, I'm a passionate Tech Lead DevOps Engineer. 👋
I specialize in Cloud-Native and Kubernetes. I'm also a Free/Open source geek and book author. My favorite topics are DevOps transformation, DevSecOps, automation, data, and metrics.