06/06/2025

Automate adding vCluster to Argo CD using External Secrets Operator - GitOps

Overview

In KubeZero (an open-source, out-of-the-box Platform Orchestrator with GitOps, designed for multi-environment Cloud Native setups), virtual clusters are created using vCluster. The main GitOps tool used in KubeZero is Argo CD, so we needed to automate provisioning the virtual cluster and adding it to Argo CD.

If you have used Argo CD before, you probably know that it provides a declarative setup method (ideal for GitOps) where you can add new K8s cluster credentials by storing them in Secrets, just like repositories or repository credentials.

However, to automate that, you need some way to extract the vCluster credentials and format them as an Argo CD cluster config. There are many ways to do that; I prefer a declarative method using External Secrets Operator, namely PushSecret and ClusterSecretStore.

Flow

The flow is simple: when a K8s cluster is created via vCluster, the cluster credentials are stored as a Secret object in the same namespace as the virtual cluster. Then, a PushSecret reads that Secret, reformats it using its templating capabilities, and pushes it to the cluster where Argo CD runs through a ClusterSecretStore.

vCluster supports multiple installation methods. We use the vCluster Helm chart, so the PushSecret is created within the Helm chart to further automate the process. Using Helm here is not mandatory; you can use any other installation method you like.
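
For context, here is a minimal sketch of what the kubezero-management ClusterSecretStore (referenced in the example below) could look like, assuming the External Secrets Operator Kubernetes provider is used to write into the Argo CD namespace of the management cluster; the service account and namespaces are placeholders:

---
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: kubezero-management
spec:
  provider:
    kubernetes:
      # Namespace where Argo CD reads its cluster config Secrets.
      remoteNamespace: argo-cd
      server:
        url: https://kubernetes.default.svc
        caProvider:
          type: ConfigMap
          name: kube-root-ca.crt
          namespace: kube-system
          key: ca.crt
      auth:
        serviceAccount:
          # Placeholder: a ServiceAccount allowed to manage Secrets in the "argo-cd" namespace.
          name: external-secrets
          namespace: external-secrets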

Prerequisites

Assuming you deploy the virtual cluster using the vCluster (v4.3.0) Helm chart, you just need this extra Helm values file (here I simply copy the example from the KubeZero repo):

---
experimental:
  deploy:
    host:
      manifestsTemplate: |
        ---
        # Push the vCluster credentials to KubeZero ClusterSecretStore,
        # which will save it as a Secret in the KubeZero namespace to be used as an Argo CD cluster config
        # (just a secret with a specific label).
        # https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#clusters
        apiVersion: external-secrets.io/v1alpha1
        kind: PushSecret
        metadata:
          name: argo-cd-{{ .Release.Name }}-credentials
          namespace: {{ .Release.Name }}
        spec:
          refreshInterval: 5m
          secretStoreRefs:
            - name: kubezero-management
              kind: ClusterSecretStore
          selector:
            secret:
              name: vc-{{ .Release.Name }}
          data:
            - match:
                secretKey: name
                remoteRef:
                  remoteKey: argo-cd-{{ .Release.Name }}-credentials
                  property: name
            - match:
                secretKey: server
                remoteRef:
                  remoteKey: argo-cd-{{ .Release.Name }}-credentials
                  property: server
            - match:
                secretKey: config
                remoteRef:
                  remoteKey: argo-cd-{{ .Release.Name }}-credentials
                  property: config
          template:
            engineVersion: v2
            metadata:
              annotations:
                managed-by: external-secrets
              labels:
                argocd.argoproj.io/secret-type: cluster
            data:
              name: {{ .Release.Name }}
              server: https://{{ .Release.Name }}.{{ .Release.Namespace }}.svc:443
              config: |
                {
                  "tlsClientConfig": {
                    "insecure": false,
                    "caData": "{{ printf "{{ index . "certificate-authority" | b64enc }}" }}",
                    "certData": "{{ printf "{{ index . "client-certificate" | b64enc }}" }}",
                    "keyData": "{{ printf "{{ index . "client-key" | b64enc }}" }}",
                    "serverName": "{{ .Release.Name }}"
                  }
                }
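
For reference, deploying the virtual cluster with this extra values file could look something like the following (the release name, namespace, and values file name here are just placeholders):

helm upgrade --install k0 vcluster \
    --repo https://charts.loft.sh \
    --namespace k0 --create-namespace \
    --values vcluster-argocd-values.yaml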

That will create the reformatted Secret object in the Argo CD namespace, where the Argo CD controller will read it as a cluster config because of the label argocd.argoproj.io/secret-type: cluster. The actual output will be something like this:

apiVersion: v1
kind: Secret
metadata:
  annotations:
    managed-by: external-secrets
  labels:
    argocd.argoproj.io/secret-type: cluster
  name: argo-cd-k0-credentials
  namespace: argo-cd
# The base64 is decoded for the sake of the example.
data:
  name: argo-cd-k0
  server: https://argo-cd-k0.mgmt-demo.svc:443
  config: |
    {
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded from vCluster secret>",
        "certData": "<base64 encoded from vCluster secret>",
        "keyData": "<base64 encoded from vCluster secret>",
        "serverName": "argo-cd-k0"
      }
    }
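
To verify that everything worked, you can list the cluster Secrets that Argo CD picks up (assuming Argo CD runs in the argo-cd namespace, as in the example above), or check with the Argo CD CLI if you are logged in:

kubectl get secrets -n argo-cd -l argocd.argoproj.io/secret-type=cluster
argocd cluster list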

That's it! Enjoy, and don't forget to star the KubeZero project on GitHub :-)

Continue Reading »

05/05/2025

How to define GitHub Actions multiline environment variable or output - CI/CD

I'm not sure if that was a hack or an undocumented feature back then, but I can find it now in the GitHub Actions docs.

But in the past, I needed to copy a short multiline file between GitHub Actions jobs, and I didn't want to bother with extra steps to stash/unstash files (i.e., upload/download artifacts), so I found that you can define a multiline GitHub Actions output!

It was as easy as this:

jobs:
  job1:
    runs-on: ubuntu-latest
    # Expose the step output at the job level so other jobs can read it via "needs".
    outputs:
      JSON_RESPONSE: ${{ steps.set-json.outputs.JSON_RESPONSE }}
    steps:
      - name: Set multiline value in bash
        id: set-json
        run: |
          # The curly brackets are just Bash syntax to group commands
          # and are not mandatory.
          {
              echo 'JSON_RESPONSE<<EOF'
              cat my-file.json
              echo EOF
          } >> "$GITHUB_OUTPUT"

Of course, you need to be sure that the delimiter EOF doesn't occur within the value.
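
If you can't guarantee that, a common trick is to generate a random delimiter inside the run step instead of a fixed EOF; a rough sketch of the step body:

# Use a random delimiter so the value can never contain it.
delimiter="$(openssl rand -hex 8)"
{
    echo "JSON_RESPONSE<<${delimiter}"
    cat my-file.json
    echo "${delimiter}"
} >> "$GITHUB_OUTPUT"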

Then you can read that output in another job as:

[...]
  job2:
    needs: job1
    runs-on: ubuntu-latest
    steps:
      - name: Get multiline value in bash
        run: |
          echo "${{ needs.job1.outputs.JSON_RESPONSE }}"

That's it! Enjoy! ♾️

Continue Reading »

03/03/2025

Research Paper: Building a Modern Data Platform Based on the Data Lakehouse Architecture and Cloud-Native Ecosystem

Building a Modern Data Platform Based on the Data Lakehouse Architecture and Cloud-Native Ecosystem

Finally, after months of hard work, I have published my first research paper in a double-blind peer-reviewed scientific journal by the international publisher Springer Nature 🙌

The paper is titled:

Building a Modern Data Platform Based on the Data Lakehouse Architecture and Cloud-Native Ecosystem

This research paper is the result of several months of work and is based on my master's thesis, which was published in 2023 (I received a Master of Science with Distinction in Data Engineering from Edinburgh Napier University).

The paper presents a practical application for data management without vendor lock-in, in addition to ensuring platform extensibility and incorporating modern concepts such as Cloud-Native, Cloud-Agnostic, and DataOps.

Why is this paper important? Because data is the backbone of Artificial Intelligence! In today's world, control over data means political and economic independence.

I would like to extend my sincere gratitude to the research team who contributed to this work, supported me, and shared their knowledge to help bring this paper to the highest quality. It was a truly enriching experience on many levels! 🙌

  • Dr. Peter Barclay: Head of the Data Engineering program at the School of Computing, Edinburgh Napier University.
  • Dr. Nikolaos Pitropakis, PhD: Associate Professor of Cybersecurity at the School of Computing, Edinburgh Napier University.
  • Dr. Christos Chrysoulas: Associate Professor in Software Engineering at Heriot-Watt University.

The research group chose these quotes from our respective languages/cultures to emphasize the importance of perseverance and diligence:


عِندَ الصَّباحِ يَحمَدُ القومُ السُّرَى
(In the morning, the people praise the night's journey)
Arabic Proverb

Αρχή ήμισυ παντός
(The beginning is half of everything)
Greek Proverb

Is obair latha tòiseachadh
(Beginning is a day's work)
Scottish Gaelic Proverb


I will write a community blog post about it soon :-)

Continue Reading »

22/02/2025

How Open Source Helped Me Step Up My DevOps Career - Presentation

2 days ago (20.02.2025), it was a pleasure to participate in the Open Source Summit 2025 in KSA.

My session was about participating in open source and how it helps you become a better DevOps engineer. In fact, the best DevOps engineers I have encountered possess T-shaped skills, which require diving into many areas, even outside their daily work topics.

How Open Source Helped Me Step Up My DevOps Career

It was nice to reflect on all those years of professional work and open-source contributions 🤩

Continue Reading »

31/12/2024

2024 Highlights

Finally, 2024 is over! Another crazy year, but I liked it! 🤩 This year was a bit more chill compared to previous ones, but I still did many new things.

Top 5 highlights in 2024

1. Career

In 2022, I formed the Distribution team at Camunda, which is responsible for building and deploying Camunda Platform 8 Self-Managed, an umbrella Helm chart with 10+ systems. In 2023, the main mission was to grow the team to 4 members. In 2024, onboarding was completed for all members, and the team became more autonomous and worked at full capacity. I enjoy the process of building and leading the team and am excited about the next steps in 2025. 🤩

2. Dynamic DevOps Roadmap

Dynamic DevOps Roadmap

In 2023, I wrote about why linear DevOps roadmaps are broken by default and then sowed the seed of the Dynamic DevOps Roadmap for becoming a DevOps Engineer.

In 2024, I finished 100% of the roadmap and launched the roadmap website, making it much easier to read and follow progress. I'm pretty happy that many people like this roadmap's approach over other roadmaps that just list tons of DevOps tools (which doesn't really work!). Here are some other highlights:

  • The roadmap repo got more than 1.6k stars on GitHub.
  • Added around 250 quiz questions (embedded in each module).
  • Added around 200 general interview questions.
  • Added interview best practices, which are just as important as technical skills.

Experience-Driven DevOps: Beyond Tools, Where Concepts Meet Real-World Challenges

A FREE pragmatic DevOps roadmap to kickstart your DevOps career in the Cloud Native era following the Agile MVP style! Whether you are a DevOps Engineer or a Software Engineer, this roadmap is all you need to start, grow, and expand!

3. Public Speaking

This year, I conducted some nice sessions (in Arabic) with awesome people.

As usual, JobStack_ continues to set the bar higher each year! It is an unmissable event for tech enthusiasts. I really enjoyed participating as both a speaker and a listener. 🤩

In November, I had 2 sessions: the kick-off session and another one specialized in DevOps.

Also, in December, I participated in a live podcast with Ahmed Elemam covering ⭐ How to start with the Dynamic DevOps Roadmap.

4. Activities

Besides these highlights, I had some nice stuff during the year. For example:

  • Updated my main website aabouzaid.com, listed my projects, blog posts, publications, tech talks, and more.
  • Traveled to 3 countries (Oman, Saudi Arabia, and the Dominican Republic) to attend tech and work-related events. I also had the chance to meet a lot of friends on those trips. It wasn't my first time in Saudi Arabia, but my last visit was too long ago; happy to be back again. 🤩
  • After 5 years with the OnePlus 6, I got a new phone. First, I tried the Samsung Galaxy S23 Ultra for a couple of months, a great device but super heavy! So I switched to the Samsung Galaxy S24, which is high-quality and perfect in size.
  • Since 2020, I've been using an ergonomic mouse (standard vertical) and keyboard (Microsoft Sculpt) and totally like them. Recently, I changed the mouse to the Logitech MX Vertical. It's pretty good for me; my only concern is that the clicks are a bit noisy!
  • Lost some weight, which is great 😄

And since we are on this topic, here are the top 5 visited blog posts in 2024!

Top 5 posts in 2024

  1. 2 ways to route Ingress traffic across namespaces - Kubernetes
  2. How to create Makefile targets with dynamic parameters and autocompletion - Make
  3. Validate, format, lint, secure, and test Terraform IaC - CI/CD
  4. Your DevOps learning roadmap is broken! - Career
  5. 3 ways to customize off-the-shelf Helm charts with Kustomize - Kubernetes

Almost the same as last year (2023): DevOps, Kubernetes, and Kustomize posts are still at the top.

What's next?

As usual, I don't plan the whole year in advance; I just set some high-level directions and then work on them as I go, in an Agile iterative style (yes, I use Agile for personal goals too).

What I do know is that I will focus on growing the Dynamic DevOps Roadmap community and work on the Career Growth and Advanced Topics section.

Previous Years


Enjoy, and looking forward to 2025! 🤩

Continue Reading »

08/08/2024

Bootstrap Cloud-Native bootstrappers like Crossplane with K3d - Automation

Crossplane Bootstrapper
I created a logo for the Crossplane Bootstrapper because all good projects deserve a logo. 😁

TL;DR

My contribution to K3d to support embedded files was one of the smoothest open-source contributions, although I needed to refactor my PR fully! I did that happily, thanks to the fruitful discussion with the K3d creator and maintainer, Thorsten Klein 🙌

Let's dive into that a bit more using the STAR method.

Situation

It has always been challenging to solve the "who initiates the initiator" dilemma (similar to infinite regress in philosophy). For example, what will monitor the monitoring system? Or what will back up the backup system?

The same applies to Cloud-Native tools that run only on Kubernetes, where you need an initial Kubernetes cluster to run those bootstrapping tools to create your resources (like Argo CD and Crossplane).

Task

I wanted to fix this issue once and for all, with a generic approach that works with many tools. I found that the best way to do that is to have a declarative way to set up the initial local cluster, which will then create the Cloud resources.

Action

I reviewed a couple of tools and found that the best tool to achieve that is K3d, a wrapper around K3s, Rancher's lightweight Kubernetes distribution. It's like KIND but way more customizable (e.g., it comes with a built-in Helm controller).

In March 2024, I made a K3d PR that added functionality to embed manifests in the K3d cluster configuration. Now, we can have one file to bootstrap the local cluster with bootstrapping tools ready to provision your Cloud resources.

In July 2024, K3d 5.7.0 was released with my feature, so I can use it to bootstrap Crossplane instead of a bunch of Makefiles.

I also created Crossplane Bootstrapper, which makes that process even easier.

Result

With my new feature, it's possible to have 1 YAML file using 1 tool to bootstrap the initial cluster, which will then create the rest of your Cloud resources.

Here is an example from Crossplane Bootstrapper. That solution works with any tool (e.g., Argo CD). It also supports external files, which is better for linting and so on.

---
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: crossplane-bootstrapper

# Cluster resources.
servers: 1
agents: 1

# Auto deployed manifests.
files:
  - description: Setup Crossplane
    destination: k3s-manifests/crossplane-bootstrapper.yaml
    nodeFilters:
      - "server:*"
    # Source as a file.
    # source: manifest-crossplane-bootstrapper.yaml
    # Source as an embedded manifest.
    source: |
      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: crossplane-system
      ---
      # Install Crossplane.
      apiVersion: helm.cattle.io/v1
      kind: HelmChart
      metadata:
        name: crossplane
        namespace: crossplane-system
      spec:
        repo: https://charts.crossplane.io/stable
        chart: crossplane
        targetNamespace: crossplane-system
        valuesContent: |-
          provider:
            packages:
            # Docs: https://marketplace.upbound.io/providers/upbound/provider-family-gcp
            - "xpkg.upbound.io/upbound/provider-gcp-gke:v1.0.2"
          configuration:
            packages:
            # Docs: https://marketplace.upbound.io/configurations/upbound/platform-ref-gcp
            - "xpkg.upbound.io/upbound/platform-ref-gcp:v0.9.0"
      ---
[...]
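
With that config saved to a file, a single command spins up the cluster and auto-deploys the embedded manifests (the file name here is just a placeholder):

k3d cluster create --config cluster-config.yaml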

That's it! Happy DevOpsing :-)

Continue Reading »

07/07/2024

Gomplate v4 is here! - Tools

gomplate logo

This year, one of my nice discoveries was gomplate, a fast template renderer supporting many data sources and hundreds of functions.

And finally, gomplate v4 was released in June 2024. It has many excellent new features and additions (v3 was released in 2018!).

I will not really cover much here as it has extensive documentation. The most basic example could be:

echo 'Hello, {{ .Env.USER }}' | gomplate
Hello, ahmed

But I totally like the idea of the separation between data input (data source) and rendering, so I can generate the data with any tool and then just leave the rendering to gomplate like:

Template file:

{{- $release := ds "release" -}}
{{- with $release -}}
Version: {{ .version }}
{{- end -}}

Rendering command:

# Export the variable so gomplate can read it from the environment.
export RELEASE_DATA_JSON='{"version": "1.0.0"}'
gomplate \
    --config .gomplate.yaml \
    --file RELEASE.md.tpl \
    --datasource release=env:///RELEASE_DATA_JSON?type=application/json

Of course, that's still super basic, but the idea here is that I could generate the data in the RELEASE_DATA_JSON env var with any tool, or even generate a JSON file and read it in gomplate.
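
As a side note, the .gomplate.yaml config file passed above can hold the input/output files (and even the datasources) declaratively; a minimal sketch could look like this:

---
inputFiles:
  - RELEASE.md.tpl
outputFiles:
  - RELEASE.md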

What's better? It's already available in asdf! Just add it directly or declaratively via asdf plugin manager.

asdf plugin-add gomplate
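
Then install it as usual (or pin a version declaratively in .tool-versions, e.g. "gomplate 4.0.0"):

asdf install gomplate latest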

If you want a decent tool for templating, then gomplate is your go-to tool.

Enjoy :-)

Continue Reading »

