07/07/2024

Gomplate v4 is here! - Tools

gomplate logo

This year, one of my nice discoveries was gomplate, a fast template renderer supporting many data sources and hundreds of functions.

And finally, gomplate v4 was released in June 2024. It has many excellent new features and additions (v3 was released in 2018!).

I won't cover much here, as it has extensive documentation. The most basic example is:

echo 'Hello, {{ .Env.USER }}' | gomplate
Hello, ahmed

What I really like, though, is the separation between data input (data sources) and rendering, so I can generate the data with any tool and leave the rendering to gomplate. For example:

Template file:

{{- $release := ds "release" -}}
{{- with $release -}}
Version: {{ .version }}
{{- end -}}

Rendering command:

export RELEASE_DATA_JSON='{"version": "1.0.0"}'
gomplate \
    --config .gomplate.yaml \
    --file RELEASE.md.tpl \
    --datasource release=env:///RELEASE_DATA_JSON?type=application/json

Of course, that's still super basic, but the idea is that I could generate the data for the RELEASE_DATA_JSON env var with any tool, or even generate a JSON file and read it from gomplate.
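As an illustration, the .gomplate.yaml referenced in the command above could hold those flags instead of the command line. This is a minimal sketch: the file names and datasource mirror the CLI example and are assumptions, not the post's actual config.

```yaml
# .gomplate.yaml - a sketch of a config equivalent to the CLI flags above
# (file names mirror the example; adjust to your setup).
inputFiles:
  - RELEASE.md.tpl
outputFiles:
  - RELEASE.md
datasources:
  release:
    url: env:///RELEASE_DATA_JSON?type=application/json
```

With that in place, rendering is just running gomplate with no extra flags.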

Even better, it's already available in asdf! Just add it directly, or declaratively via an asdf plugin manager.

asdf plugin-add gomplate

If you want a decent templating tool, gomplate is your go-to tool.

Enjoy :-)


31/12/2023

2023 Highlights


Image generated with Craiyon.

Finally, 2023 is over! What a year! One more crazy year, but it was the culmination of many actions I've started previously.

Starting with the fun facts... this is post no. 100!

That's after 9 years of tech blogging in English (plus another 10 in Arabic before that). It should have been 108 posts, since the plan was one post per month, but I missed some. Still not bad!

Top 5 highlights in 2023

1. Career

Image by vectorjuice on Freepik

In 2022, I formed the Distribution team at Camunda, which is responsible for building and deploying Camunda Platform 8 Self-Managed, an umbrella Helm chart with 10+ systems. In 2023, the biggest challenge was increasing the team headcount to 4 and building the workflows to handle that!

2. Academia

I've always been into data! So in 2020, I started a part-time master's in Data Engineering at Edinburgh Napier University. After almost 3 years, I was awarded a Master of Science with Distinction in Data Engineering 🎉. In June, I got the result after successfully defending my dissertation, titled Modern Data Platform with DataOps, Kubernetes, and Cloud-Native Ecosystem. Then in October, I traveled to Scotland to attend the graduation ceremony. All I can say today is that it was a great experience by all means! (That's actually why there were some unusual gaps between blog posts in 2023: I was working on the master's thesis.)

3. Mentorship

Dynamic DevOps Roadmap

Over the last 5 years, I mentored many people at different career stages (starting their first job, making a career shift, or moving to another work style or company). Almost every day, I see people struggling on their way into the DevOps field. I already wrote about why linear DevOps roadmaps are broken by default! So I decided to fix that and launched the Dynamic DevOps Roadmap to become a DevOps Engineer, under the DevOps Hive identity! The nice thing is that many people already like the idea and want to learn that way!

ℹ️ Check out the Dynamic Roadmap content ℹ️

4. Public Speaking

Jobstack is one of my favorite tech events of all time! And as usual, this year was awesome! As a speaker, I had 2 sessions, and as an attendee, I enjoyed many sessions on different topics.

5. Activities

Besides these highlights, I had some other nice activities during the year.

And since we are on this topic, here are the top 5 visited blog posts in 2023!

Top 5 posts in 2023

  1. 2 ways to route Ingress traffic across namespaces - Kubernetes
  2. Validate, format, lint, secure, and test Terraform IaC - CI/CD
  3. Your DevOps learning roadmap is broken! - Career
  4. Delete a manifest from Kustomize base - Kubernetes
  5. 3 ways to customize off-the-shelf Helm charts with Kustomize - Kubernetes

The same as last year, 2022, Kustomize posts are still at the top. That's because there isn't much content about it, even though it has been built into kubectl since v1.14! That's why I created the Kustomize Awesome list, a curated and collaborative list of awesome Kustomize resources.

But this year, 2023, for the first time, a blog post published in the same year appears in the top 5! It discusses how the linear DevOps learning roadmap is broken by default. That's why I wrote a follow-up post showing the solution, which suggests a better way to Become a DevOps Engineer with the Dynamic DevOps Roadmap. ⭐

What's next?

As usual, I don't plan the whole year in advance; I just set some high-level directions, then work on them as I go in an Agile, iterative style (yes, I use Agile for personal goals too).

What I know already is that I need to reward myself and take a good break after the master's 😁

Also, I want to put more effort into growing the DevOps Hive community to help more people land their first DevOps Engineer role!


Enjoy, and looking forward to 2024!


12/12/2023

Become a DevOps Engineer with the Dynamic DevOps Roadmap - Career

Dynamic DevOps Roadmap

A couple of months ago, I discussed why all linear DevOps learning roadmaps are broken by default (like roadmap.sh/devops)! In that post, I've discussed the issue, where it comes from, and the solution I tried for over 5 years!

The solution is simply switching to an MVP-style learning roadmap where you learn in iterations and touch multiple parts simultaneously (not necessarily equally). Each iteration should focus on a primary topic while exploring related side topics. So after a month, you already know about each topic in the roadmap, and after the 2nd month, you have the basics, and after the 3rd month, you have a good base, and so on.

Based on my mentorship experience in the last 5 years, I've concluded that any "tool-based" approach to deal with DevOps will fail miserably. The Cloud-Native landscape is getting bigger and bigger every day, and there is no way to deal with it that way.

For that reason, I've reviewed all the docs from the mentorships I held in the past (I worked with people starting their first job, making a career shift, or moving to another work style or company) and built the ultimate missing solution!

⭐ Check out the Dynamic Roadmap content ⭐

This roadmap is polymorphic, which means it's designed to work in different modes. It depends on how fast you want to go.

  1. Self-Learning Course: In this mode, you are not expected to have DevOps experience, and you want to go from zero to hero, transforming your knowledge to land your first job as a DevOps Engineer.
  2. Hands-on Project: In this mode, you have some experience with DevOps (usually between 1-2 years of work experience), but you want to step up your skills with real, hands-on, industry-grade projects to learn DevOps in a pragmatic manner.
  3. Mentorship Program: In this mode, the previous two modes are covered (meaning it could be only the project or the whole roadmap), but with support from a mentor! The project pairs you with a DevOps expert who will follow up on your progress and personalize your learning plan.

The whole idea is about the Learning by Doing method (aka Problem-based Learning), which is done in iterative phases where you learn as you go and cover the whole DevOps cycle like Code, Containers, Testing, Continuous Integration, Continuous Delivery, Observability, and Infrastructure.

The project is a work in progress; more details are in the status section.

I hope that the project helps more people start their DevOps engineering careers! The market is thirsty for it already!

⭐ Check out the Dynamic Roadmap content ⭐

Happy DevOps-ing :-)


12/11/2023

DevOps is not only a culture - Discussion Panel

Today was my second session at JobStack 2023, after yesterday's session titled "Platform Engineering: Manage your infrastructure using Kubernetes and Crossplane". This time, it was a discussion panel with Ahmad Aabed (CTO of Zyda).

For a long time, DevOps methodologies have been the driving force behind innovation and efficiency in modern software development. You've likely encountered the popular jargon: "DevOps is not a role; it is a culture"! However, the truth is that there is much more to DevOps!

This session dives deep into the pillars of DevOps, going beyond the cultural aspect to explore practices, technology stacks, mindset, and more. It is an open discussion where we share (and encourage you to share) experiences of implementing DevOps. In other words, what practical steps have you taken to embrace the power of DevOps in your organization?

A shot from the recording

Finally, I'd like to share a couple of posts I wrote before in that regard (DevOps methodologies in action):

Enjoy :-)


11/11/2023

Platform Engineering: Manage your infrastructure using Kubernetes and Crossplane - Presentation

Fresh out of the kitchen! As part of JobStack 2023, today I conducted a session about a great tool ... Crossplane, the open-source control plane!

I've been using Crossplane for over a year and a half, and it helped me a lot to manage my infrastructure without the need to use Terraform (I still love Terraform, but it wasn't the best for my use case).

In this session, I shed light on how Crossplane can unify infrastructure management within Kubernetes.

I gave a brief overview of how Crossplane extends the functionality of Kubernetes and allows you to create external infrastructure. You can create Cloud resources the same way you create Kubernetes resources! I really love its declarative, cloud-native, GitOps-friendly approach to code-driven infrastructure management.
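To give a concrete taste of that idea, here is a hypothetical Managed Resource (not from the session itself): an AWS S3 bucket declared like any other Kubernetes object. The group/version and fields follow the Upbound AWS provider's Bucket resource and may differ between provider versions.

```yaml
# A hypothetical Crossplane Managed Resource: an S3 bucket declared
# and applied like any Kubernetes object (kubectl apply / GitOps).
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: my-app-artifacts
spec:
  forProvider:
    region: eu-central-1
  providerConfigRef:
    name: default  # Points to the ProviderConfig holding the AWS credentials.
```

Once applied, the provider's controller reconciles the real bucket against this spec, exactly like any Kubernetes controller would.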

Agenda:

  1. Scenario
  2. What is Crossplane?
  3. What does it look like?
  4. Crossplane Concepts
  5. How Crossplane Works
  6. Pros and Cons
  7. Conclusion
  8. Resources
  9. Questions

Note: For more resources, check out this Awesome Crossplane list.



A shot from the recording 🙌
Watch the full session on YouTube: Platform Engineering: Manage your infrastructure using Kubernetes and Crossplane (Arabic)

Conclusion:

Crossplane is a great framework for managing infrastructure the Kubernetes way, and it benefits from that ecosystem (ArgoCD, Helm, Kustomize, etc.).

There are many use cases where it already fits perfectly. At the time of writing (November 2023), the Marketplace has numerous enterprise and community providers and configurations. Also, Composition Functions have graduated to beta.

However, it's a relatively new and still-evolving ecosystem, so it might not be the optimal solution for every workload. It will probably keep growing over time, though. So, if it's not a fit for you now, consider revisiting it in the future.


That's it, enjoy :-)


09/09/2023

🔀Merger🔀, a schemaless strategic merge plugin - Kustomize

Kustomize is a great tool. I've been using Kustomize for almost 4 years and am happy with it. However, it's known for its strict merging methods, where it needs to have an OpenAPI schema to merge files properly.

There were many use cases where I needed a more flexible way to merge resources (away from Kustomize's strict merging). So, I've developed a new Kustomize generator plugin (Containerized KRM and Exec KRM) that extends Kustomize's merge strategies (schemaless StrategicMerge).

I wanted to:

  • Generate multiple resources from a single resource without the need to multi-import (you can patch multiple resources with a single patch but not the other way around)
  • An easy way to merge CustomResources without the need to provide the OpenAPI schema for it (that's actually a lot of work)
  • An easy way to merge non-k8s resources and put them in a ConfigMap.
  • A way to split long files into smaller ones.

...

Say Hi to 🔀Merger🔀

Merger is a generator that provides schemaless merges with different strategies (StrategicMerge), like replace, append, and combine.

Here is an example:

apiVersion: generators.kustomize.aabouzaid.com/v1alpha1
kind: Merger
metadata:
  name: merge
  annotations:
    config.kubernetes.io/function: |
      container:
        image: ghcr.io/aabouzaid/kustomize-generator-merger
        mounts:
        - type: bind
          src: ./
          dst: /mnt
spec:
  resources:
  - name: example
    input:
      # Available options: overlay,patch.
      # - Overlay: Produce multiple outputs by merging each source with the destination.
      # - Patch: Produce a single output by merging all sources together then with the destination.
      method: overlay
      files:
        # The same as in the KRM container above, omit it if Exec KRM is used.
        root: /mnt
        sources:
        - src01.yaml
        - src02.yaml
        destination: dst.yaml
    merge:
      # Available options: replace,append,combine.
      # - Replace: All keys in source will merge and replace what's in the destination.
      # - Append: Maps from source merged with destination, but the lists will be appended from source to destination.
      # - Combine: Maps from source merged with destination, but the lists will be combined together.
      strategy: combine
    output:
      # Available options: raw.
      # In the next releases also ConfigMap and Secret will be supported.
      format: raw

For more details, check the common use cases section.
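For context, wiring the generator into a Kustomize build could look like the following sketch. The file name merger.yaml is an assumption (it's wherever you saved the Merger manifest above), and the build flag is required for KRM function plugins.

```yaml
# kustomization.yaml - invoking the Merger generator plugin (hypothetical layout).
# Build with: kustomize build --enable-alpha-plugins .
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
  - merger.yaml
```

Kustomize then runs the KRM container declared in the manifest's annotation and includes its output in the build.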

...

Some takeaways I learned while developing this project:

  • KubeBuilder markers can be used on the client side to auto-generate the OpenAPI YAML schema from the code.
  • Binary packers like UPX (with LZMA compression) can reduce the binary size by up to 80% compared to the standard build.
  • Cosign keyless artifact signing is pretty easy to add to the CI pipeline (no need to manage any extra keys).
  • OpenSSF Scorecard offers a great integration for assessing the security health of open-source projects.

Enjoy :-)


08/08/2023

Helm chart keyless signing with Sigstore/Cosign - DevSecOps

Software supply chain security has been one of the hot topics because of the continuous attacks and exploits seen in the last few years. So having proper security practices, like signing artifacts in today's CI/CD pipelines, is not a luxury! It has become a standard DevSecOps practice.

However, for a long time, implementing those practices took a lot of work. Hence, projects emerged like the Open Source Security Foundation (OpenSSF), which created security frameworks such as Supply-chain Levels for Software Artifacts (SLSA), and Sigstore, which created tools like Cosign, to standardize those practices and reduce that fatigue.

Today I want to shed some light on Helm chart signing with Cosign (part of the Sigstore project). This post focuses on GitHub Actions and covers 2 ways to sign Helm charts, depending on the type of Helm registry (simple or OCI-based).

1. Intro

Before starting, what actually is Cosign? I'll start by quoting the project website:

Cosign is a command line utility that can sign and verify software artifacts, such as container images and blobs.

Cosign aims to make signatures invisible infrastructure, and one of its main features is what's known as "keyless signing". Keyless signing means that rather than using keys like GPG/PGP, it associates identities via OpenID Connect (i.e., it authenticates against providers like Microsoft, Google, and GitHub to issue short-lived certificates binding an ephemeral key).

Cosign can be used as a CLI, and it's also integrated into other build/release tools like GoReleaser.

2. Identity setup

As mentioned, there is no need for signing keys here; Cosign will use an identity context like GitHub's. Let's take GitHub Actions as a base here, but it's worth mentioning that Cosign works with many identity providers.

From the GitHub Actions workflow point of view, you just need the job permission id-token: write, which allows the job to use OIDC and generate a JWT (which works as a key).

name: Sign Helm chart artifact
[...]
jobs:
  sign:
    name: Sign
    permissions:
      id-token: write
    [...]

You can find a full example in the repo: https://github.com/DevOpsHiveHQ/cosign-helm-chart-keyless-signing-example
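To put that permission in context, a fuller workflow skeleton might look like the following. This is a hypothetical sketch, not the repo's actual workflow: the trigger, chart packaging steps, and action versions are assumptions.

```yaml
# Hypothetical GitHub Actions workflow skeleton for keyless chart signing.
name: Sign Helm chart artifact
on:
  release:
    types: [published]
jobs:
  sign:
    name: Sign
    runs-on: ubuntu-latest
    permissions:
      id-token: write  # Lets the job request an OIDC token for keyless signing.
    steps:
      - uses: actions/checkout@v4
      - uses: sigstore/cosign-installer@v3
      - name: Sign the packaged chart
        run: cosign sign-blob my-app-1.0.0.tgz --bundle my-app-1.0.0.tgz.cosign.bundle
```

The linked example repo has the real, complete workflow.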

3.1. Keyless signing for simple Helm repository

A simple Helm repository is just a file called index.yaml, which references the actual chart file URLs. A popular approach is using Chart Releaser to host Helm charts via GitHub Pages and Releases.

The simplest way to sign a Helm chart using Cosign is to sign the artifact, then upload the signature to the GitHub release page.

First, sign the chart file:

# To be explicit, it's also possible to add '--oidc-provider=github-actions',
# but there is no need; cosign will discover the context if the GH job permission is correct.
cosign sign-blob my-app-1.0.0.tgz --bundle my-app-1.0.0.tgz.cosign.bundle

That creates a bundle file, my-app-1.0.0.tgz.cosign.bundle, which contains signing metadata like the signature and certificate (it's also possible to produce separate sig and pem files instead), and which should be uploaded to the GitHub release page.

Now anyone can download the Helm chart file and Cosign bundle file to verify its integrity:

cosign verify-blob my-app-1.0.0.tgz \
  --bundle my-app-1.0.0.tgz.cosign.bundle \
  --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
  --certificate-identity "https://github.com/DevOpsHiveHQ/cosign-helm-chart-keyless-signing-example/.github/workflows/sign.yaml@refs/heads/main"

If the file is valid, the command shows Verified OK; otherwise, it shows something like Error: none of the expected identities matched what was in the certificate ....

That said, it's still some work! It would help if Chart Releaser had native support for Cosign. Also, the Helm plugin helm-sigstore could help a bit, but you need to install that plugin first.

3.2. Keyless signing for OCI-based Helm repository

Using container images to store configuration, like Helm charts or Terraform modules, is a brilliant idea.

One of the biggest changes in Helm 3 was the ability to use container registries with OCI support to store and share chart packages. It started as an experimental feature, but as of Helm v3.8.0, OCI support is enabled by default.

So instead of that simple index.yaml, an OCI-based registry can be used as a Helm repository and chart storage too. Any hosted registry that supports OCI will work, like Docker Hub, Amazon ECR, Azure Container Registry, Google Artifact Registry, etc.

In that setup, Cosign works a bit differently: it signs the chart as an OCI image (the same way it signs Docker images) and stores the signature in the OCI repository. In that case, it only makes sense to sign the digest, not the tags, since the digest is immutable.

There are two steps: first, push the chart to the OCI-based Helm repository, which generates the chart digest; then sign that digest.

# After login to the registry using "helm registry login ...".
helm push my-app-1.0.0.tgz oci://ttl.sh/charts &> push-metadata.txt

CHART_DIGEST=$(awk '/Digest: /{print $2}' push-metadata.txt)
cosign sign -y "ttl.sh/charts/my-app@${CHART_DIGEST}"

Finally, to verify that:

cosign verify "ttl.sh/charts/my-app@${CHART_DIGEST}" \
  --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
  --certificate-identity "https://github.com/DevOpsHiveHQ/cosign-helm-chart-keyless-signing-example/.github/workflows/sign.yaml@refs/heads/main"

4. Recap

Nowadays, it's critical to have standard DevSecOps practices in the whole SDLC, and validating integrity is one of the most essential practices in securing the software supply chain.

Any publicly distributed artifact should be signed to ensure its origin, and Helm charts are no different. Cosign keyless signing makes it much easier to apply such practices without the hassle of managing signing keys.


07/07/2023

My Master's Dissertation: Modern Data Platform with DataOps, Kubernetes, and Cloud-Native Ecosystem

Almost 3 years ago (September 2020), I enrolled in a part-time master's program in data engineering at Edinburgh Napier University. Finally, two weeks ago, I got an email informing me that I successfully completed my master's, and the program board of examiners awarded me a Master of Science with Distinction in Data Engineering 🎉🎉🎉

My graduation ceremony would have been today, but I postponed it to October 2023 for some personal matters. So it's just a matter of time till the official graduation.

So today, I'd like to share my master's dissertation: Modern Data Platform with DataOps, Kubernetes, and Cloud-Native Ecosystem.

The dissertation builds a proof of concept for the core of a Modern Data Platform, using DataOps, Kubernetes, and the Cloud-Native ecosystem to build a resilient Big Data platform based on the Data Lakehouse architecture, which is the basis for Machine Learning (MLOps) and Artificial Intelligence (AIOps).

It was a super challenging topic, given that the Data Lakehouse architecture emerged only in 2020! But it was great to dive into it. I've been into data for years, and it's exciting to step up my skills from T-Shaped to Pi-Shaped (DevOps and DataOps) :-)


06/06/2023

Your DevOps learning roadmap is broken! - Career

...
⚠️ Update ⚠️
If you are looking for the solution for this dilemma, then check this out: How to become a DevOps Engineer in 2024 with the Dynamic DevOps Roadmap.
...

Can you read it? Probably not, it's already broken!

As of 2023, the DevOps Engineer role remains one of the top 10 most in-demand jobs across all industries (not just the tech field!). That has been the case for at least the last 5 years, and it's expected to continue for the foreseeable future.

While DevOps is a hot topic all the time, it's particularly hard to start as a DevOps engineer in your first job. Many engineers believe it's not possible at all to begin as a DevOps professional without first working as a developer or in operations (I totally disagree with that!).

Almost every day, I see people struggling on their way to start as fresh/junior DevOps engineers. They usually follow some roadmap (typically roadmap.sh/devops). But still, they cannot land their first job, and sadly, many of them eventually give up!

This blog post explains why most roadmaps don't work for DevOps roles and won't help you land your first job as a DevOps engineer. It also discusses the best way to start in a DevOps role without prior work experience. While it might not work for everyone, it has been successful with everyone I have mentored over the last couple of years.


ToC

TL;DR

In 2023, starting in a DevOps engineer role is challenging because the DevOps model has various implementations and patterns. It's even more complicated (but still possible) if it's your first job without previous software industry experience. Yet many learning roadmaps, like roadmap.sh/devops, still follow a linear path (i.e., learn one topic to the end, then move to another, and so on), which doesn't work well for the DevOps role because a skilled DevOps engineer has T-shaped skills. Adopting a dynamic MVP learning roadmap increases your chances of entering the market and starting your first job as a DevOps engineer without previous hands-on software experience.


DevOps Topologies

First, let's start with the DevOps model itself. Being a high-level methodology that can be implemented in various ways makes it super challenging. Hence, the DevOps engineer role has no unified definition or standard requirements.

I will not delve into the cliché "DevOps is not a role, it is a culture" (because in reality, it doesn't work like that! DevOps is not just a culture; it is also a role), but here I want to emphasize that not all DevOps engineers are the same!

Here I want to mention "DevOps Topologies", which covers the different team structures that implement DevOps and shows many bad and good DevOps patterns.

Given that no one-size-fits-all team topology works for every organization, it's wrong to say it's not possible to start your career as a DevOps engineer. But there are situations where it's extremely challenging to do so, such as being the sole DevOps engineer in a team or even in a company.

So, what can you do to increase your chances of landing your first job as a DevOps engineer? You should have T-shaped skills and leave no stone unturned in your learning journey!

T-Shaped Skills

The "T-Shaped skills" helps DevOps engineers to efficivtly handle various challanges.

T-shaped skills refer to combining broad and deep skills in a specific field. The horizontal bar of the "T" represents a broad range of general knowledge and skills across different disciplines or areas, and the vertical stem of the "T" represents deep expertise in a specific area. It's simply a mix of being a specialist and a generalist at the same time!

T-shaped skills will help you work in companies with different DevOps patterns: you can easily transition between different areas of the DevOps spectrum. Not only that, but they will also help you handle new challenges effectively. In fact, the best DevOps engineers I have come across possessed T-shaped skills.

Does that mean there's no I-shaped DevOps engineer who specializes in certain areas with little knowledge of the others? I would say it's possible, but it may limit the opportunities and companies available to you.

Actually, as you progress in your career, it's better to develop more specialization (i.e., more vertical stems), and after a couple of years in the industry, your next step should be Pi-Shaped skills (search also for M-Shaped and Comb-Shaped skills, but it's a topic for another post).

To summarize this section: you should aim to gain exposure to various areas of DevOps practices and technologies without delving too deep into each one, yet you need to dive in-depth into some of them (according to the market or organizations you target). Because of that, your roadmap shouldn't be linear; it should follow the MVP-style approach!

MVP Learning Roadmap

You've probably heard of the "Minimum Viable Product", or MVP: a basic version of a product with enough features to satisfy early users and gather feedback for further development, commonly associated with Agile methodologies. Interestingly, this approach can be applied to learning roadmaps too!

Over the years, I've seen people who want to start their career in DevOps but are completely lost! That's probably because they try to progress in a linear fashion. Typically, they follow some roadmap like roadmap.sh/devops and learn the topics one by one, top down. I.e., they spend a couple of weeks on Linux, then a month learning a programming language, then some time on containers, and more time on Kubernetes. Several months pass, and they find themselves stuck in the middle of the roadmap, still unable to get any job because many topics remain untouched!

The truth is that the linear vertical learning path DOES NOT WORK in the DevOps field! You need to follow an MVP-style learning roadmap where you learn in iterations and touch multiple parts at the same time (not necessarily equally). Each iteration should focus on a primary topic while exploring related side topics. So after a month, you already know about each topic in the roadmap, and after the 2nd month, you have the basics, and after the 3rd month, you have a good base, and so on.

By adopting an MVP-style learning roadmap, you can ensure that you cover various aspects of DevOps while continuously building upon your knowledge. This approach allows for a more well-rounded understanding and a better chance of landing your first job as a DevOps engineer.

3 different high-level roadmap models with different approaches to learning; the first two from the left follow the MVP style, and the last one is linear.

  • The model on the left iterates horizontally in equal chunks over each area (good). It's simple and straightforward: each area (e.g., OS and code, containers and cloud, etc.) has a fixed weight based on its importance in daily work. You don't need to think much about the next step; from left to right, you learn about each area and reach basic knowledge in all of them.

  • The model in the center iterates horizontally in dynamic chunks over each area (better). It's the same as the previous one but a bit more advanced, as it needs hands-on knowledge to decide the right weight for each area (based on many factors, like the targeted market or the learner's skills and background). This approach is more efficient; however, it requires guidance from an experienced DevOps engineer (e.g., a mentor or career coach) to define the weights correctly. That's even more critical when you have constraints like time (which are usually present in career shifts).

  • The model on the right iterates vertically over each area (bad). Don't do that! It has several drawbacks. For example, it delays your market fit: in most cases, you cannot work as a DevOps engineer until you have completed all areas. Additionally, there is no space to review your learning approach or get holistic feedback in general. Finally, it misses the connections between different areas; at the end of the day at work, you don't use a single skill at a time. I've actually seen many disappointed people in the middle of the roadmap because they still didn't get the full picture.

So to ensure a more effective learning journey, it's recommended to adopt an MVP-style learning roadmap that allows for iterative learning across multiple areas while also considering the relevance and importance of each area in real-world DevOps work.

The Solution!

⭐ Check out the Dynamic Roadmap content ⭐

A dynamic MVP-style learning roadmap is one of the best ways to start as a DevOps engineer.

Let's put everything together. Based on my experience mentoring people at different stages (starting their first job, making a career shift, or moving to another work style or company), I have found the approach of using a dynamic MVP-style roadmap, with hands-on projects designed by an experienced DevOps engineer, to be highly successful. That means each project covers all the DevOps areas used on the job. It's also essential to understand the targeted market and organizations because, with different DevOps topologies, the DevOps engineer role requirements vary.

In conclusion, to start working as a DevOps engineer, you don't need to know "everything" about the software development life cycle (SDLC), nor do you need to start as Dev or Ops and then switch to DevOps. In many DevOps topologies, you can secure your first job as a DevOps engineer if you invest enough time in learning (not only the technical aspects) and follow an MVP-style roadmap. And undoubtedly, having a senior DevOps engineer on both ends (during your learning and in the company where you apply) will make your start much easier.


Hello, my name is Ahmed AbouZaid, I'm a passionate Tech Lead DevOps Engineer. 👋

With 16+ years of open-source contributions, 12+ years of professional hands-on experience in DevOps, and an M.Sc. in Data Engineering from Edinburgh Napier University (UK), I enjoy facilitating the growth of both businesses and individuals.

I specialize in Cloud-Native and Kubernetes. I'm also a Free/Open source geek and book author. My favorite topics are DevOps transformation, automation, data, and metrics.


