Sunday, February 28, 2021

Github CI/CD with deployment to Kubernetes

In 2020 I did a webcast and blog post about doing CI/CD with Azure DevOps and Kubernetes. It was as much a how-to as it was evangelism about the power and elegance of containerized build and deployment automation.

This installment goes through the exact same scenario, this time in Github. I'm not going to be "selling" Kubernetes and CI/CD too much - if you haven't been sold on these things by 2021 you probably never will be. Rather, the motivation this time is an exploration of the impact Github Actions is going to have on engineers who are currently using Azure DevOps.

All the code is available, well, on Github: skiryazov/AksCiCdDemo: AKS CI/CD Demo Project (github.com)

As you are probably aware, Microsoft is now offering two CI/CD platforms that are, for most intents and purposes, identical. Are they going to keep investing in both of them indefinitely? Probably not, and when the dust settles the one left standing will be Github - this much is clear. What is not clear is how long it will take for the dust to settle, but my personal guess would be about 5 years. Thus, you can do worse than starting to invest in Github Actions knowledge today.

With this out of the way, let's focus on the task at hand - deploying our app to Kubernetes continuously. Here are the high-level steps:

  1. Prepare your app
  2. Provision your cloud resources
  3. Add a containerized build
  4. Deploy to Kubernetes

Now let's dive into the details:

1. Prepare your app

At the end of this step you should have a working web app published in a Github repo - that's all. You can use any git client, dev environment and programming language. I've used an ASP.NET Core project in Visual Studio, but with containers it really doesn't matter one iota what exactly you have, as long as you know how to build and run it from the command line.

2. Provision your cloud resources

In this example we use Azure but it can be adapted to other providers too. If you don't have a subscription you can use, go right ahead and create one - Microsoft will give you some free usage. If you have a Visual Studio subscription from work, it might also come with some free Azure credits. From here on I assume you've done that, have the az cli installed locally, and have your desired subscription selected.
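
Selecting the subscription is just a couple of commands (the subscription name below is a placeholder - put in your own):

az login
az account set --subscription "<your-subscription-name-or-id>"
az account show --output table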

All of this can be done via the portal, PowerShell, ARM templates, terraform, you name it - but let's stick to az cli for consistency.

2.1 Create an ACR instance

In this sample we'll use a Premium tier ACR as we need the Scope Maps in order to use the authentication method chosen - it can be adapted to work with the Standard tier.

ACR_NAME="smartdev" # Container registry name RES_GROUP="Github-CiCd-Demo" # Resource Group name az group create --resource-group $RES_GROUP --location westeurope az acr create --resource-group $RES_GROUP --name $ACR_NAME --sku Premium --location westeurope

2.2 Create an AKS cluster

az aks create \
  --resource-group Github-CiCd-Demo \
  --name github-demo-cluster \
  --node-count 2

2.3 Connect to your new cluster

If you've already used kubectl locally for managing other clusters feel free to keep using it but you can also use the cloud shell on the Azure portal that comes with kubectl preinstalled.

Just navigate to the cluster and click Connect - all will be explained to you.
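
If you'd rather wire up your local kubectl instead, the gist of what that Connect blade tells you is (using the names from step 2.2):

az aks get-credentials --resource-group Github-CiCd-Demo --name github-demo-cluster
kubectl get nodes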

3. Add a containerized build

3.1 Add a Dockerfile

I like using the ASP.NET Core dummy app for samples like these as it comes with the multi-stage docker-based build out of the box, and it works literally without touching it at all. For other platforms you can find such docker samples easily too, though they might need a bit of tweaking to get them to work.

Check it out, there is nothing particularly exciting: https://raw.githubusercontent.com/skiryazov/AksCiCdDemo/main/Dockerfile 

It does "dotnet restore" and then "dotnet build" in a temp container (with the full .NET SDK), which then gets thrown away and you ship the final container image based on the ASP.NET runtime (no SDK).

3.2 Add a Github workflow

Go to Actions -> New Workflow and select Manual Workflow


There might be a template that gives us some of what we need out of the box but let's do it manually to see what's needed.

After this is done you'll have a build.yaml file (under .github/workflows) that defines your CI build, which you can edit either locally with your favourite IDE and push, or directly within Github (which results in the same outcome: commit + push).

3.3 Create a service principal

This is the civilized, enterprise-grade way to connect to ACR, especially when you are already using Azure AD. Further down this post we'll also see the old-fashioned way with username and password:

Authenticate with service principal - Azure Container Registry | Microsoft Docs

At the end of this process you'll have a service principal ID and password. These, along with a few other bits from your Azure setup, will go into the next step.
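
Condensed, the linked docs boil down to something like this - the SP name is just an illustrative choice, and the acrpush role is what lets the pipeline push images:

ACR_REGISTRY_ID=$(az acr show --name smartdev --query id --output tsv)
SP_PASSWORD=$(az ad sp create-for-rbac --name acr-cicd-demo-sp --scopes $ACR_REGISTRY_ID --role acrpush --query password --output tsv)
SP_APP_ID=$(az ad sp list --display-name acr-cicd-demo-sp --query "[].appId" --output tsv)
echo "ACR_PRINCIPAL (service principal ID): $SP_APP_ID"
echo "ACR_PRINCIPAL_PASSWORD: $SP_PASSWORD"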

3.4 Login to ACR

In this step we see the difference between config values and secrets. Handling config is a broad topic in itself, which we aren't going to get into right now, but in its simplest form you can store these config values in your git repository and they define where and how your code runs. 

Secrets are used in a similar fashion - they are passed as inputs to the same CI/CD steps in your build.yaml - but they can't be stored in source control. Instead we use Github Secrets: we store the values there, grant access on a need-to-know basis, and in our code we refer to them only by name:

 We do Settings -> Secrets:

... and there you can create Repository Secrets and give them names. In our case study we'll need to add the service principal password in there under the name ACR_PRINCIPAL_PASSWORD.
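
If you prefer the command line, recent versions of the GitHub CLI can do the same thing (it will prompt you to paste the value):

gh secret set ACR_PRINCIPAL_PASSWORD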

The other values that we'll need to complete the ACR login step are as follows:

  • Service principal ID - the appId (a GUID) returned by the script that creates the SP
  • Tenant - your Azure AD Tenant ID (you can see it on the portal, among other places)
  • Registry - the short name of your ACR instance (without the azurecr.io part)
  • Repository - the name of your container image (in docker's weird lingo this is called a container repository)

It's best to define these as variables at the workflow level, ending up with a build definition along these lines:

name: CI
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:

env:
  REGISTRY_NAME: smartdev
  ACR_PRINCIPAL: 9075e3ce-7d6d-434b-b21b-68ea1830455c
  ACR_PRINCIPAL_TENANT: 86d52980-68e9-4166-840a-04ef0494ec2c
  APP_NAME: github-cicd-prep  

jobs:
  build:
    runs-on: ubuntu-latest
    steps:

      - name: Checkout the repo
        uses: actions/checkout@v2
      - name: Login to ACR
        id: ecr
        uses: elgohr/acr-login-action@master
        with:
          service_principal: ${{ env.ACR_PRINCIPAL }}
          service_principal_password: ${{ secrets.ACR_PRINCIPAL_PASSWORD }}
          tenant: ${{ env.ACR_PRINCIPAL_TENANT }}
          registry: ${{ env.REGISTRY_NAME }}
          repository: ${{ env.APP_NAME }}

See how we've loaded all the inputs into variables upfront - this will help you separate my personal settings from the ones you are actually going to use. This way everything after jobs: will be the same in your case; only the config at the top will differ.

3.5 Build and push the image

Now we just need to add a step to run docker to build and push our image:

      - name: Publish to Registry
        uses: elgohr/Publish-Docker-Github-Action@master
        with:
          name: ${{ env.APP_NAME }}
          username: ${{ steps.ecr.outputs.username }}
          password: ${{ steps.ecr.outputs.password }}
          registry: ${{ env.REGISTRY_URI }}
          tags: "${{ env.VERSION }}"

You can see how we've consumed the outputs from the previous step in this one - it spits out a username and password that we can use to push the image.

Another new addition is the VERSION variable. I've added this one to the env block at the top, based on the build number, so that it auto-increments on every CI pipeline run:

   VERSION: v1.0.0-beta${{ github.run_number }}

In this example I've used the open source Github actions by Lars (elgohr). There are the "official" Microsoft ones out there too but I found these easier to get off the ground.

After you've pushed this you can go check out the progress of your workflow:

... and when you get your green light you can head to ACR to check out the container images available - a new one should have popped up:
Click on the repository name and you'll see all versions available, represented as tags.
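
The same check can be done from the command line, if you prefer, using the registry and image names from our env block:

az acr repository list --name smartdev --output table
az acr repository show-tags --name smartdev --repository github-cicd-prep --output table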

4. Deploy to Kubernetes

4.1 Create a namespace

In a mature CI/CD setup you would have automated this step too but for the purposes of the demo I created it with Azure Cloud Shell manually:

kubectl create namespace github-test

4.2 Create an image pull secret

We've already talked about Github secrets, but this is a different type of secret - a Kubernetes secret, which works in much the same way but is stored in Kubernetes, not in Github. It's a credential generated from the container registry and stored within k8s so that the cluster can fetch its container images when needed:
      - name: Create image pull secret
        uses: azure/k8s-create-secret@v1
        with:
          container-registry-url: ${{ env.REGISTRY_URI }}
          container-registry-username: ${{ secrets.REGISTRY_USERNAME }}
          container-registry-password: ${{ secrets.REGISTRY_PASSWORD }}
          secret-name: ${{ env.SECRET }}
          namespace: ${{ env.NAMESPACE }}
          force: true

The username and password are stored as Github secrets, just like the principal password above, and we get the values, for instance, from the Azure portal:
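
If you've enabled the admin user on the registry, the same username and password can also be fetched with the az cli:

az acr update --name smartdev --admin-enabled true
az acr credential show --name smartdev

Note also that this step refers to a few more workflow-level variables, which belong in the env block at the top next to the ones we defined earlier - the secret name here is just an illustrative choice:

  REGISTRY_URI: smartdev.azurecr.io
  NAMESPACE: github-test
  SECRET: acr-image-pull-secret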


4.3 Add a Kubernetes yaml file

Describing what exactly all the lines mean would take us deeper into Kubernetes details than the scope of this post calls for, but the file provided with this sample will work in exactly the same way in your case, even if the app is totally different - it only cares about the name of the container image.
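
For orientation, the rough shape of such a manifest is a Deployment plus a LoadBalancer Service, along these lines - the real k8s-deployment.yaml in the repo is the authoritative version, and the #{VERSION}# token is explained in the next step:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: github-cicd-prep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: github-cicd-prep
  template:
    metadata:
      labels:
        app: github-cicd-prep
    spec:
      containers:
        - name: github-cicd-prep
          image: smartdev.azurecr.io/github-cicd-prep:#{VERSION}#
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: github-cicd-prep
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: github-cicd-prep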


4.4 Replace version number tag

One non-standard bit you might spot in the yaml file is the #{VERSION}# tag. We'll need one more piece of open source magic to make sure this is replaced by the actual version number as defined in the build yaml - the replace-tokens task by Christopher Schleiden:

      - name: Replace version number in k8s yaml file
        uses: cschleiden/replace-tokens@v1
        with:
          files: '["**/*.yaml"]'
        env:
          VERSION: ${{ env.VERSION }}

4.5 Deploy

And now, ladies and gentlemen, the moment you've all been waiting for - the deployment! With all the groundwork that we've laid already - it's a matter of just adding one more simple step:

      - name: Kubernetes deployment
        uses: azure/k8s-deploy@v1
        with:
          manifests: |
            k8s-deployment.yaml
          images: |
            ${{ env.REGISTRY_URI }}/${{ env.APP_NAME }}:${{ env.VERSION }}
          imagepullsecrets: |
            ${{ env.SECRET }}
          namespace: ${{ env.NAMESPACE }}

REGISTRY_URI in this case is the full name of the registry, with "azurecr.io" at the end.
Now we push, wait for the workflow to finish and, if it's green, we're seconds away from seeing the fruit of our labour. We only need to grab the public IP that AKS automagically provisioned for us, by going into the cloud shell and running kubectl get services -A

In my case the output lists the services across all namespaces - the EXTERNAL-IP of the LoadBalancer service in the github-test namespace is the one we're after.

With this IP in hand, we can now go bask in the glory of our success:

Congratulations! You can now go update your CV with the following keywords: Github Actions, CI/CD, docker, kubernetes - and it will be more deserved than it was for some of the candidates I've interviewed.
