Sunday, February 28, 2021

Github CI/CD with deployment to Kubernetes

In 2020 I did a webcast and blog post about doing CI/CD with Azure DevOps and Kubernetes. That was as much a how-to as it was evangelism about the power and elegance of containerized build and deployment automation.

This installment goes through the exact same scenario, this time in Github. I'm not going to be "selling" Kubernetes and CI/CD too much - if you haven't been sold on these things by 2021 you probably never will be. Rather, the motivation this time is to explore the impact Github Actions is going to have on engineers who are currently using Azure DevOps.

All the code is available, well, on Github: https://github.com/skiryazov/AksCiCdDemo (AKS CI/CD Demo Project)

As you are probably aware, Microsoft is now offering two CI/CD platforms that are, for most intents and purposes, identical. Are they going to keep investing in both of them indefinitely? Probably not, and when the dust settles the one left standing will be GitHub - this much is clear. What is not clear is how long it will take for the dust to settle; my personal guess would be about 5 years. Thus, you can do worse than starting to invest in Github Actions knowledge today.

With this out of the way, let's focus on the task at hand - deploying our app to Kubernetes continuously. Here are the high-level steps:

  1. Prepare your app
  2. Provision your cloud resources
  3. Add a containerized build
  4. Deploy to Kubernetes

Now let's dive into the details:

1. Prepare your app

At the end of this step you should have a working web app published in a Github repo - that's all. You can use any git client, dev environment and programming language. I've used an ASP.NET Core project in Visual Studio, but with containers it really doesn't matter one iota what exactly you have, as long as you know how to build and run it from the command line.

2. Provision your cloud resources

In this example we use Azure, but it can be adapted to other providers too. If you don't have a subscription you can use - go right ahead and create one; Microsoft will give you some free usage. If you have a Visual Studio subscription from work, it might also come with some free Azure credits. From here on I assume you've done that, have the az cli installed locally and have your desired subscription selected.
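For completeness, getting logged in and selecting the subscription looks like this (the subscription name is a placeholder - use your own):

```shell
az login

# List your subscriptions and pick the one to use
az account list --output table
az account set --subscription "My Subscription Name"
```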

All of this can be done via the portal, PowerShell, ARM templates, terraform, you name it - but let's stick to az cli for consistency.

2.1 Create an ACR instance

In this sample we'll use a Premium tier ACR, as we need Scope Maps for the authentication method chosen - it can be adapted to work with the Standard tier.

ACR_NAME="smartdev"          # Container registry name
RES_GROUP="Github-CiCd-Demo" # Resource group name

az group create --resource-group $RES_GROUP --location westeurope
az acr create --resource-group $RES_GROUP --name $ACR_NAME --sku Premium --location westeurope

2.2 Create an AKS cluster

az aks create \
  --resource-group Github-CiCd-Demo \
  --name github-demo-cluster \
  --node-count 2

2.3 Connect to your new cluster

If you've already used kubectl locally for managing other clusters feel free to keep using it but you can also use the cloud shell on the Azure portal that comes with kubectl preinstalled.

Just navigate to the cluster and click Connect - all will be explained to you:
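If you go the local route, az cli can merge the cluster credentials into your kubeconfig - using the names from step 2.2:

```shell
az aks get-credentials --resource-group Github-CiCd-Demo --name github-demo-cluster

# Sanity check - should list the two nodes we provisioned
kubectl get nodes
```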

 


3. Add a containerized build

3.1 Add a Dockerfile

I like using the ASP.NET Core template app for samples like these, as it comes with a multi-stage docker-based build out of the box that works literally without touching it at all. For other platforms you can find such docker samples easily, though they might need a bit of tweaking to get them to work.

Check it out, there is nothing particularly exciting: https://raw.githubusercontent.com/skiryazov/AksCiCdDemo/main/Dockerfile 

It runs "dotnet restore" and "dotnet build" in a temporary container (with the full .NET SDK), which then gets thrown away, and you ship a final container image based on the ASP.NET runtime alone (no SDK).
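For reference, the Visual Studio-generated multi-stage build looks roughly like this - a sketch, not the exact file from the repo; the image tags and the MyWebApp project name are placeholders for your own:

```dockerfile
# Build stage: full .NET SDK, thrown away after publishing
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o /app/publish

# Final stage: ASP.NET runtime only, no SDK
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyWebApp.dll"]
```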

3.2 Add a Github workflow

Go to Actions -> New Workflow and select Manual Workflow


There might be a template that gives us some of what we need out of the box but let's do it manually to see what's needed.

After this is done you'll have a build.yaml file that defines your CI build, which you can edit either with your favourite IDE locally and push, or directly within github (which will result in the same outcome: commit + push).

3.3 Create a service principal

This is the civilized, enterprise-grade way to connect to ACR, especially when you are already using Azure AD. Further down this post we'll also see the old-fashioned way, with a username and password:

Authenticate with service principal - Azure Container Registry | Microsoft Docs

At the end of this process you'll have a service principal ID and password. These, along with a few other bits from your Azure setup, will go into the next step.
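The linked docs walk you through it step by step; the gist, as an az cli sketch (the principal name here is a placeholder):

```shell
# Assumed names - adjust to your setup
ACR_NAME="smartdev"
SP_NAME="github-cicd-sp"

# Resource ID of the registry, used to scope the principal
ACR_ID=$(az acr show --name $ACR_NAME --query id --output tsv)

# Create the service principal with push rights on the registry;
# the output contains the appId (principal ID) and password
az ad sp create-for-rbac --name $SP_NAME --scopes $ACR_ID --role acrpush
```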

3.4 Login to ACR

In this step we see the difference between config values and secrets. Handling config is a broad topic in itself, which we aren't going to get into right now, but in its simplest form you can store these config values in your git repository and they define where and how your code runs. 

Secrets are used in a similar fashion - they are sent as input to the same CI/CD steps in your build.yaml - but they can't be stored in source control. What we do instead is use Github Secrets: we store them there, grant access on a need-to-know basis, and in our code we refer to them only by name:

 We do Settings -> Secrets:

... and there you can create Repository Secrets and give them names. In our case study we'll need to add the service principal password there, under the name ACR_PRINCIPAL_PASSWORD.
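If you prefer the command line, the same can be done with the Github CLI - a sketch, assuming a recent gh that supports the secret subcommand:

```shell
# Prompts you to paste the secret value interactively
gh secret set ACR_PRINCIPAL_PASSWORD --repo skiryazov/AksCiCdDemo

# ... or pipe the value in non-interactively
echo -n "$SP_PASSWORD" | gh secret set ACR_PRINCIPAL_PASSWORD --repo skiryazov/AksCiCdDemo
```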

The other values that we'll need to complete the ACR login step are as follows:

  • Service principal name - obtained via the PowerShell script that creates the SP
  • Tenant - your Azure AD Tenant ID (you can see it on the portal, among other places)
  • Registry - the short name of your ACR instance (without the azurecr.io part)
  • Repository - the name of your container image (in docker's weird lingo this is called a container repository)

It's best to define these as variables at the workflow level, ending up with a build definition along these lines:

name: CI
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  workflow_dispatch:

env:
  REGISTRY_NAME: smartdev
  ACR_PRINCIPAL: 9075e3ce-7d6d-434b-b21b-68ea1830455c
  ACR_PRINCIPAL_TENANT: 86d52980-68e9-4166-840a-04ef0494ec2c
  APP_NAME: github-cicd-prep  

jobs:
  build:
    runs-on: ubuntu-latest
    steps:

      - name: Checkout the repo
        uses: actions/checkout@v2
      - name: Login to ACR
        id: ecr
        uses: elgohr/acr-login-action@master
        with:
          service_principal: ${{ env.ACR_PRINCIPAL }}
          service_principal_password: ${{ secrets.ACR_PRINCIPAL_PASSWORD }}
          tenant: ${{ env.ACR_PRINCIPAL_TENANT }}
          registry: ${{ env.REGISTRY_NAME }}
          repository: ${{ env.APP_NAME }}

See how we've loaded all the inputs into variables upfront - this will help you separate my personal settings from the ones you are going to use. This way everything after jobs: stays the same in your case; only the config at the top differs.

3.5 Build and push the image

Now we just need to add a step to run docker to build and push our image:

      - name: Publish to Registry
        uses: elgohr/Publish-Docker-Github-Action@master
        with:
          name: ${{ env.APP_NAME }}
          username: ${{ steps.ecr.outputs.username }}
          password: ${{ steps.ecr.outputs.password }}
          registry: ${{ env.REGISTRY_URI }}
          tags: "${{ env.VERSION }}"

You can see how we've consumed the outputs of the previous step in this one - it spits out a username and password that we can use to push the image.

Another new addition is the VERSION variable. This one I've added at the top, based on the build number, so that it auto-increments on every CI pipeline run:

   VERSION: v1.0.0-beta${{ github.run_number }}

In this example I've used the open-source Github actions by Lars (elgohr). There are "official" Microsoft ones out there too, but I found these easier to get off the ground.

After you've pushed this you can go check out the progress of your workflow:

... and when you get your green light you can head to ACR to check out the container images available - a new one should have popped up:
Click on the repository name and you'll see all versions available, represented as tags.

4. Deploy to Kubernetes

4.1 Create a namespace

In a mature CI/CD setup you would have automated this step too but for the purposes of the demo I created it with Azure Cloud Shell manually:

kubectl create namespace github-test

4.2 Create an image pull secret

We've already talked about Github secrets, but this is a different type of secret - a Kubernetes secret, which works in much the same way but is stored in Kubernetes, not in Github. It's a secret generated from the container registry credentials and stored within k8s, so that the cluster can pull its container images when needed:
      - name: Create image pull secret
        uses: azure/k8s-create-secret@v1
        with:
          container-registry-url: ${{ env.REGISTRY_URI }}
          container-registry-username: ${{ secrets.REGISTRY_USERNAME }}
          container-registry-password: ${{ secrets.REGISTRY_PASSWORD }}
          secret-name: ${{ env.SECRET }}
          namespace: ${{ env.NAMESPACE }}
          force: true

This username and password are stored as Github secrets, just like the principal password above, and we get the values, for instance, from the Azure portal:


4.3 Add a Kubernetes yaml file

Describing what exactly all the lines mean would take us deeper into Kubernetes than the scope of this post calls for, but the file provided with this sample will work in exactly the same way in your case even if the app is totally different - it only cares about the name of the container image.
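For orientation, here is a trimmed-down sketch of such a manifest, using the names from this walkthrough (the real file in the repo is the authoritative version):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: github-cicd-prep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: github-cicd-prep
  template:
    metadata:
      labels:
        app: github-cicd-prep
    spec:
      containers:
      - name: github-cicd-prep
        # the #{VERSION}# token gets replaced at build time (see 4.4)
        image: smartdev.azurecr.io/github-cicd-prep:#{VERSION}#
        ports:
        - containerPort: 80
```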


4.4 Replace version number tag

One non-standard bit you might spot in the yaml file is the #{VERSION}# tag. We'll need one more piece of open source magic to make sure this is replaced by the actual version number as defined in the build yaml - the replace-tokens task by Christopher Schleiden:

      - name: Replace version number in k8s yaml file
        uses: cschleiden/replace-tokens@v1
        with:
          files: '["**/*.yaml"]'
        env:
          VERSION: ${{ env.VERSION }}

4.5 Deploy

And now, ladies and gentlemen, the moment you've all been waiting for - the deployment! With all the groundwork that we've laid already - it's a matter of just adding one more simple step:

      - name: Kubernetes deployment
        uses: azure/k8s-deploy@v1
        with:
          manifests: |
            k8s-deployment.yaml
          images: |
            ${{ env.REGISTRY_URI }}/${{ env.APP_NAME }}:${{ env.VERSION }}
          imagepullsecrets: |
            ${{ env.SECRET }}
          namespace: ${{ env.NAMESPACE }}

REGISTRY_URI in this case is the full name of the registry, with "azurecr.io" at the end - like the other values, it goes into the env: block at the top, along with NAMESPACE and SECRET.
Now we push, wait for the workflow to finish and, if it's green, we're seconds away from seeing the fruit of our labour. We only need to grab the public IP that AKS automagically provisioned for us, by going into the cloud shell and running kubectl get services -A

In my case the output is:






With this IP in hand, we can now go bask in the glory of our success:

Congratulations! You can now go update your CV with the following keywords: Github Actions, CI/CD, docker, kubernetes - and it will be more deserved than for some of the candidates I've interviewed.

Wednesday, August 26, 2020

Kubernetes Deployment with Azure DevOps

I still recall encountering containers and orchestrators for the first time - it sounded cool but I couldn't intuitively relate to the problems they solve. This lasted only until I tried integrating containerized builds into a CI/CD pipeline when I saw the beauty of it all. I aim to show the complete setup, from the ground up to a containerized app with automated build and release - all in a simple 1.5 hour webinar.

Here is the link: https://youtu.be/tRDM7ycWS-Q 

This blog post is the companion resource to the webinar, giving you quick access to the relevant scripts and code snippets. The whole setup is also accessible on a public Azure DevOps project: https://dev.azure.com/FirebrandSprint/KubernetesDevOps

While the code and the pipelines in the above project will remain available indefinitely, the Azure resources that the app relies on will be deleted a few days after the webinar takes place so the app itself and the pipelines will not actually work.

The scenario we'll go through goes like this, from a bird's eye view:

  • Provision resources (container registry, Kubernetes cluster)
  • Create the application
  • Build pipeline (CI)
  • Release pipeline (CD)

Below are all the relevant code snippets for each of these tasks:

Provision Resources

We need to have an Azure subscription to try this out but luckily we can get our hands on one for free, with enough allowance to try it out independently.

For this exercise we'll need only two Azure resources: Container Registry (ACR) and Azure Kubernetes Service (AKS). These can be trivially created through the portal but we can also script them - here are the snippets for AKS:

Create AKS cluster with AZ CLI

az aks create \
  --resource-group fbsprintAksDevops \
  --name fbsprintAksDevopsCluster \
  --node-count 2

Create AKS with PowerShell

New-AzAks `
 -ResourceGroupName fbsprintAksDevops `
 -Name fbsprintAksDevopsCluster `
 -NodeCount 2

No matter which one you try it should be done within 15 minutes.

Create the Application

This can be done with any technology - that's the beauty of containers, you just pick the right base image and you can run anything. In the webinar I show a plain vanilla ASP.NET Core app, straight out of the Visual Studio template but anything else would do.

The application will also need to be furnished with a multi-stage dockerfile for building and distributing it. Whatever you use to create your app - it might support that (Visual Studio does), otherwise you'll need to create your own or fish one out from the internet.

Here is an example of a usable dockerized build for a web app using npm to resolve its dependencies:

FROM node as build
WORKDIR /app
COPY package*.json /app/
RUN npm install
COPY . /app
ARG configuration=production
RUN npm run build -- --outputPath=./dist/out --configuration $configuration

FROM nginx
COPY --from=build /app/dist/out/ /usr/share/nginx/html
COPY /nginx-custom.conf /etc/nginx/conf.d/default.conf

Build Pipeline (CI)

These days almost everything is "as code", so that it can be stored in source control, merged, compared - you name it. The build pipeline definition is no exception - the default pipeline type is now YAML-based, and here is the one we used:

trigger:
- master

resources:
- repo: self

variables:
#  tag: '$(Build.BuildId)'
  tag: '1'
  imageName: 'fbsprint-live' # repository name only - the registry host comes from the service connection
  dockerRegistryServiceConnection: 'fbsprintRegistry'
stages:
- stage: Build
  displayName: Build image
  jobs: 
  - job: Build
    displayName: Build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: Docker@2   
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageName)
        dockerfile: '$(Build.SourcesDirectory)/Dockerfile'
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)

 

https://dev.azure.com/FirebrandSprint/_git/KubernetesDevOps?path=%2Fci.yml

Release Pipeline (CD)

The CD pipeline itself is not scriptable at the time of writing, though I believe it's a matter of time before this also gets converted to YAML. We do have one important script to add here though, and that is your Kubernetes YAML file - the file that encodes how you deploy your app to Kubernetes:

apiVersion: v1
kind: Service
metadata:
  name: fbsprint-live
  labels:
    app: fbsprint-live
spec:
  selector:
    app: fbsprint-live
  ports:
    - protocol: TCP
      port: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fbsprint-live
spec:
  selector:
    matchLabels:
      app: fbsprint-live
  replicas: 1
  template:
    metadata:
      labels:
        app: fbsprint-live
    spec:
      containers:
      - name: fbsprint-live
        image: fbsprintregistry.azurecr.io/fbsprint-live:1
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: secret

https://dev.azure.com/FirebrandSprint/_git/KubernetesDevOps?path=%2Fk8s-deployment.yaml

See if it works

For serious work you should have kubectl installed and configured locally, but you can try it out in the cloud shell too - you just need access to a storage account to run it. You get the local kubernetes config set up automatically with:

az aks get-credentials -g fbsprintAksDevops -n fbsprintAksDevopsCluster

Then you just run kubectl get services and you have your external IP. You can also take a look at the status of the pods and check the logs with the following:

kubectl get pods
kubectl logs <pod_name>

What next?

There you have it, a completely automated CI/CD for a containerized application in 1.5 hours, give or take. What next then? In the webinar I point out directions in which you can take this setup to make it fully enterprise-grade: versioning, multiple environments, deployment approvals and release gates. 

If this has piqued your interest feel free to dive deeper into DevOps - I've accumulated a bunch of free resources which are now falling out of date but I plan to refresh this collection some time soon - feel free to poke me if you're interested in this!

Enjoy your DevOps journey and don't hesitate to let me know what you think down in the comments.

Friday, March 20, 2020

Introducing Spaghetti Code School - let's teach kids to program!

This post is going to be quite different from my previous ones - I'm not going to address a specific technological challenge, not even tongue in cheek. No, this time I'm going to put forward a way to make the world a better place!

I've long been splitting my time between my day jobs (building software and teaching others to do so) and volunteering, which has been centered around advocating critical thinking and improving fact-based science education. I've focused my charity work on education and rational thinking as I came to realize that this is how my modest efforts would have the strongest impact on the world. How I came to this conclusion is a matter for another time.

Lately, there has been one other major demand on my time - I have little kids. Enough said. If I may say so myself, I've been a very dedicated father, spending time with my kids whenever I get a chance, reading every article and book about parenting I can get my hands on (only evidence-based, of course!) and trying out with my kids anything that seems to be a good practice.

My children have also been actively involved with various extracurricular activities - you know, the usual: sports, music, arts - at ages 2 and 4 it's all really playtime with different themes. Then, one day, I thought - isn't it already time to introduce programming? Teaching my kids to code may be the most valuable gift I can give them to prepare them for life in the 21st century.

And then it struck me. It may sound obvious in hindsight, but until that moment I hadn't realized that I could combine my 3 strongest passions (programming, volunteering, parenting) in one activity: a coding school for little kids! Thus, the idea behind Spaghetti Code School was born.

What is Spaghetti Code School?

The concept is aimed at kids from 3 to 7 years and is centered around one simple motto: Programming without a computer.
Without a computer? Yes, it's possible! This probably raises two questions: why and how.

Why?

Study after study shows that kids do better the less time they spend in front of screens - any screens. Even those cool-looking educational games and TV shows that do help your kids learn to count and read come with a host of downsides, from hindering the development of social skills to increasing the risk of obesity. Teaching kids to program is not about introducing them to technology early on - they will become fluent users of gadgets without our help, thank you very much. We want to help kids develop the skills needed for programming without gluing them to a computer.

How?

Programming is really about learning to conceptualize what you want the computer to do and breaking it down into small, unambiguous instructions. The key pillar of the code school is the "live coding" session - I myself will take on the role of the computer that executes the kids' instructions, and I will give them an instruction set that lets them program me! We'll start with the basics, like "make a step to the left" or "turn around and clap", and we'll get all the way to loops and recursion. The idea is to translate all these complicated-sounding programming concepts into the language kids understand best - play.

There will also be story time, where we'll read children's books that illustrate programming concepts, for instance this one: https://www.amazon.com/How-Code-Sandcastle-Josh-Funk/dp/0425291987

Finally, we'll have a selection of toys that develop various analytical skills - from humble shape sorters for the little ones to 3D puzzles for the grade schoolers. 

Where do I sign up?

We are looking for volunteers to fill the first English-speaking class, starting in Autumn 2020 in Brussels. Additional locations, should there be interest and local support, can also be set up in Sofia and London. Interested? Like the Facebook page and send us a message:

Once the concept is proven I'm also happy to help set up other locations around Europe.

How can I help?

In many ways, thanks for asking! We need:
  • English-speaking kids who like to play
  • A location in Brussels where we can hold a 6-kid 1.5 hour class twice a month
  • Wannabe assistants 
  • Ideas for techniques, resources, books
  • Enthusiasts to spread the word

Thursday, January 23, 2020

Learn Azure DevOps for free - and pass the AZ-400 exam!

TL;DR scroll down for a ton of free resources that will enlighten you in the ways of Azure DevOps and will help you prepare for the AZ-400 exam. If you first read on though it will all make a lot more sense.

How is Azure certification different?

Cloud computing is disrupting not just the way companies deal with IT infrastructure but also how we look at IT training and certification. Gone are the days when companies could run the same multiple-choice question exam for 10 years with only slight updates every couple of years - the platforms we use are now changing in front of our very eyes and any material more than a few months old is more likely than not to be at least somewhat out of date. Instead, we are getting innovative exam formats, like the AZ-300 exam where you get an Azure sandbox where you need to get actual work done! That's a breath of fresh air compared to the traditional format, where you are drilled in reciting names of classes from an object model and the like. It comes with a price though - the credential you receive is valid for only 2 years, which (I hope) is quite a bit shorter than the lifetime duration of the older Microsoft certificates.

What about AZ-400 specifically?

The hands-on exam format hasn't made it to AZ-400 (Azure DevOps) yet and this exam and the corresponding course come with their own set of challenges. The syllabus encompasses a vast array of technologies, many of which change quite often and to top it off, the materials that Microsoft offers are particularly disorganized and much more voluminous than other week-long Microsoft courses. If you are taking this course in a one week format you should brace yourself for an intensive experience!

It's not all doom and gloom though - one other unusual thing Microsoft have done is to build the course around a ton of freely available materials. They've indexed YouTube videos that cover a good chunk of the material, provided free exercise guides and - the cherry on the cake - a free sandbox, complete with a demo generator to initialize the setup for each lab.

These are awesome resources, but apparently keeping them current is a tall order that Microsoft can't keep up with, and you can expect a lot of the stuff to not actually work out of the box. You know, like in the real world. In December 2019 I sifted through these materials to select the ones that are usable for my students, and it's my pleasure to share them with you - they mostly work at the time of writing, and for some I've provided quick hints on getting them to work.

So, behold: a great free companion to your AZ-400 course, or potentially even an alternative in case you have experienced DevOps professionals around you in lieu of an instructor. It's split into 7 "parts", for lack of a better word, which I fit into 5 days of training - but that's my own way of doing it; other instructors might organize it differently.

Day 1

Part 1 - Implementing DevOps Development Processes

Module 1 - Getting Started with Source Control
Video: Introduction to DevOps
Video: Introduction to Source Control
Video: Working with Git Locally
Video: Introduction to Azure Repositories
Video: Azure Repositories with VSTS CLI
Video: Migrating from TFVC to Git

Lab: Version Controlling with Git in Azure Repos

 Module 2 - Scaling Git for Enterprise DevOps

Video: Git Branching Workflows
https://www.youtube.com/watch?v=ADIlVkzfo5o
Video: GitFlow Improving the Flow of Code
https://www.youtube.com/embed/T7QYscQZwAM 
Video: Implementing GitFlow
https://www.youtube.com/embed/v3yQcjMYSfI 
Video: Collaborating with Pull Requests
https://www.youtube.com/watch?v=VaOdZlhblZ4 
 
 Lab: Code Review with Pull Requests:
https://www.azuredevopslabs.com/labs/azuredevops/git/ 
Video: GitHooks in Action

Video: Inner Source with Forks
Video: GitVersion
Video: Public Projects

  Module 3: Implement and Manage Build Infrastructure
Video: Azure Pipelines

Lab: Configuring a CD pipeline for your Jenkins CI

Lab: Integrate Your GitHub Projects With Pipelines
Known issues:
  The sample calculator repo is missing - use any other repo; most of the lab will work
  The status badge markdown doesn't work in github at the time of writing. It does in Azure repos!

Lab: Deploying a Multi-container Application to AKS



Day 2


  Module 4: Managing Application Config and Secrets
Video: SQL Injection Attack
  https://www.youtube.com/watch?v=b3ODyunzDoQ 
Video: Threat Modeling
  https://www.youtube.com/watch?v=VxmZ-s9Yqs4 
Video: Key Vault
  https://www.youtube.com/watch?v=jvaubI0BccM
Video: Managing Technical Debt
Video: SonarCloud
Lab: SonarCloud: Driving continuous quality of your code with SonarCloud
https://www.azuredevopslabs.com/labs/vstsextend/sonarcloud/
  Known issues:
  • The SonarCloud service connection needs to be created; it's not explicitly mentioned in the instructions.
  • On the last step the release gate didn't pick up the quality gate outcome.
 
Lab: WhiteSource: Managing Open-source security and license with WhiteSource (featured also in Part 4) https://azuredevopslabs.com//labs/vstsextend/whitesource/ 



Video: Implement Continuous Security Validation
https://www.youtube.com/embed/PLC2WhCW7wA
Video: Securing Infrastructure with AzSK
https://www.youtube.com/watch?v=BkvA58vHuNU 
Video: Azure Policy Management Pipelines
https://www.youtube.com/embed/OiOXlgFNgDo 

Part 2: Implementing Continuous Integration


  Module 1: Implementing Continuous Integration in an Azure DevOps Pipeline
Video: Introduction to Continuous Integration
Video: Implementing Continuous Integration in Azure DevOps
Video: Using Variables to Avoid Hard-coded Values
Video: Configuring Build Retention
Video: Automated Build Workflows
Video: Implementing Build Triggers
Video: Working with Hosted Agents
Video: Implementing a Hybrid Build Process

Lab: Deploy existing .NET apps as Windows containers (Modernizing .NET apps)
Lab: Integrating Jenkins CI with Azure Pipelines

  Module 2: Managing Code Quality and Security Policies
Video: Code Quality Defined
Video: Configuring SonarCloud in a Build Pipeline
Video: Reviewing SonarCloud Results and Resolving Issues

Lab: Managing Technical Debt with Azure DevOps and SonarCloud

  Module 3: Implementing a Container Build Strategy

Video: Overview of Containers
Video: Create an Azure Container Registry
Video: Add Docker Support to an Existing Application

Lab: Deploy existing .NET apps as Windows containers (Modernizing .NET apps)

Part 3: Implementing Continuous Delivery

  Module 1: Design a Release Strategy
 Video: Artifact Source
 Video: Deployment Stages
 Video: Build Cadence
 Video: Manual Approval
 Video: Release Gates

  Module 2: Set Up a Release Management Workflow
 Video: Service Connection
 Video: Task groups
 Video: Variable Groups

 Lab: Deploying to Azure VM using Deployment Groups

 Lab: Using Azure Monitor as Release Gate
Known issues

 Lab: Setting up secrets in the pipeline with Azure Key vault

 Video: Secrets

Day 3

  Module 3: Implement an Appropriate Deployment Pattern
Video: Setting up a Blue-Green deployment
Video: Setting up a Ring Based Deployment with Traffic Manager

Part 4: Implementing Dependency Management

  Module 1: Designing a Dependency Management Strategy
Video: Creating an Azure Artifacts feed
Video: Push packages to feed
Video: Consume packages from a feed
Video: Pushing package from a pipeline
Video: Promoting packages

Lab: Package management
    Known issue: The instructions are missing some steps, but with a bit of improvisation it can be made to work.
 In Task 3 just create a new VS solution, or grab one from another Repo.
  Module 2: Manage Security and Compliance
Video: Scanning with WhiteSource Bolt

Day 4

Part 5: Implementing Application Infrastructure

  Module 1: Infrastructure and Configuration Azure Tools
 Video: Architecting Automation
 Video: ARM templates
 Video: Azure CLI
 Video: PowerShell
 Video: Other automation tools
 Video: Version control

 Lab: Azure Deployments using Resource Manager templates


  Module 2: Azure Automation
Lab: Azure Automation Runbook Deployments
Lab: Azure Automation State configuration DSC
  Module 3: Azure Compute Services
Lab: Deploy Application to Azure Kubernetes Service
Lab: Deploy Application to Azure App Services using Azure DevOps

Day 5

  Module 4: Third Party and Open Source Tool Integration with Azure
Lab: Deploy app with Puppet on Azure
Lab: Ansible with Azure

  Module 5: Compliance and Security
Lab: Implement Security and Compliance in Azure DevOps pipelines

Part 6: Implementing Continuous Feedback

  Module 1: Recommend and Design System Feedback Mechanisms
Lab: Feature Flag Management with LaunchDarkly and AzureDevOps
Lab: Microsoft Teams integration

  Module 2: Implement Process for Routing System Feedback to Development Teams
Video: Application Insights

Part 7: Designing a DevOps Strategy

  Module 1: Planning for DevOps
Video: Goals
Video: Greenfield and brownfield
Video: Systems of records vs Systems of engagement
Video: Agile Development Practices Defined
Video: Mentoring Team Members on Agile Practices

Lab: Agile Boards

  Module 2: Planning for Quality and Security
Video: Secure development

  Module 3: Migrating and Consolidating Artifacts and Tools
Video: Artifact Repositories
Video: Authorization and access strategy
Video: Azure Devops licensing

Lab: Eclipse Integration