Event-driven Azure Function in Kubernetes using KEDA

The somewhat unlikely partnership between Microsoft and Red Hat is behind the cool technology KEDA, which enables an event-driven, serverless-ish approach to running things like Azure Functions in Kubernetes.

Would it not be cool if we could run Azure Functions in a Kubernetes cluster and still get scaling similar to the managed Azure Functions service? KEDA addresses this and will automatically spin up pods based on an Azure Function trigger, and of course remove them again when they are no longer needed.

This does not work with all Azure Function triggers, but the queue-based ones are supported (RabbitMQ, Azure Service Bus/Storage Queues and Apache Kafka).

Let’s try it!

As with all new technologies in the microservice space, setting up a test rig is easy! In my test I will use:

  • Kubernetes cluster in Azure AKS
  • Apache Kafka cluster
  • KEDA (Kubernetes-based Event Driven Autoscaling)
  • Azure Function (running in Docker)

1 – Deploy an Apache Kafka cluster using Helm.

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
kubectl create namespace kafka
helm install kafka bitnami/kafka \
  --namespace kafka \
  --set kafkaPassword=somesecretpassword,kafkaDatabase=kafka-database \
  --set volumePermissions.enabled=true \
  --set zookeeper.volumePermissions.enabled=true

2 – Create a Kafka Topic

kubectl --namespace kafka exec -it kafka-0 -- kafka-topics.sh \
  --create --zookeeper kafka-zookeeper:2181 \
  --replication-factor 1 --partitions 1 \
  --topic important-stuff
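
To verify that the topic was created we can list the topics on the broker:

kubectl --namespace kafka exec -it kafka-0 -- kafka-topics.sh \
  --list --zookeeper kafka-zookeeper:2181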

3 – Deploy KEDA to Azure AKS cluster

The Azure Functions CLI has built-in functionality to deploy KEDA to your cluster (func kubernetes install), but here I will use Helm. The end result is the same.

helm repo add kedacore https://kedacore.github.io/charts
helm repo update
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
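
A quick check that the KEDA operator came up:

kubectl get pods --namespace keda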

4 – Create an Azure Function

func init KedaTest

#Add the Kafka extension as this will be our trigger
cd KedaTest
dotnet add package Microsoft.Azure.WebJobs.Extensions.Kafka --version 1.0.2-alpha
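
The project also needs a function that uses the Kafka trigger. Here is a minimal sketch of what it could look like, assuming the topic we created earlier and a 'BrokerList' app setting pointing at the in-cluster broker (kafka.kafka.svc.cluster.local:9092); the exact binding types may differ between preview versions of the extension:

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Kafka;
using Microsoft.Extensions.Logging;

namespace KedaTest
{
    public static class DoStuff
    {
        //'BrokerList' is resolved from app settings / environment variables
        [FunctionName("dostuff")]
        public static void Run(
            [KafkaTrigger("BrokerList", "important-stuff", ConsumerGroup = "keda-test")] string message,
            ILogger log)
        {
            log.LogInformation($"Received: {message}");
        }
    }
}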

5 – Add Dockerfile to function app

func init --docker-only

The generated Dockerfile will not have the prerequisites required by the Kafka extension on Linux, so we need to modify the Dockerfile to get the dependency librdkafka installed.

RUN apt-get update && apt-get install -y librdkafka-dev

I also updated the .NET Core SDK Docker image to version 3.1.
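
For reference, the modified Dockerfile ends up looking roughly like this (a sketch; the exact base image tags depend on what the Core Tools generator produced for you):

#Build stage - SDK image bumped to 3.1
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS installer-env
COPY . /src/dotnet-function-app
RUN cd /src/dotnet-function-app && \
    dotnet publish *.csproj --output /home/site/wwwroot

#Runtime stage - install librdkafka for the Kafka extension
FROM mcr.microsoft.com/azure-functions/dotnet:3.0
RUN apt-get update && apt-get install -y librdkafka-dev
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
COPY --from=installer-env ["/home/site/wwwroot", "/home/site/wwwroot"]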

6 – Deploy the Azure function

The Functions CLI does have built-in functionality to create the necessary Kubernetes manifests as well as applying them. To generate the manifests without applying them, use the --dry-run parameter.

func kubernetes deploy --name dostuff --registry magohl --namespace kafka

Sweet – we have our function deployed. But is it running?

A kubectl get pods shows no pods running. Let's wait for KEDA to do some auto-scaling for us!
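
How does KEDA know when to scale? Alongside the Deployment, the deploy command also generated a KEDA ScaledObject that tells KEDA which topic to watch. A rough sketch of what it contains (based on the KEDA v1 CRDs; the exact values here are assumptions):

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: dostuff
  namespace: kafka
spec:
  scaleTargetRef:
    deploymentName: dostuff
  pollingInterval: 30   #How often KEDA checks the topic lag
  cooldownPeriod: 300   #Scale back down after 5 minutes of inactivity
  triggers:
  - type: kafka
    metadata:
      brokerList: kafka.kafka.svc.cluster.local:9092
      topic: important-stuff
      consumerGroup: keda-test
      lagThreshold: '5'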

7 – Test KEDA!

We will watch pods being created in one window, send some test messages in another, and look at the logs in a third window.
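
Roughly like this (the label selector in the last command is an assumption based on the function name):

#Window 1 - watch pods come and go
kubectl get pods --namespace kafka -w

#Window 2 - produce some test messages
kubectl --namespace kafka exec -it kafka-0 -- kafka-console-producer.sh \
  --broker-list localhost:9092 --topic important-stuff

#Window 3 - tail the function logs once a pod shows up
kubectl --namespace kafka logs -l app=dostuff -f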

Cool! KEDA watches the Kafka topic and schedules pods when messages appear. After a default cooldown of 5 minutes without new messages the pods are terminated again.

Deploy to Azure AKS with GitHub Actions

GitHub Actions may not be feature complete, or even close to CI/CD rivals Azure DevOps or Jenkins when it comes to bells and whistles, but there is something fresh, new and lightweight about it which I really like. Sure, some basic things are missing, such as manually triggering a workflow, but I am sure that is coming soon.

It's free for open source projects and it is very easy to get started with. Let's see if we can get an automated deployment to an Azure AKS cluster.

Time for some CI/CD

These will be my simple CI/CD pipeline steps:

  • Check out the code (I will use my ASP.NET Core WhoamiCore as the application)
  • Build the Docker container image
  • Push the image to Docker Hub
  • Set the Kubernetes context
  • Apply the Kubernetes manifest files (deployment, service and ingress)

#Put the action .yml file in the .github/workflows folder.
name: Build and push AKS
on:
  push:
    #Trigger when a new tag is pushed to the repo.
    #This could just as easily be on code push to master,
    #but using tags allows a very nice workflow.
    tags:
      - '*'
jobs:
  build:
    #Run on a GitHub-managed agent
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1

      #Set an environment variable with the tag id
      - name: Set env
        run: echo "RELEASE_VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_ENV

OK – now we have checked out the code on a managed Ubuntu agent, which already has things like Docker installed!

      #Login to Docker Hub. Other registries also work fine.
      - uses: azure/docker-login@v1
        with:
          #Credentials are stored as GitHub repository secrets
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}

      #Build and push the container image
      - run: |
          docker build . -t magohl/whoamicore:${{ env.RELEASE_VERSION }}
          docker push magohl/whoamicore:${{ env.RELEASE_VERSION }}

And finally, let's deploy to Azure AKS.

      #Set the Kubernetes context (Azure AKS)
      - uses: azure/aks-set-context@v1
        with:
          creds: '${{ secrets.AZURE_CREDENTIALS }}' # Azure credentials
          resource-group: 'magnus-aks'
          cluster-name: 'kramerica-k8s'
        id: login

      #Deploy to Azure AKS
      - uses: Azure/k8s-deploy@v1
        with:
          namespace: default
          #Specify what manifest file or files to use
          manifests: |
            .manifests/ingress.yaml
            .manifests/deployment.yaml
            .manifests/service.yaml
          #This will replace any image in the manifest files
          #with this specific version
          images: |
            index.docker.io/magohl/whoamicore:${{ env.RELEASE_VERSION }}

Let's make a change. In my case I changed the deployment to use 5 replicas instead of 3. I create a new tag/release and head over to the Actions tab in GitHub.
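
Creating and pushing the tag from the command line could look like this (the tag name is just an example):

git commit -am "Scale whoamicore to 5 replicas"
git tag v1.0.1
git push origin master --tags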

We can follow the progress in a nice way.

Let's verify in the ingress reverse proxy (Traefik) that we now have 5 replicas.
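
We can also check straight from the Kubernetes API (the deployment name comes from my manifest, so adjust it to yours):

kubectl get deployment whoamicore --namespace default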

The application change is now automatically deployed to my Azure AKS cluster.

GitHub Actions also supports running self-hosted agents. This can be a great fit for both enterprise/on-prem scenarios and local development machines. More on that later!