Event-driven Azure Function in Kubernetes using KEDA

The somewhat unlikely partnership between Microsoft and Red Hat is behind the cool technology KEDA, which allows an event-driven and serverless-ish approach to running things like Azure Functions in Kubernetes.

Would it not be cool if we could run Azure Functions in a Kubernetes cluster and still get scaling similar to the managed Azure Functions service? KEDA addresses this and will automatically scale/spin up the pods based on an Azure Function trigger. And remove them again when they are no longer needed, of course.

This does not work with all Azure Function triggers, but the queue-based ones are supported (RabbitMQ, Azure Service Bus/Storage Queues and Apache Kafka).

Let’s try it!

As with all new technologies in the microservice space, setting up a test rig is easy! In my test I will use:

  • Kubernetes cluster in Azure AKS
  • Apache Kafka cluster
  • KEDA (Kubernetes Event-driven Autoscaling)
  • Azure Function (running in Docker)

1 – Deploy an Apache Kafka cluster using Helm.

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
kubectl create namespace kafka
helm install kafka -n kafka --set kafkaPassword=somesecretpassword,kafkaDatabase=kafka-database --set volumePermissions.enabled=true --set zookeeper.volumePermissions.enabled=true bitnami/kafka
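
Before creating a topic, wait for the broker and ZooKeeper pods to become ready. A simple way to keep an eye on them:

# Watch the Kafka and ZooKeeper pods until they report Running/Ready
kubectl get pods -n kafka -w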

2 – Create a Kafka Topic

kubectl --namespace kafka exec -it kafka-0 -- kafka-topics.sh --create --zookeeper kafka-zookeeper:2181 --replication-factor 1 --partitions 1 --topic important-stuff
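
To double-check that the topic was created, the same kafka-topics.sh tool can list the topics on the broker:

kubectl --namespace kafka exec -it kafka-0 -- kafka-topics.sh --list --zookeeper kafka-zookeeper:2181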

3 – Deploy KEDA to Azure AKS cluster

The Azure Functions CLI tool can deploy KEDA to your cluster, but here I will use Helm. The end result is the same.

helm repo add kedacore https://kedacore.github.io/charts
helm repo update
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
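
A quick way to verify that KEDA is up and running before moving on:

# The KEDA operator pod and the CRDs it installs (such as scaledobjects) should show up
kubectl get pods -n keda
kubectl get crd | grep keda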

4 – Create an Azure Function

func init KedaTest

#Move into the new function app folder
cd KedaTest

#Add the Kafka extension as this will be our trigger
dotnet add package Microsoft.Azure.WebJobs.Extensions.Kafka --version 1.0.2-alpha

5 – Add Dockerfile to function app

func init --docker-only

The generated Dockerfile will not have the prerequisites required by the Kafka extension on Linux, so we need to modify the Dockerfile to get the dependency librdkafka installed.

RUN apt-get update && apt-get install -y librdkafka-dev

I also updated the .NET Core SDK Docker image to version 3.1.

6 – Deploy the Azure function

The Functions CLI tool has built-in functionality to create the necessary Kubernetes manifests as well as to apply them. To generate them without applying them, use the --dry-run parameter.

func kubernetes deploy --name dostuff --registry magohl --namespace kafka

Sweet, we have our function deployed. But is it running?

A kubectl get pods shows no pods running. Let's wait for KEDA to do some auto-scaling for us!
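
For the curious, the deploy command should have created a regular Deployment (scaled to zero) plus a KEDA ScaledObject describing the Kafka trigger. A quick sanity check, assuming the same kafka namespace as above:

kubectl get deployments -n kafka
kubectl get scaledobjects -n kafka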

7 – Test KEDA!

We will watch pods being created in one window, send some test messages in another, and look at the logs in a third window.
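
Roughly like this, where the broker address kafka:9092 is an assumption based on the Bitnami chart's default service name:

# Window 1 - watch pods come and go
kubectl get pods -n kafka -w

# Window 2 - send some test messages to the topic
kubectl --namespace kafka exec -it kafka-0 -- kafka-console-producer.sh --broker-list kafka:9092 --topic important-stuff

# Window 3 - tail the function logs once a pod appears
kubectl logs -n kafka deployment/dostuff -f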

Cool! KEDA does watch the Kafka topic and schedules pods when messages appear. After a default cooldown of 5 minutes the pods are terminated again.

Deploy to Azure AKS with Github Actions

GitHub Actions may not be feature complete or even close to CI/CD rivals Azure DevOps or Jenkins when it comes to having the most bells and whistles, but there is something fresh, new and lightweight to it which I really like. Sure, there are some basic things missing, such as manually triggering an action, but I am sure that is coming soon.

It's free for open source projects and it is very easy to get started with. Let's see if we can get an automated deployment to an Azure AKS cluster.

Time for some CI/CD

This will be my simple CI/CD pipeline steps:

  • Check out code (I will use my ASP.NET Core WhoamiCore as the application)
  • Build the Docker container image
  • Push the container image to a registry (Docker Hub in my case)
  • Set the Kubernetes context
  • Apply the Kubernetes manifest files (deployment, service and ingress)
#Put action .yml file in .github/workflows folder.
name: Build and push AKS
on:
  push:
    #Trigger when a new tag is pushed to repo
    #This could just as easily be on code push to master
    #but using tags allows a very nice workflow
    tags:
      - '*'
jobs:
  build:
    #Run on a GitHub managed agent
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1

      #Set environment variable with the tag id
      - name: Set env
        run: echo ::set-env name=RELEASE_VERSION::${GITHUB_REF#refs/tags/}

OK, now we have checked out the code on a GitHub-managed Ubuntu agent. It already has things like Docker installed!

      #Login to dockerhub.io. Other registries also work fine
      - uses: azure/docker-login@v1
        with:
          #Get credentials from the GitHub secret store
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}

      #Build and push container image
      - run: |
          docker build . -t magohl/whoamicore:${{env.RELEASE_VERSION}}
          docker push magohl/whoamicore:${{env.RELEASE_VERSION}}

And finally, let's deploy to Azure AKS.

      #Set Kubernetes context (Azure AKS)
      - uses: azure/aks-set-context@v1
        with:
          creds: '${{ secrets.AZURE_CREDENTIALS }}' # Azure credentials
          resource-group: 'magnus-aks'
          cluster-name: 'kramerica-k8s'
        id: login

      # Deploy to Azure AKS using kubernetes
      - uses: Azure/k8s-deploy@v1
        with:
          namespace: default
          #Specify what manifest file or files to use
          manifests: |
            .manifests/ingress.yaml
            .manifests/deployment.yaml
            .manifests/service.yaml
          #This will replace any image in manifest files with this
          #specific version
          images: |
            index.docker.io/magohl/whoamicore:${{env.RELEASE_VERSION}}

Let's make a change. In my case I changed the deployment to use 5 replicas instead of 3. I create a new tag/release and head over to the Actions tab in GitHub.
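
With a hypothetical tag name of v1.0.1 (any tag matches the '*' filter in the workflow), that boils down to:

git commit -am "Scale whoamicore to 5 replicas"
git tag v1.0.1
git push origin v1.0.1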

We can follow the progress in a nice way.

Let's verify in the ingress reverse proxy (Traefik) that we now have 5 replicas.

The application change is now automatically deployed to my Azure AKS cluster.

GitHub Actions also supports self-hosted agents. This can be a great fit for both enterprise/on-prem and local development machine scenarios. More on that later!

Asp.Net Core, reverse proxy and X-Forwarded-* headers

Running microservices and applications using ASP.NET Core and Kestrel inside Docker on Linux, fronted by one or several reverse proxies, creates a few issues that have to be addressed.

A typical scenario is one proxy acting as ingress controller for the container orchestration platform, and sometimes a second reverse proxy for internet exposure.

If we ‘do nothing’, ASP.NET Core will consider the last proxy to be ‘the client’ making the requests. Framework components or custom code using the HttpContext to get information about the original client will get the wrong information.

One example where this causes problems is the OpenIdConnect middleware, which will generate wrong redirect links. Another is when we want to log the client IP address.

Luckily the HTTP headers X-Forwarded-* are here to address this exact problem, and ASP.NET Core has great support via the ForwardedHeaders middleware. Each reverse proxy adds to the X-Forwarded-* headers, and the middleware updates the HttpContext accordingly.

With the ForwardedHeaders middleware configured with XForwardedHost + XForwardedProto (which is all that is needed for an OIDC redirect) it works fine.

//ConfigureServices
services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders =
       //This one did not work ForwardedHeaders.XForwardedFor | 
       ForwardedHeaders.XForwardedHost | 
       ForwardedHeaders.XForwardedProto;
});

//Add to pipeline in Configure
app.UseForwardedHeaders();


I had the middleware configured like above. Let's see if we can find out why it did not work in Docker as soon as I added X-Forwarded-For.

Create a test-rig

First, let's create a simple test rig so that we can simulate one or more proxies in a simple way.

It simply outputs both the HttpContext-based properties and the relevant HTTP header values.

Now let's make a request ‘faking’ proxies by adding the X-Forwarded-* headers. And let's make one with and one without the X-Forwarded-For header to the service running in Docker.

I use the VS Code extension REST Client here, but Postman would also work.
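
The same request can be made with curl. The header values below are just examples of what upstream proxies could have added:

# Pretend the request passed two proxies and originally targeted https://superapp.com
curl http://localhost:99/ -H "X-Forwarded-For: 203.0.113.10, 10.0.0.5" -H "X-Forwarded-Host: superapp.com" -H "X-Forwarded-Proto: https"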

The application now acts as if the request was made to https://superapp.com and not http://localhost:99.

Now, if we add ForwardedHeaders.XForwardedFor to the middleware configuration and give it another try, things change: suddenly it does not work anymore. The same request to Kestrel running directly on Windows works fine.

The middleware no longer updates the HttpContext based on the X-Forwarded-* headers. Unlike some other misconfigurations, such as header symmetry errors when RequireHeaderSymmetry is enabled (which generate a warning), this one is completely silent in the logs.

The solution is very simple. The application running inside Docker is on a different network than the ‘proxy’, and the middleware's default behavior is to only trust proxies on the loopback interface. Let's fix that.

//ConfigureServices
services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders =
        ForwardedHeaders.XForwardedFor |
        ForwardedHeaders.XForwardedHost |
        ForwardedHeaders.XForwardedProto;

    options.ForwardLimit = 2;  //Limit number of proxy hops trusted
    options.KnownNetworks.Clear();
    options.KnownProxies.Clear();
});

If we clear the default lists of KnownNetworks and KnownProxies it works fine. In a real scenario we would add the real addresses to KnownNetworks and KnownProxies based on a configuration property, so that they can change without changing code.

Now all three properties we are interested in work as expected! The originating client IP is based on the X-Forwarded-For header.

ASP.NET Core 3.x

ASP.NET Core 3.x has simplified the use of the ForwardedHeaders middleware. If we set the environment variable ‘ASPNETCORE_FORWARDEDHEADERS_ENABLED’ to true, the framework inserts the ForwardedHeaders middleware automatically for us.

Let's try it!

docker run -e ASPNETCORE_FORWARDEDHEADERS_ENABLED=true -p 99:80 forwardedheaderswebtester


We now no longer have the correct host and our OIDC middleware will NOT work as expected. Why is that?

Well, let's have a look at the ASP.NET Core source code to see what the default behavior is when that environment variable is used.
https://github.com/dotnet/aspnetcore/blob/1480b998660d2f77d0605376eefab6a83474ce07/src/DefaultBuilder/src/WebHost.cs#L244

In the code we can see that it adds XForwardedFor + XForwardedProto. It does NOT add XForwardedHost.

So in my scenario, where I need all three headers, I will have to configure the middleware manually as per above, and then everything works as expected!

The code above is available here:
https://github.com/magohl/ForwardedHeadersWebTester

Power of PowerBI with Azure Stream Analytics and IoT Hub

Recently I gave a talk at a Microsoft Integration Community Event on Azure IoT Hub. In the demo I used Azure Stream Analytics for ingestion and Power BI as its output.

To demo the IoT Hub bidirectional feature, it had an abnormal-vibration detection scenario using Stream Analytics together with Event Hubs and an Azure Function sending a message back to the device.

The scenario is illustrated below:

Azure IoT Hub demo scenario

I used two real devices and one multi-device simulator written in C#.

  • Raspberry Pi 3 + GrovePi+ with sensors for temperature/humidity, a rotary sensor and an LCD display. The OS was Windows 10 IoT Core running a custom UWP app that sent the telemetry data to IoT Hub every second.
  • Adafruit Feather M0 (sensor: BME280 for temperature/humidity). The SDK used was the one from here https://github.com/Azure/azure-iot-arduino. It has only HTTP support at the moment but MQTT is said to be coming very soon.
  • The C# simulator was a command-line application with randomized telemetry to get somewhat realistic movements on its faked sensors.

The Azure Stream Analytics job had one input, two outputs and a two-part query: one part for the Power BI output, where GROUP BY TumblingWindow was used, and one for the abnormal vibration detection (vibrationLevel > 400).

Stream Analytics Query:

WITH
Normal AS
(
    SELECT
        System.Timestamp AS time,
        deviceId,
        AVG(temperature) AS avgtemperature,
        AVG(humidity) AS avghumidity,
        MAX(vibrationLevel) AS maxvibrationlevel,
        longitude,
        latitude
    FROM [devices-in]
    GROUP BY
        TUMBLINGWINDOW(second, 5), deviceId, longitude, latitude
),
Abnormal AS
(
    SELECT deviceId, MAX(vibrationLevel) AS maxvibrationlevel FROM [devices-in]
    GROUP BY TUMBLINGWINDOW(second, 30), deviceId
)

SELECT * INTO [pbi-out] FROM Normal
SELECT * INTO [abnormal-vibro-out] FROM Abnormal WHERE maxvibrationlevel > 400

The TumblingWindow (together with the Sliding and Hopping windows) is the star of Stream Analytics. Thousands (or millions!) of devices sending telemetry data every second can be grouped using a time window. For demo purposes I used a short window.

The Event Hub output for the abnormal scenario was read by an Azure Function that sent a cloud-to-device message back to the device, which caused a background color change and a message on the LCD.

Azure Function code :

#r "Newtonsoft.Json"

using System;
using System.Net;
using Newtonsoft.Json; 
using Microsoft.Azure.Devices;
using System.Threading.Tasks;

public static async Task Run(string alertMessage, TraceWriter log)
{
     log.Info($"Got message {alertMessage} from eventhub");
     string connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["iothub"].ConnectionString.ToString();

     dynamic messageIn = JsonConvert.DeserializeObject(alertMessage);
     string deviceid = messageIn.deviceid;
     log.Info($"Will send alert notification to device {deviceid}");

     ServiceClient serviceClient = ServiceClient.CreateFromConnectionString(connectionString);
     var message = new {
         alert = true, 
         display = "TECH. DISPATCHED!"
         };

     var commandMessage = new Message(System.Text.Encoding.ASCII.GetBytes(JsonConvert.SerializeObject(message)));
     //Await the send so the function does not complete before the message is delivered
     await serviceClient.SendAsync(deviceid, commandMessage);
     log.Info($"Sent {message} to device {deviceid}");
}


After some simplified Power BI reporting the demo looked like this:

2016-11-17-19_03_48-power-bi

The map highlights the location of the devices, and the circle size is related to the vibration level.

The real power here is the scaling. Knowing that one could take this from 6 devices to millions (and back down again!) without any investment or upfront costs is pretty amazing, and unthinkable just a couple of years ago. That is what cloud computing means to me.

Code on GitHub

HL7 FHIR JSON decoding in BizTalk

HL7 FHIR represents the next-generation healthcare interoperability standard and tries to combine the good stuff from older standards while leveraging (somewhat) modern things like JSON, XML, HTTP and REST.

With BizTalk you can use the XML-schemas found on the main FHIR site. As BizTalk is all about XML it’s a perfect match.

A side note is that FHIR does not use any versioning in its namespace, which will lead to problems if you need more than one version deployed. As usual, this can be solved by modifying the namespace on the way in and out of BizTalk using a namespace-altering pipeline component.

FHIR JSON

While FHIR resources can be represented in XML they can also come dressed in JSON. Let's have a look at how we can handle that in BizTalk.

If we try to use the out-of-the-box pipeline components in BizTalk 2013 R2 for JSON -> XML conversion (or any other non-FHIR-aware JSON decoder), the generated XML will not conform to the FHIR schemas and specification. The differences are highlighted here http://hl7.org/fhir/json.html#xml but two key ones are:

  • How the FHIR resource type is defined.
    • In XML it is the actual root node name.
    • In JSON it is the ‘resourceType’ field.
  • Values are normally placed in an XML attribute instead of in the element content.

Let's look at a simplified example with a FHIR “Encounter” in JSON and HTTP POST it to a BizTalk (WCF-WebHttp) receive location using a pipeline with the new out-of-the-box BizTalk 2013 R2 JSON decoder.
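
As a sketch, posting that JSON with curl could look like the request below; the receive location URL and the file name are hypothetical and depend on how the WCF-WebHttp receive location is configured:

curl -X POST "http://biztalkhost/fhir/encounter" -H "Content-Type: application/json" -d @encounter.json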

The source JSON document used

After the pipeline the XML looks like this:

Several problems: the ResourceType element should not exist and the values should be placed inside value XML attributes.

This does NOT match the FHIR schema.

Solution

To solve this we need a “FHIR aware” JSON to XML decoder. Luckily there is a great open source one for .NET called .NET API for HL7 FHIR. It's a really feature-rich API that can do a lot more than just FHIR JSON and XML conversion!

Let's create a BizTalk pipeline component using the .NET API for HL7 FHIR.

using Hl7.Fhir.Serialization;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;
using System.IO;

namespace Kramerica.Bts.Fhir.Common.PipelineComponents
{
    public partial class FhirJsonDecoder : IComponent, IBaseComponent, IPersistPropertyBag, IComponentUI
    {
        //Execute is the main method invoked every time a message passes the pipeline component
        public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
        {
            IBaseMessagePart bodyPart = inmsg.BodyPart;
            if (bodyPart != null)
            {
                string json;

                using (Stream originalDataStream = bodyPart.GetOriginalDataStream())
                {
                    if (originalDataStream != null)
                    {
                        //Read the json message
                        using (TextReader tr = new StreamReader(originalDataStream))
                        {
                            json = tr.ReadToEnd();
                        }

                        //Use FHIR-NET-API to create a FHIR resource from the json
                        //This 'breaks' the stream and puts the complete message into memory
                        ResourceReader resourceReader = new Hl7.Fhir.Serialization.ResourceReader(FhirParser.FhirReaderFromJson(json));

                        //Use FHIR-NET-API to serialize the resource to XML
                        byte[] resourceXmlBytes = Hl7.Fhir.Serialization.FhirSerializer.SerializeToXmlBytes(resourceReader.Deserialize());

                        //Create the new BizTalk message
                        var memoryStream = new MemoryStream();
                        memoryStream.Write(resourceXmlBytes, 0, resourceXmlBytes.Length);
                        memoryStream.Position = 0;
                        inmsg.BodyPart.Data = memoryStream;
                    }
                }
            }

            return inmsg;
        }
    }
}

As there is a 1:1 correlation between the FHIR resource type and the XML root node name there is no need for a configuration parameter for root node name/namespace, as we have in the standard JSON decoder component. Surely we could have some parameters controlling aspects of the .NET API for HL7 FHIR, but not for this simple proof-of-concept.

Let's see how the XML looks after an HTTP POST of the source FHIR JSON to the same receive location, now with a pipeline using our new pipeline component.

This looks much better and matches the FHIR XML schemas.

Great, we now have a correct FHIR XML instance and we can use it in a BizTalk integration process.

To return/send an instance of a FHIR message we just need to reverse the process by creating a FHIR XML to FHIR JSON encoding pipeline component and using that in a send pipeline.

SoapUI in high DPI

Running SoapUI on my new fantastic 4K 15.6″ Windows 10 laptop had me looking for a pair of binoculars.

Windows 10 itself handles these extremely high DPI resolutions quite well, but SoapUI does not scale very well. Luckily the solution is simple.

Resolution

Add the following registry setting:

reg add HKLM\Software\Microsoft\Windows\CurrentVersion\SideBySide /v PreferExternalManifest /d 1 /t REG_DWORD

Create a file called {your-soapui-exe}.manifest in the SoapUI bin folder, such as soapUI-5.2.1.exe.manifest, with the following content:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">
  <description>soapui</description>
  <asmv3:application>
    <asmv3:windowsSettings xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">
      <ms_windowsSettings:dpiAware xmlns:ms_windowsSettings="http://schemas.microsoft.com/SMI/2005/WindowsSettings">false</ms_windowsSettings:dpiAware>
    </asmv3:windowsSettings>
  </asmv3:application>
</assembly>


Done. Restart SoapUI and it scales so much better!

This could also be applied to other applications having the same problem.

Azure Logic Apps Text-to-speech with TalkToMe API

Many years ago I made a text-to-speech WCF binding that I used in demos and proof-of-concepts with BizTalk Server. It's time to bring that up to 2015!

This time I will use the Azure App Service stack with a custom Azure API App that can be used from an Azure Logic App. The API App lets you connect a browser session to it that will act as the “loudspeaker”.

TalkToMe GitHub repo – instructions and code

The API App has two parts, both hosted together: first the API that hosts a SignalR hub, and second the UI that connects one (or more) browsers as the API App “loudspeaker”. As an Azure API App is just a ‘normal’ Web App it can host both parts, and no other deployment is needed. Sweet!

  • WebAPI
    • SignalR Hub
  • Client UI (Hosted inside the API App!)

SignalR is the brilliant communication framework that allows us to trigger functionality in the client (browser) while abstracting away the actual connection details (WebSockets, long polling etc.).

To test the API App I used a Logic App that gets weather data from api.openweathermap.com, which the TalkToMe API then reads out. Note that by design the API returns 202/Accepted even if no browser (i.e. SignalR client) is connected; no queuing or similar is performed.

How to use in Azure Logic Apps

  1. Deploy Azure API App
  2. Create and deploy a Logic App where TalkToMe is used as an ‘action’ Logic App
  3. Connect one or more browsers to the API by browsing to the root URL of the deployed API. If you are unsure of the URL you can find the link in the ‘essentials’ section of the API App in the portal.
  4. Run the Logic App

When the Logic App runs and the TalkToMe action fires, the portal should show something like this:

And you should hear the artificial voice read the weather 🙂

Test API using HTTP

If you just want to try the API App outside of a Logic App, you can use a simple REST call.

  • URL: http://{your-apiapp-url}/api/TalkToMe
  • Method: HTTP POST
  • Content-Type: application/json
  • Body: { "TextToRead" : "Nice weather today" }
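
For example with curl, keeping the {your-apiapp-url} placeholder:

curl -X POST "http://{your-apiapp-url}/api/TalkToMe" -H "Content-Type: application/json" -d '{ "TextToRead" : "Nice weather today" }'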


Azure Logic App with File Connector for on-premise filesystem integration

Azure Logic Apps is Microsoft's new shiny integration PaaS offering, building on API Apps as the integration building blocks. Sure, the tooling is far from ready and the management capabilities are lacking in functionality, but I like the way this is moving. For cloud/SaaS scenarios there is already an impressive number of connectors available, and a simple API App model for custom extensions.

I decided to document the steps for a simplified integration scenario: an Azure Logic App with the File Connector for on-premises filesystem integration. It's really very simple, but there are some small quirks and even a small bug!

The scenario is that the Logic App should pick up files from the local filesystem without requiring a custom-developed scheduled process/script/application to upload the files to Azure. The File Connector uses a “Hybrid Connection service” to make an outgoing connection to the Service Bus relay. This means (IT security people, please cover your ears) you normally don't need any firewall openings.

To get this scenario running, start by creating a “Service Bus Namespace” using the old (current) portal at https://manage.windowsazure.com and copy the connection string using the “Connection Information” function. You do not need to create a relay, as this will be created automatically for you by the File Connector API App.

soapfault_com_api-app-fileconnector10

Click on the Connection Information as seen in the screenshot above.

The next step is to create the actual File Connector API App using the new/preview portal. Search for ‘File Connector’ if you are having trouble finding it.

Fill in the Service Bus connection string copied earlier and the local root folder to be used. You will specify a folder relative to this root when configuring the File Connector API App in the Logic App later on.

soapfault_com_api-app-fileconnector11


When the API App has been created after a couple of minutes it's time to set up the Hybrid Connection. Or at least that is what I thought…

The summary page of the API App is supposed to show some Hybrid Connection information, but this never appears. This seems to be some kind of bug (it surely can't be me, can it…) in the portal (see below), where the parameters specified in Package Settings are not saved correctly.

soapfault_com_api-app-fileconnector12


If this happens to you, the easiest way to solve it is to click the Host link in the essentials section (marked with a square above) followed by Application Settings. Scroll down to the App settings section and have a look at File_RootFolderPath and ServiceBusConnectionString. They are empty!

soapfault_com_api-app-fileconnector13


Edit them manually.

soapfault_com_api-app-fileconnector14

Click Save and then reopen the File Connector API app blade. After some time you should now see the “Hybrid Connection” icon saying its setup is incomplete. Sweet!

soapfault_com_api-app-fileconnector15


Click to open the Hybrid Connection configuration.

soapfault_com_api-app-fileconnector16


You can either download the Hybrid Connection MSI manually from here or use the ClickOnce installation via the provided ‘Download and Configure’ link.

soapfault_com_api-app-fileconnector6

soapfault_com_api-app-fileconnector7


If everything works out you should now see this on the summary page.

soapfault_com_api-app-fileconnector8

Great! Now let's try the File Connector API App as a Logic App trigger.

Let's assume this on-premises legacy system produces XML files and that we want to access the data. In this heavily simplified test scenario I use the BizTalk JSON Encoder and the Slack Connector as Logic App actions, but anything goes.

soapfault_com_api-app-fileconnector24

Note that the folder specified here is relative to the folder configured as the root folder.

OK, let's drop an XML file in my fictitious on-premises system and see what happens.

soapfault_com_api-app-fileconnector21


The file disappears from the filesystem, and if we look in the Azure portal both actions ran successfully.

soapfault_com_api-app-fileconnector25


And the message appeared in Slack. Nice!

soapfault_com_api-app-fileconnector23

WCF service hosted in Azure Websites

While WCF might not be the most viable .NET technology stack on the open web, as opposed to Web API 2.0, it is still very relevant in enterprise and B2B scenarios.

It is sometimes considered hard to configure and host WCF. Let's see how hard it really is now with .NET 4.5.

The other day I quickly needed a simple test SOAP endpoint exposed on the internet. I thought I would host it in Azure Websites.

First, let us create the web site in Azure. As an alternative to normal FTP deployment, let's use Azure Websites' great Kudu features and Git support.

C:\>mkdir EchoService
C:\>cd EchoService
C:\EchoService>azure site create EchoService --git --location "North Europe"


Let's create a simple untyped WCF service echoing any SOAP request back as a response. This is probably not a real-world scenario, although the untyped nature of System.ServiceModel.Channels.Message is really powerful. But any WCF service you find appropriate would work.

The sample below is a minimal single-file WCF service without any code-behind. Save it to a file called EchoService.svc.

<%@ ServiceHost Language="C#" Service="EchoService" %>

using System.ServiceModel;
using System.ServiceModel.Channels;

[ServiceContract]
public class EchoService
{
    [OperationContract(Action="*", ReplyAction="*")]
    public Message Echo(Message message)
    {
        return message;
    }
}


Save the file and push it to the remote repository automatically created in your Azure Website.

c:\EchoService>git add .
c:\EchoService>git commit -m"First checkin"
c:\EchoService>git push azure master


Done!

Git pushed the EchoService.svc file to the remote repository and Kudu automatically deployed it into the website wwwroot folder. If you want to learn more about the amazing Kudu stuff in Azure Websites I highly recommend having a look at the short videos made by Scott Hanselman and David Ebbo.

You can reach the service at http://yourwebsitename.azurewebsites.net/EchoService.svc and maybe use something like SoapUI to try it out. The WCF default configuration will expose an endpoint using the BasicHttpBinding meaning any SOAP 1.1 envelope will work. Metadata publication is disabled by default but as this is an untyped service there is really no need for it. If needed it can easily be enabled in code or configuration.

soapui_echoservice
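
If you prefer the command line over SoapUI, a raw SOAP 1.1 request can be sent with curl. The envelope body below is just an example payload, since the service echoes back whatever it receives:

curl -X POST "http://yourwebsitename.azurewebsites.net/EchoService.svc" -H 'Content-Type: text/xml; charset=utf-8' -H 'SOAPAction: "echo"' -d '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"><s:Body><Ping>Hello</Ping></s:Body></s:Envelope>'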

As shown, the Microsoft PaaS service Azure Websites is a really simple way to host a WCF endpoint in the cloud. With the help of .NET 4.5 this is easier than ever.

Great times for cloud developers

The last couple of years have meant incredible things for us developers, as we can now stay focused on the important stuff and forget about tasks that used to take hours, days and sometimes even weeks.

Let's take an example. Using the Azure command-line tools (which are built in Node.js/JavaScript and are platform agnostic, running on Windows, Linux and OS X) and Git, I want to:

  • Create a new cloud-based website
  • Create a distributed source control repository locally and at my remote website
  • Setup automatic continuous-integration from the remote GIT repository
  • Create a html file
  • Add file to local repository
  • Push to remote repository and have the site automatically deployed from there.

OK, let's begin…

azure site create reallycoolsite --location "North Europe" --git

copy con default.html
<!doctype html><title>Really cool site</title><p>Nothing to see here...</p>^z

git add .

git commit -m "Added default.html to this really cool site"

git push azure master

That's it! Done! Thanks to Azure, Kudu and Git it took 4 (!) commands. Well, 5 if you count creating the website homepage. The site could be ASP.NET, Node.js, PHP or static HTML.

If this was an ASP.NET application I could attach to and debug the application running in Azure directly from my local Visual Studio.

Now what if we want to use the real power of cloud computing: scaling! Can we do that? Well, then we need to issue a couple more commands…

azure site scale mode standard reallycoolsite

azure site scale instances 3 reallycoolsite

Now we have a website running load-balanced on 3 server instances. I can scale up and down and pay for the minutes I use. No IIS, no application pool, no NLB setup and no script editing. The developer can focus on the site content and functionality.

When I need more control I just go to the log/debug/diagnostic site created by Kudu by default at https://<yoursite>.scm.azurewebsites.net. Here you can copy files, look at deployment details and have a go at the shiny new debug console.

2014-02-02 09_45_16-Diagnostic Console


The abstraction and simplification provided by Azure Websites is really effective and powerful, yet it lets you gain more detailed control when you need it.