Azure Machine Learning Clemens Siebler  

Deploying Azure Machine Learning Models to Azure App Service

This post will explain how to deploy Azure Machine Learning models to Azure App Service. This allows for easier model deployments, especially for users who do not want to deploy to e.g., Azure Kubernetes Service. In this post, we will follow the proposed approach from the official documentation to a certain degree, but we will also add model telemetry/metrics logging and Model Data Collection. This allows for a ‘more complete’ approach to model deployment with model monitoring, compared to just running a plain Docker container on App Service.

Overall, this example can be reused to deploy the Docker images generated by Azure Machine Learning to any platform that is capable of running Docker images.

Architecture diagram

If we look at the architecture diagram above, we’ll focus on the following steps in this post:

  • Packaging the model as a Docker image
  • Deploying the image to App Service
  • Adding model telemetry logging to Application Insights
  • Adding model data collection to Azure Blob
  • Consuming the model using its exposed API

Getting Started

To get started, we assume that we already have:

  • a registered model in Azure Machine Learning
  • a scoring script with model data collection enabled
  • a conda.yml with your model dependencies (example: conda.yml)

In short, you already have taken the steps to train and deploy a model.
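For reference, the scoring script follows the usual init()/run() contract that Azure Machine Learning expects. Below is a minimal, self-contained sketch of that contract with a stand-in model; a real script would load the registered model in init() and also wire up ModelDataCollector from azureml-monitoring, both omitted here so the sketch runs anywhere:

```python
import json

# Stand-in for a real trained model -- illustration only.
class DummyModel:
    def predict_proba(self, rows):
        return [[0.7, 0.3] for _ in rows]

def init():
    # In a real scoring script, load the registered model here,
    # e.g. via joblib.load(Model.get_model_path('my-model')).
    global model
    model = DummyModel()

def run(raw_data):
    # Parse the JSON payload sent by the client and return predictions.
    try:
        data = json.loads(raw_data)['data']
        probabilities = model.predict_proba(data)
        return {"predict_proba": probabilities}
    except Exception as e:
        return {"error": str(e)}

init()
result = run(json.dumps({'data': [{"Age": 20, "Sex": "male"}]}))
print(result)
```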

Packaging our model for deployment

First, we want to package our existing model, combining the registered model, our scoring script, and our Conda environment, using the following code:

from azureml.core import Workspace, Model
from azureml.core.model import InferenceConfig
from azureml.core.environment import Environment
from azureml.core.conda_dependencies import CondaDependencies

ws = Workspace.from_config()

env = Environment("inference-env")
env.docker.enabled = True
# Replace with your conda environment file
env.python.conda_dependencies = CondaDependencies("./conda.yml")

# Replace with your scoring script
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Replace with your registered model's name
model = Model(ws, 'my-model')

package = Model.package(ws, [model], inference_config)
package.wait_for_creation(show_output=True)  # packaging runs as an ACR build and can take a few minutes

print(f"Packaged model image: {package.location}")

The code will return the URL to the new Docker image:

Packaged model image:

Now that we have the model image built, we can start deploying it to a Docker runtime. If you need a more customized Docker image (e.g., if you are required to add more complex dependencies), you can follow the tutorial from the last post on building custom Docker images.

Running the model locally

Next, we can try running the model locally on our laptop, Compute Instance, or wherever we have Docker running. For this, we log in to the Azure Container Registry where our model image has been stored:

docker login

You can easily retrieve the login credentials for the Container Registry through the Azure Portal:

From here, we can run the image via Docker by forwarding web service port 5001 to the host:

docker run -it --rm \
-p 5001:5001 \

From here, we can quickly test if we can call the model successfully:

import json, requests

test_sample = json.dumps({
    'data': [{
        "Age": 20,
        "Sex": "male",
        "Job": 0,
        "Housing": "own",
        "Saving accounts": "little",
        "Checking account": "little",
        "Credit amount": 100,
        "Duration": 48,
        "Purpose": "radio/TV"
    }]
})

url = "http://localhost:5001/score"
headers = {'Content-Type': 'application/json'}

response = requests.post(url, test_sample, headers=headers)
print(response.status_code)
print(response.json())

In our case here, this returns the HTTP response code and the model’s predictions:

{"predict_proba": [[0.6900664207902661, 0.30993357920973386]]}
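Note that predict_proba returns class probabilities rather than a label. A small helper can turn them into a decision; the class names below are assumptions for illustration, so map them to whatever your model was trained on:

```python
def to_label(probabilities, classes=("good risk", "bad risk")):
    # Pick the class with the highest probability for each row.
    return [classes[max(range(len(row)), key=row.__getitem__)] for row in probabilities]

# The response body from the local test above:
response_body = {"predict_proba": [[0.6900664207902661, 0.30993357920973386]]}
labels = to_label(response_body["predict_proba"])
print(labels)  # the first class wins here, since 0.69 > 0.31
```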

Perfect, our model is up and running. Next, we’ll add some telemetry logging to Application Insights.

Adding model telemetry logging

Next, we can add model telemetry logging to Application Insights. We can achieve this by setting the appropriate logging-related environment variables in our Docker command:

  • AML_APP_INSIGHTS_KEY=<Instrumentation key>
  • WORKSPACE_NAME=<Name of your Workspace>
  • SERVICE_NAME=<arbitrary service name, e.g. deployment name or build Id>

You can retrieve the Instrumentation key for Application Insights from the Azure Portal:

Adding those to our Docker command should look like this:

docker run -it --rm \
-e WORKSPACE_NAME=aml-demo-we \
-e SERVICE_NAME=build12345 \
-e AML_APP_INSIGHTS_KEY=1f224928-xxxx-xxxx-xxxx-xxxxxxxxx \
-p 5001:5001 \

Now, once we run the model and send some data to it, we should see it popping up in Application Insights by going to “Log Analytics” and querying for requests:

Alternatively, we can also query by traces, which will show us the STDOUT/STDERR of our model’s code running in the Docker container:

Great, now we have our model running and it is reporting back its STDOUT/STDERR and its predictions to Application Insights. Next, we will also add model data collection, to push model input and prediction data back to Azure Blob Storage.

Adding model data collection

Lastly, we can add model data collection to our model. For this, we first need to have a storage account with a container called modeldata. In this case, we can just create the container using the Azure Portal:

Next, we need to set the Model Data Collection related environment variables:

  • AML_MODEL_DC_STORAGE=<Storage Connection String>

In this case, AML_MODEL_DC_STORAGE refers to the connection string to your Storage Account. With this, we can re-run our Docker container with the appropriate parameters set:
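The connection string itself is just a semicolon-separated list of key=value pairs. A quick sketch of how to pull out individual fields, using a hypothetical account name and a redacted key:

```python
def parse_connection_string(conn_str):
    # Split "Key=Value;Key=Value;" into a dict. Values may themselves contain '='
    # (e.g. base64-padded account keys), so split each pair on the first '=' only.
    parts = [p for p in conn_str.split(';') if p]
    return dict(p.split('=', 1) for p in parts)

conn = "DefaultEndpointsProtocol=https;AccountName=mystorageacct;AccountKey=abc123==;"
settings = parse_connection_string(conn)
print(settings["AccountName"])  # -> mystorageacct
```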

docker run -it --rm \
-e WORKSPACE_NAME=aml-demo-we \
-e SERVICE_NAME=build2542 \
-e AML_APP_INSIGHTS_KEY=123445-1234-1234-1234-12345667889 \
-e AML_MODEL_DC_STORAGE="DefaultEndpointsProtocol=https;AccountName=xxxxxx;AccountKey=xxxxxxxxx;" \
-p 5001:5001 \

After a while (it is currently unclear to me how long this takes), our model input and prediction data will show up in our modeldata container in Blob:
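To my understanding, the collected data lands under a date-partitioned path inside the modeldata container, roughly of the shape sketched below. All names here are hypothetical, and you should verify the exact layout against your own container:

```python
from datetime import date

# Assumed path layout for Model Data Collection output -- verify against your container.
def collected_data_path(subscription_id, resource_group, workspace, webservice,
                        model, version, designation, day=None):
    day = day or date.today()
    return (f"{subscription_id}/{resource_group}/{workspace}/{webservice}/"
            f"{model}/{version}/{designation}/{day.year}/{day.month:02d}/{day.day:02d}/data.csv")

path = collected_data_path("sub-id", "aml-demo-we", "aml-demo-we", "model1-blog-demo",
                           "my-model", "1", "inputs", date(2021, 5, 17))
print(path)
```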

From here, we can finally start deploying the model to App Service.

Deployment to Azure App Service

First, let’s create a new App Service:

az group create --name app-service-deployment --location "West Europe"
az appservice plan create --name models --resource-group app-service-deployment --sku B1 --is-linux

Next, let’s deploy our container to the App Service (it won’t pull the image yet, as we first need to add authentication to our Container Registry):

az webapp create --resource-group app-service-deployment --plan models --name model1-blog-demo --deployment-container-image-name

Next, let’s add the Managed Identity of our new app to the Container Registry, so it can pull the image:

# Assign Managed Identity to our Web App
az webapp identity assign --resource-group app-service-deployment --name model1-blog-demo --query principalId --output tsv

# Query the resource id of our Container Registry
az acr show -g aml-demo-we -n amldemowexxxxxx --query id --output tsv

# Assign Pull permission of our Web App to our Container Registry
az role assignment create --assignee <id from first command> --scope <output from second command> --role "AcrPull"

Next, we can add the port mapping and our environment variables:

az webapp config appsettings set --resource-group app-service-deployment --name model1-blog-demo --settings \
  WEBSITES_PORT=5001 \
  WORKSPACE_NAME="aml-demo-we" \
  SERVICE_NAME="build12345" \
  AML_APP_INSIGHTS_ENABLED="true" \
  AML_APP_INSIGHTS_KEY="123445-1234-1234-1234-12345667889" \
  AML_MODEL_DC_STORAGE_ENABLED="true" \
  AML_MODEL_DC_STORAGE="DefaultEndpointsProtocol=https;AccountName=xxxxx;AccountKey=xxxxxxxx;"

Lastly, let’s restart our app so it pulls the new settings:

az webapp restart --resource-group app-service-deployment --name model1-blog-demo

From here, we can finally call our endpoint (same code as above) using the URL of our Web App. After a few minutes, telemetry metrics and data collection should start to kick in and show our model’s telemetry in Azure.
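Calling the deployed endpoint works the same way as the local test, just against the App Service hostname. A quick sketch, assuming the default *.azurewebsites.net hostname and the app name used above (the actual request is commented out so the snippet runs without the deployment):

```python
import json
# import requests  # uncomment to actually send the request

app_name = "model1-blog-demo"
url = f"https://{app_name}.azurewebsites.net/score"  # default App Service hostname
headers = {"Content-Type": "application/json"}
payload = json.dumps({"data": [{"Age": 20, "Sex": "male"}]})

# response = requests.post(url, payload, headers=headers)
# print(response.json())
print(url)
```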

Next Steps

As a last step, we should consider two open points: authentication and automation.

Firstly, we will need to enable authentication via an identity provider for our deployed model. The web app is publicly exposed by default, and by using e.g., Azure Active Directory, we can force model consumers to authenticate. Furthermore, we can use the rich networking settings on App Service to lock down the service further if desired.

Secondly, we obviously should use some form of Continuous Deployment to automate the deployment steps. This can be done fairly easily by taking the commands provided in this post and putting them into a CI/CD pipeline of your choice, e.g., in Azure DevOps or GitHub Actions.


This post gave a short overview of how to package models into Docker images in Azure Machine Learning. From there, we discussed options for capturing model telemetry and enabling model data collection. With this, we can now deploy models easily to various platforms, such as Azure App Service, while still receiving logs in Application Insights and data in Azure Blob.
