Model Deployment in Microsoft Azure AutoML

Rahul Bhat
5 min read · Apr 11, 2021


Time-Series Forecasting in Microsoft Azure Automated Machine Learning (AutoML) PART 2.

A step-by-step guide to deploy a model in Microsoft Azure AutoML.

In the first part, I showed you how to do time-series forecasting in Microsoft Azure AutoML. If you haven't read it yet, here is the link:

Time-Series Forecasting in Microsoft Azure Automated Machine Learning (AutoML) PART 1. | by Rahul Bhat | Apr, 2021 | Medium

In this part, we will deploy our best model for testing. I will also show you how to test the deployed model in Postman using the REST endpoint.

Getting Started

We used a training dataset to predict future sales. The training dataset consisted of 5 years of store-item sales data: daily sales for 10 different stores and 50 different items, from 2013-01-01 to 2017-12-31. The objective was to predict the sales of the 50 items at each of the 10 stores.
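If you want to sanity-check the training data before uploading it to Azure, here is a minimal sketch. It assumes the data sits in a local file called train.csv (the file name is an assumption) with the four columns described in this series: date, store, item and sales.

```python
import pandas as pd

# Assumption: training data in a local train.csv with columns date, store, item, sales
df = pd.read_csv("train.csv", parse_dates=["date"])

print(df.shape)                                        # number of rows and columns
print(df["store"].nunique(), "stores,", df["item"].nunique(), "items")
print(df["date"].min(), "to", df["date"].max())        # should span 2013-01-01 to 2017-12-31
```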

After the run completed, we checked our model and it performed very well. The Predicted vs. True chart confirms this, as you can see in the results above.

There you can see a list of models and their scores. StackEnsemble performed best among them and was our best model. Now it's time to deploy and test it.

Next, select the best model (or whichever model you want to deploy) under the Algorithm name.

There you can see the model summary. To deploy the model, click Deploy and a pane slides in from the right, like this.

Give the model a name; I named it stackensemble. Optionally, write a description of the model. Under Compute type, select Azure Container Instance. Azure Kubernetes Service is meant for production-scale web deployments and requires you to provision an inference cluster first; we only want to deploy our model for testing, so we don't need that. You can read more about the deployment options here:

How to deploy machine learning models — Azure Machine Learning | Microsoft Docs

Since we are using Azure Container Instance, select it and click Deploy. Deployment takes a few minutes, so be patient and wait until it completes successfully.
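If you prefer scripting over the Studio UI, the same deployment can be done with the azureml-core Python SDK. This is only a sketch, assuming the best AutoML model has already been registered in the workspace under the name stackensemble (the names and resource sizes here are illustrative):

```python
from azureml.core import Workspace, Model
from azureml.core.webservice import AciWebservice

# Connect to the workspace (assumes a config.json downloaded from the Azure portal)
ws = Workspace.from_config()

# Assumption: the best AutoML model was registered as "stackensemble"
model = Model(ws, name="stackensemble")

# Azure Container Instance configuration, suitable for dev/test deployments
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)

# For AutoML models you may also need an InferenceConfig pointing to the
# scoring script generated by the AutoML run; it is omitted here for brevity.
service = Model.deploy(ws, "stackensemble-aci", [model], deployment_config=aci_config)
service.wait_for_deployment(show_output=True)

print(service.scoring_uri)  # the REST endpoint URL, also visible under Endpoints
```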

Deployed models are listed under Endpoints. After the deployment succeeds, go to Endpoints and find your deployed model there.

You will see a screen like the one above, with the successfully deployed model listed. Click on your deployed model.

After clicking on the deployed model, you will see its details. Click on Test to test the model. You will see a screen like the one above.

Our dataset had four columns: date, store, item and sales. Sales was our target, which the model predicts. On the screen you can see three inputs (date, store and item); we need to feed these to the model to get a sales prediction.

Enter the date, store and item you want to predict the sales for. After providing the input, click Test and the results appear on the right in JSON format. You can play around and cross-check whether the results look reasonable. You can also feed CSV data, but it should follow the format shown below.
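The CSV uses the same input columns as the training data, with one row per prediction you want. A small illustrative example (the values are hypothetical):

```
date,store,item
2018-01-01,1,1
2018-01-01,1,2
2018-01-02,2,15
```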

You can see the JSON results above, with the forecast values for the inputs we provided. This is how you can test multiple stores, items and dates at once using CSV.

REST API endpoint

Deploying an Azure Machine Learning model as a web service creates a REST API endpoint. You can send data to this endpoint and receive the prediction returned by the model.

We will use the REST endpoint to test the model. Click on the Consume tab and you will see the REST endpoint URL at the top. There are consumption samples for three languages: C#, Python and R. If you want to use the model outside Azure, you can call the REST API endpoint from your own application using the consumption sample of your choice.

Create client for model deployed as web service — Azure Machine Learning | Microsoft Docs

I have chosen the Python consumption sample. In the Consume tab you can see how to send data to the web service from outside Azure. Now we will copy the REST API endpoint and try it in Postman, to check whether the endpoint works from the outside world.
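If you don't want to use the auto-generated consumption script, a bare-bones version with the requests library looks like the sketch below. The URL, key and payload schema here are placeholders; the exact values and schema for your deployment are shown in your own Consume tab.

```python
import json
import requests

# Assumption: copied from the Consume tab of the deployed endpoint
scoring_uri = "http://<your-aci-endpoint>.azurecontainer.io/score"
api_key = ""  # fill in only if key-based authentication is enabled

# Hypothetical payload; match the schema shown in your Consume tab
payload = {"data": [{"date": "2018-01-01", "store": 1, "item": 1}]}

headers = {"Content-Type": "application/json"}
if api_key:
    headers["Authorization"] = f"Bearer {api_key}"

response = requests.post(scoring_uri, data=json.dumps(payload), headers=headers)
print(response.status_code)
print(response.json())  # forecast values returned by the model
```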

Open Postman and click Start with something new. This lets you create a new request, collection or API in a workspace.

Change GET to POST, because we want to post a request to our REST API endpoint to fetch the forecast results. Paste the REST API endpoint URL into the request URL field. Then copy the JSON input data from the Consume tab shown above, paste it into the Body section (raw, JSON), and send the request. If key-based authentication is enabled on the deployment, also add an Authorization: Bearer <key> header. After sending the request, you will get the forecasting results from our model via the REST API endpoint.

This is how you can deploy and test a model in Azure AutoML. You can also use the REST endpoint to consume the model from outside the Azure environment.

If you liked my blog and my tutorial helped you to understand AutoML deployment and testing, please click the 👏 button and share it with your friends and others. I’d love to hear your thoughts, so feel free to leave comments below.

References:

Create client for model deployed as web service — Azure Machine Learning | Microsoft Docs
