IBM Watson AutoAI and Microsoft Azure Machine Learning — can that work?

5 min read · Apr 14, 2022

Written by: Lukasz Cmielowski, PhD, Trent Gray-Donald

Today, we will run an IBM AutoAI experiment from Microsoft Azure Machine Learning Studio and then deploy the resulting model to Azure Kubernetes Service (AKS). Read the story to see the cross-cloud experience and the final results.

IBM Watson AutoAI experiment run

To run the IBM Watson AutoAI experiment, we used Microsoft Azure Machine Learning Studio and its notebook environment.

We start by updating the runtime packages to versions matching AutoAI. The following packages need to be installed or updated (via pip):

pip install ibm-watson-machine-learning
pip install snapml
pip install scikit-learn==1.0.2
pip install xgboost==1.5.1
pip install lightgbm==3.3.1
pip install gensim==4.1.2

Now our notebook runtime is ready to run the AutoAI experiment and find the best classification model for the German credit risk use case. Use this sample notebook (prepared for IBM Watson Studio) to set up the IBM Watson Machine Learning instance and Python API (credentials, space, and authorization). Next, let's define the AutoAI optimizer and train the models.

Optimizer definition & fit call

Similar to the scikit-learn API, the fit call starts the search and training process. The computation happens in the IBM Watson Machine Learning service (go here to get an instance). Progress is displayed in the notebook on Microsoft Azure.
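
A minimal sketch of the optimizer definition and fit call, assuming wml_credentials, space_id, and the training data connection have been set up as in the sample notebook (the experiment name and prediction column are illustrative):

from ibm_watson_machine_learning.experiment import AutoAI

# Connect to the Watson Machine Learning service.
experiment = AutoAI(wml_credentials, space_id=space_id)

# Define the optimizer: binary classification on the German credit
# risk data set, optimized for roc_auc.
pipeline_optimizer = experiment.optimizer(
    name='German credit risk - AutoAI',
    prediction_type=AutoAI.PredictionType.BINARY,
    prediction_column='Risk',
    scoring=AutoAI.Metrics.ROC_AUC_SCORE,
)

# Start the search and training; the computation runs in the IBM Watson
# Machine Learning service while progress is shown in the notebook.
run_details = pipeline_optimizer.fit(
    training_data_reference=[training_data_connection],
    background_mode=False,
)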

Pipelines leaderboard

The leaderboard lists the best models found by IBM Watson Studio AutoAI, ranked by the optimization metric roc_auc. The scores (metrics) are calculated on both the training data (3-fold cross-validation) and the holdout data set.
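
The leaderboard can be retrieved as a pandas DataFrame via the summary method (a sketch, assuming the pipeline_optimizer defined above):

# List the pipelines found by the experiment, ranked by roc_auc.
summary = pipeline_optimizer.summary()
print(summary)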

Get the best model

The get_pipeline method downloads the pickled model and loads it using the joblib package. The result is a scikit-learn pipeline with custom transformers from the autoai-libs package. You can also display the confusion matrix or feature importances using the get_pipeline_details method.
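
A sketch, again assuming the pipeline_optimizer from the fit call above:

# Download the best pipeline and load it as a scikit-learn pipeline.
best_pipeline = pipeline_optimizer.get_pipeline(
    astype=AutoAI.PipelineTypes.SKLEARN)

# Details such as the confusion matrix and feature importances.
details = pipeline_optimizer.get_pipeline_details()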

Get model’s source code (pipeline definition)

AutoAI brings full pipeline transparency: you can preview the pipeline definition (source code) by calling the pretty_print method.
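
A sketch of previewing the source code; pretty_print is available on the lale form of the pipeline, which is what get_pipeline returns by default:

# Fetch the pipeline in its lale form and print its definition.
lale_pipeline = pipeline_optimizer.get_pipeline()
lale_pipeline.pretty_print(ipython_display=True)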

Get the predictions

The model can be used in any environment; there is no dependency on IBM Cloud. As with a regular scikit-learn pipeline, call the predict method to get predictions back. The model operates on numpy arrays.
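
For example, scoring holdout records locally (X_holdout is an assumed numpy array with the same feature columns as the training data):

# Local scoring with the downloaded scikit-learn pipeline.
predictions = best_pipeline.predict(X_holdout)
print(predictions)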

As we can see, calling the AutoAI API on Microsoft Azure is no different from calling it on IBM Cloud.

Deploying a webservice to Azure Kubernetes Service (AKS)

In this section we describe all the steps required to deploy the best IBM Watson AutoAI model on Microsoft Azure Kubernetes Service. As you will see below, beyond defining extra Python dependencies when creating the environment, there is nothing special; the rest are standard AKS deployment steps.

First, we need to get a workspace and register the AutoAI model.

To register the model, the best_pipeline (AutoAI's best model) is pickled using the joblib package. Next, we use the register() method from the azureml.core.model module to register the model for webservice deployment. Using the Environment and CondaDependencies classes we define the deployment environment that will be used to serve the AutoAI model. The list of pip packages mirrors the one we installed in the notebook runtime.
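
A sketch of these steps with the azureml-sdk, assuming best_pipeline from the experiment above (the model and environment names are illustrative; azureml-defaults is added because webservice deployments need it):

import joblib
from azureml.core import Environment, Workspace
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.model import Model

# Get the Azure ML workspace (reads config.json from the portal).
ws = Workspace.from_config()

# Pickle the best AutoAI pipeline and register it in the workspace.
joblib.dump(best_pipeline, 'model.pickle')
model = Model.register(workspace=ws,
                       model_path='model.pickle',
                       model_name='autoai-credit-risk-model')

# Serving environment with the same AutoAI dependencies we installed
# in the notebook runtime.
conda_deps = CondaDependencies.create(pip_packages=[
    'azureml-defaults',
    'autoai-libs',
    'snapml',
    'scikit-learn==1.0.2',
    'xgboost==1.5.1',
    'lightgbm==3.3.1',
    'gensim==4.1.2',
    'joblib',
])
env = Environment('autoai-serving-env')
env.python.conda_dependencies = conda_deps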

After creating the environment, we write the serving script, which consists of two functions: init and run. The init function loads the model using the joblib package. The run function converts the JSON payload to a numpy array and passes it to the loaded model to get predictions back.
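
A sketch of such a script (score.py), assuming the model name used at registration:

# score.py - entry script for the AKS webservice
import json

import joblib
import numpy as np
from azureml.core.model import Model

def init():
    # Load the registered AutoAI model once, at container start-up.
    global model
    model_path = Model.get_model_path('autoai-credit-risk-model')
    model = joblib.load(model_path)

def run(raw_data):
    # Convert the JSON payload to a numpy array and score it.
    data = np.array(json.loads(raw_data)['data'])
    predictions = model.predict(data)
    return predictions.tolist()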

In the next step we create the AKS cluster. We also need an InferenceConfig that links our serving script with the deployment environment.
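
A sketch of provisioning the cluster and wiring the script and environment together (the cluster name is illustrative):

from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.model import InferenceConfig

# Provision a new AKS cluster; this can take several minutes.
prov_config = AksCompute.provisioning_configuration()
aks_target = ComputeTarget.create(workspace=ws,
                                  name='autoai-aks',
                                  provisioning_configuration=prov_config)
aks_target.wait_for_completion(show_output=True)

# Link the serving script with the environment defined earlier.
inference_config = InferenceConfig(entry_script='score.py',
                                   environment=env)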

Finally, our AKS cluster is ready. Using the Model.deploy method we can create the webservice (AKS service).
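
A sketch of the deployment call (the service name is illustrative):

from azureml.core.model import Model
from azureml.core.webservice import AksWebservice

# Deploy the registered model as a webservice on the AKS cluster.
deployment_config = AksWebservice.deploy_configuration()
aks_service = Model.deploy(workspace=ws,
                           name='autoai-credit-risk-service',
                           models=[model],
                           inference_config=inference_config,
                           deployment_config=deployment_config,
                           deployment_target=aks_target)
aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)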

Once the AKS service operation finishes, the service status is Healthy. Let's test our deployed model using the run method offered by the aks_service object. The service can also be tested from the GUI via the Test tab (as shown in the screenshot below). The test passes.
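
For example, invoking the run method with two holdout records (X_holdout as before):

import json

# Score two sample records through the deployed AKS service.
test_payload = json.dumps({'data': X_holdout[:2].tolist()})
print(aks_service.run(input_data=test_payload))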

Now we can use our deployed AutoAI model via HTTP requests from any application. The request requires the url, the api_key (which can be copied from the Consume tab), and a payload with new samples.
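
A sketch of such a request; the url and api_key can also be read from the aks_service object instead of the Consume tab:

import json
import requests

url = aks_service.scoring_uri        # or copy it from the Consume tab
api_key = aks_service.get_keys()[0]  # primary authentication key

headers = {'Content-Type': 'application/json',
           'Authorization': 'Bearer ' + api_key}
payload = json.dumps({'data': X_holdout[:2].tolist()})

response = requests.post(url, data=payload, headers=headers)
print(response.json())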

As you can see, inferencing an AutoAI model on Azure Kubernetes Service is super easy. Enjoy the scoring!


Lukasz Cmielowski, PhD

Senior Technical Staff Member at IBM, responsible for AutoAI (AutoML).