AI Insights, DECEMBER 2020

How to overcome the challenges of productionizing AI

AHMED ZEWAIN

DATA SCIENTIST, 2021.AI

Putting AI models in production is notoriously tricky, and the challenges have many different nuances. This blog post summarizes the fundamental difficulties of productionizing AI models and how we, at 2021.AI, address them on our GRACE AI Platform.

A notoriously difficult task

Productionizing an AI model is notoriously difficult. In many cases, algorithms are developed only to be stowed away in their repositories and never put to good use. One of the main challenges is providing an environment that can host the model, especially when the existing infrastructure is complex, which can make the task difficult or, in some cases, infeasible. On top of that, organizations working with AI face new legal and compliance challenges, introducing a governance layer that must be accounted for when productionizing and operating AI models.

To support an AI agenda, you will need to develop or adopt an entirely new platform. In its mature phase, this new infrastructure must have three main characteristics.

First, it needs to be specialized to support a flavor of software that does not necessarily adhere to standard notions of software development. AI software differs from traditional software engineering because AI is used precisely where the desired behavior cannot be specified without a dependency on external data. This means that the traditional practices of neatly delineated modular design and strict abstraction boundaries become harder to uphold and tend to erode over time.

Second, since AI is a relatively immature field when it comes to integration with other software, as much as 95% of the code in a mature project ends up being glue code (code that adapts different modules to one another), while only 5% is AI-related code. The sketch below makes the distinction concrete.
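To illustrate, here is a minimal Python sketch (our own example, not GRACE code) of glue code: an adapter maps one model family's interface onto a shared prediction API, so downstream code does not need bespoke wiring for every model.

```python
# A minimal sketch of glue code versus a common API.
# The interface names here are illustrative assumptions.
from typing import Protocol, Sequence


class Predictor(Protocol):
    """Common API that every model module agrees to expose."""

    def predict(self, rows: Sequence[dict]) -> list[float]:
        ...


class SklearnAdapter:
    """Glue code: adapts a scikit-learn style model to the common API."""

    def __init__(self, model, feature_order: list[str]):
        self._model = model
        self._features = feature_order

    def predict(self, rows: Sequence[dict]) -> list[float]:
        # Convert dict rows into the feature matrix the model expects.
        matrix = [[row[f] for f in self._features] for row in rows]
        return list(self._model.predict(matrix))


def serve(predictor: Predictor, rows: Sequence[dict]) -> list[float]:
    # Downstream code depends only on the shared interface, so each new
    # model family needs one adapter instead of ad hoc glue at every call site.
    return predictor.predict(rows)
```

With a shared interface, the glue code is concentrated in one adapter per model family rather than scattered across the project.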

Third, different AI governance measures are being developed within various organizations, and models in production must comply with them. These measures range from model monitoring to ethical AI requirements.

The GRACE AI Platform is designed to meet these challenges in both development and production.

Production on GRACE – Solution principles

The GRACE AI Platform is designed with machine learning, data science, and AI governance in mind. This means that the entire journey, from data ingestion through verification, feature extraction, and model training to production and scaling, has been considered. Additionally, GRACE addresses the challenges of governance by providing regulatory tools for data scientists.

The production process on GRACE is designed around three simple principles:

  • Minimizing time-to-production
  • Encouraging the development of common APIs to minimize glue code
  • Ensuring governed AI development & production

Production process on GRACE

Minimizing time-to-production is achieved by supporting a limited set of libraries and requiring that models are delivered in a certain format. If those requirements are met, deployment to production can be done with a few simple commands.
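The exact delivery format is platform-specific, but as a rough illustration, a packaged model might expose entry points along these lines (load_model, predict, and the model.pkl artifact are hypothetical names for this sketch, not GRACE's actual contract):

```python
# model.py - a hypothetical packaging contract for a deliverable model.
import pickle
from pathlib import Path


def load_model(artifact_dir: str):
    """Load the trained model from the artifacts shipped with the project."""
    with open(Path(artifact_dir) / "model.pkl", "rb") as fh:
        return pickle.load(fh)


def predict(model, payload: dict) -> dict:
    """Map a JSON-like request payload to a prediction response."""
    features = [payload["features"]]
    return {"prediction": float(model.predict(features)[0])}
```

A fixed contract like this is what lets the platform wrap, containerize, and expose any compliant model without per-project deployment code.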

Thus, GRACE aims for the middle ground: its “one-click production” supports the most popular model types with ease of use, while more exotic variants need tailoring, which the platform is open to. Governance ensures that the exposed model APIs are certified and live up to the ethical requirements during both the development and operational phases.

From the user perspective, the second point (deployment as a REST API) is mainly handled through the Docker registry. Have you built a pipeline that handles a variety of multiclass classification problems and performs model optimization? Or perhaps trained a general neural network to recognize faces? Great! You can submit it to the platform, and the image will be available in the registry for your colleagues to use or develop further. Having common components like this across a platform makes the code easier to maintain and update.
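As an illustration of what such a containerized model API can look like, here is a minimal sketch using Flask; the framework choice and the model.pkl artifact path are assumptions for the example, not something GRACE prescribes:

```python
# A minimal sketch of exposing a trained model as a REST API,
# ready to be containerized and pushed to the registry.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the serialized model once at startup (artifact path is an assumption).
with open("model.pkl", "rb") as fh:
    model = pickle.load(fh)


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    features = [payload["features"]]
    return jsonify({"prediction": float(model.predict(features)[0])})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Once built into an image, the same request/response shape can be reused across models, which is exactly what keeps glue code to a minimum.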

GRACE’s governance module allows you to create an ethical and compliant workflow when developing models and putting them into production. The governed workflow is achieved via a set of tools that the platform provides, ranging from impact assessments and metadata events to a rules engine and model certification. Combined, these tools ensure a transparent, validated, and compliant AI production process.
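To give a feel for how metadata-driven rules can gate a deployment, here is an illustrative sketch; the rule names and thresholds are invented for the example and do not reflect GRACE's actual rules engine:

```python
# An illustrative sketch of certification rules evaluated over model metadata.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]


# Example rules mixing technical and non-technical requirements (assumed).
RULES = [
    Rule("impact assessment completed",
         lambda meta: meta.get("impact_assessment") == "approved"),
    Rule("minimum test accuracy",
         lambda meta: meta.get("test_accuracy", 0.0) >= 0.90),
    Rule("training data documented",
         lambda meta: bool(meta.get("data_sources"))),
]


def certify(metadata: dict) -> list[str]:
    """Return the names of failed rules; an empty list means certified."""
    return [rule.name for rule in RULES if not rule.check(metadata)]


failures = certify({
    "impact_assessment": "approved",
    "test_accuracy": 0.93,
    "data_sources": ["customer_db_v2"],
})
print("certified" if not failures else f"blocked by: {failures}")
```

The point of such a check is that certification becomes a reproducible, auditable step rather than a manual sign-off.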

Technical solution

The architecture consists of six central components and is best illustrated with an example shown in the figure below.

[Figure: AI Model Architecture]

Using either the CLI or the GRACE frontend, the user points to a project that contains the requirements. This is followed by a model certification step and a platform resource allocation step. Model certification is an important aspect of the AI governance framework: the metadata layer is checked against the technical and non-technical rules that the model must comply with before being exposed as a REST API.

Then, in the backend, a deployment workflow is initialized through a RESTful API, where one or several commands are sent to the message queue for execution. The available CLI commands are: build, start, stop, delete, and list. Composite commands, such as deploy, are made up of two parts: build and start. If the source is unknown, i.e., it does not exist in the Docker registry, the image builder will take the necessary code components from the source and build a new image. This image is saved in the registry and passed on to a component that handles deployment to the container management system.

If the image has already been built, it will be listed in the registry, and the user can choose to deploy it along with the model metadata. Either way, a URL for submitting requests to the model, based on the project name, is returned to the user.
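Putting the flow together, a simplified sketch of the dispatch logic might look as follows; the queue representation, registry lookup, and URL scheme are assumptions for illustration only:

```python
# A simplified sketch of the deploy flow described above: "deploy" is the
# composite of "build" and "start", and the build step is skipped when the
# image already exists in the registry.
def deploy(project: str, registry: set[str], queue: list[tuple[str, str]]) -> str:
    image = f"{project}:latest"
    if image not in registry:
        # Unknown source: ask the image builder to build and register it.
        queue.append(("build", image))
        registry.add(image)
    # Hand the image to the container management system.
    queue.append(("start", image))
    # The user gets back a URL, derived from the project name,
    # for submitting requests to the model.
    return f"https://grace.example/models/{project}/predict"


queue: list[tuple[str, str]] = []
url = deploy("churn-model", registry=set(), queue=queue)
print(queue)  # [('build', 'churn-model:latest'), ('start', 'churn-model:latest')]
print(url)
```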

Summary

In summary, GRACE offers viable solutions to well-known problems in the production phase of machine learning models. This matters because it gives the data scientist a specific set of requirements to follow; if these requirements are fulfilled, deploying and subsequently scaling any model is straightforward. The governance module allows organizations to scale their AI portfolios to 10, 20, or even 50 AI models that generate value.

Further, the flow from the data scientist to the registry and on to deployment in the container management system makes it easy to maintain and deploy new models. This is an important point, as any mature AI pipeline needs regular maintenance, and new data scientists should be able to understand older projects, even if they were not involved in them. Overall, the GRACE platform aims to make the productionizing process transparent and open to configuration while adhering to a defined standard, enabling fast and reliable deployment that is easy to monitor and maintain.

Ahmed Zewain

DATA SCIENTIST, 2021.AI

Ahmed Zewain is a Data Scientist at 2021.AI with an MA in mathematical modeling and computing and extensive knowledge of several data engineering tools. His skills include building ML proof-of-concept projects and taking them into production for a wide variety of clients.
