Model Deployment Options

REST API #

The machine learning model is exposed as a service behind a REST API endpoint, e.g., the Google Cloud Vision API. This allows the model to be integrated into a web app or an enterprise application and accessed over HTTP through the REST API.
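As a rough sketch, the snippet below exposes a serialized scikit-learn model through a REST endpoint using Flask; the file name model.joblib, the /predict route, and the input format are illustrative assumptions, not a prescribed setup.

```python
# Minimal sketch: serve a trained scikit-learn model via a REST endpoint with Flask.
# "model.joblib" and the expected JSON payload are hypothetical placeholders.
import joblib
from flask import Flask, request, jsonify

app = Flask(__name__)
model = joblib.load("model.joblib")  # assumes a previously trained, serialized model

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body such as {"features": [[5.1, 3.5, 1.4, 0.2]]}
    payload = request.get_json()
    prediction = model.predict(payload["features"])
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

A client would then POST a JSON body to /predict and receive the prediction in the response; in production the app would typically sit behind a WSGI server and an API gateway rather than Flask's development server.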

Re-coding #

The model is re-coded from the language it was developed in (e.g., R, Python, Scala, etc.) into a language the enterprise production system understands (e.g., C++, Java, SQL, etc.).
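One way to picture re-coding is to emit the scoring logic of a fitted model in the target language. The sketch below is an illustration rather than a standard tool: it trains a small logistic regression in Python and prints an equivalent SQL expression, with a hypothetical table and column names.

```python
# Illustrative sketch of "re-coding": translate a fitted linear model's scoring
# logic into SQL so the production database can score rows without Python.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny toy training set (placeholder for the real model development step)
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

# Build the linear term from the fitted coefficients (columns x1, x2, ... are hypothetical)
terms = " + ".join(
    f"({coef:.6f} * x{i + 1})" for i, coef in enumerate(model.coef_[0])
)
sql = (
    f"SELECT 1.0 / (1.0 + EXP(-({model.intercept_[0]:.6f} + {terms}))) AS score\n"
    "FROM customer_features;"
)
print(sql)
```

In practice this translation is often handled by exchange formats such as PMML or ONNX, or by a manual, carefully tested port of the scoring code.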

Batch Scoring #

The machine learning model is used offline and predictions are made in batches, usually when the load on the infrastructure is minimal. Batch scoring is used when predictions are not required immediately, e.g., a batch process can run once a day or once a week. Scoring can be requested via a manual trigger, on demand through a GUI, or by an automated process (scheduled or triggered by an event). Predictions are written to a file (usually Excel or text) or a database table and are typically used for decision support.
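A minimal batch scoring job might look like the sketch below, assuming a serialized model (model.joblib) and an input CSV whose columns match the training features; the file names are placeholders, and the script would typically be launched by a scheduler such as cron or Airflow.

```python
# Minimal sketch of a batch scoring job: load a serialized model, score a file of
# new records, and write the predictions back out for downstream decision support.
import joblib
import pandas as pd

def run_batch(input_path: str, output_path: str, model_path: str = "model.joblib") -> None:
    model = joblib.load(model_path)          # previously trained, serialized model
    records = pd.read_csv(input_path)        # new records to score
    records["prediction"] = model.predict(records)
    records.to_csv(output_path, index=False)  # or write to a database table instead

if __name__ == "__main__":
    # Hypothetical file names; a scheduler would pass the real paths for each run.
    run_batch("new_records.csv", "scored_records.csv")
```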

Edge Deployment aka Device Scoring #

The model is trained on a server and then transferred to the edge device (e.g., a mobile phone, an IoT device, or a drone) where all scoring is done. Edge deployment is used where predictions are required instantly in an environment with uncertain network availability, e.g., on an autonomous drone, a mobile phone, or an IoT device.
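As an illustrative sketch, the snippet below converts a server-trained Keras model to TensorFlow Lite, producing a model.tflite file that can be bundled into a mobile or IoT app and scored entirely on the device; the tiny architecture stands in for a real trained model.

```python
# Illustrative sketch: convert a Keras model trained on the server into a
# TensorFlow Lite artifact that ships with the edge application.
import tensorflow as tf

# Placeholder architecture; in a real workflow this model is trained on the server.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
# ... model.fit(...) would run here on the server ...

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # ship this file to the edge device
```

On the device, the file is loaded with the TensorFlow Lite interpreter (e.g., tf.lite.Interpreter or the Android/iOS runtime), so predictions are made locally with no network call at scoring time.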
