  1. A Streaming ML Model Deployment

    Sun 29 December 2019

    In general, when a client communicates with a software service, two patterns are available: synchronous and asynchronous communication. In synchronous communication, the client sends a message to the service and is blocked until the operation completes and the result is returned. In asynchronous communication, the service receives the message and processes it without blocking the sender. We’ve already seen an asynchronous deployment of a machine learning model in a previous blog post. In this blog post, we’ll show a similar type of deployment that is useful in different situations, focusing on deploying an ML model as part of a stream processing system. (A minimal sketch of the two communication styles appears after this listing.)

    read more
  2. An AWS Lambda ML Model Deployment

    Sun 10 November 2019

    In the last few years, a new cloud computing paradigm has emerged: serverless computing. This paradigm flips the normal way of provisioning resources in a cloud environment on its head. Whereas a normal application is deployed onto pre-provisioned servers that are running before they are needed, a serverless application's code is deployed first, and servers are assigned to run it only as demand rises. (A minimal handler sketch appears after this listing.)

    read more
  3. A Task Queue ML Model Deployment

    Thu 24 October 2019

    When building software, we may come across situations in which we want to execute a long-running operation behind the scenes while keeping the main execution path of the code running. This is useful when the software needs to remain responsive to a user and the long-running operation would get in the way. These operations often involve contacting another service over the network or performing slow I/O. For example, when a web service needs to send an email, the best approach is often to launch a background task that actually sends the email and to return a response to the client immediately. (A task queue sketch appears after this listing.)

    read more
  4. A Batch Job ML Model Deployment

    Fri 20 September 2019

    In previous blog posts I showed how to develop an ML model in a way that makes it easy to deploy, and how to create a web app that can deploy any model following the same design pattern. However, not all ML models are deployed within web apps. In this blog post I deploy the same model used in the previous posts as an ETL job.

    read more
  5. Using the ML Model Base Class

    Sun 28 July 2019

    In previous blog posts I showed how to build a simple base class for abstracting machine learning models and how to create a Python package that makes use of it. In this blog post I use those ideas to build a simple application that deploys a model through the MLModel base class. (An illustrative base class sketch appears after this listing.)

    read more
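
The contrast between the two communication styles in the first post can be made concrete with a minimal sketch. The endpoint URL, Kafka topic name, and feature payload below are illustrative assumptions, not code from the post.

```python
import json

import requests                  # blocking HTTP client for the synchronous call
from kafka import KafkaProducer  # kafka-python producer for the asynchronous call

features = {"sepal_length": 5.1, "sepal_width": 3.5}

# Synchronous: the client blocks until the service returns the prediction.
response = requests.post("http://localhost:5000/predict", json=features)
print(response.json())

# Asynchronous: the client publishes a message to a stream and moves on;
# a model service consumes the topic and handles it on its own schedule.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("prediction_requests", features)
producer.flush()
```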
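
The serverless deployment described in the second post can be illustrated with a minimal AWS Lambda handler. The model loading, file name, and payload shape here are assumptions made for the sketch, not the code from the post.

```python
import json
import pickle

MODEL = None  # cached across warm invocations of the same Lambda instance


def lambda_handler(event, context):
    """Entry point that AWS Lambda invokes for each request event."""
    global MODEL
    if MODEL is None:
        # Hypothetical model artifact bundled in the deployment package.
        with open("model.pkl", "rb") as f:
            MODEL = pickle.load(f)

    # With an API Gateway proxy integration, the request body arrives as a JSON string.
    features = json.loads(event["body"])
    prediction = MODEL.predict([list(features.values())])

    return {
        "statusCode": 200,
        # default=str keeps NumPy scalar types JSON-serializable in this sketch.
        "body": json.dumps({"prediction": prediction[0]}, default=str),
    }
```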
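
The email example in the third post maps naturally onto a task queue library such as Celery. The broker URL and task body below are assumptions used only to illustrate the pattern.

```python
from celery import Celery

# Assumed Redis broker; any broker supported by Celery would do.
app = Celery("tasks", broker="redis://localhost:6379/0")


@app.task
def send_email(recipient, subject, body):
    """Long-running work executed by a worker process, not by the web server."""
    # ... connect to an SMTP server and send the message here ...
    print(f"Sending '{subject}' to {recipient}")


# Inside the request handler: enqueue the task and respond immediately;
# a worker picks the message up from the broker in the background.
send_email.delay("user@example.com", "Welcome", "Thanks for signing up!")
```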
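
The MLModel base class referred to in the fifth post is defined in the earlier posts of the series; the sketch below only illustrates what such an abstraction might look like, with the method name and concrete model chosen for the example rather than taken from the original code.

```python
from abc import ABC, abstractmethod


class MLModel(ABC):
    """Illustrative base class: a uniform interface that deployment code
    can program against without knowing which concrete model it serves."""

    @abstractmethod
    def predict(self, data):
        """Accept input features and return a prediction."""


class IrisModel(MLModel):
    """Hypothetical concrete model that fulfills the interface."""

    def predict(self, data):
        # A real implementation would call a trained estimator here.
        return {"species": "setosa"}


# Deployment code depends only on the MLModel interface.
model = IrisModel()
print(model.predict({"sepal_length": 5.1, "sepal_width": 3.5}))
```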
