DevOps Articles

Curated articles, resources, tips and trends from the DevOps World.

Explore Amazon SageMaker Serverless Inference for Deploying ML Models

Summary: This is a summary of an article originally published by The New Stack.

Launched at AWS's re:Invent 2021 user conference earlier this month, Amazon SageMaker Serverless Inference (https://aws.amazon.com/about-aws/whats-new/2021/12/amazon-sagemaker-serverless-inference/) is a new inference option for deploying machine learning models without configuring and managing the underlying compute infrastructure. The fundamental difference between serverless inference and the other deployment mechanisms is how the compute infrastructure is provisioned, scaled, and managed.
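
To make that provisioning difference concrete, here is a minimal sketch (not from the article) at the boto3 level; the model name, endpoint config names, and capacity values are hypothetical placeholders. With a serverless endpoint, the production variant carries a ServerlessConfig instead of an instance type and count:

```python
# Minimal sketch, assuming AWS credentials and a SageMaker model named
# "my-model" already exist; all names and capacity values are placeholders.
import boto3

sm = boto3.client("sagemaker")

# Real-time endpoint config: you choose the instance type and count.
sm.create_endpoint_config(
    EndpointConfigName="my-realtime-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# Serverless endpoint config: no instance type at all; SageMaker provisions,
# scales, and manages the compute behind the endpoint.
sm.create_endpoint_config(
    EndpointConfigName="my-serverless-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-model",
        "ServerlessConfig": {
            "MemorySizeInMB": 2048,
            "MaxConcurrency": 5,
        },
    }],
)
```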

Amazon SageMaker Serverless Inference joins existing deployment mechanisms, including real-time inference, elastic inference, and asynchronous inference.

Luckily, the workflow doesn’t change when switching between the conventional real-time inference endpoint and the new serverless inference endpoint.
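
As an illustration of that point, here is a sketch using the SageMaker Python SDK (not from the article); the container image URI, model artifact path, and execution role ARN are placeholders, and the serverless settings are example values. The only change from a real-time deployment is the configuration object passed to deploy():

```python
# Minimal sketch: deploying the same Model object as either a real-time or a
# serverless endpoint. Image URI, model data, and role below are placeholders.
import sagemaker
from sagemaker.model import Model
from sagemaker.serverless import ServerlessInferenceConfig

session = sagemaker.Session()

model = Model(
    image_uri="<inference-container-image-uri>",        # placeholder
    model_data="s3://<bucket>/<prefix>/model.tar.gz",   # placeholder
    role="<execution-role-arn>",                        # placeholder
    sagemaker_session=session,
)

# Real-time endpoint: you pick and manage the instance type.
# predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")

# Serverless endpoint: same deploy() call, only the config differs;
# SageMaker provisions and scales the compute for you.
serverless_config = ServerlessInferenceConfig(
    memory_size_in_mb=2048,  # 1024-6144 MB, in 1 GB increments
    max_concurrency=5,       # concurrent invocations before throttling
)
predictor = model.deploy(serverless_inference_config=serverless_config)
```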

In the next part of this series, we will look at the steps involved in publishing a SageMaker serverless inference endpoint for a TensorFlow model.
