This tutorial is the latest installment in a series where we build an end-to-end stack for machine learning inference at the edge. We will extend that stack by deploying the NVIDIA Triton Inference Server (https://developer.nvidia.com/nvidia-triton-inference-server), configured to treat the MinIO tenant as its model store. By the end of this tutorial, we will have a fully configured model server and model registry ready for inference.
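At a high level, Triton can read models directly from any S3-compatible endpoint, so pointing it at the MinIO tenant comes down to passing an `s3://` model-repository path and the tenant's credentials. The following is a minimal sketch of the Deployment we are working toward; the tenant service name (`minio.minio-tenant.svc.cluster.local`), bucket name (`models`), Secret name (`minio-creds`), and image tag are assumptions you should adjust to your environment:

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: triton
spec:
  replicas: 1
  selector:
    matchLabels:
      app: triton
  template:
    metadata:
      labels:
        app: triton
    spec:
      containers:
        - name: triton
          image: nvcr.io/nvidia/tritonserver:23.10-py3
          args:
            - tritonserver
            # Triton's syntax for an HTTPS S3-compatible store such as
            # MinIO: s3://https://host:port/bucket (assumed service/bucket)
            - --model-repository=s3://https://minio.minio-tenant.svc.cluster.local:443/models
          env:
            # Tenant credentials, read from an assumed Kubernetes Secret
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: minio-creds
                  key: accesskey
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: minio-creds
                  key: secretkey
            # Triton's S3 client expects a region even for MinIO; the
            # value itself is arbitrary here
            - name: AWS_DEFAULT_REGION
              value: us-east-1
          ports:
            - containerPort: 8000  # HTTP inference
            - containerPort: 8001  # gRPC inference
            - containerPort: 8002  # Prometheus metrics
EOF
```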

Before deploying the model server, we first need to populate the model store (which Triton calls the model repository) with a few models, as shown in the sketch below.
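Triton expects the repository to follow a fixed layout: one directory per model, each containing a per-model `config.pbtxt` and numbered version subdirectories that hold the model files. One way to upload such a tree to the MinIO tenant is with the MinIO client (`mc`); the alias, endpoint URL, credentials, and model name below are hypothetical placeholders:

```shell
# Hypothetical repository layout Triton expects:
#
#   model-repo/
#   └── densenet_onnx/
#       ├── config.pbtxt
#       └── 1/
#           └── model.onnx

# Register the MinIO tenant endpoint under a local alias
# (assumed URL and credentials)
mc alias set edge https://minio.minio-tenant.svc.cluster.local \
  ACCESS_KEY SECRET_KEY

# Create the bucket Triton will read from, then mirror the
# local repository tree into it
mc mb edge/models
mc mirror model-repo edge/models

# Verify the layout landed as expected
mc ls --recursive edge/models
```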

You have successfully deployed and configured the model server backed by a model store running at the edge.
