
In this tutorial, we will explore the idea of running TensorFlow models as microservices at the edge. For completeness, we will run a single-node K3s cluster on a Jetson Nano.

Since Docker supports custom runtimes, we can use the standard Docker CLI with the --runtime nvidia switch to invoke NVIDIA's container runtime. Check which runtimes your Docker installation knows about with the command shown below.
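The original command is not preserved here; as a minimal sketch, one way to confirm that the NVIDIA runtime is registered and then exercise it is shown below. The l4t-base image tag is an assumption and should be matched to your JetPack/L4T release.

```shell
# List the container runtimes registered with Docker; "nvidia" should appear
sudo docker info | grep -i runtime

# Launch a container through NVIDIA's runtime to confirm the switch works
# (the image tag is an assumption; pick the one matching your L4T version)
sudo docker run --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.4.3 echo "NVIDIA runtime works"
```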

The AI workloads running in K3s need access to the GPU, which is available only through NVIDIA's container runtime (nvidia-docker).
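One common way to expose the GPU to containers that K3s launches through Docker is to make nvidia the default runtime in /etc/docker/daemon.json. This is a sketch under that assumption; the runtime path shown is the usual Jetson location and may differ on your system.

```shell
# Make the NVIDIA runtime the default so every container gets GPU access
# (paths are the typical Jetson defaults; adjust if your install differs)
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF

# Restart Docker so the new default runtime takes effect
sudo systemctl restart docker
```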

We will also add a couple of other switches that make it easier to use the kubectl CLI with K3s.
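As a sketch of what that installation might look like (the exact switches used in the original are not shown here): --docker points K3s at the Docker daemon configured above, and --write-kubeconfig-mode 644 is a common way to make the kubeconfig readable so kubectl works without sudo.

```shell
# Install single-node K3s backed by Docker, so workloads go through
# the NVIDIA runtime, and write a world-readable kubeconfig
curl -sfL https://get.k3s.io | sh -s - --docker --write-kubeconfig-mode 644

# Point kubectl at the K3s kubeconfig and verify the node is up
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
```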
