
This tutorial is the latest installment in an explanatory series on Kubeflow, the popular open source machine learning platform for Kubernetes originally developed at Google. In the last part of this series, we launched a custom Jupyter Notebook Server backed by a shared PVC to prepare and process the raw dataset. In the current tutorial, we will use that dataset to train and save a TensorFlow model. The saved model will be stored in another shared PVC that will be accessed by the deployment Notebook Server.
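As a rough sketch of what this step looks like, the snippet below trains a small Keras model and writes it to a directory assumed to be the shared PVC mounted into the Notebook Server pod. The mount path (/mnt/models), the toy MNIST data, and the model architecture are illustrative assumptions, not the exact code from this series; it assumes TensorFlow 2.x, where saving to a directory produces the SavedModel format.

```python
# Minimal sketch: train a small Keras model and save it to a directory
# that is assumed to be a shared PVC mounted into the Notebook Server pod.
import os
import tensorflow as tf

# Hypothetical mount point of the shared PVC; the actual path depends on
# how the volume is attached to your Notebook Server.
MODEL_DIR = "/mnt/models/demo"

# Placeholder data standing in for the preprocessed dataset from the
# previous part of the series.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128)

# Saving to the PVC makes the model visible to any other pod that mounts
# the same volume, such as the deployment Notebook Server used later in
# this series.
os.makedirs(MODEL_DIR, exist_ok=True)
model.save(MODEL_DIR)
```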

In the next part of this series, we will deploy this model for inference.
