Source: hugoluchessi.medium.com

Scaling Data Science development
Deployment meant zipping up all the code and copying it to the server via SSH, FTP, or even physically with a flash drive. Horizontal scaling meant buying new bare-metal machines and placing them in a refrigerated room (or under some developer's desk).

Once the code is in a centralized repository, it becomes much easier to implement tooling that automates the build and test processes and, depending on the output, prevents bad code from being deployed to production.
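As a rough illustration only (the article does not prescribe a specific stack), such a gate can be as simple as a script that builds the project, runs the test suite, and refuses to hand off to deployment when anything fails. The `pytest` command and the `deploy.sh` script below are hypothetical placeholders, not part of the original setup.

```python
import subprocess
import sys

def run(cmd: list[str]) -> int:
    """Run a command, streaming its output, and return the exit code."""
    print(f"$ {' '.join(cmd)}")
    return subprocess.call(cmd)

def main() -> None:
    # 1. Build step: install the project and its dependencies.
    if run([sys.executable, "-m", "pip", "install", "-e", "."]) != 0:
        sys.exit("Build failed: dependencies could not be installed.")

    # 2. Test step: run the test suite (assumes a pytest-based suite).
    if run([sys.executable, "-m", "pytest", "-q"]) != 0:
        sys.exit("Tests failed: blocking deployment of bad code.")

    # 3. Deploy step: only reached when build and tests pass.
    #    `deploy.sh` stands in for whatever deployment command the team uses.
    if run(["./deploy.sh"]) != 0:
        sys.exit("Deployment script failed.")

    print("Build, test and deploy completed successfully.")

if __name__ == "__main__":
    main()
```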

In this case, we need to create code (and gather data) to train and test the model.
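As a minimal sketch of what that training and testing code might look like (the bundled dataset, the scikit-learn model, and the train/test split below are assumptions for illustration, not the author's setup):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Gather data: a bundled example dataset stands in for real project data.
X, y = load_breast_cancer(return_X_y=True)

# Hold out part of the data so the model is tested on examples it never saw.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train the model on the training portion only.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Test: evaluate on the held-out portion to estimate real-world performance.
predictions = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, predictions):.3f}")
```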

There are more focused articles on this subject, and a detailed treatment of model training is not the point of this article.
