Category: Database, Containerization, YAML, Artificial Intelligence

The development time of GPU-accelerated applications can vary considerably depending on the hardware of the machine used for development.

In this tutorial, we discuss how to develop GPU-accelerated applications in containers locally, and how to use Docker Compose to easily deploy them to the cloud on the Amazon ECS platform.
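As a sketch of what this looks like on the Compose side, a service can reserve GPUs through the `deploy.resources.reservations.devices` section of the Compose specification (the service name, image, and command below are placeholders, not the tutorial's actual files):

```yaml
services:
  training:
    image: my-gpu-app          # placeholder image name
    command: python train.py   # placeholder entrypoint
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia   # request the NVIDIA GPU driver
              count: 1         # reserve one GPU for this service
              capabilities: [gpu]
```

The same file works both locally (with the NVIDIA container runtime installed) and when Compose deploys to Amazon ECS, where the reservation maps to a GPU-enabled instance type.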

For this tutorial, we rely on sample code from the TensorFlow documentation to simulate a GPU-accelerated translation service that we can orchestrate with Docker Compose.

We can now query the translator service, which uses the trained model. Keep in mind that, for this exercise, we are not concerned with the accuracy of the translation, but with setting up the entire process as a service that is easy to deploy with Docker Compose.
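A minimal client sketch for querying the service might look like the following. The endpoint URL, the JSON payload shape (`{"text": ...}`), and the response field (`"translation"`) are assumptions for illustration; the actual interface depends on how the translator service is exposed in the Compose file.

```python
import json
from urllib import request

# Hypothetical endpoint; the real host/port depend on the Compose file's
# port mapping for the translator service.
TRANSLATE_URL = "http://localhost:5000/translate"


def build_request(sentence: str) -> request.Request:
    """Build a POST request carrying the sentence to translate as JSON."""
    payload = json.dumps({"text": sentence}).encode("utf-8")
    return request.Request(
        TRANSLATE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def translate(sentence: str) -> str:
    """Send the sentence to the service and return the translated text."""
    with request.urlopen(build_request(sentence)) as resp:
        return json.loads(resp.read())["translation"]
```

Calling `translate("hace mucho frio aqui.")` would then POST the sentence to the running container and return whatever the model produces.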

We can easily do this by removing the service from the Compose file and then running `docker compose up` again; Compose reconciles the running application with the updated file.
