Category: Software, Business, Kubernetes, Infrastructure, Machine Learning, Artificial Intelligence

A startup is using a VMware-style technique to give data scientists greater access to GPU compute for their artificial intelligence (AI) workloads. Run:AI, whose platform is designed to help organizations marshal the compute power they need to accelerate AI development and deployment, recently unveiled two new technologies, Thin GPU Provisioning and Job Swapping, which let data scientists share the GPUs allocated for their AI work.

Other data scientists requesting compute may be given access to GPUs that were previously assigned to someone else, because the system detects that the original user is not actually using them.

What does vary is the tooling data scientists use to run their workloads on the GPUs.

The user who was originally allocated the GPUs doesn't see that idle accelerator capacity has been reallocated to another data scientist, and that data scientist doesn't see where the extra compute came from; the sharing is transparent to both sides.
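The idea behind thin provisioning can be sketched in a few lines. The toy scheduler below is purely illustrative and is not Run:AI's implementation: all class and method names (`ThinGpuScheduler`, `grant`, `idle_gpus`) are hypothetical. The key point it demonstrates is that logical allocations may exceed the physical GPU count, and a pending job can borrow capacity whenever the GPUs promised to another user sit idle.

```python
from dataclasses import dataclass


@dataclass
class Job:
    name: str
    gpus_requested: int   # logical allocation promised to the user
    gpus_in_use: int = 0  # GPUs the job is actively exercising


class ThinGpuScheduler:
    """Toy model of thin GPU provisioning: the sum of logical
    allocations may exceed the physical GPU count, and idle
    capacity is lent to other jobs transparently."""

    def __init__(self, physical_gpus: int):
        self.physical_gpus = physical_gpus
        self.jobs: list[Job] = []

    def submit(self, job: Job) -> None:
        self.jobs.append(job)

    def idle_gpus(self) -> int:
        # Physical GPUs minus those actively in use, regardless of
        # how many have been logically promised to jobs.
        busy = sum(j.gpus_in_use for j in self.jobs)
        return self.physical_gpus - busy

    def grant(self, job: Job, gpus: int) -> bool:
        """Let `job` actually start using `gpus` GPUs, if enough
        physical capacity is currently idle."""
        if gpus <= self.idle_gpus():
            job.gpus_in_use += gpus
            return True
        return False


# A cluster with 4 physical GPUs, fully promised to alice.
sched = ThinGpuScheduler(physical_gpus=4)
alice = Job("alice", gpus_requested=4)
bob = Job("bob", gpus_requested=2)
sched.submit(alice)
sched.submit(bob)

sched.grant(alice, 1)          # alice actively uses only 1 of her 4
print(sched.grant(bob, 2))     # bob runs on alice's idle capacity
print(sched.idle_gpus())       # 4 physical - 1 (alice) - 2 (bob)
```

Neither user is aware of the other: alice's logical allocation of 4 GPUs is untouched, while bob's job simply runs on hardware that would otherwise sit idle.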
