At its annual GTC developer event today (https://www.nvidia.com/gtc/), Nvidia is announcing two new cloud services based on large language model (LLM) technology (https://thenewstack.io/5-ai-trends-to-watch-out-for-in-2022/). One service lets users customize pre-trained LLMs for their own specific use cases, and another caters to biomedical research using trained protein models.

Nvidia had announced vast improvements to large language model training times (https://thenewstack.io/nvidia-shaves-up-to-30-off-large-language-model-training-times/) less than two months ago.

Beyond prompt learning, the cloud service will also allow its LLMs to be used directly for inference.
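For developers weighing this kind of hosted service, inference typically amounts to a single authenticated HTTP request to a managed endpoint. The Python sketch below is purely illustrative; the endpoint URL, header, model name, and request fields are assumptions for the sake of the example, not Nvidia's documented NeMo service API.

```python
# Hypothetical sketch of calling a hosted LLM inference endpoint.
# The URL, header names, and JSON fields are illustrative placeholders,
# not Nvidia's actual NeMo LLM service API.
import os

import requests

API_URL = "https://api.example.com/v1/completions"  # placeholder endpoint
API_KEY = os.environ["LLM_API_KEY"]  # placeholder credential

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "megatron-530b",  # placeholder model identifier
        "prompt": "Summarize this abstract: ...",
        "tokens_to_generate": 100,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```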

All of this, especially early access to the NeMo cloud services, gives developers a low-barrier opportunity to start working with LLMs.
