Large language models are inefficient, period. That much is apparent at AWS re:Invent this week. Inference is a hot topic, and conversations center on how to get the most out of LLMs given the cost of training them and the energy they consume.
