Over the past few weeks, there has been a declaration that open source AI will dominate the field (https://techcrunch.com/2023/05/11/making-foundation-models-accessible-the-battle-between-closed-source-and-open-source-ai/). The argument goes something like this: Open source machine-learning algorithms have exceeded the capabilities of proprietary algorithms. When open source algorithms are used to train large language models (https://roadmap.sh/guides/introduction-to-llms) on open source data sets, the resulting “foundation” models perform quite well on benchmarks.
So does this mean that every generative AI use case can be served by an open source foundation model, or do some still need a foundation model built from proprietary real-time data?
More likely, enterprises looking to build generative AI will rely on a mix: foundation models from large companies with the checkbook to maintain their own real-time data infrastructure for the use cases that demand fresh data, and open source foundation models for the rest.