Earlier this week, OpenAI launched GPT-4, the latest in its series of large language models capable of reading, summarizing, translating and generating text in a way that seems almost human. GPT-4 is intended to be the successor to GPT-3.5, the model that the popular ChatGPT conversational tool (https://thenewstack.io/just-out-of-the-box-chatgpt-causing-waves-of-talk-concern/) is built upon.

Notably, GPT-4 also exhibited “human-level performance” on various standardized academic tests like the LSAT, GRE, and various AP exams. OpenAI’s line of GPTs are what are known as generative pre-trained transformers (https://en.wikipedia.org/wiki/Generative_pre-trained_transformer), a family of language models that are pre-trained on large text-based datasets.

Like previous models, GPT-4 can be prone to risks like generating harmful advice or inaccurate information, but its additional capabilities also entail new risks.
