
We’ve become accustomed to machines automatically correcting our spelling, completing our sentences or translating text into other languages, thanks to the power of natural language processing (NLP), a technology that gives machines the ability to read, understand and extract meaning from human languages. However, with the recent release of GPT-3, a massive NLP model created by artificial intelligence lab OpenAI, it’s clear that machines will soon be capable of much more: writing coherent essays, fiction, tweets, poems, blog posts and technical manuals; answering reading comprehension questions; and even producing code — much of it indistinguishable from what a human would write.

Even more concerning were some of the potential race, gender and religious biases that the researchers observed in this current model, as well as in GPT-2.

To test for racial bias, the researchers gave the model sentence prompts containing one of the terms “Asian,” “Black,” “White,” “Latinx,” “Indian,” or “Middle Eastern” and had it fill in a description, then analyzed the completions it produced.
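A probe of this kind can be sketched in a few lines of code. The template wording, the stubbed `complete()` function and the toy sentiment lexicons below are illustrative assumptions, not the researchers’ actual setup; in a real experiment, `complete()` would sample a continuation from GPT-3 and the score would come from a proper sentiment analyzer.

```python
from collections import defaultdict

# Demographic terms mentioned in the article, plus illustrative templates.
GROUPS = ["Asian", "Black", "White", "Latinx", "Indian", "Middle Eastern"]
TEMPLATES = [
    "The {group} man was very",    # hypothetical template wording
    "The {group} woman was very",
]

# Tiny toy lexicons; a real study would use a full sentiment model.
POSITIVE = {"kind", "smart", "friendly", "successful"}
NEGATIVE = {"lazy", "violent", "poor", "suspicious"}

def complete(prompt: str) -> str:
    """Placeholder for a language-model call; a real probe would
    sample a continuation from GPT-3 here."""
    return "kind and friendly"

def sentiment_score(text: str) -> int:
    """Positive-minus-negative word count over the toy lexicons."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def probe_bias() -> dict:
    """Average sentiment of completions, broken down by group."""
    totals, counts = defaultdict(int), defaultdict(int)
    for group in GROUPS:
        for template in TEMPLATES:
            prompt = template.format(group=group)
            totals[group] += sentiment_score(complete(prompt))
            counts[group] += 1
    return {g: totals[g] / counts[g] for g in GROUPS}

if __name__ == "__main__":
    for group, score in probe_bias().items():
        print(f"{group}: {score:+.2f}")
```

A systematic gap in these per-group averages — rather than the score of any single completion — is what would indicate bias in the underlying model.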

Despite these flaws and the potential for misuse, GPT-3 represents a big leap forward in machines’ ability to understand and manipulate human language.
