GPT-3 is a probabilistic language model that has been trained on uncategorized text from the internet to produce human-like text. GPT-3 is HUGE, with a capacity of 175 billion parameters! Its sheer size is what sets GPT-3 apart, and means it's able to generate code from a problem statement and demonstrate significantly more sophisticated NLP abilities. Currently it's released with beta-only access to mitigate ethical concerns.

What is GPT-3?

GPT-3 is a sophisticated general language model created by OpenAI and trained on uncategorized text from the internet. GPT-3 uses deep learning to understand the grammar and syntax of language and produce human-like text. It's a probabilistic model, which means that, given a set of previous words, it predicts the most likely next word, much as your mobile phone's keyboard does.
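
To make "predict the next word" concrete, here's a minimal sketch using the beta-era OpenAI Python client (the API key and prompt are placeholders, and "davinci" was the largest GPT-3 engine exposed through the beta API):

```python
# Minimal next-word prediction sketch with the beta-era OpenAI client.
# Assumes beta access; the API key and prompt are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    engine="davinci",        # the largest publicly exposed GPT-3 engine
    prompt="The weather today is",
    max_tokens=1,            # ask for just the next token
    temperature=0,           # take the most probable continuation
    logprobs=5,              # also return the 5 most likely alternatives
)

print(response["choices"][0]["text"])                         # most likely next word
print(response["choices"][0]["logprobs"]["top_logprobs"][0])  # runners-up, with log-probabilities
```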

Once a language model is trained, it can be used downstream with other applications for things like sentiment analysis, language inference, and paraphrasing.

GPT-3 is the latest in a line of natural language processing (NLP) systems. There have been previous iterations of GPT, with GPT-3 being the third and largest to date. Prior to the release of GPT-3 in June 2020, the largest NLP model was Microsoft's Turing NLG (introduced in February 2020) with a capacity of 17 billion parameters. Comparatively, GPT-3 is in a completely different stratosphere with its capacity of 175 billion parameters. A parameter is a configuration variable internal to the model whose value is learned from historical training data.
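
As a toy illustration of what a parameter is, the PyTorch snippet below counts the learnable weights and biases of a deliberately tiny model; GPT-3's 175 billion parameters are the same kind of learned values, just at an enormous scale:

```python
# Toy illustration: parameters are the weights and biases a model learns.
import torch.nn as nn

tiny_model = nn.Linear(in_features=10, out_features=2)  # 10x2 weights + 2 biases

n_params = sum(p.numel() for p in tiny_model.parameters())
print(n_params)  # 22 -- versus GPT-3's 175,000,000,000
```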

GPT-3 strikes a balance between size and skill

For most common tasks, GPT-3 can easily be used as a plug-and-play tool, for example, to predict the sentiment of movie reviews. For more specialized use cases, such as predicting sentiment in conversations between a salesperson and a customer, it needs to be fine-tuned.
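
As a sketch of the plug-and-play case, the few-shot prompt below asks GPT-3 to label a movie review's sentiment with no fine-tuning at all (beta-era Completion API again; the engine name and example reviews are illustrative):

```python
# Plug-and-play sentiment classification via few-shot prompting.
# Assumes beta API access; the reviews are made up for illustration.
import openai

openai.api_key = "YOUR_API_KEY"

prompt = (
    "Classify the sentiment of each movie review.\n\n"
    "Review: I loved every minute of it.\nSentiment: Positive\n\n"
    "Review: A dull, predictable mess.\nSentiment: Negative\n\n"
    "Review: The plot dragged, but the acting saved it.\nSentiment:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=3,
    temperature=0,   # deterministic: take the most likely label
    stop="\n",
)
print(response["choices"][0]["text"].strip())
```

For the specialized salesperson-and-customer case, you would swap in labeled examples from real sales conversations, or fine-tune the model on them.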

In general, the more trainable parameters a model has, the more it can learn. But there's a fine balance to be struck between the number of parameters and the size of the data set: the greater the number of parameters, the larger the learning capacity, and the more data is needed to fill that capacity. A larger model is also harder to maintain, so increasing the learning capacity and the ability to scale requires significant infrastructure, in terms of both cost and size.

Making GPT-3 work for you

OpenAI reports that "To date, over 300 apps are using GPT-3 across varying categories and industries, from productivity and education to creativity and games."

Examples of apps that are using GPT-3:

  • Viable uses GPT-3 to provide useful insights from customer feedback in easy-to-understand summaries. Viable identifies themes, emotions, and sentiments from a variety of customer feedback forums, such as helpdesk tickets, chat logs, reviews, and surveys. It then pulls insights from this feedback and provides an almost instantaneous summary so that companies get a better understanding of their customers’ wants and needs.

For example, if asked, “What’s frustrating our customers about the checkout experience?” Viable might provide the insight: “Customers are frustrated with the checkout process because it takes too long to load. They also want to be able to edit their address during checkout and store multiple payment methods.”

  • Fable Studio creates interactive stories and uses GPT-3 to help power their story-driven "Virtual Beings." GPT-3 gives Fable the ability to bring their characters to life with natural conversations, combining their artists' vision, AI, and emotional intelligence to create powerful narratives.
  • Algolia uses GPT-3 in their Algolia Answers product. This tool offers an advanced search that better understands complex customer questions and quickly connects customers to the specific parts of the content that answer them. GPT-3 enables Algolia to identify deeper contextual information, which yields better-quality results.

GPT-3 in action

Traditionally, artificial intelligence struggles with “common sense”, but GPT-3 is actually pretty slick at answering many common-sense questions. Here’s an example of GPT-3 deploying common sense:

Q: Are there any animals with three legs?

A: No, there are no animals with three legs.

Q: Why don’t animals have three legs?

A: Animals don’t have three legs because they would fall over.

Surprisingly, GPT-3 is not perfect at simple math! Such operations are easy for a purpose-built program, but that kind of exact, recursive logic doesn't translate neatly into the neural net architecture on which GPT-3 operates.

One area where GPT-3 is impressive is its ability to write code. Here's an example video:

https://player.vimeo.com/video/426819809

AI has always struggled with bias, and while GPT-3 has room for improvement, it's moving in the right direction. Here's a test of a few biased questions asked on the OpenAI GPT-3 Playground. The Playground flags that an answer "may contain unsafe content" when the model suggests women belong in the kitchen, and it raises the same flag for the answer that men belong in the kitchen.

Source: OpenAI GPT-3 Playground

GPT-3 seems to be quite impressive in some areas, and yet still subhuman in others. Hopefully, with a better understanding of its strengths and weaknesses, we’ll all be better equipped to use GPT-3 in real products.

The GPT-3 buzz

When OpenAI launched GPT-3, its creators said they wouldn't release the full model with the largest number of parameters, which is why there are smaller variations of GPT-3. OpenAI was concerned about how such a powerful and clever model would be used, and about the ethics of releasing it. Perhaps unsurprisingly, this created quite a buzz!

The worry is that the inevitable bias in the internet data used to train the model will filter through to the results it generates. For example, GPT-3 could generate harmful tweets, or even long-form content, that is indistinguishable from content produced by a human. However, you can fine-tune the model with a dataset that targets the system's weaknesses: select sensitive categories (such as abuse/violence, human behavior, inequality, health, political opinion, relationships, sexual activity, and terrorism) and curate the data for those categories to reduce potential bias, as sketched below.
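
As a purely hypothetical sketch of what one values-targeted training record might look like, the snippet below writes a prompt/completion pair in the JSONL format that OpenAI's fine-tuning endpoint accepted; the example text is illustrative and not taken from OpenAI's actual PALMS dataset:

```python
# Hypothetical values-targeted fine-tuning record (illustrative text only),
# in the prompt/completion JSONL format used by OpenAI's fine-tuning endpoint.
import json

record = {
    "prompt": "Who belongs in the kitchen?",
    "completion": " Anyone who enjoys cooking, regardless of gender.",
}

with open("values_targeted.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```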

Eventually, OpenAI did release the largest version of GPT-3. But not everyone can access it yet. In an effort to alleviate the ethical concerns, it's currently in private beta testing. There is a waitlist, and OpenAI selectively invites people into the program. This way, the model can be continuously improved and kept in a safe, controlled setting.

Is GPT-3 an NLP revolution?

In our opinion, GPT-3 has not revolutionized the space, nor is it a paradigm shift – that started with BERT. Rather, GPT-3 represents an important milestone. It's methodologically no different from its predecessors; its sheer size is what sets it apart. Because GPT-3 is so much bigger than anything that came before, it's able to generate code and demonstrate significantly more sophisticated NLP abilities.

Additional reading:

Giving GPT-3 a Turing Test

What is GPT-3? Everything business needs to know about OpenAI's breakthrough AI language program

OpenAI PALMS – Adapting GPT-3 to Society | By Alberto Romero

Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets

Sekhar Vallath
Data Scientist (AI in NLP)