The Subtle Art of Priming GPT-3

It’s not clear to many how GPT-3 is primed to get the right outputs. Let me try to explain this.

The input to GPT-3 is a single text field plus several knobs, some scalar and some categorical. Examples of these knobs are temperature and a toxicity filter flag. Think of these knobs as degrees of emotion.

Knobs for GPT-3
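
As a rough sketch of what that interface amounts to (the parameter names below follow the original Completion API; the list is illustrative, not exhaustive):

```python
# One text field plus a handful of knobs. The knob names follow the
# original GPT-3 Completion API; categorical knobs such as a toxicity
# filter sit alongside these, outside the prompt text itself.
request = {
    "prompt": "Once upon a time,",  # the single text field
    "temperature": 0.7,             # scalar knob: randomness of the sampling
    "top_p": 1.0,                   # scalar knob: nucleus-sampling cutoff
    "max_tokens": 64,               # scalar knob: how long the completion can be
}
```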

Then there’s the priming part. We know that human cognition can be primed: if you hear the word mouse and are asked to name another animal, you will most likely say cat. en.wikipedia.org/wiki/Priming

GPT-3 learns to predict a new sequence of words from the sequences it has seen. So, as with human priming, there is an order of events: when you prime the input differently, you should expect different outputs.

The way to prime is to provide several examples of the form (c, x, r), where c = context, x = example, and r = result. To generate the result you want, you supply something like c x r c x r c x r c x r c x. GPT-3 will fill in the last r.

Example of rewording text
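
A minimal sketch of this pattern in Python; the rewording triples here are invented for illustration, not taken from a real prompt:

```python
# Build a primed prompt from (context, example, result) triples, leaving
# the final result open for GPT-3 to fill in. The triples are made-up
# rewording examples, used only to show the shape of the prompt.
triples = [
    ("Reword politely", "Give me the report.", "Could you please send me the report?"),
    ("Reword politely", "Answer my email.", "Would you mind replying to my email?"),
    ("Reword politely", "Fix this bug now.", "Could you look at this bug when you have a moment?"),
]

final_context = "Reword politely"
final_example = "Move the meeting."

# c x r c x r c x r c x : GPT-3 is expected to continue with the last r.
prompt = " ".join(f"{c} {x} {r}" for c, x, r in triples)
prompt += f" {final_context} {final_example}"
```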

To reduce ambiguity further, you can add cue words between the elements of each triple, so that each example becomes the string “c from: x to: r end”. You then write “c from: x to: r end c from: x to: r end c from: x to: r end c from: x to: ”, and GPT-3 will fill in the final r and sometimes the end.
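
The same idea as a small helper; “from:”, “to:”, and “end” are just the cue words from the example above, not special tokens:

```python
def primed_prompt(triples, final_context, final_example):
    """Join "c from: x to: r end" blocks, ending with an open "to:" for GPT-3 to complete."""
    blocks = [f"{c} from: {x} to: {r} end" for c, x, r in triples]
    blocks.append(f"{final_context} from: {final_example} to:")
    return " ".join(blocks)

prompt = primed_prompt(
    [("Reword politely", "Give me the report.", "Could you please send me the report?"),
     ("Reword politely", "Answer my email.", "Would you mind replying to my email?")],
    "Reword politely",
    "Move the meeting.",
)
```

Because every completed example is terminated with “end”, that word also makes a natural stop sequence when you call the API, which helps with the completion issue discussed further down.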

This isn’t difficult to do. The art, however, is in the examples: what combination of examples gives the best results? If I were trying to have GPT-3 paraphrase my text in the style of Carl Sagan, which of Sagan’s sentences should I use as the r’s, and how should I design the x’s? Notice that depending on your goal, the data that constitutes x and r will differ. In this case of style transfer, the x’s are not examples of Carl Sagan’s words; they are the plain sentences to be rewritten, and the r’s are the Sagan-style versions.
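
To make that concrete, here is what hypothetical style-transfer triples might look like; the x’s are plain sentences and the r’s merely stand in for Sagan-flavoured rewrites (they are placeholders, not actual Sagan quotes):

```python
# Hypothetical style-transfer triples: x is plain text, r is the rewrite.
# The r strings below are invented placeholders, not real Carl Sagan prose.
sagan_triples = [
    ("Rewrite in the style of Carl Sagan",
     "Space is really big.",
     "The cosmos is vast beyond our everyday imagining."),
    ("Rewrite in the style of Carl Sagan",
     "Our bodies are made of old star material.",
     "Every atom in us was forged long ago in the heart of a dying star."),
]
```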

Each of these x -> r mappings gives GPT-3 subtle hints about what to do. So it isn’t just blindly offering up examples; the examples must be good examples of what you want to achieve. That is where the human art comes in!

In addition, each c can be different. So there’s a three-dimensional space (plus the scalar knobs) that needs to be explored to get the outputs you want.
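
Exploring that space is mostly trial and error. A sketch of a simple sweep, where generate() is a dummy stand-in for whatever completion call you are actually using:

```python
from itertools import product

# Invented rewording examples and settings to sweep over.
examples = [
    ("Give me the report.", "Could you please send me the report?"),
    ("Answer my email.", "Would you mind replying to my email?"),
]
contexts = ["Reword politely", "Rewrite formally"]
temperatures = [0.2, 0.7, 1.0]

def generate(prompt: str, temperature: float) -> str:
    # Dummy stand-in: replace with a real GPT-3 completion call.
    return f"<completion at temperature={temperature}>"

for context, temperature in product(contexts, temperatures):
    blocks = [f"{context} from: {x} to: {r} end" for x, r in examples]
    blocks.append(f"{context} from: Move the meeting. to:")
    prompt = " ".join(blocks)
    print(context, temperature, generate(prompt, temperature))
```

In practice you would inspect (or validate) the outputs for each setting and keep the combination that behaves best.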

Another observation is that when GPT-3 fills in the sequence for r, it doesn’t know when r is complete. So it feels like an incremental search, where each additional piece of text it appends to r further constrains the result. This incremental generation of the result (i.e. r) is another aspect to think about.
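
One practical way to keep that incremental generation from running past the r you want is to cap the completion length and give the API a stop sequence; since each primed example above ends with “end”, that word works as the stop. A sketch using the GPT-3-era openai Python client (the client interface has since changed, so treat the exact calls as illustrative):

```python
import os
import openai  # GPT-3-era client; newer versions expose a different interface

openai.api_key = os.getenv("OPENAI_API_KEY")

prompt = (
    "Reword politely from: Give me the report. to: Could you please send me the report? end "
    "Reword politely from: Answer my email. to: Would you mind replying to my email? end "
    "Reword politely from: Move the meeting. to:"
)

# max_tokens bounds how far the incremental generation can run, and the
# stop sequence cuts it off when the model emits the "end" marker itself.
response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=32,
    temperature=0.7,
    stop=[" end"],
)
print(response["choices"][0]["text"].strip())
```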

The most useful applications built from this will inhabit a niche of this massive space. They will combine well-designed UIs with strong validation procedures.