This is insane: OpenAI’s GPT-3 Can Convert Verbal Prompts into Code

Language Models are Few-Shot Learners

In their provocatively titled arXiv paper, "Language Models are Few-Shot Learners," the team behind GPT-3 introduced it to the world, warts and all.
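
What "few-shot" means here is that no fine-tuning takes place at all: a handful of worked examples are packed directly into the prompt, and the model infers the pattern from context. Below is a minimal sketch of that idea in Python, aimed at the verbal-prompts-into-code trick in this article's title. The English-to-Python example pairs and the call to the 2020-era openai completion API are illustrative assumptions on my part, not code from the paper.

# Minimal sketch of few-shot prompting ("in-context learning") as described
# in "Language Models are Few-Shot Learners". The example pairs below are
# illustrative; the API call assumes the 2020-era openai Python package.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: you have GPT-3 API access

# A few worked examples packed straight into the prompt; no fine-tuning.
few_shot_prompt = (
    "English: print the numbers from 1 to 10\n"
    "Python: for i in range(1, 11): print(i)\n"
    "\n"
    "English: read the file data.txt into a string\n"
    "Python: text = open('data.txt').read()\n"
    "\n"
    "English: sort a list of words by length\n"
    "Python:"
)

response = openai.Completion.create(
    engine="davinci",       # the original GPT-3 engine
    prompt=few_shot_prompt,
    max_tokens=64,
    temperature=0,          # keep the completion as deterministic as possible
    stop="\n",              # one line of code is enough
)

print(response.choices[0].text.strip())
# A plausible (but not guaranteed) completion: words.sort(key=len)

The point is that the three examples above are the entire "training set": the model sees them only at inference time, inside its context window.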

By crawling through and learning from vast amounts of human-generated text, GPT-3 and its cousins absorbed the racial biases of the humans who produced that text:

Illustration from "Language Models are Few-Shot Learners"

GPT-3 gazed into the Nietzschean abyss, and the abyss gazed back.

When asked to generate news articles, GPT-3 produced pieces so convincing that human evaluators could barely distinguish them from articles written by humans.

Illustration from "Language Models are Few-Shot Learners"

With its Image GPT experiments, OpenAI recently showed that the same transformer architecture can even complete partial images, a kind of visual pattern completion.

Just how many tools are there inside this Swiss Army knife of AI? Only time will tell.

The authors of this paper from OpenAI were Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.