Meet GPT-3. It Has Learned to Code (and Blog and Argue).

Before asking GPT-3 to generate new text, you can focus it on particular patterns it may have learned during training, priming the system for certain tasks. You can feed it descriptions of smartphone apps and the matching Figma code. Or you can show it reams of human dialogue. Then, when you start typing, it will complete the sequence in a more specific way. If you prime it with dialogue, for instance, it will start chatting with you.
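To make the priming idea concrete, here is a minimal sketch using the openai Python client from the GPT-3 beta era and its Completion.create call. The API key, the engine name and the dialogue text are placeholders chosen for this illustration, not details from the reporting.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; beta access was invitation-only

# Prime the model with a short human/AI dialogue. GPT-3 picks up the
# pattern and continues it, so the completion reads like the next turn
# in the conversation rather than generic text.
prompt = (
    "The following is a conversation with an AI assistant.\n"
    "Human: Hello, who are you?\n"
    "AI: I am an AI assistant. How can I help you today?\n"
    "Human: What is a good name for a coffee shop?\n"
    "AI:"
)

response = openai.Completion.create(
    engine="davinci",       # the base GPT-3 engine during the beta
    prompt=prompt,
    max_tokens=60,
    temperature=0.7,
    stop=["\nHuman:"],      # stop before the model invents the human's next line
)

print(response.choices[0].text.strip())
```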

"It has this emerging quality," said Dario Amodei, vice president of research at OpenAI. "It has some ability to see the pattern you gave it and complete the story. Give another example."

Previous language models worked in similar ways. But GPT-3 can do things that earlier models could not, like write its own computer code. And, perhaps more important, you can prime it for specific tasks using just a few examples, as opposed to the thousands of examples and the hours of additional training its predecessors required. Researchers call this "few-shot learning," and they believe GPT-3 is the first real example of what could be a powerful phenomenon.
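What "few-shot" means in practice is easiest to see in the prompt itself: a handful of worked examples followed by a new case, all passed to the model as plain text. The sketch below builds such a prompt; the translation pairs are invented for illustration, and no retraining of the model is involved.

```python
# A few-shot prompt: a few solved examples followed by a new case.
# The model never receives a gradient update; the "training" examples
# live entirely in the prompt text, and it infers the task from the pattern.
examples = [
    ("cheese", "fromage"),
    ("house", "maison"),
    ("book", "livre"),
]

prompt = "Translate English to French.\n"
for english, french in examples:
    prompt += f"English: {english}\nFrench: {french}\n"
prompt += "English: library\nFrench:"  # the new case the model completes

print(prompt)
# Sent to the completions endpoint as in the earlier sketch, GPT-3 would
# typically continue with "bibliothèque" -- a task picked up from three examples.
```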

"It exhibits a skill that no one thought was possible," said Ilya Sutskever, OpenAI's chief scientist and a key figure in the rise of artificial intelligence technologies over the past decade. "Any layperson can take this model and deploy these examples in about five minutes to learn useful behavior from it."

This is both a blessing and a curse.

OpenAI plans to sell access to GPT-3 via the internet, turning it into a widely used commercial product, and this year it made the system available to a limited number of beta testers through their web browsers. Not long after, Jerome Pesenti, who leads the Facebook A.I. lab, called GPT-3 "unsafe," pointing to sexist, racist and otherwise toxic language the system generated when asked to discuss women, Black people, Jews and the Holocaust.

With systems like GPT-3, the problem is endemic. Everyday language is inherently biased and often hateful, particularly on the internet. Because GPT-3 learns from such language, it, too, can show bias and hate. And because it learns from internet text that associates atheism with words like "cool" and "correct" and pairs Islam with "terrorism," GPT-3 does the same.

This may be one reason OpenAI has shared GPT-3 with only a small number of testers. The lab has built filters that warn when toxic language may be coming, but they are merely patches placed over a problem that no one quite knows how to solve.
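The article does not describe how those filters work. Purely to illustrate why a surface-level patch falls short, here is a naive keyword-based warning check; the blocklist, the function name and the whole approach are invented for this sketch and bear no relation to OpenAI's actual system.

```python
# A deliberately naive stand-in for a toxic-language warning filter:
# flag a completion if it contains any term from a blocklist. Production
# filters are learned classifiers; this sketch only shows why keyword
# patches fall short (no context, and trivial to evade by rephrasing).
BLOCKLIST = {"slur_a", "slur_b"}  # placeholder tokens, not a real list

def warn_if_toxic(completion: str) -> bool:
    """Return True if the completion should trigger a warning."""
    words = {w.strip(".,!?").lower() for w in completion.split()}
    return bool(words & BLOCKLIST)

print(warn_if_toxic("This sentence contains slur_a."))  # True
print(warn_if_toxic("This sentence is fine."))          # False
```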