Four reasons why OpenAI’s GPT2 is both great and scary for us
What is GPT2?
To put it simply, GPT2 is an AI program that generates text from an input series of words, using a pretrained language model built on the transformer architecture. Oh, and it’s the second iteration of this NLP breakthrough. You type in some text, and GPT2 predicts the next word (repeatedly), drawing on a 1.5-billion-parameter model trained (unsupervised) on 40GB of text (around 8 million web pages). Give it an email subject, and it will write the email for you. Give it the title of a blog post, and it will write the blog post for you (which is probably what I should’ve done for this post).
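To make the “predicts the next word (repeatedly)” loop concrete, here is a toy sketch. A simple bigram word counter stands in for GPT2’s transformer (the real model conditions on the whole context with 1.5 billion parameters, not just the last word), but the decoding loop is the same idea: predict the most likely continuation, append it, repeat.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for GPT2's 40GB of web text.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Train": count which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt, n_words=5):
    """Greedy decoding: repeatedly append the most frequent continuation."""
    words = prompt.split()
    for _ in range(n_words):
        prev = words[-1]
        if prev not in follows:
            break  # no continuation seen in training
        words.append(follows[prev].most_common(1)[0][0])
    return " ".join(words)

print(generate("sat", 2))  # → "sat on the"
```

Swap the bigram table for a transformer and the toy corpus for 8 million web pages, and you have the essence of GPT2’s generation procedure.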
And the thing is, it works really well. Scarily well. So well that OpenAI didn’t initially release the full model for fear of misuse. When Cornell University surveyed people on whether GPT2-generated text read as machine-written or human-written, the latest model scored 6.91/10 for credibility: readers found its output nearly as believable as text written by a human. That means you can almost automate human communication, and this has massive potential both for misuse and for gain.
Cover letters and job applications
You can no longer walk into a shop or company headquarters, hand in your resume and get interviewed on the spot like the old days. Everything is done online, and it could take you 100+ online applications to even get a call back. That could mean 100+ dedicated cover letters, each personalised to a company. So you either spread yourself thin and churn out a lot of poor-quality applications, or carefully prepare a few high-quality ones.
Well, with GPT2, the effort it takes to produce a personalised cover letter drops significantly, making it possible to turn out a large number of quality applications (with some fine-tuning required).
Replacing customer service with chatbots
Customer service firms typically have a hierarchical structure in which the bottom rungs have very little authority and highly scripted behaviour. Even if you’re talking to a human, the actions they can take are tightly limited.
With GPT2, it is possible to replace basically all the lower levels of the pyramid with chatbots. Trained on real, high-quality human interactions with customers, they can handle most queries and complaints. Anything they can’t handle gets passed upwards to a real human with the authority to bend the rules and deal with problems creatively.
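The tiered structure above can be sketched as a simple escalation wrapper. Here a hypothetical keyword lookup stands in for a GPT2-backed chatbot (a real system would use a model fine-tuned on past support transcripts); all names and the confidence threshold are illustrative, not any particular product’s API.

```python
# Canned bot knowledge, standing in for a model trained on real
# customer interactions.
CANNED_ANSWERS = {
    "refund": "Refunds are processed within 5 business days.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def bot_reply(query):
    """Return (answer, confidence). A GPT2-based bot would instead
    generate a reply and estimate its own confidence."""
    for keyword, answer in CANNED_ANSWERS.items():
        if keyword in query.lower():
            return answer, 0.9
    return "", 0.0

def handle(query, threshold=0.5):
    """Bots answer routine queries; anything uncertain moves up the
    pyramid to a human agent with real authority."""
    answer, confidence = bot_reply(query)
    if confidence >= threshold:
        return answer
    return "ESCALATED TO HUMAN: " + query

print(handle("Where is my shipping update?"))
print(handle("My order arrived broken and I want compensation"))
```

The design point is the threshold: the bot absorbs the high-volume, scripted bottom of the pyramid, and only genuinely novel problems consume human time.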
Deepfake social media profiles
In this new form of synthetic propaganda, social media profiles can be created to intelligently echo the opinions of their creators. If public opinion hates dogs, just create a bunch of Reddit/Twitter bots that echo dog-based propaganda. Pretty soon, the average human Twitter user will see hundreds, if not thousands, of profiles talking about how great dogs are, and no one will be able to discern whether they are humans or machines.
This can be used to generate synthetic propaganda for any cause, from promoting veganism to supporting Brexit.
Indeed, a large number of Twitter bots have already been trained on various famous profiles’ previous tweets, all using the GPT2 model. A great example is a Donald Trump GPT2 bot, whose tweets seem indistinguishable from the President’s own.
Increasing the profitability of spam/phishing
The goal of spammers is to cast as wide a net as possible in order to catch a few lone, clueless fish. The more elaborate the scam, the more human effort it requires, which lowers the potential payout. Automated scamming techniques cast a wide net but land a lot of small payouts, while quality scams that need a human touch cast a small net but land a few large payouts. Now, with GPT2, it’s possible to achieve both quantity and quality: you need far less of a human touch to deal with the common reservations people have when being scammed.