Original article was published by Michael Mirwald on Artificial Intelligence on Medium
5 examples that show how advanced AI is
AI is everywhere, and the field has become so vast that two people talking about it may not even mean the same thing. To prove that, and to show what the technology can (and can't) do, I compiled this list.
“We know that we have a great interest in the future and that we should not remain silent about it.”
No one has ever said this quote. It was generated by an AI model that was trained on thousands of quotes. It's a small project built by a single person, but it demonstrates very well how this kind of AI works.
AI has become a mass phenomenon. It is no longer just a topic in scientific journals or corporate laboratories. The technology has become such a vast field of research, development and application that one might wonder whether the term AI is still precise enough to cover all the different aspects of its usage.
But what are current issues of AI in 2020? What challenges and debates are shaping the technology? I present five examples that show what AI is capable of and what the current discussions are.
1. GPT-3 shows the power of unsupervised learning in language-AI
The quote at the beginning was created with GPT-2, and it was its successor, GPT-3, that made headlines this year like no other AI project. When OpenAI unveiled the new model, the press was hyped. Kelsey Piper wrote for vox.com:
“GPT-3 represents a tremendous leap for AI.”
This is mainly due to the unsupervised learning approach that GPT-3 uses. The data that feeds the model is not labeled or pre-processed by humans; GPT-3 learns from raw text, which lets training scale to far larger datasets and lets the model adapt more flexibly to different tasks.
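The core idea of learning from raw, unlabeled text can be illustrated with a much smaller model than GPT-3. The sketch below (my own toy example, not OpenAI's code) trains a character-level bigram model: the "labels" are simply the next characters in the text itself, and the model then samples new text one character at a time. GPT-3 does the same thing in spirit, with a vastly larger neural network and a much longer context.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Learn, for each character, which characters tend to follow it.
    The text is completely unlabeled -- the 'labels' are just the
    next characters in the data itself."""
    followers = defaultdict(list)
    for current, nxt in zip(text, text[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers, start, length, seed=0):
    """Sample new text one character at a time from the learned statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = followers.get(out[-1])
        if not options:  # dead end: this character was never followed by anything
            break
        out.append(rng.choice(options))
    return "".join(out)

corpus = "we know that we have a great interest in the future"
model = train_bigram(corpus)
print(generate(model, "w", 40))
```

The output is mostly gibberish, which is exactly the point: the quality gap between this and GPT-3 comes from scale and architecture, not from labeled data.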
One example? Arram Sabeti prompted GPT-3 to write a Dr. Seuss poem about Elon Musk.
Take a look at a small part from it:
The SEC said, “Musk,
your tweets are a blight.
They really could cost you your job,
if you don’t stop
all this tweeting at night.”
He replied, “Well, I do tweet
and it’s really quite neat.
and I’ll tweet in a while
and send you some sweet treats.”
2. Generative Adversarial Networks and how they might give life to digital models
Miquela Sousa is a 19-year-old Brazilian-American model with more than two million followers on Instagram. She has worked with Prada and Givenchy and was featured in a Calvin Klein video.
But Miquela is not a real person; she is a virtual creation.
She is a computer-generated image (CGI) that can do nothing on its own. That might change with generative adversarial networks (GANs), a machine learning technique and a subset of AI that the Japanese tech company DataGrid has built products around. GANs could allow Miquela to copy the poses of real models, which would have a great impact on the entire industry, writes Sinead Bovell for Vogue.
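A GAN pits two models against each other: a generator that fabricates samples and a discriminator that tries to tell fabricated samples from real ones. The toy sketch below is my own illustration (unrelated to DataGrid's system): a linear generator learns to mimic a one-dimensional Gaussian against a logistic discriminator. Real image-generating GANs replace both parts with deep neural networks, but the adversarial game is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a Gaussian centred at 4.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr, steps, n = 0.05, 5000, 64

for _ in range(steps):
    z = rng.normal(0.0, 1.0, n)
    fake, real = a * z + b, real_batch(n)
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: adjust a and b so that D(fake) rises toward 1.
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

print(round(b, 2))  # generator offset: should drift toward the real mean of 4
```

The generator never sees the real data directly; it only gets feedback through the discriminator's judgments, which is what makes the approach so flexible.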
3. The pitfalls of emotion recognition
Machine emotion recognition promises billions of dollars in business. But a new study shows that the technology currently used to analyze facial expressions is extremely error-prone.
This article by Angela Chen in the MIT Technology Review summarizes the findings. It seems that the data is not complete, because emotions cannot be read from the face alone; the whole body has to be analyzed in order to interpret them. And that's not all:
“In short, the expressions we’ve learned to associate with emotions are stereotypes, and technology based on those stereotypes doesn’t provide very good information.”
The only thing that could help is more data — but that is expensive and requires very specific datasets, more than companies or researchers currently have. Plus, the collection of so much specific and personal data also raises questions of security and privacy.
4. Why the importance of computing power is underestimated in the AI age
The United States and China invest billions each year in the growth of their AI industry. For all its geopolitical complexity, the AI competition boils down to a simple technical triad: data, algorithms, and computing power. But while the first two issues receive enormous political attention, the issue of computing power is grossly neglected.
According to OpenAI, which is quoted in this Foreign Affairs article, the computing power used to train AI projects increased by a factor of 300,000 between 2012 and 2018. The powerful chips behind that growth cost a lot of money to produce, and the technology might already be real diplomatic leverage, argues Ben Buchanan in the piece.
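To put that factor of 300,000 into perspective, a quick back-of-the-envelope calculation shows what growth rate it implies over the six years from 2012 to 2018:

```python
import math

growth = 300_000   # factor reported by OpenAI for 2012-2018
months = 6 * 12    # the six-year span, in months

doublings = math.log2(growth)       # total number of doublings (~18.2)
doubling_time = months / doublings  # months per doubling

print(f"{doublings:.1f} doublings, one every {doubling_time:.1f} months")
```

That is roughly one doubling every four months, far faster than the classic two-year cadence of Moore's law. (OpenAI itself reports a 3.4-month doubling time from its exact data points; the rounded figures here give the same order of magnitude.)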
5. Why general artificial intelligence might never exist
AI is a deeply technical subject, but it has highly interesting philosophical aspects. This Nature article deals with the evolutionary history of AI since WW II and the many experiments to create machine intelligence comparable to humans.
The key takeaway: despite new approaches like deep learning and big data, AI is still nowhere close to general intelligence. Yet its potential is often overestimated, with sometimes dangerous consequences for science.
The progress of artificial intelligence is happening at such a fast pace that it is almost impossible to keep track. Machine learning models get smarter and faster, but at the same time, the computing power needed to train and run them is skyrocketing. Furthermore, the public often overestimates what artificial intelligence can do. Nevertheless, it is undeniable that AI is here to stay, and we need to learn how to deal with it.