Did AI create this music? This book?

Original article was published on Artificial Intelligence on Medium

In this series, we examine how AI has been “creating” novel things.
Spoiler alert: it’s getting really good…


By VICTOR ANJOS
Welcome to my multi-part series on AI as a creator. In this series we aim to look at whether today’s artificial intelligence is capable of devising new works of art.
We realize that art is entirely in the eye of the beholder, so what some consider art may differ from what others do. Our definition of art will be loose, and we will use it to mean the following:
“Art is often considered the process or product of deliberately arranging elements in a way that appeals to the senses or emotions. It encompasses a diverse range of human activities, creations and ways of expression, including music, literature, film, sculpture, paintings, and also such things as math, physics, biology, chemistry and general inventions.”
In today’s part of the series we are examining:

AI composing music and generating literature

Creativity may be the ultimate moonshot for artificial intelligence. Already AI has helped write pop ballads, mimicked the styles of great painters and informed creative decisions in filmmaking. Experts wonder, however, how far AI can or should go in the creative process.

When we recently spoke to AI experts and thought leaders, their opinions varied as to whether AI has the potential to become a true creative partner or even the creator of solo works of art. While this debate will likely continue for some time, it’s clear that as digital content and delivery platforms continue infiltrating all forms of media and expression, the role of AI will undoubtedly expand.

Making machines creative?

Examples of these amazing applications are becoming more numerous, ranging from generating still images to videos (e.g., deepfakes), or even generating videos from still images. The good news: you can now see the Mona Lisa nodding and laughing.
Machines are developing the capacity to create rather than just learn. And here’s where it gets interesting — what if we can teach machines to be creative? If they can create an image, then why not a painting? And if they can create a sound, then why not a pleasing sonata? And if they can generate a logical sequence of words, then why not poems, tales, and novels?
We’re in the age of machine evolution. Have you ever looked at a cubist or avant-garde painting and said to yourself, “this must have been created by a machine”? The sharp lines, the hazy features: all characteristics of these artifacts. We humans of the postmodern era are more inclined towards abstraction. So why not let machines take the lead? They love abstraction!
https://youtu.be/LSHZ_b05W7o

GANs in music

For a generative algorithm, images might seem easy to generate. Sound, however, is a different kind of challenge, because each sample depends heavily on the previous ones. It’s also important for the model to be able to generate a melodic structure and a distinctive mode that depends on the relations between different tones and chords.
Hao-Wen Dong et al. proposed a GAN-based model capable of generating musical tracks. In their 2017 paper, MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment, they introduced MuseGAN, which is trained on a dataset of over one hundred thousand bars of rock music. The generated phrases consist of bass, drums, guitar, piano, and strings tracks. A sample track linked in the paper shows the model’s performance: overall, the results are promising and aesthetically appealing. However, the structure is repetitive in a way that suggests the generation process lacks novelty.
Training a generative algorithm for music is a hard task indeed, especially when you have different instruments with independent properties, such as percussion instruments, lead guitars, etc. Nonetheless, we shouldn’t give up on the enormous capabilities of GANs. AI engineers and data scientists are constantly working on enhancing these existing models, along with creating new ones.
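To make the multi-track setup concrete, here is a minimal sketch of a piano-roll tensor in the spirit of MuseGAN’s symbolic representation, where each instrument’s bar is a binary grid of time steps by pitches. The five track names mirror the paper; the time resolution and the helper functions are assumptions for illustration, not the paper’s actual code.

```python
import numpy as np

# Hypothetical piano-roll tensor: (tracks, time_steps, pitches).
# The five tracks mirror MuseGAN's bass/drums/guitar/piano/strings setup;
# 96 steps per bar and the helper API below are illustrative assumptions.
TRACKS = ["bass", "drums", "guitar", "piano", "strings"]
TIME_STEPS = 96   # temporal resolution of one bar (assumption)
PITCHES = 128     # full MIDI pitch range

def empty_bar(n_tracks=len(TRACKS), time_steps=TIME_STEPS, pitches=PITCHES):
    """A silent multi-track bar: no cell is switched on yet."""
    return np.zeros((n_tracks, time_steps, pitches), dtype=bool)

def add_note(bar, track, start, duration, pitch):
    """Switch on one note: `track` plays `pitch` for `duration` steps."""
    bar[track, start:start + duration, pitch] = True
    return bar

bar = empty_bar()
bar = add_note(bar, TRACKS.index("bass"), start=0, duration=24, pitch=36)   # low C
bar = add_note(bar, TRACKS.index("piano"), start=0, duration=48, pitch=60)  # middle C
print(bar.shape, int(bar.sum()))  # (5, 96, 128) 72
```

A GAN generator for this task outputs tensors of exactly this shape, and the discriminator judges whether such a tensor looks like a real bar of rock music.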

AI Jukebox: Creating Music with Neural Networks

Another cool project from around the same time is the AI Jukebox, a neural network that generates music. Let’s start out by sampling some of the AI Jukebox’s work below.
The AI Jukebox trains on a collection of MIDI files, gaining a “machine understanding” by mapping the latent, internal structural relationships of the dataset; from this “understanding” it is then able to create new, unique generated content.

You can train the AI Jukebox on your own collection of MIDI files; simply use the code and follow the operational instructions in its GitHub repo.
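The idea of learning structural relationships from a note sequence and then generating new material can be illustrated in miniature. The sketch below uses a first-order Markov chain over MIDI pitch numbers; this is a deliberately crude stand-in for a neural sequence model like the AI Jukebox, and the toy corpus is made up.

```python
import random
from collections import defaultdict

# Toy melody as MIDI pitch numbers (60 = middle C); made-up corpus.
corpus = [60, 62, 64, 65, 64, 62, 60, 62, 64, 62, 60]

def fit_transitions(notes):
    """Record, for each note, every note that followed it in the corpus."""
    table = defaultdict(list)
    for prev, nxt in zip(notes, notes[1:]):
        table[prev].append(nxt)
    return table

def generate(table, start, length, seed=0):
    """Sample a new melody by walking the learned transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(table[melody[-1]]))
    return melody

table = fit_transitions(corpus)
melody = generate(table, start=60, length=8)
print(melody)  # a new 8-note melody built only from transitions seen above
```

A neural model replaces the count table with learned parameters and conditions on far more context, which is what lets it capture phrase-level structure rather than just note-to-note statistics.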

GANs as literary writers

Like music, text generation requires considering the sequence of words preceding each new addition. However, the task here is simpler, as the inputs — words — are discrete and easily handled, especially with the help of advanced NLP techniques and language models.

Text generation has lately been associated with restriction and fear of the AI threat. As with almost every AI application, people fear its spread and the possibility of devastating consequences. When it comes to text-generation technology, one should definitely mention GPT-2, the famously withheld product of the AI team I love the most: OpenAI.

Nearly two years ago, OpenAI published a paper introducing the world to their amazing GPT-2 language model, whose main mission is to predict the next word following an existing bit of human-written context. Hence, it can build an entire story starting from just a sentence! Here’s a sample attached to the paper, in which GPT-2 tells us about four-horned unicorns with human origins.

The generated text reads like something a news reporter might write, and that’s what makes it kind of scary! OpenAI realized this and decided to release the model in stages, a measure taken in order to analyze possible misuse before the model was fully deployed. The latest stage at the time presented a model with 774 million parameters, while the full model reaches 1.5 billion parameters.
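GPT-2’s training objective, predicting the next word from the words before it, can be shown in its most stripped-down form with a bigram model. The toy corpus below is made up; GPT-2 learns this mapping with a large Transformer over billions of words rather than a count table, but the objective is the same.

```python
from collections import Counter, defaultdict

# Made-up toy corpus (loosely echoing the paper's unicorn sample).
corpus = ("the scientists discovered a herd of unicorns "
          "the scientists named the herd after the valley").split()

# Count which word follows which: a bigram "language model".
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

def continue_text(start, n_words):
    """Greedily extend a prompt one predicted word at a time."""
    words = [start]
    for _ in range(n_words):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(continue_text("the", 4))  # the scientists discovered a herd
```

The gulf between this and GPT-2 is context length and model capacity, not the objective: both keep asking “given what came before, what word comes next?”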
You can enjoy the amusement of the 774M model by visiting TalkToTransformer.com and typing whatever comes to your mind. Of course, it’s not as powerful as the 1.5B one, but it’s still fascinating. Here’s a sample of my own trial:

The core threat of text generation is hence obvious: generated articles could be spread as fake news without readers having the slightest doubt about their authenticity. And the OpenAI team was confident enough to tell us that no current machine-learning algorithm can accurately discriminate real text from their model’s fake text.
But hey, we’re only here for art! So let’s explore the possibility of making a poet machine — without worrying about these threats for now. Luckily, this field is producing some very good results that can ignite our excitement.
In their 2018 paper, “Beyond Narrative Description: Generating Poetry from Images by Multi-Adversarial Training”, Bei Liu et al. presented an NLP-aided GAN that can generate poems from images. The algorithm takes clues from the image, mainly its description, and figures out a suitable compilation of poetic lines that fit those clues. The model has two “discriminator” blocks instead of one: the first checks whether the generated poem suits the input image, while the second checks the poetic authenticity of the poem.
In the paper, they attached a sample of a falcon image turned into a poem. Rhetorically, the generated stanza carries a pleasing consonance across the whole poem. The cool thing is that, while keeping the stanza’s poetic structure, the words work together in a way that makes the whole text connected and meaningful.
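The two-discriminator idea can be sketched as a generator loss that combines feedback from both critics. The scores and the weighting below are made-up placeholders, not values from the paper; the point is only that the generator is penalized whenever either discriminator is unconvinced.

```python
import numpy as np

def generator_loss(relevance_score, poeticness_score, lam=0.5):
    """Non-saturating GAN loss summed over two discriminators.

    Each score is a discriminator's probability that the generated poem
    is "real": one judges image/poem relevance, the other poeticness.
    The generator wants both probabilities to be high. `lam` (made up)
    trades the two objectives off against each other.
    """
    eps = 1e-12  # numerical safety for the log
    loss_relevance = -np.log(relevance_score + eps)
    loss_poeticness = -np.log(poeticness_score + eps)
    return lam * loss_relevance + (1 - lam) * loss_poeticness

# A poem that fools both critics gets near-zero loss ...
good = generator_loss(0.95, 0.9)
# ... while one that is on-topic but unpoetic is punished by the second critic.
bad = generator_loss(0.95, 0.1)
print(good < bad)  # True
```

Training against both critics simultaneously is what pushes the generator toward poems that are relevant to the image *and* read like poetry, rather than optimizing one property at the expense of the other.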
So what do you do when all signs point to needing a university degree to gain any sort of advantage? Unfortunately, in the current state of affairs, most employers will not hire you without a degree, even for junior or entry-level jobs. Once you have that degree, 1000ml’s Modular Lab Program, with our patent-pending training system (the only such system in the world), is the way to gain the practical knowledge and experience that will jump-start your career.
Check out our upcoming seminars, labs and programs; we’d love to have you there.