Automatic writing with Deep Learning

Many machine learning and deep learning problems boil down to building a mapping function of roughly the following form:

Input X -> Output Y,

where:

X is some sort of object: an email text, an image, a document;

Y is either a single class label from a finite set of labels, like spam / no spam, a detected object, or a cluster name for the document, or some number, like next month's salary or a stock price.

While such tasks can be daunting to solve (sentiment analysis, say, or predicting stock prices in real time), the steps needed to reach a good level of mapping accuracy are fairly clear. Again, I am not discussing situations with a lack of training data to cover the modelled phenomenon, or poor feature selection.
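To make the X -> Y mapping concrete, here is a minimal sketch of a spam / no-spam text classifier. The toy data and the particular pipeline are my own illustrative assumptions, not part of the experiment described below.

```python
# Minimal sketch of an X -> Y mapping: email text -> spam / no-spam label.
# The toy dataset and model choice are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X = [
    "win a free prize now",                # spam
    "meeting rescheduled to 3pm",          # not spam
    "cheap pills, limited offer",          # spam
    "please review the attached report",   # not spam
]
Y = ["spam", "ham", "spam", "ham"]

# X (raw text) is turned into features; the classifier maps features to a label Y.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X, Y)

print(model.predict(["free offer, claim your prize"]))  # most likely ['spam']
```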

In contrast, somewhat less straightforward areas of AI are the tasks that challenge you to predict fuzzy structures such as words, sentences or complete texts. What are the examples? Machine translation for one, natural language generation for another. One may argue that transcribing audio to text is also this type of mapping, but I'd argue it is not. Audio is a "wave", and speech recognition is a reasonably well-solved task (with state of the art above 90% accuracy), yet such an algorithm does not capture the meaning of the produced text, except where that is necessary to disambiguate what was said. Again, I have to make it clear that the audio -> text problem is not at all easy and has its own intricacies, like handling speaker self-corrections, noise and so on.

Lately, the task of writing texts with a machine (e.g. here) caught my eye on Twitter. Previously, papers from Google on writing poetry and other text-producing software gave me creepy feelings. I somehow underestimated the role of such algorithms in the space of natural language processing and language understanding, and saw only diminishing value of such systems to users. Granted, any challenging task might be solved and even bring value to solving other challenging tasks. But who would use an automatic poetry-writing system? Why would somebody, I thought, use these systems, just for fun? My practical mind battled against such "fun" algorithms. Then again, making an AI/NLProc system capable of producing anything sensible is hard. Take the task of sentiment analysis, where it is quite unclear what the agreement between experts is, not to mention non-experts.

Experiment

In the following exercise I set a very modest goal: train a co-writer on previously written texts and see whether it can suggest something useful from them. I can imagine extending this to texts that are trending, or to a collection of particularly interesting titles, what have you.

To train such a model I used Robin Sloan's rnn-writer: https://github.com/robinsloan/rnn-writer. The highlights of the project are:

  • Trained with Torch. Nowadays, Torch is mostly leveraged via PyTorch, a deep learning Python library that is nearing production readiness.
  • The trained model is exposed inside Atom, a plugin-friendly editor (I'd imagine real writers would want the model integrated into their favourite editor, like Word).
  • An API is available too, to integrate into custom apps (and this is exactly how the Atom integration works); a hypothetical client sketch follows below.
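To give a feel for the API-based integration, here is a small sketch of how a custom app might query a locally running model server. The endpoint path, port and parameters below are my assumptions for illustration only; check the rnn-writer documentation for the actual interface.

```python
# Hypothetical client for a locally running rnn-writer-style server.
# The URL, port and query parameters are illustrative assumptions,
# not the documented API of the project.
import requests

def suggest_continuations(seed_text, n=3):
    """Ask the local model server for n suggested continuations of seed_text."""
    response = requests.get(
        "http://localhost:8080/generate",          # assumed endpoint
        params={"start_text": seed_text, "n": n},  # assumed parameters
        timeout=5,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(suggest_continuations("Peer-to-peer networks are"))
```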

I will skip the installation of Torch and the training of the network and proceed to examples. The rnn-writer GitHub repository has a good set of instructions to follow. I installed Torch and trained the model on a Mac.

rnn-writer in action: video

First things first: an RNN trained on my Master's Thesis, "Design and Implementation of Peer-to-Peer Network" (University of Kuopio, 2007).

The text of the Master's Thesis is about 50 pages in English, with diagrams and formulas. On the one hand, having more data lets a neural network learn more word representations and gives it a larger probability space for predicting the next word conditioned on the current word or phrase. On the other hand, limiting the input corpus to phrases with a certain domain goal, like writing an email, could leverage a clean set of phrases that a user employs in many typical email passages.
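The actual model here is Robin Sloan's Torch-based RNN. Purely for illustration, the following is a minimal PyTorch sketch of the same idea: a word-level LSTM that predicts the next word given the current one. The tiny vocabulary and training loop are assumptions, not the setup used for the thesis corpus.

```python
# Minimal PyTorch sketch of next-word prediction with an LSTM.
# Purely illustrative: the real experiment used the Torch-based rnn-writer.
import torch
import torch.nn as nn

corpus = "the peer to peer network routes messages between the peers".split()
vocab = sorted(set(corpus))
stoi = {w: i for i, w in enumerate(vocab)}

# Input/target pairs: predict word t+1 from word t.
inputs = torch.tensor([stoi[w] for w in corpus[:-1]])
targets = torch.tensor([stoi[w] for w in corpus[1:]])

class NextWordLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x):
        out, _ = self.lstm(self.embed(x))
        return self.head(out)

model = NextWordLSTM(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Treat the whole corpus as one sequence (batch size 1).
x = inputs.unsqueeze(0)   # shape: (1, seq_len)
y = targets.unsqueeze(0)  # shape: (1, seq_len)

for step in range(200):
    optimizer.zero_grad()
    logits = model(x)  # shape: (1, seq_len, vocab_size)
    loss = loss_fn(logits.view(-1, len(vocab)), y.view(-1))
    loss.backward()
    optimizer.step()

# Suggest the most likely word to follow "peer".
with torch.no_grad():
    logits = model(torch.tensor([[stoi["peer"]]]))
    print(vocab[logits[0, -1].argmax().item()])
```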

As I got access to Fox articles, I thought this could warrant another RNN model and a test. Something to share next time.