Using computer science to predict speech in multilinguals

If you speak three languages, you’re trilingual. If you speak two languages, you’re bilingual. If you speak one language, you’re probably American.

Jokes aside, many high schools began offering foreign language classes to push back against this monolingual stereotype attached to Americans. But let's be honest: you probably relied on Google Translate to answer your Spanish homework questions, right?

If you have ever used a translator app, you probably noticed how the software tried to predict what you would type, and then translate that prediction (which could lead to some #googletranslatefails).
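
Under the hood, that prediction usually comes from a language model. Here is a minimal sketch in Python (not how any particular translator app actually works) of a bigram model that guesses the next word purely from counts of which word has followed which, trained on a made-up toy corpus:

```python
# A minimal sketch of next-word prediction with a bigram model.
# The toy corpus and the predict() helper are illustrative only.
from collections import Counter, defaultdict

corpus = "como estas hoy . como te llamas . como estas tu".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev_word: str) -> str:
    """Return the word most often seen after prev_word."""
    candidates = following.get(prev_word)
    return candidates.most_common(1)[0][0] if candidates else "?"

print(predict("como"))  # -> "estas", the likeliest continuation here
```

When the guess is wrong, the app translates the wrong sentence anyway, which is where those #googletranslatefails come from.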

This sense of prediction is what many linguists have been trying to perfect through computer science. The emerging field of computational sociolinguistics tackles the automatic detection of non-linguistic information from the linguistic signal. Such an AI mirrors the elaborate three-step process your mind goes through while you passively read:

First: The front-end. The mind simply reads line by line with no other purpose, taking in all the data, all the text, and passively processing the linguistic signal. It interprets the literal meaning of all the words.

Dr. Gregory Scontras runs his own linguistics research lab, The Meaning Lab, in the Language Science Department at the University of California, Irvine. Dr. Scontras explains the study of language as a way of "understanding ourselves better, to the extent that we've understood how we do language. We've understood something about ourselves that is likely rather profound." He notes that language is unique to humans, and that it is crucial for us to embrace our mode of communication. For 76% of U.S. citizens, the only way to converse is, in fact, in English.

Second: The back-end. The mind is also updating the beliefs it brought into the conversation; with every sentence, that knowledge gets refreshed with new information and syntax. The mind also reasons about what the author could have said but didn't, and why the author phrased or emphasized words the way they did. All of that changes once the brain is exposed to a new speaker.

This AI is very American.

When collecting data for this special type of AI, many researchers disregarded a classification of speakers prominent in today's American society: heritage speakers.

Scontras defines a heritage language in his research paper, Understanding heritage languages, as one where the "language [is] spoken at home or otherwise readily available to young children, and crucially this language is not a dominant language of the larger (national) society."

Heritage speakers provide a special pool of data that can cause problems when developing prediction AI. Multilinguals tend to code switch, and this change of language mid-sentence makes it difficult for the AI to differentiate between languages, each with its own syntax and its own social and cultural markers.
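
To see why this is hard, consider a toy sketch: tag each token of a code-switched sentence with the language whose word list it matches. The tiny word lists below are invented for illustration; real systems use character n-gram or neural classifiers, and they still stumble on exactly the ambiguities shown here:

```python
# A toy sketch of why code switching trips up language ID: tag each
# token with the language whose (tiny, made-up) word list contains it.
SPANISH = {"pero", "no", "quiero", "ir", "a", "la", "fiesta", "porque"}
ENGLISH = {"but", "i", "don't", "want", "to", "go", "a", "because",
           "i'm", "tired"}

def tag(token: str) -> str:
    t = token.lower()
    if t in SPANISH and t in ENGLISH:
        return "ambiguous"  # "a" is a word in both languages
    if t in SPANISH:
        return "es"
    if t in ENGLISH:
        return "en"
    return "unknown"

sentence = "Pero I don't want to ir a la fiesta because I'm tired"
print([(w, tag(w)) for w in sentence.split()])
# Mid-sentence switches force a per-token decision, and overlapping
# words like "a" have no single right answer without context.
```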

Dr. Sameer Singh is an associate professor of computer science at the University of California, Irvine, who specializes in bringing computer science to the study of language. He explains that code switching is a research topic linguists are only beginning to explore. "I think code switching in language is a good example where we don't really have a lot of language resources, because people do it a lot more in spoken language rather than in written [text] like Twitter, Reddit, messages, and places where many get caught code switching, but news articles probably wouldn't at all."

Third: The conclusion. All of this information interacts to yield a concluding belief after you finish reading.
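
One way to make the three steps concrete is as a Bayesian update, in the spirit of probabilistic models of listeners. The hypotheses, priors, and likelihoods below are invented for illustration, not taken from Scontras's or Singh's work:

```python
# A hedged sketch of the three steps as a Bayesian update.
# All numbers and hypotheses here are made up for illustration.

# Prior beliefs the reader brings into the conversation.
priors = {"formal register": 0.5, "casual register": 0.5}

# Step 1 (front-end): read the signal and extract a literal cue.
cue = "slang"
likelihood = {"formal register": 0.1, "casual register": 0.7}

# Step 2 (back-end): update every belief with Bayes' rule.
unnorm = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

# Step 3 (conclusion): the belief that remains after reading.
print(posterior)
# {'formal register': 0.125, 'casual register': 0.875}, up to rounding
```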

In hopes of programming a multilingual AI to be a little more "worldly", there has been some work on unsupervised machine translation, where the machine tries to translate between languages even though it has never seen sentences of both languages together. Singh states, "This becomes relevant for us when code switching is concerned, because code switching are these examples of sentences which have a little bit of both."
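
One common ingredient in this line of work is aligning two separately trained embedding spaces with a linear map, so that a word can be "translated" by finding its nearest neighbor in the other space. Here is a hedged sketch using orthogonal Procrustes on invented 2-D toy vectors; real systems learn the alignment from raw text with far richer signals:

```python
# A minimal sketch of one ingredient behind unsupervised machine
# translation: align two monolingual embedding spaces with a linear
# map, then translate by nearest neighbor. The 2-D vectors and the
# tiny seed dictionary below are invented toy data.
import numpy as np

# Toy monolingual embeddings (in practice: learned from raw text).
en = {"dog": [1.0, 0.1], "cat": [0.9, 0.3], "house": [0.1, 1.0]}
es = {"perro": [0.1, 1.0], "gato": [0.3, 0.9], "casa": [1.0, 0.1]}

# Orthogonal Procrustes: find the rotation W minimizing ||XW - Y||.
# A small seed dictionary stands in here for the adversarial training
# that fully unsupervised approaches use to avoid any dictionary.
X = np.array([en["dog"], en["house"]])
Y = np.array([es["perro"], es["casa"]])
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

def translate(word: str) -> str:
    """Map an English vector into Spanish space, return nearest word."""
    v = np.array(en[word]) @ W
    return min(es, key=lambda w: np.linalg.norm(np.array(es[w]) - v))

print(translate("cat"))  # -> "gato" if the spaces align as hoped
```

A sentence that code switches has words living in both spaces at once, which is exactly why Singh finds these shared-space methods relevant.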

Scontras is hopeful about the future of interaction between AI and multilinguals. “We need a lot more data when we’re dealing with cases of heterogeneity to capture what systematicity looks like… but I do think that there’s plenty that can be done. I think this parallel work of figuring out where the systematicity is empirically, and encoding that in the form of hypotheses computationally — I think that’s where we’re going to find some good progress.”

America being the cultural melting pot that it is, there is a vibrant community of multilinguals from which data can be drawn. With hopes of integrating this AI into language teaching, in formal and informal settings alike, we can keep striving toward better inter-connected individuals.

If you speak three languages, you’re trilingual. If you speak two languages, you’re bilingual. If you speak one language, the technology is at our disposal to learn another.