The symbiosis of organic and synthetic intelligence

Original article was published by Will Smith on Artificial Intelligence on Medium

Author: Will Smith (@wlil_sitmh), Student Doctor at the University of Liverpool; Blog Editor and President of LivPsych

Considering that human intelligence has taken something like 300 million years of evolution to craft, it’s pretty remarkable that artificial intelligence, although inspired by its maker, has taken a mere matter of decades to nearly replicate its function. Putting comparisons of their rates of development aside, here I explore specifically how AI intersects with the practice of psychiatry, and how intelligent machines are helping to shape tools which allow us to treat disorders of the mind.

Intuitively, we may feel that humans have a sixth sense for assessing the complex, nuanced emotions and behaviour of others. Such is the bedrock of what it means to be human. But in turn, this creates a problem for psychiatry, a medical discipline that relies almost entirely on what patients say and do to make a diagnosis. Without definitive tests or scans for mental illnesses, we are forced to rely, at best, on the learned experience of highly trained psychiatrists, a scarce resource, or, at worst, on the misguided advice of others. However, it seems that AI could help create tools that act as a second best. Machine analysis of patient notes, brain scans, and even iPhone videos has helped researchers assimilate complex data into a raft of potential tools to help people suffering from mental illness.

One such avenue utilises the rich data we generate every day from our phones. Imitating a psychiatrist’s beady gaze, apps like ‘AICure’ can analyse videos of your speech to identify features of schizophrenia. Such apps draw on subtle changes in facial expression, namely the rate of change and ‘expressivity’, as well as on speech patterns, like shifts in character or speed, before comparing them both to your previous uploads and to the wider population. So far, the reports generated by such apps have been found to be highly similar to those produced by human clinicians, although the data correlating the two are still sparse. Other elements of speech, like monotonicity or increased use of personal pronouns, both recognised as markers of depression by computerised text analysis of psychiatric patients’ writing, could also be used to identify mood disorders like depression.
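To make the text-analysis idea concrete, here is a minimal, hypothetical sketch of one of the markers mentioned above: the rate of first-person singular pronouns in a piece of writing. The function name and the tiny sample text are illustrative assumptions, not part of any real screening tool.

```python
import re
from collections import Counter

# Illustrative only: first-person singular pronouns, one of the text
# markers the article associates with depression.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def pronoun_rate(text: str) -> float:
    """Return the fraction of tokens that are first-person singular pronouns."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[p] for p in FIRST_PERSON) / len(tokens)

sample = "I feel like my days blur together and I keep to myself."
print(round(pronoun_rate(sample), 3))  # 4 of 12 tokens -> 0.333
```

A real system would of course combine many such features and compare them against a baseline population rather than applying a single threshold.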

Similarly, a study at the University of Vermont used machine analysis of patients’ Instagram posts to identify features that could correlate with depression. The analysis showed that ‘black-and-white’ or ‘blue-toned’ filters, as well as an increased propensity to post pictures featuring solitary faces, were both reliable markers of the disorder. Although such signals could of course be subject to shifts in social media fashion, they nonetheless indicate that voluntary data from our phones could be invaluable in helping doctors monitor more patients, or monitor patients who are long distances away. It could also help to detect patterns too subtle for humans to notice, or help psychiatrists reduce subjectivity in their assessments.
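The kind of colour feature the Vermont study looked at can be sketched very simply. This is a hypothetical illustration, not the study’s actual pipeline: it measures what fraction of an image’s pixels fall in a ‘blue’ hue band, the sort of signal behind the ‘blue-toned’ finding. The hue band and saturation cutoff are arbitrary assumptions.

```python
import colorsys

def blue_fraction(pixels):
    """Fraction of (R, G, B) pixels whose dominant hue is blue-toned."""
    if not pixels:
        return 0.0
    blue = 0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        # Treat hues between roughly 180 and 260 degrees as blue,
        # ignoring near-grey pixels with very low saturation.
        if 0.5 <= h <= 0.72 and s > 0.2:
            blue += 1
    return blue / len(pixels)

sky = [(70, 120, 200)] * 8 + [(200, 180, 90)] * 2  # mostly blue pixels
print(blue_fraction(sky))  # 0.8
```

In practice such features would be extracted from whole photo histories and fed, alongside posting frequency and face counts, into a statistical model rather than read off individually.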

Looking beyond diagnosis, early changes in speech could help predict and alleviate mental health problems before they even arise. For example, speech analysis software developed at Harvard found a subtle change in the ‘specificity’ of language that could detect schizophrenia years before doctors notice it. Finding and supporting these people early could help combat the complications of schizophrenia that can occur before a diagnosis is made, like damage to personal relationships or even state punishment.

AI can also help us make other predictions about our mental well-being. Nowadays, if your doctor wants to know whether you might hurt yourself or others, there’s only one thing they can do: ask. But relying on this method can seem flimsy, especially when the patient is not in their right mind, for a medical reason or otherwise. On the frontline of medicine, Liverpool’s NHS mental health trust ‘Mersey Care’ is currently making progress on three projects which use AI to make predictions about their patients. Dr Cecil Kullu, a Consultant Psychiatrist and Associate Medical Director for Research at Mersey Care, said: “when we started looking at the highest clinical risk issue with regards to indemnity, self-harm and suicide were the highest causes for adverse events … it therefore became obvious that we had to do something about it.”

Each of their tools has a bespoke purpose: ‘MaST’ helps to “allocate limited resources accordingly” by determining the likelihood of users needing to access the crisis service, whilst ‘SWiM’ uses data entered into a diary app, similar to those mentioned previously, as a “tool to help patients and staff manage self-harm better by predicting the risk of self-harm events”.

The third, ‘AVERT’, represents one of the most exciting projects in AI innovation to date. A collaboration between the University of Liverpool and King’s College London, the project is analysing 10 years’ worth of electronic patient data to help manage the risk of crisis associated with a depressive relapse. Of the AVERT project, Dr Kullu said “we often see patients present to services in a mental health crisis or after the adverse event, but when preventative measures to pick up early signs and intervene are in place the outcomes are better”. On using technology to overcome systemic “pinch points”, he said “curiosity is really at the core of the practice of psychiatry and this also is the core of innovation”. He suggested that the need to tailor treatments to individual patients lent itself “beautifully” to innovation, with “new ideas yet to be discovered … we cannot expect technology on its own to bring about change, but we can use technology and people to bring about change for the good of all.”

With neuroscience increasingly using artificial neural networks to understand real ones, and with the birth of ‘computational psychiatry’, which combines multiple types of data to improve diagnosis and treatment, it seems that artificial intelligence may finally offer us the means to assimilate the sprawling yet intricate cascade of information provided by the brain, and to help us all in moments of need.