The Guardian view on AI in social work: algorithms don’t have all the answers | Editorial

Machine learning could help caring professionals, but it could never replace them

Between a tenth and a third of the jobs in Britain are at risk of being automated away, depending on which survey or well-informed guess you believe. Should social workers be among them? It might seem that the particular and personal skills of social work are of a nature that could never be replaced by a machine, but from the point of view of an economist they are part of the machinery to provide help in the most cost-effective way. This involves judgment: which families need what kind of help most urgently? And that kind of classification is one of the things that various forms of artificial intelligence promise to do better and more quickly than unaided human beings. It involves the ability to detect significance in a mass of confusing information, often in ways that are hard to explain – a faculty that in humans is called intuition. When computers do it, it is called machine learning. In neither case is it wholly reliable.

The augmentation or replacement of social workers with machine learning is the possibility raised by our report that five English councils are trialling software which will help to pick out families and children in need of intervention. That is not the only danger these projects pose. There are obvious problems around informed consent: as far back as 2012, Dame Louise Casey took the view that demanding informed consent for the use of the relevant data damaged society as a whole and particularly its most vulnerable members. This is not what the Guardian believes. Nor, after the GDPR, is it what the law allows.

Then there is the fallibility of the algorithms themselves: the way in which putting human prejudices into software makes AI “money laundering for bias”. Beyond that is the further danger that even if this function of social workers and other professionals can be successfully augmented by machine learning, that might tend to make all the other things they do look less important when in fact they will become much more so.

Yet Dame Louise's argument has an intuitive force. We are all leaving increasingly large trails of data behind us as we move through life. At the moment it is mostly analysed in order to sell us things more efficiently. It would be absurd for society not to make use of it for more valuable purposes. If informed consent could be obtained from everyone within a council's area, there are no doubt interesting and useful things that could be discovered. But that kind of analysis would have to involve all citizens. The more data these algorithms have, the better the results will be. Importantly, this must not just be data on "problem" families: we cannot know how such families differ from the norm without a great deal of data from families who will never trouble social work departments. This is unlikely to be as politically popular as the idea of automating hard-pressed social services departments.

None of this happens in a vacuum. Technology is not bestowed on us by benevolent gods. It is deployed in response to pre-existing social and economic pressures. Why might local government want to invest in this kind of thing, and why might central government want to believe in it? To ask the question is to answer it. The technology appears to promise a magical way to slip out of the vice of rising costs, demand and expectation on the one side, and shrinking revenues on the other.

But there is no magic escape here. In some cases the enthusiastic deployment of technology can make a bad situation very much worse. The good intentions of local councils will only be rewarded if they are properly funded from the centre.

Source: The Guardian – Artificial intelligence