5 reasons why robots should have rights

Unlike many ethical questions in the murky world of robotics and artificial intelligence (AI), the issue of ‘robot rights’ has actually entered public consciousness and discourse in recent years. Most are aware of the Saudi Arabian government’s decision in 2017 to grant the robot ‘Sophia’ full Saudi citizenship. This was derided as a publicity stunt, and came across as absurd and disrespectful in a nation where full and equal citizenship is not extended to all Saudis, including women, who were only permitted to drive this year.

Similarly, last year the EU Parliament approved a resolution that would grant a special ‘electronic personhood’ to particularly advanced robots, just as certain companies or organisations can have a ‘corporate personhood’, meaning that they enjoy at least some of the rights (and obligations) held by people. This decision too had many people scratching their heads, with critics arguing that it was merely a way for the creators of such robots to absolve themselves of various legal responsibilities — i.e. so that they could not be held liable for the allegedly ‘autonomous’ behaviour of their machines, should those machines end up breaking the law.

Overall, I think it is safe to say that most would dismiss the notion of extensive robotic rights out of hand, believing that such privileged status could only ever be reserved for humans. Could it ever be morally reprehensible to pull the plug on a computer? Certainly, AI in its current state could not merit anything like our comprehensive notion of ‘human rights’. Nonetheless, it is possible that the technology could advance to a level where it would indeed be necessary to consider these issues seriously. What follows are my five reasons why (advanced) robots should have (extensive) rights.

Sophia – a Saudi citizen

1. The grounds for human rights could also obtain in the case of robots

‘Human rights’ are a set of inalienable norms that must be observed when dealing with other humans, by virtue of the fact that they are human. What is the ethical grounding for this? In other words, what is it that makes a human, human? Some potential candidates are as follows:

1a) Consciousness: There is some kind of sacred value inherent in being alive and aware of one’s own existence and personhood. This consciousness is what sets humanity apart from animals. Human rights are a way of ensuring that this humanity is respected equally in all people.

1b) Free Will/Autonomy: What sets humans apart is that they are autonomous and can form their own decisions about what they want to do. Any action that impinges on a basic level of human agency is wrong, hence the existence of certain human rights to guarantee this autonomy.

1c) Rationality/morality: As set out by Kant, all humans are ‘rational’: i.e. they can work out, by their powers of reason alone, what is the right course of action. This extraordinary power of rationality must be respected; therefore humans must never be used or exploited as mere means, and must be treated with dignity, hence the existence of certain human rights.

This list is of course not exhaustive, but I would argue that each of 1a, 1b and 1c could potentially become applicable in the case of robots.

1a) Consciousness: Technology in its current form merely serves human ends: it is not ‘alive’ in any meaningful sense, nor is it ‘self-aware’. That does not mean, of course, that a kind of self-awareness could not arise in the future. Furthermore, it is not clear that consciousness is of absolute importance: some animals without extensive rights (e.g. apes and other mammals) have often been said by scientists to be ‘conscious’, whilst some humans without advanced consciousness (e.g. infants) do in fact possess extensive rights.

1b) Free Will/Autonomy: On the one hand, of course, computers are always controlled and programmed by their creators, and are created to enhance or further some human project. On the other hand, many advanced computers can discover more streamlined and faster ways of fulfilling their functions than the exact methods with which they were originally programmed — a process known as ‘machine learning’. Even if artificial intelligence doesn’t currently choose its own destination, over time it can learn to change and adapt the route it was originally given, as the sketch below illustrates.
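
To make that metaphor concrete, here is a minimal, hypothetical sketch: a toy graph and simple trial-and-error search rather than any real learning system. The agent is handed a fixed route, but is free to discover a cheaper one to the same destination.

```python
import random

# Toy road network (invented for illustration): node -> {neighbour: travel time}
GRAPH = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}

PROGRAMMED_ROUTE = ["A", "B", "D"]  # the route the machine was originally given

def route_cost(route):
    return sum(GRAPH[a][b] for a, b in zip(route, route[1:]))

def learn_route(start, goal, trials=1000):
    """Trial-and-error search: keep the cheapest complete route found."""
    best = PROGRAMMED_ROUTE
    for _ in range(trials):
        node, route = start, [start]
        while node != goal and GRAPH[node]:
            node = random.choice(list(GRAPH[node]))
            route.append(node)
        if node == goal and route_cost(route) < route_cost(best):
            best = route  # the destination is fixed; the route is not
    return best

print("programmed:", PROGRAMMED_ROUTE, "cost:", route_cost(PROGRAMMED_ROUTE))
print("learned:   ", learn_route("A", "D"))  # typically finds A-C-B-D, cost 8
```

The point is not the (trivial) algorithm but the shape of the behaviour: the goal is imposed from outside, yet the path to it is not.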

The alleged autonomy of machines is actually one of the most noteworthy areas in this whole debate. The international community has been grappling this year with the regulation of autonomous weapons in warfare (see the International Panel on the Regulation of Autonomous Weapons). Most notably, it appears that military drones now exist that can attack and destroy enemy targets without direct human instruction. Many employees at Google were so concerned about these developments that they formally protested against the work their company was doing on behalf of the US military.

All of this seems to suggest that machine ‘autonomy’ is far from being a distant fiction. Indeed, there has been discussion of instilling artificially intelligent military machines with certain responsibilities not to break international humanitarian norms. Not only does this seem to be an admission of robotic autonomy, but also appears to be a preliminary step on the journey to full recognition of robotic obligations (and rights).

1c) Rationality/morality: It is bizarre to think of a robot being aware of right and wrong. At one level, however, morality is nothing more than programming. Our intuitions, experiences and beliefs shape our choice and judgement of the ‘right’ course of action in any particular situation. With the advent of so-called ‘deep learning’, it is not so ridiculous to foresee a situation where we feed a large body of moral principles, statistics and situations into a black box, and watch as the bot ‘learns’ to distinguish right from wrong, just as computers can now ‘learn’ to identify a bicycle after being fed thousands of images. Such moral rationality would therefore not be the exclusive domain of humans.
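
As a hedged illustration of the shape of that black box (assuming scikit-learn is available, and with a deliberately tiny invented dataset), the pipeline below is structurally identical to the bicycle classifier: labelled examples in, statistical ‘judgement’ out.

```python
# Toy "moral classifier": the same recipe as an image classifier, applied to
# labelled descriptions of situations. The data is invented and far too small
# to be meaningful; the point is the shape of the pipeline, not the answer.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

situations = [
    "returned a lost wallet to its owner",
    "shared food with a hungry stranger",
    "lied to a friend for personal gain",
    "stole medicine and sold it at a profit",
]
labels = ["right", "right", "wrong", "wrong"]  # human-supplied judgements

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(situations, labels)

# The model has no moral insight, only word statistics.
print(model.predict(["kept a stranger's lost wallet"]))
```

Whether such a learned mapping from descriptions to labels could ever count as genuine moral rationality is, of course, exactly the philosophical question at issue.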

At any rate, the ‘humanity’ which grounds human rights needs some further explanation if it is to serve as a trump card for the assumption that robots could never attain the same moral status as humans.

2. The Turing test

The dividing line between the ‘intelligence’ of humans and robots appears to be constantly narrowing. So much so, in fact, that in 2014 a chatbot called Eugene Goostman, posing as a 13-year-old boy, allegedly passed the ‘Turing test’, named after the codebreaker Alan Turing, who first proposed the idea in 1950. The test involves a judge and two contestants: one human and one computer. If the judge is unable to reliably identify which one is human, then the computer is considered to have passed.
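
The protocol is simple enough to sketch in a few lines. Below is a minimal, invented mock-up (the names and canned replies are placeholders, not any real chatbot): the judge questions two hidden contestants and must then say which is human.

```python
import random

def human_reply(question):
    # In a real test a hidden person answers; here the terminal stands in.
    return input(f"[hidden human] {question}\n> ")

def machine_reply(question):
    # Placeholder for a chatbot such as 'Eugene'.
    return "Ha, what a question! I am only thirteen, you know."

def imitation_game(questions):
    """One round: the judge interrogates contestants A and B, then guesses."""
    contestants = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(contestants)  # the judge must not know which is which
    identity = {"A": contestants[0], "B": contestants[1]}
    for q in questions:
        for label, (_, reply) in identity.items():
            print(f"{label}: {reply(q)}")
    guess = input("Which contestant is the human, A or B? ")
    truth = "A" if identity["A"][0] == "human" else "B"
    print("The judge was", "right." if guess.strip().upper() == truth else "fooled.")

imitation_game(["What does a sunset feel like?"])
```

Note that the test measures indistinguishability, not intelligence as such, which is precisely why misidentification becomes the live issue below.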

As a further example of the continual progress of AI, IBM’s chess computer Deep Blue famously defeated the world champion Garry Kasparov in the late 1990s in controversial circumstances, with the grandmaster accusing IBM of cheating. A few years ago, I was lucky enough to be present as IBM Watson came to the Oxford Union and delivered an interactive Q&A of sorts.

So clearly there have been massive advances in what we might call the ‘intelligence’ of computers, such that misidentification of humans and robots may indeed become a problem. Indeed, during Google’s demonstration of its new Assistant technology, the system successfully booked a haircut over the phone, without the hairdresser being any the wiser that there was a machine on the end of the line.

Why does this matter? Intelligence is not the be-all and end-all when it comes to moral status: we do not think that clever humans should be favoured morally over stupid ones. Indeed, robotic ‘intelligence’ may never be as advanced and powerful as the human brain; it seems very strange, for example, to imagine robots discovering the laws of physics.

But the issue of misidentification is an important one: in the realm of ethics, it is better to be safe than sorry. If we were consistently unsure whether we were dealing with a human or a robot [which is not the case right now], especially when it comes to the most serious rights such as freedom from discrimination, prudence might be the best course of action. Given that it is not worth running the risk of violating the rights of humans, and that it would not be a huge sacrifice to adopt a universal system of rights, it would simply be practical to grant intelligent and human-like robots extensive rights.

IBM Watson at the Oxford Union

3. Animal rights

The parallel with animals is an interesting and relevant one: in both cases we are considering somewhat intelligent ‘individuals’ cultivated, owned and exploited for human benefit.

For some influential philosophers of vegetarianism, such as Peter Singer and Jeff McMahan, the most compelling reason not to eat meat is that we deprive animals of a great deal of pleasure (the remainder of their lives) for a limited amount of our own pleasure (a steak). This is a utilitarian argument: it rests on the assumption that the greater the aggregate happiness in the world, the better the world is.

Indeed, when it comes to preventing cruelty against animals, we are often motivated by some consideration of the pain and suffering incurred by the animal.

Such an argument could also be used in the case of robots, even though these are non-biological, non-sentient beings, on the grounds that intelligent machines also have rudimentary analogues of pleasure and emotion — in-built systems of desires and aversions, of repulsion and reward, at least in relation to the fulfilment of their particular task or function.
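
In today’s systems, the closest analogue is the reward function of reinforcement learning. A minimal sketch, with entirely invented names and values: the machine’s ‘desires’ and ‘aversions’ are just numbers it is built to maximise.

```python
# Invented toy values: a machine's "attractions" and "aversions" as rewards.
REWARD = {"finish_task": 1.0, "sit_idle": -0.1, "damage_self": -5.0}

def choose(actions):
    # The agent is drawn towards high-reward actions and away from low ones.
    return max(actions, key=lambda a: REWARD[a])

print(choose(["sit_idle", "finish_task", "damage_self"]))  # -> finish_task
```

Whether a number being maximised could ever amount to felt pleasure is, of course, the crux of the disagreement.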

Admittedly, there is always the danger of us projecting inappropriate human emotions and characteristics onto robots [as well as onto animals]. Names like Siri and Alexa, and cinematic characters like C-3PO and R2-D2, are evidence of our innate inclination to anthropomorphise. It’s the reason that robots are often created to look and act like humans, like Honda’s famous football-playing ASIMO, who hung up his boots earlier this year.

Having said that, the idea of a computer experiencing pain has vexed philosophers and computer scientists alike, including Alan Turing himself. Indeed, it seems that robots, such as advanced prosthetic hands, can be programmed to react to certain stimuli like touch and heat. Whether this is an example of a robot really ‘feeling’ pain is another matter altogether, one that enters the thorny territory of consciousness.

It is important to remember that the very idea that animals could experience any kind of pain or emotion has only been widely accepted in recent years, after millennia of wanton cruelty and maltreatment. So we should be very careful with our anthropocentric disdain towards the possibility that AI could experience some more advanced form of pain (and pleasure). We don’t want to make the same mistake again.

The now-retired ASIMO

4. Man-computer symbiosis

We are deeply connected with technology in today’s society, in both more and less extreme ways. Our smartphones augment our intelligence and capabilities and facilitate social interactions; our laptops enable us to perform our daily work. And in some extreme cases, individuals have advanced, ‘intelligent’ bionic limbs that are physically meshed with their bodies and connected to their nervous systems.

In Dan Brown’s recent book Origin, the inexorable future predicted is one of greater and greater fusion between human and machine, until an entirely new race or species of robot-humans is created. Indeed, in especially apocalyptic visions of the future, such as the one painted by Elon Musk, the only way for humans to stay relevant in a world of ultra-intelligent machines is to merge with those machines and become ‘cyborgs’.

Now the truth, of course, is that no one really has a clue what the future will hold. It is possible, however, that the currently clear line between human and machine may well become blurred. Here too, we are back in the realm of misidentification: it might become difficult to distinguish a human-like robot from a robot-like human. Either way, an extensive system of universal rights for both would seem a prudent course of action.

5. What it says about us

1) How do we measure moral praiseworthiness?

We judge how ‘good’ or ‘admirable’ a person is by the actions they take and by the way they treat others. Similarly, we can judge the moral attainment of humanity writ large by the way it acts as a whole. Of course, this matters most in our treatment of other human beings, but it is also visible in the manner in which we treat those less powerful than us, i.e. animals, and perhaps also less intelligent beings such as robots.

2) How do we cultivate a strong moral character?

Acting in a decent and proper way is a matter of habit and learning; we have a duty to cultivate an attitude that treats others with respect and with dignity. This would be served by creating and respecting a code of rights with regard to our treatment of intelligent machines.

3) Conscious decision to shape our society

In many ways, we find ourselves at a crossroads, looking into a future that will undoubtedly include more and more advanced technology. It is important to set out some ground rules, so to speak, to guide us into the unknown; at the very least they will provide a sensible framework for the immediate future.


Advanced artificial intelligence and robotics have the power to improve our world in all kinds of ways. If we plan to build a better future, where technology and humans become ever more closely related and difficult to distinguish, then it will be helpful to have some grasp of the complicated ethical territory ahead. To my mind, there will come a time when robots are sufficiently advanced, or sufficiently similar to humans, as to be awarded extensive rights, however counterintuitive this may seem to us now.
