Does AI Do Evil?

The original article was published by Md Ashikquer Rahman in Artificial Intelligence on Medium.


Photo by iStockphoto

Many people are excited about AI because it can automate work, defeat humans at games, and raise overall living standards. That much is indisputable, but AI still has flaws, and even dangers, which may or may not be caused by humans. Even today there are countless incidents of AI systems harming people or endangering lives. How does this happen? And as AI becomes ever more widespread, which of its traits could damage society?

Moral flaws in AI algorithms

AI has many flaws, and those flaws breed ethical problems. Many of them stem from bias in the training data or from feedback loops in which a system's outputs shape its own future inputs. A few examples show how these seemingly small mistakes affect people in real life.

1. Facial recognition training data is biased

Imagine that one day, as you arrive home, the police show up with an arrest warrant, handcuff you, and take you to a detention center. You have no idea what you did wrong, and your family and neighbors are watching. At the police station you are searched, fingerprinted, and photographed, then left in a dirty cell overnight. You just sit there and wait, wondering why you were arrested.

This is what happened to Robert Williams, a Black man arrested for no apparent reason at his home in Farmington Hills, Michigan. When the police questioned him, they produced a surveillance photo of a Black man shoplifting from a store. Williams denied ever having been there; the man in the photo was not him. They showed him another photo, a close-up of the thief, and it looked nothing like him either. He told them, "No, this is not me. Do you think all Black men look alike?" Why did the police arrest the wrong man? The problem was the algorithm. The police had used a facial recognition system to search for the suspect, and it was plainly wrong. How can AI go wrong? Isn't it supposed to be accurate? Usually it is, but only if the training is sound. The main reason Williams was flagged as a suspect was bias in the training data. Facial recognition systems are fairly accurate on white faces but much less accurate on other groups, because the data sets used for training consist mostly of white faces with few faces from other groups, so recognition of Black faces suffers.

Even where recognition of dark-skinned men is relatively good, there is still a large accuracy gap between dark-skinned and light-skinned faces, and the gap is largest for dark-skinned women. These results were measured on systems from leading technology companies; the systems police departments actually deploy may perform even worse.
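To make the mechanism concrete, here is a minimal sketch of how such disparities are measured: compute the model's accuracy separately for each demographic group rather than as one aggregate number. The data and group labels below are made up for illustration, not taken from any real benchmark.

```python
# Minimal sketch: per-group accuracy of a classifier. A single overall
# score can look acceptable while hiding a large gap between groups.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return overall accuracy plus a per-group breakdown."""
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_group

# Hypothetical predictions for two groups, "A" and "B".
preds = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
truth = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
group = ["A"] * 5 + ["B"] * 5
overall, per_group = accuracy_by_group(preds, truth, group)
print(overall)    # 0.7 overall looks tolerable...
print(per_group)  # ...but it is 1.0 for group A and only 0.4 for group B
```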

2. YouTube’s feedback system

Everyone has seen this site. You open the homepage to watch one interesting or popular video, and an hour later you are still there, having watched many more videos without noticing. That is the recommendation system at work: it serves you videos that keep you on the page longer, which earns the platform more advertising revenue. You might think, "So what if I stay on the platform a little longer? It doesn't hurt anyone." But it does: this recommendation algorithm can amplify harmful content through its feedback loop. YouTube's AI feedback loop has had a genuinely horrifying problem: the site's recommendation algorithm made it easier for pedophiles to find and share child sexual abuse material in the comment sections of certain videos. What makes this discovery so disturbing is not only that the videos were monetized, but that the recommendation system kept actively pushing videos of children to thousands of users.

The feedback loop also creates echo chambers in which viewers only ever see content that matches what they already believe. Some people then never hear an opposing voice, which drives polarization. The same system promotes conspiracy theories: as with polarization, it can win over people who were only mildly curious about or even skeptical of a conspiracy theory, because more and more videos carrying the same message keep being recommended to them.
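The dynamic is easy to reproduce in miniature. The toy simulation below is illustrative only and nothing like YouTube's actual system: a recommender greedily serves whichever video it currently believes keeps people watching longest, explores occasionally, and updates its beliefs from the observed watch time. With no notion of content quality at all, it ends up recommending the most "engaging" item almost exclusively.

```python
# Toy engagement-maximizing feedback loop (illustrative; all numbers made up).
import random

random.seed(0)
# True average watch-time fraction of each video.
videos = {"balanced news": 0.4, "mild clickbait": 0.6, "conspiracy": 0.8}
estimated = {name: 0.5 for name in videos}   # the recommender's beliefs
serve_counts = {name: 0 for name in videos}

for step in range(2000):
    if random.random() < 0.1:                # occasional exploration
        choice = random.choice(list(videos))
    else:                                    # otherwise serve the current
        choice = max(estimated, key=estimated.get)  # engagement leader
    serve_counts[choice] += 1
    # Observed watch time: noisy feedback centered on the true value.
    watched = min(1.0, max(0.0, random.gauss(videos[choice], 0.1)))
    # Nudge the belief toward what was observed.
    estimated[choice] += 0.05 * (watched - estimated[choice])

print(serve_counts)  # the most "engaging" video dominates the recommendations
print(estimated)     # beliefs converge toward the true engagement values
```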

3. AI cuts medical care time

In 2016, a new algorithm was introduced into Arkansas's home-care system. Its purpose was to allocate caregivers' time and resources more efficiently: it combines a number of assessment factors to determine how many hours of help each patient receives. But greater efficiency also meant that many people's care hours were cut, and those people are often the ones who need help the most.

At one point the software failed to properly account for diabetes or cerebral palsy patients, a problem that stemmed from a bug in the implementation. That raises the ethical question: should we let an algorithm decide a patient's care at all, and why would a state deploy one whose errors had not been found or audited? A clear example of how the algorithm affects people's lives is Tammy Dobbs. Dobbs has lived with cerebral palsy for years and moved to Arkansas in 2008. She needs a great deal of help: she cannot get into her wheelchair on her own, and her hands are stiff. For that reason, the state's care program had allocated her its maximum, 56 hours of home care a week.

After Arkansas applied the new algorithm, all of that changed. Dobbs's care time was cut from 56 hours a week to 32, barely 57% of what she had before. The reduction had an enormous impact on her life.
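To see how a single coding mistake can produce cuts like this, here is a purely hypothetical sketch of a points-based allocator. The real Arkansas system was built on a commercial assessment instrument and is far more complex; every field name, weight, and the bug below are invented for illustration.

```python
# Hypothetical points-based care-hours allocator (all details invented).
def weekly_care_hours(assessment):
    """Map an assessment to allocated weekly home-care hours (capped at 56)."""
    score = 0
    score += 3 * assessment.get("mobility_deficit", 0)   # 0-3 scale
    score += 2 * assessment.get("self_care_deficit", 0)  # 0-3 scale
    diagnosis = assessment.get("diagnosis")
    if diagnosis in ("diabetes", "cerebral palsy"):
        pass  # BUG: these diagnoses silently earn no points;
              # the intended line was `score += 8`
    elif diagnosis:
        score += 8
    return min(56, 3 * score)

patient = {"mobility_deficit": 3, "self_care_deficit": 3,
           "diagnosis": "cerebral palsy"}
print(weekly_care_hours(patient))  # 45 hours instead of the intended 56
```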

Improper use of AI

There are also examples of technology being deliberately put to unethical use, which is worse than an unintentional coding error.

1. IBM and the Holocaust

Yes, IBM helped the Nazis carry out the Holocaust in World War II. You may wonder, "Was there AI in World War II?" Something like it. At the time, IBM supplied punch-card tabulating machines used for classification. These were not simple card-sorting algorithms but complex tools requiring constant maintenance, and certainly not artificial intelligence as we usually think of it. Even so, I believe the misuse of technology is worth discussing whether it is AI or not.

First, a review of how these machines worked: a punch card was inserted, the machine read and stored the information, and it output numbers. Those numbers encoded which concentration camp a person would be sent to, what category of prisoner they were, and how they would die. Howard Carter, a chief investigator of economic warfare, wrote that one of America's own companies was carrying out Hitler's economic warfare, and that IBM and the Nazis were partners in the same international monstrosity crushing the world.

2. Deepfakes

Deepfakes are another application of artificial intelligence. The images, videos, and even audio they generate are extremely realistic. They are built with generative adversarial networks (GANs), a type of deep learning model. Deepfakes used purely for entertainment are harmless, but they carry real dangers and raise ethical problems. One way deepfakes cause harm is in politics. If a tweet can inflame a tense situation, what about a video or a voice recording? A fabricated recording of one political leader insulting another would spread virally online, deepening polarization and raising tensions among everyone who believes it is real. Another way to disrupt society is to flood the Internet with fake content. If fakes overwhelm the Internet and the media, how can anyone ever know that what they are seeing is real? Even text can be faked: how do you know whether what you are reading was written by me, or by a bot trying to mislead you?
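To ground the term, here is a minimal sketch of the adversarial idea behind deepfakes, assuming PyTorch is installed and substituting toy one-dimensional data for face images: a generator learns to turn random noise into samples that a discriminator can no longer tell apart from the real distribution. Deepfake systems apply the same adversarial principle to images, video, and audio at vastly larger scale.

```python
# Minimal GAN sketch on toy 1-D data (a stand-in for faces); assumes PyTorch.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # "Real" data: a Gaussian centered at 4.0 stands in for real images.
    return torch.randn(n, 1) * 0.5 + 4.0

for step in range(2000):
    # Train the discriminator to separate real samples from generated ones.
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to make the discriminator call its output real.
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 8))
print(samples.mean().item(), samples.std().item())  # should drift toward ~4.0, ~0.5
```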

The last category of harm is pornography. According to a September 2019 report, 96% of deepfake videos online were pornographic. As Forbes put it in an article, deepfake pornography is usually synthesized from the faces of celebrities or personal acquaintances without their consent.

Ethical issues that emerge

As scary as this sounds, what comes next may be worse. As AI and related technology develop rapidly, we could end up in an Orwellian situation. Governments will have more power, and they will use AI to know what we are doing, when, and where. Facial recognition in cameras will track your every movement, microphones will identify who is speaking and what is being said, and predictive algorithms will anticipate your next move.

That raises another problem: the easier it is for a government to track you, the easier it is for it to eliminate the people it hates. The next "Hitler" will operate far more efficiently, with no need to hunt for dissidents, because it will already know where they are at every moment. A government could also identify exactly who opposes it, making those opponents far easier to punish. Meanwhile, persuasive fake text and video will flood the Internet, delivering the least trustworthy and most worthless information. We must stay vigilant about AI.