AI: ITS IMPLICATIONS AND THREATS

Original article was published on Artificial Intelligence on Medium


Introduction to Artificial Intelligence

Artificial Intelligence, a term coined by John McCarthy in 1956, is, simply put, the ‘simulation of human intelligence processes by machines’. Source: TechTarget.

AI programming is based on three cognitive skills: learning, reasoning and self-correction.

Where learning processes focus on acquiring data and devising rules (algorithms) to turn that data into actionable information, reasoning processes ensure that the ‘right’ algorithms are used to achieve the desired outcome. The third skill, self-correction, involves continually refining and correcting algorithms so that the outcome is as accurate as possible.
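To make the three skills concrete, here is a deliberately tiny, hypothetical sketch in Python. Every name and the threshold rule are illustrative inventions, not a real AI framework: learning derives a rule from data, reasoning applies it, and self-correction nudges the rule when the outcome was wrong.

```python
def learn(data):
    """Learning: turn raw data into a simple rule (here, a threshold at the mean)."""
    return sum(data) / len(data)

def reason(rule, value):
    """Reasoning: apply the learned rule to reach an outcome."""
    return value > rule

def self_correct(rule, value, expected):
    """Self-correction: nudge the rule whenever the outcome disagrees with reality."""
    if reason(rule, value) != expected:
        rule += 0.1 if expected is False else -0.1
    return rule

rule = learn([1.0, 2.0, 3.0])          # the rule starts at the mean, 2.0
print(reason(rule, 2.5))               # True: 2.5 is above the threshold
rule = self_correct(rule, 2.5, False)  # the true label disagrees, so the rule shifts
print(rule)                            # threshold moves up toward 2.1
```

Real systems replace the threshold with statistical models and the one-step nudge with iterative optimisation, but the learn–reason–correct loop is the same shape.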

The AI of today is called ‘weak’ or ‘narrow’ AI: it is designed to perform one specific task. Apple’s Siri is one of many examples of weak AI.

Strong AI, or Artificial General Intelligence (AGI), poses a ‘greater’, though not an immediate, threat: it is AI that can replicate the cognitive functions of a human brain. It can multitask and, if developed to the fullest, outperform humans. On a brighter note, it hasn’t been developed yet. Strong Artificial Intelligence is the ‘goal’ of AI researchers, a milestone they are working towards.

Implications of Artificial Intelligence

In an article on Future Of Life, Max Tegmark, president of the Future of Life Institute, talks about the potential benefits of developing AI:

He says, “Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial.”

One such benefit is the automation of tasks in the workplace. Human workers can instead take up tasks that are more engaging and that they are better suited for: tasks that involve, for example, empathy and creativity. This reduced burden on the labour force means greater productivity and satisfaction.

The benefits of automation aren’t limited to the workplace; AI might change the way we learn too. Artificial intelligence can automate grading, assess students and tend to their requirements, making teachers’ work easier.

The most important breakthrough may come in the health care sector. Artificial intelligence can combine a patient’s data with other data to produce hypotheses that are more accurate than a human’s. Billing, booking appointments, and finding medical information can be made far more convenient by online virtual assistants and chatbots. The incorporation of AI in medicine can also reduce operational costs. In his report, David Bollier says AI is ‘broadening access to specialized knowledge – shifts that can reconfigure medical treatment norms and the health care market.’

However, it must be noted that misperceptions of AI and its implications might make people question its use in healthcare; among patients, it is likely to cause controversy. According to one poll, 80% of Russians ‘had a negative perception of robots performing medical surgeries in the future’.

Threats associated with AI

Along with all its pros, the development of AI comes with risks:

In this regard, physicist Stephen Hawking said, “Success in creating effective AI could be the biggest event in the history of our civilisation. Or the worst. So we cannot know if we will be infinitely helped by AI or ignored by it and sidelined, or conceivably destroyed by it.”

The evolution of the workforce, for example, poses a risk. AI could replace much of the workforce, which means loss of employment for a large share of the labour force. The uncertainty over exactly how AI will affect the economy is also challenging.

Since the world is getting smaller, AI would need to work by rules that hold globally, rules that allow for effective interaction all over the world. Imposing such rules is no easy task. Regulating AI is tricky too: with the introduction of new technologies, older regulatory rules can quickly become obsolete.

The development of AI also opens the door to malpractice, such as hacking or AI trafficking. Built-in bias is another concern: the programmer of an AI can introduce a bias, either intentionally or unintentionally. An artificial intelligence working with a bias, or learning from biased data, will itself produce biased results. This can give an arbitrary group an unfair advantage over others, although the unpredictability of a biased AI’s output is no less of a nuisance.
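How biased data produces biased results can be shown with a toy example. The “model” below is a hypothetical sketch, far simpler than any real system: it merely learns the most common historical decision per group, and so faithfully reproduces whatever skew the history contains.

```python
from collections import Counter

def train(labelled_applications):
    """Learn the most common decision per group from historical records."""
    tallies = {}
    for group, decision in labelled_applications:
        tallies.setdefault(group, Counter())[decision] += 1
    # The 'rule' is simply each group's majority historical outcome.
    return {g: c.most_common(1)[0][0] for g, c in tallies.items()}

# Invented historical data that is skewed against group B.
history = ([("A", "approve")] * 9 + [("A", "reject")] * 1
           + [("B", "approve")] * 2 + [("B", "reject")] * 8)

model = train(history)
print(model["A"])  # approve
print(model["B"])  # reject -- the model has learned the bias, not merit
```

Nothing in the code is malicious; the bias enters entirely through the training data, which is exactly why biased data is as dangerous as a biased programmer.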

The ‘weirder’ threat of AI

In her TED talk, Janelle Shane, an AI researcher, describes working with AI as ‘less like working with another human and a lot more like working with some kind of weird force of nature’. Since the mind of an AI is alien to ours, it will accomplish the task we give it, though not necessarily in the way we expect. Janelle also says, “It’s really easy to give AI the wrong problem to solve,” which, she notes, can be quite destructive.

While this problem seems more applicable to weak AI, the threat from strong AI is not far from it. The threat lies in the misalignment of an AI’s goals with ours, and the risk grows with strong AI. Another problem is that an AI might become super-competent, doing everything in its power to achieve a particular goal, however unethical the means. It might also take an instruction too literally, which is a problem in its own right.

Why should we care about the ‘threat’ of AI?

On 2 December 2014, Stephen Hawking told the BBC, “AI could spell the end of the human race.” Many other researchers and scientists believe the same, especially given the unexpected technological advances that have made what was once seen as fictional (strong AI) a possibility. According to the Future of Life Institute, as discussed at the Puerto Rico conference, ‘most researchers guess that it could happen before 2060,’ adding, ‘Since it may take decades to complete the required safety research, it is prudent to start it now.’

To wrap up, here’s what John Howard wrote in his article on pubmed.gov: ‘Engaging in strategic foresight about AI workplace applications will shift occupational research and practice from a reactive posture to a proactive one.’

What were your preconceptions about AI? Do you see it as a potential threat?

Continue the discussion down below!