For decades, science fiction has shown us scenarios where AI surpasses human intelligence and overpowers humanity. As we near a tipping point where AI could feature in every part of our lives – from logistics to healthcare, human resources to civil security – we take a look at opportunities and ethical questions in AI. In this article, we speak to AI expert Prof Dr Patrick Glauner about AI bias, as well as what impact – good and bad – AI could have on industry and workers.
What about our jobs? Can we trust AI to do what it is meant to, and without bias? What will society look like once we are surrounded by AI? Who will decide how far AI should go? These are some of the ‘frequently asked questions’ when it comes to AI. They were also among the questions participants were encouraged to delve into at the FNR’s science-meets-science-fiction event ‘House of Frankenstein’ – which also sparked the question of what it means to be human in the age of AI.
‘It’s not who has the best algorithm that wins. It’s who has the most data.’
“For about the last decade, the Big Data paradigm that has dominated research in machine learning can be summarized as follows: ‘It’s not who has the best algorithm that wins. It’s who has the most data,’” explains Dr Patrick Glauner, who in February 2020 takes up a Full Professorship in AI at the Deggendorf Institute of Technology (Germany), at the young age of 30.
In machine learning and statistics, samples of the population are typically used to get insights or derive generalisations about the population as a whole. Having a biased data set means that it is not representative of the population. Glauner explains that biases appear in nearly every data set.
“The machine learning models trained on those data sets subsequently tend to make biased decisions, too.”
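The effect Glauner describes can be sketched in a few lines of Python. The groups, distributions and numbers below are invented purely for illustration: a simple one-feature classifier whose threshold is tuned on a sample dominated by one group ends up noticeably less accurate on the under-represented group.

```python
import random

random.seed(0)

def sample(group, n):
    """Draw n labelled examples from a hypothetical group.

    The two groups have deliberately different feature distributions:
    group A's classes centre on 0.0 / 1.0, group B's on 1.0 / 2.0.
    """
    offset = 0.0 if group == "A" else 1.0
    data = []
    for _ in range(n):
        label = random.random() < 0.5
        centre = offset + (1.0 if label else 0.0)
        data.append((random.gauss(centre, 0.3), label))
    return data

def accuracy(data, threshold):
    return sum((x > threshold) == y for x, y in data) / len(data)

# Biased training set: 95% of examples come from group A.
train = sample("A", 950) + sample("B", 50)

# "Train" the classifier: pick the threshold that maximises
# accuracy on the (biased) training data.
candidates = [t / 100 for t in range(-100, 300)]
threshold = max(candidates, key=lambda t: accuracy(train, t))

# Evaluate on fresh, group-specific test sets.
acc_a = accuracy(sample("A", 1000), threshold)
acc_b = accuracy(sample("B", 1000), threshold)
print(f"accuracy on group A: {acc_a:.2f}")
print(f"accuracy on group B: {acc_b:.2f}")
```

Because the threshold is fitted almost entirely to group A, it sits near group A's class boundary and misclassifies most of group B's negative examples – the same mechanism, in miniature, behind biased real-world systems.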
Take facial recognition, which can, for example, unlock your phone by scanning your face. This technology has turned out to have ethnic bias, with personal stories and studies pointing to it failing to distinguish between faces of Asian ethnicity. Apps that are meant to ‘predict’ criminality also tend to be biased against people with darker skin. Why? Because the underlying models were trained on, for example, Caucasian men, rather than a representative sample of the population.
Then there is the case of Tay, an AI chatbot that turned racist almost immediately when unleashed on and exposed to Twitter. This shows that current AI does not understand what it computes – which is why the term ‘intelligence’ is criticised by part of the AI research community itself. Training AI on data sets is crucial, but the risk is that AI makes decisions about something it does not understand at all – decisions which are then applied by humans without knowing how the AI reached them. This is referred to as the “explainability” problem – the black box effect.
Other concerns are the power that comes with this technology, and where to put the limits on how it is used. China, for example, has rolled out facial recognition technology that can be used to identify protesters. And not just that: a city in China has had to apologise for using facial recognition to shame citizens seen wearing their pyjamas in public.
While the EU has drafted ethics guidelines for trustworthy AI, and the CEO of Microsoft has called for global guidelines, ethical guidelines for government use of such technology are yet to be agreed on and implemented. The use of armed drones in warfare is also a concern.
Bias – an old problem on a larger scale
Prof Dr Glauner explains that bias in data is far from new, and that there is a risk that known issues will be carried over to AI if not properly addressed.
“Biases have always been present in the field of statistics. I am aware of statistics papers from 1976 and 1979 that started discussing biases. In my opinion, in the Big Data era, we tend to repeat the very same mistakes that have been made in statistics for a long time, but at a much larger scale.”
Glauner explains that the machine learning research community has recently started to look more actively into the problem of biased data sets. However, he stresses that there needs to be greater awareness of this issue amongst students studying machine learning, as well as amongst professors.
“In my view, it will be almost impossible to entirely get rid of biases in data sets, but raising awareness of them would at least be a great start.”
Glauner also explains that it is imperative to close the gap between AI in academia and industry, emphasising that he will ensure that students he teaches under his Professorship will learn early on how to solve real-world problems.
AI and jobs
AI has both positive and negative implications for the working world. Some tasks will inevitably be handed over to AI, while others will continue to require humans. There will also be a mix. The Luxembourg Government’s ‘Artificial Intelligence: a strategic vision for Luxembourg’ puts the focus on how AI can improve our professional lives by automating time-consuming data-related tasks, helping us use our time more efficiently in the areas that require social relations, emotional intelligence and cultural sensitivity.
Prof Dr Glauner, whose AI background is rooted in industry, sees AI having a significant impact on the jobs market, both for businesses and workers. Not everyone who loses their job to AI will be able to transform into an AI developer. He also points out that the job market has always undergone change.
“For example, look back 100 years: most of the jobs from that time do not exist anymore. However, those changes are now happening more frequently. As a consequence, employees will be forced to potentially undergo retraining multiple times in their career.
For instance, China has become a world-leading country in AI innovation. Chinese companies are using that advantage to rapidly advance their competitiveness in a large number of other industries. If Western companies do not adapt to that reality, they will probably be out of business in the foreseeable future.”
“AI is the next step of the industrial revolution”
“Even though those changes are dramatic, we cannot stop them. AI is the next step of the industrial revolution.
“While the previous steps addressed the automation of repetitive physical steps, AI allows us to automate manual decision-making. That is a discipline in which humans naturally excel. AI’s ability to do so, too, will significantly impact nearly every industry. From a business perspective, this will result in more efficient business processes and new services/products that improve humans’ lives.”
Prof Dr Glauner’s PhD project is a concrete example of how AI can be used to improve output and customer experience. Funded by an Industrial Fellowship grant (AFR-PPP at the time) – a collaboration between public research and industry – Glauner developed AI algorithms that detect non-technical losses (NTL) in power grids, critical infrastructure assets.
“NTLs include, but are not limited to, electricity theft, broken or malfunctioning meters and arranged false meter readings. In emerging markets, NTL are a prime concern and often account for up to 40% of the total electricity distributed.
The annual world-wide costs for utilities due to NTL are estimated to be around USD 100 billion. Reducing NTL in order to increase reliability, revenue, and profit of power grids is therefore of vital interest to utilities and authorities. My thesis has resulted in appreciable results on real-world big data sets of millions of customers.”
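As a rough illustration of the kind of task involved – not the actual algorithms from the thesis, which worked on real big data sets of millions of customers – a very simple NTL indicator might flag meters whose recent consumption falls far below their own historical baseline. The function and meter readings below are invented:

```python
from statistics import mean, stdev

def suspicious(readings, recent_months=3, z_cut=-2.0):
    """Toy NTL heuristic: flag a meter if its recent average
    consumption is far below its own historical baseline.

    readings: monthly kWh values, oldest first.
    """
    history = readings[:-recent_months]
    recent = readings[-recent_months:]
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False  # no variation to compare against
    z = (mean(recent) - mu) / sigma
    return z < z_cut  # large negative z-score = sudden drop

# Invented example meters: one stable, one with a sudden drop
# (as might be caused by tampering or a broken meter).
normal_meter = [300, 310, 295, 305, 290, 300, 298, 305, 295, 302, 300, 297]
tampered_meter = [300, 310, 295, 305, 290, 300, 298, 305, 295, 80, 75, 90]
print(suspicious(normal_meter))    # False
print(suspicious(tampered_meter))  # True
```

Real NTL detection is far harder than this sketch suggests – legitimate drops (a vacant house, new appliances) look the same, which is why production systems combine many such features with machine learning models trained on confirmed inspection results.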
AI and new industries
The opportunities AI presents for existing industries are manifold, if done right, and AI could pave the way for completely new industries as well: space exploration and space mining would hardly be developing so fast without AI. For example, there is a communication delay between the Earth and the Moon, which makes controlling an unmanned vehicle or machine from Earth challenging, to say the least. However, if the machine were able to navigate on its own and make the most basic decisions, this communication gap would no longer be much of an obstacle. Find out more about this FNR-funded project.
Improve, not replace
AI undoubtedly represents huge opportunities for industry in particular, and has the potential to improve performance, output, and worker and customer satisfaction, to name only a few. However, it is imperative that the bodies in charge put ethical considerations and the good of society at the heart of their strategies. A balance must be found: the goal has to be to improve society and the lives of the people within it, not to replace them. The same goes for bias in AI: after all, what good can come from algorithms that build their assumptions on non-representative data?