Deep Learning/AI Challenges in 2019 and How to Work Around Them

Source: Deep Learning on Medium


By Rebecca Beris

Artificial Intelligence (AI) seems to appear in the technology sections of well-respected publications on an almost weekly basis now. It’s clear that a revolution in AI technology is underway, driven by advances in computing power and techniques such as deep learning in addition to greater investment and more data than ever.

It was as far back as 1950 that Alan Turing posed the famous question, “can machines think?” Since then, several disciplines have emerged in the field of AI. Proponents of modern AI give the impression of practically boundless applications that will completely change society for the better, while skeptics argue that the benefits of AI are overblown and the costs are higher than conveyed.

The truth is that AI and sub-fields like Deep Learning (DL) are useful tools that can be used to augment human activities in a number of dramatic ways. However, the successful implementation of AI ultimately depends on our ability to face some important challenges and overcome them. Here are five AI and deep learning challenges for 2019.

1. Neural Network Opacity

Deep learning neural networks are typically opaque, meaning their outputs cannot be readily explained. A deep network can contain millions or even billions of parameters (the weights connecting its neurons), and each parameter is meaningful only in terms of its location within a vastly complicated system.

It’s a huge challenge, therefore, to explain the outputs produced by a given network. This challenge is of particular concern in applications such as medical diagnostics, in which medical professionals would ideally like to understand why a given network came to a certain decision.

Gaining transparency into deep learning networks is a huge challenge, and progress rests on advancing techniques that visualize the features individual neurons have learned to represent. There have been some promising developments in interpretability research, which aim to produce neural networks that can explain their own decisions.
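One simple, model-agnostic interpretability technique is occlusion sensitivity: zero out each input feature in turn and see how much the output shifts. The sketch below applies it to a toy linear "model" with invented weights (the model, weights, and input are all hypothetical, standing in for an opaque trained network).

```python
import numpy as np

# Hypothetical weights standing in for a trained, opaque network.
weights = np.array([0.1, 2.0, -0.3, 1.5])

def model(x):
    """Score an input vector; a stand-in for a black-box network's output."""
    return float(weights @ x)

def occlusion_sensitivity(x):
    """Measure how much the score changes when each feature is zeroed.

    Large changes flag features the model relies on heavily, giving a
    crude explanation of an otherwise opaque output.
    """
    base = model(x)
    scores = []
    for i in range(len(x)):
        occluded = x.copy()
        occluded[i] = 0.0          # hide one feature at a time
        scores.append(abs(base - model(occluded)))
    return np.array(scores)

x = np.array([1.0, 1.0, 1.0, 1.0])
importance = occlusion_sensitivity(x)
print(importance)                  # for this linear toy, entry i equals |weights[i]|
print(int(importance.argmax()))    # feature 1 dominates the prediction
```

On a real network the same loop would occlude image patches rather than single numbers, but the principle, perturb the input and watch the output, is identical.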

2. Ensuring Data Quality

Machine learning and deep learning models are data hungry: they need large amounts of high-quality data to become adept at tasks like image recognition. Indeed, high accuracy is one of the main reasons to use deep learning for tasks like computer vision. Yet tests have shown that adding random noise to the training datasets of these systems markedly degrades their performance.

While an organization looking to train and run a machine learning model in production obviously wouldn't introduce corrupting data on purpose, it's clear that poor-quality training data can unintentionally hamper the usefulness of these technologies.

It's imperative, therefore, to run training data through some form of data cleansing or analytics tooling before using it to train machine learning models. Such processes can catch anomalies and errors, making it more likely that the model learns to do what its users need with sufficient accuracy.
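As a minimal sketch of such a cleansing step, the function below drops missing values and flags extreme outliers using the median absolute deviation, which is robust to the very outliers it is hunting. The sample values are invented; real pipelines would apply far richer validation, but the principle is the same.

```python
import numpy as np

def clean(values, thresh=3.5):
    """Drop NaNs, then filter outliers via a robust (MAD-based) z-score.

    The median absolute deviation is used instead of the standard
    deviation because a single huge outlier can inflate the standard
    deviation enough to hide itself.
    """
    values = np.asarray(values, dtype=float)
    values = values[~np.isnan(values)]            # remove missing entries
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    if mad == 0:                                  # no spread: nothing to filter
        return values
    robust_z = 0.6745 * (values - med) / mad      # 0.6745 scales MAD to ~std
    return values[np.abs(robust_z) < thresh]

raw = [4.9, 5.1, 5.0, float("nan"), 5.2, 500.0]   # 500.0 is a data-entry error
print(clean(raw))                                  # the NaN and 500.0 are gone
```

Running this kind of check before every training run makes silent data corruption, the kind that quietly erodes model accuracy, much harder to miss.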

3. Plugging the Talent Gap

According to a McKinsey Global Survey on AI from November 2018, 43% of respondents cited a lack of AI talent as one of the top challenges to AI adoption in their businesses. Despite the hype about AI automating many activities previously carried out by people, the truth is that significant human resources are needed to implement it correctly.

Businesses interested in plugging the talent gap need to broaden their horizons if they want to use AI solutions that actually solve problems. Furthermore, educational institutions such as schools and universities must recognize the need to deepen students' knowledge of AI technologies by introducing dedicated modules and lessons on machine learning methods, statistical modeling, and more.

4. Data Security

Machine learning frameworks and systems serve as the springboard for training neural networks: they feed training data through a network so it can learn to complete tasks, and the result can then be deployed in intelligent applications and tools, such as chatbots, recommendation engines, and robots that automate tedious work previously requiring human labor.

Behind any application using machine learning or deep learning methods is a complex ecosystem involving lots of different systems, data formats, data movement, and disparate stakeholder groups. In a business context, huge amounts of sensitive real-world business data are needed to serve the needs of stakeholders in such an ecosystem.

Data security is, therefore, a huge challenge in the context of realizing the business use cases of artificial intelligence techniques. Meeting this challenge entails ensuring all stakeholders are well-informed on data security methods, including encryption, authentication, and compliance requirements. Other factors can help, such as performing threat modeling and exercising good information security hygiene.
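One of the simplest authentication measures a team can apply to training data is a keyed integrity check, so a consumer can detect whether a dataset was tampered with in transit. Below is a minimal sketch using Python's standard `hmac` module; the secret key and data payload are hypothetical placeholders.

```python
import hashlib
import hmac

# Hypothetical shared secret; in practice this would come from a key
# management service, never be hard-coded in source.
SECRET_KEY = b"replace-with-managed-key"

def sign(data: bytes) -> str:
    """Produce an HMAC-SHA256 tag so consumers can verify the data."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    """Check a tag using a constant-time comparison (resists timing attacks)."""
    return hmac.compare_digest(sign(data), tag)

batch = b"label,feature\n1,0.53\n0,0.12\n"   # stand-in for a training shard
tag = sign(batch)
print(verify(batch, tag))                    # True: data is untampered
print(verify(batch + b"0,9.99\n", tag))      # False: data was modified
```

An integrity tag does not replace encryption or access control, but it gives every stakeholder in the data pipeline a cheap way to confirm that what arrived is what was sent.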

5. Production-Grade AI

There is nothing wrong with the underlying mathematics and statistics of deep learning and machine learning systems; rather, an organization's ability to solve actual problems with AI rests on becoming production-ready.

Production-grade AI solutions can only be implemented through adequate investment in infrastructure, encompassing both hardware and software. Underlying systems need to be reliable with minimal downtime, particularly if they are cloud-based. Addressing the other concerns already discussed, including data security and hiring the right talent, further supports production-grade AI systems that solve real-world business problems.

Wrap Up

As we have seen, artificial intelligence can be both a boon and a burden, with applications ranging from manufacturing to medical diagnosis. None of the challenges addressed here are insurmountable, but it is important that everyone involved in AI places a greater emphasis on solving these issues during 2019.