The Access Problem in AI

It’s not automation of jobs we should be worried about, but accessibility of AI tech

The Paradigm

In 1965, Gordon Moore, who would go on to co-found Intel, published a paper observing an odd phenomenon: every year, the number of components per integrated circuit seemed to double. Circuits were getting smaller and cheaper while compute was getting faster. In the decades to come, this trend would be the cornerstone of the technological revolution.
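A back-of-the-envelope sketch of that doubling, in Python. The starting count and doubling cadence here are illustrative assumptions for the sketch, not figures from Moore’s 1965 paper:

```python
def components(year, base_year=1965, base_count=64, doubling_years=2):
    """Estimated components per integrated circuit under steady doubling.

    base_count and doubling_years are toy assumptions chosen to
    illustrate exponential growth, not historical data.
    """
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1965, 1975, 1985):
    print(year, round(components(year)))
```

Compounding like this is why the trend mattered: a modest doubling cadence turns tens of components into tens of thousands within two decades.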

Four years later, in 1969, a team funded by the US Department of Defense started working on inter-networking multiple networks to create a single larger network. By 1990 this project had led to what we today call the Internet, and the communications revolution was born.

In 2012, a dramatic breakthrough in the ImageNet challenge improved the state of the art in computer vision by a massive margin. This leap was built on both the technological revolution (2012 was the first time GPUs were used in ImageNet) and the communications revolution (decades of international collaboration on machine learning research). This victory for the computer vision community soon translated into performance improvements in natural language processing, generative tasks, audio processing, and many other areas.

20-point performance jump in 5 years

In 2018, we’re still coming to grips with the enormous potential of machine learning. We’re learning what it means for the future of programming, and in the process discovering a new paradigm of software. Developing ML-intensive software demands an altogether different workflow and a novel stack of techniques and practices. Andrej Karpathy, Director of AI at Tesla, puts it best:

“Neural networks are not just another classifier, they represent the beginning of a fundamental shift in how we write software. They are Software 2.0.”

Jeff Dean, a giant of the field and currently the head of AI at Google, believes that deep learning is transforming the landscape for systems engineering. In his SysML 2018 keynote talk, he notes:

“Deep Learning is transforming how we design computers.”

The Problem

Machine learning algorithms are outperforming experts at their narrowly defined tasks, and the labour market is naturally concerned about automation. When deep learning reads EKGs better than cardiologists, detects pneumonia on par with human radiologists, and recognizes conversational speech almost as well as humans, concerns about drastic shifts in both education and employment seem legitimate: the “future of work” seems in disarray, and no job seems safe from disruption.

Source: Andrej Karpathy

But these views on automation and the resulting job loss are myopic. They don’t take into account the new jobs being created by these technological advancements. The new paradigm puts optimization at the centre of programming, and this means tweaking hyperparameters, designing a representative loss function, ensuring clean, accurate data, and checking for fairness. All of this is a fundamental departure from the way traditional programmers work and think. No wonder Forbes thinks the field of Artificial Intelligence is seriously understaffed, and AI expertise is in huge demand.
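To make “optimization at the centre of programming” concrete, here is a minimal, self-contained sketch: rather than hand-coding logic, we tune a parameter to minimise a loss function. The data, learning rate, and step count are toy assumptions chosen purely for illustration:

```python
# Toy data: (x, y) pairs generated by y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # the "program" is this single learnable parameter
lr = 0.01  # learning rate: a hyperparameter we have to tweak

for _ in range(500):
    # Mean-squared-error loss; its gradient with respect to w
    # tells us which direction to nudge the parameter.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # converges close to 2.0
```

Notice that the programmer’s job here is choosing the loss, the learning rate, and the data quality, not writing the rule y = 2x by hand. That shift in workflow is the essence of Software 2.0.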

The problem isn’t automation; it’s unequal access to the benefits of technological advancement.

Perhaps the best evidence against worrying about job loss from automation is the trend in labour productivity (output of goods and services per hour of labour) and productivity growth (the change in labour productivity over time) in the US economy over the last 50 years. While the former has steadily increased, the latter is in decline; in fact, productivity growth has fallen drastically across the world. The most startling implication is that while technology is making our individual lives better, it isn’t having a big impact on the productivity growth of the labour market. This makes sense if we think about how handheld devices have affected our lives and individual productivity over the past decade: the work we do, and our productivity, has improved, yet today we have more work, not less. So job loss is not the biggest of our problems.

On the other hand, what is worrying is the income of the average US household. The more we advance technologically, the more the benefits concentrate in the uppermost 20% of the population. We all see a rise in labour productivity, but the benefits of this rise haven’t been broadly shared. A naive way to think about it: the upper 20% has the existing capital to actually deploy these new technological innovations for personal profit. This creates a strong compounding effect that makes technology a means for the rich to get richer while the poor stay stagnant. This is the Access Problem that AI faces today. To put it in a few words:

AI and automation threaten to widen the social and economic mobility gap in American society, simply because the lower classes don’t have access to the education and technology they need to be part of the new workforce.

The Solution

NimbleBox is a cloud platform that aims to bring AI to the masses. It lets users rent GPUs more cheaply than any other platform, and its features make building AI projects an intuitive experience even for beginners. The education-focused approach NimbleBox takes to this problem is essential in enabling access for newcomers to AI, and will soon ease deployment for experienced users as well.

Armed with an intuitive UX and a responsive support team, NimbleBox aims to democratize AI. We’re here to help each and every one of you achieve your AI goals faster and better. We believe artificial intelligence is a definitive moment in the history of software, and making sure the benefits of this advancement are shared in an inclusive manner is a collective responsibility.

This is our mission.
And we hope you’ll join us at NimbleBox.ai

Source: Deep Learning on Medium