When Does Artificial Intelligence Not Work In Business? – Forbes




What are the shortcomings of AI/ML in business? originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world. 

Answer by Josh Coyne, Investor at Kleiner Perkins, in their Session: 

Today, AI/ML tends to fall short in three main categories: capabilities vs. expectations, financial impact, and amorality.

To start, there is a large divide between expectations and the reality of AI's capabilities. AI is not a panacea: it is math. It cannot program or think better than humans; instead, it excels at narrow, closed-set, deterministic use cases backed by large amounts of data. In many ways, AI is just pattern-matching. Unfortunately, this disconnect between expectations and capabilities continues to produce poor user experiences and mounting technical debt.
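To make "AI is just pattern-matching" concrete, here is a minimal sketch of a 1-nearest-neighbor classifier, one of the simplest ML models: it predicts by finding the closest previously seen labeled example. The data and labels are hypothetical, purely for illustration.

```python
import math

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`.

    `train` is a list of (features, label) pairs, where features are
    tuples of floats. No reasoning happens here: the "model" simply
    matches the query against stored patterns by Euclidean distance.
    """
    features, label = min(train, key=lambda pair: math.dist(pair[0], query))
    return label

# Toy data: two clusters with made-up labels "A" and "B".
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((5.0, 5.0), "B"), ((4.8, 5.1), "B")]

print(nearest_neighbor(train, (0.2, 0.1)))  # closest examples are "A"
print(nearest_neighbor(train, (4.9, 4.9)))  # closest examples are "B"
```

The model works only inside the narrow, closed set of patterns it has seen; a query far from any training data still gets a confident-looking label, which is exactly the expectations gap described above.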

Second, using AI/ML can have material implications for a company's financial profile. A key contrast between developing AI/ML and traditional software is data management and processing: building ML models requires collecting, storing, transforming, labeling, and serving data in production. These costs add up quickly and can meaningfully compress gross margins; training a single model alone can cost hundreds of thousands of dollars in computational overhead. Additional COGS stem from human-in-the-loop structures (annotation and human failover), model inference in production, and diminishing marginal returns to training models. Unfortunately, as OpenAI's "AI and Compute" analysis showed, the computational demands of AI workloads are not slowing down: compute used in the largest training runs has roughly doubled every 3.4 months. This does not bode well for improving costs and margins.
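The 3.4-month doubling figure compounds alarmingly fast, which a quick back-of-the-envelope calculation makes clear. The $100k starting cost and linear cost-per-compute scaling below are illustrative assumptions, not figures from the article.

```python
# A doubling every 3.4 months annualizes to 2 ** (12 / 3.4),
# i.e. roughly 11.5x more compute per year at the frontier.
doubling_months = 3.4
annual_factor = 2 ** (12 / doubling_months)
print(f"~{annual_factor:.1f}x more compute per year")

# Hypothetical: a frontier model costing $100k to train today, projected
# two years out, assuming cost scales linearly with compute.
cost_today = 100_000
years = 2
projected_cost = cost_today * annual_factor ** years
print(f"projected training cost: ${projected_cost:,.0f}")
```

Even under these simplified assumptions, a six-figure training bill becomes an eight-figure one within two years, which is why the trend weighs so heavily on gross margins.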

Finally, ML models lack any moral compass; it is up to ML researchers and developers to ensure that their algorithms meet the highest ethical standards. Unfortunately, this is far from reality today. The push to build the best-performing AI models has led many organizations to prioritize complexity over explainability and trust, opening the door to potential biases. One example is the 2018 Gender Shades study, which found that commercial facial recognition services from Microsoft and IBM performed better on men than on women. As the world becomes more dependent on algorithms for decision-making, it is critical that explainability become a core component of ML models; without it, discriminatory algorithmic decisions go unchecked. Being able to interpret AI remains key to addressing the lack of trust in black-box decisions, avoiding vulnerabilities in models, and reducing the amount of human bias in machine learning. We still have a lot of work to do.

You can follow Quora Sessions here.