Getting smart about artificial intelligence – Straight Talk


Assuming this proposition to be correct, it is likely to have profound consequences for companies as well as society.

The boundaries of organizations will break down even further. Today NXP develops sensors to gather data; tomorrow we will also help other companies optimize their manufacturing or design processes. New revenue models are likely to develop, and you can expect entirely new business models to emerge from your data.

The good news for established companies is that startups won’t necessarily have the advantage in this next industrial revolution. Don’t underestimate the value of an established supply chain. Giant hundred-year-old companies will be able to disrupt themselves once they know how to interpret the deeper implications of the data they generate.

Data generated in manufacturing could be used for marketing, or information from your retail stores can be used for other areas of your company. As companies respond to these opportunities, the overall company structure and the operating model are likely to become more fluid – as will the workforce and the overall ecosystem.

The range of data that can now be collected is incredible. In our NXP portfolio we have sensors that measure humidity, temperature, and sound. We have radar, we have cameras, we have smart speakers. And all those devices generate useful data. For example, if I walk into an office, my entrance can be caught on camera, so my availability is known. If I walk into a meeting room, my presence might be noted and, if the room is not taken, it could be booked for me automatically. Meanwhile, the system might adjust some background music to my liking, make a call I have scheduled, or track my heart rate to make sure I’m healthy.
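A workflow like the one described above can be sketched in a few lines. This is a purely illustrative toy, not a real NXP API: the class, method, and room names are all hypothetical, standing in for whatever event bus and building-management system would drive it in practice.

```python
# Hypothetical sketch of a presence-triggered room workflow.
# All names (RoomSystem, on_presence, etc.) are illustrative.

class RoomSystem:
    def __init__(self):
        self.bookings = {}                       # room -> occupant
        self.preferences = {"alice": {"music": "jazz"}}

    def on_presence(self, person, room):
        """Handle a camera/sensor event: a person entered a room."""
        actions = []
        if room not in self.bookings:
            # room is free, so book it for the person automatically
            self.bookings[room] = person
            actions.append(f"booked {room} for {person}")
        prefs = self.preferences.get(person, {})
        if "music" in prefs:
            # adjust the environment to the person's stored preferences
            actions.append(f"playing {prefs['music']}")
        return actions

system = RoomSystem()
print(system.on_presence("alice", "meeting-room-2"))
```

The interesting design question is not the booking logic itself but the event plumbing: each sensor (camera, microphone, radar) publishes events, and services like this one subscribe and react.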

Data as a personal asset

This is also going to have a profound impact on customer value propositions.

In the past, privacy was your right; now it’s an asset you can sell.

Would you be open to putting a dashboard camera on your car and sharing your driving with everyone? Probably not. But what if your insurance company offered to charge you less if you agreed to do it? Would you be open to it then? At some point, this may go even further. Some company may offer you a free car, as long as you offer to supply them with your data. It will be a tradeoff.

AI is also likely to blow up traditional professional hierarchies. Earlier waves of automation affected the low-pay, low-grade jobs, but this revolution will affect very senior positions and transform the world of finance, HR, procurement, and IT.

For a long time, we have had doctors who studied for years to learn to read MRI and CAT scans and to diagnose a specific disease or illness based on what they saw. Soon, thanks to relatively simple algorithms, computers will take minutes to make an assessment that is 98% more accurate than that specialist's diagnosis.

Over the next 10-15 years AI will lead to bigger changes than any industrial revolution we have seen before. Even now, if you look at our NXP Edge products, we have already put computing power into people's hands that nearly enables real-time AI on the spot.

But computing in the AI era will not just be faster and more powerful than what came before. AI systems are fundamentally different: they can identify patterns on their own, without our even posing them a question.
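The classic example of finding patterns without a posed question is unsupervised clustering. Below is a minimal k-means sketch in pure Python (a deliberately naive version, not a production implementation): it is handed unlabeled readings and discovers the two regimes in the data on its own.

```python
# Minimal k-means sketch: the algorithm groups unlabeled data
# by itself -- no labels, no question posed in advance.

def kmeans_1d(points, k=2, iters=20):
    # naive initialization: use the first k points as centers
    centers = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center
            i = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[i].append(p)
        # move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

# sensor readings with two obvious regimes, but no labels attached
data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.4]
print(kmeans_1d(data))
```

The same idea, scaled up to millions of dimensions and observations, is what lets these systems surface structure nobody asked about.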

This will create a host of new challenges, both for IT professionals and society.

In Europe, individuals own their data. In the US, companies own that data. And in Asia Pacific, regulators and governments own the data. I believe our individual data needs to be protected, but we also need to keep in mind that limiting access to data will limit the pace of innovation.

Some of those challenges are ethical. Like children, AI systems learn by example: in one recent experiment, bots that were fed a diet of Twitter feeds learned how to be racist, even though no one had programmed them to be.

I believe a certain level of ethics can be hardwired into the algorithms. I'm working with a friend who has set up algorithms so that when an algorithm goes out of bounds, it incurs a penalty – the mathematical equivalent of sending a misbehaving child to their room for a timeout.
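The mechanism described here resembles what machine-learning practitioners call reward shaping or a penalty term. A toy sketch of the idea (the function name, bounds, and penalty value are all hypothetical, not the friend's actual system):

```python
# Toy sketch of penalizing an algorithm for going out of bounds,
# in the spirit of reward shaping. Purely illustrative values.

def shaped_reward(action, base_reward, lower=0.0, upper=1.0, penalty=10.0):
    """Return the base reward, minus a large penalty whenever the
    action leaves the permitted range -- the 'timeout' in the text."""
    if action < lower or action > upper:
        return base_reward - penalty
    return base_reward

print(shaped_reward(0.5, 1.0))   # action within bounds
print(shaped_reward(1.5, 1.0))   # action out of bounds: penalized
```

A learning system optimizing this shaped reward is steered away from the forbidden region without anyone enumerating every forbidden behavior explicitly.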

We will also have to decide what uses we want to permit. Some social media analysts claim that after a user has made just five clicks, companies have that user's psychological profile, and after 300 clicks and likes, they understand who the user is and what drives them better than a spouse or someone they have lived with for 30 years. Once they know all that, it becomes easy to influence you online, by adjusting your environment in ways they know will shape your consumption behavior.

If we can generate all this data, and the data is publicly accessible and conditioning us in certain ways, at what point is the technology influencing us, rather than vice versa?

For the IT professional, this means we can no longer know only about technology; we must also understand psychology, organizational science, and political science. As Darwin observed, it is not the strongest or the smartest that survive, but the species most responsive to change. That may be true, but you will need to deliver more than the fastest adaptation to AI's new capacities. Ultimately, I believe that if your company is going to thrive, you will need to design systems that also protect people's emotional and physical safety.