IoT & the new Data Science frontiers

Image from Wikipedia

It has been a while since the Hype Cycle for IoT was proposed by Gartner, and we are slowly moving into the ‘Slope of Enlightenment’.

https://youtu.be/y6BHsXH1P6M?t=118

Here’s an interview in which David Cote, CEO of Honeywell, explains how IoT & Data Science have become the primary activity of his company. I have picked this interview because what he says is something I have personally experienced. There is a lot of hype around AI and IoT, but here he talks about the ground reality. The example he narrates is very pragmatic, and you will find that most industries are moving in this direction.

There are three important things he mentioned in this interview.

  1. Although Honeywell is an automation-based company that would typically employ electrical & mechanical engineers, he mentions that 50% of their engineers are software engineers whose primary role is to collect data and analyse it.
  2. Next, he talks about the edge devices collecting data & sending it over the internet. I had worked on a similar project, the automation of an oil depot, where people walked around with walkie-talkies, took measurements from different sensors, and relayed them to people sitting in the control room, who would then take the necessary action. In other words, the data used to be generated by people and is now being generated by sensors. Sensor-generated data is significantly more accurate, and it is largely unstructured in nature. This is fertile ground for Data Scientists.
  3. The third and most important point he mentioned was the rise in computational power. Machine Learning (ML) algorithms were not invented yesterday; you can trace some of them back at least 50 years. Neural networks, the foundation of deep learning, were proposed in the 1950s, and the first commercial implementations appeared in the 1980s. The only reason we see such spectacular growth in adoption today is that we now have the computational power to run these algorithms at the scale industry demands, and at a cost that is commercially feasible. The future growth of AI & ML is directly tied to the growth of computational power, which has continued its exponential rise for decades.

What has changed in recent times is the implementation of deep learning models on edge devices.

A typical model of IoT looks like this.

Vector images compiled from Freepik

In this architecture, the edge devices are typically connected through a wired or wireless medium to an IoT gateway. These edge devices usually have very low-end microcontrollers with as little as 4 KB of memory. On such devices you cannot port a conventional OS like Linux, Windows, or Android; most of them run a stand-alone application without any OS. An IoT gateway’s primary function is to take the raw data from these devices, encode it, & send it securely to the server.
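
As a rough illustration of that gateway role, here is a minimal Python sketch of the "take raw data, encode it, send it securely" step. The host name, port, and payload fields are hypothetical stand-ins; a real gateway would add authentication, buffering, and retry logic.

```python
import json
import socket
import ssl
import time

SERVER_HOST = "iot.example.com"   # hypothetical cloud endpoint
SERVER_PORT = 8883                # hypothetical TLS port

def read_raw_reading():
    """Stand-in for data arriving from a low-end edge device
    (e.g. over a serial or low-power wireless link)."""
    return {"device_id": "sensor-01", "temp_c": 24.7, "ts": time.time()}

def forward_reading(reading):
    """Encode the raw reading as JSON and push it to the server over TLS."""
    payload = json.dumps(reading).encode("utf-8")
    context = ssl.create_default_context()
    with socket.create_connection((SERVER_HOST, SERVER_PORT)) as sock:
        with context.wrap_socket(sock, server_hostname=SERVER_HOST) as tls:
            tls.sendall(payload)

if __name__ == "__main__":
    forward_reading(read_raw_reading())
```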

However, there is a small issue with this model: the ML model is deployed on the cloud, while the edge devices are typically capable of producing a few thousand data points every second. Take the example of a temperature sensor. The temperature of a body does not change suddenly, so if your sensor is sending 1,000 readings/sec, most of this data is useless because the readings are almost the same. The server can process this data in real time and, through its ML model, command the device to take a proactive step. Although this may look a little inefficient, the architecture has one distinct advantage: no matter what type of raw data comes from the edge devices, the ML model on the server can undergo continuous upgrades & changes over time, and with the advent of continuous-deployment models, any amount of complexity is manageable.
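
One common way to deal with that redundancy, sketched here under the assumption that only meaningful changes need to reach the cloud, is a simple change-based filter at the gateway: a reading is forwarded only if it differs from the last forwarded value by more than a threshold (the 0.5 °C below is an arbitrary, illustrative choice).

```python
def significant_readings(readings, threshold=0.5):
    """Yield only readings that changed by more than `threshold`
    since the last forwarded value, dropping near-duplicates."""
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > threshold:
            last_sent = value
            yield value

# A burst of temperature readings collapses to a few meaningful changes.
stream = [24.7, 24.7, 24.8, 24.7, 25.4, 25.4, 26.1]
print(list(significant_readings(stream)))  # [24.7, 25.4, 26.1]
```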

However, it would be even better if the ML model could be deployed locally on the edge device to take such action. Well, now that is possible. STMicroelectronics recently launched a new microcontroller series along with firmware tools that can convert neural network models and port them onto their microcontrollers. The model is trained by developers on their own machines, then converted by the tool into an optimized form and ported onto the device. However, the deployed model would be significantly simpler than what we can deploy on the cloud, simply because of the limitation in computing power.
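
ST’s firmware tools are proprietary, so as a stand-in, the sketch below shows the same general train-on-desktop, shrink, deploy-to-microcontroller workflow using TensorFlow Lite, a commonly used open-source route for this. The tiny network and file name are purely illustrative and not tied to any particular device.

```python
import tensorflow as tf

# Train (or load) a deliberately small model on the developer's machine.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(x_train, y_train, epochs=10)  # training data omitted here

# Convert the trained model into a compact flatbuffer that a
# microcontroller-class interpreter can run with very little RAM and flash.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables e.g. quantization
tflite_model = converter.convert()

with open("sensor_model.tflite", "wb") as f:
    f.write(tflite_model)
```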

There are a lot of open questions here, and only time will reveal the answers. However, let’s analyse what changes we might see.

  1. In the IoT architecture, each of the components (edge devices, servers, ML models, etc.) may be developed by a different solution provider. With intense, cut-throat competition, only genuinely intelligent solutions will add value to the top line of any company.
  2. If the models are placed on the devices, the hardware vendors would benefit; if they are placed on the cloud, the vendors providing cloud solutions would benefit. The rest of the solution providers would face stiff competition & thin top lines.
  3. ML models can at times be very specific to the device; there is no one-size-fits-all solution. In this case, a solution provider with domain knowledge of a particular use case would have the upper hand.

We would all love to control our household devices from our mobile phones, & likewise every business owner would love to control their business from a laptop. This means IoT should see an upward growth trend for at least a decade. However, it is not linear or vertical growth; it is highly domain-specific, horizontal growth. The long-term strategy of any professional or company should therefore be to develop domain expertise over technology.