Original article was published on Artificial Intelligence – TechCrunch
Sony has developed an interesting new hybrid technology: an image sensor with an AI processing system built into the hardware, making it a single integrated package. The benefits and applications for this are potentially enormous as imagery and code continue to merge.
The idea is fairly simple in concept. You take a traditional CMOS image sensor like you’d find in any phone or camera today, and stack it on top of a logic chip that’s built not just for pulling pixels off the sensor but for operating a machine learning model that extracts information from those pixels.
The result is a single electronic assembly that can do a great deal of interesting processing on a photo before that photo is ever sent elsewhere, like a main logic board, GPU, or the cloud.
To be clear, image sensors already have companion processors that do the usual work of sorting pixels, compressing them into a JPEG, and so on. But they’re very focused on performing a handful of common tasks very quickly.
The Sony chip, as the company explains it, is capable of more sophisticated processes and outputs. For instance, if the frame captures a dog in a field, the chip could immediately analyze it for objects and, instead of sending on the full image, simply report "dog," "grass," and anything else it recognizes.
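To make that metadata-only mode concrete, here is a minimal sketch of the idea in Python. Nothing here is Sony's actual API; the `detect_objects` stub stands in for the on-sensor neural network, and the file name and label lookup are invented for illustration.

```python
# Hypothetical sketch of the "report labels, not pixels" mode described above.
# `detect_objects` is a stand-in for the model running on the logic die
# stacked beneath the pixel array; the lookup table is fake so the sketch
# is self-contained.

def detect_objects(frame_id):
    """Placeholder for on-sensor inference: maps a raw frame to labels."""
    fake_inference = {"dog_in_field.raw": ["dog", "grass"]}
    return fake_inference.get(frame_id, [])

def capture_metadata(frame_id):
    """Return only the recognized labels; the image itself never leaves."""
    labels = detect_objects(frame_id)
    return {"labels": labels, "image": None}  # pixels discarded on-chip

print(capture_metadata("dog_in_field.raw"))
```

The key design point is the `"image": None` field: downstream consumers receive a small, structured description of the scene rather than the raw exposure.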
It could also perform what amount to on-the-fly edits, such as cropping out everything in the photo except the parts it recognizes and has been told to report — only the flowers, but never the stems, say.
The benefit of such a system is that it can discard all kinds of unnecessary or unwanted data before that data ever goes into the main device’s storage or processing pipeline. That means less processor power is used, for one thing, but it may also be safer and more secure.
Cameras in public places could preemptively blur faces or license plates. Smart home devices could recognize individuals without ever saving or sending any image data. Multiple exposures could be merged to form heat or frequency maps of the camera’s field of view.
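The privacy case above can be sketched in a few lines. This is not Sony's implementation: the tiny 2×2 "frame" and the `find_faces` stub are invented here purely to illustrate redacting sensitive regions before a frame ever leaves the sensor package.

```python
# Hypothetical sketch of on-sensor redaction: pixels flagged by an on-chip
# detector are zeroed out before the frame is transmitted anywhere.
# `find_faces` is a stand-in that pretends the top row is a face.

def find_faces(frame):
    """Placeholder detector: returns (row, col) coordinates to redact."""
    return [(0, 0), (0, 1)]

def redact(frame):
    """Blank out detected regions in place, then return the frame."""
    for r, c in find_faces(frame):
        frame[r][c] = 0
    return frame

frame = [[255, 200], [180, 90]]
print(redact(frame))  # the "face" row is zeroed before transmission
```

Because the redaction happens inside the sensor assembly, the host device, its storage, and the cloud only ever see the scrubbed frame.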
You might expect a higher power draw or latency from a chip with integrated AI processes, but companies like Xnor (recently acquired by Apple) have shown that such tasks can be performed very quickly and at very little power cost.
While more complex processing would still be the purview of larger, more powerful chips, this kind of first pass is able to produce a huge variety of valuable data and, properly designed, could prove to be more robust against attacks or abuse.
Right now Sony’s “Intelligent Vision Sensor” is still only a prototype, available to order for testing but not production. But as Sony is one of the leading image sensor providers in the world, this is likely to find its way into quite a few devices in one form or another.