Xnor.ai Binarization with OpenVINO™



Xnor demonstrates the recipe for accurate and efficient deep learning inference live at the Intel® booth at the Embedded Vision Summit

Xnor.ai demo running live at the Intel® booth at the Embedded Vision Summit, 2019

The two ingredients for accurate and efficient deep learning are a great model and a great runtime environment for inference. At Xnor.ai, we wanted to know what could be achieved by bringing our accurate binarized models together with OpenVINO™ on Intel® processors. Here are the results:

For the comparison baseline, we started with a 32-bit floating-point ResNet-50 image classifier trained on ImageNet with an input resolution of 224×224, running in TensorFlow on a single Intel® Core™ i5-6500TE processor. The next step was to run the same model in OpenVINO™ instead of TensorFlow, still at full 32-bit floating-point precision. The final step was to run Xnor.ai’s binarized model in a new version of OpenVINO™ that implements binary convolution. The combined solution is nearly three times faster than the baseline.
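Why does binary convolution help? In a binarized layer, weights and activations are constrained to {+1, −1}, so each 32-bit multiply-accumulate collapses into an XNOR plus a popcount. The NumPy sketch below illustrates the arithmetic only; the sign-based binarization shown is one common scheme, and this is neither Xnor.ai’s model nor OpenVINO™’s kernel.

```python
import numpy as np

def binarize(x):
    """Map real values to {+1, -1} by sign (one common binarization scheme)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_conv2d(inputs, weights):
    """Naive single-channel 2-D convolution over {+1, -1} values.

    For +/-1 operands, XNOR is just elementwise multiplication, and the
    accumulation reduces to a popcount; a real kernel packs the signs
    into machine words to exploit that, which this sketch does not.
    """
    H, W = inputs.shape
    kH, kW = weights.shape
    out = np.zeros((H - kH + 1, W - kW + 1), dtype=np.int32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = inputs[i:i + kH, j:j + kW]
            out[i, j] = int(np.sum(patch * weights))  # XNOR + popcount, in spirit
    return out

x = binarize(np.random.randn(8, 8))
w = binarize(np.random.randn(3, 3))
print(binary_conv2d(x, w))
```

In a production kernel the ±1 values are packed into 64-bit words, so a single XNOR instruction multiplies 64 pairs at once and a hardware popcount does the accumulation; that bit-level parallelism is the source of the speedup over FP32 convolution.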

Xnor.ai and Intel unlock the potential for efficient deep learning inference at the edge, reducing latency, reducing costs, and protecting data.

With these advances, Xnor’s binarized person and vehicle detector for video analytics applications can monitor more than 40 simultaneous video streams, each at 30 frames per second, in OpenVINO™ on a single Intel® Core™ i5 processor with no GPU or other hardware acceleration.
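For perspective, 40 streams at 30 frames per second is 40 × 30 = 1,200 inferences per second, a per-frame budget of under a millisecond on average. Keeping a CPU busy at that rate generally means having several inference requests in flight at once. The sketch below shows that pattern with the 2019-era OpenVINO™ Inference Engine Python API; the IR file names (detector.xml / detector.bin) are placeholders, and this illustrates the API pattern rather than Xnor.ai’s actual pipeline.

```python
import numpy as np
from openvino.inference_engine import IECore  # 2019-era Inference Engine API

ie = IECore()
# Placeholder IR files, as produced by OpenVINO's Model Optimizer.
net = ie.read_network(model="detector.xml", weights="detector.bin")
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))

# One inference request per in-flight stream keeps the CPU cores busy.
NUM_STREAMS = 4
exec_net = ie.load_network(network=net, device_name="CPU",
                           num_requests=NUM_STREAMS)

# Stand-ins for preprocessed frames pulled from NUM_STREAMS video feeds.
frames = [np.random.rand(*net.inputs[input_blob].shape).astype(np.float32)
          for _ in range(NUM_STREAMS)]

# Launch all requests asynchronously, then collect the results.
for i, frame in enumerate(frames):
    exec_net.requests[i].async_infer({input_blob: frame})
for i in range(NUM_STREAMS):
    exec_net.requests[i].wait()  # block until request i completes
    detections = exec_net.requests[i].outputs[out_blob]
    print(f"stream {i}: output shape {detections.shape}")
```

In a real deployment the requests would be refilled continuously (for example, via completion callbacks) rather than launched in one batch, but an async request pool is the standard way to overlap work from many streams on one processor.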

Learn more

To learn more about Xnor’s solutions, please visit https://www.xnor.ai or contact sales@xnor.ai.

Intel, the Intel logo, Intel Core, and OpenVINO are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.