Original article was published on Artificial Intelligence on Medium
Udacity Coupon | Intel® Edge AI for IoT Developers Course
Lead the development of cutting-edge Edge AI applications for the future of the Internet of Things. Leverage the Intel® Distribution of OpenVINO™ Toolkit to fast-track development of high-performance computer vision & deep learning inference applications.
Get Udacity Coupon 30% OFF For Intel® Edge AI for IoT Developers Course
Run pre-trained deep learning models for computer vision on-premise. You will identify key hardware specifications of various hardware types (CPU, VPU, FPGA, and Integrated GPU), and utilize the Intel® DevCloud for the Edge to test model performance on each of them. Finally, you will use software tools to optimize deep learning models and improve the performance of Edge AI systems.
Edge AI Fundamentals with OpenVINO™
Leverage a pre-trained model for computer vision inference. You will convert pre-trained models into the framework-agnostic intermediate representation with the Model Optimizer, and perform efficient inference on deep learning models through the hardware-agnostic Inference Engine. Finally, you will deploy an app on the edge, including sending information through MQTT, and analyze model performance and use cases.
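As a rough sketch of the conversion step described above, invoking the Model Optimizer on a TensorFlow frozen graph might look like the following (the install path and file names are hypothetical; the flags follow the `mo.py` CLI of the Intel® Distribution of OpenVINO™ Toolkit):

```shell
# Convert a TensorFlow frozen graph into OpenVINO IR (.xml + .bin).
# FP16 precision is commonly chosen for edge targets such as the VPU.
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model frozen_inference_graph.pb \
    --data_type FP16 \
    --output_dir ir_models/
```

The resulting `.xml` (topology) and `.bin` (weights) files are what the Inference Engine loads at runtime, regardless of the framework the model was trained in.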
Hardware for Computer Vision & Deep Learning Application Deployment
Grow your expertise in choosing the right hardware. Identify key hardware specifications of various hardware types (CPU, VPU, FPGA, and Integrated GPU). Utilize the Intel® DevCloud for the Edge to test model performance and deploy power-efficient deep neural network inference on the various hardware types. Finally, you will distribute workload across the available compute devices in order to improve model performance.
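One way to act on DevCloud benchmark results is to hand OpenVINO's MULTI device plugin a priority list of devices, fastest first. A minimal sketch, assuming hypothetical per-device latencies you measured yourself (`MYRIAD` is OpenVINO's plugin name for the VPU):

```python
# Sketch: build an OpenVINO MULTI device string from benchmark results,
# listing devices fastest-first so the plugin prefers them in that order.
def multi_device_string(latencies_ms):
    """Order devices by ascending average latency and join for MULTI."""
    ordered = sorted(latencies_ms, key=latencies_ms.get)
    return "MULTI:" + ",".join(ordered)

# Hypothetical measurements (ms per inference) from the DevCloud for the Edge.
measured = {"CPU": 42.1, "GPU": 18.7, "MYRIAD": 55.3}
print(multi_device_string(measured))  # → MULTI:GPU,CPU,MYRIAD
```

The resulting string can then be passed as the device name when loading a network, letting the runtime spread inference requests across the listed devices.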
Optimization Techniques and Tools for Computer Vision & Deep Learning Applications
Learn how to optimize your model and application code to reduce inference time when running your model at the edge. Use different software optimization techniques to improve the inference time of your model. Calculate how computationally expensive your model is. Use the DL Workbench to optimize your model and benchmark its performance. Use Intel® VTune™ Amplifier to find and fix hotspots in your application code. Finally, package your application code and data so that it can be easily deployed to multiple devices.
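To illustrate the "how computationally expensive is my model" step, here is a small sketch that estimates the FLOPs of a single 2D convolution layer from its shape, using the standard count of one multiply-accumulate per kernel element per output value (the layer dimensions in the example are illustrative, not from any particular model):

```python
# Estimate the FLOPs of a 2D convolution layer from its dimensions.
def conv2d_flops(h_out, w_out, c_in, c_out, k_h, k_w):
    """FLOPs of a conv layer: 2 FLOPs (multiply + add) per MAC."""
    macs = h_out * w_out * c_out * (k_h * k_w * c_in)
    return 2 * macs

# e.g. a 3x3 convolution, 64 -> 128 channels, 56x56 output feature map
print(conv2d_flops(56, 56, 64, 128, 3, 3))  # → 462422016 (~0.46 GFLOPs)
```

Summing this over every layer gives a rough cost for the whole network, which helps explain why techniques like lower-precision weights or smaller input resolutions shrink inference time.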
NEED TO PREPARE?
- For Python Experience: AI Programming with Python.
- For Deep Learning Experience: Deep Learning.
- For AI Modeling: Intro to Machine Learning with Pytorch or Intro to Machine Learning with TensorFlow.
- For Computer Vision Experience: Computer Vision.