Fast CNN Inference in Resource-Constrained Environments
David Ojika, Consultant Engineer, DellEMC
While much research has focused on training convolutional neural networks (CNNs) on GPUs, comparatively little attention has been given to low-latency, real-time CNN inference in resource-constrained environments. This work presents the use of a specialized vision processing unit (VPU) from Intel to accelerate existing and emerging CNN architectures. We present comparative image-classification performance across multiple architectures. Participants will learn how to easily deploy CNN models with Python APIs and obtain fast inference performance through hardware acceleration, all on an efficient, cost-effective solution for embedded applications.