David Ojika, Consultant Engineer, DellEMC

While a lot of research has focused on training convolutional neural networks (CNNs) on GPUs, comparatively little attention has been given to real-time, latency-sensitive CNN inference in resource-constrained environments. This work presents the use of a specialized vision processing unit (VPU) from Intel to help accelerate existing and emerging CNN architectures. We present comparative image-classification performance across multiple architectures. Participants will learn how to easily deploy CNN models with Python APIs and obtain fast inference through hardware acceleration, all on an efficient and cost-effective solution for embedded applications.
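As a rough illustration of the kind of Python-level deployment flow the session describes, the sketch below loads a converted CNN and runs it on an Intel VPU. It assumes the OpenVINO toolkit's Python inference API and a Myriad-class VPU ("MYRIAD" device name); the model files "classifier.xml"/"classifier.bin" are hypothetical placeholders for a network converted with OpenVINO's Model Optimizer, and the talk itself does not prescribe this exact toolkit or API.

    # Sketch: running a CNN classifier on an Intel VPU via OpenVINO's Python API.
    # Assumptions: OpenVINO inference engine installed, a Myriad-based VPU attached,
    # and a hypothetical IR model ("classifier.xml"/"classifier.bin").
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="classifier.xml", weights="classifier.bin")
    input_name = next(iter(net.input_info))
    output_name = next(iter(net.outputs))

    # Target the VPU; "MYRIAD" selects a Myriad vision processing unit.
    exec_net = ie.load_network(network=net, device_name="MYRIAD")

    # Dummy image matching the network's expected NCHW input shape.
    n, c, h, w = net.input_info[input_name].input_data.shape
    image = np.random.rand(n, c, h, w).astype(np.float32)

    result = exec_net.infer(inputs={input_name: image})
    probs = result[output_name].squeeze()
    print("Top-1 class:", int(np.argmax(probs)))

In this flow, the host CPU only prepares inputs and reads back class scores; the convolutional workload itself is offloaded to the VPU, which is what yields the low-latency, low-power inference profile targeted at embedded applications.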