Portability & Performance in Embedded Deep Learning: Can We Have Both?
Cormac Brick, Director of Machine Intelligence
"In recent years there has been much work on low-precision inference, showing that by training for quantization, large gains in energy efficiency can be achieved. On the other hand, embedded runtime packages such as TensorFlow Lite and Caffe2go have emerged that offer portability across a number of platforms. This talk looks at the challenge presented by this choice and asks: why can't we have both? Specifically, we will examine:
- How big this gap truly is, using state-of-the-art methods for both portable and specifically trained networks, showing performance across a range of popular vision applications.
- Best-in-class design techniques for developing portable networks that maximize performance across a variety of compute architectures and network architectures.
- Industry challenges and the progress needed to close the portability-performance gap."
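To make the low-precision idea concrete, here is a minimal sketch of symmetric per-tensor int8 quantization, the kind of weight compression the talk's efficiency gains build on. This is an illustrative NumPy example, not code from the talk; the function names `quantize_int8` and `dequantize` are hypothetical.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float values."""
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# int8 storage is 4x smaller than float32, and the per-element
# rounding error is bounded by scale / 2.
```

Trained-for-quantization networks go further by learning weights that tolerate this rounding, which is where the abstract's "large gains in energy efficiency" come from; a portable runtime must instead handle whatever precision each target platform supports.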