FPGAs In Deep Learning Inference Today
Adrian Macias, Sr Manager, High Level Design Solutions, Intel
There have been many customer success stories regarding FPGA deployment for deep learning in recent years. The FPGA's unique architectural characteristics are particularly impactful for distributed, low-latency applications, and its local high-bandwidth on-chip memory can be used to optimize system-level metrics such as power and cost. We will survey some of these success stories and discuss the specific technical features, along with the advances in hardware and software stack abstractions, that highlight the FPGA's value for deep learning.