Understanding New Vector Neural Network Instructions (VNNI)
Evarist Fomenko, Software Engineer, Intel
In this section we present the new integer vector neural network instructions (VNNI) targeting future Intel(R) Xeon(R) Scalable processors. These instructions improve the throughput of multiply-add operations on int8 and int16 data types and are used to achieve performance gains in the low-precision convolution and matrix-matrix multiplication operations found in deep neural networks. We will also walk through the 8-bit integer convolution implementation in the Intel(R) MKL-DNN library to demonstrate how these new instructions are used in optimized code.