Evarist Fomenko, Software Engineer, Intel

In this section we present the new integer vector neural network instructions (VNNI) targeting future Intel(R) Xeon(R) Scalable processors. These instructions improve the throughput of multiply-add operations on int8 and int16 data types and enable performance gains in the low-precision convolution and matrix-matrix multiplication operations at the core of deep neural networks. We will also walk through the 8-bit integer convolution implementation in the Intel(R) MKL-DNN library to demonstrate how these new instructions are used in optimized code.
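To illustrate the kind of fused multiply-add these instructions provide, here is a minimal C sketch using the AVX-512 VNNI intrinsic `_mm512_dpbusd_epi32` (the VPDPBUSD instruction). The data values and the quantized activation/weight framing are hypothetical stand-ins, not taken from the MKL-DNN convolution implementation discussed in the video.

```c
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Hypothetical int8 data standing in for quantized
     * activations (unsigned) and weights (signed). */
    uint8_t act[64];
    int8_t  wei[64];
    for (int i = 0; i < 64; ++i) {
        act[i] = (uint8_t)(i % 4 + 1);
        wei[i] = (int8_t)(i % 3 - 1);
    }

    __m512i va  = _mm512_loadu_si512(act);
    __m512i vb  = _mm512_loadu_si512(wei);
    __m512i acc = _mm512_setzero_si512();

    /* VPDPBUSD: for each of the 16 output lanes j,
     *   acc[j] += act[4j]*wei[4j] + ... + act[4j+3]*wei[4j+3]
     * i.e. u8 x s8 multiplies accumulated into int32, fusing the
     * older VPMADDUBSW + VPMADDWD + VPADDD sequence into one
     * instruction. */
    acc = _mm512_dpbusd_epi32(acc, va, vb);

    int32_t out[16];
    _mm512_storeu_si512(out, acc);
    printf("lane 0 dot product = %d\n", out[0]);
    return 0;
}
```

The int16 counterpart, VPDPWSSD (`_mm512_dpwssd_epi32`), similarly accumulates pairs of s16 x s16 products into int32 lanes. Building the sketch requires a compiler flag such as `-mavx512vnni`, and running it requires VNNI-capable hardware.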