Computer and Systems Engineering Seminar Series: Software-Hardware Co-Design for Efficient Neural Network Acceleration
Tuesday, January 30, 2018
1:00 pm - 2:00 pm
Yu Wang, Associate Professor, Department of Electronic Engineering, Tsinghua University
We will first survey current work on deep learning acceleration, using the comparison chart we have compiled, and then present our solution: a complete design flow that achieves both fast deployment and high energy efficiency when accelerating neural networks on FPGAs [FPGA 16/17]. Deep compression and data quantization are employed to exploit redundancy in the algorithms and reduce both computational and memory complexity. Two architecture designs, one for CNNs and one for DNNs/RNNs, are proposed together with a compilation environment. Evaluated on Xilinx Zynq 7000 and Kintex UltraScale series FPGAs with real-world neural networks, the designs achieve up to 15x higher energy efficiency than mobile and desktop GPUs. We will then describe our effort to turn this research into products from DeePhi Tech's perspective: results on additional test cases, application domains and products in surveillance, data centers, and automobiles, and the design of a deep learning inference chip. Finally, we will briefly discuss trends in adopting emerging non-volatile memory (NVM) technology for efficient learning systems to further improve energy efficiency, and review what we have done over the past five years.
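The data quantization mentioned above can be illustrated with a minimal sketch. The function below is a generic uniform fixed-point quantizer (not the speaker's specific method): it picks a power-of-two scale so a layer's weights fit in a given bit width, which is the basic mechanism by which quantization cuts memory and compute cost (e.g., 8-bit integers use 4x less storage than float32). The function name and details are illustrative assumptions.

```python
import numpy as np


def quantize(weights, bits=8):
    """Uniform fixed-point quantization (illustrative sketch).

    Maps float weights to `bits`-bit signed integers sharing one
    power-of-two scale chosen so the largest weight still fits.
    """
    max_abs = np.max(np.abs(weights))
    # Fractional bit width: leave enough integer bits for max_abs.
    frac_bits = bits - 1 - int(np.ceil(np.log2(max_abs + 1e-12)))
    scale = 2.0 ** frac_bits
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    q = np.clip(np.round(weights * scale), lo, hi).astype(np.int32)
    return q, scale


# Example: quantize a random weight matrix to 8 bits and measure
# the worst-case reconstruction error after dequantization.
np.random.seed(0)
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize(w, bits=8)
w_hat = q / scale               # dequantized approximation
err = np.max(np.abs(w - w_hat))  # bounded by roughly 1/scale
```

In practice (and in the work described above) quantization is combined with retraining or fine-tuning so the network recovers accuracy lost to the reduced precision.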