Friday, November 20, 2020 – 7:00 AM to 8:00 AM
Shih-Chii Liu, Professor in the Faculty of Science at the University of Zurich
Recent progress in the development of high-performance, neuromorphic event-driven sensors such as the Dynamic Audio Sensor (DAS) and Dynamic Vision Sensor (DVS), along with versatile hardware platforms such as FPGAs, has made it possible to evaluate the power-latency tradeoffs of different deep network architectures and feature representations before ASIC chips are designed and fabricated. The outputs of event-driven sensors can enable always-on sensing, with lower system-level response latency and lower power than conventional sampled sensors, for the simple tasks typical of the Internet of Things. This talk describes brain-inspired event-driven deep neural network architectures that capitalize on spatial and temporal sparsity. We will also describe the impact of these bio-inspired architectures on the throughput and latency specifications of TinyML systems when combined with the DAS and DVS, and report system performance on various audio and vision tasks.
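The temporal sparsity that event-driven architectures exploit can be illustrated with a minimal delta-update sketch (an illustrative example, not the speaker's implementation): a layer recomputes its output only for input components whose change since the last event exceeds a threshold, so compute cost tracks how much the input actually changes rather than its size.

```python
import numpy as np

def delta_matvec(W, x, x_prev, state, threshold=0.05):
    """One event-driven (delta) update of y = W @ x.

    Only input components whose change since the last reference value
    exceeds `threshold` trigger computation; the rest are skipped.
    Returns the updated output, the new reference input, and the
    fraction of inputs that fired. The update is exact when the
    skipped deltas are zero, and approximate otherwise.
    """
    delta = x - x_prev
    active = np.abs(delta) > threshold           # which inputs changed enough
    # Incremental update touches only the active columns of W.
    y = state + W[:, active] @ delta[active]
    x_new = x_prev.copy()
    x_new[active] = x[active]                    # update reference only where fired
    return y, x_new, active.mean()

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x0 = rng.standard_normal(8)
y0 = W @ x0                                      # dense pass to initialize state

# A slowly varying input: only one component changes between frames.
x1 = x0.copy()
x1[2] += 0.5
y1, x_ref, frac = delta_matvec(W, x1, x0, y0)    # frac = 1/8: one input fired
```

Here only one of eight inputs crosses the threshold, so the update costs a single column of `W` yet reproduces the dense result exactly, which is the sparsity-for-latency tradeoff the talk examines at system scale.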