Event-driven low-compute bio-inspired processing for edge audio devices - by Prof. Shih-Chii Liu



Abstract: Edge audio devices have recently attracted considerable interest, especially for tasks such as keyword spotting. These tasks require deep networks with a footprint small enough that all blocks can be embedded in an ASIC. The conventional approach is to sample the microphone input at the Nyquist rate, or to oversample it, through an analog-to-digital converter (ADC) and then process the samples in a digital signal processing block. The stringent power constraints of ‘always-on’ operation in edge audio devices pose several design challenges, forcing ASIC designers to look for ways to reduce the standby power overhead. ‘Event-driven’ bio-inspired audio sensors, specifically spiking silicon cochleas, bypass the combined ADC and digital filtering stage. Instead, they use analog filters to extract continuous-time audio features and convert them into binary asynchronous event streams, but only in the presence of sounds that pass the filtering. Furthermore, their rectification stage restricts data conversion to the useful amplitude information, greatly reducing the necessary ADC sample rates. The events retain the asynchronous audio timing, which is available for tasks such as source localization. Taken together, their low-power, low-latency responses are ideal for spatialized audio edge devices where power and latency are critical. This talk presents the development of event-driven spiking cochleas and the deep neural network algorithms used for early edge audio tasks, including voice activity detection and keyword spotting. I will show examples of audio devices that combine this event-driven audio front end with low-compute neural networks to implement continuous small-vocabulary speech recognition and keyword spotting in low-power (nW–µW) ASICs.
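
The contrast drawn in the abstract between the conventional ADC-plus-DSP front end and an event-driven cochlea channel can be illustrated with a short sketch. The Python snippet below is a minimal, idealized software model, not the hardware or code described in the talk: the band edges, the integrate-and-fire threshold, and the event generator are illustrative assumptions standing in for the analog filter bank, rectifier, and asynchronous event encoder of a spiking silicon cochlea.

```python
"""Sketch of one event-driven cochlea channel vs. a conventional Nyquist-rate path.

All parameter values below (sample rate, pass band, firing threshold) are
illustrative assumptions, not the speaker's actual design.
"""
import numpy as np
from scipy.signal import butter, lfilter

FS = 16_000          # sample rate of the conventional ADC path (Hz)
BAND = (300, 3000)   # pass band of this cochlea channel (Hz), illustrative
THRESHOLD = 0.02     # integrate-and-fire threshold, illustrative


def conventional_path(audio):
    """Conventional front end: every sample is digitized, sound or silence."""
    return audio  # already a dense array of FS samples per second


def event_driven_channel(audio, fs=FS, band=BAND, threshold=THRESHOLD):
    """One cochlea channel: band-pass filter -> rectify -> integrate-and-fire.

    Returns the times (in seconds) of asynchronous events; no events are
    produced while the band-limited, rectified signal carries no energy.
    """
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = lfilter(b, a, audio)        # software stand-in for the analog filter
    rectified = np.maximum(filtered, 0.0)  # half-wave rectification

    events = []
    accumulator = 0.0
    for n, x in enumerate(rectified):      # integrate-and-fire event generation
        accumulator += x / fs
        if accumulator >= threshold:
            events.append(n / fs)          # asynchronous event time stamp (s)
            accumulator = 0.0
    return np.array(events)


if __name__ == "__main__":
    t = np.arange(0, 1.0, 1.0 / FS)
    # 0.5 s of silence followed by 0.5 s of a 1 kHz tone
    audio = np.where(t < 0.5, 0.0, 0.5 * np.sin(2 * np.pi * 1000 * t))
    dense = conventional_path(audio)
    events = event_driven_channel(audio)
    print(f"conventional path: {dense.size} samples regardless of content")
    print(f"event-driven channel: {events.size} events, all during the tone")
```

Running the sketch shows the point made in the abstract: the conventional path produces samples at a fixed rate whether or not there is sound, while the event-driven channel emits events only when signal energy passes its band-pass filter.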

Biography: Shih-Chii Liu is a professor in the Faculty of Science at the University of Zurich. She co-directs the Sensors group (https://sensors.ini.uzh.ch) at the Institute of Neuroinformatics, University of Zurich and ETH Zurich. Her research focuses on the design of low-power, low-latency asynchronous spiking sensors, bio-inspired computing circuits, and event-driven deep neural network accelerators, and on their use in neuromorphic artificial intelligence systems. Dr. Liu is a past Chair of the IEEE CAS Sensory Systems and Neural Systems and Applications Technical Committees. She is the current Chair of the IEEE Swiss CAS/ED Society and was general co-chair of the 2020 IEEE Artificial Intelligence for Circuits and Systems conference and of the IEEE ISSCC Student Research Preview Committee. In 2020, her audio group was awarded the Mahowald Prize for Neuromorphic Engineering (https://www.mahowaldprize.org/) for their work on “Hearing with Silicon Cochleas”.
Category: Audio