Title: Attention-Based Convolutional Neural Network Model and Its Combination with Few-Shot Learning for Audio Classification
Committee:
Dr. David Anderson, ECE, Chair, Advisor
Dr. Mark Davenport, ECE
Dr. Christopher Rozell, ECE
Dr. Eva Dyer, BME
Dr. Thomas Ploetz, IC
Abstract: Environmental sound and acoustic scene classification are crucial tasks in audio signal processing and audio pattern recognition. In recent years, deep learning methods such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their combinations have achieved great success in these tasks. However, numerous challenges remain in this domain. For example, in most cases the sound events of interest are present in only a portion of the audio clip, and the clip may also suffer from background noise. Furthermore, in many application scenarios the amount of labelled training data is very limited, and few-shot learning methods, especially prototypical networks, have achieved great success in such settings. However, metric learning methods such as prototypical networks often suffer from poor feature embeddings of support samples or from outliers, and may not perform well on noisy data. The proposed work therefore seeks to overcome these limitations by introducing a multi-channel temporal attention-based CNN model and then incorporating a hybrid attention module into the framework of prototypical networks. Additionally, a pi-model is integrated into our model to improve performance on noisy data, and a new time-frequency feature is explored. Various experiments have shown that our proposed framework is capable of dealing with the above-mentioned issues and provides promising results.
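
For context, the sketch below illustrates the standard prototypical-network episode that the abstract refers to: each class prototype is the mean of its support-clip embeddings, which is why a badly embedded or outlier support sample can skew the prototype. This is not the author's implementation; the shapes, the optional per-sample attention weights, and the function names are assumptions added for illustration only.

```python
# Minimal sketch of prototypical-network classification for audio clips.
# The optional "weights" argument stands in for a hypothetical attention
# module that could down-weight outlier support samples; it is an assumption,
# not the proposal's hybrid attention module.

import torch
import torch.nn.functional as F

def class_prototypes(support_emb, support_labels, n_classes, weights=None):
    """Compute one prototype per class as a (weighted) mean of support embeddings.

    support_emb:    (n_support, d) embeddings of the support clips
    support_labels: (n_support,)   integer class labels
    weights:        optional (n_support,) per-sample weights
    """
    if weights is None:
        weights = torch.ones(support_emb.size(0))
    protos = []
    for c in range(n_classes):
        mask = support_labels == c
        w = weights[mask].unsqueeze(1)                  # (n_c, 1)
        protos.append((w * support_emb[mask]).sum(0) / w.sum())
    return torch.stack(protos)                          # (n_classes, d)

def classify(query_emb, prototypes):
    """Assign each query to the nearest prototype (squared Euclidean distance)."""
    dists = torch.cdist(query_emb, prototypes) ** 2     # (n_query, n_classes)
    return F.softmax(-dists, dim=1)                     # class probabilities

# Toy 2-way, 3-shot episode with 64-dimensional embeddings.
torch.manual_seed(0)
emb = torch.randn(6, 64)
labels = torch.tensor([0, 0, 0, 1, 1, 1])
protos = class_prototypes(emb, labels, n_classes=2)
queries = torch.randn(4, 64)
print(classify(queries, protos).argmax(dim=1))
```

Because the unweighted prototype is a plain mean, a single outlier support clip shifts it directly; down-weighting such samples (here via the hypothetical `weights` argument) is one way an attention mechanism can make the prototype more robust.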