Title: Designing Responsive Environments to Support Speech Perception for Individuals with Mild Cognitive Impairment
Date: Monday, April 25th
Time: 1:00 pm - 3:00 pm EDT
Physical Location: CODA C1215 "Midtown" Room
Virtual Location: https://bluejeans.com/2730373843/
Committee:
Dr. Elizabeth Mynatt (co-advisor), School of Interactive Computing, Georgia Institute of Technology
Dr. Thomas Ploetz, School of Interactive Computing, Georgia Institute of Technology
Dr. Keith Edwards (co-advisor), School of Interactive Computing, Georgia Institute of Technology
Dr. Erica Ryherd, College of Engineering, University of Nebraska-Lincoln
Dr. Craig Zimring, College of Design, Georgia Institute of Technology
Abstract:
Individuals with mild cognitive impairment (MCI) face significant challenges perceiving the speech of others. Difficulty hearing and understanding what others say affects daily activities and can encourage social withdrawal and isolation. The factors that cause individuals with MCI to have greater difficulty perceiving speech are multi-faceted and include age-related hearing loss as well as changes to cognitive systems that affect how sound and speech are processed. While technologies exist to supplement the loss of physiological function and improve speech perception (e.g., hearing aids), no existing technological solution addresses the components of the built environment that influence the ability of individuals with MCI to hear and understand others. The physical infrastructure of the environment where a conversation takes place mediates the characteristics of the speech signal that reaches the listener. A well-designed environment can improve a listener's ability to perceive speech, while a poorly designed one can introduce acoustic properties, such as reflection, that create particularly adverse conditions for individuals with MCI.
There is not, however, a single set of desirable acoustic properties for all social situations or all physical environments. Because the acoustic needs of the physical environment are dynamic, the traditional static architectural approach of carefully designing an environment to support speech perception for a target population and set of activities is insufficient. Similarly, a wearable computing approach that does not modify the properties of the built environment fails to take full advantage of the opportunities available to improve the ability of individuals with MCI to hear and understand others.
My work combines the bodies of knowledge, methods, and research approaches of architectural acoustics and ubiquitous computing to prototype a responsive environment that physically reconfigures itself to create acoustic conditions supportive of speech perception for individuals with MCI. I use traditional qualitative user-centered research methods, including focus groups and surveys, as well as architectural acoustic measurements of a therapeutic facility, to develop a foundational understanding of the environmental factors that impact speech perception for individuals with MCI. Building on this work, I design and deploy a novel sensor for live measurement and reporting of relevant acoustic properties. I continue this mixed-methods approach by conducting an in-situ evaluation of the responsive acoustic environment in a real-world therapeutic facility with individuals with MCI, validating the effectiveness of embedded acoustic sensors in inducing changes to the built environment.