ELO-SPHERES Project


Description

Environment and Listener Optimised Speech Processing for Hearing Enhancement in Real Situations (ELO-SPHERES) was a project funded by the EPSRC at University College London and Imperial College London between September 2019 and March 2023.

Principal Investigators were Mark Huckvale, Patrick Naylor and Stuart Rosen. Research Fellows included Tim Green and Gaston Hilkhuysen at UCL, and Alastair Moore, Sina Hafezi, Xue Wang, Rebecca Vos and Pierre Guiraud at IC.

Lay Summary

Although modern hearing aids offer the potential to exploit advanced signal processing techniques, the experience and capabilities of hearing-impaired listeners are still unsatisfactory in many everyday listening situations. In part this is because hearing aids reduce or remove subtle differences between the signals received at the two ears. The normally-hearing auditory system uses such differences to determine the location of sound sources in the environment, to separate wanted from unwanted sounds, and to allow attention to be focused on a particular talker in a noisy, multi-talker environment.

True binaural hearing aids - in which the sound processing that takes place at the left and right ears is coordinated rather than independent - are just becoming available. However, practitioners' knowledge of how best to match their potential to the requirements of impaired listeners and listening situations is still very limited. One significant problem is that typical existing listening tests do not reflect the complexity of real-world listening situations, in which there may be many sound sources that move around, and in which listeners move their heads and also use visual information. A second important issue is that hearing-impaired listeners vary widely in their underlying spatial hearing abilities.

This project aims to better understand the problems of hearing-impaired listeners in noisy, multiple-talker conversations, particularly with regard to (i) their abilities to attend to and recognise speech coming from different directions while listening through binaural aids, and (ii) their use of audio-visual cues. We will develop new techniques for coordinated processing of the signals arriving at the two ears that will allow the locations and characteristics of different sound sources in complex environments to be identified, and the information presented to be tailored to the individual listener's pattern of hearing loss. We will build virtual reality simulations of complex listening environments and develop audio-visual tests to assess the abilities of listeners. We will investigate how the abilities of hearing-impaired listeners vary with their degree of impairment and the complexity of the environment.

This research project is a timely and focussed addition to knowledge and techniques to realise the potential of binaural hearing aids. Its outcomes will provide solutions to some key problems faced by hearing aid users in noisy, multiple-talker situations.

Publications

2020

2021

2022

2023

Tools and Datasets

HearVR - Virtual Reality Video database for audiovisual speech intelligibility assessment

British English matrix sentences were recorded in our anechoic chamber against a green screen, using a 360° camera and a high-quality condenser microphone. These sentences contain 5 slots with 10 word choices in each slot. Different sets of 200 sentences were recorded by 5 male and 5 female talkers sitting at a table. The individual talkers have been cropped from the videos, and the green screen and table replaced by a transparent background, to allow compositing of speakers into new 360° scenes. The individual matrix sentences have been extracted as 11 s videos together with level-normalised monophonic audio. We have developed scripts that build multi-talker videos from these elements in a format that can be played on VR headsets such as the Oculus Quest 2. We have also developed a Unity application for the headsets which plays the 360° composite videos and creates spatialised sound sources for the talkers, together with background noise delivered from multiple virtual loudspeakers.
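As an illustration only (not the project's actual scripts), the sketch below shows one way the level-normalised mono recordings could be mixed into a single multi-talker stimulus; the file names, directory layout and gains are hypothetical.

import soundfile as sf

def mix_talkers(wav_paths, gains_db, out_path):
    # Sum several level-normalised 11 s matrix-sentence recordings,
    # each scaled by a per-talker gain given in dB.
    mixed, sample_rate = None, None
    for path, gain_db in zip(wav_paths, gains_db):
        audio, rate = sf.read(path)            # mono, level-normalised
        audio = audio * 10 ** (gain_db / 20)   # convert dB gain to linear
        if mixed is None:
            mixed, sample_rate = audio, rate
        else:
            assert rate == sample_rate, "all recordings must share one sample rate"
            n = min(len(mixed), len(audio))
            mixed = mixed[:n] + audio[:n]
    sf.write(out_path, mixed, sample_rate)

# Hypothetical example: a target talker plus a competing talker 6 dB down.
mix_talkers(["talker01_sent037.wav", "talker06_sent114.wav"], [0.0, -6.0],
            "two_talker_mixture.wav")

In the project itself the corresponding videos are composited into a 360° scene and the audio is spatialised at playback by the Unity application; the mixing above only illustrates how the level-normalised elements can be combined.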

Materials used in "Measuring audio-visual speech intelligibility under dynamic listening conditions using virtual reality"

The materials in this record are the audio and video files, together with various configuration files, used by the "SEAT" software in the study described in Moore, Green, Brookes & Naylor (2022), "Measuring audio-visual speech intelligibility under dynamic listening conditions using virtual reality". They are shared in this form so that the experiment may be reproduced. For any other use, please contact the authors to obtain the original database(s) from which these materials are derived.

eBrIRD - ELOSPHERES binaural room impulse response database

The ELOSPHERES binaural room impulse response database (eBrIRD) is a resource for generating audio for binaural hearing-aid (HA) experiments. It allows the performance of new and existing audio processing algorithms to be tested in both ideal and real-life auditory scenes across different environments: an anechoic chamber, a restaurant, a kitchen and a car cabin. The database consists of a collection of binaural room impulse responses (BRIRs) measured with six microphones. Two microphones represent the listener's eardrums. Four microphones are located at the front and back of two behind-the-ear hearing aids placed over the listener's pinnae. The database allows simulation of head movement in the transverse plane.
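As a minimal sketch of how such a database is typically used (the file names and the assumption that each BRIR is stored as a six-channel WAV are ours, not necessarily eBrIRD's actual format), dry speech can be convolved with the measured impulse responses to simulate the eardrum and hearing-aid microphone signals:

import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

def render_through_brir(speech_path, brir_path, out_path):
    # Convolve a dry mono recording with each BRIR channel to simulate
    # the two eardrum and four behind-the-ear microphone signals.
    speech, fs_speech = sf.read(speech_path)    # dry mono speech
    brir, fs_brir = sf.read(brir_path)          # assumed shape: (samples, 6)
    assert fs_speech == fs_brir, "resample first if the sample rates differ"
    channels = [fftconvolve(speech, brir[:, ch]) for ch in range(brir.shape[1])]
    rendered = np.stack(channels, axis=1)
    rendered /= np.max(np.abs(rendered))        # normalise to avoid clipping
    sf.write(out_path, rendered, fs_speech)

# Hypothetical example: place a sentence in the restaurant environment.
render_through_brir("dry_sentence.wav", "restaurant_az030.wav",
                    "restaurant_scene.wav")

Head movement in the transverse plane can then be approximated by switching, for each block of the signal, to the BRIR measured at the corresponding head orientation.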