The World Health Organization estimates that over 5% of the world's population lives with disabling hearing loss, a figure expected to reach 10% by 2050 and already observed today in the most disadvantaged countries. Hearing problems, including age-related ones, cause communication difficulties that are often underestimated. These difficulties are genuine barriers to communication and obstacles to full social participation, comparable to the architectural barriers faced by people with motor disabilities. In face-to-face communication, both people with hearing loss and the hearing people who interact with them can encounter difficulties: the former often struggle to understand because of perceptual or linguistic deficits, sometimes aggravated by unsuitable environments; the latter may feel frustration or discomfort when understanding fails, with the risk of withdrawing from the conversation. In this context, assistive technologies that convert speech into text or facilitate listening have a crucial role to play, promoting more effective mutual understanding and offering support to all people with complex communication needs.
DETAILED DESCRIPTION OF PLANNED ACTIONS AND INTERVENTIONS
Activity 1: Selection and evaluation of applications for speech transcription
The main objective of this activity is to identify, through a testing phase with hearing people, the best available applications for transcribing speech into text and for communication support. The applications will undergo an evaluation process divided into two main phases:
1. Reading and transcription tests:
-The subjects will take part in reading sessions of pre-processed, coded texts containing sentences and words of different types and frequencies of use (high and low). The applications will be used to transcribe the content read aloud by different subjects, with male and female voices, including regional accents, in both silent and noisy environments.
-The performance of the apps will be compared against functional and non-functional criteria. Functional criteria include: transcription accuracy, processing speed, robustness at low signal-to-noise ratios, multilingual support, the option to save conversations in the cloud, and speaker recognition capability. Non-functional criteria include: an intuitive, user-friendly interface; simple, well-structured navigation; a short learning curve for new users; continuity of service in offline mode; minimal application loading time; updates without service interruptions; compatibility with different operating systems (Android, iOS); a demo mode; quick installation; and cost, including any free version with a basic mode. The weights of the individual evaluation items will be agreed with the client.
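As a sketch of how this comparison could be operationalized, transcription accuracy can be scored as word error rate (WER) and combined with the other criteria in a weighted sum. The criterion names and weights below are purely illustrative assumptions; as stated above, the actual weights will be agreed with the client.

```python
# Illustrative sketch: WER for transcription accuracy plus a weighted
# aggregate score over evaluation criteria. Criteria and weights are
# hypothetical examples, not the project's agreed values.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def weighted_score(scores: dict, weights: dict) -> float:
    """Normalized weighted sum of per-criterion scores in [0, 1]."""
    total = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total

# Example: accuracy expressed as 1 - WER; other scores assumed.
scores = {
    "accuracy": 1 - wer("the quick brown fox", "the quick brown box"),
    "speed": 0.8,
    "multilingual": 1.0,
}
weights = {"accuracy": 0.5, "speed": 0.3, "multilingual": 0.2}
print(round(weighted_score(scores, weights), 3))  # → 0.815
```

A per-app score of this kind makes the functional comparison reproducible across readers, voices, and acoustic conditions.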
2. Interaction phase and conversation tests:
-The subjects will use the applications to interact with an operator via the text displayed by the application and, where available, via integrated chatbots.
-Different environmental conditions (quiet and noisy) and distances from the microphone will be tested to evaluate the robustness of the applications.
-During some sessions, subjects will use noise-cancelling headphones to reduce the impact of external noise, simulating specific usage conditions.
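Controlled noisy conditions of the kind described above can be generated by mixing recorded noise into a clean speech signal at a prescribed signal-to-noise ratio. The sketch below is a minimal illustration of that idea; the signals, sample rate, and target SNR are assumptions, not values taken from the test plan.

```python
import math

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals
    `snr_db`, then mix it into `speech` sample by sample.
    Inputs are equal-length lists of float samples (illustrative)."""
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(n * n for n in noise) / len(noise)
    # Required noise power: p_speech / 10^(snr_db / 10)
    target_p = p_speech / (10 ** (snr_db / 10))
    gain = math.sqrt(target_p / p_noise)
    return [s + gain * n for s, n in zip(speech, noise)]

# Example: a 440 Hz tone standing in for speech (16 kHz, 0.1 s),
# plus deterministic pseudo-noise, mixed at 10 dB SNR.
speech = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(1600)]
noise = [((i * 2654435761) % 1000) / 500.0 - 1.0 for i in range(1600)]
noisy = mix_at_snr(speech, noise, snr_db=10)
```

Sweeping `snr_db` over a fixed grid would let each application be scored under identical, repeatable acoustic conditions.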
Activity 2: Experimentation with subjects with hearing problems
In this activity, the applications selected in the previous phase will be tested with a sample of subjects with hearing disabilities to evaluate their effectiveness and usability in experimental and real-world contexts. In particular, the following will be involved: CeDisMa collaborators with cochlear implants; students with hearing loss or deafness attending the inclusion services of the Università Cattolica (Milan, Brescia, and Piacenza campuses) and of the Politecnico di Milano; and deaf people taking part in events promoted by industry associations already collaborating with CeDisMa for other projects always on