How We Recognize Words in Real Time

Summary: A new study has identified three different strategies people use to recognize words: “Wait and See”, “Sustained Activation”, and “Slow Activation”. These strategies were observed in both cochlear implant users and people with typical hearing, demonstrating how highly individualized word-recognition processes are.

The finding deepens our understanding of language processing and may improve treatments for hearing impairment. The results also suggest that differences in word recognition extend well beyond people with hearing difficulties.

Important Facts:

  1. Scientists identified three word-recognition strategies: “Wait and See”, “Sustained Activation”, and “Slow Activation”.
  2. These strategies were observed in cochlear implant users as well as people with normal hearing, highlighting how individualized word-recognition processes are.
  3. By better understanding how words are recognized, the research could lead to more effective hearing aids.

Source: University of Iowa

Researchers at the University of Iowa have mapped out how people recognize spoken words.

In a new study involving cochlear implant users, the researchers identified three main strategies that individuals with or without hearing loss use to recognize words, a crucial component of understanding spoken language.

Regardless of hearing ability, which approach a person takes is individual: some may wait a moment before identifying a word, while others may vacillate between two or more candidates before deciding which word they heard.

When a person hears a word, the brain briefly considers hundreds, if not thousands, of options and rules out most of them in less than a second. When someone hears “Hawkeyes”, for example, the brain might briefly consider “hot dogs”, “hawk”, “hockey”, and other similar-sounding words before settling on the target word.
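To make that competition concrete, here is a toy sketch (our illustration, not the study’s model) in which letter prefixes stand in for arriving sounds: as more of the word comes in, candidates that no longer match are ruled out.

```python
# Toy lexicon; in reality the brain weighs hundreds or thousands of candidates.
LEXICON = ["hawkeyes", "hawk", "hockey", "hot dogs", "hammer", "waiter"]

def candidates_over_time(heard: str):
    """Yield the words still consistent with the input after each new sound."""
    for i in range(1, len(heard) + 1):
        prefix = heard[:i]
        # Rule out any word whose beginning no longer matches what was heard.
        survivors = [w for w in LEXICON if w.startswith(prefix)]
        yield prefix, survivors

for prefix, survivors in candidates_over_time("hawkeyes"):
    print(f"heard {prefix!r:<12} -> still active: {survivors}")
```

Running this shows the candidate set shrinking from five “h…” words to just “hawkeyes” within a few sounds, a crude analogue of the sub-second narrowing the researchers describe.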

The findings in this study are significant because they could help hearing specialists identify word-recognition issues in early childhood or in older adults (who are more likely to lose hearing) and manage those issues more effectively, even though the brain operates quickly and differences in word-recognition strategies can be subtle.

According to Bob McMurray, F. Wendell Miller Professor in the Department of Psychological and Brain Sciences and the study’s corresponding author, “we found that people don’t all function the same way, even at the level of how they recognize a single word.”

“People seem to adopt their own unique approaches to the word recognition problem. There’s not one way to be a language user. That’s kind of wild when you think about it.”

McMurray has spent the past 30 years studying word recognition in both older and younger people. His research has revealed that people of all ages recognize spoken language differently. However, those differences were generally so subtle that they were difficult to categorize precisely.

So, McMurray and his research team turned to people who use cochlear implants, devices used by the profoundly deaf or severely hard of hearing that bypass the traditional pathways of hearing by delivering sound through electrodes.

“It’s like replacing millions of hair cells and thousands of frequencies with 22 electrodes. It just smears everything together. But it works, because the brain can adapt,” McMurray says.

The research team recruited 101 participants through the Iowa Cochlear Implant Clinical Research Center at University of Iowa Health Care Medical Center. While listening over loudspeakers, participants heard a word and then chose, from four computer-generated images, the one that corresponded to the word they had heard.

Eye-tracking technology allowed the researchers to follow each participant’s decision-making process, which unfolds in less than a second, as they heard each word and made their selection.
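The standard measure in this kind of eye-tracking task is the proportion of looks to the target image at each moment after the word begins. The sketch below illustrates that computation on simulated data; the sampling rate, trial count, and data layout are assumptions for illustration, not the study’s actual pipeline.

```python
import numpy as np

# Assumed setup: an eye tracker samples gaze at 250 Hz for 2 seconds per trial,
# recording whether the participant is fixating the target image.
SAMPLE_HZ = 250
TRIALS = 3
N_SAMPLES = 500  # 2 seconds at 250 Hz

rng = np.random.default_rng(0)
# Simulated fixations: early looks are scattered, later looks favor the target.
p_target = np.linspace(0.25, 0.9, N_SAMPLES)
fixations = rng.random((TRIALS, N_SAMPLES)) < p_target  # True = on target

# Fixation-proportion curve: at each time step, the share of trials in which
# the participant was looking at the target image.
target_curve = fixations.mean(axis=0)

time_ms = np.arange(N_SAMPLES) / SAMPLE_HZ * 1000
for t in (0, 250, 499):
    print(f"{time_ms[t]:6.0f} ms: proportion of looks to target = {target_curve[t]:.2f}")
```

Curves like this, computed per participant, are what distinguish a delayed (“Wait and See”) profile from a gradual or unresolved one.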

The researchers found that, even if they went about it differently, cochlear implant users relied on the same fundamental process to recognize spoken words.

The researchers identified three word-recognition dimensions:

  • Wait and See
  • Sustained Activation
  • Slow Activation

The researchers discovered that the majority of cochlear implant participants used Wait and See, delaying as long as a quarter of a second after hearing a word before committing to what they heard.

Previous research in McMurray’s lab has shown that children with early hearing loss have Wait and See tendencies, but this had not been observed more generally.

“Maybe it’s a way for them to avoid a lot of other word competitors in their heads,” McMurray says. “They can kind of slow down and keep it simple.”

The researchers also discovered that some cochlear implant users tended toward Sustained Activation, in which listeners vacillate between candidate words for a while before settling on what they believe they heard. Importantly, every listener appears to adopt a hybrid, blending the strategies to different degrees.

The dimensions match the patterns by which people without hearing impairment, from youth to older age, tend to recognize words, as shown in a previous study by McMurray’s team.

“Now that we’ve identified the dimensions with our cochlear implant population, we can look at people without hearing impairment, and we see that the exact same dimensions apply,” McMurray says. In other words, how cochlear implant users perceive words is also clarifying how people in general do.

The researchers now hope to use the findings to create strategies that might benefit those at the extreme ends of a particular word-recognition dimension. About 15% of adults in the United States have hearing loss, which can cascade into cognitive decline, fewer social interactions, and greater isolation.

“We aim to have a more refined way than simply asking them, ‘How well are you listening? Do you struggle to perceive speech in the real world?’” McMurray says.

The study, “Cochlear implant users reveal the underlying dimensions of real-time word recognition”, was published online Aug. 29 in the journal Nature Communications.

Contributing authors, all from Iowa, include Francis Smith, Marissa Huffman, Kristin Rooff, John Muegge, Charlotte Jeppsen, Ethan Kutlu, and Sarah Colby.

The research was funded by the National Institutes of Health and the U.S. National Science Foundation, which has been supporting the Iowa Cochlear Implant Clinical Research Center for 30 years.

About this neuroscience and language research news

Author: Richard Lewis
Source: University of Iowa
Contact: Richard Lewis – University of Iowa
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Cochlear implant users reveal the underlying dimensions of real-time word recognition” by Bob McMurray et al. Nature Communications


Abstract

Cochlear implant users reveal the underlying dimensions of real-time word recognition

Word recognition is a gateway to language, linking sound to meaning. Its cognitive mechanisms have been described as a competition between similar-sounding words, but prior work has not identified the dimensions along which this competition varies among people.

We sought these dimensions in a sample of cochlear implant users with varied backgrounds and audiological histories, as well as in a sample of people without hearing loss. We used the Visual World Paradigm to characterize the lexical competition process.

A principal component analysis reveals that the way people resolve lexical competition varies along three dimensions, consistent with earlier small-scale studies. These dimensions capture the degree to which lexical access is delayed (“Wait-and-See”), the degree to which competition fully resolves (“Sustained-Activation”), and the overall rate of activation.

Each dimension is predicted by different auditory skills and demographic factors (onset of deafness, age, cochlear implant experience). Moreover, each dimension predicts outcomes (speech perception in quiet and in noise, subjective listening success) over and above auditory fidelity. Higher degrees of Wait-and-See and Sustained-Activation predict poorer outcomes.

These results suggest that the mechanisms of word recognition vary along a few underlying dimensions, which contribute to the variation in performance among listeners facing challenging auditory input.
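As a rough illustration of the principal component analysis described above, the sketch below decomposes simulated per-participant fixation curves into a few dimensions of variation. The curve shapes, parameter names, and component count are assumptions for demonstration, not the authors’ pipeline.

```python
import numpy as np

# Simulate per-participant target-fixation curves (participants x time bins).
# Each curve is a logistic rise whose delay, slope, and ceiling vary across
# participants, loosely mirroring the three dimensions named in the abstract.
rng = np.random.default_rng(42)
n_participants, n_bins = 101, 100
t = np.linspace(0, 1, n_bins)

delay = rng.uniform(0.1, 0.4, n_participants)    # "Wait-and-See"-like shift
rate = rng.uniform(5, 15, n_participants)        # "Slow Activation"-like slope
ceiling = rng.uniform(0.7, 1.0, n_participants)  # "Sustained-Activation"-like asymptote
curves = ceiling[:, None] / (1 + np.exp(-rate[:, None] * (t - delay[:, None])))

# PCA via SVD on mean-centered curves: each principal component is a direction
# along which participants' recognition profiles differ.
centered = curves - curves.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)

print("variance explained by first 3 components:", np.round(explained[:3], 3))
scores = centered @ Vt[:3].T  # each participant's position on the 3 dimensions
print("participant 0 scores:", np.round(scores[0], 3))
```

In the study, per-participant scores like these are the quantities related to demographics and listening outcomes.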
