Monday, July 1, 2002, Ip No. 18

Static: the new hearing aid
By Patrick Di Justo

For the deaf to hear more clearly, it may be necessary to turn up the static.

This strategy is counterintuitive, according to Dr. Jay Rubinstein, associate professor of otology at the University of Iowa.

Speaking at the conference of the American Society for Artificial Internal Organs, Rubinstein described a way to insert non-informational random noise into the audio signal of cochlear implants. The implants are electronic devices inserted into the inner ear of deaf people to stimulate the auditory nerve.

This noise, instead of degrading the signal, actually increases the perceived dynamic range, allowing the deaf to hear softer sounds.
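The effect described here is known as stochastic resonance: a weak signal that never reaches a detector's firing threshold on its own can be detected once low-level random noise is added, because the noise occasionally pushes the signal's peaks over the threshold. A minimal sketch of the idea (an illustrative toy model, not Rubinstein's actual algorithm; all names and parameter values are invented for the example):

```python
import math
import random

random.seed(42)

THRESHOLD = 1.0      # the detector "fires" when its input exceeds this level
N_SAMPLES = 10_000

def weak_tone(t):
    """A subthreshold sine wave: its peak amplitude (0.8) never reaches 1.0."""
    return 0.8 * math.sin(2 * math.pi * t / 100)

def count_firings(noise_level):
    """Count threshold crossings for the weak tone plus Gaussian noise."""
    firings = 0
    for t in range(N_SAMPLES):
        if weak_tone(t) + random.gauss(0, noise_level) > THRESHOLD:
            firings += 1
    return firings

silent = count_firings(0.0)   # no noise: the tone is never detected
noisy = count_firings(0.3)    # moderate noise: peaks now cross the threshold

print(silent)   # 0
print(noisy)    # a positive count, clustered near the tone's peaks
```

Because the noise-induced crossings happen mostly near the tone's peaks, the firing pattern still tracks the signal; the noise effectively lowers the detection threshold rather than drowning the signal out.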

In a typical cochlear implant, an external microphone, looking much like a standard hearing aid, picks up sound waves and converts them into patterns the brain can understand.

It then broadcasts these patterns through the skull via short-range RF to the implant, which triggers the auditory nerve to produce the sensation of hearing.

Currently, cochlear implant users can understand spoken words in a quiet setting, but find it difficult to follow conversations in noisy environments. Close to 70,000 people worldwide, some less than a year old, have been fitted with the devices.

The original conversion algorithms for cochlear implants, developed in the 1970s, assumed that the auditory nerve always fired the same way in response to the same sound. Early implants stimulated the neurons to produce standardized, and therefore synchronized, responses to external sounds.

Since they were conveying the same information at the same time, at least half of the neurons' signals were redundant, creating a perceived sound with a narrow frequency bandwidth, a narrow dynamic range and a lack of timbre.

In a hearing person, the auditory neurons are not synchronized with each other.

"Even in a quiet room," Rubinstein explains, "the auditory nerve [of a hearing person] is still firing at random, which explains why a quiet room is never totally quiet." This low-level noise, created by the ear itself, keeps the auditory neurons out of sync and prevents their signals from interfering with each other.
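The desynchronizing role of this internal noise can be sketched with a toy model (a deliberately simplified illustration, not the implant's actual signal processing; every name and value below is invented): identical threshold "neurons" driven by the same stimulus all fire on the same sample, but giving each neuron its own private noise spreads the firing times out.

```python
import random

random.seed(0)

N_NEURONS = 50
THRESHOLD = 1.0
stimulus = [0.02 * t for t in range(100)]   # a slow ramp from 0.0 up to 1.98

def first_firing_times(noise_level):
    """For each neuron, the sample index at which it first crosses threshold."""
    times = []
    for _ in range(N_NEURONS):
        for t, level in enumerate(stimulus):
            if level + random.gauss(0, noise_level) > THRESHOLD:
                times.append(t)
                break
    return times

sync_times = first_firing_times(0.0)    # no noise: lockstep firing
desync_times = first_firing_times(0.2)  # private noise: staggered firing

print(len(set(sync_times)))    # 1 -> every neuron fires on the same sample
print(len(set(desync_times)))  # many distinct firing times
```

With zero noise the whole population fires redundantly at one instant, mirroring the early implants described above; with independent noise the population's firing times are staggered, so collectively the neurons sample the stimulus more finely.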

Beginning in 1984 with a handful of Fortran and Matlab DSP programs and occasional access to a Cray supercomputer, Rubinstein went to work making the neural impulses produced by a cochlear implant more like those produced by a working ear.

Working now with a cluster of five Macintosh G4s ("Which together are faster than the Cray," he laughs), still programming in Fortran, he has found a way to adequately mimic the stochastic firing of a normal auditory nerve.

Last year Rubinstein reprogrammed the speech conversion processor in an Advanced Bionics Clarion cochlear implant to add the correct random factor to each audio signal.

This increased noise makes the neural pattern more natural, and results in a lower sound threshold, allowing patients to detect subtler sounds. Human testing of the new software began in June 2001 with 30 patients.

Rubinstein hopes to improve the system to the point where deaf patients can hear and enjoy music. "Currently," he says, "people with cochlear implants can't tell the difference between a guitar and a piano playing the same note."

Rubinstein hopes to have FDA approval for his new system in the next three to six months.


  June 24, 2002. Wired Magazine.