Tuesday, June 28, 2022

Scientists turn thoughts into persuasive, computer-generated speech






San Francisco – How can a computer translate thoughts into spoken words? A team of scientists at the University of California has discovered a promising new piece of the puzzle, producing remarkably convincing synthetic speech.

The scientists created a system that translates brain activity into words by focusing on the physical movements involved in speech rather than on the sounds of the words themselves. They found that decoding the specific movements of the tongue, throat, and other parts of the vocal tract allowed them to reproduce speech sounds more reliably than trying to map brainwaves directly onto the intended speech.

Using this information, the team created a computer program that simulates the movements of the vocal tract, driven by signals from the brain's speech centers.

Take a look at an example of this type of speech modeling. You can see the connection between the intended spoken words and how those words are shaped by different parts of the vocal tract.
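The two-stage idea described above can be sketched in code. This is only an illustrative toy, not the researchers' actual model: the paper used recurrent neural networks, while here simple matrix maps stand in for each stage, and all dimensions and data are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- placeholders, not values from the study.
N_NEURAL = 256     # recorded neural features per time step
N_ARTIC = 33       # articulatory kinematics (tongue, lips, jaw, larynx)
N_ACOUSTIC = 32    # acoustic features (e.g. spectrogram bins)

# Stage 1: neural activity -> vocal-tract movements.
# Stage 2: vocal-tract movements -> acoustic features.
W_artic = rng.normal(size=(N_NEURAL, N_ARTIC)) * 0.1
W_acoustic = rng.normal(size=(N_ARTIC, N_ACOUSTIC)) * 0.1

def decode(neural: np.ndarray) -> np.ndarray:
    """Two-stage decode: (T, N_NEURAL) -> (T, N_ACOUSTIC)."""
    # Intermediate representation: simulated articulator movements.
    articulation = np.tanh(neural @ W_artic)
    # Map the simulated movements to synthesized acoustics.
    return articulation @ W_acoustic

T = 100  # time steps
neural = rng.normal(size=(T, N_NEURAL))
acoustic = decode(neural)
print(acoustic.shape)  # (100, 32)
```

The key design choice the article describes is the intermediate articulatory layer: rather than learning a direct neural-to-sound mapping, the decoder first recovers the physical movements of speech, which proved a more reliable target.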

The team's findings were recently published in the journal Nature. The paper concludes that this new method could form the basis of life-changing technology for people with severe speech disorders, physical trauma, or other conditions that limit their ability to communicate.

"It has been a long-standing goal of our laboratory to create technologies to restore communication for patients with severe speech disabilities," Edward Chang, co-author of the project, said at a press briefing. "We want to create technologies that can replicate speech directly from human brain activity. This study provides proof of principle that this is possible."

This is not the only exciting takeaway from the research. According to Chang, the model of the mechanics of speech can actually transfer from person to person.

"The neural code for vocal movements is partially shared across different people, and an artificial vocal tract modeled on one person's voice can be adapted to synthesize speech from another person's brain activity," Chang explained. "This means that a speech decoder trained on one person with intact speech could perhaps one day act as a starting point for someone with a speech disability, who could then learn to control the simulated vocal tract through their own brain activity."
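Chang's transfer idea can also be sketched as a toy: the articulation-to-sound stage is kept fixed as a shared component, and only a person-specific neural-to-articulation map is re-fit for a new user. Everything here is a hypothetical illustration (least-squares in place of the actual training procedure, random data in place of recordings), not the study's method.

```python
import numpy as np

rng = np.random.default_rng(1)

N_NEURAL, N_ARTIC, N_ACOUSTIC = 256, 33, 32  # placeholder sizes

# Shared second stage: articulator movements -> acoustics,
# imagined as trained once using a fluent speaker's data.
W_shared = rng.normal(size=(N_ARTIC, N_ACOUSTIC)) * 0.1

def fit_neural_stage(neural: np.ndarray, target_artic: np.ndarray) -> np.ndarray:
    """Least-squares fit of a person-specific neural -> articulation map."""
    W, *_ = np.linalg.lstsq(neural, target_artic, rcond=None)
    return W

# A new user supplies only their own neural recordings; the first
# stage is re-fit while the shared second stage is reused unchanged.
neural_b = rng.normal(size=(500, N_NEURAL))
artic_b = rng.normal(size=(500, N_ARTIC))
W_b = fit_neural_stage(neural_b, artic_b)

acoustic_b = (neural_b @ W_b) @ W_shared
print(acoustic_b.shape)  # (500, 32)
```

The point of the sketch is the division of labor: the hard-won articulation-to-sound model is shared, so a new user only has to learn to drive the simulated vocal tract, not rebuild the whole pipeline.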

According to the UC scientists, communication technologies for people with speech and motor limitations are improving, but they can still be frustrating and inaccurate. If this latest breakthrough can eventually be applied at the individual patient level, it could open up a new world of understanding, and of being understood.
