Artificial intelligence (AI) can analyze data like the brain processes speech

Recent research from the University of California, Berkeley, shows that artificial intelligence (AI) systems can analyze data in a way that closely resembles how the brain processes speech. The researchers say the result may help pry open the “black box” of how AI systems work.

Researchers at the Berkeley Speech and Computation Lab fitted volunteers with a system of electrodes on their heads and monitored their brain waves as they listened to the single syllable “bah”. They then compared that brain activity with the signals produced by an artificial intelligence system trained to learn English.

“The shapes are remarkably similar,” said Gasper Begus, an assistant professor of linguistics at UC Berkeley and lead author of a study published recently in the journal Scientific Reports. “That tells you that similar things are encoded, that the processing is similar.”

A side-by-side graph of the two signals shows the similarity strikingly. “There are no modifications in the data,” Begus added. “This is raw.”

AI systems have advanced by leaps and bounds in recent years. Since ChatGPT burst onto the world stage last year, such tools have been predicted to transform entire sectors of society and change the way millions of people work. But despite these impressive advances, scientists have only a limited understanding of how exactly the tools they’ve created work between input and output.


ChatGPT as a benchmark for measuring intelligence

Question-and-answer exchanges with ChatGPT have served as a benchmark for measuring the intelligence and bias of an AI system. But what happens between those steps has remained something of a black box. Knowing how and why these systems provide the information they do, and how they learn, is becoming essential as they become embedded in everyday life, in areas ranging from healthcare to education.

Begus and his co-authors, Alan Zhou of Johns Hopkins University and T. Christina Zhao of the University of Washington, are among a cadre of scientists trying to crack that box. To do so, Begus turned to his training in linguistics.

When we listen to spoken words, Begus said, the sound enters our ears and is converted into electrical signals. Those signals then travel through the brainstem and on to the outer parts of our brain. Using the electrode setup, the researchers traced this pathway in response to 3,000 repetitions of a single sound and found that the brain waves for speech closely followed the actual sounds of the language.
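As an illustration of the averaging step described above, here is a minimal sketch, with entirely synthetic data, of how a response to a repeated syllable can be recovered from noisy trials and compared against the stimulus waveform. The sampling rate, trial count, and toy “bah” signal are assumptions for the example, not the study’s actual recordings or pipeline.

```python
import numpy as np
from scipy.signal import correlate

# Hypothetical setup: 3,000 EEG epochs recorded while the same "bah"
# syllable plays, each epoch time-locked to stimulus onset.
fs = 1000                        # assumed sampling rate in Hz
n_trials, n_samples = 3000, 500
rng = np.random.default_rng(0)

# Stand-in data: a damped oscillation buried in noise on every trial.
t = np.arange(n_samples) / fs
stimulus = np.sin(2 * np.pi * 110 * t) * np.exp(-t * 8)  # toy "bah" waveform
epochs = stimulus + rng.normal(scale=5.0, size=(n_trials, n_samples))

# Averaging across trials cancels the noise and recovers the evoked response.
evoked = epochs.mean(axis=0)

# Cross-correlate the evoked response with the stimulus to see how
# closely the recovered brain signal tracks the sound itself.
xcorr = correlate(evoked - evoked.mean(), stimulus - stimulus.mean(), mode="full")
lag = xcorr.argmax() - (n_samples - 1)
print(f"peak correlation at lag {lag / fs * 1000:.1f} ms")
```

Averaging works here because the evoked response is time-locked to the stimulus while the background noise is not, so the noise shrinks roughly with the square root of the number of trials.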

The researchers then passed the same recording of the “bah” sound through an unsupervised neural network, an AI system that can interpret sound. Using a technique developed at the Berkeley Speech and Computation Lab, they measured the coincident waves and documented them as they occurred.
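The article identifies the model only as “an unsupervised neural network”; the sketch below uses a hypothetical stand-in convolutional encoder to show how raw intermediate activations, the machine-side “waves”, can be captured from a PyTorch model with a forward hook.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in: a small 1-D convolutional encoder over raw
# audio. This toy network only demonstrates how activations are captured;
# it is not the model used in the study.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=25, stride=4, padding=12),
    nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=25, stride=4, padding=12),
    nn.ReLU(),
)

captured = {}

def save_activation(name):
    def hook(module, inputs, output):
        # Store the layer's raw output so it can be inspected later.
        captured[name] = output.detach()
    return hook

# Register a forward hook on the first conv layer so its raw output
# (the trace to compare against EEG) is stored on every forward pass.
model[0].register_forward_hook(save_activation("conv1"))

# Fake one-second "bah" recording at an assumed 16 kHz sampling rate.
audio = torch.randn(1, 1, 16000)
with torch.no_grad():
    model(audio)

print(captured["conv1"].shape)  # e.g. torch.Size([1, 16, 4000])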

Previous research required further steps to compare waves from the brain and machines. Studying the waves in their raw form will help researchers understand and improve how these systems learn and increasingly reflect human cognition, Begus said.
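One simple way to compare two raw waves, offered purely as an illustration rather than the paper’s actual metric, is a Pearson-style similarity between the averaged brain response and a network activation trace of the same length:

```python
import numpy as np

def waveform_similarity(brain_wave: np.ndarray, model_wave: np.ndarray) -> float:
    """Pearson correlation between two raw signals of equal length.

    Both inputs are z-scored first, so amplitude and offset differences
    between EEG units and network activations drop out of the comparison.
    """
    b = (brain_wave - brain_wave.mean()) / brain_wave.std()
    m = (model_wave - model_wave.mean()) / model_wave.std()
    return float(np.dot(b, m) / len(b))

# Hypothetical usage: `evoked` is the averaged EEG response and
# `activation` is a same-length trace from one layer of the model.
# similarity = waveform_similarity(evoked, activation)
```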


Understanding how these signals compare to brain activity

“As a scientist, I’m really interested in the interpretability of these models,” Begus said. “They are so powerful. Everyone is talking about them. And everyone uses them. But much less is being done to understand them.”

Begus believes that what happens between input and output need not remain a black box. Understanding how these signals compare to the brain activity of human beings is an important benchmark in the race to build ever more powerful systems. So is knowing what’s going on under the hood.

For example, such an understanding could help build guardrails for increasingly powerful artificial intelligence models. It could also improve our understanding of how errors and biases are baked into learning processes.

Begus said he and his colleagues are working with other researchers using brain imaging techniques to measure how these signals might compare. They are also studying how other languages, such as Mandarin, are decoded differently in the brain and what this may indicate about knowledge.

Many models are trained on visual cues such as colors or written text – both of which have thousands of variations at a granular level. But language opens the door to a firmer understanding, Begus said.


