Scientists Found the Neurons That Respond to Uptalk

Scientists have found groups of neurons that listen for changes in someone’s speaking tone—and turn it into meaning.

Too often, letters, words, and sentences get the credit for conveying information. But the human brain also makes meaning out of pitch. Like how upspeak turns any sentence into a question? Or how emphasizing the beginning of a sentence (“Tom and Leila bought a boat,” with the stress on the names) helps clarify that it was in fact Tom and Leila who bought the boat, not some other couple. If you emphasize the end of that sentence (the stress now on “a boat”), however, you’re just pointing out that your friends didn’t buy a car, dirt bike, or pony.

Pitch matters, and you’ve got the brain cells to prove it. A new study, published Thursday in Science, found groups of neurons that listen for changes in someone’s speaking tone. Some are tuned to shifts upward, others to shifts downward, and still others fire only when a sound rises and then falls in pitch. What’s more, these cells aren’t tuned to absolute pitch (they can’t tell an A sharp from a D flat); instead they listen for relative shifts, taking each voice on its own terms. That gives scientists a big boost in understanding how our brains turn sounds into meaning.
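
To make those direction-tuned detectors concrete, here is a toy sketch in Python (my own illustration, not anything from the study) that labels a pitch contour as rising, falling, or rise-then-fall, the three broad patterns described above.

```python
def contour_shape(pitches):
    """Crudely label a pitch contour as 'rising', 'falling', 'rise-fall', or 'other'."""
    peak = max(range(len(pitches)), key=pitches.__getitem__)
    rising_to_peak = all(a <= b for a, b in zip(pitches[:peak + 1], pitches[1:peak + 1]))
    falling_after = all(a >= b for a, b in zip(pitches[peak:], pitches[peak + 1:]))
    if rising_to_peak and peak == len(pitches) - 1:
        return "rising"
    if falling_after and peak == 0:
        return "falling"
    if rising_to_peak and falling_after:
        return "rise-fall"
    return "other"

print(contour_shape([100, 110, 125, 140]))       # rising, like upspeak
print(contour_shape([140, 125, 110, 100]))       # falling
print(contour_shape([100, 130, 150, 120, 105]))  # rise-fall
```

A real neuron does nothing so tidy, of course; the point is only that the direction of a pitch change is something a detector can respond to independently of the actual frequencies involved.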

“I think most people just take for granted how good humans are at making meaning out of sound,” says Edward Chang, a neurosurgeon at UC San Francisco and lead author of the new study. This makes sense—people communicated through sound for millennia before they started to scribble their thoughts down. And obviously, language and grammar matter. In previous research, Chang and some other co-authors showed that human brains had cells specialized to pick out the sounds of consonants and vowels. But vocalized communication contains nuances beyond the order that letters and words get strung together—for instance, the way humans modulate their voices up or down to emphasize a word or phrase. “These differences are all really important, because they change the meaning of the words without changing the words themselves,” says Chang. So he and his new co-authors reasoned that there might also be neurons tuned to intonation.

To find the answer, they needed direct access to the brain. Functional MRI, the famous (and occasionally maligned) method for mapping brain activity, is noninvasive and lets you look at the whole brain all at once, but its signal is much too slow. So they enlisted some helpful epileptic patients who had electrodes implanted under their skulls. These electrodes allow their doctors to pinpoint exactly where seizures originate, and to do so on the millisecond timescale. “In some cases we can cure epilepsy if we can identify precisely where the seizures are coming from,” says Chang. That millisecond resolution is a huge advantage if you’re looking at how auditory signals light up the brain.

Chang and his crew recruited 10 of these electrode-outfitted patients, who volunteered to listen to sentences repeated over and over again. The sentences, four in total, were simple: “Humans value genuine behavior”; “Movies demand minimal energy”; “Reindeer are a visual animal”; “Lawyers give a relevant opinion.” The researchers recorded each using three different voices (one male, two female) and four different intonation patterns. The first intonation was neutral (think Ferris Bueller’s econ teacher calling “Bueller … Bueller … Bueller …”). Then they spiced it up. The next intonation emphasized the first word of the sentence (“Humans value genuine behavior,” with the stress on “Humans”); another emphasized the third word (“genuine”). The last intonation was upspeak: a question?
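
For a sense of the scale of the stimulus set, here is a minimal sketch, in Python, of how those conditions multiply out; the labels are my own shorthand, not the study’s actual naming.

```python
from itertools import product

# Four sentences, three voices, and four intonation contours, as described above.
sentences = [
    "Humans value genuine behavior",
    "Movies demand minimal energy",
    "Reindeer are a visual animal",
    "Lawyers give a relevant opinion",
]
voices = ["male_1", "female_1", "female_2"]
intonations = ["neutral", "emphasis_word_1", "emphasis_word_3", "question"]

# Every combination is one recording the patients heard (repeatedly).
stimuli = list(product(sentences, voices, intonations))
print(len(stimuli))  # 4 * 3 * 4 = 48 distinct recordings
```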

And voila! When they ran the data, they clearly saw that the brain had specific sets of neurons tuned to pitch, distinct from those tuned to consonants and vowels. “So what it tells us is the ear and brain have taken a speech signal and deconstructed it into different elements, and processes them to derive different meanings,” says Chang. He says these multiple axes of meaning may have evolved because they make communication more efficient, with a single signal carrying many elements for interpretation. Not a stretch for animals as social as human beings.

That’s not even the coolest bit. These pitch-tuned neurons are actually discerning intonation on the fly. Somehow, the cells establish a baseline pitch for the incoming speech and process the ups and downs from there. To musicians, this probably isn’t surprising. It’s sort of like shifting a melody up or down a key: the melody is still recognizable. Of course, human brains also have neurons tuned to absolute pitch. This probably helps with things like identifying individual voices in a crowded, noisy space. “I think people take for granted how good humans are at doing stuff like holding conversations in a busy bar where there’s all these competing sounds,” says Chang.
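
As a rough illustration of what relative (rather than absolute) pitch tracking can look like computationally, here is a small sketch, my own toy example rather than the paper’s analysis, that converts two speakers’ pitch contours from hertz into semitones relative to each speaker’s own average pitch. The normalized contours come out the same even though the absolute frequencies sit an octave apart.

```python
import numpy as np

def relative_contour(f0_hz):
    """Express a pitch contour in semitones relative to the speaker's own baseline."""
    f0 = np.asarray(f0_hz, dtype=float)
    baseline = np.exp(np.mean(np.log(f0)))   # geometric mean pitch as the baseline
    return 12 * np.log2(f0 / baseline)       # semitones above or below that baseline

# The same rising "question" contour from a lower- and a higher-pitched voice.
low_voice  = [110, 112, 115, 120, 130]   # Hz
high_voice = [220, 224, 230, 240, 260]   # Hz, exactly an octave up

print(np.round(relative_contour(low_voice), 2))
print(np.round(relative_contour(high_voice), 2))  # identical shape in semitones
```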

Next, Chang and his crew will turn their investigation on its head: he wants to understand how the brain controls intonation. That means not just recording from electrodes in the brain, but also looking at the muscles that control the vocal folds and larynx. “The one limitation is we can’t easily see how things like the lips, jaw, and tongue move in coordination with the vocal folds and larynx to produce sound,” says Chang. No matter how loud and clear the speech, it won’t make any sense without brains.