
The Attention Machine

A new brain-scanning technique could change the way scientists think about human focus.

Human attention is never stable, and it costs us: lives lost when drivers space out, billions of dollars wasted on inefficient work, and mental disorders that hijack focus. Much of the time, people don’t realize they’ve stopped paying attention until it’s too late. This “flight of the mind,” as Virginia Woolf called it, is often beyond conscious control.

So researchers at Princeton set out to build a tool that could show people what their brains are doing in real time, and signal the moments when their minds begin to wander. And they've largely succeeded, a paper published today in the journal Nature Neuroscience reports. The scientists who invented this attention machine, led by professor Nick Turk-Browne, are calling it a “mind booster.” It could, they say, change the way we think about paying attention—and even introduce new ways of treating illnesses like depression.

Here’s how the brain decoder works: You lie down in a functional magnetic resonance imaging (fMRI) machine, similar to the MRI machines used to diagnose diseases, which lets scientists track brain activity. Once you’re in the scanner, you watch a series of pictures and press a button when you see certain targets. The task is like a video game: the dullest video game in the world, really, which is the point. You see a face overlaid atop an image of a landscape. Your job is to press a button if the face is female, as it is 90 percent of the time, but not if it’s male, and to ignore the landscape. (There’s also a reverse task, in which you’re asked to judge whether the scene is outdoors or indoors, and to ignore the faces.)
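The stimulus itself is easy to picture in code: each frame is just a weighted blend of a face image and a scene image, with the weight controlling which layer dominates. Here is a toy sketch in Python; the random arrays, image size, and even starting blend are illustrative assumptions, not details from the study:

```python
import numpy as np

# Random arrays stand in for real grayscale images of a face and a scene.
rng = np.random.default_rng(0)
face = rng.random((128, 128))
scene = rng.random((128, 128))

def composite(face, scene, alpha):
    """Blend the face over the scene; alpha sets the face's visibility."""
    return alpha * face + (1 - alpha) * scene

stimulus = composite(face, scene, alpha=0.5)  # an even mix of the two layers
```

Lowering alpha makes the face fade into the landscape, which is exactly the knob the feedback system turns.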


To gauge attention from the brain, the researchers used a learning algorithm like the one Facebook uses to recognize friends’ photos. The algorithm can discern “Your Brain On Faces” versus “Your Brain On Scenes.” Whenever you start spacing out, it detects more “scene” than “face” in your brain signal, and tells the program to make the faces you are watching grow dimmer. In turn, you have to focus harder to figure out what you’re seeing, and to succeed at the “game.” In the Princeton face-scene game, college students made errors 30 percent of the time.

If this were a test, they would have gotten a D.
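At its core, the system is a closed loop: classify the newest brain volume, then use the classifier’s confidence to set the face’s visibility for the next moment. Below is a minimal sketch of that loop, assuming a simple logistic-regression decoder and synthetic data in place of real fMRI volumes; the names, dimensions, and classifier choice are illustrative, not the Princeton team’s actual code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 1000  # illustrative; real scans contain tens of thousands

# Synthetic training data standing in for labeled fMRI volumes:
# label 1 = "attending to faces," label 0 = "attending to scenes."
X_train = rng.normal(size=(200, n_voxels))
y_train = rng.integers(0, 2, size=200)
X_train[y_train == 1, :50] += 0.5  # give "face" volumes a faint signature

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def feedback_step(brain_volume):
    """Map one incoming brain volume to a face-layer opacity.

    The more "scene-like" the neural pattern, the dimmer the face
    is drawn, forcing the subject to focus harder on it.
    """
    p_face = decoder.predict_proba(brain_volume.reshape(1, -1))[0, 1]
    return p_face  # use as alpha when compositing the face over the scene

# One simulated volume arriving from the scanner in real time:
print(f"face opacity: {feedback_step(rng.normal(size=n_voxels)):.2f}")
```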


“Internal states are kind of ineffable,” says Turk-Browne, an associate professor of psychology at the Princeton Neuroscience Institute. “You may not know when you're in a good or bad state. We wanted to see: If we give people feedback before they make mistakes, can they learn to be more sensitive to their own internal states?”

It turns out they can, Turk-Browne says. The key was a control condition: For some subjects, the pictures were driven not by their own brains but by someone else’s, meaningless jitter as far as their own attention was concerned. Of the 16 subjects who got their own brain feedback, 11 said they felt they were making the pictures clearer by focusing, as opposed to four of the 16 who watched the placebo feedback. And only the people whose own brains drove the images’ dimming improved their ability to focus. Paying attention, in other words, is like learning basketball or French: Good old-fashioned practice matters.

“I think what's exciting about this finding,” explains Turk-Browne, “is the idea that certain aspects of cognition like attention are only partly consciously accessible. So, if we can directly access people's mental states with real time fMRI, we can give them more information than they could get from their own mind.”

* * *

Neuroscientists have been reading brain patterns with computer programs like this for just over a decade. Machine-learning algorithms, like the ones Google and Facebook use to recognize images online, can crack the brain’s code, too; a decoder is essentially software for reading brain scans. Given samples of neural patterns (your brain imagining faces, say, versus your brain picturing places), a decoder is trained to tell whether you are remembering a face (Jennifer Aniston, President Obama) or a location (the Hollywood sign, the White House). A prior study from the memory lab of professor Ken Norman, a co-developer of the attention tool, read out these categories from people’s brains as they freely recalled pictures they had studied earlier. Similar work has “decoded” what people see, attend to, learn, remember falsely, and dream. What’s new and remarkable now is how fast neural decoding is happening. Machines today can harness brain activity to drive what a person sees in real time.

“The idea that we could tell anything about a person's thoughts from a single brain snapshot was such a rush,” Norman recalls of the early days, over a decade ago. “Certainly the kinds of decoding we are doing now can be done much faster.”

Here is how Princeton’s current scanner sees a human brain: First, it divides a brain image into around 40,000 cubes, called voxels, or 3-D pixels. This basic unit of fMRI is a 3-by-3-by-3-millimeter cube of brain. The neural pattern behind any mental state, from how you feel when you smell your wife’s perfume to suicidal despair, is expressed as a pattern of activity across those voxels. The same neural code for, say, Scarlett Johansson will represent her in your memory, or as you talk to her on the phone, or in your dreams. The decoding approach, pioneered in 2001 by the neuroscientist James Haxby and colleagues, is known technically as “multi-voxel pattern analysis,” or MVPA. This “decoding” is distinct from the more common, less sophisticated form of fMRI analysis that gets a lot of attention in the media, the kind that shows which parts of the brain “light up” when a person does a task, relative to a control.
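For readers who think in arrays: a single fMRI volume is a 3-D grid of numbers, and MVPA flattens the in-brain voxels into one long vector per time point, which is what the classifier actually sees. A toy sketch, with dimensions and mask chosen for illustration rather than taken from the scanner:

```python
import numpy as np

# One fMRI volume: a 3-D grid of voxel intensities (dimensions illustrative).
volume = np.random.rand(64, 64, 36)

# A brain mask keeps only voxels inside the head, leaving a count on the
# order of the ~40,000 described above; this box is a crude stand-in.
mask = np.zeros(volume.shape, dtype=bool)
mask[16:48, 16:48, 8:28] = True

pattern = volume[mask]  # 1-D vector: one number per in-brain voxel
print(pattern.shape)    # (20480,) in this toy example

# MVPA treats each such vector as a point in a high-dimensional space;
# a classifier learns which regions of that space mean "face" vs. "scene."
```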

“Though fMRI is not very cheap to use, there may be a certain advantage of neurofeedback training, compared to pure behavioral training,” suggests Kazuhisa Shibata, an assistant professor at Brown University, “if this work is shown to generalize to other tasks or domains.”

That is a big if. One caveat to the neurofeedback trend is that many “brain-training” tasks, including popular commercial games like Lumosity, which promise to improve brain function, are roundly criticized by neuroscientists: People trained on them often improve only at the games themselves. They don’t actually get better at paying attention, remembering things, or controlling mood more generally. As the Johns Hopkins neuroscientist and memory expert David Linden points out in his recent book, Touch, physical exercise is one of the few interventions reliably shown to improve general cognition, far better than most “brain games.” So neurofeedback has a high bar to clear. That said, Shibata’s work on vision, one of the few other successful examples of real-time fMRI, showed that visual learning can be driven by brain feedback.

Other experts note the Princeton team’s technical advance, but with some skepticism. “The setup for monitoring attentional states is impressive,” says Yukiyasu Kamitani, a pioneer of neural decoding at ATR Computational Neuroscience Labs and professor at Nara Institute of Science and Technology, “although the behavioral effects of neurofeedback they found are marginal.”

Let’s not get carried away just yet, in other words. But as the neurofeedback technique improves, it is likely to become more widely used. And if it proves effective, the ability to link brain patterns directly to behavior would be unprecedented in human neuroscience.

Neurofeedback training could work the brain almost as muscles are worked in physical therapy, as Shibata and his Kyoto colleagues demonstrated in a 2011 Science study. The process, which the authors called “inception” in homage to the 2010 film about dreams implanted in people’s brains, made a big splash when it came out. The only instruction inside the scanner? “Somehow regulate activity in the posterior part of your brain to make the solid green disc… as large as possible.” Without conscious knowledge of what they were learning, subjects managed to make the green disc grow. Shibata had trained a decoder much like the one in the faces-versus-scenes experiment, only on three different orientations of line gratings. Then, while people watched the green disc, he “rewarded” them by making the disc grow whenever their brains responded to one of the three patterns of lines. In turn, they became better at seeing the pattern they had unknowingly associated with the growing disc.
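In code, the “inception” loop looks much like the attention loop sketched earlier, except that the decoder’s confidence drives the size of the reward disc rather than the clarity of a face. A rough sketch, again with synthetic stand-in data and illustrative names:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_voxels = 500  # illustrative

# Train a decoder on visual-cortex patterns evoked by three gratings.
# Labels 0, 1, 2 stand for the three orientations; the data are synthetic.
X = rng.normal(size=(300, n_voxels))
y = rng.integers(0, 3, size=300)
X[y == 2, :40] += 0.5  # faint signature for the rewarded orientation

decoder = LogisticRegression(max_iter=1000).fit(X, y)

TARGET = 2  # the orientation chosen for reward, unknown to the subject

def disc_radius(brain_volume, max_radius=100.0):
    """The more the current pattern resembles the target grating,
    the larger the reward disc is drawn on screen."""
    p_target = decoder.predict_proba(brain_volume.reshape(1, -1))[0, TARGET]
    return p_target * max_radius

print(f"disc radius: {disc_radius(rng.normal(size=n_voxels)):.1f}")
```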

As Turk-Browne points out, this sort of learning is often unconscious. Which is why some scientists believe tools like the attention machine at Princeton may soon help not only to better understand when the brain goes wrong, but even to treat mental illness.

If you’ve ever known someone with ADHD or depression, you know how these disorders affect attention, holding the senses hostage and focusing them relentlessly on gloomy perspectives. Depression is especially pernicious: My boss frowned at me; my girlfriend dissed my cooking; nobody is talking to me at this party, I’m so boring. The Princeton group, in collaboration with the University of Texas at Austin, hopes to leverage its mental prosthetic to curb this negative attention bias. Instead of noticing the (perhaps imagined) frown on someone’s face, the tool might train depressed brains to focus on what the person is actually saying.

“Why do some people recover from sad mood, while others stay stuck for months or years?” asks Chris Beevers, a professor of psychology at the University of Texas at Austin and one of the co-authors of this pilot work. What interests him and his colleagues about the attention tool is its potential to “target mechanisms that maintain sad mood, and reverse them,” an approach he calls precision medicine. “From the clinician’s perspective, we’d like to tailor treatment to an individual’s neural function: not treating every depressed patient the same.”

Today, mental illness is usually treated in two ways: drugs and behavioral therapy. Only around 50 percent of depression patients respond to any given drug, according to the National Institute of Mental Health. The nearly 20 percent of Americans with mental disorders, and the roughly half who will experience one in their lifetimes, are stuck with checklists (How anxious are you, on a scale of 1 to 10? Are you hearing voices? How’s your sleep?) when what they need, some scientists believe, is direct access to the brain. Psychologists like Beevers envision a future in which patients would be evaluated through quantitative tests of traits like memory and attention bias, to determine which symptoms to target and to tailor treatment to each patient’s needs. Those who have “difficulty disengaging from emotional content,” as Beevers puts it, may be good candidates for neurofeedback training.

This sort of training has its roots in today’s talk therapy. People with anxiety are taught to identify feelings that may spiral out of control. But as much as cognitive-behavioral therapy trains the conscious mind to catch rumination, compulsion, or panic and nip them in the bud, other emotional tendencies are entirely outside deliberate control: habits of the brain. So the Princeton-Austin team is using real-time fMRI to rein in the brain’s biases. Depressed subjects are shown a collection of faces, some sad, overlaid on scenes they are told to judge: outdoors or indoors? When the machine detects that a viewer is focusing more on faces than scenes, the sad face grows clearer and the scene harder to see, prompting the subject to self-correct by refocusing on the scene. Over time, the theory goes, subjects get better at not being drawn in by sad faces and at focusing on the task at hand. The hope, the researchers say, is that whatever in a depressed person’s brain draws her toward sad things may gradually learn to regulate itself. There’s still the question of whether such therapies could treat depression broadly, or whether, like brain games, they would just teach people how to excel at the treatment exercise.
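Mechanically, this is the same closed loop as the attention experiment, with the feedback signed so that the bias exposes itself: the more the decoder thinks you are looking at the face, the more the face dominates the display. A sketch of just that mapping, under the same illustrative assumptions as the earlier examples:

```python
def depression_feedback(p_face):
    """p_face: the decoder's estimate that the subject is attending to faces.

    The task is to judge the scene, so attention to the sad face is the
    error signal: the more face-like the brain state, the clearer the sad
    face is drawn and the harder the scene becomes to see.
    """
    face_alpha = p_face        # sad-face visibility rises with the bias
    scene_alpha = 1 - p_face   # the scene fades as the bias grows
    return face_alpha, scene_alpha

face_alpha, scene_alpha = depression_feedback(p_face=0.75)
print(face_alpha, scene_alpha)  # 0.75 0.25: a strong bias makes the face dominate
```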

The depression research is still ongoing (the authors stress the need for many more subjects and controls), but data reported at November’s Society for Neuroscience conference offered a promising proof of concept. The future of this work, in any case, is provocative to imagine.

“We still haven’t plumbed the depths of what information can be mined from fMRI,” the memory researcher Norman says. “We’re over the honeymoon period, but we’re still finding ways to squeeze more information out of the signal. Now we can pick up on not just ‘How awake are you?’ but ‘What plans are in your head?,’ ‘What are you attending to?’ There’s never been a technology that allows us to get such rich information about mental states from the brain.”

(Image via Creations/Shutterstock.com)