Photo: Pierre Duez
Ultrasound is good for more than monitoring fetuses and identifying heart defects. According to engineers in Canada, it can help tell what people are thinking as well. Their research suggests that ultrasound-based devices could lead to a new kind of brain-computer interface.
Brain-computer interface technology allows users to control devices with brain activity alone. Researchers have focused primarily on clinical applications for people with severe disabilities who would otherwise have difficulty interacting with the outside world.
In addition to brain-computer interfaces that involve electronics inserted directly into a patient’s head, researchers are also developing a number of noninvasive methods. For instance, electroencephalography (EEG) relies on electrodes attached to a person’s head; functional magnetic resonance imaging (fMRI) uses powerful magnetic fields to measure blood flow in the brain that telegraphs brain activity; magnetoencephalography (MEG) detects the magnetic fields generated by clusters of thousands of neurons; and near-infrared spectroscopy (NIRS) uses light to scan for changes in blood hemoglobin concentrations.
Yet practical use of these methods has so far been limited by a number of drawbacks. For instance, EEG must contend with electrical noise from muscle and eye movements; fMRI and MEG are very expensive and require large equipment; and NIRS, while still early in development as a brain-computer interface technology, has a low data-transmission rate.
Now biomedical engineer Tom Chau and his colleagues at the University of Toronto reveal that ultrasound can also monitor brain activity, suggesting that it could be used for brain-computer interfaces.
The researchers used lightweight ultrasound headgear to measure blood flow in the brains of nine able-bodied adults as they alternated between relaxing and performing two mental tasks. One task required them to think of words that began with a letter displayed on a video screen, and the other asked them to compare two objects rotated to different angles and determine whether they were the same object or mirror images of each other.
Using this new technique, the researchers could tell with 82.9 percent accuracy whether people were performing the word-generation task or just relaxing, and with 85.7 percent accuracy whether they were performing the mental-rotation task or just relaxing. Word generation usually led to an increase in blood flow in the left middle cerebral artery, while mental rotation caused increases in both the left and right middle cerebral arteries.
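The lateralization pattern described above suggests how such a classifier might distinguish the tasks. The sketch below is purely illustrative and is not the team's actual algorithm: it assumes a hypothetical feature (fractional change in blood-flow velocity relative to a resting baseline, measured in each middle cerebral artery) and an arbitrary threshold.

```python
# Illustrative sketch only: a rule-based classifier keyed to the
# lateralization pattern the article describes. Feature definitions
# and the threshold value are assumptions, not from the study.

def classify_state(left_delta: float, right_delta: float,
                   threshold: float = 0.1) -> str:
    """Classify mental state from fractional changes in blood-flow
    velocity (vs. resting baseline) in the left and right middle
    cerebral arteries (MCAs)."""
    if left_delta > threshold and right_delta > threshold:
        # Mental rotation increased flow in both MCAs.
        return "mental rotation"
    if left_delta > threshold:
        # Word generation increased flow mainly in the left MCA.
        return "word generation"
    return "rest"

print(classify_state(0.25, 0.20))  # bilateral increase -> mental rotation
print(classify_state(0.22, 0.03))  # left-only increase -> word generation
print(classify_state(0.01, 0.02))  # no change -> rest
```

In practice, a study like this would more likely use a trained statistical classifier over many velocity features, which is one reason the reported accuracies fall short of 100 percent.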
Chau and his colleagues reported their findings online on 7 September in the journal PLoS ONE.
With some adjustments, results could be even more accurate, Chau says. For instance, if subjects had time to train in these exercises, or if they were given instantaneous visual or auditory feedback during the tasks, they could better adjust their efforts toward their goals.
“The results reported are encouraging,” says Rajesh Rao, a computational neuroscientist at the University of Washington, in Seattle, who did not take part in this study. “The approach offers a potentially cheaper alternative to more traditional EEG- and fMRI-based noninvasive brain-computer interfacing.”
With this approach, it can take as long as 45 seconds before researchers can tell what participants are concentrating on. However, Chau notes that they could speed this up by improving the algorithms used to recognize each task in the brain.
There are also other problems to solve. There are only a few spots on the skull thin enough for the ultrasound to scan through, which limits the amount of brain activity it can monitor. Also, Chau says, “our measurements reflect changes in blood-flow velocity secondary to neural activity in the brain—it will thus always be slower than, say, the detection of electrical activity due to neuronal firing.”
Still, even if it takes 10 seconds for an ultrasound scanner to register what a patient is thinking, it may be worthwhile for someone who is severely disabled, Chau says. The researchers are now looking into combining ultrasound with NIRS to achieve greater speed, accuracy, and detail in monitoring brain activity than either approach could provide alone.
“We hope to soon apply this brain-control interface to the target population—individuals with severe and multiple disabilities,” says Andrew Myrden, a graduate student at the University of Toronto who led the research.