Multiple brain-computer interface (BCI) devices now enable users to control computer cursors, translate neural activity into words, and convert handwriting into text. A recent BCI accomplishes similar tasks, but without requiring time-consuming personal calibration or high-risk neurosurgery.
As described in a study published in PNAS Nexus, University of Texas at Austin researchers have developed a wearable cap that allows a user to accomplish complex computer tasks by interpreting brain activity as actionable commands. But instead of needing to tailor each device to a specific user's neural activity, an accompanying machine learning program offers a new, "one-size-fits-all" approach that dramatically reduces training time.
“Training a BCI subject usually begins with an offline calibration session to collect data for building an individual decoder,” the team explains in their paper’s abstract. “Apart from being time-consuming, this initial decoder might be inefficient as subjects do not receive feedback that helps them to elicit proper [sensorimotor rhythms] during calibration.”
To address this issue, the researchers developed a new machine learning program that identifies an individual's specific needs and adjusts its repetition-based training accordingly. Thanks to this adaptive self-calibration, trainees need neither hands-on guidance from a research team nor surgery to install an implant.
[Related: Neuralink shows first human patient using brain implant to play online chess.]
“When we think about this in a clinical setting, this technology will make it so we won’t need a specialized team to do this calibration process, which is long and tedious,” said Satyam Kumar, a graduate student involved in the project, in a recent statement. “It will be much faster to move from patient to patient.”
To prepare, all a user needs to do is wear the vivid red, electrode-dotted cap. The electrodes gather and transmit neural activity to the research team’s newly developed decoding software during training. Thanks to the program’s machine learning capabilities, developers avoided the time-consuming, personalized training usually required for other BCI tech to calibrate for each individual user.
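The core idea behind a "one-size-fits-all" decoder can be illustrated with a toy sketch: instead of calibrating a classifier on each new user's brain signals, a decoder is trained on data pooled from several previous users and applied directly to an unseen one. The sketch below is purely illustrative, with invented synthetic data and a simple nearest-centroid classifier standing in for the team's actual decoding method, which the study does not reduce to this form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: each "subject" produces 2-D features for two
# imagined commands (class 0 vs. class 1), with a small per-subject offset
# mimicking individual differences in sensorimotor rhythms.
def make_subject(offset):
    left = rng.normal(loc=-1.0 + offset, scale=0.5, size=(40, 2))
    right = rng.normal(loc=1.0 + offset, scale=0.5, size=(40, 2))
    X = np.vstack([left, right])
    y = np.array([0] * 40 + [1] * 40)
    return X, y

# "One-size-fits-all": pool training data from several prior subjects
# rather than running a calibration session for each new user.
subjects = [make_subject(off) for off in (-0.2, 0.0, 0.2)]
X_pool = np.vstack([X for X, _ in subjects])
y_pool = np.concatenate([y for _, y in subjects])

# Minimal nearest-centroid decoder: classify a sample by the closest
# class mean computed from the pooled data.
centroids = np.array([X_pool[y_pool == c].mean(axis=0) for c in (0, 1)])

def decode(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# A new, unseen user is decoded without any individual calibration.
X_new, y_new = make_subject(0.1)
acc = np.mean([decode(x) == y for x, y in zip(X_new, y_new)])
print(f"accuracy on unseen user: {acc:.2f}")
```

In practice the published decoder is far more sophisticated, but the sketch captures why pooling across users removes the long per-patient calibration step the researchers describe.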
Over a five-day period, 18 test subjects successfully learned to mentally control a car racing game and a simpler bar-balancing program using the new training method. The decoder was so effective that wearers could train on both the bar and racing games simultaneously, rather than one at a time. At the annual South by Southwest Conference last month, the UT Austin team took things a step further. During a demonstration, volunteers used the wearable BCI, then learned to control a pair of hand and arm rehabilitation robots within just a few minutes.
The team has only tested their BCI cap on subjects without motor impairments so far, but they intend to expand their decoder’s capabilities to include users with disabilities.
“We want to bring the BCI to the medical field to assist individuals with disabilities,” said José del R. Millán, co-author of the study and electrical and computer engineering professor at UT. “At the same time, we must enhance our technology to simplify its usage and make a greater difference for those with disabilities.” Millán’s team is also integrating comparable BCI technology into a wheelchair.