Machines that attach to the brain and decode its activity promise to open up all kinds of medical possibilities, potentially allowing for improved screening for Alzheimer’s or the monitoring of internal organs. One of their more promising applications involves allowing people with paralysis to regain control of prosthetic devices and limbs via their brain signals, something a team from the University of California, San Francisco (UCSF) has now demonstrated with a first-of-its-kind plug-and-play device.
These types of machines are known as brain-computer interfaces (BCIs), and quite a few under development have shown promising capabilities over the past few years. In their various forms, these devices can be implanted in the brain and, powered by advanced algorithms, turn its electrical signals into control inputs for all kinds of devices, from prosthetic limbs to complete exoskeletons and even drones.
The new technology developed at UCSF could mark a significant step forward in this field of research, with the team focusing on the software that translates brain activity into action. This machine learning algorithm was trained to track a paralyzed user’s imagined movements of the neck or wrist, as they watched a computer cursor make its way across a screen.
To begin with, this algorithm had to be reset each day, with the software gradually learning to match the user’s desired motions with the actual movement of the cursor on screen, eventually enabling them to control it. But this could take hours of experimentation each day, so the scientists began to explore other options.
Some tweaks to the algorithm enabled it to keep learning about the user’s brain activity and desired movements, rather than resetting and starting from scratch each day. The team found that this approach let the algorithm improve continuously from one day to the next, and eventually meant that the user could plug in and begin using it to great effect right away.
“We found that we could further improve learning by making sure that the algorithm wasn’t updating faster than the brain could follow – a rate of about once every 10 seconds,” says Karunesh Ganguly, a practicing neurologist with UCSF Health. “We see this as trying to build a partnership between two learning systems – brain and computer – that ultimately lets the artificial interface become an extension of the user, like their own hand or arm.”
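The idea of capping how quickly the decoder adapts can be sketched in code. The class below is a toy illustration, not the UCSF team’s actual algorithm: the linear readout, learning rate, and update rule are all hypothetical, and only the roughly once-per-10-seconds update cap comes from the article.

```python
import time

class AdaptiveDecoder:
    """Toy linear decoder mapping neural features to a cursor velocity.

    Illustrative only: the readout and update rule are hypothetical;
    the 10-second update cap reflects the rate described in the article.
    """

    def __init__(self, n_features, lr=0.01, update_interval_s=10.0):
        self.weights = [0.0] * n_features
        self.lr = lr
        self.update_interval_s = update_interval_s  # at most one update per ~10 s
        self._last_update = float("-inf")

    def decode(self, features):
        # Linear readout: predicted cursor velocity from neural features
        return sum(w * f for w, f in zip(self.weights, features))

    def maybe_update(self, features, error, now=None):
        """Apply a learning step only if enough time has passed, so the
        decoder never adapts faster than the brain can follow."""
        now = time.monotonic() if now is None else now
        if now - self._last_update < self.update_interval_s:
            return False  # too soon since the last update; skip
        for i, f in enumerate(features):
            self.weights[i] += self.lr * error * f
        self._last_update = now
        return True
```

In this sketch, calls to `maybe_update` between updates are simply ignored, so the brain sees a decoder that changes slowly and predictably rather than one that shifts under it on every sample.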
The BCI used in these experiments is known as an ECoG array, which is a pad of electrodes around the size of a Post-it note that is surgically implanted on the surface of the brain. The researchers obtained special approval to implant it in paralyzed patients on a long-term basis for the purpose of their experiments, and found that over time, the users’ brains were optimizing their activity to control the BCI, without the need for daily recalibration.
“Once the user has established an enduring memory of the solution for controlling the interface, there’s no need for resetting,” says Ganguly, the study’s senior author. “The brain just rapidly converges back to the same solution.”
With enough practice, the researchers found they could switch off the algorithm’s auto-updating feature entirely and the user could simply plug in and start using it each day. Even without any daily calibration, performance did not decline over a 44-day period of use, and the user was able to go several days without using the device and experience only a small decline in performance.
“The BCI field has made great progress in recent years, but because existing systems have had to be reset and recalibrated each day, they haven’t been able to tap into the brain’s natural learning processes. It’s like asking someone to learn to ride a bike over and over again from scratch,” says Ganguly. “Adapting an artificial learning system to work smoothly with the brain’s sophisticated long-term learning schemas is something that’s never been shown before in a paralyzed person.”
The research was published in the journal Nature Biotechnology.