Nerve interface provides intuitive and precise control of prosthetic hand

Current state-of-the-art multifunctional prosthetic hands are restricted in functionality by the signals used to control them. A promising source of prosthetic motor control is the peripheral nerves that run from the spinal column down the arm, since they still function after an upper limb amputation. But building a direct interface to the peripheral nervous system is challenging, because these nerves and their electrical signals are incredibly small. Current interface techniques are hindered by signal amplitude and stability issues, so they provide amputees with only a limited number of independent movements.

Now, researchers from the University of Michigan have developed a novel regenerative peripheral nerve interface (RPNI) that relies on tiny muscle grafts to amplify the peripheral nerve signals, which are then translated into motor control signals for the prosthesis using standard machine learning algorithms. The research team has demonstrated real-time, intuitive, finger-level control of a robotic hand for amputees, as reported in a recent issue of Science Translational Medicine.

“We take a small graft from one of the patient’s quadriceps muscles, or from the amputated limb if they are doing the amputation right then, and wrap just the right amount of muscle around the nerve. The nerve then regrows into the muscle to form new neuromuscular junctions,” says Cindy Chestek, an associate professor of biomedical engineering at the University of Michigan and a senior author on the study. “This creates multiple innervated muscle fibers that are controlled by the small nerve and that all fire at the same time to create a much larger electrical signal—10 or 100 times bigger than you would record from inside or around a nerve. And we do this for several of the nerves in the arm.”

This surgical technique was initially developed by co-researcher Paul Cederna, a plastic surgeon at the University of Michigan, to treat phantom limb pain caused by neuromas. A neuroma is a painful growth of nerve cells that forms at the site of the amputation injury. Over 200 patients have undergone the surgery to treat neuroma pain.

“The impetus for these surgeries was to give nerve fibers a target, or a muscle, to latch on to so neuromas didn’t develop,” says Gregory Clark, an associate professor in biomedical engineering from the University of Utah who was not involved in the study. “Paul Cederna was insightful enough to realize these reinnervated mini-muscles also provided a wonderful opportunity to serve as signal sources for dexterous, intuitive control. That means there’s a ready population that could benefit from this approach.”

The Michigan team validated their technique in studies with four participants with upper extremity amputations who had previously undergone RPNI surgery to treat neuroma pain. Each participant had three to nine muscle grafts implanted on nerves. Initially, the researchers measured the signals from these RPNIs using fine-wire, nickel-alloy electrodes inserted through the skin into the grafts under ultrasound guidance. They recorded high-amplitude electromyography (EMG) signals, representing the electrical activity of the mini-muscles, when the participants imagined moving the fingers of their phantom hand. Ultrasound imaging confirmed that these imagined movements caused the corresponding mini-muscles to contract. These proof-of-concept measurements, however, were limited by the discomfort and movement of the percutaneous electrodes that pierced the skin.

Next, the team surgically implanted permanent electrodes into the RPNIs of two of the participants. They chose a type of electrode commonly used in battery-powered diaphragm pacing systems, which electrically stimulate the diaphragm muscles and nerves of patients with chronic respiratory insufficiency to help regulate their breathing. These implanted electrodes allowed the researchers to measure even larger electrical signals—week after week from the same participant—simply by plugging into the connector. After 5 to 15 minutes of calibration data were collected, the electrical signals were translated into movement intent using machine learning algorithms and then passed on to a prosthetic hand. Both subjects were able to intuitively complete tasks like stacking physical blocks without any training; it worked on the first try just by thinking about it, says Chestek. Another key result is that the decoding algorithm kept working even 300 days later.
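The article doesn’t detail the study’s actual decoder, but the basic calibrate-then-decode loop it describes can be sketched generically: extract a feature per EMG channel for each short time window, fit a classifier on the labeled calibration windows, then map each new window to a movement intent. The sketch below uses simulated data and a logistic-regression classifier purely as stand-ins, not as the study’s method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated calibration session: one feature per EMG channel for each
# short time window (e.g., mean absolute value), with a label saying
# which finger movement the participant imagined. All values here are
# random placeholders for illustration.
n_windows, n_channels, n_movements = 300, 6, 4
X = rng.random((n_windows, n_channels))           # per-channel EMG features
y = rng.integers(0, n_movements, size=n_windows)  # imagined-movement labels

# Fit a simple classifier on the calibration data...
decoder = LogisticRegression(max_iter=1000).fit(X, y)

# ...then, at run time, decode each new window into a movement intent
# that would be forwarded to the prosthetic hand controller.
new_window = rng.random((1, n_channels))
intent = int(decoder.predict(new_window)[0])
```

The long-term stability reported in the study corresponds to this `decoder` continuing to work on new sessions without refitting.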

“The ability to use the determined relationship between electrical activity and intended movement for a very long period of time has important practical consequences for the user of a prosthesis, because the last thing they want is to rely on a hand that is not reliable,” Clark says.

Although this clinical trial is ongoing, the Michigan team is now investigating how to replace the connector and computer card with an implantable device that communicates wirelessly, so patients can walk around in the real world. The researchers are also working to incorporate sensory feedback through the regenerative peripheral nerve interface. Their ultimate goal is for patients to feel like their prosthetic hand is alive, taking over the space in the brain where their natural hand used to be.

“People are excited because this is a novel approach that will provide high quality, intuitive, and very specific signals that can be used in a very straightforward, natural way to provide high degrees of dexterous control that are also very stable and last a long time,” Clark says.

Read the article in Science Translational Medicine.

Illustration of multiple regenerative peripheral nerve interfaces (RPNIs) created for each available nerve of an amputee. Fine-wire electrodes were inserted into the RPNI muscles during the readout session. Credit: Philip Vu/University of Michigan; Science Translational Medicine doi: 10.1126/scitranslmed.aay2857

This is a reposting of my news brief, courtesy of Materials Research Society.

Learning from health-related social media posts: A Q&A

Image by Max Pixel

About 6,000 tweets are sent every second, and they aren’t all about celebrities. Posts about health or illness can be tremendously valuable to health care professionals, allowing them to track trends, spot epidemics and assess the quality of services provided by health facilities, to name just a few uses.

But how can researchers make sense of this flood of data? To find out, I spoke with Sidhartha Sinha, MD, an assistant professor of medicine at Stanford, who analyzes social media posts to better understand patient and societal perceptions.

What sparked your interest in online data?

“While there are certainly downsides to working with unstructured data from sources such as social media and online patient forums, there are also tremendous advantages, including the scope of patients we can ‘reach.’

For example, in our work analyzing data from an online patient forum for patients with inflammatory bowel disease, we are able to access tens of thousands of posts from patients with IBD. These patients are describing a variety of issues around their experience with the disease — such as their therapy side effects (some of which have not been seen before and may offer early insights), psychosocial issues with chronic disease, and opinions regarding treatments and interventions. By analyzing this data, we are in effect ‘listening’ to these patients’ experiences and hopefully gaining insights to better treat the disease.”

I understand you’ve also used online data to better understand public sentiment — could you describe that?

“One of the most important things health care providers do is try to prevent disease. And one of the best means to do this is through disease screening. However, millions of people do not get age-appropriate screening for diseases such as breast cancer or colon cancer. My group’s initial work targeted understanding the perceptions around cancer screening tools. Understanding how people feel about these screening interventions — particularly on the scale we’re able to examine using social media — allows us to not only identify barriers, but also further ascertain methods that work.”

How did you do that?

“Tens of thousands of tweets mentioning screening tests are created weekly. And while there are clear limitations to the quality of data and its generalizability, the sheer volume of data that we can access is much larger than most other means such as conventional surveys, which carry their own significant limitations. So we developed and validated a machine learning algorithm to classify sentiment (positive, negative, or neutral) around mentions of three common cancer screening tools: colonoscopy, mammography and Pap smears.

We found more negative sentiment expressed for colonoscopy and more positive sentiment for mammography. For example, the words ‘fear’ and ‘pain’ were commonly associated with negative sentiment. We also found that posts that were negative in sentiment spread more rapidly through social media than positive posts.”
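The interview doesn’t specify which algorithm the group used, but the task it describes — three-way sentiment classification of short posts mentioning screening tests — can be sketched with a common text-classification baseline: TF-IDF features feeding a naive Bayes classifier. The tweets and labels below are invented for illustration and are not study data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labeled posts (invented examples, not the study's dataset).
posts = [
    "my colonoscopy prep was pure pain, dreading it",
    "fear of the scope kept me up all night",
    "so glad I got my mammogram, quick and easy",
    "mammography saved my mom's life, get screened",
    "scheduled a pap smear for next week",
    "reminder that screening guidelines changed this year",
]
labels = ["negative", "negative", "positive", "positive", "neutral", "neutral"]

# TF-IDF turns each post into word-weight features; multinomial naive
# Bayes then learns which words signal each sentiment class.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(posts, labels)

# Classify a new post mentioning a screening test.
prediction = model.predict(["the pain and fear around colonoscopy is real"])[0]
```

Words like “pain” and “fear,” which the study found associated with negative sentiment, are exactly the kind of features such a model picks up on.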

How are these findings being used?

“Knowing the types of postings that reach more users, and some of the common issues expressed in them, could certainly influence how professional societies develop outreach interventions to improve engagement with preventive health efforts.

Based on our initial findings, we are developing additional algorithms to hopefully better understand patient and societal perceptions of disease. We are also now engaged with professional societies such as the Crohn’s and Colitis Foundation to provide organizations with improved methods to understand patient needs and promote health.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.