Brain-machine interfaces (BMIs) are an emerging technology at the
intersection of neuroscience and engineering that may improve the quality of
life for amputees and individuals with paralysis. These patients are unable to
relay signals from their motor cortex — the part of the brain that normally
controls movement — to their muscles.
Researchers are overcoming this disconnect by implanting small electrode
arrays in the brain, which measure and decode the electrical
activity of neurons in the motor cortex. The sensors’ electrical signals are
transmitted via a cable to a computer and then translated into commands that
control a computer cursor or prosthetic limb. Someday, scientists also hope to
eliminate the cable, using wireless brain sensors to control prosthetics.
In order to realize this dream, however, they need to
improve both the brain sensors and the algorithms used to decode the neural
signals. Stanford electrical engineer Krishna Shenoy, PhD,
and his collaborators are tackling this algorithm challenge, as described in a recent
paper in Neuron.
Currently, most neuroscientists process their BMI data looking
for “spikes” of electrical activity from individual neurons. But this process requires
time-consuming manual or computationally intense automatic data sorting, which are
both prone to errors.
Manual data sorting will also become unrealistic for future
technologies, which are expected to record thousands to millions of electrode channels
compared to the several hundred channels recorded by today’s state-of-the-art
sensors. For example, a dataset composed of 1,000 channels could take over 100
hours to hand sort, the paper says. In addition, neuroscientists would like to
measure a greater brain volume for longer durations.
So, how can they decode all of this data?
Shenoy suggests simplifying the data analysis by eliminating
spike sorting for applications that depend on the activity of neural
populations rather than single neurons — such as brain-machine interfaces.
In their new study, the Stanford team investigated whether eliminating
this spike sorting step distorted BMI data. Turning to statistics, they
developed an analysis method that retains accuracy while extracting information
from groups rather than individual neurons. Using experimental data from three previous
animal studies, they demonstrated that their algorithms could accurately decode
neural activity with minimal distortion — even when each BMI electrode channel
measured several neurons.
“This study has a bit of a hopeful message in that observing activity in the brain turns out to be easier than we initially expected,” says Shenoy in a recent Stanford Engineering news release. The researchers hope their work will guide the design and use of new low-power, higher-density devices for clinical applications since their simplified analysis method reduces the storage and processing requirements.
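The population-level idea can be illustrated with a toy sketch on synthetic data (this is not the paper's actual algorithm, and all numbers are made up): instead of sorting spikes to individual neurons, each channel's unsorted multiunit counts are fed directly to a linear decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 20 electrode channels, 200 time bins.
n_channels, n_bins = 20, 200
true_signal = np.sin(np.linspace(0, 4 * np.pi, n_bins))  # a 1-D "movement" variable

# Multiunit counts: each channel mixes the signal with noise, standing in
# for unsorted threshold crossings from several neurons per electrode.
weights = rng.normal(size=n_channels)
counts = np.outer(weights, true_signal) + rng.normal(scale=0.5, size=(n_channels, n_bins))

# Linear decoder fit by least squares on the pooled, unsorted channel counts.
coef, *_ = np.linalg.lstsq(counts.T, true_signal, rcond=None)
decoded = counts.T @ coef

# How well the population-level decode tracks the underlying signal.
correlation = np.corrcoef(decoded, true_signal)[0, 1]
```

Even though no channel is spike-sorted, pooling across the population recovers the underlying signal, which is the intuition behind skipping the sorting step for population-level applications.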
This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
New technology and access to large databases are fundamentally changing how researchers investigate the genetic roots of psychiatric disorders.
“In the past, a lot of the conditions that people knew to be genetic were found to have a relatively simple genetic cause. For example, Huntington’s disease is caused by mutations in just one gene,” said Laramie Duncan, PhD, an assistant professor of psychiatry and behavioral sciences at Stanford. “But the situation is entirely different for psychiatric disorders, because there are literally thousands of genetic influences on every psychiatric disorder. That’s been one of the really exciting findings that’s come out of modern genetic studies.”
These findings are possible thanks to genome-wide association studies (GWAS), which test for millions of genetic variations across the genome to identify the genes involved in human disease.
Duncan is the lead author of a recent commentary in Neuropsychopharmacology that explains how GWAS studies have demonstrated the inadequacy of previous methods. The paper also highlights new genetics findings for mental health.
Before the newer technologies and databases were available, scientists could only analyze a handful of genetic variations. So they had to guess that a specific genetic variation (a candidate) was associated with a disorder — based on what was known about the underlying biology — and then test their hypothesis. The body of research that has emerged from GWAS studies, however, shows that nearly all of these earlier “candidate gene” results are incorrect for psychiatric disorders.
“There are actually so many genetic variations in the genome, it would have been almost impossible for people to guess correctly,” Duncan said. “It was a reasonable thing to do at the time. But we now have better technology that’s just as affordable as the old ways of doing things, so traditional candidate gene studies are no longer needed.”
Duncan said she began questioning the candidate gene studies as a graduate student. As she studied the scientific literature, she noticed a pattern in the data that suggested the results were wrong. “The larger studies tended to have null results and the very small studies tended to have positive results. And the only reason you’d see that pattern is if there was strong publication bias,” said Duncan. “Namely, positive results were published even if the study was small, and null results were only published when the study was very large.”
In contrast, the findings from the GWAS studies become more and more precise as the sample size increases, she explained, which demonstrates their reliability.
Using GWAS, researchers now know that thousands of variations distributed across the genome likely contribute to any given mental disorder. By using the statistical power gleaned from giant databases such as the UK Biobank or the Million Veterans Program, they have learned that most of these variations aren’t even in the regions of DNA that code for proteins, where scientists expected them to be. For example, only 1.1 percent of schizophrenia risk variants are in these coding regions.
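A toy polygenic-score sketch makes it concrete why guessing a single candidate gene is nearly hopeless when thousands of variants each contribute a tiny amount (the variant count, effect sizes and dosages below are illustrative assumptions, not real GWAS values):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy genome: 5,000 variants, each with a tiny hypothetical effect on risk.
n_variants = 5_000
effect_sizes = rng.normal(scale=0.01, size=n_variants)

# One person's genotype dosages (0, 1 or 2 copies of each risk allele).
dosages = rng.integers(0, 3, size=n_variants)

# A polygenic score sums dosage-weighted effects across the whole genome.
polygenic_score = float(np.dot(dosages, effect_sizes))

# No single variant contributes more than a sliver of the total signal,
# which is why picking one candidate in advance almost never works.
largest_single_contribution = float(np.max(np.abs(dosages * effect_sizes)))
```

In this made-up example the overall score aggregates thousands of tiny contributions, while the biggest single-variant contribution stays close to zero.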
“What’s so interesting about the modern genetic findings is that they are revealing entirely new clues about the underlying biology of psychiatric disorders,” Duncan said. “And this opens up lots of new avenues for treatment development.”
This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
There are pros and cons to social media discussions of suicide. Social media can spread helpful knowledge and support, but it can also quickly disseminate harmful messaging and misinformation that puts vulnerable youth at risk.
Vicki Harrison, MSW, the program director for the Stanford Center for Youth Mental Health and Wellbeing, discussed a new online education tool — a set of social media guidelines developed in collaboration with a youth advisory panel — in a recent Healthier, Happy Lives blog post.
“My hope is that these guidelines will create awareness about the fact that the way people talk about suicide actually matters an awful lot and doing so safely can potentially save lives. Yet we haven’t, up to this point, offered young people a lot of guidance for how to engage in constructive interactions about this difficult topic,” Harrison said in the blog post. “Hopefully, these guidelines will demystify the issue somewhat and offer practical suggestions that youth can easily apply in their daily interactions.”
A few main takeaways from the guidelines are below:
Before you post anything online about suicide
Remember that posts can go viral and they will never be completely erased. If you do post about suicide, carefully choose the language you use. For example, avoid words that describe suicide as criminal, sinful, selfish, brave, romantic or a solution to problems.
Also, monitor the comments for unsafe content like bullying, images or graphic descriptions of suicide methods, suicide pacts or goodbye notes. And include a link to prevention resources, like suicide help centers on social media platforms. From the guidelines:
“Indicate suicide is preventable, help is available, treatment can be successful, and that recovery is possible.”
Sharing your own thoughts, feelings or experience with suicidal behavior online
If you’re experiencing suicidal thoughts or feelings, try to reach out to a trusted adult, friend or professional mental health service before posting online. If you are feeling unsafe, call 911.
In general, think before you post: What do you hope to achieve by sharing your experience? How will sharing make you feel? Who will see your post and how will it affect them?
If you do post, share your experience in a safe and helpful way without graphic references, and consider including a trigger warning at the beginning to warn readers about potentially upsetting content.
Communicating about someone you know who is affected by suicidal thoughts, feelings or behaviors
If you’re concerned about someone, ask permission before posting or sharing content about them if possible. If someone you know has died by suicide, be sensitive to the feelings of their grieving family members and friends who might see your post. Also, avoid posting or sharing posts about celebrity suicides, because too much exposure to the suicide of well-known public figures can lead to copycat suicides.
Responding to someone who may be suicidal
Before you respond to someone who has indicated they may be at risk of suicide, check in with yourself: How are you feeling? Do you understand the role and limitations of the support you can provide?
If you do respond, always do so in private, without judgment, assumptions or interruptions. Ask them directly if they are thinking of suicide. Acknowledge their feelings and let them know exactly why you are worried about them. Show that you care. And encourage them to seek professional help.
Memorial websites, pages and closed groups to honor the deceased
Setting up a page or group to remember someone who has died can be a good way to share stories and support, but it also raises concerns about copycat suicides. So make sure the memorial page or group is safe for others — by monitoring comments for harmful or unsafe content, quickly dealing with unsupportive comments and responding personally to participants in distress. Also outline the rules for participation.
Individuals in crisis can receive help from the Santa Clara County Suicide & Crisis Hotline at (855) 278-4204. Help is also available from anywhere in the United States via Crisis Text Line (text HOME to 741741) or the National Suicide Prevention Lifeline at (800) 273-8255. All three services are free, confidential and available 24 hours a day, seven days a week.
This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
For millions of people throughout the world, even the simplest surgeries can be risky due to challenging conditions like frequent power outages. In response, Stanford surgeon Thomas Weiser, MD, is part of a team from Lifebox working to develop a durable, affordable and high-quality surgical headlamp for use in low-resource settings. Lifebox is a nonprofit that aims to make surgery safer throughout the world.
Why is an inexpensive surgical headlight important?
“The least expensive headlight in the United States costs upwards of $1,000, and most cost quite a bit more. They are very powerful and provide excellent light, but they’re not fit for purpose in lower-resource settings. They are Ferraris when what we need is a Tata – functional, but affordable.
Jared Forrester, MD, a third-year Stanford surgical resident, lived and worked in Ethiopia for the last two years. During that time, he noted that 80 percent of surgeons working in low- and middle-income countries identify poor lighting as a safety issue and nearly 20 percent report direct experience of poor-quality lighting leading to negative patient outcomes. So there is a massive need for a lighting solution.”
How did you develop your headlamp specifications?
“Jared started by passing around a number of off-the-shelf medical headlights with surgeons in Ethiopia. We also asked surgeons in the U.S. and the U.K. to try them out to see how they felt and evaluate what was good and bad about them.
We performed some illumination and identification tests using pieces of meat in a shoebox with a slit cut in it to mimic a limited field of view and a deep hole. We asked surgeons to use lights at various power with the room lights off, with just the room lights on and with overhead surgical lights focused on the field. That way we could evaluate the range of light needed in settings with highly variable lighting, something that does not really exist here in the U.S.”
How do they differ from recreational headlamps?
“Recreational headlights have their uses and I’ve seen them used for providing care — including surgery. However, they tend to be uncomfortable during long cases and not secure on the head. Also, the light isn’t uniformly bright. You can see this when you shine a recreational light on a wall: there is a halo and the center is a different brightness than the outer edge of the light. This makes distinguishing tissue planes and anatomy more difficult.”
What are the barriers to implementation?
“While surgeons working in these settings all express interest in having a quality headlight, there is no reliable manufacturer or distributor for them. Surgeons cannot afford expensive lights, and no one has stepped up to provide a low-cost alternative that is robust, high quality and durable. We’re working to change that.”
What are your next steps?
“We are now evaluating a select number of headlights and engaging manufacturers in discussions about their current devices and what changes might be needed to make a final light at a price point that would be affordable to clinicians and facilities in these settings. By working through our networks and using our logistical capacity, we can connect the manufacturer with a new market that currently does not exist — but is ready and waiting to be developed.
We believe these lights will improve the ability of surgeons to provide better, safer surgical care and also allow emergency cases to be completed at night when power fluctuations are most problematic. These lights should increase the confidence of the surgeon that the operation can be performed safely.”
This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
The goal of cancer therapy is to destroy the cancer cells while minimizing side effects and damage to the rest of the body. Common types of treatment include surgery, chemotherapy, targeted therapy and radiation therapy. Often combined with surgery or drugs, radiation therapy uses high-energy X-rays to harm the DNA and other critical processes of the rapidly-dividing cancer cells.
Innovations in radiation therapy were the focus of a recent episode of the SiriusXM radio show “The Future of Everything.” On hand was Stanford’s Billy Loo, MD, PhD, a professor of radiation oncology, who spoke with Stanford professor and radio host Russ Altman, MD, PhD.
Radiation has been used to treat cancer for over a century, but today’s technologies target the tumor with far greater precision and speed than the old days. Loo explained that modern radiotherapy now delivers low-dose beams of X-rays from multiple directions, which are accurately focused on the tumor so the surrounding healthy tissues get only a small dose while the tumor gets blasted. Radiation oncologists use imaging — CT, MRI or PET — to determine the three-dimensional sculpture of the tumor to target.
“We identify the area that needs to be treated, where the tumor is in relationship to the normal organs, and create a plan of the sculpted treatment,” Loo said. “And then during the treatment, we also use imaging … to see, for example, whether the radiation is going where we want it to go.”
In addition, oncologists now implement technologies in the clinic to compensate for motion, since organs like the lungs are constantly moving and patients have trouble lying still even for a few minutes. “We call it motion management. We do all kinds of tricks like turning on the radiation beam synchronized with the breathing cycle or following tumors around with the radiation beam,” explained Loo.
Currently, that is how standard radiation therapy works. However, Stanford radiation oncologists are collaborating with scientists at SLAC National Accelerator Laboratory to develop an innovative technology called PHASER. Although Loo admits the acronym was inspired by his love of Star Trek, PHASER stands for pluridirectional high-energy agile scanning electronic radiotherapy. This new technology delivers the radiation dose of an entire therapy session in a single flash lasting less than a second — faster than the body moves.
“We wondered, what if the treatment was done so fast — as in flash photography — that all the motion is frozen? That’s a fundamental solution to this motion problem that gives us the ultimate precision,” he said. “If we’re able to treat more precisely with less spillage of radiation dose into normal tissues, that gives us the benefit of being able to kill the cancer and cause less collateral damage.”
The research team is currently testing the PHASER technology in mice, which has led to an exciting discovery — the biological response to flash radiotherapy may differ from the response to slower, traditional radiotherapy.
“We and a few other labs around the world have started to see that when the radiation is given in a flash, we see equal or better tumor killing but much better normal tissue protection than with the conventional speed of radiation,” Loo said. “And if that translates to humans, that’s a huge breakthrough.”
Loo also explained that their PHASER technology has been designed to be compact, economical, reliable and clinically efficient to provide a robust, mobile unit for global use. They expect it to fit in a standard cargo shipping container and to be powered by solar energy and batteries.
“About half of the patients in the world today have no access to radiation therapy for technological and logistical reasons. That means millions of patients who could potentially be receiving curative cancer therapy are getting treated purely palliatively. And that’s a huge tragedy,” Loo said. “We don’t want to create a solution that everyone in the world has to come here to get — that would have limited impact. And so that’s been a core principle from the beginning.”
This is a reposting of my Scope blog post, courtesy of Stanford School of Medicine.
“Hey Siri, am I depressed?” When I posed this question to my iPhone, Siri’s reply was “I can’t really say, Jennifer.” But someday, software programs like Siri or Alexa may be able to talk to patients about their mental health symptoms to assist human therapists.
What is AI psychology?
“AI psychology isn’t a new specialty yet, but I do see it as a growing interdisciplinary need. I work to improve mental health access and quality through safe and effective artificial intelligence. I use methods from social science and computer science to answer questions about AI and vulnerable groups who may benefit or be harmed.”
How did you become interested in this field?
“During my training as a clinical psychologist, I had patients who waited years to tell anyone about their problems for many different reasons. I believe the role of a clinician isn’t to blame people who don’t come into the hospital. Instead, we should look for opportunities to provide care when people are ready and willing to ask for it, even if that is through machines.
I was reading research from different fields like communication and computer science and I was struck by the idea that people may confide intimate feelings to computers and be impacted by how computers respond. I started testing different digital assistants, like Siri, to see how they responded to sensitive health questions. The potential for good outcomes — as well as bad — quickly came into focus.”
Why is technology needed to assess the mental health of patients?
“We have a mental health crisis and existing barriers to care — like social stigma, cost and treatment access. Technology, specifically AI, has been called on to help. The big hope is that AI-based systems, unlike human clinicians, would never get tired, be available wherever and whenever the patient needs them, and know more than any human could ever know.
However, we need to avoid inflated expectations. There are real risks around privacy, ineffective care and worsening disparities for vulnerable populations. There’s a lot of excitement, but also a gap in knowledge. We don’t yet fully understand all the complexities of human–AI interactions.
People may not feel judged when they talk to a machine the same way they do when they talk to a human — the conversation may feel more private. But it may in fact be more public because information could be shared in unexpected ways or with unintended parties, such as advertisers or insurance companies.”
What are you hoping to accomplish with AI?
“If successful, AI could help improve access in three key ways. First, it could reach people who aren’t accessing traditional, clinic-based care for financial, geographic or other reasons like social anxiety. Second, it could help create a ‘learning healthcare system’ in which patient data is used to improve evidence-based care and clinician training.
Lastly, I have an ethical duty to practice culturally sensitive care as a licensed clinical psychologist. But a patient might use a word to describe anxiety that I don’t know and I might miss the symptom. AI, if designed well, could recognize cultural idioms of distress or speak multiple languages better than I ever will. But AI isn’t magic. We’ll need to thoughtfully design and train AI to do well with different genders, ethnicities, races and ages to prevent further marginalizing vulnerable groups.
If AI could help with diagnostic assessments, it might allow people to access care who otherwise wouldn’t. This may help avoid downstream health emergencies like suicide.”
How long until AI is used in the clinic?
“I hesitate to give any timeline, as AI can mean so many different things. But a few key challenges need to be addressed before wide deployment, including the privacy issues, the impact of AI-mediated communications on clinician-patient relationships and the inclusion of cultural respect.
The clinician–patient relationship is often overlooked when imagining a future with AI. We know from research that people can feel an emotional connection to health-focused conversational AI. What we don’t know is whether this will strengthen or weaken the patient-clinician relationship, which is central to both patient care and a clinician’s sense of self. If patients lose trust in mental health providers, it will cause real and lasting harm.”
This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
New Web-based Tool Hosted at NERSC Helps Visualize Exometabolomic Data
Understanding nutrient flows within microbial communities is important to a wide range of fields, including medicine, bioremediation, carbon sequestration, and sustainable biofuel development. Now, researchers from Lawrence Berkeley National Laboratory (Berkeley Lab) have built an interactive, web-based data visualization tool to observe how organisms transform their environments through the increase and decrease of metabolites — enabling scientists to quickly see patterns in microbial food webs.
This visualization tool — the first of its kind — is a key part of a new data repository, the Web of Microbes (WoM), which contains liquid chromatography mass spectrometry datasets from the Northen Metabolomics Lab located at the U.S. Department of Energy’s Joint Genome Institute (JGI). The Web of Microbes project is an interdisciplinary collaboration between biologists and computational researchers at Berkeley Lab and the National Energy Research Scientific Computing Center (NERSC). JGI and NERSC are both DOE Office of Science user facilities.
“While most existing databases focus on metabolic pathways or identifications, the Web of Microbes is unique in displaying information on which metabolites are consumed or released by an organism to an environment such as soil,” said Suzanne Kosina, a senior research associate in Berkeley Lab’s Environmental Genomics & Systems Biology (EGSB) Division, a member of the DOE ENIGMA Scientific Focus Area, and lead author on a paper describing WoM published in BMC Microbiology. “We call them exometabolites since they are outside of the cell. Knowing which exometabolites a microbe ‘eats’ and produces can help us determine which microbes might benefit from growing together or which might compete with each other for nutrients.”
Four Different Viewpoints
Four different viewing methods are available by selecting the tabs labeled “The Web”, “One Environment”, “One Organism”, and “One Metabolite.” “The Web” view graphically displays data constrained by the selection of an environment, while the other three tabs display tabular data from three constrainable dimensions: environment, organism, and metabolite.
“You can think of the 3D datasets as a data cube,” said NERSC engineer Annette Greiner, second author on the BMC Microbiology paper. “The visualization tool allows you to slice the data cube in any direction. And each of these slices gives one of the 2D views: One Environment, One Organism, or One Metabolite.”
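Greiner's data-cube analogy can be sketched with a small 3-D array (the axis order, sizes and values below are illustrative assumptions, not WoM's actual data layout):

```python
import numpy as np

# Hypothetical cube: 3 environments x 4 organisms x 5 metabolites, where
# each value is the change in a metabolite's level (positive = released,
# negative = consumed).
rng = np.random.default_rng(1)
cube = rng.normal(size=(3, 4, 5))

env, org, met = 1, 2, 3  # indices chosen purely for illustration

# Each 2-D slice of the cube corresponds to one of the tabular views.
one_environment = cube[env, :, :]   # organisms x metabolites ("One Environment")
one_organism = cube[:, org, :]      # environments x metabolites ("One Organism")
one_metabolite = cube[:, :, met]    # environments x organisms ("One Metabolite")
```

Fixing one axis and keeping the other two is exactly the "slice the data cube in any direction" idea: each slice is a 2-D table ready to render as a heatmap.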
The most intuitive way to view the data is via The Web, which displays an overview of connections between organisms and the nutrients they act on within a selected environment. After choosing the environment from a pull-down menu, The Web provides a network diagram in which each organism is represented as a little box, each metabolite as a circle, and their interactions as connecting lines. The size of the circle scales with the number of organisms that interact with that metabolite, whereas the color and shade of the connecting line indicate the amount of increase (red) or decrease (blue) in the metabolite level due to microbial activity.
“Having a lot more connecting lines indicates there’s more going on in terms of metabolism with those compounds in the environment. You can clearly see differences in behavior between the organisms,” Greiner said. “For instance, a dense set of red lines around an organism indicates that it produces many metabolites.”
Although The Web view gives users a useful qualitative assessment of metabolite interaction patterns, the other three tabular views provide more detailed information.
The One Environment view addresses to what extent the organisms in a single environment compete or coexist with each other. The heatmap table shows which metabolites (shown in rows) are removed or added to the environment by each of the organisms (shown in columns), where the color of each table cell indicates the amount of metabolic increase or decrease. And icons identify whether pairs of organisms compete (X) or are compatible (interlocking rings) for a given metabolite.
“For example, if you’re trying to design a bioreactor and you want to know which organisms would probably work well together in the same environment, then you can look for things with interlocking rings and try to avoid the Xs,” said Greiner.
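That compete/compatible logic can be sketched as follows, assuming "compete" simply means that both organisms consume the same metabolite (the values, threshold and labels here are hypothetical, not WoM's actual rules):

```python
import numpy as np

# Hypothetical One Environment table: rows = metabolites, columns = two
# organisms. Negative values mean the organism removes (consumes) the
# metabolite; positive values mean it releases the metabolite.
changes = np.array([
    [-1.0, -0.5],   # both consume -> compete ("X")
    [-0.8,  0.6],   # one consumes, one produces -> compatible ("rings")
    [ 0.3,  0.2],   # both produce -> no competition ("rings")
])

def interaction(a_change, b_change):
    """Label the interaction between two organisms for one metabolite."""
    if a_change < 0 and b_change < 0:
        return "X"      # both consume the same nutrient: competition
    return "rings"      # at least one does not consume it: compatible

labels = [interaction(a, b) for a, b in changes]
```

Scanning the resulting labels for "rings" rather than "X" mirrors the bioreactor-design use case Greiner describes: pick organism pairs that do not fight over the same nutrients.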
Similarly, the One Organism heatmap table allows users to compare the actions of a single microbe on many metabolites across multiple environments. And users can use the One Metabolite table to compare the actions of multiple organisms on a selected metabolite in multiple environments.
“Ultimately, WoM provides a means for improving our understanding of microbial communities,” said Trent Northen, a scientist at JGI and in Berkeley Lab’s EGSB Division. “The data and visualization tools help us predict and test microbial interactions with each other and their environment.”
The WoM tools were developed iteratively using a participatory design process, in which research scientists from Northen’s lab worked directly with Greiner to identify needs and quickly try out solutions. This differed from the more traditional approach, in which Greiner would have completed a coherent design for the user interface before showing it to the scientists.
Both Greiner and Kosina agreed that collaborating was fun and productive. “Instead of going off to a corner alone trying to come up with something, it’s useful to have a user sitting on my shoulder giving me feedback in real time,” said Greiner. “Scientists often have a strong idea about what they need to see, so it pays to have frequent interactions and to work side by side.”
In addition to contributing Greiner’s expertise in data visualization and web application development, NERSC hosts WoM and stores the data. NERSC’s computing resources and well-established science gateway infrastructure should enable WoM to grow both in volume and features in a stable and reliable environment, the development team noted in the BMC Microbiology paper.
According to Greiner, the data itself doesn’t take up much storage space, but that may change. Currently, only Northen’s group can upload data, but the team hopes to support multiple user groups in the future. For now, the Berkeley Lab researchers are excited to share their data on the Web of Microbes, where it can be used by scientists all over the world. And they plan to add more data to the repository as they perform new experiments.
Kosina said it also made sense to work with NERSC on the Web of Microbes project because the Northen metabolomics lab relies on many other tools and resources at NERSC. “We already store all of our mass spectrometry data at NERSC and run our analysis software on their computing systems,” Kosina said.
Eventually, the team plans to link the Web of Microbes exometabolomics data to mass spectrometry and genomics databases such as JGI’s Genome Portal. They are also working with the DOE Systems Biology Knowledgebase (KBase) to allow users to take advantage of KBase’s predictive modeling capabilities, Northen added, which will enable researchers to determine the functions of unknown genes and predict microbial interactions.
This is a reposting of my news feature originally published by Berkeley Lab’s Computing Sciences.