Predicting women at risk of preeclampsia before clinical symptoms

Many of my female friends became pregnant with their first child in their late 30s or early 40s, which increased their risk of common complications such as high blood pressure, gestational diabetes and preeclampsia.

Affecting over 8 million women worldwide, preeclampsia can lead to serious, even fatal, complications for both the mother and baby. The clinical symptoms of preeclampsia typically start after 20 weeks of pregnancy and include high blood pressure and signs of kidney or liver damage.

“Once these clinical symptoms appear, irreparable harm to the mother or the fetus may have already occurred,” said Stanford immunologist Brice Gaudilliere, MD, PhD.  “The only available diagnostic blood test for preeclampsia is a proteomic test that measures a ratio of two proteins. While this test is good at ruling out preeclampsia once clinical symptoms have occurred, it has a poor positive predictive value.”

Now, Stanford researchers are working to develop a diagnostic blood test that can accurately predict preeclampsia prior to the onset of clinical symptoms.

A new study conducted at Stanford was led by senior authors Gaudilliere, statistical innovator Nima Aghaeepour, PhD, and clinical trial specialist Martin Angst, MD, and co-first authors and postdoctoral fellows Xiaoyuan Han, PhD, and Sajjad Ghaemi, PhD. Their results were recently published in Frontiers in Immunology.

They analyzed blood samples from 11 women who developed preeclampsia and 12 women with normal blood pressure during pregnancy. These samples were obtained at two timepoints, allowing the scientists to measure how immune cells behaved over time during pregnancy.

“Unlike prior studies that typically assessed just a few select immune cell types in the blood at a single timepoint during pregnancy, our study focused on immune cell dynamics,” Gaudilliere explained. “We utilized a powerful method called mass cytometry, which measured the distribution and functional behavior of virtually all immune cell types present in the blood samples.”

The team identified a set of eight immune cell responses that accurately predicted which of the women would develop preeclampsia — typically 13 weeks before clinical diagnosis.
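To make the idea of predicting from a small panel of immune features concrete, here is a minimal sketch, assuming a generic regularized classifier evaluated with leave-one-out cross-validation on a cohort of this size; it is not the authors' actual model, and the feature values below are random placeholders.

```python
# Minimal sketch (not the study's pipeline): classify preeclampsia from a few
# immune-cell features, using leave-one-out cross-validation to suit a small cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n_women, n_features = 23, 8                   # 23 women, 8 immune cell responses
X = rng.normal(size=(n_women, n_features))    # placeholder feature matrix
y = np.array([1] * 11 + [0] * 12)             # 1 = developed preeclampsia

model = LogisticRegression(max_iter=1000)
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
print("Leave-one-out accuracy:", (pred == y).mean())
```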

At the top of their list was a signaling protein called STAT5. They observed higher activity of STAT5 in CD4+ T-cells, which help regulate the immune system, at the beginning of pregnancy for all but one patient who developed preeclampsia.

“Pregnancy is an amazing immunological phenomenon where the mother’s immune system ‘tolerates’ the fetus, a foreign entity, for nine months,” said Angst. “Our findings are consistent with past studies that found preeclampsia to be associated with increased inflammation and decreased immune tolerance towards the fetus.”

Although their results are encouraging, more research is needed before translating them to the clinic.

The authors explained that mass cytometry is a great tool to find the “needle in the haystack.” It allowed them to survey the entire immune system and identify the key elements that could predict preeclampsia, but it is an exploratory platform not suitable for the clinic, they said.

“Now that we have identified the elements of a diagnostic immunoassay, we can use conventional instruments such as those used in the clinic to measure them in a patient’s blood sample,” Aghaeepour said.

First, though, the team needs to validate their findings in a large, multi-center study. They are also using machine learning to develop a “multiomics” model that integrates these mass cytometry measurements with other biological analysis approaches. And they are investigating how to objectively define different subtypes of preeclampsia.

Their goal is to accurately diagnose preeclampsia before the onset of clinical symptoms.

 “Diagnosing preeclampsia early would help ensure that patients at highest risk have access to health care facilities, are evaluated more frequently by obstetricians specialized in high-risk pregnancies and receive treatment,” said Gaudilliere.

Women with preeclampsia can receive care through the obstetric clinic at Lucile Packard Children’s Hospital Stanford.

Photo by Pilirodriquez

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.


Explaining neuroscience in ongoing Instagram video series: A Q&A

At the beginning of the year, Stanford neuroscientist Andrew Huberman, PhD, pledged to post on Instagram one-minute educational videos about neuroscience for an entire year. Since a third of his regular followers come from Spanish-speaking countries, he posts them in both English and Spanish. We spoke soon after he launched the project. And now that half the year is over, I checked in with him about his New Year’s resolution.

How is your Instagram project going?

“It’s going great. I haven’t kept up with the frequency of posts that I initially set out to do, but it’s been relatively steady. The account has grown to about 13,500 followers and there is a lot of engagement. They ask great questions and the vast majority of comments indicate to me that people understand and appreciate the content. I’m really grateful for my followers. Everyone’s time is valuable and the fact that they comment and seem to enjoy the content is gratifying.”

What have you learned?

“The feedback informed me that 60 seconds of information is a lot for some people, especially if the topic requires new terms. That was surprising. So I have opted to do shorter 45-second videos and those get double or more views and reposts. I also have started posting images and videos of brains and such with ‘voice over’ content. It’s more work to produce, but people seem to like that more than the ‘professor talking’ videos.

“I still get the ‘you need to blink more!’ comments, but fortunately that has tapered off. My Spanish is also getting better but I’m still not fluent. Neural plasticity takes time but I’ll get there.”

What is your favorite video so far?

“People naturally like the videos that provide something actionable for their health and well-being. The brief series on light and circadian rhythms was especially popular, as well as the one on how looking at the blue light from your cell phone in the middle of the night can potentially alter sleep and mood. I particularly enjoyed making that post since it combined vision science and mental health, which is one of my lab’s main focuses.”

What are you planning for the rest of the year?

“I’m kicking off some longer content through the Instagram TV format, which will allow people who want more in-depth information to get that. I’m also helping The Society for Neuroscience get their message out about their annual meeting. Other than that, I’m just going to keep grinding away at delivering what I think is interesting neuroscience to people that would otherwise not hear about it.”

Is it fun or an obligation at this point?

“There are days where other things take priority of course — research, teaching and caring for my bulldog Costello — but I have to do it anyway since I promised I’d post. However, it’s always fun once I get started. If only I could get Costello to fill in for me when I get busy…”

This is a reposting of my Scope story, courtesy of Stanford School of Medicine.

Simplified analysis method could lead to improved prosthetics, a Stanford study suggests

Brain-machine interfaces (BMIs) are an emerging technology at the intersection of neuroscience and engineering that may improve the quality of life for amputees and individuals with paralysis. These patients are unable to get signals from their motor cortex — the part of the brain that normally controls movement — to their muscles.

Researchers are overcoming this disconnect by implanting small electrode arrays in the brain that measure the electrical activity of neurons in the motor cortex. The sensors’ electrical signals are transmitted via a cable to a computer, where they are decoded and translated into commands that control a computer cursor or prosthetic limb. Someday, scientists also hope to eliminate the cable, using wireless brain sensors to control prosthetics.
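As a rough illustration of that decoding step, here is a simplified sketch that maps one time bin of per-channel spike counts to a 2D cursor velocity with a linear decoder; real systems typically use carefully calibrated Kalman-filter or neural-network decoders, and the weights and data below are placeholders.

```python
# Simplified sketch of the decoding step: binned neural activity -> cursor velocity.
import numpy as np

rng = np.random.default_rng(1)
n_channels = 96                            # e.g., a 96-channel electrode array
W = rng.normal(size=(2, n_channels))       # decoder weights (normally fit during calibration)
b = np.zeros(2)                            # offset term

def decode_velocity(spike_counts):
    """Convert one time bin of per-channel spike counts into (vx, vy)."""
    return W @ spike_counts + b

spike_counts = rng.poisson(lam=2.0, size=n_channels)   # fake binned neural activity
velocity = decode_velocity(spike_counts)
cursor_position = np.zeros(2) + 0.05 * velocity        # integrate velocity over the bin
```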

In order to realize this dream, however, they need to improve both the brain sensors and the algorithms used to decode the neural signals. Stanford electrical engineer Krishna Shenoy, PhD, and his collaborators are tackling this algorithm challenge, as described in a recent paper in Neuron.

Currently, most neuroscientists process their BMI data by looking for “spikes” of electrical activity from individual neurons. But this spike sorting requires either time-consuming manual curation or computationally intensive automatic methods, both of which are prone to errors.

Manual data sorting will also become unrealistic for future technologies, which are expected to record thousands to millions of electrode channels compared to the several hundred channels recorded by today’s state-of-the-art sensors. For example, a dataset composed of 1,000 channels could take over 100 hours to hand sort, the paper says. In addition, neuroscientists would like to measure a greater brain volume for longer durations.

So, how can they decode all of this data?

Shenoy suggests simplifying the data analysis by eliminating spike sorting for applications that depend on the activity of neural populations rather than single neurons — such as brain-machine interfaces for prosthetics.
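One common way to drop spike sorting, broadly in the spirit of this approach, is to count per-channel threshold crossings (multi-unit activity) in short time bins and feed those counts to a population-level decoder. The sketch below illustrates that idea; the threshold of -4.5 times the RMS voltage and the 20-millisecond bins are conventional choices, not necessarily the settings used in the paper.

```python
# Illustrative sketch: count per-channel threshold crossings in time bins,
# without assigning spikes to individual neurons.
import numpy as np

def threshold_crossing_counts(voltage, fs, bin_ms=20, k=-4.5):
    """voltage: (n_channels, n_samples) array of band-pass filtered signals."""
    rms = np.sqrt(np.mean(voltage**2, axis=1, keepdims=True))
    below = (voltage < k * rms).astype(int)               # samples below -4.5 x RMS
    crossings = np.diff(below, axis=1) == 1               # downward threshold crossings
    bin_samples = int(fs * bin_ms / 1000)
    n_bins = crossings.shape[1] // bin_samples
    trimmed = crossings[:, : n_bins * bin_samples]
    return trimmed.reshape(voltage.shape[0], n_bins, bin_samples).sum(axis=2)

fs = 30_000                                               # 30 kHz sampling rate
signal = np.random.default_rng(2).normal(size=(96, fs))   # 1 second of fake data
counts = threshold_crossing_counts(signal, fs)            # shape: (n_channels, n_bins)
```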

In their new study, the Stanford team investigated whether eliminating this spike sorting step distorted BMI data. Turning to statistics, they developed an analysis method that retains accuracy while extracting information from groups rather than individual neurons. Using experimental data from three previous animal studies, they demonstrated that their algorithms could accurately decode neural activity with minimal distortion — even when each BMI electrode channel measured several neurons. They also validated these experimental results with theory.

 “This study has a bit of a hopeful message in that observing activity in the brain turns out to be easier than we initially expected,” says Shenoy in a recent Stanford Engineering news release. The researchers hope their work will guide the design and use of new low-power, higher-density devices for clinical applications since their simplified analysis method reduces the storage and processing requirements.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Photo by geralt.

AI could help radiologists improve their mammography interpretation

The guidelines for screening women for breast cancer are a bit confusing. The American Cancer Society recommends annual mammograms for women older than 45 years with average risk, but other groups like the U.S. Preventive Services Task Force (USPSTF) recommend less aggressive breast screening.

This controversy centers on mammography’s frequent false-positive detections — or false alarms — which lead to unnecessary stress, additional imaging exams and biopsies. USPSTF argues that the harms of early and frequent mammography outweigh the benefits.

However, a recent Stanford study suggests a better way to reduce these false alarms without increasing the number of missed cancers. Using over 112,000 mammography cases collected from 13 radiologists across two teaching hospitals, the researchers developed and tested a machine-learning model that could help radiologists improve their mammography practice.

Each mammography case included the radiologist’s observations and diagnostic classification from the mammogram, the patient’s risk factors and the “ground-truth” of whether or not the patient had breast cancer based on follow-up procedures. The researchers used the data to train and evaluate their computer model.
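As a hedged sketch of the general setup, and not necessarily the form or feature set of the Stanford model, one could learn a probability of malignancy from the radiologist's observations plus patient risk factors, using the biopsy-confirmed outcome as the label. The file and column names below are hypothetical.

```python
# Sketch only: train a model that outputs a probability of malignancy per finding.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("mammography_cases.csv")            # hypothetical data file
features = ["mass_shape", "calcifications", "breast_density",
            "patient_age", "family_history"]         # hypothetical observation and risk columns
X = pd.get_dummies(df[features], columns=["mass_shape", "calcifications"])
y = df["biopsy_confirmed_cancer"]                    # ground-truth label from follow-up

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
p_malignant = model.predict_proba(X_test)[:, 1]      # estimated probability of malignancy
```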

They compared the radiologists’ performance against their machine-learning model, doing a separate analysis for each of the 13 radiologists. They found significant variability among radiologists.

Based on accepted clinical guidelines, radiologists should recommend follow-up imaging or a biopsy when a mammographic finding has at least a two percent probability of being malignant. However, the Stanford study found that the participating radiologists’ thresholds varied from 0.6 to 3.0 percent. In the future, similar quantitative observations could be used to identify sources of variability and to improve radiologist training, the paper said.
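The sketch below, which uses made-up numbers rather than the study's data, shows how moving that recall threshold trades false alarms against missed cancers; the probabilities stand in for model outputs like those in the previous sketch.

```python
# Illustrative only: how the recall threshold shifts false positives vs. false negatives.
import numpy as np

def recall_stats(p_malignant, has_cancer, threshold):
    recall = p_malignant >= threshold                # recommend follow-up imaging or biopsy
    false_positives = np.sum(recall & ~has_cancer)   # false alarms
    false_negatives = np.sum(~recall & has_cancer)   # missed cancers
    return false_positives, false_negatives

rng = np.random.default_rng(0)
p_malignant = rng.beta(0.5, 40, size=10_000)         # fake predicted probabilities
has_cancer = rng.random(10_000) < p_malignant        # fake ground truth

for threshold in (0.006, 0.02, 0.03):                # the 0.6 to 3.0 percent range above
    print(threshold, recall_stats(p_malignant, has_cancer, threshold))
```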

The study included 1,214 malignant cases, which represents about 1.1 percent of the total. Overall, the radiologists reported 176 false negatives, meaning cancers missed at the time of the mammograms, and 12,476 false positives, or false alarms. In comparison, the machine-learning model missed one additional cancer but decreased the number of false alarms by 3,612 cases relative to the radiologists’ assessments.

The study concluded: “Our results show that we can significantly reduce screening mammography false positives with a minimal increase in false negatives.”

However, their computer model was developed using data from 1999 to 2010, the era of analog film mammography. In future work, the researchers plan to update the computer algorithm to use the newer descriptors and classifications for digital mammography and three-dimensional breast tomosynthesis.

Ross Shachter, PhD, a Stanford associate professor of management science and engineering and lead author on the paper, summarized in a recent Stanford Engineering news release, “Our approach demonstrates the potential to help all radiologists, even experts, perform better.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Celebrating cancer survivors by telling their stories

Photo of Martin Inderbitzen by Michael Goldstein

Neurobiologist and activist Martin Inderbitzen, PhD, began his talk with a question: “Did you ever face a life situation that was totally overwhelming?” Most of his audience likely answered yes, since he was speaking to cancer survivors and their families at a Stanford event called Celebrating Cancer Survivors.

The evening focused on life after cancer and highlighted Stanford’s Cancer Survivorship Program, which helps survivors and their families transition to life after treatment by providing multidisciplinary services and health care. Lidia Schapira, MD, a medical oncologist and director of the program, said they aim to  “help people back into health.”

But to me, the heart of the event was the personal stories openly shared by the attendees while standing in line for the food buffet or waiting for the speeches to begin. As a Hodgkin’s survivor who was treated at Stanford twenty-five years ago, I swapped “cancer stories” with my comrades.

Inderbitzen understands firsthand the importance of sharing such cancer survival stories. In 2012, he was diagnosed at the age of 32 with pancreatic cancer. From an online search, he quickly learned that 95 percent of people with his type of cancer die within a few years. However, his doctor gave him hope by mentioning a similar patient, who was successfully treated some years earlier and is now happily skiing in the mountains.

“This picture of someone skiing in the mountains became my mantra,” Inderbitzen explained. “I had all these bad statistics against me, but then I also had this one story. And I thought, maybe I can also be one story, because this story was somehow the personification of a possibility. It inspired me to rethink how I saw my own situation.”

Later, Inderbitzen publicly shared his own cancer journey, which touched many people who reached out to him. This inspired him to found MySurvivalStory.org — an initiative that documents inspiring cancer survival stories to help other cancer patients better cope with their illness. He and his wife quit their jobs, raised some funds and began traveling around the globe to find and record short videos of cancer survivors from different cultures.

“We share the stories in formats that people can consume when they have ‘chemo brain’ — like podcasts you can listen to and short videos you can process even when you’re tired,” he said. He added, “These stories are powerful because they provide us with something or someone to aspire to — someone who is a bit ahead of us, so we think ‘I can do that.’”

Inderbitzen isn’t the only one to recognize the empowering impact of telling your cancer story. For example, the Stanford Center for Integrative Medicine compiles some patient stories on their Surviving Cancer website. And all of these stories have the potential to help both the teller and listener.

However, Inderbitzen offers the following advice when sharing your cancer story:

“Change the story you tell and you will be able to change the life you live. So that’s a very powerful concept. And I would like to challenge you and also encourage you that every day when you wake up and get out of bed and things are not looking good, remind yourself that it’s actually you who chooses which story to tell. And choosing a better story doesn’t mean that you’re ignoring reality. No, it just means that you’re giving yourself a chance.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Cleaning cosmic microwave background data to measure gravitational lensing

NERSC facilitates development of new analysis filter to better map the invisible universe

A set of cosmic microwave background 2D images with no lensing effects (top row) and with exaggerated cosmic microwave background gravitational lensing effects (bottom row). Image: Wayne Hu and Takemi Okamoto/University of Chicago

Cosmic microwave background (CMB) radiation is everywhere in the universe, but its frigid (about -455° F), low-energy microwaves are invisible to the human eye. So cosmologists use specialized telescopes to map out the temperature of this relic radiation — left over from the Big Bang — to learn about the origin and structure of galaxy clusters and dark matter.

Gravity from distant galaxies causes tiny distortions in the CMB temperature maps, a process called gravitational lensing. These distortions are detected by data analysis software run on supercomputers like the Cori system at Lawrence Berkeley National Laboratory’s (Berkeley Lab’s) National Energy Research Scientific Computing (NERSC) facility. Unfortunately, this temperature data is often corrupted by “foreground emissions” from extragalactic dust, gas, and other noise sources that are challenging to model.

“CMB images get distorted by gravitational lensing. This distortion is not a nuisance, it’s the signal we’re trying to measure,” said Emmanuel Schaan, a postdoctoral researcher in the Physics Division at Berkeley Lab. “However, various foreground emissions always contaminate CMB maps. These foregrounds are nuisances because they can mimic the effect of lensing and bias our lensing measurements. So we developed a new method for analyzing CMB data that is largely immune to the foreground noise effects.”

Schaan collaborated with Simone Ferraro, a Divisional Fellow in Berkeley Lab’s Physics Division, to develop their new statistical method, which is described in a paper published May 8, 2019 in Physical Review Letters.

“Our paper is mostly theoretical, but we also demonstrated that the method works on realistic simulations of the microwave sky previously generated by Neelima Sehgal and her collaborators,” Schaan said.

These publicly available simulations were originally generated using computing resources at the National Science Foundation’s TeraGrid project and Princeton University’s TIGRESS file system. Sehgal’s team ran N-body three-dimensional simulations of the gravitational evolution of dark matter in the universe, which they then converted into two-dimensional (2D) simulated maps of various components of the microwave sky at different frequencies — including 2D temperature maps of foreground emissions.

These 2D images show different types of foreground emissions that can interfere with CMB lensing measurements, as simulated by Neelima Sehgal and collaborators. From left to right: The cosmic infrared background, composed of intergalactic dust; radio point sources, or radio emission from other galaxies; the kinematic Sunyaev-Zel’dovich effect, a product of gas in other galaxies; and the thermal Sunyaev-Zel’dovich effect, which also relates to gas in other galaxies. Image: Emmanuel Schaan and Simone Ferraro/Berkeley Lab

Testing Theory On Simulated Data

NERSC provided resources that weren’t otherwise available to the team. Schaan and Ferraro applied their new analysis method to the existing 2D simulated CMB temperature maps using NERSC. They wrote their analysis code in Python and used a library called pathos to run it across multiple nodes in parallel. The final runs that generated all of the published results were performed on NERSC’s Cori supercomputer.
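The parallelization pattern might look something like the following sketch; the analysis function is a hypothetical stand-in for the authors' lensing estimator, not their actual code, and the file names are invented.

```python
# Sketch of parallelizing a per-map analysis with pathos worker processes.
import numpy as np
from pathos.multiprocessing import ProcessingPool as Pool

def analyze_map(filename):
    """Placeholder analysis: load one simulated temperature map, return a summary statistic."""
    cmb_map = np.load(filename)
    return filename, float(cmb_map.var())

map_files = [f"sim_map_{i:03d}.npy" for i in range(100)]   # hypothetical simulation files

if __name__ == "__main__":
    pool = Pool(nodes=8)                                   # 8 worker processes
    results = pool.map(analyze_map, map_files)
    pool.close()
    pool.join()
```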

“As we progressively improved our analysis, we had to test the improved methods,” Schaan said. “Having access to NERSC was very useful for us.”

The Berkeley Lab researchers did many preliminary runs on NERSC’s Edison supercomputer before it was decommissioned, because the wait times in the Edison queue were much shorter than in the Cori queues. Schaan said they haven’t yet optimized the code for Cori’s energy-efficient, many-core KNL nodes, but they will need to do that soon.

It might be time to speed up that code given their future research plans. Schaan and Ferraro are still perfecting their analysis, so they may need to run an improved method on the same 2D CMB simulations using NERSC. They also hope to begin working with real CMB data.

“In the future, we want to apply our method to CMB data from Simons Observatory and CMB S4, two upcoming CMB experiments that will have data in a few years. For that data, the processing will very likely be done on NERSC,” Schaan said.

NERSC is a U.S. Department of Energy Office of Science User Facility.

For more information, see this Berkeley Lab news release: A New Filter to Better Map the Dark Universe.

This is a reposting of my news feature originally published by Berkeley Lab’s Computing Sciences.

Physicians need to be educated about marijuana, resident argues


Photo by 7raysmarketing

Nathaniel Morris, MD, a resident in psychiatry at Stanford, said he learned almost nothing about marijuana during medical school. Its absence made some sense, he explained in a recent JAMA Internal Medicine editorial: why focus on marijuana when physicians must worry about medical emergencies such as cardiac arrest, sepsis, pulmonary embolisms and opioid overdoses?

However, marijuana use has dramatically changed in the few years since he earned his medical degree, he pointed out. Thirty-three states and Washington, D.C. have now passed laws legalizing some form of marijuana use, including 10 states that have legalized recreational use. And the resulting prevalence of marijuana has wide-ranging impacts in the clinic.

“In the emergency department, I’ve come to expect that results of urine drug screens will be positive for tetrahydrocannabinol (THC), whether the patient is 18 years old or 80 years old,” he said in the editorial. “When I review medications at the bedside, some patients and families hold out THC gummies or cannabidiol capsules, explaining dosages or ratios of ingredients used to treat symptoms, including pain, insomnia, nausea, or poor appetite.” He added that other patients come to the ED after having panic attacks or psychotic symptoms and physicians have to figure out whether marijuana is involved.

Marijuana also impacts inpatient units. Morris described how some patients smuggle in marijuana and smoke it in their rooms, while others who abruptly stop their use upon entering the hospital experience withdrawal symptoms like sleep disturbances and restlessness.

The real problem, he said, is that many physicians are unprepared and poorly educated about marijuana and its health effects. This is in part because government restrictions have made it difficult to study marijuana, so there is limited research to guide clinical decisions.

Although people have used marijuana to treat various health conditions for years, the U.S. Food and Drug Administration (FDA) has not approved the cannabis plant for treating any health problems. The FDA has approved three cannabinoid-based drugs: a cannabidiol oral solution used to treat a rare form of epileptic seizures and two synthetic cannabinoids used to treat nausea and vomiting associated with cancer chemotherapy or loss of appetite in people with AIDS.

In January 2017, the National Academies of Sciences, Engineering, and Medicine published a report that summarizes the current clinical evidence on the therapeutic effects and harmful side effects of marijuana products. However, more and higher-quality research is needed, Morris said.

Physicians also need to be educated about marijuana through dedicated coursework in medical school and ongoing continuing medical education activities, he said. Morris noted that physicians should receive instruction pertinent to their fields — such as gastroenterology fellows learning about marijuana’s potential effects on nausea or psychiatry residents learning about associations between marijuana and psychosis.

“These days, I find myself reading new studies about the health effects of marijuana products, attending grand rounds on medical marijuana, and absorbing tips from clinicians who have more experience related to marijuana and patient care than I do,” Morris said. “Still, I suspect that talking with patients about marijuana use and what it means to them will continue to teach me the most.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.