AI could help radiologists improve their mammography interpretation

The guidelines for breast cancer screening are a bit confusing. The American Cancer Society recommends annual mammograms for average-risk women ages 45 and older, but other groups, such as the U.S. Preventive Services Task Force (USPSTF), recommend less aggressive breast screening.

This controversy centers on mammography’s frequent false-positive detections — or false alarms — which lead to unnecessary stress, additional imaging exams and biopsies. USPSTF argues that the harms of early and frequent mammography outweigh the benefits.

However, a recent Stanford study suggests a better way to reduce these false alarms without increasing the number of missed cancers. Using over 112,000 mammography cases collected from 13 radiologists across two teaching hospitals, the researchers developed and tested a machine-learning model that could help radiologists improve their mammography practice.

Each mammography case included the radiologist’s observations and diagnostic classification from the mammogram, the patient’s risk factors and the “ground-truth” of whether or not the patient had breast cancer based on follow-up procedures. The researchers used the data to train and evaluate their computer model.

They compared the radiologists’ performance against their machine-learning model, doing a separate analysis for each of the 13 radiologists. They found significant variability among radiologists.

Based on accepted clinical guidelines, radiologists should recommend follow-up imaging or a biopsy when a mammographic finding has at least a 2 percent probability of being malignant. However, the Stanford study found that participating radiologists effectively used thresholds ranging from 0.6 percent to 3.0 percent. In the future, similar quantitative observations could be used to identify sources of variability and to improve radiologist training, the paper said.
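The threshold rule described above amounts to a simple comparison. A minimal sketch in Python; the probability value here is hypothetical, not taken from the study:

```python
# Recall decision: recommend follow-up imaging or biopsy when the estimated
# probability of malignancy meets or exceeds the operating threshold.
GUIDELINE_THRESHOLD = 0.02  # 2 percent, per accepted clinical guidelines

def recommend_followup(prob_malignant: float,
                       threshold: float = GUIDELINE_THRESHOLD) -> bool:
    """Return True if the finding warrants follow-up imaging or biopsy."""
    return prob_malignant >= threshold

# The study found individual radiologists effectively operated at thresholds
# from 0.6% to 3.0%, so the same finding can yield different recommendations.
finding_prob = 0.015  # hypothetical 1.5% probability of malignancy
print(recommend_followup(finding_prob, threshold=0.006))  # conservative reader
print(recommend_followup(finding_prob, threshold=0.030))  # less aggressive reader
```

The variability the study measured is exactly this: different radiologists implicitly using different values of `threshold` for the same guideline.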

The study included 1,214 malignant cases, or 1.1 percent of the total. Overall, the radiologists reported 176 false negatives (cancers missed at the time of the mammogram) and 12,476 false positives, or false alarms. In comparison, the machine-learning model missed one additional cancer but cut the number of false alarms by 3,612 relative to the radiologists' assessments.
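Working through the reported counts makes the trade-off concrete. This quick sanity check uses only numbers from the study, with the case total approximated as 112,000 (the study says "over 112,000"):

```python
total_cases = 112_000   # approximate; the study used over 112,000 cases
malignant = 1_214
radiologist_fn = 176    # cancers missed by the radiologists
radiologist_fp = 12_476 # false alarms reported by the radiologists

model_fn = radiologist_fn + 1      # the model missed one additional cancer
model_fp = radiologist_fp - 3_612  # the model cut false alarms by 3,612

print(f"malignant share of cases: {malignant / total_cases:.1%}")
print(f"false-alarm reduction: {3_612 / radiologist_fp:.0%}")
print(f"model false alarms remaining: {model_fp}")
```

In other words, one extra missed cancer out of 1,214 malignancies bought roughly a 29 percent reduction in false alarms.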

The study concluded: “Our results show that we can significantly reduce screening mammography false positives with a minimal increase in false negatives.”

However, their computer model was developed using data from 1999 to 2010, the era of analog film mammography. In future work, the researchers plan to update the computer algorithm to use the newer descriptors and classifications for digital mammography and three-dimensional breast tomosynthesis.

Ross Shachter, PhD, a Stanford associate professor of management science and engineering and lead author on the paper, summarized in a recent Stanford Engineering news release, “Our approach demonstrates the potential to help all radiologists, even experts, perform better.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.


Celebrating cancer survivors by telling their stories

Photo of Martin Inderbitzen by Michael Goldstein

Neurobiologist and activist Martin Inderbitzen, PhD, began his talk with a question: “Did you ever face a life situation that was totally overwhelming?” Most of his audience likely answered yes, since he was speaking to cancer survivors and their families at a Stanford event called Celebrating Cancer Survivors.

The evening focused on life after cancer and highlighted Stanford’s Cancer Survivorship Program, which helps survivors and their families transition to life after treatment by providing multidisciplinary services and health care. Lidia Schapira, MD, a medical oncologist and director of the program, said they aim to “help people back into health.”

But to me, the heart of the event was the personal stories openly shared by the attendees while standing in line for the food buffet or waiting for the speeches to begin. As a Hodgkin’s survivor who was treated at Stanford twenty-five years ago, I swapped “cancer stories” with my comrades.

Inderbitzen understands firsthand the importance of sharing such cancer survival stories. In 2012, he was diagnosed at the age of 32 with pancreatic cancer. From an online search, he quickly learned that 95 percent of people with his type of cancer die within a few years. However, his doctor gave him hope by mentioning a similar patient who had been successfully treated some years earlier and was now happily skiing in the mountains.

“This picture of someone skiing in the mountains became my mantra,” Inderbitzen explained. “I had all these bad statistics against me, but then I also had this one story. And I thought, maybe I can also be one story, because this story was somehow the personification of a possibility. It inspired me to rethink how I saw my own situation.”

Later, Inderbitzen publicly shared his own cancer journey, which touched many people who reached out to him. This inspired him to found MySurvivalStory.org — an initiative that documents inspiring cancer survival stories to help other cancer patients better cope with their illness. He and his wife quit their jobs, raised some funds and began traveling around the globe to find and record short videos of cancer survivors from different cultures.

“We share the stories in formats that people can consume when they have ‘chemo brain’ — like podcasts you can listen to and short videos you can process even when you’re tired,” he said. He added, “These stories are powerful because they provide us with something or someone to aspire to — someone who is a bit ahead of us, so we think, ‘I can do that.’”

Inderbitzen isn’t the only one to recognize the empowering impact of telling your cancer story. For example, the Stanford Center for Integrative Medicine compiles some patient stories on its Surviving Cancer website. And all of these stories have the potential to help both the teller and the listener.

However, Inderbitzen offers the following advice when sharing your cancer story:

“Change the story you tell and you will be able to change the life you live. So that’s a very powerful concept. And I would like to challenge you and also encourage you that every day when you wake up and get out of bed and things are not looking good, remind yourself that it’s actually you who chooses which story to tell. And choosing a better story doesn’t mean that you’re ignoring reality. No, it just means that you’re giving yourself a chance.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Cleaning cosmic microwave background data to measure gravitational lensing

NERSC facilitates development of new analysis filter to better map the invisible universe

A set of cosmic microwave background 2D images with no lensing effects (top row) and with exaggerated cosmic microwave background gravitational lensing effects (bottom row). Image: Wayne Hu and Takemi Okamoto/University of Chicago

Cosmic microwave background (CMB) radiation is everywhere in the universe, but its frigid (-460° F), low-energy microwaves are invisible to the human eye. So cosmologists use specialized telescopes to map out the temperature spectrum of this relic radiation — left over from the Big Bang — to learn about the origin and structure of galaxy clusters and dark matter.

Gravity from distant galaxies causes tiny distortions in the CMB temperature maps, a process called gravitational lensing. These distortions are detected by data analysis software run on supercomputers like the Cori system at Lawrence Berkeley National Laboratory’s (Berkeley Lab’s) National Energy Research Scientific Computing Center (NERSC). Unfortunately, this temperature data is often corrupted by “foreground emissions” from extragalactic dust, gas and other noise sources that are challenging to model.

“CMB images get distorted by gravitational lensing. This distortion is not a nuisance, it’s the signal we’re trying to measure,” said Emmanuel Schaan, a postdoctoral researcher in the Physics Division at Berkeley Lab. “However, various foreground emissions always contaminate CMB maps. These foregrounds are nuisances because they can mimic the effect of lensing and bias our lensing measurements. So we developed a new method for analyzing CMB data that is largely immune to the foreground noise effects.”

Schaan collaborated with Simone Ferraro, a Divisional Fellow in Berkeley Lab’s Physics Division, to develop their new statistical method, which is described in a paper published May 8, 2019 in Physical Review Letters.

“Our paper is mostly theoretical, but we also demonstrated that the method works on realistic simulations of the microwave sky previously generated by Neelima Sehgal and her collaborators,” Schaan said.

These publicly available simulations were originally generated using computing resources at the National Science Foundation’s TeraGrid project and Princeton University’s TIGRESS file system. Sehgal’s team ran three-dimensional N-body simulations of the gravitational evolution of dark matter in the universe, which they then converted into two-dimensional (2D) simulated maps of various components of the microwave sky at different frequencies — including 2D temperature maps of foreground emissions.

These 2D images show different types of foreground emissions that can interfere with CMB lensing measurements, as simulated by Neelima Sehgal and collaborators. From left to right: The cosmic infrared background, composed of intergalactic dust; radio point sources, or radio emission from other galaxies; the kinematic Sunyaev-Zel’dovich effect, a product of gas in other galaxies; and the thermal Sunyaev-Zel’dovich effect, which also relates to gas in other galaxies. Image: Emmanuel Schaan and Simone Ferraro/Berkeley Lab

Testing Theory on Simulated Data

NERSC provided resources that weren’t otherwise available to the team. Schaan and Ferraro applied their new analysis method to the existing 2D CMB temperature maps using NERSC. They wrote their analysis code in Python and used a library called pathos to run across multiple nodes in parallel. The final runs that generated all the published results were performed on NERSC’s Cori supercomputer.

“As we progressively improved our analysis, we had to test the improved methods,” Schaan said. “Having access to NERSC was very useful for us.”

The Berkeley Lab researchers did many preliminary runs on NERSC’s Edison supercomputer before it was decommissioned, because the wait time in the Edison queue was much shorter than in the Cori queues. Schaan said they haven’t yet optimized the code for Cori’s energy-efficient, many-core KNL nodes, but they need to do that soon.

It might be time to speed up that code given their future research plans. Schaan and Ferraro are still perfecting their analysis, so they may need to run an improved method on the same 2D CMB simulations using NERSC. They also hope to begin working with real CMB data.

“In the future, we want to apply our method to CMB data from Simons Observatory and CMB S4, two upcoming CMB experiments that will have data in a few years. For that data, the processing will very likely be done on NERSC,” Schaan said.

NERSC is a U.S. Department of Energy Office of Science User Facility.

For more information, see this Berkeley Lab news release: A New Filter to Better Map the Dark Universe.

This is a reposting of my news feature originally published by Berkeley Lab’s Computing Sciences.

Physicians need to be educated about marijuana, resident argues


Photo by 7raysmarketing

Nathaniel Morris, MD, a resident in psychiatry at Stanford, said he learned almost nothing about marijuana during medical school. Its absence made some sense, he explained in a recent JAMA Internal Medicine editorial: why focus on marijuana when physicians must worry about medical emergencies such as cardiac arrest, sepsis, pulmonary embolisms and opioid overdoses?

However, marijuana use has dramatically changed in the few years since he earned his medical degree, he pointed out. Thirty-three states and Washington, D.C. have now passed laws legalizing some form of marijuana use, including 10 states that have legalized recreational use. And the resulting prevalence of marijuana has wide-ranging impacts in the clinic.

“In the emergency department, I’ve come to expect that results of urine drug screens will be positive for tetrahydrocannabinol (THC), whether the patient is 18 years old or 80 years old,” he said in the editorial. “When I review medications at the bedside, some patients and families hold out THC gummies or cannabidiol capsules, explaining dosages or ratios of ingredients used to treat symptoms, including pain, insomnia, nausea, or poor appetite.” He added that other patients come to the ED after having panic attacks or psychotic symptoms and physicians have to figure out whether marijuana is involved.

Marijuana also impacts inpatient units. Morris noted that some patients smuggle in marijuana and smoke it in their rooms, while others who abruptly stop their use upon entering the hospital experience withdrawal symptoms like sleep disturbances and restlessness.

The real problem, he said, is that many physicians are unprepared and poorly educated about marijuana and its health effects. This is in part because government restrictions have made it difficult to study marijuana, so there is limited research to guide clinical decisions.

Although people have used marijuana to treat various health conditions for years, the U.S. Food and Drug Administration (FDA) has not approved the cannabis plant for treating any health problems. The FDA has approved three cannabinoid-based drugs: a cannabidiol oral solution used to treat a rare form of epileptic seizures and two synthetic cannabinoids used to treat nausea and vomiting associated with cancer chemotherapy or loss of appetite in people with AIDS.

In January 2017, the National Academies of Sciences, Engineering, and Medicine published a report that summarizes the current clinical evidence on the therapeutic effects and harmful side effects of marijuana products. However, more and higher-quality research is needed, Morris said.

Physicians also need to be educated about marijuana through dedicated coursework in medical school and ongoing continuing medical education activities, he said. Morris noted that physicians should receive instruction pertinent to their fields — such as gastroenterology fellows learning about marijuana’s potential effects on nausea or psychiatry residents learning about associations between marijuana and psychosis.

“These days, I find myself reading new studies about the health effects of marijuana products, attending grand rounds on medical marijuana, and absorbing tips from clinicians who have more experience related to marijuana and patient care than I do,” Morris said. “Still, I suspect that talking with patients about marijuana use and what it means to them will continue to teach me the most.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.