AI could help radiologists improve their mammography interpretation

The guidelines for screening women for breast cancer are a bit confusing. The American Cancer Society recommends annual mammograms for women older than 45 years with average risk, but other groups like the U.S. Preventive Services Task Force (USPSTF) recommend less aggressive breast screening.

This controversy centers on mammography’s frequent false-positive detections — or false alarms — which lead to unnecessary stress, additional imaging exams and biopsies. USPSTF argues that the harms of early and frequent mammography outweigh the benefits.

However, a recent Stanford study suggests a better way to reduce these false alarms without increasing the number of missed cancers. Using over 112,000 mammography cases collected from 13 radiologists across two teaching hospitals, the researchers developed and tested a machine-learning model that could help radiologists improve their mammography practice.

Each mammography case included the radiologist’s observations and diagnostic classification from the mammogram, the patient’s risk factors and the “ground-truth” of whether or not the patient had breast cancer based on follow-up procedures. The researchers used the data to train and evaluate their computer model.

They compared the radiologists’ performance against their machine-learning model, doing a separate analysis for each of the 13 radiologists. They found significant variability among radiologists.

Based on accepted clinical guidelines, radiologists should recommend follow-up imaging or a biopsy when a mammographic finding has a two percent or greater probability of being malignant. However, the Stanford study found that participating radiologists used thresholds that varied from 0.6 percent to 3.0 percent. In the future, similar quantitative observations could be used to identify sources of variability and to improve radiologist training, the paper said.

The study included 1,214 malignant cases, representing 1.1 percent of the total. Overall, the radiologists reported 176 false negatives (cancers missed at the time of the mammogram) and 12,476 false positives, or false alarms. In comparison, the machine-learning model missed one additional cancer but reduced the number of false alarms by 3,612 cases relative to the radiologists’ assessment.
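The story doesn’t include the model itself, but the tradeoff it describes, in which the recall threshold determines how many false alarms and missed cancers occur, can be sketched in a few lines of Python. The predicted probabilities below are synthetic and the threshold values simply mirror the numbers quoted above; this is an illustration of the threshold tradeoff, not the Stanford model.

```python
# Illustrative sketch only: how a probability-of-malignancy threshold translates
# into false positives and false negatives. The probabilities and labels are
# synthetic; the Stanford model and data are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

n_cases = 10_000
labels = rng.random(n_cases) < 0.011  # ~1.1% malignant, as in the study
# Hypothetical predicted probabilities of malignancy (higher for true cancers).
probs = np.clip(rng.beta(1, 60, n_cases) + labels * rng.beta(2, 8, n_cases), 0, 1)

def recall_counts(threshold):
    """Count false positives and false negatives at a given recall threshold."""
    recalled = probs >= threshold                 # recommend follow-up imaging or biopsy
    false_pos = int(np.sum(recalled & ~labels))   # benign cases recalled
    false_neg = int(np.sum(~recalled & labels))   # cancers not recalled
    return false_pos, false_neg

# The guideline threshold is 2 percent; the study found radiologists effectively
# operated anywhere from 0.6 to 3.0 percent.
for t in (0.006, 0.02, 0.03):
    fp, fn = recall_counts(t)
    print(f"threshold {t:.1%}: {fp} false positives, {fn} false negatives")
```

Lowering the threshold catches more cancers but recalls more healthy patients; the study reports that its model reduced false alarms substantially with only a minimal increase in missed cancers.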

The study concluded: “Our results show that we can significantly reduce screening mammography false positives with a minimal increase in false negatives.”

However, their computer model was developed using data from 1999 to 2010, the era of analog film mammography. In future work, the researchers plan to update the computer algorithm to use the newer descriptors and classifications for digital mammography and three-dimensional breast tomosynthesis.

Ross Shachter, PhD, a Stanford associate professor of management science and engineering and lead author on the paper, summarized in a recent Stanford Engineering news release, “Our approach demonstrates the potential to help all radiologists, even experts, perform better.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.


Celebrating cancer survivors by telling their stories

Photo of Martin Inderbitzen by Michael Goldstein

Neurobiologist and activist Martin Inderbitzen, PhD, began his talk with a question: “Did you ever face a life situation that was totally overwhelming?” Most of his audience likely answered yes, since he was speaking to cancer survivors and their families at a Stanford event called Celebrating Cancer Survivors.

The evening focused on life after cancer and highlighted Stanford’s Cancer Survivorship Program, which helps survivors and their families transition to life after treatment by providing multidisciplinary services and health care. Lidia Schapira, MD, a medical oncologist and director of the program, said they aim to “help people back into health.”

But to me, the heart of the event was the personal stories openly shared by the attendees while standing in line for the food buffet or waiting for the speeches to begin. As a Hodgkin’s survivor who was treated at Stanford twenty-five years ago, I swapped “cancer stories” with my comrades.

Inderbitzen understands firsthand the importance of sharing such cancer survival stories. In 2012, he was diagnosed at the age of 32 with pancreatic cancer. From an online search, he quickly learned that 95 percent of people with his type of cancer die within a few years. However, his doctor gave him hope by mentioning a similar patient, who was successfully treated some years earlier and is now happily skiing in the mountains.

“This picture of someone skiing in the mountains became my mantra,” Inderbitzen explained. “I had all these bad statistics against me, but then I also had this one story. And I thought, maybe I can also be one story, because this story was somehow the personification of a possibility. It inspired me to rethink how I saw my own situation.”

Later, Inderbitzen publicly shared his own cancer journey, which touched many people who reached out to him. This inspired him to found MySurvivalStory.org — an initiative that documents inspiring cancer survival stories to help other cancer patients better cope with their illness. He and his wife quit their jobs, raised some funds and began traveling around the globe to find and record short videos of cancer survivors from different cultures.

“We share the stories in formats that people can consume when they have ‘chemo brain’ — like podcasts you can listen to and short videos you can process even when you’re tired,” he said. He added, “These stories are powerful because they provide us with something or someone to aspire to — someone who is a bit ahead of us, so we think ‘I can do that.’”

Inderbitzen isn’t the only one to recognize the empowering impact of telling your cancer story. For example, the Stanford Center for Integrative Medicine compiles some patient stories on their Surviving Cancer website. And all of these stories have the potential to help both the teller and listener.

However, Inderbitzen offers the following advice when sharing your cancer story:

“Change the story you tell and you will be able to change the life you live. So that’s a very powerful concept. And I would like to challenge you and also encourage you that every day when you wake up and get out of bed and things are not looking good, remind yourself that it’s actually you who chooses which story to tell. And choosing a better story doesn’t mean that you’re ignoring reality. No, it just means that you’re giving yourself a chance.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Cleaning cosmic microwave background data to measure gravitational lensing

NERSC facilitates development of new analysis filter to better map the invisible universe

A set of cosmic microwave background 2D images with no lensing effects (top row) and with exaggerated cosmic microwave background gravitational lensing effects (bottom row). Image: Wayne Hu and Takemi Okamoto/University of Chicago

Cosmic microwave background (CMB) radiation is everywhere in the universe, but its frigid (-460° F), low-energy microwaves are invisible to the human eye. So cosmologists use specialized telescopes to map out the temperature spectrum of this relic radiation — left over from the Big Bang — to learn about the origin and structure of galaxy clusters and dark matter.

Gravity from distant galaxies causes tiny distortions in the CMB temperature maps, a process called gravitational lensing. These distortions are detected by data analysis software run on supercomputers like the Cori system at Lawrence Berkeley National Laboratory’s (Berkeley Lab’s) National Energy Research Scientific Computing Center (NERSC). Unfortunately, this temperature data is often corrupted by “foreground emissions” from extragalactic dust, gas, and other noise sources that are challenging to model.

“CMB images get distorted by gravitational lensing. This distortion is not a nuisance, it’s the signal we’re trying to measure,” said Emmanuel Schaan, a postdoctoral researcher in the Physics Division at Berkeley Lab. “However, various foreground emissions always contaminate CMB maps. These foregrounds are nuisances because they can mimic the effect of lensing and bias our lensing measurements. So we developed a new method for analyzing CMB data that is largely immune to the foreground noise effects.”

Schaan collaborated with Simone Ferraro, a Divisional Fellow in Berkeley Lab’s Physics Division, to develop their new statistical method, which is described in a paper published May 8, 2019 in Physical Review Letters.

“Our paper is mostly theoretical, but we also demonstrated that the method works on realistic simulations of the microwave sky previously generated by Neelima Sehgal and her collaborators,” Schaan said.

These publicly available simulations were originally generated using computing resources at the National Science Foundation’s TeraGrid project and Princeton University’s TIGRESS file system. Sehgal’s team ran three-dimensional N-body simulations of the gravitational evolution of dark matter in the universe, which they then converted into two-dimensional (2D) simulated maps of various components of the microwave sky at different frequencies — including 2D temperature maps of foreground emissions.

These 2D images show different types of foreground emissions that can interfere with CMB lensing measurements, as simulated by Neelima Sehgal and collaborators. From left to right: The cosmic infrared background, composed of intergalactic dust; radio point sources, or radio emission from other galaxies; the kinematic Sunyaev-Zel’dovich effect, a product of gas in other galaxies; and the thermal Sunyaev-Zel’dovich effect, which also relates to gas in other galaxies. Image: Emmanuel Schaan and Simone Ferraro/Berkeley Lab

Testing Theory On Simulated Data

NERSC provided resources that weren’t otherwise available to the team. Schaan and Ferraro applied their new analysis method to the existing 2D CMB temperature maps, writing their analysis code in Python and using a library called pathos to run it across multiple nodes in parallel. The final runs that generated all the published results were performed on NERSC’s Cori supercomputer.
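The article doesn’t reproduce the analysis code, but the parallel pattern it describes, mapping an analysis function over many simulated sky maps with pathos, might look roughly like the sketch below. The lensing_estimator function and the map file names are hypothetical placeholders, not the authors’ code.

```python
# Minimal sketch of the parallel pattern described above: map an analysis
# function over many simulated 2D CMB temperature maps using pathos.
# `lensing_estimator` and the file list are placeholders, not the authors' code.
import numpy as np
from pathos.multiprocessing import ProcessingPool as Pool

def lensing_estimator(map_file):
    """Placeholder analysis step: load a 2D temperature map and return a statistic."""
    cmb_map = np.load(map_file)        # hypothetical .npy map stored on disk
    return float(np.var(cmb_map))      # stand-in for the real lensing estimate

if __name__ == "__main__":
    map_files = [f"sim_maps/cmb_map_{i:03d}.npy" for i in range(100)]
    pool = Pool(nodes=8)               # 8 worker processes
    results = pool.map(lensing_estimator, map_files)
    pool.close()
    pool.join()
    print("mean statistic:", np.mean(results))
```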

“As we progressively improved our analysis, we had to test the improved methods,” Schaan said. “Having access to NERSC was very useful for us.”

The Berkeley Lab researchers did many preliminary runs on NERSC’s Edison supercomputer before it was decommissioned, because the wait time in the Edison queue was much shorter than in the Cori queues. Schaan said they haven’t yet optimized the code for Cori’s energy-efficient, many-core KNL nodes, but they will need to do so soon.

It might be time to speed up that code given their future research plans. Schaan and Ferraro are still perfecting their analysis, so they may need to run an improved method on the same 2D CMB simulations using NERSC. They also hope to begin working with real CMB data.

“In the future, we want to apply our method to CMB data from Simons Observatory and CMB S4, two upcoming CMB experiments that will have data in a few years. For that data, the processing will very likely be done on NERSC,” Schaan said.

NERSC is a U.S. Department of Energy Office of Science User Facility.

For more information, see this Berkeley Lab news release: A New Filter to Better Map the Dark Universe.

This is a reposting of my news feature originally published by Berkeley Lab’s Computing Sciences.

Physicians need to be educated about marijuana, resident argues

 

Photo by 7raysmarketing

Nathaniel Morris, MD, a resident in psychiatry at Stanford, said he learned almost nothing about marijuana during medical school. Its absence made some sense, he explained in a recent JAMA Internal Medicine editorial: why focus on marijuana when physicians must worry about medical emergencies such as cardiac arrest, sepsis, pulmonary embolisms and opioid overdoses?

However, marijuana use has dramatically changed in the few years since he earned his medical degree, he pointed out. Thirty-three states and Washington, D.C. have now passed laws legalizing some form of marijuana use, including 10 states that have legalized recreational use. And the resulting prevalence of marijuana has wide-ranging impacts in the clinic.

“In the emergency department, I’ve come to expect that results of urine drug screens will be positive for tetrahydrocannabinol (THC), whether the patient is 18 years old or 80 years old,” he said in the editorial. “When I review medications at the bedside, some patients and families hold out THC gummies or cannabidiol capsules, explaining dosages or ratios of ingredients used to treat symptoms, including pain, insomnia, nausea, or poor appetite.” He added that other patients come to the ED after having panic attacks or psychotic symptoms and physicians have to figure out whether marijuana is involved.

Marijuana also impacts inpatient units. Morris described how some patients smuggle in marijuana and smoke it in their rooms, while others who abruptly stop their use upon entering the hospital experience withdrawal symptoms like sleep disturbances and restlessness.

The real problem, he said, is that many physicians are unprepared and poorly educated about marijuana and its health effects. This is in part because government restrictions have made it difficult to study marijuana, so there is limited research to guide clinical decisions.

Although people have used marijuana to treat various health conditions for years, the U.S. Food and Drug Administration (FDA) has not approved the cannabis plant for treating any health problems. The FDA has approved three cannabinoid-based drugs: a cannabidiol oral solution used to treat a rare form of epileptic seizures and two synthetic cannabinoids used to treat nausea and vomiting associated with cancer chemotherapy or loss of appetite in people with AIDS.

In January 2017, the National Academies of Sciences, Engineering, and Medicine published a report that summarizes the current clinical evidence on the therapeutic effects and harmful side effects of marijuana products. However, more and higher-quality research is needed, Morris said.

Physicians also need to be educated about marijuana through dedicated coursework in medical school and ongoing continuing medical education activities, he said. Morris noted that physicians should receive instruction pertinent to their fields — such as gastroenterology fellows learning about marijuana’s potential effects on nausea or psychiatry residents learning about associations between marijuana and psychosis.

“These days, I find myself reading new studies about the health effects of marijuana products, attending grand rounds on medical marijuana, and absorbing tips from clinicians who have more experience related to marijuana and patient care than I do,” Morris said. “Still, I suspect that talking with patients about marijuana use and what it means to them will continue to teach me the most.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Ramadan: Advising clinicians on safe fasting practices

Photo by mohamed_hassan

If you are a basketball fan who recently watched the Portland Trail Blazers’ Enes Kanter play against the Warriors in the NBA Western Conference finals, you may have heard about Ramadan fasting. But most Americans haven’t — and that includes clinicians.

“Even those clinicians who are aware of Ramadan often do not fully understand the nuances of fasting,” explains Rania Awaad, MD, a clinical assistant professor of psychiatry and behavioral sciences and the director of the Muslims and Mental Health Lab at Stanford. “For example, there is no oral intake from sunup to sundown of food, liquids and also medications. For clinicians who may be alarmed by this, it’s important to remember that fasting is globally practiced safely by adjusting the timing and dosing of medications and by following best practices like consuming enough fluids to rehydrate after the fast.”

Ramadan is the ninth month of the Islamic calendar, which is 11 days shorter than the solar year. This year in the U.S., it began on May 5 and ends on June 4. During Ramadan, many of the nearly two billion Muslims around the world fast during the sunlight hours as a means of expressing self-control, gratitude and compassion for those in need.

Several groups are exempted from this religious requirement — including pregnant women, children, the elderly and people who are acutely or chronically ill — but some fast anyway because of the spiritual significance, Awaad says.

“Ramadan is a very spiritual and communal month. So when clinicians immediately advise their patients not to fast, they may not realize they’re inadvertently isolating their patients from the broader community and support system,” Awaad says. She notes this is particularly important for patients with mental health disorders.

Awaad says she strongly advises clinicians to encourage their patients to seek a dual consultation with both a faith leader and a medical professional at places like the Khalil Center, a professional counseling center specializing in Muslim mental health. Alternatively, patients observing Ramadan can consult their faith leader and physician individually and help facilitate communication between the two.

“Without a holistic treatment plan, patients are either fasting when they shouldn’t be — not taking their medications without telling their health care provider — or they are potentially not partaking in Ramadan when they can be,” Awaad says.

In a recent editorial in The Lancet Psychiatry, Awaad and her colleagues outline more clinical suggestions on the safety and advisability of Ramadan fasting that she hopes physicians will consider. For example, the editorial suggests that physicians working with patients with eating disorders should discuss the risks and benefits of fasting and consider close follow-up in this period and in the months following.

But the first step is knowing whether patients are Muslim. By co-teaching the “Culture and Religion in Psychiatry” class, Awaad says she helps Stanford psychiatry residents become comfortable asking about their patients’ religion, in the same way they are trained to ask other sensitive questions like sexual orientation.

“If we miss that our patient draws strength and support from their religion, then we miss the opportunity to support them holistically by incorporating their faith leader or faith community into their treatment plans,” Awaad explains. “The last Gallup poll revealed 87 percent of Americans believe in God, so it’s important to incorporate this into our patient care.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Creativity can jump or slump during middle childhood, a Stanford study shows

 

Photo by Free-Photos

As a postdoctoral fellow in psychiatry, Manish Saggar, PhD, stumbled across a paper published in 1968 by a creativity pioneer named E. Paul Torrance, PhD. The paper described an unexplained creativity slump occurring during fourth grade that was associated with underachievement and increased risk for mental health problems. He was intrigued and wondered what exactly was going on. “It seemed like a profound problem to solve,” says Saggar, who is now a Stanford assistant professor of psychiatry and behavioral sciences.

Saggar’s latest research study, recently published in NeuroImage, provides new clues about creativity during middle childhood. The research team studied the creative thinking ability of 48 children — 21 starting third grade and 27 starting fourth grade — at three time points across one year. This allowed the researchers to piece together data from the two groups to estimate how creativity changes from 8 to 10 years of age.

At each of the time points, the students were assessed using an extensive set of standardized tests for intelligence, aberrant behavior, response inhibition, temperament and creativity. Their brains were also scanned using a functional near-infrared spectroscopy (fNIRS) cap, which imaged brain function as they performed a standardized Figural Torrance Test of Creative Thinking.

During this test, the children sat at a desk and used pen and paper to complete three different incomplete figures to “tell an unusual story that no one else will think of.” Their brains were scanned during these creative drawing tasks, as well as when they rested (looking at a picture of a plus sign) and when they completed a control drawing (connecting the dots on a grid).

Rather than using the conventional categories of age or grade level, the researchers grouped the participants based on the data — revealing three distinct patterns in how creativity could change during middle childhood.
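The post doesn’t describe how that data-driven grouping was done. One common way to group participants by trajectory rather than by age or grade is to cluster each child’s change in scores across the three time points; the sketch below uses k-means on synthetic creativity scores purely as an illustration, so the algorithm choice and the numbers are assumptions, not the study’s method.

```python
# Illustrative sketch only: group children by the shape of their creativity
# trajectories across three time points, rather than by age or grade.
# The k-means choice and the synthetic scores are assumptions, not the study's method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

n_children = 48
# Hypothetical creativity scores at three time points per child.
trajectories = rng.normal(loc=100, scale=15, size=(n_children, 3))

# Cluster on the change between time points so grouping reflects trajectory shape.
changes = np.diff(trajectories, axis=1)   # shape (48, 2): change per interval
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(changes)

for g in range(3):
    mean_change = changes[groups == g].mean(axis=0)
    print(f"group {g}: mean change T1->T2 = {mean_change[0]:+.1f}, "
          f"T2->T3 = {mean_change[1]:+.1f}")
```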

The first group of kids slumped in creativity initially and then showed an increase after transitioning to the next grade, while the second group showed the inverse pattern. The final group showed no initial change in creativity and then a boost after transitioning to the next grade.

“A key finding of our study is that we cannot group children together based on grade or age, because everybody is on their own trajectory,” says Saggar.

The researchers also found a correlation between creativity and rule-breaking or aggressive behaviors for these participating children, who scored well within the normal range of the standard child behavior checklist used to assess behavioral and emotional problems. As Saggar clarifies, these “problem behaviors” were things like arguing a lot or preferring to be with older kids rather than actions like fighting.

“In our cohort, the aggression and rule-breaking behaviors point towards enhanced curiosity and to not conforming to societal rules, consistent with the lay notion of ‘thinking outside the box’ to create unusual and novel ideas,” Saggar explains. “Classic creative thinking tasks require people to break rules between cognitive elements to form new links between previously unassociated elements.”

They also found a correlation between creativity and increased functional segregation of the frontal regions of the brain. The brain handles some functions within individual regions working independently (segregation) and others through integration, when different regions come together to help us do a task. For example, a relaxing walk in the park with a wandering mind might leave brain regions chattering in a segregated, independent fashion, while focusing intently to memorize a series of numbers might require brain integration. The brain needs to balance this segregation and integration. In the study, increases in creativity tracked with increased segregation of the frontal regions.
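To make the idea of functional segregation a bit more concrete, one widely used summary compares how strongly a set of regions correlates internally versus with the rest of the brain. The sketch below computes such a measure from made-up signals; it only illustrates the concept and is not the paper’s fNIRS analysis.

```python
# Illustration of the segregation concept: within-group versus between-group
# functional connectivity computed from a correlation matrix. The signals and
# region assignments are made up; this is not the study's fNIRS analysis.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical time series: 5 "frontal" regions share a common driver,
# 5 other regions fluctuate independently (200 time points each).
driver = rng.normal(size=(200, 1))
frontal_signals = 0.6 * driver + rng.normal(size=(200, 5))
other_signals = rng.normal(size=(200, 5))
signals = np.hstack([frontal_signals, other_signals])

conn = np.corrcoef(signals, rowvar=False)       # 10 x 10 connectivity matrix
frontal = np.arange(5)
other = np.arange(5, 10)

within = conn[np.ix_(frontal, frontal)]
within_mean = within[np.triu_indices_from(within, k=1)].mean()  # off-diagonal only
between_mean = conn[np.ix_(frontal, other)].mean()

# Larger values mean the frontal regions talk more to each other than to the rest.
segregation = (within_mean - between_mean) / within_mean
print(f"within = {within_mean:.3f}, between = {between_mean:.3f}, "
      f"segregation = {segregation:.3f}")
```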

“Having increased segregation in the frontal regions indicates that they weren’t really focusing on something specific,” Saggar says. “The hypothesis we have is perhaps you need more diffused attention to be more creative. Like when you get your best ideas while taking a shower or a walk.”

Saggar hopes their findings will help develop new interventions for teachers and parents in the future, but he says that longer studies, with a larger and more diverse group of children, are first needed to validate their results.

Once they confirm that the profiles observed in their current study actually exist in larger samples, the next step will be to see if they can train kids to improvise and become more creative, similar to a neuroscience study that successfully trained adults to enhance their creativity.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Pokémon experts’ brains shed light on neurological development

Photo by Colin Woodcock

Whether parents are dreading or looking forward to taking their kids to see the new “Pokémon” movie may depend on how their brains developed as a child. If they played the video game a lot growing up, a specific region of their visual cortex — the part of the brain that processes what we see — may preferentially respond to Pokémon characters, according to a new research study.

The Stanford psychologists studied the brains of Pokémon experts and novices to answer fundamental questions about how experience contributes to your brain’s development and organization.

Jesse Gomez, PhD, first author of the study and a former Stanford neuroscience graduate student, started playing a lot of Pokémon around first grade. He realized that this early exposure provided a natural neuroscience experiment: children who played the video game used the same tiny handheld device at roughly the same arm’s length, and they spent countless hours learning the hundreds of animated, pixelated characters, which represent a unique category of stimuli that activates a distinct region of the brain.

The research team identified this specialized brain response by using functional magnetic resonance imaging (fMRI) to scan the brains of 11 Pokémon experts and 11 Pokémon novices, adults of similar age and education. During the fMRI scan, each participant viewed different kinds of stimuli in random order, including faces, animals, cartoons, bodies, pseudowords, cars, corridors and Pokémon characters.

“We find a big difference between people who played Pokémon in their childhood versus those who didn’t,” explained Gomez in the video below. “People who are Pokémon experts not only develop a unique brain representation for Pokémon in the visual cortex, but the most interesting part to us is that the location of that response to Pokémon is consistent across people.”

In the expert participants, Pokémon activated a specific region in the high-level visual cortex, the part of the brain involved in recognizing things like words and faces. “This helped us pinpoint which theory of brain organization might be the most responsible for determining how the visual cortex develops from childhood to adulthood,” Gomez said.

The study results support a theory called eccentricity bias, which suggests the brain region that is activated by a stimulus is determined by the size and location of how it is viewed on the retina. For example, our ability to discriminate between faces is thought to activate the fusiform gyrus in the temporal lobe near the ears and to require the high visual acuity of the central field of vision. Similarly, the study showed viewing Pokémon characters activates part of the fusiform gyrus and the neighboring region called the occipitotemporal sulcus — which both get input from the central part of the retina — but only for the expert participants.

The eccentricity bias theory implies that a different or larger region of the brain would be preferentially activated by early exposure to Pokémon played on a large computer monitor. However, this wasn’t an option for the 20-something participants when they were children.

These findings have applications well beyond Pokémon, as Gomez explained in the video:

“The findings suggest that the very way that you look at a visual stimulus, like Pokémon or words, determines why your brain is organized the way it is. And that’s useful going forward because it might suggest that visual deficits like dyslexia or face blindness might result simply from the way you look at stimuli. And so that’s a promising future avenue.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.