Predicting women at risk of preeclampsia before clinical symptoms

Many of my female friends became pregnant with their first child in their late 30s or early 40s, which increased their risk of common complications such as high blood pressure, gestational diabetes and preeclampsia.

Affecting over 8 million women worldwide, preeclampsia can lead to serious, even fatal, complications for both the mother and baby. The clinical symptoms of preeclampsia typically start at 20 weeks of pregnancy and include high blood pressure and signs of kidney or liver damage.

“Once these clinical symptoms appear, irreparable harm to the mother or the fetus may have already occurred,” said Stanford immunologist Brice Gaudilliere, MD, PhD.  “The only available diagnostic blood test for preeclampsia is a proteomic test that measures a ratio of two proteins. While this test is good at ruling out preeclampsia once clinical symptoms have occurred, it has a poor positive predictive value.”

Now, Stanford researchers are working to develop a diagnostic blood test that can accurately predict preeclampsia prior to the onset of clinical symptoms.

A new study conducted at Stanford was led by senior authors Gaudilliere, statistical innovator Nima Aghaeepour, PhD, and clinical trial specialist Martin Angst, MD, and co-first authors and postdoctoral fellows Xiaoyuan Han, PhD, and Sajjad Ghaemi, PhD. Their results were recently published in Frontiers in Immunology.

They analyzed blood samples from 11 women who developed preeclampsia and 12 women with normal blood pressure during pregnancy. These samples were obtained at two timepoints, allowing the scientists to measure how immune cells behaved over time during pregnancy.

“Unlike prior studies that typically assessed just a few select immune cell types in the blood at a single timepoint during pregnancy, our study focused on immune cell dynamics,” Gaudilliere explained. “We utilized a powerful method called mass cytometry, which measured the distribution and functional behavior of virtually all immune cell types present in the blood samples.”

The team identified a set of eight immune cell responses that accurately predicted which of the women would develop preeclampsia — typically 13 weeks before clinical diagnosis.
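For readers curious about the mechanics, a model like this can be sketched in a few lines of Python. The sketch below is illustrative only, not the authors' actual pipeline: it assumes a placeholder matrix of per-cell-type signaling features and uses a sparse (L1-penalized) logistic regression with leave-one-out cross-validation, which suits a cohort of this size.

```python
# A minimal sketch (not the authors' exact method) of evaluating a small panel
# of immune-cell responses as an early preeclampsia predictor.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_samples, n_features = 23, 8                 # 23 women, 8 candidate immune responses
X = rng.normal(size=(n_samples, n_features))  # placeholder cytometry features (e.g., pSTAT5 in CD4+ T cells)
y = rng.integers(0, 2, size=n_samples)        # placeholder labels: 1 = later developed preeclampsia

# The L1 penalty keeps only the most informative features; leave-one-out
# cross-validation makes the most of a small cohort.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
)
probs = cross_val_predict(model, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("Cross-validated AUC:", roc_auc_score(y, probs))
```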

At the top of their list was a signaling protein called STAT5. They observed higher activity of STAT5 in CD4+ T-cells, which help regulate the immune system, at the beginning of pregnancy for all but one patient who developed preeclampsia.

“Pregnancy is an amazing immunological phenomenon where the mother’s immune system ‘tolerates’ the fetus, a foreign entity, for nine months,” said Angst. “Our findings are consistent with past studies that found preeclampsia to be associated with increased inflammation and decreased immune tolerance towards the fetus.”

Although their results are encouraging, more research is needed before translating them to the clinic.

The authors explained that mass cytometry is a great tool to find the “needle in the haystack.” It allowed them to survey the entire immune system and identify the key elements that could predict preeclampsia, but it is an exploratory platform not suitable for the clinic, they said.

“Now that we have identified the elements of a diagnostic immunoassay, we can use conventional instruments such as those used in the clinic to measure them in a patient’s blood sample,” Aghaeepour said.

First though, the team needs to validate their findings in a large, multi-center study. They are also using machine learning to develop a “multiomics” model that integrates these mass cytometry measurements with other biological analysis approaches. And they are investigating how to objectively define different subtypes of preeclampsia.

Their goal is to accurately diagnose preeclampsia before the onset of clinical symptoms.

 “Diagnosing preeclampsia early would help ensure that patients at highest risk have access to health care facilities, are evaluated more frequently by obstetricians specialized in high-risk pregnancies and receive treatment,” said Gaudilliere.

Women with preeclampsia can receive care through the obstetric clinic at Lucile Packard Children’s Hospital Stanford.

Photo by Pilirodriquez

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Simplified analysis method could lead to improved prosthetics, a Stanford study suggests

Brain-machine interface (BMI) research is an emerging field at the intersection of neuroscience and engineering that may improve the quality of life for amputees and individuals with paralysis. These patients are unable to get signals from their motor cortex — the part of the brain that normally controls movement — to their muscles.

Researchers are overcoming this disconnect by implanting in the brain small electrode arrays, which measure and decode the electrical activity of neurons in the motor cortex. The sensors’ electrical signals are transmitted via a cable to a computer and then translated into commands that control a computer cursor or prosthetic limb. Someday, scientists also hope to eliminate the cable, using wireless brain sensors to control prosthetics.

In order to realize this dream, however, they need to improve both the brain sensors and the algorithms used to decode the neural signals. Stanford electrical engineer Krishna Shenoy, PhD, and his collaborators are tackling this algorithm challenge, as described in a recent paper in Neuron.

Currently, most neuroscientists process their BMI data by looking for “spikes” of electrical activity from individual neurons. But this step requires either time-consuming manual sorting or computationally intensive automatic sorting, and both approaches are prone to errors.

Manual data sorting will also become unrealistic for future technologies, which are expected to record thousands to millions of electrode channels compared to the several hundred channels recorded by today’s state-of-the-art sensors. For example, a dataset composed of 1,000 channels could take over 100 hours to hand sort, the paper says. In addition, neuroscientists would like to measure a greater brain volume for longer durations.

So, how can they decode all of this data?

Shenoy suggests simplifying the data analysis by eliminating spike sorting for applications that depend on the activity of neural populations rather than single neurons — such as brain-machine interfaces for prosthetics.

In their new study, the Stanford team investigated whether eliminating this spike sorting step distorted BMI data. Turning to statistics, they developed an analysis method that retains accuracy while extracting information from groups rather than individual neurons. Using experimental data from three previous animal studies, they demonstrated that their algorithms could accurately decode neural activity with minimal distortion — even when each BMI electrode channel measured several neurons. They also validated these experimental results with theory.
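The core idea can be illustrated with a short sketch. The code below uses made-up data and a generic linear readout rather than the decoder from the Neuron paper: it bins the unsorted threshold crossings on each electrode channel and maps those multiunit counts directly to cursor velocity, with no spike sorting step.

```python
# A minimal sketch: decode movement from per-channel threshold crossings
# (multiunit activity) instead of spike-sorted single neurons.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_channels, n_bins = 192, 2000      # e.g., a 192-channel array, 20-ms time bins

# Placeholder data: binned threshold-crossing counts per channel, plus the
# 2-D cursor velocity we want to reconstruct from them.
true_weights = rng.normal(size=(n_channels, 2))
counts = rng.poisson(lam=2.0, size=(n_bins, n_channels)).astype(float)
velocity = counts @ true_weights + rng.normal(scale=5.0, size=(n_bins, 2))

# Each channel may mix several neurons, but a linear readout on the summed
# (unsorted) counts can still recover the population-level signal.
train, test = slice(0, 1500), slice(1500, None)
decoder = Ridge(alpha=1.0).fit(counts[train], velocity[train])
print("Decoding R^2 on held-out bins:", decoder.score(counts[test], velocity[test]))
```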

 “This study has a bit of a hopeful message in that observing activity in the brain turns out to be easier than we initially expected,” says Shenoy in a recent Stanford Engineering news release. The researchers hope their work will guide the design and use of new low-power, higher-density devices for clinical applications since their simplified analysis method reduces the storage and processing requirements.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Photo by geralt.

AI could help radiologists improve their mammography interpretation

The guidelines for screening women for breast cancer are a bit confusing. The American Cancer Society recommends annual mammograms for women older than 45 years with average risk, but other groups like the U.S. Preventive Services Task Force (USPSTF) recommend less aggressive breast screening.

This controversy centers on mammography’s frequent false-positive detections — or false alarms — which lead to unnecessary stress, additional imaging exams and biopsies. USPSTF argues that the harms of early and frequent mammography outweigh the benefits.

However, a recent Stanford study suggests a better way to reduce these false alarms without increasing the number of missed cancers. Using over 112,000 mammography cases collected from 13 radiologists across two teaching hospitals, the researchers developed and tested a machine-learning model that could help radiologists improve their mammography practice.

Each mammography case included the radiologist’s observations and diagnostic classification from the mammogram, the patient’s risk factors and the “ground-truth” of whether or not the patient had breast cancer based on follow-up procedures. The researchers used the data to train and evaluate their computer model.

They compared the radiologists’ performance against their machine-learning model, doing a separate analysis for each of the 13 radiologists. They found significant variability among radiologists.

Based on accepted clinical guidelines, radiologists should recommend follow-up imaging or a biopsy when a mammographic finding has at least a 2 percent probability of being malignant. However, the Stanford study found that the participating radiologists used thresholds ranging from 0.6 percent to 3.0 percent. In the future, similar quantitative observations could be used to identify sources of variability and to improve radiologist training, the paper said.
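To see how such a threshold works in practice, here is a small sketch with hypothetical numbers: a model's estimated probability of malignancy is compared against a recall threshold, and moving that threshold trades false alarms against missed cancers.

```python
# A minimal sketch (hypothetical data) of a malignancy-probability threshold
# turning model output into a recall decision.
import numpy as np

rng = np.random.default_rng(2)
n_cases = 100_000
is_cancer = rng.random(n_cases) < 0.011       # roughly 1.1% prevalence, as in the study
# Placeholder predicted probabilities: cancer cases tend to score higher.
p_malignant = np.clip(rng.beta(1, 60, n_cases) + is_cancer * rng.beta(2, 6, n_cases), 0, 1)

for threshold in (0.006, 0.02, 0.03):         # the 0.6%, 2% and 3% thresholds discussed above
    recall_exam = p_malignant >= threshold    # recommend follow-up imaging or biopsy
    false_pos = np.sum(recall_exam & ~is_cancer)
    false_neg = np.sum(~recall_exam & is_cancer)
    print(f"threshold {threshold:.1%}: {false_pos} false alarms, {false_neg} missed cancers")
```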

The study included 1,214 malignant cases, which represents 1.1 percent of the total number. Overall, the radiologists reported 176 false negatives indicating cancers missed at the time of the mammograms. They also reported 12,476 false positives or false alarms. In comparison, the machine-learning model missed one additional cancer but it decreased the number of false alarms by 3,612 cases relative to the radiologists’ assessment.

The study concluded: “Our results show that we can significantly reduce screening mammography false positives with a minimal increase in false negatives.”

However, their computer model was developed using data from 1999 to 2010, the era of analog film mammography. In future work, the researchers plan to update the computer algorithm to use the newer descriptors and classifications for digital mammography and three-dimensional breast tomosynthesis.

Ross Shachter, PhD, a Stanford associate professor of management science and engineering and lead author on the paper, summarized in a recent Stanford Engineering news release, “Our approach demonstrates the potential to help all radiologists, even experts, perform better.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Celebrating cancer survivors by telling their stories

Photo of Martin Inderbitzen by Michael Goldstein

Neurobiologist and activist Martin Inderbitzen, PhD, began his talk with a question: “Did you ever face a life situation that was totally overwhelming?” Most of his audience likely answered yes, since he was speaking to cancer survivors and their families at a Stanford event called Celebrating Cancer Survivors.

The evening focused on life after cancer and highlighted Stanford’s Cancer Survivorship Program, which helps survivors and their families transition to life after treatment by providing multidisciplinary services and health care. Lidia Schapira, MD, a medical oncologist and director of the program, said they aim to  “help people back into health.”

But to me, the heart of the event was the personal stories openly shared by the attendees while standing in line for the food buffet or waiting for the speeches to begin. As a Hodgkin’s survivor who was treated at Stanford twenty-five years ago, I swapped “cancer stories” with my comrades.

Inderbitzen understands firsthand the importance of sharing such cancer survival stories. In 2012, at the age of 32, he was diagnosed with pancreatic cancer. From an online search, he quickly learned that 95 percent of people with his type of cancer die within a few years. However, his doctor gave him hope by mentioning a similar patient who had been successfully treated some years earlier and was now happily skiing in the mountains.

“This picture of someone skiing in the mountains became my mantra,” Inderbitzen explained. “I had all these bad statistics against me, but then I also had this one story. And I thought, maybe I can also be one story, because this story was somehow the personification of a possibility. It inspired me to rethink how I saw my own situation.”

Later, Inderbitzen publicly shared his own cancer journey, which touched many people who reached out to him. This inspired him to found MySurvivalStory.org — an initiative that documents inspiring cancer survival stories to help other cancer patients better cope with their illness. He and his wife quit their jobs, raised some funds and began traveling around the globe to find and record short videos of cancer survivors from different cultures.

“We share the stories in formats that people can consume when they have ‘chemo brain’ — like podcasts you can listen to and short videos you can process even when you’re tired,” he said. He added, “These stories are powerful because they provide us with something or someone to aspire to — someone who is a bit ahead of us, so we think, ‘I can do that.’”

Inderbitzen isn’t the only one to recognize the empowering impact of telling your cancer story. For example, the Stanford Center for Integrative Medicine compiles some patient stories on their Surviving Cancer website. And all of these stories have the potential to help both the teller and listener.

However, Inderbitzen offers the following advice when sharing your cancer story:

“Change the story you tell and you will be able to change the life you live. So that’s a very powerful concept. And I would like to challenge you and also encourage you that every day when you wake up and get out of bed and things are not looking good, remind yourself that it’s actually you who chooses which story to tell. And choosing a better story doesn’t mean that you’re ignoring reality. No, it just means that you’re giving yourself a chance.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Physicians need to be educated about marijuana, resident argues

 

Photo by 7raysmarketing

Nathaniel Morris, MD, a resident in psychiatry at Stanford, said he learned almost nothing about marijuana during medical school. Its absence made some sense, he explained in a recent JAMA Internal Medicine editorial: why focus on marijuana when physicians must worry about medical emergencies such as cardiac arrest, sepsis, pulmonary embolisms and opioid overdoses?

However, marijuana use has dramatically changed in the few years since he earned his medical degree, he pointed out. Thirty-three states and Washington, D.C. have now passed laws legalizing some form of marijuana use, including 10 states that have legalized recreational use. And the resulting prevalence of marijuana has wide-ranging impacts in the clinic.

“In the emergency department, I’ve come to expect that results of urine drug screens will be positive for tetrahydrocannabinol (THC), whether the patient is 18 years old or 80 years old,” he said in the editorial. “When I review medications at the bedside, some patients and families hold out THC gummies or cannabidiol capsules, explaining dosages or ratios of ingredients used to treat symptoms, including pain, insomnia, nausea, or poor appetite.” He added that other patients come to the ED after having panic attacks or psychotic symptoms and physicians have to figure out whether marijuana is involved.

Marijuana also impacts inpatient units. Morris described patients who smuggle in marijuana and smoke it in their rooms, while others who abruptly stop their use upon entering the hospital experience withdrawal symptoms like sleep disturbances and restlessness.

The real problem, he said, is that many physicians are unprepared and poorly educated about marijuana and its health effects. This is in part because government restrictions have made it difficult to study marijuana, so there is limited research to guide clinical decisions.

Although people have used marijuana to treat various health conditions for years, the U.S. Food and Drug Administration (FDA) has not approved the cannabis plant for treating any health problems. The FDA has approved three cannabinoid-based drugs: a cannabidiol oral solution used to treat a rare form of epileptic seizures and two synthetic cannabinoids used to treat nausea and vomiting associated with cancer chemotherapy or loss of appetite in people with AIDS.

In January 2017, the National Academies of Sciences, Engineering, and Medicine published a report that summarizes the current clinical evidence on the therapeutic effects and harmful side effects of marijuana products. However, more and higher-quality research is needed, Morris said.

Physicians also need to be educated about marijuana through dedicated coursework in medical school and ongoing continuing medical education activities, he said. Morris noted that physicians should receive instruction pertinent to their fields — such as gastroenterology fellows learning about marijuana’s potential effects on nausea or psychiatry residents learning about associations between marijuana and psychosis.

“These days, I find myself reading new studies about the health effects of marijuana products, attending grand rounds on medical marijuana, and absorbing tips from clinicians who have more experience related to marijuana and patient care than I do,” Morris said. “Still, I suspect that talking with patients about marijuana use and what it means to them will continue to teach me the most.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Ramadan: Advising clinicians on safe fasting practices

Photo by mohamed_hassan

If you are a basketball fan who recently watched the Portland Trail Blazers’ Enes Kanter play against the Warriors in the NBA Western Conference playoffs, you may have heard about Ramadan fasting. But most Americans haven’t — and that includes clinicians.

“Even those clinicians who are aware of Ramadan often do not fully understand the nuances of fasting,” explains Rania Awaad, MD, a clinical assistant professor of psychiatry and behavioral sciences and the director of the Muslims and Mental Health Lab at Stanford. “For example, there is no oral intake from sunup to sundown of food, liquids and also medications. For clinicians who may be alarmed by this, it’s important to remember that fasting is globally practiced safely by adjusting the timing and dosing of medications and by following best practices like consuming enough fluids to rehydrate after the fast.”

Ramadan is the ninth month of the Islamic calendar, which is 11 days shorter than the solar year. This year in the U.S., it began on May 5 and ends on June 4. During Ramadan, many of the nearly two billion Muslims around the world fast during the sunlight hours as a means of expressing self-control, gratitude and compassion for those in need.

Several groups are exempted from this religious requirement — including pregnant women, children, the elderly and people who are acutely or chronically ill — but some fast anyway because of the spiritual significance, Awaad says.

“Ramadan is a very spiritual and communal month. So when clinicians immediately advise their patients not to fast, they may not realize they’re inadvertently isolating their patients from the broader community and support system,” Awaad says. She notes this is particularly important for patients with mental health disorders.

Awaad says she strongly advises clinicians to encourage their patients to seek a dual consultation with both a faith leader and a medical professional at places like the Khalil Center, a professional counseling center specializing in Muslim mental health. Alternatively, patients observing Ramadan can consult their faith leader and their physician individually and help facilitate a conversation between the two.

“Without a holistic treatment plan, patients are either fasting when they shouldn’t be — not taking their medications without telling their health care provider — or they are potentially not partaking in Ramadan when they can be,” Awaad says.

In a recent editorial in The Lancet Psychiatry, Awaad and her colleagues outline more clinical suggestions on the safety and advisability of Ramadan fasting that she hopes physicians will consider. For example, the editorial suggests that physicians working with patients with eating disorders should discuss the risks and benefits of fasting and consider close follow-up in this period and in the months following.

But the first step is knowing whether patients are Muslim. By co-teaching the “Culture and Religion in Psychiatry” class, Awaad says she helps Stanford psychiatry residents become comfortable asking about their patients’ religion, in the same way they are trained to ask about other sensitive topics, such as sexual orientation.

“If we miss that our patient draws strength and support from their religion, then we miss the opportunity to support them holistically by incorporating their faith leader or faith community into their treatment plans,” Awaad explains. “The last Gallup poll revealed 87 percent of Americans believe in God, so it’s important to incorporate this into our patient care.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Creativity can jump or slump during middle childhood, a Stanford study shows

 

Photo by Free-Photos

As a postdoctoral fellow in psychiatry, Manish Saggar, PhD, stumbled across a paper published in 1968 by a creativity pioneer named E. Paul Torrance, PhD. The paper described an unexplained creativity slump occurring during fourth grade that was associated with underachievement and increased risk for mental health problems. He was intrigued and wondered what exactly was going on.  “It seemed like a profound problem to solve,” says Saggar, who is now a Stanford assistant professor of psychiatry and behavioral sciences.

Saggar’s latest research study, recently published in NeuroImage, provides new clues about creativity during middle childhood. The research team studied the creative thinking ability of 48 children — 21 starting third grade and 27 starting fourth grade — at three time points across one year. This allowed the researchers to piece together data from the two groups to estimate how creativity changes from 8 to 10 years of age.

At each of the time points, the students were assessed using an extensive set of standardized tests for intelligence, aberrant behavior, response inhibition, temperament and creativity. Their brains were also scanned using a functional near-infrared spectroscopy (fNIRS) cap, which imaged brain function as they performed a standardized Figural Torrance Test of Creative Thinking.

During this test, the children sat at a desk and used a pen and paper to complete three different incomplete figures to “tell an unusual story that no one else will think of.” Their brains were scanned during these creative drawing tasks, as well as when they rested (looking at a picture of a plus sign) and when they completed a control drawing (connecting the dots on a grid).

Rather than using the conventional categories of age or grade level, the researchers grouped the participants based on the data — revealing three distinct patterns in how creativity could change during middle childhood.

The first group of kids slumped in creativity initially and then showed an increase after transitioning to the next grade, while the second group showed the inverse pattern. The final group of children showed no change in creativity at first and then a boost after transitioning to the next grade.
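The grouping idea itself is straightforward to sketch. The code below uses made-up scores, not the study's data or exact method: it clusters children by the shape of their creativity trajectory across the three time points rather than by grade or age.

```python
# A minimal sketch of data-driven grouping of creativity trajectories.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_children = 48
# Placeholder: each row is one child's creativity score at time points T1-T3.
trajectories = rng.normal(loc=100, scale=15, size=(n_children, 3))

# Standardize within each child so clusters reflect the shape of the change
# (slump vs. jump) rather than overall ability.
shapes = StandardScaler().fit_transform(trajectories.T).T
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(shapes)

for k in range(3):
    members = shapes[labels == k]
    print(f"group {k}: n={len(members)}, mean trajectory={members.mean(axis=0).round(2)}")
```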

“A key finding of our study is that we cannot group children together based on grade or age, because everybody is on their own trajectory,” says Saggar.

The researchers also found a correlation between creativity and rule-breaking or aggressive behaviors for these participating children, who scored well within the normal range of the standard child behavior checklist used to assess behavioral and emotional problems. As Saggar clarifies, these “problem behaviors” were things like arguing a lot or preferring to be with older kids rather than actions like fighting.

“In our cohort, the aggression and rule-breaking behaviors point towards enhanced curiosity and to not conforming to societal rules, consistent with the lay notion of ‘thinking outside the box’ to create unusual and novel ideas,” Saggar explains. “Classic creative thinking tasks require people to break rules between cognitive elements to form new links between previously unassociated elements.”

They also found a correlation between creativity and increased functional segregation of the frontal regions of the brain. Some brain functions are carried out by individual regions working independently, while others rely on integration, when different brain regions come together to help us do a task. For example, a relaxing walk in the park with a wandering mind might have brain regions chattering in a segregated, independent fashion, while focusing intently to memorize a series of numbers might require brain integration. The brain needs to balance this segregation and integration. In the study, the researchers showed that increases in creativity tracked with increased segregation of the frontal regions.
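One common way to quantify segregation, sketched below with placeholder signals (the study's exact metric may differ), is to compare the average functional connectivity within a group of regions to the connectivity between groups; higher within-group than between-group connectivity indicates a more segregated brain state.

```python
# A minimal sketch of a within- vs. between-module connectivity comparison.
import numpy as np

rng = np.random.default_rng(4)
n_regions, n_timepoints = 20, 300
timeseries = rng.normal(size=(n_timepoints, n_regions))  # placeholder regional signals (e.g., fNIRS)
modules = np.repeat([0, 1, 2, 3], 5)                     # hypothetical assignment of regions to 4 modules

fc = np.corrcoef(timeseries, rowvar=False)               # region-by-region functional connectivity
same_module = modules[:, None] == modules[None, :]
off_diag = ~np.eye(n_regions, dtype=bool)

within = fc[same_module & off_diag].mean()
between = fc[~same_module].mean()
# More segregation means regions talk mostly within their own module.
print(f"within-module FC: {within:.3f}, between-module FC: {between:.3f}")
```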

“Having increased segregation in the frontal regions indicates that they weren’t really focusing on something specific,” Saggar says. “The hypothesis we have is perhaps you need more diffused attention to be more creative. Like when you get your best ideas while taking a shower or a walk.”

Saggar hopes their findings will help develop new interventions for teachers and parents in the future, but he says that longer studies, with a larger and more diverse group of children, are first needed to validate their results.

Once they confirm that the profiles observed in their current study actually exist in larger samples, the next step will be to see if they can train kids to improvise and become more creative, similar to a neuroscience study that successfully trained adults to enhance their creativity.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Pokémon experts’ brains shed light on neurological development

Photo by Colin Woodcock

Whether parents are dreading or looking forward to taking their kids to see the new “Pokémon” movie may depend on how their brains developed as a child. If they played the video game a lot growing up, a specific region of their visual cortex — the part of the brain that processes what we see — may preferentially respond to Pokémon characters, according to a new research study.

The Stanford psychologists studied the brains of Pokémon experts and novices to answer fundamental questions about how experience contributes to your brain’s development and organization.

Jesse Gomez, PhD, first author of the study and a former Stanford neuroscience graduate student, started playing a lot of Pokémon around first grade. He later realized that this early exposure to Pokémon provided a natural neuroscience experiment. Namely, children who played the video game used the same tiny handheld device at roughly the same arm’s length. They also spent countless hours learning the hundreds of animated, pixelated characters, which represent a unique category of stimuli that activates a unique region of the brain.

The research team identified this specialized brain response by using functional magnetic resonance imaging (fMRI) to scan the brains of 11 Pokémon experts and 11 Pokémon novices, all adults of similar age and education. During the fMRI scan, each participant viewed different kinds of stimuli in random order, including faces, animals, cartoons, bodies, pseudowords, cars, corridors and Pokémon characters.

“We find a big difference between people who played Pokémon in their childhood versus those who didn’t,” explained Gomez in the video below. “People who are Pokémon experts not only develop a unique brain representation for Pokémon in the visual cortex, but the most interesting part to us is that the location of that response to Pokémon is consistent across people.”

In the expert participants, Pokémon activated a specific region in the high-level visual cortex, the part of the brain involved in recognizing things like words and faces. “This helped us pinpoint which theory of brain organization might be the most responsible for determining how the visual cortex develops from childhood to adulthood,” Gomez said.

The study results support a theory called eccentricity bias, which suggests the brain region that is activated by a stimulus is determined by the size and location of how it is viewed on the retina. For example, our ability to discriminate between faces is thought to activate the fusiform gyrus in the temporal lobe near the ears and to require the high visual acuity of the central field of vision. Similarly, the study showed viewing Pokémon characters activates part of the fusiform gyrus and the neighboring region called the occipitotemporal sulcus — which both get input from the central part of the retina — but only for the expert participants.

The eccentricity bias theory implies that a different or larger region of the brain would be preferentially activated by early exposure to Pokémon played on a large computer monitor. However, this wasn’t an option for the 20-something participants when they were children.

These findings have applications well beyond Pokémon, as Gomez explained in the video:

“The findings suggest that the very way that you look at a visual stimulus, like Pokémon or words, determines why your brain is organized the way it is. And that’s useful going forward because it might suggest that visual deficits like dyslexia or face blindness might result simply from the way you look at stimuli. And so that’s a promising future avenue.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

What healthy looks like: New study offers clues based on personalized tracking

Photo by DariuszSankowski

Everyone’s body is a little bit different, so it is important to understand our personal biological makeup while we are still healthy; deviations from these healthy baselines can then be used to detect early signs of disease. That idea is key to precision health.

“We generally study people when they’re sick, rarely when they’re healthy, and it means we don’t really know what ‘healthy’ looks like at an individual biochemical level,” said Stanford geneticist Michael Snyder, PhD, in a recent Stanford news release.

As part of an international collaboration, Snyder used big data approaches to profile and track the health of more than 100 people at risk for diabetes for up to eight years. Participants underwent extensive testing each quarter, including clinical laboratory testing, exercise and physiological testing, microbial and molecular assessments, genetic sequencing, cardiovascular imaging and wearable sensor monitoring using smart watches or glucose monitors.

The goal of the study was to evaluate whether the emerging technologies could detect diseases early. During the study, the researchers discovered over 67 major clinically actionable health issues, spanning metabolic disorders, cardiovascular disease, cancer, blood disorders and infectious diseases. In fact, most of the participants had a previously unknown potential health problem flagged by the study, as reported in a paper recently published in Nature Medicine.

“We caught a lot of health issues because we noticed their delta, or their change from baseline. For instance, we caught nine people with diabetes as it was developing by continuously monitoring their glucose and insulin levels,” Snyder explained in the release. He added, “We were able to catch a lot of things before they were even symptomatic. And in most cases, it either led to folks being followed more carefully or to a medical intervention.”
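The "delta from baseline" idea is simple to sketch. The code below uses hypothetical numbers: it establishes one participant's own healthy baseline for a measurement, such as fasting glucose, and flags later visits that drift well beyond it.

```python
# A minimal sketch of flagging deviations from a personal baseline.
import numpy as np

rng = np.random.default_rng(5)
quarters = np.arange(24)                 # quarterly visits over several years
baseline_visits = quarters < 8           # early visits define this person's baseline

# Placeholder: one participant's fasting glucose, slowly rising after visit 15.
glucose = rng.normal(loc=88, scale=4, size=quarters.size)
glucose[15:] += np.linspace(5, 30, quarters.size - 15)

mean, sd = glucose[baseline_visits].mean(), glucose[baseline_visits].std()
z = (glucose - mean) / sd                # deviation from this person's own baseline
flagged = quarters[np.abs(z) > 3]        # visits that might warrant clinical follow-up
print("visits flagged for follow-up:", flagged)
```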

The research team also used the big datasets to discover new biomarkers that may be able to predict the risk of cardiovascular and certain other diseases. Although preliminary, these results have inspired them to conduct larger follow-up studies.

This approach of extensively tracking personal health is currently too expensive to implement into standard health care on a broad scale, according to Snyder. But he hopes the prices will drop as more researchers and physicians innovate in the space.

“Ultimately, we want to shift the practice of medicine from treating people when they are ill to a focus on keeping them healthy by predicting disease risk and catching disease before it is symptomatic,” Snyder said.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Genetic roots of psychiatric disorders clearer now thanks to improved techniques

Photo by LionFive

New technology and access to large databases are fundamentally changing how researchers investigate the genetic roots of psychiatric disorders.

“In the past, a lot of the conditions that people knew to be genetic were found to have a relatively simple genetic cause. For example, Huntington’s disease is caused by mutations in just one gene,” said Laramie Duncan, PhD, an assistant professor of psychiatry and behavioral sciences at Stanford. “But the situation is entirely different for psychiatric disorders, because there are literally thousands of genetic influences on every psychiatric disorder. That’s been one of the really exciting findings that’s come out of modern genetic studies.”

These findings are possible thanks to genome-wide association studies (GWAS), which test for millions of genetic variations across the genome to identify the genes involved in human disease.
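At its core, a GWAS runs a simple association test at every variant and then applies a very stringent significance cutoff, conventionally p < 5 x 10^-8, to account for the huge number of tests. The sketch below uses simulated allele counts rather than real genotypes and tests far fewer variants than an actual study.

```python
# A minimal sketch of per-variant case-control association testing.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_cases, n_controls, n_variants = 5000, 5000, 20_000   # real studies test millions of variants
maf = rng.uniform(0.05, 0.5, n_variants)               # minor allele frequencies

# Placeholder allele counts under the null (no true effect): minor-allele
# counts in cases and controls are drawn from the same frequency.
case_alt = rng.binomial(2 * n_cases, maf)
control_alt = rng.binomial(2 * n_controls, maf)
case_ref = 2 * n_cases - case_alt
control_ref = 2 * n_controls - control_alt

# Allelic chi-squared test on the 2x2 allele-count table for each variant.
pvals = np.array([
    stats.chi2_contingency([[ca, cr], [na, nr]])[1]
    for ca, cr, na, nr in zip(case_alt, case_ref, control_alt, control_ref)
])
print("variants passing genome-wide significance (p < 5e-8):", np.sum(pvals < 5e-8))
```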

Duncan is the lead author of a recent commentary in Neuropsychopharmacology that explains how GWAS studies have demonstrated the inadequacy of previous methods. The paper also highlights new genetics findings for mental health.

Before the newer technologies and databases were available, scientists could only analyze a handful of genetic variations. So they had to guess that a specific genetic variation (a candidate) was associated with a disorder — based on what was known about the underlying biology — and then test their hypothesis. The body of research that has emerged from GWAS, however, shows that nearly all of these earlier “candidate study” results are incorrect for psychiatric disorders.

“There are actually so many genetic variations in the genome, it would have been almost impossible for people to guess correctly,” Duncan said. “It was a reasonable thing to do at the time. But we now have better technology that’s just as affordable as the old ways of doing things, so traditional candidate gene studies are no longer needed.”

Duncan said she began questioning the candidate gene studies as a graduate student. As she studied the scientific literature, she noticed a pattern in the data that suggested the results were wrong. “The larger studies tended to have null results and the very small studies tended to have positive results. And the only reason you’d see that pattern is if there was strong publication bias,” said Duncan. “Namely, positive results were published even if the study was small, and null results were only published when the study was very large.”

In contrast, the findings from the GWAS studies become more and more precise as the sample size increases, she explained, which demonstrates their reliability.

Using GWAS, researchers now know that thousands of variations distributed across the genome likely contribute to any given mental disorder. By using the statistical power gleaned from giant databases such as the UK Biobank or the Million Veteran Program, they have learned that most of these variations aren’t even in the regions of DNA that code for proteins, where scientists expected them to be. For example, only 1.1 percent of schizophrenia risk variants are in these coding regions.

“What’s so interesting about the modern genetic findings is that they are revealing entirely new clues about the underlying biology of psychiatric disorders,” Duncan said. “And this opens up lots of new avenues for treatment development.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.