Artificial Intelligence can help predict who will develop dementia, a new study finds

Photo by Lukas Budimaier

If you could find out years ahead that you were likely to develop Alzheimer’s, would you want to know?

Researchers from McGill University argue that patients and their families could better plan and manage care given this extra time. So the team has developed new artificial intelligence software that uses positron emission tomography (PET) scans to predict whether at-risk patients will develop Alzheimer’s within two years.

They retrospectively studied 273 individuals with mild cognitive impairment who participated in the Alzheimer’s Disease Neuroimaging Initiative, a global research study that collects imaging, genetics, cognitive, cerebrospinal fluid and blood data to help define the progression of Alzheimer’s disease.

Patients with mild cognitive impairment have noticeable problems with memory and thinking tasks that are not severe enough to interfere with daily life. Scientists know these patients have abnormal amounts of tau and beta-amyloid proteins in specific brain regions involved in memory, and this protein accumulation occurs years before the patients have dementia symptoms.

However, not everyone with mild cognitive impairment will go on to develop dementia, and the McGill researchers aimed to predict which ones will.

First, the team trained their artificial intelligence software to recognize which patients would develop Alzheimer’s by learning key features in the amyloid PET scans of the ADNI participants. Next, they assessed the performance of the trained AI using an independent set of ADNI amyloid PET scans. It predicted Alzheimer’s progression with an accuracy of 84 percent before symptom onset, as reported in a recent paper in Neurobiology of Aging.
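
For readers curious about the mechanics, the workflow described above (train a classifier on labeled scans, then assess it on an independent hold-out set) can be sketched in a few lines. The snippet below is a hypothetical illustration using scikit-learn with placeholder feature arrays; it is not the McGill team’s software, and the model choice and feature names are assumptions.

```python
# Hypothetical sketch of the train-then-validate workflow described above.
# The feature matrix and labels are synthetic placeholders, not ADNI data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(273, 50))    # e.g., regional amyloid uptake features per patient
y = rng.integers(0, 2, size=273)  # 1 = progressed to Alzheimer's within two years

# Hold out an independent set of scans for assessment
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)  # training step on known outcomes

accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Held-out accuracy: {accuracy:.2f}")  # the study reports 84 percent on real ADNI scans
```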

The researchers hope their new AI tool will help improve patient care, as well as accelerate research to find a treatment for Alzheimer’s disease by identifying which patients to select for clinical trials.

“By using this tool, clinical trials could focus only on individuals with a higher likelihood of progressing to dementia within the time frame of the study. This will greatly reduce the cost and time necessary to conduct these studies,” said Serge Gauthier, MD, a senior author and professor of neurology and neurosurgery and of psychiatry at McGill, in a recent news release.

The new AI tool is now available to scientists and students, but the McGill researchers need to conduct further testing before it will be approved and available to clinicians.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

New study intervenes to help female collegiate distance runners eat enough

Photo by David Gonzalez

Like other at-risk athletes, female collegiate distance runners are predisposed to develop bone stress injuries from a condition known as the female athlete triad, said Michael Fredericson, MD, a professor of orthopaedic surgery and sports medicine at Stanford, who has worked with Stanford athletes for more than 25 years.

The triad stems from an energy deficiency, he explained:

“When your body isn’t getting enough food, then you stop producing normal levels of sex hormones, which leads to menstrual dysfunction. Your growth hormones go down, so you lose muscle mass. Your thyroid hormones go down, so your metabolism gets suppressed. And your stress hormones go up, which also leads to menstrual dysfunction and reduced muscle mass. And all of that leads to lower bone density, and eventually osteopenia [low bone strength] or even osteoporosis.”

The problem is common. “Based on our historical data, 38 percent of our female runners developed stress fractures over a three-year period from 2010-2013,” Fredericson said. “I knew the time had come to do something to prevent this.”

He is investigating the effectiveness of a nutritional intervention, in collaboration with Aurelia Nattiv, MD, from the University of California, Los Angeles. They have enrolled about 180 male and female runners from Stanford and UCLA in their study.

“The goal is to have our runners eat 45 kcal per kilogram of fat-free mass per day, which is really just a normal diet — so their energy input equals their energy output,” Fredericson said. “We found a third of the women were getting less than this and half of the men were getting less. So it’s fair to say that a significant number of our runners were not getting adequate nutrition or calories.”
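
To make the math concrete, here is a hypothetical back-of-the-envelope calculation of the daily target Fredericson describes: 45 kcal for every kilogram of fat-free mass. The weight and body-fat numbers are invented for illustration.

```python
# Hypothetical calculation of the 45 kcal per kg of fat-free mass target quoted above.
def daily_energy_target(weight_kg: float, body_fat_fraction: float,
                        kcal_per_kg_ffm: float = 45.0) -> float:
    """Return a daily calorie target based on fat-free (lean) mass."""
    fat_free_mass_kg = weight_kg * (1.0 - body_fat_fraction)
    return kcal_per_kg_ffm * fat_free_mass_kg

# Example: a 55 kg runner with 18 percent body fat (illustrative numbers only)
print(round(daily_energy_target(55, 0.18)))  # about 2030 kcal per day
```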

The runners met individually with a sports dietician and filled out an extensive dietary questionnaire to estimate their food intake, Fredericson told me. A low-dose X-ray scan measured their bone density, and basic blood work measured their vitamin D and thyroid levels, he said. Finally, their risk of the female athlete triad was assessed using an established point system.

After their health assessment, a dietician helped each runner select individual nutrition goals, like adding a snack or increasing the energy density of a current meal, Fredericson said. “We typically want them to eat smaller, more frequent meals — particularly right before and immediately after exercising,” he said.

The runners also used an app developed by collaborators at UCLA, which provided an eight-week nutrition education curriculum, including handouts, video clips, recipes and behavior modifying exercises.

Although the researchers have only completed the first year of a three-year study, they have found their intervention is working. “A majority of the runners have increased their bone density over the one-year period by 2 to 5 percent,” Fredericson said. “Our preliminary findings also show that for every one-point increase in risk score, there was a 17 percent increase in the time it took to recover after an injury. … Anecdotally, we are seeing fewer injuries, and the injuries that we are seeing are less severe.”

He emphasized the importance of the work:

“We have a number of young women that are exercising at levels beyond their ability to support their nutritional requirements. By the time they enter college many of them have osteoporosis … Ours is the first attempt to address these issues in an organized study with elite athletes. We need to turn things around for these young women, and prevent more serious health problems later in life.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Strong association between vision loss and cognitive decline

Photo by Les Black

In a nationally representative sample of older adults in the United States, Stanford researchers found a strong relationship between visual impairment and cognitive decline, as recently reported in JAMA Ophthalmology.

The research team investigated this association in elderly populations by analyzing two large US population data sets — over 30,000 respondents from the National Health and Aging Trends Study (NHATS) and almost 3,000 respondents from the National Health and Nutrition Examination Survey (NHANES) — which both included measurements of cognitive and vision function.

“After adjusting for hearing impairment, physical limitations, patient demographics, socioeconomic status and other clinical comorbidities, we found an over two-fold increase in odds of cognitive impairment among patients with poor vision,” said Suzann Pershing, MD, assistant professor of ophthalmology at Stanford and chief of ophthalmology for the VA Palo Alto Health Care System. “These results are highly relevant to an aging US population.”
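
The “over two-fold increase in odds” Pershing describes is the kind of estimate produced by a covariate-adjusted logistic regression. The sketch below shows the general idea on synthetic data; the variable names and the statsmodels approach are assumptions for illustration, not the authors’ actual NHATS/NHANES analysis.

```python
# Hypothetical sketch of a covariate-adjusted odds ratio on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
poor_vision = rng.integers(0, 2, n)
age = rng.normal(75, 7, n)
hearing_impaired = rng.integers(0, 2, n)

# Simulate an outcome in which poor vision raises the odds of cognitive impairment
log_odds = -3 + 0.8 * poor_vision + 0.04 * (age - 75) + 0.3 * hearing_impaired
cognitive_impairment = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# Logistic regression adjusting for age and hearing impairment
X = sm.add_constant(np.column_stack([poor_vision, age, hearing_impaired]))
fit = sm.Logit(cognitive_impairment, X).fit(disp=0)
print("Adjusted odds ratio for poor vision:", round(np.exp(fit.params[1]), 2))
```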

Previous studies have shown that vision impairment and dementia are conditions of aging, and their prevalence is increasing as our populations become older. However, the Stanford authors noted that their results are purely observational and do not establish a causative relationship.

The complexity of the relationship between vision and cognition was discussed in a related commentary by Jennifer Evans, PhD, an assistant professor of epidemiology at the London School of Hygiene and Tropical Medicine. She noted that the association could arise from problems with administering vision and cognition tests in this population. “People with vision impairment may find it more difficult to complete the cognitive impairment tests and … people with cognitive impairment may struggle with visual acuity tests,” she wrote.

Assuming the association between vision and cognitive impairment holds, Evans also raised questions relevant to patient care, such as: Which impairment developed first? Would successful intervention for visual impairment reduce the risk of cognitive impairment? Is sensory impairment an early marker of decline?

Pershing said she plans to follow up on the study:

“I am drawn to better understand the interplay between neurosensory vision, hearing impairment and cognitive function, since these are likely synergistic and bidirectional in their detrimental effects. For instance, vision impairment may accelerate cognitive decline and cognitive decline may lead to worsening ability to perform visual tasks. Ultimately, we can aim to better identify impairment and deliver treatments to optimize all components of patients’ health.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Stanford researchers find a better way to predict ER wait times

Photo by meredith kuipers

Need to go to the emergency room, but not sure where to go? There’s an app for that. A growing number of hospitals are publishing their emergency department wait times on mobile apps, websites, billboards and screens within a hospital.

But according to researchers at the Stanford Graduate School of Business, these estimates aren’t that accurate.

The trouble with most wait-time estimates is that the models these systems use are often oversimplified compared with the complicated reality on the ground, said study author Mohsen Bayati, PhD, an associate professor of operations, information and technology at Stanford, in a recent news story. The most common way of estimating a wait time is to simply use a rolling average of the time it took for the last several patients to be seen, but this only works well if patients arrive at a steady rate with similar ailments, he said.
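
As a point of reference, the rolling-average approach Bayati describes amounts to very little code, which is part of why it is so widely used despite its limitations. A minimal, hypothetical illustration:

```python
# Hypothetical illustration of the rolling-average estimate described above:
# predict the next patient's wait as the mean wait of the last k patients seen.
from collections import deque

def rolling_average_wait(recent_waits_minutes, k=10):
    window = deque(recent_waits_minutes, maxlen=k)  # keep only the k most recent waits
    return sum(window) / len(window)

print(rolling_average_wait([35, 50, 42, 61, 48]))  # 47.2 minutes
```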

Bayati and his colleagues studied five ways to predict ER wait times using data from four hospitals: three private teaching hospitals in New York City and San Mateo Medical Center (SMMC), a public hospital that primarily serves low-income residents.

In particular, the researchers focused on wait times for less acute patients who often have to wait much longer than predicted, because patients with life-threatening illnesses are given priority and treated quickly. At SMMC, they found that some patients with less severe health needs had to wait more than an hour and a half longer than predicted using the standard rolling average method, according to their recent paper in Manufacturing and Service Operations Management.

The researchers determined that their new approach, called Q-Lasso, predicted ER wait times more accurately than the other methods — for instance, it reduced the margin of error by 33 percent for SMMC.

The team’s new Q-Lasso method combined two mathematical techniques: queueing theory and lasso statistical analysis. From queueing theory, they identified a large number of potential factors that could influence ER wait times. They then used lasso analysis to identify the combination of these factors that best predicts wait times at a given moment.
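
A rough sense of that two-step idea can be conveyed with a short sketch: assemble many candidate predictors of the kind queueing theory suggests, then let a lasso regression shrink the unhelpful ones to zero. The features, data and use of scikit-learn below are illustrative assumptions; this is not the published Q-Lasso implementation.

```python
# Hypothetical sketch of the two-step idea behind Q-Lasso: many
# queueing-inspired features, then lasso to select the useful ones.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n = 500  # synthetic records of past low-acuity ER visits

# Candidate predictors of the kind queueing theory might suggest (all invented here):
features = np.column_stack([
    rng.integers(0, 30, n),  # patients currently in the waiting room
    rng.integers(0, 15, n),  # patients occupying treatment rooms
    rng.integers(1, 10, n),  # providers on shift
    rng.integers(0, 24, n),  # hour of day
    rng.integers(0, 7, n),   # day of week
])
wait_minutes = rng.gamma(shape=2.0, scale=40.0, size=n)  # synthetic outcome

# Lasso shrinks coefficients of uninformative features to zero,
# effectively choosing the best predictors for current conditions.
model = LassoCV(cv=5).fit(features, wait_minutes)
print("Selected feature weights:", np.round(model.coef_, 2))
```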

The authors were quick to qualify that their method, while more accurate, still had errors ranging from 17 minutes to an hour. However, they said it has the advantage of overestimating wait times rather than underestimating them, which leads to a more positive patient experience. Bayati explained in the news piece why this is important:

“If a patient is very satisfied with the service, they’re much more likely to follow the care advice that they receive. A good prediction that provides better patient satisfaction benefits everyone.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Socioeconomic status and food: A Stanford researcher observed families to learn more

Photo courtesy of Priya Fielding-Singh

Priya Fielding-Singh, a PhD candidate in sociology at Stanford, wanted to learn more about the relationship between socioeconomic status and diet. So she made observations and conducted in-depth interviews with parents and adolescents from 73 families across the socioeconomic spectrum throughout the San Francisco Bay Area. I recently spoke with her to learn about her study.

What inspired you to research the relationship between socioeconomic status and diet?

“Growing up, my family was a foster family and we took in many children that came from impoverished backgrounds. I think this early exposure to social inequality was formative in shaping my interests and propelling me into the field of sociology. I became interested in food the more that I learned about diet and disease prevention.

We have excellent large-scale, quantitative studies that show a socioeconomic gradient in diet quality in the United States. Thus, we know that socioeconomic status is one of a few key determinants of what and how people eat. But what we understand less well is why. I wanted to know: how do people’s socioeconomic conditions shape the way that they think about and consume food?”

How did you obtain your data?

“In almost every family, I interviewed, separately, at least one parent and one adolescent to better understand both family members’ perspectives. I also conducted 100 hours of observations with families across socioeconomic status, where I spent months with each family and went about daily life with them.

I saw very clearly that food choices are shaped by myriad different external and internal influences that I only gained exposure to when I spent hours with families on trips to supermarkets, birthday parties, church services, nail salons and back-to-school nights. Importantly, I was able to collect data on family members’ exchanges around food, including discussions and arguments. What families eat is often the product of negotiations and compromises.”

What was it like to observe the family dynamics first-hand?

 “I’m a very curious person, as well as a people person, so I felt in my element conducting ethnographic observations. I was touched by how generously families welcomed me into their lives and shared their experiences with me. Because families were so open with me — and in many cases, did not attempt to shelter me from the challenging aspects of family life — observations were an incredibly illuminative part of the research.”

Based on your study, how is diet transmitted from parents to children?

“I found that parents play a central role in shaping teenagers’ beliefs around food, but there was often a difference in how adolescents perceived their mothers and fathers in relation to diet. Adolescents generally saw their mothers as the healthy parent and their fathers as less invested in healthy eating. So, feeding families and monitoring the dietary health of families largely remains moms’ job, as I explained in a recent article.

In addition, I found that how mothers talked to adolescents about food varied across socioeconomic status. My Stanford colleague, Jennifer Wang, and I wrote a paper explaining these differences. More affluent families had discussions that highlighted the importance of consuming high quality food, which may strengthen messages about healthy eating. In contrast, less privileged families had more discussions about the price of food that highlighted the unaffordability of healthy eating.

Finally, I found that lower-income parents sometimes used food to buffer their children against the hardships of life in poverty. They often had to deny their children’s requests for bigger purchases because those purchases were out of financial reach, but they had enough money to say yes to their kids’ food requests. So low-income parents used food to nourish their children physically, but they also used food to nourish their children emotionally.”

What were your favorite foods as a child?

“My favorite food growing up is the same as my favorite food today: ice cream. Beyond that, the diet I ate as a child was very different than the one I follow now. I grew up in a family of carnivores, but I became a vegetarian in my early 20s and never looked back.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Training anesthesiologists to handle emergencies using simulation

Photo courtesy of David Gaba

Most anesthesiologists excel at routine procedures. But how do they fare when faced with an emergency, such as a sudden cardiorespiratory arrest, a severe allergic reaction or a massive hemorrhage?

“Like airline pilots, it’s the ability to handle the unexpected that patients, or passengers, are really paying for,” said David Gaba, MD, a professor of anesthesiology, perioperative and pain medicine at Stanford.

Gaba helped pioneer mannequin-based simulation tools used to hone the skills of both novice and highly-experienced physicians. During a simulation, a computerized mannequin fills in for the patient. “The mannequin has pulses and eyes that blink. It breathes, talks and provides all the waveforms and numbers to the clinical monitor displays that physicians and nurses are used to seeing,” said Gaba. “The instructor can tell the system to do all sorts of things, and can recreate many situations.”

These mannequins are particularly useful to practice how to handle unexpected life-threatening situations, he said. “We can allow medical students and residents in training to be the final decision-maker in simulation, whereas fully-experienced physicians will take over to protect a real patient,” Gaba said.

Since practicing teamwork is critical, the simulations are sometimes done with a full team of anesthesiologists, surgeons, nurses and technicians. “Sometimes, team members such as nurses are following the instructor’s directions; in other situations, all participants are new to the scenario,” Gaba said.

In a recent study, 263 board-certified anesthesiologists participated in simulated crisis scenarios with team members who were working with the instructor. In one scenario scripted by Stanford, the simulated patient undergoing an urgent belly surgery had a severe heart attack, causing an abnormal heart rhythm and dangerous drop in blood pressure.

The study identified different types of performance deficiencies: lack of knowledge, reluctance to use more aggressive treatments or failure to fully engage the surgeon. However, the most important lesson may be the need to call for help sooner. “When help was called, it almost always improved the overall performance of the team,” Gaba said.

In the scenario described above, for example, the unstable patient’s dangerously low blood pressure necessitated the aggressive treatment of shocking the heart with a defibrillator, he told me. “Although most anesthesiologists know this, they are more familiar with using a variety of medications and some participants were reluctant to do the appropriate, but more invasive action,” Gaba said.

Gaba identified various ways to overcome these performance gaps, such as using role-playing, verbal simulations with a colleague, full simulations and emergency manuals.

During the 30 years he has been researching mannequin-based simulations, Gaba said he’s witnessed many changes:

“When we started, people thought that simulation was a ‘nice toy,’ but they couldn’t see all of its applications. They thought that it was good just for simple technical things like CPR. But, we saw the cognitive parallels between our world in anesthesiology and worlds like aviation. Similarly, 30 years ago the notion of emergency manuals would have been called ‘a cheat sheet’ or ‘a crutch.’ It is now recognized that smart people use such cognitive aids because no one can remember everything, especially in the heat of a crisis. That’s why pilots and others use them – just common sense.”

Despite this progress, Gaba said that simulations are still not fully embedded in health care training. He estimates that only about five percent of practicing physicians have been through a meaningful simulation, beyond the basic life support or advanced CPR courses.

But he is still hopeful. “We’re pretty sure that there are hearts, brains and lives that have been saved due to our work, and I’m not retiring any time soon,” he said.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Study shows link between playing football and neurodegenerative disease

Photo by Cpl. Michelle M Dickson

You’ll likely hear quite a bit this week about a new study that suggests football players have an increased risk of developing chronic traumatic encephalopathy, or CTE, which is a progressive degenerative brain disease associated with repetitive head trauma.

As reported today in JAMA, researchers from the Boston University CTE Center and the VA Boston Healthcare System found pathological evidence of CTE in 177 of the 202 former football players whose brains were donated for research — including 117 of the 119 who played professionally in the United States or Canada. Their study nearly doubles the number of CTE cases described in the literature.

The co-first author, Daniel Daneshvar, MD, PhD, is a new resident at Stanford in the Department of Orthopaedic Surgery’s physical medicine and rehabilitation program, which treats patients with traumatic brain injuries and sports injuries. He recently spoke with me about the study, which he participated in while at BU.

“I really enjoyed playing football in high school. I think it’s an important sport for team building, learning leadership and gaining maturity,” he explained. “That being said, I think this study provides evidence of a relationship between playing football and developing a neurodegenerative disease. And that is very concerning, since we have kids as young as 8 years old potentially subjecting themselves to risk of this disease.”

The researchers studied the donated brains of deceased former football players who played in high school, college and the pros. They diagnosed CTE based on criteria recently defined by the National Institutes of Health. Currently, CTE can only be confirmed postmortem.

The study found evidence of mild CTE in three of the 14 former high school players and severe CTE in the majority of former college, semiprofessional and professional players. However, the researchers are quick to acknowledge that their sample is skewed, because brain bank donors don’t represent the overall population of former football players. Daneshvar explained:

“The number of NFL players with CTE is certainly less than the 99 percent that we’re reporting here, based on the fact that we have a biased sample. But the fact that 110 out of the 111 NFL players in our group had CTE means that this is in no way a small problem amongst NFL players.”

The research team also performed retrospective clinical evaluations, speaking with the players’ loved ones to learn their athletic histories and disease symptoms. Daneshvar worked on this clinical component — helping to design the study, organize the brain donations, conduct the interviews and analyze the data. The clinical assessment and pathology teams worked independently, blind to each other’s results.

“It’s difficult to determine after people have passed away exactly what symptoms they initially presented with and what their disease course was,” he told me. “We developed a novel mechanism for this comprehensive, retrospective clinical assessment. I was one of the people doing the phone interviews with the participant’s family members and friends to assess cognitive, behavioral, mood and motor symptoms.”

At this point, there aren’t any clinical diagnostic criteria for CTE, Daneshvar said. Although the current study wasn’t designed to establish such criteria, the researchers plan to use the data to correlate the clinical symptoms patients experienced in life with their pathology at the time of death. He went on to explain:

“The important thing about this study is that it isn’t just characterizing disease in this population. It’s about learning as much as we can from this methodologically rigorous cohort going forward, so we can begin to apply the knowledge that we’ve gained to help living athletes.”

Daneshvar and his colleagues are already working on a new study to better understand the prevalence and incidence of CTE in the overall population of football players. And they have begun to investigate what types of risk factors affect the likelihood of developing CTE.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

The skinny on how chickens grow feathers and, perhaps, on how humans grow hair

Photo by Kecia O’Sullivan

How do skin cells make regularly spaced hairs in mammals and feathers in birds? Scientists had two opposing theories, but new research at the University of California, Berkeley surprisingly links them.

The first theory contends that the timing of specific gene activation dictates a cell’s destiny and predetermines tissue structure — for example, in the skin, gene activation determines whether a skin cell becomes a sweat gland cell or hair cell, or remains a skin cell. The second theory asserts that a cell’s fate is determined instead by interacting with other cells and the material that it grows on.

Now, Berkeley researchers have found that the creation of feather follicles (like hair follicles) is initiated by cells exerting mechanical tension on each other, which then triggers the necessary changes in gene expression to create the follicles. Their results were recently reported in Science.  

“The cells of the skin in the embryo are pulling on each other and eventually pull one another into little piles that each go on to become a follicle,” said first author Amy Shyer, PhD, a post-doctoral fellow in molecular and cell biology at the University of California, Berkeley, in a recent news release. “What is really key is that there isn’t a particular genetic program that sets up this pattern. All of these cells are initially the same and they have the same genetic program, but their mechanical behavior produces a difference in the piled-up cells that flips a switch, forming a pattern of follicles in the skin.”

The research team grew skin taken from week-old chicken eggs on different materials with varying stiffness. They found that the stiffness of the substrate material was critical to forming feather follicles — material that was too stiff or too soft yielded only one follicle, whereas material with intermediate stiffness resulted in an orderly array of follicles.

“The fundamental tension between cells wanting to cluster together and their boundary resisting them is what allows you to create a spaced array of patterns,” said co-author Alan Rodrigues, PhD, a biology consultant and former visiting scholar at Berkeley.

The researchers also showed that when the cells cluster together, this activates genes in those cells to generate a follicle and eventually a feather.

Although the study used chicken skin, the researchers suggest that they have discovered a basic mechanism, which may be used in the future to help grow artificial skin grafts that look like normal human skin with hair follicles and sweat pores.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Mindset about personal activity correlated with lifespan, new Stanford research shows

Photo by Paul Hansa

The mind is a powerful thing — a simple thought can have an immediate physiological effect. For instance, just thinking about something stressful can make you sweat or increase your heart rate.

Now, Stanford researchers have found that mindsets about exercise can influence health and longevity. Namely, people who think they are less active than their peers tend to have shorter life spans, even when their actual activity levels are similar.

“Our findings fall in line with a growing body of research suggesting that our mindsets — in this case, beliefs about how much exercise we are getting relative to others — can play a crucial role in our health,” said Alia Crum, PhD, an assistant professor of psychology at Stanford, in a recent Stanford news release.

As outlined in a paper in Health Psychology, the researchers analyzed surveys from more than 61,000 U.S. adults from three national databases, which documented participants’ health, physical activity levels and personal demographics. The research team focused on the question: “Would you say that you are physically more active, less active, or about as active as other persons your age?”

Using statistical models to control for factors like physical activity, age, body mass index and chronic illnesses, they then correlated the results with death records. The researchers found that people who thought they were less active than their peers were up to 71 percent more likely to die during the follow-up period (of up to 21 years) than those who perceived themselves as more active — even when both groups had similar activity levels.
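
For readers who want a feel for the analysis, relating a perception variable to survival while controlling for covariates is typically done with a proportional-hazards model. The sketch below uses the lifelines library on synthetic data; the covariates and model choice are assumptions for illustration, not a reproduction of the published analysis.

```python
# Hypothetical sketch: hazard of death by perceived activity, adjusting for covariates.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "perceived_less_active": rng.integers(0, 2, n),
    "activity_hours_per_week": rng.gamma(2.0, 2.0, n),
    "age": rng.normal(50, 12, n),
    "bmi": rng.normal(27, 4, n),
})

# Simulate death times so that perceiving oneself as less active raises the hazard
hazard = 0.02 * np.exp(0.5 * df["perceived_less_active"] + 0.02 * (df["age"] - 50))
event_time = rng.exponential(1.0 / hazard)
df["died"] = (event_time <= 21).astype(int)          # death observed within follow-up
df["years_followed"] = np.minimum(event_time, 21.0)  # censor at 21 years

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="died")
print(cph.summary["exp(coef)"])  # hazard ratios; values above 1 mean higher risk of death
```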

A possible explanation suggested by the researchers is that perception can positively or negatively affect motivation. People who see themselves as unfit are more likely to remain inactive, which then increases their feelings of stress and depression to reinforce the negative cycle.

Although the research identifies a correlation between perceived amounts of exercise and health outcomes, it does not show that perceptions of inactivity cause an earlier death. However, it suggests that Americans should feel good about the healthy activities that they do every day — such as taking the stairs, walking or biking to work, or cleaning the house — instead of only valuing vigorous exercise at a gym, the authors said.

“It’s time that we start taking the role of mindset in health more seriously,” Crum said in the release. “In the pursuit of health and longevity, it is important to adopt not only healthy behaviors, but also healthy thoughts.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

The implications of male and female brain differences: A discussion

Photo by George Hodan

Men and women are equal, but they and their brains aren’t the same, according to a growing pile of scientific evidence. So why is most research still performed on only male animals and men? A panel of researchers explored this question and its implications on a recent episode of KALW’s City Visions radio show.

“It’s important to study sex differences because they are everywhere affecting everything,” said panelist Larry Cahill, PhD, a professor of neurobiology and behavior at the University of California, Irvine. “Over the last 20 years in particular, neuroscientists and really medicine generally have discovered that there are sex differences of all sizes and shapes really at every level of brain function. And we can’t truly treat women equally if we continue to essentially ignore them, which is what we’ve been doing.”

Neuropsychiatrist and author Louann Brizendine, MD, went on to say that many prescription medicines are only tested on male animals and men, even birth control pills designed for women. This is because the researchers don’t want the fluctuations of hormones associated with the menstrual cycle to “mess up” the research data, she said.

However, this practice can lead to dangerous side effects for women, she explained. For example, the U.S. Food and Drug Administration determined that many women metabolized the common sleep aid Ambien more slowly than men, so the medication remained at a high level in their blood stream in the morning, which impaired activities like driving. After reassessing the clinical data on Ambien, Brizendine said, the FDA kept the recommended dose for men at 10 mg and lowered the dose for women to 5 mg.

Nirao Shah, MD, PhD, a professor of psychiatry and behavioral sciences and of neurobiology at Stanford, said this action by the FDA was a sign of progress. “Decisions like what were made about Ambien represent people starting slowly to wake up and realize that we’ve been assuming that we don’t have to worry fundamentally about sex. And in not worrying about it, we are disproportionally harming women. Bear in mind, women absolutely, clearly and disproportionally bear the brunt of side effects of drugs and medicine.” In fact, he explained, eight out of ten drugs withdrawn from the market are pulled because of worse side effects in women. He later added, “This issue is deeply affecting medical health, especially for women.”

So why are most researchers still studying only male animals or men?

According to Cahill, researchers have a deeply ingrained bias against studying sex differences, believing that sex differences aren’t fundamental because they aren’t shared by both men and women. He also said that resistance to this research boils down to the implicit and false assumption that equal has to mean the same. “If a neuroscientist shows that males and females (be that mice or monkeys or humans) are not the same in some aspect of brain function, then [many people think] the neuroscientist is showing that they are not equal — and that is false.”

Cahill offered advice for consumers: “You can go to the FDA website and for almost any approved drug you can get the essentials on how the testing was done. You’re going to find a mixed bag. For some drugs, you’re going to find there is pretty darn good evidence that the drug probably has roughly equal effects in men and women. On the other hand, you’re going to find a lot of cases when the testing was done mostly or exclusively in males and basically people don’t know [the effects in women].”

“You should be discerning and do your homework,” Brizendine agreed.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.