Awe, anxiety, joy: Researchers identify 27 categories for human emotions

Photo by hannahlouise123

Scores of words describe the wide range of emotions we experience. And as we grasp for words to describe our feelings, scientists are similarly struggling to comprehend how our brain processes and connects these feelings.

Now, a new study from the University of California, Berkeley challenges the assumptions traditionally made in the science of emotion. It was published recently in the Proceedings of the National Academy of Sciences.

Past research has generally categorized all emotions into six to 12 groups, such as happiness, sadness, anger, fear, surprise and disgust. However, the Berkeley researchers identified 27 distinct categories of emotions.

They asked a diverse group of over 850 men and women to view a random sampling of 2,185 short, silent videos that depicted a wide range of emotional situations, including births, endearing animals, natural beauty, vomit, warfare and natural disasters, to name just a few. After each video, the participants reported their emotional response using a variety of techniques, including freely naming their emotions or rating how strongly they felt each of 34 specific emotions. The researchers analyzed these responses using statistical modeling.

The results showed that participants generally had a similar emotional response to each of the videos, and these responses could be categorized into 27 distinct groups of emotions. The team also organized and mapped the emotional responses for all the videos, using a particular color for each of the 27 categories. They created an interactive map that includes links to the video clips and lists their emotional scores.

“We sought to shed light on the full palette of emotions that color our inner world,” said lead author Alan Cowen, a graduate student in neuroscience at UC Berkeley, in a recent news release.

In addition, the new study refuted the traditional view that emotional categories are entirely distinct islands. Instead, the researchers found that many categories are linked by fuzzy boundaries. For example, there are smooth gradients between emotions like awe and peacefulness, they said.

Cowen explained in the release:

“We don’t get finite clusters of emotions in the map because everything is interconnected. Emotional experiences are so much richer and more nuanced than previously thought.

Our hope is that our findings will help other scientists and engineers more precisely capture the emotional states that underlie moods, brain activity and expressive signals, leading to improved psychiatric treatments, an understanding of the brain basis of emotion and technology responsive to our emotional needs.”

The team hopes to expand their research to include other types of stimuli such as music, as well as participants from a wider range of cultures using languages other than English.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Using history as a guide to end tobacco addiction

Photo courtesy of Robert Proctor

The public’s opinion of tobacco use has dramatically changed over time. Gone are the days when cigarette companies advertised using slogans like “fresh as mountain air” or “more doctors smoke Camels than any other cigarette.” We now know that cigarettes cause blindness and tuberculosis, among many other conditions, and are highly addictive.

But in the era of nicotine e-cigarettes that are touted as cool and harmless, have we really changed our ways? I spoke with Robert Proctor, PhD, a professor of history at Stanford, to learn about his work.

What inspired you to research the history of cigarette design?

“Cigarettes are the world’s leading preventable cause of death, killing about 6 million people worldwide every year.  A physician might hope to heal a thousand or perhaps ten thousand people over a career, but what if we could save these 6 million people annually?  It was this hope of saving lives that led to my exploring how cigarettes have been designed, and how they might be stopped.”

Where do you find your research materials?

“The Legacy Tobacco Documents Library is a real treasure.  I use it to explore the industry’s myriad secret projects — like Project Subculture Urban Marketing, a secret Reynolds campaign from the 1990s to target gays and the homeless in San Francisco.  I also use it to find out what they’ve been adding to cigarettes—like diammonium phosphate, a free-basing agent used to boost the potency of the nicotine molecule. I also use it to find out who has been working for the industry, as grantees or expert witnesses. Historically that included dozens of Stanford professors, but I don’t know any still working in that capacity today.”

What do you think about the FDA’s plan to reduce nicotine in cigarettes?

“As I explained in a recent op-ed for the New York Times, the Food and Drug Administration will try to mandate the reduction of nicotine in cigarettes to a sub-addictive level. However, they will encounter ferocious resistance from the industry, which sees nicotine as the indispensable ingredient of their business. For beginning smokers, nicotine is actually a negative in the smoking experience. Once addicted, most smokers regret having started. It will be crucial for the FDA to reduce nicotine sufficiently to make sure new users don’t become addicted. De-nicotinization is easy. Multiple techniques are available to achieve this, including genetic technologies and some of the same techniques used to de-caffeinate coffee.”

Have you also studied e-cigarettes?

“I have studied e-cigarettes but not as intensively. Many of the same techniques once used to market traditional cigarettes have been revived for e-cigarettes and other vaping devices, as Robert Jackler, MD, and his colleagues have shown so beautifully. E-cigarettes may help some smokers quit, but they are more likely to renormalize smoking and act as gateways to regular cigarettes. They also serve as bridge products to keep smokers from quitting nicotine entirely, which is why the big cigarette makers have all launched new vaping devices.”

What more can be done?

“Physicians often know the right thing to do, but may not have the power to make that happen — that is medical impotence.  A third of all cancer deaths, for example, are caused by cigarettes. Just knowing that, though, isn’t enough to do any good, since there are powerful forces dedicated to making sure we keep pulling smoke into our lungs. Much more could be done to solve such problems — the new age minima for purchasing cigarettes should help. I also believe we need to explore what I call ‘the causes of causes.’  Cigarette smoking causes disease, but what causes cigarette smoking?  Too often we end with the individual, rather than going upstream to the source of the problem in the first place. Stop the manufacture of cigarettes, for example, and you stop having to yank out tumors from lungs or put people on oxygen. We need more upstream thinking in the practice of medicine.

We also need to think more about health in our own community. For instance, Stanford got a failing grade from the Santa Clara County Public Health Department in 2011 as the most cigarette-friendly campus in the Bay area — for allowing the sale and use of cigarettes on campus.  We did finally manage to get the sale of cigarettes in the student union stopped, after years of painful protest.”

 

Editor’s note: Stanford has a smoke-free environment policy that prohibits smoking in all buildings, facilities, vehicles, covered walkways and during indoor or outdoor athletic events. Smoking has been banned on the School of Medicine campus for a decade. 

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

 

A look at health care reform — in China

Photo courtesy of Karen Eggleston

The struggles with health insurance reform here in the United States piqued my curiosity about what we could learn from other countries. I reached out to Karen Eggleston, PhD, a senior fellow at the Freeman Spogli Institute for International Studies at Stanford, who researches health care systems and health reform in Asia, especially China.

What is health insurance like in China?

“The original system was linked to the centrally planned economy. Communes in rural areas supported the barefoot doctors and state-owned enterprises provided people with health care coverage in the urban areas. But when China converted to a market-based economy, this had to change. So long story short, they’ve put into place a new system of health coverage based on government subsidized insurance for rural and non-employed urban populations, as well as an employee-based medical insurance for the employed urban population. As I like to tell my US colleagues, if you think providing coverage for 40 million uninsured people is a challenge then think about covering over 800 million uninsured — that’s what China was dealing with.

Many westerners would be surprised to know that Chinese have almost complete freedom when choosing their doctor or hospital.”

How has this expanded coverage impacted health and survival?

“There is a lot of evidence that the expanded health insurance improved access to care and helped protect households from high health care expenditures, but it’s actually pretty difficult to pin down the effects on health and survival.

In a recent study in Health Affairs, we looked at the New Cooperative Medical Scheme that provides health insurance to rural areas. We used the fact that it was introduced over time in different counties to look at the effect it had later, correlating this data with cause-specific mortality data from China’s CDC. We didn’t see a significant impact on mortality rate due to expanded medical coverage.

This was and wasn’t surprising. It may take a long time for the results to manifest. But it’s also quite well known in health economic research that health and survival are often shaped by non-medical factors like lifestyle.”

What are some of the biggest health care challenges in China?

“As China urbanizes hundreds of millions of people at a time, they are changing their diet and living a more sedentary lifestyle. As a result, they’re now getting what are sometimes called the diseases of affluence, such as diabetes. Like many developing countries, China’s healthcare system was set up to deal primarily with acute conditions and to control infectious diseases. Now, they need to sustainably finance and manage programs to prevent and care for people with chronic diseases.

A lot of China’s care is also based in hospitals, so they need to strengthen primary care — ironic for a country so famous for barefoot doctors. Physicians’ career trajectories are better in urban hospitals and patients know that’s where the best physicians are. But new policies are trying to lure patients and doctors to primary care.”

What other factors are affecting health in China?

“China has a rapidly aging population — largely due to their triumph in extending lives by controlling infectious diseases and lifting millions out of poverty, and also related to low fertility. This demographic change reinforces the challenge of preventing and controlling chronic disease.

We also know that people with more education tend to have better health and survival than people with lower education. China’s economic growth has brought a rapid increase in living standards but also a rise in inequality, in both rural and urban areas. One of the best ways to address this is to improve opportunities of education for the disadvantaged. This isn’t typically thought of as a health policy, but actually studies have shown education can have long lasting effects on health and survival.”

This is a reposting of my Scope blog post, courtesy of Stanford School of Medicine.

Artificial Intelligence can help predict who will develop dementia, a new study finds

 

Photo by Lukas Budimaier

If you could find out years ahead that you were likely to develop Alzheimer’s, would you want to know?

Researchers from McGill University argue that patients and their families could better plan and manage care given this extra time. So the team has developed new artificial intelligence software that uses positron emission tomography (PET) scans to predict whether at-risk patients will develop Alzheimer’s within two years.

They retrospectively studied 273 individuals with mild cognitive impairment who participated in the Alzheimer’s Disease Neuroimaging Initiative, a global research study that collects imaging, genetics, cognitive, cerebrospinal fluid and blood data to help define the progression of Alzheimer’s disease.

Patients with mild cognitive impairment have noticeable problems with memory and thinking tasks that are not severe enough to interfere with daily life. Scientists know these patients have abnormal amounts of tau and beta-amyloid proteins in specific brain regions involved in memory, and this protein accumulation occurs years before the patients have dementia symptoms.

However, not everyone with mild cognitive impairment will go on to develop dementia, and the McGill researchers aimed to predict which ones will.

First, the team trained their artificial intelligence software to recognize patients who would develop Alzheimer’s, based on key features in the amyloid PET scans of the ADNI participants. Next, they assessed the performance of the trained AI on an independent set of ADNI amyloid PET scans. It predicted Alzheimer’s progression before symptom onset with an accuracy of 84 percent, as reported in a recent paper in Neurobiology of Aging.
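The train-then-validate design described above can be sketched in a toy form. To be clear, this is not the McGill team’s software: the single “amyloid score” feature, the threshold classifier and the synthetic cohort are all invented for illustration. The only idea carried over is fitting on one dataset and reporting accuracy on an independent one.

```python
import random

random.seed(0)

# Toy stand-in: each "scan" is reduced to a single hypothetical
# amyloid-load score; the label is 1 if the patient later progressed
# to Alzheimer's, 0 otherwise. Progressors' scores are drawn higher
# on average than non-progressors'.
def make_cohort(n):
    cohort = []
    for _ in range(n):
        progressed = random.random() < 0.5
        score = random.gauss(1.5 if progressed else 0.5, 0.5)
        cohort.append((score, int(progressed)))
    return cohort

train, test = make_cohort(200), make_cohort(100)

def mean(xs):
    return sum(xs) / len(xs)

# "Training": choose the decision threshold midway between
# the two class means observed in the training cohort.
threshold = (mean([s for s, y in train if y == 1]) +
             mean([s for s, y in train if y == 0])) / 2

# Evaluation on the independent set, mirroring the study's design.
correct = sum((score > threshold) == (y == 1) for score, y in test)
accuracy = correct / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the held-out set is that the accuracy figure reflects generalization to patients the model has never seen, not memorization of the training data.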

The researchers hope their new AI tool will help improve patient care, as well as accelerate research to find a treatment for Alzheimer’s disease by identifying which patients to select for clinical trials.

“By using this tool, clinical trials could focus only on individuals with a higher likelihood of progressing to dementia within the time frame of the study. This will greatly reduce the cost and time necessary to conduct these studies,” said Serge Gauthier, MD, a senior author and professor of neurology and neurosurgery and of psychiatry at McGill, in a recent news release.

The new AI tool is now available to scientists and students, but the McGill researchers need to conduct further testing before it will be approved and available to clinicians.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

New study intervenes to help female collegiate distance runners eat enough

Photo by David Gonzalez

Like other athletes at risk, female collegiate distance runners are predisposed to develop bone stress injuries from a condition known as the female athlete triad, said Michael Fredericson, MD, a professor of orthopaedic surgery and sports medicine at Stanford, who has worked with Stanford athletes for more than 25 years.

The triad stems from an energy deficiency, he explained:

“When your body isn’t getting enough food, then you stop producing normal levels of sex hormones, which leads to menstrual dysfunction. Your growth hormones go down, so you lose muscle mass. Your thyroid hormones go down, so your metabolism gets suppressed. And your stress hormones go up, which also leads to menstrual dysfunction and reduced muscle mass. And all of that leads to lower bone density, and eventually osteopenia [low bone strength] or even osteoporosis.”

The problem is common. “Based on our historical data, 38 percent of our female runners developed stress fractures over a three-year period from 2010-2013,” Fredericson said. “I knew the time had come to do something to prevent this.”

He is investigating the effectiveness of a nutritional intervention, in collaboration with Aurelia Nattiv, MD, from the University of California, Los Angeles. They have enrolled about 180 male and female runners from Stanford and UCLA in their study.

“The goal is to have our runners eat 45 kcal per kilogram of fat-free mass per day, which is really just a normal diet — so their energy input equals their energy output,” Fredericson said. “We found a third of the women were getting less than this and half of the men were getting less. So it’s fair to say that a significant number of our runners were not getting adequate nutrition or calories.”
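The energy target quoted above is a simple calculation. A minimal sketch, where the function name and the example fat-free mass are my own, not from the study:

```python
def daily_energy_target_kcal(fat_free_mass_kg, kcal_per_kg_ffm=45):
    """Energy target from the study: 45 kcal per kilogram of fat-free
    mass per day, so that energy intake matches energy output."""
    return fat_free_mass_kg * kcal_per_kg_ffm

# Hypothetical runner with 48 kg of fat-free mass:
print(daily_energy_target_kcal(48))  # 2160
```

Fat-free mass rather than total body weight is used as the denominator because fat-free tissue accounts for most of an athlete’s energy expenditure.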

The runners met individually with a sports dietician and filled out an extensive dietary questionnaire to estimate their food intake, Fredericson told me. A low-dose x-ray machine was also used to measure their bone density and basic blood work measured vitamin D and thyroid levels, he said. Finally, their risk of female athlete triad was assessed using an established point system.

After their health assessment, a dietician helped each runner select individual nutrition goals, like adding a snack or increasing the energy density of a current meal, Fredericson said. “We typically want them to eat smaller more frequent meals — particularly right before and immediately after exercising,” he said.

The runners also used an app developed by collaborators at UCLA, which provided an eight-week nutrition education curriculum, including handouts, video clips, recipes and behavior modifying exercises.

Although the researchers have only completed the first year of a three-year study, they have found their intervention is working. “A majority of the runners have increased their bone density over the one year period by 2 to 5 percent,” Fredericson said. “Our preliminary findings also show for every one-point increase in risk score, there was a 17 percent increase in the time it took to recover after an injury. … Anecdotally, we are seeing less injuries and the type of injuries that we are seeing are less severe.”

He emphasized the importance of the work:

“We have a number of young women that are exercising at levels beyond their ability to support their nutritional requirements. By the time they enter college many of them have osteoporosis … Ours is the first attempt to address these issues in an organized study with elite athletes. We need to turn things around for these young women, and prevent more serious health problems later in life.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Strong association between vision loss and cognitive decline

Photo by Les Black

In a nationally representative sample of older adults in the United States, Stanford researchers found a strong relationship between visual impairment and cognitive decline, as recently reported in JAMA Ophthalmology.

The research team investigated this association in elderly populations by analyzing two large US population data sets — over 30,000 respondents from the National Health and Aging Trends Study (NHATS) and almost 3,000 respondents from the National Health and Nutrition Examination Survey (NHANES) — which both included measurements of cognitive and vision function.

“After adjusting for hearing impairment, physical limitations, patient demographics, socioeconomic status and other clinical comorbidities, we found an over two-fold increase in odds of cognitive impairment among patients with poor vision,” said Suzann Pershing, MD, assistant professor of ophthalmology at Stanford and chief of ophthalmology for the VA Palo Alto Health Care System. “These results are highly relevant to an aging US population.”

Previous studies have shown that vision impairment and dementia are conditions of aging, and their prevalence is increasing as our populations become older. However, the Stanford authors noted that their results are purely observational and do not establish a causal relationship.

The complexity of the relationship between vision and cognition was discussed in a related commentary by Jennifer Evans, PhD, an assistant professor of epidemiology at the London School of Hygiene and Tropical Medicine. She noted that the association could arise from difficulties in administering vision and cognition tests in this population. “People with vision impairment may find it more difficult to complete the cognitive impairment tests and … people with cognitive impairment may struggle with visual acuity tests,” she wrote.

Assuming the association between vision and cognitive impairment holds, Evans also raised questions relevant to patient care, such as: Which impairment developed first? Would successful intervention for visual impairment reduce the risk of cognitive impairment? Is sensory impairment an early marker of decline?

Pershing said she plans to follow up on the study:

“I am drawn to better understand the interplay between neurosensory vision, hearing impairment and cognitive function, since these are likely synergistic and bidirectional in their detrimental effects. For instance, vision impairment may accelerate cognitive decline and cognitive decline may lead to worsening ability to perform visual tasks. Ultimately, we can aim to better identify impairment and deliver treatments to optimize all components of patients’ health.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Stanford researchers find a better way to predict ER wait times

Photo by meredith kuipers

Need to go to the emergency room, but not sure where to go? There’s an app for that. A growing number of hospitals are publishing their emergency department wait times on mobile apps, websites, billboards and screens within a hospital.

But according to researchers at the Stanford Graduate School of Business, these estimates aren’t that accurate.

The trouble with most wait time estimates is that the models these systems use are often oversimplified compared to the complicated reality on the ground, said study author Mohsen Bayati, PhD, an associate professor of operations, information and technology at Stanford, in a recent news story. The most common way of estimating a wait time is to simply use a rolling average of the time it took for the last several patients to be seen, but this only works well if patients arrive at a steady rate with similar ailments, he said.

Bayati and his colleagues studied five ways to predict ER wait times using data from four hospitals, including three private teaching hospitals in New York City and the San Mateo Medical Center (SMMC), a public hospital that primarily serves low-income residents.

In particular, the researchers focused on wait times for less acute patients who often have to wait much longer than predicted, because patients with life-threatening illnesses are given priority and treated quickly. At SMMC, they found that some patients with less severe health needs had to wait more than an hour and a half longer than predicted using the standard rolling average method, according to their recent paper in Manufacturing and Service Operations Management.

The researchers determined that their new approach, called Q-Lasso, predicted ER wait times more accurately than the other methods — for instance, it reduced the margin of error by 33 percent for SMMC.

The team’s new Q-Lasso method combined two mathematical techniques: queueing theory and lasso statistical analysis. From queueing theory, they identified a large number of potential factors that could influence ER wait times. They then used lasso analysis to identify the combination of these factors that best predicts the wait at any given time.
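The selection step can be illustrated in miniature. This sketch is not the paper’s Q-Lasso model: the features, the data and the penalty weight are synthetic, and scikit-learn’s ordinary Lasso stands in for the full method. It only shows how the lasso penalty zeroes out uninformative candidate predictors.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Ten synthetic candidate predictors of the kind queueing theory might
# suggest (queue length, number of busy providers, recent arrival
# rate, ...). By construction, only the first two actually drive the
# simulated wait time; the rest are noise.
X = rng.normal(size=(500, 10))
wait_minutes = 30 + 12 * X[:, 0] + 8 * X[:, 1] + rng.normal(scale=2.0, size=500)

# Lasso's L1 penalty shrinks uninformative coefficients to exactly
# zero, keeping only the predictors that carry signal.
model = Lasso(alpha=1.0).fit(X, wait_minutes)
selected = np.flatnonzero(model.coef_)
print(selected)
```

Run on this synthetic data, the fit retains only the two informative features; the eight noise features get coefficients of exactly zero. That automatic pruning is what makes it practical to start from a large pool of theory-suggested candidates.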

The authors were quick to qualify that, although their method was more accurate, it still had errors ranging from 17 minutes to an hour. However, they said it has the advantage of overestimating wait times rather than underestimating them, which leads to a more positive patient experience. Bayati explained in the news piece why this is important:

“If a patient is very satisfied with the service, they’re much more likely to follow the care advice that they receive. A good prediction that provides better patient satisfaction benefits everyone.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Socioeconomic status and food: A Stanford researcher observed families to learn more

Photo courtesy of Priya Fielding-Singh

Priya Fielding-Singh, a PhD candidate in sociology at Stanford, wanted to learn more about the relationship between socioeconomic status and diet. So she made observations and conducted in-depth interviews with parents and adolescents from 73 families across the socioeconomic spectrum throughout the San Francisco Bay Area. I recently spoke with her to learn about her study.

What inspired you to research the relationship between socioeconomic status and diet?

“Growing up, my family was a foster family and we took in many children that came from impoverished backgrounds. I think this early exposure to social inequality was formative in shaping my interests and propelling me into the field of sociology. I became interested in food the more that I learned about diet and disease prevention.

We have excellent large-scale, quantitative studies that show a socioeconomic gradient in diet quality in the United States. Thus, we know that socioeconomic status is one of a few key determinants of what and how people eat. But what we understand less well is why. I wanted to know: how do people’s socioeconomic conditions shape the way that they think about and consume food?”

How did you obtain your data?

“In almost every family, I interviewed, separately, at least one parent and one adolescent to better understand both family members’ perspectives. I also conducted 100 hours of observations with families across socioeconomic status, where I spent months with each family and went about daily life with them.

I saw very clearly that food choices are shaped by myriad different external and internal influences that I only gained exposure to when I spent hours with families on trips to supermarkets, birthday parties, church services, nail salons and back-to-school nights. Importantly, I was able to collect data on family members’ exchanges around food, including discussions and arguments. What families eat is often the product of negotiations and compromises.”

What was it like to observe the family dynamics first-hand?

 “I’m a very curious person, as well as a people person, so I felt in my element conducting ethnographic observations. I was touched by how generously families welcomed me into their lives and shared their experiences with me. Because families were so open with me — and in many cases, did not attempt to shelter me from the challenging aspects of family life — observations were an incredibly illuminative part of the research.”

Based on your study, how is diet transmitted from parents to children?

“I found that parents play a central role in shaping teenagers’ beliefs around food, but there was often a difference in how adolescents perceived their mothers and fathers in relation to diet. Adolescents generally saw their mothers as the healthy parent and their fathers as less invested in healthy eating. So, feeding families and monitoring the dietary health of families largely remains moms’ job, as I explained in a recent article.

In addition, I found that how mothers talked to adolescents about food varied across socioeconomic status. My Stanford colleague, Jennifer Wang, and I wrote a paper explaining these differences. More affluent families had discussions that highlighted the importance of consuming high quality food, which may strengthen messages about healthy eating. In contrast, less privileged families had more discussions about the price of food that highlighted the unaffordability of healthy eating.

Finally, I found that lower-income parents sometimes used food to buffer their children against the hardships of life in poverty. They often had to deny their children’s requests for bigger purchases because those purchases were out of financial reach, but they had enough money to say yes to their kids’ food requests. So low-income parents used food to nourish their children physically, but they also used food to nourish their children emotionally.”

What were your favorite foods as a child?

“My favorite food growing up is the same as my favorite food today: ice cream. Beyond that, the diet I ate as a child was very different than the one I follow now. I grew up in a family of carnivores, but I became a vegetarian in my early 20s and never looked back.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Training anesthesiologists to handle emergencies using simulation

Photo courtesy of David Gaba

Most anesthesiologists excel at routine procedures. But how do they fare when faced with an emergency, such as a sudden cardiorespiratory arrest, a severe allergic reaction or a massive hemorrhage?

“Like airline pilots, it’s the ability to handle the unexpected that patients, or passengers, are really paying for,” said David Gaba, MD, a professor of anesthesiology, perioperative and pain medicine at Stanford.

Gaba helped pioneer mannequin-based simulation tools used to hone the skills of both novice and highly-experienced physicians. During a simulation, a computerized mannequin fills in for the patient. “The mannequin has pulses and eyes that blink. It breathes, talks and provides all the waveforms and numbers to the clinical monitor displays that physicians and nurses are used to seeing,” said Gaba. “The instructor can tell the system to do all sorts of things, and can recreate many situations.”

These mannequins are particularly useful to practice how to handle unexpected life-threatening situations, he said. “We can allow medical students and residents in training to be the final decision-maker in simulation, whereas fully-experienced physicians will take over to protect a real patient,” Gaba said.

Since practicing teamwork is critical, the simulations are sometimes done with a full team of anesthesiologists, surgeons, nurses and technicians. “Sometimes, team members such as nurses are following the instructor’s directions; in other situations, all participants are new to the scenario,” Gaba said.

In a recent study, 263 board-certified anesthesiologists participated in simulated crisis scenarios with team members who were working with the instructor. In one scenario scripted by Stanford, the simulated patient undergoing an urgent belly surgery had a severe heart attack, causing an abnormal heart rhythm and dangerous drop in blood pressure.

The study identified different types of performance deficiencies: lack of knowledge, reluctance to use more aggressive treatments or failure to fully engage the surgeon. However, the most important lesson may be the need to call for help sooner. “When help was called, it almost always improved the overall performance of the team,” Gaba said.

In the scenario described above, for example, the unstable patient’s dangerously low blood pressure necessitated the aggressive treatment of shocking the heart with a defibrillator, he told me. “Although most anesthesiologists know this, they are more familiar with using a variety of medications and some participants were reluctant to do the appropriate, but more invasive action,” Gaba said.

Gaba identified various ways to overcome these performance gaps, such as using role-playing, verbal simulations with a colleague, full simulations and emergency manuals.

During the 30 years he has been researching mannequin-based simulations, Gaba said he’s witnessed many changes:

“When we started, people thought that simulation was a ‘nice toy,’ but they couldn’t see all of its applications. They thought that it was good just for simple technical things like CPR. But, we saw the cognitive parallels between our world in anesthesiology and worlds like aviation. Similarly, 30 years ago the notion of emergency manuals would have been called ‘a cheat sheet’ or ‘a crutch.’ It is now recognized that smart people use such cognitive aids because no one can remember everything, especially in the heat of a crisis. That’s why pilots and others use them – just common sense.”

Despite this progress, Gaba said that simulations are still not fully embedded in health care training. He estimates that only about five percent of practicing physicians have been through a meaningful simulation, beyond the basic life support or advanced CPR courses.

But he is still hopeful. “We’re pretty sure that there are hearts, brains and lives that have been saved due to our work, and I’m not retiring any time soon,” he said.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Study shows link between playing football and neurodegenerative disease

Photo by Cpl. Michelle M Dickson

You’ll likely hear quite a bit this week about a new study that suggests football players have an increased risk of developing chronic traumatic encephalopathy, or CTE, which is a progressive degenerative brain disease associated with repetitive head trauma.

As reported today in JAMA, researchers from the Boston University CTE Center and the VA Boston Healthcare System found pathological evidence of CTE in 177 of the 202 former football players whose brains were donated for research — including 110 of the 111 who played professionally in the United States or Canada. Their study nearly doubles the number of CTE cases described in the literature.

The co-first author, Daniel Daneshvar, MD, PhD, is a new resident at Stanford in orthopaedic surgery's physical medicine and rehabilitation program, which treats patients with traumatic brain injuries and sports injuries. He recently spoke with me about the study, which he participated in while at BU.

“I really enjoyed playing football in high school. I think it’s an important sport for team building, learning leadership and gaining maturity,” he explained. “That being said, I think this study provides evidence of a relationship between playing football and developing a neurodegenerative disease. And that is very concerning, since we have kids as young as 8 years old potentially subjecting themselves to risk of this disease.”

The researchers studied the donated brains of deceased former football players who played in high school, college and the pros. They diagnosed CTE based on criteria recently defined by the National Institutes of Health. Currently, CTE can only be confirmed postmortem.

The study found evidence of mild CTE in three of the 14 former high school players and severe CTE in the majority of former college, semiprofessional and professional players. However, the researchers are quick to acknowledge that their sample is skewed, because brain bank donors don't represent the overall population of former football players. Daneshvar explained:

“The number of NFL players with CTE is certainly less than the 99 percent that we’re reporting here, based on the fact that we have a biased sample. But the fact that 110 out of the 111 NFL players in our group had CTE means that this is in no way a small problem amongst NFL players.”

The research team also performed retrospective clinical evaluations, speaking with the players’ loved ones to learn their athletic histories and disease symptoms. Daneshvar worked on this clinical component — helping to design the study, organize the brain donations, conduct the interviews and analyze the data. The clinical assessment and pathology teams worked independently, blind to each other’s results.

“It’s difficult to determine after people have passed away exactly what symptoms they initially presented with and what their disease course was,” he told me. “We developed a novel mechanism for this comprehensive, retrospective clinical assessment. I was one of the people doing the phone interviews with the participants’ family members and friends to assess cognitive, behavioral, mood and motor symptoms.”

At this point, there aren’t any clinical diagnostic criteria for CTE, Daneshvar said. Although the current study wasn’t designed to establish such criteria, the researchers plan to use the data to correlate the clinical symptoms that patients experience in life with their pathology at the time of death. He went on to explain:

“The important thing about this study is that it isn’t just characterizing disease in this population. It’s about learning as much as we can from this methodologically rigorous cohort going forward, so we can begin to apply the knowledge that we’ve gained to help living athletes.”

Daneshvar and his colleagues are already working on a new study to better understand the prevalence and incidence of CTE in the overall population of football players. And they have begun to investigate what types of risk factors affect the likelihood of developing CTE.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.