Identifying and addressing gender bias in healthcare: A Q&A

International Women’s Day offered a reminder to “celebrate women’s achievement, raise awareness against bias and take action for equality.” Stanford-trained surgeon Arghavan Salles, MD, PhD, is up for the challenge.

As a scholar in residence with Stanford Medicine’s educational programs, Salles researches gender equity, implicit bias, inclusion and physician well-being. Beyond Stanford, she is an activist against sexual harassment in medicine, and she’s written on these topics from a personal perspective for the popular press, including Scientific American and TIME magazine.

I recently spoke with her to learn more.

What inspired your research focus?

As an engineering undergraduate, I never really thought about gender or diversity issues.

Then during the first year of my PhD at Stanford, I learned about stereotype threat. The basic idea is that facing a negative stereotype about part of your identity can affect your performance during tests. For example, randomized controlled studies show that if minority students are asked for their race or ethnicity at the beginning of a test of intellectual ability, like the GRE (Graduate Record Examination), this question can impair their performance. A lot of decisions are based on these kinds of test scores, and this really changed how I think about merit.

At the time, I was also in the middle of my residency to become a surgeon. I started thinking about whether stereotype threat affects women who are training to be surgeons, so that’s what I studied for my dissertation.

I have continued to think about these types of issues, studying things like: Who gets the opportunity to speak at conferences? Does gender affect how supervisors write performance evaluations for residents and medical students?  And how extensive is gender bias in health care?

How does gender bias impact women surgeons?

We all have biases. Growing up in the U.S., we generally expect men to be decisive and in control and women to be warm and nurturing. So when women physicians make decisions quickly and take charge in order to provide the best care to their patients, they’re going against expectations.

I hear the same struggles from women all over. For women surgeons in particular, for example, the operating room staff often don’t hear when they ask for instruments. The staff may not have all the devices and equipment in the room because their requests aren’t taken as seriously as those of men. And they are often labeled as being demanding or difficult if they act like their male colleagues, which has significant consequences for opportunities like promotions.

Related to gender bias, women surgeons also deal all the time with microaggressions from patients and health care professionals. For instance, patients report to the nursing staff they haven’t seen a surgeon yet, when their female surgeon saw them that morning. Or they say, ‘Oh, a woman surgeon. I’ve never heard of that.’ So you have to strategically decide what to confront.

How can we address these issues?

It’s really important to have allies to give emotional support and advice, but also to speak up when these things are happening. For example, an ally can speak up if a committee member brings up something irrelevant during a promotion review.  

In the bigger scheme, we need to change how we hire people, to make it more difficult to act on our biases. We should use a blinded review so we don’t know the gender or race of the applicant. We should have applicants do relevant work sample tests to select the most qualified candidate. And we should use standardized interview questions. Changing how we hire and promote people would make a big difference.

We also need to create a culture of inclusion, in addition to hiring more women, underrepresented minorities and transgender and nonbinary gender people to bring new ideas. Diversity without inclusion is essentially exclusion. We’ve talked about gender today, but a lot of the same challenges are faced by other underrepresented groups.

Why do you write about these topics from a very personal viewpoint?

In some ways, I’m a naive person. I don’t have the same degree of professional self-preservation that some people have. There may be unintended negative consequences, but I’m just honest to a fault.

The piece about anger came out of seeing time and time again women being misunderstood — having their anger attributed to some personality flaw rather than a reasonable consequence of what they were experiencing. I figured if I wrote about it, I could raise awareness and maybe a few people would react differently next time they saw a woman express anger.

I wrote the fertility piece because I wanted to share my experience to educate people, so fewer people would end up involuntarily childless. In general, I just feel that it’s important to share my experiences to help others not make the same mistakes that I have.

Photo courtesy of Arghavan Salles

This is a reposting of my Scope story, courtesy of Stanford School of Medicine.

Improving cancer prognoses: A radio show

“Looking in the patients’ eyes and having a conversation” has motivated Stanford oncologist Ash Alizadeh, MD, PhD, to improve the way we diagnose, talk about and treat cancer.

Patients go home nervous and the care team is nervous, he pointed out, because you’re fighting a battle together to save a life and the things you’re doing are toxic and expensive.

“It’s really sobering to look at how blunt our tools are for getting a sense for whether you’re making progress as you’re going through the course of your therapy,” said Alizadeh in a recent episode of the Sirius radio show The Future of Everything hosted by Russ Altman, MD, PhD.

A key area of his work aims to more accurately predict a patient’s prognosis. He developed a computer algorithm (the focus of a recent Stanford Medicine magazine article) that searches data for information likely to affect the patient’s long-term outcome — generating a unique personalized estimate of risk, called the continuous individualized risk index (CIRI). The goal is to use CIRI to guide personalized therapy selection.

In the episode, he explained that their integrated approach better forecasts a patient’s prognosis by analyzing the complete medical path of the patient, whereas oncologists typically give more weight to the most recent data.

The researchers validated their predictive model using data gathered over time from patients with three types of cancer: diffuse large B-cell lymphoma (DLBCL), chronic lymphocytic leukemia and early-stage breast cancer.

In the study, they also measured the amount of circulating tumor DNA (ctDNA) in the blood of 132 DLBCL patients, before and during their treatment. Circulating tumor DNA is DNA that was shed from dying tumor cells and released into the bloodstream.

For this small group of DLBCL patients, standard methods to forecast how well a patient will do had a predictive index of 0.6, where a perfectly accurate test would score 1 and a random test like a coin toss would score 0.5. Alizadeh’s CIRI score was 0.8 for the same patients — not perfect but markedly better than the current “crystal ball exercise,” he said in a news release.
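The predictive index described above behaves like a concordance index: it is the fraction of comparable patient pairs in which the patient predicted to be at higher risk actually fared worse, so 0.5 matches a coin toss and 1.0 is perfect ranking. The study's actual computation (which handles censored survival data) is more involved; this toy Python sketch, with made-up risk scores and binary outcomes, just illustrates what the 0.6-versus-0.8 comparison measures:

```python
def concordance_index(risk_scores, outcomes):
    """Toy concordance index: the fraction of comparable patient pairs
    in which the higher predicted risk goes with the worse outcome.
    0.5 is equivalent to random guessing; 1.0 is perfect ranking.
    (Illustrative only -- real survival analyses must handle censoring.)"""
    concordant = 0.0
    comparable = 0
    n = len(risk_scores)
    for i in range(n):
        for j in range(i + 1, n):
            if outcomes[i] == outcomes[j]:
                continue  # same outcome: pair tells us nothing about ranking
            comparable += 1
            # Identify which patient of the pair had the worse outcome
            worse, better = (i, j) if outcomes[i] > outcomes[j] else (j, i)
            if risk_scores[worse] > risk_scores[better]:
                concordant += 1.0
            elif risk_scores[worse] == risk_scores[better]:
                concordant += 0.5  # tied predictions count as half
    return concordant / comparable if comparable else float("nan")

# A model whose risk scores rank patients correctly scores 1.0
print(concordance_index([3.0, 2.0, 1.0], [1, 1, 0]))  # → 1.0
```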

In the radio show, he also discussed how this predictive model complements his work to develop new technologies for cancer diagnosis and treatment.

For example, he explained that measuring ctDNA levels with a non-invasive liquid biopsy may help detect early-stage cancer, guide treatment selection and monitor treatment response. And if liquid biopsies detect cancers at an early stage, this may allow oncologists to leverage their patients’ immune system to attack their cancer, he said.

“So instead of directly attacking the tumor cells with drugs that kill the cancer cells, you now have drugs that engage the immune system to say, ‘Hey, wake up,’” he said. That means the same drug could work for many cancers.

Alizadeh is developing these new techniques to personalize cancer diagnosis and treatment in hopes of improving the outcomes for his patients, he said.

 Photo by Pikrepo

This is a reposting of my Scope story, courtesy of Stanford School of Medicine.

Behind the scenes with a Stanford pediatric surgeon

In a new series, “Behind the Scenes,” we’re inviting Stanford Medicine physicians, nurses, researchers and staff to share a glimpse of their day.

As a science writer, I talk to a lot of health care providers about their work. But I’ve often wondered what it is really like to be a surgeon. So I was excited to speak with pediatric surgeon Stephanie Chao, MD, about her day.

Chao is a pediatric general surgeon, an assistant professor of surgery and the trauma medical director for Stanford Children’s Health. In addition to performing surgeries on children of all ages, she has a range of research interests, including how to reduce gun-related deaths in children and the hospital cost associated with pediatric firearm injuries.

Morning routine
On days that I operate, I get up between 5:50 and 6 a.m., depending on whether I hit the snooze button. I typically don’t eat breakfast. I don’t drink coffee because I don’t want to get a tremor. I’m out the door by 6:30 a.m. and at the hospital by 7 a.m. I usually go by the bedside of the first patient I’m going to operate on to say hi. The patient is in the operating room by 7:30 a.m. and my cases start.

On non-surgical days, it’s more chaotic. I have a 3-year-old and 1-year-old. So every day there’s a jigsaw puzzle as to whether my husband or I stay to get the kids ready for preschool, and who comes home early.

Part of Stephanie Chao’s day involves checking on patients, including this newborn.

In the operating room
The operating room is the place where I have the privilege of helping children feel better. It’s a very calming place, like a temple. When I walk through the operating room doors, the rest of the world becomes quiet. Even if it is a high-intensity case when the patient is very sick, I know there is a team of nurses, scrub techs and anesthesiologists used to working together in a well-orchestrated fashion. So even when the unexpected arises, we can focus on the patient with full confidence that we’ll find a solution.

There are occasions when babies are so sick that we need silence in the operating room. Everyone becomes hyper-attuned to all the beeps on the monitors. When patients are not as critically sick, I often play a Pandora station that I created called “Happy.” I started it with Pharrell Williams’ “Happy” and then Pandora pulled in other upbeat songs, including a bunch of Taylor Swift songs, so everyone thinks I’m a big Taylor Swift fan.

The OR staff call me by my first name. I believe that if everyone is relaxed and feels like they have an equal say in the procedure, we work better as a well-oiled machine for the benefit of the patient.

“The OR staff call me by my first name,” Stephanie Chao said.

Favorite task
Some of the most rewarding times of my day are when I sit down with patients and their families to hear their concerns, to reassure them and to help them understand what to expect — and hopefully to make a scary situation a little less so. As a parent, I realize just how hard it is to entrust one’s child completely in the hands of another. I also like to see patients in the hospital as they’re recovering.

Favorite time
The best part of the day is when I come home. When I open the door into the house, my kids recognize that sound and I hear their little footsteps as they run towards the door, shrieking with joy.

Evening ritual
When my husband and I get home, on nights I am not on call, I cook dinner in the middle of the chaos of hearing about the kids’ day. Hopefully, we “sit down” to eat by 6:20 or 6:30 p.m., and I mean that term loosely. It’s a circus, but eventually everyone is somewhat fed.

And then we do bath time and bedtime. There’s a daily negotiation with my three-year-old on how many books we read before bed. On school nights, she’s allowed three books but she tries to negotiate for 10.

Eventually, we get both kids down for the night. Then my husband and I clean up the mess of the day and try to have a coherent conversation with each other. But by then both of us are exhausted. I try to log on again to finish some work, read or review papers. I usually go to sleep around 11 p.m.

Managing it all
When I can carve out time to do relaxing things for myself, like go to the gym, that is great. But it’s rare and I remind myself that I am blessed with a job that I love and a wonderfully active family.

The result sometimes feels like chaos, but I don’t want to wish my life away waiting for my kids to get older and for life to get easier. Trying to live in the moment, and embracing it, is how I find balance.

Photos by Rachel Baker

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

“Poor air quality affects everyone” — How to protect yourself and clean the air

I remember when you could ride BART for free on a “Spare the Air” day, when smog was expected to reach unhealthy levels based on standards set by the Environmental Protection Agency. Now, there are too many of these days — 26 in the Bay Area last year — to enjoy that perk.

This bad air is making us sick, according to Stanford allergy specialist and clinical associate professor Sharon Chinthrajah, MD. In a recent episode of the Sirius radio show “The Future of Everything,” she spoke with Stanford professor and radio host Russ Altman, MD, PhD, about how we can combat the negative health impacts of air pollution.

“Poor air quality affects everybody: healthy people and people with chronic heart and lung conditions,” said Chinthrajah. “And you know, in my lung clinic I see people coming in with exacerbations of their underlying lung diseases like asthma or COPD.”

On Spare the Air days, Chinthrajah said even healthy people can suffer from eye, nose, throat and skin irritations caused by air pollution. And the health impacts can be far more serious for her patients. So she tells them to prepare for bad air quality days and to monitor the air quality index (AQI) in their area, she said.

The AQI measures the levels of ozone and other tiny pollutants in the air. The air is considered unhealthy when the AQI is above 100 for sensitive groups — like people with chronic illnesses, older adults and children. It’s unhealthy for everyone when the AQI is above 150.
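The two thresholds above can be expressed as a simple banding function. The exact breakpoints and wording of the EPA's full AQI scale involve more categories than the article quotes; this Python sketch encodes only the two cutoffs mentioned in the text:

```python
def aqi_category(aqi):
    """Band an Air Quality Index reading using the two thresholds
    quoted in the text: above 100 is unhealthy for sensitive groups,
    above 150 is unhealthy for everyone. (The EPA's full scale has
    additional categories not shown here.)"""
    if aqi <= 100:
        return "acceptable for the general public"
    elif aqi <= 150:
        return "unhealthy for sensitive groups"
    else:
        return "unhealthy for everyone"

print(aqi_category(120))  # → unhealthy for sensitive groups
```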

On these unhealthy air days, Chinthrajah recommends taking precautions:

  • Limit the time you spend outdoors.
  • When outside, wear a well-fitted mask that filters out fine particulates as small as 2.5 microns (about 20 times smaller than the thickness of an average human hair).
  • When driving, recirculate the air in your car and keep your windows closed.
  • Stay hydrated.
  • Once inside, change your clothes and take a quick shower before you go to bed, removing any air particulates that collected on you during the day.

In the radio show, Chinthrajah explained that published studies by the World Health Organization and others demonstrate that people who live in developing countries with many days of poor air quality each year, such as India and other parts of Asia, have shortened life spans.

“You know, there’s premature deaths. There’s exacerbation of underlying lung issues and cardiovascular issues. There’s more deaths from heart attacks and strokes in countries where there is poor air quality,” she said.

She admitted that it is difficult to definitively say these health problems are due to poor air quality — given the other problems in developing countries like limited access to clean water, food and health care — but she thinks poor air quality is a major contributor.

Chinthrajah said she believes we need to address the problem of air pollution at a societal level. And that means we need to target cars that burn fossil fuel, which account for much of the air pollution in California, she said. Instead, we need to move towards using public transportation and electric vehicles, as well as generating electricity from clean energy sources like solar, wind and water.

She noted that California is now offering a $9,500 subsidy to qualifying low-income families to purchase low-emission vehicles like all-electric cars or plug-in hybrids, on top of the standard federal and state rebates.

“So it seems like an overwhelming, daunting task, right? But I think we each have to take ownership of what we can do to reduce our carbon footprint. And then lobby within our local organizations to create practices that are sustainable,” she said.

Chinthrajah hopes that addressing air pollution and energy consumption at a societal level will lead to less asthma and other health problems, she said.

Image by U.S. Environmental Protection Agency 

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Could the next generation of particle accelerators come out of the 3D printer?

SLAC scientists and collaborators are developing 3D copper printing techniques to build accelerator components.

Imagine being able to manufacture complex devices whenever you want and wherever you are. It would create unforeseen possibilities even in the most remote locations, such as building spare parts or new components on board a spacecraft. 3D printing, or additive manufacturing, could be a way of doing just that. All you would need is the device materials, a printer and a computer that controls the process.

Diana Gamzina, a staff scientist at the Department of Energy’s SLAC National Accelerator Laboratory; Timothy Horn, an assistant professor of mechanical and aerospace engineering at North Carolina State University; and researchers at RadiaBeam Technologies dream of developing the technique to print particle accelerators and vacuum electronic devices for applications in medical imaging and treatment, the electrical grid, satellite communications, defense systems and more.

In fact, the researchers are closer to making this a reality than you might think.

“We’re trying to print a particle accelerator, which is really ambitious,” Gamzina said. “We’ve been developing the process over the past few years, and we can already print particle accelerator components today. The whole point of 3D printing is to make stuff no matter where you are without a lot of infrastructure. So you can print your particle accelerator on a naval ship, in a small university lab or somewhere very remote.”

3D printing can be done with liquids and powders of numerous materials, but there aren’t any well-established processes for 3D printing ultra-high-purity copper and its alloys – the materials Gamzina, Horn and their colleagues want to use. Their research focuses on developing the method.

Indispensable copper

Accelerators boost the energy of particle beams, and vacuum electronic devices are used in amplifiers and generators. Both rely on components that can be easily shaped and conduct heat and electricity extremely well. Copper has all of these qualities and is therefore widely used.

Traditionally, each copper component is machined individually and bonded with others using heat to form complex geometries. This manufacturing technique is incredibly common, but it has its disadvantages.

“Brazing together multiple parts and components takes a great deal of time, precision and care,” Horn said. “And any time you have a joint between two materials, you add a potential failure point. So, there is a need to reduce or eliminate those assembly processes.”

Potential of 3D copper printing

3D printing of copper components could offer a solution.

It works by layering thin sheets of materials on top of one another and slowly building up specific shapes and objects. In Gamzina’s and Horn’s work, the material used is extremely pure copper powder.

The process starts with a 3D design, or “construction manual,” for the object. Controlled by a computer, the printer spreads a few-micron-thick layer of copper powder on a platform. It then moves the platform about 50 microns – half the thickness of a human hair – and spreads a second copper layer on top of the first, heats it with an electron beam to about 2,000 degrees Fahrenheit and welds it with the first layer. This process repeats over and over until the entire object has been built.
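The layer count implied by that 50-micron platform step is easy to estimate. This short Python sketch (an idealization: real builds add supports, account for shrinkage and may vary layer thickness) shows how quickly layers accumulate for even a small part:

```python
import math

LAYER_THICKNESS_UM = 50  # platform step per layer, per the text

def layers_needed(object_height_mm):
    """Idealized count of ~50-micron powder layers needed to build
    an object of the given height in millimeters."""
    return math.ceil(object_height_mm * 1000 / LAYER_THICKNESS_UM)

# A 10 mm tall feature already takes 200 spread-heat-weld cycles
print(layers_needed(10))  # → 200
```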

3D printing of a layer of a device known as a traveling wave tube using copper powder. (Christopher Ledford/North Carolina State University)

The amazing part: no specific tooling, fixtures or molds are needed for the procedure. As a result, 3D printing eliminates design constraints inherent in traditional fabrication processes and allows the construction of objects that are uniquely complex.

“The shape doesn’t really matter for 3D printing,” said SLAC staff scientist Chris Nantista, who designs and tests 3D-printed samples for Gamzina and Horn. “You just program it in, start your system and it can build up almost anything you want. It opens up a new space of potential shapes.”

The team took advantage of that, for example, when building part of a klystron – a specialized vacuum tube that amplifies radiofrequency signals – with internal cooling channels at NCSU. Building it in one piece improved the device’s heat transfer and performance.

Compared to traditional manufacturing, 3D printing is also less time consuming and could translate into cost savings of up to 70%, Gamzina said.

A challenging technique

But printing copper devices has its own challenges, as Horn, who began developing the technique with collaborators at RadiaBeam years ago, knows. One issue is finding the right balance between the thermal and electrical properties and strengths of the printed objects. The biggest hurdle for manufacturing accelerators and vacuum electronics, though, is that these high-vacuum devices require extremely high quality and pure materials to avoid part failures, such as cracking or vacuum leaks.

The research team tackled these challenges by first improving the material’s surface quality, using finer copper powder and varying the way they fused layers together. However, using finer copper powder led to the next challenge. It allowed more oxygen to attach to the copper powder, increasing the oxide in each layer and making the printed objects less pure.

So, Gamzina and Horn had to find a way to reduce the oxygen content in their copper powders. The method they came up with, which they recently reported in Applied Sciences, relies on hydrogen gas to bind oxygen into water vapor and drive it out of the powder.
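The chemistry at work here is presumably the standard hydrogen reduction of copper oxides, for example for cuprous oxide:

```latex
\mathrm{Cu_2O} + \mathrm{H_2} \longrightarrow 2\,\mathrm{Cu} + \mathrm{H_2O}\,(g)
```

The oxygen leaves the powder bound up in water vapor, which is why managing that vapor becomes the next concern.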

Using this method is somewhat surprising, Horn said. In a traditionally manufactured copper object, the formation of water vapor would create high-pressure steam bubbles inside the material, causing it to blister and fail. In the additive process, on the other hand, the water vapor escapes layer by layer, so it is released much more effectively.

Although the technique has shown great promise, the scientists still have a ways to go to reduce the oxygen content enough to print an actual particle accelerator. But they have already succeeded in printing a few components, such as the klystron output cavity with internal cooling channels and a string of coupled cavities that could be used for particle acceleration.

Planning to team up with industry partners

The next phase of the project will be driven by the newly-formed Consortium on the Properties of Additive-Manufactured Copper, which is led by Horn. The consortium currently has four active industry members – Siemens, GE Additive, RadiaBeam and Calabazas Creek Research, Inc – with more on the way.

“This is a nice example of collaboration between an academic institution, a national lab and small and large businesses,” Gamzina said. “It would allow us to figure out this problem together. Our work has already allowed us to go from ‘just imagine, this is crazy’ to ‘we can do it’ in less than two years.”

This work was primarily funded by the Naval Sea Systems Command, as a Small Business Technology Transfer Program with Radiabeam, SLAC, and NCSU. Other SLAC contributors include Chris Pearson, Andy Nguyen, Arianna Gleason, Apurva Mehta, Kevin Stone, Chris Tassone and Johanna Weker. Additional contributions came from Christopher Ledford and Christopher Rock at NCSU and Pedro Frigola, Paul Carriere, Alexander Laurich, James Penney and Matt Heintz at RadiaBeam.

Citation: C. Ledford et al., Applied Sciences, 24 September 2019 (10.3390/app9193993)

For questions or comments, contact the SLAC Office of Communications at communications@slac.stanford.edu.

————————————————

SLAC is a vibrant multiprogram laboratory that explores how the universe works at the biggest, smallest and fastest scales and invents powerful tools used by scientists around the globe. With research spanning particle physics, astrophysics and cosmology, materials, chemistry, bio- and energy sciences and scientific computing, we help solve real-world problems and advance the interests of the nation.

SLAC is operated by Stanford University for the U.S. Department of Energy’s Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit energy.gov/science.

Top figure: Examples of 3D-printed copper components that could be used in a particle accelerator: X-band klystron output cavity with micro-cooling channels (at left) and a set of coupled accelerator cavities. (Christopher Ledford/North Carolina State University)

This is a reposting of my news feature, courtesy of SLAC Linear Accelerator Center.

Designing buildings to improve health

Are the buildings that we live and work in stressing us out?

The answer is probably yes, according to Stanford engineer Sarah Billington, PhD, and her colleagues. They also believe this stress is taking a significant toll on our mental and physical health because Americans typically spend almost 90% of their lives indoors.

During a recent talk at a Stanford Reunion Homecoming alumni celebration, Billington described a typical noisy office cut off from nature and filled with artificial light and artificial materials. This built environment makes workers feel stress, anxiety and distraction, which reduces their productivity and their ability to collaborate with others, she explained.

Now, Billington’s multidisciplinary research team is working to design buildings that instead reduce stress and increase a sense of belonging, physical activity and creativity.

Their first step is to measure how building features — such as airflow, lighting and views of nature — affect human well-being. They are quantifying well-being by measuring levels of stress, belonging, creativity, physical activity and environmental behavior.

In a preliminary online study, the team showed about 300 participants pictures of different office environments and asked them to envision working there at a new job. Across the board, the features of the fictitious work environment mattered to participants’ reported well-being.

“In eight out of the nine things that we were looking at, there were statistically significant increases in their sense of belonging, their self-efficacy and their environmental efficacy when they believed they were going to be working in an environment that had natural materials, natural light or diverse representations,” said Billington.

The researchers are now expanding this work by performing larger lab studies and designing future field studies. They plan to collect data from “smart buildings,” which use high-tech sensors to control the heating, air conditioning, ventilation, lighting, security and other systems. The team also plans to collect data from personal devices such as smartwatches, smartphones and laptops.

By analyzing all of this data, they plan to infer the participants’ behaviors, emotions and physiological states. For example, the researchers will use the building’s occupancy sensors to detect if a worker is interacting with other people who are nearby. Or they will figure out someone’s stress level based on how he or she uses a laptop trackpad and mouse, Billington said.

Stanford computer scientist Pablo Paredes, PhD, who collaborates on the project, explained in a paper how their simple model of arm-hand dynamics can detect stress from mouse motion. Basically, your muscles get tense and stiff when you’re stressed, which changes how you move a computer mouse.
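A minimal way to picture that arm-hand model is a mass-spring-damper system, where muscle tension corresponds to spring stiffness. This Python sketch is only loosely inspired by the idea described above (the parameter values and the step-to-a-target setup are assumptions, not the published model); it shows how raising stiffness alone changes the simulated cursor trajectory, producing more overshoot:

```python
def simulate_mouse_step(stiffness, damping=1.0, mass=1.0, dt=0.01, steps=300):
    """Toy mass-spring-damper model of hand-arm dynamics (illustrative
    assumption, not the published Paredes et al. model). The 'hand'
    starts at 0.0 and is pulled toward a target at 1.0; higher muscle
    stiffness with unchanged damping yields a jerkier, overshooting path."""
    x, v = 0.0, 0.0
    trajectory = []
    for _ in range(steps):
        a = (stiffness * (1.0 - x) - damping * v) / mass  # spring pull minus drag
        v += a * dt   # semi-implicit Euler integration
        x += v * dt
        trajectory.append(x)
    return trajectory

relaxed = simulate_mouse_step(stiffness=2.0)   # low muscle tension
tense = simulate_mouse_step(stiffness=20.0)    # high muscle tension
# The stiff (stressed) trajectory overshoots the target more
print(max(tense) > max(relaxed))  # → True
```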

Next, the team plans to use statistical modeling and machine learning to connect these human states to specific building features. They believe this will allow them to design better buildings that improve the occupants’ health.

The researchers said they intend to bring nature indoors by engineering living walls with adaptable acoustic and thermal properties.

They also plan to incorporate dynamic digital displays — such as a large art display on the wall or a small one on an individual’s personal devices — that reflect occupant activity and well-being. For example, a digital image of a flower might represent the energy level of a working group based on how open the petals are, and this could nudge their behavior, Billington said in the talk.

“Our idea is, what if we could make our buildings shape us in a positive way and keep improving over time?” Billington said.

Photo by Nastuh Abootalebi

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Eponym debate: The case for biologically-descriptive names

Naming a disease after the scientist who discovered it, like Hashimoto’s thyroiditis or Diamond-Blackfan anemia, just doesn’t work anymore, some physicians say.

A main argument against eponyms is that plain-language names — which describe the disease symptoms or underlying biological mechanisms — are more helpful for patients and medical trainees. For example, you can probably figure out a bit about acquired immunodeficiency syndrome (AIDS), whooping cough or pink eye just from their names.

“The more obscure and opaque the name — whether due to our profession’s Greek and Latin fetish or our predecessors’ narcissism — the more we separate ourselves from our patients,” says Caitlin Contag, MD, a resident physician at Stanford.

Stanford endocrinologist Danit Ariel, MD, agrees that patients are often confused by eponyms.

“I see this weekly in the clinic with autoimmune thyroid disease. Patients are often confusing Graves’ disease with Hashimoto’s thyroiditis because the names mean nothing to them,” says Ariel. “So when I’m educating them about their diagnosis, I try to use the simplest of terms so they understand what is going on with their body.”

Ariel says she explains to her patients that the thyroid is overactive in Graves’ disease and underactive in Hashimoto’s.

Ariel says she believes using biological names also helps medical students better understand the underlying mechanisms of diseases, whereas using eponyms relies on rote memorization that can hinder learning. “When using biologically-descriptive terms, it makes inherent sense and students are able to build on the concepts and embed the information more effectively,” Ariel says.

Medical eponyms are particularly confusing when more than one disease is named after the same person, Contag argues. For example, neurosurgeon Harvey Williams Cushing, MD, has 12 listings in the medical eponym dictionary. 

Stanford resident physician Angela Primbas, MD, agrees that having multiple syndromes named after the same person is confusing. She says it’s also confusing to have diseases named differently in different countries. In fact, the World Health Organization has tried to address this, along with other issues, by providing best-practice guidelines for naming infectious diseases. (Genetic disorders, however, lack a standard convention for naming.)

In addition, Primbas says naming a disease after a single person oversimplifies a complex story. “Often many people contribute to the discovery of a disease process or clinical finding, and naming it after one person is unfair to the other people who contributed,” she says. “Plus, it’s often disputed who first discovered a disease.”

Also, few disease names recognize the contributions (or suffering) of women and non-Europeans. And some eponyms are decidedly problematic, like those honoring Nazi doctors. A famous example is Reiter’s syndrome, named for Hans Reiter, MD, who was convicted of war crimes for medical experiments he performed at a concentration camp.

“Reiter’s syndrome is now called reactive arthritis for the simple reason that Reiter committed atrocities on other human beings to conduct his ‘science.’ Such people should not have their name tied to a profession that espouses the principles of beneficence and nonmaleficence,” says Vishesh Khanna, MD, a resident physician at Stanford. He says medicine is swinging away from these controversial eponyms and toward describing diseases on the basis of their biology instead.

Khanna also admits that naming a disease after himself wouldn’t sit well.

“Receiving credit for discovering something can certainly be a wonderful feather in a physician’s career cap, but the thought of actually naming a disease after myself makes me cringe,” says Khanna. “Patients and doctors would utter my name every time they had to bring up a disease.”

Such sentiments may be why Contag’s example of a good disease name — cyclic vomiting syndrome — is in plain English. Was no one eager to lend his or her name to it?

While the debate over medical eponyms continues, Khanna suggests a potential solution. “Perhaps a reasonable approach to naming going forward is to allow the use of already established eponyms without dubious histories, while only naming newly discovered diseases based on pathophysiology,” he says.

Everyone I spoke with agrees that changing medical eponyms will happen only slowly, if at all, since language is difficult to change. However, it can be done, according to Dina Wang-Kraus, MD, a Stanford resident in psychiatry and behavioral sciences.

“I looked through our diagnostic manual and we do not have diseases named after people in psychiatry. This shift happened quite some time ago so as to avoid confusion and to allow clinicians from all over the world to have a unified language,” says Wang-Kraus. “In psych, we often say that we wish other specialties would adopt a universal nomenclature too.”

This is the conclusion of a series on naming diseases. The first part is available here.

Photo by 4772818

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Eponym debate: The case for naming diseases after people

Is it better to name a genetic disorder Potocki-Lupski syndrome or the 17p11.2 duplication syndrome? What about Addison’s disease as opposed to adrenal insufficiency? Or Tay-Sachs disease versus hexosaminidase alpha-subunit deficiency (variant B)?

If you have a strong opinion about which is preferable, you aren’t alone: there is an ongoing controversy on how to name diseases. In Western science and medicine, a long-standing tradition is to name a disease after a person. However, many physicians now argue that these eponyms should be abandoned for biologically-descriptive names.

First, a bit about how eponyms are created.

Although the media sometimes comes up with a catchy name that sticks, like swine flu, diseases are typically named by scientists when they first report them in scientific publications.

Oftentimes, diseases are named after prominent scientists who played a major role in identifying the disease. The example that leaps to my mind is Hodgkin’s disease — a type of cancer associated with enlarged lymph nodes — because I was diagnosed and treated for Hodgkin’s at Stanford years ago. Hodgkin’s disease was named after Thomas Hodgkin, an English physician and pathologist who described the disease in a paper in 1832.

Less frequently, diseases are named after a famous patient. For example, amyotrophic lateral sclerosis (ALS), commonly known as Lou Gehrig’s disease, was named after the famous New York Yankee baseball player who was forced to retire after developing the disease in 1939.

As these examples show, one of the reasons to keep eponyms is that they are embedded in medical tradition and history. They carry some kind of story. And, oftentimes, they honor key people associated with the disease.

“I think the people who discover these conditions deserve recognition,” explains Angela Primbas, MD, a resident physician at Stanford. “I don’t think the medical community would know their names otherwise.”

Some physicians also feel eponyms bring color to medicine. “The use of eponyms in medicine, as in other areas, is often random, inconsistent, idiosyncratic, confused, and heavily influenced by local geography and culture. That is part of their beauty,” writes Australian medical researcher Judith Whitworth, MD, in an editorial in BMJ.

Other proponents of eponyms are more practical. They argue that eponymous disease names provide a convenient shorthand for doctors and patients.

Medical eponyms are also widely used by patients, physicians, textbooks and websites. According to a dictionary of medical eponyms, thousands of eponyms are used throughout the world, particularly in the United States and Europe. They are even prominent in the World Health Organization’s international classification of diseases.

So is a massive effort to purge these eponyms worth it, or even realistic?

“There are certainly examples where eponymous disease names are so inculcated in medical vernacular that changing them to a pathology-based name might not be worth the effort,” says Vishesh Khanna, MD, a resident physician at Stanford. He gives the examples of Alzheimer’s disease and Crohn’s disease.

Jimmy Zheng, a medical student at Stanford, agrees that eponyms are here to stay. “At the level of medical school, eponyms are broadly dispensed in class, in USMLE study resources and in our clinical training,” Zheng says. “While some clinicians have called for the complete erasure of eponyms, this is unlikely to happen.”

Zheng and Stanford neurologist Carl Gold, MD, recently assessed the historical trends of medical eponym use in the neurology literature. They also surveyed neurology residents on their knowledge of and attitudes toward eponyms. Their study’s findings were published in Neurology.

“Regardless of ‘should,’ our analyses demonstrate that eponyms are increasingly prevalent in the scientific literature and that new eponyms like the Potocki-Lupski syndrome continue to be coined,” Gold says. “Despite awareness of both the pros and cons of eponyms, the majority of Stanford neurology trainees in our study reported that historical precedent, pervasiveness and ease of use would drive the continued use of eponyms in neurology.”

So the debate rages on. According to my informal and small survey, some Stanford physicians favor eliminating eponymous disease names — stay tuned to find out why.

This is the beginning of a two-part series on naming diseases. The conclusion will appear this week.

Photo via Good Free Photos

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Measuring depression with wearables

Depression and emotional disorders can occur at any time of year — and do for millions of Americans. But feeling sad, lonely, anxious and depressed may seem particularly isolating during this holiday season, which is supposed to be a time of joy and celebration.

A team of Stanford researchers believes that one way to work towards ameliorating this suffering is to develop a better way to quantitatively measure stress, anxiety and depression.

“One of the biggest barriers for psychiatry in the field that I work in is that we don’t have objective tests. So the way that we assess mental health conditions and risks for them is by interview and asking you how do you feel,” said Leanne Williams, MD, a professor in psychiatry and behavioral sciences at Stanford, when she spoke at a Stanford Reunion Homecoming alumni celebration.

She added, “Imagine if you were diagnosing and treating diabetes without tests, without sensors. It’s really impossible to imagine, yet that is what we’re doing for mental health, right now.”

Instead, Stanford researchers want to collect and analyze data from wearable devices to quantitatively characterize mental states. The multidisciplinary team includes scientists from the departments of psychiatry, chemical engineering, bioengineering, computer science and global health.

Their first step was to use functional magnetic resonance imaging to map the brain activity of healthy controls compared to people with major depressive disorder who were imaged before and after they were treated with antidepressants.

The researchers identified six “biotypes” of depression, representing different ways brain circuitry can be disrupted to cause specific symptoms. They classified the biotypes as rumination, anxious avoidance, threat dysregulation, anhedonia, cognitive dyscontrol and inattention.

“For example, threat dysregulation is when the brain stays in alarm mode after acute stress and you feel heart racing, palpitations, sometimes panic attacks,” Williams said, “and that’s the brain not switching off from that mode.”

The team, which includes chemical engineer Zhenan Bao, PhD, then identified links between these different brain biotypes and various physiological differences, including changes in heart rate, skin conductance, electrolyte levels and hormone production. In particular, they found correlations between the biotypes and production of cortisol, a hormone strongly related to stress level.

Now, they are developing a wearable device — called MENTAID — that measures the physiological parameters continuously. Their current prototype can already measure cortisol levels in sweat in agreement with standard laboratory measurements. This was an incredibly challenging task due to the extremely low concentration and tiny molecular size of cortisol.

Going forward, they plan to validate their wearable device with clinical trials, including studies to assess its design and user interface. Ultimately, the researchers hope MENTAID will help prevent and treat mental illness — for example, by better predicting and evaluating patient response to specific anti-depressants.

Photo by Sora Sagano

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Floppy vibration modes explain negative thermal expansion in solids

Animation showing how solid crystals of ScF3 shrink upon heating. While the bonds between scandium (green) and fluorine (blue) remain relatively rigid, the fluorine atoms along the sides of the cubic crystals oscillate independently, resulting in a wide range of distances between neighboring fluorine atoms. The higher the temperature, the greater the buckling in the sides of the crystals leading to the overall contraction (negative thermal expansion) effect. Credit: Brookhaven National Laboratory

Matching the thermal expansion values of materials in contact is essential when manufacturing precision tools, engines, and medical devices. For example, a dental filling would cause a toothache if it expanded a different amount than the surrounding tooth when drinking a hot beverage. Fillings are therefore made from a composite of materials with positive and negative thermal expansion, creating an overall expansion tailored to the tooth enamel.

The underlying mechanisms of why crystalline materials with negative thermal expansion (NTE) shrink when heated have been a matter of scientific debate. Now, a multi-institutional research team led by Igor Zaliznyak, a physicist at Brookhaven National Laboratory, believes it has the answer.

As recently reported in Science Advances, the scientists used total neutron diffraction to measure the distance between atoms in scandium trifluoride powder, a cubic NTE material, at temperatures ranging from 2 K to 1099 K. The research team determined the probability that two particular atomic species would be found at a given distance. They studied scandium trifluoride because it has a simple atomic structure in which each scandium atom is surrounded by an octahedron of fluorine atoms. According to the prevailing rigid-unit-mode (RUM) theory, each fluorine octahedron should vibrate and move as a rigid unit when heated — but that is not what they observed.

“We found that the distances between scandium and fluorine were pretty rigidly defined up to a temperature of about 700 K, but the distances between the nearest-neighbor fluorines became ill-defined at temperatures above 300 K,” says Zaliznyak. “Their probability distributions became very broad, which is basically a direct manifestation of the fact that the shape of the octahedron is not preserved. If the fluorine octahedra had been rigid, the fluorine-fluorine distance would have been as well defined as scandium-fluorine.”

With the help of high school researcher David Wendt and condensed matter theorist Alexei Tkachenko, Zaliznyak developed a simple model to explain these experimental results. The team went back to the basics—the fundamental laws of physics.

“When we removed the ill-controlled constraint that there must be these rigid units, then we could explain the fundamental interactions that govern the atomic positions in the [ScF3] solid using just Coulomb interactions.”

The team developed a negative thermal expansion model that treats each Sc-F bond as a rigid monomer link and the entire ScF3 crystal structure as a floppy, under-constrained network of freely jointed monomers. Each scandium ion is constrained by rigid bonds in all three directions, whereas each fluorine ion is free to vibrate and displace orthogonally to its Sc-F bonds. This is a direct three-dimensional analogy of the well-established behavior of chainlike polymers. And their simple theory agreed remarkably well with their experiments, accurately predicting the distribution of distances between the nearest-neighbor fluorine pairs for all temperatures where NTE was observed.   
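The geometry behind this contraction can be sketched in a few lines of code: if the Sc–F bond length d stays fixed while a fluorine atom vibrates sideways by a displacement u, the bond's projection onto the line between neighboring scandium atoms shrinks to sqrt(d² − u²), so larger thermal vibrations pull the lattice inward. The snippet below is an illustrative toy calculation of this effect, not code or numbers from the paper.

```python
import math

def projected_length(d, u):
    """Distance along the Sc-Sc axis spanned by a rigid Sc-F bond of
    length d when the fluorine atom is displaced transversely by u."""
    return math.sqrt(d * d - u * u)

d = 2.0  # rigid Sc-F bond length (arbitrary units)

# Larger transverse vibration amplitude (i.e., higher temperature)
# shortens the average projection, so the lattice contracts on heating.
for u in (0.0, 0.2, 0.4):
    print(f"u = {u:.1f} -> projection = {projected_length(d, u):.4f}")
```

For small displacements, sqrt(d² − u²) ≈ d − u²/(2d), so the contraction grows with the mean-square vibration amplitude, which increases with temperature — consistent with the buckling described in the animation caption above.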

“Basically we figured out how these ceramic materials contract on warming and how to make a simple calculation that describes this phenomenon,” Zaliznyak says.

Angus Wilkinson, an expert on negative thermal expansion materials at the Georgia Institute of Technology who is not involved in the project, agrees that Zaliznyak’s work will change the way people think about negative thermal expansion in solids.

“While the RUM picture of NTE has been questioned for some time, the experimental data in this paper, along with the floppy network (FN) analysis, provide a compelling alternative view,” says Wilkinson. “I very much like the way the FN approach is applicable to both soft matter systems and crystalline materials. The floppy network analysis is novel and gives gratifyingly good agreement with a wide variety of experimental data.”

According to Zaliznyak, the next major step of their work will be to study more complex materials that exhibit NTE behavior now that they know what to look for.

Read the article in Science Advances.

This is a reposting of my news brief, courtesy of MRS Bulletin.