Nerve interface provides intuitive and precise control of prosthetic hand

Current state-of-the-art multifunctional prosthetic hands are restricted in functionality by the signals used to control them. A promising source for prosthetic motor control is the peripheral nerves that run from the spinal column down the arm, since they still function after an upper limb amputation. But building a direct interface to the peripheral nervous system is challenging, because these nerves and their electrical signals are incredibly small. Current interface techniques are hindered by signal amplitude and stability issues, so they provide amputees with only a limited number of independent movements.

Now, researchers from the University of Michigan have developed a novel regenerative peripheral nerve interface (RPNI) that relies on tiny muscle grafts to amplify the peripheral nerve signals, which are then translated into motor control signals for the prosthesis using standard machine learning algorithms. The research team has demonstrated real-time, intuitive, finger-level control of a robotic hand for amputees, as reported in a recent issue of Science Translational Medicine.

“We take a small graft from one of the patient’s quadricep muscles, or from the amputated limb if they are doing the amputation right then, and wrap just the right amount of muscle around the nerve. The nerve then regrows into the muscle to form new neuromuscular junctions,” says Cindy Chestek, an associate professor of biomedical engineering at the University of Michigan and a senior author on the study. “This creates multiple innervated muscle fibers that are controlled by the small nerve and that all fire at the same time to create a much larger electrical signal—10 or 100 times bigger than you would record from inside or around a nerve. And we do this for several of the nerves in the arm.”

This surgical technique was initially developed by co-researcher Paul Cederna, a plastic surgeon at the University of Michigan, to treat phantom limb pain caused by neuromas. A neuroma is a painful growth of nerve cells that forms at the site of the amputation injury. Over 200 patients have undergone the surgery to treat neuroma pain.

“The impetus for these surgeries was to give nerve fibers a target, or a muscle, to latch on to so neuromas didn’t develop,” says Gregory Clark, an associate professor in biomedical engineering from the University of Utah who was not involved in the study. “Paul Cederna was insightful enough to realize these reinnervated mini-muscles also provided a wonderful opportunity to serve as signal sources for dexterous, intuitive control. That means there’s a ready population that could benefit from this approach.”

The Michigan team validated their technique with studies involving four participants with upper extremity amputations who had previously undergone RPNI surgery to treat neuroma pain. Each participant had a total of three to nine muscle grafts implanted on nerves. Initially, the researchers measured the signals from these RPNIs using fine-wire, nickel-alloy electrodes, which were inserted through the skin into the grafts under ultrasound guidance. They measured high-amplitude electromyography signals, representing the electrical activity of the mini-muscles, when the participants imagined moving the fingers of their phantom hand. Ultrasound imaging showed that when a participant imagined a specific movement, the corresponding mini-muscles contracted. These proof-of-concept measurements, however, were limited by the discomfort and movement of the percutaneous electrodes that pierced the skin.

Next, the team surgically implanted permanent electrodes into the RPNIs of two of the participants. They used a type of electrode commonly used for battery-powered diaphragm pacing systems, which electrically stimulate the diaphragm muscles and nerves of patients with chronic respiratory insufficiency to help regulate their breathing. These implanted electrodes allowed the researchers to measure even larger electrical signals—week after week from the same participant—by just plugging into the connector. Using only 5 to 15 minutes of calibration data, machine learning algorithms translated the electrical signals into movement intent, which was then passed on to a prosthetic hand. Both subjects were able to intuitively complete tasks like stacking physical blocks without any training—it worked on the first try just by thinking about it, says Chestek. Another key result is that the algorithm kept working even 300 days later.
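The decoding pipeline Chestek describes—short calibration recordings translated into movement intent—can be sketched with a toy nearest-centroid classifier. This is only an illustration under invented assumptions: the study's actual features, algorithms, channel counts, and signal values are not specified here, and every number below is made up.

```python
# Toy sketch of EMG-based intent decoding (all values invented).
# Each trial holds one short recording window per RPNI channel.

def mav(window):
    """Mean absolute value: a common EMG amplitude feature."""
    return sum(abs(s) for s in window) / len(window)

def features(trial):
    """One MAV feature per RPNI channel."""
    return [mav(channel) for channel in trial]

def train_centroids(calibration):
    """calibration: movement label -> list of trials. Returns label -> centroid."""
    centroids = {}
    for label, trials in calibration.items():
        vecs = [features(t) for t in trials]
        centroids[label] = [sum(v[i] for v in vecs) / len(vecs)
                            for i in range(len(vecs[0]))]
    return centroids

def decode(trial, centroids):
    """Pick the movement whose centroid is nearest (squared Euclidean distance)."""
    f = features(trial)
    return min(centroids,
               key=lambda lbl: sum((x - y) ** 2 for x, y in zip(f, centroids[lbl])))

# Calibration: a few imagined-movement trials per finger movement.
calibration = {
    "thumb_flex": [([0.9, -1.1, 1.0], [0.1, -0.2, 0.1]),
                   ([1.1, -1.0, 0.9], [0.2, -0.1, 0.3])],
    "index_flex": [([0.1, -0.2, 0.0], [0.9, -1.0, 1.1]),
                   ([0.2, -0.1, 0.0], [1.1, -0.9, 1.0])],
}
centroids = train_centroids(calibration)
print(decode(([1.0, -0.9, 1.1], [0.2, -0.1, 0.1]), centroids))  # -> thumb_flex
```

In practice, prosthetic decoders use far richer feature sets and trained models, but the calibrate-then-decode structure is the same.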

“The ability to use the determined relationship between electrical activity and intended movement for a very long period of time has important practical consequences for the user of a prosthesis, because the last thing they want is to rely on a hand that is not reliable,” Clark says.

Although this clinical trial is ongoing, the Michigan team is now investigating how to replace the connector and computer card with an implantable device that communicates wirelessly, so patients can walk around in the real world. The researchers are also working to incorporate sensory feedback through the regenerative peripheral nerve interface. Their ultimate goal is for patients to feel like their prosthetic hand is alive, taking over the space in the brain where their natural hand used to be.

“People are excited because this is a novel approach that will provide high quality, intuitive, and very specific signals that can be used in a very straightforward, natural way to provide high degrees of dexterous control that are also very stable and last a long time,” Clark says.

Read the article in Science Translational Medicine.

Illustration of multiple regenerative peripheral nerve interfaces (RPNIs) created for each available nerve of an amputee. Fine-wire electrodes were inserted into the RPNI muscles during a readout session. Credit: Philip Vu/University of Michigan; Science Translational Medicine doi: 10.1126/scitranslmed.aay2857

This is a reposting of my news brief, courtesy of Materials Research Society.

Why do viruses like the coronavirus sometimes steal our sense of smell?

When you catch a severe cold, your nose stuffs up, you can’t smell anything and food tastes funny. Fortunately, most people regain their sense of smell once the cold runs its course. But for others, the complete (anosmia) or partial (hyposmia) loss of the sense of smell is permanent.

I spoke with Zara Patel, MD, a Stanford associate professor of otolaryngology, head and neck surgery, and director of endoscopic skull base surgery, to learn more about her research on olfactory disorders. In particular, we discussed her recent study on the possible association between post-viral olfactory loss and other cranial neuropathies, which are disorders that impair your nerves and ultimately your ability to feel or move. She also described how her work pertains to the COVID-19 pandemic.  

How does a virus impair someone’s sense of smell?

A variety of viruses can attack the cranial nerves related to smell or the mucosal tissue that surrounds those nerves. Cranial nerves control things in our head and neck — such as the nerves that allow us to speak by using our vocal cords, control our facial motion, hear and smell.

For example, COVID-19 is just one type of disease caused by a coronavirus. There are many other types of coronaviruses that cause colds and upper respiratory illnesses, as well as rhinoviruses and influenza viruses. Any of these viruses are known to cause inflammation, either directly around the nerve in the nasal lining or within the nerve itself. When the nerve is either surrounded by inflammatory molecules or has a lot of inflammation within the nerve cell body, it cannot function correctly — and that is what causes the loss or dysfunction of smell. And it can happen to anyone: young and old, healthy and sick.

How did your study investigate olfactory loss?

In my practice, I see patients who have smell dysfunction. But I’m also a sinus and skull base surgeon, so I have a whole host of other patients with sinus problems and skull base tumors who don’t have an olfactory loss. So we did a case-control study to compare the incidence of cranial neuropathies — conditions in which nerves in the brain or brain stem are damaged — in two patient groups. Ninety-one patients had post-viral olfactory loss and 100 were controls; they were matched as closely as possible for age and gender.

We also looked at family history of neurologic diseases — such as Alzheimer’s disease, Parkinson’s disease and stroke.

What did you find?

Patients with post-viral olfactory loss had six times higher odds of having other cranial neuropathies than the control group — with incidence rates of other cranial nerve deficits of 11% and 2%, respectively. Family history of neurologic diseases was associated with more than two-fold greater odds of having a cranial nerve deficit. Although we had a small sample size, the striking difference between the groups implies that it is worthwhile to research this with a larger population.
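The six-fold figure follows directly from the reported rates. A minimal back-of-the-envelope check, with counts approximated from the article's percentages (the study's exact counts may differ):

```python
# Odds ratio from the reported incidence rates.
# Counts approximated from the article: ~10 of 91 post-viral patients (11%)
# and 2 of 100 controls (2%) had another cranial nerve deficit.
cases_with, cases_without = 10, 81   # post-viral olfactory loss group
ctrl_with, ctrl_without = 2, 98      # control group

odds_cases = cases_with / cases_without   # ~0.123
odds_ctrl = ctrl_with / ctrl_without      # ~0.020
odds_ratio = odds_cases / odds_ctrl
print(round(odds_ratio, 1))  # -> 6.0
```

Note that an odds ratio compares odds (with/without), not raw percentages; 11% versus 2% is roughly a five-and-a-half-fold difference in rates but about a six-fold difference in odds.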

Our findings suggest that patients experiencing these pathologies may have inherent vulnerabilities to neural damage or decreased ability of nerve recovery — something beyond known risk factors like age, body mass index, co-morbidities and the duration of the loss before intervention. For example, there may be a genetic predisposition, but that is just an untested theory at this point.

How does this work pertain to COVID-19?

Smell loss can be one of the earliest signs of a COVID-19 infection. It can sometimes be the only sign. Or it can present after other symptoms. Although it may not affect every patient with COVID-19, loss of smell and taste is definitely associated with the disease. In some countries, including France, they’ve used this as a triage mechanism. People need to know that these symptoms can be related to the COVID-19 disease process so they aren’t going about their lives like normal and spreading the virus.

The pandemic also might impact how we treat patients with olfactory dysfunction in general. When someone has a viral-induced inflammation of the nerve, we sometimes treat it with steroids to decrease the inflammation. But treating COVID-19 patients with steroids might be a bad idea because of the steroids’ effect on the inflammatory processes going on in the heart and lungs.

What advice do you have for people who have an impaired sense of smell?  

First, if you lose your sense of smell and it isn’t coming back after all the other symptoms have gone away, seek care as soon as possible. If you wait too long, there is much less that we can do to help you. Interventions, including olfactory training and medications, are more effective when you are treated early.

Second, if you lose your sense of smell or taste during this pandemic and you don’t have any other symptoms, contact your doctor. The doctor can decide whether you need to be tested for COVID-19 or whether you need to self-isolate to avoid being a vector of the virus in your family or community.

Image by carles

This is a reposting of my Scope story, courtesy of Stanford School of Medicine.

Harnessing AMReX for Wind Turbine Simulations

ECP ExaWind Project Taps Berkeley Lab’s AMReX to Help Model Next-Generation Wind Farms

Driving along Highway 580 over the Altamont Pass in Northern California, you can’t help but marvel at the 4,000+ wind turbines slowly spinning on the summer-golden hillsides. Home to one of the earliest wind farms in the United States, Altamont Pass today remains one of the largest concentrations of wind turbines in the world. It is also a symbol of the future of clean energy.

Before utility grids can achieve wide-scale deployment of wind energy, however, they need more efficient wind plants. This requires advancing our fundamental understanding of the flow physics governing wind-plant performance.

ExaWind, a U.S. Department of Energy (DOE) Exascale Computing Project, is tackling this challenge by developing new simulation capabilities to more accurately predict the complex flow physics of wind farms. The project entails a collaboration between the National Renewable Energy Laboratory (NREL), Sandia National Laboratories, Oak Ridge National Laboratory, the University of Texas at Austin, Parallel Geometric Algorithms, and — as of a few months ago — Lawrence Berkeley National Laboratory (Berkeley Lab).

“Our ExaWind challenge problem is to simulate the air flow of nine wind turbines arranged as a three-by-three array inside a space five kilometers by five kilometers on the ground and a kilometer high,” said Shreyas Ananthan, a research software engineer at NREL and lead technical expert on the project. “And we need to run about a hundred seconds of real-time simulation.” 

By developing this virtual test bed, the researchers hope to revolutionize the design, operational control, and siting of wind plants, plus facilitate reliable grid integration. And this requires a combination of advanced supercomputers and unique simulation codes.

Unstructured + Structured Calculations

The principle behind a wind turbine is simple: energy in the wind turns the turbine blades, which causes an internal gearbox to rotate and spin a generator that produces electricity. But simulating this is complicated. The flexible turbine blades rotate, bend, and twist as the wind shifts direction and speed. The yaw and pitch of these blades are controlled in real time to extract as much energy as possible from a wind event. The air flow also entails complex dynamics — such as influences from the ground terrain, formation of a turbulent wake downstream of the blades, and turbine-turbine interactions.

To improve on current simulations, scientists need more computing power and higher resolution models that better capture the crucial dynamics. The ExaWind team is developing a predictive, physics-based, and high-resolution computational model — progressively building from petascale simulations of a single turbine toward exascale simulations of a nine-turbine array in complex terrain.

A Nalu-Wind solution to the differential equations of motion for a wind turbine operating in uniform air flow (moving from left to right). Two of the wind turbine’s three blades are pictured (thin blue rectangles at left). The slice in the background represents the contours of the whirling air’s motion, showing the vortical structure of the wake behind the turbine blades (red indicates counterclockwise swirl around the blade tip, blue clockwise).

“We want to know things like the air velocity and air temperature across a big three-dimensional space,” said Ann Almgren, who leads the Center for Computational Sciences and Engineering in Berkeley Lab’s Computational Research Division. “But we care most about what’s happening right at the turbines where things are changing quickly. We want to focus our resources near these turbines, without neglecting what’s going on in the larger space.”

To achieve the desired accuracy, the researchers are solving fluid dynamics equations near the turbines using a computational code called Nalu-Wind, a fully unstructured code that gives users the flexibility to more accurately describe the complex geometries near the turbines, Ananthan explained.

But this flexibility comes at a price. Unstructured mesh calculations have to store information not just about the location of all the mesh points but also about which points are connected to which. Structured meshes, meanwhile, are “logically rectangular,” which makes a lot of operations much simpler and faster.

“Originally, ExaWind planned to use Nalu-Wind everywhere, but coupling Nalu-Wind with a structured grid code may offer a much faster time-to-solution,” Almgren said.

Enter AMReX

Luckily, Ananthan knew about Berkeley Lab’s AMReX, a C++ software framework that supports block-structured adaptive-mesh algorithms for solving systems of partial differential equations. AMReX supports simulations on a structured mesh hierarchy; at each level the mesh is made up of regular boxes, but the different levels have different spatial resolution.

Ananthan explained they actually want the best of both worlds: unstructured mesh near the turbines and structured mesh elsewhere in the domain. The unstructured mesh and structured mesh have to communicate with each other, so the ExaWind team validated an overset mesh approach with an unstructured mesh near the turbines and a background structured mesh. That’s when they reached out to Almgren to collaborate.

“AMReX allows you to zoom in to get fine resolution in the regions you care about but have coarse resolution everywhere else,” Almgren said. The plan is for ExaWind to use an AMReX-based code (AMR-Wind) to resolve the entire domain except right around the turbines, where the researchers will use Nalu-Wind. AMR-Wind will generate finer and finer cells as they get closer to the turbines, basically matching the Nalu-Wind resolution where the codes meet. Nalu-Wind and AMR-Wind will talk to each other using a coupling code called TIOGA.
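The level-by-level refinement Almgren describes can be sketched with simple arithmetic. Assuming a factor-of-2 refinement ratio (a common AMReX choice) and invented domain and target resolutions, one can estimate how many levels AMR-Wind would need before handing off to Nalu-Wind:

```python
import math

# Sketch of AMR cell sizes: each level refines the previous one by a fixed
# ratio (commonly 2 in AMReX). The domain and target sizes here are invented.
def cell_size(dx0, level, ratio=2):
    """Cell size at a given refinement level, starting from coarse size dx0."""
    return dx0 / ratio**level

def levels_needed(dx0, target, ratio=2):
    """Fewest refinement levels so the finest cells are <= the target size."""
    return math.ceil(math.log(dx0 / target, ratio))

dx0 = 5000.0 / 512   # ~9.8 m coarse cells on a 5 km domain (invented)
target = 1.0         # ~1 m cells to hand off to the near-turbine solver (invented)
n = levels_needed(dx0, target)
print(n, round(cell_size(dx0, n), 2))  # -> 4 0.61
```

Doubling the resolution per level multiplies the potential cell count by up to eight in 3-D, which is why refining only near the turbines, rather than everywhere, saves so much compute.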

Even with this strategy, the team needs high performance computing. Ananthan’s initial performance studies were conducted on up to 1,024 Cori Haswell nodes at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC) and 49,152 Mira nodes at the Argonne Leadership Computing Facility.

“For the last three years, we’ve been using NERSC’s Cori heavily, as well as NREL’s Peregrine and Eagle,” said Ananthan. Moving forward, they’ll also be using the Summit system at the Oak Ridge Leadership Computing Facility and, ultimately, the Aurora and Frontier exascale supercomputers — all of which feature different types of GPUs: NVIDIA on Summit (and NERSC’s next-generation Perlmutter system), Intel on Aurora, and AMD on Frontier. 

Although Berkeley Lab just started partnering with the ExaWind team this past fall, the collaboration has already made a lot of progress. “Right now we’re still doing proof-of-concept testing for coupling the AMR-Wind and Nalu-Wind codes, but we expect to have the coupled software running on the full domain by the end of FY20,” said Almgren.

NERSC is a DOE Office of Science user facility.

Top figure: Some of the 4,000+ wind turbines in Northern California’s Altamont Pass wind farm. Credit: David Laporte

This is a reposting of my news feature, courtesy of Berkeley Lab.

Hydrogel elicits switchable, reversible, and controllable self-trapping light beams

The next generation of optoelectronic and photonic systems — with wide-ranging potential applications in image transmission, light-guiding-light signal processing, logic gates for computing, and medicine — may be realized through the invention of circuitry-free, rapidly reconfigurable systems powered by solitons. Optical spatial solitons are self-trapped optical beams of finite spatial cross section that travel without the divergence of freely diffracting beams. These nonlinear waves propagate in photoresponsive materials through self-inscribed waveguides, which are generated when the materials locally change their refractive index in response to light intensity. In conventional nonlinear materials, self-trapping requires high-powered lasers or external electric fields.

Now, a team of researchers from the University of Pittsburgh, Harvard University, and McMaster University has developed a pH-responsive poly(acrylamide-co-acrylic acid) hydrogel, a hydrophilic three-dimensionally connected polymer network, in which light self-trapping can be turned rapidly on and off many times in a controllable and reversible way using a low-intensity visible laser. They reported their work in a recent issue of Proceedings of the National Academy of Sciences.

Developed by Joanna Aizenberg’s group at Harvard University, the hydrogel contains critical covalently tethered chromophores that absorb specific wavelengths of visible light and thereby transform their structure. In the absence of light, the gel is relaxed and the chromophores are predominantly in a ring-open merocyanine form. When the hydrogel is irradiated with visible light, the isomerization of merocyanine to its closed-ring spiropyran form triggers a local expulsion of water, a contraction of the hydrogel, and ultimately an increase in the refractive index along the irradiated path.

The novelty of this work is that this isomerization process is reversible. In the absence of light, the hydrogel reverts back to its original state.

The researchers demonstrated the reversible self-trapping process with experiments led by Kalaichelvi Saravanamuttu’s team at McMaster University—measuring the diameter and peak intensity of the beam over time using a 532 nm laser, optical lenses, neutral density filters, and a CCD camera. They also performed a series of control experiments, such as testing the hydrogel matrix without chromophores, to determine which parameters are critical for self-trapping.

“We determined it was important to have a hydrogel matrix that became more hydrophobic in the presence of light. It was important to have the chromophores covalently-tethered to the three-dimensional matrix to localize the refractive index change. And photoisomerization was critical in triggering this sequence of events,” says Saravanamuttu, an associate professor of chemistry and chemical biology and a senior author on the paper.

More surprisingly, when the researchers irradiated the hydrogel with two parallel lasers, the self-trapped beams interacted with each other when separated by distances up to 10 times the beam width. “They modulated each other, reducing their self-trapping efficiency, at remote distances through the interconnected and flexible network of the hydrogel,” Saravanamuttu says.

Being able to reversibly, predictably, and remotely control one self-trapped beam with another opens up the possibility of applications like all-optical computing using beams of ambient light. Traditional computations are performed using hard materials such as wires, semiconductors and photodiodes to couple electronics to light. Instead, the team hopes to control light with light. So far, they have already used the interactions of self-trapped beams to do basic binary arithmetic, says Saravanamuttu.

These experimental results were confirmed by numerical simulations developed by senior authors Aizenberg, a professor of materials science and of chemistry and chemical biology at Harvard University, and Anna Balazs, a professor of chemical and petroleum engineering at the University of Pittsburgh, and their groups. Their model dynamically calculated the spatial and temporal evolution of the optical field as it propagated through the hydrogel, whose index of refraction was changing. Consistent with experiments, the model accurately captured the self-trapping dynamics and efficiency when using the single or double laser beams.

“This paper marks an interesting step forward that is indicative of the potential of one disruptive technology,” says John Sheridan, a professor of electrical and electronic engineering at University College Dublin, who was not involved in the research. “Technologies like this will provide core hardware components enabling the three-dimensional, all-optical connection and switching hardware needed for ‘Internet of things’ data integration and the 5G/6G telecommunications systems of the future.”

Currently, though, waveguide formation and switching happen in seconds rather than the nanoseconds typical of optoelectronic switches. So the researchers plan to investigate what parameters are slowing down the process and how to change them. For example, they will explore making the hydrogel more flexible to give the chromophores greater freedom to undergo isomerization in hopes of eliciting a faster response. They will also look at different types of isomerizable chromophores.

However, Saravanamuttu emphasizes they are not trying to replace digital computers that use conventional electronics, so speed may not be critical. Other potential applications include autonomous stimuli-responsive soft robotic systems for drug delivery or dynamic optics.

“This is particularly exciting because we see it as a material that can reciprocally interact with an environmental stimulus. It isn’t just turned on and off, but it actually changes its behavior in a dynamic way,” she says.

Read the article in Proceedings of the National Academy of Sciences.

Figure caption: (a) Schematic illustration of the experimental setup used to probe laser self-trapping due to photoinduced local contraction of the hydrogel. A 532 nm laser beam is focused onto the entrance face of the hydrogel, propagated through the material, and imaged onto a CCD camera. (b) Illustration of beam-induced contraction of the hydrogel when continuously irradiated with a 532 nm laser beam. Credit: Saravanamuttu group, McMaster University, Aizenberg Group, Harvard University, Balazs Group, University of Pittsburgh; PNAS doi.org/10.1073/pnas.1902872117

This is a reposting of my MRS news brief, courtesy of the Materials Research Society.

Defend or delay? Grad students must decide whether to present their thesis virtually

Graduate students who are trying to finish their degrees amid the COVID-19 pandemic are finding, after years of research and months of preparation, that the big day of defending their thesis has to be delayed or done remotely.

Faced with a new order to shelter at her off-campus home, Anjali Bisaria, a graduate student in chemical and systems biology at Stanford, decided to forge ahead. She works in the lab of Tobias Meyer, PhD, where they study how human cells move and divide to build, maintain and repair tissues and organs.

On the scheduled date and time, Bisaria logged into a Zoom session and defended her research to a virtual audience of advisors, classmates, friends and family members. She then met virtually with just her faculty examiners. After being declared a doctor, she celebrated with her lab via yet another Zoom session.

“I know it was the right thing to do to keep the community safe,” she said in a Stanford news story. “But it was a little bit sad because this is likely my last quarter on campus. So to not be able to interact with my classmates and not be able to enjoy that honeymoon phase of grad school felt unceremonious.”

Soon, microbiology and immunology graduate student Kali Pruss will face the same decision. Her in-person PhD oral is currently scheduled for May 22 in Munzer Auditorium on the Stanford campus.

“I haven’t yet decided whether I’ll proceed with my defense via Zoom or delay my defense to later in the summer, in hopes that I would be able to have an in-person defense,” Pruss told me. “I was planning on staying through the summer, taking a writing quarter anyway. Thankfully, this gives me some flexibility in terms of timing.”

As a member of the lab run by Justin Sonnenburg, PhD, Pruss studies how Clostridium difficile — a bacterium that commonly causes diarrhea and colitis — adapts to the inflammation that it generates, she said.

Pruss is currently writing a paper on her research, but the pandemic is impacting that too. She told me that she’s doing more data analysis and relying less on experiments than she normally would — and she’s a bit worried about how this approach will be received.

“I’m concerned with how this is going to affect the review process, and whether I’ll be able to successfully address reviewer comments asking for additional experiments for my papers,” she said.

She added, “Ultimately, though, I feel incredibly privileged and grateful to be able to continue working remotely towards my dissertation. The question of how my research is being impacted, and whether to postpone my defense, has been a minor concern in the scope of what is currently happening at Stanford and around the world.”

Given the extension of the Bay Area’s shelter-at-home order to last through at least May 3, Pruss’s hopes of defending in-person on May 22 may not be realized. So, her extended family — from Wisconsin, Indiana and Illinois — canceled their travel arrangements. They hope to come in late summer if she delays her defense and sheltering orders have been lifted.  

Regardless of how she defends her thesis, she plans to celebrate her upcoming educational milestone.

“This is the one time we, as PhD students, get to celebrate our time in grad school as an accomplishment,” she said.

After graduation, Pruss plans to join Jeffrey Gordon’s lab at Washington University School of Medicine in St. Louis as a postdoc. Ultimately, she plans to run her own academic lab.

Photo by Anjali Bisaria

This is a reposting of my Scope story, courtesy of Stanford School of Medicine.

Twitter journal clubs: Sharing knowledge from a social distance

When I was an academic researcher, I attended many journal clubs — convening with my group in a conference room to discuss the methods and findings of a selected paper. These meetings are common in academic and medical education, allowing students to develop their presentation skills and helping everyone keep up with the flood of scientific literature.

In the era of social media, such in-person journal clubs are being replaced by Twitter journal clubs — now more than ever — and it’s led me to wonder, are 280 characters really enough?

I spoke with Roxana Daneshjou, MD, PhD, a dermatology resident at Stanford, to find out. She co-authored a recent editorial in JAMA that describes the advantages of using Twitter compared to the traditional format.

How do Twitter journal clubs work?

The journal club picks a paper to discuss, often using crowdsourcing to select something people are interested in. Everyone logs into Twitter at a specific time and has an online conversation with people from around the globe. Someone may facilitate and use pre-selected questions, but there’s also time for open discussion. You can string many tweets together, so you can basically write as much as you want.

Most journal clubs meet once a month for an hour, but the nice thing about Twitter is that the conversation is saved. So, if someone wants to comment the next day, the participants will see those responses whenever they log into Twitter. That’s important because participants are from different time zones. Having the conversation publicly recorded could be an issue for some people, but I think scientists and clinicians aren’t shy about asking questions and critiquing papers, even publicly.

Why did you start the first dermatology Twitter journal club?

I lurked in other journal clubs and participated in a dermatopathology one that was really interesting. But I wanted to have the same experience with medical dermatology, discussing disease management and new clinical discoveries.

I think Twitter journal clubs are particularly useful for small specialties like dermatology. They allow dermatologists to share knowledge across institutions. They also help promote the field of dermatology to a wider, cross-specialty audience, demonstrating the role that dermatologists can play for their patients. These interactions among specialists are easier with Twitter, compared to traditional journal clubs, because anyone can comment or ask a question about the topic, using the free Twitter website or app without advanced coordination.

Who participates?

We have over 1,700 people following our dermatology journal club, but we typically only have about 15 to 20 people actively participating in a meeting — with more people lurking. Our participants are a diverse group of residents, medical students, faculty and community physicians from across the country.

However, we’ve gotten a much larger group when we’ve done joint meetings with other specialties. For example, we did a joint journal club with nephrology — one of the largest Twitter journal clubs —  to discuss the role of dermatologists in helping manage immunosuppressed kidney transplant patients who are at higher risk of skin cancer. These cross-specialty Twitter interactions are great, because I’ve become friends with residents and faculty at other institutions and now feel comfortable sending them private messages if I have a question. For example, I met dermatologist Adewole Adamson, MD, MPP, through the journal club, and he provided me with a high level of mentorship to co-write the JAMA editorial.

How has the pandemic affected Twitter journal clubs?

Multiple Twitter journal clubs have discussed issues related to COVID-19 and their particular specialty. Our most recent dermatology journal club discussed how dermatologists were transitioning to virtual visits to help with social distancing and how resident training was continuing in dermatology with COVID-19. On April 6, infectious disease’s Twitter journal club will be discussing a paper entitled, “A Trial of Lopinavir-Ritonavir in Adults with Severe COVID-19.”

With social distancing, in-person journal clubs will be more difficult to have. Twitter is the perfect medium for having multiple conversations at once with many people. This is a really difficult time for many, and I hope Twitter journal clubs can help physicians and trainees continue to engage in academic conversations.

Image by Mohamed Mahmoud Hassan

This is a reposting of my Scope story, courtesy of Stanford School of Medicine.

Identifying and addressing gender bias in healthcare: A Q&A

International Women’s Day offered a reminder to “celebrate women’s achievement, raise awareness against bias and take action for equality.” Stanford-trained surgeon Arghavan Salles, MD, PhD, is up for the challenge.

As a scholar in residence with Stanford Medicine’s educational programs, Salles researches gender equity, implicit bias, inclusion and physician well-being. Beyond Stanford, she is an activist against sexual harassment in medicine, and she’s written on these topics from a personal perspective for the popular press, including Scientific American and TIME magazine.

I recently spoke with her to learn more.

What inspired your research focus?

As an engineering undergraduate, I never really thought about gender or diversity issues.

Then during the first year of my PhD at Stanford, I learned about stereotype threat. The basic idea is that facing a negative stereotype about part of your identity can affect your performance during tests. For example, randomized controlled studies show that if minority students are asked for their race or ethnicity at the beginning of a test of intellectual ability, like the GRE (Graduate Record Examination), this question can impair their performance. A lot of decisions are based on these kinds of test scores, and this really changed how I think about merit.

At the time, I was also in the middle of my residency to become a surgeon. I started thinking about whether stereotype threat affects women who are training to be surgeons, so that’s what I studied for my dissertation.

I have continued to think about these types of issues, studying things like: Who gets the opportunity to speak at conferences? Does gender affect how supervisors write performance evaluations for residents and medical students?  And how extensive is gender bias in health care?

How does gender bias impact women surgeons?

We all have biases. Growing up in the U.S., we generally expect men to be decisive and in control and women to be warm and nurturing. So when women physicians make decisions quickly and take charge in order to provide the best care to their patients, they’re going against expectations.

I hear the same struggles from women all over. For women surgeons in particular, for example, the operating room staff often don’t hear when they ask for instruments. The staff may not have all the devices and equipment in the room because their requests aren’t taken as seriously as those of men. And they are often labeled as being demanding or difficult if they act like their male colleagues, which has significant consequences on opportunities like promotions.

Related to gender bias, women surgeons also deal all the time with microaggressions from patients and health care professionals. For instance, patients report to the nursing staff they haven’t seen a surgeon yet, when their female surgeon saw them that morning. Or they say, ‘Oh, a woman surgeon. I’ve never heard of that.’ So you have to strategically decide what to confront.

How can we address these issues?

It’s really important to have allies to give emotional support and advice, but also to speak up when these things are happening. For example, an ally can speak up if a committee member brings up something irrelevant during a promotion review.  

In the bigger scheme, we need to change how we hire people, to make it more difficult to act on our biases. We should use a blinded review so we don’t know the gender or race of the applicant. We should have applicants do relevant work sample tests to select the most qualified candidate. And we should use standardized interview questions. Changing how we hire and promote people would make a big difference.

We also need to create a culture of inclusion, in addition to hiring more women, underrepresented minorities and transgender and nonbinary gender people to bring new ideas. Diversity without inclusion is essentially exclusion. We’ve talked about gender today, but a lot of the same challenges are faced by other underrepresented groups.

Why do you write about these topics from a very personal viewpoint?

In some ways, I’m a naive person. I don’t have the same degree of professional self-preservation that some people have. There may be unintended negative consequences, but I’m just honest to a fault.

The piece about anger came out of seeing time and time again women being misunderstood — having their anger attributed to some personality flaw rather than a reasonable consequence of what they were experiencing. I figured if I wrote about it, I could raise awareness and maybe a few people would react differently next time they saw a woman express anger.

I wrote the fertility piece because I wanted to share my experience to educate people, so fewer people would end up involuntarily childless. In general, I just feel that it’s important to share my experiences to help others not make the same mistakes that I have.

Photo courtesy of Arghavan Salles

This is a reposting of my Scope story, courtesy of Stanford School of Medicine.

Improving cancer prognoses: A radio show

“Looking in the patients’ eyes and having a conversation” has motivated Stanford oncologist Ash Alizadeh, MD, PhD, to improve the way we diagnose, talk about and treat cancer.

Patients go home nervous and the care team is nervous, he pointed out, because you’re fighting a battle together to save a life and the things you’re doing are toxic and expensive.

“It’s really sobering to look at how blunt our tools are for getting a sense for whether you’re making progress as you’re going through the course of your therapy,” said Alizadeh in a recent episode of the Sirius radio show The Future of Everything hosted by Russ Altman, MD, PhD.

A key area of his work aims to more accurately predict a patient’s prognosis. He developed a computer algorithm (the focus of a recent Stanford Medicine magazine article) that searches data for information likely to affect the patient’s long-term outcome — generating a unique personalized estimate of risk, called the continuous individualized risk index (CIRI). The goal is to use CIRI to guide personalized therapy selection.

In the episode, he explained that their integrated approach better forecasts a patient’s prognosis by analyzing the complete medical path of the patient, whereas oncologists typically give more weight to the most recent data.

The researchers validated their predictive model using data gathered over time from patients with three types of cancer: diffuse large B-cell lymphoma (DLBCL), chronic lymphocytic leukemia and early-stage breast cancer.

In the study, they also measured the amount of circulating tumor DNA (ctDNA) in the blood of 132 DLBCL patients, before and during their treatment. Circulating tumor DNA is DNA that was shed from dying tumor cells and released into the bloodstream.

For this small group of DLBCL patients, standard methods to forecast how well a patient will do had a predictive index of 0.6, where a perfectly accurate test would score 1 and a random test like a coin toss would score 0.5. Alizadeh’s CIRI score was 0.8 for the same patients — not perfect but markedly better than the current “crystal ball exercise,” he said in a news release.
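The “predictive index” described above is a concordance statistic: the fraction of patient pairs the model ranks correctly by risk, so 0.5 matches a coin toss and 1.0 is perfect. As a minimal illustrative sketch (not the study’s actual code, and simplified to binary outcomes rather than survival times), it can be computed like this:

```python
def concordance_index(risk_scores, outcomes):
    """Fraction of (event, non-event) patient pairs where the patient
    with the event got the higher risk score; ties count as half.
    0.5 = coin toss, 1.0 = perfectly accurate test."""
    pairs = 0.0
    concordant = 0.0
    for i in range(len(outcomes)):
        for j in range(len(outcomes)):
            if outcomes[i] == 1 and outcomes[j] == 0:  # a comparable pair
                pairs += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / pairs

# Toy example: higher score should mean higher risk of a poor outcome.
scores = [0.9, 0.8, 0.3, 0.2]
events = [1, 0, 1, 0]
print(concordance_index(scores, events))  # 0.75
```

On this scale, moving from 0.6 (standard methods) to 0.8 (CIRI) means correctly ranking many more patient pairs, which is why Alizadeh describes it as markedly better.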

In the radio show, he also discussed how this predictive model complements his work to develop new technologies for cancer diagnosis and treatment.

For example, he explained measuring ctDNA levels with a non-invasive liquid biopsy may help detect early-stage cancer, guide treatment selection and monitor treatment response. And if liquid biopsies detect cancers at an early stage, this may allow oncologists to leverage their patients’ immune system to attack their cancer, he said.

“So instead of directly attacking the tumor cells with drugs that kill the cancer cells, you now have drugs that engage the immune system to say, ‘Hey, wake up,’” he said. That means the same drug could work for many cancers.

Alizadeh is developing these new techniques to personalize cancer diagnosis and treatment in hopes of improving the outcomes for his patients, he said.

Photo by Pikrepo

This is a reposting of my Scope story, courtesy of Stanford School of Medicine.

Behind the scenes with a Stanford pediatric surgeon

In a new series, “Behind the Scenes,” we’re inviting Stanford Medicine physicians, nurses, researchers and staff to share a glimpse of their day.

As a science writer, I talk to a lot of health care providers about their work. But I’ve often wondered what it is really like to be a surgeon. So I was excited to speak with pediatric surgeon Stephanie Chao, MD, about her day.

Chao is a pediatric general surgeon, an assistant professor of surgery and the trauma medical director for Stanford Children’s Health. In addition to performing surgeries on children of all ages, she has a range of research interests, including how to reduce gun-related deaths in children and the hospital cost associated with pediatric firearm injuries.

Morning routine
On days that I operate, I get up between 5:50 and 6 a.m., depending on whether I hit the snooze button. I typically don’t eat breakfast. I don’t drink coffee because I don’t want to get a tremor. I’m out the door by 6:30 a.m. and at the hospital by 7 a.m. I usually go by the bedside of the first patient I’m going to operate on to say hi. The patient is in the operating room by 7:30 a.m. and my cases start.

On non-surgical days, it’s more chaotic. I have a 3-year-old and 1-year-old. So every day there’s a jigsaw puzzle as to whether my husband or I stay to get the kids ready for preschool, and who comes home early.

Part of Stephanie Chao’s day involves checking on patients, including this newborn.

In the operating room
The operating room is the place where I have the privilege of helping children feel better. It’s a very calming place, like a temple. When I walk through the operating room doors, the rest of the world becomes quiet. Even if it is a high-intensity case when the patient is very sick, I know there is a team of nurses, scrub techs and anesthesiologists used to working together in a well-orchestrated fashion. So even when the unexpected arises, we can focus on the patient with full confidence that we’ll find a solution.

There are occasions when babies are so sick that we need silence in the operating room. Everyone becomes hyper-attuned to all the beeps on the monitors. When patients are not as critically sick, I often play a Pandora station that I created called “Happy.” I started it with Pharrell Williams’ “Happy” and then Pandora pulled in other upbeat songs, including a bunch of Taylor Swift songs, so everyone thinks I’m a big Taylor Swift fan.

The OR staff call me by my first name. I believe that if everyone is relaxed and feels like they have an equal say in the procedure, we work better as a well-oiled machine for the benefit of the patient.

“The OR staff call me by my first name,” Stephanie Chao said.

Favorite task
Some of the most rewarding times of my day are when I sit down with patients and their families to hear their concerns, to reassure them and to help them understand what to expect — and hopefully to make a scary situation a little less so. As a parent, I realize just how hard it is to entrust one’s child completely in the hands of another. I also like to see patients in the hospital as they’re recovering.

Favorite time
The best part of the day is when I come home. When I open the door into the house, my kids recognize that sound and I hear their little footsteps as they run towards the door, shrieking with joy.

Evening ritual
When my husband and I get home, on nights I am not on call, I cook dinner in the middle of the chaos of hearing about the kids’ day. Hopefully, we “sit down” to eat by 6:20 or 6:30 p.m., and I mean that term loosely. It’s a circus, but eventually everyone is somewhat fed.

And then we do bath time and bedtime. There’s a daily negotiation with my three-year-old on how many books we read before bed. On school nights, she’s allowed three books but she tries to negotiate for 10.

Eventually, we get both kids down for the night. Then my husband and I clean up the mess of the day and try to have a coherent conversation with each other. But by then both of us are exhausted. I try to log on again to finish some work, read or review papers. I usually go to sleep around 11 p.m.

Managing it all
When I can carve out time to do relaxing things for myself, like go to the gym, that is great. But it’s rare and I remind myself that I am blessed with a job that I love and a wonderfully active family.

The result sometimes feels like chaos, but I don’t want to wish my life away waiting for my kids to get older and for life to get easier. Trying to live in the moment, and embracing it, is how I find balance.

Photos by Rachel Baker

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

“Poor air quality affects everyone” — How to protect yourself and clean the air

I remember when you could ride BART for free on a “Spare the Air” day, when smog was expected to reach unhealthy levels based on standards set by the Environmental Protection Agency. Now, there are too many of these days — 26 in the Bay Area last year — to enjoy that perk.

This bad air is making us sick, according to Stanford allergy specialist and clinical associate professor Sharon Chinthrajah, MD. In a recent episode of the Sirius radio show “The Future of Everything,” she spoke with Stanford professor and radio host Russ Altman, MD, PhD, about how we can combat the negative health impacts of air pollution.

“Poor air quality affects everybody: healthy people and people with chronic heart and lung conditions,” said Chinthrajah. “And you know, in my lung clinic I see people coming in with exacerbations of their underlying lung diseases like asthma or COPD.”

On Spare the Air days, Chinthrajah said even healthy people can suffer from eye, nose, throat and skin irritations caused by air pollution. And the health impacts can be far more serious for her patients. So she tells them to prepare for bad air quality days and to monitor the air quality index (AQI) in their area, she said.

The AQI measures the levels of ozone and other tiny pollutants in the air. The air is considered unhealthy when the AQI is above 100 for sensitive groups — like people with chronic illnesses, older adults and children. It’s unhealthy for everyone when the AQI is above 150.
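The thresholds above map AQI readings to health categories. A minimal sketch of that mapping, using only the two cutoffs mentioned in the article (the EPA’s full AQI scale has more fine-grained breakpoints):

```python
def aqi_advice(aqi, sensitive_group=False):
    """Map an AQI reading to the health categories described above.
    Uses only the article's two thresholds: above 100 is unhealthy for
    sensitive groups, above 150 is unhealthy for everyone."""
    if aqi > 150:
        return "unhealthy for everyone"
    if aqi > 100:
        if sensitive_group:
            return "unhealthy for sensitive groups"
        return "acceptable for the general public"
    return "good to moderate"

print(aqi_advice(120, sensitive_group=True))  # unhealthy for sensitive groups
```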

On these unhealthy air days, Chinthrajah recommends taking precautions:

  • Limit the time you spend outdoors.
  • When outside, use a well-fitted air mask that filters out pollutants larger than 2.5 microns (which is about 20 times smaller than the thickness of an average human hair).
  • When driving, recirculate the air in your car and keep your windows closed.
  • Stay hydrated.
  • Once inside, change your clothes and take a quick shower before you go to bed, removing any air particulates that collected on you during the day.

In the radio show, Chinthrajah explained that published studies by the World Health Organization and others demonstrate that people who live in developing countries — such as India and other parts of Asia — where they suffer poor air quality many days of the year, have a shortened life span.

“You know, there’s premature deaths. There’s exacerbation of underlying lung issues and cardiovascular issues. There’s more deaths from heart attacks and strokes in countries where there is poor air quality,” she said.

She admitted that it is difficult to definitively say these health problems are due to poor air quality — given the other problems in developing countries like limited access to clean water, food and health care — but she thinks poor air quality is a major contributor.

Chinthrajah said she believes we need to address the problem of air pollution at a societal level. And that means we need to target cars that burn fossil fuel, which account for much of the air pollution in California, she said. Instead, we need to move towards using public transportation and electric vehicles, as well as generating electricity from clean energy sources like solar, wind and water.

She noted that California is now offering a $9,500 subsidy to qualifying low-income families to purchase low emission vehicles like all-electric cars or plug-in hybrids, on top of the standard federal and state rebates.

“So it seems like an overwhelming, daunting task, right? But I think we each have to take ownership of what we can do to reduce our carbon footprint. And then lobby within our local organizations to create practices that are sustainable,” she said.

Chinthrajah hopes that addressing air pollution and energy consumption at a societal level will lead to less asthma and other health problems, she said.

Image by U.S. Environmental Protection Agency 

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
