Featured

Blood test may detect early signs of lung-transplant rejection

A new blood test measures DNA fragments from lung-transplant donors in the blood of recipients, in hopes of preventing organ rejection and saving lives.

Image by kalhh

After receiving a lung transplant, patients face a high risk that their body’s immune system will reject the transplanted organ. Rejection can happen at any time due to a variety of factors, such as a lung infection or an injury to the lungs during transplant surgery. The deadliest type of rejection is chronic lung allograft dysfunction (CLAD), which develops slowly and often silently, without obvious symptoms.

Now, researchers have developed a simple blood test that detects tissue graft injury within the first three months after lung transplant surgery. After further validation, this non-invasive test could identify patients with a high risk of CLAD or death due to graft failure, allowing doctors to intervene early and possibly prevent chronic rejection.

“This test solves a long-standing problem in lung transplants: detection of hidden signs of rejection,” said Hannah Valantine, MD, co-leader of the study and a senior investigator at the National Heart, Lung, and Blood Institute, in a recent news release. “We’re very excited about its potential to save lives, especially in the wake of a critical shortage of donor organs.”

Valantine is also a Stanford professor of medicine. Kiran Khush, MD, an associate professor of medicine at Stanford, is a co-senior author of the study.

The new test measures the amount of DNA fragments circulating freely in a patient’s bloodstream. Because the lung donor and the recipient have different genomes, the test can distinguish and quantify the fragments from each person. A sharp rise in donor-derived DNA fragments indicates that the transplanted organ is injured.
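The arithmetic behind this idea can be sketched in a few lines. This is a toy calculation with hypothetical sequencing read counts, not the study’s actual assay: at genetic positions where the recipient carries only the reference version of a DNA letter and the donor carries only the alternate version, every alternate read in the blood must have come from the graft.

```python
# Toy sketch of donor-derived cell-free DNA (dd-cfDNA) quantification.
# Assumes hypothetical read counts at SNPs where the recipient is
# homozygous reference and the donor is homozygous alternate, so all
# alternate reads originate from the transplanted organ.

def donor_fraction(snp_counts):
    """snp_counts: list of (alt_reads, total_reads) at informative SNPs."""
    alt = sum(a for a, _ in snp_counts)
    total = sum(t for _, t in snp_counts)
    return alt / total

# Hypothetical counts at three informative SNPs.
reads = [(12, 1000), (9, 800), (15, 1200)]
frac = donor_fraction(reads)
print(f"donor-derived cfDNA fraction: {frac:.1%}")  # → donor-derived cfDNA fraction: 1.2%
```

In practice the assay pools many such positions; a rising fraction over successive blood draws is what flags graft injury.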

As recently reported in EBioMedicine, the researchers regularly monitored blood samples from 106 lung transplant patients at several institutions, including Stanford, during the first three months after surgery. After dividing the patients into three groups based on the level of donor-derived DNA fragments in their blood, the team found that patients with higher levels were six times more likely to subsequently develop transplant organ failure or die than those with lower levels. And many of these high-risk patients didn’t have symptoms.

“We showed for the first time that donor-derived DNA is a predictive marker for chronic lung rejection and death, and could provide critical time-points to intervene, perhaps preventing these outcomes,” Valantine said in the release. “Once rejection is detected early via this test, doctors would then have the option to increase the dosages of anti-rejection drugs, add new agents that reduce tissue inflammation, or take other measures to prevent or slow the progression.”

The researchers expect commercial versions of the blood test to be available for clinical use soon. They are also planning future studies to evaluate the blood test for other solid organ transplants.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

The future hope of “flash” radiation cancer therapy


The goal of cancer therapy is to destroy the cancer cells while minimizing side effects and damage to the rest of the body. Common types of treatment include surgery, chemotherapy, targeted therapy and radiation therapy. Often combined with surgery or drugs, radiation therapy uses high-energy X-rays to damage the DNA and disrupt other critical processes of rapidly dividing cancer cells.

New innovations in radiation therapy were the focus of a recent episode of the SiriusXM radio show “The Future of Everything.” On hand was Stanford’s Billy Loo, MD, PhD, a professor of radiation oncology, who spoke with Stanford professor and radio host Russ Altman, MD, PhD.

Radiation has been used to treat cancer for over a century, but today’s technologies target the tumor with far greater precision and speed than the old days. Loo explained that modern radiotherapy now delivers low-dose beams of X-rays from multiple directions, which are accurately focused on the tumor so the surrounding healthy tissues get only a small dose while the tumor gets blasted. Radiation oncologists use imaging — CT, MRI or PET — to determine the three-dimensional sculpture of the tumor to target.

“We identify the area that needs to be treated, where the tumor is in relationship to the normal organs, and create a plan of the sculpted treatment,” Loo said. “And then during the treatment, we also use imaging … to see, for example, whether the radiation is going where we want it to go.”

In addition, oncologists now implement technologies in the clinic to compensate for motion, since organs like the lungs are constantly moving and patients have trouble lying still even for a few minutes. “We call it motion management. We do all kinds of tricks like turning on the radiation beam synchronized with the breathing cycle or following tumors around with the radiation beam,” explained Loo.

Currently, that is how standard radiation therapy works. However, Stanford radiation oncologists are collaborating with scientists at SLAC National Accelerator Laboratory to develop an innovative technology called PHASER. Loo admits the acronym was inspired by his love of Star Trek; it stands for pluridirectional high-energy agile scanning electronic radiotherapy. This new technology delivers the radiation dose of an entire therapy session in a single flash lasting less than a second — faster than the body moves.

“We wondered, what if the treatment was done so fast — like in flash photography — that all the motion is frozen? That’s a fundamental solution to this motion problem that gives us the ultimate precision,” he said. “If we’re able to treat more precisely with less spillage of radiation dose into normal tissues, that gives us the benefit of being able to kill the cancer and cause less collateral damage.”

The research team is currently testing the PHASER technology in mice, and this work has led to an exciting discovery: the biological response to flash radiotherapy may differ from that of slower, traditional radiotherapy.

“We and a few other labs around the world have started to see that when the radiation is given in a flash, we see equal or better tumor killing but much better normal tissue protection than with the conventional speed of radiation,” Loo said. “And if that translates to humans, that’s a huge breakthrough.”

Loo also explained that the PHASER technology has been designed to be compact, economical, reliable and clinically efficient, providing a robust, mobile unit for global use. They expect it to fit in a standard cargo shipping container and to run on solar energy and batteries.

“About half of the patients in the world today have no access to radiation therapy for technological and logistical reasons. That means millions of patients who could potentially be receiving curative cancer therapy are getting treated purely palliatively. And that’s a huge tragedy,” Loo said. “We don’t want to create a solution that everyone in the world has to come here to get — that would have limited impact. And so that’s been a core principle from the beginning.”

This is a reposting of my Scope blog post, courtesy of Stanford School of Medicine.

The future of genomics: A podcast featuring Stanford geneticists

Image by Pat Lyn

Every living organism on Earth has a genome, the complete set of DNA containing all of the information needed to develop and maintain the organism. Humans inherit about three billion base pairs of DNA, packaged into 23 chromosomes, from each parent, so your genome can help identify your personal ancestry. Genomes can also trace the movements of human populations, based on who is genetically similar to whom.

Carlos Bustamante, PhD, a professor of biomedical data science, of genetics and of biology at Stanford, discusses the blossoming uses of genomes on a recent episode of “The Future of Everything” radio show.

For example, Bustamante told host Russ Altman, MD, PhD, a professor of bioengineering, of genetics, of medicine and of biomedical data science, about the genomic fingerprints of the history of slavery in the United States. As part of an international collaboration, he studied the DNA of modern individuals and individuals from slave cemeteries, tracing their history to particular tribal groups in Africa.

“A lot of that history has been lost and African Americans want to reclaim parts of that history using DNA,” Bustamante said. “What’s interesting, at least in the United States, is that most of the slave ships went first to the Caribbean and Brazil. Only a couple hundred thousand people came in straight to the Port of Charleston. So the history of the slave trade is actually written in the DNA of the Caribbean, Brazilian and U.S. African descendant populations.”

But that is only one of the many genomic applications discussed on the episode. Another important use is predicting disease risks. Genetic tests are now available for many hereditary conditions, including cancer risk assessment, at Stanford.

This raises a challenge, however, because our knowledge of DNA is primarily based on people of European descent. As Bustamante explained, this occurred because European countries were the first to recognize the potential impact that DNA sequencing could have on health care, once the cost of DNA sequencing technology plummeted.

“They invested quickly and by the year, say 2009, they’d done about a thousand studies and 95 percent of the participants in those studies were of European descent — be they from the countries in Europe or in Iceland.”

Since humans are 99.9 percent identical in their genetic makeup, maybe this doesn’t sound like a problem. But Bustamante said the differences may be important because they could help lead to improvements in health care. He described this lack of diversity as both a problem and an opportunity.

Take blond hair, for example. Bustamante explained that two main populations have blond hair: Europeans and Melanesians from the Solomon Islands. When the scientists started a research project, they hypothesized that a European went to Melanesia and had a lot of kids. But that isn’t what the genetics showed.

“The genetics of blond hair in Europe are different than the genetics of blond hair in Melanesia. They look the same, but it turns out that the underlying genes are different,” he said. “And why is that interesting? From the point of view of medical genetics, if this is true for blond hair — which is about as simple a trait as you can get — what about diabetes? Why would we assume the genetic basis of diabetes is the same in every population, when we know diabetes actually presents differently in different populations?”

He also argued that new drug discovery would be more successful if it were based on genetic leads. Cholesterol-lowering drugs called PCSK9 inhibitors, for instance, were found by studying families with naturally high or low levels of cholesterol. Successes like these are the reason he thinks it’s important to study diverse populations.

“If we spread our bets across different human populations, we’re much more likely to find interesting biology that then benefits everybody,” he said. “Because these cholesterol lowering drugs aren’t just good for those people with high cholesterol for genetic reasons. That’s the key. You can mimic it in others and it benefits everybody.”

Of course, the potential for genomics goes beyond human applications. Altman and Bustamante also discuss plant and animal uses, including designing your dream dog.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Can AI improve access to mental health care? Possibly, Stanford psychologist says

Image by geralt

“Hey Siri, am I depressed?” When I posed this question to my iPhone, Siri’s reply was “I can’t really say, Jennifer.” But someday, software programs like Siri or Alexa may be able to talk to patients about their mental health symptoms to assist human therapists.

To learn more, I spoke with Adam Miner, PsyD, an instructor and co-director of Stanford’s Virtual Reality-Immersive Technology Clinic, who is working to improve conversational AI to recognize and respond to health issues.

What do you do as an AI psychologist?

“AI psychology isn’t a new specialty yet, but I do see it as a growing interdisciplinary need. I work to improve mental health access and quality through safe and effective artificial intelligence. I use methods from social science and computer science to answer questions about AI and vulnerable groups who may benefit or be harmed.”

How did you become interested in this field?

“During my training as a clinical psychologist, I had patients who waited years to tell anyone about their problems for many different reasons. I believe the role of a clinician isn’t to blame people who don’t come into the hospital. Instead, we should look for opportunities to provide care when people are ready and willing to ask for it, even if that is through machines.

I was reading research from different fields like communication and computer science and I was struck by the idea that people may confide intimate feelings to computers and be impacted by how computers respond. I started testing different digital assistants, like Siri, to see how they responded to sensitive health questions. The potential for good outcomes — as well as bad — quickly came into focus.”

Why is technology needed to assess the mental health of patients?

“We have a mental health crisis and existing barriers to care — like social stigma, cost and treatment access. Technology, specifically AI, has been called on to help. The big hope is that AI-based systems, unlike human clinicians, would never get tired, would be available wherever and whenever patients need them, and would know more than any human ever could.

However, we need to avoid inflated expectations. There are real risks around privacy, ineffective care and worsening disparities for vulnerable populations. There’s a lot of excitement, but also a gap in knowledge. We don’t yet fully understand all the complexities of human–AI interactions.

People may not feel judged when they talk to a machine the same way they do when they talk to a human — the conversation may feel more private. But it may in fact be more public because information could be shared in unexpected ways or with unintended parties, such as advertisers or insurance companies.”

What are you hoping to accomplish with AI?

“If successful, AI could help improve access in three key ways. First, it could reach people who aren’t accessing traditional, clinic-based care for financial, geographic or other reasons like social anxiety. Second, it could help create a ‘learning healthcare system’ in which patient data is used to improve evidence-based care and clinician training.

Lastly, I have an ethical duty to practice culturally sensitive care as a licensed clinical psychologist. But a patient might use a word to describe anxiety that I don’t know and I might miss the symptom. AI, if designed well, could recognize cultural idioms of distress or speak multiple languages better than I ever will. But AI isn’t magic. We’ll need to thoughtfully design and train AI to do well with different genders, ethnicities, races and ages to prevent further marginalizing vulnerable groups.

If AI could help with diagnostic assessments, it might allow people to access care who otherwise wouldn’t. This may help avoid downstream health emergencies like suicide.”

How long until AI is used in the clinic?

“I hesitate to give any timeline, as AI can mean so many different things. But a few key challenges need to be addressed before wide deployment, including the privacy issues, the impact of AI-mediated communications on clinician-patient relationships and the inclusion of cultural respect.

The clinician–patient relationship is often overlooked when imagining a future with AI. We know from research that people can feel an emotional connection to health-focused conversational AI. What we don’t know is whether this will strengthen or weaken the patient-clinician relationship, which is central to both patient care and a clinician’s sense of self. If patients lose trust in mental health providers, it will cause real and lasting harm.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Soda taxes increase prices but lower consumption, studies find

Photo by Breakingpic

Local surcharges on sugar-sweetened beverages are becoming the latest “sin tax” designed to reduce our consumption of unhealthy products, like soda, tobacco and alcohol. Driven by growing health concerns about diabetes, obesity and heart disease, these taxes aim to improve public health while generating revenue.

Commonly called soda taxes, they typically also include sweetened energy, sports and fruit drinks and presweetened tea and coffee — leaving water, milk and natural juices untaxed. If you live in the Bay Area, you’ve probably heard of them, since Berkeley, San Francisco, Albany and Oakland have imposed soda taxes in the last several years. But do these kinds of surcharges work?

“There’s a lot of debate about whether to pass those kinds of taxes and how to design them,” says Stephan Seiler, PhD, an associate professor of marketing, in a recent Stanford Graduate School of Business news article. “How high should the tax rates be? What type of products should be covered — regular or diet or both? And should the tax be levied at the city or county level?”

Two studies recently investigated the long-term effectiveness of beverage taxes. The first study analyzed sales data from over 1,200 retail stores in Philadelphia, which imposed a 1.5-cent-per-ounce tax on sweetened beverages starting in 2017. As part of the multi-institutional team, Seiler says they wanted to learn how the tax affected things like tax revenue and people’s financial burdens, and use that to contribute to ongoing policy discussions.

As expected, the Philadelphia study found that beverage manufacturers passed on almost all of the tax to consumers by raising prices by 34 percent. As a result, local demand for the taxed drinks dropped by 46 percent. But that didn’t necessarily mean that residents consumed less. Instead, they traveled four or five miles to purchase sweetened beverages outside the taxed area. Taking this into account, the researchers found the demand actually dropped by only 22 percent.
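The gap between the 46 percent and 22 percent figures is easier to see with round numbers. The sketch below assumes a hypothetical baseline of 100 beverage units sold in the taxed area before the tax; the percentages then imply that roughly half of the “lost” in-city sales simply moved across the city line:

```python
# Back-of-the-envelope sketch of the Philadelphia findings, using an
# assumed round baseline of 100 beverage units sold before the tax.
baseline = 100
in_city_after = baseline * (1 - 0.46)     # taxed-area sales fell 46%
net_after = baseline * (1 - 0.22)         # net consumption fell only 22%
cross_border = net_after - in_city_after  # units repurchased outside the city
print(round(cross_border))  # → 24, i.e. 24 of the 46 "lost" units moved elsewhere
```

This kind of accounting is why the researchers argue a tax levied over a wider geographic area would be harder to avoid.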

Another recent study analyzed the effectiveness of Berkeley’s 1-cent-per-ounce soda tax using beverage frequency questionnaires from 2014 to 2017. The researchers polled 1,513 people in high-foot-traffic areas of demographically diverse Berkeley neighborhoods, as well as 3,712 people in Oakland and San Francisco before those cities’ soda taxes took effect, to serve as a comparison. This multi-institutional research team included Sanjay Basu, MD, an assistant professor of medicine and of health research and policy at Stanford.

After implementation of the Berkeley tax and the corresponding increase in prices, the researchers reported a 52 percent decrease in consumption of sweetened drinks and a 29 percent increase in water consumption. The comparison groups in Oakland and San Francisco had similar baseline consumption of sweetened drinks but saw no significant changes.

One difference between these soda taxes concerns diet soda, which is taxed in Philadelphia but exempt in Berkeley. It may be easier to switch from regular to diet soda, so Seiler suggests that a better design is to tax regular sodas but not their diet counterparts and to levy the tax across a wide geographic area.

In fact, some countries — including Mexico, France, the United Kingdom and many others — have implemented a national soda tax. “That type of tax would be harder to avoid,” Seiler says.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Searching for Photocathodes that Convert CO2 into Fuels

Figure: Six-step selection criteria used in the search for photocathodes for CO2 reduction. The search began with 68,860 inorganic compounds. The number of materials satisfying each step’s requirements is shown in red; 52 met all of them.

Carbon dioxide (CO2) has a bad reputation due to its pivotal role in the greenhouse effect at the Earth’s surface. But scientists at the Joint Center for Artificial Photosynthesis (JCAP), a U.S. Department of Energy (DOE) Innovation Hub, view CO2 as a promising feedstock for clean, low-cost, renewable fuels.

JCAP is a team led by the California Institute of Technology (Caltech) that brings together more than 100 world-class scientists and engineers, primarily from Caltech and its lead partner, Lawrence Berkeley National Laboratory (Berkeley Lab).

The JCAP team is developing new ways to produce transportation fuels from CO2, sunlight, and water using a process called artificial photosynthesis, which harvests solar energy and stores it in chemical bonds. If successful, they’ll be able to produce fuels while also eliminating some CO2 — a “win-win,” according to Arunima Singh, an assistant professor of physics at Arizona State University and a former member of the JCAP team.

Singh became involved in the research as a postdoctoral associate at Berkeley Lab, where she searched for new photocathodes to efficiently convert CO2 to chemical fuels — a major hurdle to realizing scalable artificial photosynthesis.

“There is a dire need to find new materials to enable the photocatalytic conversion of CO2. The existing photocathodes have very low efficiencies and product selectivity, which means the CO2 produces many products that are expensive to distill,” said Singh. “Previous experimental attempts found new photocatalytic materials by trial and error, but we wanted to do a more directed search.”

Searching for Needles in a Materials Project Haystack

Using supercomputing resources at the National Energy Research Scientific Computing Center (NERSC), the Berkeley Lab team performed a massive photocathode search, starting with 68,860 materials and screening them for specific intrinsic properties. Their results were published in the January issue of Nature Communications.

“The candidate materials need to be thermodynamically stable so they can be synthesized in the lab. They need to absorb visible light. And they need to be stable in water under the highly reducing conditions of CO2 reduction,” said first author Singh. “These three key properties were already available through the Materials Project.”

The Materials Project is a DOE-funded database of materials properties calculated based on predictive quantum-mechanical simulations using supercomputing clusters at NERSC, which is a DOE Office of Science User Facility. The database includes both experimentally known materials and hypothetical structures predicted by machine learning algorithms or various other procedures. Of the 68,860 candidate materials screened in the Nature Communications study, about half had already been experimentally synthesized, while the remaining were hypothetical.

The researchers screened these materials in six steps. In the first four, they used the Materials Project to identify the materials that were thermodynamically stable, able to absorb visible light, stable in water and electrochemically stable. This strategy cut the candidate pool to 235 materials, dramatically narrowing the list for the final two steps, which required computationally intensive calculations.
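A screening pipeline like this is straightforward to express in code. The sketch below uses made-up candidate records and illustrative thresholds (for example, a visible-light band gap window of roughly 1 to 3 eV); the actual study pulled these properties from the Materials Project database rather than hard-coding them.

```python
# Illustrative sketch of the first four screening steps, with hypothetical
# candidate records and thresholds. Property names mimic, but are not,
# the Materials Project's actual database fields.

candidates = [
    {"formula": "CuFeO2", "e_above_hull": 0.00, "band_gap": 1.5,
     "stable_in_water": True, "electrochem_stable": True},
    {"formula": "HypoX",  "e_above_hull": 0.30, "band_gap": 0.2,
     "stable_in_water": False, "electrochem_stable": True},
]

def passes_screen(m):
    return (m["e_above_hull"] < 0.1        # step 1: thermodynamically stable
            and 1.0 < m["band_gap"] < 3.0  # step 2: absorbs visible light
            and m["stable_in_water"]       # step 3: stable in water
            and m["electrochem_stable"])   # step 4: electrochemically stable

survivors = [m["formula"] for m in candidates if passes_screen(m)]
print(survivors)  # → ['CuFeO2']
```

Running cheap database lookups first and expensive quantum-mechanical calculations last is what saved the team millions of CPU hours.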

“By leveraging a large amount of data already available in the Materials Project, we were able to cut the computational cost of the project by several millions of CPU hours,” said Kristin Persson, a faculty scientist in Berkeley Lab’s Energy Technologies Area and senior author on the paper.

Additional Screening with First-Principles Calculations

However, the Materials Project database did not contain all the necessary data, so the final screening steps required new first-principles simulations of materials properties, based on quantum mechanics, to accurately estimate the electronic structures and the energies of the excited electrons. These calculations were run at NERSC and the Texas Advanced Computing Center (TACC) for the 235 remaining candidate materials.

“NERSC is the backbone of the Materials Project computation and database. But we also used about two million NERSC core hours to do the step 5 and 6 calculations,” said Singh. “Without NERSC, we would have been running our simulations on 250 cores for 24 hours a day for a year, versus being able to do these calculations in parallel on NERSC in a matter of a few months.”

The team also used about half a million core hours for these calculations at TACC, which were allocated through the National Science Foundation’s Extreme Science and Engineering Discovery Environment (XSEDE).

These theoretical calculations showed that 52 materials met all of the stringent requirements of the screening process, but that only nine of these had been previously studied for CO2 reduction. Among the 43 newly identified photocathodes, 35 have previously been synthesized and eight are hypothetical materials.

“We performed the largest exploratory search for CO2 reduction photocathodes to date, covering 68,860 materials and identifying 43 new photocathode materials exhibiting promising properties,” Persson said.

Finally, the researchers narrowed the list to 39 promising candidates by examining the vibrational properties of the eight hypothetical materials and ruling out the four predicted to be dynamically unstable.

However, more work is needed before artificial photosynthesis becomes a reality, including collaborating with experimental colleagues like Caltech’s John Gregoire (a leader of JCAP’s high-throughput experimentation laboratory) to validate the computational results.

“We have collaborators at Berkeley Lab and Caltech who are actively trying to grow these materials and test them,” Singh said. “I’m excited to see our study opening up new avenues of research.”

This is a reposting of my Computing Sciences news feature, courtesy of Berkeley Lab.

Myths and facts: Stanford psychiatrist discusses schizophrenia

Image by geralt

Inaccurate stereotypes and erroneous beliefs abound concerning schizophrenia. Stanford psychiatrist Jacob Ballon, MD, dispelled a few of these myths in a recent article in Everyday Health. Here are the takeaways and a bit of basic info regarding the disease:

  • There is an underlying, complex genetic component to schizophrenia that is not understood well enough to provide a diagnosis or guide treatment. Ballon said about 10 percent of the disease risk can be attributed to genetics.
  • People with schizophrenia do not have multiple personalities — that is called dissociative identity disorder.
  • Most cases of schizophrenia develop between adolescence and age 30; children rarely develop schizophrenia.
  • People with schizophrenia as a group are not more prone to violent behavior. Although there is an association between violence and schizophrenia, the additional risk is largely due to substance abuse.
  • People with schizophrenia have a significantly higher risk of suicide and depression.
  • People with schizophrenia can have delusions and hallucinations. A delusion is a belief and a hallucination is a sensory experience, like a vision. Delusions can include persecutory delusions of feeling watched or followed, thoughts of feeling contaminated or delusions of grandeur. If you are with someone with schizophrenia, keep in mind that the delusion or hallucination is very real to them.
  • Movement can be affected, and people with schizophrenia may have difficulty speaking, paying attention or remembering things. Their decision-making may also be compromised.
  • Although there is no cure for schizophrenia, there are effective treatments, including psychotherapy and medications.
  • Starting treatment soon after the disease develops is thought to be the most beneficial.

“There are a number of people who are treated for schizophrenia and are doing quite well,” said Ballon in the article. He added, “If people are able to participate in psychotherapy, they can better extend the value of the medication and also apply some helpful principles to actual experiences in their life. I want people to get help and support with school and employment because I believe they’ll be able to get back on an upward trajectory much more quickly if they get help at an early stage.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

A medical mystery: Diagnosing dead artists by their works of art

Close-up photo of Leonardo da Vinci’s Salvator Mundi

More and more students are staring at paintings as part of the medical school curriculum. For instance, Stanford students observe art alongside their faculty at the Cantor Arts Center as part of the Medicine & the Muse Program to improve their observational and descriptive abilities — skills that are essential to health care providers.

Some doctors are taking it a step further. Fascinated by how health problems have affected famous artists, they are combing historical records and works of art for diagnostic clues, as a recent article in Artsy explains. And then they are publishing their studies in peer-reviewed medical journals.

According to a study published in JAMA Ophthalmology, for example, Leonardo da Vinci had crossed eyes — a vision disorder called strabismus — that caused him to lose depth perception. Researcher Christopher Tyler, PhD, DSc, from the City University of London, hypothesized this by assessing two sculptures, two oil paintings and two drawings by da Vinci that are thought to portray him. Measuring the average angular divergence of the pupils in these works, he found the mean angle of misalignment to be consistent with strabismus.
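The geometry behind such an estimate is simple. The sketch below uses a hypothetical pupil decentration and a standard eyeball radius to convert a lateral offset into a gaze angle; it illustrates the principle only and is not Tyler’s actual measurement protocol.

```python
import math

# Toy geometric sketch: converting a pupil's lateral offset in a portrait
# into an estimated eye-misalignment angle. Both input values below are
# assumptions for illustration.
decentration_mm = 2.0  # hypothetical lateral pupil offset measured from the artwork
eye_radius_mm = 12.0   # typical adult eyeball radius

# The pupil sits on the eyeball's surface, so offset = radius * sin(angle).
angle = math.degrees(math.asin(decentration_mm / eye_radius_mm))
print(f"estimated misalignment: {angle:.1f} degrees")  # → estimated misalignment: 9.6 degrees
```

Averaging such estimates across several works, as Tyler did, reduces the influence of any single portrait’s artistic license.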

Another case study published in the British Journal of General Practice suggests that impressionist painter Claude Monet’s color selection was affected by poor eyesight, since his color palette changed after cataract surgery. “Monet’s postoperative works are devoid of garish colors or coarse application,” said National Health Service ophthalmologist and author Anna Gruener, MD, in the Artsy article. “It is therefore unlikely that he had intentionally adopted the broader and more abstract style…”

However, Michael Marmor, MD, a Stanford ophthalmology professor and author of several books on artists and eyesight, warns doctors against making firm conclusions.

“Artists have license to paint as they wish, so style is mutable and not necessarily an indication of disease,” Marmor told Artsy. “Speculation is always fun, but not when it is presented as ‘evidence’ in scientific journals.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.