Scientists uncover surprising behavior of a fatty acid enzyme with potential biofuel applications

Derived from microscopic algae, the rare, light-driven enzyme converts fatty acids into starting ingredients for solvents and fuels.

A study using SLAC’s LCLS X-ray laser captured how light drives a series of complex structural changes in an enzyme called FAP, which catalyzes the transformation of fatty acids into starting ingredients for solvents and fuels. This drawing captures the starting state of the catalytic reaction. The dark green background represents the protein’s molecular structure. The enzyme’s light-sensing part, called the FAD cofactor, is shown at center right with its three rings absorbing a photon coming from bottom left. A fatty acid at upper left awaits transformation. The amino acid shown at middle left plays an important role in the catalytic cycle, and the red dot near the center is a water molecule. (Damien Sorigué/Université Aix-Marseille)

By Jennifer Huber

Although many organisms capture and respond to sunlight, it’s rare to find enzymes – proteins that promote chemical reactions in living things – that are driven by light. Scientists have identified only three so far. The newest one, discovered in 2017, is called fatty acid photodecarboxylase (FAP). Derived from microscopic algae, FAP uses blue light to convert fatty acids into hydrocarbons that are similar to those found in crude oil.

“A growing number of researchers envision using FAPs for green chemistry applications because they can efficiently produce important components of solvents and fuels, including gasoline and jet fuels,” says Martin Weik, the leader of a research group at the Institut de Biologie Structurale at the Université Grenoble Alpes.

Weik is one of the primary investigators in a new study that has captured the complex sequence of structural changes, or photocycle, that FAP undergoes in response to light, which drives this fatty acid transformation. Researchers had proposed a possible FAP photocycle, but the fundamental mechanism was not understood, partly because the process is so fast that it’s very difficult to measure. Specifically, scientists didn’t know how long it took FAP to split a fatty acid and release a hydrocarbon molecule.

Experiments at the Linac Coherent Light Source (LCLS) at the Department of Energy’s SLAC National Accelerator Laboratory helped answer many of these outstanding questions. The researchers describe their results in Science.

All the tools in a toolbox

To understand a light-sensitive enzyme like FAP, scientists use many different techniques to study processes that take place over a broad range of time scales. For instance, photon absorption happens in femtoseconds, or millionths of a billionth of a second, while biological responses on the molecular level often happen in thousandths of a second.

“Our international, interdisciplinary consortium, led by Frédéric Beisson at the Université Aix-Marseille, used a wealth of techniques, including spectroscopy, crystallography and computational approaches,” Weik says. “It’s the sum of these different results that enabled us to get a first glimpse of how this unique enzyme works as a function of time and in space.”

The consortium first studied the complex steps of the catalytic process at their home labs using optical spectroscopy methods, which investigate the electronic and geometric structure of atoms in the samples, including chemical bonding and charge. Spectroscopic experiments identified the intermediate states of the enzyme that accompanied each step, measured their lifetimes and provided information on their chemical nature. These results revealed the need for the ultrafast capabilities of the LCLS X-ray free-electron laser (XFEL), which can track the molecular motion with atomic precision.

A structural view of changes in the FAP molecule during the catalytic process was provided by serial femtosecond crystallography (SFX) at the LCLS. During these experiments, a jet of tiny FAP microcrystals was hit with optical laser pulses to kick off the catalytic reaction. This ensured that all the molecules reacted at nearly the same time, synchronizing their behavior and making it possible to track the process in detail. Extremely brief, ultrabright X-ray pulses then measured the resulting changes in the enzyme’s structure.

By integrating thousands of these measurements – acquired using various time delays between the optical and X-ray pulses – the researchers were able to follow structural changes in the enzyme. They also determined the structure of the enzyme’s resting state by probing without the optical laser.
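The bookkeeping behind this kind of pump-probe series — many snapshots, each tagged with the delay between the optical pump and the X-ray probe — can be sketched in a few lines. This is only an illustrative toy with invented numbers, not the actual SFX analysis pipeline: in a real experiment each snapshot is a full diffraction pattern, and the scalar `signal` here merely stands in for a quantity derived from one.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical snapshots: (pump-probe delay in picoseconds, scalar signal).
# All values are made up for illustration.
snapshots = [
    (0.0, 1.00), (0.0, 1.02),  # probed immediately: near-resting state
    (0.5, 1.10), (0.5, 1.14),
    (2.0, 1.30), (2.0, 1.28),
]

def time_resolved_average(snapshots):
    """Group snapshots by pump-probe delay and average each bin."""
    bins = defaultdict(list)
    for delay, signal in snapshots:
        bins[delay].append(signal)
    return {delay: mean(signals) for delay, signals in sorted(bins.items())}

print(time_resolved_average(snapshots))
```

Averaging many shots per delay bin is what turns noisy single-pulse measurements into a smooth picture of the structure evolving in time.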

Surprisingly, the researchers found that in the resting state, the light-sensing part of the enzyme has a bent shape. “This small molecule, called the FAD cofactor, is a derivative of vitamin B2 that acts like an antenna to capture photons,” Weik says. “It absorbs blue light and initiates the catalytic process. We thought the starting point of the FAD cofactor was planar, so this bent configuration was unexpected.”

The bent shape of the FAD cofactor was first discovered by X-ray crystallography at the European Synchrotron Radiation Facility, but the scientists had suspected this bend was an artifact of radiation damage, a common problem for crystallographic data collected at synchrotron light sources.

“Only SFX experiments could confirm this unusual configuration because of their unique ability to capture structural information before damaging the sample,” Weik says. “These experiments were complemented by computations. Without the high-level quantum calculations performed by Tatiana Domratcheva of Moscow State University, we wouldn’t have understood our experimental results.”

Next steps

Even with this improved understanding of FAP’s photocycle, unanswered questions remain. For example, researchers know carbon dioxide is formed during a certain step of the catalytic process at a specific time and location, but they don’t know if it is transformed into another molecule before leaving the enzyme.

“In future XFEL work, we want to identify the nature of the products and to take pictures of the process with a much smaller step size so as to resolve the process in much finer detail,” says Weik. “This is important for fundamental research, but it can also help scientists modify the enzyme to do a task for a specific application.”

Such precision experiments will be fully enabled by upcoming upgrades to the LCLS facility that will increase its pulse repetition rate from 120 pulses per second to 1 million pulses per second, transforming scientists’ ability to track complex processes like this.

Other researchers are already working towards industrial FAP applications, including a group that is designing an economical way to produce gases such as propane and butane.

The interdisciplinary consortium included researchers from the Institute of Structural Biology in Grenoble, Max Planck Institute for Medical Research in Heidelberg, Université Aix-Marseille, Ecole Polytechnique in Paris-Palaiseau, the Integrative Biology of the Cell Institute in Paris-Saclay, Moscow State University, the ESRF and SOLEIL synchrotrons in Grenoble and Paris-Saclay, and the team at SLAC National Accelerator Laboratory.

LCLS is a DOE Office of Science user facility. Major funding for this work came from the French National Research Agency (ANR).

Citation: D. Sorigué et al., Science, 9 April 2021.

For questions or comments, contact the SLAC Office of Communications at

This is a reposting of my news release, courtesy of SLAC National Accelerator Laboratory.

Computer models show promise for personalizing chemotherapy

Computers have revolutionized many fields, so it isn’t surprising that they may be transforming cancer research. Computers are now being used to model the molecular and cellular changes associated with individual tumors, allowing scientists to simulate the tumor’s response to different combinations of chemotherapy drugs.  

Modeling big data to improve personalized cancer treatment was the focus of a recent episode of the SiriusXM radio show “The Future of Everything.” On hand was Sylvia Plevritis, PhD, a professor of biomedical data science and of radiology at Stanford, who discussed her work with Stanford professor and radio show host Russ Altman, MD, PhD.

Plevritis and her colleagues are using multi-omics data — including measures of gene expression, protein function, metabolic processes and more — to extensively profile individual tumors of individual patients.

They are analyzing this data to better understand how tumors become drug-resistant. She explained in the podcast that tumors are often heterogeneous — not every cell has the same gene mutations — but chemotherapy drugs typically target specific genetic mutations. Tumors are also driven by complex mechanisms beyond genetic mutations. So her lab is comprehensively characterizing the different cell types in a tumor and how these different cell types respond to individual drugs. By better understanding the complexity of what drives the tumor’s response, they hope to identify the underlying mechanisms of drug resistance.

The goal, Plevritis said, is to more accurately estimate the response of the entire tumor to a given set of drugs without having to run clinical trials on every drug combination. Using their modeling, they hope to identify the most promising drug combinations to make clinical trials more efficient, she said.

The research team tested their computational model by measuring the multi-omics profile of human cancer cells in a dish, before and after exposing the cells to specific drugs. Their model then identified the minimum combination of drugs with the maximum effect. This work used archived cell samples, so their modeling results didn’t impact the patients’ treatment. But they compared their model’s prediction to what drugs the patients actually received.
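The search described above — finding the minimum combination of drugs with the maximum predicted effect — can be sketched as a brute-force enumeration over drug subsets, checking the smallest sets first. The effect scores below are invented for illustration; in the actual study they would come from the multi-omics model, and the drug names are placeholders.

```python
from itertools import combinations

# Hypothetical predicted effect of each drug combination on one tumor
# profile (0 = no effect, 1 = complete response). Invented numbers.
predicted_effect = {
    frozenset(): 0.0,
    frozenset({"A"}): 0.60,
    frozenset({"B"}): 0.40,
    frozenset({"C"}): 0.10,
    frozenset({"A", "B"}): 0.85,
    frozenset({"A", "C"}): 0.62,
    frozenset({"B", "C"}): 0.45,
    frozenset({"A", "B", "C"}): 0.85,  # adding C gains nothing
}

def best_minimal_combination(drugs, effect):
    """Return the smallest drug set achieving the maximum predicted effect."""
    best = max(effect.values())
    for size in range(len(drugs) + 1):      # try smallest sets first
        for combo in combinations(sorted(drugs), size):
            if effect[frozenset(combo)] == best:
                return set(combo)

result = best_minimal_combination({"A", "B", "C"}, predicted_effect)
print(sorted(result))  # → ['A', 'B']
```

With these toy scores, the two-drug cocktail {A, B} matches the effect of all three drugs combined — mirroring the study’s finding that one or two of the administered drugs would often have sufficed.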

They determined that the best chemotherapy cocktail for most of the patients would have been just one or two of the drugs that they received. For about 10 percent of the patients, they predicted that a totally different drug would have been the most effective, Plevritis said in the podcast.

Thus, their computational model may be able to divide patients into different groups, based on tumor characteristics, and match those groups with the specific chemotherapy cocktails that would be most effective for them. Plevritis’ team is currently setting up a study to validate their computational predictions for a group of patients with acute myeloid leukemia, in parallel with a combination drug therapy trial, she said.

As a member of the Cancer Intervention Surveillance Network Modeling consortium, Plevritis is also using computational models to evaluate the impact of cancer screening guidelines — such as the recommended frequency of mammograms for general breast cancer screening — on mortality rates. For example, policy organizations like the U.S. Preventive Service Task Force often ask the consortium to simulate thousands of different screening policies — and rank their potential impact — to use as part of their selection criteria, she said.

One outcome of this work is an online decision tool for women who are at high risk for developing breast cancer because they carry a mutation in the BRCA1 or BRCA2 gene. Plevritis said about 45,000 people worldwide have used the tool, and her team has received a lot of positive feedback.

“It’s been very satisfying to get these emails and this feedback from individuals who feel that this complex information was distilled in a way that they can make sense of it,” Plevritis said.

Image of acute promyelocytic leukemia cells by Ed Uthman

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Stanford researchers watch proteins assemble a protective shell around bacteria

Many bacteria and viruses are protected from the immune system by a thin, hard outer shell — called an S-layer — composed of a single layer of identical protein building blocks.

Understanding how microbes form these crystalline S-layers and the role they play could be important to human health, including our ability to treat bacterial pathogens that cause serious salmonella, C. difficile and anthrax infections. For instance, researchers are working on ways to remove this shell to fight anthrax and other diseases.

Now, a Stanford study has observed for the first time proteins assembling themselves into an S-layer in a bacterium called Caulobacter crescentus, which is present in many fresh water lakes and streams.

Although this bacterium isn’t harmful to humans, it is a well-understood model organism used to study various cellular processes. Scientists know that the S-layer of Caulobacter crescentus is vital for the microbe’s survival and is made up of protein building blocks called RsaA.

A recent news release describes how the research team from Stanford and SLAC National Accelerator Laboratory was able to watch this assembly, even though it happens on such a tiny scale:

“To watch it happen, the researchers stripped microbes of their S-layers and supplied them with synthetic RsaA building blocks labeled with chemicals that fluoresce in bright colors when stimulated with a particular wavelength of light.

Then they tracked the glowing building blocks with single-molecule microscopy as they formed a shell that covered the microbe in a hexagonal, tile-like pattern (shown in image above) in less than two hours. A technique called stimulated emission depletion (STED) microscopy allowed them to see structural details of the layer as small as 60 to 70 nanometers, or billionths of a meter, across – about one-thousandth the width of a human hair.”

The scientists were surprised by what they saw: the protein molecules spontaneously assembled themselves without the help of enzymes.

“It’s like watching a pile of bricks self-assemble into a two-story house,” said Jonathan Herrmann, a graduate student in structural biology at Stanford involved in the study, in the news release.

The researchers believe the protein building blocks are guided to form in specific regions of the cell surface by small defects and gaps within the S-layer. These naturally occurring defects are inevitable because the flat crystalline sheet is trying to cover the constantly changing, three-dimensional shape of the bacterium, they said.

Among other applications, they hope their findings will offer potential new targets for drug treatments.

“Now that we know how they assemble, we can modify their properties so they can do specific types of work, like forming new types of hybrid materials or attacking biomedical problems,” said Soichi Wakatsuki, PhD, a professor of structural biology and photon science at SLAC, in the release.

Illustration by Greg Stewart/SLAC National Accelerator Laboratory

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Predicting women at risk of preeclampsia before clinical symptoms

Many of my female friends became pregnant with their first child in their late 30s or early 40s, which increased their risk of common complications such as high blood pressure, gestational diabetes and preeclampsia.

Affecting over 8 million women worldwide, preeclampsia can lead to serious, even fatal, complications for both the mother and baby. The clinical symptoms of preeclampsia typically start after 20 weeks of pregnancy and include high blood pressure and signs of kidney or liver damage.

“Once these clinical symptoms appear, irreparable harm to the mother or the fetus may have already occurred,” said Stanford immunologist Brice Gaudilliere, MD, PhD. “The only available diagnostic blood test for preeclampsia is a proteomic test that measures a ratio of two proteins. While this test is good at ruling out preeclampsia once clinical symptoms have occurred, it has a poor positive predictive value.”

Now, Stanford researchers are working to develop a diagnostic blood test that can accurately predict preeclampsia prior to the onset of clinical symptoms.

A new study conducted at Stanford was led by senior authors Gaudilliere, statistical innovator Nima Aghaeepour, PhD, and clinical trial specialist Martin Angst, MD, and co-first authors and postdoctoral fellows Xiaoyuan Han, PhD, and Sajjad Ghaemi, PhD. Their results were recently published in Frontiers in Immunology.

They analyzed blood samples from 11 women who developed preeclampsia and 12 women with normal blood pressure during pregnancy. These samples were obtained at two timepoints, allowing the scientists to measure how immune cells behaved over time during pregnancy.

“Unlike prior studies that typically assessed just a few select immune cell types in the blood at a single timepoint during pregnancy, our study focused on immune cell dynamics,” Gaudilliere explained. “We utilized a powerful method called mass cytometry, which measured the distribution and functional behavior of virtually all immune cell types present in the blood samples.”

The team identified a set of eight immune cell responses that accurately predicted which of the women would develop preeclampsia — typically 13 weeks before clinical diagnosis.

At the top of their list was a signaling protein called STAT5. They observed higher activity of STAT5 in CD4+ T-cells, which help regulate the immune system, at the beginning of pregnancy for all but one patient who developed preeclampsia.

“Pregnancy is an amazing immunological phenomenon where the mother’s immune system ‘tolerates’ the fetus, a foreign entity, for nine months,” said Angst. “Our findings are consistent with past studies that found preeclampsia to be associated with increased inflammation and decreased immune tolerance towards the fetus.”

Although their results are encouraging, more research is needed before translating them to the clinic.

The authors explained that mass cytometry is a great tool to find the “needle in the haystack.” It allowed them to survey the entire immune system and identify the key elements that could predict preeclampsia, but it is an exploratory platform not suitable for the clinic, they said.

“Now that we have identified the elements of a diagnostic immunoassay, we can use conventional instruments such as those used in the clinic to measure them in a patient’s blood sample,” Aghaeepour said.

First though, the team needs to validate their findings in a large, multi-center study. They are also using machine learning to develop a “multiomics” model that integrates these mass cytometry measurements with other biological analysis approaches. And they are investigating how to objectively define different subtypes of preeclampsia.

Their goal is to accurately diagnose preeclampsia before the onset of clinical symptoms.

“Diagnosing preeclampsia early would help ensure that patients at highest risk have access to health care facilities, are evaluated more frequently by obstetricians specialized in high-risk pregnancies and receive treatment,” said Gaudilliere.

Women with preeclampsia can receive care through the obstetric clinic at Lucile Packard Children’s Hospital Stanford.

Photo by Pilirodriquez

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Seeing the Web of Microbes

New Web-based Tool Hosted at NERSC Helps Visualize Exometabolomic Data

The Web view (BG11+MvExtract): Microcoleus vaginatus and six heterotrophic biocrust isolates in M. vaginatus extract. The metabolite composition of the control medium is represented by the solid tan circles. Hollow circles are metabolites that were identified only after microbial transformation (indicating production and release by at least one of the organisms, since they were not initially present in the control medium). Connecting lines indicate an increase (red) or decrease (blue) in the metabolite level in the spent medium compared to the control.

Understanding nutrient flows within microbial communities is important to a wide range of fields, including medicine, bioremediation, carbon sequestration, and sustainable biofuel development. Now, researchers from Lawrence Berkeley National Laboratory (Berkeley Lab) have built an interactive, web-based data visualization tool to observe how organisms transform their environments through the increase and decrease of metabolites — enabling scientists to quickly see patterns in microbial food webs.

This visualization tool — the first of its kind — is a key part of a new data repository, the Web of Microbes (WoM) that contains liquid chromatography mass spectrometry datasets from the Northen Metabolomics Lab located at the U.S. Department of Energy’s Joint Genome Institute (JGI). The Web of Microbes project is an interdisciplinary collaboration between biologists and computational researchers at Berkeley Lab and the National Energy Research Scientific Computing Center (NERSC). JGI and NERSC are both DOE Office of Science user facilities.

“While most existing databases focus on metabolic pathways or identifications, the Web of Microbes is unique in displaying information on which metabolites are consumed or released by an organism to an environment such as soil,” said Suzanne Kosina, a senior research associate in Berkeley Lab’s Environmental Genomics & Systems Biology (EGSB) Division, a member of the DOE ENIGMA Scientific Focus Area, and lead author on a paper describing WoM published in BMC Microbiology. “We call them exometabolites since they are outside of the cell. Knowing which exometabolites a microbe ‘eats’ and produces can help us determine which microbes might benefit from growing together or which might compete with each other for nutrients.”

Four Different Viewpoints

WoM is a Python application built on the Django web development framework. It is served from a self-contained Python environment on the NERSC global filesystem by an Apache web server. Visualizations are created with JavaScript, CSS and the D3 JavaScript visualization library.

Four different viewing methods are available by selecting the tabs labeled “The Web”, “One Environment”, “One Organism”, and “One Metabolite.” “The Web” view graphically displays data constrained by the selection of an environment, while the other three tabs display tabular data from three constrainable dimensions: environment, organism, and metabolite.

“You can think of the 3D datasets as a data cube,” said NERSC engineer Annette Greiner, second author on the BMC Microbiology paper. “The visualization tool allows you to slice the data cube in any direction. And each of these slices gives one of the 2D views: One Environment, One Organism, or One Metabolite.”
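Greiner’s data-cube analogy maps naturally onto array slicing. The sketch below uses a made-up cube (the axis ordering, organism names and values are hypothetical, not WoM’s actual schema) to show how slicing along each axis yields the three tabular views.

```python
import numpy as np

# Hypothetical exometabolomics data cube:
#   axis 0 = environments, axis 1 = organisms, axis 2 = metabolites.
# Positive values: metabolite increased in the spent medium (released);
# negative values: metabolite decreased (consumed). Invented numbers.
cube = np.array([
    [[-1.0,  0.5,  0.0],    # environment 0, organisms x metabolites
     [-0.5, -0.2,  0.8],
     [ 0.0,  0.3, -0.4]],
    [[-0.8,  0.1,  0.2],    # environment 1
     [ 0.0, -0.6,  0.5],
     [-0.2,  0.0,  0.1]],
])

one_environment = cube[0]        # organisms x metabolites ("One Environment")
one_organism    = cube[:, 1]     # environments x metabolites ("One Organism")
one_metabolite  = cube[:, :, 2]  # environments x organisms ("One Metabolite")

print(one_environment.shape, one_organism.shape, one_metabolite.shape)
# → (3, 3) (2, 3) (2, 3)
```

Each 2D slice is exactly one of the heatmap tables the interface displays, which is why the same dataset supports all three tabular views without duplication.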

The most intuitive way to view the data is via The Web, which displays an overview of connections between organisms and the nutrients they act on within a selected environment. After choosing the environment from a pull-down menu, The Web provides a network diagram in which each organism is represented as a little box, each metabolite as a circle, and their interactions as connecting lines. The size of the circle scales with the number of organisms that interact with that metabolite, whereas the color and shade of the connecting line indicate the amount of increase (red) or decrease (blue) in the metabolite level due to microbial activity.

“Having a lot more connecting lines indicates there’s more going on in terms of metabolism with those compounds in the environment. You can clearly see differences in behavior between the organisms,” Greiner said. “For instance, an organism with a dense cluster of red lines is one that produces many metabolites.”

Although The Web view gives users a useful qualitative assessment of metabolite interaction patterns, the other three tabular views provide more detailed information.

The One Environment view addresses the extent to which the organisms in a single environment compete or coexist with each other. The heatmap table shows which metabolites (shown in rows) are removed from or added to the environment by each of the organisms (shown in columns), where the color of each table cell indicates the amount of metabolic increase or decrease. Icons identify whether pairs of organisms compete (X) or are compatible (interlocking rings) for a given metabolite.
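A simplified reading of those icons: two organisms compete for a metabolite when both consume it, and are compatible when one releases what the other consumes. The sketch below encodes that rule with invented numbers; WoM’s actual criteria may be more nuanced.

```python
# Hypothetical per-organism metabolite changes in one environment:
# negative = consumed (removed), positive = released (added).
changes = {
    "org1": {"glucose": -1.0, "alanine":  0.5},
    "org2": {"glucose": -0.5, "alanine": -0.2},
    "org3": {"glucose":  0.3, "alanine": -0.4},
}

def relationship(org_a, org_b, metabolite, changes):
    """'compete' if both consume the metabolite (the X icon);
    'compatible' if one releases what the other consumes (the rings)."""
    a = changes[org_a][metabolite]
    b = changes[org_b][metabolite]
    if a < 0 and b < 0:
        return "compete"
    if (a < 0 < b) or (b < 0 < a):
        return "compatible"
    return "neutral"

print(relationship("org1", "org2", "glucose", changes))  # → compete
print(relationship("org2", "org3", "glucose", changes))  # → compatible
```

This is the kind of pairwise check that could flag promising co-cultures for a bioreactor: favor pairs whose metabolite exchanges complement each other and avoid pairs drawing down the same nutrients.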

“For example, if you’re trying to design a bioreactor and you want to know which organisms would probably work well together in the same environment, then you can look for things with interlocking rings and try to avoid the Xs,” said Greiner.

Similarly, the One Organism heatmap table allows users to compare the actions of a single microbe on many metabolites across multiple environments. And users can use the One Metabolite table to compare the actions of multiple organisms on a selected metabolite in multiple environments.

“Ultimately, WoM provides a means for improving our understanding of microbial communities,” said Trent Northen, a scientist at JGI and in Berkeley Lab’s EGSB Division. “The data and visualization tools help us predict and test microbial interactions with each other and their environment.”

Participatory Design

The WoM tools were developed iteratively using a participatory design process, where research scientists from Northen’s lab worked directly with Greiner to identify needs and quickly try out solutions. This differed from the more traditional approach in which Greiner completes a coherent design for the user interface before showing it to the scientists.

Both Greiner and Kosina agreed that collaborating together was fun and productive. “Instead of going off to a corner alone trying to come up with something, it’s useful to have a user sitting on my shoulder giving me feedback in real time,” said Greiner. “Scientists often have a strong idea about what they need to see, so it pays to have frequent interactions and to work side by side.”

In addition to providing Greiner’s expertise in data visualization and web application development, NERSC hosts WoM and stores its data. NERSC’s computing resources and well-established science gateway infrastructure should enable WoM to grow both in volume and features in a stable and reliable environment, the development team noted in the BMC Microbiology paper.

According to Greiner, the data itself doesn’t take up much storage space but that may change. Currently, only Northen’s group can upload data but the team hopes to support multiple user groups in the future. For now, the Berkeley Lab researchers are excited to share their data on the Web of Microbes where it can be used by scientists all over the world. And they plan to add more data to the repository as they perform new experiments.

Kosina said it also made sense to work with NERSC on the Web of Microbes project because the Northen metabolomics lab relies on many other tools and resources at NERSC. “We already store all of our mass spectrometry data at NERSC and run our analysis software on their computing systems,” Kosina said.

Eventually, the team plans to link the Web of Microbes exometabolomics data to mass spectrometry and genomics databases such as JGI’s Genome Portal. They are also working with the DOE Systems Biology Knowledgebase (KBase) to allow users to take advantage of KBase’s predictive modeling capabilities, Northen added, which will enable researchers to determine the functions of unknown genes and predict microbial interactions.

This is a reposting of my news feature originally published by Berkeley Lab’s Computing Sciences.

Blocking Zika: New antiviral may treat and prevent infection, a Stanford study suggests

Image of the surface of the Zika virus by Purdue University/courtesy of Kuhn and Rossmann research groups

The Zika virus, which made headlines in 2016 following an outbreak in South America, is transmitted by mosquitos and can cause serious birth defects and neurological problems. Researchers are searching for antiviral treatments or effective vaccines to address this global health threat, but there are currently no approved treatments.

Now, Stanford researchers are taking a different approach — investigating cellular factors of humans that are essential for Zika to propagate. One of those factors is a type of protein called Hsp70, which helps proteins fold correctly and performs a wide range of housekeeping and quality-control functions in cells.

Based on a series of experiments in mosquito and human cells, the Stanford study found that certain Hsp70 proteins are required in multiple steps of the Zika virus’ lifecycle. By blocking Hsp70 with an Hsp70 inhibitor drug, the researchers were able to prevent virus replication, as recently reported in Cell Reports.

One advantage of targeting the human host protein to thwart Zika is that it is less likely to promote drug resistance, Judith Frydman, PhD, senior author of the paper and a professor of genetics and of biology at Stanford, told me.

“The emergence of drug-resistant variants is a major obstacle for the development of antiviral therapies,” she continued. “We hypothesize that because Hsp70 is required for several different steps in the Zika virus cycle, it would be difficult for Zika to acquire enough mutations to develop resistance to the Hsp70 inhibitors. This opens the way to both therapeutic and prophylactic use of these drugs for short courses of treatment without losing effectiveness due to resistance.”

In addition, the team found that the Hsp70 inhibitors showed negligible toxicity to the host cells at the concentrations needed to fully block virus production. They demonstrated this lack of toxicity in both human cells and mice.

“The virus has a much higher demand for Hsp70 than the host cellular processes,” Frydman said. “We can exploit the viral ‘addiction’ to Hsp70 for treatment to prevent the virus from producing the proteins it needs to replicate and infect cells. But most importantly, we show Hsp70 inhibitors can be administered to animals at therapeutically effective doses. To my knowledge, this is the first drug that actually works for Zika-infected animals, protecting them from lethal infection and disease symptoms.”

The researchers believe their new approach could serve to create broad-spectrum antivirals that work against other existing and emerging viruses. In fact, this class of drugs could also treat other insect-borne viruses including Dengue virus and Yellow Fever, Frydman said.

“Our findings provide new strategies to develop a novel class of antivirals that will not be rendered ineffective by the emergence of drug resistance,” Frydman said. “This unique property of targeting host factors used for viral protein folding therapeutically may close a fundamental gap in antiviral drug development.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Inherited Neanderthal genes protect us against viruses

Image by Claire Scully

When Neanderthals and modern humans interbred about 50,000 years ago, they exchanged snippets of DNA. Today, Europeans and Asians still carry 2 to 3 percent of Neanderthal DNA in their genomes.

During contact, they also exposed each other to viruses. This could have been deadly for the human species since Neanderthals encountered many novel infectious viruses while living for hundreds of thousands of years outside Africa. Luckily, the Neanderthals’ immune systems evolved genetic defenses against these viruses that were passed on to humans, according to a study reported in Cell.

“Neanderthal genes likely gave us some protection against viruses that our ancestors encountered when they left Africa,” said Dmitri Petrov, PhD, an evolutionary biologist at Stanford’s School of Humanities and Sciences, in a recent Stanford news release.

In the study, the researchers gathered a large dataset of several thousand proteins that interact with viruses in modern humans. They then identified 152 Neanderthal DNA snippets present in the genes that make these proteins. Most of these 152 genes encode proteins that interact with a specific type of virus: RNA viruses, which carry their genetic material as RNA encased in a protein shell.

The team identified 11 RNA viruses with a high number of Neanderthal-inherited genes, including HIV, influenza A and hepatitis C. These viruses likely played a key role in shaping human genome evolution, they said.

Overall, their findings suggest that the genomes of humans and other species contain signatures of ancient epidemics.

“It’s similar to paleontology,” said David Enard, PhD, a former postdoctoral fellow in Petrov’s lab. “You can find hints of dinosaurs in different ways. Sometimes you’ll discover actual bones, but sometimes you find only footprints in fossilized mud. Our method is similarly indirect: Because we know which genes interact with which viruses, we can infer the types of viruses responsible for ancient disease outbreaks.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

New study observes Tuberculosis bacteria attacking antibiotics


Tuberculosis was one of the deadliest known diseases, until antibiotics were discovered and used to dramatically reduce its incidence throughout the world. Unfortunately, before the infectious disease could be eradicated, drug-resistant forms emerged as a major public health threat — one quarter of the world’s population is currently infected with TB and 600,000 people develop drug-resistant TB annually.

New research at SLAC National Accelerator Laboratory is seeking to better understand how this antibiotic resistance develops, as recently reported in BMC Biology.

TB is caused by Mycobacterium tuberculosis bacteria, which attack the lungs and then spread to other parts of the body. The bacteria are transmitted to other people through the air, when an infected person speaks, coughs or sneezes.

These bacteria survive antimicrobial drugs by mutating. Their resilience is enhanced by the lengthy and complex nature of standard treatment, which requires patients to take four drugs every day for six to nine months. Patients often don’t complete this full course of TB treatment, causing the bacteria to evolve to survive the antibiotics.

Now, a team of international researchers has investigated an enzyme, called beta-lactamase, that is produced by the Mycobacterium tuberculosis bacteria. They wanted to understand the critical role this enzyme plays in TB drug resistance.

Specifically, the researchers made tiny crystals of beta-lactamase and mixed them with the antibiotic ceftriaxone. A fraction of a second later, they hit the enzyme-antibiotic mixture with ultrafast, intense X-ray pulses from SLAC’s Linac Coherent Light Source — taking millions of X-ray snapshots of the chemical reaction in real time for two seconds.

Putting these snapshots together, the researchers mapped out the 3D structure of the antibiotic as it interacted with the enzyme. They watched the bacterial enzyme bind to the antibiotic and then break open one of its key chemical bonds, making the antibiotic ineffective.

“For structural biologists, this is how we learn exactly how biology functions,” said Mark Hunter, PhD, staff scientist at SLAC and co-author on the study, in a recent news release. “We decipher a molecule’s structure at a certain point in time, and it gives us a better idea of how the molecule works.”

The research team plans to use their method to study additional antibiotics, observing in real time the rapid molecular processes that occur as the bacteria’s enzymes break down the drugs. Ultimately, they hope this knowledge can be used to design better antibiotics that can fight off these attacks.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

New understanding of cellular signaling could help design better drugs, Stanford study finds

Photo by scanrail/Getty Images

An effective drug with minimal side effects — the dream of all drug companies, physicians and patients. But is it an impossible dream?

Perhaps not, in light of new research led by Ron Dror, PhD, an associate professor of computer science at Stanford. In collaboration with other researchers, Dror used computer simulations and lab experiments to better understand G-protein-coupled receptors, which are critical to drug development.

G-protein-coupled receptors (GPCRs) are involved in an incredible array of physiological processes in the human body, including vision, taste, smell, mood regulation and pain, to name just a few. As a result, GPCRs are the primary target for drugs — about 34 percent of all prescription pharmaceuticals currently on the market target them. Unfortunately, despite all of this drug research, many of the underlying mechanisms of how GPCRs function are still unclear.

We do know that GPCRs act like an inbox for biochemical messages, which alert the cells that nutrients are nearby or communicate information sent by other cells. These messages take the form of a variety of signaling molecules, including pharmaceuticals. When one of these molecules binds to a GPCR, the GPCR changes shape — triggering many molecular changes within the cell.

Dror’s team investigated the relationship between these GPCRs and a key family of molecules inside cells called arrestins, which can be activated by GPCRs and can lead to unanticipated side effects from medications. Specifically, they sought to understand how GPCRs activate arrestin, so they can use this knowledge in the future to design drugs with fewer side effects.

“We want the good without the bad — more effective drugs with fewer dangerous side effects,” Dror said in a recent Stanford news release. “For GPCRs, that often boils down to whether or not the drug causes the GPCR to stimulate arrestin.”

Researchers know that a GPCR is composed of a long tail and a rounder core, which bind to distinct locations on the arrestin molecule. Based on past studies, it was believed that only the receptor’s tail activated the arrestin — causing it to change shape and begin signaling other molecules on its own.

However, Dror’s new study demonstrated that either the tail or core can activate arrestin, as recently reported in Nature. And the core and tail together can activate the arrestin even more, Dror said.

Using this new understanding, the researchers hope in the future to design drugs that activate arrestin in a more selective way to reduce drug side effects.

Dror concluded in the release:

“These behaviors are critical to drug effects, and this should help us in the next phase of our research as we try to learn more about the interplay of GPCRs and arrestins, and potentially, new drugs.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

New way to understand tumor diversity combines CRISPR with genetic barcodes

Photo courtesy of PIXNIO

The growth of a particular tumor depends on multiple genetic factors, so it is difficult for cancer researchers to recreate and study this genetic diversity in the lab.

“Human cancers don’t have only one tumor-suppression mutation [which fuels tumor growth] — they have combinations. The question is, how do different mutated genes cooperate or not cooperate with one another?” said Monte Winslow, PhD, a Stanford assistant professor of genetics and of pathology, in a recent Stanford news release.

Now, Winslow and his colleagues have discovered a way to modify cancer-related genes and then track how these combinations of mutations impact tumor growth, as recently reported in Nature Genetics.

The researchers used a powerful gene-editing tool, called CRISPR-Cas9, to introduce multiple, genetically distinct tumors in the lungs of mice. They also attached short, unique DNA sequences to individual tumor cells — which acted as genetic barcodes and multiplied in number as the tumors grew. By counting the different barcodes, they were able to accurately and simultaneously track tumor growth.

“We can now generate a very large number of tumors with specific genetic signatures in the same mouse and follow their growth individually at scale and with high precision. The previous methods were both orders of magnitude slower and much less quantitative,” said Dmitri Petrov, PhD, a senior author of the study and an evolutionary biologist at Stanford, in the release.

The study showed that many tumor-suppressor genes only drive tumor growth when other specific genes are present. The researchers hope to use their new methodology to better understand why tumors with the same mutations sometimes grow to be very large in some patients and remain small in others, they said.

Their technique may also speed up cancer drug development, allowing a drug to be tested on thousands of tumor types simultaneously. Petrov explained in the release:

“We can help understand why targeted therapies and immunotherapies sometimes work amazingly well in patients and sometimes fail. We hypothesize that the genetic identity of tumors might be partially responsible, and we finally have a good way to test this.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.