Could the next generation of particle accelerators come out of the 3D printer?

SLAC scientists and collaborators are developing 3D copper printing techniques to build accelerator components.

Imagine being able to manufacture complex devices whenever you want and wherever you are. It would create unforeseen possibilities even in the most remote locations, such as building spare parts or new components on board a spacecraft. 3D printing, or additive manufacturing, could be a way of doing just that. All you would need is the device materials, a printer and a computer that controls the process.

Diana Gamzina, a staff scientist at the Department of Energy’s SLAC National Accelerator Laboratory; Timothy Horn, an assistant professor of mechanical and aerospace engineering at North Carolina State University; and researchers at RadiaBeam Technologies dream of developing the technique to print particle accelerators and vacuum electronic devices for applications in medical imaging and treatment, the electrical grid, satellite communications, defense systems and more.

In fact, the researchers are closer to making this a reality than you might think.

“We’re trying to print a particle accelerator, which is really ambitious,” Gamzina said. “We’ve been developing the process over the past few years, and we can already print particle accelerator components today. The whole point of 3D printing is to make stuff no matter where you are without a lot of infrastructure. So you can print your particle accelerator on a naval ship, in a small university lab or somewhere very remote.”

3D printing can be done with liquids and powders of numerous materials, but there aren’t any well-established processes for 3D printing ultra-high-purity copper and its alloys – the materials Gamzina, Horn and their colleagues want to use. Their research focuses on developing the method.

Indispensable copper

Accelerators boost the energy of particle beams, and vacuum electronic devices are used in amplifiers and generators. Both rely on components that can be easily shaped and conduct heat and electricity extremely well. Copper has all of these qualities and is therefore widely used.

Traditionally, each copper component is machined individually and bonded with others using heat to form complex geometries. This manufacturing technique is incredibly common, but it has its disadvantages.

“Brazing together multiple parts and components takes a great deal of time, precision and care,” Horn said. “And any time you have a joint between two materials, you add a potential failure point. So, there is a need to reduce or eliminate those assembly processes.”

Potential of 3D copper printing

3D printing of copper components could offer a solution.

It works by layering thin sheets of materials on top of one another and slowly building up specific shapes and objects. In Gamzina’s and Horn’s work, the material used is extremely pure copper powder.

The process starts with a 3D design, or “construction manual,” for the object. Controlled by a computer, the printer spreads a few-micron-thick layer of copper powder on a platform. It then lowers the platform about 50 microns – half the thickness of a human hair – and spreads a second copper layer on top of the first, heats it with an electron beam to about 2,000 degrees Fahrenheit and welds it to the first layer. This process repeats over and over until the entire object has been built.
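The loop the printer runs (spread, melt, lower, repeat) can be sketched in a few lines of Python. Everything here, from the function name to the voxel representation, is illustrative rather than a real printer's API:

```python
def print_object(cross_sections, layer_thickness_um=50.0):
    """Simulate the layer-by-layer powder-bed build described above.

    cross_sections: one list of (x, y) cells per layer that the
    electron beam melts. Returns the fused voxels as (x, y, z_um)
    tuples, built bottom-up. Illustrative sketch only.
    """
    fused = []
    z_um = 0.0
    for layer in cross_sections:
        # Spread a fresh layer of copper powder; the ~2,000 degree F
        # electron beam then melts this layer's cross section,
        # welding it to the layer below.
        for (x, y) in layer:
            fused.append((x, y, z_um))
        # Lower the platform by one layer thickness and repeat.
        z_um += layer_thickness_um
    return fused
```

Real electron-beam systems add powder preheating, beam-path planning and thermal management on top of this skeleton.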

3D printing of a layer of a device known as a traveling wave tube using copper powder. (Christopher Ledford/North Carolina State University)

The amazing part: no specific tooling, fixtures or molds are needed for the procedure. As a result, 3D printing eliminates design constraints inherent in traditional fabrication processes and allows the construction of objects that are uniquely complex.

“The shape doesn’t really matter for 3D printing,” said SLAC staff scientist Chris Nantista, who designs and tests 3D-printed samples for Gamzina and Horn. “You just program it in, start your system and it can build up almost anything you want. It opens up a new space of potential shapes.”

The team took advantage of that, for example, when building part of a klystron – a specialized vacuum tube that amplifies radiofrequency signals – with internal cooling channels at NCSU. Building it in one piece improved the device’s heat transfer and performance.

Compared to traditional manufacturing, 3D printing is also less time consuming and could translate into cost savings of up to 70%, Gamzina said.

A challenging technique

But printing copper devices has its own challenges, as Horn, who began developing the technique with collaborators at RadiaBeam years ago, knows. One issue is finding the right balance between the thermal and electrical properties and the mechanical strength of the printed objects. The biggest hurdle for manufacturing accelerators and vacuum electronics, though, is that these high-vacuum devices require extremely high-quality, high-purity materials to avoid part failures, such as cracking or vacuum leaks.

The research team tackled these challenges by first improving the material’s surface quality, using finer copper powder and varying the way they fused layers together. However, the finer copper powder introduced the next challenge: its greater surface area allowed more oxygen to attach to the copper, increasing the oxide content of each layer and making the printed objects less pure.

So, Gamzina and Horn had to find a way to reduce the oxygen content in their copper powders. The method they came up with, which they recently reported in Applied Sciences, relies on hydrogen gas to bind oxygen into water vapor and drive it out of the powder.
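The underlying chemistry is the standard hydrogen reduction of copper oxides at elevated temperature. Which oxide species dominates in the powder (Cu2O versus CuO) is an assumption here; the reactions below are textbook chemistry rather than details from the paper:

```latex
\mathrm{Cu_2O} + \mathrm{H_2} \;\rightarrow\; 2\,\mathrm{Cu} + \mathrm{H_2O}\!\uparrow
\qquad\qquad
\mathrm{CuO} + \mathrm{H_2} \;\rightarrow\; \mathrm{Cu} + \mathrm{H_2O}\!\uparrow
```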

Using this method is somewhat surprising, Horn said. In a traditionally manufactured copper object, the formation of water vapor would create high-pressure steam bubbles inside the material, causing it to blister and fail. In the additive process, on the other hand, the water vapor escapes layer by layer, which removes it far more effectively.

Although the technique has shown great promise, the scientists still have a ways to go to reduce the oxygen content enough to print an actual particle accelerator. But they have already succeeded in printing a few components, such as the klystron output cavity with internal cooling channels and a string of coupled cavities that could be used for particle acceleration.

Planning to team up with industry partners

The next phase of the project will be driven by the newly formed Consortium on the Properties of Additive-Manufactured Copper, which is led by Horn. The consortium currently has four active industry members – Siemens, GE Additive, RadiaBeam and Calabazas Creek Research, Inc. – with more on the way.

“This is a nice example of collaboration between an academic institution, a national lab and small and large businesses,” Gamzina said. “It would allow us to figure out this problem together. Our work has already allowed us to go from ‘just imagine, this is crazy’ to ‘we can do it’ in less than two years.”

This work was primarily funded by the Naval Sea Systems Command, as a Small Business Technology Transfer Program with RadiaBeam, SLAC and NCSU. Other SLAC contributors include Chris Pearson, Andy Nguyen, Arianna Gleason, Apurva Mehta, Kevin Stone, Chris Tassone and Johanna Weker. Additional contributions came from Christopher Ledford and Christopher Rock at NCSU and Pedro Frigola, Paul Carriere, Alexander Laurich, James Penney and Matt Heintz at RadiaBeam.

Citation: C. Ledford et al., Applied Sciences, 24 September 2019 (10.3390/app9193993)

For questions or comments, contact the SLAC Office of Communications at communications@slac.stanford.edu.

————————————————

SLAC is a vibrant multiprogram laboratory that explores how the universe works at the biggest, smallest and fastest scales and invents powerful tools used by scientists around the globe. With research spanning particle physics, astrophysics and cosmology, materials, chemistry, bio- and energy sciences and scientific computing, we help solve real-world problems and advance the interests of the nation.

SLAC is operated by Stanford University for the U.S. Department of Energy’s Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit energy.gov/science.

Top figure: Examples of 3D-printed copper components that could be used in a particle accelerator: X-band klystron output cavity with micro-cooling channels (at left) and a set of coupled accelerator cavities. (Christopher Ledford/North Carolina State University)

This is a reposting of my news feature, courtesy of SLAC National Accelerator Laboratory.

Designing buildings to improve health

Are the buildings that we live and work in stressing us out?

The answer is probably yes, according to Stanford engineer Sarah Billington, PhD, and her colleagues. They also believe this stress is taking a significant toll on our mental and physical health because Americans typically spend almost 90% of their lives indoors.

During a recent talk at a Stanford Reunion Homecoming alumni celebration, Billington described a typical noisy office cut off from nature and filled with artificial light and artificial materials. This built environment makes workers feel stress, anxiety and distraction, which reduces their productivity and their ability to collaborate with others, she explained.

Now, Billington’s multidisciplinary research team is working to design buildings that instead reduce stress and increase a sense of belonging, physical activity and creativity.

Their first step is to measure how building features — such as airflow, lighting and views of nature — affect human well-being. They are quantifying well-being by measuring levels of stress, belonging, creativity, physical activity and environmental behavior.

In a preliminary online study, the team showed about 300 participants pictures of different office environments and asked them to envision working there at a new job. Across the board, the imagined work environment shaped how participants expected to feel on the job.

“In eight out of the nine things that we were looking at, there were statistically significant increases in their sense of belonging, their self-efficacy and their environmental efficacy when they believed they were going to be working in an environment that had natural materials, natural light or diverse representations,” said Billington.

The researchers are now expanding this work by performing larger lab studies and designing future field studies. They plan to collect data from “smart buildings,” which use high-tech sensors to control the heating, air conditioning, ventilation, lighting, security and other systems. The team also plans to collect data from personal devices such as smartwatches, smartphones and laptops.

By analyzing all of this data, they plan to infer the participants’ behaviors, emotions and physiological states. For example, the researchers will use the building’s occupancy sensors to detect if a worker is interacting with other people who are nearby. Or they will figure out someone’s stress level based on how he or she uses a laptop trackpad and mouse, Billington said.

Stanford computer scientist Pablo Paredes, PhD, who collaborates on the project, explained in a paper how their simple model of arm-hand dynamics can detect stress from mouse motion. Basically, your muscles get tense and stiff when you’re stressed, which changes how you move a computer mouse.
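The team hasn't published code here, but the intuition admits a toy sketch: in a mass-spring-damper view of the arm, a tenser (stiffer) arm oscillates faster, so the dominant frequency of small cursor oscillations can serve as a crude tension proxy. The function below is a hypothetical illustration, not the published model:

```python
import numpy as np

def dominant_frequency_hz(displacement, dt):
    """Crude zero-crossing estimate of the oscillation frequency of a
    de-meaned cursor-displacement trace; each full cycle contributes
    two sign changes. A higher frequency suggests a stiffer (tenser)
    arm in a mass-spring-damper reading. Illustrative only."""
    x = np.asarray(displacement, dtype=float)
    x -= x.mean()                      # remove drift/offset
    crossings = int(np.sum(np.diff(np.sign(x)) != 0))
    duration_s = dt * (len(x) - 1)
    return crossings / (2.0 * duration_s)
```

In practice, a published model would fit the full dynamics (stiffness and damping together) rather than a single frequency.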

Next, the team plans to use statistical modeling and machine learning to connect these human states to specific building features. They believe this will allow them to design better buildings that improve the occupants’ health.

The researchers said they intend to bring nature indoors by engineering living walls with adaptable acoustic and thermal properties.

They also plan to incorporate dynamic digital displays — such as a large art display on the wall or a small one on an individual’s personal devices — that reflect occupant activity and well-being. For example, a digital image of a flower might represent the energy level of a working group based on how open the petals are, and this could nudge their behavior, Billington said in the talk.

“Our idea is, what if we could make our buildings shape us in a positive way and keep improving over time?” Billington said.

Photo by Nastuh Abootalebi

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Eponym debate: The case for biologically-descriptive names

Naming a disease after the scientist who discovered it, like Hashimoto’s thyroiditis or Diamond-Blackfan anemia, just doesn’t work anymore, some physicians say.

A main argument against eponyms is that plain-language names — which describe the disease symptoms or underlying biological mechanisms — are more helpful for patients and medical trainees. For example, you can probably figure out a bit about acquired immunodeficiency syndrome (AIDS), whooping cough or pink eye just from their names.

“The more obscure and opaque the name — whether due to our profession’s Greek and Latin fetish or our predecessors’ narcissism — the more we separate ourselves from our patients,” says Caitlin Contag, MD, a resident physician at Stanford.

Stanford endocrinologist Danit Ariel, MD, agrees that patients are often confused by eponyms.

“I see this weekly in the clinic with autoimmune thyroid disease. Patients are often confusing Graves’ disease with Hashimoto’s thyroiditis because the names mean nothing to them,” says Ariel. “So when I’m educating them about their diagnosis, I try to use the simplest of terms so they understand what is going on with their body.”

Ariel says she explains to her patients that the thyroid is overactive in Graves’ disease and underactive in Hashimoto’s.

Ariel says she believes using biological names also helps medical students better understand the underlying mechanisms of diseases, whereas using eponyms relies on rote memorization that can hinder learning. “When using biologically-descriptive terms, it makes inherent sense and students are able to build on the concepts and embed the information more effectively,” Ariel says.

Medical eponyms are particularly confusing when more than one disease is named after the same person, Contag argues. For example, neurosurgeon Harvey Williams Cushing, MD, has 12 listings in the medical eponym dictionary. 

Stanford resident physician Angela Primbas, MD, agrees that having multiple syndromes named after the same person is confusing. She says it’s also confusing to have diseases named differently in different countries. In fact, the World Health Organization has tried to address this, along with other issues, by providing best-practice guidelines for naming infectious diseases. (Genetic disorders, however, lack a standard convention for naming.)

In addition, Primbas said she thinks naming a disease after a single person is an oversimplification of a complex story. “Often many people contribute to the discovery of a disease process or clinical finding, and naming it after one person is unfair to the other people who contributed,” she says. “Plus, it’s often disputed who first discovered a disease.”

Also, few disease names recognize the contributions (or suffering) of women and non-Europeans. And some eponyms are decidedly problematic, like those named after Nazi doctors. A famous example is Reiter’s syndrome, named for Hans Reiter, MD, who was convicted of war crimes for his medical experiments performed at a concentration camp.

“Reiter’s syndrome is now called reactive arthritis for the simple reason that Reiter committed atrocities on other human beings to conduct his ‘science.’ Such people should not have their name tied to a profession that espouses the principles of beneficence and nonmaleficence,” says Vishesh Khanna, MD, a resident physician at Stanford. He says medicine is swinging away from using these controversial eponyms to describe them on the basis of their biology instead.

Personally, Khanna also admits that naming a disease after himself wouldn’t sit well.

“Receiving credit for discovering something can certainly be a wonderful feather in a physician’s career cap, but the thought of actually naming a disease after myself makes me cringe,” says Khanna. “Patients and doctors would utter my name every time they had to bring up a disease.”

Such sentiments may be why Contag’s example of a good disease name — cyclic vomiting syndrome — is in plain English. Was no one eager to lend his or her name to it?

While the debate over medical eponyms continues, Khanna suggests a potential solution. “Perhaps a reasonable approach to naming going forward is to allow the use of already established eponyms without dubious histories, while only naming newly discovered diseases based on pathophysiology,” he says.

Everyone I spoke with agrees that changing the medical eponyms will only happen slowly, if at all, since it is difficult to change language. However, it can be done, according to Dina Wang-Kraus, MD, a Stanford resident in psychiatry and behavioral sciences.

“I looked through our diagnostic manual and we do not have diseases named after people in psychiatry. This shift happened quite some time ago so as to avoid confusion and to allow clinicians from all over the world to have a unified language,” says Wang-Kraus. “In psych, we often say that we wish other specialties would adopt a universal nomenclature too.”

This is the conclusion of a series on naming diseases. The first part is available here.

Photo by 4772818

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Eponym debate: The case for naming diseases after people

Is it better to name a genetic disorder Potocki-Lupski syndrome or the 17p11.2 duplication syndrome? What about Addison’s disease as opposed to adrenal insufficiency? Or Tay-Sachs disease versus hexosaminidase alpha-subunit deficiency (variant B)?

If you have a strong opinion about which is preferable, you aren’t alone: there is an ongoing controversy on how to name diseases. In Western science and medicine, a long-standing tradition is to name a disease after a person. However, many physicians now argue that these eponyms should be abandoned for biologically-descriptive names.

First, a bit about how eponyms are created.

Although the media sometimes comes up with a catchy name that sticks, like swine flu, diseases are typically named by scientists when they first report them in scientific publications.

Oftentimes, diseases are named after prominent scientists who played a major role in identifying the disease. The example that leaps to my mind is Hodgkin’s disease — a type of cancer associated with enlarged lymph nodes — because I was diagnosed and treated for Hodgkin’s at Stanford years ago. Hodgkin’s disease was named after Thomas Hodgkin, an English physician and pathologist who described the disease in a paper in 1832.

Less frequently, diseases are named after a famous patient. For example, amyotrophic lateral sclerosis (ALS), commonly known as Lou Gehrig’s disease, was named after the famous New York Yankee baseball player who was forced to retire after developing the disease in 1939.

As these examples show, one of the reasons to keep eponyms is that they are embedded with medical traditions and history. They include some kind of story. And, oftentimes, they honor key people associated with the disease.

“I think the people who discover these conditions deserve recognition,” explains Angela Primbas, MD, a resident physician at Stanford. “I don’t think the medical community would know their names otherwise.”

Some physicians also feel eponyms bring color to medicine. “The use of eponyms in medicine, as in other areas, is often random, inconsistent, idiosyncratic, confused, and heavily influenced by local geography and culture. That is part of their beauty,” writes Australian medical researcher Judith Whitworth, MD, in an editorial in BMJ.

Other proponents of eponyms are more practical. They argue that eponymous disease names provide a convenient shorthand for doctors and patients.

Medical eponyms are also widely used by patients, physicians, textbooks and websites. According to a dictionary of medical eponyms, thousands of eponyms are used throughout the world, particularly in the United States and Europe. They are even prominent in the World Health Organization’s international classification of diseases.

So is a massive effort to purge these eponyms worth it, or even realistic?

“There are certainly examples where eponymous disease names are so inculcated in medical vernacular that changing them to a pathology-based name might not be worth the effort,” says Vishesh Khanna, MD, a resident physician at Stanford. He gives the examples of Alzheimer’s disease and Crohn’s disease.

Jimmy Zheng, a medical student at Stanford, agrees that eponyms are here to stay. “At the level of medical school, eponyms are broadly dispensed in class, in USMLE study resources and in our clinical training,” Zheng says. “While some clinicians have called for the complete erasure of eponyms, this is unlikely to happen.”

Zheng and Stanford neurologist Carl Gold, MD, recently assessed the historical trends of medical eponym use in neurology literature. They also surveyed neurology residents on their knowledge and attitude towards eponyms. Their study’s findings were published in Neurology.

“Regardless of ‘should,’ our analyses demonstrate that eponyms are increasingly prevalent in the scientific literature and that new eponyms like the Potocki-Lupski syndrome continue to be coined,” Gold says. “Despite awareness of both the pros and cons of eponyms, the majority of Stanford neurology trainees in our study reported that historical precedent, pervasiveness and ease of use would drive the continued use of eponyms in neurology.”

So the debate rages on. According to my informal and small survey, some Stanford physicians favor eliminating eponymous disease names — stay tuned to find out why.

This is the beginning of a two-part series on naming diseases. The conclusion will appear this week.

Photo via Good Free Photos

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Measuring depression with wearables

Depression and emotional disorders can occur at any time of year — and do for millions of Americans. But feeling sad, lonely, anxious and depressed may seem particularly isolating during this holiday season, which is supposed to be a time of joy and celebration.

A team of Stanford researchers believes that one way to work towards ameliorating this suffering is to develop a better way to quantitatively measure stress, anxiety and depression.

“One of the biggest barriers for psychiatry in the field that I work in is that we don’t have objective tests. So the way that we assess mental health conditions and risks for them is by interview and asking you how do you feel,” said Leanne Williams, MD, a professor in psychiatry and behavioral sciences at Stanford, when she spoke at a Stanford Reunion Homecoming alumni celebration.

She added, “Imagine if you were diagnosing and treating diabetes without tests, without sensors. It’s really impossible to imagine, yet that is what we’re doing for mental health, right now.”

Instead, Stanford researchers want to collect and analyze data from wearable devices to quantitatively characterize mental states. The multidisciplinary team includes scientists from the departments of psychiatry, chemical engineering, bioengineering, computer science and global health.

Their first step was to use functional magnetic resonance imaging to map the brain activity of healthy controls compared to people with major depressive disorder who were imaged before and after they were treated with antidepressants.

The researchers identified six “biotypes” of depression, representing different ways brain circuitry can be disrupted to cause specific symptoms. They classified the biotypes as rumination, anxious avoidance, threat dysregulation, anhedonia, cognitive dyscontrol and inattention.

“For example, threat dysregulation is when the brain stays in alarm mode after acute stress and you feel heart racing, palpitations, sometimes panic attacks,” Williams said, “and that’s the brain not switching off from that mode.”

The team, which includes chemical engineer Zhenan Bao, PhD, then identified links between these different brain biotypes and various physiological differences, including changes in heart rate, skin conductance, electrolyte levels and hormone production. In particular, they found correlations between the biotypes and production of cortisol, a hormone strongly related to stress level.

Now, they are developing a wearable device — called MENTAID — that measures the physiological parameters continuously. Their current prototype can already measure cortisol levels in sweat in agreement with standard laboratory measurements. This was an incredibly challenging task due to the extremely low concentration and tiny molecular size of cortisol.

Going forward, they plan to validate their wearable device with clinical trials, including studies to assess its design and user interface. Ultimately, the researchers hope MENTAID will help prevent and treat mental illness — for example, by better predicting and evaluating patient response to specific anti-depressants.

Photo by Sora Sagano

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Floppy vibration modes explain negative thermal expansion in solids

Animation showing how solid crystals of ScF3 shrink upon heating. While the bonds between scandium (green) and fluorine (blue) remain relatively rigid, the fluorine atoms along the sides of the cubic crystals oscillate independently, resulting in a wide range of distances between neighboring fluorine atoms. The higher the temperature, the greater the buckling in the sides of the crystals leading to the overall contraction (negative thermal expansion) effect. Credit: Brookhaven National Laboratory

Matching the thermal expansion values of materials in contact is essential when manufacturing precision tools, engines and medical devices. For example, a dental filling would cause a toothache if it expanded a different amount than the surrounding tooth when you drink a hot beverage. Fillings are therefore made from a composite of materials with positive and negative thermal expansion, creating an overall expansion tailored to the tooth enamel.

The underlying mechanisms of why crystalline materials with negative thermal expansion (NTE) shrink when heated have been a matter of scientific debate. Now, a multi-institutional research team led by Igor Zaliznyak, a physicist at Brookhaven National Laboratory, believes it has the answer.

As recently reported in Science Advances, the scientists measured the distance between atoms in scandium trifluoride powder, a cubic NTE material, at temperatures ranging from 2 K to 1099 K using total neutron diffraction. The research team determined the probability that two particular atomic species would be found at a given distance. They studied scandium trifluoride because it has a simple atomic structure in which each scandium atom is surrounded by an octahedron of fluorine atoms. According to the prevailing rigid-unit-mode (RUM) theory, each fluorine octahedron should vibrate and move as a rigid unit when heated — but that is not what they observed.

“We found that the distances between scandium and fluorine were pretty rigidly-defined until a temperature of about 700 K, but the distances between the nearest-neighbor fluorines became ill-defined at temperatures above 300 K,” says Zaliznyak. “Their probability distributions became very broad, which is basically a direct manifestation of the fact that the shape of the octahedron is not preserved. If the fluorine octahedra had been rigid, the fluorine-fluorine distance would have been as well defined as scandium-fluorine.”

With the help of high school researcher David Wendt and condensed matter theorist Alexei Tkachenko, Zaliznyak developed a simple model to explain these experimental results. The team went back to the basics—the fundamental laws of physics.

“When we removed the ill-controlled constraint that there must be these rigid units, then we could explain the fundamental interactions that govern the atomic positions in the [ScF3] solid using just Coulomb interactions,” Zaliznyak says.

The team developed a negative thermal expansion model that treats each Sc-F bond as a rigid monomer link and the entire ScF3 crystal structure as a floppy, under-constrained network of freely jointed monomers. Each scandium ion is constrained by rigid bonds in all three directions, whereas each fluorine ion is free to vibrate and displace orthogonally to its Sc-F bonds. This is a direct three-dimensional analogy of the well-established behavior of chainlike polymers. And their simple theory agreed remarkably well with their experiments, accurately predicting the distribution of distances between the nearest-neighbor fluorine pairs for all temperatures where NTE was observed.   
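One way to see why transverse fluorine vibrations contract the lattice is a short geometric estimate, a textbook argument in the spirit of, but not taken from, the paper's model. If the Sc-F bond length r is fixed and the fluorine is displaced sideways by u, the bond's projection along the Sc-Sc axis shortens:

```latex
d_{\parallel} = \sqrt{r^2 - u^2} \;\approx\; r - \frac{u^2}{2r},
\qquad\text{so}\qquad
a(T) \;\approx\; 2r - \frac{\langle u_\perp^2(T)\rangle}{r}.
```

Because the mean-square transverse displacement grows with temperature, the lattice parameter a(T) decreases on heating.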

“Basically we figured out how these ceramic materials contract on warming and how to make a simple calculation that describes this phenomenon,” Zaliznyak says.

Angus Wilkinson, an expert on negative thermal expansion materials at the Georgia Institute of Technology who is not involved in the project, agrees that Zaliznyak’s work will change the way people think about negative thermal expansion in solids.

“While the RUM picture of NTE has been questioned for some time, the experimental data in this paper, along with the floppy network (FN) analysis, provide a compelling alternative view,” says Wilkinson. “I very much like the way the FN approach is applicable to both soft matter systems and crystalline materials. The floppy network analysis is novel and gives gratifyingly good agreement with a wide variety of experimental data.”

According to Zaliznyak, the next major step of their work will be to study more complex materials that exhibit NTE behavior now that they know what to look for.

Read the article in Science Advances.

This is a reposting of my news brief, courtesy of MRS Bulletin.

X-rays shed light on how anti-asthmatic drugs work

A new study uncovers how a critical protein binds to drugs used to treat asthma and other inflammatory diseases.

By studying the crystal structure of an important protein when it was bound to two drugs widely prescribed to treat asthma, an international team of scientists has discovered unique binding and signaling mechanisms that could lead to the development of more effective treatments for asthma and other inflammatory diseases.

The protein, called cysteinyl leukotriene receptor type 1 (CysLT1R), controls the dilation and inflammation of bronchial tubes in the lungs. It is therefore one of the primary targets for anti-asthma drugs, including the two drugs studied: zafirlukast, which acts on inflammatory cells in the lungs, and pranlukast, which reduces bronchospasms due to allergic reactions.

Using the Linac Coherent Light Source (LCLS) X-ray free-electron laser at the Department of Energy’s SLAC National Accelerator Laboratory, the team bombarded tiny crystals of CysLT1R-zafirlukast with X-ray pulses and measured its structure. They also used X-rays from the European Synchrotron Radiation Facility in Grenoble, France, to collect data about CysLT1R-pranlukast crystals. They published their findings in October in Science Advances.

The researchers gained a new understanding of how CysLT1R interacts with these anti-asthma drugs, observing surprising structural features and a new activation mechanism. For example, the study revealed major differences between how the two drugs attached to the binding site of the protein. In comparison to pranlukast, the zafirlukast molecule jammed open the entrance gate of CysLT1R’s binding site into a much wider configuration. This improved understanding of the protein suggests a new rationale for designing more effective anti-asthma drugs.

The study was performed by a collaboration of researchers at SLAC; the Moscow Institute of Physics and Technology, Russia; Université de Sherbrooke, Canada; the University of Southern California; Forschungszentrum Jülich, Germany; Université Grenoble Alpes-CEA-CNRS, France; the Czech Academy of Sciences, Czech Republic; and Arizona State University.

Citation: Aleksandra Luginina et al., Science Advances, 09 October 2019 (10.1126/sciadv.aax2518).

For questions or comments, contact the SLAC Office of Communications at communications@slac.stanford.edu.

Image caption: Using X-rays, researchers uncovered details about two drugs widely prescribed to treat asthma: pranlukast (shown up top) and zafirlukast (shown beneath). Their results revealed major differences between how the two drugs attached to the binding site of the receptor protein. In comparison to pranlukast, the zafirlukast molecule jammed open the entrance gate of the protein’s binding site into a much wider configuration. (10.1126/sciadv.aax2518)

This is a reposting of my SLAC news story, courtesy of SLAC National Accelerator Laboratory.