Superconductivity and charge density waves caught intertwining at the nanoscale

The team aimed infrared laser pulses at the YBCO sample to switch off its superconducting state, then used X-ray laser pulses to illuminate the sample and examined the X-ray light scattered from it. Their results revealed that regions of superconductivity and charge density waves were arranged in unexpected ways. (Courtesy Giacomo Coslovich/SLAC National Accelerator Laboratory)

Room-temperature superconductors could transform everything from electrical grids to particle accelerators to computers – but before they can be realized, researchers need to better understand how existing high-temperature superconductors work.

Now, researchers from the Department of Energy’s SLAC National Accelerator Laboratory, the University of British Columbia, Yale University and others have taken a step in that direction by studying the fast dynamics of a material called yttrium barium copper oxide, or YBCO.

The team reports May 20 in Science that YBCO’s superconductivity is intertwined in unexpected ways with another phenomenon known as charge density waves (CDWs), or ripples in the density of electrons in the material. As the researchers expected, the CDWs got stronger when they turned off YBCO’s superconductivity. However, they were surprised to find that the CDWs also suddenly became more spatially organized, suggesting superconductivity somehow fundamentally shapes the form of the CDWs at the nanoscale.

“A big part of what we don’t know is the relationship between charge density waves and superconductivity,” said Giacomo Coslovich, a staff scientist at the Department of Energy’s SLAC National Accelerator Laboratory, who led the study. “As one of the cleanest high-temperature superconductors that can be grown, YBCO offers us the opportunity to understand this physics in a very direct way, minimizing the effects of disorder.”

He added, “If we can better understand these materials, we can make new superconductors that work at higher temperatures, enabling many more applications and potentially addressing a lot of societal challenges – from climate change to energy efficiency to availability of fresh water.”

Observing fast dynamics

The researchers studied YBCO’s dynamics at SLAC’s Linac Coherent Light Source (LCLS) X-ray laser. They switched off superconductivity in the YBCO samples with infrared laser pulses and then bounced X-ray pulses off those samples. For each shot of X-rays, the team pieced together a kind of snapshot of the CDWs’ electron ripples. By pasting those snapshots together, they recreated the CDWs’ rapid evolution.

“We did these experiments at the LCLS because we needed ultrashort pulses of X-rays, which can be made at very few places in the world. And we also needed soft X-rays, which have longer wavelengths than typical X-rays, to directly detect the CDWs,” said staff scientist and study co-author Joshua Turner, who is also a researcher at the Stanford Institute for Materials and Energy Sciences. “Plus, the people at LCLS are really great to work with.”

These LCLS runs generated terabytes of data, a challenge for processing. “Using many hours of supercomputing time, LCLS beamline scientists binned our huge amounts of data into a more manageable form so our algorithms could extract the feature characteristics,” said MengXing (Ketty) Na, a University of British Columbia graduate student and co-author on the project.
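To give a flavor of what that binning looks like in practice, here is a minimal, hypothetical Python sketch: it groups simulated X-ray shots by their pump-probe delay and averages the scattered intensity in each bin to build a time trace. The arrays and numbers are stand-ins for illustration, not the actual LCLS data or analysis code.

```python
import numpy as np

# Stand-in per-shot data: the pump-probe delay of each X-ray shot (picoseconds)
# and the scattered intensity recorded near the CDW peak for that shot.
rng = np.random.default_rng(0)
delays = rng.uniform(-1.0, 5.0, size=100_000)
intensities = 1.0 + 0.2 * rng.standard_normal(100_000)

# Group shots into 0.1 ps delay bins and average within each bin, turning many
# noisy single-shot measurements into one manageable time trace.
bin_edges = np.linspace(-1.0, 5.0, 61)
bin_index = np.digitize(delays, bin_edges) - 1
trace = np.array([
    intensities[bin_index == i].mean() if np.any(bin_index == i) else np.nan
    for i in range(len(bin_edges) - 1)
])
# trace[i] now approximates the CDW signal at delay bin i after the infrared pump.
```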

The team found that charge density waves within the YBCO samples became more correlated – that is, more electron ripples were periodic or spatially synchronized – after lasers switched off the superconductivity.
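In scattering experiments like this one, "more correlated" is typically read off from the width of the CDW diffraction peak: the narrower the peak, the longer the distance over which the ripples stay in sync (roughly, correlation length is the inverse of the peak's half-width). The short Python sketch below fits a Lorentzian to a made-up peak to show the idea; it uses hypothetical numbers and is not the paper's analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(q, amplitude, q0, hwhm, background):
    """Lorentzian line shape often used to model a CDW diffraction peak."""
    return amplitude * hwhm**2 / ((q - q0)**2 + hwhm**2) + background

# Hypothetical scattering profile around the CDW wavevector (inverse angstroms).
rng = np.random.default_rng(1)
q = np.linspace(0.28, 0.36, 200)
intensity = lorentzian(q, 1.0, 0.315, 0.005, 0.05) + 0.01 * rng.standard_normal(q.size)

# Fit the peak; a narrower half-width means longer-range CDW order.
params, _ = curve_fit(lorentzian, q, intensity, p0=[1.0, 0.31, 0.01, 0.0])
correlation_length = 1.0 / abs(params[2])   # in angstroms
print(f"CDW correlation length ~ {correlation_length:.0f} angstroms")
```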

“Doubling the number of waves that are correlated with just a flash of light is quite remarkable, because light typically would produce the opposite effect. We can use light to completely disorder the charge density waves if we push too hard,” Coslovich said.

Blue areas are superconducting regions, and yellow areas represent charge density waves. After a laser pulse (red), the superconducting regions are rapidly turned off and the charge density waves react by rearranging their pattern, becoming more orderly and coherent. (Greg Stewart/SLAC National Accelerator Laboratory)

To explain these experimental observations, the researchers then modeled how regions of CDWs and superconductivity ought to interact given a variety of underlying assumptions about how YBCO works. For example, their initial model assumed that a uniform region of superconductivity, when shut off with light, would become a uniform CDW region – but of course that didn’t agree with their results.

“The model that best fits our data so far indicates that superconductivity is acting like a defect within a pattern of the waves. This suggests that superconductivity and charge density waves like to be arranged in a very specific, nanoscopic way,” explained Coslovich. “They are intertwined orders at the length scale of the waves themselves.”

Illuminating the future

Coslovich said that being able to turn superconductivity off with light pulses was a significant advance, enabling observations on the time scale of less than a trillionth of a second, with major advantages over previous approaches.

“When you use other methods, like applying a high magnetic field, you have to wait a long time before making measurements, so CDWs rearrange around disorder and other phenomena can take place in the sample,” he said. “Using light allowed us to show this is an intrinsic effect, a real connection between superconductivity and charge density waves.”

The research team is excited to expand on this pivotal work, Turner said. First, they want to study how the CDWs become more organized when the superconductivity is shut off with light. They are also planning to tune the laser’s wavelength or polarization in future LCLS experiments, in hopes of using light to enhance, rather than quench, the superconducting state, so they could readily switch it off and on.

“There is an overall interest in trying to do this with pulses of light on very fast timescales, because that can potentially lead to the development of superconducting, light-controlled devices for the new generation of electronics and computing,” said Coslovich. “Ultimately, this work can also help guide people who are trying to build room-temperature superconductors.”

This research is part of a collaboration between researchers from LCLS, SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL), UBC, Yale University, the Institut National de la Recherche Scientifique in Canada, North Carolina State University, Università Cattolica di Brescia and other institutions. This work was funded in part by the DOE Office of Science. LCLS and SSRL are DOE Office of Science user facilities.

Citation: Scott Wandel et al., Science, 20 May 2022 (10.1126/science.abd7213)

This is a reposting of my news feature, courtesy of SLAC National Accelerator Laboratory.

Physicians re-evaluate use of lead aprons during X-rays

When you get routine X-rays of your teeth at the dentist’s office or a chest X-ray to determine if you have pneumonia, you expect the technologist to drape your pelvis in a heavy radioprotective apron. But that may not happen the next time you get X-rays.

There is growing evidence that shielding reproductive organs has negligible benefit. And because a protective cover can move out of place, using it can result in an increased radiation dose to the patient or impaired quality of diagnostic images.

Shielding testes and ovaries during X-ray imaging has been standard practice since the 1950s due to a fear of hereditary risks — namely, that the radiation would mutate germ cells and these mutations would be passed on to future generations. This concern was prompted by the genetic effects observed in studies of irradiated fruit flies. However, such hereditary effects have not been observed in humans.

“We now understand that the radiosensitivity of ovaries and testes is extremely low. In fact, they are some of the lower radiation-sensitive organs — much lower than the colon, stomach, bone marrow and breast tissue,” said  Donald Frush, MD, a professor of pediatric radiology at Lucile Packard Children’s Hospital Stanford.

In addition, he explained, technology improvements have dramatically reduced the radiation dose that a patient receives during standard X-ray films, computerized tomography scans and other radiographic procedures. For example, a review paper finds that the radiation dose to ovaries and testes dropped by 96% from 1959 to 2012 for equivalent X-ray exams of the pelvis without shielding.

But even if the radioprotective shielding may have minimal — or no — benefit, why not use it just to be safe?

The main problem is that so-called lead aprons — which aren’t made of lead anymore — are difficult to position accurately, Frush said. Even when shielding guidelines are followed, the position of the ovaries is so variable that they may not be completely covered. Also, the protective shield can obscure the target anatomy. This forces doctors to live with poor-quality diagnostic information or to repeat the X-ray scan, thus increasing the radiation dose given to the patient, he said.

Positioning radioprotective aprons is particularly troublesome for small children.

“Kids kick their legs up and the shield moves while the technologists are stepping out of the room to take the exposure and can’t see them. So the X-rays have to be retaken, which means additional dose to the kids,” Frush said.

Another issue derives from something called automatic exposure control, a technology that optimizes image quality by adjusting the X-ray machine’s radiation output based on what is in the imaging field. Overall, automatic exposure control greatly improves the quality of the X-ray images and enables a lower dose to be used.  

However, if positioning errors cause the radioprotective apron to enter the imaging field, the radiographic system increases the magnitude and length of its output, in order to penetrate the shield.

“Automatic exposure control is a great tool, but it needs to be used appropriately. It’s not recommended for small children, particularly in combination with radioprotective shielding,”  said Frush.

With these concerns in mind, many technologists, medical physicists and radiologists now recommend discontinuing the routine practice of shielding reproductive organs during X-ray imaging. However, they support giving technologists discretion to provide shielding in certain circumstances, such as on parental request. This position is supported by several groups, including the American Association of Physicists in Medicine, the National Council on Radiation Protection and Measurements and the American College of Radiology.

These new guidelines are also supported by the Image Gently Alliance, a coalition of health care organizations dedicated to promoting safe pediatric imaging, which is chaired by Frush. And they are being adopted by Stanford hospitals.

“Lucile Packard Children’s revised policy on gonadal shielding has been formalized by the department,” he said. “There is still some work to do with education, including training providers and medical students to have a dialogue with patients and caregivers. But so far, pushback by patients has been much less than expected.”

Looking beyond the issue of shielding, Frush advised parents to be open to lifesaving medical imaging for their children, while also advocating for its best use. He said:

“Ask the doctor who is referring the test: Is it the right study? Is it the right thing to do now, or can it wait? Ask the imaging facility:  Are you taking into account the age and size of my child to keep the radiation dose reasonable?”

Photo by Shutterstock / pang-oasis

This is a reposting of my Scope story, courtesy of Stanford School of Medicine.

Nerve interface provides intuitive and precise control of prosthetic hand

Current state-of-the-art designs for a multifunctional prosthetic hand are restricted in functionality by the signals used to control it. A promising source for prosthetic motor control is the peripheral nerves that run from the spinal column down the arm, since they still function after an upper limb amputation. But building a direct interface to the peripheral nervous system is challenging, because these nerves and their electrical signals are incredibly small. Current interface techniques are hindered by signal amplitude and stability issues, so they provide amputees with only a limited number of independent movements. 

Now, researchers from the University of Michigan have developed a novel regenerative peripheral nerve interface (RPNI) that relies on tiny muscle grafts to amplify the peripheral nerve signals, which are then translated into motor control signals for the prosthesis using standard machine learning algorithms. The research team has demonstrated real-time, intuitive, finger-level control of a robotic hand for amputees, as reported in a recent issue of Science Translational Medicine.

“We take a small graft from one of the patient’s quadricep muscles, or from the amputated limb if they are doing the amputation right then, and wrap just the right amount of muscle around the nerve. The nerve then regrows into the muscle to form new neuromuscular junctions,” says Cindy Chestek, an associate professor of biomedical engineering at the University of Michigan and a senior author on the study. “This creates multiple innervated muscle fibers that are controlled by the small nerve and that all fire at the same time to create a much larger electrical signal—10 or 100 times bigger than you would record from inside or around a nerve. And we do this for several of the nerves in the arm.”

This surgical technique was initially developed by co-researcher Paul Cederna, a plastic surgeon at the University of Michigan, to treat phantom limb pain caused by neuromas. A neuroma is a painful growth of nerve cells that forms at the site of the amputation injury. Over 200 patients have undergone the surgery to treat neuroma pain.

“The impetus for these surgeries was to give nerve fibers a target, or a muscle, to latch on to so neuromas didn’t develop,” says Gregory Clark, an associate professor in biomedical engineering from the University of Utah who was not involved in the study. “Paul Cederna was insightful enough to realize these reinnervated mini-muscles also provided a wonderful opportunity to serve as signal sources for dexterous, intuitive control. That means there’s a ready population that could benefit from this approach.”

The Michigan team validated their technique with studies involving four participants with upper extremity amputations who had previously undergone RPNI surgery to treat neuroma pain. Each participant had a total of 3 to 9 muscle grafts implanted on nerves. Initially, the researchers measured the signals from these RPNIs using fine-wire, nickel-alloy electrodes, which were inserted through the skin into the grafts using ultrasound guidance. They measured high-amplitude electromyography signals, representing the electrical activity of the mini-muscles, when the participants imagined they were moving the fingers of their phantom hand. The ultrasound images showed the participants’ thoughts caused the associated specific mini-muscles to contract. These proof-of-concept measurements, however, were limited by the discomfort and movement of the percutaneous electrodes that pierced the skin.

Next, the team surgically implanted permanent electrodes into the RPNIs of two of the participants. They used a type of electrode commonly used for battery-powered diaphragm pacing systems, which electrically stimulate the diaphragm muscles and nerves of patients with chronic respiratory insufficiency to help regulate their breathing. These implanted electrodes allowed the researchers to measure even larger electrical signals—week after week from the same participant—by just plugging into the connector. After taking 5 to 15 minutes of calibration data, the electrical signals were translated into movement intent using machine learning algorithms and then passed on to a prosthetic hand. Both subjects were able to intuitively complete tasks like stacking physical blocks without any training—it worked on the first try just by thinking about it, says Chestek. Another key result is that the algorithm kept working even 300 days later.
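As a rough illustration of what "translated into movement intent using machine learning algorithms" can mean, here is a toy Python sketch that maps windowed RPNI signal features to finger positions with ridge regression. The feature choice, data and model are hypothetical stand-ins; the study's actual algorithms may differ.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical calibration data: one feature per RPNI channel (for example, mean
# absolute EMG amplitude in a short time window) paired with intended finger positions.
n_windows, n_channels, n_fingers = 6000, 8, 5
features = rng.standard_normal((n_windows, n_channels))
true_map = rng.standard_normal((n_channels, n_fingers))
finger_positions = features @ true_map + 0.1 * rng.standard_normal((n_windows, n_fingers))

# A few minutes of calibration data is enough to fit a simple linear decoder.
decoder = Ridge(alpha=1.0).fit(features, finger_positions)

# At run time, each new window of RPNI activity becomes a command for the prosthetic hand.
new_window = rng.standard_normal((1, n_channels))
commands = decoder.predict(new_window)   # one value per prosthetic finger
```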

“The ability to use the determined relationship between electrical activity and intended movement for a very long period of time has important practical consequences for the user of a prosthesis, because the last thing they want is to rely on a hand that is not reliable,” Clark says.

Although this clinical trial is ongoing, the Michigan team is now investigating how to replace the connector and computer card with an implantable device that communicates wirelessly, so patients can walk around in the real world. The researchers are also working to incorporate sensory feedback through the regenerative peripheral nerve interface. Their ultimate goal is for patients to feel like their prosthetic hand is alive, taking over the space in the brain where their natural hand used to be.

“People are excited because this is a novel approach that will provide high quality, intuitive, and very specific signals that can be used in a very straightforward, natural way to provide high degrees of dexterous control that are also very stable and last a long time,” Clark says.

Read the article in Science Translational Medicine.

Illustration of multiple regenerative peripheral nerve interfaces (RPNIs) created for each available nerve of an amputee. Fine-wire electrodes were embedded into his RPNI muscles during the readout session. Credit: Philip Vu/University of Michigan; Science Translational Medicine doi: 10.1126/scitranslmed.aay2857

This is a reposting of my news brief, courtesy of Materials Research Society.

Could the next generation of particle accelerators come out of the 3D printer?

SLAC scientists and collaborators are developing 3D copper printing techniques to build accelerator components.

Imagine being able to manufacture complex devices whenever you want and wherever you are. It would create unforeseen possibilities even in the most remote locations, such as building spare parts or new components on board a spacecraft. 3D printing, or additive manufacturing, could be a way of doing just that. All you would need is the device materials, a printer and a computer that controls the process.

Diana Gamzina, a staff scientist at the Department of Energy’s SLAC National Accelerator Laboratory; Timothy Horn, an assistant professor of mechanical and aerospace engineering at North Carolina State University; and researchers at RadiaBeam Technologies dream of developing the technique to print particle accelerators and vacuum electronic devices for applications in medical imaging and treatment, the electrical grid, satellite communications, defense systems and more.

In fact, the researchers are closer to making this a reality than you might think.

“We’re trying to print a particle accelerator, which is really ambitious,” Gamzina said. “We’ve been developing the process over the past few years, and we can already print particle accelerator components today. The whole point of 3D printing is to make stuff no matter where you are without a lot of infrastructure. So you can print your particle accelerator on a naval ship, in a small university lab or somewhere very remote.”

3D printing can be done with liquids and powders of numerous materials, but there aren’t any well-established processes for 3D printing ultra-high-purity copper and its alloys – the materials Gamzina, Horn and their colleagues want to use. Their research focuses on developing the method.

Indispensable copper

Accelerators boost the energy of particle beams, and vacuum electronic devices are used in amplifiers and generators. Both rely on components that can be easily shaped and conduct heat and electricity extremely well. Copper has all of these qualities and is therefore widely used.

Traditionally, each copper component is machined individually and bonded with others using heat to form complex geometries. This manufacturing technique is incredibly common, but it has its disadvantages.

“Brazing together multiple parts and components takes a great deal of time, precision and care,” Horn said. “And any time you have a joint between two materials, you add a potential failure point. So, there is a need to reduce or eliminate those assembly processes.”

Potential of 3D copper printing

3D printing of copper components could offer a solution.

It works by layering thin sheets of materials on top of one another and slowly building up specific shapes and objects. In Gamzina’s and Horn’s work, the material used is extremely pure copper powder.

The process starts with a 3D design, or “construction manual,” for the object. Controlled by a computer, the printer spreads a few-micron-thick layer of copper powder on a platform. It then moves the platform about 50 microns – half the thickness of a human hair – and spreads a second copper layer on top of the first, heats it with an electron beam to about 2,000 degrees Fahrenheit and welds it to the first layer. This process repeats over and over until the entire object has been built.
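For readers who think in code, the snippet below is a purely conceptual sketch of that build cycle. The printer-control functions are hypothetical placeholders that only print messages, not a real machine's API; the 50-micron step and roughly 2,000 degrees Fahrenheit come from the description above.

```python
# Conceptual sketch of the electron-beam, layer-by-layer build cycle described above.
# The "printer" functions below are hypothetical placeholders, not a real machine API.

LAYER_STEP_MICRONS = 50      # how far the platform moves between layers
MELT_TEMP_F = 2000           # approximate electron-beam melt temperature

def lower_platform(microns):
    print(f"lower build platform by {microns} microns")

def spread_copper_powder():
    print("spread a thin layer of copper powder across the bed")

def melt_and_weld(slice_id):
    print(f"melt slice {slice_id} at ~{MELT_TEMP_F} F, welding it to the layer below")

def build_part(sliced_design):
    """Repeat the cycle until every slice of the 3D design has been fused."""
    for slice_id in sliced_design:
        lower_platform(LAYER_STEP_MICRONS)
        spread_copper_powder()
        melt_and_weld(slice_id)

build_part(range(1, 4))      # a three-layer toy example
```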

3D printing of a layer of a device known as a traveling wave tube using copper powder. (Christopher Ledford/North Carolina State University)

The amazing part: no specific tooling, fixtures or molds are needed for the procedure. As a result, 3D printing eliminates design constraints inherent in traditional fabrication processes and allows the construction of objects that are uniquely complex.

“The shape doesn’t really matter for 3D printing,” said SLAC staff scientist Chris Nantista, who designs and tests 3D-printed samples for Gamzina and Horn. “You just program it in, start your system and it can build up almost anything you want. It opens up a new space of potential shapes.”

The team took advantage of that, for example, when building part of a klystron – a specialized vacuum tube that amplifies radiofrequency signals – with internal cooling channels at NCSU. Building it in one piece improved the device’s heat transfer and performance.

Compared to traditional manufacturing, 3D printing is also less time consuming and could translate into cost savings of up to 70%, Gamzina said.

A challenging technique

But printing copper devices has its own challenges, as Horn, who began developing the technique with collaborators at RadiaBeam years ago, knows. One issue is finding the right balance between the thermal and electrical properties and strengths of the printed objects. The biggest hurdle for manufacturing accelerators and vacuum electronics, though, is that these high-vacuum devices require extremely high quality and pure materials to avoid part failures, such as cracking or vacuum leaks.

The research team tackled these challenges by first improving the material’s surface quality, using finer copper powder and varying the way they fused layers together. However, using finer copper powder led to the next challenge: it allowed more oxygen to attach to the powder, increasing the oxide content of each layer and making the printed objects less pure.

So, Gamzina and Horn had to find a way to reduce the oxygen content in their copper powders. The method they came up with, which they recently reported in Applied Sciences, relies on hydrogen gas to bind oxygen into water vapor and drive it out of the powder.
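The underlying chemistry is the textbook reduction of copper oxide by hydrogen; the general reaction (not a detail taken from the paper) is:

```latex
% Hydrogen pulls the oxygen out of the copper oxide as water vapor, leaving purer copper behind.
\mathrm{CuO} + \mathrm{H_2} \longrightarrow \mathrm{Cu} + \mathrm{H_2O}\ (\text{vapor})
```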

Using this method is somewhat surprising, Horn said. In a traditionally manufactured copper object, the formation of water vapor would create high-pressure steam bubbles inside the material, causing it to blister and fail. In the additive process, on the other hand, the water vapor escapes layer by layer, so it is released much more effectively.

Although the technique has shown great promise, the scientists still have a ways to go to reduce the oxygen content enough to print an actual particle accelerator. But they have already succeeded in printing a few components, such as the klystron output cavity with internal cooling channels and a string of coupled cavities that could be used for particle acceleration.

Planning to team up with industry partners

The next phase of the project will be driven by the newly formed Consortium on the Properties of Additive-Manufactured Copper, which is led by Horn. The consortium currently has four active industry members – Siemens, GE Additive, RadiaBeam and Calabazas Creek Research, Inc. – with more on the way.

“This is a nice example of collaboration between an academic institution, a national lab and small and large businesses,” Gamzina said. “It would allow us to figure out this problem together. Our work has already allowed us to go from ‘just imagine, this is crazy’ to ‘we can do it’ in less than two years.”

This work was primarily funded by the Naval Sea Systems Command, as a Small Business Technology Transfer Program with RadiaBeam, SLAC and NCSU. Other SLAC contributors include Chris Pearson, Andy Nguyen, Arianna Gleason, Apurva Mehta, Kevin Stone, Chris Tassone and Johanna Weker. Additional contributions came from Christopher Ledford and Christopher Rock at NCSU and Pedro Frigola, Paul Carriere, Alexander Laurich, James Penney and Matt Heintz at RadiaBeam.

Citation: C. Ledford et al., Applied Sciences, 24 September 2019 (10.3390/app9193993)

For questions or comments, contact the SLAC Office of Communications at communications@slac.stanford.edu.

————————————————

SLAC is a vibrant multiprogram laboratory that explores how the universe works at the biggest, smallest and fastest scales and invents powerful tools used by scientists around the globe. With research spanning particle physics, astrophysics and cosmology, materials, chemistry, bio- and energy sciences and scientific computing, we help solve real-world problems and advance the interests of the nation.

SLAC is operated by Stanford University for the U.S. Department of Energy’s Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit energy.gov/science.

Top figure: Examples of 3D-printed copper components that could be used in a particle accelerator: X-band klystron output cavity with micro-cooling channels (at left) and a set of coupled accelerator cavities. (Christopher Ledford/North Carolina State University)

This is a reposting of my news feature, courtesy of SLAC National Accelerator Laboratory.

Measuring depression with wearables

Depression and emotional disorders can occur at any time of year — and do for millions of Americans. But feeling sad, lonely, anxious and depressed may seem particularly isolating during this holiday season, which is supposed to be a time of joy and celebration.

A team of Stanford researchers believes that one way to work towards ameliorating this suffering is to develop a better way to quantitatively measure stress, anxiety and depression.

“One of the biggest barriers for psychiatry in the field that I work in is that we don’t have objective tests. So the way that we assess mental health conditions and risks for them is by interview and asking you how do you feel,” said Leanne Williams, MD, a professor in psychiatry and behavioral sciences at Stanford, when she spoke at a Stanford Reunion Homecoming alumni celebration.

She added, “Imagine if you were diagnosing and treating diabetes without tests, without sensors. It’s really impossible to imagine, yet that is what we’re doing for mental health, right now.”

Instead, Stanford researchers want to collect and analyze data from wearable devices to quantitatively characterize mental states. The multidisciplinary team includes scientists from the departments of psychiatry, chemical engineering, bioengineering, computer science and global health.

Their first step was to use functional magnetic resonance imaging to map the brain activity of healthy controls compared to people with major depressive disorder who were imaged before and after they were treated with antidepressants.

The researchers identified six “biotypes” of depression, representing different ways brain circuitry can be disrupted to cause specific symptoms. They classified the biotypes as rumination, anxious avoidance, threat dysregulation, anhedonia, cognitive dyscontrol and inattention.

“For example, threat dysregulation is when the brain stays in alarm mode after acute stress and you feel heart racing, palpitations, sometimes panic attacks,” Williams said, “and that’s the brain not switching off from that mode.”

The team, which includes chemical engineer Zhenan Bao, PhD, then identified links between these different brain biotypes and various physiological differences, including changes in heart rate, skin conductance, electrolyte levels and hormone production. In particular, they found correlations between the biotypes and production of cortisol, a hormone strongly related to stress level.

Now, they are developing a wearable device — called MENTAID — that measures the physiological parameters continuously. Their current prototype can already measure cortisol levels in sweat in agreement with standard laboratory measurements. This was an incredibly challenging task due to the extremely low concentration and tiny molecular size of cortisol.

Going forward, they plan to validate their wearable device with clinical trials, including studies to assess its design and user interface. Ultimately, the researchers hope MENTAID will help prevent and treat mental illness — for example, by better predicting and evaluating patient response to specific antidepressants.

Photo by Sora Sagano

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

X-rays shed light on how anti-asthmatic drugs work

A new study uncovers how a critical protein binds to drugs used to treat asthma and other inflammatory diseases.

By studying the crystal structure of an important protein when it was bound to two drugs widely prescribed to treat asthma, an international team of scientists has discovered unique binding and signaling mechanisms that could lead to the development of more effective treatments for asthma and other inflammatory diseases.

The protein, called cysteinyl leukotriene receptor type 1 (CysLT1R), controls the dilation and inflammation of bronchial tubes in the lungs. It is therefore one of the primary targets for anti-asthma drugs, including the two drugs studied: zafirlukast, which acts on inflammatory cells in the lungs, and pranlukast, which reduces bronchospasms due to allergic reactions.

Using the Linac Coherent Light Source (LCLS) X-ray free-electron laser at the Department of Energy’s SLAC National Accelerator Laboratory, the team bombarded tiny crystals of CysLT1R-zafirlukast with X-ray pulses and determined the complex’s structure. They also used X-rays from the European Synchrotron Radiation Facility in Grenoble, France, to collect data on CysLT1R-pranlukast crystals. They published their findings in October in Science Advances.

The researchers gained a new understanding of how CysLT1R interacts with these anti-asthma drugs, observing surprising structural features and a new activation mechanism. For example, the study revealed major differences between how the two drugs attached to the binding site of the protein. In comparison to pranlukast, the zafirlukast molecule jammed open the entrance gate of CysLT1R’s binding site into a much wider configuration. This improved understanding of the protein suggests a new rationale for designing more effective anti-asthma drugs.

The study was performed by a collaboration of researchers at SLAC; Moscow Institute of Physics and Technology, Russia; Université de Sherbrooke, Canada; University of Southern California; Research Center Juelich, Germany; Université Grenoble Alpes-CEA-CNRS, France; Czech Academy of Sciences, Czech Republic; and Arizona State University.

Citation: Aleksandra Luginina et al., Science Advances, 09 October 2019 (10.1126/sciadv.aax2518).

For questions or comments, contact the SLAC Office of Communications at communications@slac.stanford.edu.

Image caption: Using X-rays, researchers uncovered details about two drugs widely prescribed to treat asthma: pranlukast (shown up top) and zafirlukast (shown beneath). Their results revealed major differences between how the two drugs attached to the binding site of the receptor protein. In comparison to pranlukast, the zafirlukast molecule jammed open the entrance gate of the protein’s binding site into a much wider configuration. (10.1126/sciadv.aax2518)

This is a reposting of my SLAC news story, courtesy of SLAC National Accelerator Laboratory.

Stanford researchers watch proteins assemble a protective shell around bacteria

Many bacteria and viruses are protected from the immune system by a thin, hard outer shell  — called an S-layer — composed of a single layer of identical protein building blocks.

Understanding how microbes form these crystalline S-layers and the role they play could be important to human health, including our ability to treat bacterial pathogens that cause serious salmonella, C. difficile and anthrax infections. For instance, researchers are working on ways to remove this shell to fight anthrax and other diseases.

Now, a Stanford study has observed for the first time proteins assembling themselves into an S-layer in a bacterium called Caulobacter crescentus, which is present in many fresh water lakes and streams.

Although this bacterium isn’t harmful to humans, it is a well-understood organism that is important to various cellular processes. Scientists know that the S-layer of Caulobacter crescentus is vital for the microbe’s survival and made up of protein building blocks called RsaA.

A recent news release describes how the research team from Stanford and SLAC National Accelerator Laboratory was able to watch this assembly, even though it happens on such a tiny scale:

“To watch it happen, the researchers stripped microbes of their S-layers and supplied them with synthetic RsaA building blocks labeled with chemicals that fluoresce in bright colors when stimulated with a particular wavelength of light.

Then they tracked the glowing building blocks with single-molecule microscopy as they formed a shell that covered the microbe in a hexagonal, tile-like pattern (shown in image above) in less than two hours. A technique called stimulated emission depletion (STED) microscopy allowed them to see structural details of the layer as small as 60 to 70 nanometers, or billionths of a meter, across – about one-thousandth the width of a human hair.”

The scientists were surprised by what they saw: the protein molecules spontaneously assembled themselves without the help of enzymes.

“It’s like watching a pile of bricks self-assemble into a two-story house,” said Jonathan Herrmann, a graduate student in structural biology at Stanford involved in the study, in the news release.

The researchers believe the protein building blocks are guided to form in specific regions of the cell surface by small defects and gaps within the S-layer. These naturally-occurring defects are inevitable because the flat crystalline sheet is trying to cover the constantly changing, three-dimensional shape of the bacterium, they said.

Among other applications, they hope their findings will offer potential new targets for drug treatments.

“Now that we know how they assemble, we can modify their properties so they can do specific types of work, like forming new types of hybrid materials or attacking biomedical problems,” said Soichi Wakatsuki, PhD, a professor of structural biology and photon science at SLAC, in the release.

Illustration by Greg Stewart/SLAC National Accelerator Laboratory

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Simplified analysis method could lead to improved prosthetics, a Stanford study suggests

Brain-machine interfaces (BMI) are an emerging field at the intersection of neuroscience and engineering that may improve the quality of life for amputees and individuals with paralysis. These patients are unable to get signals from their motor cortex — the part of the brain that normally controls movement — to their muscles.

Researchers are overcoming this disconnect by implanting in the brain small electrode arrays, which measure and decode the electrical activity of neurons in the motor cortex. The sensors’ electrical signals are transmitted via a cable to a computer and then translated into commands that control a computer cursor or prosthetic limb. Someday, scientists also hope to eliminate the cable, using wireless brain sensors to control prosthetics.

In order to realize this dream, however, they need to improve both the brain sensors and the algorithms used to decode the neural signals. Stanford electrical engineer Krishna Shenoy, PhD, and his collaborators are tackling this algorithm challenge, as described in a recent paper in Neuron.

Currently, most neuroscientists process their BMI data by looking for “spikes” of electrical activity from individual neurons. But this process requires time-consuming manual data sorting or computationally intensive automatic data sorting, both of which are prone to errors.

Manual data sorting will also become unrealistic for future technologies, which are expected to record thousands to millions of electrode channels compared to the several hundred channels recorded by today’s state-of-the-art sensors. For example, a dataset composed of 1,000 channels could take over 100 hours to hand sort, the paper says. In addition, neuroscientists would like to measure a greater brain volume for longer durations.

So, how can they decode all of this data?

Shenoy suggests simplifying the data analysis by eliminating spike sorting for applications that depend on the activity of neural populations rather than single neurons — such as brain-machine interfaces for prosthetics.

In their new study, the Stanford team investigated whether eliminating this spike sorting step distorted BMI data. Turning to statistics, they developed an analysis method that retains accuracy while extracting information from groups rather than individual neurons. Using experimental data from three previous animal studies, they demonstrated that their algorithms could accurately decode neural activity with minimal distortion — even when each BMI electrode channel measured several neurons. They also validated these experimental results with theory.
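To make the idea concrete, here is a toy Python sketch of a decoder that skips spike sorting entirely: it counts simple threshold crossings on each electrode channel and fits a linear map from those binned counts to cursor velocity. The data, channel counts and model are hypothetical stand-ins, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy-sized stand-in for raw voltage: 32 electrode channels, 400 time bins,
# 300 voltage samples per bin. No attempt is made to assign spikes to single neurons.
n_channels, n_bins, samples_per_bin = 32, 400, 300
voltage = rng.standard_normal((n_channels, n_bins, samples_per_bin))

# Count threshold crossings per channel per bin (a common multiunit measure).
thresholds = -3.0 * voltage.std(axis=(1, 2), keepdims=True)
crossings = (voltage < thresholds).sum(axis=2).T          # shape: (bins, channels)

# Decode cursor velocity from the binned counts with a simple linear map.
velocity = (crossings @ rng.standard_normal((n_channels, 2)) * 0.1
            + 0.5 * rng.standard_normal((n_bins, 2)))     # stand-in behavior
decoder = Ridge(alpha=10.0).fit(crossings, velocity)
predicted_velocity = decoder.predict(crossings)           # population-level decode, no spike sorting
```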

 “This study has a bit of a hopeful message in that observing activity in the brain turns out to be easier than we initially expected,” says Shenoy in a recent Stanford Engineering news release. The researchers hope their work will guide the design and use of new low-power, higher-density devices for clinical applications since their simplified analysis method reduces the storage and processing requirements.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Photo by geralt.

Genetic roots of psychiatric disorders clearer now thanks to improved techniques

Photo by LionFive

New technology and access to large databases are fundamentally changing how researchers investigate the genetic roots of psychiatric disorders.

“In the past, a lot of the conditions that people knew to be genetic were found to have a relatively simple genetic cause. For example, Huntington’s disease is caused by mutations in just one gene,” said Laramie Duncan, PhD, an assistant professor of psychiatry and behavioral sciences at Stanford. “But the situation is entirely different for psychiatric disorders, because there are literally thousands of genetic influences on every psychiatric disorder. That’s been one of the really exciting findings that’s come out of modern genetic studies.”

These findings are possible thanks to genome-wide association studies (GWAS), which test for millions of genetic variations across the genome to identify the genes involved in human disease.
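For a concrete sense of what a GWAS does, the toy Python sketch below runs the kind of single-variant test that a real study repeats for millions of variants, which is why only vanishingly small p-values (conventionally below 5 x 10^-8) are treated as genome-wide significant. The genotypes and counts here are made up for illustration and are not from any study mentioned in this post.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Made-up data for ONE variant: minor-allele count (0, 1 or 2) for 1,000 people
# with a disorder ("cases") and 1,000 people without ("controls").
cases = rng.integers(0, 3, size=1000)
controls = rng.integers(0, 3, size=1000)

# Allele-count contingency table: [minor alleles, major alleles] per group.
table = np.array([
    [cases.sum(), 2 * cases.size - cases.sum()],
    [controls.sum(), 2 * controls.size - controls.sum()],
])
chi2, p_value, _, _ = stats.chi2_contingency(table)

# A real GWAS repeats a test like this (usually a regression with covariates)
# for millions of variants, so only p < 5e-8 counts as genome-wide significant.
print(f"p = {p_value:.3g}, genome-wide significant: {p_value < 5e-8}")
```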

Duncan is the lead author of a recent commentary in Neuropsychopharmacology that explains how GWAS studies have demonstrated the inadequacy of previous methods. The paper also highlights new genetics findings for mental health.

Before the newer technologies and databases were available, scientists could only analyze a handful of genetic variations. So they had to guess that a specific genetic variation (a candidate) was associated with a disorder — based on what was known about the underlying biology — and then test their hypothesis. The body of research that has emerged from GWAS studies, however, shows that nearly all of these earlier “candidate gene study” results are incorrect for psychiatric disorders.

“There are actually so many genetic variations in the genome, it would have been almost impossible for people to guess correctly,” Duncan said. “It was a reasonable thing to do at the time. But we now have better technology that’s just as affordable as the old ways of doing things, so traditional candidate gene studies are no longer needed.”

Duncan said she began questioning the candidate gene studies as a graduate student. As she studied the scientific literature, she noticed a pattern in the data that suggested the results were wrong. “The larger studies tended to have null results and the very small studies tended to have positive results. And the only reason you’d see that pattern is if there was strong publication bias,” said Duncan. “Namely, positive results were published even if the study was small, and null results were only published when the study was very large.”

In contrast, the findings from the GWAS studies become more and more precise as the sample size increases, she explained, which demonstrates their reliability.

Using GWAS, researchers now know that thousands of variations distributed across the genome likely contribute to any given mental disorder. By using the statistical power gleaned from giant databases such as the UK Biobank or the Million Veterans Program, they have learned that most of these variations aren’t even in the regions of DNA that code for proteins, where scientists expected them to be. For example, only 1.1 percent of schizophrenia risk variants are in these coding regions.

“What’s so interesting about the modern genetic findings is that they are revealing entirely new clues about the underlying biology of psychiatric disorders,” Duncan said. “And this opens up lots of new avenues for treatment development.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Tips for discussing suicide on social media — A guide for youth

Photo by kaboompics

There are pros and cons to social media discussions of suicide. Social media can spread helpful knowledge and support, but it can also quickly disseminate harmful messaging and misinformation that puts vulnerable youth at risk.

New U.S. guidelines, called #chatsafe: A Young Person’s Guide for Communicating Safely Online About Suicide, aim to address this problem by offering evidence-based advice on how to constructively interact online about this difficult topic. The guidelines include specific language recommendations.

Vicki Harrison, MSW, the program director for the Stanford Center for Youth Mental Health and Wellbeing, discussed this new online education tool — developed in collaboration with a youth advisory panel — in a recent Healthier, Happy Lives Blog post.

“My hope is that these guidelines will create awareness about the fact that the way people talk about suicide actually matters an awful lot and doing so safely can potentially save lives. Yet we haven’t, up to this point, offered young people a lot of guidance for how to engage in constructive interactions about this difficult topic,” Harrison said in the blog post. “Hopefully, these guidelines will demystify the issue somewhat and offer practical suggestions that youth can easily apply in their daily interactions.”

A few main takeaways from the guidelines are below:

Before you post anything online about suicide

Remember that posts can go viral and they will never be completely erased. If you do post about suicide, carefully choose the language you use. For example, avoid words that describe suicide as criminal, sinful, selfish, brave, romantic or a solution to problems.

Also, monitor the comments for unsafe content like bullying, images or graphic descriptions of suicide methods, suicide pacts or goodbye notes. And include a link to prevention resources, like suicide help centers on social media platforms. From the guidelines:

“Indicate suicide is preventable, help is available, treatment can be successful, and that recovery is possible.”

Sharing your own thoughts, feelings or experience with suicidal behavior online

If you’re experiencing suicidal thoughts or feelings, try to reach out to a trusted adult, friend or professional mental health service before posting online. If you are feeling unsafe, call 911.

In general, think before you post: What do you hope to achieve by sharing your experience? How will sharing make you feel? Who will see your post and how will it affect them?

If you do post, share your experience in a safe and helpful way without graphic references, and consider including a trigger warning at the beginning to warn readers about potentially upsetting content.

Communicating about someone you know who is affected by suicidal thoughts, feelings or behaviors

If you’re concerned about someone, ask permission before posting or sharing content about them if possible. If someone you know has died by suicide, be sensitive to the feelings of their grieving family members and friends who might see your post. Also, avoid posting or sharing posts about celebrity suicides, because too much exposure to the suicide of well-known public figures can lead to copycat suicides.

Responding to someone who may be suicidal

Before you respond to someone who has indicated they may be at risk of suicide, check in with yourself: How are you feeling? Do you understand the role and limitations of the support you can provide?

If you do respond, always respond in private without judgement, assumptions or interruptions. Ask them directly if they are thinking of suicide. Acknowledge their feelings and let them know exactly why you are worried about them. Show that you care. And encourage them to seek professional help.

Memorial websites, pages and closed groups to honor the deceased

Setting up a page or group to remember someone who has died can be a good way to share stories and support, but it also raises concerns about copycat suicides. So make sure the memorial page or group is safe for others — by monitoring comments for harmful or unsafe content, quickly dealing with unsupportive comments and responding personally to participants in distress. Also outline the rules for participation.

Individuals in crisis can receive help from the Santa Clara County Suicide & Crisis Hotline at (855) 278-4204. Help is also available from anywhere in the United States via Crisis Text Line (text HOME to 741741) or the National Suicide Prevention Lifeline at (800) 273-8255. All three services are free, confidential and available 24 hours a day, seven days a week.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.