Eos: The Dawn of a New Era of Neutrino Detection

The bright yellow forklift crept forward, gracefully maneuvering the 20-ton steel tank through the entrance of Etcheverry Hall’s basement with only two millimeters to spare. This tight squeeze, executed with the expertise of Berkeley Lab riggers, was by design: the team wanted to maximize the size of the outer vessel of the Eos experiment.

“Named for the Titan goddess of dawn, Eos represents the dawn of a new era of neutrino detection technology,” says Gabriel Orebi Gann, a Berkeley Physics associate professor, Berkeley Lab faculty scientist, and the leader of Eos, an international collaboration of 24 institutions jointly led by UC Berkeley Physics and Berkeley Lab Nuclear Science.

Neutrinos are abundant, neutral, almost massless subatomic “ghost particles” created whenever atomic nuclei come together or break apart, including during fusion reactions at the core of the Sun and fission reactions inside nuclear reactors on Earth. Neutrinos are difficult to detect because they rarely interact with matter—about 100 trillion neutrinos harmlessly pass through the Earth and our bodies every second as if we don’t exist.

Berkeley researchers are using Eos as a testbed to explore advanced, hybrid technologies for detecting these mysterious particles.

“While at Berkeley, we’re characterizing the response of the detector using deployable optical and radioactive sources to understand how well our technologies are performing. And we’re developing detailed simulations of our detector performance to make sure they agree with the data,” says Berkeley Physics Postdoctoral Fellow Tanner Kaptanoglu. “Once we complete this validation, we hope to move Eos to a neutrino source for further testing.”

Ultimately, the team hopes to use their experimental results and simulations to design a much larger version of Eos, named Theia for the Titan goddess who was the mother of Eos, to realize an astonishing breadth of nuclear physics, high energy physics, and astrophysics research.

The Eos collaboration is also investigating whether these technologies could someday detect nuclear security threats, in partnership with the project’s funding sponsor, the National Nuclear Security Administration.

“One nonproliferation application is using the absence of a neutrino signature to demonstrate that stockpile verification experiments are not nuclear,” says Orebi Gann. “A second application is verifying that nuclear-powered marine vessels are operating correctly.”

Like a nesting doll, Eos comprises several detector layers. The inner layer is a 4-ton acrylic tank, filled in stages during testing with air, then deionized water, and finally a water-based liquid scintillator (WbLS).

The barrel of this inner vessel is surrounded by 168 fast, high-performance, 8-inch photomultiplier tubes (PMTs) with electromagnetic shielding. Attached above the vessel are two dozen 12-inch PMTs. And attached below it are three dozen 8-inch “front-row” PMTs, with another dozen 10-inch PMTs below them.

In January, this detector assembly was gently lowered inside the 20-ton steel outer vessel, with Berkeley Physics Assistant Project Scientist Leon Pickard operating the crane as other team members anxiously watched.

“The big lift was nerve-wracking. More than a year’s worth of work, dedication, and time from lots of people, and then I was lifting it all together into the outer tank,” describes Pickard. “I knew the Berkeley Lab riggers taught me well so I was confident, excited, and definitely nervous.”

The buffer region between the acrylic and steel vessels is filled with water, submerging the PMTs. The outermost Eos layer is a muon tracker system consisting of solid scintillator paddles with PMTs.

By combining several novel detector technologies, Eos measures both Cherenkov radiation and scintillation light simultaneously. Its main challenge is to separate the faint Cherenkov signal from the overwhelming scintillation signal.

When neutrinos pass through Eos, one very occasionally interacts with the detector’s water or scintillator, transferring its energy to a charged particle. This charged particle then travels through the medium, emitting light that is detected by the PMTs.

When the charged particle travels faster than the speed of light in the medium, it creates a photonic boom—similar to the sonic boom created by a plane traveling faster than the speed of sound. This cone of Cherenkov light travels in the direction of the charged particle, making a ring-like image that is detected by the PMTs. In contrast, the scintillation light emits equally in all directions. Reconstructing the pattern of PMT hits helps distinguish between the two signals.

In addition to topological differences, Cherenkov radiation is emitted almost instantaneously in a picosecond burst, whereas scintillation light lasts for nanoseconds. The PMTs detect this time difference.
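
The timing difference described above can be sketched in a few lines of code. This is a toy illustration only, not the Eos analysis: the one-nanosecond prompt window and the list of hit times are invented for the example.

```python
# Toy illustration of separating Cherenkov-like from scintillation-like
# PMT hits by arrival time. The 1 ns prompt window and the hit times are
# assumptions for illustration; the real Eos reconstruction is far more
# sophisticated.

def split_by_time(hit_times_ns, prompt_window_ns=1.0):
    """Classify PMT hits relative to the earliest hit.

    Hits inside the prompt window are Cherenkov-like (emitted in a
    picosecond burst); later hits are scintillation-like (emission
    lasts nanoseconds).
    """
    t0 = min(hit_times_ns)
    cherenkov = [t for t in hit_times_ns if t - t0 <= prompt_window_ns]
    scintillation = [t for t in hit_times_ns if t - t0 > prompt_window_ns]
    return cherenkov, scintillation

hits = [0.1, 0.3, 0.5, 2.4, 5.1, 8.7, 12.0]  # toy PMT hit times in ns
prompt, delayed = split_by_time(hits)
print(len(prompt), len(delayed))  # prints "3 4"
```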

Finally, the observable Cherenkov radiation has a longer, redder wavelength spectrum than the bluer scintillation light, which inspired the creation of dichroic photosensors that sort photons by wavelength. These dichroicons consist of an 8-inch PMT with a long-pass optical filter above the bulb and a crown of short-pass filters surrounding it. A dozen of the 8-inch, front-row PMTs attached to the bottom of the inner vessel are dichroicons. The concept for these novel photosensors was developed under the leadership of Eos collaborator Professor Joshua Klein, with Kaptanoglu playing a central role as part of his PhD thesis at the University of Pennsylvania.

A dichroicon guides light above a certain wavelength threshold, which is predominantly Cherenkov light, onto its central PMT. Light below that threshold passes through and is detected by the 10-inch, back-row PMTs.

“You effectively guide the Cherenkov light to specific PMTs and the scintillation light to other PMTs without losing light,” says Orebi Gann. “This gives us an additional way to separate Cherenkov and scintillation light.”
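
At its core, the dichroicon’s sorting logic is a wavelength threshold. A minimal sketch, assuming an illustrative 480 nm cutoff, which is an invented value rather than the actual filter specification used in Eos:

```python
# Toy model of a dichroicon's wavelength sorting. The 480 nm cutoff is
# an assumed illustrative value, not the filter spec used in Eos.

def route_photon(wavelength_nm, cutoff_nm=480.0):
    """Return which PMT a photon reaches in an idealized dichroicon.

    Long wavelengths (Cherenkov-dominated) pass the long-pass filter
    onto the central PMT; shorter wavelengths (scintillation-dominated)
    pass through to the back-row PMTs behind it.
    """
    return "central_pmt" if wavelength_nm >= cutoff_nm else "back_row_pmt"

print(route_photon(600))  # prints "central_pmt"  (red, Cherenkov-like)
print(route_photon(420))  # prints "back_row_pmt" (blue, scintillation-like)
```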

Another unusual aspect of Eos is its location.

“Although Eos is a Berkeley Physics project, the Nuclear Engineering department let us work in their space in the Etcheverry basement,” says Orebi Gann. “It’s unusual to work across departmental boundaries in this way. It’s a sign of how great and supportive Nuclear Engineering has been.”

Delivering the outer vessel into the building wasn’t the only tight squeeze—the Eos installation was temporally and physically tight.

Neutrino experiments often struggle to get their steel tanks manufactured, so everyone was excited last June when the tank headed towards Berkeley. Unfortunately, Orebi Gann received an email the next morning saying the tank was destroyed in a non-injury accident when the truck collided with an overpass in Saint Louis. After immediately calling her sponsor with the bad news, she mobilized.

“I started sweating. They would have killed our three-year project if we had to wait for the insurance claim,” says Orebi Gann. “Luckily, Berkeley Lab Nuclear Science Division Director Reiner Kruecken and others were really supportive, and we had enough contingency in the budget to buy another one. Within two weeks, we were under contract for a replacement. And the steel tank arrived three months later.”

Despite this delay, the collaboration assembled the detector, acquired and analyzed the data, and finished developing the detector simulations during the last year of funding.

“That’s the biggest setback you can have—your tank is crumpled. But with prudent planning, preparation, and scheduling agility, we were able to get right back on track,” says Pickard, also the installation manager.

In addition to Orebi Gann, Pickard, and Kaptanoglu, the Berkeley Physics installation team included former Project Scientists Zara Bagdasarian, Morgan Askins, and Guang Yang, Junior Specialist Sawyer Kaplan, graduate students Max Smiley, Ed Callaghan, and Martina Hebert, and undergraduate students Joseph Koplowitz, Ashley Rincon, and Hong Joo Ryoo. They were assisted by Berkeley Lab Staff Scientist Richard Bonventre, Senior Scientific Engineer Associate Joe Wallig, mechanical engineer Joseph Saba, and machinist James Daniel Boldi.

Given the tight timeline and limited space, another installation challenge was where to put all the detector components. Eos collaborators across the country coordinated to bring everything in at just the right time, fully tested and ready to go for the build.

“Some of the deliveries stayed temporarily at Berkeley Lab. Gabriel let us use her office to store hundreds of PMTs for a while. And the Nuclear Science folks were phenomenally accommodating, allowing us to store muon paddles, PMTs, and other parts on the Etcheverry mezzanine,” Pickard says. “We played a huge game of Tetris to get the detector put together.”

Once assembled, Eos acquired and analyzed data in three phases.

This March, it measured “first light” by flashing a blue LED into an optical fiber that points into the detector and then detecting this light with the PMTs. During these initial tests, the inner vessel contained air while the team verified that all the detector channels were working and the PMTs were measuring single photons.

Next, they filled the inner tank with optically pure deionized water and took data using various radioactive sources, optical sources, a laser injection system, and cosmic muons to fully evaluate detector performance. During this phase, Eos operated as a water Cherenkov detector.

“In a water Cherenkov detector, you have only Cherenkov light so you can do a precise directional reconstruction of the event. This helps with particle identification at high energies and background discrimination at low energies,” says Kaptanoglu, also the commissioning manager who helps identify the data needed. Among his other roles, he co-leads the simulations and analysis team with Marc Bergevin, a staff scientist at Lawrence Livermore National Lab.

Lastly, the researchers turned Eos into a hybrid detector by injecting into the water a water-based liquid scintillator supplied by Eos collaborator Minfang Yeh at Brookhaven National Laboratory. This allowed the team to explore the stability and neutrino detection capabilities of the novel scintillator. Adding WbLS improves energy and position reconstruction, but it makes reconstructing the event direction more difficult. A key goal was to show that Eos could still reconstruct the event direction with the WbLS—proving WbLS is a viable and effective neutrino detection medium.

“Our hybrid detector gives us the best of both worlds. We measure event directionality with the Cherenkov light, and we achieve excellent energy and position resolution and low detector thresholds using the scintillation light,” says Kaptanoglu. “But by combining Cherenkov and scintillation, we get additional benefits. For example, we can better tell what type of particle is interacting in our detector—whether it’s an electron, neutron, or gamma.”

Eos data analysis combines traditional likelihood and machine learning algorithms to reconstruct events. These novel reconstruction algorithms simultaneously use the Cherenkov and scintillation light, finding a ring of PMTs hit by the Cherenkov light on top of the much larger isotropic scintillation light background. The team also compared the two methods to see if machine learning gave them any advantages.

“Our goal was to show that we can do this hybrid reconstruction and that we can simulate it well to match with the experimental data,” says Kaptanoglu.

Their simulations entail microphysical modeling of every aspect of the Eos detector, characterizing in detail how the light is created, propagated, and detected. In addition to producing cool 3D renderings of the detector, Eos simulations will be used to help design future neutrino experiments.

“Our Monte Carlo simulations make predictions, and we compare those to our experimental data. That allows us to validate and improve the Monte Carlo simulations,” says Orebi Gann. “We can use that improved Monte Carlo to predict performance in other scenarios. It’s the step that allows us to go from the measurements we make at Berkeley to predicting how this technology would perform in different application scenarios.”

Although their three-year project recently concluded, Orebi Gann has applied for another three years of funding to extend Eos testing at Berkeley.

If funded, the team plans to explore different WbLS cocktails and various photosensor parameters. They are also considering upgrading to custom electronics.

During the additional three years, the team would also devise a plan for moving Eos to a neutrino source if they get follow-on funding. A likely location is the Spallation Neutron Source at Oak Ridge National Laboratory. This facility smashes protons into a heavy-metal target, producing neutrons along with a huge number of neutrinos.

“Moving Eos to the Spallation Neutron Source would allow us to demonstrate that we can see neutrinos with this technology, in a regime where it’s not as subject to the low energy backgrounds that make reactor neutrino or fission neutrino detection challenging. It’s a step on the road,” says Orebi Gann.

According to Orebi Gann, the next step after that would be to move Eos to a nuclear reactor to prove it can detect neutrino signals in an operational environment with all relevant backgrounds.

However, the ultimate plan is to use Eos experimental results and simulation models to guide how to design Theia-25 (or Theia-100), a massive hybrid neutrino detector with a 25-kiloton (or 100-kiloton) WbLS tank and tens of thousands of ultrafast photosensors.

Orebi Gann is a lead proponent of Theia, a Berkeley-led “experiment in the making.” If funded, Theia will likely reside at the site of the Deep Underground Neutrino Experiment (DUNE), in a former gold mine in South Dakota.

Theia has two potential areas of fundamental physics research. The first is understanding the neutrinos themselves.

“In particle physics, we don’t know of any fundamental property that differentiates neutrinos from antineutrinos, so they could in fact be incarnations of the same particle,” she explains. “Understanding their fundamental properties and how they differ could, for example, help explain how the Universe evolved, including offering insights into why it is dominated by matter.”

The second area of fundamental physics research uses the very weakly interacting neutrinos to probe the world around us.

“A large WbLS detector would enable us to look at solar neutrinos, supernova neutrinos, geo-neutrinos naturally produced in the Earth, and a vast array of other measurements,” says Orebi Gann. “For example, solar neutrinos would give us a real-time monitor of the Sun.”

“What’s interesting about Theia is the breadth of its program. I can go on for an hour about the physics of Theia,” Orebi Gann adds. “I think Eos, and the other R&D technology demonstrators around the world, will allow us to realize something like Theia, which would have a rich program of world-leading physics across nuclear physics, high energy physics, and astrophysics.”

This is a reposting of my magazine feature, courtesy of UC Berkeley’s 2024 Berkeley Physics Magazine.

Alum Pamela Caton likes to get her hands dirty

Pamela Caton (BA ’92) has always been a “maker.” As a young person, she took jewelry, machining, and programming classes. When her radio broke, her dad suggested, “Try to fix it. It’s already broken, so what’s the worst that can happen?” Working on hardware brings her joy, and she is especially drawn to multi-disciplinary projects.

Caton has primarily worked on micro-electromechanical systems (MEMS). MEMS engineers use the same tools that an electrical engineer would use to build silicon chips, but they build electronically-controlled mechanical structures with moving parts.

As an optical MEMS engineer at AEye, Caton is helping to develop a light detection and ranging (lidar) sensor for automotive and smart infrastructure applications. For example, self-driving car companies could use AEye’s sensors to detect and identify the features of an object on the freeway, allowing the car to understand if it needs to avoid the object—is it a brick or a plastic bag?

A lidar sensor measures the distance to a target by sending out a short laser pulse, reflecting it off an object, and recording the time between the outgoing and reflected light pulses. By doing an array of laser measurements, engineers create a big map of distance information. Caton works on developing and testing the MEMS mirrors used for laser scanning.
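
The round-trip timing described above reduces to a one-line formula: distance equals the speed of light times the round-trip time, divided by two (since the pulse travels out and back). A minimal sketch of that calculation:

```python
# Time-of-flight distance calculation at the heart of lidar ranging.
# Distance = (speed of light x round-trip time) / 2, because the pulse
# travels to the target and back.

C = 299_792_458.0  # speed of light in m/s

def lidar_distance_m(round_trip_s):
    """Target distance from the pulse's out-and-back travel time."""
    return C * round_trip_s / 2.0

# A pulse that returns after about 667 nanoseconds hit something
# roughly 100 meters away.
print(round(lidar_distance_m(667e-9)))  # prints 100
```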

Caton credits some of her success in industry to her Berkeley Physics training. “I have a really solid understanding of the fundamentals. Physics is a fantastic basis for all types of engineering,” she says. Her favorite classes were the advanced physics labs. “Professor Sumner Davis was fantastic. And I loved Physics 111 because it was so hands-on. You couldn’t get through the lab without understanding the theory, but you got your hands dirty too.”

This is a reposting of my magazine alumni story, courtesy of UC Berkeley’s 2024 Berkeley Physics Magazine.

Superconductivity and charge density waves caught intertwining at the nanoscale

The team aimed infrared laser pulses at the YBCO sample to switch off its superconducting state, then used X-ray laser pulses to illuminate the sample and examined the X-ray light scattered from it. Their results revealed that regions of superconductivity and charge density waves were arranged in unexpected ways. (Courtesy Giacomo Coslovich/SLAC National Accelerator Laboratory)

Room-temperature superconductors could transform everything from electrical grids to particle accelerators to computers – but before they can be realized, researchers need to better understand how existing high-temperature superconductors work.

Now, researchers from the Department of Energy’s SLAC National Accelerator Laboratory, the University of British Columbia, Yale University and others have taken a step in that direction by studying the fast dynamics of a material called yttrium barium copper oxide, or YBCO.

The team reports May 20 in Science that YBCO’s superconductivity is intertwined in unexpected ways with another phenomenon known as charge density waves (CDWs), or ripples in the density of electrons in the material. As the researchers expected, the CDWs got stronger when they turned off YBCO’s superconductivity. However, they were surprised to find the CDWs also suddenly became more spatially organized, suggesting superconductivity somehow fundamentally shapes the form of the CDWs at the nanoscale.

“A big part of what we don’t know is the relationship between charge density waves and superconductivity,” said Giacomo Coslovich, a staff scientist at the Department of Energy’s SLAC National Accelerator Laboratory, who led the study. “As one of the cleanest high-temperature superconductors that can be grown, YBCO offers us the opportunity to understand this physics in a very direct way, minimizing the effects of disorder.”

He added, “If we can better understand these materials, we can make new superconductors that work at higher temperatures, enabling many more applications and potentially addressing a lot of societal challenges – from climate change to energy efficiency to availability of fresh water.”

Observing fast dynamics

The researchers studied YBCO’s dynamics at SLAC’s Linac Coherent Light Source (LCLS) X-ray laser. They switched off superconductivity in the YBCO samples with infrared laser pulses, and then bounced X-ray pulses off those samples. For each shot of X-rays, the team pieced together a kind of snapshot of the CDWs’ electron ripples. By pasting those together, they recreated the CDWs’ rapid evolution.

“We did these experiments at the LCLS because we needed ultrashort pulses of X-rays, which can be made at very few places in the world. And we also needed soft X-rays, which have longer wavelengths than typical X-rays, to directly detect the CDWs,” said staff scientist and study co-author Joshua Turner, who is also a researcher at the Stanford Institute for Materials and Energy Sciences. “Plus, the people at LCLS are really great to work with.”

These LCLS runs generated terabytes of data, a challenge for processing. “Using many hours of supercomputing time, LCLS beamline scientists binned our huge amounts of data into a more manageable form so our algorithms could extract the feature characteristics,” said MengXing (Ketty) Na, a University of British Columbia graduate student and co-author on the project.

The team found that charge density waves within the YBCO samples became more correlated – that is, more electron ripples were periodic or spatially synchronized – after lasers switched off the superconductivity.

“Doubling the number of waves that are correlated with just a flash of light is quite remarkable, because light typically would produce the opposite effect. We can use light to completely disorder the charge density waves if we push too hard,” Coslovich said.

Blue areas are superconducting regions, and yellow areas represent charge density waves. After a laser pulse (red), the superconducting regions are rapidly turned off and the charge density waves react by rearranging their pattern, becoming more orderly and coherent. (Greg Stewart/SLAC National Accelerator Laboratory)

To explain these experimental observations, the researchers then modeled how regions of CDWs and superconductivity ought to interact given a variety of underlying assumptions about how YBCO works. For example, their initial model assumed that a uniform region of superconductivity when shut off with light would become a uniform CDW region – but of course that didn’t agree with their results.  

“The model that best fits our data so far indicates that superconductivity is acting like a defect within a pattern of the waves. This suggests that superconductivity and charge density waves like to be arranged in a very specific, nanoscopic way,” explained Coslovich. “They are intertwined orders at the length scale of the waves themselves.”

Illuminating the future

Coslovich said that being able to turn superconductivity off with light pulses was a significant advance, enabling observations on the time scale of less than a trillionth of a second, with major advantages over previous approaches.

“When you use other methods, like applying a high magnetic field, you have to wait a long time before making measurements, so CDWs rearrange around disorder and other phenomena can take place in the sample,” he said. “Using light allowed us to show this is an intrinsic effect, a real connection between superconductivity and charge density waves.”

The research team is excited to expand on this pivotal work, Turner said. First, they want to study how the CDWs become more organized when the superconductivity is shut off with light. They are also planning to tune the laser’s wavelength or polarization in future LCLS experiments in hopes of also using light to enhance, instead of quench, the superconducting state, so they could readily turn the superconducting state off and on.

“There is an overall interest in trying to do this with pulses of light on very fast timescales, because that can potentially lead to the development of superconducting, light-controlled devices for the new generation of electronics and computing,” said Coslovich. “Ultimately, this work can also help guide people who are trying to build room-temperature superconductors.”

This research is part of a collaboration between researchers from LCLS, SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL), UBC, Yale University, the Institut National de la Recherche Scientifique in Canada, North Carolina State University, Università Cattolica di Brescia and other institutions. This work was funded in part by the DOE Office of Science. LCLS and SSRL are DOE Office of Science user facilities.

Citation: Scott Wandel et al., Science, 20 May 2022 (10.1126/science.abd7213)

This is a reposting of my news feature courtesy of Stanford Linear Accelerator Laboratory.

Physicians re-evaluate use of lead aprons during X-rays

When you get routine X-rays of your teeth at the dentist’s office or a chest X-ray to determine if you have pneumonia, you expect the technologist to drape your pelvis in a heavy radioprotective apron. But that may not happen the next time you get X-rays.

There is growing evidence that shielding reproductive organs has negligible benefit. And because a protective cover can move out of place, using it can actually increase the radiation dose to the patient or impair the quality of diagnostic images.

Shielding testes and ovaries during X-ray imaging has been standard practice since the 1950s due to a fear of hereditary risks — namely, that the radiation would mutate germ cells and these mutations would be passed on to future generations. This concern was prompted by the genetic effects observed in studies of irradiated fruit flies. However, such hereditary effects have not been observed in humans.

“We now understand that the radiosensitivity of ovaries and testes is extremely low. In fact, they are some of the least radiation-sensitive organs — much lower than the colon, stomach, bone marrow and breast tissue,” said Donald Frush, MD, a professor of pediatric radiology at Lucile Packard Children’s Hospital Stanford.

In addition, he explained, technology improvements have dramatically reduced the radiation dose that a patient receives during standard X-ray films, computerized tomography scans and other radiographic procedures. For example, a review paper finds that the radiation dose to ovaries and testes dropped by 96% from 1959 to 2012 for equivalent X-ray exams of the pelvis without shielding.

But even if the radioprotective shielding may have minimal — or no — benefit, why not use it just to be safe?

The main problem is that so-called lead aprons — which aren’t made of lead anymore — are difficult to position accurately, Frush said. Even following shielding guidelines, the position of the ovaries is so variable that they may not be completely covered. Also, the protective shield can obscure the target anatomy. This forces doctors to live with poor-quality diagnostic information or to repeat the X-ray scan, thus increasing the radiation dose given to the patient, he said.

Positioning radioprotective aprons is particularly troublesome for small children.

“Kids kick their legs up and the shield moves while the technologists are stepping out of the room to take the exposure and can’t see them. So the X-rays have to be retaken, which means additional dose to the kids,” Frush said.

Another issue derives from something called automatic exposure control, a technology that optimizes image quality by adjusting the X-ray machine’s radiation output based on what is in the imaging field. Overall, automatic exposure control greatly improves the quality of the X-ray images and enables a lower dose to be used.  

However, if positioning errors cause the radioprotective apron to enter the imaging field, the radiographic system increases the magnitude and length of its output, in order to penetrate the shield.

“Automatic exposure control is a great tool, but it needs to be used appropriately. It’s not recommended for small children, particularly in combination with radioprotective shielding,” said Frush.

With these concerns in mind, many technologists, medical physicists and radiologists now recommend discontinuing the routine practice of shielding reproductive organs during X-ray imaging. However, they support giving technologists discretion to provide shielding in certain circumstances, such as on parental request. This position is supported by several groups, including the American Association of Physicists in Medicine, the National Council on Radiation Protection and Measurements, and the American College of Radiology.

These new guidelines are also supported by the Image Gently Alliance, a coalition of health care organizations dedicated to promoting safe pediatric imaging, which is chaired by Frush. And they are being adopted by Stanford hospitals.

“Lucile Packard Children’s revised policy on gonadal shielding has been formalized by the department,” he said. “There is still some work to do with education, including training providers and medical students to have a dialogue with patients and caregivers. But so far, pushback by patients has been much less than expected.”

Looking beyond the issue of shielding, Frush advised parents to be open to lifesaving medical imaging for their children, while also advocating for its best use. He said:

“Ask the doctor who is referring the test: Is it the right study? Is it the right thing to do now, or can it wait? Ask the imaging facility: Are you taking into account the age and size of my child to keep the radiation dose reasonable?”


This is a reposting of my Scope story, courtesy of Stanford School of Medicine.

Nerve interface provides intuitive and precise control of prosthetic hand

Current state-of-the-art designs for a multifunctional prosthetic hand are restricted in functionality by the signals used to control it. A promising source for prosthetic motor control is the peripheral nerves that run from the spinal column down the arm, since they still function after an upper limb amputation. But building a direct interface to the peripheral nervous system is challenging, because these nerves and their electrical signals are incredibly small. Current interface techniques are hindered by signal amplitude and stability issues, so they provide amputees with only a limited number of independent movements. 

Now, researchers from the University of Michigan have developed a novel regenerative peripheral nerve interface (RPNI) that relies on tiny muscle grafts to amplify the peripheral nerve signals, which are then translated into motor control signals for the prosthesis using standard machine learning algorithms. The research team has demonstrated real-time, intuitive, finger-level control of a robotic hand for amputees, as reported in a recent issue of Science Translational Medicine.

“We take a small graft from one of the patient’s quadricep muscles, or from the amputated limb if they are doing the amputation right then, and wrap just the right amount of muscle around the nerve. The nerve then regrows into the muscle to form new neuromuscular junctions,” says Cindy Chestek, an associate professor of biomedical engineering at the University of Michigan and a senior author on the study. “This creates multiple innervated muscle fibers that are controlled by the small nerve and that all fire at the same time to create a much larger electrical signal—10 or 100 times bigger than you would record from inside or around a nerve. And we do this for several of the nerves in the arm.”

This surgical technique was initially developed by co-researcher Paul Cederna, a plastic surgeon at the University of Michigan, to treat phantom limb pain caused by neuromas. A neuroma is a painful growth of nerve cells that forms at the site of the amputation injury. Over 200 patients have undergone the surgery to treat neuroma pain.

“The impetus for these surgeries was to give nerve fibers a target, or a muscle, to latch on to so neuromas didn’t develop,” says Gregory Clark, an associate professor in biomedical engineering from the University of Utah who was not involved in the study. “Paul Cederna was insightful enough to realize these reinnervated mini-muscles also provided a wonderful opportunity to serve as signal sources for dexterous, intuitive control. That means there’s a ready population that could benefit from this approach.”

The Michigan team validated their technique with studies involving four participants with upper extremity amputations who had previously undergone RPNI surgery to treat neuroma pain. Each participant had a total of 3 to 9 muscle grafts implanted on nerves. Initially, the researchers measured the signals from these RPNIs using fine-wire, nickel-alloy electrodes, which were inserted through the skin into the grafts using ultrasound guidance. They measured high-amplitude electromyography signals, representing the electrical activity of the mini-muscles, when the participants imagined they were moving the fingers of their phantom hand. The ultrasound images showed the participants’ thoughts caused the associated specific mini-muscles to contract. These proof-of-concept measurements, however, were limited by the discomfort and movement of the percutaneous electrodes that pierced the skin.

Next, the team surgically implanted permanent electrodes into the RPNIs of two of the participants. They used a type of electrode commonly used in battery-powered diaphragm pacing systems, which electrically stimulate the diaphragm muscles and nerves of patients with chronic respiratory insufficiency to help regulate their breathing. These implanted electrodes allowed the researchers to measure even larger electrical signals—week after week from the same participant—by simply plugging into the connector. After 5 to 15 minutes of calibration data were collected, machine learning algorithms translated the electrical signals into movement intent, which was then passed on to a prosthetic hand. Both participants were able to intuitively complete tasks like stacking physical blocks without any training—it worked on the first try, just by thinking about it, Chestek says. Another key result is that the algorithm kept working even 300 days later.
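The calibration-then-decode step can be sketched in a few lines. This is a minimal illustration assuming synthetic EMG data and a plain least-squares decoder; the study's actual features and algorithms are not specified here, and every name and number below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def emg_features(emg, win=100):
    """Mean absolute value of each channel over non-overlapping windows."""
    n_win = emg.shape[0] // win
    trimmed = emg[:n_win * win].reshape(n_win, win, emg.shape[1])
    return np.abs(trimmed).mean(axis=1)

# Synthetic calibration session: 9 RPNI channels (one per muscle graft),
# intent represented as 2 finger velocities. All values are invented.
n_samples, n_channels = 60_000, 9          # ~1 minute at 1 kHz
true_map = rng.normal(size=(n_channels, 2))
emg = rng.normal(size=(n_samples, n_channels))
features = emg_features(emg)
intent = features @ true_map + 0.01 * rng.normal(size=(features.shape[0], 2))

# Calibration: least-squares fit from EMG features to movement intent.
decoder, *_ = np.linalg.lstsq(features, intent, rcond=None)

# Online use: decode feature vectors into finger velocities.
decoded = features @ decoder
```

A linear fit is only a stand-in for whatever models the team actually used; the point is the shape of the pipeline: features from each RPNI channel in, movement intent for the prosthetic hand out.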

“The ability to use the determined relationship between electrical activity and intended movement for a very long period of time has important practical consequences for the user of a prosthesis, because the last thing they want is to rely on a hand that is not reliable,” Clark says.

Although this clinical trial is ongoing, the Michigan team is now investigating how to replace the connector and computer card with an implantable device that communicates wirelessly, so patients can walk around in the real world. The researchers are also working to incorporate sensory feedback through the regenerative peripheral nerve interface. Their ultimate goal is for patients to feel like their prosthetic hand is alive, taking over the space in the brain where their natural hand used to be.

“People are excited because this is a novel approach that will provide high quality, intuitive, and very specific signals that can be used in a very straightforward, natural way to provide high degrees of dexterous control that are also very stable and last a long time,” Clark says.

Read the article in Science Translational Medicine.

Illustration of multiple regenerative peripheral nerve interfaces (RPNIs) created for each available nerve of an amputee. Fine-wire electrodes were embedded into the participant’s RPNI muscles during the readout session. Credit: Philip Vu/University of Michigan; Science Translational Medicine doi: 10.1126/scitranslmed.aay2857

This is a reposting of my news brief, courtesy of Materials Research Society.

Could the next generation of particle accelerators come out of the 3D printer?

SLAC scientists and collaborators are developing 3D copper printing techniques to build accelerator components.

Imagine being able to manufacture complex devices whenever you want and wherever you are. It would create unforeseen possibilities even in the most remote locations, such as building spare parts or new components on board a spacecraft. 3D printing, or additive manufacturing, could be a way of doing just that. All you would need is the device materials, a printer and a computer that controls the process.

Diana Gamzina, a staff scientist at the Department of Energy’s SLAC National Accelerator Laboratory; Timothy Horn, an assistant professor of mechanical and aerospace engineering at North Carolina State University; and researchers at RadiaBeam Technologies dream of developing the technique to print particle accelerators and vacuum electronic devices for applications in medical imaging and treatment, the electrical grid, satellite communications, defense systems and more.

In fact, the researchers are closer to making this a reality than you might think.

“We’re trying to print a particle accelerator, which is really ambitious,” Gamzina said. “We’ve been developing the process over the past few years, and we can already print particle accelerator components today. The whole point of 3D printing is to make stuff no matter where you are without a lot of infrastructure. So you can print your particle accelerator on a naval ship, in a small university lab or somewhere very remote.”

3D printing can be done with liquids and powders of numerous materials, but there aren’t any well-established processes for 3D printing ultra-high-purity copper and its alloys – the materials Gamzina, Horn and their colleagues want to use. Their research focuses on developing the method.

Indispensable copper

Accelerators boost the energy of particle beams, and vacuum electronic devices are used in amplifiers and generators. Both rely on components that can be easily shaped and conduct heat and electricity extremely well. Copper has all of these qualities and is therefore widely used.

Traditionally, each copper component is machined individually and bonded with others using heat to form complex geometries. This manufacturing technique is incredibly common, but it has its disadvantages.

“Brazing together multiple parts and components takes a great deal of time, precision and care,” Horn said. “And any time you have a joint between two materials, you add a potential failure point. So, there is a need to reduce or eliminate those assembly processes.”

Potential of 3D copper printing

3D printing of copper components could offer a solution.

It works by depositing thin layers of material on top of one another, slowly building up specific shapes and objects. In Gamzina’s and Horn’s work, the material used is extremely pure copper powder.

The process starts with a 3D design, or “construction manual,” for the object. Controlled by a computer, the printer spreads a few-micron-thick layer of copper powder on a platform. It then lowers the platform about 50 microns – half the thickness of a human hair – and spreads a second copper layer on top of the first, heats it with an electron beam to about 2,000 degrees Fahrenheit and welds it with the first layer. This process repeats over and over until the entire object has been built.
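The cycle described above can be sketched as a simple loop. The step names and numbers mirror the paragraph, but this is only an illustration of the spread/melt/lower sequence, not a real printer controller.

```python
# Illustrative layer thickness from the article (~half a human hair).
LAYER_THICKNESS_UM = 50

def build_plan(part_height_mm):
    """Return the sequence of printer steps needed to build a part."""
    n_layers = int(round(part_height_mm * 1000 / LAYER_THICKNESS_UM))
    steps = []
    for layer in range(n_layers):
        steps.append(f"spread copper powder (layer {layer + 1})")
        steps.append("melt cross-section with electron beam (~2,000 F)")
        steps.append(f"lower platform by {LAYER_THICKNESS_UM} um")
    return steps

# A hypothetical 10 mm tall part takes 200 spread/melt/lower cycles.
plan = build_plan(10)
```

Because the shape of each melted cross-section is read from the 3D design, the same loop builds a simple block or a klystron cavity with internal cooling channels; only the per-layer pattern changes.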

3D printing of a layer of a device known as a traveling wave tube using copper powder. (Christopher Ledford/North Carolina State University)

The amazing part: no specific tooling, fixtures or molds are needed for the procedure. As a result, 3D printing eliminates design constraints inherent in traditional fabrication processes and allows the construction of objects that are uniquely complex.

“The shape doesn’t really matter for 3D printing,” said SLAC staff scientist Chris Nantista, who designs and tests 3D-printed samples for Gamzina and Horn. “You just program it in, start your system and it can build up almost anything you want. It opens up a new space of potential shapes.”

The team took advantage of that, for example, when building part of a klystron – a specialized vacuum tube that amplifies radiofrequency signals – with internal cooling channels at NCSU. Building it in one piece improved the device’s heat transfer and performance.

Compared to traditional manufacturing, 3D printing is also less time consuming and could translate into cost savings of up to 70%, Gamzina said.

A challenging technique

But printing copper devices has its own challenges, as Horn, who began developing the technique with collaborators at RadiaBeam years ago, knows. One issue is finding the right balance between the thermal and electrical properties and strengths of the printed objects. The biggest hurdle for manufacturing accelerators and vacuum electronics, though, is that these high-vacuum devices require extremely high quality and pure materials to avoid part failures, such as cracking or vacuum leaks.

The research team tackled these challenges by first improving the material’s surface quality, using finer copper powder and varying the way they fused layers together. However, using finer copper powder led to the next challenge. It allowed more oxygen to attach to the copper powder, increasing the oxide in each layer and making the printed objects less pure.

So, Gamzina and Horn had to find a way to reduce the oxygen content in their copper powders. The method they came up with, which they recently reported in Applied Sciences, relies on hydrogen gas to bind oxygen into water vapor and drive it out of the powder.

Using this method is somewhat surprising, Horn said. In a traditionally manufactured copper object, the formation of water vapor would create high-pressure steam bubbles inside the material, causing it to blister and fail. In the additive process, on the other hand, the water vapor escapes from each thin layer as it is printed, releasing it much more effectively.

Although the technique has shown great promise, the scientists still have a ways to go to reduce the oxygen content enough to print an actual particle accelerator. But they have already succeeded in printing a few components, such as the klystron output cavity with internal cooling channels and a string of coupled cavities that could be used for particle acceleration.

Planning to team up with industry partners

The next phase of the project will be driven by the newly formed Consortium on the Properties of Additive-Manufactured Copper, which is led by Horn. The consortium currently has four active industry members – Siemens, GE Additive, RadiaBeam and Calabazas Creek Research, Inc. – with more on the way.

“This is a nice example of collaboration between an academic institution, a national lab and small and large businesses,” Gamzina said. “It would allow us to figure out this problem together. Our work has already allowed us to go from ‘just imagine, this is crazy’ to ‘we can do it’ in less than two years.”

This work was primarily funded by the Naval Sea Systems Command, as a Small Business Technology Transfer Program with RadiaBeam, SLAC, and NCSU. Other SLAC contributors include Chris Pearson, Andy Nguyen, Arianna Gleason, Apurva Mehta, Kevin Stone, Chris Tassone and Johanna Weker. Additional contributions came from Christopher Ledford and Christopher Rock at NCSU and Pedro Frigola, Paul Carriere, Alexander Laurich, James Penney and Matt Heintz at RadiaBeam.

Citation: C. Ledford et al., Applied Sciences, 24 September 2019 (10.3390/app9193993)

For questions or comments, contact the SLAC Office of Communications at communications@slac.stanford.edu.

————————————————

SLAC is a vibrant multiprogram laboratory that explores how the universe works at the biggest, smallest and fastest scales and invents powerful tools used by scientists around the globe. With research spanning particle physics, astrophysics and cosmology, materials, chemistry, bio- and energy sciences and scientific computing, we help solve real-world problems and advance the interests of the nation.

SLAC is operated by Stanford University for the U.S. Department of Energy’s Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit energy.gov/science.

Top figure: Examples of 3D-printed copper components that could be used in a particle accelerator: X-band klystron output cavity with micro-cooling channels (at left) and a set of coupled accelerator cavities. (Christopher Ledford/North Carolina State University)

This is a reposting of my news feature, courtesy of SLAC National Accelerator Laboratory.

Measuring depression with wearables

Depression and emotional disorders can occur at any time of year — and do for millions of Americans. But feeling sad, lonely, anxious and depressed may seem particularly isolating during this holiday season, which is supposed to be a time of joy and celebration.

A team of Stanford researchers believes that one way to work towards ameliorating this suffering is to develop a better way to quantitatively measure stress, anxiety and depression.

“One of the biggest barriers for psychiatry in the field that I work in is that we don’t have objective tests. So the way that we assess mental health conditions and risks for them is by interview and asking you how do you feel,” said Leanne Williams, MD, a professor in psychiatry and behavioral sciences at Stanford, when she spoke at a Stanford Reunion Homecoming alumni celebration.

She added, “Imagine if you were diagnosing and treating diabetes without tests, without sensors. It’s really impossible to imagine, yet that is what we’re doing for mental health, right now.”

Instead, Stanford researchers want to collect and analyze data from wearable devices to quantitatively characterize mental states. The multidisciplinary team includes scientists from the departments of psychiatry, chemical engineering, bioengineering, computer science and global health.

Their first step was to use functional magnetic resonance imaging to map the brain activity of healthy controls compared to people with major depressive disorder who were imaged before and after they were treated with antidepressants.

The researchers identified six “biotypes” of depression, representing different ways brain circuitry can be disrupted to cause specific symptoms. They classified the biotypes as rumination, anxious avoidance, threat dysregulation, anhedonia, cognitive dyscontrol and inattention.

“For example, threat dysregulation is when the brain stays in alarm mode after acute stress and you feel heart racing, palpitations, sometimes panic attacks,” Williams said, “and that’s the brain not switching off from that mode.”

The team, which includes chemical engineer Zhenan Bao, PhD, then identified links between these different brain biotypes and various physiological differences, including changes in heart rate, skin conductance, electrolyte levels and hormone production. In particular, they found correlations between the biotypes and production of cortisol, a hormone strongly related to stress level.
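The kind of link reported here is, at its simplest, a correlation between a brain-derived score and a physiological measurement. A minimal sketch on invented data follows; the values, scales, and the linear relationship are all hypothetical, chosen only to show the shape of the analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: a per-subject "threat dysregulation" biotype score
# and a matching cortisol measurement (units and scale are invented).
n_subjects = 200
biotype_score = rng.normal(size=n_subjects)
cortisol = 12.0 + 3.0 * biotype_score + rng.normal(scale=1.0, size=n_subjects)

# Pearson correlation between the brain biotype and hormone production,
# the kind of biotype-physiology link described in the text.
r = np.corrcoef(biotype_score, cortisol)[0, 1]
```

A strong correlation like this is what would let a wearable's physiological readings stand in for the fMRI-derived biotype in daily life.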

Now, they are developing a wearable device — called MENTAID — that measures the physiological parameters continuously. Their current prototype can already measure cortisol levels in sweat in agreement with standard laboratory measurements. This was an incredibly challenging task due to the extremely low concentration and tiny molecular size of cortisol.

Going forward, they plan to validate their wearable device with clinical trials, including studies to assess its design and user interface. Ultimately, the researchers hope MENTAID will help prevent and treat mental illness — for example, by better predicting and evaluating patient response to specific anti-depressants.

Photo by Sora Sagano

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

X-rays shed light on how anti-asthmatic drugs work

A new study uncovers how a critical protein binds to drugs used to treat asthma and other inflammatory diseases.

By studying the crystal structure of an important protein when it was bound to two drugs widely prescribed to treat asthma, an international team of scientists has discovered unique binding and signaling mechanisms that could lead to the development of more effective treatments for asthma and other inflammatory diseases.

The protein, called cysteinyl leukotriene receptor type 1 (CysLT1R), controls the dilation and inflammation of bronchial tubes in the lungs. It is therefore one of the primary targets for anti-asthma drugs, including the two drugs studied: zafirlukast, which acts on inflammatory cells in the lungs, and pranlukast, which reduces bronchospasms due to allergic reactions.

Using the Linac Coherent Light Source (LCLS) X-ray free-electron laser at the Department of Energy’s SLAC National Accelerator Laboratory, the team bombarded tiny crystals of CysLT1R-zafirlukast with X-ray pulses and measured its structure. They also used X-rays from the European Synchrotron Radiation Facility in Grenoble, France, to collect data about CysLT1R-pranlukast crystals. They published their findings in October in Science Advances.

The researchers gained a new understanding of how CysLT1R interacts with these anti-asthma drugs, observing surprising structural features and a new activation mechanism. For example, the study revealed major differences between how the two drugs attached to the binding site of the protein. In comparison to pranlukast, the zafirlukast molecule jammed open the entrance gate of CysLT1R’s binding site into a much wider configuration. This improved understanding of the protein suggests a new rationale for designing more effective anti-asthma drugs.

The study was performed by a collaboration of researchers at SLAC; Moscow Institute of Physics and Technology, Russia; Université de Sherbrooke, Canada; University of Southern California; Research Center Juelich, Germany; Université Grenoble Alpes-CEA-CNRS, France; Czech Academy of Sciences, Czech Republic; and Arizona State University.

Citation: Aleksandra Luginina et al., Science Advances, 09 October 2019 (10.1126/sciadv.aax2518).

For questions or comments, contact the SLAC Office of Communications at communications@slac.stanford.edu.

Image caption: Using X-rays, researchers uncovered details about two drugs widely prescribed to treat asthma: pranlukast (shown up top) and zafirlukast (shown beneath). Their results revealed major differences between how the two drugs attached to the binding site of the receptor protein. In comparison to pranlukast, the zafirlukast molecule jammed open the entrance gate of protein’s binding site into a much wider configuration. (10.1126/sciadv.aax2518)

This is a reposting of my SLAC news story, courtesy of SLAC National Accelerator Laboratory.

Stanford researchers watch proteins assemble a protective shell around bacteria

Many bacteria and viruses are protected from the immune system by a thin, hard outer shell — called an S-layer — composed of a single layer of identical protein building blocks.

Understanding how microbes form these crystalline S-layers and the role they play could be important to human health, including our ability to treat bacterial pathogens that cause serious salmonella, C. difficile and anthrax infections. For instance, researchers are working on ways to remove this shell to fight anthrax and other diseases.

Now, a Stanford study has observed for the first time proteins assembling themselves into an S-layer in a bacterium called Caulobacter crescentus, which is present in many fresh water lakes and streams.

Although this bacterium isn’t harmful to humans, it is a well-understood model organism used to study various cellular processes. Scientists know that the S-layer of Caulobacter crescentus is vital for the microbe’s survival and is made up of protein building blocks called RsaA.

A recent news release describes how the research team from Stanford and SLAC National Accelerator Laboratory were able to watch this assembly, even though it happens on such a tiny scale:

“To watch it happen, the researchers stripped microbes of their S-layers and supplied them with synthetic RsaA building blocks labeled with chemicals that fluoresce in bright colors when stimulated with a particular wavelength of light.

Then they tracked the glowing building blocks with single-molecule microscopy as they formed a shell that covered the microbe in a hexagonal, tile-like pattern (shown in image above) in less than two hours. A technique called stimulated emission depletion (STED) microscopy allowed them to see structural details of the layer as small as 60 to 70 nanometers, or billionths of a meter, across – about one-thousandth the width of a human hair.”

The scientists were surprised by what they saw: the protein molecules spontaneously assembled themselves without the help of enzymes.

“It’s like watching a pile of bricks self-assemble into a two-story house,” said Jonathan Herrmann, a graduate student in structural biology at Stanford involved in the study, in the news release.

The researchers believe the protein building blocks are guided to form in specific regions of the cell surface by small defects and gaps within the S-layer. These naturally-occurring defects are inevitable because the flat crystalline sheet is trying to cover the constantly changing, three-dimensional shape of the bacterium, they said.

Among other applications, they hope their findings will offer potential new targets for drug treatments.

“Now that we know how they assemble, we can modify their properties so they can do specific types of work, like forming new types of hybrid materials or attacking biomedical problems,” said Soichi Wakatsuki, PhD, a professor of structural biology and photon science at SLAC, in the release.

Illustration by Greg Stewart/SLAC National Accelerator Laboratory

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Simplified analysis method could lead to improved prosthetics, a Stanford study suggests

Brain-machine interfaces (BMIs) are an emerging technology at the intersection of neuroscience and engineering that may improve the quality of life for amputees and individuals with paralysis. These patients are unable to get signals from their motor cortex — the part of the brain that normally controls movement — to their muscles.

Researchers are overcoming this disconnect by implanting in the brain small electrode arrays, which measure and decode the electrical activity of neurons in the motor cortex. The sensors’ electrical signals are transmitted via a cable to a computer and then translated into commands that control a computer cursor or prosthetic limb. Someday, scientists also hope to eliminate the cable, using wireless brain sensors to control prosthetics.

In order to realize this dream, however, they need to improve both the brain sensors and the algorithms used to decode the neural signals. Stanford electrical engineer Krishna Shenoy, PhD, and his collaborators are tackling this algorithm challenge, as described in a recent paper in Neuron.

Currently, most neuroscientists process their BMI data by looking for “spikes” of electrical activity from individual neurons. But this process requires time-consuming manual sorting or computationally intensive automatic sorting, both of which are prone to errors.

Manual data sorting will also become unrealistic for future technologies, which are expected to record thousands to millions of electrode channels compared to the several hundred channels recorded by today’s state-of-the-art sensors. For example, a dataset composed of 1,000 channels could take over 100 hours to hand sort, the paper says. In addition, neuroscientists would like to measure a greater brain volume for longer durations.

So, how can they decode all of this data?

Shenoy suggests simplifying the data analysis by eliminating spike sorting for applications that depend on the activity of neural populations rather than single neurons — such as brain-machine interfaces for prosthetics.

In their new study, the Stanford team investigated whether eliminating this spike sorting step distorted BMI data. Turning to statistics, they developed an analysis method that retains accuracy while extracting information from groups rather than individual neurons. Using experimental data from three previous animal studies, they demonstrated that their algorithms could accurately decode neural activity with minimal distortion — even when each BMI electrode channel measured several neurons. They also validated these experimental results with theory.
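The core simplification (counting threshold crossings on each electrode channel instead of sorting spikes to individual neurons) can be sketched as follows. The threshold value and the synthetic trace are illustrative; real systems typically set the threshold relative to each channel's measured noise level, and the paper's actual analysis is more involved.

```python
import numpy as np

rng = np.random.default_rng(2)

def threshold_crossings(voltage, thresh=-3.5):
    """Count negative threshold crossings in one channel's voltage trace.

    No spike sorting: every crossing is kept, regardless of which of the
    several neurons near the electrode produced it (multiunit activity).
    """
    below = voltage < thresh
    # A crossing is a sample below threshold whose predecessor was above.
    return int(np.sum(below[1:] & ~below[:-1]))

# Synthetic check: low-amplitude noise with three injected spike-like dips.
trace = 0.5 * rng.normal(size=1000)
for t in (100, 400, 800):
    trace[t] = -6.0
counts = threshold_crossings(trace)
```

Binned crossing counts like these, pooled across all channels, are the population-level signal the decoder works from, which is why individual-neuron identities can be dropped without distorting the result.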

“This study has a bit of a hopeful message in that observing activity in the brain turns out to be easier than we initially expected,” Shenoy says in a recent Stanford Engineering news release. The researchers hope their work will guide the design and use of new low-power, higher-density devices for clinical applications, since their simplified analysis method reduces storage and processing requirements.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Photo by geralt.