Shedding New Light on Luminous Blue Variable Stars: 3D Simulations Disperse Some of the Mystery Surrounding Massive Stars

A snapshot from a simulation of the churning gas that blankets a star 80 times the Sun’s mass. Intense light from the star’s core pushes against helium-rich pockets in the star’s exterior, launching material outward in spectacular geyser-like eruptions. The solid colors denote radiation intensity, with bluer colors representing regions of higher intensity. The translucent purplish colors represent the gas density, with lighter colors denoting denser regions. Image: Joseph Insley, Argonne Leadership Computing Facility.

Three-dimensional (3D) simulations run at two of the U.S. Department of Energy’s national laboratory supercomputing facilities and the National Aeronautics and Space Administration (NASA) have provided new insights into the behavior of a unique class of celestial bodies known as luminous blue variables (LBVs) — rare, massive stars that can shine up to a million times brighter than the Sun.

Astrophysicists are intrigued by LBVs because their luminosity and size dramatically fluctuate on a timescale of months. They also periodically undergo giant eruptions, violently ejecting gaseous material into space. Although scientists have long observed the variability of LBVs, the physical processes causing their behavior are still largely unknown. According to Yan-Fei Jiang, an astrophysicist at UC Santa Barbara’s Kavli Institute for Theoretical Physics, the traditional one-dimensional (1D) models of star structure are inadequate for LBVs.

“This special class of massive stars cycles between two phases: a quiescent phase when they’re not doing anything interesting, and an outburst phase when they suddenly become bigger and brighter and then eject their outer envelope,” said Jiang. “People have been seeing this for years, but 1D, spherically-symmetric models can’t determine what is going on in this complex situation.”

Instead, Jiang is leading an effort to run first-principles, 3D simulations to understand the physics behind LBV outbursts — using large-scale computing facilities provided by Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center (NERSC), the Argonne Leadership Computing Facility (ALCF), and NASA. NERSC and ALCF are DOE Office of Science User Facilities.

Physics Revealed by 3D

In a study published in Nature, Jiang and his colleagues from UC Santa Barbara, UC Berkeley, and Princeton University ran three 3D simulations to study three different LBV configurations. All the simulations included convection, the process by which warm gas or liquid rises while its colder counterpart sinks. For instance, convection causes hot water at the bottom of a pot on a stove to rise to the surface. It likewise carries gas from a star’s hot core toward its outer layers.

During the outburst phase, the new 3D simulations predict that convection causes a massive star’s radius to irregularly oscillate and its brightness to vary by 10 to 30 percent on a timescale of just a few days — in agreement with current observations.

“Convection causes the star to expand significantly to a much larger size than predicted by our 1D model without convection. As the star expands, its outer layers become cooler and more opaque,” Jiang said.

Opacity describes how a gas interacts with photons. The researchers discovered that the helium opacity in the star’s outer envelope doubles during the outburst phase, making it more difficult for photons to escape. This drives the star to an effective temperature of about 9,000 kelvins (roughly 16,000 degrees Fahrenheit) and triggers the ejection of mass.

“The radiation force is basically a product of the opacity and the fixed luminosity coming from the star’s core. When the helium opacity doubles, this increases the radiation force that is pushing material out until it overcomes the gravitational force that is pulling the material in,” said Jiang. “The star then generates a strong stellar wind, blowing away its outer envelope.”
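In rough terms, the tug-of-war Jiang describes is captured by the Eddington ratio: the radiative acceleration divided by the gravitational one. Here is a minimal back-of-the-envelope sketch in Python (the opacity values are assumptions chosen for illustration; this is not the team’s simulation code):

```python
import math

# A minimal back-of-the-envelope sketch (not the simulation code): the
# ratio of radiative to gravitational acceleration, the Eddington ratio,
# is kappa * L / (4 * pi * G * M * c). Doubling the opacity kappa doubles
# the outward push while gravity stays fixed, so a ratio that starts
# below 1 can cross above it and drive mass loss.
G, c = 6.674e-11, 2.998e8            # SI units
M_sun, L_sun = 1.989e30, 3.828e26    # solar mass (kg), solar luminosity (W)

M = 80 * M_sun      # stellar mass from the study
L = 1e6 * L_sun     # "up to a million times brighter than the Sun"

def eddington_ratio(kappa):
    """kappa in m^2/kg; a ratio above 1 means radiation wins."""
    return kappa * L / (4 * math.pi * G * M * c)

# Illustrative envelope opacities (assumed values, not from the paper);
# the second is the first doubled, mimicking the helium opacity spike.
for kappa in (0.065, 0.13):
    print(f"kappa = {kappa:.3f} m^2/kg -> Eddington ratio = {eddington_ratio(kappa):.2f}")
```

With these assumed numbers, the ratio climbs from about 0.6 to about 1.2 when the opacity doubles, which is exactly the transition from a bound envelope to a radiation-driven wind.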

Massive Simulations Required

Massive stars require massive and expensive 3D simulations, according to Jiang. So he and his colleagues needed all the computing resources available to them, including about 15 million CPU hours at NERSC, 60 million CPU hours at ALCF, and 10 million CPU hours at NASA. In addition, NERSC played a special role in the project.

“The Cori supercomputer at NERSC was essential to us in the beginning because it is very flexible,” Jiang said. “We did all of the earlier exploration at NERSC, figuring out the right parameters to use and submissions to do. We also got a lot of support from the NERSC team to speed up our input/output and solve problems.”

In addition to spending about 5 million CPU hours at NERSC on the early phase of the project, Jiang’s team used another 10 million CPU hours running part of the 3D simulations.

“We used NERSC to run half of one of the 3D simulations described in the Nature paper and the other half was run at NASA. Our other two simulations were run at Argonne, which has very different machines,” said Jiang. “These are quite expensive simulations, because even half a run takes a lot of time.”

Even so, Jiang believes that 3D simulations are worth the expense because illuminating the fundamental processes behind LBV outbursts is critical to many areas of astrophysics — including understanding the evolution of these massive stars that become black holes when they die, as well as understanding how their stellar winds and supernova explosions affect galaxies.

Jiang also used NERSC for earlier studies, and his collaboration is already running follow-up 3D simulations based on their latest results. These new simulations incorporate additional parameters — including the LBV star’s rotation and metallicity — varying the value of one parameter per run. For example, the surface speed due to rotation is larger at a star’s equator than at its poles. The same is true on Earth, which is one of the reasons NASA launches rockets from Florida, relatively close to the equator.

“A massive star has a strong rotation, which is very different at the poles and the equator. So rotation is expected to affect the symmetry of the mass loss rate,” said Jiang.

The team is also exploring metallicity, which in astrophysics refers to a star’s abundance of elements heavier than helium.

“Metallicity is important because it affects opacity. In our previous simulations, we assumed a constant metallicity, but massive stars can have very different metallicities,” said Jiang. “So we need to explore the parameter space to see how the structure of the stars changes with metallicity. We’re currently running a simulation with one metallicity at NERSC, another at Argonne, and a third at NASA. Each set of calculations will take about three months to run.”

Meanwhile, Jiang and his colleagues already have new 2018 data to analyze. And they have a lot more simulations planned due to their recent allocation awards from INCITE, NERSC, and NASA.

“We need to do a lot more simulations to understand the physics of these special massive stars, and I think NERSC will be very helpful for this purpose,” he said.

This is a reposting of my news feature originally published by Berkeley Lab’s Computing Sciences.


Microrobots fly, walk and jump into the future

Assembling an ionocraft microrobot in UC Berkeley’s Swarm Lab. (Photos by Adam Lau)

A tiny robot takes off and drunkenly flies several centimeters above a table in the Berkeley Sensor and Actuator Center. Roughly the size and weight of a postage stamp, the microrobot consists of a mechanical structure, propulsion system, motion-tracking sensor and multiple wires that supply power and communication signals.

This flying robot is the project of Daniel Drew, a graduate student who is working under the guidance of electrical engineering and computer sciences professor Kris Pister (M.S.’89, Ph.D.’92 EECS). The culmination of decades of research, these microrobots arose from Pister’s invention of “smart dust,” tiny chips roughly the size of rice grains packed with sensors, microprocessors, wireless radios and batteries. Pister likes to refer to his microrobots as “smart dust with legs.”

“We’re pushing back the boundaries of knowledge in the field of miniaturization, robotic actuators, micro-motors, wireless communication and many other areas,” says Pister. “Where these results will lead us is difficult to predict.”

For now, Pister and his team are aiming to make microrobots that can self-deploy, in the hopes that they could be used by first responders to search for survivors after a disaster, industrial plants to detect chemical leaks or farmers to monitor and tend their crops.

These insect-sized robots come with a unique advantage for solving problems. For example, many farmers already use large drones to monitor and spray their plants to improve crop quality and yield. Microrobots could take this to a whole new level. “A standard quadcopter gives us a bird’s eye view of the field, but a microrobot would give us a bug’s eye view,” Drew says. “We could program them to do important jobs like pollination, looking for the same visual cues on flowers as insects do.”

But to apply this kind of technology on a mass scale, the team first has to overcome significant challenges in microtechnology. And as Pister says, “Making tiny robots that fly, walk or jump hasn’t been easy. Every single piece of it has been hard.”

Flying silently with ion propulsion

Most flying microrobots have flapping wings that mimic real-life insects, like bees. But the team’s flying microrobot, called an ionocraft, uses a custom ion propulsion system unlike anything in nature. There are no moving parts, so it has the potential to be very durable. And it’s completely silent when it flies, so it doesn’t make an annoying buzz like a quadcopter rotor or mosquito.

The ionocraft’s propulsion system is novel, not just a scaled-down version of the thrusters on NASA’s spacecraft. “We use a mechanism that’s different than the one used in space, which ejects ions out the back to propel the spacecraft forward,” Drew says. “A key difference is that we have air on Earth.”

Instead, the ionocraft thruster consists of a thin emitter wire and a collector grid. When a voltage is applied between them, a positively-charged ion cloud is created around the wire. This ion cloud zips toward the negatively-charged collector grid, colliding with neutral air molecules along the way. The air molecules are knocked out of the way, creating a wind that moves the robot.

“If you put your hand under the collector grid of the ionocraft, you’ll feel wind on your hand — that’s the air stream that propels the microrobot upwards,” explains Drew. “It’s similar to the airstream that you’d feel if you put your hand under the rotor blades of a helicopter.”
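The momentum transfer Drew describes can be estimated with the standard one-dimensional electrohydrodynamic thrust relation, F = Id/μ. The sketch below uses invented numbers for the current, gap and robot mass; none of them are the team’s measurements:

```python
# A rough sketch of the one-dimensional electrohydrodynamic (EHD)
# thrust estimate, F = I * d / mu: ions carrying current I across a gap
# d transfer momentum to the neutral air, whose ion mobility is mu.
# Every number below is an illustrative assumption, not a measured
# ionocraft value.
ION_MOBILITY = 2.0e-4   # m^2/(V*s), typical for positive ions in air
gap = 1.0e-3            # m, assumed emitter-to-collector spacing
current = 20e-6         # A, assumed corona current per thruster

thrust = current * gap / ION_MOBILITY       # newtons per thruster
print(f"thrust per thruster ~ {thrust * 1e6:.0f} microN")

# For liftoff, four thrusters together must beat the robot's weight:
mass = 30e-6            # kg, a postage-stamp-scale robot (assumed)
weight = mass * 9.81
print(f"total thrust / weight = {4 * thrust / weight:.2f}")
```

With these assumed values, four thrusters produce roughly 400 micronewtons against a weight near 300, enough margin to climb, which is why small gaps and low mass matter so much at this scale.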

The collector grid also provides the ionocraft’s mechanical structure. Having components play more than one role is critical for these tiny robots, which need to be compact and lightweight for the propulsion system to work.

Each ionocraft has four ion thrusters that are independently controlled by adjusting their voltages. This allows the team to control the orientation of the microrobot much as they would a standard quadcopter drone: they can command the craft’s roll, pitch and yaw. What they can’t do yet is make the microrobot hover. “So far, we can fly it bouncing around like a bug in a web, but the goal is to get it to hover steadily in the air,” Pister says.
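To make the quadcopter analogy concrete, here is a generic four-thruster mixer. It is an assumption about how roll, pitch and yaw commands could map onto the four thruster voltages, not the team’s actual controller:

```python
import numpy as np

# A generic quadcopter-style mixer, offered as an assumption about how
# body-axis commands could map onto four thrusters; this is not the
# team's actual controller. Each thruster receives a share of the total
# thrust plus corrections for roll, pitch and yaw.
#                thrust  roll  pitch  yaw
MIX = np.array([[ 1.0,  +1.0,  +1.0, +1.0],   # front-left
                [ 1.0,  -1.0,  +1.0, -1.0],   # front-right
                [ 1.0,  +1.0,  -1.0, -1.0],   # rear-left
                [ 1.0,  -1.0,  -1.0, +1.0]])  # rear-right

def thruster_commands(thrust, roll, pitch, yaw):
    """Map normalized body-axis commands to four thruster outputs."""
    u = MIX @ np.array([thrust, roll, pitch, yaw])
    return np.clip(u, 0.0, 1.0)   # ion thrusters can only push, not pull

# A small roll command raises the left pair and lowers the right pair:
print(thruster_commands(thrust=0.5, roll=0.1, pitch=0.0, yaw=0.0))
# -> [0.6 0.4 0.6 0.4]
```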

Taking first steps and jumps

In parallel, the researchers are developing microrobots that can walk or jump. Their micro-walker is composed of three silicon chips: a body chip that plugs perpendicularly into two chips with three legs each. “The hexapod microrobot is about the size of a really big ant, but it’s boxier,” says Pister.

Not only does the body chip provide structural support, but it also routes the external power and control signals to the leg chips. These leg chips are oriented vertically, allowing the legs to move along the table in a sweeping motion. Each leg is driven by two tiny on-chip linear motors, called electrostatic inchworm motors, which were invented by Pister. One motor lifts the robot’s body and the second pushes it forward. This unique walking mechanism allows three-dimensional microrobots to be fabricated more simply and cheaply.
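As a toy illustration of that lift-then-push sequence (the ordering is assumed for illustration, not taken from the lab’s firmware), one stride of the gait might be sequenced like this:

```python
from itertools import cycle

# A toy sequencer for the lift-then-push gait described above: one
# motor raises the body while the other sweeps the legs, and repeating
# the cycle shuffles the robot forward.
STEP_CYCLE = [
    ("lift",  "lift motor raises the body off the surface"),
    ("sweep", "push motor sweeps the legs backward, body moves forward"),
    ("lower", "lift motor sets the body back down"),
    ("reset", "push motor returns the legs to their start position"),
]

def shuffle_forward(steps):
    phases = cycle(STEP_CYCLE)
    for _ in range(steps):
        name, action = next(phases)
        print(f"{name:>5}: {action}")

shuffle_forward(8)   # two full strides
```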

Pister says the design should, in theory, allow the hexapod to run. So far it can only stand up and shuffle forward. However, he believes their recent fabrication and assembly improvements will have the microrobot walking more quickly and smoothly soon.

The jumping microrobot also uses on-chip inchworm motors. Its motor assembly compresses springs to store energy, which is then released when the microrobot jumps. Currently, it can only jump several millimeters in the air, but the team’s goal is to have it jump six meters, from the floor to the table. To achieve this, they are developing more efficient springs and motors.
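The scale of that challenge is easy to quantify: the springs must store at least the potential energy of the jump, m times g times h. A quick sketch with an assumed robot mass, which is not a published spec:

```python
# A quick energy estimate for the jump goal, using an assumed robot
# mass (not a published spec): the springs must store at least the
# potential energy m * g * h, before counting any losses.
g = 9.81        # m/s^2
mass = 50e-6    # kg, assumed insect-scale robot mass

for height in (0.003, 6.0):   # today: a few millimeters; goal: six meters
    energy = mass * g * height
    print(f"h = {height:>5} m -> stored energy >= {energy * 1e6:8.1f} microjoules")

# Going from millimeters to meters demands roughly a two-thousandfold
# increase in stored energy, hence the push for better springs and motors.
```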

“Having robots that can shuffle, jump a little and fly is a major achievement,” Pister says. “They are coming together. But they’re all still tethered by wires for control, data and power signals.”

Working toward autonomy

Currently, high voltage control signals are passed over wires that connect a computer to a robot, complicating and restricting its movement. The team is developing better ways to control the microrobots, untethering them from the external computer. But transferring the controller onto the microrobot itself is challenging. “Small robots can’t carry the same kind of increasingly powerful computer chips that a standard quadcopter drone can carry,” Drew says. “We need to do more with less.”

So the group is designing and testing a single chip platform that will act as the robots’ brains for communication and control. They plan to send control messages to this chip from a cell phone using wireless technology such as Bluetooth. Ultimately, they hope to use only high-level commands, like “go pollinate the pumpkin field,” which the self-mobilizing microrobots can follow.

The team also plans to integrate on-board sensors, including a camera and microphone to act as the robot’s eyes and ears. These sensors will be used for navigation, as well as any tasks they want the robot to perform. “As the microrobot moves around, we could use its camera and microphone to transmit live video to a cell phone,” says Pister. “This could be used for many applications, including search and rescue.”

Using the brain chip interfaced with on-board sensors will allow the team to eliminate most of the troublesome wires. The next step will be to eliminate the power wires so the robots can move freely. Pister showed early on that solar cells are strong enough to power microrobots. In fact, a microrobot prototype that has been sitting on his office shelf for about 15 years still moves using solar power.

Now, his team is developing a power chip with solar cells in collaboration with Jason Stauth (M.S.’06, Ph.D.’08 EECS), who is an associate professor of engineering at Dartmouth. They’re also working with electrical engineering and computer sciences professor Ana Arias to investigate using batteries.

Finally, the researchers are developing clever machine learning algorithms that guide a microrobot’s motion, making it as smooth as possible.

In Drew’s case, the initial algorithms are based on data from flying a small quadcopter drone. “We’re first developing the machine learning platform with a centimeter-scale, off-the-shelf quadcopter,” says Drew. “Since the control system for an ionocraft is similar to a quadcopter, we’ll be able to adapt and apply the algorithms to our ionocraft. Hopefully, we’ll be able to make it hover.”

Putting it all together

Soon, the team hopes to have autonomous microrobots wandering around the lab directed by cell phone messages. But their ambitions don’t stop there. “I think it’s beneficial to have flying robots and walking robots cooperating together,” Drew says. “Flying robots will always consume more energy than walking robots, but they can overcome obstacles and sense the world from a higher vantage point. There is promise to having both or even a mixed-mobility microrobot, like a beetle that can fly or walk.”

Mixed-mobility microrobots could do things like monitor bridges, railways and airplanes. Currently, static sensors are used to monitor infrastructure, but they are difficult and time-consuming to deploy and maintain — picture changing the batteries of 100,000 sensors across a bridge. Mixed-mobility microrobots could also search for survivors after a disaster by flying, crawling and jumping through the debris.

“Imagine you’re a first responder who comes to the base of a collapsed building. Working by flashlight, it’s hard to see much but the dust hanging in the air,” says Drew. “Now, imagine pulling out a hundred insect-sized robots from your pack, tossing them into the air and having them disperse in all directions. Infrared cameras on each robot look for signs of life. When one spots a survivor, it sends a message back to you over a wireless network. Then a swarm of robots glowing like fireflies leads you to the victim’s location, while a group ahead clears out the debris in your path.”

The applications seem almost endless given the microrobots’ potential versatility and affordability. Pister estimates they might cost as little as one dollar someday, using batch manufacturing techniques. The technology is also likely to reach beyond microrobots.

For Pister’s team, the path forward is clear; the open question is when. “All the pieces are on the table now,” Pister says, “and it’s ‘just’ a matter of integration. But system integration is a challenge in its own right, especially with packaging. We may get results in the next six months — or it may take another five years.”

This is a reposting of my news feature previously published in the fall issue of the Berkeley Engineer magazine. © Berkeley Engineering

Blasting radiation therapy into the future: New systems may improve cancer treatment

Image by Greg Stewart/SLAC National Accelerator Laboratory

As a cancer survivor, I know radiation therapy lasting minutes can seem much longer as you lie on the patient bed trying not to move. Thanks to new funding, future accelerator technology may turn these dreaded minutes into a fraction of a second.

Stanford University and SLAC National Accelerator Laboratory are teaming up to develop a faster and more precise way to deliver X-rays or protons, quickly zapping cancer cells before their surrounding organs can move. This will likely reduce treatment side effects by minimizing damage to healthy tissue.

“Delivering the radiation dose of an entire therapy session with a single flash lasting less than a second would be the ultimate way of managing the constant motion of organs and tissues, and a major advance compared with methods we’re using today,” said Billy Loo, MD, PhD, an associate professor of radiation oncology at Stanford, in a recent SLAC news release.
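For a sense of scale, compressing a therapy session into a sub-second flash raises the dose rate by roughly two orders of magnitude. A back-of-the-envelope comparison with assumed, typical numbers (these are not PHASER specifications):

```python
# A back-of-the-envelope dose-rate comparison with assumed, typical
# numbers (not PHASER specifications): a ~2 Gy fraction delivered over
# about a minute versus the same dose in a sub-second flash.
dose = 2.0                 # Gy, a common per-session prescription
conventional_time = 60.0   # s, order of a conventional delivery
flash_time = 0.5           # s, "a single flash lasting less than a second"

print(f"conventional: {dose / conventional_time:.2f} Gy/s")
print(f"flash:        {dose / flash_time:.1f} Gy/s "
      f"({conventional_time / flash_time:.0f}x higher dose rate)")
```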

Currently, most radiation therapy systems work by accelerating electrons through a meter-long tube using radiofrequency fields that travel in the same direction. These electrons then collide with a heavy metal target to convert their energy into high energy X-rays, which are sharply focused and delivered to the tumors.

Now, researchers are developing a new way to more powerfully accelerate the electrons. The key element of the project, called PHASER, is a prototype accelerator component (shown in bronze in this video) that delivers hundreds of times more power than the standard device.

In addition, the researchers are developing a similar device for proton therapy. Although less common than X-rays, protons are sometimes used to kill tumors and are expected to have fewer side effects, particularly in sensitive areas like the brain. That’s because protons deposit relatively little energy as they enter the body and release most of it at the tumor site, where they stop, sparing the healthy tissue beyond the tumor.

However, proton therapy currently requires large and complex facilities. The Stanford and SLAC team hopes to increase availability by designing a compact, power-efficient and economical proton therapy system that can be used in a clinical setting.

In addition to being faster and possibly more accessible, animal studies indicate that these new X-ray and proton technologies may be more effective.

“We’ve seen in mice that healthy cells suffer less damage when we apply the radiation dose very quickly, and yet the tumor-killing is equal or even a little better than that of a conventional longer exposure,” Loo said in the release. “If the results hold for humans, it would be a whole new paradigm for the field of radiation therapy.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Sensors could provide dexterity to robots, with potential surgical applications

Stanford chemical engineer Zhenan Bao, PhD, has been working for decades to develop an electronic skin that can provide prosthetic or robotic hands with a sense of touch and human-like manual dexterity.

Her team’s latest achievement is a rubber glove with sensors attached to the fingertips. When the glove is placed on a robotic hand, the hand is able to delicately hold a blueberry between its fingertips. As the video shows, it can also gently move a ping-pong ball in and out of holes without crushing it.

The sensors in the glove’s fingertips mimic the biological sensors in our skin, simultaneously measuring the intensity and direction of pressure when touched. Each sensor is composed of three flexible layers that work together, as described in the recent paper published in Science Robotics.

The sensor’s two outer layers have rows of electrical components that are aligned perpendicular to each other. Together, they make up a dense array of small electrical sensing pixels. In between these layers is an insulating rubber spacer.

The electrically active outer layers also have a bumpy bottom that acts like the spinosum — a spiny sublayer in human skin with peaks and valleys. This microscopic terrain is used to measure the pressure intensity. When a robotic finger lightly touches an object, the touch is felt by sensing pixels on the peaks. When touching something more firmly, pixels in the valleys are also activated.

Similarly, the researchers use this terrain to detect the direction of a touch. For instance, when pressure comes from the left, it’s felt by pixels on the left side of the peaks more than on the right side.
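In code, reading intensity and direction out of such an array might look like the sketch below; the data layout and thresholds are assumptions for illustration, not the published processing pipeline:

```python
import numpy as np

# An illustrative readout of intensity and direction from a peak/valley
# pixel array. The data layout and logic are assumptions, not the
# published processing pipeline.
rng = np.random.default_rng(0)

peaks = rng.random((8, 8)) * 0.2   # baseline signal on the peak pixels
peaks[:, :4] += 0.6                # stronger response on the left flanks
valleys = np.zeros((8, 8))         # valley pixels silent: a light touch

# Valleys only fire under firm contact, so weight them more heavily.
intensity = peaks.sum() + 2.0 * valleys.sum()

# Left/right asymmetry on the peaks indicates the direction of shear.
direction = peaks[:, :4].sum() - peaks[:, 4:].sum()

print(f"intensity: {intensity:.1f}")
print("pressure from the", "left" if direction > 0 else "right")
print("firm touch" if valleys.sum() > 0 else "light touch")
```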

Once more sensors are added, such electronic gloves could be used for a wide range of applications. As a recent Stanford Engineering news release explains, “With proper programming a robotic hand wearing the current touch-sensing glove could perform a repetitive task such as lifting eggs off a conveyor belt and placing them into cartons. The technology could also have applications in robot-assisted surgery, where precise touch control is essential.”

However, Bao hopes in the future to develop a glove that can gently handle objects automatically. She said in the release:

“We can program a robotic hand to touch a raspberry without crushing it, but we’re a long way from being able to touch and detect that it is a raspberry and enable the robot to pick it up.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

How does media multitasking affect the mind?

Image by Mohamed Hassan

Imagine that you’re working on your computer, watching the Warriors game, exchanging texts and checking Facebook. Sound familiar? Many people simultaneously view multiple media streams every day.

Over the past decade, researchers have been studying the relationship between this type of heavy media multitasking and cognition to determine how our media use is shaping our minds and brains. This is a particularly critical question for teenagers, who use technology for almost 9 hours every day on average, not including school-related use.

Many studies have examined cognitive performance in young adults using a variety of task-based cognitive tests, comparing the performance of heavy and light multitaskers. According to a recent review article, these studies show that heavy media multitaskers perform significantly worse, particularly when the tasks require sustained, goal-oriented attention.

For example, a pivotal study led by Anthony Wagner, PhD, a Stanford professor of psychology and co-author of the review article, developed a questionnaire-based media multitasking index to identify the two groups — based on the number of media streams a person juggles during a typical media consumption hour, as well as the time spent on each medium. Twelve media forms were included, ranging from computer games to cell phone calls.
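An index of this general form works as a weighted average: for each medium, multiply the number of media typically used at the same time by that medium’s share of total media hours. Here is a sketch with invented data; the actual items and scoring are defined in the study’s questionnaire:

```python
# A sketch following the general form of a media multitasking index:
# for each medium, weight the number of other media typically used at
# the same time by that medium's share of total media hours. The data
# values are invented for illustration; the actual 12 items and
# scoring are defined in the original questionnaire.
media = {
    # form: (hours per week, media typically used simultaneously)
    "texting":        (10.0, 3.0),
    "web browsing":   ( 8.0, 2.5),
    "music":          (12.0, 2.0),
    "television":     ( 6.0, 1.5),
    "computer games": ( 4.0, 1.0),
    # ...the remaining media forms are omitted for brevity
}

total_hours = sum(hours for hours, _ in media.values())
mmi = sum(hours * simul for hours, simul in media.values()) / total_hours
print(f"MMI = {mmi:.2f}")   # higher scores = heavier multitasker
```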

The team administered their questionnaire and several standard cognitive tests to Stanford students. In one series of tests, the researchers measured the working memory capabilities of 22 light multitaskers and 19 heavy multitaskers. Working memory is the mental post-it note used to keep track of information, like a set of simple instructions, in the short term.

“In one test, we show a set of oriented blue rectangles, then remove them from the screen and ask the subject to retain that information in mind. Then we’ll show them another set of rectangles and ask if any have changed orientation,” described Wagner in a recent Stanford Q&A. “To measure memory capacity, we do this task with a different number of rectangles and determine how performance changes with increasing memory loads. To measure the ability to filter out distraction, sometimes we add distractors, like red rectangles that the subjects are told to ignore.”

Wagner also performed standard task-switching experiments in which the students viewed images of paired numbers and letters and analyzed them. The students had to switch back and forth between classifying the numbers as even or odd and the letters as vowels or consonants.

The Stanford study showed that heavy multitaskers were less effective at filtering out irrelevant stimuli, whereas light multitaskers found it easier to focus on a single task in the face of distractions.

Overall, this previous study is representative of the twenty subsequent studies discussed in the recent review article. Wagner and co-author Melina Uncapher, PhD, a neuroscientist at the University of California, San Francisco, theorized that lapses in attention may explain most of the current findings — heavy media multitaskers have more difficulty staying on task and returning to task when attention has lapsed than light multitaskers.

However, the authors emphasized that the large diversity of the current studies and their results raises more questions than it answers. For instance, what is the direction of causation? Does heavier media multitasking cause cognitive and neural differences, or do individuals with such preexisting differences tend toward more multitasking behavior? More research is needed, they said.

Wagner concluded in the Q&A:

“I would never tell anyone that the data unambiguously show that media multitasking causes a change in attention and memory. That would be premature… That said, multitasking isn’t efficient. We know there are costs of task switching. So that might be an argument to do less media multitasking.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

A look at the cigarette epidemic in China

Image by Dimhou

The imagery of a cuddly panda bear has often been used to sell tobacco products in China. So a new book that examines China’s cigarette industry seems aptly titled: Poisonous Pandas: Chinese Cigarette Manufacturing in Critical Historical Perspectives.

The book brings together an interdisciplinary group of scholars — including Stanford editors Matthew Kohrman, PhD, a professor of anthropology, and Robert Proctor, PhD, a professor of history. Together the team has investigated how transnational tobacco companies have worked to triple the world’s annual cigarette consumption since the 1960s. They focus on the China National Tobacco Corporation, which currently produces forty percent of cigarettes sold globally.

In a recent Freeman Spogli Institute Q&A, Kohrman discusses how he got involved in this work. “When I began my ethnographic fieldwork on tobacco in China, I initially studied mostly consumer behavior. But I quickly realized that focusing solely on cigarette consumption, without considering the relationship between supply and demand, was like studying obesity while ignoring food,” he says.

Kohrman explains that cigarettes have become the single greatest cause of preventable death in the world today and the problem is getting worse. “Instead of declining as we would expect based on our impressions living here in California, the number of daily cigarette smokers around the world is projected to continue climbing,” he says. In particular, he explains the big tobacco companies are targeting less-educated people from lower- and middle-income countries.

Kohrman does offer some hope in light of the Chinese government’s recent initiatives to restrict tobacco advertising and smoking in public places. But he says that there is a lot more work to do.

“The road towards comprehensive tobacco prevention in China is going to be a long one,” Kohrman concludes.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Inherited Neanderthal genes protect us against viruses

Image by Claire Scully

When Neanderthals and modern humans interbred about 50,000 years ago, they exchanged snippets of DNA. Today, Europeans and Asians still carry 2 to 3 percent of Neanderthal DNA in their genomes.

During contact, they also exposed each other to viruses. This could have been deadly for the human species since Neanderthals encountered many novel infectious viruses while living for hundreds of thousands of years outside Africa. Luckily, the Neanderthals’ immune systems evolved genetic defenses against these viruses that were passed on to humans, according to a study reported in Cell.

“Neanderthal genes likely gave us some protection against viruses that our ancestors encountered when they left Africa,” said Dmitri Petrov, PhD, an evolutionary biologist at Stanford’s School of Humanities and Sciences, in a recent Stanford news release.

In the study, the researchers gathered a large dataset of several thousand proteins that interact with viruses in modern humans. They then identified 152 Neanderthal DNA snippets present in the genes that encode these proteins. Most of these 152 genes encode proteins that interact with a specific type of virus: RNA viruses, which carry their genetic material as RNA encased in a protein shell.

The team identified 11 types of RNA viruses, including HIV, influenza A and hepatitis C, whose interacting proteins carried a high number of these Neanderthal-inherited snippets. These viruses likely played a key role in shaping human genome evolution, they said.
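A standard way to ask whether a virus’s interacting proteins are unusually rich in such snippets is an over-representation test like the one sketched below. The counts, apart from the 152, are invented, and this is not necessarily the authors’ method:

```python
from scipy.stats import hypergeom

# An illustrative over-representation test (not necessarily the
# authors' analysis): given N virus-interacting proteins, K of which
# carry Neanderthal snippets, is one virus's set of n interacting
# proteins unusually rich in them (k hits)? All counts are invented
# except the 152 from the study.
N = 4500   # assumed total virus-interacting proteins in the dataset
K = 152    # proteins with Neanderthal DNA snippets (from the study)
n = 120    # assumed proteins interacting with one virus
k = 12     # assumed Neanderthal-snippet proteins among those n

expected = n * K / N                 # hits expected under random overlap
p = hypergeom.sf(k - 1, N, K, n)     # P(X >= k) if overlap were random
print(f"expected ~ {expected:.1f} hits, observed {k}, p = {p:.1e}")
```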

Overall, their findings suggest that the genomes of humans and other species contain signatures of ancient epidemics.

“It’s similar to paleontology,” said David Enard, PhD, a former postdoctoral fellow in Petrov’s lab. “You can find hints of dinosaurs in different ways. Sometimes you’ll discover actual bones, but sometimes you find only footprints in fossilized mud. Our method is similarly indirect: Because we know which genes interact with which viruses, we can infer the types of viruses responsible for ancient disease outbreaks.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.