On the importance of including pregnant women in clinical trials: A Q&A

Photo by StockSnap

As a research scientist, I’ve navigated the complex process of getting approval to image human subjects. So I know firsthand that it is common to exclude pregnant women from clinical trials. Although this practice is well-intentioned, it is also misguided — according to an opinion piece recently published in JAMA. To learn more, I spoke with one of the authors, Heather Byers, MD, a clinical assistant professor in pediatrics at Stanford.

Why are pregnant women excluded from clinical trials?

“Historically, women in general were excluded from clinical trials because men were thought to be a more homogenous group without hormonal cycles and other sex-based variables that might impact the medical conditions under study.

In addition, pregnant women are still classified as a ‘vulnerable’ population for all research studies, so investigators must take additional steps to enroll them to ensure minimum risk.

Also, the lack of data about what pregnant women can safely be exposed to leads to more uncertainty. So, many investigators choose to exclude them, even if they might benefit from the study intervention.”

Why is this a problem?

“Excluding them is a problem because women don’t stop getting sick or stop having chronic medical conditions just because they are pregnant. The average woman is exposed to four medications during her pregnancy, and over 80 percent of medications haven’t been studied in pregnant women. This forces pregnant women to take medications on an ‘off-label’ basis — meaning the medications weren’t studied or approved for use in pregnant women — because there’s no other option. Pregnant women deserve better. It’s a matter of justice.”

What are the barriers and how can we overcome them?

“First, we advocate reclassifying pregnant women from ‘vulnerable’ to ‘scientifically complex.’ Pregnancy doesn’t alter a woman’s capacity for autonomous decision-making. Indeed, a pregnant woman frequently makes complex medical decisions for herself and her fetus that reflect her family’s values.

Another barrier for medical investigators is the perceived legal risk regarding a potential adverse outcome in the fetus or mother. As we discuss in the JAMA Viewpoint, this barrier could be addressed by standardizing the informed consent process.

Finally, federal regulations don’t define ‘acceptable risk’ to the woman or fetus and this uncertainty is perceived as a risk in itself. But in some cases, pregnant women may accept the uncertainty and risk.

For example, it was imperative to reduce mother-to-child transmission of HIV. So obstetricians reluctantly included pregnant women with HIV in their study of antiretroviral treatments, since the risk of the drugs was thought to be low and the potential benefit high. And the success of this study helped transform the AIDS epidemic.”

Is progress being made?

“Although progress has been slow, there has been an increased effort to enroll pregnant women. Several high-profile clinical trials involving pregnant women were recently completed, and institutions like the National Institutes of Health are working to change their policies. For example, the NIH Task Force on Research Specific to Pregnant Women and Lactating Women recently issued a report that summarizes the current gaps in knowledge and provides recommendations for continued progress.”

How did you become involved?

“I first became interested in this subject as a medical student during my rotation at NIH with Pamela Stratton, MD, one of the obstetricians involved in the study of antiretrovirals to prevent vertical transmission of HIV.

Later, as an obstetrics resident, I was frustrated by the lack of information to share with my patients regarding the risk and clinical impact of various medications, vaccines and medical conditions in pregnancy. Every anecdotal story — such as my patient who was hospitalized in intensive care for months with influenza because she’d been too afraid to get the flu vaccine earlier in her pregnancy — is one too many. The fear of uncertain risk can be dangerous. There should be a better way.

One thing that has changed is the rise of social media and patient support group accessibility. Although this should not replace the controlled setting of a clinical trial, partnerships between motivated patient advocacy groups and medical investigators can be a powerful tool for obtaining information about risk and benefits going forward.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.


Jamming with the Midnight Rounds: A Q&A

Photo by Victoria Bruzoni of Midnight Rounds band, from left to right: James Wall, Matias Bruzoni, Raji Koppolu, Yasser El-Sayed, Garret Vygantas, Jon Palma and Jeff Linnel

The next time you listen to music at a nearby pub or Bay Area event, double check to see who is playing. You just might see one of your Stanford physicians.

At a division holiday party in 2009, pediatric surgeon Matias Bruzoni, MD, on vocals, piano and guitar, and nurse practitioner Raji Koppolu on vocals, played acoustic versions of popular songs. Soon after, two more pediatric surgeons joined and the band Midnight Rounds was born. Over the years, the band has expanded to include piano, guitar, bass, percussion, drums, violin and vocals. I spoke with Bruzoni to learn more.

What is the origin of the name, Midnight Rounds?

“Our bass and guitar player James Wall, MD, and his family generously turn their guest house into a music studio whenever we need to practice. We all have very tight schedules due to our professional work, so once a week we practice late at night — many times going past midnight. As providers, we also make rounds every day to see our patients, so our drummer Yasser El-Sayed, MD, suggested we call ourselves Midnight Rounds.”

What kind of music do you play?

“Our repertoire includes oldies, country, 80’s, 90’s and more modern pop songs. We particularly enjoy creating mashups of songs, flipping back and forth between songs and adding our own twist. For instance, we like playing “Free” sung by the Zac Brown Band mashed up with “Into the Mystic” by Van Morrison and “Lodi” by Creedence Clearwater Revival mashed up with “Sloop John B” by the Beach Boys. Another favorite song is “Dixieland Delight” by Alabama, which features Jonathan Palma, MD, playing violin.”

Where do you play?

“We play in many different venues including weddings, wineries, local pubs, holiday parties, pumpkin festivals and wherever we’re invited. It varies, but we average a couple of events per month. We play quarterly at the Pioneer Saloon in Woodside — we’ll be there on January 12.

We sometimes make a little money during our performances at places like Pioneer. We decided as a group to donate the proceeds to different charity organizations that benefit women and children’s health.”

Is there any relation between playing in the band and medicine?

“For us, the band is a perfect excuse to get together outside of regular working hours. We feel this strengthens our relationships with each other in the hospital. It’s also a healthy way to recharge our batteries, avoid burn-out and thus take better care of our patients.

In addition, we’ve gotten to know a lot of people that work at Stanford — nurses, OR staff, social workers, interpreters and other docs like anesthesiologists — who come to our gigs. Our strongest crew are the NICU nurses and social workers, who follow us wherever we go. I think our patients definitely benefit, because teamwork is essential to patient care.

I also think performing under pressure is a great exercise since it is very similar to what we do every day here at the hospital. You get nervous even if you’re doing an acoustic session in front of 10 people, since you want to sound good. When I interview residents for positions here at Stanford, I pay a lot of attention to whether they excel in athletics or music, which gives me an idea on how well they can perform under pressure.”

Describe a favorite moment with the band.

“There are times when we have our kids sing or play an instrument with us. And that’s a very special moment. For instance, the other day we played at a pumpkin patch. People were playing games and stuff, not paying much attention to the band. But everyone went dead silent when my daughter came up to sing a Justin Bieber song. And then they started taping it. It was really magical.”

What’s next?

“We’re thinking about writing some original songs as an experiment. But mostly we just want to build up our repertoire. We started with maybe five songs and now we have about 50 songs that we can play — some of them without even practicing. We usually practice two to three new songs for every new gig. Our band members have very different musical tastes, which makes it fun to blend them all together.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Medical students turn to peer-support groups for assistance: A Q&A

Photo by rawpixel

School can be overwhelming, especially medical school. But Stanford Medicine offers many different forms of mental health support, including a peer-to-peer support program for medical students called Ears 4 Peers. To learn more, I spoke with Dina Wang-Kraus, MD, a Stanford psychiatry and behavioral sciences resident and co-founder of the program.

What inspired you to start the Ears 4 Peers program?

“In 2012, I was a first-year medical student and I was noticing that a significant number of my classmates were experiencing compassion fatigue and burnout. We were encouraged to reach out to the counseling and psychology services, but there was some hesitancy, either from busy schedules or anxieties surrounding stigma. So, Norma Villalon, MD, and I decided to found a peer-to-peer support program. I had started a similar program in college at Johns Hopkins, called A Place to Talk.

The hope was to have near-peers — those who were just walking in your shoes — provide support. Our goal was to bridge the distance students often feel when in a competitive, challenging situation. We may have been adults in our mid-twenties to forties, but we were only in the infancy of our training.

Rebecca Smith-Coggins, MD, is our faculty adviser and leader. From day one, she’s believed in our cause.”

What are some issues the program addresses?

“We receive calls regarding issues like academic stress, interpersonal relationship conflicts, imposter syndrome, intimate partner violence, Stanford Duck syndrome and suicidal thoughts. We also receive calls from students feeling lonely, disconnected and homesick, especially around finals, holidays and medical board exams. And some students call hoping to be referred for additional support.”

How are Ears 4 Peers mentors selected and trained?

“Ears 4 Peers mentors are nominated by their peers or self-nominated. They complete an application to tell us more about themselves, what draws them to this type of work and what they hope to gain from the experience.

We’re very lucky to have the support of Alejandro Martinez, PhD, the Associate Dean of Students for the Stanford undergraduate campus. He and his team designed a curriculum specifically for Stanford School of Medicine.”

What role do you play in the program now?

“As a resident, I’ve transitioned out of being an official Ears 4 Peers mentor but I continue to remain actively involved in near-peer mentoring for medical students. Two years ago as an intern in psychiatry, I worked with Jessi Gold, MD, to inaugurate Stanford’s Medical Student Reflection Groups. Each group is made up of four to 10 medical students who commit to joining for six to 12 months. We meet every other week, and groups are facilitated by psychiatry residents trained in group therapy and psychotherapy. As resident physicians, we remain near-peers; however, we’re able to facilitate a different kind of support and personal growth given our psychiatry training.

Stanford students are welcome to reach out to me at sdwangkraus@stanford.edu to learn more.”

What advice can you give medical students and residents?

“I recall medical school as an exhilarating time, but it also felt like I was drinking from Niagara Falls, one cup at a time. There were times when I felt overwhelmed and even burnt out.

We see a lot of beauty and humility in medicine, but there are also times when we see a lot of tragedy and suffering. Having peer-support, knowing that I was not alone, was empowering and liberating — and it continues to be.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Shedding New Light on Luminous Blue Variable Stars: 3D Simulations Disperse Some of the Mystery Surrounding Massive Stars

A snapshot from a simulation of the churning gas that blankets a star 80 times the sun’s mass. Intense light from the star’s core pushes against helium-rich pockets in the star’s exterior, launching material outward in spectacular geyser-like eruptions. The solid colors denote radiation intensity, with bluer colors representing regions of larger intensity. The translucent purplish colors represent the gas density, with lighter colors denoting denser regions. Image: Joseph Insley, Argonne Leadership Computing Facility.

Three-dimensional (3D) simulations run at two of the U.S. Department of Energy’s national laboratory supercomputing facilities and the National Aeronautics and Space Administration (NASA) have provided new insights into the behavior of a unique class of celestial bodies known as luminous blue variables (LBVs) — rare, massive stars that can shine up to a million times brighter than the Sun.

Astrophysicists are intrigued by LBVs because their luminosity and size dramatically fluctuate on a timescale of months. They also periodically undergo giant eruptions, violently ejecting gaseous material into space. Although scientists have long observed the variability of LBVs, the physical processes causing their behavior are still largely unknown. According to Yan-Fei Jiang, an astrophysicist at UC Santa Barbara’s Kavli Institute for Theoretical Physics, the traditional one-dimensional (1D) models of star structure are inadequate for LBVs.

“This special class of massive stars cycles between two phases: a quiescent phase when they’re not doing anything interesting, and an outburst phase when they suddenly become bigger and brighter and then eject their outer envelope,” said Jiang. “People have been seeing this for years, but 1D, spherically-symmetric models can’t determine what is going on in this complex situation.”

Instead, Jiang is leading an effort to run first-principles, 3D simulations to understand the physics behind LBV outbursts — using large-scale computing facilities provided by Lawrence Berkeley National Laboratory’s National Energy Research Scientific Computing Center (NERSC), the Argonne Leadership Computing Facility (ALCF), and NASA. NERSC and ALCF are DOE Office of Science User Facilities.

Physics Revealed by 3D

In a study published in Nature, Jiang and his colleagues from UC Santa Barbara, UC Berkeley, and Princeton University ran three 3D simulations to study three different LBV configurations. All the simulations included convection, the process by which warm gas or liquid rises while its colder counterpart sinks. For instance, convection causes hot water at the bottom of a pot on a stove to rise to the top surface. It also causes gas heated by a star’s hot core to rise toward its outer layers.

During the outburst phase, the new 3D simulations predict that convection causes a massive star’s radius to irregularly oscillate and its brightness to vary by 10 to 30 percent on a timescale of just a few days — in agreement with current observations.

“Convection causes the star to expand significantly to a much larger size than predicted by our 1D model without convection. As the star expands, its outer layers become cooler and more opaque,” Jiang said.

Opacity describes how a gas interacts with photons. The researchers discovered that the helium opacity in the star’s outer envelope doubles during the outburst phase, making it more difficult for photons to escape. This leads the star to reach an effective temperature of about 9,000 kelvins (16,000 degrees Fahrenheit) and triggers the ejection of mass.

“The radiation force is basically a product of the opacity and the fixed luminosity coming from the star’s core. When the helium opacity doubles, this increases the radiation force that is pushing material out until it overcomes the gravitational force that is pulling the material in,” said Jiang. “The star then generates a strong stellar wind, blowing away its outer envelope.”
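To make the tipping point in that explanation concrete, here is a minimal back-of-envelope sketch. It is not taken from the Nature paper; the stellar mass, luminosity and opacity values are illustrative assumptions chosen only to show how the ratio of radiative to gravitational acceleration scales linearly with opacity.

```python
import math

# Back-of-envelope sketch (illustrative values, not from the Nature paper):
# the ratio of radiative to gravitational acceleration scales linearly with
# opacity, so doubling the opacity doubles the outward push while gravity
# stays fixed.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
L_sun = 3.828e26       # solar luminosity, W
M_sun = 1.989e30       # solar mass, kg

M = 80 * M_sun         # stellar mass, roughly the simulated star
L = 1.0e6 * L_sun      # luminosity from the core (assumed ~10^6 L_sun)

def eddington_factor(kappa):
    """Gamma = g_rad / g_grav = kappa * L / (4 * pi * G * M * c).

    The 1/r^2 in both accelerations cancels, so the ratio depends only
    on opacity, luminosity and mass.
    """
    return kappa * L / (4 * math.pi * G * M * c)

# Assumed envelope opacities (m^2/kg): a baseline value and its double,
# standing in for the helium opacity peak described in the text.
for kappa in (0.06, 0.12):
    print(f"kappa = {kappa:.2f} m^2/kg  ->  Gamma = {eddington_factor(kappa):.2f}")
# Once Gamma exceeds 1, radiation wins over gravity and material is driven outward.
```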

Massive Simulations Required

Massive stars require massive and expensive 3D simulations, according to Jiang. So he and his colleagues needed all the computing resources available to them, including about 15 million CPU hours at NERSC, 60 million CPU hours at ALCF, and 10 million CPU hours at NASA. In addition, NERSC played a special role in the project.

“The Cori supercomputer at NERSC was essential to us in the beginning because it is very flexible,” Jiang said. “We did all of the earlier exploration at NERSC, figuring out the right parameters to use and submissions to do. We also got a lot of support from the NERSC team to speed up our input/output and solve problems.”

In addition to spending about 5 million CPU hours at NERSC on the early phase of the project, Jiang’s team used another 10 million CPU hours running part of the 3D simulations.

“We used NERSC to run half of one of the 3D simulations described in the Nature paper and the other half was run at NASA. Our other two simulations were run at Argonne, which has very different machines,” said Jiang. “These are quite expensive simulations, because even half a run takes a lot of time.”

Even so, Jiang believes that 3D simulations are worth the expense because illuminating the fundamental processes behind LBV outbursts is critical to many areas of astrophysics — including understanding the evolution of these massive stars that become black holes when they die, as well as understanding how their stellar winds and supernova explosions affect galaxies.

Jiang also used NERSC for earlier studies, and his collaboration is already running follow-up 3D simulations based on their latest results. These new simulations incorporate additional parameters — including the LBV star’s rotation and metallicity — varying the value of one of these parameters per run. For example, the rotational speed is larger at the star’s equator than at its poles. The same is true on Earth, which is one of the reasons NASA launches rockets from sites as close to the equator as practical, such as Cape Canaveral in Florida.

“A massive star has a strong rotation, which is very different at the poles and the equator. So rotation is expected to affect the symmetry of the mass loss rate,” said Jiang.

The team is also exploring metallicity, which in astrophysics refers to the abundance of elements heavier than helium.

“Metallicity is important because it affects opacity. In our previous simulations, we assumed a constant metallicity, but massive stars can have very different metallicities,” said Jiang. “So we need to explore the parameter space to see how the structure of the stars changes with metallicity. We’re currently running a simulation with one metallicity at NERSC, another at Argonne, and a third at NASA. Each set of calculations will take about three months to run.”

Meanwhile, Jiang and his colleagues already have new 2018 data to analyze. And they have a lot more simulations planned due to their recent allocation awards from INCITE, NERSC, and NASA.

“We need to do a lot more simulations to understand the physics of these special massive stars, and I think NERSC will be very helpful for this purpose,” he said.

This is a reposting of my news feature originally published by Berkeley Lab’s Computing Sciences.

Microrobots fly, walk and jump into the future

Assembling an ionocraft microrobot in UC Berkeley’s Swarm Lab. (Photos by Adam Lau)

A tiny robot takes off and drunkenly flies several centimeters above a table in the Berkeley Sensor and Actuator Center. Roughly the size and weight of a postage stamp, the microrobot consists of a mechanical structure, propulsion system, motion-tracking sensor and multiple wires that supply power and communication signals.

This flying robot is the project of Daniel Drew, a graduate student who is working under the guidance of electrical engineering and computer sciences professor Kris Pister (M.S.’89, Ph.D.’92 EECS). The culmination of decades of research, these microrobots arose from Pister’s invention of “smart dust,” tiny chips roughly the size of rice grains packed with sensors, microprocessors, wireless radios and batteries. Pister likes to refer to his microrobots as “smart dust with legs.”

“We’re pushing back the boundaries of knowledge in the field of miniaturization, robotic actuators, micro-motors, wireless communication and many other areas,” says Pister. “Where these results will lead us is difficult to predict.”

For now, Pister and his team are aiming to make microrobots that can self-deploy, in the hopes that they could be used by first responders to search for survivors after a disaster, industrial plants to detect chemical leaks or farmers to monitor and tend their crops.

These insect-sized robots come with a unique advantage for solving problems. For example, many farmers already use large drones to monitor and spray their plants to improve crop quality and yield. Microrobots could take this to a whole new level. “A standard quadcopter gives us a bird’s eye view of the field, but a microrobot would give us a bug’s eye view,” Drew says. “We could program them to do important jobs like pollination, looking for the same visual cues on flowers as insects [see].”

But to apply this kind of technology on a mass scale, the team first has to overcome significant challenges in microtechnology. And as Pister says, “Making tiny robots that fly, walk or jump hasn’t been easy. Every single piece of it has been hard.”

Flying silently with ion propulsion

Most flying microrobots have flapping wings that mimic real-life insects, like bees. But the team’s flying microrobot, called an ionocraft, uses a custom ion propulsion system unlike anything in nature. There are no moving parts, so it has the potential to be very durable. And it’s completely silent when it flies, so it doesn’t make an annoying buzz like a quadcopter rotor or mosquito.

The ionocraft’s propulsion system is novel, not just a scaled-down version of the ion drives used on NASA’s spacecraft. “We use a mechanism that’s different than the one used in space, which ejects ions out the back to propel the spacecraft forward,” Drew says. “A key difference is that we have air on Earth.”

Instead, the ionocraft thruster consists of a thin emitter wire and a collector grid. When a voltage is applied between them, a positively-charged ion cloud is created around the wire. This ion cloud zips toward the negatively-charged collector grid, colliding with neutral air molecules along the way. The air molecules are knocked out of the way, creating a wind that moves the robot.

“If you put your hand under the collector grid of the ionocraft, you’ll feel wind on your hand — that’s the air stream that propels the microrobot upwards,” explains Drew. “It’s similar to the airstream that you’d feel if you put your hand under the rotor blades of a helicopter.”
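As a rough illustration of why this can lift something so small, here is a hedged back-of-envelope estimate using a common first-order model of electrohydrodynamic thrust, T = I·d/μ (corona current times electrode gap divided by ion mobility). The current, gap and craft mass below are assumptions for illustration, not the ionocraft’s published specifications.

```python
# Rough, illustrative estimate of electrohydrodynamic (ion wind) thrust using
# the common one-dimensional model T = I * d / mu. All numbers are assumptions
# chosen for illustration, not the Berkeley ionocraft's published specs.

MU_ION_AIR = 2.0e-4    # mobility of positive ions in air, m^2 / (V*s)
G0 = 9.81              # gravitational acceleration, m/s^2

def ehd_thrust(current_A, gap_m, mobility=MU_ION_AIR):
    """Thrust (N) from corona current and emitter-to-collector gap."""
    return current_A * gap_m / mobility

# Assumed operating point: tens of microamps across a millimeter-scale gap.
current = 30e-6        # A
gap = 2e-3             # m
thrust = ehd_thrust(current, gap)            # -> 3e-4 N, i.e. 0.3 mN

craft_mass = 30e-6     # kg (assumed ~30 mg, postage-stamp scale)
weight = craft_mass * G0                     # -> ~0.29 mN

print(f"thrust ~ {thrust*1e3:.2f} mN, weight ~ {weight*1e3:.2f} mN, "
      f"thrust-to-weight ~ {thrust/weight:.1f}")
```

With these assumed numbers the thrust just exceeds the craft’s weight, which is the regime a hovering microrobot needs to reach.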

The collector grid also provides the ionocraft’s mechanical structure. Having components play more than one role is critical for these tiny robots, which need to be compact and lightweight for the propulsion system to work.

Each ionocraft has four ion thrusters that are independently controlled by adjusting their voltages. This allows the team to control the orientation of the microrobot in a similar way as standard quadcopter drones. Namely, they can control the craft’s roll, pitch and yaw. What they can’t do yet is make the microrobot hover. “So far, we can fly it bouncing around like a bug in a web, but the goal is to get it to hover steadily in the air,” Pister says.
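The article doesn’t spell out the control mapping, but for readers who want a concrete picture, here is a hypothetical quadcopter-style mixer showing how collective thrust plus roll and pitch commands could be turned into four per-thruster levels. The layout, signs and scaling are assumptions, and how the actual ionocraft commands yaw is not described here and is omitted.

```python
# Hypothetical quadcopter-style mixer, included only to illustrate the analogy
# in the text: total lift plus roll/pitch torques are produced by biasing the
# four thruster outputs. Thruster layout, signs and scaling are assumptions.

def mix(throttle, roll, pitch):
    """Map (throttle, roll, pitch) commands to four thruster levels.

    Thrusters are assumed to sit at the four corners:
        0: front-left   1: front-right
        2: rear-left    3: rear-right
    A positive roll command shifts thrust to the left pair; a positive pitch
    command shifts thrust to the front pair, raising the nose. Levels are
    clipped to [0, 1] before the driver electronics convert them into
    high-voltage commands.
    """
    levels = [
        throttle + roll + pitch,   # front-left
        throttle - roll + pitch,   # front-right
        throttle + roll - pitch,   # rear-left
        throttle - roll - pitch,   # rear-right
    ]
    return [round(min(1.0, max(0.0, v)), 2) for v in levels]

print(mix(throttle=0.5, roll=0.1, pitch=-0.05))
# -> [0.55, 0.35, 0.65, 0.45]: more thrust on the left and rear thrusters.
```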

Taking first steps and jumps

In parallel, the researchers are developing microrobots that can walk or jump. Their micro-walker is composed of three silicon chips: a body chip that plugs perpendicularly into two chips with three legs each. “The hexapod microrobot is about the size of a really big ant, but it’s boxier,” says Pister.

Not only does the body chip provide structural support, but it also routes the external power and control signals to the leg chips. These leg chips are oriented vertically, allowing the legs to move along the table in a sweeping motion. Each leg is driven by two tiny on-chip linear motors, called electrostatic inchworm motors, which were invented by Pister. One motor lifts the robot’s body and the second pushes it forward. This unique walking mechanism allows three-dimensional microrobots to be fabricated more simply and cheaply.

Pister says the design should, in theory, allow the hexapod to run. So far it can only stand up and shuffle forward. However, he believes their recent fabrication and assembly improvements will have the microrobot walking more quickly and smoothly soon.

The jumping microrobot also uses on-chip inchworm motors. Its motor assembly compresses springs to store energy, which is then released when the microrobot jumps. Currently, it can only jump several millimeters in the air, but the team’s goal is to have it jump six meters from the floor to the table. To achieve this, they are developing more efficient springs and motors.

“Having robots that can shuffle, jump a little and fly is a major achievement,” Pister says. “They are coming together. But they’re all still tethered by wires for control, data and power signals. ”

Working toward autonomy

Currently, high voltage control signals are passed over wires that connect a computer to a robot, complicating and restricting its movement. The team is developing better ways to control the microrobots, untethering them from the external computer. But transferring the controller onto the microrobot itself is challenging. “Small robots can’t carry the same kind of increasingly powerful computer chips that a standard quadcopter drone can carry,” Drew says. “We need to do more with less.”

So the group is designing and testing a single chip platform that will act as the robots’ brains for communication and control. They plan to send control messages to this chip from a cell phone using wireless technology such as Bluetooth. Ultimately, they hope to use only high-level commands, like “go pollinate the pumpkin field,” which the self-mobilizing microrobots can follow.

The team also plans to integrate on-board sensors, including a camera and microphone to act as the robot’s eyes and ears. These sensors will be used for navigation, as well as any tasks they want the robot to perform. “As the microrobot moves around, we could use its camera and microphone to transmit live video to a cell phone,” says Pister. “This could be used for many applications, including search and rescue.”

Using the brain chip interfaced with on-board sensors will allow the team to eliminate most of the troublesome wires. The next step will be to eliminate the power wires so the robots can move freely. Pister showed early on that solar cells are strong enough to power microrobots. In fact, a microrobot prototype that has been sitting on his office shelf for about 15 years still moves using solar power.

Now, his team is developing a power chip with solar cells in collaboration with Jason Stauth (M.S.’06, Ph.D.’08 EECS), who is an associate professor of engineering at Dartmouth. They’re also working with electrical engineering and computer sciences professor Ana Arias to investigate using batteries.

Finally, the researchers are developing clever machine learning algorithms that guide a microrobot’s motion, making it as smooth as possible.

In Drew’s case, the initial algorithms are based on data from flying a small quadcopter drone. “We’re first developing the machine learning platform with a centimeter-scale, off-the-shelf quadcopter,” says Drew. “Since the control system for an ionocraft is similar to a quadcopter, we’ll be able to adapt and apply the algorithms to our ionocraft. Hopefully, we’ll be able to make it hover.”

Putting it all together

Soon, the team hopes to have autonomous microrobots wandering around the lab directed by cell phone messages. But their ambitions don’t stop there. “I think it’s beneficial to have flying robots and walking robots cooperating together,” Drew says. “Flying robots will always consume more energy than walking robots, but they can overcome obstacles and sense the world from a higher vantage point. There is promise to having both or even a mixed-mobility microrobot, like a beetle that can fly or walk.”

Mixed-mobility microrobots could do things like monitor bridges, railways and airplanes. Currently, static sensors are used to monitor infrastructure, but they are difficult and time-consuming to deploy and maintain — picture changing the batteries of 100,000 sensors across a bridge. Mixed-mobility microrobots could also search for survivors after a disaster by flying, crawling and jumping through the debris.

“Imagine you’re a first responder who comes to the base of a collapsed building. Working by flashlight, it’s hard to see much but the dust hanging in the air,” says Drew. “Now, imagine pulling out a hundred insect-sized robots from your pack, tossing them into the air and having them disperse in all directions. Infrared cameras on each robot look for signs of life. When one spots a survivor, it sends a message back to you over a wireless network. Then a swarm of robots glowing like fireflies leads you to the victim’s location, while a group ahead clears out the debris in your path.”

The applications seem almost endless given the microrobots’ potential versatility and affordability. Pister estimates they might cost as little as one dollar someday, using batch manufacturing techniques. The technology is also likely to reach beyond microrobots.

For Pister’s team, the path forward is clear; the open question is when. “All the pieces are on the table now,” Pister says, “and it’s ‘just’ a matter of integration. But system integration is a challenge in its own right, especially with packaging. We may get results in the next six months — or it may take another five years.”

This is a reposting of my news feature previously published in the fall issue of the Berkeley Engineer magazine. © Berkeley Engineering

Blasting radiation therapy into the future: New systems may improve cancer treatment

Image by Greg Stewart/SLAC National Accelerator Laboratory

As a cancer survivor, I know radiation therapy lasting minutes can seem much longer as you lie on the patient bed trying not to move. Thanks to new funding, future accelerator technology may turn these dreaded minutes into a fraction of a second.

Stanford University and SLAC National Accelerator Laboratory are teaming up to develop a faster and more precise way to deliver X-rays or protons, quickly zapping cancer cells before their surrounding organs can move. This will likely reduce treatment side effects by minimizing damage to healthy tissue.

“Delivering the radiation dose of an entire therapy session with a single flash lasting less than a second would be the ultimate way of managing the constant motion of organs and tissues, and a major advance compared with methods we’re using today,” said Billy Loo, MD, PhD, an associate professor of radiation oncology at Stanford, in a recent SLAC news release.

Currently, most radiation therapy systems work by accelerating electrons through a meter-long tube using radiofrequency fields that travel in the same direction. These electrons then collide with a heavy metal target to convert their energy into high energy X-rays, which are sharply focused and delivered to the tumors.

Now, researchers are developing a new way to more powerfully accelerate the electrons. The key element of the project, called PHASER, is a prototype accelerator component (shown in bronze in this video) that delivers hundreds of times more power than the standard device.

In addition, the researchers are developing a similar device for proton therapy. Although less common than X-rays, protons are sometimes used to kill tumors and are expected to have fewer side effects, particularly in sensitive areas like the brain. That’s because protons deposit relatively little energy as they enter the body and release most of it at the tumor site, where they stop, minimizing the radiation dose to healthy tissue beyond the tumor.

However, proton therapy currently requires large and complex facilities. The Stanford and SLAC team hopes to increase availability by designing a compact, power-efficient and economical proton therapy system that can be used in a clinical setting.

In addition to being faster and possibly more accessible, animal studies indicate that these new X-ray and proton technologies may be more effective.

“We’ve seen in mice that healthy cells suffer less damage when we apply the radiation dose very quickly, and yet the tumor-killing is equal or even a little better than that of a conventional longer exposure,” Loo said in the release. “If the results hold for humans, it would be a whole new paradigm for the field of radiation therapy.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Sensors could provide dexterity to robots, with potential surgical applications

Stanford chemical engineer Zhenan Bao, PhD, has been working for decades to develop an electronic skin that can provide prosthetic or robotic hands with a sense of touch and human-like manual dexterity.

Her team’s latest achievement is a rubber glove with sensors attached to the fingertips. When the glove is placed on a robotic hand, the hand is able to delicately hold a blueberry between its fingertips. As the video shows, it can also gently move a ping-pong ball in and out of holes without crushing it.

The sensors in the glove’s fingertips mimic the biological sensors in our skin, simultaneously measuring the intensity and direction of pressure when touched. Each sensor is composed of three flexible layers that work together, as described in the recent paper published in Science Robotics.

The sensor’s two outer layers have rows of electrical components that are aligned perpendicular to each other. Together, they make up a dense array of small electrical sensing pixels. In between these layers is an insulating rubber spacer.

The electrically-active outer layers also have a bumpy bottom that acts like the spinosum — a spiny sublayer in human skin with peaks and valleys. This microscopic terrain is used to measure the pressure intensity. When a robotic finger lightly touches an object, it is felt by sensing pixels on the peaks. When touching something more firmly, pixels in the valleys are also activated.

Similarly, the researchers use the terrain to detect the direction of the touch. For instance, when the pressure comes from the left, it’s felt by pixels on the left side of the peaks more than on the right side.
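As a toy illustration of that readout logic, the sketch below estimates touch firmness from how much the valley pixels activate and touch direction from the left-right imbalance across a bump. The pixel layout, numbers and thresholds are invented for illustration and are not the sensor’s actual calibration from the Science Robotics paper.

```python
# Toy illustration of the peak/valley readout described above. The pixel
# layout, numbers and thresholds are invented for illustration and are not
# the sensor's actual calibration.

# Readings from a 3x3 patch of sensing pixels centered on one bump:
# columns run left -> right, the middle row sits on the peak, the outer
# rows sit in the valleys.
patch = [
    [0.05, 0.10, 0.02],   # valley row (only activates under firm touch)
    [0.60, 0.90, 0.30],   # peak row (activates first, under light touch)
    [0.04, 0.08, 0.01],   # valley row
]

def describe_touch(patch, firm_threshold=0.05):
    total = sum(sum(row) for row in patch)            # proxy for normal force
    valley_signal = sum(patch[0]) + sum(patch[2])     # grows as the touch firms up
    left = sum(row[0] for row in patch)
    right = sum(row[2] for row in patch)
    firmness = "firm" if valley_signal > firm_threshold else "light"
    direction = "from the left" if left > right else "from the right"
    return f"{firmness} touch, pressure coming {direction}, total signal {total:.2f}"

print(describe_touch(patch))
# -> "firm touch, pressure coming from the left, total signal 2.10"
```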

Once more sensors are added, such electronic gloves could be used for a wide range of applications. As a recent Stanford Engineering news release explains, “With proper programming a robotic hand wearing the current touch-sensing glove could perform a repetitive task such as lifting eggs off a conveyor belt and placing them into cartons. The technology could also have applications in robot-assisted surgery, where precise touch control is essential.”

However, Bao hopes in the future to develop a glove that can gently handle objects automatically. She said in the release:

“We can program a robotic hand to touch a raspberry without crushing it, but we’re a long way from being able to touch and detect that it is a raspberry and enable the robot to pick it up.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.