Blasting radiation therapy into the future: New systems may improve cancer treatment

Image by Greg Stewart/SLAC National Accelerator Laboratory

As a cancer survivor, I know radiation therapy lasting minutes can seem much longer as you lie on the patient bed trying not to move. Thanks to new funding, future accelerator technology may turn these dreaded minutes into a fraction of a second.

Stanford University and SLAC National Accelerator Laboratory are teaming up to develop a faster and more precise way to deliver X-rays or protons, quickly zapping cancer cells before the surrounding organs can move. This is likely to reduce treatment side effects by minimizing damage to healthy tissue.

“Delivering the radiation dose of an entire therapy session with a single flash lasting less than a second would be the ultimate way of managing the constant motion of organs and tissues, and a major advance compared with methods we’re using today,” said Billy Loo, MD, PhD, an associate professor of radiation oncology at Stanford, in a recent SLAC news release.

Currently, most radiation therapy systems work by accelerating electrons through a meter-long tube using radiofrequency fields that travel in the same direction. These electrons then collide with a heavy metal target to convert their energy into high energy X-rays, which are sharply focused and delivered to the tumors.

Now, researchers are developing a new way to more powerfully accelerate the electrons. The key element of the project, called PHASER, is a prototype accelerator component (shown in bronze in this video) that delivers hundreds of times more power than the standard device.
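For a back-of-the-envelope sense of what a sub-second flash implies, here is a quick comparison using typical textbook numbers (a 2 Gy session dose and illustrative delivery times, not figures from the PHASER project):

```python
# Back-of-the-envelope comparison of dose rates. The 2 Gy fraction and the
# delivery times below are illustrative textbook-style values, not numbers
# from the PHASER project.

fraction_dose_gy = 2.0        # a common per-session dose in conventional therapy

conventional_time_s = 60.0    # minutes-scale delivery; assume ~1 minute of beam time
flash_time_s = 0.5            # "a single flash lasting less than a second"

conventional_rate = fraction_dose_gy / conventional_time_s
flash_rate = fraction_dose_gy / flash_time_s

print(f"Conventional dose rate: {conventional_rate:.3f} Gy/s")   # ~0.033 Gy/s
print(f"Flash dose rate:        {flash_rate:.1f} Gy/s")          # ~4 Gy/s
print(f"Ratio: ~{flash_rate / conventional_rate:.0f}x higher instantaneous dose rate")
```

Delivering the same dose a hundred or more times faster is roughly why the accelerator needs so much more instantaneous power.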

In addition, the researchers are developing a similar device for proton therapy. Although less common than X-rays, protons are sometimes used to kill tumors and are expected to have fewer side effects particularly in sensitive areas like the brain. That’s because protons enter the body at a low energy and release most of that energy at the tumor site, minimizing radiation dose to the healthy tissue as the particles exit the body.
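The contrast with X-rays can be sketched with a toy depth-dose model. The curves below are purely schematic, with invented coefficients rather than clinical dosimetry, but they capture the qualitative point: photon dose keeps tapering off past the target, while proton dose peaks sharply near the end of the particles' range and then drops to zero.

```python
# Toy depth-dose curves (arbitrary units) illustrating why protons spare tissue
# beyond the tumor: photons keep depositing dose past the target, while protons
# dump most of their energy at the Bragg peak and then stop. Schematic only,
# not a real dosimetry calculation.
import math

def photon_dose(depth_cm):
    # crude exponential fall-off after a short build-up region near the skin
    return math.exp(-0.06 * depth_cm) * min(depth_cm / 1.5, 1.0)

def proton_dose(depth_cm, bragg_peak_cm=10.0):
    if depth_cm > bragg_peak_cm + 0.5:
        return 0.0                      # the particles have stopped
    # slowly rising plateau that spikes near the end of the range
    return 0.3 + 0.7 * math.exp(-((depth_cm - bragg_peak_cm) ** 2) / 2.0)

for d in range(0, 16, 3):
    print(f"depth {d:2d} cm | photon {photon_dose(d):.2f} | proton {proton_dose(d):.2f}")
```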

However, proton therapy currently requires large and complex facilities. The Stanford and SLAC team hopes to increase availability by designing a compact, power-efficient and economical proton therapy system that can be used in a clinical setting.

In addition to being faster and possibly more accessible, these new X-ray and proton technologies may also be more effective, according to animal studies.

“We’ve seen in mice that healthy cells suffer less damage when we apply the radiation dose very quickly, and yet the tumor-killing is equal or even a little better than that of a conventional longer exposure,” Loo said in the release. “If the results hold for humans, it would be a whole new paradigm for the field of radiation therapy.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Sensors could provide dexterity to robots, with potential surgical applications

Stanford chemical engineer Zhenan Bao, PhD, has been working for decades to develop an electronic skin that can provide prosthetic or robotic hands with a sense of touch and human-like manual dexterity.

Her team’s latest achievement is a rubber glove with sensors attached to the fingertips. When the glove is placed on a robotic hand, the hand is able to delicately hold a blueberry between its fingertips. As the video shows, it can also gently move a ping-pong ball in and out of holes without crushing it.

The sensors in the glove’s fingertips mimic the biological sensors in our skin, simultaneously measuring the intensity and direction of pressure when touched. Each sensor is composed of three flexible layers that work together, as described in the recent paper published in Science Robotics.

The sensor’s two outer layers have rows of electrical components that are aligned perpendicular to each other. Together, they make up a dense array of small electrical sensing pixels. In between these layers is an insulating rubber spacer.

The electrically-active outer layers also have a bumpy bottom that acts like spinosum — a spiny sublayer in human skin with peaks and valleys. This microscopic terrain is used to measure the pressure intensity. When a robotic finger lightly touches an object, it is felt by sensing pixels on the peaks. When touching something more firmly, pixels in the valleys are also activated.

Similarly, the researchers use the terrain to detect the direction of the touch. For instance, when the pressure comes from the left, it’s felt by pixels on the left side of the peaks more than on the right side.
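A minimal sketch of how readings from such a pixel array might be interpreted, assuming hypothetical normalized pixel values and thresholds (the paper's actual signal processing is more sophisticated):

```python
# Hypothetical decoding of a bumpy sensing-pixel array: peak pixels respond to
# light touch, valley pixels only under firmer pressure, and left/right
# asymmetry around a peak suggests the direction of the force. The thresholds
# and data layout are invented for illustration.

def decode(peak_left, peak_right, valley, threshold=0.1):
    """Each argument is a normalized pixel reading in [0, 1]."""
    peak_signal = peak_left + peak_right
    if peak_signal < threshold:
        return "no contact"

    intensity = "firm" if valley >= threshold else "light"

    if peak_left > 1.2 * peak_right:
        direction = "force leaning left"
    elif peak_right > 1.2 * peak_left:
        direction = "force leaning right"
    else:
        direction = "roughly straight down"

    return f"{intensity} touch, {direction}"

print(decode(peak_left=0.20, peak_right=0.18, valley=0.02))  # light, straight down
print(decode(peak_left=0.60, peak_right=0.25, valley=0.30))  # firm, leaning left
```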

Once more sensors are added, such electronic gloves could be used for a wide range of applications. As a recent Stanford Engineering news release explains, “With proper programming a robotic hand wearing the current touch-sensing glove could perform a repetitive task such as lifting eggs off a conveyor belt and placing them into cartons. The technology could also have applications in robot-assisted surgery, where precise touch control is essential.”

However, Bao hopes to eventually develop a glove that can gently handle objects automatically. She said in the release:

“We can program a robotic hand to touch a raspberry without crushing it, but we’re a long way from being able to touch and detect that it is a raspberry and enable the robot to pick it up.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

How does media multitasking affect the mind?

Image by Mohamed Hassan

Imagine that you’re working on your computer, watching the Warriors game, exchanging texts and checking Facebook. Sound familiar? Many people simultaneously view multiple media streams every day.

Over the past decade, researchers have been studying the relationship between this type of heavy media multitasking and cognition to determine how our media use is shaping our minds and brains. This is a particularly critical question for teenagers, who use technology for almost 9 hours every day on average, not including school-related use.

Many studies have examined cognitive performance in young adults using a variety of task-based cognitive tests — comparing the performance of heavy and light multitaskers. According to a recent review article, these studies show that heavy media multitaskers perform significantly worse, particularly when the tasks require sustained, goal-oriented attention.

For example, a pivotal study led by Anthony Wagner, PhD, a Stanford professor of psychology and co-author of the review article, developed a questionnaire-based media multitasking index to identify the two groups — based on the number of media streams a person juggles during a typical media consumption hour, as well as the time spent on each medium. Twelve media forms were included, ranging from computer games to cell phone calls.
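In the spirit of that index, here is a rough sketch of how such a score could be computed from questionnaire answers. The weighting and the example numbers are invented for illustration; the published index has its own exact scoring rules.

```python
# Sketch of a media multitasking index in the spirit of the questionnaire
# described above: for each medium, weight the number of other media typically
# used at the same time by the hours spent with that medium, then normalize by
# total media hours. Example numbers are invented.

def multitasking_index(usage):
    """usage: list of (hours_per_week, other_media_used_concurrently) per medium."""
    total_hours = sum(hours for hours, _ in usage)
    if total_hours == 0:
        return 0.0
    return sum(hours * concurrent for hours, concurrent in usage) / total_hours

light_user = [(10, 0.2), (5, 0.5), (3, 0.0)]   # mostly one medium at a time
heavy_user = [(10, 2.5), (8, 3.0), (6, 1.5)]   # several streams at once

print(f"Light multitasker index: {multitasking_index(light_user):.2f}")  # ~0.25
print(f"Heavy multitasker index: {multitasking_index(heavy_user):.2f}")  # ~2.4
```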

The team administered their questionnaire and several standard cognitive tests to Stanford students. In one series of tests, the researchers measured the working memory capabilities of 22 light multitaskers and 19 heavy multitaskers. Working memory is the mental post-it note used to keep track of information, like a set of simple instructions, in the short term.

“In one test, we show a set of oriented blue rectangles, then remove them from the screen and ask the subject to retain that information in mind. Then we’ll show them another set of rectangles and ask if any have changed orientation,” described Wagner in a recent Stanford Q&A. “To measure memory capacity, we do this task with a different number of rectangles and determine how performance changes with increasing memory loads. To measure the ability to filter out distraction, sometimes we add distractors, like red rectangles that the subjects are told to ignore.”

Wagner also performed standard task-switching experiments in which the students viewed images of paired numbers and letters and analyzed them. The students had to switch back and forth between classifying the numbers as even or odd and the letters as vowels or consonants.
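A single trial of that kind of task-switching paradigm can be sketched in a few lines of code (a simplified illustration, not the lab's actual stimulus program):

```python
# Simplified task-switching trials: a number-letter pair is shown, and the cue
# tells the participant whether to classify the number (even/odd) or the letter
# (vowel/consonant). Switch trials are typically slower and less accurate than
# repeat trials, which is the cost being measured.
import random

VOWELS = set("AEIOU")

def classify(pair, task):
    number, letter = pair
    if task == "number":
        return "even" if number % 2 == 0 else "odd"
    return "vowel" if letter in VOWELS else "consonant"

trials = [((random.randint(1, 9), random.choice("AEIOUBCDFG")),
           random.choice(["number", "letter"])) for _ in range(5)]

previous_task = None
for pair, task in trials:
    trial_type = "switch" if previous_task and task != previous_task else "repeat"
    print(f"{pair} -> {task} task ({trial_type}): {classify(pair, task)}")
    previous_task = task
```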

The Stanford study showed that heavy multitaskers were less effective at filtering out irrelevant stimuli, whereas light multitaskers found it easier to focus on a single task in the face of distractions.

Overall, this previous study is representative of the twenty subsequent studies discussed in the recent review article. Wagner and co-author Melina Uncapher, PhD, a neuroscientist at the University of California, San Francisco, theorized that lapses in attention may explain most of the current findings: heavy media multitaskers have more difficulty than light multitaskers staying on task and returning to the task once their attention has lapsed.

However, the authors emphasized that the large diversity of the current studies and their results raise more questions than they answer, such as the direction of causation: Does heavier media multitasking cause cognitive and neural differences, or do individuals with such preexisting differences tend toward more multitasking behavior? They said more research is needed.

Wagner concluded in the Q&A:

“I would never tell anyone that the data unambiguously show that media multitasking causes a change in attention and memory. That would be premature… That said, multitasking isn’t efficient. We know there are costs of task switching. So that might be an argument to do less media multitasking.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Wearable device designed to measure cortisol in sweat

Photo by Brodie Vissers

Scientists are sweating over how to measure perspiration. That’s because sweat provides a lot of information about a person’s health status, since it contains important electrolytes, proteins, hormones and other factors.

Now, Stanford researchers have developed a wearable device to measure how much cortisol people produce in their sweat.

Cortisol is a hormone critical for many processes in the body, including blood pressure, metabolism, inflammation, memory formation and emotional stress. Too much cortisol over a prolonged period of time can lead to chronic diseases, such as Cushing syndrome.

“We are particularly interested in sweat sensing, because it offers noninvasive and continuous monitoring of various biomarkers for a range of physiological conditions,” said Onur Parlak, PhD, a Stanford postdoctoral research fellow in materials science and engineering, in a recent news release. “This offers a novel approach for the early detection of various diseases and evaluation of sports performance.”

Currently, cortisol levels are usually measured with a blood test that takes several days to analyze in the lab. So Stanford materials scientists developed a wearable sensor — a stretchy patch placed on the skin. After the patch soaks up sweat, the user attaches it to a device for analysis and gets the cortisol level measurements in seconds.

As recently reported in Science Advances, the new wearable sensor is composed of four layers of materials. The bottom layer next to the skin passively wicks in sweat through an array of channels, and the sweat then collects in the reservoir layer. Sitting on top of the reservoir is the critical component, a specialized membrane that specifically binds cortisol. Charged ions in the sweat, like sodium or potassium, pass through this membrane unless bound cortisol blocks them, so the analysis device detects these ions rather than measuring the cortisol directly. Finally, the top waterproof layer protects the sensor from contamination.
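In other words, the readout is indirect: more cortisol means fewer ions get through, so a lower ion signal maps to a higher cortisol level. The sketch below shows that inversion with an invented calibration table; the real device's electronics and calibration are more involved.

```python
# Illustrative readout logic for a sensor whose cortisol-binding membrane blocks
# ion flow: a *lower* measured ion signal implies a *higher* cortisol level.
# The calibration points and units below are invented.

# Hypothetical calibration: (ion signal in microamps, cortisol in ng/mL)
CALIBRATION = [(10.0, 0.0), (8.0, 40.0), (6.0, 80.0), (4.0, 120.0)]

def cortisol_from_signal(signal_ua):
    """Linear interpolation over the calibration table (sorted by signal)."""
    points = sorted(CALIBRATION)                      # ascending ion signal
    for (s_lo, c_lo), (s_hi, c_hi) in zip(points, points[1:]):
        if s_lo <= signal_ua <= s_hi:
            frac = (signal_ua - s_lo) / (s_hi - s_lo)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("signal outside calibrated range")

print(cortisol_from_signal(9.0))   # weak blocking -> low cortisol (~20 ng/mL)
print(cortisol_from_signal(5.0))   # strong blocking -> high cortisol (~100 ng/mL)
```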

The Stanford researchers did a series of validation tests in the lab, and then they strapped the device onto the forearms of two volunteers after they went for a 20-minute outdoor run. Their device’s lab and real-world results were comparable to the corresponding cortisol measurements made with a standard analytic biochemistry assay.

Before this prototype becomes available, however, more research is needed. The research team plans to integrate the wearable patch with the analysis device, while also making it more robust when saturated with sweat so it’s reusable. They also hope to generalize the design to measure several biomarkers at once, not just cortisol.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Stanford and Common Sense Media explore effects of virtual reality on kids

Photo by Andri Koolme

Although we’re still a long way off from the virtual reality universe depicted in the new movie “Ready Player One,” VR is becoming a reality in many homes. But how is this immersive technology impacting our kids’ cognitive, social and physical well-being?

Stanford researchers and Common Sense Media are investigating the potential effects of virtual reality on children. And a just-released report provides parents and educators with a practical guide on VR use.

“The truth is, when it comes to VR and kids, we just don’t know that much. As a community, we need more research to understand these effects,” Jeremy Bailenson, PhD, a Stanford communication professor and the founder of Stanford’s Virtual Human Interaction Lab, wrote in an introduction to the report.

The research team surveyed over 3600 U.S. parents about their family’s use of virtual reality. “Until this survey, it was unclear how, and even how many, kids were using virtual reality,” said Bailenson in a recent Stanford news release. “Now we have an initial picture of its adoption and use.”

The report summarizes results from this survey and previous VR research. Here are its key findings:

  • VR powerfully affects kids, because it can provoke a response to virtual experiences similar to actual experiences.
  • Long-term effects of VR on developing brains and health are unknown. Most parents are concerned, and experts advocate moderation and supervision.
  • Only one in five parents report living in a household with VR, and while their interest is mixed, children are more enthusiastic.
  • Characters in VR may be especially influential on young children.
  • Students are more enthusiastic about learning while using VR, but they don’t necessarily learn more.
  • VR has the potential to encourage empathy and diminish implicit racial bias, but most parents are skeptical.
  • When choosing VR content, parents should consider whether they would want their children to have the same experience in the real world.

Ultimately, the report recommends moderation. “Instead of hours of use, which might apply to other screens, think in terms of minutes,” Bailenson wrote. “Most VR is meant to be done on the five- to 10-minute scale.”  At Stanford’s Virtual Human Interaction Lab, even adults use VR for 20 minutes or less.

One known potential side effect from overuse is simulator sickness, which is caused by a lag in time between a person’s body movements and the virtual world’s response. Some parents also reported that their child experienced a headache, dizziness or eye strain after VR use.

In addition, the researchers advise parents to consider safety. Virtual reality headsets block out stimuli from the physical world, including hazards, so users can bump into things, trip or otherwise harm themselves.

A good option, they wrote, is to bring your child to a location-based VR center that provides well-maintained equipment, safety spotters and social interactions with other kids.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Local knowledge key to building healthier communities

Photo by Chris Waits

Your zip code is just a number meant to guide mail delivery, but studies show that it predicts your lifespan better than your genetic code. For instance, the average life expectancy in New Orleans varies by as much as 25 years in communities only a few miles apart.

This health disparity is driving health care providers, researchers, urban planners and community members to work together to build healthier, more equitable communities — addressing the key factors that determine health and well-being outside the clinic.

“It’s not enough to ask how we can build healthier, happier and greener communities without first addressing the real inequalities that are impacting the design of our cities,” said Antwi Akom, PhD, an associate professor of environmental sociology, public health and STEM education at San Francisco State University, at Stanford Medicine X earlier this month.

However, this design movement depends on access to reliable data, which led the Obama administration to launch The Opportunity Project to “unleash the power of data and technology to expand economic opportunity in communities nationwide.” The project released 12 smartphone apps to provide easy access to governmental data on housing, transportation, schools, neighborhood amenities and other critical community resources.

One of these apps, called Streetwyze, was developed by Akom and Aekta Shah, a PhD candidate at Stanford University, through the Institute for Economic, Educational and Environmental Design. Streetwyze is a mobile, mapping and SMS platform that collects real-time information about how people are experiencing cities and local services, so the data can be turned into actionable analytics.

“The real challenge of the 21st century health data revolution is how do you bridge this gap between official knowledge and local knowledge in ways that make the data more reliable, valuable, authentic and meaningful from the perspective of everyday people?” said Akom at Medicine X. “We think the missing link is real-time two-way communication with everyday people so they can participate in the design solutions that meet their everyday needs.”

Streetwyze harnesses local knowledge to address questions like: How walkable is my neighborhood? Where can I buy affordable healthy food? How safe is my local park?

For example, a map of East Oakland based on county and city business permits shows many grocery stores in the area. But the reality, according to Akom and Streetwyze, is that most of these supposed grocery stores are actually liquor or corner stores, where you can’t find fresh vegetables or other fresh food.
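A toy example of that mismatch, with entirely made-up records rather than Streetwyze data, shows how different the same neighborhood can look depending on whether you trust the permit category or an on-the-ground label:

```python
# Made-up example of reconciling official permit categories with
# community-verified labels: what a permit calls a "grocery" may turn out to be
# a liquor or corner store with no fresh food.

permits = {           # address -> category from business permits
    "1200 Main St": "grocery",
    "1315 Main St": "grocery",
    "1400 Oak Ave": "grocery",
}
ground_truth = {      # address -> label reported by local residents
    "1200 Main St": "liquor store",
    "1315 Main St": "grocery",
    "1400 Oak Ave": "corner store",
}

verified = [a for a, label in ground_truth.items() if label == permits[a]]
print(f"Permitted as grocery: {len(permits)}; verified on the ground: {len(verified)}")
```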

In addition to providing more reliable data to design healthier communities in the future, the Streetwyze data already plays a critical role for community members and some organizations. “Every community has assets,” said Shah. “The Streetwyze platform actually helps lift those up, so that communities can better share those resources and organize around those assets that already exist.”

At Stanford, Shah is using Streetwyze to research how this digital technology may impact youth self-esteem, civic engagement, environmental stewardship and more.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Stanford researcher explores use of meditation app to reduce physician burnout

Photo courtesy of Louise Wen

Slammed by long and unpredictable hours, heavy clinical workloads, fatigue and limited professional control, many medical residents experience stress and even burnout. And surveys indicate this burnout can seriously impact physician well-being and patient care outcomes.

How do you combat burnout? Studies show that meditation can improve well-being, but jamming one more thing into a resident’s hectic day is tough, as Louise Wen, MD, a clinical instructor at Stanford’s Department of Anesthesiology, Perioperative and Pain Medicine, points out. So Wen joined a team of Stanford researchers to test the effectiveness of a mindfulness app, and their work was published this summer in Academic Psychiatry.

I recently spoke with her about the pilot study.

What inspired your study?

“I experienced burnout as a resident, and meditation was a key aspect of my recovery. Growing up, I had been introduced to meditation by my family. In college, I trained to become a yoga teacher and therapist. However, once residency started, my meditation practice essentially stopped.

My low point in residency was precipitated by an HIV needle-stick injury. The month-long antiretroviral prophylactic therapy was effective, but I struggled with the medication’s side effects. My mother advised me to meditate, and afterwards, I felt like my brain had been rebooted. Surprised by the effect of such a brief intervention, I wanted to explore ways to introduce this technique to other time-strapped and stressed residents.”

Why did you use a mindfulness app?

“The gold standard for mindfulness studies is a Mindfulness Based Stress Reduction course developed by Jon Kabat-Zinn, PhD. This eight-week course entails a two-hour group class weekly and 45 minutes of individual home practice daily, plus one full-day silent retreat. This excellent and evidence-based intervention is unfortunately not a feasible format for residents. Instead, the Headspace app on a smart phone delivers guided meditations in an efficient and accessible format.

For the study, we recruited 43 residents from general surgery, anesthesia and obstetrics and gynecology. They were asked to use the app at least two times per week for a month. The app provided 10-minute guided audio meditations, animated videos and longer focused meditations.”

How did you measure whether the app improved wellness?

“Our participating residents were asked to complete surveys measuring their stress, mindfulness and app usage — at enrollment, week 2 and week 4. We found that residents benefitted from using the app and this benefit correlated with increasing app usage.”
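The kind of relationship described can be illustrated with a simple correlation between app usage and the change in a stress score. The numbers below are invented and the published analysis is more detailed, but the sketch shows the shape of the finding:

```python
# Hypothetical sketch of the analysis described: correlate each resident's app
# usage over the month with the change in a stress score between enrollment and
# week 4. Data are invented for illustration. Requires Python 3.10+ for
# statistics.correlation (Pearson's r).
from statistics import correlation

app_sessions  = [2, 4, 5, 8, 9, 12, 15, 16]        # sessions per resident
stress_change = [-1, -2, -1, -4, -3, -5, -6, -7]   # negative = improvement

r = correlation(app_sessions, stress_change)
print(f"Pearson r between usage and stress change: {r:.2f}")  # strongly negative
```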

Are you doing any follow-up studies?

“A significant challenge of our app study was motivating people to practice the intervention. We’re now working on a study based on the concept of the popular opinion leader. We have developed a four-week, video-based curriculum for anesthesia residents. These videos feature interviews with attendings from our department, where they share their personal meditation and gratitude practices. We showed the videos to the intervention group of residents, whereas the control group watched a boring video of me saying that they should meditate. We are now analyzing the data.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.