Designing an inexpensive surgical headlight: A Q&A with a Stanford surgeon

Photo by Jared Forrester / © Lifebox 2017

For millions of people throughout the world, even the simplest surgeries can be risky due to challenging conditions like frequent power outages. In response, Stanford surgeon Thomas Weiser, MD, is part of a team from Lifebox working to develop a durable, affordable and high-quality surgical headlamp for use in low-resource settings. Lifebox is a nonprofit that aims to make surgery safer throughout the world.

Why is an inexpensive surgical headlight important?

“The least expensive headlight in the United States costs upwards of $1000, and most cost quite a bit more. They are very powerful and provide excellent light, but they’re not fit for purpose in lower-resource settings. They are Ferraris when what we need is a Tata – functional, but affordable.

Jared Forrester, MD, a third-year Stanford surgical resident, lived and worked in Ethiopia for the last two years. During that time, he noted that 80 percent of surgeons working in low- and middle-income countries identify poor lighting as a safety issue and nearly 20 percent report direct experience of poor-quality lighting leading to negative patient outcomes. So there is a massive need for a lighting solution.”

How did you develop your headlamp specifications?

“Jared started by passing a number of off-the-shelf medical headlights around to surgeons in Ethiopia. We also asked surgeons in the U.S. and the U.K. to try them out, to see how they felt and to evaluate what was good and bad about them.

We performed some illumination and identification tests using pieces of meat in a shoebox with a slit cut in it to mimic a limited field of view and a deep hole. We asked surgeons to use the lights at various power levels with the room lights off, with just the room lights on, and with overhead surgical lights focused on the field. That way we could evaluate the range of light needed in settings with highly variable lighting, something that does not really exist here in the U.S.”

How do they differ from recreational headlamps?

“Recreational headlights have their uses and I’ve seen them used for providing care — including surgery. However, they tend to be uncomfortable during long cases and not secure on the head. Also, the light isn’t uniformly bright. You can see this when you shine a recreational light on a wall: there is a halo and the center is a different brightness than the outer edge of the light. This makes distinguishing tissue planes and anatomy more difficult.”

What are the barriers to implementation?

“While surgeons working in these settings all express interest in having a quality headlight, there is no reliable manufacturer or distributor for them. Surgeons cannot afford expensive lights, and no one has stepped up to provide a low-cost alternative that is robust, high quality and durable. We’re working to change that.”

What are your next steps?

“We are now evaluating a select number of headlights and engaging manufacturers in discussions about their current devices and what changes might be needed to make a final light at a price point that would be affordable to clinicians and facilities in these settings. By working through our networks and using our logistical capacity, we can connect the manufacturer with a new market that currently does not exist — but is ready and waiting to be developed.

We believe these lights will improve the ability of surgeons to provide better, safer surgical care and also allow emergency cases to be completed at night when power fluctuations are most problematic. These lights should increase the confidence of the surgeon that the operation can be performed safely.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

The future hope of “flash” radiation cancer therapy


The goal of cancer therapy is to destroy the cancer cells while minimizing side effects and damage to the rest of the body. Common types of treatment include surgery, chemotherapy, targeted therapy and radiation therapy. Often combined with surgery or drugs, radiation therapy uses high-energy X-rays to harm the DNA and other critical processes of the rapidly-dividing cancer cells.

Innovations in radiation therapy were the focus of a recent episode of the SiriusXM radio show “The Future of Everything.” On hand was Stanford’s Billy Loo, MD, PhD, a professor of radiation oncology, who spoke with Stanford professor and radio host Russ Altman, MD, PhD.

Radiation has been used to treat cancer for over a century, but today’s technologies target the tumor with far greater precision and speed than in the past. Loo explained that modern radiotherapy delivers low-dose beams of X-rays from multiple directions, which are accurately focused on the tumor so the surrounding healthy tissues get only a small dose while the tumor gets blasted. Radiation oncologists use imaging — CT, MRI or PET — to determine the three-dimensional shape of the tumor to target.

“We identify the area that needs to be treated, where the tumor is in relationship to the normal organs, and create a plan of the sculpted treatment,” Loo said. “And then during the treatment, we also use imaging … to see, for example, whether the radiation is going where we want it to go.”

In addition, oncologists now implement technologies in the clinic to compensate for motion, since organs like the lungs are constantly moving and patients have trouble lying still even for a few minutes. “We call it motion management. We do all kinds of tricks like turning on the radiation beam synchronized with the breathing cycle or following tumors around with the radiation beam,” explained Loo.

Currently, that is how standard radiation therapy works. However, Stanford radiation oncologists are collaborating with scientists at SLAC National Accelerator Laboratory to develop an innovative technology called PHASER. Although Loo admits the acronym was inspired by his love of Star Trek, PHASER stands for pluridirectional high-energy agile scanning electronic radiotherapy. This new technology delivers the radiation dose of an entire therapy session in a single flash lasting less than a second — faster than the body moves.

“We wondered, what if the treatment was done so fast — like in flash photography — that all the motion is frozen? That’s a fundamental solution to this motion problem that gives us the ultimate precision,” he said. “If we’re able to treat more precisely with less spillage of radiation dose into normal tissues, that gives us the benefit of being able to kill the cancer and cause less collateral damage.”

The research team is currently testing the PHASER technology in mice, resulting in an exciting discovery — the biological response to flash radiotherapy may differ from slower traditional radiotherapy.

“We and a few other labs around the world have started to see that when the radiation is given in a flash, we see equal or better tumor killing but much better normal tissue protection than with the conventional speed of radiation,” Loo said. “And if that translates to humans, that’s a huge breakthrough.”

Loo also explained that their PHASER technology has been designed to be compact, economical, reliable and clinically efficient to provide a robust, mobile unit for global use. They expect it to fit in a standard cargo shipping container and to run on solar energy and batteries.

“About half of the patients in the world today have no access to radiation therapy for technological and logistical reasons. That means millions of patients who could potentially be receiving curative cancer therapy are getting treated purely palliatively. And that’s a huge tragedy,” Loo said. “We don’t want to create a solution that everyone in the world has to come here to get — that would have limited impact. And so that’s been a core principle from the beginning.”

This is a reposting of my Scope blog post, courtesy of Stanford School of Medicine.

Can AI improve access to mental health care? Possibly, Stanford psychologist says

Image by geralt

“Hey Siri, am I depressed?” When I posed this question to my iPhone, Siri’s reply was “I can’t really say, Jennifer.” But someday, software programs like Siri or Alexa may be able to talk to patients about their mental health symptoms to assist human therapists.

To learn more, I spoke with Adam Miner, PsyD, an instructor and co-director of Stanford’s Virtual Reality-Immersive Technology Clinic, who is working to improve conversational AI to recognize and respond to health issues.

What do you do as an AI psychologist?

“AI psychology isn’t a new specialty yet, but I do see it as a growing interdisciplinary need. I work to improve mental health access and quality through safe and effective artificial intelligence. I use methods from social science and computer science to answer questions about AI and vulnerable groups who may benefit or be harmed.”

How did you become interested in this field?

“During my training as a clinical psychologist, I had patients who waited years to tell anyone about their problems for many different reasons. I believe the role of a clinician isn’t to blame people who don’t come into the hospital. Instead, we should look for opportunities to provide care when people are ready and willing to ask for it, even if that is through machines.

I was reading research from different fields like communication and computer science and I was struck by the idea that people may confide intimate feelings to computers and be impacted by how computers respond. I started testing different digital assistants, like Siri, to see how they responded to sensitive health questions. The potential for good outcomes — as well as bad — quickly came into focus.”

Why is technology needed to assess the mental health of patients?

“We have a mental health crisis and existing barriers to care — like social stigma, cost and treatment access. Technology, specifically AI, has been called on to help. The big hope is that AI-based systems, unlike human clinicians, would never get tired, be available wherever and whenever the patient needs them and know more than any human could ever know.

However, we need to avoid inflated expectations. There are real risks around privacy, ineffective care and worsening disparities for vulnerable populations. There’s a lot of excitement, but also a gap in knowledge. We don’t yet fully understand all the complexities of human–AI interactions.

People may not feel judged when they talk to a machine the same way they do when they talk to a human — the conversation may feel more private. But it may in fact be more public because information could be shared in unexpected ways or with unintended parties, such as advertisers or insurance companies.”

What are you hoping to accomplish with AI?

“If successful, AI could help improve access in three key ways. First, it could reach people who aren’t accessing traditional, clinic-based care for financial, geographic or other reasons like social anxiety. Second, it could help create a ‘learning healthcare system’ in which patient data is used to improve evidence-based care and clinician training.

Lastly, I have an ethical duty to practice culturally sensitive care as a licensed clinical psychologist. But a patient might use a word to describe anxiety that I don’t know and I might miss the symptom. AI, if designed well, could recognize cultural idioms of distress or speak multiple languages better than I ever will. But AI isn’t magic. We’ll need to thoughtfully design and train AI to do well with different genders, ethnicities, races and ages to prevent further marginalizing vulnerable groups.

If AI could help with diagnostic assessments, it might allow people to access care who otherwise wouldn’t. This may help avoid downstream health emergencies like suicide.”

How long until AI is used in the clinic?

“I hesitate to give any timeline, as AI can mean so many different things. But a few key challenges need to be addressed before wide deployment, including the privacy issues, the impact of AI-mediated communications on clinician-patient relationships and the inclusion of cultural respect.

The clinician–patient relationship is often overlooked when imagining a future with AI. We know from research that people can feel an emotional connection to health-focused conversational AI. What we don’t know is whether this will strengthen or weaken the patient-clinician relationship, which is central to both patient care and a clinician’s sense of self. If patients lose trust in mental health providers, it will cause real and lasting harm.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Seeing the Web of Microbes

New Web-based Tool Hosted at NERSC Helps Visualize Exometabolomic Data

Web of Microbes “The Web” view (BG11 + M. vaginatus extract): Microcoleus vaginatus and six heterotrophic biocrust isolates in M. vaginatus extract. The metabolite composition of the control medium is represented by the solid tan circles. Hollow circles are metabolites that were only identified after microbial transformation (indicating production or release by at least one of the organisms and not initially present in the control medium). Connecting lines indicate an increase (red) or decrease (blue) in the metabolite level in the spent medium compared to the control.

Understanding nutrient flows within microbial communities is important to a wide range of fields, including medicine, bioremediation, carbon sequestration, and sustainable biofuel development. Now, researchers from Lawrence Berkeley National Laboratory (Berkeley Lab) have built an interactive, web-based data visualization tool to observe how organisms transform their environments through the increase and decrease of metabolites — enabling scientists to quickly see patterns in microbial food webs.

This visualization tool — the first of its kind — is a key part of a new data repository, the Web of Microbes (WoM) that contains liquid chromatography mass spectrometry datasets from the Northen Metabolomics Lab located at the U.S. Department of Energy’s Joint Genome Institute (JGI). The Web of Microbes project is an interdisciplinary collaboration between biologists and computational researchers at Berkeley Lab and the National Energy Research Scientific Computing Center (NERSC). JGI and NERSC are both DOE Office of Science user facilities.

“While most existing databases focus on metabolic pathways or identifications, the Web of Microbes is unique in displaying information on which metabolites are consumed or released by an organism to an environment such as soil,” said Suzanne Kosina, a senior research associate in Berkeley Lab’s Environmental Genomics & Systems Biology (EGSB) Division, a member of the DOE ENIGMA Scientific Focus Area, and lead author on a paper describing WoM published in BMC Microbiology. “We call them exometabolites since they are outside of the cell. Knowing which exometabolites a microbe ‘eats’ and produces can help us determine which microbes might benefit from growing together or which might compete with each other for nutrients.”

Four Different Viewpoints

WoM is a Python application built on the Django web development framework. It is served from a self-contained Python environment on the NERSC global filesystem by an Apache web server. Visualizations are created with JavaScript, Cascading Style Sheets (CSS) and the D3 JavaScript visualization library.

Four different viewing methods are available by selecting the tabs labeled “The Web”, “One Environment”, “One Organism”, and “One Metabolite.” “The Web” view graphically displays data constrained by the selection of an environment, while the other three tabs display tabular data from three constrainable dimensions: environment, organism, and metabolite.

“You can think of the 3D datasets as a data cube,” said NERSC engineer Annette Greiner, second author on the BMC Microbiology paper. “The visualization tool allows you to slice the data cube in any direction. And each of these slices gives one of the 2D views: One Environment, One Organism, or One Metabolite.”
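
To make the data-cube picture concrete, here is a minimal sketch (not taken from the WoM codebase; the array shape and values are made up) of slicing a three-dimensional exometabolomics array, indexed by environment, organism and metabolite, into the three tabular views:

```python
import numpy as np

# Hypothetical data cube of metabolite changes, indexed as
# [environment, organism, metabolite]; positive = released, negative = consumed.
n_env, n_org, n_met = 3, 7, 50
cube = np.random.default_rng(0).normal(size=(n_env, n_org, n_met))

# "One Environment": organisms x metabolites for a chosen environment
one_environment = cube[0, :, :]

# "One Organism": environments x metabolites for a chosen organism
one_organism = cube[:, 2, :]

# "One Metabolite": environments x organisms for a chosen metabolite
one_metabolite = cube[:, :, 10]

print(one_environment.shape, one_organism.shape, one_metabolite.shape)
# (7, 50) (3, 50) (3, 7)
```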

The most intuitive way to view the data is via The Web, which displays an overview of connections between organisms and the nutrients they act on within a selected environment. After choosing the environment from a pull-down menu, The Web provides a network diagram in which each organism is represented as a little box, each metabolite as a circle, and their interactions as connecting lines. The size of the circle scales with the number of organisms that interact with that metabolite, whereas the color and shade of the connecting line indicate the amount of increase (red) or decrease (blue) in the metabolite level due to microbial activity.

“Having a lot more connecting lines indicates there’s more going on in terms of metabolism with those compounds in the environment. You can clearly see differences in behavior between the organisms,” Greiner said. “For instance, an organism with a dense number of red lines indicates that it produces many metabolites.”

Although The Web view gives users a useful qualitative assessment of metabolite interaction patterns, the other three tabular views provide more detailed information.

The One Environment view addresses to what extent the organisms in a single environment compete or coexist with each other. The heatmap table shows which metabolites (shown in rows) are removed from or added to the environment by each of the organisms (shown in columns), where the color of each table cell indicates the amount of metabolic increase or decrease. And icons identify whether pairs of organisms compete (X) or are compatible (interlocking rings) for a given metabolite.
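
The compete-or-coexist icons can be thought of as a simple rule applied to the signs of those changes. Below is one plausible sketch, assuming a negative change means the organism consumed the metabolite; the data and the rule are illustrative, not WoM's actual implementation:

```python
from itertools import combinations

# Hypothetical changes for one environment: organism -> {metabolite: fold-change}.
# Negative values mean the metabolite decreased (was consumed) in the spent medium.
changes = {
    "org_A": {"glucose": -1.2, "alanine": +0.8},
    "org_B": {"glucose": -0.9, "alanine": -0.3},
    "org_C": {"glucose": +0.4, "alanine": -0.6},
}

def relationship(org1, org2, metabolite):
    """Label a pair as competing ('X') if both consume the metabolite,
    otherwise as compatible ('rings')."""
    c1 = changes[org1].get(metabolite, 0.0)
    c2 = changes[org2].get(metabolite, 0.0)
    return "X" if (c1 < 0 and c2 < 0) else "rings"

for a, b in combinations(changes, 2):
    print(a, b, "glucose:", relationship(a, b, "glucose"))
```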

“For example, if you’re trying to design a bioreactor and you want to know which organisms would probably work well together in the same environment, then you can look for things with interlocking rings and try to avoid the Xs,” said Greiner.

Similarly, the One Organism heatmap table allows users to compare the actions of a single microbe on many metabolites across multiple environments. And users can use the One Metabolite table to compare the actions of multiple organisms on a selected metabolite in multiple environments.

“Ultimately, WoM provides a means for improving our understanding of microbial communities,” said Trent Northen, a scientist at JGI and in Berkeley Lab’s EGSB Division. “The data and visualization tools help us predict and test microbial interactions with each other and their environment.”

Participatory Design

The WoM tools were developed iteratively using a participatory design process, in which research scientists from Northen’s lab worked directly with Greiner to identify needs and quickly try out solutions. This differed from the more traditional approach, in which Greiner would have completed a full design for the user interface before showing it to the scientists.

Both Greiner and Kosina agreed that the collaboration was fun and productive. “Instead of going off to a corner alone trying to come up with something, it’s useful to have a user sitting on my shoulder giving me feedback in real time,” said Greiner. “Scientists often have a strong idea about what they need to see, so it pays to have frequent interactions and to work side by side.”

In addition to Greiner’s expertise in data visualization and web application development, NERSC hosts WoM and stores the data. NERSC’s computing resources and well-established science gateway infrastructure should enable WoM to grow both in volume and features in a stable and reliable environment, the development team noted in the BMC Microbiology paper.

According to Greiner, the data itself doesn’t take up much storage space, but that may change. Currently, only Northen’s group can upload data, but the team hopes to support multiple user groups in the future. For now, the Berkeley Lab researchers are excited to share their data on the Web of Microbes, where it can be used by scientists all over the world. And they plan to add more data to the repository as they perform new experiments.

Kosina said it also made sense to work with NERSC on the Web of Microbes project because the Northen metabolomics lab relies on many other tools and resources at NERSC. “We already store all of our mass spectrometry data at NERSC and run our analysis software on their computing systems,” Kosina said.

Eventually, the team plans to link the Web of Microbes exometabolomics data to mass spectrometry and genomics databases such as JGI’s Genome Portal. They are also working with the DOE Systems Biology Knowledgebase (KBase) to allow users to take advantage of KBase’s predictive modeling capabilities, Northen added, which will enable researchers to determine the functions of unknown genes and predict microbial interactions.

This is a reposting of my news feature originally published by Berkeley Lab’s Computing Sciences.

Microrobots fly, walk and jump into the future

Assembling an ionocraft microrobot in UC Berkeley’s Swarm Lab. (Photos by Adam Lau)

A tiny robot takes off and drunkenly flies several centimeters above a table in the Berkeley Sensor and Actuator Center. Roughly the size and weight of a postage stamp, the microrobot consists of a mechanical structure, propulsion system, motion-tracking sensor and multiple wires that supply power and communication signals.

This flying robot is the project of Daniel Drew, a graduate student who is working under the guidance of electrical engineering and computer sciences professor Kris Pister (M.S.’89, Ph.D.’92 EECS). The culmination of decades of research, these microrobots arose from Pister’s invention of “smart dust,” tiny chips roughly the size of rice grains packed with sensors, microprocessors, wireless radios and batteries. Pister likes to refer to his microrobots as “smart dust with legs.”

“We’re pushing back the boundaries of knowledge in the field of miniaturization, robotic actuators, micro-motors, wireless communication and many other areas,” says Pister. “Where these results will lead us is difficult to predict.”

For now, Pister and his team are aiming to make microrobots that can self-deploy, in the hopes that they could be used by first responders to search for survivors after a disaster, industrial plants to detect chemical leaks or farmers to monitor and tend their crops.

These insect-sized robots come with a unique advantage for solving problems. For example, many farmers already use large drones to monitor and spray their plants to improve crop quality and yield. Microrobots could take this to a whole new level. “A standard quadcopter gives us a bird’s eye view of the field, but a microrobot would give us a bug’s eye view,” Drew says. “We could program them to do important jobs like pollination, looking for the same visual cues on flowers as insects [see].”

But to apply this kind of technology on a mass scale, the team first has to overcome significant challenges in microtechnology. And as Pister says, “Making tiny robots that fly, walk or jump hasn’t been easy. Every single piece of it has been hard.”

Flying silently with ion propulsion

Most flying microrobots have flapping wings that mimic real-life insects, like bees. But the team’s flying microrobot, called an ionocraft, uses a custom ion propulsion system unlike anything in nature. There are no moving parts, so it has the potential to be very durable. And it’s completely silent when it flies, so it doesn’t make an annoying buzz like a quadcopter rotor or mosquito.

The ionocraft’s propulsion system is novel, not just a scaled-down version of the ion drives used on NASA spacecraft. “We use a mechanism that’s different than the one used in space, which ejects ions out the back to propel the spacecraft forward,” Drew says. “A key difference is that we have air on Earth.”

Instead, the ionocraft thruster consists of a thin emitter wire and a collector grid. When a voltage is applied between them, a positively-charged ion cloud is created around the wire. This ion cloud zips toward the negatively-charged collector grid, colliding with neutral air molecules along the way. The air molecules are knocked out of the way, creating a wind that moves the robot.

“If you put your hand under the collector grid of the ionocraft, you’ll feel wind on your hand — that’s the air stream that propels the microrobot upwards,” explains Drew. “It’s similar to the airstream that you’d feel if you put your hand under the rotor blades of a helicopter.”
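
For a rough sense of scale, electrohydrodynamic thrusters like this are often analyzed with the simple one-dimensional relation F ≈ I·d/μ, where I is the corona current, d is the emitter-to-collector gap and μ is the ion mobility in air. The numbers in this sketch are assumptions for illustration, not measurements from the ionocraft:

```python
# Back-of-the-envelope electrohydrodynamic (EHD) thrust, F ~ I * d / mu.
# All values below are illustrative assumptions, not measured ionocraft numbers.
I = 0.3e-3      # corona current in amperes (assumed)
d = 0.5e-3      # emitter-to-collector gap in meters (assumed)
mu = 2.0e-4     # mobility of positive ions in air, m^2/(V*s) (typical value)

thrust = I * d / mu                                # newtons
print(f"Estimated thrust: {thrust * 1e3:.2f} mN")  # ~0.75 mN for these numbers
```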

The collector grid also provides the ionocraft’s mechanical structure. Having components play more than one role is critical for these tiny robots, which need to be compact and lightweight for the propulsion system to work.

Each ionocraft has four ion thrusters that are independently controlled by adjusting their voltages. This allows the team to control the orientation of the microrobot in much the same way as a standard quadcopter drone: they can control the craft’s roll, pitch and yaw. What they can’t do yet is make the microrobot hover. “So far, we can fly it bouncing around like a bug in a web, but the goal is to get it to hover steadily in the air,” Pister says.
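
Conceptually, this is the same mixing scheme a quadcopter uses: a desired total thrust plus roll, pitch and yaw corrections are blended into four individual thruster commands. The sketch below is purely illustrative; the sign conventions, the linear blend and the mapping to thruster voltages are assumptions, not the ionocraft's actual control law:

```python
# Illustrative quadcopter-style mixer: blend desired thrust and roll/pitch/yaw
# corrections into four thruster commands (which would then set voltages).
def mix(thrust, roll, pitch, yaw):
    fl = thrust + roll + pitch - yaw   # front-left thruster
    fr = thrust - roll + pitch + yaw   # front-right thruster
    rl = thrust + roll - pitch + yaw   # rear-left thruster
    rr = thrust - roll - pitch - yaw   # rear-right thruster
    # Clamp to a normalized command range before converting to a voltage.
    return [max(0.0, min(1.0, c)) for c in (fl, fr, rl, rr)]

print(mix(thrust=0.6, roll=0.05, pitch=-0.02, yaw=0.0))
```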

Taking first steps and jumps

In parallel, the researchers are developing microrobots that can walk or jump. Their micro-walker is composed of three silicon chips: a body chip that plugs perpendicularly into two chips with three legs each. “The hexapod microrobot is about the size of a really big ant, but it’s boxier,” says Pister.

Not only does the body chip provide structural support, but it also routes the external power and control signals to the leg chips. These leg chips are oriented vertically, allowing the legs to move along the table in a sweeping motion. Each leg is driven by two tiny on-chip linear motors, called electrostatic inchworm motors, which were invented by Pister. One motor lifts the robot’s body and the second pushes it forward. This unique walking mechanism allows three-dimensional microrobots to be fabricated more simply and cheaply.

Pister says the design should, in theory, allow the hexapod to run. So far it can only stand up and shuffle forward. However, he believes their recent fabrication and assembly improvements will have the microrobot walking more quickly and smoothly soon.

The jumping microrobot also uses on-chip inchworm motors. Its motor assembly compresses springs to store energy, which is then released when the microrobot jumps. Currently, it can only jump several millimeters in the air, but the team’s goal is to have it jump six meters from the floor to the table. To achieve this, they are developing more efficient springs and motors.

“Having robots that can shuffle, jump a little and fly is a major achievement,” Pister says. “They are coming together. But they’re all still tethered by wires for control, data and power signals.”

Working toward autonomy

Currently, high voltage control signals are passed over wires that connect a computer to a robot, complicating and restricting its movement. The team is developing better ways to control the microrobots, untethering them from the external computer. But transferring the controller onto the microrobot itself is challenging. “Small robots can’t carry the same kind of increasingly powerful computer chips that a standard quadcopter drone can carry,” Drew says. “We need to do more with less.”

So the group is designing and testing a single chip platform that will act as the robots’ brains for communication and control. They plan to send control messages to this chip from a cell phone using wireless technology such as Bluetooth. Ultimately, they hope to use only high-level commands, like “go pollinate the pumpkin field,” which the self-mobilizing microrobots can follow.

The team also plans to integrate on-board sensors, including a camera and microphone to act as the robot’s eyes and ears. These sensors will be used for navigation, as well as any tasks they want the robot to perform. “As the microrobot moves around, we could use its camera and microphone to transmit live video to a cell phone,” says Pister. “This could be used for many applications, including search and rescue.”

Using the brain chip interfaced with on-board sensors will allow the team to eliminate most of the troublesome wires. The next step will be to eliminate the power wires so the robots can move freely. Pister showed early on that solar cells are strong enough to power microrobots. In fact, a microrobot prototype that has been sitting on his office shelf for about 15 years still moves using solar power.

Now, his team is developing a power chip with solar cells in collaboration with Jason Stauth (M.S.’06, Ph.D.’08 EECS), who is an associate professor of engineering at Dartmouth. They’re also working with electrical engineering and computer sciences professor Ana Arias to investigate using batteries.

Finally, the researchers are developing clever machine learning algorithms that guide a microrobot’s motion, making it as smooth as possible.

In Drew’s case, the initial algorithms are based on data from flying a small quadcopter drone. “We’re first developing the machine learning platform with a centimeter-scale, off-the-shelf quadcopter,” says Drew. “Since the control system for an ionocraft is similar to a quadcopter, we’ll be able to adapt and apply the algorithms to our ionocraft. Hopefully, we’ll be able to make it hover.”

Putting it all together

Soon, the team hopes to have autonomous microrobots wandering around the lab directed by cell phone messages. But their ambitions don’t stop there. “I think it’s beneficial to have flying robots and walking robots cooperating together,” Drew says. “Flying robots will always consume more energy than walking robots, but they can overcome obstacles and sense the world from a higher vantage point. There is promise to having both or even a mixed-mobility microrobot, like a beetle that can fly or walk.”

Mixed-mobility microrobots could do things like monitor bridges, railways and airplanes. Currently, static sensors are used to monitor infrastructure, but they are difficult and time-consuming to deploy and maintain — picture changing the batteries of 100,000 sensors across a bridge. Mixed-mobility microrobots could also search for survivors after a disaster by flying, crawling and jumping through the debris.

“Imagine you’re a first responder who comes to the base of a collapsed building. Working by flashlight, it’s hard to see much but the dust hanging in the air,” says Drew. “Now, imagine pulling out a hundred insect-sized robots from your pack, tossing them into the air and having them disperse in all directions. Infrared cameras on each robot look for signs of life. When one spots a survivor, it sends a message back to you over a wireless network. Then a swarm of robots glowing like fireflies leads you to the victim’s location, while a group ahead clears out the debris in your path.”

The applications seem almost endless given the microrobots’ potential versatility and affordability. Pister estimates they might cost as little as one dollar someday, using batch manufacturing techniques. The technology is also likely to reach beyond microrobots.

For Pister’s team, the path forward is clear; the open question is when. “All the pieces are on the table now,” Pister says, “and it’s ‘just’ a matter of integration. But system integration is a challenge in its own right, especially with packaging. We may get results in the next six months — or it may take another five years.”

This is a reposting of my news feature previously published in the fall issue of the Berkeley Engineer magazine. © Berkeley Engineering

Blasting radiation therapy into the future: New systems may improve cancer treatment

Image by Greg Stewart/SLAC National Accelerator Laboratory

As a cancer survivor, I know radiation therapy lasting minutes can seem much longer as you lie on the patient bed trying not to move. Thanks to new funding, future accelerator technology may turn these dreaded minutes into a fraction of a second.

Stanford University and SLAC National Accelerator Laboratory are teaming up to develop a faster and more precise way to deliver X-rays or protons, quickly zapping cancer cells before their surrounding organs can move. This will likely reduce treatment side effects by minimizing damage to healthy tissue.

“Delivering the radiation dose of an entire therapy session with a single flash lasting less than a second would be the ultimate way of managing the constant motion of organs and tissues, and a major advance compared with methods we’re using today,” said Billy Loo, MD, PhD, an associate professor of radiation oncology at Stanford, in a recent SLAC news release.

Currently, most radiation therapy systems work by accelerating electrons through a meter-long tube using radiofrequency fields that travel in the same direction. These electrons then collide with a heavy metal target to convert their energy into high energy X-rays, which are sharply focused and delivered to the tumors.

Now, researchers are developing a new way to more powerfully accelerate the electrons. The key element of the project, called PHASER, is a prototype accelerator component (shown in bronze in this video) that delivers hundreds of times more power than the standard device.

In addition, the researchers are developing a similar device for proton therapy. Although less common than X-rays, protons are sometimes used to kill tumors and are expected to have fewer side effects particularly in sensitive areas like the brain. That’s because protons enter the body at a low energy and release most of that energy at the tumor site, minimizing radiation dose to the healthy tissue as the particles exit the body.

However, proton therapy currently requires large and complex facilities. The Stanford and SLAC team hopes to increase availability by designing a compact, power-efficient and economical proton therapy system that can be used in a clinical setting.

In addition to being faster and possibly more accessible, animal studies indicate that these new X-ray and proton technologies may be more effective.

“We’ve seen in mice that healthy cells suffer less damage when we apply the radiation dose very quickly, and yet the tumor-killing is equal or even a little better than that of a conventional longer exposure,” Loo said in the release. “If the results hold for humans, it would be a whole new paradigm for the field of radiation therapy.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Sensors could provide dexterity to robots, with potential surgical applications

Stanford chemical engineer Zhenan Bao, PhD, has been working for decades to develop an electronic skin that can provide prosthetic or robotic hands with a sense of touch and human-like manual dexterity.

Her team’s latest achievement is a rubber glove with sensors attached to the fingertips. When the glove is placed on a robotic hand, the hand is able to delicately hold a blueberry between its fingertips. As the video shows, it can also gently move a ping-pong ball in and out of holes without crushing it.

The sensors in the glove’s fingertips mimic the biological sensors in our skin, simultaneously measuring the intensity and direction of pressure when touched. Each sensor is composed of three flexible layers that work together, as described in the recent paper published in Science Robotics.

The sensor’s two outer layers have rows of electrical components that are aligned perpendicular to each other. Together, they make up a dense array of small electrical sensing pixels. In between these layers is an insulating rubber spacer.

The electrically-active outer layers also have a bumpy bottom that acts like spinosum — a spiny sublayer in human skin with peaks and valleys. This microscopic terrain is used to measure the pressure intensity. When a robotic finger lightly touches an object, it is felt by sensing pixels on the peaks. When touching something more firmly, pixels in the valleys are also activated.

Similarly, the researchers use the terrain to detect the direction of the touch. For instance, when pressure comes from the left, it’s felt more by pixels on the left side of the peaks than by those on the right side.
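
Here is a toy sketch of that readout logic, under the simplifying assumption that each sensing pixel reports a single reading and is known to sit on a peak or in a valley. The geometry, numbers and thresholds are invented for illustration and are not the published sensor's calibration:

```python
# Toy readout: infer touch intensity and lateral direction from a bumpy array.
pixels = [
    # (x position, 'peak' or 'valley', reading)
    (-1.0, "peak", 0.9), (-0.5, "valley", 0.1),
    (0.0, "peak", 0.7),  (0.5, "valley", 0.0),
    (1.0, "peak", 0.4),
]

peak_readings = [(x, r) for x, kind, r in pixels if kind == "peak"]
peak_sum = sum(r for _, r in peak_readings)
valley_sum = sum(r for _, kind, r in pixels if kind == "valley")

# A light touch activates mostly peak pixels; a firmer touch also presses the
# outer layer into the valleys, so valley pixels light up too.
intensity = "firm" if valley_sum > 0.2 * peak_sum else "light"

# Lateral direction estimated from where the peak readings are concentrated.
centroid = sum(x * r for x, r in peak_readings) / peak_sum
direction = "from the left" if centroid < 0 else "from the right"

print(intensity, direction, round(centroid, 2))
```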

Once more sensors are added, such electronic gloves could be used for a wide range of applications. As a recent Stanford Engineering news release explains, “With proper programming a robotic hand wearing the current touch-sensing glove could perform a repetitive task such as lifting eggs off a conveyor belt and placing them into cartons. The technology could also have applications in robot-assisted surgery, where precise touch control is essential.”

However, Bao hopes in the future to develop a glove that can gently handle objects automatically. She said in the release:

“We can program a robotic hand to touch a raspberry without crushing it, but we’re a long way from being able to touch and detect that it is a raspberry and enable the robot to pick it up.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

How does media multitasking affect the mind?

Image by Mohamed Hassan

Imagine that you’re working on your computer, watching the Warriors game, exchanging texts and checking Facebook. Sound familiar? Many people simultaneously view multiple media streams every day.

Over the past decade, researchers have been studying the relationship between this type of heavy media multitasking and cognition to determine how our media use is shaping our minds and brains. This is a particularly critical question for teenagers, who use technology for almost 9 hours every day on average, not including school-related use.

Many studies have examined cognitive performance in young adults using a variety of task-based cognitive tests — comparing the performance of heavy and light multitaskers. According to a recent review article, these studies show that heavy media multitaskers perform significantly worse, particularly when the tasks require sustained, goal-oriented attention.

For example, a pivotal study led by Anthony Wagner, PhD, a Stanford professor of psychology and co-author of the review article, developed a questionnaire-based media multitasking index to identify the two groups — based on the number of media streams a person juggles during a typical media consumption hour, as well as the time spent on each medium. Twelve media forms were included, ranging from computer games to cell phone calls.
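
The index is essentially a usage-weighted average of how many media streams someone consumes at once. A sketch of that calculation with made-up numbers (the media list, hours and exact weighting are illustrative and may differ from the published index):

```python
# Sketch of a media multitasking index: a usage-weighted average of how many
# media are consumed simultaneously while using each primary medium.
media = {
    #            (hours per week, mean number of media used at the same time)
    "texting":        (10.0, 2.5),
    "social media":   (8.0,  3.0),
    "TV":             (12.0, 1.5),
    "video games":    (5.0,  1.2),
}

total_hours = sum(hours for hours, _ in media.values())
mmi = sum(hours * simultaneous for hours, simultaneous in media.values()) / total_hours
print(f"MMI = {mmi:.2f}")   # higher values indicate a heavier media multitasker
```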

The team administered their questionnaire and several standard cognitive tests to Stanford students. In one series of tests, the researchers measured the working memory capabilities of 22 light multitaskers and 19 heavy multitaskers. Working memory is the mental post-it note used to keep track of information, like a set of simple instructions, in the short term.

“In one test, we show a set of oriented blue rectangles, then remove them from the screen and ask the subject to retain that information in mind. Then we’ll show them another set of rectangles and ask if any have changed orientation,” described Wagner in a recent Stanford Q&A. “To measure memory capacity, we do this task with a different number of rectangles and determine how performance changes with increasing memory loads. To measure the ability to filter out distraction, sometimes we add distractors, like red rectangles that the subjects are told to ignore.”

Wagner also performed standard task-switching experiments in which the students viewed images of paired numbers and letters and analyzed them. The students had to switch back and forth between classifying the numbers as even or odd and the letters as vowels or consonants.

The Stanford study showed that heavy multitaskers were less effective at filtering out irrelevant stimuli, whereas light multitaskers found it easier to focus on a single task in the face of distractions.

Overall, this previous study is representative of the twenty subsequent studies discussed in the recent review article. Wagner and co-author Melina Uncapher, PhD, a neuroscientist at the University of California, San Francisco, theorized that lapses in attention may explain most of the current findings — heavy media multitaskers have more difficulty staying on task and returning to task when attention has lapsed than light multitaskers.

However, the authors emphasized that the large diversity of the current studies and their results raise more questions than they answer, such as what is the direction of causation? Does heavier media multitasking cause cognitive and neural differences, or do individuals with such preexisting differences tend towards more multitasking behavior? They said more research is needed.

Wagner concluded in the Q&A:

“I would never tell anyone that the data unambiguously show that media multitasking causes a change in attention and memory. That would be premature… That said, multitasking isn’t efficient. We know there are costs of task switching. So that might be an argument to do less media multitasking.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Wearable device designed to measure cortisol in sweat

Photo by Brodie Vissers

Scientists are sweating over how to measure perspiration. That’s because sweat provides a lot of information about a person’s health status, since it contains important electrolytes, proteins, hormones and other factors.

Now, Stanford researchers have developed a wearable device to measure how much cortisol people produce in their sweat.

Cortisol is a hormone critical for many processes in the body, including blood pressure, metabolism, inflammation, memory formation and emotional stress. Too much cortisol over a prolonged period of time can lead to chronic diseases, such as Cushing syndrome.

“We are particularly interested in sweat sensing, because it offers noninvasive and continuous monitoring of various biomarkers for a range of physiological conditions,” said Onur Parlak, PhD, a Stanford postdoctoral research fellow in materials science and engineering, in a recent news release. “This offers a novel approach for the early detection of various diseases and evaluation of sports performance.”

Currently, cortisol levels are usually measured with a blood test that takes several days to analyze in the lab. So Stanford materials scientists developed a wearable sensor — a stretchy patch placed on the skin. After the patch soaks up sweat, the user attaches it to a device for analysis and gets the cortisol level measurements in seconds.

As recently reported in Science Advances, the new wearable sensor is composed of four layers of materials. The bottom layer next to the skin passively wicks in sweat through an array of channels, and the sweat then collects in the reservoir layer. Sitting on top of the reservoir is the critical component, a specialized membrane that specifically binds cortisol. Charged ions in the sweat, like sodium or potassium, pass through the membrane unless bound cortisol blocks them — so the analysis device detects those charged ions rather than measuring the cortisol directly. Finally, the top waterproof layer protects the sensor from contamination.

The Stanford researchers did a series of validation tests in the lab, and then they strapped the device onto the forearms of two volunteers after they went for a 20-minute outdoor run. Their device’s lab and real-world results were comparable to the corresponding cortisol measurements made with a standard analytic biochemistry assay.

Before this prototype becomes available, however, more research is needed. The research team plans to integrate the wearable patch with the analysis device, while also making it more robust when saturated with sweat so it’s reusable. They also hope to generalize the design to measure several biomarkers at once, not just cortisol.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Stanford and Common Sense Media explore effects of virtual reality on kids

Photo by Andri Koolme

Although we’re still a long way off from the virtual reality universe depicted in the new movie “Ready Player One,” VR is becoming a reality in many homes. But how is this immersive technology impacting our kids’ cognitive, social and physical well-being?

Stanford researchers and Common Sense Media are investigating the potential effects of virtual reality on children. And a just-released report provides parents and educators with a practical guide on VR use.

“The truth is, when it comes to VR and kids, we just don’t know that much. As a community, we need more research to understand these effects,” Jeremy Bailenson, PhD, a Stanford communication professor and the founder of Stanford’s Virtual Human Interaction Lab, wrote in an introduction to the report.

The research team surveyed over 3600 U.S. parents about their family’s use of virtual reality. “Until this survey, it was unclear how, and even how many, kids were using virtual reality,” said Bailenson in a recent Stanford news release. “Now we have an initial picture of its adoption and use.”

The report summarizes results from this survey and previous VR research. Here are its key findings:

  • VR powerfully affects kids, because it can provoke a response to virtual experiences similar to actual experiences.
  • Long-term effects of VR on developing brains and health are unknown. Most parents are concerned, and experts advocate moderation and supervision.
  • Only one in five parents report living in a household with VR, and their interest is mixed, but children are more enthusiastic.
  • Characters in VR may be especially influential on young children.
  • Students are more enthusiastic about learning while using VR, but they don’t necessarily learn more.
  • VR has the potential to encourage empathy and diminish implicit racial bias, but most parents are skeptical.
  • When choosing VR content, parents should consider whether they would want their children to have the same experience in the real world.

Ultimately, the report recommends moderation. “Instead of hours of use, which might apply to other screens, think in terms of minutes,” Bailenson wrote. “Most VR is meant to be done on the five- to 10-minute scale.” At Stanford’s Virtual Human Interaction Lab, even adults use VR for 20 minutes or less.

One known potential side effect from overuse is simulator sickness, which is caused by a lag in time between a person’s body movements and the virtual world’s response. Some parents also reported that their child experienced a headache, dizziness or eye strain after VR use.

In addition, the researchers advise parents to consider safety. Virtual reality headsets block out stimuli from the physical world, including hazards, so users can bump into things, trip or otherwise harm themselves.

A good option, they wrote, is to bring your child to a location-based VR center that provides well-maintained equipment, safety spotters and social interactions with other kids.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
