Stanford researchers watch proteins assemble a protective shell around bacteria

Many bacteria and viruses are protected from the immune system by a thin, hard outer shell — called an S-layer — composed of a single layer of identical protein building blocks.

Understanding how microbes form these crystalline S-layers and the role they play could be important to human health, including our ability to treat bacterial pathogens that cause serious salmonella, C. difficile and anthrax infections. For instance, researchers are working on ways to remove this shell to fight anthrax and other diseases.

Now, a Stanford study has, for the first time, observed proteins assembling themselves into an S-layer in a bacterium called Caulobacter crescentus, which is present in many freshwater lakes and streams.

Although this bacterium isn’t harmful to humans, it is a well-understood model organism used to study various cellular processes. Scientists know that the S-layer of Caulobacter crescentus is vital for the microbe’s survival and is made up of protein building blocks called RsaA.

A recent news release describes how the research team from Stanford and SLAC National Accelerator Laboratory was able to watch this assembly, even though it happens on such a tiny scale:

“To watch it happen, the researchers stripped microbes of their S-layers and supplied them with synthetic RsaA building blocks labeled with chemicals that fluoresce in bright colors when stimulated with a particular wavelength of light.

Then they tracked the glowing building blocks with single-molecule microscopy as they formed a shell that covered the microbe in a hexagonal, tile-like pattern (shown in image above) in less than two hours. A technique called stimulated emission depletion (STED) microscopy allowed them to see structural details of the layer as small as 60 to 70 nanometers, or billionths of a meter, across – about one-thousandth the width of a human hair.”

The scientists were surprised by what they saw: the protein molecules spontaneously assembled themselves without the help of enzymes.

“It’s like watching a pile of bricks self-assemble into a two-story house,” said Jonathan Herrmann, a graduate student in structural biology at Stanford involved in the study, in the news release.

The researchers believe the protein building blocks are guided to assemble at specific regions of the cell surface by small defects and gaps within the S-layer. These naturally occurring defects are inevitable, they said, because the flat crystalline sheet is trying to cover the constantly changing, three-dimensional shape of the bacterium.

Among other applications, they hope their findings will offer potential new targets for drug treatments.

“Now that we know how they assemble, we can modify their properties so they can do specific types of work, like forming new types of hybrid materials or attacking biomedical problems,” said Soichi Wakatsuki, PhD, a professor of structural biology and photon science at SLAC, in the release.

Illustration by Greg Stewart/SLAC National Accelerator Laboratory

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.


Simplified analysis method could lead to improved prosthetics, a Stanford study suggests

Brain-machine interfaces (BMIs) are an emerging technology at the intersection of neuroscience and engineering that may improve the quality of life for amputees and individuals with paralysis. These patients are unable to get signals from their motor cortex — the part of the brain that normally controls movement — to their muscles.

Researchers are overcoming this disconnect by implanting small electrode arrays in the brain, which measure the electrical activity of neurons in the motor cortex. The sensors’ electrical signals are transmitted via a cable to a computer, where they are decoded and translated into commands that control a computer cursor or prosthetic limb. Someday, scientists also hope to eliminate the cable, using wireless brain sensors to control prosthetics.

In order to realize this dream, however, they need to improve both the brain sensors and the algorithms used to decode the neural signals. Stanford electrical engineer Krishna Shenoy, PhD, and his collaborators are tackling this algorithm challenge, as described in a recent paper in Neuron.

Currently, most neuroscientists process their BMI data by looking for “spikes” of electrical activity from individual neurons. But this process requires time-consuming manual sorting or computationally intensive automatic sorting, both of which are prone to errors.

Manual data sorting will also become unrealistic for future technologies, which are expected to record thousands to millions of electrode channels compared to the several hundred channels recorded by today’s state-of-the-art sensors. For example, a dataset composed of 1,000 channels could take over 100 hours to hand sort, the paper says. In addition, neuroscientists would like to measure a greater brain volume for longer durations.

So, how can they decode all of this data?

Shenoy suggests simplifying the data analysis by eliminating spike sorting for applications that depend on the activity of neural populations rather than single neurons — such as brain-machine interfaces for prosthetics.

In their new study, the Stanford team investigated whether eliminating this spike sorting step distorted BMI data. Turning to statistics, they developed an analysis method that retains accuracy while extracting information from groups rather than individual neurons. Using experimental data from three previous animal studies, they demonstrated that their algorithms could accurately decode neural activity with minimal distortion — even when each BMI electrode channel measured several neurons. They also validated these experimental results with theory.
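The core idea — decoding movement from the pooled activity on each electrode channel rather than from sorted single neurons — can be illustrated with a toy simulation. This is my own sketch, not the authors’ code or data: channel names, tuning model and noise model are all invented for illustration. Each channel simply counts threshold crossings per time bin (no spike sorting), and a linear decoder maps those counts to a one-dimensional velocity.

```python
import numpy as np

# Toy sketch (not the study's code): decode a 1-D cursor velocity from
# multiunit threshold-crossing counts, skipping spike sorting entirely.
rng = np.random.default_rng(0)

n_bins, n_channels = 500, 32
true_velocity = np.sin(np.linspace(0, 8 * np.pi, n_bins))

# Simulated multiunit counts: each channel mixes several neurons whose
# firing rates are tuned to velocity, plus Poisson count noise.
tuning = rng.normal(size=n_channels)
rates = np.clip(5 + 4 * np.outer(true_velocity, tuning), 0.1, None)
counts = rng.poisson(rates)

# Fit a linear decoder by least squares on the first half of the data.
X = np.column_stack([counts, np.ones(n_bins)])
half = n_bins // 2
w, *_ = np.linalg.lstsq(X[:half], true_velocity[:half], rcond=None)

# Decode the held-out half and measure correlation with the truth.
pred = X[half:] @ w
r = np.corrcoef(pred, true_velocity[half:])[0, 1]
print(f"held-out correlation: {r:.2f}")
```

Because the decoder only needs population-level information, mixing several neurons on one channel mostly adds noise rather than destroying the signal — which is consistent with the paper’s finding that decoding survives without spike sorting.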

“This study has a bit of a hopeful message in that observing activity in the brain turns out to be easier than we initially expected,” said Shenoy in a recent Stanford Engineering news release. The researchers hope their work will guide the design and use of new low-power, higher-density devices for clinical applications, since their simplified analysis method reduces storage and processing requirements.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Photo by geralt.

Genetic roots of psychiatric disorders clearer now thanks to improved techniques

Photo by LionFive

New technology and access to large databases are fundamentally changing how researchers investigate the genetic roots of psychiatric disorders.

“In the past, a lot of the conditions that people knew to be genetic were found to have a relatively simple genetic cause. For example, Huntington’s disease is caused by mutations in just one gene,” said Laramie Duncan, PhD, an assistant professor of psychiatry and behavioral sciences at Stanford. “But the situation is entirely different for psychiatric disorders, because there are literally thousands of genetic influences on every psychiatric disorder. That’s been one of the really exciting findings that’s come out of modern genetic studies.”

These findings are possible thanks to genome-wide association studies (GWAS), which test for millions of genetic variations across the genome to identify the genes involved in human disease.

Duncan is the lead author of a recent commentary in Neuropsychopharmacology that explains how GWAS have demonstrated the inadequacy of previous methods. The paper also highlights new genetics findings for mental health.

Before the newer technologies and databases were available, scientists could only analyze a handful of genetic variations. So they had to guess that a specific genetic variation (a candidate) was associated with a disorder — based on what was known about the underlying biology — and then test their hypothesis. The body of research that has emerged from GWAS, however, shows that nearly all of these earlier “candidate gene study” results are incorrect for psychiatric disorders.

“There are actually so many genetic variations in the genome, it would have been almost impossible for people to guess correctly,” Duncan said. “It was a reasonable thing to do at the time. But we now have better technology that’s just as affordable as the old ways of doing things, so traditional candidate gene studies are no longer needed.”

Duncan said she began questioning the candidate gene studies as a graduate student. As she studied the scientific literature, she noticed a pattern in the data that suggested the results were wrong. “The larger studies tended to have null results and the very small studies tended to have positive results. And the only reason you’d see that pattern is if there was strong publication bias,” said Duncan. “Namely, positive results were published even if the study was small, and null results were only published when the study was very large.”
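The pattern Duncan noticed — small studies positive, large studies null — is exactly what publication bias produces when the true effect is zero. A toy simulation of my own (not from her paper; the study sizes and publication rule are invented for illustration) makes this concrete: small studies are published only when they hit statistical significance, large studies are published regardless.

```python
import numpy as np

# Toy simulation of publication bias: the true effect of every
# "candidate" variant is zero, yet the published record shows large
# effects in small studies and near-zero effects in large ones.
rng = np.random.default_rng(1)

published_small, published_large = [], []
for _ in range(2000):
    n = rng.choice([50, 5000])               # small vs. large study
    effect = rng.normal(0, 1 / np.sqrt(n))   # estimate; true effect is 0
    z = effect * np.sqrt(n)                  # z-score of the estimate
    significant = abs(z) > 1.96              # p < 0.05, two-sided
    if n == 50 and significant:              # small studies need a "hit"
        published_small.append(effect)
    elif n == 5000:                          # large studies always publish
        published_large.append(effect)

mean_small = np.mean(np.abs(published_small))
mean_large = np.mean(np.abs(published_large))
print(f"mean |effect| in published small studies: {mean_small:.3f}")
print(f"mean |effect| in published large studies: {mean_large:.3f}")
```

The published small studies report sizable effects purely because the null ones were filtered out, while the large studies cluster near zero — the same tell-tale pattern Duncan describes in the candidate gene literature.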

In contrast, GWAS findings become more and more precise as the sample size increases, she explained, which demonstrates their reliability.

Using GWAS, researchers now know that thousands of variations distributed across the genome likely contribute to any given mental disorder. By using the statistical power gleaned from giant databases such as the UK Biobank or the Million Veterans Program, they have learned that most of these variations aren’t even in the regions of DNA that code for proteins, where scientists expected them to be. For example, only 1.1 percent of schizophrenia risk variants are in these protein-coding regions.

“What’s so interesting about the modern genetic findings is that they are revealing entirely new clues about the underlying biology of psychiatric disorders,” Duncan said. “And this opens up lots of new avenues for treatment development.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Tips for discussing suicide on social media — A guide for youth

Photo by kaboompics

There are pros and cons to social media discussions of suicide. Social media can spread helpful knowledge and support, but it can also quickly disseminate harmful messaging and misinformation that puts vulnerable youth at risk.

New U.S. guidelines, called #chatsafe: A Young Person’s Guide for Communicating Safely Online About Suicide, aim to address this problem by offering evidence-based advice on how to constructively interact online about this difficult topic. The guidelines include specific language recommendations.

Vicki Harrison, MSW, the program director for the Stanford Center for Youth Mental Health and Wellbeing, discussed this new online education tool — developed in collaboration with a youth advisory panel — in a recent Healthier, Happy Lives Blog post.

“My hope is that these guidelines will create awareness about the fact that the way people talk about suicide actually matters an awful lot and doing so safely can potentially save lives. Yet we haven’t, up to this point, offered young people a lot of guidance for how to engage in constructive interactions about this difficult topic,” Harrison said in the blog post. “Hopefully, these guidelines will demystify the issue somewhat and offer practical suggestions that youth can easily apply in their daily interactions.”

A few main takeaways from the guidelines are below:

Before you post anything online about suicide

Remember that posts can go viral and they will never be completely erased. If you do post about suicide, carefully choose the language you use. For example, avoid words that describe suicide as criminal, sinful, selfish, brave, romantic or a solution to problems.

Also, monitor the comments for unsafe content like bullying, images or graphic descriptions of suicide methods, suicide pacts or goodbye notes. And include a link to prevention resources, like suicide help centers on social media platforms. From the guidelines:

“Indicate suicide is preventable, help is available, treatment can be successful, and that recovery is possible.”

Sharing your own thoughts, feelings or experience with suicidal behavior online

If you’re experiencing suicidal thoughts or feelings, try to reach out to a trusted adult, friend or professional mental health service before posting online. If you are feeling unsafe, call 911.

In general, think before you post: What do you hope to achieve by sharing your experience? How will sharing make you feel? Who will see your post and how will it affect them?

If you do post, share your experience in a safe and helpful way without graphic references, and consider including a trigger warning at the beginning to warn readers about potentially upsetting content.

Communicating about someone you know who is affected by suicidal thoughts, feelings or behaviors

If you’re concerned about someone, ask permission before posting or sharing content about them if possible. If someone you know has died by suicide, be sensitive to the feelings of their grieving family members and friends who might see your post. Also, avoid posting or sharing posts about celebrity suicides, because too much exposure to the suicide of well-known public figures can lead to copycat suicides.

Responding to someone who may be suicidal

Before you respond to someone who has indicated they may be at risk of suicide, check in with yourself: How are you feeling? Do you understand the role and limitations of the support you can provide?

If you do respond, always respond in private without judgment, assumptions or interruptions. Ask them directly if they are thinking of suicide. Acknowledge their feelings and let them know exactly why you are worried about them. Show that you care. And encourage them to seek professional help.

Memorial websites, pages and closed groups to honor the deceased

Setting up a page or group to remember someone who has died can be a good way to share stories and support, but it also raises concerns about copycat suicides. So make sure the memorial page or group is safe for others — by monitoring comments for harmful or unsafe content, quickly dealing with unsupportive comments and responding personally to participants in distress. Also outline the rules for participation.

Individuals in crisis can receive help from the Santa Clara County Suicide & Crisis Hotline at (855) 278-4204. Help is also available from anywhere in the United States via Crisis Text Line (text HOME to 741741) or the National Suicide Prevention Lifeline at (800) 273-8255. All three services are free, confidential and available 24 hours a day, seven days a week.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Designing an inexpensive surgical headlight: A Q&A with a Stanford surgeon

Photo by Jared Forrester / © Lifebox 2017

For millions of people throughout the world, even the simplest surgeries can be risky due to challenging conditions like frequent power outages. In response, Stanford surgeon Thomas Weiser, MD, is part of a team from Lifebox working to develop a durable, affordable and high-quality surgical headlamp for use in low-resource settings. Lifebox is a nonprofit that aims to make surgery safer throughout the world.

Why is an inexpensive surgical headlight important?

“The least expensive headlight in the United States costs upwards of $1000, and most cost quite a bit more. They are very powerful and provide excellent light, but they’re not fit for purpose in lower-resource settings. They are Ferraris when what we need is a Tata – functional, but affordable.

Jared Forrester, MD, a third-year Stanford surgical resident, lived and worked in Ethiopia for the last two years. During that time, he noted that 80 percent of surgeons working in low- and middle-income countries identify poor lighting as a safety issue and nearly 20 percent report direct experience of poor-quality lighting leading to negative patient outcomes. So there is a massive need for a lighting solution.”

How did you develop your headlamp specifications?

“Jared started by passing around a number of off-the-shelf medical headlights with surgeons in Ethiopia. We also asked surgeons in the U.S. and the U.K. to try them out to see how they felt and evaluate what was good and bad about them.

We performed some illumination and identification tests using pieces of meat in a shoebox with a slit cut in it to mimic a limited field of view and a deep hole. We asked surgeons to use lights at various power with the room lights off, with just the room lights on and with overhead surgical lights focused on the field. That way we could evaluate the range of light needed in settings with highly variable lighting, something that does not really exist here in the U.S.”

How do they differ from recreational headlamps?

“Recreational headlights have their uses and I’ve seen them used for providing care — including surgery. However, they tend to be uncomfortable during long cases and not secure on the head. Also, the light isn’t uniformly bright. You can see this when you shine a recreational light on a wall: there is a halo and the center is a different brightness than the outer edge of the light. This makes distinguishing tissue planes and anatomy more difficult.”

What are the barriers to implementation?

“While surgeons working in these settings all express interest in having a quality headlight, there is no reliable manufacturer or distributor for them. Surgeons cannot afford expensive lights, and no one has stepped up to provide a low-cost alternative that is robust, high quality and durable. We’re working to change that.”

What are your next steps?

“We are now evaluating a select number of headlights and engaging manufacturers in discussions about their current devices and what changes might be needed to make a final light at a price point that would be affordable to clinicians and facilities in these settings. By working through our networks and using our logistical capacity, we can connect the manufacturer with a new market that currently does not exist — but is ready and waiting to be developed.

We believe these lights will improve the ability of surgeons to provide better, safer surgical care and also allow emergency cases to be completed at night when power fluctuations are most problematic. These lights should increase the confidence of the surgeon that the operation can be performed safely.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

The future hope of “flash” radiation cancer therapy


The goal of cancer therapy is to destroy the cancer cells while minimizing side effects and damage to the rest of the body. Common types of treatment include surgery, chemotherapy, targeted therapy and radiation therapy. Often combined with surgery or drugs, radiation therapy uses high-energy X-rays to harm the DNA and other critical processes of the rapidly-dividing cancer cells.

Innovations in radiation therapy were the focus of a recent episode of the SiriusXM radio show “The Future of Everything.” On hand was Stanford’s Billy Loo, MD, PhD, a professor of radiation oncology, who spoke with Stanford professor and radio host Russ Altman, MD, PhD.

Radiation has been used to treat cancer for over a century, but today’s technologies target the tumor with far greater precision and speed than the old days. Loo explained that modern radiotherapy now delivers low-dose beams of X-rays from multiple directions, which are accurately focused on the tumor so the surrounding healthy tissues get only a small dose while the tumor gets blasted. Radiation oncologists use imaging — CT, MRI or PET — to determine the three-dimensional sculpture of the tumor to target.

“We identify the area that needs to be treated, where the tumor is in relationship to the normal organs, and create a plan of the sculpted treatment,” Loo said. “And then during the treatment, we also use imaging … to see, for example, whether the radiation is going where we want it to go.”

In addition, oncologists now implement technologies in the clinic to compensate for motion, since organs like the lungs are constantly moving and patients have trouble lying still even for a few minutes. “We call it motion management. We do all kinds of tricks like turning on the radiation beam synchronized with the breathing cycle or following tumors around with the radiation beam,” explained Loo.

Currently, that is how standard radiation therapy works. However, Stanford radiation oncologists are collaborating with scientists at SLAC National Accelerator Laboratory to develop an innovative technology called PHASER. Although Loo admits the acronym was inspired by his love of Star Trek, PHASER stands for pluridirectional high-energy agile scanning electronic radiotherapy. This new technology delivers the radiation dose of an entire therapy session in a single flash lasting less than a second — faster than the body moves.

“We wondered, what if the treatment was done so fast — like in flash photography — that all the motion is frozen? That’s a fundamental solution to this motion problem that gives us the ultimate precision,” he said. “If we’re able to treat more precisely with less spillage of radiation dose into normal tissues, that gives us the benefit of being able to kill the cancer and cause less collateral damage.”

The research team is currently testing the PHASER technology in mice and has already made an exciting discovery — the biological response to flash radiotherapy may differ from that of slower, traditional radiotherapy.

“We and a few other labs around the world have started to see that when the radiation is given in a flash, we see equal or better tumor killing but much better normal tissue protection than with the conventional speed of radiation,” Loo said. “And if that translates to humans, that’s a huge breakthrough.”

Loo also explained that the PHASER technology has been designed to be compact, economical, reliable and clinically efficient, providing a robust, mobile unit for global use. The team expects it to fit in a standard cargo shipping container and to be powered by solar energy and batteries.

“About half of the patients in the world today have no access to radiation therapy for technological and logistical reasons. That means millions of patients who could potentially be receiving curative cancer therapy are getting treated purely palliatively. And that’s a huge tragedy,” Loo said. “We don’t want to create a solution that everyone in the world has to come here to get — that would have limited impact. And so that’s been a core principle from the beginning.”

This is a reposting of my Scope blog post, courtesy of Stanford School of Medicine.

Can AI improve access to mental health care? Possibly, Stanford psychologist says

Image by geralt

“Hey Siri, am I depressed?” When I posed this question to my iPhone, Siri’s reply was “I can’t really say, Jennifer.” But someday, software programs like Siri or Alexa may be able to talk to patients about their mental health symptoms to assist human therapists.

To learn more, I spoke with Adam Miner, PsyD, an instructor and co-director of Stanford’s Virtual Reality-Immersive Technology Clinic, who is working to improve conversational AI to recognize and respond to health issues.

What do you do as an AI psychologist?

“AI psychology isn’t a new specialty yet, but I do see it as a growing interdisciplinary need. I work to improve mental health access and quality through safe and effective artificial intelligence. I use methods from social science and computer science to answer questions about AI and vulnerable groups who may benefit or be harmed.”

How did you become interested in this field?

“During my training as a clinical psychologist, I had patients who waited years to tell anyone about their problems for many different reasons. I believe the role of a clinician isn’t to blame people who don’t come into the hospital. Instead, we should look for opportunities to provide care when people are ready and willing to ask for it, even if that is through machines.

I was reading research from different fields like communication and computer science and I was struck by the idea that people may confide intimate feelings to computers and be impacted by how computers respond. I started testing different digital assistants, like Siri, to see how they responded to sensitive health questions. The potential for good outcomes — as well as bad — quickly came into focus.”

Why is technology needed to assess the mental health of patients?

“We have a mental health crisis and existing barriers to care — like social stigma, cost and treatment access. Technology, specifically AI, has been called on to help. The big hope is that AI-based systems, unlike human clinicians, would never get tired, be available wherever and whenever the patient needs and know more than any human could ever know.

However, we need to avoid inflated expectations. There are real risks around privacy, ineffective care and worsening disparities for vulnerable populations. There’s a lot of excitement, but also a gap in knowledge. We don’t yet fully understand all the complexities of human–AI interactions.

People may not feel judged when they talk to a machine the same way they do when they talk to a human — the conversation may feel more private. But it may in fact be more public because information could be shared in unexpected ways or with unintended parties, such as advertisers or insurance companies.”

What are you hoping to accomplish with AI?

“If successful, AI could help improve access in three key ways. First, it could reach people who aren’t accessing traditional, clinic-based care for financial, geographic or other reasons like social anxiety. Second, it could help create a ‘learning healthcare system’ in which patient data is used to improve evidence-based care and clinician training.

Lastly, I have an ethical duty to practice culturally sensitive care as a licensed clinical psychologist. But a patient might use a word to describe anxiety that I don’t know and I might miss the symptom. AI, if designed well, could recognize cultural idioms of distress or speak multiple languages better than I ever will. But AI isn’t magic. We’ll need to thoughtfully design and train AI to do well with different genders, ethnicities, races and ages to prevent further marginalizing vulnerable groups.

If AI could help with diagnostic assessments, it might allow people to access care who otherwise wouldn’t. This may help avoid downstream health emergencies like suicide.”

How long until AI is used in the clinic?

“I hesitate to give any timeline, as AI can mean so many different things. But a few key challenges need to be addressed before wide deployment, including the privacy issues, the impact of AI-mediated communications on clinician-patient relationships and the inclusion of cultural respect.

The clinician–patient relationship is often overlooked when imagining a future with AI. We know from research that people can feel an emotional connection to health-focused conversational AI. What we don’t know is whether this will strengthen or weaken the patient-clinician relationship, which is central to both patient care and a clinician’s sense of self. If patients lose trust in mental health providers, it will cause real and lasting harm.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.