Will antidepressants work? Brain activity can help predict

You, or someone you care about, probably takes an antidepressant — after all, one in eight Americans do. Despite this widespread use, many experts question whether these drugs even work: studies have shown that antidepressants are only slightly more effective than a placebo for treating depression.

“The interpretation of these studies is that antidepressants don’t work well as medications,” said Stanford psychiatrist and neurobiologist Amit Etkin, MD, PhD. “An alternative explanation is that the drugs work well for a small portion of people, but we’re giving them to too broad of a population and diminishing overall efficacy. Right now, we prescribe antidepressants based on patients’ clinical symptoms rather than an understanding of their biology.”

In a new study, Etkin and his collaborators sought a biologically based method for predicting whether antidepressants will work for an individual patient.

The researchers analyzed data from the EMBARC study, the first randomized clinical trial for depression with both neuroimaging and a placebo control group. EMBARC included over 300 medication-free depressed outpatients who were randomized to receive either the antidepressant sertraline (brand name Zoloft) or a placebo for eight weeks.

Etkin’s team analyzed functional magnetic resonance imaging (fMRI) data — taken before treatment started — to view the patients’ brain activity while they performed an established emotional-conflict task. The researchers were interested in the brain circuitry that responds to emotion because depression is known to cause various changes in how emotions are processed and regulated.

During the task, the patients were shown pictures of faces and asked to categorize each facial expression as fearful or happy, while trying to ignore a word written across the face. The distracting word either matched or mismatched the facial expression — for example, a fearful face had either “fear” or “happy” written across it, as shown below.

Participants were asked to decide if this expression was happy or fearful.
(Image courtesy of Amit Etkin)

As expected, having a word that was incongruent with the facial expression slowed down the participants’ response time, but their brains were able to automatically adapt when a mismatch trial was followed by another mismatch trial.

“You experience the mismatched word as less interfering, causing less of a slowdown in your behavior, because your brain has gotten ready for it,” explained Etkin.

However, the participants varied in their ability to adapt. The study found that the people who could adapt well to the mismatched emotional stimuli had increased activity in certain brain regions, but they also had massively decreased activity in other brain regions — particularly in places important for emotional response and attention. In essence, these patients were better able to dampen the distracting effects of the stimuli.
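
This adaptation effect can be quantified from reaction times alone. As a rough sketch — the trial sequence and timings below are invented for illustration, not EMBARC data — the adaptation score is how much less a mismatch trial slows you down when the previous trial was also a mismatch:

```python
def adaptation_score(trials):
    """Estimate conflict adaptation from (condition, reaction_time) pairs.

    condition is 'match' or 'mismatch'. A larger score means the participant
    dampened interference more when a mismatch trial was expected.
    """
    after_match, after_mismatch = [], []
    for (prev_cond, _), (cond, rt) in zip(trials, trials[1:]):
        if cond == "mismatch":
            (after_match if prev_cond == "match" else after_mismatch).append(rt)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(after_match) - mean(after_mismatch)

# Invented reaction times (ms): mismatch trials are slower overall,
# but less slow when preceded by another mismatch trial.
trials = [("match", 600), ("mismatch", 750), ("mismatch", 690),
          ("match", 610), ("mismatch", 760), ("mismatch", 700)]
print(adaptation_score(trials))  # positive score = successful adaptation
```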

Using machine learning, the researchers determined that they could use this fMRI brain activation signature to successfully predict which individual patients responded well to the antidepressant compared to the placebo.
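
Prediction pipelines of this kind typically reduce each patient’s brain activity to a summary feature and train a classifier on it. Here is a deliberately minimal, hypothetical sketch — the “dampening” scores, labels, and simple threshold rule are all invented; the actual study used fMRI-derived activation maps and more sophisticated machine learning:

```python
# Hypothetical data: each patient has a "dampening" score summarizing how
# strongly they suppressed emotion-related brain activity, plus a label for
# whether they responded better to the antidepressant than to placebo.
patients = [(0.9, True), (0.8, True), (0.75, True), (0.6, False),
            (0.4, False), (0.3, False), (0.85, True), (0.2, False)]

def best_threshold(data):
    """Pick the cutoff on the brain score that best separates responders."""
    candidates = sorted(score for score, _ in data)
    def accuracy(t):
        return sum((score >= t) == responded for score, responded in data) / len(data)
    return max(candidates, key=accuracy)

threshold = best_threshold(patients)
accuracy = sum((s >= threshold) == r for s, r in patients) / len(patients)
print(threshold, accuracy)
```

In practice one would validate such a rule on held-out patients to avoid overfitting; with only eight invented points, this sketch just shows the shape of the idea.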

“The better you’re able to dampen the effects of emotional stimuli on emotional and cognitive centers, the better you respond to an antidepressant medication compared to a placebo,” Etkin said. “This means that we’ve established a neurobiological signature reflective of the kind of person who is responsive to antidepressant treatment.”

This brain activation signature could be used to separate the people for whom a regular antidepressant works well from those who might need something new and more tailored. But it could also be used to assess potential interventions — such as medications, brain stimulation, cognitive training or mindfulness training — to help individuals become treatment responsive to antidepressants, he said.

“I think the most important result is that it turns out that antidepressants are not ineffective. In fact, they are quite effective compared to placebo if you give them to the right people. And we’ve identified who those people are using objective biological measures of brain activity.”

The team is currently investigating in clinics around the country whether they can replace the costly fMRI neuroimaging with electroencephalography, a less expensive and more widely available way to measure brain activity.  

Etkin concluded with a hopeful message for all patients suffering from depression: “Our data echoes the experience that antidepressants really help some people. It’s just a question of who those people are. And our new understanding will hopefully accelerate the development of new medications for the people who don’t respond to an antidepressant compared to placebo because we also understand their biology.”

Feature image by inspiredImages

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Stanford researchers watch proteins assemble a protective shell around bacteria

Many bacteria and archaea are protected from their environment — and, in the case of pathogens, from the host’s immune system — by a thin, hard outer shell called an S-layer, composed of a single layer of identical protein building blocks.

Understanding how microbes form these crystalline S-layers and the role they play could be important to human health, including our ability to treat bacterial pathogens that cause serious Salmonella, C. difficile and anthrax infections. For instance, researchers are working on ways to remove this shell to fight anthrax and other diseases.

Now, a Stanford study has, for the first time, observed proteins assembling themselves into an S-layer in the bacterium Caulobacter crescentus, which is present in many freshwater lakes and streams.

Although this bacterium isn’t harmful to humans, it is a well-understood model organism for studying cellular processes. Scientists know that the S-layer of Caulobacter crescentus is vital to the microbe’s survival and is made up of a protein building block called RsaA.

A recent news release describes how the research team from Stanford and SLAC National Accelerator Laboratory was able to watch this assembly, even though it happens on such a tiny scale:

“To watch it happen, the researchers stripped microbes of their S-layers and supplied them with synthetic RsaA building blocks labeled with chemicals that fluoresce in bright colors when stimulated with a particular wavelength of light.

Then they tracked the glowing building blocks with single-molecule microscopy as they formed a shell that covered the microbe in a hexagonal, tile-like pattern (shown in image above) in less than two hours. A technique called stimulated emission depletion (STED) microscopy allowed them to see structural details of the layer as small as 60 to 70 nanometers, or billionths of a meter, across – about one-thousandth the width of a human hair.”

The scientists were surprised by what they saw: the protein molecules spontaneously assembled themselves without the help of enzymes.

“It’s like watching a pile of bricks self-assemble into a two-story house,” said Jonathan Herrmann, a graduate student in structural biology at Stanford involved in the study, in the news release.

The researchers believe the protein building blocks are guided to assemble at specific regions of the cell surface by small defects and gaps within the S-layer. These naturally occurring defects are inevitable because the flat crystalline sheet is trying to cover the constantly changing, three-dimensional shape of the bacterium, they said.

Among other applications, they hope their findings will offer potential new targets for drug treatments.

“Now that we know how they assemble, we can modify their properties so they can do specific types of work, like forming new types of hybrid materials or attacking biomedical problems,” said Soichi Wakatsuki, PhD, a professor of structural biology and photon science at SLAC, in the release.

Illustration by Greg Stewart/SLAC National Accelerator Laboratory

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

CTs predict survival by measuring frailty following hip fractures, study shows

Photo by PublicDomainPictures

When elderly people fall, a hip fracture is a common and serious result. It is typically treated with surgery, but physicians need a better way to determine how frail a patient is in order to select the best surgical method. The need is great: Each year over 300,000 older people are hospitalized in the United States for hip fractures. These disabling injuries are associated with significant mortality, loss of independence and financial burden.

Now, a new research study led by radiologists from the University of California, Davis and Wake Forest Baptist medical centers may help guide these critical treatment decisions.

The research team performed a retrospective 10-year study of 274 patients, aged 65 or older, who were treated for hip fractures at the UC Davis Medical Center — injuries with a reported one-year mortality rate of 14 to 58 percent. Using CT images originally taken to diagnose the hip fracture, the researchers measured the size and density of the patients’ core muscles, which stabilize the spine, and then compared the health of these muscles with survival rates.

They found that hip-fracture patients with better thoracic (mid-upper back) core muscles had significantly better survival rates, whereas no significant trend was seen for patients with better lumbar (low back) muscles, as recently reported in the American Journal of Roentgenology.

Robert Boutin, MD, a radiologist at UC Davis and the lead author, summarized the importance of their results in a recent news release:

As patients age, it becomes increasingly important to identify the safest and most beneficial orthopaedic treatments, but there currently is no objective way to do this. Using CT scans to evaluate the muscles in addition to hip bones can help predict longevity and personalize treatment to a patient’s needs. We’re excited because information on muscles is included on every routine CT scan of the chest, abdomen and pelvis, so the additional evaluations can be done without the costs of additional tests, equipment or software.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Do MRI scans damage your genes?

Photo by Jan Ainali

MRI is a powerful, non-invasive diagnostic tool widely used to investigate anatomical structures and functions in the body.

Though generally considered to be safe, several studies in the last decade have reported an increase in DNA damage, or genotoxicity, due to cardiac MRI scans. Other research doesn’t support these findings — raising the controversial question of whether an MRI’s electromagnetic fields pose a health threat.

A multi-institutional research team explored this issue by reviewing the literature published between 2007 and 2016. Specifically, the group considered three questions during their review:

  • Do MRIs really cause genotoxicity?
  • What are the potential adverse health effects of exposure to MRI electromagnetic fields?
  • What impact does this have on patient health?

As outlined in a commentary appearing in Radiation Research, the evidence correlating MRIs with genotoxicity “is, at best, mixed.” After emphasizing the limitations of existing studies, which typically included at most 20 participants and lacked sufficient quality control measures, the authors summarized:

“We conclude that while a few studies raise the possibility that MRI exams can damage a patient’s DNA, they are not sufficient to establish such effects, let alone any health risk to patients. … We consider that genotoxic effects of MRI are highly unlikely.”

A previous 2015 review paper published in Mutation Research called for comprehensive, international, multi-centered collaborative studies to address this issue, using a common and widely used MRI exposure protocol with a large number of patients.

The authors of the new review note that such studies would be very expensive and would require hundreds of thousands of participants, which may not be warranted.

“If you want to do something next, do a very well-designed, large study of the types that have already been done, but with better statistics and better controls,” said John Moulder, PhD, a professor emeritus at the Medical College of Wisconsin, in a recent news story. “And make sure that this putative genotoxicity is even real before beginning more expensive follow-up studies.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Brain scans detect lies better than polygraph tests, new study shows

Photo by Tristan Schmurr

Forget fact checkers or polygraph tests. A functional magnetic resonance imaging (fMRI) brain scan might be the best way to tell if someone is lying.

According to a study from the University of Pennsylvania, our brains are more likely to give us away when we’re lying than sweaty palms, rapid breathing or spikes in blood pressure, the factors tracked by polygraph tests.

The researchers directly compared the ability of two techniques — fMRI and polygraph tests — to detect concealed information. They had 28 participants secretly write down a number between 3 and 8 on a slip of paper. Each participant then had both lie-detection tests, in random order, a few hours apart. During both sessions, they always answered “no” when asked if they had picked a certain number, which meant that one out of the six answers was a lie.

Three fMRI experts and three professional polygraph examiners then independently analyzed the results. The fMRI experts were 24 percent more likely to detect the lie than the polygraph experts, as recently reported in the Journal of Clinical Psychiatry.

Although the study wasn’t designed to evaluate the combined use of both techniques, the polygraph and fMRI results agreed correctly on the concealed number for 17 participants, so the researchers plan to investigate whether the two techniques are complementary.

The study includes only a small number of participants, but the research team is encouraged by the results. “While the jury remains out on whether fMRI will ever become a forensic tool, these data certainly justify further investigation of its potential,” said Daniel Langleben, MD, first author and a professor in psychiatry, in a recent news release.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Researchers discover “brain signature” for fibromyalgia using brain scans

Millions of patients suffering from fibromyalgia often experience widespread musculoskeletal pain, sleep disturbances, fatigue, headaches and mood disorders. Many also struggle to even get diagnosed, since there are currently no laboratory tests for fibromyalgia and the main symptoms overlap with many other conditions. However, new research may help.

Scientists from the University of Colorado, Boulder may have found a pattern of brain activity that identifies the disease. They used functional MRI (fMRI) scans to study the brain activity of 37 fibromyalgia patients and 35 matched healthy controls, while the participants were exposed to a series of painful and non-painful sensations.

As reported recently in the journal PAIN, the research team identified three specific neurological patterns correlated with fibromyalgia patients’ hypersensitivity to pain.

Using the combination of all three patterns, they were able to correctly classify the fibromyalgia patients and the controls with 92 percent sensitivity and 94 percent specificity — meaning that their test accurately identified 92 percent of those with and 94 percent of those without the disease.
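
In confusion-matrix terms, those two figures come from simple ratios. The counts below are reconstructed from the reported percentages and cohort sizes, so treat them as approximate:

```python
def sensitivity(true_pos, false_neg):
    """Fraction of actual patients the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of healthy controls the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# Approximate counts for 37 fibromyalgia patients and 35 controls:
# 34 of 37 patients flagged (3 missed), 33 of 35 controls cleared (2 false alarms).
print(round(sensitivity(34, 3), 2))   # ~0.92
print(round(specificity(33, 2), 2))   # ~0.94
```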

Tor Wager, PhD, senior author and director of the school’s Cognitive and Affective Control Laboratory, explained the significance of the work in a recent news release:

“Though many pain specialists have established clinical procedures for diagnosing fibromyalgia, the clinical label does not explain what is happening neurologically and it does not reflect the full individuality of patients’ suffering. The potential for brain measures like the ones we developed here is that they can tell us something about the particular brain abnormalities that drive an individual’s suffering. That can help us both recognize fibromyalgia for what it is – a disorder of the central nervous system – and treat it more effectively.”

More research is needed, but this study sheds a bit of light on this “invisible” disease.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Stanford otolaryngologist champions ultrasound imaging

Photo, of Lisa Orloff performing an ultrasound exam, by Stuart Kraybill

Patients with thyroid nodules — extremely common lumps on the thyroid that are usually benign, but can be malignant — are typically sent for ultrasound imaging to evaluate the size and structure of their thyroid and nodules. A radiologist’s report is then sent to the treating physician, who discusses the report with the patient and recommends next steps.

Lisa Orloff, MD, director of the endocrine head and neck surgery program at Stanford, doesn’t follow this traditional procedure: she performs her own ultrasound exams in the office and is training other head and neck surgeons to do the same. I recently spoke with Orloff about the role of ultrasound imaging in her practice.

Why do you primarily use ultrasound imaging to diagnose head and neck disease?

“My clinical practice focuses on the surgical management of thyroid and parathyroid disease, especially thyroid cancer. In the head and neck region, ultrasound imaging has long been recognized as the ‘go-to’ study if you want to evaluate the structure, size and content of the thyroid gland. What’s been recognized more recently is how great ultrasound is for most of the head and neck structures. So we’re moving into an era of ‘ultrasound first’: See what you can see with ultrasound, and then decide if you need additional cross-sectional imaging to corroborate or complement the ultrasound findings. For patients with thyroid cancer, ultrasound is extremely useful for evaluating not only the thyroid, but the rest of the neck for aggressive features including possible metastases.

Ultrasound is a low risk, low cost and very high yield imaging study that better characterizes the details within thyroid nodules or lymph nodes; whereas, CT and MRI often rely more on size to say whether or not a thyroid nodule or lymph node is suspicious. It’s really phenomenal what you can see with modern, high-resolution ultrasound equipment.

However, ultrasound has been blamed for the recent increase in incidence of thyroid cancer, which is largely due to increased detection. Even malignant thyroid nodules can sometimes be very indolent cancers that may not require intervention, but can be monitored. A major challenge in thyroid cancer care is distinguishing potentially aggressive tumors from those that are very low risk.”

Why is it helpful to have a clinical doctor, instead of a technician, perform ultrasound?

“When used at the point of care — performed by the clinician who is taking care of the patient — ultrasound enables the treating clinician to immediately investigate and answer questions with ultrasound information, and then implement treatments. It’s sort of one-stop shopping.

There’s an invaluable connection made with the patient when the treating physician performs the ultrasound exam, while explaining findings to the patient and discussing whether and how to treat them. I think it translates into improved patient care. If I’m the one doing the ultrasound exam, I can plan and execute surgery better with first-hand knowledge of what lies beneath the surface — rather than relying on images that someone else captured. I can perform ultrasound-guided biopsies and treatments in the office. I can also judge firsthand when an intervention or even biopsy isn’t necessary.

At present, I’m the surgeon in the head and neck division who routinely uses office-based ultrasound to evaluate patients, many of whom are referred to me specifically for that reason. But my colleagues in comprehensive ENT also perform ultrasonography [ultrasound imaging], as do our fellows and residents. We’re very motivated to train the next generation of otolaryngologists so it becomes more widely practiced in the office setting. We want to reduce the need for multiple appointments and more costly or invasive studies.”

I heard you recently traveled to Zimbabwe. What did you do there?

“My department has developed a relationship with the only medical school in the country, the University of Zimbabwe. I spent two weeks this summer, mainly teaching ultrasonography to residents in both otolaryngology and surgery — introducing the concept of point-of-care ultrasound to a low-resource practice environment where this has the potential for even greater impact. Most patients there don’t have ready access to get an expensive CT or MRI scan. I think ultrasound has a particular application in that setting, because it’s inexpensive, portable, fast and so user friendly. It’s also painless and non-threatening — you can do it on kids without having to anesthetize them to stay still.

Going over there to teach was a really rewarding experience. I hope to go back soon. We were very fortunate to have ultrasound equipment loaned for teaching purposes by GE based in South Africa. My next goal is to raise funds for an ultrasound machine to equip the Zimbabwe program with this wonderful tool for their continuing use.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Imaging study shows genetics and environment affect different parts of the brain

Photo by AdinaVoicu

One of the oldest scientific debates is “nature versus nurture” — do inherited traits or environmental factors shape who we are, and what we do?

So far it’s a draw.

For instance, a massive meta-study reported in Nature Genetics quantified the heritability of human traits by analyzing more than 50 years of data on nearly 18,000 traits measured in over 14.5 million pairs of twins. It found that, averaged across traits, genetic factors accounted for 49 percent of variation and environmental influences for 51 percent.

They essentially found that genes and the environment play an equal role in human development. But that isn’t the end of the debate.

Researchers at Osaka University Graduate School of Medicine in Japan have now added a new twist. They used positron emission tomography (PET) to examine how genetics and environmental factors affect the brain, as reported in the March issue of the Journal of Nuclear Medicine.

The researchers used PET imaging to measure the glucose — or energy — metabolism throughout the brain. The authors explained their motivation in the JNM article:

“The patterns of glucose metabolism in the brain appear to be influenced by various factors, including genetic and environmental factors. However, the magnitude and proportion of these influences remain unknown.”

The researchers studied 40 identical twin pairs and 18 fraternal twin pairs. Any differences between identical twins are expected to be due to environmental factors, since they are genetically identical, whereas fraternal twins share only about half of their genes on average.

The researchers compared imaging results between the two types of twins to estimate the extent of genetic and environmental influences. When a genetic influence is dominant, the identical twins would have more trait similarity than fraternal twins. When an environmental influence is dominant, the trait similarity would be the same for identical and fraternal twins.
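
The logic of comparing the two kinds of twins can be made concrete with Falconer’s classic formula, a standard (and deliberately simplified) way to partition trait variance from twin correlations. The correlations below are invented for illustration, not values from this study:

```python
def falconer(r_identical, r_fraternal):
    """Partition trait variance using twin correlations (Falconer's formula).

    Identical twins share ~100% of their genes and fraternal twins ~50%,
    so doubling the gap between their correlations estimates the genetic share.
    """
    h2 = 2 * (r_identical - r_fraternal)   # heritability (genetic share)
    c2 = 2 * r_fraternal - r_identical     # shared-environment share
    e2 = 1 - r_identical                   # unique environment and measurement noise
    return h2, c2, e2

# Invented example: identical twins correlate 0.8 on a trait, fraternal twins 0.5.
h2, c2, e2 = falconer(0.8, 0.5)
print(round(h2, 2), round(c2, 2), round(e2, 2))
```

When the two correlations are nearly equal, the genetic term shrinks toward zero — exactly the “environment dominates” case described above.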

The researchers found that both genetic and environmental factors influenced glucose metabolism in the brain, but they predominantly affected different areas. Genetic influences played a major role in the left and right parietal lobes and the left temporal lobe, whereas environmental influences were dominant in other regions of the brain.

The brain’s parietal lobes process sensory information such as taste, temperature and touch, and the temporal lobes process sounds and speech comprehension. More research is needed to understand why these areas of the brain were influenced more by genetics.

In addition to adding new information to the “nature versus nurture” debate, these results could be applied to other research areas, such as using imaging to better understand the underlying causes of Alzheimer’s disease or psychiatric disorders. Identifying which regions of the brain are more influenced by genetics or the environment may add critical information to help better understand and treat diseases.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Tattoo ink may mimic cancer on PET-CT images

Photograph by Paulo Guereta

The hit new crime thriller Blindspot is about a mysterious woman, Jane Doe, who is covered in extensive full-body tattoos. If Jane Doe were a real woman who ever needed medical imaging, she might need to be concerned.

In a case report published recently in the journal Obstetrics & Gynecology, researchers found that extensive tattoos can mimic metastases on images from positron emission tomography (PET) fused with computed tomography (CT). PET-CT imaging is commonly used to detect cancer, determine whether the cancer has spread and guide treatment decisions. A false-positive finding can result in unnecessary or incorrect treatment.

Ramez N. Eskander, MD, assistant professor of obstetrics and gynecology at UC Irvine, and his colleagues describe the case of a 32-year-old woman with cervical cancer and extensive tattoos. The pre-operative PET-CT scan using fluorine-18-deoxyglucose confirmed that there was a large cervical cancer mass, but the scan also flagged two iliac lymph nodes as suspicious for metastatic disease. However, final pathology showed extensive deposition of tattoo ink and no malignant cells in those iliac lymph nodes.

It is believed that carbon particles in the tattoo pigment migrate to the nearby lymph nodes through macrophages, using mechanisms similar to those seen in malignant melanoma. The researchers explain in their case report:

Our literature search yielded case reports describing the migration of tattoo ink to regional lymph nodes in patients with breast cancer, melanoma, testicular seminoma, and vulvar squamous cell carcinoma, making it difficult to differentiate grossly between the pigment and the metastatic disease, resulting in unnecessary treatment.

The authors warn other physicians to be aware of the possible effects of tattoo ink on PET-CT findings when formulating treatment plans, particularly for patients with extensive tattoos.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
