Will antidepressants work? Brain activity can help predict

You, or someone you care about, probably takes an antidepressant — given that one in eight Americans do. Despite this widespread use, many experts question whether these drugs even work. Studies have shown that antidepressants are only slightly more effective than a placebo for treating depression.

“The interpretation of these studies is that antidepressants don’t work well as medications,” said Stanford psychiatrist and neurobiologist Amit Etkin, MD, PhD. “An alternative explanation is that the drugs work well for a small portion of people, but we’re giving them to too broad of a population and diminishing overall efficacy. Right now, we prescribe antidepressants based on patients’ clinical symptoms rather than an understanding of their biology.”

In a new study, Etkin and his collaborators sought a biologically based method for predicting whether antidepressants will work for an individual patient.

The researchers analyzed data from the EMBARC study, the first randomized clinical trial for depression with both neuroimaging and a placebo control group. EMBARC included over 300 medication-free depressed outpatients who were randomized to receive either the antidepressant sertraline (brand name Zoloft) or a placebo for eight weeks.

Etkin’s team analyzed functional magnetic resonance imaging (fMRI) data — taken before treatment started — to view the patients’ brain activity while they performed an established emotional-conflict task. The researchers were interested in the brain circuitry that responds to emotion because depression is known to cause various changes in how emotions are processed and regulated.

During the task, the patients were shown pictures of faces and asked to categorize whether each facial expression depicted fear or happiness, while trying to ignore a word written across the face. The distracting word either matched or mismatched the facial expression. For example, the fearful face had either “happy” or “fear” written across it, as shown below.

Participants were asked to decide if this expression was happy or fearful.
(Image courtesy of Amit Etkin)

As expected, having a word that was incongruent with the facial expression slowed down the participants’ response time, but their brains were able to automatically adapt when a mismatch trial was followed by another mismatch trial.

“You experience the mismatched word as less interfering, causing less of a slowdown in your behavior, because your brain has gotten ready for it,” explained Etkin.

However, the participants varied in their ability to adapt. The study found that the people who could adapt well to the mismatched emotional stimuli had increased activity in certain brain regions, but they also had massively decreased activity in other brain regions — particularly in places important for emotional response and attention. In essence, these patients were better able to dampen the distracting effects of the stimuli.

Using machine learning, the researchers showed that this fMRI brain activation signature could predict which individual patients responded better to the antidepressant than to the placebo.

“The better you’re able to dampen the effects of emotional stimuli on emotional and cognitive centers, the better you respond to an antidepressant medication compared to a placebo,” Etkin said. “This means that we’ve established a neurobiological signature reflective of the kind of person who is responsive to antidepressant treatment.”
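
For readers curious about what that prediction step looks like in practice, here is a minimal, hypothetical sketch: it assumes the task-based activation values have already been extracted into a feature matrix and uses an off-the-shelf scikit-learn classifier. The variable names, model choice and random placeholder data are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical inputs: one row per sertraline-treated patient, one column per
# brain region's activation during the emotional-conflict task (placeholder data).
rng = np.random.default_rng(0)
activation_features = rng.normal(size=(150, 20))
responded = rng.integers(0, 2, size=150)  # 1 = responded to the drug, 0 = did not

# Cross-validated classifier: does the pre-treatment activation signature
# carry information about who will respond to the antidepressant?
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, activation_features, responded, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```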

This brain activation signature could be used to separate the people for whom a regular antidepressant works well from those who might need something new and more tailored. But it could also be used to assess potential interventions — such as medications, brain stimulation, cognitive training or mindfulness training — that could help individuals become responsive to antidepressant treatment, he said.

“I think the most important result is that it turns out that antidepressants are not ineffective. In fact, they are quite effective compared to placebo if you give them to the right people. And we’ve identified who those people are using objective biological measures of brain activity.”

The team is currently investigating in clinics around the country whether they can replace the costly fMRI neuroimaging with electroencephalography, a less expensive and more widely available way to measure brain activity.  

Etkin concluded with a hopeful message for all patients suffering from depression: “Our data echoes the experience that antidepressants really help some people. It’s just a question of who those people are. And our new understanding will hopefully accelerate the development of new medications for the people who don’t respond to an antidepressant compared to placebo because we also understand their biology.”

Feature image by inspiredImages

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

How does radiation in space affect the brain?

Exposure to deep space poses many potential risks to the health of astronauts, but one of the biggest dangers is space radiation. Above Earth’s protective shielding, astronauts are exposed to radiation from energetic charged particles that increases their risk of cancer, damage to the central nervous system and a host of other health problems.

A new study has now investigated how chronic, space-like irradiation impacts the brain function of mice. To learn more, I spoke with Ivan Soltesz, PhD, a senior author on the study and a professor of neurosurgery and neurosciences at Stanford.

What was the goal of your study?

“Our basic question was ‘what happens to your brain during a mission to Mars?’ So far, only the Apollo astronauts have traveled far enough beyond the Earth’s protective magnetic field to be exposed to similar galactic cosmic radiation levels, albeit only for short durations.

In previous rodent studies, my lab observed that neuronal function is disrupted by low levels of radiation, a fraction of the dose used for cancer therapy. However, technical constraints required us to deliver the entire radiation dose within minutes, rather than across several months as during a mission to Mars. In the current study, we are the first to investigate the impact of prolonged radiation exposures, at Mars-relevant doses and dose rates, on the neurological function. We used a new neutron irradiation facility at Colorado State University.”

What part of the brain did you study?

“The hippocampus, which is critical for several important brain functions, including the formation of new memories and spatial navigation. And the medial prefrontal cortex, which is important for retrieving preexisting memories, making decisions and processing social information. Thus, deficits in either of these two brain regions could detrimentally impact the ability of astronauts to safely and successfully carry out a mission to Mars.”

What did you find?

“My lab at Stanford measured electrical properties of individual neurons from mice that were exposed to six months of chronic neutron radiation. We determined that after chronic radiation exposure, neurons in the hippocampus were less likely to respond to incoming stimuli and they received a reduced frequency of communication from neighboring neurons.

Our collaborators at UC Irvine found that chronic neutron radiation also caused neuronal circuits in both the hippocampus and medial prefrontal cortex to no longer show long-lasting strengthening of their responses to electrical stimulation, normally referred to as long-term potentiation. Long-term potentiation is a cellular mechanism that allows memory formation.

Our collaborators also conducted behavioral tests. The mice displayed lasting deficits in learning, memory, anxiety and social behavior — even months after radiation exposure. Based on these results, our team predicts that nearly 1 in 5 astronauts would experience elevated anxiety behavior during a mission to Mars, while 1 in every 3 astronauts would struggle with memory recall.”

How can these findings facilitate safe space exploration?

“By understanding radiation risks, future missions can plan practical changes — such as locating astronaut sleeping spaces towards the center of the spacecraft where intervening material blocks more incoming radiation — that may help to mitigate the risks associated with interplanetary travel.

However, my lab believes the best way to protect astronauts from the harmful effects of space radiation is to understand at a basic science level how neuronal activity is disrupted by chronic radiation exposures.

One promising sign is that radiation exposures that occur in space rarely cause neurons in the brain to die, but rather cause smaller scale cellular changes. Thus, we should be able to develop strategies to modulate neuronal activity to compensate for radiation-induced changes. Our team is already starting a new set of chronic space-radiation experiments to test a candidate countermeasure drug.”

Would you ever go to space, given how harmful it is on the human body?

“The radiation risks we discovered are mostly a concern for travel beyond low earth orbit, such as months-long missions to Mars. Shorter trips to the moon — such as the Apollo missions — or months spent in Earth orbit aboard the International Space Station appear to pose a much lower risk of radiation-induced cognitive deficits. I would definitely like to go into space for at least a few quick orbits.

I’m also confident that my lab and others will expand our understanding of how chronic radiation impacts the nervous system and develop the effective countermeasures needed to enable safe missions to the moon or Mars within the next decade. However, I’m not sure I’m ready to leave my lab unattended for two years while I take a sabbatical to Mars.”

Photo by ColiN00B

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Explaining neuroscience in ongoing Instagram video series: A Q&A

At the beginning of the year, Stanford neuroscientist Andrew Huberman, PhD, pledged to post on Instagram one-minute educational videos about neuroscience for an entire year. Since a third of his regular followers come from Spanish-speaking countries, he posts them in both English and Spanish. We spoke soon after he launched the project. And now that half the year is over, I checked in with him about his New Year’s resolution.

How is your Instagram project going?

“It’s going great. I haven’t kept up with the frequency of posts that I initially set out to do, but it’s been relatively steady. The account has grown to about 13,500 followers and there is a lot of engagement. They ask great questions and the vast majority of comments indicate to me that people understand and appreciate the content. I’m really grateful for my followers. Everyone’s time is valuable and the fact that they comment and seem to enjoy the content is gratifying.”

What have you learned?

“The feedback informed me that 60 seconds of information is a lot for some people, especially if the topic requires new terms. That was surprising. So I have opted to do shorter 45-second videos and those get double or more views and reposts. I also have started posting images and videos of brains and such with ‘voice over’ content. It’s more work to produce, but people seem to like that more than the ‘professor talking’ videos.

I still get the ‘you need to blink more!’ comments, but fortunately that has tapered off. My Spanish is also getting better but I’m still not fluent. Neural plasticity takes time but I’ll get there.”

What is your favorite video so far?

“People naturally like the videos that provide something actionable for their health and well-being. The brief series on light and circadian rhythms was especially popular, as well as the one on how looking at the blue light from your cell phone in the middle of the night can potentially alter sleep and mood. I particularly enjoyed making that post since it combined vision science and mental health, which is one of my lab’s main focuses.”

What are you planning for the rest of the year?

“I’m kicking off some longer content through the Instagram TV format, which will allow people who want more in-depth information to get that. I’m also helping The Society for Neuroscience get their message out about their annual meeting. Other than that, I’m just going to keep grinding away at delivering what I think is interesting neuroscience to people that would otherwise not hear about it.”

Is it fun or an obligation at this point?

“There are days where other things take priority of course — research, teaching and caring for my bulldog Costello — but I have to do it anyway since I promised I’d post. However, it’s always fun once I get started. If only I could get Costello to fill in for me when I get busy…”

This is a reposting of my Scope story, courtesy of Stanford School of Medicine.

Simplified analysis method could lead to improved prosthetics, a Stanford study suggests

Brain-machine interfaces (BMIs) are an emerging technology at the intersection of neuroscience and engineering that may improve the quality of life for amputees and individuals with paralysis. These patients are unable to get signals from their motor cortex — the part of the brain that normally controls movement — to their muscles.

Researchers are overcoming this disconnect by implanting in the brain small electrode arrays, which measure and decode the electrical activity of neurons in the motor cortex. The sensors’ electrical signals are transmitted via a cable to a computer and then translated into commands that control a computer cursor or prosthetic limb. Someday, scientists also hope to eliminate the cable, using wireless brain sensors to control prosthetics.

In order to realize this dream, however, they need to improve both the brain sensors and the algorithms used to decode the neural signals. Stanford electrical engineer Krishna Shenoy, PhD, and his collaborators are tackling this algorithm challenge, as described in a recent paper in Neuron.

Currently, most neuroscientists process their BMI data by looking for “spikes” of electrical activity from individual neurons. But this process requires time-consuming manual sorting or computationally intensive automatic sorting, both of which are prone to errors.

Manual data sorting will also become unrealistic for future technologies, which are expected to record thousands to millions of electrode channels compared to the several hundred channels recorded by today’s state-of-the-art sensors. For example, a dataset composed of 1,000 channels could take over 100 hours to hand sort, the paper says. In addition, neuroscientists would like to measure a greater brain volume for longer durations.

So, how can they decode all of this data?

Shenoy suggests simplifying the data analysis by eliminating spike sorting for applications that depend on the activity of neural populations rather than single neurons — such as brain-machine interfaces for prosthetics.

In their new study, the Stanford team investigated whether eliminating this spike sorting step distorted BMI data. Turning to statistics, they developed an analysis method that retains accuracy while extracting information from groups rather than individual neurons. Using experimental data from three previous animal studies, they demonstrated that their algorithms could accurately decode neural activity with minimal distortion — even when each BMI electrode channel measured several neurons. They also validated these experimental results with theory.
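
To make the alternative concrete, here is a toy sketch of the idea of skipping spike sorting: each electrode channel contributes a simple multi-unit event count (threshold crossings), which a population decoder can consume directly. The threshold value, channel count and synthetic data below are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def threshold_crossings(voltage, thresh_sd=-4.5):
    """Count multi-unit threshold crossings on one channel, with no spike sorting.

    Any downward crossing of a multiple of the channel's noise level is counted
    as an event, regardless of which neuron produced it.
    """
    threshold = thresh_sd * np.std(voltage)
    below = voltage < threshold
    # Count transitions from an above-threshold sample to a below-threshold sample.
    return int(np.sum(~below[:-1] & below[1:]))

# Hypothetical recording snippet: 96 electrode channels, 30,000 samples each.
rng = np.random.default_rng(1)
recording = rng.normal(size=(96, 30_000))

# One firing-rate-like feature per channel; a population decoder (for example,
# a Kalman filter mapping counts to cursor velocity) would use this vector
# directly, with no manual sorting step.
counts = np.array([threshold_crossings(channel) for channel in recording])
print(counts.shape)  # (96,)
```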

 “This study has a bit of a hopeful message in that observing activity in the brain turns out to be easier than we initially expected,” says Shenoy in a recent Stanford Engineering news release. The researchers hope their work will guide the design and use of new low-power, higher-density devices for clinical applications since their simplified analysis method reduces the storage and processing requirements.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Photo by geralt.

Creativity can jump or slump during middle childhood, a Stanford study shows

 

Photo by Free-Photos

As a postdoctoral fellow in psychiatry, Manish Saggar, PhD, stumbled across a paper published in 1968 by a creativity pioneer named E. Paul Torrance, PhD. The paper described an unexplained creativity slump occurring during fourth grade that was associated with underachievement and increased risk for mental health problems. He was intrigued and wondered what exactly was going on.  “It seemed like a profound problem to solve,” says Saggar, who is now a Stanford assistant professor of psychiatry and behavioral sciences.

Saggar’s latest research study, recently published in NeuroImage, provides new clues about creativity during middle childhood. The research team studied the creative thinking ability of 48 children — 21 starting third grade and 27 starting fourth grade — at three time points across one year. This allowed the researchers to piece together data from the two groups to estimate how creativity changes from 8 to 10 years of age.

At each of the time points, the students were assessed using an extensive set of standardized tests for intelligence, aberrant behavior, response inhibition, temperament and creativity. Their brains were also scanned using a functional near-infrared spectroscopy (fNIRS) cap, which imaged brain function as they performed a standardized Figural Torrance Test of Creative Thinking.

During this test, the children sat at a desk and used a pen and paper to complete three different incomplete figures to “tell an unusual story that no one else will think of.” Their brains were scanned during these creative drawing tasks, as well as when they rested (looking at a picture of a plus sign) and when they completed a control drawing (connecting the dots on a grid).

Rather than using the conventional categories of age or grade level, the researchers grouped the participants based on the data — revealing three distinct patterns in how creativity could change during middle childhood.

The first group of kids slumped in creativity initially and then showed an increase after transitioning to the next grade, while the second group showed the inverse. The final group of children showed no initial change in creativity and then a boost after transitioning to the next grade.

“A key finding of our study is that we cannot group children together based on grade or age, because everybody is on their own trajectory,” says Saggar.

The researchers also found a correlation between creativity and rule-breaking or aggressive behaviors for these participating children, who scored well within the normal range of the standard child behavior checklist used to assess behavioral and emotional problems. As Saggar clarifies, these “problem behaviors” were things like arguing a lot or preferring to be with older kids rather than actions like fighting.

“In our cohort, the aggression and rule-breaking behaviors point towards enhanced curiosity and to not conforming to societal rules, consistent with the lay notion of ‘thinking outside the box’ to create unusual and novel ideas,” Saggar explains. “Classic creative thinking tasks require people to break rules between cognitive elements to form new links between previously unassociated elements.”

They also found a correlation between creativity and increased functional segregation of the frontal regions of the brain. Some brain functions are carried out by regions working independently, while others rely on integration, with different brain regions working together on a task. For example, a relaxing walk in the park with a wandering mind might involve brain regions chattering in a segregated, independent fashion, while focusing intently to memorize a series of numbers might require integration across regions. The brain needs to balance this segregation and integration. In the study, increases in creativity tracked with increased segregation of the frontal regions.

“Having increased segregation in the frontal regions indicates that they weren’t really focusing on something specific,” Saggar says. “The hypothesis we have is perhaps you need more diffused attention to be more creative. Like when you get your best ideas while taking a shower or a walk.”
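
As a purely illustrative example of how segregation can be quantified, the sketch below computes a simple "system segregation" score, comparing the strength of correlations within a set of regions to correlations between sets. This particular metric, the region labels and the synthetic fNIRS-like data are assumptions for illustration, not necessarily the measure used in the study.

```python
import numpy as np

def system_segregation(corr, labels):
    """Compare within-system correlations to between-system correlations.

    corr   : region-by-region functional connectivity (correlation) matrix
    labels : system assignment for each region (e.g. 'frontal', 'parietal')
    Returns (mean within - mean between) / mean within; higher values mean
    the systems operate more independently of one another.
    """
    labels = np.asarray(labels)
    same_system = labels[:, None] == labels[None, :]
    off_diagonal = ~np.eye(len(labels), dtype=bool)
    within = corr[same_system & off_diagonal].mean()
    between = corr[~same_system].mean()
    return (within - between) / within

# Hypothetical connectivity matrix from six regions' time series, built so that
# regions within the same system share a common signal.
rng = np.random.default_rng(3)
shared_frontal = rng.normal(size=200)
shared_parietal = rng.normal(size=200)
time_series = np.vstack(
    [shared_frontal + rng.normal(size=200) for _ in range(3)]
    + [shared_parietal + rng.normal(size=200) for _ in range(3)]
)
connectivity = np.corrcoef(time_series)
labels = ["frontal", "frontal", "frontal", "parietal", "parietal", "parietal"]
print(round(system_segregation(connectivity, labels), 2))
```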

Saggar hopes their findings will help develop new interventions for teachers and parents in the future, but he says that longer studies, with a larger and more diverse group of children, are first needed to validate their results.

Once they confirm that the profiles observed in their current study actually exist in larger samples, the next step will be to see if they can train kids to improvise and become more creative, similar to a neuroscience study that successfully trained adults to enhance their creativity.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

“The brain is just so amazing:” New Instagram video series explains neuroscience

Photo by Norbert von der Groeben

Many people make New Year’s resolutions to exercise more or eat healthier. Not Stanford neurobiology professor Andrew Huberman, PhD. This year he set out to educate the public about exciting discoveries in neuroscience using Instagram.

Huberman’s sights are high: he pledged to post on Instagram one-minute educational videos about neuroscience an average of five times per week for an entire year. I recently spoke with him to see how he’s doing on his resolution.

Why did you start the Instagram video series?

“Although I’m running a lab where we’re focused on making discoveries, I’ve also been communicating science to the general public for a while. I’ve found that there’s just immense interest in the brain — about diseases, what’s going on in neuroscience now, and how these discoveries might impact the audience. The brain is just so amazing, so the interest makes sense to me.

I don’t spend much time on social media, but Instagram seemed like an interesting venue for science communication because it’s mostly visual. My lab already had an Instagram account that we successfully used to recruit human subjects for our studies. So at the end of last year, I was talking with a friend about public service. I told him I was thinking about creating short, daily educational videos about neuroscience — a free, open resource that anyone can view and learn from. He and some other friends said they’d totally watch that. So I committed to it in a video post to 5000 people, and then there was no backing down.”

What topics do you cover?

“I cover a lot of topics. But I feel there are two neuroscience topics that will potentially impact the general public in many positive ways if they can understand the underlying biology: neuroplasticity — the brain’s ability to change— and stress regulation. My primary interest is in vision science, so I like to highlight how the visual system interacts with other systems.

I discuss the literature, dispel myths, touch on some of the interesting mysteries and describe some of the emerging tools and technologies. I talk a bit about my work but mostly about work from other labs. And I’m always careful not to promote any specific tools or practices.”

How popular are your videos?

 “We have grown to about 8,000 followers in the last six weeks and it’s getting more viewers worldwide. According to the stats from Instagram, about a third of my regular listeners are in Spanish-speaking countries. Some of these Spanish-speaking followers started requesting that I make the videos in Spanish so they could share them. Last week I started posting the videos in both English and Spanish and there’s been a great response. My Spanish is weak but it’s getting better, so I’m also out to prove neural plasticity is possible in adulthood. By the end of the year I plan to be fluent in Spanish.

I’ve also had requests to do it in French, German, Chinese and Dutch but I’m not planning to expand to additional languages yet. I think my pronunciation of those languages would be so bad that it would be painful for everybody.

Currently, my most popular video series is about the effects of light on wakefulness and sleep — such as how exposure to blue light from looking at your phone in the middle of the night might trigger a depression-like circuit. But my most popular videos include Julian, a high school kid that I mentor. People have started commenting #teamjulianscience, which is pretty amusing.”

What have you learned?

“It’s turned out to be a lot harder to explain things in 60 seconds than I initially thought. I have to really distill down ideas to their core elements. Many professors are notorious for going on and on about what they do, saying it in language that nobody can understand. My goal is to not be THAT professor.

I’ve also learned that I don’t blink. Sixty seconds goes by fast so I just dive in and rattle it off. After a couple of weeks, people started posting “you never blink!” — so now I insert blinks to get them to stop saying that.

I’ve also found the viewer comments and questions to be really interesting. They cue to me what the general public is confused about. But I’ve also found that many people have a really nuanced and deep curiosity about brain science. It’s been a real pleasure to see that.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

How does media multitasking affect the mind?

Image by Mohamed Hassan

Imagine that you’re working on your computer, watching the Warriors game, exchanging texts and checking Facebook. Sound familiar? Many people simultaneously view multiple media streams every day.

Over the past decade, researchers have been studying the relationship between this type of heavy media multitasking and cognition to determine how our media use is shaping our minds and brains. This is a particularly critical question for teenagers, who use technology for almost 9 hours every day on average, not including school-related use.

Many studies have examined cognitive performance in young adults using a variety of task-based cognitive tests — comparing the performance of heavy and light media multitaskers. According to a recent review article, these studies show that heavy media multitaskers perform significantly worse, particularly when the tasks require sustained, goal-oriented attention.

For example, a pivotal study led by Anthony Wagner, PhD, a Stanford professor of psychology and co-author of the review article, developed a questionnaire-based media multitasking index to identify the two groups — based on the number of media streams a person juggles during a typical media consumption hour, as well as the time spent on each medium. Twelve media forms were included, ranging from computer games to cell phone calls.
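
An index like this boils down to a weighted average: each medium's weekly hours are weighted by how many media streams are typically running alongside it. The sketch below shows one plausible formulation based on that description; the weighting scheme, function name and example numbers are illustrative assumptions, not the published instrument itself.

```python
def media_multitasking_index(hours, simultaneous):
    """Hours-weighted average of how many media streams run at once.

    hours        : weekly hours spent with each primary medium
    simultaneous : mean number of media used at the same time while using
                   that medium (including the medium itself)
    """
    total_hours = sum(hours.values())
    return sum(simultaneous[m] * hours[m] for m in hours) / total_hours

# Hypothetical respondent who rarely consumes any medium on its own.
hours = {"tv": 10, "texting": 7, "music": 14, "web": 12}
simultaneous = {"tv": 2.0, "texting": 2.5, "music": 1.8, "web": 2.2}
print(round(media_multitasking_index(hours, simultaneous), 2))  # about 2.07
```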

The team administered their questionnaire and several standard cognitive tests to Stanford students. In one series of tests, the researchers measured the working memory capabilities of 22 light multitaskers and 19 heavy multitaskers. Working memory is the mental post-it note used to keep track of information, like a set of simple instructions, in the short term.

“In one test, we show a set of oriented blue rectangles, then remove them from the screen and ask the subject to retain that information in mind. Then we’ll show them another set of rectangles and ask if any have changed orientation,” described Wagner in a recent Stanford Q&A. “To measure memory capacity, we do this task with a different number of rectangles and determine how performance changes with increasing memory loads. To measure the ability to filter out distraction, sometimes we add distractors, like red rectangles that the subjects are told to ignore.”

Wagner also performed standard task-switching experiments in which the students viewed images of paired numbers and letters and analyzed them. The students had to switch back and forth between classifying the numbers as even or odd and the letters as vowels or consonants.

The Stanford study showed that heavy multitaskers were less effective at filtering out irrelevant stimuli, whereas light multitaskers found it easier to focus on a single task in the face of distractions.

Overall, this previous study is representative of the twenty subsequent studies discussed in the recent review article. Wagner and co-author Melina Uncapher, PhD, a neuroscientist at the University of California, San Francisco, theorized that lapses in attention may explain most of the current findings — heavy media multitaskers have more difficulty staying on task and returning to task when attention has lapsed than light multitaskers.

However, the authors emphasized that the large diversity of the current studies and their results raise more questions than they answer, such as what is the direction of causation? Does heavier media multitasking cause cognitive and neural differences, or do individuals with such preexisting differences tend towards more multitasking behavior? They said more research is needed.

Wagner concluded in the Q&A:

“I would never tell anyone that the data unambiguously show that media multitasking causes a change in attention and memory. That would be premature… That said, multitasking isn’t efficient. We know there are costs of task switching. So that might be an argument to do less media multitasking.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Brain’s serotonin system includes multiple, sometimes conflicting, pathways

Photo by Pablo García Saldaña

Although the serotonin system — which helps regulate mood and social behavior, appetite and digestion, sleep, memory and motor skills — is critical to so many functions in the human body, its underlying organization and properties are not well understood. Past studies have even reported divergent results.

New research may help clear up this confusion, as recently reported in Cell. Stanford biologist Liqun Luo, PhD, discovered that the serotonin system is actually composed of multiple parallel subsystems that function differently, at times in opposing ways.

“The field’s understanding of the serotonin system was like the story of the blind men touching the elephant,” Luo said in a recent Stanford news release. “Scientists were discovering distinct functions of serotonin in the brain and attributing them to a monolithic serotonin system, which at least partly accounts for the controversy about what serotonin actually does. This study allows us to see different parts of the elephant at the same time.”

Luo’s team studied the dorsal raphe, a region of the brainstem containing a high concentration of serotonin-producing neurons, in mice. They injected this region’s nerve fibers with a modified virus engineered to exhibit bright green fluorescence — allowing them to image and trace how the dorsal raphe’s neurons are connected to other regions in the brain. They observed two distinct groups of neurons in the dorsal raphe.

Using behavioral tests, they then determined that these two neuron groups sometimes responded differently to stimuli. For instance, in response to a mild punishment, neurons from the two groups showed opposite responses.

The researchers also found these neurons released the chemical glutamate in addition to serotonin, raising the question of whether they should even be called serotonin neurons.

These research findings have the potential for wide-ranging clinical applications, including the development of better drugs to treat depression and anxiety. Currently, the most commonly prescribed antidepressants are selective serotonin reuptake inhibitors (SSRIs), which target the serotonin system. However, some people can’t tolerate the side effects of SSRI antidepressants. A better understanding of the serotonin system may help.

“If we can target the relevant pathways of the serotonin system individually, then we may be able to eliminate the unwanted side effects and treat only the disorder,” said study first author Jing Ren, PhD, a postdoctoral fellow in Luo’s lab.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Awe, anxiety, joy: Researchers identify 27 categories for human emotions

Photo by hannahlouise123

Scores of words describe the wide range of emotions we experience. And as we grasp for words to describe our feelings, scientists are similarly struggling to comprehend how our brain processes and connects these feelings.

Now, a new study from the University of California, Berkeley challenges the assumptions traditionally made in the science of emotion. It was published recently in the Proceedings of the National Academy of Sciences.

Past research has generally categorized all emotions into six to 12 groups, such as happiness, sadness, anger, fear, surprise and disgust. However, the Berkeley researchers identified 27 distinct categories of emotions.

They asked a diverse group of over 850 men and women to view a random sampling of 2,185 short, silent videos that depicted a wide range of emotional situations — including births, endearing animals, natural beauty, vomit, warfare and natural disasters, to name just a few. The participants reported their emotional response after each video — using a variety of techniques, including independently naming their emotions or rating the degree to which they felt each of 34 specific emotions. The researchers analyzed these responses using statistical modeling.

The results showed that participants generally had a similar emotional response to each of the videos, and these responses could be categorized into 27 distinct groups of emotions. The team also organized and mapped the emotional responses for all the videos, using a particular color for each of the 27 categories. They created an interactive map that includes links to the video clips and lists their emotional scores.

“We sought to shed light on the full palette of emotions that color our inner world,” said lead author Alan Cowen, a graduate student in neuroscience at UC Berkeley, in a recent news release.

In addition, the new study refuted the traditional view that emotional categories were entirely distinct islands. Instead, they found many categories to be linked by fuzzy boundaries. For example, there are smooth gradients between emotions like awe and peacefulness, they said.

Cowen explained in the release:

“We don’t get finite clusters of emotions in the map because everything is interconnected. Emotional experiences are so much richer and more nuanced than previously thought.

Our hope is that our findings will help other scientists and engineers more precisely capture the emotional states that underlie moods, brain activity and expressive signals, leading to improved psychiatric treatments, an understanding of the brain basis of emotion and technology responsive to our emotional needs.”

The team hopes to expand their research to include other types of stimuli such as music, as well as participants from a wider range of cultures using languages other than English.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Artificial intelligence can help predict who will develop dementia, a new study finds

 

Photo by Lukas Budimaier

If you could find out years ahead that you were likely to develop Alzheimer’s, would you want to know?

Researchers from McGill University argue that patients and their families could better plan and manage care given this extra time. So the team has developed new artificial intelligence software that uses positron emission tomography (PET) scans to predict whether at-risk patients will develop Alzheimer’s within two years.

They retrospectively studied 273 individuals with mild cognitive impairment who participated in the Alzheimer’s Disease Neuroimaging Initiative (ADNI), a global research study that collects imaging, genetics, cognitive, cerebrospinal fluid and blood data to help define the progression of Alzheimer’s disease.

Patients with mild cognitive impairment have noticeable problems with memory and thinking tasks that are not severe enough to interfere with daily life. Scientists know these patients have abnormal amounts of tau and beta-amyloid proteins in specific brain regions involved in memory, and this protein accumulation occurs years before the patients have dementia symptoms.

However, not everyone with mild cognitive impairment will go on to develop dementia, and the McGill researchers aimed to predict which ones will.

First, the team trained their artificial intelligence software to recognize patients who would develop Alzheimer’s, based on key features in the amyloid PET scans of ADNI participants. Next, they assessed the performance of the trained AI using an independent set of ADNI amyloid PET scans. It predicted Alzheimer’s progression before symptom onset with an accuracy of 84 percent, as reported in a recent paper in Neurobiology of Aging.
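
As a rough, hypothetical illustration of that train-then-test workflow (not the McGill team's actual software), the sketch below fits a standard classifier on one set of PET-derived features and evaluates it on held-out scans. The feature set, model choice and random placeholder data are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: features extracted from amyloid PET scans
# (e.g. regional uptake values) and a label for whether the patient
# progressed to Alzheimer's within two years.
rng = np.random.default_rng(42)
pet_features = rng.normal(size=(273, 60))
progressed = rng.integers(0, 2, size=273)

# Train on one subset of patients, then evaluate on held-out scans,
# mirroring the train-then-independent-test design described above.
X_train, X_test, y_train, y_test = train_test_split(
    pet_features, progressed, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```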

The researchers hope their new AI tool will help improve patient care, as well as accelerate research to find a treatment for Alzheimer’s disease by identifying which patients to select for clinical trials.

“By using this tool, clinical trials could focus only on individuals with a higher likelihood of progressing to dementia within the time frame of the study. This will greatly reduce the cost and time necessary to conduct these studies,” said Serge Gauthier, MD, a senior author and professor of neurology and neurosurgery and of psychiatry at McGill, in a recent news release.

The new AI tool is now available to scientists and students, but the McGill researchers need to conduct further testing before it will be approved and available to clinicians.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
