Explaining neuroscience in ongoing Instagram video series: A Q&A

At the beginning of the year, Stanford neuroscientist Andrew Huberman, PhD, pledged to post one-minute educational videos about neuroscience on Instagram for an entire year. Since a third of his regular followers come from Spanish-speaking countries, he posts the videos in both English and Spanish. We spoke soon after he launched the project; now that half the year is over, I checked in with him again about his New Year’s resolution.

How is your Instagram project going?

“It’s going great. I haven’t kept up with the frequency of posts that I initially set out to do, but it’s been relatively steady. The account has grown to about 13,500 followers and there is a lot of engagement. They ask great questions and the vast majority of comments indicate to me that people understand and appreciate the content. I’m really grateful for my followers. Everyone’s time is valuable and the fact that they comment and seem to enjoy the content is gratifying.”

What have you learned?

“The feedback informed me that 60 seconds of information is a lot for some people, especially if the topic requires new terms. That was surprising. So I have opted to do shorter 45-second videos and those get double or more views and reposts. I also have started posting images and videos of brains and such with ‘voice over’ content. It’s more work to produce, but people seem to like that more than the ‘professor talking’ videos.

I still get the ‘you need to blink more!’ comments, but fortunately that has tapered off. My Spanish is also getting better but I’m still not fluent. Neural plasticity takes time but I’ll get there.”

What is your favorite video so far?

“People naturally like the videos that provide something actionable for their health and well-being. The brief series on light and circadian rhythms was especially popular, as well as the one on how looking at the blue light from your cell phone in the middle of the night can potentially alter sleep and mood. I particularly enjoyed making that post since it combined vision science and mental health, which is one of my lab’s main focuses.”

What are you planning for the rest of the year?

“I’m kicking off some longer content through the Instagram TV format, which will allow people who want more in-depth information to get it. I’m also helping The Society for Neuroscience get their message out about their annual meeting. Other than that, I’m just going to keep grinding away at delivering what I think is interesting neuroscience to people who would otherwise not hear about it.”

Is it fun or an obligation at this point?

“There are days when other things take priority, of course — research, teaching and caring for my bulldog Costello — but I have to do it anyway since I promised I’d post. However, it’s always fun once I get started. If only I could get Costello to fill in for me when I get busy…”

This is a reposting of my Scope story, courtesy of Stanford School of Medicine.

Simplified analysis method could lead to improved prosthetics, a Stanford study suggests

Brain-machine interfaces (BMIs) are an emerging technology at the intersection of neuroscience and engineering that may improve the quality of life for amputees and individuals with paralysis. These patients are unable to get signals from their motor cortex — the part of the brain that normally controls movement — to their muscles.

Researchers are overcoming this disconnect by implanting small electrode arrays in the brain, which measure the electrical activity of neurons in the motor cortex. The sensors’ electrical signals are transmitted via a cable to a computer, where they are decoded and translated into commands that control a computer cursor or prosthetic limb. Someday, scientists also hope to eliminate the cable, using wireless brain sensors to control prosthetics.

In order to realize this dream, however, they need to improve both the brain sensors and the algorithms used to decode the neural signals. Stanford electrical engineer Krishna Shenoy, PhD, and his collaborators are tackling this algorithm challenge, as described in a recent paper in Neuron.

Currently, most neuroscientists process their BMI data by looking for “spikes” of electrical activity from individual neurons. But this process requires time-consuming manual sorting or computationally intense automatic sorting, both of which are prone to errors.

Manual data sorting will also become unrealistic for future technologies, which are expected to record thousands to millions of electrode channels compared to the several hundred channels recorded by today’s state-of-the-art sensors. For example, a dataset composed of 1,000 channels could take over 100 hours to hand sort, the paper says. In addition, neuroscientists would like to measure a greater brain volume for longer durations.

So, how can they decode all of this data?

Shenoy suggests simplifying the data analysis by eliminating spike sorting for applications that depend on the activity of neural populations rather than single neurons — such as brain-machine interfaces for prosthetics.

In their new study, the Stanford team investigated whether eliminating this spike-sorting step distorted BMI data. Turning to statistics, they developed an analysis method that retains accuracy while extracting information from groups of neurons rather than from individual cells. Using experimental data from three previous animal studies, they demonstrated that their algorithms could accurately decode neural activity with minimal distortion — even when each BMI electrode channel measured several neurons. They also validated these experimental results with theory.
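
To make the idea concrete, here is a minimal Python sketch of decoding from unsorted “threshold crossings” — my own toy illustration, not the Stanford team’s code or their exact algorithm. It simulates binned multiunit counts on 96 hypothetical channels (each pooling several neurons) and fits a simple regularized linear decoder to a two-dimensional cursor velocity; real BMI decoders typically use more sophisticated filters, but the population-level principle is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate a toy recording: 96 channels, each pooling a few unsorted neurons ---
n_channels, n_bins = 96, 2000
velocity = rng.standard_normal((n_bins, 2))      # hypothetical 2-D cursor velocity

# Each channel's threshold-crossing count is a noisy mixture of the two velocity
# components -- a stand-in for multiunit activity tuned to movement.
tuning = rng.standard_normal((2, n_channels))
rates = np.exp(0.3 * velocity @ tuning)          # positive firing rates
counts = rng.poisson(rates)                      # binned threshold crossings (no spike sorting)

# --- Fit a simple linear decoder on the first half, test on the second half ---
split = n_bins // 2
X_train, X_test = counts[:split], counts[split:]
y_train, y_test = velocity[:split], velocity[split:]

# Ridge-regularized least squares: W maps channel counts to velocity
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_channels),
                    X_train.T @ y_train)

y_pred = X_test @ W
corr = [np.corrcoef(y_test[:, i], y_pred[:, i])[0, 1] for i in range(2)]
print("decoding correlation (x, y):", np.round(corr, 2))
```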

“This study has a bit of a hopeful message in that observing activity in the brain turns out to be easier than we initially expected,” says Shenoy in a recent Stanford Engineering news release. The researchers hope their work will guide the design and use of new low-power, higher-density devices for clinical applications, since their simplified analysis method reduces the storage and processing requirements.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Photo by geralt.

Creativity can jump or slump during middle childhood, a Stanford study shows

Photo by Free-Photos

As a postdoctoral fellow in psychiatry, Manish Saggar, PhD, stumbled across a paper published in 1968 by a creativity pioneer named E. Paul Torrance, PhD. The paper described an unexplained creativity slump occurring during fourth grade that was associated with underachievement and increased risk for mental health problems. He was intrigued and wondered what exactly was going on.  “It seemed like a profound problem to solve,” says Saggar, who is now a Stanford assistant professor of psychiatry and behavioral sciences.

Saggar’s latest research study, recently published in NeuroImage, provides new clues about creativity during middle childhood. The research team studied the creative thinking ability of 48 children — 21 starting third grade and 27 starting fourth grade — at three time points across one year. This allowed the researchers to piece together data from the two groups to estimate how creativity changes from 8 to 10 years of age.

At each of the time points, the students were assessed using an extensive set of standardized tests for intelligence, aberrant behavior, response inhibition, temperament and creativity. Their brains were also scanned using a functional near-infrared spectroscopy (fNIRS) cap, which imaged brain function as they performed a standardized Figural Torrance Test of Creative Thinking.

During this test, the children sat at a desk and used a pen and paper to complete three different incomplete figures to “tell an unusual story that no one else will think of.” Their brains were scanned during these creative drawing tasks, as well as when they rested (looking at a picture of a plus sign) and when they completed a control drawing (connecting the dots on a grid).

Rather than using the conventional categories of age or grade level, the researchers grouped the participants based on the data — revealing three distinct patterns in how creativity could change during middle childhood.

The first group of kids slumped in creativity initially and then showed an increase in creativity after transitioning to the next grade, while the second group showed the inverse. The final group of children had no change in creativity at first and then a boost after transitioning to the next grade.

“A key finding of our study is that we cannot group children together based on grade or age, because everybody is on their own trajectory,” says Saggar.

The researchers also found a correlation between creativity and rule-breaking or aggressive behaviors for these participating children, who scored well within the normal range of the standard child behavior checklist used to assess behavioral and emotional problems. As Saggar clarifies, these “problem behaviors” were things like arguing a lot or preferring to be with older kids rather than actions like fighting.

“In our cohort, the aggression and rule-breaking behaviors point towards enhanced curiosity and to not conforming to societal rules, consistent with the lay notion of ‘thinking outside the box’ to create unusual and novel ideas,” Saggar explains. “Classic creative thinking tasks require people to break rules between cognitive elements to form new links between previously unassociated elements.”

They also found a correlation between creativity and increased functional segregation of the frontal regions of the brain. Some brain functions are carried out by regions working largely independently (segregation), while others require integration, with different brain regions working together on a task. For example, a relaxing walk in the park with a wandering mind might involve brain regions chattering in a segregated, independent fashion, while focusing intently to memorize a series of numbers might require brain integration. The brain needs to balance this segregation and integration. In the study, increases in creativity tracked with increased segregation of the frontal regions.
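
To illustrate what “functional segregation” can mean in numbers, here is a toy Python sketch — my own simplified example, not the metric or pipeline used in the NeuroImage study. It compares average functional connectivity within hypothetical networks to connectivity between them; higher values mean the regions are operating more independently.

```python
import numpy as np

def segregation_index(fc, labels):
    """Toy segregation measure: (within - between) / within, computed from a
    functional connectivity matrix `fc` and a network label per region.
    An illustration of the general idea, not the study's exact metric."""
    fc = np.asarray(fc, dtype=float)
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    within = fc[same & off_diag].mean()
    between = fc[~same].mean()
    return (within - between) / within

# Hypothetical 6-region connectivity matrix, two networks of 3 regions each
rng = np.random.default_rng(1)
fc = rng.uniform(0.1, 0.3, size=(6, 6))
fc[:3, :3] += 0.5   # strong connections within network A
fc[3:, 3:] += 0.5   # strong connections within network B
fc = (fc + fc.T) / 2
labels = ["A", "A", "A", "B", "B", "B"]

print("segregation index:", round(segregation_index(fc, labels), 2))
```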

“Having increased segregation in the frontal regions indicates that they weren’t really focusing on something specific,” Saggar says. “The hypothesis we have is perhaps you need more diffused attention to be more creative. Like when you get your best ideas while taking a shower or a walk.”

Saggar hopes their findings will help develop new interventions for teachers and parents in the future, but he says that longer studies, with a larger and more diverse group of children, are first needed to validate their results.

Once they confirm that the profiles observed in their current study actually exist in larger samples, the next step will be to see if they can train kids to improvise and become more creative, similar to a neuroscience study that successfully trained adults to enhance their creativity.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

“The brain is just so amazing:” New Instagram video series explains neuroscience

Photo of Andrew Huberman by Norbert von der Groeben

Many people make New Year’s resolutions to exercise more or eat healthier. Not Stanford neurobiology professor Andrew Huberman, PhD. This year he set out to educate the public about exciting discoveries in neuroscience using Instagram.

Huberman has set his sights high: he pledged to post one-minute educational videos about neuroscience on Instagram an average of five times per week for an entire year. I recently spoke with him to see how he’s doing on his resolution.

Why did you start the Instagram video series?

“Although I’m running a lab where we’re focused on making discoveries, I’ve also been communicating science to the general public for a while. I’ve found that there’s just immense interest in the brain — about diseases, what’s going on in neuroscience now, and how these discoveries might impact the audience. The brain is just so amazing, so the interest makes sense to me.

I don’t spend much time on social media, but Instagram seemed like an interesting venue for science communication because it’s mostly visual. My lab already had an Instagram account that we successfully used to recruit human subjects for our studies. So at the end of last year, I was talking with a friend about public service. I told him I was thinking about creating short, daily educational videos about neuroscience — a free, open resource that anyone can view and learn from. He and some other friends said they’d totally watch that. So I committed to it in a video post to 5000 people, and then there was no backing down.”

What topics do you cover?

“I cover a lot of topics. But I feel there are two neuroscience topics that will potentially impact the general public in many positive ways if they can understand the underlying biology: neuroplasticity — the brain’s ability to change — and stress regulation. My primary interest is in vision science, so I like to highlight how the visual system interacts with other systems.

I discuss the literature, dispel myths, touch on some of the interesting mysteries and describe some of the emerging tools and technologies. I talk a bit about my work but mostly about work from other labs. And I’m always careful not to promote any specific tools or practices.”

How popular are your videos?

 “We have grown to about 8,000 followers in the last six weeks and it’s getting more viewers worldwide. According to the stats from Instagram, about a third of my regular listeners are in Spanish-speaking countries. Some of these Spanish-speaking followers started requesting that I make the videos in Spanish so they could share them. Last week I started posting the videos in both English and Spanish and there’s been a great response. My Spanish is weak but it’s getting better, so I’m also out to prove neural plasticity is possible in adulthood. By the end of the year I plan to be fluent in Spanish.

I’ve also had requests to do it in French, German, Chinese and Dutch but I’m not planning to expand to additional languages yet. I think my pronunciation of those languages would be so bad that it would be painful for everybody.

Currently, my most popular video series is about the effects of light on wakefulness and sleep — such as how exposure to blue light from looking at your phone in the middle of the night might trigger a depression-like circuit. But my most popular videos include Julian, a high school kid that I mentor. People have started commenting #teamjulianscience, which is pretty amusing.”

What have you learned?

“It’s turned out to be a lot harder to explain things in 60 seconds than I initially thought. I have to really distill down ideas to their core elements. Many professors are notorious for going on and on about what they do, saying it in language that nobody can understand. My goal is to not be THAT professor.

I’ve also learned that I don’t blink. Sixty seconds goes by fast so I just dive in and rattle it off. After a couple of weeks, people started posting ‘you never blink!’ — so now I insert blinks to get them to stop saying that.

I’ve also found the viewer comments and questions to be really interesting. They clue me in to what the general public is confused about. But I’ve also found that many people have a really nuanced and deep curiosity about brain science. It’s been a real pleasure to see that.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

How does media multitasking affect the mind?

Image by Mohamed Hassan

Imagine that you’re working on your computer, watching the Warriors game, exchanging texts and checking Facebook. Sound familiar? Many people simultaneously view multiple media streams every day.

Over the past decade, researchers have been studying the relationship between this type of heavy media multitasking and cognition to determine how our media use is shaping our minds and brains. This is a particularly critical question for teenagers, who use technology for almost 9 hours every day on average, not including school-related use.

Many studies have examined cognitive performance in young adults using a variety of task-based cognitive tests — comparing the performance of heavy and light multitaskers. According to a recent review article, these studies show that heavy media multitaskers perform significantly worse, particularly when the tasks require sustained, goal-oriented attention.

For example, in a pivotal study, Anthony Wagner, PhD, a Stanford professor of psychology and co-author of the review article, developed a questionnaire-based media multitasking index to identify the two groups — based on the number of media streams a person juggles during a typical media-consumption hour, as well as the time spent on each medium. Twelve media forms were included, ranging from computer games to cell phone calls.
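
Roughly speaking, an index of this kind weights how many additional media streams a person typically juggles while using each primary medium by the share of total media time spent on that medium. Here is a simplified Python sketch of that idea — an illustration of the general approach, not the exact published scoring procedure or response scale.

```python
# Simplified media multitasking index, in the spirit of the questionnaire-based
# measure described above (not the exact published scoring procedure).
def media_multitasking_index(media):
    """`media` maps each medium to (hours_per_week, avg_simultaneous_other_media)."""
    total_hours = sum(hours for hours, _ in media.values())
    return sum(simultaneous * hours / total_hours
               for hours, simultaneous in media.values())

# Hypothetical respondent: mostly single-tasks while reading, multitasks online
respondent = {
    "tv":      (10, 1.5),   # 10 h/week, ~1.5 other media streams at the same time
    "web":     (15, 2.0),
    "texting": (5,  2.5),
    "reading": (6,  0.3),
}
print("MMI ≈", round(media_multitasking_index(respondent), 2))
```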

The team administered their questionnaire and several standard cognitive tests to Stanford students. In one series of tests, the researchers measured the working memory capabilities of 22 light multitaskers and 19 heavy multitaskers. Working memory is the mental post-it note used to keep track of information, like a set of simple instructions, in the short term.

“In one test, we show a set of oriented blue rectangles, then remove them from the screen and ask the subject to retain that information in mind. Then we’ll show them another set of rectangles and ask if any have changed orientation,” described Wagner in a recent Stanford Q&A. “To measure memory capacity, we do this task with a different number of rectangles and determine how performance changes with increasing memory loads. To measure the ability to filter out distraction, sometimes we add distractors, like red rectangles that the subjects are told to ignore.”
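
Performance on change-detection tasks like this is commonly summarized with a standard capacity estimate (often called Cowan’s K), computed from hit and false-alarm rates at each set size. A minimal sketch, assuming that convention rather than the study’s exact scoring:

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Standard working-memory capacity estimate for change detection:
    K = set size * (hit rate - false alarm rate).
    A conventional formula, assumed here purely for illustration."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical performance with 4 rectangles to remember
print(cowans_k(set_size=4, hit_rate=0.85, false_alarm_rate=0.15))  # -> 2.8
```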

Wagner also performed standard task-switching experiments in which the students viewed images of paired numbers and letters and analyzed them. The students had to switch back and forth between classifying the numbers as even or odd and the letters as vowels or consonants.
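
Performance in such task-switching experiments is typically summarized as a “switch cost” — the slowdown on trials where the classification rule changes relative to trials where it repeats. A minimal sketch of that calculation with made-up reaction times (an illustration of the convention, not the study’s data or analysis code):

```python
import numpy as np

# Hypothetical reaction times (ms) and trial types from a number/letter switching task
rts = np.array([620, 580, 910, 600, 870, 640, 930, 610])
is_switch = np.array([False, False, True, False, True, False, True, False])

switch_cost = rts[is_switch].mean() - rts[~is_switch].mean()
print(f"switch cost: {switch_cost:.0f} ms")
```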

The Stanford study showed that heavy multitaskers were less effective at filtering out irrelevant stimuli, whereas light multitaskers found it easier to focus on a single task in the face of distractions.

Overall, this previous study is representative of the twenty subsequent studies discussed in the recent review article. Wagner and co-author Melina Uncapher, PhD, a neuroscientist at the University of California, San Francisco, theorized that lapses in attention may explain most of the current findings — heavy media multitaskers have more difficulty staying on task and returning to task when attention has lapsed than light multitaskers.

However, the authors emphasized that the wide diversity of the current studies and their results raises more questions than it answers — such as the direction of causation. Does heavier media multitasking cause cognitive and neural differences, or do individuals with such preexisting differences tend toward more multitasking behavior? They said more research is needed.

Wagner concluded in the Q&A:

“I would never tell anyone that the data unambiguously show that media multitasking causes a change in attention and memory. That would be premature… That said, multitasking isn’t efficient. We know there are costs of task switching. So that might be an argument to do less media multitasking.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Brain’s serotonin system includes multiple, sometimes conflicting, pathways

Photo by Pablo García Saldaña

Although the serotonin system — which helps regulate mood and social behavior, appetite and digestion, sleep, memory and motor skills — is critical to so many functions in the human body, its underlying organization and properties are not well understood. Past studies have even reported divergent results.

New research may help clear up this confusion, as recently reported in Cell. Stanford biologist Liqun Luo, PhD, discovered that the serotonin system is actually composed of multiple parallel subsystems that function differently, at times in opposing ways.

“The field’s understanding of the serotonin system was like the story of the blind men touching the elephant,” Luo said in a recent Stanford news release. “Scientists were discovering distinct functions of serotonin in the brain and attributing them to a monolithic serotonin system, which at least partly accounts for the controversy about what serotonin actually does. This study allows us to see different parts of the elephant at the same time.”

Luo’s team studied the dorsal raphe, a region of the brainstem containing a high concentration of serotonin-producing neurons, in mice. They injected this region’s nerve fibers with a modified virus engineered to exhibit bright green fluorescence — allowing them to image and trace how the dorsal raphe’s neurons are connected to other regions in the brain. They observed two distinct groups of neurons in the dorsal raphe.

Using behavioral tests, they then determined that these two neuron groups sometimes responded differently to stimuli. For instance, in response to a mild punishment, neurons from the two groups showed opposite responses.

The researchers also found these neurons released the chemical glutamate in addition to serotonin, raising the question of whether they should even be called serotonin neurons.

These research findings have the potential for wide-ranging clinical applications, including the development of better drugs to treat depression and anxiety. Currently, the most commonly prescribed antidepressants are selective serotonin reuptake inhibitors (SSRIs), which target the serotonin system. However, some people can’t tolerate the side effects of SSRIs. A better understanding of the serotonin system may help.

“If we can target the relevant pathways of the serotonin system individually, then we may be able to eliminate the unwanted side effects and treat only the disorder,” said study first author Jing Ren, PhD, a postdoctoral fellow in Luo’s lab.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Awe, anxiety, joy: Researchers identify 27 categories for human emotions

Photo by hannahlouise123

Scores of words describe the wide range of emotions we experience. And as we grasp for words to describe our feelings, scientists are similarly struggling to comprehend how our brain processes and connects these feelings.

Now, a new study from the University of California, Berkeley challenges the assumptions traditionally made in the science of emotion. It was published recently in the Proceedings of the National Academy of Sciences.

Past research has generally categorized all emotions into six to 12 groups, such as happiness, sadness, anger, fear, surprise and disgust. However, the Berkeley researchers identified 27 distinct categories of emotions.

They asked a diverse group of over 850 men and women to view a random sampling of 2185 short, silent videos that depicted a wide range of emotional situations — including births, endearing animals, natural beauty, vomit, warfare and natural disasters, to name just a few. The participants reported their emotional response after each video — using a variety of techniques, including independently naming their emotions or ranking the degree they felt 34 specific emotions. The researchers analyzed these responses using statistical modeling.

The results showed that participants generally had a similar emotional response to each of the videos, and these responses could be categorized into 27 distinct groups of emotions. The team also organized and mapped the emotional responses for all the videos, using a particular color for each of the 27 categories. They created an interactive map that includes links to the video clips and lists their emotional scores.
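
The paper’s statistical modeling is considerably more elaborate, but the basic move — representing each video by its vector of average emotion ratings and looking for structure in that space — can be sketched simply. The following Python snippet clusters random placeholder ratings with k-means purely to illustrate the shape of such an analysis; it is not the authors’ method and uses no real data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Hypothetical data: each row is one video, each column the average rating
# (0-1) that participants gave for one of 34 reported emotions.
n_videos, n_emotions = 200, 34
ratings = rng.random((n_videos, n_emotions))

# A simplified stand-in for the study's modeling: cluster videos by their
# emotion-rating profiles and compare how well different numbers of groups fit.
for k in (10, 20, 27, 34):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(ratings)
    print(f"k={k:2d}  within-cluster sum of squares={km.inertia_:.1f}")
```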

“We sought to shed light on the full palette of emotions that color our inner world,” said lead author Alan Cowen, a graduate student in neuroscience at UC Berkeley, in a recent news release.

In addition, the new study refuted the traditional view that emotional categories are entirely distinct islands. Instead, the researchers found many categories to be linked by fuzzy boundaries. For example, they said, there are smooth gradients between emotions like awe and peacefulness.

Cowen explained in the release:

“We don’t get finite clusters of emotions in the map because everything is interconnected. Emotional experiences are so much richer and more nuanced than previously thought.

Our hope is that our findings will help other scientists and engineers more precisely capture the emotional states that underlie moods, brain activity and expressive signals, leading to improved psychiatric treatments, an understanding of the brain basis of emotion and technology responsive to our emotional needs.”

The team hopes to expand their research to include other types of stimuli such as music, as well as participants from a wider range of cultures using languages other than English.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.