How does media multitasking affect the mind?

Image by Mohamed Hassan

Imagine that you’re working on your computer, watching the Warriors game, exchanging texts and checking Facebook. Sound familiar? Many people simultaneously view multiple media streams every day.

Over the past decade, researchers have been studying the relationship between this type of heavy media multitasking and cognition to determine how our media use is shaping our minds and brains. This is a particularly critical question for teenagers, who use technology for almost 9 hours every day on average, not including school-related use.

Many studies have examined cognitive performance in young adults using a variety of task-based cognitive tests, comparing the performance of heavy and light media multitaskers. According to a recent review article, these studies show that heavy media multitaskers perform significantly worse, particularly when the tasks require sustained, goal-oriented attention.

For example, in a pivotal study led by Anthony Wagner, PhD, a Stanford professor of psychology and co-author of the review article, the researchers developed a questionnaire-based media multitasking index to identify the two groups, based on the number of media streams a person juggles during a typical hour of media consumption, as well as the time spent on each medium. Twelve media forms were included, ranging from computer games to cell phone calls.
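
The article describes the index only in outline, but the idea can be sketched as a weighted average: each medium contributes the number of streams typically juggled while using it, weighted by the share of total media time spent on it. The function and all numbers below are illustrative assumptions, not figures from the study.

```python
# Hedged sketch of a questionnaire-based media multitasking index:
# each medium's typical number of concurrently used media streams,
# weighted by the fraction of total media time spent on that medium.
# All numbers are invented for illustration.

def multitasking_index(media):
    """media: list of (hours_per_week, concurrent_streams) tuples."""
    total_hours = sum(hours for hours, _ in media)
    if total_hours == 0:
        return 0.0
    return sum(hours * streams for hours, streams in media) / total_hours

# Illustrative respondent: 10 h/week of TV with ~2 streams at once,
# 5 h/week of texting with ~3 streams, 5 h/week of reading with 1 stream.
example = [(10, 2), (5, 3), (5, 1)]
print(multitasking_index(example))  # 2.0
```

A respondent who usually consumes media one stream at a time would score near 1; heavier scores indicate more simultaneous streams during a typical media hour.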

The team administered their questionnaire and several standard cognitive tests to Stanford students. In one series of tests, the researchers measured the working memory capabilities of 22 light multitaskers and 19 heavy multitaskers. Working memory is the mental post-it note used to keep track of information, like a set of simple instructions, in the short term.

“In one test, we show a set of oriented blue rectangles, then remove them from the screen and ask the subject to retain that information in mind. Then we’ll show them another set of rectangles and ask if any have changed orientation,” described Wagner in a recent Stanford Q&A. “To measure memory capacity, we do this task with a different number of rectangles and determine how performance changes with increasing memory loads. To measure the ability to filter out distraction, sometimes we add distractors, like red rectangles that the subjects are told to ignore.”
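
Performance on change-detection tasks like this is commonly summarized with Cowan's K, an estimate of how many items a person holds in working memory. That formula is standard practice in the field rather than something quoted in the Q&A, and the numbers below are invented.

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Estimate working memory capacity from a change-detection task.

    K = N * (H - FA), where N is the number of items (rectangles) shown,
    H the proportion of changed displays correctly detected, and FA the
    proportion of unchanged displays incorrectly flagged as changed.
    """
    return set_size * (hit_rate - false_alarm_rate)

# Invented performance: 6 rectangles, 80% hits, 20% false alarms,
# giving an estimated capacity of roughly 3.6 items.
print(cowans_k(6, 0.80, 0.20))
```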

Wagner also performed standard task-switching experiments in which the students viewed images of paired numbers and letters and analyzed them. The students had to switch back and forth between classifying the numbers as even or odd and the letters as vowels or consonants.
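
The cost of this back-and-forth is typically measured as a switch cost: the extra response time on trials where the task changes versus trials where it repeats. This is the standard measure for such experiments, not a detail reported in the article, and the reaction times below are invented.

```python
# Hedged sketch of how task-switching costs are usually quantified:
# compare mean response times on task-repeat trials with task-switch
# trials. The reaction times (in milliseconds) are invented.

def mean(xs):
    return sum(xs) / len(xs)

def switch_cost(repeat_rts, switch_rts):
    """Switch cost = mean RT on switch trials minus mean RT on repeat trials."""
    return mean(switch_rts) - mean(repeat_rts)

repeat_trials = [520, 540, 510, 530]
switch_trials = [700, 720, 690, 710]
print(switch_cost(repeat_trials, switch_trials))  # 180.0
```

A larger switch cost means the person pays a bigger time penalty each time they shift between classifying numbers and classifying letters.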

The Stanford study showed that heavy multitaskers were less effective at filtering out irrelevant stimuli, whereas light multitaskers found it easier to focus on a single task in the face of distractions.

Overall, this previous study is representative of the twenty subsequent studies discussed in the recent review article. Wagner and co-author Melina Uncapher, PhD, a neuroscientist at the University of California, San Francisco, theorized that lapses in attention may explain most of the current findings — heavy media multitaskers have more difficulty staying on task and returning to task when attention has lapsed than light multitaskers.

However, the authors emphasized that the diversity of the current studies and their results raises more questions than it answers. For example, what is the direction of causation? Does heavier media multitasking cause cognitive and neural differences, or do individuals with such preexisting differences tend toward more multitasking behavior? They said more research is needed.

Wagner concluded in the Q&A:

“I would never tell anyone that the data unambiguously show that media multitasking causes a change in attention and memory. That would be premature… That said, multitasking isn’t efficient. We know there are costs of task switching. So that might be an argument to do less media multitasking.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.


Brain’s serotonin system includes multiple, sometimes conflicting, pathways

Photo by Pablo García Saldaña

Although the serotonin system — which helps regulate mood and social behavior, appetite and digestion, sleep, memory and motor skills — is critical to so many functions in the human body, its underlying organization and properties are not well understood. Past studies have even reported divergent results.

New research may help clear up this confusion, as recently reported in Cell. Stanford biologist Liqun Luo, PhD, discovered the serotonin system is actually composed of multiple parallel subsystems that function differently, at times in opposing ways.

“The field’s understanding of the serotonin system was like the story of the blind men touching the elephant,” Luo said in a recent Stanford news release. “Scientists were discovering distinct functions of serotonin in the brain and attributing them to a monolithic serotonin system, which at least partly accounts for the controversy about what serotonin actually does. This study allows us to see different parts of the elephant at the same time.”

Luo’s team studied the dorsal raphe, a region of the brainstem containing a high concentration of serotonin-producing neurons, in mice. They injected this region’s nerve fibers with a modified virus engineered to exhibit bright green fluorescence — allowing them to image and trace how the dorsal raphe’s neurons are connected to other regions in the brain. They observed two distinct groups of neurons in the dorsal raphe.

Using behavioral tests, they then determined that these two neuron groups sometimes responded differently to stimuli. For instance, in response to a mild punishment, neurons from the two groups showed opposite responses.

The researchers also found these neurons released the chemical glutamate in addition to serotonin, raising the question of whether they should even be called serotonin neurons.

These research findings have the potential for wide-ranging clinical applications, including the development of better drugs to treat depression and anxiety. Currently, the most commonly prescribed antidepressants are selective serotonin reuptake inhibitors (SSRIs), which target the serotonin system. However, some people can’t tolerate the side effects of SSRIs. A better understanding of the serotonin system may help.

“If we can target the relevant pathways of the serotonin system individually, then we may be able to eliminate the unwanted side effects and treat only the disorder,” said study first author Jing Ren, PhD, a postdoctoral fellow in Luo’s lab.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Awe, anxiety, joy: Researchers identify 27 categories for human emotions

Photo by hannahlouise123

Scores of words describe the wide range of emotions we experience. And as we grasp for words to describe our feelings, scientists are similarly struggling to comprehend how our brain processes and connects these feelings.

Now, a new study from the University of California, Berkeley challenges the assumptions traditionally made in the science of emotion. It was published recently in the Proceedings of the National Academy of Sciences.

Past research has generally categorized all emotions into six to 12 groups, such as happiness, sadness, anger, fear, surprise and disgust. However, the Berkeley researchers identified 27 distinct categories of emotions.

They asked a diverse group of over 850 men and women to view a random sampling of 2,185 short, silent videos depicting a wide range of emotional situations: births, endearing animals, natural beauty, vomit, warfare and natural disasters, to name just a few. The participants reported their emotional response after each video using a variety of techniques, including independently naming their emotions or rating the degree to which they felt each of 34 specific emotions. The researchers analyzed these responses using statistical modeling.

The results showed that participants generally had a similar emotional response to each of the videos, and these responses could be categorized into 27 distinct groups of emotions. The team also organized and mapped the emotional responses for all the videos, using a particular color for each of the 27 categories. They created an interactive map that includes links to the video clips and lists their emotional scores.

“We sought to shed light on the full palette of emotions that color our inner world,” said lead author Alan Cowen, a graduate student in neuroscience at UC Berkeley, in a recent news release.

In addition, the new study refuted the traditional view that emotional categories are entirely distinct islands. Instead, the researchers found many categories to be linked by fuzzy boundaries. For example, there are smooth gradients between emotions like awe and peacefulness, they said.

Cowen explained in the release:

“We don’t get finite clusters of emotions in the map because everything is interconnected. Emotional experiences are so much richer and more nuanced than previously thought.

Our hope is that our findings will help other scientists and engineers more precisely capture the emotional states that underlie moods, brain activity and expressive signals, leading to improved psychiatric treatments, an understanding of the brain basis of emotion and technology responsive to our emotional needs.”

The team hopes to expand their research to include other types of stimuli such as music, as well as participants from a wider range of cultures using languages other than English.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Artificial Intelligence can help predict who will develop dementia, a new study finds

Photo by Lukas Budimaier

If you could find out years ahead that you were likely to develop Alzheimer’s, would you want to know?

Researchers from McGill University argue that patients and their families could better plan and manage care given this extra time. So the team has developed new artificial intelligence software that uses positron emission tomography (PET) scans to predict whether at-risk patients will develop Alzheimer’s within two years.

They retrospectively studied 273 individuals with mild cognitive impairment who participated in the Alzheimer’s Disease Neuroimaging Initiative, a global research study that collects imaging, genetics, cognitive, cerebrospinal fluid and blood data to help define the progression of Alzheimer’s disease.

Patients with mild cognitive impairment have noticeable problems with memory and thinking tasks that are not severe enough to interfere with daily life. Scientists know these patients have abnormal amounts of tau and beta-amyloid proteins in specific brain regions involved in memory, and this protein accumulation occurs years before the patients have dementia symptoms.

However, not everyone with mild cognitive impairment will go on to develop dementia, and the McGill researchers aimed to predict which ones will.

First, the team trained their artificial intelligence software to identify patients who would develop Alzheimer’s, using key features in the amyloid PET scans of the ADNI participants. Next, they assessed the performance of the trained AI on an independent set of ADNI amyloid PET scans. It predicted Alzheimer’s progression with an accuracy of 84 percent before symptom onset, as reported in a recent paper in Neurobiology of Aging.
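
The McGill team's actual model isn't described here in enough detail to reproduce, but the train-then-evaluate-on-independent-data workflow can be sketched with a toy classifier. The one-dimensional "feature" and the nearest-centroid rule below are illustrative stand-ins, not the study's method, and all values are invented.

```python
# Toy sketch of the workflow: fit a classifier to features from one set
# of scans, then measure accuracy on an independent held-out set.
# The nearest-centroid rule and the scalar "amyloid burden" feature are
# illustrative stand-ins, not the McGill group's actual model.

def train_centroids(features, labels):
    """Compute the mean feature value for each class."""
    centroids = {}
    for label in set(labels):
        values = [f for f, l in zip(features, labels) if l == label]
        centroids[label] = sum(values) / len(values)
    return centroids

def predict(centroids, feature):
    """Assign the class whose centroid is closest to the feature."""
    return min(centroids, key=lambda label: abs(centroids[label] - feature))

def accuracy(centroids, features, labels):
    """Fraction of held-out cases classified correctly."""
    hits = sum(predict(centroids, f) == l for f, l in zip(features, labels))
    return hits / len(labels)

# Invented scores: higher values stand in for greater amyloid burden.
train_x = [1.0, 1.2, 1.1, 2.0, 2.2, 2.1]
train_y = ["stable"] * 3 + ["progressed"] * 3
test_x = [1.05, 2.15, 1.15, 2.05]
test_y = ["stable", "progressed", "stable", "progressed"]

model = train_centroids(train_x, train_y)
print(accuracy(model, test_x, test_y))  # 1.0 on this toy data
```

Evaluating on scans the model never saw during training is the key step: it is what makes a reported figure like 84 percent a measure of prediction rather than memorization.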

The researchers hope their new AI tool will help improve patient care, as well as accelerate research to find a treatment for Alzheimer’s disease by identifying which patients to select for clinical trials.

“By using this tool, clinical trials could focus only on individuals with a higher likelihood of progressing to dementia within the time frame of the study. This will greatly reduce the cost and time necessary to conduct these studies,” said Serge Gauthier, MD, a senior author and professor of neurology and neurosurgery and of psychiatry at McGill, in a recent news release.

The new AI tool is now available to scientists and students, but the McGill researchers need to conduct further testing before it will be approved and available to clinicians.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Strong association between vision loss and cognitive decline

Photo by Les Black

In a nationally representative sample of older adults in the United States, Stanford researchers found a strong relationship between visual impairment and cognitive decline, as recently reported in JAMA Ophthalmology.

The research team investigated this association in elderly populations by analyzing two large US population data sets — over 30,000 respondents from the National Health and Aging Trends Study (NHATS) and almost 3,000 respondents from the National Health and Nutrition Examination Survey (NHANES) — which both included measurements of cognitive and vision function.

“After adjusting for hearing impairment, physical limitations, patient demographics, socioeconomic status and other clinical comorbidities, we found an over two-fold increase in odds of cognitive impairment among patients with poor vision,” said Suzann Pershing, MD, assistant professor of ophthalmology at Stanford and chief of ophthalmology for the VA Palo Alto Health Care System. “These results are highly relevant to an aging US population.”
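
An "over two-fold increase in odds" refers to an odds ratio greater than 2. The study's figure came from a model adjusting for the covariates Pershing lists, but the unadjusted version of the statistic can be computed from a simple 2×2 table; the counts below are invented for illustration.

```python
def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    """Unadjusted odds ratio from a 2x2 table: (a/b) / (c/d)."""
    return (exposed_cases / exposed_noncases) / (
        unexposed_cases / unexposed_noncases)

# Invented counts: among 1,000 respondents with poor vision, 200 are
# cognitively impaired; among 1,000 with good vision, 100 are.
# The resulting ratio (about 2.25) would be an "over two-fold" odds.
print(odds_ratio(200, 800, 100, 900))
```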

Previous studies have shown that vision impairment and dementia are both conditions of aging, and their prevalence is increasing as the population grows older. However, the Stanford authors noted that their results are purely observational and do not establish a causal relationship.

The complexity of the relationship between vision and cognition was discussed in a related commentary by Jennifer Evans, PhD, an assistant professor of epidemiology at the London School of Hygiene and Tropical Medicine. She noted that the association could arise from difficulties in administering vision and cognition tests in this population. “People with vision impairment may find it more difficult to complete the cognitive impairment tests and … people with cognitive impairment may struggle with visual acuity tests,” she wrote.

Assuming the association between vision and cognitive impairment holds, Evans also raised questions relevant to patient care, such as: Which impairment developed first? Would successful intervention for visual impairment reduce the risk of cognitive impairment? Is sensory impairment an early marker of decline?

Pershing said she plans to follow up on the study:

“I am drawn to better understand the interplay between neurosensory vision, hearing impairment and cognitive function, since these are likely synergistic and bidirectional in their detrimental effects. For instance, vision impairment may accelerate cognitive decline and cognitive decline may lead to worsening ability to perform visual tasks. Ultimately, we can aim to better identify impairment and deliver treatments to optimize all components of patients’ health.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Study shows link between playing football and neurodegenerative disease

Photo by Cpl. Michelle M Dickson

You’ll likely hear quite a bit this week about a new study that suggests football players have an increased risk of developing chronic traumatic encephalopathy, or CTE, which is a progressive degenerative brain disease associated with repetitive head trauma.

As reported today in JAMA, researchers from the Boston University CTE Center and the VA Boston Healthcare System found pathological evidence of CTE in 177 of the 202 former football players whose brains were donated for research, including 117 of the 119 who played professionally in the United States or Canada. Their study nearly doubles the number of CTE cases described in the literature.

The co-first author, Daniel Daneshvar, MD, PhD, is a new resident in Stanford’s physical medicine and rehabilitation program in orthopaedic surgery, which treats patients with traumatic brain injuries and sports injuries. He recently spoke with me about the study, which he participated in while at BU.

“I really enjoyed playing football in high school. I think it’s an important sport for team building, learning leadership and gaining maturity,” he explained. “That being said, I think this study provides evidence of a relationship between playing football and developing a neurodegenerative disease. And that is very concerning, since we have kids as young as 8 years old potentially subjecting themselves to risk of this disease.”

The researchers studied the donated brains of deceased former football players who played in high school, college and the pros. They diagnosed CTE based on criteria recently defined by the National Institutes of Health. Currently, CTE can only be confirmed postmortem.

The study found evidence of mild CTE in three of the 14 former high school players and severe CTE in the majority of former college, semiprofessional and professional players. However, the researchers are quick to acknowledge that their sample is skewed, because brain bank donors don’t represent the overall population of former football players. Daneshvar explained:

“The number of NFL players with CTE is certainly less than the 99 percent that we’re reporting here, based on the fact that we have a biased sample. But the fact that 110 out of the 111 NFL players in our group had CTE means that this is in no way a small problem amongst NFL players.”

The research team also performed retrospective clinical evaluations, speaking with the players’ loved ones to learn their athletic histories and disease symptoms. Daneshvar worked on this clinical component — helping to design the study, organize the brain donations, conduct the interviews and analyze the data. The clinical assessment and pathology teams worked independently, blind to each other’s results.

“It’s difficult to determine after people have passed away exactly what symptoms they initially presented with and what their disease course was,” he told me. “We developed a novel mechanism for this comprehensive, retrospective clinical assessment. I was one of the people doing the phone interviews with the participant’s family members and friends to assess cognitive, behavioral, mood and motor symptoms.”

At this point, there aren’t any clinical diagnostic criteria for CTE, Daneshvar said. Although the current study wasn’t designed to establish such criteria, the researchers plan to use these data to correlate the clinical symptoms patients experienced in life with their pathology at the time of death. He went on to explain:

“The important thing about this study is that it isn’t just characterizing disease in this population. It’s about learning as much as we can from this methodologically rigorous cohort going forward, so we can begin to apply the knowledge that we’ve gained to help living athletes.”

Daneshvar and his colleagues are already working on a new study to better understand the prevalence and incidence of CTE in the overall population of football players. And they have begun to investigate what types of risk factors affect the likelihood of developing CTE.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

The implications of male and female brain differences: A discussion

Photo by George Hodan

Men and women are equal, but they and their brains aren’t the same, according to a growing body of scientific evidence. So why is most research still performed only on male animals and men? A panel of researchers explored this question and its implications on a recent episode of KALW’s City Visions radio show.

“It’s important to study sex differences because they are everywhere affecting everything,” said panelist Larry Cahill, PhD, a professor of neurobiology and behavior at the University of California, Irvine. “Over the last 20 years in particular, neuroscientists and really medicine generally have discovered that there are sex differences of all sizes and shapes really at every level of brain function. And we can’t truly treat women equally if we continue to essentially ignore them, which is what we’ve been doing.”

Neuropsychiatrist and author Louann Brizendine, MD, went on to say that many prescription medicines are only tested on male animals and men, even birth control pills designed for women. This is because the researchers don’t want the fluctuations of hormones associated with the menstrual cycle to “mess up” the research data, she said.

However, this practice can lead to dangerous side effects for women, she explained. For example, the U.S. Food and Drug Administration determined that many women metabolized the common sleep aid Ambien more slowly than men, so the medication remained at a high level in their bloodstream in the morning, impairing activities like driving. After reassessing the clinical data on Ambien, Brizendine said, the FDA set the male dose at 10 mg and lowered the female dose to 5 mg.

Nirao Shah, MD, PhD, a professor of psychiatry and behavioral sciences and of neurobiology at Stanford, said this action by the FDA was a sign of progress. “Decisions like what were made about Ambien represent people starting slowly to wake up and realize that we’ve been assuming that we don’t have to worry fundamentally about sex. And in not worrying about it, we are disproportionally harming women. Bear in mind, women absolutely, clearly and disproportionally bear the brunt of side effects of drugs and medicine.” In fact, he explained, eight out of ten drugs are withdrawn from the market due to worse side effects in women. He later added, “This issue is deeply affecting medical health, especially for women.”

So why are most researchers still studying only male animals or men?

According to Cahill, researchers have a deeply ingrained bias against studying sex differences, believing that sex differences aren’t fundamental because they aren’t shared by both men and women. He also said that resistance to this research boils down to the implicit and false assumption that equal has to mean the same. “If a neuroscientist shows that males and females (be that mice or monkeys or humans) are not the same in some aspect of brain function, then [many people think] the neuroscientist is showing that they are not equal — and that is false.”

Cahill offered advice for consumers: “You can go to the FDA website and for almost any approved drug you can get the essentials on how the testing was done. You’re going to find a mixed bag. For some drugs, you’re going to find there is pretty darn good evidence that the drug probably has roughly equal effects in men and women. On the other hand, you’re going to find a lot of cases when the testing was done mostly or exclusively in males and basically people don’t know [the effects in women].”

“You should be discerning and do your homework,” Brizendine agreed.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.