Creativity can jump or slump during middle childhood, a Stanford study shows

 

Photo by Free-Photos

As a postdoctoral fellow in psychiatry, Manish Saggar, PhD, stumbled across a paper published in 1968 by a creativity pioneer named E. Paul Torrance, PhD. The paper described an unexplained creativity slump occurring during fourth grade that was associated with underachievement and increased risk for mental health problems. He was intrigued and wondered what exactly was going on. “It seemed like a profound problem to solve,” says Saggar, who is now a Stanford assistant professor of psychiatry and behavioral sciences.

Saggar’s latest research study, recently published in NeuroImage, provides new clues about creativity during middle childhood. The research team studied the creative thinking ability of 48 children — 21 starting third grade and 27 starting fourth grade — at three time points across one year. This allowed the researchers to piece together data from the two groups to estimate how creativity changes from 8 to 10 years of age.

At each of the time points, the students were assessed using an extensive set of standardized tests for intelligence, aberrant behavior, response inhibition, temperament and creativity. Their brains were also scanned using a functional near-infrared spectroscopy (fNIRS) cap, which imaged brain function as they performed a standardized Figural Torrance Test of Creative Thinking.

During this test, the children sat at a desk and used a pen and paper to complete three different incomplete figures to “tell an unusual story that no one else will think of.” Their brains were scanned during these creative drawing tasks, as well as while they rested (looking at a picture of a plus sign) and while they completed a control drawing (connecting the dots on a grid).

Rather than using the conventional categories of age or grade level, the researchers grouped the participants based on the data — revealing three distinct patterns in how creativity could change during middle childhood.

The first group of kids initially slumped in creativity and then showed an increase after transitioning to the next grade, while the second group showed the inverse pattern. The final group of children showed no change in creativity at first, followed by a boost after transitioning to the next grade.
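As a rough illustration of this kind of data-driven grouping, here is a minimal sketch that clusters simulated creativity trajectories by their shape rather than by age or grade. The simulated scores, the three-cluster choice and the use of k-means are all assumptions for illustration; the paper’s actual method may differ.

```python
# Illustrative sketch only: cluster children by the *shape* of their
# creativity trajectory (simulated scores), not by age or grade.
# The use of k-means with three clusters is an assumption, not the
# paper's actual method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Fake data: 48 children x 3 time points of creativity scores
scores = rng.normal(loc=100, scale=15, size=(48, 3))

# Cluster on the changes between time points so that groups reflect
# trajectory shape (slump-then-rise, rise-then-slump, flat-then-boost).
deltas = np.diff(scores, axis=1)  # shape (48, 2)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(deltas)

for k in range(3):
    group = deltas[labels == k]
    print(f"group {k}: n={len(group)}, mean change={group.mean(axis=0).round(1)}")
```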

“A key finding of our study is that we cannot group children together based on grade or age, because everybody is on their own trajectory,” says Saggar.

The researchers also found a correlation between creativity and rule-breaking or aggressive behaviors among the participating children, who all scored well within the normal range on the standard child behavior checklist used to assess behavioral and emotional problems. As Saggar clarifies, these “problem behaviors” were things like arguing a lot or preferring to be with older kids, rather than actions like fighting.

“In our cohort, the aggression and rule-breaking behaviors point towards enhanced curiosity and to not conforming to societal rules, consistent with the lay notion of ‘thinking outside the box’ to create unusual and novel ideas,” Saggar explains. “Classic creative thinking tasks require people to break rules between cognitive elements to form new links between previously unassociated elements.”

They also found a correlation between creativity and increased functional segregation of the frontal regions of the brain. Some brain functions are carried out by regions working independently, while others require integration, with different brain regions working together on a task. For example, a relaxing walk in the park with a wandering mind might have brain regions chattering in a segregated, independent fashion, while focusing intently to memorize a series of numbers might require integration. The brain needs to balance this segregation and integration. In the study, increases in creativity tracked with increased segregation of the frontal regions.
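To make “segregation” concrete, here is a minimal sketch of one common way to quantify it: comparing how strongly a set of frontal channels correlate with each other versus with the rest of the recorded channels. The simulated data, channel groupings and metric are illustrative assumptions, not the paper’s exact analysis.

```python
# Illustrative segregation index for simulated fNIRS-like signals:
# how much more strongly frontal channels correlate with each other
# than with the remaining channels. Channel assignments and the metric
# itself are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
shared = rng.normal(size=(200, 1))            # common frontal fluctuation
signals = rng.normal(size=(200, 16)) * 0.7    # 200 time points x 16 channels
signals[:, :6] += shared                      # channels 0-5 act as "frontal"

corr = np.corrcoef(signals, rowvar=False)
frontal, other = np.arange(6), np.arange(6, 16)

within = corr[np.ix_(frontal, frontal)]
within_mean = within[np.triu_indices_from(within, k=1)].mean()
between_mean = corr[np.ix_(frontal, other)].mean()

# Near 1: frontal channels mostly talk to each other (segregated);
# near 0: they are as connected to other regions as to themselves.
print(f"segregation index: {(within_mean - between_mean) / within_mean:.2f}")
```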

“Having increased segregation in the frontal regions indicates that they weren’t really focusing on something specific,” Saggar says. “The hypothesis we have is perhaps you need more diffused attention to be more creative. Like when you get your best ideas while taking a shower or a walk.”

Saggar hopes their findings will help develop new interventions for teachers and parents in the future, but he says that longer studies, with a larger and more diverse group of children, are first needed to validate their results.

Once they confirm that the profiles observed in their current study actually exist in larger samples, the next step will be to see if they can train kids to improvise and become more creative, similar to a neuroscience study that successfully trained adults to enhance their creativity.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Pokémon experts’ brains shed light on neurological development

Photo by Colin Woodcock

Whether parents are dreading or looking forward to taking their kids to see the new “Pokémon” movie may depend on how their own brains developed during childhood. If they played the video game a lot growing up, a specific region of their visual cortex — the part of the brain that processes what we see — may preferentially respond to Pokémon characters, according to a new research study.

The Stanford psychologists studied the brains of Pokémon experts and novices to answer fundamental questions about how experience contributes to your brain’s development and organization.

Jesse Gomez, PhD, first author of the study and a former Stanford neuroscience graduate student, started playing a lot of Pokémon around first grade. He realized that this early exposure to Pokémon provided a natural neuroscience experiment: children who played the video game used the same tiny handheld device at roughly the same arm’s length, and they spent countless hours learning the hundreds of animated, pixelated characters — a unique category of stimuli that activates a unique region of the brain.

The research team identified this specialized brain response using functional magnetic resonance imaging (fMRI) to scan the brains of 11 Pokémon experts and 11 Pokémon novices — adults of similar age and education. During the fMRI scan, each participant viewed different kinds of stimuli in random order, including faces, animals, cartoons, bodies, pseudowords, cars, corridors and Pokémon characters.

“We find a big difference between people who played Pokémon in their childhood versus those who didn’t,” explained Gomez in the video below. “People who are Pokémon experts not only develop a unique brain representation for Pokémon in the visual cortex, but the most interesting part to us is that the location of that response to Pokémon is consistent across people.”

In the expert participants, Pokémon activated a specific region in the high-level visual cortex, the part of the brain involved in recognizing things like words and faces. “This helped us pinpoint which theory of brain organization might be the most responsible for determining how the visual cortex develops from childhood to adulthood,” Gomez said.

The study results support a theory called eccentricity bias, which suggests that the brain region activated by a stimulus is determined by the size and location of the stimulus’s image on the retina. For example, our ability to discriminate between faces is thought to activate the fusiform gyrus in the temporal lobe near the ears and to require the high visual acuity of the central field of vision. Similarly, the study showed that viewing Pokémon characters activates part of the fusiform gyrus and a neighboring region called the occipitotemporal sulcus — both of which get input from the central part of the retina — but only in the expert participants.

The eccentricity bias theory implies that a different or larger region of the brain would be preferentially activated by early exposure to Pokémon played on a large computer monitor. However, this wasn’t an option for the 20-something participants when they were children.
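The geometry behind this argument is easy to check: the visual angle an object subtends determines how central its image falls on the retina, and a small handheld screen at arm’s length subtends only a few degrees. The screen sizes and viewing distances below are rough assumptions, not measurements from the study.

```python
# Rough visual-angle arithmetic behind the eccentricity-bias argument.
# Sizes and distances are assumed, not taken from the study.
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Full angle subtended by an object of a given size at a given distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# A Game Boy-sized screen (~4.7 cm wide) at roughly arm's length (~40 cm)
print(f"handheld screen: {visual_angle_deg(4.7, 40):.1f} degrees")  # ~6.7
# The same content on a large monitor (~50 cm wide) viewed at ~60 cm
print(f"large monitor:   {visual_angle_deg(50, 60):.1f} degrees")   # ~45.2
```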

These findings have applications well beyond Pokémon, as Gomez explained in the video:

“The findings suggest that the very way that you look at a visual stimulus, like Pokémon or words, determines why your brain is organized the way it is. And that’s useful going forward because it might suggest that visual deficits like dyslexia or face blindness might result simply from the way you look at stimuli. And so that’s a promising future avenue.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Artificial Intelligence can help predict who will develop dementia, a new study finds

 

Photo by Lukas Budimaier

If you could find out years ahead that you were likely to develop Alzheimer’s, would you want to know?

Researchers from McGill University argue that patients and their families could better plan and manage care given this extra time. So the team has developed new artificial intelligence software that uses positron emission tomography (PET) scans to predict whether at-risk patients will develop Alzheimer’s within two years.

They retrospectively studied 273 individuals with mild cognitive impairment who participated in the Alzheimer’s Disease Neuroimaging Initiative (ADNI), a global research study that collects imaging, genetics, cognitive, cerebrospinal fluid and blood data to help define the progression of Alzheimer’s disease.

Patients with mild cognitive impairment have noticeable problems with memory and thinking tasks that are not severe enough to interfere with daily life. Scientists know these patients have abnormal amounts of tau and beta-amyloid proteins in specific brain regions involved in memory, and this protein accumulation occurs years before the patients have dementia symptoms.

However, not everyone with mild cognitive impairment will go on to develop dementia, and the McGill researchers aimed to predict which ones will.

First, the team trained their artificial intelligence software to recognize key features in the amyloid PET scans of ADNI participants who went on to develop Alzheimer’s. Next, they assessed the performance of the trained AI on an independent set of ADNI amyloid PET scans: it predicted progression to Alzheimer’s with an accuracy of 84 percent before symptom onset, as reported in a recent paper in Neurobiology of Aging.
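The workflow described here — train on one set of scans, then validate on an independent set — is the standard supervised-learning pattern. Below is a minimal sketch with simulated features standing in for PET measurements and a generic classifier; the McGill team’s actual model and features are not specified here.

```python
# Train-then-validate sketch with simulated features standing in for
# amyloid PET measurements; the study's actual model and features differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(273, 50))                              # fake PET-derived features
y = (X[:, 0] + rng.normal(0, 1, size=273) > 0).astype(int)  # 1 = progressed to Alzheimer's

# Hold out an independent set for evaluation, as the study did.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```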

The researchers hope their new AI tool will help improve patient care, as well as accelerate research to find a treatment for Alzheimer’s disease by identifying which patients to select for clinical trials.

“By using this tool, clinical trials could focus only on individuals with a higher likelihood of progressing to dementia within the time frame of the study. This will greatly reduce the cost and time necessary to conduct these studies,” said Serge Gauthier, MD, a senior author and professor of neurology and neurosurgery and of psychiatry at McGill, in a recent news release.

The new AI tool is now available to scientists and students, but the McGill researchers need to conduct further testing before it will be approved and available to clinicians.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Opioid receptors in brain affect reaction to another person’s pain

Image by P. Simonau

Watching someone else suffering from pain is distressing. What mechanisms cause that distress? And why do some of us experience it more strongly than others?

A new Finnish research study has now demonstrated that seeing others in pain activates the same brain regions involved in firsthand pain, which suggests that a shared neuromolecular pathway processes both types of pain. Specifically, the researchers showed that the endogenous opioid system, but not the dopamine system, contributes to vicarious pain.

The endogenous opioid system is a set of neurons in the brain that naturally produces opioids to help modulate emotions and pain. Similarly, the dopamine system consists of neurons that synthesize and release dopamine, which helps manage motor control, pain, reward and addictive behaviors. Both of these systems were known to play an important role in processing firsthand pain, but their role in vicarious pain was unexplored.

The research team conducted the study by imaging 35 healthy women ranging in age from 19 to 58 years old. First, on two different days, they performed positron emission tomography (PET) scans using radiopharmaceuticals that quantify the availability of opioid and dopamine receptors in each woman’s brain. Next, they investigated how each woman responded to vicarious pain by performing a functional MRI scan while she watched videos of people in painful and painless situations.

The researchers found a negative correlation between opioid receptor availability and response to vicarious pain — women with fewer opioid receptors reacted more strongly to seeing someone else’s distress, as recently reported in Cerebral Cortex. In contrast, they found no correlation with dopamine receptor availability.
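The analysis behind a finding like this is an across-subject correlation: one receptor-availability value and one brain-response value per woman. Here is a minimal sketch with simulated data; the paper’s actual statistics may be more involved.

```python
# Across-subject correlation sketch: one PET-derived receptor value and
# one fMRI response value per woman. Data are simulated with a built-in
# negative relationship to mirror the reported direction of the effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
receptor_availability = rng.normal(1.0, 0.2, size=35)
pain_response = -0.8 * receptor_availability + rng.normal(0, 0.15, size=35)

r, p = stats.pearsonr(receptor_availability, pain_response)
print(f"r = {r:.2f}, p = {p:.4f}")  # negative r: fewer receptors, stronger response
```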

The authors concluded in the paper, “These results suggest that the opioid system contributes to neural processing of vicarious pain, and that interindividual differences in opioidergic system could explain why some individuals react more strongly than others to seeing pain.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

New imaging study investigates role of dopamine in migraine attacks

Many people suffer from migraines — throbbing, painful headaches that last up to 72 hours and are often accompanied by nausea, vomiting and sensitivity to light and sound.

Although not fully understood, an imbalance in a brain neurotransmitter is thought to contribute to migraines. The neurotransmitter, dopamine, is a chemical in your brain that affects your movements, emotions, motivations and sensory perceptions, including the ability to modulate pain.

Now, researchers at the University of Michigan have shown that dopamine levels in the brain fall during a migraine attack relative to their baseline level between attacks, as reported in a recent news release.

The research team performed two PET scans on different days to study eight migraine sufferers during a spontaneous migraine and in between attacks, comparing their brain activity and dopamine levels with and without a headache. On average, these patients were 27 years old and experienced migraines about six times per month. The scientists also imaged eight healthy adults, comparing migraine sufferers to controls.
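Because each patient serves as his or her own baseline, the natural analysis here is a paired comparison of the two scans. Below is a minimal sketch with simulated binding values; the paper’s actual statistics may differ.

```python
# Paired within-subject comparison sketch: each of eight patients has a
# dopamine measure during an attack and between attacks. Values simulated,
# with lower values during attacks to mirror the reported direction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
between_attacks = rng.normal(2.0, 0.3, size=8)                   # fake baseline binding
during_attack = between_attacks - rng.normal(0.25, 0.1, size=8)  # lower during attack

t, p = stats.ttest_rel(during_attack, between_attacks)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```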

They found that dopamine levels in the brain fluctuated, dropping temporarily during migraine attacks. They also found that the study participants were more sensitive to non-painful stimuli, such as warmth applied to the forehead, during a migraine.

“With this study, we better understand how dopamine is related to the suffering during a migraine attack,” said Alex DaSilva, DDS, DMedSc, assistant professor of dentistry and of the Center for Human Growth and Development at the University of Michigan, in the video above. “Lower dopamine levels mean you are more sensitive to pain and stimulation. Second, lower dopamine levels also inhibit your behavior. You want a dark room. You want to avoid social interactions.”

In their paper, the researchers call for additional studies to confirm the results and evaluate how they can be used to develop more effective migraine therapies.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Stanford researchers map brain circuitry affected by Parkinson’s disease

Image by iStock/D3Damon

In the brain, neurons never work alone. Instead, critical functions of the nervous system are orchestrated by interconnected networks of neurons distributed across the brain — such as the circuit responsible for motor control.

Researchers are trying to map out these neural circuits to understand how disease or injury disrupts healthy brain cell communication. For instance, neuroscientists are investigating how Parkinson’s disease causes malfunctions in the neural pathways that control motion.

Now, Stanford researchers have developed a new brain mapping technique that reveals the circuitry associated with Parkinson’s tremors, a hallmark of the disease. The multi-disciplinary team turned on specific types of neurons and observed how this affected the entire brain, which allowed them to map out the associated neural circuit.

Specifically, they performed rat studies combining optogenetics, which let them turn on specific types of genetically modified neurons with light, and functional MRI, which measured the resulting brain activity based on changes in blood flow. These data were then computationally modeled to map out the neural circuit and determine its function.

The research was led by Jin Hyung Lee, PhD, a Stanford electrical engineer who is an assistant professor of neurology and neurological sciences, of neurosurgery and of bioengineering. A recent Stanford News release explains the results:

“Testing her approach on rats, Lee probed two different types of neurons known to be involved in Parkinson’s disease — although it wasn’t known exactly how. Her team found that one type of neuron activated a pathway that called for greater motion while the other activated a signal for less motion. Lee’s team then designed a computational approach to draw circuit diagrams that underlie these neuron-specific brain circuit functions.”

“This is the first time anyone has shown how different neuron types form distinct whole brain circuits with opposite outcomes,” Lee said in the release.

Lee hopes their research will help improve treatments for Parkinson’s disease by providing a more precise understanding of how neurons work to control motion. In the long run, she also thinks their new brain mapping technique can be used to help design better therapies for other brain diseases.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Motor control problems may be core issue for people with autism

Photograph by Michael Phillips

If you’ve ever had an MRI scan, you know that it can be hard to lie still in the noisy, claustrophobic scanner. People often move involuntarily, requiring scientists to correct or eliminate the imaging data during movement.

Recently, a collaboration of Rutgers University and Columbia University researchers used this seemingly unhelpful data to further their understanding of a neurodevelopmental disorder.

“We asked ourselves, ‘What could these involuntary movements, which researchers usually consider a nuisance, tell us about autism?’” Elizabeth Torres, PhD, an associate professor of cognitive psychology at Rutgers University, said in a news release.

The neuroscientists analyzed functional magnetic resonance imaging (fMRI) data for 1048 participants, aged 6 to 50 years old, including individuals with autism spectrum disorders and healthy controls. The data was publicly available primarily through the Autism Brain Imaging Data Exchange databases.

The researchers determined that people with autism had more problems controlling their head movements than healthy controls. They also found that motor control problems were exacerbated by the presence of secondary neuropsychiatric diagnoses, lower verbal and performance intelligence, and greater autism severity, as reported in a recent paper in Scientific Reports.
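One standard way to quantify head micro-movements from fMRI is framewise displacement, computed from the six rigid-body motion parameters estimated during image realignment. Here is a minimal sketch of that metric; it is a common measure in the field, not necessarily the exact one used in the paper.

```python
# Framewise displacement from the six rigid-body motion parameters
# (3 translations in mm, 3 rotations in radians). A standard motion
# metric, not necessarily the paper's exact measure.
import numpy as np

def framewise_displacement(motion: np.ndarray, head_radius_mm: float = 50.0) -> np.ndarray:
    """motion: (T, 6) array; returns one displacement value per frame transition."""
    deltas = np.abs(np.diff(motion, axis=0))
    deltas[:, 3:] *= head_radius_mm   # rotations -> arc length on a 50 mm sphere
    return deltas.sum(axis=1)

rng = np.random.default_rng(5)
translations = np.cumsum(rng.normal(0, 0.02, size=(200, 3)), axis=0)  # mm
rotations = np.cumsum(rng.normal(0, 0.0002, size=(200, 3)), axis=0)   # radians
fd = framewise_displacement(np.hstack([translations, rotations]))
print(f"mean FD: {fd.mean():.3f} mm; frames over 0.5 mm: {(fd > 0.5).sum()}")
```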

“For the first time, we can demonstrate unambiguously that motor issues are core issues that need to be included in the diagnosis criteria for autism,” Torres said in the release.

In addition, they found that psychotropic medications, commonly used to treat people on the autism spectrum, were associated with lower levels of motor control. These medications include anti-convulsants and anti-depressants. Autistic people who were taking more than one psychotropic medication moved the most during the fMRIs, and their movement worsened over the scanning session.

The researchers conclude in their paper, “Nevertheless, it remains to be demonstrated if changes in head micro-movements directly capture targeted changes in symptomology brought about by a specific medication.” Their findings are also complicated by the simultaneous presence of autism and other conditions, such as attention deficit hyperactivity disorder. So more research is needed.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
