Can MRI Brain Scans Predict Dyslexia Early?

Kindergartener learning to read a book (Holtsman/flickr)

For many, the word dyslexia represents painful struggles with reading and speech that impact their self-confidence – 20 percent of school-aged children and over 40 million adults in the U.S. are dyslexic. Dyslexics are often very intelligent and can learn successfully with appropriate teaching methods, but early diagnosis and intervention are critical.

UC San Francisco (UCSF) researchers in the Dyslexia Program aim to predict whether children will develop dyslexia before they show signs of reading and speech problems, so early intervention can improve their quality of life.

“Early identification and interventions are extremely important in children with dyslexia as well as most neurodevelopmental disorders,” said Fumiko Hoeft, UCSF associate professor and member of the UCSF Dyslexia Center, in a press release. “Accumulation of research data such as ours may one day help us to identify kids who might be at risk for dyslexia, rather than waiting for children to become poor readers and experience failure.”

In a recent longitudinal study, Hoeft’s research team studied 38 young children using structural MRI to track their brain development between kindergarten and third grade as they formally learned to read in school. The participating children were healthy, native English speakers with varying preliteracy skills and family histories of reading difficulties. They had MRI brain scans at age 5 or 6 and again three years later. At both time points, they also completed a battery of standardized tests, including reading and cognitive assessments.

In particular, the researchers were interested in the children’s white matter development, which is critical for perceiving, thinking and learning. They found that volume changes in the left hemisphere white matter in the temporo-parietal region (just behind and above the left ear) were highly predictive of reading outcomes. This region is known to be important for language, reading and speech.

Using MRI brain scans to measure these developmental changes improved the prediction accuracy of reading difficulties by 60%, compared to traditional assessments alone.

“What was intriguing in this study was that brain development in regions important to reading predicted above and beyond all these (other) measures,” said Hoeft.

Despite this predictive relationship, MRI brain imaging is unlikely to be a widespread means of diagnosis because of cost and time constraints. Instead, the researchers hope their findings lead to further investigation of what may be influencing the brain during this critical period of reading development.

The UCSF Dyslexia Center is also investigating cheaper methods for early diagnosis of reading problems. For example, they are collaborating with research labs worldwide to construct growth charts for the reading brain network, similar to those one would find in a doctor’s office for height and weight.

Since screening for reading disorder risk is currently a resource-intensive process, UCSF is also developing a tablet-based mobile health application that could be used by schools or parents as a fast, easy and cheap screening tool.

UCSF researchers hope that understanding each child’s neurocognitive profile will help educators provide improved, personalized education and interventions.

This is a repost of my KQED Science blog.

Researchers Have Vision-Correcting Computer Screens In Their Sights

Eyeglasses may no longer be necessary to see computer screens. (F H Mira, flickr)

What if everyone could clearly see their smartphone, tablet, computer and TV screens without having to wear corrective eyeglasses or contact lenses?

Approximately 75% of American adults use some form of corrective lenses to see or read properly. And most of us need them to see computer screens on a daily basis. Now researchers are developing new technology that uses computer algorithms to compensate for an individual’s visual impairment, so many of us may soon be able to ditch our glasses and contacts.

Brian Barsky, UC Berkeley professor of computer science and vision science and affiliate professor of optometry, teamed up with colleagues at UC Berkeley and MIT to improve vision-correcting display technology. They developed a combination of hardware and software improvements to achieve both high image resolution and contrast simultaneously, a major milestone. Their results were recently published in a paper in the ACM Transactions on Graphics.

First, they modified an iPod touchscreen by adding a standard light field display – a mask with an array of pinholes sandwiched between thin layers of plastic. The tiny pinholes were each 75 micrometers in diameter and spaced 390 micrometers apart. This light field display was used to enhance image contrast, providing a full range of bright colors in the displayed images.

The researchers also developed complex, innovative computer algorithms to adjust the light intensity from each pinhole. These algorithms helped enhance the resolution or sharpness of the displayed images. The researchers can use a person’s eyeglass prescription to compute an altered image that, when viewed through the light field display, appears in sharp focus for that individual.
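The core idea of precomputing an image that “undoes” a known blur can be sketched with a simple Wiener-style prefilter. To be clear, this is only an illustrative stand-in, not the researchers’ actual algorithm: their method jointly models the pinhole light field display and the viewer’s optics, whereas the toy sketch below assumes the eye acts as a plain Gaussian blur, and every function name and parameter here is invented for illustration.

```python
# Illustrative only: prefilter an image so that, after a known blur
# (a crude stand-in for a viewer's defocused optics), it looks sharp.
# The actual UC Berkeley/MIT method also models the pinhole display.
import numpy as np

def gaussian_psf(size, sigma):
    """Toy point spread function standing in for defocus blur."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def prefilter(image, psf, eps=1e-3):
    """Wiener-style inverse filter: boost the frequencies the blur attenuates."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + eps)  # regularized inverse
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))

rng = np.random.default_rng(0)
img = rng.random((64, 64))          # stand-in for the on-screen image
psf = gaussian_psf(64, sigma=2.0)   # stand-in for the eye's blur

H = np.fft.fft2(np.fft.ifftshift(psf), s=img.shape)
blur = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * H))

seen = blur(prefilter(img, psf))          # what the viewer's eye "sees"
print(np.linalg.norm(seen - img))         # residual error after correction
print(np.linalg.norm(blur(img) - img))    # larger error: uncorrected blur
```

As in the quote below, the prefiltered image itself looks distorted on screen; it only resolves into a sharp picture after passing through the specific blur it was computed for.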

“Our technique distorts the image such that, when the intended user looks at the screen, the image will appear sharp to that particular viewer,” said Barsky in a press release. “But if someone else were to look at the image, it would look bad.”

The technology could not only help the millions of people who wear glasses and contacts, but also those with complex vision problems that cannot be corrected with today’s lenses. The most common vision problems – nearsightedness, farsightedness and astigmatism – are usually easily corrected with standard lenses. However, people with complex vision problems often have irregularities in the shape of their eyes’ surface or cornea, requiring new kinds of corrective lenses that are still under development.

“We now live in a world where displays are ubiquitous, and being able to interact with displays is taken for granted,” said Barsky. “People who are unable to view displays are at a disadvantage in the workplace and life in general, so this research could transform their lives.”

In the future, the researchers plan to incorporate commercially available eye trackers to adapt the displayed images to the user’s head position. They also hope to develop display screens that appear clear to multiple users with different visual problems.

This is a repost of my KQED Science blog.