Physicians re-evaluate use of lead aprons during X-rays

When you get routine X-rays of your teeth at the dentist’s office or a chest X-ray to determine if you have pneumonia, you expect the technologist to drape your pelvis in a heavy radioprotective apron. But that may not happen the next time you get X-rays.

There is growing evidence that shielding the reproductive organs offers negligible benefit. Moreover, because a protective cover can shift out of place, using one can actually increase the radiation dose to the patient or impair the quality of the diagnostic images.

Shielding testes and ovaries during X-ray imaging has been standard practice since the 1950s due to a fear of hereditary risks — namely, that the radiation would mutate germ cells and these mutations would be passed on to future generations. This concern was prompted by the genetic effects observed in studies of irradiated fruit flies. However, such hereditary effects have not been observed in humans.

“We now understand that the radiosensitivity of ovaries and testes is extremely low. In fact, they are some of the lower radiation-sensitive organs — much lower than the colon, stomach, bone marrow and breast tissue,” said Donald Frush, MD, a professor of pediatric radiology at Lucile Packard Children’s Hospital Stanford.

In addition, he explained, technology improvements have dramatically reduced the radiation dose that a patient receives during standard X-ray films, computed tomography scans and other radiographic procedures. For example, a review paper found that the radiation dose to the ovaries and testes for equivalent pelvic X-ray exams without shielding dropped by 96% between 1959 and 2012.

But even if the radioprotective shielding may have minimal — or no — benefit, why not use it just to be safe?

The main problem is that so-called lead aprons — which aren’t made of lead anymore — are difficult to position accurately, Frush said. Even when shielding guidelines are followed, the position of the ovaries is so variable that they may not be completely covered. Also, the protective shield can obscure the target anatomy. This forces doctors to live with poor-quality diagnostic information or to repeat the X-ray scan, thus increasing the radiation dose given to the patient, he said.

Positioning radioprotective aprons is particularly troublesome for small children.

“Kids kick their legs up and the shield moves while the technologists are stepping out of the room to take the exposure and can’t see them. So the X-rays have to be retaken, which means additional dose to the kids,” Frush said.

Another issue derives from something called automatic exposure control, a technology that optimizes image quality by adjusting the X-ray machine’s radiation output based on what is in the imaging field. Overall, automatic exposure control greatly improves the quality of the X-ray images and enables a lower dose to be used.  

However, if positioning errors cause the radioprotective apron to enter the imaging field, the radiographic system increases the intensity and duration of its output in order to penetrate the shield.

“Automatic exposure control is a great tool, but it needs to be used appropriately. It’s not recommended for small children, particularly in combination with radioprotective shielding,” said Frush.

With these concerns in mind, many technologists, medical physicists and radiologists now recommend discontinuing the routine practice of shielding reproductive organs during X-ray imaging. However, they support giving technologists discretion to provide shielding in certain circumstances, such as on parental request. This position is supported by several groups, including the American Association of Physicists in Medicine, the National Council on Radiation Protection and Measurements and the American College of Radiology.

These new guidelines are also supported by the Image Gently Alliance, a coalition of health care organizations dedicated to promoting safe pediatric imaging, which is chaired by Frush. And they are being adopted by Stanford hospitals.

“Lucile Packard Children’s revised policy on gonadal shielding has been formalized by the department,” he said. “There is still some work to do with education, including training providers and medical students to have a dialogue with patients and caregivers. But so far, pushback by patients has been much less than expected.”

Looking beyond the issue of shielding, Frush advised parents to be open to lifesaving medical imaging for their children, while also advocating for its best use. He said:

“Ask the doctor who is referring the test: Is it the right study? Is it the right thing to do now, or can it wait? Ask the imaging facility: Are you taking into account the age and size of my child to keep the radiation dose reasonable?”

Photo by Shutterstock / pang-oasis

This is a reposting of my Scope story, courtesy of Stanford School of Medicine.

AI could help radiologists improve their mammography interpretation

The guidelines for screening women for breast cancer are a bit confusing. The American Cancer Society recommends annual mammograms for women older than 45 years with average risk, but other groups like the U.S. Preventive Services Task Force (USPSTF) recommend less aggressive breast screening.

This controversy centers on mammography’s frequent false-positive detections — or false alarms — which lead to unnecessary stress, additional imaging exams and biopsies. USPSTF argues that the harms of early and frequent mammography outweigh the benefits.

However, a recent Stanford study suggests a better way to reduce these false alarms without increasing the number of missed cancers. Using over 112,000 mammography cases collected from 13 radiologists across two teaching hospitals, the researchers developed and tested a machine-learning model that could help radiologists improve their mammography practice.

Each mammography case included the radiologist’s observations and diagnostic classification from the mammogram, the patient’s risk factors and the “ground-truth” of whether or not the patient had breast cancer based on follow-up procedures. The researchers used the data to train and evaluate their computer model.

They compared the radiologists’ performance against their machine-learning model, doing a separate analysis for each of the 13 radiologists. They found significant variability among radiologists.

Based on accepted clinical guidelines, radiologists should recommend follow-up imaging or a biopsy when a mammographic finding has at least a 2% probability of being malignant. However, the Stanford study found that the participating radiologists’ effective thresholds varied from 0.6% to 3.0%. In the future, similar quantitative observations could be used to identify sources of variability and to improve radiologist training, the paper said.
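As a toy illustration of how such an operating threshold works (the 2% guideline value comes from the study; the function and example values below are hypothetical):

```python
# Sketch of the threshold rule described above (illustrative, not the
# study's code). A finding is referred for follow-up when its estimated
# probability of malignancy meets or exceeds the reader's threshold.

GUIDELINE_THRESHOLD = 0.02  # the 2% benchmark from clinical guidelines

def recommend_followup(p_malignant: float, threshold: float = GUIDELINE_THRESHOLD) -> bool:
    """Return True if the finding should be referred for imaging or biopsy."""
    return p_malignant >= threshold

# The study found individual radiologists effectively operated anywhere
# from 0.6% to 3.0%, so the same finding can get different recommendations:
finding = 0.01  # a finding judged to have a 1% chance of malignancy
print(recommend_followup(finding, threshold=0.006))  # aggressive reader: True
print(recommend_followup(finding, threshold=0.030))  # conservative reader: False
```

The spread between the two readers above is exactly the kind of variability the study quantified.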

The study included 1,214 malignant cases, which represents 1.1 percent of the total. Overall, the radiologists reported 176 false negatives, that is, cancers missed at the time of the mammograms. They also reported 12,476 false positives, or false alarms. In comparison, the machine-learning model missed one additional cancer but decreased the number of false alarms by 3,612 cases relative to the radiologists’ assessments.
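These counts can be turned into a rough sensitivity comparison. The arithmetic below uses only the figures reported above (the 112,000 total is approximate, since the study says "over 112,000"):

```python
# Back-of-the-envelope comparison of the reported counts.
total_cases = 112_000      # "over 112,000", so approximate
malignant = 1_214

# Radiologists' reported errors
rad_fn = 176               # cancers missed
rad_fp = 12_476            # false alarms

# Machine-learning model, relative to the radiologists
model_fn = rad_fn + 1      # one additional missed cancer
model_fp = rad_fp - 3_612  # 3,612 fewer false alarms

rad_sensitivity = (malignant - rad_fn) / malignant
model_sensitivity = (malignant - model_fn) / malignant

print(f"radiologists: sensitivity {rad_sensitivity:.1%}, {rad_fp:,} false positives")
print(f"model:        sensitivity {model_sensitivity:.1%}, {model_fp:,} false positives")
```

In other words, sensitivity slips by roughly a tenth of a percentage point while false alarms drop by about 29 percent, which is the trade-off the study's conclusion highlights.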

The study concluded: “Our results show that we can significantly reduce screening mammography false positives with a minimal increase in false negatives.”

However, their computer model was developed using data from 1999 to 2010, the era of analog film mammography. In future work, the researchers plan to update the computer algorithm to use the newer descriptors and classifications for digital mammography and three-dimensional breast tomosynthesis.

Ross Shachter, PhD, a Stanford associate professor of management science and engineering and lead author on the paper, summarized in a recent Stanford Engineering news release, “Our approach demonstrates the potential to help all radiologists, even experts, perform better.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

The future hope of “flash” radiation cancer therapy

The goal of cancer therapy is to destroy the cancer cells while minimizing side effects and damage to the rest of the body. Common types of treatment include surgery, chemotherapy, targeted therapy and radiation therapy. Often combined with surgery or drugs, radiation therapy uses high-energy X-rays to damage the DNA and other critical processes of rapidly dividing cancer cells.

New innovations in radiation therapy were the focus of a recent episode of the Sirius radio show “The Future of Everything.” On hand was Stanford’s Billy Loo, MD, PhD, a professor of radiation oncology, who spoke with Stanford professor and radio host Russ Altman, MD, PhD.

Radiation has been used to treat cancer for over a century, but today’s technologies target the tumor with far greater precision and speed than the old days. Loo explained that modern radiotherapy now delivers low-dose beams of X-rays from multiple directions, which are accurately focused on the tumor so the surrounding healthy tissues get only a small dose while the tumor gets blasted. Radiation oncologists use imaging — CT, MRI or PET — to determine the three-dimensional shape of the tumor to target.

“We identify the area that needs to be treated, where the tumor is in relationship to the normal organs, and create a plan of the sculpted treatment,” Loo said. “And then during the treatment, we also use imaging … to see, for example, whether the radiation is going where we want it to go.”

In addition, oncologists now implement technologies in the clinic to compensate for motion, since organs like the lungs are constantly moving and patients have trouble lying still even for a few minutes. “We call it motion management. We do all kinds of tricks like turning on the radiation beam synchronized with the breathing cycle or following tumors around with the radiation beam,” explained Loo.

Currently, that is how standard radiation therapy works. However, Stanford radiation oncologists are collaborating with scientists at SLAC National Accelerator Laboratory to develop an innovative technology called PHASER. Although Loo admits the acronym was inspired by his love of Star Trek, PHASER stands for pluridirectional high-energy agile scanning electronic radiotherapy. This new technology delivers the radiation dose of an entire therapy session in a single flash lasting less than a second — faster than the body moves.

“We wondered, what if the treatment was done so fast — like in flash photography — that all the motion is frozen? That’s a fundamental solution to this motion problem that gives us the ultimate precision,” he said. “If we’re able to treat more precisely with less spillage of radiation dose into normal tissues, that gives us the benefit of being able to kill the cancer and cause less collateral damage.”

The research team is currently testing the PHASER technology in mice, which has led to an exciting discovery: the biological response to flash radiotherapy may differ from that to slower, traditional radiotherapy.

“We and a few other labs around the world have started to see that when the radiation is given in a flash, we see equal or better tumor killing but much better normal tissue protection than with the conventional speed of radiation,” Loo said. “And if that translates to humans, that’s a huge breakthrough.”

Loo also explained that their PHASER technology has been designed to be compact, economical, reliable and clinically efficient to provide a robust, mobile unit for global use. They expect it to fit in a standard cargo shipping container and to power it using solar energy and batteries.

“About half of the patients in the world today have no access to radiation therapy for technological and logistical reasons. That means millions of patients who could potentially be receiving curative cancer therapy are getting treated purely palliatively. And that’s a huge tragedy,” Loo said. “We don’t want to create a solution that everyone in the world has to come here to get — that would have limited impact. And so that’s been a core principle from the beginning.”

This is a reposting of my Scope blog post, courtesy of Stanford School of Medicine.

Blasting radiation therapy into the future: New systems may improve cancer treatment

Image by Greg Stewart/SLAC National Accelerator Laboratory

As a cancer survivor, I know radiation therapy lasting minutes can seem much longer as you lie on the patient bed trying not to move. Thanks to new funding, future accelerator technology may turn these dreaded minutes into a fraction of a second.

Stanford University and SLAC National Accelerator Laboratory are teaming up to develop a faster and more precise way to deliver X-rays or protons, quickly zapping cancer cells before their surrounding organs can move. This will likely reduce treatment side effects by minimizing damage to healthy tissue.

“Delivering the radiation dose of an entire therapy session with a single flash lasting less than a second would be the ultimate way of managing the constant motion of organs and tissues, and a major advance compared with methods we’re using today,” said Billy Loo, MD, PhD, an associate professor of radiation oncology at Stanford, in a recent SLAC news release.

Currently, most radiation therapy systems work by accelerating electrons through a meter-long tube using radiofrequency fields that travel in the same direction. These electrons then collide with a heavy metal target, converting their energy into high-energy X-rays, which are sharply focused and delivered to the tumors.

Now, researchers are developing a new way to more powerfully accelerate the electrons. The key element of the project, called PHASER, is a prototype accelerator component (shown in bronze in this video) that delivers hundreds of times more power than the standard device.

In addition, the researchers are developing a similar device for proton therapy. Although less common than X-rays, protons are sometimes used to kill tumors and are expected to have fewer side effects, particularly in sensitive areas like the brain. That’s because protons enter the body at a low energy and release most of that energy at the tumor site, minimizing radiation dose to the healthy tissue as the particles exit the body.

However, proton therapy currently requires large and complex facilities. The Stanford and SLAC team hopes to increase availability by designing a compact, power-efficient and economical proton therapy system that can be used in a clinical setting.

In addition to being faster and possibly more accessible, animal studies indicate that these new X-ray and proton technologies may be more effective.

“We’ve seen in mice that healthy cells suffer less damage when we apply the radiation dose very quickly, and yet the tumor-killing is equal or even a little better than that of a conventional longer exposure,” Loo said in the release. “If the results hold for humans, it would be a whole new paradigm for the field of radiation therapy.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Artificial intelligence could help diagnose tuberculosis in remote regions, study finds

Image courtesy of Paras Laknani

Tuberculosis is an infectious disease that kills almost two million people worldwide each year, even though the disease can be identified on a simple chest X-ray and treated with antibiotics. One major challenge is that TB-prevalent areas typically lack the radiologists needed to screen and diagnose the disease.

New artificial intelligence models may help. Researchers from the Thomas Jefferson University Hospital in Pennsylvania have developed and tested an artificial intelligence model to accurately identify tuberculosis from chest X-rays, such as the TB-positive scan shown at right.

The model could provide a cost-effective way to expand TB diagnosis and treatment in developing nations, said Paras Lakhani, MD, study co-author and TJUH radiologist, in a recent news release.

Lakhani performed the retrospective study with his colleague Baskaran Sundaram, MD, a TJUH cardiothoracic radiologist. They obtained 1,007 chest X-rays of patients with and without active TB from publicly available datasets. The data were split into three categories: training (685 patients), validation (172 patients) and test (150 patients).

The training dataset was used to teach two artificial intelligence models — AlexNet and GoogLeNet — to analyze the chest X-ray data and classify the patients as having TB or being healthy. These existing deep learning models had already been pre-trained with everyday nonmedical images on ImageNet. Once the models were trained, the validation dataset was used to select the best-performing model and then the test dataset was used to assess its accuracy.

The researchers got the best performance using an ensemble of AlexNet and GoogLeNet that statistically combined the probability scores for both artificial intelligence models — with a net accuracy of 96 percent.
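The article doesn't spell out the exact combination rule. One simple way to combine two models' probability scores is to average them, sketched below (function names and example scores are illustrative, not from the study):

```python
# Illustrative sketch of a two-model ensemble: average each model's
# TB-probability score, then threshold the mean. The study combined
# AlexNet and GoogLeNet scores statistically; the simple mean here is
# one common approach, not necessarily the authors' exact method.

def ensemble_probability(p_alexnet: float, p_googlenet: float) -> float:
    """Average the two networks' predicted probabilities of active TB."""
    return (p_alexnet + p_googlenet) / 2.0

def classify(p_alexnet: float, p_googlenet: float, threshold: float = 0.5) -> str:
    """Label a chest X-ray from the averaged score."""
    return "TB" if ensemble_probability(p_alexnet, p_googlenet) >= threshold else "healthy"

# When one model is confident and the other unsure, the ensemble tempers both:
print(classify(0.92, 0.55))  # TB
print(classify(0.40, 0.35))  # healthy
```

Averaging tends to cancel out the idiosyncratic errors of each network, which is why ensembles often outperform either member alone.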

The authors explain that the workflow of combining artificial intelligence and human diagnosis could work well in TB-prevalent regions, where an automated method could interpret most cases and only the ambiguous cases would be sent to a radiologist.

The researchers plan to further improve their artificial intelligence models with more training cases and other artificial intelligence algorithms, and then they hope to apply them in community settings.

“The relatively high accuracy of the deep learning models is exciting,” Lakhani said in the release. “The applicability for TB is important because it’s a condition for which we have treatment options. It’s a problem that we can solve.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Enlisting artificial intelligence to assist radiologists

Photo by Gerd Leonhard

Specialized electronic circuits called graphics processing units, or GPUs, are at the heart of modern mobile phones, personal computers and gaming consoles. By running multiple GPUs in concert, researchers can now solve previously elusive image processing problems. For example, Google and Facebook have both developed extremely accurate facial recognition software using these new techniques.

GPUs are also crucial to radiologists, because they can rapidly process large medical imaging datasets from CT, MRI, ultrasound and even conventional X-rays.

Now some radiology groups and technology companies are combining multiple GPUs with artificial intelligence (AI) algorithms to help improve radiology care. Simply put, an AI computer program can do tasks normally performed by intelligent people. In this case, AI algorithms can be trained to recognize and interpret subtle differences in medical images.

Stanford researchers have used machine learning for many years to look at medical images and computationally extract the features used to predict something about the patient, much as a radiologist would. However, the use of artificial intelligence, or deep learning algorithms, is new. Sandy Napel, PhD, a professor of radiology, explained:

“These deep learning paradigms are a deeply layered set of connections, not unlike the human brain, that are trained by giving them a massive amount of data with known truth. They basically iterate on the strength of the connections until they are able to predict the known truth very accurately.”

“You can give it 10,000 images of colon cancer. It will find the common features across those images automatically,” said Garry Choy, MD, a staff radiologist and assistant chief medical information officer at Massachusetts General Hospital, in a recent Diagnostic Imaging article. “If there are large data sets, it can teach itself what to look for.”

A major challenge is that these AI algorithms may require thousands of annotated radiology images to train them. So Stanford researchers are creating a database containing millions of de-identified radiology studies, including billions of images, totaling about a half million gigabytes. Each study in the database is associated with the de‐identified report that was created by the radiologist when the images were originally used for patient care.

“To enable our deep learning research, we are also applying machine learning methods to our large database of narrative radiology reports,” said Curtis Langlotz, MD, PhD, a Stanford professor of radiology and biomedical informatics. “We use natural language processing methods to extract discrete concepts, such as anatomy and pathology, from the radiology reports. This discrete data can then be used to train AI systems to recognize the abnormalities shown on the images themselves.”
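As a toy sketch of the idea in that quote, turning free-text report language into discrete training labels, one could imagine something like the following (a crude keyword lookup; Stanford's actual natural language processing pipeline is far more sophisticated, and the terms below are hypothetical):

```python
# Toy illustration of extracting discrete concept labels from a
# free-text radiology report, the report-to-label step described above.
# Real NLP pipelines handle negation, synonyms and context far better.
import re

FINDINGS = {
    "nodule": "lung nodule",
    "hemorrhage": "intracranial hemorrhage",
}

def label_report(report: str) -> list:
    """Return the list of concepts mentioned affirmatively in the report."""
    text = report.lower()
    # Crude negation check: skip reports that explicitly deny findings.
    if re.search(r"\bno (acute )?(abnormalit|hemorrhage|nodule)", text):
        return []
    return [concept for keyword, concept in FINDINGS.items() if keyword in text]

print(label_report("There is a 6 mm nodule in the right upper lobe."))
# ['lung nodule']
print(label_report("No acute hemorrhage or nodule identified."))
# []
```

Labels extracted this way from the report become the "known truth" used to train the image-side models.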

Potential applications include using AI systems to help radiologists more quickly identify intracranial hemorrhages or more effectively detect malignant lung nodules. Deep learning systems are also being developed to perform triage — looking through all incoming cases and prioritizing the most critical ones to the top of the radiologist’s work queue.

However, the potential clinical applications have not been validated yet, according to Langlotz:

“We’re cautious about automated detection of abnormalities like lung nodules and colon polyps. Even with high sensitivity, these systems can distract radiologists with numerous false positives. And radiology images are significantly more complex than photos from the web or even other medical images. Few deep learning results of clinical relevance have been published or peer-reviewed yet.”

Researchers say the goal is to improve patient care and workflow, not replace doctors with intelligent computers.

“Reading about these advances in the news, and seeing demonstrations at meetings, some radiologists have become concerned that their jobs are at risk,” said Langlotz. “I disagree. Instead, radiologists will benefit from even more sophisticated electronic tools that focus on assistance with repetitive tasks, rare conditions, or meticulous exhaustive search — things that most humans aren’t very good at anyway.”

Napel concluded:

“At the end of the day, what matters to physicians is whether or not they can trust the information a diagnostic device, whether it be based in AI or something else, gives them. It doesn’t matter whether the opinion comes from a human or a machine. … Some day we may believe in the accuracy of these deep learning algorithms, when given the right kind of data, to create useful information for patient management. We’re just not there yet.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

MRI use flushes gadolinium into San Francisco Bay

Photo by Science Activism

The levels of gadolinium in the San Francisco Bay have been steadily increasing over the past two decades, according to a study recently published in Environmental Science & Technology. Gadolinium is a rare-earth metal, and the potential long-term effects of exposure to it have not been studied in detail.

Russell Flegal, PhD, and his research team at UC Santa Cruz collected and analyzed water samples throughout the San Francisco Bay from 1993 to 2013, as part of the San Francisco Bay Regional Monitoring Program.

They found the gadolinium levels to be much higher in the southern end of the Bay, which is home to about 5 million people and densely populated with medical and industrial facilities, than in the central and northern regions. They also observed a sevenfold rise in gadolinium concentration in the South Bay over that time period.

The study attributes the rising level of gadolinium contamination largely to the growing number of magnetic resonance imaging (MRI) scans performed with a gadolinium contrast agent, which is used in about 30 percent of MRI scans to improve the clarity of the images. The agent is injected into the patient and then excreted from the body in urine within 24 hours.

Lewis Shin, MD, assistant professor of radiology and an MRI radiologist, explained to me the importance of using intravenous gadolinium contrast agents:

“Gadolinium contrast agents allow us to detect abnormalities that would otherwise be hidden from view and to improve our characterization of the abnormalities that we do find. Gadolinium is not always used; for example, if a physician is just concerned about identifying a herniated disk in the spine, an MRI without contrast agent is sufficient.

However, gadolinium is routinely administered to detect and characterize lesions if there is a clinical concern of cancer. Also, if a patient was previously treated for cancer, gadolinium administration is often extremely helpful to detect early recurrences. In summary, MRI with a gadolinium contrast agent greatly improves our ability to make an accurate diagnosis not only for cancer but for many other disease processes as well.”

According to UCSC researchers, gadolinium is not removed by standard wastewater treatment technologies, so it is discharged by wastewater treatment plants into surface waters that reach the Bay.

Shin expressed some surprise when he learned about this study:

“The majority of radiologists probably don’t even think about gadolinium once it’s excreted out of a patient’s body. Of course it’s concerning that there is a rise in gadolinium levels in the environment, but the next questions are how is this impacting the environment and whether there is a safe level or not? Since most of the gadolinium contrast agents used for MRI studies are excreted through the urine within 12 to 24 hours, one strategy to reduce environmental release of gadolinium could be to collect patients’ urine for a brief period of time for proper disposal or even recycling of the gadolinium itself.”

The UCSC researchers assert that the current levels of gadolinium observed in San Francisco Bay are well below the peak concentrations that could pose harmful effects on the aquatic ecosystem. However, they recommend in their paper, “new public policies and the development of more effective treatment technologies may be necessary to control sources and minimize future contamination.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.