Posts tagged ‘artificial intelligence’

Artificial intelligence could help diagnose tuberculosis in remote regions, study finds

May 3, 2017

Image courtesy of Paras Lakhani

Tuberculosis is an infectious disease that kills almost two million people worldwide each year, even though the disease can be identified on a simple chest X-ray and treated with antibiotics. One major challenge is that TB-prevalent areas typically lack the radiologists needed to screen and diagnose the disease.

New artificial intelligence models may help. Researchers from Thomas Jefferson University Hospital (TJUH) in Pennsylvania have developed and tested an artificial intelligence model to accurately identify tuberculosis from chest X-rays, such as the TB-positive scan shown at right.

The model could provide a cost-effective way to expand TB diagnosis and treatment in developing nations, said Paras Lakhani, MD, study co-author and TJUH radiologist, in a recent news release.

Lakhani performed the retrospective study with his colleague Baskaran Sundaram, MD, a TJUH cardiothoracic radiologist. They obtained 1,007 chest X-rays of patients with and without active TB from publicly available datasets. The data were split into three categories: training (685 patients), validation (172 patients) and test (150 patients).

The training dataset was used to teach two artificial intelligence models — AlexNet and GoogLeNet — to analyze the chest X-ray data and classify the patients as having TB or being healthy. These existing deep learning models had already been pre-trained with everyday nonmedical images on ImageNet. Once the models were trained, the validation dataset was used to select the best-performing model and then the test dataset was used to assess its accuracy.

The researchers got the best performance using an ensemble of AlexNet and GoogLeNet that statistically combined the probability scores from both artificial intelligence models, yielding a net accuracy of 96 percent.
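To make the ensemble idea concrete, here is a minimal Python sketch of one simple way to combine per-image probability scores from two classifiers. It is not the study's code: the averaging rule, the 0.5 threshold and the example scores are illustrative assumptions.

```python
# Minimal sketch of combining two models' TB probability scores into an
# ensemble prediction. The scores below are made-up placeholders, and a
# plain average is only one possible combination rule.
import numpy as np

p_alexnet = np.array([0.92, 0.15, 0.60, 0.05, 0.81])    # hypothetical scores
p_googlenet = np.array([0.88, 0.22, 0.45, 0.10, 0.90])  # hypothetical scores

p_ensemble = (p_alexnet + p_googlenet) / 2.0             # average the scores
is_tb_positive = p_ensemble >= 0.5                       # threshold at 0.5

print(p_ensemble)
print(is_tb_positive)
```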

The authors explain that the workflow of combining artificial intelligence and human diagnosis could work well in TB-prevalent regions, where an automated method could interpret most cases and only the ambiguous cases would be sent to a radiologist.

The researchers plan to further improve their artificial intelligence models with more training cases and other artificial intelligence algorithms, and then they hope to apply the approach in community settings.

“The relatively high accuracy of the deep learning models is exciting,” Lakhani said in the release. “The applicability for TB is important because it’s a condition for which we have treatment options. It’s a problem that we can solve.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.


Computer algorithm predicts outcome for leukemia patients

February 22, 2017

Image by PeteLinforth

Researchers have developed a machine-learning computer algorithm that predicts the health outcome of patients with acute myeloid leukemia — identifying who is likely to relapse or go into remission after treatment.

Acute myeloid leukemia (AML) is a cancer characterized by the rapid growth of abnormal white blood cells that build up in the bone marrow and interfere with the production of normal blood cells.

A standard tool used for AML diagnosis and treatment monitoring is flow cytometry, which measures the physical and chemical characteristics of cells in a blood or bone marrow sample to identify malignant leukemic cells. The tool can even detect residual levels of the disease after treatment.

Unfortunately, scientists typically analyze this flow cytometry data using a time-consuming manual process. Now, researchers from Purdue University and Roswell Park Cancer Institute believe they have developed a machine-learning computer algorithm that can extract information from the data better than humans.

“Machine learning is not about modeling data. It’s about extracting knowledge from the data you have so you can build a powerful, intuitive tool that can make predictions about future data that the computer has not previously seen — the machine is learning, not memorizing — and that’s what we did,” said Murat Dundar, PhD, associate professor at Indiana University-Purdue University, in a recent news release.

The research team trained their computer algorithm using bone marrow data and medical histories of AML patients along with blood data from healthy individuals. They then tested the algorithm using data collected from 36 additional AML patients.
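For readers curious what this train-then-test workflow looks like in code, here is a minimal, hypothetical sketch using a generic scikit-learn classifier on placeholder data. It is not the authors' published algorithm; the feature vectors, toy labels and choice of classifier are all stand-in assumptions.

```python
# Minimal sketch of the train/test pattern described above, using random
# placeholder data rather than real flow cytometry measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-patient feature vectors (e.g., summary statistics of
# marker intensities from flow cytometry) and outcome labels
X_train = rng.normal(size=(120, 20))
y_train = rng.integers(0, 2, size=120)  # toy labels: 0 = remission, 1 = relapse
X_test = rng.normal(size=(36, 20))      # 36 held-out patients, as in the study

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
predicted_outcomes = clf.predict(X_test)
print(predicted_outcomes)
```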

In addition to being able to differentiate between normal and abnormal samples, they were able to use the flow cytometry bone marrow data to predict patient outcome — with between 90 and 100 percent accuracy — as recently reported in IEEE Transactions on Biomedical Engineering.

Although more work is needed, the researchers hope their algorithm will improve monitoring of treatment response and enable early detection of disease progression.

Dundar explained in the release:

“It’s pretty straightforward to teach a computer to recognize AML. … What was challenging was to go beyond that work and teach the computer to accurately predict the direction of change in disease progression in AML patients, interpreting new data to predict the unknown: which new AML patients will go into remission and which will relapse.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Computer identifies skin cancer as well as dermatologists, Stanford researchers report

January 25, 2017

When I was a kid, I spent all summer swimming and lying out by the pool without sunscreen. Now, I go to a dermatologist annually because I know early detection of melanoma is critical.

But not everyone has easy access to a dermatologist. So Stanford researchers have created an artificially intelligent computer algorithm to diagnose cancer from photographs of skin lesions, as described in a recent Stanford News release.

The interdisciplinary team of computer scientists, dermatologists, pathologists and a microbiologist started with a deep learning algorithm developed by Google, which was already trained to classify 1.28 million images into 1,000 categories — such as differentiating pictures of cats from dogs. The Stanford researchers adapted this algorithm to differentiate between images of malignant versus benign skin lesions.

They trained the algorithm for the task using a newly acquired database of nearly 130,000 clinical images of skin lesions corresponding to over 2,000 different diseases. The algorithm was given each image with an associated disease label, so it could learn how to classify the lesions.
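The general recipe, starting from a network pretrained on everyday images and swapping in a new classification head for the medical task, can be sketched roughly as follows. This is a hypothetical illustration rather than the Stanford team's code: the torchvision Inception v3 backbone, the class count and the optimizer settings are stand-in assumptions.

```python
# Minimal transfer-learning sketch: reuse an ImageNet-pretrained backbone
# and replace its final 1,000-way classifier for skin-lesion labels.
import torch.nn as nn
import torch.optim as optim
from torchvision import models

num_lesion_classes = 2000  # hypothetical count; the dataset spanned >2,000 diseases

model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)

# Replace both classifier heads (Inception v3 has an auxiliary classifier)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_lesion_classes)
model.fc = nn.Linear(model.fc.in_features, num_lesion_classes)

# Fine-tune on the labeled lesion images (training loop omitted)
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```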

The effectiveness of the algorithm was tested with a second set of lesion images with biopsy-proven diagnoses. The algorithm identified the lesions as benign, malignant carcinomas or malignant melanomas. The same images were also diagnosed by 21 board-certified dermatologists. The algorithm matched the performance of the dermatologists, as recently reported in Nature.

The researchers now plan to make their algorithm smartphone compatible to broaden its clinical applications. “Everyone will have a supercomputer in their pockets with a number of sensors in it, including a camera,” said Andre Esteva, a Stanford electrical engineering graduate student and co-lead author of the paper. “What if we could use it to visually screen for skin cancer? Or other ailments?”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

New models may help predict diabetes progression

December 2, 2016

Photo by InfoWire.dk

Diabetics exposed to consistently high blood glucose levels can develop serious secondary complications, including heart disease, stroke, blindness, kidney failure and ulcers that require the amputation of toes, feet or legs.

In order to predict which diabetic patients have a high risk for these complications, physicians may use mathematical models. For example, the UKPDS Risk Engine calculates a diabetic patient’s risk of coronary heart disease and stroke — based on their age, sex, ethnicity, smoking status, time since diabetes diagnosis and other variables.

But this strategy doesn’t provide the accuracy needed by doctors. In response, a research team at Duke University has developed machine-learning computer algorithms to search for patterns and correlations in EHR data from approximately 17,000 diabetic patients in the Duke health system.

The group, led by Ricardo Henao, an assistant research professor in electrical and computer engineering, has demonstrated more accurate predictions than the UKPDS Risk Engine. A recent news story explains:

“This new model can project whether a patient will require amputation within a year with almost 90 percent accuracy, and can correctly predict the risks of coronary artery disease, heart failure and kidney disease in four out of five cases. The model looks at what was typed into a patient’s chart — diagnosis codes, medications, laboratory tests — and picks up on which pieces of information in the EHR are correlated with the development of a comorbidity in the following year.”
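As a rough, hypothetical illustration of that idea (encoding chart entries as features and letting a model surface which ones track a future comorbidity), here is a minimal sketch. The feature names, toy labels and logistic regression are assumptions chosen for illustration, not the Duke model.

```python
# Minimal sketch: encode EHR chart items as binary features and fit a model
# whose weights indicate which items are associated with a comorbidity in
# the following year. All data below are made-up placeholders.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

charts = [
    {"dx:E11.9": 1, "rx:metformin": 1, "lab:hba1c_high": 1},
    {"dx:E11.9": 1, "rx:insulin": 1, "lab:egfr_low": 1},
    {"dx:E11.9": 1, "rx:metformin": 1},
]
developed_comorbidity = [0, 1, 0]  # toy labels: comorbidity within one year

vec = DictVectorizer()
X = vec.fit_transform(charts)
model = LogisticRegression().fit(X, developed_comorbidity)

# Weights highlight which chart items track the outcome in this toy example
for name, weight in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(name, round(weight, 3))
```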

The Duke researchers plan to improve the model by training their machine-learning algorithms on a larger data set of diabetic patients from additional hospitals.

However, relying on EHR data has drawbacks. For instance, a patient’s EHR may be incomplete, particularly if the patient doesn’t consistently see the same doctors. Another major challenge is gaining access to the medical records for research. The Duke team had to contact all 17,000 patients to get their informed consent and may encounter similar challenges for a larger scale project.

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Electrocardiogram: Blog illustrates value of old, but still vital cardiac test

September 23, 2016

Stephen Smith, MD, an emergency medicine physician at Hennepin County Medical Center in Minnesota, is passionate about using electrocardiograms to save lives. He even writes a popular blog called Dr. Smith’s ECG Blog to train others to more accurately interpret them.

If you’re one of the 735,000 Americans who had a heart attack in the last year, you almost certainly had your heart evaluated with an electrocardiogram, or ECG for short, as soon as you were brought into the emergency room. The heart produces small electrical impulses with each beat, which cause the heart muscle to contract and pump blood throughout your body. The ECG records this electrical activity using electrodes placed on the skin, allowing physicians to detect abnormal heart rhythms and heart muscle damage.

On the surface, an ECG just produces a simple line graph based on technology that was invented over a century ago. So why does it still play such a vital role in the clinic? And how can a physician diagnose a heart condition from a little blip on the line? I recently spoke with Smith, who is also a professor affiliated with the University of Minnesota Twin Cities, about the importance and subtleties of interpreting ECGs.

How do you use ECGs in your medical practice?

“I work full time as an emergency medicine physician and see thousands of patients per year. In the emergency room, the ECG is the first test that we use on everyone with chest pain because it’s the easiest, most non-invasive and cheapest cardiac test. Most of the time when someone is having a big heart attack (myocardial infarction), the ECG will show it. So this is all about patient care. It’s a really amazing diagnostic tool.”

Why did you start your ECG blog?

“Every day I use ECGs to improve the care of my patients, but the purpose of my blog is to help other people do so. I write it for cardiologists, cardiologist fellows, emergency medicine physicians, internal medicine physicians and paramedics — anyone who has to record and interpret ECGs — in order to improve their training and expertise. It’s easy to interpret a completely normal ECG, but many physicians fail to look at all aspects of the ECG together and many abnormalities go unrecognized. Reading ECGs correctly requires a lot of training.

For instance, one of my most popular blog posts presented the case of a 37-year-old woman with chest pain after a stressful interpersonal conflict. She was a non-smoker, with no hyperlipidemia and no family history of coronary artery disease. Her ECG showed an unequivocal, but extremely subtle, sign of a devastating myocardial infarction due to a complete closure of the artery supplying oxygenated blood to the front wall of the heart. Her blood testing for a heart attack didn’t detect it, so she was discharged and died at home within 12 hours. It was a terrible outcome, but it demonstrates how training caregivers to recognize these subtle findings on the ECG can mean the difference between life and death.

I get very excited when I see an unusual ECG, and I see several every day. In 2008, I started posting these subtle ECG cases online and, to my surprise, people all over the world became interested in my blog. In July, I had 280,000 visits to my blog and about 90,000 visits to my Facebook page. People from 190 countries are viewing and learning from my posts. And I get messages from all over the world saying how nice it is to have free access to such a high-quality educational tool. I spend about eight hours per week seeking out interesting ECG cases, writing them up and answering questions on my blog, Facebook and Twitter.”

Will ECGs ever be obsolete?

“I don’t think ECGs will ever be outdated, because there is so much information that can be gleaned from them. We’re also improving how to interpret them. The main limitation is having good data on the underlying physiology for each ECG, which can be fed into an artificial intelligence computer algorithm. An AI could learn many patterns that we don’t recognize today.

Right now I’m working with a startup company in France. They’re a bunch of genius programmers who are creating neural network artificial intelligence software. We’re basically training the computer to read ECGs better. We need many, many good data sets to train the AI. I’ve already provided the company with over 100,000 ECGs along with their associated cardiologist or emergency medicine physician interpretations. We’re in the process of testing the AI against experts and against other computer algorithms.

My only role is to help direct the research. I receive no money from the company and have no financial interests. But I do have an interest in making better ECG algorithms for better patient care.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

Enlisting artificial intelligence to assist radiologists

June 24, 2016

Photo by Gerd Leonhard

Specialized electronic circuits called graphics processing units, or GPUs, are at the heart of modern mobile phones, personal computers and gaming consoles. By harnessing multiple GPUs in concert, researchers can now solve previously elusive image processing problems. For example, Google and Facebook have both developed extremely accurate facial recognition software using these new techniques.

GPUs are also crucial to radiologists, because they can rapidly process large medical imaging datasets from CT, MRI, ultrasound and even conventional X-rays.

Now some radiology groups and technology companies are combining multiple GPUs with artificial intelligence (AI) algorithms to help improve radiology care. Simply put, an AI computer program can do tasks normally performed by intelligent people. In this case, AI algorithms can be trained to recognize and interpret subtle differences in medical images.

Stanford researchers have used machine learning for many years to look at medical images and computationally extract the features used to predict something about the patient, much as a radiologist would. However, the use of artificial intelligence, or deep learning algorithms, is new. Sandy Napel, PhD, a professor of radiology, explained:

“These deep learning paradigms are a deeply layered set of connections, not unlike the human brain, that are trained by giving them a massive amount of data with known truth. They basically iterate on the strength of the connections until they are able to predict the known truth very accurately.”

“You can give it 10,000 images of colon cancer. It will find the common features across those images automatically,” said Garry Choy, MD, a staff radiologist and assistant chief medical information officer at Massachusetts General Hospital, in a recent Diagnostic Imaging article. “If there are large data sets, it can teach itself what to look for.”

A major challenge is that these AI algorithms may require thousands of annotated radiology images to train them. So Stanford researchers are creating a database containing millions of de-identified radiology studies, including billions of images, totaling about a half million gigabytes. Each study in the database is associated with the de‐identified report that was created by the radiologist when the images were originally used for patient care.

“To enable our deep learning research, we are also applying machine learning methods to our large database of narrative radiology reports,” said Curtis Langlotz, MD, PhD, a Stanford professor of radiology and biomedical informatics. “We use natural language processing methods to extract discrete concepts, such as anatomy and pathology, from the radiology reports. This discrete data can then be used to train AI systems to recognize the abnormalities shown on the images themselves.”
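As a toy illustration of pulling discrete concepts out of free text, far simpler than the natural language processing methods Langlotz describes, here is a hypothetical keyword-matching sketch. The vocabularies and example report are made up, and real pipelines also handle synonyms and negation ("no hemorrhage"), which this does not.

```python
# Toy concept extraction: match report words against small, hypothetical
# anatomy and pathology vocabularies. Real NLP pipelines use much larger
# ontologies and handle negation and synonyms.
import re

ANATOMY = {"lung", "liver", "brain", "colon"}
PATHOLOGY = {"nodule", "hemorrhage", "polyp", "fracture"}

def extract_concepts(report: str) -> dict:
    words = set(re.findall(r"[a-z]+", report.lower()))
    return {"anatomy": sorted(words & ANATOMY),
            "pathology": sorted(words & PATHOLOGY)}

print(extract_concepts("Solid nodule in the right lung. Brain unremarkable."))
```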

Potential applications include using AI systems to help radiologists more quickly identify intracranial hemorrhages or more effectively detect malignant lung nodules. Deep learning systems are also being developed to perform triage — looking through all incoming cases and prioritizing the most critical ones to the top of the radiologist’s work queue.

However, the potential clinical applications have not been validated yet, according to Langlotz:

“We’re cautious about automated detection of abnormalities like lung nodules and colon polyps. Even with high sensitivity, these systems can distract radiologists with numerous false positives. And radiology images are significantly more complex than photos from the web or even other medical images. Few deep learning results of clinical relevance have been published or peer-reviewed yet.”

Researchers say the goal is to improve patient care and workflow, not replace doctors with intelligent computers.

“Reading about these advances in the news, and seeing demonstrations at meetings, some radiologists have become concerned that their jobs are at risk,” said Langlotz. “I disagree. Instead, radiologists will benefit from even more sophisticated electronic tools that focus on assistance with repetitive tasks, rare conditions, or meticulous exhaustive search — things that most humans aren’t very good at anyway.”

Napel concluded:

“At the end of the day, what matters to physicians is whether or not they can trust the information a diagnostic device, whether it be based in AI or something else, gives them. It doesn’t matter whether the opinion comes from a human or a machine. … Some day we may believe in the accuracy of these deep learning algorithms, when given the right kind of data, to create useful information for patient management. We’re just not there yet.”

This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.

