When I was a kid, I spent all summer swimming and lying out by the pool without sunscreen. Now, I go to a dermatologist annually because I know early detection of melanoma is critical.
But not everyone has easy access to a dermatologist. So Stanford researchers have created an artificially intelligent computer algorithm to diagnose skin cancer from photographs of skin lesions, as described in a recent Stanford News release.
The interdisciplinary team of computer scientists, dermatologists, pathologists and a microbiologist started with a deep learning algorithm developed by Google, which had already been trained on 1.28 million images to sort them into 1,000 categories, such as differentiating pictures of cats from dogs. The Stanford researchers adapted this algorithm to differentiate between images of malignant versus benign skin lesions.
They trained the algorithm for the task using a newly acquired database of nearly 130,000 clinical images of skin lesions corresponding to over 2,000 different diseases. The algorithm was given each image with an associated disease label, so it could learn how to classify the lesions.
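The approach described above is commonly called transfer learning: the layers of the pretrained network that extract visual features are kept, and only a new classifier is trained on the medical labels. The sketch below illustrates that idea in miniature; the frozen random projection standing in for the pretrained network, the toy data and the labels are all invented for illustration and are not the Stanford team's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained network's frozen feature extractor.
# (In the real system this is a deep convolutional network trained
# on millions of images; here a fixed projection plays that role.)
W_frozen = rng.normal(size=(32, 8))

def extract_features(images):
    """Map raw inputs (32-dim vectors here) to 8-dim features."""
    return np.tanh(images @ W_frozen)

# Toy labeled data: 0 = benign, 1 = malignant (hypothetical labels).
X = rng.normal(size=(200, 32))
true_w = rng.normal(size=8)
y = (extract_features(X) @ true_w > 0).astype(float)

# Train only the new classification head (logistic regression) on top
# of the frozen features -- the essence of transfer learning.
feats = extract_features(X)
w = np.zeros(8)
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))  # predicted probabilities
    w -= lr * feats.T @ (p - y) / len(y)    # gradient step on log-loss

preds = (1.0 / (1.0 + np.exp(-(feats @ w))) > 0.5).astype(float)
accuracy = (preds == y).mean()
```

Because the feature extractor is reused rather than retrained, the new task needs far fewer labeled examples, which is why the Stanford team could build on Google's network instead of training one from scratch.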
The effectiveness of the algorithm was tested with a second set of lesion images with biopsy-proven diagnoses. The algorithm classified each lesion as benign, a malignant carcinoma or a malignant melanoma. The same images were also diagnosed by 21 board-certified dermatologists. The algorithm matched the performance of the dermatologists, as recently reported in Nature.
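Comparisons like this one are typically scored by sensitivity (the fraction of malignant lesions correctly flagged) and specificity (the fraction of benign lesions correctly cleared). A minimal sketch of that scoring, using made-up labels rather than the study's data:

```python
# Hypothetical biopsy-proven labels and algorithm calls
# (1 = malignant, 0 = benign); invented for illustration only.
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
calls = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]

# Tally the four outcomes of a binary diagnostic test.
tp = sum(1 for t, c in zip(truth, calls) if t == 1 and c == 1)  # caught
fn = sum(1 for t, c in zip(truth, calls) if t == 1 and c == 0)  # missed
tn = sum(1 for t, c in zip(truth, calls) if t == 0 and c == 0)  # cleared
fp = sum(1 for t, c in zip(truth, calls) if t == 0 and c == 1)  # false alarm

sensitivity = tp / (tp + fn)  # fraction of malignant lesions caught
specificity = tn / (tn + fp)  # fraction of benign lesions cleared
```

For a screening tool, high sensitivity matters most, since a missed melanoma is far costlier than a benign lesion sent for an unnecessary biopsy.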
The researchers now plan to make their algorithm smartphone compatible to broaden its clinical applications. “Everyone will have a supercomputer in their pockets with a number of sensors in it, including a camera,” said Andre Esteva, a Stanford electrical engineering graduate student and co-lead author of the paper. “What if we could use it to visually screen for skin cancer? Or other ailments?”
This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.