“Hey Siri, am I depressed?” When I posed this question to my iPhone, Siri’s reply was “I can’t really say, Jennifer.” But someday, software programs like Siri or Alexa may be able to talk to patients about their mental health symptoms to assist human therapists.
To learn more, I spoke with Adam Miner, PsyD, an instructor and co-director of Stanford’s Virtual Reality-Immersive Technology Clinic, who is working to improve conversational AI to recognize and respond to health issues.
What do you do as an AI psychologist?
“AI psychology isn’t a new specialty yet, but I do see it as a growing interdisciplinary need. I work to improve mental health access and quality through safe and effective artificial intelligence. I use methods from social science and computer science to answer questions about AI and vulnerable groups who may benefit or be harmed.”
How did you become interested in this field?
“During my training as a clinical psychologist, I had patients who waited years to tell anyone about their problems for many different reasons. I believe the role of a clinician isn’t to blame people who don’t come into the hospital. Instead, we should look for opportunities to provide care when people are ready and willing to ask for it, even if that is through machines.
I was reading research from different fields like communication and computer science and I was struck by the idea that people may confide intimate feelings to computers and be impacted by how computers respond. I started testing different digital assistants, like Siri, to see how they responded to sensitive health questions. The potential for good outcomes — as well as bad — quickly came into focus.”
Why is technology needed to assess the mental health of patients?
“We have a mental health crisis and existing barriers to care — like social stigma, cost and treatment access. Technology, specifically AI, has been called on to help. The big hope is that AI-based systems, unlike human clinicians, would never get tired, be available wherever and whenever the patient needs them and know more than any human could ever know.
However, we need to avoid inflated expectations. There are real risks around privacy, ineffective care and worsening disparities for vulnerable populations. There’s a lot of excitement, but also a gap in knowledge. We don’t yet fully understand all the complexities of human–AI interactions.
People may not feel judged when they talk to a machine the same way they do when they talk to a human — the conversation may feel more private. But it may in fact be more public because information could be shared in unexpected ways or with unintended parties, such as advertisers or insurance companies.”
What are you hoping to accomplish with AI?
“If successful, AI could help improve access in three key ways. First, it could reach people who aren’t accessing traditional, clinic-based care for financial, geographic or other reasons like social anxiety. Second, it could help create a ‘learning healthcare system’ in which patient data is used to improve evidence-based care and clinician training.
Lastly, I have an ethical duty to practice culturally sensitive care as a licensed clinical psychologist. But a patient might use a word to describe anxiety that I don’t know and I might miss the symptom. AI, if designed well, could recognize cultural idioms of distress or speak multiple languages better than I ever will. But AI isn’t magic. We’ll need to thoughtfully design and train AI to do well with different genders, ethnicities, races and ages to prevent further marginalizing vulnerable groups.
If AI could help with diagnostic assessments, it might allow people to access care who otherwise wouldn’t. This may help avoid downstream health emergencies like suicide.”
How long until AI is used in the clinic?
“I hesitate to give any timeline, as AI can mean so many different things. But a few key challenges need to be addressed before wide deployment, including the privacy issues, the impact of AI-mediated communications on clinician-patient relationships and the inclusion of cultural respect.
The clinician–patient relationship is often overlooked when imagining a future with AI. We know from research that people can feel an emotional connection to health-focused conversational AI. What we don’t know is whether this will strengthen or weaken the patient-clinician relationship, which is central to both patient care and a clinician’s sense of self. If patients lose trust in mental health providers, it will cause real and lasting harm.”
This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
Assembling an ionocraft microrobot in UC Berkeley’s Swarm Lab. (Photos by Adam Lau)
A tiny robot takes off and drunkenly flies several centimeters above a table in the Berkeley Sensor and Actuator Center. Roughly the size and weight of a postage stamp, the microrobot consists of a mechanical structure, propulsion system, motion-tracking sensor and multiple wires that supply power and communication signals.
This flying robot is the project of Daniel Drew, a graduate student who is working under the guidance of electrical engineering and computer sciences professor Kris Pister (M.S.’89, Ph.D.’92 EECS). The culmination of decades of research, these microrobots arose from Pister’s invention of “smart dust,” tiny chips roughly the size of rice grains packed with sensors, microprocessors, wireless radios and batteries. Pister likes to refer to his microrobots as “smart dust with legs.”
“We’re pushing back the boundaries of knowledge in the field of miniaturization, robotic actuators, micro-motors, wireless communication and many other areas,” says Pister. “Where these results will lead us is difficult to predict.”
For now, Pister and his team are aiming to make microrobots that can self-deploy, in the hopes that they could be used by first responders to search for survivors after a disaster, industrial plants to detect chemical leaks or farmers to monitor and tend their crops.
These insect-sized robots come with a unique advantage for solving problems. For example, many farmers already use large drones to monitor and spray their plants to improve crop quality and yield. Microrobots could take this to a whole new level. “A standard quadcopter gives us a bird’s eye view of the field, but a microrobot would give us a bug’s eye view,” Drew says. “We could program them to do important jobs like pollination, looking for the same visual cues on flowers as insects [see].”
But to apply this kind of technology on a mass scale, the team first has to overcome significant challenges in microtechnology. And as Pister says, “Making tiny robots that fly, walk or jump hasn’t been easy. Every single piece of it has been hard.”
Flying silently with ion propulsion
Most flying microrobots have flapping wings that mimic real-life insects, like bees. But the team’s flying microrobot, called an ionocraft, uses a custom ion propulsion system unlike anything in nature. There are no moving parts, so it has the potential to be very durable. And it’s completely silent when it flies, so it doesn’t make an annoying buzz like a quadcopter rotor or mosquito.
The ionocraft’s propulsion system is novel, not just a scaled-down version of the ion drives used on NASA spacecraft. “We use a mechanism that’s different than the one used in space, which ejects ions out the back to propel the spacecraft forward,” Drew says. “A key difference is that we have air on Earth.”
Instead, the ionocraft thruster consists of a thin emitter wire and a collector grid. When a voltage is applied between them, a cloud of positively charged ions forms around the wire. This ion cloud zips toward the negatively charged collector grid, colliding with neutral air molecules along the way. Those collisions push the air along with the ions, creating a wind that moves the robot.
“If you put your hand under the collector grid of the ionocraft, you’ll feel wind on your hand — that’s the air stream that propels the microrobot upwards,” explains Drew. “It’s similar to the airstream that you’d feel if you put your hand under the rotor blades of a helicopter.”
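As a rough illustration of the physics, this kind of electrohydrodynamic thruster is often modeled with the one-dimensional approximation F ≈ I·d/μ, where I is the ion current, d is the emitter-to-collector gap and μ is the ion mobility of air. The sketch below plugs in purely illustrative numbers, not the Berkeley device’s actual operating point.

```python
# Back-of-the-envelope electrohydrodynamic (EHD) thrust estimate.
# F ~ I * d / mu  (one-dimensional approximation; illustrative numbers only,
# not the actual ionocraft's operating parameters).

ION_MOBILITY_AIR = 2.0e-4   # m^2/(V*s), approximate mobility of positive ions in air
GRAVITY = 9.81              # m/s^2

def ehd_thrust(current_amps: float, gap_m: float) -> float:
    """Approximate thrust (newtons) of a corona-discharge thruster."""
    return current_amps * gap_m / ION_MOBILITY_AIR

if __name__ == "__main__":
    current = 1.0e-4   # 0.1 mA of corona current (hypothetical)
    gap = 1.0e-3       # 1 mm emitter-to-collector gap (hypothetical)
    thrust_n = ehd_thrust(current, gap)
    print(f"Estimated thrust: {thrust_n * 1e3:.2f} mN "
          f"(enough to lift roughly {thrust_n / GRAVITY * 1e6:.0f} mg)")
```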
The collector grid also provides the ionocraft’s mechanical structure. Having components play more than one role is critical for these tiny robots, which need to be compact and lightweight for the propulsion system to work.
Each ionocraft has four ion thrusters that are independently controlled by adjusting their voltages. This allows the team to control the orientation of the microrobot in much the same way as a standard quadcopter drone: they can control the craft’s roll, pitch and yaw. What they can’t do yet is make the microrobot hover. “So far, we can fly it bouncing around like a bug in a web, but the goal is to get it to hover steadily in the air,” Pister says.
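The quadcopter-style control Drew and Pister describe can be pictured as a simple “mixer” that converts a base thrust command plus roll and pitch corrections into four thruster voltages. Below is a minimal sketch of that idea with made-up voltage limits and a plus-shaped thruster layout; the real ionocraft controller, including how it generates yaw, is more involved.

```python
# Toy "mixer" for a four-thruster craft laid out front/back/left/right.
# Hypothetical voltage limits; the real control electronics differ.

V_MIN, V_MAX = 0.0, 2000.0  # volts (illustrative)

def mix(base_v: float, roll_cmd: float, pitch_cmd: float) -> dict:
    """Map a base voltage plus roll/pitch corrections (in volts) to four thrusters."""
    voltages = {
        "front": base_v + pitch_cmd,   # pitch: front vs. back differential
        "back":  base_v - pitch_cmd,
        "left":  base_v + roll_cmd,    # roll: left vs. right differential
        "right": base_v - roll_cmd,
    }
    # Clamp each command to the supply limits.
    return {k: min(max(v, V_MIN), V_MAX) for k, v in voltages.items()}

print(mix(base_v=1800.0, roll_cmd=25.0, pitch_cmd=-10.0))
```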
Taking first steps and jumps
In parallel, the researchers are developing microrobots that can walk or jump. Their micro-walker is composed of three silicon chips: a body chip that plugs perpendicularly into two chips with three legs each. “The hexapod microrobot is about the size of a really big ant, but it’s boxier,” says Pister.
Not only does the body chip provide structural support, but it also routes the external power and control signals to the leg chips. These leg chips are oriented vertically, allowing the legs to move along the table in a sweeping motion. Each leg is driven by two tiny on-chip linear motors, called electrostatic inchworm motors, which were invented by Pister. One motor lifts the robot’s body and the second pushes it forward. This unique walking mechanism allows three-dimensional microrobots to be fabricated more simply and cheaply.
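Based on that description, with one motor to lift and one to push, a single leg’s stride can be thought of as a four-phase cycle. The sketch below is a conceptual simplification, not the actual drive sequence used in the lab.

```python
# Simplified per-leg stride cycle for a lift-and-push walker (conceptual only).
PHASES = [
    "lift:  vertical motor raises the body",
    "push:  horizontal motor sweeps the leg, advancing the body",
    "lower: vertical motor sets the body back down",
    "reset: horizontal motor retracts the leg for the next stride",
]

def walk(n_strides: int) -> None:
    for stride in range(1, n_strides + 1):
        for phase in PHASES:
            print(f"stride {stride} - {phase}")

walk(2)
```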
Pister says the design should, in theory, allow the hexapod to run. So far it can only stand up and shuffle forward. However, he believes their recent fabrication and assembly improvements will have the microrobot walking more quickly and smoothly soon.
The jumping microrobot also uses on-chip inchworm motors. Its motor assembly compresses springs to store energy, which is then released when the microrobot jumps. Currently, it can only jump several millimeters in the air, but the team’s goal is to have it jump six meters from the floor to the table. To achieve this, they are developing more efficient springs and motors.
“Having robots that can shuffle, jump a little and fly is a major achievement,” Pister says. “They are coming together. But they’re all still tethered by wires for control, data and power signals.”
Working toward autonomy
Currently, high voltage control signals are passed over wires that connect a computer to a robot, complicating and restricting its movement. The team is developing better ways to control the microrobots, untethering them from the external computer. But transferring the controller onto the microrobot itself is challenging. “Small robots can’t carry the same kind of increasingly powerful computer chips that a standard quadcopter drone can carry,” Drew says. “We need to do more with less.”
So the group is designing and testing a single chip platform that will act as the robots’ brains for communication and control. They plan to send control messages to this chip from a cell phone using wireless technology such as Bluetooth. Ultimately, they hope to use only high-level commands, like “go pollinate the pumpkin field,” which the self-mobilizing microrobots can follow.
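As a purely hypothetical illustration of such an interface, a phone app might pack a small command message into bytes and write it to a Bluetooth characteristic on the robot’s brain chip. The opcodes and packet layout below are invented for illustration; the article does not describe the group’s actual protocol.

```python
# Hypothetical command packet for a phone-to-microrobot Bluetooth link.
# Opcodes and byte layout are invented for illustration.
import struct
from enum import IntEnum

class Opcode(IntEnum):
    STOP = 0
    GOTO_WAYPOINT = 1   # move to (x, y) in meters
    START_TASK = 2      # e.g., "survey this area"

def pack_command(opcode: Opcode, x_m: float = 0.0, y_m: float = 0.0) -> bytes:
    """Pack an opcode and a 2-D target into a fixed 9-byte message."""
    return struct.pack("<Bff", opcode, x_m, y_m)

def unpack_command(payload: bytes) -> tuple:
    opcode, x_m, y_m = struct.unpack("<Bff", payload)
    return Opcode(opcode), x_m, y_m

msg = pack_command(Opcode.GOTO_WAYPOINT, 1.5, -0.25)
print(len(msg), unpack_command(msg))
```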
The team also plans to integrate on-board sensors, including a camera and microphone to act as the robot’s eyes and ears. These sensors will be used for navigation, as well as any tasks they want the robot to perform. “As the microrobot moves around, we could use its camera and microphone to transmit live video to a cell phone,” says Pister. “This could be used for many applications, including search and rescue.”
Using the brain chip interfaced with on-board sensors will allow the team to eliminate most of the troublesome wires. The next step will be to eliminate the power wires so the robots can move freely. Pister showed early on that solar cells are strong enough to power microrobots. In fact, a microrobot prototype that has been sitting on his office shelf for about 15 years still moves using solar power.
Now, his team is developing a power chip with solar cells in collaboration with Jason Stauth (M.S.’06, Ph.D.’08 EECS), who is an associate professor of engineering at Dartmouth. They’re also working with electrical engineering and computer sciences professor Ana Arias to investigate using batteries.
Finally, the researchers are developing clever machine learning algorithms that guide a microrobot’s motion, making it as smooth as possible.
In Drew’s case, the initial algorithms are based on data from flying a small quadcopter drone. “We’re first developing the machine learning platform with a centimeter-scale, off-the-shelf quadcopter,” says Drew. “Since the control system for an ionocraft is similar to a quadcopter, we’ll be able to adapt and apply the algorithms to our ionocraft. Hopefully, we’ll be able to make it hover.”
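One generic way to picture that workflow, though not necessarily the team’s exact method, is to fit a model on plentiful logged quadcopter data and then continue training the same model on a much smaller set of ionocraft flights. The sketch below uses synthetic stand-in data.

```python
# Sketch of transferring a learned controller from quadcopter logs to ionocraft
# logs. Data shapes and the warm-start strategy are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Pretend flight logs: rows of [roll, pitch, roll_rate, pitch_rate] states
# paired with the actuator commands that kept the craft stable.
quad_states, quad_cmds = rng.normal(size=(5000, 4)), rng.normal(size=(5000, 4))
iono_states, iono_cmds = rng.normal(size=(200, 4)), rng.normal(size=(200, 4))

# 1) Train on plentiful quadcopter data.
policy = MLPRegressor(hidden_layer_sizes=(64, 64), warm_start=True,
                      max_iter=200, random_state=0)
policy.fit(quad_states, quad_cmds)

# 2) Keep the learned weights and continue training on scarce ionocraft data.
policy.fit(iono_states, iono_cmds)

print(policy.predict(iono_states[:1]))
```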
Putting it all together
Soon, the team hopes to have autonomous microrobots wandering around the lab directed by cell phone messages. But their ambitions don’t stop there. “I think it’s beneficial to have flying robots and walking robots cooperating together,” Drew says. “Flying robots will always consume more energy than walking robots, but they can overcome obstacles and sense the world from a higher vantage point. There is promise to having both or even a mixed-mobility microrobot, like a beetle that can fly or walk.”
Mixed-mobility microrobots could do things like monitor bridges, railways and airplanes. Currently, static sensors are used to monitor infrastructure, but they are difficult and time-consuming to deploy and maintain — picture changing the batteries of 100,000 sensors across a bridge. Mixed-mobility microrobots could also search for survivors after a disaster by flying, crawling and jumping through the debris.
“Imagine you’re a first responder who comes to the base of a collapsed building. Working by flashlight, it’s hard to see much but the dust hanging in the air,” says Drew. “Now, imagine pulling out a hundred insect-sized robots from your pack, tossing them into the air and having them disperse in all directions. Infrared cameras on each robot look for signs of life. When one spots a survivor, it sends a message back to you over a wireless network. Then a swarm of robots glowing like fireflies leads you to the victim’s location, while a group ahead clears out the debris in your path.”
The applications seem almost endless given the microrobots’ potential versatility and affordability. Pister estimates they might cost as little as one dollar someday, using batch manufacturing techniques. The technology is also likely to reach beyond microrobots.
For Pister’s team, the path forward is clear; the open question is when. “All the pieces are on the table now,” Pister says, “and it’s ‘just’ a matter of integration. But system integration is a challenge in its own right, especially with packaging. We may get results in the next six months — or it may take another five years.”
Tuberculosis is an infectious disease that kills almost two million people worldwide each year, even though the disease can be identified on a simple chest X-ray and treated with antibiotics. One major challenge is that TB-prevalent areas typically lack the radiologists needed to screen and diagnose the disease.
New artificial intelligence models may help. Researchers from the Thomas Jefferson University Hospital in Pennsylvania have developed and tested an artificial intelligence model that accurately identifies tuberculosis from chest X-rays.
The model could provide a cost-effective way to expand TB diagnosis and treatment in developing nations, said Paras Lakhani, MD, study co-author and TJUH radiologist, in a recent news release.
Lakhani performed the retrospective study with his colleague Baskaran Sundaram, MD, a TJUH cardiothoracic radiologist. They obtained 1,007 chest X-rays of patients with and without active TB from publicly available datasets. The data were split into three categories: training (685 patients), validation (172 patients) and test (150 patients).
The training dataset was used to teach two artificial intelligence models — AlexNet and GoogLeNet — to analyze the chest X-ray data and classify the patients as having TB or being healthy. These existing deep learning models had already been pre-trained with everyday nonmedical images on ImageNet. Once the models were trained, the validation dataset was used to select the best-performing model and then the test dataset was used to assess its accuracy.
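A minimal sketch of that transfer-learning setup, assuming torchvision’s ImageNet-pretrained AlexNet and a two-class (TB versus healthy) output layer; the authors’ exact preprocessing, augmentation and training schedule are described in their paper.

```python
# Fine-tune an ImageNet-pretrained CNN for two-class chest X-ray screening.
# A sketch of the general approach, not the authors' exact configuration.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(pretrained=True)     # weights pretrained on ImageNet
model.classifier[6] = nn.Linear(4096, 2)    # replace 1,000-way head with TB vs. healthy

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_one_epoch(loader):
    """loader yields (image batch, label batch) from the 685 training X-rays."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```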
The researchers got the best performance using an ensemble of AlexNet and GoogLeNet that statistically combined the probability scores for both artificial intelligence models — with a net accuracy of 96 percent.
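The ensembling step itself can be as simple as averaging the two networks’ predicted probabilities for each X-ray and applying a threshold. The toy sketch below uses made-up scores; the study’s actual combination and operating point may differ.

```python
# Toy probability-averaging ensemble for two classifiers (illustrative scores).
import numpy as np

# Predicted probability of "TB" from each model for five test X-rays (made up).
p_alexnet   = np.array([0.92, 0.10, 0.55, 0.81, 0.33])
p_googlenet = np.array([0.88, 0.05, 0.61, 0.74, 0.41])

p_ensemble = (p_alexnet + p_googlenet) / 2.0
predictions = p_ensemble >= 0.5        # True = flag as TB
print(p_ensemble, predictions)
```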
The authors explain that the workflow of combining artificial intelligence and human diagnosis could work well in TB-prevalent regions, where an automated method could interpret most cases and only the ambiguous cases would be sent to a radiologist.
The researchers plan to further improve their artificial intelligence models with more training cases and other artificial intelligence algorithms, and then they hope to apply the approach in community settings.
“The relatively high accuracy of the deep learning models is exciting,” Lakhani said in the release. “The applicability for TB is important because it’s a condition for which we have treatment options. It’s a problem that we can solve.”
This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
Researchers have developed a machine-learning computer algorithm that predicts the health outcome of patients with acute myeloid leukemia — identifying who is likely to relapse or go into remission after treatment.
Acute myeloid leukemia (AML) is a cancer characterized by the rapid growth of abnormal white blood cells that build up in the bone marrow and interfere with the production of normal blood cells.
A standard tool used for AML diagnosis and treatment monitoring is flow cytometry, which measures the physical and chemical characteristics of cells in a blood or bone marrow sample to identify malignant leukemic cells. The tool can even detect residual levels of the disease after treatment.
Unfortunately, scientists typically analyze this flow cytometry data using a time-consuming manual process. Now, researchers from Purdue University and Roswell Park Cancer Institute believe they have developed a machine-learning computer algorithm that can extract information from the data better than humans.
“Machine learning is not about modeling data. It’s about extracting knowledge from the data you have so you can build a powerful, intuitive tool that can make predictions about future data that the computer has not previously seen — the machine is learning, not memorizing — and that’s what we did,” said Murat Dundar, PhD, associate professor at Indiana University-Purdue University, in a recent news release.
The research team trained their computer algorithm using bone marrow data and medical histories of AML patients along with blood data from healthy individuals. They then tested the algorithm using data collected from 36 additional AML patients.
In addition to differentiating between normal and abnormal samples, the algorithm used the flow cytometry bone marrow data to predict patient outcomes — with between 90 and 100 percent accuracy — as recently reported in IEEE Transactions on Biomedical Engineering.
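In generic terms, that pipeline amounts to summarizing each patient’s flow cytometry readout as a feature vector and training a classifier on known outcomes. The sketch below uses synthetic data and an off-the-shelf random forest; the authors’ published method models the cell populations in a more sophisticated way.

```python
# Generic outcome-prediction sketch (synthetic data; not the authors' algorithm).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Each row: summary features of one patient's bone marrow flow cytometry
# (e.g., fractions of gated cell populations); label 1 = relapse, 0 = remission.
X_train, y_train = rng.normal(size=(120, 20)), rng.integers(0, 2, size=120)
X_test, y_test = rng.normal(size=(36, 20)), rng.integers(0, 2, size=36)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```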
Although more work is needed, the researchers hope their algorithm will improve monitoring of treatment response and enable early detection of disease progression.
Dundar explained in the release:
“It’s pretty straightforward to teach a computer to recognize AML. … What was challenging was to go beyond that work and teach the computer to accurately predict the direction of change in disease progression in AML patients, interpreting new data to predict the unknown: which new AML patients will go into remission and which will relapse.”
This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
When I was a kid, I spent all summer swimming and lying out by the pool without sunscreen. Now, I go to a dermatologist annually because I know early detection of melanoma is critical.
But not everyone has easy access to a dermatologist. So Stanford researchers have created an artificially intelligent computer algorithm to diagnose cancer from photographs of skin lesions, as described in a recent Stanford News release.
The interdisciplinary team of computer scientists, dermatologists, pathologists and a microbiologist started with a deep learning algorithm developed by Google, which was already trained to classify 1.28 million images into 1,000 categories — such as differentiating pictures of cats from dogs. The Stanford researchers adapted this algorithm to differentiate between images of malignant versus benign skin lesions.
They trained the algorithm for the task using a newly acquired database of nearly 130,000 clinical images of skin lesions corresponding to over 2,000 different diseases. The algorithm was given each image with an associated disease label, so it could learn how to classify the lesions.
The effectiveness of the algorithm was tested with a second set of lesion images with biopsy-proven diagnoses. The algorithm identified the lesions as benign, malignant carcinomas or malignant melanomas. The same images were also diagnosed by 21 board-certified dermatologists. The algorithm matched the performance of the dermatologists, as recently reported in Nature.
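Comparing an algorithm with dermatologists on biopsy-proven images ultimately comes down to sensitivity and specificity. The sketch below shows that calculation with made-up predictions, not the study’s actual data.

```python
# Sensitivity/specificity comparison against biopsy-proven labels (made-up data).
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """y_true/y_pred: 1 = malignant, 0 = benign."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

y_biopsy = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # ground truth from biopsy
y_algo   = np.array([1, 0, 1, 0, 0, 0, 1, 0])   # algorithm calls (hypothetical)
y_derm   = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # one dermatologist's calls (hypothetical)

print("algorithm:    ", sensitivity_specificity(y_biopsy, y_algo))
print("dermatologist:", sensitivity_specificity(y_biopsy, y_derm))
```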
The researchers now plan to make their algorithm smartphone compatible to broaden its clinical applications. “Everyone will have a supercomputer in their pockets with a number of sensors in it, including a camera,” said Andre Esteva, a Stanford electrical engineering graduate student and co-lead author of the paper. “What if we could use it to visually screen for skin cancer? Or other ailments?”
This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
Diabetics exposed to consistently high blood glucose levels can develop serious secondary complications, including heart disease, stroke, blindness, kidney failure and ulcers that require the amputation of toes, feet or legs.
In order to predict which diabetic patients have a high risk for these complications, physicians may use mathematical models. For example, the UKPDS Risk Engine calculates a diabetic patient’s risk of coronary heart disease and stroke — based on their age, sex, ethnicity, smoking status, time since diabetes diagnosis and other variables.
But this strategy doesn’t provide the accuracy needed by doctors. In response, a research team at Duke University has developed machine-learning computer algorithms to search for patterns and correlations in electronic health record (EHR) data from approximately 17,000 diabetic patients in the Duke health system.
The group, led by Ricardo Henao, an assistant research professor in electrical and computer engineering, has demonstrated more accurate predictions than the UKPDS Risk Engine. A recent news story explains:
“This new model can project whether a patient will require amputation within a year with almost 90 percent accuracy, and can correctly predict the risks of coronary artery disease, heart failure and kidney disease in four out of five cases. The model looks at what was typed into a patient’s chart — diagnosis codes, medications, laboratory tests — and picks up on which pieces of information in the EHR are correlated with the development of a comorbidity in the following year.”
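In generic terms, that kind of model turns each patient’s chart into a vector of diagnosis codes, medications and lab flags and learns which ones raise the odds of a complication in the following year. The toy sketch below uses a handful of synthetic charts and logistic regression; it is not Duke’s actual feature set or model.

```python
# Sketch: predict a next-year complication from EHR-derived features (synthetic).
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Toy charts: which diagnosis codes / medications / lab flags appear this year.
charts = [
    {"dx:E11.9": 1, "rx:metformin": 1, "lab:hba1c_high": 1},
    {"dx:E11.9": 1, "dx:I25.10": 1, "rx:insulin": 1, "lab:egfr_low": 1},
    {"dx:E11.9": 1, "rx:metformin": 1},
    {"dx:E11.9": 1, "dx:E11.621": 1, "lab:hba1c_high": 1, "rx:insulin": 1},
]
complication_next_year = [0, 1, 0, 1]   # e.g., amputation, CAD, kidney disease

vec = DictVectorizer()
X = vec.fit_transform(charts)

model = LogisticRegression(max_iter=1000).fit(X, complication_next_year)
new_chart = [{"dx:E11.9": 1, "rx:insulin": 1}]
print(model.predict_proba(vec.transform(new_chart))[:, 1])  # risk of complication
```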
The Duke researchers plan to improve the model by training their machine-learning algorithms on a larger data set of diabetic patients from additional hospitals.
However, relying on EHR data has drawbacks. For instance, a patient’s EHR may be incomplete, particularly if the patient doesn’t consistently see the same doctors. Another major challenge is gaining access to the medical records for research. The Duke team had to contact all 17,000 patients to get their informed consent and may encounter similar challenges for a larger scale project.
This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
Stephen Smith, MD, an emergency medicine physician at Hennepin County Medical Center in Minnesota, is passionate about using electrocardiograms to save lives. He even writes a popular blog called Dr. Smith’s ECG Blog to train others to more accurately interpret them.
If you’re one of the 735,000 Americans who had a heart attack in the last year, you almost certainly had your heart evaluated with an electrocardiogram, or ECG for short, as soon as you were brought into the emergency room. The heart produces small electrical impulses with each beat, which cause the heart muscle to contract and pump blood throughout your body. The ECG records this electrical activity using electrodes placed on the skin, allowing physicians to detect abnormal heart rhythms and heart muscle damage.
On the surface, an ECG just produces a simple line graph based on technology that was invented over a century ago. So why does it still play such a vital role in the clinic? And how can a physician diagnose a heart condition from a little blip on the line? I recently spoke with Smith, who is also a professor affiliated with the University of Minnesota Twin Cities, about the importance and subtleties of interpreting ECGs.
How do you use ECGs in your medical practice?
“I work full time as an emergency medicine physician and see thousands of patients per year. In the emergency room, the ECG is the first test that we use on everyone with chest pain because it’s the easiest, most non-invasive and cheapest cardiac test. Most of the time when someone is having a big heart attack (myocardial infarction), the ECG will show it. So this is all about patient care. It’s a really amazing diagnostic tool.”
Why did you start your ECG blog?
“Every day I use ECGs to improve the care of my patients, but the purpose of my blog is to help other people do so. I write it for cardiologists, cardiologist fellows, emergency medicine physicians, internal medicine physicians and paramedics — anyone who has to record and interpret ECGs — in order to improve their training and expertise. It’s easy to interpret a completely normal ECG, but many physicians fail to look at all aspects of the ECG together and many abnormalities go unrecognized. Reading ECGs correctly requires a lot of training.
For instance, one of my most popular blog posts presented the case of a 37-year-old woman with chest pain after a stressful interpersonal conflict. She was a non-smoker, with no hyperlipidemia and no family history of coronary artery disease. Her ECG showed an unequivocal, but extremely subtle, sign of a devastating myocardial infarction due to a complete closure of the artery supplying oxygen-rich blood to the front wall of the heart. Her blood testing for a heart attack didn’t detect it, so she was discharged and died at home within 12 hours. It was a terrible outcome, but it demonstrates how training caregivers to recognize these subtle findings on the ECG can mean the difference between life and death.
I get very excited when I see an unusual ECG, and I see several every day. In 2008, I started posting these subtle ECG cases online and, to my surprise, people all over the world became interested in my blog. In July, I had 280,000 visits to my blog and about 90,000 visits to my Facebook page. People from 190 countries are viewing and learning from my posts. And I get messages from all over the world saying how nice it is to have free access to such a high-quality educational tool. I spend about eight hours per week seeking out interesting ECG cases, writing them up and answering questions on my blog, Facebook and Twitter.”
Will ECGs ever be obsolete?
“I don’t think ECGs will ever be outdated, because there is so much information that can be gleaned from them. We’re also improving how to interpret them. The main limitation is having good data on the underlying physiology for each ECG, which can be fed into an artificial intelligence computer algorithm. An AI could learn many patterns that we don’t recognize today.
Right now I’m working with a startup company in France. They’re a bunch of genius programmers who are creating neural network artificial intelligence software. We’re basically training the computer to read ECGs better. We need many, many good data sets to train the AI. I’ve already provided the company with over 100,000 ECGs along with their associated cardiologist or emergency medicine physician interpretations. We’re in the process of testing the AI against experts and against other computer algorithms.
My only role is to help direct the research. I receive no money from the company and have no financial interests. But I do have an interest in making better ECG algorithms for better patient care.”
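The kind of software Smith describes is commonly built as a one-dimensional convolutional network that reads the raw multi-lead waveform and outputs diagnostic probabilities. The sketch below shows that general architecture; it is not the startup’s actual model.

```python
# Minimal 1-D CNN sketch for ECG classification (illustrative architecture only).
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    def __init__(self, n_leads: int = 12, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):            # x: (batch, leads, samples)
        return self.classifier(self.features(x).squeeze(-1))

# One fake 10-second, 500 Hz, 12-lead ECG.
ecg = torch.randn(1, 12, 5000)
print(ECGNet()(ecg).shape)           # -> torch.Size([1, 5]) class scores
```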
This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.
Specialized electronic circuits called graphics processing units, or GPUs, are at the heart of modern mobile phones, personal computers and gaming consoles. By combining multiple GPUs in concert, researchers can now solve previously elusive image processing problems. For example, Google and Facebook have both developed extremely accurate facial recognition software using these new techniques.
GPUs are also crucial to radiologists, because they can rapidly process large medical imaging datasets from CT, MRI, ultrasound and even conventional X-rays.
Now some radiology groups and technology companies are combining multiple GPUs with artificial intelligence (AI) algorithms to help improve radiology care. Simply put, an AI computer program can do tasks normally performed by intelligent people. In this case, AI algorithms can be trained to recognize and interpret subtle differences in medical images.
Stanford researchers have used machine learning for many years to look at medical images and computationally extract the features used to predict something about the patient, much as a radiologist would. However, the use of artificial intelligence, or deep learning algorithms, is new. Sandy Napel, PhD, a professor of radiology, explained:
“These deep learning paradigms are a deeply layered set of connections, not unlike the human brain, that are trained by giving them a massive amount of data with known truth. They basically iterate on the strength of the connections until they are able to predict the known truth very accurately.”
“You can give it 10,000 images of colon cancer. It will find the common features across those images automatically,” said Garry Choy, MD, a staff radiologist and assistant chief medical information officer at Massachusetts General Hospital, in a recent Diagnostic Imaging article. “If there are large data sets, it can teach itself what to look for.”
A major challenge is that these AI algorithms may require thousands of annotated radiology images to train them. So Stanford researchers are creating a database containing millions of de-identified radiology studies, including billions of images, totaling about a half million gigabytes. Each study in the database is associated with the de-identified report that was created by the radiologist when the images were originally used for patient care.
“To enable our deep learning research, we are also applying machine learning methods to our large database of narrative radiology reports,” said Curtis Langlotz, MD, PhD, a Stanford professor of radiology and biomedical informatics. “We use natural language processing methods to extract discrete concepts, such as anatomy and pathology, from the radiology reports. This discrete data can then be used to train AI systems to recognize the abnormalities shown on the images themselves.”
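A toy version of that labeling step: scan each report for target findings and handle simple negations so that “no intracranial hemorrhage” is not counted as a positive label. Real clinical NLP pipelines are far more sophisticated; this only illustrates the idea.

```python
# Toy report labeler: keyword spotting with naive negation handling.
# Real clinical NLP (negation, uncertainty, anatomy linking) is far richer.
import re

FINDINGS = {
    "intracranial_hemorrhage": r"intracranial hemorrhage|subdural hematoma",
    "lung_nodule": r"pulmonary nodule|lung nodule",
}
NEGATION_PREFIX = r"\b(?:no|without|negative for)\b[^.]*"

def label_report(report: str) -> dict:
    """Return finding -> True/False labels derived from a free-text report."""
    text = report.lower()
    labels = {}
    for name, pattern in FINDINGS.items():
        mentioned = re.search(pattern, text) is not None
        negated = re.search(NEGATION_PREFIX + "(?:" + pattern + ")", text) is not None
        labels[name] = mentioned and not negated
    return labels

print(label_report("No acute intracranial hemorrhage. 6 mm lung nodule in the RUL."))
# -> {'intracranial_hemorrhage': False, 'lung_nodule': True}
```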
Potential applications include using AI systems to help radiologists more quickly identify intracranial hemorrhages or more effectively detect malignant lung nodules. Deep learning systems are also being developed to perform triage — looking through all incoming cases and prioritizing the most critical ones to the top of the radiologist’s work queue.
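The triage idea maps naturally onto a priority queue: each incoming study gets a model-predicted criticality score, and the worklist always surfaces the most urgent case first. A small sketch with made-up scores and study names:

```python
# Worklist triage sketch: pop the most critical study first (illustrative).
import heapq

worklist = []  # min-heap of (-criticality, study_id)

def add_study(study_id: str, criticality: float) -> None:
    """criticality in [0, 1] from an AI model; higher = more urgent."""
    heapq.heappush(worklist, (-criticality, study_id))

def next_study() -> str:
    neg_score, study_id = heapq.heappop(worklist)
    return f"{study_id} (score {-neg_score:.2f})"

add_study("CT head 1042", 0.97)   # suspected intracranial hemorrhage
add_study("CXR 2210", 0.12)
add_study("CT chest 3301", 0.64)  # possible malignant lung nodule
print(next_study())               # -> CT head 1042 (score 0.97)
```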
However, the potential clinical applications have not been validated yet, according to Langlotz:
“We’re cautious about automated detection of abnormalities like lung nodules and colon polyps. Even with high sensitivity, these systems can distract radiologists with numerous false positives. And radiology images are significantly more complex than photos from the web or even other medical images. Few deep learning results of clinical relevance have been published or peer-reviewed yet.”
Researchers say the goal is to improve patient care and workflow, not replace doctors with intelligent computers.
“Reading about these advances in the news, and seeing demonstrations at meetings, some radiologists have become concerned that their jobs are at risk,” said Langlotz. “I disagree. Instead, radiologists will benefit from even more sophisticated electronic tools that focus on assistance with repetitive tasks, rare conditions, or meticulous exhaustive search — things that most humans aren’t very good at anyway.”
Napel concluded:
“At the end of the day, what matters to physicians is whether or not they can trust the information a diagnostic device, whether it be based in AI or something else, gives them. It doesn’t matter whether the opinion comes from a human or a machine. … Some day we may believe in the accuracy of these deep learning algorithms, when given the right kind of data, to create useful information for patient management. We’re just not there yet.”
This is a reposting of my Scope blog story, courtesy of Stanford School of Medicine.