For those who want specifics by clinical specialty, I published interview summaries. Note that they are de-identified for privacy and ease of reading, and each was approved for distribution by the interviewee.
The clinician interview guide that I used across 18 interviews
“My current thought is that all new clinicians should be at least somewhat aware of the technology at a bare minimum — knowing very vaguely of how it works, which use cases are better vs. worse, how human clinical judgement will be impacted, and how clinical specialties might look in the future. At the moment, I think all of this information could be included in about four short lectures. But in the future, there may need to be a significant curriculum redesign.”
“I think it is a bad idea for young clinicians to use ML. There are a lot of subtleties that exist, and if you use ML, then you don’t get the knowledge or art form of medicine.”
“I worry that ML will be used by people who don’t have the training to confirm the model’s output.”
“I also think about the ethical questions around how these ML tools get deployed in an equitable way that is usable for all patients.”
“You can win a Kaggle competition on performance, but it doesn’t mean much when I am thinking about using it with patients’ lives.”
“I am excited by the amount of information that I will get access to.”
“I have been working in the field for so long, so it is hard for me to have the time or capacity to understand how things like ML work.”
“…there is a palpable sense that we are closer than we have ever been to computers being able to reason over healthcare data. That is something that is unbelievably philosophically amazing!”
“It is tough for me to say. I don’t think I would be inclined to use it [Machine Learning], since I don’t have any personal experience with it. I can’t compare it to anything.”
“In a world with full-blown ML, training clinicians would be totally different. I would have to be both a data scientist and a clinician. Our jobs would be all about communication with patients and communicating with models — being data science and medical science translators.”
“I am concerned that ML will get rid of the human connection.”
“I know how to use the outputs of X-ray machines, CT scanners, and MRI machines to help my patients. And as I think of it, I only have a very rudimentary understanding of how those things work. So maybe for an ML tool, I don’t need to know as much.”
“It has to have a very clear value proposition. What is the ROI going to be? Will there be a meaningful return? These companies tell me how we will practice better, but I also need to know how we will save or make money. Sadly, the system doesn’t incentivize us to do better, it incentivizes us to work faster.”
“Much of my time, up to eight hours per week, is spent on detailed interpretation and labeling of images. If I had a reliable ML tool, I could better focus on acute issues of my patients or treat more people.”
“Sometimes the computers actually do know more than we do, so it’s not the worst thing to have.”
“I think if systemwide diagnostic accuracy can increase, then that will be more important to humankind than what other changes happen to the profession.”