What an Internal Medicine Physician thinks about ML in healthcare

Original article was published by Harry Goldberg on Artificial Intelligence on Medium

“It is tough for me to say. I don’t think I would be inclined to use it, since I don’t have any personal experience with it. I can’t compare it to anything.”

Job Background

I am an Internal Medicine Physician — about 60% of my time is in the hospital and the rest is in the outpatient office. We see all kinds of patients. In the office, it is mostly preventative care. In the hospital, it is mostly older patients. I finished my residency two years ago and plan to work about 20 more years.

Familiarity with ML in healthcare

To start off, what have you heard about artificial intelligence and/or machine learning in healthcare?

Not a lot, and I wouldn’t know what it looks like.

Past and future use

Have you used any ML tools? Would you?

It is tough for me to say. I don’t think I would be inclined to use it, since I don’t have any personal experience with it. I can’t compare it to anything.

Excitement and concerns

What about ML in healthcare is exciting or concerning for you?

A major concern is that medicine would become more algorithmic. Algorithms are useful in something like population health, but most decisions in medicine are not determined by an algorithm. That said, I do see a benefit to some ML tools in situations where there is a lack of access to care. They could reduce wait times or get things started sooner.

Ethics and privacy

Where do ethics play into this? What could go wrong, or what could be done well?

In triage tools, I don’t see a big risk. The bigger ethical issues show up in diagnostics and clinical decision support.

How does privacy fit into all of this?

I think companies naturally use patient data to improve their products. We live in a time with little privacy, so our data are always available to someone else. With algorithms, I assume they need more data over time to keep improving their results.

External validation needs

For you to be willing to use an ML tool, what external validation would you need to see?

First, it would need to show some benefit over the current process. Obviously, there would need to be some improvement in how quickly things get done or how effectively things get followed up on. That would make me listen. After that, I would need to see that it is safe. Then, I would need to see how it works in my daily life. I don’t think it would replace my judgment, but it could be a helpful tool.

Clinical education

How would clinical education be impacted?

To introduce something new like ML, there would need to be a lot of teaching and ongoing support if you wanted a clinician to use it daily.

Desired use cases

Where are there opportunities to assist clinicians with ML? Imagine this: a world-class technology company has developed an ML tool that suggests possible diagnoses or triages a patient population. What is the best thing for them to build now, and why?

I think we need an ML triage tool for the ER. When patients show up, a nurse checks them in and figures out how quickly each patient needs to be seen. These nurses have a lot of experience working in the ER, but not as much formal medical training. Some type of ML tool could help them triage better, especially in situations when they are unsure. It would also be helpful to have clinical decision support that could automatically place basic orders, so that once the physician arrives a step is removed and the patient can be cared for more quickly.


When an ML tool gets implemented, how should that be done? Who should have access first; who should not?

I don’t really know. When new technology is available to us, the IT department communicates via email, but that isn’t effective. So they come to our lunch room two or three times a week, sit at a table to catch everyone, and promote the new tools.