Healthcare Ethics In AI: Can Software Make Ethical Decisions?

As data analytics and other digital innovations become more widely adopted in healthcare, artificial intelligence (AI) will move from an administrative role to a clinical decision-making support role. Hospitals already use AI-based tools to develop custom care plans, check in patients for appointments and answer basic questions such as “How do I pay my bill?”

AI is gaining traction as an “intelligent assistant” for physicians and clinicians. AI helps radiologists analyze images faster and organize them better. It pores through volumes of electronic medical record (EMR) data and symptoms to diagnose disease. And it determines which neighborhoods have a higher risk of diabetes and heart disease so health systems can begin interventions.

For these tools to provide the greatest benefit to public health and medicine, though, they must abide by basic principles of medical ethics. Can machines behave ethically, either with or without human oversight? It’s a question without an easy answer.

To facilitate discussion, let’s explore a few topics related to AI-based clinical tools and the steps developers can take to ensure these tools operate in line with medical ethical principles.

AI Will See You Now

Studies have compared the performance of humans and machines in diagnosing disease. Tools that use machine learning or deep learning perform about as well as physicians in head-to-head comparisons. That’s quite reassuring, especially considering that AI tools do not suffer from human vagaries such as fatigue and subjectivity. Does that mean software will replace physicians? Except for rote administrative tasks, I don’t think so.

Doctors evaluate data and use deductive reasoning to make diagnosis and treatment decisions. They use years of training to estimate the probability of disease based on evidence: symptoms, patient history, examination findings and lab results. Physicians also have two essential qualities that machines do not: empathy, which is necessary for developing trusting patient-physician relationships, and beneficence, a medical-ethical principle.

Developers can’t build an algorithm with empathy, beneficence, intuition and the art of listening. For these reasons, doctors will remain at the side of AI-based tools for the foreseeable future.

Shining Light On The Black Box

Some AI-based software uses “black box” models. In black box AI, users don’t know how the program produces its results. The path from data to insight is so complex that even the program’s designers may not fully understand how it works.

Black box AI in healthcare conflicts with the ethical principles of autonomy and justice. Without understanding, there can be no equity. For the sake of honesty and transparency, not to mention liability, physicians must be able to independently review the clinical basis for AI’s decisions.

To break through the black box, developers must design AI-based products with transparency and ethical principles in mind from concept to completion. To improve data transparency, the Clinical Decision Support Coalition offers the following developer guidelines.

• If a recommendation is based on inputs such as cost or medical record data, explain that basis.

• Explain the quality of the machine learning algorithm and keep that information up to date.

• Identify and explain the data sources used for machine learning.

• Explain the association between a specific patient’s data and the sea of data the algorithm uses to “learn.”

• Allow machine learning-based software to state the confidence level it has in reaching a conclusion.
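To make the last few guidelines concrete, here is a minimal sketch of how a decision-support tool might package a recommendation together with its transparency metadata. All names here (the `Recommendation` class, its fields, and the sample values) are illustrative assumptions, not part of any coalition specification.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A clinical suggestion bundled with the transparency metadata the
    guidelines call for: a stated confidence level, the data sources the
    algorithm learned from, and the algorithm version that produced it.
    (Field names are illustrative.)"""
    conclusion: str
    confidence: float        # model's stated confidence, 0.0 to 1.0
    data_sources: list       # datasets used for machine learning
    model_version: str       # ties the output to a specific algorithm release

def make_recommendation(conclusion, confidence, data_sources, model_version):
    # Refuse to emit a recommendation without a valid stated confidence level.
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be between 0 and 1")
    return Recommendation(conclusion, confidence, data_sources, model_version)

rec = make_recommendation(
    conclusion="Elevated risk of type 2 diabetes",
    confidence=0.82,
    data_sources=["EMR cohort (hypothetical)"],
    model_version="risk-model 3.1",
)
print(f"{rec.conclusion} (confidence {rec.confidence:.0%}, model {rec.model_version})")
```

The point of the design is that the confidence level and data provenance travel with the conclusion itself, so a physician reviewing the output can see not just what the software recommends but how strongly, and on what evidence.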

In defense of the black box, physicians can’t entirely explain how they arrive at their own conclusions. They take in volumes of information from medical education, residencies and years of practice. They store all that information in the black box called their mind, combine it with what they observe and know about the patient, and reach a conclusion. They can explain why they think the patient has a certain disease, but can they explain how?

Transparency In Medical Software Used At Home

The increased adoption of telemedicine has led to a corresponding rise in the use of remote patient monitoring devices that record patients’ vital signs at home. These devices range from blood pressure cuffs to insulin delivery devices. Without continuous physician oversight, how do we know these devices operate in an ethical manner?

To help ensure these devices perform responsibly, the FDA, Health Canada and the U.K.’s Medicines and Healthcare products Regulatory Agency jointly developed 10 guiding principles for medical device software that uses AI and machine learning (AI/ML). These guidelines apply to all regulated devices, but they’re especially important in devices used by patients.

From an ethics perspective, the guidelines encourage patient involvement early in the development cycle. Software developers must seek input from all stakeholders—patients, caregivers, physicians and clinicians, among others—to ensure the product will be easy to operate and understand.

The Bottom Line

AI has the potential to improve the delivery of care, but it must not violate medical ethical principles. Can AI operate ethically without the oversight of an ethical human? Yes, I think it can. How? By keeping the patient at the center: putting in place the appropriate device development framework that ensures the involvement of the right stakeholders, at the right times, to answer the right ethical questions and to review ethical performance throughout the product lifecycle.
