Healthcare is a people business, delivered by caring health professionals to their patients. It is often delivered during periods of compromised health, sickness or medical uncertainty. In this time of COVID-19, it is arguably the most important and respected service of any sector.
On the surface, artificial intelligence (AI) may seem counterintuitive as a form of healthcare – it is, after all, also known as machine intelligence. As that name suggests, AI has software, algorithms and machine learning at its heart. Yet it is fast becoming one of the most exciting and effective developments in healthcare services and delivery.
“A comprehensive governance framework for any AI strategy in a healthcare setting needs to start from the position that healthcare is centred on the consumer.”
AI can help deliver some health services more quickly, more efficiently, at lower cost and with greater accuracy. But, like any new technology or therapeutic intervention introduced in a healthcare setting, AI must be properly regulated.
Those who are accountable for managing risk and regulatory compliance within organisations should have assurance that the implementation and use of any AI tool will be safe and cost-effective, and will produce better outcomes and an improved experience for all stakeholders.
Legal and regulatory frameworks can have difficulty keeping up with the rapid technological advancements and uptake of AI in the healthcare sector. The regulatory requirements for medical devices arise under the Therapeutic Goods Act 1989 and its associated regulations, which provide guidance for regulating AI. Software as a medical device (SaMD) is specifically included under this regime but is subject to ongoing consultation, with refinements expected in the next 12 to 18 months.
Organisations intending to implement AI tools for health-related purposes must adopt a governance and risk management framework that goes beyond those minimum legal and regulatory requirements.
A comprehensive governance framework for any AI strategy in a healthcare setting needs to start from the position that healthcare is centred on the consumer. Introducing market-leading technology can create an edge in a crowded industry; however, it is important to keep in mind the end user – not only the health team of clinicians but also the patient, aged care resident, person with a disability or other consumer.
A governance and risk management framework for AI may include the following elements:
- AI partner. Engage a trusted partner with a track record in health and the ability to successfully engage with clinicians, and undertake appropriate due diligence before embarking on the project.
- Assurance. The governing body of the organisation is ultimately accountable and should reach an appropriate level of assurance that the AI tool is safe, compliant and cost-effective, and achieves better outcomes with an improved experience, within a framework that appropriately manages risk. The criteria and weighting of these matters will differ for each organisation. Rigorous validation and testing are essential before introduction, along with clinical trials, guarding against a rush to introduce an exciting new technology. Ongoing monitoring after introduction is critical, with regular reporting to the governing body against the assessment criteria.
- Data. Health and personal information will be used to train the AI tool and will then be supplied by patients or consumers once it becomes operational. All organisations should be aware of their obligations under the Privacy Act and state-based privacy legislation, including in respect of mandatory data breach reporting and data mining. Particularly where data is not de-identified, organisations should prepare a privacy impact assessment to assess privacy risks and ensure compliance with privacy laws, together with a data breach management plan and/or cybersecurity risk management plan (as applicable).
- Service redesign and patient readiness. AI must integrate with an existing health setting, which may necessitate service redesign and ensuring that the people using the technology and delivering health services are ready. Transition risk should be managed when moving from a legacy system to a new one, with heightened risk while dual systems operate during the transitional period.
- Managing cultural shifts. The direct and ongoing engagement of clinicians is critical throughout each stage of the project, from planning to ongoing operation. Education and training are important, including to reassure stakeholders that the AI is an enhancement of, or assistance to, clinical decisions and care, rather than a replacement. This also supports positive and widespread adoption of the AI, and helps manage patient and clinician concerns about AI as an alternative to traditional tools.
- Financial risk. To assess the cost-benefit of AI, it is important to assess compliance with relevant health insurer items early, to clarify the eligibility and cost of claims. AI strategies carry an increased risk of errors replicated at scale, and latent risks from delayed error identification. Adequate insurance that will respond to these risks, and indemnities from entities with appropriate resources to meet them, will be critical to managing financial risks for the organisation and the clinicians involved.
The COVID-19 pandemic has accelerated changes in models of healthcare by many years, with increased consumer expectations centred on outcomes, experience and the delivery of care directly to the consumer where possible.
Considering these matters at the very start of any AI journey, and asking whether AI in each instance will achieve a better outcome and provide a better experience, will help ensure the end product meets expectations and is adopted by consumers, within a governance framework that appropriately balances and manages risk.
Shane Evans is Partner and Health Industry Lead at MinterEllison.