By Mario Romao, Global Director of Health Policy & Prashant Shah, Global Head of Artificial Intelligence for Health and Life Sciences
Much has been written about the present and future role of Artificial Intelligence (AI) in healthcare. Real-world examples show that AI can help physicians and researchers prevent disease, speed recovery and save lives; accelerate genomics processing; and make medical image analysis faster and more accurate for personalized treatment. It can also be used to detect and correct waste, fraud and abuse in healthcare spending.
However, not as much has been laid out regarding the steps required for nations to embark on a journey toward AI maturity in healthcare. The recent report Reimagining Global Health through Artificial Intelligence: The Roadmap to AI Maturity from the Broadband Commission’s Working Group on Data, Digital, and AI in Health provides robust guidance to governments on how to foster an enabling environment leading to systematic integration of AI-enabled tools into the way healthcare is delivered.
The report stresses the relevance of AI for low- and middle-income countries in addressing longstanding, systemic health issues (e.g., Sub-Saharan Africa accounts for 12% of the global population but only 3% of the world’s health workers and 1% of the world’s total health expenditure), but its contents are equally pertinent to high-income countries.
To develop this report, more than 80 AI and global health experts were interviewed, over 200 secondary reports reviewed and 100+ AI solutions assessed. The report sets forth five use cases showing how AI is applied to address global public health priorities, and proposes six areas for AI maturity in health, each with calls to action.
We recommend reading the report and its executive summary; below is a summary of the six recommended areas of action for AI maturity, along with additional considerations.
- People & workforce: updating a country’s workforce skills and educational programs;
- Data & technology: planning for quality and privacy-preserving data, robust computing infrastructure, fair and transparent algorithms and AI models, explainability, and standards and interoperability;
- Governance & regulatory: ensuring the ethical management of health data and of AI in health and care delivery;
- Design & processes: using human-centred design to integrate AI tools into healthcare processes;
- Partnerships & stakeholders: focusing on collaborations and agreements to aggregate and use health data for better delivery of health and care, as well as on government engagement and stakeholder involvement;
- Business model: addressing the sustainability of funding and incentive structures for innovators.
These areas are interdependent, and as the report rightly points out when referring to Data & Technology: “Making data accessible, publicly or through agreements, is a strong AI enabler. Innovative ways to do so while ensuring privacy include data anonymization, aggregation, de-identification, virtual cohorts, differential privacy, pseudonymization, federated learning, sanitization, encryption, and privacy-preserving machine learning”.
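To make one item on that list concrete: differential privacy releases aggregate statistics with calibrated noise so that the presence or absence of any single patient cannot be inferred from the published figure. The sketch below is a toy illustration with made-up numbers and parameters, not a production mechanism:

```python
import random

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1, so Laplace(0, 1/epsilon) noise
    suffices. The noise is sampled as the difference of two exponential
    variates, which follows exactly that Laplace distribution.
    """
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# e.g., publishing how many patients in a cohort carry a given diagnosis:
noisy_count = dp_count(412)
```

A smaller epsilon adds more noise and gives stronger privacy; choosing it is a policy decision as much as a technical one.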
We believe that privacy and public health should not be a zero-sum game. We have been working on federated and privacy-preserving machine learning approaches that enable organisations to collaborate on machine learning projects without sharing sensitive data such as patient records.
Recently, Intel announced a collaboration with the Perelman School of Medicine at the University of Pennsylvania (Penn Medicine) to co-develop technology enabling a federation of 29 international healthcare and research institutions, led by Penn Medicine, to train AI models that identify brain tumors using federated learning.
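The core mechanics of federated learning can be seen in a minimal federated-averaging (FedAvg) sketch: each site trains on its own private data, and only model parameters, never patient records, leave the institution. The hospitals, data and single-weight linear model below are hypothetical illustrations; real deployments train neural networks and add further security layers.

```python
def local_update(weight, data, lr=0.1):
    """One gradient-descent step on a site's private data
    for a toy linear model y = weight * x (illustrative only)."""
    grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    return weight - lr * grad

def federated_average(site_weights, site_sizes):
    """Aggregate local models, weighting each site by its sample count."""
    total = sum(site_sizes)
    return sum(w * n for w, n in zip(site_weights, site_sizes)) / total

# Two hospitals hold disjoint private datasets; only weights are shared.
hospital_a = [(1.0, 2.0), (2.0, 4.0)]   # stays at site A
hospital_b = [(3.0, 6.0)]               # stays at site B

global_w = 0.0
for _ in range(50):
    w_a = local_update(global_w, hospital_a)
    w_b = local_update(global_w, hospital_b)
    global_w = federated_average([w_a, w_b],
                                 [len(hospital_a), len(hospital_b)])

print(round(global_w, 2))  # converges to 2.0 without pooling any data
```

The aggregator sees only the trained weights, so the sensitive records themselves never cross institutional boundaries.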
Countries around the globe are examining how to strike the best balance between protecting individuals’ rights and promoting the availability of health data for research (including for AI). Some jurisdictions (e.g., Finland) consider research in the public interest a valid legal basis for processing health data when robust safeguards are in place. In these cases, consent to process health data is not required, on the condition that the data is protected with a mix of de-identification (pseudonymisation, anonymisation, aggregation), processing controls (e.g., data use agreements, reference methodologies, secure environments) and other safeguard procedures such as internal ethical review boards.
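One of the de-identification techniques mentioned, pseudonymisation, can be sketched as replacing direct identifiers with a keyed hash, so that records can still be linked for research without revealing who the patient is. The key, record fields and identifier format below are purely illustrative; real deployments pair this with the processing controls the text describes (data use agreements, secure environments, ethical review).

```python
import hashlib
import hmac

# Hypothetical key, held by a trusted custodian and stored apart from the data.
SECRET_KEY = b"held-by-a-trusted-custodian"

def pseudonymise(patient_id: str) -> str:
    """Derive a stable pseudonym; without the key it cannot be reversed."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "FI-19840512-123X", "diagnosis": "C71.9"}
safe_record = {"pseudonym": pseudonymise(record["patient_id"]),
               "diagnosis": record["diagnosis"]}
# The same patient always maps to the same pseudonym, enabling record linkage
# across datasets while the direct identifier stays out of the research copy.
```

Because the mapping depends on the secret key, pseudonymised data remains personal data under regimes such as the GDPR, which is why the surrounding legal safeguards still apply.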
Intel has long promoted the innovative and ethical use of data for the transformative, positive impact it can make on the lives of individuals. These types of regulatory approaches to the processing of health data are a step in the right direction, facilitating better use of health data while respecting the privacy of individuals. Recent technical solutions such as Privacy-Enhancing Technologies (e.g., federated learning, trusted execution environments, homomorphic encryption) should also be fostered to ensure a high level of protection.
As governments develop and implement horizontal AI strategies, it will be important to consider specific plans for AI in healthcare. The Broadband Commission’s report is a good source of inspiration for governments and stakeholders alike to achieve precisely that.