Policy@Intel
A place to exchange ideas and perspectives, promoting a thriving innovation economy through public policy

When Patients Benefit from AI, Society Benefits: Report Highlights the Socioeconomic Benefits of AI in Healthcare

By Mario Romao, Global Director of Health Policy for Intel

As we enter a new year with a continued focus on advancements in healthcare, AI has the potential to make a positive impact at every stage of the patient journey. A recent report from MedTech Europe and Deloitte outlines eight distinct applications for AI in healthcare. From using wearable devices to detect and prevent health issues to using virtual health assistants or robots to assist general practitioners and surgeons, AI can play a vital role in creating a better, healthier future for everyone.

How did they come to this conclusion about AI? Using publicly available information, including peer-reviewed articles and employee, salary and population data, researchers established a baseline of existing AI applications and the data behind them. From there, they identified real-world use cases and extrapolated the potential impact on patients today.

One of the report’s key findings is that AI applications have the potential to save 400,000 lives a year in Europe through a combination of preventive measures, faster analysis of test results and more robust monitoring. Patients aren’t the only ones who stand to benefit, however: providers and practitioners also gain from the cost savings and efficiencies AI creates. The report estimates that AI could save €200 billion and 1.8 billion work hours a year in Europe alone.

Imagine the increase in savings when these eight AI applications are scaled up and implemented around the world. This could be a major game-changer in healthcare, if approached correctly. Today, widespread adoption of AI in healthcare is hampered by a few key barriers, including the fragmentation of health data, the diversity of regulatory frameworks and the perceived high cost of implementation. None of these challenges is insurmountable on its own, but together they call for a nuanced approach to policy.
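To put that thought into rough numbers, here is a purely illustrative back-of-envelope sketch that scales the report’s European estimates linearly with population. The report itself makes no global projection, and the population figures below are rounded assumptions.

```python
# Illustrative back-of-envelope only: scales the report's European estimates
# linearly with population. The report makes no such global projection, and
# real-world savings would depend on how and where AI is actually adopted.
EU_LIVES_SAVED_PER_YEAR = 400_000   # report estimate for Europe
EU_COST_SAVINGS_EUR = 200e9         # report estimate for Europe
EU_WORK_HOURS_SAVED = 1.8e9         # report estimate for Europe

EUROPE_POPULATION = 750e6           # rounded assumption
WORLD_POPULATION = 8.0e9            # rounded assumption

scale = WORLD_POPULATION / EUROPE_POPULATION  # roughly 10.7x

print(f"Hypothetical global lives saved per year: {EU_LIVES_SAVED_PER_YEAR * scale:,.0f}")
print(f"Hypothetical global savings (EUR):        {EU_COST_SAVINGS_EUR * scale:,.0f}")
print(f"Hypothetical global work hours saved:     {EU_WORK_HOURS_SAVED * scale:,.0f}")
```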

Moving forward, we think the following policy areas will prove most important.

  1. Strengthening Trust in Health AI


As with any new or emerging technology, AI must gain the trust of consumers. Given the high-stakes and sensitive nature of work in the healthcare industry, AI solutions must meet additional expectations by being privacy-preserving, transparent, nondiscriminatory, ethical and secure. To make that possible in a complex global regulatory environment, we need to:

  • Ensure harmonized interpretation and enforcement of existing privacy principles and laws that apply to machine learning and AI-based data processing in healthcare.

  • Support risk-based accountability approaches aimed at privacy, security and safety risk minimization through technical and organizational measures.

  • Promote regulatory sandboxes and pilot programs to help organizations designing AI solutions remain innovative while assuring protections for patients and individuals.

  • Foster voluntary self-assessment mechanisms, based on broadly agreed criteria, to evaluate the effectiveness of AI solutions and compliance, as well as provide meaningful information on AI outcomes and accessibility for patients.



  2. Maximizing Access to and Utility of Data


Today, health data is still mostly fragmented across countries, organizations and platforms. Healthcare AI solutions would benefit from legislative and regulatory measures that improve data access, interoperability, quality and diversity of datasets. In these measures, we must also:

  • Support the formation of quality health and life sciences data sets and ensure these are accessible, interoperable and usable.

  • Foster multiple legal grounds for data processing, such as public interest, to allow for the secondary use of AI-driven health data.

  • Participate in international standardization activities around data formats and access, security technologies, algorithm design, and protection (a small data-format sketch follows this list).
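As a small, hypothetical illustration of why shared data formats matter for usable datasets, the sketch below maps two differently structured hospital records into one common schema. The field names, units and values are invented for the example and are not drawn from any real standard or from the report.

```python
# Hypothetical example: two sites store the same information under different
# field names and units. A shared schema makes the combined dataset usable
# for analysis or AI training. All names and values here are invented.
def to_common_schema(record: dict, mapping: dict, weight_in_lbs: bool = False) -> dict:
    common = {target: record[source] for target, source in mapping.items()}
    if weight_in_lbs:
        common["weight_kg"] = round(common["weight_kg"] * 0.4536, 1)
    return common

site_a_record = {"pid": "A-001", "age_years": 67, "weight": 82.0}
site_b_record = {"patient_id": "B-104", "age": 59, "weight_lbs": 170.0}

site_a_mapping = {"patient_id": "pid", "age": "age_years", "weight_kg": "weight"}
site_b_mapping = {"patient_id": "patient_id", "age": "age", "weight_kg": "weight_lbs"}

dataset = [
    to_common_schema(site_a_record, site_a_mapping),
    to_common_schema(site_b_record, site_b_mapping, weight_in_lbs=True),
]
print(dataset)  # both records now share one schema and one unit system
```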



  3. Driving Investment in Research and Adoption of Trustworthy Healthcare AI


In-depth research — and funding for that research — is essential to developing trustworthy AI. Given the complexity of this field and its socioeconomic impacts, that funding is likely to come from a variety of sources, including governments, research institutions and private companies. We must create a culture of collaboration amongst policymakers, industry leaders, healthcare providers and patients. Within this collaborative ecosystem, we must:

  • Invest in R&D and the standardization of AI technologies, such as privacy-preserving machine learning (PPML), trusted execution environments, federated learning, homomorphic encryption and differential privacy (a minimal federated learning sketch follows this list).

  • Create additional financial incentives to adopt AI solutions that are respectful of privacy and ethics and improve security and safety.

  • Allocate financial resources to enable digitization of health information, automation of medical devices and state-of-the-art computing infrastructure that enhances the quality of AI-driven healthcare services.
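To make the first bullet above a little more concrete, here is a minimal sketch of federated averaging, one of the privacy-preserving machine learning techniques it names. The hospital sites, linear model and random data are hypothetical stand-ins, not anything from the report; real deployments would layer on the other safeguards listed, such as trusted execution environments and differential privacy.

```python
# Minimal federated averaging sketch. Only model weights leave each site;
# the (synthetic) patient data stays local. Everything here is illustrative.
import numpy as np

def local_update(weights, features, labels, lr=0.1):
    """One gradient step on a site's private data (linear model, squared error)."""
    preds = features @ weights
    grad = features.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_average(site_weights):
    """The coordinator sees only weight vectors, never patient records."""
    return np.mean(site_weights, axis=0)

# Three hypothetical hospitals, each with its own synthetic dataset.
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]

global_weights = np.zeros(4)
for _ in range(20):
    local_weights = [local_update(global_weights, X, y) for X, y in sites]
    global_weights = federated_average(local_weights)

print("Jointly trained weights:", global_weights)
```

The value of the pattern is structural: sensitive records never move, only model parameters do, which is what makes incentives for privacy-respecting AI adoption realistic.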


What all of these policy areas highlight is the importance of working together as a society. No one company or regulatory body can create the perfect AI solution that can be implemented in every healthcare facility around the world. It will take a broad coalition of society’s stakeholders to ensure that every patient and every nation can realize the full benefits of health AI.

To learn more about what Intel is doing to foster trustworthy health AI, read about Intel’s work on health AI use cases and its collaboration with the University of Pennsylvania to identify brain tumors using AI.