By Claire Vishik, Intel Fellow and Riccardo Masucci, Global Director of Privacy
Trust in Digital Life (TDL) is an association that brings together leading industry partners and knowledge institutes to improve digital technologies and services for the benefit of citizens, businesses, and society. Intel has been a board member of TDL and has engaged over the past years in a number of activities related to cutting-edge technologies such as blockchain and artificial intelligence (AI).
In the context of AI, Intel is a key supporter of a series of roundtables that bring together representatives from academia, industry, and European institutions to generate insights on trustworthy AI. Following a successful launch event on 20 March, TDL organized a second roundtable on 18 June in Brussels at the Representation of the State of Hessen to the EU. The diverse expertise of the participants – including representatives from the European Commission and Parliament, Member States, large corporations, SMEs, civil society, and think tanks – sparked a lively debate on opportunities and challenges.
The discussion covered key components of trustworthy AI, such as addressing algorithmic bias, accessing data responsibly, strengthening security in machine learning, protecting individuals’ privacy, developing technology ethically, and implementing viable auditing. Attention was drawn to the different legal, regulatory, and research approaches taken across world regions and countries.
All attendees agreed on the importance of building trust in AI technology to ensure its broader adoption, leading to economic growth, competitiveness, and societal benefits. Building trust in AI will translate into a number of actions, such as investing in innovation and R&D, improving governance and testing, and better understanding the demand for AI technologies.
Privacy and security are crucial to developing truly innovative, trusted, and inclusive AI. Technology companies, including Intel, support accountability in AI by public and private organizations, which means putting in place appropriate risk-based measures and innovative technologies to assess and mitigate harms. A number of promising methodologies are being considered, such as privacy-preserving machine learning techniques, including homomorphic encryption, multiparty computation, and federated learning.
The series of quarterly TDL roundtables on AI will continue after the summer break. We look forward to further fruitful discussions with multiple stakeholders – policymakers, regulators, technologists, and researchers – who are exploring technology and policy solutions to realize the full potential of AI while protecting citizens and addressing societal concerns.