Policy@Intel
A place to exchange ideas and perspectives, promoting a thriving innovation economy through public policy

Intel Participates in a Conversation about Ethical AI at the Brookings Institution


By Chloe Autio, Policy Analyst, AI and Privacy Policy


Last Friday, Intel’s Heather Patterson, PhD, a Senior Research Scientist in Intel Labs, participated in a panel discussion hosted by the Center for Technology Innovation at the Brookings Institution. The conversation addressed the ethical implications raised by artificial intelligence (AI), focusing on the roles and responsibilities of industry and government in ensuring the responsible design, development, and deployment of AI. Central to the dialogue were recommendations from fellow panelists and Brookings fellows Darrell West and William Galston.

[Image: Intel's Heather Patterson joined fellow panelists at Brookings for a conversation on #AIEthics last Friday.]

In How to Address AI Ethical Dilemmas, West highlights six key steps that technology companies should take to ensure that the ethics of AI are “taken seriously”: 1) incorporate ethicists into development teams; 2) develop an “AI Ethics” code; 3) institute an AI review board to tackle decisions around ethical issues; 4) develop an audit trail to track AI decision-making; 5) implement AI ethics training programs; and 6) provide some means of remediation when AI systems cause harm. To this list, Patterson suggested three additional approaches to making AI more ethical, each of which Intel is pursuing:

  1. Get the right people in the room: In addition to ethicists, we need anthropologists, sociologists, cognitive scientists, philosophers, and user experience professionals to actively participate on development teams, to conduct social research, and to translate their insights into design principles. Culture and context matter, and team diversity increases the likelihood that AI applications will be human-centric and inclusive.

  2. Liberate data responsibly: It takes data to make algorithms better, and the more we know about the data we use to train machine learning and deep learning models, the more transparent our technology will be. Open access to data helps prevent information asymmetries, making technology more effective and accountable.

  3. Require accountability for ethical design and implementation: Many companies have released principles explaining what they will and won’t do with their technology, a commitment that Intel respects. Patterson also suggested that a company’s code of AI ethics should embody its values, such as respecting international principles of human rights, and that this code should be implemented directly in internal product development lifecycles. It is reasonable for customers, and consumers, to understand what guardrails companies put in place for new technologies, and why.


Intel was pleased to participate in this conversation and is actively pursuing these and similar strategic discussions about how to design and deploy AI responsibly, so that all people can enjoy the full benefits that new technologies bring.