Made up of a diverse group from academic, corporate and government backgrounds, the council will meet regularly in 2019 to discuss the issues AI poses. The panel is an extension of Google’s AI Principles, a broad set of goals the company put in place last year after criticism of its involvement in military contracts.
What is the Panel and What Are its Aims?
The Advanced Technology External Advisory Council (ATEAC) was announced by Google’s Senior Vice President, Kent Walker, on the company’s blog. According to the post, the panel is an extension of Google’s goal to use and create AI responsibly, a position laid out in its AI Principles in June. Among the issues the panel has been asked to tackle are facial recognition and fairness in machine learning. The council will serve over the course of 2019 and hold four meetings in that period, the first in April. Google states that it will encourage members of the council to share what they learn, and that it will publish a report summarizing the discussions.
What are Google’s AI Principles?
Revealed last year, Google’s AI Principles are said to be a direct reaction to its decision not to renew its contract with the Pentagon to provide AI for analysing drone footage. Google’s work in this area had proved controversial, prompting several resignations from Google staff. Defence work appears to be a growing issue in the tech space, with other companies such as Microsoft feeling the heat over their contracts with defence departments. Google’s AI Principles set in stone its mandate going forward, and while the company was much mocked for removing its “Don’t be evil” policy from its code of conduct (replacing it with “Do the right thing”), the AI Principles arguably go much further than a simple tagline. So, what are the principles? Google commits that its AI applications will: be socially beneficial; avoid creating or reinforcing unfair bias; be built and tested for safety; be accountable to people; incorporate privacy design principles; uphold high standards of scientific excellence; and be made available only for uses that accord with these principles. In addition, there are areas Google has promised it will not apply its AI to. These include surveillance that “contradicts international norms”, technologies that violate human rights, and weapons designed to harm people (although the company will continue to work with the military on recruitment, cyber security, and search and rescue).
Who is on the Panel?
ATEAC is made up of eight key stakeholders with years of experience in the AI space, drawn from corporate, academic and government backgrounds. Google makes it clear that panel members represent their own perspectives and do not speak for the institutions they are associated with.

Alessandro Acquisti – Professor of Information Technology and Public Policy at Heinz College, Carnegie Mellon University.

Bubacarr Bah – Senior Researcher of Mathematics with a specialization in Data Science at the African Institute for Mathematical Sciences South Africa, and Assistant Professor in the Department of Mathematical Sciences at Stellenbosch University.

De Kai – Professor of Computer Science and Engineering at the Hong Kong University of Science and Technology, and Distinguished Research Scholar at Berkeley’s International Computer Science Institute.

Dyan Gibbens – CEO of Trumbull, a startup focused on automation, data and environmental resilience in energy and defense.

Joanna Bryson – Associate Professor in the Department of Computer Science at the University of Bath, who has also consulted for LEGO on its child-oriented Mindstorms programming line.

Kay Coles James – President of The Heritage Foundation, focusing on free enterprise, limited government, individual freedom and national defense.

Luciano Floridi – Professor of Philosophy and Ethics of Information at the University of Oxford, Professorial Fellow of Exeter College, and Turing Fellow and Chair of the Data Ethics Group at the Alan Turing Institute.

William Joseph Burns – Former U.S. Deputy Secretary of State and President of the Carnegie Endowment for International Peace, the oldest international affairs think tank in the United States.

With the pace of development in the AI sector, Google’s principles certainly represent an important code of stated conduct. What remains to be seen, of course, is how well the company holds true to these lofty aims.