
Artificial intelligence can work wonders, but often it works in mysterious ways.
Machine learning is based on the principle that a software program can analyze a huge data set and fine-tune its algorithms to detect patterns and come up with solutions that humans may miss. That’s how Google DeepMind’s AlphaGo AI agent learned to play the ancient game of Go (and other games) well enough to beat expert players.
But if programmers and users can’t figure out how AI algorithms came up with their results, that black-box behavior can be a cause for concern. It may become impossible to judge whether AI agents have picked up unjustified biases or racial profiling from their data sets.
That’s why terms such as transparency, explainability and interpretability are playing an increasing role in the AI ethics debate.
The European Commission includes transparency and traceability among its requirements for AI systems, in line with the “right to explanation” laid out in the European Union’s data-protection rules. The French government has already committed to publishing the code that powers the algorithms it uses. In the United States, the Federal Trade Commission’s Office of Technology Research and Investigation has been charged with providing guidance on algorithmic transparency.
Transparency figures in Microsoft CEO Satya Nadella’s “10 Laws of AI” as well — and Erez Barak, senior director of product for Microsoft’s AI Division, addressed the issue head-on today at the Global Artificial Intelligence Conference in Seattle.
“We believe that transparency is a key,” he said. “How many features did we consider? Did we consider just these five? Or did we consider 5,000 and choose these five?”
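Barak’s “5 out of 5,000” question maps naturally onto feature selection in a machine-learning pipeline. As a minimal, hypothetical sketch (the synthetic data set, feature names, and use of scikit-learn here are assumptions for illustration, not Microsoft’s tooling), a transparent pipeline could report not only which features it kept but also the size of the pool it chose them from:

```python
# Illustrative sketch only: report which features were kept and how many
# were considered, in the spirit of Barak's "5 out of 5,000" question.
# The data set and feature names are synthetic assumptions, not from the talk.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# A synthetic task with 50 candidate features, only a handful of which are informative.
X, y = make_classification(n_samples=1_000, n_features=50, n_informative=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# Keep the five highest-scoring features -- the "these five" a pipeline might report.
selector = SelectKBest(score_func=f_classif, k=5)
selector.fit(X, y)

chosen = selector.get_support(indices=True)
print(f"Considered {X.shape[1]} features, kept {len(chosen)}:")
for i in chosen:
    print(f"  {feature_names[i]} (score={selector.scores_[i]:.1f})")
```

Surfacing that kind of summary alongside a model’s predictions is one small, concrete way a system can make its decision process easier to audit.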