How AI picks up our all-too-human biases

Princeton researcher Aylin Caliskan demonstrates how Google’s automatic translation program shows signs of gender bias. (Princeton University via YouTube / Aaron Nathans)

There’s fresh evidence that artificial intelligence software absorbs human biases about race and gender, and the cause may lie in the very structure of language itself.

Scientists came to that conclusion after creating a statistical system for scoring the positive and negative connotations associated with words in AI-analyzed texts.

A similar system, known as the Implicit Association Test, or IAT, has suggested that humans harbor biases about the comparative status of different races, as well as of men and women, even when they don’t explicitly acknowledge those biases.

Princeton University’s Aylin Caliskan and her colleagues adapted the IAT for a textual analysis tool they call the Word-Embedding Association Test, or WEAT. They describe the method, and its application, in research published today in the journal Science.
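For readers who want a feel for the math: WEAT measures how strongly each word in a target set (say, flower names versus insect names) associates with one attribute set versus another (say, pleasant versus unpleasant words), using cosine similarity between word vectors, and then summarizes the gap as an effect size. Here is a minimal sketch of that calculation in Python. The `vectors` lookup and the word lists are illustrative assumptions, not the paper’s exact data; in practice the vectors would come from pretrained embeddings such as GloVe.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vectors):
    """s(w, A, B): mean cosine similarity of word w to attribute set A,
    minus its mean similarity to attribute set B."""
    return (np.mean([cosine(vectors[w], vectors[a]) for a in A])
            - np.mean([cosine(vectors[w], vectors[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vectors):
    """Effect size in the style of Caliskan et al.: the difference in
    mean association between the two target sets X and Y, normalized
    by the standard deviation of association scores across X and Y."""
    s_X = [association(x, A, B, vectors) for x in X]
    s_Y = [association(y, A, B, vectors) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y)

# Illustrative usage with made-up word lists; `vectors` is assumed to
# be a dict mapping words to numpy arrays loaded from an embedding file.
# X = ["rose", "daisy", "tulip"]; Y = ["ant", "spider", "moth"]
# A = ["pleasant", "love", "peace"]; B = ["unpleasant", "hate", "ugly"]
# print(weat_effect_size(X, Y, A, B, vectors))
```

An effect size near zero means the embeddings treat the two target sets alike; a large positive value means the first set sits measurably closer to the first attribute set, which is how the researchers quantified the biases baked into the text the AI learned from.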

Get the full story on GeekWire.

By Alan Boyle

Mastermind of Cosmic Log, contributor to GeekWire and Universe Today, author of "The Case for Pluto: How a Little Planet Made a Big Difference," past president of the Council for the Advancement of Science Writing.
