How AI picks up our all-too-human biases

Princeton researcher Aylin Caliskan demonstrates how Google’s automatic translation program shows signs of gender bias. (Princeton University via YouTube / Aaron Nathans)

There’s fresh evidence that artificial intelligence software absorbs human biases about race and gender, and the effect may be rooted in the very structure of language.

Scientists came to that conclusion after creating a statistical system for scoring the positive and negative connotations associated with words in AI-analyzed texts.

A similar system, known as the Implicit Association Test, or IAT, has suggested that humans harbor biases about the comparative status of different races, and of men and women, even when they don’t explicitly acknowledge those biases.

Princeton University’s Aylin Caliskan and her colleagues adapted the IAT into a text-analysis tool they call the Word-Embedding Association Test, or WEAT. They describe the method, and its application, in research published today in the journal Science.
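The full scoring details are in the paper, but the gist of a WEAT-style score can be sketched in a few lines of code: measure how much closer a word’s embedding sits to one set of attribute words (say, “pleasant” terms) than to another (say, “unpleasant” terms), then compare that gap across two sets of target words. The sketch below is an illustration under stated assumptions, not the researchers’ actual code; the toy `embeddings` dictionary uses random vectors as a hypothetical stand-in for a pretrained model such as GloVe or word2vec.

```python
# Sketch of a WEAT-style association score: how strongly do target words
# associate with one attribute set versus another, measured by cosine
# similarity of word embeddings?
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, emb):
    """s(w, A, B): mean similarity of word w to attribute set A
    minus its mean similarity to attribute set B."""
    sim_a = np.mean([cosine(emb[w], emb[a]) for a in A])
    sim_b = np.mean([cosine(emb[w], emb[b]) for b in B])
    return sim_a - sim_b

def weat_effect_size(X, Y, A, B, emb):
    """Cohen's-d-style effect size: difference in mean association
    between target sets X and Y, scaled by the pooled deviation."""
    x_assoc = [association(x, A, B, emb) for x in X]
    y_assoc = [association(y, A, B, emb) for y in Y]
    pooled = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled

# Toy random vectors -- hypothetical stand-ins for real embeddings.
rng = np.random.default_rng(0)
words = ["flower", "insect", "pleasant", "unpleasant",
         "rose", "ant", "love", "hate"]
embeddings = {w: rng.normal(size=50) for w in words}

X = ["flower", "rose"]      # target set 1
Y = ["insect", "ant"]       # target set 2
A = ["pleasant", "love"]    # attribute set 1
B = ["unpleasant", "hate"]  # attribute set 2
print(weat_effect_size(X, Y, A, B, embeddings))
```

With real pretrained embeddings in place of the random vectors, a large positive effect size on word sets like these would indicate that the model associates flowers with pleasantness more readily than insects, echoing the kind of result the human IAT produces.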

Get the full story on GeekWire.

About Alan Boyle

Award-winning science writer, creator of Cosmic Log, author of "The Case for Pluto: How a Little Planet Made a Big Difference," president of the Council for the Advancement of Science Writing. Check out "About Alan Boyle" for more fun facts.