There’s fresh evidence that artificial intelligence software absorbs human biases about race and gender, and it may be due to the very structure of languages.
Scientists came to that conclusion after creating a statistical system for scoring the positive and negative connotations associated with words in AI-analyzed texts.
A similar test, known as the Implicit Association Test or IAT, has suggested that humans harbor biases about the comparative status of different races, and of men and women, even when they don't explicitly acknowledge those biases.
Princeton University’s Aylin Caliskan and her colleagues adapted the IAT for a textual analysis tool they call the Word-Embedding Association Test, or WEAT. They describe the method and its application in research published today in the journal Science.
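WEAT works on word embeddings, which represent each word as a vector of numbers so that related words sit near one another in that vector space; the test measures whether one set of target words (say, a group of names) sits measurably closer to one set of attribute words (say, pleasant terms) than to another. The sketch below illustrates that kind of computation in Python. It is a rough illustration using toy vectors, not the authors' code; the function names and the use of NumPy are assumptions made for the example.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: how closely two embedding vectors point the same way."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B):
    """Mean similarity of word vector w to attribute set A, minus its
    mean similarity to attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT-style effect size: the gap between how target sets X and Y
    associate with attributes A vs. B, scaled by the pooled standard deviation."""
    x_assoc = [assoc(x, A, B) for x in X]
    y_assoc = [assoc(y, A, B) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

# Toy demo with random 50-dimensional "embeddings" standing in for real word vectors.
rng = np.random.default_rng(0)
X, Y, A, B = (rng.normal(size=(8, 50)) for _ in range(4))
print(weat_effect_size(X, Y, A, B))  # near zero, since random vectors carry no bias
```

Run on embeddings trained from large text corpora rather than random numbers, a score like this lets researchers check whether the associations the IAT finds in people also show up in the statistics of language itself.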