Amazon’s facial technology had a harder time recognizing the gender of darker-skinned women and made more mistakes identifying gender overall than competing technologies from Microsoft and IBM, according to an MIT study published Thursday.
Amazon’s Rekognition software incorrectly identified women as men 19 percent of the time, the study found, and incorrectly identified darker-skinned women as men 31 percent of the time. Software from Microsoft, by comparison, misidentified darker-skinned women as men 1.5 percent of the time.
Matt Wood, general manager of artificial intelligence at Amazon Web Services, said the study’s test results are based on facial analysis, not facial recognition. Analysis, he said, can find faces in videos or images and assign generic attributes, such as whether a person is wearing glasses. Recognition, he said, matches an individual’s face to images in videos and photographs. The Rekognition technology includes both capabilities.
“It’s not possible to draw a conclusion on the accuracy of facial recognition for any use case – including law enforcement – based on results obtained using facial analysis,” Wood said in a statement.
Wood added that the study didn’t use the latest version of Rekognition. Amazon, using an up-to-date version of Rekognition with similar data, found no false positive matches, Wood said.
The MIT researchers didn’t immediately respond to a request for comment.
Amazon has provided Rekognition to law enforcement agencies, though civil liberties groups, members of Congress and Amazon’s own employees have raised concerns about privacy. Earlier this month, a group of shareholders also called on Amazon to stop selling the technology to government agencies.