The images were gathered from search engines, published in 2016 in a dataset called MS Celeb, and used to train facial recognition systems around the world, including by military researchers and Chinese firms such as SenseTime and Megvii, the Financial Times reported Thursday. The dataset, previously used in an AI project to recognize celebrities, had been linked to China's efforts to crack down on ethnic minorities in the country.
“The site was intended for academic purposes,” Microsoft told the Financial Times. “It was run by an employee that is no longer with Microsoft and has since been removed.”
Facial-recognition technology is commonly used for everyday tasks such as unlocking phones and tagging friends on social media, but privacy concerns persist. Advances in artificial intelligence and the proliferation of cameras have made it increasingly easy to watch and track what individuals are doing.
Law enforcement agencies frequently rely on the technology to help with investigations, but the software isn't without its flaws. Software used by the UK's Metropolitan Police was reported earlier this year to produce incorrect matches in 98 percent of cases.
Many of the people featured in the dataset were not asked for their consent to be included; their images were scraped from the internet under the Creative Commons license, according to the FT. The Creative Commons license allows academic reuse of photos, a permission granted by the image's copyright holder, not the photo's subject.
The Chinese government has used facial recognition software to track and control the 11 million Uighurs, a largely Muslim minority, in the country, The New York Times reported in April. Tapping an expansive network of surveillance cameras, the technology looked for Uighurs based on their appearance and kept tabs on their movements as the government put millions in detention camps, the Times reported.
In December, Microsoft called for facial-recognition technology to be independently tested to ensure accuracy. "Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues," Microsoft chief counsel Brad Smith wrote in a blog post.
Microsoft didn’t immediately respond to a request for comment.