ImageNet Roulette: Viral Phenomenon Exposes a Dangerous AI Flaw

Last week, the Twitterverse exploded with a slew of users’ photos and selfies carrying surprisingly bizarre labels. The labels seemed to describe the personality traits of the people pictured, and while some were harmless or even amusing, others were derogatory and outright wrong. In the aftermath, a major image database decided to remove over 600,000 photos.

Many users shared photos that labeled them with racist slurs or even branded them criminals. The pictures carried the #ImageNetRoulette hashtag and, as one might expect, they generated a massive controversy on the web. It was quickly discovered, however, that an even stranger project was hiding behind the tag.

Surprisingly, the viral photos were all part of ImageNet Roulette, an app and art/tech project devised by Trevor Paglen and Kate Crawford. Paglen, an American artist whose work focuses on data, and Crawford, a Microsoft researcher and co-founder of the AI Now Institute, launched the project to test a frightening hypothesis about artificial intelligence.

Despite its name, the app has nothing to do with roulette, yet it managed to shock most of those who submitted their photos. Many took offense at being described in inappropriate terms without fully understanding the purpose of the ImageNet Roulette app.

ImageNet Roulette Experiment

In essence, Crawford and Paglen launched the project to reveal the biases and flaws baked into the data that AI algorithms learn from, and to show the damage faulty data can do to artificial intelligence. The creators built the app on top of ImageNet, a pivotal image-recognition database.

ImageNet contains more than 14 million labeled photos and is credited with playing a pioneering role in the deep-learning revolution in AI. The app matches users’ submitted smartphone pictures against the images in the database’s “person” categories.

After a photo is submitted, the app runs it through face-detection software and compares it against more than 2,500 “person” categories. The resulting labels, or classifications, are then attached to the submitted photo. After its release, the creators noted that the app was generating over 100,000 labels per hour.
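For readers curious what such a labeling pipeline looks like in code, here is a minimal sketch of the general idea: load a pretrained image classifier, run the uploaded photo through it, and attach the top predicted labels. It is an assumption-laden stand-in, not ImageNet Roulette’s actual software: it uses torchvision’s off-the-shelf ResNet-50 and the standard 1,000 ImageNet object classes, whereas the real app used a model trained only on the “person” sub-categories; the file name selfie.jpg is hypothetical.

```python
# Minimal sketch of a "label an uploaded photo" pipeline.
# Assumptions: torchvision's pretrained ResNet-50 and the standard 1,000
# ImageNet classes as stand-ins; ImageNet Roulette's own person-only model
# is not publicly packaged, so this is illustrative only.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()  # resize, crop, normalize as the model expects


def label_photo(path: str, top_k: int = 3) -> list[tuple[str, float]]:
    """Return the top-k (label, confidence) pairs for an uploaded photo."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)          # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]      # class probabilities
    conf, idx = probs.topk(top_k)
    labels = weights.meta["categories"]             # human-readable class names
    return [(labels[int(i)], float(c)) for i, c in zip(idx, conf)]


if __name__ == "__main__":
    # Hypothetical example file name.
    for label, confidence in label_photo("selfie.jpg"):
        print(f"{label}: {confidence:.1%}")
```

The sketch illustrates why the training labels matter so much: whatever categories the model was trained on are the only vocabulary it has for describing a face, so a dataset full of derogatory person categories will produce derogatory descriptions.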

Many of the labels the algorithm returned were racist, cruel, misogynistic, or otherwise offensive, while others were merely goofy and silly. The AI’s assumptions covered everything from gender, age, and career to emotions and personality traits.

While the purpose of testing AI systems for stereotypical classifications is quite clear, many people are unaware that the topic has received a great deal of attention from the AI community and its experts in recent years.

Crawford and Paglen’s project was in development for two years and was conceived as a provocation aimed at machine-learning systems.

Crawford noted that while her own label, “mediatrix,” was humorous, the labels attached to photos of women of color posted on Twitter were far more startling, ranging from racist to misogynistic.

The Results

Trevor Paglen explained that the collaborative project was not intended to criticize the notion of artificial intelligence but rather to test the limitations of its current state.

Machine intelligence has a lengthy commercial, academic, and cultural history, and that history has always carried negative connotations. ImageNet Roulette thus serves as a vivid example of what can happen when an AI system goes wrong, and the result was just as Crawford and Paglen had predicted.

Indeed, the pair knew in advance that the labels returned to those who uploaded their photos would generate shock and outrage.

The project also aimed to highlight the problem of offensive stereotypes in today’s society with a clear message: its creators believe it is more important to confront the impact of negative stereotypes than to ignore them.

600,000 Photos Removed

After the system’s bias was demonstrated, over 600,000 photos were deleted from ImageNet’s “person” category. ImageNet Roulette’s creators described this as a clever course of action and a decent first step. The database’s researchers then said that more than a million photos would ultimately be removed. The developers of ImageNet reportedly stated that they are aware of flaws in their system and that their team has been working to address them over the past few months.

However, Crawford and Paglen believe that simply trimming biased data will not solve the larger issues surrounding facial recognition systems. They are calling for a thorough reassessment of the ethics of AI training and of how databases are built. The experiment, then, didn’t merely expose problems within AI systems; it also showed that machines can inherit biases from the humans who build them.

Interestingly, biases have recently been found in AI systems from IBM, Microsoft, and Amazon. The initiators of ImageNet Roulette argue that major tech companies should also be researching how bias affects AI development and its results.

App Shutdown

According to the creators, the app will no longer be available to the public as of late September, when it will be taken off the web. However, its software is on display at Milan’s Fondazione Prada Osservatorio as part of an art exhibition that runs until next year.

Even though the ImageNet Roulette app showed how negative biases can affect the behavior of AI software, many troubling issues remain. Overall, the experiment showed how easily AI can blur the lines between politics, ideology, history, and science. ImageNet Roulette’s founders point to further problems in AI categorization and facial recognition, especially given the rapid rollout of such systems in government, healthcare, and educational institutions.

Even with a portion of ImageNet’s photos removed, Crawford and Paglen believe that categorization itself remains deeply controversial. In their view, it is inherently political, and it is never clear who gets to determine the meaning, purpose, and key elements of image analysis. Technical fixes and the removal of photos and data, they argue, are just the beginning. A profound message, then, was hidden behind one of the most bizarre and surprising viral stories of the year.
