Explainable Artificial Intelligence
AI Harms

One thing to keep in mind about AI is that it’s a form of technology, and technology isn’t neutral. Technology is a mirror of society: it reflects the systemic bias and harms that exist in society. Technology is made by people and for people, and it carries all of the same problems humans have, like bias, hate, and harm. AI, like any technology, harms people.

→ On June 28th, 2015, computer engineer Jacky Alcine was uploading photos of himself and his friend, but the Google Photos app he was using kept auto-tagging him and his friend as ‘gorillas’. Jacky and his friend are both Black.

→ One day in 2013, Harvard University professor Latanya Sweeney was googling her own name and noticed that the search results were accompanied by ads that read “Latanya Sweeney, arrested?” and “Latanya Sweeney, bail bonds?” Professor Sweeney tried other African American-sounding names, and the same ads showed up. Professor Sweeney has never been arrested.

→ In 2018, researcher Joy Buolamwini and Dr. Timnit Gebru discovered that computer vision algorithms across major companies like Google, IBM Watson, and Amazon had a difficult time recognizing Black faces, across all genders. And yet these products were on the market, for any person or company to use. In fact, the software was up to 99% accurate for white, masculine-presenting faces, but misclassified darker-skinned, feminine-presenting faces as much as 35% of the time.

→ In 2017, transgender YouTubers had their transition videos copied, without their consent, so AI researchers could use them as training data to build an algorithm.

 
Liz O’Sullivan, VP at responsible AI company Arthur / activist
 
Caroline Sinders, critical designer/artist

“So it’s not only that technology misidentifies, but it is also that technology is used to further reinforce racist policy and practices in different areas.”

Janus Rose

Researchers have ‘built’ AI systems to ‘recognize’ gender from eye irises, and to study faces to determine political affiliation - but all of these algorithms are inaccurate. You can’t determine someone’s gender from their eyes - but believing that you can, or building a system to gather that kind of data, will lead to harmful and dangerous consequences for people from marginalized groups. As Liz O’Sullivan pointed out, “there is a part of society that gets disproportionately negatively affected when AI touches upon their lives.”

This website you’re reading was made in 2021, and we are still having these issues with AI. It’s not just that AI is built ‘incorrectly’ or ‘unethically’; AI exists in a society and a world of deep systemic injustice. As Janus Rose underlines, “Everything that led to the creation of these systems operates at a base level of our society - that is the root cause.”

 
Lukáš Likavčan, philosopher

These harms are manifold: these AI systems are not accurate and can never be accurate, but we also have to ask, what happens when this kind of technology is deployed in everyday life? What happens when these systems make assumptions about individuals in the general public? What happens when this kind of technology negatively affects people of color, transgender people, and women? It harms them, and it furthers systemic harm.

AI and algorithms “reproduce and codify conditions that are created by systems of oppression.”

Janus Rose

But who actually uses these systems? Consider the following example to grasp the scale of the problem.

Have you heard of emotion recognition systems?  

Emotion recognition systems utilize facial recognition to determine emotion. The Emotion Facial Action Coding System (EMFACS), developed by Paul Ekman and Wallace V. Friesen in the 1980s, is the backbone of most emotion recognition systems today. Its roots can be traced back to the 1960s, when Ekman and two colleagues hypothesized that there are six universal emotions – anger, disgust, fear, happiness, sadness and surprise – and that these emotions can be detected and are expressed the same way across all cultures.
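To make that premise concrete, here is a minimal, hypothetical sketch in Python - illustrative only, not any vendor’s actual code - of how an EMFACS-style classifier turns detected facial movements (“action units”) into one of exactly six labels:

```python
# A toy, hypothetical sketch of an EMFACS-style classifier - not any vendor's
# real system. Detected facial "action units" (AUs) are matched against a
# fixed rule table, and every face is forced into exactly one of six labels.

# Illustrative AU combinations for Ekman's six "universal" emotions.
# Real EMFACS tables are more detailed, but the premise is the same.
EMOTION_RULES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15, 16},
}

def classify_emotion(detected_aus: set[int]) -> str:
    """Return whichever of the six emotions best overlaps the detected AUs."""
    scores = {
        emotion: len(aus & detected_aus) / len(aus)
        for emotion, aus in EMOTION_RULES.items()
    }
    # There is no "none of the above" or "mixed feelings" option:
    # the system always answers with one of the six labels.
    return max(scores, key=scores.get)

# A nervous smile (cheek raiser + lip corner puller, with a trace of fear in
# the inner brow) still comes back as a single, confident label.
print(classify_emotion({1, 6, 12}))  # -> "happiness"
```

Notice the design choice baked into the classifier: it has no way to say “none of the above,” which is exactly the impossible premise questioned below.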

What is wrong with this scenario? Emotion recognition systems use computer vision, which is already inaccurate, and they are also built on top of an impossible premise - that human society ONLY has six emotions that are universally shared. But we have so many more emotions, and emotions are expressed differently across cultures.

More importantly, emotion recognition assumes that we, as members of society, express our emotions truthfully, every day. But we don’t: sometimes when we’re scared we smile to defuse a situation, sometimes we laugh to ease tension, and sometimes when we’re incredibly sad, we learn to keep our faces neutral.

Another powerful illustration is digital discrimination, as explained by Janus Rose: “When you create a system that recognizes people as ‘this is a man’ or ‘this is a woman’, what it does is erase all of the people who fall outside the binary, and it also erases people who don’t conform to these normative standards.”

 
Janus Rose, editor at Motherboard/Vice

“The harms happen to those who are already marginalized; it’s highly dispersed, this impact.”

Aviva de Groot

We opened this essay with some recent examples of AI-inflicted harms. But this kind of harm goes even further back, before AI; here are just a few examples of how technology has harmed people of color.

→ In the mid-1950s, Kodak invented the “Shirley” card, a photograph of a white woman alongside a set of colors, used by photographic labs to ensure that the colors and densities (or tones) of prints were printed correctly. This helped labs process negatives, but it also skewed photo processing to favor white skin.

→ Let’s go back even further, to the creation of phrenology in the 1800s - the racist pseudoscience that claimed the shape of one’s skull could determine one’s intelligence. Phrenology ‘documented’ that the features it associated with lower intelligence were non-white features.

AI systems created to determine gender, intelligence, or political affiliation are always erroneous and error-ridden, and they come from the same biased and harmful place as phrenology.

“AI as pharmakon - it’s a poison and a remedy at the same time.”

Lukáš Likavčan