Data violence

A form of violence that occurs when the choices embedded in algorithms are built on assumptions and prejudices about people, weaving those assumptions intimately into processes and results that reinforce biases and, worse, make them seem objective, because mathematical language is socially perceived as neutral.


The piece

Cathy O’Neil, author of “Weapons of Math Destruction”, sees a fundamental issue at work in algorithms: people tend to trust results that look scientific, like algorithmic risk scores, “and today it is almost impossible to appeal to these systems.” Algorithms offer people a convenient way to avoid difficult decision-making by deferring to “mathematical” results.

The power these systems have in maintaining structural violence is greater than we might think. An algorithm that advises an airport operator to search a person because their gender is unclear. A bot that becomes a fascist after just a few hours of feeding off Twitter. A recruiting tool, trained on historical data, that downgrades women’s resumes because the past hires it learned from were overwhelmingly male.
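That last mechanism can be sketched in a few lines. The following is a hypothetical illustration only, using synthetic data and a generic scikit-learn classifier (not any company’s actual system): a model fitted to prejudiced historical hiring decisions absorbs the prejudice and re-presents it as a neutral-looking score.

```python
# Hypothetical sketch (synthetic data, generic classifier): a model trained on
# biased historical hiring decisions learns to reproduce that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

qualification = rng.normal(0, 1, n)   # how qualified each candidate is
is_woman = rng.integers(0, 2, n)      # 1 if the candidate is a woman

# Past decisions were prejudiced: equally qualified women were hired less often.
past_hired = (qualification - 0.8 * is_woman + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([qualification, is_woman])
model = LogisticRegression().fit(X, past_hired)

# The learned weight on `is_woman` comes out negative: the historical prejudice
# is now encoded in a score that looks neutral and "mathematical".
print(dict(zip(["qualification", "is_woman"], model.coef_[0].round(2))))
```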

We rarely get to understand the processes that an algorithm goes through in order to label us. We only see the label. And the consequences of these tags determine how the world sees us and the world we’re bound to live in.

O’Neil, Cathy (2016). “Weapons of Math Destruction”. Crown.
Waldron, Lucas, et al. (2019). “When Transgender Travelers Walk Into Scanners, Invasive Searches Sometimes Wait on the Other Side”. ProPublica (online).
Vincent, James (2016). “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day”. The Verge (online).
Dastin, Jeffrey (2018). “Amazon scraps secret AI recruiting tool that showed bias against women”. Reuters (online).

Context

Computer algorithms now shape our world in profound and mostly invisible ways. They predict whether we’ll be valuable customers and whether we’re likely to repay a loan. They filter what we see on social media, sort through resumes, and evaluate job performance. They inform prison sentences and monitor our health. Most of these algorithms were created with good intentions: the goal is to replace subjective judgments with objective measurements so that tasks can be performed faster and more efficiently. But it doesn’t always work out that way.

Many companies that build and market these algorithms like to talk about how objective they are, claiming they remove human error and bias from complex decision-making. But in reality, every algorithm reflects the subconscious choices of its human designer.

Back in 2015, a news story about data violence went viral: “Google Photos’ algorithm tagged black people as gorillas”. This appalling mistake revealed that the AI system had not been trained on faces of black people, and so it failed to categorise them as human beings.
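The underlying mechanism can be shown with a deliberately abstract, hypothetical sketch (toy features and a generic classifier, not Google’s actual pipeline): when a group is missing from the training data, the model can only misclassify it, however confident its output looks.

```python
# Hypothetical sketch (toy data, generic classifier): a group that is absent
# from the training set gets systematically mislabelled at prediction time.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# One abstract image feature; two groups of people occupy different regions.
group_a = rng.normal(0.8, 0.05, (500, 1))   # present in the training data
group_b = rng.normal(0.3, 0.05, (500, 1))   # never seen during training

# Training set: group A labelled "person", plus unrelated "not_person" images.
X_train = np.vstack([group_a, rng.normal(0.25, 0.05, (500, 1))])
y_train = ["person"] * 500 + ["not_person"] * 500

model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# Group B sits closer to the "not_person" examples than to anything labelled
# "person", so every prediction for it is wrong, and no error ever shows up
# on the data the model was trained and evaluated on.
print(model.predict(group_b[:5]))   # -> ['not_person' 'not_person' ...]
```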

Of course, this is extremely hard to prove. When algorithms remain unaudited and unregulated, they effectively become black boxes. And at a time when code is a commodity, that is especially dangerous: packages of algorithms designed by private companies are sold to third parties, but the exact details of how they work are kept secret.


Related concepts

Invisible labor: Sometimes invisibility is not strictly related to “seeing” or to a visual act; it may refer to market devaluation or to a social judgment that labels some tasks as “less important”. When speaking about data violence, however, the term invisible refers directly to the visual act of not seeing the workers, or not understanding that they are performing work or how they are performing it. An example is when an algorithm obscures which tasks are performed by humans and which are performed by computers.

O’Neil, Cathy (2016). “Weapons of Math Destruction”. Crown.
Cherry, Miriam (2009). “Working for Virtually Minimum Wage”. Alabama Law Review.
BBC News editorial (2015). “Google apologises for Photos app’s racist blunder”. BBC (online).



A project by Domestic Data Streamers