4 Algorithms

Invisible, Irreversible, and Infinite

Diana Daly

Because computer programming is largely self-selected, and because many people picture the typical “tech geek” as white and male (as a Google Image search for the term suggests), people who end up learning computer programming in the US are more likely to be white than any other race, and more likely to identify as male than any other gender.

How can computers carry bias?

Many people think computers and algorithms are neutral, and that racism and sexism are not programmers’ problems. In the case of Tay’s programmers, this false belief enabled more hate speech online and embarrassed their employer, Microsoft. Human-crafted computer programs mediate nearly everything humans do today, and human responses are involved in many of those tasks. Considering the near-infinite extent to which algorithms and their activities are replicated, the presence of human biases within them is a devastating threat to computer-dependent societies in general, and to those targeted or harmed by those biases in particular.

[Image: A white man wearing Google Glass]
Google Glass was considered by some to be an example of a poor decision made by a homogeneous workforce.

Problems like these are rampant in the tech industry because there is a damaging belief in US (and some other) societies that the development of computer technologies is antisocial, and that some kinds of people are better at it than others. As a result of this bias in tech industries and computing, there are not enough kinds of people working on tech development teams: not enough women, not enough people who are not white, not enough people who remember to think of children, not enough people who think socially.

Remember Google Glass? You may not; the product failed because few people wanted interaction with a computer to come between themselves and eye contact with other humans and the world. People who fit the definition of “tech nerd” fell within the small demographic who did want that, but the sentiment was not shared by the broader community of technology users. Critics labeled the unfortunate people who did purchase the product “glassholes.”

Exacerbating Bias in Algorithms: The Three I’s

In its early years, the internet was viewed as a utopia, an ideal world that would permit a completely free flow of all available information to everyone, equally. John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace represents this utopian vision, in which the internet liberates users from all biases and even from their own bodies (at which human biases are so often directed). Barlow’s utopian vision does not match the internet of today. Our social norms and inequalities accompany us across all the media and sites we use, and they are worsened in a climate where information value is determined by marketability and profit, as sociologist Zeynep Tufekci explains in her TED Talk.

Because algorithms are built on human cooperation with computing programs, human selectivity and human flaws are embedded within algorithms. Humans as users carry our own biases, and today there is particular concern that algorithms pick up and spread these biases to many, many others. They can even make us more biased by hiding results the algorithm calculates we may not like. When we get our news and information from social media, invisible algorithms consider our own biases and those of friends in our social networks to determine which new posts and stories to show us in search results and news feeds. The result for each user can be called their echo chamber or, as author Eli Pariser describes it, their filter bubble, in which we only see news and information we like and agree with, leading to political polarization.
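To make this mechanism concrete, here is a minimal sketch in Python of an engagement-based feed ranker. It is an invented illustration, not any platform’s actual code; the field names, the sample posts, and the scoring rule are all assumptions chosen to show how “rank what the user already likes” narrows what a user sees.

```python
# A hypothetical, deliberately tiny feed ranker. Real platform algorithms
# are far more complex and are not public; this only illustrates how
# boosting familiar content can hide everything else.

def score(post, user):
    """Boost posts whose topics overlap what the user already liked."""
    overlap = len(post["topics"] & user["liked_topics"])
    return overlap * post["engagement"]

def rank_feed(posts, user):
    """Order the feed by similarity-weighted engagement, highest first."""
    return sorted(posts, key=lambda p: score(p, user), reverse=True)

user = {"liked_topics": {"cats", "baseball"}}
posts = [
    {"id": 1, "topics": {"cats"}, "engagement": 50},
    {"id": 2, "topics": {"politics"}, "engagement": 90},
    {"id": 3, "topics": {"baseball", "cats"}, "engagement": 20},
]

for post in rank_feed(posts, user):
    print(post["id"], score(post, user))
# Prints posts 1, 3, then 2: the widely engaged politics post sinks to the
# bottom because it matches nothing the user has liked before.
```

Notice that nothing in this sketch is malicious; the bubble is an emergent property of optimizing for what each user already engages with.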

Although algorithms can generate very sophisticated recommendations, algorithms do not make sophisticated decisions. When humans make poor decisions, they can rely on themselves or on other humans to recognize and reverse the error; at the very least, a human decision-maker can be held responsible. Human decision-making often takes time and critical reflection to implement, such as the writing of an approved ordinance into law. When algorithms are used in place of human decision-making, I describe what ensues as the Three I’s: algorithms’ decisions become invisible, irreversible, and infinite. Most social media platforms and many organizations using algorithms will not share how their algorithms work; for this lack of transparency, they are known as black box algorithms.

Exposing Invisible Algorithms: ProPublica

Journalists at ProPublica are educating the public on what algorithms can do by explaining and testing black box algorithms. This work is particularly valuable because algorithmic bias is hard to detect for small groups or individual users. Studies like those presented in ProPublica’s “Breaking the Black Box” series (below) have been based on groups systematically testing algorithms from different machines, locations, and user accounts. Using investigative journalism, ProPublica has also found that algorithms used by law enforcement are significantly more likely to label African Americans as High Risk for reoffending and white Americans as Low Risk.

[Video series: ProPublica, “Breaking the Black Box”]
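The systematic-testing approach behind such studies can be sketched as paired testing: send the opaque system inputs that are identical except for one attribute, and measure the gap in its outputs. The code below is a hypothetical illustration, not ProPublica’s methodology or tooling; the toy scorer and its hidden neighborhood penalty are invented so the example can run at all.

```python
# Paired-testing sketch for auditing a black box scoring system.
# `toy_black_box` stands in for an opaque model we can query but not read;
# its hidden penalty is invented for this demo.

def toy_black_box(profile):
    score = profile["priors"] * 2
    if profile["neighborhood"] == "B":  # hidden proxy penalty
        score += 3
    return score

def paired_audit(profiles, attribute, value_a, value_b, scorer):
    """Average output gap when only `attribute` flips from value_b to value_a."""
    gaps = [
        scorer({**p, attribute: value_a}) - scorer({**p, attribute: value_b})
        for p in profiles
    ]
    return sum(gaps) / len(gaps)

profiles = [{"priors": n, "neighborhood": "A"} for n in range(5)]
gap = paired_audit(profiles, "neighborhood", "B", "A", toy_black_box)
print(f"Average score gap from neighborhood alone: {gap}")  # prints 3.0
```

In a real audit the scorer would be a system the testers can query but not inspect, which is exactly why varying one input at a time across many test cases matters.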

Fighting Unjust Algorithms

Algorithms are laden with errors. Some of these errors can be traced to the biases of those who developed them, as when a facial recognition system meant for global implementation is trained using data sets from only a limited population (say, predominantly white or male). Algorithms can become problematic when they are hacked by groups of users, like Microsoft’s Tay was. Algorithms are also grounded in the values of those who shape them, and these values may reward some involved while disenfranchising others.
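One way this kind of training-data skew shows up in the numbers is that a single overall accuracy figure can look acceptable while accuracy for an underrepresented group does not. The sketch below uses invented evaluation results (the group names and counts are assumptions) to show why auditors disaggregate error rates by group.

```python
from collections import defaultdict

# Invented evaluation results: (group, prediction_was_correct).
# "group_a" mimics a population well represented in the training data,
# "group_b" one that is underrepresented.
results = (
    [("group_a", True)] * 98 + [("group_a", False)] * 2
    + [("group_b", True)] * 65 + [("group_b", False)] * 35
)

totals = defaultdict(int)
correct = defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

overall = sum(correct.values()) / sum(totals.values())
print(f"overall accuracy: {overall:.1%}")  # 81.5% looks tolerable...
for group in sorted(totals):
    print(f"{group}: {correct[group] / totals[group]:.0%}")
# ...until disaggregation shows group_a at 98% and group_b at 65%.
```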

Despite their flaws, algorithms are increasingly used in heavily consequential ways. They predict how likely a person is to commit a crime or default on a bank loan based on a given data set. They can target users with messages on social media that are customized to fit their interests, their voting preferences, or their fears. They can identify who is in photos online or in recordings of offline spaces.

Confronting this landscape of increasing algorithmic control is activism that aims to limit algorithms’ power over human lives. Below, read about the work of the Algorithmic Justice League and other activists promoting bans on facial recognition. And consider: What roles might algorithms play in your life that may deserve more attention, scrutiny, and even activism?

The Algorithmic Justice League vs facial recognition tech in Boston

MIT computer scientist and “Poet of Code” Joy Buolamwini heads the Algorithmic Justice League, an organization making remarkable headway in the fight against biased facial recognition technologies; she explains its work in the first video below. On June 9th, 2020, Buolamwini and other computer scientists presented alongside citizens at a Boston City Council meeting in support of a proposed ordinance banning facial recognition in public spaces in the city. Held and shared by live stream during COVID-19, footage of this meeting offers a remarkable look at the value of human advocacy in shaping the future of social technologies. The second video below should be cued to the beginning of Buolamwini’s testimony, about half an hour in. Boston’s City Council subsequently voted unanimously to ban the use of facial recognition technologies by the city.

[Video: Joy Buolamwini explains the work of the Algorithmic Justice League]

[Video: Boston City Council meeting on the proposed facial recognition ordinance, June 9, 2020]

Core Concepts and Questions

Core Concepts

Algorithm: a step-by-step set of instructions for getting something done to serve humans, whether that something is making a decision, solving a problem, or getting from point A to point B (or point Z)

Human cooperation: cooperation from human software developers, and cooperation on the part of users

Bias: assumptions about a person, culture, or population

Filter bubble: a term coined by Eli Pariser, also called an echo chamber; a phenomenon in which we only see news and information we like and agree with, leading to political polarization

Black box algorithms: the term used when processes created for computer-based decision-making are not shared with or made clear to outsiders

The Three I’s: algorithms’ decisions can become invisible, irreversible, and infinite

Core Questions

A. Questions for qualitative thought

  1. Write and/or draw an algorithm (or your best try at one) to perform an activity you wish you could automate. Doing the dishes? Taking an English test? It’s up to you.
  2. Often there are spaces online that make one feel like an outsider, or like an insider. Study an online space that makes you feel like one of these. How is that outsider or insider status being communicated to you, or to others?
  3. Consider the history of how you learned whatever you know about computing. This could mean how you came to understand key terms, searching online, writing simple programs, coding, etc. Then reimagine that history as if you’d learned everything you wish you knew about computing at the times and in the ways you feel you should have learned it.

B. Review: Let’s test how well you’ve been programmed. (Mark the best answers.)


License


Humans are Social Media, OER Edition 2021 by Diana Daly is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
