4 Algorithms

Invisible, Irreversible, and Infinite

How can computers carry bias?

Many people think computers and algorithms are neutral – that racism and sexism are not programmers’ problems. In the case of Tay’s programmers, this false belief enabled more hate speech online and led to the embarrassment of their employer. Human-crafted computer programs mediate nearly everything humans do today, and human biases are involved in many of those tasks. Considering the near-infinite extent to which algorithms and their activities are replicated, the presence of human bias is a devastating threat to computer-dependent societies in general and to those targeted or harmed by those biases in particular.

A white man wearing Google Glass
Google Glass was considered by some to be an example of a poor decision by a homogenous workforce.

Problems like these are rampant in the tech industry because there is a damaging belief in US (and some other) societies that the development of computer technologies is antisocial, and that some kinds of people are better at it than others. As a result of this bias in tech industries and computing, there are not enough kinds of people working on tech development teams: not enough women, not enough people who are not white, not enough people who remember to think of children, not enough people who think socially.

Remember Google Glass? You may not; that product failed because few people wanted interaction with a computer to come between themselves and eye contact with humans and the world. People who fit the definition of “tech nerd” fell within the small demographic that did want it, but the sentiment was not shared by the broader community of technology users. Critics labeled the unfortunate people who did purchase the product “glassholes.”

Social Media Analytics across Continents

Student Content

I was born and raised in a small town in the county of Kent in England. In 2012, through my father’s company, my mother, my father, my brother, my two dogs, and I moved to Los Gatos, California. Five years before I moved to the United States, the first iPhone came out on June 29th, 2007. I got my first phone at around age ten and my first iPhone at around age twelve. Once I got my first iPhone, I downloaded KIK, Facebook, Instagram, and Snapchat. I fell in love with these social media apps. I would spend hours every day using these apps to talk to friends, see what others were doing, post funny memes, and so much more. I thought this was the only side to social media and this was everything to see on social media. Little did I know that moving to the United States would expose me to a whole new side of social media.

My own personal and cultural knowledge of social media that makes me different from others comes from living in two different countries and experiencing two different cultures on social media. While residents of England and America speak the same language, the two cultures are very different. Due to this difference in culture, I have been exposed to different trends, different purposes for using social media, different apps, and more. Through experiencing social media in both countries, I now have a more cultured knowledge of social media.

Social media in England and the United States are very different. These differences are caused by algorithms. Because the cultures are different, users are going to like, dislike, comment, subscribe, etc. to different things and different people. These differences affect algorithms, which continue to show users similar social media content. Owners of these social media apps use what is called social media analytics, defined as data collected from social media websites and apps that gives a clear picture of your online actions and presence. Everyone’s social media analytics are different. As I live in California in the United States, some of my analytics may be similar to those of someone else from California, as we are exposed to similar cultures. If you were to compare my analytics to those of someone living in England, there would be similarities but also a lot of differences, as we are exposed to different cultures.

When I first moved to the United States, I did not understand a lot about social media here. The humor was different, trends were different, and the way people represented themselves online was different. It took adjusting, but now I fully understand the humor, the trends, the way people represent themselves, and more. I have now lived in the United States for eight years and I can fully say I have adapted to the culture here. If I were to look at social media in England, even though that is where I first used social media, it would be different and harder for me to understand.

About the author

Issy Brooker was born and raised in Kent, England. She moved to the United States in 2012. Issy Brooker is currently 19 years old and a first-year student at the University of Arizona.

 

 

Exacerbating Bias in Algorithms: The Three I’s

In its early years, the internet was viewed as a utopia, an ideal world that would permit a completely free flow of all available information to everyone, equally. John Perry Barlow’s 1996 Declaration of the Independence of Cyberspace represents this utopian vision, in which the internet liberates users from all biases and even from their own bodies (at which human biases are so often directed). Barlow’s utopian vision does not match the internet of today. Our social norms and inequalities accompany us across all the media and sites we use, and they are worsened in a climate where information value is determined by marketability and profit, as sociologist Zeynep Tufekci explains in this TED Talk.

Because algorithms are built on human cooperation with computing programs, human selectivity and human flaws are embedded within algorithms. As users, humans carry our own biases, and today there is particular concern that algorithms pick up and spread these biases to many, many others. They can even make us more biased by hiding results that the algorithm calculates we may not like. When we get our news and information from social media, invisible algorithms consider our own biases and those of friends in our social networks to determine which new posts and stories to show us in search results and news feeds. The result for each user can be called their echo chamber or, as author Eli Pariser describes it, a filter bubble, in which we only see news and information we like and agree with, leading to political polarization.
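To make the idea concrete, here is a minimal sketch in Python of how engagement-based feed ranking can produce a filter bubble. It is a simplified illustration, not any platform’s actual code: each candidate post is scored by how often the user has already engaged with its topic, so familiar viewpoints keep rising to the top of the feed.

# Simplified illustration of engagement-based feed ranking (not any real platform's code).
# Posts on topics a user has already engaged with score higher, so the feed
# gradually narrows toward content the user already agrees with.

from collections import Counter

def rank_feed(candidate_posts, engagement_history):
    """Return posts ordered by similarity to the user's past engagement."""
    # Count how often the user has engaged with each topic.
    topic_weights = Counter(post["topic"] for post in engagement_history)

    def score(post):
        # A post about a familiar topic outranks anything unfamiliar.
        return topic_weights.get(post["topic"], 0)

    return sorted(candidate_posts, key=score, reverse=True)

# Example: a user who has mostly engaged with one political viewpoint.
history = [{"topic": "viewpoint_A"}] * 9 + [{"topic": "viewpoint_B"}]
candidates = [
    {"id": 1, "topic": "viewpoint_A"},
    {"id": 2, "topic": "viewpoint_B"},
    {"id": 3, "topic": "viewpoint_A"},
]
print(rank_feed(candidates, history))  # viewpoint_A posts rise to the top

Run repeatedly, a loop like this feeds on its own output: the more the user engages with what the ranking already favors, the less likely anything unfamiliar is ever shown.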

Although algorithms can generate very sophisticated recommendations, algorithms do not make sophisticated decisions. When humans make poor decisions, they can rely on themselves or on other humans to recognize and reverse the error; at the very least, a human decision-maker can be held responsible. Human decision-making also often takes time and critical reflection to implement, such as the writing of an approved ordinance into law. When algorithms are used in place of human decision-making, I describe what ensues as the Three I’s: algorithms’ decisions become invisible, irreversible, and infinite. Most social media platforms and many organizations using algorithms will not share how their algorithms work; for this lack of transparency, they are known as black box algorithms.

Exposing Invisible Algorithms: ProPublica

Journalists at ProPublica are educating the public on what algorithms can do by explaining and testing black box algorithms. This work is particularly valuable because most algorithmic bias is hard to detect for small groups or individual human users. Studies like those presented in ProPublica’s “Breaking the Black Box” series (below) have been based on groups systematically testing algorithms from different machines, locations, and users. Using investigative journalism, ProPublica has also found that algorithms used by law enforcement are significantly more likely to label African Americans as High Risk for reoffending and white Americans as Low Risk.
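In miniature, an audit of this kind works by sending a system inputs that differ in only one attribute and comparing the outputs. The Python sketch below is a hypothetical illustration of that approach, not ProPublica’s actual code or methodology; the scoring_black_box function is a made-up stand-in for whatever opaque system is being tested.

# Hypothetical sketch of auditing a black box algorithm: score test profiles that are
# identical except for one attribute, then compare the average result for each group.
# scoring_black_box() is a made-up stand-in for the real, opaque system under test.

def scoring_black_box(profile):
    # Toy stand-in: a real audit would query the actual system, ideally from
    # different machines, locations, and user accounts, not a local function.
    return 0.8 if profile["neighborhood"] == "north" else 0.3

def average_score(profiles):
    """Average the black box's output over a group of test profiles."""
    return sum(scoring_black_box(p) for p in profiles) / len(profiles)

# Two groups of otherwise-identical test profiles that differ in one attribute.
group_north = [{"neighborhood": "north", "priors": 0}, {"neighborhood": "north", "priors": 0}]
group_south = [{"neighborhood": "south", "priors": 0}, {"neighborhood": "south", "priors": 0}]

print(average_score(group_north), average_score(group_south))
# A large, consistent gap between otherwise-identical groups is evidence of bias.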

 

Fighting Unjust Algorithms

Algorithms are laden with errors. Some of these errors can be traced to the biases of those who developed them, as when a facial recognition system meant for global implementation is trained using data sets from only a limited population (say, predominantly white or male). Algorithms can also become problematic when they are hacked by groups of users, as Microsoft’s Tay was. Algorithms are also grounded in the values of those who shape them, and these values may reward some of those involved while disenfranchising others.
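One way to see how skewed training data produces skewed errors is to measure a system’s error rate separately for each group it is used on, rather than reporting a single overall accuracy. The snippet below is a toy Python illustration of that measurement, with made-up evaluation data; it is not a real facial recognition system.

# Toy illustration: measuring a model's error rate separately for each group.
# Reporting only overall accuracy can hide much worse performance on groups
# that were underrepresented in the training data.

def per_group_error(predictions, labels, groups):
    """Return the error rate for each group in the evaluation data."""
    errors, counts = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        counts[group] = counts.get(group, 0) + 1
        if pred != label:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / counts[g] for g in counts}

# Hypothetical evaluation results for a model trained mostly on group "A".
preds  = ["match", "match", "match", "no_match", "no_match", "match"]
labels = ["match", "match", "match", "match",    "match",    "no_match"]
groups = ["A",     "A",     "A",     "B",        "B",        "B"]
print(per_group_error(preds, labels, groups))  # e.g. {'A': 0.0, 'B': 1.0}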

Despite their flaws, algorithms are increasingly used in heavily consequential ways. They predict how likely a person is to commit a crime or default on a bank loan based on a given data set. They can target users with messages on social media that are customized to fit their interests, their voting preferences, or their fears. They can identify who is in photos online or in recordings of offline spaces.

Confronting this landscape of increasing algorithmic control is activism to limit the power of algorithms over human lives. Below, read about the work of the Algorithmic Justice League and other activists promoting bans on facial recognition. And consider: What roles might algorithms play in your life that may deserve more attention, scrutiny, and even activism?

The Algorithmic Justice League vs facial recognition tech in Boston

MIT computer scientist and “Poet of Code” Joy Buolamwini heads the Algorithmic Justice League, an organization making remarkable headway in fighting facial recognition technologies; she explains this work in the first video below. On June 9th, 2020, Buolamwini and other computer scientists presented alongside citizens at a Boston City Council meeting in support of a proposed ordinance banning facial recognition in public spaces in the city. Held and shared by live stream during COVID-19, footage of this meeting offers a remarkable look at the value of human advocacy in shaping the future of social technologies. The second video below should be cued to the beginning of Buolamwini’s testimony, about half an hour in. Boston’s City Council subsequently voted unanimously to ban the use of facial recognition technologies by the city.

 

 

 

Core Concepts and Questions

Core Concepts

Algorithm: a step-by-step set of instructions for getting something done to serve humans, whether that something is making a decision, solving a problem, or getting from point A to point B (or point Z)

Human cooperation: what algorithms are built on – cooperation from human software developers, and cooperation on the part of users

Bias: assumptions about a person, culture, or population

Filter bubble: a term coined by Eli Pariser, also called an echo chamber; a phenomenon in which we only see news and information we like and agree with, leading to political polarization

Black box: the term used when processes created for computer-based decision-making are not shared with or made clear to outsiders

The Three I’s: algorithms’ decisions can become invisible, irreversible, and infinite

Core Questions

A. Questions for qualitative thought

  1. Write and/or draw an algorithm (or your best try at one) to perform an activity you wish you could automate. Doing the dishes? Taking an English test? It’s up to you.
  2. Often there are spaces online that make one feel like an outsider, or like an insider. Study an online space that makes you feel like one of these – how is that outsider or insider status being communicated to you, or to others?
  3. Consider the history of how you learned whatever you know about computing. This could mean how you came to understand key terms, searching online, simple programs, coding, etc. Then, reinvent that history as if you had learned all you wish you knew about computing at the times and in the ways you feel you should have learned them.

B. Review: Let’s test how well you’ve been programmed. (Mark the best answers.)
