8 Information


In the age of social media, the notions of truth, information, and knowledge are all changing. These notions were once amorphous and invisible – the kinds of airy topics only philosophers and a few scientists studied. But today truth, information, and knowledge are all represented, constructed, and battled over online. Page views, shares, and reactions clue individuals and companies in to what spreads from machine to machine and mind to mind. Content editable by users online is negotiated and changed in real time. In this chapter we’ll look at the problems and opportunities afforded by social media in relation to truth and knowledge.

Knowledge is always based on multiple pieces of information, and usually involves finding coherence across them when they conflict.

“Fake news” and “post-truth”

Much has been made in recent years of “fake news.” This is a term, favored by the President of the United States among others, that circulates ubiquitously through social as well as traditional media. In 2016, Oxford Dictionaries presented “post-truth” as its “word of the year.” But what do these terms mean, and what do they have to do with social media?

To understand these terms, we have to look closely at what we expect with the word “news” and notions of truth and “fake”-ness. These conversations start with the concepts of objectivity and subjectivity.

Objectivity and subjectivity

To be objective is to present a truth in a way that would also be true for anyone anywhere; so that truth exists regardless of anyone’s perspective. The popular notion of what is true is often based on this expectation of objective truth.

The expectation of objective truth makes sense in some situations – related to physics and mathematics, for example. However, humans’ presentations of both current and historic events have always been subjective – that is, one or more subjects with a point of view have presented the events as they see or remember them. When subjective accounts disagree, journalists and historians face a tricky process of figuring out why the accounts disagree, and piecing together what the evidence is beneath subjective accounts, to learn what is true.

Multiple truths = knowledge production

In US society, we have not historically thought about knowledge as being a negotiation among multiple truths. Even at the beginning of the 21st century, the production of knowledge was considered the domain of those privileged with the highest education – usually from the most powerful sectors of society. For example, when I was growing up, the Encyclopedia Britannica was the authority I looked to for general information about everything. I did not know who the authors were, but I trusted they were experts.

Enter Wikipedia, the online encyclopedia, and everything changed.

 

The first version of Wikipedia was founded on a model much closer to the Encyclopedia Britannica’s than its current one. It was called Nupedia, and only experts were invited to contribute. But then one of the co-founders, Jimmy Wales, decided to try a new model of knowledge production based on the concept of collective intelligence, written about by Pierre Lévy. The belief underpinning collective intelligence, and Wikipedia, is that no one knows everything, but everyone knows something. Everyone was invited to contribute to Wikipedia. And everyone still is.

When many different perspectives are involved, there can be multiple and even conflicting truths around the same topic. And there can be intense competition to put forth some preferred version of events. But the more perspectives you see, the more knowledge you have about the topic in general. And the results of negotiation between multiple truths can be surprisingly accurate when compared with known truths. A 2005 study in the prominent journal Nature comparing the accuracy of the Encyclopedia Britannica and Wikipedia found they had around the same numbers of errors and levels of accuracy.

What are truths?

And the third ingredient of a truth? That is you, the human reader. As an interpreter, and sometimes sharer/spreader of online information and “news”, you must keep an active mind. You are catching up with that truth in real-time. Is it true, based on evidence available to you from your perspective? Even if it once seemed true, has evidence recently emerged that reveals it to not be true? Many truths are not true forever; as we learn more, what once seemed true is often revealed to not be true.

Truths are not always profitable, so they compete with a lot of other types of content online. As a steward of the world of online information, you have to work to keep truths in circulation.

Infographic: Lies Spread Faster Than Truth (based on 2018 MIT Study)
Infographic by Diana Daly based on the article by Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.

Why people spread “fake news” and bad information

“Fake news” has multiple meanings in our culture today. When politicians and online discussants refer to stories as fake news, they are often referring to news that does not match their perspective. But there are news stories generated today that are better described as “fake” – based on no evidence.

So why is “fake news” more of an issue today than it was at some points in the past?

Well, historically “news” has long been the presentation of information on current events in our world. In past eras of traditional media, a much smaller number of people published news content. There were codes of ethics associated with journalism, such as the Journalist’s Creed written by Walter Williams in 1914. Not all journalists followed this or any other code of ethics, but in the past, those who behaved unethically were often called out by their colleagues and unemployable with trusted news organizations.

Today, thanks to Web 2.0 and social media sites, nearly anyone can create and widely circulate stories branded as news; the case study of a story by Eric Tucker in this New York Times lesson plan is a good example. And the huge mass of “news” stories that results involves stories created based on a variety of motivations. This is why Oxford Dictionaries made the term post-truth their word of the year for 2016.

People or agencies may spread stories as news online to:

  • spread truth
  • influence others
  • generate profit

Multiple motivations may drive someone to create or spread a story not based on evidence. But when spreading truth is not one of the story creators’ concerns, you could justifiably call that story “fake news.” I try not to use that term these days though; it’s too loaded with politics. I prefer to call “news” unconcerned with truth by its more scientific name…

Bullshit!

bags of trash with a sign reading "quality bullshit"
Bullshit is a scientific term for information spread without concern for truth.

Think I’m bullshitting you when I say bullshit is the scientific name for fake news? Well, I’m not. There are information scientists and philosophers who study different types of bad information, and here are basic overviews of their classifications for bad information:

  • misinformation = inaccurate information; often spread without intention to deceive
  • disinformation = information intended to deceive
  • bullshit = information spread without concern for whether or not it’s true
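For readers who like to think in code, the three categories above differ along two dimensions: whether the spreader intends to deceive, and whether the spreader cares about truth at all. The sketch below is purely illustrative – the function and its parameters are hypothetical teaching aids, not a real classification tool:

```python
# Illustrative sketch of the chapter's taxonomy of bad information.
# The function name and parameters are hypothetical, invented for teaching.

def classify(accurate: bool, intends_to_deceive: bool, cares_about_truth: bool) -> str:
    """Label a piece of information using the chapter's three categories."""
    if not cares_about_truth:
        return "bullshit"          # truth simply isn't a concern
    if intends_to_deceive:
        return "disinformation"    # deliberately deceptive
    if not accurate:
        return "misinformation"    # wrong, but spread in good faith
    return "accurate information"

# A false story shared by someone who genuinely cares whether it's true:
print(classify(accurate=False, intends_to_deceive=False, cares_about_truth=True))
# misinformation
```

Note that the bullshit check comes first: in this taxonomy, indifference to truth defines the category regardless of whether the content happens to be accurate.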

Professors Kay Mathiesen and Don Fallis at the University of Arizona have written that much of the “fake news” generated in the recent election season was bullshit, because producers were concerned with winning influence or profit or both, but were unconcerned with whether it was true.

It is not always possible to know the motivation(s) behind a story’s creation. Indeed, it can be difficult to determine the source of information on social media. But there have been some cases where identified sources were clearly trying to deceive, or were bullshitting – creating content that would spread fast without caring whether it was true.

Cases of bad information spread reveal different intentions, including destabilization of the US government, and profit. There have been multiple cases of “news” story “factories,” in which people work together informally or are even employed to create news stories. The New York Times investigated one factory in Russia, a nation whose government’s interference in the US election was the subject of a federal investigation. And Wired Magazine reported on a factory in Macedonia in which teens created election-related news stories for profit.

There is evidence that the systematic creation of election-related stories had a considerable effect on the 2016 US Presidential election. Donald Trump’s victory was claimed as a triumph by self-proclaimed “trolls” (see Chapter 3 for a longer discussion of this phenomenon) and others who had collaborated in publishing online content to defeat Hillary Clinton. Some of these content creators celebrated their campaign, including its disregard for truths, in an event they called the Deplora-Ball.

Mark Zuckerberg initially denied responsibility for Facebook’s spread of deceptive stories. Now Facebook moderators are beginning to flag “disputed news.” But it is likely “news” factories will continue to produce stories not based in truth as long as there are readers who continue to spread them.

The Alt-Right: From fake news to domestic terrorism

2016 saw the fast growth online of a right-leaning political aggregate in the US known as the Alt-Right (first mentioned in Chapter 5). The Alt-Right and related “white nationalist” groups have framed themselves as a response to movements based on identity politics – groups that rally or identify around a race, ethnicity, upbringing, or religion rather than a political party. But many reject the notion that these groups are formed around identity, particularly when white supremacy – which centers on oppressing other races – has been so closely associated with Alt-Right media and demonstrations.

What seems to have brought the Alt-Right together more than identity politics is their approach to news – which they often discount as biased – and to truth or “reality” – which their culture has treated as acceptable to manufacture for political use. Karl Rove of the second Bush administration was an early purveyor of this ideology, insisting that people in power create their own reality (and therefore their own truths). The Alt-Right movement has followed this philosophy, recruiting followers through memes that imagine situations fitting their politics. One Alt-Right blogger professed clear political intentions behind disinformation he spread in a profile by the New Yorker Magazine – disinformation which spread widely prior to the 2016 election.

We’re an empire now, and when we act, we create our own reality. And while you’re studying that reality—judiciously, as you will—we’ll act again, creating other new realities, which you can study too, and that’s how things will sort out. We’re history’s actors … and you, all of you, will be left to just study what we do. ~ Karl Rove to a NYTimes reporter in 2002

Bullshit that really took off

According to PolitiFact, some big headlines from 2016 of stories not based in truth included these:

  • Hillary Clinton is running a child sex ring out of a pizza shop.
  • Democrats want to impose Islamic law in Florida.
  • Thousands of people at a Donald Trump rally in Manhattan chanted, “We hate Muslims, we hate blacks, we want our great country back.”

Buzzfeed tracked the rates at which election stories spread on Facebook in 2016, and found these false stories out-performed true election stories:

  • “Pope Francis Shocks World, Endorses Donald Trump for President”
  • “WikiLeaks CONFIRMS Hillary Sold Weapons to ISIS”
  • “IT’S OVER: Hillary’s ISIS Email Just Leaked and It’s Worse Than Anyone Could Have Imagined”

None of the listed stories was based in truth, but readers spread them wildly across their social networks and other online spaces. And many readers believed them. Take “pizzagate”: In response to the pizza shop story, one man showed up with a gun at the pizza shop at the center of the story and fired shots, attempting to break up what he believed was a massive pedophilia operation.

Which leads to a new question. We now understand some of the reasons bullshit and other bad information spreads online. But why are readers and social media users so ready to believe it?

Bugs in the human belief system

Fake news and bad information are more likely to be believed when they confirm what we already believe.

We believe bullshit, fake news, and other types of deceptive information based on numerous interconnected human behaviors. Forbes recently presented an article, Why Your Brain May Be Wired To Believe Fake News, which broke down a few of these with the help of the neuroscientist Daniel Levitin. Levitin cited two well-researched human tendencies that draw us to swallow certain types of information while ignoring others.

  • One tendency is belief perseverance: you want to keep believing what you already believe, treasuring a preexisting belief like Gollum treasures the ring in Tolkien’s Lord of the Rings series.
  • The other tendency is confirmation bias: the brain runs through the text of something to select the pieces that confirm what you think is already true, while knocking away and ignoring the pieces that don’t.

These tendencies to believe what we want to hear and see are exacerbated by social network-enabled filter bubbles (described in Chapter 4 of this book). When we get our news through social media, we are less likely to see opposing points of view, which social networking sites filter out, and which we are unlikely to seek out on our own.

There is concern that youth and students are particularly vulnerable to believing deceptive online content. But I believe that with some training, youth are going to be better at “reading” than those older than them. Youth are accustomed to online content layered with pictures, links, and insider conversations and connections. The trick to “reading” in the age of social media is to read all of these layers, not just the text.

Dr. Daly’s steps to “reading” social media news stories in 2020:

Reading today means ingesting multiple levels of a source simultaneously.
  1. Put aside your biases. Recognize and put aside your belief perseverance and your confirmation bias. You may want a story to be true or untrue, but you probably don’t want to be fooled by it.
  2. Read the story’s words AND its pictures. What are they saying? What are they NOT saying?
  3. Read the story’s history AND its sources. Who / where is this coming from? What else has come from there and from them?
  4. Read the story’s audience AND its conversations. Who is this source speaking to, and who is sharing and speaking back? How might they be doing so in coded ways? (Here’s an example to make you think about images and audience, whether or not you agree with Filipovic’s interpretation.)
  5. Before you share, consider fact-checking. Reliable fact-checking sites at the time of this writing include:
  • politifact.com
  • snopes.com
  • factcheck.org

That said – no one fact-checking site is perfect; neither is any one news site. All are subjective and liable to be taken over by partisan interests or trolls.

 

Core Concepts

fake news – a term recently popularized by politicians to refer to stories they do not agree with

misinformation – inaccurate information spread without the intention to deceive

disinformation – information intended to deceive those who receive it

bullshit – information spread without concern for whether or not it’s true

knowledge production – the negotiation of multiple truths as a way of understanding or “knowing” something

confirmation bias – the human tendency for the brain to run through the text of something to select the pieces that confirm what you think is already true, while knocking away and ignoring the pieces that don’t

belief perseverance – the human tendency to want to continue believing what you already believe

 


 

Related Content

 

Consider It: How to be a good digital citizen during the election – and its aftermath

(Kolina Koltai, University of Washington, from The Conversation)

You are a key player in efforts to curb misinformation online.
John Fedele/The Image Bank via Getty Images


In the runup to the U.S. presidential election there has been an unprecedented amount of misinformation about the voting process and mail-in ballots. It’s almost certain that misinformation and disinformation will increase, including, importantly, in the aftermath of the election. Misinformation is incorrect or misleading information, and disinformation is misinformation that is knowingly and deliberately propagated.

While every presidential election is critical, the stakes feel particularly high given the challenges of 2020.

I study misinformation online, and I can caution you about the kind of misinformation you may see on Tuesday and the days after, and I can offer you advice about what you can do to help prevent its spread. A fast-moving 24/7 news cycle and social media make it incredibly easy to share content. Here are steps you can take to be a good digital citizen and avoid inadvertently contributing to the problem.

Election misinformation

Recent reports by disinformation researchers highlight the potential for an enormous amount of misleading information and disinformation to spread rapidly on Election Day and the days following. People spreading disinformation may be trying to sway the election one way or the other or simply undermine confidence in the election and American democracy in general.

the Kremlin's Spasskaya Tower and St. Basil's Cathedral reflected in rain water puddles in Red Square in Moscow, Russia
U.S. intelligence services have reported that the Russian government is orchestrating disinformation campaigns aimed at the U.S. elections and pandemic response.
AP Photo/Pavel Golovkin

This report by the Election Integrity Partnership (EIP) details narratives meant to delegitimize the election and show how uncertainty creates opportunities for misinformation to flourish.

In particular, you may end up seeing misleading information shared about voting in person, mail-in ballots, the day-of voting experience and the results of the election. You may see stories online circulating about coronavirus outbreaks or infections at polling locations, violence or threats of intimidation at polling locations, misinformation about when, where and how to vote, and stories of voting suppression through long lines at polling stations and people being turned away.

We likely won’t know the results on Election Day, and this delay is both anticipated and legitimate. There may be misinformation about the winner of the presidential election and the final counting of ballots, especially with the increase in mail-in ballots in response to the coronavirus pandemic. It will be important to know that not every state finalizes their official ballot count on Nov. 3, and there may be narratives that threaten the legitimacy of the election results, like people claiming their vote did not get counted or saying they found discarded completed ballots.

What if the source of misinformation is … you?

There is a lot you can do to help reduce the spread of election misinformation online. This can happen both accidentally and intentionally, and there are both foreign and domestic actors who create disinformation campaigns. But ultimately, you have the power to not share content.

Sharing mis/disinformation gives it power. Regardless of your demographic, you can be susceptible to misinformation, and sometimes specifically targeted by disinformation. One of the biggest steps you can take to be a good digital citizen this election season is not to contribute to the sharing of misinformation. This can be surprisingly difficult, even with the best of intentions.

One type of misinformation that has been popular leading up to the election – and is likely to remain popular – is “friend of a friend” claims. These claims are often unverified stories without attribution that are quickly spread by people copy and pasting the same story across their networks.

You may see these claims as social media statuses like a Facebook post or an Instagram Story, or even as a bit of text forwarded to you in a group chat. They are often text-based, with no name attached to the story, but instead forwarded along by a “friend of a friend.”

This type of misinformation is popular to share because the stories can center around the good intentions of wanting to inform others, and they often provide a social context, for example my friend’s doctor or my brother’s co-worker, that can make the stories seem legitimate. However, these often provide no actual evidence or proof of the claim and should not be shared, even if you believe the information is useful. It could be misleading.

How to avoid spreading misinformation

Many useful resources are available about how to identify misinformation, which can guide you on what to share and not to share. You can improve your ability to spot misinformation and learn to avoid being duped by disinformation campaigns.

https://youtube.com/watch?v=gE9dFM4Bs0k
Tips for spotting misinformation online.

A key approach is the Stop, Investigate, Find and Trace (SIFT) technique, a fact-checking process developed by digital literacy expert Mike Caulfield of Washington State University Vancouver.

Following this technique, when you encounter something you want to share online, you can stop and check to see if you know the website or source of the information. Then investigate the source and find out where the story is coming from. Then find trusted coverage to see if there is a consensus among media sources about the claim. Finally, trace claims, quotes and media back to their original contexts to see if things were taken out of context or manipulated.
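As an illustration only, the four SIFT moves above can be expressed as a simple checklist. The data structure and `review` function below are hypothetical teaching aids, not part of Caulfield's actual materials:

```python
# Illustrative sketch: the SIFT fact-checking process as a checklist.
# Step names follow Caulfield's four moves; the questions paraphrase
# the explanation above, and the code itself is a hypothetical aid.

SIFT_STEPS = [
    ("Stop", "Do you know this website or source? Pause before sharing."),
    ("Investigate the source", "Who is behind this, and what is their agenda?"),
    ("Find trusted coverage", "Do other reliable outlets report the same claim?"),
    ("Trace to the original", "Is the quote, image, or claim in its original context?"),
]

def review(answers):
    """Given a yes/no answer for each step's question, return the names
    of the checks a claim still fails before it should be shared."""
    return [name for (name, _question), ok in zip(SIFT_STEPS, answers) if not ok]

# A claim from an unknown site that other outlets don't corroborate:
print(review([False, True, False, True]))
# ['Stop', 'Find trusted coverage']
```

An empty result would mean the claim passed every check; any names returned are the moves still left to do before sharing.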

Finally, you may want to share your own experience with voting this year on social media. Following the recommendation of the Election Integrity Partnership, it is a good idea to share positive experiences about voting. Go ahead and share your “I voted” sticker selfie. Sharing stories about how people socially distanced and wore masks at polling locations can highlight the positive experiences of voting in person.

[Deep knowledge, daily. Sign up for The Conversation’s newsletter.]

However, EIP cautions against posting about negative experiences. While negative experiences warrant attention, a heavy focus on them can stoke feelings of disenfranchisement, which could suppress voter turnout. Further, once you post something on social media, it can be taken out of context and used to advance narratives that you may not support.

Most people care about the upcoming election and informing people in their networks. It is only natural to want to share important and critical information about the election. However, I urge you to practice caution in these next few weeks when sharing information online. While it’s probably not possible to stop all disinformation at its source, we the people can do our part to stop its spread.

Kolina Koltai, Postdoctoral Researcher of Information Studies, University of Washington

This article is republished from The Conversation under a Creative Commons license. Read the original article.

PS…Coronavirus

I am creating this page on April 14th, 2020. Today it has been about one month since the coronavirus and responses to it began truly transforming life where I live in Tucson, Arizona; and weeks or months longer since life was transformed in earlier hotspots including China. The coronavirus pandemic is causing depression and destruction on a global scale, and it is also giving us a distinct view of information as matter that depends on time and space, as well as geopolitics and culture.

 


License


Humans are Social Media, OER Edition 2021 by Diana Daly is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
