Information
Diana Daly
Key points
Social media transforms truth, information, and knowledge into tangible, actively debated entities online, tracked by metrics like page views and shares.
The chapter explores the prevalence of “fake news” and “post-truth” in the social media era, contrasting reliance on traditional newspapers with instant dissemination through platforms.
Objectivity, the presentation of universally valid truths, is contrasted with subjectivity, in which individual perspectives shape how events, including historical events, are presented.
Traditional, elitist views of knowledge are challenged by platforms like Wikipedia, which emphasize collective intelligence and the negotiation of multiple truths, and which can produce surprisingly accurate knowledge.
The chapter delves into motivations for creating and spreading “fake news,” examining influences like profit, truth, and the democratization of content creation on the internet.
Psychological factors like belief perseverance and confirmation bias contribute to the acceptance of misinformation, exacerbated by social media filter bubbles reinforcing existing beliefs.
In the age of social media, the notions of truth, information, and knowledge are all changing. These notions were once amorphous and invisible – the kinds of airy, invisible topics only philosophers and a few scientists studied. But today truth, information, and knowledge are all represented, constructed, and battled about online. Page views, shares, and reactions clue individuals and companies in to what spreads from machine to machine and mind to mind. Content editable by users online is negotiated and changed in real time. In this chapter, we’ll look at the problems and opportunities afforded by social media in relationship with truths and knowledge.
Section 1: “Fake news” and “post-truth”
Much has been made in recent years of “fake news.” This is a term, favored by the President of the United States among others, that circulates ubiquitously through social as well as traditional media. In 2016, Oxford Dictionaries presented “post-truth” as its “word of the year.” But what do these terms mean, and what do they have to do with social media?
To understand these terms, we have to look closely at what we expect with the word “news” and notions of truth and “fake”-ness. These conversations start with the concepts of objectivity and subjectivity.
Student Insights: From Horse Travel to Human Touch – Speedy News (writing by Jenna Wing, Spring 2021)
Objectivity and subjectivity
To be objective is to present a truth in a way that would also be true for anyone anywhere, so that the truth exists regardless of anyone’s perspective. The popular notion of what is true is often based on this expectation of objective truth.
The expectation of objective truth makes sense in some situations – related to physics and mathematics, for example. However, humans’ presentations of both current and historic events have always been subjective – that is, one or more subjects with a point of view have presented the events as they see or remember them. When subjective accounts disagree, journalists and historians face a tricky process of figuring out why the accounts disagree, and piecing together what the evidence is beneath subjective accounts, to learn what is true.
Multiple truths = knowledge production
In US society, we have not historically thought about knowledge as being a negotiation among multiple truths. Even at the beginning of the 21st century, the production of knowledge was considered the domain of those privileged with the highest education – usually from the most powerful sectors of society. For example, when I was growing up, the Encyclopedia Britannica was the authority I looked to for general information about everything. I did not know who the authors were, but I trusted they were experts.
Enter Wikipedia, the online encyclopedia, and everything changed.
Wikipedia’s predecessor followed a model much closer to that of the Encyclopedia Britannica than Wikipedia does now. It was called Nupedia, and only experts were invited to contribute. But then one of the co-founders, Jimmy Wales, decided to try a new model of knowledge production based on the concept of collective intelligence, written about by Pierre Lévy. The belief underpinning collective intelligence, and Wikipedia, is that no one knows everything, but everyone knows something. Everyone was invited to contribute to Wikipedia. And everyone still is.
When many different perspectives are involved, there can be multiple and even conflicting truths around the same topic. And there can be intense competition to put forth some preferred version of events. But the more perspectives you see, the more knowledge you have about the topic in general. The results of a negotiation between multiple truths can be surprisingly accurate. A 2012 study by Oxford University comparing the accuracy levels of Wikipedia to other online encyclopedias found Wikipedia had higher accuracy than Encyclopedia Britannica.
Section 2: What are truths?
So what qualifies as “a truth?” Well, truths are created and sustained from three ingredients. The first two ingredients are evidence and sincerity. That is, truths must involve evidence – pieces of information that could or can be seen or otherwise experienced in the world. And truths must involve sincerity – the intention of their creator to be honest.
And the third ingredient of a truth? That is you, the human reader. As an interpreter, and sometimes sharer/spreader, of online information and “news,” you must keep an active mind. You are catching up with that truth in real time. Is it true, based on evidence available to you from your perspective? Even if it once seemed true, has evidence recently emerged that reveals it to be false? Many truths are not true forever; as we learn more, what once seemed true is often overturned.
Truths are not always profitable, so they compete with a lot of other types of content online. As a steward of the world of online information, you have to work to keep truths in circulation.
Student Insights: My journey with technology (video by Abby Arnold, Spring 2021)
Section 3: Why people spread “fake news” and bad information
“Fake news” has multiple meanings in our culture today. When politicians and online discussants refer to stories as fake news, they are often referring to news that does not match their perspective. But there are news stories generated today that are better described as “fake” – based on no evidence.
So why is “fake news” more of an issue today than it was at some points in the past?
Well, historically, “news” has long been the presentation of information on current events in our world. In past eras of traditional media, a much smaller number of people published news content. There were codes of ethics associated with journalism, such as the Journalist’s Creed, written by Walter Williams in 1914. Not all journalists followed this or any other code of ethics, but in the past, those who behaved unethically were often called out by their colleagues and became unemployable at trusted news organizations.
Today, thanks to Web 2.0 and social media sites, nearly anyone can create and widely circulate stories branded as news; the case study of a story by Eric Tucker in this New York Times lesson plan is a good example. And the huge mass of “news” stories that results involves stories created based on a variety of motivations. This is why Oxford Dictionaries made the term post-truth their word of the year for 2016.
People or agencies may spread stories as news online to:
spread truth
influence others
generate profit
Multiple motivations may drive someone to create or spread a story not based on evidence. But when spreading truth is not one of the story creators’ concerns, you could justifiably call that story “fake news.” I try not to use that term these days though; it’s too loaded with politics. I prefer to call “news” unconcerned with truth by its more scientific name…
Bullshit!
Think I’m bullshitting you when I say bullshit is the scientific name for fake news? Well, I’m not. There are information scientists and philosophers who study different types of bad information, and here are some of the basic overviews of their classifications for bad information:
misinformation = inaccurate information, often spread without intention to deceive
disinformation = information intended to deceive those who receive it
bullshit = information spread without concern for whether or not it’s true
Professors Kay Mathiesen and Don Fallis at the University of Arizona have written that much of the “fake news” generated in the recent election season was bullshit, because producers were concerned with winning influence or profit or both, but were unconcerned with whether it was true. Examples include news generated by a fake news factory in Macedonia.
Student Insights: Searchability: The Helpful, but Inescapable Nature of Online Media (writing by Devon, Spring 2021)
Respond to this case study: The author states that misinformation, disinformation, and bullshit lead to confirmation bias. What is a real-world example of when false information led to confirmation bias?
Section 4: Bugs in the human belief system
We believe bullshit, fake news, and other types of deceptive information based on numerous interconnected human behaviors. Forbes recently presented an article, Why Your Brain May Be Wired To Believe Fake News, which broke down a few of these with the help of the neuroscientist Daniel Levitin. Levitin cited two well-researched human tendencies that draw us to swallow certain types of information while ignoring others.
One tendency is belief perseverance: You want to keep believing what you already believe, treasuring a preexisting belief like Gollum treasures the ring in Tolkien’s Lord of the Rings series.
The other tendency is confirmation bias: the brain runs through the text of something to select the pieces of it that confirm what you think is already true, while knocking away and ignoring the pieces that don’t confirm what you believe.
These tendencies to believe what we want to hear and see are exacerbated by social network-enabled filter bubbles (described in Chapter 4 of this book). When we get our news through social media, we are less likely to see opposing points of view: social networking sites filter them out, and we are unlikely to seek them out on our own.
There is concern that youth and students are particularly vulnerable to believing deceptive online content. But I believe that with some training, youth are going to be better at “reading” than those older than them. Youth are accustomed to online content layered with pictures, links, and insider conversations and connections. The trick to “reading” in the age of social media is to read all of these layers, not just the text.
Student insights: Social media affordances (video by Kendall Peterson, Spring 2021)
Tips for “reading” social media news stories:
Put aside your biases. Recognize and put aside your belief perseverance and your confirmation bias. You may want a story to be true or untrue, but you probably don’t want to be fooled by it.
Read the story’s words AND its pictures. What are they saying? What are they NOT saying?
Read the story’s history AND its sources. Who / where is this coming from? What else has come from there and from them?
Read the story’s audience AND its conversations. Who is this source speaking to, and who is sharing and speaking back? How might they be doing so in coded ways? (Here’s an example to make you think about images and audience, whether or not you agree with Filipovic’s interpretation.)
Before you share, consider fact-checking with a reliable fact-checking site.
That said – no one fact-checking site is perfect; neither is any one news site. All are subjective and liable to be taken over by partisan interests or trolls.
@Reality — Social Media and Ourselves podcast
@Reality
Release date: November 1st 2021
The internet can seem like a faraway place. It can seem fictional and like it cannot affect you. But today we see relationships, politics, and cultural movements echoing attitudes that originate on the web. How can this be? In this episode, we listen to stories from people who thought they were impervious to the internet’s influence. Instead, they found their realities perturbed by things they first saw on-screen. Produced and narrated by Gabe Stultz with support from Jacquie Kuru and Diana Daly of iVoices Media Lab at the University of Arizona. All music in this episode by Gabe Stultz.
Respond to this podcast episode…How did the podcast episode “@Reality” use interviews, student voices, or sounds to demonstrate a current or past social trend phenomenon? If you were making a sequel to this episode, what voices or sounds would you include to help listeners understand more about this trend, and why?
Core Questions
A. Questions for qualitative thought:
How can individuals maintain critical thinking skills and resist confirmation bias in an age of “fake news” and “bullshit“?
How can platforms like Wikipedia and other collaborative models of information creation address issues of power and bias while fostering accurate and diverse knowledge?
What ethical considerations should guide our engagement with online information and the stories we choose to share?
How can educators prepare young people for the challenges and opportunities of online information while recognizing their potential advantages in navigating this complex landscape?
B. Review: Which is the best answer?
C. Game on!
Related Content
Read It: It’s not just about facts: Democrats and Republicans have sharply different attitudes about removing misinformation from social media
Misinformation is a key global threat, but Democrats and Republicans disagree about how to address the problem. In particular, Democrats and Republicans diverge sharply on removing misinformation from social media.
Only three weeks after the Biden administration announced the Disinformation Governance Board in April 2022, the effort to develop best practices for countering disinformation was halted because of Republican concerns about its mission. Why do Democrats and Republicans have such different attitudes about content moderation?
My colleagues Jennifer Pan and Margaret E. Roberts and I found in a study published in the journal Science Advances that Democrats and Republicans not only disagree about what is true or false, they also differ in their internalized preferences for content moderation. Internalized preferences may be related to people’s moral values, identities or other psychological factors, or people internalizing the preferences of party elites.
And though people are sometimes strategic about wanting misinformation that counters their political views removed, internalized preferences are a much larger factor in the differing attitudes toward content moderation.
Internalized preferences or partisan bias?
In our study, we found that Democrats are about twice as likely as Republicans to want to remove misinformation, while Republicans are about twice as likely as Democrats to consider removal of misinformation as censorship. Democrats’ attitudes might depend somewhat on whether the content aligns with their own political views, but this seems to be due, at least in part, to different perceptions of accuracy.
Previous research showed that Democrats and Republicans have different views about content moderation of misinformation. One of the most prominent explanations is the “fact gap”: the difference in what Democrats and Republicans believe is true or false. For example, a study found that both Democrats and Republicans were more likely to believe news headlines that were aligned with their own political views.
But it is unlikely that the fact gap alone can explain the huge differences in content moderation attitudes. That’s why we set out to study two other factors that might lead Democrats and Republicans to have different attitudes: preference gap and party promotion. A preference gap is a difference in internalized preferences about whether, and what, content should be removed. Party promotion is a person making content moderation decisions based on whether the content aligns with their partisan views.
We asked 1,120 U.S. survey respondents who identified as either Democrat or Republican about their opinions on a set of political headlines that we identified as misinformation based on a bipartisan fact check. Each respondent saw one headline that was aligned with their own political views and one headline that was misaligned. After each headline, the respondent answered whether they would want the social media company to remove the headline, whether they would consider it censorship if the social media platform removed the headline, whether they would report the headline as harmful, and how accurate the headline was.
Deep-seated differences
When we compared how Democrats and Republicans would deal with headlines overall, we found strong evidence for a preference gap. Overall, 69% of Democrats said misinformation headlines in our study should be removed, but only 34% of Republicans said the same; 49% of Democrats considered the misinformation headlines harmful, but only 27% of Republicans said the same; and 65% of Republicans considered headline removal to be censorship, but only 29% of Democrats said the same.
Even in cases where Democrats and Republicans agreed that the same headlines were inaccurate, Democrats were nearly twice as likely as Republicans to want to remove the content, while Republicans were nearly twice as likely as Democrats to consider removal censorship.
We didn’t test explicitly why Democrats and Republicans have such different internalized preferences, but there are at least two possible reasons. First, Democrats and Republicans might differ in factors like their moral values or identities. Second, Democrats and Republicans might internalize what the elites in their parties signal. For example, Republican elites have recently framed content moderation as a free speech and censorship issue. Republicans might use these elites’ preferences to inform their own.
When we zoomed in on headlines that are either aligned or misaligned for Democrats, we found a party promotion effect: Democrats were less favorable to content moderation when misinformation aligned with their own views. Democrats were 11% less likely to want the social media company to remove headlines that aligned with their own political views. They were 13% less likely to report headlines that aligned with their own views as harmful. We didn’t find a similar effect for Republicans.
Our study shows that party promotion may be partly due to different perceptions of accuracy of the headlines. When we looked only at Democrats who agreed with our statement that the headlines were false, the party promotion effect was reduced to 7%.
Implications for social media platforms
We find it encouraging that the effect of party promotion is much smaller than the effect of internalized preferences, especially when accounting for accuracy perceptions. However, given the huge partisan differences in content moderation preferences, we believe that social media companies should look beyond the fact gap when designing content moderation policies that aim for bipartisan support.
Future research could explore whether getting Democrats and Republicans to agree on moderation processes – rather than moderation of individual pieces of content – could reduce disagreement. Also, other types of content moderation such as downweighting, which involves platforms reducing the virality of certain content, might prove to be less contentious. Finally, if the preference gap – the differences in deep-seated preferences between Democrats and Republicans – is rooted in value differences, platforms could try to use different moral framings to appeal to people on both sides of the partisan divide.
For now, Democrats and Republicans are likely to continue to disagree over whether removing misinformation from social media improves public discourse or amounts to censorship.
Fake news: A term recently popularized by politicians to refer to stories they do not agree with.
Bullshit: Information spread without concern for whether or not it's true.
Misinformation: Inaccurate information that is spread without the intention to deceive.
Disinformation: Information intended to deceive those who receive it.
Belief perseverance: The human tendency to want to continue believing what you already believe.
Confirmation bias: The human tendency for the brain to run through the text of something to select the pieces of it that confirm what you think is already true, while knocking away and ignoring the pieces that don't confirm what you believe.
Knowledge production: The negotiation of multiple truths as a way of understanding or "knowing" something.
Dr. Diana Daly of the University of Arizona is the Director of iVoices, a media lab helping students produce media from their narratives on technologies. Prof. Daly teaches about qualitative research, social media, and information quality at the University of Arizona.