Information

Diana Daly

Key points

  • Social media transforms truth, information, and knowledge into tangible, actively debated entities online, tracked by metrics like page views and shares.
  • The chapter explores the prevalence of “fake news” and “post-truth” in the social media era, contrasting traditional newspaper reliance with the instant dissemination through platforms.
  • Objectivity, presenting universally valid truths, is contrasted with subjectivity, where individual perspectives shape event presentations, including subjective interpretations of historical events.
  • Traditional, elitist models of knowledge production are challenged by platforms like Wikipedia, which emphasize collective intelligence and the negotiation of multiple truths, and which can produce surprisingly accurate knowledge.
  • The chapter examines motivations for creating and spreading “fake news,” including spreading truth, gaining influence, and generating profit, alongside the democratization of content creation on the internet.
  • Psychological factors like belief perseverance and confirmation bias contribute to the acceptance of misinformation, exacerbated by social media filter bubbles reinforcing existing beliefs.


Section 1: Objectivity and subjectivity

To be objective is to present a truth in a way that would be true for anyone, anywhere, so that the truth exists regardless of anyone’s perspective. The popular notion of what is true is often based on this expectation of objective truth.

The expectation of objective truth makes sense in some situations – related to physics and mathematics, for example. However, humans’ presentations of both current and historic events have always been subjective – that is, one or more subjects with a point of view have presented the events as they see or remember them. When subjective accounts disagree, journalists and historians face the tricky process of figuring out why the accounts disagree and piecing together the evidence beneath them to learn what is true.

Multiple truths = knowledge production

In US society, we have not historically thought about knowledge as being a negotiation among multiple truths. Even at the beginning of the 21st century, the production of knowledge was considered the domain of those privileged with the highest education – usually from the most powerful sectors of society. For example, when I was growing up, the Encyclopedia Britannica was the authority I looked to for general information about everything. I did not know who the authors were, but I trusted they were experts.

Enter Wikipedia, the online encyclopedia, and everything changed.

 


The first version of Wikipedia was founded on a model much closer to the Encyclopedia Britannica’s than today’s Wikipedia is. It was called Nupedia, and only experts were invited to contribute. But then one of the co-founders, Jimmy Wales, decided to try a new model of knowledge production based on the concept of collective intelligence, written about by Pierre Lévy. The belief underpinning collective intelligence, and Wikipedia, is that no one knows everything, but everyone knows something. Everyone was invited to contribute to Wikipedia. And everyone still is.

When many different perspectives are involved, there can be multiple and even conflicting truths around the same topic. And there can be intense competition to put forth some preferred version of events. But the more perspectives you see, the more knowledge you have about the topic in general. The results of a negotiation between multiple truths can be surprisingly accurate. A 2012 study by Oxford University comparing the accuracy of Wikipedia to that of other online encyclopedias found Wikipedia more accurate than the Encyclopedia Britannica.

Section 2: What are truths?

 

And the third ingredient of a truth? That is you, the human reader. As an interpreter, and sometimes sharer/spreader of online information and “news”, you must keep an active mind. You are catching up with that truth in real time. Is it true, based on evidence available to you from your perspective? Even if it once seemed true, has evidence emerged that reveals it to be false? Many truths are not true forever; as we learn more, what once seemed true is often revealed to be false.

Truths are not always profitable, so they compete with a lot of other types of content online. As a steward of the world of online information, you have to work to keep truths in circulation.

 

Student Insights: My journey with technology (video by Abby Arnold, Spring 2021)

Graphic profile image provided by the student author depicting a smiling woman with long, blonde hair, wearing a royal blue University of Arizona sweatshirt. The background is pale blue with white polka dots.

 

Infographic: The top of the graphic reads “Lies Spread Faster Than Truth” and “10 years of Twitter data.” on a dark blue background. The rest of the graphic is divided in half, with the left side in yellow and the right in salmon. The left side reads “Retweets of Fact” and has four points below it: “Factual inquiry is slow” (with an image of a turtle); “truth can be predictable” (with an image of people facing each other at a table with a clock above them); “true posts elicited sadness & trust” (with a geometric sad face, a text box, and two hands about to connect); “bots spread truth and lies equally” (with an image of a robot face in the center, an angel on one side, and a devil on the other). The right side reads “Retweets of Fiction” and has four points: “Spread six times faster!” (with an image of a running cheetah); “New! Exciting!” (with an image of three faces, one excited, one angry, and another spreading gossip); “Lies led to surprise & disgust!” (with images of angry and disgusted faces); and “humans seem to prefer sharing lies to truth!” (with a face showing an open mouth and asterisks for eyes).
Infographic by Diana Daly based on the article by Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151. (Image: Lies Spread Faster Than Truth by Diana Daly, CC BY-NC-SA.)

Section 3: Why people spread “fake news” and bad information

“Fake news” has multiple meanings in our culture today. When politicians and online discussants refer to stories as fake news, they are often referring to news that does not match their perspective. But there are news stories generated today that are better described as “fake” – based on no evidence.

So why is “fake news” more of an issue today than it was at some points in the past?

Historically, “news” has been the presentation of information on current events in our world. In past eras of traditional media, a much smaller number of people published news content. There were codes of ethics associated with journalism, such as the Journalist’s Creed, written by Walter Williams in 1914. Not all journalists followed this or any other code of ethics, but in the past, those who behaved unethically were often called out by their colleagues and became unemployable at trusted news organizations.

Today, thanks to Web 2.0 and social media sites, nearly anyone can create and widely circulate stories branded as news; the case study of a story by Eric Tucker in this New York Times lesson plan is a good example. And the huge mass of “news” stories that results involves stories created based on a variety of motivations. This is why Oxford Dictionaries made the term post-truth their word of the year for 2016.

People or agencies may spread stories as news online to:

  • spread truth
  • influence others
  • generate profit

Multiple motivations may drive someone to create or spread a story not based on evidence. But when spreading truth is not one of the story creators’ concerns, you could justifiably call that story “fake news.” I try not to use that term these days though; it’s too loaded with politics. I prefer to call “news” unconcerned with truth by its more scientific name…

Bullshit!

Outdoor image of four tied bags of trash (two white, two blue) on grass, with a plywood sign in front of them reading “quality bullshit $2 a bag”.
Bullshit is a scientific term for information spread without concern for truth.

Think I’m bullshitting you when I say bullshit is the scientific name for fake news? Well, I’m not. There are information scientists and philosophers who study different types of bad information, and here is a basic overview of their classifications of bad information:

  • misinformation = inaccurate information; often spread without intention to deceive
  • disinformation = information intended to deceive
  • bullshit = information spread without concern for whether or not it’s true

Professors Kay Mathiesen and Don Fallis at the University of Arizona have written that much of the “fake news” generated in the 2016 U.S. election season was bullshit, because its producers were concerned with winning influence or profit or both, but were unconcerned with whether it was true. Examples include news generated by a fake news factory in Macedonia.

 

Student Insights: Searchability: The Helpful, but Inescapable Nature of Online Media (writing by Devon, Spring 2021)

iVoices logo; light blue circle with a dark navy, old fashioned condenser microphone on a red stand with the word 'iVoices' at the bottom right.

 

Respond to this case study: The author states that misinformation, disinformation, and bullshit lead to confirmation bias. What is a real-world example of when false information led to confirmation bias?

 

Section 4: Bugs in the human belief system

Illustration of four men in top hats and bowler hats and coats, running, holding old-timey newspapers with titles like 'fake news', 'humbug news', and 'cheap sensation'.
Fake news and bad information are more likely to be believed when they confirm what we already believe.

We believe bullshit, fake news, and other types of deceptive information based on numerous interconnected human behaviors. Forbes published an article, Why Your Brain May Be Wired To Believe Fake News, which broke down a few of these with the help of neuroscientist Daniel Levitin. Levitin cited two well-researched human tendencies that lead us to swallow certain types of information while ignoring others.

  • One tendency is belief perseverance: You want to keep believing what you already believe, treasuring a preexisting belief like Gollum treasures the ring in Tolkien’s Lord of the Rings series.
  • The other tendency is confirmation bias: the brain runs through the text of something to select the pieces of it that confirm what you think is already true, while knocking away and ignoring the pieces that don’t confirm what you believe.

These tendencies to believe what we want to hear and see are exacerbated by social network-enabled filter bubbles (described in Chapter 4 of this book). When we get our news through social media, we are less likely to see opposing points of view, because social networking sites filter them out and we are unlikely to seek them out on our own.

There is concern that youth and students are particularly vulnerable to believing deceptive online content. But I believe that with some training, youth are going to be better at “reading” than those older than them. Youth are accustomed to online content layered with pictures, links, and insider conversations and connections. The trick to “reading” in the age of social media is to read all of these layers, not just the text.

 

Student insights: Social media affordances (video by Kendall Peterson, Spring 2021)

Graphic profile image provided by the student author depicting a woman with long, brown hair, a black, sleeveless shirt, and a border collie-looking dog behind her with its head on her shoulder. The background is pale with black stars (some filled in black, all outlined in black).

 

Tips for “reading” social media news stories:

An illustration in Art Nouveau style designed by Sadie Wendell Mitchell depicts a woman with black hair pulled up, in a flowy, cream dress, reading a book in a library, surrounded by books, some disheveled, some stacked. The three named books in the piece have these titles: “Economy”, “The Psychology of the Male Human”, and “The Study of Bugology”. A sign above reads “DO IT NOW”.
Reading today means ingesting multiple levels of a source simultaneously.
  1. Put aside your biases. Recognize and put aside your belief perseverance and your confirmation bias. You may want a story to be true or untrue, but you probably don’t want to be fooled by it.
  2. Read the story’s words AND its pictures. What are they saying? What are they NOT saying?
  3. Read the story’s history AND its sources. Who / where is this coming from? What else has come from there and from them?
  4. Read the story’s audience AND its conversations. Who is this source speaking to, and who is sharing and speaking back? How might they be doing so in coded ways? (Here’s an example to make you think about images and audience, whether or not you agree with Filipovic’s interpretation.)
  5. Before you share, consider fact-checking. Reliable fact-checking sites at the time of this writing include:

That said, no one fact-checking site is perfect; neither is any one news site. All are subjective and liable to be taken over by partisan interests or trolls.

 

@Reality — Social Media and Ourselves podcast


Release date: November 1st 2021

The internet can seem like a faraway place. It can seem fictional and like it cannot affect you. But today we see relationships, politics, and cultural movements echoing attitudes that originate on the web. How can this be? In this episode, we listen to stories from people who thought they were impervious to the internet’s influence. Instead, they found their realities perturbed by things they first saw on-screen. Produced and narrated by Gabe Stultz with support from Jacquie Kuru and Diana Daly of iVoices Media Lab at the University of Arizona. All music in this episode by Gabe Stultz.


Respond to this podcast episode: How did the podcast episode “@Reality” use interviews, student voices, or sounds to demonstrate a current or past social trend or phenomenon? If you were making a sequel to this episode, what voices or sounds would you include to help listeners understand more about this trend, and why?

 

Core Concepts and Questions

Core Concepts

belief perseverance

the human tendency to want to continue believing what you already believe

bullshit

information spread without concern for whether or not it’s true

confirmation bias

the human tendency for the brain to run through the text of something to select the pieces of it that confirm what you think is already true, while knocking away and ignoring the pieces that don’t confirm what you believe

disinformation

information intended to deceive those who receive it

fake news

a term recently popularized by politicians to refer to stories they do not agree with

knowledge construction

the negotiation of multiple truths as a way of understanding or “knowing” something

misinformation

inaccurate information spread without the intention to deceive

 

 

Core Questions

A. Questions for qualitative thought:

  1. How can individuals maintain critical thinking skills and resist confirmation bias in an age of “fake news” and “bullshit”?
  2. How can platforms like Wikipedia and other collaborative models of information creation address issues of power and bias while fostering accurate and diverse knowledge?
  3. What ethical considerations should guide our engagement with online information and the stories we choose to share?
  4. How can educators prepare young people for the challenges and opportunities of online information while recognizing their potential advantages in navigating this complex landscape?


 

Related Content

Read It: It’s not just about facts: Democrats and Republicans have sharply different attitudes about removing misinformation from social media

A digitally-altered image of a person sitting with a laptop in their lap with their fingers on the keyboard, the screen showing a stylized head with an elongated nose piercing through a stylized sheet of paper. The background is black and there are digitized swirls of colorful, ethereal waves superimposed.
Your political leanings go a long way to determine whether you think it’s a good or bad idea to take down misinformation.
Johner Images via Getty Images

Ruth Elisabeth Appel, Stanford University

Misinformation is a key global threat, but Democrats and Republicans disagree about how to address the problem. In particular, Democrats and Republicans diverge sharply on removing misinformation from social media.

Only three weeks after the Biden administration announced the Disinformation Governance Board in April 2022, the effort to develop best practices for countering disinformation was halted because of Republican concerns about its mission. Why do Democrats and Republicans have such different attitudes about content moderation?

My colleagues Jennifer Pan and Margaret E. Roberts and I found in a study published in the journal Science Advances that Democrats and Republicans not only disagree about what is true or false, but also differ in their internalized preferences for content moderation. Internalized preferences may be related to people’s moral values, identities or other psychological factors, or to people internalizing the preferences of party elites.

And though people are sometimes strategic about wanting misinformation that counters their political views removed, internalized preferences are a much larger factor in the differing attitudes toward content moderation.

Internalized preferences or partisan bias?

In our study, we found that Democrats are about twice as likely as Republicans to want to remove misinformation, while Republicans are about twice as likely as Democrats to consider removal of misinformation as censorship. Democrats’ attitudes might depend somewhat on whether the content aligns with their own political views, but this seems to be due, at least in part, to different perceptions of accuracy.

Previous research showed that Democrats and Republicans have different views about content moderation of misinformation. One of the most prominent explanations is the “fact gap”: the difference in what Democrats and Republicans believe is true or false. For example, a study found that both Democrats and Republicans were more likely to believe news headlines that were aligned with their own political views.

But it is unlikely that the fact gap alone can explain the huge differences in content moderation attitudes. That’s why we set out to study two other factors that might lead Democrats and Republicans to have different attitudes: preference gap and party promotion. A preference gap is a difference in internalized preferences about whether, and what, content should be removed. Party promotion is a person making content moderation decisions based on whether the content aligns with their partisan views.

We asked 1,120 U.S. survey respondents who identified as either Democrat or Republican about their opinions on a set of political headlines that we identified as misinformation based on a bipartisan fact check. Each respondent saw one headline that was aligned with their own political views and one headline that was misaligned. After each headline, the respondent answered whether they would want the social media company to remove the headline, whether they would consider it censorship if the social media platform removed the headline, whether they would report the headline as harmful, and how accurate the headline was.

Deep-seated differences

When we compared how Democrats and Republicans would deal with headlines overall, we found strong evidence for a preference gap. Overall, 69% of Democrats said misinformation headlines in our study should be removed, but only 34% of Republicans said the same; 49% of Democrats considered the misinformation headlines harmful, but only 27% of Republicans said the same; and 65% of Republicans considered headline removal to be censorship, but only 29% of Democrats said the same.

Even in cases where Democrats and Republicans agreed that the same headlines were inaccurate, Democrats were nearly twice as likely as Republicans to want to remove the content, while Republicans were nearly twice as likely as Democrats to consider removal censorship.

We didn’t test explicitly why Democrats and Republicans have such different internalized preferences, but there are at least two possible reasons. First, Democrats and Republicans might differ in factors like their moral values or identities. Second, Democrats and Republicans might internalize what the elites in their parties signal. For example, Republican elites have recently framed content moderation as a free speech and censorship issue. Republicans might use these elites’ preferences to inform their own.

When we zoomed in on headlines that are either aligned or misaligned for Democrats, we found a party promotion effect: Democrats were less favorable to content moderation when misinformation aligned with their own views. Democrats were 11% less likely to want the social media company to remove headlines that aligned with their own political views. They were 13% less likely to report headlines that aligned with their own views as harmful. We didn’t find a similar effect for Republicans.

Our study shows that party promotion may be partly due to different perceptions of accuracy of the headlines. When we looked only at Democrats who agreed with our statement that the headlines were false, the party promotion effect was reduced to 7%.

Implications for social media platforms

We find it encouraging that the effect of party promotion is much smaller than the effect of internalized preferences, especially when accounting for accuracy perceptions. However, given the huge partisan differences in content moderation preferences, we believe that social media companies should look beyond the fact gap when designing content moderation policies that aim for bipartisan support.

Future research could explore whether getting Democrats and Republicans to agree on moderation processes – rather than moderation of individual pieces of content – could reduce disagreement. Also, other types of content moderation such as downweighting, which involves platforms reducing the virality of certain content, might prove to be less contentious. Finally, if the preference gap – the differences in deep-seated preferences between Democrats and Republicans – is rooted in value differences, platforms could try to use different moral framings to appeal to people on both sides of the partisan divide.

For now, Democrats and Republicans are likely to continue to disagree over whether removing misinformation from social media improves public discourse or amounts to censorship.

Ruth Elisabeth Appel, Ph.D. Candidate in Communication, Stanford University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 


About the author

Dr. Diana Daly of the University of Arizona is the Director of iVoices, a media lab helping students produce media from their narratives on technologies. Prof. Daly teaches about qualitative research, social media, and information quality at the University of Arizona.


License


Humans R Social Media - 2024 "Living Book" Edition Copyright © 2024 by Diana Daly is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
