Digital Literacy 101 (Part II): Understanding Processes that can Strengthen Digital Literacy
In our last blog post, we discussed both the cognitive and technological processes that affect the ways that people engage with information and one another online. However, it is important not only to discuss the issues that lead to problems in online spaces, but also to look at some of the resources being used to strengthen digital literacy and address those problems. In our review, two additional sets of practices emerged: one focuses on changing the ways that users analyze online content, and the other on altering how people create and share content.
Both sets of practices are key to developing digital literacy. Thoughtful content analysis requires the ability to identify and address false or misleading information. Thoughtful content creation requires skills in promoting civil discourse. This post is not meant to provide an exhaustive list of the most promising practices for support and innovation. The examples below spotlight strategies that can be adopted independently or in combination, and tailored and improved upon over time to address context-specific digital literacy needs. Further, many of the interventions collected in our review of the literature are designed to address the cognitive and technological processes described in our last blog post.
Targeting Individuals and Cognitive Processes
Several digital literacy supports focus on educating learners about the cognitive and technological processes that contribute to the spread of misinformation and the breakdown of civil discourse online. They do so by providing the skills needed to make better judgments about the validity of information and better decisions about the content they encounter. Below we outline four methods that consistently arose in the literature.
- Encourage Source Interrogation — It may feel intuitive to attempt to directly address false or misleading content online, but research shows that it’s more effective to focus on the source of misinformation.¹ In fact, other studies show that focusing on the misleading content itself, rather than its source, can actually reinforce belief in misinformation. Teaching individuals how to evaluate and interrogate the sources of the information they encounter online may therefore be an effective strategy for combating misinformation.
- Promote a Pause to Focus on Accuracy of Information — It is easy to assume that most people want to share factual information, and that the accuracy of content would be a key factor in determining what information individuals share. However, judgments about accuracy can take time. Individuals may be quick to share articles without pausing to reflect on the accuracy of the headline or content, but recent studies show that intentionally pausing and reflecting on the accuracy of information may make individuals less likely to share inaccurate stories.²
- “Inoculate” Through Deliberate Exposure to Misinformation — Showing people explicit examples of misinformation may effectively build awareness and an ability to discern between authentic and misleading information.³ When people are given practice identifying false or misleading content, they are able to learn the features of potential misinformation (e.g., instilling a sense of urgency or having an abrasive tone). Several interventions in this space leverage educational games or interactive tutorials to intentionally expose participants to misinformation, and build users’ skills in judging the accuracy of information.
- Build Algorithmic Awareness — It can also be beneficial to inform people of how algorithms affect their access to information (e.g., through the micro-targeting, computational amplification, and filter bubbles/echo chambers discussed in our last blog post). Creating this awareness can help people recognize when algorithms are shaping what they see, retrain their newsfeeds to surface a variety of sources, and better scrutinize the news sources to which they are exposed. Strategies individuals can use to retrain their algorithms include identifying and avoiding websites that are not credible and whitelisting specific credible sites; a small sketch of such a whitelist filter follows this list. Actively sourcing information from multiple perspectives (e.g., multiple websites and political leanings) can also mitigate the impact of algorithms.⁴
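To make the idea of retraining one's feed concrete, here is a minimal Python sketch of a personal whitelist filter and a source-diversity count. The domain names and URLs are placeholders of our own choosing, not recommendations, and this is an illustration of the concept rather than a tool drawn from the literature cited above.

```python
from urllib.parse import urlparse

# Hypothetical, hand-curated whitelist of outlets the reader has vetted.
CREDIBLE_DOMAINS = {"apnews.com", "reuters.com", "bbc.com"}

def _domain(url):
    """Extract the bare domain from a URL."""
    return urlparse(url).netloc.lower().removeprefix("www.")

def filter_feed(article_urls):
    """Keep only articles whose domain appears on the personal whitelist."""
    return [url for url in article_urls if _domain(url) in CREDIBLE_DOMAINS]

def source_diversity(article_urls):
    """Count distinct outlets in a batch of articles; a low count can
    signal a narrowing feed dominated by a few sources."""
    return len({_domain(url) for url in article_urls})

feed = [
    "https://www.apnews.com/article/example-1",
    "https://www.unverified-news.example/story",
    "https://www.reuters.com/world/example-2",
]
print(filter_feed(feed))       # drops the unvetted source
print(source_diversity(feed))  # 3 distinct outlets before filtering
```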
We began the last blog post with a scenario that, for many, will sound familiar: we imagined scrolling through our favorite social media feed when a headline caught our eye. We had seen the headline a few times that morning, but then we noticed a short tagline added by a respected colleague, as well as the names of several mutual friends offering affirmations in the form of responses. Ultimately, we clicked the link and read the article. Revisiting this scenario, it is easy to see how these strategies might have led to a different outcome. Perhaps we could have ruled out the article as misinformation based on its source, or a short pause might have allowed us to pick up on the sensational nature of the headline.
Organizations including First Draft News and The Hewlett Foundation’s U.S. Democracy Program have developed programs centered on combating misinformation through research, training, and the development of resources that promote digital literacy skills, including the strategies above. Additionally, organizations like the Decision Education Foundation provide educational resources and frameworks to facilitate effective, high-quality decision-making processes.
Targeting Novel Technological Innovations
While the strategies above can be used by individuals to analyze the information they engage with, interventions in this space can also address the structural features of platforms that contribute to the spread of misinformation and the breakdown of civil discourse. Novel technological solutions can address these structural issues by diminishing access to false or misleading information, identifying irresponsible users, or prompting users to analyze information through warning messages, popups, or labels. Several of these strategies are explored in more detail below.
- Using Artificial Intelligence (AI) to Detect and Label Misinformation — AI tools can assess the accuracy of individual news articles against external sources, evaluate the factual reliability of entire news sites, and attach ratings or verification checks. These technologies have been increasingly used to expand the scope and scale of misinformation detection.⁵ For example, social media and other technology-based platforms are using AI tools to scale the work of human fact checkers, track suspicious accounts, attach warnings or additional context to content, reduce distribution, and remove misinformation that may contribute to imminent harm.⁶ Additionally, platforms as well as individuals can use AI-assisted plugins or extensions to help detect and block potential misinformation.⁷ All of these systems can inform down-ranking tools that work with search engines to make misinformation and disinformation less accessible. (A minimal classifier sketch after this list illustrates the general idea.)
- Automated Prompts — Implementing simple prompts that encourage the reader to focus on accuracy can increase the quality of news that people share on social media.⁸ For example, these prompts can ask individuals to rate and explain the accuracy of headlines before sharing. This strategy has not only made users more discerning in their subsequent sharing, but has also generated useful data to help inform down-ranking algorithms. Recent studies suggest that prompting approaches and digital literacy tips are not hindered by the scalability issues that face strict fact-checking approaches and, in fact, can be used in conjunction with crowd-sourced fact-checking to maximize efficiency.⁹
- Crowdsourcing — By drawing on the expertise and breadth of a large number of readers or viewers, crowdsourcing can be a useful means of combating misinformation.¹⁰ Rather than relying on a small number of professional fact checkers, platforms can recruit large numbers of laypersons to rate the accuracy of headlines or posts.¹¹ Despite potential concerns about political bias or lack of expertise, recent research suggests there is high agreement between groups of laypersons and professional fact checkers.¹² This approach is particularly useful because of its scalability, capitalizing on the large number of users already interacting with platforms. (See the crowd-rating sketch after this list.)
- Limiting Access to Content — Researchers have found that “information inundation” can lead to the perpetuation of misinformation because individuals do not have time to analyze the vast amounts of information presented to them.¹³ The study determined that information inundation produces political polarization even when information is accurate and unbiased, because individuals select only the information that confirms their beliefs. In an interview, Drakopoulos suggests “a simple tip to combat misinformation is for platforms to not censor content — which can have very adverse effects — and just control the amount of information that a user is exposed to at a time… This can be a limit in the size of the news feed or a random sampling of only a few articles per topic that are presented to users”.¹⁴ (A small sampling sketch after this list illustrates this idea.)
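As a rough illustration of how AI-based detection works under the hood, the sketch below (Python, scikit-learn) trains a supervised headline classifier on a tiny, invented dataset and scores a new headline. Production systems like those cited above rely on far larger corpora, richer signals, and human review; this is a teaching example only.

```python
# A toy headline classifier: TF-IDF features + logistic regression.
# The labeled examples below are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Scientists publish peer-reviewed study on vaccine safety",
    "SHOCKING: doctors don't want you to know this one trick",
    "Local council approves new budget after public hearing",
    "URGENT!!! Share before they delete this secret cure",
]
labels = [0, 1, 0, 1]  # 0 = likely reliable, 1 = likely misleading

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Probability that a new headline resembles the misleading examples;
# a platform might attach a warning label above some threshold.
new_headline = ["They are hiding the truth - share this now!"]
print(model.predict_proba(new_headline)[0][1])
```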
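The crowdsourcing strategy can also be illustrated in a few lines: average many lay ratings per item, then check how closely the crowd tracks professional fact checkers. All of the ratings below are invented for illustration; the high agreement reported in the research cited above comes from much larger studies.

```python
# Crowdsourced accuracy ratings: average many lay ratings per headline
# and compare them with professional fact-checker scores.
from statistics import mean, correlation  # correlation requires Python 3.10+

lay_ratings = {  # headline id -> accuracy ratings (1-7) from laypersons
    "headline_a": [6, 7, 6, 5, 7],
    "headline_b": [2, 1, 3, 2, 2],
    "headline_c": [5, 6, 4, 5, 6],
}
fact_checker_scores = {"headline_a": 7, "headline_b": 1, "headline_c": 5}

crowd_means = {h: mean(r) for h, r in lay_ratings.items()}
ids = sorted(crowd_means)
r = correlation([crowd_means[i] for i in ids],
                [fact_checker_scores[i] for i in ids])
print(crowd_means)
print(f"crowd vs. fact-checker correlation: {r:.2f}")
```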
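Finally, to make the feed-limiting suggestion concrete, this sketch randomly samples a small number of articles per topic before presenting a feed. The per-topic cap and the sample feed contents are our own illustrative choices, not parameters from the study quoted above.

```python
# Limiting information inundation: show only a small random sample
# of articles per topic rather than the full stream.
import random
from collections import defaultdict

ARTICLES_PER_TOPIC = 2  # illustrative cap

feed = [
    {"topic": "economy", "title": "Inflation report released"},
    {"topic": "economy", "title": "Markets react to rate decision"},
    {"topic": "economy", "title": "Op-ed: what the numbers mean"},
    {"topic": "health", "title": "New study on sleep and memory"},
    {"topic": "health", "title": "Hospital staffing update"},
]

# Group articles by topic, then sample at most the cap from each group.
by_topic = defaultdict(list)
for article in feed:
    by_topic[article["topic"]].append(article)

limited_feed = []
for topic, articles in by_topic.items():
    limited_feed.extend(random.sample(articles, min(ARTICLES_PER_TOPIC, len(articles))))

for article in limited_feed:
    print(article["topic"], "-", article["title"])
```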
Some social media platforms have made notable progress in the pursuit of many of these strategic interventions. Tools like Twitter’s Birdwatch, which leverages elements of crowdsourcing and algorithmic labeling of material likely to be misleading, provide a strong case for the continued development of novel technologies that can change the ways individuals access and process content online.
Changing Creation and Sharing of Content
In addition to changing the ways that people access and analyze information, interventions can encourage digital literacy skills around civil discourse by empowering the thoughtful contribution and amplification of content in digital spaces. These skills enable learners to examine and discuss disagreements in productive ways.
There are multiple frameworks that focus on promoting civil discourse skills, including Rowe and Spencer’s strategies targeted at the postsecondary level, Doubet and Hockett’s preK-12 focused strategies, and Junco and Chickering’s institution-focused strategies. These strategies offer considerable promise in enabling individuals and groups to break down the barriers that contribute to filter bubbles and echo chambers, and to build opportunities for communication across lines of difference. The following strategies can be used to empower individuals to engage in civil discourse online.
- Ask Clarifying Questions — One technique to promote civil discourse online is to ask clarifying questions. Encouraging an individual to expand on their point of view may provide the context needed to understand why that person holds the opinion they do. With that greater context established, someone may be able to find areas of agreement and use them as a starting point for discussion. The individuals may not change their original opinions by the end of the discussion; however, providing space for constructive conversation, with a balance of positive and negative emotions and a better balance of "I-statements" and "you-statements," can contribute to a greater appreciation for where each person is coming from.¹⁵
- Consider Alternative Perspectives — Another technique for encouraging civil discourse is alternative perspective-taking. Before commenting on or engaging with online information, pausing to consider alternative perspectives may help an individual craft a more thoughtful response. Beyond pausing, considering an alternative perspective includes seeking to understand how someone arrived at their viewpoint and the underlying beliefs and thought processes that support it. By considering these perspectives, a person may be able to find areas of agreement or learn how to respond in a manner that acknowledges the original author’s belief rather than attacking them for it.
Returning to the example with which we started this blog series: after sharing the article to our personal feed, a relative writes something that sparks a chaotic series of reactions. In the scenario, we chose to remove the relative’s comments and block them from our feed. This action has serious consequences, possibly including the strengthening of both our echo chamber and theirs. Alternatively, by using the strategies above, the outcome might have been an opportunity to strengthen ties with our relative, expand perspectives, and limit the spread of misleading content or ideas.
Initiatives including The Dynamical Conversations Lab, Living Room Conversations, and the University of Arizona’s National Institute for Civil Discourse have developed programming that promotes healthy, productive civil debate, leveraging methods including the two mentioned above.