Before diving into the subject of rhetoric, I wanted to share some thoughts on an article that appeared in The New York Times (reproduced below - apologies for the formatting). The piece proposes an Information Literacy strategy that the author claims can help people sort good information from bad without being drawn into malevolent quagmires through direct engagement with the material under scrutiny.
The ideas in the piece originate in the work of Michael Caulfield, a digital literacy expert at Washington State University Vancouver. Like many of us, Caulfield is trying to find ways to deal with the massive volume of misinformation on the Internet, including “fake news” and misleading arguments that support vile opinions and dangerous conspiracy theories.
Traditional Information Literacy approaches propose that students analyze questionable material by reading, listening to, or watching it closely enough to identify facts and claims that can then be scrutinized through independent research. For example, a web site arguing that global warming is false likely links to sources supporting that claim, sources that can be reviewed to determine whether they represent legitimate scientific opinion or flummery.
But engaging with information carries a risk, argues Caulfield: the risk that someone who digs too deeply into problematic material might get sucked into a world of false beliefs and dangerous opinions. This risk is heightened, he claims, by the strategies the creators of such sources use to draw people into their world, strategies that exploit frailties of the human mind that can lead to error, along with misguided desires to belong.
To avoid such dangerous rabbit holes, the article outlines a strategy that still lets you use Information Literacy techniques to scrutinize a source without being contaminated by long-term exposure to the potentially dangerous original.
That strategy involves four steps that spell out the tidy acronym SIFT (Stop; Investigate the source; Find better coverage; and Trace claims, quotes and media to the original context). For example (an example taken from the article, as a matter of fact), rather than dive into the work of an anti-vaccine activist, you can instead start by Googling his name, which – in this example – reveals him to be a conspiracy theorist. Having investigated the source and found it wanting (SIFT step 2), one is free to search out better coverage of vaccines, coverage that avoids false beliefs and conspiratorial thinking.
Given that the story is presented as a critique of critical-thinking practices that encourage people to engage with ideas, even ones they might find repugnant, I wanted to share what I found compelling, and not so compelling, about its arguments.
On the plus side, the article highlights a vital point about the value of our attention. Michael Goldhaber, a former theoretical physicist, has written a great deal about the “attention economy,” pointing out that, in a world with only so many waking hours, the scarcest (and thus most valuable) commodity we have is the time we devote to things. Given this, time devoted to bigoted or conspiratorial web sites – even time spent debunking them – is time that could be put into more productive (and less risky) activities. So I appreciate Caulfield’s suggestion that we start treating this commodity as precious, rather than continuing to give it away to people who may not have our best interests in mind.
The SIFT method proposed in the story is also a valuable technique for quickly reaching a conclusion about a questionable source of information. For if fast independent verification can show that a source is unreliable, why waste valuable attention combing through the original?
That said, quickly discovering that someone peddling a questionable set of beliefs is actually a conspiracy theorist seems like a pretty easy case. But what about situations that are not so clear cut?
In discussions of the Principle of Charity (a critical-thinking method that asks you to argue with the strongest version of an opponent’s argument, rather than dwelling on weak points that don’t necessarily bring the whole argument down), I mention situations that might not warrant applying that principle, such as arguments in favor of perpetual motion machines or crank theories of race. But I also point out how easy it is to consign arguments you don’t like to easily dismissible categories, sparing yourself the trouble of engaging with potentially legitimate (if controversial) ideas.
In addition to the attention and honey-trap threats described in the article, our information age presents other risks, including the tendency to lump any idea we do not like in with propositions outside the bounds of legitimate discussion. Discussions of hot topics, like how scarce resources (such as COVID vaccines) should be distributed or how the nation should come to terms with race, are challenging enough. But what happens if we start sifting (pun intended) information and opinions by using third-party sources (Google, Wikipedia, etc.) to confirm that people and opinions we don’t like are too monstrous and dangerous to even listen to or lay eyes on?
Speaking of sources, if the people trying to deceive us are ingenious enough to create sophisticated sites and stories designed to drag us into their lunatic and bigoted quagmires, what’s keeping them from corrupting the very third-party sites we’re supposed to move laterally towards in order to determine who is safe to trust? While I’m not Wikipedia-phobic, is that really the site we want to use for quick-and-dirty verification of whom to trust on controversial matters? And given that whole industries are dedicated to gaming Google, how much faith should we invest in the algorithm that determines what appears on the first page of its search results?
Given that the specific suggestions contained in the NYT piece are slight refinements of conventional Information Literacy practices (double-check sources, verify through legitimate third parties, reverse-search images, etc.), the major new thing being added to the discussion seems to be the idea that some information sources are not just wrong but too dangerous to engage with at all, even indirectly by finding out what other people have to say about them. While I can think of a few scary places where that might be true (QAnon comes to mind), we should be at least as afraid that those trying to suck us into their world might develop a knack for getting our own beliefs cast into the darkness.
So rather than protect ourselves and our children from dangerous ideas we are supposedly too fragile to handle, perhaps we should instead make sure we and those we teach and raise are too tough-minded and strong-willed to be hustled by con men whose arguments we are more than equipped to dismantle.
Original Story
Don’t Go Down the Rabbit Hole by Charlie Warzel, New York Times, Feb 18, 2021
Critical thinking, as we’re taught to do it, isn’t helping in the fight against misinformation.
For an academic, Michael Caulfield has an odd request: Stop overthinking what you see online.
Mr. Caulfield, a digital literacy expert at Washington State University Vancouver, knows all too well that at this very moment, more people are fighting for the opportunity to lie to you than at perhaps any other point in human history.
Misinformation rides the greased algorithmic rails of powerful social media platforms and travels at velocities and in volumes that make it nearly impossible to stop. That alone makes information warfare an unfair fight for the average internet user. But Mr. Caulfield argues that the deck is stacked even further against us. That the way we’re taught from a young age to evaluate and think critically about information is fundamentally flawed and out of step with the chaos of the current internet.
“We’re taught that, in order to protect ourselves from bad information, we need to deeply engage with the stuff that washes up in front of us,” Mr. Caulfield told me recently. He suggested that the dominant mode of media literacy (if kids get taught any at all) is that “you’ll get imperfect information and then use reasoning to fix that somehow. But in reality, that strategy can completely backfire.”
In other words: Resist the lure of rabbit holes, in part, by reimagining media literacy for the internet hellscape we occupy.
It’s often counterproductive to engage directly with content from an unknown source, and people can be led astray by false information. Influenced by the research of Sam Wineburg, a professor at Stanford, and Sarah McGrew, an assistant professor at the University of Maryland, Mr. Caulfield argued that the best way to learn about a source of information is to leave it and look elsewhere, a concept called lateral reading.
For instance, imagine you were to visit Stormfront, a white supremacist message board, to try to understand racist claims in order to debunk them. “Even if you see through the horrible rhetoric, at the end of the day you gave that place however many minutes of your time,” Mr. Caulfield said. “Even with good intentions, you run the risk of misunderstanding something, because Stormfront users are way better at propaganda than you. You won’t get less racist reading Stormfront critically, but you might be overloaded by information and overwhelmed.”
Our current information crisis, Mr. Caulfield argues, is an attention crisis.
“The goal of disinformation is to capture attention, and critical thinking is deep attention,” he wrote in 2018. People learn to think critically by focusing on something and contemplating it deeply — to follow the information’s logic and the inconsistencies.
That natural human mind-set is a liability in an attention economy. It allows grifters, conspiracy theorists, trolls and savvy attention hijackers to take advantage of us and steal our focus. “Whenever you give your attention to a bad actor, you allow them to steal your attention from better treatments of an issue, and give them the opportunity to warp your perspective,” Mr. Caulfield wrote.
One way to combat this dynamic is to change how we teach media literacy: Internet users need to learn that our attention is a scarce commodity that is to be spent wisely.
In 2016, Mr. Caulfield met Mr. Wineburg, who suggested modeling the process after the way professional fact checkers assess information. Mr. Caulfield refined the practice into four simple principles:
1. Stop.
2. Investigate the source.
3. Find better coverage.
4. Trace claims, quotes and media to the original context.
Otherwise known as SIFT.
Mr. Caulfield walked me through the process using an Instagram post from Robert F. Kennedy Jr., a prominent anti-vaccine activist, falsely alleging a link between the human papillomavirus vaccine and cancer. “If this is not a claim where I have a depth of understanding, then I want to stop for a second and, before going further, just investigate the source,” Mr. Caulfield said. He copied Mr. Kennedy’s name in the Instagram post and popped it into Google. “Look how fast this is,” he told me as he counted the seconds out loud. In 15 seconds, he navigated to Wikipedia and scrolled through the introductory section of the page, highlighting with his cursor the last sentence, which reads that Mr. Kennedy is an anti-vaccine activist and a conspiracy theorist.
“Is Robert F. Kennedy Jr. the best, unbiased source on information about a vaccine? I’d argue no. And that’s good enough to know we should probably just move on,” he said.
He probed deeper into the method to find better coverage by copying the main claim in Mr. Kennedy’s post and pasting that into a Google search. The first two results came from Agence France-Presse’s fact-check website and the National Institutes of Health. His quick searches showed a pattern: Mr. Kennedy’s claims were outside the consensus — a sign they were motivated by something other than science.
The SIFT method and the instructional teaching unit (about six hours of class work) that accompanies it have been picked up by dozens of universities across the country and in some Canadian high schools. What is potentially revolutionary about SIFT is that it focuses on making quick judgments. A SIFT fact check can and should take just 30, 60, 90 seconds to evaluate a piece of content.
The four steps are based on the premise that you often make a better decision with less information than you do with more. Also, spending 15 minutes to determine a single fact in order to decipher a tweet or a piece of news coming from a source you’ve never seen before will often leave you more confused than you were before. “The question we want students asking is: Is this a good source for this purpose, or could I find something better relatively quickly?” Mr. Caulfield said. “I’ve seen in the classroom where a student finds a great answer in three minutes but then keeps going and ends up won over by bad information.”
SIFT has its limits. It’s designed for casual news consumers, not experts or those attempting to do deep research. A reporter working on an investigative story or trying to synthesize complex information will have to go deep. But for someone just trying to figure out a basic fact, it’s helpful not to get bogged down. “We’ve been trained to think that Googling or just checking one resource we trust is almost like cheating,” he said. “But when people search Google, the best results may not always be first, but the good information is usually near the top. Often you see a pattern in the links of a consensus that’s been formed. But deeper into the process, it often gets weirder. It’s important to know when to stop.”
Christina Ladam, an assistant political science professor at the University of Nevada, Reno, has seen the damage firsthand. While teaching an introductory class as a Ph.D. student in 2015, she noticed her students had trouble vetting sources and distinguishing credible news from untrustworthy information. During one research assignment on the 2016 presidential race, multiple students cited a debunked claim from a satirical website claiming that Ben Carson, a candidate that year, had been endorsed by the Ku Klux Klan. “Some of these students had never had somebody even talk to them about checking sources or looking for fake news,” she told me. “It was just uncritical acceptance if it fit with the narrative in their head or complete rejection if it didn’t.”
Ms. Ladam started teaching a SIFT-based media literacy unit in her political science classes because of the method’s practical application. The unit is short, only two weeks long. Her students latched onto quick tricks like how to hover over a Twitter handle and see if the account looks legitimate or is a parody account or impersonation. They learned how to reverse image search using Google to check if a photo had been doctored or if similar photos had been published by trusted news outlets. Students were taught to identify claims in Facebook or Instagram posts and, with a few searches, decide — even if they’re unsure of the veracity — whether the account seems to be a trustworthy guide or if they should look elsewhere.
The goal isn’t to make political judgments or to talk students out of a particular point of view, but to try to get them to understand the context of a source of information and make decisions about its credibility. The course is not precious about overly academic sources, either.
“The students are confused when I tell them to try and trace something down with a quick Wikipedia search, because they’ve been told not to do it,” she said. “Not for research papers, but if you’re trying to find out if a site is legitimate or if somebody has a history as a conspiracy theorist and you show them how to follow the page’s citation, it’s quick and effective, which means it’s more likely to be used.”
As a journalist who can be a bit of a snob about research methods, it makes me anxious to type this advice. Use Wikipedia for quick guidance! Spend less time torturing yourself with complex primary sources! A part of my brain hears this and reflexively worries these methods could be exploited by conspiracy theorists. But listening to Ms. Ladam and Mr. Caulfield describe disinformation dynamics, it seems that snobs like me have it backward.
Think about YouTube conspiracy theorists or many QAnon or anti-vaccine influencers. Their tactic, as Mr. Caulfield noted, is to flatter viewers while overloading them with three-hour videos laced with debunked claims and pseudoscience, as well as legitimate information. “The internet offers this illusion of explanatory depth,” he said. “Until 20 seconds ago, you’d never thought about, say, race and IQ, but now, suddenly, somebody is treating you like an expert. It’s flattering your intellect, and so you engage, but you don’t really stand a chance.”
What he described is a kind of informational hubris we have that is quite difficult to fight. But what SIFT and Mr. Caulfield’s lessons seem to do is flatter their students in a different way: by reminding us our attention is precious.
The goal of SIFT isn’t to be the arbiter of truth but to instill a reflex that asks if something is worth one’s time and attention and to turn away if not. Because the method is less interested in political judgments, Mr. Caulfield and Ms. Ladam noticed, students across the political spectrum are more likely to embrace it. By the end of the two-week course, Ms. Ladam said, students are better at finding primary sources for research papers. In discussions they’re less likely to fall back on motivated reasoning. Students tend to be less defensive when confronted with a piece of information they disagree with. Even if their opinions on a broader issue don’t change, a window is open that makes conversation possible. Perhaps most promising, she has seen her students share the methods with family members who post dubious news stories online. “It sounds so simple, but I think that teaching people how to check their news source by even a quick Wikipedia can have profound effects,” she said.
SIFT is not an antidote to misinformation. Poor media literacy is just one component of a broader problem that includes more culpable actors like politicians, platforms and conspiracy peddlers. If powerful, influential people with the ability to command vast quantities of attention use that power to warp reality and platforms don’t intervene, no mnemonic device can stop them. But SIFT may add a bit of friction into the system. Most important, it urges us to take the attention we save with SIFT and apply it to issues that matter to us.
“Right now we are taking the scarcest, most valuable resource we have — our attention — and we’re using it to try to repair the horribly broken information ecosystem,” Mr. Caulfield said. “We’re throwing good money after bad.”
Our focus isn’t free, and yet we’re giving it away with every glance at a screen. But it doesn’t have to be that way. In fact, the economics are in our favor. Demand for our attention is at an all-time high, and we control supply. It’s time we increased our price.