Disinformation


Like many people trying to popularize critical-thinking and media-literacy methodologies, I hope that LogicCheck and my other projects can help people deal with a world seemingly spinning out of control due to polarization, distorted reasoning, and other factors impairing our ability to think through and solve problems. And one of the things many people involved in this fight have on their enemies list is disinformation.


Which is why I was surprised to discover that the fight against disinformation (or misinformation) has come under scrutiny from not just one but two prominent writers: Jay Caspian Kang of the New York Times, in an opinion piece entitled “Can We Get Smarter About Disinformation?”, and Matthew Yglesias, in a Substack post entitled “The misinformation problem seems like misinformation.”

The Kang piece references a long report in Harper’s Magazine by Joseph Bernstein titled “Bad News” (subtitled “Selling the story of disinformation”). Since this is not a full-blown logic-check, I’ll refrain from copying all three pieces into this post and will instead focus on a couple of key arguments (taken primarily from Bernstein’s Harper’s piece) regarding why shots fired at the disinformation threat may represent misfires (or even backfires).


To begin with, Bernstein highlights that many of those most charged up about stemming the disinformation threat have a pretty significant stake in being part of whatever solution they come up with.


For example, the Commission on Information Disorder, sponsored by the Aspen Institute think tank, was chaired by celebrity newscaster Katie Couric and included academics, business leaders (among them current and former executives from major Internet firms like Google and Facebook), and even a member of the British royal family.


The Commission was one of many efforts Bernstein refers to as “Big Disinfo,” in which previous gatekeepers (such as mainstream media leaders, former and current government officials, academics, and tech leaders) have stepped up to try to tame the Wild West of today’s radically uncontrolled media landscape.


The desire of previous information gatekeepers to return to their earlier role, all in service of protecting the public, of course, is understandable. But Bernstein also makes an interesting case for why tech giants might want in on a project that requires them to take responsibility for irresponsible use of their platforms.


For, according to Bernstein and other social-media critics, it is in the financial interest of Facebook et al. to push the perception that their platforms are extraordinarily persuasive, even if that means policing bad uses of this mind-shaping power (such as bots and trolls manipulating the public through disinformation) in order to protect “good” uses (such as advertisers paying Facebook lots of money to get that same public to buy shoes).


While arguments based on an analysis of who benefits (or who profits) are intuitive, they also tend to be uncharitable, since they assume the only reason people embrace a call to action is that they may profit from the results. Government and media professionals can be civic-minded, not just self-serving, and the downsides of today’s information free-for-all clearly need to be addressed by people with expertise and a commitment to the greater good.


That said, recent cases of media institutions and tech platforms labeling controversial content as disinformation and limiting its distribution demonstrate that both the cure and the disease carry downsides. Theories regarding a potential human-made origin for COVID or the provenance of Hunter Biden’s laptop are hardly conclusive, but the speed at which debate over these (potentially true) stories was shut down clearly demonstrates that we should not be so quick to hand a gatekeeping role to powerful entities made up of human beings who, like all of us, struggle to separate truth from fiction.


Another argument against Big Disinfo is definitional. After all, how can we go to war with disinformation if we lack a common understanding of what that term even means?


As with the “Who Profits?” argument, I tend to be cool toward claims that we must lock down our terminology before debate can continue. In my most recent book on critical thinking, for example, I dedicate a lot of pages to the lack of consensus over what that phrase means, but also point out that an agreed-upon definition of “critical thinking” is not required to forge ahead with educational projects dedicated to teaching students skills (like logic) that are well understood to be part of the critical thinker’s toolkit.


With this issue, however, terminology carries baggage that can lead to action, up to and including shutting down legitimate news sources accused of peddling disinformation. We saw similar weaponizing of language when former President Trump continually accused his critics (including legitimate news sources) of peddling “fake news,” another term that became more of a slur than a description of a well-understood phenomenon.


I’ve grown fond of this taxonomy regarding types of misinformation found in the modern information ecosystem, one that takes into account the type of content, the motivation of content creators, and the nature of content dissemination. While it may not be as simple as labeling something a lie, it does provide the basis for more fine-grained judgments about what constitutes genuine falsehood versus information we’d simply prefer not be shared.
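To make those three dimensions concrete, here is a minimal sketch in Python. The category names and the rule of thumb at the end are my own illustrative assumptions, not the actual labels from the taxonomy linked above.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical stand-in categories for the taxonomy's three dimensions.

class ContentType(Enum):
    SATIRE = "satire or parody"
    MISLEADING = "genuine information presented misleadingly"
    FABRICATED = "wholly invented content"

class CreatorMotivation(Enum):
    HONEST_ERROR = "shared in good faith"
    PROFIT = "created for clicks or revenue"
    DECEPTION = "created to deceive"

class Dissemination(Enum):
    ORGANIC = "passed along person to person"
    COORDINATED = "amplified by bots, trolls, or paid campaigns"

@dataclass
class Assessment:
    content: ContentType
    motivation: CreatorMotivation
    dissemination: Dissemination

    def genuine_falsehood(self) -> bool:
        # Crude rule of thumb: only fabricated content spread with
        # intent to deceive clears the bar for "genuine falsehood."
        return (self.content is ContentType.FABRICATED
                and self.motivation is CreatorMotivation.DECEPTION)

# Example: an honest mistake that spread organically counts as
# misinformation, but not as a deliberate lie.
viral_error = Assessment(ContentType.MISLEADING,
                         CreatorMotivation.HONEST_ERROR,
                         Dissemination.ORGANIC)
assert not viral_error.genuine_falsehood()
```

Even a toy version like this makes clear why a single “disinformation” label flattens distinctions that matter: the same piece of content lands in very different categories depending on why it was created and how it spread.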
