I have been wondering how we got from the freedom to differ on how exactly purgatory works to dismissing the scientific evidence on climate change or vaccination on grounds that amount to beliefs.
ChatGPT 4-o helpfully pointed me, as a starting point for understanding this, to the work of Dan Kahan at Yale and his research on cultural cognition and motivated reasoning (e.g. Motivated Reasoning and its Cognates, extracted from Dan M. Kahan, Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law, in: Harvard Law Review, vol. 125 (2011), pp. 1-77).
In a blog entry presenting a talk he gave at a 2011 NSF conference, Kahan summarizes the problem well:
When risks and other facts that admit of scientific investigation become the focus of cultural status competition, members of opposing groups will be unconsciously motivated to construe all manner of evidence in a manner that reinforces their commitment to the positions that predominate within their respective groups.
Or, as Kahan clarified in a blog entry from 2014, discussing whether social science research shows a dissipation of trust in science among people who identify as Republican,
... political polarization over risks and other policy-relevant facts is a consequence of the latent distrust citizens with opposing cultural identities have of one another, & their suspicion that "science" is being invoked opportunistically, disingenuously to disguise as claims about "how the world works" what are in fact contested understandings of "how we should live".
Note that ChatGPT 4-o was very clear that I also needed to look at the narrative imagination (Martha Nussbaum) as well as the epistemic character that dealing with truth requires (Quassim Cassam). ChatGPT 4-o summarized the "minimal program" for examining the various facets of this issue in a handy table, which I reproduce as a graphic here.
The core dilemma in this investigation is the asymmetry between facts and narratives: “Facts do not counter narratives; narratives do. But you need facts to invalidate narratives.”
As ChatGPT 4-o pointed out:
This is the epistemic asymmetry that the 21st century hasn’t resolved. And bad actors have exploited it masterfully: a compelling lie with emotional resonance is faster and more persuasive than a slow, careful debunking, even when the debunking is factually unassailable.