It Is Bad To Alter Or Retract Published Research That Has No Factual Errors, Even If You Are Doing It “For Social Justice”
You want MORE injection of political values in science?
I unlocked this article on 12/1/2022. If you enjoy it, please consider becoming a paid subscriber.
On August 18, the journal Nature Human Behaviour published an editorial called “Science must respect the dignity and rights of all humans.” The editorial is draped in reasonableness — who could be against dignity and rights for all humans? — and contains some unobjectionable arguments, like how it’s important to “[d]efine categories [of humans] in as much detail as the study protocol allows,” which, okay. But its primary goal, if you read it carefully, is to expand the number of reasons scientific articles can be rejected for publication or, most troublingly, edited or retracted post-publication.
Early on, the article contains the ominous sentence “Although academic freedom is fundamental, it is not unbounded.” Whenever someone feels the need to express opposition to a belief held by just about no one, it’s a sign of potentially choppy intellectual waters ahead. To be fair, I don’t know how seriously this policy will be taken or if researchers will really try to use it as a cudgel to force retractions. But boy, is it half-baked. It nicely demonstrates the extent to which the careless injection of political values (disguised as reasonableness) into science can cause trouble.
Here are the basic criteria the editorial lays out that could trigger negative consequences for a publication and/or its author(s):
Regardless of content type (research, review or opinion) and, for research, regardless of whether a research project was reviewed and approved by an appropriate institutional ethics committee, editors reserve the right to request modifications to (or correct or otherwise amend post-publication), and in severe cases refuse publication of (or retract post-publication):
1. Content that is premised upon the assumption of inherent biological, social, or cultural superiority or inferiority of one human group over another based on race, ethnicity, national or social origin, sex, gender identity, sexual orientation, religion, political or other beliefs, age, disease, (dis)ability, or other socially constructed or socially relevant groupings (hereafter referred to as socially constructed or socially relevant human groupings).
2. Content that undermines — or could reasonably be perceived to undermine — the rights and dignities of an individual or human group on the basis of socially constructed or socially relevant human groupings.
3. Content that includes text or images that directly or indirectly disparage a person or group on the basis of socially constructed or socially relevant human groupings.
4. Submissions that embody singular, privileged perspectives, which are exclusionary of a diversity of voices in relation to socially constructed or socially relevant human groupings, and which purport such perspectives to be generalisable and/or assumed.
You’ll notice the language is incredibly vague throughout. What does “Content that undermines — or could reasonably be perceived to undermine — the rights and dignities of an individual or human group” mean? Who defines what it means to undermine someone’s rights or dignity, let alone whether or not such a claim is “reasonable”? What does it mean to “indirectly disparage” a person or group? What does it mean to embody “singular, privileged perspectives, which are exclusionary of a diversity of voices”? Since the stakes here are pretty high, more careful definitions would be useful. There aren’t any.
What’s most alarming is that unless I’m missing something, research that is perfectly valid and well-executed could run afoul of these guidelines.
There isn’t much concern here about the truth of a claim; rather, the focus is almost entirely on subjective questions like what constitutes “indirect disparagement” or whether or not a perspective is “singular” or “privileged.” If you doubt that these are contested concepts — or that they offer a convenient Trojan horse for smuggling controversial political claims into research — just spend five minutes on the corner of Science Twitter most concerned with social justice. Watch the sorts of things highly credentialed experts writing under their own names say about privilege and harm and so on.
To take one of many potentially troubling outcomes of these guidelines, it seems pretty clear that Nature Human Behaviour could insist on the post-publication editing (or retraction!) of a paper solely on the basis of the author’s race. Let’s say an Asian author publishes an otherwise well-done, single-author study about a group of black study subjects. Wouldn’t that “embody [a] singular, privileged perspective[ ], which [is] exclusionary of a diversity of voices”? I don’t see why it wouldn’t.
This is all just a very bad idea. There are already so many obstacles to publishing quality research, and there are already so many incentives that nudge researchers away from doing so. If you are an under-resourced researcher, in particular, you have to run a veritable gauntlet to really accomplish anything in terms of your published output. So as always, the burden of these new rules will fall more on the have-nots.
What is the point of any of this? It’s not like there’s been some glut of genuinely racist or bigoted work published by scientists lately. What there has been is a glut of harm inflation. Seemingly every month, the category of stuff that gets labeled “harmful” or even violent increases. Read this story about a gender dysphoria paper retracted under truly bizarre circumstances. Or this insanity about the meltdown at JAMA that ensued after a brief podcast exchange containing the gentlest imaginable critique of the concept of “structural racism.” Or read what happened to Norman Wang for opposing race-based affirmative action (the majority position among Americans when it comes to undergraduate admissions, though Wang was writing about med school).
Read these stories and then tell me we want to give bad-faith and/or politically radical actors more power — power codified by a science journal itself! — to attack research they don’t like. Given what’s going on, this is ill-advised. These sorts of exaggerated harm claims are quite corrosive. We should be building guardrails against them, not institutional structures to help amplify their impact.
More broadly, I’ve made this point before, but the idea of a direct causal link between an empirical finding and a political outcome is severely and chronically overstated. Genuine bigots are not bigots because they have carefully reviewed the evidence and decided that bigotry is scientifically justified; rather, they cherry-pick science in an attempt to justify feelings they already have. If intelligence has a strong hereditary component, the worst jerks in the world will exaggerate and distort this finding to support eugenicist beliefs. If we’re all true blank slates with equal cognitive potential at birth, the worst jerks in the world will claim that degenerate culture and child-rearing practices cause some groups to lag behind. There will always be some justification! And even the most rudimentary knowledge of the twentieth century will reveal that unfathomable acts of mass political murder have been conducted both by monsters obsessed with biological racial purity and by monsters obsessed with socially engineering their supposedly malleable subjects.
It’s also worth noting that if you care about genuinely marginalized groups, you’re going to have to publish research that shows they’re worse off in the first place. As the philosopher Liam Bright, who is always worth reading both on Twitter and elsewhere, pointed out in a thread that starts here, the vagueness of these guidelines could hinder the publication of such work:
Since my Sunday is disturbingly placid, here's my more specific worry about the Nature Human Behaviour code of ethics. They include “unintended harms to the dignity of a social group” as among their, like, things not to do. On paper, like, sure, that sounds bad; watch out for it. But many academics are idealist in a pejorative Marxist sense. These people would absolutely see a study showing <poor group shower less because they have less access to clean running water> and worry more about an implication the group are dirty than their lack of running water. Those people, I worry, now have a publication code that could back them up in blocking or at least severely curtailing publication of such a study. And not just that, but basically anything which identifies a social problem for a group the idealist academics care about. None of this is in principle a problem. Maybe people will be sensible and just not do that and apply the code in a good way. Could happen. But my take is academics are in fact a mix of evil and clueless and so will do the bad thing.
It’s hard to disagree with this, especially if you have been following the recent controversies at the intersection of science and social justice.
***
I wonder if, at root, some of the current craze for purging liberal spaces of impurities is more about economics and job market jockeying than deeply held ideological beliefs. This Nature Human Behaviour editorial got me thinking about elite overproduction, or the idea that society is producing too many people with elite credentials relative to the number of “slots” available in those rarefied spaces, leading to political frustration and instability among these types. Specifically, I thought of Leighton Woodhouse’s recent article arguing that would-be elites frustrated with their place in the world can carve out jobs for themselves by contriving or exaggerating problems in institutions:
[P]robably no industry has scooped up more of this labor market overflow [that is, would-be elites denied elite positions] than the non-profit sector. Unlike in tech and media, in the world of progressive NGOs, there is an actual organic demand for the moral capital that these job applicants have spent four years of college accumulating. There, one’s finely calibrated sensitivity to microaggressions, one’s native fluency in the obscure grammar and lexicon of social justice speak, and one’s acute ability to discern the structures of racism in literally anything are assets rather than liabilities. And from there, one can literally create the consumer market for those talents out of thin air, simply by inventing new social problems to solve.
You know what’s another area containing a lot of highly credentialed people with tons of moral capital squabbling over an ever-decreasing number of truly elite slots? Much of academia! So I think part of the story here is that if you can’t establish a truly stable career as an academic when doing so is harder than ever before (at least in many fields), you can turn instead toward a career within science by becoming a cop who pulls over other researchers for alleged Acts Of Harm. (“Sorry to bother you, ma’am, but I noticed that this paper you just published embodies a singular, privileged perspective that is exclusionary of a diversity of voices.”)
The Nature Human Behaviour authors say they “commit to using this guidance cautiously and judiciously, consulting with ethics experts and advocacy groups where needed.” Those ethics experts and advocacy groups are part of a growth industry: As the sense of crisis deepens (because a small group of people insist that scientific research is constantly inflicting harm on vulnerable populations, and that in the absence of expert consultation the researchers and reviewers in question will remain oblivious to their wrongdoing), there will be more and more demand for these services. It will create more jobs that are technically within the fields of “science,” but not in the sense of really generating new knowledge. Rather, the goal will be to exert greater influence on the types of knowledge that are allowed to be produced. This stuff goes only in one direction: more and more things are going to be harmful, it turns out. We’ll need more and more experts to prevent us from doing harm to others. We should thank them.
Questions? Comments? Retraction-worthy parts of this newsletter? I’m at singalminded@gmail.com or on Twitter at @jessesingal. Image: “Man hand throwing crumpled paper to the basket” via Getty.