62 Comments
Oct 19, 2023·edited Oct 19, 2023

My two cents on the last question Grant asked, whether we should be worried about being TOO dismissive of potentially weak studies: good god, no. I'm a lawyer, not an academic, but my legal career has put me in frequent close contact with academic social scientists and their work product. While frequently lovely people, far too many of them end up as impressively-credentialed BS artists who are much, MUCH more likely to wave away obvious problems/limitations with their work, and the work of their colleagues, than they are to take those limitations too seriously. As a profession, social scientists can't really be trusted to police the flow of shoddy social science into the world. It's appropriate to default to skepticism about social science "findings."

This is how you end up with Monk Debates that devolve into two people essentially arguing that "the data" supports their opposing positions. Hygiene in research findings is incredibly important if you care about public trust.

The best two arguments I have in favor of Grant's position are:

1) https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/

(This does not mean rigor is unwarranted, but make sure to extend it equally)

2) A risk of focusing on the weakest studies to find the problems.

I don't know how Jesse picked the studies to dive deeper into. If he picked the biggest studies, that's probably fine, because they'll have the largest effect on the outcome. If he went through and found the 5 weakest studies and reported those, well.... maybe the other 290 studies were fine!

Sort of like strawman vs. steelman.

I would love for people to regularly ask social scientists "what was the last thing in your field you changed your mind about and why?"

(Actually I think a question like this is probably good to ask darn near everyone.)

They should be asking that of themselves. How open am I to new evidence? Many social scientists I can think of would pass this test pretty well. Many would not.

Likely they've not altered a bit since they went into the field - which they entered in order to propagate their Lefty world view with "science."

I understand that there are huge replicability issues in the social sciences, especially psychology; and within psychology, trans issues seem among the most tenuous.

It’s also worth pointing out that the way Coleman defines colorblindness is completely orthogonal to multiculturalism. One can (and should!) learn about and celebrate cultural differences while eliminating race from public policy and striving to avoid prejudice in personal interactions.

Ironically, the same people who ostentatiously claim to celebrate cultural diversity seem to somehow find it inconceivable that these cultural differences might play even a partial role in statistical group outcomes.

Drives me wild.

Indeed. The heated controversy here has resulted from progressives disavowing multiculturalism as a goal or strategy and pursuing a more aggressive and divisive ideology.

This type of in-depth reporting is why I support you. Thank you!

I think Adam's argument is entirely unfair/nonsensical. The point of research is to be right, and it strikes me as really unfair that researchers should get a pass on questionable work because it's normalized or because it's hard to find.

I am Ok with Grant's argument in other contexts. Sometimes weak evidence turns out to be right, and weak evidence should at least give us pause.

The problem here is correlation vs. causation, which is a much bigger deal. A causal research design vs. a robust correlation observational study are two very different things, especially if it's easy to think of ways the correlation results are biased.
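The correlation-vs.-causation worry is easy to make concrete with a toy simulation (all variable names here are hypothetical, not drawn from any study in the meta-analysis): a single confounder drives both ideology endorsement and the measured "outcome," producing a healthy correlation with zero causal effect.

```python
import random

random.seed(0)

# Toy model: a confounder (say, general agreeableness) independently
# raises both "multiculturalism endorsement" and the measured "outcome".
# Endorsement has NO causal effect on the outcome in this world.
n = 10_000
confounder = [random.gauss(0, 1) for _ in range(n)]
endorsement = [c + random.gauss(0, 1) for c in confounder]
outcome = [c + random.gauss(0, 1) for c in confounder]

def pearson_r(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A correlational study of this world would report r ≈ 0.5
# despite the true causal effect being exactly zero.
print(round(pearson_r(endorsement, outcome), 2))
```

Every correlational study of this confounded world would dutifully report a respectable r, and a meta-analysis pooling them would report it with even more confidence.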

Ok, full disclosure: I have not read the whole thing yet. But I scrolled to the end to see what Grant had said to Jesse. I read Grant's "Think Again" a couple years ago and generally liked it. But sometimes I think he suffers from not taking enough of his own medicine. His key argument is that it's better to think like a "scientist" who looks for the truth without an agenda instead of a "prosecutor" trying to beat the other guy in an argument or a "preacher" trying to convert others to your "truth."

So his "warning" to Jesse seems to be saying: dude, you're getting awfully prosecutor-y here. But from where I sit, Jesse generally does a great job of pointing out where he could be wrong, giving authors the benefit of the doubt about their intentions, and trying to suss out what can be gleaned from a study in order to home in on something close to the "truth." If this isn't "science," what is?

If "science" is being dominated by groupthink and regularly putting out papers that support conventional thinking, then someone who is determined to uncover what portion of their findings truly hold water will inevitably appear to be a "prosecutor." Because there are too few "scientists" willing to act as their own "prosecutors" right now.

Oct 20, 2023·edited Oct 20, 2023

I think the social scientists who would act as prosecutors would be those who hold different views. You could look at this from a political-ideological perspective (left vs. right), and that works, but in an intellectually robust society it could be much more than that.

Adam Grant talks a good game about intellectual diversity. And what he says is right. He actually models the reasons why he’s right in his back and forth with Jesse.

He shows that he can’t be neutral and will give his ideological pals every benefit of the doubt. In a world where no one is neutral, you need different views to be represented to get the best outcomes. But the left has curated conservatives out of their spaces. That leaves no one to check their work. So they put out bad work and talk about consensus. The “literature” or the “scholarship” says… It’s crazy!

It’s very amusing given this whole thing started when TED reached out to Adam to be a “prosecutor” against Coleman’s ideas. It’s very clear that Adam does not even understand Coleman’s position.

FWIW I preferred Julia Galef’s “soldier”/“scout” dichotomy.

I think it's fair to say Jesse was responding in a somewhat lawyerly fashion to what was originally a lawyerly use of the study. Responding to lawyering with lawyering is appropriate.

In fact, as it relates to the political relevance / applicability of any study, lawyering is the *only* feasible approach in anything remotely contentious. If you are a scientist evaluating the work of another scientist for scientific purposes, absolutely avoid lawyering. If you are using science to make a political claim, lawyering is your only option, and is the correct option.

Only in social science- “we find support for certain ideas leads to supporting the policies most associated with these ideas”

Great article.

I appreciate this kind of deep-dive tremendously, and agree completely with other commenters here that Grant's concern about people being too critical of scientific papers is, frankly, unscientific.

This is something that really floored me: "Among other variables, the meta-analysis authors code support for affirmative action, support for liberal immigration policies, high ratings about outgroups on a 'feelings thermometer,' and other attitudinal rather than behavioral measures, as 'indicators of high quality intergroup relations.'"

I couldn't help but mentally zoom in on the inclusion of "support for affirmative action" as a measurement of the outcome ("high quality intergroup relations") when the variable being measured is color-blind versus non-color-blind ideology -- the blatant circularity of this, for me, illustrates why the whole meta-analysis is very hard to take seriously.
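The circularity can be quantified with a quick sketch (invented variables, purely illustrative): fold the ideology itself into the outcome index, and a correlation appears by construction, even when the genuine outcome is completely unrelated to the ideology.

```python
import random

random.seed(1)

# Toy illustration of the circularity worry: if the "outcome" index
# mixes in items that just restate the ideology (e.g. support for the
# very policies the ideology prescribes), the ideology-outcome
# correlation is inflated by construction.
n = 5_000
ideology = [random.gauss(0, 1) for _ in range(n)]
true_outcome = [random.gauss(0, 1) for _ in range(n)]  # independent of ideology

def pearson_r(xs, ys):
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Contaminated index: half genuine outcome, half the ideology itself.
contaminated = [(t + i) / 2 for t, i in zip(true_outcome, ideology)]

print(round(pearson_r(ideology, true_outcome), 2))  # near 0
print(round(pearson_r(ideology, contaminated), 2))  # near 0.71
```

With zero real relationship, simply re-counting the ideology as part of the outcome manufactures a "large" effect.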

A similar definition is used in papers about “racial resentment”. Any respondent who doesn’t believe in racial preferences for minorities scores high on “racial resentment”. This is a brilliant technique for manufacturing results that favor your policies: you take a phrase with one commonly understood and very negative meaning, redefine it to mean something innocuous your political opponents support, and suddenly Republicans are child molesters*.

* In this paper “child molester” is defined as someone who scores high on our survey that assesses the use of punitive disciplinary methods on their children.

Had the same thought; I was seeing this pop up over and over again. I actually thought there was a term for this type of methodological error, but it doesn't seem there is one. (I asked a few people and searched online for like an hour.) There certainly should be, and people should be cautioned against it.

Oct 19, 2023·edited Oct 21, 2023

“This supplementary appendix lists all the studies and their effect and sample sizes in one place. If you pull it up, you’ll find that one of the largest samples came from ‘Multicultural and ethnic attitudes in Canada: An overview of the 1991 National Survey,’ published in 1995 by J. W. Berry and Rudolf Kalin in the Canadian Journal of Behavioural Science. According to the authors of the meta-analysis, Berry and Kalin found an r = .57 ….”

My understanding is that:

a) Canada has always had and accommodated two distinctly different cultures (one Anglo Protestant English-speaking, the other Franco Roman Catholic French-speaking). The US, of course, hasn’t. This, alone, calls into question any “scientific” argument that assumes reactions to “multiculturalism” will be roughly parallel.

b) Canadians don’t worry about crime the way Americans do. There is considerably less violence in Canada. This likely means much less fear and suspicion overall.

c) Canadian immigration has, for several decades, been composed of people from an array of nations and cultures, a substantial number of whom are better educated than its general population. What’s more, Canada is a vast, sparsely populated country that has had trouble drawing needed professionals to its rural communities and bitterly cold north.

d) Canada never had an economy reliant on the brutal enslavement of millions of black people. Nor did it have Jim Crow, the US’s shameful history of racism or countless examples of tension, sometimes violent, between races and ethnicities in its cities. Nothing in Canada’s history in any way is analogous to America’s white/black racial history. Nothing. This would seem to be your proverbial pretty fucking big deal.

Even if the conclusions that Adam Grant mistakenly attributes to the Canadian data were valid, we couldn’t assume they were applicable to the US. Just as we can’t assume a correct reading is a refutation. The histories, populations, cultures and circumstances of, and in, the two countries are just too different in too many ways. (Local attitudes in freezing Winnipeg towards a pediatrician from Ghana, a math teacher from Croatia or an orthodontist of South Asian descent are going to be of limited relevance to the racial dynamics of contemporary Miami, Dallas or Detroit.)

It’s incredible that Coleman Hughes has to deal with this nonsense. Kudos to Jesse Singal for his willingness to address Grant’s (either idiotic or bad faith) appeal to “meta-analysis” with earnestness and rigor.

From the headline, I thought this was going to be about "color blindness" in the "visual impairment" sense (e.g. can't easily differentiate red from green). And I was so looking forward to a deep dive into a non-hyper-politicized sphere...

Me too! Even though I’m fully aware of the Coleman Hughes controversy, my initial reading of the headline was that, like the recent doubt cast on the amyloid plaque theory of Alzheimer’s, another supposedly “settled” body of knowledge in a physical science was about to be exposed as at best weakly supported and at worst outright fraud!

You can examine the tests for colorblindness yourself. They are plots of dots in varying sizes and colors, with the color variation falling into two "color classes" (as in, there might be dots in several different shades of green, and other dots in several different shades of red).

The dots are arranged so that, if you can perceive a difference between the two color classes, the test image will show a meaningful pattern. (for example, the dots might pick out a red "15" against a green background.)

https://www.aoa.org/healthy-eyes/eye-and-vision-conditions/color-vision-deficiency

You run the test by just asking people what each test image shows. This is technically not enough to prove that colorblindness exists - you could have one population that sees the differences (which your data will prove), and another population that also sees the differences, but lies to you when you ask them about it.

The test is good for producing information in a context where being colorblind is bad, because it can easily prove that someone is not colorblind. It will fail in a context where people might seek to be seen as colorblind, because it can't easily prove that someone is colorblind. (Though a faker might give a pattern of responses that are inconsistent with previously established work on which color distinctions should stand or fall with each other.)
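That asymmetry amounts to a simple scoring rule. A minimal sketch, with made-up plate contents (real Ishihara plates differ):

```python
# Hypothetical Ishihara-style plates: what a person with normal color
# vision reads vs. what a red-green deficient person typically reads.
PLATES = {
    "plate_1": {"normal": "12", "deficient": "12"},  # control plate
    "plate_2": {"normal": "8",  "deficient": "3"},
    "plate_3": {"normal": "29", "deficient": "70"},
}

def screen(responses):
    """Return 'not colorblind' only when the responses prove normal
    vision. Anything else is inconclusive: a faker could deliberately
    give the 'deficient' answers, so the test can prove someone is NOT
    colorblind but can never prove that someone IS."""
    if all(responses.get(p) == v["normal"] for p, v in PLATES.items()):
        return "not colorblind"
    return "inconclusive (deficient, or faking)"

print(screen({"plate_1": "12", "plate_2": "8", "plate_3": "29"}))
print(screen({"plate_1": "12", "plate_2": "3", "plate_3": "70"}))
```

The one-sidedness falls out of the rule itself: a full set of "normal" answers is hard to produce by accident, but "deficient" answers are trivial to produce on purpose.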

OK, I was (I think justifiably) irritated at Michael splaining Ishihara tests at me, but I think there’s a vein to be mined here. Yes, dot tests for color blindness are common, and I think we all understand that while they probably have the usual false positive/false negative problems that all screening tests have, they are a useful filter for identifying people who may have a common syndrome affecting the retinal cones that makes them, to varying degrees, unable to distinguish colors based on their red/green reflectivity. All well and good. But there is, behind the scenes, a genetic explanation of which particular chromosomal aberrations cause this condition, based on the fact that it is observed far more often in males than females.

What if evidence came to light that called that explanation into question? Not the existence of the phenomenon, but the widely accepted X-linked recessive gene that supposedly caused it? What if a body of evidence emerged that instead implicated some epigenetic phenomenon? Or a theory that posited an environmental insult whose expression was linked to androgens?

Ten years ago I would have scoffed at this kind of thing as lunatic fringe psychosis. Now...not so sure.

Dude. I’m a pilot. I know what a color blindness screen is. I’ve taken every possible variation in order to qualify for FAA medicals.

Me as well. I had blissfully forgotten about this controversy, and thought there was some new discovery in ophthalmology suggesting that true color blindness was more rare than thought.

My initial reaction was the same, even though I was well aware of the Coleman Hughes argument.

Same, but I thought it was going to be part of a comparison with something that is hyper-politicized (“so-and-so et al. found that gender dysphoria doesn’t exist? Well, what’s-his-face et al. found that color blindness doesn’t exist!”).

Yeep. Careful! It's people of non-hued vision, please.

Ok, read through this now (hooray for meetings ending early!), and I am not surprised at all to find that these studies appear to largely define "good outcomes" as "agreeing with liberal/progressive policies," and then the diversity ideologies that conservatives and normie liberals agree with don't correlate well!

I didn't even know about the correlational nature of most of the studies underpinning the meta-analysis, but this does strengthen Coleman's statement that "social science meta-analyses are unreliable, because social science studies are unreliable." That claim did strike me as overly broad, but my god, the simple correlation-causation thing is something you learn in like high school, no?

Also not addressed in this article, but it does seem like the "outcomes" being measured are essentially the ideology itself in some cases. Is it really fair to measure the effectiveness of "multiculturalism" by measuring "support for multicultural policies"? This seems like a really circular form of reasoning that would inevitably pad the numbers here.

"approach a body of research like prosecutors seeking a conviction."

I'd say it's more like a defense attorney seeking an acquittal, and I'm not sure that's a bad thing. Bodies of research should be able to survive cross examination. That's foundational to science.

I'll have to read Jesse's piece again when I have more time. Maybe I'll even have to read the paper itself. I still don't understand why Grant believes that it refutes Coleman's argument. I get that meta-analyses try to overcome the bias and limitations of individual studies (like Nate Silver's use of a collection of polls to make predictions instead of relying on individual polls that may have noise). But unlike political polls, in which it is pretty easy to compare apples to apples (who are you going to vote for?), a bunch of scientific studies seems to include such a mishmash of different questions and techniques that I don't know how you can confidently use a meta-analysis to advance broad general conclusions.
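For what it's worth, the pooling step itself is mechanical. A minimal fixed-effect inverse-variance sketch, with effect sizes and variances invented for illustration (not taken from the paper):

```python
import math

# Fixed-effect inverse-variance pooling: the standard way meta-analyses
# combine per-study effect sizes. Each study contributes in proportion
# to its precision (1 / variance). Numbers below are made up.
studies = [
    (0.57, 0.002),  # (effect size, variance) -- large, precise study
    (0.10, 0.010),
    (0.25, 0.005),
]

weights = [1 / v for _, v in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f}")  # 0.431 +/- 0.069
```

The catch: pooling sharpens the confidence interval, but it can't upgrade correlational inputs into causal conclusions; garbage in, tighter garbage out.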

Oct 20, 2023·edited Oct 20, 2023

What really struck me in the whole brouhaha was the notion that somehow Coleman's message must be stifled because it goes against the social science, even if it did go against the social science. In the law, we say about such matters that this is a political question not a legal one. And likewise, this is a political question, not one for the social sciences. (Not saying the social sciences can't chime in, but they can't have veto power on such squishy issues.)

Sadly, there's a tremendous signal loss at each phase: the actual data -> study write up by the authors-> how (and if) the study results get reported -> how people interpret the reporting.

Even worse, so many "studies" are terrible, based on convenience samples, retrospective recollections (e.g. "how much red meat did you eat in 2013?"), p-hacking, or hypothesizing after the fact. So many problems I can't even list them all.

These days, on topics I want to understand better, I either read the actual studies, court cases, etc., or I find experts who seem to be doing the legwork (e.g. Peter Attia on health and longevity, or Jesse Singal covering topics like this).

But I also acknowledge there are many topics where I know I don't know, and I also know that most people who claim to know something are wrong. E.g.: What was the actual damage from the mRNA vaccines? What is really happening with global warming? How do we make the US education system stronger?

Anyways, I always really appreciate Jesse's deep dives on these things. The methodology is as interesting as the topic itself.

That is indeed the problem. Do your own research? Well, when I get my degree in biostatistics. Rely on folx like Barbara Ferrer?

Great deep dive, thanks! I just want to point out that regardless of how strong a point Grant has about this meta-analysis (or similar ones), it's not enough to tar Coleman as outside the Overton window, and therefore a legitimate target of deplatforming. Even if you believe everything Grant says, in the context of Coleman's talk, it amounts to... wait for it..."it's complicated". Not "Coleman's talk is unacceptable, harmful, and without merit" which is what the staffers who objected are claiming. (Not a direct quote.)

Exactly this. Grant's pushback was not as weak as it should be, but it was weak nonetheless.
