Does The Racial Microaggressions Scale Actually Measure Exposure To Microaggressions? (Unlocked)
Science is hard
My subscribers voted, overwhelmingly, that they are okay with me sometimes unlocking previously paywalled posts that are at least three months old. That’s what I’m doing here — this post ran on 6/3/2021, and the original version lives here. As always, I’m cloning it to protect the privacy of subscribers who commented on the original — this version was created on 9/11/2021.
If you find this article useful or interesting, please, please consider becoming a paid subscriber. My paid subscribers are the reason I was able to write this piece in the first place, and they’re the ones who keep this newsletter going.
Back in 2017, Scott Lilienfeld, a renowned psychologist who has, sadly, since passed away at too young an age, published an article in Perspectives on Psychological Science raising many issues with the research into so-called microaggressions, defined as “brief and commonplace daily verbal, behavioral, or environmental indignities, whether intentional or unintentional, that communicate hostile, derogatory, or negative racial slights and insults toward people of color.”
Microaggressions have exploded in diversity-training settings since the concept first went mainstream around 2007, and this concerned Lilienfeld. The title of his paper, “Microaggressions: Strong Claims, Inadequate Evidence,” more or less sums up his thoughts. He believed the microaggression research program (MRP) was a mess.
Lilienfeld argued that belief in the scientific soundness of the MRP relied on five “core assumptions”…
1. Microaggressions are operationalized with sufficient clarity and consensus to afford rigorous scientific investigation.
2. Microaggressions are interpreted negatively by most or all minority group members.
3. Microaggressions reflect implicitly prejudicial and implicitly aggressive motives.
4. Microaggressions can be validly assessed using only respondents’ subjective reports.
5. Microaggressions exert an adverse impact on recipients’ mental health.
...and that not one, not two, but all five of these were, at best, unproven.
Lilienfeld was a legendary debunker, but also a productive and good-faith one. Given the serious and often alarming claims being made by some microaggressions scholars and diversity trainers — that, for instance, saying that “America is a melting pot” could contribute to a minority student’s risk of suicide — he called on everyone to hit the pause button with regard to microaggressions trainings, but also offered a way to get the MRP on track:
Based on the literature reviewed here, it seems more than prudent to call for a moratorium on microaggression training, the widespread distribution of microaggression lists on college campuses, and other practical implementations of the MRP (e.g., the insertion of microaggression questions on student course evaluations), at least until the MRP can take heed of many or most of the research recommendations listed here (see Table 1).
Table 1 consisted of 18 common-sense recommendations, including “Provide a clearer operationalization of microaggressions, with a particular focus on which actions and statements do not fall under the microaggression umbrella” (emphasis in the original) and “Ensure that microaggression items contain sufficient situational context to minimize ambiguity in their interpretation.”
I thought Lilienfeld’s paper was great and wrote a sympathetic piece about it at the time. I don’t think I’ve written much about the subject since then. For today’s newsletter, I had planned on picking up the story by focusing in on one particularly concrete recommendation he made and then arguing that microaggressions researchers have proceeded to largely ignore it, much to the detriment of the research literature on this subject.
In the course of doing so, however, a research-methods expert I reached out to for some stats help pointed me to a much more fundamental point that deserves its own newsletter: a key scale researchers use to measure exposure to microaggressions is pretty questionable.
Backing up just a little: Microaggressions are seen as a big deal in large part because studies appear to correlate exposure to them with various negative outcomes, most notably mental-health outcomes. To demonstrate this, you ‘simply’ correlate the frequency or severity of someone’s microaggression exposure to their present mental-health status.
‘Simply’ is in air quotes because for many reasons, this isn’t actually simple. One of them is the almost complete ambiguity surrounding how to determine whether something qualifies as a microaggression given the expansive definition of the term and wide variety of utterances cited as examples. To be sure, some commonly-listed microaggressions are straightforwardly offensive — “You’re not like the other [members of a racial group],” for example — but many others are ambiguous, like the “melting pot” thing. It’s unclear why this should be seen as a microaggression, and you can, if you so choose (and/or run a consulting firm that can make money from doing so), decide that anything is offensive by tacking a negative story onto it. Maybe when someone says “America is a melting pot,” for example, they are implicitly, harmfully downplaying the ongoing role of racism in perpetuating segregation. (Or whatever.)
Further complicating things on the what-qualifies front is that when you ask (for example) black and Latino Americans whether they find certain utterances commonly listed as microaggressions to be offensive, you get overwhelming evidence suggesting they don’t, on average. Seventy-seven percent of black Americans and 70 percent of Latino ones don’t find the melting-pot comment offensive, for example. So you have a situation in which professors and diversity trainers are telling kids they should be offended (and could be rendered mentally ill) by utterances that the vast majority of people in their racial groups don’t find offensive. You can see why Lilienfeld was concerned! There’s some genuine potential here to make things worse, rather than better, especially when it comes to first-generation college students from minority backgrounds who are navigating majority-white spaces for the first time in their lives.
Another reason correlating microaggression exposure to mental-health outcomes is complicated is the potential hidden role of negative emotionality. NE is a fairly broad term for a tendency toward negative thought and rumination, a tendency to view ambiguous situations in a dark light, and so on. Because so many microaggressions are ambiguous, and will be processed in a positive, negative, or neutral light not solely on the basis of their objective content but through the filters of situation and personality, it stands to reason that someone’s level of NE might influence their self-reported microaggression exposure. So these self-reports are not pure measures of how often certain utterances were directed at someone, but are also shaped by the personality of the reporter.
We can take this out of a racial context to make it less controversial and more intuitive: If we ask ten people “How often in your everyday life do you feel disrespected by others?,” we’ll get a range of results. Of course these results do not constitute a straightforward, purely objective rendering of how often the ten people were disrespected — people high in negative emotionality will likely interpret some situations as ‘disrespect’ that would simply bounce off of people lower in NE. I mean, one time a woman got mad at me for holding the door for her!
I did not actually become a men’s rights activist, but this is a solid example of how personality (or even just someone’s fleeting mood) can color their interpretation of what would be considered by most to be a neutral or slightly positive stimulus.
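To put rough numbers on that intuition, here’s a toy simulation. It’s purely illustrative and mine alone (nothing in it comes from the microaggressions literature, and every parameter is made up): everyone encounters the same twenty ambiguous interactions, but the odds of reading any one of them as “disrespect” rise with negative emotionality.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people = 1_000

# Everyone encounters the exact same number of ambiguous interactions
n_ambiguous_events = 20

# Negative emotionality (NE), standardized: higher = more prone to dark readings of events
ne = rng.normal(0, 1, n_people)

# Chance of interpreting any one ambiguous event as "disrespect" rises with NE
p_negative_read = 1 / (1 + np.exp(-(ne - 1)))  # roughly 27% for someone at average NE

# Self-reported count of "times I was disrespected"
reported = rng.binomial(n_ambiguous_events, p_negative_read)

# True exposure is identical for everyone, so all the variation in reports comes from personality
print("NE vs. reported disrespect, r =", round(np.corrcoef(ne, reported)[0, 1], 2))
```

In this toy world the survey question is measuring personality far more than it’s measuring events, which is exactly the worry about ambiguous self-report items.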
Okay, so: For all these reasons, microaggressions research is a mess, and attempting to correlate exposure to microaggressions with negative outcomes is a very fraught project. Which means that scales seeking to measure exposure to microaggressions already face an uphill battle and should be constructed very carefully.
One such scale is the Racial Microaggressions Scale (RMAS), first published in 2012 by Susan R. Torres-Harding, Alejandro L. Andrade, Jr., and Crist E. Romero Diaz of Roosevelt University. If you search Google Scholar for microaggressions studies published using this scale in 2020 and 2021, you’ll see that there are a fair number of them. Without looking too hard, you can find studies in which researchers use this scale to attempt to correlate microaggressions to PTSD among black Americans, to better understand “The Role of Racial Microaggressions and Bicultural Self-Efficacy on Work Volition in Racially Diverse Adults,” and, in a top journal, to measure the “Prevalence and Nature of Sexist and Racial/Ethnic Microaggressions Against Surgeons and Anesthesiologists.” Many of these studies clearly view “scored high on the RMAS” as identical to “experienced more and/or more severe microaggressions.”
Should that be how we interpret the scale? Here, from the paper announcing its creation, are the 32 items on it (as the text of the paper makes clear, three were subsequently eliminated for statistical reasons — I struck them through in red to indicate as much):
Microaggressions are an inherently fuzzy concept, but I feel like most people would agree that they do occur. This scale includes examples that seem fairly sound: “I am mistaken for being a service worker or lower-status worker simply because of my race,” “I am singled out by police or security people because of my race,” and “People suggest that I am ‘exotic’ in a sexual way because of my race.” Of course, humans are pretty fallible and we might question whether in a given case the person is attributing the behavior in question to the correct cause (more on which in a moment), but the point is these do at least seem to map on to what could reasonably be construed as microaggressions.
Many of the items on the list, though, plainly don’t. The last five items, for example, all have to do with simply being a minority in a school or workplace setting, or with only encountering authority figures or fictional characters of races different from one’s own. Being in these non-diverse situations may well correlate with the probability of experiencing microaggressions (though we’d need sound research to prove this, as one could tell stories pointing in the other direction as well), but none of them constitutes a microaggression in and of itself. In other cases, such as the items pertaining to being ignored, having one’s work devalued, or having one’s contributions ignored on account of one’s race, it’s plain to see how easily the scale could be picking up certain personality characteristics pertaining to negative emotionality rather than actual, unambiguous exposure to a microaggression. (I could see someone saying, Well, isn’t that true of the items you listed above as being pretty sound? In those cases, we’re at least dealing with events that happened — getting pulled over or mistaken for a janitor. This last batch seems to deal mostly with events that didn’t happen, and I think that makes attribution an even more difficult game. In short, I bet people are better at explaining why X happened than why X didn’t happen, but of course there could be good-faith disagreement with my analysis here, as well as over the broader question of which items on this scale are weakest. The overarching point is that many of these items are quite subjective and rely on some degree of mind-reading.)
If you’re at work and someone says something ridiculous like “You are a credit to your race,” that’s an unambiguous microaggression. If you are at work and feel like you didn’t get enough credit for your contribution to The Smith Report, I mean… who knows? It may well be because of your race, or it may be because of something else. This seems like an example of a measurement tool leaning into rather than attempting to circumvent the weaknesses of the construct it is attempting to measure. That is, if microaggressions researchers took some of Lilienfeld’s advice and really sought to figure out the, say, 15 most clear-cut microaggressions, and then asked people how often they’d been exposed to them, that would cut out a lot of ambiguity. But these items? If anything, they fog things up further. Sometimes your contributions to The Smith Report are disregarded because of racism, sure, but sometimes it’s because your boss is a jerk, or an overworked colleague forgot to include everyone’s names on the final product, or because you actually didn’t do a great job. There are so many potential reasons someone’s work might be devalued that we should be more skeptical of their claim it was because of race (which is hard to know for sure) than we should be of their claim that someone told them they were a credit to their race (which is easy to know for sure).
What’s interesting to think about is how, because of the design of this scale, it’s quite likely — maybe inevitable — that researchers are going to keep ‘discovering,’ over and over and over, that people who “experience many microaggressions” (that is, score high on the RMAS, which only sorta measures microaggressions per se) suffer from mental-health problems as a result. That’s because many of these questions are capturing personality traits that are themselves correlated with mental-health problems! It’s very unlikely this will produce good, rigorous research, but it will certainly produce provocative results, some of them eye-popping.
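Here’s a rough sketch of how that plays out, again with invented numbers rather than anything from a real RMAS dataset: suppose, by construction, true microaggression exposure has zero effect on a depression score, but negative emotionality both inflates endorsement of the scale’s more ambiguous items and independently predicts depression. The scale total still “predicts” depression.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000

ne = rng.normal(0, 1, n)           # negative emotionality
true_exposure = rng.poisson(3, n)  # actual microaggressions experienced

# Scale total: partly true exposure, partly NE-driven endorsement of the ambiguous items
scale_score = true_exposure + 2.0 * ne + rng.normal(0, 1, n)

# Depression score: driven by NE alone; true exposure has NO effect here, by construction
depression = 1.5 * ne + rng.normal(0, 1, n)

print("scale total vs. depression:   r =", round(np.corrcoef(scale_score, depression)[0, 1], 2))
print("true exposure vs. depression: r =", round(np.corrcoef(true_exposure, depression)[0, 1], 2))
# Expect roughly r = 0.6 for the scale total and r = 0 for true exposure: the "finding" is
# the personality trait, laundered through the measurement instrument.
```

Real data are messier than this, and genuinely hostile treatment presumably does take a toll; the point is only that a scale contaminated by personality can generate eye-popping correlations all on its own.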
Maybe that’s kind of the point.
Questions? Comments? A full tabulation of all the anti-Norwegian microaggressions contained in this newsletter? I’m at singalminded@gmail.com or on Twitter at @jessesingal. The image of a hand holding up a RACISM IS A VIRUS TOO sign is by Rolande PG on Unsplash.