APA Retracts Shocking Military Male Rape Study

In a remarkably embarrassing moment for the American Psychological Association, on Sunday night the group retracted the marquee study in the latest issue of its journal, Psychological Services. Rolled out with a triumphant press release only a week ago, the study suggested the rate of male sexual assault in the military might be more than 15 times higher than the rate most anonymous surveys indicated.

In recent years, studies suggesting high, even epidemic rates of rape in the military have received wide attention from the press and on Capitol Hill, but this time the response was muted, even before the study’s remarkably shoddy methodology came under scrutiny. This is likely because its hysteria-tinged estimate of the rate of male sexual assault seemed high even to those deeply invested in the issue. The social scientists finally jumped the shark with this one.

Consider the numbers. The last large-scale, anonymous survey of military personnel devoted to the subject of sexual assault, conducted by the RAND Corporation in 2014, estimated that a little less than 1 percent of men in uniform had been victims that year of some form of sexual assault, from unwanted touching through to rape. That translates to approximately 10,600 victims—and this during the course of only a single year.

This is already a very high, even shocking number. Thus, if that number missed the mark by a factor of 15, something like 159,000 servicemen would have been victimized that year—or, for perspective, a number not far off from the total number of troops in the Marine Corps.

The Underreporting Problem

The figure is just as shocking when compared to the number of men actually reporting sexual assaults in the service, which in fiscal year 2014 numbered in the very low four figures. If the RAND Corporation’s estimates were correct, then the ratio of reported to unreported assaults is something like one to nine; if the American Psychological Association’s (APA’s) study had been correct, the problem would be much, much worse.

Sexual assault is certainly an underreported crime, and the exact rate at which it is underreported is, strictly speaking, unknowable. The RAND survey represented an improvement over past efforts by the Pentagon to get a handle on the issue—which involved surveys with risibly small sample sizes and inconsistency in the phrasing of questions—but it still had its issues, most important among them the “volunteer” problem. This means that when you solicit responses to an anonymous online survey, your respondents are likely to be those highly motivated by your survey’s subject matter. In response to this, the RAND researchers assure us that, “All estimates presented in the report and annex…use survey weights that account for the sample design and survey nonresponse.”

Of course, the volunteer problem would lead one to expect that RAND might be overestimating the actual number of male victims of sexual assault, but the study published by the good people at the APA attempted to show precisely the opposite: that RAND was massively understating the problem.

How did the association’s study get its numbers? Read on, and examine a case of social science at peak risibility.

A Method that Attempts to Reduce Stigma

The researchers behind this latest study believed that men face a stigma about reporting sexual assault. This is surely true, but the authors of the study took the observation further, believing that such a stigma significantly skews even the results of anonymous surveys. Rather than simply asking veterans whether they had been victims of sexual assault, or breaking the offense down into its constituent crimes (as RAND does for active-duty servicemen), they believed a technique called the “unmatched count technique,” or “UCT,” was better suited to the task.

How does UCT work? It’s complicated. Respondents to the survey solicitation are randomly divided into three groups. Members of Group A are asked, straightforwardly, to “respond if the sensitive item [i.e., history of being sexually assaulted in the military] is true for them.” Members of Group B are asked a series of random, innocuous questions, none of which have to do with sexual assault. But rather than answering yes or no to each, Group B members are asked only to report the total number of questions to which they would answer “yes.” Finally, the members of Group C are asked the same innocuous questions as Group B, but with the question about sexual assault thrown in. Like the second group, they don’t provide yes-or-no answers to each question, only the total number of “yes’s.”

Then, the difference in reported affirmations between Groups B and C is used to estimate how many men in Group C are answering “yes” to the question of sexual assault. Because respondents in Group C never actually have to tell the computer that they specifically answer “yes” to the question of sexual assault, their results are assumed to be a more accurate measure than those from the members of Group A.
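For readers curious about the arithmetic, here is a minimal sketch of the UCT estimator in Python. The response counts below are invented for illustration; they are not the study’s data.

```python
# A minimal sketch of the unmatched count technique (UCT) estimator.
# All numbers here are invented for illustration; they are not the study's data.

def uct_estimate(control_counts, treatment_counts):
    """Estimate the prevalence of the sensitive item as the difference between
    the average 'yes' count of the treatment group (innocuous items plus the
    sensitive item) and the control group (innocuous items only)."""
    mean_control = sum(control_counts) / len(control_counts)
    mean_treatment = sum(treatment_counts) / len(treatment_counts)
    return mean_treatment - mean_control

# Hypothetical totals reported by each respondent.
group_b = [2, 3, 1, 2, 2, 3, 2, 1, 2, 3]   # Group B: innocuous items only
group_c = [2, 3, 2, 2, 3, 3, 2, 1, 2, 3]   # Group C: innocuous items + sensitive item

prevalence = uct_estimate(group_b, group_c)
print(f"Estimated prevalence: {prevalence:.1%}")
```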

The Numbers Don’t Add Up

The researchers tried out this theory on a group of Iraq and Afghanistan veterans—or, people they supposed were veterans; more on that below—who were “recruited via a range of print and online resources that cater to military and veteran populations.” They ended up working with only 180 respondents—a “relatively small sample,” the authors said, in a moment of vast understatement. After completing the online survey, “participants were directed to a separate website where they were eligible for a chance to win a $500 gift card.”

From these respondents, the researchers found that 1.1 percent of Group A said they had been victims of sexual assault during their military service, and estimated that 17.2 percent of Group C must have been victims, given the number of questions they affirmed in comparison to Group B.
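It is worth pausing on how sensitive such a figure is to statistical noise. The rough simulation below assumes groups of about 60 respondents, four innocuous items each answered “yes” about half the time, and a true prevalence of 1 percent; none of these parameters come from the study itself, but they illustrate how wildly a difference-of-means estimate can swing at this scale.

```python
# A rough illustration of how noisy a UCT estimate is with ~60 people per group.
# Group sizes, item counts, and probabilities are assumptions for illustration,
# not figures from the study.
import random

random.seed(0)
N_PER_GROUP = 60        # assumed: roughly 180 respondents split three ways
N_ITEMS = 4             # assumed number of innocuous items
P_YES = 0.5             # assumed chance of "yes" on each innocuous item
TRUE_PREVALENCE = 0.01  # suppose the true rate really were about 1 percent

def group_mean(includes_sensitive_item):
    """Simulate one group's average count of 'yes' answers."""
    counts = []
    for _ in range(N_PER_GROUP):
        total = sum(random.random() < P_YES for _ in range(N_ITEMS))
        if includes_sensitive_item and random.random() < TRUE_PREVALENCE:
            total += 1
        counts.append(total)
    return sum(counts) / len(counts)

# Replicate the survey many times and look at the spread of UCT estimates.
estimates = sorted(group_mean(True) - group_mean(False) for _ in range(1000))
print(f"median estimate: {estimates[500]:+.1%}")
print(f"middle 90% of estimates: {estimates[50]:+.1%} to {estimates[950]:+.1%}")
```

Under these assumptions, estimates far above (and far below) the true 1 percent turn up routinely by chance alone, which is exactly the worry with groups this small.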

Aside from questions about the technique itself, there is the oddity that the figure from Group A actually seems low given the results of the RAND survey, which estimated that around 1 percent of military men were assaulted in a single year. As the whole population of the military doesn’t turn over every year, and assuming different people are assaulted in most cases from year to year, the total rate of victims—when questioned about the course of their entire careers—should be higher than 1 percent. Over a typical four-year enlistment at roughly 1 percent per year, for example, the career-long figure would approach 4 percent.

But of course what made the news was the other disparity: the shocking figure of 17.2 percent, derived via the statistical magic of UCT. The researchers argued in their conclusion that this high number means that “unmatched count technique” should be more frequently used to measure the problem of military sexual assault, especially of males, because the high result confirms their assumption that the problem is underreported—even though their result didn’t come close to matching with any other serious measure of the problem.

Doncha Love Social Sciences

There was some humility. At the end of the study, the researchers cautioned that the results are only preliminary, given the minuscule sample size and, then, this humdinger: “Although recruitment efforts were limited to military-related online contexts, there was no way to independently confirm the military status of participants.”

Just another day in the social sciences: a sample size of 180 for a population that numbers in the millions, no way to confirm that members of the sample are even part of the target population, and ever more complex techniques employed because simple ones produce results that don’t suit the researchers’ agenda.

None of these things apparently bothered the editors and peer reviewers of the journal because, no doubt, the study flattered their own preconceived notions about the issue. As Andrew Ferguson has argued at length, this is all par for the social-science course. The only really surprising thing was that the study’s sponsor was sufficiently ashamed, after the fact, to issue a retraction.