October 24, 2021

Colby Cosh: Yet another famous psychology study turns out to be pure bunk

‘Don’t believe everything you read in the science section of a newspaper, but when it comes to psychology, don’t believe anything’

Psychological science has been suffering a “replication crisis” for a decade now, but this week there was a particularly newspaper-friendly example of a famous finding failing to hold up on re-examination. The lede writes itself, saving me a bit of trouble: a scientific paper on the subject of dishonesty, one cited hundreds of times, has been proven beyond doubt to contain… dishonestly manufactured, 100 per cent fictional data.

It’s a funny story. About a decade ago, two well-known behaviour researchers, Dan Ariely (a Duke University prof and best-selling author) and Nina Mazar (then at the U of T’s Rotman School of Management), cut a deal with an insurance company to perform a harmless experiment on customers. Their goal was to study aspects of cheating, something insurers have a powerful incentive to understand. An unnamed company “in the southeastern United States” was asking customers to report the current odometer readings on insured vehicles, and the forms required those customers to sign a statement saying “I promise that the information I am providing is true.”

The particular question that Ariely and Mazar were interested in is this: would it make a difference whether you put the honesty attestation at the top of the form or at the bottom? Would asking for a signature “first” lead to some noticeable difference in customer reporting of mileage? The company supposedly sent out two versions of the forms, assigning one or the other to each customer randomly, and compiled a dataset.

When the customer-reported figures were compared with baseline readings previously taken by the insurer, the psychologists concluded that customers filling out the signature-first version of the form claimed to have driven 10 per cent more miles between the two readings. Since the customers’ incentive was to report less mileage (and thereby receive lower premiums), the most natural explanation was that the signature-first form encouraged people to report more accurately.

That conclusion held, at least, until an unsuccessful effort was made to replicate the insurance experiment in 2020. As part of that process, the data gathered for the 2012 experiment was published as an Excel spreadsheet. The numbers looked fishy, and "data thugs" who specialize in spotting cooked scientific data were able to reveal the full horror after receiving an anonymous tip. The brutes in question — Uri Simonsohn, Leif Nelson and Joe Simmons — run an important weblog called Data Colada. Being behaviour researchers themselves, they have pioneered many new statistical methods for fraud-spotting over the years.

But it didn’t take any sophistication to see that the data provided by the insurance company had been faked in a childish way. You can visit Data Colada and see for yourself. It’s obvious that someone, probably an employee of the insurance company, fabricated the odometer readings, using Excel’s random-number generator to do it.
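For readers who want a feel for why spreadsheet-generated numbers are so easy to spot, here is a simplified sketch (not the Data Colada team's actual analysis): values drawn from a uniform random-number generator spread evenly across their whole range, while real-world mileage clusters around typical driving distances. The specific figures and the `spread_ratio` check below are illustrative assumptions, not details from the case.

```python
import random
import statistics

random.seed(42)

# Plausible "real" annual mileage: roughly bell-shaped, clustered around a
# typical value (the 12,000-mile mean is an illustrative assumption).
real_miles = [max(0, random.gauss(12000, 4000)) for _ in range(10_000)]

# Mileage "faked" the way a spreadsheet random-number generator would do it:
# evenly spread across the whole possible range.
faked_miles = [random.uniform(0, 50000) for _ in range(10_000)]

def spread_ratio(data):
    """Standard deviation relative to the range. Uniform data scores near
    1/sqrt(12), about 0.29; clustered real-world data scores much lower."""
    lo, hi = min(data), max(data)
    return statistics.pstdev(data) / (hi - lo)

print(f"clustered (real-looking) data: {spread_ratio(real_miles):.2f}")
print(f"uniform (faked-looking) data:  {spread_ratio(faked_miles):.2f}")
```

A single summary statistic like this would never prove fraud on its own, but it shows the basic idea: fabricated uniform numbers wear a statistical signature that honest measurements almost never do.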

The authors of the original paper seem to agree that the study is a dud. For behaviour researcher Lisa L. Shu, a graduate student who ended up as the lead author of the 2012 paper when her research team joined forces with Ariely and Mazar, it was a reputation-maker; Ms. Shu now bravely describes herself as “delighted” that the bogus dataset, which her own group never “saw”, “touched”, or questioned, has been exposed by Simonsohn et al.

And, of course, this is science working the way it’s supposed to, in principle. The only reason people were combing over an old result was that everyone is now aware of the “replication crisis” and there is more funding available for replications of celebrated psychology experiments (which often turn out to be turkeys).

On the other hand, if anybody at all had looked more closely at the original spreadsheet, there would have been no need for “replication.” And specialist “thugs” who ferret out bogus numbers don’t (yet) receive the fame and fortune that TED-Talking, science-popularizing psychologists sometimes do. Some of you will have heard of Dan Ariely or even bought his books. None, or almost none, will know of pioneering debunkers like Simonsohn or Nick Brown.

It's unusual that the bad Excel sheet was preserved for a decade, and unusual that it was ever openly distributed. Sharing raw data is not yet a strong ethical norm in the soft sciences, but, as the Data Colada guys observe, "We have worked on enough fraud cases in the last decade to know that scientific fraud is more common than is convenient to believe, and that it does not happen only on the periphery of science. … There will never be a perfect solution, but there is an obvious step to take: data should be posted."

Many economics journals now insist on this, they add, but social psychology largely doesn’t. Psych experiments are expensive, and often involve hunting for small statistical effects in limited samples, so psych researchers are chronically defensive. They have strong incentives to fiddle with study parameters, or, as in this case, to accept too-good-to-be-true data at face value. As a news consumer, you don’t have to be taken in: I would conclude by saying “Don’t believe everything you read in the science section of a newspaper, but when it comes to psychology, don’t believe anything.”

Twitter.com/colbycosh
