Following on from our previous posts, let’s look at some lessons from the existing research and how they’re relevant to stopping white-collar crime in its tracks.
Let’s get to the final instalment, which comes in two parts. We’ll dive back into the research of Dan Ariely, Professor of Psychology and Behavioural Economics at Duke University. If you haven’t already watched the documentary trailer from my previous post, here it is again:
In summary, dishonesty is commonplace. Everyone has the capacity to cheat if given the opportunity and if the conditions are right. Ariely’s research tells us that the world isn’t a place of goodies versus baddies. Rather, it’s an infinite struggle within ourselves.
From little things, big (fraudulent) things grow
One thing that comes across most strongly in the documentary is this: the white-collar criminals (from insider traders to CV fraudsters) started small. Then their greed got the better of them and/or dishonesty was normalised. Our Australian insider traders, Christopher Hill and Lukas Kamay, certainly found that a bit of cheating snowballed into something far bigger.
Before we start getting judgmental, Ariely’s research tells us that no one is a saint. Under the right conditions, anyone can succumb to dishonesty within the limits of their “fudge factor” (i.e. the extent to which you can cheat while still believing that you’re a good, honest person). That said, Kamay’s fudge factor was clearly much greater, at $8 million, than Hill’s $200,000.
Hear ye, hear ye
This is of course fascinating from a legal viewpoint. Law enforcement in relation to white-collar crime involves assessing dishonesty after the event. What legally-trained minds fail to do well (shock horror!) is consider how to take preventative steps to stop cheating within any given ecosystem.
Given that Ariely’s research has wide-ranging applications, let’s focus on six key lessons. These have the most relevance to designing better systems to improve honesty. Ariely puts it well by saying that:
“This is not about being bad, it’s about being human. It’s then about how we protect ourselves against our own bad behaviour, and the bad behaviour of other people.”—Dan Ariely
1. We are watching you
The first and most obvious lesson is—we cheat less when we know that someone is watching us. As the matrix test from my previous post demonstrated, it’s easier to cheat when you know that no one can catch you out.
Another intriguing experiment looked into this issue. It turns out that you don’t always need an actual person doing the watching.
In the mid-2000s, psychology researchers at a British university tested an unusual honesty box system. Their subjects? 48 of their colleagues who used the common kitchen. For several years, members of the Psychology Division had the option to pay for tea, coffee and milk through an honesty box.
For the experiment, our researchers put up a new price sign at eye level, above the honesty box and the coffee- and tea-making equipment. Over 10 weeks, they alternated between two images printed above the prices for tea, coffee and milk. They were:
- a pair of eyes looking directly at the observer; and
- a bunch of flowers.
Each week, the researchers would record the amount of tea, coffee and milk their colleagues consumed, as well as how much was in the honesty box.
The subjects were unaware of the experiment and, curiously, no one commented on the changing images of eyes and flowers. Even so, the results told a very interesting story: on average, people paid nearly three times as much for their drinks during the weeks when the eyes were on display.
Our researchers concluded that these findings were important when thinking about “designing honesty-based systems, or … to maximize contributions to public goods”. Even weak cues, like printed eyes, had an unconscious effect on participants. It shows us that a person’s concern for their reputation is a powerful motivator for good behaviour.
Hot Tip: If you think that your colleagues take too many forks from the shared kitchen and never return them, you might want to stick some printed eyes in the drawer.
On a serious note, the eye and flower experiment tells us that we can get better outcomes if there’s a system where people know they are, well, being watched. It all sounds rather logical when you think about it.
2. Conflicts of interest are real problems
Moving on from printed eyes to the second lesson. Conflicts of interest are serious problems. Ariely cites his one-off experience working with lawyers as an expert witness.
“We academics are sometimes called upon to use our knowledge as consultants and expert witnesses. Shortly after I got my first academic job, I was invited by a large law firm to be an expert witness… Very early in the case I realized that the lawyers I was working with were trying to plant ideas in my mind that would buttress their case. They did not do it forcefully or by saying that certain things would be good for their clients. Instead, they asked me to describe all the research that was relevant to their case. They suggested that some of the less favorable findings for their position might have some methodological flaws and that the research supporting their view was very important and well done. They also paid me warm compliments each time that I interpreted research in a way that was useful to them. After a few weeks, I discovered that I rather quickly adopted the viewpoint of those who were paying me. The whole experience made me doubt whether it’s at all possible to be objective when one is paid for his or her opinion. (And now that I am writing about my lack of objectivity, I am sure that no one will ever ask me to be an expert witness again—and maybe that’s a good thing.)”
— Dan Ariely, The (Honest) Truth About Dishonesty (2012)
Aside from not biting the hand that feeds you, what happens when there’s a conflict between, say, a client and an adviser? Does disclosure help to remedy the conflict of interest?
It turns out that three researchers from Yale, Carnegie Mellon and UC Berkeley found that disclosure may result in worse outcomes for those receiving advice from conflicted advisers. Cain, Loewenstein and Moore ran a series of studies and found that “sunshine policies” like disclosure were ineffective in remedying conflicts of interest. They can even backfire.
One study involved an estimating task where participants had to work out the total monetary value of the coins in a jar. There were two groups of participants, “estimators” and “advisers”. Estimators had to guess the value of the coins in the jar. Advisers had the task of advising estimators on their guesses. The difference between the two groups was that the advisers had better information: the researchers gave them a range (e.g. somewhere between $10 and $30) and a longer time to inspect the jar. The conflict? Advisers were paid more when estimators guessed higher, which gave them an incentive to inflate their advice.
Our researchers found that, across several conflict of interest scenarios, disclosure made things worse: once advisers had disclosed their conflict, they gave even more biased advice, and estimators did not discount that advice nearly enough to compensate.
This is a problem because we often believe that disclosure is a panacea. For example, suppose your doctor tells you about a drug trial and mentions that they’re getting a $5,000 cut. Would you think that your doctor was acting in your best interests? Or do they get into your good books because they were honest enough to tell you about their cash benefit?
Declaring that you have a conflict of interest does not remove that conflict—you only signal that there is one. It should make us reconsider whether disclosing conflicts is an effective way of managing them.
So what are the solutions? Our researchers think that the best way of dealing with conflicts of interest is to eliminate them, rather than rely on disclosure. For example:
“Physicians, for example, could (and, we believe, should) be prohibited from accepting gifts from pharmaceutical companies. Investment banks could be barred from providing buy/sell recommendations on the stocks of companies whose issues they underwrite. Bond-rating firms could be paid by those who use the information they generate rather than by the companies whose bonds they rate.”
However, our researchers acknowledge that it’s not always possible to eliminate conflicts altogether. Failing that, the next best option is to minimise conflicts and stay aware of the extent to which we need to account for the ones that remain.
3. Don’t distance the person from the money
The third lesson is that we shouldn’t distance people from the money they’re dealing with. Aside from the literal one, there’s another good reason for the turn of phrase “cold, hard cash”: it’s visceral. It feels real.
Numbers on a screen don’t have the same effect.
In a variation of the matrix test, participants were asked to grab tokens out of a bowl, one for each problem they solved, and could later convert the tokens into cash. Ariely found that when tokens stood in for money, the level of cheating was significantly higher.
Of the token variation (an abstract representation of money), Ariely’s conclusion was this:
“Do people find it easier to misbehave and think of themselves as good people? I think the answer is yes, absolutely yes.”—Dan Ariely
This has consequences in the financial world, where people deal with money in abstract and electronic ways. Take tweaking risk premiums on a financial product, for example. It doesn’t seem like anyone gets hurt when you tweak something on a spreadsheet to make your numbers look better, right? Or when you mix bad mortgages (where people are unlikely to repay a cent) with marginal amounts of good debt, and price the entire package as if it were a safe financial product?
Unfortunately, this was what the GFC boiled down to. And people did get hurt.
In Part 2, we’ll look at the final three lessons to take home.