Thinking, Fast and Slow for Lawyers (Part 3)

In this final instalment, we look at risk-taking, the perils of memory and how we love humans more than robots (except for Pepper the robot above).

Let’s take one last whirl through Thinking, Fast and Slow (or TFS) by Daniel Kahneman. In Part 1, we looked at “cognitive ease” and the brain’s two modes of operating: “System 1” (intuitive) and “System 2” (deliberative). In Part 2, we delved into how storytelling⁠—while powerful⁠—can sometimes mislead us.

Now hang on tight and buckle up. We’re going to talk about risk-taking.

Risky biscuit

To eat or not to eat, that is the question.

A lawyer I knew once described litigation outcomes as uncertain: they could go either way. I note that this lawyer spent more time drafting contracts (what we call “front-end lawyers”) than wandering the wafty corridors of the court (“back-end lawyers”), so views may differ in this respect.

In all fairness, litigation is a risky “venture”, and parties become increasingly cognizant of the potential downsides of running a case to final judgment. Most cases settle before they get to trial. Even cases listed (or what we call “set down”) for trial often settle in the lead-up, or on the first day of trial itself. There have been instances where a judge was about to hand down final judgment, only for the parties to arrive in court and inform the judge that they no longer wanted one (having reached a settlement that same day). How annoying!

Unless the matter concerns issues of liberty (e.g. criminal law), a matter of principle (e.g. environmental law) or setting a precedent for future cases (e.g. administrative law), winning may not guarantee a sweet ending. Losing is, of course, a more bruising outcome than winning⁠—and a situation that both parties wish to avoid at all costs.

So why do parties take the risk in the first place and how do they mentally deal (or not deal) with losing?

TFS explains the mechanics. First, let’s cover three concepts:

  1. The reference point
  2. Loss aversion
  3. Overweighting small probabilities

1. The reference point

The reference point is hugely important in the minds of humans. It is the measure by which we judge whether we’re “winning” or “losing”. For example, Kahneman notes that:

“In labor negotiations, it is well understood by both sides that the reference point is the existing contract and that the negotiations will focus on mutual demands for concessions relative to that reference point.”

2. Loss aversion

Our evolutionary make-up is such that the “brains of humans and other animals contain a mechanism that is designed to give priority to bad news”. This is why humans are generally loss averse: we put more weight on avoiding losses (e.g. getting eaten) than on pursuing gains (e.g. eating delicious fruit across a crocodile-infested river). In Kahneman’s words, “losses loom larger than gains”. Researchers reached this conclusion through experiments like this one (not my crocodile example):

“You are offered a gamble on the toss of a coin.
If the coin shows tails, you lose $100.
If the coin shows heads, you win $150.
Is this gamble attractive? Would you accept it?”

You stand to gain more than you stand to lose. For most people, however, the fear of losing $100 is more intense than the hope of gaining $150.
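
To make the asymmetry concrete, here is a minimal sketch in Python of how a loss-averse decision-maker might score that gamble. The loss-aversion coefficient of 2 is my assumption; Kahneman reports that experimental estimates usually fall between 1.5 and 2.5.

```python
# Minimal sketch of loss aversion applied to the coin-toss gamble.
# The loss_aversion coefficient (~2) is an assumed value; Kahneman notes
# experimental estimates usually fall between 1.5 and 2.5.

def felt_value(gain, loss, p_gain=0.5, loss_aversion=2.0):
    """Psychological value of a gamble when losses are weighted more heavily."""
    return p_gain * gain - (1 - p_gain) * loss_aversion * loss

expected_value = 0.5 * 150 - 0.5 * 100      # +25: objectively worth taking
perceived = felt_value(gain=150, loss=100)  # -25: feels like a bad bet

print(expected_value, perceived)            # 25.0 -25.0
```

That sign flip, from a gamble worth +$25 on paper to one that feels like -$25, is loss aversion at work.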

TFS points out that loss aversion is everywhere:

“If you are set to look for it, the asymmetric intensity of the motives to avoid losses and to achieve gains shows up almost everywhere. It is an ever-present feature of negotiations, especially of renegotiations of an existing contract, the typical situation in labor negotiations and in international discussions of trade or arms limitations. The existing terms define reference points, and a proposed change in any aspect of the agreement is inevitably viewed as a concession that one side makes to the other. Loss aversion creates an asymmetry that makes agreements difficult to reach.”

It’s also relevant for anyone who wonders why it’s so difficult to convince others to shift away from the status quo.

3. Overweighting small probabilities

Our brains tend to overweight small probabilities (or ignore them altogether).

“Because of the possibility effect, we tend to overweight small risks and are willing to pay far more than expected value to eliminate them altogether.”

This happened to me when I rented a car in Slovenia and had to drive on the right-hand side of the road (terrifying). I had the option of buying additional insurance to cover all damage, or living with the thought of paying a €1,500 excess for any incident. I decided to pay extra to eliminate the possibility altogether.

Ironically, I had brought TFS on the trip and fell into the very trap Kahneman describes. I ended up buying insurance that I didn’t need (or is that hindsight bias talking?). The upside: not a dent in the car, and we got back in one piece.
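
For what it’s worth, here is the back-of-the-envelope check I could have done at the rental desk. The probability of an incident and the premium below are invented numbers, purely for illustration; only the excess comes from the story above.

```python
# Back-of-the-envelope expected-value check on the rental-car excess.
# p_incident and premium are hypothetical figures for illustration only.
p_incident = 0.03      # assumed chance of an incident on the trip
premium = 120.0        # assumed cost of the extra insurance (EUR)
excess = 1500.0        # excess payable without the extra cover

expected_excess_cost = p_incident * excess  # 45.0 EUR
print(expected_excess_cost < premium)       # True: the cover costs more than the risk is "worth"
```

On those (made-up) numbers, the insurance is poor value in pure expected-value terms, which is exactly why the possibility effect is so profitable for insurers.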

Another example of overweighting small probabilities is buying a lottery ticket in the hope of winning… big. As you know, the chances are tiny. But the thinking goes: “well, if I don’t buy a ticket, then I’m not even in the running. I’ll take a punt”.

Risk-taking in litigation

The three concepts above are important in the context of litigation and negotiation. Kahneman and Tversky came up with a neat table to explain the “fourfold pattern”:

Kahneman describes the fourfold pattern as “one of the core achievements of prospect theory”. It shows how our fear of losses and our tendency to overweight small probabilities control the way we make decisions. The fourfold pattern works like this (see the sketch after this list):

  • 1st row: Shows the probability of wins or losses
  • 2nd row: Shows the emotion that most people feel
  • 3rd row: Shows how most people behave when offered the choice of a gamble or a sure gain (or loss)
  • 4th row: Describes the expected attitudes of a defendant and a plaintiff as they discuss the settlement of a civil suit.
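
As a rough guide, here is a sketch of what sits in each of the four cells, expressed as a simple data structure. The wording is my paraphrase of TFS rather than the table itself:

```python
# A paraphrased sketch of the fourfold pattern, keyed by the quadrants
# discussed below. The wording paraphrases TFS, not the original table.
fourfold_pattern = {
    "Q2 (top left): gains, high probability": {
        "example": "95% chance to win $10,000",
        "emotion": "fear of disappointment",
        "behaviour": "risk averse",
        "litigation": "plaintiff accepts an unfavourable settlement",
    },
    "Q1 (top right): losses, high probability": {
        "example": "95% chance to lose $10,000",
        "emotion": "hope to avoid the loss",
        "behaviour": "risk seeking",
        "litigation": "defendant rejects a favourable settlement",
    },
    "Q3 (bottom left): gains, low probability": {
        "example": "5% chance to win $10,000",
        "emotion": "hope of a large gain",
        "behaviour": "risk seeking",
        "litigation": "plaintiff presses a frivolous claim",
    },
    "Q4 (bottom right): losses, low probability": {
        "example": "5% chance to lose $10,000",
        "emotion": "fear of a large loss",
        "behaviour": "risk averse",
        "litigation": "defendant settles a frivolous claim",
    },
}
```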

This pattern produces some unexpected, and less than favourable, outcomes in litigation.

Quadrant 1 (Top right-hand corner)

A defendant who is very likely to lose will gamble on even a 5% chance of winning, becoming more aggressive and taking more risks. Here’s a TFS example:

“She’s suing him for alimony. She would actually like to settle, but he prefers to go to court. That’s not surprising⁠—she can only gain, so she’s risk averse. He, on the other hand, faces options that are all bad, so he’d rather take the risk.”

Quadrant 2 (Top left-hand corner)

A plaintiff with a high probability of winning is likely to settle, accepting less than the expected value of the claim in exchange for certainty and to avoid any chance of losing. Here’s a TFS example:

“Imagine that you inherited $1 million, but your greedy stepsister has contested the will in court. The decision is expected tomorrow. Your lawyer assures you that you have a strong case and that you have a 95% chance to win, but he takes pains to remind you that judicial decisions are never perfectly predictable. Now you are approached by a risk-adjustment company, which offers to buy your case for $910,000 outright⁠—take it or leave it. The offer is lower (by $40,000!) than the expected value of waiting for the judgment (which is $950,000), but are you quite sure you would want to reject it? If such an event actually happens in your life, you should know that a large industry of ‘structured settlements’ exists to provide certainty at a hefty price, by taking advantage of the certainty effect.”

Note that this is a US-centric example. Things are quite different in Australia, where litigation funders (and similar companies) have only recently entered the market.

Quadrant 3 (Bottom left-hand corner)

A plaintiff with a frivolous claim is “likely to obtain a more generous settlement than the statistics of the situation justify”. For the plaintiff, a frivolous claim is the equivalent of a lottery ticket: a small chance at a large prize. To understand why plaintiffs with frivolous claims tend to obtain a more generous settlement, see Quadrant 4.

Quadrant 4 (Bottom right-hand corner)

Defendants see frivolous claims as a nuisance. But because they overweight the small probability of losing and tend to be risk-averse, they often decide that “settling for a modest amount is equivalent to purchasing insurance against the unlikely event of a bad verdict”. Here’s a TFS quip:

“He is tempted to settle this frivolous claim to avoid a freak loss, however unlikely. That’s overweighting of small probabilities. Since he is likely to face many similar problems, he would be better off not yielding.”

Kahneman notes that in the long run, “consistent overweighting of improbable outcomes⁠—a feature of intuitive decision making⁠—eventually leads to inferior outcomes”. For organisations, settling every frivolous claim is a sub-optimal strategy: the total paid out across many such settlements can exceed what they would have paid had they defended the claims.
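
To put rough numbers on that point, here is an illustrative comparison. Every figure below is invented for illustration; only the reasoning comes from TFS.

```python
# Illustrative comparison: settle every frivolous claim vs defend them all.
# All figures are hypothetical; only the logic is drawn from TFS.
claims = 20                # frivolous claims faced over a few years
settlement = 50_000        # typical "nuisance value" settlement per claim
defence_costs = 30_000     # legal costs of defending each claim
p_freak_loss = 0.05        # small chance of a freak loss at trial
damages_if_lost = 200_000  # damages payable on a freak loss

cost_always_settling = claims * settlement
cost_always_defending = claims * (defence_costs + p_freak_loss * damages_if_lost)

print(cost_always_settling)   # 1000000
print(cost_always_defending)  # 800000
```

On these numbers, paying the “insurance premium” on every claim costs the organisation more over time than holding its nerve, which is Kahneman’s point about consistently overweighting improbable outcomes.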

Why we compensate for losses

TFS gives us a fascinating explanation for why the law tends to compensate for losses rather than for foregone gains.

As lawyers, we barely question the underlying reason why compensation exists. We spend a whole heap of time proving that, “but for” the defendant’s actions, the plaintiff would not have suffered the loss.

Kahneman notes that because people who lose suffer more psychologically than people who merely fail to gain, it makes sense that the law gives greater protection to those who suffer a loss than to those who miss out on a gain. Do you agree?

The perils of memory

We all know that memory can be imperfect. An old friend might ask, “hey, do you remember when you did this?”, and our mind draws a blank; in that instant, we know our memories are fallible. TFS makes us feel a little worse about it: our memories are much more rubbish than we admit, even to ourselves.

Kahneman designed a carefully controlled experiment, the “cold-hand situation”, to compare our experiencing self (i.e. what we actually experience) with our remembering self (i.e. what we remember of the event). This is how it worked:

Participants endured two cold-hand episodes:

“The short episode consisted of 60 seconds of immersion in water at 14°C, which is experienced as painfully cold, but not intolerable. At the end of 60 seconds, the experimenter instructed the participant to remove his hand from the water and offered a warm towel.

The long episode lasted 90 seconds. Its first 60 seconds were identical to the short episode. The experimenter said nothing at all at the end of the 60 seconds. Instead he opened a valve that allowed slightly warmer water to flow into the tub. During the additional 30 seconds, the temperature of the water rose by roughly 1°, just enough for most subjects to detect a slight decrease in the intensity of pain.”

The researchers then asked participants which episode they wished to repeat. It turned out that 80% of the participants who reported that their pain diminished during the final phase of the longer episode opted to repeat it.

So what happened? This experiment (and others) tell us that there is a conflict between the experiencing self and the remembering self. In essence:

  • We remember the best or worst moments of an experience and how it ends (the “peak-end rule”).
  • The duration of an experience (good or bad) has no effect whatsoever on how we later rate the pain or pleasure of it (“duration neglect”).

This has implications for witness testimony. It’s not just that memories fade with the passing of time (something the law is very much aware of); the problem goes to the root of how we process experiences and create memories in the first place.

We prefer humans over robots

What is the future of algorithms, artificial intelligence (AI) and machine learning in the legal space? TFS tells us that we prefer humans over robots⁠—the “natural” over the “artificial”. For anyone working on bringing tech to law, don’t be surprised if lawyers push back⁠—and hard. TFS refers to the historic chess victory of Deep Blue over Garry Kasparov and notes how we prefer to sympathise with our fellow human.

“The aversion to algorithms making decisions that affect humans is rooted in the strong preference that many people have for the natural over the synthetic or artificial. Asked whether they would rather eat an organic or a commercially grown apple, most people prefer the ‘all natural’ one. Even after being informed that the two apples taste the same, have identical nutritional value, and are equally healthful, a majority still prefer the organic fruit. Even the producers of beer have found that they can increase sales by putting ‘All Natural’ or ‘No Preservatives’ on the label.”

Furthermore, even where there is an algorithm in place, humans love to override it by referencing their own experience. As always, we tend to ignore the base rates and substitute our own understanding⁠—something we covered in Part 2.

At the end of the day, we should remember that behind every algorithm is a human being, and that our biases will be with us for a long while yet (until we evolve). That’s worth keeping in mind in the ongoing tech x law debate.

Takeaways

TFS is just the start of becoming more aware of how the human brain works. There is still so much more to learn, and we certainly don’t have all the answers.

What we can learn from TFS:

  • When you recognise that you’re in a cognitive minefield, slow down and ask for reinforcement from System 2.
  • Sleep and rest are important⁠—System 2 tires quickly.
  • Organisations can encourage a culture (through procedures) where people watch out for each other as they approach cognitive minefields.
  • Always remember the base rates⁠—don’t only look to your own experience.
  • Beware hindsight bias⁠—the tendency to believe, after the event, that we “knew it all along” and could have predicted it.
  • Know your limits and your biases.
  • Read TFS!

Here’s my (very marked-up) copy.

At the end of the day, don’t be too hard on yourself. Remember, we’re all human!

Image credit // Alex Knight