What Is Our Children Learning? Part 3.

Welcome back to our examination of N. Gregory Mankiw’s “ten principles of economics.” (Part 1 of this series is here; Part 2 is here). We got through two principles last time, but today we’ll only manage one. It’s a big ‘un:

Principle 4: People Respond to Incentives

An incentive is, according to my computer’s dictionary, “a thing that motivates or encourages one to do something.” So people respond to incentives, which are things that people respond to. This is a tautology rather than a principle.

Of course, there’s more to it. In the explanatory text, Mankiw points out that keeping incentives in mind is crucial when designing public policy (which is true — unintentional incentives in the tax code in Sweden led to the horror that was Abba’s outfits), and that, for example, a gasoline tax would likely encourage people to use less gasoline (also true). But then he gives this example, which I’ll quote in full.

When policymakers fail to consider how their policies affect incentives, they often end up with unintended consequences. For example, consider public policy regarding auto safety. Today, all cars have seat belts, but that was not true 50 years ago. In the 1960s, Ralph Nader’s book Unsafe at Any Speed generated much public concern over auto safety. Congress responded with laws requiring seat belts as standard equipment on new cars.

How does a seat belt law affect auto safety? The direct effect is obvious: When a person wears a seat belt, the probability of surviving an auto accident rises. But that’s not the end of the story because the law also affects behavior by altering incentives. The relevant behavior here is the speed and care with which drivers operate their cars. Driving slowly and carefully is costly because it uses the driver’s time and energy. When deciding how safely to drive, rational people compare, perhaps unconsciously, the marginal benefit from safer driving to the marginal cost. As a result, they drive more slowly and carefully when the benefit of increased safety is high. For example, when road conditions are icy, people drive more attentively and at lower speeds than they do when road conditions are clear.

Consider how a seat belt law alters a driver’s cost-benefit calculation. Seat belts make accidents less costly because they reduce the likelihood of injury or death. In other words, seat belts reduce the benefits of slow and careful driving. People respond to seat belts as they would to an improvement in road conditions–by driving faster and less carefully. The result of a seat belt law, therefore, is a larger number of accidents.

This sounds like blackboard bullshit, but there’s actually empirical support for it (it’s called the “Peltzman effect,” after Sam Peltzman, the economist who found that, yes, people take bigger risks on the road when they feel safer). My favorite example of the Peltzman effect, as long as we’re talking about Sweden: Sweden switched from driving on the left to driving on the right, overnight, in 1967. Everyone expected a wave of accidents, but instead accidents went down, because drivers were paying more attention. Note that this means that we don’t calculate risks exactly–drivers overreacted in that case.
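Mankiw’s cost-benefit story is easy to make concrete with a toy model. Here is a minimal sketch in Python (every number in it is invented for illustration, nothing is calibrated to real data): a driver picks the speed that balances time saved against the expected cost of a crash, and cutting the cost of a crash, which is what the seat belt does, pushes the chosen speed up.

```python
# Toy model of risk compensation. All parameters are made up for
# illustration; this sketches the logic, not an empirical claim.

def expected_utility(speed, accident_cost):
    """Driver's payoff at a given speed (arbitrary units).

    Benefit: getting there faster, linear in speed.
    Cost: chance of a crash (rising steeply with speed) times how
    bad a crash would be.
    """
    time_value = 1.0 * speed
    accident_prob = 0.00005 * speed ** 2  # stays below 1 for speeds <= 120
    return time_value - accident_prob * accident_cost

def best_speed(accident_cost):
    """Brute-force the utility-maximizing speed over 10 to 120 mph."""
    speeds = [s / 10 for s in range(100, 1201)]
    return max(speeds, key=lambda s: expected_utility(s, accident_cost))

no_belt = best_speed(accident_cost=200.0)    # a crash is very costly
with_belt = best_speed(accident_cost=160.0)  # the belt softens a crash

print(no_belt, with_belt)  # 50.0 62.5
```

With these made-up numbers the belted driver also crashes more often (the crash probability at 62.5 mph is about 0.20, versus about 0.13 at 50 mph), which is exactly the result Mankiw describes. Whether the offset is partial or total in the real world is an empirical question, and a toy model like this can’t answer it.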

But Mankiw makes it sound like government efforts to improve road safety have been, and must be, futile because drivers will simply adjust their driving to keep the risk constant. That just plain ain’t the case. I’ve read Unsafe at Any Speed, and here are some other problems with Detroit’s cars back then, besides the lack of standard seat belts:

  • If you were one of those safety-conscious people who had paid extra for a seat belt, a head-on collision could insert the steering wheel into your face (the steering column was one piece, instead of two shorter pieces connected by a gear).
  • Even if that didn’t happen, your seat might break off, hammering you into the steering wheel.
  • Brake fluid had a tendency to boil, leaving the brakes useless. By the time anyone checked at the accident scene, the fluid would have cooled, leaving no evidence except the driver’s word.
  • The snazzy designs of yesteryear left big blind spots (and a few people were even impaled on those cool tailfins).
  • Even outside rear-view mirrors weren’t standard (until 1966).
  • In fact, the car companies had only adopted windshield wipers, directional signals, and brake lights (which were standard by the time Nader was writing) under compulsion from the government.
The car companies insisted that they simply couldn’t fix these problems (although somehow they found the resources to completely redesign their cars every year); they only did fix them when government action, or the threat of it, forced them to.

And if you think it’s unlikely that we’ve let our driving deteriorate to the point that we’ve offset all of these reduced risks (many of which drivers in the 1960s didn’t even know about), you’re right. Driving is much safer than it used to be. It’s worth remembering just how deadly driving used to be — the economist John Kenneth Galbraith wrote in the 1950s of an “annual massacre of impressive proportions,” and a British person in the 1960s said, “I drive every day. I see blood on the road every week.”

Government doesn’t get all the credit; cultural changes have mattered as well (when I was a kid, if you got into an accident, “I was drunk” was considered an excuse). Still, it’s no secret that “the Peltzman effect” does not equal “regulations are futile.” Here’s a 2006 meta-analysis:

As Peltzman (1975) acknowledges, offsetting behaviour could be trivial or substantial. Indeed, the amount varies between road safety measures. Behavioural adaptation generally does not eliminate the safety gains from programmes, but tends to reduce the size of the expected effects.

And that person who saw blood on the road? He was quoted in Richard Titmuss’s The Gift Relationship (1970), describing why he gave blood regularly. And Titmuss’s book is relevant to this discussion. It showed how the blood supply in Britain, where people donated for free and where the distribution was done by a socialist bureaucracy, was cheaper and better than in America, where donation and distribution were rewarded with money. A reviewer summarized the message of the book:

For a lesson in modern political economy, consider the trade in doolb. In the land of Niatirb, the supplie[r]s of this vital commodity receive no pay, and its processing and distribution are in the hands of government bureaucrats. In Acirema, by contrast, nearly all doolb supplie[r]s receive cash or other tangible, individual incentives, and much of the processing and distribution is carried out for profit. Obviously, Acireman doolb supplies will be higher in quality, lower in price, more accurately attuned to demand, and involve far less wastage—right? Wrong.

Point being, “incentive” is a broad term. The desire to do good is an incentive as much as the desire for money. People gave more blood in Britain, where giving was a noble, selfless act, than in America, where it was a troublesome way to make a couple of bucks.

Mankiw doesn’t deny this, but he doesn’t mention it, either. It’s easy to misread this section as saying that our individual, selfish gain/loss calculations are all that matter.

And Mankiw doesn’t mention that taking incentives into account doesn’t always work. Consider crack cocaine. When it first appeared in American cities in the early 80s, it was new, it was scary, and a (somewhat justified) moral panic ensued. The result was the Anti-Drug Abuse Act of 1986, which imposed wildly disproportionate penalties on crack (the same mandatory sentences kicked in at one-hundredth the quantity required for powder cocaine). I’m old enough to remember that the idea was to stop crack from spreading. Addicts were expected to look at the incentives and say, “heck, it’s just not worth it. I can get as high as I want to on heroin.” To put it mildly, that didn’t happen. We can conclude that people become crackheads for some reason other than a rational cost-benefit analysis.

And it’s not just crackheads. The Earned Income Tax Credit is a helpful subsidy for the working poor, but that doesn’t mean that people take its incentives into account when making life decisions:

“I mean, Earned Income Credit is nice, but it’s not everything!” said one 25-year-old mother of three. “I’m not going to let it factor into my marriage if I ever want to get married. I’m not marrying the Earned Income Credit. I’m marrying the man I love.”

I’m not creating a straw man here: a lot of public-sector reform in the English-speaking world, and the countries we bully, really has been based on the idea that monetary incentives trump everything. Many still cling to this idea in the face of the fact that these “reforms” haven’t actually made things better.

Grade: C-.

The principle itself is a dull tautology, redeemed in part by some clear explanation of policy consequences, but even that winds up being misleading.

Suggested replacement principle: “Human behavior is a complex thing that is only partly explained by measurable incentives.”

It’s worth pointing out that Mankiw’s errors and omissions so far have all pointed in the same direction–toward a defense of the status quo (redistributing income reduces the size of the pie so don’t bother, environmental regulations do the same so don’t bother, and safety regulations lead to offsetting behavior so don’t bother). We’ll be seeing more of that.

The first four principles were categorized as “how people make decisions,” although they barely scratched the surface of how people actually make decisions. The next three are “how people interact.” We’ll start with them tomorrow.


On to the next post!
