Punishment Deters Crime Because Humans Are Bounded in Their Strategic Decision-Making
Journal of Artificial Societies and Social Simulation
12 (3) 1
<https://www.jasss.org/12/3/1.html>
Received: 22-Dec-2008 Accepted: 16-May-2009 Published: 30-Jun-2009
π_i(s_i) = s_i (g − c p)    (1)
Table 1: The inspection game. Payoffs for inspectees are given on the left and for inspectors on the right side of the comma. The payoffs denote: g gains for crime, p punishment, k inspection costs, r rewards for successful inspection, with p > g > 0 and r > k > 0.

| | | Inspector j: inspect | | Inspector j: not inspect |
|---|---|---|---|---|
| Inspectee i | crime | g − p , r − k | ⇐ | g , 0 |
| | | ⇓ | | ⇑ |
| | no crime | 0 , −k | ⇒ | 0 , 0 |
π_i(s_i, c_j) = s_i (g − c_j p)    (2)
φ_j(s_i, c_j) = c_j (s_i r − k)    (3)
s_i* = k/r    (4)
c_j* = g/p    (5)
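The mixed equilibrium in equations (4) and (5) can be sketched in a few lines of Python. The payoff values below are illustrative placeholders, not the experiment's actual parameters; they are chosen so that k/r = 0.5 reproduces the predicted 50% crime rate, while g/p yields the low- and high-punishment inspection predictions of roughly 83% and 20% discussed with figure 2:

```python
def mixed_nash(g, p, k, r):
    """Mixed Nash equilibrium of the inspection game (eqs. 4-5):
    the equilibrium crime rate s* = k/r is pinned down by the
    inspector's payoffs, and the equilibrium inspection rate
    c* = g/p by the inspectee's payoffs."""
    assert p > g > 0 and r > k > 0, "payoff restrictions from Table 1"
    return k / r, g / p

# Illustrative payoffs (hypothetical, not the experimental values):
# raising p from 6 to 10 lowers the predicted inspection rate from
# about 83% to 20% while the predicted crime rate stays at 50%.
crime_low, inspect_low = mixed_nash(g=5, p=6, k=1, r=2)
crime_high, inspect_high = mixed_nash(g=2, p=10, k=1, r=2)
```

Note that punishment p appears only in the inspection rate c* = g/p, never in the crime rate s* = k/r — the game-theoretic core of the claim that, for fully rational players, punishment deters inspection rather than crime.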
π_i(s_i, s_h, c_j, c_l) = s_i (g − c_j p) − s_h l    (6)
φ_j(s_i, s_h, c_j, c_l) = c_j (s_i r − k)    (7)
Figure 1. The design for measuring crime and punishment in the laboratory experiment and in the agent-based simulation
Figure 2. Higher punishment deters crime and inspection. In experiment 1, punishment is increased after 15 periods of low punishment to 15 periods of high punishment; experiment 2 implements the reversed order. Red boxplots represent crime rates for the 15 periods of low punishment vs. the 15 periods of high punishment; blue boxplots represent the corresponding inspection rates. For entirely rational humans, higher punishment would only reduce inspection activities and leave crime rates constant: the mixed Nash equilibria from game theory predict inspection rates of 20% for high punishment and 83% for low punishment, with crime rates constant at 50% for both punishment levels. A comparison of the empirical data with the Nash predictions reveals that, in high punishment regimes, inspection rates above the prediction coincide with crime rates below it, while in low punishment regimes, inspection rates below the prediction coincide with crime rates above it. Such synchronized deviations suggest strategic interaction among humans with bounded rationality.
Table 2: Logistic random intercepts models, illustrating the statistical significance of punishment effects on crime and inspection. Standard errors in parentheses; 2940 decisions clustered in 98 subjects. High punishment: dummy (0 = low, 1 = high punishment). * p < 0.05, ** p < 0.01, *** p < 0.001.

| | (1) Crime | (2) Inspection |
|---|---|---|
| *Fixed effects* | | |
| High punishment | −1.06*** (0.085) | −0.72*** (0.084) |
| Intercept | 0.70*** (0.13) | 0.28* (0.13) |
| Variance of intercept | 1.22 (0.23) | 1.28 (0.24) |
| −2 log likelihood | −1776.2 | −1804.7 |
| BIC | 3576.4 | 3633.4 |
| N (decisions) | 2940 | 2940 |
Inspectee i commits a crime if
g − ((1 − ω_i) c_i + ω_i η) p > 0    (8)
Inspector j inspects if
((1 − ω_j) s_j + ω_j η) r − k > 0    (9)
Inspectee i commits a crime if
(1 − ω_i)(g − c_i p) + ω_i η > 0    (10)
Inspector j inspects if
(1 − ω_j)(s_j r − k) + ω_j η > 0    (11)
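As a minimal sketch, the four decision rules (8)–(11) translate directly into Python. The noise term η is passed in as an argument because its distribution is a modeling choice, and all function names below are ours, introduced only for illustration:

```python
def crime_bounded_learning(g, p, c_est, omega, eta):
    """Eq. (8), Bounded Learning Model: the noise η distorts the
    inspectee's estimate c of the detection probability."""
    return g - ((1 - omega) * c_est + omega * eta) * p > 0

def inspect_bounded_learning(k, r, s_est, omega, eta):
    """Eq. (9): the noise distorts the inspector's estimate s
    of the crime rate."""
    return ((1 - omega) * s_est + omega * eta) * r - k > 0

def crime_bounded_decision(g, p, c_est, omega, eta):
    """Eq. (10), Bounded Decision-Making Model: the noise applies
    to the whole decision function of the inspectee."""
    return (1 - omega) * (g - c_est * p) + omega * eta > 0

def inspect_bounded_decision(k, r, s_est, omega, eta):
    """Eq. (11): the noise applies to the whole decision function
    of the inspector."""
    return (1 - omega) * (s_est * r - k) + omega * eta > 0
```

For ω = 0 all four rules collapse to the rational best responses against the estimated opponent behavior, which is why both models coincide in the noise-free columns of figure 3.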
Figure 3. Comparison of simulated punishment effects on crime and inspection for different levels of bounded rationality. Each chart displays the dynamics of the crime and inspection rates over 1000 time steps in an exemplary simulation run with 1000 inspectees and 1000 inspectors. The four columns represent different levels of bounded rationality, expressed as the percentage ω of the random component η in the agents' decisions (0%, 20%, 60%, 100% random). The upper two rows, (a) and (b), display results for modeling bounded rationality as "bounded learning" from social interaction; this bias consists of random noise in the agents' estimation of the detection probabilities for criminal behavior. In the lower two rows, (c) and (d), bounded rationality is modeled as bounded decision-making; this "Bounded Decision-Making Model" applies the random noise ω to the whole decision function, producing increasingly erratic behavior of the agents. Results for both models of bounded rationality can be compared for low punishment versus high punishment ((a) vs. (b) and (c) vs. (d)). Firstly, the results demonstrate that for no random noise (ω = 0%), the means of the oscillating crime and inspection rates approximate the theoretical predictions of the mixed Nash equilibria in both models. Secondly, the oscillations visible at ω = 0% vanish rapidly in runs with greater ω. Thirdly, the mean crime and inspection rates are sensitive to the level of bounded rationality ω. This sensitivity is analyzed in greater detail and generality in figure 4.
Figure 4. Punishment only deters crime if agents are bounded in their strategic decision-making. In the simulations ((a), (b), (d), (e)), each point represents the average crime (red) and inspection rate (blue) aggregated over 1000 time steps and 1000 agents. (Single instead of aggregated runs can be seen in figure 3.) In the "Bounded Learning Model" on the left, (a) and (d), agents are partially driven by random noise ω in estimating the detection probabilities of criminal behavior. In the "Bounded Decision-Making Model", (b) and (e), the complete decision function of the agents is biased by the random noise ω. The box-plots, (c) and (f), display the empirical inspection and crime rates from the two experiments described in chapter 2 and figure 2; note for comparison that they use the same y-scale as the simulations. The upper part, (a), (b), (c), displays results for low punishment and the lower part, (d), (e), (f), for high punishment. Simulation results can be compared with the Nash predictions, which are represented by dashed lines. For completely rational agents (ω = 0), both models reproduce the Nash predictions: higher punishment deters not crime but exclusively inspection. The Nash predictions do not hold for increasing ω: with an increasing lack of strategic reasoning, punishment has an increasing effect on crime and a decreasing effect on inspection. The two models predict different patterns of punishment effects and thereby enable comparisons with empirical data and inferences about rationality flaws in humans. The "Bounded Decision-Making Model" matches the empirical patterns (on the right side of the figure) well for noise levels around ω ≈ 50%. This indicates that humans are bounded both in their capacity to perceive their social environment correctly and in their capacity to maximize subjective utility.
Inspectee i commits a crime if
g/p > η    (12)
Inspector j inspects if
η > k/r    (13)
Inspectee i commits a crime if
η > 0    (14)
Inspector j inspects if
η > 0    (15)
The decisions to commit crimes and to perform inspections are thus driven solely by the random variable η.
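This limiting behavior for the Bounded Learning Model can be checked numerically. The sketch below assumes η is drawn uniformly from [0, 1) (an assumption of this illustration; a deterministic grid stands in for the random draws): the crime rate then converges to the inspectees' own indifference point g/p (eq. 12) and the inspection rate to the reverse indifference point 1 − k/r (eq. 13).

```python
# Deterministic grid over [0, 1) approximating the uniform draws of η.
GRID = [i / 1000 for i in range(1000)]

def limit_crime_rate(g, p):
    """Fraction of η-draws satisfying g/p > η (eq. 12); tends to g/p."""
    return sum(g / p > eta for eta in GRID) / len(GRID)

def limit_inspect_rate(k, r):
    """Fraction of η-draws satisfying η > k/r (eq. 13); tends to 1 - k/r."""
    return sum(eta > k / r for eta in GRID) / len(GRID)
```

With illustrative payoffs g/p = 0.2 and k/r = 0.5, the limiting rates are 20% crime and 50% inspection, matching the flipped equilibria described for figure 5.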
Figure 5. Bounded Learning: Effects of random noise ω on crime and inspection for fixed payoff combinations. K/R denotes inspectors' payoffs (inspection costs over inspection rewards, k/r) and G/P denotes criminals' payoffs (criminal gains over punishment, g/p). Points represent mean crime and inspection rates over 500 periods over 50 groups of four. The results confirm and generalize our previous simulations, which only covered the specific payoff combinations from the laboratory experiments: For minimal random noise ω, agents reproduce Nash equilibria for all payoff combinations: crime rates increase for increasing K/R independently of G/P, and inspection rates decrease for decreasing G/P independently of K/R. For maximal random noise ω, criminals' equilibria flip from their opponents' indifference point k/r to their own indifference point g/p, and inspectors' equilibria flip from their opponents' indifference point g/p to their reverse indifference point 1 − k/r. Values of ω between the minimum and maximum produce equilibria between both scenarios.
Figure 6. Bounded Decision-Making: Effect of random noise ω on crime and inspection for fixed payoff combinations. K/R denotes inspectors' payoffs (inspection costs over inspection rewards, k/r) and G/P denotes criminals' payoffs (criminal gains over punishment, g/p). Points represent mean crime and inspection rates over 500 periods over 50 groups of four. The results confirm and generalize our previous simulations, which only covered the specific payoff combinations from the laboratory experiment: For minimal random noise ω, agents reproduce Nash equilibria for all payoff combinations: crime rates increase for increasing K/R independently of G/P, and inspection rates decrease for decreasing G/P independently of K/R. With increasing random noise ω, the agents' equilibria for crime and inspection move toward 50%, which is the mean value of the uniform distribution from which we draw our η-values.
Figure 7. Three-dimensional contour plots: Effect of payoff combinations on crime and inspection for fixed levels of random noise ω. The contours show mean crime and inspection rates for simulation runs over 500 periods with 50 groups of four agents. Darker areas represent more crime or more inspection activity, respectively. Numbers on the contours mark 10% steps (e.g. 0.1 means 10%). The first two rows represent simulation runs for the Bounded Learning Model, with row 1 referring to the mean crime rate and row 2 to the mean inspection rate. Rows 3 and 4 represent simulation runs for the Bounded Decision-Making Model, with row 3 for crime and row 4 for inspection rates (e.g. "Learn Crime" refers to the mean crime rate for the Bounded Learning Model). Results confirm and generalize our previous restricted simulations for the payoff combinations from the laboratory experiments. The first column, ω = 0, represents no random noise in both models. Here, crime rates only depend on inspectors' payoffs k/r, as the vertical lines reveal, and inspection rates only depend on inspectees' payoffs g/p, as the horizontal lines reveal. For increasing randomness ω, crime rates depend increasingly on criminals' own payoffs, so that the crime lines shift toward horizontal positions, and inspection rates depend increasingly on inspectors' payoffs, so that the inspection lines shift toward vertical positions. In the Bounded Decision-Making Model, increasing random noise ω forces crime and inspection rates toward complete randomness, which in our model is 50% crime and 50% inspection.
2Note that for the case of ω = 0% both models are representations of the learning model "fictitious play" (Fudenberg and Kreps 1993; Fudenberg and Levine 1998). Our extension of fictitious play is nevertheless new, as we suggest two different extensions of the model (bounded learning and bounded decision-making). Both extensions measure bounded rationality as the percentage of random noise in the decision function; the models differ in which component is subject to the noise. As the focus of this article is on bounded rationality rather than on a systematic theoretical analysis and empirical validation of learning models, we do not pursue the analysis of learning models further. The interested reader is referred to Rauhut (2009).
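For readers who wish to reproduce the noise-free baseline, a minimal fictitious-play sketch for the inspection game follows. The initial belief counts and the payoffs in the usage note are illustrative assumptions, not the experimental parameters:

```python
def fictitious_play(g, p, k, r, periods):
    """Fictitious play in the inspection game: each period, the
    inspectee best-responds (eq. 2) to the empirical frequency of
    past inspections, and the inspector best-responds (eq. 3) to
    the empirical frequency of past crimes. Beliefs start from one
    fictitious observation of each action (a uniform prior).
    Returns the realized (crime rate, inspection rate)."""
    inspect_obs, crime_obs, n = 1, 1, 2
    crimes = inspections = 0
    for _ in range(periods):
        crime = g - (inspect_obs / n) * p > 0    # best response to belief c
        inspect = (crime_obs / n) * r - k > 0    # best response to belief s
        crime_obs += crime
        inspect_obs += inspect
        n += 1
        crimes += crime
        inspections += inspect
    return crimes / periods, inspections / periods
```

With the illustrative payoffs g = 2, p = 10, k = 1, r = 2, the realized frequencies cycle around and slowly approach the mixed Nash equilibrium (s*, c*) = (k/r, g/p) = (0.5, 0.2), reproducing the oscillating ω = 0% runs of figure 3.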
BIANCO, W T, Ordeshook, P C and Tsebelis, G (1990) Crime and punishment: Are one-shot, two-person games enough? American Political Science Review, 84 (2), 569-589.
BOLTON, G E and Ockenfels, A (2000) ERC: A Theory of Equity, Reciprocity, and Competition. American Economic Review, 90 (1), 166-193.
BOYD, R, Gintis, H, Bowles, S and Richerson, P J (2003) The evolution of altruistic punishment. Proc Natl Acad Sci USA, 100 (6), 3531-3535.
CAMERON, S (1988) The Economics of Deterrence: A Survey of Theory and Evidence. Kyklos, 41 (2), 301-323.
CHIAPPORI, P A, Levitt, S D and Groseclose, T (2002) Testing Mixed-Strategy Equilibria When Players Are Heterogeneous: The Case of Penalty Kicks in Soccer. American Economic Review, 92, 1138-1151.
DOOB, A N. and Webster, C M (2003) Sentence Severity and Crime: Accepting the Null Hypothesis. Crime and Justice. A Review of Research, 28, 143-195.
FALK, A and Fischbacher, U (2002) Crime in the Lab. Detecting Social Interaction. European Economic Review, 46, 859-869.
FEHR, E and Schmidt, K M (1999) A theory of fairness, competition, and cooperation. Quarterly Journal of Economics, 114 (3), 817-868.
FEHR, E and Gächter, S (2002) Altruistic Punishment in Humans. Nature, 415 (10), 137-140.
FISCHBACHER, U (2007) Z-Tree. Zurich Toolbox for Ready-made Economic Experiments. Experimental Economics, 10 (2), 171-178.
FOWLER, J H, (2005) Altruistic punishment and the origin of cooperation. Proc Natl Acad Sci USA, 102 (19), 7047-7049.
FUDENBERG, D and Kreps, D M (1993) Learning Mixed Equilibria. Games And Economic Behavior, 5 (3), 320-367.
FUDENBERG, D and Levine, D K (1998) The Theory of Learning in Games. MIT Press, Cambridge, MA.
GIGERENZER, G and Goldstein, D G (1996) Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103 (4), 650-669.
GÜRERK, Ö, Irlenbusch, B and Rockenbach, B (2006) The competitive advantage of sanctioning institutions. Science, 312 (5770), 108-111.
HENRICH, J and Boyd, R (2001) Why people punish defectors. Journal of Theoretical Biology, 208, 79-89.
LEVITT, S D (1997) Using electoral cycles in police hiring to estimate the effect of police on crime. American Economic Review, 87 (3), 270-290.
MACCOUN, R and Reuter, P (1998) Drug Control. In Michael Tonry (Ed.), The Handbook of Crime and Punishment, New York: Oxford University Press, 207-238.
MACY, M W and Flache, A (2002) Learning Dynamics in Social Dilemmas. Proc Natl Acad Sci USA, 99, 7229-7236.
MATSUEDA, R L, Kreager, D A and Huizinga, D (2006) Deterring Delinquents: A Rational Choice Model of Theft and Violence. American Sociological Review, 71, 95-122.
NAGIN, D S (1998) Criminal Deterrence Research at the Outset of the Twenty-First Century. Crime and Justice. A Review of Research, 23, 1-42.
POGARSKY, G and Piquero, A R (2003) Can Punishment Encourage Offending? Investigating the Resetting Effect. Journal of Research in Crime and Delinquency, 40 (1), 95-120.
RABIN, M (1993) Incorporating fairness into game theory and economics. American Economic Review, 83 (5), 1281-1302.
RAUHUT, H and Krumpal, I (2008) Enforcement of social norms in low-cost and high-cost situations. Zeitschrift für Soziologie, 5, 380-402.
RAUHUT, H (2009) Higher punishment, less control? Experimental evidence on the inspection game. Rationality and Society, 21 (3).
SHERMAN, L W (1993) Defiance, Deterrence, and Irrelevance: A Theory of the Criminal Sanction. Journal of Research in Crime and Delinquency, 30 (4), 445-473.
SIGMUND, K, Hauert, C and Nowak, M A (2001) Reward and punishment. Proc Natl Acad Sci USA, 98 (19), 10757-10762.
TODD, P M and Gigerenzer, G (2000) Précis of Simple heuristics that make us smart. Behavioral and Brain Sciences, 23 (5), 727-780.
TSEBELIS, G (1989) The Abuse of Probability in Political Analysis: The Robinson Crusoe Fallacy. American Political Science Review, 83 (1), 77-91.
TSEBELIS, G (1990) Penalty Has No Impact on Crime. A Game Theoretic Analysis. Rationality and Society, 2, 255-286.
WALKER, M and Wooders, J (2001) Minimax play at Wimbledon. American Economic Review, 91 (5), 1521-1538.
© Copyright Journal of Artificial Societies and Social Simulation, 2009