Adam Smith, Scientist and Evolutionist: Part 2


Vernon Smith for AdamSmithWorks

March 6, 2019
In this series of essays, I want to illustrate the power of Adam Smith’s social system, developed in The Theory of Moral Sentiments and An Inquiry into the Nature and Causes of the Wealth of Nations, to bring order to contemporary experiments in which traditional game-theoretic models failed to predict human action even under conditions of anonymity, and to draw out the lessons that behavioral scientists should take from Smith’s works. In Part 1, I outlined Smith’s social, evolutionary system of Sentiments and its relevance both to human decision making and to the broader rules of human conduct. In Part 2, I apply these rules to game theory to show how they improve upon traditional models and to explain why the orthodox response to the failure of those models falls short of Smith’s system. Finally, in Part 3, I explore the implications of Smith’s social-moral system for trade, wealth, and human liberty.




Part 2: Adam Smith versus Neo-classical Models: The Nexus of Context, Motivation and Self-interest in Human Sociality [1]   


In Part 1, I summarized how Adam Smith laid out a system for explaining human behavior in which our natural tendency to care about how others judge our actions drives an evolutionary social process that helps to align the actions and interests of the different people in society. The framework outlined in Part 1 explained how the pursuit of both praise and praiseworthiness relates to judgments of proper behavior (propriety).



Recall from Part 1 that any action taken reflects the actor’s socializing experiences, as well as conventions (“customs” that define what “people will go along with”) that themselves result from the socializing process. If a choice is out of order, perhaps because the situation is uncertain or unfamiliar, the person who takes it will find themselves on the receiving end of their fellows’ disapproval (corrective feedback, or “disapprobation”). Hence, The Theory of Moral Sentiments (Smith, 1759; hereafter Sentiments) is primarily about the adaptation of individuals to what is “fit and proper.” Regardless of the original set of rules and norms, the demands of social conventions emerge and change through time, implying that decisions made in this way are subject to evolutionary change and adaptation.



In this essay, I want to illustrate the power of Smith’s theory, as established in Sentiments, to bring order to contemporary economic experiments in which traditional game-theoretic models failed decisively to predict human action even under conditions of anonymity. Recall from Part 1 how Smith’s model of sociability relates to “the love of what is honorable” (Smith, 1759, p 137) and thus to our natural desire for both praise and praiseworthiness and to avoid blame and blameworthiness. This can be demonstrated by applying Smith’s framework to two-person economic games (such as the trust game discussed below) and seeing how well it accounts for the results reported in the cited literature.




Propositions on Beneficence and Justice
Sentiments explores the long-evolved, pre-civil rules and order of society that became the foundation of the civil order. Smith articulates this core concept through four strong propositions: two on Beneficence and two on Justice.



  • Beneficence Proposition 1: Actions tending to be beneficent toward others, and which are properly (intentionally) motivated, alone merit a reward response because these actions alone excite the gratitude felt in others. (Smith, 1759, p 78) 

  • Beneficence Proposition 2: Because beneficence is always free and cannot be extorted, a lack of beneficence does no positive evil, invokes no resentment, and merits no punishment. (Smith, 1759, p 78) 





  • Justice Proposition 1: Actions tending to be hurtful toward others and that are improperly (intentionally) motivated, alone merit punishment in response, because these actions alone excite the resentment felt in others. (Smith, 1759, p 78) 

  • Justice Proposition 2: Mere adherence to the rules of justice (want of injustice) does no positive good, excites no gratitude, and merits no reward. (Smith, 1759, p 81-2) 

Notice that the intentions of the actor are important to all of these propositions: 
“To the intention or affection of the heart, therefore, to the propriety or impropriety, to the beneficence or hurtfulness of the design, all praise or blame, all approbation or disapprobation, of any kind, which can justly be bestowed upon any action, must ultimately belong.” (Smith, 1759, p 93)





Applications to a Trust Game
Smith’s model and Beneficence Proposition 1 (beneficent actions merit reward) can be applied to the analysis of a two-person game intended to provide a strong test of whether people will cooperate when they interact only once. These games are known as trust games. The game is illustrated in Figure 1 (McCabe and Smith, 2000). Its design was motivated, as a simple special case, by the “investment trust game” of Berg et al. (1995), one of the most cited articles in experimental economics.


Figure 1: Invest $10 Trust Game

If Player 1 moves right, each player receives $10 and the game ends; this outcome is called the “subgame perfect equilibrium.” If Player 1 moves down, play (the decision-making power) passes to Player 2. Player 1’s $10 is tripled to $30 because of gains from exchange between the two players,[2] but the players are given no narrative to explain those gains, in order to improve the chances that they make the most self-interested decisions.[3] A right move by Player 2 splits the tripled $30 equally, so Player 1 receives $15 and Player 2 receives $15 plus the original $10, or $25 in total; this is called the cooperative outcome. But Player 2 can also take all the money by moving down, receiving $40 while Player 1, who has offered cooperation, gets nothing. This outcome is called “defection.” Traditional game theory predicts that Player 1 will not trust Player 2 and will instead protect against loss by declining to pass the decision about who gets how much money to Player 2, so that both players get $10 and the wealth available to the pair is lower than it could be.

Let’s contrast the predictions of traditional theory in this classic game with those of Smith’s model of human interaction. I will first review the well-known traditional “self-interested” analysis of player action for the game in Figure 1; a minimal computational sketch follows the list below. The traditional analysis assumes:



  1. Common knowledge that all players are strictly self-interested and non-satiated.[4]
  2. Only the player’s own payoff outcomes matter when choosing which action to take.
  3. Players decide on their actions by working backwards through time, considering the potential payoffs at each stage. (They “apply backward induction to the game tree.”)
  4. Each player’s choice is therefore determined by reasoning through the game in reverse sequence of play.
  5. If Player 1 passes to Player 2, Player 2 is motivated to move down for the highest payoff.
  6. Knowing that Player 2 earns the most by moving down, Player 1 reasons that their best strategy is to move right as a first decision, which is why that outcome is the (subgame perfect) “equilibrium” of the game.
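
To make this reasoning concrete, here is a minimal sketch of backward induction on the Figure 1 game tree under the Max U(own) assumption. The encoding and names are my own illustration, not part of the original experiments.

```python
# A minimal sketch (illustrative only): backward induction on the Figure 1
# game tree when each player maximizes only their own dollar payoff.

# A node is either a terminal payoff pair (p1, p2) or a decision node of the
# form (player, {move: subtree}).
TREE = ("P1", {
    "right": (10, 10),               # Player 1 ends the game: $10 each
    "down": ("P2", {
        "right": (15, 25),           # cooperation: $15 / $25
        "down": (0, 40),             # defection: $0 / $40
    }),
})

def backward_induction(node):
    """Return (payoffs, path) chosen by strictly self-interested players."""
    if isinstance(node[0], int):     # terminal node: a payoff pair
        return node, []
    player, moves = node
    own = 0 if player == "P1" else 1         # index of this player's payoff
    options = []
    for move, subtree in moves.items():
        payoffs, path = backward_induction(subtree)
        options.append((payoffs[own], move, payoffs, path))
    _, move, payoffs, path = max(options)    # own-payoff-maximizing move
    return payoffs, [(player, move)] + path

print(backward_induction(TREE))
# ((10, 10), [('P1', 'right')]): Player 1 moves right because Player 2
# would defect at the second node, yielding the subgame perfect equilibrium.
```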
 
Now consider how the same game would be analyzed from the perspective of Sentiments rather than under a presumption of pure self-interest. This means assuming that players are sensitive to the rules of conduct (mediated by patterns of hurt and benefit), that they infer the intentions of their fellows, that each person imagines themselves in the other’s role, and that players practice “self-command.”

With these assumptions, a move down by Player 1 unambiguously benefits Player 2 and is quite transparently “properly motivated”: it is free of extortion or threat, and although Player 1 could avoid vulnerability to loss by not passing to Player 2, Player 1 chooses to pass anyway. The available choices of Player 1, together with the alternative responses available to Player 2 and all the payoffs, define the “circumstances” that determine action and that the actors read when applying Beneficence Proposition 1. The Sentiments analysis thus assumes the following (a short illustrative sketch follows the list):



  1. Common knowledge that all players are strictly self-interested and non-satiated.
  2. Action is self-controlled by sensitivity to who is hurt or benefits from an action and an inference of intent.
  3. Intentions are inferred from the opportunity cost of the action taken. For instance, Player 2 can see that Player 1 is choosing to trust them based on the fact that they are making themselves more vulnerable by doing so.
  4. Intentional Beneficence leads to Gratitude, and Gratitude leads to an Impulse to Reward. 
  5. Intentional Hurt leads to Resentment, and Resentment leads to an Impulse to Punish.[5]
  6. Players apply backward induction to the game tree to determine who is hurt or benefits from an action at each node and to judge intent.
  7. Each Player’s “impartial spectator” imagines herself in the role of the other in judging intent and probable responses.
  8. All of these assumptions make each play a signaling game—in an important way, like a conversation—between players to convey their intent.
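
As a minimal sketch of how these assumptions change the analysis (a stylization of my own, not the authors’ formal model), suppose Player 2 reads Player 1’s intent from the opportunity cost of the move down and responds to gratitude with an impulse to reward:

```python
# A stylized sketch (my own, not a model from the essay): Player 2's choice
# when intent is inferred from opportunity cost and gratitude prompts reward.

SURE_PAYOFF = 10   # what Player 1 gives up by not simply moving right
WORST_CASE = 0     # what Player 1 risks receiving if Player 2 defects

def infer_beneficent_intent(sure_payoff: int, worst_case: int) -> bool:
    # Assumption 3 above: intent is read from the opportunity cost of the act.
    # Moving down is costly and risky for Player 1, so it signals good will.
    return sure_payoff - worst_case > 0

def player2_move(beneficent_intent: bool) -> str:
    # Assumptions 4 and 5: beneficence excites gratitude and an impulse to
    # reward; intentional hurt would excite resentment and an impulse to punish.
    return "right (cooperate: $15/$25)" if beneficent_intent else "down (defect: $0/$40)"

print(player2_move(infer_beneficent_intent(SURE_PAYOFF, WORST_CASE)))
# -> right (cooperate: $15/$25)
```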
 
Given these assumptions, if Player 1 would cooperate were they in the role of Player 2, will Player 2 see the game the same way when given the opportunity to act? And will Player 2 cooperate, given the unambiguous signal of Player 1’s beneficent intention? Which model of human behavior better predicts how real people react to these choices?
The results for 24 randomly and anonymously paired subjects playing this game are displayed in Figure 2.


Figure 2: Number and Frequency of Actions in Invest $10 Trust Game

In the lab, 12 of the 24 Player 1s chose to trust their Player 2, and of those 12 pairs, 75 percent reached the cooperative outcome: nine Player 2s cooperated, while only three defected. The results show neither the subgame perfect equilibrium, in which every Player 1 limits exposure to loss by refusing to pass play to Player 2, nor the predicted payoff-maximizing defection by every Player 2 given the chance to play. Instead, half of the Player 1s were willing to trust their Player 2, and, knowing for certain the action of their paired Player 1 counterpart, nine of the twelve Player 2s chose to cooperate. Further, the random assignment of subjects to roles implies that the same proportion of Player 1s (0.75) would have moved right, cooperating, had they been assigned to position 2. Hence, the estimated proportion of Player 1s deterred from moving down by uncertainty about whether their Player 2 is a person like themselves is 0.75 − 0.50 = 0.25. The results are substantially more consistent with Beneficence Proposition 1 than with the traditional neo-classical model.
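
The arithmetic behind that estimate, spelled out as a short sketch (the counts are those reported in Figure 2):

```python
# The arithmetic from the paragraph above (counts from Figure 2).
n_pairs = 24
n_trusting = 12      # Player 1s who moved down
n_cooperating = 9    # Player 2s who moved right after being trusted

p_trust = n_trusting / n_pairs             # 0.50: Player 1s willing to trust
p_cooperate = n_cooperating / n_trusting   # 0.75: trusted Player 2s who cooperated

# Random role assignment implies that 0.75 of Player 1s would also cooperate
# in role 2, so the share deterred from moving down by uncertainty about
# their counterpart is the gap between the two proportions:
p_deterred = p_cooperate - p_trust         # 0.75 - 0.50 = 0.25
print(p_trust, p_cooperate, p_deterred)    # 0.5 0.75 0.25
```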




Sentiments vs. “Social Preference” Models of Other-Regarding Behavior
In response to the “prediction failure” of traditional theory in games like this, experimental and behavioral economists formed a wide consensus on how to resolve the problem: re-specify the utility function to accommodate the anomalous observations.[6] 



Working backwards, the discipline changed the model to assume that some Player 2s prefer actions that yield money for both themselves and their matched Player 1. In other words, Player 2 is still assumed to play so as to maximize their own utility, but that utility is now assumed to depend on how well other players do, too. Introducing this assumption resolves the contradiction with real-world outcomes only superficially. But why was this the professional response?



The traditional assumption in economic theory is that people will act to maximize their own “utility” (usually, but not always, measured in dollars), or a “Max U(own)[7]” model of action. This assumption failed. It was unable to predict that many people are sensitive to how others benefit or are hurt by their actions. In other words, their chosen actions show other-regarding and not only own-regarding considerations. It seemed natural for economists to simply change the actors’ utility function to assume that people maximize their utility by ensuring that both they and others benefit from their actions.
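
For concreteness, here is one way such a re-specified utility function can look. This is a sketch in the spirit of inequity-aversion models; the functional form and parameter values are illustrative assumptions, not estimates from the studies cited in this essay.

```python
# A sketch of a "Max U(own, other)" re-specification (illustrative only).

def u_own(own: float, other: float) -> float:
    return own                                # traditional Max U(own)

def u_own_other(own: float, other: float,
                alpha: float = 0.5, beta: float = 0.25) -> float:
    # alpha penalizes earning less than the other player; beta penalizes
    # earning more. Both parameter values here are arbitrary illustrations.
    return own - alpha * max(other - own, 0.0) - beta * max(own - other, 0.0)

# Player 2's comparison at the second node of the Figure 1 game:
print(u_own(25, 15), u_own(40, 0))               # 25 vs 40 -> defect
print(u_own_other(25, 15), u_own_other(40, 0))   # 22.5 vs 30.0 -> still defect
print(u_own_other(25, 15, beta=0.6),
      u_own_other(40, 0, beta=0.6))              # 19.0 vs 16.0 -> now cooperate
```

Notice that whether the re-specified model “predicts” cooperation turns entirely on the parameter values chosen, which is precisely the just-so character discussed below.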



Alternative, non-utilitarian modeling frameworks, such as that of Sentiments, were not part of the predominant literature or thinking. Long before this problem was revealed in the lab, the neo-classical revolution had displaced classical economics’ concern with the process through which people interpret a problem as they encounter it (process models and process thinking) with models that assume people tend toward predictable decisions based on their expected outcomes (equilibrium in outcome space). After the neo-classical revolution, context and circumstances did not matter; only outcomes based on maximization mattered.



The logical error in simply re-specifying the utility function was to suppose that other-regarding behavior implies other-regarding preferences. In truth it’s the other way around: if preferences are other-regarding then behavior will be other-regarding, but it does not follow that other-regarding behavior implies other-regarding utility. Social preferences are sufficient, but not necessary, as a condition for the observation of other-regarding behavior. 
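
Stated formally (my own restatement of the logic):

$$\text{other-regarding preferences} \;\Rightarrow\; \text{other-regarding behavior}, \quad \text{but} \quad \text{other-regarding behavior} \;\not\Rightarrow\; \text{other-regarding preferences}.$$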



The experimental research design process and its agendas were influenced both by this resolution and by this belief error: “A substantial number of people exhibit social preferences, which means they are not solely motivated by material self-interest but also care positively or negatively for the material payoffs of relevant reference agents.” (Fehr and Fischbacher, 2002, p C1) The statement is incorrect because what people exhibit is other-regarding choice behavior.[8] As explained above, this does not imply other-regarding utility. The counterfactual to Max U(own) as the determinant of decision is either a just-so Max U(own, other)[9] or an alternative model of human sociality. The tests reported in Fehr and Fischbacher (2002) are tests of Max U(own) against “social preferences,” represented by Max U(own, other). This means that the neo-classical utility model is not tested against an alternative model of human social behavior that also predicts other-regarding action; nor are the alternatives tested against each other.[10]



The new presumed preferences must have this “just-so” character because the behavior they are meant to describe is sensitive to intentions and to the descriptive circumstances in which the game is embedded, while outcomes alone can tell us little about these things. Re-specifying the utility function therefore required incorporating, as parameters, all of the elements found to affect decision-making.



Smith’s axiom that agents have common knowledge that they are self-interested and non-satiated is sufficient to allow the process of social maturation and does not rule out social preference functions.[11] But each agent’s social preference must be compatible with the other’s. Modeling relationships, that is, what it means to be social, becomes the challenge for socio-economic theory, not refitting utility functions after the fact to agree with new observations.



My argument is that social preference theory, as a response to the prediction failures of standard neo-classical utility theory in extensive form games, constitutes an ad hoc “fix” that fails to address the origin and development of those failures. Social preference theory does not explain why outcomes are context sensitive, nor does it speak to similar failures on broader themes in economics and society. Smith modeled the socialization process, not just its endpoint outcomes, and derived features of the emergent rules that account for human conduct.[12]



Adam Smith’s model in Sentiments, long ignored by game theorists, can be incorporated into our analysis to correct the standard assumptions of the discipline and better align them with real-world behavior. In Part 3 of this essay, I will combine the insights from this comparison of the Sentiments model with traditional game-theoretic assumptions, together with the foundations established in Part 1, to explore the implications of Smith’s social theory for the nature and causes of increasing wealth, justice, and human liberty.












[1] In the text I review our treatment in Smith and Wilson (2017a), wherein the rules we follow derive directly from these motivations.

[2] A real-world example would be when an investor provides financial support to an entrepreneur.

[3] See Osborn et al. (2015), who embed a previously studied extensive form game tree in a narrative, which causes a substantial change in the results.

[4] These terms are explained in full in Part 1 of this essay.

[5] For trust game tests of Justice Proposition 1 and Beneficence Proposition 2, see Smith and Wilson (2014, 2017).

[6] Influential contributions pursuing this line of research include Fehr and Fischbacher (2002). For an excellent recent report, see Cox, et al. (2016).

[7] Max U(own) indicates that an actor will maximize their utility, which is based on their own outcome.

[8] “Other-regarding behavior” was a term introduced by Hoffman et al. (1994), in their study of ultimatum and dictator games, to guard against the presumption that the behavior was necessarily explained by other-regarding preferences, specifically “fairness” in the sense of outcomes, as distinct from the fair-play rules modeled in Sentiments. But that cautionary language failed to prevent the leap captured in the quotation above, which reflects a common misperception.

[9] Max U(own, other) implies that an actor will still maximize their own utility, but that it is based both on their own outcome and the outcomes of others.

[10] Cox et al. (2016) ingeniously examine various motives for Player 2s to be trustworthy in a variation on the original Berg et al. (1995) game. The most important motive empirically is what they call vulnerability-responsiveness, in which Player 2 avoids any action that would hurt Player 1. But this is explained by Smith’s model as a weak form of Beneficence Proposition 1: not to hurt is, for Player 2, not to play down.

[11] The sensitivity in Smith’s model to intentions, kindness (and its reward), and hurtfulness (and its punishment) leaves open the possibility of several distinct utilitarian social preference functions, one associated with each behavioral trait. Most have corresponding social psychological features in Smith’s model of sociality, indicated in parentheses: positive reciprocity (intentionally beneficent action invokes gratitude and a reward response); negative reciprocity (intentionally hurtful action invokes resentment and a punishment response); inequity aversion (actions/outcomes that are inappropriate or unmerited in a given context may cause resentment and deserve punishment); pure altruism or unconditional kindness (the nearest similarity in Sentiments appears to be “natural (kin) affection” or “universal benevolence”; Smith, 1759, p 219 and 235); and envy (which may reduce sympathy for the joy, or greatness, felt for the achievements of others; Smith, 1759, p 41-2, 44).

[12] The latter had broad implications for the structure of rules that govern a decentralized society.
 


References


Berg, Joyce, John Dickhaut, and Kevin McCabe (1995) “Trust, Reciprocity and Social History,” Games and Economic Behavior, 10: 122-42.

Cox, James C., Rudolf Kerschbamer, and Daniel Neururer (2016) “What Is Trustworthiness and What Drives It?” Games and Economic Behavior, 98: 197-218.

Fehr, Ernst and Urs Fischbacher (2002) “Why Social Preferences Matter: The Impact of Non-Selfish Motives on Competition, Cooperation and Incentives,” The Economic Journal, 112: C1-C33.

Hoffman, Elizabeth, Kevin McCabe, Keith Shachat, and Vernon Smith (1994) “Preferences, Property Rights, and Anonymity in Bargaining Games,” Games and Economic Behavior, 7: 346-80.

McCabe, Kevin and Vernon L. Smith (2000) “A Comparison of Naïve and Sophisticated Subject Behavior with Game Theoretic Predictions,” Proceedings of the National Academy of Sciences, 97: 3777-81.

Osborn, Jan, Bart J. Wilson, and Bradley R. Sherwood (2015) “Conduct in Narrativized Trust Games,” Southern Economic Journal, 81: 562-97.

Smith, Adam (1759) The Theory of Moral Sentiments. Edited by D. D. Raphael and A. L. Macfie. Oxford: Oxford University Press, 1976.

Smith, Vernon L. and Bart J. Wilson (2014) “Fair and Impartial Spectators in Experimental Economic Behavior,” Review of Behavioral Economics, 1(1-2): 1-26. http://dx.doi.org/10.1561/105.00000001

Smith, Vernon L. and Bart J. Wilson (2017a) “Sentiments, Conduct, and Trust in the Laboratory,” Social Philosophy and Policy, 34(1): 25-55. https://doi.org/10.1017/S0265052517000024

Smith, Vernon L. and Bart J. Wilson (2017b) “Equilibrium Play in Voluntary Ultimatum Games: Beneficence Cannot be Extorted,” Smith Institute for Political Economy and Philosophy, Chapman University. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3026357