Stag Hunt Example: International Relations

Put another way, the development of AI under international racing dynamics could be compared to two countries racing to finish a nuclear bomb if the actual development of the bomb (and not just its use) could result in unintended, catastrophic consequences. The complex machinations required to create a lasting peace may well be under way, but any viable agreement, and the eventual withdrawal of U.S. forces that it would entail, requires an Afghan government capable of holding its ground on behalf of its citizens and in the ongoing struggle against violent extremism.

This table contains an ordinal representation of a payoff matrix for a Prisoner's Dilemma game. An example of the game of Stag Hunt can be illustrated by neighbours with a large hedge that forms the boundary between their properties. If both choose to row, they can successfully move the boat.

Meanwhile, the harm that each actor can expect to receive from an AI Coordination Regime consists of both the likelihood that the actor itself will develop a harmful AI times that harm, as well as the expected harm of its opponent developing a harmful AI. The prototypical example of a public goods game (PGG) is captured by the so-called N-person Prisoner's Dilemma (NPD). This technological shock factor leads actors to increase weapons research and development and to maximize their overall arms capacity to guard against uncertainty. Despite this, there still might be cases where the expected benefits of pursuing AI development alone outweigh (in the perception of the actor) the potential harms that might arise.

If either hunts a stag alone, the chance of success is minimal. If all the hunters work together, they can kill the stag and all eat. Formally, a stag hunt is a game with two pure-strategy Nash equilibria: one that is risk dominant and another that is payoff dominant. By failing to agree to a Coordination Regime at all [D,D], we can expect the chance of developing a harmful AI to be highest, as both actors are sparing in applying safety precautions to development. In the Prisoner's Dilemma, in contrast, despite the fact that both players cooperating is Pareto efficient, the only pure Nash equilibrium is when both players choose to defect. Rousseau recognized that the inefficient outcome of hunting hare may result, just as conflict can result in the security dilemma, and proceeded to provide philosophical arguments in favor of the outcome where both hunters hunt the stag.
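The two-equilibrium structure just described can be made concrete with a small sketch. The payoff numbers below are illustrative assumptions only (any payoffs in which mutual stag hunting beats hunting hare, while hunting hare is the safer unilateral choice, produce the same structure); the code simply enumerates the pure-strategy Nash equilibria of a 2x2 stag hunt and recovers both the payoff-dominant (Stag, Stag) and the risk-dominant (Hare, Hare) outcomes.

```python
# Illustrative sketch: find the pure-strategy Nash equilibria of a 2x2 stag hunt.
# The numbers (4, 3, 0) are assumed for illustration only.

STRATEGIES = ["Stag", "Hare"]

# payoffs[(row move, column move)] = (row player's payoff, column player's payoff)
payoffs = {
    ("Stag", "Stag"): (4, 4),   # payoff-dominant outcome: both cooperate on the stag
    ("Stag", "Hare"): (0, 3),   # the lone stag hunter goes home empty-handed
    ("Hare", "Stag"): (3, 0),
    ("Hare", "Hare"): (3, 3),   # risk-dominant outcome: both play it safe
}

def best_responses(player, opponent_move):
    """Return the strategies that maximize `player`'s payoff given the opponent's move."""
    def payoff(move):
        profile = (move, opponent_move) if player == 0 else (opponent_move, move)
        return payoffs[profile][player]
    best = max(payoff(m) for m in STRATEGIES)
    return {m for m in STRATEGIES if payoff(m) == best}

def pure_nash_equilibria():
    """A profile is a Nash equilibrium if each strategy is a best response to the other's."""
    return [
        (a, b)
        for a in STRATEGIES
        for b in STRATEGIES
        if a in best_responses(0, b) and b in best_responses(1, a)
    ]

print(pure_nash_equilibria())  # [('Stag', 'Stag'), ('Hare', 'Hare')]
```

Running the search returns exactly the two equilibria named in the text: (Stag, Stag), which both players prefer, and (Hare, Hare), which is the less risky choice when a player is unsure what the other will do.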
How does the Just War Tradition position itself in relation to both Realism and Pacifism? I refer to this as the AI Coordination Problem. In order to assess the likelihood of such a Coordination Regime's success, one would have to take into account the two actors' expected payoffs from cooperating with or defecting from the regime.

The 18th-century political philosopher Jean-Jacques Rousseau famously described a dilemma that arises when a group of hunters sets out in search of a stag: to catch the prized male deer, they must cooperate, waiting quietly in the woods for its arrival. One example addresses two individuals who must row a boat; related coordination models include games such as Chicken and Stag Hunt.

Although the development of AI at present has not yet led to a clear and convincing military arms race (although this has been suggested to be the case[43]), the elements of the arms race literature described above suggest that AI's broad and wide-encompassing capacity can lead actors to see AI development as a threatening technological shock worth responding to with reinforcements or augmentations of one's own security, perhaps through bolstering one's own AI development program. Half a stag is better than a brace of rabbits, but the stag will only be brought down with the cooperation of both hunters. In short, the theory suggests that the variables that affect the payoff structure of cooperating with or defecting from an AI Coordination Regime determine which model of coordination we see arise between the two actors (modeled after normal-form game setups). In addition to leadership, the formation of a small but successful group is also likely to influence group dynamics.

In the context of the AI Coordination Problem, a Stag Hunt is the most desirable outcome, as mutual cooperation results in the lowest risk of racing dynamics and the associated risk of developing a harmful AI. The remainder of this subsection briefly examines each of these models and its relationship with the AI Coordination Problem. Finally, Jervis[40] also highlights the security dilemma, where increases in an actor's security can inherently lead to the decreased security of a rival state.

I thank my advisor, Professor Allan Dafoe, for his time, support, and introduction to this paper's subject matter in his Global Politics of AI seminar.

The familiar Prisoner's Dilemma is a model that involves two actors who must decide whether to cooperate in an agreement or not. Here, this is expressed as P_(h|A or B)(A) · h_(A or B).
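Read together with the earlier definition of expected harm (the likelihood that an actor's own system turns out harmful times that harm, plus the expected harm from the opponent's system), the expression above can be reconstructed roughly as follows. The subscript convention and the two-term split are assumptions recovered from the surrounding prose, not notation confirmed by the source.

```latex
% Reconstruction under assumptions: the conditional notation and the split into an
% "own system" term and an "opponent's system" term follow the prose above.
\[
  \mathbb{E}[\mathrm{harm}_A]
    \;=\; P_{h \mid A \lor B}(A)\, h_{A \lor B}
    \;+\; P_{h \mid A \lor B}(B)\, h_{A \lor B}
\]
```

On this reading, P_(h|A or B)(X) is the probability that actor X's system turns out harmful given that A or B develops AI under the regime, and h_(A or B) is the magnitude of that harm.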
They can, for example, work together to improve good corporate governance. Those in favor of withdrawal are skeptical that a few thousand U.S. troops can make a decisive difference when 100,000 U.S. soldiers proved incapable of curbing the insurgency. I will apply them to IR and give an example for each. The original stag hunt dilemma is as follows: a group of hunters has tracked a large stag and found it to follow a certain path. They will be tempted to use the prospect of negotiations with the Taliban and the upcoming election season to score quick points at their rivals' expense, foregoing the kinds of political cooperation that have held the country together until now. Especially as prospects of coordinating are continuous, this can be a promising strategy to pursue with the support of further landscape research to more accurately assess payoff variables and what might cause them to change. The United States is in the hunt, too. Explain Rousseau's metaphor of the 'stag hunt'.

In this section, I survey the relevant background of AI development and coordination by summarizing the literature on the expected benefits and harms from developing AI and on which actors are relevant in an international safety context. In the context of international relations, this model has been used to describe the preferences of actors when deciding whether to enter an arms treaty or not. Is human security a useful approach to security? Actor A's preference order: DC > DD > CC > CD; Actor B's preference order: CD > DD > CC > DC (these orderings are checked mechanically in the sketch below). For example, Jervis highlights the distinguishability of offensive-defensive postures as a factor in stability. Depending on which model is present, we can get a better sense of the likelihood of cooperation or defection, which can in turn inform research and policy agendas to address this.

Different social and cultural systems are prone to clash. The hedge is shared, so both parties are responsible for maintaining it. See Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, & Owain Evans, When Will AI Exceed Human Performance? If participation is not universal, the hunters cannot surround the stag and it escapes, leaving everyone who hunted stag hungry. In this section, I outline my theory to better understand the dynamics of the AI Coordination Problem between two opposing international actors. Within these levels of analysis, there are different theories that could be considered. Sharp's consent theory of power is the most well-articulated connection between nonviolent action and power theory, yet it has some serious shortcomings, especially in dealing with systems that do not fit a ruler-subject dichotomy, such as capitalism, bureaucracy, and patriarchy. Together, the likelihood of winning and the likelihood of lagging sum to 1. She dismisses Clausewitz with the argument that he saw war as "the use of military means to defeat another state" and that this approach to warfare is no longer applicable in today's conflicts. Your application of the Prisoner's Dilemma (PD) game to international trade agreements raises a few very interesting and important questions for the application of game theory to real-life strategic situations. The article states that the only difference between the two scenarios is that the localized group decided to hunt hares more quickly.
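Those ordinal preference orders can be checked mechanically. In the sketch below (illustrative only: the ranks 4 through 1 simply encode the stated orderings, with 4 the most preferred outcome), a brute-force search confirms that mutual defection is the unique pure-strategy Nash equilibrium under these rankings.

```python
# Encode the stated ordinal preferences as ranks (4 = most preferred, 1 = least).
# Actor A: DC > DD > CC > CD      Actor B: CD > DD > CC > DC
# Each key is (Actor A's move, Actor B's move), with "C" cooperate and "D" defect.
rank_A = {("D", "C"): 4, ("D", "D"): 3, ("C", "C"): 2, ("C", "D"): 1}
rank_B = {("C", "D"): 4, ("D", "D"): 3, ("C", "C"): 2, ("D", "C"): 1}

MOVES = ["C", "D"]

def pure_nash_equilibria(u_row, u_col):
    """Profiles where neither actor can gain by unilaterally switching strategies."""
    equilibria = []
    for a in MOVES:
        for b in MOVES:
            best_a = max(u_row[(x, b)] for x in MOVES)   # A's best reply to b
            best_b = max(u_col[(a, y)] for y in MOVES)   # B's best reply to a
            if u_row[(a, b)] == best_a and u_col[(a, b)] == best_b:
                equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(rank_A, rank_B))   # [('D', 'D')]
# Under these orderings defection is a dominant strategy for both actors, and,
# unlike in the Prisoner's Dilemma, mutual defection is not Pareto-dominated by
# mutual cooperation, since both actors rank DD above CC.
```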
In testing the game's effectiveness, I found that students who played the game scored higher on the exam than students who did not play. Here, both actors demonstrate high uncertainty about whether they will develop a beneficial or harmful AI alone (both actors see the likelihood as a 50/50 split), but they perceive the potential benefits of AI to be slightly greater than the potential harms; a numeric sketch of these assumptions follows below. The corresponding payoff matrix is displayed as Table 8. For example, in international relations, people make international decisions through games such as Stag Hunt and Chicken. International relations is a perfect example of cooperation under the security dilemma.
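As a rough numeric illustration of those perceptions, the toy model below fixes the 50/50 chance of a harmful system when developing alone and makes the perceived benefit slightly larger than the perceived harm; the remaining parameters (the regime's lower harm probability, equal benefit sharing under joint development, and equal odds of winning a race) are assumptions added purely for illustration and are not taken from the source.

```python
# Illustrative toy model of the AI Coordination Problem payoffs described above.
# "C" = join the Coordination Regime, "D" = race alone.

BENEFIT = 10.0        # payoff from obtaining a beneficial AI (slightly larger than HARM)
HARM = 8.0            # cost borne by BOTH actors if anyone deploys a harmful AI
P_HARM_ALONE = 0.5    # 50/50 split when developing without coordinated precautions
P_HARM_REGIME = 0.15  # assumed: safety precautions under the regime lower this risk

def expected_payoff(me: str, other: str) -> float:
    """Expected payoff to `me` given both actors' choices."""
    if me == "C" and other == "C":
        # Joint development with precautions; the benefit is assumed to be shared equally.
        return (1 - P_HARM_REGIME) * (BENEFIT / 2) - P_HARM_REGIME * HARM
    if me == "D" and other == "C":
        # I race alone and win; my system is beneficial or harmful with equal odds.
        return (1 - P_HARM_ALONE) * BENEFIT - P_HARM_ALONE * HARM
    if me == "C" and other == "D":
        # The opponent races alone; I get no benefit but still bear any harm.
        return -P_HARM_ALONE * HARM
    # Both race: each wins with probability 0.5, and the winner's system is harmful
    # half the time, harming both actors.
    return 0.5 * (1 - P_HARM_ALONE) * BENEFIT - P_HARM_ALONE * HARM

for profile in [("C", "C"), ("D", "C"), ("D", "D"), ("C", "D")]:
    print(profile, round(expected_payoff(*profile), 2))
# Output: 3.05, 1.0, -1.5, -4.0, i.e. the Stag Hunt ranking CC > DC > DD > CD,
# with mutual cooperation payoff dominant and mutual defection risk dominant.
```

With these assumed inputs the expected payoffs reproduce exactly the structure the text argues for: cooperation is best if the other actor also cooperates, but defection is the safer choice against an actor expected to defect.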

