Attrition warfare is a military strategy consisting of belligerent attempts to win a war by wearing down the enemy to the point of collapse through continuous losses in personnel and matériel. The war will be won by the side with the greater reserves of such resources. The word attrition comes from the Latin root atterere, "to rub against", echoing the "grinding down" of the opponent's forces in attrition warfare. Military theorists and strategists have viewed attrition warfare as something to be avoided. Attrition warfare represents an attempt to grind down an opponent and its superior numbers, the opposite of the usual principles of war, in which one attempts to achieve decisive victories using the minimal necessary resources in a minimal amount of time, through manoeuvre, concentration of force and the like. On the other hand, a side that perceives itself to be at a marked disadvantage in manoeuvre warfare or unit tactics may deliberately seek out attrition warfare to neutralize its opponent's advantages.
If the sides are nearly evenly matched, the outcome of a war of attrition is likely to be a Pyrrhic victory. The difference between a war of attrition and other forms of war is somewhat artificial, since war always contains an element of attrition. One can be said to pursue a strategy of attrition if one makes it the main goal to cause gradual attrition to the opponent, amounting to unacceptable or unsustainable levels for the opponent, while limiting one's own gradual losses to acceptable and sustainable levels; that should be seen as opposed to other main goals such as the conquest of some resource or territory, or an attempt to cause the enemy great losses in a single stroke. Attritional methods are usually tried only as a last resort, when other methods have failed or are not feasible; when attritional methods have worn down the enemy sufficiently to make other methods feasible, attritional methods are abandoned in favor of other strategies. In World War I, improvements in firepower, but not in communications and mobility, forced military commanders to rely on attrition, with terrible casualties.
Attritional methods may in themselves be sufficient to cause a nation to give up a nonvital ambition, but other methods are generally necessary to achieve unconditional surrender. It is argued that the best-known example of attrition warfare was on the Western Front during World War I. Both military forces found themselves in static defensive positions in trenches running from Switzerland to the English Channel. For years, without any opportunity for manoeuvre, the only way the commanders thought that they could defeat the enemy was to attack head on and grind the other down. One of the most enduring examples of attrition warfare on the Western Front is the Battle of Verdun, which took place throughout most of 1916. Erich von Falkenhayn claimed that his tactics at Verdun were designed not to take the city but rather to destroy the French Army in its defense. Falkenhayn is described as wanting to "bleed France white", and thus attrition tactics were employed in the battle. Attritional warfare in World War I has been shown by historians such as Hew Strachan to have been used as a post hoc ergo propter hoc excuse for failed offensives.
Contemporary sources disagree with Strachan's view on this: while the Christmas Memorandum is a post-war invention, the strategy of "bleeding France white" was the original strategy for the battle. Attrition to the enemy was easy to assert and difficult to refute, and thus may have been a convenient face-saving excuse in the wake of many indecisive battles. In many cases it is hard to see the logic of warfare by attrition, because of the obvious uncertainty both of the level of damage to the enemy and of the damage that the attacking force may sustain to its own limited and expensive resources while trying to achieve that damage. Historians such as John Terraine and Gary Sheffield have suggested that attritional warfare was, however, a necessary step on the road to eventual victory: a 'wearing-down process' that sapped the Central Powers' strength and left them vulnerable during the Hundred Days campaign of 1918. That is not to say that a general will not be prepared to sustain high casualties while trying to reach an objective.
An example in which one side used attrition warfare to neutralize the other side's advantage in manoeuvrability and unit tactics occurred during the latter part of the American Civil War, when Union general Ulysses S. Grant pushed the Confederate Army continually in spite of losses, confident that the Union's supplies and manpower would overwhelm the Confederacy even if the casualty ratio was unfavorable. Other examples of attrition warfare include:
- Scythian tactics during the European Scythian campaign of Darius I in 513 BC: retreating into the deep steppes and avoiding a direct confrontation with Darius's army while spoiling the wells and pastures
- The naval strategy of the Athenians, who were weaker in land warfare, during the Peloponnesian War
- The "delaying" tactics of Quintus Fabius Maximus Verrucosus against Hannibal Barca during the Second Punic War
- The Battle of Actium in 31 BC during the Roman civil wars
- The Hungarian resistance against the Mongols, 1241–1242
- The Dai Viet kingdom's three repulsions of Kublai Khan, in 1258, 1285 and 1288
- The American strategy during the American Revolutionary War
- The latter portion of the American Civil War, notably the Siege of Vicksburg, the Overland Campaign and the Siege of Petersburg
- The French invasion of Russia by Napoleon Bonaparte in 1812
- The Spanish Civil War
- Tonnage war in the Atlantic and Pacific during World War II
- The air battle for Great Britain in World War II
In game theory, a sequential game is a game where one player chooses their action before the others choose theirs. The later players must have some information about the first player's choice; otherwise the difference in time would have no strategic effect. Sequential games are hence governed by the time axis and are represented in the form of decision trees. Unlike sequential games, simultaneous games do not have a time axis: players choose their moves without being sure of the others', and such games are represented in the form of payoff matrices. Extensive-form representations are used for sequential games, since they explicitly illustrate the sequential aspects of a game. Combinatorial games are sequential games. Games such as chess, infinite chess, tic-tac-toe and Go are examples of sequential games. The size of the decision tree can vary according to game complexity, ranging from the small game tree of tic-tac-toe to the immensely complex game tree of chess, so large that computers cannot map it completely. In sequential games with perfect information, a subgame perfect equilibrium can be found by backward induction.
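The backward-induction procedure mentioned above can be sketched in a few lines of Python. The tree, move names and payoff values below are invented for illustration; the point is only that each subgame is solved from the leaves upward, with each mover picking the action whose subgame outcome is best for them.

```python
# Backward induction on a small two-player sequential game.
# Tree structure and payoffs are hypothetical, chosen for the example.

def backward_induction(node):
    """Return the payoff vector reached under optimal play from `node`.

    A terminal node is a tuple of payoffs; an internal node is a dict
    {"player": i, "moves": {action: child}} where i indexes the mover.
    """
    if isinstance(node, tuple):          # terminal node: payoffs (p1, p2)
        return node
    player = node["player"]
    # The mover picks the action whose subgame payoff is best for them.
    return max(
        (backward_induction(child) for child in node["moves"].values()),
        key=lambda payoffs: payoffs[player],
    )

# Player 0 moves first (L or R), then player 1 responds.
game = {
    "player": 0,
    "moves": {
        "L": {"player": 1, "moves": {"l": (3, 1), "r": (0, 0)}},
        "R": {"player": 1, "moves": {"l": (1, 2), "r": (2, 1)}},
    },
}

print(backward_induction(game))  # subgame-perfect outcome: (3, 1)
```

Each recursive call solves one subgame, so the procedure visits every node exactly once, which is why it scales with the size of the tree rather than with the number of strategy profiles.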
See also: Simultaneous game; Subgame perfection; Sequential auction
In game theory, a simultaneous game is a game where each player chooses their action without knowledge of the actions chosen by the other players. Simultaneous games contrast with sequential games, which are played by the players taking turns. Normal-form representations are used for simultaneous games. Rock-paper-scissors, a widely played hand game, is an example of a simultaneous game: both players make a decision without knowledge of the opponent's decision and reveal their hands at the same time. There are two players in this game, and each of them has three different strategies. Displaying Player 1's strategies as rows and Player 2's strategies as columns, with the first number in each cell the payoff to Player 1 and the second the payoff to Player 2, the payoff table for two-player rock-paper-scissors looks like this:

              Rock       Paper      Scissors
  Rock        0, 0      -1, 1       1, -1
  Paper       1, -1      0, 0      -1, 1
  Scissors   -1, 1       1, -1      0, 0

The prisoner's dilemma is another example of a simultaneous game; some variants of chess that belong to this class of games include Synchronous chess and Parity chess.
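The same payoff structure can also be generated programmatically. A minimal Python sketch, using the standard win = 1, loss = -1, tie = 0 convention:

```python
# Normal-form representation of rock-paper-scissors as a payoff matrix.
# A minimal sketch; the payoff convention (win = 1, loss = -1, tie = 0)
# is the standard one for this game.

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(move1, move2):
    """Payoffs (player 1, player 2) for one simultaneous round."""
    if move1 == move2:
        return (0, 0)
    if BEATS[move1] == move2:
        return (1, -1)
    return (-1, 1)

# Build the full 3x3 payoff matrix, indexed [row = P1 move][col = P2 move].
matrix = {m1: {m2: payoff(m1, m2) for m2 in MOVES} for m1 in MOVES}
print(matrix["rock"]["scissors"])   # (1, -1): rock beats scissors
```

Because the moves are chosen simultaneously, the matrix is all the structure there is: no tree and no information sets are needed, in contrast to the sequential games above.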
See also: Sequential game; Simultaneous action selection

Bibliography: Pritchard, D. B.; Beasley, John (ed.). The Classified Encyclopedia of Chess Variants. John Beasley. ISBN 978-0-9555168-0-1.
Game theory is the study of mathematical models of strategic interaction between rational decision-makers. It has applications in all fields of social science, as well as in computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations and is now an umbrella term for the science of logical decision making in humans and computers. Modern game theory began with the idea of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.
Game theory was developed extensively in the 1950s by many scholars. It was explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields; as of 2014, when the Nobel Memorial Prize in Economic Sciences went to game theorist Jean Tirole, eleven game theorists had won the economics Nobel Prize. John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern, mathematical game theory. The first known discussion of game theory occurred in a letter written in 1713 by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat. In this letter, Waldegrave provides a minimax mixed-strategy solution to a two-person version of the card game le Her; the problem is now known as the Waldegrave problem. In his 1838 Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth), Antoine Augustin Cournot considered a duopoly and presented a solution that is a restricted version of the Nash equilibrium.
In 1913, Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels (On an Application of Set Theory to the Theory of the Game of Chess), which proved that the optimal chess strategy is strictly determined; this paved the way for more general theorems. In 1938, the Danish mathematical economist Frederik Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed-point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Émile Borel proved a minimax theorem for two-person zero-sum matrix games only when the payoff matrix was symmetric. Borel conjectured that mixed-strategy equilibria would fail to exist in some two-person zero-sum games, a conjecture that was later proved false. Game theory did not exist as a unique field until John von Neumann published the paper On the Theory of Games of Strategy in 1928, followed by his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.
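Borel's minimax theorem concerns exactly the kind of two-person zero-sum matrix games for which a mixed-strategy solution can be computed directly. As a sketch, the 2x2 case can be solved from the indifference condition; the example game (matching pennies) and the helper name are chosen for illustration, and the formula assumes the equilibrium is fully mixed (no pure-strategy saddle point):

```python
# Mixed-strategy equilibrium of a 2x2 zero-sum game, found via the
# indifference condition: the row player's mixing probability p makes
# the column player indifferent between her two columns.
# Assumes a fully mixed equilibrium (denominator nonzero).

# Row player's payoffs A[i][j]; zero-sum, so the column player gets -A[i][j].
# This particular A is matching pennies.
A = [[1, -1],
     [-1, 1]]

def row_equilibrium_mix(A):
    """Solve p*A[0][0] + (1-p)*A[1][0] == p*A[0][1] + (1-p)*A[1][1] for p."""
    a, b = A[0][0], A[0][1]
    c, d = A[1][0], A[1][1]
    return (d - c) / (a - b - c + d)

p = row_equilibrium_mix(A)
value = p * A[0][0] + (1 - p) * A[1][0]   # game value to the row player
print(p, value)   # 0.5 0.0 for matching pennies
```

For matching pennies the equilibrium mix is 50/50 and the game value is zero, which is the symmetric case Borel's theorem already covered; von Neumann's 1928 result extended existence to all finite two-person zero-sum games.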
The second edition of this book provided an axiomatic theory of utility, which reincarnated Daniel Bernoulli's old theory of utility as an independent discipline. Von Neumann's work in game theory culminated in this 1944 book; this foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games. During the following time period, work on game theory was focused primarily on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies. In 1950, the first mathematical discussion of the prisoner's dilemma appeared, and an experiment was undertaken by the notable mathematicians Merrill M. Flood and Melvin Dresher as part of the RAND Corporation's investigations into game theory. RAND pursued the studies because of possible applications to global nuclear strategy. Around this same time, John Nash developed a criterion for mutual consistency of players' strategies, known as the Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern.
Nash proved that every finite n-player, non-zero-sum, non-cooperative game has what is now known as a Nash equilibrium. Game theory experienced a flurry of activity in the 1950s, during which time the concepts of the core, the extensive-form game, fictitious play, repeated games and the Shapley value were developed. In addition, the first applications of game theory to philosophy and political science occurred during this time. In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium. In 1979 Robert Axelrod tried setting up computer programs as players and found that in tournaments between them the winner was often a simple "tit-for-tat" program that cooperates on the first step and on subsequent steps just does whatever its opponent did on the previous step; the same winner was often obtained by natural selection. In 1994 Nash, Selten and Harsanyi became Economics Nobel Laureates for their contributions to economic game theory.
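The tit-for-tat idea can be illustrated with a short simulation. This is a sketch, not Axelrod's actual tournament code; the opposing strategy, the round count and the standard prisoner's-dilemma payoff values (T=5, R=3, P=1, S=0) are assumptions made for the example:

```python
# Iterated prisoner's dilemma: tit-for-tat vs. always-defect.
# Payoffs use the standard convention T=5, R=3, P=1, S=0.

PAYOFF = {  # (my move, their move) -> my payoff; "C" = cooperate, "D" = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history_self, history_other):
    """Cooperate on the first step, then copy the opponent's previous move."""
    return "C" if not history_other else history_other[-1]

def always_defect(history_self, history_other):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated match and return the two cumulative scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14): exploited only once
```

Against a defector, tit-for-tat loses only the first round and then matches defection; against another cooperator it sustains mutual cooperation, which is why it fared so well across a whole tournament of opponents.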
In economics and other social sciences, preference is the ordering of alternatives based on their relative utility, a process which results in an optimal "choice". The character of the individual preferences is determined purely by taste factors, independent of considerations of prices, income, or availability of goods. With the help of the scientific method, many practical decisions of life can be modelled, resulting in testable predictions about human behavior. Although economists are not interested in the underlying causes of the preferences in themselves, they are interested in the theory of choice because it serves as a background for empirical demand analysis. In 1926 Ragnar Frisch developed for the first time a mathematical model of preferences in the context of economic demand and utility functions. Until then, economists had developed an elaborate theory of demand that omitted primitive characteristics of people; this omission ceased when, at the end of the 19th and the beginning of the 20th century, logical positivism predicated the need for theoretical concepts to be related to observables.
Whereas economists in the 18th and 19th centuries felt comfortable theorizing about utility, with the advent of logical positivism in the 20th century they felt that it needed more of an empirical structure. Because binary choices are directly observable, this approach appealed to economists; the search for observables in microeconomics is taken even further by revealed preference theory. Since the pioneering efforts of Frisch in the 1920s, one of the major issues which has pervaded the theory of preferences is the representability of a preference structure with a real-valued function; this has been achieved by mapping it to the mathematical index called utility. Von Neumann and Morgenstern's 1944 book Theory of Games and Economic Behavior treated preferences as a formal relation whose properties can be stated axiomatically. This type of axiomatic handling of preferences soon began to influence other economists: Marschak adopted it by 1950, Houthakker employed it in a 1950 paper, and Kenneth Arrow perfected it in his 1951 book Social Choice and Individual Values.
Gérard Debreu, influenced by the ideas of the Bourbaki group, championed the axiomatization of consumer theory in the 1950s, and the tools he borrowed from the mathematical field of binary relations have become mainstream since then. Though the economics of choice can be examined either at the level of utility functions or at the level of preferences, moving from one to the other can be useful. For example, shifting the conceptual basis from an abstract preference relation to an abstract utility scale results in a new mathematical framework, allowing new kinds of conditions on the structure of preference to be formulated and investigated. Another historical turning point can be traced back to 1895, when Georg Cantor proved that a countable, linearly ordered set is isomorphically embeddable in the ordered real numbers. This notion would become influential for the theory of preferences in economics: by the 1940s prominent authors such as Paul Samuelson would theorize about people having weakly ordered preferences.
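The representability of a preference structure by a real-valued function, discussed above, is easy to demonstrate in the finite case. A sketch in Python, with hypothetical alternatives; each indifference class simply receives its rank as the utility value:

```python
# Representing a finite preference ordering by a real-valued utility
# function: alternatives tied in the ordering get equal utility, and
# more-preferred alternatives get larger numbers. The alternatives and
# their ordering here are invented for illustration.

# Indifference classes listed from least to most preferred (ties share a class).
ordering = [["walk"], ["bus", "tram"], ["car"]]

def utility_from_ordering(ordering):
    """Assign each alternative the index of its indifference class."""
    return {alt: rank for rank, tier in enumerate(ordering) for alt in tier}

u = utility_from_ordering(ordering)
print(u["car"] > u["bus"])    # True: car is strictly preferred
print(u["bus"] == u["tram"])  # True: indifference maps to equal utility
```

This is the finite analogue of the representation results above: any weak ordering of finitely many alternatives admits such a utility function, while infinite (and in particular uncountable) orderings need the kind of embedding conditions Cantor's theorem speaks to.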
Suppose the set of all states of the world is X and an agent has a preference relation on X. It is common to mark the weak preference relation by ⪯, so that x ⪯ y means "the agent wants y at least as much as x" or "the agent weakly prefers y to x". The symbol ∼ is used as a shorthand for the indifference relation: x ∼ y ⟺ (x ⪯ y ∧ y ⪯ x), which reads "the agent is indifferent between y and x". The symbol ≺ is used as a shorthand for the strong preference relation: x ≺ y ⟺ (x ⪯ y ∧ ¬(y ⪯ x)), which reads "the agent strictly prefers y to x". In everyday speech, the statement "x is preferred to y" is understood to mean that someone chooses x over y. However, decision theory rests on more precise definitions of preferences, given that there are many experimental conditions influencing people's choices in many directions. Suppose a person is confronted with a mental experiment that she must solve with the aid of introspection: she is offered apples and oranges and is asked to verbally choose one of the two. A decision scientist observing this single event would be inclined to say that whichever is chosen is the preferred alternative.
Under several repetitions of this experiment, if the scientist observes that apples are chosen 51% of the time, it would mean that x ≻ y. If oranges are chosen half of the time, then x ∼ y. If she chooses oranges 51% of the time, it means that y ≻ x. Preference is here being identified with a greater frequency of choice. This experiment implicitly assumes that a definite choice is made on every repetition; otherwise, some of the 100 repetitions will give as a result that neither apples nor oranges are chosen, and those few cases of uncertainty will ruin any preference information resulting from the frequency attributes of the other, valid cases.
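The relations ⪯, ∼ and ≺ defined above can be made concrete with a small sketch. Here the weak preference relation is induced by a hypothetical utility assignment (the items and values are invented); indifference and strict preference are then derived exactly as in the definitions:

```python
# Deriving indifference (∼) and strict preference (≺) from a weak
# preference relation (⪯). The weak relation here is induced by a
# hypothetical utility assignment; items and values are illustrative.

utility = {"apple": 2, "orange": 3, "banana": 2}

def weakly_prefers(y, x):
    """x ⪯ y : the agent wants y at least as much as x."""
    return utility[y] >= utility[x]

def indifferent(x, y):
    """x ∼ y  ⟺  x ⪯ y and y ⪯ x."""
    return weakly_prefers(y, x) and weakly_prefers(x, y)

def strictly_prefers(y, x):
    """x ≺ y  ⟺  x ⪯ y and not y ⪯ x."""
    return weakly_prefers(y, x) and not weakly_prefers(x, y)

print(strictly_prefers("orange", "apple"))  # True: orange strictly preferred
print(indifferent("apple", "banana"))       # True: equal utility
```

Deriving ∼ and ≺ from ⪯ in this way mirrors the axiomatic treatment: only the weak relation is primitive, and the other two are defined from it.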
An extensive-form game is a specification of a game in game theory, allowing for the explicit representation of a number of key aspects: the sequencing of players' possible moves, their choices at every decision point, the information each player has about the other players' moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature". Some authors in introductory textbooks define the extensive-form game as being just a game tree with payoffs, and add the other elements in subsequent chapters as refinements. Whereas the rest of this article follows this gentle approach with motivating examples, we present upfront the finite extensive-form games as constructed here. This general definition was introduced by Harold W. Kuhn in 1953, who extended an earlier definition of von Neumann from 1928. Following the presentation from Hart, an n-player extensive-form game thus consists of the following:
- A finite set of n players
- A rooted tree, called the game tree
- An n-tuple of payoffs at each terminal node of the game tree, meaning there is one payoff for each player at the end of every possible play
- A partition of the non-terminal nodes of the game tree into n+1 subsets, one for each player, with a special subset for a fictitious player called Chance
Each player's subset of nodes is referred to as the "nodes of the player". Each node of the Chance player has a probability distribution over its outgoing edges. Each set of nodes of a rational player is further partitioned into information sets, which make certain choices indistinguishable for the player when making a move, in the sense that:
- there is a one-to-one correspondence between the outgoing edges of any two nodes of the same information set (thus the set of all outgoing edges of an information set is partitioned into equivalence classes, each class representing a possible choice for a player's move at some point), and
- every path in the tree from the root to a terminal node can cross each information set at most once.
Finally, the complete description of the game specified by the above parameters is common knowledge among the players. A play is thus a path through the tree from the root to a terminal node. At any given non-terminal node belonging to Chance, an outgoing branch is chosen according to the probability distribution.
At any rational player's node, the player must choose one of the equivalence classes of edges, which determines exactly one outgoing edge, except that the player does not know which node in the information set is actually being played. A pure strategy for a player thus consists of a selection: choosing one class of outgoing edges for every information set. In a game of perfect information, the information sets are singletons. It is less evident how payoffs should be interpreted in games with Chance nodes; it is assumed that each player has a von Neumann–Morgenstern utility function defined for every game outcome. The above presentation, while defining the mathematical structure over which the game is played, elides the more technical discussion of formalizing statements about how the game is played, such as "a player cannot distinguish between nodes in the same information set when making a decision"; these can be made precise using epistemic modal logic. A perfect-information two-player game over a game tree can be represented as an extensive-form game with outcomes. Examples of such games include tic-tac-toe and infinite chess.
A game over an expectiminimax tree, like that of backgammon, has no imperfect information but has moves of chance. Poker, by contrast, has both moves of chance and imperfect information. A complete extensive-form representation specifies:
- the players of the game
- for every player, every opportunity they have to move
- what each player can do at each of their moves
- what each player knows for every move
- the payoffs received by every player for every possible combination of moves
The game on the right has two players: 1 and 2. The numbers by every non-terminal node indicate which player that node belongs to, and the numbers by every terminal node represent the payoffs to the players; the labels by every edge of the graph are the name of the action that edge represents. The initial node belongs to player 1. Play according to the tree is as follows: player 1 chooses between U and D; player 2 observes player 1's choice and then chooses between U' and D'; the payoffs are as specified in the tree. There are four outcomes, represented by the four terminal nodes of the tree, each with an associated pair of payoffs. If player 1 plays D, player 2 will play U' to maximise their payoff, and so player 1 will only receive 1.
However, if player 1 plays U, player 2 maximises their payoff by playing D', and player 1 receives 2. Player 1 prefers 2 to 1, and so will play U; player 2 will then play D'.
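The backward-induction reasoning above can be reproduced in code. Note that the text only states player 1's payoffs of 1 (after D, U') and 2 (after U, D'); every other payoff value below is a hypothetical placeholder, chosen only so that player 2's best responses match those described:

```python
# Solving the U/D game above by backward induction. Only player 1's
# payoffs of 1 (after D, U') and 2 (after U, D') are given in the text;
# the remaining payoff values are hypothetical placeholders chosen so
# that player 2's best responses are U' after D and D' after U.

def best_response(choices, player):
    """Pick the (action, payoffs) pair maximising `player`'s payoff."""
    return max(choices.items(), key=lambda kv: kv[1][player])

# Terminal payoffs as (player 1, player 2) pairs.
after_U = {"U'": (0, 0), "D'": (2, 1)}   # player 2 moves after U
after_D = {"U'": (1, 2), "D'": (3, 1)}   # player 2 moves after D

# Player 2's best responses in each subgame:
resp_U = best_response(after_U, player=1)   # ("D'", (2, 1))
resp_D = best_response(after_D, player=1)   # ("U'", (1, 2))

# Player 1 compares the payoffs that result from each first move:
first_move, payoffs = best_response(
    {"U": resp_U[1], "D": resp_D[1]}, player=0
)
print(first_move, payoffs)   # U (2, 1): player 1 prefers 2 to 1
```

Under these assumed payoffs the computation reproduces the text's conclusion: player 2 would answer D with U' (leaving player 1 with 1) and U with D' (leaving player 1 with 2), so player 1 plays U.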