1.
Polygon
–
In elementary geometry, a polygon /ˈpɒlɪɡɒn/ is a plane figure bounded by a finite chain of straight line segments closing in a loop to form a closed polygonal chain or circuit. These segments are called its edges or sides, and the points where two edges meet are its vertices or corners. The interior of the polygon is sometimes called its body. An n-gon is a polygon with n sides; for example, a triangle is a 3-gon. A polygon is a 2-dimensional example of the more general polytope in any number of dimensions. The basic geometrical notion of a polygon has been adapted in various ways to suit particular purposes. Mathematicians are often concerned only with the bounding closed polygonal chain and with simple polygons, which do not self-intersect, and they often define a polygon accordingly. A polygonal boundary may also be allowed to intersect itself, creating star polygons; these and other generalizations of polygons are described below. The word polygon derives from the Greek adjective πολύς (much, many), and it has been suggested that γόνυ (knee) may be the origin of "gon". Polygons are primarily classified by the number of sides. They may also be characterized by their convexity or type of non-convexity:
Convex: any line drawn through the polygon meets its boundary exactly twice. As a consequence, all its interior angles are less than 180°; equivalently, any line segment with endpoints on the boundary passes through only interior points between its endpoints.
Non-convex: a line may be found which meets its boundary more than twice; equivalently, there exists a line segment between two boundary points that passes outside the polygon.
Simple: the boundary of the polygon does not cross itself.
Concave: non-convex and simple; there is at least one interior angle greater than 180°.
Star-shaped: the whole interior is visible from at least one point. The polygon must be simple, and may be convex or concave.
Self-intersecting: the boundary of the polygon crosses itself.
Branko Grünbaum calls these coptic, though this term does not seem to be widely used. A star polygon is a polygon which self-intersects in a regular way; a polygon cannot be both a star and star-shaped. Further classes concern equality and symmetry:
Equiangular: all corner angles are equal.
Cyclic: all corners lie on a single circle, called the circumcircle.
Isogonal or vertex-transitive: all corners lie within the same symmetry orbit. The polygon is also cyclic and equiangular
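The convexity test described above (walking the boundary of a simple polygon always turns in the same direction) can be sketched in a few lines of Python. The helper name is_convex and the sample polygons are illustrative additions, not part of the source text:

```python
def is_convex(vertices):
    """Check whether a simple polygon, given as (x, y) vertices in order,
    is convex: all cross products of consecutive edge vectors must share
    the same sign (all left turns or all right turns)."""
    n = len(vertices)
    sign = 0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        x3, y3 = vertices[(i + 2) % n]
        cross = (x2 - x1) * (y3 - y2) - (y2 - y1) * (x3 - x2)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False  # turn in the opposite direction: concave
    return True

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
arrow = [(0, 0), (2, 0), (1, 0.5), (2, 1), (0, 1)]  # concave "arrowhead"
print(is_convex(square))  # True
print(is_convex(arrow))   # False
```

A concave polygon such as the arrowhead fails because at the re-entrant vertex the boundary turns the other way, which is exactly the "interior angle greater than 180°" condition.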
2.
Polytope
–
In elementary geometry, a polytope is a geometric object with flat sides, and may exist in any general number of dimensions n as an n-dimensional polytope or n-polytope. For example, a polygon is a 2-polytope and a three-dimensional polyhedron is a 3-polytope. Polytopes in more than three dimensions were first discovered by Ludwig Schläfli; the German term Polytop was coined by the mathematician Reinhold Hoppe and was introduced to English mathematicians as polytope by Alicia Boole Stott. The term polytope is nowadays a broad term that covers a whole class of objects. Many of the definitions in use are not equivalent, resulting in different sets of objects being called polytopes; they represent different approaches to generalizing the convex polytopes to include other objects with similar properties. In one approach, a polytope may be regarded as a tessellation or decomposition of some given manifold; an example of this approach defines a polytope as a set of points that admits a simplicial decomposition. However, this definition does not allow star polytopes with interior structures. The discovery of star polyhedra and other unusual constructions led to the idea of a polyhedron as a bounding surface, ignoring its interior. In this light a polyhedron is understood as a surface whose faces are polygons, and a 4-polytope as a hypersurface whose facets are polyhedra; this approach is used, for example, in the theory of abstract polytopes. In certain fields of mathematics, the terms polytope and polyhedron are used in a different sense; this terminology is typically confined to polytopes and polyhedra that are convex. A polytope comprises elements of different dimensionality such as vertices, edges, faces and cells; terminology for these is not fully consistent across different authors. For example, some authors use face to refer to an (n − 1)-dimensional element, while others use face to denote a 2-face specifically; authors may use j-face or j-facet to indicate an element of j dimensions.
Some use edge to refer to a ridge, while H. S. M. Coxeter uses cell to denote an (n − 1)-dimensional element; the terms adopted in this article are given in the table below. An n-dimensional polytope is bounded by a number of (n − 1)-dimensional facets. These facets are themselves polytopes, whose facets are (n − 2)-dimensional ridges of the original polytope; every ridge arises as the intersection of two facets. Ridges are once again polytopes whose facets give rise to (n − 3)-dimensional boundaries of the original polytope, and so on. These bounding sub-polytopes may be referred to as faces, or specifically j-dimensional faces or j-faces. A 0-dimensional face is called a vertex and consists of a single point; a 1-dimensional face is called an edge and consists of a line segment. A 2-dimensional face consists of a polygon, and a 3-dimensional face, sometimes called a cell, consists of a polyhedron. The convex polytopes are the simplest kind of polytopes, and form the basis for several different generalizations of the concept. A convex polytope is defined as the intersection of a set of half-spaces. This definition allows a polytope to be neither bounded nor finite; polytopes are defined in this way, e.g., in linear programming
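As a concrete instance of the j-face terminology above, the n-dimensional hypercube has a simple closed-form count of its j-faces. The short sketch below is an added illustration (the function name n_cube_faces is not from the source):

```python
from math import comb

def n_cube_faces(n, j):
    """Number of j-dimensional faces of the n-dimensional hypercube.

    A j-face is obtained by fixing n - j of the coordinates at 0 or 1
    and letting the remaining j coordinates range over [0, 1]:
    C(n, j) choices of free coordinates times 2**(n - j) fixed values.
    """
    return comb(n, j) * 2 ** (n - j)

# A 3-cube: 8 vertices, 12 edges, 6 facets (squares), 1 cell (itself).
print([n_cube_faces(3, j) for j in range(4)])  # [8, 12, 6, 1]
```

Here the 2-faces are the facets of the 3-cube and the 1-faces (edges) are its ridges, matching the facet/ridge hierarchy described above.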
3.
Level set
–
In mathematics, a level set of a real-valued function f of n real variables is a set of the form L_c(f) = {(x_1, …, x_n) : f(x_1, …, x_n) = c}, that is, a set where the function takes on a given constant value c. When the number of variables is two, a level set is generically a curve, called a level curve, contour line, or isoline. So a level curve is the set of all real-valued solutions of an equation in two variables x_1 and x_2. When n = 3, a level set is called a level surface, and for higher values of n the level set is a level hypersurface. So a level surface is the set of all real-valued roots of an equation in three variables x_1, x_2 and x_3, and a level hypersurface is the set of all real-valued roots of an equation in n variables. A level set is a special case of a fiber. Level sets show up in many applications, often under different names. For example, an implicit curve is a level curve, which is considered independently of its neighboring curves. Analogously, a level surface is sometimes called an implicit surface or an isosurface. The name isocontour is also used, meaning a contour of equal height. For example, given a specific radius r, the equation of a circle defines an isocontour: r^2 = x^2 + y^2. If we choose r = 5, then our isovalue is c = 5^2 = 25, and all points (x, y) that evaluate to 25 constitute the isocontour; that is, they are members of the level set. If a point evaluates to less than 25, the point is on the inside of the isocontour; if the result is greater than 25, it is on the outside. A second example is the logarithmically spaced level curve plot of Himmelblau's function shown in the figure. If the function f is differentiable, the gradient of f at a point is either zero, or perpendicular to the level set of f at that point. To understand what this means, imagine that two hikers are at the same location on a mountain.
One of them is bold, and she decides to go in the direction where the slope is steepest; the other one is more cautious, and he wants neither to climb nor to descend, choosing a path which will keep him at the same height. In our analogy, the theorem above says that the two hikers will depart in directions perpendicular to one another. A consequence of this theorem is that if f is differentiable, a level set is a hypersurface away from the critical points of f. At a critical point, a level set may be reduced to a point or may have a singularity such as a self-intersection point or a cusp. A set of the form L_c^−(f) = {(x_1, …, x_n) : f(x_1, …, x_n) ≤ c} is called a sublevel set of f
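The circle example above (isovalue c = 25 for f(x, y) = x^2 + y^2) can be checked directly in code; the helper name classify is an illustrative addition, not from the source:

```python
def classify(point, c=25.0, tol=1e-9):
    """Classify a point against the level set x**2 + y**2 = c.

    Returns "on" if the point lies on the isocontour (within tol),
    "inside" if it evaluates to less than c, and "outside" otherwise,
    mirroring the inside/on/outside discussion in the text.
    """
    x, y = point
    value = x * x + y * y
    if abs(value - c) <= tol:
        return "on"
    return "inside" if value < c else "outside"

print(classify((3, 4)))  # 9 + 16 = 25 -> "on"
print(classify((1, 2)))  # 5 < 25     -> "inside"
print(classify((6, 0)))  # 36 > 25    -> "outside"
```

The sublevel set L_25^−(f) corresponds exactly to the points reported as "inside" or "on".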
4.
Polyhedron
–
In geometry, a polyhedron is a solid in three dimensions with flat polygonal faces, straight edges and sharp corners or vertices. The word polyhedron comes from the Classical Greek πολύεδρον, as poly- + -hedron. A convex polyhedron is the convex hull of finitely many points, not all on the same plane; cubes and pyramids are examples of convex polyhedra. A polyhedron is a 3-dimensional example of the more general polytope in any number of dimensions. Convex polyhedra are well defined, with several equivalent standard definitions; however, the formal mathematical definition of polyhedra that are not required to be convex has been problematic. Many definitions of polyhedron have been given within particular contexts, some more rigorous than others; some of these definitions exclude shapes that have often been counted as polyhedra, or include shapes that are often not considered valid polyhedra. As Branko Grünbaum observed, "The Original Sin in the theory of polyhedra goes back to Euclid": the writers failed to define what the polyhedra are. Nevertheless, there is general agreement that a polyhedron is a solid or surface that can be described by its vertices, edges and faces. Natural refinements of this definition require the solid to be bounded, to have a connected interior, and possibly also to have a connected boundary. However, the polyhedra defined in this way do not include the self-crossing star polyhedra, whose faces may not form simple polygons. Definitions based on the idea of a bounding surface rather than a solid are also common. If a planar part of such a surface is not itself a convex polygon, O'Rourke requires it to be subdivided into smaller convex polygons; Cromwell gives a similar definition but without the restriction of three edges per vertex. Again, this type of definition does not encompass the self-crossing polyhedra; moreover, there exist topological polyhedra that cannot be realized as acoptic polyhedra.
One modern approach is based on the theory of abstract polyhedra. These can be defined as partially ordered sets whose elements are the vertices, edges, and faces of a polyhedron. A vertex or edge element is less than an edge or face element when the vertex or edge is part of the edge or face; additionally, one may include a special bottom element of this partial order and a top element representing the whole polyhedron. Often these requirements are relaxed, to instead require only that the sections between elements two levels apart have the same structure as a line segment. Geometric polyhedra, defined in other ways, can be described abstractly in this way. A realization of an abstract polyhedron is generally taken to be a mapping from the vertices of the abstract polyhedron to geometric points, such that the points of each face are coplanar. A geometric polyhedron can then be defined as a realization of an abstract polyhedron; realizations that forgo the requirement of planarity, that impose additional requirements of symmetry, or that map the vertices to higher-dimensional spaces have also been considered. Unlike the solid-based and surface-based definitions, this works perfectly well for star polyhedra. However, without further restrictions, this definition allows degenerate or unfaithful polyhedra
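A classic consistency check on a vertex/edge/face description of a convex polyhedron, not stated in the text above but closely related, is Euler's formula V − E + F = 2. A minimal sketch (the face counts below are standard facts about the Platonic solids):

```python
def euler_characteristic(v, e, f):
    """Euler characteristic V - E + F of a polyhedron's face counts;
    equals 2 for every convex polyhedron."""
    return v - e + f

solids = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
}
for name, (v, e, f) in solids.items():
    print(name, euler_characteristic(v, e, f))  # prints 2 for each solid
```

Self-crossing star polyhedra can violate this count, which is one symptom of why the solid-based definitions above exclude them.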
5.
Plane (geometry)
–
In mathematics, a plane is a flat, two-dimensional surface that extends infinitely far. A plane is the two-dimensional analogue of a point (zero dimensions), a line (one dimension) and three-dimensional space. When working exclusively in two-dimensional Euclidean space, the definite article is used, so the plane refers to the whole space. Many fundamental tasks in mathematics, geometry, trigonometry, graph theory and graphing are performed in a two-dimensional space, or, in other words, in the plane. Euclid set forth the first great landmark of mathematical thought, an axiomatic treatment of geometry. He selected a small core of undefined terms and postulates which he used to prove various geometrical statements, although the plane in its modern sense is not directly given a definition anywhere in the Elements. In his work Euclid never makes use of numbers to measure length or angle; in this way the Euclidean plane is not quite the same as the Cartesian plane. This section is concerned with planes embedded in three dimensions, specifically in R3. In a Euclidean space of any number of dimensions, a plane is uniquely determined by any of the following: three non-collinear points, or a line and a point not on that line. In three dimensions, a line is either parallel to a plane, intersects it at a single point, or is contained in the plane. Two distinct lines perpendicular to the same plane must be parallel to each other, and two distinct planes perpendicular to the same line must be parallel to each other. Specifically, let r0 be the position vector of some point P0 = (x0, y0, z0), and let n = (a, b, c) be a nonzero normal vector. The plane determined by the point P0 and the vector n consists of those points P, with position vector r, such that the vector drawn from P0 to P is perpendicular to n. Recalling that two vectors are perpendicular if and only if their dot product is zero, it follows that the plane can be described as the set of all points r such that n ⋅ (r − r0) = 0. Expanded, this becomes a(x − x0) + b(y − y0) + c(z − z0) = 0, which is the point-normal form of the equation of a plane. This is just a linear equation a x + b y + c z + d = 0, where d = −(a x0 + b y0 + c z0), and this familiar equation for a plane is called the general form of the equation of the plane
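The passage from the point-normal form to the general form a x + b y + c z + d = 0 is a one-line computation; the helper names below are illustrative, not from the source:

```python
def plane_from_point_normal(p0, n):
    """Return (a, b, c, d) for the plane a*x + b*y + c*z + d = 0
    through the point p0 with normal vector n = (a, b, c).
    The constant term is d = -(a*x0 + b*y0 + c*z0)."""
    a, b, c = n
    x0, y0, z0 = p0
    d = -(a * x0 + b * y0 + c * z0)
    return a, b, c, d

def on_plane(p, coeffs, tol=1e-9):
    """Check whether point p satisfies a*x + b*y + c*z + d = 0."""
    a, b, c, d = coeffs
    x, y, z = p
    return abs(a * x + b * y + c * z + d) <= tol

coeffs = plane_from_point_normal((1, 2, 3), (0, 0, 1))  # horizontal plane z = 3
print(coeffs)                        # (0, 0, 1, -3)
print(on_plane((5, -7, 3), coeffs))  # True
```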
6.
Mathematical model
–
A mathematical model is a description of a system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in the natural sciences and engineering disciplines. Physicists, engineers, statisticians, operations research analysts, and economists use mathematical models most extensively; a model may help to explain a system, to study the effects of different components, and to make predictions about behaviour. Mathematical models can take many forms, including dynamical systems, statistical models, and differential equations. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with the results of repeatable experiments; lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. In the physical sciences, the traditional mathematical model contains four major elements: governing equations, defining equations, constitutive equations, and constraints. Mathematical models are composed of relationships and variables. Relationships can be described by operators, such as algebraic operators, functions, and differential operators; variables are abstractions of system parameters of interest that can be quantified. If all the operators in a model exhibit linearity, the model is defined as linear; a model is considered to be nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context. For example, in a statistical linear model it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model.
If one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model. Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility
7.
Linear function
–
In linear algebra and functional analysis, a linear function is a linear map. In calculus, analytic geometry and related areas, a linear function is a polynomial of degree one or less. When the function is of one variable, it is of the form f(x) = a x + b. The graph of such a function of one variable is a nonvertical line; a is frequently referred to as the slope of the line, and b as the intercept. For a function f of any number k of independent variables, the general formula is f(x_1, …, x_k) = b + a_1 x_1 + … + a_k x_k. A constant function is also considered linear in this context, as it is a polynomial of degree zero or is the zero polynomial; its graph, when there is only one independent variable, is a horizontal line. In this context, the other (linear-map) meaning may be referred to as a homogeneous linear function or a linear form, while in the language of linear algebra the calculus meaning is a kind of affine map. In linear algebra, a linear function is a map f between two vector spaces that preserves vector addition and scalar multiplication: f(x + y) = f(x) + f(y) and f(a x) = a f(x). Here a denotes a constant belonging to some field K of scalars, and x and y are elements of a vector space. Some authors use linear function only for linear maps that take values in the scalar field; these are also called linear functionals. The linear functions of calculus qualify as linear maps when b = 0 or, equivalently, when f(0) = 0; geometrically, the graph of the function must pass through the origin.
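The difference between the two meanings can be checked directly: additivity f(x + y) = f(x) + f(y) holds for f(x) = a x + b only when b = 0. A small illustrative sketch (make_affine is a hypothetical helper, not from the source):

```python
def make_affine(a, b):
    """Return the calculus-style 'linear' function x -> a*x + b."""
    return lambda x: a * x + b

f = make_affine(2, 3)  # f(x) = 2x + 3: linear in the calculus sense only
g = make_affine(2, 0)  # g(x) = 2x:     linear in the linear-algebra sense

# Additivity holds only when the intercept b is zero.
print(f(1 + 2) == f(1) + f(2))  # False: f(3) = 9, but f(1) + f(2) = 12
print(g(1 + 2) == g(1) + g(2))  # True:  g(3) = 6 = g(1) + g(2)
```

Equivalently, f(0) = b is nonzero for f, so its graph misses the origin, matching the geometric criterion above.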
8.
Mathematical optimization
–
In mathematics, computer science and operations research, mathematical optimization, also spelled mathematical optimisation, is the selection of a best element, with regard to some criterion, from some set of available alternatives. The generalization of optimization theory and techniques to other formulations comprises a large area of applied mathematics. Such a formulation is called an optimization problem or a mathematical programming problem. Many real-world and theoretical problems may be modeled in this general framework. Typically, A is some subset of the Euclidean space Rn, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the choice set. The function f is called, variously, an objective function, a loss function or cost function (minimization), a utility function or fitness function (maximization), or, in certain fields, an energy function. A feasible solution that minimizes the objective function is called an optimal solution. In mathematics, conventional optimization problems are usually stated in terms of minimization. Generally, unless both the objective function and the feasible region are convex in a minimization problem, there may be several local minima. While a local minimum is at least as good as any nearby points, a global minimum is at least as good as every feasible point. In a convex problem, if there is a local minimum that is interior, it is also the global minimum. Optimization problems are often expressed with special notation. Consider the notation min_{x ∈ R} (x^2 + 1); this denotes the minimum value of the objective function x^2 + 1 when choosing x from the set of real numbers R. The minimum value in this case is 1, occurring at x = 0. Similarly, the notation max_{x ∈ R} 2x asks for the maximum value of the objective function 2x; in this case there is no such maximum, as the function is unbounded. The related operators arg min and arg max instead represent the value of the argument x, within a given set or interval, at which the objective attains its minimum or maximum.
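The example min over R of x^2 + 1 can be approximated numerically. The brute-force grid search below is only an illustrative sketch of the idea (practical solvers use golden-section search, gradient methods, etc.); the helper name minimize_scalar is not from the source:

```python
def minimize_scalar(f, lo, hi, steps=100_000):
    """Crude grid search for the minimum of f on [lo, hi]: evaluate f on an
    evenly spaced grid and keep the best point seen."""
    best_x, best_val = lo, f(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        v = f(x)
        if v < best_val:
            best_x, best_val = x, v
    return best_x, best_val

x, val = minimize_scalar(lambda t: t ** 2 + 1, -10, 10)
print(round(val, 6))  # 1.0, attained near x = 0
```

Here the objective x^2 + 1 is convex, so the local minimum found by the search is also the global minimum, illustrating the convexity remark above.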
9.
Linear equality
–
A linear equation is an algebraic equation in which each term is either a constant or the product of a constant and a single variable. A simple example of a linear equation with only one variable, x, may be written in the form a x + b = 0, where a and b are constants and a ≠ 0. The constants may be numbers, parameters, or even functions of parameters. Linear equations can have one or more variables. An example of a linear equation with three variables, x, y, and z, is given by a x + b y + c z + d = 0, where a, b, c, and d are constants and a, b, and c are not all zero. Linear equations occur frequently in most subareas of mathematics and especially in applied mathematics. An equation is linear if the sum of the exponents of the variables of each term is one; equations with exponents greater than one are non-linear. An example of a non-linear equation in two variables is a x y + b = 0, where a and b are constants and a ≠ 0: it has two variables, x and y, and is non-linear because the sum of the exponents of the variables in the first term is two. This article considers the case of a single equation for which one searches the real solutions; all its content applies for complex solutions and, more generally, for linear equations with coefficients in any field. A linear equation in one unknown x may always be rewritten a x = b. If a ≠ 0, there is a unique solution x = b/a. The origin of the name linear comes from the fact that the set of solutions of such an equation in two variables forms a straight line in the plane. Linear equations can be rewritten using the laws of elementary algebra into several different forms; these equations are often referred to as the equations of the straight line. In what follows, x, y, t, and θ are variables. In the general form the linear equation is written as A x + B y = C, where A and B are not both equal to zero. The equation is usually written so that A ≥ 0, by convention. The graph of the equation is a straight line, and every straight line can be represented by an equation in the above form.
If A is nonzero, then the x-intercept, that is the x-coordinate of the point where the graph crosses the x-axis, is C/A. If B is nonzero, then the y-intercept, that is the y-coordinate of the point where the graph crosses the y-axis, is C/B, and the slope of the line is −A/B. The general form is sometimes written as a x + b y + c = 0
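The intercept and slope formulas for the general form A x + B y = C can be packaged in a few lines; the helper name line_properties is an illustrative addition:

```python
def line_properties(A, B, C):
    """For the line A*x + B*y = C, return (x_intercept, y_intercept, slope).

    An entry is None when it is undefined: A == 0 gives a horizontal line
    with no x-intercept (unless C == 0), and B == 0 gives a vertical line
    with no y-intercept and no finite slope.
    """
    x_int = C / A if A != 0 else None
    y_int = C / B if B != 0 else None
    slope = -A / B if B != 0 else None
    return x_int, y_int, slope

print(line_properties(2, 3, 6))  # x-intercept 3.0, y-intercept 2.0, slope -2/3
```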
10.
Linear inequality
–
In mathematics, a linear inequality is an inequality which involves a linear function. Two-dimensional linear inequalities are expressions in two variables of the form a x + b y < c or a x + b y ≥ c (and similarly with the other order relations). The solution set of such an inequality can be graphically represented by a half-plane in the Euclidean plane; the line that determines the half-planes is not included in the solution set when the inequality is strict. To decide which half-plane is the solution set, pick a convenient point not on the line, such as the origin (0, 0). For an inequality such as x + 3y < 9, since 0 + 3·0 = 0 < 9, this point is in the solution set, so the half-plane containing this point is the solution set of the linear inequality. In Rn, linear inequalities are the expressions that may be written in the form f(x_1, …, x_n) < b or f(x_1, …, x_n) ≤ b, where f is a linear form a_1 x_1 + a_2 x_2 + ⋯ + a_n x_n; here x_1, …, x_n are called the unknowns and a_1, a_2, …, a_n are called the coefficients. Alternatively, these may be written as g(x) < 0 or g(x) ≤ 0, where g is an affine function. In a system of linear inequalities a_11 x_1 + ⋯ + a_1n x_n ≤ b_1, …, a_m1 x_1 + ⋯ + a_mn x_n ≤ b_m, the x_1, …, x_n are the unknowns, a_11, a_12, …, a_mn are the coefficients of the system, and b_1, b_2, …, b_m are the constant terms. This can be concisely written as the matrix inequality A x ≤ b, where A is an m×n matrix, x is an n×1 column vector of variables, and b is an m×1 column vector of constants. In the above systems both strict and non-strict inequalities may be used; not all systems of linear inequalities have solutions. The set of solutions of a single linear inequality constitutes a half-space of the n-dimensional real space. The set of solutions of a system of linear inequalities corresponds to the intersection of the half-spaces defined by the individual inequalities; it is a convex set, since the half-spaces are convex sets, and the intersection of a collection of convex sets is also convex. In the non-degenerate cases this convex set is a convex polyhedron; it may also be empty or a convex polyhedron of lower dimension confined to an affine subspace of the n-dimensional space Rn. A linear programming problem seeks to optimize (find a maximum or minimum value of) a function, called the objective function, subject to a number of constraints on the variables which, in general, are linear inequalities.
The list of constraints is then a system of linear inequalities. Further generalizations of this type are often only of theoretical interest until an application for them becomes apparent.
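The matrix inequality A x ≤ b is easy to check componentwise; below is an illustrative sketch (the helper name satisfies and the particular constraint system are assumptions for the example, not from the source):

```python
def satisfies(A, x, b):
    """Check whether the point x satisfies the system A x <= b componentwise,
    i.e. lies in the intersection of the corresponding half-spaces."""
    for row, b_i in zip(A, b):
        if sum(a_ij * x_j for a_ij, x_j in zip(row, x)) > b_i:
            return False
    return True

# Example system: x + 3y <= 9, x >= 0, y >= 0 (rewritten as -x <= 0, -y <= 0).
A = [[1, 3], [-1, 0], [0, -1]]
b = [9, 0, 0]
print(satisfies(A, [0, 0], b))  # True:  the origin lies in all three half-planes
print(satisfies(A, [9, 1], b))  # False: 9 + 3*1 = 12 > 9
```

The set of points for which satisfies returns True is exactly the convex polyhedron described above.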
11.
Feasible region
–
In mathematical optimization, the feasible region, feasible set, or search space is the initial set of candidate solutions to the problem, before the set of candidates has been narrowed down. For example, consider the problem: minimize x^2 + y^4 with respect to the variables x and y, subject to 1 ≤ x ≤ 10 and 5 ≤ y ≤ 12. Here the feasible set is the set of pairs (x, y) in which the value of x is at least 1 and at most 10 and the value of y is at least 5 and at most 12. Note that the feasible set of the problem is separate from the objective function, which states the criterion to be optimized. In many problems, the feasible set reflects a constraint that one or more variables must be non-negative. In pure integer programming problems, the feasible set is a set of integer points. In linear programming problems, the feasible set is a convex polytope. Constraint satisfaction is the process of finding a point in the feasible region. A convex feasible set is one in which a line segment connecting any two feasible points goes through only other feasible points, and not through any points outside the feasible set. If the constraints of a problem are mutually contradictory, there are no points that satisfy all the constraints; in this case the problem has no solution and is said to be infeasible. Feasible sets may be bounded or unbounded. For example, the feasible set defined only by the constraints x ≥ 0 and y ≥ 0 is unbounded, because in some directions there is no limit on how far one can go while remaining feasible. In contrast, the feasible set formed by adding a constraint such as x + 2y ≤ 4 is bounded, because the extent of movement in any direction is limited by the constraints. In linear programming problems with n variables, a necessary but not sufficient condition for the feasible set to be bounded is that the number of constraints be at least n + 1. If the feasible set is unbounded, there may or may not be an optimum, depending on the specifics of the objective function. In optimization and other branches of mathematics, and in search algorithms, a candidate solution is a member of the set of possible solutions in the feasible region of a given problem.
A candidate solution does not have to be a likely or reasonable solution to the problem; it is simply in the set that satisfies all constraints, that is, it is in the set of feasible solutions. The space of all candidate solutions, before any feasible points have been excluded, is called the feasible region, feasible set, or search space. This is the set of all possible solutions that satisfy the problem's constraints; constraint satisfaction is the process of finding a point in the feasible set. In the case of the genetic algorithm, the candidate solutions are the individuals in the population being evolved by the algorithm
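The worked example above (minimize x^2 + y^4 subject to 1 ≤ x ≤ 10 and 5 ≤ y ≤ 12) can be explored with a naive search over integer candidates. This is only an illustrative sketch of constraint satisfaction plus optimization, not a general method, and the helper names are my own:

```python
def feasible(x, y):
    """Feasible region of the example problem: 1 <= x <= 10 and 5 <= y <= 12."""
    return 1 <= x <= 10 and 5 <= y <= 12

def objective(x, y):
    return x ** 2 + y ** 4

# Naive search over the integer candidates inside the feasible region.
best = min(
    ((objective(x, y), x, y) for x in range(1, 11) for y in range(5, 13)),
    key=lambda t: t[0],
)
print(best)  # (626, 1, 5): the minimum sits at the corner x = 1, y = 5
```

Because the objective is increasing in both variables over the feasible set, the optimum lands on the boundary corner (1, 5), where x^2 + y^4 = 1 + 625 = 626.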
12.
Convex polytope
–
A convex polytope is a special case of a polytope, having the additional property that it is also a convex set of points in the n-dimensional space Rn. Some authors use the terms convex polytope and convex polyhedron interchangeably. In addition, some texts require a polytope to be a bounded set; the terms bounded/unbounded convex polytope will be used whenever the boundedness is critical to the discussed issue. Yet other texts treat a convex n-polytope as a surface or (n − 1)-manifold. Convex polytopes play an important role both in various branches of mathematics and in applied areas, most notably in linear programming. A comprehensive and influential book on the subject, called Convex Polytopes, was published in 1967 by Branko Grünbaum; in 2003 the 2nd edition of the book was published, with significant additional material contributed by new writers. In Grünbaum's book, and in some other texts in discrete geometry, convex polytopes are often simply called polytopes; Grünbaum points out that this is solely to avoid the endless repetition of the word convex. A polytope is called full-dimensional if it is an n-dimensional object in Rn. Many examples of bounded convex polytopes can be found in the article polyhedron. A convex polytope may be defined in a number of ways, depending on what is more suitable for the problem at hand. Grünbaum's definition is in terms of a convex set of points in space. Other important definitions are: as the intersection of half-spaces, and as the convex hull of a set of points. The latter is equivalent to defining a bounded convex polytope as the convex hull of a finite set of points. Such a definition is called a vertex representation (V-description); for a compact convex polytope, the minimal V-description is unique and is given by the set of the vertices of the polytope. A convex polytope may also be defined as an intersection of a finite number of half-spaces. Such a definition is called a half-space representation (H-description); there exist infinitely many H-descriptions of a convex polytope.
However, for a full-dimensional convex polytope, the minimal H-description is in fact unique and is given by the set of the facet-defining halfspaces. A closed half-space can be written as a linear inequality a_1 x_1 + a_2 x_2 + ⋯ + a_n x_n ≤ b, where n is the dimension of the space containing the polytope under consideration
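In the plane, the minimal V-description of the convex hull of a finite point set can be computed with a standard hull algorithm. Below is a sketch of Andrew's monotone chain (an added illustration; the source text does not prescribe any particular algorithm):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2-D points.

    Returns the hull vertices in counter-clockwise order: the minimal
    V-description of the convex polytope spanned by the input points.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop duplicated endpoints

# Interior points are discarded; only the 4 corners of the square remain.
print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```

Dropping the interior point (1, 1) is exactly the minimality of the V-description: only the vertices of the polytope are needed.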
13.
Intersection (mathematics)
–
In mathematics, the intersection of two or more objects is another, usually smaller, object. All objects are presumed to lie in a common space, except in set theory. The intersection is one of the basic concepts of geometry. Intuitively, the intersection of two or more objects is a new object that lies in each of the original objects. An intersection can have various shapes, but a point is the most common case in plane geometry. Set-theoretically, the intersection is always defined, but it may be empty. Incidence geometry instead defines an intersection as an object of lower dimension that is incident to each of the original objects; in this approach an intersection can sometimes be undefined, such as for parallel lines. In both cases the concept of intersection relies on logical conjunction. Algebraic geometry defines intersections in its own way with intersection theory. There can be more than one primitive object, such as points, that forms an intersection; this can be understood ambiguously: either the intersection is all of the shared objects collectively, or each shared object is an intersection in its own right. In constructive solid geometry, Boolean intersection is one of the ways of combining 2D/3D shapes
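The most common plane-geometry case, the intersection point of two lines, is a small linear-algebra computation (Cramer's rule). The sketch below assumes each line is given as a triple (a, b, c) for a x + b y = c; the helper name is an illustrative addition:

```python
def line_intersection(l1, l2):
    """Intersection point of two lines a*x + b*y = c, each given as (a, b, c).

    Returns None when the determinant vanishes (parallel or coincident lines),
    the case where incidence geometry leaves the intersection undefined.
    """
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    # Cramer's rule for the 2x2 linear system.
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

print(line_intersection((1, 1, 4), (1, -1, 0)))  # (2.0, 2.0): x+y=4 meets x-y=0
print(line_intersection((1, 1, 1), (2, 2, 5)))   # None: parallel lines
```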
14.
Real number
–
In mathematics, a real number is a value that represents a quantity along a line. The adjective real in this context was introduced in the 17th century by René Descartes. The real numbers include all the rational numbers, such as the integer −5 and the fraction 4/3, and all the irrational numbers, such as √2. Included within the irrationals are the transcendental numbers, such as π. Real numbers can be thought of as points on an infinitely long line called the number line or real line. Any real number can be determined by a possibly infinite decimal representation, such as that of 8.632. The real line can be thought of as a part of the complex plane, and the complex numbers include the real numbers. These descriptions of the real numbers are not sufficiently rigorous by the modern standards of pure mathematics; several rigorous definitions exist, and all of them satisfy the axiomatic definition and are thus equivalent. The statement that there is no subset of the reals with cardinality strictly greater than ℵ0 and strictly smaller than that of the reals is known as the continuum hypothesis. Simple fractions were used by the Egyptians around 1000 BC; the Vedic Sulba Sutras, c. 600 BC, include what may be the first use of irrational numbers. Around 500 BC, the Greek mathematicians led by Pythagoras realized the need for irrational numbers, in particular the irrationality of the square root of 2. Arabic mathematicians merged the concepts of number and magnitude into a more general idea of real numbers. In the 16th century, Simon Stevin created the basis for modern decimal notation; in the 17th century, Descartes introduced the term real to describe roots of a polynomial, distinguishing them from imaginary ones. In the 18th and 19th centuries, there was much work on irrational and transcendental numbers. Johann Heinrich Lambert gave the first flawed proof that π cannot be rational, and Adrien-Marie Legendre completed the proof; Évariste Galois developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory.
Charles Hermite first proved that e is transcendental, and Ferdinand von Lindemann showed that π is transcendental. Lindemann's proof was much simplified by Weierstrass, still further by David Hilbert, and has finally been made elementary by Adolf Hurwitz and Paul Gordan. The development of calculus in the 18th century used the entire set of real numbers without having defined them cleanly. The first rigorous definition was given by Georg Cantor in 1871; in 1874 he showed that the set of all real numbers is uncountably infinite, but the set of all algebraic numbers is countably infinite. Contrary to widely held beliefs, his first method was not his famous diagonal argument, which he published in 1891. The real number system can be defined axiomatically up to an isomorphism, which is described hereafter. Another possibility is to start from some rigorous axiomatization of Euclidean geometry; from the structuralist point of view all these constructions are on equal footing
15.
Affine transformation
–
In geometry, an affine transformation, affine map or affinity is a function between affine spaces which preserves points, straight lines and planes. Sets of parallel lines remain parallel after an affine transformation. An affine transformation does not necessarily preserve angles between lines or distances between points, though it does preserve ratios of distances between points lying on a straight line. Examples of affine transformations include translation, scaling, homothety, similarity transformation, reflection, rotation, shear mapping, and compositions of them in any combination and sequence. If X and Y are affine spaces, then every affine transformation f : X → Y is of the form x ↦ M x + b; unlike a purely linear transformation, an affine map need not preserve the zero point of a linear space. Thus, every linear transformation is affine, but not every affine transformation is linear. All Euclidean spaces are affine, but there are affine spaces that are non-Euclidean. Affine transformations can be written in affine coordinates, which include Cartesian coordinates in Euclidean spaces; another way to deal with affine transformations systematically is to select a point as the origin, after which any affine transformation is equivalent to a linear transformation followed by a translation. An affine map f : A → B between two affine spaces is a map on the points that acts linearly on the vectors. In symbols, f determines a linear transformation φ such that f(P) − f(Q) = φ(P − Q) for any pair of points P, Q in A, and we can interpret this definition in a few other ways, as follows. If an origin O ∈ A is chosen, and B denotes its image f(O) ∈ B, the conclusion is that, intuitively, f consists of a translation and a linear map. In other words, f preserves barycenters; as shown above, an affine map is the composition of two functions, a translation and a linear map. 
Ordinary vector algebra uses matrix multiplication to represent linear maps; using an augmented matrix and an augmented vector, it is possible to represent both the translation and the linear map with a single matrix multiplication. If A is a matrix, then [y⃗ ; 1] = [A b⃗ ; 0 … 0 1] [x⃗ ; 1] is equivalent to y⃗ = A x⃗ + b⃗. The above-mentioned augmented matrix is called an affine transformation matrix, or projective transformation matrix. This representation exhibits the set of all invertible affine transformations as the semidirect product of Kⁿ and GL(n, K), which is a group under the operation of composition of functions. Ordinary matrix-vector multiplication always maps the origin to the origin, and could therefore never represent a translation, in which the origin must necessarily be mapped to some other point. By appending the additional coordinate 1 to every vector, one considers the space to be mapped as a subset of a space with an additional dimension. In that space, the original space occupies the subset in which the additional coordinate is 1, so the origin of the original space can be found at (0, 0, …, 0, 1). A translation within the original space by means of a linear transformation of the higher-dimensional space is then possible
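The augmented-matrix trick can be sketched concretely in two dimensions. The values of A, b and x below are chosen purely for illustration; the point is that one 3 × 3 multiplication on the augmented vector [x₁, x₂, 1] reproduces y = A x + b:

```python
# Sketch (assumed example values): the affine map y = A x + b in 2D
# expressed as a single augmented-matrix multiplication.

def mat_vec(M, v):
    """Plain matrix-vector product over nested lists."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[2, 0], [0, 3]]   # linear part (a scaling, chosen for illustration)
b = [1, -1]            # translation part
x = [4, 5]

# Augmented matrix [[A, b], [0 0 1]] acting on the augmented vector [x, 1].
M = [[2, 0, 1],
     [0, 3, -1],
     [0, 0, 1]]
aug = mat_vec(M, x + [1])                  # last coordinate stays 1
direct = [2 * x[0] + 1, 3 * x[1] - 1]      # y = A x + b computed directly
assert aug[:2] == direct
print(aug[:2])  # [9, 14]
```

Because the bottom row of M is (0, 0, 1), every augmented vector keeps its final coordinate equal to 1, which is exactly the subset the original plane occupies in the higher-dimensional space.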
16.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing and automated reasoning tasks; an algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem. In English, the word was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that algorithm took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals. An informal definition could be a set of rules that precisely defines a sequence of operations, which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. Humans can do something equally useful in the case of certain enumerably infinite sets: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability. 
That notion is central for explaining how formal systems come into being, starting from a set of axioms. In logic, the time that an algorithm requires to complete cannot be measured. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term. Algorithms are essential to the way computers process data; thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute. Gurevich argues that Turing's informal argument in favor of his thesis justifies a stronger thesis; according to Savage, an algorithm is a computational process defined by a Turing machine. Typically, when an algorithm is associated with processing information, data can be read from an input source and written to an output device. Stored data are regarded as part of the internal state of the entity performing the algorithm; in practice, the state is stored in one or more data structures. For some such computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be dealt with, case-by-case
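A classic example of a precisely defined, always-terminating procedure of the kind described above is Euclid's algorithm for the greatest common divisor. This small sketch (not from the original article) shows the self-contained sequence of steps, each state transition fully determined and the loop guaranteed to stop:

```python
def gcd(a, b):
    """Euclid's algorithm: replace (a, b) with (b, a mod b) until b is 0.

    The remainder strictly decreases, so the loop always terminates,
    which is the property that makes the procedure an algorithm at all.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```

Every conditional step (here, whether b is zero) is dealt with explicitly, matching the "case-by-case" requirement in the text.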
17.
Vector space
–
A vector space is a collection of objects called vectors, which may be added together and multiplied by numbers, called scalars in this context. Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms. Euclidean vectors are an example of a vector space; they represent physical quantities such as forces: any two forces can be added to yield a third, and the multiplication of a force vector by a real multiplier is another force vector. In the same vein, but in a more geometric sense, vectors representing displacements in the plane or in space also form vector spaces. Vector spaces are the subject of linear algebra and are well characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. Infinite-dimensional vector spaces arise naturally in mathematical analysis, as function spaces; these vector spaces are generally endowed with additional structure, which may be a topology, allowing the consideration of issues of proximity and continuity. Among these topologies, those that are defined by a norm or inner product are commonly used. This is particularly the case of Banach spaces and Hilbert spaces. Historically, the first ideas leading to vector spaces can be traced back as far as the 17th century's analytic geometry, matrices, systems of linear equations, and Euclidean vectors. Today, vector spaces are applied throughout mathematics, science and engineering; furthermore, vector spaces furnish an abstract, coordinate-free way of dealing with geometrical and physical objects such as tensors. This in turn allows the examination of local properties of manifolds by linearization techniques. Vector spaces may be generalized in several ways, leading to more advanced notions in geometry and abstract algebra. 
The concept of vector space will first be explained by describing two particular examples. The first example of a vector space consists of arrows in a fixed plane. This is used in physics to describe forces or velocities: given any two such arrows, v and w, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows and is denoted v + w; when a is negative, av is defined as the arrow pointing in the opposite direction instead. The second example consists of pairs of real numbers x and y. Such a pair is written as (x, y); the sum of two such pairs and multiplication of a pair with a number are defined as follows: (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2) and a(x, y) = (ax, ay). The first example above reduces to this one if the arrows are represented by the pairs of Cartesian coordinates of their end points. A vector space over a field F is a set V together with two operations that satisfy the eight axioms listed below. Elements of V are commonly called vectors, and elements of F are commonly called scalars. The second operation, called scalar multiplication, takes any scalar a and any vector v and gives another vector av. In this article, vectors are represented in boldface to distinguish them from scalars
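The pair operations defined above can be spot-checked directly. This sketch (an illustration, not from the original article) implements (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2) and a(x, y) = (ax, ay), then asserts a few of the vector-space axioms on sample values, using rational scalars to emphasize that any field works:

```python
# Sketch: the pair operations from the text, with sample axiom checks.
from fractions import Fraction  # rational scalars also form a field

def add(u, v):
    """Componentwise sum of two pairs."""
    return (u[0] + v[0], u[1] + v[1])

def smul(a, v):
    """Scalar multiple of a pair."""
    return (a * v[0], a * v[1])

u, v, w = (1, 2), (3, -1), (0, 5)
a = Fraction(1, 2)

assert add(u, v) == add(v, u)                              # commutativity
assert add(add(u, v), w) == add(u, add(v, w))              # associativity
assert add(u, (0, 0)) == u                                 # zero vector
assert smul(a, add(u, v)) == add(smul(a, u), smul(a, v))   # distributivity
print(add(u, v))  # (4, 1)
```

Checking axioms on samples does not prove them, of course, but it makes the abstract list of requirements tangible.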
18.
Matrix (mathematics)
–
In mathematics, a matrix is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. For example, the dimensions of the matrix below are 2 × 3. The individual items in an m × n matrix A, often denoted by ai,j, where 1 ≤ i ≤ m and 1 ≤ j ≤ n, are called its elements or entries. Provided that they have the same size, two matrices can be added or subtracted element by element. The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second. Any matrix can be multiplied element-wise by a scalar from its associated field. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. The product of two matrices is a matrix that represents the composition of two linear transformations. Another application of matrices is in the solution of systems of linear equations. If the matrix is square, it is possible to deduce some of its properties by computing its determinant; for example, a square matrix has an inverse if and only if its determinant is not zero. Insight into the geometry of a linear transformation is obtainable from the matrix's eigenvalues. Applications of matrices are found in most scientific fields; in computer graphics, they are used to manipulate 3D models and project them onto a 2-dimensional screen. Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions, and matrices are used in economics to describe systems of economic relationships. A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations. Matrix decomposition methods simplify computations, both theoretically and practically, and algorithms that are tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, expedite computations in the finite element method and other computations. 
Infinite matrices occur in planetary theory and in atomic theory; a simple example of an infinite matrix is the matrix representing the derivative operator, which acts on the Taylor series of a function. A matrix is a rectangular array of numbers or other mathematical objects for which operations such as addition and multiplication are defined. Most commonly, a matrix over a field F is a rectangular array of scalars, each of which is a member of F. Most of this article focuses on real and complex matrices, that is, matrices whose elements are real numbers or complex numbers, respectively. More general types of entries are discussed below. For instance, this is a real matrix, A =
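The multiplication rule stated above, that the column count of the first matrix must equal the row count of the second, can be sketched in a few lines. This example (an illustration, not from the original article) multiplies a 2 × 3 matrix by a 3 × 2 matrix, yielding a 2 × 2 result:

```python
def matmul(A, B):
    """Multiply an m x n matrix by an n x p matrix (pure-Python sketch)."""
    n = len(B)
    # The defining compatibility condition: columns of A == rows of B.
    assert all(len(row) == n for row in A), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]    # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]     # 3 x 2
print(matmul(A, B))  # [[58, 64], [139, 154]]
```

Each entry of the product is the dot product of a row of A with a column of B, which is exactly how the composition of the two corresponding linear transformations acts.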
19.
Economics
–
Economics is a social science concerned chiefly with description and analysis of the production, distribution, and consumption of goods and services, according to the Merriam-Webster Dictionary. Economics focuses on the behaviour and interactions of economic agents and how economies work; consistent with this focus, textbooks often distinguish between microeconomics and macroeconomics. Microeconomics examines the behaviour of basic elements in the economy, including individual agents and markets, and their interactions. Individual agents may include, for example, households, firms, buyers and sellers. Macroeconomics analyzes the entire economy and issues affecting it, including unemployment of resources, inflation, economic growth, and the public policies that address these issues. Economic analysis can be applied throughout society, as in business, finance and health care. Economic analyses may also be applied to such diverse subjects as crime, education, the family, law, politics, religion, social institutions, war, science, and the environment. At the turn of the 21st century, the expanding domain of economics in the social sciences has been described as economic imperialism. The ultimate goal of economics is to improve the living conditions of people in their everyday life. There are a variety of definitions of economics, and some of the differences may reflect evolving views of the subject or different views among economists. Adam Smith described the subject in part as the means to supply the state or commonwealth with a revenue for the publick services. Jean-Baptiste Say, distinguishing the subject from its public-policy uses, defines it as the science of production, distribution, and consumption of wealth. On the satirical side, Thomas Carlyle coined the dismal science as an epithet for classical economics. Alfred Marshall's widely cited definition describes economics as a study of mankind in the ordinary business of life; it enquires how he gets his income and how he uses it. Thus, it is on the one side the study of wealth and, on the other and more important side, a part of the study of man. 
He affirmed that previous economists have usually centred their studies on the analysis of wealth: how wealth is created, distributed, and consumed. But he said that economics can be used to study other things, such as war, that are outside its usual focus. This is because war has winning it as the goal, generates both costs and benefits, and resources are used to attain the goal. If the war is not winnable, or if the expected costs outweigh the benefits, the deciding actors may choose alternatives other than war. Some subsequent comments criticized the definition as overly broad in failing to limit its subject matter to analysis of markets. There are other criticisms as well, such as scarcity not accounting for the macroeconomics of high unemployment. The same source reviews a range of definitions included in principles of economics textbooks. Among economists more generally, it argues that a particular definition presented may reflect the direction toward which the author believes economics is evolving. Microeconomics examines how entities, forming a market structure, interact within a market to create a market system
20.
Scheduling (production processes)
–
Scheduling is the process of arranging, controlling and optimizing work and workloads in a production process or manufacturing process. Scheduling is used to allocate plant and machinery resources, plan human resources, and plan production processes. It is an important tool for manufacturing and engineering, where it can have a major impact on the productivity of a process. In manufacturing, the purpose of scheduling is to minimize the production time and costs, by telling a production facility when to make, with which staff, and on which equipment. Production scheduling aims to maximize the efficiency of the operation and reduce costs. Companies use backward and forward scheduling to allocate plant and machinery resources, plan human resources, and plan production processes. Forward scheduling is planning the tasks from the date resources become available to determine the shipping date or the due date. Backward scheduling is planning the tasks from the due date or required-by date to determine the start date and/or any changes in capacity required. A key factor in scheduling is productivity, the relation between quantity of inputs and quantity of output. Key concepts here are: Inputs: inputs are plant, labor, materials, tooling and energy. Outputs: outputs are the products produced in factories, either for other factories or for the end buyer. The extent to which any one product is produced within any one factory is governed by transaction cost. Output within the factory: the output of any one work area within the factory is an input to the next work area in that factory, according to the manufacturing process. For example, the output of cutting is an input to the bending room. Output for the next factory: by way of example, the output of a paper mill is an input to a print factory, and the output of a plant may be an input to an asphalt plant or a cosmetics factory. 
Output for the end buyer: factory output goes to the consumer via a business such as a retailer or an asphalt paving company. Resource allocation: resource allocation is assigning inputs to produce output; the aim is to maximize output with given inputs or to minimize the quantity of inputs to produce the required output. Production scheduling can take a significant amount of computing power if there are a large number of tasks. Batch production scheduling shares some concepts and techniques with finite capacity scheduling, which has been applied to many manufacturing problems. The specific issues of scheduling batch manufacturing processes have generated considerable industrial and academic interest. A batch process can be described in terms of a recipe, which comprises a bill of materials and operating instructions which describe how to make the product. The ISA S88 batch process control standard provides a framework for describing a batch process recipe; the standard provides a procedural hierarchy for a recipe. A recipe may be organized into a series of unit-procedures or major steps; unit-procedures are organized into operations, and operations may be further organized into phases
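The forward and backward scheduling described above can be sketched with dates. The task names and durations below are hypothetical examples, not from the original text; the point is only the direction of the date arithmetic:

```python
# Sketch of forward vs. backward scheduling (task list is a made-up example).
from datetime import date, timedelta

tasks = [("cutting", 2), ("bending", 3), ("painting", 1)]  # (name, days)

def forward(start):
    """Forward scheduling: from the resource-availability date to the finish date."""
    t = start
    for _, days in tasks:
        t += timedelta(days=days)
    return t

def backward(due):
    """Backward scheduling: from the due date back to the latest start date."""
    t = due
    for _, days in reversed(tasks):
        t -= timedelta(days=days)
    return t

print(forward(date(2024, 3, 1)))   # 2024-03-07
print(backward(date(2024, 3, 7)))  # 2024-03-01
```

The two functions are inverses of each other on this simple sequential task chain; real schedulers would also account for working calendars, capacity, and parallel work areas.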
21.
Leonid Kantorovich
–
Leonid Vitaliyevich Kantorovich was a Soviet mathematician and economist, known for his theory and development of techniques for the optimal allocation of resources. He is regarded as the founder of linear programming, and he was the winner of the Stalin Prize in 1949 and the Nobel Memorial Prize in Economics in 1975. Kantorovich was born on 19 January 1912 to a Russian Jewish family; his father was a doctor practicing in Saint Petersburg. In 1926, at the age of fourteen, he began his studies at Leningrad University; he graduated from the Faculty of Mathematics in 1930, and began his graduate studies. In 1934, at the age of 22, he became a full professor. Later, Kantorovich worked for the Soviet government. He was given the task of optimizing production in a plywood industry, and he came up with the mathematical technique now known as linear programming, some years before it was advanced by George Dantzig. He authored several books including The Mathematical Method of Production Planning and Organization; for this work, Kantorovich was awarded the Stalin Prize in 1949. After 1939, he became a professor at the Military Engineering-Technical University. During the Siege of Leningrad, Kantorovich was a professor of VITU of the Navy and in charge of safety on the Road of Life. He calculated the optimal distance between cars on ice, depending on the thickness of the ice and the temperature of the air. In December 1941 and January 1942, Kantorovich personally walked between cars driving on the ice of Lake Ladoga, on the Road of Life, to ensure the cars did not sink; however, many cars with food for survivors of the siege were destroyed by German air-bombings. In 1948 Kantorovich was assigned to the atomic project of the USSR. Since 1960, Kantorovich lived and worked in Novosibirsk, where he created a new department. For his feat and courage Kantorovich was awarded the Order of the Patriotic War, and was decorated with the medal For Defense of Leningrad. 
In mathematical analysis, Kantorovich had important results in functional analysis, approximation theory, and operator theory. In particular, he formulated fundamental results in the theory of normed vector lattices. Kantorovich considered infinite-dimensional optimization problems, such as the Kantorovich-Monge problem in transportation theory. His analysis proposed the Kantorovich metric, which is used in probability theory
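A linear programming problem of the kind Kantorovich pioneered can be illustrated with a tiny, entirely hypothetical allocation example: maximize 3x + 2y subject to x + y ≤ 4, x + 3y ≤ 6, x ≥ 0, y ≥ 0. This sketch is not a real LP solver; it simply scans a small grid of candidate points, which suffices here because the feasible region's vertices happen to have integer coordinates:

```python
# Hypothetical allocation problem in the spirit of linear programming:
# maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
from itertools import product

def feasible(x, y):
    """All constraints of the toy problem."""
    return x >= 0 and y >= 0 and x + y <= 4 and x + 3 * y <= 6

# An optimum of a linear program lies at a vertex of the feasible region;
# here a brute-force scan over integer candidates is enough to find it.
best = max((3 * x + 2 * y, x, y)
           for x, y in product(range(0, 7), repeat=2)
           if feasible(x, y))
print(best)  # (12, 4, 0): objective value 12 at x = 4, y = 0
```

Production-scale problems of this form are solved with the simplex method or interior-point methods rather than enumeration, but the structure (a linear objective over linear constraints) is the same.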
22.
John von Neumann
–
John von Neumann was a Hungarian-American mathematician, physicist, inventor, computer scientist, and polymath. He made major contributions to a number of fields, including mathematics, physics, economics, computing, and statistics. He published over 150 papers in his life: about 60 in pure mathematics, 20 in physics, and 60 in applied mathematics. His last work, an unfinished manuscript written while in the hospital, was later published in book form as The Computer and the Brain. His analysis of the structure of self-replication preceded the discovery of the structure of DNA. In a short list of facts about his life, he wrote: "also, my work on various forms of operator theory, Berlin 1930 and Princeton 1935–1939; on the ergodic theorem, Princeton, 1931–1932." During World War II he worked on the Manhattan Project, developing the mathematical models behind the explosive lenses used in the implosion-type nuclear weapon. After the war, he served on the General Advisory Committee of the United States Atomic Energy Commission. Along with theoretical physicist Edward Teller, mathematician Stanislaw Ulam, and others, he worked out key steps in the nuclear physics involved in thermonuclear reactions and the hydrogen bomb. Von Neumann was born Neumann János Lajos to a wealthy, acculturated Jewish family. Von Neumann's place of birth was Budapest in the Kingdom of Hungary, which was then part of the Austro-Hungarian Empire. He was the eldest of three children; he had two younger brothers, Michael, born in 1907, and Nicholas, who was born in 1911. His father, Neumann Miksa, was a banker who held a doctorate in law; he had moved to Budapest from Pécs at the end of the 1880s. Miksa's father and grandfather were both born in Ond, Zemplén County, northern Hungary. John's mother was Kann Margit; her parents were Jakab Kann and Katalin Meisels. Three generations of the Kann family lived in apartments above the Kann-Heller offices in Budapest. 
In 1913, his father was elevated to the nobility for his service to the Austro-Hungarian Empire by Emperor Franz Joseph; the Neumann family thus acquired the hereditary appellation Margittai, meaning "of Marghita". The family had no connection with the town; the appellation was chosen in reference to Margaret. Neumann János became Margittai Neumann János, which he later changed to the German Johann von Neumann. Von Neumann was a child prodigy: as a six-year-old, he could multiply and divide two 8-digit numbers in his head, and could converse in Ancient Greek. When he once caught his mother staring aimlessly, the six-year-old von Neumann asked her, "What are you calculating?" Formal schooling did not start in Hungary until the age of ten; instead, governesses taught von Neumann, his brothers and his cousins. Max believed that knowledge of languages other than Hungarian was essential, so the children were tutored in English, French, German and Italian. A copy was contained in a private library Max purchased; one of the rooms in the apartment was converted into a library and reading room, with bookshelves from ceiling to floor. Von Neumann entered the Lutheran Fasori Evangelikus Gimnázium in 1911. This was one of the best schools in Budapest, part of a brilliant education system designed for the elite
23.
Joseph Fourier
–
The Fourier transform and Fourier's law are also named in his honour. Fourier is also credited with the discovery of the greenhouse effect. Fourier was born at Auxerre, the son of a tailor, and was orphaned at age nine. Fourier was recommended to the Bishop of Auxerre and received his education through this introduction. The commissions in the scientific corps of the army were reserved for those of good birth, and being thus ineligible, he accepted a military lectureship on mathematics. He took a prominent part in his own district in promoting the French Revolution; he was imprisoned briefly during the Terror but in 1795 was appointed to the École Normale, and subsequently succeeded Joseph-Louis Lagrange at the École Polytechnique. Fourier accompanied Napoleon Bonaparte on his Egyptian expedition in 1798, as scientific adviser. Cut off from France by the English fleet, he organized the workshops on which the French army had to rely for their munitions of war. He also contributed several papers to the Egyptian Institute which Napoleon founded at Cairo. After the British victories and the capitulation of the French under General Menou in 1801, Fourier returned to France. In 1801, Napoleon appointed Fourier Prefect of the Department of Isère in Grenoble, where he oversaw road construction and other projects. Fourier had previously returned home from the Napoleonic expedition to Egypt to resume his academic post as professor at the École Polytechnique when Napoleon decided otherwise, remarking: the Prefect of the Department of Isère having recently died, I would like to express my confidence in citizen Fourier by appointing him to this place. Hence, being faithful to Napoleon, he took the office of Prefect. It was while at Grenoble that he began to experiment on the propagation of heat; he presented his paper On the Propagation of Heat in Solid Bodies to the Paris Institute on December 21, 1807. He also contributed to the monumental Description de l'Égypte. Fourier moved to England in 1816. 
Later, he returned to France, and in 1822 succeeded Jean Baptiste Joseph Delambre as Permanent Secretary of the French Academy of Sciences. In 1830, he was elected a foreign member of the Royal Swedish Academy of Sciences. In 1830, his diminished health began to take its toll: Fourier had already experienced, in Egypt and Grenoble, some attacks of illness. At Paris, it was impossible to be mistaken with respect to the cause of the frequent suffocations which he experienced. A fall, however, which he sustained on the 4th of May 1830, while descending a flight of stairs, aggravated his condition; shortly after this event, he died in his bed on 16 May 1830. His name is one of the 72 names inscribed on the Eiffel Tower. A bronze statue was erected in Auxerre in 1849, but it was melted down for armaments during World War II. Joseph Fourier University in Grenoble is named after him. His book on the analytical theory of heat was translated, with editorial corrections, into English 56 years later by Freeman
24.
Soviet Union
–
The Soviet Union, officially the Union of Soviet Socialist Republics, was a socialist state in Eurasia that existed from 1922 to 1991. It was nominally a union of national republics, but its government and economy were highly centralized. The Soviet Union had its roots in the October Revolution of 1917; this established the Russian Socialist Federative Soviet Republic and started the Russian Civil War between the revolutionary Reds and the counter-revolutionary Whites. In 1922, the communists were victorious, forming the Soviet Union with the unification of the Russian, Transcaucasian, Ukrainian, and Byelorussian republics. Following Lenin's death in 1924, a collective leadership and a brief power struggle, Joseph Stalin came to power in the mid-1920s. Stalin suppressed all political opposition to his rule and committed the state ideology to Marxism–Leninism. As a result, the country underwent a period of rapid industrialization and collectivization which laid the foundation for its victory in World War II and postwar dominance of Eastern Europe. Shortly before World War II, Stalin signed the Molotov–Ribbentrop Pact agreeing to non-aggression with Nazi Germany; in June 1941, the Germans invaded the Soviet Union, opening the largest and bloodiest theater of war in history. Soviet war casualties accounted for the highest proportion of the conflict in the effort of acquiring the upper hand over Axis forces at battles such as Stalingrad. Soviet forces eventually captured Berlin in 1945; the territory overtaken by the Red Army became satellite states of the Eastern Bloc. The Cold War emerged by 1947 as the Soviet bloc confronted the Western states that united in the North Atlantic Treaty Organization in 1949. Following Stalin's death in 1953, a period of political and economic liberalization, known as de-Stalinization and Khrushchev's Thaw, occurred. The country developed rapidly, as millions of peasants were moved into industrialized cities. The USSR took an early lead in the Space Race with Sputnik 1, the first-ever satellite, and Vostok 1, the first human spaceflight. 
In the 1970s, there was a brief détente of relations with the United States, but tensions resumed with the Soviet war in Afghanistan in 1979. The war drained economic resources and was matched by an escalation of American military aid to Mujahideen fighters. In the mid-1980s, the last Soviet leader, Mikhail Gorbachev, sought to reform and liberalize the economy through his policies of glasnost and perestroika. The goal was to preserve the Communist Party while reversing economic stagnation. The Cold War ended during his tenure, and in 1989 Soviet satellite countries in Eastern Europe overthrew their respective communist regimes. This led to the rise of strong nationalist and separatist movements inside the USSR as well. In August 1991, a coup d'état was attempted by Communist Party hardliners. It failed, with Russian President Boris Yeltsin playing a high-profile role in facing down the coup. On 25 December 1991, Gorbachev resigned and the twelve remaining constituent republics emerged from the dissolution of the Soviet Union as independent post-Soviet states
25.
Economist
–
An economist is a practitioner in the social science discipline of economics. The individual may also study, develop, and apply theories and concepts from economics and write about economic policy. A generally accepted interpretation in academia is that an economist is one who has attained a Ph.D. in economics and teaches economic science. The professionalization of economics, reflected in academia, has been described as the main change in economics since around 1900. Economists debate the path they believe their profession should take; surveys among economists indicate a preference for a shift toward the latter. Most major universities have an economics faculty, school or department. However, many prominent economists come from a background in mathematics, business, political science, law, or sociology. Getting a PhD in economics takes six years, on average, with a median of 5.3 years. The Nobel Memorial Prize in Economics, established by Sveriges Riksbank in 1968, is a prize awarded to economists each year for outstanding intellectual contributions in the field of economics. The prize winners are announced in October every year; they receive their awards on December 10, the anniversary of Alfred Nobel's death. In contrast to regulated professions such as engineering, law or medicine, in academia, to be called an economist requires a Ph.D. degree in economics. A professional working inside one of the many fields of economics, or having a degree in this subject, is often considered to be an economist. In addition to government and academia, economists are employed in banking, finance, accountancy, commerce, marketing, business administration and lobbying. Politicians often consult economists before enacting economic policy, and many statesmen have academic degrees in economics. 
Economics graduates are employable in varying degrees depending on the regional economic scenario; small numbers go on to undertake postgraduate studies, either in economics, research, teacher training or further qualifications in specialist areas. Nearly 135 colleges and universities grant around 900 new Ph.D.s every year. Incomes are highest for those in the private sector, followed by the federal government, with academia paying the lowest incomes. As of January 2013, PayScale.com showed Ph.D. economists' salary ranges as follows: all Ph.D. economists, $61,000 to $160,000. The largest single grouping of economists in the UK are the more than 1,000 members of the Government Economic Service. This figure compares very favourably with the wider picture, with 64 percent of economics graduates in employment. Some well-known economists include Ben Bernanke, Chairman of the Federal Reserve from 2006 to 2014, and Milton Friedman, Nobel Memorial Prize in Economic Sciences laureate
26.
World War II
–
World War II, also known as the Second World War, was a global war that lasted from 1939 to 1945, although related conflicts began earlier. It involved the vast majority of the world's countries—including all of the great powers—eventually forming two opposing military alliances, the Allies and the Axis. It was the most widespread war in history, and directly involved more than 100 million people from over 30 countries. It was marked by mass deaths of civilians, including the Holocaust and the bombing of industrial and population centres, which made World War II the deadliest conflict in human history. From late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered or controlled much of continental Europe, and formed the Axis alliance with Italy and Japan. Under the Molotov–Ribbentrop Pact of August 1939, Germany and the Soviet Union partitioned and annexed territories of their European neighbours: Poland, Finland, Romania and the Baltic states. In December 1941, Japan attacked the United States and European colonies in the Pacific Ocean, and quickly conquered much of the Western Pacific. The Axis advance halted in 1942 when Japan lost the critical Battle of Midway, near Hawaii. In 1944, the Western Allies invaded German-occupied France, while the Soviet Union regained all of its territorial losses and invaded Germany and its allies. During 1944 and 1945 the Japanese suffered major reverses in mainland Asia in South Central China and Burma, while the Allies crippled the Japanese Navy; this ended the war in Asia, cementing the total victory of the Allies. World War II altered the political alignment and social structure of the world, and the United Nations was established to foster international co-operation and prevent future conflicts. 
The victorious great powers—the United States, the Soviet Union, China, the United Kingdom and France—became the permanent members of the United Nations Security Council. The Soviet Union and the United States emerged as rival superpowers, setting the stage for the Cold War, which lasted for the next 46 years. Meanwhile, the influence of European great powers waned, while the decolonisation of Asia and Africa began. Most countries whose industries had been damaged moved towards economic recovery, and political integration, especially in Europe, emerged as an effort to end pre-war enmities. The start of the war in Europe is generally held to be 1 September 1939, beginning with the German invasion of Poland; Britain and France declared war on Germany two days later. The dates for the beginning of war in the Pacific include the start of the Second Sino-Japanese War on 7 July 1937, or even the Japanese invasion of Manchuria on 19 September 1931. Others follow the British historian A. J. P. Taylor, who held that the Sino-Japanese War and the war in Europe and its colonies occurred simultaneously; this article uses the conventional dating. Other starting dates sometimes used for World War II include the Italian invasion of Abyssinia on 3 October 1935. The British historian Antony Beevor views the beginning of World War II as the Battles of Khalkhin Gol fought between Japan and the forces of Mongolia and the Soviet Union from May to September 1939. The exact date of the war's end is also not universally agreed upon. It was generally accepted at the time that the war ended with the armistice of 14 August 1945, rather than the formal surrender of Japan
27.
Tjalling Koopmans
–
Tjalling Charles Koopmans was a Dutch American mathematician and economist, the joint winner with Leonid Kantorovich of the 1975 Nobel Memorial Prize in Economic Sciences. Koopmans was born in 's-Graveland, Netherlands. He began his university education at Utrecht University at seventeen, specializing in mathematics. Three years later, in 1930, he switched to theoretical physics. In 1933, he met Jan Tinbergen, the winner of the 1969 Nobel Memorial Prize in Economics, and moved to Amsterdam to study mathematical economics under him. In addition to mathematical economics, Koopmans extended his explorations to econometrics and statistics. In 1936 he graduated from Leiden University with a Ph.D. under the direction of Hendrik Kramers; the title of the thesis was Linear regression analysis of time series. Koopmans moved to the United States in 1940, where he worked for a while for a government body in Washington, D.C. In 1946, he became a citizen of the United States. In 1948, he was elected as a Fellow of the American Statistical Association, and he continued to publish on the economics of optimal growth and activity analysis. Koopmans' early works on the Hartree–Fock theory are associated with Koopmans' theorem. Koopmans was awarded his Nobel memorial prize for his contributions to the field of resource allocation, specifically the theory of optimal use of resources. Koopmans was a son of Sjoerd Koopmans and Wytske van der Zee. One of Sjoerd Koopmans' sisters, Gatske Koopmans, married Symon van der Meer; their son Pieter van der Meer was the father of Nobel Prize winner Simon van der Meer. His writings include Serial Correlation and Quadratic Forms in Normal Variables; On the Description and Comparison of Economic Systems; and the Nobel Memorial Lecture, Concepts of Optimality and Their Uses. See also Hughes Hallett, Andrew J., Econometrics and the Theory of Economic Policy: The Tinbergen–Theil Contributions 40 Years On. 
Testing Residuals from Least Squares Regressions for Being Generated by the Gaussian Random Walk. See also the biography of Tjalling Koopmans from the Institute for Operations Research and the Management Sciences
28.
Nobel Memorial Prize in Economic Sciences
–
The prize was established in 1968 by a donation from Sweden's central bank, Sveriges Riksbank, on the bank's 300th anniversary. Although it is not one of the prizes that Alfred Nobel established in his will in 1895, laureates are announced with the other Nobel Prize laureates, and receive the award at the same ceremony. Laureates in the Memorial Prize in Economics are selected by the Royal Swedish Academy of Sciences. It was first awarded in 1969 to the Dutch and Norwegian economists Jan Tinbergen and Ragnar Frisch, for having developed and applied dynamic models for the analysis of economic processes. An endowment in perpetuity from Sveriges Riksbank pays the Nobel Foundation's administrative expenses associated with the prize. Since 2012, the monetary portion of the Prize in Economics has totalled 8 million Swedish kronor, equivalent to the amount given for the original Nobel Prizes. The Prize in Economics is not one of the original Nobel Prizes created by Alfred Nobel's will; however, the nomination process, selection criteria, and awards presentation of the Prize in Economic Sciences are performed in a manner similar to that of the Nobel Prizes, which honour those who, in the words of Nobel's will, "shall have conferred the greatest benefit on mankind". According to its website, the Royal Swedish Academy of Sciences administers a researcher exchange with academies in other countries and publishes six scientific journals. Members of the Academy and former laureates are authorised to nominate candidates, and all proposals and their supporting evidence must be received before February 1. The proposals are reviewed by the Prize Committee and specially appointed experts, and before the end of September the committee chooses potential laureates. If there is a tie, the chairman of the committee casts the deciding vote. Next, the potential laureates must be approved by the Royal Swedish Academy of Sciences. 
Members of the Ninth Class of the Academy vote in mid-October to determine the next laureate or laureates of the Prize in Economics. The first prize in economics was awarded in 1969 to Ragnar Frisch and Jan Tinbergen; in 2009, Elinor Ostrom became the first woman awarded the prize. The prize's broad definition of economic sciences makes it available to researchers in such topics as political science and psychology; moreover, the composition of the Economics Prize Committee changed to include two non-economists. This has not been confirmed by the Economics Prize Committee, and the members of the 2007 Economics Prize Committee were still dominated by economists, as the secretary and four of the five members were professors of economics. Some critics argue that the prestige of the Prize in Economics derives in part from its association with the Nobel Prizes; among them is the Swedish human rights lawyer Peter Nobel, a great-grandson of Ludvig Nobel. Nobel criticizes the awarding institution for misusing his family's name, explaining that Alfred Nobel despised people who cared more about profits than society's well-being
29.
Frank Lauren Hitchcock
–
Frank Lauren Hitchcock was an American mathematician and physicist known for his formulation of the transportation problem in 1941. Frank did his preparatory study at Phillips Andover Academy. He entered Harvard University and completed his bachelor's degree in 1896; he then began teaching, first in Paris and then at Kenyon College in Gambier, Ohio. From 1904 to 1906 he taught chemistry at North Dakota State University. Hitchcock then returned to Massachusetts, began to teach at the Massachusetts Institute of Technology, and studied at the graduate level at Harvard. In 1910 he obtained a Ph.D. Hitchcock stayed at MIT until retirement, publishing his analysis of optimal distribution in 1941. Frank Hitchcock was descended from New England forebears; his mother was Susan Ida Porter and his father was Elisha Pike Hitchcock. His parents married on June 27, 1866, and Frank was born March 6, 1875, in New York City. He had two sisters, Mary E. Hitchcock and Viola M. Hitchcock, and two brothers, George P. Hitchcock and Ernest Van Ness Hitchcock. Although Frank was born in New York City, he was raised in Pittsford, Vermont. Frank married Margaret Johnson Blakely in Paris, France on May 25, 1899, and they had three children: Lauren Blakely, John Edward, and George Blakely (January 12, 1910). At the time of his death Frank had 11 grandchildren and 6 great-grandsons. His publications include: 1910, Vector Functions of a Point. 1915, A Classification of Quadratic Vector Functions, Proceedings of the National Academy of Sciences of the United States of America 1, 177 to 183. 1917, On the simultaneous formulation of two vector functions, Proceedings of the Royal Irish Academy Section A 34, 1 to 10. 1920, A study of the vector product Vφαθβ, Proceedings of the Royal Irish Academy Section A 35. 1920, A Thermodynamic Study of Electrolytic Solutions, Proceedings of the National Academy of Sciences of the United States of America 6, 186 to 197. 
1920, An Identical Relation Connecting Seven Vectors. 1921, The Axes of a Quadratic Vector, Proceedings AAAS 56, 331 to 351. 1921, with Norbert Wiener, A New Vector Method in Integral Equations. 1923, On Double Polyadics, with Application to the Linear Matrix Equation, Proceedings AAAS 58, 355 to 395. 1923, Identities Satisfied by Algebraic Point Functions in N-space, Proceedings AAAS 58, 399 to 421. 1923, with Clark S. Robinson, Differential Equations in Applied Chemistry, John Wiley & Sons, now available from Archive.org. 1923, A Method for the Numerical Solution of Integral Equations. 1924, The Coincident Points of Two Algebraic Transformations. 1922, A Solution of the Linear Matrix Equation by Double Multiplication. Obituary: Dr. Frank L. Hitchcock, Mathematician, Professor Emeritus at M.I.T.
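Hitchcock's transportation problem, minimizing total shipping cost from suppliers to consumers subject to supply and demand constraints, is now a textbook linear program. As a sketch, the tiny balanced instance below (two factories, two markets, with made-up costs, supplies and demands of my own) can be solved with SciPy's general-purpose LP solver:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: 2 factories, 2 markets (illustrative numbers only).
cost = np.array([[2.0, 3.0],    # cost[i][j] = cost of shipping one unit
                 [4.0, 1.0]])   # from factory i to market j
supply = [20, 30]
demand = [25, 25]               # balanced: total supply == total demand

c = cost.ravel()                # decision variables x_ij, flattened row-major
A_eq = [
    [1, 1, 0, 0],               # factory 0 ships out exactly its supply
    [0, 0, 1, 1],               # factory 1 ships out exactly its supply
    [1, 0, 1, 0],               # market 0 receives exactly its demand
    [0, 1, 0, 1],               # market 1 receives exactly its demand
]
b_eq = supply + demand          # list concatenation: [20, 30, 25, 25]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
print(res.fun)                  # minimum total shipping cost: 85.0
```

An unbalanced instance (total supply ≠ total demand) is usually handled by adding a dummy factory or market with zero shipping costs.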
30.
Simplex algorithm
–
In mathematical optimization, Dantzig's simplex algorithm is a popular algorithm for linear programming. The name of the algorithm is derived from the concept of a simplex and was suggested by T. S. Motzkin. Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial cones; the simplicial cones in question are the corners of a geometric object called a polytope. The shape of this polytope is defined by the constraints applied to the objective function. There is a straightforward process to convert any linear program into one in standard form, so this results in no loss of generality. In geometric terms, the feasible region defined by all values of x such that A x ≤ b and x_i ≥ 0 is a convex polytope. An extreme point or vertex of this polytope is known as a basic feasible solution. It can be shown that for a linear program in standard form, if the objective function has a maximum value on the feasible region, then it attains this value at (at least) one of the extreme points. The simplex algorithm applies this insight by walking along edges of the polytope to extreme points with greater and greater objective values; this continues until the maximum value is reached or an unbounded edge is visited, concluding that the problem has no bounded solution. The solution of a linear program is accomplished in two steps. In the first step, known as Phase I, a starting extreme point is found. Depending on the nature of the program this may be trivial. The possible results of Phase I are either that a basic feasible solution is found or that the feasible region is empty; in the latter case the program is called infeasible. In the second step, Phase II, the simplex algorithm is applied using the basic feasible solution found in Phase I as a starting point. The possible results from Phase II are either an optimum basic feasible solution or an infinite edge on which the objective function is unbounded above. George Dantzig worked on planning methods for the US Army Air Force during World War II using a desk calculator. During 1946 his colleague challenged him to mechanize the planning process in order to entice him into not taking another job. 
Dantzig formulated the problem as linear inequalities inspired by the work of Wassily Leontief; however, Dantzig's core insight was to realize that most such ground rules can be translated into a linear objective function that needs to be maximized. Development of the simplex method was evolutionary and happened over a period of about a year
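The pivoting procedure sketched above (start from a basic feasible solution, move along improving edges until no reduced cost is negative) can be illustrated with a minimal tableau implementation. This is my own illustrative sketch, restricted to problems of the form maximize c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0, so that the slack variables give an immediate starting vertex and Phase I is trivial:

```python
import numpy as np

def simplex_max(c, A, b):
    """Maximize c@x subject to A@x <= b, x >= 0, assuming b >= 0
    (so slack variables give an immediate basic feasible solution).
    A minimal tableau sketch, not a production solver."""
    m, n = A.shape
    # Tableau: [A | I | b] with objective row [-c | 0 | 0] at the bottom.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + m))          # slack variables start basic
    while True:
        j = int(np.argmin(T[-1, :-1]))     # most negative reduced cost
        if T[-1, j] >= -1e-12:             # optimal: no improving column
            break
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-12 else np.inf
                  for i in range(m)]
        i = int(np.argmin(ratios))         # minimum-ratio (leaving row)
        if ratios[i] == np.inf:
            raise ValueError("objective unbounded along an edge")
        T[i] /= T[i, j]                    # pivot: normalize entering row
        for k in range(m + 1):
            if k != i:
                T[k] -= T[k, j] * T[i]     # eliminate column j elsewhere
        basis[i] = j
    x = np.zeros(n + m)
    for i, bi in enumerate(basis):
        x[bi] = T[i, -1]
    return x[:n], T[-1, -1]                # solution and optimal value

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0
x, val = simplex_max(np.array([3.0, 2.0]),
                     np.array([[1.0, 1.0], [1.0, 3.0]]),
                     np.array([4.0, 6.0]))
print(x, val)   # optimum at (4, 0) with value 12
```

Real implementations add anti-cycling rules (e.g. Bland's rule) and a Phase I procedure for general constraints.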
31.
George Dantzig
–
George Bernard Dantzig was an American mathematical scientist who made important contributions to operations research, computer science, economics, and statistics. Dantzig is known for his development of the simplex algorithm, an algorithm for solving linear programming problems. In statistics, Dantzig solved two open problems in statistical theory, which he had mistaken for homework after arriving late to a lecture by Jerzy Neyman. Dantzig was Professor Emeritus of Transportation Sciences and Professor of Operations Research at Stanford. Born in Portland, Oregon, George Bernard Dantzig was named after George Bernard Shaw, the Irish writer. His father, Tobias Dantzig, was a Baltic German mathematician and linguist. Dantzig's parents met during their study at the Sorbonne in Paris, where Tobias studied mathematics under Henri Poincaré, after whom Dantzig's brother was named. The Dantzigs immigrated to the United States, where they settled in Portland. Early in the 1920s the Dantzig family moved from Baltimore to Washington. George Dantzig received his B.S. from the University of Maryland in 1936 in mathematics and physics, which is part of the University of Maryland College of Computer, Mathematical, and Natural Sciences. He earned his master's degree in mathematics from the University of Michigan in 1938. With the outbreak of World War II, Dantzig took a leave of absence from the doctoral program at Berkeley to join the U.S. Air Force Office of Statistical Control. In 1946, he returned to Berkeley to complete the requirements of his program. Although he had a faculty offer from Berkeley, he returned to the Air Force as mathematical advisor to the comptroller. In 1952 Dantzig joined the mathematics division of the RAND Corporation. By 1960 he became a professor in the Department of Industrial Engineering at UC Berkeley, and in 1966 he joined the Stanford faculty as Professor of Operations Research and of Computer Science. 
A year later, the Program in Operations Research became a full-fledged department, and in 1973 he founded the Systems Optimization Laboratory there. On a sabbatical leave that year, he headed the Methodology Group at the International Institute for Applied Systems Analysis in Laxenburg, Austria. Later he became the C. A. Criley Professor of Transportation Sciences at Stanford, and kept going well beyond his mandatory retirement in 1985. He was a member of the National Academy of Sciences and the National Academy of Engineering. The Mathematical Programming Society honored Dantzig by creating the George B. Dantzig Prize, bestowed every three years since 1982 on one or two people who have made a significant impact in the field of mathematical programming. Dantzig died on May 13, 2005, in his home in Stanford, California, of complications from diabetes and cardiovascular disease. Dantzig's seminal work allows the airline industry, for example, to schedule crews; based on his work, tools have been developed that shipping companies use to determine how many planes they need and where their delivery trucks should be deployed. His work is used in manufacturing, revenue management, telecommunications, advertising, architecture and circuit design. An event in Dantzig's life became the origin of a famous story in 1939, while he was a graduate student at UC Berkeley. Near the beginning of a class for which Dantzig was late, professor Jerzy Neyman wrote two examples of famously unsolved statistics problems on the blackboard. When Dantzig arrived, he assumed that the two problems were a homework assignment and wrote them down
32.
Game theory
–
Game theory is the study of mathematical models of conflict and cooperation between intelligent rational decision-makers. Game theory is used in economics, political science, and psychology, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers. Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets. His paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility. This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields, with the Nobel Memorial Prize in Economic Sciences going to game theorist Jean Tirole in 2014; John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern, mathematical game theory. The first known discussion of game theory occurred in a letter written by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat, in 1713. In this letter, Waldegrave provides a mixed strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation. 
In 1913 Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels (On an Application of Set Theory to the Theory of the Game of Chess), which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems. The Danish mathematician Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was proved false. Game theory did not really exist as a unique field until John von Neumann published a paper in 1928, whose proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets; this paper was followed by his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern
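The zero-sum setting mentioned above has a concrete computational core: for a 2×2 game with no saddle point, the mixed-strategy equilibrium and the game's minimax value have a simple closed form. A sketch, using matching pennies as the worked example (the function and its naming are my own, not from the article):

```python
def solve_2x2_zero_sum(a, b, c, d):
    """Mixed-strategy solution of a 2x2 zero-sum game with payoff
    matrix [[a, b], [c, d]] for the row player, assuming the game
    has no saddle point (a genuinely mixed strategy is required).
    Returns (p, q, v): the row player plays row 1 with probability p,
    the column player plays column 1 with probability q, and v is
    the minimax value of the game."""
    denom = (a - b) + (d - c)        # nonzero when no saddle point exists
    p = (d - c) / denom              # makes both columns equally good for the opponent
    q = (d - b) / denom              # makes both rows equally good for the opponent
    v = (a * d - b * c) / denom      # expected payoff at equilibrium
    return p, q, v

# Matching pennies: row player's payoffs are [[1, -1], [-1, 1]].
p, q, v = solve_2x2_zero_sum(1, -1, -1, 1)
print(p, q, v)   # each player mixes 50/50; the value of the game is 0
```

Larger zero-sum games are solved the same way in principle, but via linear programming rather than a closed form.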
33.
Abundance of the chemical elements
–
The abundance of a chemical element is a measure of the occurrence of the element relative to all other elements in a given environment. Abundance is measured in one of three ways: by mass fraction, by mole fraction, or by volume fraction; most abundance values in this article are given as mass fractions. For example, the abundance of oxygen in water can be measured in two ways: the mass fraction is about 89%, because that is the fraction of water's mass which is oxygen, while the mole fraction is about 33.3%, because only 1 atom of the 3 in a water molecule is oxygen. The abundance of chemical elements in the universe is dominated by the large amounts of hydrogen and helium which were produced in the Big Bang. Remaining elements, making up only about 2% of the universe, were produced by supernovae. Lithium, beryllium and boron are rare even though they are produced by nuclear fusion. The elements from carbon to iron are relatively more common in the universe because of the ease of making them in supernova nucleosynthesis. Elements of higher atomic number than iron become progressively more rare in the universe, because they increasingly absorb stellar energy in being produced. Elements with even atomic numbers are generally more common than their neighbors in the periodic table. The abundance of elements in the Sun and outer planets is similar to that in the universe. Due to solar heating, the elements of Earth and the rocky planets of the Solar System have undergone an additional depletion of volatile hydrogen, helium, neon and nitrogen. The crust, mantle, and core of the Earth show evidence of chemical segregation plus some sequestration by density: lighter silicates of aluminum are found in the crust, with more magnesium silicate in the mantle, while metallic iron and nickel compose the core. 
The abundance of elements in specialized environments, such as atmospheres or oceans, can differ from these universal proportions. The elements—that is, ordinary matter made of protons, neutrons, and electrons—are only a small part of the content of the Universe. Cosmological observations suggest that only 4.6% of the universe's energy comprises the visible baryonic matter that constitutes stars and planets; the rest is made up of dark energy and dark matter. Hydrogen is the most abundant element in the Universe and helium is second; however, after this, the rank of abundance does not continue to correspond to the atomic number: oxygen has abundance rank 3, but atomic number 8. All others are substantially less common. Heavier elements were mostly produced much later, inside of stars. Hydrogen and helium are estimated to make up roughly 74% and 24% of all baryonic matter in the universe respectively. Despite comprising only a small fraction of the universe, the remaining heavy elements can greatly influence astronomical phenomena
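The water example above is a quick arithmetic check: the mass fraction weights each atom by its atomic mass, while the mole fraction simply counts atoms. A short sketch of the calculation:

```python
# Atomic masses in unified atomic mass units (u).
M_H, M_O = 1.008, 15.999

# Water, H2O: two hydrogen atoms and one oxygen atom per molecule.
mass_fraction_O = M_O / (2 * M_H + M_O)   # fraction of water's mass that is oxygen
mole_fraction_O = 1 / 3                   # 1 of the 3 atoms is oxygen

print(round(mass_fraction_O, 3))   # ~0.888, i.e. about 89% by mass
print(round(mole_fraction_O, 3))   # ~0.333, i.e. about 33% by atom count
```

The same two-line pattern applies to any compound: weight by atomic mass for mass fraction, count atoms for mole fraction.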
34.
Observable universe
–
There are at least two trillion galaxies in the observable universe, containing more stars than all the grains of sand on planet Earth. Assuming the universe is isotropic, the distance to the edge of the observable universe is roughly the same in every direction; that is, the observable universe is a spherical volume centered on the observer. Every location in the Universe has its own observable universe, which may or may not overlap with the one centered on Earth. The word observable used in this sense does not depend on whether modern technology actually permits detection of radiation from an object in this region. It simply indicates that it is possible in principle for light or other signals from the object to reach an observer on Earth. In practice, we can see light only from as far back as the time of photon decoupling in the recombination epoch, when particles were first able to emit photons that were not quickly re-absorbed by other particles; before then, the Universe was filled with a plasma that was opaque to photons. The detection of gravitational waves indicates there is now a possibility of detecting signals from before the recombination epoch. The surface of last scattering is the collection of points in space at the distance from which photons emitted at the time of photon decoupling just reach us today. These are the photons we detect today as cosmic microwave background radiation; however, with future technology, it may be possible to observe the still older relic neutrino background, or even more distant events via gravitational waves. It is estimated that the diameter of the observable universe is about 28.5 gigaparsecs. The total mass of matter in the universe can be calculated using the critical density. Some parts of the Universe are too far away for the light emitted since the Big Bang to have had time to reach Earth. In the future, light from distant galaxies will have had more time to travel, so additional regions will become observable. 
This fact can be used to define a type of cosmic event horizon whose distance from the Earth changes over time. Both popular and professional research articles in cosmology often use the term universe to mean observable universe. It is plausible that the galaxies within our observable universe represent only a fraction of the galaxies in the Universe. If the Universe is finite but unbounded, it is also possible that the Universe is smaller than the observable universe. In this case, what we take to be very distant galaxies may actually be duplicate images of nearby galaxies. It is difficult to test this hypothesis experimentally because different images of a galaxy would show different eras in its history, and consequently might appear quite different
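The 28.5-gigaparsec diameter quoted above is more commonly reported in light-years; the conversion is a one-line calculation (the conversion factor 1 pc ≈ 3.2616 ly is a standard value, not from the article):

```python
# Convert the observable universe's estimated diameter from
# gigaparsecs to light-years (1 parsec ≈ 3.2616 light-years).
LY_PER_PARSEC = 3.2616
diameter_gpc = 28.5
diameter_gly = diameter_gpc * LY_PER_PARSEC   # in billions of light-years

print(round(diameter_gly, 1))   # ≈ 93.0 billion light-years
```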
35.
Leonid Khachiyan
–
Leonid Genrikhovich Khachiyan was a Soviet mathematician of Armenian descent who taught computer science at Rutgers University. He was most famous for his ellipsoid algorithm for linear programming. Khachiyan was born in St. Petersburg and moved to Moscow with his parents at age 9; there he later earned a Ph.D. in computational mathematics in 1978. In 1982 he won the prestigious Fulkerson Prize from the Mathematical Programming Society and the American Mathematical Society for outstanding papers in the area of discrete mathematics. In 1989 he joined Cornell University's School of Operations Research and Industrial Engineering as a professor, and had been at Rutgers since 1990. He wrote a series of papers with Bahman Kalantari on various matrix scaling and balancing problems. Khachiyan is survived by his wife of 20 years and two daughters, who currently live in the United States. He is also survived by his father, a professor of theoretical mechanics, and his mother, a retired civil engineer. See also: In Memoriam, Leonid Khachiyan, from the Computer Science Department, Rutgers University; SIAM News, Leonid Khachiyan, 1952–2005: An Appreciation; The Mathematics Genealogy Project, Leonid Khachiyan
36.
Interior point method
–
Interior point methods are a certain class of algorithms that solve linear and nonlinear convex optimization problems. John von Neumann suggested an interior point method for linear programming, which was neither a polynomial-time method nor an efficient method in practice; in fact, it turned out to be slower than the commonly used simplex method. In 1984, Narendra Karmarkar developed a method for linear programming called Karmarkar's algorithm, which runs in provably polynomial time and is also very efficient in practice. It enabled solutions of linear programming problems which were beyond the capabilities of the simplex method. Contrary to the simplex method, it reaches a best solution by traversing the interior of the feasible region. The method can be generalized to convex programming based on a self-concordant barrier function used to encode the convex set; any convex optimization problem can be transformed into minimizing a linear function over a convex set by converting to the epigraph form. The idea of encoding the feasible set using a barrier and designing barrier methods was studied by Anthony V. Fiacco, Garth P. McCormick, and others in the early 1960s. These ideas were developed for general nonlinear programming, but they were later abandoned due to the presence of more competitive methods for this class of problems. Yurii Nesterov and Arkadi Nemirovski came up with a class of such barriers that can be used to encode any convex set; they guarantee that the number of iterations of the algorithm is bounded by a polynomial in the dimension. Khachiyan's earlier ellipsoid method was already a polynomial-time algorithm; however, it was too slow to be of practical interest. The class of primal-dual path-following interior point methods is considered the most successful, and Mehrotra's predictor-corrector algorithm provides the basis for most implementations of this class of methods. 
The primal-dual method's idea is easy to demonstrate for constrained nonlinear optimization. For simplicity, consider the all-inequality version of a nonlinear optimization problem: minimize f(x) subject to c_i(x) ≥ 0 for i = 1, …, m, with x ∈ R^n. The logarithmic barrier function associated with this problem is B(x, μ) = f(x) − μ ∑_{i=1}^{m} log(c_i(x)), where μ is a small positive scalar, sometimes called the barrier parameter. As μ converges to zero, the minimum of B should converge to a solution of the original problem, and we try to find those points for which the gradient of the barrier function is zero. Setting the gradient to zero gives g − A^T λ = 0, where g is the gradient of f, the matrix A is the Jacobian of the constraints c(x), and λ is the vector of dual variables satisfying c_i(x) λ_i = μ for all i. The intuition behind this condition is that the gradient of f should lie in the subspace spanned by the constraints' gradients. Applying Newton's method to these optimality conditions yields an equation for the update (p_x, p_λ) at each iteration; because the condition λ ≥ 0 should be enforced at each step, this is done by choosing an appropriate step length α. See also: augmented Lagrangian method; penalty method; Karush–Kuhn–Tucker conditions. References: Bonnans, J. Frédéric; Gilbert, J. Charles; Lemaréchal, Claude; Sagastizábal, Claudia A., Numerical Optimization: Theoretical and Practical Aspects; Proceedings of the sixteenth annual ACM symposium on Theory of Computing (STOC '84), p. 302
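The central-path behaviour described above, where barrier minimizers converge to the constrained optimum as μ shrinks, can be seen numerically on a one-dimensional toy problem. Both the problem and the solver choice below are my own illustration, not from the article; for minimize f(x) = x subject to x − 1 ≥ 0, the barrier minimizer can be computed by hand as x = 1 + μ, so the loop should approach the true optimum x = 1:

```python
import math
from scipy.optimize import minimize_scalar

# Toy problem: minimize f(x) = x subject to c(x) = x - 1 >= 0.
# The constrained optimum is x = 1.
f = lambda x: x
c = lambda x: x - 1.0

path = []
for mu in [1.0, 0.1, 0.01, 0.001]:
    # Log-barrier function B(x) = f(x) - mu*log(c(x)); its minimizer
    # for this problem is x = 1 + mu (set B'(x) = 1 - mu/(x-1) = 0).
    B = lambda x, mu=mu: f(x) - mu * math.log(c(x))
    res = minimize_scalar(B, bounds=(1.0 + 1e-9, 10.0), method="bounded")
    path.append((mu, res.x))
    print(mu, round(res.x, 4))  # minimizers trace the central path toward 1
```

Production interior point methods replace the inner one-dimensional search with damped Newton steps on the primal-dual conditions, but the μ-shrinking structure is the same.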
37.
Operations research
–
Operations research, or operational research in British usage, is a discipline that deals with the application of advanced analytical methods to help make better decisions. Operational analysis is used in the British military as an intrinsic part of capability development and management. In particular, operational analysis forms part of the Combined Operational Effectiveness and Investment Appraisals. Operations research is often considered to be a sub-field of applied mathematics, and the terms management science and decision science are sometimes used as synonyms. Operations research is concerned with determining the maximum or minimum of some real-world objective. Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries. Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, OR also has strong ties to computer science. In the decades after the two world wars, the techniques were more widely applied to problems in business and industry. Early work in operational research was carried out by individuals such as Charles Babbage, and Percy Bridgman brought operational research to bear on problems in physics in the 1920s. Modern operational research originated at the Bawdsey Research Station in the UK in 1937 and was the result of an initiative of the station's superintendent, A. P. Rowe. Rowe conceived the idea as a means to analyse and improve the working of the UK's early warning radar system, Chain Home (CH). Initially, he analysed the operating of the radar equipment and its communication networks, expanding later to include the operating personnel's behaviour. This revealed unappreciated limitations of the CH network and allowed remedial action to be taken. Scientists in the United Kingdom, including Patrick Blackett, Cecil Gordon and Solly Zuckerman, applied these methods during the Second World War; other names for the field included operational analysis and quantitative management. 
During the Second World War close to 1,000 men and women in Britain were engaged in operational research, and about 200 operational research scientists worked for the British Army. Patrick Blackett worked for several different organizations during the war: after first working with RAF Coastal Command, he moved from the RAE to the Navy in 1941. Blackett's team at Coastal Command's Operational Research Section included two future Nobel prize winners and many other people who went on to be pre-eminent in their fields, and they undertook a number of analyses that aided the war effort. Convoys travel at the speed of the slowest member, so small convoys can travel faster; it was also argued that small convoys would be harder for German U-boats to detect.
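The claim that small convoys travel faster follows from the convoy moving at the speed of its slowest ship: the minimum of fewer random ship speeds is larger on average. A minimal simulation, using entirely made-up speed figures (uniform between 8 and 16 knots), illustrates the effect:

```python
import random

random.seed(0)

def avg_convoy_speed(n_ships, trials=10_000):
    # A convoy moves at the speed of its slowest ship, so each convoy's
    # speed is the minimum of its ships' individual speeds; average this
    # over many simulated convoys.
    return sum(min(random.uniform(8, 16) for _ in range(n_ships))
               for _ in range(trials)) / trials

small, large = avg_convoy_speed(5), avg_convoy_speed(40)
print(small > large)  # True: the 5-ship convoy is faster on average
```

The historical analyses weighed this against escort efficiency and detection risk; this sketch only demonstrates the speed argument itself.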
38.
Microeconomics
–
One goal of microeconomics is to analyze the market mechanisms that establish relative prices among goods and services and allocate limited resources among alternative uses. Microeconomics shows conditions under which free markets lead to desirable allocations; it also analyzes market failure, where markets fail to produce efficient results. Microeconomics also deals with the effects of economic policies on particular aspects of the economy. Particularly in the wake of the Lucas critique, much of modern macroeconomic theory has been built upon microfoundations, i.e. based upon basic assumptions about micro-level behavior. Microeconomic theory typically begins with the study of a single rational individual; to economists, rationality means an individual possesses stable preferences that are both complete and transitive. The technical assumption that preference relations are continuous is needed to ensure the existence of a utility function. Microeconomic theory progresses by defining a competitive budget set, which is a subset of the consumption set. It is at this point that economists make the technical assumption that preferences are locally non-satiated (LNS); without the assumption of LNS there is no guarantee that an individual would maximize utility. With the necessary tools and assumptions in place, the utility maximization problem is developed; it is the heart of consumer theory. The utility maximization problem attempts to explain the action axiom by imposing rationality axioms on consumer preferences, and it serves not only as the mathematical foundation of consumer theory but as a metaphysical explanation of it as well. That is, the utility maximization problem is used by economists to explain not only what choices individuals make but how they make them. It is a constrained optimization problem in which an individual seeks to maximize utility subject to a budget constraint.
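The utility maximization problem can be sketched for one standard textbook case, a Cobb-Douglas utility function u(x, y) = x^a * y^(1-a); this choice of utility function, and the function name and parameter values below, are illustrative assumptions. For this form the constrained optimum has a well-known closed-form solution: the consumer spends fraction a of income on the first good and 1-a on the second.

```python
def cobb_douglas_demand(a, m, px, py):
    # Maximize u(x, y) = x**a * y**(1 - a) subject to px*x + py*y <= m.
    # With LNS the budget constraint binds, and the Cobb-Douglas optimum
    # spends share a of income m on good x and share 1-a on good y.
    return a * m / px, (1 - a) * m / py

x_star, y_star = cobb_douglas_demand(a=0.5, m=100, px=2, py=5)
print(x_star, y_star)  # 25.0 10.0 -- half the budget goes to each good
```

Note that the solution exhausts the budget (2*25 + 5*10 = 100), as local non-satiation guarantees.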
Economists use the extreme value theorem to guarantee that a solution to the utility maximization problem exists: since the budget set is both bounded and closed, hence compact, a continuous utility function attains a maximum on it. Economists call the solution to the utility maximization problem a Walrasian demand function or correspondence. The utility maximization problem has so far been developed by taking consumer tastes as the primitive; an alternative way to develop microeconomic theory is by taking consumer choice as the primitive. This model of microeconomic theory is referred to as revealed preference theory. The theory of supply and demand usually assumes that markets are perfectly competitive. This implies that there are many buyers and sellers in the market and none of them have the capacity to significantly influence the prices of goods. In many real-life transactions the assumption fails, because some individual buyers or sellers have the ability to influence prices; quite often, a sophisticated analysis is required to understand the demand-supply equation of a good. However, the model works well in situations meeting these assumptions.
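Under perfect competition, price is determined where quantity demanded equals quantity supplied. A minimal sketch with made-up linear demand and supply curves (the coefficients are illustrative assumptions, not from the text) solves for that market-clearing price:

```python
# Hypothetical linear curves: demand Qd = 100 - 2p, supply Qs = 10 + 4p.
def demand(p):
    return 100 - 2 * p

def supply(p):
    return 10 + 4 * p

# Equilibrium clears the market: 100 - 2p = 10 + 4p  ->  6p = 90  ->  p = 15.
p_eq = 90 / 6
print(p_eq, demand(p_eq), supply(p_eq))  # 15.0 70.0 70.0
```

No individual buyer or seller in this model can move p_eq; that price-taking behaviour is exactly what the perfect-competition assumption encodes.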