1.
Polygon
–
In elementary geometry, a polygon /ˈpɒlɪɡɒn/ is a plane figure that is bounded by a finite chain of straight line segments closing in a loop to form a closed polygonal chain or circuit. These segments are called its edges or sides, and the points where two edges meet are its vertices or corners. The interior of the polygon is sometimes called its body. An n-gon is a polygon with n sides; for example, a triangle is a 3-gon. A polygon is a 2-dimensional example of the more general polytope in any number of dimensions. The basic geometrical notion of a polygon has been adapted in various ways to suit particular purposes. Mathematicians are often concerned only with the bounding closed polygonal chain and with simple polygons, which do not self-intersect, and they often define a polygon accordingly. A polygonal boundary may be allowed to intersect itself, creating star polygons; these and other generalizations of polygons are described below. The word polygon derives from the Greek adjective πολύς (much, many), and it has been suggested that γόνυ (knee) may be the origin of "gon". Polygons are primarily classified by the number of sides. Polygons may also be characterized by their convexity or type of non-convexity:
- Convex: any line drawn through the polygon meets its boundary exactly twice. As a consequence, all its interior angles are less than 180°; equivalently, any line segment with endpoints on the boundary passes through only interior points between its endpoints.
- Non-convex: a line may be found which meets its boundary more than twice; equivalently, there exists a line segment between two boundary points that passes outside the polygon.
- Simple: the boundary of the polygon does not cross itself.
- Concave: non-convex and simple; there is at least one interior angle greater than 180°.
- Star-shaped: the whole interior is visible from at least one point. The polygon must be simple, and may be convex or concave.
- Self-intersecting: the boundary of the polygon crosses itself.
Branko Grünbaum calls these coptic, though this term does not seem to be widely used.
- Star polygon: a polygon which self-intersects in a regular way. A polygon cannot be both a star and star-shaped.
- Equiangular: all corner angles are equal.
- Cyclic: all corners lie on a single circle, called the circumcircle.
- Isogonal or vertex-transitive: all corners lie within the same symmetry orbit.
- Regular: the polygon is both cyclic and equiangular.
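The convex/non-convex distinction above can be tested computationally for a simple polygon given by its ordered vertices: a polygon is convex exactly when all consecutive turns along its boundary have the same orientation. The helper below is a hypothetical illustration, not something from the text.

```python
# Classify a simple polygon as convex or non-convex by checking that all
# cross products of consecutive edge vectors share the same sign.

def is_convex(vertices):
    """vertices: list of (x, y) tuples in order around a simple polygon."""
    n = len(vertices)
    signs = set()
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        x3, y3 = vertices[(i + 2) % n]
        cross = (x2 - x1) * (y3 - y2) - (y2 - y1) * (x3 - x2)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) <= 1  # all turns in the same direction

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
arrow = [(0, 0), (2, 1), (0, 2), (1, 1)]  # concave: one interior angle > 180°
print(is_convex(square))  # True
print(is_convex(arrow))   # False
```

A concave polygon such as the arrow shape produces turns of both orientations, which is exactly the "interior angle greater than 180°" case in the classification.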
2.
Polytope
–
In elementary geometry, a polytope is a geometric object with flat sides, which may exist in any general number of dimensions n as an n-dimensional polytope or n-polytope. For example, a polygon is a 2-polytope and a three-dimensional polyhedron is a 3-polytope. Polytopes in more than three dimensions were first discovered by Ludwig Schläfli; the German term Polytop was coined by the mathematician Reinhold Hoppe, and was introduced to English mathematicians as polytope by Alicia Boole Stott. The term polytope is nowadays a broad term that covers a wide class of objects. Many of the definitions in use are not equivalent, resulting in different sets of objects being called polytopes, and they represent different approaches to generalizing the convex polytopes to include other objects with similar properties. In one approach, a polytope may be regarded as a tessellation or decomposition of some given manifold; an example of this approach defines a polytope as a set of points that admits a simplicial decomposition. However, this definition does not allow star polytopes with interior structures. The discovery of star polyhedra and other unusual constructions led to the idea of a polyhedron as a bounding surface, ignoring its interior. In this light, a polyhedron is understood as a surface whose faces are polygons, and a 4-polytope as a hypersurface whose facets are polyhedra; this approach is used for example in the theory of abstract polytopes. In certain fields of mathematics, the terms polytope and polyhedron are used in a different sense, and this terminology is typically confined to polytopes and polyhedra that are convex. A polytope comprises elements of different dimensionality such as vertices, edges, faces and cells; terminology for these is not fully consistent across different authors. For example, some authors use face to refer to an (n − 1)-dimensional element while others use face to denote a 2-face specifically; authors may use j-face or j-facet to indicate an element of j dimensions.
Some use edge to refer to a ridge, while H. S. M. Coxeter uses cell to denote an (n − 1)-dimensional element; the terms adopted in this article are given in the table below. An n-dimensional polytope is bounded by a number of (n − 1)-dimensional facets. These facets are themselves polytopes, whose facets are (n − 2)-dimensional ridges of the original polytope. Every ridge arises as the intersection of two facets. Ridges are once again polytopes whose facets give rise to (n − 3)-dimensional boundaries of the original polytope, and so on. These bounding sub-polytopes may be referred to as faces, or specifically j-dimensional faces or j-faces. A 0-dimensional face is called a vertex, and consists of a single point. A 1-dimensional face is called an edge, and consists of a line segment. A 2-dimensional face consists of a polygon, and a 3-dimensional face, sometimes called a cell, consists of a polyhedron. The convex polytopes are the simplest kind of polytopes, and form the basis for several different generalizations of the concept of polytopes. A convex polytope is defined as the intersection of a set of half-spaces. This definition allows a polytope to be neither bounded nor finite; polytopes are defined in this way, e.g. in linear programming.
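The j-face terminology can be made concrete by counting faces of a familiar polytope. For an n-dimensional hypercube the number of j-faces is given by the standard formula C(n, j)·2^(n−j); this sketch applies it to the ordinary cube, where the counts match the vertex/edge/face/cell terminology above. The formula is standard combinatorics, not taken from the text.

```python
from math import comb

# Count the j-dimensional faces of an n-dimensional hypercube:
# an n-cube has C(n, j) * 2**(n - j) faces of dimension j.

def cube_faces(n, j):
    return comb(n, j) * 2 ** (n - j)

# A 3-cube: 8 vertices (0-faces), 12 edges (1-faces),
# 6 square faces (2-faces), and 1 cell (the 3-face itself).
print([cube_faces(3, j) for j in range(4)])  # [8, 12, 6, 1]
```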
3.
Level set
–
In mathematics, a level set of a real-valued function f of n real variables is a set of the form Lc(f) = {(x1, …, xn) | f(x1, …, xn) = c}, that is, a set where the function takes on a given constant value c. When the number of variables is two, a level set is generically a curve, called a level curve, contour line, or isoline. So a level curve is the set of all real-valued solutions of an equation in the two variables x1 and x2. When n = 3, a level set is called a level surface (or isosurface), and for higher values of n the level set is a level hypersurface. So a level surface is the set of all real-valued roots of an equation in three variables x1, x2 and x3, and a level hypersurface is the set of all real-valued roots of an equation in n variables. A level set is a special case of a fiber. Level sets show up in many applications, often under different names. For example, an implicit curve is a level curve, which is considered independently of its neighbor curves. Analogously, a level surface is sometimes called an implicit surface or an isosurface. The name isocontour is also used, which means a contour of equal height. For example, given a specific radius r, the equation of a circle x² + y² = r² defines an isocontour. If we choose r = 5 then our isovalue is c = 5² = 25, and all points (x, y) that evaluate to 25 constitute the isocontour. This means that they are a member of the level set. If a point evaluates to less than 25, the point is on the inside of the isocontour; if the result is greater than 25, it is on the outside. A second example is the logarithmically spaced level curve plot of Himmelblau's function shown in the figure. If the function f is differentiable, the gradient of f at a point is either zero, or perpendicular to the level set of f at that point. To understand what this means, imagine that two hikers are at the same location on a mountain.
One of them is bold, and she decides to go in the direction where the slope is steepest; the other one is more cautious, and he does not want to either climb or descend, choosing a path which will keep him at the same height. In our analogy, the above theorem says that the two hikers will depart in directions perpendicular to one another. A consequence of this theorem is that if f is differentiable, a level set is a hypersurface away from the critical points of f. At a critical point, a level set may be reduced to a point or may have a singularity such as a self-intersection point or a cusp. A set of the form Lc−(f) = {(x1, …, xn) | f(x1, …, xn) ≤ c} is called a sublevel set of f.
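The circle example above (isovalue c = 25 for f(x, y) = x² + y²) is easy to check numerically; this small sketch classifies points as inside, on, or outside the isocontour exactly as described in the text.

```python
# Classify points against the isocontour f(x, y) = x**2 + y**2 = 25
# (the circle of radius 5): inside, on, or outside.

def f(x, y):
    return x ** 2 + y ** 2

def classify(x, y, c=25):
    v = f(x, y)
    if v < c:
        return "inside"
    if v > c:
        return "outside"
    return "on the isocontour"

print(classify(3, 4))  # 9 + 16 = 25 -> "on the isocontour"
print(classify(1, 1))  # "inside"
print(classify(6, 0))  # "outside"
```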
4.
Polyhedron
–
In geometry, a polyhedron is a solid in three dimensions with flat polygonal faces, straight edges and sharp corners or vertices. The word polyhedron comes from the Classical Greek πολύεδρον, as poly- (many) + -hedron (base, seat). A convex polyhedron is the convex hull of finitely many points, not all on the same plane. Cubes and pyramids are examples of convex polyhedra. A polyhedron is a 3-dimensional example of the more general polytope in any number of dimensions. Convex polyhedra are well-defined, with several equivalent standard definitions; however, the formal mathematical definition of polyhedra that are not required to be convex has been problematic. Many definitions of polyhedron have been given within particular contexts, some more rigorous than others. Some of these definitions exclude shapes that have often been counted as polyhedra or include shapes that are often not considered as valid polyhedra. As Branko Grünbaum observed, "The Original Sin in the theory of polyhedra goes back to Euclid": the writers failed to define what are the polyhedra. Nevertheless, there is agreement that a polyhedron is a solid or surface that can be described by its vertices, edges and faces. Natural refinements of this definition require the solid to be bounded, to have a connected interior, and possibly also to have a connected boundary. However, the polyhedra defined in this way do not include the self-crossing star polyhedra, whose faces may not form simple polygons. Definitions based on the idea of a bounding surface rather than a solid are also common. If a planar part of such a surface is not itself a convex polygon, O'Rourke requires it to be subdivided into smaller convex polygons; Cromwell gives a similar definition but without the restriction of three edges per vertex. Again, this type of definition does not encompass the self-crossing polyhedra. However, there exist topological polyhedra that cannot be realized as acoptic polyhedra.
One modern approach is based on the theory of abstract polyhedra. These can be defined as partially ordered sets whose elements are the vertices, edges, and faces of a polyhedron. A vertex or edge element is less than an edge or face element when the vertex or edge is part of the edge or face. Additionally, one may include a special bottom element of this partial order and a top element representing the whole polyhedron. In some treatments these requirements are relaxed, to instead require only that sections between elements two levels apart have the structure of line segments. Geometric polyhedra, defined in other ways, can be described abstractly in this way. A realization of an abstract polyhedron is generally taken to be a mapping from the vertices of the abstract polyhedron to geometric points, such that the points of each face are coplanar. A geometric polyhedron can then be defined as a realization of an abstract polyhedron. Realizations that forgo the requirement of planarity, that impose additional requirements of symmetry, or that map the vertices to higher-dimensional spaces have also been considered. Unlike the solid-based and surface-based definitions, this works perfectly well for star polyhedra. However, without further restrictions, this definition allows degenerate or unfaithful polyhedra.
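The combinatorial (vertices/edges/faces) view of a polyhedron can be illustrated with a minimal sketch: describe a cube purely by which vertices belong to each face, derive the edge set, and verify Euler's formula V − E + F = 2, which every convex polyhedron satisfies. The vertex labelling is made up for the example.

```python
# A cube described combinatorially: each face is a tuple of vertex labels
# listed in order around the face.
faces = [
    (0, 1, 2, 3), (4, 5, 6, 7),  # bottom, top
    (0, 1, 5, 4), (1, 2, 6, 5),
    (2, 3, 7, 6), (3, 0, 4, 7),
]

vertices = {v for face in faces for v in face}
edges = set()
for face in faces:
    for i in range(len(face)):
        a, b = face[i], face[(i + 1) % len(face)]
        edges.add(frozenset((a, b)))  # undirected edge shared by two faces

V, E, F = len(vertices), len(edges), len(faces)
print(V, E, F, V - E + F)  # 8 12 6 2
```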
5.
Plane (geometry)
–
In mathematics, a plane is a flat, two-dimensional surface that extends infinitely far. A plane is the two-dimensional analogue of a point (zero dimensions), a line (one dimension) and three-dimensional space. When working exclusively in two-dimensional Euclidean space, the definite article is used, so the plane refers to the whole space. Many fundamental tasks in mathematics, geometry, trigonometry, graph theory and graphing are performed in a two-dimensional space, or, in other words, in the plane. Euclid set forth the first great landmark of mathematical thought, an axiomatic treatment of geometry. He selected a small core of undefined terms and postulates which he used to prove various geometrical statements, although the plane in its modern sense is not directly given a definition anywhere in the Elements. In his work Euclid never makes use of numbers to measure length or angle; in this way the Euclidean plane is not quite the same as the Cartesian plane. This section is concerned with planes embedded in three dimensions, specifically, in R3. In a Euclidean space of any number of dimensions, a plane is determined by any of the following: three non-collinear points, a line and a point not on that line, two distinct but intersecting lines, or two parallel lines. In three-dimensional space, a line is either parallel to a plane, intersects it at a single point, or is contained in the plane. Two distinct lines perpendicular to the same plane must be parallel to each other. Two distinct planes perpendicular to the same line must be parallel to each other. Specifically, let r0 be the position vector of some point P0 = (x0, y0, z0), and let n = (a, b, c) be a nonzero vector. The plane determined by the point P0 and the vector n consists of those points P, with position vector r, such that the vector drawn from P0 to P is perpendicular to n. Recalling that two vectors are perpendicular if and only if their dot product is zero, it follows that the plane can be described as the set of all points r such that n · (r − r0) = 0. Expanded this becomes a(x − x0) + b(y − y0) + c(z − z0) = 0, which is the point-normal form of the equation of a plane. This is just a linear equation ax + by + cz + d = 0, where d = −(ax0 + by0 + cz0), and this familiar equation for a plane is called the general form of the equation of the plane.
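The point-normal form n · (r − r0) = 0 translates directly into a membership test: a point lies on the plane exactly when that dot product vanishes. The sketch below uses made-up values for the point and normal.

```python
# Point-normal test: p lies on the plane through p0 with normal n
# exactly when n . (p - p0) = 0 (up to floating-point tolerance).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def on_plane(p, p0, n, eps=1e-9):
    return abs(dot(n, [a - b for a, b in zip(p, p0)])) < eps

p0 = (0, 0, 1)   # a point on the plane z = 1
n = (0, 0, 1)    # normal vector to that plane
print(on_plane((3, -2, 1), p0, n))  # True: the z-coordinate is 1
print(on_plane((0, 0, 2), p0, n))   # False
```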
6.
Mathematical model
–
A mathematical model is a description of a system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in the sciences and engineering disciplines. Physicists, engineers, statisticians, operations research analysts, and economists use mathematical models most extensively. A model may help to explain a system, to study the effects of different components, and to make predictions about behaviour. Mathematical models can take many forms, including dynamical systems, statistical models and differential equations. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. In the physical sciences, the traditional mathematical model contains four major elements: governing equations, defining equations, constitutive equations, and constraints. Mathematical models are composed of relationships and variables. Relationships can be described by operators, such as algebraic operators, functions, and differential operators. Variables are abstractions of system parameters of interest that can be quantified. A model is considered linear if all the operators in it exhibit linearity, and nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context. For example, in a statistical linear model, it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model.
If one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model. Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity.
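The context-dependence of "linear" mentioned above can be made concrete: the statistical model y = b0 + b1·x + b2·x² is quadratic in the predictor x, yet linear in the parameters b0, b1, b2, which is all a statistical linear model requires. The values below are made up for illustration.

```python
# A model that is nonlinear in the predictor x but linear in the parameters.

def predict(params, x):
    b0, b1, b2 = params
    return b0 + b1 * x + b2 * x ** 2

p, q = (1.0, 2.0, 3.0), (0.5, -1.0, 4.0)
x = 2.0

# Linearity in the parameters: predicting with the summed parameter vectors
# equals the sum of the separate predictions, even though the model is
# quadratic in x.
summed = tuple(a + b for a, b in zip(p, q))
print(predict(summed, x) == predict(p, x) + predict(q, x))  # True
```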
7.
Linear function
–
In linear algebra and functional analysis, a linear function is a linear map. In calculus, analytic geometry and related areas, a linear function is a polynomial of degree one or less. When the function is of one variable, it is of the form f(x) = ax + b. The graph of such a function of one variable is a nonvertical line; a is frequently referred to as the slope of the line, and b as the intercept. For a function f of any finite number of independent variables, the general formula is f(x1, …, xk) = b + a1x1 + … + akxk. A constant function is also considered linear in this context, as it is a polynomial of degree zero or is the zero polynomial; its graph, when there is only one independent variable, is a horizontal line. In this context, the other meaning (a linear map) may be referred to as a homogeneous linear function or a linear form, while in the context of linear algebra, the polynomial meaning is a kind of affine map. In linear algebra, a linear function is a map f between two vector spaces that preserves vector addition and scalar multiplication: f(x + y) = f(x) + f(y) and f(ax) = a f(x). Here a denotes a constant belonging to some field K of scalars and x and y are elements of a vector space. Some authors use linear function only for linear maps that take values in the scalar field; these are also called linear functionals. The linear functions of calculus qualify as linear maps when f(0) = 0, or, equivalently, when the constant term b is zero; geometrically, the graph of the function must pass through the origin.
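The distinction between the two meanings of "linear function" can be demonstrated with a quick numeric check: f(x) = ax + b preserves addition only when b = 0, so only then is it a linear map. The coefficients below are made up for the example.

```python
# f(x) = a*x + b is a linear *map* only when b = 0:
# additivity f(x + y) = f(x) + f(y) fails otherwise.

def make_f(a, b):
    return lambda x: a * x + b

f = make_f(3, 0)   # passes through the origin: a linear map
g = make_f(3, 5)   # affine (linear function of calculus), not a linear map

x, y = 2, 7
print(f(x + y) == f(x) + f(y))  # True  (27 == 6 + 21)
print(g(x + y) == g(x) + g(y))  # False (32 != 11 + 26)
```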
8.
Mathematical optimization
–
In mathematics, computer science and operations research, mathematical optimization, also spelled mathematical optimisation, is the selection of a best element from some set of available alternatives. The generalization of optimization theory and techniques to other formulations comprises a large area of applied mathematics. Such a formulation is called an optimization problem or a mathematical programming problem. Many real-world and theoretical problems may be modeled in this general framework. Typically, A is some subset of the Euclidean space Rn, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the choice set. The function f is called, variously, an objective function, a loss function or cost function, a utility function or fitness function, or, in certain fields, an energy function. A feasible solution that minimizes the objective function is called an optimal solution. In mathematics, conventional optimization problems are usually stated in terms of minimization. Generally, unless both the objective function and the feasible region are convex in a minimization problem, there may be several local minima. While a local minimum is at least as good as any nearby points, a global minimum is at least as good as every feasible point. In a convex problem, if there is a local minimum that is interior, it is also the global minimum. Optimization problems are often expressed with special notation. Consider the notation min x∈R (x² + 1); this denotes the minimum value of the objective function x² + 1 when choosing x from the set of real numbers R. The minimum value in this case is 1, occurring at x = 0. Similarly, the notation max x∈R 2x asks for the maximum value of the objective function 2x, where x may be any real number. In this case there is no such maximum, as the objective function is unbounded. A related notation, arg min, represents the value of the argument x at which the objective function attains its minimum.
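The example min over x ∈ R of x² + 1 can also be approximated numerically. A crude ternary search over a bracketing interval is enough for this convex objective; this is a sketch, not a production optimizer.

```python
# Ternary search: repeatedly shrink a bracketing interval around the
# minimizer of a unimodal function.

def ternary_min(f, lo, hi, iters=200):
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

x_star = ternary_min(lambda x: x ** 2 + 1, -10, 10)
print(x_star, x_star ** 2 + 1)  # approximately 0 and 1, matching the text
```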
9.
Linear equality
–
A linear equation is an algebraic equation in which each term is either a constant or the product of a constant and a single variable. A simple example of a linear equation with only one variable, x, may be written in the form ax + b = 0, where a and b are constants. The constants may be numbers, parameters, or even functions of parameters. Linear equations can have one or more variables. An example of a linear equation with three variables, x, y, and z, is given by ax + by + cz + d = 0, where a, b, c, and d are constants and a, b, and c are not all zero. Linear equations occur frequently in most subareas of mathematics and especially in applied mathematics. An equation is linear if the sum of the exponents of the variables of each term is one; equations with exponents greater than one are non-linear. An example of a non-linear equation of two variables is axy + b = 0, where a and b are constants and a ≠ 0. It has two variables, x and y, and is non-linear because the sum of the exponents of the variables in the first term, xy, is two. This article considers the case of a single equation for which one searches the real solutions. All its content applies to complex solutions and, more generally, to linear equations with coefficients in any field. A linear equation in one unknown x may always be rewritten ax = b. If a ≠ 0, there is a unique solution x = b/a. The origin of the name linear comes from the fact that the set of solutions of such an equation in two variables forms a straight line in the plane. Linear equations can be rewritten using the laws of elementary algebra into several different forms; these equations are often referred to as the equations of the straight line. In what follows, x, y, t, and θ are variables. In the general form the linear equation is written as Ax + By = C, where A and B are not both equal to zero. The equation is usually written so that A ≥ 0, by convention. The graph of the equation is a straight line, and every straight line can be represented by an equation in the above form.
If A is nonzero, then the x-intercept, that is, the x-coordinate of the point where the graph crosses the x-axis, is C/A. If B is nonzero, then the y-intercept, that is, the y-coordinate of the point where the graph crosses the y-axis, is C/B, and the slope of the line is −A/B. An equivalent general form is ax + by + c = 0.
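The intercepts and slope described above can be read off from the general form Ax + By = C; the helper below is a hypothetical illustration with made-up coefficients.

```python
# Read intercepts and slope from the general form A*x + B*y = C
# (A and B not both zero).

def line_properties(A, B, C):
    props = {}
    if A != 0:
        props["x_intercept"] = C / A     # graph crosses the x-axis at C/A
    if B != 0:
        props["y_intercept"] = C / B     # graph crosses the y-axis at C/B
        props["slope"] = -A / B
    return props

# 2x + 4y = 8: crosses the x-axis at 4, the y-axis at 2, slope -1/2.
print(line_properties(2, 4, 8))
# {'x_intercept': 4.0, 'y_intercept': 2.0, 'slope': -0.5}
```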
10.
Linear inequality
–
In mathematics a linear inequality is an inequality which involves a linear function. Two-dimensional linear inequalities are expressions in two variables of the form ax + by < c or ax + by ≥ c (and similarly with the other order relations). The solution set of such an inequality can be graphically represented by a half-plane in the Euclidean plane. The line that determines the half-planes is not included in the solution set when the inequality is strict. For example, to decide which half-plane solves x + 3y < 9, pick a convenient point not on the line, such as the origin. Since 0 + 3·0 = 0 < 9, this point is in the solution set, so the half-plane containing this point is the solution set of this linear inequality. In Rn linear inequalities are the expressions that may be written in the form f(x1, …, xn) < b or f(x1, …, xn) ≤ b, where f is a linear form, x1, …, xn are called the unknowns, and a1, …, an are called the coefficients. Alternatively, these may be written as g(x) < 0 or g(x) ≤ 0, where g is an affine function. In a system of linear inequalities, x1, …, xn are the unknowns, a11, a12, …, amn are the coefficients of the system, and b1, b2, …, bm are the constant terms. This can be concisely written as the matrix inequality Ax ≤ b, where A is an m×n matrix, x is an n×1 column vector of variables, and b is an m×1 column vector of constants. In the above systems both strict and non-strict inequalities may be used. Not all systems of linear inequalities have solutions. The set of solutions of a linear inequality constitutes a half-space of the n-dimensional real space. The set of solutions of a system of linear inequalities corresponds to the intersection of the half-spaces defined by the individual inequalities. It is a convex set, since the half-spaces are convex sets, and the intersection of a set of convex sets is also convex. In the non-degenerate cases this convex set is a convex polyhedron. It may also be empty or a convex polyhedron of lower dimension confined to an affine subspace of the n-dimensional space Rn. A linear programming problem seeks to optimize (find a maximum or minimum value of) a function, called the objective function, subject to a number of constraints on the variables which, in general, are linear inequalities.
The list of constraints is a system of linear inequalities. Generalizations of this type are often only of theoretical interest until an application for them becomes apparent.
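The matrix inequality Ax ≤ b can be checked row by row; the plain-Python sketch below encodes the text's x + 3y < 9 example (made non-strict for simplicity) together with non-negativity constraints.

```python
# Check whether a point x satisfies the matrix inequality A x <= b,
# with rows of A given as coefficient lists.

def satisfies(A, b, x):
    return all(
        sum(aij * xj for aij, xj in zip(row, x)) <= bi
        for row, bi in zip(A, b)
    )

# x + 3y <= 9, x >= 0, y >= 0 (the last two written as -x <= 0, -y <= 0):
A = [[1, 3], [-1, 0], [0, -1]]
b = [9, 0, 0]

print(satisfies(A, b, (0, 0)))   # True: the origin lies in the half-plane
print(satisfies(A, b, (9, 9)))   # False: 9 + 27 > 9
```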
11.
Feasible region
–
In mathematical optimization, the feasible region or feasible set is the initial set of candidate solutions to the problem, before the set of candidates has been narrowed down. For example, consider the problem: minimize x² + y⁴ with respect to the variables x and y, subject to constraints on those variables. Here the feasible set is the set of pairs (x, y) in which the value of x is at least 1 and at most 10 and the value of y is at least 5 and at most 12. Note that the feasible set of the problem is separate from the objective function, which states the criterion to be optimized. In many problems, the feasible set reflects a constraint that one or more variables must be non-negative. In pure integer programming problems, the feasible set is a set of integers. In linear programming problems, the feasible set is a convex polytope. Constraint satisfaction is the process of finding a point in the feasible region. A convex feasible set is one in which a line segment connecting any two feasible points goes through only other feasible points, and not through any points outside the feasible set. If the constraints of a problem are mutually contradictory, there are no points that satisfy all the constraints; in this case the problem has no solution and is said to be infeasible. Feasible sets may be bounded or unbounded. For example, the feasible set defined by the constraint set {x ≥ 0, y ≥ 0} is unbounded because in some directions there is no limit on how far one can go and still be in the feasible set. In contrast, the feasible set formed by the constraint set {x ≥ 0, y ≥ 0, x + 2y ≤ 4} is bounded because the extent of movement in any direction is limited by the constraints. In linear programming problems with n variables, a necessary but not sufficient condition for the feasible set to be bounded is that the number of constraints be at least n + 1. If the feasible set is unbounded, there may or may not be an optimum. In optimization and other branches of mathematics, and in search algorithms, a candidate solution is a member of the set of possible solutions in the feasible region of a given problem.
A candidate solution does not have to be a likely or reasonable solution to the problem; it is simply in the set that satisfies all constraints, that is, it is in the set of feasible solutions. The space of all candidate solutions, before any feasible points have been excluded, is called the feasible region, feasible set, or search space. This is the set of all possible solutions that satisfy the problem's constraints, and constraint satisfaction is the process of finding a point in the feasible set. In the case of the genetic algorithm, the candidate solutions are the individuals in the population being evolved by the algorithm.
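The example above (minimize x² + y⁴ subject to 1 ≤ x ≤ 10 and 5 ≤ y ≤ 12) illustrates the separation between feasibility and the objective: the membership test below never mentions the objective at all.

```python
# Feasibility test for the box constraints 1 <= x <= 10, 5 <= y <= 12.
# Feasibility is independent of the objective function x**2 + y**4.

def feasible(x, y):
    return 1 <= x <= 10 and 5 <= y <= 12

print(feasible(2, 6))   # True
print(feasible(0, 6))   # False: violates x >= 1
print(feasible(2, 20))  # False: violates y <= 12

# Since the objective is increasing in both variables on this feasible set,
# the minimum is attained at the corner (1, 5); checking the four corners:
print(min((x * x + y ** 4, (x, y)) for x in (1, 10) for y in (5, 12)))
# (626, (1, 5))
```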
12.
Convex polytope
–
A convex polytope is a special case of a polytope, having the additional property that it is also a convex set of points in the n-dimensional space Rn. Some authors use the terms convex polytope and convex polyhedron interchangeably, while others distinguish between the two. In addition, some texts require a polytope to be a bounded set; the terms bounded/unbounded convex polytope will be used whenever the boundedness is critical to the discussed issue. Yet other texts treat a convex n-polytope as a surface or (n − 1)-manifold. Convex polytopes play an important role both in various branches of mathematics and in applied areas, most notably in linear programming. A comprehensive and influential book on the subject, called Convex Polytopes, was published in 1967 by Branko Grünbaum; in 2003 the 2nd edition of the book was published, with significant additional material contributed by new writers. In Grünbaum's book, and in some other texts in discrete geometry, convex polytopes are often simply called polytopes. Grünbaum points out that this is solely to avoid the endless repetition of the word convex. A polytope is called full-dimensional if it is an n-dimensional object in Rn. Many examples of bounded convex polytopes can be found in the article polyhedron. A convex polytope may be defined in a number of ways, depending on what is more suitable for the problem at hand. Grünbaum's definition is in terms of a convex set of points in space. Other important definitions are: as the intersection of half-spaces and as the convex hull of a set of points. This is equivalent to defining a bounded convex polytope as the convex hull of a finite set of points. Such a definition is called a vertex representation (V-description). For a compact convex polytope, the minimal V-description is unique and it is given by the set of the vertices of the polytope. A convex polytope may also be defined as an intersection of a number of half-spaces. Such a definition is called a half-space representation (H-description). There exist infinitely many H-descriptions of a convex polytope.
However, for a convex polytope, the minimal H-description is in fact unique and is given by the set of the facet-defining halfspaces. A closed half-space can be written as an inequality a1x1 + a2x2 + ⋯ + anxn ≤ b, where n is the dimension of the space containing the polytope under consideration.
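The two representations can be compared directly on a small example. Below, the unit square is given both an H-description (four half-space inequalities) and a V-description (four vertices), with a membership test for each; both tests are illustrative sketches and classify sample points identically.

```python
# The unit square in two descriptions.
# H-description: inequalities a . p <= b.
halfspaces = [((-1, 0), 0), ((1, 0), 1), ((0, -1), 0), ((0, 1), 1)]
# V-description: vertices in counter-clockwise order.
vertices = [(0, 0), (1, 0), (1, 1), (0, 1)]

def in_h(p):
    return all(a[0] * p[0] + a[1] * p[1] <= b for a, b in halfspaces)

def in_v(p):
    # For a convex polygon given by ordered vertices, p is inside iff it is
    # on the left of (or on) every directed edge.
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) < 0:
            return False
    return True

samples = [(0.5, 0.5), (1, 1), (1.5, 0.5), (-0.1, 0)]
print([(in_h(p), in_v(p)) for p in samples])
# [(True, True), (True, True), (False, False), (False, False)]
```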
13.
Intersection (mathematics)
–
In mathematics, the intersection of two or more objects is another, usually smaller, object. All objects are presumed to lie in a common space, except in set theory. The intersection is one of the basic concepts of geometry. Intuitively, the intersection of two or more objects is a new object that lies in each of the original objects. An intersection can have various shapes, but a point is the most common case in plane geometry. The set-theoretic intersection is always defined, but may be empty. Incidence geometry defines an intersection as an object of lower dimension that is incident to each of the original objects; in this approach an intersection can sometimes be undefined, such as for parallel lines. In both cases the concept of intersection relies on logical conjunction. Algebraic geometry defines intersections in its own way with intersection theory. There can be more than one primitive object, such as points, that form an intersection; this can be understood ambiguously: either the intersection is all of them together, or each of them individually.
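The parallel-lines case mentioned above shows up naturally when computing the intersection of two lines given in general form Ax + By = C: a zero determinant means no unique intersection point exists. The helper below is an illustrative sketch using Cramer's rule.

```python
# Intersection point of two lines A*x + B*y = C, or None when the lines are
# parallel (or coincident), i.e. the 2x2 system is singular.

def intersect(l1, l2):
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # no unique intersection point
    # Cramer's rule for the 2x2 linear system.
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

print(intersect((1, 1, 4), (1, -1, 0)))  # x + y = 4 and x - y = 0 meet at (2.0, 2.0)
print(intersect((1, 1, 4), (2, 2, 5)))   # None: parallel lines
```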
14.
Real number
–
In mathematics, a real number is a value that represents a quantity along a continuous line. The adjective real in this context was introduced in the 17th century by René Descartes. The real numbers include all the rational numbers, such as the integer −5 and the fraction 4/3, and all the irrational numbers, such as √2. Included within the irrationals are the transcendental numbers, such as π. Real numbers can be thought of as points on an infinitely long line called the number line or real line. Any real number can be determined by a possibly infinite decimal representation, such as that of 8.632. The real line can be thought of as a part of the complex plane, and the complex numbers include the real numbers. These descriptions of the real numbers are not sufficiently rigorous by the modern standards of pure mathematics; the rigorous constructions (for example via Dedekind cuts or Cauchy sequences of rationals) all satisfy the axiomatic definition and are thus equivalent. The statement that there is no subset of the reals with cardinality strictly between ℵ0 and that of the reals themselves is known as the continuum hypothesis. Simple fractions were used by the Egyptians around 1000 BC; the Vedic Sulba Sutras (c. 600 BC) include what may be the first use of irrational numbers. Around 500 BC, the Greek mathematicians led by Pythagoras realized the need for irrational numbers, in particular the irrationality of the square root of 2. Arabic mathematicians merged the concepts of number and magnitude into a more general idea of real numbers. In the 16th century, Simon Stevin created the basis for modern decimal notation. In the 17th century, Descartes introduced the term real to describe roots of a polynomial, distinguishing them from imaginary ones. In the 18th and 19th centuries, there was much work on irrational and transcendental numbers. Johann Heinrich Lambert gave the first flawed proof that π cannot be rational, and Adrien-Marie Legendre completed the proof. Évariste Galois developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory.
Charles Hermite first proved that e is transcendental, and Ferdinand von Lindemann showed that π is transcendental as well. Lindemann's proof was much simplified by Weierstrass, still further by David Hilbert, and has finally been made elementary by Adolf Hurwitz and Paul Gordan. The development of calculus in the 18th century used the set of real numbers without having defined them cleanly. The first rigorous definition was given by Georg Cantor in 1871. In 1874, he showed that the set of all real numbers is uncountably infinite but the set of all algebraic numbers is countably infinite. Contrary to widely held beliefs, his first method was not his famous diagonal argument. The real number system can be defined axiomatically up to an isomorphism, which is described hereafter. Another possibility is to start from some rigorous axiomatization of Euclidean geometry. From the structuralist point of view all these constructions are on equal footing.
15.
Affine function
–
In geometry, an affine transformation, affine map or affinity is a function between affine spaces which preserves points, straight lines and planes. Sets of parallel lines remain parallel after an affine transformation. An affine transformation does not necessarily preserve angles between lines or distances between points, though it does preserve ratios of distances between points lying on a straight line. Examples of affine transformations include translation, scaling, homothety, similarity transformation, reflection, rotation, shear mapping, and compositions of them in any combination and sequence. If X and Y are affine spaces, then every affine transformation f : X → Y is of the form x ↦ Mx + b. Unlike a purely linear transformation, an affine map need not preserve the zero point in a linear space. Thus, every linear transformation is affine, but not every affine transformation is linear. All Euclidean spaces are affine, but there are affine spaces that are non-Euclidean. Affine transformations are conveniently expressed in affine coordinates, which include Cartesian coordinates in Euclidean spaces. Another way to deal with affine transformations systematically is to select a point as the origin; then, any affine transformation is equivalent to a linear transformation followed by a translation. An affine map f : A → B between two affine spaces is a map on the points that acts linearly on the vectors between them. In symbols, f determines a linear transformation φ such that f(P) − f(Q) = φ(P − Q) for any pair of points P, Q in A; we can interpret this definition in a few other ways, as follows. If an origin O ∈ A is chosen, and B denotes its image f(O) ∈ B, the conclusion is that, intuitively, f consists of a translation and a linear map. In other words, f preserves barycenters; as shown above, an affine map is the composition of two functions, a translation and a linear map.
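The preservation of ratios of distances along a line can be checked concretely. In this sketch (the particular matrix M and translation b are arbitrary illustrative choices, not from the article), a point q sits one third of the way from p to r, and its image stays one third of the way along the image segment:

```python
# Affine map f(x) = M x + b in the plane, with plain tuples (no libraries).
M = ((2.0, 1.0), (0.0, 3.0))   # linear part (a shear combined with scaling)
b = (5.0, -1.0)                # translation

def f(x):
    y0 = M[0][0] * x[0] + M[0][1] * x[1] + b[0]
    y1 = M[1][0] * x[0] + M[1][1] * x[1] + b[1]
    return (y0, y1)

def dist(u, v):
    return ((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) ** 0.5

# q divides the segment from p to r in ratio 1:2 (it sits 1/3 of the way along).
p, r = (0.0, 0.0), (3.0, 3.0)
q = (1.0, 1.0)

# The images remain collinear, and the 1/3 ratio of distances is preserved,
# even though lengths and angles themselves change under f.
ratio = dist(f(q), f(p)) / dist(f(r), f(p))
print(round(ratio, 9))  # 0.333333333
```

The same computation with any invertible M and any b yields the same ratio, since f(q) − f(p) = M(q − p) and f(r) − f(p) = 3·M(q − p).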
Ordinary vector algebra uses matrix multiplication to represent linear maps; using an augmented matrix and an augmented vector, it is possible to represent both the translation and the linear map with a single matrix multiplication. If A is a matrix, the statement [y; 1] = [A b; 0 … 0 1][x; 1] is equivalent to y = Ax + b. The above-mentioned augmented matrix is called an affine transformation matrix, or projective transformation matrix. This representation exhibits the set of all invertible affine transformations as the semidirect product of Kⁿ and GL(n, K). This is a group under the operation of composition of functions. Ordinary matrix–vector multiplication always maps the origin to the origin, and could therefore never represent a translation, in which the origin must necessarily be mapped to some other point. By appending the additional coordinate 1 to every vector, one considers the space to be mapped as a subset of a space with an additional dimension. In that space, the original space occupies the subset in which the extra coordinate is 1, so the origin of the original space can be found at (0, 0, …, 0, 1). A translation within the original space by means of a linear transformation of the higher-dimensional space is then possible.
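The augmented-matrix trick can be sketched directly: computing y = Ax + b in two steps gives the same result as one multiplication by the augmented matrix [A b; 0 1] applied to the augmented vector (x, 1). The matrices and vectors below are illustrative choices:

```python
def affine_direct(A, b, x):
    """y = A x + b, computed as a matrix-vector product plus a translation."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) + b[i]
            for i in range(len(A))]

def affine_augmented(A, b, x):
    """The same map via a single multiplication by [[A, b], [0, 1]]."""
    n = len(x)
    aug = [A[i] + [b[i]] for i in range(len(A))] + [[0] * n + [1]]
    xh = x + [1]                       # append the extra coordinate 1
    yh = [sum(aug[i][j] * xh[j] for j in range(n + 1)) for i in range(n + 1)]
    assert yh[-1] == 1                 # the extra coordinate stays 1
    return yh[:-1]

A = [[2, 0], [1, 3]]
b = [4, -1]
x = [1, 2]
print(affine_direct(A, b, x), affine_augmented(A, b, x))  # [6, 6] [6, 6]
```

Because composition of affine maps then becomes an ordinary product of augmented matrices, graphics pipelines routinely store transformations in this homogeneous form.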
16.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem. In English, the word was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that algorithm took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice (Talibus Indorum), or Hindu numerals. An informal definition could be a set of rules that precisely defines a sequence of operations, which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. Humans can do something equally useful in the case of certain enumerably infinite sets: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability.
That notion is central for explaining how formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to our customary physical dimension. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term. Algorithms are essential to the way computers process data; thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute. According to Gurevich, Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine. According to Savage, an algorithm is a computational process defined by a Turing machine. Typically, when an algorithm is associated with processing information, data can be read from an input source and written to an output device. Stored data are regarded as part of the internal state of the entity performing the algorithm; in practice, the state is stored in one or more data structures. For some such computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be systematically dealt with, case by case.
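A standard illustration of an effective method in this sense (chosen here as an example; it is not one discussed in this article) is Euclid's algorithm for the greatest common divisor: a finite, unambiguous sequence of steps that is guaranteed to terminate on every input:

```python
def gcd(a, b):
    """Euclid's algorithm: a finite sequence of well-defined steps that
    always terminates, the classic example of an effective method."""
    while b != 0:
        a, b = b, a % b   # each step strictly shrinks the second argument
    return a

print(gcd(1071, 462))  # 21
```

Termination is assured because the second argument strictly decreases at every step and cannot go below zero, so the state sequence is finite, exactly the property the informal definitions above demand.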
17.
Vector space
–
A vector space is a collection of objects called vectors, which may be added together and multiplied by numbers, called scalars in this context. Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms. Euclidean vectors are an example of a vector space; they represent physical quantities such as forces: any two forces can be added to yield a third, and the multiplication of a force vector by a real multiplier is another force vector. In the same vein, but in a more geometric sense, vectors representing displacements in the plane or in three-dimensional space also form vector spaces. Vector spaces are the subject of linear algebra and are well characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. Infinite-dimensional vector spaces arise naturally in mathematical analysis, as function spaces, and these vector spaces are generally endowed with additional structure, which may be a topology, allowing the consideration of issues of proximity and continuity. Among these topologies, those that are defined by a norm or inner product are more commonly used. This is particularly the case of Banach spaces and Hilbert spaces. Historically, the first ideas leading to vector spaces can be traced back as far as the 17th century's analytic geometry, matrices, systems of linear equations, and Euclidean vectors. Today, vector spaces are applied throughout mathematics, science and engineering. Furthermore, vector spaces furnish an abstract, coordinate-free way of dealing with geometrical and physical objects such as tensors. This in turn allows the examination of local properties of manifolds by linearization techniques. Vector spaces may be generalized in several ways, leading to more advanced notions in geometry and abstract algebra.
The concept of vector space will first be explained by describing two particular examples. The first example of a vector space consists of arrows in a fixed plane, starting at one fixed point. This is used in physics to describe forces or velocities: given any two such arrows, v and w, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows and is denoted v + w. The other operation, scalar multiplication, stretches or shrinks an arrow v by a factor a; when a is negative, av is defined as the arrow pointing in the opposite direction instead. The second example consists of pairs of numbers. Such a pair is written as (x, y), and the sum of two such pairs and multiplication of a pair by a number are defined componentwise: (x₁, y₁) + (x₂, y₂) = (x₁ + x₂, y₁ + y₂) and a(x, y) = (ax, ay). The first example above reduces to this one if the arrows are represented by the pair of Cartesian coordinates of their end points. A vector space over a field F is a set V together with two operations that satisfy the eight axioms listed below; elements of V are commonly called vectors, and elements of F are commonly called scalars. The first operation, called vector addition, takes any two vectors v and w and gives a third vector v + w. The second operation, called scalar multiplication, takes any scalar a and any vector v and gives another vector av. In this article, vectors are represented in boldface to distinguish them from scalars.
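The componentwise rules for pairs can be sketched in a few lines of code (a plain illustration; the particular pairs and scalar are arbitrary):

```python
# The vector space of ordered pairs of real numbers:
# componentwise addition and componentwise scalar multiplication.
def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(a, v):
    return (a * v[0], a * v[1])

v, w = (1.0, 2.0), (3.0, -1.0)
print(add(v, w))        # (4.0, 1.0)
print(scale(-2.0, v))   # (-2.0, -4.0)
```

Each of the eight axioms (commutativity and associativity of addition, distributivity, and so on) can be verified mechanically for these two operations, which is what makes the pairs a vector space over the reals.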
18.
Matrix (mathematics)
–
In mathematics, a matrix is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. For example, a matrix with two rows and three columns has dimensions 2 × 3. The individual items in an m × n matrix A, often denoted by a_i,j, where 1 ≤ i ≤ m and 1 ≤ j ≤ n, are called its elements or entries. Provided that they have the same size, two matrices can be added or subtracted element by element. The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second. Any matrix can be multiplied element-wise by a scalar from its associated field. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. The product of two matrices is a matrix that represents the composition of two linear transformations. Another application of matrices is in the solution of systems of linear equations. If the matrix is square, it is possible to deduce some of its properties by computing its determinant; for example, a square matrix has an inverse if and only if its determinant is not zero. Insight into the geometry of a linear transformation is obtainable from the matrix's eigenvalues and eigenvectors. Applications of matrices are found in most scientific fields. In computer graphics, they are used to manipulate 3D models and project them onto a 2-dimensional screen. Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions. Matrices are used in economics to describe systems of economic relationships. A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations. Matrix decomposition methods simplify computations, both theoretically and practically. Algorithms that are tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, expedite computations in finite element method and other computations.
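The multiplication rule and the determinant test for invertibility can be sketched for the 2 × 2 case (a minimal illustration; the entries of A are arbitrary choices picked so the arithmetic is exact):

```python
# Matrix product and the determinant test for invertibility.
def matmul(A, B):
    # A is m×n, B is n×p: the column count of A must equal the row count of B.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(A):
    d = det2(A)
    if d == 0:
        raise ValueError("singular: no inverse when the determinant is zero")
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

A = [[4, 2], [2, 2]]
print(matmul(A, inv2(A)))  # [[1.0, 0.0], [0.0, 1.0]]
```

Multiplying A by its computed inverse yields the identity matrix, and a matrix with determinant zero (for instance one whose rows are proportional) has no inverse at all.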
Infinite matrices occur in planetary theory and in atomic theory; a simple example of an infinite matrix is the matrix representing the derivative operator, which acts on the Taylor series of a function. A matrix is an array of numbers or other mathematical objects for which operations such as addition and multiplication are defined. Most commonly, a matrix over a field F is a rectangular array of scalars each of which is a member of F. Most of this article focuses on real and complex matrices, that is, matrices whose elements are real numbers or complex numbers, respectively; a matrix all of whose entries are real numbers is called a real matrix. More general types of entries are discussed below.
19.
Economics
–
Economics is a social science concerned chiefly with description and analysis of the production, distribution, and consumption of goods and services, according to the Merriam-Webster Dictionary. Economics focuses on the behaviour and interactions of economic agents and how economies work. Consistent with this focus, textbooks often distinguish between microeconomics and macroeconomics. Microeconomics examines the behaviour of basic elements in the economy, including individual agents and markets, and their interactions. Individual agents may include, for example, households, firms, buyers, and sellers. Macroeconomics analyzes the entire economy and issues affecting it, including unemployment of resources, inflation, economic growth, and the public policies that address these issues. Economic analysis can be applied throughout society, as in business, finance, health care, and government. Economic analyses may also be applied to such diverse subjects as crime, education, the family, law, politics, religion, social institutions, war, science, and the environment. At the turn of the 21st century, the expanding domain of economics in the social sciences has been described as economic imperialism. The ultimate goal of economics is to improve the living conditions of people in their everyday life. There are a variety of definitions of economics, and some of the differences may reflect evolving views of the subject or different views among economists. Adam Smith described the subject as an inquiry into the nature and causes of the wealth of nations, in particular how "to supply the state or commonwealth with a revenue for the publick services." Say, distinguishing the subject from its public-policy uses, defines it as the science of production, distribution, and consumption of wealth. On the satirical side, Thomas Carlyle coined "the dismal science" as an epithet for classical economics. Alfred Marshall described economics as a study of mankind in the ordinary business of life: it enquires how he gets his income and how he uses it. Thus, it is on the one side the study of wealth and, on the other and more important side, a part of the study of man.
He affirmed that previous economists have usually centred their studies on the analysis of wealth: how wealth is created, distributed, and consumed. But he said that economics can be used to study other things, such as war, that are outside its usual focus. This is because war has winning it as the goal, generates both costs and benefits, and uses up resources to attain that goal. If the war is not winnable, or if the expected costs outweigh the benefits, the deciding actors may never go to war but rather explore other alternatives. Some subsequent comments criticized the definition as overly broad in failing to limit its subject matter to analysis of markets. There are other criticisms as well, such as scarcity not accounting for the macroeconomics of high unemployment. The same source reviews a range of definitions included in principles of economics textbooks. Among economists more generally, it argues that a particular definition presented may reflect the direction toward which the author believes economics is evolving. Microeconomics examines how entities, forming a market structure, interact within a market to create a market system.
20.
Routing
–
Routing is the process of selecting a path for traffic in a network, or between or across multiple networks. Packet forwarding is the transit of logically addressed network packets from one network interface to another. Intermediate nodes are typically network hardware devices such as routers, bridges, gateways, or firewalls; general-purpose computers also forward packets and perform routing, although they have no specially optimized hardware for the task. The routing process usually directs forwarding on the basis of routing tables; thus, constructing routing tables, which are held in the router's memory, is very important for efficient routing. Most routing algorithms use only one network path at a time; multipath routing techniques enable the use of multiple alternative paths. Routing, in a narrower sense of the term, is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices, and in large networks structured addressing outperforms unstructured addressing. Routing has become the dominant form of addressing on the Internet; bridging is still widely used within localized environments. This article focuses on unicast routing algorithms. In static routing, small networks may use manually configured routing tables. Larger networks have complex topologies that can change rapidly, making the manual construction of routing tables unfeasible. Nevertheless, most of the public switched telephone network uses pre-computed routing tables. Examples of dynamic-routing protocols and algorithms include the Routing Information Protocol and Open Shortest Path First. Distance vector algorithms use the Bellman–Ford algorithm. This approach assigns a cost number to each of the links between each node in the network; nodes send information from point A to point B via the path that results in the lowest total cost. The algorithm operates in a simple manner.
When a node first starts, it only knows of its immediate neighbours and the direct cost involved in reaching them. Each node, on a regular basis, sends to each neighbour node its own current assessment of the total cost to get to all the destinations it knows of. The neighbouring nodes examine this information and compare it to what they already know; anything that represents an improvement on what they already have, they insert into their own routing tables.
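The repeated exchange-and-improve cycle just described can be sketched as Bellman–Ford relaxation over a small hypothetical network (node names and link costs are illustrative):

```python
# A minimal distance-vector sketch: every node repeatedly adopts a
# neighbour's route whenever "link cost + neighbour's best-known cost"
# improves on its own table (Bellman-Ford relaxation).
import math

links = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 5}   # symmetric costs
nodes = {"A", "B", "C"}

# Initially each node only knows itself (cost 0) and its direct links.
cost = {(u, v): math.inf for u in nodes for v in nodes}
for u in nodes:
    cost[(u, u)] = 0
for (u, v), c in links.items():
    cost[(u, v)] = cost[(v, u)] = c

# Relax until no table changes anywhere in the network.
changed = True
while changed:
    changed = False
    for (u, v), c in links.items():
        for w in nodes:
            for a, b in ((u, v), (v, u)):
                if cost[(b, w)] + c < cost[(a, w)]:
                    cost[(a, w)] = cost[(b, w)] + c
                    changed = True

print(cost[("A", "C")])  # 3  (via B, cheaper than the direct 5-cost link)
```

Real distance-vector protocols such as RIP work the same way in spirit, but exchange tables asynchronously and add safeguards (hop limits, split horizon) against stale information.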
21.
Scheduling (production processes)
–
Scheduling is the process of arranging, controlling and optimizing work and workloads in a production process or manufacturing process. Scheduling is used to allocate plant and machinery resources, plan human resources, and plan production processes and purchase materials. It is an important tool for manufacturing and engineering, where it can have a major impact on the productivity of a process. In manufacturing, the purpose of scheduling is to minimize the production time and costs, by telling a production facility when to make a product, with which staff, and on which equipment. Production scheduling aims to maximize the efficiency of the operation and reduce costs. Companies use backward and forward scheduling to allocate plant and machinery resources, plan human resources, and plan production processes. Forward scheduling is planning the tasks from the date resources become available to determine the shipping date or the due date. Backward scheduling is planning the tasks from the due date or required-by date to determine the start date and/or any changes in capacity required. A key factor in scheduling is productivity, the relation between quantity of inputs and quantity of output. Key concepts here are: Inputs: inputs are plant, labor, materials, tooling and energy. Outputs: outputs are the products produced in factories either for other factories or for the end buyer. The extent to which any one product is produced within any one factory is governed by transaction cost. Output within the factory: the output of any one work area within the factory is an input to the next work area in that factory according to the manufacturing process. For example, the output of cutting is an input to the bending room. Output for the next factory: by way of example, the output of a paper mill is an input to a print factory, and the output of a petrochemical plant is an input to an asphalt plant, a cosmetics factory, and a plastics factory.
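Backward scheduling as defined above can be sketched as a date computation: walk the task durations back from the due date to find the latest feasible start date. The task names echo the cutting/bending example in the text; the durations and dates are hypothetical, and a real scheduler would also account for capacity and working calendars:

```python
from datetime import date, timedelta

def backward_schedule(due, tasks):
    """Walk task durations back from the due date; return the start date
    and a (task, begin, end) plan. Purely illustrative: assumes one task
    at a time and ignores capacity, shifts, and non-working days."""
    start = due
    plan = []
    for name, days in reversed(tasks):
        begin = start - timedelta(days=days)
        plan.append((name, begin, start))
        start = begin
    return start, list(reversed(plan))

tasks = [("cutting", 2), ("bending", 1), ("assembly", 3)]
start, plan = backward_schedule(date(2024, 6, 20), tasks)
print(start)  # 2024-06-14
```

Forward scheduling is the mirror image: add the durations to the resource-availability date to obtain the earliest shipping date.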
Output for the end buyer: factory output goes to the consumer via a service business such as a retailer or an asphalt paving company. Resource allocation: resource allocation is assigning inputs to produce output; the aim is to maximize output with given inputs or to minimize the quantity of inputs needed to produce the required output. Production scheduling can take a significant amount of computing power if there are a large number of tasks. Batch production scheduling shares some concepts and techniques with finite capacity scheduling, which has been applied to many manufacturing problems. The specific issues of scheduling batch manufacturing processes have generated considerable industrial and academic interest. A batch process can be described in terms of a recipe, which comprises a bill of materials and operating instructions which describe how to make the product. The ISA S88 batch process control standard provides a framework for describing a batch process recipe; the standard provides a procedural hierarchy for a recipe. A recipe may be organized into a series of unit-procedures or major steps; unit-procedures are organized into operations, and operations may be further organized into phases.
22.
Leonid Kantorovich
–
Leonid Vitaliyevich Kantorovich was a Soviet mathematician and economist, known for his theory and development of techniques for the optimal allocation of resources. He is regarded as the founder of linear programming, and he was the winner of the Stalin Prize in 1949 and the Nobel Memorial Prize in Economics in 1975. Kantorovich was born on 19 January 1912 to a Russian Jewish family; his father was a doctor practicing in Saint Petersburg. In 1926, at the age of fourteen, he began his studies at Leningrad University. He graduated from the Faculty of Mathematics in 1930 and began his graduate studies; in 1934, at the age of 22, he became a full professor. Later, Kantorovich worked for the Soviet government. He was given the task of optimizing production in the plywood industry, and he came up with the mathematical technique now known as linear programming, some years before it was advanced by George Dantzig. He authored several books including The Mathematical Method of Production Planning and Organization; for this work, Kantorovich was awarded the Stalin Prize in 1949. After 1939, he became a professor at the Military Engineering-Technical University. During the Siege of Leningrad, Kantorovich was a professor at the VITU of the Navy and in charge of safety on the Road of Life. He calculated the optimal distance between cars on ice, depending on the thickness of the ice and the temperature of the air. In December 1941 and January 1942, Kantorovich personally walked between cars driving on the ice of Lake Ladoga, on the Road of Life, to ensure the cars did not sink; even so, many cars carrying food for survivors of the siege were destroyed by German air raids. In 1948 Kantorovich was assigned to the Soviet atomic project. Since 1960, Kantorovich lived and worked in Novosibirsk. For his feat and courage Kantorovich was awarded the Order of the Patriotic War, and was decorated with the medal For the Defense of Leningrad.
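A toy resource-allocation problem in the spirit of the plywood work (all data hypothetical, and solved here by brute-force vertex enumeration rather than Kantorovich's or Dantzig's methods, which scale to many variables) shows what a linear program looks like: maximize 3x + 2y subject to x + y ≤ 4, x ≤ 3, y ≤ 2, and x, y ≥ 0. With only two variables, the optimum lies at a vertex of the feasible polygon, so checking every candidate vertex suffices:

```python
from itertools import combinations

constraints = [            # each row (a, b, c) means a*x + b*y <= c
    (1, 1, 4), (1, 0, 3), (0, 1, 2), (-1, 0, 0), (0, -1, 0),
]

def feasible(x, y):
    return all(a * x + b * y <= c + 1e-9 for a, b, c in constraints)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    d = a1 * b2 - a2 * b1          # intersect each pair of boundary lines
    if d == 0:
        continue                   # parallel boundaries: no vertex
    x, y = (c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d
    if feasible(x, y):
        value = 3 * x + 2 * y      # the objective to maximize
        if best is None or value > best[0]:
            best = (value, x, y)

print(best)  # (11.0, 3.0, 1.0): choose x = 3, y = 1 for an objective of 11
```

For realistic problem sizes this enumeration is hopeless, which is precisely why the simplex method and its successors mattered.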
In 1975 he shared the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel with Tjalling Koopmans for their contributions to the theory of the optimal allocation of resources. In mathematical analysis, Kantorovich had important results in functional analysis, approximation theory, and operator theory. In particular, he formulated fundamental results in the theory of normed vector lattices. Kantorovich considered infinite-dimensional optimization problems, such as the Kantorovich–Monge problem in transportation theory. His analysis proposed the Kantorovich metric, which is used in probability theory.
23.
John von Neumann
–
John von Neumann was a Hungarian-American mathematician, physicist, inventor, computer scientist, and polymath. He made major contributions to a number of fields, including mathematics, physics, economics, computing, and statistics. He published over 150 papers in his life: about 60 in pure mathematics, 20 in physics, and 60 in applied mathematics. His last work, an unfinished manuscript written while he was in the hospital, was later published in book form as The Computer and the Brain. His analysis of the structure of self-replication preceded the discovery of the structure of DNA. In a list of facts about his life he submitted to the National Academy of Sciences, he wrote: "Also, my work on various forms of operator theory, Berlin 1930 and Princeton 1935–1939; on the ergodic theorem, Princeton, 1931–1932." During World War II he worked on the Manhattan Project, developing the mathematical models behind the explosive lenses used in the implosion-type nuclear weapon. After the war, he served on the General Advisory Committee of the United States Atomic Energy Commission. Along with theoretical physicist Edward Teller, mathematician Stanislaw Ulam, and others, he worked out key steps in the nuclear physics involved in thermonuclear reactions and the hydrogen bomb. Von Neumann was born Neumann János Lajos to a wealthy, acculturated Jewish family. His place of birth was Budapest in the Kingdom of Hungary, which was then part of the Austro-Hungarian Empire. He was the eldest of three children; he had two younger brothers, Michael, born in 1907, and Nicholas, who was born in 1911. His father, Neumann Miksa, was a banker who held a doctorate in law; he had moved to Budapest from Pécs at the end of the 1880s. Miksa's father and grandfather were both born in Ond, Zemplén County, northern Hungary. John's mother was Kann Margit; her parents were Jakab Kann and Katalin Meisels. Three generations of the Kann family lived in apartments above the Kann-Heller offices in Budapest.
In 1913, his father was elevated to the nobility for his service to the Austro-Hungarian Empire by Emperor Franz Joseph; the Neumann family thus acquired the hereditary appellation Margittai, meaning "of Marghita". The family had no connection with the town; the appellation was chosen in reference to Margaret. Neumann János became Margittai Neumann János, which he later changed to the German Johann von Neumann. Von Neumann was a child prodigy: as a six-year-old, he could multiply and divide two eight-digit numbers in his head and could converse in Ancient Greek. When he once caught his mother staring aimlessly, the six-year-old von Neumann asked her, "What are you calculating?" Formal schooling did not start in Hungary until the age of ten; instead, governesses taught von Neumann, his brothers and his cousins. Max believed that knowledge of languages other than Hungarian was essential, so the children were tutored in English, French, German and Italian. One of the rooms in the apartment was converted into a library and reading room, with bookshelves from ceiling to floor, holding a private library Max had purchased. Von Neumann entered the Lutheran Fasori Evangélikus Gimnázium in 1914. This was one of the best schools in Budapest, part of a brilliant education system designed for the elite.
24.
Joseph Fourier
–
The Fourier transform and Fourier's law are also named in his honour. Fourier is also credited with the discovery of the greenhouse effect. Fourier was born at Auxerre, the son of a tailor, and was orphaned at age nine. Fourier was recommended to the Bishop of Auxerre and, through this introduction, received his education. The commissions in the scientific corps of the army were reserved for those of good birth; being thus ineligible, he accepted a military lectureship on mathematics. He took a prominent part in his own district in promoting the French Revolution. He was imprisoned briefly during the Terror, but in 1795 was appointed to the École Normale and subsequently succeeded Joseph-Louis Lagrange at the École Polytechnique. Fourier accompanied Napoleon Bonaparte on his Egyptian expedition in 1798, as scientific adviser. Cut off from France by the English fleet, he organized the workshops on which the French army had to rely for their munitions of war. He also contributed several papers to the Egyptian Institute which Napoleon founded at Cairo. After the British victories and the capitulation of the French under General Menou in 1801, Fourier returned to France. In 1801, Napoleon appointed Fourier Prefect of the Department of Isère in Grenoble, where he oversaw road construction and other projects. Fourier had intended to resume his academic post as professor at the École Polytechnique, but Napoleon decided otherwise, remarking: "The Prefect of the Department of Isère having recently died, I would like to express my confidence in citizen Fourier by appointing him to this place." Hence, being faithful to Napoleon, he took the office of Prefect. It was while at Grenoble that he began to experiment on the propagation of heat; he presented his paper On the Propagation of Heat in Solid Bodies to the Paris Institute on December 21, 1807. He also contributed to the monumental Description de l'Égypte. Fourier moved to England in 1816.
Later he returned to France, and in 1822 succeeded Jean Baptiste Joseph Delambre as Permanent Secretary of the French Academy of Sciences. In 1830, he was elected a foreign member of the Royal Swedish Academy of Sciences. That same year his diminished health began to take its toll: Fourier had already experienced, in Egypt and Grenoble, attacks of illness, and at Paris it was impossible to be mistaken with respect to the cause of the frequent suffocations which he experienced. A fall which he sustained on the 4th of May 1830, while descending a flight of stairs, aggravated his condition; shortly after this event, he died in his bed on 16 May 1830. His name is one of the 72 names inscribed on the Eiffel Tower. A bronze statue was erected in Auxerre in 1849, but it was melted down for armaments during World War II. Joseph Fourier University in Grenoble is named after him. His book Théorie analytique de la chaleur was translated, with editorial corrections, into English 56 years later by Freeman.
25.
Soviet Union
–
The Soviet Union, officially the Union of Soviet Socialist Republics, was a socialist state in Eurasia that existed from 1922 to 1991. It was nominally a union of multiple national republics, but its government and economy were highly centralized. The Soviet Union had its roots in the October Revolution of 1917, which established the Russian Socialist Federative Soviet Republic and started the Russian Civil War between the revolutionary Reds and the counter-revolutionary Whites. In 1922, the communists were victorious, forming the Soviet Union with the unification of the Russian, Transcaucasian, Ukrainian, and Byelorussian republics. Following Lenin's death in 1924, a collective leadership, and a brief power struggle, Joseph Stalin came to power in the mid-1920s. Stalin suppressed all political opposition to his rule and committed the state ideology to Marxism–Leninism. As a result, the country underwent a period of rapid industrialization and collectivization which laid the foundation for its victory in World War II and postwar dominance of Eastern Europe. Shortly before World War II, Stalin signed the Molotov–Ribbentrop Pact, agreeing to non-aggression with Nazi Germany. In June 1941, the Germans invaded the Soviet Union, opening the largest and bloodiest theater of war in history. Soviet war casualties accounted for the highest proportion of the conflict in the effort of acquiring the upper hand over Axis forces at battles such as Stalingrad. Soviet forces eventually captured Berlin in 1945, and the territory overtaken by the Red Army became satellite states of the Eastern Bloc. The Cold War emerged by 1947 as the Soviet bloc confronted the Western states that united in the North Atlantic Treaty Organization in 1949. Following Stalin's death in 1953, a period of political and economic liberalization, known as de-Stalinization and Khrushchev's Thaw, occurred. The country developed rapidly, as millions of peasants were moved into industrialized cities. The USSR took an early lead in the Space Race with Sputnik 1, the first ever artificial satellite, and Vostok 1, the first human spaceflight.
In the 1970s, there was a brief détente of relations with the United States, but tensions resumed with the Soviet war in Afghanistan from 1979. The war drained economic resources and was matched by an escalation of American military aid to Mujahideen fighters. In the mid-1980s, the last Soviet leader, Mikhail Gorbachev, sought to reform and liberalize the economy through his policies of glasnost and perestroika. The goal was to preserve the Communist Party while reversing economic stagnation. The Cold War ended during his tenure, and in 1989 Soviet satellite countries in Eastern Europe overthrew their respective communist regimes. This led to the rise of strong nationalist and separatist movements inside the USSR as well. In August 1991, a coup d'état was attempted by Communist Party hardliners. It failed, with Russian President Boris Yeltsin playing a high-profile role in facing down the coup. On 25 December 1991, Gorbachev resigned, and the twelve constituent republics emerged from the dissolution of the Soviet Union as independent post-Soviet states.
26.
Economist
–
An economist is a practitioner in the social science discipline of economics. The individual may also study, develop, and apply theories and concepts from economics and write about economic policy. A generally accepted interpretation in academia is that an economist is one who has attained a Ph.D. in economics and teaches economic science. The professionalization of economics, reflected in academia, has been described as the main change in economics since around 1900. Economists debate the path they believe their profession should take; surveys among economists indicate a preference for a shift toward the latter. Most major universities have an economics faculty, school or department. However, many prominent economists come from a background in mathematics, business, political science, law, or sociology. Getting a Ph.D. in economics takes six years, on average, with a median of 5.3 years. The Nobel Memorial Prize in Economics, established by Sveriges Riksbank in 1968, is a prize awarded each year for outstanding intellectual contributions in the field of economics. The prize winners are announced in October every year, and they receive their awards on December 10, the anniversary of Alfred Nobel's death. In contrast to regulated professions such as engineering, law or medicine, in academia being called an economist generally requires a Ph.D. degree in economics. A professional working inside one of the many fields of economics, or having a degree in the subject, is often considered to be an economist. In addition to government and academia, economists are employed in banking, finance, accountancy, commerce, marketing, business administration, and lobbying. Politicians often consult economists before enacting economic policy, and many statesmen have academic degrees in economics.
Economics graduates are employable in varying degrees depending on the regional economic scenario; small numbers go on to undertake postgraduate studies, either in economics, research, teacher training or further qualifications in specialist areas. Nearly 135 colleges and universities grant around 900 new Ph.D.s every year. Incomes are highest for those in the private sector, followed by the federal government, with academia paying the lowest incomes. As of January 2013, PayScale.com showed Ph.D. economists' salary ranges as follows: all Ph.D. economists, $61,000 to $160,000. The largest single grouping of economists in the UK are the more than 1000 members of the Government Economic Service. This figure compares very favourably with the wider picture, with 64 percent of economics graduates in employment. Some well-known economists include Ben Bernanke, Chairman of the Federal Reserve from 2006 to 2014, and Milton Friedman, laureate of the Nobel Memorial Prize in Economic Sciences.
27.
World War II
–
World War II, also known as the Second World War, was a global war that lasted from 1939 to 1945, although related conflicts began earlier. It involved the vast majority of the world's countries—including all of the great powers—eventually forming two opposing alliances, the Allies and the Axis. It was the most widespread war in history, and directly involved more than 100 million people from over 30 countries. It was marked by mass deaths of civilians, including the Holocaust and the bombing of industrial and population centres. These made World War II the deadliest conflict in human history. From late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered or controlled much of continental Europe, and formed the Axis alliance with Italy and Japan. Under the Molotov–Ribbentrop Pact of August 1939, Germany and the Soviet Union partitioned and annexed territories of their European neighbours: Poland, Finland, Romania and the Baltic states. In December 1941, Japan attacked the United States and European colonies in the Pacific Ocean, and quickly conquered much of the Western Pacific. The Axis advance halted in 1942 when Japan lost the critical Battle of Midway, near Hawaii. In 1944, the Western Allies invaded German-occupied France, while the Soviet Union regained all of its territorial losses and invaded Germany and its allies. During 1944 and 1945 the Japanese suffered major reverses in mainland Asia in South Central China and Burma, while the Allies crippled the Japanese Navy. The war in Asia thus ended, cementing the total victory of the Allies. World War II altered the political alignment and social structure of the world, and the United Nations was established to foster international co-operation and prevent future conflicts.
The victorious great powers—the United States, the Soviet Union, China, the United Kingdom, and France—became the permanent members of the United Nations Security Council. The Soviet Union and the United States emerged as rival superpowers, setting the stage for the Cold War, which lasted for the next 46 years. Meanwhile, the influence of European great powers waned, while the decolonisation of Asia and of Africa began. Most countries whose industries had been damaged moved towards economic recovery. Political integration, especially in Europe, emerged as an effort to end pre-war enmities. The start of the war in Europe is generally held to be 1 September 1939, beginning with the German invasion of Poland; Britain and France declared war on Germany two days later. The dates for the beginning of war in the Pacific include the start of the Second Sino-Japanese War on 7 July 1937, or even the Japanese invasion of Manchuria on 19 September 1931. Others follow the British historian A. J. P. Taylor, who held that the Sino-Japanese War and the war in Europe and its colonies occurred simultaneously. This article uses the conventional dating. Other starting dates sometimes used for World War II include the Italian invasion of Abyssinia on 3 October 1935. The British historian Antony Beevor views the beginning of World War II as the Battles of Khalkhin Gol fought between Japan and the forces of Mongolia and the Soviet Union from May to September 1939. The exact date of the war's end is also not universally agreed upon. It was generally accepted at the time that the war ended with the armistice of 14 August 1945, rather than the formal surrender of Japan.
28.
USSR
–
The Soviet Union, officially the Union of Soviet Socialist Republics, was a socialist state in Eurasia that existed from 1922 to 1991. It was nominally a union of national republics, but its government and economy were highly centralized. The Soviet Union had its roots in the October Revolution of 1917, which established the Russian Socialist Federative Soviet Republic and started the Russian Civil War between the revolutionary Reds and the counter-revolutionary Whites. In 1922, the communists were victorious, forming the Soviet Union with the unification of the Russian, Transcaucasian, Ukrainian, and Byelorussian republics. Following Lenin's death in 1924, a collective leadership and a brief power struggle, Joseph Stalin came to power in the mid-1920s. Stalin suppressed all opposition to his rule and committed the state ideology to Marxism–Leninism. As a result, the country underwent a period of rapid industrialization and collectivization which laid the foundation for its victory in World War II and postwar dominance of Eastern Europe. Shortly before World War II, Stalin signed the Molotov–Ribbentrop Pact, agreeing to non-aggression with Nazi Germany; in June 1941, the Germans invaded the Soviet Union, opening the largest and bloodiest theater of war in history. Soviet war casualties accounted for the highest proportion of the conflict in the effort of acquiring the upper hand over Axis forces at battles such as Stalingrad. Soviet forces eventually captured Berlin in 1945, and the territory overtaken by the Red Army became satellite states of the Eastern Bloc. The Cold War emerged by 1947 as the Soviet bloc confronted the Western states that united in the North Atlantic Treaty Organization in 1949. Following Stalin's death in 1953, a period of political and economic liberalization, known as de-Stalinization and Khrushchev's Thaw, occurred. The country developed rapidly, as millions of peasants were moved into industrialized cities. The USSR took an early lead in the Space Race with Sputnik 1, the first artificial satellite, and Vostok 1, the first human spaceflight.
In the 1970s, there was a brief détente of relations with the United States, but tensions resumed with the Soviet war in Afghanistan begun in 1979. The war drained economic resources and was matched by an escalation of American military aid to Mujahideen fighters. In the mid-1980s, the last Soviet leader, Mikhail Gorbachev, sought to reform and liberalize the economy through his policies of glasnost and perestroika. The goal was to preserve the Communist Party while reversing the economic stagnation. The Cold War ended during his tenure, and in 1989 Soviet satellite countries in Eastern Europe overthrew their respective communist regimes. This led to the rise of strong nationalist and separatist movements inside the USSR as well. In August 1991, a coup d'état was attempted by Communist Party hardliners. It failed, with Russian President Boris Yeltsin playing a high-profile role in facing down the coup. On 25 December 1991, Gorbachev resigned, and the twelve constituent republics emerged from the dissolution of the Soviet Union as independent post-Soviet states.
29.
Tjalling Koopmans
–
Tjalling Charles Koopmans was a Dutch American mathematician and economist, the joint winner with Leonid Kantorovich of the 1975 Nobel Memorial Prize in Economic Sciences. Koopmans was born in 's-Graveland, Netherlands. He began his university education at Utrecht University at seventeen, specializing in mathematics. Three years later, in 1930, he switched to theoretical physics. In 1933, he met Jan Tinbergen, the winner of the 1969 Nobel Memorial Prize in Economics, and moved to Amsterdam to study mathematical economics under him. In addition to mathematical economics, Koopmans extended his explorations to econometrics and statistics. In 1936 he graduated from Leiden University with a Ph.D., under the direction of Hendrik Kramers; the title of the thesis was Linear regression analysis of time series. Koopmans moved to the United States in 1940. There he worked for a while for a government body in Washington, D.C. In 1946, he became a citizen of the United States. In 1948, he was elected a Fellow of the American Statistical Association, and he continued to publish on the economics of optimal growth and activity analysis. Koopmans's early works on the Hartree–Fock theory are associated with Koopmans' theorem. Koopmans was awarded his Nobel memorial prize for his contributions to the field of resource allocation, specifically the theory of optimal use of resources. Koopmans was a son of Sjoerd Koopmans and Wytske van der Zee. One of Sjoerd Koopmans's sisters, Gatske Koopmans, married Symon van der Meer; their son Pieter van der Meer was the father of Nobel Prize winner Simon van der Meer. Selected works: Serial Correlation and Quadratic Forms in Normal Variables; On the Description and Comparison of Economic Systems; Nobel Memorial Lecture, Concepts of Optimality and Their Uses; Hughes Hallett, Andrew J., Econometrics and the Theory of Economic Policy: The Tinbergen–Theil Contributions 40 Years On.
Also: Testing Residuals from Least Squares Regressions for Being Generated by the Gaussian Random Walk; a biography of Tjalling Koopmans from the Institute for Operations Research and the Management Sciences.
30.
Nobel prize in economics
–
The prize was established in 1968 by a donation from Sweden's central bank, the Swedish National Bank, on the bank's 300th anniversary. Although it is not one of the prizes that Alfred Nobel established in his will in 1895, its laureates are announced with the other Nobel Prize laureates, and receive the award at the same ceremony. Laureates in the Memorial Prize in Economics are selected by the Royal Swedish Academy of Sciences. It was first awarded in 1969 to the Dutch and Norwegian economists Jan Tinbergen and Ragnar Frisch, for having developed and applied dynamic models for the analysis of economic processes. An endowment in perpetuity from Sveriges Riksbank pays the Nobel Foundation's administrative expenses associated with the prize. Since 2012, the monetary portion of the Prize in Economics has totalled 8 million Swedish kronor. This is equivalent to the amount given for the original Nobel Prizes. The Prize in Economics is not one of the original Nobel Prizes created by Alfred Nobel's will. However, the nomination process, selection criteria, and awards presentation of the Prize in Economic Sciences are performed in a manner similar to that of the Nobel Prizes: laureates are announced with the Nobel Prize laureates and receive the award at the same ceremony, and, like the original prizes, it honours those who "shall have conferred the greatest benefit on mankind." According to its website, the Royal Swedish Academy of Sciences administers a researcher exchange with academies in other countries and publishes six scientific journals. Members of the Academy and former laureates are authorised to nominate candidates; all proposals and their supporting evidence must be received before February 1. The proposals are reviewed by the Prize Committee and specially appointed experts, and before the end of September the committee chooses potential laureates. If there is a tie, the chairman of the committee casts the deciding vote. Next, the potential laureates must be approved by the Royal Swedish Academy of Sciences.
Members of the Ninth Class of the Academy vote in mid-October to determine the next laureate or laureates of the Prize in Economics. The first prize in economics was awarded in 1969 to Ragnar Frisch and Jan Tinbergen. In 2009, Elinor Ostrom became the first woman awarded the prize. The prize's broad scope makes it available to researchers in such topics as political science and psychology; moreover, the composition of the Economics Prize Committee changed to include two non-economists. This has not been confirmed by the Economics Prize Committee; the members of the 2007 Economics Prize Committee were still dominated by economists, as the secretary and four of the five members were professors of economics. Some critics argue that the prestige of the Prize in Economics derives in part from its association with the Nobel Prizes. Among them is the Swedish human rights lawyer Peter Nobel, a great-grandson of Ludvig Nobel, who accuses the awarding institution of misusing his family's name. He explained that Nobel despised people who cared more about profits than society's well-being, and that this does not matter in the natural sciences.
31.
Frank Lauren Hitchcock
–
Frank Lauren Hitchcock was an American mathematician and physicist known for his formulation of the transportation problem in 1941. Hitchcock did his preparatory study at Phillips Andover Academy. He entered Harvard University and completed his bachelor's degree in 1896; then he began teaching, first in Paris and then at Kenyon College in Gambier, Ohio. From 1904 to 1906 he taught chemistry at North Dakota State University. Hitchcock then returned to Massachusetts and began to teach at the Massachusetts Institute of Technology and to study at the graduate level at Harvard. In 1910 he obtained a Ph.D. Hitchcock stayed at MIT until retirement, publishing his analysis of optimal distribution in 1941. Frank Hitchcock was descended from New England forebears; his mother was Susan Ida Porter and his father was Elisha Pike Hitchcock. His parents married on June 27, 1866, and Frank was born March 6, 1875, in New York City. He had two sisters, Mary E. Hitchcock and Viola M. Hitchcock, and two brothers, George P. Hitchcock and Ernest Van Ness Hitchcock. Although Frank was born in New York City, he was raised in Pittsford, Vermont. Frank married Margaret Johnson Blakely in Paris, France on May 25, 1899, and they had three children: Lauren Blakely, John Edward, and George Blakely (born January 12, 1910). At the time of his death Frank had 11 grandchildren and 6 great-grandsons. Selected publications: 1910, Vector Functions of a Point. 1915, A Classification of Quadratic Vector Functions, Proceedings of the National Academy of Sciences of the United States of America 1, 177 to 183. 1917, On the simultaneous formulation of two vector functions, Proceedings of the Royal Irish Academy Section A 34, 1 to 10. 1920, A study of the vector product Vφαθβ, Proceedings of the Royal Irish Academy Section A 35, 30 to 37. 1920, A Thermodynamic Study of Electrolytic Solutions, Proceedings of the National Academy of Sciences of the United States of America 6, 186 to 197.
1920, An Identical Relation Connecting Seven Vectors. 1921, The Axes of a Quadratic Vector, Proceedings AAAS 56, 331 to 351. 1921, with Norbert Wiener, A New Vector Method in Integral Equations. 1923, On Double Polyadics, with Application to the Linear Matrix Equation, Proceedings AAAS 58, 355 to 395. 1923, Identities Satisfied by Algebraic Point Functions in N-space, Proceedings AAAS 58, 399 to 421. 1923, with Clark S. Robinson, Differential Equations in Applied Chemistry, John Wiley & Sons (now available from Archive.org). 1923, A Method for the Numerical Solution of Integral Equations. 1924, The Coincident Points of Two Algebraic Transformations. 1922, A Solution of the Linear Matrix Equation by Double Multiplication. Obituary: Dr. Frank L. Hitchcock, Mathematician, Professor Emeritus at M.I.T.
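The transportation problem Hitchcock formulated asks for a minimum-cost shipping plan from sources with fixed supplies to destinations with fixed demands. The sketch below uses made-up supplies, demands, and unit costs (not data from Hitchcock's paper) and brute-forces a balanced 2×2 instance, where fixing one cell determines the whole plan; real instances would be solved by linear programming.

```python
# A tiny, illustrative instance of the transportation problem:
# two sources supply two destinations; find the cheapest plan.
supply = [20, 30]          # units available at each source
demand = [25, 25]          # units required at each destination
cost = [[4, 6],            # cost[i][j]: unit cost, source i -> destination j
        [5, 3]]

def total_cost(x):
    return sum(cost[i][j] * x[i][j] for i in range(2) for j in range(2))

best, best_plan = None, None
# In a balanced 2x2 problem, choosing x[0][0] = a fixes the other three
# cells, because row sums must equal supplies and column sums demands.
for a in range(min(supply[0], demand[0]) + 1):
    plan = [[a, supply[0] - a],
            [demand[0] - a, supply[1] - (demand[0] - a)]]
    if all(v >= 0 for row in plan for v in row):   # feasibility check
        c = total_cost(plan)
        if best is None or c < best:
            best, best_plan = c, plan

print(best_plan, best)  # [[20, 0], [5, 25]] 180
```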
32.
George Dantzig
–
George Bernard Dantzig was an American mathematical scientist who made important contributions to operations research, computer science, economics, and statistics. Dantzig is known for his development of the simplex algorithm, an algorithm for solving linear programming problems. In statistics, Dantzig solved two open problems in statistical theory, which he had mistaken for homework after arriving late to a lecture by Jerzy Neyman. Dantzig was Professor Emeritus of Transportation Sciences and Professor of Operations Research at Stanford. Born in Portland, Oregon, George Bernard Dantzig was named after George Bernard Shaw, the Irish writer. His father, Tobias Dantzig, was a Baltic German mathematician and linguist. Dantzig's parents met during their study at the Sorbonne in Paris, where Tobias studied mathematics under Henri Poincaré, after whom Dantzig's brother was named. The Dantzigs immigrated to the United States, where they settled in Portland. Early in the 1920s the Dantzig family moved from Baltimore to Washington. George Dantzig received his B.S. in mathematics and physics from the University of Maryland in 1936, in what is now part of the University of Maryland College of Computer, Mathematical, and Natural Sciences. He earned his master's degree in mathematics from the University of Michigan in 1938. With the outbreak of World War II, Dantzig took a leave of absence from the doctoral program at Berkeley to join the U.S. Air Force Office of Statistical Control. In 1946, he returned to Berkeley to complete the requirements of his program. Although he had a faculty offer from Berkeley, he returned to the Air Force as mathematical advisor to the comptroller. In 1952 Dantzig joined the mathematics division of the RAND Corporation. By 1960 he became a professor in the Department of Industrial Engineering at UC Berkeley, and in 1966 he joined the Stanford faculty as Professor of Operations Research and of Computer Science.
A year later, the Program in Operations Research became a full-fledged department, and in 1973 he founded the Systems Optimization Laboratory there. On a sabbatical leave that year, he headed the Methodology Group at the International Institute for Applied Systems Analysis in Laxenburg. Later he became the C. A. Criley Professor of Transportation Sciences at Stanford, and kept going, well beyond his mandatory retirement in 1985. He was a member of the National Academy of Sciences and the National Academy of Engineering. The Mathematical Programming Society honored Dantzig by creating the George B. Dantzig Prize, bestowed every three years since 1982 on one or two people who have made a significant impact in the field of mathematical programming. Dantzig died on May 13, 2005, in his home in Stanford, California, of complications from diabetes and cardiovascular disease. Dantzig's seminal work allows the airline industry, for example, to schedule crews; based on his work, tools have been developed that shipping companies use to determine how many planes they need and where their delivery trucks should be deployed. Linear programming is used in manufacturing, revenue management, telecommunications, advertising, architecture, and circuit design. An event in Dantzig's life became the origin of a famous story in 1939, while he was a graduate student at UC Berkeley. Near the beginning of a class for which Dantzig was late, professor Jerzy Neyman wrote two examples of famously unsolved statistics problems on the blackboard. When Dantzig arrived, he assumed that the two problems were a homework assignment and wrote them down.
33.
Simplex algorithm
–
In mathematical optimization, Dantzig's simplex algorithm is a popular algorithm for linear programming. The name of the algorithm is derived from the concept of a simplex and was suggested by T. S. Motzkin. Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial cones; the simplicial cones in question are the corners of a geometric object called a polytope. The shape of this polytope is defined by the constraints applied to the objective function. There is a straightforward process to convert any linear program into one in standard form, so this results in no loss of generality. In geometric terms, the feasible region defined by all values of x such that Ax ≤ b and xᵢ ≥ 0 is a convex polytope. In this context a point of the region is known as a feasible solution. It can be shown that for a linear program in standard form, if the objective function has a maximum value on the feasible region, then it has this value on at least one extreme point of the region. The simplex algorithm applies this insight by walking along edges of the polytope to extreme points with greater and greater objective values. This continues until the maximum value is reached or an unbounded edge is visited, concluding that the problem has no bounded solution. The solution of a linear program is accomplished in two steps. In the first step, known as Phase I, a starting extreme point is found. Depending on the nature of the program this may be trivial; in general, the possible results of Phase I are either that a basic feasible solution is found or that the feasible region is empty. In the latter case the linear program is called infeasible. In the second step, Phase II, the simplex algorithm is applied using the basic feasible solution found in Phase I as a starting point. The possible results from Phase II are either an optimum basic feasible solution or an infinite edge on which the objective function is unbounded below. George Dantzig worked on planning methods for the US Army Air Force during World War II using a desk calculator. During 1946 his colleague challenged him to mechanize the planning process in order to entice him into not taking another job.
Dantzig formulated the problem as linear inequalities inspired by the work of Wassily Leontief. However, Dantzig's core insight was to realize that most such ground rules can be translated into a linear objective function that needs to be maximized. Development of the simplex method was evolutionary and happened over a period of about a year.
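The walk along polytope edges described above can be made concrete with a minimal tableau implementation. The sketch below handles only the easy case — maximize c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0, so the all-slack basis is feasible and Phase I is unnecessary — and omits anti-cycling rules; it is illustrative, not a robust solver.

```python
def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (with b >= 0),
    using the standard tableau method with slack variables."""
    m, n = len(A), len(c)
    # Tableau: one row per constraint (with slack columns and RHS),
    # plus the objective row with negated costs.
    tab = [A[i][:] + [1.0 if k == i else 0.0 for k in range(m)] + [b[i]]
           for i in range(m)]
    tab.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))          # start from the all-slack basis
    while True:
        # Entering variable: most negative reduced cost.
        piv_col = min(range(n + m), key=lambda j: tab[-1][j])
        if tab[-1][piv_col] >= -1e-9:
            break                           # optimal: no improving direction
        # Leaving variable: minimum ratio test keeps the next point feasible.
        ratios = [(tab[i][-1] / tab[i][piv_col], i)
                  for i in range(m) if tab[i][piv_col] > 1e-9]
        if not ratios:
            raise ValueError("objective is unbounded")
        _, piv_row = min(ratios)
        basis[piv_row] = piv_col
        p = tab[piv_row][piv_col]           # pivot: normalize, then eliminate
        tab[piv_row] = [v / p for v in tab[piv_row]]
        for i in range(m + 1):
            if i != piv_row:
                f = tab[i][piv_col]
                tab[i] = [a - f * r for a, r in zip(tab[i], tab[piv_row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = tab[i][-1]
    return x, tab[-1][-1]

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
x, value = simplex([3.0, 2.0], [[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0])
print(x, value)  # [4.0, 0.0] 12.0
```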
34.
Game theory
–
Game theory is the study of mathematical models of conflict and cooperation between intelligent rational decision-makers. Game theory is used in economics, political science, and psychology, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers. Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets. His paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility. This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields, with the Nobel Memorial Prize in Economic Sciences going to game theorist Jean Tirole in 2014; John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern, mathematical game theory. The first known discussion of game theory occurred in a letter written by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat, in 1713. In this letter, Waldegrave provides a mixed-strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation.
In 1913 Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems. The Danish mathematician Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed-point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was proved false. Game theory did not really exist as a unique field until John von Neumann published a paper in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.
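For two-person zero-sum games of the kind von Neumann studied, a 2×2 payoff matrix with no saddle point has a textbook closed-form mixed-strategy solution: the row player plays row 1 with probability p = (d − c)/(a − b − c + d), and the game's value is v = (ad − bc)/(a − b − c + d). The formula and the matching-pennies example below are standard material, not taken from this article.

```python
from fractions import Fraction

def mixed_value_2x2(M):
    """Row player's optimal mixed strategy and the game value for a
    2x2 zero-sum game (payoffs M are to the row player), assuming
    no saddle point so both pure strategies are played."""
    (a, b), (c, d) = M
    denom = a - b - c + d
    p = Fraction(d - c, denom)          # probability of playing row 1
    v = Fraction(a * d - b * c, denom)  # expected value to the row player
    return p, v

# Matching pennies: both players should randomize 50/50; the value is 0.
p, v = mixed_value_2x2([[1, -1], [-1, 1]])
print(p, v)  # 1/2 0
```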
35.
Abundance of the chemical elements
–
The abundance of a chemical element is a measure of the occurrence of the element relative to all other elements in a given environment. Abundance is measured in one of three ways: by mass fraction, by mole fraction, or by volume fraction. Most abundance values in this article are given as mass fractions. For example, the abundance of oxygen in water can be measured in two ways: the mass fraction is about 89%, because that is the fraction of water's mass which is oxygen. However, the mole fraction is about 33.33%, because only 1 atom of the 3 atoms in a water molecule is oxygen. The abundance of chemical elements in the universe is dominated by the large amounts of hydrogen and helium which were produced in the Big Bang. The remaining elements, making up only about 2% of the universe, were largely produced by supernovae. Lithium, beryllium and boron are rare because, although they are produced by nuclear fusion, they are then destroyed by other reactions in stars. The elements from carbon to iron are relatively more common in the universe because of the ease of making them in supernova nucleosynthesis. Elements of higher atomic number than iron become progressively more rare in the universe, because they increasingly absorb stellar energy in being produced. Elements with even atomic numbers are generally more common than their neighbors in the periodic table. The abundance of elements in the Sun and outer planets is similar to that in the universe. Due to solar heating, the elements of Earth and the rocky planets of the Solar System have undergone an additional depletion of volatile hydrogen, helium, neon, and nitrogen. The crust, mantle, and core of the Earth show evidence of chemical segregation plus some sequestration by density: lighter silicates of aluminum are found in the crust, with more magnesium silicate in the mantle, while metallic iron and nickel compose the core.
The abundance of elements in specialized environments, such as atmospheres or oceans, differs again. The elements — that is, ordinary matter made of protons, neutrons, and electrons — are only a small part of the content of the Universe. Cosmological observations suggest that only 4.6% of the universe's energy comprises the visible baryonic matter that constitutes stars and planets; the rest is made up of dark energy and dark matter. Hydrogen is the most abundant element in the Universe; helium is second. However, after this, the rank of abundance does not continue to correspond to the atomic number: oxygen has abundance rank 3, but atomic number 8. All others are substantially less common. Heavier elements were mostly produced much later, inside of stars. Hydrogen and helium are estimated to make up roughly 74% and 24% of all baryonic matter in the universe respectively. Despite comprising only a small fraction of the universe, the remaining heavy elements can greatly influence astronomical phenomena.
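The oxygen-in-water example above can be reproduced directly. The sketch below uses standard atomic weights (assumed round values, good to a few decimal places) to compute both fractions for H₂O:

```python
# Mass fraction vs. mole (atom) fraction of oxygen in water, H2O,
# reproducing the ~89% and ~33.3% figures quoted in the text.
M_H, M_O = 1.008, 15.999           # standard atomic weights, g/mol

mass_fraction = M_O / (2 * M_H + M_O)   # oxygen's share of the molar mass
atom_fraction = 1 / 3                    # 1 oxygen atom out of 3 atoms

print(f"{mass_fraction:.1%}")  # 88.8%
print(f"{atom_fraction:.1%}")  # 33.3%
```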
36.
Observable universe
–
There are at least two trillion galaxies in the observable universe, containing more stars than all the grains of sand on planet Earth. Assuming the universe is isotropic, the distance to the edge of the observable universe is roughly the same in every direction. That is, the observable universe is a spherical volume centered on the observer. Every location in the Universe has its own observable universe, which may or may not overlap with the one centered on Earth. The word observable used in this sense does not depend on whether modern technology actually permits detection of radiation from an object in this region. It simply indicates that it is possible in principle for light or other signals from the object to reach an observer on Earth. In practice, we can see light only from as far back as the time of photon decoupling in the recombination epoch. That is when particles were first able to emit photons that were not quickly re-absorbed by other particles; before then, the Universe was filled with a plasma that was opaque to photons. The detection of gravitational waves indicates there is now a possibility of detecting signals from before the recombination epoch. The surface of last scattering is the collection of points in space at the exact distance that photons from the time of photon decoupling just reach us today. These are the photons we detect today as cosmic microwave background radiation. However, with future technology, it may be possible to observe the still older relic neutrino background, or even more distant events via gravitational waves. It is estimated that the diameter of the observable universe is about 28.5 gigaparsecs. The total mass of ordinary matter in the universe can be calculated using the critical density and the diameter of the observable universe. Some parts of the Universe are too far away for the light emitted since the Big Bang to have had time to reach Earth. In the future, light from distant galaxies will have had more time to travel, so additional regions will become observable.
This fact can be used to define a type of cosmic event horizon whose distance from the Earth changes over time. Both popular and professional research articles in cosmology often use the term universe to mean observable universe. It is plausible that the galaxies within our observable universe represent only a fraction of the galaxies in the Universe. If the Universe is finite but unbounded, it is also possible that the Universe is smaller than the observable universe. In this case, what we take to be very distant galaxies may actually be duplicate images of nearby galaxies. It is difficult to test this hypothesis experimentally because different images of a galaxy would show different eras in its history, and consequently might appear quite different.
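The mass estimate mentioned above can be sanity-checked numerically. The sketch below assumes round illustrative values — a Hubble constant of 70 km/s/Mpc and the 4.6% baryon fraction quoted earlier, neither a precision figure — computes the critical density ρ_c = 3H₀²/(8πG), and multiplies by the volume of a sphere of diameter 28.5 Gpc:

```python
import math

# Order-of-magnitude estimate of the ordinary (baryonic) mass of the
# observable universe from the critical density. H0 and the baryon
# fraction are assumed round values, not precise measurements.
G = 6.674e-11                       # gravitational constant, m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.086e22           # 70 km/s/Mpc converted to 1/s
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3

R = 14.25 * 3.086e25                # radius: half of 28.5 Gpc, in metres
V = 4 / 3 * math.pi * R**3          # volume of the observable universe
baryon_fraction = 0.046             # ~4.6% of the energy content is baryonic

baryonic_mass = baryon_fraction * rho_crit * V
print(rho_crit, baryonic_mass)      # roughly 9e-27 kg/m^3 and 1.5e53 kg
```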
37.
Leonid Khachiyan
–
Leonid Genrikhovich Khachiyan was a Soviet mathematician of Armenian descent who taught computer science at Rutgers University. He was most famous for his ellipsoid algorithm for linear programming. Khachiyan was born in St. Petersburg and moved to Moscow with his parents at age 9. There he later earned a Ph.D. in computational mathematics in 1978. In 1982 he won the prestigious Fulkerson Prize from the Mathematical Programming Society and the American Mathematical Society for outstanding papers in the area of discrete mathematics. In 1989 he joined Cornell University's School of Operations Research and Industrial Engineering as a professor, and he had been at Rutgers since 1990. He wrote a series of papers with Bahman Kalantari on various matrix scaling and balancing problems. Khachiyan is survived by his wife of 20 years and two daughters, who currently live in the United States. He is also survived by his father, a professor of theoretical mechanics, and his mother, a retired civil engineer. Further reading: In Memoriam, Leonid Khachiyan, from the Computer Science Department, Rutgers University; SIAM News, Leonid Khachiyan, 1952–2005: An Appreciation; The Mathematics Genealogy Project, Leonid Khachiyan.
38.
Interior-point method
–
Interior point methods are a certain class of algorithms that solve linear and nonlinear convex optimization problems. John von Neumann suggested an interior point method for linear programming which was neither a polynomial-time method nor an efficient method in practice; in fact, it turned out to be slower than the commonly used simplex method. In 1984, Narendra Karmarkar developed a method for linear programming called Karmarkar's algorithm, which runs in provably polynomial time and is also very efficient in practice. It enabled solutions of linear programming problems that were beyond the capabilities of the simplex method. Contrary to the simplex method, it reaches a best solution by traversing the interior of the feasible region. The method can be generalized to convex programming based on a self-concordant barrier function used to encode the convex set; any convex optimization problem can be transformed into minimizing a linear function over a convex set by converting to the epigraph form. The idea of encoding the feasible set using a barrier and designing barrier methods was studied by Anthony V. Fiacco, Garth P. McCormick, and others in the early 1960s. These ideas were developed for general nonlinear programming, but they were later abandoned due to the presence of more competitive methods for this class of problems. Yurii Nesterov and Arkadi Nemirovski came up with a class of such barriers that can be used to encode any convex set; they guarantee that the number of iterations of the algorithm is bounded by a polynomial in the dimension. Khachiyan's ellipsoid method was already a polynomial-time algorithm; however, it was too slow to be of practical interest. The class of primal-dual path-following interior point methods is considered the most successful; Mehrotra's predictor-corrector algorithm provides the basis for most implementations of this class of methods.
The primal-dual method's idea is easy to demonstrate for constrained nonlinear optimization. For simplicity, consider the all-inequality version of a nonlinear optimization problem: minimize f(x) subject to c_i(x) ≥ 0 for i = 1, …, m, x ∈ ℝ^n. The logarithmic barrier function associated with this problem is B(x, μ) = f(x) − μ ∑_{i=1}^{m} log(c_i(x)), where μ is a small positive scalar, sometimes called the barrier parameter. As μ converges to zero, the minimum of B(x, μ) should converge to a solution of the original problem, and we try to find those (x, λ) for which the gradient of the barrier function is zero. Introducing the dual variables λ_i = μ / c_i(x), the gradient condition can be written g(x) − A(x)^T λ = 0, where g is the gradient of f and the matrix A is the Jacobian of the constraints c(x). The intuition behind this condition is that the gradient of f should lie in the subspace spanned by the constraints' gradients. Applying Newton's method to the gradient condition together with the perturbed complementarity condition c_i(x) λ_i = μ, we get an equation for the update (p_x, p_λ). Because λ_i = μ / c_i(x) must stay positive, the condition λ ≥ 0 should be enforced at each step; this can be done by choosing an appropriate step length α.
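A minimal numerical sketch of the barrier idea, using a pure primal Newton iteration rather than the full primal-dual system; the toy problem (minimize x² subject to x ≥ 1, with optimum x = 1) is an illustrative assumption:

```python
# Log-barrier sketch for: minimize f(x) = x^2 subject to c(x) = x - 1 >= 0.
# B(x, mu) = x^2 - mu*log(x - 1); as mu -> 0 its minimizer approaches x = 1.
def barrier_minimize(mu, x):
    """Damped Newton iteration on B(x, mu), keeping x strictly feasible."""
    for _ in range(100):
        grad = 2 * x - mu / (x - 1)
        hess = 2 + mu / (x - 1) ** 2
        step = -grad / hess
        while x + step <= 1:      # halve the step until x stays > 1
            step *= 0.5
        x += step
        if abs(grad) < 1e-12:
            break
    return x

x = 2.0                # strictly feasible starting point
mu = 1.0
while mu > 1e-10:      # drive the barrier parameter toward zero
    x = barrier_minimize(mu, x)
    mu *= 0.1
```

Each inner solve stays inside the feasible region, and the outer loop follows the central path toward the constrained optimum, which is the behaviour the text describes for interior point methods.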
39.
Operations research
–
Operations research, or operational research in British usage, is a discipline that deals with the application of advanced analytical methods to help make better decisions. Further, operational analysis is used in the British military as an intrinsic part of capability development and management. In particular, operational analysis forms part of the Combined Operational Effectiveness and Investment Appraisals. Operations research is often considered to be a sub-field of applied mathematics; the terms management science and decision science are sometimes used as synonyms. Operations research is concerned with determining the maximum or minimum of some real-world objective. Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries; nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, OR also has ties to computer science. In the decades after the two world wars, the techniques were more widely applied to problems in business and industry. Early work in operational research was carried out by individuals such as Charles Babbage; Percy Bridgman brought operational research to bear on problems in physics in the 1920s. Modern operational research originated at the Bawdsey Research Station in the UK in 1937 and was the result of an initiative of the station's superintendent, A. P. Rowe. Rowe conceived the idea as a means to analyse and improve the working of the UK's early warning radar system. Initially he analysed the operating of the radar equipment and its communication networks, expanding later to include the operating personnel's behaviour. This revealed unappreciated limitations of the CH network and allowed action to be taken. Scientists in the United Kingdom who contributed included Patrick Blackett, Cecil Gordon and Solly Zuckerman; other names for the field included operational analysis and quantitative management.
During the Second World War close to 1,000 men and women in Britain were engaged in operational research; about 200 operational research scientists worked for the British Army. Patrick Blackett worked for several different organizations during the war. In 1941, Blackett moved from the RAE to the Navy, after first working with RAF Coastal Command. In 1941, Blackett's team at Coastal Command's Operational Research Section included two future Nobel prize winners and many other people who went on to be pre-eminent in their fields. They undertook a number of analyses that aided the war effort. Convoys travel at the speed of the slowest member, so small convoys can travel faster; it was also argued that small convoys would be harder for German U-boats to detect.
40.
Microeconomics
–
One goal of microeconomics is to analyze the market mechanisms that establish relative prices among goods and services and allocate limited resources among alternative uses. Microeconomics shows conditions under which free markets lead to desirable allocations; it also analyzes market failure, where markets fail to produce efficient results. Microeconomics also deals with the effects of economic policies on microeconomic aspects of the economy. Particularly in the wake of the Lucas critique, much of modern macroeconomic theory has been built upon microfoundations, i.e., based upon basic assumptions about micro-level behavior. Microeconomic theory typically begins with the study of a single rational individual. To economists, rationality means an individual possesses stable preferences that are both complete and transitive. The technical assumption that preference relations are continuous is needed to ensure the existence of a utility function. Microeconomic theory progresses by defining a competitive budget set, which is a subset of the consumption set. It is at this point that economists make the technical assumption that preferences are locally non-satiated (LNS). Without the assumption of LNS there is no guarantee that an individual would maximize utility. With the necessary tools and assumptions in place, the utility maximization problem is developed; it is the heart of consumer theory. The utility maximization problem attempts to explain the action axiom by imposing rationality axioms on consumer preferences, and it serves not only as the mathematical foundation of consumer theory but as a metaphysical explanation of it as well. That is, the utility maximization problem is used by economists to explain not only what or how individuals make choices but why as well. The utility maximization problem is a constrained optimization problem in which an individual seeks to maximize utility subject to a budget constraint.
Economists use the extreme value theorem to guarantee that a solution to the utility maximization problem exists: since the budget constraint is both bounded and closed, a solution exists. Economists call the solution to the utility maximization problem a Walrasian demand function or correspondence. The utility maximization problem has so far been developed by taking consumer tastes as the primitive. However, an alternative way to develop microeconomic theory is by taking consumer choice as the primitive; this model of microeconomic theory is referred to as revealed preference theory. The theory of supply and demand usually assumes that markets are perfectly competitive. This implies that there are many buyers and sellers in the market and none of them have the capacity to significantly influence prices of goods. In many real-life transactions, the assumption fails because some individual buyers or sellers have the ability to influence prices. Quite often, a sophisticated analysis is required to understand the demand-supply equation of a good. However, the model works well in situations meeting these assumptions.
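A minimal sketch of a utility maximization problem; the Cobb-Douglas utility, prices, and wealth below are illustrative assumptions, and a brute-force grid search over the budget set stands in for the analytic derivation of Walrasian demand:

```python
import math

# Maximize u(x, y) = sqrt(x*y) over the budget set p1*x + p2*y <= w
# by a coarse grid search (step 0.01 in each good).
p1, p2, w = 1.0, 2.0, 10.0

def utility(x, y):
    return math.sqrt(x * y)

best = max(
    ((a / 100, b / 100) for a in range(1001) for b in range(501)
     if p1 * a / 100 + p2 * b / 100 <= w),   # keep only affordable bundles
    key=lambda bundle: utility(*bundle),
)
```

For this utility the analytic Walrasian demand spends half of wealth on each good, i.e. x = w/(2·p1) = 5 and y = w/(2·p2) = 2.5, and the grid search recovers exactly that bundle.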
41.
Profit maximization
–
In economics, profit maximization is the short-run or long-run process by which a firm determines the price and output level that return the greatest profit. There are several approaches to this problem. Any costs incurred by a firm may be classed into two groups: fixed costs and variable costs. Fixed costs, which occur only in the short run, are incurred by the business at any level of output. These may include equipment maintenance, rent, and wages of employees whose numbers cannot be increased or decreased in the short run. Variable costs change with the level of output, increasing as more product is generated; materials consumed during production often have the largest impact on this category. Fixed cost and variable cost, combined, equal total cost. Revenue is the amount of money that a company receives from its business activities, usually from the sale of goods. Marginal cost and marginal revenue, depending on whether the calculus approach is taken or not, are defined as either the change in cost or revenue as each additional unit is produced, or the derivative of cost or revenue with respect to the quantity of output. For instance, taking the first definition, if it costs a firm 400 USD to produce 5 units and 480 USD to produce 6, the marginal cost of the sixth unit is 80 USD. To obtain the profit-maximizing output quantity, we start by recognizing that profit is equal to total revenue minus total cost. Given a table of costs and revenues at each quantity, we can compute equations or plot the data directly on a graph. The profit-maximizing output is the one at which this difference reaches its maximum. In the accompanying diagram, the linear total revenue curve represents the case in which the firm is a perfect competitor in the goods market, and thus cannot set its own selling price. The profit-maximizing output level is represented as the one at which total revenue is the height of C and total cost is the height of B. This output level is also the one at which the total profit curve is at its maximum. An equivalent perspective relies on the relationship that, for each unit sold, marginal profit equals marginal revenue minus marginal cost.
At the output level at which marginal revenue equals marginal cost, marginal profit is zero. Average total costs are represented by curve ATC, and total economic profit is represented by the area of the rectangle PABC; the optimum quantity is the same as the optimum quantity in the first diagram. If the firm is operating in an imperfectly competitive market, changes would have to be made to the diagrams; for example, the marginal revenue curve would have a negative gradient. In a competitive environment, more complicated profit maximization solutions involve the use of game theory. In some cases a firm's demand and cost conditions are such that marginal profits are greater than zero for all levels of production up to a certain maximum, so the firm maximizes profit by producing that maximum quantity. In other words, the profit-maximizing quantity and price can then be determined by setting marginal revenue equal to zero, since marginal revenue equals zero when the total revenue curve has reached its maximum value. An example would be an airline flight, where the marginal cost of carrying one more passenger is negligible until all the seats are filled.
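The table-based procedure described above can be made concrete. The cost schedule and the competitive price below are illustrative assumptions, not figures from the text:

```python
# Toy schedule for a perfectly competitive firm (price = 25 per unit).
price = 25
total_cost = {0: 10, 1: 30, 2: 45, 3: 65, 4: 90, 5: 120, 6: 155}
total_revenue = {q: price * q for q in total_cost}

# Profit = total revenue - total cost at each quantity; pick the maximum.
profit = {q: total_revenue[q] - total_cost[q] for q in total_cost}
best_q = max(profit, key=profit.get)

# Marginal view: the cost of each additional unit; the firm should keep
# producing while marginal revenue (= price here) is at least marginal cost.
marginal_cost = {q: total_cost[q] - total_cost[q - 1] for q in range(1, 7)}
```

In this schedule the marginal cost of the fourth unit exactly equals the price, so the last profitable unit has zero marginal profit, which matches the marginal-revenue-equals-marginal-cost rule in the text (quantities 3 and 4 tie at the maximum profit of 10).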
42.
Block matrix
–
In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices. Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned. Block matrix algebra arises in general from biproducts in categories of matrices. For example, a 4×4 matrix P can be partitioned into four 2×2 blocks P11, P12, P21, P22, and the partitioned matrix can then be written in terms of these blocks. It is possible to use a block-partitioned matrix product that involves only algebra on submatrices of the factors; the partitioning of the factors is not arbitrary, however: the column partition of the first factor must match the row partition of the second. The blocks of the resulting matrix C are calculated by C_{αβ} = ∑_{γ=1}^{s} A_{αγ} B_{γβ}, or, using the Einstein notation that implicitly sums over repeated indices, C_{αβ} = A_{αγ} B_{γβ}. If a matrix is partitioned into four blocks, it can be inverted blockwise, where the blocks A, B, C and D have arbitrary size. A block diagonal matrix is a block matrix that is a square matrix in which the main-diagonal blocks are square and all off-diagonal blocks are zero. A block diagonal matrix A with diagonal blocks A1, …, An is the direct sum of A1, …, An; it can also be indicated as A1 ⊕ A2 ⊕ … ⊕ An or diag(A1, A2, …, An). Any square matrix can trivially be considered a block diagonal matrix with only one block. For the determinant and trace, the following properties hold: det A = det A1 × ⋯ × det An and tr A = tr A1 + ⋯ + tr An. The inverse of a block diagonal matrix is another block diagonal matrix, composed of the inverses of each block. The eigenvalues and eigenvectors of A are simply those of A1, A2, …, and An combined. A block tridiagonal matrix is essentially a tridiagonal matrix but has submatrices in places of scalars: it has square submatrices Ak, Bk and Ck on the lower, main and upper diagonals respectively. Block tridiagonal matrices are often encountered in numerical solutions of engineering problems.
Optimized numerical methods for LU factorization are available, and hence efficient solution algorithms exist for equation systems with a block tridiagonal coefficient matrix. The Thomas algorithm, used for the efficient solution of equation systems involving a tridiagonal matrix, can also be applied using matrix operations to block tridiagonal matrices. A block Toeplitz matrix is another special block matrix, which contains blocks that are repeated down the diagonals of the matrix; the individual block matrix elements Aij must themselves be Toeplitz matrices. For any arbitrary matrices A and B, the direct sum of A and B, denoted by A ⊕ B, is the block diagonal matrix with A and B as its diagonal blocks. This operation generalizes naturally to arbitrary-dimensioned arrays. Note that any element in the direct sum of two vector spaces of matrices can be represented as a direct sum of two matrices. In linear algebra terms, the use of a block matrix corresponds to having a linear mapping thought of in terms of corresponding bunches of basis vectors; that again matches the idea of having distinguished direct sum decompositions of the domain and range.
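The blockwise product rule C_{αβ} = ∑_γ A_{αγ} B_{γβ} can be checked directly on a small example; the particular 4×4 matrices below are illustrative assumptions (pure-Python lists, no external libraries):

```python
# Verify that multiplying 2x2 blocks and reassembling gives the same
# result as the ordinary matrix product.

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def block(M, i, j):          # extract 2x2 block (i, j) of a 4x4 matrix
    return [row[2 * j:2 * j + 2] for row in M[2 * i:2 * i + 2]]

def assemble(blocks):        # reassemble a 2x2 grid of 2x2 blocks
    return [blocks[i][0][r] + blocks[i][1][r] for i in range(2) for r in range(2)]

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
B = [[1, 0, 2, 0], [0, 1, 0, 2], [3, 0, 4, 0], [0, 3, 0, 4]]

# C_{ij} = A_{i0} B_{0j} + A_{i1} B_{1j}, summed over the inner block index.
blockwise = assemble([[madd(matmul(block(A, i, 0), block(B, 0, j)),
                            matmul(block(A, i, 1), block(B, 1, j)))
                       for j in range(2)] for i in range(2)])
```

The reassembled blockwise product equals `matmul(A, B)` entry for entry, which is exactly the content of the summation formula above.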
43.
Linear programming
–
Linear programming is a method to achieve the best outcome in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming. More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half spaces. Its objective function is an affine function defined on this polyhedron. A linear programming algorithm finds a point in the polyhedron where this function has the smallest value, if such a point exists. The expression to be maximized or minimized is called the objective function. The inequalities Ax ≤ b and x ≥ 0 are the constraints which specify a convex polytope over which the function is to be optimized. In this context, two vectors are comparable when they have the same dimensions; if every entry in the first is less-than or equal-to the corresponding entry in the second, then we can say the first vector is less-than or equal-to the second vector. Linear programming can be applied to various fields of study. It is widely used in business and economics, and is also utilized for some engineering problems. Industries that use linear programming models include transportation, energy and telecommunications. It has proved useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design. The first linear programming formulation of a problem that is equivalent to the general linear programming problem was given by Leonid Kantorovich in 1939. He developed it during World War II as a way to plan expenditures and returns so as to reduce costs to the army. About the same time as Kantorovich, the Dutch-American economist T. C. Koopmans formulated classical economic problems as linear programs; Kantorovich and Koopmans later shared the 1975 Nobel prize in economics.
Dantzig independently developed a general linear programming formulation to use for planning problems in the US Air Force. In 1947, Dantzig also invented the simplex method, which for the first time efficiently tackled the linear programming problem in most cases. Dantzig provided formal proof in an unpublished report, A Theorem on Linear Inequalities, on January 5, 1948. Postwar, many industries found its use in their daily planning. Dantzig's original example was to find the best assignment of 70 people to 70 jobs. The computing power required to test all the permutations to select the best assignment is vast; the number of possible configurations exceeds the number of particles in the observable universe. However, it takes only a moment to find the optimum solution by posing the problem as a linear program.
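Because the optimum of a linear program, when it exists, is attained at a vertex of the feasible polytope, a tiny two-variable LP can be solved by enumerating intersections of constraint boundaries. The LP below is an illustrative assumption, and this brute-force vertex enumeration is of course not the simplex method, which pivots between vertices instead of listing them all:

```python
from itertools import combinations

# Toy LP: maximize 3x + 2y subject to
#   x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0.
constraints = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]  # a*x + b*y <= c

def intersect(c1, c2):
    """Intersection point of the two boundary lines, or None if parallel."""
    (a1, b1, d1), (a2, b2, d2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
```

Here the feasible vertices are (0, 0), (4, 0), (0, 2) and (3, 1), and the objective is maximized at (4, 0) with value 12.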
44.
Packing problems
–
Packing problems are a class of optimization problems in mathematics that involve attempting to pack objects together into containers. The goal is to pack a single container as densely as possible or to pack all objects using as few containers as possible. Many of these problems can be related to real-life packaging, storage and transportation issues. Each packing problem has a dual covering problem, which asks how many of the same objects are required to completely cover every region of the container, where objects are allowed to overlap. In a bin packing problem, you are given containers and a set of objects, some or all of which must be packed into one or more containers. The set may contain different objects with their sizes specified, or an object of a fixed dimension that can be used repeatedly. Usually the packing must be without overlaps between goods and other goods or the container walls. In some variants, the aim is to find the configuration that packs a single container with the maximal density; more commonly, the aim is to pack all the objects into as few containers as possible. In some variants the overlapping is allowed but should be minimized. Many of these problems, when the container size is increased in all directions, become equivalent to the problem of packing objects as densely as possible in infinite Euclidean space. This problem is relevant to a number of disciplines and has received significant attention. The Kepler conjecture postulated an optimal solution for packing spheres hundreds of years before it was proven correct by Thomas Callister Hales. Many other shapes have received attention, including ellipsoids and Platonic and Archimedean solids such as tetrahedra. These problems are mathematically distinct from the ideas in the circle packing theorem. The related circle packing problem deals with packing circles, possibly of different sizes, on a surface. The counterparts of a circle in other dimensions can never be packed with complete efficiency in dimensions larger than one.
That is, there will always be unused space if you are only packing circles. The most efficient way of packing circles, hexagonal packing, produces approximately 91% efficiency. In three dimensions, the face-centered cubic lattice offers the best lattice packing of spheres, and is believed to be the optimal of all packings. With simple sphere packings in three dimensions there are nine possible definable packings. The 8-dimensional E8 lattice and 24-dimensional Leech lattice have also been proven to be optimal in their respective dimensions. Cubes can easily be arranged to fill space completely, the most natural packing being the cubic honeycomb. No other Platonic solid can tile space on its own, but tetrahedra can achieve a packing of at least 85%. One of the best packings of regular dodecahedra is based on the aforementioned face-centered cubic lattice. Tetrahedra and octahedra together can fill all of space in an arrangement known as the tetrahedral-octahedral honeycomb.
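The two densities quoted above follow from the standard closed forms for hexagonal circle packing and face-centered cubic sphere packing, which can be checked directly:

```python
import math

# Standard closed-form packing densities.
hexagonal_density = math.pi / (2 * math.sqrt(3))   # circles in the plane
fcc_density = math.pi / (3 * math.sqrt(2))         # spheres, FCC lattice
```

Rounding the first value to a percentage recovers the "approximately 91%" figure cited in the text; the second comes out near 74%.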
45.
Vertex cover
–
In the mathematical discipline of graph theory, a vertex cover of a graph is a set of vertices such that each edge of the graph is incident to at least one vertex of the set. The problem of finding a minimum vertex cover is an optimization problem in computer science and is a typical example of an NP-hard optimization problem that has an approximation algorithm. Its decision version, the vertex cover problem, was one of Karp's 21 NP-complete problems and is therefore a classical NP-complete problem in complexity theory. Furthermore, the vertex cover problem is fixed-parameter tractable and a central problem in parameterized complexity theory. The minimum vertex cover problem can be formulated as a linear program whose dual linear program is the maximum matching problem. Formally, a vertex cover V′ of a graph G is a subset of the vertices such that every edge of G has at least one endpoint in V′; such a set is said to cover the edges of G. The following figure shows two examples of vertex covers, with some vertex cover V′ marked in red. A minimum vertex cover is a vertex cover of smallest possible size, and the vertex cover number τ is the size of a minimum vertex cover. The following figure shows examples of minimum vertex covers in the previous graphs. The set of all vertices is a vertex cover. The endpoints of any maximal matching form a vertex cover. The complete bipartite graph K_{m,n} has a minimum vertex cover of size τ = min(m, n). A set of vertices is a vertex cover if and only if its complement is an independent set; consequently, the number of vertices of a graph is equal to its minimum vertex cover number plus the size of a maximum independent set. The minimum vertex cover problem is the optimization problem of finding a smallest vertex cover in a given graph. INSTANCE: Graph G. OUTPUT: Smallest number k such that G has a vertex cover of size k. If the problem is stated as a decision problem, it is called the vertex cover problem. INSTANCE: Graph G and positive integer k. QUESTION: Does G have a vertex cover of size at most k?
The vertex cover problem is an NP-complete problem: it was one of Karp's 21 NP-complete problems, and it is often used in computational complexity theory as a starting point for NP-hardness proofs. Assume that every vertex v has an associated cost c(v) ≥ 0. The minimum vertex cover problem can be formulated as the following integer linear program: minimize ∑_{v ∈ V} c(v) x_v, subject to x_u + x_v ≥ 1 for every edge {u, v} ∈ E, and x_v ∈ {0, 1} for every vertex v ∈ V.
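For the unweighted case (all costs c(v) = 1), the integer program amounts to finding a smallest set of vertices touching every edge, which a brute-force search makes concrete; the small example graph is an illustrative assumption:

```python
from itertools import combinations

# Brute-force minimum vertex cover: try vertex subsets in order of size
# and return the first one covering every edge (constraint x_u + x_v >= 1).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
vertices = sorted({v for e in edges for v in e})

def is_cover(subset):
    return all(u in subset or v in subset for u, v in edges)

min_cover = next(set(c) for k in range(len(vertices) + 1)
                 for c in combinations(vertices, k) if is_cover(set(c)))
```

On this graph (a triangle 0-1-2 with a pendant edge 2-3), no single vertex covers all four edges, so the minimum cover has size 2, for example {0, 2}. Exhaustive search is of course exponential, reflecting the NP-completeness discussed above.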
46.
Matching (graph theory)
–
In the mathematical discipline of graph theory, a matching or independent edge set in a graph is a set of edges without common vertices. It may also be an entire graph consisting of edges without common vertices. Bipartite matching is a special case of a network flow problem. Given a graph G = (V, E), a matching M in G is a set of pairwise non-adjacent edges. A vertex is matched if it is an endpoint of one of the edges in the matching; otherwise it is unmatched. A maximal matching is a matching M of a graph G that is not a subset of any other matching; in other words, a matching M of a graph G is maximal if every edge in G has a non-empty intersection with at least one edge in M. The following figure shows examples of maximal matchings in three graphs. A maximum matching is a matching that contains the largest possible number of edges; there may be many maximum matchings. The matching number ν of a graph G is the size of a maximum matching. Note that every maximum matching is maximal, but not every maximal matching is a maximum matching. The following figure shows examples of maximum matchings in the three graphs. A perfect matching is a matching which matches all vertices of the graph; that is, every vertex of the graph is incident to exactly one edge of the matching. The figure above shows an example of a perfect matching. Every perfect matching is maximum and hence maximal. In some literature, the term complete matching is used. In the above figure, only one part shows a perfect matching. A perfect matching is also an edge cover. Thus, ν ≤ ρ; that is, the size of a maximum matching is no larger than the size of a minimum edge cover. A near-perfect matching is one in which exactly one vertex is unmatched; this can only occur when the graph has an odd number of vertices. In the above figure, one part shows a near-perfect matching. If, for every vertex in a graph, there is a near-perfect matching that omits only that vertex, the graph is said to be factor-critical. Given a matching M, an alternating path is a path that begins with an unmatched vertex and whose edges belong alternately to the matching and not to the matching. An augmenting path is an alternating path that starts from and ends on free (unmatched) vertices.
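The distinction between maximal and maximum matchings can be seen on a small path graph, which is an illustrative assumption here; a greedy scan yields a maximal matching that is not maximum:

```python
from itertools import combinations

edges = [(0, 1), (1, 2), (2, 3)]   # path on 4 vertices

def is_matching(S):
    used = [v for e in S for v in e]
    return len(used) == len(set(used))   # no vertex appears twice

# Greedy: keep each scanned edge whose endpoints are both still unmatched.
# The scan order is deliberately chosen to produce a bad (but maximal) result.
greedy, matched = [], set()
for u, v in [(1, 2), (0, 1), (2, 3)]:
    if u not in matched and v not in matched:
        greedy.append((u, v))
        matched |= {u, v}

# Maximum matching by brute force over all edge subsets.
maximum = max((S for k in range(len(edges) + 1)
               for S in combinations(edges, k) if is_matching(S)), key=len)
```

Taking the middle edge (1, 2) first blocks both outer edges, giving a maximal matching of size 1, while the maximum matching {(0, 1), (2, 3)} has size 2. This is exactly the "maximal but not maximum" situation described in the text.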
47.
Edge cover
–
In graph theory, an edge cover of a graph is a set of edges such that every vertex of the graph is incident to at least one edge of the set. In computer science, the minimum edge cover problem is the problem of finding an edge cover of minimum size. It is an optimization problem that belongs to the class of covering problems. Formally, an edge cover of a graph G is a set of edges C such that each vertex in G is incident with at least one edge in C; the set C is said to cover the vertices of G. The following figure shows examples of edge coverings in two graphs. A minimum edge covering is an edge covering of smallest possible size, and the edge covering number ρ is the size of a minimum edge covering. The following figure shows examples of minimum edge coverings. Note that the figure on the right is not only an edge cover but also a matching; in particular, it is a perfect matching, a matching M in which each vertex is incident with exactly one edge in M. A perfect matching is always a minimum edge covering. The set of all edges is an edge cover, assuming that there are no degree-0 vertices. The complete bipartite graph K_{m,n} has edge covering number max(m, n). A smallest edge cover can be found in polynomial time by finding a maximum matching and extending it greedily so that all vertices are covered. In the following figure, a maximum matching is marked with red. On the other hand, the related problem of finding a smallest vertex cover is an NP-hard problem. The edge cover problem is a special case of the set cover problem in which the elements of the universe are the vertices.
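The polynomial-time method described above, finding a maximum matching and extending it greedily, can be sketched as follows; brute force stands in for a real matching algorithm, and the example graph is an illustrative assumption:

```python
from itertools import combinations

# Star with center 0 and leaves 1, 2, 3, plus the extra edge (3, 4).
edges = [(0, 1), (0, 2), (0, 3), (3, 4)]

def is_matching(S):
    used = [v for e in S for v in e]
    return len(used) == len(set(used))

# Step 1: a maximum matching (found here by exhaustive search).
matching = max((S for k in range(len(edges) + 1)
                for S in combinations(edges, k) if is_matching(S)), key=len)

# Step 2: greedily add one incident edge for every still-uncovered vertex.
cover = list(matching)
covered = {v for e in cover for v in e}
for v in {v for e in edges for v in e} - covered:
    cover.append(next(e for e in edges if v in e))
```

Here the maximum matching has size 2, leaving one vertex uncovered, so the resulting minimum edge cover has size 3; in general this construction gives an edge cover of size n minus the matching number.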
48.
Independent set (graph theory)
–
In graph theory, an independent set or stable set is a set of vertices in a graph, no two of which are adjacent. That is, it is a set S of vertices such that for every two vertices in S, there is no edge connecting the two. Equivalently, each edge in the graph has at most one endpoint in S. The size of an independent set is the number of vertices it contains. Independent sets have also been called internally stable sets. A maximal independent set is either an independent set such that adding any other vertex to the set forces the set to contain an edge, or the set of all vertices of the empty graph. A maximum independent set is an independent set of largest possible size for a given graph G; this size is called the independence number of G and denoted α. The problem of finding such a set is called the maximum independent set problem and is an NP-hard optimization problem. As such, it is unlikely that there exists an efficient algorithm for finding a maximum independent set of a graph. Every maximum independent set also is maximal, but the converse implication does not necessarily hold. A set is independent if and only if it is a clique in the graph's complement, so the two concepts are complementary. In fact, sufficiently large graphs with no large cliques have large independent sets, a theme that is explored in Ramsey theory. A set is independent if and only if its complement is a vertex cover; therefore, the sum of the size of the largest independent set α and the size of a minimum vertex cover equals the number of vertices in the graph. A vertex coloring of a graph G corresponds to a partition of its vertex set into independent subsets; hence the minimal number of colors needed in a coloring, the chromatic number χ, is at least the quotient of the number of vertices in G and the independence number α. In a bipartite graph with no isolated vertices, the number of vertices in a maximum independent set equals the number of edges in a minimum edge covering. An independent set that is not the subset of another independent set is called maximal. Every graph contains at most 3^{n/3} maximal independent sets, but many graphs have far fewer.
The number of maximal independent sets in n-vertex cycle graphs is given by the Perrin numbers, and the number in n-vertex path graphs by the Padovan sequence; therefore, both numbers are proportional to powers of 1.324718, the plastic number. In computer science, several computational problems related to independent sets have been studied. In the maximum independent set problem, the input is an undirected graph, and the output is a maximum independent set in the graph. If there are multiple maximum independent sets, only one need be output. This problem is sometimes referred to as vertex packing.
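The complement relations stated above can be verified exhaustively on a small cycle graph (an illustrative assumption): a vertex set S is independent exactly when its complement is a vertex cover, and the two extremal sizes add up to the number of vertices.

```python
from itertools import combinations

# The 5-cycle C5: alpha = 2, tau = 3, and alpha + tau = 5.
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]
V = set(range(n))

def independent(S):
    return all(not (u in S and v in S) for u, v in edges)

def vertex_cover(S):
    return all(u in S or v in S for u, v in edges)

# S independent  <=>  V \ S is a vertex cover, for every subset S.
assert all(independent(set(S)) == vertex_cover(V - set(S))
           for k in range(n + 1) for S in combinations(V, k))

alpha = max(k for k in range(n + 1)
            for S in combinations(V, k) if independent(set(S)))
tau = min(k for k in range(n + 1)
          for S in combinations(V, k) if vertex_cover(set(S)))
```

For C5 this yields α = 2 and τ = 3, illustrating the identity α + τ = n from the text.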
49.
Linear programming relaxation
–
That is, for each constraint of the form x_i ∈ {0, 1} of the original integer program, one instead uses a pair of linear constraints 0 ≤ x_i ≤ 1. The resulting relaxation is a linear program, hence the name. Consider the set cover problem, the linear programming relaxation of which was first considered by Lovász. In this problem, one is given as input a family of sets F; the task is to find a subfamily, with as few sets as possible, having the same union as F. To formulate this as a 0-1 integer program, form an indicator variable x_i for each set S_i, that takes the value 1 when S_i belongs to the chosen subfamily and 0 when it does not. Then a valid cover can be described by an assignment of values to the indicator variables satisfying the constraints x_i ∈ {0, 1} and, for each element e of the union, ∑_{i : e ∈ S_i} x_i ≥ 1; the minimum set cover corresponds to the assignment of indicator variables satisfying these constraints and minimizing the linear objective function min ∑_i x_i. As a specific example of the set cover problem, consider an instance F of three sets for which there are three optimal set covers, each of which includes two of the three given sets. Thus, the value of the objective function of the corresponding 0-1 integer program is 2. However, there is a fractional solution in which each set is assigned the weight 1/2, all constraints are satisfied, and the objective function value is 3/2. Thus, in this example, the linear programming relaxation has a value differing from that of the unrelaxed 0-1 integer program. The linear programming relaxation of an integer program may be solved using any standard linear programming technique. If the optimal solution to the linear program happens to have all variables either 0 or 1, it is also an optimal solution to the original integer program; in general, the relaxation provides an optimistic bound on the integer program's solution. Since the set cover problem has solution values that are integers, the optimal solution quality is at least the ceiling ⌈3/2⌉ = 2. Thus, in this instance, despite having a different value from the unrelaxed problem, the linear programming relaxation gives us a tight lower bound on the solution quality of the original problem.
Linear programming relaxation is a standard technique for designing approximation algorithms for hard optimization problems. In this application, an important concept is the integrality gap, the maximum ratio between the solution quality of the integer program and that of its relaxation; in a maximization problem the fraction is reversed. The integrality gap is always at least 1. In the example above, the instance F shows an integrality gap of 4/3: the integer solution has value 2 while the fractional optimum has value 3/2. Typically, the integrality gap translates into the approximation ratio of an approximation algorithm. This is because an approximation algorithm relies on some rounding strategy that finds, for every relaxed solution of size M_frac, an integer solution of size at most R · M_frac, where R is the rounding ratio.
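A sketch of the integrality-gap computation; the three-set instance below is an assumed example exhibiting the behaviour described (the text's own instance F is elided), namely three two-element sets over a three-element universe:

```python
from itertools import combinations

universe = {1, 2, 3}
family = [{1, 2}, {2, 3}, {1, 3}]

# Exact 0-1 optimum: smallest number of sets whose union is the universe.
ilp_opt = min(k for k in range(1, len(family) + 1)
              for sub in combinations(family, k)
              if set().union(*sub) == universe)

# Fractional solution: weight 1/2 on every set covers each element with
# total weight exactly 1, so it is feasible for the relaxation.
weights = [0.5] * len(family)
assert all(sum(w for s, w in zip(family, weights) if e in s) >= 1
           for e in universe)
lp_value = sum(weights)                  # 3/2
integrality_gap = ilp_opt / lp_value     # 2 / (3/2) = 4/3
```

No single set covers the universe while any two do, so the integer optimum is 2 against a fractional optimum of 3/2, giving the gap of 4/3 discussed above.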
50.
Approximation algorithms
–
In computer science and operations research, approximation algorithms are algorithms used to find approximate solutions to optimization problems. Unlike heuristics, which usually only find reasonably good solutions reasonably fast, here one wants provable solution quality; ideally, the approximation is optimal up to a small constant factor. Approximation algorithms are increasingly being used for problems where exact polynomial-time algorithms are known but are too expensive due to the input size. A typical example of an approximation algorithm is the one for vertex cover in graphs: find an uncovered edge, add both of its endpoints to the cover, and repeat until no uncovered edge remains. It is clear that the resulting cover is at most twice as large as the optimal one; this is a constant-factor approximation algorithm with a factor of 2. NP-hard problems vary greatly in their approximability: some, such as the bin packing problem, can be approximated within any factor greater than 1 (such a family of approximation algorithms is called a polynomial-time approximation scheme, or PTAS); others are impossible to approximate within any constant, or even polynomial, factor unless P = NP. NP-hard problems can often be expressed as integer programs and solved exactly in exponential time. Many approximation algorithms emerge from the linear programming relaxation of the integer program. Not all approximation algorithms are suitable for all practical applications; they often use IP/LP/semidefinite solvers, complex structures, or sophisticated algorithmic techniques which lead to difficult implementation problems. Also, some approximation algorithms have impractical running times even though they are polynomial time. Yet the study of even very expensive algorithms is not a completely theoretical pursuit, as they can yield valuable insights. A classic example is the initial PTAS for Euclidean TSP due to Sanjeev Arora, which had prohibitive running time; yet within a year, Arora refined the ideas into a linear-time algorithm. Another example was the discovery of a PTAS for dense CSP problems by Arora, Karger, and others; in such scenarios, approximation algorithms must compete with the corresponding direct IP formulations.
For some approximation algorithms it is possible to prove properties about the approximation of the optimum result. A ρ-approximation algorithm A is an algorithm for which the value f(x) of the approximate solution A(x) to an instance x is within a factor ρ of the value OPT of an optimum solution: OPT ≤ f(x) ≤ ρ·OPT if ρ > 1, and ρ·OPT ≤ f(x) ≤ OPT if ρ < 1. The factor ρ is called the relative performance guarantee. An approximation algorithm has an absolute performance guarantee or bounded error c if |f(x) − OPT| ≤ c for every instance x. Similarly, the performance guarantee, R(x, y), of a solution y to an instance x is defined as R(x, y) = max(OPT/f(y), f(y)/OPT). Clearly, the performance guarantee is greater than or equal to 1, and equal to 1 if and only if y is an optimal solution. If an algorithm A guarantees to return solutions with a performance guarantee of at most r(n), then A is said to be an r(n)-approximation algorithm.
51.
Independent set problem
–
In graph theory, an independent set or stable set is a set of vertices in a graph, no two of which are adjacent. That is, it is a set S of vertices such that for every two vertices in S, there is no edge connecting the two. Equivalently, each edge in the graph has at most one endpoint in S. The size of an independent set is the number of vertices it contains. Independent sets have also been called internally stable sets. A maximal independent set is either an independent set such that adding any other vertex to the set forces the set to contain an edge, or the set of all vertices of the empty graph. A maximum independent set is an independent set of largest possible size for a given graph G; this size is called the independence number of G, and denoted α(G). The problem of finding such a set is called the maximum independent set problem and is an NP-hard optimization problem. As such, it is unlikely that there exists an efficient algorithm for finding a maximum independent set of a graph. Every maximum independent set also is maximal, but the converse implication does not necessarily hold. A set is independent if and only if it is a clique in the graph's complement, so the two concepts are complementary. In fact, sufficiently large graphs with no large cliques have large independent sets, a theme that is explored in Ramsey theory. A set is independent if and only if its complement is a vertex cover; therefore, the sum of the independence number α(G) and the size of a minimum vertex cover τ(G) equals the number of vertices. A vertex coloring of a graph G corresponds to a partition of its vertex set into independent subsets; hence the minimal number of colors needed in a coloring, the chromatic number χ(G), is at least the quotient of the number of vertices in G divided by the independence number α(G). In a bipartite graph with no isolated vertices, the number of vertices in a maximum independent set equals the number of edges in a minimum edge covering. An independent set that is not the subset of another independent set is called maximal. Every graph contains at most 3^(n/3) maximal independent sets, but many graphs have far fewer.
The number of maximal independent sets in n-vertex cycle graphs is given by the Perrin numbers, and the number in n-vertex path graphs by the Padovan sequence; therefore, both numbers are proportional to powers of 1.324718, the plastic number. In computer science, several computational problems related to independent sets have been studied. In the maximum independent set problem, the input is an undirected graph, and the output is a maximum independent set in the graph. If there are multiple maximum independent sets, only one need be output. This problem is sometimes referred to as vertex packing.
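As a concrete, if exponential-time, illustration of the definitions above, a maximum independent set of a tiny graph can be found by brute force; the helper names here are assumptions of this sketch:

```python
from itertools import combinations

def is_independent(edges, subset):
    """True if no edge has both endpoints inside `subset`."""
    return all(not (u in subset and v in subset) for u, v in edges)

def maximum_independent_set(vertices, edges):
    """Brute-force search; exponential time, fine only for tiny graphs."""
    for size in range(len(vertices), -1, -1):  # try larger sets first
        for subset in combinations(vertices, size):
            s = set(subset)
            if is_independent(edges, s):
                return s
    return set()

# The 5-cycle has independence number alpha = 2; by the complement
# relation above, its minimum vertex cover then has size 5 - 2 = 3.
verts = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
mis = maximum_independent_set(verts, edges)
assert len(mis) == 2
```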
52.
Vertex cover problem
–
In the mathematical discipline of graph theory, a vertex cover of a graph is a set of vertices such that each edge of the graph is incident to at least one vertex of the set. The problem of finding a minimum vertex cover is a classical optimization problem in computer science and is a typical example of an NP-hard optimization problem that has an approximation algorithm. Its decision version, the vertex cover problem, was one of Karp's 21 NP-complete problems and is therefore a classical NP-complete problem in complexity theory. Furthermore, the vertex cover problem is fixed-parameter tractable and a central problem in parameterized complexity theory. The minimum vertex cover problem can be formulated as a linear program whose dual linear program is the maximum matching problem. Such a set is said to cover the edges of G; the following figure shows two examples of vertex covers, with some vertex cover V′ marked in red. A minimum vertex cover is a vertex cover of smallest possible size. The vertex cover number τ is the size of a minimum vertex cover. The following figure shows examples of minimum vertex covers in the previous graphs. The set of all vertices is a vertex cover, and the endpoints of any maximal matching form a vertex cover. The complete bipartite graph K(m, n) has a minimum vertex cover of size τ = min(m, n). A set of vertices is a vertex cover if and only if its complement is an independent set; consequently, the number of vertices of a graph is equal to its minimum vertex cover number plus the size of a maximum independent set. The minimum vertex cover problem is the optimization problem of finding a smallest vertex cover in a given graph. INSTANCE: A graph G. OUTPUT: The smallest number k such that G has a vertex cover of size k. If the problem is stated as a decision problem, it is called the vertex cover problem. INSTANCE: A graph G and a positive integer k. QUESTION: Does G have a vertex cover of size at most k?
The vertex cover problem is an NP-complete problem: it was one of Karp's 21 NP-complete problems, and it is often used in computational complexity theory as a starting point for NP-hardness proofs. Assume that every vertex v has an associated cost c(v) ≥ 0; the minimum vertex cover problem can then be formulated as an integer linear program.
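A sketch of that integer program, in the usual textbook form (one 0/1 variable per vertex; this presentation is an assumption, reconstructed from the standard formulation rather than quoted from the article):

```latex
\begin{aligned}
\text{minimize}\quad   & \sum_{v \in V} c(v)\, x_v \\
\text{subject to}\quad & x_u + x_v \ge 1 && \text{for every edge } \{u, v\} \in E \\
                       & x_v \in \{0, 1\} && \text{for every } v \in V
\end{aligned}
```

Relaxing the integrality constraint to 0 ≤ xv ≤ 1 gives the linear program mentioned above, whose dual is the (fractional) maximum matching problem; rounding every variable with xv ≥ 1/2 up to 1 yields a 2-approximation.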
53.
Dominating set problem
–
In graph theory, a dominating set for a graph G = (V, E) is a subset D of V such that every vertex not in D is adjacent to at least one member of D. The domination number γ(G) is the number of vertices in a smallest dominating set for G. The dominating set problem concerns testing whether γ(G) ≤ K for a given graph G and input K; it is a classical NP-complete decision problem in computational complexity theory. Therefore it is believed there is no efficient algorithm that finds a smallest dominating set for a given graph. Figures (a)–(c) show three examples of dominating sets for a graph. In each example, each vertex is adjacent to at least one red vertex. As Hedetniemi & Laskar note, the problem was studied from the 1950s onwards. Their bibliography lists over 300 papers related to domination in graphs. Let G be a graph with n ≥ 1 vertices and let Δ be the maximum degree of the graph. The following bounds on γ(G) are known. One vertex can dominate at most Δ other vertices, therefore γ(G) ≥ n/(1 + Δ). The set of all vertices is a dominating set in any graph, therefore γ(G) ≤ n. If there are no isolated vertices in G, then there are two disjoint dominating sets in G (see domatic partition for details); therefore in any graph without isolated vertices it holds that γ(G) ≤ n/2. An independent set is a dominating set if and only if it is a maximal independent set; thus the smallest independent dominating set is the same as the smallest maximal independent set, and its size is the independent domination number i(G), with γ(G) ≤ i(G). There are graph families in which a minimum maximal independent set is a minimum dominating set. For example, Allan & Laskar show that γ(G) = i(G) if G is a claw-free graph. A graph G is called a domination-perfect graph if γ(H) = i(H) in every induced subgraph H of G. Since an induced subgraph of a claw-free graph is claw-free, it follows that every claw-free graph is also domination-perfect.
Figures (a) and (b) show independent dominating sets, while figure (c) illustrates a dominating set that is not an independent set. For any graph G, its line graph L(G) is claw-free, and hence a minimum maximal independent set in L(G) is also a minimum dominating set in L(G). An independent set in L(G) corresponds to a matching in G, and a dominating set in L(G) corresponds to an edge dominating set in G; therefore a minimum maximal matching has the same size as a minimum edge dominating set. There exists a pair of polynomial-time L-reductions between the minimum dominating set problem and the set cover problem. These reductions show that an efficient algorithm for the minimum dominating set problem would provide an efficient algorithm for the set cover problem, and vice versa. Both problems are in fact Log-APX-complete. The set cover problem is a well-known NP-hard problem; the decision version of set covering was one of Karp's 21 NP-complete problems, which were shown to be NP-complete already in 1972.
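The set-cover connection suggests the classic greedy heuristic for dominating sets. This is a hedged sketch (the function name and dictionary-of-neighbor-sets representation are assumptions), mirroring the greedy set-cover algorithm rather than any algorithm from the article:

```python
def greedy_dominating_set(adj):
    """Greedy heuristic via the set-cover reduction: repeatedly pick the
    vertex whose closed neighborhood N[v] = {v} | N(v) dominates the most
    not-yet-dominated vertices. Gives an O(log n)-approximation, matching
    the Log-APX bound mentioned above."""
    undominated = set(adj)
    dom = set()
    while undominated:
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        dom.add(v)
        undominated -= {v} | adj[v]
    return dom

# Star K_{1,4}: the center dominates everything, so gamma = 1.
adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
assert greedy_dominating_set(adj) == {0}
```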
54.
Fractional coloring
–
Fractional coloring is a topic in a young branch of graph theory known as fractional graph theory. It is a generalization of ordinary graph coloring. In a traditional graph coloring, each vertex in a graph is assigned some color; in a fractional coloring, a set of colors is assigned to each vertex of a graph. The requirement about adjacent vertices still holds: if two vertices are joined by an edge, they must have no colors in common. Fractional graph coloring can be viewed as the linear programming relaxation of traditional graph coloring; indeed, fractional coloring problems are more amenable to a linear programming approach than traditional coloring problems. A b-fold coloring of a graph G is an assignment of sets of size b to the vertices of the graph such that adjacent vertices receive disjoint sets. An a:b-coloring is a b-fold coloring out of a available colors; equivalently, it can be defined as a homomorphism to the Kneser graph KG(a, b). The b-fold chromatic number χb(G) is the least a such that an a:b-coloring exists. The fractional chromatic number χf(G) is defined to be χf(G) = lim(b→∞) χb(G)/b = inf(b) χb(G)/b; note that the limit exists because χb(G) is subadditive in b. The fractional chromatic number can equivalently be defined in probabilistic terms: χf(G) is the smallest k for which there exists a probability distribution over the independent sets of G such that for each vertex v, an independent set S drawn from the distribution contains v with probability at least 1/k. Some properties of χf(G): χf(G) ≥ n(G)/α(G) and ω(G) ≤ χf(G) ≤ χ(G). Furthermore, the chromatic number exceeds the fractional chromatic number by at most a logarithmic factor. Here n(G) is the order of G, α(G) is the independence number, and ω(G) is the clique number. Kneser graphs give examples where χ(G)/χf(G) is arbitrarily large, since χ(KG(a, b)) = a − 2b + 2 while χf(KG(a, b)) = a/b. The fractional chromatic number χf(G) of a graph G can be obtained as a solution to a linear program. Let I(G) be the set of all independent sets of G. For each independent set I, define a nonnegative real variable xI. Then χf(G) is the minimum value of Σ(I) xI subject to Σ(I : v ∈ I) xI ≥ 1 for each vertex v.
The dual of this linear program computes the fractional clique number: that is, it seeks a weighting of the vertices of G, maximizing the total weight, such that the total weight assigned to any independent set is at most 1. The strong duality theorem of linear programming guarantees that the optimal solutions to both linear programs have the same value.
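The b-fold definition can be checked by brute force on a small graph. This sketch (function name assumed) verifies that the 5-cycle has χ2 = 5, consistent with the known value χf(C5) = 5/2:

```python
from itertools import combinations, product

def has_ab_coloring(n_vertices, edges, a, b):
    """Check by exhaustive search whether an a:b-coloring exists: each
    vertex gets a b-subset of a colors, adjacent vertices get disjoint
    sets. Exponential time; only for tiny graphs."""
    b_sets = [frozenset(c) for c in combinations(range(a), b)]
    for assignment in product(b_sets, repeat=n_vertices):
        if all(assignment[u].isdisjoint(assignment[v]) for u, v in edges):
            return True
    return False

# 5-cycle C5: chi_f = 5/2, so a 2-fold coloring needs 5 colors.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
assert not has_ab_coloring(5, c5, 4, 2)   # no (4:2)-coloring
assert has_ab_coloring(5, c5, 5, 2)       # a (5:2)-coloring exists
```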
55.
Convex set
–
In convex geometry, a convex set is a subset of an affine space that is closed under convex combinations. For example, a solid cube is a convex set, but anything that is hollow or has an indent, for example a crescent shape, is not convex. The boundary of a convex set is always a convex curve. The intersection of all convex sets containing a given subset A of Euclidean space is called the convex hull of A; it is the smallest convex set containing A. A convex function is a real-valued function defined on an interval with the property that its epigraph is a convex set. Convex minimization is a subfield of optimization that studies the problem of minimizing convex functions over convex sets. The branch of mathematics devoted to the study of properties of convex sets and convex functions is called convex analysis. The notion of a convex set can be generalized as described below. Let S be a vector space over the real numbers, or, more generally, over some ordered field. A set C in S is said to be convex if, for all x and y in C and all t in the interval [0, 1], the point (1 − t)x + ty also belongs to C. In other words, every point on the line segment connecting x and y is in C. This implies that a convex set in a real or complex topological vector space is path-connected, and therefore connected. Furthermore, C is strictly convex if every point on the line segment connecting x and y other than the endpoints is inside the interior of C. A set C is called absolutely convex if it is convex and balanced. The convex subsets of R are simply the intervals of R. Some examples of convex subsets of the Euclidean plane are solid regular polygons, solid triangles, and intersections of solid triangles. Some examples of convex subsets of Euclidean 3-dimensional space are the Archimedean solids and the Platonic solids; the Kepler–Poinsot polyhedra are examples of non-convex sets. A set that is not convex is called a non-convex set; the complement of a convex set, such as the epigraph of a concave function, is sometimes called a reverse convex set, especially in the context of mathematical optimization. If S is a convex set in n-dimensional space, then for any collection of r points u1, …, ur in S (r > 1), and for any nonnegative numbers λ1, …, λr with λ1 + ⋯ + λr = 1, one has ∑(k=1..r) λk uk ∈ S. A vector of this type is known as a convex combination of u1, …, ur.
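A small numerical illustration of that closing identity, taking the closed unit disk in the plane as the convex set S (the setup and names are assumptions of this sketch):

```python
import random

def in_unit_disk(p):
    """Membership test for the closed unit disk, a convex set."""
    return p[0] ** 2 + p[1] ** 2 <= 1.0 + 1e-9

random.seed(0)
# Four points inside the disk (the square [-0.7, 0.7]^2 lies within it).
points = [(random.uniform(-0.7, 0.7), random.uniform(-0.7, 0.7))
          for _ in range(4)]

# Random nonnegative weights lambda_k summing to 1.
w = [random.random() for _ in points]
total = sum(w)
w = [x / total for x in w]

combo = (sum(wi * p[0] for wi, p in zip(w, points)),
         sum(wi * p[1] for wi, p in zip(w, points)))
assert in_unit_disk(combo)  # the convex combination stays inside the set
```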
56.
Linear functional
–
In linear algebra, a linear functional or linear form is a linear map from a vector space to its field of scalars. The set of all linear functionals from V to k, Homk(V, k), forms a vector space over k with the operations of pointwise addition and scalar multiplication. This space is called the dual space of V, or sometimes the algebraic dual space. It is often written V∗ or V′ when the field k is understood. If V is a topological vector space, the space of continuous linear functionals — the continuous dual — is often simply called the dual space. If V is a Banach space, then so is its dual. To distinguish the ordinary dual space from the continuous dual space, the former is sometimes called the algebraic dual. In finite dimensions, every linear functional is continuous, so the continuous dual is the same as the algebraic dual. Suppose that vectors in the coordinate space Rn are represented as column vectors x = (x1, …, xn)T. For each row vector a = (a1, …, an) there is a linear functional f defined by f(x) = a1x1 + ⋯ + anxn; this is just the matrix product of the row vector a and the column vector x: f(x) = ax. Linear functionals first appeared in functional analysis, the study of vector spaces of functions. Let Pn denote the vector space of real-valued polynomial functions of degree ≤ n defined on an interval [a, b]. If c ∈ [a, b], then let evc : Pn → R be the evaluation functional evc(f) = f(c). The mapping f ↦ f(c) is linear since (f + g)(c) = f(c) + g(c) and (αf)(c) = α f(c). If x0, …, xn are n + 1 distinct points in [a, b], then the evaluation functionals evxi, i = 0, …, n, form a basis of the dual space of Pn. The integration functional I(f) = ∫ f(x) dx over [a, b] defines a linear functional on the space Pn of polynomials of degree ≤ n. If x0, …, xn are n + 1 distinct points in [a, b], then there are coefficients a0, …, an for which I(f) = a0 f(x0) + ⋯ + an f(xn) for all f ∈ Pn; this forms the foundation of the theory of numerical quadrature. It follows from the fact that the linear functionals evxi : f ↦ f(xi) defined above form a basis of the dual space of Pn. Linear functionals are particularly important in quantum mechanics: quantum mechanical systems are represented by Hilbert spaces, which are anti-isomorphic to their own dual spaces.
A state of a quantum mechanical system can be identified with a linear functional; for more information see bra–ket notation. In the theory of generalized functions, certain kinds of generalized functions called distributions can be realized as linear functionals on spaces of test functions.
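The quadrature coefficients a0, …, an can be computed by solving the moment equations Σi ai xi^k = ∫ x^k dx for k = 0, …, n. A sketch in exact rational arithmetic, taking [0, 1] as the interval; the node choice and helper name are assumptions:

```python
from fractions import Fraction as F

def quadrature_weights(nodes):
    """Solve sum_i a_i * x_i**k = integral_0^1 x**k dx = 1/(k+1), k = 0..n,
    so the weights integrate all polynomials of degree <= n exactly.
    Plain Gauss-Jordan elimination over exact rationals."""
    n = len(nodes)
    # Augmented moment system: row k is [x_0^k, ..., x_{n-1}^k | 1/(k+1)].
    A = [[F(x) ** k for x in nodes] + [F(1, k + 1)] for k in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                A[r] = [a - A[r][col] * b for a, b in zip(A[r], A[col])]
    return [A[r][n] for r in range(n)]

# Nodes 0, 1/2, 1 recover the Simpson's-rule weights 1/6, 4/6, 1/6.
weights = quadrature_weights([F(0), F(1, 2), F(1)])
```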
57.
Convex function
–
Equivalently, a function is convex if its epigraph is a convex set. Also equivalently, a twice-differentiable function is convex if its second derivative is greater than or equal to zero on its entire domain. Well-known examples of convex functions include the quadratic function x² and the exponential function eˣ. Convex functions play an important role in many areas of mathematics. They are especially important in the study of optimization problems, where they are distinguished by a number of convenient properties. For instance, a strictly convex function on an open set has no more than one minimum. In probability theory, a convex function applied to the expected value of a random variable is always less than or equal to the expected value of the convex function of the random variable. This result, known as Jensen's inequality, underlies many important inequalities. Let X be a convex set in a real vector space and let f : X → R be a function. f is called convex if: ∀x1, x2 ∈ X, ∀t ∈ [0, 1]: f(tx1 + (1 − t)x2) ≤ t f(x1) + (1 − t) f(x2). f is called strictly convex if: ∀x1 ≠ x2 ∈ X, ∀t ∈ (0, 1): f(tx1 + (1 − t)x2) < t f(x1) + (1 − t) f(x2). A function f is said to be concave if −f is convex. Suppose f is a function of one real variable defined on an interval, and let R(x1, x2) = (f(x1) − f(x2)) / (x1 − x2), the slope of the line through (x1, f(x1)) and (x2, f(x2)). f is convex if and only if R(x1, x2) is monotonically non-decreasing in x1 for every fixed x2; this characterization of convexity is quite useful to prove the following results. A convex function f defined on some open interval C is continuous on C and admits left and right derivatives, and these are monotonically non-decreasing. As a consequence, f is differentiable at all but at most countably many points. If C is closed, then f may fail to be continuous at the endpoints of C. A function is midpoint convex on an interval C if: ∀x1, x2 ∈ C: f((x1 + x2)/2) ≤ (f(x1) + f(x2))/2. This condition is only slightly weaker than convexity. For example, a real-valued Lebesgue measurable function that is midpoint convex will be convex by the Sierpiński theorem; in particular, a continuous function that is midpoint convex will be convex.
A differentiable function of one variable is convex on an interval if and only if its derivative is monotonically non-decreasing on that interval. If a function of one variable is differentiable and convex, then it is also continuously differentiable. These characterizations are stated here for the basic case of a function from the real numbers to the real numbers.
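A quick numerical spot-check of the defining inequality f(tx + (1 − t)y) ≤ t f(x) + (1 − t) f(y) for the convex function x² (a sketch; the sampling setup is an assumption):

```python
import random

def f(x):
    """The quadratic function, a standard example of a convex function."""
    return x * x

random.seed(1)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    t = random.random()
    # Defining inequality of convexity, with a small float tolerance.
    assert f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + 1e-9
```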
58.
Local minimum
–
Pierre de Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and minima of functions. As defined in set theory, the maximum and minimum of a set are the greatest and least elements in the set; unbounded infinite sets, such as the set of real numbers, have no minimum or maximum. A function f has a local maximum point at x∗ if f(x∗) ≥ f(x) for all x in X within distance ε of x∗; the value of the function at such a point is called a maximum value of the function. Similarly, the function has a local minimum point at x∗ if f(x∗) ≤ f(x) for all x in X within distance ε of x∗. A similar definition can be used when X is a topological space, since the definition just given can be rephrased in terms of neighbourhoods. Note that a global maximum point is always a local maximum point. In both the global and local cases, the concept of a strict extremum can be defined; note that a point is a strict global maximum point if and only if it is the unique global maximum point, and similarly for minimum points. A continuous real-valued function with a compact domain always has a maximum point and a minimum point; an important example is a function whose domain is a closed interval of real numbers. Finding global maxima and minima is the goal of mathematical optimization. If a function is continuous on a closed interval, then by the extreme value theorem global maxima and minima exist. Furthermore, a global maximum must either be a local maximum in the interior of the domain, or must lie on the boundary of the domain. So a method of finding a global maximum is to look at all the local maxima in the interior, and also look at the maxima of the points on the boundary, and take the largest one. Local extrema of differentiable functions can be found by Fermat's theorem. For any function that is defined piecewise, one finds a maximum by finding the maximum of each piece separately, and then seeing which one is largest. The function x² has a unique global minimum at x = 0. The function x³ has no global minima or maxima; although the first derivative is 0 at x = 0, this is an inflection point. The function x^(1/x) has a unique global maximum at x = e over the positive real numbers. The function x^(−x) has a unique global maximum over the positive real numbers at x = 1/e.
The function x³/3 − x has first derivative x² − 1; setting the first derivative to 0 and solving for x gives stationary points at −1 and +1. From the sign of the second derivative, 2x, we can see that −1 is a local maximum and +1 is a local minimum. Note that this function has no global maximum or minimum.
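The worked example above can be mirrored in a few lines of the second-derivative test (a minimal sketch; the function names are assumptions):

```python
# Second-derivative test for f(x) = x**3 / 3 - x, whose derivative
# f'(x) = x**2 - 1 has roots at -1 and +1.
def fprime(x):
    return x * x - 1

def fsecond(x):
    return 2 * x

stationary = [-1.0, 1.0]            # roots of f'(x) = 0
for x in stationary:
    assert abs(fprime(x)) < 1e-12   # confirm these are stationary points
    kind = "local max" if fsecond(x) < 0 else "local min"
    print(x, kind)                  # -1.0 local max, then 1.0 local min
```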
60.
Concave function
–
In mathematics, a concave function is the negative of a convex function. A concave function is also synonymously called concave downwards, concave down, or convex upwards. A real-valued function f on an interval is said to be concave if, for any x and y in the interval and for any α in [0, 1]: f((1 − α)x + αy) ≥ (1 − α)f(x) + αf(y). A function is called strictly concave if f((1 − α)x + αy) > (1 − α)f(x) + αf(y) for any α in (0, 1) and x ≠ y. For a function f : R → R, this definition merely states that for every z between x and y, the point (z, f(z)) on the graph of f is above the straight line joining the points (x, f(x)) and (y, f(y)). A function f is quasiconcave if the upper contour sets of the function, S(a) = {x : f(x) ≥ a}, are convex sets. A function f is concave over a convex set if and only if the function −f is a convex function over the set. Points where concavity changes are inflection points. The sum of two concave functions is itself concave, and so is the pointwise minimum of two concave functions, i.e. the set of concave functions on a given domain forms a semifield. If f is twice-differentiable, then f is concave if and only if f′′ is non-positive. If its second derivative is negative then it is strictly concave, but the converse is not true, as shown by f(x) = −x⁴. Any local maximum of a concave function is also a global maximum. A strictly concave function will have at most one global maximum. If f is concave and differentiable, then it is bounded above by its first-order Taylor approximation: f(y) ≤ f(x) + f′(x)(y − x). A continuous function on C is concave if and only if for any x and y in C: f((x + y)/2) ≥ (f(x) + f(y))/2. If a function f is concave and f(0) ≥ 0, then f is subadditive. The logarithm function f(x) = log x is concave on its domain (0, ∞), as its second derivative −1/x² is always negative. Any affine function f(x) = ax + b is both concave and convex, but neither strictly concave nor strictly convex. The sine function is concave on the interval [0, π]. The function f(B) = log |B|, where |B| is the determinant of a nonnegative-definite matrix B, is concave.
A practical example is rays bending in the computation of radiowave attenuation in the atmosphere.
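A midpoint-concavity spot-check for the logarithm; for log the inequality f((x + y)/2) ≥ (f(x) + f(y))/2 is exactly the AM–GM inequality (the sampling setup is an assumption of this sketch):

```python
import math
import random

random.seed(2)
for _ in range(1000):
    x, y = random.uniform(0.01, 100), random.uniform(0.01, 100)
    # Midpoint concavity of log, i.e. log of the arithmetic mean is at
    # least the log of the geometric mean; small float tolerance added.
    assert math.log((x + y) / 2) >= (math.log(x) + math.log(y)) / 2 - 1e-12
```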
63.
Simple polygon
–
In geometry a simple polygon /ˈpɒlɪɡɒn/ is a flat shape consisting of straight, non-intersecting line segments or sides that are joined pairwise to form a closed path. If the sides intersect then the polygon is not simple. The qualifier simple is frequently omitted, with the above definition then being understood to define a polygon in general. The definition given above ensures the following properties: a polygon encloses a region which always has a measurable area; the line segments that make up a polygon meet only at their endpoints, called vertices or less formally corners; exactly two edges meet at each vertex; and the number of edges always equals the number of vertices. Two edges meeting at a corner are usually required to form an angle that is not straight. Depending on the definition in use, the boundary may or may not form part of the polygon itself. A polygon in the plane is simple if and only if it is topologically equivalent to a circle; its interior is topologically equivalent to a disk. If a collection of non-crossing line segments forms the boundary of a region of the plane that is topologically equivalent to a disk, then this boundary is called a weakly simple polygon. In the image on the left, ABCDEFGHJKLM is a weakly simple polygon according to this definition. Referring to the image above, ABCM is the boundary of a planar region with a hole FGHJ. The cut ED connects the hole with the exterior and is traversed twice in the resulting weakly simple polygonal representation; this formalizes the notion that such a polygon allows segments to touch but not to cross. However, this type of weakly simple polygon does not need to form the boundary of a region, as its interior can be empty. For example, referring to the image above, the polygonal chain ABCBA is a weakly simple polygon according to this definition. Point-in-polygon testing involves determining, for a simple polygon P and a query point q, whether q lies inside P. Simple formulae are known for computing the polygon area, that is, the area of the interior of the polygon.
A polygon partition is a set of primitive units (for example, triangles) which do not overlap and whose union equals the polygon. A polygon partition problem is a problem of finding a partition which is minimal in some sense. A special case of polygon partition is polygon triangulation: dividing a simple polygon into triangles. Although convex polygons are easy to triangulate, triangulating a general simple polygon is more difficult because we have to avoid adding edges that cross outside the polygon. Nevertheless, Bernard Chazelle showed in 1991 that any simple polygon with n vertices can be triangulated in Θ(n) time. The same algorithm may also be used for determining whether a closed polygonal chain forms a simple polygon.
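Point-in-polygon testing by the standard ray-casting (even–odd) rule can be sketched as follows; boundary points are deliberately ignored for simplicity, and the names are assumptions of this sketch:

```python
def point_in_polygon(pt, poly):
    """Even-odd (ray casting) test for a simple polygon, assuming pt does
    not lie exactly on the boundary. Casts a horizontal ray to the right
    and counts edge crossings: an odd count means the point is inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:       # crossing is to the right of pt
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
assert point_in_polygon((2, 2), square)
assert not point_in_polygon((5, 2), square)
```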
64.
Criss-cross algorithm
–
In mathematical optimization, the criss-cross algorithm is any of a family of algorithms for linear programming. Like the simplex algorithm of George B. Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming: both algorithms visit all 2^D corners of a (perturbed) cube in dimension D in the worst case. However, when it is started at a random corner, the criss-cross algorithm on average visits only D additional corners. Thus, for the three-dimensional cube, the algorithm visits all 8 corners in the worst case. The criss-cross algorithm was published independently by Tamás Terlaky and by Zhe-Min Wang. In linear programming, the criss-cross algorithm pivots between a sequence of bases but differs from the simplex algorithm of George Dantzig. The criss-cross algorithm is simpler than the simplex algorithm, because the criss-cross algorithm only has one phase. Its pivoting rules are similar to the least-index pivoting rule of Bland, but while Bland's rule selects an entering variable by comparing values of reduced costs, the criss-cross algorithm uses only the signs of coefficients rather than their real-number order when deciding eligible pivots. The criss-cross algorithm has been applied to furnish constructive proofs of basic results in real linear algebra, such as the lemma of Farkas. While most simplex variants are monotonic in the objective, most variants of the criss-cross algorithm lack a monotone merit function, which can be a disadvantage in practice. The criss-cross algorithm works on a standard pivot tableau. In a general step, if the tableau is primal or dual infeasible, an infeasible row or column is selected as the pivot; an important property is that the selection is made on the union of the infeasible indices, and the standard version of the algorithm does not distinguish column and row indices. The time complexity of an algorithm counts the number of arithmetic operations sufficient for the algorithm to solve the problem.
For example, Gaussian elimination requires on the order of D³ operations, and so it is said to have polynomial time-complexity, because its complexity is bounded by a cubic polynomial. There are examples of algorithms that do not have polynomial-time complexity: for example, a generalization of Gaussian elimination called Buchberger's algorithm has for its complexity an exponential function of the problem data. Because exponential functions eventually grow much faster than polynomial functions, an exponential complexity implies that an algorithm has slow performance on large problems. Several algorithms for linear programming — Khachiyan's ellipsoidal algorithm and Karmarkar's projective algorithm — run in polynomial time; the ellipsoidal and projective algorithms were published before the criss-cross algorithm. However, like the simplex algorithm of Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming. Like the simplex algorithm, the criss-cross algorithm visits all 8 corners of the three-dimensional cube in the worst case; when it is initialized at a random corner of the cube, it visits only D additional corners on average. Trivially, the simplex algorithm takes on average D steps for a cube.
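The cubic operation count for Gaussian elimination mentioned above can be illustrated by counting the row-update operations during forward elimination (a rough sketch; the exact counting convention is an assumption):

```python
# Count the multiply-subtract operations performed while reducing a
# D x D matrix to upper triangular form by forward elimination.
def elimination_ops(D):
    ops = 0
    for col in range(D):                 # one pass per pivot column
        for row in range(col + 1, D):    # rows below the pivot
            ops += D - col               # entries updated along the row
    return ops

# The count approaches D**3 / 3, i.e. it is bounded by a cubic polynomial.
print(elimination_ops(10), 10 ** 3 // 3)
```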