An ambient space or ambient configuration space is the space surrounding an object. In mathematics, specifically in geometry and topology, an ambient space is the space surrounding a mathematical object. For example, a line may be studied in isolation, or it may be studied as an object in two-dimensional space, in which case the ambient space is the plane, or as an object in three-dimensional space, in which case the ambient space is three-dimensional. To see why this makes a difference, consider the statement "Lines that never meet are parallel." This is true if the ambient space is two-dimensional, but false if the ambient space is three-dimensional, because in the latter case the lines could be skew lines rather than parallel.

See also: configuration space, manifold and ambient manifold, submanifolds and hypersurfaces, Riemannian manifolds, Ricci curvature, differential form.
Linear programming is a method to achieve the best outcome in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming. More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, a set defined as the intersection of finitely many half-spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine function defined on this polyhedron. A linear programming algorithm finds a point in the polyhedron where this function has the smallest (or largest) value, if such a point exists. Linear programs are problems that can be expressed in canonical form as: maximize cᵀx subject to Ax ≤ b and x ≥ 0, where x represents the vector of variables to be determined, c and b are vectors of known coefficients, A is a known matrix of coefficients, and cᵀ denotes the transpose of c. The expression to be maximized or minimized is called the objective function.
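As a concrete sketch of this canonical form, the following self-contained Python example (the problem data are invented for illustration) maximizes cᵀx subject to Ax ≤ b and x ≥ 0 for two variables. It brute-forces the optimum by enumerating vertices of the feasible polytope, relying on the fact that a linear objective attains its maximum at a vertex; a real solver such as the simplex method is far more efficient, and this is not how production LP codes work:

```python
from itertools import combinations

# Constraints in the form a.x <= b, including the nonnegativity
# constraints -x <= 0 and -y <= 0. (Problem data invented for
# illustration: x + 2y <= 4, x <= 2, x >= 0, y >= 0.)
A = [(1.0, 2.0), (1.0, 0.0), (-1.0, 0.0), (0.0, -1.0)]
b = [4.0, 2.0, 0.0, 0.0]
c = (1.0, 1.0)  # objective: maximize x + y

def solve_2x2(r1, b1, r2, b2):
    """Solve r1.p = b1, r2.p = b2 by Cramer's rule; None if singular."""
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if abs(det) < 1e-12:
        return None
    x = (b1 * r2[1] - b2 * r1[1]) / det
    y = (r1[0] * b2 - r2[0] * b1) / det
    return (x, y)

def feasible(p):
    """Check all constraints a.p <= b (with a small tolerance)."""
    return all(r[0] * p[0] + r[1] * p[1] <= bi + 1e-9 for r, bi in zip(A, b))

# Every vertex of the polytope is the intersection of two constraint
# boundaries, and a linear objective attains its maximum at a vertex.
vertices = []
for i, j in combinations(range(len(A)), 2):
    p = solve_2x2(A[i], b[i], A[j], b[j])
    if p is not None and feasible(p):
        vertices.append(p)

best = max(vertices, key=lambda p: c[0] * p[0] + c[1] * p[1])
print(best)  # the optimum lies at the vertex (2.0, 1.0)
```

Here the polytope has vertices (0, 0), (2, 0), (2, 1) and (0, 2), and the objective x + y is maximized at (2, 1).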
The inequalities Ax ≤ b and x ≥ 0 are the constraints, which specify a convex polytope over which the objective function is to be optimized. In this context, two vectors are comparable entrywise: if every entry in the first is less than or equal to the corresponding entry in the second, the first vector is said to be less than or equal to the second. Linear programming can be applied to various fields of study. It is used in mathematics and, to a lesser extent, in business and for some engineering problems. Industries that use linear programming models include transportation, telecommunications and manufacturing; it has proven useful in modeling diverse types of problems in planning, scheduling and design. The problem of solving a system of linear inequalities dates back at least as far as Fourier, who in 1827 published a method for solving them, and after whom the method of Fourier–Motzkin elimination is named. In 1939 a linear programming formulation of a problem equivalent to the general linear programming problem was given by the Soviet economist Leonid Kantorovich, who also proposed a method for solving it.
Linear programming is a method Kantorovich developed during World War II to plan expenditures and returns in order to reduce costs to the army and increase losses imposed on the enemy. His work was initially neglected in the USSR. About the same time as Kantorovich, the Dutch-American economist T. C. Koopmans formulated classical economic problems as linear programs; Kantorovich and Koopmans later shared the 1975 Nobel Memorial Prize in Economics. In 1941, Frank Lauren Hitchcock also formulated transportation problems as linear programs and gave a solution similar to the simplex method. (Hitchcock died in 1957, and the Nobel Prize is not awarded posthumously.) During 1946–1947, George B. Dantzig independently developed a general linear programming formulation to use for planning problems in the US Air Force. In 1947, Dantzig invented the simplex method, which for the first time efficiently tackled the linear programming problem in most cases. When Dantzig arranged a meeting with John von Neumann to discuss his simplex method, von Neumann conjectured the theory of duality by realizing that the problem he had been working on in game theory was equivalent.
Dantzig provided formal proof in an unpublished report, "A Theorem on Linear Inequalities", on January 5, 1948. In the post-war years, many industries applied linear programming in their daily planning. Dantzig's original example was to find the best assignment of 70 people to 70 jobs; the computing power required to test all the permutations in order to select the best assignment is vast. However, it takes only a moment to find the optimum solution by posing the problem as a linear program and applying the simplex algorithm; the theory behind linear programming drastically reduces the number of possible solutions that must be checked. The linear programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough in the field came in 1984, when Narendra Karmarkar introduced a new interior-point method for solving linear programming problems. Linear programming is a widely used field of optimization for several reasons. Many practical problems in operations research can be expressed as linear programming problems.
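The scale of Dantzig's 70-by-70 example is easy to check with the Python standard library: the number of possible assignments is 70!, far beyond any exhaustive search:

```python
import math

# Number of ways to assign 70 people to 70 jobs: 70! permutations.
n_assignments = math.factorial(70)
print(len(str(n_assignments)))  # 101: 70! is a 101-digit number
```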
Certain special cases of linear programming, such as network flow problems and multicommodity flow problems, are considered important enough to have generated much research on specialized algorithms for their solution. A number of algorithms for other types of optimization problems work by solving LP problems as sub-problems. Ideas from linear programming have inspired many of the central concepts of optimization theory, such as duality and the importance of convexity and its generalizations. Linear programming was used in the early formation o
In mathematics, geometric topology is the study of manifolds and maps between them, particularly embeddings of one manifold into another. Geometric topology as an area distinct from algebraic topology may be said to have originated in the 1935 classification of lens spaces by Reidemeister torsion, which required distinguishing spaces that are homotopy equivalent but not homeomorphic; this was the origin of simple homotopy theory. The use of the term geometric topology to describe these topics seems to have originated rather recently. Manifolds differ radically in behavior in high and low dimension. High-dimensional topology refers to manifolds of dimension 5 and above, or in relative terms, embeddings in codimension 3 and above. Low-dimensional topology is concerned with questions in dimensions up to 4, or embeddings in codimension up to 2. Dimension 4 is special: in some respects (topologically), dimension 4 is high-dimensional, while in other respects (differentiably), dimension 4 is low-dimensional. Thus the topological classification of 4-manifolds is in principle easy, and the key questions are: does a topological manifold admit a differentiable structure, and if so, how many?
Notably, the smooth case of dimension 4 is the last open case of the generalized Poincaré conjecture. The distinction arises because surgery theory works in dimension 5 and above, so the behavior of manifolds in dimension 5 and above is controlled algebraically by surgery theory. In dimension 4 and below, surgery theory does not work, and other phenomena occur. Indeed, one approach to discussing low-dimensional manifolds is to ask "what would surgery theory predict to be true, were it to work?" and then to understand low-dimensional phenomena as deviations from this. The precise reason for the difference at dimension 5 is that the Whitney embedding theorem, the key technical trick which underlies surgery theory, requires 2·2 + 1 = 5 dimensions, so that the 2-dimensional Whitney disk can be generically embedded. The Whitney trick allows one to "unknot" knotted spheres, or more precisely, to remove self-intersections of immersions. In surgery theory, the key step is in the middle dimension; thus when the middle dimension has codimension more than 2, the Whitney trick works. The key consequence of this is Smale's h-cobordism theorem, which works in dimension 5 and above and forms the basis for surgery theory.
A modification of the Whitney trick, known as Casson handles, can work in 4 dimensions: because there are not enough dimensions, a Whitney disk introduces new kinks, which can be resolved by another Whitney disk, leading to an infinite sequence of disks. The limit of this tower yields a topological but not differentiable map; hence surgery works topologically but not differentiably in dimension 4. In all dimensions, the fundamental group of a manifold is an important invariant and determines much of the structure. A manifold is orientable if it has a consistent choice of orientation, and a connected orientable manifold has exactly two different possible orientations. In this setting, various equivalent formulations of orientability can be given, depending on the desired application and level of generality. Formulations applicable to general topological manifolds employ methods of homology theory, whereas for differentiable manifolds more structure is present, allowing a formulation in terms of differential forms. An important generalization of the notion of orientability of a space is that of orientability of a family of spaces parameterized by some other space, for which an orientation must be selected in each of the spaces in a way that varies continuously with respect to changes in the parameter values.
A handle decomposition of an m-manifold M is a union ∅ = M₋₁ ⊂ M₀ ⊂ M₁ ⊂ M₂ ⊂ ⋯ ⊂ Mₘ₋₁ ⊂ Mₘ = M, where each Mᵢ is obtained from Mᵢ₋₁ by the attaching of i-handles. A handle decomposition is to a manifold what a CW-decomposition is to a topological space: in many regards the purpose of a handle decomposition is to have a language analogous to CW-complexes, but adapted to the world of smooth manifolds. Thus an i-handle is the smooth analogue of an i-cell. Handle decompositions of manifolds arise via Morse theory; the modification of handle structures is linked to Cerf theory. Local flatness is a property of a submanifold in a topological manifold of larger dimension. In the category of topological manifolds, locally flat submanifolds play a role similar to that of embedded submanifolds in the category of smooth manifolds.
A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is a solution of the equation x² = −1. Because no real number satisfies this equation, i is called an imaginary number. For the complex number a + bi, a is called the real part and b is called the imaginary part. Despite the historical nomenclature "imaginary", complex numbers are regarded in the mathematical sciences as just as "real" as the real numbers and are fundamental in many aspects of the scientific description of the natural world. Complex numbers allow solutions to certain equations that have no real solutions. For example, the equation (x + 1)² = −9 has no real solution, since the square of a real number cannot be negative. Complex numbers provide a solution to this problem; the idea is to extend the real numbers with an indeterminate i, taken to satisfy the relation i² = −1, so that solutions to equations like the preceding one can be found. In this case the solutions are −1 + 3i and −1 − 3i, as can be verified using the fact that i² = −1: ((−1 + 3i) + 1)² = (3i)² = 3²i² = −9, and ((−1 − 3i) + 1)² = (−3i)² = (−3)²i² = −9.
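The verification above can be replayed with Python's built-in complex type, where 1j plays the role of i (a quick sanity check, not part of the original argument):

```python
# Both claimed solutions of (x + 1)**2 = -9, checked with complex arithmetic.
solutions = (-1 + 3j, -1 - 3j)
squares = [(x + 1) ** 2 for x in solutions]
print(squares)  # [(-9+0j), (-9+0j)]
```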
According to the fundamental theorem of algebra, every non-constant polynomial equation with real or complex coefficients in a single variable has a solution in complex numbers. In contrast, some polynomial equations with real coefficients have no solution in real numbers. The 16th-century Italian mathematician Gerolamo Cardano is credited with introducing complex numbers in his attempts to find solutions to cubic equations. Formally, the complex number system can be defined as the algebraic extension of the ordinary real numbers by an imaginary number i; this means that complex numbers can be added and multiplied as polynomials in the variable i, with the rule i² = −1 imposed. Furthermore, complex numbers can be divided by nonzero complex numbers. Overall, the complex number system is a field. Geometrically, complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part.
The complex number a + bi can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary; a complex number whose imaginary part is zero can be viewed as a real number. Complex numbers can also be represented in polar form, which associates each complex number with its distance from the origin and with a particular angle known as the argument of the complex number. The geometric identification of the complex numbers with the complex plane, a Euclidean plane, makes their structure as a real 2-dimensional vector space evident. Real and imaginary parts of a complex number may be taken as components of a vector with respect to the canonical standard basis; the addition of complex numbers is thus depicted as the usual component-wise addition of vectors. However, the complex numbers allow for a richer algebraic structure, comprising additional operations that are not available in a vector space. Based on the concept of real numbers, a complex number is a number of the form a + bi, where a and b are real numbers and i is an indeterminate satisfying i² = −1.
For example, 2 + 3i is a complex number. In this way, a complex number is defined as a polynomial with real coefficients in the single indeterminate i, for which the relation i² + 1 = 0 is imposed. Based on this definition, complex numbers can be added and multiplied using the addition and multiplication for polynomials; the relation i² + 1 = 0 induces the equalities i⁴ᵏ = 1, i⁴ᵏ⁺¹ = i, i⁴ᵏ⁺² = −1 and i⁴ᵏ⁺³ = −i, which hold for all integers k. The real number a is called the real part of the complex number a + bi, and the real number b is called its imaginary part. To emphasize: the imaginary part does not include the factor i; that is, b, not bi, is the imaginary part. Formally, the complex numbers are defined as the quotient ring of the polynomial ring in the indeterminate i by the ideal generated by the polynomial i² + 1.
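The period-4 cycling of the powers of i quoted above (i⁴ᵏ = 1, i⁴ᵏ⁺¹ = i, i⁴ᵏ⁺² = −1, i⁴ᵏ⁺³ = −i) can likewise be checked directly in Python:

```python
# Powers of i repeat with period 4, as forced by i**2 = -1.
cycle = [1, 1j, -1, -1j]  # i**0, i**1, i**2, i**3
powers = [1j ** n for n in range(12)]
print(powers)
```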
Michiel Hazewinkel is a Dutch mathematician and Emeritus Professor of Mathematics at the Centre for Mathematics and Computer Science (CWI) and the University of Amsterdam, known for his 1978 book Formal groups and applications and as editor of the Encyclopedia of Mathematics. Born in Amsterdam to Jan Hazewinkel and Geertrude Hendrika Werner, Hazewinkel studied at the University of Amsterdam. He received his BA in Mathematics and Physics in 1963, his MA in Mathematics with a minor in Philosophy in 1965 and his PhD in 1969, under the supervision of Frans Oort and Albert Menalda, for the thesis "Maximal Abelian Extensions of Local Fields". After graduation Hazewinkel started his academic career as Assistant Professor at the University of Amsterdam in 1969. In 1970 he became Associate Professor at the Erasmus University Rotterdam, where in 1972 he was appointed Professor of Mathematics at the Econometric Institute. Here he was thesis advisor of Roelof Stroeker, M. van de Vel, Jo Ritzen and Gerard van der Hoek. From 1973 to 1975 he was Professor at the Universitaire Instelling Antwerpen, where Marcel van de Vel was his PhD student.
From 1982 to 1985 he was part-time Professor Extraordinarius in Mathematics at the Erasmus Universiteit Rotterdam and part-time Head of the Department of Pure Mathematics at the Centre for Mathematics and Computer Science in Amsterdam. In 1985 he was appointed Professor Extraordinarius in Mathematics at the University of Utrecht, where he supervised the promotion of Frank Kouwenhoven, Huib-Jan Imbens, J. Scholma and F. Wainschtein. At the Centre for Mathematics and Computer Science (CWI) in Amsterdam he became Professor of Mathematics in 1988 and head of the Department of Algebra and Geometry, a position he held until his retirement in 2008. Hazewinkel has been managing editor of journals such as Nieuw Archief voor Wiskunde since 1977, and he became managing editor of the book series Mathematics and Its Applications for Kluwer Academic Publishers in 1977. Hazewinkel was a member of 15 professional societies in the field of mathematics and participated in numerous administrative tasks in institutes, program committees, steering committees, consortiums and boards.
In 1994 Hazewinkel was elected member of the International Academy of Computer Sciences and Systems. Hazewinkel has authored and edited several books and numerous articles.

Books, a selection:
1970. Géométrie algébrique-généralités-groupes commutatifs. With Michel Demazure and Pierre Gabriel. Masson & Cie.
1976. On invariants, canonical forms and moduli for linear, finite dimensional, dynamical systems. With Rudolf E. Kalman. Springer Berlin Heidelberg.
1978. Formal groups and applications. Vol. 78. Elsevier.
1993. Encyclopaedia of Mathematics (ed.). Vol. 9. Springer.

Articles, a selection:
Hazewinkel, Michiel. "Moduli and canonical forms for linear dynamical systems II: The topological case". Mathematical Systems Theory. 10: 363–385. doi:10.1007/BF01683285.
Hazewinkel, Michiel. "On Lie algebras and finite dimensional filtering". Stochastics. 7: 29–62. doi:10.1080/17442508208833212.
Hazewinkel, M. et al. "Nonexistence of finite-dimensional filters for conditional statistics of the cubic sensor problem". Systems & Control Letters. 3: 331–340. doi:10.1016/0167-691190074-9.
Hazewinkel, Michiel. "The algebra of quasi-symmetric functions is free over the integers". Advances in Mathematics. 164: 283–300. doi:10.1006/aima.2001.2017.
Not to be confused with Intersectionality theory. In mathematics, intersection theory is a branch of algebraic geometry, where subvarieties are intersected on an algebraic variety, and a branch of algebraic topology, where intersections are computed within the cohomology ring. The theory for varieties is older, with roots in Bézout's theorem on curves and elimination theory; on the other hand, the topological theory more quickly reached a definitive form. For a connected oriented manifold M of dimension 2n, the intersection form is defined on the n-th cohomology group by the evaluation of the cup product on the fundamental class [M] in H₂ₙ(M). Stated precisely, there is a bilinear form λ_M : Hⁿ(M) × Hⁿ(M) → Z given by λ_M(a, b) = ⟨a ⌣ b, [M]⟩ ∈ Z, with λ_M(a, b) = (−1)ⁿ λ_M(b, a) ∈ Z. This is a symmetric form for n even, in which case the signature of M is defined to be the signature of the form, and an alternating form for n odd. These can be referred to uniformly as ε-symmetric forms, where ε = (−1)ⁿ = ±1 for symmetric and skew-symmetric forms respectively. It is possible in some circumstances to refine this form to an ε-quadratic form, though this requires additional data such as a framing of the tangent bundle.
It is also possible to work with Z/2Z coefficients instead. These forms are important topological invariants. For example, a theorem of Michael Freedman states that simply connected compact 4-manifolds are determined by their intersection forms up to homeomorphism; see intersection form. By Poincaré duality, it turns out that the form can be described geometrically. If possible, choose representative n-dimensional submanifolds A, B for the Poincaré duals of a and b. Then λ_M(a, b) is the oriented intersection number of A and B, which is well-defined: since the dimensions of A and B sum to the total dimension of M, they generically intersect at isolated points. This explains the terminology intersection form. William Fulton in Intersection Theory writes: if A and B are subvarieties of a non-singular variety X, the intersection product A · B should be an equivalence class of algebraic cycles related to the geometry of how A ∩ B, A and B are situated in X. Two extreme cases have been most familiar. If the intersection is proper, i.e. dim(A ∩ B) = dim A + dim B − dim X, then A · B is a linear combination of the irreducible components of A ∩ B, with coefficients the intersection multiplicities.
At the other extreme, if A = B is a non-singular subvariety, the self-intersection formula says that A · B is represented by the top Chern class of the normal bundle of A in X. To give a definition of the intersection multiplicity in the general case was the major concern of André Weil's 1946 book Foundations of Algebraic Geometry; work in the 1920s of B. L. van der Waerden had already addressed the question. A well-working machinery for intersecting algebraic cycles V and W requires more than taking just the set-theoretic intersection V ∩ W of the cycles in question. If the two cycles are in "good position", the intersection product, denoted V · W, should consist of the set-theoretic intersection of the two subvarieties. However, cycles may be in bad position, e.g. two parallel lines in the plane, or a plane containing a line. In both cases the intersection should be a point, because, again, if one cycle is moved, this would be the intersection. The intersection of two cycles V and W is called proper if the codimension of the set-theoretic intersection V ∩ W is the sum of the codimensions of V and W, i.e. the "expected" value.
Therefore, the concept of moving cycles using appropriate equivalence relations on algebraic cycles is used. The equivalence must be broad enough that given any two cycles V and W, there are equivalent cycles V′ and W′ such that the intersection V′ ∩ W′ is proper. On the other hand, for a second pair of equivalent cycles V′′ and W′′, V′ ∩ W′ needs to be equivalent to V′′ ∩ W′′. For the purposes of intersection theory, rational equivalence is the most important one. Two r-dimensional cycles V and W on a variety X are rationally equivalent if there is a rational function f on an (r + 1)-dimensional subvariety Y, i.e. an element of the function field k(Y) or, equivalently, a function f : Y → P¹, such that V − W = f⁻¹(0) − f⁻¹(∞), where each preimage is counted with multiplicities. Rational equivalence accomplishes the demands sketched above. The guiding principle in the definition of intersection multiplicities of cycles is continuity in a certain sense. Consider the following elementary example: the intersection of the parabola y = x² and the axis y = 0 should be 2 · (0, 0), because if one of the cycles moves, there are two intersection points.
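The continuity behind the parabola example can be illustrated numerically: for any ε > 0 the nearby line y = ε meets y = x² in the two points x = ±√ε, and these merge as ε → 0, which is why the tangent axis y = 0 is counted with multiplicity 2. A small Python sketch with invented sample values:

```python
import math

def intersection_points(eps):
    """x-coordinates where the line y = eps meets the parabola y = x**2."""
    root = math.sqrt(eps)
    return (-root, root)

# As eps shrinks, the two intersection points approach a single point
# of multiplicity 2 at the origin.
for eps in (1.0, 0.01, 1e-8):
    print(eps, intersection_points(eps))
```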
Degrees of freedom (physics and chemistry)
In physics, a degree of freedom is an independent physical parameter in the formal description of the state of a physical system. The set of all states of a system is known as the system's phase space, and the degrees of freedom of the system are the dimensions of the phase space. The location of a particle in three-dimensional space requires three position coordinates. Similarly, the direction and speed at which a particle moves can be described in terms of three velocity components, each in reference to the three dimensions of space. If the time evolution of the system is deterministic (where the state at one instant uniquely determines its past and future position and velocity as a function of time), such a system has six degrees of freedom. If the motion of the particle is constrained to a lower number of dimensions, for example if the particle must move along a wire or on a fixed surface, the system has fewer than six degrees of freedom. On the other hand, a system with an extended object that can rotate or vibrate can have more than six degrees of freedom.
In classical mechanics, the state of a point particle at any given time is described with position and velocity coordinates in the Lagrangian formalism, or with position and momentum coordinates in the Hamiltonian formalism. In statistical mechanics, a degree of freedom is a single scalar number describing the microstate of a system; the specification of all microstates of a system is a point in the system's phase space. In the 3D ideal chain model in chemistry, two angles are necessary to describe the orientation of each monomer. It is often useful to specify quadratic degrees of freedom: these are degrees of freedom that contribute quadratically to the energy of the system. In three-dimensional space, three degrees of freedom are associated with the movement of a particle. A diatomic gas molecule has 6 degrees of freedom; this set may be decomposed in terms of translations, rotations and vibrations of the molecule. The center of mass motion of the entire molecule accounts for 3 degrees of freedom. In addition, the molecule has two rotational degrees of freedom and one vibrational mode.
The rotations occur around the two axes perpendicular to the line between the two atoms; the rotation around the atom–atom bond is not a physical rotation. This yields, for a diatomic molecule, a decomposition of 3N = 6 = 3 + 2 + 1. For a general, non-linear molecule, all 3 rotational degrees of freedom are considered, resulting in the decomposition 3N = 3 + 3 + (3N − 6), which means that an N-atom molecule has 3N − 6 vibrational degrees of freedom for N > 2. In special cases, such as adsorbed large molecules, the rotational degrees of freedom can be limited to only one. As defined above, one can count degrees of freedom using the minimum number of coordinates required to specify a position. This is done as follows: for a single particle we need 2 coordinates in a 2-D plane to specify its position and 3 coordinates in 3-D space; thus its degree of freedom in a 3-D space is 3. For a body consisting of 2 particles in a 3-D space with constant distance between them we can show its degrees of freedom to be 5.
Say one particle in this body has coordinates (x1, y1, z1) and the other has coordinates (x2, y2, z2), with z2 unknown. Application of the formula for the distance between two points, d = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²), results in one equation with one unknown, which we can solve for z2. Any one of x1, x2, y1, y2, z1 or z2 can play the role of the unknown. Contrary to the classical equipartition theorem, at room temperature the vibrational motion of molecules makes negligible contributions to the heat capacity. This is because these degrees of freedom are frozen out: the spacing between the energy eigenvalues exceeds the energy corresponding to ambient temperatures. In the following table such degrees of freedom are disregarded because of their low effect on total energy. Only the translational and rotational degrees of freedom contribute, in equal amounts, to the heat capacity ratio; this is why γ = 7/5 for diatomic gases at room temperature. However, at high temperatures, on the order of the vibrational temperature, vibrational motion cannot be neglected. Vibrational temperatures are between 10³ K and 10⁴ K.
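Two of the computations above can be sketched in Python (the helper name and the sample coordinates are ours, purely for illustration): the 3N − 6 / 3N − 5 count of vibrational modes, and recovering z2 from the constant-distance constraint:

```python
import math

def vibrational_modes(n_atoms, linear):
    """Vibrational degrees of freedom of an n_atoms molecule: 3N minus
    3 translations minus 3 rotations (only 2 rotations if linear)."""
    return 3 * n_atoms - 3 - (2 if linear else 3)

print(vibrational_modes(2, linear=True))   # diatomic: 1 vibrational mode
print(vibrational_modes(3, linear=False))  # non-linear triatomic: 3 modes

# Recovering z2 for two particles a constant distance d apart, once the
# other five coordinates are fixed (z2 is determined up to a mirror sign).
x1, y1, z1 = 0.0, 0.0, 0.0
x2, y2 = 1.0, 2.0
d = 3.0
z2 = z1 + math.sqrt(d**2 - (x2 - x1)**2 - (y2 - y1)**2)
print(z2)  # 2.0, since 1 + 4 + 4 = 9 = d**2
```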
The set of degrees of freedom X₁, ..., X_N of a system is independent if the energy associated with the set can be written in the form E = ∑ᵢ₌₁ᴺ Eᵢ, where Eᵢ is a function of the sole variable Xᵢ. For example, if X₁ and X₂ are two degrees of freedom and E is the associated energy: if E = X₁⁴ + X₂⁴, the two degrees of freedom are independent. If E = X₁⁴ + X₁