1.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope. Mathematicians seek out patterns and use them to formulate new conjectures, and they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity for as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said: "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, that is, mathematics for its own sake, without having any application in mind; there is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from the Greek μανθάνω, while the modern Greek equivalent is μαθαίνω. In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.
2.
Dynamical system
–
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in a geometrical space. Examples include the mathematical models that describe the swinging of a clock pendulum and the flow of water in a pipe. At any given time, a dynamical system has a state given by a tuple of real numbers that can be represented by a point in an appropriate state space. The evolution rule of the system is a function that describes what future states follow from the current state. Often the function is deterministic; that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables. In physics, a dynamical system is described as a particle or ensemble of particles whose state varies over time, and predictions about its future behavior are made by solving its equations of motion, analytically or numerically. Dynamical systems appear throughout chaos theory, logistic map dynamics, bifurcation theory, and the study of self-assembly processes. The concept of a dynamical system has its origins in Newtonian mechanics. Determining the state for all future times requires iterating the evolution rule many times, each step advancing time by a small amount; the iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions. Before the advent of computers, finding an orbit required sophisticated mathematical techniques; numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system. For simple dynamical systems, knowing the trajectory is often sufficient, but difficulties arise because the systems studied may only be known approximately: the parameters of the system may not be known precisely, or terms may be missing from the equations.
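The iteration of an evolution rule described above can be illustrated with the logistic map, one of the systems named in this passage. Below is a minimal sketch; the function names, the parameter value r = 3.5, and the starting states are illustrative choices, not taken from the source.

```python
def logistic_step(x, r=3.5):
    """Apply the evolution rule once: compute the next state from the current state."""
    return r * x * (1 - x)

def orbit(x0, steps, r=3.5):
    """Iterate the rule repeatedly; the resulting sequence of states is the
    orbit (trajectory) of the initial state x0."""
    states = [x0]
    for _ in range(steps):
        states.append(logistic_step(states[-1], r))
    return states
```

Each call to `logistic_step` is one application of the evolution rule; `orbit` is the "solving by iteration" the text refers to, producing the future positions of an initial point.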
The approximations used bring into question the validity or relevance of numerical solutions. To address these questions, several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. The stability of a dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent; the operation for comparing orbits to establish their equivalence changes with the different notions of stability. The type of trajectory may be more important than any one particular trajectory: some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, the study of properties that do not change under coordinate changes.
3.
Algebra
–
Algebra is one of the broad parts of mathematics, together with number theory, geometry and analysis. In its most general form, algebra is the study of mathematical symbols and the rules for manipulating them; as such, it includes everything from elementary equation solving to the study of abstractions such as groups, rings, and fields. The more basic parts of algebra are called elementary algebra; the more abstract parts are called abstract algebra or modern algebra. Elementary algebra is generally considered essential for any study of mathematics, science, or engineering, as well as for such applications as medicine; abstract algebra is a major area in advanced mathematics, studied primarily by professional mathematicians. Elementary algebra differs from arithmetic in the use of abstractions, such as using letters to stand for numbers that are unknown or allowed to take on many values. For example, in x + 2 = 5 the letter x is unknown; in E = mc², the letters E and m are variables, and the letter c is a constant, the speed of light in a vacuum. Algebra gives methods for solving equations and expressing formulas that are much easier than the older method of writing everything out in words. The word algebra is also used in certain specialized ways: a special kind of mathematical object in abstract algebra is called an algebra, and a mathematician who does research in algebra is called an algebraist. The word algebra comes from the Arabic الجبر (al-jabr), from the title of the book Ilm al-jabr wal-muḳābala by the Persian mathematician and astronomer al-Khwarizmi. The word entered the English language from Spanish, Italian, or Medieval Latin. It originally referred to the surgical procedure of setting broken or dislocated bones; the mathematical meaning was first recorded in the sixteenth century. The word algebra has several related meanings in mathematics, as a single word or with qualifiers.
As a single word without an article, "algebra" names a broad part of mathematics. As a single word with an article or in the plural, "an algebra" or "algebras" denotes a specific mathematical structure, whose precise definition depends on the author. Usually the structure has an addition, a multiplication, and a scalar multiplication; when some authors use the term algebra, they make a subset of the following additional assumptions: associative, commutative, unital, and/or finite-dimensional. In universal algebra, the word algebra refers to a generalization of the above concept. With a qualifier, there is the same distinction: without an article, it means a part of algebra, such as linear algebra or elementary algebra; with an article, it means an instance of some abstract structure, like a Lie algebra. Sometimes both meanings exist for the same qualifier, as in the sentence: commutative algebra is the study of commutative rings, which are commutative algebras over the integers.
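The kind of equation solving described under elementary algebra, isolating the unknown in an equation like x + 2 = 5, can be sketched as a small routine. This is an illustrative helper, not anything named in the source; it handles the general linear equation a·x + b = c.

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c for the unknown x, mirroring elementary algebra:
    subtract b from both sides, then divide both sides by a."""
    if a == 0:
        raise ValueError("coefficient a must be nonzero for a linear equation in x")
    return (c - b) / a

# The example from the text, x + 2 = 5:
x = solve_linear(1, 2, 5)  # x = 3
```

The two steps in the function body are exactly the symbolic manipulations that algebra systematizes, in place of "writing everything out in words".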
4.
Space (mathematics)
–
In mathematics, a space is a set with some added structure. Mathematical spaces often form a hierarchy, i.e. one space may inherit all the characteristics of a parent space. Modern mathematics treats "space" quite differently compared to classical mathematics. In ancient mathematics, "space" was an abstraction of the three-dimensional space observed in everyday life. The axiomatic method had been the main research tool since Euclid; the method of coordinates was adopted by René Descartes in 1637. Two equivalence relations between geometric figures were used: congruence and similarity. Translations, rotations and reflections transform a figure into congruent figures; homotheties transform it into similar figures. For example, all circles are mutually similar, but ellipses are not similar to circles. The relation between the two geometries, Euclidean and projective, shows that mathematical objects are not given to us with their structure. Rather, each mathematical theory describes its objects by some of their properties: distances and angles are never mentioned in the axioms of projective geometry and therefore cannot appear in its theorems. The question "what is the sum of the three angles of a triangle" is meaningful in Euclidean geometry but meaningless in projective geometry. A different situation appeared in the 19th century: in some geometries the sum of the three angles of a triangle is well-defined but different from the classical value. One such geometry was the non-Euclidean hyperbolic geometry, introduced by Nikolai Lobachevsky in 1829. Eugenio Beltrami in 1868 and Felix Klein in 1871 obtained Euclidean models of the non-Euclidean hyperbolic geometry, and thereby completely justified this theory. This discovery forced the abandonment of the pretensions to the absolute truth of Euclidean geometry. It showed that axioms are neither obvious nor implications of definitions; to what extent they correspond to an experimental reality is a separate question.
This important physical problem, however, no longer has anything to do with mathematics: even if a geometry does not correspond to an experimental reality, its theorems remain no less mathematical truths. The Euclidean objects and relations of such a model play the non-Euclidean geometry like contemporary actors playing an ancient performance: relations between the actors only mimic relations between the characters in the play. Likewise, the relations between the chosen objects of the Euclidean model only mimic the non-Euclidean relations. This shows that relations between objects are essential in mathematics, while the nature of the objects is not. According to Nicolas Bourbaki, the period between 1795 and 1872 can be called the "golden age of geometry". Analytic geometry made great progress and succeeded in replacing theorems of classical geometry with computations via invariants of transformation groups.
5.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. One of the most fundamental scientific disciplines, the main goal of physics is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs; the United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences. The stars and planets were often a target of worship, believed to represent gods, though the explanations for these phenomena were often unscientific and lacking in evidence. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. In the medieval Islamic world, the most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he was not only the first to disprove the ancient Greek idea about vision, but also came up with a new theory. In the book, he was also the first to study the phenomenon of the pinhole camera. Many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt.
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title; the translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build devices like those Ibn al-Haytham had built, and from these, such important things as eyeglasses, magnifying glasses and telescopes were developed. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the mathematical study of change, which provided new mathematical methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry, and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity; both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg and Erwin Schrödinger, and from this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities; in many ways, physics stems from ancient Greek philosophy.
6.
Economics
–
Economics is a social science concerned chiefly with the description and analysis of the production, distribution, and consumption of goods and services, according to the Merriam-Webster Dictionary. Economics focuses on the behaviour and interactions of economic agents and how economies work. Consistent with this focus, textbooks often distinguish between microeconomics and macroeconomics. Microeconomics examines the behaviour of basic elements in the economy, including individual agents and markets, and their interactions. Individual agents may include, for example, households, firms, buyers and sellers. Macroeconomics analyzes the entire economy and issues affecting it, including unemployment of resources, inflation, economic growth, and the public policies that address these issues. Economic analysis can be applied throughout society, as in business, finance and health care. Economic analyses may also be applied to such diverse subjects as crime, education, the family, law, politics, religion, social institutions, war, science, and the environment. At the turn of the 21st century, the expanding domain of economics in the social sciences has been described as economic imperialism. The ultimate goal of economics is to improve the conditions of people in their everyday life. There are a variety of definitions of economics, and some of the differences may reflect evolving views of the subject or different views among economists. Adam Smith described the subject in part as the science of how "to supply the state or commonwealth with a revenue for the publick services". Say, distinguishing the subject from its public-policy uses, defines it as the science of the production, distribution, and consumption of wealth. On the satirical side, Thomas Carlyle coined "the dismal science" as an epithet for classical economics. Alfred Marshall described economics as a study of mankind in the ordinary business of life: it enquires how a man gets his income and how he uses it. Thus, it is on the one side the study of wealth and on the other, and more important side, a part of the study of man.
He affirmed that previous economists have usually centred their studies on the analysis of wealth: how wealth is created, distributed, and consumed. But he said that economics can be used to study other things, such as war, that are outside its usual focus. This is because war has winning it as a goal, generates both costs and benefits, and uses up resources to attain that goal. If the war is not winnable, or if the expected costs outweigh the benefits, the deciding actors may choose alternatives other than war. Some subsequent comments criticized the definition as overly broad in failing to limit its subject matter to analysis of markets. There are other criticisms as well, such as scarcity not accounting for the macroeconomics of high unemployment. The same source reviews a range of definitions included in principles of economics textbooks. Among economists more generally, it argues that a particular definition presented may reflect the direction toward which the author believes economics is evolving. Microeconomics examines how entities, forming a market structure, interact within a market to create a market system.
7.
Inflation
–
In economics, inflation is a sustained increase in the general price level of goods and services in an economy over a period of time, resulting in a loss of value of currency. When the general price level rises, each unit of currency buys fewer goods and services; consequently, inflation reflects a reduction in the purchasing power per unit of money, a loss of real value in the medium of exchange. A chief measure of inflation is the inflation rate, the annualized percentage change in a general price index, usually the consumer price index. The opposite of inflation is deflation. Inflation affects economies in various positive and negative ways. Economists generally believe that high rates of inflation and hyperinflation are caused by excessive growth of the money supply. However, money supply growth does not necessarily cause inflation; some economists maintain that under the conditions of a liquidity trap, large monetary injections are "like pushing on a string". Views on which factors determine low to moderate rates of inflation are more varied: low or moderate inflation may be attributed to fluctuations in real demand for goods and services, or to changes in available supplies such as during scarcities. However, the prevailing view is that a long sustained period of inflation is caused by the money supply growing faster than the rate of economic growth. Today, most economists favor a low and steady rate of inflation, and the task of keeping the rate of inflation low and stable is usually given to monetary authorities. Rapid increases in the quantity of money or in the overall money supply have occurred in many different societies throughout history. For example, by diluting the gold in its coinage with other metals, a government could issue more coins without also needing to increase the amount of gold used to make them. This practice would increase the money supply, but at the same time the relative value of each coin would be lowered.
As the relative value of the coins becomes lower, consumers would need to give more coins in exchange for the same goods, and these goods and services would experience a price increase as the value of each coin is reduced. Song Dynasty China introduced the practice of printing paper money in order to create fiat currency. During the Mongol Yuan Dynasty, the government spent a great deal of money fighting costly wars, and reacted by printing more money, leading to inflation. Fearing the inflation that plagued the Yuan dynasty, the Ming Dynasty initially rejected the use of paper money. Historically, large infusions of gold or silver into an economy have also led to inflation; in early modern Europe, this was largely caused by the influx of gold and silver from the New World into Habsburg Spain. The silver spread throughout a previously cash-starved Europe and caused widespread inflation. Demographic factors also contributed to upward pressure on prices, with European population growth after the depopulation caused by the Black Death pandemic.
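The headline measure defined above, the inflation rate as the percentage change in a general price index such as the consumer price index, can be sketched in a few lines. The function name and example index values below are illustrative, not from the source.

```python
def inflation_rate(index_start, index_end):
    """Percentage change in a general price index between two periods;
    this is the annualized inflation rate when the periods are one year apart."""
    return (index_end - index_start) / index_start * 100.0

# Illustrative: a consumer price index rising from 100.0 to 102.0 over a year
rate = inflation_rate(100.0, 102.0)  # 2% annual inflation
```

A positive rate corresponds to inflation (each unit of currency buys fewer goods), a negative rate to deflation, matching the definitions in the text.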
8.
Unemployment
–
During periods of recession, an economy usually experiences a relatively high unemployment rate. According to an International Labour Organization report, more than 200 million people globally, or 6% of the workforce, were without a job in 2012. There remains considerable debate regarding the causes, consequences and solutions for unemployment. Classical economics, new classical economics, and the Austrian School of economics argue that market mechanisms are reliable means of resolving unemployment; Keynesian economics emphasizes the cyclical nature of unemployment and recommends government interventions in the economy that it claims will reduce unemployment during recessions. This theory focuses on recurrent shocks that suddenly reduce aggregate demand for goods and services. Keynesian models recommend government interventions designed to increase demand for workers; these can include financial stimuli, publicly funded job creation, and expansionist monetary policies. Structural arguments emphasize causes and solutions related to disruptive technologies and globalization, while causes and solutions for frictional unemployment often address job entry thresholds and wage rates. Behavioral economists highlight individual biases in decision making, and often examine problems and solutions concerning sticky wages. For centuries, experts have predicted that machines would make workers obsolete and increase unemployment. Unemployment is the state of being without any work, whether for an educated or an uneducated person, while seeking work in order to earn one's livelihood. Some additional types of unemployment that are occasionally mentioned are seasonal unemployment, hardcore unemployment, and hidden unemployment. Though there have been several definitions of voluntary and involuntary unemployment in the economics literature, voluntary unemployment is attributed to the individual's decisions, whereas involuntary unemployment exists because of the socio-economic environment in which individuals operate.
In these terms, much or most of frictional unemployment is voluntary; on the other hand, cyclical unemployment, structural unemployment, and classical unemployment are largely involuntary in nature. In practice, however, the distinction between voluntary and involuntary unemployment is hard to draw. This happens with cyclical unemployment, for example, as macroeconomic forces cause microeconomic unemployment, which can boomerang back and exacerbate those macroeconomic forces. Classical, or real-wage, unemployment occurs when real wages for a job are set above the market-clearing level, causing the number of job-seekers to exceed the number of vacancies. On the other hand, some argue that as wages fall below a livable wage, many choose to drop out of the labor market. This is especially true in countries where low-income families are supported through public welfare systems; in such cases, wages would have to be high enough to motivate people to choose employment over what they receive through public welfare. Wages below a livable wage are likely to result in lower labor market participation in the above-stated scenario. In addition, consumption of goods and services is the primary driver of increased demand for labor: higher wages lead to workers having more income available to consume goods, so higher wages increase general consumption and, as a result, the demand for labor increases. Many economists have argued that unemployment increases with increased governmental regulation. For example, minimum wage laws raise the cost of some low-skill laborers above market equilibrium, and laws restricting layoffs may make businesses less likely to hire in the first place, as hiring becomes more risky.
9.
Geometry
–
Geometry is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. A mathematician who works in the field of geometry is called a geometer. Geometry arose independently in a number of early cultures as a practical way of dealing with lengths, areas, and volumes. Geometry began to see elements of formal mathematical science emerging in the West as early as the 6th century BC. By the 3rd century BC, geometry was put into an axiomatic form by Euclid, whose treatment, Euclid's Elements, set a standard for many centuries to follow. Geometry arose independently in India, with texts providing rules for geometric constructions appearing as early as the 3rd century BC. Islamic scientists preserved Greek ideas and expanded on them during the Middle Ages. By the early 17th century, geometry had been put on a solid analytic footing by mathematicians such as René Descartes. Since then, and into modern times, geometry has expanded into non-Euclidean geometry and manifolds. While geometry has evolved significantly throughout the years, there are some general concepts that are more or less fundamental to geometry; these include the concepts of points, lines, planes, surfaces and angles. Contemporary geometry has many subfields. Euclidean geometry is geometry in its classical sense; the mandatory educational curriculum of the majority of nations includes the study of points, lines, planes, angles, triangles, congruence, similarity, solid figures and circles. Euclidean geometry also has applications in computer science, crystallography, and various branches of modern mathematics. Differential geometry uses techniques of calculus and linear algebra to study problems in geometry; it has applications in physics, including in general relativity. Topology is the field concerned with the properties of geometric objects that are unchanged by continuous mappings.
In practice, topology often means dealing with large-scale properties of spaces, such as connectedness and compactness. Convex geometry investigates convex shapes in Euclidean space and its more abstract analogues, often using techniques of real analysis; it has close connections to convex analysis, optimization and functional analysis. Algebraic geometry studies geometry through the use of multivariate polynomials and other algebraic techniques; it has applications in many areas, including cryptography and string theory. Discrete geometry is concerned mainly with questions of the relative position of simple geometric objects, such as points; it shares many methods and principles with combinatorics. Geometry has applications to many fields, including art, architecture and physics, as well as to other branches of mathematics. The earliest recorded beginnings of geometry can be traced to ancient Mesopotamia. The earliest known texts on geometry are the Egyptian Rhind Papyrus and Moscow Papyrus, and the Babylonian clay tablets such as Plimpton 322. For example, the Moscow Papyrus gives a formula for calculating the volume of a truncated pyramid. Later clay tablets demonstrate that Babylonian astronomers implemented trapezoid procedures for computing Jupiter's position and motion within time-velocity space.
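The truncated-pyramid formula attributed above to the Moscow Papyrus is, in modern notation, V = (h/3)(a² + ab + b²) for a frustum with square bottom side a, square top side b, and height h. A minimal sketch (the function name is illustrative):

```python
def frustum_volume(a, b, h):
    """Volume of a truncated square pyramid (frustum) with bottom side a,
    top side b, and height h, per the formula V = (h/3)*(a^2 + a*b + b^2)."""
    return h / 3.0 * (a * a + a * b + b * b)

# A frustum with bottom side 4, top side 2, and height 6:
v = frustum_volume(4, 2, 6)  # (6/3) * (16 + 8 + 4) = 56
```

Note that the full pyramid (b = 0) recovers the familiar V = (1/3)·a²·h.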
10.
Point (geometry)
–
In modern mathematics, a point usually refers to an element of some set called a space. More specifically, in Euclidean geometry, a point is a primitive notion upon which the geometry is built. Being a primitive notion means that a point cannot be defined in terms of previously defined objects; that is, a point is defined only by some properties, called axioms, that it must satisfy. In particular, geometric points do not have any length, area, volume, or any other dimensional attribute. A common interpretation is that the concept of a point is meant to capture the notion of a location in Euclidean space. Points, considered within the framework of Euclidean geometry, are one of the most fundamental objects; Euclid originally defined the point as "that which has no part". The representation of a point by a pair of coordinates is easily generalized to three-dimensional Euclidean space, where a point is represented by a triplet, with the additional third number representing depth. Further generalizations are represented by an ordered tuplet of n terms. Many constructs within Euclidean geometry consist of an infinite collection of points that conform to certain axioms. This is usually represented by a set of points; as an example, a line is a set of points of the form L = {(a₁, a₂, …, aₙ) : a₁c₁ + a₂c₂ + ⋯ + aₙcₙ = d}, where the cᵢ and d are constants. Similar constructions exist that define the plane, line segment and other related concepts; a line segment consisting of only a single point is called a degenerate line segment. In addition to defining points and constructs related to points, Euclid also occasionally assumed key facts about points that did not follow from his axioms; modern expansions of the system serve to remove these assumptions. There are several inequivalent definitions of dimension in mathematics; in all of the common definitions, a point is 0-dimensional. The dimension of a vector space is the maximum size of a linearly independent subset. In a vector space consisting of a single point, there is no linearly independent subset.
The zero vector is not itself linearly independent, because there is a non-trivial linear combination making it zero: 1 · 0 = 0. The covering dimension of a topological space is the minimum value of n such that every finite open cover admits a refinement in which no point is included in more than n + 1 elements; if no such minimal n exists, the space is said to be of infinite covering dimension. A point is zero-dimensional with respect to the covering dimension because every open cover of the space has a refinement consisting of a single open set. The Hausdorff dimension of X is defined by dim_H(X) = inf{ d ≥ 0 : C_H^d(X) = 0 }, where C_H^d(X) denotes the d-dimensional Hausdorff content of X. A point has Hausdorff dimension 0 because it can be covered by a single ball of arbitrarily small radius. Although the notion of a point is considered fundamental in mainstream geometry and topology, there are some systems that forgo it, e.g. noncommutative geometry. More precisely, such structures generalize well-known spaces of functions in a way that the operation "take a value at this point" may not be defined.
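The claim that a point has Hausdorff dimension 0 can be sketched directly from the definition; the following short derivation is an illustration, not part of the source.

```latex
% For a one-point set $X = \{p\}$ and any $d > 0$, a single ball of radius
% $\varepsilon$ covers $X$, so the $d$-dimensional Hausdorff content satisfies
C_H^d(X) \le \varepsilon^d \quad \text{for every } \varepsilon > 0 ,
% hence $C_H^d(X) = 0$ for all $d > 0$, and therefore
\dim_H(X) = \inf\{\, d \ge 0 : C_H^d(X) = 0 \,\} = 0 .
```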
11.
Curve
–
In mathematics, a curve is, generally speaking, an object similar to a line but that need not be straight. Thus, a curve is a generalization of a line, in that curvature is not necessarily zero. Various disciplines within mathematics have given the term different meanings depending on the area of study, so the precise meaning depends on context. However, many of these meanings are special instances of the following definition: a curve is a topological space which is locally homeomorphic to a line. In everyday language, this means that a curve is a set of points which, near each of its points, looks like a line. A simple example of a curve is the parabola. A large number of other curves have been studied in multiple mathematical fields. A closed curve is a curve that forms a path whose starting point is also its ending point. Closely related meanings include the graph of a function and a two-dimensional graph. Interest in curves began long before they were the subject of mathematical study; this can be seen in numerous examples of their decorative use in art and on everyday objects dating back to prehistoric times. Curves, or at least their graphical representations, are simple to create. Historically, the term line was used in place of the more modern term curve; hence the phrases straight line and right line were used to distinguish what are today called lines from curved lines. For example, in Book I of Euclid's Elements, a line is defined as a "breadthless length", and Euclid's idea of a line is perhaps clarified by the statement "The extremities of a line are points". Later commentators further classified lines according to various schemes, for example into composite and incomposite lines, and into determinate and indeterminate lines. The Greek geometers had studied many other kinds of curves. One reason was their interest in solving geometrical problems that could not be solved using standard compass and straightedge constructions.
These curves include: the conic sections, deeply studied by Apollonius of Perga; the cissoid of Diocles, studied by Diocles as a method to double the cube; the conchoid of Nicomedes, studied by Nicomedes as a method to both double the cube and to trisect an angle; the Archimedean spiral, studied by Archimedes as a method to trisect an angle; and the spiric sections, sections of tori studied by Perseus, just as sections of cones had been studied by Apollonius. A fundamental advance in the theory of curves was the advent of analytic geometry in the seventeenth century; this enabled a curve to be described using an equation rather than an elaborate geometrical construction. Previously, curves had been classified as geometrical or mechanical according to how they were, or supposedly could be, constructed. Conic sections were applied in astronomy by Kepler. Newton also worked on an early example in the calculus of variations.
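As a concrete illustration of why the Archimedean spiral trisects angles: its radius grows linearly with the turning angle, so dividing the radius at a given point into three equal parts picks out one third of the angle. The following Python sketch is illustrative only (the function names and the parameter a are not from the source):

```python
import math

def spiral_point(a, theta):
    """Point on the Archimedean spiral r = a*theta, in Cartesian coordinates."""
    r = a * theta
    return (r * math.cos(theta), r * math.sin(theta))

def trisect_angle(a, phi):
    """Trisect the angle phi using the spiral: since r is proportional to theta,
    one third of the radius a*phi corresponds to the angle phi/3."""
    r_third = (a * phi) / 3
    return r_third / a  # the angle whose spiral radius is r_third

print(math.degrees(trisect_angle(1.0, math.radians(60))))  # 20.0 (one third of 60 degrees)
```

The construction is circular as arithmetic, of course; its interest for the Greeks was that the spiral makes the division performable with geometric tools.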
12.
Manifold
–
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighbourhood that is homeomorphic to the Euclidean space of dimension n. One-dimensional manifolds include lines and circles, but not figure eights; two-dimensional manifolds are also called surfaces. Although a manifold locally resembles Euclidean space, globally it may not. For example, the surface of the sphere is not a Euclidean space, but in a region it can be charted by means of map projections of the region into the Euclidean plane. When a region appears in two neighbouring charts, the two representations do not coincide exactly, and a transformation is needed to pass from one to the other. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions. One important class of manifolds is the class of differentiable manifolds; this differentiable structure allows calculus to be done on manifolds. A Riemannian metric on a manifold allows distances and angles to be measured. Symplectic manifolds serve as the phase spaces in the Hamiltonian formalism of classical mechanics, while four-dimensional Lorentzian manifolds model spacetime in general relativity. After a line, the circle is the simplest example of a topological manifold. Topology ignores bending, so a small piece of a circle is treated exactly the same as a small piece of a line. Consider, for instance, the top part of the unit circle x² + y² = 1, where the y-coordinate is positive. Any point of this arc can be uniquely described by its x-coordinate, so projection onto the first coordinate is a continuous, and invertible, mapping from the arc to the open interval (−1, 1). Such functions, along with the open regions they map, are called charts. Similarly, there are charts for the bottom, left, and right parts of the circle. Together, these parts cover the whole circle, and the four charts form an atlas for the circle.
The top and right charts, χtop and χright respectively, overlap in their domains: each maps the quarter of the circle where both coordinates are positive into the interval (0, 1), though differently. Let a be any number in (0, 1); then T(a) = χright(χtop⁻¹(a)) = √(1 − a²). Such a function is called a transition map. The top, bottom, left, and right charts show that the circle is a manifold, but charts need not be geometric projections, and the number of charts is a matter of some choice. Charts based on the slopes of lines through fixed points provide a second atlas for the circle, with transition map t = 1/s. Each such chart omits a single point, either for s or for t, and it can be proved that it is not possible to cover the full circle with a single chart. Viewed using calculus, the transition function T is simply a function between open intervals, which gives a meaning to the statement that T is differentiable.
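The transition map described above can be checked numerically. In this illustrative Python sketch (the function names mirror χtop and χright from the text; they are not source code), χtop projects a point of the upper arc to its x-coordinate and χright projects a point of the right arc to its y-coordinate:

```python
import math

def chi_top(x, y):
    # chart for the upper semicircle: project onto the x-coordinate
    return x

def chi_top_inv(a):
    # inverse chart: recover the point on the upper semicircle with x = a
    return (a, math.sqrt(1 - a ** 2))

def chi_right(x, y):
    # chart for the right semicircle: project onto the y-coordinate
    return y

def transition(a):
    # transition map T = chi_right composed with the inverse of chi_top,
    # defined on the overlap 0 < a < 1
    x, y = chi_top_inv(a)
    return chi_right(x, y)

a = 0.6
print(transition(a))              # 0.8
print(math.sqrt(1 - a ** 2))      # agrees with the formula T(a) = sqrt(1 - a^2)
```

Since T is built from smooth functions on an open interval, its differentiability is ordinary one-variable calculus.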
13.
Fractal
–
A fractal is a mathematical set that exhibits a repeating pattern displayed at every scale. It is also known as expanding symmetry or evolving symmetry. If the replication is exactly the same at every scale, it is called a self-similar pattern; an example of this is the Menger sponge. Fractals can also be nearly the same at different levels; this latter pattern is illustrated in small magnifications of the Mandelbrot set. Fractals also include the idea of a detailed pattern that repeats itself. Fractals are different from other geometric figures because of the way in which they scale. Doubling the edge lengths of a polygon multiplies its area by four, which is two raised to the power of two. Likewise, if the radius of a sphere is doubled, its volume scales by eight, which is two raised to the power of three. But if a fractal's one-dimensional lengths are all doubled, the spatial content of the fractal scales by a power that is not necessarily an integer. This power is called the fractal dimension of the fractal. As mathematical equations, fractals are usually nowhere differentiable. The term fractal was first used by mathematician Benoît Mandelbrot in 1975; Mandelbrot based it on the Latin frāctus, meaning broken or fractured. There is some disagreement amongst authorities about how the concept of a fractal should be formally defined; Mandelbrot himself summarized it as "beautiful, damn hard, increasingly useful". Fractals are not limited to geometric patterns, but can also describe processes in time. Fractal patterns with various degrees of self-similarity have been rendered or studied in images, structures and sounds and found in nature, technology and art. Fractals are of particular relevance in the field of chaos theory, since the graphs of most chaotic processes are fractal. The word fractal often has different connotations for laypeople than for mathematicians; the mathematical concept is difficult to define formally even for mathematicians, but key features can be understood with little mathematical background.
If this is done on fractals, however, no new detail appears; nothing changes, and the same pattern repeats over and over. Self-similarity itself is not necessarily counter-intuitive; the difference for fractals is that the pattern reproduced must be detailed. A regular line, for instance, is conventionally understood to be 1-dimensional; if such a curve is divided into pieces each 1/3 the length of the original, there are always 3 equal pieces. In contrast, consider the Koch snowflake: it is also 1-dimensional for the same reason as the ordinary line, but it has, in addition, a fractal dimension greater than 1 because of how its detail can be measured. This also leads to understanding a third feature, that fractals as mathematical equations are nowhere differentiable; in a concrete sense, this means fractals cannot be measured in traditional ways. The history of fractals traces a path from chiefly theoretical studies to modern applications in computer graphics. According to Pickover, the mathematics behind fractals began to take shape in the 17th century when the mathematician and philosopher Gottfried Leibniz pondered recursive self-similarity. In his writings, Leibniz used the term fractional exponents. Later, in the last part of the 19th century, Felix Klein and Henri Poincaré introduced a category of fractal that has come to be called self-inverse fractals.
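The "3 pieces at 1/3 scale" versus "4 pieces at 1/3 scale" counting above can be turned into the similarity-dimension formula d = log N / log s, where an exactly self-similar set splits into N copies at scale factor 1/s. A minimal Python sketch (illustrative; the formula is standard, not quoted from this passage):

```python
import math

def similarity_dimension(copies, scale):
    """Dimension d solving copies = scale**d for an exactly self-similar set."""
    return math.log(copies) / math.log(scale)

print(similarity_dimension(3, 3))  # ordinary line: 3 pieces at 1/3 scale -> 1.0
print(similarity_dimension(4, 3))  # Koch curve: 4 pieces at 1/3 scale -> about 1.2619
```

The Koch value, log 4 / log 3 ≈ 1.26, is strictly greater than 1, which is exactly the sense in which the snowflake's detail exceeds that of a line.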
14.
Attractor
–
In the mathematical field of dynamical systems, an attractor is a set of numerical values toward which a system tends to evolve, for a wide variety of starting conditions of the system. System values that get close enough to the attractor values remain close even if slightly disturbed. In finite-dimensional systems, the evolving variable may be represented algebraically as an n-dimensional vector, and the attractor is a region in n-dimensional space. If the evolving variable is two- or three-dimensional, the attractor of the dynamic process can be represented geometrically in two or three dimensions. An attractor can be a point, a finite set of points, a curve, or a more complicated set. If the variable is a scalar, the attractor is a subset of the real number line. Describing the attractors of chaotic dynamical systems has been one of the achievements of chaos theory. A trajectory of the dynamical system in the attractor does not have to satisfy any special constraints except for remaining on the attractor, forward in time; the trajectory may be periodic or chaotic. If a set of points is periodic or chaotic, but the flow in its neighborhood is away from the set, the set is not an attractor, but instead is called a repeller. A dynamical system is described by one or more differential or difference equations. The equations of a dynamical system specify its behavior over any given short period of time. To determine the behavior for a longer period, it is often necessary to integrate the equations, either through analytical means or through iteration. Dynamical systems in the physical world tend to arise from dissipative systems: if it were not for some driving force, the motion would cease. The dissipation and the driving force tend to balance, killing off initial transients, and the subset of the phase space of the dynamical system corresponding to the typical behavior is the attractor, also known as the attracting section or attractee.
Invariant sets and limit sets are similar to the attractor concept. An invariant set is a set that evolves to itself under the dynamics. A limit set is a set of points such that there exists some initial state that ends up arbitrarily close to the limit set as time goes to infinity. For example, the damped pendulum has two invariant points: the point x0 of minimum height and the point x1 of maximum height. The point x0 is also a limit set, as trajectories converge to it. Because of the dissipation due to air resistance, the point x0 is also an attractor.
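The claim that x0 attracts the motion of the damped pendulum can be illustrated by direct simulation. The Python sketch below is illustrative only: the damping constant, time step and step count are arbitrary choices, and semi-implicit Euler integration stands in for a proper solver. Starting from a large displacement, the state ends up essentially at rest at x0 = (0, 0):

```python
import math

def damped_pendulum(theta0, omega0, damping=0.5, g_over_l=9.8, dt=0.001, steps=50000):
    """Semi-implicit Euler for theta'' = -(g/l)*sin(theta) - damping*theta'."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        omega += (-g_over_l * math.sin(theta) - damping * omega) * dt
        theta += omega * dt
    return theta, omega

theta, omega = damped_pendulum(2.0, 0.0)
print(theta, omega)  # both extremely close to 0: trajectories converge to x0
```

The undamped point x1 at maximum height is also invariant, but any nearby trajectory flows away from it, which is why it is a repeller rather than an attractor.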
15.
Scalar (mathematics)
–
A scalar is an element of a field which is used to define a vector space. A quantity described by multiple scalars, such as one having both direction and magnitude, is called a vector. More generally, a vector space may be defined by using any field instead of the real numbers, such as the complex numbers; the scalars of that space will then be the elements of the associated field. A scalar product operation – not to be confused with scalar multiplication – may be defined on a vector space; a vector space equipped with a scalar product is called an inner product space. The real component of a quaternion is also called its scalar part. The term is also sometimes used informally to mean a vector, matrix, tensor, or other usually compound value that is actually reduced to a single component; thus, for example, the product of a 1×n matrix and an n×1 matrix, which is formally a 1×1 matrix, is often said to be a scalar. The term scalar matrix is used to denote a matrix of the form kI, where k is a scalar and I is the identity matrix. The word scalar derives from the Latin word scalaris, an adjectival form of scala. The English word scale also comes from scala. According to a citation in the Oxford English Dictionary, the first recorded usage of the term scalar in English came with W. R. Hamilton. A vector space is defined as a set of vectors, a set of scalars, and a scalar multiplication operation that takes a scalar k and a vector v to another vector kv. For example, in a coordinate space, the scalar multiplication k(v1, v2, …, vn) yields (kv1, kv2, …, kvn). In a function space, kƒ is the function x ↦ k·ƒ(x). The scalars can be taken from any field, including the rational, algebraic, real, and complex numbers, as well as finite fields. According to a fundamental theorem of linear algebra, every vector space has a basis. It follows that every vector space over a scalar field K is isomorphic to a coordinate vector space where the coordinates are elements of K; for example, every real vector space of dimension n is isomorphic to the n-dimensional real space Rn. Alternatively, a vector space V can be equipped with a norm function that assigns to every vector v in V a scalar ||v||. By definition, multiplying v by a scalar k also multiplies its norm by |k|.
If ||v|| is interpreted as the length of v, this operation can be described as scaling the length of v by k. A vector space equipped with a norm is called a normed vector space. The norm is usually defined to be an element of V's scalar field K. Moreover, if V has dimension 2 or more, K must be closed under square root, as well as the four arithmetic operations; thus the rational numbers Q are excluded.
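The interaction between the norm and scalar multiplication, ||k·v|| = |k|·||v||, can be sketched for the Euclidean norm on R² (an illustrative Python example, not from the source):

```python
import math

def norm(v):
    """Euclidean norm on R^n."""
    return math.sqrt(sum(x * x for x in v))

def scale(k, v):
    """Scalar multiplication k*v, componentwise."""
    return [k * x for x in v]

v = [3.0, 4.0]
k = -2.0
print(norm(v))            # 5.0
print(norm(scale(k, v)))  # 10.0, i.e. |k| * ||v||
```

Note that even with rational inputs like (3, 4) the norm happens to be rational, but in general (e.g. v = (1, 1)) the square root leaves Q, which is why Q cannot serve as the scalar field of a normed space of dimension 2 or more.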
16.
Chaos theory
–
Chaos theory is a branch of mathematics focused on the behavior of dynamical systems that are highly sensitive to initial conditions. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: "Chaos: When the present determines the future, but the approximate present does not approximately determine the future." Chaotic behavior exists in many natural systems, such as weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through analysis of a chaotic mathematical model, or through analytical techniques such as recurrence plots. Chaos theory has applications in several disciplines, including meteorology, sociology, physics, environmental science, computer science, engineering, economics, biology and ecology. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory, and self-assembly processes. Chaos theory concerns deterministic systems whose behavior can in principle be predicted. Chaotic systems are predictable for a while and then appear to become random. The time scale on which prediction breaks down depends on the dynamics of the system and is called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means that, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random. In common usage, chaos means a state of disorder; however, in chaos theory, the term is defined more precisely.
Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, requires a chaotic system to be sensitive to initial conditions, to be topologically mixing, and to have dense periodic orbits. In some cases, the last two properties imply the first; in these cases, while it is often the most practically significant property, sensitivity to initial conditions need not be stated in the definition. If attention is restricted to intervals, the second property implies the other two. An alternative, and in general weaker, definition of chaos uses only the first two properties in the above list. Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points with significantly different future paths. Thus, an arbitrarily small change, or perturbation, of the current trajectory may lead to significantly different future behavior. This sensitivity is popularly illustrated by the title of a 1972 talk by Lorenz, Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas? The flapping wing represents a small change in the initial condition of the system, which causes a chain of events leading to large-scale phenomena.
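Sensitivity to initial conditions is easy to demonstrate with the logistic map x ↦ r·x·(1 − x) at r = 4, a standard chaotic example used here purely as an illustration (it is not discussed in this passage). Two starting points differing by 10⁻¹⁰ drift apart to a macroscopic separation within a few dozen iterations:

```python
def separation_growth(x0, delta=1e-10, r=4.0, steps=100):
    """Largest separation reached by two nearby starting points under x -> r*x*(1-x)."""
    x, y = x0, x0 + delta
    max_sep = 0.0
    for _ in range(steps):
        x, y = r * x * (1 - x), r * y * (1 - y)
        max_sep = max(max_sep, abs(x - y))
    return max_sep

print(separation_growth(0.2))  # of order 1: the tiny initial difference is amplified enormously
```

The exponential growth of this separation is exactly what the Lyapunov time quantifies: each Lyapunov time, the forecast uncertainty grows by a fixed factor.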
17.
Trajectory
–
A trajectory or flight path is the path that a moving object follows through space as a function of time. The object might be a projectile or a satellite; for example, it can be an orbit, the path of a planet, an asteroid, or a comet as it travels around a central mass. A trajectory can be described either by the geometry of the path or as the position of the object over time. In control theory, a trajectory is a time-ordered set of states of a dynamical system. In discrete mathematics, a trajectory is a sequence (f^k(x)), k ∈ N, of values calculated by the repeated application of a mapping f to an element x of its source. A familiar example of a trajectory is the path of a projectile. In a significantly simplified model, the object moves only under the influence of a uniform gravitational force field; this can be a good approximation for a rock that is thrown for short distances. In this simple approximation, the trajectory takes the shape of a parabola. Generally, when determining trajectories, it may be necessary to account for nonuniform gravitational forces and air resistance; this is the focus of the discipline of ballistics. One of the remarkable achievements of Newtonian mechanics was the derivation of Kepler's laws, which hold in the gravitational field of a point mass or of a spherically symmetrical extended mass. Newton's theory later developed into the branch of physics known as classical mechanics. It employs the mathematics of differential calculus. Over the centuries, countless scientists have contributed to the development of these two disciplines. Classical mechanics became a most prominent demonstration of the power of rational thought, i.e. reason, and it helps to understand and predict an enormous range of phenomena. To see how trajectories arise, consider a particle of mass m, moving in a potential field V. Physically speaking, mass represents inertia, and the field V represents external forces of a particular kind known as conservative.
Given V at every relevant position, there is a way to infer the associated force that would act at that position; not all forces can be expressed in this way, however. The motion of the particle is described by the second-order differential equation m·d²x⃗/dt² = −∇V(x⃗), with x⃗ = (x, y, z). On the right-hand side, the force is given in terms of ∇V, the gradient of the potential. This is the mathematical form of Newton's second law of motion: force equals mass times acceleration, for such situations.
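For the uniform-gravity potential V(x, y) = m·g·y, the equation m·d²x⃗/dt² = −∇V reduces to x'' = 0, y'' = −g, and integrating it numerically reproduces the parabolic trajectory described above. A crude Euler-integration sketch in Python (illustrative only; the time step and the landing test are arbitrary choices, and the mass m cancels throughout):

```python
import math

def projectile_range(v0, angle_deg, g=9.8, dt=1e-4):
    """Horizontal distance travelled under uniform gravity, by Euler integration."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x, y = 0.0, 0.0
    while True:
        x += vx * dt
        y += vy * dt
        vy -= g * dt        # the only force per unit mass: -dV/dy = -g
        if y <= 0.0:        # projectile has returned to launch height
            return x

print(projectile_range(20.0, 45.0))  # close to the analytic range v0^2*sin(2*theta)/g, about 40.8
```

Agreement with the closed-form parabola range is a useful sanity check on the integrator before adding air resistance or a nonuniform field, where no closed form exists.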
18.
Periodic function
–
In mathematics, a periodic function is a function that repeats its values in regular intervals or periods. The most important examples are the trigonometric functions, which repeat over intervals of 2π radians. Periodic functions are used throughout science to describe oscillations, waves, and other phenomena that exhibit periodicity. Any function which is not periodic is called aperiodic. A function f is said to be periodic with period P if we have f(x + P) = f(x) for all values of x in the domain. Geometrically, a periodic function can be defined as a function whose graph exhibits translational symmetry. Specifically, a function f is periodic with period P if the graph of f is invariant under translation in the x-direction by a distance of P. This definition of periodic can be extended to other geometric shapes and patterns, such as periodic tessellations of the plane. For example, the sine function is periodic with period 2π, since sin(x + 2π) = sin x for all values of x; this function repeats on intervals of length 2π. Everyday examples are seen when the variable is time; for instance, the hands of a clock or the phases of the moon show periodic behaviour. Periodic motion is motion in which the positions of the system are expressible as periodic functions. For a function on the real numbers or on the integers, periodicity means that the entire graph can be formed from copies of one particular portion, repeated at regular intervals. A simple example of a periodic function is the function f that gives the fractional part of its argument, which has period 1. In particular, f(0.5) = f(1.5) = f(2.5) = … = 0.5. The graph of this function f is the sawtooth wave. The trigonometric functions sine and cosine are common periodic functions, with period 2π. The subject of Fourier series investigates the idea that an arbitrary periodic function is a sum of trigonometric functions with matching periods. According to the definition above, some exotic functions, for example the Dirichlet function, are also periodic; in the case of the Dirichlet function, any nonzero rational number is a period.
For example, f(x) = sin(x) has period 2π; therefore sin(5x) will have period 2π/5. A function whose domain is the complex numbers can have two incommensurate periods without being constant; the elliptic functions are such functions. Using complex variables we have the common periodic function e^{ikx}; if L is the period of this function, then L = 2π/k. One common generalization of periodic functions is that of antiperiodic functions: a function f such that f(x + P) = −f(x) for all x. For example, the sine and cosine functions are π-antiperiodic and 2π-periodic. A further generalization appears in the context of Bloch waves and Floquet theory; in this context, the solution is typically a function of the form f(x + P) = e^{ikP} f(x), where k is a real or complex number. Functions of this form are sometimes called Bloch-periodic in this context; a periodic function is the special case k = 0, and an antiperiodic function is the special case k = π/P.
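The periodicity and antiperiodicity relations above are easy to check numerically; a small illustrative Python sketch (not from the source):

```python
import math

def frac(x):
    """Sawtooth wave: the fractional part of x, periodic with period 1."""
    return x - math.floor(x)

# period-1 behaviour of the sawtooth: f(0.5) = f(1.5) = f(2.5) = 0.5
print(frac(0.5), frac(1.5), frac(2.5))

x = 1.234
print(math.isclose(math.sin(x + 2 * math.pi), math.sin(x)))   # 2*pi-periodic: True
print(math.isclose(math.sin(x + math.pi), -math.sin(x)))      # pi-antiperiodic: True
```

The sine check is the k = π/P antiperiodic special case of the Bloch form, with P = π and e^{ikP} = e^{iπ} = −1.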
19.
Integral
–
In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse, differentiation, being the other. The area above the x-axis adds to the total, and that below the x-axis subtracts from the total. Roughly speaking, the operation of integration is the reverse of differentiation; for this reason, the term integral may also refer to the related notion of the antiderivative. In this case, it is called an indefinite integral. The integrals discussed in this article are those termed definite integrals. A rigorous mathematical definition of the integral was given by Bernhard Riemann. It is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. A line integral is defined for functions of two or three variables, and the interval of integration is replaced by a curve connecting two points on the plane or in space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space. The method of exhaustion was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. A similar method was developed in China around the 3rd century AD by Liu Hui. This method was used in the 5th century by Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng. The next significant advances in integral calculus did not begin to appear until the 17th century. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers.
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation, and this connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed.
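Riemann's thin-vertical-slab procedure can be sketched directly: partition [a, b] into n slabs, sum the slab areas, and compare with the exact value from the antiderivative. A minimal Python example (illustrative; the midpoint rule, the test function and n are arbitrary choices):

```python
def riemann_sum(f, a, b, n=100000):
    """Approximate the definite integral of f on [a, b] with n thin vertical slabs."""
    dx = (b - a) / n
    # midpoint rule: evaluate f at the center of each slab
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# The fundamental theorem gives the exact answer: the antiderivative of x^2 is x^3/3,
# so the integral of x^2 over [0, 1] is 1/3.
print(riemann_sum(lambda x: x * x, 0.0, 1.0))  # very close to 0.3333...
```

The slab sum converges to the exact value as n grows; the fundamental theorem replaces that limiting process with a single evaluation of the antiderivative at the endpoints.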
20.
Iteration
–
Iteration is the act of repeating a process, either to generate an unbounded sequence of outcomes, or with the aim of approaching a desired goal, target or result. Each repetition of the process is called an iteration, and the results of one iteration are used as the starting point for the next. In the context of mathematics or computer science, iteration is a standard building block of algorithms. Iteration in mathematics may refer to the process of iterating a function, i.e. applying a function repeatedly, using the output of one application as the input of the next. Iteration of apparently simple functions can produce complex behaviours and difficult problems; for examples, see the Collatz conjecture and juggler sequences. Another use of iteration in mathematics is in iterative methods, which are used to produce approximate numerical solutions to certain mathematical problems. Newton's method is an example of an iterative method; manual calculation of a number's square root is a common use and a well-known example. Iteration in computing is the marking out of a block of statements within a computer program for a defined number of repetitions. That block of statements is said to be iterated, and a computer scientist might also refer to that block of statements as an iteration. In a typical loop, the body uses the value of a counter as it increments. Iteration also appears in education, where a lesson is repeated until it is mastered; this idea is found in the old adage, "Practice makes perfect." Unlike computing and math, educational iterations are not predetermined; instead, the task is repeated until success is achieved. In algorithmic situations, recursion and iteration can be employed to the same effect. Some types of programming languages, known as functional programming languages, are designed such that they do not set up a block of statements for explicit repetition, as with the for loop; instead, those programming languages exclusively use recursion. In a recursive approach, each piece of work is divided repeatedly until the amount of work is as small as it can possibly be, and the algorithm then reverses and reassembles the pieces into a complete whole.
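Newton's method for square roots, mentioned above as the classic manual iteration, repeatedly replaces the current estimate x by the average of x and a/x. A minimal Python sketch (illustrative; the tolerance and starting guess are arbitrary choices):

```python
def newton_sqrt(a, tol=1e-12):
    """Approximate sqrt(a) by iterating Newton's method on f(x) = x**2 - a."""
    x = a if a > 1 else 1.0       # crude initial guess
    while abs(x * x - a) > tol:
        x = 0.5 * (x + a / x)     # Newton update: x - f(x)/f'(x)
    return x

print(newton_sqrt(2.0))  # approximately 1.41421356...
```

Each iteration roughly doubles the number of correct digits, which is why a handful of repetitions suffices even for hand calculation.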
The classic example of recursion is in list-sorting algorithms such as merge sort. In object-oriented programming, an iterator is an object that ensures iteration is executed in the same way for a range of different data structures, saving time. An iteratee is an abstraction which accepts or rejects data during an iteration. See also: recursion, fractal, iterated function, infinite compositions of analytic functions.
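The Scheme example referenced in the original article has not survived in this text; as an illustrative reconstruction (in Python rather than Scheme, and not the original code), here is a recursive merge sort: the list is divided repeatedly until the pieces are as small as possible, then the algorithm reverses and reassembles the pieces in sorted order.

```python
def merge_sort(xs):
    """Recursively split the list, sort the halves, then merge them."""
    if len(xs) <= 1:              # base case: a piece as small as it can possibly be
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])   # recursive calls divide the work
    right = merge_sort(xs[mid:])
    merged, i, j = [], 0, 0       # reassemble the two sorted halves into a whole
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```

The same computation can be written iteratively (bottom-up merging), illustrating the point that recursion and iteration can be employed to the same effect.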
21.
Friction
–
Friction is the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other. There are several types of friction. Dry friction resists relative lateral motion of two solid surfaces in contact; it is subdivided into static friction between non-moving surfaces, and kinetic friction between moving surfaces. Fluid friction describes the friction between layers of a viscous fluid that are moving relative to each other. Lubricated friction is a case of fluid friction where a lubricant fluid separates two solid surfaces. Skin friction is a component of drag, the force resisting the motion of a fluid across the surface of a body. Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation. When surfaces in contact move relative to each other, the friction between the two surfaces converts kinetic energy into thermal energy. This property can have dramatic consequences, as illustrated by the use of friction created by rubbing pieces of wood together to start a fire. Kinetic energy is converted to thermal energy whenever motion with friction occurs. Another important consequence of many types of friction can be wear, which may lead to performance degradation and/or damage to components. Friction is a component of the science of tribology. Friction is not itself a fundamental force. Dry friction arises from a combination of inter-surface adhesion, surface roughness and surface deformation. The complexity of these interactions makes the calculation of friction from first principles impractical and necessitates the use of empirical methods for analysis. Friction is a non-conservative force: work done against friction is path dependent. In the presence of friction, some energy is always lost in the form of heat; thus mechanical energy is not conserved. The Greeks, including Aristotle, Vitruvius, and Pliny the Elder, were interested in the cause and mitigation of friction.
They were aware of differences between static and kinetic friction, with Themistius stating in 350 A.D. that "it is easier to further the motion of a moving body than to move a body at rest". The classic laws of sliding friction were discovered by Leonardo da Vinci in 1493, a pioneer in tribology; these laws were rediscovered by Guillaume Amontons in 1699. Amontons presented the nature of friction in terms of surface irregularities. The understanding of friction was further developed by Charles-Augustin de Coulomb, who considered the influence of sliding velocity, temperature and humidity. The distinction between static and dynamic friction is made in Coulomb's friction law, although this distinction was already drawn by Johann Andreas von Segner in 1758. Leslie was equally skeptical about the role of adhesion proposed by Desaguliers; in Leslie's view, friction should be seen as a time-dependent process of flattening, pressing down asperities, which creates new obstacles in what were cavities before.
22.
Thermodynamics
–
Thermodynamics is a branch of science concerned with heat and temperature and their relation to energy and work. The behavior of these quantities is governed by the four laws of thermodynamics; the laws of thermodynamics are explained in terms of microscopic constituents by statistical mechanics. Thermodynamics applies to a wide variety of topics in science and engineering, especially physical chemistry and chemical engineering. The initial application of thermodynamics to mechanical heat engines was extended early on to the study of chemical compounds and chemical reactions. Chemical thermodynamics studies the nature of the role of entropy in the process of chemical reactions and has provided the bulk of expansion and knowledge of the field. Other formulations of thermodynamics emerged in the following decades. Statistical thermodynamics, or statistical mechanics, concerned itself with statistical predictions of the collective motion of particles from their microscopic behavior. In 1909, Constantin Carathéodory presented a purely mathematical approach to the field in his axiomatic formulation of thermodynamics. A description of any thermodynamic system employs the four laws of thermodynamics that form an axiomatic basis; the first law specifies that energy can be exchanged between physical systems as heat and work. In thermodynamics, interactions between large ensembles of objects are studied and categorized; central to this are the concepts of the thermodynamic system and its surroundings. A system is composed of particles whose average motions define its properties. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.
With these tools, thermodynamics can be used to describe how systems respond to changes in their environment. This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. This article is focused mainly on classical thermodynamics, which primarily studies systems in thermodynamic equilibrium. Non-equilibrium thermodynamics is often treated as an extension of the classical treatment, but statistical mechanics has brought many advances to that field. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that nature abhors a vacuum. Shortly after Guericke, the English physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, Boyle's Law was formulated, which states that pressure and volume are inversely proportional. Later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine; he did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time. Black and Watt performed experiments together, but it was Watt who conceived the idea of the external condenser, which resulted in a large increase in steam engine efficiency. Drawing on all the previous work led Sadi Carnot, the father of thermodynamics, to publish Reflections on the Motive Power of Fire.
23.
Phase space
–
For mechanical systems, the phase space usually consists of all possible values of position and momentum variables. The concept of phase space was developed in the late 19th century by Ludwig Boltzmann, Henri Poincaré and Willard Gibbs. For every possible state of the system, or allowed combination of values of the system's parameters, a point is included in the space, and the system's evolving state over time traces a path through this high-dimensional space. As a whole, the diagram represents all that the system can be. A phase space may contain a great number of dimensions. For instance, a gas containing many molecules may require a separate dimension for each particle's x, y and z positions and momenta. In classical mechanics, any choice of generalized coordinates qi for the position defines conjugate generalized momenta pi, which together define coordinates on phase space. The motion of an ensemble of systems in this space is studied by classical statistical mechanics. The local density of points in such systems obeys Liouville's theorem. Within the context of a model system in classical mechanics, the phase space coordinates of the system at any given time are composed of all of the system's dynamic variables. Because of this, it is possible to calculate the state of the system at any time in the future or the past. For simple systems, there may be as few as one or two degrees of freedom; the simplest non-trivial examples are the exponential growth/decay model and the logistic growth model. In such cases, a sketch of the phase portrait may give qualitative information about the dynamics of the system. Here, the horizontal axis gives the position and the vertical axis the velocity; as the system evolves, its state follows one of the lines on the phase diagram. Classic examples of phase diagrams from chaos theory are the Lorenz attractor, population growth, and the parameter plane of quadratic polynomials with the Mandelbrot set. A plot of position and momentum variables as a function of time is called a phase plot or a phase diagram.
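The picture of a state tracing a path through phase space can be made concrete with a harmonic oscillator, whose phase-space trajectory is a constant-energy ellipse in the (q, p) plane. The Python sketch below is illustrative only (mass, stiffness, time step and semi-implicit Euler integration are all arbitrary choices); it checks that the traced path stays close to the initial energy surface:

```python
def phase_orbit(q0, p0, steps=10000, dt=0.001, m=1.0, k=1.0):
    """Trace the phase-space path of a harmonic oscillator and record its energy."""
    q, p = q0, p0
    energies = []
    for _ in range(steps):
        p -= k * q * dt          # dp/dt = -dV/dq = -k*q
        q += (p / m) * dt        # dq/dt = p/m
        energies.append(p * p / (2 * m) + k * q * q / 2)
    return min(energies), max(energies)

lo, hi = phase_orbit(1.0, 0.0)
print(lo, hi)  # both stay very close to the initial energy 0.5
```

Near-constancy of the energy along the computed path is a discrete analogue of the fact that Hamiltonian trajectories lie on level sets of the energy, and the bounded drift reflects the volume preservation expressed by Liouville's theorem.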
In quantum mechanics, the coordinates p and q of phase space normally become Hermitian operators in a Hilbert space, but they may retain their classical interpretation, provided functions of them compose in novel algebraic ways. With J. E. Moyal, these completed the foundations of the phase space formulation of quantum mechanics. In thermodynamics and statistical mechanics contexts, the term phase space has two meanings. It is used in the same sense as in classical mechanics; in this sense, as long as the particles are distinguishable, the phase space of an N-particle system has 6N dimensions. N is typically on the order of Avogadro's number; thus describing the system at a microscopic level is often impractical
24.
Pendulum
–
A pendulum is a weight suspended from a pivot so that it can swing freely. When a pendulum is displaced sideways from its resting, equilibrium position, it is subject to a restoring force that accelerates it back toward equilibrium. When released, the restoring force combined with the pendulum's mass causes it to oscillate about the equilibrium position, swinging back and forth. The time for one complete cycle, a left swing and a right swing, is called the period. The period depends on the length of the pendulum and also, to a slight degree, on the amplitude. Pendulums are also used in scientific instruments such as accelerometers and seismometers. Historically they were used as gravimeters to measure the acceleration of gravity in geophysical surveys. The word pendulum is New Latin, from the Latin pendulus, meaning hanging. The simple gravity pendulum is an idealized mathematical model of a pendulum: a weight on the end of a massless cord suspended from a frictionless pivot. When given an initial push, it will swing back and forth at a constant amplitude. Real pendulums are subject to friction and air drag, so the amplitude of their swings declines. The period of a simple gravity pendulum is independent of the mass of the bob. For small swings the period of swing is approximately the same for different size swings; that is, the period is independent of amplitude. This property, called isochronism, is the reason pendulums are so useful for timekeeping: successive swings of the pendulum take the same amount of time, even if changing in amplitude. For larger amplitudes, the period increases gradually with amplitude, so it is longer than given by the small-angle formula T ≈ 2π√(L/g). For example, at an amplitude of θ0 = 23° it is 1% larger than given by that formula. The period increases asymptotically as θ0 approaches 180°, because the value θ0 = 180° is an unstable equilibrium point for the pendulum. The length L used to calculate the period of the simple pendulum in the equation above is the distance from the pivot point to the center of mass of the bob. Any swinging rigid body free to rotate about a fixed horizontal axis is called a compound pendulum or physical pendulum. 
The appropriate equivalent length L for calculating the period of any such pendulum is the distance from the pivot to the center of oscillation. This point is located under the center of mass at a distance from the pivot traditionally called the radius of oscillation. If most of the mass is concentrated in a relatively small bob compared to the pendulum length, the center of oscillation is close to the center of mass. Substituting this expression in the equation above, the period T of a compound pendulum is given by T = 2π√(I/(mgR)) for sufficiently small oscillations, where I is the moment of inertia of the pendulum about the pivot point, m is its mass, and R is the distance between the pivot point and the center of mass. For example, a rigid rod of length L pivoted about one end has moment of inertia I = mL²/3
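Both period formulas can be cross-checked numerically; a minimal sketch, assuming standard gravity g = 9.81 m/s² and illustrative rod parameters:

```python
import math

g = 9.81  # m/s^2, standard gravity (illustrative choice)

def simple_period(L):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(L / g)

def compound_period(I, m, R):
    """Small-angle period of a physical pendulum: T = 2*pi*sqrt(I/(m*g*R))."""
    return 2 * math.pi * math.sqrt(I / (m * g * R))

# A uniform rod of length L pivoted at one end:
# I = m*L^2/3, center of mass at R = L/2.
L, m = 1.0, 2.0
T_rod = compound_period(m * L**2 / 3, m, L / 2)

# Its equivalent simple-pendulum length works out to 2L/3.
print(abs(T_rod - simple_period(2 * L / 3)))  # ~0
```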
25.
Subset
–
In mathematics, especially in set theory, a set A is a subset of a set B, or equivalently B is a superset of A, if A is contained inside B; that is, all elements of A are also elements of B. The relationship of one set being a subset of another is called inclusion or sometimes containment. The subset relation defines a partial order on sets, and the algebra of subsets forms a Boolean algebra in which the relation is called inclusion. For any set S, the inclusion relation ⊆ is a partial order on the set P(S) of all subsets of S, defined by A ≤ B ⟺ A ⊆ B. We may also partially order P(S) by reverse set inclusion by defining A ≤ B ⟺ B ⊆ A. When quantified, A ⊆ B is represented as ∀x (x ∈ A → x ∈ B). Some authors use the symbols ⊂ and ⊃ to indicate subset and superset respectively, with the same meaning as ⊆ and ⊇; so, for example, for these authors it is true of every set A that A ⊂ A. Other authors prefer to use the symbols ⊂ and ⊃ to indicate proper subset and proper superset, respectively, and this usage makes ⊆ and ⊂ analogous to the inequality symbols ≤ and <. For example, if x ≤ y then x may or may not equal y, but if x < y, then x definitely does not equal y, and is less than y. Similarly, using the convention that ⊂ means proper subset, if A ⊆ B, then A may or may not equal B, but if A ⊂ B, then A definitely does not equal B. The set A = {1, 2} is a proper subset of B = {1, 2, 3}; thus both expressions A ⊆ B and A ⊊ B are true. The set D = {1, 2, 3} is a subset (but not a proper subset) of E = {1, 2, 3}; thus D ⊆ E is true, and D ⊊ E is false. Any set is a subset of itself, but not a proper subset. The empty set, denoted by ∅, is a subset of any given set X, and it is also always a proper subset of any set except itself. The set of natural numbers is a proper subset of the set of rational numbers, and the set of points in a line segment is a proper subset of the set of points in a line. These are two examples in which both the subset and the whole set are infinite, and the subset has the same cardinality as the whole. The set of rational numbers is a proper subset of the set of real numbers. In this example, both sets are infinite, but the latter set has a larger cardinality than the former. Another example can be shown in an Euler diagram. Inclusion is the canonical partial order in the sense that every partially ordered set is isomorphic to some collection of sets ordered by inclusion. 
The ordinal numbers are a simple example: if each ordinal n is identified with the set [n] of all ordinals less than or equal to n, then a ≤ b if and only if [a] ⊆ [b]. For the power set P(S) of a set S, the inclusion partial order is the Cartesian product of k = |S| copies of the partial order on {0, 1} for which 0 < 1. This can be illustrated by enumerating S = {s1, s2, …, sk} and associating with each subset T ⊆ S the k-tuple from {0, 1}^k of which the ith coordinate is 1 if and only if si is a member of T
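The bit-tuple identification can be made concrete; a small sketch over an illustrative three-element set:

```python
from itertools import product

# Identify each subset T of S = (s1, ..., sk) with a k-tuple of bits:
# the i-th bit is 1 exactly when si is in T.
S = ("a", "b", "c")

def subset_from_bits(bits):
    return {s for s, bit in zip(S, bits) if bit}

# Enumerating all bit tuples enumerates the whole power set P(S).
power_set = [subset_from_bits(bits) for bits in product((0, 1), repeat=len(S))]
print(len(power_set))  # 8 = 2**3

# Coordinatewise order on bits matches inclusion: {a, c} is a subset of {a, b, c}.
print(subset_from_bits((1, 0, 1)) <= subset_from_bits((1, 1, 1)))  # True
```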
26.
Neighbourhood (mathematics)
–
In topology and related areas of mathematics, a neighbourhood is one of the basic concepts in a topological space. It is closely related to the concepts of open set and interior. Intuitively speaking, a neighbourhood of a point is a set of points containing that point where one can move some amount away from that point without leaving the set. If X is a topological space and p is a point in X, a neighbourhood of p is a subset V of X that includes an open set U containing p. This is also equivalent to p ∈ X being in the interior of V; note that the neighbourhood V need not be an open set itself. If V is open it is called an open neighbourhood. Some mathematicians require that neighbourhoods be open, so it is important to note conventions. A set that is a neighbourhood of each of its points is open, since it can be expressed as the union of open sets containing each of its points. The collection of all neighbourhoods of a point is called the neighbourhood system at the point. If S is a subset of a topological space X, then a neighbourhood of S is a set V that includes an open set U containing S; the neighbourhood of a point is just a special case of this definition. In a metric space M = (X, d), a set V is a neighbourhood of a point p if there exists an open ball with centre p and radius r > 0, B_r(p) = B(p; r) = {x ∈ X : d(x, p) < r}, that is contained in V. V is called a uniform neighbourhood of a set S if there exists a number r > 0 such that for all elements p of S, B_r(p) is contained in V. For r > 0, the r-neighbourhood S_r of a set S is the set of all points in X that are at distance less than r from S: S_r = ⋃_{p ∈ S} B_r(p). It directly follows that an r-neighbourhood is a uniform neighbourhood. The above definition is useful if the notion of open set is already defined. There is an alternative way to define a topology, by first defining the neighbourhood system, and then defining open sets as those sets containing a neighbourhood of each of their points. In a uniform space S = (X, Φ), V is called a uniform neighbourhood of P if P is not close to X ∖ V. 
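The metric-space definitions above can be sketched directly; a minimal example of the r-neighbourhood of a finite set on the real line (the set and radius are illustrative):

```python
# r-neighbourhood of a set S in the real line with the usual metric:
# S_r is the union of open balls B_r(p) over p in S, i.e. the points at
# distance less than r from S. Sketch for a finite S.

def dist_to_set(x, S):
    return min(abs(x - p) for p in S)

def in_r_neighbourhood(x, S, r):
    return dist_to_set(x, S) < r

S = {0.0, 10.0}
print(in_r_neighbourhood(0.4, S, 0.5))   # True: within 0.5 of the point 0
print(in_r_neighbourhood(5.0, S, 0.5))   # False: far from both points
```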
A deleted neighbourhood of a point p is a neighbourhood of p without the point p itself. For instance, the interval (−1, 1) = {y : −1 < y < 1} is a neighbourhood of p = 0 in the real line, so the set (−1, 0) ∪ (0, 1) = (−1, 1) ∖ {0} is a deleted neighbourhood of 0. Note that a deleted neighbourhood of a given point is not in fact a neighbourhood of the point. The concept of deleted neighbourhood occurs in the definition of the limit of a function
27.
Open set
–
In topology, an open set is an abstract concept generalizing the idea of an open interval in the real line. The defining conditions are very loose, and they allow enormous flexibility in the choice of open sets; in the two extremes, every set can be open (the discrete topology), or no set can be open but the space itself and the empty set (the indiscrete topology). In practice, however, open sets are usually chosen to be similar to the open intervals of the real line. The notion of an open set provides a way to speak of nearness of points in a topological space. Once a choice of open sets is made, the properties of continuity, connectedness, and compactness can be defined; each choice of open sets for a space is called a topology. Open sets and the topologies that they comprise are of central importance in point-set topology. Intuitively, an open set provides a method to distinguish two points: for example, if about one point in a topological space there exists an open set not containing another point, the two points are referred to as topologically distinguishable. In this manner, one may speak of whether two subsets of a topological space are near without concretely defining a metric on the topological space. Therefore, topological spaces may be seen as a generalization of metric spaces. In the set of all real numbers, one has the natural Euclidean metric, that is, a function which measures the distance between two real numbers: d(x, y) = |x − y|. Therefore, given a real number x, one can speak of the set of all points close to that real number, that is, within ε of x. In essence, points within ε of x approximate x to an accuracy of degree ε. Note that ε > 0 always, but as ε becomes smaller and smaller, one obtains points that approximate x to a higher and higher degree of accuracy. For example, if x = 0 and ε = 1, the points within ε of x are precisely the points of the interval (−1, 1). However, with ε = 0.5, the points within ε of x are precisely the points of (−0.5, 0.5). Clearly, these points approximate x to a greater degree of accuracy than when ε = 1. 
The previous discussion shows, for the case x = 0, that one may approximate x to higher and higher degrees of accuracy by choosing ε smaller and smaller; in particular, sets of the form (−ε, ε) give us a lot of information about points close to x = 0. Thus, rather than speaking of a concrete Euclidean metric, one may use such sets to describe points close to x. If, for example, the only such set allowed were R itself, there would be only one achievable degree of accuracy in approximating 0, namely being a member of R; thus, we find that in some sense, every real number is distance 0 away from 0. It may help in this case to think of the measure as being a binary condition: all things in R are equally close to 0. In general, one refers to the family of sets containing 0, used to approximate 0, as a neighborhood basis. In fact, one may generalize these notions to an arbitrary set, rather than just the real numbers. In this case, given a point x of that set, one may define a collection of sets around x used to approximate x. Of course, this collection would have to satisfy certain properties, for otherwise we may not have a well-defined method to measure distance
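The ε-based notion of nearness can be sketched in code; a minimal interior-point test for a finite union of open intervals (the intervals are illustrative):

```python
# A set built from open intervals is a neighborhood of x exactly when
# some epsilon-ball (x - eps, x + eps) fits inside it. Sketch for finite
# unions of open intervals on the real line.

def contains_ball(intervals, x, eps):
    """True if (x - eps, x + eps) lies inside one of the open intervals."""
    return any(a <= x - eps and x + eps <= b for a, b in intervals)

def is_interior(intervals, x, eps=1e-9):
    return any(a < x < b for a, b in intervals) and contains_ball(intervals, x, eps)

U = [(-1.0, 1.0), (2.0, 3.0)]
print(is_interior(U, 0.0))   # True: a small ball around 0 fits in (-1, 1)
print(is_interior(U, 1.0))   # False: 1 is a boundary point, not interior
```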
28.
Metric space
–
In mathematics, a metric space is a set for which distances between all members of the set are defined. Those distances, taken together, are called a metric on the set; a metric on a space induces topological properties like open and closed sets, which lead to the study of more abstract topological spaces. The most familiar metric space is 3-dimensional Euclidean space; in fact, a metric is a generalization of the Euclidean metric arising from the four long-known properties of the Euclidean distance. The Euclidean metric defines the distance between two points as the length of the straight line segment connecting them. Maurice Fréchet introduced metric spaces in his work Sur quelques points du calcul fonctionnel. The function d is also called a distance function or simply distance. Often, d is omitted and one just writes M for a metric space if it is clear from the context what metric is used. Ignoring mathematical details, for any system of roads and terrains the distance between two locations can be defined as the length of the shortest route connecting those locations; to be a metric, there shouldn't be any one-way roads. The triangle inequality expresses the fact that detours aren't shortcuts. Many of the examples below can be seen as concrete versions of this general idea. The real numbers with the distance function d(x, y) = |y − x| given by the absolute difference form a metric space. The rational numbers with the same distance function also form a metric space. The positive real numbers with distance function d(x, y) = |log(y/x)| form a metric space. Any normed vector space is a metric space by defining d(x, y) = ∥y − x∥. Examples: the Manhattan norm gives rise to the Manhattan distance; the maximum norm gives rise to the Chebyshev distance or chessboard distance, the minimal number of moves a chess king would take to travel from x to y. The British Rail metric on a normed vector space is given by d(x, y) = ∥x∥ + ∥y∥ for distinct points x and y, and d(x, x) = 0. 
The name alludes to the tendency of railway journeys to proceed via London irrespective of their final destination. If (M, d) is a metric space and X is a subset of M, then (X, d) becomes a metric space by restricting the domain of d to X × X. The discrete metric, where d(x, y) = 0 if x = y and d(x, y) = 1 otherwise, is a simple but important example; this, in particular, shows that for any set there is always a metric space associated to it. Using this metric, any single point is an open ball, and therefore every subset is open. A finite metric space is a metric space having a finite number of points
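Several of the metrics above can be sketched and spot-checked; a minimal example on points of the plane (the sample points are illustrative, and the spot-check is not a proof):

```python
import itertools
import math

# Sketches of metrics mentioned in the text, on R^2 (points as 2-tuples).

def manhattan(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def chebyshev(x, y):
    return max(abs(a - b) for a, b in zip(x, y))

def discrete(x, y):
    return 0 if x == y else 1

def british_rail(x, y):
    """Distinct points connect through the origin ("London")."""
    norm = lambda p: math.hypot(*p)
    return 0 if x == y else norm(x) + norm(y)

# Spot-check the triangle inequality d(x, z) <= d(x, y) + d(y, z).
points = [(0.0, 0.0), (1.0, 2.0), (-3.0, 0.5), (2.0, -1.0)]
for d in (manhattan, chebyshev, discrete, british_rail):
    for x, y, z in itertools.product(points, repeat=3):
        assert d(x, z) <= d(x, y) + d(y, z) + 1e-12
print("triangle inequality holds on the sample points")
```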
29.
Measure (mathematics)
–
In mathematical analysis, a measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size. In this sense, a measure is a generalization of the concepts of length, area, and volume; for instance, the Lebesgue measure of the interval [0, 1] in the real numbers is its length in the everyday sense of the word, specifically 1. Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set X. It must further be countably additive: the measure of a subset that can be decomposed into a finite (or countably infinite) number of smaller disjoint subsets is the sum of the measures of the smaller subsets. In general, if one wants to associate a consistent size to each subset of a set while satisfying the other axioms of a measure, one only finds trivial examples like the counting measure. This problem was resolved by defining measure only on a sub-collection of all subsets, the so-called measurable subsets, which are required to form a σ-algebra; this means that countable unions, countable intersections and complements of measurable subsets are measurable. Non-measurable sets in a Euclidean space, on which the Lebesgue measure cannot be defined consistently, are complicated in the sense of being badly mixed up with their complement. Indeed, their existence is a consequence of the axiom of choice. Measure theory was developed in stages during the late 19th and early 20th centuries by Émile Borel, Henri Lebesgue, and Johann Radon, among others. The main applications of measures are in the foundations of the Lebesgue integral and in Andrey Kolmogorov's axiomatisation of probability theory; probability theory considers measures that assign to the whole set the size 1, and considers measurable subsets to be events whose probability is given by the measure. Ergodic theory considers measures that are invariant under, or arise naturally from, a dynamical system. Let X be a set and Σ a σ-algebra over X. A function μ from Σ to the extended real number line is called a measure if it satisfies the following properties. Non-negativity: for all E in Σ, μ(E) ≥ 0. 
Countable additivity: for all countable collections {E_k}, k = 1, 2, …, of pairwise disjoint sets in Σ, μ(⋃_{k=1}^∞ E_k) = ∑_{k=1}^∞ μ(E_k). One may require that at least one set E has finite measure. Then the empty set automatically has measure zero because of countable additivity: μ(E) = μ(E ∪ ∅ ∪ ∅ ∪ …) = μ(E) + μ(∅) + μ(∅) + …, which implies (since μ(E) is finite) that μ(∅) = 0. If only the second and third conditions of the definition of measure above are met, μ is called a signed measure. The pair (X, Σ) is called a measurable space, and the members of Σ are called measurable sets. If (X, Σ_X) and (Y, Σ_Y) are two measurable spaces, then a function f : X → Y is called measurable if for every Y-measurable set B ∈ Σ_Y the inverse image f⁻¹(B) is X-measurable. See also Measurable function#Caveat about another setup. A triple (X, Σ, μ) is called a measure space. A probability measure is a measure with total measure one, i.e. μ(X) = 1; a probability space is a measure space with a probability measure
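The axioms can be illustrated with the counting measure mentioned above; a minimal sketch on a finite set:

```python
# Counting measure on a finite set: mu(E) = number of elements of E.
# Sketch checking the measure axioms on an example.

def mu(E):
    """Counting measure: |E| for a finite set E."""
    return len(E)

# The empty set has measure zero.
print(mu(frozenset()))  # 0

# Additivity on pairwise disjoint sets: mu(A u B) = mu(A) + mu(B).
A, B = frozenset({0, 1}), frozenset({3, 4})
assert A & B == frozenset()        # disjoint
print(mu(A | B), mu(A) + mu(B))    # 4 4
```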
30.
Geometric primitive
–
The term geometric primitive, or prim, is used in computer graphics and CAD systems in various senses, with the common meaning of the simplest geometric objects that the system can handle. Sometimes the subroutines that draw the corresponding objects are called geometric primitives as well. The most primitive primitives are the point and the straight line segment, which were all that early vector graphics systems had. In constructive solid geometry, primitives are simple geometric shapes such as a cube, cylinder, sphere, cone, pyramid, or torus. Modern 2D computer graphics systems may operate with primitives which are lines and shapes. A common set of two-dimensional primitives includes lines, points, and polygons, although some people prefer to consider triangles primitives, because every polygon can be constructed from triangles. All other graphic elements are built up from these primitives. In three dimensions, triangles or polygons positioned in three-dimensional space can be used as primitives to model more complex 3D forms. In some cases, curves may be considered primitives; in other cases, curves are complex forms created from many straight primitive shapes. Primitives are so called in 3D modelling because they are the building blocks for many other shapes. A 3D package may also include a list of extended primitives, which are more complex shapes that come with the package. For example, a teapot is listed as a primitive in 3D Studio Max. In CAD software or 3D modelling, the interface may present the user with the ability to create primitives which may be further modified by edits. For example, in the practice of box modelling the user will start with a cuboid, then use extrusion and other operations to create the model. In this use the primitive is just a convenient starting point, rather than the fundamental unit of modelling. 
Various graphics accelerators exist with hardware acceleration for rendering specific primitives such as lines or triangles, frequently with texture mapping. Modern 3D accelerators typically accept sequences of triangles as triangle strips.
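The triangle-strip encoding can be sketched in a few lines; a minimal decoder (the vertex data is illustrative, and real renderers also alternate the winding order of successive triangles):

```python
# Decoding a triangle strip into individual triangles: each vertex after
# the first two forms a triangle with the previous two vertices, so a
# strip of n vertices encodes n - 2 triangles.

def strip_to_triangles(vertices):
    return [tuple(vertices[i:i + 3]) for i in range(len(vertices) - 2)]

strip = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0)]
triangles = strip_to_triangles(strip)
print(len(triangles))  # 3
```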
31.
Line (geometry)
–
The notion of line or straight line was introduced by ancient mathematicians to represent straight objects with negligible width and depth. Lines are an idealization of such objects; in one ancient description, the straight line is that which is equally extended between its points. In modern mathematics, given the multitude of geometries, the concept of a line is tied to the way the geometry is described. When a geometry is described by a set of axioms, the notion of a line is usually left undefined, and the properties of lines are determined by the axioms which refer to them. One advantage of this approach is the flexibility it gives to users of the geometry: thus in differential geometry a line may be interpreted as a geodesic, while in some projective geometries a line is a 2-dimensional vector space. This flexibility also extends beyond mathematics and, for example, permits physicists to think of the path of a light ray as being a line. Any attempt to define a line in terms of simpler notions leads to circularity; to avoid this vicious circle certain concepts must be taken as primitive concepts, terms which are given no definition. In geometry, it is frequently the case that the concept of line is taken as a primitive. In those situations where a line is a defined concept, as in coordinate geometry, some other fundamental ideas are taken as primitives instead. When the line concept is a primitive, the behaviour and properties of lines are dictated by the axioms which they must satisfy. In a non-axiomatic or simplified axiomatic treatment of geometry, the concept of a primitive notion may be too abstract to be dealt with. In this circumstance it is possible that a description or mental image of a primitive notion is provided to give a foundation on which to build the notion, which would formally be based on the axioms. Descriptions of this type may be referred to as definitions by some authors, but these are not true definitions and could not be used in formal proofs of statements. 
The definition of line in Euclid's Elements falls into this category. When geometry was first formalised by Euclid in the Elements, he defined a general line to be "breadthless length", with a straight line being a line "which lies evenly with the points on itself". These definitions serve little purpose since they use terms which are not, themselves, defined; in fact, Euclid did not use these definitions in this work and probably included them just to make it clear to the reader what was being discussed. In an axiomatic formulation of Euclidean geometry, such as that of Hilbert, lines are stated to have certain properties which relate them to other lines and points. For example, for any two distinct points, there is a unique line containing them, and any two distinct lines intersect in at most one point. In two dimensions, i.e. the Euclidean plane, two lines which do not intersect are called parallel; in higher dimensions, two lines that do not intersect are parallel if they are contained in a plane, or skew if they are not. Any collection of finitely many lines partitions the plane into convex polygons. Lines in a Cartesian plane or, more generally, in affine coordinates, can be described algebraically by linear equations. In two dimensions, the equation for non-vertical lines is often given in the slope-intercept form y = mx + b, where m is the slope or gradient of the line, b is the y-intercept of the line, and x is the independent variable of the function y = f(x)
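The slope-intercept form makes the at-most-one-intersection property easy to check; a minimal sketch (the coefficients are illustrative):

```python
# Two non-vertical lines y = m*x + b intersect in at most one point:
# solving m1*x + b1 = m2*x + b2 gives x = (b2 - b1)/(m1 - m2) when
# the slopes differ; equal slopes mean parallel (or identical) lines.

def intersection(m1, b1, m2, b2):
    """Intersection point of two lines in slope-intercept form, or None."""
    if m1 == m2:
        return None
    x = (b2 - b1) / (m1 - m2)
    return (x, m1 * x + b1)

print(intersection(1.0, 0.0, -1.0, 2.0))  # (1.0, 1.0)
print(intersection(2.0, 1.0, 2.0, 3.0))   # None: same slope, parallel
```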
32.
Surface (topology)
–
In topology and differential geometry, a surface is a two-dimensional manifold, and, as such, may be an abstract surface not embedded in any Euclidean space. For example, the Klein bottle is a surface which cannot be represented in the three-dimensional Euclidean space without introducing self-intersections. In mathematics, a surface is a geometrical shape that resembles a deformed plane. The most familiar examples arise as boundaries of solid objects in ordinary three-dimensional Euclidean space R3. The exact definition of a surface may depend on the context: typically, in algebraic geometry, a surface may cross itself, while in topology and differential geometry it may not. A surface is a two-dimensional space; this means that a moving point on a surface may move in two directions. In other words, around almost every point, there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles a two-dimensional sphere. The concept of surface is widely used in physics, engineering, computer graphics, and many other disciplines, primarily in representing the surfaces of physical objects: for example, in analyzing the aerodynamic properties of an airplane. A surface is a topological space in which every point has an open neighbourhood homeomorphic to some open subset of the Euclidean plane E2. Such a neighborhood, together with the corresponding homeomorphism, is known as a (coordinate) chart, and it is through this chart that the neighborhood inherits the standard coordinates on the Euclidean plane. These coordinates are known as local coordinates, and these homeomorphisms lead us to describe surfaces as being locally Euclidean. In most writings on the subject, it is assumed, explicitly or implicitly, that as a topological space a surface is also nonempty, second countable, and Hausdorff. It is also assumed that the surfaces under consideration are connected. 
The rest of this article will assume, unless specified otherwise, that a surface is nonempty, Hausdorff, second countable, and connected. A surface with boundary is defined analogously, with every point having an open neighbourhood homeomorphic to some open subset of the closed upper half-plane; these homeomorphisms are also known as charts. The boundary of the upper half-plane is the x-axis, and a point on the surface mapped via a chart to the x-axis is termed a boundary point. The collection of such points is known as the boundary of the surface, which is necessarily a one-manifold, that is, a union of closed curves. On the other hand, a point mapped above the x-axis is an interior point, and the collection of interior points is the interior of the surface, which is always non-empty. The closed disk is an example of a surface with boundary
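As an illustration of a chart, here is a sketch of stereographic projection from the north pole, a standard coordinate chart on the unit sphere minus that pole (the example is generic, not specific to this article):

```python
# Stereographic projection from the north pole (0, 0, 1): a chart mapping
# the unit sphere minus the pole homeomorphically onto the plane.

def project(x, y, z):
    """Chart: sphere point (excluding the north pole) -> plane coordinates."""
    return (x / (1 - z), y / (1 - z))

def unproject(u, v):
    """Inverse chart: plane coordinates -> point on the unit sphere."""
    s = u * u + v * v
    return (2 * u / (s + 1), 2 * v / (s + 1), (s - 1) / (s + 1))

# Round trip through the chart returns the original point.
p = (0.6, 0.0, 0.8)
u, v = project(*p)
q = unproject(u, v)
print(max(abs(a - b) for a, b in zip(p, q)))  # ~0
```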
33.
Three-dimensional space
–
Three-dimensional space is a geometric setting in which three values are required to determine the position of an element. This is the informal meaning of the term dimension. In physics and mathematics, a sequence of n numbers can be understood as a location in n-dimensional space; when n = 3, the set of all such locations is called three-dimensional Euclidean space. It is commonly represented by the symbol ℝ3, and it serves as a three-parameter model of the physical universe in which all known matter exists. However, this space is only one example of a large variety of spaces in three dimensions called 3-manifolds. Furthermore, in this case, these three values can be labeled by any combination of three chosen from the terms width, height, depth, and breadth. In mathematics, analytic geometry describes every point in three-dimensional space by means of three coordinates. Three coordinate axes are given, each perpendicular to the other two at the origin, the point at which they cross; they are usually labeled x, y, and z. Two distinct points always determine a line. Three distinct points are either collinear or determine a unique plane, and four distinct points can either be collinear, coplanar, or determine the entire space. Two distinct lines can intersect, be parallel, or be skew. Two parallel lines, or two intersecting lines, lie in a unique plane, so skew lines are lines that do not meet and do not lie in a common plane. Two distinct planes can either meet in a common line or are parallel. Three distinct planes, no pair of which are parallel, can either meet in a common line, meet in a unique common point, or have no point in common; in the last case, the three lines of intersection of each pair of planes are mutually parallel. A line can lie in a given plane, intersect that plane in a unique point, or be parallel to the plane; in the last case, there will be lines in the plane that are parallel to the given line. A hyperplane is a subspace of one dimension less than the dimension of the full space. The hyperplanes of a three-dimensional space are the two-dimensional subspaces, that is, the planes
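The incidence facts above can be tested with vector algebra; a minimal sketch using the cross product and scalar triple product (the sample points are illustrative):

```python
# Collinearity and coplanarity tests for points in R^3.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def collinear(p, q, r, eps=1e-12):
    """Three points are collinear iff (q - p) x (r - p) = 0."""
    return all(abs(c) < eps for c in cross(sub(q, p), sub(r, p)))

def coplanar(p, q, r, s, eps=1e-12):
    """Four points are coplanar iff the scalar triple product vanishes."""
    return abs(dot(sub(q, p), cross(sub(r, p), sub(s, p)))) < eps

print(collinear((0, 0, 0), (1, 1, 1), (2, 2, 2)))          # True
print(coplanar((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)))  # True: z = 0 plane
print(coplanar((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))  # False: spans space
```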
34.
Topology
–
In mathematics, topology is concerned with the properties of space that are preserved under continuous deformations, such as stretching, crumpling and bending, but not tearing or gluing. This can be studied by considering a collection of subsets, called open sets, that satisfy certain properties; important topological properties include connectedness and compactness. Topology developed as a field of study out of geometry and set theory, through analysis of concepts such as space and dimension. Such ideas go back to Gottfried Leibniz, who in the 17th century envisioned the geometria situs. Leonhard Euler's Seven Bridges of Königsberg problem and polyhedron formula are arguably the field's first theorems. The term topology was introduced by Johann Benedict Listing in the 19th century, and by the middle of the 20th century topology had become a major branch of mathematics. General topology defines the basic notions used in all branches of topology. Algebraic topology tries to measure degrees of connectivity using algebraic constructs such as homology. Differential topology is the field dealing with differentiable functions on differentiable manifolds; it is closely related to differential geometry, and together they make up the geometric theory of differentiable manifolds. Geometric topology primarily studies manifolds and their embeddings in other manifolds; a particularly active area is low-dimensional topology, which studies manifolds of four or fewer dimensions. This includes knot theory, the study of mathematical knots. Topology, as a well-defined mathematical discipline, originates in the early part of the twentieth century, but some isolated results can be traced back several centuries. Among these are certain questions in geometry investigated by Leonhard Euler, whose 1736 paper on the Seven Bridges of Königsberg is regarded as one of the first practical applications of topology. 
On 14 November 1750 Euler wrote to a friend that he had realised the importance of the edges of a polyhedron, and this led to his polyhedron formula, V − E + F = 2, where V, E, and F are the numbers of vertices, edges, and faces. Some authorities regard this analysis as the first theorem of topology, signalling the birth of the field. Further contributions were made by Augustin-Louis Cauchy, Ludwig Schläfli, Johann Benedict Listing, Bernhard Riemann, and Enrico Betti. Listing introduced the term Topologie in Vorstudien zur Topologie, written in his native German, in 1847; the term topologist, in the sense of a specialist in topology, was first used in 1905 in the magazine Spectator. Their work was corrected, consolidated, and greatly extended by Henri Poincaré; in 1895 he published his ground-breaking paper Analysis Situs, which introduced the concepts now known as homotopy and homology, which are now considered part of algebraic topology. Unifying the work on function spaces of Georg Cantor, Vito Volterra, Cesare Arzelà, Jacques Hadamard, Giulio Ascoli and others, Maurice Fréchet introduced the metric space in 1906. A metric space is now considered a special case of a general topological space. In 1914, Felix Hausdorff coined the term "topological space" and gave the definition for what is now called a Hausdorff space; currently, a topological space is a slight generalization of Hausdorff spaces, given in 1922 by Kazimierz Kuratowski
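The polyhedron formula can be spot-checked on familiar solids (the vertex, edge, and face counts below are the standard values):

```python
# Euler's polyhedron formula: V - E + F = 2 for convex polyhedra.

solids = {
    "tetrahedron": (4, 6, 4),    # (V, E, F)
    "cube":        (8, 12, 6),
    "octahedron":  (6, 12, 8),
}

for name, (v, e, f) in solids.items():
    print(name, v - e + f)  # 2 in every case
```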
35.
Stephen Smale
–
Stephen Smale is an American mathematician from Flint, Michigan. His research concerns topology, dynamical systems, and mathematical economics; he was awarded the Fields Medal in 1966 and spent more than three decades on the mathematics faculty of the University of California, Berkeley. Smale entered the University of Michigan in 1948. Initially, he was a good student, placing into an honors calculus sequence taught by Bob Thrall and earning himself A's. However, his sophomore and junior years were marred with mediocre grades, mostly B's and C's. Nonetheless, with some luck, Smale was accepted as a graduate student at the University of Michigan's mathematics department. Yet again, Smale performed poorly in his first years, earning a C average as a graduate student, and it was only when the department chair, Hildebrandt, threatened to kick Smale out that he began to work hard. Smale finally earned his Ph.D. in 1957, under Raoul Bott. Smale began his career as an instructor at the college at the University of Chicago. In 1958, he astounded the mathematical world with a proof of a sphere eversion. After having made great strides in topology, he turned to the study of dynamical systems. His first contribution is the Smale horseshoe, which started significant research in dynamical systems, and he also outlined a research program carried out by many others. Smale is also known for injecting Morse theory into mathematical economics. In 1998 he compiled a list of 18 problems in mathematics to be solved in the 21st century, known as Smale's problems. This list was compiled in the spirit of Hilbert's famous list of problems produced in 1900; in fact, Smale's list contains some of the original Hilbert problems, including the Riemann hypothesis and the second half of Hilbert's sixteenth problem, both of which are still unsolved. 
Earlier in his career, Smale was involved in controversy over remarks he made regarding his work habits while proving the higher-dimensional Poincaré conjecture; he said that his best work had been done on the beaches of Rio. This led to the withholding of his grant money from the NSF. He has been politically active in various movements in the past, such as the Free Speech movement and the movement against the Vietnam War; at one time he was subpoenaed by the House Un-American Activities Committee. In 1960 Smale was appointed an associate professor of mathematics at the University of California, Berkeley, moving to a professorship at Columbia University the following year. In 1964 he returned to a professorship at UC Berkeley, where he spent the main part of his career. He retired from UC Berkeley in 1995 and took up a post as professor at the City University of Hong Kong. He also amassed over the years one of the finest private mineral collections in existence; many of Smale's mineral specimens can be seen in the book The Smale Collection. Since 2002 Smale has been a professor at the Toyota Technological Institute at Chicago, and starting August 1, 2009, he is also a Distinguished University Professor at the City University of Hong Kong. In 2007, Smale was awarded the Wolf Prize in mathematics. Among his achievements is the proof of the generalized Poincaré conjecture in dimensions greater than four
36.
Horseshoe map
–
In the mathematics of chaos theory, a horseshoe map is any member of a class of chaotic maps of the square into itself. It is a core example in the study of dynamical systems. The map was introduced by Stephen Smale while studying the behavior of the orbits of the van der Pol oscillator. The action of the map is defined geometrically by squishing the square, then stretching the result into a long strip, and finally folding the strip back onto the square. Most points eventually leave the square under the action of the map; they go to the side caps, where under iteration they converge to a fixed point. The points that remain in the square under repeated iteration form a fractal set and are part of the invariant set of the map. The squishing, stretching and folding of the map are typical of chaotic systems. In the horseshoe map, the squeezing and stretching are uniform and compensate each other, so that the area of the square does not change. The folding is done neatly, so that the orbits that remain forever in the square can be simply described. The horseshoe map f is a diffeomorphism defined from a region S of the plane into itself. The region S is a square capped by two semi-disks. The action of f is defined through the composition of three geometrically defined transformations. First the square is contracted along the vertical direction by a factor a < 1/2; the caps are contracted so as to remain attached to the resulting rectangle. Contracting by a factor smaller than one half ensures that there will be a gap between the branches of the horseshoe. Next the rectangle is stretched horizontally by a factor of 1/a. Finally the resulting strip is folded into a horseshoe shape and placed back into S. The interesting part of the dynamics is the image of the square into itself; once that part is defined, the map can be extended to a diffeomorphism by defining its action on the caps. The caps are made to contract and eventually map inside one of the caps, and the extension of f to the caps adds a fixed point to the non-wandering set of the map.
To keep the class of horseshoe maps simple, the curved region of the horseshoe should not map back into the square. The horseshoe map is one-to-one, which means that an inverse f−1 exists when restricted to the image of S under f. By folding the contracted and stretched square in different ways, other types of horseshoe maps are possible. To ensure that the map remains one-to-one, the stretched square must not overlap itself.
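The squeeze-stretch-fold mechanism can be sketched numerically. The model below is an illustrative assumption, not Smale's exact smooth map: a piecewise-linear variant that stretches the bottom and top horizontal thirds of the unit square by a factor of 3, contracts them by 1/3, and flips the top strip to model the fold; points in the middle third escape.

```python
def horseshoe(x, y):
    """Piecewise-linear horseshoe-style map on the unit square.

    Vertical stretch by 3, horizontal contraction by 1/3; the top
    strip is flipped, modeling the fold. Returns None for points
    that are mapped out of the square.
    """
    if y <= 1/3:                      # bottom strip -> left vertical strip
        return (x / 3, 3 * y)
    if y >= 2/3:                      # top strip -> right vertical strip, flipped
        return (1 - x / 3, 3 - 3 * y)
    return None                       # middle strip escapes the square

# Most points eventually leave: track survivors on a grid under iteration.
n = 30
points = [(i / n, j / n) for i in range(n + 1) for j in range(n + 1)]
for step in range(5):
    points = [horseshoe(x, y) for x, y in points]
    points = [p for p in points if p is not None]
    print(f"after {step + 1} iterations: {len(points)} survivors")
```

The surviving fraction shrinks geometrically with each iteration; the points that never escape form the Cantor-like invariant set discussed above (the origin, for instance, is a fixed point of this model map).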
37.
Cantor set
–
In mathematics, the Cantor set is a set of points lying on a single line segment that has a number of remarkable and deep properties. It was discovered in 1874 by Henry John Stephen Smith and introduced by the German mathematician Georg Cantor in 1883. Through consideration of this set, Cantor and others helped lay the foundations of modern point-set topology. Although Cantor himself defined the set in a general, abstract way, he mentioned the ternary construction only in passing, as an example of a more general idea, that of a perfect set that is nowhere dense. The Cantor ternary set C is created by iteratively deleting the open middle third from a set of line segments. One starts by deleting the open middle third (1/3, 2/3) from the interval [0, 1], leaving two line segments [0, 1/3] ∪ [2/3, 1]. Next, the open middle third of each of these remaining segments is deleted, leaving four line segments [0, 1/9] ∪ [2/9, 1/3] ∪ [2/3, 7/9] ∪ [8/9, 1]. This process is continued ad infinitum, where the nth set is C_n = C_{n−1}/3 ∪ (2/3 + C_{n−1}/3) for n ≥ 1, and C_0 = [0, 1]. The Cantor ternary set contains all points in the interval [0, 1] that are not deleted at any step in this infinite process. This process of removing middle thirds is an example of a finite subdivision rule. It is perhaps most intuitive to think of the Cantor set as the set of numbers between zero and one whose expansion in base three does not contain the digit 1. As the above shows, the Cantor ternary set is in bijection with the set of paths in a full binary tree on countably many nodes. Such a path is determined by an infinite series of instructions determining at each node whether we go left or right as we traverse the diagram; this in turn describes the ternary expansion of the number, with a left turn read as the digit 0 and a right turn as the digit 2. For example, the path left-right-right-left-left describes a ternary number beginning 0.02200. In particular, the Cantor set is canonically in bijection with the set of binary sequences.
Since the Cantor set is defined as the set of points not excluded, the total length removed is the geometric progression ∑_{n=0}^{∞} 2^n/3^(n+1) = 1/3 + 2/9 + 4/27 + 8/81 + ⋯ = (1/3)·(1/(1 − 2/3)) = 1, so that the proportion of length left is 1 − 1 = 0. This calculation shows that the Cantor set cannot contain any interval of non-zero length. It may seem surprising that there should be anything left; after all, the sum of the lengths of the removed intervals equals the length of the original interval. However, a closer look at the process reveals that there must be something left, since removing the open segment (1/3, 2/3) from the original interval leaves behind the points 1/3 and 2/3, and no later step ever deletes an endpoint of a remaining segment.
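Both the membership test by repeated middle-third removal and the removed-length computation above can be checked numerically; this is a sketch using floating-point arithmetic, so the depth-limited membership test is only approximate:

```python
def in_cantor(x, depth=40):
    """Approximate membership test for the Cantor ternary set:
    repeatedly zoom into the surviving third; a point that ever
    lands in the open middle third (1/3, 2/3) has been deleted."""
    for _ in range(depth):
        if 1/3 < x < 2/3:
            return False
        x = 3 * x if x <= 1/3 else 3 * x - 2   # rescale surviving third to [0, 1]
    return True

print(in_cantor(0.0), in_cantor(1.0))   # endpoints are never deleted
print(in_cantor(0.25))                  # 1/4 = 0.020202..._3, no digit 1
print(in_cantor(0.5))                   # deleted at the first step

# Total removed length: 2^n intervals of length 1/3^(n+1) at step n.
removed = sum(2**n / 3**(n + 1) for n in range(60))
print(removed)                          # approaches 1
```

The point 1/4 illustrates that the set contains more than segment endpoints: it is never deleted, even though it is an endpoint of no remaining segment.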
38.
Fixed point (mathematics)
–
In mathematics, a fixed point of a function is an element of the function's domain that is mapped to itself by the function. That is to say, c is a fixed point of the function f if f(c) = c. This means f(f(…f(c)…)) = f^n(c) = c, an important terminating consideration when recursively computing f. A set of fixed points is sometimes called a fixed set. For example, if f is defined on the real numbers by f(x) = x^2 − 3x + 4, then 2 is a fixed point of f, because f(2) = 2. In graphical terms, a fixed point x means the point (x, f(x)) is on the line y = x. Points that come back to the same value after a finite number of iterations of the function are called periodic points; a fixed point is a periodic point with period equal to one. In projective geometry, a fixed point of a projectivity has been called a double point. In Galois theory, the set of fixed points of a set of field automorphisms is a field called the fixed field of the set of automorphisms. An expression of prerequisites and a proof of the existence of a solution is given by the Banach fixed-point theorem. The natural cosine function has exactly one fixed point, which is attractive. In this case, "close enough" is not a stringent criterion at all: to demonstrate this, start with any real number and repeatedly press the cos key on a calculator. It eventually converges to about 0.739085133, which is a fixed point; that is where the graph of the cosine function intersects the line y = x. Not all fixed points are attractive: for example, x = 0 is a fixed point of the function f(x) = 2x, but iteration of this function for any value other than zero rapidly diverges. However, if the function f is continuously differentiable in an open neighbourhood of a fixed point x0 and |f′(x0)| < 1, attraction is guaranteed. Attractive fixed points are a special case of the wider mathematical concept of attractors. An attractive fixed point is said to be a stable fixed point if it is also Lyapunov stable. A fixed point is said to be a neutrally stable fixed point if it is Lyapunov stable but not attracting. The center of a linear differential equation of the second order is an example of a neutrally stable fixed point.
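The calculator experiment above, and the contrast between attractive and non-attractive fixed points, can be reproduced in a few lines (a minimal sketch of the standard facts, not tied to any particular source):

```python
import math

# f(x) = x^2 - 3x + 4 has 2 as a fixed point: f(2) = 2.
f = lambda x: x**2 - 3*x + 4
print(f(2))                  # 2

# The attractive fixed point of cos: start anywhere and iterate.
# |cos'(x)| = |sin(x)| < 1 near the fixed point, so attraction is guaranteed.
x = 1.0
for _ in range(100):
    x = math.cos(x)
print(x)                     # ~0.739085133
print(abs(math.cos(x) - x))  # essentially zero: cos(x) = x

# x = 0 is a fixed point of g(x) = 2x, but |g'(0)| = 2 > 1,
# so iteration from any nonzero value diverges.
y = 1e-9
for _ in range(40):
    y = 2 * y
print(y)                     # has grown to ~1100
```

Starting the cosine loop from any other real number gives the same limit, illustrating that the basin of attraction here is the whole real line.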
39.
Limit cycle
–
In the study of dynamical systems with two-dimensional phase space, a limit cycle is a closed trajectory having the property that at least one other trajectory spirals into it as time approaches either infinity or negative infinity. Such behavior is exhibited in some nonlinear systems. Limit cycles have been used to model the behavior of a great many real-world oscillatory systems; the study of limit cycles was initiated by Henri Poincaré. We consider a two-dimensional dynamical system of the form x′ = V(x), where V : R2 → R2 is a smooth function. A trajectory of this system is some smooth function x(t) with values in R2 which satisfies this differential equation. Such a trajectory is called closed if it is not constant but returns to its starting point. An orbit is the image of a trajectory, a subset of R2. A closed orbit, or cycle, is the image of a closed trajectory. A limit cycle is a cycle which is the limit set of some other trajectory. By the Jordan curve theorem, every closed trajectory divides the plane into two regions, the interior and the exterior of the curve. In the case where all the neighbouring trajectories approach the limit cycle as time approaches infinity, it is called a stable or attractive limit cycle; if instead all neighbouring trajectories approach it as time approaches negative infinity, it is an unstable limit cycle. The corresponding statements hold for trajectories in the interior or exterior alike. Stable limit cycles are examples of attractors. Every closed trajectory contains within its interior a stationary point of the system. The Bendixson–Dulac theorem and the Poincaré–Bendixson theorem predict the absence or existence, respectively, of limit cycles of two-dimensional nonlinear dynamical systems. Finding limit cycles in general is a difficult problem; the number of limit cycles of a polynomial differential equation in the plane is the main object of the second part of Hilbert's sixteenth problem.
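A standard textbook example of a stable limit cycle (an illustrative choice, not drawn from this article) is the system x′ = x − y − x(x² + y²), y′ = x + y − y(x² + y²), which in polar coordinates reads r′ = r(1 − r²), θ′ = 1, so trajectories spiral onto the unit circle. A sketch with forward-Euler integration:

```python
import math

def step(x, y, dt=0.01):
    """One forward-Euler step of x' = x - y - x(x^2+y^2),
    y' = x + y - y(x^2+y^2). The unit circle r = 1 is a
    stable (attractive) limit cycle of this system."""
    r2 = x*x + y*y
    return (x + dt * (x - y - x * r2),
            y + dt * (x + y - y * r2))

for x0, y0 in [(0.1, 0.0), (2.0, 0.0)]:   # start inside and outside the cycle
    x, y = x0, y0
    for _ in range(3000):                 # integrate to t = 30
        x, y = step(x, y)
    print(f"from r = {math.hypot(x0, y0):.1f}: final r = {math.hypot(x, y):.3f}")
```

Both trajectories approach r ≈ 1 from opposite sides, as the definition of an attractive limit cycle requires; the small residual offset from 1 is forward-Euler discretization error.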
40.
Intersection (set theory)
–
In mathematics, the intersection A ∩ B of two sets A and B is the set that contains all elements of A that also belong to B, but no other elements. For an explanation of the symbols used in this article, refer to the table of mathematical symbols. The intersection of A and B is written A ∩ B. Formally, A ∩ B = {x : x ∈ A and x ∈ B}; that is, x ∈ A ∩ B if and only if x ∈ A and x ∈ B. For example, the intersection of the sets {1, 2, 3} and {2, 3, 4} is {2, 3}. The number 9 is not in the intersection of the set of prime numbers and the set of odd numbers. More generally, one can take the intersection of several sets at once; the intersection of A, B, C, and D, for example, is A ∩ B ∩ C ∩ D = A ∩ (B ∩ (C ∩ D)). Intersection is an associative operation; thus, A ∩ (B ∩ C) = (A ∩ B) ∩ C. Additionally, intersection is commutative; thus A ∩ B = B ∩ A. Inside a universe U one may define the complement Ac of A to be the set of all elements of U not in A. We say that A intersects (meets) B at an element x if x belongs to both A and B; A intersects B if their intersection is inhabited. We say that A and B are disjoint if A does not intersect B; in plain language, they have no elements in common. A and B are disjoint if their intersection is empty, denoted A ∩ B = ∅. For example, the sets {1, 2} and {3, 4} are disjoint, while the set of even numbers intersects the set of multiples of 3 at 0, 6, 12, 18 and other numbers. The most general notion is the intersection of an arbitrary nonempty collection of sets. If M is a nonempty set whose elements are themselves sets, then x is an element of the intersection of M if and only if x is an element of every element of M. The notation for this last concept can vary considerably. Set theorists will sometimes write ⋂M, while others will instead write ⋂A∈M A. The latter notation can be generalized to ⋂i∈I Ai, which refers to the intersection of the collection {Ai : i ∈ I}. Here I is a nonempty set, and Ai is a set for every i in I. In the case that the index set I is the set of natural numbers, notation analogous to that of an infinite series may be seen. When formatting is difficult, this can also be written A1 ∩ A2 ∩ A3 ∩ ⋯, even though, strictly speaking, this abbreviates the nested form A1 ∩ (A2 ∩ (A3 ∩ ⋯)).
Finally, let us note that whenever the symbol ∩ is placed before other symbols instead of between them, it should be of a larger size. Note that in the previous section we excluded the case where M was the empty set.
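These definitions map directly onto Python's built-in set type (used here purely as an illustration): ∩ is the `&` operator, and the intersection of a nonempty collection M can be written `set.intersection(*M)`.

```python
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

print(A & B)                       # {2, 3}: elements of A that also belong to B
print((A & B) & C == A & (B & C))  # True: intersection is associative
print(A & B == B & A)              # True: intersection is commutative

# 9 is odd but not prime, so it is not in the intersection.
primes = {2, 3, 5, 7, 11, 13}
odds = set(range(1, 15, 2))
print(9 in (primes & odds))        # False

# Intersection of a nonempty collection M of sets.
M = [A, B, C]
print(set.intersection(*M))        # {3}

# Disjoint sets have an empty intersection.
print({1, 2} & {3, 4})             # set()
```

Note that `set.intersection` requires at least one argument, mirroring the restriction above that M must be nonempty.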
41.
Union (set theory)
–
In set theory, the union of a collection of sets is the set of all elements in the collection. It is one of the fundamental operations through which sets can be combined and related to each other. For an explanation of the symbols used in this article, refer to the table of mathematical symbols. The union of two sets A and B is the set of elements which are in A, in B, or in both. For example, if A = {1, 2} and B = {2, 3} then A ∪ B = {1, 2, 3}. Sets cannot have duplicate elements, so the union of the sets {1, 2, 3} and {2, 3, 4} is {1, 2, 3, 4}; multiple occurrences of identical elements have no effect on the cardinality of a set or its contents. Binary union is an associative operation; that is, A ∪ (B ∪ C) = (A ∪ B) ∪ C. The operations can be performed in any order, and the parentheses may be omitted without ambiguity. Similarly, union is commutative, so the sets can be written in any order. The empty set is an identity element for the operation of union; that is, A ∪ ∅ = A, for any set A. This follows from analogous facts about logical disjunction. Since sets with unions and intersections form a Boolean algebra, intersection distributes over union, A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C), and union distributes over intersection, A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C). One can take the union of several sets simultaneously; for example, the union of three sets A, B, and C contains all elements of A, all elements of B, and all elements of C, and nothing else. Thus, x is an element of A ∪ B ∪ C if and only if x is in at least one of A, B, and C. In mathematics a finite union means any union carried out on a finite number of sets. The most general notion is the union of an arbitrary collection of sets. If M is a set whose elements are themselves sets, then x is an element of the union of M if and only if some element of M contains x; in symbols, x ∈ ⋃M ⟺ ∃A ∈ M, x ∈ A. This idea subsumes the preceding sections, in that A ∪ B ∪ C is the union of the collection {A, B, C}. Also, if M is the empty collection, then the union of M is the empty set. The notation for the general concept can vary considerably. For a finite union of sets S1, S2, S3, …, Sn one often writes S1 ∪ S2 ∪ S3 ∪ ⋯ ∪ Sn or ⋃_{i=1}^{n} Si.
In the case that the index set I is the set of natural numbers, one uses notation analogous to that of an infinite series. Whenever the symbol ∪ is placed before other symbols instead of between them, it is of a larger size.
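As with intersection, the definitions above can be illustrated with Python's set type (an illustration only, with small example sets chosen for this sketch): ∪ is the `|` operator, and the union of a collection M is `set().union(*M)`, which also handles the empty collection correctly.

```python
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

print(A | B)                       # {1, 2, 3, 4}: duplicates appear only once
print((A | B) | C == A | (B | C))  # True: union is associative
print(A | set() == A)              # True: the empty set is an identity element

# Distributivity, as in a Boolean algebra of sets:
print(A & (B | C) == (A & B) | (A & C))  # True
print(A | (B & C) == (A | B) & (A | C))  # True

# Union of a collection M; for the empty collection it is the empty set.
M = [A, B, C]
print(set().union(*M))             # {1, 2, 3, 4, 5}
print(set().union(*[]))            # set()
```

Starting from `set()` rather than calling `set.union` on the first element is what makes the empty-collection case come out as ∅, matching the convention stated above.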