1.
Differential equation
–
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities and the derivatives represent their rates of change; because such relations are extremely common, differential equations play a prominent role in many disciplines, including engineering, physics, economics, and biology. In pure mathematics, differential equations are studied from several different perspectives. Only the simplest differential equations are solvable by explicit formulas; when a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. Differential equations first came into existence with the invention of calculus by Newton and Leibniz. Jacob Bernoulli proposed the Bernoulli differential equation in 1695; this is an equation of the form y′ + Py = Qy^n, for which Leibniz obtained solutions the following year by simplifying it. Historically, the problem of a vibrating string, such as that of a musical instrument, was studied by Jean le Rond d'Alembert, Leonhard Euler, and Daniel Bernoulli. In 1746, d'Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation. The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem: the problem of determining a curve on which a particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler; both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. In 1822, Fourier published his work on heat flow, which contained his proposal of the heat equation for conductive diffusion of heat; this partial differential equation is now taught to every student of mathematical physics. For example, in mechanics, the motion of a body is described by its position and velocity as the time value varies. 
Newton's laws allow one to express these variables dynamically as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation may be solved explicitly. An example of modelling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance: the ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity; this means that the ball's acceleration, which is the derivative of its velocity, depends on the velocity. Finding the velocity as a function of time involves solving a differential equation. Differential equations can be divided into several types.
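As an illustrative sketch (not part of the source text), the falling-ball model can be written as dv/dt = g − kv and integrated numerically; the drag coefficient k below is a made-up value chosen only for demonstration.

```python
import math

# Sketch: solve dv/dt = g - k*v (gravity minus velocity-proportional drag)
# with Euler's method, then compare with the known closed-form solution.
g = 9.81    # gravitational acceleration, m/s^2
k = 0.5     # hypothetical drag coefficient per unit mass, 1/s
dt = 0.001  # time step, s

v = 0.0     # ball released from rest
t = 0.0
while t < 10.0:             # step the velocity forward in time
    v += (g - k * v) * dt
    t += dt

# Closed-form solution v(t) = (g/k) * (1 - exp(-k t)) for comparison
v_exact = (g / k) * (1 - math.exp(-k * 10.0))
print(round(v, 3), round(v_exact, 3))  # both approach terminal velocity g/k
```

The numerical and analytic answers agree closely, which is the point of the passage: even when no explicit formula is known, stepping the equation forward in small time increments approximates the solution.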
2.
Natural science
–
Natural science is a branch of science concerned with the description, prediction, and understanding of natural phenomena, based on observational and empirical evidence. Mechanisms such as peer review and repeatability of findings are used to try to ensure the validity of scientific advances. Natural science can be divided into two main branches: life science and physical science. Physical science is subdivided into branches including physics, space science, chemistry, and earth science, and these branches of natural science may be further divided into more specialized branches. Modern natural science succeeded more classical approaches to natural philosophy, usually traced to ancient Greece. Galileo, Descartes, Francis Bacon, and Newton debated the benefits of using approaches which were more mathematical and more experimental in a methodical way. Still, philosophical perspectives, conjectures, and presuppositions, though often overlooked, remain necessary in natural science. Systematic data collection, including discovery science, succeeded natural history, which emerged in the 16th century by describing and classifying plants, animals, minerals, and so on. Today, natural history suggests observational descriptions aimed at popular audiences. Philosophers of science have suggested a number of criteria, including Karl Popper's controversial falsifiability criterion, to help them differentiate scientific endeavors from non-scientific ones; validity, accuracy, and quality-control mechanisms such as peer review are among them. Life science encompasses the set of disciplines that examine phenomena related to living organisms. The scale of study can range from sub-component biophysics up to complex ecologies. Biology is concerned with the characteristics, classification, and behaviors of organisms, as well as how species were formed and their interactions with each other and the environment. The biological fields of botany, zoology, and medicine date back to the earliest periods of civilization. However, it was not until the 19th century that biology became a unified science. 
Once scientists discovered commonalities between all living things, it was decided they were best studied as a whole. Modern biology is divided into subdisciplines by the type of organism and by the scale being studied. Molecular biology is the study of the fundamental chemistry of life, while cellular biology is the examination of the cell; at a higher level, anatomy and physiology look at the internal structures of organisms. Chemistry, constituting the scientific study of matter at the atomic and molecular scale, deals primarily with collections of atoms, such as gases, molecules, crystals, and metals. The composition, statistical properties, transformations, and reactions of these materials are studied. Chemistry also involves understanding the properties and interactions of individual atoms and molecules for use in larger-scale applications. Most chemical processes can be studied directly in a laboratory, using a series of techniques for manipulating materials. Chemistry is often called the central science because of its role in connecting the other natural sciences. Early experiments in chemistry had their roots in the system of alchemy; the science of chemistry began to develop with the work of Robert Boyle, the discoverer of gases, and Antoine Lavoisier, who developed the theory of the conservation of mass. The success of this science led to a complementary chemical industry that now plays a significant role in the world economy.
3.
Engineering
–
The term engineering is derived from the Latin ingenium, meaning cleverness, and ingeniare, meaning to contrive or devise. Engineering has existed since ancient times, as humans devised fundamental inventions such as the wedge, lever, and wheel; each of these inventions is essentially consistent with the modern definition of engineering. The word engineer itself dates back to 1390, when an engineer originally referred to a constructor of military engines. In this context, now obsolete, an engine referred to a military machine. Notable examples of the obsolete usage which have survived to the present day are military engineering corps. The word engine is of even older origin, ultimately deriving from the Latin ingenium, meaning innate quality, especially mental power, and hence a clever invention. The earliest civil engineer known by name is Imhotep; as one of the officials of the Pharaoh Djosèr, he probably designed and supervised the construction of the Pyramid of Djoser at Saqqara in Egypt around 2630–2611 BC. Ancient Greece developed machines in both civilian and military domains; the Antikythera mechanism, the first known mechanical computer, and the mechanical inventions of Archimedes are examples of early mechanical engineering. In the Middle Ages, the trebuchet was developed. The first steam engine was built in 1698 by Thomas Savery, and its development gave rise to the Industrial Revolution in the coming decades. With the rise of engineering as a profession in the 18th century, the term came to be applied more narrowly; similarly, in addition to military and civil engineering, the fields then known as the mechanic arts became incorporated into engineering. The inventions of Thomas Newcomen and the Scottish engineer James Watt gave rise to modern mechanical engineering. The development of specialized machines and machine tools during the Industrial Revolution led to the rapid growth of mechanical engineering, both in its birthplace Britain and abroad. 
John Smeaton was the first self-proclaimed civil engineer and is often regarded as the father of civil engineering. He was an English engineer responsible for the design of bridges, canals, and harbours, and he was also a capable mechanical engineer and an eminent physicist. Smeaton designed the third Eddystone Lighthouse, where he pioneered the use of hydraulic lime; his lighthouse remained in use until 1877, when it was dismantled and partially rebuilt at Plymouth Hoe, where it is known as Smeaton's Tower. The United States census of 1850 listed the occupation of engineer for the first time, with a count of 2,000; there were fewer than 50 engineering graduates in the U.S. before 1865. In 1870 there were a dozen U.S. mechanical engineering graduates, and by 1890 there were 6,000 engineers in civil, mining, mechanical, and electrical specialties. No chair of applied mechanism and applied mechanics was established at Cambridge until 1875. The theoretical work of James Maxwell and Heinrich Hertz in the late 19th century gave rise to the field of electronics.
4.
Astronomy
–
Astronomy is a natural science that studies celestial objects and phenomena. It applies mathematics, physics, and chemistry in an effort to explain the origin of those objects and phenomena and their evolution. Objects of interest include planets, moons, stars, galaxies, and comets, while the phenomena include supernova explosions and gamma-ray bursts; more generally, all astronomical phenomena that originate outside Earth's atmosphere are within the purview of astronomy. A related but distinct subject, physical cosmology, is concerned with the study of the Universe as a whole. Astronomy is the oldest of the natural sciences; early civilizations in recorded history, such as the Babylonians, Greeks, Indians, Egyptians, Nubians, Iranians, and Chinese, made methodical observations of the night sky. During the 20th century, the field of professional astronomy split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects; theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. The two fields complement each other, with theoretical astronomy seeking to explain the observational results and observations being used to confirm theoretical results. Astronomy is one of the few sciences where amateurs can play an active role, especially in discovery, and amateur astronomers have made and contributed to many important astronomical discoveries. Astronomy means 'law of the stars'. Astronomy should not be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects; although the two share a common origin, they are now entirely distinct. Generally, either the term astronomy or astrophysics may be used to refer to this subject; however, since most modern astronomical research deals with subjects related to physics, modern astronomy could actually be called astrophysics. 
Few fields, such as astrometry, are purely astronomy rather than also astrophysics. Some titles of the leading scientific journals in this field include The Astronomical Journal, The Astrophysical Journal, and Astronomy and Astrophysics. In early times, astronomy comprised only the observation and predictions of the motions of objects visible to the naked eye; in some locations, early cultures assembled massive artifacts that possibly had some astronomical purpose. Before tools such as the telescope were invented, early study of the stars was conducted using the naked eye. Most early astronomy actually consisted of mapping the positions of the stars and planets, a science now referred to as astrometry. From these observations, early ideas about the motions of the planets were formed, and the nature of the Sun, Moon, and Earth was explored. The Earth was believed to be the center of the Universe, with the Sun, the Moon, and the stars rotating around it; this is known as the geocentric model of the Universe, or the Ptolemaic system. The Babylonians discovered that lunar eclipses recurred in a repeating cycle known as a saros.
5.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. As one of the most fundamental scientific disciplines, the main goal of physics is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs; in recognition of the field's importance, the United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences; the stars and planets were often a target of worship, believed to represent the gods of early civilizations. While the explanations for these phenomena were often unscientific and lacking in evidence, according to Asger Aaboe the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. In the medieval Islamic world, the most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi, and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he was not only the first to disprove the ancient Greek idea about vision but also came up with a new theory. In the book, he was also the first to study the phenomenon of the pinhole camera. Many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler, and Isaac Newton, were in his debt. 
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title. The translation of The Book of Optics had a huge impact on Europe: from it, later European scholars were able to build the same devices that Ibn al-Haytham had built, and from this, such important inventions as eyeglasses, magnifying glasses, and telescopes were developed. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the mathematical study of change, which provided new mathematical methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry, and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Einstein's theory of relativity; both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger, and others; from this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities. In many ways, physics stems from ancient Greek philosophy.
6.
Chemistry
–
Chemistry is a branch of physical science that studies the composition, structure, properties, and change of matter. Chemistry is sometimes called the central science because it bridges other natural sciences, including physics. For the differences between chemistry and physics, see the comparison of chemistry and physics. The history of chemistry can be traced to alchemy, which had been practiced for several millennia in various parts of the world. The word chemistry comes from alchemy, which referred to a set of practices that encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy, and mysticism. An alchemist was called a chemist in popular speech, and later the suffix -ry was added to describe the art of the chemist as chemistry. The modern word alchemy in turn is derived from the Arabic word al-kīmīā. In origin, the term is borrowed from the Greek χημία or χημεία; this may have Egyptian origins, since al-kīmīā is derived from the Greek χημία, which is in turn derived from the word Chemi or Kimi, the ancient name of Egypt in Egyptian. Alternately, al-kīmīā may derive from χημεία, meaning 'cast together'. In retrospect, the definition of chemistry has changed over time, as new discoveries and theories add to the functionality of the science. The term chymistry, in the view of the noted scientist Robert Boyle in 1661, meant the subject of the material principles of mixed bodies. In 1837, Jean-Baptiste Dumas considered the word chemistry to refer to the science concerned with the laws and effects of molecular forces. More recently, in 1998, Professor Raymond Chang broadened the definition of chemistry to mean the study of matter and the changes it undergoes. Early civilizations, such as the Egyptians, Babylonians, and Indians, amassed practical knowledge concerning the arts of metallurgy, pottery, and dyes, but did not develop a systematic theory. Greek atomism dates back to 440 BC, arising in works by philosophers such as Democritus and Epicurus. 
In 50 BC, the Roman philosopher Lucretius expanded upon the theory in his book De rerum natura. Unlike modern concepts of science, Greek atomism was purely philosophical in nature, with little concern for empirical observations and no concern for chemical experiments. Alchemical work, particularly the development of distillation, continued in the early Byzantine period, with the most famous practitioner being the 4th-century Greek-Egyptian Zosimos of Panopolis. Robert Boyle formulated Boyle's law, rejected the classical four elements, and proposed a mechanistic alternative of atoms; before his work, though, many important discoveries had been made by the Scottish chemist Joseph Black and the Dutchman J. B. The English scientist John Dalton proposed the modern theory of atoms: that all substances are composed of indivisible atoms of matter. Davy discovered nine new elements, including the alkali metals, by extracting them from their oxides with electric current. The Briton William Prout first proposed ordering all the elements by their atomic weight, as all atoms had a weight that was an exact multiple of the atomic weight of hydrogen. The inert gases, later called the noble gases, were discovered by William Ramsay in collaboration with Lord Rayleigh at the end of the century, thereby filling in the basic structure of the periodic table. Organic chemistry was developed by Justus von Liebig and others, following Friedrich Wöhler's synthesis of urea, which showed that living organisms were, in theory, reducible to chemistry.
7.
Biology
–
Biology is a natural science concerned with the study of life and living organisms, including their structure, function, growth, evolution, distribution, identification, and taxonomy. Modern biology is a vast and eclectic field, composed of many branches and subdisciplines. However, despite the broad scope of biology, there are certain unifying concepts within it that consolidate it into a single, coherent field. In general, biology recognizes the cell as the basic unit of life and genes as the basic unit of heredity. It is also understood today that all organisms survive by consuming and transforming energy and by regulating their internal environment to maintain a stable and vital condition. The term biology is derived from the Greek word βίος (bios), 'life', and the suffix -λογία (-logia), 'study of'. The Latin-language form of the term first appeared in 1736, when the Swedish scientist Carl Linnaeus used biologi in his Bibliotheca botanica; the first German use, Biologie, was in a 1771 translation of Linnaeus' work. In 1797, Theodor Georg August Roose used the term in the preface of a book, and Karl Friedrich Burdach used it in 1800 in the more restricted sense of the study of human beings from a morphological, physiological, and psychological perspective: 'The science that concerns itself with these objects we will indicate by the name biology, or the doctrine of life.' Although modern biology is a relatively recent development, sciences related to and included within it have been studied since ancient times. Natural philosophy was studied as early as the ancient civilizations of Mesopotamia, Egypt, and the Indian subcontinent; however, the origins of modern biology and its approach to the study of nature are most often traced back to ancient Greece. While the formal study of medicine dates back to Hippocrates, it was Aristotle who contributed most extensively to the development of biology. Especially important are his History of Animals and other works where he showed naturalist leanings, and later more empirical works that focused on biological causation and the diversity of life. 
Aristotle's successor at the Lyceum, Theophrastus, wrote a series of books on botany that survived as the most important contribution of antiquity to the plant sciences, even into the Middle Ages. Scholars of the medieval Islamic world who wrote on biology included al-Jahiz and Al-Dīnawarī, who wrote on botany. Biology began to quickly develop and grow with Anton van Leeuwenhoek's dramatic improvement of the microscope; it was then that scholars discovered spermatozoa, bacteria, and infusoria. Investigations by Jan Swammerdam led to new interest in entomology and helped to develop the basic techniques of microscopic dissection and staining. Advances in microscopy also had a profound impact on biological thinking: in the early 19th century, a number of biologists pointed to the central importance of the cell, and thanks to the work of Robert Remak and Rudolf Virchow, most biologists came to accept what is now known as cell theory. Meanwhile, taxonomy and classification became the focus of natural historians.
8.
Geology
–
Geology is an earth science concerned with the solid Earth, the rocks of which it is composed, and the processes by which they change over time. Geology can also refer generally to the study of the solid features of any terrestrial planet. Geology gives insight into the history of the Earth by providing the primary evidence for plate tectonics and the evolutionary history of life. Geology also plays a role in engineering and is a major academic discipline. The majority of geological data comes from research on solid Earth materials. These typically fall into one of two categories: rock and unconsolidated material. The majority of research in geology is associated with the study of rock, as rock provides the primary record of the majority of the geologic history of the Earth. There are three major types of rock: igneous, sedimentary, and metamorphic. The rock cycle is an important concept in geology which illustrates the relationships between these three types of rock and magma. When a rock crystallizes from melt, it is an igneous rock. An igneous rock can be weathered, eroded, deposited, and lithified, ultimately becoming a sedimentary rock; the sedimentary rock can then subsequently be turned into a metamorphic rock due to heat and pressure. Sedimentary rock may also be re-eroded and redeposited, and metamorphic rock may also undergo additional metamorphism. All three types of rock may be re-melted; when this happens, a new magma is formed, from which an igneous rock may once again crystallize. Geologists also study unlithified material, which typically comes from more recent deposits; these materials are superficial deposits which lie above the bedrock. Because of this, the study of such material is often known as Quaternary geology. This includes the study of sediment and soils, with studies in geomorphology and sedimentology. The theory of plate tectonics is supported by several types of observations, including seafloor spreading and the global distribution of mountain terrain and seismicity. 
This coupling between rigid plates moving on the surface of the Earth and the convecting mantle is called plate tectonics. The development of plate tectonics provided a physical basis for many observations of the solid Earth. Long linear regions of geologic features could be explained as plate boundaries: mid-ocean ridges, high regions on the seafloor where hydrothermal vents and volcanoes exist, were explained as divergent boundaries, where two plates move apart; arcs of volcanoes and earthquakes were explained as convergent boundaries, where one plate subducts under another; and transform boundaries, such as the San Andreas Fault system, resulted in widespread powerful earthquakes. Plate tectonics also provided a mechanism for Alfred Wegener's theory of continental drift, a driving force for crustal deformation, and a new setting for the observations of structural geology.
9.
Applied mathematics
–
Applied mathematics is a branch of mathematics that deals with mathematical methods that find use in science, engineering, business, computer science, and industry. Thus, applied mathematics is a combination of mathematical science and specialized knowledge. The term applied mathematics also describes the professional specialty in which mathematicians work on practical problems by formulating and studying mathematical models. The activity of applied mathematics is thus intimately connected with research in pure mathematics. Historically, applied mathematics consisted principally of applied analysis, most notably differential equations and approximation theory. Quantitative finance is now taught in mathematics departments across universities, and mathematical finance is considered a full branch of applied mathematics. Engineering and computer science departments have also traditionally made use of applied mathematics. Today, the term applied mathematics is used in a broader sense. It includes the classical areas noted above as well as other areas that have become increasingly important in applications. Even fields such as number theory that are part of pure mathematics are now important in applications. There is no consensus as to what the various branches of applied mathematics are; such categorizations are made difficult by the way mathematics and science change over time, and also by the way universities organize departments, courses, and degrees. Many mathematicians distinguish between applied mathematics, which is concerned with mathematical methods, and the applications of mathematics within science. Some mathematicians, such as Poincaré and Arnold, deny the existence of applied mathematics, claiming that there are only applications of mathematics; similarly, non-mathematicians blend applied mathematics and applications of mathematics. The use and development of mathematics to solve industrial problems is also called industrial mathematics. Historically, mathematics was most important in the natural sciences and engineering. 
Academic institutions are not consistent in the way they group and label courses and programs. At some schools there is a single mathematics department, whereas others have separate departments for Applied Mathematics and Mathematics. It is very common for statistics departments to be separate at schools with graduate programs, and many applied mathematics programs consist primarily of cross-listed courses and jointly appointed faculty in departments representing applications. Some Ph.D. programs in applied mathematics require little or no coursework outside of mathematics; in some respects this difference reflects the distinction between application of mathematics and applied mathematics. Research universities dividing their mathematics department into pure and applied sections include MIT. Brigham Young University also has an Applied and Computational Emphasis, a program that allows students to graduate with a mathematics degree with an emphasis in applied math.
10.
Continuum mechanics
–
Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century, and research in the area continues to this day. Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Continuum mechanics deals with physical properties of solids and fluids which are independent of any particular coordinate system in which they are observed. These physical properties are represented by tensors, which are mathematical objects that have the required property of being independent of the coordinate system; the tensors can be expressed in particular coordinate systems for computational convenience. Materials, such as solids, liquids, and gases, are composed of molecules separated by space, and on a microscopic scale materials have cracks and discontinuities. A continuum, by contrast, is a body that can be continually sub-divided into infinitesimal elements with properties being those of the bulk material. More specifically, the continuum hypothesis/assumption hinges on the concepts of a representative elementary volume and separation of scales. This condition provides a link between an experimentalist's and a theoretician's viewpoint on constitutive equations, as well as a way of spatial and statistical averaging of the microstructure. The latter then provide a basis for stochastic finite elements. The levels of the statistical volume element (SVE) and the representative volume element (RVE) link continuum mechanics to statistical mechanics; the RVE may be assessed only in a limited way via experimental testing, namely when the constitutive response becomes spatially homogeneous. Specifically for fluids, the Knudsen number is used to assess to what extent the approximation of continuity can be made. As a motivating example, consider car traffic on a highway, with just one lane for simplicity. 
Somewhat surprisingly, and in a tribute to its effectiveness, continuum mechanics effectively models the movement of cars via a partial differential equation for the density of cars. The familiarity of this situation empowers us to understand a little of the continuum–discrete dichotomy underlying continuum modelling in general. To start modelling, define: x measures distance along the highway, t is time, ρ(x, t) is the density of cars on the highway, and u(x, t) is their speed. Cars do not appear and disappear. Consider any group of cars, from the car at the back of the group located at x = a to the car at the front located at x = b. The total number of cars in this group is N = ∫_a^b ρ dx; since cars are conserved, dN/dt = 0. The only way an integral can be zero for all intervals is if the integrand is zero for all x. Consequently, conservation derives the first-order nonlinear conservation PDE ∂ρ/∂t + ∂(ρu)/∂x = 0 for all positions on the highway. This conservation PDE applies not only to car traffic but also to fluids, solids, crowds, animals, plants, bushfires, financial traders, and more. This PDE is one equation with two unknowns, so another equation is needed to form a well-posed problem.
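The conservation law above can be illustrated numerically. The following sketch (not from the source) makes the simplifying assumption of a constant car speed u, so the PDE reduces to linear advection, and uses a first-order upwind scheme on a periodic stretch of highway; the grid sizes and the initial density bump are arbitrary illustrative choices.

```python
# Sketch: evolve d(rho)/dt + d(rho*u)/dx = 0 with constant speed u using an
# upwind finite-difference scheme, and check that cars are conserved.
N = 200                    # number of grid cells
L = 10.0                   # length of the (periodic) highway
dx = L / N
u = 1.0                    # assumed constant car speed
dt = 0.4 * dx / u          # CFL-stable time step

# Initial condition: a bump of dense traffic on a lightly used road
rho = [1.0 if 4.0 < (i + 0.5) * dx < 6.0 else 0.1 for i in range(N)]
total0 = sum(rho) * dx     # total number of cars at the start

for _ in range(500):
    # Upwind update: each cell loses flux to the right, gains from the left
    # (rho[i - 1] wraps around at i = 0, giving periodic boundaries).
    rho = [rho[i] - u * dt / dx * (rho[i] - rho[i - 1]) for i in range(N)]

total = sum(rho) * dx      # total number of cars after the simulation
print(round(total0, 6), round(total, 6))
```

The two printed totals agree, which is the discrete counterpart of dN/dt = 0: the scheme moves the density bump down the road without creating or destroying cars.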
11.
Chaos theory
–
Chaos theory is a branch of mathematics focused on the behavior of dynamical systems that are highly sensitive to initial conditions. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: 'Chaos: When the present determines the future, but the approximate present does not approximately determine the future.' Chaotic behavior exists in many natural systems, such as weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through analysis of a chaotic mathematical model, or through analytical techniques such as recurrence plots. Chaos theory has applications in several disciplines, including meteorology, sociology, physics, environmental science, computer science, engineering, economics, biology, and ecology; the theory formed the basis for such fields of study as complex dynamical systems, edge-of-chaos theory, and self-assembly processes. Chaos theory concerns deterministic systems whose behavior can in principle be predicted. Chaotic systems are predictable for a while and then appear to become random; the time scale over which prediction remains possible depends on the dynamics of the system and is called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast; this means that, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random. In common usage, 'chaos' means a state of disorder; however, in chaos theory, the term is defined more precisely. 
Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic it must (1) be sensitive to initial conditions, (2) be topologically transitive, and (3) have dense periodic orbits. In these cases, while it is often the most practically significant property, sensitivity to initial conditions need not be stated in the definition: if attention is restricted to intervals, the second property implies the other two. An alternative, and in general weaker, definition of chaos uses only the first two properties in the above list. Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points with significantly different future paths. Thus, an arbitrarily small change, or perturbation, of the current trajectory may lead to significantly different future behavior. This idea was popularized by the title of a 1972 talk by Edward Lorenz: "Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?" The flapping wing represents a small change in the initial condition of the system, which causes a chain of events leading to large-scale phenomena.
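Sensitivity to initial conditions is easy to see in a concrete map. The sketch below, an illustration rather than an example from the source, iterates the chaotic logistic map x ↦ 4x(1 − x) from two starting points that differ by 10⁻¹⁰ and records the largest separation the two orbits reach; the helper name `max_separation` is ours.

```python
def max_separation(x, y, steps):
    """Iterate the logistic map x -> 4x(1-x) on two nearby initial
    conditions and return the largest separation observed."""
    widest = 0.0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        y = 4.0 * y * (1.0 - y)
        widest = max(widest, abs(x - y))
    return widest

# a perturbation of 1e-10 is amplified to order one within a few dozen steps
assert max_separation(0.2, 0.2 + 1e-10, 60) > 0.1
```

The separation roughly doubles per iteration (the map's Lyapunov exponent is ln 2), which is exactly the exponential growth of forecast uncertainty described above.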
12.
Dynamical system
–
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in a geometrical space. Examples include the mathematical models that describe the swinging of a clock pendulum and the flow of water in a pipe. At any given time, a dynamical system has a state given by a tuple of real numbers that can be represented by a point in an appropriate state space. The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic; that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables. In physics, a dynamical system is described as a particle or ensemble of particles whose state varies over time; making a prediction about the system's future behavior requires an analytical solution of the evolution equations, or their integration over time through computer simulation. Dynamical systems are a part of chaos theory, logistic map dynamics, bifurcation theory, and the self-assembly process. The concept of a dynamical system has its origins in Newtonian mechanics. To determine the state for all future times requires iterating the evolution rule many times, each iteration advancing time a small step; the iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions. Before the advent of computers, finding an orbit required sophisticated mathematical techniques; numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system. For simple dynamical systems, knowing the trajectory is often sufficient. The difficulties arise because the systems studied may only be known approximately: the parameters of the system may not be known precisely, or terms may be missing from the equations.
The approximations used bring into question the validity or relevance of numerical solutions. To address these questions, several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent; the operation for comparing orbits to establish their equivalence changes with the different notions of stability. The type of trajectory may be more important than any one particular trajectory: some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, the study of properties that do not change under coordinate changes.
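"Integrating the system" — iterating the evolution rule many times, each iteration advancing time a small step — can be sketched in a few lines. The rule used here, exponential decay x′ = −0.5x, is an illustrative choice of ours, not an example from the text.

```python
import math

# Iterate the evolution rule many times, each step advancing time by dt
# (forward Euler). The rule x' = -0.5 * x is an assumed toy example.
def integrate(x0, rate, dt, steps):
    x = x0
    for _ in range(steps):
        x += dt * rate * x   # one small step of the evolution rule
    return x

x_final = integrate(10.0, -0.5, 1e-4, 20000)        # integrate to t = 2
assert abs(x_final - 10.0 * math.exp(-1.0)) < 1e-2  # exact orbit: 10*e^(-0.5*t)
```

The assertion compares the iterated orbit with the exactly solvable trajectory, illustrating the remark that solving the system lets one determine all future positions from an initial point.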
13.
Social science
–
Social science is a major category of academic disciplines concerned with society and the relationships among individuals within a society. It in turn has many branches, each of which is considered a social science. The social sciences include economics, political science, human geography, demography, psychology, sociology, anthropology, archaeology, jurisprudence, history, and linguistics. The term is sometimes used to refer specifically to the field of sociology. A more detailed list of sub-disciplines within the social sciences can be found at Outline of social science. Positivist social scientists use methods resembling those of the natural sciences as tools for understanding society. In modern academic practice, researchers are often eclectic, using multiple methodologies; the term "social research" has also acquired a degree of autonomy as practitioners from various disciplines share in its aims and methods. The social sciences came forth from the moral philosophy of the time and were influenced by the Age of Revolutions, such as the Industrial Revolution and the French Revolution. They developed from the sciences, or the systematic knowledge-bases or prescriptive practices, relating to the social improvement of a group of interacting entities. The beginnings of the social sciences in the 18th century are reflected in the grand encyclopedia of Diderot, with articles from Jean-Jacques Rousseau and other pioneers. The growth of the social sciences is also reflected in other specialized encyclopedias. The modern period saw "social science" first used as a distinct conceptual field. Social science was influenced by positivism, focusing on knowledge based on actual positive sense experience and avoiding the negative; metaphysical speculation was avoided. Auguste Comte used the term "science sociale" to describe the field, taken from the ideas of Charles Fourier. Following this period, there were five paths of development that sprang forth in the social sciences, influenced by Comte, in other fields.
One route that was taken was the rise of social research: large statistical surveys were undertaken in various parts of the United States and Europe. Another route was initiated by Émile Durkheim, studying "social facts". A third means developed, arising from the methodological dichotomy present, in which social phenomena were identified with and understood; this was championed by figures such as Max Weber. The fourth route taken, based in economics, was developed and furthered economic knowledge as a hard science. The last path was the correlation of knowledge and social values; the antipositivism and verstehen sociology of Max Weber firmly demanded this distinction. In this route, theory and prescription were non-overlapping formal discussions of a subject. Around the start of the 20th century, Enlightenment philosophy was challenged in various quarters. The development of social science subfields became very quantitative in methodology. Examples of boundary blurring include emerging disciplines like the social research of medicine, sociobiology, neuropsychology, bioeconomics, and the history and sociology of science. Increasingly, quantitative research and qualitative methods are being integrated in the study of human action and its implications. In the first half of the 20th century, statistics became a free-standing discipline of applied mathematics.
14.
Economics
–
Economics is "a social science concerned chiefly with description and analysis of the production, distribution, and consumption of goods and services", according to the Merriam-Webster Dictionary. Economics focuses on the behaviour and interactions of economic agents and how economies work. Consistent with this focus, textbooks often distinguish between microeconomics and macroeconomics. Microeconomics examines the behaviour of basic elements in the economy, including individual agents and markets, and their interactions. Individual agents may include, for example, households, firms, buyers, and sellers. Macroeconomics analyzes the entire economy and issues affecting it, including unemployment of resources, inflation, economic growth, and the public policies that address these issues. Economic analysis can be applied throughout society, as in business, finance, health care, and government. Economic analyses may also be applied to such diverse subjects as crime, education, the family, law, politics, religion, social institutions, war, science, and the environment. At the turn of the 21st century, the expanding domain of economics in the social sciences has been described as economic imperialism. The ultimate goal of economics is to improve the living conditions of people in their everyday life. There are a variety of definitions of economics; some of the differences may reflect evolving views of the subject or different views among economists. Adam Smith described political economy as aiming to provide a plentiful revenue or subsistence for the people and "to supply the state or commonwealth with a revenue for the publick services". J.-B. Say, distinguishing the subject from its public-policy uses, defines it as the science of production, distribution, and consumption of wealth. On the satirical side, Thomas Carlyle coined "the dismal science" as an epithet for classical economics. Alfred Marshall described economics as a study of mankind in the ordinary business of life; it enquires how he gets his income and how he uses it. Thus, it is on the one side the study of wealth and, on the other and more important side, a part of the study of man.
He affirmed that previous economists have usually centred their studies on the analysis of wealth: how wealth is created, distributed, and consumed. But he said that economics can be used to study other things, such as war, that are outside its usual focus. This is because war has winning it as its goal, generates both costs and benefits, and uses resources to attain that goal. If the war is not winnable, or if the expected costs outweigh the benefits, the deciding actors (assuming they are rational) may never go to war but rather explore other alternatives. Some subsequent comments criticized the definition as overly broad in failing to limit its subject matter to analysis of markets. There are other criticisms as well, such as scarcity not accounting for the macroeconomics of high unemployment. The same source reviews a range of definitions included in principles of economics textbooks. Among economists more generally, it argues that a particular definition presented may reflect the direction toward which the author believes economics is evolving. Microeconomics examines how entities, forming a market structure, interact within a market to create a market system.
15.
Population dynamics
–
Population dynamics is the branch of life sciences that studies the size and age composition of populations as dynamical systems, and the biological and environmental processes driving them. Example scenarios are ageing populations, population growth, or population decline. The first principle of population dynamics is widely regarded as the exponential law of Malthus, as modeled by the Malthusian growth model. The Lotka–Volterra predator–prey equations are another famous example, as well as the alternative Arditi–Ginzburg equations. The computer game SimCity and the MMORPG Ultima Online, among others, tried to simulate some of these population dynamics. In the past 30 years, population dynamics has been complemented by evolutionary game theory, developed first by John Maynard Smith. Under these dynamics, evolutionary biology concepts may take a deterministic mathematical form. Population dynamics overlaps with another active area of research in mathematical biology: mathematical epidemiology, the study of infectious disease affecting populations. Various models of viral spread have been proposed and analyzed. The rate at which a population increases in size if there are no density-dependent forces regulating the population is known as the intrinsic rate of increase. It is given by (dN/dt)(1/N) = r, where dN/dt is the rate of increase of the population, N is the population size, and r is the intrinsic rate of increase. This is therefore the theoretical maximum rate of increase of a population per individual. The concept is used in insect population biology to determine how environmental factors affect the rate at which pest populations increase. See also exponential population growth and logistic population growth; it is very unusual to see unbounded exponential growth in nature. In the last 100 years, human population growth has appeared to be exponential; in the long run, however, it is not likely. Paul Ehrlich and Thomas Malthus believed that human population growth would lead to overpopulation and starvation due to scarcity of resources.
They believed that population would grow at a rate exceeding the rate at which humans can find food; in the future, humans would be unable to feed large populations. The biological assumption of exponential growth is that the per capita growth rate is constant: growth is not limited by resource scarcity or predation. The discrete-time model is N_{t+1} = λN_t, where λ is the discrete-time per capita growth rate. At λ = 1, the population size stays constant, and the discrete-time per capita growth rate is zero. At λ < 1, the population decreases; at λ > 1, it increases; at λ = 0, we get extinction of the species. The continuous-time analogue is dN/dT = rN, where dN/dT is the rate of population growth per unit time, r is the maximum per capita growth rate, and N is the population size. At r > 0, the population increases; at r = 0, the per capita growth rate is zero.
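The discrete-time model N_{t+1} = λN_t can be sketched directly. This is an illustrative snippet of ours, with `project` a hypothetical helper name; the three assertions check the three regimes of λ described above.

```python
# Discrete-time geometric growth: N_{t+1} = lambda * N_t.
def project(n0, lam, steps):
    n = n0
    for _ in range(steps):
        n = lam * n
    return n

assert abs(project(100.0, 1.1, 2) - 121.0) < 1e-9  # lambda > 1: growth
assert project(100.0, 0.5, 3) == 12.5              # lambda < 1: decline
assert project(100.0, 0.0, 1) == 0.0               # lambda = 0: extinction
```

After t steps the population is N_0 λ^t, the discrete analogue of the exponential solution N(t) = N_0 e^{rT} of dN/dT = rN.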
16.
Ordinary differential equation
–
In mathematics, an ordinary differential equation (ODE) is a differential equation containing one or more functions of one independent variable and the derivatives of those functions. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable. Linear ODEs have solutions that can be added together and multiplied by coefficients; they are well understood, and exact closed-form solutions exist. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions. Ordinary differential equations arise in many contexts of mathematics and science. Mathematical descriptions of change use differentials and derivatives; often, quantities are defined as the rate of change of other quantities, or as gradients of quantities, which is how they enter differential equations. Specific mathematical fields include geometry and analytical mechanics; scientific fields include much of physics and astronomy, meteorology, chemistry, biology, ecology and population modelling, and economics. Many mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, and d'Alembert. In general, F is a function of the position x of the particle at time t; the unknown function x appears on both sides of the differential equation, and this is indicated in the notation F(x(t)). In what follows, let y be a dependent variable and x an independent variable; the notation for differentiation varies depending upon the author and upon which notation is most useful for the task at hand. Given F, a function of x, y, and derivatives of y, an equation of the form y^(n) = F(x, y, y′, …, y^(n−1)) is called an explicit ordinary differential equation of order n. The function r is called the source term, leading to two further important classifications: homogeneous, if r = 0, in which case one automatic solution is the trivial solution y = 0.
The solution of a homogeneous equation is called a complementary function, denoted here y_c. The additional solution corresponding to the source term is called the particular integral, denoted here y_p. The general solution to a linear equation can then be written as y = y_c + y_p. A differential equation that cannot be written in the form of a linear combination is non-linear. A number of coupled differential equations form a system of equations. In column vector form, y′ = F(x, y); these are not necessarily linear. The implicit analogue is F(x, y, y′) = 0, where 0 is the zero vector. In the same sources, implicit ODE systems with a singular Jacobian are termed differential algebraic equations (DAEs). This distinction is not merely one of terminology: DAEs have fundamentally different characteristics and are generally more involved to solve than ODE systems. Given a differential equation F(x, y, y′, …, y^(n)) = 0, a function u: I ⊂ ℝ → ℝ is called a solution or integral curve for F if u is n-times differentiable on I and F(x, u, u′, …, u^(n)) = 0 for all x ∈ I.
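The decomposition y = y_c + y_p can be checked numerically on a concrete linear equation. The sketch below is our own example, not one from the source: for y′ + y = 1, the complementary function is y_c = Ce^(−x) and a particular integral is y_p = 1, and a central finite difference confirms that y = Ce^(−x) + 1 satisfies the equation for any constant C.

```python
import math

def y(x, C):
    # general solution of y' + y = 1:
    # complementary function C*e^(-x) plus particular integral 1
    return C * math.exp(-x) + 1.0

def residual(x, C, h=1e-6):
    # central-difference estimate of y' + y - 1, which should vanish
    dy = (y(x + h, C) - y(x - h, C)) / (2.0 * h)
    return dy + y(x, C) - 1.0

for C in (-2.0, 0.0, 3.5):
    assert abs(residual(1.0, C)) < 1e-6
```

That the check passes for several values of C reflects the fact that adding any complementary function to a particular integral yields another solution.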
17.
Partial differential equation
–
In mathematics, a partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. PDEs are used to formulate problems involving functions of several variables. PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid dynamics, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations. Partial differential equations are equations that involve rates of change with respect to continuous variables; the dynamics for a rigid body, by contrast, take place in a finite-dimensional configuration space. This distinction usually makes PDEs much harder to solve than ordinary differential equations. Classic domains where PDEs are used include acoustics, fluid dynamics, and electrodynamics. A partial differential equation for the function u(x₁, …, xₙ) is an equation of the form f(x₁, …, xₙ; u, ∂u/∂x₁, …, ∂u/∂xₙ; …) = 0. If f is a linear function of u and its derivatives, the PDE is called linear. Common examples of linear PDEs include the heat equation, the wave equation, Laplace's equation, the Helmholtz equation, and the Klein–Gordon equation. A relatively simple PDE is ∂u/∂x = 0. This relation implies that the function u(x, y) is independent of x; however, the equation gives no information on the function's dependence on the variable y. Hence the general solution of this equation is u(x, y) = f(y), where f is an arbitrary function of y. The analogous ordinary differential equation is du/dx = 0, which has the solution u(x) = c, where c is any constant. These two examples illustrate that general solutions of ordinary differential equations involve arbitrary constants, but solutions of PDEs involve arbitrary functions. A solution of a PDE is generally not unique; additional conditions must generally be specified on the boundary of the region where the solution is defined.
For instance, in the example above, the function f can be determined if u is specified on the line x = 0. Even if the solution of a partial differential equation exists and is unique, it may nevertheless have undesirable properties. The mathematical study of these questions is usually conducted in the more powerful context of weak solutions. In one such pathological example, the derivative of u with respect to y approaches 0 uniformly in x as n increases, yet the solution approaches infinity if nx is not an integer multiple of π for any non-zero value of y.
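The "arbitrary function" point can be made concrete: any u(x, y) = f(y) solves ∂u/∂x = 0, whatever f is. A tiny numerical check of our own, taking f = sin as one arbitrary choice:

```python
import math

# Any u(x, y) = f(y) solves du/dx = 0; take f = sin as one arbitrary choice.
u = lambda x, y: math.sin(y)

h = 1e-6
for x, y in [(0.0, 1.0), (2.0, 1.0), (5.0, -0.3)]:
    dudx = (u(x + h, y) - u(x - h, y)) / (2.0 * h)
    # u never consults x, so its x-derivative is exactly zero
    assert dudx == 0.0
```

Replacing `math.sin` with any other single-argument function leaves the check unchanged, which is precisely why the general solution contains an arbitrary function rather than an arbitrary constant.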
18.
Differential algebraic equation
–
This distinction between ODEs and DAEs is made because DAEs have different characteristics and are generally more difficult to solve. A DAE system of this form is called semi-explicit. Every solution of the second half g of the equations defines a direction for x via the first half f of the equations, but not every point is a solution of g. The variables in x and the first half f of the equations get the attribute "differential"; the components of y and the second half g of the equations are called the algebraic variables or equations of the system. The solution of a DAE consists of two parts: first the search for consistent initial values, and second the computation of a trajectory. To find consistent initial values it is necessary to consider the derivatives of some of the component functions of the DAE. The highest order of a derivative that is necessary for this process is called the differentiation index. The equations derived in computing the index and consistent initial values may also be of use in the computation of the trajectory. A semi-explicit DAE system can be converted to an implicit one by decreasing the differentiation index by one, and vice versa. The distinction of DAEs from ODEs becomes apparent if some of the dependent variables occur without their derivatives. F is then a vector of n + m functions that involve subsets of these n + m variables and n derivatives; as a whole, the set of DAEs is a vector-valued function of the variables, their derivatives, and possibly time. Initial conditions must be a solution of the system of equations of the form F = 0. The momentum variables u and v should be constrained by the law of conservation of energy; neither condition is explicit in those equations. Differentiation of the last equation leads to ẋx + ẏy = 0, hence ux + vy = 0. To obtain unique derivative values for all dependent variables, the last equation had to be differentiated three times. This gives a differentiation index of 3, which is typical for constrained mechanical systems.
This is a semi-explicit DAE of index 1. Another set of similar equations may be obtained, starting from a choice of sign for x. DAEs also naturally occur in the modelling of circuits with non-linear devices. Modified nodal analysis employing DAEs is used, for example, in the ubiquitous SPICE family of numeric circuit simulators. Similarly, Fraunhofer's Analog Insydes Mathematica package can be used to derive DAEs from a netlist. It is worth noting that the index of a DAE can be made arbitrarily high by cascading/coupling, via capacitors, operational amplifiers with positive feedback. DAEs of the form ẋ = f(x, y), 0 = g(x, y) are called semi-explicit. The index-1 property requires that g is solvable for y. Every sufficiently smooth DAE is almost everywhere reducible to this semi-explicit index-1 form. Two major problems in the solution of DAEs are index reduction and consistent initial conditions.
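A semi-explicit index-1 system ẋ = f(x, y), 0 = g(x, y) can be integrated by solving the algebraic constraint for y at every step. The sketch below uses an assumed toy example of ours, not one from the text: ẋ = −y with constraint y − x = 0 (so effectively ẋ = −x), stepped with forward Euler.

```python
import math

# Toy semi-explicit index-1 DAE:  x' = -y,  0 = y - x.
# g(x, y) = y - x is trivially solvable for y, which is the index-1 property.
def integrate_dae(x0, t_end, n=100000):
    h = t_end / n
    x = x0
    for _ in range(n):
        y = x           # solve the algebraic equation g(x, y) = 0 for y
        x += h * (-y)   # advance the differential variable with f(x, y) = -y
    return x

# eliminating y gives x' = -x, so x(1) should be close to e^(-1)
assert abs(integrate_dae(1.0, 1.0) - math.exp(-1.0)) < 1e-3
```

In realistic DAEs, g is nonlinear and the "solve for y" step requires a Newton iteration at each time step; the structure of the loop, however, is the same.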
19.
Integro-differential equation
–
In mathematics, an integro-differential equation is an equation that involves both integrals and derivatives of a function. The general first-order, linear integro-differential equation is of the form (d/dx) u(x) + ∫_{x₀}^{x} f(t, u(t)) dt = g(x, u(x)), u(x₀) = u₀, x₀ ≥ 0. As is typical with differential equations, obtaining a closed-form solution can often be difficult. In the relatively few cases where a solution can be found, it is often found by some kind of integral transform, which converts the problem into an algebraic one; in such situations, the solution of the problem may be derived by applying the inverse transform to the solution of this algebraic equation. Consider the following problem: u′(x) + 2u(x) + 5∫₀ˣ u(t) dt = 1 for x ≥ 0 (and 0 for x < 0), with u(0) = 0. Taking the Laplace transform of both sides and using the initial condition gives sU(s) + 2U(s) + 5U(s)/s = 1/s, thus U(s) = 1/(s² + 2s + 5). Inverting the Laplace transform using contour integral methods then gives u(x) = (1/2) e^(−x) sin(2x). Integro-differential equations model many situations in science and engineering; a particularly rich source is electrical circuit analysis. The activity of interacting inhibitory and excitatory neurons can be described by a system of integro-differential equations; see, for example, the Wilson–Cowan model.
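The claimed solution can be verified numerically. The sketch below is our own check, with illustrative helper names: it approximates u′ by a central difference and the integral by the trapezoidal rule, and confirms that u(x) = ½e^(−x) sin(2x) makes the residual u′ + 2u + 5∫₀ˣ u dt − 1 vanish.

```python
import math

def u(x):
    # candidate solution obtained from the inverse Laplace transform
    return 0.5 * math.exp(-x) * math.sin(2.0 * x)

def integral_u(x, n=2000):
    # trapezoidal-rule approximation of the integral of u from 0 to x
    h = x / n
    s = 0.5 * (u(0.0) + u(x))
    for i in range(1, n):
        s += u(i * h)
    return s * h

def residual(x, h=1e-5):
    du = (u(x + h) - u(x - h)) / (2.0 * h)   # central-difference derivative
    return du + 2.0 * u(x) + 5.0 * integral_u(x) - 1.0

for x in (0.5, 1.0, 2.0):
    assert abs(residual(x)) < 1e-4
```

The residual is limited only by the discretisation errors of the two approximations, both of order h², which is why the tolerance can be taken so small.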
20.
Fractional calculus
–
In this context, the term "powers" refers to iterative application of a linear operator to a function, in some analogy to function composition acting on a variable, e.g. f∘f = f². Continuous semigroups are prevalent in mathematics and have an interesting theory. Fractional differential equations are a generalization of differential equations through the application of fractional calculus. The a-th derivative of a function f at a point x is a local property only when a is an integer; in other words, it is not correct to say that the fractional derivative at x of a function f depends only on values of f very near x, in the way that integer-order derivatives do. Therefore it is expected that the theory involves some sort of boundary conditions; to use a metaphor, the fractional derivative requires some peripheral vision. As far as the existence of such a theory is concerned, the fractional derivative of a function to order a is often now defined by means of the Fourier or Mellin integral transforms. A fairly natural question to ask is whether there exists a linear operator H, or half-derivative, such that applying it twice gives the ordinary derivative. Let f be a function defined for x > 0, and form the definite integral from 0 to x: (Jf)(x) = ∫₀ˣ f(t) dt. Repeating this process gives (J²f)(x) = ∫₀ˣ (Jf)(t) dt = ∫₀ˣ (∫₀ᵗ f(s) ds) dt. The Cauchy formula for repeated integration, namely (Jⁿf)(x) = (1/(n−1)!) ∫₀ˣ (x−t)^(n−1) f(t) dt, leads in a straightforward way to a generalization for real n. Using the gamma function to remove the discrete nature of the factorial gives us a natural candidate for fractional applications of the integral operator: (J^α f)(x) = (1/Γ(α)) ∫₀ˣ (x−t)^(α−1) f(t) dt. This is in fact a well-defined operator. It is straightforward to show that the J operator satisfies (J^α J^β f)(x) = (J^β J^α f)(x) = (J^(α+β) f)(x) = (1/Γ(α+β)) ∫₀ˣ (x−t)^(α+β−1) f(t) dt. This relationship is called the semigroup property of fractional differintegral operators. Unfortunately, the comparable process for the derivative operator D is significantly more complex. Let us assume that f is a monomial of the form f(x) = xᵏ. The first derivative is, as usual, f′(x) = (d/dx) f(x) = k x^(k−1); repeating this gives the more general result that (dᵃ/dxᵃ) xᵏ = (k!/(k−a)!) x^(k−a).
For example, the (1/2)-th derivative of the (3/2)-th derivative yields the 2nd derivative. Also notice that setting negative values for a yields integrals. For example, D^(3/2) f(x) = D^(1/2) D¹ f(x) = D^(1/2) (d/dx) f(x). We can also come at the question via the Laplace transform. Noting that L{Jf}(s) = (1/s) L{f}(s) and L{J²f}(s) = (1/s²) L{f}(s), etc., we assert J^α f = L^(−1){s^(−α) L{f}}. For example, J^α tᵏ = L^(−1){Γ(k+1)/s^(α+k+1)} = (Γ(k+1)/Γ(α+k+1)) t^(α+k), as expected.
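The monomial rule J^α xᵏ = (Γ(k+1)/Γ(k+1+α)) x^(k+α) makes the semigroup property easy to check in code. The sketch below, with the illustrative helper name `J_monomial`, half-integrates f(x) = x twice and confirms the result equals the single full integral J¹x = x²/2.

```python
import math

def J_monomial(k, alpha):
    """Riemann-Liouville fractional integral of x^k to order alpha:
    J^alpha x^k = Gamma(k+1)/Gamma(k+1+alpha) * x^(k+alpha).
    Returns the (coefficient, exponent) pair of the resulting monomial."""
    return math.gamma(k + 1) / math.gamma(k + 1 + alpha), k + alpha

# half-integrate f(x) = x twice ...
c1, e1 = J_monomial(1.0, 0.5)
c2, e2 = J_monomial(e1, 0.5)
# ... and compare with one whole integration: J^1 x = x^2 / 2
assert abs(c1 * c2 - 0.5) < 1e-12 and e2 == 2.0
```

The gamma factors telescope — Γ(2)/Γ(2.5) times Γ(2.5)/Γ(3) is 1/Γ(3) = 1/2 — which is the semigroup property J^(1/2) J^(1/2) = J¹ in miniature.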
21.
Linear differential equation
–
In mathematics, linear differential equations are differential equations having solutions which can be added together in particular linear combinations to form further solutions. They equate 0 to a polynomial that is linear in the value and various derivatives of a variable; linear differential equations can be ordinary or partial. The solutions to homogeneous linear differential equations form a vector space. For a function dependent on time we may write the equation more expressly as L y(t) = f(t), and it is convenient to rewrite it in operator form, L ≡ Dⁿ + A₁Dⁿ⁻¹ + ⋯ + Aₙ, where D is the differential operator d/dt and the Aᵢ are given functions. Such an equation is said to have order n, the index of the highest derivative of y that is involved. A typical simple example is the linear differential equation used to model radioactive decay: let N denote the number of radioactive atoms remaining in some sample of material at time t; then dN/dt = −λN for some positive decay constant λ. The case where f = 0 is called a homogeneous equation, and its solutions are called complementary functions. It is particularly important to the solution of the general case, since any complementary function can be added to a solution of the inhomogeneous equation to give another solution. When the Aᵢ are numbers, the equation is said to have constant coefficients. The exponential function is one of the few functions to keep its shape after differentiation, allowing the sum of its multiple derivatives to cancel out to zero as the equation requires; setting y = e^(zx) and dividing by e^(zx) gives the nth-order polynomial F(z) = zⁿ + A₁zⁿ⁻¹ + ⋯ + Aₙ = 0. This algebraic equation F(z) = 0 is the characteristic equation, considered later by Gaspard Monge and Augustin-Louis Cauchy. Formally, the terms y^(k) of the differential equation are replaced by zᵏ. Solving the polynomial gives n values of z: z₁, …, zₙ. Substitution of any of those values for z into e^(zx) gives a solution e^(zᵢx). Since homogeneous linear differential equations obey the superposition principle, any linear combination of these functions also satisfies the differential equation.
When these roots are all distinct, we have n distinct solutions to the differential equation. It can be shown that these are linearly independent, by applying the Vandermonde determinant, and together they form a basis of the space of all solutions of the differential equation. The preceding gave a solution for the case when all zeros are distinct. For the general case, if z is a zero of F having multiplicity m, then, for k ∈ {0, 1, …, m−1}, y = xᵏ e^(zx) is a solution of the ordinary differential equation. Applying this to all roots gives a collection of n distinct and linearly independent functions; as before, these functions make up a basis of the solution space. If the coefficients Aᵢ of the differential equation are real, then real-valued solutions are generally preferable. A case that involves complex roots can be solved with the aid of Euler's formula. In the n = 2 case, y″ + ay′ + by = 0, the characteristic equation is of the form z² + az + b = 0. In case #2, the solution is given by y = e^(zx).
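The characteristic-equation recipe for the n = 2 case can be sketched as follows, using the illustrative choice a = 2, b = 5 (our example, giving complex conjugate roots −1 ± 2i): the code finds the roots of z² + az + b = 0 with the quadratic formula and verifies by finite differences that y = e^(zx) satisfies y″ + ay′ + by = 0.

```python
import cmath

a, b = 2.0, 5.0                      # example coefficients of y'' + a y' + b y = 0
disc = cmath.sqrt(a * a - 4.0 * b)   # quadratic formula on z^2 + a z + b = 0
z1, z2 = (-a + disc) / 2.0, (-a - disc) / 2.0

def ode_residual(z, x=0.7, h=1e-5):
    """|y'' + a y' + b y| for the candidate y = e^(zx), via finite differences."""
    y = lambda t: cmath.exp(z * t)
    d1 = (y(x + h) - y(x - h)) / (2.0 * h)
    d2 = (y(x + h) - 2.0 * y(x) + y(x - h)) / (h * h)
    return abs(d2 + a * d1 + b * y(x))

assert ode_residual(z1) < 1e-4 and ode_residual(z2) < 1e-4
```

Here z1 and z2 come out as −1 + 2i and −1 − 2i; Euler's formula then repackages the two complex exponentials into the real solutions e^(−x) cos(2x) and e^(−x) sin(2x).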
22.
Nonlinear system
–
In mathematics and the physical sciences, a nonlinear system is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, physicists, and mathematicians; nonlinear systems may appear chaotic, unpredictable, or counterintuitive, contrasting with the much simpler linear systems. In other words, in a nonlinear system of equations, the equations to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear regardless of whether or not known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it. As nonlinear equations are difficult to solve, nonlinear systems are commonly approximated by linear equations. This works well up to some accuracy and some range for the input values, but some interesting phenomena are hidden by linearization. It follows that some aspects of the behavior of a nonlinear system appear commonly to be counterintuitive, unpredictable, or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic; this nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology. Some authors use the term "nonlinear science" for the study of nonlinear systems. This usage is disputed by others: "Using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals." In mathematics, a linear function f(x) is one which satisfies both of the following properties: additivity, or superposition, f(x + y) = f(x) + f(y); and homogeneity, f(αx) = αf(x). Additivity implies homogeneity for any rational α and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity; for example, an antilinear map is additive but not homogeneous.
The equation f(x) = C is called homogeneous if C = 0. If f contains differentiation with respect to x, the result will be a differential equation. Nonlinear algebraic equations, which are also called polynomial equations, are defined by equating polynomials to zero; for example, x² + x − 1 = 0. For a single polynomial equation, root-finding algorithms can be used to find solutions to the equation. However, systems of algebraic equations are more complicated; their study is one motivation for the field of algebraic geometry. It is even difficult to decide whether a given algebraic system has complex solutions.
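Root-finding on the example x² + x − 1 = 0 can be sketched with Newton's method, one standard root-finding algorithm (the helper name `newton` is ours). The positive root is (√5 − 1)/2, the golden-ratio conjugate.

```python
import math

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Basic Newton iteration for a single nonlinear equation f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# the polynomial example from the text: x^2 + x - 1 = 0
root = newton(lambda x: x * x + x - 1.0, lambda x: 2.0 * x + 1.0, 1.0)
assert abs(root - (math.sqrt(5.0) - 1.0) / 2.0) < 1e-10
```

Newton's method converges quadratically from a good starting point; for systems of polynomial equations, no comparably simple general-purpose method exists, which is part of why their study leads into algebraic geometry.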
23.
Dependent and independent variables
–
In mathematical modelling and statistical modelling, there are dependent and independent variables. The models investigate how the former depend on the latter. The dependent variables represent the output or outcome whose variation is being studied. The independent variables represent inputs or causes, i.e. potential reasons for variation. Models test or explain the effects that the independent variables have on the dependent variables. Sometimes, independent variables may be included for other reasons, such as for their potential confounding effect. In mathematics, a function is a rule for taking an input and providing an output. A symbol that stands for an arbitrary input is called an independent variable, while a symbol that stands for an arbitrary output is called a dependent variable. The most common symbol for the input is x, and the most common symbol for the output is y. It is possible to have multiple independent variables and/or multiple dependent variables. For instance, in multivariable calculus, one often encounters functions of the form z = f(x, y), where z is a dependent variable and x and y are independent variables. Functions with multiple outputs are often written as vector-valued functions. In set theory, a function between a set X and a set Y is a subset of the Cartesian product X × Y such that every element of X appears in a pair with exactly one element of Y. However, many advanced textbooks do not distinguish between dependent and independent variables. In an experiment, the dependent variable is the event expected to change when the independent variable is manipulated. In data mining tools, the dependent variable is assigned a role as the target variable, while an independent variable may be assigned a role as a regular variable. Known values for the target variable are provided for the training data set and test data set. The target variable is used in supervised learning algorithms but not in unsupervised learning. In mathematical modelling, the dependent variable is studied to see if and how much it varies as the independent variables vary.
In the simple stochastic linear model y_i = a + b x_i + e_i, the term y_i is the i-th value of the dependent variable and x_i is the i-th value of the independent variable. The term e_i, known as the error, contains the variability of the dependent variable not explained by the independent variable. With multiple independent variables, the model is y_i = a + b_1 x_{1,i} + b_2 x_{2,i} + … + b_n x_{n,i} + e_i, where n is the number of independent variables. In simulation, the dependent variable is changed in response to changes in the independent variables. If the independent variable is referred to as an explanatory variable, then the term response variable is preferred by some authors for the dependent variable. Explained variable is preferred by some authors over dependent variable when the quantities treated as dependent variables may not be statistically dependent. If the dependent variable is referred to as an explained variable, then the term predictor variable is preferred by some authors for the independent variable.
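The simple linear model above can be made concrete with a short sketch: given paired observations, the coefficients a and b are estimated by ordinary least squares. The data values below are invented for illustration, and the closed-form formulas are one standard way to compute the fit, not the only one.

```python
# A minimal sketch of the simple linear model y_i = a + b*x_i + e_i:
# estimating a and b by ordinary least squares from paired data.
# The data below are made up for illustration.
def fit_simple_linear(xs, ys):
    """Return OLS estimates (a_hat, b_hat) for y = a + b*x + e."""
    n = len(xs)
    mx = sum(xs) / n                                  # mean of x
    my = sum(ys) / n                                  # mean of y
    sxx = sum((x - mx) ** 2 for x in xs)              # sum of squares of x
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b_hat = sxy / sxx                                 # slope estimate
    a_hat = my - b_hat * mx                           # intercept estimate
    return a_hat, b_hat

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.5 + 2.0 * x for x in xs]   # exact relation, zero error term e_i
a_hat, b_hat = fit_simple_linear(xs, ys)   # recovers a = 1.5, b = 2.0
```

With a zero error term the fit recovers the coefficients exactly; with noisy data the estimates carry the unexplained variability described above.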
24.
Autonomous differential equation
–
In mathematics, an autonomous system or autonomous differential equation is a system of ordinary differential equations which does not explicitly depend on the independent variable. When the variable is time, such systems are also called time-invariant systems. Autonomous systems are closely related to dynamical systems; any autonomous system can be transformed into a dynamical system and, using very weak assumptions, vice versa. An autonomous system is a system of differential equations of the form dx/dt = f(x), where x takes values in n-dimensional Euclidean space. Solutions of autonomous systems are invariant under time translation: let x1(t) be a solution of the initial value problem dx/dt = f(x), x(0) = x0. Then x2(t) = x1(t − t0) solves dx/dt = f(x), x(t0) = x0. The verification is trivial: x2(t0) = x1(t0 − t0) = x1(0) = x0. The equation y′ = y is autonomous, since the independent variable (call it x) does not appear explicitly. The following techniques apply to one-dimensional autonomous differential equations; any one-dimensional equation of order n is equivalent to an n-dimensional first-order system. A first-order autonomous equation dx/dt = v(x) is separable: recalling the definition of v, dx/dt = v(x) implies t + C = ∫ dx / v(x), which is an implicit solution. The special case where f is independent of x′, d²x/dt² = f(x), benefits from separate treatment; these types of equations are very common in classical mechanics because they are always Hamiltonian systems. The idea is to make use of the identity dx/dt = (dt/dx)^(−1), which follows from the chain rule. A natural question is then: can we do something like this with higher-order equations? The answer is yes for second-order equations, but there is more work to do. Using the above mentality, we can extend the technique to the more general equation d²x/dt² = (dx/dt)^n f(x), where n is some parameter not equal to two.
This will work since the second derivative can be written in a form involving a power of x′. This should not be surprising, considering that nonlinear autonomous systems in three dimensions can produce truly chaotic behavior such as the Lorenz attractor and the Rössler attractor; with this in mind, it isn't too surprising that general non-autonomous equations of second order can't be solved explicitly.
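The implicit solution t + C = ∫ dx / v(x) mentioned above can be checked numerically. In this sketch the autonomous equation is dx/dt = x with x(0) = 1, chosen for illustration because the integral ∫ dx/x = ln x is known in closed form; the trapezoidal approximation of the integral is an implementation choice of this sketch, not part of the method.

```python
import math

# Sketch of the implicit solution t + C = ∫ dx / f(x) for the autonomous
# equation dx/dt = f(x). With f(x) = x and x(0) = 1, the time needed to
# reach a value x is t = ∫_1^x ds/s = ln x. Below the integral is
# approximated with a simple trapezoidal sum.
def time_to_reach(x_target, f, x0=1.0, steps=10000):
    """Approximate t = ∫_{x0}^{x_target} dx / f(x) by the trapezoidal rule."""
    h = (x_target - x0) / steps
    g = [1.0 / f(x0 + i * h) for i in range(steps + 1)]
    return h * (0.5 * g[0] + sum(g[1:-1]) + 0.5 * g[-1])

t = time_to_reach(2.0, lambda x: x)   # should be close to ln 2
```

The computed time agrees with ln 2, matching the exact solution x(t) = e^t of this particular equation.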
25.
Homogeneous differential equations
–
A first-order ordinary differential equation in the form M(x, y) dx + N(x, y) dy = 0 is a homogeneous type if both functions M and N are homogeneous functions of the same degree n. That is, multiplying each variable by a parameter λ, we find M(λx, λy) = λ^n M(x, y) and N(λx, λy) = λ^n N(x, y). Thus M(λx, λy) / N(λx, λy) = M(x, y) / N(x, y). In the quotient M(tx, ty) / N(tx, ty) = M(x, y) / N(x, y), we can let t = 1/x to simplify this quotient to a function f of the single variable y/x: M(x, y) / N(x, y) = M(tx, ty) / N(tx, ty) = M(1, y/x) / N(1, y/x) = f(y/x). The equations in this discussion are not to be used as a formulary for solutions. A linear differential equation is called homogeneous if the following condition is satisfied: if φ is a solution, so is cφ, where c is an arbitrary constant. Note that in order for this condition to hold, each term in the differential equation involving the dependent variable y must contain y or some derivative of y. A linear differential equation that fails this condition is called inhomogeneous. A linear differential equation can be represented as a linear operator acting on y(x), where x is usually the independent variable and y is the dependent variable. The existence of a constant term is a sufficient condition for an equation to be inhomogeneous.
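The reduction to a function of y/x can be illustrated with a worked example. The equation dy/dx = 1 + y/x is of homogeneous type, since its right-hand side is a function f(y/x) alone; the substitution v = y/x turns it into the separable equation x dv/dx = f(v) − v = 1, giving v = ln x + C and hence y = x ln x + Cx. The specific equation is a choice of this sketch, and the centered difference quotient is only used to verify the solution.

```python
import math

# Sketch: verify that y = x*ln(x) + C*x solves the homogeneous-type
# equation dy/dx = 1 + y/x, obtained via the substitution v = y/x.
def y_exact(x, C=0.0):
    return x * math.log(x) + C * x

def residual(x, h=1e-6):
    """dy/dx - (1 + y/x), with the derivative taken by centered differences."""
    dydx = (y_exact(x + h) - y_exact(x - h)) / (2 * h)
    return dydx - (1.0 + y_exact(x) / x)

# residual should vanish (up to difference-quotient error) at sample points
max_res = max(abs(residual(x)) for x in [1.0, 2.0, 5.0])
```

A near-zero residual confirms that the substitution v = y/x did reduce the equation to one solvable by separation of variables.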
26.
Differential operator
–
In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function. This article considers mainly linear operators, which are the most common type; however, non-linear differential operators, such as the Schwarzian derivative, also exist. Assume that there is a map A from a function space F1 to another function space F2. The most common differential operator is the action of taking the derivative itself. Common notations for taking the first derivative with respect to a variable x include d/dx, D, D_x, and ∂_x. When taking higher, nth-order derivatives, the operator may also be written d^n/dx^n, D^n, D_x^n, or ∂_x^n. The derivative of a function f of an argument x is sometimes given as either f′(x) or ḟ. The use of the D notation is credited to Oliver Heaviside. One of the most frequently seen differential operators is the Laplacian operator, defined by Δ = ∇² = Σ_{k=1}^{n} ∂²/∂x_k². Another differential operator is the Θ operator, or theta operator, defined by Θ = z d/dz; its eigenspaces are the spaces of homogeneous polynomials. In writing, following common mathematical convention, the argument of a differential operator is usually placed on the right side of the operator itself. A bidirectional-arrow notation is used for describing the probability current of quantum mechanics. The differential operator del, also called the nabla operator, is an important vector differential operator; it appears frequently in physics in places like the differential form of Maxwell's equations. In three-dimensional Cartesian coordinates, del is defined as ∇ = x̂ ∂/∂x + ŷ ∂/∂y + ẑ ∂/∂z. Del is used to calculate the gradient, curl, divergence, and Laplacian of various objects. A differential operator also has an adjoint; this definition depends on the definition of the scalar product.
In the functional space of square-integrable functions on an interval [a, b], the scalar product is defined by ⟨f, g⟩ = ∫_a^b f(x) ḡ(x) dx. If one moreover adds the condition that f or g vanishes for x → a and x → b, one obtains a formula for the adjoint that does not explicitly depend on the definition of the scalar product; it is therefore chosen as the definition of the adjoint operator. When T∗ is defined according to this formula, it is called the formal adjoint of T. A self-adjoint operator is an operator equal to its own adjoint.
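The operator viewpoint described above (a map from functions to functions) can be sketched in a few lines. The centered-difference approximation of the derivative is an implementation choice of this sketch, and the monomial test function is chosen because the theta operator Θ = x d/dx has the monomials x^n as eigenfunctions with eigenvalue n.

```python
# Sketch: differential operators as maps from functions to functions.
# D approximates the first-derivative operator by a centered difference;
# theta is the operator x*d/dx, whose eigenfunctions are the monomials
# x^n with eigenvalue n.
def D(f, h=1e-6):
    """Return a function approximating the derivative of f."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

def theta(f):
    """The theta operator applied to f: (theta f)(x) = x * f'(x)."""
    return lambda x: x * D(f)(x)

cube = lambda x: x ** 3
val = theta(cube)(2.0)   # theta x^3 = 3 x^3, so expect about 3 * 8 = 24
```

Applying Θ to x³ returns approximately 3·x³, illustrating the eigenspace statement for homogeneous polynomials.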
27.
Delay differential equation
–
DDEs are also called time-delay systems, systems with aftereffect or dead-time, hereditary systems, equations with deviating argument, or differential-difference equations. Four points may give an explanation of the popularity of DDEs. Aftereffect is an applied problem: it is well known that, together with increasing expectations of dynamic performance, engineers need their models to behave more like the real process. Many processes include aftereffect phenomena in their inner dynamics; in addition, actuators, sensors, and communication networks that are now involved in feedback control loops introduce such delays. Finally, besides actual delays, time lags are frequently used to simplify very high order models. The interest in DDEs thus keeps on growing in all areas, and especially in control engineering. Delay systems are still resistant to many classical controllers: one could think that the simplest approach would consist in replacing them by some finite-dimensional approximations. Unfortunately, ignoring effects which are represented by DDEs is not a general alternative; in the worst cases it is disastrous in terms of stability. Delay properties are also surprising, since several studies have shown that voluntary introduction of delays can also benefit the control. In spite of their complexity, DDEs however often appear as simple infinite-dimensional models in the very complex area of partial differential equations. A general form of the delay differential equation for x(t) ∈ R^n is dx/dt = f(t, x(t), x_t), where x_t = {x(τ) : τ ≤ t} represents the trajectory of the solution in the past. In this equation, f is an operator from R × R^n × C¹ to R^n. Common classes include: continuous delay, dx/dt = f(t, x(t), ∫ x(t + τ) dμ(τ)); discrete delay, dx/dt = f(t, x(t), x(t − τ1), …, x(t − τm)) for τ1 > ⋯ > τm ≥ 0; linear with discrete delays, dx/dt = A0 x(t) + A1 x(t − τ1) + ⋯ + Am x(t − τm), where A0, …, Am ∈ R^{n×n}; and the pantograph equation, dx/dt = a x(t) + b x(λt) with 0 < λ < 1. This equation and some more general forms are named after the pantographs on trains. DDEs are mostly solved in a stepwise fashion with a principle called the method of steps. For instance, consider the DDE with a single delay, dx/dt = f(x(t), x(t − τ)), with initial condition φ: [−τ, 0] → R^n.
Then the solution on the interval [0, τ] is given by ψ(t), the solution to the ordinary initial value problem dψ/dt = f(ψ(t), φ(t − τ)), ψ(0) = φ(0), since for t ∈ [0, τ] the delayed argument t − τ lies in [−τ, 0], where the solution is already known. Repeating this on [τ, 2τ], [2τ, 3τ], and so on extends the solution interval by interval.
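The method of steps can be sketched on a concrete example. Here the DDE is dx/dt = −x(t − 1) with history φ(t) = 1 on [−1, 0], both chosen for illustration: on the first interval [0, 1] the delayed term equals φ(t − 1) = 1, so the DDE reduces to the ODE dx/dt = −1 with x(0) = 1, whose solution is x(t) = 1 − t. The Euler time-stepping is an implementation choice of this sketch.

```python
# Sketch of the method of steps for the DDE dx/dt = -x(t - 1) with
# history phi(t) = 1 on [-1, 0]. On [0, 1] the delayed argument t - 1
# lies in [-1, 0], where the solution is the known history, so the DDE
# becomes an ordinary IVP that we integrate with Euler's method.
def solve_first_step(n=1000):
    h = 1.0 / n
    phi = lambda t: 1.0            # known history on [-1, 0]
    x = phi(0.0)                   # continuity at t = 0
    t = 0.0
    for _ in range(n):
        x += h * (-phi(t - 1.0))   # right-hand side uses only the history
        t += h
    return x

x_at_1 = solve_first_step()        # exact solution x(t) = 1 - t gives 0
```

The computed value at t = 1 matches 1 − 1 = 0; the value and slope at t = 1 would then seed the next interval [1, 2].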
28.
Phase portrait
–
A phase portrait is a geometric representation of the trajectories of a dynamical system in the phase plane. Each set of initial conditions is represented by a different curve or point. Phase portraits are a valuable tool in studying dynamical systems. They consist of a plot of typical trajectories in the state space; this reveals information such as whether an attractor, a repellor or a limit cycle is present for the chosen parameter value. The concept of topological equivalence is important in classifying the behaviour of systems by specifying when two different phase portraits represent the same qualitative dynamic behavior. A phase portrait graph of a dynamical system depicts the system's trajectories and stable steady states; the axes are of state variables. Examples include the simple harmonic oscillator, whose portrait is made up of ellipses centred at the origin, and the Van der Pol oscillator.
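The simple harmonic oscillator example can be traced numerically. For x″ = −x, a trajectory in the (x, v) phase plane through (1, 0) is the unit circle. The semi-implicit (symplectic) Euler scheme is an implementation choice of this sketch, used because it keeps trajectories close to the true closed orbits.

```python
import math

# Sketch: one trajectory of the simple harmonic oscillator x'' = -x in
# the (x, v) phase plane, traced with semi-implicit Euler. The phase
# portrait is a family of circles about the origin; the trajectory
# through (1, 0) should stay near the circle x^2 + v^2 = 1.
def trajectory(steps=10000, h=0.001):
    x, v = 1.0, 0.0
    points = []
    for _ in range(steps):
        v -= h * x          # v' = -x
        x += h * v          # x' = v (uses updated v: semi-implicit)
        points.append((x, v))
    return points

radii = [math.hypot(x, v) for x, v in trajectory()]
max_dev = max(abs(r - 1.0) for r in radii)   # deviation from the circle
```

A small maximum deviation confirms the closed-orbit structure the phase portrait displays for this system.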
29.
Phase space
–
For mechanical systems, the phase space usually consists of all possible values of position and momentum variables. The concept of phase space was developed in the late 19th century by Ludwig Boltzmann, Henri Poincaré, and Willard Gibbs. For every possible state of the system, or allowed combination of values of the system's parameters, a point is included in the multidimensional space, and the system's evolving state over time traces a path through this high-dimensional space. As a whole, the diagram represents all that the system can be. A phase space may contain a great number of dimensions. For instance, a gas containing many molecules may require a separate dimension for each particle's x, y and z positions and momenta. In classical mechanics, any choice of generalized coordinates q_i for the position defines conjugate generalized momenta p_i, which together define coordinates on phase space. The motion of an ensemble of systems in this space is studied by classical statistical mechanics; the local density of points in such systems obeys Liouville's theorem. Within the context of a model system in classical mechanics, the phase space coordinates of the system at any given time are composed of all of the system's dynamic variables. Because of this, it is possible to calculate the state of the system at any time in the future or the past. For simple systems, there may be as few as one or two degrees of freedom; the simplest non-trivial examples are the exponential growth/decay model and the logistic growth model. In these cases, a sketch of the phase portrait may give qualitative information about the dynamics of the system. Here, the horizontal axis gives the position and the vertical axis the velocity; as the system evolves, its state follows one of the lines on the phase diagram. Classic examples of phase diagrams from chaos theory are the Lorenz attractor, population growth, and the parameter plane of quadratic polynomials with the Mandelbrot set. A plot of position and momentum variables as a function of time is sometimes called a phase plot or a phase diagram.
In quantum mechanics, the p and q of phase space normally become Hermitian operators in a Hilbert space, but they may retain their classical interpretation, provided functions of them compose in novel algebraic ways. This line of work, completed with J. E. Moyal, established the foundations of the phase space formulation of quantum mechanics. In thermodynamics and statistical mechanics contexts, the term phase space has two meanings. It is used in the same sense as in classical mechanics: in this sense, as long as the particles are distinguishable, a point in phase space specifies the state of all N particles. N is typically on the order of Avogadro's number, thus describing the system at a microscopic level is often impractical.
30.
Exponential stability
–
See Lyapunov stability, which gives a definition of asymptotic stability for more general dynamical systems. All exponentially stable systems are also asymptotically stable. In control theory, a continuous linear time-invariant (LTI) system is exponentially stable if and only if the system has eigenvalues with strictly negative real parts. A discrete-time input-to-output LTI system is exponentially stable if and only if the poles of its transfer function lie strictly within the unit circle centered on the origin of the complex plane. Exponential stability is a form of asymptotic stability. Systems that are not LTI are exponentially stable if their convergence is bounded by exponential decay. An exponentially stable LTI system is one that will not blow up when given a finite input or non-zero initial condition. If the system is instead given a Dirac delta impulse as input, then induced oscillations will die away and the system will return to its previous value. If oscillations do not die away, or the system does not return to its original output when an impulse is applied, the system is not exponentially stable. The graph on the right shows the impulse response of two similar systems; the green curve is the response of the system with impulse response y(t) = e^(−t/5). Although one response is oscillatory, both return to the original value of 0 over time. Imagine putting a marble in a ladle: it will settle itself into the lowest point of the ladle and, unless disturbed, will stay there. Now imagine giving the marble a push, which is an approximation to a Dirac delta impulse: the marble will roll back and forth but eventually resettle in the bottom of the ladle. Drawing the horizontal position of the marble over time would give a gradually diminishing sinusoid, rather like the curve in the image above. A step input in this case requires supporting the marble away from the bottom of the ladle. It is important to note that in this example the system is not stable for all inputs: give the marble a big push, and it will fall out of the ladle.
For some systems, therefore, it is proper to state that a system is stable over a certain range of inputs.
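The eigenvalue criterion for continuous LTI systems can be checked without an eigenvalue solver in the 2×2 case: both eigenvalues of A have strictly negative real parts exactly when trace(A) < 0 and det(A) > 0 (the Routh–Hurwitz condition for the characteristic polynomial s² − trace(A)·s + det(A)). The example matrices are chosen for illustration.

```python
# Sketch: exponential stability test for dx/dt = A x with a 2x2 matrix A.
# Both eigenvalues have strictly negative real parts iff trace(A) < 0 and
# det(A) > 0 (Routh-Hurwitz for s^2 - trace*s + det).
def is_exponentially_stable_2x2(A):
    (a, b), (c, d) = A
    trace = a + d
    det = a * d - b * c
    return trace < 0 and det > 0

# companion form of x'' + 3x' + 2x = 0: eigenvalues -1 and -2
stable = is_exponentially_stable_2x2([[0.0, 1.0], [-2.0, -3.0]])
# companion form of x'' - x' - 2x = 0: one eigenvalue is positive
unstable = is_exponentially_stable_2x2([[0.0, 1.0], [2.0, 1.0]])
```

For larger systems one would compute the eigenvalues directly and check that every real part is strictly negative.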
31.
Rate of convergence
–
In numerical analysis, the speed at which a convergent sequence approaches its limit is called the rate of convergence. This concept is of practical importance for iterative methods, where it may even make the difference between needing ten iterations or a million. Similar concepts are used for discretization methods; however, the terminology in this case is different from the terminology for iterative methods. Series acceleration is a collection of techniques for improving the rate of convergence of a series; such acceleration is commonly accomplished with sequence transformations. Suppose that the sequence (x_k) converges to the number L. We say that this sequence converges linearly to L if there exists a number μ ∈ (0, 1) such that lim_{k→∞} |x_{k+1} − L| / |x_k − L| = μ. The number μ is called the rate of convergence. If the sequence converges, and μ = μ_k varies from step to step with μ_k → 0 for k → ∞, then the sequence is said to converge superlinearly; if μ_k → 1 for k → ∞, it is said to converge sublinearly. The next definition is used to distinguish superlinear rates of convergence: we say that the sequence converges with order q to L, for q > 1, if lim_{k→∞} |x_{k+1} − L| / |x_k − L|^q = μ for some μ > 0. In particular, convergence with order q = 2 is called quadratic convergence and q = 3 is called cubic convergence. This is sometimes called Q-linear convergence, Q-quadratic convergence, etc., to distinguish it from the definition below; the Q stands for quotient, because the definition uses the quotient between two successive terms. To distinguish it from that definition, the alternative is sometimes called R-linear convergence, R-quadratic convergence, etc. More generally, the sequence Cμ^k converges linearly with rate μ if |μ| < 1. One example sequence also converges linearly to 0 with rate 1/2 under the extended definition, but not under the simple definition; another is in fact quadratically convergent; and a third converges sublinearly and logarithmically.
A similar situation exists for discretization methods. The important parameter here for the convergence speed is not the iteration number k but the number of grid points and the grid spacing; in this case, the number of grid points n is inversely proportional to the grid spacing. In this case, a sequence x_n is said to converge to L with order p if there exists a constant C such that |x_n − L| < C n^(−p) for all n. This is written as |x_n − L| = O(n^(−p)) using big O notation. This is the relevant definition when discussing methods for numerical quadrature or the solution of ordinary differential equations. A practical way to estimate the rate of convergence of a method is to compare the errors obtained with successive step sizes. The sequence with d_k = 1/(k+1) was introduced above; this sequence converges with order 1 according to the convention for discretization methods.
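The order q can also be estimated directly from an iteration's successive differences, using the standard estimate q ≈ log(|x_{k+1} − x_k| / |x_k − x_{k−1}|) / log(|x_k − x_{k−1}| / |x_{k−1} − x_{k−2}|). As a test iteration this sketch uses Newton's method on f(x) = x² − 2, which is known to converge quadratically; the function and starting point are illustrative choices.

```python
import math

# Sketch: estimate the order of convergence q from three successive
# differences of an iteration. Newton's method on f(x) = x^2 - 2
# (quadratically convergent toward sqrt(2)) serves as the example.
def newton_sqrt2(x0=1.0, iters=4):
    xs = [x0]
    for _ in range(iters):
        x = xs[-1]
        xs.append(x - (x * x - 2.0) / (2.0 * x))   # Newton step
    return xs

xs = newton_sqrt2()
d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
# standard order estimate from the last three differences; ~2 for Newton
q = math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])
```

The estimate lands very close to 2, consistent with the Q-quadratic convergence of Newton's method.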
32.
Numerical integration
–
This article focuses on calculation of definite integrals. The term numerical quadrature is more or less a synonym for numerical integration. Some authors refer to numerical integration over more than one dimension as cubature; others take quadrature to include higher-dimensional integration. The basic problem in numerical integration is to compute an approximate solution to a definite integral ∫_a^b f(x) dx to a given degree of accuracy. If f(x) is a smooth function integrated over a small number of dimensions, and the domain of integration is bounded, there are many methods for approximating the integral to the desired precision. The term "numerical integration" first appears in 1915 in the publication A Course in Interpolation and Numeric Integration for the Mathematical Laboratory. Quadrature is a historical mathematical term that means calculating area; quadrature problems have served as one of the main sources of mathematical analysis. Mathematicians of Ancient Greece, according to the Pythagorean doctrine, understood calculation of area as the process of constructing geometrically a square having the same area, and that is why the process was named quadrature. Examples include the quadrature of the circle and the lune of Hippocrates; such a construction must be performed only by means of compass and straightedge. The ancient Babylonians used the trapezoidal rule to integrate the motion of Jupiter along the ecliptic. For a quadrature of a rectangle with the sides a and b it is necessary to construct a square with the side x = √(ab), the geometric mean of a and b. For this purpose it is possible to use the following fact: if we draw the circle with the sum of a and b as the diameter, then the height of the chord drawn perpendicular to the diameter at the point where the two segments join, up to its crossing with the circle, equals their geometric mean. A similar geometrical construction solves the problem of a quadrature for a parallelogram. Problems of quadrature for curvilinear figures are much more difficult. The quadrature of the circle with compass and straightedge was proved in the 19th century to be impossible; nevertheless, for some figures a quadrature can be performed. The quadratures of the surface of a sphere and of a parabola segment done by Archimedes became the highest achievement of the antique analysis.
The area of the surface of a sphere is equal to quadruple the area of a great circle of this sphere. The area of a segment of the parabola cut from it by a straight line is 4/3 the area of the triangle inscribed in this segment. For the proof of these results Archimedes used the method of exhaustion of Eudoxus. In medieval Europe, quadrature meant calculation of area by any method; more often the method of indivisibles was used, which was less rigorous. John Wallis algebrised this method: in his Arithmetica Infinitorum he wrote series that we now call definite integrals, and he calculated their values. Isaac Barrow and James Gregory made further progress: quadratures for some algebraic curves. Christiaan Huygens successfully performed a quadrature of some solids of revolution.
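The basic problem stated above, approximating ∫_a^b f(x) dx to a desired accuracy, can be sketched with the composite trapezoidal rule, the same rule the Babylonians are credited with using. The integrand ∫_0^π sin x dx = 2 is an illustrative choice with a known exact value.

```python
import math

# Sketch: composite trapezoidal rule for the basic numerical-integration
# problem, approximating the definite integral of f over [a, b] with n
# equal subintervals.
def trapezoid(f, a, b, n=1000):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))        # endpoint terms get half weight
    for i in range(1, n):
        total += f(a + i * h)          # interior points get full weight
    return h * total

approx = trapezoid(math.sin, 0.0, math.pi)   # exact value is 2
```

The error of the composite rule shrinks like O(h²), so doubling n roughly quarters the error, which connects this example to the discussion of rates of convergence above.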
33.
Dirac delta function
–
It was introduced by theoretical physicist Paul Dirac. From a purely mathematical viewpoint, the Dirac delta is not strictly a function, because any extended-real function that is equal to zero everywhere but a single point must have total integral zero. The delta function only makes sense as a mathematical object when it appears inside an integral. From this perspective the Dirac delta can usually be manipulated as though it were a function; the formal rules obeyed by this function are part of the operational calculus, a standard tool kit of physics and engineering. Formally, the delta function must be defined as the distribution that corresponds to a probability measure supported at the origin. In many applications, the Dirac delta is regarded as a kind of limit of a sequence of functions having a tall spike at the origin. The approximating functions of the sequence are thus called approximate or nascent delta functions. In the context of signal processing the delta function is often referred to as the unit impulse symbol. Its discrete analog is the Kronecker delta function, which is defined on a discrete domain. The graph of the delta function is usually thought of as following the whole x-axis and the positive y-axis. Despite its name, the delta function is not truly a function. For example, the objects f(x) = δ(x) and g(x) = 0 are equal everywhere except at x = 0, yet have integrals that are different. According to Lebesgue integration theory, if f and g are functions such that f = g almost everywhere, then f is integrable if and only if g is integrable and their integrals are identical; a rigorous treatment of the Dirac delta requires measure theory or the theory of distributions. The Dirac delta is used to model a tall narrow spike function; for example, to calculate the dynamics of a baseball being hit by a bat, one can approximate the force of the bat hitting the baseball by a delta function. Later, Augustin Cauchy expressed the Fourier integral theorem using exponentials: f(x) = (1/2π) ∫_{−∞}^{∞} e^{ipx} ( ∫_{−∞}^{∞} e^{−ipα} f(α) dα ) dp. Cauchy pointed out that in some circumstances the order of integration in this result was significant.
A rigorous interpretation of the exponential form, and the various limitations upon the function f necessary for its application, extended over several centuries. Namely, it is necessary that these functions decrease sufficiently rapidly to zero in order to ensure the existence of the Fourier integral; for example, the Fourier transform of such simple functions as polynomials does not exist in the classical sense. The extension of the classical Fourier transformation to distributions considerably enlarged the class of functions that could be transformed, leading to the formal development of the Dirac delta function. An infinitesimal formula for a tall, narrow unit-impulse delta function explicitly appears in an 1827 text of Augustin-Louis Cauchy. Siméon Denis Poisson considered the issue in connection with the study of wave propagation, as did Gustav Kirchhoff somewhat later.
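The "nascent delta function" idea above can be checked numerically: smearing a smooth test function f against a narrow spike δ_ε should return approximately f(0), with the approximation improving as ε shrinks. The Gaussian spike, the test function cos, and the Riemann-sum integration are all illustrative choices of this sketch.

```python
import math

# Sketch: a narrow Gaussian as a nascent delta function. We check that
# the smeared value ∫ f(x) * delta_eps(x) dx approaches f(0) for a
# smooth test function f.
def nascent_delta(x, eps):
    """Gaussian spike of width eps with unit total integral."""
    return math.exp(-x * x / (eps * eps)) / (eps * math.sqrt(math.pi))

def smeared_value(f, eps, n=20001, width=1.0):
    """Riemann-sum approximation of the integral over [-width, width]."""
    h = 2.0 * width / (n - 1)
    return sum(f(-width + i * h) * nascent_delta(-width + i * h, eps) * h
               for i in range(n))

val = smeared_value(math.cos, eps=0.01)   # cos(0) = 1, so expect ~1
```

The result is close to cos(0) = 1, illustrating that the delta function "only makes sense inside an integral": the spike itself diverges as ε → 0, but the smeared value converges.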
34.
Euler method
–
In mathematics and computational science, the Euler method is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method. The Euler method is named after Leonhard Euler, who treated it in his book Institutionum calculi integralis. The Euler method is a first-order method, which means that the local error (error per step) is proportional to the square of the step size, while the global error (error at a given time) is proportional to the step size. The Euler method often serves as the basis to construct more complex methods. Consider the problem of calculating the shape of an unknown curve which starts at a given point and satisfies a given differential equation. The idea is that while the curve is initially unknown, its starting point, which we denote by A0, is known. Then, from the differential equation, the slope of the curve at A0 can be computed, and so can the tangent line. Take a small step along that tangent line up to a point A1; along this small step, the slope does not change too much, so A1 will be close to the curve. If we pretend that A1 is still on the curve, the same reasoning can be applied again. After several steps, a polygonal curve A0 A1 A2 A3 … is computed. Choose a value h for the size of every step and set t_n = t_0 + nh. Now, one step of the Euler method from t_n to t_{n+1} = t_n + h is y_{n+1} = y_n + h f(t_n, y_n). The value of y_n is an approximation of the solution to the ODE at time t_n: y_n ≈ y(t_n). The Euler method is explicit, i.e. the solution y_{n+1} is an explicit function of y_i for i ≤ n. Given the initial value problem y′ = y, y(0) = 1, the Euler method is y_{n+1} = y_n + h f(t_n, y_n), so first we must compute f(t_0, y_0). In this simple differential equation, the function f is defined by f(t, y) = y, hence f(t_0, y_0) = f(0, 1) = 1. By doing the above step, we have found the slope of the line that is tangent to the solution curve at the point (0, 1). Recall that the slope is defined as the change in y divided by the change in t. The next step is to multiply the above value by the step size h, which we take equal to one here: h · f(t_0, y_0) = 1 · 1 = 1.
Since the step size is the change in t, multiplying the step size by the slope gives the change in y. This value is then added to the initial y value to obtain the next value to be used for computations: y_1 = y_0 + h f(t_0, y_0) = 1 + 1 · 1 = 2. The above steps should be repeated to find y_2, y_3 and y_4.
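The worked example above can be sketched in a few lines. With f(t, y) = y, y(0) = 1 and h = 1, each Euler step doubles the value, producing the sequence 1, 2, 4, 8, 16.

```python
# Sketch of the worked example: Euler's method y_{n+1} = y_n + h*f(t_n, y_n)
# applied to y' = y, y(0) = 1, with step size h = 1.
def euler(f, t0, y0, h, steps):
    t, y = t0, y0
    values = [y0]
    for _ in range(steps):
        y = y + h * f(t, y)   # one Euler step along the tangent line
        t = t + h
        values.append(y)
    return values

ys = euler(lambda t, y: y, 0.0, 1.0, 1.0, 4)   # y_0 .. y_4
```

Since the exact solution is y(t) = e^t with y(4) ≈ 54.6, the approximation y_4 = 16 shows how crude a step size of h = 1 is; the first-order global error shrinks only in proportion to h.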
35.
Finite difference method
–
Today, FDMs are the dominant approach to numerical solutions of partial differential equations. First, assuming the function whose derivatives are to be approximated is properly behaved, by Taylor's theorem we can create a Taylor series expansion f(x_0 + h) = f(x_0) + (f′(x_0)/1!) h + (f″(x_0)/2!) h² + ⋯ + (f^(n)(x_0)/n!) h^n + R_n(x), where n! denotes the factorial of n and R_n(x) is a remainder term. The error in a method's solution is defined as the difference between the approximation and the exact analytical solution. To use a finite difference method to approximate the solution to a problem, one must first discretize the problem's domain; this is usually done by dividing the domain into a uniform grid. Note that this means that finite-difference methods produce sets of discrete numerical approximations to the derivative. An expression of general interest is the local truncation error of a method, typically expressed using big-O notation; local truncation error refers to the error from a single application of a method. That is, it is the quantity f′(x_i) − f_i′, where f′(x_i) refers to the exact value and f_i′ to the numerical approximation. The remainder term of a Taylor polynomial is convenient for analyzing the local truncation error. Using the Lagrange form of the remainder from the Taylor polynomial for f(x_0 + h), R_n(x_0 + h) = (f^(n+1)(ξ)/(n+1)!) h^(n+1), where x_0 < ξ < x_0 + h, the dominant term of the local truncation error can be discovered. For example, again using the forward-difference formula for the first derivative, knowing that f(x_0 + h) = f(x_0) + f′(x_0) h + (f″(ξ)/2) h², some algebraic manipulation leads to (f(x_0 + h) − f(x_0))/h = f′(x_0) + (f″(ξ)/2) h, and a final expression of this example and its order is (f(x_0 + h) − f(x_0))/h = f′(x_0) + O(h). This means that, in this case, the local truncation error is proportional to the step size. The quality and duration of a simulated FDM solution depend on the discretization equation selection and the step sizes. The data quality increases and the simulation duration grows significantly with smaller step size, so a balance between data quality and simulation duration is necessary for practical usage. Large time steps are useful for increasing simulation speed in practice.
However, time steps which are too large may create instabilities; the von Neumann method is usually applied to determine the numerical model stability. For example, consider the ordinary differential equation u′ = 3u + 2. Discretizing it with the forward difference gives (u_{n+1} − u_n)/h = 3u_n + 2, i.e. u_{n+1} = u_n + h(3u_n + 2). The last equation is a finite-difference equation, and solving this equation gives an approximate solution to the differential equation.
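The finite-difference equation above can be iterated directly. The initial condition u(0) = 0 and the step size are choices of this sketch; with u(0) = 0 the exact solution is u(t) = (2/3)(e^(3t) − 1), which the iteration should approach as h shrinks.

```python
import math

# Sketch of the example above: the forward-difference discretization of
# u' = 3u + 2 gives the finite-difference equation
#   u_{n+1} = u_n + h*(3*u_n + 2).
# With u(0) = 0 the exact solution is u(t) = (2/3)*(e^{3t} - 1).
def fdm_solve(u0=0.0, h=1e-4, steps=1000):
    u = u0
    for _ in range(steps):
        u = u + h * (3.0 * u + 2.0)   # finite-difference update
    return u                           # approximation of u at t = h*steps

u_num = fdm_solve()                    # approximate u(0.1)
u_exact = (2.0 / 3.0) * (math.exp(0.3) - 1.0)
```

The first-order truncation error discussed above shows up here as an O(h) gap between u_num and u_exact; halving h roughly halves that gap.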
36.
Finite element method
–
The finite element method (FEM) is a numerical method for solving problems of engineering and mathematical physics; it is also referred to as finite element analysis (FEA). Typical problem areas of interest include structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential. The analytical solution of these problems generally requires the solution of boundary value problems for partial differential equations. The finite element method formulation of the problem results in a system of algebraic equations; the method yields approximate values of the unknowns at a discrete number of points over the domain. To solve the problem, it subdivides a large problem into smaller, simpler parts that are called finite elements. The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. FEM then uses variational methods from the calculus of variations to approximate a solution by minimizing an associated error function. The global system of equations has known solution techniques and can be calculated from the initial values of the original problem to obtain a numerical answer. To explain the approximation in this process, FEM is commonly introduced as a special case of the Galerkin method. The process, in mathematical language, is to construct an integral of the inner product of the residual and the weight functions and set the integral to zero. In simple terms, it is a procedure that minimizes the error of approximation by fitting trial functions into the PDE; the residual is the error caused by the trial functions, and the weight functions are polynomial approximation functions that project the residual. These equation sets are the element equations; they are linear if the underlying PDE is linear, and vice versa. In the step above, a global system of equations is generated from the element equations through a transformation of coordinates from the subdomains' local nodes to the domain's global nodes.
This spatial transformation includes appropriate orientation adjustments as applied in relation to the reference coordinate system. The process is often carried out by FEM software using coordinate data generated from the subdomains. FEM is best understood from its practical application, known as finite element analysis (FEA). FEA as applied in engineering is a computational tool for performing engineering analysis. It includes the use of mesh generation techniques for dividing a complex problem into small elements, as well as software coded with an FEM algorithm. FEA is a good choice for analyzing problems over complicated domains, when the domain changes, when the desired precision varies over the entire domain, or when the solution lacks smoothness. For instance, in a crash simulation it is possible to increase prediction accuracy in important areas like the front of the car while reducing it elsewhere, thus reducing the cost of the simulation. Another example would be in weather prediction, where it is more important to have accurate predictions over developing highly nonlinear phenomena than over relatively calm areas.
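The assembly-and-solve pipeline described above can be sketched at its smallest scale: a one-dimensional model problem −u″ = 1 on (0, 1) with u(0) = u(1) = 0, discretized with piecewise-linear elements on a uniform mesh. The Galerkin weak form then yields the tridiagonal global system (1/h)(−u_{i−1} + 2u_i − u_{i+1}) = h at each interior node. The model problem, mesh size, and the Thomas algorithm used to solve the system are all choices of this sketch.

```python
# Sketch: a minimal 1D finite element computation for -u'' = 1 on (0, 1)
# with u(0) = u(1) = 0, using piecewise-linear elements on a uniform mesh.
# Assembly gives a tridiagonal system, solved here by the Thomas algorithm.
def fem_1d_poisson(n=10):
    h = 1.0 / n
    m = n - 1                       # number of interior nodes
    a = [-1.0 / h] * m              # sub-diagonal of the stiffness matrix
    b = [2.0 / h] * m               # main diagonal
    c = [-1.0 / h] * m              # super-diagonal
    d = [h] * m                     # load vector: ∫ f*phi_i dx = h for f = 1
    # Thomas algorithm: forward elimination...
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # ...then back substitution
    u = [0.0] * m
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u                        # nodal values at x = h, 2h, ..., 1-h

u = fem_1d_poisson()                # exact solution is u(x) = x(1-x)/2
```

For this 1D model problem the linear-element solution is exact at the nodes, so the midpoint value equals 0.5·0.5/2 = 0.125; in higher dimensions the same assemble-then-solve structure applies, only the elements and the sparse system grow.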