1.
Geometry
–
Geometry is a branch of mathematics concerned with questions of shape, size, the relative position of figures, and the properties of space. A mathematician who works in the field of geometry is called a geometer. Geometry arose independently in a number of early cultures as a practical way of dealing with lengths, areas, and volumes. Elements of mathematical science began to emerge in Western geometry as early as the 6th century BC. By the 3rd century BC, geometry had been put into an axiomatic form by Euclid, whose treatment, Euclid's Elements, set a standard for many centuries to follow. Geometry also arose independently in India, with texts providing rules for geometric constructions appearing as early as the 3rd century BC, and Islamic scientists preserved Greek ideas and expanded on them during the Middle Ages. By the early 17th century, geometry had been put on a solid analytic footing by mathematicians such as René Descartes. Since then, and into modern times, geometry has expanded into non-Euclidean geometry and manifolds. While geometry has evolved significantly throughout the years, some general concepts remain more or less fundamental to it, including the concepts of points, lines, planes, surfaces, angles, and curves. Contemporary geometry has many subfields. Euclidean geometry is geometry in its classical sense; the mandatory educational curriculum of the majority of nations includes the study of points, lines, planes, angles, triangles, congruence, similarity, solid figures, and circles. Euclidean geometry also has applications in computer science, crystallography, and various branches of modern mathematics. Differential geometry applies techniques of calculus and linear algebra to problems in geometry; it has applications in physics, including in general relativity. Topology is the field concerned with the properties of geometric objects that are unchanged by continuous mappings.
In practice, this often means dealing with large-scale properties of spaces. Convex geometry investigates convex shapes in Euclidean space and its more abstract analogues, often using techniques of real analysis; it has close connections to convex analysis, optimization, and functional analysis. Algebraic geometry studies geometry through the use of multivariate polynomials and other algebraic techniques, with applications in many areas, including cryptography and string theory. Discrete geometry is concerned mainly with questions of the relative position of simple objects, such as points; it shares many methods and principles with combinatorics. Geometry has applications to many fields, including art, architecture, and physics, as well as to other branches of mathematics. The earliest recorded beginnings of geometry can be traced to ancient Mesopotamia and Egypt: the earliest known texts on geometry are the Egyptian Rhind Papyrus and Moscow Papyrus and Babylonian clay tablets such as Plimpton 322. For example, the Moscow Papyrus gives a formula for calculating the volume of a truncated pyramid, and later clay tablets demonstrate that Babylonian astronomers implemented trapezoid procedures for computing Jupiter's position and motion within time-velocity space.
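The Moscow Papyrus formula mentioned above is the familiar frustum-volume formula V = (h/3)(a² + ab + b²) for a truncated square pyramid with base side a, top side b, and height h. A minimal sketch in Python (the function name is ours):

```python
def frustum_volume(h, a, b):
    """Volume of a truncated square pyramid (frustum) with height h,
    base side a, and top side b: V = (h/3) * (a^2 + a*b + b^2)."""
    return (h / 3) * (a * a + a * b + b * b)

# Problem 14 of the Moscow Papyrus uses h = 6, a = 4, b = 2 and obtains 56.
print(frustum_volume(6, 4, 2))  # → 56.0
```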
2.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternative, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the subject's fields can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on the challenges of implementing computation. Human–computer interaction considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity; indeed, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist because, among other reasons, he documented the binary number system. Charles Babbage began developing his programmable Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched-card system derived from the Jacquard loom, making the machine infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the machine was finished, some hailed it as Babbage's dream come true.
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as an academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953; the first computer science program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with these machines was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
3.
Quantum field theory
–
Quantum field theory (QFT) treats particles as excited states of an underlying physical field; these excitations are called field quanta. In QFT, quantum mechanical interactions among particles are described by interaction terms among the corresponding underlying quantum fields. These interactions are conveniently visualized by Feynman diagrams, a formal tool of relativistically covariant perturbation theory that serves to evaluate particle processes. The first achievement of quantum field theory, namely quantum electrodynamics, is still the paradigmatic example of a successful quantum field theory. Ordinarily, quantum mechanics cannot give an account of photons, which constitute the prime case of relativistic particles: since photons have rest mass zero and correspondingly travel in the vacuum at the speed c, a non-relativistic theory such as ordinary QM cannot give even an approximate description. Photons are implicit in the emission and absorption processes that have to be postulated; for an explicit description of photons, the formalism of QFT is needed. In fact, most topics in the early development of quantum theory were related to the interaction of radiation and matter. However, quantum mechanics as formulated by Dirac, Heisenberg, and Schrödinger in 1926–27 started from atomic spectra. As soon as the conceptual framework of quantum mechanics was developed, a small group of theoreticians tried to extend quantum methods to electromagnetic fields; a good example is the paper by Born, Jordan & Heisenberg. The basic idea was that in QFT the electromagnetic field should be represented by matrices in the same way that position and momentum are represented in QM. The ideas of QM were thus extended to systems having an infinite number of degrees of freedom. The inception of QFT is usually considered to be Dirac's famous 1927 paper on "The quantum theory of the emission and absorption of radiation"; here Dirac coined the name quantum electrodynamics for the part of QFT that was developed first.
Employing the theory of the harmonic oscillator, Dirac gave a theoretical description of how photons appear in the quantization of the electromagnetic radiation field; Dirac's procedure later became a model for the quantization of other fields as well. These first approaches to QFT were further developed during the following three years. P. Jordan introduced creation and annihilation operators for fields obeying Fermi–Dirac statistics; these differ from the corresponding operators for Bose–Einstein statistics in that the former satisfy anti-commutation relations while the latter satisfy commutation relations. The methods of QFT could be applied to derive equations resulting from the quantum mechanical treatment of particles, e.g. the Dirac equation and the Klein–Gordon equation. Schweber points out that the idea and procedure of second quantization go back to Jordan, in a number of papers from 1927, while some difficult problems concerning commutation relations, statistics, and Lorentz invariance were only eventually solved. The first comprehensive account of a general theory of quantum fields, in particular the method of canonical quantization, was given by Heisenberg and Pauli in 1929.
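The contrast between the two statistics can be made concrete for a single fermionic mode, where the annihilation operator a and creation operator a† can be represented as 2×2 matrices. A minimal sketch in Python (plain nested lists, no external libraries; the representation choice is a standard textbook one, not from the text above), verifying that the anticommutator {a, a†} = a a† + a† a equals the identity while the commutator does not:

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def matsub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

# Single fermionic mode: a annihilates the occupied state, a_dag creates it.
a     = [[0, 1], [0, 0]]
a_dag = [[0, 0], [1, 0]]

anticommutator = matadd(matmul(a, a_dag), matmul(a_dag, a))
commutator     = matsub(matmul(a, a_dag), matmul(a_dag, a))

print(anticommutator)  # → [[1, 0], [0, 1]]  ({a, a†} = 1, as Fermi–Dirac statistics requires)
print(commutator)      # → [[1, 0], [0, -1]] (not a multiple of the identity)
```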
4.
Statistics
–
Statistics is a branch of mathematics dealing with the collection, analysis, interpretation, presentation, and organization of data. In applying statistics to, e.g., a scientific, industrial, or social problem, it is conventional to begin with a statistical population or process to be studied. Populations can be diverse topics, such as all people living in a country or every atom composing a crystal. Statistics deals with all aspects of data, including the planning of data collection in terms of the design of surveys and experiments. The statistician Sir Arthur Lyon Bowley defined statistics as "numerical statements of facts in any department of inquiry placed in relation to each other". When census data cannot be collected, statisticians collect data by developing specific experiment designs; representative sampling assures that inferences and conclusions can safely extend from the sample to the population as a whole. In contrast, an observational study does not involve experimental manipulation. Inferences in mathematical statistics are made under the framework of probability theory, which deals with the analysis of random phenomena. A standard statistical procedure involves testing the relationship between two data sets, or between a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, and this is compared as an alternative to an idealized null hypothesis of no relationship between the two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (rejecting a true null hypothesis) and Type II errors (failing to reject a false null hypothesis). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis. Measurement processes that generate statistical data are also subject to error.
Many of these errors are classified as random or systematic, and the presence of missing data or censoring may result in biased estimates; specific techniques have been developed to address these problems. Statistics continues to be an area of active research, for example on the problem of how to analyze big data. Statistics is a body of science that pertains to the collection, analysis, interpretation or explanation, and presentation of data. Some consider statistics to be a mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is specifically concerned with the use of data in the context of uncertainty; mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory. In applying statistics to a problem, it is standard practice to start with a population or process to be studied. Populations can be diverse topics, such as all people living in a country or every atom composing a crystal. Ideally, statisticians compile data about the entire population; this may be organized by governmental statistical institutes.
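The Type I and Type II errors mentioned above can be computed exactly for a simple null hypothesis. A hedged sketch in Python (the test, sample size, and rejection region are our own illustrative choices, not from the text): testing H₀ "the coin is fair" (p = 0.5) with 20 flips, rejecting when the number of heads is ≤ 4 or ≥ 16, and computing both error rates from the binomial distribution:

```python
from math import comb

n = 20  # number of coin flips
reject = [k for k in range(n + 1) if k <= 4 or k >= 16]  # rejection region

def binom_pmf(k, n, p):
    """Probability of exactly k heads in n flips with heads-probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Type I error: probability of rejecting H0 (p = 0.5) when H0 is true.
alpha = sum(binom_pmf(k, n, 0.5) for k in reject)

# Type II error: probability of NOT rejecting when the coin is biased (p = 0.7).
beta = sum(binom_pmf(k, n, 0.7) for k in range(n + 1) if k not in reject)

print(f"Type I error rate:  {alpha:.4f}")   # about 0.012 for this region
print(f"Type II error rate: {beta:.4f}")
```

Widening the rejection region lowers the Type II rate at the cost of a higher Type I rate, which is the basic trade-off behind choosing a significance level.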
5.
Tilde
–
The tilde is a grapheme with several uses. The name of the character came into English from Spanish, which in turn took it from the Latin titulus. The reason for the name is that the mark was originally written over a letter as a scribal abbreviation, as a mark of suspension; thus the commonly used words Anno Domini were frequently abbreviated to Ao Dñi. Such a mark could denote the omission of one letter or several letters, which saved on the expense of the scribe's labour and the cost of vellum. Medieval European charters written in Latin are largely made up of such abbreviated words with suspension marks and other abbreviations. The tilde has since been applied to a number of other uses, as a diacritic mark or as a character in its own right. These are encoded in Unicode at U+0303 ◌̃ Combining Tilde and U+007E ~ Tilde. In lexicography, the latter kind of tilde and the swung dash are used in dictionaries to indicate the omission of the entry word. The symbol informally means "approximately", "about", or "around", as in "~30 minutes before", and it can mean "similar to", including "of the same order of magnitude as": x ~ y means that x and y are of the same order of magnitude. The tilde is also used to indicate "equal to or approximately equal to" by placing it over the = symbol, giving ≅. An example of such suspension marks in practice is the text of the Domesday Book of 1086, relating, for example, to the manor of Molland; the text with abbreviations expanded reads as follows: Mollande tempore regis Edwardi geldabat pro iiii hidis et uno ferling. In dominio sunt iii carucae et x servi et xxx villani et xx bordarii cum xvi carucis, ibi xii acrae prati et xv acrae silvae. Pastura iii leugae in longitudine et latitudine. Elwardus tenebat tempore regis Edwardi pro manerio et geldabat pro dimidia hida. Ibi sunt v villani cum i servo, valet xx solidos ad pensam et arsuram. Eidem manerio est injuste adjuncta Nimete et valet xv solidos. Ipsi manerio pertinet tercius denarius de Hundredis Nortmoltone et Badentone et Brantone et tercium animal pasturae morarum.
The incorporation of the tilde into ASCII is a result of its appearance as a distinct character on mechanical typewriters in the late nineteenth century. Any good typewriter store had a catalog of alternative keyboards that could be specified for machines ordered from the factory; at that time, the tilde was used only on Spanish and Portuguese typewriters. In Modern Spanish, the tilde is used only with n and N; both were conveniently assigned to a single mechanical typebar, which sacrificed a key that was felt to be less important, usually the ½–¼ key. Portuguese, however, uses not ñ but nh, and it uses the tilde on the vowels a and o.
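The "same order of magnitude" reading of x ~ y mentioned above can be illustrated with a small Python helper. This is a sketch under one common informal convention (treating two positive numbers as the same order of magnitude when their ratio is below 10); the function name and the threshold are our own choices, since the notation has no single rigorous definition:

```python
def same_order(x, y, base=10):
    """True if positive numbers x and y are within one order of magnitude,
    i.e. max(x, y) / min(x, y) < base. One informal convention among several."""
    if x <= 0 or y <= 0:
        raise ValueError("defined here only for positive numbers")
    return max(x, y) / min(x, y) < base

print(same_order(300, 800))   # → True   (300 ~ 800)
print(same_order(3, 3000))    # → False  (three orders of magnitude apart)
```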
6.
Error function
–
In mathematics, the error function is a special function of sigmoid shape that occurs in probability, statistics, and the partial differential equations describing diffusion. It is defined as

erf(x) = (1/√π) ∫_{−x}^{x} e^{−t²} dt = (2/√π) ∫_{0}^{x} e^{−t²} dt.

The error function is used in measurement theory, and its use in other branches of mathematics is typically unrelated to the characterization of measurement errors. In statistics, it is common to have a variable Y and its unbiased estimator Ŷ; the error is then defined as ε = Ŷ − Y. This makes the error a normally distributed random variable with mean 0 and some variance σ². This is true for any random variable with distribution N(0, σ²), but the application to error variables is how the error function got its name. The previous paragraph can be generalized to any variance: given a variable ε ∼ N(0, σ²), the probability of ε lying in [−a, a] is erf(a/(σ√2)). This is used in statistics to predict the behavior of any sample with respect to the population mean. This usage is similar to the Q-function, which in fact can be written in terms of the error function. Another form of erfc(x) for non-negative x is known as Craig's formula:

erfc(x) = (2/π) ∫_{0}^{π/2} exp(−x²/sin²θ) dθ.

The imaginary error function, denoted erfi, is defined as

erfi(x) = −i erf(ix) = (2/√π) ∫_{0}^{x} e^{t²} dt = (2/√π) e^{x²} D(x),

where D(x) is the Dawson function. Despite the name "imaginary error function", erfi(x) is real when x is real. The error function is related to the cumulative distribution function Φ, the integral of the standard normal distribution, by

Φ(x) = 1/2 + (1/2) erf(x/√2) = (1/2) erfc(−x/√2).

The property erf(−x) = −erf(x) means that the error function is an odd function; this results directly from the fact that the integrand e^{−t²} is an even function. For any complex number z, erf(z̄) equals the complex conjugate of erf(z). The integrand f = exp(−z²) and f = erf(z) are shown in the complex z-plane in figures 2 and 3; the level Im(f) = 0 is shown with a thick green line. Negative integer values of Im(f) are shown with red lines. Positive integer values of Im(f) are shown with blue lines.
Intermediate levels of Im(f) = constant are shown with thin green lines; intermediate levels of Re(f) = constant are shown with thin red lines for negative values and with thin blue lines for positive values. The error function at +∞ is exactly 1. On the real axis, erf(z) approaches unity as z → +∞ and −1 as z → −∞; on the imaginary axis, it tends to ±i∞. The error function is an entire function: it has no singularities (except that at infinity) and its Taylor expansion always converges,

erf(z) = (2/√π) Σ_{n=0}^{∞} (−1)ⁿ z^{2n+1} / (n! (2n+1)),

which holds for every complex number z.
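Several of the identities above can be checked numerically with Python's standard library, which provides math.erf and math.erfc. A small sketch verifying the odd-function property, a truncated Taylor series, and the relation to the normal CDF Φ (available as statistics.NormalDist):

```python
import math
from statistics import NormalDist

# Odd function: erf(-x) = -erf(x)
assert math.isclose(math.erf(-1.2), -math.erf(1.2))

# Taylor series: erf(z) = (2/sqrt(pi)) * sum (-1)^n z^(2n+1) / (n! (2n+1))
def erf_series(x, terms=30):
    s = sum((-1)**n * x**(2 * n + 1) / (math.factorial(n) * (2 * n + 1))
            for n in range(terms))
    return 2 / math.sqrt(math.pi) * s

print(abs(erf_series(0.5) - math.erf(0.5)))  # tiny: the series converges fast here

# Relation to the standard normal CDF: Phi(x) = 1/2 + (1/2) erf(x / sqrt(2))
phi = NormalDist()  # mean 0, standard deviation 1
x = 0.8
assert math.isclose(phi.cdf(x), 0.5 + 0.5 * math.erf(x / math.sqrt(2)))
```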
7.
Gamma function
–
In mathematics, the gamma function is an extension of the factorial function, with its argument shifted down by 1, to real and complex numbers. That is, if n is a positive integer, Γ(n) = (n − 1)!. The gamma function is defined for all complex numbers except the non-positive integers. The gamma function can be seen as a solution to the interpolation problem of finding a smooth curve connecting the points (n, (n − 1)!) at the positive integers n. The simple formula for the factorial, x! = 1 × 2 × … × x, cannot be used directly for non-integer values of x, and a good solution to this is the gamma function. There are infinitely many continuous extensions of the factorial to non-integers; the gamma function is the most useful solution in practice, being analytic, and it can be characterized in several ways. The Bohr–Mollerup theorem proves that the properties f(1) = 1 and f(x + 1) = x f(x), together with the assumption that f be logarithmically convex, uniquely determine f for positive real arguments; from there, the gamma function can be extended to all real and complex values by unique analytic continuation. Also see Euler's infinite product definition below, where the properties f(1) = 1 and f(x + 1) = x f(x), together with the requirement that lim_{n→+∞} n! nˣ / f(n + 1 + x) = 1, uniquely define the same function. The notation Γ(z) is due to Legendre. If the real part of the complex number z is positive, then the integral

Γ(z) = ∫_{0}^{∞} x^{z−1} e^{−x} dx

converges absolutely and is known as the Euler integral of the second kind. The identity Γ(z + 1) = z Γ(z) can be used to extend this integral formulation to a meromorphic function defined for all complex numbers z except the non-positive integers; it is this extended version that is commonly referred to as the gamma function. When seeking to approximate z! for a complex number z, it turns out to be effective to first compute n! for some large integer n and then use the relation m! = m (m − 1)! backwards n times. Furthermore, this approximation is exact in the limit as n goes to infinity. Specifically, for a fixed integer m, it is the case that

lim_{n→+∞} n! (n + 1)^m / (n + m)! = 1,

and we can ask that the same formula hold when the arbitrary integer m is replaced by an arbitrary complex number z:

lim_{n→+∞} n! (n + 1)^z / (n + z)! = 1.
Multiplying both sides by z! gives

z! = lim_{n→+∞} n! (n + 1)^z z! / (n + z)! = lim_{n→+∞} (n + 1)^z (1 · 2 ⋯ n) / ((z + 1)(z + 2) ⋯ (z + n)) = ∏_{n=1}^{∞} (1 + 1/n)^z / (1 + z/n).

Similarly for the gamma function, the definition as an infinite product due to Euler is valid for all complex numbers z except the non-positive integers:

Γ(z) = (1/z) ∏_{n=1}^{∞} (1 + 1/n)^z / (1 + z/n).

By this construction, the gamma function is the unique function that simultaneously satisfies Γ(1) = 1 and Γ(z + 1) = z Γ(z) for all complex numbers z except the non-positive integers.
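These relations can be checked numerically with Python's math.gamma. A short sketch verifying Γ(n) = (n − 1)!, the recurrence Γ(z + 1) = z Γ(z), and the (slow) convergence of a truncated Euler product toward z!:

```python
import math

# Gamma extends the factorial: Gamma(n) = (n - 1)! for positive integers n.
for n in range(1, 8):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))

# Recurrence: Gamma(z + 1) = z * Gamma(z)
z = 2.7
assert math.isclose(math.gamma(z + 1), z * math.gamma(z))

# Truncated Euler product for z!: product over n of (1 + 1/n)^z / (1 + z/n).
def euler_product_factorial(z, terms=100_000):
    prod = 1.0
    for n in range(1, terms + 1):
        prod *= (1 + 1 / n)**z / (1 + z / n)
    return prod

# z! = Gamma(z + 1); the product converges slowly (error roughly O(1/terms)).
approx = euler_product_factorial(0.5)
exact = math.gamma(1.5)   # 0.5! = sqrt(pi)/2 ≈ 0.8862
print(approx, exact)
```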
8.
Boundary layer
–
In the Earth's atmosphere, the atmospheric boundary layer is the air layer near the ground affected by diurnal heat, moisture, or momentum transfer to or from the surface. On an aircraft wing, the boundary layer is the part of the flow close to the wing surface. Boundary layers can be classified according to their structure. When a fluid rotates and viscous forces are balanced by the Coriolis effect rather than by convective inertia, an Ekman layer forms; in the theory of heat transfer, a thermal boundary layer occurs. A surface can have multiple types of boundary layer simultaneously. The viscous nature of airflow reduces the local velocities on a surface and is responsible for skin friction; the layer of air over the surface that is slowed down or stopped by viscosity is the boundary layer. There are two different types of boundary layer flow: laminar and turbulent. The laminar boundary layer is a very smooth flow, while the turbulent boundary layer contains swirls or eddies, and laminar flow creates less skin friction drag than turbulent flow. Boundary layer flow over a wing surface begins as a smooth laminar flow, and as the flow continues back from the leading edge, the laminar boundary layer increases in thickness. At some distance back from the leading edge, the smooth laminar flow breaks down and transitions to turbulent flow; the low-energy laminar flow, however, tends to break down more suddenly than the turbulent layer. The aerodynamic boundary layer was first defined by Ludwig Prandtl in a paper presented on August 12, 1904 at the third International Congress of Mathematicians in Heidelberg. Dividing the flow field into a thin boundary layer and an essentially inviscid outer region allows a closed-form solution for the flow in both areas, a significant simplification of the full Navier–Stokes equations. The majority of the heat transfer to and from a body also takes place within the boundary layer. The pressure distribution in the direction normal to the surface remains constant throughout the boundary layer.
The thickness of the velocity boundary layer is normally defined as the distance from the solid body at which the viscous flow velocity is 99% of the freestream velocity. The displacement thickness is an alternative definition, stating that the boundary layer represents a deficit in mass flow compared to an inviscid flow with slip at the wall; it is the distance by which the wall would have to be displaced in the inviscid case to give the same total mass flow as the viscous case. The no-slip condition requires that the flow velocity at the surface of a solid object be zero; the flow velocity then increases rapidly within the boundary layer, governed by the boundary layer equations.
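For a laminar boundary layer on a flat plate, the classical Blasius solution gives the commonly quoted approximation δ ≈ 5.0 x / √Re_x for the 99% thickness, where Re_x = U x / ν is the Reynolds number based on the distance x from the leading edge. A hedged sketch in Python (the numeric inputs are illustrative choices, not from the text above):

```python
import math

def blasius_delta99(x, U, nu):
    """Approximate 99% laminar boundary-layer thickness on a flat plate:
    delta ≈ 5.0 * x / sqrt(Re_x), with Re_x = U * x / nu (Blasius solution)."""
    re_x = U * x / nu
    return 5.0 * x / math.sqrt(re_x)

# Illustrative values: air (nu ≈ 1.5e-5 m^2/s) at U = 10 m/s, x = 0.5 m.
x, U, nu = 0.5, 10.0, 1.5e-5
re_x = U * x / nu                 # ≈ 3.3e5, still plausibly laminar
delta = blasius_delta99(x, U, nu)
print(f"Re_x = {re_x:.3g}, delta ≈ {delta * 1000:.2f} mm")  # a few millimetres
```

The √Re_x scaling is why the layer thickens toward the trailing edge, as described above, and why it stays thin relative to the chord at flight Reynolds numbers.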
9.
Probability theory
–
Probability theory is the branch of mathematics concerned with probability, the analysis of random phenomena. Although it is not possible to predict precisely the results of random events, patterns emerge over many repetitions; two representative mathematical results describing such patterns are the law of large numbers and the central limit theorem. As the mathematical foundation for statistics, probability theory is essential to human activities that involve quantitative analysis of large sets of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, and a great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. Christiaan Huygens published a book on the subject in 1657, and in the 19th century Pierre-Simon Laplace completed what is today considered the classic interpretation. Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial; eventually, analytical considerations compelled the incorporation of continuous variables into the theory. This culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, with measure theory. This became the mostly undisputed axiomatic basis for modern probability theory. Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately; the more mathematically advanced, measure theory-based treatment of probability covers the discrete case, the continuous case, a mix of the two, and more. Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment, and the power set of the sample space is formed by considering all different collections of possible results. For example, rolling an honest die produces one of six possible results, and one collection of possible results corresponds to getting an odd number. Thus, the subset {1, 3, 5} is an element of the power set of the sample space of die rolls.
In this case, {1, 3, 5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, that event is said to have occurred. Probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results be assigned a value of one; for mutually exclusive events, probabilities add, so the probability that any one of the events {1, 6}, {3}, or {2, 4} will occur is 5/6. This is the same as saying that the probability of the event {1, 2, 3, 4, 6} is 5/6; this event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1, 2, 3, 4, 5, 6} has a probability of 1, that is, absolute certainty. Discrete probability theory deals with events that occur in countable sample spaces. Modern definition: the modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by Ω.
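The die example above can be written out directly. A small Python sketch using exact fractions, treating events as subsets of the sample space Ω = {1, …, 6} and assigning each event the probability |E|/6:

```python
from fractions import Fraction

omega = frozenset({1, 2, 3, 4, 5, 6})   # sample space of a fair die

def prob(event):
    """Probability of an event (a subset of omega) under the uniform assignment."""
    assert event <= omega
    return Fraction(len(event), len(omega))

odd = frozenset({1, 3, 5})              # "the die falls on some odd number"
not_five = frozenset({1, 2, 3, 4, 6})

print(prob(odd))        # → 1/2
print(prob(not_five))   # → 5/6
print(prob(omega))      # → 1 (absolute certainty)

# Mutually exclusive events add: P({1,6}) + P({3}) + P({2,4}) = 5/6
parts = [frozenset({1, 6}), frozenset({3}), frozenset({2, 4})]
assert sum(prob(e) for e in parts) == prob(not_five)
```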
10.
Factorial
–
In mathematics, the factorial of a non-negative integer n, denoted by n!, is the product of all positive integers less than or equal to n. For example, 5! = 5 × 4 × 3 × 2 × 1 = 120; the value of 0! is 1, according to the convention for an empty product. The factorial operation is encountered in many areas of mathematics, notably in combinatorics and algebra. Its most basic occurrence is the fact that there are n! ways to arrange n distinct objects into a sequence; this fact was known at least as early as the 12th century. Fabian Stedman, in 1677, described factorials as applied to change ringing; after describing a recursive approach, Stedman gives a statement of a factorial beginning "Now the nature of these methods is such...". The factorial function is formally defined by the product n! = ∏_{k=1}^{n} k, or by the recurrence relation n! = 1 if n = 0 and n! = n × (n − 1)! if n > 0. All of the above definitions incorporate the instance 0! = 1, in the first case by the convention that the product of no numbers at all is 1. This is convenient because: there is exactly one permutation of zero objects; the recurrence n! = n × (n − 1)!, valid for n > 0, extends to n = 0; it allows for the expression of many formulae, such as the exponential function, as a power series; and it makes many identities in combinatorics valid for all applicable sizes. For example, the number of ways to choose 0 elements from the empty set is C(0, 0) = 0!/(0! 0!) = 1, and more generally, the number of ways to choose all n elements among a set of n is C(n, n) = n!/(n! 0!) = 1. The factorial function can also be defined for non-integer values using more advanced mathematics, detailed in the section below; this more generalized definition is used by advanced calculators and mathematical software such as Maple or Mathematica. Although the factorial function has its roots in combinatorics, formulas involving factorials occur in many areas of mathematics. There are n!
different ways of arranging n distinct objects into a sequence, and factorials often appear in the denominator of a formula to account for the fact that ordering is to be ignored. A classical example is counting k-combinations (subsets of k elements) from a set with n elements. One can obtain such a combination by choosing a k-permutation, successively selecting and removing an element of the set, k times, for a total of n(n − 1)(n − 2) ⋯ (n − k + 1) possibilities (the falling factorial, written n^k̲). This, however, produces the k-combinations in a particular order that one wishes to ignore; since each k-combination is obtained in k! different ways, the number of k-combinations is n^k̲ / k!. This number is known as the binomial coefficient, because it is also the coefficient of X^k in (1 + X)^n.
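The counting argument above translates directly into code. A small Python sketch computing the falling factorial and the binomial coefficient n^k̲ / k!, checked against the standard library's math.comb:

```python
import math

def falling_factorial(n, k):
    """n * (n-1) * ... * (n-k+1): the number of k-permutations of n objects."""
    result = 1
    for i in range(k):
        result *= n - i
    return result

def binomial(n, k):
    """Each k-combination is counted k! times among the k-permutations,
    so divide: C(n, k) = falling_factorial(n, k) / k!."""
    return falling_factorial(n, k) // math.factorial(k)

print(falling_factorial(5, 2))   # → 20 (ordered pairs from 5 objects)
print(binomial(5, 2))            # → 10 (unordered pairs)
assert all(binomial(n, k) == math.comb(n, k)
           for n in range(10) for k in range(n + 1))
```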