1.
Particle physics
–
Particle physics is the branch of physics that studies the nature of the particles that constitute matter and radiation. By our current understanding, these particles are excitations of quantum fields, which also govern their interactions. In more technical terms, particles are described by quantum state vectors in a Hilbert space, a formalism treated in quantum field theory. All particles and their interactions observed to date can be described almost entirely by a quantum field theory called the Standard Model. The Standard Model, as formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature. In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model. The idea that all matter is composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. Throughout the 1950s and 1960s, a bewildering variety of particles were found in collisions of particles from increasingly high-energy beams; this collection was referred to informally as the particle zoo. The current classification of all elementary particles is explained by the Standard Model, which describes the strong, weak, and electromagnetic fundamental interactions. The gauge bosons that mediate these interactions are the gluons, the W−, W+ and Z bosons, and the photon.
The Standard Model also contains 24 fundamental fermions (12 particles and their antiparticles), which are the constituents of all matter. Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson. Early in the morning on 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson. The world's major particle physics laboratories include Brookhaven National Laboratory, whose main facility is the Relativistic Heavy Ion Collider, which collides heavy ions such as gold ions; it is the world's first heavy ion collider and the world's only polarized proton collider. The Budker Institute of Nuclear Physics operates the electron–positron collider VEPP-2000, in operation since 2006. CERN's main project is now the Large Hadron Collider, which had its first beam circulation on 10 September 2008 and is now the world's most energetic collider of protons; it also became the most energetic collider of heavy ions after it began colliding lead ions. DESY's main facility was the Hadron Elektron Ring Anlage (HERA), which collided electrons and positrons with protons.
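The count of 61 elementary particles can be reproduced with a short bookkeeping sketch, using one common convention in which color states and antiparticles are counted separately:

```python
# Counting the 61 elementary particles of the Standard Model
# (convention: distinct color states and antiparticles counted separately).
quarks = 6 * 3 * 2        # 6 flavors x 3 colors x particle/antiparticle = 36
leptons = 6 * 2           # 6 flavors (e, mu, tau + 3 neutrinos) x particle/antiparticle = 12
gluons = 8                # 8 color states of the gluon
electroweak = 3 + 1       # W+, W-, Z bosons, and the photon
higgs = 1                 # the Higgs boson

total = quarks + leptons + gluons + electroweak + higgs
print(total)  # 61
```

Other conventions (for example, not counting antiparticles separately) give different totals, which is why the figure is always tied to how the counting is done.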

2.
Supersymmetry
–
Each particle from one group is associated with a particle from the other, known as its superpartner, whose spin differs by a half-integer. In a theory with perfectly unbroken supersymmetry, each pair of superpartners would share the same mass; for example, there would be a selectron, a bosonic version of the electron with the same mass as the electron, which would be easy to find in a laboratory. Thus, since no superpartners have been observed, if supersymmetry exists it must be a broken symmetry, so that superpartners may differ in mass. Spontaneously broken supersymmetry could solve many problems in particle physics, including the hierarchy problem. The simplest realization of spontaneously broken supersymmetry, the so-called Minimal Supersymmetric Standard Model, is one of the best-studied candidates for physics beyond the Standard Model. There is only indirect evidence and motivation for the existence of supersymmetry. Direct confirmation would entail production of superpartners in collider experiments, such as the Large Hadron Collider. The first run of the LHC found no evidence for supersymmetry and thus set limits on superpartner masses in supersymmetric theories. While some remain enthusiastic about supersymmetry, this first run led some physicists to explore other ideas; the LHC resumed its search for supersymmetry and other new physics in its second run. There are numerous phenomenological motivations for supersymmetry close to the electroweak scale. Supersymmetry close to the electroweak scale ameliorates the hierarchy problem that afflicts the Standard Model: the electroweak scale receives enormous Planck-scale quantum corrections, and the observed hierarchy between the electroweak scale and the Planck scale must be achieved with extraordinary fine-tuning. In a supersymmetric theory, on the other hand, Planck-scale quantum corrections cancel between partners and superpartners.
The hierarchy between the electroweak scale and the Planck scale is then achieved in a natural manner, without miraculous fine-tuning. The idea that the gauge symmetry groups unify at high energy is called grand unification theory. In the Standard Model, however, the weak, strong, and electromagnetic couplings fail to unify at high energy. In a supersymmetric theory, the running of the gauge couplings is modified, and precise high-energy unification of the gauge couplings is achieved. The modified running also provides a mechanism for radiative electroweak symmetry breaking. TeV-scale supersymmetry typically provides a dark matter particle at a mass scale consistent with thermal relic abundance calculations. Supersymmetry is also motivated by solutions to several theoretical problems, and for generally providing many desirable mathematical properties. Supersymmetric quantum field theory is often much easier to analyze, as many more problems become exactly solvable. When supersymmetry is imposed as a local symmetry, Einstein's theory of general relativity is included automatically.
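The modified running of the couplings can be illustrated with a one-loop sketch. The beta coefficients below are the standard one-loop values for the Standard Model and the MSSM; the inverse couplings at the Z mass are approximate input values assumed for illustration:

```python
import math

# One-loop running: 1/alpha_i(mu) = 1/alpha_i(MZ) - (b_i / (2*pi)) * ln(mu / MZ)
MZ = 91.19                         # Z boson mass in GeV
inv_alpha_MZ = [59.0, 29.6, 8.5]   # approximate 1/alpha_i at MZ (GUT-normalized U(1))
b_SM = [41 / 10, -19 / 6, -7]      # one-loop beta coefficients, Standard Model
b_MSSM = [33 / 5, 1, -3]           # one-loop beta coefficients, MSSM

def run(inv_alpha, b, mu):
    """Run the inverse couplings from MZ up to scale mu (GeV)."""
    t = math.log(mu / MZ)
    return [ia - bi * t / (2 * math.pi) for ia, bi in zip(inv_alpha, b)]

mu_GUT = 2e16  # GeV, the conventional grand unification scale
sm = run(inv_alpha_MZ, b_SM, mu_GUT)
mssm = run(inv_alpha_MZ, b_MSSM, mu_GUT)
print("SM:  ", [round(x, 1) for x in sm])    # couplings remain well separated
print("MSSM:", [round(x, 1) for x in mssm])  # all three nearly meet (~24.3)
```

The MSSM values nearly coincide at the unification scale, while the Standard Model values remain spread out, which is the usual quantitative statement behind the unification claim.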

3.
Spontaneous symmetry breaking
–
Spontaneous symmetry breaking is a spontaneous process by which a physical system in a symmetrical state ends up in an asymmetrical state. In particular, it can describe systems where the equations of motion or the Lagrangian obey symmetries, but the lowest-energy solutions do not exhibit that same symmetry. By definition, spontaneous symmetry breaking requires the existence of a symmetrical probability distribution: any pair of outcomes has the same probability. In other words, the underlying laws are invariant under a symmetry transformation, but the system as a whole changes under such transformations. In explicit symmetry breaking, by contrast, the probabilities of a pair of outcomes can be different. Phases of matter, such as crystals, magnets, and conventional superconductors, as well as simple phase transitions, can be described by spontaneous symmetry breaking. Notable exceptions include topological phases of matter like the fractional quantum Hall effect. Consider a symmetrical upward dome with a trough circling the bottom. If a ball is put at the peak of the dome, the system is symmetric with respect to rotations about the central axis. But the ball may spontaneously break this symmetry by rolling down the dome into the trough; afterward, the ball has come to rest at some fixed point on the perimeter. The dome and the ball retain their individual symmetry, but the system does not. In the simplest idealized relativistic model, spontaneously broken symmetry is summarized through an illustrative scalar field theory. The relevant Lagrangian, which dictates how a system behaves, can be split into kinetic and potential terms. An example of such a potential, due to Jeffrey Goldstone, has an infinite number of possible minima given by φ = v·e^{iθ} for any real θ between 0 and 2π. The system also has an unstable vacuum state corresponding to Φ = 0. This state has a U(1) symmetry; however, once the system falls into a specific stable vacuum state, this symmetry will appear to be lost, or spontaneously broken.
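A minimal numerical sketch of the Goldstone (Mexican-hat) potential, with illustrative parameter values, confirms that the minima form a circle of degenerate vacua at a nonzero field magnitude:

```python
import math

# Mexican-hat potential V(phi) = -mu^2 |phi|^2 + lam |phi|^4 for a complex
# scalar field phi. The parameter values are purely illustrative.
mu2, lam = 1.0, 0.25

def V(r):
    """Potential as a function of r = |phi|; it is independent of the phase theta."""
    return -mu2 * r**2 + lam * r**4

# Analytic minimum: dV/dr = 0 gives r^2 = mu^2 / (2 * lam).
r_min = math.sqrt(mu2 / (2 * lam))

# Numeric scan over r confirms the same minimum.
rs = [i * 0.001 for i in range(1, 5000)]
r_scan = min(rs, key=V)
print(round(r_min, 3), round(r_scan, 3))  # both ~1.414
```

Since V depends only on |φ|, every phase θ gives the same minimum energy: the vacua lie on a circle, and picking one of them (one value of θ) is exactly the spontaneous breaking of the U(1) symmetry described above.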
For ferromagnetic materials, the underlying laws are invariant under spatial rotations. Here, the order parameter is the magnetization, which measures the magnetic dipole density. Above the Curie temperature, the order parameter is zero, which is spatially invariant. Below the Curie temperature, however, the magnetization acquires a constant nonvanishing value pointing in some direction: the residual rotational symmetries which leave the orientation of this vector invariant remain unbroken, unlike the other rotations, which do not and are thus spontaneously broken. The laws describing a solid are invariant under the full Euclidean group, but the solid itself spontaneously breaks this group; the displacement and the orientation are the order parameters. Similar comments can be made about the cosmic microwave background.

4.
Supergravity
–
In theoretical physics, supergravity is a field theory that combines the principles of supersymmetry and general relativity. Together, these imply that, in supergravity, supersymmetry is a local symmetry. Since the generators of supersymmetry are combined with the Poincaré group to form a super-Poincaré algebra, it can be seen that supergravity follows naturally from local supersymmetry. Like any field theory of gravity, a supergravity theory contains a field whose quantum is the graviton. Supersymmetry requires the graviton field to have a superpartner; this field has spin 3/2, and its quantum is the gravitino. The number of gravitino fields is equal to the number of supersymmetries. The first theory of local supersymmetry was proposed in 1975 by Dick Arnowitt and Pran Nath. Supergravity theories with N > 1 are usually referred to as extended supergravity. Some supergravity theories were shown to be related to certain higher-dimensional supergravity theories via dimensional reduction. In these classes of models, collectively now known as minimal supergravity Grand Unification Theories (mSUGRA GUTs), gravity mediates the breaking of SUSY through the existence of a hidden sector. mSUGRA naturally generates the soft SUSY breaking terms, which are a consequence of the super-Higgs effect; radiative breaking of electroweak symmetry through renormalization group equations follows as an immediate consequence. One of these supergravities, the 11-dimensional theory, generated considerable excitement as the first potential candidate for the theory of everything. Certain of its problems are avoided in 12 dimensions if two of these dimensions are timelike, as has often been emphasized by Itzhak Bars. Today many techniques exist to embed the Standard Model gauge group in supergravity in any number of dimensions. For example, in the mid and late 1980s, gauge symmetries could be obtained in type I string theory.
In type II string theory gauge symmetries could also be obtained by compactifying on certain Calabi–Yau manifolds; today one may also use D-branes to engineer gauge symmetries. In 1978, Eugène Cremmer, Bernard Julia and Joël Scherk found the action for an 11-dimensional supergravity theory. This remains today the only known classical 11-dimensional theory with local supersymmetry. Other 11-dimensional theories are known that are quantum-mechanically inequivalent to the CJS theory, but classically equivalent. For example, in the mid-1980s Bernard de Wit and Hermann Nicolai found an alternate theory in D = 11 Supergravity with Local SU(8) Invariance. In 1980, Peter Freund and M. A. Rubin showed that compactification from 11 dimensions preserving all the SUSY generators could occur in two ways, leaving only 4 or 7 macroscopic dimensions; unfortunately, the noncompact dimensions have to form an anti-de Sitter space. Many of the details of the theory were fleshed out by Peter van Nieuwenhuizen, Sergio Ferrara and others. The initial excitement over 11-dimensional supergravity soon waned, as various failings were discovered, and attempts to repair the model failed as well. Problems included: the compact manifolds which were known at the time and which contained the Standard Model were not compatible with supersymmetry, and could not hold quarks or leptons.

5.
Higgs mechanism
–
In the Standard Model of particle physics, the Higgs mechanism is essential to explain the generation of mass for gauge bosons. Without the Higgs mechanism, all bosons would be massless, but measurements show that the W+, W−, and Z bosons actually have relatively large masses; the Higgs field resolves this conundrum. The simplest description of the mechanism adds a quantum field, the Higgs field, that permeates all space; below some extremely high temperature, the field causes spontaneous symmetry breaking during interactions. The breaking of symmetry triggers the Higgs mechanism, causing the bosons it interacts with to have mass. In the Standard Model, the phrase Higgs mechanism refers specifically to the generation of masses for the W± and Z weak gauge bosons through electroweak symmetry breaking. The Higgs mechanism was incorporated into modern particle physics by Steven Weinberg and Abdus Salam. In the Standard Model, at temperatures high enough that electroweak symmetry is unbroken, all elementary particles are massless. At a critical temperature, the Higgs field becomes tachyonic, and the symmetry is spontaneously broken by condensation. Fermions, such as the leptons and quarks in the Standard Model, can also acquire mass as a result of their interaction with the Higgs field. In the Standard Model, the Higgs field is an SU(2) doublet; through the interactions specified by its potential, it induces spontaneous breaking of three out of the four generators of the gauge group U(2). This is often written as SU(2) × U(1), because the phase factor also acts on other fields, in particular the quarks. Three out of its four components would ordinarily amount to Goldstone bosons if they were not coupled to gauge fields. The gauge group of the electroweak part of the Standard Model is SU(2) × U(1). The group SU(2) is the group of all 2-by-2 unitary matrices with unit determinant. Rotating the coordinates so that the second basis vector points in the direction of the Higgs boson makes the vacuum expectation value of H the spinor (0, v).
The generators for rotations about the x, y, and z axes are given by half the Pauli matrices σx, σy, and σz. While the Tx and Ty generators mix up the top and bottom components of the spinor, the Tz rotations only multiply each by opposite phases. This phase can be undone by a U(1) rotation of angle 1/2θ; consequently, under both an SU(2) Tz-rotation and a U(1) rotation by an amount 1/2θ, the vacuum is invariant. This combination of generators preserves the vacuum and defines the unbroken gauge group in the Standard Model, namely electric charge; the part of the gauge field in this direction stays massless and amounts to the physical photon. Despite the introduction of spontaneous symmetry breaking, the mass terms preclude chiral gauge invariance; for these fields the mass terms should always be replaced by a gauge-invariant Higgs mechanism. The quantities γμ are the Dirac matrices, and Gψ is the already-mentioned Yukawa coupling parameter; the mass generation follows the same principle as above, namely from the existence of a finite vacuum expectation value |⟨ϕ⟩|. Again, this is crucial for the existence of the property mass. Spontaneous symmetry breaking offered a framework to introduce bosons into relativistic quantum field theories. However, according to Goldstone's theorem, these bosons should be massless; the only observed particles which could be approximately interpreted as Goldstone bosons were the pions, which Yoichiro Nambu related to chiral symmetry breaking.
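The statement that one combination of generators leaves the vacuum invariant while three are broken can be checked numerically. The vacuum spinor (0, v) and the hypercharge convention (Y = 1 for the Higgs doublet, so the U(1) generator acts as Y/2) are assumptions chosen to match the conventional presentation:

```python
import numpy as np

# Pauli matrices; SU(2) generators are T_a = sigma_a / 2. For the Higgs
# doublet with hypercharge Y = 1, the U(1) generator acts as Y/2 = 1/2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
T1, T2, T3 = sx / 2, sy / 2, sz / 2
Yhalf = np.eye(2) / 2              # Y/2 acting on the doublet

v = 246.0                          # electroweak vev in GeV (illustrative value)
vac = np.array([0.0, v], dtype=complex)  # vacuum along the second basis vector

# The electric-charge combination Q = T3 + Y/2 annihilates the vacuum...
Q = T3 + Yhalf
print(np.allclose(Q @ vac, 0))     # True: unbroken generator -> massless photon

# ...while three independent combinations do not: these are the broken ones.
broken = [T1, T2, T3 - Yhalf]
print([not np.allclose(g @ vac, 0) for g in broken])  # [True, True, True]
```

Concretely, Q = diag(1, 0) on this doublet, so it kills (0, v); the three broken generators correspond to the three would-be Goldstone modes eaten by the W± and Z.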

6.
Stochastic differential equation
–
A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is itself a stochastic process. SDEs are used to model various phenomena such as fluctuating stock prices or physical systems subject to thermal fluctuations. Typically, SDEs contain a variable which represents random white noise, calculated as the derivative of Brownian motion or the Wiener process; however, other types of random behaviour are possible, such as jump processes. Early work on SDEs was done to describe Brownian motion in Einstein's famous paper; however, one of the earlier works related to Brownian motion is credited to Bachelier in his thesis Theory of Speculation. This work was followed upon by Langevin, and later Itô and Stratonovich put SDEs on more solid mathematical footing. In physical science, SDEs are usually written as Langevin equations; these are sometimes ambiguously called the Langevin equation even though there are many possible forms. Those forms consist of an ordinary differential equation containing a deterministic part and an additional random white-noise term. A second form involves the Smoluchowski equation or, more generally, the Fokker–Planck equation; these are partial differential equations which describe the time evolution of probability distribution functions. The third form is the Itô stochastic differential equation, which is most frequently used in mathematics; this is similar to the Langevin form, but it is usually written in differential notation. SDEs come in two varieties, corresponding to two versions of stochastic calculus. Brownian motion, or the Wiener process, was discovered to be exceptionally complex mathematically: the Wiener process is almost surely nowhere differentiable, and thus it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Itô stochastic calculus and the Stratonovich stochastic calculus.
Each of the two has advantages and disadvantages, and newcomers are often confused about whether one is more appropriate than the other in a given situation. Guidelines exist, and conveniently one can readily convert an Itô SDE to an equivalent Stratonovich SDE and back again. Still, one must be careful which calculus to use when the SDE is initially written down. Numerical solution of stochastic differential equations, and especially stochastic partial differential equations, is a relatively young field. Almost all algorithms that are used for the solution of ordinary differential equations will work very poorly for SDEs. A textbook describing many different algorithms is Kloeden & Platen; methods include the Euler–Maruyama method, the Milstein method, and stochastic Runge–Kutta methods. In physics, SDEs are typically written in the Langevin form. This form is usually usable because there are standard techniques for transforming higher-order equations into several coupled first-order equations by introducing new unknowns. If the g_i are constants, the system is said to be subject to additive noise; otherwise it is said to be subject to multiplicative noise. The latter term is somewhat misleading, as it has come to mean the general case even though it appears to imply the limited case in which g ∝ x.
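As a sketch of the Euler–Maruyama method applied to a Langevin-type SDE, consider an Ornstein–Uhlenbeck process; the parameter values below are purely illustrative:

```python
import numpy as np

# Euler-Maruyama for the Ito SDE dX = a(X) dt + b(X) dW, here the
# Ornstein-Uhlenbeck process dX = -theta * X dt + sigma dW.
rng = np.random.default_rng(0)
theta, sigma = 1.0, 0.3
T, n = 10.0, 10_000
dt = T / n

x = np.empty(n + 1)
x[0] = 2.0                                   # initial condition
for i in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))        # Wiener increment ~ N(0, dt)
    x[i + 1] = x[i] + (-theta * x[i]) * dt + sigma * dW

# The process relaxes toward zero; the stationary variance should be of
# order sigma^2 / (2 * theta) = 0.045 here.
print(round(x[-1], 3))
print(round(float(np.var(x[n // 2:])), 3))
```

Because the noise term here is a constant (sigma), this is a case of additive noise; replacing sigma with a function of x would give multiplicative noise, where the Itô/Stratonovich distinction actually matters.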

7.
Chaos theory
–
Chaos theory is a branch of mathematics focused on the behavior of dynamical systems that are highly sensitive to initial conditions. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: Chaos: When the present determines the future, but the approximate present does not approximately determine the future. Chaotic behavior exists in many natural systems, such as weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through analysis of a chaotic mathematical model, or through analytical techniques such as recurrence plots. Chaos theory has applications in several disciplines, including meteorology, sociology, physics, environmental science, computer science, engineering, economics, biology, and ecology. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory, and self-assembly processes. Chaos theory concerns deterministic systems whose behavior can in principle be predicted; chaotic systems are predictable for a while and then appear to become random. The time scale over which prediction is possible depends on the dynamics of the system and is called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, that a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random. In common usage, chaos means a state of disorder; in chaos theory, however, the term is defined more precisely.
Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic it must be sensitive to initial conditions, be topologically mixing, and have dense periodic orbits. In some cases, while it is often the most practically significant property, sensitivity to initial conditions need not be stated in the definition: if attention is restricted to intervals, the second property implies the other two. An alternative, and in general weaker, definition of chaos uses only the first two properties in the above list. Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points with significantly different future paths. Thus, an arbitrarily small change, or perturbation, of the current trajectory may lead to significantly different future behavior. The term butterfly effect derives from a 1972 talk by Edward Lorenz entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas? The flapping wing represents a small change in the initial condition of the system, which causes a chain of events leading to large-scale phenomena.
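The divergence of nearby trajectories, and the Lyapunov exponent that quantifies it, can be illustrated with the logistic map, a standard chaotic system (parameter and starting values chosen purely for illustration):

```python
import math

# The logistic map x -> r * x * (1 - x) with r = 4 is a standard chaotic map.
r = 4.0
f = lambda x: r * x * (1 - x)

# Sensitivity to initial conditions: two starting points differing by 1e-9
# end up on completely different trajectories within a few dozen iterations.
a, b = 0.2, 0.2 + 1e-9
max_gap = 0.0
for _ in range(60):
    a, b = f(a), f(b)
    max_gap = max(max_gap, abs(a - b))
print(max_gap > 0.01)  # True: the tiny perturbation has been amplified

# The Lyapunov exponent is the average of ln|f'(x)| along an orbit; for
# r = 4 its exact value is ln 2, so the gap roughly doubles each step.
x, total, n = 0.3, 0.0, 10_000
for _ in range(n):
    total += math.log(abs(r * (1 - 2 * x)))
    x = f(x)
lyap = total / n
print(round(lyap, 3), round(math.log(2), 3))  # numerically close
```

The Lyapunov time is the reciprocal of this exponent: here about 1/ln 2 ≈ 1.44 iterations, after which the initial uncertainty has grown by a factor of e.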

8.
Turbulence
–
Turbulence or turbulent flow is a flow regime in fluid dynamics characterized by chaotic changes in pressure and flow velocity. It is in contrast to a laminar flow regime, which occurs when a fluid flows in parallel layers. Turbulence is caused by excessive kinetic energy in parts of a fluid flow, which overcomes the damping effect of the fluid's viscosity. For this reason turbulence is easier to create in low-viscosity fluids. In general terms, in turbulent flow, unsteady vortices of many sizes appear and interact with each other; consequently, drag due to friction effects increases. This increases the energy needed to pump fluid through a pipe; however, the effect can also be exploited by devices such as aerodynamic spoilers on aircraft, which deliberately spoil the laminar flow to increase drag and reduce lift. The onset of turbulence can be predicted by a dimensionless quantity called the Reynolds number. However, turbulence has long resisted detailed physical analysis, and the interactions within turbulence create a very complex situation. Richard Feynman described turbulence as the most important unsolved problem of classical physics. Smoke rising from a cigarette is mostly turbulent flow; however, for the first few centimeters the flow is laminar. The smoke plume becomes turbulent as its Reynolds number increases, due to its flow velocity and characteristic length increasing. If a golf ball were smooth, the boundary-layer flow over the front of the sphere would be laminar at typical conditions; however, the boundary layer would separate early, as the pressure gradient switched from favorable to unfavorable. To prevent this from happening, the surface is dimpled to perturb the boundary layer. This results in higher skin friction, but it moves the point of boundary-layer separation further along, reducing form drag. Other examples of turbulence include the flow conditions in industrial equipment and machines; the external flow over all kinds of vehicles, such as cars, airplanes, and ships; the motions of matter in stellar atmospheres; and a jet exhausting from a nozzle into a quiescent fluid.
As the jet flow emerges into this external fluid, shear layers originating at the lips of the nozzle are created. These layers separate the fast-moving jet from the external fluid, and at a certain critical Reynolds number they become unstable and break down to turbulence. Biologically generated turbulence resulting from swimming animals affects ocean mixing, and snow fences work by inducing turbulence in the wind, forcing it to drop much of its snow load near the fence.
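The Reynolds number mentioned above is simple to compute; the fluid properties below are illustrative values for water flowing in a pipe:

```python
# Reynolds number Re = rho * v * L / mu: the ratio of inertial to viscous
# forces. Illustrative values for water in a pipe.
rho = 998.0    # density of water, kg/m^3
mu = 1.0e-3    # dynamic viscosity of water, Pa*s
v = 2.0        # mean flow velocity, m/s
L = 0.05       # pipe diameter (characteristic length), m

Re = rho * v * L / mu
print(f"Re = {Re:.0f}")  # ~1e5, far above the ~2300 pipe-flow threshold
```

For pipe flow, transition to turbulence is conventionally taken to begin around Re ≈ 2300, so this flow would be fully turbulent; halving the velocity or the diameter scales Re down proportionally.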

9.
Pink noise
–
Pink noise or 1⁄f noise is a signal or process with a frequency spectrum such that the power spectral density is inversely proportional to the frequency of the signal. In pink noise, each octave carries an equal amount of noise energy. The name arises from the pink appearance of visible light with this power spectrum. Pink-like noises occur widely in nature and are a source of considerable interest in many fields. The distinction between noises with α near 1 and those with a broad range of α approximately corresponds to a much more basic distinction: the former generally come from systems in quasi-equilibrium, as discussed below, while the latter generally correspond to a wide range of non-equilibrium driven dynamical systems. The term flicker noise is sometimes used to refer to pink noise. Since there is equal energy in all octaves of frequency, in terms of power at a constant bandwidth pink noise falls off at 3 dB per octave. At high enough frequencies pink noise is never dominant; nevertheless, humans still differentiate between white noise and pink noise with ease. Systems that do not have a flat frequency response can be equalized by creating an inverse filter using a graphic equalizer. Because pink noise has a tendency to occur in natural physical systems, it is often useful in audio production: it can be processed, filtered, and/or effects can be added to produce desired sounds. Various crest factors of pink noise can be used in simulations of various levels of dynamic range compression in music signals; on some digital pink-noise generators the crest factor can be specified. The power spectrum of pink noise is 1/f only for one-dimensional signals. For two-dimensional signals the power spectrum is reciprocal to f². For higher-dimensional signals it is still true that each octave carries an equal amount of noise power.
The frequency spectrum of two-dimensional signals, for instance, is also two-dimensional. In the past quarter century, pink noise has been discovered in the statistical fluctuations of an extraordinarily diverse number of physical and biological systems. Examples of its occurrence include fluctuations in tide and river heights, quasar light emissions, heartbeat, and firings of single neurons. An accessible introduction to the significance of pink noise is the one given by Martin Gardner in his Scientific American column Mathematical Games. In this column, Gardner asked for the sense in which music imitates nature: sounds in nature are not musical in that they tend to be either too repetitive or too chaotic. The answer to this question was given in a statistical sense by Voss and Clarke.
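One common way to generate pink noise, sketched below with NumPy and illustrative parameters, is spectral shaping: scale the Fourier amplitudes of white noise by 1/√f, so that the power spectral density falls off as 1/f:

```python
import numpy as np

# Pink noise by spectral shaping: start from white noise and scale each
# Fourier amplitude by 1/sqrt(f), giving power spectral density ~ 1/f.
rng = np.random.default_rng(42)
n = 2**16
white = rng.standard_normal(n)

spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n)               # normalized frequencies, 0..0.5
scale = np.ones_like(freqs)
scale[1:] = 1.0 / np.sqrt(freqs[1:])     # leave the DC bin untouched
pink = np.fft.irfft(spectrum * scale, n)

# Check the defining property: roughly equal power in each octave.
psd = np.abs(np.fft.rfft(pink))**2
octave1 = psd[(freqs > 0.01) & (freqs <= 0.02)].sum()
octave2 = psd[(freqs > 0.02) & (freqs <= 0.04)].sum()
print(round(float(octave1 / octave2), 2))  # near 1: equal energy per octave
```

The same check on the unshaped white noise would give a ratio near 0.5, since white noise has equal power per hertz, so each successive octave (twice as wide) carries twice the energy.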

10.
Chronology of the universe
–
The chronology of the universe describes the history and future of the universe according to Big Bang cosmology. The metric expansion of space is estimated to have begun 13.8 billion years ago; the time since the Big Bang is also known as cosmic time. The Solar System formed about 4.6 billion years ago, and the chronology extends into the far future, after the cessation of stellar formation, with various scenarios for the ultimate fate of the universe. Little is understood about physics at the temperatures of the earliest epoch, and different hypotheses propose different scenarios; in inflationary cosmology, times before the end of inflation do not follow the traditional Big Bang timeline. Models attempting to formulate the processes of the Planck epoch are speculative proposals for new physics; examples include the Hartle–Hawking initial state, string landscape, string gas cosmology, and the ekpyrotic universe. Between 10−43 second and 10−36 second after the Big Bang, the universe expanded and cooled through a series of transition temperatures; these can be regarded as phase transitions, much like the condensation and freezing phase transitions of ordinary matter. The grand unification epoch began when gravitation separated from the gauge forces; the non-gravitational physics of this epoch would be described by a so-called grand unified theory (GUT). The grand unification epoch ended when the GUT forces further separated into the strong and electroweak forces. While decelerating expansion would magnify deviations from homogeneity, making the universe more chaotic, accelerating expansion would make the universe more homogeneous. Inflation ended when the inflaton field decayed into ordinary particles in a process called reheating; the time of reheating is usually quoted as a time after the Big Bang. According to the simplest inflationary models, inflation ended at a temperature corresponding to roughly 10−32 second after the Big Bang.
As explained above, this does not imply that the inflationary era lasted less than 10−32 second; in fact, in order to explain the observed homogeneity of the universe, the duration must be longer than 10−32 second. In inflationary cosmology, the earliest meaningful time after the Big Bang is the time of the end of inflation; the electroweak epoch thus began when the inflationary epoch ended, at roughly 10−32 seconds. There is currently insufficient observational evidence to explain why the universe contains far more baryons than antibaryons. A candidate explanation for this phenomenon must allow the Sakharov conditions to be satisfied at some time after the end of cosmological inflation. While particle physics suggests asymmetries under which these conditions are met, these asymmetries appear to be too small to account for the observed baryon–antibaryon imbalance. After cosmic inflation ends, the universe is filled with a quark–gluon plasma. From this point onwards the physics of the early universe is better understood, and the energies involved in the quark epoch are directly amenable to experiment. If supersymmetry is a property of our universe, then it must be broken at an energy that is no lower than 1 TeV, the electroweak symmetry scale; the masses of particles and their superpartners would then no longer be equal. Between 10−6 second and 1 second after the Big Bang, the quark–gluon plasma that composes the universe cools until hadrons, including baryons such as protons and neutrons, can form. At approximately 1 second after the Big Bang, neutrinos decouple and begin traveling freely through space. This cosmic neutrino background, while unlikely to ever be observed in detail since the neutrino energies are very low, is analogous to the cosmic microwave background that was emitted much later.

11.
Big Bang
–
The Big Bang theory is the prevailing cosmological model for the universe from the earliest known periods through its subsequent large-scale evolution. If the known laws of physics are extrapolated to the highest density regime, the result is a singularity typically associated with the Big Bang. Detailed measurements of the expansion rate of the universe place this moment at approximately 13.8 billion years ago, which is thus considered the age of the universe. After the initial expansion, the universe cooled sufficiently to allow the formation of subatomic particles, and later simple atoms. Giant clouds of these primordial elements later coalesced through gravity in halos of dark matter, eventually forming the stars and galaxies visible today. Georges Lemaître first noted in 1927 that an expanding universe could be traced back in time to an originating single point. More recently, measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating. The known physical laws of nature can be used to calculate the characteristics of the universe in detail back in time to an initial state of extreme density and temperature. American astronomer Edwin Hubble observed that the distances to faraway galaxies were strongly correlated with their redshifts. Assuming the Copernican principle, the only remaining interpretation is that all observable regions of the universe are receding from all others. Since we know that the distance between galaxies increases today, it must mean that in the past galaxies were closer together; the continuous expansion of the universe implies that the universe was denser and hotter in the past. Large particle accelerators can replicate the conditions that prevailed after the early moments of the universe, resulting in confirmation and refinement of the details of the Big Bang model; however, these accelerators can only probe so far into high-energy regimes.
Consequently, the state of the universe in the earliest instants of the Big Bang expansion is still poorly understood. The first subatomic particles to be formed included protons, neutrons, and electrons. Though simple atomic nuclei formed within the first three minutes after the Big Bang, thousands of years passed before the first electrically neutral atoms formed. The majority of atoms produced by the Big Bang were hydrogen, along with helium and traces of lithium. Giant clouds of these primordial elements later coalesced through gravity to form stars and galaxies. The framework for the Big Bang model relies on Albert Einstein's theory of general relativity and on simplifying assumptions such as the homogeneity and isotropy of space. The governing equations were formulated by Alexander Friedmann, and similar solutions were worked on by Willem de Sitter. Extrapolation of the expansion of the universe backwards in time using general relativity yields an infinite density and temperature at a finite time in the past. This singularity indicates that general relativity is not an adequate description of the laws of physics in this regime. How closely models based on general relativity alone can be used to extrapolate toward the singularity is debated; certainly no closer than the end of the Planck epoch. This primordial singularity is itself sometimes called the Big Bang, but the term can also refer to a more generic early hot, dense phase of the universe. The agreement of independent measurements of this age supports the model that describes in detail the characteristics of the universe. The earliest phases of the Big Bang are subject to much speculation; in the most common models the universe was filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures, and was very rapidly expanding and cooling.
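As a rough back-of-the-envelope check, the Hubble time 1/H0 gives an age estimate close to the measured 13.8 billion years; the value H0 = 70 km/s/Mpc below is assumed for illustration, and the precise age comes from fitting the full cosmological model rather than from this simple estimate:

```python
# Rough age estimate from the Hubble constant: for a roughly constant
# expansion rate, age ~ 1/H0 (the Hubble time).
H0 = 70.0                  # Hubble constant, km/s per Mpc (illustrative)
Mpc_km = 3.0857e19         # kilometers in one megaparsec
seconds_per_year = 3.156e7

H0_per_s = H0 / Mpc_km     # convert H0 to units of 1/s
age_years = 1.0 / H0_per_s / seconds_per_year
print(f"{age_years / 1e9:.1f} billion years")  # ~14.0
```

That the Hubble time lands so close to the measured 13.8 billion years is partly a numerical coincidence of our epoch: in a decelerating or accelerating universe the true age differs from 1/H0 by a model-dependent factor.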