1.
General relativity
–
General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present; the relation is specified by the Einstein field equations, a system of partial differential equations. Some predictions of general relativity differ significantly from those of classical physics; examples of such differences include gravitational time dilation, gravitational lensing, and the gravitational redshift of light. The predictions of general relativity have been confirmed in all observations to date, and although general relativity is not the only relativistic theory of gravity, it is the simplest theory consistent with the experimental data.

Einstein's theory has important astrophysical implications. For example, it implies the existence of black holes (regions of space in which space and time are distorted in such a way that nothing, not even light, can escape) as an end-state for massive stars. The bending of light by gravity can lead to the phenomenon of gravitational lensing. General relativity also predicts the existence of gravitational waves, which have since been observed directly by the LIGO collaboration. In addition, general relativity is the basis of current cosmological models of an expanding universe.

Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework, beginning in 1907 with a thought experiment involving an observer in free fall. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. The Einstein field equations are nonlinear and very difficult to solve.
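The field equations mentioned above can be written compactly; the form below is the standard textbook presentation (not reproduced from this entry), relating spacetime curvature on the left to matter and radiation on the right:

```latex
% Einstein field equations: curvature (Ricci tensor R_{\mu\nu}, scalar R,
% metric g_{\mu\nu}) sourced by the energy-momentum tensor T_{\mu\nu}.
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

The nonlinearity mentioned in the text enters because the curvature terms on the left depend on the metric, which is itself the unknown being solved for.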
Einstein used approximation methods in working out initial predictions of the theory, but as early as 1916 the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse. In 1917, Einstein applied his theory to the universe as a whole. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations, the cosmological constant, to match that observational presumption. By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life.

2.
Anti-de Sitter space
–
In mathematics and physics, n-dimensional anti-de Sitter space is a maximally symmetric Lorentzian manifold with constant negative scalar curvature. Anti-de Sitter space and de Sitter space are named after Willem de Sitter, professor of astronomy at Leiden University, who worked closely with Albert Einstein in the 1920s in Leiden on the spacetime structure of the universe.

Einstein's theory of relativity places space and time on equal footing, so that one considers the geometry of a unified spacetime instead of considering space and time separately. The cases of spacetime of constant curvature are de Sitter space (positive curvature), Minkowski space (zero curvature), and anti-de Sitter space (negative curvature). As such, they are solutions of Einstein's field equations for an empty universe with a positive, zero, or negative cosmological constant, respectively. Anti-de Sitter space generalises to any number of space dimensions.

This non-technical explanation first defines the terms used in the introductory material of this entry. Then it sets forth the underlying idea of a general relativity-like spacetime, and describes how Minkowski space, de Sitter space, and anti-de Sitter space apply to general relativity. Finally, it offers some caveats that describe in general terms how this non-technical explanation fails to capture the full detail of the mathematical concept.

The flat spacetime of special relativity is an example of zero curvature. Negative curvature means curved hyperbolically, like a saddle surface or the Gabriel's Horn surface, similar to that of a trumpet bell; it might be described as the opposite of the curvature of the surface of a sphere. General relativity is a theory of the nature of time, space, and gravity in which gravity is a curvature of space and time that results from the presence of matter or energy. Energy and matter are equivalent, and space and time can be translated into equivalent units based on the speed of light. In general relativity, both small and large objects mutually influence the curvature of spacetime.
The attractive force of gravity created by matter is due to a curvature of spacetime. As a result, in relativity, the familiar Newtonian equation of gravity, F = Gm₁m₂/r², is merely an approximation of the gravity-like effects seen in general relativity. This approximation becomes inaccurate in extreme physical situations; for example, in general relativity, objects in motion have a slightly different gravitational effect than objects at rest. In normal circumstances, gravity bends time so slightly that the differences between Newtonian gravity and general relativity are detectable only with precise instruments.

De Sitter space involves a variation of general relativity in which spacetime is slightly curved in the absence of matter or energy. This is analogous to the relationship between Euclidean geometry and non-Euclidean geometry. An intrinsic curvature of spacetime in the absence of matter or energy is modeled by the cosmological constant in general relativity; this corresponds to the vacuum having an energy density and pressure. This spacetime geometry results in initially parallel timelike geodesics diverging, with spacelike sections having positive curvature. An anti-de Sitter space in general relativity is similar to a de Sitter space, except with the sign of the curvature changed; this corresponds to a negative cosmological constant.
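As a numerical sketch of the Newtonian approximation discussed above (the function name and the example values are illustrative, not taken from this entry):

```python
# Newton's law of universal gravitation, F = G*m1*m2 / r**2, which general
# relativity reduces to in the weak-field, slow-motion limit.
G = 6.674e-11  # gravitational constant in N*m^2/kg^2 (CODATA value, rounded)

def newtonian_force(m1, m2, r):
    """Attractive force in newtons between point masses m1, m2 (kg) at distance r (m)."""
    return G * m1 * m2 / r**2

# Illustrative check: the force on a 1 kg mass at the Earth's surface should be
# roughly its familiar weight of about 9.8 N.
earth_mass = 5.972e24    # kg
earth_radius = 6.371e6   # m
f = newtonian_force(earth_mass, 1.0, earth_radius)
```

In the regimes the entry calls "normal circumstances," the general-relativistic corrections to this value are far below the rounding error of the constants used here.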

3.
Cosmological constant
–
In cosmology, the cosmological constant (usually denoted by the Greek letter Λ) is the value of the energy density of the vacuum of space. It was originally introduced by Albert Einstein in 1917 as an addition to his theory of relativity to hold back gravity and achieve a static universe. Einstein abandoned the concept after Hubble's 1929 discovery that all galaxies outside the Local Group are moving away from each other, and from 1929 until the early 1990s most cosmology researchers assumed the cosmological constant to be zero.

When Λ is zero, the equation reduces to the original field equation of general relativity; when T is zero, it describes empty space. The cosmological constant has the same effect as an intrinsic energy density of the vacuum. In this context, it is commonly moved onto the right-hand side of the equation and defined with a proportionality factor of 8π, so that Λ = 8πρvac (in units where G = c = 1). It is common to quote values of energy density directly, though still using the name cosmological constant. A positive vacuum energy density resulting from a cosmological constant implies a negative pressure, and if the energy density is positive, the associated negative pressure will drive an accelerated expansion of the universe, as observed.

The ratio of the energy density due to the cosmological constant to the critical density of the universe is usually denoted ΩΛ, and is estimated to be 0.6911 ± 0.0062, according to results published by the Planck Collaboration in 2015. In a flat universe, ΩΛ is the fraction of the energy of the universe due to the cosmological constant. Another ratio used by scientists is the equation of state, usually denoted w; this ratio is w = −1 for a true cosmological constant, and is generally different for alternative time-varying forms of vacuum energy such as quintessence.

Einstein's static solution is unstable: a universe that expands slightly will continue expanding, and likewise a universe that contracts slightly will continue contracting. It was to counteract the overall contraction that gravity would otherwise produce that Einstein added the cosmological constant.
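In equations, the constant enters the field equations as follows; this is the standard form (in units with G = c = 1), consistent with the relation Λ = 8πρvac quoted above:

```latex
% Field equations with cosmological constant \Lambda on the left-hand side:
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda\, g_{\mu\nu} = 8\pi\, T_{\mu\nu}
% Moved to the right-hand side, \Lambda reads as a vacuum energy density:
\Lambda = 8\pi \rho_{\mathrm{vac}}
```

Setting Λ = 0 recovers the original field equation, and setting T = 0 with Λ ≠ 0 gives the empty-space solutions (de Sitter or anti-de Sitter space) discussed elsewhere in this document.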
However, the cosmological constant remained a subject of theoretical and empirical interest. Empirically, the onslaught of cosmological data in the past decades strongly suggests that our universe has a positive cosmological constant; the explanation of this small but positive value is an outstanding theoretical challenge. Observations announced in 1998 of the distance–redshift relation for Type Ia supernovae indicated that the expansion of the universe is accelerating, and when combined with measurements of the cosmic microwave background radiation these implied a value of ΩΛ ≈ 0.7. There are other possible causes of an accelerating universe, such as quintessence.

4.
Topological quantum field theory
–
A topological quantum field theory (TQFT) is a quantum field theory which computes topological invariants. Donaldson, Jones, Witten, and Kontsevich have all won Fields Medals for work related to topological field theory.

In a topological field theory, the correlation functions do not depend on the metric of spacetime. This means that the theory is not sensitive to changes in the shape of spacetime: if the spacetime warps or contracts, the correlation functions do not change. Topological field theories are not very interesting on the flat Minkowski spacetime used in particle physics, because Minkowski space can be contracted to a point, so a TQFT on Minkowski space computes only trivial topological invariants. Consequently, TQFTs are usually studied on curved spacetimes, such as Riemann surfaces. Most of the known topological field theories are defined on spacetimes of dimension less than five; it seems that a few higher-dimensional theories exist, but they are not very well understood.

Quantum gravity is believed to be background-independent, and TQFTs provide examples of background-independent quantum field theories. This has prompted ongoing theoretical investigation of this class of models. The known topological field theories fall into two general classes: Schwarz-type TQFTs and Witten-type TQFTs. Witten-type TQFTs are also referred to as cohomological field theories.

In Schwarz-type TQFTs, the correlation functions or partition functions of the system are computed by the path integral of metric-independent action functionals. For instance, in the BF model, the spacetime is a two-dimensional manifold M, the observables are constructed from a two-form F, an auxiliary scalar B, and their derivatives, and the action is S = ∫_M B F. The spacetime metric does not appear anywhere in the theory. The first example appeared in 1977 and is due to A. Schwarz; its action functional is ∫_M A ∧ dA.
Another, more famous example is Chern–Simons theory, which can be used to compute knot invariants. In general, partition functions depend on a metric, but the above examples are metric-independent.

The first example of a field theory of Witten type appeared in Witten's paper of 1988. Though its action functional contains the spacetime metric gαβ, after a topological twist it turns out to be metric-independent. The independence of the stress-energy tensor Tαβ of the system from the metric depends on whether the BRST operator is closed. Following Witten's example, many other examples have been found in string theory.

In a Witten-type TQFT, the stress-energy tensor is of the form Tαβ = δGαβ for an arbitrary tensor Gαβ. As an example, given a 2-form field B and a differential operator δ which satisfies δ² = 0, the expression δS/δBαβ is proportional to δG with another 2-form G. In the third equality of that calculation, the facts δOi = δS = 0 are used; since ∫ dμ Oi G e^{iS} is only a number, the Lie derivative applied to it vanishes.
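For concreteness, the two Schwarz-type actions mentioned above can be written out; the Chern–Simons normalization with level k shown here is the standard one rather than a quotation from this entry:

```latex
% Abelian Schwarz-type action on a 3-manifold M (A. Schwarz, 1977):
S = \int_M A \wedge \mathrm{d}A
% Non-abelian Chern--Simons action at level k, whose Wilson-loop expectation
% values compute knot invariants:
S_{\mathrm{CS}} = \frac{k}{4\pi} \int_M \mathrm{tr}\!\left( A \wedge \mathrm{d}A
    + \tfrac{2}{3}\, A \wedge A \wedge A \right)
```

Note that no metric appears in either integrand, which is what makes these actions metric-independent in the sense described above.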

5.
Edward Witten
–
Edward Witten is an American theoretical physicist and professor of mathematical physics at the Institute for Advanced Study in Princeton, New Jersey. Witten is a researcher in string theory, quantum gravity, and supersymmetric quantum field theories. In addition to his contributions to physics, Witten's work has significantly impacted pure mathematics. In 1990 he became the first, and so far the only, physicist to be awarded a Fields Medal by the International Mathematical Union. In 2004, Time magazine stated that Witten is widely thought to be the world's smartest living theoretical physicist.

Witten was born in Baltimore, Maryland, to a Jewish family. He is the son of Lorraine Witten and Louis Witten, a theoretical physicist specializing in gravitation and general relativity. Witten attended the Park School of Baltimore and received his Bachelor of Arts with a major in history. He published articles in The New Republic and The Nation; in 1968, Witten published an article in The Nation arguing that the New Left had no strategy, and he worked briefly for George McGovern's presidential campaign. Witten attended the University of Wisconsin–Madison for one semester as a graduate student before dropping out. He later held a fellowship at Harvard University, visited Oxford University, and was a fellow in the Harvard Society of Fellows.

Witten was awarded the Fields Medal by the International Mathematical Union in 1990. In the words of the laudation: "Time and again he has surprised the mathematical community by a brilliant application of physical insight leading to new and deep mathematical theorems... he has made a profound impact on contemporary mathematics. In his hands physics is once again providing a rich source of inspiration." As an example of Witten's work in mathematics, Atiyah cites his application of techniques from quantum field theory to the mathematical subject of low-dimensional topology.
In particular, Witten realized that a theory now called Chern–Simons theory could provide a framework for understanding the mathematical theory of knots. Another result for which Witten was awarded the Fields Medal was his 1981 proof of the positive energy theorem in general relativity. This theorem asserts that (under appropriate assumptions) the total energy of a gravitating system is always positive, and it establishes Minkowski space as a stable ground state of the gravitational field. While the original proof, due to Richard Schoen and Shing-Tung Yau, used variational methods, Witten's proof used ideas drawn from supersymmetry. In another paper, Witten gave a new proof of a classical result, the Morse inequalities.

6.
Gauge group
–
In physics, a gauge theory is a type of field theory in which the Lagrangian is invariant under a continuous group of local transformations. An invariant is a quantity that holds no matter the mathematical procedure applied to it; this is the concept behind gauge invariance. The idea of fields, as described by Michael Faraday in his study of electromagnetism, led to the postulate that fields could be described mathematically as scalars and vectors. When the fields are transformed but the measurable results are not, applying gauge theory creates a unification which describes mathematical formulas or models that hold good for all fields of the same class.

The term gauge refers to any specific mathematical formalism to regulate redundant degrees of freedom in the Lagrangian. The transformations between possible gauges, called gauge transformations, form a Lie group, referred to as the symmetry group or the gauge group of the theory. Associated with any Lie group is the Lie algebra of group generators; for each group generator there necessarily arises a corresponding field called the gauge field. Gauge fields are included in the Lagrangian to ensure its invariance under the local group transformations, and when such a theory is quantized, the quanta of the gauge fields are called gauge bosons. If the symmetry group is non-commutative, the theory is referred to as a non-abelian gauge theory.

Many powerful theories in physics are described by Lagrangians that are invariant under some symmetry transformation groups. When they are invariant under a transformation identically performed at every point in the spacetime in which the physical processes occur, they are said to have a global symmetry. Local symmetry, the cornerstone of gauge theories, is a stricter constraint; in fact, a global symmetry is just a local symmetry whose group's parameters are fixed in spacetime. Gauge theories are important as the field theories explaining the dynamics of elementary particles.
Quantum electrodynamics is a gauge theory with the symmetry group U(1) and has one gauge field, with the photon as its gauge boson. The Standard Model is a gauge theory with the symmetry group U(1)×SU(2)×SU(3) and has a total of twelve gauge bosons: the photon, three weak bosons, and eight gluons.

Gauge theories are also important in explaining gravitation in the theory of general relativity. Its case is somewhat unusual in that the gauge field is a tensor. Theories of quantum gravity, beginning with gauge gravitation theory, also postulate the existence of a gauge boson known as the graviton. Both gauge invariance and diffeomorphism invariance reflect a redundancy in the description of the system. An alternative theory of gravitation, gauge theory gravity, replaces the principle of covariance with a true gauge principle with new gauge fields.

Historically, these ideas were first stated in the context of classical electromagnetism; however, the modern importance of gauge symmetries appeared first in the relativistic quantum mechanics of electrons (quantum electrodynamics), elaborated on below.
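A minimal worked example of the local invariance described above is the U(1) case of quantum electrodynamics; the formulas below are the standard textbook ones rather than quotations from this entry:

```latex
% Local U(1) gauge transformation with spacetime-dependent parameter \alpha(x):
\psi(x) \;\to\; e^{i\alpha(x)}\,\psi(x), \qquad
A_\mu(x) \;\to\; A_\mu(x) - \tfrac{1}{e}\,\partial_\mu \alpha(x)
% The Lagrangian remains invariant because derivatives of \psi appear only
% through the covariant derivative, which transforms like \psi itself:
D_\mu \psi = \left(\partial_\mu + i e A_\mu\right)\psi
```

Here A_μ is the gauge field that must be introduced for the generator of U(1), exactly as described above: demanding that a global phase symmetry hold locally forces the gauge field into the Lagrangian.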

7.
Solvable group
–
In mathematics, more specifically in the field of group theory, a solvable group (or soluble group) is a group that can be constructed from abelian groups using extensions. Equivalently, a solvable group is a group whose derived series terminates in the trivial subgroup.

Historically, the word solvable arose from Galois theory and the proof of the unsolvability of the general quintic equation. Specifically, a polynomial equation is solvable by radicals if and only if the corresponding Galois group is solvable.

A group G is solvable if it has a subnormal series with abelian factor groups, or equivalently, if its derived series, the normal series G ▹ G⁽¹⁾ ▹ G⁽²⁾ ▹ ⋯ in which each subgroup is the commutator subgroup of the previous one, eventually reaches the trivial subgroup. These two definitions are equivalent, since for every group H and every normal subgroup N of H, the quotient H/N is abelian if and only if N includes the commutator subgroup of H. The least n such that G⁽ⁿ⁾ is trivial is called the derived length of the solvable group G.

For finite groups, an equivalent definition is that a solvable group is a group with a composition series all of whose factors are cyclic groups of prime order. This is equivalent because a finite group has finite composition length, and the Jordan–Hölder theorem guarantees that if one composition series has this property, then all composition series will have this property as well. For the Galois group of a polynomial, these cyclic groups correspond to nth roots (radicals) over some field.

All abelian groups are trivially solvable, a subnormal series being given by just the group itself and the trivial group, but non-abelian groups may or may not be solvable. More generally, all nilpotent groups are solvable; in particular, finite p-groups are solvable, as all finite p-groups are nilpotent. A small example of a solvable, non-nilpotent group is the symmetric group S3. In fact, as the smallest simple non-abelian group is A5, it follows that every group with order less than 60 is solvable.

The group S5 is not solvable: it has a composition series giving factor groups isomorphic to A5 and C2, and A5 is not abelian. Generalizing this argument, coupled with the fact that An is a normal, maximal, non-abelian simple subgroup of Sn for n > 4, we see that Sn is not solvable for n > 4. This is a key step in the proof that for every n > 4 there are polynomials of degree n which are not solvable by radicals.
This property is also used in complexity theory in the proof of Barrington's theorem. The celebrated Feit–Thompson theorem states that every finite group of odd order is solvable. In particular, this implies that if a finite group is simple, it is either a cyclic group of prime order or of even order.
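The derived-series definition above can be checked computationally for small symmetric groups. The sketch below (plain Python with illustrative helper names, not part of the original text) builds a permutation group from generators, repeatedly takes commutator subgroups, and tests whether the series reaches the trivial group:

```python
from itertools import product

def compose(p, q):
    """Compose permutations given as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def closure(gens, n):
    """Subgroup of S_n generated by gens (closure under composition)."""
    group = {tuple(range(n))} | set(gens)
    while True:
        new = {compose(a, b) for a, b in product(group, group)} - group
        if not new:
            return group
        group |= new

def derived_subgroup(group, n):
    """Commutator subgroup, generated by all a*b*a^-1*b^-1."""
    comms = {compose(compose(a, b), compose(inverse(a), inverse(b)))
             for a, b in product(group, group)}
    return closure(comms, n)

def is_solvable(gens, n):
    """Follow the derived series; solvable iff it reaches the trivial group."""
    g = closure(set(gens), n)
    while len(g) > 1:
        d = derived_subgroup(g, n)
        if len(d) == len(g):  # series stabilized above the trivial group
            return False
        g = d
    return True

def sym_gens(n):
    """A transposition and an n-cycle together generate S_n."""
    return [tuple([1, 0] + list(range(2, n))),
            tuple(list(range(1, n)) + [0])]

s4_solvable = is_solvable(sym_gens(4), 4)  # S4: series S4 > A4 > V4 > {e}
s5_solvable = is_solvable(sym_gens(5), 5)  # S5: series stops at A5, which is simple
```

This directly mirrors the argument in the text: for S5 the derived series stabilizes at A5, a non-abelian simple group, so the series never reaches the trivial subgroup.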

8.
Quantum gravity
–
Quantum gravity is a field of theoretical physics that seeks to describe gravity according to the principles of quantum mechanics, in regimes where quantum effects cannot be ignored. The current understanding of gravity is based on Albert Einstein's general theory of relativity, which is formulated within the framework of classical physics.

The necessity of a quantum mechanical description of gravity is sometimes said to follow from the fact that one cannot consistently couple a classical system to a quantum one. This is false, as is shown, for example, by Wald's explicit construction of a consistent semiclassical theory. The problem is rather that the theory one gets in this way is not renormalizable and therefore cannot be used to make meaningful physical predictions. As a result, theorists have taken up more radical approaches to the problem of quantum gravity; a theory of quantum gravity that is also a grand unification of all known interactions is sometimes referred to as a Theory of Everything. Quantum gravity remains a mainly theoretical enterprise.

Much of the difficulty in meshing these theories at all energy scales comes from the different assumptions the theories make about how the universe works. Quantum field theory, if conceived of as a theory of particles, depends on particle fields embedded in the flat spacetime of special relativity, while general relativity models gravity as a curvature within spacetime that changes as a gravitational mass moves. Historically, the most obvious way of combining the two, treating gravity as simply another particle field, ran quickly into what is known as the renormalization problem. Another possibility is to focus on fields rather than on particles, with particles understood as just one way of characterizing certain fields in very special spacetimes. This solves worries about consistency, but does not appear to lead to a quantum version of the full general theory of relativity.

At low energies, quantum gravity can be treated as an effective field theory. Effective quantum field theories come with some high-energy cutoff, beyond which we do not expect the theory to provide a description of nature.
The infinities then become large but finite quantities depending on this finite cutoff scale, and this same logic works just as well for the highly successful theory of low-energy pions as for quantum gravity. Indeed, the first quantum-mechanical corrections to graviton scattering and to Newton's law of gravitation have been explicitly computed in this framework. In fact, gravity is in many ways a much better behaved quantum field theory than the Standard Model: the problem of combining quantum mechanics and gravity becomes an issue only at very high energies. This problem must be put in its proper context, however.

While there is no concrete proof of the existence of gravitons, quantized theories of matter may necessitate their existence. The predicted find would result in the classification of the graviton as a force particle similar to the photon of the electromagnetic field. Many of the notions of a unified theory of physics since the 1970s assume, and to some degree depend upon, the existence of the graviton.

9.
Killing form
–
In mathematics, the Killing form, named after Wilhelm Killing, is a symmetric bilinear form that plays a basic role in the theories of Lie groups and Lie algebras. The Killing form was introduced into Lie algebra theory by Élie Cartan in his thesis. The name Killing form first appeared in a paper of Armand Borel in 1951; Borel admits that the name seems to be a misnomer, and that it would be more correct to call it the Cartan form. A basic result Cartan made use of was Cartan's criterion, which states that the Killing form is non-degenerate if and only if the algebra is semisimple.

Consider a Lie algebra g over a field K. Every element x of g defines the adjoint endomorphism ad(x) of g with the help of the Lie bracket, as ad(x)(y) = [x, y]. Now, supposing g is of finite dimension, the trace of the composition of two such endomorphisms defines a symmetric bilinear form B(x, y) = trace(ad(x) ∘ ad(y)), with values in K.

The Killing form B is bilinear and symmetric. It is an invariant form, in the sense that it has the associativity property B([x, y], z) = B(x, [y, z]), where [ , ] is the Lie bracket. If g is a simple Lie algebra, then any invariant symmetric bilinear form on g is a scalar multiple of the Killing form. The Killing form is also invariant under automorphisms s of the algebra g. The Cartan criterion states that a Lie algebra is semisimple if and only if the Killing form is non-degenerate. The Killing form of a nilpotent Lie algebra is identically zero. If I, J are two ideals in a Lie algebra g with zero intersection, then I and J are orthogonal subspaces with respect to the Killing form. The orthogonal complement with respect to B of an ideal is again an ideal. If a given Lie algebra g is a direct sum of its ideals I1, …, In, then the Killing form of g is the direct sum of the Killing forms of the individual summands.

Given a basis ei of the Lie algebra g, the matrix entries of the Killing form are given by Bij = trace(ad(ei) ∘ ad(ej)) / Iad, where Iad is the Dynkin index of the adjoint representation of g.
Here [e_i, [e_j, e_k]] = c_im^n c_jk^m e_n in Einstein summation notation, where the index k functions as column index and the index n as row index in the matrix ad(ei)ad(ej). In the above indexed definition, we are careful to distinguish upper and lower indices. This is because, in many cases, the Killing form can be used as a metric tensor on a manifold, in which case the distinction becomes important for the transformation properties of tensors. When the Lie algebra is semisimple over a field of characteristic zero, its Killing form is nondegenerate, and in this case it is possible to choose a basis for g such that the structure constants with all upper indices are completely antisymmetric.

The Killing form can be written down explicitly for the classical Lie algebras. Suppose that g is a semisimple Lie algebra over the field of real numbers R. By Cartan's criterion, the Killing form is nondegenerate, and can be diagonalized in a suitable basis with diagonal entries ±1.
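As a concrete check of the definition B(x, y) = trace(ad(x) ∘ ad(y)), the sketch below computes the Killing form of sl(2) in the basis (h, e, f) with brackets [h, e] = 2e, [h, f] = −2f, [e, f] = h. The code is plain Python with illustrative names, using the unnormalized trace form (no division by the Dynkin index):

```python
# Killing form B(x, y) = trace(ad(x) o ad(y)) for sl(2),
# basis (h, e, f) with [h,e] = 2e, [h,f] = -2f, [e,f] = h.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# Column j of ad(x) expresses [x, e_j] in the basis (h, e, f).
ad_h = [[0, 0, 0],
        [0, 2, 0],
        [0, 0, -2]]
ad_e = [[0, 0, 1],     # [e,h] = -2e, [e,e] = 0, [e,f] = h
        [-2, 0, 0],
        [0, 0, 0]]
ad_f = [[0, -1, 0],    # [f,h] = 2f, [f,e] = -h, [f,f] = 0
        [0, 0, 0],
        [2, 0, 0]]

def killing(ad_x, ad_y):
    return trace(matmul(ad_x, ad_y))

B_hh = killing(ad_h, ad_h)  # expected 8
B_ef = killing(ad_e, ad_f)  # expected 4
B_he = killing(ad_h, ad_e)  # expected 0
```

The nonzero values B(h, h) = 8 and B(e, f) = 4 illustrate nondegeneracy, consistent with sl(2) being semisimple as Cartan's criterion requires.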

10.
AdS/CFT correspondence
–
In theoretical physics, the anti-de Sitter/conformal field theory correspondence (AdS/CFT correspondence) is a conjectured relationship between two kinds of physical theories. On one side are anti-de Sitter spaces, which are used in theories of quantum gravity formulated in terms of string theory or M-theory. On the other side of the correspondence are conformal field theories, which are quantum field theories. The duality represents a major advance in our understanding of string theory and quantum gravity. It also provides a powerful toolkit for studying strongly coupled quantum field theories; this fact has been used to study aspects of nuclear and condensed matter physics.

The AdS/CFT correspondence was first proposed by Juan Maldacena in late 1997. Important aspects of the correspondence were elaborated in articles by Steven Gubser, Igor Klebanov, and Alexander Markovich Polyakov, and by Edward Witten. By 2015, Maldacena's article had over 10,000 citations, becoming the most highly cited article in the field of high energy physics.

Our current understanding of gravity is based on Albert Einstein's general theory of relativity. Formulated in 1915, general relativity explains gravity in terms of the geometry of space and time, and it is formulated in the language of classical physics developed by physicists such as Isaac Newton and James Clerk Maxwell. The other, nongravitational forces are explained in the framework of quantum mechanics, developed in the first half of the twentieth century by a number of different physicists; quantum mechanics provides a radically different way of describing physical phenomena, based on probability.

Quantum gravity is the branch of physics that seeks to describe gravity using the principles of quantum mechanics. Currently, the most popular approach to quantum gravity is string theory, which models elementary particles not as zero-dimensional points but as one-dimensional objects called strings. In the AdS/CFT correspondence, one typically considers theories of quantum gravity derived from string theory or its modern extension, M-theory. In everyday life, there are three familiar dimensions of space, and there is one dimension of time. Thus, in the language of physics, one says that spacetime is four-dimensional.
The quantum gravity theories appearing in the AdS/CFT correspondence are typically obtained from string and M-theory by a process known as compactification. This produces a theory in which spacetime has effectively a lower number of dimensions, the extra dimensions being curled up into circles. A standard analogy for compactification is to consider an object such as a garden hose: viewed from a sufficient distance the hose appears one-dimensional, but an ant crawling on its surface would move in two dimensions.

The application of quantum mechanics to physical objects such as the electromagnetic field, which are extended in space and time, is known as quantum field theory. In particle physics, quantum field theories form the basis for our understanding of elementary particles; quantum field theories are also used throughout condensed matter physics to model particle-like objects called quasiparticles. In the AdS/CFT correspondence, one considers, in addition to a theory of quantum gravity, a conformal field theory; this is a particularly symmetric and mathematically well-behaved type of quantum field theory.

In the AdS/CFT correspondence, one considers string theory or M-theory on an anti-de Sitter background. This means that the geometry of spacetime is described in terms of a certain vacuum solution of Einstein's equation called anti-de Sitter space, which is closely related to hyperbolic space and can be viewed as a disk.

11.
Monster group
–
In the area of abstract algebra known as group theory, the Monster group M is the largest sporadic simple group. The finite simple groups have been completely classified: every such group belongs to one of 18 countably infinite families, or is one of 26 sporadic groups that do not follow such a systematic pattern. The Monster group contains all but six of the other sporadic groups as subquotients. Robert Griess has called these six exceptions pariahs, and refers to the other twenty as the happy family.

It is difficult to give a good constructive definition of the Monster because of its complexity. Martin Gardner wrote a popular account of the monster group in his June 1980 Mathematical Games column in Scientific American. The Monster was predicted by Bernd Fischer and Robert Griess as a simple group containing a double cover of Fischer's Baby Monster group as a centralizer of an involution. The character table of the Monster, a 194-by-194 array, was calculated in 1979 by Fischer and collaborators. It was not clear in the 1970s whether the Monster actually existed. Griess constructed M as the automorphism group of the Griess algebra, a 196,884-dimensional commutative nonassociative algebra; in his 1982 paper he referred to the Monster as the Friendly Giant. John Conway and Jacques Tits subsequently simplified this construction.

Griess's construction showed that the Monster exists. Thompson showed that its uniqueness would follow from the existence of a 196,883-dimensional faithful representation. A proof of the existence of such a representation was announced by Norton, and Griess, Meierfrankenfeld & Segev gave the first complete published proof of the uniqueness of the Monster.

The Monster was a culmination of the development of sporadic simple groups and can be built from any two of three subquotients: the Fischer group Fi24, the Baby Monster, and the Conway group Co1. The Schur multiplier and the outer automorphism group of the Monster are both trivial. The minimal degree of a faithful complex representation is 196,883. The smallest faithful linear representation over any field has dimension 196,882, over the field with 2 elements. The smallest faithful permutation representation of the Monster is on 2⁴·3⁷·5³·7⁴·11·13²·29·41·59·71 points.
The Monster can be realized as a Galois group over the rational numbers. The Monster is unusual among simple groups in that there is no known easy way to represent its elements. This is not due so much to its size as to the absence of small representations: for example, the simple groups A100 and SL20(2) are far larger, but easy to calculate with as they have small permutation or linear representations. All sporadic groups other than the Monster also have linear representations small enough that they are easy to work with on a computer. Performing calculations with the 196,882-dimensional matrices over the field with 2 elements is possible but is too expensive in terms of time and storage space to be useful, as each such matrix occupies over four and a half gigabytes.
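The minimal permutation degree quoted above is given as a product of prime powers; multiplying it out (a simple arithmetic check, not part of the original text) shows how large even the smallest permutation representation is:

```python
# Degree of the smallest faithful permutation representation of the Monster:
# 2^4 * 3^7 * 5^3 * 7^4 * 11 * 13^2 * 29 * 41 * 59 * 71 points.
factors = {2: 4, 3: 7, 5: 3, 7: 4, 11: 1, 13: 2, 29: 1, 41: 1, 59: 1, 71: 1}

degree = 1
for prime, exponent in factors.items():
    degree *= prime ** exponent
# degree is roughly 9.7 * 10**19, far too many points to act on explicitly,
# which is why permutation methods are useless for the Monster in practice.
```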

12.
ArXiv
–
In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008, and by 2014 the submission rate had grown to more than 8,000 per month.

The arXiv was made possible by the compact TeX file format, which could be transmitted over the low-bandwidth networks of the time. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Paul Ginsparg recognized the need for central storage and in 1991 created a central repository at the Los Alamos National Laboratory (LANL). Additional modes of access were soon added: FTP in 1991 and Gopher in 1992. The term e-print was quickly adopted to describe the articles, and the repository's original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the rapidly expanding technology, in 1999 Ginsparg changed institutions to Cornell University; the site is now hosted principally by Cornell, with 8 mirrors around the world. Its existence was one of the factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists regularly upload their papers to arXiv.org for worldwide access, and Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv.

The annual budget for arXiv is approximately $826,000 for 2013 to 2017, funded jointly by Cornell University Library and member institutions; annual donations were envisaged to vary in size between $2,300 and $4,000, based on each institution's usage. As of 14 January 2014, 174 institutions had pledged support for the period 2013–2017 on this basis. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it "was supposed to be a three-hour tour, not a life sentence"; however, Ginsparg remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee.
The lists of moderators for many sections of the arXiv are publicly available. Additionally, an endorsement system was introduced in 2004 as part of an effort to ensure content that is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors. New authors from recognized academic institutions generally receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for allegedly restricting scientific inquiry. Perelman appears content to forgo the traditional peer-reviewed journal process, stating, "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it." The arXiv generally re-classifies such works, e.g. into General mathematics. Papers can be submitted in any of several formats, including LaTeX and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the software if generating the final PDF file fails or if any image file is too large. ArXiv now allows one to store and modify an incomplete submission; the time stamp on the article is set when the submission is finalized.

13.
Gravitational anomaly
–
The adjective "gravitational" is derived from the symmetry of a gravitational theory, namely from general covariance. "Gravitational anomaly" is generally synonymous with "diffeomorphism anomaly," since general covariance is symmetry under coordinate reparametrization. General covariance is the basis of general relativity, the current theory of gravitation; therefore, all gravitational anomalies must cancel out. The anomaly usually appears as a Feynman diagram with a chiral fermion running in the loop and n external gravitons attached to the loop, where n = 1 + D/2 and D is the spacetime dimension. Field-theoretic pure gravitational anomalies occur only in even spacetime dimensions; however, diffeomorphism anomalies can also occur in the case of an odd-dimensional spacetime manifold with boundary. Consider a classical gravitational field represented by the vielbein e_μ^a and a quantized Fermi field ψ. The Einstein anomaly is δ_ξ W = −∫ d⁴x e ξ^ν ∇_μ⟨T^μ_ν⟩, and the Weyl anomaly is δ_σ W = ∫ d⁴x e σ ⟨T^μ_μ⟩, which indicates that the trace is non-zero.

14.
Holographic principle
–
First proposed by Gerard 't Hooft, the holographic principle was given a precise string-theory interpretation by Leonard Susskind, who combined his ideas with previous ones of 't Hooft and Charles Thorn. As pointed out by Raphael Bousso, Thorn observed in 1978 that string theory admits a lower-dimensional description in which gravity emerges from it in what would now be called a holographic way. Cosmological holography has not been made mathematically precise, partly because the cosmological horizon has a non-zero area. The holographic principle was inspired by black hole thermodynamics, which conjectures that the maximal entropy in any region scales with the radius squared, not cubed as might be expected. In the case of a black hole, the insight was that the informational content of all the objects that have fallen into the hole might be entirely contained in surface fluctuations of the event horizon. The holographic principle resolves the black hole information paradox within the framework of string theory. However, there exist classical solutions to the Einstein equations that allow values of the entropy larger than those allowed by an area law, hence in principle larger than those of a black hole. These are the so-called Wheeler's "bags of gold". The existence of such solutions conflicts with the holographic interpretation, and their effects in a quantum theory of gravity including the holographic principle are not yet fully understood. An object with relatively high entropy is microscopically random, like a hot gas. A known configuration of classical fields has zero entropy: there is nothing random about electric and magnetic fields, or gravitational waves. Since black holes are exact solutions of Einstein's equations, they were thought not to have any entropy either. But Jacob Bekenstein noted that this leads to a violation of the second law of thermodynamics: if one throws a hot gas with entropy into a black hole, the entropy would seem to disappear once it crosses the event horizon. The random properties of the gas would no longer be seen once the black hole had absorbed the gas and settled down.
One way of salvaging the second law is if black holes are in fact random objects with an entropy that increases by an amount greater than the entropy of the consumed gas. Bekenstein assumed that black holes are maximum entropy objects—that they have more entropy than anything else in the same volume. In a sphere of radius R, the entropy in a relativistic gas increases as the energy increases. The only known limit is gravitational: when there is too much energy, the gas collapses into a black hole. Bekenstein used this to put an upper bound on the entropy in a region of space, and the bound was proportional to the area of the region. He concluded that the black hole entropy is proportional to the area of the event horizon. Stephen Hawking had shown earlier that the total horizon area of a collection of black holes always increases with time. The horizon is a boundary defined by light-like geodesics, those light rays that are just barely unable to escape. If neighboring geodesics start moving toward each other, they eventually collide, at which point their extension is inside the black hole.
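Bekenstein's area bound became quantitative in the Bekenstein–Hawking formula S = k_B c³ A / (4Għ), with A the horizon area. As a rough numerical sketch for a solar-mass black hole (SI constants truncated to four digits; the specific numbers are not from the text above):

```python
import math

# Bekenstein-Hawking entropy S = k_B * c^3 * A / (4 * G * hbar):
# entropy proportional to horizon area, evaluated for one solar mass.
G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
M_sun = 1.989e30

r_s = 2 * G * M_sun / c**2            # Schwarzschild radius, ~2.95 km
A = 4 * math.pi * r_s**2              # horizon area
S_over_k = c**3 * A / (4 * G * hbar)  # entropy in units of k_B

print(f"r_s ≈ {r_s:.0f} m, S/k_B ≈ {S_over_k:.2e}")
```

The result, S ~ 10⁷⁷ k_B, is enormously larger than the entropy of the star that collapsed, consistent with Bekenstein's view of black holes as maximum entropy objects.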

15.
Quantum field theory in curved spacetime
–
In particle physics, quantum field theory in curved spacetime is an extension of standard, Minkowski-space quantum field theory to curved spacetime. A general prediction of this theory is that particles can be created by time-dependent gravitational fields. For non-zero cosmological constants, on curved spacetimes quantum fields lose their interpretation as asymptotic particles. Only in certain situations, such as in asymptotically flat spacetimes, can the notion of incoming and outgoing particles be recovered; even then, as in flat spacetime, the asymptotic particle interpretation depends on the observer. Another observation is that, unless the metric tensor has a global timelike Killing vector, there is no invariant way to single out a vacuum. The concept of a vacuum is not invariant under diffeomorphisms because a mode decomposition of a field into positive- and negative-frequency modes is not invariant under diffeomorphisms. If t′(t) is a diffeomorphism, in general the Fourier transform of exp[ik t′(t)] will contain negative frequencies even if k > 0. Creation operators correspond to positive frequencies, while annihilation operators correspond to negative frequencies. This is why a state which looks like a vacuum to one observer cannot look like a vacuum state to another observer. Indeed, the viewpoint of local quantum physics is suitable for generalizing the procedure to quantum fields developed on curved backgrounds. Several rigorous results concerning QFT in the presence of a black hole have been obtained; the most striking application of the theory is Hawking's prediction that Schwarzschild black holes radiate with a thermal spectrum. A related prediction is the Unruh effect: accelerated observers in the vacuum measure a thermal bath of particles. This formalism is also used to predict the primordial density perturbation spectrum arising from cosmic inflation.
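The observer dependence of the vacuum can be made explicit with a Bogoliubov transformation between two mode expansions of the same field; a standard sketch (the notation is the conventional one, assumed here rather than taken from the text):

```latex
% Two mode expansions of the same field \phi:
\phi \;=\; \sum_i \left( a_i\, u_i + a_i^\dagger\, u_i^* \right)
      \;=\; \sum_j \left( b_j\, v_j + b_j^\dagger\, v_j^* \right)

% The two sets of operators are related by Bogoliubov coefficients:
b_j \;=\; \sum_i \left( \alpha_{ji}^*\, a_i \;-\; \beta_{ji}^*\, a_i^\dagger \right)

% If any \beta_{ji} \neq 0, the a-vacuum contains b-type particles:
\langle 0_a |\, b_j^\dagger b_j \,| 0_a \rangle \;=\; \sum_i |\beta_{ji}|^2
```

Whenever the coefficients β mixing positive and negative frequencies are non-zero, the state annihilated by all a_i is not annihilated by the b_j, which is precisely why one observer's vacuum is another observer's particle bath.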
Since this spectrum is measured by a variety of cosmological measurements, such as the CMB, if inflation is correct this particular prediction of the theory has already been verified. The Dirac equation can be formulated in curved spacetime; see Dirac equation in curved spacetime for details. Quantum field theory in curved spacetime can be considered as a first approximation to quantum gravity. A second step towards that theory would be semiclassical gravity, which would include the influence of particles created by a gravitational field on the spacetime itself. However, gravity is not renormalizable in QFT, so merely formulating QFT in curved spacetime is not a theory of quantum gravity.

16.
Hawking radiation
–
Hawking radiation is blackbody radiation that is predicted to be released by black holes due to quantum effects near the event horizon. Hawking radiation reduces the mass and energy of black holes and is therefore also known as black hole evaporation. Because of this, black holes that do not gain mass through other means are expected to shrink; micro black holes are predicted to be larger net emitters of radiation than larger black holes and should shrink and dissipate faster. In June 2008, NASA launched the Fermi space telescope, which is searching for the terminal gamma-ray flashes expected from evaporating primordial black holes. In the event that speculative large extra dimension theories are correct, CERN's Large Hadron Collider may be able to create micro black holes. In September 2010, a signal closely related to black hole Hawking radiation was claimed to have been observed in a laboratory experiment involving optical light pulses; however, the results remain unverified and debatable. Other projects have been launched to look for this radiation within the framework of analog gravity. Black holes are sites of immense gravitational attraction: classically, the gravitation is so powerful that nothing, not even electromagnetic radiation, can escape from the black hole. It is not yet known how gravity can be incorporated into quantum mechanics; nevertheless, far from the black hole the gravitational effects can be weak enough for calculations to be reliably performed in the framework of quantum field theory in curved spacetime. Hawking showed that quantum effects allow black holes to emit exact black-body radiation: the electromagnetic radiation is produced as if emitted by a black body with a temperature inversely proportional to the mass of the black hole. Physical insight into the process may be gained by imagining that particle–antiparticle radiation is emitted from just beyond the event horizon. This radiation does not come directly from the black hole itself.
Rather, as the particle–antiparticle pair was produced by the black hole's gravitational energy, the escape of one of the particles lowers the mass of the black hole. An alternative view of the process is that vacuum fluctuations cause a particle–antiparticle pair to appear close to the event horizon of a black hole; one of the pair falls into the black hole while the other escapes. In order to preserve total energy, the particle that fell into the black hole must have had a negative energy. This causes the black hole to lose mass, and, to an outside observer, it would appear that the black hole has just emitted a particle. In another model, the process is a quantum tunnelling effect, whereby particle–antiparticle pairs form from the vacuum and one of them tunnels outside the event horizon. This leads to the black hole information paradox. However, according to the conjectured gauge-gravity duality, black holes in certain cases are equivalent to solutions of quantum field theory at a non-zero temperature. This means that no information loss is expected in black holes, and if this is correct, then Hawking's original calculation should be corrected, though it is not known how.
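The temperature inversely proportional to the mass, mentioned above, is the Hawking temperature T_H = ħc³ / (8πGMk_B). A quick numerical sketch (SI constants truncated to four digits; the specific values are not from the text):

```python
import math

# Hawking temperature T_H = hbar c^3 / (8 pi G M k_B):
# inversely proportional to the black hole's mass.
G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
M_sun = 1.989e30

def hawking_temperature(mass_kg: float) -> float:
    """Temperature (K) of the thermal spectrum of a black hole of given mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(f"T_H(1 M_sun)  ≈ {hawking_temperature(M_sun):.2e} K")
print(f"T_H(10 M_sun) ≈ {hawking_temperature(10 * M_sun):.2e} K")
```

For a solar-mass black hole this gives roughly 6 × 10⁻⁸ K, which is why stellar-mass black holes are net absorbers of the cosmic microwave background rather than net emitters, and why only much lighter micro black holes would evaporate quickly.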

17.
Black hole
–
A black hole is a region of spacetime exhibiting such strong gravitational effects that nothing—not even particles and electromagnetic radiation such as light—can escape from inside it. The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of the region from which no escape is possible is called the event horizon. Although the event horizon has an enormous effect on the fate and circumstances of an object crossing it, no locally detectable features are observed at it. In many ways a black hole acts like an ideal black body. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation. This temperature is on the order of billionths of a kelvin for black holes of stellar mass. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. Black holes were long considered a mathematical curiosity; it was during the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery of neutron stars sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality. Black holes of stellar mass are expected to form when very massive stars collapse at the end of their life cycle. After a black hole has formed, it can continue to grow by absorbing mass from its surroundings; by absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may form. There is general consensus that supermassive black holes exist in the centers of most galaxies. Despite its invisible interior, the presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter that falls onto a black hole can form an accretion disk heated by friction.
If there are other stars orbiting a black hole, their orbits can be used to determine the black hole's mass; such observations can be used to exclude possible alternatives such as neutron stars. On 15 June 2016, a second detection of a gravitational wave event from colliding black holes was announced. The idea of a body so massive that light could not escape was briefly proposed by astronomical pioneer John Michell in a letter published in 1784. Michell correctly noted that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1915, Albert Einstein developed his theory of general relativity. Only a few months later, Karl Schwarzschild found a solution to the Einstein field equations which describes the gravitational field of a point mass and of a spherical mass. A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution for the point mass. This solution had a peculiar behaviour at what is now called the Schwarzschild radius; the nature of this surface was not quite understood at the time.
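Determining a black hole's mass from an orbiting star is, to leading order, an application of Kepler's third law, M ≈ 4π²a³ / (GT²). The sketch below uses illustrative orbit values roughly at the scale of the star S2 orbiting the Milky Way's central black hole (semi-major axis ~1000 AU, period ~16 years); the numbers are assumptions for the example, not measured data from the text:

```python
import math

# Mass of a central object from one orbiting star, via Kepler's third law:
# M = 4 pi^2 a^3 / (G T^2).  Orbit values below are illustrative only.
G = 6.674e-11       # gravitational constant, SI
AU = 1.496e11       # astronomical unit in meters
YEAR = 3.156e7      # year in seconds
M_sun = 1.989e30    # solar mass in kg

a = 1000 * AU       # assumed semi-major axis of the star's orbit
T = 16 * YEAR       # assumed orbital period

M = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"Inferred central mass ≈ {M / M_sun:.1e} solar masses")
```

An orbit this tight and this fast implies a central mass of millions of solar masses packed into a region small enough to exclude any cluster of neutron stars, which is the logic behind the exclusion argument in the text.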

18.
Black hole complementarity
–
Black hole complementarity is a conjectured solution to the black hole information paradox, proposed by Leonard Susskind and Larus Thorlacius, and independently by Gerard 't Hooft. But how can information escape the event horizon without traveling faster than light? This seems to rule out Hawking radiation as the carrier of the missing information. It also appears as if information cannot be reflected at the event horizon, as there is nothing special about it locally. According to an external observer, the infinite time dilation at the horizon itself makes it appear as if it takes an infinite amount of time to reach the horizon. Susskind also postulated a stretched horizon, which is a membrane hovering about a Planck length outside the event horizon. According to the external observer, infalling information heats up the stretched horizon, which then reradiates it as Hawking radiation, with the entire evolution being unitary. However, according to an infalling observer, nothing special happens at the event horizon itself. This isn't to say there are two copies of the information lying about, one at or just outside the horizon and the other inside. Instead, an observer can only detect the information at the horizon itself, or inside, but never both. Complementarity is a feature of the quantum mechanics of noncommuting observables, and Susskind proposed that both stories are complementary in the quantum sense. To an infalling observer, information and entropy pass through the horizon with nothing strange happening; to an external observer, the information and entropy are absorbed into the stretched horizon, which acts like a dissipative fluid with entropy, viscosity, and electrical conductivity. See the membrane paradigm for more details. The stretched horizon is conducting, with surface charges which rapidly spread out over the horizon. Global symmetries don't exist in quantum gravity: baryon number is violated, but only at very small scales, and the proton has a very long lifetime.
But with a short enough time resolution, the proton oscillates between different baryon numbers, and the time warping near the horizon magnifies that oscillation; alternatively, the hot temperature of the stretched horizon causes the proton to decay. An infalling observer, however, never has time to see the proton decay. More recently, it appears that black hole complementarity combined with the monogamy of entanglement suggests the existence of a firewall.

19.
Black hole information paradox
–
The black hole information paradox is a puzzle resulting from the combination of quantum mechanics and general relativity. Calculations suggest that physical information could permanently disappear in a black hole. A fundamental postulate of the Copenhagen interpretation of quantum mechanics is that complete information about a system is encoded in its wave function up to the point when the wave function collapses. The evolution of the wave function is determined by a unitary operator. There are two principles in play: quantum determinism means that, given a present wave function, its future changes are uniquely determined by the evolution operator; reversibility refers to the fact that the evolution operator has an inverse. The combination of the two means that information must always be preserved. Specifically, Hawking's calculations indicated that black hole evaporation via Hawking radiation does not preserve information. Today, many physicists believe that the holographic principle demonstrates that Hawking's conclusion was incorrect, and in 2004 Hawking himself conceded a bet he had made, agreeing that black hole evaporation does in fact preserve information. In 1975, Stephen Hawking and Jacob Bekenstein showed that black holes should slowly radiate away energy, which poses a problem. From the no-hair theorem, one would expect the Hawking radiation to be independent of the material entering the black hole. This violates Liouville's theorem and presents a physical paradox. But since everything within the interior of the black hole will hit the singularity within a finite time, the part which is traced over might disappear completely from the physical system. Hawking remained convinced that the equations of black-hole thermodynamics together with the no-hair theorem led to the conclusion that quantum information may be destroyed. This annoyed many physicists, notably John Preskill, who bet Hawking and Kip Thorne in 1997 that information was not lost in black holes.
One proposed solution to the problem is the holographic principle; in the words of one headline, with it "Susskind quashes Hawking in quarrel over quantum quandary." There are various ideas about how the paradox is solved. Susskind's argument assumes the unitarity of the AdS/CFT correspondence, which implies that an AdS black hole is dual to a thermal conformal field theory and therefore evolves unitarily. When announcing his result, Hawking also conceded the 1997 bet. According to Roger Penrose, loss of unitarity in quantum systems is not a problem: quantum measurements are by themselves already non-unitary. Penrose claims that quantum systems will no longer evolve unitarily as soon as gravitation comes into play. The Conformal Cyclic Cosmology advocated by Penrose critically depends on the condition that information is in fact lost in black holes; the significance of these findings was subsequently debated by others. One possible resolution is that information is irretrievably lost. Advantage: this seems to be a consequence of a relatively non-controversial calculation based on semiclassical gravity.

20.
ER=EPR
–
ER=EPR is a conjecture in physics stating that entangled particles are connected by a wormhole. The conjecture was proposed by Leonard Susskind and Juan Maldacena in 2013; they proposed that a nontraversable wormhole is equivalent to a pair of maximally entangled black holes. The symbol is derived from the first letters of the surnames of the authors who wrote the first papers on wormholes (Einstein and Rosen) and on entanglement (Einstein, Podolsky, and Rosen). The two papers were published in 1935, but the authors did not claim any connection between the concepts. The conjecture offers a resolution of the AMPS firewall paradox: whether or not there is a firewall depends upon what is thrown into the other distant black hole; however, as the firewall lies inside the event horizon, no external superluminal signalling would be possible. Susskind and Maldacena backed up their conjecture by showing that the production of charged black holes in a background magnetic field leads to entangled black holes. They also envisioned gathering up all the Hawking particles and smushing them together until they collapse into a black hole; that black hole would be entangled, and thus connected via wormhole, with the original black hole. That trick transformed a confusing mess of Hawking particles, paradoxically entangled with both a black hole and each other, into two black holes connected by a wormhole. Entanglement overload is averted, and the problem goes away. The conjecture sits uncomfortably with the linearity of quantum mechanics: an entangled state is a linear superposition of separable states, and presumably separable states are not connected by any wormholes. The conjecture leads to a grander conjecture that the geometry of space, time, and gravity is determined by entanglement.

21.
Firewall (physics)
–
A black hole firewall is a hypothetical phenomenon in which an observer falling into an old black hole encounters high-energy quanta at the event horizon. The firewall phenomenon was proposed in 2012 by Ahmed Almheiri, Donald Marolf, Joseph Polchinski, and James Sully; the proposal is sometimes referred to as the AMPS firewall, an acronym for the names of the authors of the 2012 paper. The use of a firewall to resolve this inconsistency remains controversial. 2016 LIGO observations provide tentative evidence of a firewall, or of some other phenomenon causing the black hole event horizon to be fuzzy. According to quantum field theory in curved spacetime, a single emission of Hawking radiation involves two mutually entangled particles. The outgoing particle escapes and is emitted as a quantum of Hawking radiation; the infalling particle is swallowed by the black hole. Assume a black hole formed a finite time in the past and will fully evaporate away in some finite time in the future. Then, it will emit only a finite amount of information encoded within its Hawking radiation. Assume that at time t, more than half of the information had already been emitted. The outgoing particle must then be entangled with the previously emitted radiation, yet it is also entangled with its infalling partner, which the monogamy of entanglement forbids. In order to resolve the paradox, physicists may eventually be forced to give up one of three time-tested theories: Einstein's equivalence principle, unitarity, or existing quantum field theory. Some scientists suggest that the entanglement must somehow get immediately broken between the infalling particle and the outgoing particle. Breaking this entanglement would release inconceivable amounts of energy, thus creating a black hole firewall at the black hole event horizon. This resolution requires a violation of Einstein's equivalence principle, which states that free-falling is indistinguishable from floating in empty space. Other scientists suggest that there is in fact no entanglement between the emitted particle and previous Hawking radiation; this resolution would require black hole information loss, a violation of unitarity.
The firewall would exist at the black hole's event horizon; matter passing through the event horizon into the black hole would immediately be burned to a crisp by an arbitrarily hot seething maelstrom of particles at the firewall. In a merger of two black holes, the characteristics of a firewall may leave a mark on the outgoing gravitational radiation as "echoes," when waves bounce in the vicinity of the fuzzy event horizon. The expected quantity of such echoes is theoretically unclear, as physicists don't currently have a physical model of firewalls. Over the next two years, the random-noise hypothesis should be more solidly confirmed or rejected as additional data are accumulated by the Laser Interferometer Gravitational-Wave Observatory. If confirmed, these echoes would be evidence in favor of a firewall.

22.
Gravitational singularity
–
The quantities used to measure gravitational field strength are the scalar invariant curvatures of space-time, which include a measure of the density of matter. Since such quantities become infinite at the singularity, the laws of normal space-time break down there. The Penrose–Hawking singularity theorems define a singularity in terms of geodesics that cannot be extended in a smooth manner; the termination of such a geodesic is considered to be the singularity. According to modern general relativity, the initial state of the universe, at the beginning of the Big Bang, was a singularity. Many theories in physics have mathematical singularities of one kind or another: equations for these physical theories predict that some quantity becomes infinite or increases without limit. This is generally a sign of a missing piece in the theory, as in the ultraviolet catastrophe and in re-normalization. Some theories, such as loop quantum gravity, suggest that singularities may not exist. A conical singularity occurs when there is a point where the limit of every diffeomorphism-invariant quantity is finite; space-time then looks like a cone around this point, with the singularity located at the tip of the cone. The metric can be finite everywhere if a suitable coordinate system is used. An example of such a conical singularity is a cosmic string. Solutions to the equations of general relativity or another theory of gravity often result in encountering points where the metric blows up to infinity. However, many of these points are completely regular, and the infinities are merely a result of using an inappropriate coordinate system at that point. In order to test whether there is a singularity at a certain point, one must check whether diffeomorphism-invariant quantities become infinite there; such quantities are the same in every coordinate system, so these infinities will not go away by a change of coordinates.
An example is the Schwarzschild solution that describes a non-rotating, uncharged black hole. In coordinate systems convenient for working in regions far away from the black hole, a part of the metric becomes infinite at the event horizon. However, space-time at the event horizon is regular; the regularity becomes evident when changing to another coordinate system, where the metric is perfectly smooth. On the other hand, in the center of the black hole the metric becomes infinite as well, and there the singularity is genuine. The existence of the singularity can be verified by noting that the Kretschmann scalar, being the square of the Riemann tensor, i.e. R_{μνρσ} R^{μνρσ}, is infinite there. Such a singularity may also theoretically become a wormhole. For example, any observer inside the event horizon of a black hole would fall into its center within a finite period of time. The classical version of the Big Bang cosmological model of the universe contains a causal singularity at the start of time; extrapolating backward to this hypothetical time 0 results in a universe with all spatial dimensions of size zero, infinite density, infinite temperature, and infinite space-time curvature. Until the early 1990s, it was widely believed that general relativity hides every singularity behind an event horizon.
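For the Schwarzschild solution, the Kretschmann scalar takes the closed form K = R_{μνρσ}R^{μνρσ} = 48G²M² / (c⁴r⁶), a standard result assumed here. A short numerical sketch shows the point made above: K is perfectly finite at the event horizon (where the metric components merely misbehave in certain coordinates) but diverges as r → 0:

```python
# Kretschmann scalar K = 48 G^2 M^2 / (c^4 r^6) for Schwarzschild:
# finite at the horizon (a coordinate artifact there), divergent as r -> 0
# (a genuine curvature singularity).
G, c = 6.674e-11, 2.998e8
M = 1.989e30                      # one solar mass, for concreteness

def kretschmann(r: float) -> float:
    """Curvature invariant K (m^-4) at areal radius r for mass M."""
    return 48 * G**2 * M**2 / (c**4 * r**6)

r_s = 2 * G * M / c**2            # Schwarzschild radius
for r in (r_s, r_s / 10, r_s / 100):
    print(f"K(r = {r:9.3e} m) = {kretschmann(r):.3e} m^-4")
```

Each factor-of-10 step inward multiplies K by 10⁶, and no change of coordinates can remove that growth, which is exactly the invariant test for a singularity described in the text.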

23.
String theory
–
In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. It describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton; thus string theory is a theory of quantum gravity. String theory is a broad and varied subject that attempts to address a number of deep questions of fundamental physics. Despite much work on these problems, it is not known to what extent string theory describes the real world or how much freedom the theory allows in the choice of its details. String theory was first studied in the late 1960s as a theory of the strong nuclear force. Subsequently, it was realized that the very properties that made string theory unsuitable as a theory of nuclear physics made it a promising candidate for a quantum theory of gravity. The earliest version of string theory, bosonic string theory, incorporated only the class of particles known as bosons. It later developed into superstring theory, which posits a connection called supersymmetry between bosons and the class of particles called fermions. In late 1997, theorists discovered an important relationship called the AdS/CFT correspondence. One of the challenges of string theory is that the full theory does not have a satisfactory definition in all circumstances. Another issue is that the theory is thought to describe an enormous landscape of possible universes, and these issues have led some in the community to criticize these approaches to physics and question the value of continued research on string theory unification.
In the twentieth century, two theoretical frameworks emerged for formulating the laws of physics. One of these frameworks was Albert Einstein's general theory of relativity, a theory that explains the force of gravity and the structure of space and time. The other was quantum mechanics, a completely different formalism for describing physical phenomena using probability. In spite of their successes, there are still many problems that remain to be solved. One of the deepest problems in physics is the problem of quantum gravity: the general theory of relativity is formulated within the framework of classical physics, whereas the other fundamental forces are described within the framework of quantum mechanics. In addition to the problem of developing a consistent theory of quantum gravity, there are many other fundamental problems in the physics of atomic nuclei, black holes, and the early universe. String theory is a framework that attempts to address these questions.

24.
Bosonic string theory
–
Bosonic string theory is the original version of string theory, developed in the late 1960s. It is so called because it contains only bosons in the spectrum. In the 1980s, supersymmetry was discovered in the context of string theory, and a new version of string theory called superstring theory became the real focus. Although bosonic string theory has many attractive features, it falls short as a viable physical model in two significant areas. First, it predicts only the existence of bosons, whereas many physical particles are fermions. Second, it predicts the existence of a mode of the string with imaginary mass, the tachyon. In addition, bosonic string theory in a general spacetime dimension displays inconsistencies due to the conformal anomaly. But, as was first noticed by Claud Lovelace, in a spacetime of 26 dimensions, the critical dimension for the theory, the anomaly cancels. If the extra dimensions are compactified, this would leave only the familiar four dimensions of spacetime visible to low-energy experiments. The existence of a critical dimension where the anomaly cancels is a general feature of all string theories. There are four possible bosonic string theories, depending on whether open strings are allowed and whether strings have a specified orientation. Recall that a theory of open strings must also include closed strings; open strings can be thought of as having their endpoints fixed on a D25-brane that fills all of spacetime. A specific orientation of the worldsheet means that only interactions corresponding to an orientable worldsheet are allowed. All four theories have a negative-energy tachyon in their spectrum. The rest of this article applies to the closed, oriented theory, corresponding to borderless, orientable worldsheets. G is the metric on the target spacetime, which is usually taken to be the Minkowski metric in the perturbative theory. Under a Wick rotation, this is brought to a Euclidean metric G_μν = δ_μν. M is the worldsheet as a topological manifold parametrized by the ξ coordinates. T is the string tension, related to the Regge slope as T = 1/(2πα′).
I_0 has diffeomorphism and Weyl invariance, and a normalization factor N is introduced to compensate for the overcounting arising from these symmetries. While the computation of the partition function corresponds to the cosmological constant, the symmetry group of the action drastically reduces the integration space to a finite-dimensional manifold. One still has to quotient away diffeomorphisms. The fundamental problem of perturbative bosonic strings therefore becomes the parametrization of moduli space, which is non-trivial for genus h ≥ 4. At tree level, corresponding to genus 0, the cosmological constant vanishes: Z_0 = 0.

25.
M-theory
–
M-theory is a theory in physics that unifies all consistent versions of superstring theory. The existence of such a theory was first conjectured by Edward Witten at a string theory conference at the University of Southern California in the spring of 1995. Witten's announcement initiated a flurry of research activity known as the second superstring revolution. Prior to Witten's announcement, string theorists had identified five versions of superstring theory. Although these theories appeared, at first, to be very different, work by several physicists showed that they were related in intricate and nontrivial ways. In particular, physicists found that apparently distinct theories could be unified by mathematical transformations called S-duality and T-duality. Witten's conjecture was based in part on the existence of these dualities and in part on the relationship of the string theories to a field theory called eleven-dimensional supergravity. Modern attempts to formulate M-theory are typically based on matrix theory or the AdS/CFT correspondence. Investigations of the structure of M-theory have spawned important theoretical results in physics and mathematics. More speculatively, M-theory may provide a framework for developing a unified theory of all of the fundamental forces of nature. One of the deepest problems in physics is the problem of quantum gravity. The current understanding of gravity is based on Albert Einstein's general theory of relativity; however, nongravitational forces are described within the framework of quantum mechanics, a radically different formalism for describing physical phenomena based on probability. String theory is a framework that attempts to reconcile gravity and quantum mechanics. In string theory, the particles of particle physics are replaced by one-dimensional objects called strings.
String theory describes how strings propagate through space and interact with each other. In a given version of string theory, there is only one kind of string, which may look like a small loop or segment of ordinary string, and it can vibrate in different ways. On distance scales larger than the string scale, a string will look just like an ordinary particle, with its mass, charge, and other properties determined by its vibrational state. In this way, all of the different elementary particles may be viewed as vibrating strings; one of the vibrational states of a string gives rise to the graviton, a quantum mechanical particle that carries the gravitational force. There are several versions of string theory: type I, type IIA, type IIB, and two flavors of heterotic string theory. The different theories allow different types of strings, and the particles that arise at low energies exhibit different symmetries. For example, the type I theory includes both open strings and closed strings, while types IIA and IIB include only closed strings. Each of these five string theories arises as a special limiting case of M-theory. This theory, like its string theory predecessors, is an example of a quantum theory of gravity. It describes a force just like the familiar gravitational force subject to the rules of quantum mechanics. In everyday life, there are three familiar dimensions of space: height, width and depth.

26.
Supergravity
–
In theoretical physics, supergravity is a field theory that combines the principles of supersymmetry and general relativity. Together, these imply that, in supergravity, the supersymmetry is a local symmetry. Since the generators of supersymmetry are convoluted with the Poincaré group to form a super-Poincaré algebra, it can be seen that supergravity follows naturally from local supersymmetry. Like any field theory of gravity, a supergravity theory contains a field whose quantum is the graviton. Supersymmetry requires the graviton field to have a superpartner; this field has spin 3/2 and its quantum is the gravitino. The number of gravitino fields is equal to the number of supersymmetries. The first theory of local supersymmetry was proposed in 1975 by Dick Arnowitt and Pran Nath. Supergravity theories with N > 1 are usually referred to as extended supergravity. Some supergravity theories were shown to be related to certain higher-dimensional supergravity theories via dimensional reduction. In these classes of models, collectively now known as minimal supergravity Grand Unification Theories (mSUGRA), gravity mediates the breaking of SUSY through the existence of a hidden sector. mSUGRA naturally generates the soft SUSY breaking terms, which are a consequence of the super-Higgs effect; radiative breaking of electroweak symmetry through the Renormalization Group Equations follows as an immediate consequence. One of these supergravities, the 11-dimensional theory, generated considerable excitement as the first potential candidate for the theory of everything. Certain of its problems are avoided in 12 dimensions if two of these dimensions are timelike, as has often been emphasized by Itzhak Bars. Today many techniques exist to embed the standard model gauge group in supergravity in any number of dimensions. For example, in the mid and late 1980s, gauge symmetries could be obtained in type I string theory.
In type II string theory they could also be obtained by compactifying on certain Calabi–Yau manifolds; today one may also use D-branes to engineer gauge symmetries. In 1978, Eugène Cremmer, Bernard Julia and Joël Scherk found the action for an 11-dimensional supergravity theory. This remains today the only known classical 11-dimensional theory with local supersymmetry; other 11-dimensional theories are known that are quantum-mechanically inequivalent to the CJS theory, but classically equivalent. For example, in the mid 1980s Bernard de Wit and Hermann Nicolai found an alternate theory in D=11 Supergravity with Local SU(8) Invariance. In 1980, Peter Freund and M. A. Rubin showed that compactification from 11 dimensions preserving all the SUSY generators could occur in two ways, leaving only 4 or 7 macroscopic dimensions; unfortunately, the noncompact dimensions have to form an anti-de Sitter space. Many of the details of the theory were fleshed out by Peter van Nieuwenhuizen and Sergio Ferrara. The initial excitement over 11-dimensional supergravity soon waned, as various failings were discovered and attempts to repair the model failed as well. Problems included: the compact manifolds which were known at the time and which contained the standard model were not compatible with supersymmetry, and could not hold quarks or leptons.

27.
Superstring theory
–
Superstring theory is an attempt to explain all of the particles and fundamental forces of nature in one theory by modelling them as vibrations of tiny supersymmetric strings. Since the second superstring revolution, the five superstring theories are regarded as different limits of a single theory tentatively called M-theory. The development of a quantum field theory of a force invariably results in infinities; developing a quantum theory of gravity therefore requires different means than those used for the other forces. According to the theory, the fundamental constituents of reality are strings of the Planck length that vibrate at resonant frequencies. Every string, in theory, has a resonance, or harmonic; different harmonics determine different fundamental particles. The tension in a string is on the order of the Planck force. The graviton, for example, is predicted by the theory to be a string with wave amplitude zero. Since its beginnings in the late sixties, the theory has been developed through several decades of intense research and the combined effort of numerous scientists. It has developed into a broad and varied subject with connections to quantum gravity, particle and condensed matter physics, and cosmology. Superstring theory is based on supersymmetry. No supersymmetric particles have been discovered, and recent research at the LHC has placed constraints: for instance, the mass constraint on the squarks of the Minimal Supersymmetric Standard Model has been up to 1.1 TeV, and on gluinos up to 500 GeV. No report suggesting large extra dimensions has been delivered from the LHC, and there have been no principles so far to limit the number of vacua in the concept of a landscape of vacua. Our physical space is observed to have three spatial dimensions and, along with time, forms a boundless four-dimensional continuum known as spacetime. However, nothing prevents a theory from including more than 4 dimensions; in the case of string theory, consistency requires spacetime to have 10 dimensions.
If the extra dimensions are compactified, then the six extra dimensions must be in the form of a Calabi–Yau manifold. Within the more complete framework of M-theory, they would have to take the form of a G2 manifold. Calabi–Yaus are interesting mathematical spaces in their own right; a particular exact symmetry of string/M-theory called T-duality has led to the discovery of equivalences between different Calabi–Yaus, called mirror symmetry. Superstring theory is not the first theory to propose extra spatial dimensions; it can be seen as building upon the Kaluza–Klein theory, which proposed a 4+1-dimensional theory of gravity. When compactified on a circle, the gravity in the extra dimension precisely describes electromagnetism from the perspective of the 3 remaining large space dimensions. Also, obtaining a consistent, fundamental, quantum theory requires the upgrade to string theory, not just the extra dimensions. Theoretical physicists were troubled by the existence of five separate superstring theories. The five consistent superstring theories are as follows. The type I string has one supersymmetry in the ten-dimensional sense; this theory is special in the sense that it is based on unoriented open and closed strings. The type II string theories have two supersymmetries in the ten-dimensional sense.

28.
Canonical quantum gravity
–
In physics, canonical quantum gravity is an attempt to quantize the canonical formulation of general relativity. It is a Hamiltonian formulation of Einstein's general theory of relativity. Dirac's approach allows the quantization of systems that include gauge symmetries using Hamiltonian techniques in a fixed gauge choice. Newer approaches based in part on the work of DeWitt and Dirac include the Hartle–Hawking state and Regge calculus. In the Hamiltonian formulation of ordinary classical mechanics the Poisson bracket is an important concept. With the use of Poisson brackets, Hamilton's equations can be rewritten as q̇ = {q, H} and ṗ = {p, H}; these equations describe a flow or orbit in phase space generated by the Hamiltonian H. Given any phase space function F, we have dF/dt = {F, H}. Canonical classical general relativity is an example of a fully constrained theory. For canonical quantization in general terms, phase space is replaced by an appropriate Hilbert space, and the resulting quantum constraint equations are the central equations of canonical quantum general relativity, at least in the Dirac approach, which is the approach usually taken. Imposing these constraints classically amounts to admissibility conditions on the initial data; in Dirac's approach it turns out that the first class quantum constraints imposed on a wavefunction also generate gauge transformations. This symmetry arises from the requirement that the laws of general relativity cannot depend on any a-priori given space-time geometry. A more rigorous argument has been provided by Lee Smolin: “A background independent operator must always be finite. This is because the regulator scale and the background metric are always introduced together in the regularization procedure. Because of this, the dependence of the operator on the cutoff is tied to its dependence on the background metric.
When one takes the limit of the regulator parameter going to zero, one isolates the non-vanishing terms. If these have any dependence on the regulator parameter, then they must also have a dependence on the background metric. Conversely, if the terms that are nonvanishing in the limit in which the regulator is removed have no dependence on the background metric, then the operator must be finite.” In fact, as mentioned below, there is then no need for renormalization and the elimination of infinities. Canonical quantum gravity makes no assumption of a fixed background geometry and instead allows the theory itself to tell you, in principle, what the true structure of quantum space-time is. This quantization of geometric observables is in fact realized in loop quantum gravity. The function N is called the lapse function and the functions β^k are called the shift functions. Note that γ_{μν} = g_{μν} + n_μ n_ν. While this form of the Lagrangian is manifestly invariant under redefinition of the spatial coordinates, it makes general covariance opaque. The lapse function and shift functions may be eliminated by a gauge transformation; this is reflected in moving to the Hamiltonian formalism by the fact that their conjugate momenta, respectively π and π^i, vanish identically. These are called primary constraints by Dirac. A popular choice of gauge, called synchronous gauge, is N = 1 and β^i = 0, although they can, in principle, be chosen to be any function of the coordinates.
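The Poisson-bracket form of the equations of motion quoted above, dF/dt = {F, H}, can be illustrated numerically. The sketch below uses a harmonic oscillator with H = (p² + q²)/2 as a stand-in system (an assumption chosen for simplicity; canonical gravity uses the same bracket structure on a much larger phase space):

```python
# Sketch: Hamilton's equations as a Poisson-bracket flow, dF/dt = {F, H},
# illustrated for a harmonic oscillator H = (p^2 + q^2)/2 (illustrative choice).

def poisson(F, G, q, p, eps=1e-6):
    """Numerical Poisson bracket {F, G} = dF/dq dG/dp - dF/dp dG/dq."""
    dFdq = (F(q + eps, p) - F(q - eps, p)) / (2 * eps)
    dFdp = (F(q, p + eps) - F(q, p - eps)) / (2 * eps)
    dGdq = (G(q + eps, p) - G(q - eps, p)) / (2 * eps)
    dGdp = (G(q, p + eps) - G(q, p - eps)) / (2 * eps)
    return dFdq * dGdp - dFdp * dGdq

def H(q, p):
    return 0.5 * (q * q + p * p)

# Hamilton's equations: qdot = {q, H} = p and pdot = {p, H} = -q.
q, p = 1.0, 0.0
qdot = poisson(lambda q_, p_: q_, H, q, p)  # expect p, i.e. 0.0
pdot = poisson(lambda q_, p_: p_, H, q, p)  # expect -q, i.e. -1.0
print(qdot, pdot)
```

The bracket here is evaluated by central finite differences, which are exact up to rounding for the quadratic Hamiltonian used.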

29.
Loop quantum gravity
–
Loop quantum gravity (LQG) is a theory that attempts to describe the quantum properties of the universe and gravity. It is also a theory of quantum spacetime because, according to general relativity, gravity is a manifestation of the geometry of spacetime. LQG is an attempt to merge quantum mechanics and general relativity. From the point of view of Einstein's theory, it comes as no surprise that all attempts to treat gravity simply like one more quantum force have failed. According to Einstein, gravity is not a force; it is a property of space-time itself. Loop quantum gravity is an attempt to develop a quantum theory of gravity based directly on Einstein's geometrical formulation. The main output of the theory is a picture of space where space is granular. The granularity is a direct consequence of the quantization; it has the same nature as the granularity of the photons in the quantum theory of electromagnetism. Here, it is space itself that is discrete. In other words, there is a minimum distance possible to travel through it. More precisely, space can be viewed as an extremely fine fabric or network woven of finite loops. These networks of loops are called spin networks, and the evolution of a spin network over time is called a spin foam. The predicted size of this structure is the Planck length, which is approximately 10^-35 meters. According to the theory, there is no meaning to distance at scales smaller than the Planck scale. Therefore, LQG predicts that not just matter, but space itself, has an atomic structure. Today LQG is a vast area of research, developing in several directions, which involves about 30 research groups worldwide. They all share the basic physical assumptions and the mathematical description of quantum space. Research into the physical consequences of the theory is proceeding in several directions. Among these, the most well-developed is the application of LQG to cosmology, called loop quantum cosmology (LQC). LQC applies LQG ideas to the study of the early universe and the physics of the Big Bang.
Its most spectacular consequence is that the evolution of the universe can be continued beyond the Big Bang; the Big Bang thus appears to be replaced by a sort of cosmic Big Bounce. In 1986, Abhay Ashtekar reformulated Einstein's general relativity in a language closer to that of the rest of fundamental physics. Carlo Rovelli and Lee Smolin defined a nonperturbative and background-independent quantum theory of gravity in terms of these loop solutions, and in 1994, Rovelli and Smolin showed that the quantum operators of the theory associated to area and volume have a discrete spectrum.
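The Planck length quoted above follows directly from three fundamental constants, l_P = sqrt(ħG/c³). A quick check (the constant values are hardcoded CODATA-style figures, an assumption of this sketch rather than a library lookup):

```python
import math

# Planck length l_P = sqrt(hbar * G / c^3); constants hardcoded for the sketch.
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light in vacuum, m/s

l_planck = math.sqrt(hbar * G / c**3)
print(f"{l_planck:.3e} m")  # on the order of 1.6e-35 m
```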

30.
Causal dynamical triangulation
–
Causal dynamical triangulation (CDT) is an approach to quantum gravity that, like loop quantum gravity, is background independent. This means that it does not assume any pre-existing arena, but rather attempts to show how the spacetime fabric itself evolves. The Loops 05 conference, hosted by many loop quantum gravity theorists, included several presentations which discussed CDT in great depth, and it has sparked considerable interest as it appears to have a good semi-classical description. At large scales, it re-creates the familiar 4-dimensional spacetime, but it shows spacetime to be 2-dimensional near the Planck scale. These interesting results agree with the findings of Lauscher and Reuter, who use an approach called Quantum Einstein Gravity, and with other recent theoretical work. The same publication gave CDT, and its authors, a feature article in its July 2008 issue. Near the Planck scale, the structure of spacetime itself is supposed to be constantly changing due to quantum fluctuations. CDT uses a triangulation process which varies dynamically and follows deterministic rules. The results of researchers suggest that this is a good way to model the early universe. Using a structure called a simplex, it divides spacetime into tiny triangular sections; CDT avoids the problems of earlier approaches by allowing only those configurations in which the timelines of all joined edges of simplices agree. CDT is a modification of quantum Regge calculus where spacetime is discretized by approximating it with a piecewise linear manifold in a process called triangulation. In this process, a d-dimensional spacetime is considered as formed by space slices that are labeled by a discrete time variable t. Each space slice is approximated by a simplicial manifold composed of regular (d-1)-dimensional simplices, and in place of a smooth manifold there is a network of triangulation nodes, where space is locally flat but globally curved, as with the individual faces and the overall surface of a geodesic dome.
The crucial development is that the network of simplices is constrained to evolve in a way that preserves causality. This allows a path integral to be calculated non-perturbatively, by summation of all possible configurations of the simplices and, correspondingly, of all possible spatial geometries. Simply put, each individual simplex is like a building block of spacetime, but the edges that carry a time arrow must agree in direction wherever they are joined. This rule preserves causality, a feature missing from previous triangulation theories. When simplexes are joined in this way, the complex evolves in an orderly fashion, and eventually creates the observed framework of dimensions. CDT derives the observed nature and properties of spacetime from a small set of assumptions. The idea of deriving what is observed from first principles is very attractive to physicists. CDT models the character of spacetime both in the ultra-microscopic realm near the Planck scale and at the scale of the cosmos, so CDT may provide insights into the nature of reality. Evaluation of the implications of CDT relies heavily on Monte Carlo simulation by computer. Some feel that this makes CDT an inelegant quantum gravity theory. Also, it has been argued that discrete time-slicing may not accurately reproduce all possible modes of a dynamical system. However, research by Markopoulou and Smolin demonstrates that the cause for those concerns may be limited, and therefore many physicists still regard this line of reasoning as promising.

31.
Causal sets
–
The causal sets program is an approach to quantum gravity. Its founding principles are that spacetime is fundamentally discrete and that spacetime events are related by a partial order. This partial order has the physical meaning of the causality relations between spacetime events. The conformal factor that is left undetermined is related to the volume of regions in the spacetime, and this volume factor can be recovered by specifying a volume element for each space-time point. The volume of a space-time region could then be found by counting the number of points in that region. The causal sets program was initiated by Rafael Sorkin, who continues to be its main proponent. He has coined the slogan Order + Number = Geometry to characterize the above argument. The program provides a theory in which space-time is fundamentally discrete while retaining local Lorentz invariance. Ruth Kastner developed the relativistic transactional interpretation, which she argues can provide the dynamics for the causal sets program. A causal set is a set C with a partial order relation ⪯ that is: Reflexive: for all x ∈ C, we have x ⪯ x. Antisymmetric: for all x, y ∈ C, x ⪯ y ⪯ x implies x = y. Transitive: for all x, y, z ∈ C, x ⪯ y ⪯ z implies x ⪯ z. Locally finite: for all x, z ∈ C, we have card({y ∈ C : x ⪯ y ⪯ z}) < ∞, where card(A) denotes the cardinality of a set A. We'll write x ≺ y if x ⪯ y and x ≠ y. The set C represents the set of spacetime events and the order relation ⪯ represents the causal relationship between events. Although this definition uses the reflexive convention, we could have chosen the irreflexive convention in which the order relation is irreflexive. The causal relation of a Lorentzian manifold satisfies the first three conditions; it is the local finiteness condition that introduces spacetime discreteness.
Given a causal set we may ask whether it can be embedded into a Lorentzian manifold. An embedding would be a map taking elements of the causal set into points in the manifold such that the order relation of the causal set matches the causal ordering of the manifold. A further criterion is needed, however, before the embedding is suitable: if, on average, the number of causal set elements mapped into a region of the manifold is proportional to the volume of the region, then the embedding is said to be faithful. The conjecture that a causal set cannot be faithfully embedded into two manifolds that are not similar on large scales is called the Hauptvermutung, meaning fundamental conjecture; it is difficult to define this conjecture precisely because it is difficult to decide when two spacetimes are similar on large scales. Modelling spacetime as a causal set would require us to restrict attention to those causal sets that are manifold-like. Given a causal set, this is a difficult property to determine. The difficulty of determining whether a causal set can be embedded into a manifold can be approached from the other direction.
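The four conditions defining a causal set can be checked mechanically on any finite example. The sketch below tests an invented four-element "diamond" order (0 precedes 1 and 2, which both precede 3); the element labels and relation are illustrative assumptions, not taken from the article:

```python
from itertools import product

# A finite causal set: a set of elements plus a relation "precedes-or-equals".
# Invented example: the diamond poset 0 ⪯ 1 ⪯ 3 and 0 ⪯ 2 ⪯ 3.
C = {0, 1, 2, 3}
rel = {(0, 0), (1, 1), (2, 2), (3, 3),
       (0, 1), (0, 2), (0, 3), (1, 3), (2, 3)}

def is_causal_set(C, rel):
    reflexive = all((x, x) in rel for x in C)
    antisymmetric = all(not ((x, y) in rel and (y, x) in rel and x != y)
                        for x, y in product(C, C))
    transitive = all((x, z) in rel
                     for x, y, z in product(C, C, C)
                     if (x, y) in rel and (y, z) in rel)
    # Local finiteness: every causal interval {y : x ⪯ y ⪯ z} is finite.
    # Automatic for finite C, but spelled out here to mirror the definition.
    locally_finite = all(
        len({y for y in C if (x, y) in rel and (y, z) in rel}) < float("inf")
        for x, z in product(C, C))
    return reflexive and antisymmetric and transitive and locally_finite

print(is_causal_set(C, rel))  # True
```

Removing a pair such as (0, 3) from the relation breaks transitivity, and the checker rejects it.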

32.
Spin foam
–
In physics, a spin foam is a topological structure that arises in the path-integral description of quantum gravity. It is closely related to loop quantum gravity. Loop quantum gravity has a covariant formulation that, at present, takes the form of a spin foam model; this is a quantum field theory where the invariance under diffeomorphisms of general relativity is implemented. The resulting path integral represents a sum over all possible configurations of the geometry. A spin network is a graph, together with labels on its vertices and edges, which encodes aspects of a spatial geometry. A spin network is defined as a diagram that makes a basis of connections between the elements of a manifold for the Hilbert spaces defined over them. Spin networks provide a representation for computations of amplitudes between two different hypersurfaces of the manifold. Any evolution of a spin network provides a spin foam over a manifold of one dimension higher than the dimensions of the corresponding spin network. A spin foam may be viewed as a quantum history. Spin networks provide a language to describe the quantum geometry of space; spin foam does the same job for spacetime. Spacetime can be defined as a superposition of spin foams, which is a generalized Feynman diagram where instead of a graph, a higher-dimensional complex is used. In topology this sort of space is called a 2-complex. A spin foam is a type of 2-complex, with labels for vertices, edges and faces. The boundary of a spin foam is a spin network, just as in the theory of manifolds, where the boundary of an n-manifold is an (n-1)-manifold. In loop quantum gravity, the present spin foam theory has been inspired by the Ponzano–Regge model. Quantization of the structure leads to a generalized Feynman path integral over connected paths of spin networks between spin network boundaries. The idea was reintroduced and later developed into the Barrett–Crane model.

33.
Superfluid vacuum theory
–
The microscopic structure of this physical vacuum is currently unknown and is a subject of intensive studies in superfluid vacuum theory (SVT). The concept of a luminiferous aether as a medium sustaining electromagnetic waves was discarded after the advent of the theory of relativity. The aether, as conceived in classical physics, leads to contradictions; in particular, a classical aether with a definite velocity at each point would single out a preferred direction, which conflicts with the requirement that all directions within a light cone are equivalent. However, as early as 1951, P. A. M. Dirac published two papers where he pointed out that we should take into account quantum fluctuations in the flow of the aether. His arguments involve the application of the uncertainty principle to the velocity of the aether at any space-time point: the velocity will not be a well-defined quantity but, in fact, will be distributed over various possible values. At best, one could represent the aether by a wave function representing the perfect vacuum state for which all aether velocities are equally probable. These works can be regarded as the starting point of the theory. Inspired by the Dirac ideas, K. P. Sinha and collaborators noted that particle-like small fluctuations of a superfluid background obey the Lorentz symmetry, even if the superfluid itself is non-relativistic. Nevertheless, they decided to treat the superfluid as relativistic matter, by putting it into the stress-energy tensor of the Einstein field equations. This did not allow them to describe the relativistic gravity as a small fluctuation of the superfluid vacuum. Since then, several theories have been proposed within the SVT framework; they differ in how the structure and properties of the background superfluid must look. In the absence of data which would rule out some of them, these theories are being pursued independently. Further, in the theory of relativity the Galilean symmetry arises as an approximate one, valid when particle velocities are small compared to the speed of light in vacuum.
To summarize, the fluctuations of the vacuum superfluid behave like relativistic objects at small momenta, where E² ∝ |p→|², and like non-relativistic ones at large momenta; the yet unknown nontrivial physics is believed to be located somewhere between these two regimes. In the relativistic quantum field theory the physical vacuum is also assumed to be some sort of non-trivial medium to which one can associate a certain energy. This is because the concept of truly empty space contradicts the postulates of quantum mechanics. According to QFT, even in the absence of particles the background is always filled by pairs of virtual particles that are continually created and annihilated.
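A concrete example of how a non-relativistic superfluid can mimic relativistic behaviour at small momenta is the Bogoliubov dispersion law of a weakly interacting Bose condensate (a standard result, quoted here as an illustration; c_s is the speed of sound and m the constituent mass):

```latex
E(p) = \sqrt{\,c_s^2\, p^2 + \left(\frac{p^2}{2m}\right)^{\!2}\,}
\;\xrightarrow[\;p \to 0\;]{}\; c_s\, |p| ,
\qquad\text{so}\qquad E^2 \propto |\vec{p}\,|^2 \text{ at small } p .
```

At large momenta the quadratic term dominates and the spectrum crosses over to the ordinary non-relativistic form E ≈ p²/2m, the second of the two regimes.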

34.
Quantum cosmology
–
Quantum cosmology is the attempt in theoretical physics to develop a quantum theory of the Universe. This approach attempts to answer open questions of classical physical cosmology. Classical cosmology is based on Albert Einstein's general theory of relativity, and it describes the evolution of the universe very well, as long as you do not approach the Big Bang. It is at the singularity and the Planck time where relativity theory fails to provide what must be demanded of a final theory of space and time. Therefore, a theory is needed that integrates relativity theory and quantum theory; such an approach is attempted, for instance, with loop quantum gravity, and with another approach, string theory.

35.
Gravity
–
Gravity, or gravitation, is a natural phenomenon by which all things with mass are brought toward one another, including planets, stars and galaxies. Since energy and mass are equivalent, all forms of energy, including light, also cause gravitation and are under its influence. On Earth, gravity gives weight to physical objects and causes the ocean tides. Gravity has an infinite range, although its effects become increasingly weaker on farther objects. In general relativity, gravity is described as curvature of spacetime; the most extreme example of this curvature is a black hole, from which nothing can escape once past its event horizon. Gravity also produces time dilation, where time lapses more slowly at a lower gravitational potential. Gravity is the weakest of the four fundamental interactions of nature: the gravitational attraction is approximately 10^38 times weaker than the strong force, 10^36 times weaker than the electromagnetic force and 10^29 times weaker than the weak force. As a consequence, gravity has a negligible influence on the behavior of subatomic particles. On the other hand, gravity is the dominant interaction at the macroscopic scale. For this reason, in part, the pursuit of a theory of everything, the merging of the general theory of relativity and quantum mechanics into quantum gravity, has become an area of active research. While the modern European thinkers are credited with the development of gravitational theory, some of the earliest descriptions came from early mathematician-astronomers, such as Aryabhata, who identified the force of gravity to explain why objects do not fall out when the Earth rotates. Later, the works of Brahmagupta referred to the presence of this force and described it as an attractive force. Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries. This was a major departure from Aristotle's belief that heavier objects have a higher gravitational acceleration.
Galileo postulated air resistance as the reason that objects with less mass may fall more slowly in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity. In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Calculations by both John Couch Adams and Urbain Le Verrier predicted the position of the planet. A discrepancy in Mercury's orbit pointed out flaws in Newton's theory; the issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit. The simplest way to test the equivalence principle is to drop two objects of different masses or compositions in a vacuum and see whether they hit the ground at the same time. Such experiments demonstrate that all objects fall at the same rate when other forces are negligible.
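The drop test described above has a simple kinematic expression: in vacuum, the time to fall from height h is t = sqrt(2h/g), which contains no mass term at all. A minimal sketch (the height and the value of g are assumptions for illustration):

```python
import math

def fall_time(height_m, g=9.81):
    """Time for an object to fall height_m in vacuum. Note that the function
    takes no mass argument: fall time is mass-independent, which is the content
    of the (weak) equivalence principle."""
    return math.sqrt(2 * height_m / g)

# A feather and a hammer dropped from 1.5 m in vacuum land together:
t = fall_time(1.5)
print(f"{t:.3f} s")  # ~0.553 s
```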

36.
Newton's law of universal gravitation
–
Newton's law of universal gravitation is a general physical law derived from empirical observations by what Isaac Newton called inductive reasoning. It is a part of classical mechanics and was formulated in Newton's work Philosophiæ Naturalis Principia Mathematica. In modern language, the law states: every point mass attracts every single other point mass by a force pointing along the line intersecting both points. The force is proportional to the product of the two masses and inversely proportional to the square of the distance between them. The first test of Newton's theory of gravitation between masses in the laboratory was the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798; it took place 111 years after the publication of Newton's Principia. Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used to calculate the magnitude of the electrical force arising between two charged bodies. Both are inverse-square laws, where force is inversely proportional to the square of the distance between the bodies. Coulomb's law has the product of two charges in place of the product of the masses, and an electrostatic constant in place of the gravitational constant. Newton's law has since been superseded by Albert Einstein's theory of general relativity. At the same time, Hooke agreed that the Demonstration of the Curves generated thereby was wholly Newton's. In this way the question arose as to what, if anything, Newton owed to Hooke; this is a subject extensively discussed since that time, and one on which some points, outlined below, continue to excite controversy. Hooke wrote of gravitating powers that attract their own parts as well as other bodies within their sphere of activity, and that these powers are so much the more powerful in operating the nearer the body wrought upon is to their own centers. Thus Hooke clearly postulated mutual attractions between the Sun and planets, in a way that increased with nearness to the attracting body. Hooke's statements up to 1674 made no mention, however, that an inverse square law applies or might apply to these attractions.
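The statement of the law above translates directly into a formula, F = G·m₁·m₂/r². A minimal sketch (the Earth-Moon masses and distance below are rounded textbook figures, assumed for illustration):

```python
G = 6.674e-11  # gravitational constant, N m^2 kg^-2

def gravitational_force(m1, m2, r):
    """Newton's law of universal gravitation for point masses:
    F = G * m1 * m2 / r^2."""
    return G * m1 * m2 / r**2

# Earth-Moon attraction (rounded illustrative values):
m_earth, m_moon, d = 5.972e24, 7.348e22, 3.844e8
F = gravitational_force(m_earth, m_moon, d)
print(f"{F:.3e} N")  # roughly 2e20 N
```

Doubling the separation reduces the force by a factor of four, which is the inverse-square property the text describes.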
Hooke's gravitation was also not yet universal, though it approached universality more closely than previous hypotheses, and he did not provide accompanying evidence or mathematical demonstration. It was only later, in writing on 6 January 1679/80 to Newton, that Hooke stated his inverse square hypothesis. Newton, faced in May 1686 with Hooke's claim on the inverse square law, denied that Hooke was to be credited as author of the idea. Among the reasons, Newton recalled that the idea had been discussed with Sir Christopher Wren previous to Hooke's 1679 letter; Newton also pointed out and acknowledged prior work of others, including Bullialdus and Borelli. D. T. Whiteside has described the contribution to Newton's thinking that came from Borelli's book, a copy of which was in Newton's library at his death. Newton further defended his work by saying that had he first heard of the inverse square proportion from Hooke, he would still have some rights to it in view of his mathematical demonstrations of its accuracy. Hooke, without evidence in favor of the supposition, could only guess that the inverse square law was approximately valid at great distances from the center. Thus Newton gave a justification, otherwise lacking, for applying the inverse square law to large spherical planetary masses as if they were tiny particles. After his 1679–1680 correspondence with Hooke, Newton adopted the language of inward or centripetal force; his analyses also involved the combination of tangential and radial displacements, which Newton was making in the 1660s. The lesson offered by Hooke to Newton here, although significant, was one of perspective and did not change the analysis. This background shows there was a basis for Newton to deny deriving the inverse square law from Hooke; on the other hand, Newton did accept and acknowledge, in all editions of the Principia, that Hooke had separately appreciated the inverse square law in the solar system.
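The parallel between the two inverse-square laws described above can be made concrete in a short sketch. The constants are standard approximate values; the function names and the rough Earth–Moon figures are illustrative, not from the article:

```python
G = 6.674e-11  # gravitational constant, N m^2 kg^-2
K = 8.988e9    # Coulomb constant, N m^2 C^-2

def newton_force(m1, m2, r):
    """Gravitational attraction between two point masses."""
    return G * m1 * m2 / r**2

def coulomb_force(q1, q2, r):
    """Electrostatic force between two point charges -- same 1/r^2 form,
    with charges in place of masses and K in place of G."""
    return K * q1 * q2 / r**2

# Doubling the separation quarters either force (inverse-square behaviour).
f_near = newton_force(5.97e24, 7.35e22, 3.84e8)   # rough Earth-Moon figures
f_far = newton_force(5.97e24, 7.35e22, 7.68e8)
print(f_near / f_far)  # 4.0
```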

37.
Introduction to general relativity
–
General relativity is a theory of gravitation that was developed by Albert Einstein between 1907 and 1915. According to general relativity, the gravitational effect between masses results from their warping of spacetime. By the beginning of the 20th century, Newton's law of gravitation had been accepted for more than two hundred years as a valid description of the gravitational force between masses. In Newton's model, gravity is the result of a force between massive objects, although even Newton was troubled by the unknown nature of that force. General relativity also predicts novel effects of gravity, such as gravitational waves and gravitational lensing. Many of these predictions have been confirmed by experiment or observation, and general relativity has developed into an essential tool in modern astrophysics. It provides the foundation for the current understanding of black holes, whose strong gravity is thought to be responsible for the intense radiation emitted by certain types of astronomical objects. General relativity is also part of the framework of the standard Big Bang model of cosmology. Although general relativity is not the only relativistic theory of gravity, it is the simplest such theory that is consistent with the experimental data. In September 1905, Albert Einstein published his theory of special relativity, which introduced a new framework for all of physics by proposing new concepts of space and time. Some then-accepted physical theories were inconsistent with that framework; a key example was Newton's theory of gravity. Several physicists, including Einstein, searched for a theory that would reconcile Newton's law of gravity and special relativity; only Einstein's theory proved to be consistent with experiments and observations. A person in a free-falling elevator experiences weightlessness: objects either float motionless or drift at constant speed. Since everything in the elevator is falling together, no gravitational effect can be observed.
In this way, the experiences of an observer in free fall are indistinguishable from those of an observer in deep space, far from any significant source of gravity. Such observers are the privileged observers Einstein described in his theory of special relativity: observers for whom light travels along straight lines at constant speed. Roughly speaking, the equivalence principle states that a person in a free-falling elevator cannot tell that they are in free fall; every experiment in such an environment has the same results as it would for an observer at rest or moving uniformly in deep space. Most effects of gravity vanish in free fall, but effects that seem the same as those of gravity can be produced by an accelerated frame of reference. If objects are falling to the floor because the room is aboard a rocket accelerating in space, the objects are being pulled towards the floor by the same inertial force that presses the driver of an accelerating car into the back of his seat.

38.
History of general relativity
–
General relativity is a theory of gravitation that was developed by Albert Einstein between 1907 and 1915, with contributions by many others after 1915. According to general relativity, the gravitational attraction between masses results from the warping of space and time by those masses. Within a century of Newton's formulation, careful astronomical observation revealed unexplainable variations between the theory and the observations. Under Newton's model, gravity was the result of an attractive force between massive objects, although even Newton was bothered by the unknown nature of that force. General relativity also predicts novel effects of gravity, such as gravitational waves and gravitational lensing. Many of these predictions have been confirmed by experiment or observation, and general relativity has developed into an essential tool in modern astrophysics. It provides the foundation for the current understanding of black holes, whose strong gravity is thought to be responsible for the intense radiation emitted by certain types of astronomical objects. General relativity is also part of the framework of the standard Big Bang model of cosmology. While still working at the patent office in 1907, Einstein had what he would call his happiest thought: he realized that the principle of relativity could be extended to gravitational fields. Consequently, in 1907 he wrote an article on acceleration under special relativity. In that article, he argued that free fall is really inertial motion; this argument is called the equivalence principle. In the same article, Einstein also predicted the phenomenon of gravitational time dilation. In 1911, Einstein published another article expanding on the 1907 article, in which he used special relativity to see that the rate of clocks at the top of a box accelerating upward would be faster than the rate of clocks at the bottom. He concluded that the rates of clocks depend on their position in a gravitational field.
The deflection of light by massive bodies was also predicted; although the approximation was crude, it allowed him to calculate that the deflection is nonzero. German astronomer Erwin Finlay-Freundlich publicized Einstein's challenge to scientists around the world, which urged astronomers to detect the deflection of light during a solar eclipse and gave Einstein confidence that the scalar theory of gravity proposed by Gunnar Nordström was incorrect. But the actual value for the deflection that he calculated was too small by a factor of two, because the approximation he used does not work well for things moving at near the speed of light. When Einstein finished the full theory of general relativity, he would rectify this error. Another of Einstein's notable thought experiments about the nature of the gravitational field is that of the rotating disk.
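The factor-of-two discrepancy mentioned above can be checked with the standard formulas: the 1911 calculation gives a grazing-ray deflection of 2GM/(c²R), while the full 1915 theory gives 4GM/(c²R). A rough numerical sketch, with approximate solar values assumed for illustration:

```python
import math

GM_SUN = 1.327e20  # solar gravitational parameter, m^3 s^-2
R_SUN = 6.96e8     # solar radius, m
C = 2.998e8        # speed of light, m/s
ARCSEC = 180 * 3600 / math.pi  # radians -> arcseconds

deflection_1911 = 2 * GM_SUN / (C**2 * R_SUN) * ARCSEC  # equivalence principle only
deflection_1915 = 4 * GM_SUN / (C**2 * R_SUN) * ARCSEC  # full general relativity

print(round(deflection_1911, 2))  # ~0.88 arcseconds
print(round(deflection_1915, 2))  # ~1.75 arcseconds, the value measured in 1919
```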

39.
Mathematics of general relativity
–
The mathematics of general relativity refers to various mathematical structures and techniques that are used in studying and formulating Albert Einstein's theory of general relativity. The main tools used in this theory of gravitation are tensor fields defined on a Lorentzian manifold representing spacetime. This article is a general description of the mathematics of general relativity. Note: general relativity articles using tensors will use the abstract index notation. The term general covariance was used in the early formulation of general relativity; this will be discussed further below. Most modern approaches to mathematical general relativity begin with the concept of a manifold. More precisely, the basic physical construct representing gravitation, a curved spacetime, is modelled by a four-dimensional, smooth, connected Lorentzian manifold; other physical descriptors are represented by various tensors, discussed below. The rationale for choosing a manifold as the fundamental mathematical structure is to reflect desirable physical properties. For example, in the theory of manifolds, each point is contained in a coordinate chart, and the idea of coordinate charts as local observers who can perform measurements in their vicinity also makes good physical sense; for cosmological problems, a coordinate chart may be quite large. An important distinction in physics is the difference between local and global structures. An important problem in general relativity is to tell when two spacetimes are the same, at least locally. This problem has its roots in manifold theory, where determining whether two Riemannian manifolds of the same dimension are locally isometric is a nontrivial problem. This latter problem has been solved, and its adaptation for general relativity is called the Cartan–Karlhede algorithm. One of the profound consequences of relativity theory was the abolition of privileged reference frames: the description of physical phenomena should not depend upon who does the measuring, and one reference frame should be as good as any other.
Special relativity demonstrated that no inertial reference frame was preferential to any other inertial reference frame; general relativity eliminated preference for inertial reference frames by showing that there is no preferred reference frame for describing nature. Any observer can make measurements, and the precise numerical quantities obtained depend only on the coordinate system used. This suggested a way of formulating relativity using invariant structures, those that are independent of the coordinate system used. The most suitable mathematical structure seemed to be a tensor; mathematically, tensors are generalised linear operators, that is, multilinear maps.
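The invariance idea can be illustrated with the simplest tensorial quantity, the spacetime interval of special relativity, which keeps its value under a change of inertial frame. A minimal sketch in 1+1 dimensions, using units with c = 1; the function names are illustrative:

```python
import math

def interval(t, x, c=1.0):
    """Spacetime interval s^2 = -c^2 t^2 + x^2 in 1+1 dimensions."""
    return -(c * t) ** 2 + x ** 2

def boost(t, x, v, c=1.0):
    """Lorentz boost with velocity v: a change of inertial observer."""
    gamma = 1.0 / math.sqrt(1 - (v / c) ** 2)
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

# The coordinates change, but the invariant quantity s^2 does not:
t, x = 3.0, 4.0
t2, x2 = boost(t, x, v=0.6)
print(interval(t, x), round(interval(t2, x2), 9))  # 7.0 7.0
```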

40.
Tests of general relativity
–
At its introduction in 1915, the general theory of relativity did not have a solid empirical foundation. Beginning in 1974, Hulse, Taylor and others have studied the behaviour of binary pulsars experiencing much stronger gravitational fields than those found in the Solar System. Both in the weak field limit and with the stronger fields present in systems of binary pulsars, the predictions of general relativity have been extremely well tested locally. As a consequence of the equivalence principle, Lorentz invariance holds locally in non-rotating, freely falling reference frames. Experiments related to Lorentz invariance, and thus special relativity, are described in Tests of special relativity. In February 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a black hole merger; this discovery, along with a second detection announced in June 2016, tested general relativity in the strong field limit. Einstein proposed three classical tests, commenting that the chief attraction of the theory lies in its logical completeness: if a single one of the conclusions drawn from it proves wrong, it must be given up. Under Newtonian physics, a two-body system consisting of a lone object orbiting a spherical mass would trace out an ellipse with the spherical mass at a focus, and the point of closest approach, called the periapsis, is fixed. A number of effects in the Solar System nevertheless cause the perihelia of planets to precess around the Sun; the principal cause is the presence of other planets, which perturb one another's orbits. Mercury deviates from the precession predicted from these Newtonian effects; this anomalous rate of precession of the perihelion of Mercury's orbit was first recognized in 1859 as a problem in celestial mechanics, by Urbain Le Verrier.
A number of ad hoc and ultimately unsuccessful solutions were proposed. In general relativity, this remaining precession, or change of orientation of the orbital ellipse within its orbital plane, is explained by gravitation being mediated by the curvature of spacetime. Einstein showed that general relativity agrees closely with the observed amount of perihelion shift, and this was a powerful factor motivating the adoption of general relativity. Although earlier measurements of planetary orbits were made using conventional telescopes, the total observed precession of Mercury, relative to the inertial ICRF, is 574.10 ± 0.65 arc-seconds per century. This precession can be attributed to several causes; the general-relativistic correction of 42.98 arc-seconds is 3/2 times the prediction obtained with PPN parameters γ = β = 0, so the anomalous part of the effect can be fully explained by general relativity. More recent calculations based on more precise measurements have not materially changed the situation.
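The general-relativistic share of the precession can be reproduced from the standard leading-order formula ε = 24π³a²/(T²c²(1−e²)) per orbit. A rough check with approximate orbital data for Mercury; the figures used are standard approximate values, not taken from the article:

```python
import math

# Approximate orbital data for Mercury
A = 5.79e10          # semi-major axis, m
T = 87.969 * 86400   # orbital period, s
E = 0.2056           # eccentricity
C = 2.998e8          # speed of light, m/s

# Leading-order GR perihelion advance per orbit (radians):
eps = 24 * math.pi**3 * A**2 / (T**2 * C**2 * (1 - E**2))

orbits_per_century = 36525 * 86400 / T
arcsec_per_century = eps * orbits_per_century * (180 * 3600 / math.pi)
print(round(arcsec_per_century, 1))  # ~43.0 arcseconds per century
```

The result lands on the 42.98″ figure quoted above, to within the accuracy of the input data.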

41.
Parameterized post-Newtonian formalism
–
Post-Newtonian formalism is a calculational tool that expresses Einstein's equations of gravity in terms of the lowest-order deviations from Newton's law of universal gravitation. It allows approximations to Einstein's equations to be made in the case of weak fields; higher-order terms can be added to increase accuracy, but for strong fields it is sometimes preferable to solve the complete equations numerically. In the limit in which the speed of gravity becomes infinite, it reduces to Newton's law of gravitation. The formalism is used as a tool to compare Newtonian and Einsteinian gravity in the limit in which the gravitational field is weak. In general, the PPN formalism can be applied to all theories of gravitation in which all bodies satisfy the Einstein equivalence principle. The speed of light remains constant in the PPN formalism, and it assumes that the metric tensor is always symmetric. The earliest parameterizations of the post-Newtonian approximation were performed by Sir Arthur Stanley Eddington in 1922; however, they dealt solely with the vacuum gravitational field outside an isolated spherical body. Ken Nordtvedt expanded this to include 7 parameters, and Clifford Martin Will introduced a stressed, continuous matter description of celestial bodies. The versions described here are based on Wei-Tou Ni, Will and Nordtvedt, and Charles W. Misner et al.; ten post-Newtonian parameters completely characterize the weak-field behavior of the theory. The formalism has been a valuable tool in tests of general relativity. In the notation of Will, Ni and Misner et al., the quantities have the following meanings: g_μν is the 4 by 4 symmetric metric tensor; ζ1, ζ2, ζ3, ζ4 and α3 measure the failure of conservation of energy and momentum; ϵ is of the order of potentials such as U, or of the square magnitude of the coordinate velocities of matter; w^i is the velocity vector of the PPN coordinate system relative to the mean rest-frame of the universe; w² = δ_ij w^i w^j is the square magnitude of that velocity; and δ_ij = 1 if i = j, 0 otherwise.
There are ten metric potentials: U, U_ij, Φ_W, A, Φ1, Φ2, Φ3, Φ4, V_i and W_i. Ten linear equations in ten unknowns are solved by inverting a 10 by 10 matrix. Step 2: set the cosmological boundary conditions, assuming a homogeneous, isotropic cosmology with isotropic coordinates in the rest frame of the universe; a complete cosmological solution may or may not be needed. Call the results g⁰_μν = diag(…), ϕ0, K_μ and B_μν. Step 3: obtain the new variables from h_μν = g_μν − g⁰_μν, where g⁰_μν is the background metric. Step 4: substitute these forms into the field equations, keeping only such terms as are necessary to obtain a final consistent solution for h_μν.
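As a small illustration of how the parameters feed into observable quantities: in fully conservative theories the perihelion shift scales with the standard PPN combination (2 + 2γ − β)/3, so γ = β = 1 (general relativity) reproduces Mercury's full 42.98″ correction, while γ = β = 0 leaves only two-thirds of it, matching the 3/2 factor quoted in the tests-of-relativity discussion. A sketch; the function name is illustrative:

```python
def ppn_perihelion_factor(gamma, beta):
    """Fraction of the full GR perihelion shift predicted by a theory
    with PPN parameters gamma and beta (fully conservative case)."""
    return (2 + 2 * gamma - beta) / 3

GR_SHIFT = 42.98  # Mercury's relativistic perihelion shift, arcsec/century

# General relativity has gamma = beta = 1 and reproduces the full shift:
print(GR_SHIFT * ppn_perihelion_factor(1, 1))  # 42.98
# With gamma = beta = 0 only two-thirds survives, i.e. the GR value is
# 3/2 times that prediction:
print(round(GR_SHIFT / (GR_SHIFT * ppn_perihelion_factor(0, 0)), 6))  # 1.5
```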

42.
Linearized gravity
–
In linearized gravity the metric tensor g of spacetime is treated as the sum of an exact solution of Einstein's equations and a perturbation h: g = η + h, where η is the background metric that is being perturbed about. The perturbation is treated using the methods of perturbation theory, linearized by ignoring all terms of order higher than one in the perturbation. The Einstein field equations (EFE), being nonlinear in the metric, are difficult to solve exactly; the linearized equations are linear in the perturbation, so the sum of two solutions of the linearized EFE is also a solution. The idea of ignoring the nonlinear part is thus encapsulated in this linearization procedure. The method is used to derive the Newtonian limit, including the first corrections, and for the derivation of the existence of gravitational waves that led, after quantization, to gravitons. This approximation is also known as the weak-field approximation, as it is valid only if the perturbation h is very small. Note that indices are raised and lowered with respect to η and not g; this is the standard practice in linearized gravity. The way of thinking in linearized gravity is this: the background metric η is the fixed metric, and h is treated as a field propagating on the spacetime it defines. The approximation can also be used to derive Newtonian gravity as the weak-field approximation of Einsteinian gravity: the equations are obtained by assuming the spacetime metric is only slightly different from some baseline metric, so that the difference in the metrics can be considered as a field on the baseline metric whose behaviour is approximated by a set of linear equations. h must be small compared to η: |h_μν| ≪ 1. One then ignores all products of h with h or its derivatives, and it is further assumed in this approximation scheme that all indices of h and its derivatives are raised and lowered with η. The perturbation h is clearly symmetric, since g and η are. To solve the linearized field equations, they can be rewritten as Δh_bd = −(16πG/c⁴) T_bd + (1/c²) ∂²h_bd/∂t², where Δ is the Laplacian on a spatial slice.
If the stress-energy changes slowly, this gives h_bd = −(1/4π) ∫ (−16πG/c⁴) T_bd(s) / |r − s| d³s as a generalization of the Newtonian formula for the gravitational potential. The equation is solved iteratively, by first replacing the time-derivative term by zero. The linearized EFE are used primarily in the theory of gravitational radiation. See also: correspondence principle, gravitoelectromagnetism, Lanczos tensor, parameterized post-Newtonian formalism, post-Newtonian expansion, quasinormal mode.
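The rule that indices of h are handled with η, and that terms of order h² are dropped, can be checked numerically on a toy diagonal metric: the linearized inverse g⁻¹ ≈ η⁻¹ − h (indices raised with η) differs from the exact inverse only at second order in h. A sketch with illustrative numbers:

```python
# Toy diagonal example: eta = diag(-1, 1), perturbation h = diag(eps, eps).
eps = 1e-3
eta = [-1.0, 1.0]
h = [eps, eps]

g = [eta[i] + h[i] for i in range(2)]           # full metric g = eta + h
g_inv_exact = [1.0 / g[i] for i in range(2)]    # exact inverse
# Linearized inverse g^{ab} ~ eta^{ab} - h^{ab}, where the indices of h
# are raised with eta (not g): h^{ab} = eta^{ac} eta^{bd} h_{cd}.
h_upper = [h[i] / (eta[i] * eta[i]) for i in range(2)]
g_inv_lin = [1.0 / eta[i] - h_upper[i] for i in range(2)]

# The discrepancy is of order h^2 -- exactly the terms linearization drops.
errors = [abs(g_inv_exact[i] - g_inv_lin[i]) for i in range(2)]
print(all(e < 2 * eps**2 for e in errors))  # True
```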

43.
ADM formalism
–
The ADM formalism was first published in 1959. The formalism supposes that spacetime is foliated into a family of spacelike surfaces Σ_t, labeled by their time coordinate t. The dynamic variables of this theory are taken to be the metric tensor of the three-dimensional spatial slices, γ_ij, and their conjugate momenta π^ij; using these variables it is possible to define a Hamiltonian. In addition to the twelve variables γ_ij and π^ij, there are four Lagrange multipliers: the lapse function N and the components of the shift vector field, N_i. These describe how each of the leaves Σ_t of the foliation of spacetime is welded together. The equations of motion for these variables can be freely specified; this freedom corresponds to the freedom to specify how to lay out the coordinate system in space and time. The text here uses Einstein notation, in which summation over repeated indices is assumed. Two types of derivatives are used: partial derivatives are denoted either by the operator ∂_i or by subscripts preceded by a comma, and covariant derivatives are denoted either by the operator ∇_i or by subscripts preceded by a semicolon. The absolute value of the determinant of the matrix of metric tensor coefficients is represented by g, and other tensor symbols written without indices represent the trace of the corresponding tensor, such as π = g_ij π^ij. The starting point is the Lagrangian from the Einstein–Hilbert action, and the desired outcome of the derivation is to define an embedding of three-dimensional spatial slices in the four-dimensional spacetime. The metric of the three-dimensional slices, g_ij, will serve as the generalized coordinates for a Hamiltonian formulation, and the conjugate momenta π^ij can then be computed using standard techniques; their explicit form involves the Christoffel symbols Γ⁰_ij associated with the metric of the full four-dimensional spacetime. The lapse N = (−g⁰⁰)^(−1/2) and the shift vector N_i = g_0i are read off from the elements of the four-metric tensor.
Having identified the quantities for the formulation, the next step is to rewrite the Lagrangian in terms of these variables. Note also that the lapse and the shift appear in the Hamiltonian as Lagrange multipliers. In the quantum theory the dynamical variables are promoted to operators, denoted by hats, which leads to the Wheeler–DeWitt equation. There are relatively few known exact solutions to the Einstein field equations; in order to find other solutions, there is an active field of study known as numerical relativity, in which supercomputers are used to find approximate solutions to the equations. In order to construct such solutions numerically, most researchers start with a formulation of the Einstein equations closely related to the ADM formulation; the most common approaches start with an initial value problem based on the ADM formalism. In Hamiltonian formulations, the basic point is the replacement of a set of second-order equations by a first-order set. This second set of equations can be obtained from a Hamiltonian formulation in an easy way, which is very useful for numerical physics, because reducing the order of the differential equations is necessary if we want to prepare the equations for a computer.
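The role of the lapse and shift can be illustrated by assembling the four-metric from ADM data via ds² = −N²dt² + γ_ij(dx^i + N^i dt)(dx^j + N^j dt), which gives g_00 = −N² + γ_ij N^i N^j and g_0i = N_i. A toy sketch with a diagonal spatial metric; all values are illustrative:

```python
import math

def adm_metric(N, shift, gamma):
    """Assemble the time-time and time-space components of the 4-metric
    from ADM data, for a diagonal spatial metric gamma. `shift` holds
    the contravariant components N^i; N_i = gamma_ij N^j."""
    n_lower = [gamma[i] * shift[i] for i in range(3)]
    g00 = -N**2 + sum(n_lower[i] * shift[i] for i in range(3))
    return g00, n_lower, gamma

N, shift, gamma = 2.0, [0.1, 0.0, 0.0], [1.0, 1.0, 1.0]
g00, g0i, gij = adm_metric(N, shift, gamma)

# For this metric the inverse has g^{00} = -1/N^2, so the lapse is
# recovered as N = (-g^{00})^(-1/2):
g00_upper = -1.0 / N**2
print(math.isclose((-g00_upper) ** -0.5, N))  # True
```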

44.
Theory of everything
–
Finding a ToE is one of the major unsolved problems in physics. Over the past few centuries, two theoretical frameworks have been developed that, as a whole, most closely resemble a ToE; these two theories, upon which all modern physics rests, are general relativity and quantum field theory. GR is a framework that focuses only on gravity for understanding the universe in regions of both large scale and high mass: stars, galaxies, clusters of galaxies, etc. QFT successfully implemented the Standard Model and unified the interactions between the three non-gravitational forces: the weak, strong, and electromagnetic forces. Through years of research, physicists have experimentally confirmed with tremendous accuracy virtually every prediction made by the two theories within their appropriate domains of applicability. In accordance with their findings, scientists also learned that GR and QFT, as they are currently formulated, are mutually incompatible. Since the usual domains of applicability of GR and QFT are so different, most situations require that only one of the two theories be used; in pursuit of unifying them, quantum gravity has become an area of active research. Eventually a single explanatory framework, called string theory, has emerged that intends to be the ultimate theory of the universe. String theory posits that at the beginning of the universe, the four fundamental forces were once a single fundamental force. According to string theory, every particle in the universe, at its most microscopic level, consists of varying combinations of vibrating strings; string theory further claims that it is through these specific oscillatory patterns of strings that a particle of unique mass and force charge is created. Initially, the term theory of everything was used with an ironic connotation to refer to various overgeneralized theories. For example, a grandfather of Ijon Tichy, a character from a cycle of Stanisław Lem's science fiction stories of the 1960s, was known to work on the General Theory of Everything. Physicist John Ellis claims to have introduced the term into the technical literature in an article in Nature in 1986.
Over time, the term stuck in popularizations of theoretical physics research. In ancient Greece, pre-Socratic philosophers speculated that the apparent diversity of observed phenomena was due to a single type of interaction, namely the motions and collisions of atoms. The concept of the atom, introduced by Democritus, was an early philosophical attempt to unify all phenomena observed in nature. Archimedes was possibly the first scientist known to have described nature with axioms, and he thus tried to describe everything starting from a few axioms; any modern theory of everything is similarly expected to be based on axioms. In the late 17th century, Isaac Newton's description of the long-distance force of gravity implied that not all forces in nature result from things coming into contact. Laplace thus envisaged a combination of gravitation and mechanics as a theory of everything. Modern quantum mechanics implies that uncertainty is inescapable, and thus that Laplace's vision has to be amended: a theory of everything must include gravitation and quantum mechanics.

45.
Modified Newtonian dynamics
–
In physics, Modified Newtonian Dynamics (MOND) is a theory that proposes a modification of Newton's laws to account for observed properties of galaxies. In MOND, violation of Newton's laws occurs at extremely small accelerations. MOND is an example of a class of theories known as modified gravity. Since Milgrom's original proposal, MOND has successfully predicted a variety of galactic phenomena that are difficult to understand from a dark matter perspective; however, MOND and its generalisations do not adequately account for observed properties of galaxy clusters. Several independent observations point to the fact that the visible mass in galaxies and galaxy clusters is insufficient to account for their dynamics when analysed using Newton's laws. The discrepancy can be resolved either by postulating unseen matter or by modifying the dynamical laws: the former leads to the dark matter hypothesis, the latter leads to MOND. Agreement with Newtonian mechanics requires the interpolating function to satisfy μ(x) → 1 for x ≫ 1; beyond these limits, the interpolating function is not specified by the theory, although it is possible to weakly constrain it empirically. Two common choices are the simple interpolating function μ(x) = x/(1 + x), equivalently μ = 1/(1 + a0/a), and the standard function μ(x) = x/√(1 + x²). Thus, in the deep-MOND regime, F_N = m a²/a0. By fitting his law to rotation curve data, Milgrom found a0 ≈ 1.2 × 10⁻¹⁰ m s⁻² to be optimal; this simple law is sufficient to make predictions for a broad range of galactic phenomena. Milgrom's law can be interpreted in two different ways. One possibility is to treat it as a modification to the classical law of inertia, so that the force on an object is proportional not to the particle's acceleration a but rather to μ(a/a0)a; in this case, the modified dynamics would apply not only to gravitational phenomena but to those generated by other forces as well. The other possibility is to treat it as a modification of the law of gravity; in this interpretation, Milgrom's modification would apply exclusively to gravitational phenomena. By itself, Milgrom's law is not a complete and self-contained physical theory, but rather an ad-hoc, empirically motivated variant of one of the several equations that constitute classical mechanics.
Several complete classical theories have been proposed that generally yield Milgrom's law exactly in situations of high symmetry; a subset of these theories have been further embedded within relativistic theories. Distinguishing both theoretically and observationally between these alternatives is a subject of current research. The majority of astronomers, astrophysicists and cosmologists accept ΛCDM and are committed to a dark matter solution of the missing-mass problem; MOND, by contrast, is actively studied by only a handful of researchers. Since MOND was specifically designed to produce flat rotation curves, these do not constitute evidence for the theory; nevertheless, a broad range of astrophysical phenomena are neatly accounted for within the MOND framework. Many of these came to light after the publication of Milgrom's original papers and are difficult to explain using the dark matter hypothesis.
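The flat rotation curves that motivated MOND follow directly from the deep-MOND limit quoted above: combining a²/a0 = GM/r² with circular motion a = v²/r gives v⁴ = G M a0, independent of radius. A sketch comparing this with the Newtonian falloff; the galaxy mass is an illustrative value:

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10      # Milgrom's acceleration constant, m s^-2
M_SUN = 1.989e30  # solar mass, kg

def newton_speed(M, r):
    """Newtonian circular speed v = sqrt(GM/r), falling off with radius."""
    return (G * M / r) ** 0.5

def mond_speed(M):
    """Deep-MOND circular speed: v^4 = G M a0, independent of radius."""
    return (G * M * A0) ** 0.25

M = 1e10 * M_SUN  # illustrative visible mass of a galaxy
for r_kpc in (10, 20, 40):
    r = r_kpc * 3.086e19  # kiloparsecs to metres
    print(r_kpc, round(newton_speed(M, r) / 1e3), round(mond_speed(M) / 1e3))
# The Newtonian speed keeps dropping as r grows; the deep-MOND speed
# stays flat, at roughly 112 km/s for this mass.
```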