1.
University of Cambridge
–
The University of Cambridge is a collegiate public research university in Cambridge, England, often regarded as one of the most prestigious universities in the world. Founded in 1209 and granted royal status by King Henry III in 1231, Cambridge is the second-oldest university in the English-speaking world. The university grew out of an association of scholars who left the University of Oxford after a dispute with the townspeople; the two ancient universities share many common features and are often referred to jointly as Oxbridge. Cambridge is formed from a variety of institutions, including 31 constituent colleges. Cambridge University Press, a department of the university, is the world's oldest publishing house and the second-largest university press in the world. The university also operates eight cultural and scientific museums, including the Fitzwilliam Museum. Cambridge's libraries hold a total of around 15 million books, eight million of which are in Cambridge University Library, a legal deposit library. In the year ended 31 July 2015, the university had an income of £1.64 billion. The central university and colleges have an endowment of around £5.89 billion. The university is linked with the development of the high-tech business cluster known as Silicon Fen. It is a member of numerous academic associations, forms part of the "golden triangle" of leading English universities, and is a partner in Cambridge University Health Partners. As of 2017, Cambridge is ranked the fourth-best university in the world by three ranking tables, and no other institution in the world ranks in the top 10 for as many subjects. Cambridge is consistently ranked as the top university in the United Kingdom. The university has educated many notable alumni, including eminent mathematicians, scientists, politicians, lawyers, philosophers, writers, actors, and foreign heads of state.
Ninety-five Nobel laureates, fifteen British prime ministers and ten Fields Medalists have been affiliated with Cambridge as students or faculty. By the late 12th century, the Cambridge region already had a scholarly and ecclesiastical reputation, due to monks from the nearby bishopric church of Ely. The University of Oxford went into suspension in protest, and most scholars moved to other centres of learning such as Paris and Reading. After the University of Oxford reformed several years later, enough scholars remained in Cambridge to form the nucleus of the new university. A bull issued in 1233 by Pope Gregory IX gave graduates from Cambridge the right to teach everywhere in Christendom. The colleges at the University of Cambridge were originally an incidental feature of the system, and no college is as old as the university itself. The colleges were endowed fellowships of scholars. There were also institutions without endowments, called hostels; the hostels were gradually absorbed by the colleges over the centuries, but they have left some traces of their time, such as the name of Garret Hostel Lane. Hugh Balsham, Bishop of Ely, founded Peterhouse, Cambridge's first college. The most recently established college is Robinson, built in the late 1970s.

2.
Anthropic principle
–
The anthropic principle is a philosophical consideration that observations of the Universe must be compatible with the conscious and sapient life that observes it. Some proponents of the principle reason that it explains why this universe has the age and the fundamental constants necessary to accommodate conscious life. As a result, they believe it is unremarkable that this universe has fundamental constants that happen to fall within the narrow range thought to be compatible with life. Most often such arguments draw upon some notion of the multiverse, so that there is a population of universes to select from and selection bias can occur. The anthropic principle states that this compatibility is a necessity, because if life were impossible, no living entity would be there to observe the universe, and so its properties would not be known. That is, it must be possible to observe some universe, and hence the laws and constants of any observed universe must be compatible with observers. The term "anthropic" in "anthropic principle" has been argued to be a misnomer: while singling out our kind of carbon-based life, none of the finely tuned phenomena require human life or some kind of carbon chauvinism; any form of life, or any form of atom, stone, star or galaxy, would do. The anthropic principle has given rise to confusion and controversy. All versions of the principle have been accused of discouraging the search for a deeper physical understanding of the universe, and the principle is often criticized as a tautology or truism; however, building a substantive argument on a tautological foundation is problematic. Stronger variants of the principle are not tautologies and thus make claims considered controversial by some. In 1961, Robert Dicke noted that the age of the universe, as seen by living observers, cannot be arbitrary; instead, biological factors constrain the universe to be more or less in a "golden age", neither too young nor too old. If the universe were one tenth as old as its present age, there would not have been sufficient time to build up appreciable levels of metallicity, especially carbon.
Small rocky planets would not yet exist. Dicke later reasoned that the density of matter in the universe must be almost exactly the critical density needed to prevent the Big Crunch. A slight increase in the strong nuclear interaction would bind the dineutron and the diproton; water, as well as sufficiently long-lived stable stars, both essential for the emergence of life as we know it, would not exist. More generally, small changes in the strengths of the four fundamental interactions can greatly affect the universe's age, structure, and capacity for life.
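Dicke's "critical density" refers to a standard quantity in cosmology; as a sketch (the formula is not given in the text itself), for Hubble parameter H and gravitational constant G, the density that separates eventual recollapse from indefinite expansion in a matter-dominated universe is

```latex
\rho_c = \frac{3H^2}{8\pi G}
```

so Dicke's observation is that the actual mean density of matter must lie close to this value.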

3.
No-hair theorem
–
The no-hair theorem states that all stationary black hole solutions of the Einstein–Maxwell equations of gravitation and electromagnetism can be completely characterized by only three externally observable classical parameters: mass, electric charge, and angular momentum. All other information about the matter which formed a black hole or is falling into it "disappears" behind the black-hole event horizon and is therefore permanently inaccessible to external observers. Physicist John Archibald Wheeler expressed this idea with the phrase "black holes have no hair", which was the origin of the name. In a later interview, Wheeler said that Jacob Bekenstein coined this phrase. The first version of the no-hair theorem, for the simplified case of the uniqueness of the Schwarzschild metric, was shown by Werner Israel in 1967. The result was subsequently generalized to the cases of charged or spinning black holes. There is still no rigorous mathematical proof of a general no-hair theorem; even in the case of gravity alone, the conjecture has only been partially resolved by results of Stephen Hawking, Brandon Carter, and David C. Robinson, under the hypothesis of non-degenerate event horizons and technical, restrictive assumptions. None of the particle-physics pseudo-charges is conserved in the black hole. The three parameters represent the attributes of an object which can be determined from a distance by examining its gravitational and electromagnetic fields. All other variations in the hole will either escape to infinity or be swallowed up by the black hole. By changing the reference frame, one can set the linear momentum and position to zero; this eliminates eight of the numbers, leaving three which are independent of the reference frame: mass, angular momentum magnitude, and electric charge. Thus any black hole which has been isolated for a significant period of time can be described by the Kerr–Newman metric in an appropriately chosen reference frame. The result has since been extended to the case where the cosmological constant is positive. Magnetic charge, if detected as predicted by some theories, would form a fourth parameter possessed by a classical black hole. Counterexamples with additional fields are known; however, these exceptions are often unstable solutions and/or do not lead to conserved quantum numbers, so that the spirit of the conjecture appears to be maintained.
It has been proposed that "hairy" black holes may be considered to be bound states of hairless black holes. In 2004, the analytical solution of a -dimensional spherically symmetric black hole with a minimally coupled self-interacting scalar field was derived; the solution is stable and does not possess any unphysical properties. However, the LIGO results provide some experimental evidence consistent with the uniqueness or no-hair theorem.

4.
Penrose diagram
–
In theoretical physics, a Penrose diagram is a two-dimensional diagram capturing the causal relations between different points in spacetime. It is an extension of a Minkowski diagram, where the vertical dimension represents time and the horizontal dimension represents space; the biggest difference is that locally, the metric on a Penrose diagram is only conformally equivalent to the actual metric in spacetime. The conformal factor is chosen such that the entire infinite spacetime is transformed into a Penrose diagram of finite size. For spherically symmetric spacetimes, every point in the diagram corresponds to a 2-sphere. Straight lines of constant time and straight lines of constant space coordinate therefore become hyperbolas, which appear to converge at points in the corners of the diagram; these points represent conformal infinity for space and time. Penrose diagrams are more properly called Penrose–Carter diagrams, acknowledging both Brandon Carter and Roger Penrose, who were the first researchers to employ them. They are also called conformal diagrams, or simply spacetime diagrams. Two lines drawn at 45° angles should intersect in the diagram only if the corresponding two light rays intersect in the actual spacetime, so a Penrose diagram can be used as an illustration of spacetime regions that are accessible to observation. The diagonal boundary lines of a Penrose diagram correspond to conformal infinity or to singularities where light rays must end; thus, Penrose diagrams are also useful in the study of asymptotic properties of spacetimes and singularities. Penrose diagrams are frequently used to illustrate the causal structure of spacetimes containing black holes. Singularities are denoted by a spacelike boundary, unlike the timelike boundary found on conventional space-time diagrams; this is due to the interchanging of timelike and spacelike coordinates within the horizon of a black hole.
The singularity is represented by a spacelike boundary to make it clear that once an object has passed the horizon it will inevitably hit the singularity, even if it attempts to take evasive action. Penrose diagrams are also used to illustrate the hypothetical Einstein–Rosen bridge connecting two separate universes in the maximally extended Schwarzschild black hole solution. The precursors to the Penrose diagrams were the Kruskal–Szekeres diagrams; these introduced the method of aligning the event horizon into past and future horizons oriented at 45° angles, and splitting the singularity into past and future horizontally-oriented lines. The Einstein–Rosen bridge closes off so rapidly that passage between the two asymptotically flat exterior regions would require faster-than-light velocity and is therefore impossible; in addition, highly blue-shifted light rays would make it impossible for anyone to pass through. In the case of a rotating black hole, there is also a "negative universe" entered through a ring-shaped singularity, which can be passed through if one enters the hole close to its axis of rotation.
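The conformal transformation described above can be sketched explicitly for flat Minkowski space (a standard textbook construction, not drawn from this text): in radial null coordinates u = t − r and v = t + r, each infinite range is compressed with an arctangent,

```latex
\tilde{u} = \arctan u, \qquad \tilde{v} = \arctan v,
```

which maps −∞ < u, v < ∞ into the finite intervals −π/2 < ũ, ṽ < π/2. Radial light rays (constant u or constant v) remain straight 45° lines, while curves of constant t or constant r become the hyperbola-like arcs that converge at the corner points representing conformal infinity.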

5.
Doomsday argument
–
The Doomsday argument is a probabilistic argument that claims to predict the number of future members of the human species given only an estimate of the total number of humans born so far. Simply put, it says that, supposing all humans are born in a random order, chances are that any one human is born roughly in the middle. The argument was championed by philosopher John Leslie and has since been independently discovered by J. Richard Gott and Holger Bech Nielsen; similar principles of eschatology were proposed earlier by Heinz von Foerster, among others. Denote by n our absolute position in the birth order and by N the total number of humans who will ever be born; the fraction f = n/N is assumed to be uniformly distributed on (0, 1] even after learning of the absolute position n. That is, for example, there is a 95% chance that f lies in the interval (0.05, 1]; in other words, we can be 95% certain that we are within the last 95% of all the humans ever to be born. If we know our absolute position n, this implies an upper bound for N obtained by rearranging n/N > 0.05 to give N < 20n. Assuming that the population stabilizes at 10 billion with a life expectancy of 80 years, the remaining births implied by this bound would occur within roughly 9,120 years. This problem is similar to the famous German tank problem. The step that converts the bound on N into an extinction time depends upon a finite human lifespan; if immortality becomes common and the birth rate drops to zero, the bound on N need not imply extinction. A precise formulation of the Doomsday Argument requires the Bayesian interpretation of probability and the assumption of no prior knowledge of the distribution of N. Assume for simplicity that the total number of humans who will ever be born is either 60 billion (N1) or 6,000 billion (N2). If the number of humans who will ever be born equals N1, the probability that a given human X is amongst the first 60 billion humans who have ever lived is of course 100%; however, if the number of humans who will ever be born equals N2, then this probability is only 1%. In essence, the DA therefore suggests that human extinction is more likely to occur sooner rather than later. It is possible to sum the probabilities for each value of N; for example, taking the numbers above, it is 99% certain that N is smaller than 6,000 billion.
Note that, as remarked above, this argument assumes that the prior probability for N is flat: 50% for N1 and 50% for N2. On the other hand, it is possible to conclude, given X, that N2 is more likely than N1 if N2 is assigned a sufficiently higher prior probability. More precisely, Bayes' theorem tells us that P(N|X) = P(X|N)P(N)/P(X), and the conservative application of the Copernican principle tells us only how to calculate P(X|N). Taking P(X) to be flat, we still have to make an assumption about the prior probability P(N) that the total number of humans is N. If we conclude that N2 is much more likely than N1 a priori, the doomsday inference can be weakened or reversed; a further, more detailed discussion, as well as relevant distributions P(N), are given below in the Rebuttals section. The Doomsday argument does not say that humanity cannot or will not exist indefinitely; it does not put any upper limit on the number of humans that will ever exist, nor provide a date for when humanity will become extinct. An abbreviated form of the argument does make these claims, by confusing probability with certainty; however, the actual DA's conclusion is: "There is a 95% chance of extinction within 9,120 years."
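The arithmetic above can be checked directly; the following sketch reproduces the figures quoted in the argument (60 billion humans born so far, a stable population of 10 billion, and an 80-year life expectancy are the assumptions stated in the text):

```python
# Sketch of the Doomsday-argument arithmetic described above.
n = 60e9                      # humans born so far (figure used in the text)

# 95%-confidence bound: n/N > 0.05  =>  N < 20n
N_upper = 20 * n              # 1.2 trillion humans in total

# Convert the bound to a time scale, assuming a stable population of
# 10 billion and an 80-year life expectancy (births/yr = population / lifespan)
births_per_year = 10e9 / 80
years_remaining = (N_upper - n) / births_per_year   # = 9120.0 years

# Two-hypothesis Bayesian version: N1 = 60 billion, N2 = 6,000 billion,
# flat 50/50 prior.  Likelihood of being among the first 60 billion:
p_x_given_N1, p_x_given_N2 = 1.0, 60e9 / 6000e9     # 100% vs 1%
posterior_N1 = (0.5 * p_x_given_N1) / (0.5 * p_x_given_N1 + 0.5 * p_x_given_N2)
# posterior_N1 ≈ 0.99: "99% certain that N is smaller than 6,000 billion"
```

The 9,120-year figure quoted at the end of the section is exactly the bound (1.2 trillion − 60 billion births) divided by 125 million births per year.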

6.
General relativity
–
General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present; the relation is specified by the Einstein field equations, a system of partial differential equations. Some predictions of general relativity differ significantly from those of classical physics; examples of such differences include gravitational time dilation, gravitational lensing, and the gravitational redshift of light. The predictions of general relativity have been confirmed in all observations to date. Although general relativity is not the only relativistic theory of gravity, it is the simplest theory that is consistent with experimental data. Einstein's theory has important astrophysical implications: for example, it implies the existence of black holes (regions of space in which space and time are distorted in such a way that nothing, not even light, can escape) as an end-state for massive stars. The bending of light by gravity can lead to the phenomenon of gravitational lensing. General relativity also predicts the existence of gravitational waves, which have since been observed directly by the physics collaboration LIGO. In addition, general relativity is the basis of current cosmological models of an expanding universe. Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework. In 1907, beginning with a thought experiment involving an observer in free fall, he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. The Einstein field equations are nonlinear and very difficult to solve.
Einstein used approximation methods in working out initial predictions of the theory, but as early as 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse. In 1917, Einstein applied his theory to the universe as a whole; in line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations, the cosmological constant, to match that observational presumption. By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life.
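For reference, the field equations referred to above take a compact tensorial form (standard notation; the cosmological constant term is the parameter Einstein added and later regretted):

```latex
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu}
  = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

Here g_{μν} is the spacetime metric, R_{μν} and R are the Ricci curvature tensor and scalar derived from it, T_{μν} is the energy–momentum tensor of matter and radiation, and Λ is the cosmological constant; the curvature terms on the left depend nonlinearly on the metric, which is why the system is so difficult to solve.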

7.
Centre national de la recherche scientifique
–
The French National Center for Scientific Research (CNRS) is the largest governmental research organisation in France and the largest fundamental science agency in Europe. It employs 32,000 permanent employees and 6,000 temporary workers. The National Committee for Scientific Research, which is in charge of the recruitment and evaluation of researchers, is divided into 47 sections. Research groups are affiliated with one institute and an optional secondary institute. For administrative purposes, the CNRS is divided into 18 regional divisions. CNRS research units are called laboratoires informally and unités de recherche in administrative parlance. They are either operated solely by CNRS (unités propres de recherche, UPR) or in association with other institutions (unités mixtes de recherche, UMR). Each research unit has a unique numeric code attached and is headed by a director; a research unit may be subdivided into research groups. CNRS also has support units, which, analogously to the research units, are called unités propres de service or unités mixtes de service; a UPS or UMS may, for instance, supply administrative, computing, or library services. The headquarters of CNRS are at the Campus Gérard Mégie in the 16th arrondissement of Paris. Researchers who are permanent employees of the CNRS are classified in two categories, with grades in order of seniority: research scientists (2nd class, 1st class) and research directors (2nd class, 1st class, exceptional class). In principle, research directors tend to head research groups. Employees for support activities include research engineers, studies engineers, and assistant engineers; contrary to what the names would seem to imply, these can have administrative duties. All permanent support employees are recruited through annual nationwide competitive campaigns. Following a 1983 reform, the candidates selected have the status of civil servants and are part of the public service.
The CNRS is represented by administrative offices in Brussels, Beijing, Tokyo, Singapore, Washington, Bonn, Moscow, Tunis, Johannesburg, Santiago de Chile, Israel, and New Delhi. The CNRS was created on 19 October 1939 by decree of President Albert Lebrun. Since 1954, the centre has annually awarded gold, silver, and bronze medals to French scientists and junior researchers. The performance of the CNRS has been questioned, with calls for wide-ranging reforms; in particular, the effectiveness of its recruitment, compensation, career management, and evaluation procedures has been under scrutiny. Governmental projects include the transformation of the CNRS into an organ allocating support to projects on an ad hoc basis. Another controversial plan advanced by the government involves breaking up the CNRS into six separate institutes. Alain Fuchs was appointed president on 20 January 2010; his position combines the formerly separate positions of president and director general.

8.
Dennis W. Sciama
–
Dennis William Siahou Sciama, FRS, was a British physicist who, through his own work and that of his students, played a major role in developing British physics after the Second World War. He is considered one of the fathers of modern cosmology. Sciama was born in Manchester, England, the son of Nelly Ades and Abraham Sciama. He was of Syrian-Jewish ancestry: his father, born in Manchester, and his mother, born in Egypt, both traced their roots back to Aleppo, Syria. Sciama earned his PhD in 1953 at Cambridge University under the supervision of Paul Dirac, with a dissertation on Mach's principle and inertia; this work later influenced the formulation of scalar–tensor theories of gravity. In 1983, he moved from Oxford to Trieste, becoming Professor of Astrophysics at the International School for Advanced Studies (SISSA). During the 1990s, he divided his time between Trieste and Oxford, where he was a visiting professor until the end of his life; his main home remained his house in Park Town, Oxford. Sciama made connections among many topics in astronomy and astrophysics. Most significant was his work in general relativity, with and without quantum theory; he helped revitalize the classical relativistic alternative to general relativity known as Einstein–Cartan gravity. Early in his career, he supported Fred Hoyle's steady-state cosmology and interacted with Hoyle, Hermann Bondi, and Thomas Gold; when evidence against the steady-state theory, e.g. the cosmic microwave background radiation, mounted in the 1960s, Sciama abandoned it. During his last years, Sciama became interested in the issue of dark matter in galaxies; among other approaches, he pursued a theory of dark matter consisting of a heavy neutrino, certainly disfavored in his original realization, but still possible in a more complicated scenario. His doctoral students included John D. Barrow, David Deutsch, Adrian Melott, Paolo Molaro, Paolo Salucci, and Antony Valentini; Sciama also strongly influenced Roger Penrose. The group he led in Cambridge in the 1960s has proved of lasting influence.
Sciama was elected a Fellow of the Royal Society in 1982; he was also an honorary member of the American Academy of Arts and Sciences, the American Philosophical Society, and the Accademia dei Lincei of Rome. He served as president of the International Society of General Relativity and Gravitation. In 1959, Sciama married Lidia Dina, a social anthropologist, who survived him, along with their two daughters. His work at SISSA and the University of Oxford led to the creation of a lecture series in his honour. In 2009, the Institute of Cosmology and Gravitation at the University of Portsmouth elected to name its new building the Dennis Sciama Building. His books include The Physical Foundations of General Relativity, a short and clearly written non-mathematical book on the physical and conceptual foundations of general relativity, which can be read with profit by physics students before immersing themselves in more technical studies, and Modern Cosmology and the Dark Matter Problem. Sciama has been portrayed in a number of biographical projects about his most famous student, Stephen Hawking: in the 2004 BBC TV movie Hawking, Sciama was played by John Sessions; in the 2014 film The Theory of Everything, he was played by David Thewlis. Physicist Adrian Melott strongly criticized the portrayal of Sciama in the latter film.

9.
Royal Society
–
Founded in November 1660, it was granted a royal charter by King Charles II as "The Royal Society". The society is governed by its Council, which is chaired by the Society's President, according to a set of statutes and standing orders. The members of Council and the President are elected from and by its Fellows, the members of the society. As of 2016, there are about 1,600 fellows, allowed to use the postnominal title FRS; there are also royal fellows, honorary fellows and foreign members, the last of whom are allowed to use the postnominal title ForMemRS. The Royal Society President is Venkatraman Ramakrishnan, who took up the post on 30 November 2015. Since 1967, the society has been based at 6–9 Carlton House Terrace, a Grade I listed building in central London which was previously used by the Embassy of Germany, London. The Royal Society started from groups of physicians and natural philosophers, meeting at a variety of locations, who were influenced by the "new science", as promoted by Francis Bacon in his New Atlantis, from approximately 1645 onwards. A group known as "The Philosophical Society of Oxford" was run under a set of rules still retained by the Bodleian Library; after the English Restoration, there were regular meetings at Gresham College. It is widely held that these groups were the inspiration for the foundation of the Royal Society. A contemporary view of the society's primacy survives in the remark: "I will not say, that Mr Oldenburg did rather inspire the French to follow the English, or, at least, did help them, and hinder us. But 'tis well known who were the men that began and promoted that design." This initial royal favour has continued and, since then, every monarch has been the patron of the society. The society's early meetings included experiments performed first by Robert Hooke and then by Denis Papin, who was appointed in 1684. These experiments varied in their subject area, and were both important in some cases and trivial in others.
The Society returned to Gresham College in 1673. There had been an attempt in 1667 to establish a permanent college for the society; Michael Hunter argues that this was influenced by "Solomon's House" in Bacon's New Atlantis and, to a lesser extent, by J. V. The first proposal was given by John Evelyn to Robert Boyle in a letter dated 3 September 1659, in which he suggested a scheme with apartments for members; the society's own ideas were simpler and only included residences for a handful of staff. These plans were progressing by November 1667, but never came to anything, given the lack of contributions from members and the unrealised, perhaps unrealistic, aspirations of the society. During the 18th century, the gusto that had characterised the early years of the society faded, with a small number of scientific "greats" compared to other periods. The pointed lightning conductor had been invented by Benjamin Franklin in 1749. During the same period, it became customary to appoint society fellows to serve on government committees where science was concerned, something that still continues. The 18th century also featured remedies to many of the society's early problems.

10.
Theoretical physics
–
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena. The advancement of science generally depends on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations; conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation. A physical theory is a model of physical events; it is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged by its ability to make new predictions which can be verified by new observations. A physical theory differs from a mathematical theory, in the sense that the word "theory" has a different meaning in mathematical terms. A physical theory involves one or more relationships between various measurable quantities: Archimedes realized that a ship floats by displacing its mass of water, and Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles. Theoretical physics consists of several different approaches; in this regard, theoretical particle physics forms a good example. For instance, phenomenologists might employ empirical formulas to agree with experimental results, often without deep physical understanding. Modelers often appear much like phenomenologists, but try to model speculative theories that have certain desirable features rather than experimental data. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated.
Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled, e.g. the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics. Theoretical advances may consist in setting aside old, incorrect paradigms, or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result. Sometimes, though, advances may proceed along different paths; an exception to all the above is the wave–particle duality, which combines aspects of different, opposing models. Physical theories become accepted if they are able to make correct predictions and no incorrect ones; they are also more likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method. Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories. Theoretical physics began at least 2,300 years ago, under the Pre-Socratic philosophy. During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon.

11.
Black hole
–
A black hole is a region of spacetime exhibiting such strong gravitational effects that nothing, not even particles and electromagnetic radiation such as light, can escape from inside it. The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of the region from which no escape is possible is called the event horizon; although the event horizon has an enormous effect on the fate and circumstances of an object crossing it, it appears to have no locally detectable features. In many ways a black hole acts like an ideal black body, as it reflects no light. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with a temperature inversely proportional to the hole's mass; this temperature is on the order of billionths of a kelvin for black holes of stellar mass, making it essentially impossible to observe. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. For a long time black holes were considered a mathematical curiosity; it was during the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery of neutron stars sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality. Black holes of stellar mass are expected to form when very massive stars collapse at the end of their life cycle; after a black hole has formed, it can continue to grow by absorbing mass from its surroundings. By absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may form, and there is general consensus that supermassive black holes exist in the centers of most galaxies. Despite its invisible interior, the presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter that falls onto a black hole can form an accretion disk heated by friction.
If there are other stars orbiting a black hole, their orbits can be used to determine the black hole's mass, and such observations can be used to exclude possible alternatives such as neutron stars. On 15 June 2016, a second detection of a gravitational wave event from colliding black holes was announced. The idea of a body so massive that even light could not escape was briefly proposed by the astronomical pioneer John Michell in a letter published in 1783–84. Michell correctly noted that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1915, Albert Einstein developed his theory of general relativity; only a few months later, Karl Schwarzschild found a solution to the Einstein field equations which describes the gravitational field of a point mass and of a spherical mass. A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution for the point mass. This solution had a peculiar behaviour at what is now called the Schwarzschild radius; the nature of this surface was not quite understood at the time.
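As an illustrative sketch of two quantities mentioned in this section, the Schwarzschild radius and the Hawking temperature can be evaluated for one solar mass (the rounded constants below are standard values, not taken from the text):

```python
import math

# Rounded physical constants (SI units); illustrative values only.
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
k_B  = 1.381e-23   # Boltzmann constant, J/K
M    = 1.989e30    # one solar mass, kg

# Schwarzschild radius: horizon radius of a non-rotating mass.
r_s = 2 * G * M / c**2                             # ≈ 2.95e3 m (about 3 km)

# Hawking temperature: inversely proportional to mass, hence tiny for
# stellar-mass holes ("billionths of a kelvin", as stated above).
T_H = hbar * c**3 / (8 * math.pi * G * M * k_B)    # ≈ 6.2e-8 K

print(f"r_s = {r_s:.3e} m, T_H = {T_H:.3e} K")
```

The inverse dependence of T_H on M is why only very small black holes could radiate appreciably, while astrophysical ones are colder than the cosmic microwave background.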

12.
Meudon
–
Meudon is a municipality in the southwestern suburbs of Paris, France. It is in the département of Hauts-de-Seine, 9.1 km from the center of Paris. The town of Meudon is built on the hills and valleys of the Seine; the wood of Meudon lies for the most part to the west of the town. The northwest part of Meudon, overlooking the Seine, is known as Bellevue. At Meudon, the argile plastique clay was extensively mined in the 19th century; the first fossil of the European diatryma Gastornis parisiensis was discovered in these deposits by Gaston Planté. Archaeological sites show that Meudon has been populated since Neolithic times; the Gauls called the area Mol-Dum, and the Romans Latinized the name as Moldunum. The handsome Galliera institutions, on the hill of Fleury, were founded by the Duchess of Galliera for the care of aged persons and orphans; the buildings were completed in 1885. The old castle of Meudon was rebuilt in Renaissance style in the mid-sixteenth century and was bought by Louis XIV as a residence for Louis, le Grand Dauphin, under whom Meudon became a center of aristocratic life. A branch of the Paris Observatory was founded in 1877 on the ruins. The Meudon town hall stands about 43 m above the altitude of Paris, and the climb from there to the observatory offers some rewarding views of Paris. Nicolas-Joseph Cugnot, the inventor of the world's first automobile, is reported to have carried out some early trials at Meudon in the early 1770s. Chalais-Meudon was important in the pioneering of aviation, initially balloons and airships, but also the early heavier-than-air machines. A Corps d'Aérostatiers under the command of Jean-Marie-Joseph Coutelle was established in 1794. Hangar Y was built in 1880 at the request of the military engineer Captain Charles Renard for the construction of balloons and airships. The building is 70 m long, 24 m wide and around 26 m high.
The airship La France, designed by Renard and Arthur Krebs, was built in Hangar Y in 1884; it was the first airship that was controllable during flight and that could return to its starting point. Although a choice residential district, access to the railway and the Seine river has made Meudon a manufacturing center since the 1840s; metal products and military explosives have been continuously produced there since then. From 1921 to 1981 the Air Museum was located here, until it moved to Le Bourget Airport. CNRS has a campus in Bellevue. Meudon is well served by public transport operated jointly by the SNCF and the RATP. Meudon is served by line C of the RER at the Meudon – Val Fleury station, and by the Transilien line N at the Meudon station. The T2 tramway line links the Pont de Bezons station to the Porte de Versailles station, serving Meudon at the Brimborion station and the Meudon-sur-Seine station; the T6 tramway line runs from Châtillon to Viroflay.

13.
Geodesic
–
In differential geometry, a geodesic is a generalization of the notion of a straight line to curved spaces. The term has been generalized to include measurements in more general mathematical spaces; for example, in graph theory one might consider a geodesic between two vertices of a graph. In the presence of an affine connection, a geodesic is defined to be a curve whose tangent vectors remain parallel if they are transported along it. If this connection is the Levi-Civita connection induced by a Riemannian metric, then the geodesics are (locally) the shortest paths between points in the space. Geodesics are of particular importance in general relativity: timelike geodesics describe the motion of freely falling test particles. The shortest path between two points in a curved space can be found by writing the equation for the length of a curve and then minimizing this length using the calculus of variations. This has some technical problems, because there is an infinite-dimensional space of different ways to parameterize the shortest path. Equivalently, a different quantity may be defined, termed the energy of the curve; minimizing the energy leads to the same equations for a geodesic. Intuitively, one can understand this second formulation by noting that an elastic band stretched between two points will contract its length, and in so doing will minimize its energy; the resulting shape of the band is a geodesic. In Riemannian geometry geodesics are not the same as shortest curves between two points, though the two concepts are closely related. The difference is that geodesics are only locally the shortest distance between points, and are parameterized with constant velocity. Going the long way round on a great circle between two points on a sphere is a geodesic but not the shortest path between the points. The map t → t² from the unit interval to itself gives the shortest path between 0 and 1, but is not a geodesic because the velocity of the corresponding motion of a point is not constant.
Geodesics are commonly seen in the study of Riemannian geometry and more generally metric geometry. In general relativity, geodesics describe the motion of point particles under the influence of gravity alone; in particular, the path taken by a falling rock or an orbiting satellite is a geodesic in curved spacetime. More generally, the topic of sub-Riemannian geometry deals with the paths that objects may take when they are not free, and this article presents the mathematical formalism involved in defining, finding, and proving the existence of geodesics, in the case of Riemannian and pseudo-Riemannian manifolds. The article geodesics in general relativity discusses the case of general relativity in greater detail. The most familiar examples are the straight lines in Euclidean geometry. On a sphere, the images of geodesics are the great circles; the shortest path from point A to point B on a sphere is given by the shorter arc of the great circle passing through A and B. If A and B are antipodal points, then there are infinitely many shortest paths between them. Geodesics on an ellipsoid behave in a more complicated way than on a sphere; in particular, they are not closed in general. In metric geometry, a geodesic is a curve which is locally a distance minimizer.
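The sphere example above is easy to make concrete: the geodesic distance between two points on a sphere is the radius times the central angle of the shorter great-circle arc. A minimal numerical sketch (using the spherical law of cosines; the function name and the degree-based interface are illustrative choices, not from the text):

```python
from math import radians, sin, cos, acos, pi

def great_circle_distance(lat1, lon1, lat2, lon2, radius=1.0):
    """Length of the geodesic (shorter great-circle arc) between two
    points on a sphere, with latitude/longitude given in degrees."""
    p1, l1, p2, l2 = map(radians, (lat1, lon1, lat2, lon2))
    # Central angle between the two points, via the spherical law of cosines.
    central = acos(sin(p1) * sin(p2) + cos(p1) * cos(p2) * cos(l2 - l1))
    return radius * central

# Pole to equator: a quarter of a great circle, length pi/2 on the unit sphere.
d_quarter = great_circle_distance(90, 0, 0, 0)

# Antipodal points (north pole to south pole): half the circumference, pi.
# Any meridian between them is a shortest path, illustrating the
# infinitely-many-shortest-paths case mentioned above.
d_antipodal = great_circle_distance(90, 0, -90, 0)
```

Going the long way round the same great circle would still trace a geodesic, but with length 2π − (central angle), longer than the arc returned here.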

14.
Analytic continuation
–
In complex analysis, a branch of mathematics, analytic continuation is a technique to extend the domain of a given analytic function. The step-wise continuation technique may, however, come up against difficulties. These may have an essentially topological nature, leading to inconsistencies. They may alternatively have to do with the presence of mathematical singularities. The case of several complex variables is rather different, since singularities then need not be isolated points; its investigation was a major reason for the development of sheaf cohomology.

Suppose f is an analytic function defined on a non-empty open subset U of the complex plane C. If V is a larger open subset of C containing U, and F is an analytic function defined on V such that F(z) = f(z) for all z in U, then F is called an analytic continuation of f. In other words, the restriction of F to U is the function f we started with. Analytic continuations are unique in the following sense: if V is connected and F1 and F2 are two analytic continuations of f defined on V, then F1 = F2 everywhere. This is because F1 − F2 is an analytic function which vanishes on the open, connected domain U of f and hence must vanish on its entire domain. This follows directly from the identity theorem for holomorphic functions.

A common way to define functions in complex analysis proceeds by first specifying the function on a small domain only, and then extending it by analytic continuation. In practice, this continuation is often done by first establishing some functional equation on the small domain; examples are the Riemann zeta function and the gamma function. The concept of a universal cover was first developed to define a natural domain for the analytic continuation of an analytic function. The idea of finding the maximal analytic continuation of a function in turn led to the development of the idea of Riemann surfaces. The power series defined below is generalized by the idea of a germ; the general theory of analytic continuation and its generalizations is known as sheaf theory.

Let

f(z) = Σ_{k=0}^∞ α_k (z − z0)^k

be a power series converging in the disc D_r(z0), r > 0, defined by

D_r(z0) = { z ∈ C : |z − z0| < r }.

Note that without loss of generality, here and below, we always assume that a maximal such r was chosen, even if that r is ∞.
Also note that it would be equivalent to begin with an analytic function defined on some small open set. We say that the vector

g = (z0, α0, α1, α2, …)

is a germ of f. The base g0 of g is z0, the stem of g is (α0, α1, α2, …) and the top g1 of g is α0. The top of g is the value of f at z0. Any vector g = (z0, α0, α1, …) is a germ if it represents a power series of an analytic function around z0 with some radius of convergence r > 0. Therefore, we can speak of the set of germs G. Two germs g and h are compatible if the base of h lies in the disc of convergence of g and the power series of h is the Taylor expansion at that base point of the function defined by g. This compatibility condition is neither transitive, symmetric nor antisymmetric. If we extend the relation by transitivity, we obtain a symmetric relation, which is therefore also an equivalence relation on germs. This extension by transitivity is one definition of analytic continuation. The equivalence relation will be denoted ≅.
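The standard textbook illustration of the ideas above is the geometric series: f(z) = Σ z^k converges only on the disc |z| < 1, where it equals 1/(1 − z), while the function F(z) = 1/(1 − z) is analytic on all of C except z = 1 and agrees with f on the disc, so it is the analytic continuation of f. A quick numerical sketch of this agreement (helper names are illustrative):

```python
# Analytic continuation of the geometric series:
#   f(z) = sum_{k>=0} z^k   converges only for |z| < 1,
#   F(z) = 1/(1 - z)        is analytic on C \ {1} and agrees with f there.

def f_series(z, terms=200):
    """Partial sum of the geometric series, a proxy for f on |z| < 1."""
    return sum(z**k for k in range(terms))

def F(z):
    """Closed form 1/(1 - z): the analytic continuation of the series."""
    return 1 / (1 - z)

# Inside the disc of convergence (|z| = 0.5): series and F agree closely.
inside = 0.3 + 0.4j
err = abs(f_series(inside) - F(inside))   # negligible truncation error

# Outside the disc (|z| = 2): the series diverges, but F is still defined.
# Its value there is the one assigned by analytic continuation.
value_outside = F(2.0 + 0.0j)             # equals -1
```

By the uniqueness statement above, F is the only analytic function on C ∖ {1} agreeing with the series on the disc, so the value −1 at z = 2 is forced, even though the series itself diverges there.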

15.
Stephen Hawking
–
Hawking was the first to set forth a theory of cosmology explained by a union of the general theory of relativity and quantum mechanics. He is a supporter of the many-worlds interpretation of quantum mechanics. In 2002, Hawking was ranked number 25 in the BBC's poll of the 100 Greatest Britons. Hawking has a rare early-onset, slow-progressing form of amyotrophic lateral sclerosis that has gradually paralysed him over the decades. He now communicates using a single cheek muscle attached to a speech-generating device.

Hawking was born on 8 January 1942 in Oxford, England to Frank and Isobel Hawking. Despite their families' financial constraints, both parents attended the University of Oxford, where Frank read medicine and Isobel read philosophy. The two met shortly after the beginning of the Second World War at a research institute where Isobel was working as a secretary. They lived in Highgate, but, as London was being bombed in those years, Isobel went to Oxford to give birth in greater safety. Hawking has two younger sisters, Philippa and Mary, and an adopted brother, Edward. In 1950, when Hawking's father became head of the division of parasitology at the National Institute for Medical Research, Hawking and his family moved to St Albans. In St Albans, the family were considered highly intelligent and somewhat eccentric; they lived a frugal existence in a large, cluttered, and poorly maintained house. During one of Hawking's father's frequent absences working in Africa, the rest of the family spent four months in Majorca visiting his mother's friend Beryl and her husband. Hawking began his schooling at the Byron House School in Highgate, London. He later blamed its progressive methods for his failure to learn to read while at the school. In St Albans, the eight-year-old Hawking attended St Albans High School for Girls for a few months; at that time, younger boys could attend one of the houses. The family placed a high value on education.
Hawking's father wanted his son to attend the well-regarded Westminster School, but his family could not afford the school fees without the financial aid of a scholarship, so Hawking remained at St Albans. From 1958 on, with the help of the mathematics teacher Dikran Tahta, Hawking and his friends built a computer from clock parts. Although at school he was known as "Einstein", Hawking was not initially successful academically. With time, he began to show aptitude for scientific subjects and, inspired by Tahta, decided to study mathematics at university. Hawking's father advised him to study medicine, concerned that there were few jobs for mathematics graduates. He wanted Hawking to attend University College, Oxford, his own alma mater; as it was not possible to read mathematics there at the time, Hawking decided to study physics and chemistry.

16.
Mass
–
In physics, mass is a property of a physical body. It is the measure of an object's resistance to acceleration when a net force is applied. It also determines the strength of its gravitational attraction to other bodies. The basic SI unit of mass is the kilogram. Mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale rather than comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity; this is because weight is a force, while mass is the property that determines the strength of this force.

In Newtonian physics, mass can be generalized as the amount of matter in an object. However, at very high speeds, special relativity postulates that energy is an additional source of mass; thus, any body having mass has an equivalent amount of energy. In addition, matter is a loosely defined term in science. There are several distinct phenomena which can be used to measure mass. Active gravitational mass measures the gravitational force exerted by an object. Passive gravitational mass measures the gravitational force exerted on an object in a known gravitational field. The inertial mass of an object determines its acceleration in the presence of an applied force: according to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by F/m. A body's mass also determines the degree to which it generates or is affected by a gravitational field; this is sometimes referred to as gravitational mass.

The standard International System of Units unit of mass is the kilogram. The kilogram is 1000 grams, first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. Then in 1889, the kilogram was redefined as the mass of the international prototype kilogram. As of January 2013, there are proposals for redefining the kilogram yet again.
In particle physics, mass is often expressed in units of eV/c²; the electronvolt and its multiples, such as the MeV, are commonly used there. The atomic mass unit is 1/12 of the mass of a carbon-12 atom, and is convenient for expressing the masses of atoms and molecules. Outside the SI system, other units of mass include the slug, an Imperial unit of mass, and the pound, a unit of both mass and force, used mainly in the United States.
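The mass/weight distinction and Newton's second law above reduce to two one-line formulas, W = m·g and a = F/m. A minimal sketch (the g values are approximate surface field strengths, assumed for illustration):

```python
# Mass vs. weight: mass m is the same everywhere, while weight W = m * g
# depends on the local gravitational field strength g.
g_earth = 9.81   # m/s^2, approximate surface gravity of Earth
g_moon = 1.62    # m/s^2, approximate surface gravity of the Moon

m = 10.0                  # kg: the object's mass, invariant
w_earth = m * g_earth     # weight on Earth, ~98.1 N
w_moon = m * g_moon       # weight on the Moon, ~16.2 N (same mass, less force)

# Newton's second law: a body of fixed mass m under a single net force F
# accelerates at a = F / m.
def acceleration(force_n, mass_kg):
    return force_n / mass_kg

a = acceleration(50.0, m)   # 5.0 m/s^2
```

Note that a spring scale actually measures the force w, and only infers m by assuming a known g; a balance scale, by contrast, compares masses directly and would read the same on the Moon.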

17.
Angular momentum
–
In physics, angular momentum is the rotational analog of linear momentum. It is an important quantity in physics because it is a conserved quantity: the angular momentum of a system remains constant unless acted on by an external torque. The definition of angular momentum for a point particle is the pseudovector L = r × p, the cross product of the particle's position vector r (relative to some origin) and its momentum vector p. This definition can be applied to each point in continua like solids or fluids. Unlike momentum, angular momentum does depend on where the origin is chosen, since the particle's position is measured from it. The angular momentum of a rigid object can also be connected to the angular velocity ω of the object via the moment of inertia I. However, while ω always points in the direction of the rotation axis, the angular momentum L may point in a different direction, depending on how the mass is distributed.

Angular momentum is additive: the total angular momentum of a system is the vector sum of the angular momenta; for continua or fields one uses integration. Torque can be defined as the rate of change of angular momentum, analogous to force. Applications include the gyrocompass, control moment gyroscopes, inertial guidance systems, reaction wheels, and flying discs or Frisbees. In general, conservation limits the possible motion of a system but does not uniquely determine it.

In quantum mechanics, angular momentum is an operator with quantized eigenvalues. Angular momentum is subject to the Heisenberg uncertainty principle, meaning only one component can be measured with definite precision; the other two then cannot. Also, the spin of elementary particles does not correspond to literal spinning motion.

Angular momentum is a vector quantity that represents the product of a body's rotational inertia and rotational velocity about a particular axis, and can be considered the rotational analog of linear momentum. Thus, where linear momentum p is proportional to mass m and linear speed v, p = m v, angular momentum L is proportional to moment of inertia I and angular speed ω, L = I ω. Unlike mass, which depends only on the amount of matter, moment of inertia also depends on the position of the axis of rotation and on how the matter is distributed.
Unlike linear speed, which occurs in a straight line, angular speed occurs about a center of rotation. Therefore, strictly speaking, L should be referred to as the angular momentum relative to that center. This simple analysis can also apply to non-circular motion if only the component of the motion which is perpendicular to the radius vector is considered. In that case, L = r m v⊥, where v⊥ = v sin θ is the perpendicular component of the motion. It is this definition, (length of moment arm) × (linear momentum), to which the term moment of momentum refers.
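The point-particle formula L = r m v⊥ and the rigid-body formula L = I ω describe the same quantity, which is easy to check numerically for circular motion, where I = m r² and ω = v / r. A small sketch (the numbers are arbitrary illustrative values):

```python
from math import sin, radians

def angular_momentum(r, m, v, theta_deg):
    """Point-particle angular momentum magnitude about the origin:
    L = r * m * v_perp, with v_perp = v * sin(theta), theta being the
    angle between the position vector and the velocity."""
    return r * m * v * sin(radians(theta_deg))

# Circular motion: velocity is perpendicular to the radius (theta = 90 deg).
r, m, v = 2.0, 3.0, 4.0
L_particle = angular_momentum(r, m, v, 90)   # r * m * v = 24.0

# The same motion described via moment of inertia and angular speed:
I = m * r**2        # moment of inertia of a point mass about the center
omega = v / r       # angular speed, rad/s
L_rigid = I * omega # equals m * r * v, the same angular momentum
```

For motion at an oblique angle (θ ≠ 90°), only the perpendicular component v sin θ contributes, which is why a particle moving radially straight toward the origin carries zero angular momentum about it.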

18.
Neutron star
–
A neutron star is the collapsed core of a large star. Neutron stars are the smallest and densest stars known to exist: though neutron stars typically have a radius on the order of 10 km, they can have masses of about twice that of the Sun. They result from the supernova explosion of a massive star, combined with gravitational collapse, and are supported against further collapse by neutron degeneracy pressure, a phenomenon described by the Pauli exclusion principle. If the remnant is too massive, exceeding the upper mass limit for neutron stars of about 2–3 solar masses, it continues collapsing to form a black hole. Neutron stars that can be observed are very hot and typically have a surface temperature of around 6×10⁵ K. They are so dense that a normal-sized matchbox containing neutron-star material would have a mass of approximately 3 billion tonnes. Their magnetic fields are between 10⁸ and 10¹⁵ times as strong as that of the Earth, and the gravitational field at the star's surface is about 2×10¹¹ times that of the Earth.

As the star's core collapses, its rotation rate increases as a result of conservation of angular momentum. Some neutron stars emit beams of electromagnetic radiation that make them detectable as pulsars; indeed, the discovery of pulsars in 1967 was the first observational suggestion that neutron stars exist. The radiation from pulsars is thought to be emitted from regions near their magnetic poles. The fastest-spinning neutron star known is PSR J1748-2446ad, rotating at a rate of 716 times a second, or 43,000 revolutions per minute, giving a linear speed at the surface on the order of 0.24 c. There are thought to be around 100 million neutron stars in the Milky Way; however, most are old and cold, and neutron stars can only be easily detected in certain instances, such as if they are a pulsar or part of a binary system. Soft gamma repeaters are conjectured to be a type of neutron star with very strong magnetic fields, known as magnetars, or alternatively neutron stars with fossil disks around them.
Additionally, such accretion can recycle old pulsars and potentially cause them to gain mass and spin up to very fast rotation rates. The merger of binary neutron stars may be the source of short-duration gamma-ray bursts, and such mergers are likely strong sources of gravitational waves. Any main-sequence star with an initial mass of above 8 times the mass of the Sun has the potential to produce a neutron star. As the star evolves away from the main sequence, subsequent nuclear burning produces an iron-rich core.
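The surface-speed figure of about 0.24 c quoted above for PSR J1748-2446ad follows from simple kinematics: the equatorial surface speed of a rotating sphere is v = 2πRf. A quick sketch, assuming a radius of roughly 16 km for this star (an assumed value for illustration; the text only gives the order-of-magnitude 10 km radius of typical neutron stars):

```python
from math import pi

# Equatorial surface speed of a spinning neutron star: v = 2 * pi * R * f.
c = 2.998e8    # speed of light, m/s (approximate)
f = 716        # rotations per second (PSR J1748-2446ad)
R = 16e3       # metres: assumed equatorial radius, for illustration

v = 2 * pi * R * f       # ~7.2e7 m/s
fraction_of_c = v / c    # ~0.24, matching the figure quoted in the text
```

The same conservation-of-angular-momentum argument in the text explains why such speeds arise at all: shrinking a slowly rotating stellar core from ~10⁵ km down to ~10 km multiplies its spin rate enormously, since L = I ω is conserved while I drops with the square of the radius.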

19.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country.

The initial ISBN configuration of recognition was generated in 1967 based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines.

The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
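The remark above that an SBN converts to an ISBN-10 by prefixing a 0 with no change to the check digit follows from the ISBN-10 checksum rule: the digits are weighted 10, 9, …, 2, so a leading zero contributes nothing to the sum. A minimal sketch of the standard ISBN-10 check-digit calculation (function names are illustrative):

```python
# ISBN-10 check digit: weight the first nine digits by 10 down to 2,
# then choose the check digit so the total is a multiple of 11.
# A check value of 10 is written as the letter 'X'.
def isbn10_check_digit(nine_digits):
    total = sum((10 - i) * int(d) for i, d in enumerate(nine_digits))
    check = (11 - total % 11) % 11
    return 'X' if check == 10 else str(check)

def sbn_to_isbn10(sbn):
    """Convert a 9-digit SBN to an ISBN-10 by prefixing '0'.  The check
    digit is unchanged because the new leading zero is weighted by 10
    and contributes 0 to the checksum."""
    return '0' + sbn

# The Hodder example from the text: SBN 340 01381 8 -> ISBN 0-340-01381-8.
check = isbn10_check_digit('034001381')   # '8', matching the SBN's check digit
isbn = sbn_to_isbn10('340013818')         # '0340013818'
```

The ISBN-13 format mentioned in the text uses a different checksum (alternating weights 1 and 3, modulo 10, as in EAN-13), which is why converting between 10- and 13-digit forms does require recomputing the final digit.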

20.
ArXiv
–
In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008, and by 2014 the submission rate had grown to more than 8,000 per month. The arXiv was made possible by the low-bandwidth TeX file format. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Additional modes of access were added: FTP in 1991, Gopher in 1992, and the World Wide Web in 1993. The term e-print was quickly adopted to describe the articles. The repository's original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the rapidly expanding technology, in 1999 Ginsparg changed institutions to Cornell University, and the repository is now hosted principally by Cornell, with eight mirrors around the world. Its existence was one of the factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists regularly upload their papers to arXiv.org for worldwide access. Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv.

The annual budget for arXiv is approximately $826,000 for 2013 to 2017, funded jointly by Cornell University Library and member institutions. Annual donations were envisaged to vary in size between $2,300 and $4,000, based on each institution's usage. As of 14 January 2014, 174 institutions had pledged support for the period 2013–2017 on this basis. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying "it was supposed to be a three-hour tour, not a life sentence". However, Ginsparg remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee.
The lists of moderators for many sections of the arXiv are publicly available. Additionally, an endorsement system was introduced in 2004 as part of an effort to ensure content that is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors, but to check whether the paper is appropriate for the intended subject area. New authors from recognized academic institutions generally receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for allegedly restricting scientific inquiry. Grigori Perelman, who posted his proof of the Poincaré conjecture only on arXiv, appears content to forgo the traditional peer-reviewed journal process, stating, "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it." The arXiv generally re-classifies works deemed off-topic or dubious, e.g. into General mathematics, rather than deleting them. Papers can be submitted in any of several formats, including LaTeX, and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the software if generating the final PDF file fails, or if any image file is too large. ArXiv now allows one to store and modify an incomplete submission; the time stamp on the article is set when the submission is finalized.

21.
Virtual International Authority File
–
The Virtual International Authority File (VIAF) is an international authority file. It is a joint project of several national libraries and is operated by the Online Computer Library Center (OCLC). The project was initiated by the US Library of Congress, the German National Library and the OCLC; the National Library of France joined the project on October 5, 2007. The project transitioned to a service of the OCLC on April 4, 2012. The aim is to link the national authority files to a single virtual authority file. In this file, identical records from the different data sets are linked together. A VIAF record receives a standard data number, contains the primary "see" and "see also" records from the original records, and refers to the original authority records. The data are available online for research and data exchange. Reciprocal updating uses the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). The file numbers are also being added to Wikipedia biographical articles and are incorporated into Wikidata. VIAF's clustering algorithm is run every month. As more data are added from participating libraries, clusters of authority records may coalesce or split, leading to some fluctuation in the VIAF identifier of certain authority records.