Sloan Digital Sky Survey
The Sloan Digital Sky Survey or SDSS is a major multi-spectral imaging and spectroscopic redshift survey using a dedicated 2.5-m wide-angle optical telescope at Apache Point Observatory in New Mexico, United States. The project was named after the Alfred P. Sloan Foundation. Data collection began in 2000; the main galaxy sample has a median redshift of z = 0.1. Data Release 8, released in January 2011, includes all photometric observations taken with the SDSS imaging camera, covering 14,555 square degrees on the sky. Data Release 9, released to the public on 31 July 2012, includes the first results from the Baryon Oscillation Spectroscopic Survey (BOSS) spectrograph, including over 800,000 new spectra. Over 500,000 of the new spectra are of objects as they were 7 billion years ago, when the Universe was roughly half its present age. Data Release 10, released to the public on 31 July 2013, includes all data from previous releases, plus the first results from the APO Galactic Evolution Experiment (APOGEE) spectrograph, including over 57,000 high-resolution infrared spectra of stars in the Milky Way.
DR10 also includes over 670,000 new BOSS spectra of galaxies and quasars in the distant universe. The publicly available images from the survey were made between 1998 and 2009. SDSS uses a dedicated 2.5 m wide-angle optical telescope. The imaging camera was retired in late 2009; since then the telescope has observed entirely in spectroscopic mode. Images were taken using a photometric system of five filters; these images are processed to produce lists of objects observed and various parameters, such as whether they seem pointlike or extended and how the brightness on the CCDs relates to various kinds of astronomical magnitude. For imaging observations, the SDSS telescope used the drift scanning technique, which tracks the telescope along a great circle on the sky and continuously records small strips of the sky: the image of the stars in the focal plane drifts along the CCD chip, and the charge is electronically shifted along the detectors at the same rate, instead of staying fixed as in tracked telescopes. This method allows consistent astrometry over the widest possible field and minimises overheads from reading out the detectors.
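The charge-shifting idea behind drift scanning can be illustrated in miniature. The following sketch of time-delay integration uses illustrative values (array sizes, drift rate and star field are not SDSS parameters): the accumulated charge is clocked down the CCD at the same rate the image drifts, so each sky pixel is integrated over the full column height.

```python
import numpy as np

rng = np.random.default_rng(0)

n_rows = 64        # CCD rows along the charge-shift direction
n_pix = 300        # length of the sky strip drifting past the camera
sky = rng.poisson(5.0, size=n_pix).astype(float)
sky[150] += 500.0  # a bright "star" somewhere in the strip

ccd = np.zeros(n_rows)   # charge currently accumulated in each row
readout = []             # values clocked out of the bottom row

for t in range(n_pix + n_rows):
    # Expose: at tick t, row r sits under sky pixel (t - r), because the
    # image drifts across the chip at one row per clock tick.
    for r in range(n_rows):
        if 0 <= t - r < n_pix:
            ccd[r] += sky[t - r]
    # Read the bottom row, then shift all charge one row toward it,
    # in lockstep with the drifting image.
    readout.append(ccd[-1])
    ccd = np.roll(ccd, 1)
    ccd[0] = 0.0

readout = np.array(readout)
# Each read-out value is one sky pixel integrated over all 64 rows, so the
# star appears once, ~64x brighter than in a single-row exposure.
print(readout.argmax(), readout.max() / n_rows)   # -> 213, ~505
```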
The disadvantage is minor distortion effects. The telescope's imaging camera is made up of 30 CCD chips, each with a resolution of 2048 × 2048 pixels, totaling 120 megapixels; the chips are arranged in 5 rows of 6 chips. Each row has a different optical filter, with average wavelengths of 355.1, 468.6, 616.5, 748.1 and 893.1 nm and 95% completeness in typical seeing to magnitudes of 22.0, 22.2, 22.2, 21.3 and 20.5 for u, g, r, i and z respectively. The filters are placed on the camera in the order r, i, u, z, g. To reduce noise, the camera is cooled to 190 kelvins by liquid nitrogen. Using these photometric data, stars, galaxies and quasars are selected for spectroscopy. The spectrograph operates by feeding an individual optical fibre for each target through a hole drilled in an aluminum plate. Each hole is positioned for a selected target, so every field in which spectra are to be acquired requires a unique plate. The original spectrograph attached to the telescope was capable of recording 640 spectra, while the updated spectrograph for SDSS-III can record 1,000 spectra at once.
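For reference, the filter parameters quoted above can be collected in a small table, together with the standard Pogson magnitude relation of 5 magnitudes per factor of 100 in flux. The zero point below is a hypothetical placeholder; the actual SDSS calibration reports asinh magnitudes.

```python
import math

# SDSS bands as quoted above: effective wavelength (nm) and the
# 95%-completeness limiting magnitude in typical seeing.
FILTERS = {
    "u": (355.1, 22.0),
    "g": (468.6, 22.2),
    "r": (616.5, 22.2),
    "i": (748.1, 21.3),
    "z": (893.1, 20.5),
}

def pogson_magnitude(flux, zero_point_flux=1.0):
    """Classic magnitude scale: 5 magnitudes per factor of 100 in flux.
    (SDSS itself reports asinh magnitudes, which stay finite at zero flux.)"""
    return -2.5 * math.log10(flux / zero_point_flux)

for band, (wavelength_nm, limit_mag) in FILTERS.items():
    print(f"{band}: {wavelength_nm} nm, 95% complete to mag {limit_mag}")

print(pogson_magnitude(0.01))  # 100x fainter than the zero point -> 5.0
```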
Over the course of each night, between six and nine plates are used for recording spectra. In spectroscopic mode, the telescope tracks the sky in the standard way, keeping the objects focused on their corresponding fibre tips. Every night the telescope produces about 200 GB of data. During its first phase of operations, 2000–2005, the SDSS imaged more than 8,000 square degrees of the sky in five optical bandpasses, obtained spectra of galaxies and quasars selected from 5,700 square degrees of that imaging, and obtained repeated imaging of a 300-square-degree stripe in the southern Galactic cap. In 2005 the survey entered a new phase, SDSS-II, extending the observations to explore the structure and stellar makeup of the Milky Way through SEGUE, and to monitor Type Ia supernova events through the Sloan Supernova Survey in order to measure the distances to far objects. The survey covers over 7,500 square degrees of the Northern Galactic Cap with data from nearly 2 million objects and spectra from over 800,000 galaxies and 100,000 quasars.
The information on the position and distance of the objects has allowed the large-scale structure of the Universe, with its voids and filaments, to be investigated for the first time. Most of these data were obtained in SDSS-I, but a small part of the footprint was finished in SDSS-II. The Sloan Extension for Galactic Understanding and Exploration (SEGUE) obtained spectra of 240,000 stars in order to create a detailed three-dimensional map of the Milky Way. SEGUE data provide evidence for the age and phase-space distribution of stars within the various Galactic components, providing crucial clues for understanding the structure, formation and evolution of our Galaxy.
Scalar field
In mathematics and physics, a scalar field associates a scalar value to every point in a space, for example physical space. The scalar may be either a pure mathematical number or a physical quantity. In a physical context, scalar fields are required to be independent of the choice of reference frame, meaning that any two observers using the same units will agree on the value of the scalar field at the same absolute point in space regardless of their respective points of origin. Examples used in physics include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields such as the Higgs field; these fields are the subject of scalar field theory. Mathematically, a scalar field on a region U is a real- or complex-valued function or distribution on U. The region U may be a set in some Euclidean space, Minkowski space, or more generally a subset of a manifold, and it is typical in mathematics to impose further conditions on the field, such as that it be continuous or continuously differentiable to some order.
A scalar field is a tensor field of order zero, and the term "scalar field" may be used to distinguish a function of this kind from a more general tensor field, density, or differential form. Physically, a scalar field is additionally distinguished by having units of measurement associated with it. In this context, a scalar field should be independent of the coordinate system used to describe the physical system—that is, any two observers using the same units must agree on the numerical value of a scalar field at any given point of physical space. Scalar fields are contrasted with other physical quantities such as vector fields, which associate a vector to every point of a region, as well as tensor fields and spinor fields. More subtly, scalar fields are contrasted with pseudoscalar fields. In physics, scalar fields often describe the potential energy associated with a particular force; the force is a vector field, which can be obtained as the negative gradient of the potential energy scalar field. Examples include potential fields, such as the Newtonian gravitational potential or the electric potential in electrostatics, which are scalar fields describing the more familiar forces.
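As a small numerical illustration of the relation described above, F = −∇U, the sketch below evaluates the Newtonian gravitational potential of a point mass, a scalar field, and recovers the force per unit mass by finite differences. The mass and test position are arbitrary example values.

```python
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # central mass (Earth), kg

def potential(r_vec):
    """Newtonian gravitational potential U(r) = -G*M/|r|, a scalar field."""
    return -G * M / np.linalg.norm(r_vec)

def force_per_unit_mass(r_vec, h=1.0):
    """F = -grad U, estimated by central finite differences."""
    f = np.zeros(3)
    for i in range(3):
        step = np.zeros(3)
        step[i] = h
        f[i] = -(potential(r_vec + step) - potential(r_vec - step)) / (2 * h)
    return f

r = np.array([7.021e6, 0.0, 0.0])   # ~650 km above Earth's surface, metres
print(force_per_unit_mass(r))       # ~[-8.09, 0, 0] m/s^2, pointing inward
```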
Other examples include a temperature, humidity or pressure field, such as those used in meteorology. In quantum field theory, a scalar field is associated with spin-0 particles; the scalar field may be real- or complex-valued. Complex scalar fields represent charged particles; these include the charged Higgs field of the Standard Model, as well as the charged pions mediating the strong nuclear interaction. In the Standard Model of elementary particles, a scalar Higgs field is used to give the leptons and massive vector bosons their mass, via a combination of the Yukawa interaction and spontaneous symmetry breaking; this mechanism is known as the Higgs mechanism. A candidate for the Higgs boson was first detected at CERN in 2012. In scalar theories of gravitation, scalar fields are used to describe the gravitational field. Scalar–tensor theories represent the gravitational interaction through both a tensor and a scalar; such attempts include, for example, the Jordan theory as a generalization of the Kaluza–Klein theory, and the Brans–Dicke theory. Scalar fields like the Higgs field can be found within scalar–tensor theories, using the Higgs field of the Standard Model as the scalar field.
This field interacts in a Yukawa-like manner with the particles that get mass through it. Scalar fields are found within superstring theories as dilaton fields, breaking the conformal symmetry of the string, though balancing the quantum anomalies of this tensor. Scalar fields are hypothesized to drive the accelerated expansion of the universe, helping to solve the horizon problem and giving a hypothetical reason for the non-vanishing cosmological constant of cosmology. Massless scalar fields in this context are known as inflatons; massive scalar fields are proposed as well, using for example Higgs-like fields. Scalar fields are contrasted with other kinds of fields: vector fields, which associate a vector to every point in space (examples include the electromagnetic field and the Newtonian gravitational field), and tensor fields, which associate a tensor to every point in space. For example, in general relativity gravitation is associated with the tensor field called the Einstein tensor. In Kaluza–Klein theory, spacetime is extended to five dimensions and its Riemann curvature tensor can be separated out into ordinary four-dimensional gravitation plus an extra set of equations equivalent to Maxwell's equations for the electromagnetic field, plus an extra scalar field known as the "dilaton".
The dilaton scalar is found among the massless bosonic fields in string theory.
Lambda-CDM model
The ΛCDM or Lambda-CDM model is a parametrization of the Big Bang cosmological model in which the universe contains three major components: first, a cosmological constant denoted by Lambda (Λ) and associated with dark energy; second, the postulated cold dark matter (CDM); and third, ordinary matter. It is referred to as the standard model of Big Bang cosmology because it is the simplest model that provides a reasonably good account of the following properties of the cosmos: the existence and structure of the cosmic microwave background; the large-scale structure in the distribution of galaxies; the observed abundances of hydrogen, helium and lithium; and the accelerating expansion of the universe observed in the light from distant galaxies and supernovae. The model assumes that general relativity is the correct theory of gravity on cosmological scales. It emerged in the late 1990s as a concordance cosmology, after a period of time when disparate observed properties of the universe appeared mutually inconsistent and there was no consensus on the makeup of the energy density of the universe. The ΛCDM model can be extended by adding cosmological inflation and other elements that are current areas of speculation and research in cosmology.
Some alternative models challenge the assumptions of the ΛCDM model. Examples of these are modified Newtonian dynamics, entropic gravity, modified gravity, theories of large-scale variations in the matter density of the universe, bimetric gravity, and scale invariance of empty space. Most modern cosmological models are based on the cosmological principle, which states that our observational location in the universe is not unusual or special. The model includes an expansion of metric space that is well documented, both as the redshift of prominent spectral absorption or emission lines in the light from distant galaxies and as the time dilation in the light decay of supernova luminosity curves. Both effects are attributed to the stretching of electromagnetic radiation as it travels across expanding space. Although this expansion increases the distance between objects that are not under shared gravitational influence, it does not increase the size of the objects in space; it also allows distant galaxies to recede from each other at speeds greater than the speed of light.
The letter Λ represents the cosmological constant, associated with a vacuum energy or dark energy in empty space that is used to explain the contemporary accelerating expansion of space against the attractive effects of gravity. A cosmological constant has negative pressure, p = −ρc², which contributes to the stress–energy tensor and, according to the general theory of relativity, causes accelerating expansion. The fraction of the total energy density of our universe contributed by dark energy, ΩΛ, is estimated to be 0.669 ± 0.038 based on the 2018 Dark Energy Survey results using Type Ia supernovae, or 0.6847 ± 0.0073 based on the 2018 release of Planck satellite data; that is, dark energy makes up more than 68% of the mass–energy density of the universe. Dark matter is postulated in order to account for gravitational effects observed in large-scale structures that cannot be accounted for by the quantity of observed matter. Cold dark matter is non-baryonic, i.e. it consists of matter other than protons and neutrons. Dark matter constitutes about 26.8% of the mass–energy density of the universe.
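A sketch of how these density fractions enter the expansion history through the Friedmann equation for a flat universe follows; the Hubble constant below is an assumed input (the Planck 2018 value), not a number quoted in the text, and radiation is neglected.

```python
import math

H0 = 67.4              # Hubble constant, km/s/Mpc (assumed input; Planck 2018)
omega_lambda = 0.6847  # dark-energy fraction quoted above (Planck 2018)
omega_matter = 1.0 - omega_lambda  # dark + ordinary matter, assuming flatness

def hubble(z):
    """H(z) from the Friedmann equation for a flat matter + Lambda universe:
    H(z) = H0 * sqrt(Om * (1 + z)**3 + OL). The matter term dilutes with
    expansion while Lambda stays constant, so Lambda dominates at late times
    and drives the accelerating expansion."""
    return H0 * math.sqrt(omega_matter * (1 + z) ** 3 + omega_lambda)

print(hubble(0.0))   # 67.4 km/s/Mpc today
print(hubble(1.0))   # expansion was faster in the matter-dominated past
```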
The remaining 4.8% comprises all ordinary matter observed as atoms, chemical elements, gas and plasma, the stuff of which visible planets, stars and galaxies are made. The great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10% of the ordinary matter contribution to the mass–energy density of the universe. The energy density also includes a small fraction in cosmic microwave background radiation, and not more than 0.5% in relic neutrinos. Although small today, these were much more important in the distant past, dominating over matter at redshifts above about 3200. The model includes a single originating event, the "Big Bang", which was not an explosion but the abrupt appearance of expanding space-time containing radiation at temperatures of around 10¹⁵ K. This was immediately followed by an exponential expansion of space by a scale multiplier of 10²⁷ or more, known as cosmic inflation. The early universe remained hot for several hundred thousand years, a state detectable as a residual cosmic microwave background, or CMB, a low-energy radiation emanating from all parts of the sky.
The "Big Bang" scenario, with cosmic inflation and standard particle physics, is the only current c
Mordehai "Moti" Milgrom is an Israeli physicist and professor in the department of Particle Physics and Astrophysics at the Weizmann Institute in Rehovot, Israel. He received his first degree from the Hebrew University of Jerusalem in 1966, he studied at the Weizmann Institute of Science and completed his doctorate in 1972. In 1981, he proposed Modified Newtonian dynamics as an alternative to the dark matter and galaxy rotation curve problems. Milgrom suggests that Newton's Second Law be modified for small accelerations. In the academic years 1980–1981 and 1985–1986 he was at the Institute for Advanced Study in Princeton. Before 1980 he worked on high-energy astrophysics and became well-known for his kinematical model of SS 433. Modified Newtonian dynamics is the invention of Mordehai Milgrom; the idea of an acceleration-based modification of dynamics or gravity would have occurred to someone else sooner or but it is safe to say that in the early 1980s no one but Milgrom had considered such a possible modification as an alternative to astrophysical dark matter.
It was a brilliant stroke of insight to realize that astronomical systems are characterized not only by their large scale but by their low internal accelerations, and that this could account for the known systematics in the kinematics and photometry of galactic systems. However, the idea was hardly greeted with overwhelming enthusiasm. Milgrom has three daughters.
Event Horizon Telescope
The Event Horizon Telescope (EHT) is a large telescope array consisting of a global network of radio telescopes, combining data from several very-long-baseline interferometry stations around the Earth. The aim of the EHT project is to observe the immediate environment of supermassive black holes with an angular resolution high enough to resolve structures on the size scale of the black hole's event horizon. Among the project's observational targets are the two black holes with the largest apparent angular size: M87* at the center of the supergiant elliptical galaxy Messier 87, and Sagittarius A* at the center of the Milky Way. The first image of a black hole, the supermassive one at the center of galaxy Messier 87, was published by the EHT Collaboration on April 10, 2019. The array made this observation at a wavelength of 1.3 mm and with a theoretical diffraction-limited resolution of 25 micro-arcseconds. Future plans involve improving the array's resolution by adding new telescopes and by taking shorter-wavelength observations.
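The quoted resolution follows from the diffraction limit θ ≈ λ/D with the baseline D approaching an Earth diameter. A quick order-of-magnitude check (the exact figure depends on the real baseline geometry and data weighting):

```python
import math

wavelength = 1.3e-3   # observing wavelength in metres (1.3 mm)
baseline = 1.2742e7   # Earth's diameter in metres: longest possible baseline

theta = wavelength / baseline                  # diffraction limit, radians
to_microarcsec = math.degrees(1.0) * 3600e6    # radians -> micro-arcseconds

print(f"{theta * to_microarcsec:.0f} micro-arcseconds")   # ~21 uas, the same
# order as the ~25 uas quoted for the actual array
```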
The EHT is composed of many radio observatories and radio telescope facilities around the world, working together to produce a high-sensitivity, high-angular-resolution telescope. Through the technique of very-long-baseline interferometry (VLBI), many independent radio antennas separated by hundreds or thousands of miles can be used in concert to create a virtual telescope with an effective diameter of the entire planet. The effort includes the development and deployment of submillimeter dual-polarization receivers and highly stable frequency standards to enable very-long-baseline interferometry at 230–450 GHz, higher-bandwidth VLBI backends and recorders, as well as the commissioning of new submillimeter VLBI sites. The idea was first envisioned by German radio astronomer Heino Falcke in 1993, and key theoretical aspects of the project were developed in the EU-funded Black Hole Cam project, led by Falcke, Michael Kramer and Luciano Rezzolla. Each year since its first data capture in 2006, the EHT array has added more observatories to its global network of radio telescopes.
The first image of the Milky Way's supermassive black hole, Sagittarius A*, was expected to be produced from data taken in April 2017, but because the South Pole Telescope is closed during the polar winter, the shipment of its data was delayed until December 2017, postponing the processing. Data collected on hard drives are transported by airplane from the various telescopes to the MIT Haystack Observatory and the Max Planck Institute for Radio Astronomy, where the data are cross-correlated and analyzed on a grid computer made from about 800 CPUs, all connected through a 40 Gbit/s network. The Event Horizon Telescope Collaboration announced its first results in simultaneous press conferences worldwide on April 10, 2019. The announcement featured the first direct image of a black hole, which showed the supermassive black hole at the center of Messier 87, designated M87*. The scientific results were presented in a series of six papers published in The Astrophysical Journal Letters. The image provided a test for Albert Einstein's general theory of relativity under extreme conditions.
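The cross-correlation step can be shown in miniature: two stations record the same noise-like sky signal with a relative geometric delay, and correlating the two streams recovers that delay. The data below are synthetic; real correlators must also handle fringe rotation, clock offsets and bandwidth effects.

```python
import numpy as np

rng = np.random.default_rng(42)

# The same noise-like sky signal reaches two stations with a relative
# delay set by geometry (here 37 samples), plus independent receiver noise.
n = 4096
true_delay = 37
sky = rng.normal(size=n + true_delay)
station_a = sky[true_delay:] + 0.5 * rng.normal(size=n)
station_b = sky[:n] + 0.5 * rng.normal(size=n)

# Cross-correlate the streams and locate the peak: its lag is the delay.
xcorr = np.correlate(station_b, station_a, mode="full")
lag = int(np.argmax(xcorr)) - (n - 1)
print("recovered delay:", lag, "samples")   # -> 37
```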
Studies had previously tested general relativity by looking at the motions of stars and gas clouds near the edge of a black hole. However, an image of a black hole brings observations even closer to the event horizon. Relativity predicts a dark shadow-like region, caused by gravitational bending and capture of light, which matches the observed image; the published paper states: "Overall, the observed image is consistent with expectations for the shadow of a spinning Kerr black hole as predicted by general relativity." Paul T. P. Ho, EHT Board member, said: "Once we were sure we had imaged the shadow, we could compare our observations to extensive computer models that include the physics of warped space, superheated matter and strong magnetic fields. Many of the features of the observed image match our theoretical understanding well." The image also provided new measurements for the mass and diameter of M87*. EHT measured the black hole's mass to be 6.5 ± 0.7 billion solar masses and measured the diameter of its event horizon to be approximately 40 billion kilometres, roughly 2.5 times smaller than the shadow that it casts, seen at the center of the image.
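These numbers can be cross-checked against the Schwarzschild radius, r_s = 2GM/c². For a 6.5-billion-solar-mass black hole the horizon diameter comes out near the quoted 40 billion km, and the photon-capture "shadow" of a non-rotating hole is predicted to be about 2.6 times larger, consistent with the "roughly 2.5 times" statement:

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

M = 6.5e9 * M_SUN                 # EHT mass estimate for M87*
r_s = 2 * G * M / c**2            # Schwarzschild radius, metres
diameter_km = 2 * r_s / 1e3

print(f"horizon diameter ~ {diameter_km:.1e} km")       # ~3.8e10 km, i.e. ~40e9 km
print(f"shadow diameter  ~ {2.6 * diameter_km:.1e} km") # ~2.6x larger (Schwarzschild case)
```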
From the asymmetry in the ring, EHT inferred that the matter on the brighter south side of the disk is moving towards Earth, the observer. This is based on the theory that approaching matter appears brighter because of relativistic beaming. Previous observations of the black hole's jet had shown that the black hole's spin axis is inclined at an angle of 17° relative to the observer's line of sight. From these two observations, EHT concluded that the black hole's spin axis points away from Earth. Images were created independently by four teams to assess the reliability of the results. These methods included both an established algorithm in radio astronomy known as CLEAN, as well as more advanced data-processing methods such as the CHIRP method created by Katherine Bouman and others. The algorithms that were used were Narayan and Nityananda's 1986 regularized maximum likelihood algorithm and Jan Högbom's 1974 CLEAN algorithm (a sketch of the latter follows after the list below). The EHT collaboration consists of 13 stakeholder institutes: the Academia Sinica Institute of Astronomy and Astrophysics, the University of Arizona, the University of Chicago, the East Asian Observatory, Goethe University Frankfurt, the Institut de radioastronomie millimétrique, the Large Millimeter Telescope, the Max Planck Institute for Radio Astronomy, the MIT Haystack Observatory, the National Astronomical Observatory of Japan, the Perimeter Institute for Theoretical Physics, Radboud University and the Smithsonian Astrophysical Observatory.
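A minimal one-dimensional sketch of Högbom's CLEAN iteration mentioned above: repeatedly locate the brightest point of the "dirty" image, subtract a scaled, shifted copy of the point-spread function there, and record the subtracted component. The point-spread function and sources below are toy values; real interferometric imaging works on two-dimensional maps with measured beams.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, threshold=0.01, max_iter=500):
    """1-D Hogbom CLEAN: decompose a dirty image into point components."""
    residual = dirty.astype(float).copy()
    components = np.zeros_like(residual)
    center = len(psf) // 2
    for _ in range(max_iter):
        peak = int(np.argmax(np.abs(residual)))
        if abs(residual[peak]) < threshold:
            break
        flux = gain * residual[peak]
        components[peak] += flux
        # Subtract a shifted, scaled copy of the PSF from the residual.
        for i in range(len(residual)):
            j = i - peak + center
            if 0 <= j < len(psf):
                residual[i] -= flux * psf[j]
    return components, residual

# Toy data: two point sources observed through a sinc-like dirty beam.
psf = np.sinc(np.arange(-32, 33) / 4.0)
true_sky = np.zeros(129)
true_sky[40], true_sky[70] = 1.0, 0.6
dirty = np.convolve(true_sky, psf, mode="same")

components, _ = hogbom_clean(dirty, psf)
print("two strongest components at:", np.argsort(components)[-2:])  # ~[70, 40]
```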
Neutrino
A neutrino is a fermion that interacts only via the weak subatomic force and gravity. The neutrino is so named because it is electrically neutral and because its rest mass is so small that it was long thought to be zero. The mass of the neutrino is much smaller than that of the other known elementary particles. The weak force has a very short range, the gravitational interaction is extremely weak, and neutrinos, as leptons, do not participate in the strong interaction. Thus, neutrinos typically pass through normal matter unimpeded and undetected. Weak interactions create neutrinos in one of three leptonic flavors: electron neutrinos, muon neutrinos, or tau neutrinos, each in association with the corresponding charged lepton. Although neutrinos were long believed to be massless, it is now known that there are three discrete neutrino masses with different tiny values, but they do not correspond uniquely to the three flavors. A neutrino created with a specific flavor is in an associated specific quantum superposition of all three mass states.
As a result, neutrinos oscillate between different flavors in flight. For example, an electron neutrino produced in a beta decay reaction may interact in a distant detector as a muon or tau neutrino. Although only differences between the squares of the three mass values are known as of 2016, cosmological observations imply that the sum of the three masses must be less than one millionth that of the electron. For each neutrino, there also exists a corresponding antiparticle, called an antineutrino, which likewise has half-integer spin and no electric charge; antineutrinos are distinguished from neutrinos by having opposite signs of lepton number and opposite chirality. To conserve total lepton number in nuclear beta decay, electron neutrinos appear together only with positrons or electron antineutrinos, and electron antineutrinos only with electrons or electron neutrinos. Neutrinos are created by various radioactive decays, including beta decay of atomic nuclei or hadrons; by nuclear reactions such as those that take place in the core of a star, or artificially in nuclear reactors, nuclear bombs or particle accelerators; during a supernova; in the spin-down of a neutron star; or when accelerated particle beams or cosmic rays strike atoms.
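The standard two-flavor approximation of this oscillation gives P(να→νβ) = sin²(2θ)·sin²(1.27·Δm²·L/E), with Δm² in eV², L in km and E in GeV. The mixing parameters below are rough atmospheric-sector values, used purely for illustration:

```python
import math

def oscillation_probability(L_km, E_GeV, sin2_2theta=1.0, dm2_eV2=2.5e-3):
    """Two-flavor transition probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# The probability oscillates with L/E, which is how experiments infer
# the mass-squared splitting dm2 and the mixing angle theta.
for L in (100, 500, 1000, 12742):   # last one: Earth's diameter in km
    print(f"{L:6d} km: P = {oscillation_probability(L, 1.0):.3f}")
```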
The majority of neutrinos in the vicinity of the Earth are from nuclear reactions in the Sun: about 65 billion solar neutrinos per second pass through every square centimeter perpendicular to the direction of the Sun. For study, neutrinos can be created artificially with nuclear reactors and particle accelerators. There is intense research activity involving neutrinos, with goals that include the determination of the three neutrino mass values and the measurement of the degree of CP violation in the leptonic sector. Neutrinos can also be used for tomography of the interior of the Earth. The neutrino was postulated first by Wolfgang Pauli in 1930 to explain how beta decay could conserve energy, momentum and angular momentum. In contrast to Niels Bohr, who proposed a statistical version of the conservation laws to explain the observed continuous energy spectra in beta decay, Pauli hypothesized an undetected particle that he called a "neutron", using the same -on ending employed for naming both the proton and the electron.
He considered that the new particle was emitted from the nucleus together with the electron or beta particle in the process of beta decay. James Chadwick discovered a much more massive neutral nuclear particle in 1932 and named it the neutron too, leaving two kinds of particles with the same name. Earlier, Pauli had used the term "neutron" both for the neutral particle that conserved energy in beta decay and for a presumed neutral particle in the nucleus. The word "neutrino" entered the scientific vocabulary through Enrico Fermi, who used it during a conference in Paris in July 1932 and at the Solvay Conference in October 1933, where Pauli also employed it. The name was jokingly coined by Edoardo Amaldi during a conversation with Fermi at the Institute of Physics of via Panisperna in Rome, in order to distinguish this light neutral particle from Chadwick's heavy neutron. In Fermi's theory of beta decay, Chadwick's large neutral particle could decay to a proton, an electron and the smaller neutral particle: n⁰ → p⁺ + e⁻ + ν̄e. Fermi's paper, written in 1934, unified Pauli's neutrino with Paul Dirac's positron and Werner Heisenberg's neutron–proton model and gave a solid theoretical basis for future experimental work.
The journal Nature rejected Fermi's paper, saying that the theory was "too remote from reality". He submitted the paper to an Italian journal, which accepted it, but the general lack of interest in his theory at that early date caused him to switch to experimental physics. By 1934, there was experimental evidence against Bohr's idea that energy conservation is invalid for beta decay: at the Solvay conference of that year, measurements of the energy spectra of beta particles were reported, showing that there is a strict limit on the energy of electrons from each type of beta decay. Such a limit is not expected if the conservation of energy is invalid, in which case any amount of energy would be statistically available in at least a few decays. The natural explanation of the beta decay spectrum as first measured in 1934 was that only a limited (and conserved) amount of energy was available, with a new particle sometimes carrying off a fraction of it and leaving the rest for the beta particle.
Gravity Probe B
Gravity Probe B (GP-B) was a satellite-based mission to test two unverified predictions of general relativity: the geodetic effect and frame-dragging. This was to be accomplished by measuring, very precisely, tiny changes in the direction of spin of four gyroscopes contained in an Earth satellite orbiting at 650 km altitude, crossing directly over the poles. The satellite was launched on 20 April 2004 on a Delta II rocket, and the spaceflight phase lasted until 2005. This provided a test of general relativity and related models. The principal investigator was Francis Everitt. Initial results confirmed the expected geodetic effect to an accuracy of about 1%, while the expected frame-dragging effect was similar in magnitude to the current noise level. Work continued to model and account for these sources of error, thus permitting extraction of the frame-dragging signal. By August 2008, the frame-dragging effect had been confirmed to within 15% of the expected result, and the December 2008 NASA report indicated that the geodetic effect was confirmed to better than 0.5%.
In an article published in the journal Physical Review Letters in 2011, the authors reported that analysis of the data from all four gyroscopes resulted in a geodetic drift rate of −6601.8 ± 18.3 mas/yr and a frame-dragging drift rate of −37.2 ± 7.2 mas/yr, in good agreement with the general relativity predictions of −6606.1 mas/yr (±0.28%) and −39.2 mas/yr (±0.19%), respectively. Gravity Probe B was a relativity gyroscope experiment funded by NASA. Efforts were led by the Stanford University physics department, with Lockheed Martin as the primary subcontractor. Mission scientists viewed it as the second gravity experiment in space, following the successful launch of Gravity Probe A in 1976. The mission plan was to test the geodetic effect and frame-dragging by monitoring the spin directions of the four orbiting gyroscopes; the gyroscopes were intended to be so free from disturbance that they would provide a near-perfect space-time reference system.
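A quick arithmetic check of the agreement quoted above, converting the percentage uncertainties on the predictions to mas/yr and expressing each measured-minus-predicted difference in units of the combined error:

```python
import math

# (measured, measured error, predicted, predicted error) in mas/yr; the
# prediction errors are the quoted percentages applied to the predictions.
results = {
    "geodetic":       (-6601.8, 18.3, -6606.1, 0.0028 * 6606.1),
    "frame-dragging": (-37.2,   7.2,  -39.2,   0.0019 * 39.2),
}

for name, (meas, err_m, pred, err_p) in results.items():
    combined = math.hypot(err_m, err_p)   # combine independent 1-sigma errors
    n_sigma = abs(meas - pred) / combined
    print(f"{name}: differs from prediction by {n_sigma:.2f} sigma")
# Both differences are well below 1 sigma: consistent with general relativity.
```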
Such undisturbed gyroscopes would reveal how space and time are "warped" by the presence of the Earth, and by how much the Earth's rotation "drags" space-time around with it. The geodetic effect is an effect caused by space-time being "curved" by the mass of the Earth. A gyroscope's axis, when parallel-transported around the Earth in one complete revolution, does not end up pointing in the same direction as before. The angle "missing" may be thought of as the amount the gyroscope "leans over" into the slope of the space-time curvature. A more precise explanation for the space-curvature part of the geodetic precession is obtained by using a nearly flat cone to model the space curvature of the Earth's gravitational field; such a cone is made by cutting out a thin "pie-slice" from a circle and gluing the cut edges together. The spatial geodetic precession is a measure of the missing "pie-slice" angle. Gravity Probe B was expected to measure this effect to an accuracy of one part in 10,000, the most stringent check on general relativistic predictions to date.
The much smaller frame-dragging effect is an example of gravitomagnetism. It is an analog of magnetism in classical electrodynamics, but caused by rotating masses rather than rotating electric charges. Previously, only two analyses of the laser-ranging data obtained by the two LAGEOS satellites, published in 1997 and 2004, had claimed to find the frame-dragging effect, with accuracies of about 20% and 10% respectively, whereas Gravity Probe B aimed to measure the frame-dragging effect to a precision of 1%. However, Lorenzo Iorio claimed that the level of total uncertainty of the tests conducted with the two LAGEOS satellites had been underestimated. A more recent analysis of Mars Global Surveyor data claimed to have confirmed the frame-dragging effect to a precision of 0.5%, although the accuracy of this claim is disputed. The Lense–Thirring effect of the Sun has also been investigated in view of a possible detection with the inner planets in the near future. The launch was planned for 19 April 2004 at Vandenberg Air Force Base, but was scrubbed within 5 minutes of the scheduled launch window due to changing winds in the upper atmosphere.
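For reference, the two predicted rates can be estimated from the standard circular-orbit expressions: the de Sitter (geodetic) precession (3/2)GMv/(c²r²) and the Lense–Thirring rate averaged over a polar orbit, GJ/(2c²r³). The sketch below uses approximate values for Earth's GM and spin angular momentum and reproduces the ~6,600 and ~39 mas/yr predictions to within a few percent:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
GM = 3.986e14          # Earth's gravitational parameter, m^3/s^2
J = 5.86e33            # Earth's spin angular momentum, kg m^2/s (approximate)
r = 6.371e6 + 650e3    # orbital radius: mean Earth radius + 650 km altitude

v = math.sqrt(GM / r)  # circular orbital speed

# de Sitter (geodetic) precession for a circular orbit.
geodetic = 1.5 * GM * v / (c**2 * r**2)          # rad/s
# Lense-Thirring (frame-dragging) rate averaged over a polar orbit.
frame_dragging = G * J / (2.0 * c**2 * r**3)     # rad/s

to_mas_per_yr = 3.156e7 * math.degrees(1.0) * 3600e3  # rad/s -> mas/yr
print(f"geodetic:       {geodetic * to_mas_per_yr:7.0f} mas/yr")        # ~6,600
print(f"frame-dragging: {frame_dragging * to_mas_per_yr:7.1f} mas/yr")  # ~40
```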
An unusual feature of the mission was that it had only a one-second launch window, due to the precise orbit required by the experiment. On 20 April, at 9:57:23 AM PDT, the spacecraft was launched successfully; the satellite was placed in orbit at 11:12:33 AM, after a cruise period over the south pole and a short second burn. The mission lasted 16 months. Some preliminary results were presented at a special session during the American Physical Society meeting in April 2007. NASA requested a proposal for extending the GP-B data analysis phase through December 2007; the data analysis phase was further extended to September 2008 using funding from Richard Fairbank, Stanford and NASA, and beyond that point using non-NASA funding only. Final science results were reported in 2011. The Gravity Probe B experiment comprised four London-moment gyroscopes and a reference telescope sighted on HR 8703, a binary star in the constellation Pegasus. In polar orbit, with the gyroscope spin directions pointing toward HR 8703, the frame-dragging and geodetic effects came out at right angles, each gyroscope measuring both.
The gyroscopes were housed in a dewar of superfluid helium, which maintained them at a temperature of under 2 kelvins; such near-absolute-zero temperatures were required to minimize molecular disturbance and to keep the gyroscopes' niobium coatings superconducting.