At standard pressure, the chemical element helium exists in liquid form only at the extremely low temperature of about −269 °C (roughly 4 K). Its boiling point and critical point depend on which isotope is present: the common isotope helium-4 or the rare isotope helium-3; these are the only two stable isotopes of helium. The density of liquid helium-4 at its boiling point and a pressure of one atmosphere is about 0.125 g/cm³, roughly one-eighth the density of liquid water. Helium was first liquefied on July 10, 1908, by the Dutch physicist Heike Kamerlingh Onnes at the University of Leiden in the Netherlands; at that time, helium-3 was unknown. In recent decades, liquid helium has been used as a cryogenic refrigerant and is produced commercially for use in superconducting magnets, such as those in magnetic resonance imaging (MRI), nuclear magnetic resonance (NMR), and magnetoencephalography, and in physics experiments such as low-temperature Mössbauer spectroscopy.
The temperature required to produce liquid helium is so low because of the weakness of the attraction between helium atoms. These interatomic forces are weak to begin with because helium is a noble gas, and their effect is further reduced by quantum mechanics, which is significant in helium because of its low atomic mass of about four atomic mass units. The zero-point energy of liquid helium is smaller when its atoms are less confined by their neighbors; hence the liquid can lower its ground-state energy through a naturally occurring increase in its average interatomic distance, even though at greater distances the interatomic attractions are weaker. Because of these weak forces, helium remains liquid at atmospheric pressure all the way from its liquefaction point down to absolute zero; liquid helium solidifies only at low temperatures under pressures of roughly 25 atmospheres or more. At temperatures well below their liquefaction points, both helium-4 and helium-3 undergo transitions to superfluid phases.
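The scale of these quantum effects can be checked with a back-of-the-envelope calculation. The sketch below (in Python, using standard physical constants) compares the thermal de Broglie wavelength of a helium-4 atom near its boiling point with the typical interatomic spacing estimated from the liquid's density; when the two are comparable, quantum mechanics governs the liquid's behavior. The spacing estimate from the density is an illustrative assumption, not a measured value.

```python
import math

# Physical constants (SI units)
h = 6.62607015e-34     # Planck constant, J*s
k_B = 1.380649e-23     # Boltzmann constant, J/K

m_He4 = 4.002602 * 1.66053907e-27   # mass of a helium-4 atom, kg

def thermal_de_broglie_wavelength(mass, temperature):
    """Thermal de Broglie wavelength: lambda = h / sqrt(2*pi*m*k_B*T)."""
    return h / math.sqrt(2 * math.pi * mass * k_B * temperature)

T = 4.2  # K, near the boiling point of helium-4
lam = thermal_de_broglie_wavelength(m_He4, T)

# Rough interatomic spacing in liquid helium-4, estimated from its
# density (~0.125 g/cm^3) as n^(-1/3) with number density n = rho/m.
rho = 125.0  # kg/m^3
spacing = (m_He4 / rho) ** (1.0 / 3.0)

print(f"thermal de Broglie wavelength at {T} K: {lam * 1e9:.2f} nm")
print(f"estimated interatomic spacing:          {spacing * 1e9:.2f} nm")
# Both come out around 0.4 nm: the atoms' wave packets overlap,
# which is why quantum effects dominate in liquid helium.
```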
Liquid helium-4 and the rare helium-3 are not completely miscible. Below about 0.9 kelvin at their saturated vapor pressure, a mixture of the two isotopes undergoes phase separation into a helium-3-rich normal fluid that floats on a denser, helium-4-rich superfluid. This phase separation happens because the overall mass of liquid helium can reduce its thermodynamic enthalpy by separating. Even at very low temperatures, the helium-4-rich superfluid phase can hold up to about 6% helium-3 in solution; this finite solubility makes possible the dilution refrigerator, which is capable of reaching temperatures of a few millikelvins. Superfluid helium-4 has markedly different properties from ordinary liquid helium. Important early work on the characteristics of liquid helium was done by the Soviet physicist Lev Landau and later extended by the American physicist Richard Feynman.
High-temperature superconductors (HTS) are materials that behave as superconductors at unusually high temperatures. The first high-Tc superconductor was discovered in 1986 by IBM researchers Georg Bednorz and K. Alex Müller, who were awarded the 1987 Nobel Prize in Physics "for their important break-through in the discovery of superconductivity in ceramic materials". Whereas "ordinary" or metallic superconductors have transition temperatures below 30 K and must be cooled with liquid helium to achieve superconductivity, HTS have been observed with transition temperatures as high as 138 K and can be cooled to superconductivity using liquid nitrogen. Until 2008, only certain compounds of copper and oxygen were known to have HTS properties, and the term high-temperature superconductor was used interchangeably with cuprate superconductor for compounds such as bismuth strontium calcium copper oxide and yttrium barium copper oxide. Several iron-based compounds are now also known to be superconducting at high temperatures.
In 2015, hydrogen sulfide under high pressure was found to undergo a superconducting transition near 203 K, due to the formation of H3S, setting a new record for the highest-temperature superconductor. (For an explanation of Tc, see Superconductivity § Superconducting phase transition and BCS theory § Successes of the BCS theory.) The phenomenon of superconductivity was discovered by Kamerlingh Onnes in 1911, in metallic mercury below 4 K. Since then, researchers have attempted to observe superconductivity at ever higher temperatures, with the goal of finding a room-temperature superconductor. By the late 1970s, superconductivity had been observed in several metallic compounds at temperatures much higher than those for elemental metals, in some cases exceeding 20 K. In 1986, J. Georg Bednorz and K. Alex Müller, working at the IBM research laboratory near Zurich, Switzerland, were exploring a new class of ceramics for superconductivity when Bednorz encountered a barium-doped compound of lanthanum and copper oxide whose resistance dropped to zero at a temperature around 35 K.
Their results were soon confirmed by many groups, notably Paul Chu at the University of Houston and Shoji Tanaka at the University of Tokyo. Shortly afterward, P. W. Anderson at Princeton University produced the first theoretical description of these materials, based on resonating valence bond (RVB) theory, but a full understanding of these materials is still developing today. These superconductors are now known to possess d-wave pair symmetry. The first proposal that high-temperature cuprate superconductivity involves d-wave pairing was made in 1987 by Bickers and Scalettar, followed in 1988 by three subsequent theories: by Inui, Doniach, and Ruckenstein, using spin-fluctuation theory; by Gros, Poilblanc, and Zhang; and by Kotliar and Liu, who identified d-wave pairing as a natural consequence of RVB theory. The d-wave nature of the cuprate superconductors was confirmed by a variety of experiments, including direct observation of the d-wave nodes in the excitation spectrum through angle-resolved photoemission spectroscopy, observation of a half-integer flux quantum in tunneling experiments, and, indirectly, the temperature dependence of the penetration depth, specific heat, and thermal conductivity.
Until 2015, the superconductor with the highest transition temperature confirmed by multiple independent research groups was mercury barium calcium copper oxide, at around 133 K. After more than twenty years of intensive research, the origin of high-temperature superconductivity is still not clear, but it seems that instead of the electron-phonon attraction mechanism of conventional superconductivity, genuine electronic mechanisms are at work, and instead of conventional, purely s-wave pairing, more exotic pairing symmetries are thought to be involved. In 2014, evidence that fractionalized particles can occur in quasi-two-dimensional magnetic materials was found by EPFL scientists, lending support to Anderson's theory of high-temperature superconductivity. The structure of high-Tc copper oxide (cuprate) superconductors is closely related to the perovskite structure and has been described as a distorted, oxygen-deficient, multi-layered perovskite structure.
One of the properties of the crystal structure of oxide superconductors is an alternating multi-layer of CuO2 planes, with superconductivity taking place between these layers; the more CuO2 layers, the higher Tc. This structure causes a large anisotropy in normal conducting and superconducting properties, since electrical currents are carried by holes induced in the oxygen sites of the CuO2 sheets. The electrical conduction is highly anisotropic, with much higher conductivity parallel to the CuO2 planes than in the perpendicular direction. Critical temperatures depend on the chemical composition, cation substitutions, and oxygen content, and these materials can be classified as superstripes. The first superconductor found with Tc > 77 K (the boiling point of liquid nitrogen) was yttrium barium copper oxide.
Absolute zero is the lower limit of the thermodynamic temperature scale, the state at which the enthalpy and entropy of a cooled ideal gas reach their minimum values, taken as zero. At this temperature the fundamental particles of nature have minimal vibrational motion, retaining only quantum mechanical, zero-point-energy-induced motion. The theoretical temperature is determined by extrapolating the ideal gas law; the Kelvin and Rankine temperature scales set their zero points at absolute zero by definition. Absolute zero is thought of as the lowest temperature possible, but it is not the lowest enthalpy state possible, because all real substances begin to depart from ideal gas behavior when cooled, as they approach the change of state to liquid and then to solid. In the quantum-mechanical description, matter at absolute zero is in its ground state, the point of lowest internal energy. The laws of thermodynamics indicate that absolute zero cannot be reached using only thermodynamic means, because the temperature of the substance being cooled approaches the temperature of the cooling agent asymptotically; moreover, even a system at absolute zero would still possess quantum mechanical zero-point energy, the energy of its ground state, and the kinetic energy of the ground state cannot be removed.
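The extrapolation just mentioned can be illustrated numerically. The sketch below generates gas volumes from Charles's law at a few temperatures (synthetic values, for illustration only, not measured data), fits a straight line, and extends it down to zero volume, recovering −273.15 °C.

```python
# Extrapolating the ideal gas law to locate absolute zero.
# At constant pressure, V is linear in t (deg C): V = V0 * (1 + t/273.15),
# which is Charles's law. The V = 0 intercept is absolute zero.

temps_c = [0.0, 25.0, 50.0, 100.0]   # temperatures, deg C
V0 = 22.414                          # L per mol at 0 deg C and 1 atm
volumes = [V0 * (1 + t / 273.15) for t in temps_c]

# Least-squares fit of V = a*t + b, done by hand (no external libraries).
n = len(temps_c)
mean_t = sum(temps_c) / n
mean_v = sum(volumes) / n
a = sum((t - mean_t) * (v - mean_v) for t, v in zip(temps_c, volumes)) \
    / sum((t - mean_t) ** 2 for t in temps_c)
b = mean_v - a * mean_t

# The fitted line hits V = 0 at t = -b/a, i.e. at absolute zero.
print(f"extrapolated V = 0 at t = {-b / a:.2f} deg C")  # -> -273.15 deg C
```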
Nevertheless, scientists and technologists routinely achieve temperatures close to absolute zero, where matter exhibits quantum effects such as superconductivity and superfluidity. At temperatures near 0 K, nearly all molecular motion ceases and ΔS = 0 for any adiabatic process, where S is the entropy. In such a circumstance, pure substances can, ideally, form perfect crystals as T → 0. Max Planck's strong form of the third law of thermodynamics states that the entropy of a perfect crystal vanishes at absolute zero. The original Nernst heat theorem makes the weaker and less controversial claim that the entropy change for any isothermal process approaches zero as T → 0, that is, lim_{T→0} ΔS = 0; the implication is that the entropy of a perfect crystal approaches a constant value. The Nernst postulate identifies the isotherm T = 0 as coincident with the adiabat S = 0, although other isotherms and adiabats are distinct; as no two adiabats intersect, no other adiabat can intersect the T = 0 isotherm.
Consequently, no adiabatic process initiated at nonzero temperature can lead to zero temperature. A perfect crystal is one in which the internal lattice structure extends uninterrupted in all directions; the perfect order can be represented by translational symmetry along three axes, with every lattice element of the structure in its proper place, whether it is a single atom or a molecular grouping. For substances that exist in two stable crystalline forms, such as diamond and graphite for carbon, there is a kind of chemical degeneracy, and the question remains whether both can have zero entropy at T = 0 even though each is perfectly ordered. Perfect crystals never occur in practice. In the Debye model, the specific heat and entropy of a pure crystal are proportional to T³, while the enthalpy and chemical potential are proportional to T⁴; these quantities drop toward their T = 0 limiting values and approach them with zero slope. For the specific heats at least, the limiting value itself is zero, as borne out by experiments down to below 10 K.
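The T³ behavior can be made concrete with a short sketch of the low-temperature (Debye) limit of the molar heat capacity, C_V ≈ (12π⁴/5) N_A k_B (T/Θ_D)³. The Debye temperature of 343 K used here is a commonly quoted value for copper and is an assumption chosen for illustration.

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
N_A = 6.02214076e23   # Avogadro constant, 1/mol

def debye_heat_capacity_low_T(T, theta_D):
    """Molar heat capacity in the Debye T^3 limit (valid for T << theta_D)."""
    return (12 * math.pi**4 / 5) * N_A * k_B * (T / theta_D) ** 3

theta_D_copper = 343.0  # K, commonly quoted Debye temperature of copper
for T in [1.0, 2.0, 4.0, 8.0]:
    C = debye_heat_capacity_low_T(T, theta_D_copper)
    print(f"T = {T:4.1f} K   C_V = {C:.3e} J/(mol K)")
# Doubling T multiplies C_V by 8, the hallmark of the T^3 law,
# and C_V -> 0 with zero slope as T -> 0.
```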
Even the less detailed Einstein model shows this curious drop in specific heats. In fact, all specific heats vanish at absolute zero, not just those of crystals; the same holds for the coefficient of thermal expansion, and Maxwell's relations show that various other quantities also vanish. These phenomena were unanticipated. Since the relation between changes in Gibbs free energy, enthalpy, and entropy is ΔG = ΔH − TΔS, it follows that as T decreases, ΔG and ΔH approach each other (so long as ΔS remains bounded). Experimentally, it is found that all spontaneous processes result in a decrease in G as they proceed toward equilibrium. If ΔS and/or T are small, the condition ΔG < 0 may imply that ΔH < 0, which would indicate an exothermic reaction; however, this is not required. Moreover, the slopes of the temperature derivatives of ΔG and ΔH converge and are equal to zero at T = 0; this ensures that ΔG and ΔH are nearly the same over a considerable range of temperatures and justifies the approximate empirical principle of Thomsen and Berthelot, which states that the equilibrium state to which a system proceeds is the one that evolves the greatest amount of heat, i.e. that an actual process is the most exothermic one.
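A quick numerical sketch of ΔG = ΔH − TΔS shows how ΔG and ΔH merge as T drops; the ΔH and ΔS values below are arbitrary illustrative numbers, not data for any real reaction.

```python
# Illustrating Delta-G = Delta-H - T * Delta-S as T -> 0.
# The values below are arbitrary, chosen only for illustration.
delta_H = -50_000.0   # J/mol (an exothermic process)
delta_S = -100.0      # J/(mol K)

for T in [300.0, 100.0, 10.0, 1.0, 0.0]:
    delta_G = delta_H - T * delta_S
    print(f"T = {T:6.1f} K   dG = {delta_G:10.1f} J/mol   dH = {delta_H:.1f} J/mol")
# As T -> 0 the T*dS term vanishes, so dG -> dH: at low temperature the
# most exothermic process (most negative dH) also has the most negative
# dG, which is the content of the Thomsen-Berthelot principle.
```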
One model that estimates the properties of an electron gas at absolute zero in metals is the Fermi gas. The electrons, being fermions, must occupy different quantum states, which forces them into high typical velocities even at absolute zero. The maximum energy that electrons can have at absolute zero is called the Fermi energy, and the corresponding Fermi velocity in a typical metal is on the order of 10⁶ m/s.
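A minimal sketch of the free-electron (Fermi gas) estimate follows, using the standard formula E_F = (ħ²/2m)(3π²n)^(2/3); the conduction-electron density used for copper is a textbook value and should be treated as an assumption.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # joules per electron volt

n_copper = 8.49e28       # conduction electrons per m^3 (textbook value)

# Free-electron Fermi energy and Fermi velocity at absolute zero.
E_F = (hbar**2 / (2 * m_e)) * (3 * math.pi**2 * n_copper) ** (2 / 3)
v_F = math.sqrt(2 * E_F / m_e)

print(f"Fermi energy of copper:   {E_F / eV:.2f} eV")   # ~7 eV
print(f"Fermi velocity of copper: {v_F:.2e} m/s")       # ~1.6e6 m/s
# Even at T = 0 the electrons move at about 0.5% of light speed:
# the Pauli exclusion principle, not temperature, sets these speeds.
```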
The Josephson effect is the phenomenon of supercurrent, a current that flows indefinitely without any applied voltage, across a device known as a Josephson junction, which consists of two or more superconductors coupled by a weak link. The weak link can be a thin insulating barrier, a short section of non-superconducting metal, or a physical constriction that weakens the superconductivity at the point of contact. The Josephson effect is an example of a macroscopic quantum phenomenon. It is named after the British physicist Brian David Josephson, who in 1962 predicted the mathematical relationships for the current and voltage across the weak link. The DC Josephson effect had been seen in experiments before 1962, but had been attributed to "super-shorts", or breaches in the insulating barrier leading to direct conduction of electrons between the superconductors. The first paper to claim the discovery of Josephson's effect, and to make the requisite experimental checks, was that of Philip Anderson and John Rowell.
These authors were awarded patents on the effects. Before Josephson's prediction, it was only known that normal (non-superconducting) electrons can flow through an insulating barrier by means of quantum tunneling; Josephson was the first to predict the tunneling of superconducting Cooper pairs. For this work, he received the Nobel Prize in Physics in 1973. Josephson junctions have important applications in quantum-mechanical circuits, such as SQUIDs, superconducting qubits, and RSFQ digital electronics; the NIST standard for one volt is achieved by an array of 20,208 Josephson junctions in series. Types of Josephson junction include the π Josephson junction, the φ Josephson junction, the long Josephson junction, and the superconducting tunnel junction. A "Dayem bridge" is a thin-film variant of the Josephson junction in which the weak link consists of a superconducting wire with dimensions on the scale of a few micrometres or less. The Josephson junction count of a device is used as a benchmark for its complexity. The Josephson effect has found wide usage, for example in the following areas.
SQUIDs, or superconducting quantum interference devices, are very sensitive magnetometers that operate via the Josephson effect; they are used widely in science and engineering. In precision metrology, the Josephson effect provides an exactly reproducible conversion between frequency and voltage. Since frequency is already defined precisely and practically by the caesium standard, the Josephson effect is used, for most practical purposes, to give the standard representation of the volt, the Josephson voltage standard; however, the International Bureau of Weights and Measures has not changed the official SI unit definition on this basis. Single-electron transistors can be constructed of superconducting materials, allowing the Josephson effect to be exploited for novel effects; the resulting device is called a "superconducting single-electron transistor". The Josephson effect is also used for the most precise measurements of the elementary charge, in terms of the Josephson constant and the von Klitzing constant, the latter related to the quantum Hall effect. RSFQ digital electronics is based on shunted Josephson junctions.
In this case, the junction switching event is associated with the emission of one magnetic flux quantum Φ₀ = h/2e, which carries the digital information: the absence of switching is equivalent to a 0, while one switching event carries a 1. Josephson junctions are also integral to superconducting quantum computing, as qubits such as the flux qubit or other schemes where the phase and charge act as conjugate variables. Superconducting tunnel junction detectors may become a viable replacement for CCDs for use in astronomy and astrophysics in a few years; these devices are effective across a wide spectrum, from ultraviolet to infrared, and also in X-rays. The technology has been tried out on the William Herschel Telescope in the SCAM instrument. Quiterons and similar superconducting switching devices also rely on the effect, and the Josephson effect has been observed in SHeQUIDs, the superfluid helium analog of a DC SQUID. The basic equations governing the dynamics of the Josephson effect are U(t) = (ℏ/2e)·(∂φ/∂t) and I(t) = I_c·sin φ(t), where U and I are the voltage across and the current through the Josephson junction, φ is the "phase difference" across the junction, and I_c is a constant, the critical current of the junction.
The critical current is an important phenomenological parameter of the device; it can be affected by temperature as well as by an applied magnetic field. The physical constant h/2e is the magnetic flux quantum, the inverse of which is the Josephson constant.
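The two relations above can be exercised directly: under a constant bias voltage U, the phase grows linearly in time and the supercurrent oscillates at the Josephson frequency f = 2eU/h, about 483.6 MHz per microvolt. A minimal sketch, with an assumed (illustrative) critical current:

```python
import math

e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J*s
hbar = h / (2 * math.pi)

I_c = 1e-6   # assumed critical current of the junction, A (illustrative)
U = 1e-6     # constant bias voltage across the junction, V

# Constant voltage -> the phase winds linearly: d(phi)/dt = (2e/hbar)*U,
# so the supercurrent I = I_c * sin(phi) oscillates at f = 2eU/h.
f_josephson = 2 * e * U / h
print(f"Josephson frequency at 1 uV bias: {f_josephson / 1e6:.1f} MHz")  # ~483.6

def supercurrent(t, phi0=0.0):
    """I(t) = I_c * sin(phi0 + (2e/hbar) * U * t) for a constant bias U."""
    return I_c * math.sin(phi0 + (2 * e / hbar) * U * t)

T_period = 1 / f_josephson          # one period of the AC Josephson current
print(f"I at t = T/4: {supercurrent(T_period / 4):.3e} A")  # ~ +I_c
```

This frequency-to-voltage proportionality, with no material-dependent constants, is what makes the Josephson voltage standard described above possible.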
Carbon nanotubes (CNTs) are allotropes of carbon with a cylindrical nanostructure. These cylindrical carbon molecules have unusual properties, which are valuable for nanotechnology, electronics, and other fields of materials science and technology. Owing to the material's exceptional strength and stiffness, nanotubes have been constructed with a length-to-diameter ratio of up to 132,000,000:1, significantly larger than that of any other material. In addition, owing to their extraordinary thermal conductivity and mechanical and electrical properties, carbon nanotubes find applications as additives to various structural materials; for instance, nanotubes form a tiny portion of the material in some baseball bats, golf clubs, car parts, and Damascus steel. Nanotubes are members of the fullerene structural family. Their name is derived from their long, hollow structure, with walls formed by one-atom-thick sheets of carbon called graphene. These sheets are rolled at specific and discrete angles, and the combination of rolling angle and radius decides the nanotube's properties, for example whether the individual nanotube shell is a metal or a semiconductor.
Nanotubes are categorized as single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs). Individual nanotubes naturally align themselves into "ropes" held together by van der Waals forces, more specifically pi-stacking. Applied quantum chemistry, specifically orbital hybridization, best describes the chemical bonding in nanotubes; the bonding involves sp²-hybridized carbon atoms. These bonds, which are similar to those of graphite and stronger than those found in alkanes and diamond, provide nanotubes with their unique strength. There is no consensus on some terms describing carbon nanotubes in the scientific literature: both "-wall" and "-walled" are used in combination with "single", "double", "triple", or "multi", and the letter C is often omitted in the abbreviation, for example multi-walled nanotube (MWNT). The diameter of an ideal single-walled nanotube can be calculated from its (n, m) roll-up indices as d = (a/π)·√(n² + nm + m²) = 78.3·√(n² + nm + m²) pm, where a = 246 pm is the graphene lattice constant. SWNTs are an important variety of carbon nanotube because most of their properties change significantly with the (n, m) values, and this dependence is non-monotonic. In particular, their band gap can vary from zero to about 2 eV and their electrical conductivity can show metallic or semiconducting behavior.
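The dependence on (n, m) can be sketched directly from the diameter formula above together with the standard zone-folding rule that a tube is metallic when (n − m) is divisible by 3. A minimal sketch:

```python
def nanotube_diameter_pm(n, m):
    """Ideal SWNT diameter from (n, m) indices: d = 78.3*sqrt(n^2 + n*m + m^2) pm."""
    return 78.3 * (n * n + n * m + m * m) ** 0.5

def is_metallic(n, m):
    """Zone-folding rule: the tube is metallic when (n - m) is divisible by 3."""
    return (n - m) % 3 == 0

for n, m in [(5, 5), (9, 0), (10, 0), (6, 4), (10, 10)]:
    kind = "metallic" if is_metallic(n, m) else "semiconducting"
    print(f"({n:2d},{m:2d}): d = {nanotube_diameter_pm(n, m):6.1f} pm  {kind}")
# Armchair tubes (n, n) are always metallic, while (10, 0) and (6, 4)
# are semiconducting: the (n, m) dependence is discrete, not monotonic.
```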
Single-walled nanotubes are candidates for miniaturizing electronics. The most basic building block of these systems is an electric wire, and SWNTs with diameters on the order of a nanometer can be excellent conductors. One useful application of SWNTs is in the development of the first intermolecular field-effect transistors (FETs); the first intermolecular logic gate using SWCNT FETs was made in 2001. A logic gate requires both a p-FET and an n-FET; because SWNTs are p-FETs when exposed to oxygen and n-FETs otherwise, it is possible to expose half of an SWNT to oxygen and protect the other half from it. The resulting SWNT acts as a NOT logic gate with both p- and n-type FETs in the same molecule. Prices for single-walled nanotubes declined from around $1,500 per gram as of 2000 to retail prices of around $50 per gram of as-produced 40–60% by weight SWNTs as of March 2010; as of 2016, the retail price of as-produced 75% by weight SWNTs was $2 per gram, cheap enough for widespread use. SWNTs are forecast to make a large impact in electronics applications by 2020, according to The Global Market for Carbon Nanotubes report. Multi-walled nanotubes (MWNTs) consist of multiple rolled layers of graphene.
There are two models describing the structure of multi-walled nanotubes. In the Russian Doll model, sheets of graphite are arranged in concentric cylinders, e.g. a single-walled nanotube within a larger single-walled nanotube. In the Parchment model, a single sheet of graphite is rolled around itself, resembling a scroll of parchment or a rolled newspaper. The interlayer distance in multi-walled nanotubes is close to the distance between graphene layers in graphite, approximately 3.4 Å. The Russian Doll structure is observed more commonly, and its individual shells can be described as SWNTs, which can be metallic or semiconducting. Because of statistical probability and restrictions on the relative diameters of the individual tubes, one of the shells is usually a zero-gap metal, and thus the whole MWNT is usually a zero-gap metal; a sketch of this probability argument follows.
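The "statistical probability" argument can be made explicit: under the common assumption that roughly one third of randomly chosen (n, m) chiralities are metallic, the chance that every shell of an N-walled tube is semiconducting falls off as (2/3)^N.

```python
# Assumption: ~1/3 of random (n, m) chiralities are metallic, so
# P(all N shells semiconducting) = (2/3)^N for independent shells.
for n_shells in [1, 2, 5, 10, 20]:
    p_all_semi = (2 / 3) ** n_shells
    print(f"{n_shells:2d} shells: P(at least one metallic shell) = {1 - p_all_semi:.3f}")
# With ~10 or more walls it is nearly certain that some shell is
# metallic, which is why MWNTs behave as zero-gap metals overall.
```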
Double-walled carbon nanotubes (DWNTs) form a special class of nanotubes because their morphology and properties are similar to those of SWNTs but they are more resistant to chemicals; this is important when it is necessary to graft chemical functions to the surface of the nanotubes to add properties to the CNT. Covalent functionalization of SWNTs breaks some C=C double bonds, leaving "holes" in the structure of the nanotube and thus modifying both its mechanical and electrical properties; in the case of DWNTs, only the outer wall is modified. DWNT synthesis on the gram scale by the CCVD technique was first proposed in 2003, from the selective reduction of oxide solutions in methane and hydrogen. The telescopic motion ability of inner shells and their unique mechanical properties should permit the use of multi-walled nanotubes as the main movable arms in future nanomechanical devices.
Direct current (DC) is the unidirectional flow of electric charge. A battery is a good example of a DC power supply. Direct current may flow in a conductor such as a wire, but it can also flow through semiconductors, insulators, or even through a vacuum, as in electron or ion beams. The electric current flows in a constant direction, distinguishing it from alternating current (AC). An earlier term for this type of current was galvanic current. The abbreviations AC and DC are often used to mean simply alternating and direct, as when they modify current or voltage. Direct current may be obtained from an alternating current supply by use of a rectifier, which contains electronic or electromechanical elements that allow current to flow only in one direction, and direct current may be converted into alternating current with an inverter or a motor-generator set. Direct current is used as a power supply for electronic systems; large quantities of direct-current power are used in the production of aluminum and other electrochemical processes, and it is used for some railways in urban areas.
High-voltage direct current (HVDC) is used to transmit large amounts of power from remote generation sites or to interconnect alternating current power grids. Direct current was first produced in 1800 by the Italian physicist Alessandro Volta's battery, his Voltaic pile. The nature of how current flowed was not yet understood; the French physicist André-Marie Ampère conjectured that current travelled in one direction from positive to negative. When the French instrument maker Hippolyte Pixii built the first dynamo electric generator in 1832, he found that as the magnet passed the loops of wire each half turn, it caused the flow of electricity to reverse, generating an alternating current. At Ampère's suggestion, Pixii added a commutator, a type of "switch" in which contacts on the shaft work with "brush" contacts to produce direct current. The late 1870s and early 1880s saw electricity starting to be generated at power stations, which were set up to power arc lighting running on high-voltage direct current or alternating current; this was followed by the widespread use of low-voltage direct current for indoor electric lighting in businesses and homes after the inventor Thomas Edison launched his incandescent-bulb-based electric "utility" in 1882.
Because alternating current, unlike direct current, can use transformers to raise and lower voltages and thus allow much longer transmission distances, direct current was replaced over the next few decades by alternating current in power delivery. In the mid-1950s, high-voltage direct current transmission was developed, and it is now an option instead of long-distance high-voltage alternating current systems; for long-distance undersea cables, the DC option is the only technically feasible one. For applications requiring direct current, such as third-rail power systems, alternating current is distributed to a substation, which uses a rectifier to convert the power to direct current. The term DC is used both to refer to power systems that use only one polarity of voltage or current and to refer to the constant, zero-frequency, or slowly varying local mean value of a voltage or current. For example, the voltage across a DC voltage source is constant, as is the current through a DC current source.
The DC solution of an electric circuit is the solution in which all voltages and currents are constant. It can be shown that any stationary voltage or current waveform can be decomposed into a sum of a DC component and a zero-mean time-varying component; the DC component is the mean value of the waveform (see the sketch below). Although DC stands for "direct current", DC often refers to "constant polarity"; under this definition, DC voltages can vary in time, as seen in the raw output of a rectifier or the fluctuating voice signal on a telephone line. Some forms of DC have no variations in voltage, but may still have variations in output power and current. A direct-current circuit is an electrical circuit that consists of any combination of constant voltage sources, constant current sources, and resistors. In this case, the circuit voltages and currents are independent of time: a particular circuit voltage or current does not depend on the past value of any circuit voltage or current. This implies that the system of equations that represents a DC circuit does not involve integrals or derivatives with respect to time.
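The decomposition mentioned above is just a mean split: the DC component of a stationary waveform is its time average, and subtracting it leaves the zero-mean part. A minimal sketch with a synthetic, rectifier-like waveform (chosen purely for illustration):

```python
import math

# Sample one period of a full-wave-rectified sine, v(t) = |sin(t)|.
# Its DC component is the time average (2/pi for |sin|); what remains
# after subtracting it is the zero-mean ripple riding on top.
N = 10_000
samples = [abs(math.sin(2 * math.pi * k / N)) for k in range(N)]

dc_component = sum(samples) / N
ripple = [v - dc_component for v in samples]

print(f"DC component:   {dc_component:.4f}  (theory: 2/pi = {2 / math.pi:.4f})")
print(f"mean of ripple: {sum(ripple) / N:.2e}")  # ~0 by construction
```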
If a capacitor or inductor is added to a DC circuit, the resulting circuit is not, strictly speaking, a DC circuit. However, most such circuits have a DC solution; this solution gives the circuit voltages and currents when the circuit is in DC steady state. Such a circuit is represented by a system of differential equations, and the solution to these equations contains a time-varying, or transient, part as well as a constant, or steady-state, part; it is this steady-state part that is the DC solution (a sketch follows below). There are some circuits that do not have a DC solution; two simple examples are a constant current source connected to a capacitor and a constant voltage source connected to an inductor. In electronics, it is common to refer to a circuit powered by a DC voltage source, such as a battery or the output of a DC power supply, as a DC circuit, even though what is meant is that the circuit is DC-powered. DC is found in many extra-low-voltage applications and some low-voltage applications, where these are powered by batteries or solar power systems. Most electronic circuits require a DC power supply.
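A sketch of the simplest such case: a constant voltage source charging a capacitor through a resistor. The full solution has a decaying transient plus a constant steady-state part, and that steady state is the DC solution. The component values here are arbitrary.

```python
import math

# Series RC circuit driven by a constant source V_s: the capacitor
# voltage obeys v(t) = V_s * (1 - exp(-t / (R * C))).
V_s, R, C = 5.0, 1_000.0, 1e-6   # volts, ohms, farads (arbitrary values)
tau = R * C                      # time constant, seconds

for t in [0.0, tau, 3 * tau, 10 * tau]:
    v = V_s * (1 - math.exp(-t / tau))
    print(f"t = {t / tau:5.1f} tau   v = {v:.4f} V")
# The exponential transient dies out after a few time constants; the
# remaining constant value v = V_s (with zero capacitor current) is
# the DC solution of the circuit.
```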
Domestic DC installations usually have different types of sockets, connectors, switches, and fixtures from those suitable for alternating current, mostly due to the low voltages used.
Gravity Probe B (GP-B) was a satellite-based mission to test two unverified predictions of general relativity: the geodetic effect and frame-dragging. This was to be accomplished by precisely measuring tiny changes in the direction of spin of four gyroscopes contained in an Earth satellite orbiting at 650 km altitude and crossing directly over the poles. The satellite was launched on 20 April 2004 on a Delta II rocket, and the spaceflight phase lasted until 2005; this provided a test of general relativity and related models. The principal investigator was Francis Everitt. Initial results confirmed the expected geodetic effect to an accuracy of about 1%, while the expected frame-dragging effect was similar in magnitude to the noise level at that stage of the analysis. Work continued to model and account for these sources of error, permitting extraction of the frame-dragging signal; by August 2008 the frame-dragging effect had been confirmed to within 15% of the expected result, and the December 2008 NASA report indicated that the geodetic effect was confirmed to better than 0.5%.
In an article published in the journal Physical Review Letters in 2011, the authors reported that analysis of the data from all four gyroscopes yielded a geodetic drift rate of −6601.8 ± 18.3 mas/yr and a frame-dragging drift rate of −37.2 ± 7.2 mas/yr, in good agreement with the general relativity predictions of −6606.1 mas/yr (±0.28%) and −39.2 mas/yr (±0.19%), respectively. Gravity Probe B was a relativity gyroscope experiment funded by NASA, with efforts led by the Stanford University physics department and Lockheed Martin as the primary subcontractor. Mission scientists viewed it as the second gravity experiment in space, following the successful launch of Gravity Probe A in 1976. The gyroscopes were intended to be so free from disturbance that they would provide a near-perfect space-time reference system.
This would allow them to reveal how space and time are "warped" by the presence of the Earth, and by how much the Earth's rotation "drags" space-time around with it. The geodetic effect is caused by space-time being "curved" by the mass of the Earth: a gyroscope's axis, when parallel-transported around the Earth in one complete revolution, does not end up pointing in the same direction as before. The "missing" angle may be thought of as the amount the gyroscope "leans over" into the slope of the space-time curvature. A more precise explanation of the space-curvature part of the geodetic precession is obtained by using a nearly flat cone to model the space curvature of the Earth's gravitational field; such a cone is made by cutting a thin "pie slice" out of a circle and gluing the cut edges together. The spatial geodetic precession is a measure of the missing "pie slice" angle. Gravity Probe B was expected to measure this effect to an accuracy of one part in 10,000, the most stringent check on general relativistic predictions to date.
The much smaller frame-dragging effect is an example of gravitomagnetism, an analog of magnetism in classical electrodynamics, but caused by rotating masses rather than rotating electric charges. Only two analyses of the laser-ranging data obtained by the two LAGEOS satellites, published in 1997 and 2004, had claimed to find the frame-dragging effect, with accuracies of about 20% and 10% respectively, whereas Gravity Probe B aimed to measure it to a precision of 1%. However, Lorenzo Iorio claimed that the level of total uncertainty of the tests conducted with the two LAGEOS satellites had been underestimated. A more recent analysis of Mars Global Surveyor data claimed to confirm the frame-dragging effect to a precision of 0.5%, although the accuracy of this claim is disputed. The Lense–Thirring effect of the Sun has also been investigated in view of a possible detection with the inner planets in the near future. The launch was planned for 19 April 2004 at Vandenberg Air Force Base but was scrubbed within 5 minutes of the scheduled launch window due to changing winds in the upper atmosphere.
An unusual feature of the mission is that it had only a one-second launch window, due to the precise orbit required by the experiment. On 20 April, at 9:57:23 AM PDT, the spacecraft was launched successfully; the satellite was placed in orbit at 11:12:33 AM after a cruise period over the South Pole and a short second burn. The mission lasted 16 months. Some preliminary results were presented at a special session during the American Physical Society meeting in April 2007. NASA requested a proposal for extending the GP-B data analysis phase through December 2007; the data analysis phase was further extended to September 2008 using funding from Richard Fairbank, Stanford, and NASA, and beyond that point using non-NASA funding only. Final science results were reported in 2011. The Gravity Probe B experiment comprised four London-moment gyroscopes and a reference telescope sighted on HR 8703 (IM Pegasi), a binary star in the constellation Pegasus. In polar orbit, with the gyroscope spin directions also pointing toward HR 8703, the frame-dragging and geodetic effects came out at right angles, each gyroscope measuring both.
The gyroscopes were housed in a dewar of superfluid helium, which kept them at a temperature below 2 kelvins.
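As a closing consistency check of the 2011 numbers quoted above (a sketch only; the percentage uncertainties on the predictions are converted to mas/yr here), both measurements agree with the general relativity predictions to well within one combined standard deviation:

```python
# Comparing Gravity Probe B results with general relativity predictions.
# Measured drift rates (mas/yr) with 1-sigma uncertainties:
geodetic_meas, geodetic_err = -6601.8, 18.3
framedrag_meas, framedrag_err = -37.2, 7.2
# GR predictions, with the quoted percentage uncertainties converted:
geodetic_pred, geodetic_pred_err = -6606.1, 0.0028 * 6606.1
framedrag_pred, framedrag_pred_err = -39.2, 0.0019 * 39.2

def sigma_agreement(meas, m_err, pred, p_err):
    """Discrepancy between measurement and prediction, in combined sigmas."""
    return abs(meas - pred) / (m_err**2 + p_err**2) ** 0.5

g = sigma_agreement(geodetic_meas, geodetic_err, geodetic_pred, geodetic_pred_err)
f = sigma_agreement(framedrag_meas, framedrag_err, framedrag_pred, framedrag_pred_err)
print(f"geodetic effect:      {g:.2f} sigma")   # ~0.17 sigma
print(f"frame-dragging:       {f:.2f} sigma")   # ~0.28 sigma
# Both discrepancies are well below 1 sigma: good agreement with GR.
```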