The apparent magnitude of an astronomical object is a measure of its brightness as seen by an observer on Earth. The magnitude scale is logarithmic: a difference of 1 in magnitude corresponds to a change in brightness by a factor of the fifth root of 100, or about 2.512. The brighter an object appears, the lower its magnitude value, with the brightest astronomical objects having negative apparent magnitudes: for example, Sirius at −1.46. The measurement of the apparent magnitudes or brightnesses of celestial objects is known as photometry. Apparent magnitudes are used to quantify the brightness of sources at ultraviolet, visible, and infrared wavelengths. An apparent magnitude is measured in a specific passband corresponding to some photometric system such as the UBV system. In standard astronomical notation, an apparent magnitude in the V filter band is denoted either as mV or simply as V, as in "mV = 15" or "V = 15" for a 15th-magnitude object. The scale used to indicate magnitude originates in the Hellenistic practice of dividing stars visible to the naked eye into six magnitudes.
The brightest stars in the night sky were said to be of first magnitude, whereas the faintest were of sixth magnitude, the limit of human visual perception. Each grade of magnitude was considered twice the brightness of the following grade, although that ratio was subjective, as no photodetectors existed; this rather crude scale for the brightness of stars was popularized by Ptolemy in his Almagest and is believed to have originated with Hipparchus. In 1856, Norman Robert Pogson formalized the system by defining a first-magnitude star as one 100 times as bright as a sixth-magnitude star, thereby establishing the logarithmic scale still in use today; this implies that a star of magnitude m is about 2.512 times as bright as a star of magnitude m + 1. This figure, the fifth root of 100, became known as Pogson's ratio. The zero point of Pogson's scale was originally defined by assigning Polaris a magnitude of 2. When astronomers discovered that Polaris is variable, they switched to Vega as the standard reference star, assigning the brightness of Vega as the definition of zero magnitude at any specified wavelength.
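Pogson's choice of ratio can be checked numerically; a short sketch (the constant and function names are illustrative):

```python
# Pogson's ratio is the fifth root of 100, approximately 2.512
POGSON = 100 ** (1 / 5)

def brightness_ratio(delta_m):
    """Brightness ratio corresponding to a magnitude difference delta_m."""
    return POGSON ** delta_m

# Five magnitude steps compound to a factor of 100, by construction
assert abs(brightness_ratio(5) - 100.0) < 1e-9
```

This is why a magnitude-1 star outshines a magnitude-6 star by exactly a factor of 100, while each single step is only about a factor of 2.512.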
Apart from small corrections, the brightness of Vega still serves as the definition of zero magnitude for visible and near-infrared wavelengths, where its spectral energy distribution approximates that of a black body at a temperature of 11,000 K. However, with the advent of infrared astronomy it was revealed that Vega's radiation includes an infrared excess, due to a circumstellar disk consisting of dust at warm temperatures; at shorter wavelengths, there is negligible emission from dust at these temperatures. To properly extend the magnitude scale further into the infrared, this peculiarity of Vega should not affect the definition of the scale. Therefore, the magnitude scale was extrapolated to all wavelengths on the basis of the black-body radiation curve for an ideal stellar surface at 11,000 K uncontaminated by circumstellar radiation. On this basis the spectral irradiance for the zero-magnitude point can be computed as a function of wavelength. Small deviations are specified between systems using measurement apparatuses developed independently, so that data obtained by different astronomers can be properly compared; of greater practical importance, however, is that magnitude is defined not at a single wavelength but over the response of the standard spectral filters used in photometry in various wavelength bands.
With the modern magnitude systems, brightness over a very wide range is specified according to the logarithmic definition detailed below, using this zero reference. In practice such apparent magnitudes do not exceed 30. The brightness of Vega is exceeded by four stars in the night sky at visible wavelengths, as well as by the bright planets Venus and Jupiter, and these must be described by negative magnitudes. For example, Sirius, the brightest star of the celestial sphere, has an apparent magnitude of −1.4 in the visible. Negative magnitudes for other bright astronomical objects can be found in the table below. Astronomers have developed other photometric zero-point systems as alternatives to the Vega system; the most widely used is the AB magnitude system, in which photometric zero points are based on a hypothetical reference spectrum having constant flux per unit frequency interval, rather than on a stellar spectrum or black-body curve. The AB magnitude zero point is defined such that an object's AB and Vega-based magnitudes are approximately equal in the V filter band.
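For concreteness, the AB system's constant-flux-density reference corresponds to about 3631 jansky, so an AB magnitude can be computed from a flux density measurement. A hedged sketch (the constant's precision and the function name are supplied here, not taken from the text):

```python
import math

AB_ZERO_POINT_JY = 3631.0  # AB zero-point flux density in janskys (approximate)

def ab_magnitude(flux_density_jy):
    """AB magnitude of a source with flux density flux_density_jy (in Jy)."""
    return -2.5 * math.log10(flux_density_jy / AB_ZERO_POINT_JY)

# A 3631 Jy source has AB magnitude 0 by construction
assert abs(ab_magnitude(3631.0)) < 1e-9
```

Because the reference is flat in frequency, no stellar or black-body spectrum enters the definition at all; only the measured flux density matters.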
As the amount of light received by a telescope is reduced by transmission through the Earth's atmosphere, any measurement of apparent magnitude is corrected for what it would have been as seen from above the atmosphere. The dimmer an object appears, the higher the numerical value given to its apparent magnitude, with a difference of 5 magnitudes corresponding to a brightness factor of exactly 100. Therefore, the apparent magnitude m in the spectral band x is given by

m_x = −5 log₁₀₀(F_x / F_{x,0}),

more commonly expressed in terms of common (base-10) logarithms as

m_x = −2.5 log₁₀(F_x / F_{x,0}),

where F_x is the observed flux in band x and F_{x,0} is the reference (zero-point) flux for that band.
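This definition translates directly into code; a minimal sketch, assuming both flux values are measured in the same band and units:

```python
import math

def apparent_magnitude(flux, flux_ref):
    """m_x = -2.5 log10(F_x / F_x,0): magnitude relative to a zero-point flux."""
    return -2.5 * math.log10(flux / flux_ref)

# A source 100 times fainter than the reference is exactly 5 magnitudes fainter
assert abs(apparent_magnitude(1.0, 100.0) - 5.0) < 1e-9
```

Note that a brighter-than-reference source yields a negative magnitude, matching the convention for objects like Sirius.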
Right ascension is the angular distance of a particular point measured eastward along the celestial equator from the Sun at the March equinox to the hour circle of the point in question. When paired with declination, these astronomical coordinates specify the direction of a point on the celestial sphere in the equatorial coordinate system. An old term, right ascension refers to the ascension, the point on the celestial equator that rises with any celestial object as seen from Earth's equator, where the celestial equator intersects the horizon at a right angle. It contrasts with oblique ascension, the point on the celestial equator that rises with any celestial object as seen from most latitudes on Earth, where the celestial equator intersects the horizon at an oblique angle. Right ascension is the celestial equivalent of terrestrial longitude: both measure an angle from a primary direction on an equator. Right ascension is measured from the Sun at the March equinox, i.e. the First Point of Aries, the place on the celestial sphere where the Sun crosses the celestial equator from south to north at the March equinox; this point is currently located in the constellation Pisces.
Right ascension is measured continuously in a full circle from that alignment of Earth and Sun in space (that equinox), the measurement increasing towards the east. As seen from Earth, objects with RA = 12h are longest visible around the March equinox; on those dates such objects reach their highest point at midnight, with how high depending on their declination. Any units of angular measure could have been chosen for right ascension, but it is customarily measured in hours, minutes, and seconds, with 24h being equivalent to a full circle. Astronomers chose this unit to measure right ascension because they measure a star's location by timing its passage through the highest point in the sky as the Earth rotates; the line which passes through the highest point in the sky, called the meridian, is the projection of a longitude line onto the celestial sphere. Since a complete circle contains 24h of right ascension or 360°, 1/24 of a circle is measured as 1h of right ascension, or 15°. A full circle, measured in right-ascension units, contains 24 × 60 × 60 = 86,400s, or 24 × 60 = 1,440m, or 24h.
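The hour-to-degree arithmetic above can be expressed directly; a minimal sketch (the function name is illustrative):

```python
def ra_to_degrees(hours, minutes=0, seconds=0):
    """Convert a right ascension given in h/m/s into degrees (1h = 15 deg)."""
    return 15.0 * (hours + minutes / 60.0 + seconds / 3600.0)

# 24h is a full circle; 1h is 15 degrees
assert ra_to_degrees(24) == 360.0
assert ra_to_degrees(1) == 15.0
assert ra_to_degrees(1, 30) == 22.5
```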
Because right ascensions are measured in hours, they can be used to time the positions of objects in the sky. For example, if a star with RA = 1h 30m 00s is on the meridian, a star with RA = 20h 00m 00s will be on the meridian 18.5 sidereal hours later. Sidereal hour angle, used in celestial navigation, is similar to right ascension but increases westward rather than eastward. Measured in degrees, it is the complement of right ascension with respect to 24h. It is important not to confuse sidereal hour angle with the astronomical concept of hour angle, which measures the angular distance of an object westward from the local meridian. The Earth's axis rotates slowly westward about the poles of the ecliptic, completing one cycle in about 26,000 years; this movement, known as precession, causes the coordinates of stationary celestial objects to change continuously, if rather slowly. Therefore, equatorial coordinates are inherently relative to the year of their observation, and astronomers specify them with reference to a particular year, known as an epoch.
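The timing example and the sidereal-hour-angle relationship are both simple modular arithmetic; a minimal sketch (function names are illustrative):

```python
def transit_wait_hours(ra_on_meridian, ra_target):
    """Sidereal hours until ra_target transits, given ra_on_meridian transits now."""
    return (ra_target - ra_on_meridian) % 24.0

def sidereal_hour_angle(ra_hours):
    """SHA in degrees: the westward complement of RA, 360 - 15 * RA(h)."""
    return (360.0 - 15.0 * ra_hours) % 360.0

# The example from the text: RA 1h30m on the meridian, RA 20h transits 18.5h later
assert transit_wait_hours(1.5, 20.0) == 18.5
```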
Coordinates from different epochs must be mathematically rotated to match each other, or to match a standard epoch. Right ascension for "fixed stars" near the ecliptic and equator increases by about 3.05 seconds per year on average, or 5.1 minutes per century, but for fixed stars further from the ecliptic the rate of change can be anything from negative infinity to positive infinity. (The right ascension of Polaris is increasing quickly.) The North Ecliptic Pole in Draco and the South Ecliptic Pole in Dorado are always at right ascension 18h and 6h respectively. The currently used standard epoch is J2000.0, which is January 1, 2000 at 12:00 TT; the prefix "J" indicates that it is a Julian epoch. Prior to J2000.0, astronomers used the successive Besselian epochs B1875.0, B1900.0, and B1950.0. The concept of right ascension has been known at least as far back as Hipparchus, who measured stars in equatorial coordinates in the 2nd century BC, but because Hipparchus and his successors made their star catalogs in ecliptic coordinates, the use of RA was limited to special cases.
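The behaviour described here follows from the standard first-order formula for annual precession in right ascension, Δα ≈ m + n sin α tan δ, where the tan δ term diverges toward the celestial poles. A sketch using approximate constants (m ≈ 3.075 s and n ≈ 1.336 s of time per year; the function name is illustrative):

```python
import math

M_SEC = 3.075  # annual general precession in RA, seconds of time (approximate)
N_SEC = 1.336  # annual precession constant, seconds of time (approximate)

def annual_ra_precession_sec(ra_hours, dec_degrees):
    """Approximate yearly change in RA (seconds of time) for a fixed star."""
    ra = math.radians(ra_hours * 15.0)
    dec = math.radians(dec_degrees)
    return M_SEC + N_SEC * math.sin(ra) * math.tan(dec)

# Near the equator the rate stays close to the ~3.05 s/yr average quoted above;
# the tan(dec) factor makes it grow without bound near the celestial poles.
```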
With the invention of the telescope, it became possible for astronomers to observe celestial objects in greater detail, provided that the telescope could be kept pointed at the object for a period of time. The easiest way to do this is to use an equatorial mount, which allows the telescope to be aligned with one of its two pivots parallel to the Earth's axis. A motorized clock drive is used with an equatorial mount to cancel out the Earth's rotation; as the equatorial mount became adopted for observation, the equatorial coordinate system, which includes right ascension, was adopted at the same time for simplicity. Equatorial mounts could be pointed at objects with known right ascension and declination by the use of setting circles; the first star catalog to use right ascension and declination was John Flamsteed's Historia Coelestis Britannica.
In astronomy, stellar classification is the classification of stars based on their spectral characteristics. Electromagnetic radiation from the star is analyzed by splitting it with a prism or diffraction grating into a spectrum exhibiting the rainbow of colors interspersed with spectral lines; each line indicates a particular chemical element or molecule, with the line strength indicating the abundance of that element. The strengths of the different spectral lines vary due to the temperature of the photosphere, although in some cases there are true abundance differences; the spectral class of a star is a short code summarizing the ionization state, giving an objective measure of the photosphere's temperature. Most stars are classified under the Morgan-Keenan system using the letters O, B, A, F, G, K, M, a sequence from the hottest to the coolest; each letter class is subdivided using a numeric digit with 0 being hottest and 9 being coolest. The sequence has been expanded with classes for other stars and star-like objects that do not fit in the classical system, such as class D for white dwarfs and classes S and C for carbon stars.
In the MK system, a luminosity class is added to the spectral class using Roman numerals. This is based on the width of certain absorption lines in the star's spectrum, which vary with the density of the atmosphere and so distinguish giant stars from dwarfs. Luminosity class 0 or Ia+ is used for hypergiants, class I for supergiants, class II for bright giants, class III for regular giants, class IV for subgiants, class V for main-sequence stars, class sd for subdwarfs, and class D for white dwarfs. The full spectral class for the Sun is G2V, indicating a main-sequence star with a temperature around 5,800 K. The conventional color description takes into account only the peak of the stellar spectrum. In actuality, stars radiate in all parts of the spectrum, and because all spectral colors combined appear white, the actual apparent colors the human eye would observe are far lighter than the conventional color descriptions would suggest. This characteristic of 'lightness' indicates that the simplified assignment of colors within the spectrum can be misleading.
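A full MK spectral type string such as "G2V" can be unpacked mechanically; a minimal sketch (the regex is a simplification covering only single-letter temperature classes and the common luminosity classes named above):

```python
import re

# Hypothetical helper: temperature class letter, optional numeric subclass,
# optional luminosity class (0, Ia+, Ia, Ib, I-III, IV, V, sd, D).
MK_PATTERN = re.compile(r"^([OBAFGKM])(\d(?:\.\d)?)?(0|Ia\+?|Ib|I{1,3}|IV|V|sd|D)?$")

def parse_spectral_type(spec):
    """Return (temperature class, subclass, luminosity class) from an MK type."""
    match = MK_PATTERN.match(spec)
    if not match:
        raise ValueError(f"not a simple MK type: {spec!r}")
    return match.groups()

# The Sun's type from the text: class G, subclass 2, main sequence (V)
assert parse_spectral_type("G2V") == ("G", "2", "V")
```

Real catalogues contain many composite and peculiar types that this pattern deliberately does not handle.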
Excluding color-contrast illusions in dim light, there are no green, indigo, or violet stars. Red dwarfs are a deep shade of orange, and brown dwarfs do not appear brown, but hypothetically would appear dim grey to a nearby observer. The modern classification system is known as the Morgan–Keenan (MK) classification. Each star is assigned a spectral class from the older Harvard spectral classification and a luminosity class using Roman numerals as explained below, forming the star's spectral type. Other modern stellar classification systems, such as the UBV system, are based on color indexes: the measured differences in three or more color magnitudes. Those numbers are given labels such as "U−V" or "B−V", which represent the colors passed by two standard filters. The Harvard system is a one-dimensional classification scheme by astronomer Annie Jump Cannon, who re-ordered and simplified a prior alphabetical system. Stars are grouped according to their spectral characteristics by single letters of the alphabet, optionally with numeric subdivisions.
Main-sequence stars vary in surface temperature from approximately 2,000 to 50,000 K, whereas more-evolved stars can have temperatures above 100,000 K. Physically, the classes indicate the temperature of the star's atmosphere and are listed from hottest to coldest. The spectral classes O through M, as well as other more specialized classes discussed later, are subdivided by Arabic numerals, where 0 denotes the hottest stars of a given class: for example, A0 denotes the hottest stars in class A and A9 denotes the coolest ones. Fractional numbers are allowed; the Sun is classified as G2. Conventional color descriptions are traditional in astronomy and represent colors relative to the mean color of an A-class star, considered to be white. The apparent color descriptions are what the observer would see if trying to describe the stars under a dark sky without aid to the eye, or with binoculars. However, most stars in the sky, except the brightest ones, appear white or bluish white to the unaided eye because they are too dim for color vision to work. Red supergiants are cooler and redder than dwarfs of the same spectral type, and stars with particular spectral features, such as carbon stars, may be far redder than any black body.
The fact that the Harvard classification of a star indicated its surface or photospheric temperature was not understood until after its development, though by the time the first Hertzsprung–Russell diagram was formulated, this was suspected to be true. In the 1920s, the Indian physicist Meghnad Saha derived a theory of ionization by extending well-known ideas in physical chemistry pertaining to the dissociation of molecules to the ionization of atoms. He first applied it to the solar chromosphere, then to stellar spectra. Harvard astronomer Cecilia Payne then demonstrated that the O-B-A-F-G-K-M spectral sequence is in fact a sequence in temperature. Because the classification sequence predates our understanding that it is a temperature sequence, the placement of a spectrum into a given subtype, such as B3 or A7, depends upon estimates of the strengths of absorption features in stellar spectra; as a result, these subtypes are not evenly divided into any sort of mathematically representable intervals. The Yerkes spectral classification, also called the MKK system from its authors' initials (Morgan, Keenan, and Kellman),
Hipparcos was a scientific satellite of the European Space Agency, launched in 1989 and operated until 1993. It was the first space experiment devoted to precision astrometry, the accurate measurement of the positions of celestial objects on the sky. This permitted the accurate determination of proper motions and parallaxes of stars, allowing a determination of their distances and tangential velocities. When combined with radial velocity measurements from spectroscopy, this pinpointed all six quantities needed to determine the motion of stars. The resulting Hipparcos Catalogue, a high-precision catalogue of more than 118,200 stars, was published in 1997. The lower-precision Tycho Catalogue of more than a million stars was published at the same time, while the enhanced Tycho-2 Catalogue of 2.5 million stars was published in 2000. Hipparcos' follow-up mission, Gaia, was launched in 2013. The word "Hipparcos" is an acronym for HIgh Precision PARallax COllecting Satellite and a reference to the ancient Greek astronomer Hipparchus of Nicaea, noted for applications of trigonometry to astronomy and his discovery of the precession of the equinoxes.
By the second half of the 20th century, the accurate measurement of star positions from the ground was running into insurmountable barriers to improvements in accuracy for large-angle measurements and systematic terms. Problems were dominated by the effects of the Earth's atmosphere, but were compounded by complex optical terms and gravitational instrument flexures, and by the absence of all-sky visibility. A formal proposal to make these exacting observations from space was first put forward in 1967. Although proposed to the French space agency CNES, it was considered too complex and expensive for a single national programme, and its acceptance within the European Space Agency's scientific programme, in 1980, was the result of a lengthy process of study and lobbying. The underlying scientific motivation was to determine the physical properties of the stars through the measurement of their distances and space motions, and thus to place theoretical studies of stellar structure and evolution, as well as studies of galactic structure and kinematics, on a more secure empirical basis.
Observationally, the objective was to provide the positions and annual proper motions for some 100,000 stars with an unprecedented accuracy of 0.002 arcseconds, a target in practice surpassed by a factor of two. The name of the space telescope, "Hipparcos", was an acronym for High Precision Parallax Collecting Satellite, and it also reflected the name of the ancient Greek astronomer Hipparchus, considered the founder of trigonometry and the discoverer of the precession of the equinoxes. The spacecraft carried a single all-reflective, eccentric Schmidt telescope with an aperture of 29 cm. A special beam-combining mirror superimposed two fields of view, 58 degrees apart, into the common focal plane; this complex mirror consisted of two mirrors tilted in opposite directions, each occupying half of the rectangular entrance pupil and providing an unvignetted field of view of about 1° × 1°. The telescope used a system of grids at the focal surface, composed of 2,688 alternate opaque and transparent bands with a period of 1.208 arcsec.
Behind this grid system, an image dissector tube with a sensitive field of view of about 38 arcsec in diameter converted the modulated light into a sequence of photon counts, from which the phase of the entire pulse train from a star could be derived. The apparent angle between two stars in the combined fields of view, modulo the grid period, was obtained from the phase difference of the two star pulse trains. Originally targeting the observation of some 100,000 stars with an astrometric accuracy of about 0.002 arcsec, the final Hipparcos Catalogue comprised nearly 120,000 stars with a median accuracy of better than 0.001 arcsec. An additional photomultiplier system viewed a beam splitter in the optical path and was used as a star mapper; its purpose was to monitor and determine the satellite attitude and, in the process, to gather photometric and astrometric data of all stars down to about 11th magnitude. These measurements were made in two broad bands corresponding to B and V in the UBV photometric system.
The positions of these latter stars were to be determined to a precision of 0.03 arc-sec, a factor of 25 less than the main mission stars. Targeting the observation of around 400,000 stars, the resulting Tycho Catalogue comprised just over 1 million stars, with a subsequent analysis extending this to the Tycho-2 Catalogue of about 2.5 million stars. The attitude of the spacecraft about its center of gravity was controlled to scan the celestial sphere in a regular precessional motion maintaining a constant inclination between the spin axis and the direction to the Sun; the spacecraft spun around its Z-axis at the rate of 11.25 revolutions/day at an angle of 43° to the Sun. The Z-axis rotated about the sun-satellite line at 6.4 revolutions/year. The spacecraft consisted of two platforms and six vertical panels, all made of aluminum honeycomb; the solar array consisted of three deployable sections. Two S-band antennas were located on the top and bottom of the spacecraft, providing an omni-directional downlink data rate of 24 kbit/s.
An attitude and orbit-control subsystem ensured correct dynamic attitude control and determination during the operational lifetime.
Stellar evolution is the process by which a star changes over the course of time. Depending on the mass of the star, its lifetime can range from a few million years for the most massive to trillions of years for the least massive, longer than the age of the universe; the table shows the lifetimes of stars as a function of their masses. All stars are born from collapsing clouds of gas and dust called nebulae or molecular clouds. Over the course of millions of years, these protostars settle down into a state of equilibrium, becoming what is known as a main-sequence star. Nuclear fusion powers a star for most of its life; the energy is generated by the fusion of hydrogen atoms at the core of the main-sequence star. As the preponderance of atoms at the core becomes helium, stars like the Sun begin to fuse hydrogen along a spherical shell surrounding the core; this process causes the star to grow in size, passing through the subgiant stage until it reaches the red giant phase. Stars with at least half the mass of the Sun can begin to generate energy through the fusion of helium at their core, whereas more-massive stars can fuse heavier elements along a series of concentric shells.
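The strong dependence of lifetime on mass can be illustrated with a common rule-of-thumb scaling, t ≈ 10 Gyr × (M/M☉)^−2.5, which follows from lifetime being proportional to fuel over luminosity while luminosity rises steeply with mass. This is an order-of-magnitude approximation supplied here for illustration, not a value from the text's table:

```python
def main_sequence_lifetime_gyr(mass_solar):
    """Rough main-sequence lifetime in Gyr from the t ~ M^-2.5 scaling.
    An order-of-magnitude estimate, not a stellar-evolution model."""
    return 10.0 * mass_solar ** -2.5

# A Sun-like star lives ~10 Gyr; a 10 solar-mass star only ~30 Myr,
# while a 0.5 solar-mass dwarf outlives the current age of the universe.
assert main_sequence_lifetime_gyr(1.0) == 10.0
```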
Once a star like the Sun has exhausted its nuclear fuel, its core collapses into a dense white dwarf and the outer layers are expelled as a planetary nebula. Stars with around ten or more times the mass of the Sun can explode in a supernova as their inert iron cores collapse into an extremely dense neutron star or black hole. Although the universe is not old enough for any of the smallest red dwarfs to have reached the end of their lives, stellar models suggest they will slowly become brighter and hotter before running out of hydrogen fuel and becoming low-mass white dwarfs. Stellar evolution is not studied by observing the life of a single star, as most stellar changes occur too slowly to be detected, even over many centuries. Instead, astrophysicists come to understand how stars evolve by observing numerous stars at various points in their lifetimes and by simulating stellar structure using computer models. Stellar evolution starts with the gravitational collapse of a giant molecular cloud. Typical giant molecular clouds are roughly 100 light-years across and contain up to 6,000,000 solar masses.
As it collapses, a giant molecular cloud breaks into smaller and smaller pieces. In each of these fragments, the collapsing gas releases gravitational potential energy as heat; as its temperature and pressure increase, a fragment condenses into a rotating sphere of superhot gas known as a protostar. A protostar continues to grow by accretion of gas and dust from the molecular cloud, becoming a pre-main-sequence star as it reaches its final mass. Further development is determined by its mass; mass is typically compared to the mass of the Sun: 1.0 M☉ means 1 solar mass. Protostars are encompassed in dust and are thus more readily visible at infrared wavelengths. Observations from the Wide-field Infrared Survey Explorer have been especially important for unveiling numerous Galactic protostars and their parent star clusters. Protostars with masses less than roughly 0.08 M☉ never reach temperatures high enough for nuclear fusion of hydrogen to begin; these are known as brown dwarfs. The International Astronomical Union defines brown dwarfs as stars massive enough to fuse deuterium at some point in their lives (about 13 Jupiter masses, MJ).
Objects smaller than 13 MJ are classified as sub-brown dwarfs. Both types, deuterium-burning and not, shine dimly and fade away slowly, cooling gradually over hundreds of millions of years. For a more-massive protostar, the core temperature will eventually reach 10 million kelvin, initiating the proton–proton chain reaction and allowing hydrogen to fuse, first to deuterium and then to helium. In stars of slightly over 1 M☉, the carbon–nitrogen–oxygen fusion reaction contributes a large portion of the energy generation. The onset of nuclear fusion leads quickly to a hydrostatic equilibrium in which energy released by the core maintains a high gas pressure, balancing the weight of the star's matter and preventing further gravitational collapse. The star thus evolves to a stable state, beginning the main-sequence phase of its evolution. A new star will sit at a specific point on the main sequence of the Hertzsprung–Russell diagram, with the main-sequence spectral type depending upon the mass of the star. Small, cool, low-mass red dwarfs fuse hydrogen slowly and will remain on the main sequence for hundreds of billions of years or longer, whereas massive, hot O-type stars will leave the main sequence after just a few million years.
A mid-sized yellow dwarf star, like the Sun, will remain on the main sequence for about 10 billion years. The Sun is thought to be in the middle of its main-sequence lifespan. Eventually the core exhausts its supply of hydrogen and the star begins to evolve off the main sequence. Without the outward pressure generated by the fusion of hydrogen to counteract the force of gravity, the core contracts until either electron degeneracy pressure becomes sufficient to oppose gravity or the core becomes hot enough for helium fusion to begin. Which of these happens first depends upon the star's mass. What happens after a low-mass star ceases to produce energy through fusion has not been directly observed. Recent astrophysical models suggest that red dwarfs of 0.1 M☉ may stay on the main sequence for some six to twelve trillion years.
Astrometry is the branch of astronomy that involves precise measurements of the positions and movements of stars and other celestial bodies. The information obtained by astrometric measurements provides information on the kinematics and physical origin of the Solar System and our galaxy, the Milky Way. The history of astrometry is linked to the history of star catalogues, which gave astronomers reference points for objects in the sky so they could track their movements. This can be dated back to Hipparchus, who around 190 BC used the catalogue of his predecessors Timocharis and Aristillus to discover Earth's precession; in doing so, he also developed the brightness scale still in use today and compiled a catalogue of stars together with their positions. Hipparchus's successor, Ptolemy, included a catalogue of 1,022 stars in his work the Almagest, giving their locations and brightnesses. In the 10th century, Abd al-Rahman al-Sufi carried out observations on the stars and described their positions and colors. Ibn Yunus recorded more than 10,000 entries for the Sun's position over many years using a large astrolabe with a diameter of nearly 1.4 metres.
His observations on eclipses were still used centuries later in Simon Newcomb's investigations on the motion of the Moon, while his other observations of the motions of the planets Jupiter and Saturn inspired Laplace's Obliquity of the Ecliptic and Inequalities of Jupiter and Saturn. In the 15th century, the Timurid astronomer Ulugh Beg compiled the Zij-i-Sultani, in which he catalogued 1,019 stars. Like the earlier catalogs of Hipparchus and Ptolemy, Ulugh Beg's catalogue is estimated to have been precise to within about 20 minutes of arc. In the 16th century, Tycho Brahe used improved instruments, including large mural instruments, to measure star positions more accurately than ever before, with a precision of 15–35 arcsec. Taqi al-Din measured the right ascension of the stars at the Constantinople Observatory of Taqi ad-Din using the "observational clock" he invented. When telescopes became commonplace, setting circles sped measurements. James Bradley first tried to measure stellar parallaxes in 1729; the stellar movement proved too insignificant for his telescope, but he instead discovered the aberration of light and the nutation of the Earth's axis.
His cataloguing of 3,222 stars was refined in 1807 by Friedrich Bessel, the father of modern astrometry, who made the first measurement of stellar parallax: 0.3 arcsec for the binary star 61 Cygni. Stellar parallaxes being difficult to measure, only about 60 had been obtained by the end of the 19th century, mostly by use of the filar micrometer. Astrographs using astronomical photographic plates sped the process in the early 20th century. Automated plate-measuring machines and the more sophisticated computer technology of the 1960s allowed more efficient compilation of star catalogues. In the 1980s, charge-coupled devices replaced photographic plates and reduced optical uncertainties to one milliarcsecond; this technology also made astrometry less expensive. In 1989, the European Space Agency's Hipparcos satellite took astrometry into orbit, where it could be less affected by mechanical forces of the Earth and optical distortions from its atmosphere. Operated from 1989 to 1993, Hipparcos measured large and small angles on the sky with much greater precision than any previous optical telescopes.
During its 4-year run, the positions and proper motions of 118,218 stars were determined with an unprecedented degree of accuracy. A new "Tycho catalog" drew together a database of 1,058,332 stars measured to within 20–30 mas. Additional catalogues were compiled for the 23,882 double/multiple stars and 11,597 variable stars analyzed during the Hipparcos mission. Today, the catalogue most used is USNO-B1.0, an all-sky catalogue that tracks proper motions, positions, and other characteristics for over one billion stellar objects. During the past 50 years, 7,435 Schmidt camera plates were used to complete several sky surveys that make the data in USNO-B1.0 accurate to within 0.2 arcsec. Apart from the fundamental function of providing astronomers with a reference frame in which to report their observations, astrometry is fundamental for fields like celestial mechanics, stellar dynamics, and galactic astronomy. In observational astronomy, astrometric techniques help identify stellar objects by their unique motions. Astrometry is also instrumental for keeping time, in that UTC is atomic time synchronized to Earth's rotation by means of exact astronomical observations.
Astrometry is an important step in the cosmic distance ladder because it establishes parallax distance estimates for stars in the Milky Way. Astrometry has been used to support claims of extrasolar planet detection by measuring the displacement the proposed planets cause in their parent star's apparent position on the sky, due to their mutual orbit around the center of mass of the system. Astrometry is more accurate in space missions that are not affected by the distorting effects of the Earth's atmosphere. NASA's planned Space Interferometry Mission was to utilize astrometric techniques to detect terrestrial planets orbiting 200 or so of the nearest solar-type stars; the European Space Agency's Gaia Mission, launched in 2013, applies astrometric techniques in its stellar census. In addition to the detection of exoplanets, it can be used to determine their mass. Astrometric measurements are used by astrophysicists to constrain certain models in celestial mechanics. By measuring the velocities of pulsars, it is possible to put a limit on the asymmetry of supernova explosions.
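The parallax rung of the distance ladder is a simple reciprocal relation: a star with an annual parallax of p arcseconds lies at a distance of 1/p parsecs. A minimal sketch (the function name is illustrative):

```python
def parallax_distance_parsecs(parallax_arcsec):
    """Distance in parsecs from annual parallax: d = 1 / p."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

# Bessel's 0.3 arcsec parallax for 61 Cygni implies a distance of ~3.3 pc
assert abs(parallax_distance_parsecs(0.3) - 10.0 / 3.0) < 1e-9
```

The relation also explains why milliarcsecond precision matters: at 1 mas, the same formula already reaches out to 1,000 pc.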
The Kelvin scale is an absolute thermodynamic temperature scale whose null point is absolute zero, the temperature at which all thermal motion ceases in the classical description of thermodynamics. The kelvin is the base unit of temperature in the International System of Units. Until 2018, the kelvin was defined as the fraction 1/273.16 of the thermodynamic temperature of the triple point of water; in other words, it was defined such that the triple point of water is exactly 273.16 K. On 16 November 2018, a new definition was adopted in terms of a fixed value of the Boltzmann constant; for legal metrology purposes, the new definition will come into force on 20 May 2019. The Kelvin scale is named after the Belfast-born, Glasgow University engineer and physicist William Thomson, 1st Baron Kelvin, who wrote of the need for an "absolute thermometric scale". Unlike the degree Fahrenheit and degree Celsius, the kelvin is not referred to or written as a degree. The kelvin is the primary unit of temperature measurement in the physical sciences, but it is often used in conjunction with the degree Celsius, which has the same magnitude.
The definition implies that absolute zero is equivalent to −273.15 °C. In 1848, William Thomson, later Lord Kelvin, wrote in his paper On an Absolute Thermometric Scale of the need for a scale whose null point was "infinite cold" and which used the degree Celsius for its unit increment. Kelvin calculated that absolute zero was equivalent to −273 °C; this absolute scale is known today as the Kelvin thermodynamic temperature scale. Kelvin's value of "−273" was the negative reciprocal of 0.00366, the accepted expansion coefficient of gas per degree Celsius relative to the ice point, remarkably close to the currently accepted value of −273.15 °C. In 1954, Resolution 3 of the 10th General Conference on Weights and Measures (CGPM) gave the Kelvin scale its modern definition by designating the triple point of water as its second defining point and assigning its temperature as 273.16 kelvins. In 1967/1968, Resolution 3 of the 13th CGPM renamed the unit increment of thermodynamic temperature "kelvin", symbol K, replacing "degree Kelvin", symbol °K. Furthermore, feeling it useful to define the magnitude of the unit increment more explicitly, the 13th CGPM held in Resolution 4 that "The kelvin, unit of thermodynamic temperature, is equal to the fraction 1/273.16 of the thermodynamic temperature of the triple point of water." In 2005, the Comité International des Poids et Mesures (CIPM), a committee of the CGPM, affirmed that, for the purposes of delineating the temperature of the triple point of water, the definition of the Kelvin thermodynamic temperature scale would refer to water having an isotopic composition specified as Vienna Standard Mean Ocean Water.
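The numerical relationships in this passage reduce to simple arithmetic: the Celsius and Kelvin scales differ only by a fixed offset, and Kelvin's 1848 estimate of absolute zero was the negative reciprocal of the gas expansion coefficient. A short sketch (function names are illustrative):

```python
ABSOLUTE_ZERO_C = -273.15  # absolute zero on the Celsius scale

def celsius_to_kelvin(t_celsius):
    """Shift a Celsius temperature onto the absolute (Kelvin) scale."""
    return t_celsius - ABSOLUTE_ZERO_C

def kelvin_to_celsius(t_kelvin):
    """Shift a Kelvin temperature back onto the Celsius scale."""
    return t_kelvin + ABSOLUTE_ZERO_C

# The triple point of water, 273.16 K, corresponds to 0.01 degrees Celsius:
print(round(kelvin_to_celsius(273.16), 2))  # 0.01

# Kelvin's 1848 value: the negative reciprocal of the expansion
# coefficient 0.00366 per degree Celsius relative to the ice point:
print(round(-1 / 0.00366, 1))  # -273.2, close to the modern -273.15
```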
In 2018, Resolution 1 of the 26th CGPM adopted a significant redefinition of the SI base units, which included redefining the kelvin in terms of a fixed value for the Boltzmann constant of 1.380649×10−23 J/K. When spelled out or spoken, the unit is pluralised using the same grammatical rules as for other SI units such as the volt or ohm. When reference is made to the "Kelvin scale", the word "kelvin", which is a noun, functions adjectivally to modify the noun "scale" and is capitalized. As with most other SI unit symbols, there is a space between the numeric value and the kelvin symbol. Before the 13th CGPM in 1967–1968, the unit kelvin was called a "degree", the same as the other temperature scales at the time. It was distinguished from the other scales with either the adjective suffix "Kelvin" or with "absolute", and its symbol was °K. The latter term, "degree absolute", the unit's official name from 1948 until 1954, was ambiguous, since it could be interpreted as referring to the Rankine scale. Before the 13th CGPM, the plural form was "degrees absolute".
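Under the 2018 redefinition, the kelvin is fixed by the exact value of the Boltzmann constant, so a temperature corresponds directly to a characteristic thermal energy kT. A minimal illustration (the function name is illustrative):

```python
K_B = 1.380649e-23  # J/K, an exact value under the 2018 redefinition

def thermal_energy_joules(temp_kelvin):
    """Characteristic thermal energy k*T at a given absolute temperature."""
    return K_B * temp_kelvin

# A change of exactly 1 K changes k*T by exactly 1.380649e-23 J:
print(thermal_energy_joules(1.0))

# Thermal energy at the triple point of water (273.16 K):
print(thermal_energy_joules(273.16))
```

This inverts the logical direction of the old definition: instead of fixing the temperature of a material artifact (the triple point of water) and measuring k, the new SI fixes k and lets the triple-point temperature become an experimentally determined quantity.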
The 13th CGPM changed the unit name to "kelvin". The omission of "degree" indicates that it is not relative to an arbitrary reference point like the Celsius and Fahrenheit scales, but is rather an absolute unit of measure which can be manipulated algebraically. In science and engineering, degrees Celsius and kelvins are often used in the same article, where absolute temperatures are given in degrees Celsius but temperature intervals are given in kelvins. For example, "its measured value was 0.01028 °C with an uncertainty of 60 µK." This practice is permissible because the degree Celsius is a special name for the kelvin for use in expressing relative temperatures, and the magnitude of the degree Celsius is equal to that of the kelvin. Notwithstanding that the official endorsement provided by Resolution 3 of the 13th CGPM states that "a temperature interval may be expressed in degrees Celsius", the practice of using both °C and K remains widespread throughout the scientific world. The use of SI-prefixed forms of the degree Celsius to express a temperature interval has not been adopted.
In 2005 the CIPM embarked on a programme to redefine the kelvin using a more experimentally rigorous methodology. In particular, the committee proposed redefining the kelvin such that Boltzmann's constant takes the exact value 1.3806505×10−23 J/K. The committee had hoped tha