The zodiac is an area of the sky that extends approximately 8° north or south of the ecliptic, the apparent path of the Sun across the celestial sphere over the course of the year. The paths of the Moon and visible planets remain within the belt of the zodiac. In Western astrology, and formerly astronomy, the zodiac is divided into twelve signs, each occupying 30° of celestial longitude and corresponding to the constellations Aries, Taurus, Gemini, Cancer, Leo, Virgo, Libra, Scorpio, Sagittarius, Capricorn, Aquarius and Pisces. The twelve astrological signs form a celestial coordinate system, or more specifically an ecliptic coordinate system, which takes the ecliptic as the origin of latitude and the Sun's position at the vernal equinox as the origin of longitude. The English word zodiac derives from zōdiacus, the Latinized form of the Ancient Greek zōidiakòs kýklos, meaning "cycle or circle of little animals". Zōidion is the diminutive of zōion ("animal"); the name reflects the prominence of animals among the twelve signs. The zodiac was in use by the Roman era, based on concepts inherited by Hellenistic astronomy from Babylonian astronomy of the Chaldean period, which, in turn, derived from an earlier system of lists of stars along the ecliptic.
The construction of the zodiac is described in Ptolemy's Almagest. Although the zodiac remains the basis of the ecliptic coordinate system used in astronomy alongside the equatorial one, the term and the names of the twelve signs are today mostly associated with horoscopic astrology. The term "zodiac" may also refer to the region of the celestial sphere encompassing the paths of the planets, corresponding to the band of about eight arc degrees above and below the ecliptic; the zodiac of a given planet is the band that contains the path of that particular body. By extension, the "zodiac of the comets" may refer to the band encompassing most short-period comets. The division of the ecliptic into the zodiacal signs originates in Babylonian astronomy during the first half of the 1st millennium BC. The zodiac draws on stars in earlier Babylonian star catalogues, such as the MUL.APIN catalogue, compiled around 1000 BC. Some of the constellations can be traced even further back, to Bronze Age sources, including Gemini "The Twins", from MAŠ.TAB.BA.GAL.GAL "The Great Twins", and Cancer "The Crab", from AL.LUL "The Crayfish", among others.
Around the end of the 5th century BC, Babylonian astronomers divided the ecliptic into twelve equal "signs", by analogy to twelve schematic months of thirty days each; each sign contained thirty degrees of celestial longitude, thus creating the first known celestial coordinate system. According to calculations by modern astrophysics, the zodiac was introduced between 409 and 398 BC, probably within a few years of 401 BC. Unlike the modern convention, which places the beginning of the sign of Aries at the position of the Sun at the vernal equinox, the Babylonian divisions were fixed relative to the stars and do not correspond to where the constellations started and ended in the sky; the Sun in fact passed through at least 13, not 12, Babylonian constellations. In order to align with the number of months in a year, designers of the system omitted the major constellation Ophiuchus. Including smaller figures, astronomers have counted up to 21 eligible zodiac constellations. Changes in the orientation of the Earth's axis of rotation mean that the time of year when the Sun is in a given constellation has shifted since Babylonian times.
Because the division was made into equal arcs of 30° each, the signs constituted an ideal system of reference for making predictions about a planet's longitude. However, Babylonian techniques of observational measurement were at a rudimentary stage: astronomers measured the position of a planet in reference to a set of "normal stars" close to the ecliptic, using them as observational reference points to help position a planet within this ecliptic coordinate system. In Babylonian astronomical diaries, a planet's position was typically given with respect to a zodiacal sign alone, less often in specific degrees within a sign; when degrees of longitude were given, they were expressed with reference to the 30° of the zodiacal sign, not with reference to the continuous 360° ecliptic. In astronomical ephemerides, the positions of significant astronomical phenomena were computed in sexagesimal fractions of a degree. For daily ephemerides, the daily positions of a planet were not as important as the astrologically significant dates when the planet crossed from one zodiacal sign to the next.
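The relationship between the two conventions above, a position quoted as degrees within a 30° sign versus a continuous 0–360° ecliptic longitude, can be sketched as follows. The sign list and function names are illustrative, not taken from any historical source:

```python
# Converting between "degrees within a zodiacal sign" and a continuous
# ecliptic longitude. Each of the twelve signs spans exactly 30 degrees.
SIGNS = ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo",
         "Libra", "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]

def to_longitude(sign: str, degrees_in_sign: float) -> float:
    """Continuous ecliptic longitude from a sign plus degrees within it."""
    return SIGNS.index(sign) * 30 + degrees_in_sign

def to_sign(longitude: float) -> tuple[str, float]:
    """Sign and degrees within the sign from a continuous longitude."""
    longitude %= 360
    return SIGNS[int(longitude // 30)], longitude % 30
```

For example, 10° of Gemini corresponds to a continuous longitude of 70°, since Gemini is the third sign and spans 60°–90°.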
Knowledge of the Babylonian zodiac is reflected in the Hebrew Bible. Some authors have linked the twelve tribes of Israel with the twelve signs, or with the Hebrew calendar's twelve lunar months in a lunar year. Martin and others have argued that the arrangement of the tribes around the Tabernacle corresponded to the order of the zodiac, with Judah, Reuben and Dan representing the middle signs of Leo, Aquarius and Scorpio, respectively.
Stellar evolution is the process by which a star changes over the course of time. Depending on the mass of the star, its lifetime can range from a few million years for the most massive to trillions of years for the least massive, longer than the age of the universe; the table shows the lifetimes of stars as a function of their masses. All stars are born from collapsing clouds of gas and dust called nebulae or molecular clouds. Over the course of millions of years, these protostars settle down into a state of equilibrium, becoming what is known as a main-sequence star. Nuclear fusion powers a star for most of its life; the energy is generated by the fusion of hydrogen atoms at the core of the main-sequence star. As the preponderance of atoms at the core becomes helium, stars like the Sun begin to fuse hydrogen along a spherical shell surrounding the core; this process causes the star to grow in size, passing through the subgiant stage until it reaches the red giant phase. Stars with at least half the mass of the Sun can begin to generate energy through the fusion of helium at their core, whereas more-massive stars can fuse heavier elements along a series of concentric shells.
Once a star like the Sun has exhausted its nuclear fuel, its core collapses into a dense white dwarf and the outer layers are expelled as a planetary nebula. Stars with around ten or more times the mass of the Sun can explode in a supernova as their inert iron cores collapse into an extremely dense neutron star or black hole. Although the universe is not old enough for any of the smallest red dwarfs to have reached the end of their lives, stellar models suggest they will slowly become brighter and hotter before running out of hydrogen fuel and becoming low-mass white dwarfs. Stellar evolution is not studied by observing the life of a single star, as most stellar changes occur too slowly to be detected, even over many centuries. Instead, astrophysicists come to understand how stars evolve by observing numerous stars at various points in their lifetimes and by simulating stellar structure using computer models. Stellar evolution starts with the gravitational collapse of a giant molecular cloud. Typical giant molecular clouds are roughly 100 light-years across and contain up to 6,000,000 solar masses.
As it collapses, a giant molecular cloud breaks into smaller and smaller pieces. In each of these fragments, the collapsing gas releases gravitational potential energy as heat; as its temperature and pressure increase, a fragment condenses into a rotating sphere of superhot gas known as a protostar. A protostar continues to grow by accretion of gas and dust from the molecular cloud, becoming a pre-main-sequence star as it reaches its final mass. Further development is determined by its mass; mass is typically compared to the mass of the Sun: 1.0 M☉ means 1 solar mass. Protostars are enshrouded in dust and are thus more readily visible at infrared wavelengths. Observations from the Wide-field Infrared Survey Explorer have been especially important for unveiling numerous Galactic protostars and their parent star clusters. Protostars with masses less than roughly 0.08 M☉ never reach temperatures high enough for nuclear fusion of hydrogen to begin. These are known as brown dwarfs; the International Astronomical Union defines brown dwarfs as objects massive enough to fuse deuterium at some point in their lives (above about 13 Jupiter masses, MJ).
Objects smaller than 13 MJ are classified as sub-brown dwarfs. Both types, deuterium-burning and not, shine dimly and fade away slowly, cooling over hundreds of millions of years. For a more massive protostar, the core temperature will eventually reach 10 million kelvin, initiating the proton–proton chain reaction and allowing hydrogen to fuse, first to deuterium and then to helium. In stars of somewhat over 1 M☉, the carbon–nitrogen–oxygen fusion reaction contributes a large portion of the energy generation. The onset of nuclear fusion leads quickly to a hydrostatic equilibrium in which energy released by the core maintains a high gas pressure, balancing the weight of the star's matter and preventing further gravitational collapse. The star thus evolves to a stable state, beginning the main-sequence phase of its evolution. A new star will sit at a specific point on the main sequence of the Hertzsprung–Russell diagram, with the main-sequence spectral type depending upon the mass of the star. Small, cool, low-mass red dwarfs fuse hydrogen slowly and will remain on the main sequence for hundreds of billions of years or longer, whereas massive, hot O-type stars will leave the main sequence after just a few million years.
A mid-sized yellow dwarf star, like the Sun, will remain on the main sequence for about 10 billion years. The Sun is thought to be in the middle of its main-sequence lifespan. Eventually the core exhausts its supply of hydrogen and the star begins to evolve off the main sequence. Without the outward pressure generated by the fusion of hydrogen to counteract the force of gravity, the core contracts until either electron degeneracy pressure becomes sufficient to oppose gravity or the core becomes hot enough for helium fusion to begin. Which of these happens first depends upon the star's mass. What happens after a low-mass star ceases to produce energy through fusion has not been directly observed. Recent astrophysical models suggest that red dwarfs of 0.1 M☉ may stay on the main sequence for some six to twelve trillion years.
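The strong mass dependence of these lifetimes can be illustrated with a crude textbook scaling, not a stellar-evolution model: assuming luminosity grows roughly as M³·⁵, the fuel-over-burn-rate lifetime goes as M/L ∝ M⁻²·⁵, normalised to about 10 Gyr for the Sun.

```python
def main_sequence_lifetime_gyr(mass_solar: float) -> float:
    """Rough main-sequence lifetime in Gyr, assuming L ~ M**3.5,
    so t ~ M/L ~ M**-2.5, normalised to ~10 Gyr for a 1-solar-mass star.
    A back-of-the-envelope scaling, not a model prediction."""
    return 10.0 * mass_solar ** -2.5

main_sequence_lifetime_gyr(1.0)   # 10 Gyr for a Sun-like star
```

With this scaling a 20 M☉ star lasts only a few million years, while a 0.1 M☉ red dwarf lasts thousands of gigayears, in line with the trillion-year estimates quoted above.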
Astrometry is the branch of astronomy that involves precise measurements of the positions and movements of stars and other celestial bodies. The information obtained by astrometric measurements provides information on the kinematics and physical origin of the Solar System and our galaxy, the Milky Way. The history of astrometry is linked to the history of star catalogues, which gave astronomers reference points for objects in the sky so they could track their movements. This can be dated back to Hipparchus, who around 190 BC used the catalogue of his predecessors Timocharis and Aristillus to discover Earth's precession. In doing so, he also developed the brightness scale still in use today, and compiled a catalogue of stars with their positions. Hipparchus's successor, Ptolemy, included a catalogue of 1,022 stars in his work the Almagest, giving their location and brightness. In the 10th century, Abd al-Rahman al-Sufi carried out observations of the stars and described their positions and colours. Ibn Yunus recorded more than 10,000 entries for the Sun's position over many years using a large astrolabe with a diameter of nearly 1.4 metres.
His observations of eclipses were still used centuries later in Simon Newcomb's investigations of the motion of the Moon, while his other observations of the motions of the planets Jupiter and Saturn inspired Laplace's Obliquity of the Ecliptic and Inequalities of Jupiter and Saturn. In the 15th century, the Timurid astronomer Ulugh Beg compiled the Zij-i-Sultani, in which he catalogued 1,019 stars. Like the earlier catalogues of Hipparchus and Ptolemy, Ulugh Beg's catalogue is estimated to have been precise to within about 20 minutes of arc. In the 16th century, Tycho Brahe used improved instruments, including large mural instruments, to measure star positions more accurately than previously, with a precision of 15–35 arcsec. Taqi al-Din measured the right ascension of the stars at the Constantinople Observatory of Taqi ad-Din using the "observational clock" he invented. When telescopes became commonplace, setting circles sped measurements. James Bradley first tried to measure stellar parallaxes in 1729. The stellar movement proved too insignificant for his telescope, but he instead discovered the aberration of light and the nutation of the Earth's axis.
His cataloguing of 3222 stars was refined in 1807 by Friedrich Bessel, the father of modern astrometry. Bessel made the first measurement of stellar parallax: 0.3 arcsec for the binary star 61 Cygni. Stellar parallaxes being difficult to measure, only about 60 had been obtained by the end of the 19th century, mostly by use of the filar micrometer. Astrographs using astronomical photographic plates sped the process in the early 20th century. Automated plate-measuring machines and the more sophisticated computer technology of the 1960s allowed more efficient compilation of star catalogues. In the 1980s, charge-coupled devices replaced photographic plates and reduced optical uncertainties to one milliarcsecond; this technology also made astrometry less expensive. In 1989, the European Space Agency's Hipparcos satellite took astrometry into orbit, where it could be less affected by mechanical forces of the Earth and optical distortions from its atmosphere. Operated from 1989 to 1993, Hipparcos measured large and small angles on the sky with much greater precision than any previous optical telescopes.
During its 4-year run, the positions and proper motions of 118,218 stars were determined with an unprecedented degree of accuracy. A new "Tycho catalog" drew together a database of 1,058,332 stars to within 20–30 mas. Additional catalogues were compiled for the 23,882 double/multiple stars and 11,597 variable stars analysed during the Hipparcos mission. Today, the catalogue most often used is USNO-B1.0, an all-sky catalogue that tracks proper motions, positions and other characteristics for over one billion stellar objects. During the past 50 years, 7,435 Schmidt camera plates were used to complete several sky surveys that make the data in USNO-B1.0 accurate to within 0.2 arcsec. Apart from the fundamental function of providing astronomers with a reference frame in which to report their observations, astrometry is fundamental for fields like celestial mechanics, stellar dynamics and galactic astronomy. In observational astronomy, astrometric techniques help identify stellar objects by their unique motions. Astrometry is also instrumental for timekeeping, in that UTC is atomic time synchronized to Earth's rotation by means of exact astronomical observations.
Astrometry is an important step in the cosmic distance ladder because it establishes parallax distance estimates for stars in the Milky Way. Astrometry has been used to support claims of extrasolar planet detection by measuring the displacement the proposed planets cause in their parent star's apparent position on the sky, due to their mutual orbit around the center of mass of the system. Astrometry is more accurate in space missions that are not affected by the distorting effects of the Earth's atmosphere. NASA's planned Space Interferometry Mission was to utilize astrometric techniques to detect terrestrial planets orbiting 200 or so of the nearest solar-type stars; the European Space Agency's Gaia Mission, launched in 2013, applies astrometric techniques in its stellar census. In addition to the detection of exoplanets, it can be used to determine their mass. Astrometric measurements are used by astrophysicists to constrain certain models in celestial mechanics. By measuring the velocities of pulsars, it is possible to put a limit on the asymmetry of supernova explosions.
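The size of the astrometric planet signal described above can be sketched with the standard barycentric argument: the star's own orbit about the system's centre of mass has semi-major axis a★ = a·m_planet/m_star, and an object 1 au across seen from 1 pc subtends 1 arcsecond. The function name and example numbers are illustrative:

```python
def astrometric_wobble_arcsec(planet_mass_solar: float, star_mass_solar: float,
                              orbit_au: float, distance_pc: float) -> float:
    """Angular semi-amplitude (arcsec) of a star's wobble about the
    system barycentre: the star's reflex orbit has semi-major axis
    a* = a_planet * m_planet / m_star, and 1 au at 1 pc subtends 1 arcsec."""
    a_star_au = orbit_au * planet_mass_solar / star_mass_solar
    return a_star_au / distance_pc

# A Jupiter-like planet (about 1/1047 solar masses) at 5.2 au,
# orbiting a Sun-like star 10 pc away:
astrometric_wobble_arcsec(1 / 1047, 1.0, 5.2, 10.0)  # roughly 5e-4 arcsec
```

A signal of half a milliarcsecond is why such detections demand the milliarcsecond-to-microarcsecond precision of space missions like Hipparcos and Gaia.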
In astronomy, metallicity describes the abundance of elements present in an object that are heavier than hydrogen or helium. Most of the physical matter in the Universe is in the form of hydrogen and helium, so astronomers use the word "metals" as a convenient shorthand for "all elements except hydrogen and helium"; this usage is distinct from the usual physical definition of a solid metal. For example, stars and nebulae with high abundances of carbon, nitrogen and neon are called "metal-rich" in astrophysical terms, though those elements are non-metals in chemistry. The presence of heavier elements hails from stellar nucleosynthesis, the theory that the majority of elements heavier than hydrogen and helium in the Universe are formed in the cores of stars as they evolve. Over time, stellar winds and supernovae deposit the metals into the surrounding environment, enriching the interstellar medium and providing recycling materials for the birth of new stars. It follows that older generations of stars, which formed in the metal-poor early Universe, have lower metallicities than those of younger generations, which formed in a more metal-rich Universe.
Observed changes in the chemical abundances of different types of stars, based on spectral peculiarities that were later attributed to metallicity, led astronomer Walter Baade in 1944 to propose the existence of two different populations of stars. These became known as Population I and Population II stars. A third stellar population was introduced in 1978, known as Population III stars; these extremely metal-poor stars were theorised to have been the "first-born" stars created in the Universe. Astronomers use several different methods to describe and approximate metal abundances, depending on the available tools and the object of interest. Some methods include determining the fraction of mass attributed to gas versus metals, or measuring the ratios of the number of atoms of two different elements as compared to the ratios found in the Sun. Stellar composition is often simply defined by the parameters X, Y and Z, where X is the mass fraction of hydrogen, Y is the mass fraction of helium, and Z is the mass fraction of all the remaining chemical elements.
Thus X + Y + Z = 1.00. In most stars, nebulae, H II regions and other astronomical sources, hydrogen and helium are the two dominant elements. The hydrogen mass fraction is expressed as X ≡ m_H/M, where M is the total mass of the system and m_H is the mass of the hydrogen it contains. The helium mass fraction is denoted as Y ≡ m_He/M. The remainder of the elements are collectively referred to as "metals", and the metallicity—the mass fraction of elements heavier than helium—can be calculated as Z = ∑_{i>He} m_i/M = 1 − X − Y. For the surface of the Sun, these parameters are measured to be approximately X ≈ 0.74, Y ≈ 0.25 and Z ≈ 0.013. Due to the effects of stellar evolution, neither the initial composition nor the present-day bulk composition of the Sun is the same as its present-day surface composition. The overall stellar metallicity is often defined using the total iron content of the star, as iron is among the easiest elements to measure with spectral observations in the visible spectrum. The abundance ratio is defined as the logarithm of the ratio of a star's iron abundance compared to that of the Sun and is expressed thus: [Fe/H] = log₁₀(N_Fe/N_H)_star − log₁₀(N_Fe/N_H)_Sun, where N_Fe and N_H are the number of iron and hydrogen atoms per unit volume respectively.
The unit often used for metallicity is the dex, a contraction of "decimal exponent". By this formulation, stars with a higher metallicity than the Sun have a positive logarithmic value, whereas those with a lower metallicity than the Sun have a negative value. For example, stars with a [Fe/H] value of +1 have 10 times the metallicity of the Sun. Young Population I stars have significantly higher iron-to-hydrogen ratios than older Population II stars. Primordial Population III stars are estimated to have a metallicity of less than −6.0, that is, less than a millionth of the abundance of iron in the Sun. The same notation is used to express variations in the abundances of other individual elements as compared to solar proportions; for example, the notation [O/Fe] represents the difference between a star's oxygen-to-iron ratio and that of the Sun.
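The [Fe/H] definition above is a straightforward difference of base-10 logarithms. A minimal sketch, with purely illustrative number densities:

```python
import math

def fe_h(n_fe_star: float, n_h_star: float,
         n_fe_sun: float, n_h_sun: float) -> float:
    """[Fe/H] in dex: log10 of the star's iron-to-hydrogen number ratio
    minus log10 of the same ratio for the Sun."""
    return math.log10(n_fe_star / n_h_star) - math.log10(n_fe_sun / n_h_sun)

# A star with ten times the solar iron-to-hydrogen ratio has [Fe/H] = +1,
# i.e. ten times the solar metallicity by this measure:
fe_h(10.0, 1.0, 1.0, 1.0)  # 1.0
```

Because only the ratio of ratios matters, the absolute number densities cancel; a star with exactly the solar ratio gives [Fe/H] = 0, and a metal-poor star gives a negative value.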
The Kelvin scale is an absolute thermodynamic temperature scale whose null point is absolute zero, the temperature at which all thermal motion ceases in the classical description of thermodynamics. The kelvin is the base unit of temperature in the International System of Units (SI). Until 2018, the kelvin was defined as the fraction 1/273.16 of the thermodynamic temperature of the triple point of water; in other words, it was defined such that the triple point of water is exactly 273.16 K. On 16 November 2018, a new definition was adopted, in terms of a fixed value of the Boltzmann constant. For legal metrology purposes, the new definition will come into force on 20 May 2019. The Kelvin scale is named after the Belfast-born, Glasgow University engineer and physicist William Thomson, 1st Baron Kelvin, who wrote of the need for an "absolute thermometric scale". Unlike the degree Fahrenheit and degree Celsius, the kelvin is not referred to or written as a degree. The kelvin is the primary unit of temperature measurement in the physical sciences, but is often used in conjunction with the degree Celsius, which has the same magnitude.
The definition implies that absolute zero is equivalent to −273.15 °C. In 1848, William Thomson, later made Lord Kelvin, wrote in his paper On an Absolute Thermometric Scale of the need for a scale whereby "infinite cold" was the scale's null point, and which used the degree Celsius for its unit increment. Kelvin calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time; this absolute scale is known today as the Kelvin thermodynamic temperature scale. Kelvin's value of "−273" was the negative reciprocal of 0.00366, the then-accepted expansion coefficient of gas per degree Celsius relative to the ice point, giving a remarkable consistency with the currently accepted value. In 1954, Resolution 3 of the 10th General Conference on Weights and Measures (CGPM) gave the Kelvin scale its modern definition by designating the triple point of water as its second defining point and assigning its temperature to exactly 273.16 kelvins. In 1967/1968, Resolution 3 of the 13th CGPM renamed the unit increment of thermodynamic temperature "kelvin", symbol K, replacing "degree Kelvin", symbol °K. Furthermore, feeling it useful to more explicitly define the magnitude of the unit increment, the 13th CGPM held in Resolution 4 that "The kelvin, unit of thermodynamic temperature, is equal to the fraction 1/273.16 of the thermodynamic temperature of the triple point of water." In 2005, the Comité International des Poids et Mesures (CIPM), a committee of the CGPM, affirmed that for the purposes of delineating the temperature of the triple point of water, the definition of the Kelvin thermodynamic temperature scale would refer to water having an isotopic composition specified as Vienna Standard Mean Ocean Water.
In 2018, Resolution A of the 26th CGPM adopted a significant redefinition of the SI base units, which included redefining the kelvin in terms of a fixed value for the Boltzmann constant of 1.380649×10−23 J/K. When spelled out or spoken, the unit is pluralised using the same grammatical rules as for other SI units such as the volt or ohm. When reference is made to the "Kelvin scale", the word "kelvin", which is normally a noun, functions adjectivally to modify the noun "scale" and is capitalized. As with most other SI unit symbols, there is a space between the numeric value and the kelvin symbol. Before the 13th CGPM in 1967–1968, the unit kelvin was called a "degree", the same as the other temperature scales at the time. It was distinguished from the other scales with either the adjective suffix "Kelvin" or with "absolute", and its symbol was °K. The latter term, the unit's official name from 1948 until 1954, was ambiguous since it could also be interpreted as referring to the Rankine scale. Before the 13th CGPM, the plural form was "degrees absolute".
The 13th CGPM changed the unit name to simply "kelvin". The omission of "degree" indicates that it is not relative to an arbitrary reference point like the Celsius and Fahrenheit scales, but rather is an absolute unit of measure which can be manipulated algebraically. In science and engineering, degrees Celsius and kelvins are often used in the same article, where absolute temperatures are given in degrees Celsius but temperature intervals are given in kelvins, e.g. "its measured value was 0.01028 °C with an uncertainty of 60 µK." This practice is permissible because the degree Celsius is a special name for the kelvin for use in expressing relative temperatures, and the magnitude of the degree Celsius is equal to that of the kelvin. Notwithstanding that the official endorsement provided by Resolution 3 of the 13th CGPM states "a temperature interval may also be expressed in degrees Celsius", the practice of using both °C and K is widespread throughout the scientific world. The use of SI prefixed forms of the degree Celsius to express a temperature interval has not been widely adopted.
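The relationship between the two scales described above is a fixed offset of 273.15, with intervals identical on both. A minimal sketch:

```python
def celsius_to_kelvin(t_c: float) -> float:
    """Absolute temperature in kelvins from a Celsius temperature:
    the two scales differ only by a fixed offset of 273.15."""
    return t_c + 273.15

def kelvin_to_celsius(t_k: float) -> float:
    """Celsius temperature from an absolute temperature in kelvins."""
    return t_k - 273.15

celsius_to_kelvin(0.0)       # 273.15 K (the ice point)
celsius_to_kelvin(-273.15)   # 0.0 K (absolute zero)
# A temperature *interval* of 1 degree Celsius is exactly 1 kelvin,
# which is why intervals may be quoted in either unit.
```

This is also why the triple point of water, 273.16 K under the pre-2018 definition, corresponds to 0.01 °C.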
In 2005 the CIPM embarked on a programme to redefine the kelvin using a more experimentally rigorous methodology. In particular, the committee proposed redefining the kelvin such that the Boltzmann constant takes the exact value 1.3806505×10−23 J/K.
In physics, an orbit is the gravitationally curved trajectory of an object, such as the trajectory of a planet around a star or a natural satellite around a planet. Normally, orbit refers to a regularly repeating trajectory, although it may also refer to a non-repeating trajectory. To a close approximation, planets and satellites follow elliptic orbits, with the central mass being orbited at a focal point of the ellipse, as described by Kepler's laws of planetary motion. For most situations, orbital motion is adequately approximated by Newtonian mechanics, which explains gravity as a force obeying an inverse-square law. However, Albert Einstein's general theory of relativity, which accounts for gravity as curvature of spacetime, with orbits following geodesics, provides a more accurate calculation and understanding of the exact mechanics of orbital motion. Historically, the apparent motions of the planets were described by European and Arabic philosophers using the idea of celestial spheres. This model posited the existence of perfect moving spheres or rings to which the stars and planets were attached.
It assumed the heavens were fixed apart from the motion of the spheres, and was developed without any understanding of gravity. After the planets' motions were more accurately measured, theoretical mechanisms such as deferents and epicycles were added. Although the model was capable of reasonably accurately predicting the planets' positions in the sky, more and more epicycles were required as the measurements became more accurate, so the model became increasingly unwieldy. Originally geocentric, it was modified by Copernicus to place the Sun at the centre and so help simplify the model. The model was further challenged during the 16th century, as comets were observed traversing the spheres. The basis for the modern understanding of orbits was first formulated by Johannes Kepler, whose results are summarised in his three laws of planetary motion. First, he found that the orbits of the planets in our Solar System are elliptical, not circular, as had previously been believed, and that the Sun is not located at the center of the orbits, but rather at one focus. Second, he found that the orbital speed of each planet is not constant, as had previously been thought, but rather that the speed depends on the planet's distance from the Sun.
Third, Kepler found a universal relationship between the orbital properties of all the planets orbiting the Sun: the cubes of their distances from the Sun are proportional to the squares of their orbital periods. Jupiter and Venus, for example, are respectively about 5.2 and 0.723 AU distant from the Sun, and their orbital periods are about 11.86 and 0.615 years. The proportionality is seen in the fact that the ratio for Jupiter, 5.2³/11.86², is practically equal to that for Venus, 0.723³/0.615², in accord with the relationship. Idealised orbits meeting these rules are known as Kepler orbits. Isaac Newton demonstrated that Kepler's laws were derivable from his theory of gravitation and that, in general, the orbits of bodies subject to gravity were conic sections. Newton showed that, for a pair of bodies, the orbits' sizes are in inverse proportion to their masses, and that those bodies orbit their common center of mass. Where one body is much more massive than the other, it is a convenient approximation to take the center of mass as coinciding with the center of the more massive body.
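The worked Jupiter/Venus comparison above can be checked directly: in units of astronomical units and years, a³/T² comes out close to 1 for every planet orbiting the Sun.

```python
def kepler_ratio(a_au: float, period_years: float) -> float:
    """a**3 / T**2, which Kepler's third law says is the same constant
    (about 1 in units of au and years) for every planet orbiting the Sun."""
    return a_au ** 3 / period_years ** 2

# The text's worked example, using the same rounded figures:
jupiter = kepler_ratio(5.2, 11.86)    # close to 1 au**3/yr**2
venus = kepler_ratio(0.723, 0.615)    # close to 1 au**3/yr**2
```

The small residual differences here come only from the rounding of the quoted distances and periods.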
Advances in Newtonian mechanics were then used to explore variations from the simple assumptions behind Kepler orbits, such as the perturbations due to other bodies, or the impact of spheroidal rather than spherical bodies. Lagrange developed a new approach to Newtonian mechanics emphasizing energy more than force, and made progress on the three-body problem, discovering the Lagrangian points. In a dramatic vindication of classical mechanics, in 1846 Urbain Le Verrier was able to predict the position of Neptune based on unexplained perturbations in the orbit of Uranus. Albert Einstein, in his 1916 paper The Foundation of the General Theory of Relativity, explained that gravity was due to curvature of space-time and removed Newton's assumption that changes propagate instantaneously. This led astronomers to recognize that Newtonian mechanics did not provide the highest accuracy in understanding orbits. In relativity theory, orbits follow geodesic trajectories which are approximated well by the Newtonian predictions, but the differences are measurable.
All the experimental evidence that can distinguish between the theories agrees with relativity theory to within experimental measurement accuracy. The original vindication of general relativity is that it was able to account for the remaining unexplained precession of Mercury's perihelion, first noted by Le Verrier. However, Newton's solution is still used for most short-term purposes since it is significantly easier to use and sufficiently accurate. Within a planetary system, planets, dwarf planets and other minor planets, comets and space debris orbit the system's barycenter in elliptical orbits. A comet in a parabolic or hyperbolic orbit about a barycenter is not gravitationally bound to the star and therefore is not considered part of the star's planetary system. Bodies which are gravitationally bound to one of the planets in a planetary system, either natural or artificial satellites, follow orbits about a barycenter near or within that planet. Owing to mutual gravitational perturbations, the eccentricities of the planetary orbits vary over time.
Mercury, the smallest planet in the Solar System, has the most eccentric orbit.
The parsec is a unit of length used to measure large distances to astronomical objects outside the Solar System. A parsec is defined as the distance at which one astronomical unit subtends an angle of one arcsecond, which corresponds to 648000/π astronomical units. One parsec is equal to about 31 trillion kilometres, or 19 trillion miles. The nearest star, Proxima Centauri, is about 1.3 parsecs from the Sun, and most of the stars visible to the unaided eye in the night sky are within 500 parsecs of the Sun. The parsec unit was first suggested in 1913 by the British astronomer Herbert Hall Turner. Named as a portmanteau of the parallax of one arcsecond, it was defined to make calculations of astronomical distances from only their raw observational data quick and easy for astronomers. For this reason, it is the unit preferred in astronomy and astrophysics, though the light-year remains prominent in popular science texts and common usage. Although parsecs are used for the shorter distances within the Milky Way, multiples of parsecs are required for the larger scales in the universe, including kiloparsecs for the more distant objects within and around the Milky Way, megaparsecs for mid-distance galaxies, and gigaparsecs for many quasars and the most distant galaxies.
In August 2015, the IAU passed Resolution B2, which, as part of the definition of a standardized absolute and apparent bolometric magnitude scale, mentioned an existing explicit definition of the parsec as exactly 648000/π astronomical units, or approximately 3.08567758149137×10¹⁶ metres. This corresponds to the small-angle definition of the parsec found in many contemporary astronomical references. The parsec is equal to the length of the longer leg of an elongated imaginary right triangle in space. The two dimensions on which this triangle is based are its shorter leg, of length one astronomical unit, and the subtended angle of the vertex opposite that leg, measuring one arcsecond. Applying the rules of trigonometry to these two values, the unit length of the other leg of the triangle can be derived. One of the oldest methods used by astronomers to calculate the distance to a star is to record the difference in angle between two measurements of the position of the star in the sky: the first measurement is taken from the Earth on one side of the Sun, and the second is taken half a year later, when the Earth is on the opposite side of the Sun.
The distance between the two positions of the Earth when the two measurements were taken is twice the distance between the Earth and the Sun. The difference in angle between the two measurements is twice the parallax angle, which is formed by lines from the Sun and the Earth to the star at the distant vertex; the distance to the star can then be calculated using trigonometry. The first successful published direct measurements of an object at interstellar distances were undertaken by German astronomer Friedrich Wilhelm Bessel in 1838, who used this approach to calculate the 3.5-parsec distance of 61 Cygni. The parallax of a star is defined as half of the angular distance that the star appears to move relative to the celestial sphere as Earth orbits the Sun. Equivalently, it is the angle subtended, from that star's perspective, by the semi-major axis of the Earth's orbit. The star, the Sun and the Earth form the corners of an imaginary right triangle in space: the right angle is the corner at the Sun, and the corner at the star is the parallax angle.
The length of the side opposite the parallax angle is the distance from the Earth to the Sun (defined as one astronomical unit), and the length of the adjacent side gives the distance from the Sun to the star. Therefore, given a measurement of the parallax angle, along with the rules of trigonometry, the distance from the Sun to the star can be found. A parsec is defined as the length of the side adjacent to the vertex occupied by a star whose parallax angle is one arcsecond. The use of the parsec as a unit of distance follows naturally from Bessel's method, because the distance in parsecs can be computed simply as the reciprocal of the parallax angle in arcseconds. No trigonometric functions are required in this relationship because the small angles involved mean that the approximate solution of the skinny triangle can be applied. Though it may have been used before, the term parsec was first mentioned in an astronomical publication in 1913. Astronomer Royal Frank Watson Dyson expressed his concern for the need of a name for that unit of distance.
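The reciprocal relationship described above makes the parsec especially convenient in code as well. A minimal sketch, using the 648000/π definition from the text:

```python
import math

def distance_pc(parallax_arcsec: float) -> float:
    """Distance in parsecs as the reciprocal of the parallax in arcseconds,
    via the skinny-triangle approximation described in the text."""
    return 1.0 / parallax_arcsec

# One parsec expressed in astronomical units, exactly 648000/pi:
PARSEC_IN_AU = 648000 / math.pi   # about 206,264.8 au

# A parallax of about 0.7687 arcsec corresponds to roughly 1.3 pc,
# the distance quoted for Proxima Centauri:
distance_pc(0.7687)
```

A star with a parallax of exactly one arcsecond is, by definition, one parsec away; halving the parallax doubles the distance.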
He proposed the name astron, but mentioned that Carl Charlier had suggested siriometer and Herbert Hall Turner had proposed parsec. It was Turner's proposal that was adopted. In the standard diagram defining the parsec, S represents the Sun and E the Earth at one point in its orbit, so that the distance ES is one astronomical unit (au). The angle SDE is one arcsecond, so by definition D is a point in space at a distance of one parsec from the Sun. Through trigonometry, the distance SD is calculated as follows: SD = ES / tan 1″ ≈ ES / 1″ (in radians) = 1 au / (π / (180 × 60 × 60)) = (648000/π) au ≈ 206264.8 au.