Astrometry is the branch of astronomy that involves precise measurements of the positions and movements of stars and other celestial bodies. Astrometric measurements provide information on the kinematics and physical origin of the Solar System and our galaxy, the Milky Way. The history of astrometry is linked to the history of star catalogues, which gave astronomers reference points for objects in the sky so they could track their movements. This can be dated back to Hipparchus, who around 190 BC used the catalogue of his predecessors Timocharis and Aristillus to discover Earth's precession; he also compiled a catalogue of stars recording their positions, and in doing so developed the brightness scale still in use today. Hipparchus's successor, Ptolemy, included a catalogue of 1,022 stars in his work the Almagest, giving their locations and brightnesses. In the 10th century, Abd al-Rahman al-Sufi carried out observations of the stars and described their positions and colors. Ibn Yunus recorded more than 10,000 entries for the Sun's position over many years, using a large astrolabe with a diameter of nearly 1.4 metres.
His observations of eclipses were still used centuries later in Simon Newcomb's investigations of the motion of the Moon, while his observations of the motions of the planets Jupiter and Saturn inspired Laplace's Obliquity of the Ecliptic and Inequalities of Jupiter and Saturn. In the 15th century, the Timurid astronomer Ulugh Beg compiled the Zij-i-Sultani, in which he catalogued 1,019 stars. Like the earlier catalogues of Hipparchus and Ptolemy, Ulugh Beg's catalogue is estimated to have been precise to within about 20 minutes of arc. In the 16th century, Tycho Brahe used improved instruments, including large mural instruments, to measure star positions with a precision of 15–35 arcsec. Taqi al-Din measured the right ascension of the stars at the Constantinople Observatory of Taqi ad-Din using the "observational clock" he invented. When telescopes became commonplace, setting circles sped measurements. James Bradley first tried to measure stellar parallaxes in 1729; the stellar movement proved too small for his telescope, but he instead discovered the aberration of light and the nutation of the Earth's axis.
His cataloguing of 3,222 stars was refined in 1807 by Friedrich Bessel, the father of modern astrometry, who made the first measurement of stellar parallax: 0.3 arcsec for the binary star 61 Cygni. Because parallaxes are so difficult to measure, only about 60 had been obtained by the end of the 19th century, mostly by use of the filar micrometer. Astrographs using astronomical photographic plates sped the process in the early 20th century. Automated plate-measuring machines and the more sophisticated computer technology of the 1960s allowed more efficient compilation of star catalogues. In the 1980s, charge-coupled devices replaced photographic plates and reduced optical uncertainties to one milliarcsecond; this technology also made astrometry less expensive. In 1989, the European Space Agency's Hipparcos satellite took astrometry into orbit, where it could be less affected by mechanical forces of the Earth and optical distortions from its atmosphere. Operated from 1989 to 1993, Hipparcos measured large and small angles on the sky with much greater precision than any previous optical telescope.
During its 4-year run, the positions and proper motions of 118,218 stars were determined with an unprecedented degree of accuracy. A new "Tycho catalogue" drew together a database of 1,058,332 stars measured to within 20–30 mas. Additional catalogues were compiled for the 23,882 double/multiple stars and 11,597 variable stars analyzed during the Hipparcos mission. Today, the catalogue most often used is USNO-B1.0, an all-sky catalogue that tracks proper motions, positions and other characteristics for over one billion stellar objects. During the past 50 years, 7,435 Schmidt camera plates were used to complete several sky surveys that make the data in USNO-B1.0 accurate to within 0.2 arcsec. Apart from the fundamental function of providing astronomers with a reference frame in which to report their observations, astrometry is fundamental for fields like celestial mechanics, stellar dynamics and galactic astronomy. In observational astronomy, astrometric techniques help identify stellar objects by their unique motions. Astrometry is also instrumental for keeping time: UTC is atomic time synchronized to Earth's rotation by means of exact astronomical observations.
Astrometry is an important step in the cosmic distance ladder because it establishes parallax distance estimates for stars in the Milky Way. Astrometry has been used to support claims of extrasolar planet detection by measuring the displacement the proposed planets cause in their parent star's apparent position on the sky, due to their mutual orbit around the center of mass of the system. Astrometry is more accurate in space missions that are not affected by the distorting effects of the Earth's atmosphere. NASA's planned Space Interferometry Mission was to utilize astrometric techniques to detect terrestrial planets orbiting 200 or so of the nearest solar-type stars; the European Space Agency's Gaia Mission, launched in 2013, applies astrometric techniques in its stellar census. In addition to the detection of exoplanets, it can be used to determine their mass. Astrometric measurements are used by astrophysicists to constrain certain models in celestial mechanics. By measuring the velocities of pulsars, it is possible to put a limit on the asymmetry of supernova explosions.
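The parallax rung of the distance ladder reduces to a simple reciprocal relation: a star showing an annual parallax of p arcseconds lies at a distance of 1/p parsecs. A minimal sketch (the function name is illustrative, not from any particular library):

```python
def parallax_to_distance_pc(parallax_arcsec: float) -> float:
    """Distance in parsecs from annual parallax in arcseconds: d = 1/p."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

# Bessel's parallax of 0.3 arcsec for 61 Cygni implies a distance of
# about 3.3 parsecs (roughly 11 light-years)
print(round(parallax_to_distance_pc(0.3), 1))  # 3.3
```

In practice the hard part is measuring p, which for even nearby stars is well under an arcsecond; the formula itself is exact by the definition of the parsec.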
The apparent magnitude of an astronomical object is a number that measures its brightness as seen by an observer on Earth. The magnitude scale is logarithmic: a difference of 1 in magnitude corresponds to a change in brightness by a factor of the fifth root of 100, or about 2.512. The brighter an object appears, the lower its magnitude value, with the brightest astronomical objects having negative apparent magnitudes: for example, Sirius at −1.46. The measurement of the apparent magnitudes or brightnesses of celestial objects is known as photometry. Apparent magnitudes are also used to quantify the brightness of sources at ultraviolet and infrared wavelengths. An apparent magnitude is measured in a specific passband corresponding to some photometric system such as the UBV system. In standard astronomical notation, an apparent magnitude in the V filter band would be denoted either as mV or simply as V, as in "mV = 15" or "V = 15" to describe a 15th-magnitude object; the scale used to indicate magnitude originates in the Hellenistic practice of dividing stars visible to the naked eye into six magnitudes.
The brightest stars in the night sky were said to be of first magnitude, whereas the faintest were of sixth magnitude, the limit of human visual perception. Each grade of magnitude was considered twice the brightness of the following grade, although that ratio was subjective, as no photodetectors existed; this rather crude scale for the brightness of stars was popularized by Ptolemy in his Almagest and is believed to have originated with Hipparchus. In 1856, Norman Robert Pogson formalized the system by defining a first-magnitude star as one that is 100 times as bright as a sixth-magnitude star, thereby establishing the logarithmic scale still in use today; this implies that a star of magnitude m is about 2.512 times as bright as a star of magnitude m + 1. This figure, the fifth root of 100, became known as Pogson's Ratio. The zero point of Pogson's scale was originally defined by assigning Polaris a magnitude of 2. Astronomers later discovered that Polaris is variable, so they switched to Vega as the standard reference star, assigning the brightness of Vega as the definition of zero magnitude at any specified wavelength.
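Pogson's definition makes brightness ratios easy to compute from magnitude differences: a difference of Δm magnitudes corresponds to a brightness factor of 100^(Δm/5). A short illustrative sketch (function name is a hypothetical helper, not a library call):

```python
def brightness_ratio(m_bright: float, m_faint: float) -> float:
    """Factor by which the brighter object outshines the fainter:
    ratio = 100 ** ((m_faint - m_bright) / 5)."""
    return 100.0 ** ((m_faint - m_bright) / 5.0)

# One magnitude step is Pogson's Ratio, the fifth root of 100 (~2.512)
print(round(brightness_ratio(1.0, 2.0), 3))  # 2.512
# Five steps recover Pogson's defining factor of exactly 100
print(round(brightness_ratio(1.0, 6.0), 6))  # 100.0
```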
Apart from small corrections, the brightness of Vega still serves as the definition of zero magnitude for visible and near-infrared wavelengths, where its spectral energy distribution approximates that of a black body at a temperature of 11,000 K. However, with the advent of infrared astronomy it was revealed that Vega's radiation includes an infrared excess due to a circumstellar disk consisting of dust at warm temperatures; at shorter wavelengths, there is negligible emission from dust at these temperatures. In order to properly extend the magnitude scale further into the infrared, this peculiarity of Vega should not affect the definition of the magnitude scale. Therefore, the magnitude scale was extrapolated to all wavelengths on the basis of the black-body radiation curve for an ideal stellar surface at 11,000 K uncontaminated by circumstellar radiation. On this basis the spectral irradiance for the zero-magnitude point can be computed as a function of wavelength. Small deviations are specified between systems using measurement apparatuses developed independently, so that data obtained by different astronomers can be properly compared; of greater practical importance, however, is that magnitude is defined not at a single wavelength but over the response of the standard spectral filters used in photometry in various wavelength bands.
With the modern magnitude systems, brightness over a wide range is specified according to the logarithmic definition detailed below, using this zero reference. In practice such apparent magnitudes do not exceed 30. The brightness of Vega is exceeded by four stars in the night sky at visible wavelengths, as well as by the bright planets Venus and Jupiter; these must be described by negative magnitudes. For example, Sirius, the brightest star of the celestial sphere, has an apparent magnitude of −1.4 in the visible. Negative magnitudes for other bright astronomical objects can be found in the table below. Astronomers have developed other photometric zero-point systems as alternatives to the Vega system; the most widely used is the AB magnitude system, in which photometric zero points are based on a hypothetical reference spectrum having constant flux per unit frequency interval, rather than using a stellar spectrum or black-body curve as the reference. The AB magnitude zero point is defined such that an object's AB and Vega-based magnitudes will be approximately equal in the V filter band.
As the amount of light received by a telescope is reduced by transmission through the Earth's atmosphere, any measurement of apparent magnitude is corrected to what it would have been as seen from above the atmosphere. The dimmer an object appears, the higher the numerical value of its apparent magnitude, with a difference of 5 magnitudes corresponding to a brightness factor of exactly 100. Therefore, the apparent magnitude m in the spectral band x is given by m_x = −5 log_100(F_x / F_x,0), which is more commonly expressed in terms of common (base-10) logarithms as m_x = −2.5 log_10(F_x / F_x,0), where F_x is the observed flux in band x and F_x,0 is the reference (zero-magnitude) flux in that band.
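The apparent-magnitude definition m_x = −2.5 log_10(F_x / F_x,0) can be checked numerically; the helper below is a hedged sketch, with the reference flux F_x,0 standing in for whatever zero point the photometric system defines:

```python
import math

def apparent_magnitude(flux: float, flux_zero_point: float) -> float:
    """m_x = -2.5 * log10(F_x / F_x,0)."""
    return -2.5 * math.log10(flux / flux_zero_point)

# A source delivering 1% of the zero-point flux is 5 magnitudes fainter
print(apparent_magnitude(0.01, 1.0))   # 5.0
# A source 100 times the zero-point flux has magnitude -5
print(apparent_magnitude(100.0, 1.0))  # -5.0
```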
A giant star is a star with substantially larger radius and luminosity than a main-sequence star of the same surface temperature. Giants lie above the main sequence on the Hertzsprung–Russell diagram and correspond to luminosity classes II and III; the terms giant and dwarf were coined by Ejnar Hertzsprung in about 1905 for stars of quite different luminosity despite similar temperature or spectral type. Giant stars have radii up to a few hundred times that of the Sun and luminosities between 10 and a few thousand times that of the Sun. Stars still more luminous than giants are referred to as supergiants and hypergiants. A hot, luminous main-sequence star may also be referred to as a giant, but any main-sequence star is properly called a dwarf no matter how large and luminous it is. A star becomes a giant after all the hydrogen available for fusion at its core has been depleted and, as a result, it leaves the main sequence. The behaviour of a post-main-sequence star depends on its mass. For a star with a mass above about 0.25 solar masses (M☉), once the core is depleted of hydrogen it contracts and heats up so that hydrogen starts to fuse in a shell around the core.
The portion of the star outside the shell expands and cools, but with only a small increase in luminosity, and the star becomes a subgiant. The inert helium core continues to grow and increase in temperature as it accretes helium from the shell, but in stars up to about 10–12 M☉ it does not become hot enough to start helium burning. Instead, after just a few million years the core reaches the Schönberg–Chandrasekhar limit, collapses, and may become degenerate. This causes the outer layers to expand further and generates a strong convective zone that brings heavy elements to the surface in a process called the first dredge-up. This strong convection increases the transport of energy to the surface, the luminosity increases, and the star moves onto the red-giant branch, where it will stably burn hydrogen in a shell for a substantial fraction of its entire life. The core continues to gain mass and increase in temperature, whereas there is some mass loss in the outer layers. If the star's mass, when on the main sequence, was below 0.4 M☉, it will never reach the central temperatures necessary to fuse helium.
It will therefore remain a hydrogen-fusing red giant until it runs out of hydrogen, at which point it will become a helium white dwarf. However, according to stellar evolution theory, no star of such low mass can have evolved to that stage within the age of the Universe. In stars above about 0.4 M☉ the core temperature eventually reaches about 10⁸ K, and helium begins to fuse to carbon and oxygen in the core by the triple-alpha process. When the core is degenerate, helium fusion begins explosively, but most of the energy goes into lifting the degeneracy and the core becomes convective. The energy generated by helium fusion reduces the pressure in the surrounding hydrogen-burning shell, which reduces its energy-generation rate. The overall luminosity of the star decreases, its outer envelope contracts again, and the star moves from the red-giant branch to the horizontal branch. When the core helium is exhausted, a star with up to about 8 M☉ has a carbon–oxygen core that becomes degenerate, and the star starts helium burning in a shell.
As with the earlier collapse of the helium core, this starts convection in the outer layers, triggers a second dredge-up, and causes a dramatic increase in size and luminosity. This is the asymptotic giant branch (AGB), analogous to the red-giant branch but more luminous, with a hydrogen-burning shell contributing most of the energy. Stars only remain on the AGB for around a million years, becoming increasingly unstable until they exhaust their fuel, go through a planetary nebula phase, and become a carbon–oxygen white dwarf. Main-sequence stars with masses above about 12 M☉ are already very luminous and move horizontally across the HR diagram when they leave the main sequence, becoming blue giants before they expand further into blue supergiants. They start core-helium burning before the core becomes degenerate and develop smoothly into red supergiants without a strong increase in luminosity. At this stage they have luminosities comparable to bright AGB stars although they have much higher masses, but they will further increase in luminosity as they burn heavier elements and eventually become a supernova.
Stars in the 8–12 M☉ range have somewhat intermediate properties and have been called super-AGB stars. They largely follow the tracks of lighter stars through the RGB, HB and AGB phases, but are massive enough to initiate core carbon burning and some neon burning. They form oxygen–magnesium–neon cores, which may collapse in an electron-capture supernova, or they may leave behind an oxygen–neon white dwarf. O-class main-sequence stars are already highly luminous; the giant phase for such stars is a brief phase of increased size and luminosity before developing a supergiant spectral luminosity class. Type O giants may be more than a hundred thousand times as luminous as the Sun, brighter than many supergiants. Classification is complex and difficult, with small differences between luminosity classes and a continuous range of intermediate forms. The most massive stars develop giant or supergiant spectral features while still burning hydrogen in their cores, due to mixing of heavy elements to the surface and high luminosity, which produces a powerful stellar wind and causes the star's atmosphere to expand.
A star whose initial mass is less than 0.25 M☉ will not become a giant star at all. For most of th
The photosphere is a star's outer shell, from which light is radiated. The term is derived from the Ancient Greek roots φῶς/φωτός (phos/photos), meaning "light", and σφαῖρα (sphaira), meaning "sphere", in reference to it being a spherical surface perceived to emit light. It extends into a star's surface until the plasma becomes opaque, equivalent to an optical depth of about 2/3, or equivalently, the depth from which roughly 50% of light escapes without being scattered. In other words, a photosphere is the deepest region of a luminous object, usually a star, that is transparent to photons of certain wavelengths. The surface of a star is defined to have a temperature given by the effective temperature in the Stefan–Boltzmann law. Stars, except neutron stars, have no solid or liquid surface; therefore, the photosphere is used to describe the Sun's or another star's visual surface. The Sun is composed mostly of the chemical elements hydrogen and helium; all heavier elements, called metals in astronomy, account for less than 2% of the mass, with oxygen, carbon and iron being the most abundant.
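The optical-depth convention can be checked directly: the fraction of photons that escape unscattered from optical depth τ is e^(−τ), and at the photospheric value τ = 2/3 that fraction is close to one half. A small sketch:

```python
import math

def escape_fraction(optical_depth: float) -> float:
    """Fraction of photons escaping without scattering: exp(-tau)."""
    return math.exp(-optical_depth)

# At the conventional photospheric depth tau = 2/3, roughly half escape
print(round(escape_fraction(2.0 / 3.0), 3))  # 0.513
# One optical depth corresponds to the 1/e attenuation length
print(round(escape_fraction(1.0), 3))        # 0.368
```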
The Sun's photosphere has a temperature between 4,500 and 6,000 K and a density of around 1×10⁻³ to 1×10⁻⁶ kg/m³. The Sun's photosphere is around 100 kilometers thick and is composed of convection cells called granules: cells of plasma each about 1,000 kilometers in diameter, with hot rising plasma in the center and cooler plasma falling in the narrow spaces between them, flowing at velocities of 7 kilometers per second. Each granule has a lifespan of only about twenty minutes, resulting in a continually shifting "boiling" pattern. Grouping the typical granules are supergranules, up to 30,000 kilometers in diameter, with lifespans of up to 24 hours and flow speeds of about 500 meters per second, carrying magnetic field bundles to the edges of the cells. Other magnetically related phenomena include sunspots and solar faculae dispersed between the granules; these details are too fine to be seen. The Sun's visible atmosphere has other layers above the photosphere: the 2,000-kilometer-deep chromosphere lies just between the photosphere and the much hotter but more tenuous corona.
Other "surface features" on the photosphere are sunspots.
Limb darkening is an optical effect seen in stars, where the central part of the stellar disk appears brighter than the edge, or limb, of the image. Understanding it offered early solar astronomers an opportunity to construct models of atmospheres with temperature gradients, and this encouraged the development of the theory of radiative transfer. Crucial to understanding limb darkening is the idea of optical depth: a distance equal to one optical depth is the thickness of absorbing gas from which a fraction 1/e of photons can escape. This is what defines the visible edge of a star, since it is at a few optical depths that the star becomes opaque. The radiation reaching us is well approximated by the sum of all the emission along the entire line of sight, up to the point where the optical depth is unity; in particular, if the intensity of radiation in the star varies linearly with optical depth, the radiation reaching us will be the intensity at an optical depth of unity. When we look near the edge of a star, we cannot "see" to the same depth as when we look at the center, because the line of sight must travel at an oblique angle through the stellar gas when looking near the limb.
In other words, the stellar radius at which we see the optical depth as unity increases as we move our line of sight towards the limb. The second effect is that the effective temperature of the stellar atmosphere decreases with increasing distance from the center of the star, and the radiation emitted by a gas is a strong function of temperature. For a black body, for example, the spectrally integrated intensity is proportional to the fourth power of the temperature. Since, to first approximation, the radiation we see comes from the point at which the optical depth is unity, and that point is deeper when we look at the center of the disk, the temperature there is higher and the intensity greater than when we look at the limb. In fact, the temperature in the atmosphere of a star does not always decrease with increasing height, and for certain spectral lines the optical depth is unity in a region of increasing temperature. In this case we see the phenomenon of "limb brightening".
Outside the lower atmosphere, well above the temperature-minimum region, we find the million-kelvin solar corona. For most wavelengths this region is optically thin, i.e. has small optical depth, and must therefore be limb-brightened if spherically symmetric. A further complication comes from the existence of rough structure: the classical analysis of stellar limb darkening, as described below, assumes a smooth hydrostatic equilibrium, and at some level of precision this assumption must fail. Instead, the boundary between the chromosphere and the corona consists of a complicated transition region, best observed at ultraviolet wavelengths, which are only detectable from space. In the figure shown here, as long as the observer at point P is outside the stellar atmosphere, the intensity seen in the direction θ will be a function only of the angle of incidence ψ. This is most conveniently approximated as a polynomial in cos ψ: I(ψ)/I(0) = Σ_{k=0}^{N} a_k cos^k ψ, where I(ψ) is the intensity seen at P along a line of sight forming angle ψ with respect to the stellar radius, and I(0) is the central intensity.
In order that the ratio be unity for ψ = 0, we must have Σ_{k=0}^{N} a_k = 1. For example, for a Lambertian radiator we have all a_k = 0 except a_0 = 1. As another example, for the Sun at 550 nm, the limb darkening is well expressed by N = 2 with a_0 = 1 − a_1 − a_2 = 0.30, a_1 = 0.93 and a_2 = −0.23. The equation for limb darkening is sometimes more conveniently written as I(ψ)/I(0) = 1 + Σ_{k=1}^{N} A_k (1 − cos ψ)^k, which now has N independent coefficients rather than N + 1 coefficients that must sum to unity. The A_k constants can be related to the a_k constants; for N = 2, A_1 = −(a_1 + 2a_2) and A_2 = a_2. For the Sun at 550 nm, we then have A_1 = −0.47 and A_2 = −0.23. This model gives
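The polynomial limb-darkening law and the conversion between coefficient sets can be verified numerically using the solar 550 nm coefficients quoted in the text; the sketch below assumes the quadratic (N = 2) case:

```python
import math

def limb_darkening(psi_deg: float, a: list) -> float:
    """I(psi)/I(0) = sum_k a_k * cos(psi)**k for a polynomial limb-darkening law."""
    mu = math.cos(math.radians(psi_deg))
    return sum(a_k * mu**k for k, a_k in enumerate(a))

# Solar coefficients at 550 nm from the text: a0 = 0.30, a1 = 0.93, a2 = -0.23
a = [0.30, 0.93, -0.23]
print(round(limb_darkening(0.0, a), 3))   # 1.0 at disk center (coefficients sum to 1)
print(round(limb_darkening(90.0, a), 3))  # 0.3 at the limb

# Conversion to the alternative coefficients: A1 = -(a1 + 2*a2), A2 = a2
A1 = -(a[1] + 2 * a[2])
A2 = a[2]
print(round(A1, 2), round(A2, 2))  # -0.47 -0.23
```

The recovered A_1 = −0.47 and A_2 = −0.23 match the values quoted for the Sun, confirming the conversion formulas.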
The Kelvin scale is an absolute thermodynamic temperature scale using as its null point absolute zero, the temperature at which all thermal motion ceases in the classical description of thermodynamics. The kelvin is the base unit of temperature in the International System of Units; until 2018, the kelvin was defined as the fraction 1/273.16 of the thermodynamic temperature of the triple point of water. In other words, it was defined such that the triple point of water is 273.16 K. On 16 November 2018, a new definition was adopted, in terms of a fixed value of the Boltzmann constant. For legal metrology purposes, the new definition will come into force on 20 May 2019; the Kelvin scale is named after the Belfast-born, Glasgow University engineer and physicist William Thomson, 1st Baron Kelvin, who wrote of the need for an "absolute thermometric scale". Unlike the degree Fahrenheit and degree Celsius, the kelvin is not referred to or written as a degree; the kelvin is the primary unit of temperature measurement in the physical sciences, but is used in conjunction with the degree Celsius, which has the same magnitude.
The definition implies that absolute zero is equivalent to −273.15 °C. In 1848, William Thomson, later made Lord Kelvin, wrote in his paper On an Absolute Thermometric Scale of the need for a scale whereby "infinite cold" was the scale's null point, and which used the degree Celsius for its unit increment. Kelvin calculated that absolute zero was equivalent to −273 °C on the air thermometers of his day; this absolute scale is known today as the Kelvin thermodynamic temperature scale. Kelvin's value of "−273" was the negative reciprocal of 0.00366, the accepted expansion coefficient of gas per degree Celsius relative to the ice point, giving a remarkable consistency with the currently accepted value. In 1954, Resolution 3 of the 10th General Conference on Weights and Measures (CGPM) gave the Kelvin scale its modern definition by designating the triple point of water as its second defining point and assigning it a temperature of exactly 273.16 kelvins. In 1967/1968, Resolution 3 of the 13th CGPM renamed the unit increment of thermodynamic temperature "kelvin", symbol K, replacing "degree Kelvin", symbol °K. Furthermore, finding it useful to define the magnitude of the unit increment more explicitly, the 13th CGPM held in Resolution 4 that "The kelvin, unit of thermodynamic temperature, is equal to the fraction 1/273.16 of the thermodynamic temperature of the triple point of water." In 2005, the Comité International des Poids et Mesures (CIPM), a committee of the CGPM, affirmed that for the purposes of delineating the temperature of the triple point of water, the definition of the Kelvin thermodynamic temperature scale would refer to water having an isotopic composition specified as Vienna Standard Mean Ocean Water.
In 2018, Resolution A of the 26th CGPM adopted a significant redefinition of the SI base units, which included redefining the kelvin in terms of a fixed value for the Boltzmann constant of 1.380649×10⁻²³ J/K. When spelled out or spoken, the unit is pluralised using the same grammatical rules as for other SI units such as the volt or ohm. When reference is made to the "Kelvin scale", the word "kelvin", which is normally a noun, functions adjectivally to modify the noun "scale" and is capitalized. As with most other SI unit symbols, there is a space between the numerical value and the kelvin symbol. Before the 13th CGPM in 1967–1968, the unit kelvin was called a "degree", the same as the other temperature scales at the time. It was distinguished from the other scales with either the adjective suffix "Kelvin" or with "absolute", and its symbol was °K. The latter term, the unit's official name from 1948 until 1954, was ambiguous, since it could also be interpreted as referring to the Rankine scale. Before the 13th CGPM, the plural form was "degrees absolute".
The 13th CGPM changed the unit name to simply "kelvin". The omission of "degree" indicates that it is not relative to an arbitrary reference point like the Celsius and Fahrenheit scales, but rather an absolute unit of measure which can be manipulated algebraically. In science and engineering, degrees Celsius and kelvins are often used in the same article, where absolute temperatures are given in degrees Celsius but temperature intervals are given in kelvins, e.g. "its measured value was 0.01028 °C with an uncertainty of 60 µK." This practice is permissible because the degree Celsius is a special name for the kelvin for use in expressing relative temperatures, and the magnitude of the degree Celsius is equal to that of the kelvin. Notwithstanding that the official endorsement provided by Resolution 3 of the 13th CGPM states "a temperature interval may be expressed in degrees Celsius", the practice of using both °C and K is widespread throughout the scientific world. The use of SI-prefixed forms of the degree Celsius to express a temperature interval has not been adopted.
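Because the kelvin and the degree Celsius have the same magnitude, conversion is a fixed offset of 273.15 and temperature intervals are numerically identical in both units. A minimal sketch:

```python
def celsius_to_kelvin(t_celsius: float) -> float:
    """Absolute temperature in kelvins: T/K = t/degC + 273.15."""
    return t_celsius + 273.15

# Absolute zero
print(celsius_to_kelvin(-273.15))            # 0.0
# The triple point of water under the pre-2018 definition
print(round(celsius_to_kelvin(0.01), 2))     # 273.16
# A temperature interval is the same number in K as in degC
print(round(celsius_to_kelvin(25.0) - celsius_to_kelvin(20.0), 6))  # 5.0
```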
In 2005 the CIPM embarked on a programme to redefine the kelvin using a more experimentally rigorous methodology. In particular, the committee proposed redefining the kelvin such that Boltzmann's constant takes the exact value 1.3806505×10−23 J/K. The committee had hoped tha
A star is a type of astronomical object consisting of a luminous spheroid of plasma held together by its own gravity. The nearest star to Earth is the Sun. Many other stars are visible to the naked eye from Earth during the night, appearing as a multitude of fixed luminous points in the sky due to their immense distance from Earth. Historically, the most prominent stars were grouped into constellations and asterisms, the brightest of which gained proper names. Astronomers have assembled star catalogues that identify the known stars and provide standardized stellar designations. However, most of the estimated 300 sextillion (3×10²³) stars in the Universe are invisible to the naked eye from Earth, including all stars outside our galaxy, the Milky Way. For at least a portion of its life, a star shines due to thermonuclear fusion of hydrogen into helium in its core, releasing energy that traverses the star's interior and then radiates into outer space. Almost all naturally occurring elements heavier than helium are created by stellar nucleosynthesis during the star's lifetime or, for some stars, by supernova nucleosynthesis when it explodes.
Near the end of its life, a star can also contain degenerate matter. Astronomers can determine the mass, age and many other properties of a star by observing its motion through space, its luminosity and its spectrum. The total mass of a star is the main factor that determines its evolution and eventual fate. Other characteristics of a star, including diameter and temperature, change over its life, while the star's environment affects its rotation and movement. A plot of the temperatures of many stars against their luminosities produces a diagram known as a Hertzsprung–Russell diagram (HR diagram). Plotting a particular star on that diagram allows the age and evolutionary state of that star to be estimated. A star's life begins with the gravitational collapse of a gaseous nebula composed of hydrogen, along with helium and trace amounts of heavier elements. When the stellar core is sufficiently dense, hydrogen is steadily converted into helium through nuclear fusion, releasing energy in the process. The remainder of the star's interior carries energy away from the core through a combination of radiative and convective heat transfer processes.
The star's internal pressure prevents it from collapsing further under its own gravity. A star with a mass greater than 0.4 times the Sun's will expand to become a red giant when the hydrogen fuel in its core is exhausted. In some cases, it will fuse heavier elements in shells around the core. As the star expands it throws off a part of its mass, enriched with those heavier elements, into the interstellar environment, to be recycled later as new stars. Meanwhile, the core becomes a stellar remnant: a white dwarf, a neutron star, or, if it is sufficiently massive, a black hole. Binary and multi-star systems consist of two or more stars that are gravitationally bound and move around each other in stable orbits. When two such stars have a close orbit, their gravitational interaction can have a significant impact on their evolution. Stars can form part of a much larger gravitationally bound structure, such as a star cluster or a galaxy. Stars have been important to civilizations throughout the world; they have been used for celestial navigation and orientation.
Many ancient astronomers believed that stars were permanently affixed to a heavenly sphere and that they were immutable. By convention, astronomers grouped stars into constellations and used them to track the motions of the planets and the inferred position of the Sun. The motion of the Sun against the background stars was used to create calendars, which could be used to regulate agricultural practices. The Gregorian calendar, used nearly everywhere in the world, is a solar calendar based on the angle of the Earth's rotational axis relative to its local star, the Sun. The oldest dated star chart was the result of ancient Egyptian astronomy in 1534 BC. The earliest known star catalogues were compiled by the ancient Babylonian astronomers of Mesopotamia in the late 2nd millennium BC, during the Kassite Period. The first star catalogue in Greek astronomy was created by Aristillus in approximately 300 BC, with the help of Timocharis. The star catalogue of Hipparchus included 1,020 stars and was used to assemble Ptolemy's star catalogue.
Hipparchus is known for the discovery of the first recorded nova. Many of the constellations and star names in use today derive from Greek astronomy. In spite of the apparent immutability of the heavens, Chinese astronomers were aware that new stars could appear. In 185 AD, they were the first to observe and write about a supernova, now known as SN 185. The brightest stellar event in recorded history was the SN 1006 supernova, observed in 1006 and written about by the Egyptian astronomer Ali ibn Ridwan and several Chinese astronomers. The SN 1054 supernova, which gave birth to the Crab Nebula, was also observed by Chinese and Islamic astronomers. Medieval Islamic astronomers gave Arabic names to many stars that are still used today, and they invented numerous astronomical instruments that could compute the positions of the stars. They built the first large observatory research institutes, mainly for the purpose of producing Zij star catalogues. Among these, the Book of Fixed Stars was written by the Persian astronomer Abd al-Rahman al-Sufi, who observed a number of stars, star clusters and galaxies.
According to A. Zahoor, in the 11th century, the Persian polymath scholar Abu Rayhan Biruni described the Milky