The apparent magnitude of an astronomical object is a measure of its brightness as seen by an observer on Earth. The magnitude scale is logarithmic: a difference of 1 in magnitude corresponds to a change in brightness by a factor of the fifth root of 100 (100^(1/5)), or about 2.512. The brighter an object appears, the lower its magnitude value, with the brightest astronomical objects having negative apparent magnitudes: for example, Sirius at −1.46. The measurement of the apparent magnitudes or brightnesses of celestial objects is known as photometry. Apparent magnitudes are used to quantify the brightness of sources at ultraviolet, visible, and infrared wavelengths. An apparent magnitude is measured in a specific passband corresponding to some photometric system such as the UBV system. In standard astronomical notation, an apparent magnitude in the V filter band is denoted either as mV or simply as V, as in "mV = 15" or "V = 15" for a 15th-magnitude object. The scale used to indicate magnitude originates in the Hellenistic practice of dividing stars visible to the naked eye into six magnitudes.
The brightest stars in the night sky were said to be of first magnitude, whereas the faintest were of sixth magnitude, the limit of human visual perception. Each grade of magnitude was considered twice the brightness of the following grade, although that ratio was subjective, as no photodetectors existed. This rather crude scale for the brightness of stars was popularized by Ptolemy in his Almagest and is believed to have originated with Hipparchus. In 1856, Norman Robert Pogson formalized the system by defining a first-magnitude star as one 100 times as bright as a sixth-magnitude star, thereby establishing the logarithmic scale still in use today; this implies that a star of magnitude m is about 2.512 times as bright as a star of magnitude m + 1. This figure, the fifth root of 100, became known as Pogson's ratio. The zero point of Pogson's scale was originally defined by assigning Polaris a magnitude of 2. Astronomers later discovered that Polaris is slightly variable, so they switched to Vega as the standard reference star, assigning the brightness of Vega as the definition of zero magnitude at any specified wavelength.
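Pogson's definition makes magnitude differences easy to convert into brightness ratios. A minimal Python sketch of the arithmetic (purely illustrative, not tied to any photometry library):

```python
def brightness_ratio(delta_m: float) -> float:
    """Brightness ratio corresponding to a magnitude difference delta_m.

    Each magnitude step corresponds to Pogson's ratio, the fifth root of 100.
    """
    return 100 ** (delta_m / 5)

# One magnitude corresponds to ~2.512x; five magnitudes to exactly 100x
print(brightness_ratio(1))
print(brightness_ratio(5))
```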
Apart from small corrections, the brightness of Vega still serves as the definition of zero magnitude for visible and near-infrared wavelengths, where its spectral energy distribution approximates that of a black body at a temperature of 11,000 K. However, with the advent of infrared astronomy it was revealed that Vega's radiation includes an infrared excess, due to a circumstellar disk of dust at warm temperatures. At shorter wavelengths there is negligible emission from dust at these temperatures, but in order to properly extend the magnitude scale further into the infrared, this peculiarity of Vega should not affect the definition of the magnitude scale. Therefore, the magnitude scale was extrapolated to all wavelengths on the basis of the black-body radiation curve for an ideal stellar surface at 11,000 K uncontaminated by circumstellar radiation. On this basis the spectral irradiance for the zero magnitude point can be computed as a function of wavelength. Small deviations are specified between systems using measurement apparatuses developed independently, so that data obtained by different astronomers can be properly compared. Of greater practical importance is the definition of magnitude not at a single wavelength but applying to the response of the standard spectral filters used in photometry over various wavelength bands.
With the modern magnitude systems, brightness over a wide range is specified according to the logarithmic definition detailed below, using this zero reference. In practice such apparent magnitudes do not exceed 30. The brightness of Vega is exceeded by four stars in the night sky at visible wavelengths, as well as by the bright planets Venus and Jupiter, and these must be described by negative magnitudes. For example, Sirius, the brightest star of the celestial sphere, has an apparent magnitude of −1.4 in the visible. Negative magnitudes for other bright astronomical objects can be found in the table below. Astronomers have developed other photometric zeropoint systems as alternatives to the Vega system. The most widely used is the AB magnitude system, in which photometric zeropoints are based on a hypothetical reference spectrum having constant flux per unit frequency interval, rather than using a stellar spectrum or black-body curve as the reference. The AB magnitude zeropoint is defined such that an object's AB and Vega-based magnitudes will be approximately equal in the V filter band.
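The AB system's constant-flux-density reference can be written down numerically: an AB magnitude is −2.5 log10 of the flux density relative to a zeropoint of 3631 janskys (a standard value, stated here as background rather than taken from the text):

```python
import math

AB_ZEROPOINT_JY = 3631.0  # flux density corresponding to AB magnitude 0

def ab_magnitude(flux_density_jy: float) -> float:
    """AB magnitude of a source with the given flux density in janskys."""
    return -2.5 * math.log10(flux_density_jy / AB_ZEROPOINT_JY)

print(ab_magnitude(3631.0))  # magnitude zero by definition
print(ab_magnitude(363.1))   # a source ten times fainter is 2.5 mag fainter
```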
As the amount of light received by a telescope is reduced by transmission through the Earth's atmosphere, any measurement of apparent magnitude is corrected for what it would have been as seen from above the atmosphere. The dimmer an object appears, the higher the numerical value of its apparent magnitude, with a difference of 5 magnitudes corresponding to a brightness factor of exactly 100. Therefore, the apparent magnitude m in the spectral band x is given by m_x = −5 log₁₀₀(F_x / F_x,0), more commonly expressed in terms of common (base-10) logarithms as m_x = −2.5 log₁₀(F_x / F_x,0), where F_x is the observed flux in band x and F_x,0 is the reference flux (zero point) for that band.
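The definition above can be wrapped in a small helper; the flux values in the usage line are illustrative inputs, not values from any catalogue:

```python
import math

def apparent_magnitude(flux: float, flux_zero_point: float) -> float:
    """m_x = -2.5 * log10(F_x / F_x,0); equivalently -5 * log base 100."""
    return -2.5 * math.log10(flux / flux_zero_point)

# A source 100 times fainter than the zero-point flux is magnitude 5
print(apparent_magnitude(1.0, 100.0))
```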
Astronomical spectroscopy is the study of astronomy using the techniques of spectroscopy to measure the spectrum of electromagnetic radiation, including visible light and radio waves, which radiates from stars and other celestial objects. A stellar spectrum can reveal many properties of stars, such as their chemical composition, density, distance, and relative motion via Doppler shift measurements. Spectroscopy is also used to study the physical properties of many other types of celestial objects, such as planets, nebulae, and active galactic nuclei. Astronomical spectroscopy is used to measure three major bands of radiation: the visible spectrum, radio, and X-ray. While all spectroscopy looks at specific areas of the spectrum, different methods are required to acquire the signal depending on the frequency. Ozone and molecular oxygen absorb light with wavelengths under 300 nm, meaning that X-ray and ultraviolet spectroscopy require the use of a satellite telescope or rocket-mounted detectors. Radio signals have much longer wavelengths than optical signals, and require the use of antennas or radio dishes.
Infrared light is absorbed by atmospheric water and carbon dioxide, so while the equipment is similar to that used in optical spectroscopy, satellites are required to record much of the infrared spectrum. Physicists have been looking at the solar spectrum since Isaac Newton first used a simple prism to observe the refractive properties of light. In the early 1800s Joseph von Fraunhofer used his skills as a glass maker to create very pure prisms, which allowed him to observe 574 dark lines in a continuous spectrum. Soon after this, he combined telescope and prism to observe the spectrum of Venus, the Moon, and various stars such as Betelgeuse. The resolution of a prism is limited by its size, however; this issue was resolved in the early 1900s with the development of high-quality reflection gratings by J. S. Plaskett at the Dominion Observatory in Ottawa, Canada. Light striking a mirror will reflect at the same angle, but with a grating a small portion of the light is diffracted at a different angle. By creating a "blazed" grating which utilizes a large number of parallel mirrors, that small portion of light can be focused and visualized.
These new spectroscopes were more detailed than a prism, required less light, and could be focused on a specific region of the spectrum by tilting the grating. The limitation of a blazed grating is the width of the mirrors, which can only be ground a finite amount before focus is lost. In order to overcome this limitation, holographic gratings were developed. Volume phase holographic gratings use a thin film of dichromated gelatin on a glass surface, which is subsequently exposed to a wave pattern created by an interferometer. This wave pattern sets up a reflection pattern similar to that of the blazed gratings, but utilizing Bragg diffraction, a process where the angle of reflection depends on the arrangement of the atoms in the gelatin. Holographic gratings can have up to 6000 lines/mm and can be up to twice as efficient in collecting light as blazed gratings. Because they are sealed between two sheets of glass, holographic gratings are very robust, lasting decades before needing replacement. Light dispersed by the grating or prism in a spectrograph can be recorded by a detector.
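How a grating disperses light can be sketched with the standard grating equation, m·λ = d·(sin θ_i + sin θ_m). The groove density used below (1,200 lines/mm) is an assumed, typical value for an astronomical spectrograph, not a figure from the text:

```python
import math

def diffraction_angle_deg(wavelength_nm: float, lines_per_mm: float,
                          order: int = 1, incidence_deg: float = 0.0):
    """Diffraction angle for a reflection grating, or None if the order is absent."""
    groove_spacing_nm = 1e6 / lines_per_mm
    s = order * wavelength_nm / groove_spacing_nm - math.sin(math.radians(incidence_deg))
    if abs(s) > 1.0:
        return None  # this order is not physically diffracted
    return math.degrees(math.asin(s))

# 500 nm light, first order, normal incidence on a 1,200 lines/mm grating
print(diffraction_angle_deg(500.0, 1200.0))  # ~36.9 degrees
```

Tilting the grating (changing `incidence_deg`) shifts which wavelengths land on the detector, which is how a spectrograph is "focused" on a region of the spectrum.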
Photographic plates were used to record spectra until electronic detectors were developed; today optical spectrographs mostly employ charge-coupled devices. The wavelength scale of a spectrum can be calibrated by observing the spectrum of emission lines of known wavelength from a gas-discharge lamp; the flux scale of a spectrum can be calibrated as a function of wavelength by comparison with an observation of a standard star, with corrections for atmospheric absorption of light. Radio astronomy was founded with the work of Karl Jansky in the early 1930s. While working for Bell Labs, he built a radio antenna to look at potential sources of interference for transatlantic radio transmissions. One of the sources of noise discovered came not from Earth, but from the center of the Milky Way, in the constellation Sagittarius. In 1942, J. S. Hey detected radio emission from the Sun using military radar receivers. Radio spectroscopy started with the discovery of the 21-centimeter H I line in 1951. Radio interferometry was pioneered in 1946, when Joseph Lade Pawsey, Ruby Payne-Scott and Lindsay McCready used a single antenna atop a sea cliff to observe 200 MHz solar radiation.
Two incident beams, one directly from the Sun and the other reflected from the sea surface, generated the necessary interference. The first multi-receiver interferometer was built in the same year by Martin Ryle and Derek Vonberg. In 1960, Ryle and Antony Hewish published the technique of aperture synthesis to analyze interferometer data. The aperture synthesis process, which involves autocorrelating and discrete Fourier transforming the incoming signal, recovers both the spatial and frequency variation in flux; the result is a 3D image. For this work, Ryle and Hewish were jointly awarded the 1974 Nobel Prize in Physics. Newton used a prism to split white light into a spectrum of color, and Fraunhofer's high-quality prisms allowed scientists to see dark lines of an unknown origin. In the 1850s, Gustav Kirchhoff and Robert Bunsen described the phenomena behind these dark lines.
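The aperture synthesis step mentioned above, recovering an image from Fourier-domain measurements, can be illustrated with a toy NumPy example. A real interferometer samples the (u, v) plane only sparsely; this sketch assumes complete coverage, so a simple inverse transform recovers the sky exactly:

```python
import numpy as np

# Toy sky: two point sources on a 64x64 grid
sky = np.zeros((64, 64))
sky[20, 30] = 1.0
sky[40, 12] = 0.5

# An interferometer measures "visibilities": samples of the Fourier
# transform of the sky brightness distribution.
visibilities = np.fft.fft2(sky)

# With complete (u, v) coverage, inverse transforming the visibilities
# reconstructs the image; sparse coverage would instead yield a "dirty"
# image requiring deconvolution (e.g. the CLEAN algorithm).
reconstructed = np.fft.ifft2(visibilities).real
print(np.allclose(reconstructed, sky))  # True
```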
In astronomy, stellar classification is the classification of stars based on their spectral characteristics. Electromagnetic radiation from the star is analyzed by splitting it with a prism or diffraction grating into a spectrum exhibiting the rainbow of colors interspersed with spectral lines; each line indicates a particular chemical element or molecule, with the line strength indicating the abundance of that element. The strengths of the different spectral lines vary due to the temperature of the photosphere, although in some cases there are true abundance differences; the spectral class of a star is a short code summarizing the ionization state, giving an objective measure of the photosphere's temperature. Most stars are classified under the Morgan–Keenan system using the letters O, B, A, F, G, K, M, a sequence from the hottest to the coolest; each letter class is subdivided using a numeric digit, with 0 being hottest and 9 being coolest. The sequence has been expanded with classes for other stars and star-like objects that do not fit in the classical system, such as class D for white dwarfs and classes S and C for carbon stars.
In the MK system, a luminosity class is added to the spectral class using Roman numerals. This is based on the width of certain absorption lines in the star's spectrum, which vary with the density of the atmosphere and so distinguish giant stars from dwarfs. Luminosity class 0 or Ia+ is used for hypergiants, class I for supergiants, class II for bright giants, class III for regular giants, class IV for subgiants, class V for main-sequence stars, class sd for subdwarfs, and class D for white dwarfs. The full spectral class for the Sun is G2V, indicating a main-sequence star with a temperature around 5,800 K. The conventional color description takes into account only the peak of the stellar spectrum. In actuality, stars radiate in all parts of the spectrum; because all spectral colors combined appear white, the actual apparent colors the human eye would observe are far lighter than the conventional color descriptions would suggest. This characteristic of 'lightness' indicates that the simplified assignment of colors within the spectrum can be misleading.
Excluding color-contrast illusions in dim light, there are no green, indigo, or violet stars. Red dwarfs are a deep shade of orange, and brown dwarfs do not appear brown, but hypothetically would appear dim grey to a nearby observer. The modern classification system is known as the Morgan–Keenan classification. Each star is assigned a spectral class from the older Harvard spectral classification and a luminosity class using Roman numerals as explained below, forming the star's spectral type. Other modern stellar classification systems, such as the UBV system, are based on color indexes: the measured differences in three or more color magnitudes. Those numbers are given labels such as "U−V" or "B−V", which represent the colors passed by two standard filters. The Harvard system is a one-dimensional classification scheme by astronomer Annie Jump Cannon, who re-ordered and simplified a prior alphabetical system. Stars are grouped according to their spectral characteristics by single letters of the alphabet, optionally with numeric subdivisions.
Main-sequence stars vary in surface temperature from approximately 2,000 to 50,000 K, whereas more-evolved stars can have temperatures above 100,000 K. Physically, the classes indicate the temperature of the star's atmosphere and are listed from hottest to coldest. The spectral classes O through M, as well as the other, more specialized classes discussed later, are subdivided by Arabic numerals, where 0 denotes the hottest stars of a given class. For example, A0 denotes the hottest stars in class A and A9 denotes the coolest ones. Fractional numbers are allowed; the Sun is classified as G2. Conventional color descriptions are traditional in astronomy and represent colors relative to the mean color of an A-class star, which is considered to be white. The apparent color descriptions are what the observer would see if trying to describe the stars under a dark sky without aid to the eye, or with binoculars. However, most stars in the sky, except the brightest ones, appear white or bluish white to the unaided eye because they are too dim for color vision to work. Red supergiants are cooler and redder than dwarfs of the same spectral type, and stars with particular spectral features such as carbon stars may be far redder than any black body.
The fact that the Harvard classification of a star indicated its surface or photospheric temperature was not understood until after its development, though by the time the first Hertzsprung–Russell diagram was formulated, this was suspected to be true. In the 1920s, the Indian physicist Meghnad Saha derived a theory of ionization by extending well-known ideas in physical chemistry pertaining to the dissociation of molecules to the ionization of atoms. He first applied it to the solar chromosphere and then to stellar spectra. Harvard astronomer Cecilia Payne then demonstrated that the O-B-A-F-G-K-M spectral sequence is in fact a sequence in temperature. Because the classification sequence predates our understanding that it is a temperature sequence, the placement of a spectrum into a given subtype, such as B3 or A7, depends upon estimates of the strengths of absorption features in stellar spectra. As a result, these subtypes are not evenly divided into any sort of mathematically representable intervals. The Yerkes spectral classification, also called the MKK system from the authors' initials, was introduced in 1943 by William Wilson Morgan, Philip C. Keenan, and Edith Kellman of Yerkes Observatory.
Proper motion is the astronomical measure of the observed changes in the apparent places of stars or other celestial objects in the sky, as seen from the center of mass of the Solar System, compared to the abstract background of the more distant stars. The components of proper motion in the equatorial coordinate system are given in the direction of right ascension and of declination; their combined value is computed as the total proper motion. It has dimensions of angle per time, typically expressed in arcseconds per year or milliarcseconds per year. Knowledge of the proper motion and radial velocity allows calculations of true stellar motion, or velocity in space with respect to the Sun, and, by coordinate transformation, the motion with respect to the Milky Way. Proper motion is not entirely "proper", because it includes a component due to the motion of the Solar System itself. Over the course of centuries, stars appear to maintain nearly fixed positions with respect to each other, so that they form the same constellations over historical time.
Ursa Major or Crux, for example, looks nearly the same now as it did centuries ago. However, precise long-term observations show that the constellations change shape, albeit slowly, and that each star has an independent motion. This motion is caused by the movement of the stars relative to the Solar System. The Sun travels in a nearly circular orbit about the center of the Milky Way at a speed of about 220 km/s at a radius of 8 kpc from the center, which can be taken as the rate of rotation of the Milky Way itself at this radius. The proper motion is a two-dimensional vector and is thus defined by two quantities: its position angle and its magnitude. The first quantity indicates the direction of the proper motion on the celestial sphere, and the second quantity is the motion's magnitude expressed in arcseconds per year or milliarcseconds per year. Proper motion may alternatively be defined by the angular changes per year in the star's right ascension and declination, using a constant epoch in defining these. By convention, the components of proper motion are arrived at as follows.
Suppose an object moves from coordinates (α₁, δ₁) to coordinates (α₂, δ₂) in a time Δt. The proper motions are given by: μα = (α₂ − α₁)/Δt, μδ = (δ₂ − δ₁)/Δt. The magnitude of the proper motion μ is given by the Pythagorean theorem: μ² = μδ² + μα²·cos²δ = μδ² + μα*², where δ is the declination. The factor cos²δ accounts for the fact that the radius from the axis of the sphere to its surface varies as cosδ, becoming, for example, zero at the pole. Thus, the component of velocity parallel to the equator corresponding to a given angular change in α is smaller the further north the object's location. The change μα, which must be multiplied by cosδ to become a component of the proper motion, is sometimes called the "proper motion in right ascension", and μδ the "proper motion in declination". If the proper motion in right ascension has been converted by cosδ, the result is designated μα*. For example, the proper motion results in right ascension in the Hipparcos Catalogue have been converted in this way. Hence, the individual proper motions in right ascension and declination are made equivalent for straightforward calculations of various other stellar motions.
The position angle θ is related to these components by: μ sin θ = μα cos δ = μα*, μ cos θ = μδ. Motions in equatorial coordinates can be converted to motions in galactic coordinates. For the majority of stars seen in the sky, the observed proper motions are small and unremarkable; such stars are either faint or distant, have changes of below 10 milliarcseconds per year, and do not appear to move appreciably over many millennia. A few do have significant motions, and are called high-proper-motion stars. Motions can be in seemingly random directions. Two or more stars, double stars or open star clusters, which are moving in similar directions exhibit so-called shared or common proper motion, suggesting they may be gravitationally attached or share similar motion in space. Barnard's Star has the largest proper motion of all stars, moving at 10.3 seconds of arc per year.
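The component formulas above can be combined into one small routine. The sample numbers in the usage line are purely illustrative, not measurements of any real star:

```python
import math

def total_proper_motion(mu_alpha: float, mu_delta: float, declination_deg: float):
    """Total proper motion and position angle from equatorial components.

    mu_alpha is the raw change in right ascension per year (not yet scaled
    by cos(declination)); mu_delta is the change in declination per year.
    """
    mu_alpha_star = mu_alpha * math.cos(math.radians(declination_deg))
    # mu^2 = mu_delta^2 + mu_alpha_star^2 (Pythagorean theorem)
    mu = math.hypot(mu_alpha_star, mu_delta)
    # position angle, measured east of north
    theta = math.degrees(math.atan2(mu_alpha_star, mu_delta)) % 360.0
    return mu, theta

# Illustrative values: 1.0 arcsec/yr in each raw component at declination 60 deg
mu, theta = total_proper_motion(1.0, 1.0, 60.0)
print(round(mu, 4), round(theta, 2))
```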
Stellar parallax is the apparent shift of position of any nearby star against the background of distant objects. Created by the different orbital positions of Earth, the small observed shift is largest at time intervals of about six months, when Earth arrives at opposite sides of the Sun in its orbit, giving a baseline distance of about two astronomical units between observations. The parallax itself is considered to be half of this maximum: about equivalent to the observational shift that would occur due to the different positions of Earth and the Sun, a baseline of one astronomical unit. Stellar parallax is so difficult to detect that its existence was the subject of much debate in astronomy for hundreds of years. It was first observed in 1806 by Giuseppe Calandrelli, who reported parallax in α Lyrae in his work "Osservazione e riflessione sulla parallasse annua dall'alfa della Lira". In 1838 Friedrich Bessel made the first successful parallax measurement, for the star 61 Cygni, using a Fraunhofer heliometer at Königsberg Observatory.
Once a star's parallax is known, its distance from Earth can be computed trigonometrically. But the more distant an object is, the smaller its parallax. Even with 21st-century techniques in astrometry, the limits of accurate measurement make distances farther away than about 100 parsecs too approximate to be useful when obtained by this technique. This limits the applicability of parallax as a measurement of distance to objects that are close on a galactic scale. Other techniques, such as spectral red-shift, are required to measure the distance of more remote objects. Stellar parallax measures are given in the tiny units of arcseconds, or even in thousandths of arcseconds. The distance unit parsec is defined as the length of the leg of a right triangle adjacent to the angle of one arcsecond at one vertex, where the other leg is 1 AU long. Because stellar parallaxes and distances all involve such narrow right triangles, a convenient trigonometric approximation can be used to convert parallaxes to distances.
The approximate distance is the reciprocal of the parallax: d ≃ 1/p. For example, Proxima Centauri, whose parallax is 0.7687 arcsec, is 1/0.7687 = 1.3009 parsecs distant. Stellar parallax is so small that its apparent absence was used as a scientific argument against heliocentrism during the early modern age. It is clear from Euclid's geometry that the effect would be undetectable if the stars were far enough away, but for various reasons such gigantic distances seemed implausible: it was one of Tycho Brahe's principal objections to Copernican heliocentrism that, in order for it to be compatible with the lack of observable stellar parallax, there would have to be an enormous and unlikely void between the orbit of Saturn and the eighth sphere. James Bradley first tried to measure stellar parallaxes in 1729. The stellar movement proved too insignificant for his telescope, but he instead discovered the aberration of light and the nutation of Earth's axis, and catalogued 3222 stars. Stellar parallax is most often measured using annual parallax, defined as the difference in position of a star as seen from Earth and Sun, i.e. the angle subtended at a star by the mean radius of Earth's orbit around the Sun.
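The reciprocal relation d ≃ 1/p can be wrapped in a tiny helper; the light-year conversion uses the standard factor 1 pc ≈ 3.2616 ly:

```python
def parallax_to_distance_pc(parallax_arcsec: float) -> float:
    """Approximate distance in parsecs as the reciprocal of the parallax."""
    return 1.0 / parallax_arcsec

d_pc = parallax_to_distance_pc(0.7687)  # Proxima Centauri's parallax in arcsec
print(round(d_pc, 4))                   # ~1.3009 parsecs
print(round(d_pc * 3.2616, 2))          # ~4.24 light-years
```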
The parsec is defined as the distance for which the annual parallax is one arcsecond. Annual parallax is measured by observing the position of a star at different times of the year as Earth moves through its orbit. Measurement of annual parallax was the first reliable way to determine the distances to the closest stars. The first successful measurements of stellar parallax were made by Friedrich Bessel in 1838 for the star 61 Cygni using a heliometer. Because parallaxes are difficult to measure, only about 60 had been obtained by the end of the 19th century, mostly by use of the filar micrometer. Astrographs using astronomical photographic plates sped the process in the early 20th century. Automated plate-measuring machines and the more sophisticated computer technology of the 1960s allowed more efficient compilation of star catalogues. In the 1980s, charge-coupled devices replaced photographic plates and reduced optical uncertainties to one milliarcsecond. Stellar parallax remains the standard for calibrating other measurement methods. Accurate calculations of distance based on stellar parallax require a measurement of the distance from Earth to the Sun, now known to exquisite accuracy based on radar reflection off the surfaces of planets.
The angles involved in these calculations are small and thus difficult to measure. The nearest star to the Sun, Proxima Centauri, has a parallax of 0.7687 ± 0.0003 arcsec. This angle is that subtended by an object 2 centimeters in diameter located 5.3 kilometers away. In 1989 the satellite Hipparcos was launched for obtaining parallaxes and proper motions of nearby stars, increasing the number of stellar parallaxes measured to milliarcsecond accuracy a thousandfold. Even so, Hipparcos was only able to measure parallax angles for stars up to about 1,600 light-years away, a little more than one percent of the diameter of the Milky Way Galaxy. The Hubble telescope's WFC3 now has a precision of 20 to 40 microarcseconds, enabling reliable distance measurements at substantially greater distances.
The radial velocity of an object with respect to a given point is the rate of change of the distance between the object and the point. That is, the radial velocity is the component of the object's velocity that points in the direction of the radius connecting the object and the point. In astronomy, the point is taken to be the observer on Earth, so the radial velocity denotes the speed with which the object moves away from or approaches the Earth. In astronomy, radial velocity is measured to the first order of approximation by Doppler spectroscopy; the quantity obtained by this method may be called the barycentric radial-velocity measure or spectroscopic radial velocity. However, due to relativistic and cosmological effects over the great distances that light travels to reach the observer from an astronomical object, this measure cannot be transformed to a geometric radial velocity without additional assumptions about the object and the space between it and the observer. By contrast, astrometric radial velocity is determined by astrometric observations.
Light from an object with a substantial relative radial velocity at emission will be subject to the Doppler effect, so the frequency of the light decreases for objects that were receding and increases for objects that were approaching. The radial velocity of a star or other luminous distant object can be measured by taking a high-resolution spectrum and comparing the measured wavelengths of known spectral lines to wavelengths from laboratory measurements. A positive radial velocity indicates that the distance between the objects is or was increasing. In many binary stars, the orbital motion causes radial velocity variations of several kilometers per second; as the spectra of these stars vary due to the Doppler effect, they are called spectroscopic binaries. Radial velocity can be used to estimate the ratio of the masses of the stars and some orbital elements, such as eccentricity and semimajor axis. The same method has been used to detect planets around stars: the measurement of the star's periodic movement determines the planet's orbital period, while the resulting radial-velocity amplitude allows the calculation of a lower bound on the planet's mass using the binary mass function.
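The comparison of observed and laboratory wavelengths translates into velocity via the non-relativistic Doppler formula v = c·Δλ/λ. A minimal sketch; the H-alpha rest wavelength is a standard laboratory value, while the observed wavelength is an invented example:

```python
def radial_velocity_km_s(lambda_observed_nm: float, lambda_rest_nm: float) -> float:
    """Non-relativistic radial velocity; positive means the source is receding."""
    c_km_s = 299_792.458  # speed of light in km/s
    return c_km_s * (lambda_observed_nm - lambda_rest_nm) / lambda_rest_nm

# H-alpha (rest 656.281 nm) observed at 656.325 nm: redshifted, so receding
print(round(radial_velocity_km_s(656.325, 656.281), 1))  # ~20.1 km/s
```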
Radial velocity methods alone may only reveal a lower bound, since a large planet orbiting at a high angle to the line of sight will perturb its star radially as much as a much smaller planet with an orbital plane on the line of sight. It has been suggested that planets with high eccentricities calculated by this method may in fact be two-planet systems of circular or near-circular resonant orbits. The radial velocity method of detecting exoplanets is based on the detection of variations in the velocity of the central star due to the changing direction of the gravitational pull from an exoplanet as it orbits the star. When the star moves towards us its spectrum is blueshifted, while it is redshifted when it moves away from us. By looking at the spectrum of a star, and so measuring its velocity, it can be determined whether it moves periodically due to the influence of an exoplanet companion. From the instrumental perspective, velocities are measured relative to the telescope's motion. An important first step of the data reduction is therefore to remove the contributions of: the Earth's elliptical motion around the Sun, at about ±30 km/s; the monthly motion of about ±13 m/s of the Earth around the center of gravity of the Earth–Moon system; the daily rotation of the telescope with the Earth's crust around the Earth's axis, up to ±460 m/s at the equator and proportional to the cosine of the telescope's geographic latitude; small contributions from the Earth's polar motion, at the level of mm/s; and contributions of 230 km/s from the motion around the Galactic center and associated proper motions.
In the case of spectroscopic measurements, corrections of the order of ±20 cm/s are also applied to account for aberration.
In astronomy, luminosity is the total amount of energy emitted per unit of time by a star, galaxy, or other astronomical object. As a term for energy emitted per unit time, luminosity is synonymous with power. In SI units luminosity is measured in joules per second, or watts. Values for luminosity are often given in terms of the luminosity of the Sun, L⊙. Luminosity can also be given in terms of the astronomical magnitude system: the absolute bolometric magnitude of an object is a logarithmic measure of its total energy emission rate, while absolute magnitude is a logarithmic measure of the luminosity within some specific wavelength range or filter band. In contrast, the term brightness in astronomy is used to refer to an object's apparent brightness: that is, how bright an object appears to an observer. Apparent brightness depends on the luminosity of the object, on the distance between the object and observer, and on any absorption of light along the path from object to observer. Apparent magnitude is a logarithmic measure of apparent brightness.
A distance determined by luminosity measures can be somewhat ambiguous, and is thus sometimes called the luminosity distance. In astronomy, luminosity is the amount of electromagnetic energy a body radiates per unit of time; when not qualified, the term "luminosity" means bolometric luminosity, measured either in the SI unit, watts, or in terms of solar luminosities. A bolometer is the instrument used to measure radiant energy over a wide band by absorption and measurement of heating. A star also radiates neutrinos, which carry off some energy, contributing to the star's total luminosity. The IAU has defined a nominal solar luminosity of 3.828×10²⁶ W to promote publication of consistent and comparable values in units of the solar luminosity. While bolometers do exist, they cannot be used to measure even the apparent brightness of a star, because they are insufficiently sensitive across the electromagnetic spectrum and because most wavelengths do not reach the surface of the Earth. In practice bolometric magnitudes are measured by taking measurements at certain wavelengths and constructing a model of the total spectrum most likely to match those measurements.
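The link between bolometric luminosity and absolute bolometric magnitude mentioned earlier can be made concrete. The zero-point luminosity below (3.0128×10²⁸ W, the IAU convention for M_bol = 0) is a standard reference value assumed here, not a figure from the text:

```python
import math

L_SUN_W = 3.828e26    # IAU nominal solar luminosity (from the text)
L_ZERO_W = 3.0128e28  # assumed IAU zero-point luminosity for M_bol = 0

def absolute_bolometric_magnitude(luminosity_w: float) -> float:
    """M_bol = -2.5 * log10(L / L0)."""
    return -2.5 * math.log10(luminosity_w / L_ZERO_W)

print(round(absolute_bolometric_magnitude(L_SUN_W), 2))  # ~4.74 for the Sun
```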
In some cases, the process of estimation is extreme, with luminosities being calculated when less than 1% of the energy output is observed, for example with a hot Wolf–Rayet star observed only in the infrared. Bolometric luminosities can also be calculated using a bolometric correction to a luminosity in a particular passband. The term luminosity is also used in relation to particular passbands, such as a visual luminosity or a K-band luminosity. These are not luminosities in the strict sense of an absolute measure of radiated power, but absolute magnitudes defined for a given filter in a photometric system. Several different photometric systems exist; some, such as the UBV or Johnson system, are defined against photometric standard stars, while others, such as the AB system, are defined in terms of a spectral flux density. A star's luminosity can be determined from two stellar characteristics: size and effective temperature. The former is typically represented in terms of solar radii, R⊙, while the latter is represented in kelvins, but in most cases neither can be measured directly.
To determine a star's radius, two other metrics are needed: the star's angular diameter and its distance from Earth. Both can be measured with great accuracy in certain cases, with cool supergiants often having large angular diameters, and some cool evolved stars having masers in their atmospheres that can be used to measure the parallax using VLBI. However, for most stars the angular diameter or parallax, or both, are far below our ability to measure with any certainty. Since the effective temperature is a number that represents the temperature of a black body that would reproduce the luminosity, it cannot be measured directly, but it can be estimated from the spectrum. An alternative way to measure stellar luminosity is to measure the star's apparent brightness and distance. A third component needed to derive the luminosity is the degree of interstellar extinction that is present, a condition which arises because of gas and dust in the interstellar medium, the Earth's atmosphere, and circumstellar matter.
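The size-and-temperature route to luminosity is the Stefan–Boltzmann law, L = 4πR²σT⁴. As a sanity check, plugging in nominal solar values recovers the solar luminosity; the constants are standard reference values, assumed here rather than taken from the text:

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN_M = 6.957e8       # nominal solar radius, m
L_SUN_W = 3.828e26      # nominal solar luminosity, W

def stellar_luminosity(radius_m: float, t_eff_k: float) -> float:
    """L = 4 * pi * R^2 * sigma * T_eff^4, treating the star as a black body."""
    return 4.0 * math.pi * radius_m ** 2 * SIGMA * t_eff_k ** 4

# The Sun, with effective temperature ~5772 K
ratio = stellar_luminosity(R_SUN_M, 5772.0) / L_SUN_W
print(round(ratio, 3))  # ~1.0
```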
One of astronomy's central challenges in determining a star's luminosity is to derive accurate measurements for each of these components, without which an accurate luminosity figure remains elusive. Extinction can only be measured directly if the actual and observed luminosities are both known, but it can be estimated from the observed colour of a star, using models of the expected level of reddening from the interstellar medium. In the current system of stellar classification, stars are grouped according to temperature, with the massive, young and energetic Class O stars boasting temperatures in excess of 30,000 K while the less massive, older Class M stars exhibit temperatures less than 3,500 K. Because luminosity is proportional to temperature to the fourth power, the large variation in stellar temperatures produces an even larger variation in stellar luminosity. Because the luminosity depends on a high power of the stellar mass, high-mass luminous stars have much shorter lifetimes; the most luminous stars are always young stars, no more than a few million years old for the most extreme.
In the Hertzsprung–Russell diagram, the x-axis represents temperature or spectral type while the y-axis represents luminosity or magnitude. The vast majority of stars are found along the main sequence, with blue Class O stars found at the top left of the chart while red Class M stars fall to the bottom right. Certain stars like Deneb and Betelgeuse are found above and to the right of the main sequence, being more luminous or cooler than their equivalents on the main sequence.