Apparent magnitude

The apparent magnitude of an astronomical object is a number, a measure of its brightness as seen by an observer on Earth. The magnitude scale is logarithmic: a difference of 1 in magnitude corresponds to a change in brightness by a factor of the fifth root of 100, or about 2.512. The brighter an object appears, the lower its magnitude value, with the brightest astronomical objects having negative apparent magnitudes: for example Sirius at −1.46. The measurement of apparent magnitudes or brightnesses of celestial objects is known as photometry. Apparent magnitudes are also used to quantify the brightness of sources at ultraviolet and infrared wavelengths. An apparent magnitude is measured in a specific passband corresponding to some photometric system such as the UBV system. In standard astronomical notation, an apparent magnitude in the V filter band is denoted either as mV or simply as V, as in "mV = 15" or "V = 15" for a 15th-magnitude object. The scale used to indicate magnitude originates in the Hellenistic practice of dividing stars visible to the naked eye into six magnitudes.

The brightest stars in the night sky were said to be of first magnitude, whereas the faintest were of sixth magnitude, the limit of human visual perception. Each grade of magnitude was considered twice the brightness of the following grade, although that ratio was subjective, as no photodetectors existed. This rather crude scale for the brightness of stars was popularized by Ptolemy in his Almagest and is believed to have originated with Hipparchus. In 1856, Norman Robert Pogson formalized the system by defining a first-magnitude star as one 100 times as bright as a sixth-magnitude star, thereby establishing the logarithmic scale still in use today; this implies that a star of magnitude m is about 2.512 times as bright as a star of magnitude m + 1. This figure, the fifth root of 100, became known as Pogson's Ratio. The zero point of Pogson's scale was originally defined by assigning Polaris a magnitude of 2. Astronomers later discovered that Polaris is slightly variable, so they switched to Vega as the standard reference star, assigning the brightness of Vega as the definition of zero magnitude at any specified wavelength.
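Pogson's definition can be verified with a few lines of arithmetic. The snippet below is an illustrative sketch; the function name is ours, not a standard API:

```python
# Pogson's ratio: a one-magnitude step is the fifth root of 100.
POGSON = 100 ** (1 / 5)  # about 2.512

def brightness_ratio(m1, m2):
    """Factor by which an object of magnitude m1 outshines one of magnitude m2."""
    return 100 ** ((m2 - m1) / 5)

# Five magnitudes (first vs. sixth) correspond to exactly a factor of 100:
assert abs(brightness_ratio(1, 6) - 100.0) < 1e-9
# One magnitude corresponds to Pogson's ratio:
assert abs(brightness_ratio(1, 2) - POGSON) < 1e-12
```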

Apart from small corrections, the brightness of Vega still serves as the definition of zero magnitude for visible and near infrared wavelengths, where its spectral energy distribution approximates that of a black body for a temperature of 11000 K. However, with the advent of infrared astronomy it was revealed that Vega's radiation includes an infrared excess, due to a circumstellar disk consisting of dust at warm temperatures. At shorter wavelengths, there is negligible emission from dust at these temperatures. However, in order to properly extend the magnitude scale further into the infrared, this peculiarity of Vega should not affect the definition of the magnitude scale. Therefore, the magnitude scale was extrapolated to all wavelengths on the basis of the black-body radiation curve for an ideal stellar surface at 11000 K uncontaminated by circumstellar radiation. On this basis the spectral irradiance for the zero-magnitude point can be computed as a function of wavelength. Small deviations are specified between systems using measurement apparatuses developed independently, so that data obtained by different astronomers can be properly compared; of greater practical importance, though, is that magnitude is defined not at a single wavelength but over the response of the standard spectral filters used in photometry in various wavelength bands.

With the modern magnitude systems, brightness over a very wide range is specified according to the logarithmic definition detailed below, using this zero reference. In practice such apparent magnitudes do not exceed 30. The brightness of Vega is exceeded by four stars in the night sky at visible wavelengths, as well as by the bright planets Venus and Jupiter; these objects must be described by negative magnitudes. For example, Sirius, the brightest star of the celestial sphere, has an apparent magnitude of −1.4 in the visible. Negative magnitudes for other bright astronomical objects can be found in the table below. Astronomers have developed other photometric zeropoint systems as alternatives to the Vega system. The most widely used is the AB magnitude system, in which photometric zeropoints are based on a hypothetical reference spectrum having constant flux per unit frequency interval, rather than using a stellar spectrum or blackbody curve as the reference. The AB magnitude zeropoint is defined such that an object's AB and Vega-based magnitudes will be approximately equal in the V filter band.
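As a brief illustration of the AB convention, its zeropoint corresponds to a flux density of about 3631 janskys, so an AB magnitude follows directly from a flux density per unit frequency. The function name below is ours, not part of any library:

```python
import math

AB_ZEROPOINT_JY = 3631.0  # flux density (janskys) of a zero-magnitude AB source

def ab_magnitude(flux_jy):
    """AB magnitude from a flux density in janskys (constant per unit frequency)."""
    return -2.5 * math.log10(flux_jy / AB_ZEROPOINT_JY)

assert abs(ab_magnitude(3631.0)) < 1e-12       # zero point, by construction
assert abs(ab_magnitude(363.1) - 2.5) < 1e-9   # 10x fainter -> 2.5 mag fainter
```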

As the amount of light received by a telescope is reduced by transmission through the Earth's atmosphere, any measurement of apparent magnitude is corrected to the value it would have had as seen from above the atmosphere. The dimmer an object appears, the higher the numerical value given to its apparent magnitude, with a difference of 5 magnitudes corresponding to a brightness factor of exactly 100. Therefore, the apparent magnitude m in the spectral band x is given by m_x = −5 log_100 (F_x / F_x,0), more commonly expressed in terms of common (base-10) logarithms as m_x = −2.5 log_10 (F_x / F_x,0), where F_x is the observed flux density in the passband x and F_x,0 is the reference flux (zero point) for that band.
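The defining relation can be sketched in a few lines; the function name is illustrative, and both fluxes are assumed to be in the same (arbitrary) units:

```python
import math

def apparent_magnitude(flux, flux_zero):
    """m_x = -2.5 log10(F_x / F_x0): magnitude relative to the band's zero point."""
    return -2.5 * math.log10(flux / flux_zero)

# A source 100 times fainter than the zero-point flux is 5 magnitudes fainter:
assert abs(apparent_magnitude(0.01, 1.0) - 5.0) < 1e-9
# A source at exactly the zero-point flux has magnitude 0:
assert abs(apparent_magnitude(1.0, 1.0)) < 1e-12
```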

Order of magnitude

An order of magnitude is an approximate measure of the number of digits that a number has in the commonly used base-ten number system. It is equal to the whole-number floor of the base-10 logarithm. For example, the order of magnitude of 1500 is 3, because 1500 = 1.5 × 10^3. Differences in order of magnitude can be measured on a base-10 logarithmic scale in "decades". Examples of numbers of different magnitudes can be found at Orders of magnitude; under this definition, the order of magnitude of a number is the smallest power of 10 used to represent that number. To work out the order of magnitude of a number N, the number is first expressed in the following form: N = a × 10^b, where √10/10 ≤ a < √10. Then b represents the order of magnitude of the number; the order of magnitude can be any integer. The table below enumerates the order of magnitude of some numbers in light of this definition. The geometric mean of 10^b and 10^(b+1) is √10 × 10^b, meaning that a value of exactly 10^b represents a geometric "halfway point" within the range of possible values of a.
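The floor-of-logarithm definition is straightforward to compute; this is a minimal sketch for positive numbers, using a function name of our own:

```python
import math

def order_of_magnitude(n):
    """Order of magnitude of a positive number: floor of its base-10 logarithm."""
    return math.floor(math.log10(n))

assert order_of_magnitude(1500) == 3        # 1500 = 1.5 x 10^3
assert order_of_magnitude(4_000_000) == 6   # log10 is about 6.602
assert order_of_magnitude(0.01) == -2       # the order can be any integer
```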

Some use a simpler definition where 0.5 < a ≤ 5, because the arithmetic mean of 10^b and 10^(b+c) approaches 5 × 10^(b+c−1) for increasing c. This definition has the effect of lowering the values of b slightly. Yet others restrict a to values where 1 ≤ a < 10, making the order of magnitude of a number exactly equal to its exponent part in scientific notation. Orders of magnitude are generally used to make approximate comparisons. If numbers differ by one order of magnitude, one is about ten times larger than the other. If values differ by two orders of magnitude, they differ by a factor of about 100. Two numbers of the same order of magnitude have roughly the same scale: the larger value is less than ten times the smaller value. The order of magnitude of a number is, intuitively speaking, the number of powers of 10 contained in the number. More precisely, the order of magnitude of a number can be defined in terms of the common logarithm as the integer part of the logarithm, obtained by truncation. For example, the number 4000000 has a logarithm of 6.602; its order of magnitude is 6.

When truncating, a number of this order of magnitude is between 10^6 and 10^7. In a similar example, with the phrase "He had a seven-figure income", the order of magnitude is the number of figures minus one, so it is easily determined without a calculator to be 6. An order of magnitude is an approximate position on a logarithmic scale. An order-of-magnitude estimate of a variable, whose precise value is unknown, is an estimate rounded to the nearest power of ten. For example, an order-of-magnitude estimate for a variable between about 3 billion and 30 billion is 10 billion. To round a number to its nearest order of magnitude, one rounds its logarithm to the nearest integer; thus 4000000, which has a logarithm of 6.602, has 7 as its nearest order of magnitude, because "nearest" implies rounding rather than truncation. For a number written in scientific notation, this logarithmic rounding scale requires rounding up to the next power of ten when the multiplier is greater than the square root of ten (about 3.162). For example, the nearest order of magnitude for 1.7×10^8 is 8, whereas the nearest order of magnitude for 3.7×10^8 is 9.
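Rounding the logarithm, rather than truncating it, reproduces the square-root-of-ten threshold described above; a minimal sketch:

```python
import math

def nearest_order_of_magnitude(n):
    """Round log10(n) to the nearest integer; multipliers above sqrt(10) round up."""
    return round(math.log10(n))

assert nearest_order_of_magnitude(4_000_000) == 7   # log10 is about 6.602
assert nearest_order_of_magnitude(1.7e8) == 8       # 1.7 < sqrt(10), rounds down
assert nearest_order_of_magnitude(3.7e8) == 9       # 3.7 > sqrt(10), rounds up
```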

An order-of-magnitude estimate is sometimes called a zeroth-order approximation. An order-of-magnitude difference between two values is a factor of 10. For example, the mass of the planet Saturn is 95 times that of Earth, so Saturn is two orders of magnitude more massive than Earth. Order-of-magnitude differences are called decades when measured on a logarithmic scale. Other orders of magnitude may be calculated using bases other than 10. The ancient Greeks ranked the nighttime brightness of celestial bodies by 6 levels in which each level was the fifth root of one hundred times as bright as the nearest weaker level of brightness; thus the brightest level, being 5 orders of magnitude brighter than the weakest, is (⁵√100)^5, or a factor of 100, times brighter. Some of the different decimal numeral systems of the world use a larger base to better envision the size of a number, and have created names for the powers of this larger base. The table shows what number the order of magnitude aims at for base 10 and for base 1000000. It can be seen that the order of magnitude is included in the number name in this example, because bi- means 2 and tri- means 3, and the suffix -illion tells that the base is 1000000.

But the number names billion, trillion themselves (here with other meaning than in the first cha

Magnitude of eclipse

The magnitude of eclipse is the fraction of the angular diameter of a celestial body being eclipsed. This applies to all celestial eclipses. The magnitude of a partial or annular solar eclipse is always between 0.0 and 1.0, while the magnitude of a total solar eclipse is always greater than or equal to 1.0. Eclipses of similar magnitude recur in a cycle of 159 years minus about 17 days (a saros series number offset of 3), with the central eclipse alternating from southern to northern or vice versa. This measure should not be confused with the covered fraction of the apparent area of the eclipsed body (the obscuration); the magnitude of an eclipse is a ratio of diameters, not of areas. Neither should it be confused with the astronomical magnitude scale of apparent brightness. The apparent sizes of the Moon and Sun are both about 0.5°, or 30′, but both vary because the Earth–Moon and Earth–Sun distances vary. In an annular solar eclipse, the magnitude of the eclipse is the ratio between the apparent angular diameter of the Moon and that of the Sun during the maximum eclipse, yielding a ratio less than 1.0.
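The diameter-ratio definition can be illustrated with the standard geometric formula for eclipse magnitude, in which the partial-phase magnitude is (r_sun + r_moon − separation) / d_sun. This is a simplified sketch with illustrative names; all angles are assumed to be in the same units:

```python
def eclipse_magnitude(d_sun, d_moon, separation):
    """Eclipse magnitude from apparent angular diameters and center separation.

    For a partial eclipse this is the fraction of the Sun's diameter covered;
    for a central eclipse it is capped at the Moon/Sun diameter ratio.
    Simplified geometric sketch, not a full ephemeris computation.
    """
    m = (d_sun / 2 + d_moon / 2 - separation) / d_sun
    return max(0.0, min(m, d_moon / d_sun))

# Central annular eclipse: magnitude equals the diameter ratio, below 1.0:
assert abs(eclipse_magnitude(32.0, 30.0, 0.0) - 30.0 / 32.0) < 1e-9
# Limbs just touching (separation = sum of radii): magnitude 0:
assert eclipse_magnitude(32.0, 30.0, 31.0) == 0.0
```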

As the magnitude of the eclipse is then less than one, the disk of the Moon cannot completely cover the Sun. When the centers of the two disks are sufficiently aligned, a ring of sunlight remains visible around the Moon; this is called an annular eclipse, from the Latin annulus, meaning "ring". For a total solar eclipse to happen, the ratio of the apparent diameters of the Moon and of the Sun must be 1.0 or more, and the three celestial bodies must be aligned centrally enough. When that is the case, the Moon's disk covers the Sun's disk in the sky completely; the path of totality is a narrow strip, at most a few hundred kilometers across. In a partial solar eclipse, the magnitude of the eclipse is the fraction of the Sun's diameter occulted by the Moon at the time of maximum eclipse. As seen from one location, the momentary eclipse magnitude varies, being 0.0 at the start of the eclipse, rising to some maximum value, and decreasing to 0.0 at the end of the eclipse. When one says "the magnitude of the eclipse" without further specification, one means the maximum value of the magnitude of the eclipse.

The eclipse magnitude varies not only between eclipses, but also by viewing location. An eclipse may be annular at one location and total at another; these mixed-type eclipses are called hybrid. The situation for a lunar eclipse is quite similar, with a few differences. First, the eclipsed body is the Moon and the eclipsing "body" is the Earth's shadow. Second, since the Earth's shadow at the Moon's distance is always larger than the Moon, a lunar eclipse can never be annular but is always partial or total. Third, the Earth's shadow has two components: the dark umbra and the much brighter penumbra. Accordingly, a lunar eclipse has two geometric magnitudes: the umbral magnitude and the penumbral magnitude. If the three bodies are not aligned closely enough, the Moon does not reach into the Earth's umbra but may still pass through the Earth's penumbra; such an eclipse is called a penumbral eclipse.

Magnitude (astronomy)

In astronomy, magnitude is a unitless measure of the brightness of an object, usually in a defined passband in the visible or infrared spectrum but sometimes across all wavelengths. An imprecise but systematic determination of the magnitude of objects was introduced in ancient times by Hipparchus. The scale is logarithmic and defined such that each step of one magnitude changes the brightness by a factor of the fifth root of 100, or about 2.512. For example, a magnitude 1 star is exactly 100 times brighter than a magnitude 6 star. The brighter an object appears, the lower the value of its magnitude, with the brightest objects reaching negative values. Astronomers use two different definitions of magnitude: apparent magnitude and absolute magnitude. The apparent magnitude is the brightness of an object as seen from Earth; it depends on the object's intrinsic luminosity, its distance, and any extinction reducing its brightness. The absolute magnitude describes the intrinsic luminosity emitted by an object and is defined to be equal to the apparent magnitude that the object would have if it were placed at a certain distance from Earth, 10 parsecs for stars.

A more complex definition of absolute magnitude is used for planets and small Solar System bodies, based on their brightness at one astronomical unit from both the observer and the Sun. The Sun has an apparent magnitude of −27, and Sirius, the brightest visible star in the night sky, −1.46. Apparent magnitudes can also be assigned to artificial objects in Earth orbit, with the International Space Station sometimes reaching a magnitude of −6. The magnitude system dates back 2000 years to the Greek astronomer Hipparchus, who classified stars by their apparent brightness, which observers perceived as size. To the unaided eye, a more prominent star such as Sirius or Arcturus appears larger than a less prominent star such as Mizar, which in turn appears larger than a faint star such as Alcor. In 1736, the mathematician John Keill described the ancient naked-eye magnitude system in this way: The fixed Stars appear to be of different Bignesses, not because they are so, but because they are not all equally distant from us; those that are nearest will excel in Bigness.
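The relation between apparent and absolute magnitude for stars can be sketched as follows. This ignores extinction; Sirius's distance of about 2.64 parsecs is a standard value, and the function name is ours:

```python
import math

def absolute_magnitude(apparent_mag, distance_pc):
    """M = m - 5 log10(d / 10 pc): magnitude the object would have at 10 parsecs.

    Ignores interstellar extinction.
    """
    return apparent_mag - 5 * math.log10(distance_pc / 10)

# Sirius: apparent magnitude -1.46 at about 2.64 pc gives an absolute
# magnitude of roughly +1.4 -- intrinsically bright, but not extreme:
assert abs(absolute_magnitude(-1.46, 2.64) - 1.43) < 0.05
# An object already at 10 pc has equal apparent and absolute magnitudes:
assert abs(absolute_magnitude(5.0, 10.0) - 5.0) < 1e-12
```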

Hence arise the Distribution of Stars, according to their Order and Dignity, into Classes. For all the other Stars, which are only seen by the Help of a Telescope, which are called Telescopical, are not reckoned among these six Orders. Altho' the Distinction of Stars into six Degrees of Magnitude is received by Astronomers, among those Stars which are reckoned of the brightest Class, there appears a Variety of Magnitude. For Example: The little Dog was by Tycho placed among the Stars of the second Magnitude, which Ptolemy reckoned among the Stars of the first Class: And therefore it is not either of the first or second Order, but ought to be ranked in a Place between both. Note that the brighter the star, the smaller the magnitude: Bright "first magnitude" stars are "1st-class" stars, while the faintest stars visible to the naked eye are "sixth magnitude" or "6th-class". The system was a simple delineation of stellar brightness into six distinct groups but made no allowance for the variations in brightness within a group.

Tycho Brahe attempted to directly measure the "bigness" of the stars in terms of angular size, which in theory meant that a star's magnitude could be determined by more than just the subjective judgment described in the above quote. He concluded that first-magnitude stars measured 2 arc minutes in apparent diameter, with second through sixth magnitude stars measuring 1 1⁄2′, 1 1⁄12′, 3⁄4′, 1⁄2′, and 1⁄3′, respectively. The development of the telescope showed that these large sizes were illusory; stars appeared much smaller through the telescope. However, early telescopes produced a spurious disk-like image of a star, larger for brighter stars and smaller for fainter ones. Astronomers from Galileo to Jacques Cassini mistook these spurious disks for the physical bodies of stars, and thus into the eighteenth century continued to think of magnitude in terms of the physical size of a star. Johannes Hevelius produced a very precise table of star sizes measured telescopically, but now the measured diameters ranged from just over six seconds of arc for first magnitude down to just under 2 seconds for sixth magnitude.

By the time of William Herschel, astronomers recognized that the telescopic disks of stars were spurious, a function of the telescope as well as the brightness of the stars, but they still spoke in terms of a star's size more than its brightness. Well into the nineteenth century the magnitude system

Richter magnitude scale

The so-called Richter magnitude scale – more accurately, Richter's magnitude scale, or just Richter magnitude – for measuring the strength of earthquakes refers to the original "magnitude scale" developed by Charles F. Richter and presented in his landmark 1935 paper, later revised and renamed the Local magnitude scale, denoted as "ML" or "M_L". Because of various shortcomings of the ML scale, most seismological authorities now use other scales, such as the moment magnitude scale, to report earthquake magnitudes, but much of the news media still refers to these as "Richter" magnitudes. All magnitude scales retain the logarithmic character of the original and are scaled to have roughly comparable numeric values. Prior to the development of the magnitude scale, the only measure of an earthquake's strength or "size" was a subjective assessment of the intensity of shaking observed near the epicenter of the earthquake, categorized by various seismic intensity scales such as the Rossi–Forel scale. In 1883 John Milne surmised that the shaking of large earthquakes might generate waves detectable around the globe, and in 1899 Ernst von Rebeur-Paschwitz observed in Germany seismic waves attributable to an earthquake in Tokyo.

In the 1920s Harry O. Wood and John A. Anderson developed the Wood–Anderson seismograph, one of the first practical instruments for recording seismic waves. Wood then built, under the auspices of the California Institute of Technology and the Carnegie Institute, a network of seismographs stretching across Southern California, and he recruited the young and unknown Charles Richter to measure the seismograms and locate the earthquakes generating the seismic waves. In 1931 Kiyoo Wadati showed how he had measured, for several strong earthquakes in Japan, the amplitude of the shaking observed at various distances from the epicenter. He plotted the logarithm of the amplitude against the distance and found a series of curves that showed a rough correlation with the estimated magnitudes of the earthquakes. Richter resolved some difficulties with this method using data collected by his colleague Beno Gutenberg, and he produced similar curves, confirming that they could be used to compare the relative magnitudes of different earthquakes.

To produce a practical method of assigning an absolute measure of magnitude required additional developments. First, to span the wide range of possible values, Richter adopted Gutenberg's suggestion of a logarithmic scale, where each step represents a tenfold increase of measured amplitude, similar to the magnitude scale used by astronomers for star brightness. Second, he wanted a magnitude of zero to be around the limit of human perceptibility. Third, he specified the Wood–Anderson seismograph as the standard instrument for producing seismograms. Magnitude was then defined as "the logarithm of the maximum trace amplitude, expressed in microns", measured at a distance of 100 km. The scale was calibrated by defining a magnitude 0 shock as one that produces a maximum amplitude of 1 micron on a seismogram recorded by a Wood–Anderson torsion seismograph at that distance. Finally, Richter calculated a table of distance corrections, given that for distances less than 200 kilometers the attenuation is strongly affected by the structure and properties of the regional geology.
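At the 100 km reference distance, Richter's definition reduces to a one-line formula. The sketch below uses our own function name and omits the distance-correction table, so it applies only at the reference distance:

```python
import math

def local_magnitude_at_100km(amplitude_mm):
    """Richter's ML at the 100 km reference distance.

    ML = log10(A) - log10(A0), where A0 = 0.001 mm (1 micron) is the
    reference amplitude at 100 km, so a 1-micron trace defines magnitude 0.
    Sketch only: real use needs Richter's distance corrections elsewhere.
    """
    A0_MM = 0.001  # 1 micron, the magnitude-0 calibration amplitude
    return math.log10(amplitude_mm) - math.log10(A0_MM)

assert abs(local_magnitude_at_100km(0.001)) < 1e-9        # 1 micron -> ML 0
assert abs(local_magnitude_at_100km(1.0) - 3.0) < 1e-9    # 1 mm -> ML 3
```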

When Richter presented the resulting scale in 1935, he called it simply a "magnitude" scale. The name "Richter magnitude" appears to have originated when Perry Byerly told the press that the scale was Richter's and "should be referred to as such." In 1956 Gutenberg and Richter, while still referring to the "magnitude scale", labelled it "local magnitude", with the symbol ML, to distinguish it from two other scales they had developed, the surface wave magnitude and body wave magnitude scales. The Richter scale was defined in 1935 for particular circumstances and instruments; the particular instrument used would become saturated by strong earthquakes and unable to record high values. The scale was superseded in the 1970s by the moment magnitude scale. Although values measured for earthquakes now are Mw, they are frequently reported by the press as Richter values, even for earthquakes of magnitude over 8, when the Richter scale becomes meaningless. Anything above 5 is classified as a risk by the USGS. The Richter and MMS scales both measure the energy released by an earthquake.

The energy and effects are not strongly correlated, however. Several scales have been described as the "Richter scale", especially the local magnitude ML and the surface wave Ms scale. In addition, the body wave magnitude, mb, and the moment magnitude, Mw, abbreviated MMS, have been used for decades. A couple of new techniques to measure magnitude are in the development stage by seismologists. All magnitude scales have been designed to give numerically similar results; this goal has been achieved well for M

Seismic magnitude scales

Seismic magnitude scales are used to describe the overall strength or "size" of an earthquake. These are distinguished from seismic intensity scales, which categorize the intensity or severity of ground shaking caused by an earthquake at a given location. Magnitudes are usually determined from measurements of an earthquake's seismic waves as recorded on a seismogram. Magnitude scales vary in what aspects of the seismic waves they measure and in how they measure them. Different magnitude scales are necessary because of differences in earthquakes, in the information available, and in the purposes for which the magnitudes are used. The Earth's crust is stressed by tectonic forces. When this stress becomes great enough to rupture the crust, or to overcome the friction that prevents one block of crust from slipping past another, energy is released, some of it in the form of various kinds of seismic waves that cause ground-shaking, or quaking. Magnitude is an estimate of the relative "size" or strength of an earthquake, and thus of its potential for causing ground-shaking; it is "approximately related to the released seismic energy."

Intensity refers to the strength or force of shaking at a given location and can be related to the peak ground velocity. With an isoseismal map of the observed intensities, an earthquake's magnitude can be estimated from both the maximum intensity observed and the extent of the area where the earthquake was felt. The intensity of local ground-shaking depends on several factors besides the magnitude of the earthquake, one of the most important being soil conditions. For instance, thick layers of soft soil can amplify seismic waves even at a considerable distance from the source, while sedimentary basins will often resonate, increasing the duration of shaking. This is why, in the 1989 Loma Prieta earthquake, the Marina district of San Francisco was one of the most damaged areas, though it was nearly 100 km from the epicenter. Geological structures were also significant, such as where seismic waves passing under the south end of San Francisco Bay reflected off the base of the Earth's crust towards San Francisco and Oakland.

A similar effect channeled seismic waves between the other major faults in the area. An earthquake radiates energy in the form of different kinds of seismic waves, whose characteristics reflect the nature of both the rupture and the earth's crust the waves travel through. Determination of an earthquake's magnitude generally involves identifying specific kinds of these waves on a seismogram and then measuring one or more characteristics of a wave, such as its timing, amplitude, frequency, or duration. Additional adjustments are made for distance, kind of crust, and the characteristics of the seismograph that recorded the seismogram. The various magnitude scales represent different ways of deriving magnitude from such information as is available. All magnitude scales retain the logarithmic scale as devised by Charles Richter and are adjusted so the mid-range approximately correlates with the original "Richter" scale. Most magnitude scales are based on measurements of only part of an earthquake's seismic wave-train and are therefore incomplete.

This results in systematic underestimation of magnitude in certain cases, a condition called saturation. Since 2005 the International Association of Seismology and Physics of the Earth's Interior has standardized the measurement procedures and equations for the principal magnitude scales: ML, Ms, mb, mB and mbLg. The first scale for measuring earthquake magnitudes, developed in 1935 by Charles F. Richter and popularly known as the "Richter" scale, is the Local magnitude scale, labeled ML. Richter established two features now common to all magnitude scales. First, the scale is logarithmic, so that each unit represents a ten-fold increase in the amplitude of the seismic waves; because the energy of a wave scales as the 1.5 power of its amplitude, each unit of magnitude represents a nearly 32-fold (10^1.5) increase in the energy of an earthquake. Second, Richter arbitrarily defined the zero point of the scale to be where an earthquake at a distance of 100 km makes a maximum horizontal displacement of 0.001 millimeters (1 micron) on a seismogram recorded with a Wood–Anderson torsion seismograph.
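The amplitude and energy scalings can be checked numerically; a minimal sketch:

```python
def amplitude_ratio(delta_magnitude):
    """Wave-amplitude ratio: a factor of 10 per unit of magnitude."""
    return 10 ** delta_magnitude

def energy_ratio(delta_magnitude):
    """Radiated-energy ratio: a factor of 10^1.5 (about 32) per unit of magnitude."""
    return 10 ** (1.5 * delta_magnitude)

assert amplitude_ratio(1) == 10            # one magnitude unit: 10x amplitude
assert abs(energy_ratio(1) - 31.62) < 0.01 # ...but nearly 32x energy
assert abs(energy_ratio(2) - 1000.0) < 1e-6  # two units: 1000x energy
```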

Subsequent magnitude scales are calibrated to be approximately in accord with the original "Richter" scale around magnitude 6. All "Local" magnitudes are based on the maximum amplitude of the ground shaking, without distinguishing the different seismic waves. They underestimate the strength of distant earthquakes because of attenuation of the S-waves, of deep earthquakes because the surface waves are smaller, and of strong earthquakes because they do not take into account the duration of shaking. The original "Richter" scale, developed in the geological context of Southern California and Nevada, was later found to be inaccurate for earthquakes in the central and eastern parts of the continent because of differences in the continental crust. All these problems prompted the development of other scales. Most seismological authorities, such as the United States Geological Survey, report earthquake magnitudes above 4.0 as moment magnitude, which the press often describes as "Richter magnitude". Richter's original "local" scale has been adapted for other localities.

These may be denoted with a lowercase "l", as Ml. Whether the values are comparable depends on whether the local conditions have been adequately determined and the formula suitably adjusted. In Japan, for shallow earthquakes within 600 km, the Japanese Meteorological Agency