Astronomical spectroscopy is the study of astronomy using the techniques of spectroscopy to measure the spectrum of electromagnetic radiation, including visible light and radio, which radiates from stars and other celestial objects. A stellar spectrum can reveal many properties of stars, such as their chemical composition, density, distance and relative motion (using Doppler shift measurements). Spectroscopy is used to study the physical properties of many other types of celestial objects such as planets, nebulae and active galactic nuclei. Astronomical spectroscopy is used to measure three major bands of radiation: the visible spectrum, radio, and X-ray. While all spectroscopy looks at specific areas of the spectrum, different methods are required to acquire the signal depending on the frequency. Ozone and molecular oxygen absorb light with wavelengths under 300 nm, meaning that X-ray and ultraviolet spectroscopy require the use of a satellite telescope or rocket-mounted detectors. Radio signals have much longer wavelengths than optical signals, and require the use of antennas or radio dishes.
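The Doppler-shift measurement mentioned above reduces to simple arithmetic for speeds well below that of light. A minimal sketch (the measured wavelength here is an illustrative value, not a real observation):

```python
# Radial velocity from the Doppler shift of a spectral line.
# Non-relativistic approximation: v = c * (Δλ / λ_rest).

C_KM_S = 299_792.458  # speed of light in km/s

def radial_velocity(lambda_observed, lambda_rest):
    """Return the radial velocity in km/s.
    A positive result means the source is receding (redshifted)."""
    return C_KM_S * (lambda_observed - lambda_rest) / lambda_rest

# H-alpha's rest wavelength is 656.281 nm; suppose (hypothetically)
# a star's H-alpha line is measured at 656.5 nm.
v = radial_velocity(656.5, 656.281)
print(f"{v:.1f} km/s")  # ~100 km/s recession
```

The same formula applied in reverse gives the expected line position for a source of known velocity.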
Infrared light is absorbed by atmospheric water and carbon dioxide, so while the equipment is similar to that used in optical spectroscopy, satellites are required to record much of the infrared spectrum. Physicists have been looking at the solar spectrum since Isaac Newton first used a simple prism to observe the refractive properties of light. In the early 1800s Joseph von Fraunhofer used his skills as a glass maker to create very pure prisms, which allowed him to observe 574 dark lines in a continuous spectrum. Soon after this, he combined telescope and prism to observe the spectrum of Venus, the Moon and various stars such as Betelgeuse. The resolution of a prism, however, is limited by its size; this issue was resolved in the early 1900s with the development of high-quality reflection gratings by J. S. Plaskett at the Dominion Observatory in Ottawa, Canada. Light striking a mirror will reflect at the same angle; however, a small portion of the light will be refracted at a different angle. By creating a "blazed" grating which utilizes a large number of parallel mirrors, this small portion of light can be focused and visualized.
These new spectroscopes were more detailed than a prism, required less light, and could be focused on a specific region of the spectrum by tilting the grating. The limitation of a blazed grating is the width of the mirrors, which can only be ground a finite amount before focus is lost. To overcome this limitation, holographic gratings were developed. Volume phase holographic gratings use a thin film of dichromated gelatin on a glass surface, which is subsequently exposed to a wave pattern created by an interferometer. This wave pattern sets up a reflection pattern similar to that of the blazed gratings, but utilizing Bragg diffraction, a process in which the angle of reflection depends on the arrangement of the atoms in the gelatin. Holographic gratings can have up to 6000 lines/mm and can be up to twice as efficient in collecting light as blazed gratings. Because they are sealed between two sheets of glass, holographic gratings are very durable, lasting decades before needing replacement. Light dispersed by the grating or prism in a spectrograph can be recorded by a detector.
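The dispersion of any such grating is governed by the grating equation, m·λ = d·(sin i + sin θ). As a rough illustration (the line density, wavelength, and incidence angle below are example values, not taken from any particular instrument), one can compute the diffraction angle and the theoretical resolving power R = m·N:

```python
import math

def diffraction_angle(lines_per_mm, wavelength_nm, incidence_deg, order=1):
    """Solve the grating equation m*λ = d*(sin i + sin θ) for θ, in degrees.
    d is the groove spacing implied by the line density."""
    d_nm = 1e6 / lines_per_mm  # groove spacing in nm (1 mm = 1e6 nm)
    s = order * wavelength_nm / d_nm - math.sin(math.radians(incidence_deg))
    if abs(s) > 1:
        raise ValueError("this order is not diffracted at this geometry")
    return math.degrees(math.asin(s))

def resolving_power(order, illuminated_grooves):
    """Theoretical resolving power R = λ/Δλ = m * N."""
    return order * illuminated_grooves

# Example: a 600 lines/mm grating, 500 nm light at normal incidence.
print(diffraction_angle(600, 500, 0.0))   # first-order angle in degrees
print(resolving_power(1, 60_000))         # 60,000 grooves illuminated
```

Higher line densities or higher orders spread the spectrum over larger angles, which is why tilting the grating selects the region of the spectrum reaching the detector.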
Photographic plates were used to record spectra until electronic detectors were developed; today most optical spectrographs employ charge-coupled devices (CCDs). The wavelength scale of a spectrum can be calibrated by observing the spectrum of emission lines of known wavelength from a gas-discharge lamp; the flux scale can be calibrated as a function of wavelength by comparison with an observation of a standard star, with corrections for atmospheric absorption of light. Radio astronomy was founded with the work of Karl Jansky in the early 1930s. While working for Bell Labs, he built a radio antenna to look at potential sources of interference for transatlantic radio transmissions. One of the sources of noise discovered came not from Earth, but from the center of the Milky Way, in the constellation Sagittarius. In 1942, J. S. Hey detected the Sun's radio emission using military radar receivers. Radio spectroscopy started with the discovery of the 21-centimeter H I line in 1951. Radio interferometry was pioneered in 1946, when Joseph Lade Pawsey, Ruby Payne-Scott and Lindsay McCready used a single antenna atop a sea cliff to observe 200 MHz solar radiation.
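The arc-lamp calibration described above amounts to fitting a smooth mapping from detector pixel position to wavelength. A minimal sketch, with made-up line wavelengths and pixel positions standing in for a real lamp exposure:

```python
import numpy as np

# Known emission-line wavelengths from a discharge lamp (illustrative
# values loosely based on mercury lines) and the pixel positions at
# which those lines were hypothetically detected on the CCD.
known_wavelengths_nm = np.array([435.8, 546.1, 577.0, 579.1])
measured_pixels      = np.array([102.4, 511.9, 626.8, 634.6])

# Fit a low-order polynomial mapping pixel -> wavelength.
coeffs = np.polyfit(measured_pixels, known_wavelengths_nm, deg=2)
wavelength_solution = np.poly1d(coeffs)

# Any pixel on the detector can now be assigned a wavelength.
print(wavelength_solution(300.0))
```

Real pipelines fit many more lines and check residuals, but the principle is the same: the lamp provides a ladder of known wavelengths against which the dispersion of the spectrograph is measured.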
Two incident beams, one directly from the Sun and the other reflected from the sea surface, generated the necessary interference. The first multi-receiver interferometer was built in the same year by Martin Ryle and Vonberg. In 1960, Ryle and Antony Hewish published the technique of aperture synthesis to analyze interferometer data. The aperture synthesis process, which involves autocorrelating and discrete Fourier transforming the incoming signal, recovers both the spatial and frequency variation in flux; the result is a 3D image whose third axis is frequency. For this work, Ryle and Hewish were jointly awarded the 1974 Nobel Prize in Physics. Where Newton had used a prism to split white light into a spectrum of color, Fraunhofer's high-quality prisms allowed scientists to see dark lines of an unknown origin. In the 1850s, Gustav Kirchhoff and Robert Bunsen described the phenomena behind these dark lines.
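The autocorrelation step rests on the Wiener–Khinchin theorem: the power spectrum of a signal is the Fourier transform of its autocorrelation. A toy one-dimensional demonstration of that equivalence (not a full aperture-synthesis pipeline; the signal is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy signal: a sinusoid at frequency 0.1 (in cycles per sample),
# buried in Gaussian noise.
n = 1024
t = np.arange(n)
signal = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(n)

# Route 1: power spectrum directly from the Fourier transform.
spectrum = np.fft.rfft(signal)
power_direct = np.abs(spectrum) ** 2

# Route 2: via the (circular) autocorrelation. The inverse FFT of the
# power spectrum IS the autocorrelation (Wiener-Khinchin), so
# transforming it forward again must reproduce the power spectrum.
autocorr = np.fft.irfft(power_direct)
power_from_autocorr = np.fft.rfft(autocorr).real

# Both routes agree, and the peak sits at the sinusoid's frequency.
freqs = np.fft.rfftfreq(n)
print(freqs[np.argmax(power_from_autocorr)])  # ≈ 0.1
```

Aperture synthesis applies the same Fourier relationship in two spatial dimensions: correlations between antenna pairs sample the Fourier transform of the sky brightness, which is then inverted to form an image.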
X-ray detectors are devices used to measure the flux, spatial distribution, and/or other properties of X-rays. Detectors can be divided into two major categories: imaging detectors and dose measurement devices. To obtain an image with any type of image detector, the part of the patient to be X-rayed is placed between the X-ray source and the image receptor to produce a shadow of the internal structure of that particular part of the body. X-rays are blocked by dense tissues such as bone, but pass more easily through soft tissues. Areas where the X-rays strike darken when developed, causing bones to appear lighter than the surrounding soft tissue. Contrast compounds containing barium or iodine, which are radiopaque, can be ingested in the gastrointestinal tract or injected into the arteries or veins to highlight these vessels. The contrast compounds contain high-atomic-number elements that block the X-rays, so the otherwise hollow organ or vessel can be seen more easily. In the pursuit of nontoxic contrast materials, many types of high-atomic-number elements were evaluated.
Some of the elements chosen proved harmful; thorium, for example, was once used as a contrast medium but turned out to be toxic, causing a high incidence of cancer decades after use. Modern contrast material has improved and, while there is no way to determine who may have a sensitivity to the contrast, the incidence of serious allergic reactions is low. Typical X-ray film contains silver halide crystal "grains", primarily silver bromide. Grain size and composition can be adjusted to affect the film's properties, for example to improve resolution in the developed image. When the film is exposed to radiation, the halide is ionised and free electrons are trapped in crystal defects. Silver ions are attracted to these defects and reduced, creating clusters of transparent silver atoms. In the developing process these are converted to opaque silver atoms which form the viewable image, darkest where the most radiation was detected. Further developing steps stabilise the sensitised grains and remove unsensitised grains to prevent further exposure.
The first radiographs were made by the action of X-rays on sensitized glass photographic plates. X-ray film soon replaced the glass plates, and film has been used for decades to acquire medical and industrial images. Digital computers eventually gained the ability to store and display enough data to make digital imaging possible. Since the 1990s, computerized radiography and digital radiography have been replacing photographic film in medical and dental applications, though film technology remains in widespread use in industrial radiography. The metal silver is a non-renewable resource, although silver can be reclaimed from spent X-ray film. Where X-ray films required wet processing facilities, newer digital technologies do not, and digital archiving of images saves physical storage space. Because photographic plates are sensitive to X-rays, they provide a means of recording the image, but they require a great deal of X-ray exposure. The addition of a fluorescent intensifying screen in close contact with the film allows a lower dose to the patient, because the screen improves the efficiency of X-ray detection, producing more activation of the film from the same amount of X-rays, or the same activation from a smaller amount of X-rays.
Phosphor plate radiography is a method of recording X-rays using photostimulated luminescence, pioneered by Fuji in the 1980s. A photostimulable phosphor (PSP) plate is used in place of the photographic plate. After the plate is X-rayed, excited electrons in the phosphor material remain 'trapped' in 'colour centres' in the crystal lattice until stimulated by a laser beam passed over the plate surface. The light given off during laser stimulation is collected by a photomultiplier tube, and the resulting signal is converted into a digital image by computer technology. The PSP plate can be reused, and existing X-ray equipment requires no modification to use them; the technique is also known as computed radiography. X-rays are used in "real-time" procedures such as angiography or contrast studies of the hollow organs using fluoroscopy. Angioplasty and other medical interventions of the arterial system rely on X-ray-sensitive contrast to identify treatable lesions. Solid-state detectors use semiconductors to detect X-rays.
Direct digital detectors are so called because they directly convert X-ray photons to electrical charge and thus a digital image. Indirect systems have intervening steps, for example first converting X-ray photons to visible light and then to an electronic signal. Both systems use thin-film transistors to read out and convert the electronic signal to a digital image. Unlike film or CR, no manual scanning or development step is required to obtain a digital image, so in this sense both systems are "direct". Both types of system have higher quantum efficiency than CR. Since the 1970s, semiconductor detectors of silicon or germanium doped with lithium, Si(Li) or Ge(Li), have been developed.
A variable star is a star whose brightness as seen from Earth fluctuates. This variation may be caused by a change in emitted light or by something partly blocking the light, so variable stars are classified as either intrinsic variables, whose luminosity actually changes, or extrinsic variables, whose apparent changes in brightness are due to changes in the amount of their light that can reach Earth. Many, possibly most, stars have at least some variation in luminosity: the energy output of our Sun, for example, varies by about 0.1% over an 11-year solar cycle. An ancient Egyptian calendar of lucky and unlucky days composed some 3,200 years ago may be the oldest preserved historical document of the discovery of a variable star, the eclipsing binary Algol. In the modern era, the first variable star was identified in 1638, when Johannes Holwarda noticed that Omicron Ceti pulsated in a cycle taking 11 months; this discovery, combined with supernovae observed in 1572 and 1604, proved that the starry sky was not eternally invariable as Aristotle and other ancient philosophers had taught.
In this way, the discovery of variable stars contributed to the astronomical revolution of the sixteenth and early seventeenth centuries. The second variable star to be described was the eclipsing variable Algol, by Geminiano Montanari in 1669. Chi Cygni was identified in 1686 by G. Kirch, and R Hydrae in 1704 by G. D. Maraldi. By 1786 ten variable stars were known; John Goodricke himself discovered the variability of Beta Lyrae. Since 1850 the number of known variable stars has increased rapidly, especially after 1890 when it became possible to identify variable stars by means of photography. The latest edition of the General Catalogue of Variable Stars lists more than 46,000 variable stars in the Milky Way, as well as 10,000 in other galaxies and over 10,000 'suspected' variables. The most common kinds of variability involve changes in brightness, but other types of variability occur, in particular changes in the spectrum. By combining light curve data with observed spectral changes, astronomers are able to explain why a particular star is variable.
Variable stars are analysed using photometry, spectrophotometry and spectroscopy. Measurements of their changes in brightness can be plotted to produce light curves. For regular variables, the period of variation and its amplitude can be very well established. Peak brightnesses in the light curve are known as maxima. Amateur astronomers can do useful scientific study of variable stars by visually comparing the star with other stars within the same telescopic field of view whose magnitudes are known and constant. By estimating the variable's magnitude and noting the time of observation, a visual light curve can be constructed. The American Association of Variable Star Observers collects such observations from participants around the world and shares the data with the scientific community. From the light curve the following data are derived: are the brightness variations periodic, irregular, or unique? What is the period of the brightness fluctuations? What is the shape of the light curve? From the spectrum the following data are derived: what kind of star is it, i.e. what is its temperature and its luminosity class? Is it a single star or a binary? Does the spectrum change with time?
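For a regular variable, the period can be estimated from such a light curve by folding the observations at trial periods and choosing the one that gives the smoothest folded curve. A sketch using synthetic data and the simple "string-length" statistic (the period, amplitude, and noise level below are invented for illustration):

```python
import numpy as np

def fold_light_curve(times, mags, period):
    """Fold observation times to phases in [0, 1) for an assumed period,
    returning phase-sorted (phase, magnitude) arrays."""
    phases = (times % period) / period
    order = np.argsort(phases)
    return phases[order], mags[order]

def string_length(times, mags, period):
    """Total point-to-point distance of the folded curve; it is
    minimized when the trial period matches the true period."""
    p, m = fold_light_curve(times, mags, period)
    return np.sum(np.hypot(np.diff(p), np.diff(m)))

# Synthetic observations of a sinusoidal variable: true period 2.5 days,
# amplitude 0.3 mag, small measurement noise, irregular sampling.
rng = np.random.default_rng(1)
times = np.sort(rng.uniform(0.0, 50.0, 200))
mags = 10.0 + 0.3 * np.sin(2 * np.pi * times / 2.5) \
       + 0.02 * rng.standard_normal(200)

# Brute-force search over trial periods.
trials = np.linspace(1.0, 5.0, 2001)
best = trials[np.argmin([string_length(times, mags, p) for p in trials])]
print(best)  # close to the true 2.5-day period
```

In practice, methods such as the Lomb-Scargle periodogram are preferred for unevenly sampled data, but the folding idea above is the conceptual core.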
Changes in brightness may depend on the part of the spectrum observed. If the wavelengths of spectral lines are shifted, this points to movements of the star's surface. Strong magnetic fields on the star betray themselves in the spectrum, and abnormal emission or absorption lines may be an indication of a hot stellar atmosphere or of gas clouds surrounding the star. In a few cases it is possible to make pictures of a stellar disk; these may show darker spots on its surface. Combining light curves with spectral data gives a clue as to the changes that occur in a variable star. For example, evidence for a pulsating star is found in its shifting spectrum: its surface periodically moves toward and away from us, with the same frequency as its changing brightness. About two-thirds of all variable stars appear to be pulsating. In the 1930s astronomer Arthur Stanley Eddington showed that the mathematical equations that describe the interior of a star may lead to instabilities that cause a star to pulsate. The most common type of instability is related to oscillations in the degree of ionization in the outer, convective layers of the star.
Suppose the star is in the swelling phase: its outer layers expand, and because of the decreasing temperature the degree of ionization decreases. This makes the gas more transparent and thus makes it easier for the star to radiate its energy; this in turn will make the star start to contract. As the gas is thereby compressed, it is heated and the degree of ionization again increases. This makes the gas more opaque, radiation is temporarily trapped in the outer layers, the gas heats and expands once more, and the cycle repeats.
The dry plate, also known as the gelatin process, is an improved type of photographic plate. It was invented by Dr. Richard L. Maddox in 1871, and by 1879 it was so well established that the first dry plate factory had been founded. With much of the complex chemistry work centralized in a factory, the new process simplified the work of photographers, allowing them to expand their business. Gelatin emulsions, as proposed by Maddox, were sensitive to touch and mechanical friction and were not much more sensitive to light than collodion emulsions. In 1873, Charles Harper Bennett discovered a method of hardening the emulsion, making it more resistant to friction. In 1878, Bennett discovered that by prolonged heating the sensitivity of the emulsion could be greatly increased. George Eastman developed a machine to coat plates in 1879 and opened the Eastman Film and Dry Plate Company, reducing the cost of photography. A competitor of Eastman in the development and manufacture of gelatin dry plates was the architectural photographer Albert Levy.
In astronomy, declination is one of the two angles that locate a point on the celestial sphere in the equatorial coordinate system, the other being hour angle. Declination is measured north or south of the celestial equator, along the hour circle passing through the point in question. The root of the word declination means "a bending away" or "a bending down"; it comes from the same root as the words incline and recline. In some 18th- and 19th-century astronomical texts, declination is given as North Polar Distance (NPD), equivalent to 90° minus the declination. For instance, an object marked as declination −5° would have an NPD of 95°, and a declination of −90° would have an NPD of 180°. Declination in astronomy is comparable to geographic latitude projected onto the celestial sphere, and hour angle is comparable to longitude. Points north of the celestial equator have positive declinations, while those south have negative declinations. Any units of angular measure can be used for declination, but it is customarily measured in the degrees, minutes, and seconds of sexagesimal measure, with 90° equivalent to a quarter circle.
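The North Polar Distance conversion is simple enough to state as code; the sketch below just encodes NPD = 90° − δ and reproduces the two examples from the text:

```python
def npd_from_declination(dec_deg):
    """North Polar Distance in degrees: NPD = 90° − declination."""
    return 90.0 - dec_deg

def declination_from_npd(npd_deg):
    """Inverse conversion: declination = 90° − NPD."""
    return 90.0 - npd_deg

print(npd_from_declination(-5.0))   # 95.0, as in the historical texts
print(npd_from_declination(-90.0))  # 180.0
```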
Declinations with magnitudes greater than 90° do not occur, because the poles are the northernmost and southernmost points of the celestial sphere. An object at the celestial equator has a declination of 0°, the north celestial pole a declination of +90°, and the south celestial pole a declination of −90°; the sign is customarily included whether positive or negative. The Earth's axis rotates westward about the poles of the ecliptic, completing one circuit in about 26,000 years. This effect, known as precession, causes the coordinates of stationary celestial objects to change continuously, if rather slowly. Therefore, equatorial coordinates are inherently relative to the year of their observation, and astronomers specify them with reference to a particular year, known as an epoch. Coordinates from different epochs must be mathematically rotated to match each other, or to match a standard epoch. The currently used standard epoch is J2000.0, which is January 1, 2000 at 12:00 TT. The prefix "J" indicates that it is a Julian epoch. Prior to J2000.0, astronomers used the successive Besselian epochs B1875.0, B1900.0, and B1950.0.
A star's direction remains nearly fixed due to its vast distance, but its right ascension and declination do change gradually due to precession of the equinoxes and proper motion, and cyclically due to annual parallax. The declinations of Solar System objects change very rapidly compared to those of stars, due to orbital motion and close proximity. As seen from locations in the Earth's Northern Hemisphere, celestial objects with declinations greater than 90° − φ (where φ is the observer's latitude) appear to circle daily around the celestial pole without dipping below the horizon, and are therefore called circumpolar stars. This occurs in the Southern Hemisphere for objects with declinations less than −90° − φ. An extreme example is the pole star, which has a declination near +90° and so is circumpolar as seen from anywhere in the Northern Hemisphere except very close to the equator. Circumpolar stars never dip below the horizon. Conversely, there are other stars that never rise above the horizon, as seen from any given point on the Earth's surface. If a star whose declination is δ is circumpolar for some observer, then a star whose declination is −δ never rises above the horizon, as seen by the same observer.
Likewise, if a star is circumpolar for an observer at latitude φ, it never rises above the horizon as seen by an observer at latitude −φ. Neglecting atmospheric refraction, declination is always 0° at the east and west points of the horizon. At the north point it is 90° − |φ|, and at the south point −90° + |φ|. From the poles, declination is uniform around the entire horizon, approximately 0°. Non-circumpolar stars are visible only during certain seasons of the year. The Sun's declination varies with the seasons. As seen from arctic or antarctic latitudes, the Sun is circumpolar near the local summer solstice, leading to the phenomenon of it being above the horizon at midnight, called midnight sun. Near the local winter solstice, the Sun remains below the horizon all day, which is called polar night. When an object is directly overhead, its declination is almost always within 0.01 degrees of the observer's latitude; it would be exactly equal except for two complications. The first complication applies to all celestial objects: the object's declination equals the observer's astronomic latitude, but the term "latitude" ordinarily means geodetic latitude, the latitude on maps and GPS devices.
In the continental United States and surrounding area, the difference is typically a few arcseconds but can be as great as 41 arcseconds. The second complication is that, assuming no deflection of the vertical, "overhead" means perpendicular to the ellipsoid at the observer's location, but the perpendicular line does not pass through the center of the Earth. For the Moon this discrepancy can reach 0.003 degrees.
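The circumpolar conditions above translate directly into code. A sketch (ignoring refraction, and treating objects exactly on the boundary loosely) that checks whether an object never sets, or never rises, for an observer at a given latitude:

```python
def is_circumpolar(dec_deg, lat_deg):
    """True if the object never sets for this observer.
    Northern observers: δ > 90° − φ; southern observers: δ < −90° − φ."""
    if lat_deg >= 0:
        return dec_deg > 90.0 - lat_deg
    return dec_deg < -90.0 - lat_deg

def never_rises(dec_deg, lat_deg):
    """True if the object never rises for this observer. By the mirror
    rule in the text, this is circumpolarity with the declination sign
    flipped."""
    return is_circumpolar(-dec_deg, lat_deg)

# Polaris (δ ≈ +89.3°) seen from a latitude of +51.5° (roughly London):
print(is_circumpolar(89.3, 51.5))   # True: it never sets
# The same star seen from latitude −51.5°:
print(never_rises(89.3, -51.5))     # True: it never rises
```

The two functions together also express the latitude symmetry from the text: a star circumpolar at latitude φ never rises at latitude −φ.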
The interdisciplinary field of materials science, commonly termed materials science and engineering, is the design and discovery of new materials, particularly solids. The intellectual origins of materials science stem from the Enlightenment, when researchers began to use analytical thinking from chemistry, physics and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics, chemistry and engineering; as such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools of the study, within either the Science or Engineering schools, hence the naming. Materials science is a syncretic discipline hybridizing metallurgy, ceramics, solid-state physics and chemistry; it is the first example of a new academic discipline emerging by fusion rather than fission.
Many of the most pressing scientific problems humans currently face are due to the limits of the materials that are available and how they are used. Thus, breakthroughs in materials science are likely to affect the future of technology significantly. Materials scientists emphasize understanding how the processing history of a material influences its structure, and thus the material's properties and performance. The understanding of processing-structure-properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology and metallurgy. Materials science is also an important part of forensic engineering and failure analysis – investigating materials, structures or components which fail or do not function as intended, causing personal injury or damage to property. Such investigations are key to understanding, for example, the causes of various aviation accidents and incidents. The material of choice of a given era is often a defining point; phrases such as Stone Age, Bronze Age, Iron Age and Steel Age are historic, if arbitrary, examples.
Deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from mining and ceramics, and earlier still from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science were products of the space race: the understanding and engineering of the metallic alloys, and silica and carbon materials, used in building space vehicles enabling the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics and biomaterials. Before the 1960s, many eventual materials science departments were metallurgy or ceramics engineering departments, reflecting the 19th- and early 20th-century emphasis on metals and ceramics.
The growth of materials science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s "to expand the national program of basic research and training in the materials sciences." The field has since broadened to include every class of materials, including ceramics, semiconductors, magnetic materials and nanomaterials, generally classified into three distinct groups: ceramics, metals and polymers. The prominent change in materials science during recent decades has been the active use of computer simulations to find new materials, predict properties and understand phenomena. A material is defined as a substance (most often a solid) intended to be used for certain applications. There are a myriad of materials around us; they can be found in anything from buildings to spacecraft. Materials can be further divided into two classes: crystalline and non-crystalline. The traditional examples of materials are metals, semiconductors, ceramics and polymers.
New and advanced materials that are being developed include nanomaterials and energy materials, to name a few. The basis of materials science involves studying the structure of materials and relating it to their properties. Once a materials scientist knows about this structure-property correlation, they can go on to study the relative performance of a material in a given application. The major determinants of the structure of a material, and thus of its properties, are its constituent chemical elements and the way in which it has been processed into its final form. These characteristics, taken together and related through the laws of thermodynamics and kinetics, govern a material's microstructure and thus its properties. As mentioned above, structure is one of the most important components of the field of materials science. Materials science examines the structure of materials from the atomic scale all the way up to the macro scale. Characterization is the way materials scientists examine this structure; it involves methods such as diffraction with X-rays, electrons or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy, thermal analysis and electron microscope analysis.
A Schmidt camera, also referred to as the Schmidt telescope, is a catadioptric astrophotographic telescope designed to provide wide fields of view with limited aberrations. The design was invented by the German-Estonian optician Bernhard Schmidt in 1930. Some notable examples are the UK Schmidt Telescope and the ESO Schmidt; a more recent example is the Kepler space telescope exoplanet finder. A related design is the Lurie–Houghton telescope. The Schmidt camera's optical components are an easy-to-make spherical primary mirror and an aspherical correcting lens, known as a Schmidt corrector plate, located at the center of curvature of the primary mirror; the film or other detector is placed at the prime focus. The design is noted for allowing fast focal ratios while controlling coma and astigmatism. Schmidt cameras have strongly curved focal planes, requiring that the film, plate, or other detector be correspondingly curved; in some cases the detector is made curved.
A field flattener, in its simplest form a planoconvex lens placed in front of the film plate or detector, is sometimes used instead. Since the corrector plate is at the center of curvature of the primary mirror, the tube length can be very long for a wide-field telescope. There are also the drawbacks that the film holder or detector is mounted at the focus halfway up the tube assembly: a small amount of light is blocked, and there is some loss of contrast in the image due to diffraction effects of the obstruction and its support structure. Because of its wide field of view, the Schmidt camera is typically used as a survey instrument, for research programs in which a large amount of sky must be covered. These include astronomical surveys, asteroid searches, and nova patrols. In addition, Schmidt cameras and derivative designs are used for tracking artificial Earth satellites. The first large Schmidt telescopes were built at Hamburg Observatory and Palomar Observatory shortly before the Second World War.
Between 1945 and 1980, about eight more large Schmidt telescopes were built around the world. One famous and productive Schmidt camera is the Oschin Schmidt Telescope at Palomar Observatory, completed in 1948. This instrument was used in the National Geographic Society – Palomar Observatory Sky Survey, the POSS-II survey, the Palomar–Leiden surveys, and other projects. The European Southern Observatory, with a 1-meter Schmidt telescope at La Silla, and the UK Science Research Council, with a 1.2-meter Schmidt telescope at Siding Spring Observatory, engaged in a collaborative sky survey to complement the first Palomar Sky Survey, but focusing on the southern hemisphere. The technical improvements developed during this survey encouraged the development of the Second Palomar Observatory Sky Survey. The telescope used in the Lowell Observatory Near-Earth-Object Search is a Schmidt camera, and the Schmidt telescope of the Karl Schwarzschild Observatory is the largest Schmidt camera in the world. A Schmidt telescope was also at the heart of the Hipparcos satellite of the European Space Agency.
This was used in the Hipparcos survey, which mapped the distances of more than a million stars with unprecedented accuracy; this included 99% of all stars up to magnitude 11. The spherical mirror used in this telescope was very accurate. The Kepler photometer, mounted on NASA's Kepler space telescope, is the largest Schmidt camera launched into space. Starting in the early 1970s, Celestron marketed an 8-inch Schmidt camera. The camera was focused at the factory and was made of materials with low expansion coefficients, so it would never need to be focused in the field. Early models required the photographer to cut and develop individual frames of 35 mm film, as the film holder could only hold one frame. About 300 Celestron Schmidt cameras were produced. The Schmidt system was also popular for television projection systems: large Schmidt projectors were used in theaters, while systems as small as 8 inches were made for home use and other small venues. Schmidt himself noted in the 1930s that the corrector plate could be replaced with a simple aperture at the mirror's center of curvature for a slow camera.
Such a design was used to construct a working 1/8-scale model of the Palomar Schmidt, with a 5° field; the retronym "lensless Schmidt" has been given to this configuration. Yrjö Väisälä had earlier designed an "astronomical camera" similar to Bernhard Schmidt's, but the design went unpublished; Väisälä mentioned it in lecture notes in 1924, with a footnote: "problematic spherical focal plane". Once Väisälä saw Schmidt's publication, he promptly went ahead and solved the field-flattening problem in Schmidt's design by placing a doubly convex lens in front of the film holder. The resulting system is known as the Schmidt–Väisälä camera, or sometimes simply the Väisälä camera. In 1940, James Baker of Harvard University modified the Schmidt camera design to include a convex secondary mirror, which reflected light back toward the primary; the photographic plate was installed near the primary, facing the sky. This variant is called the Baker–Schmidt camera. The Baker–Nunn design, by Dr. Baker and Joseph Nunn, replaced the Baker–Schmidt camera's corrector plate with a small triplet corrector lens closer to the focus of the camera.