Radio astronomy is a subfield of astronomy that studies celestial objects at radio frequencies. The first detection of radio waves from an astronomical object came in 1932, when Karl Jansky at Bell Telephone Laboratories observed radiation coming from the Milky Way. Subsequent observations have identified a number of different sources of radio emission, including stars and galaxies as well as entirely new classes of objects such as radio galaxies, quasars, and masers. The discovery of the cosmic microwave background radiation, regarded as evidence for the Big Bang theory, was made through radio astronomy. Radio astronomy is conducted using large radio antennas referred to as radio telescopes, which are used either singly or as multiple linked telescopes employing the techniques of radio interferometry and aperture synthesis. Interferometry allows radio astronomy to achieve high angular resolution, because the resolving power of an interferometer is set by the distance between its components rather than by the size of its components.
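The resolution advantage of interferometry can be illustrated with the diffraction-limited estimate θ ≈ λ/D, where D is a dish diameter for a single telescope or the longest baseline for an interferometer. The sketch below is a simplified order-of-magnitude comparison (real instruments include geometry factors of order unity); the specific dish size and baseline are illustrative assumptions, not values from the text.

```python
import math

def angular_resolution_rad(wavelength_m: float, aperture_m: float) -> float:
    """Approximate diffraction-limited angular resolution in radians.

    For a single dish, aperture_m is the dish diameter; for an
    interferometer, it is the longest baseline between elements.
    Illustrative lambda/D estimate only.
    """
    return wavelength_m / aperture_m

# A 21 cm (hydrogen-line) observation with a single 100 m dish...
single_dish = angular_resolution_rad(0.21, 100.0)
# ...versus an interferometer with a 3000 km baseline (VLBI scale).
vlbi = angular_resolution_rad(0.21, 3_000_000.0)

arcsec = 180 / math.pi * 3600  # radians -> arcseconds
print(f"100 m dish:   {single_dish * arcsec:10.3f} arcsec")
print(f"3000 km VLBI: {vlbi * arcsec:10.5f} arcsec")
```

Lengthening the baseline by a factor of 30,000 sharpens the resolution by the same factor, which is why linked telescopes far outresolve any single dish at the same wavelength.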
Before Jansky observed the Milky Way in the 1930s, physicists speculated that radio waves could be observed from astronomical sources. In the 1860s, James Clerk Maxwell's equations had shown that electromagnetic radiation, which is associated with electricity and magnetism, could exist at any wavelength. Several attempts were made to detect radio emission from the Sun, including an experiment by the German astrophysicists Johannes Wilsing and Julius Scheiner in 1896 and a centimetre-wave receiving apparatus set up by Oliver Lodge between 1897 and 1900; these attempts were unable to detect any emission due to technical limitations of the instruments. The discovery of the radio-reflecting ionosphere in 1902 led physicists to conclude that the layer would bounce any astronomical radio transmission back into space, making it undetectable. Karl Jansky made the discovery of the first astronomical radio source serendipitously in the early 1930s: as an engineer with Bell Telephone Laboratories, he was investigating static that interfered with short-wave transatlantic voice transmissions.
Using a large directional antenna, Jansky noticed that his analog pen-and-paper recording system kept recording a repeating signal of unknown origin. Since the signal peaked about every 24 hours, Jansky suspected the source of the interference was the Sun crossing the view of his directional antenna. Continued analysis showed that the source was not following the 24-hour daily cycle of the Sun but instead repeating on a cycle of 23 hours and 56 minutes. Jansky discussed the puzzling phenomenon with his friend and teacher Albert Melvin Skellett, who pointed out that the time between the signal peaks was the exact length of a sidereal day. By comparing his observations with optical astronomical maps, Jansky concluded that the radiation source peaked when his antenna was aimed at the densest part of the Milky Way in the constellation of Sagittarius. He concluded that since the Sun and other stars were not large emitters of radio noise, the strange radio interference might be generated by interstellar gas and dust in the galaxy.
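The 23 hour 56 minute period that tipped Skellett off falls straight out of simple arithmetic: the Earth makes one extra rotation per year relative to the stars, so a sidereal day is shorter than the solar day by a factor N/(N+1), where N is the number of solar days in a year. A minimal check:

```python
# Sidereal vs. solar day: Earth rotates once more per year relative to
# the stars than relative to the Sun, so the sidereal day is shorter by
# a factor N/(N+1).
SOLAR_DAY_S = 86_400.0
SOLAR_DAYS_PER_YEAR = 365.2422  # mean tropical year, approximate

sidereal_day_s = SOLAR_DAY_S * SOLAR_DAYS_PER_YEAR / (SOLAR_DAYS_PER_YEAR + 1)

hours, rem = divmod(sidereal_day_s, 3600)
minutes, seconds = divmod(rem, 60)
print(f"{int(hours)} h {int(minutes)} min {seconds:.0f} s")  # ~23 h 56 min 4 s
```

Any signal repeating on this period, rather than on 24 hours, must be fixed relative to the stars rather than to the Sun, which is exactly the inference Jansky drew.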
Jansky announced his discovery in 1933. He wanted to investigate the radio waves from the Milky Way in further detail, but Bell Labs reassigned him to another project, so he did no further work in the field of astronomy. His pioneering efforts in the field of radio astronomy have been recognized by the naming of the fundamental unit of flux density, the jansky, after him. Grote Reber, inspired by Jansky's work, built a parabolic radio telescope 9 m in diameter in his backyard in 1937; he began by repeating Jansky's observations and then conducted the first sky survey at radio frequencies. On February 27, 1942, James Stanley Hey, a British Army research officer, made the first detection of radio waves emitted by the Sun; later that year George Clark Southworth, at Bell Labs like Jansky, also detected radio waves from the Sun. Both researchers were bound by wartime security surrounding radar, so Reber, who was not, published his 1944 findings first. Several other people independently discovered solar radio waves, including E. Schott in Denmark and Elizabeth Alexander working on Norfolk Island.
At Cambridge University, where ionospheric research had taken place during World War II, J. A. Ratcliffe, along with other members of the Telecommunications Research Establishment who had carried out wartime research into radar, created a radiophysics group at the university where radio wave emissions from the Sun were observed and studied. This early research soon branched out into the observation of other celestial radio sources, and interferometry techniques were pioneered to isolate the angular source of the detected emissions. Martin Ryle and Antony Hewish at the Cavendish Astrophysics Group developed the technique of Earth-rotation aperture synthesis; the radio astronomy group in Cambridge went on to found the Mullard Radio Astronomy Observatory near Cambridge in the 1950s. During the late 1960s and early 1970s, as computers became capable of handling the computationally intensive Fourier transform inversions required, they used aperture synthesis to create a 'One-Mile' and then a '5 km' effective aperture.
The emissivity of the surface of a material is its effectiveness in emitting energy as thermal radiation. Thermal radiation is electromagnetic radiation that may include both visible radiation and infrared radiation, which is not visible to human eyes; the thermal radiation from very hot objects is visible to the eye. Quantitatively, emissivity is the ratio of the thermal radiation from a surface to the radiation from an ideal black surface at the same temperature, as given by the Stefan–Boltzmann law; the ratio varies from 0 to 1. The surface of a perfect black body emits thermal radiation at the rate of approximately 448 watts per square metre at room temperature (25 °C). Emissivities are important in several contexts: Insulated windows – Warm surfaces are usually cooled directly by air, but they also cool themselves by emitting thermal radiation; this second cooling mechanism is important for simple glass windows, which have emissivities close to the maximum possible value of 1.0. "Low-E windows" with transparent low-emissivity coatings emit less thermal radiation than ordinary windows.
In winter, these coatings can halve the rate at which a window loses heat compared to an uncoated glass window. Solar heat collectors – Similarly, solar heat collectors lose heat by emitting thermal radiation. Advanced solar collectors incorporate selective surfaces that have low emissivities; these collectors waste little of the solar energy through emission of thermal radiation. Thermal shielding – For the protection of structures from high surface temperatures, such as reusable spacecraft or hypersonic aircraft, high-emissivity coatings, with emissivity values near 0.9, are applied on the surface of insulating ceramics. This facilitates radiative cooling and protection of the underlying structure and is an alternative to the ablative coatings used in single-use reentry capsules. Planetary temperatures – The planets are solar thermal collectors on a large scale; the temperature of a planet's surface is determined by the balance between the heat absorbed by the planet from sunlight, the heat emitted from its core, and the thermal radiation emitted back into space.
Emissivity of a planet is determined by the nature of its atmosphere. Temperature measurements – Pyrometers and infrared cameras are instruments used to measure the temperature of an object by using its thermal radiation; the calibration of these instruments involves the emissivity of the surface. Hemispherical emissivity of a surface, denoted ε, is defined as ε = M_e / M_e°, where M_e is the radiant exitance of that surface and M_e° is the radiant exitance of a black body at the same temperature. Spectral hemispherical emissivity in frequency and spectral hemispherical emissivity in wavelength of a surface, denoted ε_ν and ε_λ, are defined as ε_ν = M_{e,ν} / M_{e,ν}° and ε_λ = M_{e,λ} / M_{e,λ}°, where M_{e,ν} and M_{e,λ} are the spectral radiant exitances in frequency and in wavelength of that surface, and the ° quantities are those of a black body. Directional emissivity of a surface, denoted ε_Ω, is defined as ε_Ω = L_{e,Ω} / L_{e,Ω}°, where L_{e,Ω} is the radiance of that surface. Spectral directional emissivity in frequency and in wavelength of a surface, denoted ε_{ν,Ω} and ε_{λ,Ω}, are defined as ε_{ν,Ω} = L_{e,Ω,ν} / L_{e,Ω,ν}° and ε_{λ,Ω} = L_{e,Ω,λ} / L_{e,Ω,λ}°, where L_{e,Ω,ν} and L_{e,Ω,λ} are the spectral radiances in frequency and in wavelength of that surface.
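The hemispherical definition ε = M_e / M_e° can be sketched numerically: the black-body denominator comes from the Stefan–Boltzmann law, M_e° = σT⁴, which also reproduces the 448 W/m² room-temperature figure quoted earlier. The measured exitance of 400 W/m² below is a hypothetical reading chosen for illustration.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def hemispherical_emissivity(measured_exitance_w_m2: float, t_kelvin: float) -> float:
    """epsilon = M_e / M_e° : measured radiant exitance over that of an
    ideal black body at the same temperature (Stefan-Boltzmann law)."""
    blackbody = SIGMA * t_kelvin ** 4
    return measured_exitance_w_m2 / blackbody

room_blackbody = SIGMA * 298.15 ** 4       # ~448 W/m^2 at 25 C, as above
eps = hemispherical_emissivity(400.0, 298.15)  # hypothetical surface reading
print(f"black body at 25 C: {room_blackbody:.0f} W/m^2")
print(f"emissivity = {eps:.2f}")           # a ratio between 0 and 1
```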
The interdisciplinary field of materials science, commonly termed materials science and engineering, is the design and discovery of new materials, particularly solids. The intellectual origins of materials science stem from the Enlightenment, when researchers began to use analytical thinking from chemistry and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics and engineering; as such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study, within either their science or engineering faculties, hence the field's dual name. Materials science is a syncretic discipline hybridizing metallurgy, solid-state physics, and chemistry; it is the first example of a new academic discipline emerging by fusion rather than fission.
Many of the most pressing scientific problems humans face are due to the limits of the materials that are available and how they are used. Thus, breakthroughs in materials science are likely to affect the future of technology significantly. Materials scientists emphasize understanding how the processing history of a material influences its structure, and thus the material's properties and performance; the understanding of processing-structure-properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology and metallurgy. Materials science is an important part of forensic engineering and failure analysis – investigating materials, structures, or components which fail or do not function as intended, causing personal injury or damage to property; such investigations are key to understanding, for example, the causes of various aviation accidents and incidents. The material of choice of a given era is often a defining point; phrases such as Stone Age, Bronze Age, Iron Age, and Steel Age are historic, if arbitrary, examples.
Deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from mining and ceramics, and earlier from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science are a product of the space race: the understanding and engineering of the metallic alloys, silica, and carbon materials used in building space vehicles enabled the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, and biomaterials. Before the 1960s, many eventual materials science departments were metallurgy or ceramics engineering departments, reflecting the 19th and early 20th century emphasis on metals and ceramics.
The growth of materials science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s "to expand the national program of basic research and training in the materials sciences." The field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials, and nanomaterials, generally classified into three distinct groups: ceramics, metals, and polymers. The prominent change in materials science during recent decades is the active use of computer simulations to find new materials, predict properties, and understand phenomena. A material is defined as a substance intended to be used for certain applications. There are a myriad of materials around us—they can be found in anything from buildings to spacecraft. Materials can be further divided into two classes: crystalline and non-crystalline; the traditional examples of materials are metals, semiconductors, ceramics, and polymers.
New and advanced materials that are being developed include nanomaterials and energy materials, to name a few. The basis of materials science involves studying the structure of materials and relating it to their properties. Once a materials scientist knows about this structure-property correlation, they can go on to study the relative performance of a material in a given application; the major determinants of the structure of a material, and thus of its properties, are its constituent chemical elements and the way in which it has been processed into its final form. These characteristics, taken together and related through the laws of thermodynamics and kinetics, govern a material's microstructure and thus its properties; as mentioned above, structure is one of the most important components of the field of materials science. Materials science examines the structure of materials from the atomic scale all the way up to the macro scale. Characterization is how this structure is examined; it involves methods such as diffraction with X-rays, electrons, or neutrons; various forms of spectroscopy and chemical analysis such as Raman spectroscopy and energy-dispersive spectroscopy; thermal analysis; and electron microscope analysis.
In physics, energy is the quantitative property that must be transferred to an object in order to perform work on, or to heat, the object. Energy is a conserved quantity; the SI unit of energy is the joule, the energy transferred to an object by the work of moving it a distance of 1 metre against a force of 1 newton. Common forms of energy include the kinetic energy of a moving object, the potential energy stored by an object's position in a force field, the elastic energy stored by stretching solid objects, the chemical energy released when a fuel burns, the radiant energy carried by light, the thermal energy due to an object's temperature. Mass and energy are related. Due to mass–energy equivalence, any object that has mass when stationary has an equivalent amount of energy whose form is called rest energy, any additional energy acquired by the object above that rest energy will increase the object's total mass just as it increases its total energy. For example, after heating an object, its increase in energy could be measured as a small increase in mass, with a sensitive enough scale.
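The closing claim about heating an object can be made concrete with a small worked example via E = mc²: heating 1 kg of water by 100 K adds a definite energy, and dividing by c² gives the corresponding mass increase. The specific heat value is the standard approximation for water; the scenario itself is an illustration, not from the text.

```python
C = 299_792_458.0   # speed of light, m/s (exact)
WATER_C_P = 4184.0  # specific heat of water, J/(kg K), approximate

# Heat 1 kg of water by 100 K and ask how much heavier it gets.
delta_e = 1.0 * WATER_C_P * 100.0   # energy added, J
delta_m = delta_e / C ** 2          # mass increase via E = m c^2

print(f"energy added: {delta_e:.0f} J")
print(f"mass gained:  {delta_m:.2e} kg")  # a few nanograms, far below any scale
```

The result, roughly 5 × 10⁻¹² kg, shows why the effect is real but unmeasurable with ordinary balances.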
Living organisms require exergy to stay alive, such as the exergy humans obtain from food. Human civilization requires energy to function, which it gets from energy resources such as fossil fuels, nuclear fuel, or renewable energy; the processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the Sun and the geothermal energy contained within the earth. The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the components of an object – while potential energy reflects the potential of an object to have motion; it is a function of the position of an object within a field, or may be stored in the field itself. While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as a form in its own right. For example, macroscopic mechanical energy is the sum of the translational and rotational kinetic and potential energy in a system (neglecting the kinetic energy due to temperature), and nuclear energy combines potentials from the nuclear force and the weak force, among others.
The word energy derives from the Ancient Greek ἐνέργεια (energeia, 'activity, operation'), which appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure. In the late 17th century, Gottfried Leibniz proposed the idea of the Latin vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the random motion of the constituent parts of matter, although it would be more than a century until this was accepted; the modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. In 1807, Thomas Young was the first to use the term "energy" instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853 William Rankine coined the term "potential energy".
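The factor-of-two relationship between Leibniz's vis viva and the modern kinetic energy can be written out directly; the sketch below is a trivial illustration of the definitions just given.

```python
# Leibniz's vis viva, m * v^2, versus the modern kinetic energy,
# (1/2) * m * v^2: the two differ only by a constant factor of two.
def vis_viva(mass_kg: float, speed_m_s: float) -> float:
    return mass_kg * speed_m_s ** 2

def kinetic_energy(mass_kg: float, speed_m_s: float) -> float:
    return 0.5 * mass_kg * speed_m_s ** 2

m, v = 2.0, 3.0  # arbitrary example values
print(vis_viva(m, v), kinetic_energy(m, v))  # 18.0 9.0
```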
The law of conservation of energy was first postulated in the early 19th century and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat; these developments led to the theory of conservation of energy, formalized largely by William Thomson as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst; it also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.
In 1843, James Prescott Joule independently discovered the mechanical equivalent of heat in a series of experiments. The most famous of them used the "Joule apparatus": a descending weight attached to a string caused rotation of a paddle immersed in water insulated from heat transfer. It showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle. In the International System of Units, the unit of energy is the joule, named after James Prescott Joule; it is a derived unit, equal to the energy expended in applying a force of one newton through a distance of one metre. However, energy is also expressed in many other units not part of the SI, such as ergs, British thermal units, kilowatt-hours, and kilocalories, which require a conversion factor when expressed in SI units. The SI unit of energy rate (energy per unit time) is the watt, a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour.
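The unit conversions just listed can be collected in a few lines. The constants below use common conventions (the thermochemical kilocalorie and the International Table BTU); other conventions differ slightly.

```python
# Energy-unit conversions around the SI joule (1 W = 1 J/s).
JOULES_PER_WATT_HOUR = 3600.0
JOULES_PER_KILOWATT_HOUR = 3.6e6
JOULES_PER_KILOCALORIE = 4184.0  # thermochemical calorie convention
JOULES_PER_BTU = 1055.06         # International Table BTU, approximate
JOULES_PER_ERG = 1e-7            # CGS unit of energy

kwh = 1.0
print(f"{kwh} kWh = {kwh * JOULES_PER_KILOWATT_HOUR:.0f} J")
print(f"      = {kwh * JOULES_PER_KILOWATT_HOUR / JOULES_PER_KILOCALORIE:.1f} kcal")
print(f"      = {kwh * JOULES_PER_KILOWATT_HOUR / JOULES_PER_BTU:.1f} BTU")
```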
The Planck constant is a physical constant, the quantum of electromagnetic action, which relates the energy carried by a photon to its frequency. A photon's energy is equal to its frequency multiplied by the Planck constant. The Planck constant is of fundamental importance in quantum mechanics, and in metrology it is the basis for the definition of the kilogram. At the end of the 19th century, physicists were unable to explain why the observed spectrum of black-body radiation, which by then had been accurately measured, diverged at higher frequencies from that predicted by existing theories. In 1900, Max Planck empirically derived a formula for the observed spectrum. He assumed that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, E, proportional to the frequency of its associated electromagnetic wave. He was able to calculate the proportionality constant, h, from the experimental measurements, and that constant is named in his honor.
In 1905, the value E was associated by Albert Einstein with a "quantum" or minimal element of the energy of the electromagnetic wave itself. The light quantum behaved in some respects as an electrically neutral particle, as opposed to an electromagnetic wave; it was eventually called a photon. Max Planck received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta". Since energy and mass are equivalent, the Planck constant also relates mass to frequency. By 2017, the Planck constant had been measured with sufficient accuracy in terms of the SI base units that it was central to replacing the metal cylinder, called the International Prototype of the Kilogram, that had defined the kilogram since 1889. The new definition was unanimously approved at the General Conference on Weights and Measures on 16 November 2018 as part of the 2019 redefinition of SI base units. For this new definition of the kilogram, the Planck constant, as defined by the ISO standard, was set to 6.62607015×10−34 J⋅s exactly.
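The relation "a photon's energy is its frequency multiplied by the Planck constant" (E = hν) is easy to evaluate with the now-exact SI value of h; the green-light frequency below is an illustrative choice.

```python
H = 6.62607015e-34  # Planck constant, J s (exact since the 2019 SI redefinition)

def photon_energy_j(frequency_hz: float) -> float:
    """E = h * nu : energy carried by a single photon of frequency nu."""
    return H * frequency_hz

# Green light, roughly 540 THz:
e_green = photon_energy_j(5.4e14)
print(f"{e_green:.3e} J per photon")  # a few times 10^-19 J
```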
The kilogram was the last SI base unit to be re-defined by a fundamental physical property to replace a physical artefact. In the last years of the 19th century, Max Planck was investigating the problem of black-body radiation first posed by Kirchhoff some 40 years earlier; every physical body continuously emits electromagnetic radiation. At low frequencies, Planck's law tends to the Rayleigh–Jeans law, while in the limit of high frequencies it tends to the Wien approximation, but there was no overall expression or explanation for the shape of the observed emission spectrum. Approaching this problem, Planck hypothesized that the equations of motion for light describe a set of harmonic oscillators, one for each possible frequency. He examined how the entropy of the oscillators varied with the temperature of the body and, trying to match Wien's law, was able to derive an approximate mathematical function for the black-body spectrum. To create Planck's law, which predicts blackbody emissions by fitting the observed curves, he multiplied the classical expression by a complex factor that involves a constant, h, in both the numerator and the denominator, which subsequently became known as the Planck constant.
The spectral radiance of a body, B_ν, describes the amount of energy it emits at different radiation frequencies. It is the power emitted per unit area of the body, per unit solid angle of emission, per unit frequency. Planck showed that the spectral radiance of a body for frequency ν at absolute temperature T is given by B_ν = (2hν³/c²) · 1/(e^(hν/(k_B T)) − 1), where k_B is the Boltzmann constant, h is the Planck constant, and c is the speed of light in the medium, whether material or vacuum. The spectral radiance can also be expressed per unit wavelength λ instead of per unit frequency; in this case, it is given by B_λ = (2hc²/λ⁵) · 1/(e^(hc/(λk_B T)) − 1), showing how radiated energy emitted at shorter wavelengths increases more rapidly with temperature than energy emitted at longer wavelengths. The law may be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation. The SI units of B_ν are W·sr−1·m−2·Hz−1, while those of B_λ are W·sr−1·m−3.
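Both forms of Planck's law above translate directly into code; the sketch below evaluates them with the vacuum speed of light, and the 5778 K / 500 nm example (roughly the solar surface temperature at the peak of the visible band) is an illustrative choice, not from the text.

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 299_792_458.0    # speed of light in vacuum, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def spectral_radiance_nu(nu_hz: float, t_kelvin: float) -> float:
    """Planck's law per unit frequency: B_nu in W sr^-1 m^-2 Hz^-1."""
    return (2 * H * nu_hz ** 3 / C ** 2) / math.expm1(H * nu_hz / (K_B * t_kelvin))

def spectral_radiance_lambda(lam_m: float, t_kelvin: float) -> float:
    """Planck's law per unit wavelength: B_lambda in W sr^-1 m^-3."""
    return (2 * H * C ** 2 / lam_m ** 5) / math.expm1(H * C / (lam_m * K_B * t_kelvin))

# Solar-surface temperature near the visible peak (~500 nm):
b_lam = spectral_radiance_lambda(500e-9, 5778.0)
print(f"{b_lam:.3e} W sr^-1 m^-3")
```

The two forms are related by B_λ = B_ν · c/λ², which makes a handy consistency check on any implementation.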
Planck soon realized that his solution was not unique. There were several different solutions, each of which gave a different value for the entropy of the oscillators. To save his theory, Planck resorted to using the then-controversial theory of statistical mechanics, which he described as "an act of despair … I was ready to sacrifice any of my previous convictions about physics." One of his new boundary conditions was to interpret U_N [the vibrational energy of N oscillators] not as a continuous, infinitely divisible quantity, but as a discrete quantity composed of an integral number of finite equal parts.
A pyrometer is a type of remote-sensing thermometer used to measure the temperature of a surface. Various forms of pyrometers have existed historically. In modern usage, it is a device that determines from a distance the temperature of a surface from the amount of thermal radiation it emits, a process known as pyrometry and sometimes radiometry. The word pyrometer comes from the Greek word for fire, "πῦρ" (pyr), and meter, meaning to measure. The word pyrometer was originally coined to denote a device capable of measuring the temperature of an object by its incandescence, the visible light emitted by a body that is at least red-hot. Modern pyrometers or infrared thermometers also measure the temperature of cooler objects, down to room temperature, by detecting their infrared radiation flux. A modern pyrometer has an optical system and a detector; the optical system focuses the thermal radiation onto the detector. The output signal of the detector is related to the thermal radiation or irradiance j* of the target object through the Stefan–Boltzmann law, the constant of proportionality σ, called the Stefan–Boltzmann constant, and the emissivity ε of the object:
j* = εσT⁴. This output is used to infer the object's temperature from a distance, with no need for the pyrometer to be in thermal contact with the object. Pyrometry of gases presents difficulties; these are most commonly overcome by using thin-filament pyrometry or soot pyrometry, both of which involve small solids in contact with hot gases. The potter Josiah Wedgwood invented the first pyrometer to measure the temperature in his kilns; it first compared the color of clay fired at known temperatures, but was eventually upgraded to measuring the shrinkage of pieces of clay, which depended on kiln temperature. Later examples used the expansion of a metal bar. The first disappearing-filament pyrometer was built by L. Holborn and F. Kurlbaum in 1901; this device had a thin electrical filament between the observer's eye and an incandescent object. The current through the filament was adjusted until it was of the same colour as the object, and hence no longer visible; the temperature returned by the disappearing-filament pyrometer and others of its kind, called brightness pyrometers, is dependent on the emissivity of the object.
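The relation j* = εσT⁴ can be inverted for T, which is essentially what an idealized total-radiation pyrometer does. The sketch below ignores optics losses and background radiation, and the irradiance and emissivity values are illustrative assumptions.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def pyrometer_temperature_k(irradiance_w_m2: float, emissivity: float) -> float:
    """Invert j* = eps * sigma * T^4 for T (idealized: no optics losses,
    no background radiation, emissivity known)."""
    return (irradiance_w_m2 / (emissivity * SIGMA)) ** 0.25

# Hypothetical hot surface radiating 150 kW/m^2, assumed emissivity 0.8:
t = pyrometer_temperature_k(150_000.0, 0.8)
print(f"{t:.0f} K")
```

Note how the result depends on the assumed ε, which is exactly the weakness of brightness pyrometers discussed next.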
With greater use of brightness pyrometers, it became obvious that problems existed with relying on knowledge of the value of emissivity. Emissivity was found to change drastically with surface roughness, surface composition, and even the temperature itself. To get around these difficulties, the ratio or two-color pyrometer was developed. These instruments rely on the fact that Planck's law, which relates temperature to the intensity of radiation emitted at individual wavelengths, can be solved for temperature if the intensities at two different wavelengths are divided. This solution assumes that the emissivity is the same at both wavelengths and so cancels out in the division; this is known as the gray-body assumption. Ratio pyrometers are essentially two brightness pyrometers in a single instrument. The operational principles of ratio pyrometers were developed in the 1920s and 1930s, and they became commercially available in 1939. As the ratio pyrometer came into popular use, it was determined that many materials, of which metals are an example, do not have the same emissivity at two wavelengths.
For these materials, the emissivity does not cancel out and the temperature measurement is in error. The amount of error depends on the emissivities and the wavelengths where the measurements are taken. Two-color ratio pyrometers cannot measure the temperature of such materials accurately. To measure the temperature of real objects with unknown or changing emissivities more accurately, multiwavelength pyrometers were envisioned at the US National Institute of Standards and Technology and described in 1992. Multiwavelength pyrometers use three or more wavelengths and mathematical manipulation of the results to attempt to achieve accurate temperature measurement even when the emissivity is unknown and different at all wavelengths. Pyrometers are suited to the measurement of moving objects or any surfaces that cannot be reached or cannot be touched. Temperature is a fundamental parameter in metallurgical furnace operations. Reliable and continuous measurement of the metal temperature is essential for effective control of the operation. Smelting rates can be maximized, slag can be produced at the optimum temperature, fuel consumption is minimized, and refractory life may be lengthened.
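The gray-body cancellation behind the two-color pyrometer can be sketched in the Wien approximation of Planck's law (valid when hc/(λk_BT) ≫ 1, as it is at visible wavelengths and furnace temperatures), where the two-wavelength ratio can be solved for T in closed form. The wavelengths, temperature, and emissivity below are illustrative assumptions.

```python
import math

C2 = 1.438776877e-2  # second radiation constant, hc/k_B, in m K

def wien_intensity(lam_m: float, t_kelvin: float, emissivity: float = 1.0) -> float:
    """Wien-approximation spectral intensity; the constant prefactor is
    omitted since only ratios of intensities are used below."""
    return emissivity * lam_m ** -5 * math.exp(-C2 / (lam_m * t_kelvin))

def ratio_pyrometer_t(lam1_m: float, lam2_m: float, intensity_ratio: float) -> float:
    """Recover T from the two-wavelength intensity ratio, assuming the
    gray-body condition (equal emissivity at both wavelengths)."""
    return C2 * (1 / lam1_m - 1 / lam2_m) / math.log(
        (lam2_m / lam1_m) ** 5 / intensity_ratio
    )

# Simulate a gray body at 1500 K with an unknown emissivity of 0.4 ...
lam1, lam2, t_true = 0.65e-6, 0.9e-6, 1500.0
r = wien_intensity(lam1, t_true, 0.4) / wien_intensity(lam2, t_true, 0.4)
# ... the emissivity cancels in the ratio and T is recovered exactly:
print(f"{ratio_pyrometer_t(lam1, lam2, r):.1f} K")
```

If the emissivities at the two wavelengths differ, the cancellation fails and the recovered temperature is biased, which is the error mode described above for metals.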
Thermocouples were the traditional devices used for this purpose, but they are unsuitable for continuous measurement because they melt and degrade. Salt-bath furnaces are used for heat treatment. At high working temperatures, with intense heat transfer between the molten salt and the steel being treated, precision is maintained by measuring the temperature of the molten salt. Most errors are caused by slag on the surface, which is cooler than the salt bath. The tuyère pyrometer is an optical instrument for temperature measurement through the tuyeres, which are used for feeding air or reactants into the bath of the furnace. A steam boiler may be fitted with a pyrometer to measure the steam temperature in the superheater. A hot air balloon is equipped with a pyrometer for measuring the temperature at the top of the envelope.