Eddy (fluid dynamics)
In fluid dynamics, an eddy is the swirling of a fluid and the reverse current created when the fluid is in a turbulent flow regime. The moving fluid creates a space devoid of downstream-flowing fluid on the downstream side of the object. Fluid behind the obstacle flows into the void creating a swirl of fluid on each edge of the obstacle, followed by a short reverse flow of fluid behind the obstacle flowing upstream, toward the back of the obstacle; this phenomenon is observed behind large emergent rocks in swift-flowing rivers. The propensity of a fluid to swirl is used to promote good fuel/air mixing in internal combustion engines. In fluid mechanics and transport phenomena, an eddy is not a property of the fluid, but a violent swirling motion caused by the position and direction of turbulent flow. In 1883, scientist Osborne Reynolds conducted a fluid dynamics experiment involving water and dye, where he adjusted the velocities of the fluids and observed the transition from laminar to turbulent flow, characterized by the formation of eddies and vortices.
Turbulent flow is defined as flow in which the system's inertial forces are dominant over the viscous forces. This phenomenon is described by the Reynolds number, a unitless number used to determine when turbulent flow will occur. Conceptually, the Reynolds number is the ratio between inertial and viscous forces; the general form for flow through a tube of radius r is Re = 2vρr/μ = ρvd/μ, where v is the velocity, ρ the density, r the radius (d = 2r the diameter), and μ the dynamic viscosity. The transition from laminar to turbulent flow in a fluid is defined by the critical Reynolds number, Re_c ≈ 2000. In terms of the critical Reynolds number, the critical velocity is v_c = Re_c μ/(ρd). Hemodynamics is the study of blood flow in the circulatory system. Blood flow in straight sections of the arterial tree is laminar, but branches and curvatures in the system cause turbulent flow. Turbulent flow in the arterial tree can cause a number of concerning effects, including atherosclerotic lesions, postsurgical neointimal hyperplasia, in-stent restenosis, vein bypass graft failure, transplant vasculopathy, and aortic valve calcification.
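The Reynolds number and critical velocity formulas above can be turned into a short calculation. This is an illustrative sketch; the blood-like property values below are assumptions chosen only for demonstration.

```python
def reynolds_number(velocity, density, diameter, viscosity):
    """Re = rho * v * d / mu (dimensionless)."""
    return density * velocity * diameter / viscosity

def critical_velocity(re_critical, density, diameter, viscosity):
    """v_c = Re_c * mu / (rho * d), the speed at which transition is expected."""
    return re_critical * viscosity / (density * diameter)

rho = 1060.0   # density of blood, kg/m^3 (approximate)
mu = 3.5e-3    # dynamic viscosity of blood, Pa*s (approximate)
d = 0.02       # vessel diameter, m (assumed)
v = 0.3        # mean flow speed, m/s (assumed)

re = reynolds_number(v, rho, d, mu)
vc = critical_velocity(2000.0, rho, d, mu)
print(f"Re = {re:.0f}")        # below 2000, so laminar flow is expected
print(f"v_c = {vc:.3f} m/s")   # speed at which turbulence would set in
```

With these assumed values the flow sits just below the laminar-turbulent transition, which is consistent with flow in straight arterial sections being laminar.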
Lift and drag properties of golf balls are customized by the manipulation of dimples along the surface of the ball, allowing the ball to travel further and faster through the air. Data from turbulent-flow phenomena have been used to model transitions between fluid flow regimes, which are used to mix fluids and increase reaction rates within industrial processes. Oceanic and atmospheric currents transfer particles and organisms all across the globe. While the transport of organisms, such as phytoplankton, is essential for the preservation of ecosystems, pollutants are also mixed into these currents and can be carried far from their origin. Eddy formations circulate trash and other pollutants into concentrated areas which researchers are tracking to improve clean-up and pollution prevention. Mesoscale ocean eddies play crucial roles in transferring heat poleward, as well as maintaining heat gradients at different depths. In eddy viscosity turbulence models, the Reynolds stresses, as obtained from a Reynolds averaging of the Navier–Stokes equations, are modelled by a linear constitutive relationship with the mean flow straining field, as −ρ⟨u_i u_j⟩ = 2 μ_t S_ij − (2/3) ρ κ δ_ij, where μ_t is the coefficient termed turbulence "viscosity", κ = (1/2)⟨u_i u_i⟩ is the mean turbulent kinetic energy, and S_ij is the mean strain rate. Note that the inclusion of the (2/3) ρ κ δ_ij term in the linear constitutive relation is required for tensorial algebra purposes when solving for two-equation turbulence models (or any other turbulence model that solves a transport equation for κ).
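The linear eddy-viscosity closure above can be sketched numerically: the Reynolds stress tensor is assembled from the mean strain rate S_ij, a turbulent viscosity μ_t, and the turbulent kinetic energy κ. All numerical values below are illustrative assumptions.

```python
import numpy as np

def reynolds_stress(S, mu_t, rho, k):
    """-rho<u_i u_j> = 2 * mu_t * S_ij - (2/3) * rho * k * delta_ij"""
    delta = np.eye(3)
    return 2.0 * mu_t * S - (2.0 / 3.0) * rho * k * delta

# A simple shear flow: du/dy is the only nonzero mean-velocity gradient,
# so the only nonzero strain-rate components are S_12 = S_21 = 0.5 * du/dy.
dudy = 10.0                     # shear rate, 1/s (assumed)
S = np.zeros((3, 3))
S[0, 1] = S[1, 0] = 0.5 * dudy

tau = reynolds_stress(S, mu_t=0.05, rho=1.2, k=0.3)
print(tau)
# For incompressible flow 2*mu_t*S is trace-free, so the (2/3) rho k delta_ij
# term carries the whole isotropic part: trace(tau) = -2 * rho * k.
print(np.trace(tau))
```

The trace check illustrates why the isotropic term is needed: without it, the modelled stresses could not reproduce the correct turbulent kinetic energy.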
Aerodynamics, from Greek ἀήρ aer + δυναμική, is the study of the motion of air, particularly its interaction with a solid object such as an airplane wing. It is a sub-field of fluid dynamics and gas dynamics, and many aspects of aerodynamics theory are common to these fields; the term aerodynamics is often used synonymously with gas dynamics, the difference being that "gas dynamics" applies to the study of the motion of all gases and is not limited to air. The formal study of aerodynamics began in the modern sense in the eighteenth century, although observations of fundamental concepts such as aerodynamic drag were recorded much earlier. Most of the early efforts in aerodynamics were directed toward achieving heavier-than-air flight, first demonstrated by Otto Lilienthal in 1891. Since then, the use of aerodynamics through mathematical analysis, empirical approximations, wind tunnel experimentation, and computer simulations has formed a rational basis for the development of heavier-than-air flight and a number of other technologies.
Recent work in aerodynamics has focused on issues related to compressible flow and boundary layers and has become increasingly computational in nature. Modern aerodynamics only dates back to the seventeenth century, but aerodynamic forces have been harnessed by humans for thousands of years in sailboats and windmills, and images and stories of flight appear throughout recorded history, such as the Ancient Greek legend of Icarus and Daedalus. Fundamental concepts of continuum and pressure gradients appear in the work of Aristotle and Archimedes. In 1726, Sir Isaac Newton became the first person to develop a theory of air resistance, making him one of the first aerodynamicists. Dutch-Swiss mathematician Daniel Bernoulli followed in 1738 with Hydrodynamica, in which he described a fundamental relationship between pressure and flow velocity for incompressible flow known today as Bernoulli's principle, which provides one method for calculating aerodynamic lift. In 1757, Leonhard Euler published the more general Euler equations, which could be applied to both compressible and incompressible flows.
The Euler equations were extended to incorporate the effects of viscosity in the first half of the 1800s, resulting in the Navier–Stokes equations. The Navier–Stokes equations are the most general governing equations of fluid flow but are difficult to solve for the flow around all but the simplest of shapes. In 1799, Sir George Cayley became the first person to identify the four aerodynamic forces of flight, as well as the relationships between them, and in doing so outlined the path toward achieving heavier-than-air flight for the next century. In 1871, Francis Herbert Wenham constructed the first wind tunnel, allowing precise measurements of aerodynamic forces. Drag theories were developed by Jean le Rond d'Alembert, Gustav Kirchhoff, and Lord Rayleigh. In 1889, Charles Renard, a French aeronautical engineer, became the first person to reasonably predict the power needed for sustained flight. Otto Lilienthal, the first person to become highly successful with glider flights, was also the first to propose thin, curved airfoils that would produce high lift and low drag.
Building on these developments as well as research carried out in their own wind tunnel, the Wright brothers flew the first powered airplane on December 17, 1903. During the time of the first flights, Frederick W. Lanchester, Martin Kutta, and Nikolai Zhukovsky independently created theories that connected circulation of a fluid flow to lift. Kutta and Zhukovsky went on to develop a two-dimensional wing theory. Expanding upon the work of Lanchester, Ludwig Prandtl is credited with developing the mathematics behind thin-airfoil and lifting-line theories as well as work with boundary layers. As aircraft speed increased, designers began to encounter challenges associated with air compressibility at speeds near or greater than the speed of sound. The differences in air flow under such conditions lead to problems in aircraft control, increased drag due to shock waves, and the threat of structural failure due to aeroelastic flutter. The ratio of the flow speed to the speed of sound was named the Mach number after Ernst Mach, one of the first to investigate the properties of supersonic flow.
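The circulation theories mentioned above are summarized by the Kutta–Joukowski theorem, which gives lift per unit span as L' = ρVΓ. The numbers below are illustrative assumptions, not data from any particular airfoil.

```python
def lift_per_span(rho, velocity, circulation):
    """Kutta-Joukowski theorem: L' = rho * V * Gamma, in N per metre of span."""
    return rho * velocity * circulation

rho = 1.225   # sea-level air density, kg/m^3
V = 50.0      # freestream speed, m/s (assumed)
gamma = 20.0  # circulation around the airfoil, m^2/s (assumed)

print(f"{lift_per_span(rho, V, gamma):.1f} N/m")
```

The theorem holds for two-dimensional, inviscid, incompressible flow; Prandtl's lifting-line theory extends the same circulation idea to finite wings.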
William John Macquorn Rankine and Pierre Henri Hugoniot independently developed the theory for flow properties before and after a shock wave, while Jakob Ackeret led the initial work of calculating the lift and drag of supersonic airfoils. Theodore von Kármán and Hugh Latimer Dryden introduced the term transonic to describe flow speeds around Mach 1 where drag increases rapidly; this rapid increase in drag led aerodynamicists and aviators to disagree on whether supersonic flight was achievable until the sound barrier was broken for the first time in 1947 using the Bell X-1 aircraft. By the time the sound barrier was broken, aerodynamicists' understanding of the subsonic and low supersonic flow had matured; the Cold War prompted the design of an ever-evolving line of high performance aircraft. Computational fluid dynamics began as an effort to solve for flow properties around complex objects and has grown to the point where entire aircraft can be designed using computer software, with wind-tunnel tests followed by flight tests to confirm the computer predictions.
Understanding of supersonic and hypersonic aerodynamics has matured since the 1960s, and the goals of aerodynamicists have shifted from the behavior of fluid flow to the engineering of a vehicle such that it interacts predictably with the fluid flow. Designing aircraft for supersonic and hypersonic conditions, as well as the desire to improve the aerodynamic efficiency of current aircraft and propulsion systems, continues to motivate new research in aerodynamics.
The emission spectrum of a chemical element or chemical compound is the spectrum of frequencies of electromagnetic radiation emitted when an atom or molecule makes a transition from a high energy state to a lower energy state. The energy of the emitted photon is equal to the energy difference between the two states. There are many possible electron transitions for each atom, and each transition has a specific energy difference; this collection of different transitions, leading to different radiated wavelengths, makes up an emission spectrum. Each element's emission spectrum is unique. Therefore, spectroscopy can be used to identify the elements in matter of unknown composition, and the emission spectra of molecules can be used in chemical analysis of substances. In physics, emission is the process by which a higher energy quantum mechanical state of a particle is converted to a lower one through the emission of a photon, resulting in the production of light; the frequency of the emitted light is a function of the energy of the transition.
Since energy must be conserved, the energy difference between the two states equals the energy carried off by the photon. The energy states of the transitions can lead to emissions over a very large range of frequencies. For example, visible light is emitted by the coupling of electronic states in molecules. On the other hand, nuclear shell transitions can emit high-energy gamma rays, while nuclear spin transitions emit low-energy radio waves. The emittance of an object quantifies how much light it emits. This may be related to other properties of the object through the Stefan–Boltzmann law. For most substances, the amount of emission varies with the temperature and the spectroscopic composition of the object, leading to the appearance of color temperature and emission lines. Precise measurements at many wavelengths allow the identification of a substance via emission spectroscopy. Emission of radiation is typically described using semi-classical quantum mechanics: the particle's energy levels and spacings are determined from quantum mechanics, and light is treated as an oscillating electric field that can drive a transition if it is in resonance with the system's natural frequency.
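The Stefan–Boltzmann law mentioned above relates total thermal emission to temperature as P = εσAT⁴. A minimal sketch, with an assumed ideal black-body surface:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(emissivity, area_m2, temp_k):
    """Total thermal power radiated by a surface: P = eps * sigma * A * T^4 (W)."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# A 1 m^2 black body (emissivity 1) at room temperature, 300 K:
p = radiated_power(1.0, 1.0, 300.0)
print(f"{p:.1f} W")   # about 459 W
```

The strong T⁴ dependence is why color temperature is such a sensitive indicator of an object's temperature.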
The quantum mechanics problem is treated using time-dependent perturbation theory and leads to the general result known as Fermi's golden rule. The description has been superseded by quantum electrodynamics, although the semi-classical version continues to be more useful in most practical computations. When the electrons in the atom are excited, for example by being heated, the additional energy pushes the electrons to higher energy orbitals. When the electrons fall back down and leave the excited state, energy is re-emitted in the form of a photon; the wavelength of the photon is determined by the difference in energy between the two states. These emitted photons form the element's spectrum. The fact that only certain colors appear in an element's atomic emission spectrum means that only certain frequencies of light are emitted. Each of these frequencies is related to energy by the formula E_photon = hν, where E_photon is the energy of the photon, ν is its frequency, and h is Planck's constant.
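The relation E_photon = hν can be turned into a small calculation: given the energy gap between two states, find the frequency and wavelength of the emitted photon. The 1.89 eV gap used here is the hydrogen n=3 to n=2 (H-alpha) transition.

```python
H = 6.62607015e-34    # Planck's constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_frequency(energy_ev):
    """nu = E / h, with the transition energy E given in eV."""
    return energy_ev * EV / H

def photon_wavelength_nm(energy_ev):
    """lambda = c / nu, converted to nanometres."""
    return C / photon_frequency(energy_ev) * 1e9

lam = photon_wavelength_nm(1.89)
print(f"{lam:.0f} nm")   # about 656 nm, the red H-alpha line
```

Repeating this for each allowed energy gap reproduces the discrete line positions that make up an element's emission spectrum.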
It follows that only photons with specific energies are emitted by the atom. The principle of the atomic emission spectrum explains the varied colors in neon signs, as well as chemical flame test results. The frequencies of light that an atom can emit depend on the states its electrons can occupy. When excited, an electron moves to a higher energy orbital; when the electron falls back to its ground level, light is emitted. The visible light emission spectrum of hydrogen illustrates this. If only a single atom of hydrogen were present, only a single wavelength would be observed at a given instant. Several of the possible emissions are observed because the sample contains many hydrogen atoms that are in different initial energy states and reach different final energy states; these different combinations lead to simultaneous emissions at different wavelengths. As well as the electronic transitions discussed above, the energy of a molecule can also change via rotational and vibronic transitions; these energy transitions lead to closely spaced groups of many different spectral lines, known as spectral bands.
Unresolved band spectra may appear as a spectral continuum. Light consists of electromagnetic radiation of different wavelengths. When elements or their compounds are heated in a flame or by an electric arc, they emit energy in the form of light. Analysis of this light with the help of a spectroscope gives a discontinuous spectrum. A spectroscope or spectrometer is an instrument used for separating the components of light, which have different wavelengths; the spectrum appears as a series of lines called the line spectrum. This line spectrum is called an atomic spectrum, and each element has a different atomic spectrum. The production of line spectra by the atoms of an element indicates that an atom can radiate only certain amounts of energy; this leads to the conclusion that bound electrons cannot have just any amount of energy but only certain discrete amounts. The emission spectrum can be used to determine the composition of a material, since it is different for each element of the periodic table.
One example is astronomical spectroscopy: identifying the composition of stars by analysing the light they emit.
Space Shuttle Endeavour
Space Shuttle Endeavour is a retired orbiter from NASA's Space Shuttle program and the fifth and final operational Shuttle built. It embarked on its first mission, STS-49, in May 1992 and its 25th and final mission, STS-134, in May 2011. STS-134 was expected to be the final mission of the Space Shuttle program, but with the authorization of STS-135, Atlantis became the last shuttle to fly. The United States Congress approved the construction of Endeavour in 1987 to replace Challenger, destroyed in 1986. Structural spares built during the construction of Discovery and Atlantis were used in its assembly. NASA chose, on cost grounds, to build Endeavour from spares rather than refitting Enterprise or accepting a Rockwell International proposal to build two Shuttles for the price of one. The orbiter is named after the British HMS Endeavour, the ship which took Captain James Cook on his first voyage of discovery, which is why the name is spelled in the British manner rather than the American "Endeavor"; the spelling has caused confusion, including when NASA itself misspelled a sign on the launch pad in 2007.
The Space Shuttle carried a piece of the original wood from Cook's ship inside the cockpit. The name also honored Endeavour, the Command Module of Apollo 15, itself named after Cook's ship. Endeavour was named through a national competition involving students in elementary and secondary schools. Entries included an essay about the name, the story behind it, why it was appropriate for a NASA shuttle, and the project that supported the name. Endeavour was the most popular entry, accounting for one-third of the state-level winners. The national winners were Senatobia Middle School in Senatobia, Mississippi, in the elementary division and Tallulah Falls School in Tallulah Falls, Georgia, in the upper school division. They were honored at several ceremonies in Washington, D.C., including a White House ceremony where then-President George H. W. Bush presented awards to each school. Endeavour was delivered by Rockwell International Space Transportation Systems Division in May 1991 and first launched a year later, in May 1992, on STS-49.
Rockwell International claimed that it had made no profit on Space Shuttle Endeavour, despite construction costing US$2.2 billion. On its first mission, it redeployed the stranded INTELSAT VI communications satellite; the first African-American woman astronaut, Mae Jemison, was launched into space on the mission STS-47 on September 12, 1992. Endeavour flew the first servicing mission STS-61 for the Hubble Space Telescope in 1993. In 1997 it was withdrawn from service for eight months for a retrofit, including installation of a new airlock. In December 1998, it delivered the Unity Module to the International Space Station. Endeavour's last Orbiter Major Modification period began in December 2003 and ended on October 6, 2005. During this time, Endeavour received major hardware upgrades, including a new, multi-functional, electronic display system referred to as a glass cockpit, an advanced GPS receiver, along with safety upgrades recommended by the Columbia Accident Investigation Board for the shuttle's return to flight following the loss of Columbia during reentry on 1 February 2003.
The STS-118 mission, Endeavour's first since the refit, included among its crew astronaut Barbara Morgan, a member of the Astronaut Corps from 1998 to 2008 who had originally been assigned to the Teacher in Space project. Morgan was the backup for Christa McAuliffe on the ill-fated STS-51-L mission in 1986. As it was constructed later than its elder sisters, Endeavour was built with new hardware designed to improve and expand orbiter capabilities. Most of this equipment was later incorporated into the other three orbiters during out-of-service major inspection and modification programs. Endeavour's upgrades include: a 40-foot diameter drag chute that reduced the orbiter's landing roll-out distance from 3,000 feet to 2,000 feet; the plumbing and electrical connections needed for Extended Duration Orbiter modifications to allow up to a 28-day mission; and updated avionics systems that included advanced general purpose computers, improved inertial measurement units and tactical air navigation systems, enhanced master events controllers and multiplexer-demultiplexers, a solid-state star tracker, and improved nose wheel steering mechanisms.
An improved version of the Auxiliary Power Units that provided power to operate the Shuttle's hydraulic systems. Modifications resulting from a 2005–2006 refit of Endeavour included the Station-to-Shuttle Power Transfer System (SSPTS), which converted 8 kilowatts of DC power from the ISS main voltage of 120 VDC to the orbiter bus voltage of 28 VDC; this upgrade allowed Endeavour to remain on-orbit while docked at the ISS for an additional 3- to 4-day duration. The corresponding power equipment was added to the ISS during the STS-116 station assembly mission, and Endeavour first flew with SSPTS capability during STS-118. Endeavour flew its final mission, STS-134, to the International Space Station in May 2011. After the conclusion of STS-134, Endeavour was formally decommissioned. STS-134 was intended to launch in late 2010, but on July 1 NASA released a statement saying the Endeavour mission was rescheduled for February 27, 2011. "The target dates were adjusted because critical payload hardware for STS-133 will not be ready in time to support the planned 16 September launch," NASA said in a statement.
With the Discovery launch moving to November, the Endeavour mission "cannot fly as planned, so the next available launch window is in February 2011," NASA said.
The ionosphere is the ionized part of Earth's upper atmosphere, from about 60 km to 1,000 km altitude, a region that includes the thermosphere and parts of the mesosphere and exosphere. The ionosphere is ionized by solar radiation, and it forms the inner edge of the magnetosphere. It has practical importance because, among other functions, it influences radio propagation to distant places on the Earth. As early as 1839, the German mathematician and physicist Carl Friedrich Gauss postulated that an electrically conducting region of the atmosphere could account for observed variations of Earth's magnetic field. Sixty years later, Guglielmo Marconi received the first trans-Atlantic radio signal on December 12, 1901, in St. John's, Newfoundland, using a 152.4 m kite-supported antenna for reception. The transmitting station in Poldhu used a spark-gap transmitter to produce a signal with a frequency of about 500 kHz and a power 100 times greater than any radio signal previously produced; the message received was three dits, the Morse code for the letter S.
To reach Newfoundland the signal would have had to bounce off the ionosphere twice. Dr. Jack Belrose has contested this, based on theoretical and experimental work. However, Marconi did achieve transatlantic wireless communications in Glace Bay, Nova Scotia, one year later. In 1902, Oliver Heaviside proposed the existence of the Kennelly–Heaviside layer of the ionosphere, which bears his name. Heaviside's proposal included means by which radio signals are transmitted around the Earth's curvature. Heaviside's proposal, coupled with Planck's law of black-body radiation, may have hampered the growth of radio astronomy for the detection of electromagnetic waves from celestial bodies until 1932. Also in 1902, Arthur Edwin Kennelly discovered some of the ionosphere's radio-electrical properties. In 1912, the U.S. Congress imposed the Radio Act of 1912 on amateur radio operators, limiting their operations to frequencies above 1.5 MHz, which the government thought were useless; this restriction led to the discovery of HF radio propagation via the ionosphere in 1923.
In 1926, Scottish physicist Robert Watson-Watt introduced the term ionosphere in a letter published only in 1969 in Nature: "We have in quite recent years seen the universal adoption of the term 'stratosphere'... and... the companion term 'troposphere'... The term 'ionosphere', for the region in which the main characteristic is large scale ionisation with considerable mean free paths, appears appropriate as an addition to this series." In the early 1930s, test transmissions of Radio Luxembourg inadvertently provided evidence of the first radio modification of the ionosphere. Edward V. Appleton was awarded a Nobel Prize in 1947 for his confirmation in 1927 of the existence of the ionosphere. Lloyd Berkner first measured the density of the ionosphere; this permitted the first complete theory of short-wave radio propagation. Maurice V. Wilkes and J. A. Ratcliffe researched the topic of radio propagation of long radio waves in the ionosphere. Vitaly Ginzburg developed a theory of electromagnetic wave propagation in plasmas such as the ionosphere.
In 1962, the Canadian satellite Alouette 1 was launched to study the ionosphere. Following its success were Alouette 2 in 1965, the two ISIS satellites in 1969 and 1971, and AEROS-A and -B in 1972 and 1975, all for measuring the ionosphere. On July 26, 1963, the first operational geosynchronous satellite, Syncom 2, was launched; the on-board radio beacons on this satellite enabled, for the first time, the measurement of total electron content variation along a radio beam from geostationary orbit to an earth receiver. Australian geophysicist Elizabeth Essex-Cohen used this technique from 1969 onwards to monitor the atmosphere above Australia and Antarctica. The ionosphere is a shell of electrons and electrically charged atoms and molecules that surrounds the Earth, stretching from a height of about 50 km to more than 1,000 km. It exists due to ultraviolet radiation from the Sun. The lowest part of the Earth's atmosphere, the troposphere, extends from the surface to about 10 km. Above that lies the stratosphere, followed by the mesosphere.
In the stratosphere incoming solar radiation creates the ozone layer. At heights above 80 km, in the thermosphere, the atmosphere is so thin that free electrons can exist for short periods of time before they are captured by a nearby positive ion; the number of these free electrons is sufficient to affect radio propagation. This portion of the atmosphere is ionized and contains a plasma, referred to as the ionosphere. Ultraviolet, X-ray, and shorter wavelengths of solar radiation are ionizing, since photons at these frequencies contain sufficient energy to dislodge an electron from a neutral gas atom or molecule upon absorption. In this process the light electron obtains a high velocity, so that the temperature of the created electron gas is much higher than that of the ions and neutrals. The reverse process to ionization is recombination, in which a free electron is "captured" by a positive ion. Recombination occurs spontaneously and causes the emission of a photon carrying away the energy produced upon recombination.
As gas density increases at lower altitudes, the recombination process prevails, since the gas molecules and ions are closer together. The balance between these two processes determines the degree of ionization present.
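The balance between ionization and recombination described above can be sketched with a standard simplification: electron density N_e evolves as dN_e/dt = q − αN_e², where q is the ionization (production) rate and α the recombination coefficient, giving a steady state N_e = sqrt(q/α). The numerical values below are assumptions for illustration only.

```python
import math

def equilibrium_density(q, alpha):
    """Steady-state electron density where production q balances
    recombination loss alpha * N_e^2, i.e. N_e = sqrt(q / alpha)."""
    return math.sqrt(q / alpha)

q = 1.0e9        # ion-electron pair production rate, m^-3 s^-1 (assumed)
alpha = 1.0e-13  # recombination coefficient, m^3 s^-1 (assumed)

print(f"{equilibrium_density(q, alpha):.2e} electrons/m^3")
```

Because recombination scales with N_e², denser, lower layers equilibrate quickly after sunset, while the thin upper layers retain their free electrons far longer.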
The stratosphere is the second major layer of Earth's atmosphere, just above the troposphere and below the mesosphere. The stratosphere is stratified in temperature, with warmer layers higher and cooler layers closer to the Earth; this is in contrast to the troposphere, near the Earth's surface, where temperature decreases with altitude. The border between the troposphere and stratosphere, the tropopause, marks where this temperature inversion begins. Near the equator, the stratosphere starts as high as 20 km, around 10 km at midlatitudes, and at about 7 km at the poles. Temperatures range from an average of −51 °C near the tropopause to an average of −15 °C near the mesosphere. Stratospheric temperatures also vary as the seasons change, reaching particularly low temperatures in the polar night. Winds in the stratosphere can far exceed those in the troposphere, reaching near 60 m/s in the Southern polar vortex. The mechanism describing the formation of the ozone layer was described by British mathematician Sydney Chapman in 1930.
Molecular oxygen absorbs high energy sunlight in the UV-C region, at wavelengths shorter than about 240 nm. Radicals produced from the homolytically split oxygen molecules combine with molecular oxygen to form ozone. Ozone in turn is photolysed much more rapidly than molecular oxygen because it has a stronger absorption that occurs at longer wavelengths, where the solar emission is more intense. Ozone photolysis produces O and O2; the oxygen atom product combines with atmospheric molecular oxygen to reform ozone, releasing heat. The rapid photolysis and reformation of ozone heats the stratosphere, resulting in a temperature inversion; this increase of temperature with altitude is characteristic of the stratosphere. This vertical stratification, with warmer layers above and cooler layers below, makes the stratosphere dynamically stable: there is no regular convection and associated turbulence in this part of the atmosphere. However, exceptionally energetic convection processes, such as volcanic eruption columns and overshooting tops in severe supercell thunderstorms, may carry convection into the stratosphere on a local and temporary basis.
Overall, the attenuation by the ozone layer of solar UV at wavelengths that damage DNA allows life to exist on the surface of the planet outside of the ocean. All air entering the stratosphere must pass through the tropopause, the temperature minimum that divides the troposphere and stratosphere; the rising air is thus freeze-dried. The top of the stratosphere is called the stratopause, above which the temperature decreases with height. Sydney Chapman gave a correct description of the source of stratospheric ozone and its ability to generate heat within the stratosphere. We now know that there are additional ozone loss mechanisms and that these mechanisms are catalytic, meaning that a small amount of the catalyst can destroy a great number of ozone molecules. The first is due to the reaction of hydroxyl radicals (OH·) with ozone. OH· is formed by the reaction of electronically excited oxygen atoms, produced by ozone photolysis, with water vapor. While the stratosphere is dry, additional water vapor is produced in situ by the photochemical oxidation of methane.
The HO2 radical produced by the reaction of OH with O3 is recycled to OH by reaction with oxygen atoms or ozone. In addition, solar proton events can affect ozone levels via radiolysis with the subsequent formation of OH. Laughing gas or nitrous oxide is produced by biological activity at the surface and is oxidised to NO in the stratosphere. Chlorofluorocarbon molecules are photolysed in the stratosphere releasing chlorine atoms that react with ozone giving ClO and O2; the chlorine atoms are recycled when ClO reacts with O in the upper stratosphere, or when ClO reacts with itself in the chemistry of the Antarctic ozone hole. Paul J. Crutzen, Mario J. Molina and F. Sherwood Rowland were awarded the Nobel Prize in Chemistry in 1995 for their work describing the formation and decomposition of stratospheric ozone. Commercial airliners cruise at altitudes of 9–12 km, in the lower reaches of the stratosphere in temperate latitudes; this optimizes fuel efficiency due to the low temperatures encountered near the tropopause and low air density, reducing parasitic drag on the airframe.
Stated another way, it allows the airliner to fly faster while maintaining lift equal to the weight of the plane, and it allows the airplane to stay above the turbulent weather of the troposphere. The Concorde aircraft cruised at Mach 2 at about 18,000 m, and the SR-71 cruised at Mach 3 at 26,000 m, both within the stratosphere. Because the temperature in the tropopause and lower stratosphere is largely constant with increasing altitude, little convection and its resultant turbulence occurs there. Most turbulence at this altitude is caused by variations in the jet stream and other local wind shears, although areas of significant convective activity (thunderstorms) may penetrate the lower stratosphere.
The sodium layer is a layer of neutral atoms of sodium within Earth's mesosphere. This layer lies within an altitude range of 80–105 km above sea level and has a depth of about 5 km; the sodium comes from the ablation of meteors. Atmospheric sodium below this layer is chemically bound in compounds such as sodium oxide, while the sodium atoms above the layer tend to be ionized. The column density varies with the season; a typical value is about 4 billion sodium atoms per square centimetre, which for a typical layer thickness of 5 km corresponds to a volume density of 8,000 sodium atoms/cm3. Atoms of sodium in this layer can become excited due to the solar wind or other causes. Once excited, these atoms radiate efficiently around 589 nm, in the yellow portion of the spectrum; these radiation bands are known as the sodium D lines. The resulting radiation is one of the sources of airglow. Astronomers have found the sodium layer to be useful for creating an artificial laser guide star in the upper atmosphere; the guide star is used by adaptive optics to compensate for movements in the atmosphere. As a result, optical telescopes can perform much closer to their theoretical limit of resolution.
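The volume density quoted above follows from dividing a column density by the layer thickness. A minimal check, assuming a typical column density of 4×10⁹ atoms/cm² (an illustrative value, not a measurement from this text):

```python
def volume_density(column_density_cm2, thickness_km):
    """Convert a column density (atoms/cm^2) and a layer thickness (km)
    into a mean volume density (atoms/cm^3)."""
    thickness_cm = thickness_km * 1.0e5  # 1 km = 1e5 cm
    return column_density_cm2 / thickness_cm

# 4e9 atoms/cm^2 spread over a 5 km thick layer:
print(volume_density(4.0e9, 5.0))   # -> 8000.0 atoms/cm^3
```

The same conversion applies to any layered atmospheric constituent measured by column abundance.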
The sodium layer was first discovered in 1929 by American astronomer Vesto Slipher. In 1939 the British-American geophysicist Sydney Chapman proposed a reaction-cycle theory to explain the night-glow phenomenon.