1.
Meteorology
–
Meteorology is a branch of the atmospheric sciences which includes atmospheric chemistry and atmospheric physics, with a major focus on weather forecasting. The study of meteorology dates back millennia, though significant progress did not occur until the 18th century; the 19th century saw modest progress after weather observation networks were formed across broad regions. Prior attempts at prediction of weather depended on historical data. Meteorological phenomena are observable weather events that are explained by the science of meteorology. Different spatial scales are used to describe and predict weather on local, regional, and global levels. Meteorology, climatology, atmospheric physics, and atmospheric chemistry are sub-disciplines of the atmospheric sciences. Meteorology and hydrology compose the interdisciplinary field of hydrometeorology, and the interactions between Earth's atmosphere and its oceans are part of a coupled ocean-atmosphere system. Meteorology has applications in diverse fields such as the military, energy production, transport, and agriculture. The word meteorology is from Greek μετέωρος metéōros "lofty, high" and -λογία -logia "-logy". Varāhamihira's classical work Brihatsamhita, written about 500 AD, provides clear evidence that a deep knowledge of atmospheric processes existed even in those times. In 350 BC, Aristotle wrote Meteorology; Aristotle is considered the founder of meteorology. One of the most impressive achievements described in the Meteorology is the description of what is now known as the hydrologic cycle. On lightning, Aristotle wrote that "they are all called swooping bolts because they swoop down upon the Earth. Lightning is sometimes smoky, and is then called smoldering lightning; sometimes it darts quickly along; at other times it travels in crooked lines, and is called forked lightning. When it swoops down upon some object it is called swooping lightning." The Greek scientist Theophrastus compiled a book on weather forecasting, called the Book of Signs. 
The work of Theophrastus remained a dominant influence in the study of weather for centuries. In 25 AD, Pomponius Mela, a geographer for the Roman Empire, formalized the climatic zone system. According to Toufic Fahd, around the 9th century, Al-Dinawari wrote the Kitab al-Nabat. Ptolemy wrote on the atmospheric refraction of light in the context of astronomical observations. Roger Bacon was the first to calculate the size of the rainbow; he stated that a rainbow summit cannot appear higher than 42 degrees above the horizon. In the late 13th century and early 14th century, Kamāl al-Dīn al-Fārisī and Theodoric of Freiberg were the first to give the correct explanations for the primary rainbow phenomenon; Theodoric went further and also explained the secondary rainbow. In 1716, Edmund Halley suggested that aurorae are caused by magnetic effluvia moving along the Earth's magnetic field lines. In 1441, King Sejong's son, Prince Munjong, invented the first standardized rain gauge; these were sent throughout the Joseon Dynasty of Korea as an official tool to assess land taxes based upon a farmer's potential harvest. In 1450, Leone Battista Alberti developed a swinging-plate anemometer, known as the first anemometer. In 1607, Galileo Galilei constructed a thermoscope.
2.
Turbulence
–
Turbulence or turbulent flow is a flow regime in fluid dynamics characterized by chaotic changes in pressure and flow velocity. It is in contrast to a laminar flow regime, which occurs when a fluid flows in parallel layers. Turbulence is caused by excessive kinetic energy in parts of a fluid flow overcoming the damping effect of the fluid's viscosity; for this reason turbulence is easier to create in low-viscosity fluids. In general terms, in turbulent flow, unsteady vortices of many sizes appear and interact with each other; consequently, drag due to friction effects increases. This increases the energy needed to pump fluid through a pipe. However, this effect can also be exploited by devices such as aerodynamic spoilers on aircraft, which deliberately spoil the laminar flow to increase drag and reduce lift. The onset of turbulence can be predicted by a dimensionless quantity called the Reynolds number. However, turbulence has long resisted detailed physical analysis, and the interactions within turbulence create a very complex situation. Richard Feynman described turbulence as the most important unsolved problem of classical physics. Smoke rising from a cigarette is mostly turbulent flow; however, for the first few centimeters the flow is laminar. The smoke plume becomes turbulent as its Reynolds number increases, due to its flow velocity and characteristic length increasing. If a golf ball were smooth, the boundary layer flow over the front of the sphere would be laminar at typical conditions; however, the layer would separate early, as the pressure gradient switched from favorable to unfavorable. To prevent this from happening, the surface is dimpled to perturb the boundary layer. This results in higher skin friction, but moves the point of boundary layer separation further along, resulting in lower form drag. Examples of turbulence include the flow conditions in industrial equipment and machines; the external flow over all kinds of vehicles such as cars, airplanes, and ships; the motions of matter in stellar atmospheres; and a jet exhausting from a nozzle into a quiescent fluid. 
As the jet emerges into this external fluid, shear layers originating at the lips of the nozzle are created; these layers separate the fast-moving jet from the external fluid, and at a certain critical Reynolds number they become unstable and break down to turbulence. Biologically generated turbulence resulting from swimming animals affects ocean mixing. Snow fences work by inducing turbulence in the wind, forcing it to drop much of its snow load near the fence.
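The Reynolds number mentioned above can be computed in a few lines. The following sketch is illustrative: the function name, the sample fluid properties, and the rough transition threshold for pipe flow are assumptions, not values taken from the text.

```python
def reynolds_number(velocity, length, kinematic_viscosity):
    """Re = U * L / nu: ratio of inertial to viscous forces (dimensionless)."""
    return velocity * length / kinematic_viscosity

# Water (nu ~ 1.0e-6 m^2/s) moving at 2 m/s through a 0.05 m diameter pipe:
re = reynolds_number(2.0, 0.05, 1.0e-6)
# Re is on the order of 1e5, far above the commonly quoted ~2300-4000
# transition range for pipe flow, so this flow is expected to be turbulent.
```

The same formula applies to the smoke plume and golf-ball examples above: as velocity or characteristic length grows, Re grows and turbulence becomes more likely.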
3.
Radiosonde
–
Modern radiosondes measure or calculate the following variables: altitude, pressure, temperature, relative humidity, wind, cosmic ray readings at high altitude, and geographical position. Radiosondes measuring ozone concentration are known as ozonesondes. Radiosondes may operate at a radio frequency of 403 MHz or 1680 MHz. A radiosonde whose position is tracked as it ascends to give wind speed is called a rawinsonde; most radiosondes have radar reflectors and are technically rawinsondes. A radiosonde that is dropped from an airplane and falls, rather than being carried by a balloon, is called a dropsonde. Radiosondes are an essential source of meteorological data, and hundreds are launched all over the world daily. Early soundings used kites, but this proved to be difficult because the kites were linked to the ground and were difficult to manoeuvre in gusty conditions; furthermore, the sounding was limited to low altitudes because of the link to the ground. Gustave Hermite and Georges Besançon, from France, were the first, in 1892, to use a balloon to fly the meteograph. In 1898, Léon Teisserenc de Bort organized at the Observatoire de Météorologie Dynamique de Trappes the first regular use of these balloons. Data from these launches showed that the temperature lowered with height up to a certain altitude, which varied with the season. De Bort's discovery of the tropopause and stratosphere was announced in 1902 at the French Academy of Sciences. Other researchers, like Richard Aßmann and William Henry Dines, were working at the same time with similar instruments. In 1924, Colonel William Blair of the U.S. Signal Corps did the first primitive experiments with weather measurements from balloons. The first true radiosonde that sent precise encoded telemetry from weather sensors was invented in France by Robert Bureau, who coined the name radiosonde and flew the first instrument on January 7, 1929. Developed independently a year later, Pavel Molchanov flew a radiosonde on January 30, 1930. 
Molchanov's design became a popular standard because of its simplicity and because it converted sensor readings to Morse code. Working with a modified Molchanov sonde, Sergey Vernov was the first to use radiosondes to perform cosmic ray readings at high altitude. On April 1, 1935, he took measurements up to 13.6 km using a pair of Geiger counters in a circuit to avoid counting secondary ray showers. The sondes were tracked for two days. A rubber or latex balloon filled with either helium or hydrogen lifts the device up through the atmosphere. The maximum altitude to which the balloon ascends is determined by its diameter; balloon sizes can range from 100 to 3,000 g. As the balloon ascends through the atmosphere, the pressure decreases; eventually, the balloon will expand to the extent that its skin breaks, terminating the ascent. An 800 g balloon will burst at about 21 km. After bursting, a small parachute on the radiosonde's support line carries it to Earth. A typical radiosonde flight lasts 60 to 90 minutes. One radiosonde from Clark Air Base, Philippines, reached an altitude of 155,092 ft.
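The pressure decrease with altitude that drives the balloon's expansion can be sketched with the simple isothermal barometric formula. This is a rough approximation only; the scale height of about 7.6 km, the sea-level pressure default, and the function name are assumptions for illustration, not figures from the text.

```python
import math

def pressure_at_altitude(h_km, p0_hpa=1013.25, scale_height_km=7.6):
    """Isothermal barometric approximation: p = p0 * exp(-h / H)."""
    return p0_hpa * math.exp(-h_km / scale_height_km)

# Near the ~21 km burst altitude of an 800 g balloon, ambient pressure has
# fallen to a few percent of its sea-level value (roughly 60-70 hPa under
# these assumptions), which is why the balloon has expanded so much.
p_burst = pressure_at_altitude(21.0)
```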
4.
Supercell
–
A supercell is a thunderstorm that is characterized by the presence of a mesocyclone: a deep, persistently rotating updraft. For this reason, these storms are sometimes referred to as rotating thunderstorms. Of the four classifications of thunderstorms, supercells are the overall least common and have the potential to be the most severe. Supercells are often isolated from other thunderstorms, and can dominate the local weather up to 32 kilometres away. Supercells are often put into three types: classic, low-precipitation (LP), and high-precipitation (HP). LP supercells are found in climates that are more arid, such as the high plains of the United States. Supercells are usually isolated from other thunderstorms, although they can sometimes be embedded in a squall line. Typically, supercells are found in the warm sector of a low pressure system, propagating generally in a north-easterly direction in line with the cold front of the low pressure system. Because they can last for hours, they are known as quasi-steady-state storms. Supercells have the capability to deviate from the mean wind; if they track to the right or left of the mean wind, they are said to be right-movers or left-movers, respectively. Supercells can sometimes develop two separate updrafts with opposing rotations, which splits the storm into two supercells: one left-mover and one right-mover. Supercells can be any size, large or small, low- or high-topped. They usually produce copious amounts of hail, torrential rainfall, and strong winds. Supercells are one of the few types of clouds that typically spawn tornadoes within the mesocyclone, although only 30% or fewer do so. Supercells can occur anywhere in the world under the right weather conditions. The first storm to be identified as the supercell type was the Wokingham storm over England; Browning did the initial work, which was followed up by Lemon. Supercells occur occasionally in many other regions, including eastern China. 
The areas with the highest frequencies of supercells are similar to those with the most occurrences of tornadoes; see tornado climatology and Tornado Alley. The current conceptual model of a supercell was described in Severe Thunderstorm Evolution and Mesocyclone Structure as Related to Tornadogenesis by Leslie R. Lemon and Charles A. Doswell III. Supercells derive their rotation through tilting of horizontal vorticity caused by wind shear: strong updrafts lift the air turning about a horizontal axis and cause this air to turn about a vertical axis.
5.
Richardson number
–
The Richardson number is named after Lewis Fry Richardson. The Richardson number, or one of several variants, is of importance in weather forecasting and in investigating density and turbidity currents in oceans and lakes. If the Richardson number is much less than unity, buoyancy is unimportant in the flow. If it is much greater than unity, buoyancy is dominant. If the Richardson number is of order unity, then the flow is likely to be buoyancy-driven. In aviation, the Richardson number is used as a rough measure of expected air turbulence; a lower value indicates a higher degree of turbulence. Values in the range 10 to 0.1 are typical. In thermal convection problems, the Richardson number represents the importance of natural convection relative to forced convection. The Richardson number can also be expressed using a combination of the Grashof number and Reynolds number: Ri = Gr/Re². Typically, the natural convection is negligible when Ri < 0.1, and forced convection is negligible when Ri > 10. It may be noted that usually the forced convection is large relative to natural convection except in the case of extremely low forced flow velocities. In the design of water-filled thermal energy storage tanks, the Richardson number can be useful. In oceanography, the Richardson number has a more general form which takes stratification into account: Ri = N²/(dU/dz)², where N is the Brunt–Väisälä frequency and dU/dz is the vertical shear of the horizontal velocity. The Richardson number defined above is always considered positive. A negative value of N² indicates unstable density gradients with active convective overturning; under such circumstances the magnitude of negative Ri is not generally of interest. It can be shown that Ri < 1/4 is a condition for velocity shear to overcome the tendency of a stratified fluid to remain stratified. When Ri is large, turbulent mixing across the stratification is generally suppressed.
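Both forms of the number reduce to simple ratios and can be sketched directly. The function names and sample values below are illustrative assumptions; the formulas Ri = Gr/Re² and Ri = N²/(dU/dz)² are the ones given above.

```python
def richardson_convection(grashof, reynolds):
    """Mixed-convection form: Ri = Gr / Re^2."""
    return grashof / reynolds**2

def richardson_gradient(brunt_vaisala_freq, velocity_shear):
    """Oceanographic (gradient) form: Ri = N^2 / (dU/dz)^2."""
    return brunt_vaisala_freq**2 / velocity_shear**2

# N = 0.01 s^-1 with shear dU/dz = 0.03 s^-1 gives Ri ~ 0.11 < 1/4,
# so the shear can overcome the stratification and mixing is possible.
ri = richardson_gradient(0.01, 0.03)
```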
6.
Atmospheric thermodynamics
–
Atmospheric thermodynamics is the study of heat-to-work transformations that take place in the Earth's atmosphere and manifest as weather or climate. Atmospheric thermodynamic diagrams are used as tools in the forecasting of storm development. The atmosphere is an example of a non-equilibrium system; its dynamics are modified by the force of the pressure gradient. The tools used include the law of energy conservation, the ideal gas law, specific heat capacities, the assumption of isentropic processes, and moist adiabatic processes. Considerations of moist air and cloud theories typically involve various temperatures, such as the equivalent potential temperature and the wet-bulb and virtual temperatures. Connected areas are energy, momentum, and mass transfer; turbulence; interaction between air particles in clouds; convection; dynamics of tropical cyclones; and large-scale dynamics of the atmosphere. These equations form a basis for numerical weather and climate predictions. In 1873, thermodynamicist Willard Gibbs published Graphical Methods in the Thermodynamics of Fluids. Such foundations naturally began to be applied towards the development of theoretical models of atmospheric thermodynamics, which drew the attention of the best minds. Papers on atmospheric thermodynamics appeared in the 1860s that treated such topics as dry and moist adiabatic processes. In 1884 Heinrich Hertz devised the first atmospheric thermodynamic diagram. In 1911 Alfred Wegener published the book Thermodynamik der Atmosphäre (Leipzig); from here the development of atmospheric thermodynamics as a branch of science began to take root. The term atmospheric thermodynamics itself can be traced to Frank W. Very's 1919 publication. By the late 1970s various textbooks on the subject began to appear. Today, atmospheric thermodynamics is an integral part of weather forecasting. The thermodynamic efficiency of the Hadley system, considered as a heat engine, has been relatively constant over the 1979–2010 period. 
Parcels of air traveling close to the sea surface take up heat and water vapor; the rising air and subsequent condensation produce circulatory winds, propelled by the Coriolis force, which whip up waves and increase the amount of warm moist air that powers the cyclone. Both a decreasing temperature in the upper troposphere and an increasing temperature of the atmosphere close to the surface will increase the maximum winds observed in hurricanes. When applied to hurricane dynamics, this defines a Carnot heat engine cycle. The Clausius–Clapeyron relation shows how the water-holding capacity of the atmosphere increases by about 8% per degree Celsius increase in temperature. This water-holding capacity, or equilibrium vapor pressure, can be approximated using the August-Roche-Magnus formula e_s = 6.1094 exp(17.625 T / (T + 243.04)), with e_s in hPa and T in °C. This shows that when atmospheric temperature increases, the absolute humidity should also increase exponentially. See "An air-sea interaction theory for tropical cyclones", J. Atmos. Sci.
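The August-Roche-Magnus approximation can be evaluated directly, using the constants given above; the function name and sample temperatures below are assumptions for illustration.

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """August-Roche-Magnus: e_s = 6.1094 * exp(17.625*T / (T + 243.04)), T in deg C."""
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

# The water-holding capacity grows roughly exponentially with temperature,
# on the order of several percent per degree Celsius near room temperature:
growth = saturation_vapor_pressure_hpa(21.0) / saturation_vapor_pressure_hpa(20.0)
```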
7.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration of recognition was generated in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Numbers (EAN-13). 
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
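The SBN-to-ISBN conversion and the 10-digit check digit described above can be sketched as follows. The function names are illustrative assumptions; the weighting scheme is the standard ISBN-10 modulus-11 calculation.

```python
def sbn_to_isbn10(sbn9):
    """An SBN becomes an ISBN-10 by prefixing '0'; the check digit is unchanged."""
    return "0" + sbn9

def isbn10_check_digit(first_nine):
    """ISBN-10 check digit: weighted sum of the first nine digits (weights 10..2) mod 11."""
    total = sum(int(d) * (10 - i) for i, d in enumerate(first_nine))
    remainder = (11 - total % 11) % 11
    return "X" if remainder == 10 else str(remainder)

# The Hodder example from the text: SBN 340 01381 8 -> ISBN 0-340-01381-8.
# Prefixing '0' leaves the weighted sum valid, so the check digit is unchanged.
assert sbn_to_isbn10("340013818") == "0340013818"
assert isbn10_check_digit("034001381") == "8"
```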
8.
Adiabatic process
–
In thermodynamics, an adiabatic process is one that occurs without transfer of heat or matter between a thermodynamic system and its surroundings. In an adiabatic process, energy is transferred only as work. The adiabatic process provides a rigorous conceptual basis for the theory used to expound the first law of thermodynamics, and as such it is a key concept in thermodynamics. The adiabatic flame temperature is the temperature that would be achieved by a flame if the process of combustion took place in the absence of heat loss to the surroundings. A process that does not involve the transfer of heat or matter into or out of a system, so that Q = 0, is called an adiabatic process, and the assumption that a process is adiabatic is a frequently made simplifying assumption. For example, the compression of a gas within an engine cylinder is assumed to occur so rapidly that little of the system's energy can be transferred out as heat, even though the cylinders are not insulated and are quite conductive; the same can be said to be true for the expansion process of such a system. The assumption of adiabatic isolation of a system is a useful one; the behaviour of actual machines deviates from these idealizations, but the assumption of such perfect behaviour provides a useful first approximation of how the real world works. According to Laplace, when sound travels in a gas, there is no loss of heat in the medium and the propagation of sound is adiabatic. For this adiabatic process, the modulus of elasticity is E = γP, where γ is the ratio of specific heats at constant pressure and at constant volume. A reversible adiabatic process is called an isentropic process. Fictively, if the process is reversed, the energy added as work can be recovered entirely as work done by the system. If the walls of a system are not adiabatic, and energy is transferred in as heat, entropy is transferred into the system with the heat; such a process is neither adiabatic nor isentropic, having Q > 0. Naturally occurring adiabatic processes are irreversible. 
The transfer of energy as work into an adiabatically isolated system can be imagined as being of two idealized extreme kinds. In one such kind, there is no entropy produced within the system; in nature, this ideal kind occurs only approximately, because it demands an infinitely slow process and no sources of dissipation. The other extreme kind is isochoric work, for which energy is added as work solely through friction or viscous dissipation within the system. The second law of thermodynamics observes that a natural process, of transfer of energy as work, always consists at least of isochoric work. Every natural process, adiabatic or not, is irreversible, with ΔS > 0. The adiabatic compression of a gas causes a rise in temperature of the gas; adiabatic expansion against pressure, or a spring, causes a drop in temperature. In contrast, free expansion is an isothermal process for an ideal gas. Adiabatic heating occurs when the pressure of a gas is increased by work done on it by its surroundings; this finds practical application in diesel engines, which rely on the lack of quick heat dissipation during their compression stroke to elevate the fuel vapor temperature sufficiently to ignite it. Adiabatic heating also occurs in the Earth's atmosphere when an air mass descends, for example, in a katabatic wind or Foehn wind.
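The temperature change in a reversible adiabatic (isentropic) process for an ideal gas follows T₂ = T₁ (p₂/p₁)^((γ−1)/γ). A minimal sketch, with an assumed function name and illustrative pressures:

```python
def isentropic_temperature(t1_kelvin, p1, p2, gamma=1.4):
    """Reversible adiabatic ideal gas: T2 = T1 * (p2/p1)**((gamma - 1)/gamma)."""
    return t1_kelvin * (p2 / p1) ** ((gamma - 1.0) / gamma)

# Descending air compressed adiabatically from 700 hPa to 1000 hPa warms
# (gamma = 1.4 for dry air); compression raises T, expansion lowers it.
t_surface = isentropic_temperature(273.0, 700.0, 1000.0)  # roughly 302 K
```

This is the mechanism behind both diesel-engine ignition and the warming of descending air masses mentioned above.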
9.
Buoyancy
–
In science, buoyancy or upthrust is an upward force exerted by a fluid that opposes the weight of an immersed object. In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid; thus the pressure at the bottom of a column of fluid is greater than at the top of the column. Similarly, the pressure at the bottom of an object submerged in a fluid is greater than at the top of the object, and this pressure difference results in a net upward force on the object. For this reason, an object whose density is greater than that of the fluid in which it is submerged tends to sink. If the object is either less dense than the liquid or is shaped appropriately, the force can keep the object afloat. This can occur only in a reference frame which either has a gravitational field or is accelerating due to a force other than gravity defining a downward direction. In a situation of fluid statics, the net upward force is equal to the magnitude of the weight of fluid displaced by the body. The center of buoyancy of an object is the centroid of the displaced volume of fluid. Archimedes' principle is named after Archimedes of Syracuse, who first discovered this law in 212 BC. More tersely: buoyancy = weight of displaced fluid. The weight of the displaced fluid is directly proportional to the volume of the displaced fluid. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy. This is also known as upthrust. Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting upon it, and suppose that when the rock is lowered into water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyancy force: 10 − 3 = 7 newtons. 
Buoyancy reduces the apparent weight of objects that have sunk completely to the sea floor; it is generally easier to lift an object up through the water than it is to pull it out of the water. The density of the object relative to the density of the fluid can easily be calculated without measuring any volumes: density of object / density of fluid = weight / (weight − apparent immersed weight). Example: a helium balloon in a moving car. During a period of increasing speed, the air mass inside the car moves in the direction opposite to the car's acceleration, and the balloon is also pulled this way. However, because the balloon is buoyant relative to the air, it ends up being pushed out of the way and drifts in the direction of the acceleration. If the car slows down, the same balloon will begin to drift backward. For the same reason, the balloon drifts towards the inside of a curve as the car goes round it. The pressure inside a fluid in equilibrium is calculated from the hydrostatic balance of gravity against the pressure gradient.
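The relative-density relation above, together with the rock example, can be checked in a few lines; the function name is an assumption for illustration.

```python
def relative_density(weight_in_vacuum, apparent_immersed_weight):
    """rho_object / rho_fluid = W / (W - W_apparent)."""
    return weight_in_vacuum / (weight_in_vacuum - apparent_immersed_weight)

# The rock from the text: 10 N in vacuum, 7 N apparent weight in water.
# It displaced 3 N of water, so it is 10/3 (about 3.3) times as dense as water.
rho_ratio = relative_density(10.0, 7.0)
```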
10.
Lightning
–
Lightning is a sudden electrostatic discharge that occurs during a thunderstorm. This discharge occurs between electrically charged regions of a cloud, between two clouds, or between a cloud and the ground. The charged regions in the atmosphere temporarily equalize themselves through this discharge, referred to as a strike if it hits an object on the ground. Lightning causes light in the form of plasma, and sound in the form of thunder. Lightning may be seen and not heard when it occurs at a distance too great for the sound to carry as far as the light from the strike or flash. This article incorporates public domain material from the National Oceanic and Atmospheric Administration document Understanding Lightning. The details of the charging process are still being studied by scientists, but there is general agreement on some of the basic concepts of thunderstorm electrification. The main charging area in a thunderstorm occurs in the part of the storm where air is moving upward rapidly and temperatures range from −15 to −25 °C. At that place, the combination of temperature and rapid upward air movement produces a mixture of super-cooled cloud droplets, small ice crystals, and graupel. The updraft carries the super-cooled cloud droplets and very small ice crystals upward; at the same time, the graupel, which is larger and denser, tends to fall or be suspended. The differences in the movement of the precipitation cause collisions to occur: when the rising ice crystals collide with graupel, the ice crystals become positively charged and the graupel becomes negatively charged. The updraft carries the positively charged ice crystals upward toward the top of the storm cloud, while the larger and denser graupel is either suspended in the middle of the thunderstorm cloud or falls toward the lower part of the storm. The result is that the upper part of the thunderstorm cloud becomes positively charged while the middle to lower part of the thunderstorm cloud becomes negatively charged. This upper part of the cloud is called the anvil. 
While this is the main charging process for the thunderstorm cloud, there is in addition a small but important positive charge buildup near the bottom of the cloud due to the precipitation. Many factors affect the frequency, distribution, strength and physical properties of a lightning flash in a particular region of the world. These factors include ground elevation, latitude, prevailing wind currents, relative humidity, and proximity to warm and cold bodies of water. To a certain degree, the ratio between IC (intra-cloud), CC (cloud-to-cloud) and CG (cloud-to-ground) lightning may also vary by season in middle latitudes. Lightning's relative unpredictability limits a complete explanation of how or why it occurs; the actual discharge is the final stage of a very complex process. At its peak, a thunderstorm produces three or more strikes to the Earth per minute.
11.
Solar radiation
–
Solar irradiance is the power per unit area received from the Sun in the form of electromagnetic radiation in the wavelength range of the measuring instrument. Irradiance may be measured in space or at the Earth's surface after atmospheric absorption and scattering; it is measured perpendicular to the incoming sunlight. Total solar irradiance (TSI) is a measure of the solar power over all wavelengths per unit area incident on the Earth's upper atmosphere. The solar constant is a measure of mean TSI at a distance of one astronomical unit. Irradiance is a function of distance from the Sun and the solar cycle. Irradiance on Earth is also measured perpendicular to the incoming sunlight. Insolation is the power received on Earth per unit area on a horizontal surface; it depends on the height of the Sun above the horizon. The solar irradiance integrated over time is called solar irradiation, solar exposure or insolation; however, insolation is often used interchangeably with irradiance in practice. The SI unit of irradiance is the watt per square metre (W/m²). An alternate unit of measure is the langley per unit time. The solar energy industry uses watt-hours per square metre per unit time; for continuous irradiance, the relation to the SI unit is thus 1 kW/m² = 24 kWh/m²/day = 8,760 kWh/m²/year. Irradiance can also be expressed in suns, where one sun equals 1000 W/m² at the point of arrival. Part of the radiation reaching an object is absorbed and the remainder reflected. Usually the absorbed radiation is converted to thermal energy, increasing the object's temperature; man-made or natural systems, however, can convert part of the radiation into another form such as electricity or chemical bonds. The proportion of reflected radiation is the object's reflectivity or albedo. Insolation onto a surface is largest when the surface directly faces the Sun. As the angle between the surface and the Sun moves away from normal, the insolation is reduced in proportion to the angle's cosine; see effect of Sun angle on climate. 
In the figure, the angle shown is between the ground and the sunbeam rather than between the vertical direction and the sunbeam; hence the sine rather than the cosine is appropriate. A sunbeam one mile wide arrives from directly overhead, and another at a 30° angle to the horizontal. The sine of a 30° angle is 1/2, whereas the sine of a 90° angle is 1. Therefore, the angled sunbeam spreads the light over twice the area; consequently, half as much light falls on each square mile. This projection effect is the reason why Earth's polar regions are much colder than equatorial regions.
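The projection effect described above is just a sine factor on the direct beam; a small sketch (the function name and 1000 W/m² sample beam are illustrative assumptions):

```python
import math

def insolation_on_horizontal(direct_beam_w_m2, sun_elevation_deg):
    """Flux on a horizontal surface: I = I_beam * sin(elevation above horizon)."""
    return direct_beam_w_m2 * math.sin(math.radians(sun_elevation_deg))

# A beam at 30 degrees above the horizon delivers half the overhead flux,
# matching the sunbeam example in the text (sin 30 = 1/2, sin 90 = 1):
half = insolation_on_horizontal(1000.0, 30.0)
full = insolation_on_horizontal(1000.0, 90.0)
```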
12.
Surface weather analysis
–
Surface weather analysis is a special type of weather map that provides a view of weather elements over a geographical area at a specified time, based on information from ground-based weather stations. The first weather maps in the 19th century were drawn well after the fact to help devise a theory on storm systems. Use of surface analyses began first in the United States, spreading worldwide during the 1870s. Use of the Norwegian cyclone model for frontal analysis began in the late 1910s across Europe. Surface weather analyses have special symbols that show frontal systems, cloud cover, precipitation, or other important information. For example, an H may represent high pressure, implying clear skies; an L, on the other hand, may represent low pressure, which frequently accompanies precipitation. Various symbols are used not just for frontal zones and other surface boundaries on weather maps, but also for other features; areas of precipitation help determine the frontal type and location. The use of weather charts in a modern sense began in the middle portion of the 19th century in order to devise a theory on storm systems. The development of a telegraph network by 1845 made it possible to gather weather information from multiple distant locations quickly enough to preserve its value for real-time applications. The Smithsonian Institution developed its network of observers over much of the central and eastern United States. The U.S. Army Signal Corps inherited this network between 1870 and 1874 by an act of Congress, and expanded it to the west coast soon afterwards. The weather data was at first less useful as a result of the different times at which weather observations were made. The first attempts at time standardization took hold in Great Britain by 1855; the entire United States did not finally come under the influence of time zones until 1905. Other countries followed the lead of the United States in taking simultaneous weather observations, starting in 1873. 
Other countries then began preparing surface analyses. The use of frontal zones on weather maps did not appear until the introduction of the Norwegian cyclone model in the late 1910s, despite Loomis's earlier attempt at a similar notion in 1841; the term "front" arose because the leading edge of air mass changes bore resemblance to the military fronts of World War I. The effort to automate map plotting began in the United States in 1969; Hong Kong completed its process of automated surface plotting by 1987. In the United States, this development was achieved when Intergraph workstations were replaced by n-AWIPS workstations. Recent advances in both the fields of meteorology and geographic information systems have made it possible to devise finely tailored weather maps. Weather information can quickly be matched to relevant geographical detail; for instance, icing conditions can be mapped onto the road network. This will likely continue to lead to changes in the way surface analyses are created and displayed over the next several years. The pressureNET project is an attempt to gather surface pressure data using smartphones. When analyzing a weather map, a station model is plotted at each point of observation.
13.
Visibility
–
In meteorology, visibility is a measure of the distance at which an object or light can be clearly discerned. It is reported within surface weather observations and METAR code either in meters or statute miles. Visibility affects all forms of traffic: roads, sailing and aviation. Meteorological visibility refers to the transparency of air: in the dark, meteorological visibility is still the same as in daylight for the same air. The distance at which an object can be discerned and the distance at which a light can be seen have different values in air of a given extinction coefficient, and the latter varies with the background illumination; the former is represented by the meteorological optical range. In extremely clean air in Arctic or mountainous areas, the visibility can be up to 161 kilometres where there are large markers such as mountains or high ridges. However, visibility is often reduced somewhat by air pollution and high humidity. Various weather stations report this as haze or mist. Fog and smoke can reduce visibility to near zero, making driving extremely dangerous. The same can happen in a sandstorm in and near desert areas. Heavy rain not only causes low visibility, but also the inability to brake quickly due to hydroplaning. Blizzards and ground blizzards are also defined in part by low visibility. To define visibility, the case of a perfectly black object being viewed against a perfectly white background is examined. Because the object is assumed to be black, it must absorb all of the light incident on it. Thus when x = 0, F = 0 and CV = 1. Between the object and the observer, F is affected by additional light that is scattered into the observer's line of sight and by the absorption of light by gases and particles. Light scattered by particles outside of a beam may ultimately contribute to the irradiance at the target. Unlike absorbed light, scattered light is not lost from a system; rather, it can change directions and contribute to other directions. It is only lost from the beam traveling in one particular direction.
The multiple-scattering contribution to the irradiance at x is modified by the individual particle scattering coefficient and the concentration of particles. The intensity change dF is the result of these effects over a distance dx. Because dx is a measure of the amount of suspended gases and particles, the fractional reduction in F is dF = −b_ext F dx, where b_ext is the attenuation (extinction) coefficient. The scattering of light into the observer's line of sight can increase F over the distance dx. This increase is defined as b′ F_B dx, where b′ is a constant. The overall change in intensity is expressed as dF = (−b_ext F + b′ F_B) dx. Since F_B represents the background intensity, it is independent of x by definition.
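Integrating the attenuation relation dF = −b_ext F dx for a black object's contrast gives an exponential decay, C_V(x) = exp(−b_ext x); with the conventional 2% contrast threshold (an assumption, not stated in the text) this yields the Koschmieder visual range. A minimal sketch:

```python
import math

def visual_range(b_ext, contrast_threshold=0.02):
    """Koschmieder visual range, in the same length units as 1/b_ext.

    Integrating dF = -b_ext * F * dx gives contrast C_V(x) = exp(-b_ext * x);
    the visual range is the distance at which the contrast falls to the
    threshold (conventionally 2%), i.e. V = ln(1/0.02)/b_ext ~ 3.912/b_ext."""
    return math.log(1.0 / contrast_threshold) / b_ext

# Very clean air, b_ext = 0.02 per km -> roughly 196 km visibility.
print(round(visual_range(0.02), 1))
```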
14.
Vorticity
–
Conceptually, vorticity could be determined by marking parts of the continuum in a small neighborhood of the point in question and watching their relative displacements as they move along the flow. The vorticity vector would be twice the angular velocity vector of those particles relative to their center of mass. This quantity must not be confused with the angular velocity of the particles relative to some other point. More precisely, the vorticity is a pseudovector field ω→, defined as the curl of the flow velocity vector u→. The definition can be expressed by the vector analysis formula ω→ ≡ ∇ × u→, where ∇ is the del operator. In a two-dimensional flow, the vorticity is always perpendicular to the plane of the flow. The vorticity is related to the flow's circulation along a closed path by Stokes' theorem: for any infinitesimal surface element C with normal direction n→ and area dA, the circulation along the perimeter of C equals ω→ · n→ dA. Many phenomena, such as the blowing out of a candle by a puff of air, are more readily explained in terms of vorticity than in terms of the basic concepts of pressure and velocity. This applies, in particular, to the formation and motion of vortex rings. In a mass of continuum that is rotating like a rigid body, the vorticity is twice the angular velocity vector of that rotation. This is the case, for example, of water in a tank that has been spinning for a while around its vertical axis. The vorticity may be nonzero even when all particles are flowing along straight and parallel pathlines, if there is shear: in laminar flow through a pipe, for instance, the vorticity will be zero on the axis and maximum near the walls, where the shear is largest. Conversely, a flow may have zero vorticity even though its particles travel along curved trajectories. An example is the ideal irrotational vortex, where most particles rotate about some straight axis. Another way to visualize vorticity is to imagine that, instantaneously, a tiny part of the continuum becomes solid and the rest of the flow disappears.
If that tiny new solid particle is rotating, rather than just moving with the flow, there is vorticity in the flow. Mathematically, the vorticity of a three-dimensional flow is a pseudovector field, usually denoted by ω→, defined as the curl or rotational of the velocity field v→ describing the continuum motion. In Cartesian coordinates, with v→ = (v_x, v_y, v_z),

ω→ = ∇ × v→ = (∂v_z/∂y − ∂v_y/∂z, ∂v_x/∂z − ∂v_z/∂x, ∂v_y/∂x − ∂v_x/∂y).

In words, each component of the vorticity measures the local rotation of the flow about the corresponding coordinate axis. The evolution of the vorticity field in time is described by the vorticity equation, which can be derived from the Navier–Stokes equations. In many flows where viscosity can be neglected, the vorticity is zero almost everywhere; this is clearly true in the case of 2-D potential flow. Vorticity is a useful tool to understand how ideal potential flow solutions can be perturbed to model real flows. In general, the presence of viscosity causes a diffusion of vorticity away from the vortex cores into the general flow field. This is accounted for by the diffusion term in the vorticity transport equation. Thus, in cases of very viscous flows, the vorticity will be diffused throughout the flow field. A vortex line or vorticity line is a line which is everywhere tangent to the local vorticity vector. Vortex lines are defined by the relation dx/ω_x = dy/ω_y = dz/ω_z. A vortex tube is the surface in the continuum formed by all vortex lines passing through a given closed curve in the continuum.
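The rigid-body claim (vorticity equals twice the angular velocity) can be checked numerically with a finite-difference curl; a sketch with made-up values:

```python
import numpy as np

# Rigid-body rotation u = (-Omega*y, Omega*x): vorticity should be 2*Omega.
Omega = 3.0
n = 101
x = np.linspace(-1.0, 1.0, n)
y = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, y, indexing="xy")
u = -Omega * Y          # x-component of velocity
v = Omega * X           # y-component of velocity

# z-component of vorticity: dv/dx - du/dy, via centred differences.
dvdx = np.gradient(v, x, axis=1)
dudy = np.gradient(u, y, axis=0)
omega_z = dvdx - dudy
print(omega_z.mean())   # close to 2*Omega = 6
```

Because the velocity field is linear, the centred differences are exact and the computed vorticity is uniform.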
15.
Wind
–
Wind is the flow of gases on a large scale. On the surface of the Earth, wind consists of the bulk movement of air. Winds are commonly classified by their spatial scale, their speed, the types of forces that cause them, and the regions in which they occur. The strongest observed winds on a planet in the Solar System occur on Neptune. Winds have various aspects: an important one is velocity; another is the density of the gas involved; another is its energy content or wind energy. In meteorology, winds are often referred to according to their strength. Short bursts of high-speed wind are termed gusts; strong winds of intermediate duration are termed squalls. Long-duration winds have various names associated with their strength, such as breeze, gale and storm. The two main causes of large-scale atmospheric circulation are the differential heating between the equator and the poles, and the rotation of the planet. Within the tropics, thermal low circulations over terrain and high plateaus can drive monsoon circulations. In coastal areas the sea breeze/land breeze cycle can define local winds; in areas that have variable terrain, mountain and valley breezes can prevail. Wind powers the voyages of sailing ships across Earth's oceans. Hot air balloons use the wind to take trips, and powered flight uses it to increase lift. Areas of wind shear caused by various weather phenomena can lead to dangerous situations for aircraft. When winds become strong, trees and man-made structures are damaged or destroyed. Winds can shape landforms via a variety of aeolian processes, such as the formation of fertile soils like loess, and by erosion. Wind also affects the spread of wildfires. Winds can disperse seeds from various plants, enabling the survival and dispersal of those plant species, as well as flying insect populations. When combined with cold temperatures, wind has a negative impact on livestock. Wind affects animals' food stores, as well as their hunting. Wind is caused by differences in atmospheric pressure.
When a difference in atmospheric pressure exists, air moves from the higher to the lower pressure area. On a rotating planet, air will also be deflected by the Coriolis effect. Globally, the two major driving factors of large-scale wind patterns are the differential heating between the equator and the poles and the rotation of the planet. Outside the tropics and aloft from the frictional effects of the surface, the large-scale winds tend to approach geostrophic balance. Near the Earth's surface, friction causes the wind to be slower than it would be otherwise.
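The geostrophic balance between the pressure gradient force and the Coriolis force can be illustrated with a rough calculation; the air density and pressure gradient below are illustrative assumptions, not figures from the text:

```python
import math

def geostrophic_speed(dp_dn, rho=1.2, lat_deg=45.0):
    """Geostrophic wind speed V = dp/dn / (rho * f), where
    f = 2 * Omega * sin(lat) is the Coriolis parameter, rho the air
    density (kg/m^3) and dp/dn the horizontal pressure gradient (Pa/m)."""
    omega_earth = 7.292e-5  # Earth's rotation rate, rad/s
    f = 2.0 * omega_earth * math.sin(math.radians(lat_deg))
    return dp_dn / (rho * f)

# An assumed 2 hPa per 100 km gradient at 45 degrees latitude:
print(round(geostrophic_speed(200.0 / 100_000.0), 1))  # roughly 16 m/s
```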
16.
Wind shear
–
Wind shear, sometimes referred to as windshear or wind gradient, is a difference in wind speed and/or direction over a relatively short distance in the atmosphere. Atmospheric wind shear is normally described as either vertical or horizontal wind shear. Vertical wind shear is a change in wind speed or direction with change in altitude. Horizontal wind shear is a change in wind speed with change in lateral position for a given altitude. Wind shear has a significant effect during take-off and landing of aircraft due to its effects on control of the aircraft, and the phenomenon is also a concern for architects. Sound movement through the atmosphere is affected by wind shear, which can bend the wave front, causing sounds to be heard where they normally would not. The thermal wind concept explains how differences in wind speed at different heights are dependent on horizontal temperature differences. Wind shear refers to the variation of wind over either horizontal or vertical distances. Airplane pilots generally regard significant wind shear to be a horizontal change in airspeed of 30 knots for light aircraft, and near 45 knots for airliners at flight altitude. Vertical speed changes greater than 4.9 knots also qualify as significant wind shear for aircraft. Low-level wind shear can affect aircraft airspeed during take-off and landing in disastrous ways, and airliner pilots are trained to avoid all microburst wind shear. Wind shear is also a key factor in the creation of severe thunderstorms, and the additional hazard of turbulence is often associated with wind shear. Weather situations where shear is observed include weather fronts: significant shear is observed when the temperature difference across the front is 5 °C or more, and the front moves at 30 knots or faster.
Because fronts are three-dimensional phenomena, frontal shear can be observed at any altitude between the surface and the tropopause, and can therefore be seen both horizontally and vertically. Vertical wind shear above warm fronts is more of an aviation concern than near and behind cold fronts due to its greater duration. Associated with upper-level jet streams is a phenomenon known as clear air turbulence (CAT). The CAT is strongest on the anticyclonic shear side of the jet. When a nocturnal low-level jet forms overnight above the Earth's surface ahead of a cold front, significant low-level vertical wind shear can develop near the lower portion of the low-level jet. This is also known as non-convective wind shear, since it is not due to nearby thunderstorms. When winds blow over a mountain, vertical shear is observed on the lee side. If the flow is strong enough, turbulent eddies known as rotors, associated with lee waves, may form, which are dangerous to ascending and descending aircraft. On a clear and calm night, when an inversion forms near the ground, the change in wind across the inversion can be 90 degrees in direction and 40 kt in speed; even a nocturnal low-level jet can sometimes be observed.
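The rule-of-thumb airspeed-change thresholds quoted above can be written as a trivial classifier; this is a sketch using only the figures in the text (the function name and categories are ours):

```python
def significant_shear(airspeed_change_kt, aircraft="light"):
    """Rough rule of thumb from the text: a horizontal airspeed change
    of ~30 kt is significant for light aircraft, and ~45 kt for
    airliners at flight altitude."""
    threshold = 30.0 if aircraft == "light" else 45.0
    return airspeed_change_kt >= threshold

print(significant_shear(35, "light"))     # True
print(significant_shear(35, "airliner"))  # False
```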
17.
Condensation
–
Condensation is the change of the physical state of matter from the gas phase into the liquid phase, and is the reverse of evaporation. The word most often refers to the water cycle; it can also be defined as the change in the state of water vapour to liquid water when in contact with a liquid or solid surface or with cloud condensation nuclei within the atmosphere. When the transition happens from the gas phase into the solid phase directly, the change is called deposition. A few distinct reversibility scenarios emerge here with respect to the nature of the surface: absorption into the surface of a liquid is reversible as evaporation; adsorption onto a solid surface at pressures and temperatures higher than the species' triple point is also reversible as evaporation; and adsorption onto a solid surface at pressures and temperatures lower than the species' triple point is reversible as sublimation. Condensation commonly occurs when a vapor is cooled and/or compressed to its saturation limit, when the molecular density in the gas phase reaches its maximal threshold. Vapor cooling and compressing equipment that collects condensed liquids is called a condenser. Psychrometry measures the rates of condensation through evaporation into the air moisture at various atmospheric pressures and temperatures. Water is the product of its vapor condensation; condensation is the process of such phase conversion. Condensation is a crucial component of distillation, an important laboratory and industrial chemistry application. Because condensation is a naturally occurring phenomenon, it can often be used to generate water in large quantities for human use. Many structures, such as air wells, are made solely for the purpose of collecting water from condensation. It is also a crucial process in forming particle tracks in a cloud chamber. In this case, ions produced by an incident particle act as centers for the condensation of the vapor, producing the visible cloud trails.
Furthermore, condensation is a step in many industrial processes, such as power generation, water desalination, thermal management and refrigeration. Numerous living beings use water made accessible by condensation; a few examples are the Australian thorny devil, the darkling beetles of the Namibian coast, and the coast redwoods of the West Coast of the United States. To alleviate condensation problems in buildings, the air humidity needs to be lowered. This can be done in a number of ways, for example opening windows, turning on extractor fans, using dehumidifiers, drying clothes outside and covering pots and pans whilst cooking. Air conditioning or ventilation systems can be installed that help remove moisture from the air. The amount of water vapour that can be stored in the air can be increased simply by increasing the temperature. However, this can be a double-edged sword, as most condensation in the home occurs when warm, moisture-laden air comes into contact with a cool surface. As the air is cooled, it can no longer hold as much water vapour. This leads to the deposition of water on the cool surface, which is very apparent when central heating is used in combination with single-glazed windows in winter.
18.
Cloud condensation nuclei
–
Cloud condensation nuclei or CCNs are small particles, typically 0.2 µm, or 1/100th the size of a cloud droplet, on which water vapour condenses. Water requires a surface to make the transition from a vapour to a liquid. In the atmosphere, this surface presents itself as tiny solid or liquid particles called CCNs. When no CCNs are present, water vapour can be supercooled at about −13 °C for 5-6 hours before droplets spontaneously form; at above-freezing temperatures the air would have to be supersaturated to around 400% before the droplets could form. The concept of cloud condensation nuclei is used in cloud seeding. It has further been suggested that creating such nuclei could be used for marine cloud brightening. The number of cloud condensation nuclei in the air can be measured, and the total mass of CCNs injected into the atmosphere has been estimated at 2×10^12 kg over a year's time. There are many different types of atmospheric particulates that can act as CCN: sulfate and sea salt, for instance, readily absorb water, whereas soot, organic carbon and mineral particles do not. This is made more complicated by the fact that many of the chemical species may be mixed within the particles. There is also speculation that solar variation may affect cloud properties via CCNs, and hence affect climate. Sulfate aerosols form partly from the dimethyl sulfide (DMS) produced by phytoplankton in the open ocean; large algal blooms in ocean surface waters occur in a wide range of latitudes and contribute considerable DMS into the atmosphere to act as nuclei. One idea is that an increase in temperature would also increase phytoplankton activity, and hence DMS production; an increase of phytoplankton has been observed by scientists in certain areas. A counter-hypothesis is advanced in The Revenge of Gaia, the book by James Lovelock.
Warming oceans are likely to become stratified, with most ocean nutrients trapped in the bottom layers while most of the light needed for photosynthesis remains in the warm top layer. Under this scenario, deprived of nutrients, marine phytoplankton would decline, as would sulfate cloud condensation nuclei. The proposed feedback between phytoplankton, DMS and climate is known as the CLAW hypothesis, but no conclusive evidence to support it has yet been reported.
19.
Fog
–
Fog consists of visible cloud water droplets or ice crystals suspended in the air at or near the Earth's surface. Fog can be considered a type of low-lying cloud and is influenced by nearby bodies of water and topography. In turn, fog has affected many human activities, such as shipping and travel. The term fog is typically distinguished from the more generic term cloud in that fog is low-lying, and the moisture in the fog is often generated locally. By definition, fog reduces visibility to less than 1 kilometre. For aviation purposes in the UK, a visibility of less than 5 kilometres but greater than 999 metres is considered to be mist if the relative humidity is 70% or greater; below 70%, haze is reported. Fog forms when the difference between air temperature and dew point is less than 2.5 °C or 4 °F. Fog begins to form when water vapor condenses into tiny liquid water droplets suspended in the air. Water vapor normally begins to condense on condensation nuclei such as dust, ice and salt. Fog, like its elevated cousin stratus, is a stable cloud deck which tends to form when a cool, stable air mass is trapped underneath a warm air mass. Fog normally occurs at a relative humidity near 100%, which results from either added moisture in the air or falling ambient air temperature. However, fog can form at lower humidities, and can fail to form with relative humidity at 100%. At 100% relative humidity, the air cannot hold additional moisture. Fog can form suddenly and can dissipate just as rapidly. The sudden formation of fog is known as flash fog. Fog commonly produces precipitation in the form of drizzle or very light snow. Drizzle occurs when the humidity of fog attains 100% and the cloud droplets begin to coalesce into larger droplets. This can occur when the fog layer is lifted and cooled sufficiently. Drizzle becomes freezing drizzle when the temperature at the surface drops below the freezing point.
The inversion boundary varies its altitude primarily in response to the weight of the air above it. The marine layer, and any fogbank it may contain, will be squashed when the pressure is high, and conversely may expand upwards when the pressure above it is lowering. Fog can form in a number of ways, depending on how the cooling that caused the condensation occurred. Radiation fog is formed by the cooling of land after sunset by thermal radiation in calm conditions with clear sky. The cooling ground then produces condensation in the nearby air by heat conduction. In perfect calm the fog layer can be less than a meter deep. Radiation fogs occur at night, and usually do not last long after sunrise, but they can persist all day in the winter months, especially in areas bounded by high ground. Radiation fog is most common in autumn and early winter; examples of this phenomenon include the Tule fog. Ground fog is fog that obscures less than 60% of the sky. Advection fog occurs when moist air passes over a cool surface by advection and is cooled.
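The 2.5 °C dew point spread criterion quoted above lends itself to a trivial check; a sketch (the function name is ours, and the spread test is a rule of thumb, not a guarantee of fog):

```python
def fog_possible(temp_c, dew_point_c, spread_c=2.5):
    """Fog typically forms when the spread between air temperature and
    dew point is below ~2.5 C (the figure quoted in the text)."""
    return (temp_c - dew_point_c) < spread_c

print(fog_possible(10.0, 9.0))   # True: 1.0 C spread
print(fog_possible(15.0, 8.0))   # False: 7.0 C spread
```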
20.
Lifted condensation level
–
The RH of air increases when it is cooled, since the amount of water vapor in the air remains constant while the saturation vapor pressure decreases almost exponentially with decreasing temperature. If the air parcel is lifted further beyond the LCL, water vapor in the air parcel will begin condensing, forming cloud droplets. The LCL is a good approximation of the height of the cloud base which will be observed on days when air is lifted mechanically from the surface to the cloud base. The LCL can be computed numerically or approximated by various formulas. The LCL and dew point are similar, with one key difference: to find the LCL, an air parcel's pressure is decreased while it is lifted, causing it to expand; to determine the dew point, in contrast, the pressure is held constant. Below the LCL, the dew point temperature is less than the actual temperature. As an air parcel is lifted, its pressure and temperature decrease, and eventually its temperature reaches the dew point; this point is the LCL, as depicted in the diagram. On a thermodynamic diagram, one follows the dry adiabat from the parcel's initial temperature and the line of constant mixing ratio from the initial dew point temperature of the parcel at its starting pressure; the intersection of these two lines is the LCL. Interestingly, there is no exact analytical formula for the LCL; normally an iterative procedure is used to determine a highly accurate solution. There are also many different ways to approximate the LCL, to various degrees of accuracy. The most well known and widely used among these is Espy's equation, which makes use of the relationship between the LCL and dew point temperature discussed above. In the Earth's atmosphere near the surface, the lapse rate for dry adiabatic lifting is about 9.8 K/km, while the dew point of the lifted parcel falls at about 1.8 K/km. These give the slopes of the curves shown in the diagram. The altitude where they intersect can be computed as the ratio of the difference between the initial temperature and initial dew point temperature to the difference in the slopes of the two curves. Since the slopes are the two lapse rates, their difference is about 8 K/km.
Inverting this gives 0.125 km/K, or 125 m/K. This formula is accurate to within about 1% for the LCL height under normal atmospheric conditions, but requires knowing the dew point temperature. Applying this directly in Espy's formula, however, results in an overestimate of the LCL at lower temperatures.
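The 125 m/K figure gives the familiar approximate form of Espy's equation, h ≈ 125 (T − T_d); a quick sketch with made-up surface values:

```python
def lcl_height_m(temp_c, dew_point_c):
    """Espy's approximation: LCL height above the surface is roughly
    125 m per kelvin of temperature/dew-point spread, from the
    ~9.8 K/km dry adiabatic and ~1.8 K/km dew point lapse rates
    (difference ~8 K/km, whose inverse is ~125 m/K)."""
    return 125.0 * (temp_c - dew_point_c)

# A 30 C surface temperature with a 20 C dew point -> cloud base near 1250 m.
print(lcl_height_m(30.0, 20.0))  # 1250.0
```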
21.
Precipitation
–
In meteorology, precipitation is any product of the condensation of atmospheric water vapor that falls under gravity. The main forms of precipitation include drizzle, rain, sleet, snow, graupel and hail. Precipitation occurs when a portion of the atmosphere becomes saturated with water vapor, so that the water condenses and precipitates. Thus, fog and mist are not precipitation but suspensions, because the vapor does not condense sufficiently to precipitate. Two processes, possibly acting together, can lead to air becoming saturated: cooling the air or adding water vapor to it. Precipitation forms as smaller droplets coalesce via collision with other rain drops or ice crystals within a cloud. Short, intense periods of rain in scattered locations are called showers. Moisture that is lifted or otherwise forced to rise over a layer of sub-freezing air at the surface may be condensed into clouds and rain. This process is typically active when freezing rain is occurring; a stationary front is often present near the area of freezing rain. Provided necessary and sufficient atmospheric moisture content, the moisture within the rising air will condense into clouds, namely stratus and cumulonimbus. Eventually, the cloud droplets will grow large enough to form raindrops. Lake-effect snowfall can be locally heavy; thundersnow is possible within a cyclone's comma head and within lake-effect precipitation bands. In mountainous areas, heavy precipitation is possible where upslope flow is maximized within windward sides of the terrain at elevation; on the leeward side of mountains, desert climates can exist due to the dry air caused by compressional heating. The movement of the monsoon trough, or intertropical convergence zone, brings rainy seasons to savannah climes. Precipitation is a component of the water cycle, and is responsible for depositing the fresh water on the planet. Approximately 505,000 cubic kilometres of water falls as precipitation each year: 398,000 cubic kilometres of it over the oceans and 107,000 cubic kilometres over land.
Given the Earth's surface area, that means the globally averaged annual precipitation is 990 millimetres. Climate classification systems such as the Köppen climate classification system use average annual rainfall to help differentiate between differing climate regimes. Precipitation may also occur on other celestial bodies; Mars, for example, has precipitation which most likely takes the form of frost. Mechanisms of producing precipitation include convective, stratiform, and orographic rainfall. Precipitation can be divided into three categories, based on whether it falls as liquid water, liquid water that freezes on contact with the surface, or ice.
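The 990 mm average can be cross-checked from the quoted volume; the Earth surface area used here (~510 million km²) is an assumption, not a figure from the text:

```python
# Cross-check: 505,000 km^3 of annual precipitation spread evenly over
# Earth's surface (~510 million km^2, an assumed value) gives the mean depth.
volume_km3 = 505_000
surface_km2 = 510e6
depth_km = volume_km3 / surface_km2
print(round(depth_km * 1e6))  # depth in millimetres, ~990
```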
22.
Water vapor
–
Water vapor, water vapour or aqueous vapor is the gaseous phase of water. It is one state of water within the hydrosphere. Water vapor can be produced from the evaporation or boiling of liquid water or from the sublimation of ice. Unlike other forms of water, water vapor is invisible. Under typical atmospheric conditions, water vapor is continuously generated by evaporation and removed by condensation. It is lighter than air and triggers convection currents that can lead to clouds. Use of water vapor, as steam, has been important to humans for cooking and as a major component in energy production and transport systems since the industrial revolution. Likewise, the detection of water vapor beyond the Solar System would indicate a similar distribution in other planetary systems. Water vapor is significant in that it can be evidence supporting the presence of extraterrestrial liquid water in the case of some planetary-mass objects. Whenever a water molecule leaves a surface and diffuses into a surrounding gas, it is said to have evaporated. Each individual water molecule which transitions between a more associated (liquid) and a less associated (vapor) state does so through the absorption or release of kinetic energy. The aggregate measurement of this kinetic energy transfer is defined as thermal energy. Liquid water that becomes water vapor takes a parcel of heat with it. The amount of water vapor in the air determines how fast each molecule will return to the surface. When a net evaporation occurs, the body of water will undergo a net cooling directly related to the loss of water. In the US, the National Weather Service measures the actual rate of evaporation from a standardized pan of open water outdoors, at various locations nationwide. Others do likewise around the world. The US data is collected and compiled into an annual evaporation map. The measurements range from under 30 to over 120 inches per year. Formulas can be used for calculating the rate of evaporation from a water surface such as a swimming pool.
In some countries, the evaporation rate far exceeds the precipitation rate. Evaporative cooling is restricted by atmospheric conditions. Humidity is the amount of water vapor in the air. The vapor content of air is measured with devices known as hygrometers. The measurements are usually expressed as specific humidity or percent relative humidity. When the air holds the maximum amount of vapor possible at a given temperature, the condition is referred to as complete saturation. Humidity ranges from 0 grams per cubic metre in dry air to 30 grams per cubic metre when the vapor is saturated at 30 °C. Another form of evaporation is sublimation, by which water molecules become gaseous directly, leaving the surface of ice without first becoming liquid water. Sublimation accounts for the slow disappearance of ice and snow at temperatures too low to cause melting.
23.
Atmospheric convection
–
Atmospheric convection is the result of a parcel-environment instability, or temperature difference, within a layer of the atmosphere. Different lapse rates within dry and moist air lead to instability. Mixing of air during the day, which expands the height of the planetary boundary layer, leads to increased winds, cumulus cloud development, and decreased surface dew points. Moist convection leads to thunderstorm development, which is often responsible for severe weather throughout the world. Special threats from thunderstorms include hail, downbursts, and tornadoes. There are a few general archetypes of atmospheric instability that correspond to convection, or the lack thereof. Steeper and/or more positive lapse rates suggest that atmospheric convection is more likely, because in steep lapse rate environments any displaced air parcels will become more buoyant, given the sign of their adiabatic temperature change. Convection begins at the level of free convection (LFC), where the parcel begins its ascent through the free convective layer. The rising parcel, if it has enough momentum, will continue to rise to the maximum parcel level until negative buoyancy decelerates the parcel to a stop. Buoyant acceleration is of central relevance to convection. Drag produced by the updraft creates a force to counter that from the buoyancy; this could be thought of as analogous to the terminal velocity of a falling object. The force from buoyancy can be measured by Convective Available Potential Energy (CAPE); see the CAPE, buoyancy, and parcel links for a more in-depth mathematical explanation of these processes. Deep convection extends through a large fraction of the fluid depth; within the atmosphere, this means from the surface to above the 500 hPa level. Oceanic deep convection only occurs at a few locations. While less dynamically important than in the atmosphere, it is responsible for the spreading of cold water through the lower layers of the ocean; as such, it is important for the large-scale temperature structure of the whole ocean.
A thermal column is a section of rising air in the lower altitudes of the Earth's atmosphere. Thermals are created by the heating of the Earth's surface from solar radiation. The Sun warms the ground, which in turn warms the air directly above it. The warmer air expands, becoming less dense than the surrounding air mass and creating a thermal low. The mass of lighter air rises, and as it does so it cools by expansion. It stops rising when it has cooled to the same temperature as the surrounding air. Associated with a thermal is a downward flow surrounding the thermal column. The downward-moving exterior is caused by colder air being displaced at the top of the thermal.
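The rise-until-equal-temperature behaviour follows from the parcel buoyancy relation a = g (T_parcel − T_env)/T_env; a minimal sketch (symbols and values are ours, for illustration):

```python
def buoyant_acceleration(t_parcel_k, t_env_k, g=9.81):
    """Upward acceleration of a parcel warmer than its surroundings:
    a = g * (T_parcel - T_env) / T_env, temperatures in kelvin.
    Positive when the parcel is warmer (rises), zero when temperatures
    match, negative when the parcel is cooler (sinks)."""
    return g * (t_parcel_k - t_env_k) / t_env_k

# A parcel 2 K warmer than a 300 K environment accelerates upward gently:
print(round(buoyant_acceleration(302.0, 300.0), 4))  # ~0.065 m/s^2
print(buoyant_acceleration(300.0, 300.0))            # 0.0: stops rising
```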
24.
Convective available potential energy
–
CAPE is effectively the positive buoyancy of an air parcel and is an indicator of atmospheric instability, which makes it very valuable in predicting severe weather. It is a form of fluid instability found in thermally stratified atmospheres in which a colder fluid overlies a warmer one. This usually creates vertically developed clouds from convection, due to the rising motion, which can eventually lead to thunderstorms. Instability could also be created by other phenomena, such as a cold front: even if the air is cooler at the surface, there is still warmer air in the mid-levels that can rise into the upper levels. However, if there is not enough water vapor present, there is no ability for condensation, and thus no storms or clouds will form. CAPE exists within the conditionally unstable layer of the troposphere, the free convective layer, where an ascending air parcel is warmer than the ambient air. CAPE is measured in joules per kilogram of air; any value greater than 0 J/kg indicates instability and an increasing possibility of thunderstorms and hail. CAPE for a region is most often calculated from a thermodynamic or sounding diagram using air temperature and dew point data. CAPE is effectively positive buoyancy, expressed B+ or simply B, the opposite of convective inhibition (CIN), which is expressed as B−. As with CIN, CAPE is usually expressed in J/kg but may also be expressed as m2/s2, as the values are equivalent. In fact, CAPE is sometimes referred to as positive buoyant energy. This type of CAPE is the maximum energy available to an ascending parcel and to moist convection. When a layer of CIN is present, the layer must be eroded by surface heating or mechanical lifting, so that boundary-layer parcels can reach their level of free convection (LFC). Neglecting the virtual temperature correction may result in substantial relative errors in the calculated value of CAPE for small CAPE values. CAPE may also exist below the LFC, but if a layer of CIN is present, it is unavailable to deep, moist convection until the CIN is overcome; when it is, the result is deep, moist convection, or simply, a thunderstorm.
When a parcel is unstable, it continues to move vertically, in either direction, depending on whether it receives upward or downward forcing. There are multiple types of CAPE: downdraft CAPE (DCAPE) estimates the potential strength of rain- and evaporatively cooled downdrafts, while other types of CAPE depend on the depth being considered. Other examples are surface based CAPE, mixed layer or mean layer CAPE, and most unstable or maximum usable CAPE. When a displaced parcel ends up denser than its surroundings, there will be a counteracting force to the initial displacement. Such a condition is referred to as convective stability. Conversely, when a displaced parcel keeps accelerating away from its initial position, small deviations from the initial state will become amplified; this condition is referred to as convective instability. Thunderstorms form when air parcels are lifted vertically. Deep, moist convection requires a parcel to be lifted to the LFC, where it then rises spontaneously until reaching a layer of non-positive buoyancy. The atmosphere is warm at the surface and lower levels of the troposphere where there is mixing, but becomes substantially cooler with height. The temperature profile of the atmosphere, the change in temperature with height, determines whether a parcel remains buoyant: when the rising air parcel cools more slowly than the surrounding atmosphere, it remains warmer and less dense. The parcel continues to rise freely through the atmosphere until it reaches an area of air less dense than itself. The amount of CAPE also modulates how low-level vorticity is entrained and then stretched in the updraft, with importance to tornadogenesis.
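CAPE can be estimated numerically from a sounding by integrating parcel buoyancy between the LFC and the EL. The sketch below uses a made-up virtual temperature profile for illustration (the function name and toy sounding are assumptions, not from the text):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def cape(z, tv_parcel, tv_env):
    """Trapezoidal integral of positive parcel buoyancy, in J/kg.

    z: heights in metres; tv_parcel, tv_env: virtual temperatures in kelvins.
    Only layers where the parcel is warmer than its environment contribute;
    negatively buoyant layers belong to CIN and are clipped out here.
    """
    buoyancy = G * (tv_parcel - tv_env) / tv_env      # acceleration, m/s^2
    positive = np.clip(buoyancy, 0.0, None)           # keep positive buoyancy only
    dz = np.diff(z)
    return float(np.sum(0.5 * (positive[1:] + positive[:-1]) * dz))

# Toy free convective layer: a parcel uniformly 2 K warmer than the
# environment between 2 km (the LFC) and 5 km (the EL).
z = np.linspace(2000.0, 5000.0, 31)
tv_env = 280.0 - 0.0065 * (z - 2000.0)    # environment cools at 6.5 K/km
tv_parcel = tv_env + 2.0
print(f"CAPE = {cape(z, tv_parcel, tv_env):.0f} J/kg")
```

This toy profile yields a couple of hundred J/kg, consistent with marginal instability; larger parcel-environment temperature differences over deeper layers give the multi-thousand J/kg values associated with severe weather.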
25.
Convective inhibition
–
Convective inhibition (CIN) is a numerical measure in meteorology that indicates the amount of energy that will prevent an air parcel from rising from the surface to the level of free convection. CIN is the amount of energy required to overcome the negatively buoyant energy the environment exerts on an air parcel. In most cases, when CIN exists, it covers a layer from the ground to the level of free convection. The negatively buoyant energy exerted on an air parcel is a result of the air parcel being cooler than the air which surrounds it; the layer of air dominated by CIN is warmer and more stable than the layers above or below it. Convective inhibition is present when layers of warmer air lie above a particular region of cooler air. The effect of having warm air above a cooler air parcel is to prevent the air parcel from rising into the atmosphere, which creates a region of stable air. Convective inhibition indicates the amount of energy that will be required to force the cooler parcel of air to rise; this energy comes from fronts, heating, moistening, or mesoscale convergence boundaries such as outflow and sea breeze boundaries, or orographic lift. Typically, an area with a high convective inhibition number is considered stable and has very little likelihood of developing a thunderstorm; conceptually, it is the opposite of CAPE. CIN hinders the updrafts necessary to produce convective weather, such as thunderstorms. However, when large amounts of CIN are reduced by heating and moistening during a convective storm, the storm will be more severe than in the case when no CIN was present. CIN is strengthened by low altitude dry air advection and surface air cooling; surface cooling causes a small capping inversion to form aloft, allowing the air to become stable.
Incoming weather fronts and short waves influence the strengthening or weakening of CIN. CIN is calculated from measurements recorded electronically by a rawinsonde, which carries devices that measure weather parameters such as air temperature and pressure. A single value for CIN is calculated from one balloon ascent by use of the equation below; in many cases, the z_bottom value is the ground and the z_top value is the LFC:

CIN = −∫ from z_bottom to z_top of g · (T_v,parcel − T_v,env) / T_v,env dz

where T_v,parcel is the virtual temperature of the rising parcel and T_v,env that of the surrounding environment. CIN is an energy per unit mass, and the units of measurement are joules per kilogram. CIN is expressed as an energy value; CIN values greater than 200 J/kg are sufficient to prevent convection in the atmosphere. The CIN energy value is an important figure on a skew-T log-P diagram and is a helpful value in evaluating the severity of a convective event. On a skew-T log-P diagram, CIN is any area between the warmer environment virtual temperature profile and the cooler parcel virtual temperature profile. CIN is effectively negative buoyancy, expressed B-; it is the opposite of convective available potential energy, which is expressed as B+ or simply B. As with CAPE, CIN is usually expressed in J/kg but may also be expressed as m2/s2; in fact, CIN is sometimes referred to as negative buoyant energy.
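The CIN integral can be evaluated numerically the same way CAPE is, keeping only the negatively buoyant layers. A minimal sketch, with a capping-layer profile invented for illustration:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def cin(z, tv_parcel, tv_env):
    """CIN magnitude in J/kg from the buoyancy integral.

    Implements CIN = -integral of g*(Tv_parcel - Tv_env)/Tv_env dz,
    keeping only layers where the parcel is cooler than its environment.
    """
    buoyancy = G * (tv_parcel - tv_env) / tv_env
    negative = np.clip(buoyancy, None, 0.0)           # inhibition layers only
    dz = np.diff(z)
    return float(-np.sum(0.5 * (negative[1:] + negative[:-1]) * dz))

# Toy inhibition layer: a parcel 1.5 K cooler than the environment from
# the ground (z_bottom) up to an LFC at 1500 m (z_top).
z = np.linspace(0.0, 1500.0, 16)
tv_env = np.full_like(z, 290.0)
tv_parcel = tv_env - 1.5
print(f"CIN = {cin(z, tv_parcel, tv_env):.0f} J/kg")
```

This toy cap comes out near 76 J/kg, well below the 200 J/kg the text cites as sufficient to prevent convection, so it could plausibly be eroded by daytime heating.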
26.
Convective instability
–
In meteorology, convective instability or stability of an air mass refers to its ability to resist vertical motion. A stable atmosphere makes vertical movement difficult, and small vertical disturbances dampen out; in an unstable atmosphere, vertical air movements tend to become larger, resulting in turbulent airflow and convective activity. Instability can lead to significant turbulence and extensive vertical clouds. Adiabatic cooling and heating are phenomena of rising or descending air: rising air expands and cools due to the decrease in air pressure as altitude increases. The opposite is true of descending air; as atmospheric pressure increases, the temperature of descending air increases as it is compressed. Adiabatic heating and adiabatic cooling are the terms used to describe this temperature change. The adiabatic lapse rate is the rate at which the temperature of a rising or falling air mass decreases or increases per distance of vertical displacement; the ambient lapse rate is the change in temperature of the surrounding air per unit of vertical distance. Instability results from the difference between the adiabatic lapse rate of an air mass and the ambient lapse rate in the atmosphere. If the adiabatic lapse rate is lower than the ambient lapse rate, an air mass displaced upward cools less rapidly than the air in which it is moving; hence, such an air mass becomes warmer relative to the atmosphere, and as warmer air is less dense, it would tend to continue to rise. Conversely, if the adiabatic lapse rate is higher than the ambient lapse rate, an air mass displaced upward cools more rapidly than the surrounding air; hence, such an air mass becomes cooler relative to the atmosphere, and as cooler air is more dense, its rise would tend to be resisted. When air rises, moist air cools at a lower rate than dry air; that is, for the same vertical movement, a parcel of moist air will be warmer than a parcel of dry air. This is because of the condensation of water vapor in the air parcel due to expansion cooling.
As water vapor condenses, latent heat is released into the air parcel. Moist air has more water vapor than dry air, so more latent heat is released into the parcel of moist air as it rises; dry air, having less water vapor, cools at a faster rate with vertical movement than moist air. As a result of the latent heat released during water vapor condensation, moist air is generally less stable than dry air. The dry adiabatic lapse rate is about 3 °C per 1,000 vertical feet; the moist adiabatic lapse rate varies from 1.1 °C to 2.8 °C per 1,000 vertical feet. The combination of moisture and temperature determines the stability of the air: cool, dry air is very stable and resists vertical movement, which leads to good and generally clear weather. The greatest instability occurs when the air is moist and warm, as in tropical regions in summer; typically, thunderstorms appear on a daily basis in these regions due to the instability of the surrounding air.
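The lapse-rate comparison above can be sketched as a simple check. The rates are the approximate figures from the text; the function name is illustrative:

```python
DRY_RATE = 3.0                  # dry adiabatic lapse rate, ~degC per 1,000 ft
MOIST_RATE_RANGE = (1.1, 2.8)   # moist adiabatic lapse rate range, degC per 1,000 ft

def stability(ambient_rate, adiabatic_rate):
    """Compare how fast a lifted parcel cools against the ambient lapse rate.

    A parcel cooling more slowly than the ambient air stays warmer and less
    dense, so it keeps rising (unstable); cooling faster, its rise is
    resisted (stable).
    """
    if adiabatic_rate < ambient_rate:
        return "unstable"
    if adiabatic_rate > ambient_rate:
        return "stable"
    return "neutral"

# An environment cooling at 2.0 degC per 1,000 ft sits between the moist and
# dry rates, so it is stable for a dry parcel but unstable for a moist one:
# conditional instability.
print(stability(2.0, DRY_RATE))   # dry parcel -> "stable"
print(stability(2.0, 1.5))        # moist parcel -> "unstable"
```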
27.
Free convective layer
–
In the atmospheric sciences, the free convective layer (FCL) is the layer of conditional or potential instability in the troposphere. It is a layer of positive buoyancy and is the layer where deep, moist convection can occur. On an atmospheric sounding, it is the layer between the level of free convection (LFC) and the equilibrium level (EL). The FCL is important to a variety of processes and to severe thunderstorm forecasting. It is the layer of instability, the area on thermodynamic diagrams where an ascending air parcel is warmer than its environment. Integrating buoyant energy from the LFC to the EL gives the convective available potential energy, an estimate of the maximum energy available to convection. The depth of the FCL is expressed by the formula

FCL = Z_EL − Z_LFC (in height coordinates) or FCL = P_LFC − P_EL (in pressure coordinates).

Deep, moist convection is essentially a thunderstorm. An air parcel ascending from the near-surface layer must work through the stable layer of convective inhibition when present; this work comes from increasing instability in the low levels by raising the temperature or dew point. At the level of neutral buoyancy, a parcel is cooler than the environment and is stable, so it slows down, eventually ceasing its ascent at the maximum parcel level.
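The depth formula is a straightforward subtraction in either coordinate. A small sketch, with LFC and EL values invented for illustration:

```python
def fcl_depth_height(z_el, z_lfc):
    """FCL depth in metres: equilibrium level minus level of free convection."""
    return z_el - z_lfc

def fcl_depth_pressure(p_lfc, p_el):
    """FCL depth in hPa; pressure falls with height, so the LFC pressure
    is the larger of the two values."""
    return p_lfc - p_el

print(fcl_depth_height(11000.0, 1500.0))  # 9500.0 m deep free convective layer
print(fcl_depth_pressure(850.0, 220.0))   # 630.0 hPa in pressure coordinates
```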
28.
Temperature
–
Temperature is an objective comparative measurement of hot or cold, measured by a thermometer. Several scales and units exist for measuring temperature, the most common being Celsius, Fahrenheit, and, especially in science, Kelvin. Absolute zero is denoted as 0 K on the Kelvin scale, −273.15 °C on the Celsius scale, and −459.67 °F on the Fahrenheit scale. The kinetic theory offers a valuable but limited account of the behavior of the materials of macroscopic bodies, especially of fluids. Temperature is important in all fields of science, including physics, geology, chemistry, the atmospheric sciences, and medicine. The Celsius scale is used for temperature measurements in most of the world; because of its 100 degree interval between the freezing and boiling points of water, it is called a centigrade scale. The United States commonly uses the Fahrenheit scale, on which water freezes at 32 °F and boils at 212 °F at sea-level atmospheric pressure. Many scientific measurements use the Kelvin temperature scale, named in honor of the physicist William Thomson, Lord Kelvin, who first defined it; it is a thermodynamic or absolute temperature scale. Its zero point, 0 K, is defined to coincide with the coldest physically possible temperature, and its degrees are defined through thermodynamics. Absolute zero occurs at 0 K = −273.15 °C. For historical reasons, the triple point temperature of water is fixed at 273.16 units of the measurement increment. Temperature is one of the principal quantities in the study of thermodynamics. There is a variety of kinds of temperature scale, and it may be convenient to classify them as empirically and theoretically based. Empirical temperature scales are historically older, while theoretically based scales arose in the middle of the nineteenth century. Empirically based temperature scales rely directly on measurements of simple physical properties of materials; for example, the length of a column of mercury, confined in a capillary tube, depends largely on temperature. Such scales are valid only within convenient ranges of temperature.
For example, above the boiling point of mercury, a mercury-in-glass thermometer is impracticable, and a material is of no use as a thermometer near one of its phase-change temperatures. In spite of these restrictions, most generally used practical thermometers are of the empirically based kind. Empirical thermometry was especially important for calorimetry, which contributed greatly to the discovery of thermodynamics. Nevertheless, empirical thermometry has serious drawbacks when judged as a basis for theoretical physics. Theoretically based temperature scales are based directly on theoretical arguments, especially those of thermodynamics and kinetic theory, and they rely on theoretical properties of idealized devices and materials.
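The relationships between the three scales mentioned above reduce to simple affine conversions:

```python
def celsius_to_fahrenheit(c):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9.0 / 5.0 + 32.0

def celsius_to_kelvin(c):
    """Convert degrees Celsius to kelvins (offset by 273.15)."""
    return c + 273.15

# The fixed points quoted in the text:
print(celsius_to_fahrenheit(0.0))     # 32.0  (water freezes)
print(celsius_to_fahrenheit(100.0))   # 212.0 (water boils at sea level)
print(celsius_to_kelvin(-273.15))     # 0.0   (absolute zero)
```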
29.
Dew point
–
The dew point is the temperature to which air must be cooled to become saturated with water vapor. When cooled further, the water vapor will condense to form liquid dew. The dew point is called the frost point when the temperature is below freezing. The measurement of the dew point is related to humidity: a higher dew point means there is more moisture in the air. Given that all the other factors influencing humidity remain constant, at ground level the relative humidity rises as the temperature falls; this is because water vapor condenses as the temperature falls. The dew point temperature is never greater than the air temperature, because relative humidity cannot exceed 100%. In technical terms, the dew point is the temperature at which the water vapor in a sample of air at constant barometric pressure condenses into liquid water at the same rate at which it evaporates. At temperatures below the dew point, the rate of condensation will be greater than that of evaporation. The condensed water is called dew when it forms on a solid surface, and either fog or a cloud, depending on its altitude, when it forms in the air. A high relative humidity implies that the dew point is close to the current air temperature; a relative humidity of 100% indicates the dew point is equal to the current temperature. When the moisture content remains constant and the temperature increases, relative humidity decreases. General aviation pilots use dew point data to calculate the likelihood of carburetor icing and fog. At a given temperature but independent of barometric pressure, the dew point is a consequence of the absolute humidity, the mass of water per unit volume of air. If both the temperature and pressure rise, however, the dew point will increase and the relative humidity will decrease accordingly.
Reducing the absolute humidity without changing other variables will bring the dew point back down to its initial value; in the same way, increasing the absolute humidity after a temperature drop brings the dew point back up to its initial level. If the temperature rises in conditions of constant pressure, the dew point will remain constant but the relative humidity will drop. For this reason, a constant relative humidity at different temperatures implies that when it is hotter, a higher fraction of the air is present as water vapor compared to when it is cooler. At a given barometric pressure but independent of temperature, the dew point indicates the mole fraction of water vapor in the air, or, put differently, determines the specific humidity of the air. If the pressure rises without changing this mole fraction, the dew point will rise accordingly; reducing the mole fraction, i.e. making the air less humid, would bring the dew point back down to its initial value.
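The qualitative relationships above (the dew point equals the temperature at 100% relative humidity and falls as the air dries) can be made concrete with the Magnus approximation; the coefficients below are one common parameter choice, an assumption rather than values given in the text:

```python
import math

# Magnus-formula coefficients (an assumed common parameter set)
A = 17.625
B = 243.04  # degC

def dew_point(temp_c, rh_percent):
    """Approximate dew point in degC from air temperature and relative humidity."""
    gamma = math.log(rh_percent / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

print(round(dew_point(20.0, 100.0), 1))  # saturated air: dew point equals temperature
print(round(dew_point(20.0, 50.0), 1))   # drier air: dew point drops well below 20 degC
```

At 50% relative humidity and 20 °C the approximation gives a dew point near 9 °C, matching the rule that lower humidity means a dew point further below the air temperature.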
30.
Dew point depression
–
The dew point depression is the difference between the temperature and the dew point temperature at a certain height in the atmosphere. For a constant temperature, the smaller the difference, the more moisture there is. In the lower troposphere, more moisture results in lower cloud bases, and the height of the lifted condensation level (LCL) is an important factor modulating severe thunderstorms. LCL height also factors into downburst and microburst activity. Conversely, instability is increased when there is a mid-level dry layer known as a dry punch, which is favorable for convection if the lower layer is buoyant. As a measure of the moisture content of the atmosphere, the dew point depression is also an important indicator in agricultural and forest meteorology, particularly in predicting wildfires.
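A common rule of thumb (an approximation, not stated in the text) links the dew point depression to cloud-base height: the lifted condensation level rises roughly 125 m for every degree Celsius of depression, which is why moister low-level air means lower cloud bases:

```python
def dew_point_depression(temp_c, dew_point_c):
    """Difference between air temperature and dew point, in degC."""
    return temp_c - dew_point_c

def lcl_height_m(temp_c, dew_point_c):
    """Rough LCL height: about 125 m per degC of dew point depression
    (a rule-of-thumb approximation, not an exact thermodynamic result)."""
    return 125.0 * dew_point_depression(temp_c, dew_point_c)

print(lcl_height_m(30.0, 22.0))  # 1000.0 m: moist air, low cloud base
print(lcl_height_m(30.0, 10.0))  # 2500.0 m: dry air, high cloud base
```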
31.
Dry-bulb temperature
–
The dry-bulb temperature (DBT) is the temperature of air measured by a thermometer freely exposed to the air but shielded from radiation and moisture. DBT is the temperature that is usually thought of as air temperature; it indicates the amount of heat in the air and is proportional to the mean kinetic energy of the air molecules. Temperature is usually measured in degrees Celsius, kelvins, or degrees Fahrenheit. Unlike the wet-bulb temperature, the dry-bulb temperature does not indicate the amount of moisture in the air. In construction, it is an important consideration when designing a building for a certain climate; Nall called it one of the most important climate variables for human comfort and building energy efficiency. DBT is an important variable in psychrometrics, being the horizontal axis of a psychrometric chart.