Cosmic voids are vast spaces between filaments that contain few or no galaxies. Voids typically have diameters of 10 to 100 megaparsecs and less than one tenth of the average matter density considered typical for the observable universe. They were first discovered in 1978 in a pioneering study by Stephen Gregory and Laird A. Thompson at the Kitt Peak National Observatory. Voids are believed to have been formed by baryon acoustic oscillations in the Big Bang: collapses of mass followed by implosions of the compressed baryonic matter. Starting from small anisotropies caused by quantum fluctuations in the early universe, these anisotropies grew larger in scale over time. Regions of higher density collapsed more under gravity, resulting in the large-scale, foam-like structure or "cosmic web" of voids and galaxy filaments seen today. Voids located in high-density environments are smaller than voids situated in low-density regions of the universe. Voids appear to correlate with the observed temperature of the cosmic microwave background because of the Sachs–Wolfe effect.
Colder regions correlate with voids and hotter regions correlate with filaments because of gravitational redshifting. As the Sachs–Wolfe effect is only significant if the universe is dominated by radiation or dark energy, the existence of voids provides physical evidence for dark energy. The structure of our Universe can be broken down into components that help describe the characteristics of individual regions of the cosmos. The main structural components of the cosmic web are:
Voids – vast, roughly spherical regions with very low cosmic mean density, up to 100 megaparsecs in diameter.
Walls – the regions that contain the typical cosmic mean density of matter abundance. Walls can be further broken down into two smaller structural features:
Clusters – concentrated zones where walls meet and intersect, adding to the effective size of the local wall.
Filaments – the branching arms of walls that can stretch for tens of megaparsecs.
Voids have a mean density less than a tenth of the average density of the universe.
This serves as a working definition, though there is no single agreed-upon definition of what constitutes a void. The matter density value used for describing the cosmic mean density is based on a ratio of the number of galaxies per unit volume rather than the total mass of the matter contained in a unit volume. Cosmic voids as a topic of study in astrophysics began in the mid-1970s, when redshift surveys became more popular and led two separate teams of astrophysicists in 1978 to identify superclusters and voids in the distribution of galaxies and Abell clusters in a large region of space. The new redshift surveys revolutionized the field of astronomy by adding depth to the two-dimensional maps of cosmological structure, which were densely packed and overlapping, allowing for the first three-dimensional mapping of the universe. In the redshift surveys, the depth was calculated from the individual redshifts of the galaxies due to the expansion of the universe according to Hubble's law. A summarized timeline of important events in the field of cosmic voids from its beginning to recent times is listed below:
1961 – Large-scale structural features such as "second-order clusters", a specific type of supercluster, were brought to the astronomical community's attention.
1978 – The first two papers on the topic of voids in the large-scale structure were published, referencing voids found in the foreground of the Coma/A1367 clusters.
1981 – Discovery of a large void in the Boötes region of the sky, nearly 50 h−1 Mpc in diameter.
1983 – Computer simulations sophisticated enough to provide reliable results on the growth and evolution of the large-scale structure emerged and yielded insight on key features of the large-scale galaxy distribution.
1985 – Details of the supercluster and void structure of the Perseus-Pisces region were surveyed.
1989 – The Center for Astrophysics Redshift Survey revealed that large voids, sharp filaments, and the walls that surround them dominate the large-scale structure of the universe.
1991 – The Las Campanas Redshift Survey confirmed the abundance of voids in the large-scale structure of the universe.
1995 – Comparisons of optically selected galaxy surveys indicated that the same voids are found regardless of the sample selection.
2001 – The completed two-degree-Field Galaxy Redshift Survey added a large number of voids to the database of all known cosmic voids.
2009 – The Sloan Digital Sky Survey data, combined with previous large-scale surveys, provided the most complete view of the detailed structure of cosmic voids.
There exist a number of ways of finding voids in the results of large-scale surveys of the universe. Of the many different algorithms, all fall into one of three general categories. The first class consists of void finders that try to find empty regions of space based on local galaxy density. The second class consists of those that try to find voids via the geometrical structures in the dark matter distribution as suggested by the galaxies. The third class is made up of finders that identify structures dynamically by using gravitationally unstable points in the distribution of dark matter. The three most popular methods in the study of cosmic voids are listed below. The first-class method uses each galaxy in a catalog as its target and then uses the Nearest Neighbor Approximation to calculate the cosmic density in the region contained in a spherical radius determined by the distance
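However the radius is fixed, the core idea of this first-class approach can be sketched in a few lines: estimate a local density around each galaxy from the distance to its k-th nearest neighbour and flag galaxies whose neighbourhood falls below a tenth of the mean density. The catalogue, the choice of k, the box size, and the 0.1 threshold below are assumptions for illustration only, not a specific published void finder.

```python
import numpy as np

def knn_local_density(positions, k=3):
    """Estimate a local number density for each point from the distance
    to its k-th nearest neighbour: density ~ k / volume of that sphere."""
    n = len(positions)
    densities = np.empty(n)
    for i in range(n):
        # Distances from galaxy i to every other galaxy (brute force for clarity).
        d = np.linalg.norm(positions - positions[i], axis=1)
        d[i] = np.inf                        # ignore the self-distance
        r_k = np.partition(d, k - 1)[k - 1]  # distance to the k-th nearest neighbour
        densities[i] = k / (4.0 / 3.0 * np.pi * r_k**3)
    return densities

# Toy catalogue: 500 galaxies scattered in a 100 Mpc box (positions in Mpc).
rng = np.random.default_rng(0)
galaxies = rng.uniform(0.0, 100.0, size=(500, 3))

rho = knn_local_density(galaxies, k=3)
mean_rho = len(galaxies) / 100.0**3          # mean number density of the box

# Galaxies sitting in regions below one tenth of the mean density are
# treated as tracers of void interiors in this toy example.
void_tracers = galaxies[rho < 0.1 * mean_rho]
print(f"{len(void_tracers)} of {len(galaxies)} galaxies lie in underdense regions")
```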
Absolute threshold of hearing
The absolute threshold of hearing is the minimum sound level of a pure tone that an average human ear with normal hearing can hear with no other sound present. The absolute threshold relates to the sound that can just be heard; it is not a discrete point and is therefore classed as the point at which a sound elicits a response a specified percentage of the time. This is also known as the auditory threshold. The threshold of hearing is reported as the RMS sound pressure of 20 micropascals, i.e. 0 dB SPL, corresponding to a sound intensity of 0.98 pW/m² at 1 atmosphere and 25 °C. It is the quietest sound a young human with undamaged hearing can detect at 1,000 Hz. The threshold of hearing is frequency-dependent, and it has been shown that the ear's sensitivity is best at frequencies between 2 kHz and 5 kHz, where the threshold reaches as low as −9 dB SPL. Measurement of the absolute hearing threshold provides some basic information about our auditory system; the tools used to collect such information are called psychophysical methods.
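The figures quoted above follow from the standard definitions of sound pressure level and plane-wave intensity. The sketch below converts an RMS pressure to dB SPL, L = 20·log10(p/p0) with p0 = 20 µPa, and computes the corresponding intensity I = p²/(ρc); the air density and speed of sound used for roughly 25 °C at 1 atm are assumed round values.

```python
import math

P_REF = 20e-6          # reference RMS pressure, 20 micropascals (0 dB SPL)

def spl_db(p_rms):
    """Sound pressure level in dB SPL for an RMS pressure given in pascals."""
    return 20.0 * math.log10(p_rms / P_REF)

def plane_wave_intensity(p_rms, rho=1.184, c=346.0):
    """Plane-wave intensity in W/m^2: I = p^2 / (rho * c).
    rho and c are assumed values for air near 25 degC and 1 atm."""
    return p_rms**2 / (rho * c)

print(spl_db(20e-6))                # 0.0 dB SPL at the threshold pressure
print(plane_wave_intensity(20e-6))  # ~0.98e-12 W/m^2, i.e. about 0.98 pW/m^2
```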
Through these psychophysical methods, the perception of a physical stimulus and our psychological response to it are measured. Several psychophysical methods can measure the absolute threshold, and these vary. Each test specifies the manner in which the subject should respond, presents the sound to the listener, and manipulates the stimulus level in a predetermined pattern. The absolute threshold is defined statistically as the average of all obtained hearing thresholds. Some procedures use a series of trials, with each trial using the ‘single-interval “yes”/”no” paradigm’. This means that a sound may be present or absent in the single interval, and the listener has to say whether they thought the stimulus was there; when the interval does not contain a stimulus, it is called a "catch trial". Classical methods date back to the 19th century and were first described by Gustav Theodor Fechner in his work Elements of Psychophysics. Three methods are traditionally used for testing a subject's perception of a stimulus: the method of limits, the method of constant stimuli, and the method of adjustment.
Method of limits
In the method of limits, the tester controls the level of the stimuli. The ‘single-interval yes/no paradigm’ is used, and the trial uses several series of descending and ascending runs. A trial starts with a descending run, in which a stimulus is presented at a level well above the expected threshold. When the subject responds to the stimulus, the level of intensity of the sound is decreased by a specific amount and presented again. The same pattern is repeated until the subject stops responding to the stimuli, at which point the descending run is finished. In the ascending run, which comes after, the stimulus is first presented well below the threshold and gradually increased in two-decibel steps until the subject responds. As there is no clear margin between ‘hearing’ and ‘not hearing’, the threshold for each run is determined as the midpoint between the last audible and the first inaudible level. The subject's absolute hearing threshold is calculated as the mean of all thresholds obtained in both ascending and descending runs.
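A minimal simulation of the procedure just described may make the bookkeeping clearer. The toy listener model, the 2 dB step, the starting levels, and the number of runs are all assumptions for illustration; each run's threshold is taken as the midpoint between the last level with one response and the first level with the other, and the final estimate is the mean over runs.

```python
import random

def listener_hears(level_db, true_threshold=25.0, noise_db=1.5):
    """Toy listener: responds 'yes' if the tone level exceeds a noisy threshold."""
    return level_db > true_threshold + random.gauss(0.0, noise_db)

def descending_run(start=45.0, step=2.0):
    level = start
    while listener_hears(level):          # keep lowering while still audible
        level -= step
    return level + step / 2.0             # midpoint of last "yes" and first "no"

def ascending_run(start=5.0, step=2.0):
    level = start
    while not listener_hears(level):      # keep raising while still inaudible
        level += step
    return level - step / 2.0             # midpoint of last "no" and first "yes"

runs = [descending_run() for _ in range(3)] + [ascending_run() for _ in range(3)]
threshold = sum(runs) / len(runs)
print(f"Estimated absolute threshold: {threshold:.1f} dB")
```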
There are several issues with the method of limits. The first is anticipation, caused by the subject's awareness that the turn-points determine a change in response. Anticipation produces better ascending thresholds and worse descending thresholds. Habituation creates the opposite effect and occurs when the subject becomes accustomed to responding “yes” in the descending runs and/or “no” in the ascending runs; for this reason, thresholds are improved in descending runs. Another problem may be related to step size: too large a step compromises the accuracy of the measurement, as the actual threshold may lie between two stimulus levels. Finally, since the tone is always present, “yes” is always the correct answer.
Method of constant stimuli
In the method of constant stimuli, the tester sets the levels of the stimuli and presents them in random order. Thus, there are no purely ascending or descending runs; the subject responds “yes”/”no” after each presentation. The stimuli are presented many times at each level, and the threshold is defined as the stimulus level at which the subject scored 50% correct.
“Catch” trials may be included in this method. The method of constant stimuli has several advantages over the method of limits. Firstly, the random order of stimuli means that the correct answer cannot be predicted by the listener. Secondly, as the tone may be absent, “yes” is not always the correct answer, and catch trials help to detect the amount of a listener's guessing. The main disadvantage lies in the large number of trials needed to obtain the data, and therefore the time required to complete the test.
Method of adjustment
The method of adjustment shares some features with the method of limits, but differs in others. There are descending and ascending runs, and the listener knows that the stimulus is always present. However, unlike in the method of limits, here the stimulus level is controlled by the listener: the subject reduces the level of the tone until it cannot be detected any more, or increases it until it can be heard again. The stimulus level is varied continuously via a dial, and the stimulus level is measured by the tester at the end.
The threshold is the mean of the just inaudible levels. This method can produce several biases. To avoid giving cues about the actual stimulus level, the dial must be unlabeled. Apart from the previously mentioned anticipation and habituation, stimulus persistence could influence the result of the method of adjustment. In the descending runs, the subject m
The cathode-ray tube is a vacuum tube that contains one or more electron guns and a phosphorescent screen and is used to display images. It modulates and deflects electron beams onto the screen to create the images; the images may represent electrical waveforms, radar targets, or other phenomena. CRTs have also been used as memory devices, in which case the visible light emitted from the fluorescent material is not intended to have significant meaning to a visual observer. In television sets and computer monitors, the entire front area of the tube is scanned repetitively and systematically in a fixed pattern called a raster. An image is produced by controlling the intensity of each of the three electron beams, one for each additive primary color, with a video signal as a reference. In all modern CRT monitors and televisions, the beams are bent by magnetic deflection, a varying magnetic field generated by coils around the neck of the tube and driven by electronic circuits, although electrostatic deflection is used in oscilloscopes, a type of electronic test instrument.
A CRT is constructed from a glass envelope that is large, deep, heavy, and fragile. The interior of a CRT is evacuated to between 0.01 pascals and 133 nanopascals; evacuation is necessary to facilitate the free flight of electrons from the gun to the tube's face. The fact that it is evacuated makes handling an intact CRT dangerous due to the risk of breaking the tube and causing a violent implosion that can hurl shards of glass at great velocity. As a matter of safety, the face is made of thick lead glass so as to be shatter-resistant and to block most X-ray emissions if the CRT is used in a consumer product. Since the late 2000s, CRTs have been superseded by newer "flat panel" display technologies such as LCD, plasma, and OLED displays, which in the case of LCD and OLED displays have lower manufacturing costs and power consumption, as well as less weight and bulk. Flat panel displays can also be made in large sizes. Cathode rays were discovered by Johann Wilhelm Hittorf in 1869 in primitive Crookes tubes; he observed that some unknown rays were emitted from the cathode which could cast shadows on the glowing wall of the tube, indicating the rays were traveling in straight lines.
In 1890, Arthur Schuster demonstrated that cathode rays could be deflected by electric fields, and William Crookes showed they could be deflected by magnetic fields. In 1897, J. J. Thomson succeeded in measuring the mass of cathode rays, showing that they consisted of negatively charged particles smaller than atoms, the first "subatomic particles", which were named electrons. The earliest version of the CRT was known as the "Braun tube", invented by the German physicist Ferdinand Braun in 1897. It was a modification of the Crookes tube with a phosphor-coated screen. The first cathode-ray tube to use a hot cathode was developed by John B. Johnson and Harry Weiner Weinhart of Western Electric and became a commercial product in 1922. In 1925, Kenjiro Takayanagi demonstrated a CRT television that received images with a 40-line resolution. By 1927, he had improved the resolution to 100 lines, unrivaled until 1931. By 1928, he was the first to transmit human faces in half-tones on a CRT display. By 1935, he had invented an early all-electronic CRT television.
It was named in 1929 by inventor Vladimir K. Zworykin, who was influenced by Takayanagi's earlier work. RCA was granted a trademark for the term in 1932. The first commercially made electronic television sets with cathode-ray tubes were manufactured by Telefunken in Germany in 1934. Flat panel displays dropped in price and started displacing cathode-ray tubes in the 2000s, with LCD screens exceeding CRT sales in 2008; the last known manufacturer of CRTs ceased production in 2015. In oscilloscope CRTs, electrostatic deflection is used rather than the magnetic deflection used with television and other large CRTs. The beam is deflected horizontally by applying an electric field between a pair of plates to its left and right, and vertically by applying an electric field to plates above and below. Televisions use magnetic rather than electrostatic deflection because the deflection plates would obstruct the beam when the deflection angle is as large as is required for tubes that are relatively short for their size. Various phosphors are available depending upon the needs of the display application.
The brightness and persistence of the illumination depend upon the type of phosphor used on the CRT screen. Phosphors are available with persistences ranging from less than one microsecond to several seconds. For visual observation of brief transient events, a long-persistence phosphor may be desirable. For events which are fast and repetitive, or high frequency, a short-persistence phosphor is preferable. When displaying fast one-shot events, the electron beam must deflect quickly, with few electrons impinging on the screen, leading to a faint or invisible image on the display. Oscilloscope CRTs designed for fast signals can give a brighter display by passing the electron beam through a micro-channel plate just before it reaches
Breathing is the process of moving air into and out of the lungs to facilitate gas exchange with the internal environment by bringing in oxygen and flushing out carbon dioxide. All aerobic creatures need oxygen for cellular respiration, which uses the oxygen to break down foods for energy and produces carbon dioxide as a waste product. Breathing, or "external respiration", brings air into the lungs where gas exchange takes place in the alveoli through diffusion. The body's circulatory system transports these gases to and from the cells, where "cellular respiration" takes place. The breathing of all vertebrates with lungs consists of repetitive cycles of inhalation and exhalation through a branched system of tubes or airways which lead from the nose to the alveoli. The number of respiratory cycles per minute is the breathing or respiratory rate, and is one of the four primary vital signs of life. Under normal conditions the breathing depth and rate are automatically and unconsciously controlled by several homeostatic mechanisms which keep the partial pressures of carbon dioxide and oxygen in the arterial blood constant.
Keeping the partial pressure of carbon dioxide in the arterial blood unchanged under a wide variety of physiological circumstances contributes to tight control of the pH of the extracellular fluids (ECF). Over-breathing and under-breathing, which decrease and increase the arterial partial pressure of carbon dioxide respectively, cause a rise in the pH of the ECF in the first case and a lowering of the pH in the second. Both cause distressing symptoms. Breathing has other important functions: it provides a mechanism for speech and similar expressions of the emotions. It is also used for reflexes such as yawning and sneezing. Animals that cannot thermoregulate by perspiration, because they lack sufficient sweat glands, may lose heat by evaporation through panting. The lungs are not capable of inflating themselves, and will expand only when there is an increase in the volume of the thoracic cavity. In humans, as in the other mammals, this is achieved primarily through the contraction of the diaphragm, but also by the contraction of the intercostal muscles, which pull the rib cage upwards and outwards.
During forceful inhalation, the accessory muscles of inhalation, which connect the ribs and sternum to the cervical vertebrae and base of the skull, in many cases through an intermediary attachment to the clavicles, exaggerate the pump handle and bucket handle movements, bringing about a greater change in the volume of the chest cavity. During exhalation at rest, all the muscles of inhalation relax, returning the chest and abdomen to a position called the “resting position”, which is determined by their anatomical elasticity. At this point the lungs contain the functional residual capacity of air, which, in the adult human, has a volume of about 2.5–3.0 liters. During heavy breathing, as for instance during exercise, exhalation is brought about by relaxation of all the muscles of inhalation, but in addition the abdominal muscles, instead of being passive, now contract, causing the rib cage to be pulled downwards. This not only decreases the size of the rib cage but also pushes the abdominal organs upwards against the diaphragm, which bulges into the thorax.
The end-exhalatory lung volume now contains less air than the resting "functional residual capacity". However, in a normal mammal, the lungs cannot be emptied completely: in an adult human, there is always still at least one liter of residual air left in the lungs after maximum exhalation. Diaphragmatic breathing causes the abdomen to rhythmically bulge out and fall back; it is therefore referred to as "abdominal breathing". These terms are used interchangeably because they describe the same action. When the accessory muscles of inhalation are activated during labored breathing, the clavicles are pulled upwards, as explained above. This external manifestation of the use of the accessory muscles of inhalation is sometimes referred to as clavicular breathing, seen during asthma attacks and in people with chronic obstructive pulmonary disease. Air is breathed in and out through the nose. The nasal cavities are quite narrow, firstly by being divided in two by the nasal septum, and secondly by lateral walls that have several longitudinal folds, or shelves, called nasal conchae, thus exposing a large area of nasal mucous membrane to the air as it is inhaled.
This causes the inhaled air to take up moisture from the wet mucus and warmth from the underlying blood vessels, so that the air is nearly saturated with water vapor and is at body temperature by the time it reaches the larynx. Part of this moisture and heat is recaptured as the exhaled air moves out over the dried-out, cooled mucus in the nasal passages during breathing out. The sticky mucus also traps much of the particulate matter that is breathed in, preventing it from reaching the lungs. The anatomy of a typical mammalian respiratory system, below the structures listed among the "upper airways", is described as a respiratory tree or tracheobronchial tree. Larger airways give rise to branches that are narrower, but more numerous, than the "trunk" airway that gives rise to them. The human respiratory tree may consist of, on average, 23 such branchings into progressively smaller airways, while the respiratory tree of the mouse has up to 13 such branchings. Proximal div
An explosion is a rapid increase in volume and release of energy in an extreme manner, with the generation of high temperatures and the release of gases. Supersonic explosions created by high explosives are known as detonations and travel via supersonic shock waves. Subsonic explosions are created by low explosives through a slower burning process known as deflagration. Explosions can occur in nature due to a large influx of energy. Most natural explosions arise from volcanic processes of various sorts, including explosive volcanic eruptions. Explosions also occur as a result of impact events and in phenomena such as hydrothermal explosions. Explosions can occur outside of Earth in events such as supernovae. Explosions occur during bushfires in eucalyptus forests, where the volatile oils in the tree tops combust. Among the largest known explosions in the universe are supernovae, which result when a star explodes from the sudden starting or stopping of nuclear fusion, and gamma-ray bursts, whose nature is still in some dispute.
Solar flares are an example of a common explosion on the Sun, and on most other stars as well. The energy source for solar flare activity comes from the tangling of magnetic field lines resulting from the rotation of the Sun's conductive plasma. Another type of large astronomical explosion occurs when a large meteoroid or an asteroid impacts the surface of another object, such as a planet. The most common artificial explosives are chemical explosives, involving a rapid and violent oxidation reaction that produces large amounts of hot gas. Gunpowder was the first explosive to be put to use. Other notable early developments in chemical explosive technology were Frederick Augustus Abel's development of nitrocellulose in 1865 and Alfred Nobel's invention of dynamite in 1866. Chemical explosions are initiated by an electric spark or flame in the presence of oxygen. Accidental explosions may occur in rocket engines, etc. A high-current electrical fault can create an "electrical explosion" by forming a high-energy electrical arc which vaporizes metal and insulation material.
This arc flash hazard is a danger to persons working on energized switchgear. Excessive magnetic pressure within an ultra-strong electromagnet can cause a magnetic explosion. A purely physical process, as opposed to a chemical or nuclear one, e.g. the bursting of a sealed or partially sealed container under internal pressure, is often referred to as a mechanical explosion. Examples include a simple tin can of beans tossed into a fire. Boiling liquid expanding vapor explosions are one type of mechanical explosion that can occur when a vessel containing a pressurized liquid is ruptured, causing a rapid increase in volume as the liquid evaporates. Note that the contents of the container may cause a subsequent chemical explosion, the effects of which can be more serious, such as a propane tank in the midst of a fire. In such a case, the effects of the mechanical explosion when the tank fails are compounded by the effects of the explosion resulting from the released propane in the presence of an ignition source. For this reason, emergency workers differentiate between the two events.
In addition to stellar nuclear explosions, a man-made nuclear weapon is a type of explosive weapon that derives its destructive force from nuclear fission or from a combination of fission and fusion. As a result, even a nuclear weapon with a small yield is more powerful than the largest conventional explosives available, with a single weapon capable of destroying an entire city. Explosive force is released in a direction perpendicular to the surface of the explosive. If a grenade is in mid-air during the explosion, the direction of the blast will be 360°. In contrast, in a shaped charge the explosive forces are focused to produce a greater local effect. The speed of the reaction is what distinguishes an explosive reaction from an ordinary combustion reaction. Unless the reaction occurs rapidly, the thermally expanding gases will be moderately dissipated in the medium, with no large differential in pressure, and there will be no explosion. Consider a wood fire: as the fire burns, there is the evolution of heat and the formation of gases, but neither is liberated rapidly enough to build up a sudden substantial pressure differential and cause an explosion.
This can be likened to the difference between the energy discharge of a battery, which is slow, and that of a flash capacitor like that in a camera flash, which releases its energy all at once. The generation of heat in large quantities accompanies most explosive chemical reactions; the exceptions are called entropic explosives and include organic peroxides such as acetone peroxide. It is the rapid liberation of heat that causes the gaseous products of most explosive reactions to expand and generate high pressures; this rapid generation of high pressures of the released gas constitutes the explosion. The liberation of heat with insufficient rapidity will not cause an explosion. For example, although a unit mass of coal yields five times as much heat as a unit mass of nitroglycerin, the coal cannot be used as an explosive because the rate at which it yields this heat is quite slow. In fact, a substance which burns less rapidly may evolve more total heat than an explosive which detonates rapidly. In the form
The Armstrong limit or Armstrong's line is a measure of altitude above which atmospheric pressure is sufficiently low that water boils at the normal temperature of the human body. Exposure to pressure below this limit results in a rapid loss of consciousness, followed by a series of changes to cardiovascular and neurological functions, and eventually death, unless pressure is restored within 60–90 seconds. On Earth, the limit is around 18–19 km above sea level, above which atmospheric air pressure drops below 0.0618 atm. The term is named after United States Air Force General Harry George Armstrong, who was the first to recognize this phenomenon. At or above the Armstrong limit, exposed body fluids such as saliva, urine, and the liquids wetting the alveoli within the lungs—but not vascular blood—will boil away without a full-body pressure suit, and no amount of breathable oxygen delivered by any means will sustain life for more than a few minutes. The NASA technical report Rapid Decompression Emergencies in Pressure-Suited Subjects, which discusses the brief accidental exposure of a human to near vacuum, notes the result of exposure to pressure below that associated with the Armstrong limit: "The subject reported that... his last conscious memory was of the water on his tongue beginning to boil." At the nominal body temperature of 37 °C, water has a vapour pressure of 6.3 kilopascals.
A pressure of 6.3 kPa—the Armstrong limit—is about 1/16 of the standard sea-level atmospheric pressure of 101.3 kilopascals. Modern formulas for calculating the standard pressure at a given altitude vary—as do the precise pressures one will measure at a given altitude on a given day—but a common formula shows that 6.3 kPa is found at an altitude of about 19,000 m. Blood pressure is a gauge pressure, measured relative to the ambient pressure; to determine when blood will boil, it has to be added to the ambient pressure. This is similar to a flat automobile tire: with zero gauge pressure, a flat tire at the altitude of the Armstrong limit would still have an absolute pressure of 63 hectopascals, that is, the ambient pressure at the altitude where the 6.3 kPa pressure level occurs, both inside and outside it. If one inflates the tire to a non-zero gauge pressure, this internal pressure is in addition to those 6.3 kilopascals of ambient pressure. This means that for an individual with a low diastolic blood pressure of 8.0 kPa, the absolute blood pressure would be 14.3 kPa, the sum of the blood pressure and the ambient pressure.
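The numbers above can be reproduced directly. The sketch below solves a common standard-atmosphere relation for the altitude at which pressure falls to 6.3 kPa (using the usual isothermal-layer expression for 11–20 km with assumed ISA constants) and adds the ambient pressure to a gauge diastolic pressure of 8.0 kPa to obtain the absolute value quoted; the constants are assumptions taken from the standard atmosphere model, not from the text.

```python
import math

# Assumed ISA constants for the isothermal layer between 11 km and 20 km.
P_11KM = 22.632e3        # pressure at 11,000 m, Pa
T_11KM = 216.65          # layer temperature, K
G0 = 9.80665             # gravitational acceleration, m/s^2
R_AIR = 287.053          # specific gas constant of dry air, J/(kg K)

def altitude_at_pressure(p_pa):
    """Altitude (m) within the 11-20 km ISA layer where pressure equals p_pa."""
    scale_height = R_AIR * T_11KM / G0
    return 11_000.0 + scale_height * math.log(P_11KM / p_pa)

armstrong_pressure = 6.3e3   # Pa, vapour pressure of water at 37 degC
print(f"Armstrong limit altitude: {altitude_at_pressure(armstrong_pressure):.0f} m")  # about 19,000 m

# Gauge diastolic pressure of 8.0 kPa plus the 6.3 kPa ambient pressure
# gives the absolute diastolic pressure mentioned above.
diastolic_gauge = 8.0e3
print(f"Absolute diastolic pressure: {(diastolic_gauge + armstrong_pressure) / 1e3:.1f} kPa")  # 14.3 kPa
```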
This pressure is more than twice the ambient pressure at the Armstrong limit. This extra pressure is more than sufficient to prevent blood from outright boiling at 18 km while the heart is still beating. Well below the Armstrong limit, humans require supplemental oxygen in order to avoid hypoxia; for most people, this is needed at altitudes above 4,500 m. Commercial jetliners are required to maintain cabin pressurization at a cabin altitude of not greater than 2,400 m. U.S. regulations on general aviation aircraft require that the minimum required flight crew, but not the passengers, be on supplemental oxygen if the plane spends more than half an hour at a cabin altitude above 3,800 m. The minimum required flight crew must be on supplemental oxygen if the plane spends any time above a cabin altitude of 4,300 m, and the passengers must be provided with supplemental oxygen above a cabin altitude of 4,500 m. Skydivers, who are at altitude only briefly before jumping, do not normally exceed 4,500 m. The Armstrong limit describes the altitude associated with an objective, precisely defined natural phenomenon: the vapor pressure of body-temperature water.
In the late 1940s, it represented a new fundamental, hard limit to altitude that went beyond the somewhat subjective observations of human physiology and the time-dependent effects of hypoxia experienced at lower altitudes. Pressure suits had long been worn at altitudes well below the Armstrong limit to avoid hypoxia. In 1936, Francis Swain of the Royal Air Force reached 15,230 m flying a Bristol Type 138 while wearing a pressure suit. Two years later, the Italian military officer Mario Pezzi set an altitude record of 17,083 m, wearing a pressure suit in his Caproni Ca.161bis biplane, though he was well below the altitude at which body-temperature water boils. A pressure suit is required at around 15,000 m for a well-conditioned and experienced pilot to safely operate an aircraft with an unpressurized cabin. In an unpressurized cockpit at altitudes greater than 11,900 m above sea level, the physiological reaction, even when breathing pure oxygen, is hypoxia—an inadequate oxygen level causing confusion and eventual loss of consciousness.
Air contains 20.95% oxygen. At 11,900 m, breathing pure oxygen through an unsealed face mask, one is breathing the same partial pressure of oxygen as one would experience with regular air at around 3,600 m above sea level. At higher altitudes, oxygen must be delivered through a sealed mask with increased pressure, to maintain a physiologically adequate partial pressure of oxygen. If the user does not wear a pressure suit or a counter-pressure garment that restricts the movement of their chest, the high pressure air can ca
Vapor pressure or equilibrium vapor pressure is defined as the pressure exerted by a vapor in thermodynamic equilibrium with its condensed phases at a given temperature in a closed system. The equilibrium vapor pressure is an indication of a liquid's evaporation rate; it relates to the tendency of particles to escape from the liquid. A substance with a high vapor pressure at normal temperatures is referred to as volatile. The pressure exhibited by vapor present above a liquid surface is known as vapor pressure. As the temperature of a liquid increases, the kinetic energy of its molecules increases; as the kinetic energy of the molecules increases, the number of molecules transitioning into a vapor also increases, thereby increasing the vapor pressure. The vapor pressure of any substance increases non-linearly with temperature according to the Clausius–Clapeyron relation. The atmospheric pressure boiling point of a liquid is the temperature at which the vapor pressure equals the ambient atmospheric pressure.
With any incremental increase in that temperature, the vapor pressure becomes sufficient to overcome atmospheric pressure and lift the liquid to form vapor bubbles inside the bulk of the substance. Bubble formation deeper in the liquid requires a higher temperature due to the higher fluid pressure, because fluid pressure increases above the atmospheric pressure as the depth increases. More important at shallow depths is the higher temperature required to start bubble formation: the surface tension of the bubble wall leads to an overpressure in the small, initial bubbles. Thus, thermometer calibration should not rely on the temperature in boiling water. The vapor pressure that a single component in a mixture contributes to the total pressure in the system is called its partial pressure. For example, air at sea level, saturated with water vapor at 20 °C, has partial pressures of about 2.3 kPa of water, 78 kPa of nitrogen, 21 kPa of oxygen and 0.9 kPa of argon, totaling 102.2 kPa, which forms the basis for standard atmospheric pressure.
Vapor pressure is measured in the standard units of pressure. The International System of Units recognizes pressure as a derived unit with the dimension of force per area and designates the pascal as its standard unit; one pascal is one newton per square meter. Experimental measurement of vapor pressure is a simple procedure for common pressures between 1 and 200 kPa. The most accurate results are obtained near the boiling point of substances, and large errors result for measurements smaller than 1 kPa. Procedures consist of purifying the test substance, isolating it in a container, evacuating any foreign gas, then measuring the equilibrium pressure of the gaseous phase of the substance in the container at different temperatures. Better accuracy is achieved when care is taken to ensure that the entire substance and its vapor are at the prescribed temperature; this is done, as with the use of an isoteniscope, by submerging the containment area in a liquid bath. Low vapor pressures of solids can be measured using the Knudsen effusion cell method.
In a medical context, vapor pressure is sometimes expressed in other units, such as millimeters of mercury. This is important for volatile anesthetics, most of which are liquids at body temperature but have a high vapor pressure. Anesthetics with a higher vapor pressure at body temperature will be excreted more readily as they are exhaled from the lungs. The Antoine equation is a mathematical expression of the relation between the vapor pressure and the temperature of pure liquid or solid substances. The basic form of the equation is log P = A − B / (C + T), and it can be transformed into the temperature-explicit form T = B / (A − log P) − C, where P is the absolute vapor pressure of a substance, T is the temperature of the substance, A, B and C are substance-specific coefficients, and log is either the base-10 or the natural logarithm. A simpler form of the equation with only two coefficients is sometimes used: log P = A − B / T, which can be transformed to T = B / (A − log P). Sublimations and vaporizations of the same substance have separate sets of Antoine coefficients, as do components in mixtures.
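A small numerical example may help. The sketch below implements the three-coefficient Antoine equation in both directions; the coefficients used for water (A = 8.07131, B = 1730.63, C = 233.426, with P in mmHg and T in °C, valid roughly from 1 to 100 °C) are one commonly tabulated set and are quoted here as an assumption for illustration, not taken from the text.

```python
import math

def antoine_pressure(t, a, b, c):
    """Vapor pressure from the Antoine equation: log10(P) = A - B / (C + T)."""
    return 10.0 ** (a - b / (c + t))

def antoine_temperature(p, a, b, c):
    """Temperature-explicit form: T = B / (A - log10(P)) - C."""
    return b / (a - math.log10(p)) - c

# One commonly tabulated coefficient set for water (P in mmHg, T in degC, ~1-100 degC).
A, B, C = 8.07131, 1730.63, 233.426

print(antoine_pressure(100.0, A, B, C))     # ~760 mmHg: water boils near 100 degC at 1 atm
print(antoine_temperature(760.0, A, B, C))  # ~100 degC recovered from 760 mmHg
```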
Each parameter set for a specific compound is only applicable over a specified temperature range. Temperature ranges are chosen to maintain the equation's accuracy of a few percent up to 8–10 percent. For many volatile substances, several different sets of parameters are available and used for different temperature ranges. The Antoine equation has poor accuracy with any single parameter set when used from a compound's melting point to its critical temperature. Accuracy is also usually poor when the vapor pressure is under 10 Torr, because of the limitations of the apparatus used to establish the Antoine parameter values. The Wagner equation gives "o