1.
Solid
–
Solid is one of the four fundamental states of matter. It is characterized by structural rigidity and resistance to changes of shape or volume. Unlike a liquid, a solid object does not flow to take on the shape of its container, nor does it expand to fill the entire volume available to it like a gas does. The atoms in a solid are tightly bound to each other, either in a regular geometric lattice or irregularly. The branch of physics that deals with solids is called solid-state physics. Materials science is concerned with the physical and chemical properties of solids, while solid-state chemistry is concerned with the synthesis of novel materials, as well as with the identification of substances. The atoms, molecules or ions which make up solids may be arranged in a repeating pattern. Materials whose constituents are arranged in a regular pattern are known as crystals. In some cases, the regular ordering can continue unbroken over a large scale, as in diamonds, where each diamond is a single crystal. Almost all common metals, and many ceramics, are polycrystalline. In other materials there is no long-range order in the positions of the atoms; these solids are known as amorphous solids, examples of which include polystyrene. Whether a solid is crystalline or amorphous depends on the material involved and on the conditions in which it was formed. Solids formed by slow cooling tend to be crystalline, while those frozen rapidly are more likely to be amorphous. Likewise, the specific crystal structure adopted by a crystalline solid depends on the material involved and on how it was formed. While many common objects, such as an ice cube or a coin, are chemically identical throughout, many other materials comprise a number of different substances packed together. For example, a typical rock is an aggregate of several different minerals and mineraloids, with no specific chemical composition. Wood is an organic material consisting primarily of cellulose fibers embedded in a matrix of organic lignin.
In materials science, composites of more than one constituent material can be designed to have desired properties. The forces between the atoms in a solid can take a variety of forms. For example, a crystal of sodium chloride is made up of sodium and chlorine held together by ionic bonds. In diamond or silicon, the atoms share electrons and form covalent bonds; in metals, electrons are shared in metallic bonding. Some solids, particularly most organic compounds, are held together with van der Waals forces resulting from the polarization of the electronic charge cloud on each molecule. The dissimilarities between the types of solid result from the differences between their bonding; metals typically are strong, dense, and good conductors of both electricity and heat.
2.
Liquid
–
A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a nearly constant volume independent of pressure. As such, it is one of the four states of matter. A liquid is made up of tiny vibrating particles of matter, such as atoms, held together by intermolecular bonds. Water is, by far, the most common liquid on Earth. Like a gas, a liquid is able to flow and take the shape of a container. Most liquids resist compression, although others can be compressed. Unlike a gas, a liquid does not disperse to fill every space of a container. A distinctive property of the liquid state is surface tension, leading to wetting phenomena. The density of a liquid is usually close to that of a solid; therefore, liquid and solid are both termed condensed matter. On the other hand, as liquids and gases share the ability to flow, they are both called fluids. Although liquid water is abundant on Earth, this state of matter is actually the least common in the known universe, because liquids require a relatively narrow temperature/pressure range to exist. Most known matter in the universe is in gaseous form, as interstellar clouds, or in plasma form within stars. Liquid is one of the four states of matter, with the others being solid, gas, and plasma. Unlike a solid, the molecules in a liquid have a much greater freedom to move. The forces that bind the molecules together in a solid are only temporary in a liquid; a liquid, like a gas, displays the properties of a fluid. A liquid can flow and assume the shape of its container; if a liquid is placed in a bag, it can be squeezed into any shape. These properties make a liquid suitable for applications such as hydraulics. Liquid particles are bound firmly but not rigidly; they are able to move around one another freely, resulting in a limited degree of particle mobility. As the temperature increases, the vibrations of the molecules cause the distances between the molecules to increase. When a liquid reaches its boiling point, the cohesive forces that bind the molecules closely together break.
If the temperature is decreased, the distances between the molecules become smaller. Only two elements are liquid at standard conditions for temperature and pressure: mercury and bromine. Four more elements have melting points slightly above room temperature: francium, caesium, gallium and rubidium. Metal alloys that are liquid at room temperature include NaK, a sodium-potassium alloy; galinstan, a fusible alloy liquid; and some amalgams.
3.
Gas
–
Gas is one of the four fundamental states of matter. A pure gas may be made up of individual atoms, elemental molecules made from one type of atom, or compound molecules made from a variety of atoms. A gas mixture, such as air, contains a variety of pure gases. What distinguishes a gas from liquids and solids is the vast separation of the individual gas particles. This separation usually makes a colorless gas invisible to the human observer. The interaction of gas particles in the presence of electric and gravitational fields is considered negligible. One type of commonly known gas is steam. The gaseous state of matter is found between the liquid and plasma states, the latter of which provides the upper temperature boundary for gases. Bounding the lower end of the temperature scale lie degenerate quantum gases, which are gaining increasing attention. High-density atomic gases supercooled to incredibly low temperatures are classified by their statistical behavior as either a Bose gas or a Fermi gas. For a comprehensive listing of these states of matter, see the list of states of matter. The only chemical elements that are stable multi-atom homonuclear molecules at standard temperature and pressure are hydrogen, nitrogen and oxygen. These gases, when grouped together with the noble gases, are called elemental gases. Alternatively they are known as molecular gases to distinguish them from molecules that are also chemical compounds. The word gas is a neologism first used by the early 17th-century Flemish chemist J. B. van Helmont; according to Paracelsus's terminology, chaos meant something like ultra-rarefied water. An alternative story is that Van Helmont's word is corrupted from gahst (or geist), signifying a ghost or spirit. The characteristic properties of gases were repeatedly observed by scientists such as Robert Boyle, Jacques Charles, John Dalton, Joseph Gay-Lussac and Amedeo Avogadro for a variety of gases in various settings. Their detailed studies ultimately led to a mathematical relationship among these properties expressed by the ideal gas law.
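The ideal gas law mentioned above relates pressure, volume, amount of substance and temperature as PV = nRT. A minimal sketch of the relationship, with the function name and the example values chosen purely for illustration:

```python
# Ideal gas law: P * V = n * R * T
R = 8.314  # universal gas constant, J/(mol*K)

def ideal_gas_pressure(n_mol: float, temp_k: float, volume_m3: float) -> float:
    """Pressure (Pa) of n_mol moles of an ideal gas at temp_k kelvin in volume_m3 cubic metres."""
    return n_mol * R * temp_k / volume_m3

# One mole at 0 degC (273.15 K) in 22.414 L comes out near one atmosphere (~101325 Pa).
p = ideal_gas_pressure(1.0, 273.15, 0.022414)
```

The same relation can be rearranged for any one of the four quantities, which is how the historical gas laws (Boyle's, Charles's, Gay-Lussac's) appear as special cases with one variable held fixed.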
Gas particles are widely separated from one another, and consequently have weaker intermolecular bonds than liquids or solids. These intermolecular forces result from electrostatic interactions between gas particles. Like-charged areas of different gas particles repel, while oppositely charged regions of different gas particles attract one another. Transient, randomly induced charges exist across non-polar covalent bonds of molecules, and the electrostatic interactions caused by them are referred to as van der Waals forces. The interaction of these forces varies within a substance, which determines many of the physical properties unique to each gas. A comparison of boiling points for compounds formed by ionic and covalent bonds leads us to this conclusion. Drifting smoke particles provide some insight into low-pressure gas behavior.
4.
Fog
–
Fog consists of visible cloud water droplets or ice crystals suspended in the air at or near the Earth's surface. Fog can be considered a type of low-lying cloud and is heavily influenced by nearby bodies of water and topography. In turn, fog has affected many human activities, such as shipping and travel. The term fog is typically distinguished from the more generic term cloud in that fog is low-lying, and the moisture in the fog is often generated locally. By definition, fog reduces visibility to less than 1 kilometre. For aviation purposes in the UK, a visibility of less than 5 kilometres but greater than 999 metres is considered to be mist if the relative humidity is 70% or greater; below 70%, haze is reported. Fog forms when the difference between air temperature and dew point is less than 2.5 °C or 4 °F. Fog begins to form when water vapor condenses into tiny liquid water droplets suspended in the air. Water vapor normally begins to condense on condensation nuclei such as dust and ice. Fog, like its elevated cousin stratus, is a stable cloud deck which tends to form when a cool, stable air mass is trapped underneath a warm air mass. Fog normally occurs at a relative humidity near 100%; this arises either from added moisture in the air or from a falling ambient air temperature. However, fog can form at lower humidities, and can sometimes fail to form with relative humidity at 100%. At 100% relative humidity, the air cannot hold additional moisture. Fog can form suddenly and can dissipate just as rapidly; the sudden formation of fog is known as flash fog. Fog commonly produces precipitation in the form of drizzle or very light snow. Drizzle occurs when the humidity of fog attains 100% and the cloud droplets begin to coalesce into larger droplets. This can occur when the fog layer is lifted and cooled sufficiently. Drizzle becomes freezing drizzle when the temperature at the surface drops below the freezing point.
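The 2.5 °C dew-point spread mentioned above can be expressed as a simple check. The function name and the default threshold below are illustrative, not a standard meteorological API; real forecasting of course weighs many more factors:

```python
def fog_likely(air_temp_c: float, dew_point_c: float, spread_threshold_c: float = 2.5) -> bool:
    """Fog typically forms when the spread between air temperature and dew point is small."""
    spread = air_temp_c - dew_point_c
    return spread < spread_threshold_c

# A 10 degC night with an 8.5 degC dew point is within the 2.5 degC spread, so fog is plausible;
# a 20 degC afternoon with a 10 degC dew point is far outside it.
```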
The inversion boundary varies its altitude primarily in response to the weight of the air above it. The marine layer, and any fogbank it may contain, will be squashed when the pressure above it is high, and conversely may expand upwards when that pressure is falling. Fog can form in a number of ways, depending on how the cooling that caused the condensation occurred. Radiation fog is formed by the cooling of land after sunset by thermal radiation in calm conditions with a clear sky. The cooling ground then cools the adjacent air by heat conduction, producing condensation. In perfect calm, the fog layer can be less than a meter deep. Radiation fogs occur at night and usually do not last long after sunrise, but they can persist all day in the winter months, especially in areas bounded by high ground. Radiation fog is most common in autumn and early winter; examples of this phenomenon include the Tule fog. Ground fog is fog that obscures less than 60% of the sky. Advection fog occurs when moist air passes over a cool surface by advection and is cooled.
5.
Exudate
–
An exudate is a fluid emitted by an organism through pores or a wound, a process known as exuding. Exudate is derived from exude, to ooze, from the Latin exsūdāre. An exudate is any fluid that filters from the circulatory system into lesions or areas of inflammation. It can be a pus-like or clear fluid. When an injury occurs, leaving skin exposed, fluid leaks out of the blood vessels and into nearby tissues. The fluid is composed of serum, fibrin, and white blood cells. Exudate may ooze from cuts or from areas of infection or inflammation. Purulent or suppurative exudate consists of plasma with both active and dead neutrophils, fibrinogen, and necrotic parenchymal cells. This kind of exudate is consistent with more severe infections, and is commonly referred to as pus. Fibrinous exudate is composed mainly of fibrinogen and fibrin. It is characteristic of rheumatic carditis, but is also seen in severe injuries such as strep throat and bacterial pneumonia. Fibrinous inflammation is often difficult to resolve due to blood vessels growing into the exudate; often, large amounts of antibiotics are necessary for resolution. Catarrhal exudate is seen in the nose and throat and is characterized by a high content of mucus. Serous exudate is seen in mild inflammation, with a relatively low protein content. Its consistency resembles that of serum, and it can usually be seen in certain disease states like tuberculosis. Malignant pleural effusion is effusion where cancer cells are present; it is usually classified as exudate. There is an important distinction between transudates and exudates. Transudates are caused by disturbances of hydrostatic or colloid osmotic pressure, and they have a low protein content in comparison to exudates. The medical distinction between transudates and exudates is made through the measurement of the specific gravity of extracted fluid.
Specific gravity is used to estimate the protein content of the fluid; the higher the specific gravity, the greater the likelihood of capillary permeability changes in relation to body cavities. For example, the specific gravity of a transudate is usually less than 1.012. The Rivalta test may be used to differentiate an exudate from a transudate. It is not clear if there is a distinction between transudates and exudates in plants. Plant exudates include saps, gums, latex, and resin; sometimes nectar is considered an exudate. Plant roots exude a variety of molecules into the rhizosphere, including acids, sugars, polysaccharides and ectoenzymes. Exudation of these compounds has various benefits to the plant and to the microorganisms of the rhizosphere.
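The 1.012 specific-gravity cutoff above can be written as a one-line rule of thumb. The function name and the hard single-number cutoff are illustrative simplifications; actual clinical work-ups also use protein and other fluid chemistry criteria:

```python
def classify_effusion(specific_gravity: float, cutoff: float = 1.012) -> str:
    """Rough transudate/exudate call from fluid specific gravity alone."""
    return "transudate" if specific_gravity < cutoff else "exudate"

# A fluid with specific gravity 1.005 falls below the cutoff and reads as a transudate,
# while 1.020 falls above it and reads as an exudate.
```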
6.
Geyser
–
A geyser is a spring characterized by intermittent discharge of water ejected turbulently and accompanied by steam. As a fairly rare phenomenon, the formation of geysers is due to particular hydrogeological conditions that exist in only a few places on Earth. Generally, all geyser field sites are located near active volcanic areas, and the geyser effect is due to the proximity of magma. Generally, surface water works its way down to a depth of around 2,000 metres, where it contacts hot rocks. The resultant boiling of the pressurized water results in the geyser effect of hot water and steam spraying out at the surface. Over one thousand known geysers exist worldwide. At least 1,283 geysers have erupted in Yellowstone National Park, Wyoming, United States, and an average of 465 geysers are active there in a given year. Like many other natural phenomena, geysers are not unique to planet Earth. Jet-like eruptions, often referred to as cryogeysers, have been observed on several of the moons of the outer solar system. Due to the low ambient pressures, these eruptions consist of vapor without liquid; they are made more easily visible by particles of dust carried aloft. Water vapor jets have been observed near the south pole of Saturn's moon Enceladus, and there are also signs of carbon dioxide eruptions from the polar ice cap of Mars. In the latter two cases, instead of being driven by geothermal energy, the eruptions seem to rely on solar heating via a solid-state greenhouse effect. The word geyser comes from Geysir, the name of a spring at Haukadalur, Iceland; that name, in turn, comes from the Icelandic verb geysa, to gush. Geysers are generally associated with volcanic areas. As the water boils, the resulting pressure forces a superheated column of steam and water to the surface through the geyser's internal plumbing. The formation of geysers specifically requires the combination of three conditions that are usually found in volcanic terrain.
The heat needed for geyser formation comes from magma that needs to be near the surface of the earth. The fact that geysers need heat much higher than normally found near the Earth's surface is the reason they are associated with volcanoes or volcanic areas. The pressures encountered at the depths where the water is heated make the boiling point of the water much higher than at normal atmospheric pressures. The water ejected from a geyser travels underground through deep, pressurized fissures in the Earth's crust. In order for the heated water to form a geyser, a plumbing system made of fractures, fissures, porous spaces, and sometimes cavities is required. This includes a reservoir to hold the water while it is being heated. Geysers are generally aligned along faults.
7.
Steam
–
Steam is water in the gas phase, which is formed when water boils. Pure steam is invisible; however, "steam" often refers to wet steam, the visible mist of water droplets formed as water vapor condenses. At lower pressures, such as in the upper atmosphere or at the top of high mountains, water boils at a lower temperature than the nominal 100 °C at standard pressure. If heated further, it becomes superheated steam. Piston-type steam engines played a central role in the Industrial Revolution, and modern steam turbines are used to generate more than 80% of the world's electricity. If liquid water comes in contact with a very hot surface or depressurizes quickly below its vapor pressure, it can create a steam explosion. Steam explosions have been responsible for many accidents, and may also have been responsible for much of the damage to the plant in the Chernobyl disaster. Steam is traditionally created by heating a boiler via burning coal and other fuels. Water vapor that includes water droplets is described as wet steam. As wet steam is heated further, the droplets evaporate, and at a high enough temperature all of the water evaporates. Superheated steam is steam at a temperature higher than its boiling point for the pressure. Steam tables contain thermodynamic data for water/steam and are used by engineers and scientists in design. Additionally, thermodynamic phase diagrams for water/steam, such as a temperature-entropy diagram or a Mollier diagram, are used for analysing thermodynamic cycles. In agriculture, steam is used for soil sterilization to avoid the use of harmful chemical agents. Steam's capacity to transfer heat is also used in the home: for cooking vegetables, steam cleaning of fabric, carpets and flooring. In each case, water is heated in a boiler. Steam is also used in ironing clothes, to add enough humidity with the heat to take wrinkles out and put intentional creases into the clothing.
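The pressure dependence of water's boiling point mentioned above can be sketched with the Antoine equation. The constants below are the commonly tabulated values for water over roughly 1-100 °C with pressure in mmHg; treating them as exact, and the whole approach, is an assumption of this sketch rather than an engineering-grade steam-table lookup:

```python
import math

# Antoine equation for water: log10(P_mmHg) = A - B / (C + T_celsius)
A, B, C = 8.07131, 1730.63, 233.426  # tabulated constants, valid roughly 1-100 degC

def boiling_point_c(pressure_mmhg: float) -> float:
    """Invert the Antoine equation to estimate water's boiling temperature at a given pressure."""
    return B / (A - math.log10(pressure_mmhg)) - C

# At standard pressure (760 mmHg) this returns ~100 degC;
# at the reduced pressures found at altitude it returns a lower temperature.
```

For serious design work one would use steam tables rather than this correlation, as the entries immediately below note.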
About 90% of all electricity is generated using steam as the working fluid. In electric generation, steam is typically condensed at the end of its expansion cycle and returned to the boiler for re-use. However, in cogeneration, steam is piped into buildings through a heating system to provide heat energy after its use in the electric generation cycle. The world's biggest steam generation system is the New York City steam system. In other industrial applications, steam is used for energy storage, where it is introduced and extracted by heat transfer, usually through pipes. Steam is a reservoir for thermal energy because of water's high heat of vaporization.
8.
Haze
–
Haze is traditionally an atmospheric phenomenon in which dust, smoke and other dry particles obscure the clarity of the sky. Sources for haze particles include farming, traffic and industry. Seen from afar, and depending upon the direction of view with respect to the sun, haze may appear brownish or bluish, while mist tends to be bluish-grey. Haze often is thought of as a phenomenon of dry air; however, haze particles may act as condensation nuclei for the subsequent formation of mist droplets, and such forms of haze are known as wet haze. The term haze, in meteorological literature, generally is used to denote visibility-reducing aerosols of the wet type. Such aerosols commonly arise from complex chemical reactions that occur as sulfur dioxide gases emitted during combustion are converted into small droplets of sulphuric acid. The reactions are enhanced in the presence of sunlight and high relative humidity. A small component of wet haze aerosols appears to be derived from compounds released by trees, such as terpenes. For all these reasons, wet haze tends to be primarily a warm-season phenomenon, and large areas of haze covering many thousands of kilometers may be produced under favorable conditions each summer. Haze often occurs when dust and smoke particles accumulate in relatively dry air; when weather conditions block the dispersal of smoke and other pollutants, they concentrate and form a usually low-hanging shroud that impairs visibility and may become a respiratory health threat. Industrial pollution can result in dense haze, which is known as smog. Since 1991, haze has been a particularly acute problem in Southeast Asia, where the main source of the haze has been fires occurring in Sumatra. In response to the 1997 Southeast Asian haze, the ASEAN countries agreed on a Regional Haze Action Plan, and in 2002 all ASEAN countries except Indonesia signed the Agreement on Transboundary Haze Pollution, but the pollution is still a problem today.
Under the agreement, the ASEAN secretariat hosts a co-ordination and support unit. During the 2013 Southeast Asian haze, Singapore experienced a record high pollution level, with the 3-hour Pollution Standards Index reaching a record high of 401. Haze is no longer merely a domestic problem; it has become one of the causes of international disputes among neighboring countries, as haze migrates to adjacent countries and thereby pollutes them as well. One of the most recent problems concerned the two neighboring countries Malaysia and Indonesia, as winds blow most of the fumes across the narrow Strait of Malacca to Malaysia; the 2015 Southeast Asian haze constitutes an ongoing crisis. Haze also causes problems in the area of photography, where the penetration of large amounts of dense atmosphere may be necessary to image distant subjects; this results in a loss of contrast in the subject.
9.
Air pollution
–
Air pollution occurs when harmful substances, including particulates and biological molecules, are introduced into Earth's atmosphere. It may cause diseases, allergies or death in humans; it may also cause harm to other living organisms such as animals and food crops. Both human activity and natural processes can generate air pollution. Indoor air pollution and poor urban air quality are listed as two of the world's worst toxic pollution problems in the 2008 Blacksmith Institute World's Worst Polluted Places report. According to the 2014 WHO report, air pollution in 2012 caused the deaths of around 7 million people worldwide. An air pollutant is a substance in the air that can have adverse effects on humans and the ecosystem. The substance can be solid particles, liquid droplets, or gases. A pollutant can be of natural origin or man-made. Pollutants are classified as primary or secondary. Primary pollutants are usually produced directly from a process, such as ash from a volcanic eruption; other examples include carbon monoxide gas from motor vehicle exhaust, or the sulfur dioxide released from factories. Secondary pollutants are not emitted directly; rather, they form in the air when primary pollutants react or interact. Ground-level ozone is a prominent example of a secondary pollutant. Some pollutants may be both primary and secondary: they are both emitted directly and formed from other primary pollutants. Substances emitted into the atmosphere by human activity include: Carbon dioxide - Debate continues over whether carbon dioxide should be classified as an atmospheric pollutant; because of its role as a greenhouse gas, it has been described as the leading pollutant and the worst climate pollution. Against this it is argued that carbon dioxide is a natural component of the atmosphere, essential for plant life. This question of terminology has practical effects, for example in determining whether the U. S.
Clean Air Act is deemed to regulate CO2 emissions; the CO2 increase in Earth's atmosphere has been accelerating. Sulfur oxides - particularly sulfur dioxide, a compound with the formula SO2. SO2 is produced by volcanoes and in various industrial processes. Coal and petroleum often contain sulfur compounds, and their combustion generates sulfur dioxide. Further oxidation of SO2, usually in the presence of a catalyst such as NO2, forms H2SO4, and thus acid rain. This is one of the causes for concern over the environmental impact of the use of these fuels as power sources. Nitrogen oxides - Nitrogen oxides, particularly nitrogen dioxide, are expelled from high-temperature combustion, and can be seen as a brown haze dome above, or a plume downwind of, cities. Nitrogen dioxide is a compound with the formula NO2.
10.
Smoke
–
Smoke is a collection of airborne solid and liquid particulates and gases emitted when a material undergoes combustion or pyrolysis. It is commonly an unwanted by-product of fires, but may also be used for pest control, communication, defensive and offensive capabilities in the military, cooking, or smoking. Smoke is used in rituals where incense, sage, or resin is burned to produce a smell for spiritual purposes, and it is sometimes used as a flavoring agent and preservative for various foodstuffs. Smoke is also a component of internal combustion engine exhaust gas. Smoke inhalation is the primary cause of death in victims of indoor fires. The smoke kills by a combination of thermal damage, poisoning and pulmonary irritation caused by carbon monoxide, hydrogen cyanide and other combustion products. Smoke is an aerosol of solid particles and liquid droplets that are close to the range of sizes for Mie scattering of visible light. This effect has been likened to three-dimensional textured privacy glass — a smoke cloud does not obstruct an image, but thoroughly scrambles it. The composition of smoke depends on the nature of the burning fuel and the conditions of combustion. High temperature also leads to production of nitrogen oxides; sulfur content yields sulfur dioxide, or, in case of incomplete combustion, hydrogen sulfide. Carbon and hydrogen are almost completely oxidized to carbon dioxide and water; fires burning with a lack of oxygen produce a significantly wider palette of compounds, many of them toxic. Partial oxidation of carbon produces carbon monoxide, while nitrogen-containing materials can yield hydrogen cyanide and ammonia; hydrogen gas can be produced instead of water. Content of halogens such as chlorine may lead to production of e. g. hydrogen chloride, phosgene and dioxin; hydrogen fluoride can be formed from fluorocarbons, whether fluoropolymers subjected to fire or halocarbon fire suppression agents. Phosphorus and antimony oxides and their reaction products can be formed from some fire retardant additives. Heterocyclic compounds may also be present. Heavier hydrocarbons may condense as tar; smoke with significant tar content is yellow to brown.
Partial oxidation of the released hydrocarbons yields a palette of other compounds: aldehydes, ketones, alcohols. The visible particulate matter in such smokes is most commonly composed of carbon; other particulates may be composed of drops of condensed tar, or solid particles of ash. The presence of metals in the fuel yields particles of metal oxides. Particles of inorganic salts may also be formed, e. g. ammonium sulfate, ammonium nitrate, or sodium chloride. Inorganic salts present on the surface of the particles may make them hydrophilic. Many organic compounds, typically the aromatic hydrocarbons, may be adsorbed on the surface of the solid particles. Metal oxides can be present when metal-containing fuels are burned, e. g. solid rocket fuels containing aluminium. Depleted uranium projectiles, after impacting the target, ignite, producing particles of uranium oxides.
11.
Aerosol spray
–
Aerosol spray is a type of dispensing system which creates an aerosol mist of liquid particles. It is used with a can or bottle that contains a payload and propellant under pressure. When the container's valve is opened, the payload is forced out of a small hole and emerges as an aerosol or mist. As propellant expands to drive out the payload, only some of the propellant evaporates inside the can to maintain a constant pressure. Outside the can, the droplets of propellant evaporate rapidly, leaving the payload suspended as very fine particles or droplets. Typical payload liquids dispensed in this way are insecticides, deodorants and paints. An atomizer is a similar device that is pressurised by a hand-operated pump rather than by stored propellant. The concepts of aerosol probably go as far back as 1790. The first aerosol spray can patent was granted in Oslo in 1927 to Erik Rotheim, a Norwegian chemical engineer, and a United States patent was granted for the invention in 1931. The patent rights were sold to a United States company for 100,000 Norwegian kroner; the Norwegian Postal Service, Posten Norge, celebrated the invention by issuing a stamp in 1998. In 1939, American Julian S. Kahn received a patent for a spray can. Kahn's idea was to mix cream and a propellant from two sources to make whipped cream at home — not a true aerosol in that sense. Moreover, in 1949, he disclaimed his first four claims, which were the foundation of his following patent claims. It was not until 1941 that the spray can was first put to good use by Americans Lyle Goodhue and William Sullivan. Their design of a spray can, dubbed the bug bomb, is the ancestor of many popular commercial spray products. In 1948, three companies were granted licenses by the United States government to manufacture aerosols. Two of the three companies, Chase Products Company and Claire Manufacturing, still manufacture aerosols to this day.
The crimp-on valve, used to control the spray in low-pressure aerosols, was developed in 1949 by Bronx machine shop proprietor Robert H. Abplanalp. In 1974, Drs. Frank Sherwood Rowland and Mario J. Molina proposed that chlorofluorocarbons (CFCs), used as propellants in aerosol sprays, contributed to the depletion of Earth's ozone layer. In response to this theory, the U. S. Congress passed amendments to the Clean Air Act in 1977 authorizing the Environmental Protection Agency to regulate the presence of CFCs in the atmosphere, and the United Nations Environment Programme called for ozone layer research that same year. In 1985, Joe Farman, Brian G. Gardiner, and Jon Shanklin published the first scientific paper detailing the hole in the ozone layer. That same year, the Vienna Convention was signed in response to the UN's authorization; two years later, the Montreal Protocol, which regulated the production of CFCs, was formally signed. It came into effect in 1989, and the U. S. formally phased out CFCs in 1995. Usually the propellant gas is the vapor of a liquid with a boiling point slightly lower than room temperature. This means that inside the pressurized can, the vapor can exist in equilibrium with its bulk liquid at a pressure that is higher than atmospheric pressure, but not dangerously high.
12.
Airborne disease
–
An airborne disease is any disease that is caused by pathogens and transmitted through the air. Such diseases include many that are of considerable importance both in human and veterinary medicine. Strictly speaking, airborne diseases do not include conditions caused simply by air pollution such as dusts and poisons, though their study and prevention are related. The pathogens transmitted may be any kind of microbe, and they may be spread in aerosols, dust or liquids. The aerosols might be generated from sources of infection such as the bodily secretions of an infected animal or person, or biological wastes such as accumulate in lofts, caves and garbage. Airborne pathogens or allergens often cause inflammation in the nose, throat, sinuses and lungs. This is caused by the inhalation of these pathogens that affect a person's respiratory system or even the rest of the body. Sinus congestion, coughing and sore throats are examples of inflammation of the upper respiratory airway due to these airborne agents. Air pollution plays a significant role in airborne diseases, and is linked to asthma; pollutants are said to influence lung function by increasing airway inflammation. Many common infections can spread by airborne transmission at least in some cases, including Anthrax, Chickenpox, Influenza, Measles, Smallpox and Cryptococcosis. Airborne diseases can also affect non-humans; for example, Newcastle disease is an avian disease that affects many types of domestic poultry worldwide and is transmitted via airborne contamination.
Inhalation of these pathogens can affect a person's respiratory system or even the rest of the body. An airborne disease is caused by exposure to a source; people receive the pathogen through a portal of entry such as the mouth, nose, a cut, or a needle puncture. Airborne transmission of disease depends on physical variables endemic to the infectious particle. Environmental factors influence the efficacy of airborne disease transmission; the most evident environmental conditions, temperature, rainfall, mean daily hours of sunshine, latitude and altitude, are characteristic factors to take into account when assessing the possibility of spread of any airborne infection. Furthermore, some infrequent or exceptional extreme events, such as storms, hurricanes and typhoons, also influence the dissemination of airborne diseases. Climate conditions determine temperature, winds and relative humidity in any territory, and those are the main factors affecting the spread, duration and infectiousness of droplets containing infectious particles
13.
Breathing
–
Breathing is the process that moves air into and out of the lungs, to allow the diffusion of oxygen and carbon dioxide between the external environment and the blood. Breathing sometimes also refers to the equivalent process using other respiratory organs, such as gills in fish. For organisms with lungs, breathing is also called pulmonary ventilation, which consists of inhalation and exhalation. Breathing is one part of the physiological respiration required to sustain life: aerobic organisms require oxygen at the cellular level to release energy by metabolizing energy-rich molecules such as fatty acids and glucose. This is often referred to as cellular respiration, and breathing is only one of the processes that delivers oxygen to where it is needed in the body and removes excess carbon dioxide. The next process in this chain of events is the transport of these gases throughout the body by the circulatory system. Breathing fulfills another vital function, that of regulating the pH of the fluids of the body; it is, in fact, this homeostatic function which determines the breathing rate. The medical term for normal relaxed breathing is eupnea. At the end of each exhalation the adult human lungs still contain 2.5–3.0 liters of air, the functional residual capacity (FRC); breathing replaces only about 15% of this volume of gas with moistened ambient air with each breath. This ensures that the composition of the FRC changes very little during the breathing cycle. The equilibration of the gases in the alveolar blood with those in the alveolar air occurs by passive diffusion. Breathing is also used for a number of other functions, such as speech, expression of the emotions, self-maintenance activities and, in animals that cannot sweat through the skin, panting. In mammals, breathing in at rest is due to the contraction and flattening of the diaphragm; in the process, the thoracic cavity increases in volume. 
This increased thoracic volume results in a fall in pressure in the thorax. During exhalation, at rest, the diaphragm relaxes, returning the chest and abdomen to a position determined by their anatomical elasticity. This is the resting mid-position of the thorax, when the lungs contain the functional residual capacity of air. Resting exhalation lasts about twice as long as inhalation because the diaphragm relaxes more gently than it contracts during inhalation; this prevents undue narrowing of the airways, from which the air escapes more easily than from the alveoli. During heavy breathing, as, for instance, during exercise, muscles raise the rib cage; this increases the volume of the rib cage, adding to the volume increase caused by the descending diaphragm. The end-exhalatory lung volume is now well below the resting mid-position; however, in a normal mammal, the lungs cannot be emptied completely. In an adult human there is still at least 1 liter of residual air left in the lungs after maximum exhalation
14.
Fly ash
–
Fly ash, also known as pulverised fuel ash in the United Kingdom, is one of the coal combustion products, composed of the fine particles that are driven out of the boiler with the flue gases. Ash that falls to the bottom of the boiler is called bottom ash. In modern coal-fired power plants, fly ash is generally captured by electrostatic precipitators or other particle filtration equipment before the flue gases reach the chimneys, together with bottom ash removed from the bottom of the boiler. In the past, fly ash was generally released into the atmosphere, but air pollution control standards now require that it be captured prior to release by fitting pollution control equipment. In the US, fly ash is stored at coal power plants or placed in landfills. About 43% is recycled, often used as a pozzolan to produce hydraulic cement or hydraulic plaster; pozzolans ensure the setting of concrete and plaster and provide concrete with more protection from wet conditions and chemical attack. Coal Combustion Residuals are listed in Subtitle D; in that case the ash produced is often classified as hazardous waste. Fly ash material solidifies while suspended in the exhaust gases and is collected by electrostatic precipitators or filter bags. Since the particles solidify rapidly while suspended in the exhaust gases, fly ash particles are generally spherical in shape. The major consequence of the rapid cooling is that few minerals have time to crystallize; nevertheless, some refractory phases in the coal do not melt. In consequence, fly ash is a heterogeneous material. SiO2, Al2O3, Fe2O3 and occasionally CaO are the main chemical components present in fly ashes. The mineralogy of fly ashes is very diverse; the main phases encountered are a glass phase, together with quartz, mullite and the iron oxides hematite, magnetite and/or maghemite. Other phases often identified are cristobalite, anhydrite, free lime, periclase, calcite, sylvite, halite, portlandite, rutile and anatase. 
The Ca-bearing minerals anorthite, gehlenite, akermanite and various calcium silicates are also found. The mercury content can reach 1 ppm, but is generally in the range 0.01–1 ppm for bituminous coal. The concentrations of trace elements vary as well according to the kind of coal combusted to form it; in the case of coal this is true with the notable exception of boron. Two classes of fly ash are defined by ASTM C618: Class F fly ash and Class C fly ash. The chief difference between these classes is the amount of calcium, silica, alumina, and iron content in the ash. The chemical properties of the fly ash are largely influenced by the chemical content of the coal burned
15.
Frederick G. Donnan
–
Frederick George Donnan CBE FRS FRSE was an Irish physical chemist known for his work on membrane equilibria, commemorated in the Donnan equilibrium describing ionic transport in cells. He spent most of his career at University College London. Donnan was born in Colombo, Ceylon, the son of William Donnan, a Belfast merchant, and his wife, Jane Ross Turnley Liggate, and spent his early life in Ulster. He was blind in one eye as the result of a childhood accident. Donnan became a research student at University College London, joining the academic staff in 1901. In 1903 he became a Lecturer in Organic Chemistry at the Royal College of Science, Dublin; in 1913 he returned to University College London, where he remained until his retirement, serving as Head of Department from 1928 to 1937. He was unmarried and had no children. During the First World War, Donnan was a consultant to the Ministry of Munitions, working with K. B. Quinan on plants for the fixation of nitrogen, for compounds essential for the manufacture of munitions. It was for this work that Donnan received the CBE in 1920. It was also during this period that he coined the word aerosol. Donnan's 1911 paper on membrane equilibrium was important for leather and gelatin technology, but even more so for understanding the transport of materials between living cells and their surroundings. It was on this so-called Donnan equilibrium that he frequently was asked to lecture across Europe and America, and the Donnan equilibrium remains an important concept for understanding ion transport in cells. Just before World War II, Donnan was active in helping European refugees wanting to flee from the Nazis; among those he assisted were Hermann Arthur Jahn and Edward Teller, who wrote their paper on the Jahn-Teller effect while in London. He was a founder member of the Faraday Society and its president from 1924 to 1926. He died in Canterbury, England, on 16 December 1956. Works by or about Frederick G. 
Donnan at Internet Archive
16.
World War I
–
World War I, also known as the First World War, the Great War, or the War to End All Wars, was a global war originating in Europe that lasted from 28 July 1914 to 11 November 1918. More than 70 million military personnel, including 60 million Europeans, were mobilised in one of the largest wars in history. It was one of the deadliest conflicts in history, and paved the way for major political changes, including revolutions in many of the nations involved. The war drew in all the world's great powers, assembled in two opposing alliances: the Allies versus the Central Powers of Germany and Austria-Hungary. These alliances were reorganised and expanded as more nations entered the war, such as Italy and Japan. The trigger for the war was the assassination of Archduke Franz Ferdinand of Austria, heir to the throne of Austria-Hungary, by Yugoslav nationalist Gavrilo Princip in Sarajevo on 28 June 1914. This set off a diplomatic crisis when Austria-Hungary delivered an ultimatum to the Kingdom of Serbia. Within weeks, the major powers were at war and the conflict soon spread around the world. On 25 July Russia began mobilisation, and on 28 July the Austro-Hungarians declared war on Serbia. Germany presented an ultimatum to Russia to demobilise, and when this was refused, declared war on Russia on 1 August. Germany then invaded neutral Belgium and Luxembourg before moving towards France. After the German march on Paris was halted, what became known as the Western Front settled into a battle of attrition, with a trench line that changed little until 1917. On the Eastern Front, the Russian army was successful against the Austro-Hungarians. In November 1914, the Ottoman Empire joined the Central Powers, opening fronts in the Caucasus, Mesopotamia and the Sinai. 
In 1915, Italy joined the Allies and Bulgaria joined the Central Powers; Romania joined the Allies in 1916. After a stunning German offensive along the Western Front in the spring of 1918, the Allies rallied and drove back the Germans in a series of successful offensives. By the end of the war or soon after, the German Empire, Russian Empire, Austro-Hungarian Empire and Ottoman Empire had ceased to exist. National borders were redrawn, with several independent nations restored or created, and Germany's colonies were parceled out among the victors. During the Paris Peace Conference of 1919, the Big Four imposed their terms in a series of treaties, and the League of Nations was formed with the aim of preventing any repetition of such a conflict. This effort failed, and economic depression, renewed nationalism, weakened successor states, and feelings of humiliation eventually contributed to World War II. At the time of its start and until the approach of World War II, it was also sometimes called the war to end war or the war to end all wars due to its then-unparalleled scale and devastation. In Canada, Maclean's magazine wrote in October 1914 that "some wars name themselves". During the interwar period, the war was most often called the World War and the Great War in English-speaking countries. The system of opposing alliances began in 1815, with the Holy Alliance between Prussia, Russia, and Austria. When Germany was united in 1871, Prussia became part of the new German nation. Soon after, in October 1873, German Chancellor Otto von Bismarck negotiated the League of the Three Emperors between the monarchs of Austria-Hungary, Russia and Germany
17.
Solution
–
In chemistry, a solution is a homogeneous mixture composed of two or more substances. In such a mixture, a solute is a substance dissolved in another substance, known as the solvent. The mixing process of a solution happens at a scale where the effects of chemical polarity are involved, resulting in interactions that are specific to solvation. The solution assumes the characteristics of the solvent when the solvent is the larger fraction of the mixture. The concentration of a solute in a solution is the mass of that solute expressed as a percentage of the mass of the whole solution. The particles of solute in a solution cannot be seen by the naked eye, and a solution does not allow beams of light to scatter. The solute cannot be separated from a solution by filtration, and the solution is composed of only one phase. Homogeneous means that the components of the mixture form a single phase; heterogeneous means that the components of the mixture are of different phases. The properties of the mixture can be uniformly distributed through the volume, but only in the absence of diffusion phenomena or after their completion. Usually, the substance present in the greatest amount is considered the solvent; one or more components present in the solution other than the solvent are called solutes. Solvents can be gases, liquids or solids, and the solution has the same physical state as the solvent. If the solvent is a gas, only gases are dissolved under a given set of conditions; an example of a gaseous solution is air. Since interactions between molecules play almost no role, dilute gases form rather trivial solutions; in part of the literature, they are not even classified as solutions, but addressed as mixtures. If the solvent is a liquid, then almost all gases, liquids and solids can be dissolved. Here are some examples. Gas in liquid: oxygen in water; carbon dioxide in water (a less simple example, because the solution is accompanied by a chemical reaction). 
Liquid in liquid: the mixing of two or more substances of the same chemistry but different concentrations to form a constant solution; alcoholic beverages are basically solutions of ethanol in water. Solid in liquid: sucrose in water; sodium chloride or any other salt in water. Solutions in water are especially common. Counterexamples are provided by liquid mixtures that are not homogeneous, such as colloids. Body fluids are examples of complex liquid solutions, containing many solutes
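The mass-percentage definition of concentration given above can be sketched as a short calculation; the solute and solvent masses below are illustrative values, not from the source.

```python
def mass_percent(mass_solute, mass_solvent):
    """Concentration as the mass of solute expressed as a percentage
    of the mass of the whole solution (solute + solvent)."""
    mass_solution = mass_solute + mass_solvent
    return 100.0 * mass_solute / mass_solution

# 25 g of sucrose dissolved in 100 g of water gives a 125 g solution:
print(mass_percent(25.0, 100.0))  # 20.0 (percent by mass)
```

Note that the denominator is the whole solution, not the solvent alone; using the solvent mass instead would give 25% here.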
18.
Dispersity
–
In physical and organic chemistry, the dispersity is a measure of the heterogeneity of sizes of molecules or particles in a mixture. A collection of objects is called uniform if the objects have the same size, shape and mass; a sample of objects that have an inconsistent size, shape and mass distribution is called non-uniform. The objects can be in any form of chemical dispersion, such as particles in a colloid, droplets in a cloud, crystals in a rock, or polymer macromolecules in a solution or a solid polymer mass. Dispersity can be calculated using the equation ĐM = Mw/Mn, where Mw is the weight-average molar mass and Mn is the number-average molar mass. It can also be calculated according to the degree of polymerization, as ĐX = Xw/Xn; in certain limiting cases where ĐM = ĐX, it is simply referred to as Đ. IUPAC has deprecated the terms monodisperse, which is considered to be self-contradictory, and polydisperse, which is considered redundant. A monodisperse, or uniform, polymer is composed of molecules of the same mass. Synthetic monodisperse polymer chains can be made by processes such as anionic polymerization, a technique known as living polymerization; it is used commercially for the production of block copolymers. Monodisperse collections can be easily created through the use of template-based synthesis, a common method of synthesis in nanotechnology. A polymer material with non-uniform chain lengths is denoted by the term polydisperse, or non-uniform; this is characteristic of man-made polymers. Natural organic matter produced by the decomposition of plants and wood debris in soils also has a pronounced polydisperse character; this is the case for humic acids and fulvic acids, natural polyelectrolyte substances having respectively higher and lower molecular weights. Another interpretation of dispersity is explained in the article Dynamic light scattering; in that sense, the dispersity values are in the range from 0 to 1. 
The dispersity index (formerly polydispersity index, or heterogeneity index) Đ of a polymer is calculated as Đ = Mw/Mn, where Mw is the weight-average molecular weight and Mn is the number-average molecular weight. Mn is more sensitive to molecules of low molecular mass. The dispersity indicates the distribution of individual molecular masses in a batch of polymers. Đ has a value equal to or greater than 1, and as the polymer chains approach uniform chain length, Đ approaches unity. For some natural polymers Đ is almost taken as unity. Typical dispersities vary based on the mechanism of polymerization and can be affected by a variety of reaction conditions. In synthetic polymers, it can vary due to the reactant ratio and how close the polymerization went to completion. For typical addition polymerization, Đ can range around 10 to 20; for typical step polymerization, most probable values of Đ are around 2, and the Carothers equation limits Đ to values of 2 and below
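The definitions Mn (number average), Mw (weight average) and Đ = Mw/Mn above can be sketched for a sample of individual chain molar masses; the chain masses below are illustrative.

```python
def dispersity(masses):
    """Đ = Mw / Mn for a list of individual polymer chain molar masses.
    Mn weights each chain equally; Mw weights each chain by its mass,
    so Mw >= Mn and Đ >= 1, with equality for uniform chains."""
    n = len(masses)
    Mn = sum(masses) / n                           # number-average molar mass
    Mw = sum(m * m for m in masses) / sum(masses)  # weight-average molar mass
    return Mw / Mn

# Uniform chains give Đ = 1; a spread of chain lengths gives Đ > 1:
print(dispersity([10000.0, 10000.0, 10000.0]))      # 1.0
print(round(dispersity([5000.0, 10000.0, 15000.0]), 3))  # 1.167
```

The second sample has the same Mn (10000) as the first, but its Mw is pulled up by the heavy chains, which is exactly why Đ detects heterogeneity.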
19.
Aerosol
–
An aerosol is a colloid of fine solid particles or liquid droplets in air or another gas. Aerosols can be natural or artificial. Examples of natural aerosols are fog, forest exudates and geyser steam; examples of artificial aerosols are haze, dust and particulate air pollutants. The liquid or solid particles have diameters mostly smaller than 1 μm or so; larger particles with a significant settling speed make the mixture a suspension, but the distinction is not clear-cut. In general conversation, aerosol refers to an aerosol spray that delivers a consumer product from a can or similar container. Other technological applications of aerosols include the dispersal of pesticides and the medical treatment of respiratory illnesses. Diseases can also spread by means of small droplets in the breath, also called aerosols. Aerosol science covers the generation and removal of aerosols, technological applications of aerosols, and the effects of aerosols on the environment and people. An aerosol is defined as a colloidal system of solid or liquid particles in a gas; an aerosol includes both the particles and the gas, which is usually air. Frederick G. Donnan presumably first used the term aerosol during World War I to describe an aero-solution; this term developed analogously to the term hydrosol, a colloid system with water as the dispersing medium. Primary aerosols contain particles introduced directly into the gas; secondary aerosols form through gas-to-particle conversion. Various types of aerosol, classified according to physical form and how they were generated, include dust, fume, mist, smoke and fog. There are several measures of aerosol concentration. Environmental science and health often use the mass concentration, defined as the mass of particulate matter per unit volume, with units such as μg/m3. Also commonly used is the number concentration, the number of particles per unit volume, with units such as number/m3 or number/cm3. The size of the particles has a major influence on their properties. 
A monodisperse aerosol, producible in the laboratory, contains particles of uniform size; most aerosols, however, as polydisperse colloidal systems, exhibit a range of particle sizes. Liquid droplets are almost always nearly spherical, but scientists use an equivalent diameter to characterize the properties of solid particles of various shapes. The equivalent diameter is the diameter of a spherical particle with the same value of some physical property as the irregular particle. The equivalent volume diameter is defined as the diameter of a sphere with the same volume as that of the irregular particle; also commonly used is the aerodynamic diameter. For a monodisperse aerosol, a single number, the particle diameter, suffices to describe the size of the particles. However, more complicated particle-size distributions describe the sizes of the particles in a polydisperse aerosol; this distribution defines the relative amounts of particles, sorted according to size. One approach to defining the size distribution uses a list of the sizes of every particle in a sample
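The equivalent volume diameter defined above follows directly from the sphere volume formula V = (π/6)·d³; a minimal numerical sketch (the particle volume used is illustrative):

```python
import math

def equivalent_volume_diameter(volume):
    """Diameter of a sphere with the same volume as the irregular particle.
    From V = (pi/6) * d**3, solve for d:  d = (6 V / pi) ** (1/3)."""
    return (6.0 * volume / math.pi) ** (1.0 / 3.0)

# Sanity check: a particle with the volume of a 2 µm sphere recovers d = 2 µm.
v = math.pi / 6.0 * 2.0 ** 3  # volume in µm³
print(round(equivalent_volume_diameter(v), 6))  # 2.0
```

For an actual irregular particle, the volume would come from measurement; the point of the equivalent diameter is that it collapses any shape to one comparable number.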
20.
Particle-size distribution
–
Significant energy is usually required to disintegrate soil and similar particles into the PSD, which is then called a grain size distribution. The PSD of a material can be important in understanding its physical and chemical properties; it affects the strength and load-bearing properties of rocks and soils. Particle size distribution can greatly affect the efficiency of any collection device. Settling chambers will normally only collect very large particles, those that can be separated using sieve trays. Centrifugal collectors will normally collect particles down to about 20 μm; higher-efficiency models can collect particles down to 10 μm. Fabric filters are one of the most efficient and cost-effective types of dust collectors available. Scrubbers that use liquid are commonly known as wet scrubbers; in these systems, the scrubbing liquid comes into contact with a gas stream containing dust particles, and the greater the contact of the gas and liquid streams, the higher the dust removal efficiency. Electrostatic precipitators use electrostatic forces to separate dust particles from exhaust gases; they can be efficient at the collection of very fine particles. A filter press is used for filtering liquids by the cake filtration mechanism; the PSD plays an important part in cake formation, cake resistance and cake characteristics, and the filterability of the liquid is determined largely by the size of the particles. Common symbols and parameters:
ρp: actual particle density
ρg: gas or sample matrix density
r2: least-squares coefficient of determination. The closer this value is to 1.0, the better the data fit to a hyperplane representing the relationship between the variable and a set of covariate variables; a value equal to 1.0 indicates all data fit perfectly within the hyperplane.
λ: gas mean free path
D50: mass-median diameter (MMD), the log-normal distribution mass median diameter. The MMD is considered to be the average particle diameter by mass.
σg: geometric standard deviation, determined mathematically by the equation σg = D84.13/D50 = D50/D15.87. The value of σg determines the slope of the least-squares regression curve.
α: relative standard deviation or degree of polydispersity, also determined mathematically; for values less than 0.1, the sample can be considered monodisperse. α = σg/D50
Re: particle Reynolds number. In contrast to the large numerical values noted for flow Reynolds number, particle Reynolds number for fine particles in gaseous mediums is typically less than 0.1.
The PSD is usually defined by the method by which it is determined. The most easily understood method of determination is sieve analysis, where powder is separated on sieves of different sizes
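For a log-normal distribution, the ratio σg = D84.13/D50 corresponds to one standard deviation in log space, so D50 and σg can be estimated from the logarithms of a sample of diameters. A minimal sketch, assuming the sample is log-normal (the diameters below are illustrative):

```python
import math
import statistics

def lognormal_d50_sigma_g(diameters):
    """Estimate the mass-median diameter D50 and geometric standard
    deviation sigma_g from a sample assumed log-normal.
    In log space the distribution is normal, so the geometric mean
    estimates D50 and exp(stdev of logs) estimates sigma_g, matching
    sigma_g = D84.13/D50 = D50/D15.87."""
    logs = [math.log(d) for d in diameters]
    d50 = math.exp(statistics.mean(logs))        # geometric mean = median
    sigma_g = math.exp(statistics.pstdev(logs))  # one log-sigma span
    return d50, sigma_g

# Diameters symmetric on a log scale (1, 2, 4 µm) center on D50 = 2 µm:
d50, sigma_g = lognormal_d50_sigma_g([1.0, 2.0, 4.0])
print(round(d50, 6))  # 2.0
```

A monodisperse sample would give σg = 1; larger σg means a broader distribution.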
21.
Histogram
–
A histogram is a graphical representation of the distribution of numerical data. It is an estimate of the probability distribution of a continuous variable and was first introduced by Karl Pearson. It is a kind of bar graph. To construct a histogram, the first step is to bin the range of values, that is, divide the entire range of values into a series of intervals, and then count how many values fall into each interval. The bins are usually specified as consecutive, non-overlapping intervals of a variable; the bins must be adjacent, and are often of equal size. If the bins are of equal size, a rectangle is erected over each bin with height proportional to the frequency, the number of cases in each bin. A histogram may also be normalized to display relative frequencies; it then shows the proportion of cases that fall into each of several categories, with the sum of the heights equaling 1. However, bins need not be of equal width; in that case, the vertical axis is not the frequency but the frequency density, the number of cases per unit of the variable on the horizontal axis. Examples of variable bin width are displayed on Census Bureau data below. As the adjacent bins leave no gaps, the rectangles of a histogram touch each other to indicate that the original variable is continuous. Histograms give a rough sense of the density of the underlying distribution of the data. The total area of a histogram used for probability density is always normalized to 1. A histogram can be thought of as a simplistic kernel density estimation, which uses a kernel to smooth frequencies over the bins. This yields a smoother probability density function, which will in general more accurately reflect the distribution of the underlying variable; the density estimate could be plotted as an alternative to the histogram, and is usually drawn as a curve rather than a set of boxes. 
Another alternative is the average shifted histogram, which is fast to compute. The histogram is one of the seven basic tools of quality control. Histograms are sometimes confused with bar charts: a histogram is used for continuous data, where the bins represent ranges of data, while a bar chart is a plot of categorical variables. Some authors recommend that bar charts have gaps between the rectangles to clarify the distinction. The etymology of the word histogram is uncertain. Sometimes it is said to be derived from the Ancient Greek ἱστός, anything set upright; it is also said that Karl Pearson, who introduced the term in 1891, derived the name from "historical diagram". The words used to describe the patterns in a histogram are symmetric, skewed left, or skewed right; it is a good idea to plot the data using several different bin widths to learn more about it. An example is the distribution of tips given in a restaurant. As another example, the U.S. Census Bureau found that there were 124 million people who work outside of their homes
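The bin-and-count construction described above can be sketched directly; the tip amounts and bin edges below are illustrative, not Census data.

```python
def histogram(data, bin_edges):
    """Count how many values fall in each half-open bin [e_i, e_{i+1});
    the last bin is closed on the right so the maximum value is kept.
    Bins are consecutive and non-overlapping, as the definition requires."""
    counts = [0] * (len(bin_edges) - 1)
    for x in data:
        for i in range(len(counts)):
            last = i == len(counts) - 1
            if bin_edges[i] <= x < bin_edges[i + 1] or (last and x == bin_edges[-1]):
                counts[i] += 1
                break
    return counts

tips = [1.0, 1.5, 2.0, 2.5, 2.75, 3.0, 4.5]
print(histogram(tips, [0, 1, 2, 3, 4, 5]))  # [0, 2, 3, 1, 1]
```

Dividing each count by the total (7) would give the normalized, relative-frequency form whose heights sum to 1.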
22.
Limit (mathematics)
–
In mathematics, a limit is the value that a function or sequence approaches as the input or index approaches some value. Limits are essential to calculus and are used to define continuity, derivatives and integrals. The concept of a limit of a sequence is further generalized to the concept of a limit of a topological net, and is closely related to limit and direct limit in category theory. In formulas, a limit is usually written as lim x → c f(x) = L and is read as "the limit of f of x as x approaches c equals L". Here lim indicates limit, and the fact that the function f approaches the limit L as x approaches c is represented by the right arrow. Suppose f is a real-valued function and c is a real number. Intuitively speaking, the expression lim x → c f(x) = L means that f(x) can be made to be as close to L as desired by making x sufficiently close to c. Formally, for every ε > 0 there is a δ > 0 such that 0 < |x − c| < δ implies |f(x) − L| < ε. The first inequality means that the distance between x and c is greater than 0 and that x ≠ c, while the second indicates that x is within distance δ of c. Note that the definition of a limit is true even if f(c) ≠ L; indeed, the function f need not even be defined at c. For example, f(x) = (x² − 1)/(x − 1) is not defined at x = 1, but since (x² − 1)/(x − 1) = x + 1 for x ≠ 1, and x + 1 is continuous in x at 1, we can now plug in 1 for x, giving the limit 2. In addition to limits at finite values, functions can also have limits at infinity. In this case, the limit of f(x) = (2x − 1)/x as x approaches infinity is 2; in mathematical notation, lim x → ∞ (2x − 1)/x = 2. Consider the sequence 1.79, 1.799, 1.7999, …; it can be observed that the numbers are approaching 1.8, the limit of the sequence. Formally, suppose a1, a2, … is a sequence of real numbers. The number L is the limit of the sequence if for every ε > 0 there exists a number N such that |an − L| < ε for all n > N. Intuitively, this means that eventually all elements of the sequence get arbitrarily close to the limit, since the absolute value |an − L| is the distance between an and L. Not every sequence has a limit; if it does, it is called convergent, and one can show that a convergent sequence has only one limit. 
The limit of a sequence and the limit of a function are closely related. On one hand, the limit as n goes to infinity of a sequence (an) is simply the limit at infinity of a function a(n) defined on the natural numbers. On the other hand, a limit L of a function f(x) as x goes to infinity, if it exists, is the same as the limit of any sequence (an) that approaches L; note that one such sequence would be an = L + 1/n. In non-standard analysis, the limit of a sequence can be expressed as the standard part of the value aH of the natural extension of the sequence at an infinite hypernatural index n = H. Thus, lim n → ∞ an = st(aH). Here the standard part function st rounds off each finite hyperreal number to the nearest real number. This formalizes the intuition that for very large values of the index, the terms in the sequence are very close to the limit value of the sequence. Conversely, the standard part of a hyperreal a = [an] represented in the ultrapower construction by a Cauchy sequence (an) is simply the limit of that sequence
23.
Moment (mathematics)
–
In mathematics, a moment is a specific quantitative measure, used in both mechanics and statistics, of the shape of a set of points. If the points represent mass, then the zeroth moment is the total mass, and the first moment divided by the total mass is the center of mass. The mathematical concept is closely related to the concept of moment in physics. For a distribution of mass or probability on a bounded interval, the collection of all the moments uniquely determines the distribution; the same is not true on unbounded intervals. The n-th moment of a continuous function f(x) of a real variable about a value c is μn = ∫−∞∞ (x − c)ⁿ f(x) dx. It is possible to define moments for random variables in a more general fashion than moments for real values (see moments in metric spaces). The moment of a function, without further explanation, usually refers to the above expression with c = 0. For the second and higher moments, the moments about the mean are usually used rather than the moments about zero. Other moments may also be defined; for example, the n-th inverse moment about zero is E[X⁻ⁿ] and the n-th logarithmic moment about zero is E[lnⁿ X]. The n-th moment about zero of a probability density function f(x) is the expected value of Xⁿ and is called a raw moment or crude moment. The moments about its mean μ are called central moments; these describe the shape of the function. If f is a probability density function, then the value of the integral above is called the n-th moment of the probability distribution. When E[|Xⁿ|] = ∫−∞∞ |xⁿ| dF(x) = ∞, the moment is said not to exist. If the n-th moment about any point exists, so does the (n − 1)-th moment about every point. The zeroth moment of any probability density function is 1, since the area under any probability density function must be equal to one. The first raw moment is the mean, usually denoted μ ≡ μ1 ≡ E[X]. The second central moment is the variance, and its positive square root is the standard deviation σ ≡ (μ2)^(1/2). The normalised n-th central moment, or standardised moment, is the n-th central moment divided by σⁿ. 
These normalised central moments are dimensionless quantities, which represent the distribution independently of any change of scale. For an electric signal, the first moment is its DC level. The third central moment is a measure of the lopsidedness of the distribution; any symmetric distribution will have a third central moment, if defined, of zero
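The raw and central moments defined above have direct sample analogues, replacing the integral with an average over data points; the sample values below are illustrative.

```python
def raw_moment(xs, n):
    """n-th raw moment (about zero) of a sample: the mean of x**n."""
    return sum(x ** n for x in xs) / len(xs)

def central_moment(xs, n):
    """n-th central moment: the mean of (x - mean)**n, i.e. the raw
    moment taken about c = mean instead of c = 0."""
    mu = raw_moment(xs, 1)
    return sum((x - mu) ** n for x in xs) / len(xs)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(raw_moment(data, 1))      # 5.0   (first raw moment: the mean)
print(central_moment(data, 2))  # 4.0   (second central moment: the variance)
print(central_moment(data, 3))  # 5.25  (third central moment: lopsidedness)
```

The positive third central moment reflects the long right tail of this sample (the value 9.0); a perfectly symmetric sample would give exactly zero.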
24.
Function (mathematics)
–
In mathematics, a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x². The output of a function f corresponding to an input x is denoted by f(x). In this example, if the input is −3, then the output is 9, and we may write f(−3) = 9; likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. The input variables are sometimes referred to as the arguments of the function. Functions of various kinds are the central objects of investigation in most fields of modern mathematics. There are many ways to describe or represent a function. Some functions may be defined by a formula or algorithm that tells how to compute the output for a given input. Others are given by a picture, called the graph of the function. In science, functions are sometimes defined by a table that gives the outputs for selected inputs. A function could also be described implicitly, for example as the inverse to another function or as a solution of a differential equation. Sometimes the codomain is called the function's range, but more commonly the word range is used to mean, instead, specifically the set of outputs. For example, we could define a function using the rule f(x) = x² by saying that the domain and codomain are the real numbers; the image of this function is the set of non-negative real numbers. In analogy with arithmetic, it is possible to define addition, subtraction and multiplication of functions. Another important operation defined on functions is function composition, where the output from one function becomes the input to another function. Linking each shape to its color is a function from X to Y: each shape is linked to a color, there is no shape that lacks a color, and no shape has more than one color. This function will be referred to as the color-of-the-shape function. The input to a function is called the argument and the output is called the value. 
The set of all permitted inputs to a function is called the domain of the function. Thus, the domain of the color-of-the-shape function is the set of the four shapes. The concept of a function does not require that every possible output is the value of some argument. A second example of a function is the following: the domain is chosen to be the set of natural numbers, and the codomain is the set of integers. The function associates to any number n the number 4 − n. For example, to 1 it associates 3, and to 10 it associates −6. A third example of a function has the set of polygons as domain and the set of natural numbers as codomain
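The squaring function and the 4 − n example above are easy to mirror in code. The short Python sketch below (the helper names are mine, chosen for illustration, not from the text) also demonstrates function composition, where the output of one function becomes the input of another:

```python
# f(x) = x**2 relates each real input to exactly one output.
def f(x):
    return x ** 2

# Both -3 and 3 map to 9: distinct inputs may share an output,
# but no input ever has more than one output.
print(f(-3), f(3))  # 9 9

# The second example: g(n) = 4 - n on the natural numbers.
def g(n):
    return 4 - n

print(g(1), g(10))  # 3 -6

# Function composition: feed the output of `inner` into `outer`.
def compose(outer, inner):
    return lambda x: outer(inner(x))

h = compose(f, g)
print(h(1))  # f(g(1)) = f(3) = 9
```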
25.
Normal distribution
–
In probability theory, the normal distribution is a very common continuous probability distribution. Normal distributions are important in statistics and are used in the natural and social sciences to represent real-valued random variables whose distributions are not known. The normal distribution is useful because of the central limit theorem: physical quantities that are expected to be the sum of many independent processes often have distributions that are nearly normal. Moreover, many results and methods can be derived analytically in explicit form when the relevant variables are normally distributed. The normal distribution is sometimes informally called the bell curve; however, many other distributions are bell-shaped. The probability density of the normal distribution is f(x | μ, σ²) = (1/√(2πσ²)) e^(−(x − μ)²/(2σ²)), where μ is the mean or expectation of the distribution, σ is the standard deviation, and σ² is the variance. A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate. The simplest case of a normal distribution is known as the standard normal distribution. The factor 1/2 in the exponent ensures that the distribution has unit variance, and this function is symmetric around x = 0, where it attains its maximum value 1/√(2π) and has inflection points at x = +1 and x = −1. Authors may differ also on which normal distribution should be called the standard one; for general σ, the probability density must be scaled by 1/σ so that the integral is still 1. If Z is a standard normal deviate, then X = Zσ + μ will have a normal distribution with expected value μ. Conversely, if X is a normal deviate, then Z = (X − μ)/σ will have a standard normal distribution. Every normal distribution is the exponential of a quadratic function, f(x) = e^(ax² + bx + c), where a is negative. In this form, the mean value μ is −b/(2a). For the standard normal distribution, a is −1/2, b is zero, and c is −ln(2π)/2. The standard Gaussian distribution is denoted with the Greek letter ϕ. 
The alternative form of the Greek phi letter, φ, is also used quite often. The normal distribution is often denoted by N(μ, σ²); thus when a random variable X is distributed normally with mean μ and variance σ², one writes X ~ N(μ, σ²). Some authors advocate using the precision τ as the parameter defining the width of the distribution, instead of the deviation σ or the variance σ²
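Several of the stated relationships, the maximum density 1/√(2π) at x = 0 and the standardization Z = (X − μ)/σ, can be checked numerically with Python's standard-library `statistics.NormalDist`; a small sketch with arbitrary parameter values:

```python
from statistics import NormalDist
from math import sqrt, pi, exp

# Standard normal: mu = 0, sigma = 1.
std = NormalDist(mu=0, sigma=1)

# Density at x = 0 equals the stated maximum 1/sqrt(2*pi).
assert abs(std.pdf(0) - 1 / sqrt(2 * pi)) < 1e-9

# Standardizing: if X ~ N(mu, sigma^2), then Z = (X - mu)/sigma ~ N(0, 1),
# so the two CDFs must agree under the change of variable.
mu, sigma = 10.0, 2.0
x = 13.0
z = (x - mu) / sigma
general = NormalDist(mu, sigma)
assert abs(general.cdf(x) - std.cdf(z)) < 1e-9

# The density written out explicitly, matching the formula in the text.
def normal_pdf(x, mu, sigma):
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / sqrt(2 * pi * sigma ** 2)

assert abs(normal_pdf(x, mu, sigma) - general.pdf(x)) < 1e-9
```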
26.
Skewness
–
In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. The skewness value can be positive or negative, or even undefined. The qualitative interpretation of the skew is complicated and unintuitive. Skew must not be thought to refer to the direction the curve appears to be leaning; in fact, conversely, positive skew indicates that the tail on the right side is longer or fatter than the left side. In cases where one tail is long but the other tail is fat, skewness does not obey a simple rule. Further, in multimodal distributions and discrete distributions, skewness is also difficult to interpret. Importantly, the skewness does not determine the relationship of mean and median. In cases where it is necessary, data might be transformed to have a normal distribution. Consider the two distributions in the figure just below. Within each graph, the values on the right side of the distribution taper differently from the values on the left side. Negative skew: the left tail is longer, and the mass of the distribution is concentrated on the right of the figure; a left-skewed distribution usually appears as a right-leaning curve. Positive skew: the right tail is longer, and the mass of the distribution is concentrated on the left of the figure; a right-skewed distribution usually appears as a left-leaning curve. Skewness in a data series may sometimes be observed not only graphically but by simple inspection of the values. For instance, consider a sequence whose values are evenly distributed around a central value of 50. If the distribution is symmetric, then the mean is equal to the median; if, in addition, the distribution is unimodal, then the mean = median = mode. This is the case of a coin toss or the series 1, 2, 3, 4. Note, however, that the converse is not true in general, i.e. zero skewness does not imply that the mean is equal to the median. Paul T. von Hippel points out that many textbooks teach a rule of thumb stating that the mean is right of the median under right skew, and this rule fails with surprising frequency. 
It can fail in multimodal distributions, or in distributions where one tail is long but the other is heavy. Most commonly, though, the rule fails in discrete distributions where the areas to the left and right of the median are not equal. Such distributions not only contradict the textbook relationship between mean, median, and skew, they also contradict the textbook interpretation of the median. It is sometimes referred to as Pearson's moment coefficient of skewness, or simply the moment coefficient of skewness. The last equality expresses skewness in terms of the ratio of the third cumulant κ₃ to the 1.5th power of the second cumulant κ₂. This is analogous to the definition of kurtosis as the fourth cumulant normalized by the square of the second cumulant. The skewness is also sometimes denoted Skew[X]. Starting from a standard cumulant expansion around a normal distribution, one can show that skewness ≈ 6(mean − median)/standard deviation, plus higher-order terms
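The moment coefficient of skewness described above, the third central moment normalized by the 1.5th power of the second, can be computed directly; a minimal Python sketch, with sample data chosen only for illustration:

```python
def moment_skewness(data):
    """Moment coefficient of skewness: the third central moment
    divided by the 1.5th power of the second central moment."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n  # second central moment
    m3 = sum((x - mean) ** 3 for x in data) / n  # third central moment
    return m3 / m2 ** 1.5

# A symmetric sample such as 1, 2, 3, 4 has zero skewness ...
assert abs(moment_skewness([1, 2, 3, 4])) < 1e-12

# ... while a long right tail gives positive skew.
assert moment_skewness([1, 1, 2, 2, 3, 10]) > 0
```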
27.
Pollen
–
Pollen is a fine to coarse powdery substance comprising pollen grains, which are the male microgametophytes of seed plants and produce the male gametes. If pollen lands on a compatible pistil or female cone, it germinates. Individual pollen grains are small enough to require magnification to see detail. The study of pollen is called palynology and is useful in paleoecology, paleontology, and archaeology. Pollen in plants is used for transferring haploid male genetic material from the anther of one flower to the stigma of another in cross-pollination. In a case of self-pollination, this transfer takes place from the anther of a flower to the stigma of the same flower. Pollen itself is not the male gamete; each pollen grain contains vegetative cells and a generative cell. In flowering plants the vegetative tube cell produces the pollen tube. Pollen is produced in the microsporangia in the male cone of a conifer or other gymnosperm, or in the anthers of an angiosperm flower. Pollen grains come in a variety of shapes and sizes. Pollen grains of pines, firs, and spruces are winged. The smallest pollen grain, that of the forget-me-not, is around 6 µm in diameter, while wind-borne pollen grains can be as large as about 90–100 µm. In angiosperms, during flower development the anther is composed of a mass of cells that appear undifferentiated, except for a partially differentiated dermis. As the flower develops, four groups of cells form within the anther. The fertile sporogenous cells are surrounded by layers of cells that grow into the wall of the pollen sac. Some of the cells grow into nutritive cells that supply nutrition for the microspores that form by meiotic division from the sporogenous cells. In a process called microsporogenesis, four haploid microspores are produced from each diploid sporogenous cell after meiotic division. After the formation of the four microspores, which are contained by callose walls, the pollen grain walls develop; the exine, the outer wall, is what is preserved in the fossil record. 
Two basic types of microsporogenesis are recognised: simultaneous and successive. In simultaneous microsporogenesis, meiotic steps I and II are completed prior to cytokinesis, whereas in successive microsporogenesis cytokinesis follows each meiotic step. While there may be a continuum with intermediate forms, the type of microsporogenesis has systematic significance; the predominant form amongst the monocots is successive, but there are important exceptions. During microgametogenesis, the unicellular microspores undergo mitosis and develop into mature microgametophytes containing the gametes. In some flowering plants, germination of the pollen grain may begin even before it leaves the microsporangium, with the generative cell forming the two sperm cells. Except in the case of submerged aquatic plants, the mature pollen grain has a double wall
28.
Spore
–
In biology, a spore is a unit of sexual or asexual reproduction that may be adapted for dispersal and for survival, often for extended periods of time, in unfavorable conditions. Spores form part of the life cycles of many plants, algae, and fungi. Bacterial spores are not part of a sexual cycle but are resistant structures used for survival under unfavourable conditions. Spores are usually haploid and unicellular and are produced by meiosis in the sporangium of a diploid sporophyte. Under favourable conditions the spore can develop into a new organism using mitotic division, producing a multicellular gametophyte; two gametes then fuse to form a zygote, which develops into a new sporophyte. This cycle is known as alternation of generations. The term spore derives from the ancient Greek word σπορά (spora), meaning seed or sowing, related to σπόρος (sporos), sowing, and σπείρειν (speirein), to sow. Spores germinate to give rise to haploid gametophytes, while seeds germinate to give rise to diploid sporophytes; vascular plant spores are always haploid. Vascular plants are either homosporous or heterosporous; plants that are homosporous produce spores of the same size and type. Spores can be classified in several ways. In fungi and fungus-like organisms, spores are often classified by the structure in which meiosis occurs; since fungi are often classified according to their spore-producing structures, these spores are often characteristic of a particular taxon of the fungi. Sporangiospores: spores produced by a sporangium in many fungi such as zygomycetes. Zygospores: spores produced by a zygosporangium, characteristic of zygomycetes. Ascospores: spores produced by an ascus, characteristic of ascomycetes. Basidiospores: spores produced by a basidium, characteristic of basidiomycetes. Aeciospores: spores produced by an aecium in some fungi such as rusts or smuts. Urediniospores: spores produced by a uredinium in some fungi such as rusts or smuts. 
Teliospores: spores produced by a telium in some fungi such as rusts or smuts. Oospores: spores produced by an oogonium, characteristic of oomycetes. Carpospores: spores produced by a carposporophyte, characteristic of red algae. Tetraspores: spores produced by a tetrasporophyte, characteristic of red algae. Chlamydospores: thick-walled resting spores of fungi produced to survive unfavorable conditions. Parasitic fungal spores may also be classified into internal spores, which germinate within the host, and external spores. Meiospores: spores produced by meiosis; they are thus haploid, and give rise to a haploid daughter cell or a haploid individual. Examples are the cells of gametophytes of seed plants found in flowers or cones. Microspores: meiospores that give rise to a male gametophyte. Megaspores: meiospores that give rise to a female gametophyte
29.
Log-normal distribution
–
In probability theory, a log-normal distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable X is log-normally distributed, then Y = ln(X) has a normal distribution; likewise, if Y has a normal distribution, then X = exp(Y) has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values. The distribution is occasionally referred to as the Galton distribution or Galton's distribution, after Francis Galton. The log-normal distribution has also been associated with other names, such as McAlister and Gibrat. A log-normal process is the statistical realization of the multiplicative product of many independent positive random variables; this is justified by considering the central limit theorem in the log domain. The log-normal distribution is the maximum entropy probability distribution for a random variate X for which the mean and variance of ln(X) are specified. This relationship is true regardless of the base of the logarithmic or exponential function: if log_a(X) is normally distributed, then so is log_b(X). Likewise, if e^X is log-normally distributed, then so is a^X, where a is a positive number ≠ 1. On a logarithmic scale, μ and σ can be called the location parameter and the scale parameter, respectively. In contrast, the mean, standard deviation, and variance of the non-logarithmized sample values are respectively denoted m, s.d., and v in this article. The two sets of parameters can be related as μ = ln(m/√(1 + v/m²)) and σ² = ln(1 + v/m²). A random positive variable x is log-normally distributed if the logarithm of x is normally distributed; the density is f(x) = (1/(xσ√(2π))) exp(−(ln x − μ)²/(2σ²)) for x > 0. A change of variables must conserve differential probability. All moments of the log-normal distribution exist, and E[Xⁿ] = e^(nμ + n²σ²/2). This can be derived by letting z = (ln(x) − (μ + nσ²))/σ within the integral. However, the expected value E[e^(tX)] is not defined for any positive value of the argument t, as the defining integral diverges. 
In consequence, the moment generating function is not defined. This is related to the fact that the lognormal distribution is not uniquely determined by its moments. In consequence, the characteristic function of the log-normal distribution cannot be represented as an infinite convergent series; in particular, its formal Taylor series ∑_{n=0}^∞ ((it)ⁿ/n!) e^(nμ + n²σ²/2) diverges. However, a relatively simple approximating formula is available in closed form in terms of the Lambert W function. This approximation is derived via an asymptotic method, but it stays sharp all over the domain of convergence of φ. The geometric mean of the log-normal distribution is GM[X] = e^μ. By analogy with the arithmetic statistics, one can define a geometric variance, GVar[X] = e^(σ²)
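The defining property, that the logarithm of a log-normal sample is normal, and the moment formula E[X] = e^(μ + σ²/2) can both be checked by simulation with Python's standard library; the parameter values below are arbitrary:

```python
import random
from math import log, exp
from statistics import mean, stdev

random.seed(42)
mu, sigma = 0.5, 0.4

# Draw log-normal samples: X = exp(Y) with Y ~ N(mu, sigma^2).
xs = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

# All values are positive ...
assert min(xs) > 0

# ... and their logarithms look normal with the right parameters.
ys = [log(x) for x in xs]
assert abs(mean(ys) - mu) < 0.01
assert abs(stdev(ys) - sigma) < 0.01

# The arithmetic mean of X is exp(mu + sigma^2/2), not exp(mu);
# exp(mu) is the geometric mean of the distribution.
assert abs(mean(xs) - exp(mu + sigma ** 2 / 2)) < 0.02
```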
30.
Standard deviation
–
In statistics, the standard deviation is a measure that is used to quantify the amount of variation or dispersion of a set of data values. The standard deviation of a random variable, statistical population, or data set is the square root of its variance. It is algebraically simpler, though in practice less robust, than the average absolute deviation. A useful property of the standard deviation is that, unlike the variance, it is expressed in the same units as the data. There are also other measures of deviation from the norm, including the mean absolute deviation. In addition to expressing the variability of a population, the standard deviation is commonly used to measure confidence in statistical conclusions. For example, the margin of error in polling data is determined by calculating the expected standard deviation in the results if the same poll were to be conducted multiple times. This derivation of a standard deviation is often called the standard error of the estimate or standard error of the mean when referring to a mean. It is computed as the standard deviation of all the means that would be computed from that population if an infinite number of samples were drawn. It is very important to note that the standard deviation of a population and the standard error of a statistic derived from that population are quite different but related. The reported margin of error of a poll is computed from the standard error of the mean and is typically about twice the standard deviation (the half-width of a 95 percent confidence interval). The standard deviation is also important in finance, where the standard deviation on the rate of return on an investment is a measure of the volatility of the investment. For a finite set of numbers, the standard deviation is found by taking the square root of the average of the squared deviations of the values from their average value. For example, the marks of a class of eight students are the eight values 2, 4, 4, 4, 5, 5, 7, 9. These eight data points have a mean of 5: (2 + 4 + 4 + 4 + 5 + 5 + 7 + 9)/8 = 5. This formula is valid only if the eight values with which we began form the complete population. If the values instead were a random sample drawn from some large parent population. 
In that case the result would be called the sample standard deviation. Dividing by n − 1 rather than by n gives an unbiased estimate of the variance of the larger parent population; this is known as Bessel's correction. As a slightly more complicated real-life example, the average height for adult men in the United States is about 70 inches, with a standard deviation of around 3 inches
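The eight-mark example above can be reproduced with Python's standard library, which exposes both the population form (divide by n) and the sample form with Bessel's correction (divide by n − 1):

```python
from math import sqrt
from statistics import pstdev, stdev

marks = [2, 4, 4, 4, 5, 5, 7, 9]
mean = sum(marks) / len(marks)
assert mean == 5  # (2+4+4+4+5+5+7+9)/8 = 5

# Population standard deviation: divide squared deviations by n.
pop_var = sum((x - mean) ** 2 for x in marks) / len(marks)
assert abs(sqrt(pop_var) - 2.0) < 1e-12          # 32/8 = 4, sqrt(4) = 2
assert abs(pstdev(marks) - 2.0) < 1e-12          # stdlib agrees

# Sample standard deviation: divide by n - 1 (Bessel's correction).
sample_var = sum((x - mean) ** 2 for x in marks) / (len(marks) - 1)
assert abs(stdev(marks) - sqrt(sample_var)) < 1e-12

print(sqrt(pop_var), stdev(marks))  # ≈ 2.0 and ≈ 2.14
```

Note that the sample form is always slightly larger than the population form for the same data, reflecting the extra uncertainty from estimating the mean.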
31.
Arithmetic mean
–
In mathematics and statistics, the arithmetic mean, or simply the mean or average when the context is clear, is the sum of a collection of numbers divided by the number of numbers in the collection. The collection is often a set of results of an experiment. The term arithmetic mean is preferred in some contexts in mathematics and statistics because it helps distinguish it from other means, such as the geometric mean and the harmonic mean. In addition to mathematics and statistics, the arithmetic mean is used frequently in fields such as economics, sociology, and history. For example, per capita income is the average income of a nation's population. While the arithmetic mean is often used to report central tendencies, it is not a robust statistic: it is greatly influenced by outliers. In a more obscure usage, any sequence of values that form an arithmetic sequence between two numbers x and y can be called arithmetic means between x and y. The arithmetic mean is the most commonly used and readily understood measure of central tendency; in statistics, the term average refers to any of the measures of central tendency. The arithmetic mean is defined as being equal to the sum of the numerical values of each and every observation divided by the total number of observations. For example, let us consider the monthly salary of 10 employees of a firm: 2500, 2700, 2400, 2300, 2550, 2650, 2750, 2450, 2600, 2400. The arithmetic mean is (2500 + 2700 + 2400 + 2300 + 2550 + 2650 + 2750 + 2450 + 2600 + 2400)/10 = 2530. If the data set is a statistical population, then the mean of that population is called the population mean; if the data set is a sample, we call the statistic resulting from this calculation a sample mean. The arithmetic mean of a variable is often denoted by a bar, for example as in x̄. The arithmetic mean has several properties that make it useful, especially as a measure of central tendency. These include: if numbers x₁, …, xₙ have mean x̄, then (x₁ − x̄) + ⋯ + (xₙ − x̄) = 0. The mean is the single number for which the residuals sum to zero. 
If the arithmetic mean of a population of numbers is desired, an unbiased estimate of it is the arithmetic mean of a sample drawn from the population. The arithmetic mean may be contrasted with the median. The median is defined such that half the values are larger than, and half are smaller than, the median. If elements in the sample data increase arithmetically when placed in some order, then the median and arithmetic average are equal. For example, consider the data sample 1, 2, 3, 4: the average is 2.5, as is the median. However, this fails for a sample that cannot be arranged so as to increase arithmetically, such as 1, 2, 4, 8, 16; in this case, the arithmetic average is 6.2 and the median is 4
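The salary example and the mean-versus-median comparison above can be reproduced directly with the standard library:

```python
from statistics import mean, median

# Salaries from the text: the arithmetic mean is 2530.
salaries = [2500, 2700, 2400, 2300, 2550, 2650, 2750, 2450, 2600, 2400]
assert mean(salaries) == 2530

# For data that increase arithmetically, mean and median coincide ...
assert mean([1, 2, 3, 4]) == median([1, 2, 3, 4]) == 2.5

# ... but not for 1, 2, 4, 8, 16, where the large value 16 pulls the
# mean well above the median (the mean is not robust to outliers).
assert mean([1, 2, 4, 8, 16]) == 6.2
assert median([1, 2, 4, 8, 16]) == 4
```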
32.
Weibull distribution
–
In probability theory and statistics, the Weibull distribution /ˈveɪbʊl/ is a continuous probability distribution. Its complementary cumulative distribution function is a stretched exponential function. The Weibull distribution is related to a number of other probability distributions. If the quantity X is a time-to-failure, the Weibull distribution gives a distribution for which the failure rate is proportional to a power of time. The shape parameter, k, is that power plus one. A value of k < 1 indicates that the failure rate decreases over time; this happens if there is significant infant mortality, or defective items failing early, with the failure rate decreasing over time as the defective items are weeded out of the population. A value of k = 1 indicates that the failure rate is constant over time; this might suggest random external events are causing mortality or failure, and in this case the Weibull distribution reduces to an exponential distribution. A value of k > 1 indicates that the failure rate increases with time. This happens if there is an aging process, or parts that are more likely to fail as time goes on. In the context of the diffusion of innovations, this means positive word of mouth; the function is first convex, then concave, with an inflexion point at (e^(1/k) − 1)/e^(1/k), k > 1. In the field of materials science, the shape parameter k of a distribution of strengths is known as the Weibull modulus. In the context of diffusion of innovations, the Weibull distribution is a pure imitation/rejection model. In medical statistics a different parameterization is used: the shape parameter k is the same as above, and the scale parameter is b = λ^(−k); for x ≥ 0 the hazard function is h(x) = b k x^(k−1). A third parameterization is also sometimes used, in which the shape parameter k is the same as above. The form of the density function of the Weibull distribution changes drastically with the value of k. For 0 < k < 1, the density function tends to ∞ as x approaches zero from above and is strictly decreasing. For k = 1, the density function tends to 1/λ as x approaches zero from above and is strictly decreasing. 
For k > 1, the density function tends to zero as x approaches zero from above and increases until its mode. For k = 2 the density has a finite positive slope at x = 0. As k goes to infinity, the Weibull distribution converges to a Dirac delta distribution centered at x = λ. Moreover, the skewness and coefficient of variation depend only on the shape parameter. The cumulative distribution function for the Weibull distribution is F(x; k, λ) = 1 − e^(−(x/λ)^k) for x ≥ 0. The quantile function for the Weibull distribution is Q(p; k, λ) = λ(−ln(1 − p))^(1/k) for 0 ≤ p < 1. The failure rate h is given by h(x; k, λ) = (k/λ)(x/λ)^(k−1). The moment generating function of the logarithm of a Weibull distributed random variable is given by E[e^(t ln X)] = λ^t Γ(1 + t/k), where Γ is the gamma function
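The cumulative distribution and quantile functions quoted above invert each other, which gives a simple sanity check; the sketch below also compares inverse-transform sampling against Python's stdlib `random.weibullvariate` (note its argument order: scale first, then shape). The parameter values are arbitrary:

```python
import random
from math import log, exp

random.seed(7)
k, lam = 2.0, 1.5   # shape k and scale lambda, chosen for illustration

# CDF from the text: F(x) = 1 - exp(-(x/lam)**k) for x >= 0.
def weibull_cdf(x):
    return 1 - exp(-((x / lam) ** k))

# Quantile function: Q(p) = lam * (-ln(1 - p))**(1/k).
def weibull_quantile(p):
    return lam * (-log(1 - p)) ** (1 / k)

# The quantile function inverts the CDF.
assert abs(weibull_cdf(weibull_quantile(0.3)) - 0.3) < 1e-12

# Samples from the stdlib generator should match the analytic CDF:
xs = [random.weibullvariate(lam, k) for _ in range(100_000)]
frac_below = sum(x <= 1.0 for x in xs) / len(xs)
assert abs(frac_below - weibull_cdf(1.0)) < 0.01
```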
33.
Power law
–
For instance, considering the area of a square in terms of the length of its side, if the length is doubled, the area is multiplied by a factor of four. Few empirical distributions fit a power law for all their values. Acoustic attenuation follows frequency power laws within wide frequency bands for many complex media. Allometric scaling laws for relationships between biological variables are among the best known power-law functions in nature. One attribute of power laws is their scale invariance. Given a relation f(x) = a x^(−k), scaling the argument x by a constant factor c causes only a proportionate scaling of the function itself; that is, f(cx) = a(cx)^(−k) = c^(−k) f(x) ∝ f(x). That is, scaling by a constant c simply multiplies the original power-law relation by the constant c^(−k). Thus, it follows that all power laws with a particular scaling exponent are equivalent up to constant factors, since each is simply a scaled version of the others. This behavior is what produces the linear relationship when logarithms are taken of both f(x) and x, and the straight line on the log-log plot is often called the signature of a power law. With real data, such straightness is a necessary, but not sufficient, condition. In fact, there are many ways to generate finite amounts of data that mimic this signature behavior, but, in their asymptotic limit, are not true power laws. Thus, accurately fitting and validating power-law models is an active area of research in statistics. This can be seen in the following thought experiment: imagine a room with your friends and estimate the average monthly income in the room. Now imagine the world's richest person entering the room, with an income of about 1 billion US$. What happens to the average income in the room? Income is distributed according to a power law known as the Pareto distribution. On the one hand, this makes it incorrect to apply traditional statistics that are based on variance; on the other hand, this also allows for cost-efficient interventions. 
For example, given that car exhaust is distributed according to a power law among cars, it would be sufficient to eliminate those very few cars from the road to reduce total exhaust substantially. For instance, the behavior of water and CO₂ at their boiling points falls in the same universality class because they have identical critical exponents. In fact, almost all material phase transitions are described by a small set of universality classes. Similar observations have been made, though not as comprehensively, for various self-organized critical systems. Formally, this sharing of dynamics is referred to as universality. Scientific interest in power-law relations stems partly from the ease with which certain general classes of mechanisms generate them
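The scale invariance and the straight-line log-log "signature" described above can be demonstrated on synthetic data; a short Python sketch (the constants a and k are arbitrary):

```python
from math import log

# Synthetic power-law data f(x) = a * x**(-k) ...
a, k = 3.0, 2.5
xs = [1.0, 2.0, 5.0, 10.0, 50.0, 100.0]
fs = [a * x ** (-k) for x in xs]

# ... is exactly linear on log-log axes: log f = log a - k * log x.
# A least-squares fit of (log x, log f) recovers the slope -k.
lx = [log(x) for x in xs]
lf = [log(f) for f in fs]
n = len(xs)
mx, mf = sum(lx) / n, sum(lf) / n
slope = (sum((u - mx) * (v - mf) for u, v in zip(lx, lf))
         / sum((u - mx) ** 2 for u in lx))
assert abs(slope - (-k)) < 1e-9

# Scale invariance: f(c*x) = c**(-k) * f(x).
c, x = 4.0, 7.0
assert abs(a * (c * x) ** (-k) - c ** (-k) * (a * x ** (-k))) < 1e-12
```

As the text cautions, real data that look straight on a log-log plot are not necessarily power-law distributed; this sketch only illustrates the forward direction.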
34.
Exponential distribution
–
It is a particular case of the gamma distribution. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless. In addition to being used for the analysis of Poisson processes, it is found in various other contexts. The probability density function of an exponential distribution is f(x; λ) = λe^(−λx) for x ≥ 0, and 0 for x < 0. Alternatively, this can be defined using the right-continuous Heaviside step function H, where H(0) = 1: f(x; λ) = λe^(−λx)H(x). Here λ > 0 is the rate parameter of the distribution, and the distribution is supported on the interval [0, ∞). If a random variable X has this distribution, we write X ~ Exp(λ). The exponential distribution exhibits infinite divisibility. The cumulative distribution function is given by F(x; λ) = 1 − e^(−λx) for x ≥ 0, and 0 for x < 0. The distribution can alternatively be parameterized by β = 1/λ, where β > 0 is the mean, standard deviation, and scale parameter of the distribution; that is to say, the expected duration of survival of the system is β units of time. The parametrization involving the rate parameter arises in the context of events arriving at a rate λ. The alternative specification is sometimes more convenient than the one given above, and some authors will use it as a standard definition. This alternative specification is not used here; unfortunately this gives rise to a notational ambiguity. As an example of this switch, one reference uses λ for β. The mean or expected value of an exponentially distributed random variable X with rate parameter λ is given by E[X] = 1/λ = β; see above. In light of the examples given above, this makes sense: if you receive phone calls at an average rate of 2 per hour, you can expect to wait half an hour for every call. The variance of X is given by Var[X] = 1/λ² = β². The moments of X, for n = 1, 2, …, are given by E[Xⁿ] = n!/λⁿ. The median of X is given by m = ln(2)/λ < E[X], where ln refers to the natural logarithm. Thus the absolute difference between the mean and median is |E[X] − m| = (1 − ln 2)/λ < 1/λ = standard deviation. An exponentially distributed random variable T obeys the relation Pr(T > s + t | T > s) = Pr(T > t) for all s, t ≥ 0. 
The exponential distribution and the geometric distribution are the only memoryless probability distributions. The exponential distribution is consequently also the only continuous probability distribution that has a constant failure rate. The quantile function for Exp(λ) is F⁻¹(p; λ) = −ln(1 − p)/λ for 0 ≤ p < 1. The quartiles are therefore: first quartile ln(4/3)/λ, median ln(2)/λ, third quartile ln(4)/λ; and as a consequence the interquartile range is ln(3)/λ
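Several of the stated facts, the mean 1/λ, the median ln(2)/λ sitting below the mean, and memorylessness, can be checked by simulation with the standard library; the rate λ = 2 echoes the phone-call example:

```python
import random
from math import log
from statistics import mean, median

random.seed(1)
lam = 2.0  # rate: e.g. 2 calls per hour

xs = [random.expovariate(lam) for _ in range(200_000)]

# Mean 1/lam: at 2 calls per hour, the expected wait is half an hour.
assert abs(mean(xs) - 1 / lam) < 0.01

# Median ln(2)/lam, strictly below the mean.
assert abs(median(xs) - log(2) / lam) < 0.01
assert median(xs) < mean(xs)

# Memorylessness: P(X > s + t | X > s) == P(X > t).
s, t = 0.25, 0.5
p_cond = sum(x > s + t for x in xs) / sum(x > s for x in xs)
p_uncond = sum(x > t for x in xs) / len(xs)
assert abs(p_cond - p_uncond) < 0.01
```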
35.
Reynolds number
–
The Reynolds number is an important dimensionless quantity in fluid mechanics used to help predict flow patterns in different fluid flow situations. It has wide applications, ranging from liquid flow in a pipe to the passage of air over an aircraft wing. The concept was introduced by George Gabriel Stokes in 1851, but the Reynolds number was named by Arnold Sommerfeld in 1908 after Osborne Reynolds, who popularized its use in 1883. A similar effect is created by the introduction of a stream of higher-velocity fluid; this relative movement generates fluid friction, which is a factor in developing turbulent flow. Counteracting this effect is the viscosity of the fluid, which, as it increases, progressively inhibits turbulence. The Reynolds number quantifies the relative importance of these two types of forces for given flow conditions, and is a guide to when turbulent flow will occur in a particular situation. Such scaling is not linear, and the application of Reynolds numbers to both situations allows scaling factors to be developed. The Reynolds number can be defined for several different situations where a fluid is in relative motion to a surface. These definitions generally include the fluid properties of density and viscosity, plus a velocity and a characteristic length or dimension. This dimension is a matter of convention; for example, radius and diameter are equally valid to describe spheres or circles. For aircraft or ships, the length or width can be used. For flow in a pipe, or for a sphere moving in a fluid, the internal diameter is generally used today. Other shapes such as non-circular pipes or non-spherical objects have an equivalent diameter defined. For fluids of variable density such as gases, or fluids of variable viscosity such as non-Newtonian fluids, special rules apply. The velocity may also be a matter of convention in some circumstances. In practice, matching the Reynolds number is not on its own sufficient to guarantee similitude. Fluid flow is generally chaotic, and very small changes to shape and surface roughness can result in very different flows. 
Nevertheless, Reynolds numbers are an important guide and are widely used. Osborne Reynolds famously studied the conditions in which the flow of fluid in pipes transitioned from laminar flow to turbulent flow by introducing a dyed stream into the flow. When the velocity was low, the dyed layer remained distinct through the entire length of the large tube. When the velocity was increased, the layer broke up at a given point. The point at which this happened was the transition point from laminar to turbulent flow. From these experiments came the dimensionless Reynolds number for dynamic similarity: the ratio of inertial forces to viscous forces
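The text never writes out the formula, but the standard definition is Re = ρuL/μ (density × velocity × characteristic length over dynamic viscosity, i.e. inertial over viscous forces). A small illustrative sketch under that assumption, with made-up pipe-flow numbers:

```python
# Standard definition (assumed, not stated in the text):
# Re = rho * u * L / mu, dimensionless.
def reynolds(rho, u, L, mu):
    """rho in kg/m^3, u in m/s, L in m, mu in Pa*s."""
    return rho * u * L / mu

# Illustrative case: water near 20 C flowing at 1 m/s through a
# pipe of 0.05 m internal diameter (the conventional length scale).
rho_water = 998.0   # kg/m^3
mu_water = 1.0e-3   # Pa*s
re = reynolds(rho_water, 1.0, 0.05, mu_water)
print(re)  # ~5e4

# Well above the conventional ~4000 threshold for turbulent pipe flow.
assert re > 4000
```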
36.
Stokes' law
–
Stokes' law is derived by solving the Stokes flow limit for small Reynolds numbers of the Navier–Stokes equations. In SI units, Fd is given in newtons, η in Pa·s, and r in meters. Stokes' law makes the following assumptions for the behavior of a particle in a fluid: laminar flow, spherical particles, homogeneous material, smooth surfaces, and particles that do not interfere with each other. Note that for molecules Stokes' law is used to define their Stokes radius. The CGS unit of kinematic viscosity was named the stokes after his work. Stokes' law is the basis of the falling-sphere viscometer, in which the fluid is stationary in a vertical glass tube. A sphere of known size and density is allowed to descend through the liquid. If correctly selected, it reaches terminal velocity, which can be measured by the time it takes to pass two marks on the tube; electronic sensing can be used for opaque fluids. Knowing the terminal velocity and the size and density of the sphere, the viscosity of the fluid can be calculated. A series of steel ball bearings of different diameters are normally used in the classic experiment to improve the accuracy of the calculation. The school experiment uses glycerine or golden syrup as the fluid, and several school experiments often involve varying the temperature and/or concentration of the substances used in order to demonstrate the effects this has on the viscosity. Industrial methods include many different oils and polymer liquids such as solutions. The importance of Stokes' law is illustrated by the fact that it played a critical role in the research leading to at least three Nobel Prizes. Stokes' law is important for understanding the swimming of microorganisms and sperm. Also, in air, the same theory can be used to explain why small water droplets can remain suspended in air until they grow to a critical size and start falling as rain. Similar use of the equation can be made in the settlement of fine particles in water or other fluids. Requiring the force balance Fd = Fg and solving for the velocity V gives the terminal velocity Vs. 
Note that since the buoyant force increases as R³ and the Stokes drag increases as R, the terminal velocity increases as R² and thus varies greatly with particle size, as shown below. This velocity V is given by V = (2/9)((ρp − ρf)/μ) g R², where ρp and ρf are the mass densities of the sphere and the fluid, respectively. In Stokes flow, at very low Reynolds number, the convective acceleration terms in the Navier–Stokes equations are neglected. By using some vector calculus identities, these equations can be shown to result in Laplace's equations for the pressure and for each of the components of the vorticity vector: ∇²ω = 0 and ∇²p = 0. For the case of a sphere in a uniform far-field flow, the z-axis is through the centre of the sphere and aligned with the mean flow direction, while r is the radius as measured perpendicular to the z-axis. The origin is at the sphere centre. Because the flow is axisymmetric around the z-axis, it is independent of the azimuth φ, and the azimuthal velocity component in the φ-direction is equal to zero. The volume flux through a tube bounded by a surface of some constant value ψ is equal to 2πψ and is constant. The Laplace operator, applied to the vorticity ωφ, becomes in this coordinate system with axisymmetry
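The terminal-velocity formula and its R² scaling can be checked numerically; the material constants below are illustrative assumptions (a quartz-like grain settling in water), not values from the text:

```python
# Stokes settling: drag F_d = 6*pi*mu*R*V balanced against the net
# gravity/buoyancy force (4/3)*pi*R**3*(rho_p - rho_f)*g gives
#     V_s = (2/9) * (rho_p - rho_f) * g * R**2 / mu.
def stokes_terminal_velocity(rho_p, rho_f, mu, R, g=9.81):
    """Densities in kg/m^3, mu in Pa*s, R in m; returns m/s."""
    return (2 / 9) * (rho_p - rho_f) * g * R ** 2 / mu

# Illustrative numbers: a 25-micron-radius quartz grain in water.
v = stokes_terminal_velocity(rho_p=2650.0, rho_f=1000.0,
                             mu=1.0e-3, R=25e-6)
print(v)  # a few mm/s

# The R**2 scaling claimed in the text: doubling R quadruples V_s.
v2 = stokes_terminal_velocity(2650.0, 1000.0, 1.0e-3, 50e-6)
assert abs(v2 / v - 4.0) < 1e-9
```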
37.
Viscosity
–
The viscosity of a fluid is a measure of its resistance to gradual deformation by shear stress or tensile stress. For liquids, it corresponds to the informal concept of thickness; for example, honey has a much higher viscosity than water. Viscosity is the property of a fluid which opposes the relative motion between two surfaces of the fluid that are moving at different velocities. For a given velocity pattern, the stress required is proportional to the fluid's viscosity. A fluid that has no resistance to shear stress is known as an ideal or inviscid fluid. Zero viscosity is observed only at very low temperatures in superfluids; otherwise, all fluids have positive viscosity and are said to be viscous or viscid. A fluid with a high viscosity, such as pitch, may appear to be a solid. The word viscosity is derived from the Latin viscum, meaning mistletoe. The dynamic viscosity of a fluid expresses its resistance to shearing flows, where adjacent layers move parallel to each other with different speeds. It can be defined through the idealized situation known as Couette flow, in which a fluid is trapped between two parallel plates, the bottom one fixed and the top one moving horizontally at constant speed u. The fluid has to be homogeneous in the layer. If the speed of the top plate is small enough, the fluid particles will move parallel to it, and their speed will vary linearly from zero at the bottom to u at the top. Each layer of fluid will move faster than the one just below it, and friction between them gives rise to a force resisting their relative motion. In particular, the fluid will apply on the top plate a force in the direction opposite to its motion, and an equal but opposite one on the bottom plate. An external force is therefore required to keep the top plate moving at constant speed. The magnitude F of this force is found to be proportional to the speed u and the area A of each plate, and inversely proportional to their separation y: F = μA(u/y). The proportionality factor μ in this formula is the viscosity of the fluid. The ratio u/y is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction perpendicular to the plates. Isaac Newton expressed the viscous forces by the differential equation τ = μ ∂u/∂y, where τ = F/A.
This formula assumes that the flow is moving along parallel lines; the equation can also be used where the velocity does not vary linearly with y, such as in fluid flowing through a pipe. Use of the Greek letter mu (μ) for the dynamic viscosity is common among mechanical and chemical engineers; however, the Greek letter eta (η) is preferred by chemists and physicists.
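The plate formula F = μA(u/y) and Newton's relation τ = μ ∂u/∂y can be illustrated directly. A minimal sketch with assumed values (a thin water film sheared between plates; numbers are illustrative, not from the article):

```python
def couette_force(mu, u, A, y):
    """Force needed to drag the top plate at speed u: F = mu * A * u / y."""
    return mu * A * u / y

def shear_stress(mu, u, y):
    """Newton's law of viscosity for a linear profile: tau = mu * u / y."""
    return mu * u / y

# Assumed values: water (mu ~ 1e-3 Pa*s), plate area 0.5 m^2,
# top-plate speed 0.1 m/s, gap 1 mm.
F = couette_force(mu=1e-3, u=0.1, A=0.5, y=1e-3)
tau = shear_stress(mu=1e-3, u=0.1, y=1e-3)
print(F, tau)
```

Note that tau * A reproduces F, consistent with the definition τ = F/A in the text.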
38.
Terminal velocity
–
Terminal velocity is the highest velocity attainable by an object as it falls through a fluid. It occurs when the sum of the drag force and the buoyancy is equal to the downward force of gravity acting on the object. Since the net force on the object is zero, the object has zero acceleration. In fluid dynamics, an object is moving at its terminal velocity if its speed is constant due to the restraining force exerted by the fluid through which it is moving. As the speed of an object increases, so does the drag force acting on it; at some speed, the drag or force of resistance will equal the gravitational pull on the object. At this point the object ceases to accelerate and continues falling at a constant speed called the terminal velocity; an object moving downward faster than the terminal velocity will slow down until it reaches the terminal velocity. Drag depends on the projected area, here the object's cross-section or silhouette in a horizontal plane. An object with a large projected area relative to its mass, such as a parachute, has a lower terminal velocity than one with a small projected area relative to its mass. Based on air resistance, for example, the terminal velocity of a skydiver in a belly-to-earth free-fall position is about 195 km/h. This velocity is the asymptotic limiting value of the velocity. In this example, a speed of 50% of terminal velocity is reached after only about 3 seconds, while it takes 8 seconds to reach 90% and 15 seconds to reach 99%. Higher speeds can be attained if the skydiver pulls in his or her limbs; in this case, the terminal velocity increases to about 320 km/h, which is almost the terminal velocity of the peregrine falcon diving down on its prey. In reality, an object approaches its terminal velocity asymptotically. Buoyancy effects can be taken into account by using the reduced mass mr = m − ρV in place of m in this and subsequent formulas, where ρ is the fluid density and V the displaced volume. The terminal velocity of an object also changes due to the properties of the fluid.
Air density increases with decreasing altitude, at about 1% per 80 metres, so for objects falling through the atmosphere, for every 160 metres of fall, the terminal velocity decreases 1%. After reaching the local terminal velocity, while continuing the fall, the speed decreases along with the local terminal velocity. For very slow motion of a sphere through a viscous fluid, inertia is negligible compared with viscous forces. Such flows are called creeping flows, and the condition to be satisfied for a flow to be a creeping flow is a small Reynolds number, Re ≪ 1. From Stokes' solution, the drag force acting on the sphere can be obtained as D = 3πμdV, or Cd = 24/Re, where Re is the Reynolds number. The expression for the drag force given by this equation is called Stokes' law.
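The quoted times to reach 50%, 90%, and 99% of terminal velocity can be reproduced from the standard solution for drag proportional to the square of the speed, v(t) = vt·tanh(g·t/vt). A sketch, assuming vt = 195 km/h as in the skydiver example above:

```python
import math

def time_to_fraction(f, v_t, g=9.81):
    """Time for v(t) = v_t * tanh(g*t/v_t) to reach fraction f of v_t."""
    return (v_t / g) * math.atanh(f)

v_t = 195 / 3.6  # 195 km/h converted to m/s
for f in (0.50, 0.90, 0.99):
    print(f"{100*f:.0f}% of terminal velocity after "
          f"{time_to_fraction(f, v_t):.1f} s")
```

The computed times come out to roughly 3 s, 8 s, and 15 s, matching the figures quoted in the text, and they illustrate the asymptotic approach: each additional 10% of terminal velocity takes disproportionately longer.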
39.
Buoyancy
–
In science, buoyancy or upthrust is an upward force exerted by a fluid that opposes the weight of an immersed object. In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid; thus the pressure at the bottom of a column of fluid is greater than at the top of the column. Similarly, the pressure at the bottom of an object submerged in a fluid is greater than at the top of the object, and this pressure difference results in a net upward force on the object. For this reason, an object whose density is greater than that of the fluid in which it is submerged tends to sink. If the object is either less dense than the liquid or is shaped appropriately, the force can keep the object afloat. Buoyancy can occur only in a reference frame which either has a gravitational field or is accelerating due to a force other than gravity defining a downward direction. In a situation of fluid statics, the net upward buoyancy force is equal to the magnitude of the weight of fluid displaced by the body. The center of buoyancy of an object is the centroid of the displaced volume of fluid. Archimedes' principle is named after Archimedes of Syracuse, who first discovered this law in 212 BC. More tersely: buoyancy = weight of displaced fluid. The weight of the displaced fluid is directly proportional to its volume. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy; this is also known as upthrust. Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting upon it, and suppose that when the rock is lowered into water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyancy force: 10 − 3 = 7 newtons.
Buoyancy reduces the apparent weight of objects that have sunk completely to the sea floor, and it is generally easier to lift an object up through the water than it is to pull it out of the water. The density of the object relative to the density of the fluid can easily be calculated without measuring any volumes: density of object / density of fluid = weight / (weight − apparent immersed weight). Example: if you drop wood into water, buoyancy keeps it afloat. Example: a helium balloon in a moving car. During a period of increasing speed, the air mass inside the car moves in the direction opposite to the car's acceleration, and the balloon is pulled the same way. However, because the balloon is buoyant relative to the air, it ends up being pushed out of the way and drifts forward; if the car slows down, the same balloon will begin to drift backward. For the same reason, as the car goes round a curve, the balloon will drift towards the inside of the curve. The pressure inside a fluid in equilibrium is obtained from the condition of hydrostatic balance.
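The rock example maps directly onto the density-ratio relation above. A minimal sketch using the article's 10 N and 7 N figures:

```python
def relative_density(weight, apparent_immersed_weight):
    """Density of object / density of fluid = W / (W - W_apparent).

    Both weights in newtons; the difference is the buoyant force,
    i.e. the weight of displaced fluid.
    """
    return weight / (weight - apparent_immersed_weight)

# Rock from the article: 10 N in vacuum, 7 N apparent weight in water.
ratio = relative_density(10.0, 7.0)
print(f"rock is {ratio:.2f} times as dense as water")
```

The ratio 10/3 means the rock is about 3.3 times as dense as water, obtained with no volume measurement at all.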
40.
Exponential decay
–
A quantity is subject to exponential decay if it decreases at a rate proportional to its current value. Symbolically, this process can be expressed by the differential equation dN/dt = −λN, where λ > 0 is the decay constant. The solution to this equation is N = N0 e^(−λt), where N(t) is the quantity at time t and N0 = N(0) is the initial quantity, i.e. the quantity at time t = 0. If the decaying quantity, N(t), is the number of discrete elements in a certain set, it is possible to compute the average length of time that an element remains in the set. This is called the mean lifetime, τ, and it can be shown that it relates to the decay rate, λ, as τ = 1/λ. For example, if the initial population of the assembly, N(0), is 1000, then the population at time τ, N(τ), is 368. A very similar equation will be seen below, which arises when the base of the exponential is chosen to be 2; in that case the scaling time is the half-life. A more intuitive characteristic of exponential decay for many people is the time required for the quantity to fall to one half of its initial value. This time is called the half-life, often denoted by the symbol t1/2. The half-life can be written in terms of the decay constant, or the mean lifetime, as t1/2 = (ln 2)/λ = τ ln 2. When this expression is inserted for τ in the equation above, and ln 2 is absorbed into the base, the amount of material left is 1/2 raised to the number of half-lives that have passed. Thus, after 3 half-lives there will be 1/2^3 = 1/8 of the material left. Correspondingly, the mean lifetime τ is equal to the half-life divided by the natural log of 2: τ = t1/2 / ln 2. For example, polonium-210 has a half-life of 138 days and a mean lifetime of about 199 days. The equation that describes exponential decay is dN/dt = −λN or, by rearranging, dN/N = −λ dt. This is the form of the equation that is most commonly used to describe exponential decay. Any one of decay constant, mean lifetime, or half-life is sufficient to characterise the decay. The notation λ for the decay constant is a remnant of the usual notation for an eigenvalue: here, λ is the eigenvalue of the negative of the differential operator with N(t) as the corresponding eigenfunction. The units of the decay constant are s−1.
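The half-life relations can be checked numerically. A minimal sketch using the polonium-210 figure from the text (t1/2 = 138 days):

```python
import math

T_HALF = 138.0                 # polonium-210 half-life, in days
LAMBDA = math.log(2) / T_HALF  # decay constant, per day
TAU = 1.0 / LAMBDA             # mean lifetime, tau = t_half / ln 2

def remaining_fraction(t_days):
    """Fraction of the original sample left after t days: e^(-lambda*t)."""
    return math.exp(-LAMBDA * t_days)

print(remaining_fraction(T_HALF))      # one half-life   -> 0.5
print(remaining_fraction(3 * T_HALF))  # three half-lives -> 1/8
print(TAU)                             # ~199 days
```

After one half-life exactly half remains, after three half-lives exactly one eighth, and the mean lifetime comes out to about 199 days, consistent with the relations in the text.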
41.
Diffusion
–
Diffusion is the net movement of molecules or atoms from a region of high concentration to a region of low concentration. This is also referred to as the movement of a substance down a concentration gradient; a gradient is the change in the value of a quantity with the change in another variable, usually distance. The word diffusion derives from the Latin word diffundere, which means "to spread out". A distinguishing feature of diffusion is that it results in mixing or mass transport without requiring bulk motion; thus, diffusion should not be confused with convection or advection. An example of a situation in which bulk flow and diffusion can be differentiated is the mechanism by which oxygen enters the body during external respiration. The lungs are located in the thoracic cavity, which expands as the first step in external respiration. This expansion leads to an increase in volume of the alveoli in the lungs, which creates a pressure gradient between the air outside the body and the alveoli. The air moves down the pressure gradient through the airways of the lungs and into the alveoli until the pressure of the air in the alveoli equals that of the outside air. The air arriving in the alveoli has a higher concentration of oxygen than the "stale" air in the alveoli. The increase in oxygen concentration creates a concentration gradient for oxygen between the air in the alveoli and the blood in the capillaries that surround the alveoli; oxygen then moves by diffusion, down the concentration gradient, into the blood. The other consequence of the air arriving in the alveoli is that the concentration of carbon dioxide in the alveoli decreases, and this creates a concentration gradient for carbon dioxide to diffuse from the blood into the alveoli. The pumping action of the heart then transports the blood around the body: as the left ventricle of the heart contracts, its volume decreases, which increases the pressure in the ventricle. This creates a pressure gradient between the heart and the capillaries, and blood moves through blood vessels by bulk flow.
The concept of diffusion is widely used in physics, chemistry, biology, sociology, and economics; in each case, however, the object that is undergoing diffusion is "spreading out" from a point or location at which there is a higher concentration of that object. In the phenomenological approach, diffusion is the movement of a substance from a region of high concentration to a region of low concentration without bulk motion. According to Fick's laws, the diffusion flux is proportional to the negative gradient of concentration: it goes from regions of higher concentration to regions of lower concentration. Some time later, various generalizations of Fick's laws were developed in the frame of thermodynamics and non-equilibrium thermodynamics. From the atomistic point of view, diffusion is considered as a result of the random walk of the diffusing particles. In molecular diffusion, the moving molecules are self-propelled by thermal energy. The random walk of small particles in suspension in a fluid was discovered in 1827 by Robert Brown; the theory of Brownian motion and the atomistic backgrounds of diffusion were developed by Albert Einstein.
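Fick's first law, in which the flux is proportional to the negative concentration gradient, can be sketched as follows. The numbers are illustrative assumptions (oxygen diffusing through a thin water layer), not values from the article:

```python
def ficks_first_law(D, c_high, c_low, thickness):
    """Fick's first law in 1D: J = -D * dC/dx.

    D in m^2/s, concentrations in mol/m^3, thickness in m.
    Returns the flux in mol/(m^2 s); positive means the flux runs
    from the high-concentration side to the low-concentration side.
    """
    gradient = (c_low - c_high) / thickness  # dC/dx (negative here)
    return -D * gradient

# Assumed values: D ~ 2e-9 m^2/s for oxygen in water, concentration
# dropping from 0.25 to 0.05 mol/m^3 across a 1 mm layer.
J = ficks_first_law(D=2.0e-9, c_high=0.25, c_low=0.05, thickness=1e-3)
print(f"flux: {J:.2e} mol/(m^2 s)")
```

The minus sign is doing the work described in the text: because the gradient points from low to high concentration, the flux −D·dC/dx points the other way, from high to low.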
42.
Electric charge
–
Electric charge is the physical property of matter that causes it to experience a force when placed in an electromagnetic field. There are two types of electric charge, positive and negative. Like charges repel and unlike charges attract; an absence of net charge is referred to as neutral. An object is negatively charged if it has an excess of electrons, and is otherwise positively charged or uncharged. The SI derived unit of electric charge is the coulomb; in electrical engineering, it is also common to use the ampere-hour. The symbol Q often denotes charge. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that don't require consideration of quantum effects. The electric charge is a conserved property of some subatomic particles. Electrically charged matter is influenced by, and produces, electromagnetic fields; the interaction between a moving charge and an electromagnetic field is the source of the electromagnetic force, which is one of the four fundamental forces. The elementary charge, e, is approximately 1.602×10−19 coulombs. The proton has a charge of +e, and the electron has a charge of −e. The study of charged particles, and how their interactions are mediated by photons, is called quantum electrodynamics. Charge is the property of forms of matter that exhibit electrostatic attraction or repulsion in the presence of other matter. Electric charge is a characteristic property of many subatomic particles. The charges of free-standing particles are integer multiples of the elementary charge e. Michael Faraday, in his electrolysis experiments, was the first to note the discrete nature of electric charge; Robert Millikan's oil-drop experiment demonstrated this fact directly and measured the elementary charge. By convention, the charge of an electron is −1, while that of a proton is +1. Charged particles whose charges have the same sign repel one another, and particles whose charges have different signs attract.
The charge of an antiparticle equals that of the corresponding particle, but with opposite sign. Quarks have fractional charges of either −1/3 e or +2/3 e, but free-standing quarks have never been observed. The electric charge of a macroscopic object is the sum of the electric charges of the particles that make it up. An ion is an atom that has lost one or more electrons, giving it a net positive charge, or that has gained one or more electrons, giving it a net negative charge.
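Because charge is quantized, the net charge of an ion follows from simple proton and electron counting. A minimal sketch (the helper function is my own; the value of e is the standard approximation quoted above):

```python
E_CHARGE = 1.602e-19  # elementary charge in coulombs (approximate)

def net_charge(protons, electrons):
    """Net charge of an atom or ion in coulombs: (protons - electrons) * e."""
    return (protons - electrons) * E_CHARGE

sodium_ion = net_charge(protons=11, electrons=10)    # Na+, lost one electron
chloride_ion = net_charge(protons=17, electrons=18)  # Cl-, gained one electron
neutral_atom = net_charge(protons=6, electrons=6)    # carbon atom, no net charge
print(sodium_ion, chloride_ion, neutral_atom)
```

Every possible result of this function is an integer multiple of e, which is exactly the quantization that Faraday inferred and Millikan measured.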
43.
Differential equation
–
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities and the derivatives represent their rates of change; because such relations are extremely common, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology. In pure mathematics, differential equations are studied from several different perspectives. Only the simplest differential equations are solvable by explicit formulas; however, if a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. Differential equations first came into existence with the invention of calculus by Newton and Leibniz. Jacob Bernoulli proposed the Bernoulli differential equation in 1695: this is an ordinary differential equation of the form y′ + P(x)y = Q(x)y^n, for which the following year Leibniz obtained solutions by simplifying it. Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, and Daniel Bernoulli. In 1746, d'Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation. The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem: the problem of determining a curve on which a particle will fall to a fixed point in a fixed amount of time. Lagrange solved this problem in 1755 and sent the solution to Euler; both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. Fourier published his work on heat flow in Théorie analytique de la chaleur in 1822; contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat, and this partial differential equation is now taught to every student of mathematical physics. For example, in classical mechanics, the motion of a body is described by its position and velocity as the time value varies.
Newton's laws allow one to express these variables dynamically as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation may be solved explicitly. An example of modelling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance: the ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity, and the velocity in turn depends on time. Finding the velocity as a function of time involves solving a differential equation. Differential equations can be divided into several types.
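The falling-ball model just described, dv/dt = g − kv with drag proportional to velocity, has the closed-form solution v(t) = (g/k)(1 − e^(−kt)). A sketch comparing a simple Euler integration against that solution; the drag coefficient k = 1 s^−1 is an assumed illustrative value:

```python
import math

g, k = 9.81, 1.0     # gravity (m/s^2) and assumed drag coefficient (1/s)
dt, t_end = 1e-3, 2.0

# Explicit Euler integration of dv/dt = g - k*v, starting from rest.
v = 0.0
for _ in range(int(t_end / dt)):
    v += (g - k * v) * dt

# Analytic solution for comparison.
v_exact = (g / k) * (1.0 - math.exp(-k * t_end))
print(v, v_exact)
```

With a 1 ms step the two values agree to a few parts in a thousand; both approach the terminal velocity g/k as t grows, tying this section back to the terminal-velocity article above.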
44.
Brownian motion
–
Brownian motion or pedesis is the random motion of particles suspended in a fluid resulting from their collision with the fast-moving atoms or molecules in the gas or liquid. This transport phenomenon is named after the botanist Robert Brown. The physical explanation of Brownian motion served as convincing evidence that atoms and molecules exist, and was further verified experimentally by Jean Perrin in 1908; Perrin was awarded the Nobel Prize in Physics in 1926 for his work on the structure of matter. Brownian motion is among the simplest of the stochastic processes, and its universality is closely related to the universality of the normal distribution; in both cases, it is often mathematical convenience, rather than the accuracy of the models, that motivates their use. The Roman poet Lucretius's scientific poem On the Nature of Things has a description of Brownian motion of dust particles in verses 113–140 of Book II, which he uses as a proof of the existence of atoms: observe what happens when sunbeams are admitted into a building and you will see a multitude of tiny particles mingling in a multitude of ways. Their dancing is an indication of underlying movements of matter that are hidden from our sight. It originates with the atoms, which move of themselves. Then those small compound bodies that are least removed from the impetus of the atoms are set in motion by the impact of their invisible blows and in turn cannon against slightly larger bodies. So the movement mounts up from the atoms and gradually emerges to the level of our senses, so that those bodies are in motion that we see in sunbeams, moved by blows that remain invisible. Although the mingling motion of dust particles is caused largely by air currents, the glittering motion of small dust particles is caused chiefly by true Brownian dynamics. While Jan Ingenhousz described the irregular motion of coal dust particles on the surface of alcohol in 1785, the discovery of this phenomenon is often credited to the botanist Robert Brown in 1827.
Brown was studying pollen grains of the plant Clarkia pulchella suspended in water under a microscope when he observed minute particles, ejected by the pollen grains, executing a jittery motion. By repeating the experiment with particles of inorganic matter he was able to rule out that the motion was life-related. The first person to describe the mathematics behind Brownian motion was Thorvald N. Thiele, in a paper on the method of least squares published in 1880. This was followed independently by Louis Bachelier in 1900 in his PhD thesis "The theory of speculation", in which he presented a stochastic analysis of the stock market; the Brownian motion model of the market is often cited. Albert Einstein and Marian Smoluchowski brought the solution of the problem to the attention of physicists, and their equations describing Brownian motion were subsequently verified by the experimental work of Jean Baptiste Perrin in 1908. In this way Einstein was able to determine the size of atoms and how many atoms there are in a mole of gas; in accordance with Avogadro's law, this molar volume is the same for all ideal gases: 22.414 liters at standard temperature and pressure.
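Einstein's analysis connects Brownian motion to viscosity through the Stokes–Einstein relation D = kT/(6πηr), which combines Stokes' law (from the first article above) with thermal agitation. A minimal sketch estimating the diffusion coefficient of a micron-sized bead in water; the parameter values are standard textbook numbers chosen here for illustration:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein(T, eta, r):
    """Diffusion coefficient of a sphere: D = k_B * T / (6 * pi * eta * r).

    T in kelvin, eta (dynamic viscosity) in Pa*s, r (sphere radius) in m.
    """
    return K_B * T / (6 * math.pi * eta * r)

# Assumed values: room-temperature water (eta ~ 1e-3 Pa*s),
# bead of 1 micron diameter (r = 0.5 micron).
D = stokes_einstein(T=293.0, eta=1.0e-3, r=0.5e-6)
print(f"D = {D:.2e} m^2/s")
```

The result, of order 1e-13 m^2/s, is small but measurable by tracking a bead's mean squared displacement over time, which is essentially what Perrin did to confirm the atomic picture.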