1.
Mississippi River
–
The Mississippi River is the chief river of the largest drainage system on the North American continent. Flowing entirely within the United States, it rises in northern Minnesota; together with its many tributaries, the Mississippi's watershed drains all or parts of 31 U.S. states and 2 Canadian provinces between the Rocky and Appalachian Mountains. The Mississippi ranks as the fourth-longest and fifteenth-largest river in the world by discharge. The river either borders or passes through the states of Minnesota, Wisconsin, Iowa, Illinois, Missouri, Kentucky, Tennessee, Arkansas, Mississippi, and Louisiana. Native Americans long lived along the Mississippi River and its tributaries; most were hunter-gatherers, but some, such as the Mound Builders, formed prolific agricultural societies. The arrival of Europeans in the 16th century changed this way of life as first explorers, then settlers, moved into the basin. The river served first as a barrier, forming borders for New Spain, New France, and the early United States, and then as a vital transportation artery and communications link. Formed from thick layers of silt deposits, the Mississippi embayment is one of the most fertile agricultural regions of the country. In recent years, the river has shown a shift towards the Atchafalaya River channel in the Delta. The name itself comes from Messipi, the French rendering of the Anishinaabe name for the river; see the History section below for additional information. In addition to historical traditions reflected in names, there are at least two measures of a river's identity: one being the largest branch, and the other being the longest branch. Using the largest-branch criterion, the Ohio would be the main branch of the Lower Mississippi. Using the longest-branch criterion, the Middle Mississippi-Missouri-Jefferson-Beaverhead-Red Rock-Hellroaring Creek river would be the main branch, and its length of at least 3,745 mi is exceeded only by the Nile, the Amazon, and perhaps the Yangtze among the longest rivers in the world. The source of this waterway is at Brower's Spring, 8,800 feet above sea level in southwestern Montana. The river has long served as a dividing line in American geography; this is exemplified by the Gateway Arch in St. Louis and the phrase "Trans-Mississippi" as used in the name of the Trans-Mississippi Exposition. It is common to qualify a regionally superlative landmark in relation to the river; the New Madrid Seismic Zone along the river is also noteworthy. These basic geographical aspects of the river in turn underlie its human history and the present uses of the waterway. The Upper Mississippi runs from its headwaters to its confluence with the Missouri River at St. Louis, Missouri. The source of the Upper Mississippi branch is traditionally accepted as Lake Itasca, 1,475 feet above sea level in Itasca State Park in Clearwater County, Minnesota; the lake is in turn fed by a number of smaller streams. From its origin at Lake Itasca to St. Louis, Missouri, the river's flow is moderated by 43 dams. Fourteen of these dams are located above Minneapolis in the headwaters region and serve multiple purposes, including power generation and recreation. The remaining 29 dams, beginning in downtown Minneapolis, all include locks and were constructed to improve commercial navigation of the upper river.
2.
Kaskaskia, Illinois
–
Kaskaskia is a historically important village in Randolph County, Illinois, United States. In the 2010 census the population was 14, making it the second-smallest incorporated community in the State of Illinois in terms of population, behind Valley City. As a major French colonial town of the Illinois Country, in the 18th century its population was about 7,000. During the American Revolutionary War, the town, which by then had become a center of the British Province of Quebec, was taken by the Virginia militia during the Illinois campaign. It was designated as the county seat of Illinois County, Virginia. Kaskaskia was later named the capital of the United States' Illinois Territory, created on February 3, 1809. In 1818, when Illinois became the 21st U.S. state, the town served as the state's first capital until 1819. Most of the town was destroyed in April 1881 by flooding, as the Mississippi River shifted eastward to a new channel; this resulted from deforestation of the river banks during the 19th century, due to crews taking wood for fuel to feed the steamboat and railroad traffic. The river now passes east rather than west of the town; the state boundary line, however, remained in its original location. Accordingly, if the Mississippi River is considered to be a break in continuity, Kaskaskia is an exclave of Illinois, lying west of the Mississippi. A small bridge crosses the old riverbed, now a creek that fills with water during flood season. Kaskaskia has an Illinois telephone area code and a Missouri ZIP Code; its roads are maintained by the Illinois Department of Transportation, and its few residents vote in Illinois elections. The town was evacuated during the Great Flood of 1993, which covered it with more than nine feet of water. In 2010, the History Channel program How the States Got Their Shapes featured Kaskaskia; the former Randolph County Sheriff was shown discussing different sites on Kaskaskia Island. The site of Kaskaskia near the river was first a Native American village; the historic Illini peoples lived in this area at the time of European encounter and traded with the French colonists. In 1703, French Jesuit missionaries established a mission with the goal of converting the Illini to Catholicism, and the congregation built its first stone church in 1714. The French also had a fur trading post in the village, and Canadien settlers moved in to farm and to exploit the lead mines on the Missouri side of the river. Favorably situated on a peninsula on the east side of the Mississippi River, Kaskaskia became the capital of Upper Louisiana, and the French built Fort de Chartres nearby in 1718. In the same year they imported the first enslaved Africans, shipped from Santo Domingo in the Caribbean. From the years of early French settlement, Kaskaskia was a multicultural village, consisting of a few French men and numerous Illinois and other American Indians.
3.
Great Flood of 1993
–
The Great Mississippi and Missouri Rivers Flood of 1993 occurred in the American Midwest, along the Mississippi and Missouri rivers and their tributaries, from April to October 1993. The flood was among the most costly and devastating ever to occur in the United States. The hydrographic basin affected covered around 745 miles in length and 435 miles in width, totaling about 320,000 square miles. In some categories, the 1993 flood even surpassed the 1927 flood. The weather pattern that caused it persisted throughout the following autumn. During the winter of 1992–93, the region experienced heavy snowfall, and these conditions were followed by persistent spring weather patterns that produced storms over the same locations. Soils across much of the area were saturated by June 1, with additional rainfall running off directly into streams and rivers. These wet-weather conditions contrasted sharply with the droughts and heat waves experienced in the southeastern United States. Storms, persistent and repetitive in nature during the late spring and summer, bombarded the Upper Midwest with voluminous rainfall. Portions of east-central Iowa received as much as 48 inches of rain between April 1 and August 31, 1993, and many areas across the plains had precipitation 400–750% above normal. In the St. Louis National Weather Service forecast area, encompassing eastern Missouri and southwest Illinois, 36 forecast points rose above flood stage. The 1993 flood broke record river levels set during the 1973 Mississippi and the 1951 Missouri River floods. Civil Air Patrol crews from 21 states served more than 5,000 meals to victims and volunteers. Over 1,000 flood warnings and statements, five times the normal number, were issued to notify the public. In such places as St. Louis, river levels were nearly 20 feet above flood stage, the highest ever recorded there in 228 years. The 52-foot-high St. Louis Floodwall, built to handle the volume of the 1844 flood, was able to keep the 1993 flood out with just over two feet to spare. This floodwall was built in the 1960s, to great controversy; had it been breached, the whole of downtown St. Louis would have been submerged. Emergency officials estimated that all of the 700 privately built agricultural levees along the Missouri River were overtopped or destroyed. Navigation on the Mississippi and Missouri rivers had been closed since early July, resulting in a loss of $2 million per day in commerce. One levee breach was caused deliberately by a man attempting to strand his wife on the other side of the river so he could continue partying; the breach flooded 14,000 acres of farmland and destroyed buildings. The Redwood River in Minnesota began experiencing severe flooding in May. On May 22, Sioux Falls, South Dakota, received 7.5 inches of rain in a three-hour period, and from May through July Sioux Falls received 22.55 inches of rain, the wettest three-month period in its history. As noted above, rains in South Dakota contributed to flooding downstream. In June, flooding occurred along the Black River in Wisconsin, with flooding also starting to occur along the Mississippi, Missouri, and Kansas rivers.
4.
Flood
–
A flood is an overflow of water that submerges land which is usually dry. The European Union Floods Directive defines a flood as a covering by water of land not normally covered by water. In the sense of flowing water, the word may also be applied to the inflow of the tide. Floods can also occur in rivers when the flow exceeds the capacity of the river channel. Floods often cause damage to homes and businesses if they are located in the flood plains of rivers. Some floods develop slowly, while others, such as flash floods, can develop in just a few minutes. Additionally, floods can be local, impacting a neighborhood or community, or very large, affecting entire river basins. The word flood comes from the Old English flod, a word common to Germanic languages. Deluge myths are stories of a great flood sent by a deity or deities to destroy civilization as an act of divine retribution. Floods can happen on flat or low-lying areas when water is supplied by rainfall or snowmelt more rapidly than it can infiltrate or run off. The excess accumulates in place, sometimes to hazardous depths. Surface soil can become saturated, which effectively stops infiltration, where the water table is shallow, such as on a floodplain, or during intense rain from one or a series of storms. Infiltration also is slow to negligible through frozen ground, rock, concrete, or paving. Areal flooding begins in flat areas like floodplains and in local depressions not connected to a stream channel, because the velocity of overland flow depends on the surface slope. Endorheic basins may experience flooding during periods when precipitation exceeds evaporation. Floods occur in all types of river and stream channels, from the smallest ephemeral streams in humid zones to normally dry channels in arid climates to the world's largest rivers. When overland flow occurs on tilled fields, it can result in a muddy flood in which sediments are picked up by runoff. Localized flooding may be caused or exacerbated by drainage obstructions such as landslides, ice, or debris. Slow-rising floods most commonly occur in large rivers with large catchment areas. The increase in flow may be the result of sustained rainfall, rapid snow melt, or monsoons; the cause may also be localized convective precipitation or a sudden release from an upstream impoundment created behind a dam, landslide, or glacier. In one instance, a flash flood killed eight people enjoying the water on a Sunday afternoon at a popular waterfall in a narrow canyon. Without any observed rainfall, the flow rate increased from about 50 to 1,500 cubic feet per second in just one minute. Two larger floods occurred at the same site within a week.
5.
Water
–
Water is a transparent and nearly colorless chemical substance that is the main constituent of Earth's streams, lakes, and oceans, and the fluids of most living organisms. Its chemical formula is H2O, meaning that each of its molecules contains one oxygen and two hydrogen atoms. Water strictly refers to the liquid state of that substance, which prevails at standard ambient temperature and pressure, but the term often refers also to its solid state (ice) or its gaseous state (water vapor). It also occurs in nature as snow, glaciers, ice packs and icebergs, clouds, fog, dew, and aquifers. Water covers 71% of the Earth's surface. It is vital for all known forms of life. Only 2.5% of this water is freshwater, and 98.8% of that water is in ice and groundwater. Less than 0.3% of all freshwater is in rivers, lakes, and the atmosphere, and a greater quantity of water is found in the Earth's interior. Water on Earth moves continually through the cycle of evaporation and transpiration, condensation, and precipitation. Evaporation and transpiration contribute to the precipitation over land, and large amounts of water are also chemically combined or adsorbed in hydrated minerals. Safe drinking water is essential to humans and other lifeforms even though it provides no calories or organic nutrients. There is a correlation between access to safe water and gross domestic product per capita. However, some observers have estimated that by 2025 more than half of the world's population will be facing water-based vulnerability. A report issued in November 2009 suggests that by 2030 demand for water will exceed supply in some developing regions of the world. Water plays an important role in the world economy; approximately 70% of the freshwater used by humans goes to agriculture. Fishing in salt and fresh water bodies is a major source of food for many parts of the world. Much of the long-distance trade of commodities and manufactured products is transported by boats through seas, rivers, and lakes, and large quantities of water, ice, and steam are used for cooling and heating, in industry and homes. Water is an excellent solvent for a wide variety of chemical substances, and as such it is widely used in industrial processes. Water is also central to many sports and other forms of entertainment, such as swimming, pleasure boating, boat racing, surfing, and sport fishing. Water is a liquid at the temperatures and pressures that are most adequate for life. Specifically, at an atmospheric pressure of 1 bar, water is a liquid between the temperatures of 273.15 K and 373.15 K.
6.
Floodplain
–
A floodplain is an area of land adjacent to a river or stream which floods when the water level reaches flood stage. Flood plains are made by a meander eroding sideways as it travels downstream; when a river breaks its banks and floods, it leaves behind layers of alluvium. These gradually build up to create the floor of the flood plain. Floodplains generally contain unconsolidated sediments, often extending below the bed of the stream. These are accumulations of sand, gravel, loam, silt, and/or clay, and are often important aquifers. Geologically ancient floodplains are often represented in the landscape by fluvial terraces. These are old floodplains that remain relatively high above the present floodplain, and it is probable that any section of such an alluvial plain would show deposits of a similar character. The floodplain during its formation is marked by meandering or anastomotic streams, oxbow lakes and bayous, marshes or stagnant pools, and is occasionally completely covered with water. When the drainage system has ceased to act or is diverted for any reason, the floodplain is left as a level, fertile plain; it differs from an ordinary plain, however, because it is not altogether flat: it has a gentle slope downstream, and often, for a distance, from the side towards the center. The floodplain is the natural place for a river to dissipate its energy. Meanders form over the floodplain to slow down the flow of water. In terms of flood management, the upper part of the floodplain is crucial, as this is where flood water control starts. Artificial canalisation of the river here will have an impact on wider flooding downstream. This is the basis of flood management. Floodplains can support particularly rich ecosystems, both in quantity and diversity. Tugay forests form an ecosystem associated with floodplains, especially in Central Asia. Floodplains are a category of riparian zones or systems, and a floodplain can contain 100 or even 1,000 times as many species as a river. Microscopic organisms thrive, larger species enter a rapid breeding cycle, and opportunistic feeders move in to take advantage. The production of nutrients peaks and falls away quickly; however, the surge of new growth endures for some time, and this makes floodplains particularly valuable for agriculture. River flow rates are changing along with the climate, and this change is a threat to riparian zones and other floodplain forests. These forests have over time synchronized their seedling deposits with the peaks in flow, to best take advantage of the nutrient-rich soil generated by peak flow.
7.
Environment Agency
–
The Environment Agency's stated purpose is "to protect or enhance the environment, taken as a whole" so as to promote the objective of achieving sustainable development. Protection of the environment relates to threats such as flood and pollution. The vision of the Agency is of a rich, healthy and diverse environment for present and future generations. It is organised into eight directorates that report to the chief executive. There are two policy and process directorates: one deals with Flood and Coastal Risk Management and the other with Environment and Business. These are backed up by the Evidence directorate. The fourth directorate is a single Operations delivery unit, responsible for national services and line management of all the Regional and Area staff. The remaining directorates are central shared service groups for Finance, Legal Services and Resources. In support of its aims, the Agency acts as an operating authority, a regulatory authority and a licensing authority. The agency is funded in part by the UK government Department for Environment, Food and Rural Affairs. Funding for asset management and improvement and acquisition of flood risk management assets has traditionally come from local authorities via Flood Defence Committees. This was then effectively repaid by central Government in later years as part of the Formula Spending Share; in 2005 this was simplified by making a direct transfer from the Treasury to the Environment Agency in the form of Flood Defence Grant in Aid. The Environment Agency's total funding in 2007–08 was £1,025 million; of that total, £628 million was provided in the form of flood defence grant-in-aid from government, and a further £347 million was raised through statutory charging schemes and flood defence levies. Approximately half the Agency's expenditure is on flood risk management; of the remainder, 12% goes to water resources, and 6% to other water functions including navigation and wildlife. Its chief executive is Sir James Bevan. Sir Philip Dilley resigned as chairman on 11 January 2016, with Emma Howard Boyd becoming acting chair. The Environment Agency was created by the Environment Act 1995, and it had responsibility for the whole of England and Wales, with specifically designated border arrangements with Scotland covering the catchment of the River Tweed. All of the predecessor bodies were disbanded and the local authorities relinquished their waste regulatory role. At the same time, the Agency took responsibility for issuing flood warnings to the public. On 1 April 2013, the part of the Environment Agency covering Wales was merged into Natural Resources Wales. The Environment Agency is the principal flood risk management operating authority. It has the power to manage flood risk from designated main rivers. These functions in relation to other rivers in England are undertaken by Local Authorities or internal drainage boards. The Environment Agency is also responsible for increasing public awareness of flood risk, for flood forecasting and warning, and has a general supervisory duty for flood risk management.
8.
Tide
–
Tides are the rise and fall of sea levels caused by the combined effects of the gravitational forces exerted by the Moon and the Sun and the rotation of the Earth. Some shorelines experience a semi-diurnal tide—two nearly equal high and low tides each day; other locations experience a diurnal tide—only one high and low tide each day. A mixed tide, with two uneven tides a day, or one high and one low, is also possible. Tides vary on timescales ranging from hours to years due to a number of factors. To make accurate records, tide gauges at fixed stations measure water level over time; gauges ignore variations caused by waves with periods shorter than minutes. These data are compared to the reference level usually called mean sea level. Tidal phenomena are not limited to the oceans, but can occur in other systems whenever a gravitational field varies in time. For example, the solid part of the Earth is affected by tides. Tide changes proceed via the following stages: sea level rises over several hours, covering the intertidal zone; the water rises to its highest level, reaching high tide; sea level falls over several hours, revealing the intertidal zone; the water stops falling, reaching low tide. Oscillating currents produced by tides are known as tidal streams. The moment that the tidal current ceases is called slack water or slack tide. The tide then reverses direction and is said to be turning. Slack water usually occurs near high water and low water, but there are locations where the moments of slack tide differ significantly from those of high and low water. Tides are commonly semi-diurnal or diurnal. The two high waters on a given day are typically not the same height; these are the higher high water and the lower high water. Similarly, the two low waters each day are the higher low water and the lower low water. The daily inequality is not consistent and is generally small when the Moon is over the equator. Tidal levels, from the highest to the lowest, include: Highest Astronomical Tide, the highest tide which can be predicted to occur (note that meteorological conditions may add extra height to the HAT); Mean High Water Springs, the average of the two high tides on the days of spring tides; Mean High Water Neaps, the average of the two high tides on the days of neap tides; and Mean Sea Level, the average sea level.
9.
Storm surge
–
Most casualties during tropical cyclones occur as the result of storm surges. The deadliest storm surge on record was that of the 1970 Bhola cyclone; the low-lying coast of the Bay of Bengal is particularly vulnerable to surges caused by tropical cyclones. The deadliest storm surge in the twenty-first century was caused by Cyclone Nargis; the next deadliest in this century was caused by Typhoon Haiyan, which killed more than 6,000 people in the central Philippines in 2013 and resulted in economic losses estimated at $14 billion. Hurricane Katrina in 2005 produced one of the highest storm surges recorded in the United States, devastating Bay St. Louis, Diamondhead and Pass Christian in Mississippi. A high storm surge also occurred in New York City from Hurricane Sandy in October 2012, with a high tide of 14 ft. The pressure effects of a tropical cyclone will cause the water level in the open ocean to rise in regions of low atmospheric pressure. The rising water level counteracts the low pressure such that the total pressure at some plane beneath the water surface remains constant. This effect is estimated at a 10 mm increase in sea level for every millibar drop in atmospheric pressure. Strong surface winds cause surface currents at a 45° angle to the wind direction, by an effect known as the Ekman spiral. Wind stresses cause a phenomenon referred to as wind set-up, which is the tendency for water levels to increase at the downwind shore; intuitively, this is caused by the storm simply blowing the water towards one side of the basin in the direction of its winds. Because the Ekman spiral effects spread vertically through the water, the effect is inversely proportional to depth. The pressure effect and the wind set-up on an open coast will be driven into bays in the same way as the astronomical tide. The Earth's rotation causes the Coriolis effect, which bends currents to the right in the Northern Hemisphere and to the left in the Southern Hemisphere. When this bend brings the currents into more contact with the shore it can amplify the surge. The effect of waves, while powered by the wind, is distinct from a storm's wind-powered currents. Powerful wind whips up large, strong waves in the direction of its movement. Although these surface waves are responsible for very little water transport in open water, they may be responsible for significant transport near the shore, where they break on a line more or less parallel to the beach and drive water shoreward. The rainfall effect is experienced predominantly in estuaries. Hurricanes may dump as much as 12 in of rainfall in 24 hours over large areas, with higher rainfall densities in localized areas. As a result, watersheds can quickly surge water into the rivers that drain them, and this can increase the water level near the head of tidal estuaries as storm-driven waters surging in from the ocean meet rainfall flowing downstream. This situation is well exemplified by the southeast coast of Florida.
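The inverse-barometer relationship described above, roughly 10 mm of sea-level rise per millibar of pressure drop, lends itself to a quick back-of-the-envelope estimate. The sketch below is a hypothetical illustration rather than an operational surge model; the ambient pressure of 1013 mb and the storm central pressure of 950 mb are assumptions made for the example.

```python
# Rough estimate of the inverse-barometer contribution to storm surge.
# Rule of thumb from the text: ~10 mm of sea-level rise per 1 mb pressure drop.
# Ambient and storm pressures below are illustrative assumptions, not data.

MM_PER_MB = 10.0  # approximate rise in sea level (mm) per millibar of pressure drop

def pressure_surge_mm(ambient_mb: float, storm_center_mb: float) -> float:
    """Estimate the sea-level rise (mm) due solely to low atmospheric pressure."""
    pressure_drop = ambient_mb - storm_center_mb
    return max(pressure_drop, 0.0) * MM_PER_MB

if __name__ == "__main__":
    ambient = 1013.0      # typical sea-level pressure in millibars (assumed)
    storm_center = 950.0  # hypothetical central pressure of an intense cyclone
    rise_mm = pressure_surge_mm(ambient, storm_center)
    print(f"Pressure-driven rise: {rise_mm:.0f} mm (~{rise_mm / 1000:.2f} m)")
    # Wind set-up, wave set-up and timing with the astronomical tide usually
    # dominate the total surge; this is only the pressure component.
```

For the assumed 63 mb pressure drop this gives roughly 0.6 m, which is why the wind and wave effects described in the text matter so much more for the destructive surges cited above.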
10.
Wind wave
–
In fluid dynamics, wind waves, or wind-generated waves, are surface waves that occur on the free surface of bodies of water. They result from the wind blowing over an area of fluid surface. Waves in the oceans can travel thousands of miles before reaching land. Wind waves on Earth range in size from small ripples to waves over 100 ft high. When directly generated and affected by local winds, a wind wave system is called a wind sea. After the wind ceases to blow, wind waves are called swells; more generally, a swell consists of wind-generated waves that are not significantly affected by the local wind at that time. They have been generated elsewhere or some time ago. Wind waves in the ocean are called ocean surface waves. Wind waves have a certain amount of randomness: subsequent waves differ in height and duration. The key statistics of wind waves in evolving sea states can be predicted with wind wave models. Although waves are usually considered in the water seas of Earth, the hydrocarbon seas of Titan may also have wind-driven waves. The great majority of large breakers seen at a beach result from distant winds. Several factors influence the formation of wind waves: the wind speed relative to the wave speed, the uninterrupted distance of open water over which the wind blows (the fetch), the width of the area affected, the wind duration, and the water depth. All of these work together to determine the size of wind waves. Further exposure to that wind could only cause a dissipation of energy due to the breaking of wave tops. Waves in an area typically have a range of heights. For weather reporting and for analysis of wind wave statistics, their characteristic height over a period of time is usually expressed as the significant wave height. This figure represents the average height of the highest one-third of the waves in a given time period. The significant wave height is also the value a trained observer would estimate from visual observation of a sea state. Given the variability of wave height, the largest individual waves are likely to be somewhat less than twice the reported significant wave height for a particular day or storm. Wave formation on a flat water surface by wind is started by a random distribution of normal pressure of turbulent wind flow over the water. This pressure fluctuation produces normal and tangential stresses in the surface water. It is assumed that the water is originally at rest, that there is a distribution of normal pressure on the water surface from the turbulent wind, and that correlations between air and water motions are neglected. The second mechanism involves wind shear forces on the water surface.
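Significant wave height as described above, the mean of the highest one-third of observed wave heights, is straightforward to compute from a record of individual waves. The sketch below is a minimal illustration using invented heights, not measured data.

```python
# Significant wave height H_1/3: mean of the highest one-third of wave heights.
# The sample heights below are invented for illustration.

def significant_wave_height(heights_m: list[float]) -> float:
    """Return the mean of the highest one-third of the given wave heights (metres)."""
    if not heights_m:
        raise ValueError("need at least one wave height")
    ordered = sorted(heights_m, reverse=True)
    top_third = ordered[: max(1, len(ordered) // 3)]
    return sum(top_third) / len(top_third)

if __name__ == "__main__":
    sample = [0.8, 1.2, 0.5, 2.1, 1.7, 0.9, 1.4, 2.4, 1.1]  # hypothetical heights in metres
    hs = significant_wave_height(sample)
    print(f"Significant wave height: {hs:.2f} m")
    # As the text notes, individual waves can noticeably exceed this value,
    # but rarely by more than about a factor of two.
```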
11.
Flood insurance
–
Flood insurance denotes the specific insurance coverage against property loss from flooding. To determine risk factors for specific properties, insurers will often refer to topographical maps that denote lowlands, floodplains and other areas susceptible to flooding. Nationwide, only 20% of American homes at risk for floods are covered by flood insurance. Most private insurers do not insure against the peril of flood due to the prevalence of adverse selection. In certain flood-prone areas, the federal government requires flood insurance to secure mortgage loans backed by federal agencies such as the FHA and VA. However, the program has never worked as insurance, because of adverse selection: it has never priced people out of living in very risky areas by charging an appropriate premium; instead, too few places are included in the must-insure category, and premiums are artificially low. The lack of insurance can be detrimental to many homeowners, who may discover only after the damage has been done that their standard insurance policies do not cover flooding. Very few insurers in the US provide flood insurance coverage due to the hazard of flood typically being confined to a few areas. As a result, it is considered an uninsurable risk for private insurers, due to the inability to spread the risk across a wide enough population to absorb the potentially catastrophic nature of the hazard. In response to this, the federal government created the National Flood Insurance Program in 1968. The National Association of Insurance Commissioners found that 33 percent of U.S. heads of household still hold the belief that flood damage is covered by a standard homeowners policy. FEMA states that approximately 50% of borrowers in low-risk flood zones think they are ineligible; in fact, anyone residing in a community participating in the NFIP can buy flood insurance, even renters. Flood insurance is available to residents of approximately 19,000 communities nationwide through the NFIP. Usually, British insurers require clients living in flood risk areas to flood-proof their homes or face much higher premiums and excesses. "Home flood insurance doesn't exist in this country," says the Insurance Bureau of Canada, which defines a flood as water flowing overland and seeping in through windows, doors and cracks.
12.
Danube
–
The Danube is Europe's second-longest river, after the Volga, and the longest river in the European Union region. It is located in Central and Eastern Europe. The Danube was once a long-standing frontier of the Roman Empire, and today flows through 10 countries, more than any other river in the world. Its drainage basin extends into nine more countries. The Latin name Dānuvius is one of a number of Old European river names derived from a Proto-Indo-European *dānu. Other river names from the same root include the Dunajec, Dzvina/Daugava, Don, Donets, Dnieper and Dniestr. In Rigvedic Sanskrit, dānu means fluid or drop, and in the Rigveda, Dānu once appears as the mother of Vrtra. The river was known to the ancient Greeks as the Istros, a borrowing from a Daco-Thracian name meaning strong. In Latin, the Danube was variously known as Danubius, Danuvius or Ister. The Dacian/Thracian name was Donaris for the upper Danube and Istros for the lower Danube, and the Thraco-Phrygian name was Matoas, the bringer of luck. The Latin name is masculine, as are all its Slavic names; the German Donau is feminine, as it has been re-interpreted as containing the suffix -ouwe, wetland. Classified as an international waterway, it originates in the town of Donaueschingen, in the Black Forest of Germany, at the confluence of the rivers Brigach and Breg. The Danube then flows southeast for about 2,800 km, passing through four capital cities before emptying into the Black Sea via the Danube Delta in Romania. The highest point of the drainage basin is the summit of Piz Bernina on the Italy–Switzerland border. The land drained by the Danube extends into many other countries. Many Danubian tributaries are important rivers in their own right, navigable by barges, and the river receives numerous major tributaries along its course from its source to its outlet into the Black Sea. The Danube flows through many cities, including four national capitals, more than any other river in the world. In its upper section the Danube remains a mountain river until Passau, with an average bottom gradient of 0.0012%. In its middle section, from the Devín Gate to the Iron Gate at the border of Serbia and Romania, the riverbed widens and the average bottom gradient falls to only 0.00006%. In its lower section, from the Iron Gate to Sulina, the average gradient is as little as 0.00003%. About 60 of its tributaries are also navigable. In 1994 the Danube was declared one of ten Pan-European transport corridors, routes in Central and Eastern Europe. The amount of goods transported on the Danube increased to about 100 million tons in 1987. In 1999, transport on the river was made difficult by the NATO bombing of three bridges in Serbia during the Kosovo War.
13.
Passau
–
Passau is a town in Lower Bavaria, Germany. It is also known as the Dreiflüssestadt or City of Three Rivers, because the Danube is joined at Passau by the Inn from the south and the Ilz from the north. Passau's population is about 50,000, of whom some 12,000 are students at the local University of Passau, which is renowned in Germany for its institutes of economics, law, theology and computer science. In the 2nd century BC, many of the Boii tribe were pushed north across the Alps out of northern Italy by the Romans. They established a new capital, called Boiodurum by the Romans, now within the Innstadt district of Passau. Passau was later an ancient Roman colony of ancient Noricum called Batavis, Latin for "for the Batavi". The Batavi were an ancient Germanic tribe often mentioned by classical authors, and they were regularly associated with the Suebian marauders. During the second half of the 5th century, St. Severinus established a monastery here. From the 10th century the bishops of Passau also exercised secular authority as Prince-Bishops in the immediate area around Passau. In the Treaty of Passau (1552), Archduke Ferdinand I, representing Emperor Charles V, reached an agreement with the Protestant princes that led to the Peace of Augsburg in 1555. During the Renaissance and early modern period, Passau was one of the most prolific centres of sword manufacture. Passau smiths stamped their blades with the Passau wolf, usually a rather simplified rendering of the wolf on the city's coat of arms. Superstitious warriors believed that the Passau wolf conferred invulnerability on the blade's bearer, and thus Passau swords acquired a great premium. According to the Donau-Zeitung, aside from the wolf, magical signs and inscriptions were sometimes added. As a result, the practice of placing magical charms on swords to protect the wearers came to be known for a time as Passau art. Other cities' smiths, including those of Solingen, recognized the value of the Passau wolf. By the 17th century, Solingen was producing more wolf-stamped blades than Passau was. In 1662, a devastating fire consumed most of the city; Passau was subsequently rebuilt in the Baroque style. Passau was secularised and divided between the Electorate of Bavaria and the Electorate of Salzburg in 1803, and the portion belonging to Salzburg became part of Bavaria in 1805. From 1892 until 1894, Adolf Hitler and his family lived in Passau. The city archives mention Hitler being in Passau on four different occasions in the 1920s for speeches. On November 3, 1902, Heinrich Himmler and his family arrived from Munich; they lived at Theresienstraße 394 until September 2, 1904, and Himmler maintained contact with locals until May 1945. In November 1933, the building of the Nibelungenhalle was announced, intended to hold 8,000 to 10,000 guests, with room for another 30,000 in front of it. In 1935 the hall also became quarters for a unit of the Austrian Legion.
14.
Binomial distribution
–
In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent trials, each of which yields success with probability p. The binomial distribution is the basis for the popular binomial test of statistical significance. The binomial distribution is used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution. However, for N much larger than n, the binomial distribution remains a good approximation. In general, if the random variable X follows the binomial distribution with parameters n ∈ ℕ and p ∈ [0, 1], we write X ~ B(n, p). The probability of getting exactly k successes in n trials is given by the probability mass function f(k; n, p) = Pr(X = k) = (n choose k) p^k (1 − p)^(n − k) for k = 0, 1, …, n, where (n choose k) = n! / (k! (n − k)!) is the binomial coefficient, hence the name of the distribution. The formula can be understood as follows: k successes occur with probability p^k and n − k failures occur with probability (1 − p)^(n − k); however, the k successes can occur anywhere among the n trials, and there are (n choose k) different ways of distributing them. In creating reference tables for binomial distribution probabilities, usually the table is filled in only up to n/2 values. This is because for k > n/2, the probability can be calculated via its complement as f(k; n, p) = f(n − k; n, 1 − p). The probability mass function satisfies the recurrence relation (k + 1)(1 − p) f(k + 1; n, p) = (n − k) p f(k; n, p) for every n and p. Looking at the expression f(k; n, p) as a function of k, there is a value of k that maximizes it. This value can be found by calculating the ratio f(k + 1; n, p) / f(k; n, p) = (n − k) p / ((k + 1)(1 − p)) and comparing it to 1. There is always an integer M that satisfies (n + 1) p − 1 ≤ M < (n + 1) p. The function f is monotone increasing for k < M and monotone decreasing for k > M, except when (n + 1) p is an integer; in this case, there are two values for which f is maximal, (n + 1) p and (n + 1) p − 1. M is the most probable outcome of the Bernoulli trials and is called the mode; note that the probability of it occurring can still be fairly small. The cumulative distribution function can also be represented in terms of the incomplete beta function, and some closed-form bounds for the cumulative distribution function are given below. As an example, suppose a biased coin comes up heads with probability 0.3 when tossed; the probability of achieving any given number of heads in a fixed number of tosses follows directly from the probability mass function above. The mean of the distribution is E[X] = Σ_{k=0}^{n} k f(k; n, p) = np; for example, if n = 100 and p = 1/4, the expected number of successes is 25. Since Var(X_i) = p(1 − p) for each individual Bernoulli trial, we get Var(X) = Var(X_1 + ⋯ + X_n) = Var(X_1) + ⋯ + Var(X_n) = n Var(X_1) = n p (1 − p).
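The probability mass function above is easy to evaluate directly. The sketch below is a minimal illustration, not a reference implementation; it computes Pr(X = k) for the biased-coin example with p = 0.3, and the choice of six tosses is an assumption made only for the example.

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """Pr(X = k) for X ~ B(n, p): C(n, k) * p^k * (1 - p)^(n - k)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

if __name__ == "__main__":
    n, p = 6, 0.3  # six tosses of a coin that lands heads with probability 0.3 (illustrative)
    for k in range(n + 1):
        print(f"P(X = {k}) = {binomial_pmf(k, n, p):.4f}")
    # Mean and variance follow the formulas in the text: E[X] = n*p, Var(X) = n*p*(1-p).
    print("mean =", n * p, "variance =", n * p * (1 - p))
```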
15.
Expected value
–
In probability theory, the expected value of a random variable is, intuitively, the long-run average value of repetitions of the experiment it represents. For example, the expected value in rolling a six-sided die is 3.5. Less roughly, the law of large numbers states that the arithmetic mean of the values almost surely converges to the expected value as the number of repetitions approaches infinity. The expected value is also known as the expectation, mathematical expectation, EV, average, mean value, or mean. More practically, the expected value of a discrete random variable is the probability-weighted average of all possible values. In other words, each value the random variable can assume is multiplied by its probability of occurring, and the resulting products are summed. The same principle applies to a continuous random variable, except that an integral of the variable with respect to its probability density replaces the sum. The expected value does not exist for random variables having some distributions with large tails; for such random variables, the long tails of the distribution prevent the sum or integral from converging. The expected value is a key aspect of how one characterizes a probability distribution; by contrast, the variance is a measure of the dispersion of the possible values of the random variable around the expected value. The variance itself is defined in terms of two expectations: it is the expected value of the squared deviation of the variable's value from the variable's expected value. The expected value plays important roles in a variety of contexts. In regression analysis, one desires a formula in terms of observed data that will give a good estimate of the parameter giving the effect of some explanatory variable upon a dependent variable. The formula will give different estimates using different samples of data; a formula is typically considered good in this context if it is an unbiased estimator, that is, if the expected value of the estimate can be shown to equal the true value of the desired parameter. In decision theory, and in particular in choice under uncertainty, one example of using expected value in reaching optimal decisions is the Gordon–Loeb model of information security investment. According to the model, one can conclude that the amount a firm spends to protect information should generally be only a fraction of the expected loss. Suppose random variable X can take value x1 with probability p1, value x2 with probability p2, and so on, up to value xk with probability pk. Then the expectation of this random variable X is defined as E[X] = x1 p1 + x2 p2 + ⋯ + xk pk. If all outcomes xi are equally likely, then the weighted average turns into the simple average. This is intuitive: the expected value of a random variable is the average of all values it can take, and thus the expected value is what one expects to happen on average. If the outcomes xi are not equally probable, then the simple average must be replaced with the weighted average; the intuition however remains the same: the expected value of X is what one expects to happen on average. Let X represent the outcome of a roll of a fair six-sided die; more specifically, X will be the number of pips showing on the top face of the die after the toss.
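As a concrete check of the weighted-average definition, the short sketch below computes E[X] for the fair six-sided die mentioned in the text; the general function works for any finite discrete distribution.

```python
# Expected value of a finite discrete random variable:
# E[X] = x1*p1 + x2*p2 + ... + xk*pk

def expected_value(values, probabilities) -> float:
    """Probability-weighted average of the given outcomes."""
    if abs(sum(probabilities) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    return sum(x * p for x, p in zip(values, probabilities))

if __name__ == "__main__":
    # Fair six-sided die: each face 1..6 with probability 1/6.
    faces = [1, 2, 3, 4, 5, 6]
    probs = [1 / 6] * 6
    print(expected_value(faces, probs))  # 3.5, matching the example in the text
```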
16.
Coastal flood
–
Coastal flooding occurs when normally dry, low-lying land is flooded by seawater. The extent of flooding is a function of how far inland the flood waters penetrate, which is controlled by the topography of the coastal land exposed to flooding. Seawater can reach the land in several ways. In overtopping of a barrier, the height of the waves exceeds the height of the barrier and water flows over the top of the barrier to flood the land behind it; overtopping can result in high-velocity flows that can erode significant amounts of the land surface, which can undermine defence structures. In breaching of a barrier — again the barrier may be natural or human-engineered — the barrier is broken down by waves, allowing the seawater to extend inland. Coastal flooding is largely a natural event; however, human influence on the coastal environment can exacerbate it. Extraction of water from reservoirs in the coastal zone can enhance subsidence of the land, increasing the risk of flooding. Coastal flooding can result from a variety of different causes, including storm surges created by storms such as hurricanes and tropical cyclones, rising sea levels due to climate change, and tsunamis. Storms can cause flooding through storm surges: strong onshore winds push water against the coast, raising the local water level, which is known as wind set-up. Low atmospheric pressure is associated with storm systems, and this also tends to increase the surface sea level. Finally, increased wave break height results in a higher water level in the surf zone, which is wave set-up. These three processes interact to create waves that can overtop natural and engineered coastal protection structures, pushing seawater further inland than normal. The Intergovernmental Panel on Climate Change estimates global mean sea-level rise from 1990 to 2100 to be between nine and eighty-eight centimetres. It is also predicted that with climate change there will be an increase in the intensity of storms. This suggests that coastal flooding from storm surges will become more frequent with sea-level rise. A rise in sea level alone threatens increased levels of flooding and permanent inundation of low-lying land, as sea level simply may exceed the land elevation. There is also evidence to suggest that significant tsunamis have been caused in the past by meteor impacts into the ocean. It has been said that one way to prevent significant flooding of coastal areas now and into the future is by reducing global sea-level rise. This could be minimised by reducing greenhouse gas emissions. However, even if significant emission decreases are achieved, there is already a substantial commitment to sea-level rise into the future. International climate change policies such as the Kyoto Protocol are seeking to mitigate the future effects of climate change, including sea-level rise. In addition, more immediate measures of engineered and natural defences are put in place to prevent coastal flooding; there are a variety of ways in which humans are trying to prevent the flooding of coastal environments.
17.
Drainage basin
–
A drainage basin or catchment area is any area of land where precipitation collects and drains off into a common outlet, such as a river, bay, or other body of water. Drainage basins drain into other drainage basins at lower elevations in a hierarchical pattern, with smaller sub-drainage basins combining into larger ones. Other terms used to describe drainage basins are catchment, catchment basin, drainage area, river basin and water basin. In closed drainage basins the water converges to a single point inside the basin, known as a sink, which may be a permanent lake. The drainage basin acts as a funnel, collecting all the water within the area covered by the basin. Each drainage basin is separated topographically from adjacent basins by a perimeter, the drainage divide. Drainage basins are similar but not identical to hydrologic units, which are drainage areas delineated so as to nest into a multi-level hierarchical drainage system. Hydrologic units are defined to allow multiple inlets, outlets, or sinks; in a strict sense, all drainage basins are hydrologic units but not all hydrologic units are drainage basins. Among the drainage basins of the oceans and seas of the world, about 48.7% of the world's land drains to the Atlantic Ocean. The two major mediterranean seas of the world also flow to the Atlantic: the Caribbean Sea and Gulf of Mexico basin includes much of the U.S. interior, and the Mediterranean Sea basin includes much of North Africa, east-central Africa, Southern, Central, and Eastern Europe, Turkey, and the areas of Israel and Lebanon. Just over 13% of the land in the world drains to the Pacific Ocean. The Indian Ocean's drainage basin also comprises about 13% of Earth's land; it drains the eastern coast of Africa, the coasts of the Red Sea and the Persian Gulf, the Indian subcontinent, and Burma. Antarctica comprises approximately eight percent of the Earth's land. The five largest river basins, from largest to smallest, are the basins of the Amazon, the Río de la Plata, the Congo, the Nile, and the Mississippi. The three rivers that drain the most water, from most to least, are the Amazon, the Ganga, and the Congo. Endorheic drainage basins are inland basins that do not drain to an ocean; around 18% of all land drains to endorheic lakes, seas or sinks. The largest of these consists of much of the interior of Asia, which drains into the Caspian Sea, the Aral Sea, and numerous smaller lakes. Some of these, such as the Great Basin, are not single drainage basins but collections of separate, adjacent closed basins. In endorheic bodies of standing water where evaporation is the primary means of water loss, the water is typically more saline than the oceans; an extreme example of this is the Dead Sea. Drainage basins have been historically important for determining territorial boundaries, particularly in regions where trade by water has been important.
18.
Extreme value theory
–
Extreme value theory or extreme value analysis (EVA) is a branch of statistics dealing with the extreme deviations from the median of probability distributions. It seeks to assess, from a given ordered sample of a given random variable, the probability of events that are more extreme than any previously observed. Extreme value analysis is used in many disciplines, such as structural engineering, finance, earth sciences and traffic prediction. For example, EVA might be used in the field of hydrology to estimate the probability of an unusually large flooding event; similarly, for the design of a breakwater, a coastal engineer would seek to estimate the 50-year wave and design the structure accordingly. Two approaches exist for practical extreme value analysis. The first method relies on deriving a block maxima series as a preliminary step; in many situations it is customary and convenient to extract the annual maxima, generating an annual maxima series (AMS). The second method relies on extracting, from a continuous record, the peak values reached for any period during which values exceed a certain threshold. This method is referred to as the Peak Over Threshold (POT) method. For AMS data, the analysis may partly rely on the results of the Fisher–Tippett–Gnedenko theorem; however, in practice, various procedures are applied to select between a wider range of distributions. The theorem here relates to the limiting distributions for the minimum or the maximum of a very large collection of independent random variables from the same distribution. For POT data, the analysis may involve fitting two distributions: one for the number of events in a period considered and a second for the size of the exceedances. A common assumption for the first is the Poisson distribution, with the generalized Pareto distribution being used for the exceedances; a tail fit can be based on the Pickands–Balkema–de Haan theorem. Novak reserves the term "POT method" for the case where the threshold is non-random. Applications include pipeline failures due to pitting corrosion and anomalous IT network traffic (to prevent attackers from reaching important data). The field of extreme value theory was pioneered by Leonard Tippett. Tippett was employed by the British Cotton Industry Research Association, where he worked to make cotton thread stronger; in his studies, he realized that the strength of a thread was controlled by the strength of its weakest fibres. With the help of R. A. Fisher, Tippett obtained three asymptotic limits describing the distributions of extremes. Emil Julius Gumbel codified this theory in his 1958 book Statistics of Extremes, including the Gumbel distributions that bear his name. A summary of important publications relating to extreme value theory can be found in the article List of publications in statistics. Let X1, …, Xn be a sequence of independent and identically distributed random variables with cumulative distribution function F, and let Mn = max(X1, …, Xn). In theory, the exact distribution of the maximum can be derived: Pr(Mn ≤ z) = Pr(X1 ≤ z, …, Xn ≤ z) = Pr(X1 ≤ z) ⋯ Pr(Xn ≤ z) = (F(z))^n. The associated indicator function In = I(Mn > z) is a Bernoulli process with a success probability p(z) = 1 − (F(z))^n that depends on the magnitude z of the extreme event.
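As a sketch of the block-maxima (AMS) approach described above, the example below fits a generalized extreme value (GEV) distribution to a series of annual maxima and reads off a return level. The data are synthetic, and the use of scipy.stats.genextreme is an illustrative choice under assumed conditions, not a prescription from the text.

```python
# Block-maxima sketch: fit a GEV distribution to annual maxima and
# estimate the 50-year return level. Synthetic data for illustration only.
import numpy as np
from scipy.stats import genextreme, gumbel_r

rng = np.random.default_rng(0)
# Pretend these are 60 years of annual maximum river discharges (m^3/s).
annual_maxima = gumbel_r.rvs(loc=1000.0, scale=150.0, size=60, random_state=rng)

# Fit the three-parameter GEV (shape, location, scale) to the block maxima.
shape, loc, scale = genextreme.fit(annual_maxima)

# The T-year return level is the (1 - 1/T) quantile of the fitted distribution.
T = 50
return_level = genextreme.isf(1.0 / T, shape, loc, scale)
print(f"Fitted GEV: shape={shape:.3f}, loc={loc:.1f}, scale={scale:.1f}")
print(f"Estimated {T}-year return level: {return_level:.0f} m^3/s")
```

A POT analysis would instead fit a generalized Pareto distribution to threshold exceedances, as the text notes, but the block-maxima version is the simpler illustration.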
19.
Independence (probability theory)
–
In probability theory, two events are independent, statistically independent, or stochastically independent if the occurrence of one does not affect the probability of occurrence of the other. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other. Two events A and B are independent if and only if their joint probability equals the product of their probabilities: P(A ∩ B) = P(A) P(B). Although the derived expressions P(A | B) = P(A) and P(B | A) = P(B) may seem more intuitive, they are not the preferred definition, as the conditional probabilities may be undefined if P(A) or P(B) is 0. Furthermore, the preferred definition makes clear by symmetry that when A is independent of B, B is also independent of A. A finite set of events is pairwise independent if every pair of events is independent. A finite set of events is mutually independent if every event is independent of any intersection of the other events, that is, if and only if for every n-element subset {A1, …, An} we have P(A1 ∩ ⋯ ∩ An) = P(A1) ⋯ P(An). This is called the multiplication rule for independent events. Note that it is not a single condition involving only the product of all the probabilities of all single events. For more than two events, a mutually independent set of events is pairwise independent, but the converse is not necessarily true. Two random variables X and Y are independent if and only if the elements of the π-system generated by them are independent; that is to say, for every a and b, the events {X ≤ a} and {Y ≤ b} are independent events. A set of random variables is pairwise independent if and only if every pair of random variables is independent. A set of random variables is mutually independent if and only if for any finite subset X1, …, Xn and any finite sequence of numbers a1, …, an, the events {X1 ≤ a1}, …, {Xn ≤ an} are mutually independent events. The measure-theoretically inclined may prefer to substitute events of the form {X ∈ A} for events of the form {X ≤ a} in the above definition; that definition is exactly equivalent to the one above when the values of the random variables are real numbers. It has the advantage of working also for complex-valued random variables or for random variables taking values in any measurable space. Intuitively, two random variables X and Y are conditionally independent given Z if, once Z is known, the value of Y adds no further information about X. For instance, two measurements X and Y of the same underlying quantity Z are not independent, but they are conditionally independent given Z. The formal definition of conditional independence is based on the idea of conditional distributions. If X, Y, and Z are discrete random variables and X and Y are conditionally independent given Z, then P(X = x | Y = y, Z = z) = P(X = x | Z = z) for any x, y and z with P(Y = y, Z = z) > 0. That is, the conditional distribution for X given Y and Z is the same as that given Z alone.
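A quick way to internalize the product rule P(A ∩ B) = P(A) P(B) is to enumerate a small sample space. The sketch below is purely illustrative: it uses two fair dice and checks whether the events "the first die shows an even number" and "the sum equals 7" satisfy the rule (they do, so they are independent).

```python
# Check independence of two events on the sample space of two fair dice.
from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=2))  # 36 equally likely outcomes

def prob(event) -> Fraction:
    """Probability of an event given as a predicate over outcomes."""
    hits = sum(1 for o in outcomes if event(o))
    return Fraction(hits, len(outcomes))

A = lambda o: o[0] % 2 == 0       # first die is even
B = lambda o: o[0] + o[1] == 7    # the sum is 7

p_a, p_b = prob(A), prob(B)
p_ab = prob(lambda o: A(o) and B(o))

print(p_a, p_b, p_ab)      # 1/2, 1/6, 1/12
print(p_ab == p_a * p_b)   # True -> A and B are independent
```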
20.
Statistical significance
–
In statistical hypothesis testing, statistical significance is attained whenever the observed p-value of a test statistic is less than the significance level defined for the study. The p-value is the probability of obtaining results at least as extreme as those observed, given that the null hypothesis is true. The significance level, α, is the probability of rejecting the null hypothesis, given that it is true. In any experiment or observation that involves drawing a sample from a population, a significance level is chosen before data collection, and is typically set to 5% or much lower, depending on the field of study. This technique for testing the significance of results was developed in the early 20th century. The term significance does not imply importance here, and the term statistical significance is not the same as research, theoretical, or practical significance. For example, the term clinical significance refers to the practical importance of a treatment effect. In 1925, Ronald Fisher advanced the idea of statistical hypothesis testing. Fisher suggested a probability of one in twenty as a convenient cutoff level to reject the null hypothesis. In a 1933 paper, Jerzy Neyman and Egon Pearson called this cutoff the significance level, which they named α; they recommended that α be set ahead of time, prior to any data collection. Despite his initial suggestion of 0.05 as a significance level, Fisher did not intend this cutoff value to be fixed; in his 1956 publication Statistical Methods and Scientific Inference, he recommended that significance levels be set according to specific circumstances. The significance level α is the threshold for p below which the experimenter assumes the null hypothesis is false, and something else is going on. This means α is also the probability of mistakenly rejecting the null hypothesis when it is true. Sometimes researchers talk about the confidence level γ = 1 − α instead; this is the probability of not rejecting the null hypothesis given that it is true. Confidence levels and confidence intervals were introduced by Neyman in 1937. Statistical significance plays a pivotal role in statistical hypothesis testing. It is used to determine whether the null hypothesis should be rejected or retained. The null hypothesis is the default assumption that nothing happened or changed. For the null hypothesis to be rejected, an observed result has to be statistically significant. To determine whether a result is statistically significant, a researcher calculates a p-value. The null hypothesis is rejected if the p-value is less than a predetermined level, α. α is called the significance level, and is the probability of rejecting the null hypothesis given that it is true. It is usually set at or below 5%. When drawing data from a sample, this means that the rejection region comprises 5% of the sampling distribution. As a result, the null hypothesis can be rejected with a less extreme result if a one-tailed test was used.
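To make the comparison of a p-value against α concrete, here is a small self-contained sketch (standard library only) testing whether a coin is biased toward heads. The observed count of 61 heads in 100 flips is invented for illustration; the one-tailed p-value is the probability, under the null hypothesis of a fair coin, of seeing a result at least that extreme.

```python
# One-tailed binomial test of H0: coin is fair (p = 0.5), done with the
# standard library only. Observed data below are illustrative, not real.
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n_flips, n_heads = 100, 61   # hypothetical experiment
alpha = 0.05                 # significance level chosen before data collection

# p-value: probability under H0 of observing at least n_heads heads.
p_value = sum(binom_pmf(k, n_flips, 0.5) for k in range(n_heads, n_flips + 1))

print(f"one-tailed p-value = {p_value:.4f}")
if p_value < alpha:
    print("Result is statistically significant at the 5% level: reject H0.")
else:
    print("Result is not statistically significant: do not reject H0.")
```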
21.
Correlation and dependence
–
In statistics, dependence or association is any statistical relationship, whether causal or not, between two random variables or bivariate data. Familiar examples of dependent phenomena include the correlation between the physical statures of parents and their offspring, and the correlation between the demand for a product and its price. Correlations are useful because they can indicate a predictive relationship that can be exploited in practice. For example, an electrical utility may produce less power on a mild day based on the correlation between electricity demand and weather. In this example there is a causal relationship, because extreme weather causes people to use more electricity for heating or cooling. However, in general, the presence of a correlation is not sufficient to infer the presence of a causal relationship. Formally, random variables are dependent if they do not satisfy a mathematical property of probabilistic independence. In informal parlance, correlation is synonymous with dependence; however, when used in a technical sense, correlation refers to any of several specific types of relationship between mean values. There are several correlation coefficients, often denoted ρ or r. The most common of these is the Pearson correlation coefficient, which is sensitive only to a linear relationship between two variables. Other correlation coefficients have been developed to be more robust than the Pearson correlation, that is, more sensitive to nonlinear relationships. Mutual information can also be applied to measure dependence between two variables. The Pearson coefficient is obtained by dividing the covariance of the two variables by the product of their standard deviations. Karl Pearson developed the coefficient from a similar but slightly different idea by Francis Galton. The Pearson correlation is defined only if both of the standard deviations are finite and nonzero. It is a corollary of the Cauchy–Schwarz inequality that the correlation cannot exceed 1 in absolute value. The correlation coefficient is symmetric: corr(X, Y) = corr(Y, X). As it approaches zero there is less of a relationship; the closer the coefficient is to either −1 or 1, the stronger the correlation between the variables. If the variables are independent, Pearson's correlation coefficient is 0, but the converse is not true, because the correlation coefficient detects only linear dependencies. For example, suppose the random variable X is symmetrically distributed about zero, and Y = X². Then Y is completely determined by X, so that X and Y are perfectly dependent, yet their correlation is zero. However, in the special case when X and Y are jointly normal, uncorrelatedness is equivalent to independence. If we have a series of n measurements of X and Y written as xi and yi for i = 1, …, n, then the sample correlation coefficient can be used to estimate the population Pearson correlation r between X and Y. If x and y are results of measurements that contain measurement error, the realistic limits on the correlation coefficient are not −1 to +1 but a smaller range. For the case of a linear model with a single independent variable, the coefficient of determination is the square of r, Pearson's product-moment coefficient. Rank correlation coefficients measure the extent to which, as one variable increases, the other variable tends to increase or decrease; if, as the one variable increases, the other decreases, the rank correlation coefficients will be negative. To illustrate the nature of rank correlation, and its difference from linear correlation, consider the following four pairs of numbers. As we go from each pair to the next pair, x increases, and so does y.
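The claim that Y = X² can be perfectly dependent on X yet uncorrelated with it is easy to verify numerically. The sketch below computes the sample Pearson coefficient for a symmetric set of X values; the specific values are an illustrative assumption.

```python
# Pearson correlation of X and Y = X^2 for X symmetric about zero.
# Illustrates dependence without (linear) correlation.
from statistics import mean, pstdev

def pearson_r(xs, ys) -> float:
    """Pearson correlation: covariance divided by the product of standard deviations."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

if __name__ == "__main__":
    xs = [-3, -2, -1, 0, 1, 2, 3]        # symmetric about zero (illustrative)
    ys = [x * x for x in xs]             # perfectly determined by x
    print(round(pearson_r(xs, ys), 10))  # 0.0: uncorrelated despite full dependence

    # A strongly linear relationship, by contrast, gives r close to +1.
    zs = [2 * x + 1 for x in xs]
    print(round(pearson_r(xs, zs), 10))  # 1.0
```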
22.
Mean
–
In mathematics, mean has several different definitions depending on the context. In probability and statistics, the mean of a discrete set of numbers is the sum of the values divided by the number of values; an analogous formula applies to the case of a probability distribution. Not every probability distribution has a defined mean (see the Cauchy distribution for an example), and for some distributions the mean is infinite. The arithmetic mean of a set of numbers x1, x2, …, xn is typically denoted by x̄, pronounced "x bar". If the data set is based on a series of observations obtained by sampling from a statistical population, the arithmetic mean is termed the sample mean to distinguish it from the population mean. For a finite population, the population mean of a property is equal to the arithmetic mean of the given property over every member of the population; for example, the population mean height is equal to the sum of the heights of every individual divided by the total number of individuals. The sample mean may differ from the population mean, especially for small samples; the law of large numbers dictates that the larger the size of the sample, the more likely it is that the sample mean will be close to the population mean. Outside of probability and statistics, a wide range of other notions of mean are often used in geometry and analysis; examples are given below. The arithmetic mean is x̄ = (x1 + x2 + … + xn) / n. The geometric mean is an average that is useful for sets of positive numbers that are interpreted according to their product rather than their sum: x̄ = (x1 · x2 · … · xn)^(1/n). For example, the geometric mean of the five values 4, 36, 45, 50, 75 is (4 · 36 · 45 · 50 · 75)^(1/5) = 24,300,000^(1/5) = 30. The harmonic mean is an average which is useful for sets of numbers which are defined in relation to some unit, for example speed. The arithmetic (AM), geometric (GM), and harmonic (HM) means satisfy the inequalities AM ≥ GM ≥ HM, with equality holding if and only if all the elements of the data set are equal. In descriptive statistics, the mean may be confused with the median, mode, or mid-range, as any of these may be called an "average". The mean of a set of observations is the arithmetic average of the values; however, for skewed distributions the mean is not necessarily the same as the median or the mode. For example, mean income is typically skewed upwards by a small number of people with very large incomes. By contrast, the median income is the level at which half the population is below and half is above, and the mode income is the most likely income, which favors the larger number of people with lower incomes. The mean of a probability distribution is the long-run arithmetic average value of a random variable having that distribution; in this context it is also known as the expected value.
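To make the worked example concrete, here is a short Python sketch using the standard-library statistics module (Python 3.8 or later is assumed for geometric_mean); it computes the arithmetic, geometric, and harmonic means of the same five values and checks the AM ≥ GM ≥ HM inequality.

```python
# The three Pythagorean means of the values used in the example above.
from statistics import mean, geometric_mean, harmonic_mean

values = [4, 36, 45, 50, 75]

am = mean(values)             # (4 + 36 + 45 + 50 + 75) / 5 = 42
gm = geometric_mean(values)   # (4 * 36 * 45 * 50 * 75) ** (1/5) = 30 (up to floating point)
hm = harmonic_mean(values)    # 5 / (1/4 + 1/36 + 1/45 + 1/50 + 1/75) = 15

print(am, gm, hm)
assert am >= gm >= hm         # AM >= GM >= HM, with equality only if all values are equal
```

The strict inequalities here (42 > 30 > 15) reflect the fact that the five values are not all equal.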
23.
Standard deviation
–
In statistics, the standard deviation is a measure used to quantify the amount of variation or dispersion of a set of data values. The standard deviation of a random variable, statistical population, data set, or probability distribution is the square root of its variance. It is algebraically simpler, though in practice less robust, than the average absolute deviation. A useful property of the standard deviation is that, unlike the variance, it is expressed in the same units as the data. There are also other measures of deviation from the norm, including the mean absolute deviation. In addition to expressing the variability of a population, the standard deviation is commonly used to measure confidence in statistical conclusions. For example, the margin of error in polling data is determined by calculating the expected standard deviation in the results if the same poll were to be conducted multiple times. This derivation of a standard deviation is often called the "standard error" of the estimate, or "standard error of the mean" when referring to a mean; it is computed as the standard deviation of all the means that would be computed from the population if an infinite number of samples were drawn and a mean computed for each sample. It is very important to note that the standard deviation of a population and the standard error of a statistic derived from a sample of that population (such as the mean) are quite different, though related. The reported margin of error of a poll is computed from the standard error of the mean and is typically about twice the standard deviation, the half-width of a 95 percent confidence interval. The standard deviation is also important in finance, where the standard deviation on the rate of return on an investment is a measure of the volatility of the investment. For a finite set of numbers, the population standard deviation is found by taking the square root of the average of the squared deviations of the values from their average value. For example, suppose the marks of a class of eight students are the eight values 2, 4, 4, 4, 5, 5, 7, 9. These eight data points have a mean of 5, since (2 + 4 + 4 + 4 + 5 + 5 + 7 + 9) / 8 = 5; the squared deviations from the mean sum to 32, so the variance is 32 / 8 = 4 and the population standard deviation is √4 = 2. This formula is valid only if the eight values with which we began form the complete population. If the values instead were a random sample drawn from some large parent population, then dividing by n − 1 = 7 rather than by n = 8 gives the sample standard deviation; dividing by n − 1 rather than by n gives an unbiased estimate of the variance of the larger parent population. This is known as Bessel's correction. As a slightly more complicated real-life example, the average height for adult men in the United States is about 70 inches, with a standard deviation of around 3 inches.
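The eight-student example can be reproduced in a few lines of Python; in this sketch NumPy's ddof parameter switches between the population formula (divide by n) and the sample formula with Bessel's correction (divide by n − 1).

```python
# Population versus sample standard deviation for the eight marks above.
import numpy as np

marks = np.array([2, 4, 4, 4, 5, 5, 7, 9])

print(marks.mean())       # 5.0
print(marks.std(ddof=0))  # population SD: sqrt(32 / 8) = 2.0
print(marks.std(ddof=1))  # sample SD with Bessel's correction: sqrt(32 / 7) ≈ 2.138
```

Using ddof=1 is appropriate when the eight marks are a sample from a larger population rather than the complete population itself.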
24.
Stationary process
–
In mathematics and statistics, a stationary process is a stochastic process whose joint probability distribution does not change when shifted in time; consequently, parameters such as the mean and variance, if they are present, also do not change over time. Since stationarity is an assumption underlying many statistical procedures used in time series analysis, non-stationary data is often transformed to become stationary. The most common cause of violation of stationarity is a trend in the mean, due either to the presence of a unit root or of a deterministic trend. In the former case of a unit root, stochastic shocks have permanent effects and the process is not mean-reverting. In the latter case of a deterministic trend, the process is called a trend stationary process; a trend stationary process is not strictly stationary, but can easily be transformed into a stationary process by removing the underlying trend. Similarly, processes with one or more unit roots can be made stationary through differencing. An important type of non-stationary process that does not include a trend-like behavior is a cyclostationary process, which is a stochastic process that varies cyclically with time. A stationary process is not the same thing as a process with a stationary distribution. Besides, all stationary Markov random processes are time-homogeneous. Formally, let {X_t} be a stochastic process and let F_X(x_{t1+τ}, …, x_{tk+τ}) represent the cumulative distribution function of the joint distribution of {X_t} at times t1+τ, …, tk+τ. Then {X_t} is said to be strictly stationary if, for all k, for all τ, and for all t1, …, tk, F_X(x_{t1+τ}, …, x_{tk+τ}) = F_X(x_{t1}, …, x_{tk}). Since τ does not affect F_X, F_X is not a function of time. As an example, white noise is stationary. The sound of a cymbal clashing, if hit only once, is not stationary because the acoustic power of the clash diminishes with time. However, it would be possible to invent a stochastic process describing when the cymbal is hit such that the overall response is stationary; for example, if the cymbal were hit at moments in time corresponding to a homogeneous Poisson process, the overall response would be stationary. An example of a stationary process whose sample space is also discrete is a Bernoulli scheme. For another example, let Y be any scalar random variable and define a time series {X_t} by X_t = Y for all t. Then {X_t} is a stationary time series, for which realisations consist of a series of constant values, with a different constant value for each realisation. A law of large numbers does not apply in this case, as the limiting value of an average from a single realisation takes the random value determined by Y, rather than the expected value of Y. A weaker form of stationarity commonly employed in signal processing is known as weak-sense stationarity, wide-sense stationarity (WSS), or covariance stationarity. WSS random processes only require that the first moment (the mean) and the autocovariance do not vary with respect to time. Any strictly stationary process which has a finite mean and covariance is also WSS. The first property implies that the mean function m_X(t) must be constant. The second property implies that the covariance function depends only on the difference between t1 and t2, and so only needs to be indexed by one variable rather than two.
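As a loose illustration of differencing a unit-root process (the simulated series, seed, and segment count are assumptions made for the example), this Python sketch compares the spread of a random walk across time segments with the spread of its first difference, which is stationary white noise.

```python
# A random walk (cumulative sum of white noise) has a unit root and is non-stationary;
# its first difference recovers the white noise, which is (weakly) stationary.
import numpy as np

rng = np.random.default_rng(42)
noise = rng.normal(0.0, 1.0, size=5000)   # white noise: stationary
walk = np.cumsum(noise)                   # random walk: shocks have permanent effects

def segment_std(x, k=5):
    """Standard deviation within k equal-length segments of the series."""
    return [seg.std() for seg in np.array_split(x, k)]

print("random walk segment SDs:", np.round(segment_std(walk), 2))           # vary widely as the walk drifts
print("differenced segment SDs:", np.round(segment_std(np.diff(walk)), 2))  # roughly constant, near 1.0
```

The roughly constant segment statistics after differencing are what make standard time-series procedures applicable to the transformed series.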
25.
Climate change
–
Climate change is a change in the statistical distribution of weather patterns when that change lasts for an extended period of time. Climate change may refer to a change in average weather conditions, or to a change in the variation of weather around longer-term average conditions. Climate change is caused by factors such as biotic processes, variations in solar radiation received by Earth, and plate tectonics; certain human activities have also been identified as significant causes of recent climate change, often referred to as global warming. Scientists actively work to understand past and future climate by using observations and theoretical models; more recent data are provided by the instrumental record. The most general definition of climate change is a change in the statistical properties of the climate system when considered over long periods of time, regardless of cause. Accordingly, fluctuations over periods shorter than a few decades, such as El Niño, do not represent climate change. The term climate change is often used to refer specifically to anthropogenic climate change, that is, climate change caused by human activity, as opposed to changes in climate that may have resulted as part of Earth's natural processes. In this sense, especially in the context of environmental policy, the term climate change has become synonymous with anthropogenic global warming; within scientific journals, global warming refers to surface temperature increases while climate change includes global warming and everything else that increasing greenhouse gas levels affect. A related term is climatic change: in 1966, the World Meteorological Organization proposed the term climatic change to encompass all forms of climatic variability on time-scales longer than 10 years, regardless of cause. Change was a given, and "climatic" was used as an adjective to describe this kind of change. When it was realized that human activities had a potential to drastically alter the climate, the term climate change replaced climatic change as the dominant term to reflect an anthropogenic cause. Climate change was incorporated in the title of the Intergovernmental Panel on Climate Change, and climate change, used as a noun, became an issue rather than the technical description of changing weather. On the broadest scale, the rate at which energy is received from the Sun and the rate at which it is lost to space determine the equilibrium temperature and climate of Earth; this energy is distributed around the globe by winds, ocean currents, and other mechanisms to affect the climates of different regions. Factors that can shape climate are called climate forcings or forcing mechanisms, and there are a variety of climate change feedbacks that can either amplify or diminish the initial forcing. Some parts of the climate system, such as the oceans and ice caps, respond more slowly to climate forcings. There are also key thresholds which, when exceeded, can produce rapid change. Forcing mechanisms can be internal or external; internal forcing mechanisms are natural processes within the climate system itself.
26.
Hydrology
–
Hydrology is the scientific study of the movement, distribution, and quality of water on Earth and other planets, including the water cycle, water resources, and environmental watershed sustainability. A practitioner of hydrology is a hydrologist, working within the fields of earth or environmental science, physical geography, geology, or civil and environmental engineering. Hydrology subdivides into surface water hydrology, groundwater hydrology, and marine hydrology; domains of hydrology include hydrometeorology, surface hydrology, hydrogeology, drainage-basin management, and water quality. Oceanography and meteorology are not included because water is only one of many important aspects within those fields. Hydrological research can inform environmental engineering, policy, and planning. The term hydrology comes from the Greek ὕδωρ (hýdōr, "water") and λόγος (lógos, "study"). Chemical hydrology is the study of the chemical characteristics of water. Ecohydrology is the study of interactions between organisms and the hydrologic cycle. Hydrogeology is the study of the presence and movement of groundwater. Hydroinformatics is the adaptation of information technology to hydrology and water-resources applications. Hydrometeorology is the study of the transfer of water and energy between land and water surfaces and the lower atmosphere. Isotope hydrology is the study of the isotopic signatures of water. Surface hydrology is the study of hydrologic processes that operate at or near Earth's surface. Drainage-basin management covers water storage, in the form of reservoirs. Water quality includes the chemistry of water in rivers and lakes, both of pollutants and of natural solutes. Applications of hydrology include determining the water balance of a region; mitigating and predicting flood, landslide, and drought risk; real-time flood forecasting and flood warning; designing irrigation schemes and managing agricultural productivity; providing part of the hazard module in catastrophe modelling; designing dams for water supply or hydroelectric power generation; designing sewers and urban drainage systems; analyzing the impacts of antecedent moisture on sanitary sewer systems; predicting geomorphologic changes, such as erosion or sedimentation; assessing the impacts of natural and anthropogenic environmental change on water resources; and assessing contaminant transport risk and establishing environmental policy guidelines. Hydrology has been a subject of investigation and engineering for millennia; for example, about 4000 BC the Nile was dammed to improve the agricultural productivity of previously barren lands.
27.
Tide gauge
–
A tide gauge is a device for measuring the change in sea level relative to a datum. Sensors continuously record the height of the water level with respect to a height reference surface close to the geoid. Water enters the device through a pipe, and electronic sensors measure its height. Historical data are available for about 1,450 stations worldwide, and at some places records cover centuries, for example in Amsterdam, where data dating back to 1700 are available. When it comes to estimating the greater ocean picture, modern tide gauges can often be improved upon by using satellite data. Tide gauges are used to measure tides and to quantify the size of tsunamis, and the measurements make it possible to derive mean sea level. Using this method, sea level slopes of up to several 0.1 m per 1000 km have been detected. A tsunami can be detected when the sea level begins to rise, although warnings from seismic activity can be more useful. Sea-level measurements were made using simple measuring poles or tide staffs until around 1830, when self-recording float gauges were introduced; float gauges were the primary means of sea-level measurement for over 150 years and continue to operate at some locations today. While still part of modern tide gauge instrumentation, these technologies have since been superseded by pressure gauges. Many industries have installed private tide gauges in ports. Data collected from tide gauges are also of interest to scientists measuring global weather patterns, the mean sea water level, and trends, notably those potentially associated with global warming. In recent years new technologies have been developed that allow real-time, remote tide information to be published online via a solar-powered wireless connection to a tide sensor; ultrasonic sensors have already been deployed to great effect, and the data are regularly broadcast via Twitter and also displayed online. Tides have been measured at Fort Denison since 1857, on completion of the fort, and from 1867 successive instruments were used as tide-measuring technology developed. At Fort Denison, float-activated tide gauge instruments were housed in a cabinet; a wire connected to the upper drum mechanism passes out through the bottom right of the cabinet and runs over a pulley down to the float system in the large pipe in the well. This system is now obsolete at Fort Denison but is maintained as a museum exhibit; to the right of the large pipe in the well is an enclosed pipe which rises to the active modern system. Tide heights and times at Fort Denison are the primary referent for published tide information for other places in the state of New South Wales.
28.
Extreme weather
–
Extreme weather includes unexpected, unusual, severe, or unseasonal weather: weather at the extremes of the historical distribution, that is, the range that has been seen in the past. Often, extreme events are defined relative to a location's recorded weather history. In recent years some extreme weather events have been attributed to human-induced global warming, with studies indicating an increasing threat from extreme weather in the future. According to the IPCC, estimates of annual losses have ranged since 1980 from a few billion to above US$200 billion. Heat waves are periods of abnormally high temperatures and heat index; definitions of a heat wave vary because of the variation of temperatures in different geographic locations. Excessive heat is often accompanied by high levels of humidity, but can also be catastrophically dry. Because heat waves are not visible in the way that other forms of severe weather, such as hurricanes and tornadoes, are, they are among the less well known forms of extreme weather. Severe heat can damage populations and crops due to dehydration or hyperthermia, heat cramps, and heat exhaustion. Dried soils are susceptible to erosion, decreasing the land available for agriculture, and outbreaks of wildfires can increase in frequency as dry vegetation has an increased likelihood of igniting. The evaporation of bodies of water can be devastating to marine populations, decreasing the size of the habitats available as well as the amount of nutrition present within the waters; livestock and other animal populations may decline as well. During excessive heat, plants shut their leaf pores, a mechanism to conserve water that also curtails their uptake of pollutants, leaving more pollution and ozone in the air, which leads to higher mortality in the population. It has been estimated that the extra pollution during the hot summer of 2006 in the UK cost 460 lives. The European heat waves of summer 2003 are estimated to have caused 30,000 excess deaths due to heat stress. Power outages can also occur within areas experiencing heat waves because of the increased demand for electricity, and the urban heat island effect can increase temperatures, particularly overnight. A cold wave is a weather phenomenon that is distinguished by a cooling of the air. The precise criterion for a cold wave is determined by the rate at which the temperature falls and the minimum to which it falls; this minimum temperature is dependent on the geographical region and time of year. Cold waves are generally capable of occurring at any geographical location and are formed by large air masses that accumulate over certain regions. A cold wave can cause death and injury to livestock and wildlife, and cold waves often necessitate the purchase of fodder for livestock at considerable cost to farmers. Human populations can be afflicted with frostbite when exposed to cold for extended periods of time, and extreme winter cold often causes poorly insulated water pipes to freeze.
29.
Wayback Machine
–
The Internet Archive launched the Wayback Machine in October 2001. It was set up by Brewster Kahle and Bruce Gilliat, and is maintained with content from Alexa Internet. The service enables users to see archived versions of web pages across time, which the archive calls a "three dimensional index". Since 1996, the Wayback Machine has been archiving cached pages of websites onto its large cluster of Linux nodes; it revisits sites every few weeks or months and archives a new version. Sites can also be captured on the fly by visitors who enter a site's URL into a search box. The intent is to capture and archive content that otherwise would be lost whenever a site is changed or closed down; the overall vision of the machine's creators is to archive the entire Internet. The name Wayback Machine was chosen as a reference to the WABAC machine, a time-traveling device used by the characters Mr. Peabody and Sherman in The Rocky and Bullwinkle Show, an animated cartoon. The crawlers also respect the robots exclusion standard for websites whose owners opt for them not to appear in search results or be cached; Archive-It was developed by the Internet Archive to overcome inconsistencies in partially cached websites. Information had been kept on digital tape for five years, with Kahle occasionally allowing researchers access to the data; when the archive reached its fifth anniversary, it was unveiled and opened to the public in a ceremony at the University of California, Berkeley. Snapshots usually become available more than six months after they are archived or, in some cases, even later. The frequency of snapshots is variable, so not all tracked website updates are recorded, and sometimes there are intervals of several weeks or years between snapshots. After August 2008, sites had to be listed on the Open Directory in order to be included. As of 2009, the Wayback Machine contained approximately three petabytes of data and was growing at a rate of 100 terabytes each month; the growth rate reported in 2003 was 12 terabytes per month. The data is stored on PetaBox rack systems manufactured by Capricorn Technologies. In 2009, the Internet Archive migrated its customized storage architecture to Sun Open Storage. In 2011 a new, improved version of the Wayback Machine, with an updated interface and a fresher index of archived content, was made available for public testing; the index driving the classic Wayback Machine has only a little material past 2008. In January 2013, the company announced a ground-breaking milestone of 240 billion URLs. In October 2013, the company announced the "Save a Page" feature, which allows any Internet user to archive the contents of a URL; this created a potential for abuse of the service for hosting malicious binaries. As of December 2014, the Wayback Machine contained almost nine petabytes of data and was growing at a rate of about 20 terabytes each week. Between October 2013 and March 2015, the website's global Alexa rank changed from 162 to 208. In a 2009 case, Netbula, LLC v. Chordiant Software Inc., defendant Chordiant filed a motion to compel Netbula to disable the robots.txt file on its website, which was causing the Wayback Machine to remove access to previous versions of pages it had archived from Netbula's site; Netbula objected to the motion on the ground that the defendants were asking to alter Netbula's website. In an October 2004 case, Telewizja Polska USA, Inc. v. Echostar Satellite, No. 02 C 3293, 65 Fed. 673, a litigant attempted to use the Wayback Machine archives as a source of admissible evidence; Telewizja Polska is the provider of TVP Polonia, and EchoStar operates the Dish Network.