In physics, the kinetic energy of an object is the energy that it possesses due to its motion. It is defined as the work needed to accelerate a body of a given mass from rest to its stated velocity. Having gained this energy during its acceleration, the body maintains this kinetic energy unless its speed changes; the same amount of work is done by the body when decelerating from its current speed to a state of rest. In classical mechanics, the kinetic energy of a non-rotating object of mass m traveling at a speed v is ½mv². In relativistic mechanics, this is a good approximation only when v is much less than the speed of light. The standard unit of kinetic energy is the joule, while the imperial unit is the foot-pound. The adjective kinetic has its roots in the Greek word κίνησις (kinesis), meaning "motion". The dichotomy between kinetic energy and potential energy can be traced back to Aristotle's concepts of actuality and potentiality. The principle in classical mechanics that E ∝ mv² was first developed by Gottfried Leibniz and Johann Bernoulli, who described kinetic energy as the living force, vis viva.
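As a numerical illustration of this definition, the following sketch (plain Python with NumPy; the mass, force and duration are made-up example values) integrates the power delivered by a constant force accelerating a body from rest, and confirms that the total work matches ½mv²:

```python
import numpy as np

# Check that the work done accelerating a body from rest equals ½mv².
# m, F and the 3 s duration are assumed example values.
m = 2.0    # mass in kg
F = 10.0   # constant applied force in N

t = np.linspace(0.0, 3.0, 100_001)  # time samples over 3 s
v = (F / m) * t                     # velocity from rest under constant force
power = F * v                       # instantaneous power delivered to the body

# Total work = ∫ F·v dt, integrated with the trapezoidal rule.
work = np.sum(0.5 * (power[1:] + power[:-1]) * np.diff(t))

print(f"work done:      {work:.2f} J")                # 225.00 J
print(f"kinetic energy: {0.5 * m * v[-1]**2:.2f} J")  # 225.00 J, i.e. ½mv²
```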
Willem 's Gravesande of the Netherlands provided experimental evidence of this relationship: by dropping weights from different heights into a block of clay, he determined that their penetration depth was proportional to the square of their impact speed. Émilie du Châtelet recognized the implications of the experiment and published an explanation. The terms kinetic energy and work in their present scientific meanings date back to the mid-19th century. Early understandings of these ideas can be attributed to Gaspard-Gustave Coriolis, who in 1829 published the paper titled Du Calcul de l'Effet des Machines outlining the mathematics of kinetic energy. William Thomson, later Lord Kelvin, is given the credit for coining the term "kinetic energy" c. 1849–51. Energy occurs in many forms, including chemical energy, thermal energy, electromagnetic radiation, gravitational energy, electric energy, elastic energy, nuclear energy and rest energy; these can be categorized in two main classes: potential energy and kinetic energy. Kinetic energy is the movement energy of an object.
Kinetic energy can be transformed into other kinds of energy, and it may be best understood by examples that demonstrate how it is transformed to and from other forms. For example, a cyclist uses chemical energy provided by food to accelerate a bicycle to a chosen speed. On a level surface, this speed can be maintained without further work, except to overcome air resistance and friction. The chemical energy has been converted into kinetic energy, the energy of motion, but the process is not completely efficient and also produces heat within the cyclist. The kinetic energy in the moving cyclist and the bicycle can be converted to other forms. For example, the cyclist could encounter a hill just high enough to coast up, so that the bicycle comes to a complete halt at the top. The kinetic energy has now been converted to gravitational potential energy that can be released by freewheeling down the other side of the hill. Since the bicycle lost some of its energy to friction, it never regains all of its speed without additional pedaling.
The energy is not destroyed; it has only been converted to another form by friction. Alternatively, the cyclist could connect a dynamo to one of the wheels and generate some electrical energy on the descent; the bicycle would be traveling slower at the bottom of the hill than without the generator because some of the energy has been diverted into electrical energy. Another possibility would be for the cyclist to apply the brakes, in which case the kinetic energy would be dissipated through friction as heat. Like any physical quantity that is a function of velocity, the kinetic energy of an object depends on the relationship between the object and the observer's frame of reference. Thus, the kinetic energy of an object is not invariant. Spacecraft use chemical energy to launch and gain considerable kinetic energy to reach orbital velocity. In an entirely circular orbit, this kinetic energy remains constant because there is almost no friction in near-earth space. However, it becomes apparent at re-entry, when some of the kinetic energy is converted to heat. If the orbit is elliptical or hyperbolic, then throughout the orbit kinetic and potential energy are exchanged.
Without loss or gain, the sum of the kinetic and potential energy remains constant. Kinetic energy can be passed from one object to another. In the game of billiards, the player imposes kinetic energy on the cue ball by striking it with the cue stick. If the cue ball collides with another ball, it slows down dramatically, and the ball it hit accelerates as the kinetic energy is passed on to it. Collisions in billiards are effectively elastic collisions, in which kinetic energy is preserved. In inelastic collisions, kinetic energy is dissipated in various forms of energy, such as heat, sound and binding energy. Flywheels have been developed as a method of energy storage; this illustrates that kinetic energy can also be stored in rotational motion. Several mathematical descriptions of kinetic energy exist that describe it in the appropriate physical situation. For objects and processes in common human experience, the formula ½mv² given by Newtonian mechanics is suitable. However, if the speed of the object is comparable to the speed of light, relativistic effects become significant and the relativistic formula must be used.
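A minimal sketch of that comparison, assuming the standard relativistic formula (γ − 1)mc² and an arbitrary 1 kg test mass:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def ke_classical(m, v):
    """Newtonian kinetic energy ½mv²."""
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    """Relativistic kinetic energy (γ − 1)mc²."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C**2

m = 1.0  # test mass in kg (arbitrary example value)
for frac in (0.001, 0.1, 0.5, 0.9):
    v = frac * C
    ratio = ke_classical(m, v) / ke_relativistic(m, v)
    print(f"v = {frac:>5}c   classical/relativistic = {ratio:.4f}")
# At v ≪ c the ratio is ≈ 1; it drops to ≈ 0.81 at 0.5c and ≈ 0.31 at 0.9c,
# showing where the Newtonian formula stops being a good approximation.
```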
A waterfall is an area where water flows over a vertical drop or a series of steep drops in the course of a stream or river. Waterfalls also occur where meltwater drops over the edge of a tabular iceberg or ice shelf. Waterfalls are commonly formed in the upper course of a river, in steep mountains. Because of their landscape position, many waterfalls occur over bedrock fed by little contributing area, so they may be ephemeral and flow only during rainstorms or significant snowmelt; the further downstream, the more perennial a waterfall can be. Waterfalls can have a wide range of widths and depths. When the river courses over resistant bedrock, erosion happens slowly and is dominated by impacts of water-borne sediment on the rock, while downstream the erosion occurs more rapidly. As the watercourse increases its velocity at the edge of the waterfall, it may pluck material from the riverbed, if the bed is fractured or otherwise more erodible. Hydraulic jets and hydraulic jumps at the toe of a falls can generate large forces to erode the bed, especially when forces are amplified by water-borne sediment.
Horseshoe-shaped falls focus the erosion to a central point, enhancing riverbed change below the waterfall. A process known as "potholing" involves local erosion of a deep hole in bedrock due to turbulent whirlpools spinning stones around on the bed, drilling it out. Sand and stones carried by the watercourse therefore increase erosion capacity; this causes the waterfall to recede upstream. Over time, as the waterfall recedes upstream, it carves a canyon or gorge downstream and cuts deeper into the ridge above it; the rate of retreat for a waterfall can be as high as one and a half metres per year. The rock stratum just below the more resistant shelf will be of a softer type, meaning that undercutting due to splashback will occur here to form a shallow cave-like formation known as a rock shelter under and behind the waterfall. The outcropping, more resistant cap rock will eventually collapse under pressure, adding blocks of rock to the base of the waterfall. These blocks of rock are then broken down into smaller boulders by attrition as they collide with each other, and they also erode the base of the waterfall by abrasion, creating a deep plunge pool in the gorge downstream.
Streams can become wider and shallower just above waterfalls due to flowing over the rock shelf, and there is usually a deep area just below the waterfall because of the kinetic energy of the water hitting the bottom. However, a study of waterfall systematics reported that waterfalls can be wider or narrower above or below a falls, so almost anything is possible given the right geological and hydrological setting. Waterfalls commonly form in a rocky area due to erosion. After a long period of being fully formed, the water falling off the ledge will retreat, causing a horizontal pit parallel to the waterfall wall. As the pit grows deeper, the waterfall collapses to be replaced by a steeply sloping stretch of river bed. In addition to gradual processes such as erosion, earth movement caused by earthquakes, landslides or volcanoes can cause a differential in land heights which interferes with the natural course of a water flow and results in waterfalls. A river sometimes flows over a large step in the rocks. Waterfalls can occur along the edge of a glacial trough, where a stream or river flowing into a glacier continues to flow into a valley after the glacier has receded or melted.
The large waterfalls in Yosemite Valley are examples of this phenomenon, referred to as a hanging valley. Another reason hanging valleys may form is where two rivers join and one is flowing faster than the other. Waterfalls can be grouped into ten broad classes based on the average volume of water present on the fall, using a logarithmic scale. Class 10 waterfalls include Paulo Afonso Falls and Khone Falls; classes of other well-known waterfalls include Kaieteur Falls. Notable early students of waterfalls include Alexander von Humboldt, the "Father of Modern Geography", who marked waterfalls on maps for river navigation purposes; Oscar von Engeln, whose Geomorphology: Systematic and Regional devoted a whole chapter to waterfalls and is one of the earliest examples of published works on the subject; and R. W. Young, whose Waterfalls: Form and Process made waterfalls a much more serious research topic for modern geoscientists. Ledge waterfall: water descends vertically over a vertical cliff, maintaining partial contact with the bedrock.
Block/Sheet: water descends from a wide stream or river. Classical: ledge waterfalls where fall height is nearly equal to stream width, forming a vertical square shape. Curtain: ledge waterfalls which descend over a height larger than the width of the falling stream. Plunge: fast-moving water descends vertically, losing complete contact with the bedrock surface; the contact is lost due to the horizontal velocity of the water before it falls, and it always starts from a narrow stream. Punchbowl: water descends in a constricted form and then spreads out in a wider pool. Horsetail: descending water maintains contact with the bedrock most of the time. Slide: water glides down while maintaining continuous contact with the bedrock. Ribbon: water descends over a long narrow strip. Chute: a large quantity of water forced through a narrow, vertical passage. Fan: water spreads horizontally as it descends while remaining in contact with the bedrock.
An atmospheric wave is a periodic disturbance in the fields of atmospheric variables which may or may not propagate. Atmospheric waves range in spatial and temporal scale from large-scale planetary waves down to minute sound waves. Atmospheric waves with periods which are harmonics of 1 solar day are known as atmospheric tides. The mechanism for the forcing of the wave, for example the generation of the initial or prolonged disturbance in the atmospheric variables, can vary. Waves are excited either by heating or by dynamic effects, for example the obstruction of the flow by mountain ranges like the Rocky Mountains in the U.S. or the Alps in Europe. Heating effects can be large-scale. Atmospheric waves transport momentum, which is fed back into the background flow as the wave dissipates. This wave forcing of the flow is particularly important in the stratosphere, where momentum deposition by planetary-scale Rossby waves gives rise to sudden stratospheric warmings and the deposition by gravity waves gives rise to the quasi-biennial oscillation.
In the mathematical description of atmospheric waves, spherical harmonics are used. When considering a section of a wave along a latitude circle, this is equivalent to a sinusoidal shape. Because the propagation of the wave is fundamentally caused by an imbalance of the forces acting on the air, the types of waves and their propagation characteristics vary latitudinally, principally because the Coriolis effect on horizontal flow is maximal at the poles and zero at the equator. The different wave types are: sound waves, which are compression waves that propagate in the atmosphere through a series of compressions and expansions parallel to the direction of propagation; internal gravity waves; inertio-gravity waves; and Rossby waves. At the equator, mixed Rossby-gravity and Kelvin waves can also be observed.
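As a minimal illustration of the sinusoidal shape along a latitude circle mentioned above, the sketch below (NumPy; the zonal wavenumber and amplitude are arbitrary example values) evaluates a single zonal harmonic around a full circle of longitude:

```python
import numpy as np

# A disturbance expressed in a single zonal harmonic has the form
# A·cos(m·λ) along a latitude circle at a fixed time, where m is the
# integer zonal wavenumber and λ is longitude.
m = 4                                 # zonal wavenumber: 4 waves around the circle
A = 1.0                               # amplitude (arbitrary units)
lam = np.radians(np.arange(0, 360))   # longitudes around the latitude circle

disturbance = A * np.cos(m * lam)     # snapshot of the wave

# A wavenumber-m harmonic crosses zero 2m times around the circle.
crossings = np.sum(np.diff(np.sign(disturbance)) != 0)
print(f"zero crossings around the circle: {crossings}")  # prints 8
```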
In physics, power is the rate of doing work or of transferring heat, i.e. the amount of energy transferred or converted per unit time. Having no direction, it is a scalar quantity. In the International System of Units, the unit of power is the joule per second, known as the watt in honour of James Watt, the eighteenth-century developer of the condenser steam engine. Another common and traditional measure is horsepower. Being the rate of work, the equation for power can be written: power = work / time. As a physical concept, power requires both a change in the physical system and a specified time in which the change occurs; this is distinct from the concept of work, which is only measured in terms of a net change in the state of the physical system. The same amount of work is done when carrying a load up a flight of stairs whether the person carrying it walks or runs, but more power is needed for running because the work is done in a shorter amount of time. The output power of an electric motor is the product of the torque that the motor generates and the angular velocity of its output shaft.
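The stair-climbing comparison can be made concrete with a short sketch (the load, height and times are made-up example values):

```python
# Same work, different power: carrying a load up a flight of stairs.
g = 9.81   # gravitational acceleration in m/s²
m = 20.0   # mass carried, in kg (assumed example value)
h = 4.0    # height of the stairs, in m (assumed example value)

work = m * g * h  # work against gravity, identical for walking and running

for label, t in (("walking", 20.0), ("running", 5.0)):
    print(f"{label:8} W = {work:.0f} J over {t:.0f} s -> P = {work / t:.0f} W")
# The work (≈785 J) is the same either way; running delivers it in a
# quarter of the time, so the power is four times higher.
```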
The power involved in moving a vehicle is the product of the traction force of the wheels and the velocity of the vehicle. The rate at which a light bulb converts electrical energy into light and heat is measured in watts: the higher the wattage, the more power, or equivalently the more electrical energy is used per unit time. The dimension of power is energy divided by time. The SI unit of power is the watt, equal to one joule per second. Other units of power include ergs per second, metric horsepower, and foot-pounds per minute. One horsepower is equivalent to 33,000 foot-pounds per minute, or the power required to lift 550 pounds by one foot in one second, and is equivalent to about 746 watts. Other units include dBm, a logarithmic measure relative to a reference of 1 milliwatt. Power, as a function of time, is the rate at which work is done, so it can be expressed by the equation P = dW/dt, where P is power, W is work, and t is time. Because work is a force F applied over a distance x, W = F⋅x for a constant force, so power can be rewritten as: P = dW/dt = d(F⋅x)/dt = F⋅dx/dt = F⋅v. In fact, this is valid for any force, as a consequence of applying the fundamental theorem of calculus.
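A quick check of the horsepower figures quoted above, treating one horsepower as 550 foot-pounds per second and using standard unit conversion factors:

```python
# Verify the horsepower figures via P = F·v and exact unit factors:
# 1 lbf = 4.4482216152605 N and 1 ft = 0.3048 m.
LBF_TO_N = 4.4482216152605
FT_TO_M = 0.3048

# One horsepower: 550 pounds lifted one foot in one second.
force_n = 550 * LBF_TO_N            # force in newtons
velocity = 1 * FT_TO_M              # 1 ft/s in m/s
print(f"1 hp = {force_n * velocity:.1f} W")  # ≈ 745.7 W

# The same rate expressed per minute: 550 ft·lbf/s × 60 s/min.
print(f"1 hp = {550 * 60:,} ft·lbf/min")     # 33,000 ft·lbf/min
```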
As a simple example, burning one kilogram of coal releases much more energy than does detonating a kilogram of TNT, but because the TNT reaction releases energy much more quickly, it delivers far more power than the coal. If ΔW is the amount of work performed during a period of time of duration Δt, the average power P_avg over that period is given by the formula P_avg = ΔW/Δt; it is the average amount of energy converted per unit of time. The average power is often simply called "power" when the context makes it clear. The instantaneous power is the limiting value of the average power as the time interval Δt approaches zero: P = lim(Δt→0) P_avg = lim(Δt→0) ΔW/Δt = dW/dt. In the case of constant power P, the amount of work performed during a period of duration t is given by W = Pt. In the context of energy conversion, it is more customary to use the symbol E rather than W. Power in mechanical systems is the combination of forces and movement. In particular, power is the product of a force on an object and the object's velocity, or the product of a torque on a shaft and the shaft's angular velocity.
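A small sketch of this limit, using an assumed example energy function W(t) = t³ so the exact derivative is known:

```python
# Instantaneous power as the limit of average power ΔW/Δt as Δt → 0.
# W(t) = t³ joules (t in seconds) is an assumed example; dW/dt = 3t².
def W(t):
    return t**3

t0 = 2.0  # evaluate power at t = 2 s, where the exact value is 12 W
for dt in (1.0, 0.1, 0.001, 1e-6):
    p_avg = (W(t0 + dt) - W(t0)) / dt   # average power over [t0, t0 + Δt]
    print(f"Δt = {dt:<8} P_avg = {p_avg:.6f} W")
# P_avg approaches the instantaneous power P = dW/dt = 12 W as Δt shrinks.
```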
Mechanical power is also described as the time derivative of work. In mechanics, the work done by a force F on an object that travels along a curve C is given by the line integral: W_C = ∫_C F⋅v dt = ∫_C F⋅dx, where x defines the path C and v is the velocity along this path. If the force F is derivable from a potential (is conservative), then applying the gradient theorem (and remembering that force is the negative of the gradient of the potential energy) yields: W_C = U(A) − U(B), where A and B are the beginning and end of the path along which the work was done.
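A numerical check of both formulas, using near-surface gravity as the conservative force and an arbitrary parabolic path as an assumed example:

```python
import numpy as np

# Check W_C = ∫ F·dx against the gradient theorem for a conservative force.
# Near-surface gravity: U(x, y) = m·g·y, so F = (0, −m·g).
m, g = 3.0, 9.81                       # assumed example mass, in kg
F = np.array([0.0, -m * g])            # constant gravitational force, in N

# A curved path C from A = (0, 0) to B = (2, 5), finely sampled.
s = np.linspace(0.0, 1.0, 100_001)
path = np.column_stack((2 * s, 5 * s**2))   # x = 2s, y = 5s² (a parabola)

dx = np.diff(path, axis=0)             # small displacement vectors along C
W_C = np.sum(dx @ F)                   # line integral ∫_C F·dx

U = lambda p: m * g * p[1]             # potential energy U = m·g·y
print(f"line integral: {W_C:.3f} J")
print(f"U(A) - U(B)  : {U(path[0]) - U(path[-1]):.3f} J")
# Both print ≈ −147.150 J: the work depends only on the endpoints,
# not on the particular curve taken between them.
```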
Viscount Ilya Romanovich Prigogine was a physical chemist and Nobel laureate noted for his work on dissipative structures, complex systems and irreversibility. Prigogine was born in Moscow a few months before the Russian Revolution of 1917, into a Jewish family. His father, Roman Prigogine, was a chemical engineer at the Imperial Moscow Technical School. Because the family was critical of the new Soviet system, they left Russia in 1921. They first went to Germany and, in 1929, to Belgium, where Prigogine received Belgian nationality in 1949. His brother Alexandre became an ornithologist. Prigogine studied chemistry at the Université Libre de Bruxelles, where in 1950 he became professor. In 1959, he was appointed director of the International Solvay Institute in Belgium. In that year, he also started teaching at the University of Texas at Austin in the United States, where he was later appointed Regental Professor and Ashbel Smith Professor of Physics and Chemical Engineering. From 1961 until 1966 he was affiliated with the Enrico Fermi Institute at the University of Chicago.
In Austin, in 1967, he co-founded the Center for Thermodynamics and Statistical Mechanics, now the Center for Complex Quantum Systems. In that year, he also returned to Belgium, where he became director of the Center for Statistical Mechanics and Thermodynamics. He was a member of numerous scientific organizations and received numerous awards, prizes and 53 honorary degrees. In 1955, Ilya Prigogine was awarded the Francqui Prize for Exact Sciences. For his study in irreversible thermodynamics, he received the Rumford Medal in 1976 and, in 1977, the Nobel Prize in Chemistry. In 1989, he was awarded the title of Viscount in the Belgian nobility by the King of the Belgians. Until his death, he was president of the International Academy of Science and was, in 1997, one of the founders of the International Commission on Distance Education, a worldwide accreditation agency. Prigogine received an Honorary Doctorate from Heriot-Watt University in 1985, and in 1998 he was awarded an honoris causa doctorate by the UNAM in Mexico City.
Prigogine was first married to Belgian poet Hélène Jofé, and in 1945 they had a son, Yves. After their divorce, he married Polish-born chemist Maria Prokopowicz in 1961; in 1970 they had a son, Pascal. In 2003 he was one of 22 Nobel laureates who signed the Humanist Manifesto. Prigogine is best known for his definition of dissipative structures and their role in thermodynamic systems far from equilibrium, a discovery that won him the Nobel Prize in Chemistry in 1977. In summary, Ilya Prigogine discovered that the importation and dissipation of energy into chemical systems can result in the emergence of new structures due to internal self-reorganization. In his 1955 text, Prigogine drew connections between dissipative structures and the Rayleigh-Bénard instability and the Turing mechanism. Dissipative structure theory led to pioneering research in self-organizing systems, as well as philosophical inquiries into the formation of complexity in biological entities and the quest for a creative and irreversible role of time in the natural sciences.
See, however, the criticism by Joel Keizer and Ronald Fox. With professor Robert Herman, he also developed the basis of the two-fluid model, a traffic model in traffic engineering for urban networks, analogous to the two-fluid model in classical statistical mechanics. Prigogine's formal concept of self-organization was used as a "complementary bridge" between General Systems Theory and thermodynamics, reconciling the vagueness of some important systems-theory concepts with scientific rigour. In his later years, his work concentrated on the fundamental role of indeterminism in nonlinear systems on both the classical and quantum level. Prigogine and coworkers proposed a Liouville space extension of quantum mechanics. A Liouville space is the vector space formed by the set of linear operators, equipped with an inner product, that act on a Hilbert space. There exists a mapping of each linear operator into Liouville space, yet not every self-adjoint operator of Liouville space has a counterpart in Hilbert space; in this sense, Liouville space has a richer structure than Hilbert space.
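As a minimal finite-dimensional illustration of such an operator space (an assumption for illustration, not Prigogine's construction itself), the sketch below equips matrices acting on a small Hilbert space with the Hilbert-Schmidt inner product Tr(A†B):

```python
import numpy as np

# Operators on a finite-dimensional Hilbert space, with the
# Hilbert-Schmidt inner product <A, B> = Tr(A† B), form the
# finite-dimensional analogue of the Liouville space described above.
rng = np.random.default_rng(0)
dim = 4  # assumed example dimension of the underlying Hilbert space
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))

def hs_inner(X, Y):
    """Hilbert-Schmidt inner product Tr(X† Y) on operators."""
    return np.trace(X.conj().T @ Y)

print(hs_inner(A, B))            # a complex number, as an inner product gives
print(hs_inner(A, A).real >= 0)  # <A, A> = Σ|a_ij|² is non-negative: True
```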
The Liouville space extension proposed by Prigogine and co-workers aimed to solve the arrow-of-time problem of thermodynamics and the measurement problem of quantum mechanics. Prigogine co-authored several books with Isabelle Stengers, including The End of Certainty and La Nouvelle Alliance. In his 1996 book, La Fin des certitudes, co-authored with Isabelle Stengers and published in English in 1997 as The End of Certainty: Time and the New Laws of Nature, Prigogine contends that determinism is no longer a viable scientific belief: "The more we know about our universe, the more difficult it becomes to believe in determinism." This is a major departure from the approach of Newton, Einstein and Schrödinger, all of whom expressed their theories in terms of deterministic equations. According to Prigogine, determinism loses its explanatory power in the face of irreversibility and instability. Prigogine traces the dispute over determinism back to Darwin, whose attempt to explain individual variability according to evolving populations inspired Ludwig Boltzmann to explain the behavior of gases in terms of populations of particles rather than individual particles.
This led to the field of statistical mechanics and the realization that gases undergo irreversible processes. In deterministic physics, all processes are time-reversible, meaning that they can proceed backward as well as forward through time.
Temperature is a physical quantity expressing hot and cold. It is measured with a thermometer calibrated in one or more temperature scales; the most commonly used scales are the Celsius scale, the Fahrenheit scale and the Kelvin scale. The kelvin is the unit of temperature in the International System of Units, in which temperature is one of the seven fundamental base quantities; the Kelvin scale is widely used in science and technology. Theoretically, the coldest a system can be is when its temperature is absolute zero, at which point the thermal motion in matter would be zero. However, an actual physical system or object can never attain a temperature of absolute zero. Absolute zero is denoted as 0 K on the Kelvin scale, −273.15 °C on the Celsius scale, and −459.67 °F on the Fahrenheit scale. For an ideal gas, temperature is proportional to the average kinetic energy of the random microscopic motions of the constituent microscopic particles. Temperature is important in all fields of natural science, including physics, Earth science and biology, as well as most aspects of daily life.
Many physical processes are affected by temperature, such as: the physical properties of materials, including the phase, solubility, vapor pressure and electrical conductivity; the rate and extent to which chemical reactions occur; the amount and properties of thermal radiation emitted from the surface of an object; and the speed of sound, which is a function of the square root of the absolute temperature. Temperature scales differ in two ways: the point chosen as zero degrees, and the magnitudes of incremental units or degrees on the scale. The Celsius scale is used for common temperature measurements in most of the world. It is an empirical scale, developed through historical progress, which led to its zero point 0 °C being defined by the freezing point of water, with additional degrees defined so that 100 °C was the boiling point of water, both at sea-level atmospheric pressure. Because of the 100-degree interval, it was called a centigrade scale. Since the standardization of the kelvin in the International System of Units, it has subsequently been redefined in terms of the equivalent fixing points on the Kelvin scale, so that a temperature increment of one degree Celsius is the same as an increment of one kelvin, though the two scales differ by an additive offset of 273.15.
The United States uses the Fahrenheit scale, on which water freezes at 32 °F and boils at 212 °F at sea-level atmospheric pressure. Many scientific measurements use the Kelvin temperature scale, named in honor of the Scots-Irish physicist William Thomson, Lord Kelvin, who first defined it. It is an absolute temperature scale: its zero point, 0 K, is defined to coincide with the coldest physically possible temperature, and its degrees are defined through thermodynamics. The temperature of absolute zero occurs at 0 K = −273.15 °C, and the freezing point of water at sea-level atmospheric pressure occurs at 273.15 K = 0 °C. The International System of Units defines a scale and unit for the kelvin, or thermodynamic temperature, by using the reliably reproducible temperature of the triple point of water as a second reference point. The triple point is a singular state with its own unique and invariant temperature and pressure, along with, for a fixed mass of water in a vessel of fixed volume, an autonomically and stably self-determining partition into three mutually contacting phases, liquid, vapour and solid, dynamically depending only on the total internal energy of the mass of water.
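The fixed points and offsets quoted above can be checked with a few lines of Python (the sample temperatures are simply the fixed points named in the text):

```python
# Conversions between the three scales discussed above, using the
# 273.15 K/°C offset and the 9/5 factor plus 32 °F offset for Fahrenheit.
def celsius_to_kelvin(c):
    return c + 273.15

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

for label, c in (("absolute zero", -273.15),
                 ("water freezes", 0.0),
                 ("water boils", 100.0)):
    print(f"{label:14} {c:>8.2f} °C = {celsius_to_kelvin(c):>7.2f} K"
          f" = {celsius_to_fahrenheit(c):>8.2f} °F")
# Prints 0 K / −459.67 °F, 273.15 K / 32 °F and 373.15 K / 212 °F,
# matching the fixed points quoted in the text.
```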
For historical reasons, the triple point temperature of water is fixed at 273.16 units of the measurement increment. There are various kinds of temperature scale, and it may be convenient to classify them as empirically and theoretically based. Empirical temperature scales are historically older, while theoretically based scales arose in the middle of the nineteenth century. Empirically based temperature scales rely directly on measurements of simple physical properties of materials. For example, the length of a column of mercury, confined in a glass-walled capillary tube, is dependent on temperature and is the basis of the useful mercury-in-glass thermometer. Such scales are valid only within convenient ranges of temperature; for example, above the boiling point of mercury, a mercury-in-glass thermometer is impracticable. Most materials expand with temperature increase, but some materials, such as water, contract with temperature increase over some specific range, and then they are hardly useful as thermometric materials. A material is of no use as a thermometer near one of its phase-change temperatures, for example its boiling point.
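A minimal sketch of how such an empirical scale is constructed, assuming an idealized linear mercury-column response and made-up column lengths for the two calibration points:

```python
# Empirical thermometry in miniature: calibrate a mercury-column reading
# against two fixed points, then read temperature by linear interpolation.
# Column lengths are made-up example values; the linear response is an
# idealizing assumption (real thermometric fluids deviate slightly).
L_ICE, T_ICE = 12.0, 0.0        # column length (mm) at the ice point, °C
L_STEAM, T_STEAM = 87.0, 100.0  # column length (mm) at the steam point, °C

def temperature_from_length(length_mm):
    frac = (length_mm - L_ICE) / (L_STEAM - L_ICE)
    return T_ICE + frac * (T_STEAM - T_ICE)

print(temperature_from_length(49.5))  # 50.0 °C, halfway up the column
```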
In spite of these restrictions, most generally used practical thermometers are of the empirically based kind. Especially, empirical thermometry was used for calorimetry, which contributed greatly to the discovery of thermodynamics. Nevertheless, empirical thermometry has serious drawbacks when judged as a basis for theoretical physics. Empirically based thermometers, beyond their base as simple direct measurements of ordinary physical properties of thermometric materials, can be re-calibrated by use of theoretical physical reasoning, and this can extend their range of adequacy. Theoretically based temperature scales are based directly on theoretical arguments, especially those of thermodynamics, kinetic theory and quantum mechanics; they rely on theoretical properties of idealized materials. They are more or less comparable with feasible physical devices and materials. Theoretically based temperature scales are used to provide calibrating standards for practical empirically based thermometers.
Flood control methods are used to reduce or prevent the detrimental effects of flood waters. Flood relief methods are used to reduce the effects of high water levels. Floods are caused by many factors, or a combination of any of these: prolonged heavy rainfall, accelerated snowmelt, severe winds over water, unusually high tides, tsunamis, or failure of dams, retention ponds, or other structures that retained the water. Flooding can be exacerbated by increased amounts of impervious surface or by other natural hazards such as wildfires, which reduce the supply of vegetation that can absorb rainfall. Periodic floods occur on many rivers. During times of rain, some of the water is retained in ponds or soil, some is absorbed by grass and vegetation, some evaporates, and the rest travels over the land as surface runoff. Floods occur when ponds, riverbeds and vegetation cannot absorb all the water, and water then runs off the land in quantities that cannot be carried within stream channels or retained in natural ponds and man-made reservoirs.
About 30 percent of all precipitation becomes runoff, and that amount might be increased by water from melting snow. River flooding is often caused by heavy rain, sometimes increased by melting snow. A flood that rises with little or no warning is called a flash flood. Flash floods usually result from intense rainfall over a relatively small area, or if the area was already saturated from previous precipitation. Even when rainfall is relatively light, the shorelines of lakes and bays can be flooded by severe winds, such as during hurricanes, that blow water into the shore areas. Coastal areas are sometimes flooded by unusually high tides, such as spring tides, especially when compounded by high winds and storm surges. Flooding has many impacts: it endangers the lives of humans and other species; rapid water runoff causes soil erosion and concomitant sediment deposition elsewhere; the spawning grounds for fish and other wildlife habitats can become polluted or destroyed; some prolonged high floods can delay traffic in areas which lack elevated roadways; and floods can interfere with drainage and economical use of land, such as interfering with farming.
Structural damage can occur in bridge abutments, bank lines, sewer lines and other structures within floodways, and waterway navigation and hydroelectric power generation are often impaired. Financial losses due to floods are typically millions of dollars each year, with the worst floods in recent U.S. history having cost billions of dollars. There are many disruptive effects of flooding on economic activities. However, flooding can bring benefits, such as making soil more fertile and providing nutrients in which it is deficient. Periodic flooding was essential to the well-being of ancient communities along the Tigris-Euphrates Rivers, the Nile River, the Indus River, the Ganges and the Yellow River, among others; the viability of hydrologically based renewable sources of energy is also higher in flood-prone regions. Remote sensing is used to detect such disasters. Detection of disasters such as floods and explosions used to be quite complex, and the range of detection was inadequate, but it has become possible by using multitemporal visualization of Synthetic Aperture Radar (SAR) images.
To obtain good SAR images, however, perfect spatial registration and precise calibration are necessary to specify the changes that have occurred. Calibration of SAR is a complex and sensitive problem, and errors may occur after calibration that involve the data fusion and visualization process. Traditional image pre-processing cannot be used here due to the non-Gaussian nature of radar backscattering, but a processing method called "cross calibration/normalization" is used to solve this problem. The application generates a single disaster image called a "fast-ready disaster map" from multitemporal SAR images. These maps are generated without user interaction and help in providing immediate first aid to the affected people. This process provides image enhancement and comparison between numerous images using data fusion and visualization, and the proposed processing includes histogram truncation and equalization steps. The process helps in identifying permanent waters and other classes by combined composition of pre-disaster and post-disaster images into a color image for better identification.
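A minimal sketch of the histogram truncation and equalization steps mentioned above, applied to a single synthetic amplitude image; the percentile limits and the gamma-distributed stand-in image are assumptions for illustration, and real multitemporal SAR cross-calibration involves far more (registration, speckle handling, sensor calibration):

```python
import numpy as np

def truncate_and_equalize(img, lo_pct=1.0, hi_pct=99.0, levels=256):
    # Histogram truncation: clip extreme backscatter values at percentiles,
    # which tames the heavy (non-Gaussian) tail of SAR amplitudes.
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    clipped = np.clip(img, lo, hi)

    # Histogram equalization: map grey levels through the normalized CDF
    # so the output histogram is approximately flat.
    hist, bin_edges = np.histogram(clipped, bins=levels)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    bins = np.clip(np.digitize(clipped, bin_edges[1:-1]), 0, levels - 1)
    return (cdf[bins] * (levels - 1)).astype(np.uint8)

# Synthetic stand-in for a SAR amplitude image (heavy-tailed statistics).
rng = np.random.default_rng(1)
sar = rng.gamma(shape=1.0, scale=50.0, size=(128, 128))
enhanced = truncate_and_equalize(sar)
print(enhanced.min(), enhanced.max())  # spans the full 0-255 display range
```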
Some methods of flood control have been practiced since ancient times. These methods include planting vegetation to retain extra water, terracing hillsides to slow flow downhill, and the construction of floodways. Other techniques include the construction of levees, dams and retention ponds to hold extra water during times of flooding. Many dams and their associated reservoirs are designed completely or partially to aid in flood protection and control. Many large dams have flood-control reservations in which the level of a reservoir must be kept below a certain elevation before the onset of the rainy/summer melt season, so as to allow a certain amount of space in which floodwaters can fill. Other beneficial uses of dam-created reservoirs include hydroelectric power generation, water conservation and recreation. Reservoir and dam construction and design is based upon standards set out by the government. In the United States, dam and reservoir design is regulated by the US Army Corps of Engineers (USACE). Design of a dam and reservoir follows guidelines set by the USACE and covers topics such as design flow rates in consideration of meteorological, topographic and soil conditions.