Conservation of energy
In physics and chemistry, the law of conservation of energy states that the total energy of an isolated system remains constant; energy can be neither created nor destroyed, only transformed from one form to another. For instance, chemical energy is converted to kinetic energy when a stick of dynamite explodes. If one adds up all the forms of energy released in the explosion, such as the kinetic energy and potential energy of the pieces, as well as heat and sound, one will get the exact decrease of chemical energy in the combustion of the dynamite. Classically, conservation of energy was distinct from conservation of mass. Conservation of energy can be rigorously proven by Noether's theorem as a consequence of continuous time translation symmetry. A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist; that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. For systems which do not have time translation symmetry, it may not be possible to define conservation of energy.
Examples include curved spacetimes in general relativity and time crystals in condensed matter physics. Ancient philosophers as far back as Thales of Miletus c. 550 BCE had inklings of the conservation of some underlying substance of which everything is made. However, there is no particular reason to identify this with what we know today as "mass-energy". Empedocles wrote that in this universal system, composed of four roots, "nothing comes to be or perishes". In 1605, Simon Stevinus was able to solve a number of problems in statics based on the principle that perpetual motion was impossible. In 1639, Galileo published his analysis of several situations—including the celebrated "interrupted pendulum"—which can be described as conservatively converting potential energy to kinetic energy and back again. He pointed out that the height to which a moving body rises is equal to the height from which it falls, and used this observation to infer the idea of inertia. The remarkable aspect of this observation is that the height to which a moving body ascends on a frictionless surface does not depend on the shape of the surface.
In 1669, Christiaan Huygens published his laws of collision. Among the quantities he listed as being invariant before and after the collision of bodies were both the sum of their linear momenta and the sum of their kinetic energies. However, the difference between elastic and inelastic collisions was not understood at the time; this led to a dispute among researchers as to which of these conserved quantities was the more fundamental. In his Horologium Oscillatorium, Huygens gave a much clearer statement regarding the height of ascent of a moving body, and connected this idea with the impossibility of perpetual motion. His study of the dynamics of pendulum motion was based on a single principle: that the center of gravity of a heavy object cannot lift itself. The fact that kinetic energy is a scalar, unlike linear momentum, which is a vector, and hence easier to work with, did not escape the attention of Gottfried Wilhelm Leibniz. It was Leibniz during 1676–1689 who first attempted a mathematical formulation of the kind of energy that is connected with motion.
Using Huygens' work on collision, Leibniz noticed that in many mechanical systems the quantity ∑ᵢ mᵢvᵢ² was conserved so long as the masses did not interact. He called this quantity the vis viva, or living force, of the system; the principle represents an accurate statement of the approximate conservation of kinetic energy in situations where there is no friction. Many physicists at that time, such as Newton, held that the conservation of momentum, which holds even in systems with friction, as defined by the momentum ∑ᵢ mᵢvᵢ, was the conserved vis viva. It was later shown that both quantities are conserved simultaneously given the proper conditions, such as an elastic collision. In 1687, Isaac Newton published his Principia, organized around the concept of force and momentum. However, researchers were quick to recognize that the principles set out in the book, while fine for point masses, were not sufficient to tackle the motions of rigid and fluid bodies; some other principles were required. The law of conservation of vis viva was championed by the father and son duo, Johann and Daniel Bernoulli.
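The elastic collision mentioned above, in which momentum and vis viva are conserved simultaneously, can be checked numerically. A minimal sketch, using the standard one-dimensional elastic-collision formulas and illustrative masses and velocities:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities after a one-dimensional elastic collision."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# Illustrative masses (kg) and velocities (m/s):
m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0
v1f, v2f = elastic_collision_1d(m1, v1, m2, v2)

p_before = m1 * v1 + m2 * v2                 # Newton's momentum
p_after = m1 * v1f + m2 * v2f
vis_viva_before = m1 * v1**2 + m2 * v2**2    # Leibniz's vis viva
vis_viva_after = m1 * v1f**2 + m2 * v2f**2
```

Both `p_before == p_after` and `vis_viva_before == vis_viva_after` hold (up to rounding), illustrating that the two rival conserved quantities coexist in the elastic case.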
Johann enunciated the principle of virtual work as used in statics in its full generality in 1715, while Daniel based his Hydrodynamica, published in 1738, on this single conservation principle. Daniel's study of the loss of vis viva of flowing water led him to formulate Bernoulli's principle, which states that the loss is proportional to the change in hydrodynamic pressure. Daniel also formulated the notion of work and efficiency for hydraulic machines. This focus on the vis viva by the continental physicists led to the discovery of stationarity principles governing mechanics, such as D'Alembert's principle.
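Bernoulli's relation between flow speed and hydrodynamic pressure can be illustrated with a small worked example; the fluid density and flow speeds below are assumed values, not taken from the text:

```python
rho = 1000.0        # density of water, kg/m^3 (assumed working fluid)
v1, v2 = 2.0, 6.0   # flow speeds in a wide and a narrow pipe section, m/s

# Along a horizontal streamline of steady, incompressible, inviscid flow:
# p1 + 0.5*rho*v1**2 = p2 + 0.5*rho*v2**2, so the pressure drop equals
# the change in dynamic pressure.
pressure_drop = 0.5 * rho * (v2**2 - v1**2)   # p1 - p2, in Pa
```

With these numbers the pressure in the fast, narrow section is 16 kPa lower than in the slow, wide one.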
Fracture mechanics is the field of mechanics concerned with the study of the propagation of cracks in materials. It uses methods of analytical solid mechanics to calculate the driving force on a crack and those of experimental solid mechanics to characterize the material's resistance to fracture. In modern materials science, fracture mechanics is an important tool used to improve the performance of mechanical components. It applies the physics of stress and strain behavior of materials, in particular the theories of elasticity and plasticity, to the microscopic crystallographic defects found in real materials in order to predict the macroscopic mechanical behavior of those bodies. Fractography is used with fracture mechanics to understand the causes of failures and to verify theoretical failure predictions against real-life failures; the prediction of crack growth is at the heart of the damage tolerance mechanical design discipline. There are three ways of applying a force to enable a crack to propagate: Mode I fracture – opening mode; Mode II fracture – sliding mode; Mode III fracture – tearing mode.
The processes of material manufacture, processing and forming may introduce flaws in a finished mechanical component. Internal and surface flaws arising from the manufacturing process are found in all metal structures. Not all such flaws are unstable under service conditions. Fracture mechanics is the analysis of flaws to discover those that are safe and those that are liable to propagate as cracks and so cause failure of the flawed structure. Despite these inherent flaws, it is possible through damage tolerance analysis to achieve the safe operation of a structure. Fracture mechanics as a subject for critical study has been around for only about a century and thus is relatively new. Fracture mechanics should attempt to provide quantitative answers to the following questions: What is the strength of the component as a function of crack size? What crack size can be tolerated under service loading, i.e. what is the maximum permissible crack size? How long does it take for a crack to grow from a certain initial size, for example the minimum detectable crack size, to the maximum permissible crack size?
What is the service life of a structure when a certain pre-existing flaw size is assumed to exist? During the period available for crack detection, how should the structure be inspected for cracks? Fracture mechanics was developed during World War I by English aeronautical engineer A. A. Griffith – thus the term Griffith crack – to explain the failure of brittle materials. Griffith's work was motivated by two contradictory facts: the stress needed to fracture bulk glass is around 100 MPa, while the theoretical stress needed for breaking the atomic bonds of glass is approximately 10,000 MPa. A theory was needed to reconcile these conflicting observations. Experiments on glass fibers that Griffith himself conducted suggested that the fracture stress increases as the fiber diameter decreases. Hence the uniaxial tensile strength, which had been used extensively to predict material failure before Griffith, could not be a specimen-independent material property. Griffith suggested that the low fracture strength observed in experiments, as well as the size-dependence of strength, was due to the presence of microscopic flaws in the bulk material.
To verify the flaw hypothesis, Griffith introduced an artificial flaw in his experimental glass specimens. The artificial flaw was in the form of a surface crack, much larger than other flaws in a specimen. The experiments showed that the product of the square root of the flaw length a and the stress at fracture σf was nearly constant, which is expressed by the equation σf √a ≈ C. An explanation of this relation in terms of linear elasticity theory is problematic: linear elasticity theory predicts that the stress at the tip of a sharp flaw in a linear elastic material is infinite. To avoid that problem, Griffith developed a thermodynamic approach to explain the relation that he observed. The growth of a crack, the extension of the surfaces on either side of the crack, requires an increase in the surface energy. Griffith found an expression for the constant C in terms of the surface energy of the crack by solving the elasticity problem of a finite crack in an elastic plate. The approach was: Compute the potential energy stored in a perfect specimen under a uniaxial tensile load.
Fix the boundary so that the applied load does no work and then introduce a crack into the specimen. The crack hence reduces the elastic energy near the crack faces. On the other hand, the crack increases the total surface energy of the specimen. Compute the change in the free energy as a function of the crack length. Failure occurs when the free energy attains a peak value at a critical crack length, beyond which the free energy decreases as the crack length increases, i.e. by causing fracture. Using this procedure, Griffith found that C = √(2Eγ/π), where E is the Young's modulus of the material and γ is the surface energy density of the material. Assuming E = 62 GPa and γ = 1 J/m² gives excellent agreement of Griffith's predicted fracture stress with experimental results for glass. Griffith's criterion
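Griffith's relation can be evaluated numerically. The modulus and surface energy below are the glass values quoted in the text; the crack length a is an assumed illustrative value:

```python
import math

E = 62e9       # Young's modulus of glass, Pa (value from the text)
gamma = 1.0    # surface energy density, J/m^2 (value from the text)
a = 1e-6       # crack half-length, m (assumed illustrative value)

C = math.sqrt(2 * E * gamma / math.pi)   # Griffith's constant
sigma_f = C / math.sqrt(a)               # from sigma_f * sqrt(a) = C
# sigma_f comes out on the order of 2e8 Pa (~200 MPa) for these inputs
```

A micrometre-scale flaw thus predicts a fracture stress near the ~100 MPa actually observed for bulk glass, far below the ~10,000 MPa theoretical bond strength.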
An atmosphere is a layer or a set of layers of gases surrounding a planet or other material body, held in place by the gravity of that body. An atmosphere is more likely to be retained if the gravity it is subject to is high and the temperature of the atmosphere is low. The atmosphere of Earth is composed of nitrogen, oxygen, argon, carbon dioxide and other gases in trace amounts. Oxygen is used by most organisms for respiration. The atmosphere helps to protect living organisms from genetic damage by solar ultraviolet radiation, solar wind and cosmic rays. The current composition of the Earth's atmosphere is the product of billions of years of biochemical modification of the paleoatmosphere by living organisms. The term stellar atmosphere describes the outer region of a star and includes the portion above the opaque photosphere. Stars with sufficiently low temperatures may have outer atmospheres with compound molecules. Atmospheric pressure at a particular location is the force per unit area perpendicular to a surface determined by the weight of the vertical column of atmosphere above that location.
On Earth, units of air pressure are based on the internationally recognized standard atmosphere, defined as 101.325 kPa. It is measured with a barometer. Atmospheric pressure decreases with increasing altitude due to the diminishing mass of gas above; the height at which the pressure from an atmosphere declines by a factor of e is called the scale height and is denoted by H. For an atmosphere with a uniform temperature, the scale height is proportional to the temperature and inversely proportional to the product of the mean molecular mass of dry air and the local acceleration of gravity at that location. For such a model atmosphere, the pressure declines exponentially with increasing altitude. However, atmospheres are not uniform in temperature, so estimation of the atmospheric pressure at any particular altitude is more complex. Surface gravity differs among the planets. For example, the large gravitational force of the giant planet Jupiter retains light gases such as hydrogen and helium that escape from objects with lower gravity.
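The isothermal model just described can be sketched in a few lines. The temperature and mean molecular mass below are assumed illustrative values for Earth, not figures from the text:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 288.0            # assumed uniform temperature, K
m = 4.81e-26         # mean molecular mass of dry air, kg (~28.97 u)
g = 9.81             # local gravitational acceleration, m/s^2
p0 = 101.325e3       # standard atmosphere surface pressure, Pa

H = k_B * T / (m * g)   # scale height: roughly 8.4 km with these values

def pressure(z):
    """Isothermal barometric formula: pressure declines exponentially
    with altitude, falling by a factor of e every scale height H."""
    return p0 * math.exp(-z / H)
```

Note how H is proportional to T and inversely proportional to the product m·g, exactly as stated above; raising the temperature or lowering the molecular mass thickens the atmosphere.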
Secondly, the distance from the Sun determines the energy available to heat atmospheric gas to the point where some fraction of its molecules' thermal motion exceeds the planet's escape velocity, allowing those molecules to escape a planet's gravitational grasp. Thus, distant and cold Titan and Pluto are able to retain their atmospheres despite their low gravities. Since a collection of gas molecules may be moving at a wide range of velocities, there will always be some fast enough to produce a slow leakage of gas into space. Lighter molecules move faster than heavier ones with the same thermal kinetic energy, so gases of low molecular weight are lost more rapidly than those of high molecular weight. It is thought that Venus and Mars may have lost much of their water when, after being photodissociated into hydrogen and oxygen by solar ultraviolet radiation, the hydrogen escaped. Earth's magnetic field helps to prevent this, as, normally, the solar wind would greatly enhance the escape of hydrogen. However, over the past 3 billion years Earth may have lost gases through the magnetic polar regions due to auroral activity, including a net 2% of its atmospheric oxygen.
The net effect, taking the most important escape processes into account, is that an intrinsic magnetic field does not protect a planet from atmospheric escape and that for some magnetizations the presence of a magnetic field works to increase the escape rate. Other mechanisms that can cause atmosphere depletion are solar wind-induced sputtering, impact erosion and sequestration—sometimes referred to as "freezing out"—into the regolith and polar caps. Atmospheres have dramatic effects on the surfaces of rocky bodies. Objects that have no atmosphere, or that have only an exosphere, have terrain that is covered in craters. Without an atmosphere, the planet has no protection from meteoroids, and all of them collide with the surface as meteorites and create craters. With an atmosphere, most meteoroids burn up as meteors before hitting the surface; when meteoroids do impact, the effects are often erased by the action of wind. As a result, craters are rare on objects with atmospheres. Wind erosion is a significant factor in shaping the terrain of rocky planets with atmospheres, and over time it can erase the effects of both craters and volcanoes.
In addition, since liquids cannot exist without pressure, an atmosphere allows liquid to be present at the surface, resulting in lakes and oceans. Earth and Titan are known to have liquids at their surface, and terrain on the planet suggests that Mars had liquid on its surface in the past. A planet's initial atmospheric composition is related to the chemistry and temperature of the local solar nebula during planetary formation and the subsequent escape of interior gases. The original atmospheres started with a rotating disc of gases that collapsed to form a series of spaced rings that condensed to form the planets. The planets' atmospheres were modified over time by various complex factors, resulting in quite different outcomes. The atmospheres of the planets Venus and Mars are principally composed of carbon dioxide, with small quantities of nitrogen, argon and traces of other gases. The composition of Earth's atmosphere is governed by the by-products of the life that it sustains.
In physics and materials science, plasticity describes the deformation of a material undergoing non-reversible changes of shape in response to applied forces. For example, a solid piece of metal being bent or pounded into a new shape displays plasticity, as permanent changes occur within the material itself. In engineering, the transition from elastic behavior to plastic behavior is called yield. Plastic deformation is observed in most materials, particularly metals, rocks, foams and skin. However, the physical mechanisms that cause plastic deformation can vary widely. At a crystalline scale, plasticity in metals is a consequence of dislocations. Such defects are relatively rare in most crystalline materials, but are numerous in some, where they form part of the crystal structure. In brittle materials such as rock and bone, plasticity is caused predominantly by slip at microcracks. In cellular materials such as liquid foams or biological tissues, plasticity is a consequence of bubble or cell rearrangements, notably T1 processes. For many ductile metals, tensile loading applied to a sample will initially cause it to behave in an elastic manner.
Each increment of load is accompanied by a proportional increment in extension. When the load is removed, the piece returns to its original size. However, once the load exceeds a threshold – the yield strength – the extension increases more rapidly than in the elastic region, and when the load is removed some degree of extension remains. Elastic deformation, however, is an approximation, and its quality depends on the time frame considered and the loading speed. If, as indicated in the graph opposite, the deformation includes elastic deformation, it is often referred to as "elasto-plastic deformation" or "elastic-plastic deformation". Perfect plasticity is a property of materials to undergo irreversible deformation without any increase in stresses or loads. Plastic materials that have been hardened by prior deformation, such as cold forming, may need increasingly higher stresses to deform further. Plastic deformation is generally also dependent on the deformation speed, i.e. higher stresses usually have to be applied to increase the rate of deformation. Such materials are said to deform visco-plastically.
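The elastic-perfectly-plastic idealization described above, linear up to yield and then constant stress, can be sketched as a one-line model; the modulus and yield strength are assumed values typical of mild steel, not figures from the text:

```python
def stress(strain, E=200e9, yield_stress=250e6):
    """Elastic-perfectly-plastic idealization: stress rises linearly
    with strain up to the yield strength, then stays constant however
    far the material is deformed (no work hardening, no rate effects)."""
    return min(E * strain, yield_stress)

# Below yield (elastic): response proportional to strain
s_elastic = stress(0.0005)   # 1.0e8 Pa
# Beyond yield (plastic): stress is capped at the yield strength
s_plastic = stress(0.01)     # 2.5e8 Pa
```

Work hardening or viscoplasticity would replace the flat plastic branch with one that rises with accumulated strain or with strain rate, respectively.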
The plasticity of a material is directly proportional to the ductility and malleability of the material. Plasticity in a crystal of pure metal is caused by two modes of deformation in the crystal lattice: slip and twinning. Slip is a shear deformation which moves the atoms through many interatomic distances relative to their initial positions. Twinning is the plastic deformation which takes place along two planes due to a set of forces applied to a given metal piece. Most metals show more plasticity when hot than when cold. Lead shows sufficient plasticity at room temperature, while cast iron does not possess sufficient plasticity for any forging operation even when hot; this property is of importance in forming and extruding operations on metals. Most metals are hence shaped hot. Crystalline materials contain uniform planes of atoms organized with long-range order. Planes may slip past each other along their close-packed directions, as is shown on the slip systems page; the result is a permanent change of shape within the crystal and plastic deformation.
The presence of dislocations increases the likelihood of planes slipping. On the nanoscale the primary plastic deformation in simple face-centered cubic metals is reversible, as long as there is no material transport in the form of cross-glide. The presence of other defects within a crystal may entangle dislocations or otherwise prevent them from gliding. When this happens, plasticity is localized to particular regions in the material. For crystals, these regions of localized plasticity are called shear bands. Microplasticity is a local phenomenon in metals: it occurs for stress values where the metal is globally in the elastic domain while some local areas are in the plastic domain. In amorphous materials, the discussion of "dislocations" is inapplicable, since the entire material lacks long-range order; these materials can nevertheless undergo plastic deformation. Since amorphous materials, like polymers, are not well-ordered, they contain a large amount of free volume, or wasted space. Pulling these materials in tension opens up these regions and can give materials a hazy appearance.
This haziness is the result of crazing, where fibrils are formed within the material in regions of high hydrostatic stress. The material may go from an ordered appearance to a "crazy" pattern of stretch marks. Some materials, especially those prone to martensitic transformations, deform in ways that are not well described by the classic theories of plasticity and elasticity. One of the best-known examples of this is nitinol, which exhibits pseudoelasticity: deformations which are reversible in the context of mechanical design, but irreversible in terms of thermodynamics. In the case of iron, the martensitic phase transformation from the bcc to the hcp phase induces significant work hardening. Foams and other cellular materials plastically deform when the bending moment exceeds the fully plastic moment. This applies to open cell foams; the foams can be made of any material with a plastic yield point, which includes rigid polymers and metals. This method of modeling the foam as beams is only valid if the ratio of the density of the foam to the density of the matter is less than 0.3.
In closed cell foams, the yield strength is increased if the material is under tension because of the membrane that spans the face of the cells. Soils, particularly clays, display a significant amount of inelasticity
In physics, the Navier–Stokes equations, named after Claude-Louis Navier and George Gabriel Stokes, describe the motion of viscous fluid substances. These balance equations arise from applying Isaac Newton's second law to fluid motion, together with the assumption that the stress in the fluid is the sum of a diffusing viscous term and a pressure term—hence describing viscous flow. The main difference between them and the simpler Euler equations for inviscid flow is that the Navier–Stokes equations, in the Froude limit (no external field), are not conservation equations, but rather a dissipative system, in the sense that they cannot be put into the quasilinear homogeneous form yₜ + A(y) yₓ = 0. The Navier–Stokes equations are useful because they describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe and air flow around a wing. The Navier–Stokes equations, in their full and simplified forms, help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of pollution, and many other things.
Coupled with Maxwell's equations, they can be used to study magnetohydrodynamics. The Navier–Stokes equations are also of great interest in a purely mathematical sense. Despite their wide range of practical uses, it has not yet been proven whether solutions always exist in three dimensions and, if they do exist, whether they are smooth – i.e. infinitely differentiable at all points in the domain. These are called the Navier–Stokes existence and smoothness problems. The Clay Mathematics Institute has called this one of the seven most important open problems in mathematics and has offered a US$1 million prize for a solution or a counterexample. The solution of the equations is a flow velocity. It is a vector field, since it is defined at every point in a region of space and an interval of time. Once the velocity field is calculated, other quantities of interest, such as pressure or temperature, may be found using additional equations and relations. This is different from what one sees in classical mechanics, where solutions are trajectories of position of a particle or deflection of a continuum.
Studying velocity instead of position makes more sense for a fluid. The Navier–Stokes momentum equation can be derived as a particular form of the Cauchy momentum equation, whose general convective form is Du/Dt = (1/ρ) ∇·σ + g. By setting the Cauchy stress tensor σ to be the sum of a viscosity term τ (the deviatoric stress) and a pressure term −pI, we arrive at ρ Du/Dt = −∇p + ∇·τ + ρg, where D/Dt is the material derivative, defined as D/Dt := ∂/∂t + u·∇; ρ is the density; u is the flow velocity; ∇· is the divergence; p is the pressure; t is time; τ is the deviatoric stress tensor, which has order two; and g represents body accelerations acting on the continuum, for example gravity, inertial accelerations, electrostatic accelerations, and so on. In this form, it is apparent that under the assumption of an inviscid fluid (no deviatoric stress) the Cauchy equations reduce to the Euler equations. Assuming conservation of mass, we can use the continuity equation ∂ρ/∂t + ∇·(ρu) = 0 to arrive at the conservation form of the equations of motion; here ⊗ denotes the outer product: u ⊗ v = u vᵀ.
The left side of the equation describes acceleration, and may be composed of time-dependent and convective components. The right side of the equation is in effect a summation of hydrostatic effects, the divergence of the deviatoric stress, and body forces. All non-relativistic balance equations, such as the Navier–Stokes equations, can be derived by beginning with the Cauchy equations and specifying the stress tensor through a constitutive relation. By expressing the deviatoric stress tensor in terms of viscosity and the fluid velocity gradient, and assuming constant viscosity, the above Cauchy equations will lead to the Navier–Stokes equations below. A significant feature of the Cauchy equation and all other continuum equations is the presence of convective acceleration: the effect of acceleration of a flow with respect to space.
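The structure just described, a convective acceleration term plus a dissipative viscous term, also appears in the one-dimensional viscous Burgers equation u_t + u u_x = ν u_xx, a scalar toy analogue of the Navier–Stokes momentum equation (not the equations themselves). A minimal explicit finite-difference sketch, with illustrative grid and time-step values:

```python
import numpy as np

# 1D viscous Burgers equation u_t + u*u_x = nu*u_xx: keeps the
# convective term (u*u_x) and the diffusive viscous term (nu*u_xx).
nx = 100
dx = 2 * np.pi / nx
nu, dt = 0.05, 1e-4                       # viscosity and time step (illustrative)
x = np.linspace(0, 2 * np.pi, nx, endpoint=False)
u = 1.0 + np.sin(x)                       # initial velocity field
e0 = np.sum(u**2)                         # initial kinetic-energy measure

for _ in range(2000):                     # integrate to t = 0.2
    conv = u * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)        # u*u_x
    diff = nu * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # nu*u_xx
    u = u + dt * (diff - conv)            # periodic boundaries via np.roll

# Viscosity makes the system dissipative: np.sum(u**2) ends below e0.
```

The kinetic-energy measure decays monotonically because of the viscous term, which is exactly the sense in which such systems are dissipative rather than conservative.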
Archimedes' principle states that the upward buoyant force exerted on a body immersed in a fluid, whether fully or partially submerged, is equal to the weight of the fluid that the body displaces, and acts in the upward direction at the center of mass of the displaced fluid. Archimedes' principle is a law of physics fundamental to fluid mechanics; it was formulated by Archimedes of Syracuse. In On Floating Bodies, Archimedes suggested that: Any object, wholly or partially immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object. Archimedes' principle allows the buoyancy of an object wholly or partially immersed in a fluid to be calculated. The downward force on the object is its weight. The upward, or buoyant, force on the object is that stated by Archimedes' principle above. Thus, the net force on the object is the difference between the magnitudes of the buoyant force and its weight. If this net force is positive, the object rises. In simple words, Archimedes' principle states that, when a body is partially or fully immersed in a fluid, it experiences an apparent loss in weight equal to the weight of the fluid displaced by the immersed part of the body.
Consider a cuboid immersed in a fluid, with one of its sides orthogonal to the direction of gravity. The fluid will exert a normal force on each face, but only the normal forces on the top and bottom faces will contribute to buoyancy. The pressure difference between the bottom and the top face is directly proportional to the height (the difference in depth of submersion). Multiplying the pressure difference by the area of a face gives a net force on the cuboid – the buoyancy – equaling in size the weight of the fluid displaced by the cuboid. By summing up sufficiently many arbitrarily small cuboids, this reasoning may be extended to irregular shapes, so, whatever the shape of the submerged body, the buoyant force is equal to the weight of the displaced fluid. Weight of displaced fluid = weight of object in vacuum − weight of object in fluid. The weight of the displaced fluid is directly proportional to the volume of the displaced fluid. The weight of the object in the fluid is reduced because of the force acting on it, called upthrust. In simple terms, the principle states that the buoyant force on an object is equal to the weight of the fluid displaced by the object, or the density of the fluid multiplied by the submerged volume times the gravitational acceleration: Fb = ρ g V. Thus, among submerged objects with equal masses, objects with greater volume have greater buoyancy.
Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting on it. Suppose that, when the rock is lowered into water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyant force: 10 − 3 = 7 newtons. Buoyancy reduces the apparent weight of objects that have sunk to the sea floor; it is easier to lift an object up through the water than it is to pull it out of the water. For a submerged object, Archimedes' principle can be reformulated as follows: apparent immersed weight = weight of object − weight of displaced fluid, which, inserted into the quotient of weights expanded by the mutual volume, density of object / density of fluid = weight of object / weight of displaced fluid, yields the formula below. The density of the immersed object relative to the density of the fluid can thus be calculated without measuring any volumes: density of object / density of fluid = weight / (weight − apparent immersed weight).
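The final formula can be checked against the rock example above:

```python
def relative_density(weight_in_vacuum, apparent_immersed_weight):
    """Density of object / density of fluid, from the two weighings
    described above (Archimedes' principle, fully submerged object)."""
    return weight_in_vacuum / (weight_in_vacuum - apparent_immersed_weight)

# The rock above: 10 N in vacuum, 7 N apparent weight in water,
# so the rock is 10/3, roughly 3.3, times as dense as water.
ratio = relative_density(10.0, 7.0)
```

No volume measurement is needed; the two weighings alone fix the density ratio.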
Example: If you drop wood into water, buoyancy will keep it afloat. Example: A helium balloon in a moving car. When increasing speed or driving in a curve, the air moves in the opposite direction to the car's acceleration. However, due to buoyancy, the balloon is pushed "out of the way" by the air and will drift in the same direction as the car's acceleration. When an object is immersed in a liquid, the liquid exerts an upward force, known as the buoyant force, proportional to the weight of the displaced liquid. The net force acting on the object is equal to the difference between the weight of the object and the weight of the displaced liquid. Equilibrium, or neutral buoyancy, is achieved when these two weights are equal. Archimedes' principle does not consider the surface tension acting on the body. Moreover, Archimedes' principle has been found to break down in complex fluids. There is an exception to Archimedes' principle known as the bottom (or side) case.
Sir George Stokes, 1st Baronet
Sir George Gabriel Stokes, 1st Baronet, was an Anglo-Irish physicist and mathematician. Born in County Sligo, Stokes spent all of his career at the University of Cambridge, where he was the Lucasian Professor of Mathematics from 1849 until his death in 1903. As a physicist, Stokes made seminal contributions to fluid dynamics, including the Navier–Stokes equations, and to physical optics, with notable works on polarization and fluorescence. As a mathematician, he popularised "Stokes' theorem" in vector calculus and contributed to the theory of asymptotic expansions. Stokes was made a baronet by the British monarch in 1889. In 1893 he received the Royal Society's Copley Medal, then the most prestigious scientific prize in the world, "for his researches and discoveries in physical science". He represented Cambridge University in the British House of Commons from 1887 to 1892, sitting as a Tory. Stokes also served as president of the Royal Society from 1885 to 1890 and was the Master of Pembroke College, Cambridge.
George Stokes was the youngest son of the Reverend Gabriel Stokes, a clergyman in the Church of Ireland who served as rector of Skreen, in County Sligo. Stokes's home life was influenced by his father's evangelical Protestantism. After attending schools in Skreen and Bristol, in 1837 Stokes matriculated at Pembroke College, Cambridge. Four years later he graduated as senior wrangler and first Smith's prizeman, achievements that earned him election as a fellow of the college. In accordance with the college statutes, Stokes had to resign the fellowship when he married in 1857. Twelve years later, under new statutes, he was re-elected to the fellowship, and he retained that place until 1902, when, on the day before his 83rd birthday, he was elected as the college's Master. Stokes did not hold that position for long, for he died at Cambridge on 1 February the following year and was buried in the Mill Road cemetery. In 1849, Stokes was appointed to the Lucasian professorship of mathematics at Cambridge, a position he held until his death in 1903.
On 1 June 1899, the jubilee of this appointment was celebrated there in a ceremony, attended by numerous delegates from European and American universities. A commemorative gold medal was presented to Stokes by the chancellor of the university and marble busts of Stokes by Hamo Thornycroft were formally offered to Pembroke College and to the university by Lord Kelvin. Stokes, made a baronet in 1889, further served his university by representing it in parliament from 1887 to 1892 as one of the two members for the Cambridge University constituency. During a portion of this period he was president of the Royal Society, of which he had been one of the secretaries since 1854. Since he was Lucasian Professor at this time, Stokes was the first person to hold all three positions simultaneously. Stokes was the oldest of the trio of natural philosophers, James Clerk Maxwell and Lord Kelvin being the other two, who contributed to the fame of the Cambridge school of mathematical physics in the middle of the 19th century.
Stokes's original work began about 1840, and from that date onwards the great extent of his output was only less remarkable than the brilliance of its quality. The Royal Society's catalogue of scientific papers gives the titles of over a hundred memoirs by him published down to 1883. Some of these are only brief notes, others are short controversial or corrective statements, but many are long and elaborate treatises. In scope, his work covered a wide range of physical inquiry but, as Marie Alfred Cornu remarked in his Rede lecture of 1899, the greater part of it was concerned with waves and the transformations imposed on them during their passage through various media. His first published papers, which appeared in 1842 and 1843, were on the steady motion of incompressible fluids and some cases of fluid motion. These were followed in 1845 by one on the friction of fluids in motion and the equilibrium and motion of elastic solids, and in 1850 by another on the effects of the internal friction of fluids on the motion of pendulums.
To the theory of sound he made several contributions, including a discussion of the effect of wind on the intensity of sound and an explanation of how the intensity is influenced by the nature of the gas in which the sound is produced. These inquiries together put the science of fluid dynamics on a new footing, and provided a key not only to the explanation of many natural phenomena, such as the suspension of clouds in air and the subsidence of ripples and waves in water, but also to the solution of practical problems, such as the flow of water in rivers and channels and the skin resistance of ships. His work on fluid motion and viscosity led to his calculating the terminal velocity for a sphere falling in a viscous medium. This became known as Stokes's law: he derived an expression for the frictional force exerted on spherical objects with small Reynolds numbers. His work is the basis of the falling-sphere viscometer, in which the fluid is stationary in a vertical glass tube. A sphere of known size and density is allowed to descend through the liquid.
If correctly selected, it reaches terminal velocity, which can be measured by the time it takes to pass two marks on the tube. Electronic sensing can be used for opaque fluids. Knowing the terminal velocity, the size and density of the sphere, and the density of the liquid, Stokes's law can be used to calculate the viscosity of the fluid. A series of steel ball bearings of different diameters is used in the classic experiment to improve the accuracy of the calculation. The school experiment uses glycerine as the fluid.
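The viscometer calculation just described follows from Stokes's law rearranged for viscosity, μ = 2(ρs − ρf) g r² / (9 v). A minimal sketch, with assumed illustrative values for a small steel ball falling through glycerine:

```python
def viscosity_from_falling_sphere(r, rho_sphere, rho_fluid, v_terminal, g=9.81):
    """Stokes's law rearranged for the falling-sphere viscometer:
    mu = 2*(rho_s - rho_f)*g*r**2 / (9*v_t). Valid only at small
    Reynolds number (creeping flow around the sphere)."""
    return 2 * (rho_sphere - rho_fluid) * g * r**2 / (9 * v_terminal)

# Assumed values: 1 mm radius steel ball in glycerine, timed at
# 1 cm/s terminal velocity between the two marks on the tube.
mu = viscosity_from_falling_sphere(r=1e-3, rho_sphere=7800.0,
                                   rho_fluid=1260.0, v_terminal=0.01)
# mu comes out near 1.4 Pa*s, the right order of magnitude for glycerine
```

Using several ball sizes and averaging, as in the classic experiment, reduces the sensitivity of the result to timing errors on any single run.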