1.
Continuum mechanics
–
The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century, and research in the area continues today. Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Continuum mechanics deals with physical properties of solids and fluids which are independent of any particular coordinate system in which they are observed. These physical properties are then represented by tensors, which are mathematical objects that have the required property of being independent of coordinate system. These tensors can be expressed in coordinate systems for computational convenience. Materials, such as solids, liquids and gases, are composed of molecules separated by "empty" space. On a microscopic scale, materials therefore have discontinuities. A continuum, by contrast, is a body that can be continually sub-divided into infinitesimal elements with properties being those of the bulk material. More specifically, the continuum hypothesis/assumption hinges on the concepts of a representative elementary volume (RVE) and separation of scales based on the Hill-Mandel condition. The latter then provides a micromechanics basis for finite elements. The levels of the statistical volume element (SVE) and the RVE link continuum mechanics to statistical mechanics. The RVE may be assessed only in a limited way via experimental testing: when the constitutive response becomes spatially homogeneous. Specifically for fluids, the Knudsen number is used to assess to what extent the approximation of continuity can be made. As an intuitive illustration, consider traffic on a highway, with just one lane for simplicity: individual cars are discrete, yet at a large enough scale the traffic can be described by a continuous density.
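The Knudsen number mentioned above is simply the ratio of the molecular mean free path to a characteristic length scale of the flow; a minimal sketch in Python (the mean free path below is an assumed, order-of-magnitude value for air at sea level):

```python
def knudsen_number(mean_free_path_m: float, length_scale_m: float) -> float:
    """Kn = lambda / L; the continuum approximation is usually taken to
    hold when Kn is much less than 1 (commonly Kn < 0.01)."""
    return mean_free_path_m / length_scale_m

# Air at sea level: mean free path ~ 68 nm (illustrative value).
kn = knudsen_number(68e-9, 1.0)   # flow over a 1 m body
print(kn, kn < 0.01)              # deep in the continuum regime
```

For a micro-channel of comparable size to the mean free path, Kn approaches 1 and the continuum description breaks down.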
Continuum mechanics
–
Figure 1. Configuration of a continuum body
2.
Conservation of energy
–
In physics, the law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time. Energy can neither be created nor destroyed; rather, it transforms from one form to another. For instance, chemical energy can be converted to kinetic energy in the explosion of a stick of dynamite. A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist. That is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. Ancient philosophers as far back as Thales of Miletus c. 550 BCE had inklings of the conservation of some underlying substance of which everything is made. However, there is no particular reason to identify this with what we know today as "mass-energy". Empedocles wrote that in his universal system, composed of four roots, "nothing perishes"; instead, these elements suffer continual rearrangement. In 1605, Simon Stevinus was able to solve a number of problems in statics based on the principle that perpetual motion was impossible. In 1669, Christiaan Huygens published his laws of collision. However, the difference between elastic and inelastic collisions was not understood at the time. This led to a dispute as to which of these conserved quantities was the more fundamental. Huygens' study of the dynamics of pendulum motion was based on a single principle: that the center of gravity of a heavy object cannot lift itself. It was Leibniz during 1676–1689 who first attempted a mathematical formulation of the kind of energy that is connected with motion. He called this quantity the living force (vis viva) of the system.
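The statement that total energy is conserved can be illustrated with a free-falling mass, in which potential energy converts to kinetic energy while their sum stays fixed; a small sketch, neglecting air resistance:

```python
g = 9.81  # gravitational acceleration, m/s^2

def energies(mass_kg, height0_m, height_m):
    """Potential and kinetic energy of a mass dropped from rest at
    height0, now at the given height, assuming no air resistance."""
    pe = mass_kg * g * height_m                 # remaining potential energy
    ke = mass_kg * g * (height0_m - height_m)   # energy converted so far
    return pe, ke

m, h0 = 2.0, 10.0
for h in (10.0, 5.0, 0.0):
    pe, ke = energies(m, h0, h)
    print(h, pe + ke)   # the total stays at m*g*h0 = 196.2 J throughout
```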
Conservation of energy
–
Gottfried Leibniz
Conservation of energy
–
Gaspard-Gustave Coriolis
Conservation of energy
–
James Prescott Joule
3.
Conservation of mass
–
Hence, the quantity of mass is "conserved" over time. The concept of mass conservation is widely used in many fields such as chemistry, mechanics, and fluid dynamics. In the context of special relativity, the mass-energy equivalence theorem states that mass conservation is equivalent to total energy conservation, which is the first law of thermodynamics. For a thermodynamically closed system in general relativity, mass is only approximately conserved; for a discussion, see mass in general relativity. Jainism, a non-creationist philosophy based on the teachings of Mahavira, states that the universe and its constituents such as matter cannot be destroyed or created; its modes are characterised by creation and destruction. A principle of the conservation of matter was also stated by Nasīr al-Dīn al-Tūsī. He wrote that "A body of matter cannot disappear completely. It only changes its form, condition, color and other properties and turns into a different complex or elementary matter". The principle of conservation of mass was first outlined by Mikhail Lomonosov in 1748. He proved it by experiments, though this is sometimes challenged. Antoine Lavoisier had expressed these ideas in 1774. Others whose ideas pre-dated the work of Lavoisier include Joseph Black and Jean Rey. The conservation of mass was obscure for millennia because of the buoyancy effect of the Earth's atmosphere on the weight of gases.
Conservation of mass
–
The Russian scientist Mikhail Lomonosov discovered the law of mass conservation in 1756 by experiment, and came to the conclusion that the phlogiston theory is incorrect.
Conservation of mass
–
Antoine Lavoisier's discovery of the law of conservation of mass led to many new findings in the 19th century. Joseph Proust's law of definite proportions and John Dalton's atomic theory branched from the discoveries of Antoine Lavoisier. Lavoisier's quantitative experiments revealed that combustion involved oxygen rather than what was previously thought to be phlogiston.
4.
Momentum
–
In classical mechanics, linear momentum, translational momentum, or simply momentum is the product of the mass and velocity of an object, quantified in kilogram-meters per second. It is dimensionally equivalent to impulse, the product of force and time, quantified in newton-seconds. Newton's second law of motion states that the change in linear momentum of a body is equal to the net impulse acting on it. For example, a heavy truck moving rapidly has a large momentum, and it takes a large or prolonged force to bring it up to that speed or to stop it afterwards. If the truck were lighter, or moving more slowly, then it would require less impulse to start or stop. Linear momentum is also a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum cannot change. In classical mechanics, conservation of linear momentum is implied by Newton's laws. With appropriate definitions, a linear momentum conservation law also holds in electrodynamics, quantum mechanics, quantum field theory, and general relativity. It is ultimately an expression of one of the fundamental symmetries of space and time: translational symmetry. Linear momentum depends on the frame of reference: observers in different frames would find different values of the linear momentum of a system, but each would observe that the value does not change provided the system is isolated. Momentum has a direction as well as a magnitude. Quantities that have both a magnitude and a direction are known as vector quantities. Because momentum has a direction, it can be used to predict the resulting directions of objects after they collide, as well as their speeds. Below, the basic properties of momentum are described in one dimension.
Momentum
–
In a game of pool, momentum is conserved; that is, if one ball stops dead after the collision, the other ball will continue away with all the momentum. If the moving ball continues or is deflected then both balls will carry a portion of the momentum from the collision.
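The pool-ball scenario can be sketched with the standard closed-form result for a perfectly elastic one-dimensional collision; the ball mass below is an assumed value:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities after a perfectly elastic 1D collision,
    from conservation of momentum and kinetic energy."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# Equal masses: the moving ball stops dead, the other carries all the momentum
v1f, v2f = elastic_collision_1d(0.17, 2.0, 0.17, 0.0)
print(v1f, v2f)   # 0.0 and 2.0 m/s
```

Total momentum m1*v1 + m2*v2 is the same before and after, whatever the masses.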
5.
Solid mechanics
–
Solid mechanics is fundamental for civil, aerospace, nuclear, and mechanical engineering, and for many branches of physics such as materials science. It has specific applications such as understanding the anatomy of living beings and the design of dental prostheses and surgical implants. One of the most common practical applications of solid mechanics is the Euler-Bernoulli beam equation. Solid mechanics extensively uses tensors to describe stresses, strains, and the relationship between them. As shown in the following table, solid mechanics inhabits a central place within continuum mechanics. The field of rheology presents an overlap between solid and fluid mechanics. A solid material has a rest shape, and its shape departs from the rest shape due to stress. The amount of departure from the rest shape is called deformation, and the proportion of deformation to original size is called strain. If the applied stress is sufficiently low, the strain is directly proportional to the stress; this region of deformation is known as the linearly elastic region. It is most common for analysts in solid mechanics to use linear material models, due to ease of computation. However, real materials often exhibit non-linear behavior. As new materials are used and old ones are pushed to their limits, non-linear material models are becoming more common. Materials that deform proportionally to the applied load can be described by the linear elasticity equations such as Hooke's law. Materials may instead respond viscoelastically, which implies that the response has time-dependence. Plastically behaving materials are those that behave elastically generally only while the applied stress is less than a yield value.
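As a small illustration of the Euler-Bernoulli beam equation mentioned above, the midspan deflection of a simply supported beam under a central point load follows the standard closed-form result delta = F L^3 / (48 E I); the section dimensions and load below are assumed values:

```python
def center_deflection_simply_supported(F_N, L_m, E_Pa, I_m4):
    """Midspan deflection of a simply supported beam with a central
    point load, from Euler-Bernoulli beam theory: F*L^3 / (48*E*I)."""
    return F_N * L_m**3 / (48.0 * E_Pa * I_m4)

# Steel beam (E ~ 200 GPa), 2 m span, square 40 mm x 40 mm section
b = h = 0.04
I = b * h**3 / 12.0                      # second moment of area of the section
print(center_deflection_simply_supported(1000.0, 2.0, 200e9, I), "m")
```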
Solid mechanics
–
Continuum mechanics
6.
Stress (mechanics)
–
For example, when a vertical bar is supporting a weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a closed container under pressure, each particle gets pushed against by all the surrounding particles, and the container walls and the pressure-inducing surface push against them in reaction. These macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Stress inside a material may arise by various mechanisms, such as reaction to external forces applied to the bulk material or to its surface. In gases, only deformations that change the volume generate persistent elastic stress. However, if the deformation is gradually changing with time, even in fluids there will usually be some viscous stress opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress. Significant stress may exist even when deformation is nearly non-existent. Stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress that exceeds certain strength limits of the material will result in permanent deformation or failure, and may even change its crystal structure and chemical composition. In some branches of engineering, the term stress is occasionally used in a looser sense as a synonym of "internal force". Since ancient times humans have been consciously aware of stress inside materials. With the mathematical tools then available, Augustin-Louis Cauchy was able to give the first rigorous mathematical model for stress in a homogeneous medium. Cauchy observed that the force across an imaginary surface was a linear function of its normal vector; and, moreover, that the stress must be described by a symmetric tensor.
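Cauchy's observation that the force across an imaginary surface is a linear function of its normal can be sketched directly: the traction vector is the stress tensor applied to the unit normal. The stress values below are arbitrary illustrative numbers:

```python
import numpy as np

# A symmetric Cauchy stress tensor (units: Pa, arbitrary values);
# symmetry follows from the balance of angular momentum, as Cauchy observed.
sigma = np.array([[50.0, 10.0,  0.0],
                  [10.0, 20.0,  5.0],
                  [ 0.0,  5.0, -30.0]])

def traction(stress, normal):
    """Cauchy's relation: the traction across an imaginary surface is a
    linear function of its unit normal, t = sigma . n."""
    n = np.asarray(normal, dtype=float)
    return stress @ (n / np.linalg.norm(n))

print(traction(sigma, [1, 0, 0]))   # traction on the x-face: [50, 10, 0]
```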
Stress (mechanics)
–
Built-in stress inside the plastic protractor, developed when the protractor was formed into shape, is revealed by the effect of polarized light.
Stress (mechanics)
–
Roman-era bridge in Switzerland
Stress (mechanics)
–
Inca bridge on the Apurimac River
Stress (mechanics)
–
Glass vase with the craquelé effect. The cracks are the result of brief but intense stress created when the semi-molten piece is briefly dipped in water.
7.
Deformation (mechanics)
–
Deformation in continuum mechanics is the transformation of a body from a reference configuration to a current configuration. A configuration is a set containing the positions of all particles of the body. A deformation may be caused by external loads, changes in temperature, moisture content, or chemical reactions, etc. Strain is a description of deformation in terms of relative displacement of particles in the body that excludes rigid-body motions. The relation between stresses and induced strains is expressed by constitutive equations, e.g. Hooke's law for linear elastic materials. Deformations which are recovered after the stress field has been removed are called elastic deformations. In this case, the continuum completely recovers its original configuration. On the other hand, irreversible deformations, such as plastic deformations, remain even after stresses have been removed. Another type of irreversible deformation is viscous deformation, the irreversible part of viscoelastic deformation. In the case of elastic deformations, the response function linking strain to the deforming stress is the compliance tensor of the material. Strain is a measure of deformation representing the displacement between particles in the body relative to a reference length. Such a measure does not distinguish between rigid body motions (translations and rotations) and changes in shape of the body. A displacement has units of length, so strain, as a ratio of lengths, is dimensionless. We could, for example, define strain to be ε = F − I, where I is the identity tensor and F is the deformation gradient. Hence strains are usually expressed as a decimal fraction, a percentage or in parts-per notation.
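The point that a naive strain measure such as F − I does not vanish for rigid rotations, while finite-strain measures do, can be checked numerically; the stretch and rotation below are assumed examples:

```python
import numpy as np

I2 = np.eye(2)
# Deformation gradient F for a 10% uniaxial stretch along x (assumed example)
F = np.array([[1.10, 0.00],
              [0.00, 1.00]])
strain = F - I2          # the simple measure eps = F - I, dimensionless
print(strain[0, 0])      # ~0.1 normal strain along x

# A pure rotation changes no shape, yet F - I is nonzero for it; measures
# such as the Green-Lagrange strain E = (F^T F - I)/2 vanish for rotations.
th = 0.3
R = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])
E = 0.5 * (R.T @ R - I2)
print(np.allclose(E, 0.0))   # True
```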
Deformation (mechanics)
–
The deformation of a thin straight rod into a closed loop. The length of the rod remains almost unchanged during the deformation, which indicates that the strain is small. In this particular case of bending, displacements associated with rigid translations and rotations of material elements in the rod are much greater than displacements associated with straining.
8.
Compatibility (mechanics)
–
Compatibility is the study of the conditions under which such a displacement field can be guaranteed. In the description of a solid body we imagine the body to be composed of a set of infinitesimal volumes or material points. Each volume is assumed to be connected to its neighbours without any gaps or overlaps. Mathematical conditions have to be satisfied to ensure that gaps/overlaps do not develop when a continuum body is deformed. A body that deforms without developing any gaps/overlaps is called a compatible body. Compatibility conditions are mathematical conditions that determine whether a particular deformation will leave a body in a compatible state. In the context of infinitesimal strain theory, these conditions are equivalent to stating that the displacements in a body can be obtained by integrating the strains. Such an integration is possible if Saint-Venant's tensor R := ∇ × (∇ × ε) vanishes in a simply-connected body, where ε is the infinitesimal strain tensor. For finite deformations the compatibility conditions take the form R := ∇ × F = 0, where F is the deformation gradient. The compatibility conditions in linear elasticity are obtained by observing that there are six strain-displacement relations that are functions of only three unknown displacements. This suggests that the three displacements may be removed from the system of equations without loss of information. The resulting expressions in terms of only the strains provide constraints on the possible forms of a strain field. The same condition is also sufficient to ensure compatibility in a simply connected body. The quantity R^i_{jkl} represents the mixed components of the Riemann-Christoffel curvature tensor. The problem of compatibility in continuum mechanics involves the determination of allowable single-valued continuous fields on simply connected bodies.
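In two dimensions, Saint-Venant's condition reduces to d²ε_xx/dy² + d²ε_yy/dx² = 2 d²ε_xy/dxdy, which can be checked numerically; a small sketch with one compatible and one incompatible strain field, both chosen purely for illustration:

```python
def fd2(f, x, y, dx=0, dy=0, h=1e-3):
    """Second (possibly mixed) partial derivative of f(x, y) by central differences."""
    if dx == 2:
        return (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    if dy == 2:
        return (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    # mixed derivative d2f/dxdy
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)

def compat_residual(exx, eyy, exy, x, y):
    """2D Saint-Venant condition: the residual
    d2(exx)/dy2 + d2(eyy)/dx2 - 2 d2(exy)/dxdy vanishes for a compatible field."""
    return fd2(exx, x, y, dy=2) + fd2(eyy, x, y, dx=2) - 2 * fd2(exy, x, y)

# Compatible field, derived from the displacement u = (x^2, x*y)
r1 = compat_residual(lambda x, y: 2 * x, lambda x, y: x, lambda x, y: y / 2, 1.0, 1.0)
# Incompatible field: exx = y^2 cannot come from any displacement field
r2 = compat_residual(lambda x, y: y * y, lambda x, y: 0.0, lambda x, y: 0.0, 1.0, 1.0)
print(round(r1, 6), round(r2, 6))   # ~0 for compatible, ~2 for incompatible
```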
Compatibility (mechanics)
–
Figure 1. Motion of a continuum body.
9.
Finite strain theory
–
In this case, the undeformed and deformed configurations of the continuum are significantly different and a clear distinction has to be made between them. This is commonly the case with elastomers, plastically-deforming materials, fluids and biological soft tissue. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. A change in the configuration of a continuum body can be described by a displacement field. A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. Relative displacement between particles occurs if and only if deformation has occurred. If displacement occurs without deformation, then it is deemed a rigid-body displacement. The displacement of particles indexed by variable i may be expressed as follows. The vector joining the positions of a particle in the undeformed configuration P_i and deformed configuration p_i is called the displacement vector. The partial derivative of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor ∇_X u. The α_{Ji} are the direction cosines between the material and spatial coordinate systems with unit vectors E_J and e_i, respectively. Due to the assumption of continuity of the mapping χ, F has the inverse H = F⁻¹, where H is the spatial deformation gradient tensor. Consider a particle or material point P with position vector X = X_I E_I in the undeformed configuration. The coordinate systems for the undeformed and deformed configurations can be superimposed for convenience.
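The quantities named above can be computed for a concrete deformation; a minimal sketch using simple shear as an assumed example, forming the right Cauchy-Green tensor C = FᵀF, the Green-Lagrange strain E = ½(C − I), and the inverse H = F⁻¹:

```python
import numpy as np

# Deformation gradient for simple shear of amount gamma (assumed example)
gamma = 0.5
F = np.array([[1.0, gamma],
              [0.0, 1.0]])

C = F.T @ F                    # right Cauchy-Green deformation tensor
E = 0.5 * (C - np.eye(2))      # Green-Lagrange finite strain tensor
H = np.linalg.inv(F)           # spatial deformation gradient, H = F^-1
print(C)
print(np.allclose(F @ H, np.eye(2)))   # True: H really inverts F
```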
Finite strain theory
–
Figure 1. Motion of a continuum body.
10.
Infinitesimal strain theory
–
With this assumption, the equations of continuum mechanics are considerably simplified. This approach may also be called small deformation theory or small displacement-gradient theory. It is contrasted with finite strain theory, where the opposite assumption is made. In such a linearization, the second-order terms of the finite strain tensor are neglected. Therefore, the material and spatial displacement gradient components are approximately equal. The diagonal elements of the infinitesimal strain tensor are the normal strains in the coordinate directions. Certain operations on the strain tensor give the same result regardless of the coordinate system; the results of these operations are called strain invariants. Since there are no shear strain components in the principal coordinate system, the principal strains represent the maximum and minimum stretches of an elemental volume. An octahedral plane is one whose normal makes equal angles with the three principal directions. Several definitions of equivalent strain can be found in the literature. Thus, a displacement solution does not generally exist for an arbitrary choice of strain components. Therefore, some restrictions, named compatibility equations, are imposed upon the strain components. With the addition of the three compatibility equations the number of independent equations is reduced to three, matching the number of unknown displacement components. These constraints on the strain tensor were discovered by Saint-Venant, and are called the "Saint-Venant compatibility equations". The compatibility equations serve to assure a single-valued continuous displacement function u_i.
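In the infinitesimal theory, the strain tensor is the symmetric part of the displacement gradient; a minimal sketch (the gradient values below are assumed):

```python
import numpy as np

def small_strain(grad_u):
    """Infinitesimal strain tensor: the symmetric part of the
    displacement gradient, eps = (grad u + grad u^T) / 2."""
    g = np.asarray(grad_u, dtype=float)
    return 0.5 * (g + g.T)

grad_u = np.array([[1e-3, 4e-4],
                   [0.0, -2e-4]])
eps = small_strain(grad_u)
print(eps[0, 0], eps[1, 1])   # normal strains on the diagonal
print(eps[0, 1])              # shear component, 2e-4
```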
Infinitesimal strain theory
–
Figure 1. Two-dimensional geometric deformation of an infinitesimal material element.
11.
Elasticity (physics)
–
Solid objects will deform when adequate forces are applied to them. If the material is elastic, the object will return to its initial shape and size when these forces are removed. The physical reasons for elastic behavior can be quite different for different materials. In metals, the atomic lattice changes shape when forces are applied. When forces are removed, the lattice goes back to the original lower-energy state. For rubbers and other polymers, elasticity is caused by the stretching of polymer chains when forces are applied. Perfect elasticity is an approximation of the real world. Most materials which possess elasticity in practice remain purely elastic only up to very small deformations. In engineering, the amount of elasticity of a material is determined by two types of parameter: the modulus and the elastic limit. The SI unit of modulus is the pascal (Pa). A higher modulus typically indicates that the material is harder to deform. When describing the relative elasticities of two materials, both the modulus and the elastic limit have to be considered. Rubbers typically tend to stretch a lot and so appear more elastic than metals in everyday experience, but this impression can be misleading: of two rubber materials with the same elastic limit, the one with the lower modulus will appear to be the more elastic. The various moduli apply to different kinds of deformation. For instance, Young's modulus applies to extension/compression of a body, whereas the shear modulus applies to its shear.
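The role of Young's modulus in extension can be shown with a one-line calculation; the wire dimensions and load below are assumed values:

```python
def axial_strain(force_N, area_m2, E_Pa):
    """Linear-elastic extension: strain = stress / E (a form of Hooke's law)."""
    return (force_N / area_m2) / E_Pa

# 1 kN hanging from a 10 mm^2 steel wire (E ~ 200 GPa): stress is 100 MPa
print(axial_strain(1000.0, 1e-5, 200e9))   # 5e-4, i.e. 0.05 % elongation
```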
Elasticity (physics)
–
Continuum mechanics
12.
Linear elasticity
–
Linear elasticity is the mathematical study of how solid objects deform and become internally stressed due to prescribed loading conditions. Linear elasticity models materials as continua; it is a branch of continuum mechanics. The fundamental "linearizing" assumptions of linear elasticity are: infinitesimal strains and linear relationships between the components of stress and strain. In addition, linear elasticity is valid only for stress states that do not produce yielding. These assumptions are reasonable for many engineering materials and design scenarios. Linear elasticity is therefore used extensively in structural analysis, often with the aid of finite element analysis. The system of differential equations is completed by a set of linear algebraic constitutive relations. For elastic materials, Hooke's law relates the unknown stresses and strains. Note: the Einstein summation convention of summing on repeated indices is used below. Equations of motion: these are 3 independent equations with 6 independent unknowns (the stresses). Strain-displacement equations: ε_ij = ½(u_{i,j} + u_{j,i}), where ε_ij = ε_ji is the strain. These are 6 independent equations relating strains and displacements, with 9 independent unknowns. Constitutive equations: these are 6 independent equations relating stresses and strains.
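For an isotropic material, the constitutive equations reduce to Hooke's law with two Lamé parameters, σ_ij = λ tr(ε) δ_ij + 2μ ε_ij; a minimal sketch (the modulus and Poisson's ratio below are assumed, steel-like values):

```python
import numpy as np

def lame(E, nu):
    """Lamé parameters from Young's modulus E and Poisson's ratio nu."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    return lam, mu

def hooke_isotropic(eps, E=200e9, nu=0.3):
    """Isotropic Hooke's law: sigma = lambda*tr(eps)*I + 2*mu*eps."""
    lam, mu = lame(E, nu)
    return lam * np.trace(eps) * np.eye(3) + 2 * mu * eps

eps = np.zeros((3, 3))
eps[0, 0] = 1e-3                  # a uniaxial strain state (assumed)
sigma = hooke_isotropic(eps)
print(sigma[0, 0] / 1e6, "MPa")   # axial stress; lateral stresses are nonzero too
```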
Linear elasticity
–
Spherical coordinates (r, θ, φ) as commonly used in physics: radial distance r, polar angle θ (theta), and azimuthal angle φ (phi). The symbol ρ (rho) is often used instead of r.
13.
Plasticity (physics)
–
In physics and materials science, plasticity describes the deformation of a material undergoing non-reversible changes of shape in response to applied forces. For example, a solid piece of metal being pounded into a new shape displays plasticity as permanent changes occur within the material itself. In engineering, the transition from elastic behavior to plastic behavior is called yield. Plastic deformation is observed in most materials, particularly metals, soils, rocks, concrete, foams, and skin. However, the physical mechanisms that cause plastic deformation can vary widely. At a crystalline scale, plasticity in metals is usually a consequence of dislocations. In brittle materials such as rock, concrete and bone, plasticity is caused predominantly by slip at microcracks. For many ductile metals, tensile loading applied to a sample will initially cause it to behave in an elastic manner. Each increment of load is accompanied by a proportional increment in extension, and when the load is removed, the piece returns to its original size. Elastic deformation, however, is an approximation, and its quality depends on the time frame considered and loading speed. If, as indicated in the graph opposite, the deformation includes elastic deformation, it is also often referred to as "elasto-plastic deformation" or "elastic-plastic deformation". Perfect plasticity is a property of materials to undergo irreversible deformation without any increase in stresses or loads. Plastic materials with hardening necessitate increasingly higher stresses to result in further plastic deformation. Generally, plastic deformation is also dependent on the deformation speed, i.e. higher stresses usually have to be applied to increase the rate of deformation.
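The elastic-then-plastic response described above can be sketched with the simplest one-dimensional model, elastic-perfect plasticity with no hardening; the modulus and yield stress below are assumed, steel-like values:

```python
def stress_update_1d(strain, E=200e9, sigma_y=250e6):
    """Elastic-perfectly-plastic 1D model: stress follows E*strain
    until yield, then stays capped at +/- sigma_y (no hardening)."""
    trial = E * strain
    if abs(trial) <= sigma_y:
        return trial          # elastic: load increments are proportional
    return sigma_y if trial > 0 else -sigma_y   # plastic flow at the yield value

print(stress_update_1d(0.0005) / 1e6)   # 100.0 MPa, still elastic
print(stress_update_1d(0.0050) / 1e6)   # 250.0 MPa, perfectly plastic
```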
Plasticity (physics)
–
Plasticity under a spherical Nanoindenter in (111) Copper. All particles in ideal lattice positions are omitted and the color code refers to the von Mises stress field.
Plasticity (physics)
–
1: True elastic limit
14.
Bending
–
In applied mechanics, bending characterizes the behavior of a slender structural element subjected to an external load applied perpendicularly to a longitudinal axis of the element. When the length is considerably longer than the thickness, the element is called a beam. For example, a closet rod sagging under the weight of clothes on clothes hangers is an example of a beam experiencing bending. A thin-walled, short tube supported at its ends and loaded laterally is an example of a shell experiencing bending. In the absence of a qualifier, the term bending is ambiguous because bending can occur locally in all objects. A beam deforms and stresses develop inside it when a transverse load is applied on it. In the quasi-static case, the amount of bending and the stresses that develop are assumed not to change over time. In a sagging beam, material near the top is compressed while material near the bottom is stretched; these last two forces, compressive and tensile, form a moment as they are equal in magnitude and opposite in direction. This bending moment resists the sagging characteristic of a beam experiencing bending. The stress distribution in a beam can be predicted quite accurately when some simplifying assumptions are used. In the Euler-Bernoulli theory of slender beams, a major assumption is that 'plane sections remain plane'. In other words, any deformation due to shear across the section is not accounted for. Also, this linear stress distribution is only applicable if the maximum stress is less than the yield stress of the material. For stresses that exceed yield, refer to plastic bending. At yield, the maximum stress experienced in the section is defined as the flexural strength.
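Under the Euler-Bernoulli assumptions, the linear stress distribution gives a maximum fiber stress of sigma = M c / I, where c is the distance from the neutral axis to the outermost fiber; a minimal sketch with assumed section dimensions and moment:

```python
def max_bending_stress(M_Nm, c_m, I_m4):
    """Maximum fiber stress of the linear (Euler-Bernoulli) distribution:
    sigma = M*c/I, valid while below the yield stress of the material."""
    return M_Nm * c_m / I_m4

# Rectangular section 20 mm wide x 60 mm deep, moment of 500 N*m (assumed)
b, h = 0.02, 0.06
I = b * h**3 / 12.0
print(max_bending_stress(500.0, h / 2, I) / 1e6, "MPa")   # ~41.7 MPa
```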
Bending
–
Bending of an I-beam
15.
Hooke's law
–
The law is named after the 17th-century British physicist Robert Hooke. He first stated the law in 1660 as a Latin anagram. He published the solution of his anagram in 1678 as: ut tensio, sic vis ("as the extension, so the force"). An elastic body or material for which this equation can be assumed is said to be linear-elastic or Hookean. Hooke's law is only a first-order linear approximation to the real response of springs and other elastic bodies to applied forces. Many materials will noticeably deviate from Hooke's law well before their elastic limits are reached. On the other hand, Hooke's law is an accurate approximation for most solid bodies, as long as the deformations are small enough. It is also the fundamental principle behind the balance wheel of the mechanical clock. Suppose that the spring has reached a state of equilibrium, where its length is not changing anymore. Let X be the amount by which the free end of the spring was displaced from its "relaxed" position. Hooke's law states that F = k X or, equivalently, X = F / k, where k is a constant characteristic of the spring. Moreover, the same formula holds when the spring is compressed, with F and X both negative in that case. If F is instead taken to be the restoring force exerted by the spring, the equation becomes F = − k X, since the direction of the restoring force is opposite to that of the displacement. The law also applies to the sideways bending of an elastic beam: the displacement X in this case is the deflection of the beam, measured in the transversal direction, relative to its unloaded shape. The law also applies when a stretched steel wire is twisted by pulling on a lever attached to one end.
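A minimal sketch of the spring form of the law, using the restoring-force sign convention F = −kX; the spring constant below is an assumed value:

```python
def spring_force(k_N_per_m, x_m):
    """Restoring force of a Hookean spring, F = -k*x: it opposes the
    displacement x from the relaxed position, for stretch or compression."""
    return -k_N_per_m * x_m

k = 150.0                      # spring constant, N/m (assumed)
print(spring_force(k, 0.02))   # about -3.0 N for a 2 cm extension
print(spring_force(k, -0.02))  # about +3.0 N when compressed by 2 cm
```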
Hooke's law
–
Hooke's law: the force is proportional to the extension
16.
Material failure theory
–
Failure theory is the science of predicting the conditions under which solid materials fail under the action of external loads. The failure of a material is usually classified into brittle failure (fracture) or ductile failure (yield). Depending on the conditions, most materials can fail in a brittle or ductile manner or both. However, for most practical situations, a material may be classified as either brittle or ductile. Though failure theory has been in development for over 200 years, its level of acceptability is yet to reach that of continuum mechanics. In mathematical terms, failure theory is expressed in the form of various failure criteria which are valid for specific materials. Failure criteria are functions in stress or strain space which separate "failed" states from "unfailed" states. Since a precise physical definition of a "failed" state is not easily quantified, several working definitions are in use in the engineering community. Often, phenomenological failure criteria of the same form are used to predict brittle failure and ductile yield. In materials science, failure is the loss of load-carrying capacity of a material unit. This definition per se introduces the fact that failure can be examined in different scales, from microscopic to macroscopic. Such methodologies are useful for gaining insight in the cracking of specimens and simple structures under well-defined global load distributions. Microscopic failure considers the initiation and propagation of a crack. Failure criteria in this case are related to microscopic fracture. Some of the most popular failure models in this area are the micromechanical failure models, which combine the advantages of continuum mechanics and classical fracture mechanics.
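A failure criterion as a function separating "failed" from "unfailed" states can be sketched with the von Mises criterion, a common choice for ductile yield; the yield stress below is an assumed value:

```python
import numpy as np

def von_mises(sigma):
    """Equivalent (von Mises) stress of a 3x3 Cauchy stress tensor."""
    dev = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.sum(dev * dev))

def has_failed(sigma, sigma_y):
    """Criterion function f = sigma_vm - sigma_y: 'failed' when f >= 0."""
    return von_mises(sigma) >= sigma_y

s = np.diag([300e6, 0.0, 0.0])   # uniaxial tension at 300 MPa (assumed state)
print(has_failed(s, 250e6))      # True: exceeds a 250 MPa yield stress
print(has_failed(s, 350e6))      # False: inside the elastic domain
```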
Material failure theory
–
Continuum mechanics
17.
Fracture mechanics
–
Fracture mechanics is the field of mechanics concerned with the study of the propagation of cracks in materials. In modern materials science, fracture mechanics is an important tool in improving the mechanical performance of mechanical components. Fractography is widely used with fracture mechanics to understand the causes of failures and also verify the theoretical failure predictions with real-life failures. The prediction of crack growth is at the heart of the damage tolerance discipline. Arising from the manufacturing process, interior and surface flaws are found in all metal structures. Not all such flaws are unstable under service conditions. Ensuring the safe operation of a structure despite these inherent flaws is achieved through damage tolerance analysis. Fracture mechanics as a subject for critical study has barely been around for a century and thus is relatively new. Fracture mechanics should attempt to provide quantitative answers to the following questions: What is the strength of the component as a function of crack size? What crack size can be tolerated under service loading, i.e. what is the maximum permissible crack size? What is the service life of a structure when a certain pre-existing flaw size is assumed to exist? During the period available for crack detection, how often should the structure be inspected for cracks? Fracture mechanics was developed by the English aeronautical engineer A. A. Griffith to explain the failure of brittle materials. Griffith's work was motivated by two contradictory facts: the stress needed to fracture bulk glass is around 100 MPa, whereas the theoretical stress needed for breaking atomic bonds is approximately 10,000 MPa.
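Griffith's ideas led to the modern stress-intensity approach, in which a through crack in a wide plate has K_I = sigma * sqrt(pi * a) and becomes critical when K_I reaches the fracture toughness K_Ic; a minimal sketch of the maximum-permissible-crack-size question, with assumed, illustrative material numbers:

```python
import math

def stress_intensity(sigma_Pa, a_m):
    """Mode-I stress intensity factor for a through crack in an infinite
    plate under remote tension: K_I = sigma*sqrt(pi*a), a = half crack length."""
    return sigma_Pa * math.sqrt(math.pi * a_m)

def tolerable_half_crack(K_Ic, sigma_Pa):
    """Largest half crack length before K_I reaches the toughness K_Ic."""
    return (K_Ic / sigma_Pa) ** 2 / math.pi

# Aluminium-like numbers (illustrative): K_Ic ~ 25 MPa*sqrt(m), 100 MPa load
a_max = tolerable_half_crack(25e6, 100e6)
print(a_max * 1000, "mm")   # roughly 20 mm half crack length
```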
Fracture mechanics
–
The S.S. Schenectady split apart by brittle fracture while in harbor, 1943.
Fracture mechanics
–
The three fracture modes.
18.
Contact mechanics
–
Contact mechanics is the study of the deformation of solids that touch each other at one or more points. It distinguishes between the normal stresses acting perpendicular to the contacting bodies' surfaces and the frictional stresses acting tangentially between the surfaces. This page focuses mainly on the normal direction, i.e. on frictionless contact mechanics. Frictional contact mechanics is discussed separately. Applications of contact mechanics further extend into the micro- and nanotechnological realm. The original work in contact mechanics dates back to 1882 and Heinrich Hertz. Hertz was attempting to understand how the optical properties of stacked lenses might change with the force holding them together. Hertzian stress refers to the localized stresses that develop as two curved surfaces come in contact and deform slightly under the imposed loads. This amount of deformation is dependent on the modulus of elasticity of the materials in contact. Classical contact mechanics is most notably associated with Heinrich Hertz. In 1882, Hertz solved the contact problem of two elastic bodies with curved surfaces. This classical solution provides a foundation for modern problems in contact mechanics. In mechanical engineering and tribology, Hertzian contact stress is a description of the stress within mating parts. The Hertzian stress usually refers to the stress close to the area of contact between two spheres of different radii. It was not until nearly one hundred years later that Johnson, Kendall, and Roberts found a similar solution for the case of adhesive contact.
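Hertz's solution for a sphere pressed on an elastic half-space gives closed-form expressions for the contact radius and peak pressure; a minimal sketch for frictionless contact, with assumed, steel-like material values:

```python
import math

def hertz_sphere_on_flat(F_N, R_m, E1, nu1, E2, nu2):
    """Hertzian contact of a sphere (radius R) on a half-space: returns the
    contact radius a and peak pressure p0, assuming frictionless contact
    and small deformations."""
    Estar = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # effective modulus
    a = (3 * F_N * R_m / (4 * Estar)) ** (1.0 / 3.0)       # contact radius
    p0 = 3 * F_N / (2 * math.pi * a**2)                    # peak pressure
    return a, p0

# 10 mm diameter steel ball pressed on a steel plate with 100 N (assumed values)
a, p0 = hertz_sphere_on_flat(100.0, 0.005, 210e9, 0.3, 210e9, 0.3)
print(a * 1e6, "um,", p0 / 1e9, "GPa")
```

Note how highly concentrated the stress is: a modest 100 N load produces a contact patch a fraction of a millimeter across and a peak pressure in the gigapascal range.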
Contact mechanics
–
Stresses in a contact area loaded simultaneously with a normal and a tangential force. Stresses were made visible using photoelasticity.
Contact mechanics
–
Contact of an elastic sphere with an elastic half-space
Contact mechanics
–
Contact between two spheres.
Contact mechanics
–
Contact between two crossed cylinders of equal radius.
19.
Frictional contact mechanics
–
Contact mechanics is the study of the deformation of solids that touch each other at one or more points. The forces involved can be divided into those acting in the direction perpendicular to the interface, and frictional forces in the tangential direction. Frictional contact mechanics is concerned with a large range of different scales. At the macroscopic scale, it is applied for the investigation of the motion of contacting bodies. For instance, the bouncing of a ball on a surface depends on the frictional interaction at the contact interface. Here the total force versus indentation and displacement are of main concern. At the intermediate scale, one is interested in strains and deformations of the contacting bodies in and near the contact area, for instance to investigate wear and damage of the contacting bodies' surfaces. Application areas of this scale are railway wheel-rail interaction, roller bearing analysis, etc. Several famous scientists and mathematicians contributed to our understanding of friction. They include Leonardo da Vinci, Guillaume Amontons, John Theophilus Desaguliers, and Charles-Augustin de Coulomb. Later, Nikolai Pavlovich Petrov and Richard Stribeck supplemented this understanding with theories of lubrication. With respect to contact mechanics the classical contribution by Heinrich Hertz stands out. Further, the fundamental solutions by Boussinesq and Cerruti are of primary importance for the investigation of frictional contact problems in the elastic regime. Classical results for a true frictional contact problem concern the papers by F.W. Carter and H.
Frictional contact mechanics
–
In railway applications one wants to know the relation between creepage (velocity difference) and the friction force.
20.
Fluid
–
In physics, a fluid is a substance that continually deforms under an applied shear stress. Fluids include liquids, gases, plasmas and, to some extent, plastic solids. Fluids are substances that have zero shear modulus or, in simpler terms, a fluid is a substance which cannot resist any shear force applied to it. For example, "brake fluid" will not perform its required incompressible function if there is gas in it. This colloquial usage of the term is also common in nutrition. Liquids form a free surface while gases do not. The distinction between solids and fluids is not entirely obvious. The distinction is made by evaluating the viscosity of the substance. Silly Putty can be considered to behave like a solid or a fluid, depending on the time period over which it is observed. It is best described as a viscoelastic fluid. There are many examples of substances proving difficult to classify. A particularly interesting one is pitch, as demonstrated in the pitch drop experiment currently running at the University of Queensland. These properties are typically a function of the inability of fluids to support a shear stress in static equilibrium. Solids can be subjected to normal stresses, both compressive and tensile. In contrast, ideal fluids can only be subjected to compressive stress, called pressure.
21.
Fluid statics
–
Fluid statics or hydrostatics is the branch of fluid mechanics that studies incompressible fluids at rest. Hydrostatics is categorized as a part of fluid statics, the study of all fluids, incompressible or not, at rest. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to meteorology, to medicine, and many other fields. Some principles of hydrostatics have been known in an empirical and intuitive sense since antiquity by the builders of boats, cisterns, aqueducts and fountains. An early example is the Pythagorean cup, which was used as a learning tool: a pipe runs up through the center of the cup, and the height of this pipe is the same as a line carved into the interior of the cup. The cup may be filled to the line without any fluid passing into the pipe in the center of the cup. However, when the amount of fluid exceeds this line, fluid will overflow into the pipe in the center of the cup. Due to the drag that molecules exert on one another, the cup will be emptied. Heron's fountain is a device invented by Heron of Alexandria that consists of a jet of fluid being fed by a reservoir of fluid. The device consisted of two containers arranged one above the other, with several cannulae connecting the various vessels. Trapped air inside the vessels induces a jet of water out of a nozzle, emptying all water from the intermediate reservoir. Pascal made contributions in both hydrostatics and hydrodynamics.
Fluid statics
–
Table of Hydraulics and Hydrostatics, from the 1728 Cyclopædia
22.
Fluid dynamics
–
In physics, fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow—the science of fluids in motion. It has several subdisciplines itself, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion). Some of its principles are even used in traffic engineering, where traffic is treated as a continuous fluid, and in crowd dynamics. Before the twentieth century, hydrodynamics was synonymous with fluid dynamics. This is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases. The foundational axioms of fluid dynamics are the conservation laws, specifically conservation of mass, conservation of linear momentum, and conservation of energy. These are modified in general relativity. They are expressed using the Reynolds transport theorem. In addition to the above, fluids are assumed to obey the continuum assumption. Fluids are composed of molecules that collide with one another and with solid objects. However, the continuum assumption treats fluids as continuous, rather than discrete. The fact that the fluid is made up of discrete molecules is ignored. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in a number of ways, all of which make them easier to solve. Some of the simplifications allow appropriate fluid dynamics problems to be solved in closed form.
23.
Archimedes' principle
–
Archimedes' principle is a law of physics fundamental to fluid mechanics. It was formulated by Archimedes of Syracuse. Practically, Archimedes' principle allows the buoyancy of an object fully immersed in a liquid to be calculated. The downward force on the object is simply its weight. The upward buoyant force on the object is that stated by Archimedes' principle, above. Thus the net upward force on the object is the difference between the buoyant force and its weight. Consider a cube immersed in a fluid with its sides parallel to the direction of gravity. The fluid will exert a normal force on each face; the horizontal forces cancel, therefore only the forces on the top and bottom faces will contribute to buoyancy. The difference in pressure between the bottom and the top face is directly proportional to the height (the difference in depth); multiplied by the area of a face, it gives the net buoyant force on the cube. The weight of the object in the fluid is reduced because of the force acting on it, called upthrust. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy. Suppose a rock's weight is measured as 10 newtons when suspended with gravity acting on it. Suppose that when the rock is lowered into water, it displaces water of weight 3 newtons. It then exerts only 7 newtons on the string from which it hangs. Buoyancy reduces the apparent weight of objects that have sunk completely to the floor. It is generally easier to lift an object up through the water than it is to pull it out of the water.
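The rock example above is simple arithmetic: the apparent weight of a submerged body is its true weight minus the weight of the fluid it displaces. A minimal sketch (the function name is my own, not from the article):

```python
def apparent_weight(true_weight, displaced_fluid_weight):
    """By Archimedes' principle the buoyant force equals the weight of
    the displaced fluid, so the submerged (apparent) weight is the true
    weight minus that buoyant force. Both inputs in newtons."""
    return true_weight - displaced_fluid_weight
```

For the rock in the text, `apparent_weight(10.0, 3.0)` gives the 7 newtons of force left on the string.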
24.
Bernoulli's principle
–
The principle is named after Daniel Bernoulli, who published it in his book Hydrodynamica in 1738. The simple form of Bernoulli's equation is valid for incompressible flows. More advanced forms may be applied to compressible flows at higher Mach numbers. Bernoulli's principle can be derived from the principle of conservation of energy. This requires that the sum of kinetic energy, potential energy and internal energy remains constant. Bernoulli's principle can also be derived directly from Isaac Newton's second law of motion: if a small volume of fluid flows from a region of high pressure to a region of low pressure, there is more pressure behind than in front. This gives a net force on the volume, accelerating it along the streamline. Fluid particles are subject only to pressure and their own weight. Consequently, within a fluid flowing horizontally, the highest speed occurs where the pressure is lowest, and the lowest speed occurs where the pressure is highest. In most flows of liquids, and of gases at low Mach number, the density of a fluid parcel can be considered constant. Therefore, the fluid can be considered to be incompressible and these flows are called incompressible flows. Bernoulli performed his experiments on liquids, so his equation in its original form is valid only for incompressible flow. Here Ψ denotes the force potential; e.g. for the Earth's gravity Ψ = gz. The constant in the Bernoulli equation can be normalised. In liquids – when the pressure becomes too low – cavitation occurs. The above equations use a linear relationship between flow speed squared and pressure.
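For steady, incompressible, inviscid flow along one streamline, Bernoulli's equation p + ½ρv² + ρgz = constant can be solved for the pressure at a second point. A minimal sketch, assuming water (ρ = 1000 kg/m³) by default; the function name is illustrative:

```python
def bernoulli_p2(p1, v1, z1, v2, z2, rho=1000.0, g=9.81):
    """Solve p + 0.5*rho*v**2 + rho*g*z = constant (steady,
    incompressible, inviscid flow on one streamline) for p at point 2.
    SI units: pressures in Pa, speeds in m/s, heights in m."""
    return p1 + 0.5 * rho * (v1**2 - v2**2) + rho * g * (z1 - z2)
```

Consistent with the text, at equal height a higher speed at point 2 yields a lower pressure there.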
Bernoulli's principle
–
Condensation visible over the upper surface of an Airbus A340 wing caused by the fall in temperature accompanying the fall in pressure, both due to acceleration of the air.
Bernoulli's principle
–
A flow of air into a venturi meter. The kinetic energy increases at the expense of the fluid pressure, as shown by the difference in height of the two columns of water.
25.
Pascal's law
–
The law was established by French mathematician Blaise Pascal. The intuitive explanation of this formula is that the change in pressure between two elevations is due to the weight of the fluid between the elevations. Note that the variation with height does not depend on any additional pressures. The pressure that the left piston exerts against the water will be exactly equal to the pressure the water exerts against the right piston. The difference between force and pressure is important: the additional pressure is exerted against the entire area of the larger piston. Since there is 50 times the area, 50 times as much force is exerted on the larger piston. Thus, the larger piston will support a 50 N load - fifty times the load on the smaller piston. Forces can be multiplied using such a device: one newton of input produces fifty newtons of output. By further increasing the area of the larger piston, forces can be multiplied, in principle, by any amount. Pascal's principle underlies the operation of the hydraulic press. The hydraulic press does not violate energy conservation, because a decrease in distance moved compensates for the increase in force. When the small piston is moved downward 100 centimeters, the large piston will be raised only one-fiftieth of this, or 2 centimeters. Pascal's principle applies to all fluids, whether gases or liquids. A typical application of Pascal's principle for liquids is the automobile lift seen in many service stations.
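The piston arithmetic above follows from pressure being transmitted undiminished: output force scales up with the area ratio, and output travel scales down by the same ratio, so work is conserved. A minimal sketch (function names are my own):

```python
def press_output_force(input_force, area_ratio):
    """Pascal's principle: equal pressure on both pistons, so
    F_out = F_in * (A_out / A_in)."""
    return input_force * area_ratio

def press_output_travel(input_travel, area_ratio):
    """Energy conservation: the larger piston moves less by the
    same area ratio, so F_in*d_in == F_out*d_out."""
    return input_travel / area_ratio
```

With an area ratio of 50, the 1 N / 100 cm input from the text becomes a 50 N / 2 cm output.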
Pascal's law
–
The effects of Pascal's law in the (possibly apocryphal) "Pascal's barrel" experiment.
26.
Viscosity
–
The viscosity of a fluid is a measure of its resistance to gradual deformation by shear stress or tensile stress. For liquids, it corresponds to the informal concept of "thickness"; for example, honey has a much higher viscosity than water. For a given velocity pattern, the stress required is proportional to the fluid's viscosity. A fluid that has no resistance to shear stress is known as an inviscid fluid. Zero viscosity is observed only at very low temperatures in superfluids. Otherwise, all fluids are technically said to be viscous or viscid. A fluid with a relatively high viscosity, such as pitch, may appear to be a solid. The word "viscosity" is derived from the Latin viscum ("mistletoe"), which also denoted a viscous glue made from mistletoe berries. The dynamic viscosity of a fluid expresses its resistance to shearing flows, where adjacent layers move parallel to each other with different speeds. The fluid has to be homogeneous in the layer and at different shear stresses. An external force is therefore required in order to keep the top plate moving at constant speed. The proportionality factor μ in this formula is the viscosity of the fluid. Here the y-axis, perpendicular to the flow, points in the direction of maximum variation of the flow velocity. A generalized, differential form of this equation can be used where the velocity does not vary linearly with y, such as in fluid flowing through a pipe. Use of the Greek letter mu (μ) for the dynamic viscosity is common among mechanical and chemical engineers, as well as physicists.
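The plate experiment described above obeys Newton's law of viscosity for a linear velocity profile: the drag force is F = μ·A·u/y, where A is the plate area, u its speed and y the film thickness. A minimal sketch; the numbers in the usage below are illustrative:

```python
def plate_drag_force(mu, area, speed, gap):
    """Newton's law of viscosity for planar (Couette) flow with a linear
    velocity profile between two parallel plates: F = mu * A * u / y.
    SI units: mu in Pa*s, area in m^2, speed in m/s, gap in m."""
    return mu * area * speed / gap
```

Doubling the viscosity doubles the required force, which is the sense in which honey is "thicker" than water.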
Viscosity
–
Pitch has a viscosity approximately 230 billion (2.3 × 10¹¹) times that of water.
Viscosity
–
Laminar shear of fluid between two plates. Friction between the fluid and the moving boundaries causes the fluid to shear. The force required for this action is a measure of the fluid's viscosity.
Viscosity
–
Example of the viscosity of milk and water. Liquids with higher viscosities make smaller splashes when poured at the same velocity.
Viscosity
–
Honey being drizzled.
27.
Newtonian fluid
–
Newtonian fluids are the simplest mathematical models of fluids that account for viscosity. However, non-Newtonian fluids are relatively common and include non-drip paint. Other examples include many polymer solutions, molten polymers, and most highly viscous fluids. Newtonian fluids are named after Isaac Newton, who first postulated the relation between the shear strain rate and shear stress for such fluids in differential form. These forces can be mathematically approximated by a viscous stress tensor, usually denoted by τ. The deformation of a fluid element, relative to some previous state, can be approximated by a tensor that changes with time. The velocity gradient ∇v can be expressed as a 3 × 3 matrix, relative to any chosen coordinate system. One also defines a total stress tensor σ that combines the shear stress with conventional pressure p. The diagonal components of the viscosity tensor correspond to the molecular viscosity of a liquid, the off-diagonal components to turbulent eddy viscosity.
28.
Non-Newtonian fluid
–
A non-Newtonian fluid is a fluid with properties that differ in any way from those of Newtonian fluids. Most commonly, the viscosity of non-Newtonian fluids is dependent on shear rate or shear rate history. Some non-Newtonian fluids with shear-independent viscosity, however, still exhibit non-Newtonian behavior. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different and can even be time-dependent. Therefore, a constant coefficient of viscosity cannot be defined. The properties are better studied using tensor-valued constitutive equations, which are common in the field of continuum mechanics. The viscosity of a dilatant (shear-thickening) fluid appears to increase when the shear rate increases. Corn starch dissolved in water is a common example: when stirred slowly it looks milky, when stirred vigorously it feels like a very viscous liquid. A shear-thinning fluid, by contrast, appears to become thinner as the shear rate increases; to avoid confusion with time-dependent thixotropic behavior, this classification is more clearly termed pseudoplastic. Another example of a shear-thinning fluid is blood. This property is highly favoured within the body, as it allows the viscosity of blood to decrease with increased shear rate. Fluids that have a linear shear stress/shear strain relationship but require a finite yield stress before they begin to flow are called Bingham plastics. Several examples are clay suspensions, drilling mud, toothpaste and mustard. The surface of a Bingham plastic can hold peaks when it is still.
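Shear-thinning and shear-thickening behavior is often fitted with the Ostwald–de Waele power-law model, η = K·γ̇^(n−1), a common constitutive choice that the article does not name explicitly; the parameter values below are illustrative:

```python
def apparent_viscosity(K, n, shear_rate):
    """Ostwald-de Waele power-law model for non-Newtonian fluids:
    eta = K * gamma_dot**(n - 1).
    n < 1: shear-thinning (pseudoplastic, e.g. blood, ketchup);
    n > 1: shear-thickening (dilatant, e.g. cornstarch in water);
    n = 1: Newtonian (eta = K, a constant)."""
    return K * shear_rate ** (n - 1)
```

The model reproduces the qualitative behavior described above: for n < 1 the apparent viscosity falls as stirring gets faster, for n > 1 it rises.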
Non-Newtonian fluid
–
Demonstration of a non-Newtonian fluid at Universum in Mexico City
Non-Newtonian fluid
–
Classification of fluids with shear stress as a function of shear rate.
Non-Newtonian fluid
–
Oobleck on a subwoofer. Applying force to oobleck, by sound waves in this case, makes the non-Newtonian fluid thicken.
29.
Buoyancy
–
In science, buoyancy is an upward force exerted by a fluid that opposes the weight of an immersed object. In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid. Thus the pressure at the bottom of a column of fluid is greater than at the top of the column. Similarly, the pressure at the bottom of an object submerged in a fluid is greater than at the top of the object. This pressure difference results in a net upward force on the object. For this reason, an object whose density is greater than that of the fluid in which it is submerged tends to sink. If the object is either less dense than the liquid or is shaped appropriately, the force can keep the object afloat. In a situation of fluid statics, the net upward force is equal to the magnitude of the weight of fluid displaced by the body. The center of buoyancy of an object is the centroid of the displaced volume of fluid. Archimedes' principle is named after Archimedes of Syracuse, who first discovered this law in 212 B.C. More tersely: buoyancy = weight of displaced fluid. The weight of the displaced fluid is directly proportional to the volume of the displaced fluid. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy. This is also known as upthrust. Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting upon it. If it then displaces water of weight 3 newtons when lowered in, its apparent weight is 7 newtons.
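The quantitative content of the paragraph is that the upthrust equals the weight of the displaced fluid, ρ_fluid·g·V, and that a fully submerged object sinks exactly when it is denser than the fluid. A minimal sketch in SI units (function names are my own):

```python
def buoyant_force(fluid_density, displaced_volume, g=9.81):
    """Archimedes: upward force (N) = weight of the displaced fluid,
    rho [kg/m^3] * V [m^3] * g [m/s^2]."""
    return fluid_density * displaced_volume * g

def sinks(object_density, fluid_density):
    """A fully submerged object sinks when it is denser than the fluid."""
    return object_density > fluid_density
```

One litre of displaced water (0.001 m³) gives an upthrust of about 9.81 N, i.e. one kilogram-weight.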
Buoyancy
–
A metallic coin (one British pound coin) floats in mercury due to the buoyancy force upon it and appears to float higher because of the surface tension of the mercury.
Buoyancy
–
The forces at work in buoyancy. Note that the object is floating because the upward force of buoyancy is equal to the downward force of gravity.
30.
Mixing (process engineering)
–
In industrial process engineering, mixing is a unit operation that involves manipulation of a heterogeneous physical system with the intent to make it more homogeneous. A familiar example is the stirring of pancake batter to eliminate lumps. Mixing is performed to allow mass transfer to occur between one or more streams, components or phases. Modern industrial processing almost always involves some form of mixing. Some classes of chemical reactors are also mixers. With the right equipment, it is possible to mix a solid, liquid or gas into another solid, liquid or gas. The opposite of mixing is segregation. A classical example of segregation is the brazil nut effect. The type of equipment used during mixing depends on the state of materials being mixed and the miscibility of the materials being processed. In this context, the act of mixing may be synonymous with stirring- or kneading-processes. Mixing of liquids occurs frequently in engineering. The nature of the liquids to blend determines the equipment used. Transitional mixing is frequently conducted with turbines or impellers; laminar mixing is conducted with helical ribbon or anchor mixers. Mixing of liquids that are miscible or at least soluble in each other occurs frequently in process engineering. An everyday example would be the addition of cream to tea or coffee.
Mixing (process engineering)
–
Machine for incorporating liquids and finely ground solids
Mixing (process engineering)
–
Schematics of an agitated vessel with a Rushton turbine and baffles
Mixing (process engineering)
–
A magnetic stirrer
Mixing (process engineering)
–
Axial flow impeller (left) and radial flow impeller (right).
31.
Pressure
–
Pressure is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure is the pressure relative to the ambient pressure. Various units are used to express pressure. Pressure is the amount of force acting per unit area. The symbol for it is P or p. The IUPAC recommendation for pressure is a lower-case p; however, upper-case P is widely used. Mathematically: p = F/A, where p is the pressure, F is the magnitude of the normal force, and A is the area of the surface on contact. Pressure is a scalar quantity. It relates the vector area element (a vector normal to the surface) with the normal force acting on it. It is incorrect to say "the pressure is directed in such or such direction". The pressure, as a scalar, has no direction. The normal force given by the relationship above has a direction, but the pressure does not. If the orientation of the surface element is changed, the direction of the normal force changes accordingly, but the pressure remains the same. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is conjugate to volume.
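The defining relation p = F/A can be sketched directly; in SI units a force in newtons over an area in square metres gives pascals. The helper name is illustrative:

```python
def pressure(normal_force, area):
    """p = F / A: magnitude of the normal force per unit area.
    Returns pascals for F in newtons and A in square metres.
    Pressure is a scalar: the same value whatever the orientation
    of the surface element."""
    return normal_force / area
```

For example, 100 N spread over half a square metre is 200 Pa, while the same force concentrated on 1 cm² is a million pascals, which is why sharp points penetrate.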
Pressure
–
Mercury column
Pressure
–
Pressure as exerted by particle collisions inside a closed container.
Pressure
–
The effects of an external pressure of 700 bar on an aluminum cylinder with 5 mm wall thickness
Pressure
–
Low-pressure chamber in Bundesleistungszentrum Kienbaum, Germany
32.
Liquid
–
A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a constant volume independent of pressure. As such, it is the only state with a definite volume but no fixed shape. A liquid is made up of tiny vibrating particles of matter, such as atoms, held together by intermolecular bonds. Water is, by far, the most common liquid on Earth. Like a gas, a liquid is able to flow and take the shape of a container. Most liquids resist compression, although others can be compressed. Unlike a gas, a liquid maintains a fairly constant density. A distinctive property of the liquid state is surface tension, leading to wetting phenomena. The density of a liquid is usually much higher than in a gas. Therefore, liquid and solid are both termed condensed matter. On the other hand, as liquids and gases share the ability to flow, they are both called fluids. Most known matter in the universe is in plasma form within stars. Liquid is one of the four primary states of matter, with the others being solid, gas and plasma. A liquid is a fluid. Unlike a solid, the molecules in a liquid have a much greater freedom to move.
Liquid
–
The formation of a spherical droplet of liquid water minimizes the surface area, which is the natural result of surface tension in liquids.
Liquid
–
Thermal image of a sink full of hot water with cold water being added, showing how the hot and the cold water flow into each other.
Liquid
–
Surface waves in water
33.
Surface tension
–
Surface tension is the elastic tendency of a fluid surface which makes it acquire the least surface area possible. Surface tension allows insects, usually denser than water, to stride on a water surface. At liquid-air interfaces, surface tension results from the greater attraction of liquid molecules to each other than to the molecules in the air. Thus, the surface comes under tension from the imbalanced forces, which is probably where the term "surface tension" came from. Surface tension is an important factor in the phenomenon of capillarity. Surface tension has the dimension of force per unit length, or of energy per unit area. In materials science, surface tension is used for either surface stress or surface free energy. The cohesive forces among liquid molecules are responsible for the phenomenon of surface tension. In the bulk of the liquid, each molecule is pulled equally by neighboring liquid molecules, resulting in a net force of zero. The molecules at the surface do not have the same molecules on all sides of them and therefore are pulled inwards. This forces liquid surfaces to contract to the minimal area. Surface tension is responsible for the shape of liquid droplets. Although easily deformed, droplets of water tend to be pulled into a spherical shape by the imbalance in cohesive forces of the surface layer. In the absence of other forces, including gravity, drops of virtually all liquids would be approximately spherical. The spherical shape minimizes the necessary "tension" of the surface layer according to Laplace's law.
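Laplace's law for a spherical droplet gives the excess pressure inside it as Δp = 2γ/R, where γ is the surface tension and R the droplet radius. A minimal sketch; the water value γ ≈ 0.0728 N/m used below is a standard room-temperature figure, not from the article:

```python
def droplet_excess_pressure(surface_tension, radius):
    """Young-Laplace law for a spherical droplet: delta_p = 2*gamma/R.
    gamma in N/m, R in m, result in Pa. Smaller droplets carry a
    higher internal pressure, which is why fine mists resist merging
    into films less than large drops do."""
    return 2.0 * surface_tension / radius
```

A 1 mm water droplet holds an excess pressure of roughly 146 Pa; shrinking it tenfold multiplies that pressure tenfold.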
Surface tension
–
Surface tension preventing a paper clip from submerging.
Surface tension
–
A. Water beading on a leaf
Surface tension
–
C. Water striders stay atop the liquid because of surface tension
34.
Capillary action
–
Capillary action is the ability of a liquid to flow in narrow spaces without the assistance of, or even in opposition to, external forces like gravity. It occurs because of intermolecular forces between the liquid and surrounding solid surfaces. The first recorded observation of capillary action was by Leonardo da Vinci. A former student of Galileo, Niccolò Aggiunti, was said to have investigated capillary action. Boyle then reported an experiment in which he dipped a capillary tube into red wine and then subjected the tube to a partial vacuum. Others soon followed Boyle's lead. Some thought that liquids rose in capillaries because air couldn't enter capillaries as easily as liquids, so the air pressure was lower inside capillaries. Others thought that the particles of liquid were attracted to each other and to the walls of the capillary. Thomas Young and Pierre-Simon Laplace eventually derived the Young–Laplace equation of capillary action. By 1830, the German mathematician Carl Friedrich Gauss had determined the boundary conditions governing capillary action. In 1871, the British physicist William Thomson (Lord Kelvin) determined the effect of the meniscus on a liquid's vapor pressure—a relation known as the Kelvin equation. German physicist Franz Ernst Neumann subsequently determined the interaction between two immiscible liquids. Albert Einstein's first paper, submitted to Annalen der Physik in 1900, was on capillarity. A common apparatus used to demonstrate capillary action is the capillary tube. When the lower end of a vertical glass tube is placed in a liquid, such as water, a concave meniscus forms.
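The rise height in a thin vertical tube follows Jurin's law, h = 2γ·cos(θ)/(ρ·g·r), with γ the surface tension, θ the contact angle, ρ the liquid density and r the tube radius. A minimal sketch; the water-in-clean-glass numbers below (γ ≈ 0.0728 N/m, θ ≈ 0°) are standard illustrative values, not from the article:

```python
import math

def capillary_rise(gamma, contact_angle_deg, rho, tube_radius, g=9.81):
    """Jurin's law: h = 2*gamma*cos(theta) / (rho*g*r), SI units.
    A contact angle above 90 degrees (e.g. mercury on glass) gives a
    negative h, i.e. capillary depression."""
    theta = math.radians(contact_angle_deg)
    return 2.0 * gamma * math.cos(theta) / (rho * g * tube_radius)
```

Water in a 1 mm-radius glass tube rises about 15 mm; halving the radius doubles the rise, matching the plot of rise height against capillary diameter.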
Capillary action
–
Capillary flow experiment to investigate capillary flows and phenomena aboard the International Space Station
Capillary action
–
Capillary action of water compared to mercury, in each case with respect to a polar surface such as glass
Capillary action
–
Water height in a capillary plotted against capillary diameter
Capillary action
–
Capillary flow in a brick, with a sorptivity of 5.0 mm·min^(−1/2) and a porosity of 0.25.
35.
Gas
–
Gas is one of the four fundamental states of matter. A gas mixture, such as air, contains a variety of pure gases. What distinguishes a gas from liquids and solids is the vast separation of the individual gas particles. This separation usually makes a colorless gas invisible to the human observer. The interaction of gas particles in the presence of gravitational fields is considered negligible, as indicated by the constant velocity vectors in the image. One type of commonly known gas is steam. The gaseous state of matter is found between the liquid and plasma states, the latter of which provides the upper temperature boundary for gases. Bounding the lower end of the temperature scale lie degenerate quantum gases, which are gaining increasing attention. High-density gases super-cooled to incredibly low temperatures are classified by their statistical behavior as either a Bose gas or a Fermi gas. For a comprehensive listing of these exotic states of matter see list of states of matter. The elemental diatomic gases, when grouped together with the noble gases (helium, neon, argon, krypton, xenon and radon), are called "elemental gases". Alternatively they are sometimes known as "molecular gases" to distinguish them from molecules that are also chemical compounds. The word gas is a neologism first used by the early 17th-century Flemish chemist J.B. van Helmont. According to Paracelsus's terminology, chaos meant something like "ultra-rarefied water". An alternative story is that Van Helmont's word is corrupted from gahst, signifying a ghost or spirit.
Gas
–
Drifting smoke particles provide clues to the movement of the surrounding gas.
Gas
–
Gas phase particles (atoms, molecules, or ions) move around freely in the absence of an applied electric field.
Gas
–
Shuttle imagery of re-entry phase.
Gas
–
21 April 1990 eruption of Mount Redoubt, Alaska, illustrating real gases not in thermodynamic equilibrium.
36.
Atmosphere
–
An atmosphere is a layer of gases surrounding a planet or other material body, held in place by the gravity of that body. An atmosphere is more likely to be retained if the gravity it is subject to is high and the temperature of the atmosphere is low. The atmosphere of Earth is composed mostly of nitrogen and oxygen, with argon, carbon dioxide and other gases present in trace amounts. The atmosphere helps protect living organisms from solar ultraviolet radiation and cosmic rays. Its current composition is the product of billions of years of biochemical modification of the paleoatmosphere by living organisms. The term stellar atmosphere typically includes the portion starting from the photosphere outwards. Stars with sufficiently low temperatures may form compound molecules in their outer atmosphere. Atmospheric pressure is the force per unit area applied perpendicularly by the surrounding gas. It is determined by the weight of the air above a location. On Earth, units of air pressure are based on the internationally recognized standard atmosphere, defined as 101.325 kPa. It is measured with a barometer. The pressure of an atmospheric gas decreases with altitude due to the diminishing mass of gas above. The height at which the pressure from an atmosphere declines by a factor of e is called the scale height and is denoted by H. For such a model atmosphere, the pressure declines exponentially with increasing altitude. However, atmospheres are not uniform in temperature, so the exact determination of the atmospheric pressure at any particular altitude is more complex.
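For an isothermal model atmosphere, the exponential decline described above is p(z) = p0·exp(−z/H). A minimal sketch; Earth's scale height of roughly 8.5 km, used in the example, is a standard approximate figure rather than a value from the article:

```python
import math

def pressure_at_altitude(p0, altitude, scale_height):
    """Isothermal model atmosphere: p(z) = p0 * exp(-z / H).
    p0 is the surface pressure; altitude and scale_height share units."""
    return p0 * math.exp(-altitude / scale_height)
```

By construction, climbing one scale height divides the pressure by e ≈ 2.718, so at ~8.5 km on Earth the model gives a bit over a third of sea-level pressure.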
Atmosphere
–
Mars's thin atmosphere
Atmosphere
–
Earth's atmospheric gases scatter blue light more than other wavelengths, giving the Earth a blue halo when seen from space.
37.
Boyle's law
–
Boyle's law is an experimental gas law that describes how the pressure of a gas tends to increase as the volume of the container decreases. The law states that the product of pressure and volume is a constant for a given mass of confined gas, as long as the temperature is constant. The equation shows that, as volume increases, the pressure of the gas decreases in proportion. Similarly, as volume decreases, the pressure of the gas increases. The law was named after physicist Robert Boyle, who published the original law in 1662. This relationship between pressure and volume was first noted by Richard Towneley and Henry Power; Robert Boyle confirmed their discovery through experiments and published the results. According to other authorities, it was Boyle's assistant, Robert Hooke, who built the experimental apparatus. Boyle's law is based on experiments with air, which he considered to be a fluid of particles at rest in between small invisible springs. At that time, air was still seen as one of the four elements, but Boyle disagreed. Boyle's interest was probably to understand air as an essential element of life; for example, he published works on the growth of plants without air. The French physicist Edme Mariotte discovered the same law independently of Boyle in 1679, but Boyle had already published it in 1662. Thus this law is sometimes referred to as the Boyle–Mariotte law. Instead of a static theory, a kinetic theory is needed, which was provided two centuries later by Maxwell and Boltzmann. This law was the first physical law to be expressed in the form of an equation describing the dependence of two variable quantities.
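Because p·V is constant at fixed temperature, the pressure after an isothermal volume change is just p2 = p1·V1/V2. A minimal sketch (the function name is my own):

```python
def boyle_p2(p1, v1, v2):
    """Boyle's law at constant temperature and fixed gas amount:
    p1*v1 = p2*v2, solved for the new pressure p2.
    Any consistent units work, since only ratios matter."""
    return p1 * v1 / v2
```

Halving the container volume doubles the pressure, and the product p·V is unchanged before and after.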
38.
Charles's law
–
Charles's law is an experimental gas law that describes how gases tend to expand when heated. It states that, at constant pressure, the volume of a given mass of gas is directly proportional to its absolute temperature: V/T = k, where k is a constant. This law describes how a gas expands as the temperature increases; conversely, a decrease in temperature will lead to a decrease in volume. The equation shows that, as absolute temperature increases, the volume of the gas also increases in proportion. The law was named after scientist Jacques Charles, who formulated the original law in his unpublished work from the 1780s. The basic principles had already been described by Francis Hauksbee a century earlier. Gay-Lussac concurred. On mathematical grounds alone, Gay-Lussac's paper does not permit the assignment of any law stating the linear relation. This equation thus has nothing to do with what became known as Charles's law. Gay-Lussac's value for k was remarkably close to the present-day value of 1⁄273.15. Gay-Lussac gave credit for this equation to unpublished statements by Jacques Charles in 1787. In the absence of a record, the gas law relating volume to temperature cannot be named after Charles. Dalton's first law was that of partial pressures. Charles's law appears to imply that the volume of a gas will descend to zero at a certain temperature, −273.15 °C. Gay-Lussac had no experience of liquid air, although he appears to have believed that the "permanent gases" such as air and hydrogen could not be liquified.
39.
Gay-Lussac's law
–
Gay-Lussac's name is attached to more than one gas law; these laws are also known variously as Charles's law and Amontons's law. For example, Gay-Lussac found that two volumes of hydrogen and one volume of oxygen would react to form two volumes of gaseous water. Based on Gay-Lussac's results, Amedeo Avogadro theorized that, at the same temperature and pressure, equal volumes of gas contain equal numbers of molecules. The law of combining gases was made public by Joseph Louis Gay-Lussac in 1808. Avogadro's hypothesis, however, was not initially accepted until the Italian chemist Stanislao Cannizzaro was able to convince the First International Chemical Congress in 1860. Amontons discovered his law while building an "air thermometer". The pressure of a gas of fixed mass and fixed volume is directly proportional to the gas's absolute temperature. If a gas's temperature increases, then so does its pressure, if the mass and volume of the gas are held constant. The law has a particularly simple mathematical form if the temperature is measured on an absolute scale, such as in kelvins: P/T = k, where k is a constant. Because Amontons discovered the law beforehand, Gay-Lussac's name is now generally associated with the law of combining volumes discussed in the section above. Some introductory textbooks still define the pressure-temperature relationship as Gay-Lussac's law. His work did cover some comparison between pressure and temperature. Gay-Lussac's law, Charles's law, and Boyle's law form the combined gas law. These three gas laws in combination with Avogadro's law can be generalized by the ideal gas law.
40.
Combined gas law
–
The combined gas law is a gas law that combines Charles's law, Boyle's law, and Gay-Lussac's law. There is no official founder for this law; it is merely an amalgamation of the three previously discovered laws. These laws each relate one thermodynamic variable to another mathematically while holding everything else constant. Charles's law states that volume and temperature are directly proportional to each other as long as pressure is held constant. Boyle's law asserts that pressure and volume are inversely proportional to each other at fixed temperature. Finally, Gay-Lussac's law introduces a direct proportionality between pressure and temperature as long as the gas is at a constant volume. By combining Charles's law with either Boyle's law or Gay-Lussac's law, we can gain a new equation relating P, V and T. Substituting in Avogadro's law yields the ideal gas equation. A derivation of the combined gas law using only elementary algebra can contain surprises. A physical derivation, longer but more reliable, begins by realizing that the constant volume parameter in Gay-Lussac's law will change as the system volume changes. At constant volume V1, the law might appear P = k1T, while at constant volume V2 it might appear P = k2T. Rather, it should first be determined in what sense these equations are compatible with one another. To gain insight into this, recall that any two of the three variables determine the third. Choosing P and V to be independent, we picture the T values as forming a surface above the PV-plane. A definite V0 and P0 define a T0, a point on that surface.
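The amalgamated relation is p·V/T = constant (T in kelvin), which reduces to Boyle's law at fixed T and to Gay-Lussac's law at fixed V. A minimal sketch (the function name is my own):

```python
def combined_p2(p1, v1, t1, v2, t2):
    """Combined gas law p*V/T = constant for a fixed amount of gas,
    solved for the pressure in state 2. Temperatures must be absolute
    (kelvins); pressure and volume in any consistent units."""
    return p1 * v1 * t2 / (t1 * v2)
```

Holding T fixed recovers Boyle's inverse proportionality; holding V fixed recovers the direct pressure-temperature proportionality.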
41.
Plasma (physics)
–
Plasma is one of the four fundamental states of matter. Plasma has properties unlike those of the other states. The presence of a significant number of charge carriers makes plasma electrically conductive, so that it responds strongly to electromagnetic fields. Like gas, plasma does not have a definite shape or a definite volume unless enclosed in a container. Under the influence of a magnetic field, it may form structures such as double layers. A common form of plasma on Earth is produced in neon signs. Much of the understanding of plasma has come from the pursuit of nuclear power, for which plasma physics provides the scientific foundation. Plasma is an electrically neutral medium of unbound positive and negative particles. It is important to note that although the particles are unbound, they are not 'free' in the sense of not experiencing forces. This governs collective behavior with many degrees of freedom. The average number of particles in the Debye sphere is given by the plasma parameter, "Λ". Bulk interactions: the Debye screening length is short compared to the physical size of the plasma. This criterion means that interactions in the bulk of the plasma are more important than those at its edges, where boundary effects may take place. When this criterion is satisfied, the plasma is quasineutral. Plasma frequency: the electron plasma frequency is large compared to the electron-neutral collision frequency.
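The quantities named above can be evaluated numerically. The sketch below uses the standard textbook formulas for the Debye length, the plasma parameter Λ and the electron plasma frequency; the density and temperature are illustrative values, not taken from the article:

```python
import math

# Rough plasma-criteria check (SI units throughout; values are illustrative).
EPS0 = 8.854e-12   # vacuum permittivity, F/m
KB   = 1.381e-23   # Boltzmann constant, J/K
E    = 1.602e-19   # elementary charge, C
ME   = 9.109e-31   # electron mass, kg

def debye_length(n_e, t_e):
    """Debye screening length for electron density n_e (m^-3) and temperature t_e (K)."""
    return math.sqrt(EPS0 * KB * t_e / (n_e * E**2))

def plasma_parameter(n_e, t_e):
    """Lambda: average number of particles in the Debye sphere (>> 1 for a plasma)."""
    return (4.0 / 3.0) * math.pi * n_e * debye_length(n_e, t_e) ** 3

def plasma_frequency(n_e):
    """Electron plasma frequency omega_pe in rad/s."""
    return math.sqrt(n_e * E**2 / (EPS0 * ME))

n_e, t_e = 1e18, 1e5                 # a laboratory-like plasma
print(debye_length(n_e, t_e))        # ~2e-5 m: tiny compared to a typical device
print(plasma_parameter(n_e, t_e))    # >> 1, so collective behavior dominates
```

With these values the Debye length is tens of micrometres, far smaller than any laboratory device, so the bulk-interaction criterion is comfortably satisfied.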
42.
Rheology
–
The term was first used by Eugene C. Bingham and Crawford to describe the flow of liquids and the deformation of solids. Newtonian fluids can be characterized by a single coefficient of viscosity for a specific temperature. Although this viscosity will change with temperature, it does not change with the strain rate. Only a small group of fluids exhibit such constant viscosity. The large class of fluids whose viscosity changes with the strain rate are called non-Newtonian fluids. For example, ketchup can have its viscosity reduced by shaking, but water cannot. Some non-Newtonian materials, whose viscosity increases with the strain rate, are called dilatant materials. Since Sir Isaac Newton originated the concept of viscosity, the study of liquids with strain-rate-dependent viscosity is also often called non-Newtonian fluid mechanics. The term rheology was coined by Eugene C. Bingham, a professor at Lafayette College, in 1920, from a suggestion by a colleague, Markus Reiner. Materials with the characteristics of a fluid will flow when subjected to a stress, defined as the force per area. There are different sorts of stress, and materials can respond differently to different stresses. Much of theoretical rheology is concerned with associating external forces and torques with internal stresses and internal strain gradients and flow velocities. In this sense, a solid undergoing plastic deformation is a fluid, although no viscosity coefficient is associated with this flow.
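The contrast between a Newtonian fluid and a strain-rate-dependent one can be sketched with the power-law (Ostwald-de Waele) model, a common non-Newtonian idealization; the consistency and flow-index values below are illustrative, not measured data:

```python
# Power-law model: tau = K * rate**n, so apparent viscosity = K * rate**(n - 1).
# n = 1 gives a Newtonian fluid; n < 1 gives shear thinning (ketchup-like).

def apparent_viscosity(k, n, rate):
    """Apparent viscosity (Pa*s) of a power-law fluid at a given shear rate (1/s)."""
    return k * rate ** (n - 1.0)

water   = dict(k=1.0e-3, n=1.0)   # Newtonian: viscosity independent of shear rate
ketchup = dict(k=20.0,   n=0.3)   # shear thinning: viscosity drops when sheared

for rate in (1.0, 10.0, 100.0):
    print(rate,
          apparent_viscosity(rate=rate, **water),     # stays constant
          apparent_viscosity(rate=rate, **ketchup))   # decreases with rate
```

This is why shaking the bottle (raising the shear rate) makes the ketchup flow, while the same shaking leaves water's viscosity untouched.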
Rheology
–
Linear structure of cellulose -- the most common component of all organic plant life on Earth. * Note the evidence of hydrogen bonding which increases the viscosity at any temperature and pressure. This is an effect similar to that of polymer crosslinking, but less pronounced.
43.
Viscoelasticity
–
Viscoelasticity is the property of materials that exhibit both viscous and elastic characteristics when undergoing deformation. Viscous materials, like honey, resist shear flow and strain linearly with time when a stress is applied. Elastic materials strain when stretched and quickly return to their original state once the stress is removed. Viscoelastic materials have elements of both of these properties and, as such, exhibit time-dependent strain. In the nineteenth century, physicists such as Maxwell, Boltzmann and Kelvin experimented with the creep and recovery of glasses, metals and rubbers. Viscoelasticity was further examined in the late twentieth century, when synthetic polymers were engineered and used in a variety of applications. Viscoelasticity calculations depend heavily on the viscosity variable, η. The inverse of η is also known as fluidity, φ. The value of either can be derived as a function of temperature or as a given value. Depending on the change of strain rate versus stress inside a material, the viscosity can be categorized as having a linear, non-linear or plastic response. When a material exhibits a linear response it is categorized as a Newtonian material; in this case the stress is linearly proportional to the strain rate. If the material exhibits a non-linear response to the strain rate, it is categorized as a non-Newtonian fluid. There is also an interesting case where the viscosity decreases over time while the shear/strain rate remains constant. A material which exhibits this type of behavior is known as thixotropic.
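A standard idealization of time-dependent viscoelastic behavior, not specific to this article, is the Maxwell model (spring and dashpot in series): under constant strain, stress relaxes exponentially with relaxation time τ = η/E. The parameter values below are illustrative:

```python
import math

# Maxwell viscoelastic model under constant strain:
# sigma(t) = sigma0 * exp(-t / tau), with relaxation time tau = eta / E.

def maxwell_stress(sigma0, eta, e_mod, t):
    """Stress (Pa) remaining at time t (s) under constant strain, Maxwell model."""
    tau = eta / e_mod  # relaxation time in seconds
    return sigma0 * math.exp(-t / tau)

eta, e_mod = 1.0e5, 1.0e3       # Pa*s and Pa, giving tau = 100 s
print(maxwell_stress(1000.0, eta, e_mod, 0.0))    # 1000.0: no relaxation yet
print(maxwell_stress(1000.0, eta, e_mod, 100.0))  # ~368: decayed by a factor of e
```

The ratio η/E makes explicit how the viscosity variable η enters such calculations: a larger η (lower fluidity φ) means slower stress relaxation.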
Viscoelasticity
–
Stress–strain curves for a purely elastic material (a) and a viscoelastic material (b). The red area is a hysteresis loop and shows the amount of energy lost (as heat) in a loading and unloading cycle. It is equal to ∮σ dε, where σ is stress and ε is strain.
Viscoelasticity
–
Different types of responses (σ) to a change in strain rate (dε/dt)
44.
Rheometry
–
45.
Rheometer
–
A rheometer is a laboratory device used to measure the way in which a liquid, suspension or slurry flows in response to applied forces. It measures the rheology of the fluid. There are two distinctively different types of rheometers. Rotational or shear-type rheometers are usually designed as either a native strain-controlled instrument or a native stress-controlled instrument. The word rheometer comes from the Greek and means a device for measuring flow. In the 19th century it was commonly used for devices to measure electric current, until the word was supplanted by ammeter. It was also used for the measurement of flow of liquids, in civil engineering. This latter use persisted in some areas. The principle and working of rheometers is described in several texts. Liquid is forced through a tube of constant cross-section and precisely known dimensions under conditions of laminar flow. Either the flow-rate or the pressure drop is fixed and the other measured. Knowing the dimensions, the measurements can be converted into values for the shear stress and shear rate. Varying the flow allows a flow curve to be determined. In a rotational rheometer, the liquid is placed within the annulus of one cylinder inside another, and one of the cylinders is rotated at a set speed.
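For the tube (capillary) arrangement described above, the standard Hagen-Poiseuille relations convert the measured pressure drop and flow rate into a viscosity; the sketch below is a self-consistency check with illustrative dimensions, not instrument data:

```python
import math

# Capillary rheometer relations for a Newtonian liquid:
#   wall shear stress:  tau_w  = dP * R / (2 L)
#   wall shear rate:    gamma  = 4 Q / (pi R^3)
#   viscosity:          mu     = tau_w / gamma

def viscosity_from_capillary(dp, radius, length, q):
    """Estimate Newtonian viscosity (Pa*s) from pressure drop dp (Pa), tube
    radius/length (m) and volumetric flow rate q (m^3/s)."""
    tau_w = dp * radius / (2.0 * length)
    gamma = 4.0 * q / (math.pi * radius ** 3)
    return tau_w / gamma

# Consistency check: generate Q from Poiseuille's law for mu = 0.05 Pa*s ...
mu, r, l, dp = 0.05, 0.5e-3, 0.1, 2.0e4
q = math.pi * dp * r ** 4 / (8.0 * mu * l)
# ... and recover the same viscosity from the rheometer relations:
print(viscosity_from_capillary(dp, r, l, q))  # ~0.05
```

Fixing either the flow rate or the pressure drop and measuring the other, as the text describes, supplies exactly the two numbers these formulas need.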
46.
Smart fluid
–
A smart fluid is a fluid whose properties can be changed by applying an electric field or a magnetic field. The most developed smart fluids today are magnetorheological (MR) fluids, whose viscosity increases when a magnetic field is applied. These are used, for example, in vehicle suspension dampers: depending on road conditions, the damping fluid's viscosity is adjusted, providing better control. Some haptic devices whose resistance to touch can be controlled are also based on these MR fluids. Another major type of smart fluid is the electrorheological (ER) fluid, whose resistance to flow can be quickly and dramatically altered by an applied electric field. Besides fast-acting clutches, brakes and hydraulic valves, other, more esoteric applications such as bulletproof vests have been proposed for these fluids. A third class of smart fluids changes its surface tension in the presence of an electric field. Other applications include seismic dampers, which are used in buildings in seismically active zones to damp the oscillations occurring in an earthquake. See also: Continuum mechanics, Electrorheological fluid, Ferrofluid, Fluid mechanics, Magnetorheological fluid, Rheology, Smart glass, Smart metal. External link: http://www.aip.org/tip/INPHFA/vol-9/iss-6/p14.html
47.
Magnetorheological fluid
–
A magnetorheological (MR) fluid is a type of smart fluid consisting of magnetic particles suspended in a carrier fluid, usually a type of oil. When subjected to a magnetic field, the fluid greatly increases its apparent viscosity, to the point of becoming a solid. Importantly, the yield stress in its active state can be controlled very accurately by varying the magnetic field intensity. The upshot is that the fluid's ability to transmit force can be controlled with an electromagnet, which gives rise to its many control-based applications. Extensive discussions of the applications of MR fluids can be found in a recent book. An MR fluid is different from a ferrofluid, which has smaller particles: MR fluid particles are too dense for Brownian motion to keep them suspended, whereas ferrofluid particles are primarily nanoparticles that generally will not settle under normal conditions. As a result, these two fluids have very different applications. When a magnetic field is applied, the microscopic particles align themselves along the lines of magnetic flux, see below. In the case of MR fluids, the fluid actually assumes properties comparable to a solid when in the activated state, up until a point of yield. The behavior of an MR fluid can thus be considered similar to that of a Bingham plastic, a well-investigated material model. However, an MR fluid does not exactly follow the characteristics of a Bingham plastic; for example, MR fluids are known to be subject to shear thinning, whereby the viscosity above yield decreases with increased shear rate. Low yield strength has been the primary reason for the limited range of applications.
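The Bingham-plastic idealization mentioned above can be sketched directly: no flow below the yield stress, then a linear stress-rate relation above it. The yield stress and plastic viscosity below are illustrative placeholders, not measured MR-fluid properties:

```python
# Bingham plastic model, often used as a first approximation for an
# activated MR fluid: tau = tau_y + eta_p * rate for tau > tau_y, no flow below.

def bingham_stress(tau_y, eta_p, rate):
    """Shear stress (Pa) of a flowing Bingham plastic at shear rate `rate` (1/s)."""
    if rate == 0.0:
        return 0.0  # below yield the material supports stress without flowing
    return tau_y + eta_p * rate

def bingham_rate(tau_y, eta_p, tau):
    """Shear rate produced by an applied stress tau; zero until yield is exceeded."""
    if tau <= tau_y:
        return 0.0
    return (tau - tau_y) / eta_p

tau_y, eta_p = 5000.0, 0.5   # field-on yield stress (Pa) and plastic viscosity (Pa*s)
print(bingham_rate(tau_y, eta_p, 4000.0))  # 0.0: below yield, the fluid is solid-like
print(bingham_rate(tau_y, eta_p, 5300.0))  # 600.0: flows once yield is exceeded
```

Raising the magnetic field raises tau_y, which is exactly the controllable yield stress the article describes; real MR fluids deviate from this model through shear thinning.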
Magnetorheological fluid
–
Schematic of a magnetorheological fluid solidifying and blocking a pipe in response to an external magnetic field. (Animated version available.)
48.
Electrorheological fluid
–
Electrorheological (ER) fluids are suspensions of extremely fine non-conducting but electrically active particles in an electrically insulating fluid. The apparent viscosity of these fluids changes reversibly in response to an electric field. Common applications are in ER brakes and shock absorbers. There are many novel uses for these fluids; potential uses are as haptic controllers and tactile displays. Motorola filed a patent application in 2006. The change in apparent viscosity is dependent on the applied electric field, i.e. the potential divided by the distance between the plates. The effect is better described as an electric-field-dependent shear stress. When activated, an ER fluid behaves as a Bingham plastic, with a yield point determined by the electric field strength. After the yield point is reached, the fluid shears as a fluid, i.e. the incremental shear stress is proportional to the rate of shear. Hence the resistance to motion of the fluid can be controlled by adjusting the electric field. ER fluids are a type of smart fluid. A simple ER fluid can be made by mixing cornflour in a light vegetable oil or silicone oil. There are two main theories to explain the effect: the interfacial tension or "water bridge" theory, and the electrostatic theory. The water bridge theory assumes a three-phase system: the particles contain the third phase, another liquid immiscible with the main phase liquid.
49.
Ferrofluid
–
A ferrofluid is a liquid that becomes strongly magnetized in the presence of a magnetic field. Ferrofluids are colloidal liquids made of nanoscale ferromagnetic or ferrimagnetic particles suspended in a carrier fluid. Each tiny particle is thoroughly coated with a surfactant to inhibit clumping. Ferromagnetic particles can be ripped out of the homogeneous colloidal mixture, forming a separate clump of magnetic dust, when exposed to strong magnetic fields; however, the magnetic attraction of the nanoparticles is weak enough that the surfactant's Van der Waals force is sufficient to prevent magnetic agglomeration. Ferrofluids thus are often classified as "superparamagnets" rather than ferromagnets. The difference between ferrofluids and magnetorheological fluids is the size of the particles: the particles in a ferrofluid are primarily nanoparticles and generally will not settle under normal conditions. These two fluids have very different applications as a result. Ferrofluids are composed of nanoscale particles of magnetite or some other compound containing iron, suspended in a liquid. This is similar to the way that the ions in a paramagnetic salt solution make the solution paramagnetic. The composition of a typical ferrofluid is about 5% magnetic solids, 10% surfactant and 85% carrier, by volume. In this case, the two states of matter are the solid particles and the liquid they are in. True ferrofluids are stable: the solid particles do not phase separate even in extremely strong magnetic fields.
Ferrofluid
–
Ferrofluid on glass, with a magnet underneath.
Ferrofluid
–
Steve Papell invented ferrofluid for NASA in 1963
Ferrofluid
–
Ferrofluid is the oily substance collecting at the poles of the magnet which is underneath the white dish.
Ferrofluid
–
Macrophotograph of ferrofluid influenced by a magnet.
50.
Daniel Bernoulli
–
Daniel Bernoulli FRS was a Swiss mathematician and physicist and was one of the many prominent mathematicians in the Bernoulli family. He is particularly remembered for his applications of mathematics to mechanics, especially fluid mechanics, and for his pioneering work in probability and statistics. Daniel Bernoulli was born in Groningen into a family of distinguished mathematicians. The Bernoulli family had emigrated to escape the Spanish persecution of the Huguenots. After a brief period in Frankfurt the family moved to Basel, in Switzerland. Daniel was the son of Johann Bernoulli and a nephew of Jacob Bernoulli. He had two brothers, Nicolaus II and Johann II. Daniel Bernoulli was described by W. W. Rouse Ball as "by far the ablest of the younger Bernoullis". He is said to have had a bad relationship with his father. Johann Bernoulli plagiarized some key ideas from Daniel's Hydrodynamica in his own book Hydraulica, which he backdated to before Hydrodynamica. Despite Daniel's attempts at reconciliation, his father carried the grudge until his death. Around schooling age, his father, Johann, encouraged him to study business, there being poor rewards awaiting a mathematician. However, Daniel refused, because he wanted to study mathematics. He later gave in to his father's wish and studied business. Daniel earned a PhD in anatomy and botany in 1721.
Daniel Bernoulli
–
Daniel Bernoulli
51.
Robert Boyle
–
Robert William Boyle FRS was an Anglo-Irish natural philosopher, chemist, physicist and inventor born in Lismore, County Waterford, Ireland. Among his works, The Sceptical Chymist is seen as a cornerstone book in the field of chemistry. Boyle is also noted for his writings in theology. He was born in County Waterford, Ireland, the seventh son and fourteenth child of Richard Boyle, 1st Earl of Cork, and Catherine Fenton. Richard Boyle had obtained an appointment as a deputy escheator and had amassed enormous landholdings by the time Robert was born. As a child, Robert was fostered to a local family, as were his elder brothers. He was educated at Eton College, of which Sir Henry Wotton was then the provost. During this time, his father hired Robert Carew, who had knowledge of Irish, to act as private tutor to his sons in Eton. After spending over three years at Eton, Robert travelled abroad with a French tutor. He returned to England with a keen interest in scientific research. Some of the members of his scientific circle also had meetings at Oxford, and in 1654 Boyle left Ireland for Oxford to pursue his work more successfully. It was here that Boyle rented rooms from the wealthy apothecary who owned the Hall. The person who originally formulated the hypothesis behind what became known as Boyle's law was Henry Power, in 1661.
Robert Boyle
–
Robert Boyle (1627–91)
Robert Boyle
–
Sculpture of a young boy, thought to be Boyle, on his parents' monument in St Patrick's Cathedral, Dublin.
Robert Boyle
–
One of Robert Boyle's notebooks (1690-1691) held by the Royal Society of London. The Royal Society archives holds 46 volumes of philosophical, scientific and theological papers by Boyle and seven volumes of his correspondence.
Robert Boyle
–
Plaque at the site of Boyle and Hooke's experiments in Oxford
52.
Augustin-Louis Cauchy
–
Baron Augustin-Louis Cauchy FRS FRSE was a French mathematician reputed as a pioneer of analysis. Cauchy was one of the first to prove theorems of calculus rigorously, rejecting the heuristic principle of the generality of algebra of earlier authors. Cauchy singlehandedly founded complex analysis and the study of permutation groups in abstract algebra. Cauchy had a great influence over his contemporaries and successors. His writings range widely in mathematical physics. "More theorems have been named for Cauchy than for any other mathematician." Cauchy was a prolific writer; he wrote five complete textbooks. He was the son of Louis François Cauchy and Marie-Madeleine Desestre. He married Aloise de Bure in 1818; she was a close relative of the publisher who published most of Cauchy's works. By her Cauchy had two daughters, Marie Françoise Alicia and Marie Mathilde. Cauchy's father was a high official in the Parisian police of the Ancien Régime; he lost his position because of the French Revolution, which broke out one month before Augustin-Louis was born. The Cauchy family survived the following Reign of Terror by escaping to Arcueil, where Cauchy received his first education, from his father. After the execution of Robespierre, it was safe for the family to return to Paris.
Augustin-Louis Cauchy
–
Cauchy around 1840. Lithography by Zéphirin Belliard after a painting by Jean Roller.
Augustin-Louis Cauchy
–
The title page of a textbook by Cauchy.
Augustin-Louis Cauchy
–
Leçons sur le calcul différentiel, 1829
53.
Jacques Charles
–
Jacques Alexandre César Charles was a French inventor, scientist, mathematician and balloonist. He was sometimes called Charles the Geometer. Charles and the Robert brothers launched the world's first hydrogen-filled balloon; their pioneering use of hydrogen for lift led to this type of balloon being named a Charlière. Charles subsequently became professor of physics at the Académie de Sciences. Charles was born in 1746. He married Julie Françoise Bouchaud des Hérettes, a creole woman 37 years younger than himself. Charles died in Paris on April 7, 1823. The discolouration of the varnishing/rubberising process left a red and yellow result. The balloon was comparatively small, a 35 cubic metre sphere of rubberised silk, only capable of lifting about 9 kg. The project was funded by a subscription organised by Barthelemy Faujas de Saint-Fond. At 13:45 on 1 December 1783, Jacques Charles and the Robert brothers launched a new manned balloon from the Jardin des Tuileries in Paris. Jacques Charles was accompanied by Nicolas-Louis Robert as co-pilot of the hydrogen-filled balloon. The envelope was covered with a net from which the basket was suspended; ballast was used to control altitude. They landed at sunset in Nesles-la-Vallée after a 2-hour 5-minute flight covering 36 km. The chasers on horseback, who were led by the Duc de Chartres, held down the craft while both Charles and Nicolas-Louis alighted.
Jacques Charles
–
Jacques Alexandre César Charles, 1820
Jacques Charles
–
The balloon built by Jacques Charles and the Robert brothers is attacked by terrified villagers in Gonesse.
Jacques Charles
–
Contemporary illustration of the first flight by Prof. Jacques Charles with Nicolas-Louis Robert, December 1, 1783. Viewed from the Place de la Concorde to the Tuileries Palace (destroyed in 1871)
Jacques Charles
–
Meusnier's dirigible
54.
Leonhard Euler
–
He also introduced much of the modern mathematical terminology and notation, particularly for mathematical analysis, such as the notion of a mathematical function. Euler is also known for his work in mechanics, fluid dynamics, optics, astronomy and music theory. Euler was one of the most eminent mathematicians of the 18th century and is held to be one of the greatest in history. He is also widely considered to be the most prolific mathematician of all time: his collected works fill 60 to 80 quarto volumes, more than anybody in the field. He spent most of his adult life in St. Petersburg, Russia, and in Berlin, then the capital of Prussia. A statement attributed to Pierre-Simon Laplace expresses Euler's influence on mathematics: "Read Euler, read Euler, he is the master of us all." He had two younger sisters, Anna Maria and Maria Magdalena, and a younger brother, Johann Heinrich. Soon after the birth of Leonhard, the Eulers moved from Basel to the town of Riehen, where Euler spent most of his childhood. Euler's formal education started in Basel, where he was sent to live with his maternal grandmother. During that time, he was receiving Saturday afternoon lessons from Johann Bernoulli, who quickly discovered his new pupil's incredible talent for mathematics. In 1726, Euler completed a dissertation on the propagation of sound with the title De Sono. At that time, he was unsuccessfully attempting to obtain a position at the University of Basel. In 1727, he entered the Paris Academy Prize Problem competition; Pierre Bouguer, who became known as "the father of naval architecture", won, and Euler took second place. Euler later won this annual prize twelve times.
Leonhard Euler
–
Portrait by Jakob Emanuel Handmann (1756)
Leonhard Euler
–
1957 Soviet Union stamp commemorating the 250th birthday of Euler. The text says: 250 years from the birth of the great mathematician, academician Leonhard Euler.
Leonhard Euler
–
Stamp of the former German Democratic Republic honoring Euler on the 200th anniversary of his death. Across the centre it shows his polyhedral formula, nowadays written as " v − e + f = 2".
Leonhard Euler
–
Euler's grave at the Alexander Nevsky Monastery
55.
Joseph Louis Gay-Lussac
–
Joseph Louis Gay-Lussac was a French chemist and physicist. Gay-Lussac was born in the present-day department of Haute-Vienne. His father, Anthony Gay, son of a doctor, was a lawyer and prosecutor and worked as a judge in Noblat Bridge. Towards the year 1803, father and son finally adopted the name Gay-Lussac. Under the Law of Suspects, his father, a former king's attorney, was imprisoned in Saint-Léonard from 1793 to 1794. Gay-Lussac received his early education at the hands of the Catholic Abbey of Bourdeix, though later in life he became an atheist. In the care of the Abbot of Dumonteil he began his education in Paris, finally entering the École Polytechnique in 1798. By the time of his entry to the École Polytechnique his father had been arrested. Three years later, Gay-Lussac transferred to the École des Ponts et Chaussées, and shortly afterwards was assigned to C. L. Berthollet as his assistant. In 1802, he was appointed demonstrator to A. F. Fourcroy at the École Polytechnique, where he later became professor of chemistry. In 1821, he was elected a foreign member of the Royal Swedish Academy of Sciences. In 1832 he was elected a Foreign Honorary Member of the American Academy of Arts and Sciences, and in 1839 he entered the chamber of peers. Gay-Lussac married Geneviève-Marie-Joseph Rojot in 1809. He had first met her when she was studying a chemistry textbook under the counter.
Joseph Louis Gay-Lussac
–
Joseph Louis Gay-Lussac
Joseph Louis Gay-Lussac
–
Gay-Lussac and Biot ascend in a hot air balloon, 1804. Illustration from the late 19th century.
Joseph Louis Gay-Lussac
–
Gravesite of Gay-Lussac
56.
Robert Hooke
–
Robert Hooke FRS was an English natural philosopher, architect and polymath. His disputes with contemporaries such as Newton may have contributed to his historical obscurity. Allan Chapman has characterised him as "England's Leonardo". He studied at Oxford during the Protectorate, where he became one of a tightly knit group of ardent Royalists led by John Wilkins. Hooke observed the rotations of Mars and Jupiter. In 1665 Hooke inspired the use of microscopes for scientific exploration with his book Micrographia. Based on his microscopic observations of fossils, he was an early proponent of biological evolution. Much of what is known of Hooke's early life comes from an autobiography that he commenced in 1696 but never completed. Richard Waller mentions it in his introduction to The Posthumous Works of Robert Hooke, M.D. S.R.S., printed in 1705. The work of Waller, along with John Ward's Lives of the Gresham Professors and John Aubrey's Brief Lives, forms the major biographical accounts of Hooke. Robert Hooke was born in 1635 in Freshwater, on the Isle of Wight, to John Hooke and Cecily Gyles. His two brothers were also ministers, and Robert Hooke was expected to join the Church. Robert, too, grew up to be a staunch monarchist. As a youth, Robert Hooke was fascinated by observation, mechanical works and drawing, interests that he would pursue in various ways throughout his life.
Robert Hooke
–
Modern portrait of Robert Hooke (Rita Greer 2004), based on descriptions by Aubrey and Waller; no contemporary depictions of Hooke are known to survive.
Robert Hooke
–
Memorial portrait of Robert Hooke at Alum Bay, Isle of Wight, his birthplace, by Rita Greer (2012).
Robert Hooke
–
Robert Boyle
Robert Hooke
–
Diagram of a louse from Hooke's Micrographia
57.
Blaise Pascal
–
Blaise Pascal was a French mathematician, physicist, inventor, writer and Christian philosopher. He was a child prodigy, educated by his father, a tax collector in Rouen. Pascal also wrote in defence of the scientific method. In 1642, while still a teenager, he started some pioneering work on calculating machines. Following Galileo Galilei and Torricelli, in 1646 he rebutted Aristotle's followers, who insisted that nature abhors a vacuum. Pascal's results caused many disputes before being accepted. In 1646, his sister Jacqueline identified with the religious movement within Catholicism known by its detractors as Jansenism. His father died in 1651. Following a religious experience in late 1654, he began writing influential works on theology. His two most famous works date from this period: the Lettres provinciales and the Pensées, the former set in the conflict between Jansenists and Jesuits. In 1654 he also wrote an important treatise on the arithmetical triangle. Between 1658 and 1659 he wrote on the cycloid and its use in calculating the volume of solids. He died just two months after his 39th birthday. Pascal was born in Clermont-Ferrand, in France's Auvergne region. He lost his mother, Antoinette Begon, at the age of three.
Blaise Pascal
–
Painting of Blaise Pascal made by François II Quesnel for Gérard Edelinck in 1691.
Blaise Pascal
–
An early Pascaline on display at the Musée des Arts et Métiers, Paris
Blaise Pascal
–
Portrait of Pascal
Blaise Pascal
–
Pascal studying the cycloid, by Augustin Pajou, 1785, Louvre
58.
Isaac Newton
–
His book Philosophiæ Naturalis Principia Mathematica, first published in 1687, laid the foundations for classical mechanics. He shares credit with Gottfried Wilhelm Leibniz for the development of calculus. Newton's Principia formulated the laws of motion and universal gravitation, which dominated scientists' view of the physical universe for the next three centuries. This work also demonstrated that the motion of objects on Earth and of celestial bodies could be described by the same principles. Newton formulated an empirical law of cooling and introduced the notion of a Newtonian fluid. He was the second Lucasian Professor of Mathematics at the University of Cambridge. In his later life, he became president of the Royal Society. He served the British government as Warden and Master of the Royal Mint. His father, also named Isaac Newton, had died three months before his birth. Born prematurely, he was a small child; his mother Hannah Ayscough reportedly said that he could have fit inside a quart mug. Newton's mother had three children from her second marriage. Newton hated farming. The master at the King's School persuaded his mother to send him back to school so that he might complete his education. Motivated partly by a desire for revenge against a schoolyard bully, Newton became the top-ranked student, distinguishing himself mainly by building sundials and models of windmills. In June 1661, Newton was admitted to Trinity College, Cambridge, on the recommendation of his uncle, Rev William Ayscough, who had studied there.
Isaac Newton
–
Portrait of Isaac Newton in 1689 (age 46) by Godfrey Kneller
Isaac Newton
–
Newton in a 1702 portrait by Godfrey Kneller
Isaac Newton
–
Isaac Newton (Bolton, Sarah K. Famous Men of Science. NY: Thomas Y. Crowell & Co., 1889)
Isaac Newton
–
Replica of Newton's second Reflecting telescope that he presented to the Royal Society in 1672
59.
Claude-Louis Navier
–
Claude-Louis Navier was a French engineer and physicist who specialized in mechanics. The Navier–Stokes equations are named after him and George Gabriel Stokes. He eventually succeeded his uncle as Inspecteur général at the Corps des Ponts et Chaussées. In 1824, Navier was admitted into the French Academy of Sciences. Navier is often considered to be the founder of modern structural analysis. His major contribution, however, remains the Navier–Stokes equations, central to fluid mechanics. His name is one of the 72 names inscribed on the Eiffel Tower.
Claude-Louis Navier
–
Bust of Claude Louis Marie Henri Navier at the École Nationale des Ponts et Chaussées
60.
Sir George Stokes, 1st Baronet
–
Sir George Gabriel Stokes, 1st Baronet, PRS, was a physicist and mathematician. In physics, Stokes made seminal contributions to fluid dynamics and physical optics. In mathematics he formulated the first version of what is now known as Stokes' theorem and contributed to the theory of asymptotic expansions. He served as secretary, then president, of the Royal Society of London. He retained his place until 1902 when, on the day before his 83rd birthday, he was elected to the mastership of Pembroke College. In 1849, Stokes was appointed Lucasian Professor of Mathematics at Cambridge, a position he held until his death in 1903. The jubilee of this appointment was celebrated there in a ceremony attended by numerous delegates from European and American universities. During a portion of this period he was also president of the Royal Society, of which he had been one of the secretaries since 1854. The Royal Society's catalogue of scientific papers gives the titles of over a hundred memoirs by him published down to 1883. Some of these are only brief notes, others are short corrective statements, but many are long and elaborate treatises. His first published papers, which appeared in 1843, were on the steady motion of incompressible fluids and some cases of fluid motion. His work on fluid viscosity led to his calculating the terminal velocity for a sphere falling in a viscous medium. This became known as Stokes' law. He derived an expression for the frictional force exerted on spherical objects moving through a viscous fluid at very small Reynolds numbers. His work is the basis of the falling-sphere viscometer, in which the fluid is stationary in a vertical glass tube.
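Stokes' law gives the terminal velocity in closed form, which is exactly what the falling-sphere viscometer exploits. The sketch below uses the standard formula with illustrative material values (a small steel sphere in glycerine):

```python
# Stokes' law terminal velocity for a small sphere in a viscous fluid,
# valid only at very small Reynolds numbers:
#   v = 2 r^2 (rho_p - rho_f) g / (9 mu)

G = 9.81  # gravitational acceleration, m/s^2

def terminal_velocity(radius, rho_particle, rho_fluid, mu):
    """Stokes terminal settling velocity (m/s) of a sphere of radius `radius` (m)."""
    return 2.0 * radius ** 2 * (rho_particle - rho_fluid) * G / (9.0 * mu)

# A 1 mm steel sphere (7800 kg/m^3) falling through glycerine (1260 kg/m^3, 1.41 Pa*s):
v = terminal_velocity(1e-3, 7800.0, 1260.0, 1.41)
print(v)  # ~0.01 m/s: slow enough that the small-Reynolds assumption holds
```

In a falling-sphere viscometer the measurement runs the other way: the descent time over a marked distance gives v, and the same formula is inverted to yield the viscosity mu.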
Sir George Stokes, 1st Baronet
–
Sir George Stokes, Bt.
Sir George Stokes, 1st Baronet
–
Signature
Sir George Stokes, 1st Baronet
–
Fluorspar
Sir George Stokes, 1st Baronet
–
A calcite crystal laid upon a paper with some letters showing the double refraction
61.
Physics
–
One of the main goals of physics is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. The boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms of other sciences while opening new avenues of research in areas such as philosophy. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs. The United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences. The planets were often a target of worship, believed to represent their gods. While the explanations for these phenomena were often unscientific and lacking in evidence, these early observations laid the foundation for later astronomy. In The Book of Optics, Ibn al-Haytham was also the first to delve further into the way the eye itself works. Fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt. Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title, published 700 years later. The translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build the same devices as Ibn al-Haytham had, and understand the way light works. From this, important inventions such as eyeglasses, magnifying glasses, telescopes and cameras were developed.
Physics
–
Further information: Outline of physics
Physics
–
Ancient Egyptian astronomy is evident in monuments like the ceiling of Senemut's tomb from the Eighteenth Dynasty of Egypt.
Physics
–
Sir Isaac Newton (1643–1727), whose laws of motion and universal gravitation were major milestones in classical physics
Physics
–
Albert Einstein (1879–1955), whose work on the photoelectric effect and the theory of relativity led to a revolution in 20th century physics
62.
Mechanics
–
The scientific discipline has its origins in Ancient Greece with the writings of Aristotle and Archimedes. During the early modern period, scientists such as Khayyam, Galileo, Kepler, and Newton laid the foundation for what is now known as classical mechanics. It can also be defined as a branch of science which deals with forces acting on objects. Historically, classical mechanics came first, while quantum mechanics is a comparatively recent invention. Classical mechanics originated with Isaac Newton's Principia Mathematica; quantum mechanics was discovered in the early 20th century. Both are commonly held to constitute the most certain knowledge that exists about physical nature. Classical mechanics has often been viewed as a model for other so-called exact sciences. Essential in this respect is the relentless use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them. Quantum mechanics is of a bigger scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects; each simply pertains to specific situations. The principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers. However, for macroscopic processes classical mechanics remains in wide use. Modern descriptions of such behavior begin with careful definitions of quantities such as displacement, time, velocity, acceleration, mass, and force. Until about 400 years ago, however, motion was explained from a very different point of view. It was Galileo who showed that the speed of falling objects increases steadily during the time of their fall.
Mechanics
–
Arabic Machine Manuscript. Unknown date (at a guess: 16th to 19th centuries).
63.
Force
–
In physics, a force is any interaction that, when unopposed, will change the motion of an object. In other words, a force can cause an object with mass to change its velocity, i.e. to accelerate. Force can also be described by intuitive concepts such as a push or a pull. A force has both magnitude and direction, making it a vector quantity. It is represented by the symbol F. In an extended body, each part usually applies forces on the adjacent parts; the distribution of such forces through the body is the mechanical stress. Pressure is a simple type of stress. Stress usually causes deformation in solids and flow in fluids. A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Sir Isaac Newton formulated laws of motion that were not improved on for nearly three hundred years. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known: in order of decreasing strength, they are strong, electromagnetic, weak, and gravitational. High-energy physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction. Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines.
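The quantitative content of this paragraph is Newton's second law, F = ma, with force as a vector. A minimal sketch of the vector form; the function name and the example numbers are illustrative, not taken from the source:

```python
# Newton's second law, F = m * a, as a vector relation: the net force on
# a body of mass m produces acceleration a = F_net / m.
def acceleration(forces, mass):
    """Net 2D acceleration (m/s^2) from a list of (fx, fy) force vectors in newtons."""
    fx = sum(f[0] for f in forces)  # sum of x-components of all forces
    fy = sum(f[1] for f in forces)  # sum of y-components of all forces
    return (fx / mass, fy / mass)

# 10 N to the right and 10 N upward acting on a 2 kg body:
print(acceleration([(10.0, 0.0), (0.0, 10.0)], 2.0))  # -> (5.0, 5.0)
```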
Force
–
Aristotle famously described a force as anything that causes an object to undergo "unnatural motion"
Force
–
Forces are also described as a push or pull on an object. They can be due to phenomena such as gravity, magnetism, or anything that might cause a mass to accelerate.
Force
–
Though Sir Isaac Newton's most famous equation is F = ma, he actually wrote down a different form for his second law of motion that did not use differential calculus.
Force
–
Galileo Galilei was the first to point out the inherent contradictions contained in Aristotle's description of forces.
64.
Mechanical engineering
–
Mechanical engineering is the discipline that applies the principles of engineering, physics, and materials science for the design, analysis, manufacturing, and maintenance of mechanical systems. It is the branch of engineering that involves the design, production, and operation of machinery. It is one of the oldest and broadest of the engineering disciplines. The mechanical engineering field requires an understanding of core areas including mechanics, kinematics, thermodynamics, materials science, and electricity. Mechanical engineering science emerged as a result of developments in the field of physics. Today, mechanical engineers are pursuing developments in such fields as composites, mechatronics, and nanotechnology. Mechanical engineers may also work with biomechanics, transport phenomena, biomechatronics, and modeling of biological systems. Mechanical engineering finds its application in the archives of various ancient and medieval societies throughout mankind. In ancient Greece, the works of Archimedes deeply influenced mechanics in the Western tradition, and Heron of Alexandria created the first steam engine. In China, Zhang Heng improved a water clock and invented a seismometer, and Ma Jun invented a chariot with differential gears. During the 7th to 15th century, the era called the Islamic Golden Age, there were remarkable contributions from Muslim inventors in the field of mechanical technology. Al-Jazari, one of them, presented many mechanical designs. He is also considered to be the inventor of mechanical devices which now form the very basics of mechanisms, such as the crankshaft and camshaft. Gottfried Wilhelm Leibniz is also credited with creating calculus during the same time frame. On the European continent, Johann von Zimmermann founded the first factory for grinding machines in Chemnitz, Germany in 1848.
Mechanical engineering
–
Mechanical engineers design and build engines, power plants, other machines...
Mechanical engineering
–
... structures, and vehicles of all sizes.
Mechanical engineering
–
An oblique view of a four-cylinder inline crankshaft with pistons
Mechanical engineering
–
Training FMS with learning robot SCORBOT-ER 4u, workbench CNC Mill and CNC Lathe
65.
Civil engineering
–
Civil engineering is traditionally broken into a number of sub-disciplines. It is the second-oldest engineering discipline after military engineering, and it is defined to distinguish non-military engineering from military engineering. Civil engineering takes place in the public sector from municipal through to national governments, and in the private sector from individual homeowners through to international companies. Engineering has been an aspect of life since the beginnings of human existence. During this time, transportation became increasingly important, leading to the development of the wheel and sailing. The construction of pyramids in Egypt was one of the first instances of large structure construction. The Romans developed civil structures including insulae, harbors, bridges, dams and roads. In the 18th century, the term civil engineering was coined to incorporate all things civilian as opposed to military engineering. The first self-proclaimed civil engineer was John Smeaton, who constructed the Eddystone Lighthouse. Though there was evidence of some technical meetings, the Smeatonian Society of Civil Engineers, formed in 1771, was little more than a social society. In 1818 the Institution of Civil Engineers was founded in London, and in 1820 the eminent engineer Thomas Telford became its first president. The institution received a Royal Charter in 1828, formally recognising civil engineering as a profession. The first private college to teach civil engineering in the United States was Norwich University, founded in 1819 by Captain Alden Partridge. The first degree in civil engineering in the United States was awarded by Rensselaer Polytechnic Institute in 1835. The first such degree to be awarded to a woman was granted to Nora Stanton Blatch in 1905.
Civil engineering
–
A multi-level stack interchange, buildings, houses, and park in Shanghai, China.
Civil engineering
–
Philadelphia City Hall in the United States is still the world's tallest masonry load bearing structure.
Civil engineering
–
Leonhard Euler developed the theory explaining the buckling of columns
Civil engineering
–
John Smeaton, the "father of civil engineering"
66.
Chemical engineering
–
A chemical engineer designs large-scale processes that convert chemicals, raw materials, living cells, microorganisms and energy into useful forms and products. George E. Davis, an English consultant, is credited with having coined the term. The History of Science in the United States: An Encyclopedia puts this at around 1890. "Chemical engineering", describing the use of mechanical equipment in the chemical industry, became common vocabulary in England after 1850. By 1910, the profession of "chemical engineer" was already in common use in Britain and the United States. Chemical engineering emerged upon the development of unit operations, a fundamental concept of the discipline. Most authors agree that Davis invented the concept of unit operations, even if he did not substantially develop it. Three years before Davis' lectures, Henry Edward Armstrong taught a degree course in chemical engineering at the City and Guilds of London Institute. Armstrong's course "failed simply because its graduates... were not especially attractive to employers." Employers of the time would have rather hired mechanical engineers. Starting from 1888, Lewis M. Norton taught at MIT the first chemical engineering course in the United States. Norton's course was essentially similar to Armstrong's course. Both courses, however, simply merged chemistry and engineering subjects. "Its practitioners had difficulty convincing engineers that they were engineers and chemists that they were not simply chemists." The concept of unit operations was introduced into the course by William Hultz Walker in 1905.
Chemical engineering
–
Chemical engineers design, construct and operate process plants (distillation columns pictured)
Chemical engineering
–
George E. Davis
Chemical engineering
–
Chemical engineers use computers to control automated systems in plants.
Chemical engineering
–
Operators in a chemical plant using an older analog control board, seen in East-Germany, 1986.
67.
Geophysics
–
Although geophysics was only recognized as a separate discipline in the 19th century, its origins date back to ancient times. The first magnetic compasses were made from lodestones, while more modern magnetic compasses played an important role in the history of navigation. The first seismic instrument was built in AD 132. Geophysics is applied to societal needs, such as mineral resources, mitigation of natural hazards and environmental protection. Geophysics is a highly interdisciplinary subject, and geophysicists contribute to every area of the Earth sciences. The Moon returns to the same meridian every 24 hours and 50 minutes, and there are two tidal bulges; therefore, there is a gap of 12 hours and 25 minutes between every high tide and between every low tide. Gravitational forces make rocks press down on deeper rocks, increasing their density as the depth increases. Measurements of gravitational acceleration and gravitational potential at the Earth's surface and above it can be used to look for mineral deposits. The surface gravitational field provides information on the dynamics of tectonic plates. The geopotential surface called the geoid is one definition of the shape of the Earth. The geoid would be the global mean sea level if the oceans could be extended through the continents. Convection in the liquid outer core generates the Earth's magnetic field. The main sources of heat are the primordial heat and radioactivity, although there are also contributions from phase transitions. Some heat is carried up from the bottom of the mantle by mantle plumes. The heat flow at the Earth's surface is about 4.2 × 10¹³ W, and it is a potential source of geothermal energy.
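The 12-hour-25-minute tidal interval follows from the length of the lunar day and the two tidal bulges. A quick check of the arithmetic:

```python
# The Moon crosses a given meridian every ~24 h 50 min (one lunar day).
# With two tidal bulges, successive high tides are half a lunar day apart.
lunar_day_min = 24 * 60 + 50           # lunar day in minutes (1490)
high_tide_gap = lunar_day_min / 2      # minutes between successive high tides

print(divmod(high_tide_gap, 60))       # -> (12.0, 25.0), i.e. 12 h 25 min
```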
Geophysics
–
Illustration of the deformations of a block by body waves and surface waves (see seismic wave).
Geophysics
–
Age of the sea floor. Much of the dating information comes from magnetic anomalies.
Geophysics
–
Replica of Zhang Heng 's seismoscope, possibly the first contribution to seismology.
68.
Astrophysics
–
Among the objects studied are the Sun, other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background. The properties examined include luminosity, density, temperature, and chemical composition. In practice, astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Although astronomy is as ancient as recorded history itself, it was long separated from the study of terrestrial physics. The challenge for early thinkers was that the tools had not yet been invented with which to prove these assertions. For much of the nineteenth century, astronomical research was focused on the routine work of computing the motions of astronomical objects. Kirchhoff deduced that the dark lines in the solar spectrum are caused by chemical elements in the Solar atmosphere. It was thus proved that the chemical elements found in the Sun and stars were also found on Earth. In 1868, Norman Lockyer detected a line in the solar spectrum that matched no known element; he thus claimed the line represented a new element, called helium, after the Greek Helios, the Sun personified. In 1885, Edward C. Pickering undertook an ambitious program of stellar spectral classification. By 1890, a catalog of over 10,000 stars had been prepared that grouped them into thirteen spectral types. Most significantly, Cecilia Payne discovered that hydrogen and helium were the principal components of stars. This discovery was so unexpected that her dissertation readers convinced her to modify the conclusion before publication. However, later research confirmed her discovery. By the end of the 20th century, further study of spectra advanced, particularly as a result of the advent of quantum physics.
Astrophysics
–
Early 20th-century comparison of elemental, solar, and stellar spectra
Astrophysics
–
Supernova remnant LMC N 63A imaged in x-ray (blue), optical (green) and radio (red) wavelengths. The X-ray glow is from material heated to about ten million degrees Celsius by a shock wave generated by the supernova explosion.
Astrophysics
–
The stream lines on this simulation of a supernova show the flow of matter behind the shock wave giving clues as to the origin of pulsars
69.
Biology
–
Biology is a natural science concerned with the study of life and living organisms, including their structure, function, growth, evolution, distribution, identification and taxonomy. Modern biology is an eclectic field, composed of many branches and subdisciplines. The term biology is derived from the Greek word βίος, bios, "life", and the suffix -λογία, -logia, "study of." The Latin-language form of the term first appeared in 1736 when Swedish scientist Carl Linnaeus used biologi in his Bibliotheca botanica. The first German use, Biologie, was in a 1771 translation of Linnaeus' work. In 1797, Theodor Georg August Roose used the term in the preface of Grundzüge der Lehre von der Lebenskraft. Karl Friedrich Burdach used the term in a more restricted sense of the study of human beings from a morphological, physiological and psychological perspective. The term came into its modern usage with Gottfried Reinhold Treviranus, who announced in 1802: "The science that concerns itself with these objects we will indicate by the name biology or the doctrine of life." Although modern biology is a relatively recent development, sciences included within it have been studied since ancient times. Natural philosophy was studied as early as the ancient civilizations of Mesopotamia, Egypt, the Indian subcontinent, and China. However, the origins of modern biology and its approach to the study of nature are most often traced back to ancient Greece. While the formal study of medicine dates back to Hippocrates, it was Aristotle who contributed most extensively to the development of biology. Scholars of the Islamic world who wrote on biology included al-Jahiz, Al-Dīnawarī, who wrote on botany, and Rhazes, who wrote on anatomy and physiology. Biology began to quickly grow with Anton van Leeuwenhoek's dramatic improvement of the microscope. It was then that scholars discovered spermatozoa, bacteria, and the diversity of microscopic life.
70.
Numerical methods
–
Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. Being able to compute the sides of a triangle is extremely important in, for example, astronomy and construction. Numerical analysis continues this long tradition of practical mathematical calculations. Much like the Babylonian approximation of the square root of 2, modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid 20th century, computers calculate the required functions instead. These same interpolation formulas nevertheless continue to be used as part of the software algorithms for solving differential equations. Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations. Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically. Hedge funds use tools from all fields of numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants. Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. Historically, such algorithms were developed within the overlapping field of operations research. Insurance companies use numerical programs for actuarial analysis.
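The Babylonian approximation of √2 can be reproduced with the iterative averaging scheme now known as the Babylonian (or Heron's) method. This sketch uses modern floating point rather than the tablet's sexagesimal arithmetic, so it illustrates the idea, not the historical procedure:

```python
# Babylonian (Heron's) method: repeatedly average a guess x with s/x;
# the iteration converges quadratically to sqrt(s).
def babylonian_sqrt(s, x=1.0, iterations=5):
    for _ in range(iterations):
        x = (x + s / x) / 2.0
    return x

approx = babylonian_sqrt(2.0)
print(approx)  # -> 1.4142135623730951, close to the tablet's 1.41421296...
```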
Numerical methods
–
Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) with annotations. The approximation of the square root of 2 is four sexagesimal figures, which is about six decimal figures: 1 + 24/60 + 51/60² + 10/60³ = 1.41421296...
Numerical methods
–
Direct method
Numerical methods
71.
Computational fluid dynamics
–
Computational fluid dynamics is a branch of fluid mechanics that uses numerical analysis and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial experimental validation of such software is performed using a wind tunnel, with the final validation coming in full-scale testing, e.g. flight tests. The fundamental basis of almost all CFD problems is the Navier–Stokes equations, which define single-phase fluid flows. These equations can be simplified by removing terms describing viscous actions to yield the Euler equations. Further simplification, by removing terms describing vorticity, yields the full potential equations. Finally, for small perturbations in subsonic and supersonic flows these equations can be linearized to yield the linearized potential equations. Historically, methods were first developed to solve the linearized potential equations. Two-dimensional methods, using conformal transformations of the flow about a cylinder to the flow about an airfoil, were developed in the 1930s. Although Lewis Fry Richardson's early hand calculations failed dramatically, they, together with Richardson's book "Weather Prediction by Numerical Process", set the basis for numerical meteorology. In fact, early CFD calculations during the 1940s using ENIAC used methods close to those in Richardson's 1922 book. The computer power available paced development of three-dimensional methods. One early group developing such methods, at Los Alamos National Laboratory, was led by Francis H. Harlow, widely considered one of the pioneers of CFD.
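The equations above are solved numerically by discretizing space and time. As a toy illustration of the finite-difference machinery CFD builds on, here is a first-order upwind solver for 1D linear advection; the equation, scheme, grid, and parameters are illustrative simplifications, nothing like a full Navier–Stokes solver:

```python
import numpy as np

def advect(u0, c=1.0, dx=0.01, dt=0.005, steps=100):
    """Advance du/dt + c*du/dx = 0 with a first-order upwind scheme
    and periodic boundaries. A toy illustration of CFD discretization."""
    u = np.asarray(u0, dtype=float).copy()
    assert c * dt / dx <= 1.0, "CFL stability condition violated"
    for _ in range(steps):
        u = u - (c * dt / dx) * (u - np.roll(u, 1))  # upwind difference (c > 0)
    return u

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)          # Gaussian pulse centred at x = 0.3
u = advect(u0, c=1.0, dx=x[1] - x[0], dt=0.005, steps=50)
# the pulse moves right by c*dt*steps = 0.25, smeared by numerical diffusion
```

The upwind scheme is the simplest stable choice here; it conserves the total of u exactly but damps sharp features, which is why practical CFD codes use higher-order schemes.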
Computational fluid dynamics
–
Computational physics
Computational fluid dynamics
–
A computer simulation of high velocity air flow around the Space Shuttle during re-entry.
Computational fluid dynamics
–
A simulation of the Hyper-X scramjet vehicle in operation at Mach 7
Computational fluid dynamics
–
Volume rendering of a non-premixed swirl flame as simulated by LES.
72.
Particle image velocimetry
–
Particle image velocimetry is an optical method of flow visualization used in education and research. It is used to obtain instantaneous velocity measurements and related properties in fluids. The fluid is seeded with tracer particles which, for sufficiently small particles, are assumed to faithfully follow the flow dynamics. The fluid with entrained particles is illuminated so that particles are visible. The motion of the seeding particles is used to calculate the speed and direction (the velocity field) of the flow being studied. Other techniques used to measure flows are laser Doppler velocimetry and hot-wire anemometry. A liquid light guide may connect the laser to the lens setup. PIV software is used to post-process the optical images. The first to use particles to study fluids in a more systematic manner was Ludwig Prandtl, in the early 20th century. Laser Doppler velocimetry predates PIV as a laser-digital system to become widespread for research and industrial use. Able to obtain all of a fluid's velocity measurements at a specific point, it can be considered the 2-dimensional PIV's immediate predecessor. PIV itself found its roots in laser speckle velocimetry, a technique that several groups began experimenting with in the late 1970s. In the early 1980s it was found that it was advantageous to decrease the particle concentration down to levels where individual particles could be observed. Analyzing the images usually required an immense amount of computing power. The seeding particles are an inherently critical component of the PIV system.
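The post-processing step typically estimates particle displacement by locating the peak of the cross-correlation between interrogation windows of two successive frames. A minimal sketch with synthetic data; the particle positions and imposed displacement are made-up values, and real PIV software adds windowing, sub-pixel peak fitting, and outlier rejection:

```python
import numpy as np

# Two synthetic "camera frames": a few bright tracer particles, with the
# second frame shifted by a known displacement we then try to recover.
frame_a = np.zeros((64, 64))
frame_a[[10, 20, 33, 41], [12, 27, 9, 35]] = 1.0   # four tracer particles

dy, dx = 3, 5                                       # true displacement (pixels)
frame_b = np.roll(np.roll(frame_a, dy, axis=0), dx, axis=1)

# Circular cross-correlation via FFT; its peak sits at the displacement.
corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real
est_dy, est_dx = np.unravel_index(np.argmax(corr), corr.shape)
print(est_dy, est_dx)  # recovers the imposed shift: 3 5
```

Dividing each frame into many such windows and repeating this estimate yields the full velocity field once the pixel shift is scaled by magnification and inter-frame time.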
Particle image velocimetry
–
Application of PIV in combustion
Particle image velocimetry
–
PIV-Analysis of a vortex pair. The magnification in the upper left shows the increase in spatial resolution that can be achieved using a modern multi-pass window deformation technique.
Particle image velocimetry
–
PIV analysis of a stalled flat plate, shear rate superimposed
73.
Ancient Greece
–
Ancient Greece was a civilization belonging to a period of Greek history from the Greek Dark Ages to the end of antiquity (c. AD 600). Immediately following this period was the beginning of the Byzantine era. Included in ancient Greece is the period of Classical Greece, which flourished during the 5th to 4th centuries BC. Classical Greece began with the era of the Persian Wars. Because of conquests by Alexander the Great of Macedonia, Hellenistic civilization flourished from Central Asia to the western end of the Mediterranean Sea. Classical Antiquity in the Mediterranean region is commonly considered to have ended in the 6th century AD. Classical Antiquity in Greece is preceded by the Greek Dark Ages, archaeologically characterised by the protogeometric and geometric styles of designs on pottery. The end of the Dark Ages is also frequently dated to 776 BC, the year of the first Olympic Games. The earliest of the periods that followed is the Archaic period, in which artists made larger free-standing sculptures with the dreamlike "archaic smile". The Archaic period is often taken to end in 508 BC, with the overthrow of the last tyrant of Athens. The Classical period that followed saw the Greco-Persian Wars and the rise of Macedon. Following the Classical period was the Hellenistic period, during which Greek culture and power expanded into the Near and Middle East. This period ends with the Roman conquest. Herodotus is widely known as the "father of history": his Histories are eponymous of the entire field. Herodotus was succeeded by authors such as Thucydides, Xenophon, Demosthenes, Plato and Aristotle.
Ancient Greece
–
The Parthenon, a temple dedicated to Athena, located on the Acropolis in Athens, is one of the most representative symbols of the culture and sophistication of the ancient Greeks.
Ancient Greece
–
Dipylon Vase of the late Geometric period, or the beginning of the Archaic period, c. 750 BC.
Ancient Greece
–
Political geography of ancient Greece in the Archaic and Classical periods
74.
Archimedes
–
Archimedes of Syracuse was a Greek mathematician, physicist, engineer, inventor, and astronomer. Although few details of his life are known, he is regarded as one of the leading scientists in classical antiquity. He was also one of the first to apply mathematics to physical phenomena, founding hydrostatics and statics, including an explanation of the principle of the lever. He is credited with designing innovative machines, such as his screw pump and defensive war machines to protect his native Syracuse from invasion. Archimedes died during the Siege of Syracuse when he was killed despite orders that he should not be harmed. Unlike his inventions, the mathematical writings of Archimedes were little known in antiquity. The date of his birth is based on a statement by the Byzantine Greek historian John Tzetzes that Archimedes lived for 75 years. In The Sand Reckoner, Archimedes gives his father's name as Phidias, an astronomer about whom nothing else is known. Plutarch wrote in his Parallel Lives that Archimedes was related to King Hiero II, the ruler of Syracuse. A biography of Archimedes was written by his friend Heracleides, but this work has been lost, leaving the details of his life obscure. It is unknown, for instance, whether he ever had children. During his youth, Archimedes may have studied in Alexandria, Egypt, where Conon of Samos and Eratosthenes of Cyrene were contemporaries. He referred to Conon of Samos as his friend, while two of his works have introductions addressed to Eratosthenes. According to the popular account given by Plutarch, Archimedes was contemplating a mathematical diagram when the city was captured. A Roman soldier commanded him to come and meet General Marcellus, but he declined, saying that he had to finish working on the problem.
Archimedes
–
Archimedes Thoughtful by Fetti (1620)
Archimedes
–
Cicero Discovering the Tomb of Archimedes by Benjamin West (1805)
Archimedes
–
Artistic interpretation of Archimedes' mirror used to burn Roman ships. Painting by Giulio Parigi.
Archimedes
–
A sphere has 2/3 the volume and surface area of its circumscribing cylinder including its bases. A sphere and cylinder were placed on the tomb of Archimedes at his request. (see also: Equiareal map)
75.
On Floating Bodies
–
It is the first known work on hydrostatics, of which Archimedes is recognized as the founder. It contains the first statement of what is now known as Archimedes' principle. Archimedes lived in the Greek city-state of Syracuse, Sicily. He is credited with calculating the underlying mathematics of the lever. A leading scientist of classical antiquity, Archimedes also developed elaborate systems of pulleys to move large objects with a minimum of effort. His machines of war helped to hold back the armies of Rome in the Second Punic War. The only known copy of On Floating Bodies in Greek comes from the Archimedes Palimpsest. In the first part of the treatise, Archimedes proves that water will adopt a spherical form around a center of gravity. This may have been an attempt at explaining the theory of Greek astronomers such as Eratosthenes that the Earth is round. Further, Proposition 5 of Archimedes' treatise On Floating Bodies states that: Any floating object displaces its own weight of fluid. This concept has come to be referred to by some as the principle of flotation. The second book, which studies the stability of floating paraboloids, is a mathematical achievement rarely equaled since. It is restricted to the case when the base of the paraboloid lies either entirely above or entirely below the fluid surface. Archimedes' investigation of paraboloids was probably an idealization of the shapes of ships' hulls. Some of his sections float with the base above water, similar to the way that icebergs float.
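Proposition 5, the principle of flotation, fixes how deep a floating body rides: since the weight of displaced fluid equals the body's weight, the submerged volume fraction equals the ratio of the densities. A small sketch; the iceberg and seawater densities are typical textbook values, not figures from the source:

```python
# Archimedes' principle for flotation: rho_body * V = rho_fluid * V_submerged,
# so the submerged volume fraction is rho_body / rho_fluid (capped at 1).
def submerged_fraction(rho_body, rho_fluid):
    if rho_body >= rho_fluid:
        return 1.0                    # denser than the fluid: it sinks
    return rho_body / rho_fluid

# Iceberg (~917 kg/m^3) in seawater (~1025 kg/m^3): ~89% lies below the surface.
print(submerged_fraction(917.0, 1025.0))
```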
On Floating Bodies
–
A page from On Floating Bodies in the Archimedes Palimpsest
76.
Leonardo da Vinci
–
He has been variously called the father of palaeontology, ichnology, and architecture, and is widely considered one of the greatest painters of all time. Sometimes credited with the inventions of the parachute, helicopter and tank, he epitomised the Renaissance humanist ideal. Many historians and scholars regard Leonardo as the prime exemplar of the "Universal Genius" or "Renaissance Man", an individual of "unquenchable curiosity" and "feverishly inventive imagination". Much of his earlier working life was spent in the service of Ludovico il Moro in Milan. Leonardo was, and is, renowned primarily as a painter. Among his works, the Mona Lisa is the most famous and most parodied portrait and The Last Supper the most reproduced religious painting of all time. Leonardo's drawing of the Vitruvian Man is also regarded as a cultural icon, being reproduced on items as varied as the euro coin, textbooks, and T-shirts. Perhaps fifteen of his paintings have survived. Leonardo is also revered for his technological ingenuity. He conceptualised flying machines and the double hull. A number of Leonardo's most practical inventions are nowadays displayed as working models at the Museum of Vinci. Today, Leonardo is widely considered one of the most diversely talented individuals ever to have lived. He was the out-of-wedlock son of the wealthy Messer Piero Fruosino di Antonio da Vinci, a Florentine legal notary, and Caterina, a peasant. The inclusion of the title "ser" indicated that Leonardo's father was a gentleman. Little is known about Leonardo's early life.
Leonardo da Vinci
–
Portrait of Leonardo
Leonardo da Vinci
–
Leonardo's childhood home in Anchiano
Leonardo da Vinci
–
Leonardo's earliest known drawing, the Arno Valley (1473), Uffizi
Leonardo da Vinci
–
The Baptism of Christ (1472–75)— Uffizi, by Verrocchio and Leonardo
77.
Evangelista Torricelli
–
Evangelista Torricelli, born on 15 October 1608 in Rome, was an Italian physicist and mathematician who invented the barometer in Florence, Italy. He was the firstborn child of Gaspare Ruberti and Giacoma Torricelli. His family was from the Province of Ravenna, then part of the Papal States. The family was very poor. His uncle then sent Torricelli to Rome to study science under Benedetto Castelli, professor of mathematics at the Collegio della Sapienza. Castelli, a student of Galileo Galilei, "was entrusted by Pope Urban VIII with hydraulic undertakings." There is no actual evidence that Torricelli was enrolled at the university, but it is almost certain that he was taught by Castelli. In exchange, Torricelli worked for Castelli from 1626 to 1632 as a private arrangement. Because of this, Torricelli was exposed to experiments funded by Pope Urban VIII. While living in Rome, Torricelli also became the student of Bonaventura Cavalieri, with whom he became great friends. Although Galileo promptly invited Torricelli to visit, he did not accept until shortly before Galileo's death. The reason for this was that Torricelli's mother, Caterina Angetti, had died. Right before his appointment as Galileo's successor, Torricelli was considering returning to Rome because of there being nothing left for him in Florence.
Evangelista Torricelli
–
Evangelista Torricelli portrayed on the frontpage of Lezioni d'Evangelista Torricelli
Evangelista Torricelli
–
Torricelli's statue in the Museo di Storia Naturale di Firenze
Evangelista Torricelli
–
Evangelista Torricelli by Lorenzo Lippi (circa 1647, Galleria Silvano Lodi & Due)
Evangelista Torricelli
–
Torricelli's experiment
78.
Barometer
–
A barometer is a scientific instrument used in meteorology to measure atmospheric pressure. Pressure tendency can forecast short-term changes in the weather. Numerous measurements of air pressure are used within surface weather analysis to help find surface troughs, high pressure systems and frontal boundaries. Barometers and pressure altimeters are essentially the same instrument, but used for different purposes. The main exception to this is ships at sea, which can use a barometer because their elevation does not change. Due to the presence of weather systems, aircraft altimeters may need to be adjusted as they fly between regions of varying atmospheric pressure. Early explanations of why water could be raised only to a limited height appealed to the theory of horror vacui ("nature abhors a vacuum"), which Galileo restated as resistenza del vacuo. Galileo's ideas reached Rome in his Discorsi. Sometime between 1639 and 1641, Gasparo Berti carried out the experiment: some of the water inside his long tube poured out into the basin below. This seemed to suggest the possibility of a vacuum existing in the space above the water. Torricelli, a friend and student of Galileo, interpreted the results of the experiments in a novel way. He proposed that the weight of the atmosphere, not an attracting force of the vacuum, held the water in the tube. Even Galileo had accepted the weightlessness of air as a simple truth. Torricelli instead proposed that air had weight and that it was the latter which held up the column of water.
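Torricelli's proposal can be checked with the hydrostatic relation P = ρgh: the column height is the pressure divided by the liquid's weight density. A back-of-the-envelope sketch using standard-condition values (illustrative, not measurements from the source):

```python
# Height of a liquid column balancing atmospheric pressure: h = P / (rho * g).
def column_height(pressure_pa, density_kg_m3):
    g = 9.81                             # gravitational acceleration, m/s^2
    return pressure_pa / (density_kg_m3 * g)

P = 101325.0                             # standard atmospheric pressure, Pa
print(column_height(P, 13595.0))         # mercury: ~0.76 m (the familiar 760 mm)
print(column_height(P, 1000.0))          # water: ~10.3 m, why suction pumps stall
```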
Barometer
–
Mercury barometers from the Musée des Arts et Métiers, Paris
Barometer
–
Goethe's device
Barometer
–
Old aneroid barometer
Barometer
–
Modern aneroid barometer
79.
Hydrostatics
–
Fluid statics or hydrostatics is the branch of fluid mechanics that studies incompressible fluids at rest. Hydrostatics is categorized as a part of fluid statics, the study of all fluids, incompressible or not, at rest. Hydrostatics is fundamental to the engineering of equipment for storing, transporting and using fluids. It is also relevant to meteorology and to many other fields. Some principles of hydrostatics have been known since antiquity by the builders of boats, cisterns and fountains. The Pythagorean cup was used as a learning tool: the cup has a line carved into its interior and a small vertical pipe in its center, and the height of this pipe is the same as the line carved into the interior of the cup. The cup may be filled to the line without any fluid passing into the pipe in the center of the cup. However, when the amount of fluid exceeds this fill line, fluid will overflow into the pipe in the center of the cup, and due to the drag that molecules exert on one another, the cup will be emptied. Heron's fountain is a device invented by Heron of Alexandria that consists of a jet of fluid being fed by a reservoir of fluid. The device consisted of an opening and two containers arranged one above the other. The intermediate pot, which was sealed, was filled with fluid, and several cannulae connected the various vessels. Trapped air inside the vessels induces a jet of water out of a nozzle, emptying all water from the intermediate reservoir. Pascal made contributions to developments in both hydrostatics and hydrodynamics.
Hydrostatics
–
Table of Hydraulics and Hydrostatics, from the 1728 Cyclopædia
Hydrostatics
–
Diving medicine
80.
Jean le Rond d'Alembert
–
Jean-Baptiste le Rond d'Alembert was a French mathematician, mechanician, physicist, philosopher and music theorist. Until 1759 he was also co-editor with Denis Diderot of the Encyclopédie. D'Alembert's formula for obtaining solutions to the wave equation is named after him, and the wave equation is sometimes referred to as d'Alembert's equation. Born in Paris, d'Alembert was the natural son of the chevalier Destouches, an artillery officer. Destouches was abroad at the time of d'Alembert's birth. Days after his birth, his mother left him on the steps of the Saint-Jean-le-Rond de Paris church. According to custom, he was named after the patron saint of the church. Destouches secretly paid for the education of Jean le Rond, but did not want his paternity officially recognized. D'Alembert first attended a private school. The chevalier Destouches left d'Alembert an annuity of 1200 livres on his death in 1726. Under the influence of the Destouches family, at the age of twelve d'Alembert entered the Jansenist Collège des Quatre-Nations. Here he studied the arts, graduating as baccalauréat en arts in 1735. In his later life, d'Alembert scorned the Cartesian principles he had been taught by the Jansenists: "physical promotion, innate ideas and the vortices". The Jansenists steered d'Alembert toward an ecclesiastical career, attempting to deter him from pursuits such as poetry and mathematics.
Jean le Rond d'Alembert
–
Jean-Baptiste le Rond d'Alembert, pastel by Maurice Quentin de La Tour
81.
Joseph Louis Lagrange
–
Joseph-Louis Lagrange, born Giuseppe Lodovico Lagrangia or Giuseppe Ludovico De la Grange Tournier, was an Italian Enlightenment Era mathematician and astronomer. Lagrange made significant contributions to the fields of both classical and celestial mechanics. At age 51, Lagrange moved to Paris and became a member of the French Academy, and he remained in France until the end of his life. Lagrange was one of the creators of the calculus of variations, deriving the Euler–Lagrange equations for extrema of functionals. He also extended the method to take into account possible constraints, arriving at the method of Lagrange multipliers. He proved that every natural number is a sum of four squares. His treatise Theorie des fonctions analytiques laid some of the foundations of group theory, anticipating Galois. In calculus, he developed a novel approach to interpolation and Taylor series. Born as Giuseppe Lodovico Lagrangia, Lagrange was of Italian and French descent. His mother was from the countryside of Turin, and he was raised as a Roman Catholic. A career as a lawyer was planned out for Lagrange by his father, and certainly Lagrange seems to have accepted this willingly. He studied at the University of Turin, and his favourite subject was classical Latin. At first he had no great enthusiasm for mathematics, finding Greek geometry rather dull.
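Lagrange's four-square theorem lends itself to a quick brute-force check. The routine below is only an illustrative sketch for small numbers, not an efficient decomposition algorithm:

```python
# Brute-force check of Lagrange's four-square theorem for small n:
# every natural number is a sum of four integer squares.
from itertools import product

def four_square(n):
    """Return a tuple (a, b, c, d) with a^2 + b^2 + c^2 + d^2 == n, or None."""
    m = int(n ** 0.5) + 1
    for a, b, c, d in product(range(m), repeat=4):
        if a * a + b * b + c * c + d * d == n:
            return (a, b, c, d)
    return None

# Every n in 0..100 should have a decomposition
assert all(four_square(n) is not None for n in range(101))
```

The exhaustive search is O(n^2) per number, so it is only practical for demonstration; real implementations use number-theoretic methods.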
Joseph Louis Lagrange
–
Joseph-Louis (Giuseppe Luigi), comte de Lagrange
Joseph Louis Lagrange
–
Lagrange's tomb in the crypt of the Panthéon
82.
Pierre-Simon Laplace
–
Pierre-Simon, marquis de Laplace was an influential French scholar whose work was important to the development of mathematics, statistics, physics and astronomy. He summarized and extended the work of his predecessors in his five-volume Mécanique Céleste. This work translated the geometric study of classical mechanics to one based on calculus, opening up a broader range of problems. In statistics, the Bayesian interpretation of probability was developed mainly by Laplace. The Laplacian differential operator, widely used in mathematics, is also named after him. Laplace is remembered as one of the greatest scientists of all time. He was named a marquis after the Restoration. Laplace was born on 23 March 1749 at Beaumont-en-Auge, a village four miles west of Pont l'Eveque in Normandy. According to W. W. Rouse Ball, his father, Pierre de Laplace, owned and farmed the small estates of Maarquis. His great-uncle, Maitre Oliver de Laplace, had held the title of Chirurgien Royal. It was in Beaumont that Laplace was educated and was provisionally a professor. It was here that he wrote his first paper, published in the Mélanges of the Royal Society of Turin (Tome iv, 1766–1769), at least two years before he went, at 22 or 23, to Paris in 1771. Thus before he was 20 he was in touch with Lagrange in Turin. He did not go to Paris a raw self-taught country lad with only a peasant background!
Pierre-Simon Laplace
–
Pierre-Simon Laplace (1749–1827). Posthumous portrait by Jean-Baptiste Paulin Guérin, 1838.
Pierre-Simon Laplace
–
Laplace's house at Arcueil.
Pierre-Simon Laplace
–
Laplace.
Pierre-Simon Laplace
–
Tomb of Pierre-Simon Laplace
83.
Engineers
–
Engineers design materials, structures and systems while considering the limitations imposed by practicality, regulation, safety and cost. The word engineer is derived from the Latin words ingeniare ("to contrive, devise") and ingenium ("cleverness"). Engineers who seek a professional license in North America will be required to take further exams in ethics, law and professional practice. The work of engineers forms the link between scientific discoveries and their subsequent applications to human and business needs and quality of life. The engineer's work is predominantly intellectual and varied and not of a routine physical character. It requires the ability to supervise the technical and administrative work of others. He/she is thus placed in a position to make contributions to the development of engineering science or its applications. In due time he/she will be able to give authoritative technical advice and to assume responsibility for the direction of important tasks in his/her branch. Engineers develop technological solutions. Much of an engineer's time is spent on researching, locating and transferring information. Indeed, research suggests engineers spend 56% of their time engaged in various information behaviours, including 14% actively searching for information. Engineers must choose the solution that best matches the requirements. Their unique task is to identify, understand and interpret the constraints on a design in order to produce a successful result. Engineers apply techniques of analysis in testing, production, or maintenance. Analytical engineers may supervise production in factories and elsewhere, and test output to maintain quality.
Engineers
–
An electrical engineer, circa 1950
Engineers
–
Engineers conferring on prototype design, 1954
Engineers
–
NASA Launch Control Center Firing Room 2 as it appeared in the Apollo era
Engineers
–
The Challenger disaster is held as a case study of engineering ethics.
84.
Gotthilf Hagen
–
Gotthilf Heinrich Ludwig Hagen was a German civil engineer who made important contributions to fluid dynamics, hydraulic engineering and probability theory. Hagen was born to Friedrich Ludwig Hagen and Helene Charlotte Albertine Hagen. His mother was the daughter of Christian Reccard, professor of theology at the University of Königsberg, consistorial councillor and astronomer. Nevertheless, he remained interested in astronomy throughout his life. In 1819, after graduating, he took a job as a junior engineer in the civil service. His main responsibility was hydraulic engineering and management. In 1822 he took the examination in Berlin to qualify as a master builder. He became known through reports on various hydraulic constructions which he had visited during travels in Europe. In 1825 he became deputy governmental building officer for Danzig. A year later he transferred to become harbor inspector in Pillau, where he was responsible for the harbor and dyke construction. Methods he developed are still relevant to current management in the region. In April 1827 he married his niece Auguste Hagen, with whom he had two daughters and five sons. His son Ludwig Hagen also became a civil engineer. Hagen became chief government building surveyor in 1831. From 1834 to 1849 he taught as a professor of hydraulic engineering in Berlin.
Gotthilf Hagen
–
Gotthilf Heinrich Ludwig Hagen
Gotthilf Hagen
–
United Artillery and Engineering School, Berlin
Gotthilf Hagen
–
Tomb of Gotthilf Hagen and Auguste on the Invalidenfriedhof, Berlin
Gotthilf Hagen
–
Detailed view of the monument to Hagen in Baltijsk
85.
George Gabriel Stokes
–
Sir George Gabriel Stokes, 1st Baronet, PRS, was a physicist and mathematician. In physics, Stokes made seminal contributions to fluid dynamics and to physical optics. In mathematics he formulated the first version of what is now known as Stokes' theorem and contributed to the theory of asymptotic expansions. He served as secretary, then president, of the Royal Society of London. He retained his place on the foundation until 1902, when, on the day before his 83rd birthday, he was elected to the mastership. In 1849, Stokes was appointed to the Lucasian professorship of mathematics at Cambridge, a position he held until his death in 1903. In 1899 the jubilee of this appointment was celebrated there in a ceremony attended by delegates from European and American universities. During a portion of this period he also was president of the Royal Society, of which he had been one of the secretaries since 1854. The Royal Society's catalogue of scientific papers gives the titles of over a hundred memoirs by him published down to 1883. Some of these are only brief notes, others are corrective statements, but many are long and elaborate treatises. His first published papers, which appeared in 1843, were on some cases of fluid motion. His work on fluid viscosity led to his calculating the terminal velocity for a sphere falling in a viscous medium. This became known as Stokes' law. He derived an expression for the frictional force exerted on spherical objects moving through a viscous fluid at small Reynolds numbers. His work is the basis of the falling-sphere viscometer, in which the fluid is stationary in a vertical tube.
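Stokes' law gives the terminal velocity of a small sphere falling in a viscous fluid as v = (2/9)(ρp − ρf)·g·r²/μ, valid only at very small Reynolds numbers. A minimal sketch; the material properties in the example are illustrative values, not from the original text:

```python
# Terminal (settling) velocity of a sphere under Stokes' law:
#   v = 2/9 * (rho_p - rho_f) * g * r^2 / mu
# Valid only in the creeping-flow regime (very small Reynolds number).

def stokes_terminal_velocity(radius, rho_particle, rho_fluid, mu, g=9.81):
    """Terminal velocity (m/s) of a sphere falling in a viscous fluid."""
    return 2.0 / 9.0 * (rho_particle - rho_fluid) * g * radius**2 / mu

# Illustrative case: a 0.1 mm steel sphere (7800 kg/m^3) falling in
# glycerol (1260 kg/m^3, mu ~ 1.4 Pa*s)
v = stokes_terminal_velocity(1e-4, 7800.0, 1260.0, 1.4)
print(f"terminal velocity: {v:.2e} m/s")
```

This relation is exactly what the falling-sphere viscometer inverts: timing the sphere's descent yields the fluid's viscosity μ.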
George Gabriel Stokes
–
Sir George Stokes, Bt.
George Gabriel Stokes
–
Signature
George Gabriel Stokes
–
Fluorspar
George Gabriel Stokes
–
A calcite crystal laid upon a paper with some letters showing the double refraction
86.
Boundary layers
–
On an aircraft wing, the boundary layer is the part of the flow close to the wing, where viscous forces distort the surrounding non-viscous flow. Laminar boundary layers can be loosely classified according to their structure and the circumstances under which they are created. When a fluid rotates and viscous forces are balanced by the Coriolis effect, an Ekman layer forms. In the theory of heat transfer, a thermal boundary layer occurs. A surface can have multiple types of boundary layer simultaneously. The viscous nature of airflow is responsible for skin friction. The layer of air over the wing's surface that is slowed down or stopped by viscosity is the boundary layer. There are two different types of boundary layer flow: laminar and turbulent. The laminar boundary layer is a very smooth flow, while the turbulent boundary layer contains swirls or "eddies." The laminar flow creates less skin-friction drag than the turbulent flow, but is less stable. Boundary layer flow over a wing surface begins as a smooth laminar flow. As the flow continues back from the leading edge, the laminar boundary layer increases in thickness. At some distance back from the leading edge, the smooth laminar flow breaks down and transitions to a turbulent flow. The laminar flow, however, tends to break down more suddenly than the turbulent layer. Treating the boundary layer separately from the outer flow allows a closed-form solution for the flow in both areas, a significant simplification of the full Navier–Stokes equations.
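The growth of the laminar boundary layer with distance from the leading edge can be illustrated with the classical Blasius flat-plate estimate δ ≈ 5x/√Re_x. The airspeed and kinematic viscosity below are illustrative assumptions, and the flat-plate formula is only an approximation for a real wing:

```python
import math

def boundary_layer_thickness(x, U, nu):
    """Approximate laminar (Blasius) boundary-layer thickness at distance x
    from the leading edge of a flat plate: delta ~ 5 * x / sqrt(Re_x)."""
    re_x = U * x / nu              # local Reynolds number
    return 5.0 * x / math.sqrt(re_x)

# Illustrative values: air at ~20 C (nu ~ 1.5e-5 m^2/s), free stream 10 m/s
nu, U = 1.5e-5, 10.0
for x in (0.01, 0.1, 0.5):
    d = boundary_layer_thickness(x, U, nu)
    print(f"x = {x:5.2f} m   delta = {d * 1000:.2f} mm")
```

The printed values show the thickness growing like √x, matching the statement that the layer thickens downstream of the leading edge.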
Boundary layers
–
Ludwig Prandtl
Boundary layers
–
Boundary layer visualization, showing transition from laminar to turbulent condition
87.
Ludwig Prandtl
–
Ludwig Prandtl was a German engineer. In the 1920s Prandtl developed the mathematical basis for the fundamental principles of subsonic aerodynamics in particular, and in general up to and including transonic velocities. His studies identified the boundary layer and lifting-line theories. The Prandtl number was named after him. He was born in Freising in 1875. His mother suffered from a lengthy illness and, as a result, Ludwig spent more time with his father, a professor of engineering. His father also encouraged him to think about his observations. Prandtl graduated with a Ph.D. under the guidance of Professor August Foeppl in six years. His first job was as an engineer designing factory equipment. There, Prandtl entered the field of fluid mechanics when he had to design a suction device. After carrying out some experiments, Prandtl came up with a new device that used less power than the one it replaced. In 1901 he became a professor of fluid mechanics at the technical school in Hannover, now the Technical University Hannover. It was here that he developed many of his most important theories. His 1904 paper on the boundary layer also described flow separation as a result of the boundary layer, clearly explaining the concept of stall for the first time. In the end, the approximation contained in his original paper remains in widespread use.
Ludwig Prandtl
–
Ludwig Prandtl
Ludwig Prandtl
–
Ludwig Prandtl 1904 with his fluid test channel
88.
Osborne Reynolds
–
Osborne Reynolds FRS was a prominent innovator in the understanding of fluid dynamics. Separately, his studies of heat transfer between solids and fluids brought improvements in boiler and condenser design. He spent his entire career at what is now called the University of Manchester. Osborne Reynolds was born in Belfast and moved with his parents soon afterward to Dedham, Essex. His father was also a very able mathematician with a keen interest in mechanics. The son credits him with being his chief teacher as a boy. Osborne Reynolds graduated in 1867 as the seventh wrangler in mathematics. Reynolds showed an early liking for the study of mechanics; in his own words, "my attention was drawn to various mechanical phenomena, for the explanation of which I discovered that a knowledge of mathematics was essential." Reynolds remained at the college in Manchester for the rest of his career -- in 1880 it became a constituent college of the newly founded Victoria University. He was awarded the Royal Medal in 1888. He retired in 1905. Reynolds most famously studied the conditions in which the flow of fluid in pipes transitioned from laminar to turbulent flow. In his experiment, a thin stream of dyed water was introduced into the flow of clear water in a larger pipe. When the velocity was low, the dyed layer remained distinct through the entire length of the large tube. When the velocity was increased, the layer diffused throughout the fluid's cross-section.
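The transition Reynolds observed is today characterized by the dimensionless Reynolds number Re = ρvD/μ. A minimal sketch; the laminar and turbulent thresholds below are the commonly quoted textbook values, not Reynolds' own figures, and the real transition point depends on disturbances and inlet conditions:

```python
def reynolds_pipe(velocity, diameter, density, viscosity):
    """Reynolds number for flow in a circular pipe: Re = rho * v * D / mu."""
    return density * velocity * diameter / viscosity

def regime(re, laminar_limit=2300.0, turbulent_limit=4000.0):
    # Commonly quoted textbook thresholds (illustrative, not exact).
    if re < laminar_limit:
        return "laminar"
    if re > turbulent_limit:
        return "turbulent"
    return "transitional"

# Water (1000 kg/m^3, 1e-3 Pa*s) in a 25 mm pipe at two speeds
re_slow = reynolds_pipe(0.05, 0.025, 1000.0, 1e-3)   # slow: dye stays distinct
re_fast = reynolds_pipe(1.0, 0.025, 1000.0, 1e-3)    # fast: dye diffuses
print(re_slow, regime(re_slow))
print(re_fast, regime(re_fast))
```

At the low speed the dye filament stays intact (laminar); at the higher speed it mixes across the pipe (turbulent), just as in Reynolds' dye experiment.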
Osborne Reynolds
–
Osborne Reynolds in 1903
Osborne Reynolds
–
Reynolds' experiment on fluid dynamics in pipes
89.
Andrey Kolmogorov
–
Andrey Kolmogorov was born in 1903. His unmarried mother, Maria Y. Kolmogorova, died giving birth to him. Andrey was raised by two of his aunts at the estate of his grandfather, a well-to-do nobleman. Little is known about Andrey's father, Nikolai, an agronomist. Nikolai had been exiled after his participation in the revolutionary movement against the czars. He was presumed to have been killed in the Russian Civil War. Andrey was the "editor" of the mathematical section of the journal issued at his aunts' school. In 1910, his aunt adopted him and they moved to Moscow, where he graduated from high school in 1920. Later Kolmogorov began to study at Moscow State University and, at the same time, at the Mendeleev Moscow Institute of Chemistry and Technology. Kolmogorov writes about this time: "I arrived at Moscow University with a fair knowledge of mathematics. I knew in particular the beginning of set theory. I studied many questions in articles of the encyclopedia of Brockhaus and Efron, filling out for myself what was presented too concisely in these articles." Kolmogorov gained a reputation for his wide-ranging erudition.
Andrey Kolmogorov
–
Andrey Kolmogorov
Andrey Kolmogorov
–
Kolmogorov (left) delivers a talk at a Soviet information theory symposium. (Tallinn, 1973).
Andrey Kolmogorov
–
Kolmogorov works on his talk (Tallinn, 1973).
90.
Geoffrey Ingram Taylor
–
Sir Geoffrey Ingram Taylor OM was a British physicist and mathematician, a major figure in fluid dynamics and wave theory. His mother, Margaret Boole, came from a family of mathematicians. As a child he performed experiments using paint rollers and sticky-tape. Taylor read mathematics at Cambridge. He followed this up with work on shock waves, winning a Smith's Prize. His work on turbulence in the atmosphere led to the publication of "Turbulent motion in fluids", which won the Adams Prize in 1915. Not content just to do the science, he also learned to fly aeroplanes and make parachute jumps. After the war Taylor worked on an application of turbulent flow to oceanography. He also worked on the problem of bodies passing through a rotating fluid. In 1923 he was appointed to a Royal Society professorship as a Yarrow Research Professor. He also produced another major contribution to turbulent flow, where he introduced a new approach through a statistical study of velocity fluctuations. Separately, his work on dislocations was critical in developing the modern science of solid mechanics. During the Second World War, Taylor was sent to the United States as part of the British delegation to the Manhattan Project. At Los Alamos, Taylor helped solve implosion instability problems in the development of atomic weapons. In 1944 he also received the Copley Medal from the Royal Society.
Geoffrey Ingram Taylor
–
Sir Geoffrey Ingram Taylor
91.
Turbulence
–
Turbulence or turbulent flow is a flow regime in fluid dynamics characterized by chaotic changes in pressure and flow velocity. It is in contrast to a laminar regime, which occurs when a fluid flows in parallel layers, with no disruption between those layers. Turbulence is caused by excessive kinetic energy in parts of a fluid flow, which overcomes the dampening effect of the fluid's viscosity. For this reason turbulence is easier to create in fluids of low viscosity, but more difficult in highly viscous fluids. In general terms, in turbulent flow, unsteady vortices of many sizes appear which interact with each other; consequently, drag due to friction effects increases. This would increase the energy needed to pump fluid through a pipe, for instance. However, this effect can also be exploited by devices such as aerodynamic spoilers on aircraft, which deliberately disrupt the flow to increase drag and reduce lift. However, turbulence has long resisted detailed physical analysis, as the interactions within turbulence create a very complex situation. Richard Feynman has described turbulence as the most important unsolved problem of classical physics. Smoke rising from a cigarette is mostly turbulent flow. However, for the first few centimeters the flow is laminar. The plume becomes turbulent as its Reynolds number increases, due to its characteristic length increasing. Consider flow over a golf ball. If the ball were smooth, the boundary layer flow over the front of the sphere would be laminar at typical conditions, but it would separate early, producing a large wake and high drag. To prevent this from happening, the surface is dimpled to perturb the boundary layer and promote transition to turbulence.
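The cigarette-plume example can be made concrete: for a roughly constant rise speed, the Reynolds number Re = UL/ν grows with the plume's characteristic length scale, so the flow eventually becomes turbulent. All numbers below, including the laminar/turbulent cutoff, are illustrative assumptions, not measured values:

```python
# Illustrative only: Reynolds number of a rising smoke plume as its
# characteristic length scale grows, assuming a constant rise speed.
def plume_reynolds(length_scale, speed=0.2, nu=1.5e-5):
    """Re = U * L / nu for air (nu ~ 1.5e-5 m^2/s at room temperature)."""
    return speed * length_scale / nu

# An illustrative cutoff of Re ~ 2000; real plume transition varies.
for L in (0.01, 0.05, 0.2):
    re = plume_reynolds(L)
    state = "laminar-ish" if re < 2000 else "turbulent"
    print(f"L = {L:4.2f} m   Re = {re:7.0f}   ({state})")
```

The first few centimetres give small Re (smooth laminar filament); by tens of centimetres the Re has grown enough for the plume to break up, matching the qualitative description above.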
Turbulence
–
Flow visualization of a turbulent jet, made by laser-induced fluorescence. The jet exhibits a wide range of length scales, an important characteristic of turbulent flows.
Turbulence
–
Laminar and turbulent water flow over the hull of a submarine
Turbulence
–
Turbulence in the tip vortex from an airplane wing
92.
Mechanical equilibrium
–
In classical mechanics, a particle is in mechanical equilibrium if the net force on that particle is zero. By extension, a physical system made up of many parts is in mechanical equilibrium if the net force on each of its individual parts is zero. In addition to defining mechanical equilibrium in terms of force, there are alternative definitions for mechanical equilibrium which are all mathematically equivalent. In terms of momentum, a system is in equilibrium if the momentum of its parts is all constant. In terms of velocity, the system is in equilibrium if velocity is constant. In a rotational mechanical equilibrium the net torque is zero. If a particle in equilibrium has zero velocity, that particle is in static equilibrium. An important property of systems at mechanical equilibrium is their stability. If we have a function which describes the system's potential energy, we can determine the system's equilibria using calculus. A system is in mechanical equilibrium at the critical points of the function describing its potential energy. We can locate these points using the fact that the derivative of the function is zero at these points. Second derivative < 0: The potential energy is at a local maximum. This is an unstable equilibrium; if the system is displaced an arbitrarily small distance from the equilibrium state, the forces of the system cause it to move even farther away. Second derivative > 0: The potential energy is at a local minimum. This is a stable equilibrium. The response to a small perturbation is forces that tend to restore the equilibrium.
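The second-derivative test described above can be sketched numerically. The double-well potential, step sizes and tolerances below are illustrative choices, not part of the original text:

```python
# Classify equilibria of a 1-D potential U(x) by the sign of U''(x),
# using central finite differences (a sketch, not a robust root-finder).
def dU(U, x, h=1e-5):
    """Numerical first derivative of U at x."""
    return (U(x + h) - U(x - h)) / (2 * h)

def d2U(U, x, h=1e-4):
    """Numerical second derivative of U at x."""
    return (U(x + h) - 2 * U(x) + U(x - h)) / h**2

def classify(U, x, tol=1e-6):
    assert abs(dU(U, x)) < 1e-3, "not an equilibrium point"
    c = d2U(U, x)
    if c > tol:
        return "stable"       # local minimum of potential energy
    if c < -tol:
        return "unstable"     # local maximum
    return "neutral"

# Double-well potential U(x) = x^4 - 2x^2: equilibria at x = -1, 0, +1
U = lambda x: x**4 - 2 * x**2
print(classify(U, 1.0), classify(U, 0.0), classify(U, -1.0))
# stable unstable stable
```

The ball-in-a-valley picture follows directly: the wells at x = ±1 are minima (restoring forces), while the hump at x = 0 is a maximum (any displacement grows).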
Mechanical equilibrium
–
Force diagram showing the forces acting on an object at rest on a surface. The normal force N is equal and opposite to the gravitational force mg so the net force is zero. Consequently the object is in a state of static mechanical equilibrium.
93.
Hydrostatic equilibrium
–
Hydrostatic equilibrium occurs when external forces such as gravity are balanced by a pressure-gradient force. Hydrostatic equilibrium also has roles in astrophysics and planetary geology. For a fluid at rest, the sum of the forces in a given direction must be opposed by an equal sum of forces in the opposite direction. This balance is called a hydrostatic equilibrium. The fluid can be split into a large number of cuboid volume elements; by considering a single element, the action of the fluid can be derived. The pressure of the fluid below the element pushes it up, the pressure above pushes it down, and finally the weight of the element causes a force downwards. This sum equals zero if the fluid's velocity is constant. Dividing by the area A, 0 = P_bottom − P_top − ρ·g·h, or P_top − P_bottom = −ρ·g·h. P_top − P_bottom is a change in pressure, and h is the height of the volume element, a change in the distance above the ground. By saying these changes are infinitesimally small, the equation can be written in differential form: dP = −ρ·g·dh. Density changes with pressure, and gravity changes with height, so the equation would be: dP = −ρ(P)·g(h)·dh. Then the non-trivial equation is the z-equation, which now reads ∂p/∂z + ρg = 0. Thus, hydrostatic balance can be regarded as a particularly simple solution of the Navier–Stokes equations.
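For a constant-density fluid the relation dP = −ρ·g·dh integrates to P = P0 + ρ·g·(depth), and a simple step-by-step sum reproduces the closed form. The water density, surface pressure and step count below are illustrative assumptions:

```python
# Numerically integrate dP = -rho * g * dh downwards through a
# constant-density fluid and compare with P = P0 + rho * g * depth.
def pressure_at_depth(depth, p0=101325.0, rho=1000.0, g=9.81, steps=1000):
    """Pressure (Pa) at the given depth (m) below the free surface."""
    p, dh = p0, depth / steps
    for _ in range(steps):
        p += rho * g * dh      # stepping downwards, pressure increases
    return p

p10 = pressure_at_depth(10.0)          # 10 m of water
print(p10)                             # closed form: 101325 + 1000*9.81*10
```

Because ρ and g are constant here, the Euler sum matches the closed form almost exactly; for a compressible atmosphere one would instead update ρ(P) inside the loop, which is exactly the ρ(P)·g(h) dependence noted in the text.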
Hydrostatic equilibrium
–
If the highlighted volume of fluid is not moving, the forces on it upwards must equal the forces downwards.
94.
Atmospheric pressure
–
Atmospheric pressure, sometimes also called barometric pressure, is the pressure exerted by the weight of air in the atmosphere of Earth. In most circumstances atmospheric pressure is closely approximated by the hydrostatic pressure caused by the weight of air above the measurement point. Low-pressure areas have less atmospheric mass above their location, whereas high-pressure areas have more atmospheric mass above their location. Likewise, as elevation increases, there is less overlying atmospheric mass, so that atmospheric pressure decreases with increasing elevation. At sea level the weight of the atmosphere exerts a pressure of about 101,000 N/m2; a column of air 1 square inch in cross-section would weigh about 65.4 N (14.7 lbf). The standard atmosphere is a unit of pressure defined as 101325 Pa, equivalent to 760 mmHg or 14.696 psi. The mean sea-level pressure is the atmospheric pressure at sea level. This is the atmospheric pressure normally given in weather reports on the Internet. When barometers in the home are set to match the local weather reports, they measure pressure adjusted to sea level, not the actual local atmospheric pressure. The altimeter setting in aviation is an atmospheric pressure adjustment. The average sea-level pressure is 1013.25 mbar. However, in Canada's public weather reports, sea-level pressure is instead reported in kilopascals. The lowest measurable sea-level pressure is found at the centers of tropical cyclones and tornadoes, with a record low of 870 mbar. Pressure varies smoothly from the Earth's surface to the top of the mesosphere.
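The unit equivalences quoted for the standard atmosphere (Pa, mbar, mmHg, psi) can be checked with the standard conversion factors; a small sketch:

```python
# Conversions of the standard atmosphere (101325 Pa) into the other
# units mentioned in the text, using the conventional factors
# 1 mbar = 100 Pa, 1 mmHg = 133.322387415 Pa, 1 psi = 6894.757293168 Pa.
ATM_PA = 101325.0

def pa_to_mbar(pa):
    return pa / 100.0

def pa_to_mmhg(pa):
    return pa / 133.322387415

def pa_to_psi(pa):
    return pa / 6894.757293168

print(pa_to_mbar(ATM_PA))            # 1013.25
print(round(pa_to_mmhg(ATM_PA), 2))  # 760.0
print(round(pa_to_psi(ATM_PA), 3))   # 14.696
```

Note that 760 mmHg corresponds to the standard atmosphere only to about one part in ten million, since the mmHg is defined independently of the atmosphere.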
Atmospheric pressure
–
Kollsman-type barometric aircraft altimeter (as used in North America) displaying an altitude of 80 ft (24 m).
Atmospheric pressure
–
15 year average mean sea level pressure for June, July, and August (top) and December, January, and February (bottom). ERA-15 re-analysis.
Atmospheric pressure
–
A very local storm above Snæfellsjökull, showing clouds formed on the mountain by orographic lift
Atmospheric pressure
–
Hurricane Wilma on 19 October 2005; 882 hPa (12.79 psi) in eye
95.
Altitude
–
Altitude or height is defined based on the context in which it is used. As a general definition, altitude is a measurement, usually in the vertical or "up" direction, between a reference datum and a point or object. The datum also often varies according to the context. Vertical distance measurements in the "down" direction are commonly referred to as depth. In aviation, altitude can have several meanings and is always qualified by explicitly adding a modifier, or implicitly through the context of the communication. Parties exchanging information must be clear about which definition is being used. Aviation altitude is measured using either mean sea level or local ground level as the reference datum. When flying at a flight level, the altimeter is always set to standard pressure. Absolute altitude is the height of the aircraft above the terrain over which it is flying. It can be measured using a radar altimeter. It is also referred to as "radar height" or feet/metres above ground level. True altitude is the actual elevation above mean sea level. It is indicated altitude corrected for non-standard pressure. Height is the elevation above a ground reference point, commonly the terrain elevation. Pressure altitude is used to indicate "flight level", the standard for altitude reporting in the U.S. in Class A airspace.
Altitude
–
Vertical distance comparison
96.
Hydraulics
–
Hydraulics is a technology and applied science using engineering, chemistry and other sciences involving the mechanical properties and use of liquids or fluids. At a very basic level, hydraulics is the liquid version of pneumatics. Fluid mechanics provides the theoretical foundation for hydraulics, which focuses on the applied engineering using the properties of fluids. In fluid power, hydraulics is used for the generation, control, and transmission of power by the use of pressurized liquids. The principles of hydraulics are in use naturally in the human body within the vascular system and erectile tissue. Free surface hydraulics is the branch of hydraulics dealing with free surface flow, such as occurring in rivers, canals, lakes, estuaries and seas. Its sub-field open-channel flow studies the flow in open channels. The word "hydraulics" originates from the Greek ὑδραυλικός (hydraulikos), which in turn originates from ὕδωρ (hydor, "water") and αὐλός (aulos, "pipe"). Early examples of water power include the Qanat system in ancient Persia and the Turpan water system in ancient China. The Greeks constructed sophisticated water and hydraulic power systems. An example is the Tunnel of Eupalinos, a watering channel for Samos constructed under a public contract. Probably the earliest water wheel in Europe is the Perachora wheel. Notable also is the construction of the first hydraulic automata by Ctesibius and Hero of Alexandria. In ancient China there were Sunshu Ao, Ximen Bao, Du Shi and Ma Jun, while medieval China had Su Song and Shen Kuo. Du Shi employed a waterwheel to power the bellows of a blast furnace producing cast iron.
Hydraulics
–
Moat and gardens at Sigirya.
Hydraulics
–
An open channel with a uniform depth. Open-channel hydraulics deals with uniform and non-uniform streams.
Hydraulics
–
Aqueduct of Segovia, a 1st-century AD masterpiece.
97.
Engineering
–
The term engineering is derived from ingeniare, meaning "to contrive, devise". Engineering has existed since ancient times, as humans devised fundamental inventions such as the lever, wheel and pulley. Each of these inventions is essentially consistent with the modern definition of engineering. The term is derived from the word engineer, which itself dates back to 1390, when an engine'er originally referred to "a constructor of military engines." In this context, now obsolete, an "engine" referred to a military machine, i.e. a mechanical contraption used in war. Notable examples of the obsolete usage which have survived to the present day are military engineering corps, e.g. the U.S. Army Corps of Engineers. The word "engine" itself is of even older origin, ultimately deriving from the Latin ingenium, meaning "innate quality, especially mental power, hence a clever invention." The earliest civil engineer known by name is Imhotep. Ancient Greece developed machines in both civilian and military domains. The mechanical inventions of Archimedes are examples of early mechanical engineering. In the Middle Ages, the trebuchet was developed. The first steam engine was built by Thomas Savery. The development of this device gave rise to the Industrial Revolution in the coming decades, allowing for the beginnings of mass production. Similarly, in addition to military and civil engineering, the fields then known as the mechanic arts became incorporated into engineering. The inventions of the Scottish engineer James Watt gave rise to mechanical engineering.
Engineering
–
The steam engine, a major driver in the Industrial Revolution, underscores the importance of engineering in modern history. This beam engine is on display in the Technical University of Madrid.
Engineering
–
Relief map of the Citadel of Lille, designed in 1668 by Vauban, the foremost military engineer of his age.
Engineering
–
The Ancient Romans built aqueducts to bring a steady supply of clean fresh water to cities and towns in the empire.
Engineering
–
The International Space Station represents a modern engineering challenge from many disciplines.
98.
Plate tectonics
–
The theoretical model builds on the concept of continental drift developed during the first few decades of the 20th century. The geoscientific community accepted plate-tectonic theory after seafloor spreading was validated in the late 1950s and early 1960s. The lithosphere, the rigid outermost shell of a planet, is broken up into tectonic plates. The Earth's lithosphere is composed of seven or eight major plates and many minor plates. Where the plates meet, their relative motion determines the type of boundary: convergent, divergent, or transform. Earthquakes, volcanic activity, mountain-building and oceanic trench formation occur along these plate boundaries. The relative movement of the plates typically ranges from zero to 100 mm annually. Tectonic plates are composed of oceanic lithosphere and thicker continental lithosphere, each topped by its own kind of crust. Lithosphere lost along convergent boundaries is roughly balanced by new oceanic lithosphere formed by seafloor spreading; in this way, the total surface of the lithosphere remains the same. This prediction of plate tectonics is also referred to as the conveyor belt principle. Earlier theories, since disproven, proposed gradual shrinking or gradual expansion of the globe. Tectonic plates are able to move because the Earth's lithosphere has greater strength than the underlying asthenosphere. Lateral density variations in the mantle result in convection. Another explanation lies in the different forces generated by tidal forces of the Sun and Moon. The relative importance of each of these factors and their relationship to each other is unclear, and still the subject of much debate.
Plate tectonics
–
Remnants of the Farallon Plate, deep in Earth's mantle. It is thought that much of the plate initially went under North America (particularly the western United States and southwest Canada) at a very shallow angle, creating much of the mountainous terrain in the area (particularly the southern Rocky Mountains).
Plate tectonics
–
The tectonic plates of the world were mapped in the second half of the 20th century.
Plate tectonics
–
Plate motion based on Global Positioning System (GPS) satellite data from NASA JPL. The vectors show direction and magnitude of motion.
Plate tectonics
–
Alfred Wegener in Greenland in the winter of 1912-13.
99.
Gravity of Earth
–
In SI units this acceleration is measured in metres per second squared (m/s²) or, equivalently, in newtons per kilogram (N/kg). It gives rise to the downwards force experienced by objects on Earth, given by the equation F = mg. However, other factors such as the rotation of the Earth also contribute to the net acceleration. The precise strength of Earth's gravity varies depending on location. The nominal "average" value at the Earth's surface, known as standard gravity, is, by definition, 9.80665 m/s². This quantity is denoted variously as gn, ge, g0, or simply g. The Earth is slightly flatter at the poles while bulging at the Equator: an oblate spheroid. There are consequently slight deviations of gravity across its surface. The net force as measured by a scale and plumb bob is called "effective gravity" or "apparent gravity". Effective gravity includes other factors that affect the net force. These factors include the centrifugal force at the surface from the Earth's rotation and the gravitational pull of the Moon and Sun. Among large cities, effective gravity reaches about 9.825 m/s² in Oslo and Helsinki. The surface of the Earth is rotating, so it is not an inertial frame of reference. At latitudes nearer the Equator, the centrifugal force produced by Earth's rotation is larger than at polar latitudes. The same two factors influence the direction of the effective gravity.
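The latitude dependence described above can be sketched numerically. This is an illustrative Python snippet, not taken from the article, using the widely published 1980 International Gravity Formula; the function name is my own:

```python
import math

def normal_gravity(latitude_deg):
    """Approximate surface gravity (m/s^2) from the 1980 International
    Gravity Formula, capturing the equator-to-pole increase caused by
    Earth's oblateness and rotation."""
    phi = math.radians(latitude_deg)
    return 9.780327 * (1
                       + 0.0053024 * math.sin(phi) ** 2
                       - 0.0000058 * math.sin(2 * phi) ** 2)

# Effective gravity is weakest at the Equator and strongest at the poles.
print(normal_gravity(0))   # Equator, about 9.78 m/s^2
print(normal_gravity(90))  # pole, about 9.83 m/s^2
```

The roughly 0.05 m/s² equator-to-pole spread is consistent with the city-to-city variation quoted above.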
Gravity of Earth
–
Earth's gravity measured by NASA's GRACE mission, showing deviations from the theoretical gravity of an idealized smooth Earth, the so-called earth ellipsoid. Red shows the areas where gravity is stronger than the smooth, standard value, and blue reveals areas where gravity is weaker. (Animated version.)
Gravity of Earth
–
Earth's radial density distribution according to the Preliminary Reference Earth Model (PREM).
100.
Meteorology
–
Meteorology is the interdisciplinary scientific study of the atmosphere. The study of meteorology dates back millennia, though significant progress in meteorology did not occur until the 18th century. The 19th century saw modest progress in the field after weather observation networks were formed across broad regions. Prior attempts at prediction of weather depended on historical data. Meteorological phenomena are observable weather events that are explained by the science of meteorology. Spatial scales are used to predict weather on local, regional, and global levels. Meteorology and atmospheric chemistry are among the sub-disciplines of the atmospheric sciences. Meteorology and hydrology compose the interdisciplinary field of hydrometeorology. The interactions between Earth's atmosphere and its oceans are part of a coupled ocean-atmosphere system. Meteorology has application in diverse fields such as the military, energy production, transport, and construction. The word "meteorology" is from Greek μετέωρος metéōros "lofty; high" and -λογία -logia "-logy", i.e. "the study of things in the air". Varāhamihira's classical work Brihatsamhita, written about 500 AD, provides clear evidence that a deep knowledge of atmospheric processes existed even in those times.
Meteorology
–
Atmospheric sciences
Meteorology
–
Parhelion (sundog) at Savoie
Meteorology
–
Twilight at Baker Beach
Meteorology
–
A hemispherical cup anemometer
101.
Medicine
–
Medicine is the science and practice of the diagnosis, treatment, and prevention of disease. The word medicine is derived from Latin medicus, meaning "a physician". Medicine encompasses a variety of health care practices evolved to maintain and restore health by the prevention and treatment of illness. Since the advent of modern science, most medicine has become a combination of art and science. Prescientific forms of medicine are now known as traditional or folk medicine. They remain in common use alongside scientific medicine and are thus called alternative medicine. For example, acupuncture is generally safe when done by an appropriately trained practitioner, though evidence of its effectiveness is mixed. In contrast, treatments outside the bounds of safety and efficacy are termed quackery. Clinical practice varies across the world due to regional differences in culture and technology. In clinical practice, doctors personally assess patients in order to diagnose, treat, and prevent disease using clinical judgment. Basic medical devices are typically used. After an examination and an interview about symptoms, the doctor may order medical tests, take a biopsy, or prescribe pharmaceutical drugs or other therapies. Differential diagnosis methods help to rule out conditions based on the information provided. During the encounter, properly informing the patient of all relevant facts is an important part of the relationship and of the development of trust. The medical encounter is then documented in the medical record, a legal document in many jurisdictions.
Medicine
–
Early Medicine Bottles
Medicine
–
The Doctor, by Sir Luke Fildes (1891)
Medicine
–
The Hospital of Santa Maria della Scala, fresco by Domenico di Bartolo, 1441–1442
102.
Flow measurement
–
Flow measurement is the quantification of bulk fluid movement. Flow can be measured in a variety of ways. Positive-displacement flow meters accumulate a fixed volume of fluid and then count the number of times that volume is filled to measure flow. Other measurement methods rely on forces produced by the flowing stream as it overcomes a known constriction, to indirectly calculate flow. Flow may also be measured by measuring the velocity of fluid over a known area. Liquid flow can be measured in volumetric or mass flow rates, such as liters per second or kilograms per second, respectively. These measurements are related by the material's density. The density of a liquid is almost independent of conditions. This is not the case for gases, the densities of which depend greatly upon pressure, temperature and, to a lesser extent, composition. Energy flow rate is usually derived from mass or volumetric flow rate by the use of a flow computer. In engineering contexts, the volumetric flow rate is usually given the symbol Q, and the mass flow rate the symbol ṁ. For a fluid having density ρ, mass and volumetric flow rates may be related by ṁ = ρQ. Gases are compressible and change volume when placed under pressure, are heated or are cooled. A volume of gas under one set of pressure and temperature conditions is not equivalent to the same gas under different conditions. In oceanography, a common unit to measure transport is a sverdrup, equivalent to 10⁶ m³/s.
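The relation ṁ = ρQ above is simple enough to state as code; this is a minimal sketch, with function and variable names of my own choosing:

```python
def mass_flow_rate(density_kg_m3, volumetric_flow_m3_s):
    """Relate volumetric flow Q (m^3/s) to mass flow via m_dot = rho * Q."""
    return density_kg_m3 * volumetric_flow_m3_s

# Hypothetical example: water (rho ~ 1000 kg/m^3) flowing at 2 L/s.
q = 0.002  # m^3/s
print(mass_flow_rate(1000.0, q))  # -> 2.0 (kg/s)

# For scale: one sverdrup is a volumetric rate of 1e6 m^3/s.
SVERDRUP_M3_S = 1.0e6
```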
Flow measurement
–
A propeller-type current meter as used for hydroelectric turbine testing.
Flow measurement
–
8-inch (200 mm) V-Cone flowmeter shown with ANSI 300# raised-face weld-neck flanges
Flow measurement
–
A magnetic flow meter at the Tetley's Brewery in Leeds, West Yorkshire.
103.
Velocity
–
The velocity of an object is the rate of change of its position with respect to a frame of reference, and is a function of time. Velocity is equivalent to a specification of an object's speed and direction of motion. Velocity is an important concept in kinematics, the branch of classical mechanics that describes the motion of bodies. Velocity is a vector physical quantity; both magnitude and direction are needed to define it. For example, "5 metres per second" is a scalar, whereas "5 metres per second east" is a vector. If there is a change in speed, direction, or both, then the object has a changing velocity and is said to be undergoing an acceleration. To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path; thus, a constant velocity means motion in a straight line at a constant speed. A car moving at a constant speed around a circular track, by contrast, continuously changes direction; hence, the car is considered to be undergoing an acceleration. Speed describes only how fast an object is moving, whereas velocity gives both how fast and in what direction the object is moving. If a car is said to travel at 60 km/h, its speed has been specified. However, if the car is said to move at 60 km/h to the north, its velocity has now been specified. The difference becomes apparent when we consider movement around a circle. Average velocity can be calculated as v̄ = Δx/Δt. The average velocity is always less than or equal to the average speed of an object.
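The formula v̄ = Δx/Δt, and the claim that average velocity never exceeds average speed, can be illustrated with a short sketch; the runner example is hypothetical:

```python
def average_velocity(x_start, x_end, dt):
    """v_bar = delta_x / delta_t: signed displacement over elapsed time."""
    return (x_end - x_start) / dt

def average_speed(distance_travelled, dt):
    """Total path length over elapsed time; never less than |average velocity|."""
    return distance_travelled / dt

# A runner goes 100 m east, then 100 m back west, in 50 s.
v = average_velocity(0.0, 0.0, 50.0)  # net displacement is zero
s = average_speed(200.0, 50.0)        # path length is 200 m
print(v, s)  # -> 0.0 4.0
```

The out-and-back trip makes the distinction vivid: the average speed is 4 m/s, but the average velocity is zero because the displacement is zero.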
Velocity
–
As a change of direction occurs while the cars turn on the curved track, their velocity is not constant.
104.
Density
–
The density, or more precisely, the volumetric mass density, of a substance is its mass per unit volume. The symbol most often used for density is ρ (the lower case Greek letter rho), although the Latin letter D can also be used. For a pure substance the density has the same numerical value as its mass concentration. Different materials usually have different densities, and density may be relevant to buoyancy, purity and packaging. Osmium and iridium are the densest known elements at standard conditions for temperature and pressure, but certain chemical compounds may be denser. The relative density of a substance is its density divided by that of water; thus a relative density less than one means that the substance floats in water. The density of a material varies with temperature and pressure. This variation is typically small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases the volume of the object and thus increases its density. Increasing the temperature of a substance generally decreases its density by increasing its volume. This causes heated material to rise relative to denser, unheated material. The reciprocal of the density of a substance is occasionally called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property in that increasing the amount of a substance does not increase its density; rather it increases its mass. According to legend, Archimedes realized while bathing that he could use density to determine whether a golden crown was pure. Upon this discovery, he leapt from his bath and ran naked through the streets shouting, "Eureka! Eureka!"
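The floating criterion above (relative density below one) can be sketched directly; the material values are illustrative round numbers, not data from the article:

```python
WATER_DENSITY = 1000.0  # kg/m^3, approximate at room conditions

def density(mass_kg, volume_m3):
    """rho = m / V."""
    return mass_kg / volume_m3

def floats_in_water(rho):
    """A substance with relative density rho / rho_water below 1 floats."""
    return rho / WATER_DENSITY < 1.0

oak = density(650.0, 1.0)    # hypothetical oak block: 650 kg per cubic metre
iron = density(7870.0, 1.0)  # iron is roughly 7870 kg per cubic metre
print(floats_in_water(oak), floats_in_water(iron))  # -> True False
```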
Density
–
Air density vs. temperature
105.
Temperature
–
A temperature is an objective comparative measurement of hot or cold. It is measured by a thermometer. Several scales exist for measuring temperature, the most common being Celsius, Fahrenheit, and, especially in science, Kelvin. Absolute zero is denoted as 0 K on the Kelvin scale, −273.15 °C on the Celsius scale, and −459.67 °F on the Fahrenheit scale. The kinetic theory offers a limited account of the behavior of the materials of macroscopic bodies, especially of fluids. Temperature is important in all fields of natural science including physics, geology, chemistry, atmospheric sciences and biology, as well as most aspects of daily life. The Celsius scale is used in most of the world. It is an empirical scale. Because of its 100-degree interval between the freezing and boiling points of water, it is called a centigrade scale. The United States commonly uses the Fahrenheit scale, on which water freezes at 32 °F and boils at 212 °F at sea-level atmospheric pressure. Scientific measurements use the Kelvin temperature scale, named in honor of the Scottish physicist who first defined it. It is an absolute temperature scale. Its zero point, 0 K, is defined to coincide with the coldest physically possible temperature. Its degrees are defined through thermodynamics. The temperature of absolute zero occurs at 0 K = −273.15 °C, and the freezing point of water at sea-level atmospheric pressure occurs at 273.15 K = 0 °C.
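The fixed points quoted above can be checked with the elementary scale conversions; a minimal sketch:

```python
def celsius_to_kelvin(t_c):
    """Kelvin shares the Celsius degree size; only the zero point differs."""
    return t_c + 273.15

def celsius_to_fahrenheit(t_c):
    """Fahrenheit: water freezes at 32 F and boils at 212 F."""
    return t_c * 9.0 / 5.0 + 32.0

print(celsius_to_kelvin(-273.15))    # absolute zero -> 0.0 K
print(celsius_to_fahrenheit(0.0))    # freezing point of water -> 32.0 F
print(celsius_to_fahrenheit(100.0))  # boiling point of water -> 212.0 F
```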
Temperature
–
Annual mean temperature around the world
Temperature
–
Body temperature variation
Temperature
–
A typical Celsius thermometer measures a winter day temperature of -17 °C.
Temperature
–
Plots of pressure vs temperature for three different gas samples extrapolated to absolute zero.
106.
Aerodynamics
–
Aerodynamics is a sub-field of fluid dynamics and gas dynamics, and many aspects of aerodynamics theory are common to these fields. Formal aerodynamics study in the modern sense began in the eighteenth century, although observations of fundamental concepts such as aerodynamic drag were recorded much earlier. Most of the early efforts in aerodynamics worked towards achieving heavier-than-air flight, first demonstrated by Wilbur and Orville Wright in 1903. Recent work in aerodynamics has focused on issues related to compressible flow, turbulence and boundary layers, and has become increasingly computational in nature. Fundamental concepts of continuum, drag and pressure gradients appear in the work of Aristotle and Archimedes. In 1726, Sir Isaac Newton became the first person to develop a theory of air resistance, making him one of the first aerodynamicists. In 1757, Leonhard Euler published the more general Euler equations, which could be applied to both compressible and incompressible flows. The Euler equations were extended to incorporate the effects of viscosity in the first half of the 1800s, resulting in the Navier–Stokes equations. The Navier–Stokes equations are the most general governing equations of fluid flow and are difficult to solve. In 1871, Francis Herbert Wenham constructed the first wind tunnel, allowing precise measurements of aerodynamic forces. Drag theories were developed by Jean le Rond d'Alembert, Gustav Kirchhoff and Lord Rayleigh. In 1889, Charles Renard, a French aeronautical engineer, became the first person to reasonably predict the power needed for sustained flight. Kutta and Zhukovsky went on to develop a two-dimensional wing theory. Expanding upon the work of Lanchester, Ludwig Prandtl is credited with developing the mathematics behind thin-airfoil and lifting-line theories as well as work with boundary layers.
As aircraft speed increased, designers began to encounter challenges associated with air compressibility at speeds near or greater than the speed of sound.
Aerodynamics
–
A vortex is created by the passage of an aircraft wing, revealed by smoke. Vortices are one of the many phenomena associated with the study of aerodynamics.
Aerodynamics
–
A replica of the Wright brothers' wind tunnel is on display at the Virginia Air and Space Center. Wind tunnels were key in the development and validation of the laws of aerodynamics.
107.
Aircraft
–
An aircraft is a machine that is able to fly by gaining support from the air. The human activity that surrounds aircraft is called aviation. Aerial vehicles may be remotely controlled or self-controlled by onboard computers. Aircraft may be classified by different criteria, such as lift type, propulsion, usage and others. Each of the two World Wars led to great technical advances. Consequently, the history of aircraft can be divided into five eras: Pioneers of flight, from the earliest experiments to 1914. First World War, 1914 to 1918. Aviation between the World Wars, 1918 to 1939. Second World War, 1939 to 1945. Postwar era, also called the jet age, 1945 to the present day. Aerostats use buoyancy to float in the air in much the same way that ships float on the water. A balloon was originally any aerostat, while the term airship was used for large, powered aircraft designs -- usually fixed-wing -- though none had yet been built. In 1919 Frederick Handley Page was reported as referring to "ships of the air," with smaller passenger types as "Air yachts." In the 1930s, large intercontinental flying boats were also sometimes referred to as "ships of the air" or "flying-ships".
Aircraft
–
NASA test aircraft
Aircraft
–
The Mil Mi-8 is the most-produced helicopter in history
Aircraft
–
"Voodoo", a modified P-51 Mustang, is the 2014 Reno Air Race Champion
Aircraft
–
A hot air balloon in flight
108.
Petroleum
–
Petroleum is a naturally occurring, yellow-to-black liquid found in geological formations beneath the Earth's surface, commonly refined into various types of fuels. Components of petroleum are separated using a technique called fractional distillation. It consists of hydrocarbons of various molecular weights and other organic compounds. The term petroleum covers both naturally occurring unprocessed crude oil and petroleum products that are made of refined crude oil. Petroleum has mostly been recovered by oil drilling, carried out after studies of structural geology and reservoir characterization have been completed. It is estimated that the world consumes about 95 million barrels each day. The burning of fossil fuels plays the major role in the current episode of global warming. The word petroleum comes from Greek: πέτρα (petra) for rocks and ἔλαιον (elaion) for oil. The term was found in 10th-century Old English sources. It was used in the treatise De Natura Fossilium, published in 1546 by the German mineralogist Georg Bauer, also known as Georgius Agricola. Petroleum, in one form or another, has been used since ancient times, and is now important across society, including in economy, politics and technology. Great quantities of it were found on the banks of one of the tributaries of the Euphrates. Persian tablets indicate the medicinal and lighting uses of petroleum in the upper levels of their society. By 347 AD, oil was produced from bamboo-drilled wells in China.
Petroleum
–
Pumpjack pumping an oil well near Lubbock, Texas
Petroleum
–
An oil refinery in Mina-Al-Ahmadi, Kuwait
Petroleum
–
Natural petroleum spring in Korňa, Slovakia
Petroleum
–
Oil derrick in Okemah, Oklahoma, 1922
109.
Weather
–
Weather is the state of the atmosphere, describing the degree to which it is hot or cold, wet or dry, calm or stormy, clear or cloudy. Most weather phenomena occur in the troposphere, just below the stratosphere. Weather refers to day-to-day temperature and precipitation activity, whereas climate is the term for the statistics of atmospheric conditions over longer periods of time. When used without qualification, "weather" is generally understood to mean the weather of Earth. Weather is driven by air pressure, temperature and moisture differences between one place and another. These differences can occur due to the sun's angle at any particular spot, which varies with latitude. The strong temperature contrast between polar and tropical air gives rise to the jet stream. Weather systems such as extratropical cyclones are caused by instabilities of the jet stream flow. Because the Earth's axis is tilted relative to its orbital plane, sunlight is incident at different angles at different times of the year. On Earth's surface, temperatures usually range ±40 °C annually. Surface temperature differences in turn cause pressure differences. Higher altitudes are cooler than lower altitudes due to differences in compressional heating. Weather forecasting is the application of science and technology to predict the state of the atmosphere for a future time and a given location. Studying how the weather works on other planets has been helpful in understanding how weather works on Earth. Jupiter's Great Red Spot is an anticyclonic storm known to have existed for at least 300 years.
Weather
–
Thunderstorm near Garajau, Madeira
Weather
–
Cumulus mediocris cloud surrounded by stratocumulus
Weather
–
New Orleans, Louisiana, after being struck by Hurricane Katrina. Katrina was a Category 3 hurricane when it struck, although it had been a Category 5 hurricane in the Gulf of Mexico.
Weather
–
Early morning sunshine over Bratislava, Slovakia
110.
Nebula
–
A nebula is an interstellar cloud of dust, hydrogen, helium and other ionized gases. Originally, nebula was a name for any diffuse astronomical object, including galaxies beyond the Milky Way. Most nebulae are of vast size; some are hundreds of light-years in diameter. Some nebulae are variably illuminated by T Tauri variable stars. Nebulae are often star-forming regions, such as in the "Pillars of Creation" in the Eagle Nebula. The remaining material is then believed to form planets and other planetary system objects. Around 150 AD, Claudius Ptolemaeus recorded, in books VII–VIII of his Almagest, five stars that appeared nebulous. He also noted a region of nebulosity between the constellations Ursa Major and Leo that was not associated with any star. The first true nebula, as distinct from a star cluster, was mentioned by the Persian astronomer Abd al-Rahman al-Sufi in his Book of Fixed Stars. He noted "a little cloud" where the Andromeda Galaxy is located. He also cataloged the Omicron Velorum star cluster as a "nebulous star", along with other nebulous objects such as Brocchi's Cluster. The supernova that created the Crab Nebula, SN 1054, was observed by Arabic and Chinese astronomers in 1054. In 1610, Nicolas-Claude Fabri de Peiresc discovered the Orion Nebula using a telescope. This nebula was also observed by Johann Baptist Cysat in 1618. In 1715, Edmond Halley published a list of six nebulae.
Nebula
–
The "Pillars of Creation" from the Eagle Nebula. Evidence from the Spitzer Telescope suggests that the pillars may already have been destroyed by a supernova explosion, but the light showing us the destruction will not reach the Earth for another millennium.
Nebula
–
Portion of the Carina Nebula
Nebula
–
The Triangulum Emission Nebula NGC 604
Nebula
–
Herbig–Haro object HH 161 and HH 164.
111.
Interstellar space
–
Interstellar Space is a studio album by American jazz saxophonist John Coltrane. It was released by Impulse! Records in September 1974. The album is an example of highly improvised free jazz, Coltrane's principal interest in the latter part of his career. Coltrane's improvisations are thus extremely free here, stating tacit modes and harmonies briefly and modulating constantly, twisting expressions into breath-length phrases. The folkish "Venus" is probably the most accessible number; "Saturn", the longest piece, does feature hints of swing by song's end. Its melody is rather similar to the canonical, almost cantor-like quality of the material on Stellar Regions. The original album featured four tracks: "Mars", "Venus", "Jupiter" and "Saturn". Two further tracks from the session, "Leo" and "Jupiter Variation", later appeared on the compilation album Jupiter Variation in 1978. A 2000 reissue collected all of the tracks from the session, including false starts for "Jupiter Variation" in the CD's pregap. Across these duets with drummer Rashied Ali, the saxophonist is at his most visceral, exuding an overpowering confidence tempered at times with sacrosanct tenderness. Ali's interlocking pan-rhythmic patterns embrace while fervently pushing the music forward. These outtakes are hidden before "Mars".
Interstellar space
–
Interstellar Space
112.
Explosions
–
Supersonic explosions created by high explosives are known as detonations and travel via supersonic shock waves. Subsonic explosions are created through a slower process known as deflagration. Most natural explosions arise from volcanic processes of various sorts. Explosions also occur as a result of impact events and in phenomena such as hydrothermal explosions. Explosions can also occur beyond Earth, in events such as supernovae. Explosions frequently occur during bushfires in eucalyptus forests, where the volatile oils in the tree tops suddenly combust. Animal bodies can also be explosive, as some animals hold a large amount of flammable material such as fat. This, in rare cases, results in naturally exploding animals. Solar flares are an example of natural explosion common on the Sun, and presumably on most other stars as well. The energy source for solar flare activity comes from the tangling of magnetic field lines resulting from the rotation of the Sun's conductive plasma. Another type of astronomical explosion occurs when an asteroid impacts the surface of another object, such as a planet. The most common artificial explosives are chemical explosives, usually involving a violent reaction that produces large amounts of hot gas. Gunpowder was the first explosive to be discovered and put to use. Early developments in chemical explosive technology were Frederick Augustus Abel's development of nitrocellulose in 1865 and Alfred Nobel's invention of dynamite in 1866. Chemical explosions are often initiated by an electric spark or flame.
Explosions
–
Detonation of 16 tons of explosives.
Explosions
–
Gasoline explosions, simulating bomb drops at an airshow.
Explosions
–
Black smoke from an explosion rising after a bomb detonation in Nahr al-Bared, Lebanon.
Explosions
–
Detonation of a MICLIC (mine-clearing line charge) to destroy a blast-resistant minefield 1 km in depth in Iraq.
113.
Traffic engineering (transportation)
–
Traffic engineering is a branch of civil engineering that uses engineering techniques to achieve the safe and efficient movement of people and goods on roadways. Traffic engineering deals with the functional part of the transportation system, as distinct from the infrastructure itself. Typical engineering projects involve designing traffic control device installations and modifications, including traffic signals, signs and pavement markings. However, traffic engineers also consider safety by investigating locations with high crash rates and developing countermeasures to reduce crashes. Traffic management can be short-term or long-term. Traditionally, road improvements have consisted mainly of building additional infrastructure. However, dynamic elements are now being introduced into road management. Dynamic elements have long been used in transport. These include sensors to measure traffic flows and automatic, interconnected guidance systems to manage traffic. These systems are collectively called intelligent transportation systems. Up to a critical density, adding vehicles increases flow; above that threshold, increased density reduces speed. Additionally, beyond a further threshold, increased density reduces flow as well. Signals on entrance ramps that control the rate at which vehicles are allowed to enter the mainline facility (ramp meters) provide a means of keeping density below these thresholds. Highway safety engineering is a branch of traffic engineering that deals with reducing the frequency and severity of crashes. It uses vehicle dynamics, as well as road user psychology and human factors engineering, to reduce the influence of factors that contribute to crashes.
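The qualitative claims above, that flow rises with density up to a threshold and then falls, are often illustrated with Greenshields' linear speed-density model. The model and the parameter values here are a textbook example, not something this article specifies:

```python
def greenshields_speed(density, free_flow_speed=100.0, jam_density=120.0):
    """Greenshields' linear model: speed falls as density rises.

    Units are illustrative: km/h and vehicles per km per lane."""
    return free_flow_speed * (1.0 - density / jam_density)

def flow(density, **kw):
    """Flow = density * speed: rises, peaks, then falls to zero at jam density."""
    return density * greenshields_speed(density, **kw)

# Flow peaks at half the jam density; added vehicles then reduce throughput.
print(flow(30), flow(60), flow(90))  # -> 2250.0 3000.0 2250.0 (veh/h)
```

In this model the critical density is half the jam density; ramp metering tries to hold the mainline below that point.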
Traffic engineering (transportation)
–
Complex intersections with multiple vehicle lanes, bike lanes, and crosswalks are common examples of traffic engineering projects
Traffic engineering (transportation)
–
A ramp meter limits the rate at which vehicles can enter the freeway
114.
Stress (physics)
–
For example, when a vertical bar is supporting a weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a closed container under pressure, each particle gets pushed against by all the surrounding particles, and the container walls and the pressure-inducing surface push back in reaction. These macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Strain inside a material may arise by various mechanisms, such as stress applied by external forces to the bulk material or to its surface. In gases, only deformations that change the volume generate elastic stress. However, if the deformation is gradually changing with time, even in fluids there will usually be some viscous stress opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress. Significant stress may exist even when deformation is negligible or non-existent. Stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress that exceeds certain strength limits of the material will even change its crystal structure and composition. In some branches of engineering, the term stress is occasionally used in a looser sense as a synonym of "internal force". Since ancient times humans have been consciously aware of stress inside materials. With the mathematical tools developed in the 17th and 18th centuries, Augustin-Louis Cauchy was able to give the first rigorous and general mathematical model for stress in a homogeneous medium. Cauchy observed that the force across an imaginary surface was a linear function of its normal vector; and, moreover, that it must be a symmetric function.
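Cauchy's observation, that the force per unit area across an imaginary surface is a linear function of the surface's normal vector, can be sketched as a small matrix-vector product. The stress state shown is a hypothetical example:

```python
def traction(stress, normal):
    """Cauchy's relation t = sigma . n: traction vector on a cut surface.

    `stress` is a symmetric 3x3 stress tensor (rows of a list, in Pa) and
    `normal` is the unit normal of the imaginary cut surface."""
    return [sum(stress[i][j] * normal[j] for j in range(3)) for i in range(3)]

# Uniaxial tension of 1 MPa along x (an illustrative state of stress):
sigma = [[1.0e6, 0.0, 0.0],
         [0.0,   0.0, 0.0],
         [0.0,   0.0, 0.0]]
print(traction(sigma, [1.0, 0.0, 0.0]))  # cut normal to x carries the load
print(traction(sigma, [0.0, 1.0, 0.0]))  # cut normal to y is traction-free
```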
Stress (physics)
–
Built-in strain inside the plastic protractor, developed during the shaping of the protractor, is revealed by the effect of polarized light.
Stress (physics)
–
Roman-era bridge in Switzerland
Stress (physics)
–
Inca bridge on the Apurimac River
Stress (physics)
–
Glass vase with the craquelé effect. The cracks are the result of brief but intense stress created when the semi-molten piece is briefly dipped in water.
115.
Shear stress
–
A shear stress, often denoted τ (tau), is defined as the component of stress coplanar with a material cross section. Shear stress arises from the force vector component parallel to the cross section. Normal stress, on the other hand, arises from the force vector component perpendicular to the material cross section on which it acts. For elastic materials, the shear modulus G is related to Young's modulus E by G = E / (2(1 + ν)), where ν is Poisson's ratio. Beam shear is defined as the internal shear stress of a beam caused by the shear force applied to the beam. The beam shear formula is also known as the Zhuravskii shear stress formula after Dmitrii Ivanovich Zhuravskii, who derived it in 1855. Shear stresses within a semi-monocoque structure may be calculated by idealizing the cross-section of the structure into a set of stringers and webs. Dividing the shear flow by the thickness of a given portion of the semi-monocoque structure yields the shear stress. Any real fluid moving along a solid boundary will incur a shear stress on that boundary. The no-slip condition dictates that the speed of the fluid at the boundary is zero, while at some distance from the boundary the flow speed equals that of the free stream; the region between these two points is aptly named the boundary layer. For Newtonian fluids, the shear stress is proportional to the strain rate, with the viscosity as the constant of proportionality. However, for non-Newtonian fluids this is no longer the case, as for these fluids the viscosity is not constant. The shear stress is imparted onto the boundary as a result of this loss of velocity. Specifically, the wall shear stress is defined as τw = μ(∂u/∂y) evaluated at y = 0. For an isotropic Newtonian flow it is a scalar, while for anisotropic Newtonian flows it can be a second-order tensor too. The constant of proportionality one finds in this case is the dynamic viscosity of the flow.
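The wall shear stress definition τw = μ(∂u/∂y) at y = 0 can be evaluated directly for a linear (Couette-type) velocity profile. The fluid and geometry values below are illustrative assumptions, not data from the article:

```python
def wall_shear_stress(mu, du_dy_at_wall):
    """tau_w = mu * (du/dy) evaluated at the wall, for a Newtonian fluid."""
    return mu * du_dy_at_wall

# Hypothetical Couette flow: the velocity profile u(y) = U * y / h is linear,
# so du/dy is the constant U / h everywhere, including at the wall (y = 0).
mu_water = 1.0e-3   # Pa*s, approximate dynamic viscosity of water at 20 C
U, h = 0.5, 0.01    # plate speed (m/s) and gap height (m)
print(wall_shear_stress(mu_water, U / h))  # shear stress on the wall, in Pa
```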
Shear stress
–
A shearing force is applied to the top of the rectangle while the bottom is held in place. The resulting shear stress, τ, deforms the rectangle into a parallelogram. The area involved would be the top of the parallelogram.
116.
Control volume
–
In continuum mechanics and thermodynamics, a control volume is a mathematical abstraction employed in the process of creating mathematical models of physical processes. In an inertial frame of reference, it is a volume fixed in space or moving with constant flow velocity through which the continuum flows. The surface enclosing the control volume is referred to as the control surface. At steady state, a control volume can be thought of as an arbitrary volume in which the mass of the continuum remains constant. As a continuum moves through the control volume, the mass entering the control volume is equal to the mass leaving the control volume. At steady state, and in the absence of work and heat transfer, the energy within the control volume remains constant. It is analogous to the classical mechanics concept of the free body diagram. There is nothing special about a particular control volume; it simply represents a small part of the system to which physical laws can be easily applied. This gives rise to what is termed a volume-wise formulation of the mathematical model. In this way, the point-wise formulation of the mathematical model can be developed so it can describe the physical behaviour of an entire system. In continuum mechanics the conservation equations are in integral form. They therefore apply on volumes. Finding forms of the equation that are independent of the control volumes allows simplification of the integral signs. This can be seen as follows. Consider a bug moving through a volume where there is some scalar, e.g. pressure, that varies with time and position: p = p(t, x, y, z).
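The steady-state statement above, that mass entering equals mass leaving, amounts to a simple balance check over the control surface. A minimal sketch with hypothetical stream values:

```python
def steady_state_mass_balance(inflows_kg_s, outflows_kg_s, tol=1e-9):
    """At steady state, total mass flow into the control volume must
    equal total mass flow out; returns True when the balance holds."""
    return abs(sum(inflows_kg_s) - sum(outflows_kg_s)) < tol

# Hypothetical mixing junction: two inlet streams feed one outlet.
print(steady_state_mass_balance([1.5, 2.5], [4.0]))  # -> True
print(steady_state_mass_balance([1.5, 2.5], [3.0]))  # -> False (mass accumulating)
```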
117.
Conservation of momentum
–
In classical mechanics, linear momentum, translational momentum, or simply momentum is the product of the mass and velocity of an object, quantified in kilogram-metres per second. It is dimensionally equivalent to the product of force and time, quantified in newton-seconds. Newton's second law of motion states that the change in linear momentum of a body is equal to the net impulse acting on it. For example, a heavy truck moving rapidly has a large momentum, and it takes a large or prolonged force to get the truck up to this speed or to bring it to a stop. If the truck were lighter, or moving more slowly, then it would require less impulse to start or stop. Linear momentum is also a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum cannot change. In classical mechanics, conservation of linear momentum is implied by Newton's laws. With appropriate definitions, a linear momentum conservation law holds in electrodynamics, quantum mechanics, quantum field theory and general relativity. It is ultimately an expression of one of the fundamental symmetries of space and time: translational symmetry. Linear momentum depends on the frame of reference. Observers in different frames would find different values of the linear momentum of a system. But each would observe that the value of linear momentum does not change, provided the system is isolated. Momentum has a direction as well as a magnitude. Quantities that have both a magnitude and a direction are known as vector quantities. Because momentum has a direction, it can be used to predict the resulting direction of objects after they collide, as well as their speeds. Below, the basic properties of momentum are described in one dimension.
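Conservation of momentum in one dimension can be illustrated with a perfectly inelastic collision, continuing the truck example; the masses and speeds are illustrative:

```python
def momentum(m, v):
    """Linear momentum p = m * v (kg*m/s); the sign encodes direction in 1-D."""
    return m * v

def inelastic_collision(m1, v1, m2, v2):
    """Two bodies stick together; total momentum is conserved, so the
    combined body moves at total momentum over total mass."""
    total_p = momentum(m1, v1) + momentum(m2, v2)
    return total_p / (m1 + m2)  # common final velocity

# A 2000 kg truck at 10 m/s collides with a stationary 1000 kg car:
v_final = inelastic_collision(2000.0, 10.0, 1000.0, 0.0)
print(v_final)  # roughly 6.67 m/s, in the truck's original direction
```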
Conservation of momentum
–
In a game of pool, momentum is conserved; that is, if one ball stops dead after the collision, the other ball will continue away with all the momentum. If the moving ball continues or is deflected then both balls will carry a portion of the momentum from the collision.
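The pool-ball behaviour follows directly from conserving both momentum and kinetic energy. A minimal sketch of a one-dimensional perfectly elastic collision; the ball mass is an illustrative value.

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities for a 1-D perfectly elastic collision,
    derived from conservation of momentum and kinetic energy."""
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# Equal-mass pool balls: the cue ball stops dead and the object ball
# moves off carrying the cue ball's full momentum.
v1f, v2f = elastic_collision_1d(0.17, 2.0, 0.17, 0.0)
print(v1f, v2f)  # 0.0 and 2.0
```

For unequal masses both balls carry away a portion of the momentum, but the total before and after is always the same.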
118.
Derivative
–
The derivative of a function of a real variable measures the sensitivity of the function's output value to a change in its input value. Derivatives are a fundamental tool of calculus. The tangent line is the best linear approximation of the function near that input value. Derivatives may be generalized to functions of several real variables. In this generalization, the derivative is reinterpreted as a linear transformation whose graph is the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables. It can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector. The process of finding a derivative is called differentiation. The reverse process is called antidifferentiation. The fundamental theorem of calculus states that antidifferentiation is the same as integration. Differentiation and integration constitute the two fundamental operations in single-variable calculus. The result of differentiating f is called the derivative of f with respect to x. Thus, since y + Δy = y + mΔx, it follows that Δy = mΔx.
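The idea of the derivative as sensitivity to change can be approximated numerically. A minimal sketch using the symmetric (central) difference quotient; the test function and step size are illustrative choices.

```python
def central_difference(f, x, h=1e-6):
    """Numerical estimate of f'(x) via the symmetric difference quotient:
    (f(x + h) - f(x - h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The derivative of x^2 is 2x, so at x = 3 we expect approximately 6.
d = central_difference(lambda x: x * x, 3.0)
print(d)  # ~6.0
```

The symmetric form cancels the leading error term, so it converges faster than a one-sided difference as h shrinks.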
Derivative
–
The graph of a function, drawn in black, and a tangent line to that function, drawn in red. The slope of the tangent line is equal to the derivative of the function at the marked point.
119.
Continuity equation
–
A continuity equation in physics is an equation that describes the transport of some quantity. It can be generalized to apply to any extensive quantity. Continuity equations are a stronger, local form of conservation laws. A global conservation statement does not immediately rule out the possibility that energy could disappear from a field in Canada while simultaneously appearing in a room in Indonesia; a continuity equation is the mathematical way to express the stronger, local statement. A continuity equation can be expressed in an "integral form", which applies to any finite region, or in a "differential form", which applies at a point. Continuity equations underlie more specific transport equations such as the convection–diffusion equation and the Navier–Stokes equations. Before we can write down the continuity equation, we must first define flux, a quantity specifying flow or motion. Let ρ be the density of this property, i.e. the amount of q per unit volume. The way that this q is flowing is described by its flux. The flux of q is a vector field, which we denote as j. Here are some properties of flux: the dimension of flux is "amount of q flowing per unit time, per unit area". Outside the pipe, where there is no water, the flux is zero. In a well-known example, the flux of electric charge is the electric current density. In a simple example, q could be the number of people in a building.
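The local bookkeeping that a continuity equation enforces can be sketched on a grid. A minimal sketch, assuming a one-dimensional periodic domain with uniform positive velocity and a simple upwind finite-volume update; the grid values are illustrative.

```python
def step_continuity(rho, velocity, dx, dt):
    """One upwind finite-volume step of d(rho)/dt + d(rho*v)/dx = 0
    on a periodic 1-D grid (constant velocity > 0 assumed)."""
    n = len(rho)
    flux = [velocity * r for r in rho]                    # j = rho * v
    return [rho[i] - dt / dx * (flux[i] - flux[i - 1])    # i-1 wraps around
            for i in range(n)]

rho = [1.0, 2.0, 3.0, 2.0, 1.0]
new = step_continuity(rho, velocity=1.0, dx=1.0, dt=0.5)

# The local densities change, but the total amount is conserved:
print(sum(rho), sum(new))  # 9.0 and 9.0
```

Because each cell's outgoing flux is its neighbour's incoming flux, the sum telescopes and the total amount of q is conserved exactly, which is the discrete analogue of the local conservation law.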
Continuity equation
–
Illustration of how the flux j of a quantity q passes through an open surface S. (dS is the differential vector area.)
120.
Continuous function
–
In mathematics, a continuous function is, roughly speaking, a function for which sufficiently small changes in the input result in arbitrarily small changes in the output. Otherwise, a function is said to be a discontinuous function. A continuous function with a continuous inverse function is called a homeomorphism. Continuity of functions is one of the core concepts of topology, treated in full generality below. The introductory portion of this article focuses on the special case where the inputs and outputs of functions are real numbers. In addition, this article discusses the definition for the more general case of functions between two metric spaces. Especially in domain theory, one considers a notion of continuity known as Scott continuity. Other forms of continuity do exist but they are not discussed in this article. As an example, consider the function h(t), which describes the height of a growing flower at time t. This function is continuous. A form of the epsilon–delta definition of continuity was first given by Bernard Bolzano in 1817. Cauchy defined infinitely small quantities in terms of variable quantities, and his definition of continuity closely parallels the infinitesimal definition used today. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872, but based these ideas on lectures given by Peter Gustav Lejeune Dirichlet in 1854. A point at which a function fails to be continuous is called a discontinuity.
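The epsilon–delta condition can be probed numerically. A minimal sketch that samples points within δ of c and checks the ε bound; the function, point, and the choice δ = ε/5 (a standard estimate for x² near 2, since |x² − 4| = |x − 2|·|x + 2| < 5|x − 2| there) are illustrative.

```python
def check_continuity_at(f, c, eps, delta, samples=1000):
    """Numerically probe the epsilon-delta condition at c: every sampled
    x with |x - c| < delta must satisfy |f(x) - f(c)| < eps.
    (A numeric probe can only ever support, not prove, continuity.)"""
    for i in range(1, samples + 1):
        x = c - delta + (2 * delta) * i / (samples + 1)
        if abs(f(x) - f(c)) >= eps:
            return False
    return True

# f(x) = x^2 at c = 2: for eps = 0.5, delta = 0.1 = eps/5 works.
ok = check_continuity_at(lambda x: x * x, 2.0, eps=0.5, delta=0.1)
print(ok)  # True
```

Running the same probe on a step function with a jump at c immediately finds a sampled point violating the ε bound.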
Continuous function
–
Illustration of the ε-δ-definition: for ε=0.5, c=2, the value δ=0.5 satisfies the condition of the definition.
121.
Molecules
–
A molecule is an electrically neutral group of two or more atoms held together by chemical bonds. Molecules are distinguished from ions by their lack of electrical charge. However, in biochemistry, the term molecule is often used less strictly, also being applied to polyatomic ions. In the kinetic theory of gases, the term molecule is often used for any gaseous particle regardless of its composition. According to this definition, noble gas atoms are considered molecules as they are in fact monoatomic molecules. Complexes connected by non-covalent interactions, such as ionic bonds, are generally not considered single molecules. Molecules as components of matter are common in organic substances. They also make up most of the oceans and atmosphere. The theme of repeated unit-cellular structure also holds for most condensed phases with metallic bonding, which means that solid metals are also not made of molecules. The science of molecules is called molecular chemistry or molecular physics, depending on whether the focus is on chemistry or physics. In practice, however, this distinction is vague. In molecular sciences, a molecule consists of a stable system composed of two or more atoms. Polyatomic ions may sometimes be usefully thought of as electrically charged molecules. According to the Online Etymology Dictionary, the word "molecule" derives from the Latin for a small unit of mass: molecule — "extremely minute particle", from French molécule, from New Latin molecula, diminutive of Latin moles "mass, barrier".
Molecules
–
Atomic force microscopy image of a PTCDA molecule, which contains five carbon rings in a non-linear arrangement.
Molecules
–
A scanning tunneling microscopy image of pentacene molecules, which consist of linear chains of five carbon rings.
Molecules
–
Arrangement of polyvinylidene fluoride molecules in a nanofiber – transmission electron microscopy image.
Molecules
122.
Statistical mechanics
–
A common use of statistical mechanics is in explaining the thermodynamic behaviour of large systems. The branch of statistical mechanics which extends classical thermodynamics is known as statistical thermodynamics or equilibrium statistical mechanics. Statistical mechanics also finds use outside equilibrium. An important subbranch known as non-equilibrium statistical mechanics deals with the issue of microscopically modelling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions and flows of particles and heat. In physics, two types of mechanics are usually examined: classical mechanics and quantum mechanics. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points, usually represented as a distribution in a phase space with canonical coordinates. In quantum statistical mechanics, the ensemble is a probability distribution over pure states, and can be compactly summarized as a density matrix. These two meanings will be used interchangeably in this article. However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). One special class of ensembles is those that do not evolve over time. Their condition is known as statistical equilibrium.
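A concrete instance of such a stationary ensemble is the canonical (Boltzmann) distribution at statistical equilibrium. A minimal sketch for a hypothetical two-level system; the energies and temperature are illustrative values in natural units.

```python
import math

def boltzmann_probabilities(energies, kT):
    """Equilibrium occupation probabilities in the canonical ensemble:
    p_i proportional to exp(-E_i / kT), normalized by the partition
    function Z = sum of the weights."""
    weights = [math.exp(-e / kT) for e in energies]
    z = sum(weights)  # partition function
    return [w / z for w in weights]

# Hypothetical two-level system with an energy gap of 1 (units of kT).
probs = boltzmann_probabilities([0.0, 1.0], kT=1.0)
print(probs)  # the lower level is more populated; probabilities sum to 1
```

Because the distribution depends only on the energies, it does not evolve under the equations of motion, which is exactly the statistical-equilibrium condition described above.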
123.
Mean free path
–
The following table lists some typical values for air at different pressures at room temperature. Because of beam hardening, the mean free path of the X-ray spectrum changes with distance. Sometimes one measures the thickness of a material in the number of mean free paths. Material with the thickness of one mean free path will attenuate the beam to 37% (1/e) of its original intensity. This concept is closely related to the half-value layer (HVL): a material with a thickness of one HVL will attenuate 50% of photons. As an X-ray image is a transmission image, an image formed from the negative logarithm of its intensities is sometimes called a number-of-mean-free-paths image. In particle physics the concept of the mean free path is not commonly used, being replaced by the similar concept of attenuation length. In particular, for high-energy photons, which mostly interact by electron–positron pair production, the radiation length is used much like the mean free path in radiography. Independent-particle models in nuclear physics require the undisturbed orbiting of nucleons within the nucleus before they interact with other nucleons. "This requirement seems to be in contradiction to the assumptions made in the theory... We are facing here one of the fundamental problems of nuclear structure physics which has yet to be solved." Qs can be evaluated numerically for spherical particles using Mie theory. This relation is used in the derivation of the Sabine equation in acoustics, using a geometrical approximation of sound propagation. The mean free path is used in the design of, e.g., vacuum systems for distillation.
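For a gas, the kinetic-theory hard-sphere estimate connects the mean free path to temperature, pressure, and molecular size. A minimal sketch; the effective molecular diameter used for air is an assumed typical value.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(temperature, pressure, diameter):
    """Hard-sphere kinetic-theory mean free path:
    l = k_B * T / (sqrt(2) * pi * d^2 * p)."""
    return K_B * temperature / (math.sqrt(2) * math.pi * diameter**2 * pressure)

# Air at room temperature and atmospheric pressure; an effective
# molecular diameter of ~3.7e-10 m is an assumed representative value.
l = mean_free_path(293.0, 101325.0, 3.7e-10)
print(l)  # on the order of 1e-7 m (tens of nanometres)
```

The inverse dependence on pressure is why the mean free path grows enormously in vacuum systems.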
Mean free path
–
Mean free path for photons in energy range from 1 keV to 20 MeV for Elements Z = 1 to 100. Based on data from. The discontinuities are due to low density of gas elements. Six bands correspond to neighborhoods of six noble gases. Also shown are locations of absorption edges.
124.
Scale (ratio)
–
The scale ratio of a model represents the proportional ratio of a linear dimension of the model to the same feature of the original. Examples include the scale drawings of the elevations or plans of a building. In such cases the scale is exact throughout the model or drawing. The scale can be expressed in four ways: in words (a lexical scale), as a ratio, as a fraction, and as a graphical (bar) scale. In general a representation may involve more than one scale at the same time. For example, a drawing showing a new road in elevation might use different horizontal and vertical scales. A map at some scale may be annotated with wind arrows at a dimensional scale of 1 cm to 20 mph. Map scales require careful discussion. In general the scale of a projection depends on position and direction. The variation of scale may be considerable in small-scale maps, which may cover the globe. It is always present, however. The scale of a projection must therefore be interpreted as a nominal scale. A scale model is a representation or copy of an object that is larger or smaller than the actual size of the object being represented. Very often the model is smaller than the original and used as a guide to making the object in full size. In mathematics, the idea of geometric scaling can be generalized.
Scale (ratio)
–
Da Vinci's Vitruvian Man illustrates the ratios of the dimensions of the human body; a human figure is often used to illustrate the scale of architectural or engineering drawings.
125.
Differential equations
–
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. Because such relations are extremely common, differential equations play a prominent role in many disciplines, including engineering, physics, economics, and biology. In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions—the set of functions that satisfy the equation. If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. Differential equations first came into existence with the invention of calculus by Newton and Leibniz. Jacob Bernoulli proposed the Bernoulli differential equation in 1695. In 1746, d'Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation. The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. In 1822, Fourier published his work on heat flow in Théorie analytique de la chaleur. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat. This partial differential equation is now taught to every student of mathematical physics. In classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow one to express these variables dynamically as a differential equation for the unknown position of the body as a function of time.
Differential equations
–
Navier–Stokes differential equations used to simulate airflow around an obstruction.
126.
Kinematic viscosity
–
The viscosity of a fluid is a measure of its resistance to gradual deformation by shear stress or tensile stress. For liquids, it corresponds to the informal concept of "thickness"; for example, honey has a much higher viscosity than water. For a given velocity pattern, the stress required is proportional to the fluid's viscosity. A fluid that has no resistance to shear stress is known as an inviscid fluid. Zero viscosity is observed only at very low temperatures in superfluids. Otherwise, all fluids are technically said to be viscous or viscid. A fluid with a relatively high viscosity, such as pitch, may appear to be a solid. The word "viscosity" is derived from the Latin "viscum", meaning mistletoe and also a viscous glue made from mistletoe berries. The dynamic viscosity of a fluid expresses its resistance to shearing flows, where adjacent layers move parallel to each other with different speeds. In the standard model, the fluid is trapped between a stationary bottom plate and a parallel top plate moving at constant speed; the fluid has to be homogeneous in the layer and at different shear stresses. An external force is therefore required in order to keep the top plate moving at constant speed. The proportionality factor μ in this force law is the viscosity of the fluid. The y-axis, perpendicular to the flow, points in the direction of maximum shear velocity. This equation can be adapted to cases where the velocity does not vary linearly with y, such as fluid flowing through a pipe. Use of the Greek letter mu (μ) for the dynamic viscosity is common among mechanical and chemical engineers, as well as physicists.
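The plate experiment described above reduces to Newton's law of viscosity, τ = μ·du/dy, with a linear velocity profile between the plates. A minimal sketch; the fluid properties and geometry are illustrative values.

```python
def shear_stress(mu, velocity_top, gap):
    """Newton's law of viscosity for planar Couette flow:
    tau = mu * du/dy, with a linear velocity profile between the
    stationary bottom plate and the moving top plate."""
    return mu * velocity_top / gap

# Water (mu ~ 1.0e-3 Pa*s) sheared in a 1 mm gap, top plate at 1 m/s
# -- illustrative numbers only.
tau = shear_stress(1.0e-3, 1.0, 1.0e-3)
print(tau)  # ~1.0 Pa
```

Multiplying τ by the plate area gives the external force needed to keep the top plate moving at constant speed; a fluid of higher viscosity demands proportionally more force.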
Kinematic viscosity
–
Pitch has a viscosity approximately 230 billion (2.3 × 10¹¹) times that of water.
Kinematic viscosity
–
A simulation of substances with different viscosities. The substance above has lower viscosity than the substance below
Kinematic viscosity
–
Example of the viscosity of milk and water. Liquids with higher viscosities make smaller splashes when poured at the same velocity.
Kinematic viscosity
–
Honey being drizzled.
127.
Calculus
–
Calculus has two major branches, differential calculus and integral calculus; these two branches are related to each other by the fundamental theorem of calculus. Both branches make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. Generally, modern calculus is considered to have been developed in the 17th century by Isaac Newton and Gottfried Leibniz. Calculus has widespread uses in science, engineering and economics. Calculus is a part of modern mathematics education. A course in calculus is a gateway to other, more advanced courses in mathematics devoted to the study of functions and limits, broadly called mathematical analysis. Calculus has historically been called "the calculus of infinitesimals", or "infinitesimal calculus". The word "calculus" is also used for naming theories of computation, such as propositional calculus, calculus of variations, lambda calculus, and process calculus. The method of exhaustion was later reinvented by Liu Hui in the 3rd century AD in order to find the area of a circle. In the 5th century AD, Zu Chongzhi established a method that would later be called Cavalieri's principle to find the volume of a sphere. Indian mathematicians gave a semi-rigorous method of differentiation of some trigonometric functions. In the Middle East, Alhazen derived a formula for the sum of fourth powers. In Europe, Cavalieri argued that volumes and areas should be computed as the sums of infinitely thin cross-sections; the infinitesimal quantities he introduced were disreputable at first. The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term.
Calculus
–
Isaac Newton developed the use of calculus in his laws of motion and gravitation.
Calculus
–
Gottfried Wilhelm Leibniz was the first to publish his results on the development of calculus.
Calculus
–
Maria Gaetana Agnesi
Calculus
–
The logarithmic spiral of the Nautilus shell is a classical image used to depict the growth and change related to calculus
128.
Reynolds number
–
The Reynolds number is an important dimensionless quantity in fluid mechanics, used to help predict flow patterns in different fluid flow situations. It is widely used in many applications, ranging from liquid flow in a pipe to the passage of air over an aircraft wing. A similar effect is created by the introduction of a stream of higher-velocity fluid, such as the hot gases from a flame in air. This relative movement generates fluid friction, which is a factor in developing turbulent flow. The application of Reynolds numbers to both situations allows scaling factors to be developed. The Reynolds number can be defined for several different situations where a fluid is in relative motion to a surface. These definitions generally include a velocity and a characteristic length or characteristic dimension. For aircraft or ships, the length or width can be used. For flow in a pipe, or for a sphere moving in a fluid, the internal diameter of the pipe, or the diameter of the sphere, is generally used today. Other shapes such as non-spherical objects have an equivalent diameter defined. For fluids of variable density, such as compressible gases, or fluids of variable viscosity, such as non-Newtonian fluids, special rules apply. The velocity may also be a matter of convention in some circumstances, notably stirred vessels. In practice, matching the Reynolds number is not on its own sufficient to guarantee similitude. Very small changes to shape and surface roughness can result in very different flows. Nevertheless, Reynolds numbers are widely used.
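For pipe flow the definition reduces to Re = ρvD/μ, with D the internal diameter as the characteristic length. A minimal sketch, with illustrative water-pipe values; the laminar/turbulent thresholds quoted are a common rule of thumb for pipes, not universal constants.

```python
def reynolds_number(density, velocity, length, mu):
    """Re = rho * v * L / mu, with L the characteristic dimension
    (the internal diameter, for pipe flow)."""
    return density * velocity * length / mu

# Water (rho = 1000 kg/m^3, mu = 1.0e-3 Pa*s) in a 25 mm pipe at 1 m/s
# -- illustrative values.
re = reynolds_number(1000.0, 1.0, 0.025, 1.0e-3)

# Common rule of thumb for pipes: laminar below ~2300, turbulent above ~4000.
regime = "laminar" if re < 2300 else "turbulent" if re > 4000 else "transitional"
print(re, regime)  # 25000.0 turbulent
```

As the text cautions, matching Re between a model and the full-scale flow is necessary but not sufficient for similitude.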
Reynolds number
–
Sir George Stokes introduced Reynolds numbers
Reynolds number
–
Osborne Reynolds popularised the concept
Reynolds number
–
The Moody diagram, which describes the Darcy–Weisbach friction factor f as a function of the Reynolds number and relative pipe roughness.
129.
Ideal fluid
–
In physics, a perfect fluid is a fluid that can be completely characterized by its rest-frame mass density and isotropic pressure. Real fluids exhibit shear stresses, viscosity, and heat conduction; perfect fluids are idealized models in which these possibilities are neglected. Specifically, perfect fluids have no shear stresses, viscosity, or heat conduction. Perfect fluids admit a Lagrangian formulation, which allows the techniques used in field theory, in particular quantization, to be applied to fluids. Unfortunately, heat conduction and anisotropic stresses can not be treated in these generalized formulations. Perfect fluids are often used in general relativity to model idealized distributions of matter, such as the interior of a star or an isotropic universe. In the latter case, the equation of state of the perfect fluid may be used in the Friedmann–Lemaître–Robertson–Walker equations to describe the evolution of the universe.
Ideal fluid
–
The stress–energy tensor of a perfect fluid contains only the diagonal components.
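The diagonal form mentioned in the caption can be checked directly from the defining formula T^(μν) = (ρ + p)·u^μ·u^ν + p·η^(μν), evaluated in the fluid's rest frame. A minimal sketch with c = 1 and metric signature (−, +, +, +); the density and pressure values are illustrative.

```python
# Rest-frame stress-energy tensor of a perfect fluid (c = 1):
#   T^{mu nu} = (rho + p) u^mu u^nu + p eta^{mu nu}
rho, p = 2.0, 0.5  # illustrative density and pressure
eta = [[-1.0, 0, 0, 0],  # Minkowski metric, signature (-, +, +, +)
       [0, 1.0, 0, 0],
       [0, 0, 1.0, 0],
       [0, 0, 0, 1.0]]
u = [1.0, 0.0, 0.0, 0.0]  # four-velocity of the fluid at rest

T = [[(rho + p) * u[a] * u[b] + p * eta[a][b] for b in range(4)]
     for a in range(4)]

# Only the diagonal survives: diag(rho, p, p, p).
print([T[i][i] for i in range(4)])  # [2.0, 0.5, 0.5, 0.5]
```

The (ρ + p) term contributes only to T^00 in this frame, while the pressure term fills the spatial diagonal, reproducing the isotropic pressure of the definition.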
130.
Superfluidity
–
Superfluidity is the characteristic property of a fluid with zero viscosity which therefore flows without loss of kinetic energy. When stirred, a superfluid forms vortices that continue to rotate indefinitely. Superfluidity occurs in two isotopes of helium, helium-3 and helium-4, when they are liquified by cooling to cryogenic temperatures. It is also a property of exotic states of matter theorized to exist in astrophysics, high-energy physics, and theories of quantum gravity. Superfluidity was originally discovered in liquid helium by Pyotr Kapitsa and John F. Allen. It has since been described through phenomenology and microscopic theories. In liquid helium-4, superfluidity occurs at far higher temperatures than it does in helium-3. Each atom of helium-4 is a boson, by virtue of its integer spin. A helium-3 atom is a fermion; it can form bosons only by pairing with another atom at much lower temperatures. This process is similar to the electron pairing in superconductivity. Such vortices had previously been observed in an ultracold bosonic gas using 87Rb in 2000, and more recently in two-dimensional gases. As early as 1999, Lene Hau created such a condensate using sodium atoms for the purpose of slowing light, and later stopping it completely. Her group reported: "With a light-roadblock setup, we can generate controlled collisions between shock waves resulting in completely nonlinear excitations. We have observed hybrid structures consisting of vortex rings embedded in dark solitonic shells. The vortex rings act as 'phantom propellers' leading to very rich excitation dynamics."
Superfluidity
–
Fig. 2. The liquid helium is in the superfluid phase. As long as it remains superfluid, it creeps up the wall of the cup as a thin film. It comes down on the outside, forming a drop which will fall into the liquid below. Another drop will form—and so on—until the cup is empty.
Superfluidity
–
Fig. 1. Helium II will "creep" along surfaces in order to find its own level—after a short while, the levels in the two containers will equalize. The Rollin film also covers the interior of the larger container; if it were not sealed, the helium II would creep out and escape.
131.
Boundary layer
–
On an aircraft wing the boundary layer is the part of the flow close to the wing, where viscous forces distort the surrounding non-viscous flow. Laminar boundary layers can be loosely classified according to the circumstances under which they are created. When a fluid rotates and viscous forces are balanced by the Coriolis effect, an Ekman layer forms. In the theory of heat transfer, a thermal boundary layer occurs. A surface can have multiple types of boundary layer simultaneously. The viscous nature of airflow is responsible for skin friction. The layer of air over the wing's surface that is slowed down or stopped by viscosity is the boundary layer. There are two different types of boundary layer flow: laminar and turbulent. The laminar boundary layer is a very smooth flow, while the turbulent boundary layer contains swirls or "eddies". The laminar flow creates less skin friction drag than the turbulent flow, but is less stable. Boundary layer flow over a wing surface begins as a smooth laminar flow. As the flow continues back from the leading edge, the laminar boundary layer increases in thickness. At some distance back from the leading edge, the smooth laminar flow breaks down and transitions to a turbulent flow. The low-energy laminar flow, however, tends to break down more suddenly than the turbulent layer. This allows a closed-form solution for the flow in both areas, a significant simplification of the full Navier–Stokes equations.
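The growth of the laminar boundary layer with distance from the leading edge can be estimated with the classical Blasius flat-plate result, δ ≈ 5x/√Re_x, where Re_x = Ux/ν. A minimal sketch; the flat-plate formula and the air properties are assumptions standing in for the wing flow described above.

```python
import math

def blasius_thickness(x, free_stream_velocity, nu):
    """Laminar (Blasius) flat-plate boundary-layer thickness estimate:
    delta ~ 5 * x / sqrt(Re_x), with Re_x = U * x / nu."""
    re_x = free_stream_velocity * x / nu
    return 5.0 * x / math.sqrt(re_x)

# Air (kinematic viscosity nu ~ 1.5e-5 m^2/s) at 10 m/s,
# 0.1 m back from the leading edge -- illustrative values.
delta = blasius_thickness(0.1, 10.0, 1.5e-5)
print(delta)  # ~2 mm
```

The √x growth matches the text: the layer thickens steadily downstream until the laminar flow breaks down and transitions to turbulence.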
Boundary layer
–
Ludwig Prandtl
132.
Method of matched asymptotic expansions
–
The method of matched asymptotic expansions is a common approach to finding an accurate approximation to the solution of an equation. It is particularly used when solving singularly perturbed differential equations. In a large class of singularly perturbed problems, the domain may be divided into two or more subdomains. In one of these, often the largest, the solution is accurately approximated by an asymptotic series found by treating the problem as a regular perturbation. This approximation is called the "outer solution"; the other subdomains contain one or more "inner solutions", so named for their relationship to the transition (boundary) layer. As a model problem, consider the boundary value problem ε y″ + (1 + ε) y′ + y = 0 on 0 < t < 1, with y(0) = 0, y(1) = 1 and 0 < ε ≪ 1. Setting ε = 0 gives y′ + y = 0, which has solution y = A e^(−t) for some constant A. Applying the boundary condition y(0) = 0, we would have A = 0; applying the boundary condition y(1) = 1, we would have A = e. It is therefore impossible to satisfy both boundary conditions, so ε = 0 is not a valid approximation to make across the whole of the domain. From this we infer that there must be a boundary layer at one of the endpoints of the domain, where the term in ε needs to be retained. Away from the layer, the outer solution is y_O = e^(1 − t); it is the leading-order solution. In the inner region, t and ε are both tiny but of comparable size, so define the new O(1) time variable τ = t / ε. The rescaled equation has solution y_I = B − C e^(−τ) for some constants B and C. We use matching with the outer solution to find the value of the constant B. To obtain our final, matched, composite solution, valid on the whole domain, one popular method is the uniform method. Note that this expression correctly reduces to the expressions for y_I and y_O when t is O(ε) and O(1), respectively. This final solution satisfies the problem's original differential equation.
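The matching procedure can be checked numerically. A minimal sketch, assuming the standard textbook model problem ε y″ + (1 + ε) y′ + y = 0 with y(0) = 0, y(1) = 1, whose composite solution is y(t) ≈ e^(1−t) − e^(1−t/ε); for this particular problem the composite happens to satisfy the differential equation exactly, so its residual vanishes to machine precision.

```python
import math

eps = 0.05  # small parameter (illustrative value)

def y(t):
    """Composite (matched) solution: outer + inner - overlap."""
    return math.exp(1 - t) - math.exp(1 - t / eps)

def residual(t):
    """Substitute y and its exact derivatives back into
    eps*y'' + (1 + eps)*y' + y; here it vanishes identically."""
    y1 = -math.exp(1 - t) + (1 / eps) * math.exp(1 - t / eps)
    y2 = math.exp(1 - t) - (1 / eps) ** 2 * math.exp(1 - t / eps)
    return eps * y2 + (1 + eps) * y1 + y(t)

print(y(0.0))             # 0.0: the left boundary condition, exactly
print(abs(y(1.0) - 1.0))  # exponentially small error at the right boundary
print(residual(0.5))      # ~0
```

The only defect of the composite is at t = 1, where the boundary condition is met up to a term of order e^(−1/ε), which shrinks faster than any power of ε.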
Method of matched asymptotic expansions
–
Convergence of approximations. Approximations and exact solutions, which are indistinguishable at this scale, are shown for various values of ε. The outer solution is also shown. Note that since the boundary layer becomes narrower with decreasing ε, the approximations converge to the outer solution pointwise, but not uniformly, almost everywhere.
133.
Laminar flow
–
In fluid dynamics, laminar flow occurs when a fluid flows in parallel layers, with no disruption between the layers. At low velocities, the fluid tends to flow without lateral mixing, and adjacent layers slide past one another like playing cards. There are no cross-currents perpendicular to the direction of flow, nor eddies or swirls of fluids. Laminar flow is a flow regime characterized by high momentum diffusion and low momentum convection. Laminar flow tends to occur below a threshold at which the flow becomes turbulent. Turbulent flow is a less orderly flow regime, characterised by eddies or small packets of fluid particles which result in lateral mixing. In non-scientific terms, laminar flow is smooth while turbulent flow is rough. The dimensionless Reynolds number is an important parameter in the equations that describe whether fully developed flow conditions lead to laminar or turbulent flow. Laminar flow generally occurs when the fluid is moving slowly or is very viscous. The value of the Reynolds number at which laminar flow occurs will depend on the geometry of the flow system and the flow pattern. Here Q is the volumetric flow rate, A is the pipe's cross-sectional area, v is the mean velocity of the fluid, μ is the dynamic viscosity of the fluid, and ν = μ / ρ is the kinematic viscosity of the fluid.
Laminar flow
–
A sphere in Stokes flow, at very low Reynolds number. An object moving through a fluid experiences a force in the direction opposite to its motion.
134.
Speed of sound
–
The speed of sound is the distance travelled per unit time by a sound wave as it propagates through an elastic medium. The speed of sound in an ideal gas depends only on its temperature and composition. The speed has a weak dependence on frequency and pressure in ordinary air, deviating slightly from ideal behavior. In everyday speech, speed of sound refers to the speed of sound waves in air. However, the speed of sound varies from substance to substance: sound travels most slowly in gases; it travels faster in liquids; and faster still in solids. For example, sound travels at 343.2 m/s in air; it travels at 1,484 m/s in water; and at 5,120 m/s in iron. In an exceptionally stiff material such as diamond, sound travels at 12,000 m/s, around the maximum speed that sound will travel under normal conditions. Sound in solids also propagates as different types of waves, which usually travel at different speeds, as exhibited in seismology. The speed of a compression wave in solids is determined by the medium's compressibility, shear modulus and density. The speed of shear waves is determined only by the solid material's shear modulus and density. The ratio of the speed of an object to the speed of sound in the fluid is called the object's Mach number. Objects moving at speeds greater than Mach 1 are said to be traveling at supersonic speeds. In 1709, the Reverend William Derham, Rector of Upminster, published a more accurate measure of the speed of sound, at 1,072 Parisian feet per second. Measurements were made of gunshots from a number of local landmarks, including North Ockendon church. From the interval between the flash and the report, the speed that the sound had travelled was calculated.
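The temperature-and-composition dependence for an ideal gas is captured by c = √(γRT/M). A minimal sketch that reproduces the ~343 m/s figure quoted above; the values for dry air (γ ≈ 1.4, M ≈ 0.028965 kg/mol) are standard but assumed here.

```python
import math

def speed_of_sound_ideal_gas(gamma, molar_mass, temperature):
    """c = sqrt(gamma * R * T / M) for an ideal gas, in m/s."""
    R = 8.314462618  # molar gas constant, J/(mol*K)
    return math.sqrt(gamma * R * temperature / molar_mass)

# Dry air at 20 C (293.15 K): gamma ~ 1.4, M ~ 0.028965 kg/mol.
c = speed_of_sound_ideal_gas(1.4, 0.028965, 293.15)
print(round(c, 1))  # ~343.2
```

Note that pressure does not appear: for an ideal gas, raising the pressure at fixed temperature raises the density in exact proportion, leaving the speed unchanged.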
Speed of sound
–
U.S. Navy F/A-18 traveling near the speed of sound. The white halo consists of condensed water droplets formed by the sudden drop in air pressure behind the shock cone around the aircraft (see Prandtl-Glauert singularity).
Speed of sound
–
Pressure-pulse or compression-type wave (longitudinal wave) confined to a plane. This is the only type of sound wave that travels in fluids (gases and liquids)
135.
Gradient
–
In mathematics, the gradient is a generalization of the usual concept of derivative to functions of several variables. If f is a real-valued function of several variables, its gradient is the vector whose components are the n partial derivatives of f. It is thus a vector-valued function. Similarly to the usual derivative, the gradient represents the slope of the tangent of the graph of the function. The components of the gradient in coordinates are the coefficients of the variables in the equation of the tangent space to the graph. The Jacobian is the generalization of the gradient for differentiable maps between Euclidean spaces or, more generally, manifolds. A further generalization for a function between Banach spaces is the Fréchet derivative. Consider a room in which the temperature is given by a scalar field T, so at each point (x, y, z) the temperature is T(x, y, z). At each point in the room, the gradient of T at that point will show the direction in which the temperature rises most quickly. The magnitude of the gradient will determine how fast the temperature rises in that direction. Consider a surface whose height above sea level at point (x, y) is H(x, y). The gradient of H at a point is a vector pointing in the direction of the steepest grade at that point. The steepness of the slope at that point is given by the magnitude of the gradient vector. Suppose that the steepest slope on a hill is 40%. If a road goes directly up the hill, then the steepest slope on the road will also be 40%.
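The gradient can be estimated componentwise with central differences, one partial derivative per variable. A minimal sketch using the function from the figure caption below, f(x, y) = x·e^(−(x² + y²)); the evaluation point is an illustrative choice.

```python
import math

def f(x, y):
    """The function from the figure: f(x, y) = x * exp(-(x^2 + y^2))."""
    return x * math.exp(-(x**2 + y**2))

def numerical_gradient(g, x, y, h=1e-6):
    """Central-difference estimate of (df/dx, df/dy): each component is
    an ordinary one-variable derivative with the other variable frozen."""
    return ((g(x + h, y) - g(x - h, y)) / (2 * h),
            (g(x, y + h) - g(x, y - h)) / (2 * h))

gx, gy = numerical_gradient(f, 0.5, 0.5)
print(gx, gy)  # ~0.303 and ~-0.303: the direction of steepest increase
```

The analytic gradient, (1 − 2x²)e^(−(x²+y²)) and −2xy·e^(−(x²+y²)), agrees with these values, which is exactly what the blue arrows in the plot depict.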
Gradient
–
Gradient of the 2-d function f(x, y) = x·e^(−(x² + y²)) is plotted as blue arrows over the pseudocolor plot of the function.
Gradient
–
In the above two images, the values of the function are represented in black and white, black representing higher values, and its corresponding gradient is represented by blue arrows.
136.
Perpendicular
–
In elementary geometry, the property of being perpendicular is the relationship between two lines which meet at a right angle. The property extends to other related geometric objects. A line is said to be perpendicular to another line if the two lines intersect at a right angle. Because the relationship is symmetric, we may speak of two lines as being perpendicular to each other without specifying an order. Perpendicularity easily extends to segments and rays. In symbols, AB ⊥ CD means line segment AB is perpendicular to line segment CD. A line is said to be perpendicular to a plane if it is perpendicular to every line in the plane that it intersects. This definition depends on the definition of perpendicularity between lines. Two planes in space are said to be perpendicular if the dihedral angle at which they meet is a right angle. Perpendicularity is one particular instance of the more general mathematical concept of orthogonality; perpendicularity is the orthogonality of geometric objects. The word "foot" is frequently used in connection with perpendiculars. This usage is exemplified in the top diagram, above, and its caption. The diagram can be in any orientation; the foot is not necessarily at the bottom. To construct the perpendicular to a line through a point P with compass and straightedge: Step 1: mark two points A' and B' on the line equidistant from P. Step 2: construct circles centered at A' and B' having equal radius; the line through their intersection points is the required perpendicular.
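In coordinates, the orthogonality mentioned above has a simple algebraic test: two direction vectors are perpendicular exactly when their dot product is zero. A minimal sketch in two dimensions with illustrative integer vectors.

```python
def dot(u, v):
    """Dot product of two 2-D vectors."""
    return u[0] * v[0] + u[1] * v[1]

def perpendicular(u, v):
    """Two direction vectors (and the segments along them) are
    perpendicular exactly when their dot product is zero."""
    return dot(u, v) == 0

print(perpendicular((1, 0), (0, 5)))   # True: the axes meet at a right angle
print(perpendicular((1, 1), (1, 2)))   # False
print(perpendicular((2, 3), (-3, 2)))  # True: (x, y) against (-y, x)
```

The last example shows the general recipe: rotating a vector (x, y) to (−y, x) always produces a perpendicular direction.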
Perpendicular
–
The segment AB is perpendicular to the segment CD because the two angles it creates (indicated in orange and blue) are each 90 degrees.
137.
Drag (physics)
–
In fluid dynamics, drag is a force acting opposite to the relative motion of any object moving with respect to a surrounding fluid. This can exist between two fluid layers or between a fluid and a solid surface. Unlike other resistive forces, such as dry friction, which are nearly independent of velocity, drag forces depend on velocity. Drag force is proportional to the velocity for laminar flow and to the squared velocity for turbulent flow. Even though the ultimate cause of drag is viscous friction, turbulent drag is independent of viscosity. Drag forces always decrease fluid velocity relative to the solid object in the fluid's path. In the case of viscous drag of fluid in a pipe, the drag force on the immobile pipe decreases fluid velocity relative to the pipe. In the physics of sports, drag force is necessary to explain the performance of sprinters. The phrase parasitic drag is mainly used in aerodynamics, since for lifting wings drag is in general small compared to lift. For flow around bluff bodies, drag most often dominates, and then the qualifier "parasitic" is meaningless. Skin friction and form drag on bluff bodies are not described as elements of "parasitic drag", but directly as elements of drag. Drag depends on the speed of the object. At low Reynolds number Re, the drag coefficient C_D is asymptotically proportional to Re⁻¹, which means that the drag is proportional to the speed. At high Re, C_D is more or less constant. The graph to the right shows how C_D varies with Re for the case of a sphere.
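The two regimes described above can be sketched with the standard drag laws; all the numbers below are illustrative assumptions, not values from the text.

```python
import math

def stokes_drag(mu, radius, v):
    """Low-Reynolds (laminar) limit: drag proportional to speed."""
    return 6 * math.pi * mu * radius * v

def quadratic_drag(rho, v, cd, area):
    """High-Reynolds limit: drag proportional to speed squared."""
    return 0.5 * rho * v**2 * cd * area

# A sphere of frontal area 0.1 m^2 moving at 10 m/s in air
# (rho ~ 1.2 kg/m^3), with Cd ~ 0.47 roughly constant at high Re:
f = quadratic_drag(1.2, 10.0, 0.47, 0.1)  # about 2.82 N
```

Doubling the speed doubles the Stokes drag but quadruples the quadratic drag, which is exactly the laminar/turbulent distinction made above.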
Drag (physics)
–
The power curve: form and induced drag vs. airspeed
Drag (physics)
138.
Friction
–
Friction is the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other. There are several types of friction. Dry friction resists relative lateral motion of two solid surfaces in contact; it is subdivided into static friction between non-moving surfaces and kinetic friction between moving surfaces. Fluid friction describes the friction between layers of a viscous fluid that are moving relative to each other. Lubricated friction is a case of fluid friction where a fluid separates two solid surfaces. Skin friction is a component of drag, the force resisting the motion of a fluid across the surface of a body. Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation. When surfaces in contact move relative to each other, the friction between the two surfaces converts kinetic energy into thermal energy. This property can have dramatic consequences, as illustrated by the use of friction created by rubbing pieces of wood together to start a fire. Kinetic energy is converted to thermal energy whenever motion with friction occurs, for example when a viscous fluid is stirred. Another important consequence of many types of friction can be wear, which is damage to components. Friction is a component of the science of tribology. Friction is not itself a fundamental force. Dry friction arises from a combination of inter-surface adhesion, surface roughness, and surface contamination. Friction is a non-conservative force: work done against friction is path dependent.
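The static/kinetic distinction can be sketched with the classical Coulomb friction model. The coefficients below are illustrative assumptions, not values from the text.

```python
# Coulomb dry-friction sketch distinguishing static and kinetic friction.
def friction_force(applied, normal, mu_s=0.6, mu_k=0.4):
    """Friction force (N) opposing a horizontal push on a resting block."""
    max_static = mu_s * normal
    if abs(applied) <= max_static:
        # Block stays put: static friction exactly cancels the push.
        return -applied
    # Block slides: kinetic friction, smaller than the static maximum.
    direction = 1.0 if applied > 0 else -1.0
    return -mu_k * normal * direction

# With a 10 N normal load, pushes up to 6 N are fully resisted;
# beyond that the block slides against 4 N of kinetic friction.
```

This reproduces the behavior described in the caption below: friction tracks the applied force until the block breaks free, then drops to the smaller kinetic value.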
Friction
–
When the block is not moving, it experiences static friction. The friction increases as the applied force increases until the block moves. After the block moves, it experiences kinetic friction, which is less than the maximum static friction.
139.
Non-Newtonian fluid
–
A non-Newtonian fluid is a fluid with properties that differ in any way from those of Newtonian fluids. Most commonly, the viscosity of non-Newtonian fluids is dependent on shear rate or shear rate history. Some non-Newtonian fluids with shear-independent viscosity, however, still exhibit non-Newtonian behavior. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different and can even be time-dependent. Therefore, a constant coefficient of viscosity cannot be defined. The properties are better studied using tensor-valued constitutive equations, which are common in the field of continuum mechanics. The viscosity of a shear thickening, or dilatant, fluid appears to increase when the shear rate increases. Corn starch dissolved in water is a common example: when stirred slowly it looks milky, when stirred vigorously it feels like a very viscous liquid. A shear thinning fluid, by contrast, appears to become less viscous as the shear rate increases; to avoid confusion, this classification is more clearly termed pseudoplastic. Another example of a shear thinning fluid is blood. This property is highly favoured within the body, as it allows the viscosity of blood to decrease with increased shear rate. Fluids that have a linear shear stress/shear rate relationship but require a finite yield stress before they begin to flow are called Bingham plastics. Several examples are clay suspensions, drilling mud, toothpaste, and mustard. The surface of a Bingham plastic can hold peaks when it is still.
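Two of the constitutive models named above can be sketched directly; the parameter values are illustrative assumptions.

```python
# Simple constitutive models: stress as a function of shear rate.
def power_law_stress(k, n, shear_rate):
    """Power-law fluid: n < 1 is shear thinning, n > 1 is shear thickening."""
    return k * shear_rate**n

def bingham_stress(tau0, mu_p, shear_rate):
    """Bingham plastic: flows only once the yield stress tau0 is exceeded."""
    return tau0 + mu_p * shear_rate

# Shear thinning (n = 0.5): the apparent viscosity (stress / rate)
# falls from 1.0 at rate 1.0 down to 0.5 at rate 4.0.
eta_slow = power_law_stress(1.0, 0.5, 1.0) / 1.0
eta_fast = power_law_stress(1.0, 0.5, 4.0) / 4.0
```

Because the apparent viscosity changes with shear rate, no single constant coefficient of viscosity describes either model, which is the defining point made above.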
Non-Newtonian fluid
–
Demonstration of a non-Newtonian fluid at Universum in Mexico City
Non-Newtonian fluid
–
Classification of fluids with shear stress as a function of shear rate.
Non-Newtonian fluid
–
Oobleck on a subwoofer. Applying force to oobleck, by sound waves in this case, makes the non-Newtonian fluid thicken.
140.
Sand
–
Sand is a naturally occurring granular material composed of finely divided rock and mineral particles. It is defined by size, being finer than gravel and coarser than silt. Sand can also refer to a textural class of soil or soil type; i.e., a soil containing more than 85% sand-sized particles by mass. Calcium carbonate sand, for example, is the primary form of sand apparent in areas where reefs have dominated the ecosystem for millions of years, as in the Caribbean. In terms of size as used by geologists, sand particles range in diameter from 0.0625 mm to 2 mm. An individual particle in this size range is termed a sand grain. Sand grains are between gravel and silt in size. A 1953 standard published by the American Association of State Highway and Transportation Officials set the minimum sand size at 0.074 mm. A 1938 specification of the United States Department of Agriculture was 0.05 mm. Sand feels gritty when rubbed between the fingers. ISO 14688 grades sands as fine, medium and coarse, with ranges 0.063 mm to 0.2 mm, 0.2 mm to 0.63 mm, and 0.63 mm to 2.0 mm respectively. These sizes are based on the Krumbein scale, where size in Φ = −log₂D, D being the particle size in mm. For sand, the value of Φ varies from −1 to +4, with the divisions between sub-categories at whole numbers. The composition of sand is highly variable, depending on the local rock sources and conditions. The gypsum sand dunes of the White Sands National Monument in New Mexico are famous for their white color. Arkose is a sandstone with considerable feldspar content, derived from weathering and erosion of a granitic rock outcrop.
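The Krumbein scale above is easy to compute directly; the sketch below just evaluates Φ = −log₂D at the stated boundaries of the sand range.

```python
import math

# Krumbein phi scale: phi = -log2(D), with D the particle size in mm.
def krumbein_phi(d_mm):
    return -math.log2(d_mm)

# Sand spans phi = -1 (2 mm grains) down to phi = +4 (0.0625 mm grains),
# with whole-number phi values dividing the sub-categories.
print(krumbein_phi(2.0))     # -1.0
print(krumbein_phi(0.0625))  # 4.0
```

Note that smaller grains give larger Φ values, so "fine" sand sits at the high end of the phi range.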
Sand
–
Sand dunes in the Idehan Ubari, Libya.
Sand
–
Close-up (1×1 cm) of sand from the Gobi Desert, Mongolia.
Sand
–
Heavy minerals (dark) in a quartz beach sand (Chennai, India).
Sand
–
Sand from Coral Pink Sand Dunes State Park, Utah. These are grains of quartz with a hematite coating providing the orange color.
141.
Paint
–
Paint is any liquid, liquefiable, or mastic composition that, after application to a substrate in a thin layer, converts to a solid film. It is most commonly used to protect, color, or provide texture to objects. Paint can be purchased in many colors and in many different types, such as watercolor, synthetic, etc. Most types dry into a solid. In 2004, South African archeologists reported finds in Blombos Cave of a 100,000-year-old human-made ochre-based mixture that could have been used like paint. Further excavation in the same cave uncovered tools for grinding pigments and making a primitive paint-like substance. The Egyptians applied their colors separately from each other without any blending or mixture. They appear to have used six colors: white, black, blue, red, yellow, and green. They first covered the area entirely with white, then traced the design in black, leaving out the lights of the ground color. They used minium for red, generally of a dark tinge. Pliny mentions some painted ceilings in his day in the town of Ardea, done prior to the foundation of Rome. He expresses great surprise and admiration at their preservation after the lapse of so many centuries. Once applied, the substance would harden and adhere to the surface. Pigment was made from plants and different soils. Most paints used either oil or water as a base.
Paint
–
Dried green paint
Paint
–
A charcoal and ochre cave painting of Megaloceros from Lascaux, France.
Paint
–
A piece of Giant clam shell used to hold ochre paint in pre-dynastic ancient Egypt
Paint
–
Watercolors as applied with a brush
142.
Cartesian coordinate system
–
In general, n Cartesian coordinates specify a point in an n-dimensional Euclidean space for any dimension n. These coordinates are equal, up to sign, to distances from the point to n mutually perpendicular hyperplanes. The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, engineering and many more. They are the most common coordinate system used in computer graphics, computer-aided geometric design and other geometry-related data processing. The adjective Cartesian refers to the French mathematician and philosopher René Descartes, who published this idea in 1637. It was independently discovered by Pierre de Fermat, who also worked in three dimensions, although Fermat did not publish the discovery. Both authors used a single axis in their treatments, with a variable length measured in reference to this axis. Later commentators introduced several concepts while trying to clarify the ideas contained in Descartes' work. Many other coordinate systems have been developed since Descartes, such as the polar coordinates for the plane, and the spherical and cylindrical coordinates for three-dimensional space. The development of the Cartesian coordinate system would play a fundamental role in the development of calculus by Isaac Newton and Gottfried Wilhelm Leibniz. The two-coordinate description of the plane was later generalized into the concept of vector spaces. A line with a chosen Cartesian system is called a number line.
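The relation to the polar coordinates mentioned above can be sketched with the standard conversion formulas x = r·cos θ, y = r·sin θ; the sample point is illustrative.

```python
import math

# Round-trip conversion between Cartesian (x, y) and polar (r, theta).
def to_polar(x, y):
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

r, theta = to_polar(3.0, 4.0)    # r = 5.0 by the Pythagorean theorem
x, y = to_cartesian(r, theta)    # recovers (3.0, 4.0)
```

Using `atan2` rather than `atan(y/x)` keeps the angle in the correct quadrant for all four sign combinations of x and y.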
Cartesian coordinate system
–
The right hand rule.
Cartesian coordinate system
–
Illustration of a Cartesian coordinate plane. Four points are marked and labeled with their coordinates: (2,3) in green, (−3,1) in red, (−1.5,−2.5) in blue, and the origin (0,0) in purple.
Cartesian coordinate system
–
3D Cartesian Coordinate Handedness
143.
Euler equations (fluid dynamics)
–
In fluid dynamics, the Euler equations are a set of quasilinear hyperbolic equations governing adiabatic and inviscid flow. They are named after Leonhard Euler. Historically, only the incompressible equations were derived by Euler. From the mathematical point of view, Euler equations are notably hyperbolic conservation equations in the case without an external field. Like any Cauchy equation, the Euler equations originally formulated in convective form can also be put in the "conservation form". The convective form emphasizes changes to the state in a frame of reference moving with the fluid. They were among the first partial differential equations to be written down. An additional equation, later to be called the adiabatic condition, was supplied by Pierre-Simon Laplace in 1816. Here g represents body accelerations acting on the continuum, for example gravity, inertial accelerations, electric field acceleration, and so on. The first equation is the Euler momentum equation with uniform density. The second equation is the incompressible constraint, stating that the velocity is a solenoidal field. The equations thus represent respectively conservation of momentum and conservation of mass. In 3D, for example, the position and velocity vectors are explicitly r = (x, y, z) and u = (u_x, u_y, u_z). Flow velocity and pressure are the so-called physical variables. Although Euler first presented these equations in 1755, fundamental questions about them remain unanswered.
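The two incompressible equations described above are conventionally written as follows (the notation is assumed here, since the text's own symbols are not shown): with u the flow velocity, w = p/ρ the pressure per unit (uniform) density, and g the body acceleration,

```latex
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\nabla w + \mathbf{g},
\qquad
\nabla \cdot \mathbf{u} = 0 .
```

The first line is the momentum equation with uniform density; the second is the incompressibility (solenoidal-velocity) constraint, matching the description in the text.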
Euler equations (fluid dynamics)
–
The "Streamline curvature theorem" states that the pressure at the upper surface of an airfoil is lower than the pressure far away and that the pressure at the lower surface is higher than the pressure far away; hence the pressure difference between the upper and lower surfaces of an airfoil generates a lift force.
144.
Communicating vessels
–
145.
Secondary flow
–
The flow in these regions is the secondary flow. The basic principles of the Coriolis effect satisfactorily explain that, well above ground level, the direction of the wind in the atmosphere is parallel to the isobars. The flow of air across the isobars near ground level is a secondary flow; it does not conform to the primary flow, which is parallel to the isobars. At heights well above ground level there is a balance between the Coriolis effect, the local pressure gradient, and the velocity of the wind. This is balanced flow. Closer to the ground, the air is not able to accelerate to the speed necessary for balanced flow. The secondary flow toward the center of a region of low pressure is therefore drawn upward by the significantly lower pressure at mid altitudes. Conversely, the slow descent of air over a region of high pressure explains why such regions usually experience cloud-free skies for many days. The primary flow around a tropical cyclone is parallel to the isobars, and hence circular. The closer to the center of the cyclone, the faster the wind speed. In accordance with Bernoulli's principle, where the speed is fastest the barometric pressure is lowest. Consequently, near the center of the cyclone the barometric pressure is very low. There is a strong pressure gradient across the isobars toward the center of the cyclone. This gradient provides the centripetal force necessary for the circular motion of each parcel of air.
Secondary flow
–
An example of a dust devil in Ramadi, Iraq.
146.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book. For example, an e-book and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration of recognition was generated based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was published in 1970 as international standard ISO 2108. The International Standard Serial Number (ISSN) identifies periodical publications such as magazines, and the International Standard Music Number (ISMN) covers musical scores. The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the United States by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit "0". For example, SBN 340-01381-8 becomes ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format compatible with "Bookland" European Article Number EAN-13s.
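The check digits of both formats follow standard algorithms (ISBN-10 uses a weighted sum modulo 11; ISBN-13 uses alternating weights 1 and 3 modulo 10). The sketch below verifies the two numbers quoted in the text.

```python
# Standard ISBN check-digit computations.
def isbn10_check_digit(first9):
    """Check digit for a 10-digit ISBN, given its first 9 digits."""
    s = sum((10 - i) * int(d) for i, d in enumerate(first9))
    r = (11 - s % 11) % 11
    return "X" if r == 10 else str(r)

def isbn13_check_digit(first12):
    """Check digit for a 13-digit ISBN/EAN-13, given its first 12 digits."""
    s = sum((3 if i % 2 else 1) * int(d) for i, d in enumerate(first12))
    return str((10 - s % 10) % 10)

print(isbn10_check_digit("034001381"))     # "8"  -> ISBN 0-340-01381-8
print(isbn13_check_digit("978316148410"))  # "0"  -> ISBN 978-3-16-148410-0
```

The first computation also shows why prefixing "0" to an SBN needs no recalculation: the leading 0 contributes nothing to the weighted sum.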
International Standard Book Number
–
A 13-digit ISBN, 978-3-16-148410-0, as represented by an EAN-13 bar code
147.
McGraw-Hill, Inc.
–
S&P Global Inc. is an American publicly traded corporation headquartered in New York City. Its primary areas of business are financial information and analytics. The predecessor companies of S&P Global have a history dating to 1888, when James H. McGraw purchased the American Journal of Railway Appliances. He continued to add further publications, eventually establishing The McGraw Publishing Company in 1899. John A. Hill had also produced several technical and trade publications, and in 1902 formed his own business, The Hill Publishing Company. In the combined book business, John Hill served as President, with James McGraw as Vice-President. A later buyout made McGraw-Hill the largest educational publisher in the United States. In 1964, after both founders had died, the McGraw-Hill Publishing Company and the McGraw-Hill Book Company merged into McGraw-Hill, Inc. In 1979, McGraw-Hill purchased Byte magazine from its owner/publisher Virginia Williamson, who then became a vice-president of McGraw-Hill. In 1995, McGraw-Hill, Inc. became The McGraw-Hill Companies, Inc. as part of a corporate rebranding. In 2007, McGraw-Hill launched GradeGuru.com, which gave McGraw-Hill an opportunity to connect directly with its end users, the students. The site closed on April 29, 2012. In October 2011, McGraw-Hill announced it was selling its entire television station group for $212 million. The sale was completed on December 30, 2011.
McGraw-Hill, Inc.
–
McGraw Hill Financial, Inc.
McGraw-Hill, Inc.
–
1221 Avenue of the Americas, the headquarters of McGraw-Hill
McGraw-Hill, Inc.
–
2008 conference booth
148.
Applied physics
–
Applied physics is physics intended for a particular technological or practical use. It is usually considered a connection between physics and engineering. This approach is similar to that of applied mathematics. Applied physicists can also be interested in the use of physics for scientific research. For instance, people working in accelerator physics can contribute by working with engineers to enable the design and construction of high-energy colliders.
Applied physics
–
Experiment using a laser
Applied physics
–
A magnetic resonance image
Applied physics
–
Computer modeling of the space shuttle during re-entry
149.
Experimental physics
–
Experimental physics is the category of disciplines and sub-disciplines in the field of physics that are concerned with the observation of physical phenomena and experiments. Methods vary from discipline to discipline, from simple experiments and observations, such as the Cavendish experiment, to more complicated ones, such as the Large Hadron Collider. Although experimental and theoretical physics are concerned with different aspects of nature, they both share the same goal of understanding it and have a symbiotic relation. In the 17th century, Galileo made extensive use of experimentation to validate physical theories, which is the key idea in the modern scientific method. Galileo successfully tested several results in dynamics, in particular the law of inertia, which later became the first law in Newton's laws of motion. Huygens used the motion of a boat along a Dutch canal to illustrate an early form of the conservation of momentum. Experimental physics is considered to have reached a high point with the publication of the Philosophiae Naturalis Principia Mathematica by Sir Isaac Newton. The laws of motion and of universal gravitation it set out both agreed well with experiment. The Principia also included several theories in fluid dynamics. From the 17th century onward, thermodynamics was developed by physicists and chemists such as Boyle, Young, and many others. In 1733, Bernoulli used statistical arguments with classical mechanics to derive thermodynamic results, initiating the field of statistical mechanics. Ludwig Boltzmann, in the nineteenth century, is responsible for the modern form of statistical mechanics. Besides classical thermodynamics, another great field of experimental inquiry within physics was the nature of electricity. Observations in the eighteenth century by scientists such as Robert Boyle, Stephen Gray, and Benjamin Franklin created a foundation for later work. These observations also established our basic understanding of electrical current.
Experimental physics
–
A view of the CMS detector, an experimental endeavour of the LHC at CERN.
150.
Theoretical physics
–
Theoretical physics is a branch of physics which employs mathematical models and abstractions of physical objects and systems to rationalize, explain and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena. The advancement of science depends in general on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigor while giving little weight to experiments and observations. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation. A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory similarly differs from a mathematical theory, in the sense that the word "theory" has a different meaning in mathematical terms. A physical theory involves one or more relationships between various measurable quantities. Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example. For instance: "phenomenologists" might employ empirical formulas to agree with experimental results, often without deep physical understanding. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated. Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether.
Theoretical physics
–
Visual representation of a Schwarzschild wormhole. Wormholes have never been observed, but they are predicted to exist through mathematical models and scientific theory.
151.
Energy
–
In physics, energy is a property of objects which can be transferred to other objects or converted into different forms. The common definition of energy as the "ability to do work" is misleading, because energy is not necessarily available to do work. All of the many forms of energy are convertible to other kinds of energy. Energy is subject to a conservation law, which means that it is impossible to destroy energy. Conversion of thermal energy into other forms is restricted, however: there is a limit to the amount of energy that can do work in a cyclic process, a limit called the available energy. Other forms of energy can be transformed in the other direction, into thermal energy, without such limitations. The total energy of a system can be calculated by adding up all forms of energy in the system. Lifting an object against gravity performs mechanical work on the object and stores gravitational potential energy in it. Mass and energy are closely related: with a sensitive enough scale, one could measure an increase in mass after heating an object. Living organisms require available energy to stay alive, such as the energy humans get from food. Civilisation gets the energy it needs from energy resources such as fossil fuels and renewable energy. The processes of Earth's ecosystem are driven by the radiant energy Earth receives from the sun and the geothermal energy contained within the earth. In biology, energy can be thought of as what is needed to keep entropy low. The total energy of a system can be classified in various ways.
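The bookkeeping described above ("adding up all forms of energy") can be sketched for a falling object, where gravitational potential energy converts to kinetic energy while the total stays constant. The mass and height are illustrative.

```python
# Mechanical-energy bookkeeping for free fall (no air resistance).
g = 9.81  # gravitational acceleration, m/s^2

def total_energy(m, h, v):
    """Potential (m*g*h) plus kinetic (m*v^2/2), in joules."""
    return m * g * h + 0.5 * m * v**2

m, h0 = 2.0, 10.0
e_start = total_energy(m, h0, 0.0)       # at rest: all potential
v_ground = (2 * g * h0) ** 0.5           # speed after falling h0
e_end = total_energy(m, 0.0, v_ground)   # at the ground: all kinetic
# e_start and e_end are both 196.2 J: transformed, not destroyed.
```

On impact that 196.2 J becomes mostly thermal energy, the direction of transformation the text notes is unrestricted.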
Energy
–
In a typical lightning strike, 500 megajoules of electric potential energy is converted into the same amount of energy in other forms, mostly light energy, sound energy and thermal energy.
Energy
–
Thermal energy is energy of microscopic constituents of matter, which may include both kinetic and potential energy.
Energy
–
Thomas Young – the first to use the term "energy" in the modern sense.
Energy
–
A turbo generator transforms the energy of pressurised steam into electrical energy
152.
Motion (physics)
–
In physics, motion is a change in position of an object over time. Motion is typically described in terms of displacement, distance, velocity, acceleration, and speed. An object's motion cannot change unless it is acted upon by a force, as described by Newton's first law. Momentum is a quantity used for measuring the motion of an object. As there is no absolute frame of reference, absolute motion cannot be determined. Thus, everything in the universe can be considered to be moving. One can also speak of the motion of shapes and boundaries. So, motion in general signifies a continuous change in the configuration of a physical system. In physics, motion is described through two sets of apparently contradictory laws of mechanics. Motions of familiar objects in the universe are described by classical mechanics, whereas the motion of sub-atomic objects is described by quantum mechanics. The study of motion is one of the oldest and largest subjects in science, engineering, and technology. Classical mechanics is fundamentally based on Newton's laws of motion. These laws describe the relationship between the forces acting on a body and the motion of that body. They were first compiled by Sir Isaac Newton in his work Philosophiæ Naturalis Principia Mathematica, first published in July 1687.
Motion (physics)
–
Motion involves a change in position, such as in this perspective of rapidly leaving Yongsan Station.
153.
Thermodynamics
–
Thermodynamics is a branch of science concerned with heat and temperature and their relation to energy and work. The laws of thermodynamics are explained by statistical mechanics. Thermodynamics applies to a wide variety of topics in science and engineering, especially physical chemistry and mechanical engineering. The initial application of thermodynamics to mechanical heat engines was extended early on to the study of chemical systems. Other formulations of thermodynamics emerged in the following decades. Statistical mechanics concerned itself with statistical predictions of the collective motion of particles from their microscopic behavior. In 1909, Constantin Carathéodory presented a purely mathematical approach in his axiomatic formulation of thermodynamics, a description often referred to as geometrical thermodynamics. The starting point for most considerations of thermodynamic systems is the laws of thermodynamics, four principles that form an axiomatic basis. The first law specifies that energy can be exchanged between physical systems as heat and work. In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of system and surroundings. A system is composed of particles whose average motions define its properties, which in turn are related through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes. With these tools, thermodynamics can be used to describe how systems respond to changes in their environment. This article is focused mainly on classical thermodynamics, which primarily studies systems in thermodynamic equilibrium.
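The first law's heat-and-work exchange can be sketched as simple bookkeeping; the sign convention and the numbers below are illustrative assumptions.

```python
# First-law bookkeeping: dU = Q - W, with Q the heat added TO the system
# and W the work done BY the system (a common sign convention).
def internal_energy_change(q_in, w_by):
    return q_in - w_by

# A gas absorbs 150 J of heat and does 60 J of work while expanding,
# so 90 J is stored as internal energy.
du = internal_energy_change(150.0, 60.0)
```

With the opposite convention (W as work done on the system) the sign of the second term flips, which is why stating the convention matters.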
Thermodynamics
–
Annotated color version of the original 1824 Carnot heat engine showing the hot body (boiler), working body (system, steam), and cold body (water), the letters labeled according to the stopping points in Carnot cycle
154.
Classical mechanics
–
In physics, classical mechanics is one of the two major sub-fields of mechanics, along with quantum mechanics. Classical mechanics is concerned with the set of physical laws describing the motion of bodies under the influence of a system of forces. The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering and technology. It is also widely known as Newtonian mechanics. Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets, stars, and galaxies. Within classical mechanics are fields of study that describe the behavior of solids, liquids and gases and other specific sub-topics. When classical mechanics cannot apply, such as at the quantum level combined with high speeds, quantum field theory becomes applicable. Since these aspects of physics were developed long before the emergence of quantum physics and relativity, some sources exclude Einstein's theory of relativity from this category. However, a number of modern sources do include relativistic mechanics, which in their view represents classical mechanics in its most accurate form. Later, more general methods were developed, leading to reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. They extend substantially beyond Newton's work, particularly through their use of analytical mechanics. The following introduces the basic concepts of classical mechanics. For simplicity, it often models real-world objects as point particles. The motion of a particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn.
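A point particle under a known force is the simplest case of the framework just described. The sketch below steps Newton's second law F = ma numerically with a basic (semi-implicit Euler) integrator; the mass, force, and step size are illustrative assumptions.

```python
# Point particle under a constant force, integrated step by step.
def simulate(mass, force, steps, dt):
    x, v = 0.0, 0.0                # start at rest at the origin
    for _ in range(steps):
        a = force / mass           # Newton's second law
        v += a * dt                # update velocity first...
        x += v * dt                # ...then position (semi-implicit Euler)
    return x, v

# 1 second of motion with a = 2 m/s^2: velocity reaches 2 m/s,
# position reaches ~1 m (1.001 m with this discrete scheme).
x, v = simulate(mass=2.0, force=4.0, steps=1000, dt=0.001)
```

Shrinking `dt` drives the position toward the exact value at² / 2 = 1 m, illustrating how the discrete model approximates the continuous one.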
Classical mechanics
–
Sir Isaac Newton (1643–1727), an influential figure in the history of physics and whose three laws of motion form the basis of classical mechanics
Classical mechanics
–
Diagram of orbital motion of a satellite around the earth, showing perpendicular velocity and acceleration (force) vectors.
Classical mechanics
–
Hamilton's greatest contribution is perhaps the reformulation of Newtonian mechanics, now called Hamiltonian mechanics.
155.
Ballistics
–
The earliest known ballistic projectiles were stones and spears, and the throwing stick. The oldest known arrows had shallow grooves on the base, indicating that they were shot from a bow. The oldest bow so far recovered is about 8,000 years old, found in the Holmegård swamp in Denmark. Archery seems to have arrived in the Americas with the Arctic small tool tradition, about 4,500 years ago. The word ballistics comes from the Greek βάλλειν ballein, meaning "to throw". A projectile is any object projected into space by the exertion of a force. Although any object in motion through space is a projectile, the term most commonly refers to a ranged weapon. Mathematical equations of motion are used to analyze projectile trajectories. Examples of projectiles include balls, arrows, bullets, etc. Throwing is the launching of a projectile by hand. Evidence of human throwing dates back 2 million years. The 90 mph throwing speed found in many athletes far exceeds the speed at which chimpanzees can throw things, which is about 20 mph. This ability reflects the capacity of the human shoulder muscles and tendons to store elasticity until it is needed to propel an object. A sling is a projectile weapon typically used to throw a blunt projectile such as a stone or a lead "sling-bullet". A sling has a small cradle or pouch in the middle of two lengths of cord.
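The equations of motion mentioned above reduce, in the drag-free case, to a parabolic trajectory with range v²·sin(2θ)/g; the throw speed below is an illustrative choice.

```python
import math

g = 9.81  # m/s^2

def flight_range(speed, angle_deg):
    """Horizontal range of a drag-free projectile launched from the ground."""
    a = math.radians(angle_deg)
    return speed**2 * math.sin(2 * a) / g

# A 40 m/s (~90 mph) throw at the optimal 45 degrees carries ~163 m;
# the 70-degree launch of the trajectory figure falls noticeably shorter.
print(round(flight_range(40.0, 45.0), 1))
print(round(flight_range(40.0, 70.0), 1))
```

With Stokes or Newtonian drag added, the trajectory is no longer a parabola and the optimal angle drops below 45°, which is the contrast the trajectory figure illustrates.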
Ballistics
–
Baseball throws can exceed 100 mph
Ballistics
–
Trajectories of three objects thrown at the same angle (70°). The black object doesn't experience any form of drag and moves along a parabola. The blue object experiences Stokes' drag, and the green object Newtonian drag.
Ballistics
–
Catapult 1 Mercato San Severino
Ballistics
–
SIG Pro semi-automatic pistol
156.
Lagrangian mechanics
–
Lagrangian mechanics is a reformulation of classical mechanics, introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in 1788. No new physics is introduced in Lagrangian mechanics compared to Newtonian mechanics. Newton's laws can include non-conservative forces like friction; however, they must include constraint forces explicitly and are best suited to Cartesian coordinates. Lagrangian mechanics is ideal for systems with conservative forces and for bypassing constraint forces in any coordinate system. Lagrangian mechanics also reveals conserved quantities and their symmetries in a direct way, as a special case of Noether's theorem. Lagrangian mechanics is important not just for its broad applications, but also for its role in advancing deep understanding of physics. It can also be applied to other systems by analogy, for instance to coupled electric circuits with inductances and capacitances. Lagrangian mechanics is widely used to solve mechanical problems in physics and engineering when Newton's formulation of classical mechanics is not convenient. Lagrangian mechanics applies to the dynamics of particles, while fields are described using a Lagrangian density. Lagrange's equations are also used in optimisation problems of dynamic systems. In mechanics, Lagrange's equations of the second kind are used much more than those of the first kind. Suppose we have a bead sliding around on a wire, or a swinging simple pendulum. Each such constrained system can be described by a suitable generalized coordinate, such as the position along the wire or the pendulum's angle. This choice eliminates the need for the constraint force to enter into the resultant system of equations. There are fewer equations since one is not directly calculating the influence of the constraint on the particle at a given moment. For a system of N point particles with masses m1, m2, ..., mN, each particle has a position vector, denoted r1, r2, ..., rN.
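The pendulum case can be sketched explicitly (a standard textbook derivation, not one worked in the text). For a simple pendulum of mass m and length l, the single generalized coordinate θ gives the Lagrangian

```latex
L = T - V = \tfrac{1}{2} m l^2 \dot{\theta}^2 + m g l \cos\theta ,
```

and the Euler–Lagrange equation of the second kind,

```latex
\frac{d}{dt}\frac{\partial L}{\partial \dot{\theta}}
  - \frac{\partial L}{\partial \theta}
  = m l^2 \ddot{\theta} + m g l \sin\theta = 0
\quad\Longrightarrow\quad
\ddot{\theta} + \frac{g}{l}\,\sin\theta = 0 .
```

The rod tension, the constraint force, never appears: choosing θ as the coordinate has bypassed it, exactly as described above.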
Lagrangian mechanics
–
Joseph-Louis Lagrange (1736—1813)
Lagrangian mechanics
–
Isaac Newton (1642—1726)
Lagrangian mechanics
–
Jean d'Alembert (1717—1783)
157.
Quantum mechanics
–
Quantum mechanics, including quantum field theory, is a fundamental branch of physics concerned with processes involving, for example, atoms and photons. Systems such as these which obey quantum mechanics can be in a quantum superposition of different states, unlike in classical physics. Early quantum theory was profoundly reconceived in the mid-1920s. The reconceived theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. Thomas Young's double-slit experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. In 1896, Wilhelm Wien empirically determined a law of black-body radiation, known as Wien's law in his honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, Wien's law was valid only at high frequencies and underestimated the radiance at low frequencies. Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" precisely matched the observed patterns of black-body radiation. Following Max Planck's solution to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits, a concept also introduced by Arnold Sommerfeld. This phase is known as the old quantum theory.
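Wien's failure at low frequencies, and Planck's fix, can be illustrated numerically; a minimal sketch (the temperature and frequencies are arbitrary illustrative values):

```python
import math

h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def planck(nu, T):
    """Planck's law: spectral radiance of a black body at frequency nu."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def wien(nu, T):
    """Wien's 1896 approximation, accurate only at high frequencies."""
    return (2 * h * nu**3 / c**2) * math.exp(-h * nu / (k * T))

T = 5000.0
print(wien(1e12, T) / planck(1e12, T))  # low frequency: Wien far underestimates
print(wien(1e15, T) / planck(1e15, T))  # high frequency: the two laws nearly agree
```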
Quantum mechanics
–
Max Planck is considered the father of the quantum theory.
Quantum mechanics
–
Solution to Schrödinger's equation for the hydrogen atom at different energy levels. The brighter areas represent a higher probability of finding an electron
Quantum mechanics
–
The 1927 Solvay Conference in Brussels.
158.
Wave
–
In physics, a wave is an oscillation accompanied by a transfer of energy that travels through a medium. Frequency refers to the number of oscillations per unit time. Wave motion transfers energy from one point to another with little or no associated mass transport. Waves consist, instead, of oscillations or vibrations around almost fixed locations. There are two main types of waves. Mechanical waves propagate through a medium, and the substance of this medium is deformed. Restoring forces then reverse the deformation. For example, sound waves propagate via air molecules colliding with their neighbors. When the molecules collide, they also bounce away from each other. This keeps the molecules from continuing to travel in the direction of the wave. Electromagnetic waves, by contrast, do not require a medium; they can travel through a vacuum. These types include radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays. Waves are described by a wave equation which sets out how the disturbance proceeds over time. The mathematical form of this equation varies depending on the type of wave.
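In the simplest case, the wave equation takes the one-dimensional linear form (a standard result, with $c$ the propagation speed):

```latex
\frac{\partial^2 u}{\partial t^2} = c^2 \,\frac{\partial^2 u}{\partial x^2},
```

whose general solution, due to d'Alembert, is a superposition of two disturbances travelling in opposite directions:

```latex
u(x, t) = f(x - ct) + g(x + ct).
```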
Wave
–
Surface waves in water
Wave
–
Wavelength λ can be measured between any two corresponding points on a waveform
Wave
–
Light beam exhibiting reflection, refraction, transmission and dispersion when encountering a prism
159.
Field (physics)
–
In physics, a field is a physical quantity, typically a number or tensor, that has a value for each point in space and time. On a weather map, the surface wind velocity is described by assigning a vector to each point on the map. Each vector represents the direction of the movement of air at that point. When a test electric charge is placed in an electric field, the particle accelerates due to a force. This led physicists to consider electromagnetic fields to be a physical entity, making the field concept a supporting paradigm of the edifice of modern physics. In practice, the strength of most fields has been found to diminish with distance to the point of being undetectable. One consequence is that the Earth's gravitational field quickly becomes undetectable on cosmic scales. In fact, in quantum field theory an equivalent representation of a field is a field particle, namely a boson. To Isaac Newton, his law of universal gravitation simply expressed the gravitational force that acted between any pair of massive objects. In the eighteenth century, a new quantity, the gravitational field, was devised to simplify the bookkeeping of all these gravitational forces. The development of the independent concept of a field truly began with the development of the theory of electromagnetism. The independent nature of the field became more apparent with James Clerk Maxwell's discovery that waves in these fields propagated at a finite speed. Maxwell, at first, did not adopt the modern concept of a field as a fundamental quantity that could independently exist. Instead, he supposed that the electromagnetic field expressed the deformation of some underlying medium -- the luminiferous aether -- much like the tension in a membrane. If that were the case, the observed velocity of the electromagnetic waves should depend upon the velocity of the observer relative to the aether.
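The idea of a field assigning a value to each point, and of field strength diminishing with distance, can be sketched with the Newtonian gravitational field (Earth's mass here is a rounded illustrative value):

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # Earth's mass, kg (rounded, illustrative)

def g_field(r):
    """Magnitude of the Newtonian gravitational field at distance r
    from a point mass: an inverse-square falloff."""
    return G * M_EARTH / r**2

# The field assigns a value to every point; its strength falls as 1/r^2.
for r in (6.371e6, 6.371e7, 6.371e8):  # surface, 10x, 100x
    print(f"r = {r:.3e} m  ->  g = {g_field(r):.3e} m/s^2")
```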
Field (physics)
–
Illustration of the electric field surrounding a positive (red) and a negative (blue) charge.
160.
Gravity
–
Gravity, or gravitation, is a natural phenomenon by which all things with mass are brought toward one another, including planets, stars and galaxies. Since mass and energy are equivalent, all forms of energy, including light, also cause gravitation and are under the influence of it. On Earth, gravity causes the ocean tides. Gravity has an infinite range, although its effects become increasingly weaker as objects get farther away. General relativity describes gravity not as a force but as a consequence of the curvature of spacetime caused by the uneven distribution of mass. The most extreme example of this curvature of spacetime is a black hole, from which nothing can escape once past its event horizon, not even light. Gravity also results in gravitational time dilation, where time lapses more slowly at a lower gravitational potential. Gravity is the weakest of the four fundamental interactions of nature. As a consequence, gravity plays no significant role in determining the internal properties of everyday matter. On the other hand, gravity is the cause of the formation, shape and trajectory of astronomical bodies. While the European thinkers are rightly credited with the development of gravitational theory, there were pre-existing ideas which had identified the force of gravity. Later, the works of Brahmagupta referred to the presence of this force. Modern work on gravitational theory began in the late 16th and early 17th centuries. This was a major departure from Aristotle's belief that heavier objects have a higher gravitational acceleration. Galileo postulated air resistance as the reason that objects with less mass may fall slower in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity.
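The gravitational time dilation mentioned above can be illustrated with the weak-field Schwarzschild formula; a sketch with rounded values for Earth (the orbital radius is an assumed, approximate GPS value):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg (rounded)
c = 2.998e8     # speed of light, m/s

def dilation_factor(r):
    """Ratio of proper time to far-away coordinate time at radius r
    (Schwarzschild result; valid outside the mass)."""
    return math.sqrt(1 - 2 * G * M / (r * c**2))

surface = dilation_factor(6.371e6)  # clock at Earth's surface
orbit = dilation_factor(2.66e7)     # clock at roughly GPS orbital radius
# Clocks deeper in the potential (at the surface) tick slightly slower.
print(orbit > surface)  # True
```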
Gravity
–
Sir Isaac Newton, an English physicist who lived from 1642 to 1727
Gravity
–
Two-dimensional analogy of spacetime distortion generated by the mass of an object. Matter changes the geometry of spacetime, this (curved) geometry being interpreted as gravity. White lines do not represent the curvature of space but instead represent the coordinate system imposed on the curved spacetime, which would be rectilinear in a flat spacetime.
Gravity
–
Ball falling freely under gravity. See text for description.
161.
Electromagnetism
–
Electromagnetism is a branch of physics involving the study of the electromagnetic force, a type of physical interaction that occurs between electrically charged particles. The electromagnetic force is one of the four fundamental interactions in nature. The other three fundamental interactions are the strong interaction, the weak interaction and gravitation. The electromagnetic force plays a major role in determining the internal properties of most objects encountered in daily life. Ordinary matter is a manifestation of the electromagnetic force. The electromagnetic force governs the processes involved in chemistry, which arise from interactions between the electrons of neighboring atoms. There are numerous mathematical descriptions of the electromagnetic field. In classical electrodynamics, electric fields are described in terms of electric potential and electric current. Although electromagnetism is considered one of the four fundamental forces, at high energy the weak force and the electromagnetic force are unified as a single electroweak force. During the quark epoch, the unified force broke into the two separate forces as the universe cooled. Originally, electricity and magnetism were considered to be two separate forces. An electric current inside a wire creates a corresponding magnetic field outside the wire. Its direction depends on the direction of the current in the wire. While preparing for a lecture on 21 April 1820, Hans Christian Ørsted made a surprising observation: a compass needle deflected from magnetic north when the electric current from a battery was switched on and off. He soon began more intensive investigations.
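The magnetic field around a current-carrying wire described above follows from Ampère's law; a minimal sketch with illustrative values:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T m / A

def b_field(current, r):
    """Magnitude of the magnetic field a distance r from a long straight
    wire (Ampère's law); the field direction circles the wire per the
    right-hand rule."""
    return MU_0 * current / (2 * math.pi * r)

# Illustrative values: a 10 A current, field evaluated 5 cm away.
print(b_field(10, 0.05))  # 4e-05 tesla
```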
Electromagnetism
–
Lightning is an electrostatic discharge that travels between two charged regions.
Electromagnetism
–
Hans Christian Ørsted.
Electromagnetism
–
André-Marie Ampère
Electromagnetism
–
Michael Faraday
162.
Optics
–
Optics usually describes the behaviour of visible, ultraviolet, and infrared light. Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties. Most optical phenomena can be accounted for using the electromagnetic description of light. Electromagnetic descriptions of light are, however, often difficult to apply in practice. Practical optics is usually done using simplified models. The most common of these, geometric optics, treats light as a collection of rays that travel in straight lines and bend when they pass through or reflect from surfaces. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics. Historically, the ray-based model of light was developed first, followed by the wave model of light. Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation. Some phenomena depend on the fact that light has both wave-like and particle-like properties. Explanation of these effects requires quantum mechanics. When considering light's particle-like properties, the light is modelled as a collection of particles called "photons". Quantum optics deals with the application of quantum mechanics to optical systems. Optical science is relevant to and studied in related disciplines including astronomy, various engineering fields, photography, and medicine. Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors, lenses, telescopes, microscopes, lasers, and fibre optics. Optics began with the development of lenses by the ancient Egyptians and Mesopotamians.
Optics
–
Optics includes study of dispersion of light.
Optics
–
The Nimrud lens
Optics
–
Reproduction of a page of Ibn Sahl's manuscript showing his knowledge of the law of refraction, now known as Snell's law
Optics
–
Cover of the first edition of Newton's Opticks
163.
Geometrical optics
–
Geometrical optics, or ray optics, describes light propagation in terms of rays. The ray in geometric optics is an abstraction, useful in approximating the paths along which light propagates in certain classes of circumstances. Geometrical optics does not account for optical effects such as diffraction and interference. The techniques are particularly useful in describing geometrical aspects of imaging, including optical aberrations. A light ray is a line or curve perpendicular to the light's wavefronts. Geometrical optics is often simplified by making the "small angle approximation", or paraxial approximation. The mathematical behaviour then becomes linear, allowing optical components and systems to be described by simple matrices. Glossy surfaces such as mirrors reflect light in a simple, predictable way. This allows for production of reflected images that can be associated with an extrapolated location in space. With such surfaces, the direction of the reflected ray is determined by the angle the incident ray makes with the surface normal. This is known as the Law of Reflection. For flat mirrors, the image size is the same as the object size. The law also implies that mirror images are parity inverted, perceived as a left-right inversion. Mirrors with curved surfaces can be modeled by ray tracing, using the law of reflection at each point on the surface. For mirrors with parabolic surfaces, parallel rays incident on the mirror produce reflected rays that converge at a common focus. Other curved surfaces may also focus light, but with aberrations due to the diverging shape causing the focus to be smeared out in space.
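The small-angle (paraxial) matrices mentioned above can be sketched as follows; the focal length and ray height are illustrative assumptions:

```python
# Paraxial ray tracing with 2x2 transfer matrices.
# A ray is represented by its (height y, angle theta) relative to the axis.

def propagate(d):
    """Free-space travel over distance d."""
    return [[1, d], [0, 1]]

def thin_lens(f):
    """Thin lens of focal length f."""
    return [[1, 0], [-1.0 / f, 1]]

def apply(m, ray):
    y, th = ray
    return (m[0][0] * y + m[0][1] * th, m[1][0] * y + m[1][1] * th)

f = 0.1                        # a 100 mm lens (assumed)
ray = (0.01, 0.0)              # parallel ray 10 mm above the axis
ray = apply(thin_lens(f), ray)
ray = apply(propagate(f), ray)
print(ray[0])                  # ~0: the ray crosses the axis at the focal point
```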
Geometrical optics
–
As light travels through space, it oscillates in amplitude. In this image, each maximum amplitude crest is marked with a plane to illustrate the wavefront. The ray is the arrow perpendicular to these parallel surfaces.
164.
Physical optics
–
Physical optics, or wave optics, is the branch of optics that studies interference, diffraction, polarization, and other phenomena for which the ray approximation of geometric optics is not valid. This usage tends not to include effects such as quantum noise in optical communication, which is studied in the sub-branch of coherence theory. Physical optics is also the name of an approximation commonly used in optics and applied physics. In this context, it is an intermediate method between geometric optics, which ignores wave effects, and full wave electromagnetism, which is a precise theory. The word "physical" means that it is more physical than geometric or ray optics, not that it is an exact physical theory. This approximation resembles the Born approximation, in that the details of the problem are treated as a perturbation. In optics, it is a standard way of estimating diffraction effects. In radio, this approximation is used to estimate some effects that resemble optical effects. It models several interference, diffraction and polarization effects but not the dependence of diffraction on polarization. Since it is a high-frequency approximation, it is often more accurate in optics than for radio. In optics, it typically consists of integrating the ray-estimated field over a lens, mirror or aperture to calculate the transmitted or scattered field. In radar scattering, it usually means estimating the current on a scattering surface using geometric optics; current on the shadowed parts is taken as zero. The scattered field is then obtained by an integral over these approximate currents. This is useful for bodies with large smooth convex shapes and for lossy surfaces. The ray-estimated field or current is generally not accurate near edges or shadow boundaries, unless supplemented by diffraction and creeping wave calculations. The standard theory of physical optics has some defects in the evaluation of scattered fields, leading to decreased accuracy away from the specular direction.
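The "integrate the ray-estimated field over an aperture" recipe can be sketched for a single slit, where it reproduces the familiar Fraunhofer diffraction pattern; the slit width, wavelength, and angle below are illustrative assumptions:

```python
import cmath
import math

wavelength = 500e-9  # m (assumed)
a = 10e-6            # slit width, m (assumed)
k = 2 * math.pi / wavelength

def integrated_intensity(theta, n=2000):
    """Far-field intensity from summing phased contributions across the
    slit: the physical-optics style aperture integral, done numerically."""
    dx = a / n
    field = sum(cmath.exp(-1j * k * math.sin(theta) * (i + 0.5) * dx) * dx
                for i in range(n))
    return abs(field / a) ** 2

def analytic_intensity(theta):
    """Closed-form Fraunhofer single-slit result, (sin(beta)/beta)^2."""
    beta = 0.5 * k * a * math.sin(theta)
    return (math.sin(beta) / beta) ** 2 if beta else 1.0

theta = 0.02  # rad (assumed observation angle)
print(integrated_intensity(theta), analytic_intensity(theta))  # nearly equal
```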
Physical optics
–
Physical optics is used to explain effects such as diffraction.
165.
Nonlinear optics
–
Nonlinear optics is the branch of optics that describes the behaviour of light in nonlinear media, in which the polarization responds nonlinearly to the electric field of the light. The nonlinearity is typically observed only at very high light intensities such as those provided by lasers. Above the Schwinger limit, the vacuum itself is expected to become nonlinear. In nonlinear optics, the superposition principle no longer holds. However, some nonlinear effects were discovered before the development of the laser. The theoretical basis for many nonlinear processes was first described in Bloembergen's monograph "Nonlinear Optics". Nonlinear optics explains the nonlinear response of properties such as the frequency, polarization and path of incident light. Nonlinear effects include: third-harmonic generation, the generation of light with a tripled frequency, in which three photons are destroyed, creating a single photon at three times the frequency; high-harmonic generation, the generation of light with frequencies much greater than the original; sum-frequency generation, the generation of light with a frequency that is the sum of two other frequencies; difference-frequency generation, the generation of light with a frequency that is the difference between two other frequencies; optical parametric amplification, the amplification of a signal input in the presence of a higher-frequency pump wave, at the same time generating an idler wave; optical parametric oscillation, the generation of a signal and idler wave using a parametric amplifier in a resonator; optical parametric generation, like parametric oscillation but without a resonator, using a very high gain instead; spontaneous parametric down-conversion, the amplification of vacuum fluctuations in the low-gain regime; and optical rectification, the generation of quasi-static electric fields.
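Sum-frequency generation, for instance, follows from photon energy conservation; a minimal sketch in terms of vacuum wavelengths (the input wavelengths are illustrative assumptions, a Nd:YAG fundamental and its second harmonic):

```python
def sum_frequency(lam1_nm, lam2_nm):
    """Wavelength of sum-frequency light: omega3 = omega1 + omega2,
    so 1/lambda3 = 1/lambda1 + 1/lambda2 for vacuum wavelengths."""
    return 1.0 / (1.0 / lam1_nm + 1.0 / lam2_nm)

# Mixing 1064 nm with its second harmonic at 532 nm (assumed example values):
lam3 = sum_frequency(1064.0, 532.0)
print(lam3)  # ~354.7 nm, the third harmonic of 1064 nm
```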
Nonlinear optics
–
Reversal of Linear Momentum and Angular Momentum in Phase Conjugating Mirror.
Nonlinear optics
–
Dark-Red Gallium Selenide in its bulk form.
166.
Quantum field theory
–
QFT treats particles as excited states of the underlying physical field, so these are called field quanta. In quantum field theory, quantum mechanical interactions among particles are described by interaction terms among the corresponding underlying quantum fields. These interactions are conveniently visualized by Feynman diagrams, which are a formal tool of relativistically covariant perturbation theory, serving to evaluate particle processes. The first achievement of quantum field theory, namely quantum electrodynamics, is "still the paradigmatic example of a successful quantum field theory". Ordinary quantum mechanics cannot give an account of photons, which constitute the prime case of relativistic 'particles'. The formalism of QFT is needed for an explicit description of photons. However, early quantum mechanics did not focus much on problems of radiation. As soon as the conceptual framework of quantum mechanics was developed, a small group of theoreticians tried to extend quantum methods to electromagnetic fields. A good example is the famous paper by Born, Jordan & Heisenberg. The ideas of quantum mechanics were thus extended to systems having an infinite number of degrees of freedom, that is, an infinite array of quantum oscillators. The inception of QFT is usually considered to be Dirac's famous 1927 paper on "The quantum theory of the emission and absorption of radiation". Here Dirac coined the name quantum electrodynamics for the part of QFT that was developed first. Employing the theory of the quantum oscillator, Dirac gave a theoretical description of how photons appear in the quantization of the electromagnetic radiation field. Later, Dirac's procedure became a model for the quantization of other fields as well. These first approaches to QFT were further developed during the following three years.
167.
Theory of relativity
–
The theory of relativity usually encompasses two interrelated theories by Albert Einstein: special relativity and general relativity. Special relativity applies to elementary particles and their interactions, describing all their physical phenomena except gravity. General relativity explains the law of gravitation and its relation to other forces of nature. It applies to the cosmological and astrophysical realm, including astronomy. The theory transformed theoretical physics and astronomy during the 20th century, superseding a 200-year-old theory of mechanics created primarily by Isaac Newton. It introduced concepts including spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions, along with ushering in the nuclear age. With relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes and gravitational waves. Einstein published the theory of special relativity in 1905; Max Planck, Hermann Minkowski and others did subsequent work. Einstein developed general relativity between 1907 and 1915, with contributions by many others after 1915. The final form of general relativity was published in 1916. In the discussion section of a 1906 paper by Max Planck, Alfred Bucherer used for the first time the expression "theory of relativity". By the 1920s, the physics community understood and accepted special relativity. It rapidly became a necessary tool for theorists and experimentalists in the new fields of atomic physics, nuclear physics and quantum mechanics. By comparison, general relativity did not appear to be as useful, beyond making minor corrections to predictions of Newtonian theory.
Theory of relativity
–
USSR stamp dedicated to Albert Einstein
Theory of relativity
–
Key concepts
168.
Special relativity
–
In physics, special relativity is the generally accepted and experimentally well-confirmed physical theory regarding the relationship between space and time. In Albert Einstein's original pedagogical treatment, it is based on two postulates: that the laws of physics are invariant in all inertial systems; and that the speed of light in a vacuum is the same for all observers, regardless of the motion of the light source. It was originally proposed in 1905 by Albert Einstein in the paper "On the Electrodynamics of Moving Bodies". As of today, special relativity is the most accurate model of motion at any speed. Even so, the Newtonian mechanics model is still useful as an approximation at small velocities relative to the speed of light. It has replaced the conventional notion of an absolute universal time with the notion of a time that is dependent on reference frame and spatial position. Rather than an invariant time interval between two events, there is an invariant spacetime interval. A defining feature of special relativity is the replacement of the Galilean transformations of Newtonian mechanics with the Lorentz transformations. Time and space cannot be defined separately from each other. Rather, space and time are interwoven into a single continuum known as spacetime. Events that occur at the same time for one observer can occur at different times for another. The theory is "special" in that it only applies in the special case where the curvature of spacetime due to gravity is negligible. In order to include gravity, Einstein formulated general relativity in 1915. Special relativity, contrary to some outdated descriptions, is capable of handling accelerations as well as accelerated frames of reference.
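The invariance of the spacetime interval under a Lorentz transformation can be checked directly; a minimal sketch in units where c = 1 (the event coordinates and boost velocity are arbitrary illustrative values):

```python
import math

def lorentz(t, x, v, c=1.0):
    """Boost an event (t, x) into a frame moving at velocity v along x."""
    gamma = 1.0 / math.sqrt(1 - (v / c) ** 2)
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

def interval(t, x, c=1.0):
    """The invariant spacetime interval (squared) between the event
    and the origin."""
    return (c * t) ** 2 - x ** 2

t, x = 3.0, 2.0                      # an event, in units where c = 1
tp, xp = lorentz(t, x, v=0.6)
print(interval(t, x), interval(tp, xp))  # equal: the interval is invariant
```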
Special relativity
–
Albert Einstein around 1905, the year his "Annus Mirabilis papers" – which included Zur Elektrodynamik bewegter Körper, the paper founding special relativity – were published.
169.
General relativity
–
General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of partial differential equations. Some predictions of general relativity differ significantly from those of classical physics; examples of such differences include gravitational time dilation, gravitational lensing and the gravitational time delay. The predictions of general relativity have been confirmed in all observations and experiments to date. Although general relativity is not the only relativistic theory of gravity, it is the simplest theory that is consistent with experimental data. Einstein's theory has important astrophysical implications. General relativity also predicts the existence of gravitational waves, which have since been observed directly by the physics collaboration LIGO. In addition, general relativity is the basis of cosmological models of a consistently expanding universe. Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his relativistic framework. The Einstein field equations are very difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But as early as 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse and the objects known today as black holes. In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology.
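For the Schwarzschild solution mentioned above, the horizon radius of a non-rotating black hole follows from r_s = 2GM/c²; a sketch using the 10-solar-mass example from the figure caption (constants are rounded):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg (rounded)

def schwarzschild_radius(m):
    """Event-horizon radius of a non-rotating black hole of mass m."""
    return 2 * G * m / c**2

r_s = schwarzschild_radius(10 * M_SUN)
print(r_s / 1000)  # ~29.5 km for a 10-solar-mass black hole
```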
General relativity
–
A simulated black hole of 10 solar masses within the Milky Way, seen from a distance of 600 kilometers.
General relativity
–
Albert Einstein developed the theories of special and general relativity. Picture from 1921.
General relativity
–
Einstein cross: four images of the same astronomical object, produced by a gravitational lens
General relativity
–
Artist's impression of the space-borne gravitational wave detector LISA
170.
Accelerator physics
–
Accelerator physics is a branch of applied physics, concerned with designing, building and operating particle accelerators. It is also related to other fields: microwave engineering, for acceleration and deflection structures in the radio frequency range; optics, with an emphasis on geometrical optics and laser physics; and computer technology, with an emphasis on digital signal processing, e.g. for automated manipulation of the particle beam. Furthermore, because electrostatic fields are conservative, the maximum voltage limits the kinetic energy that can be imparted to the particles. To circumvent this problem, linear particle accelerators operate using time-varying fields. The space around a particle beam is evacuated, requiring the beam to be enclosed in a vacuum chamber. The walls of the chamber present an electromagnetic impedance to the beam; this may be in the form of a resistive impedance or an inductive/capacitive impedance. These impedances will induce wakefields that can interact with later particles. Since this interaction may have negative effects, it is studied to determine its magnitude and to determine any actions that may be taken to mitigate it. In most accelerator concepts, electric and magnetic fields are applied with different functions: electric fields for acceleration, magnetic fields for steering and focusing the beam. An important step in the development of these types of accelerators was the understanding of strong focusing. The general equations of motion originate from relativistic Hamiltonian mechanics, in almost all cases using the paraxial approximation. There are many different software packages available for modeling the different aspects of accelerator physics. One must model the elements that create the electric and magnetic fields, and then one must model the charged particle evolution within those fields.
Accelerator physics
–
Superconducting niobium cavity for acceleration of ultrarelativistic particles from the TESLA project
171.
Acoustics
–
The application of acoustics is present in almost all aspects of modern society, with the most obvious being the audio and noise control industries. Accordingly, the science of acoustics spreads across many facets of human society -- music, medicine, architecture and more. Likewise, animal species such as songbirds and frogs use sound and hearing as a key element of mating rituals or for marking territories. Art, craft, science and technology have provoked one another to advance the whole, as in many other fields of knowledge. Robert Bruce Lindsay's 'Wheel of Acoustics' is a well accepted overview of the various fields in acoustics. The Latin synonym is "sonic", after which the term sonics used to be a synonym for acoustics and later a branch of acoustics. Frequencies above and below the audible range are called "ultrasonic" and "infrasonic", respectively. The ancient Greeks observed, for example, that a string of a certain length would sound particularly harmonious with a string of twice the length. In modern parlance, if a string sounds the note C when plucked, a string twice as long will sound a C an octave lower. The physical understanding of acoustical processes advanced rapidly during and after the Scientific Revolution. Mainly Galileo Galilei but also Marin Mersenne, independently, discovered the complete laws of vibrating strings. Experimental measurements of the speed of sound in air were carried out successfully by a number of investigators, prominently Mersenne. Meanwhile, Newton derived the relationship for wave velocity in solids, a cornerstone of physical acoustics. The eighteenth century saw major advances in acoustics as mathematicians applied the new techniques of calculus to elaborate theories of sound wave propagation. In the nineteenth century, Wheatstone, Ohm and Henry developed the analogy between electricity and acoustics.
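The octave relation described above follows from the fundamental frequency of an ideal string fixed at both ends, f = v/(2L); a sketch (the wave speed and lengths are illustrative assumptions, chosen so the shorter string sounds near middle C):

```python
def fundamental(length_m, wave_speed=329.6):
    """Fundamental frequency of an ideal string fixed at both ends:
    f = v / (2L). The wave speed here is an illustrative assumption."""
    return wave_speed / (2 * length_m)

f1 = fundamental(0.63)  # a string of some length: ~261.6 Hz, near middle C
f2 = fundamental(1.26)  # a string twice as long
print(f1 / f2)          # 2.0: doubling the length drops the pitch an octave
```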
Acoustics
–
Principles of acoustics were applied since ancient times: Roman theatre in the city of Amman.
Acoustics
–
Artificial omni-directional sound source in an anechoic chamber
Acoustics
–
Jay Pritzker Pavilion
172.
Nuclear astrophysics
–
In general terms, nuclear astrophysics aims to understand the energy generation in stars. The conversion of nuclear mass to radiative energy is the source of energy that allows stars to shine for up to billions of years. A scientific theory must be predictive in order to have any merit. The theory of stellar nucleosynthesis has been well-tested by experiment since the theory was first formulated. These observations have far-reaching implications. Although the foundations of the science are bona fide, there are still many remaining open questions. See also: Nuclear physics, Astrophysics, Nucleosynthesis, Abundance of the chemical elements, Joint Institute for Nuclear Astrophysics.
173.
Heliophysics
–
The term heliophysics means "physics of the Sun", and it appears to have been used only in that sense until quite recently. Usage was extended explicitly in 1981 to its literal meaning, denoting the physics of the entire Sun, from center to corona. As such it was a direct translation from the French héliophysique, introduced to provide a distinction from physique solaire (solar physics). It thus became a subdiscipline of heliology. Heliophysics combines several other disciplines, including solar physics and stellar physics in general, as well as several branches of nuclear physics, plasma physics and space physics. The recent extension of heliophysics is closely tied to the phenomena that affect it, and consequently to climatology. "Heliophysics" is now the name of one of four divisions within NASA's Science Mission Directorate. The title was used to simplify the name of the "Sun--Solar-System Connections" Division. NASA's restricted use of the term heliophysics has also been adopted in naming the International Heliophysical Year in 2007-2008.
Heliophysics
–
Current and future Heliophysics System Observatory missions in their approximate regions of study.
Heliophysics
–
Heliophysics flight program timeline
Heliophysics
–
Internal structure
174.
Solar physics
–
Solar physics is the branch of astrophysics that specializes in the study of the Sun. It deals with detailed measurements that are possible only for our closest star. Because the Sun is uniquely situated for close-range observing, there is a split between solar physics and the related discipline of observational astrophysics of distant stars. The Sun also provides a "physical laboratory" for the study of plasma physics. The Babylonians were keeping a record of solar eclipses, with the oldest record originating from the ancient city of Ugarit, in modern-day Syria. This record dates to about 1300 BC. Ancient Chinese astronomers were also observing solar phenomena with the purpose of keeping track of calendars, which were based on lunar and solar cycles. Unfortunately, records kept before 720 BC are very vague and offer no useful information. However, after 720 BC, 37 solar eclipses were noted over the course of 240 years. Astronomical knowledge flourished in the Islamic world during medieval times. Many observatories were built, including in Baghdad, where astronomical observations were taken. Particularly, a few solar parameters were measured and detailed observations of the Sun were taken. Solar observations were taken with the purpose of navigation, but mostly for timekeeping. Islam requires its followers to pray five times a day, at specific positions of the Sun in the sky. As such, accurate observations of the Sun and its trajectory on the sky were needed.
Solar physics
–
The SDO satellite
Solar physics
–
Internal structure
175.
Atomic, molecular, and optical physics
–
The three areas are closely interrelated. AMO theory includes classical, semi-classical and quantum treatments. The term atomic physics is often associated with nuclear power and nuclear bombs, due to the synonymous use of atomic and nuclear in standard English. The important experimental techniques are the various types of spectroscopy. Molecular physics, while closely related to atomic physics, also overlaps greatly with chemical physics. Both subfields are primarily concerned with electronic structure and the dynamical processes by which these arrangements change. Generally this work involves using quantum mechanics. For molecular physics, this approach is known as quantum chemistry. One important aspect of molecular physics is that the atomic orbital theory in the field of atomic physics expands to the molecular orbital theory. Molecular physics is concerned with atomic processes in molecules, but it is additionally concerned with effects due to the molecular structure. In addition to the electronic excitation states which are known from atoms, molecules are able to rotate and to vibrate. These rotations and vibrations are quantized; there are discrete energy levels. The smallest energy differences exist between rotational states; therefore rotational spectra are in the far infrared region of the electromagnetic spectrum. Spectra resulting from electronic transitions are mostly in the visible and ultraviolet regions. From measuring rotational and vibrational spectra, properties of molecules like the distance between the nuclei can be calculated.
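The quantized rotational levels described above give, for a rigid rotor, a ladder of equally spaced absorption lines; a sketch (the rotational constant is an assumed, roughly CO-like value):

```python
B = 57.6e9  # Hz; assumed rotational constant, roughly that of CO

def line_frequency(J):
    """Frequency of the rigid-rotor rotational transition J -> J+1.
    From E_J = h*B*J(J+1), the line sits at nu = 2B(J+1)."""
    return 2 * B * (J + 1)

lines = [line_frequency(J) for J in range(4)]
spacings = [b - a for a, b in zip(lines, lines[1:])]
print([f / 1e9 for f in lines])  # in GHz: an equally spaced ladder
print(spacings[0] / 1e9)         # each spacing equals 2B
```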
Atomic, molecular, and optical physics
–
An optical lattice formed by laser interference. Optical lattices are used to simulate interacting condensed matter systems.
176.
Computational physics
–
Computational physics is the study and implementation of numerical analysis to solve problems in physics for which a quantitative theory already exists. Historically, computational physics was the first application of modern computers in science, and it is now a subset of computational science. In physics, different theories based on mathematical models provide very precise predictions on how systems behave. Unfortunately, it is often the case that solving the mathematical model for a particular system in order to produce a useful prediction is not feasible. This can occur, for instance, when the solution does not have a closed-form expression, or is too complicated. In such cases, numerical approximations are required. There is a debate about the status of computation within the scientific method. While computers can be used in experiments for the recording of data, this clearly does not constitute a computational approach. Physics problems are in general very difficult to solve exactly. This is due to several reasons: lack of algebraic and/or analytic solubility, complexity, and chaos. On the more advanced side, mathematical perturbation theory is also sometimes used. In addition, the computational cost and complexity for many-body problems tend to grow quickly. A macroscopic system typically has a size of the order of 10²³ constituent particles, so it is somewhat of a problem. Solving quantum mechanical problems is generally of exponential order in the size of the system; for classical N-body problems it is of order N-squared. Because computational physics covers a broad class of problems, it is generally divided amongst the mathematical problems it numerically solves, or the methods it applies.
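The order-N² cost of the classical N-body problem comes from the pairwise force loop; a minimal direct-summation sketch with illustrative masses and positions:

```python
# Direct-summation gravitational N-body acceleration: the textbook O(N^2)
# pairwise loop that makes large N expensive. Values below are illustrative.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accelerations(masses, positions):
    """Acceleration of each body from all the others (two nested loops)."""
    n = len(masses)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx)
            f = G * masses[j] / r2 ** 1.5
            for k in range(3):
                acc[i][k] += f * dx[k]
    return acc

m = [1e24, 2e24, 3e24]
p = [[0.0, 0.0, 0.0], [1e8, 0.0, 0.0], [0.0, 1e8, 0.0]]
a = accelerations(m, p)
# Sanity check via Newton's third law: the total force sum(m_i * a_i) vanishes.
net = [sum(m[i] * a[i][k] for i in range(3)) for k in range(3)]
print(net)  # each component ~0 up to rounding
```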
177.
Condensed matter physics
–
Condensed matter physics is a branch of physics that deals with the physical properties of condensed phases of matter, where particles adhere to each other. Condensed matter physicists seek to understand the behavior of these phases by using physical laws. In particular, these include the laws of quantum mechanics, electromagnetism and statistical mechanics. The field overlaps with chemistry, materials science and nanotechnology, and relates closely to atomic physics and biophysics. Theoretical condensed matter physics shares important concepts and techniques with theoretical particle and nuclear physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. References to the "condensed" state can be traced to earlier sources; one early usage argued that "as a matter of fact, it would be more correct to unify them under the title of 'condensed bodies'". One of the first studies of condensed states of matter was by English chemist Humphry Davy, in the first decades of the nineteenth century. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility and high electrical and thermal conductivity. This indicated that the atoms in Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure. By 1908, James Dewar and H. Kamerlingh Onnes were successfully able to liquefy hydrogen and the then newly discovered helium, respectively. Paul Drude in 1900 proposed the first theoretical model for a classical electron moving through a metallic solid. Onnes went on to discover superconductivity in mercury in 1911; the phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades. Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld, Felix Bloch and other physicists.
Condensed matter physics
–
Heike Kamerlingh Onnes and Johannes van der Waals with the helium "liquefactor" in Leiden (1908)
Condensed matter physics
–
A replica of the first point-contact transistor in Bell labs
Condensed matter physics
–
Computer simulation of "nanogears" made of fullerene molecules. It is hoped that advances in nanoscience will lead to machines working on the molecular scale.
178.
Mesoscopic physics
–
Mesoscopic physics is a subdiscipline of condensed matter physics that deals with materials of an intermediate size. The scale of these materials can be described as being between that of a small quantity of atoms and that of materials measuring micrometres. The lower limit can also be defined as being the size of individual atoms. At the micrometre level are bulk materials. Both mesoscopic and macroscopic objects contain a large number of atoms, but whereas a macroscopic object is well described by average properties obeying classical mechanics, a mesoscopic object is affected by fluctuations around the average and may require quantum-mechanical modeling. In other words, a macroscopic device, when scaled down to a meso-size, starts revealing quantum mechanical properties. For example, at the macroscopic level the conductance of a wire increases continuously with its diameter. However, at the mesoscopic level, the wire's conductance is quantized: the increases occur in discrete, whole steps. The applied science of mesoscopic physics deals with the potential of building nanodevices. Mesoscopic physics also addresses fundamental practical problems which occur when a macroscopic object is miniaturized, as with the miniaturization of transistors in semiconductor electronics. The physical properties of materials change as their size approaches the nanoscale, where the percentage of atoms at the surface of the material becomes significant. The subdiscipline has dealt primarily with artificial structures of metal or semiconducting material which have been fabricated by the techniques employed for producing microelectronic circuits. Thus, mesoscopic physics has a close connection to the fields of nanofabrication and nanotechnology. Devices used in nanotechnology are examples of mesoscopic systems. Three categories of new phenomena in such systems are interference effects, quantum confinement effects and charging effects.
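The quantized conductance steps mentioned above are usually described in the Landauer picture, where each fully open transverse channel contributes one conductance quantum, G0 = 2e²/h. A minimal sketch of that staircase, using only the defined values of the fundamental constants:

```python
# Conductance quantization in the Landauer picture: each fully open
# channel of a ballistic conductor contributes one conductance quantum
# G0 = 2e^2/h. As a mesoscopic wire widens, channels open one by one
# and the conductance climbs in discrete whole steps of G0.

E_CHARGE = 1.602176634e-19  # elementary charge, C (exact, SI definition)
H_PLANCK = 6.62607015e-34   # Planck constant, J s (exact, SI definition)

G0 = 2 * E_CHARGE**2 / H_PLANCK  # conductance quantum, ~7.75e-5 siemens

def conductance(n_channels):
    """Total conductance of an ideal ballistic conductor, n open channels."""
    return n_channels * G0

# The staircase: 1, 2, 3, ... open channels give G0, 2*G0, 3*G0, ...
staircase = [conductance(n) for n in range(1, 5)]
```

The step height is set entirely by fundamental constants, which is why conductance quantization is observed with the same value in very different mesoscopic devices.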
179.
Solid-state physics
–
Solid-state physics is the study of rigid matter, or solids, through methods such as quantum mechanics, crystallography, electromagnetism and metallurgy. It is the largest branch of condensed matter physics. Solid-state physics studies how the large-scale properties of solid materials result from their atomic-scale properties. Thus, solid-state physics forms a theoretical basis of materials science. It also has direct applications, for example in the technology of transistors and semiconductors. Solid materials are formed from densely packed atoms, which interact intensely. These interactions produce the mechanical, thermal, electrical, magnetic and optical properties of solids. Depending on the material involved and the conditions in which it was formed, the atoms may be arranged in a regular, geometric pattern (crystalline solids) or irregularly (amorphous solids). The bulk of solid-state physics, as a general theory, is focused on crystals. Primarily, this is because the periodicity of atoms in a crystal — its defining characteristic — facilitates mathematical modeling. Likewise, crystalline materials often have electrical, magnetic, optical or mechanical properties that can be exploited for engineering purposes. The forces between the atoms in a crystal can take a variety of forms. In a crystal of sodium chloride, for example, the crystal is made up of sodium and chloride ions held together with ionic bonds. In others, the atoms share electrons and form covalent bonds. In metals, electrons are shared amongst the whole crystal in metallic bonding.
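The periodicity that makes crystals tractable can be illustrated by generating the sites of a simple cubic lattice, R = a·(n1, n2, n3) for integers n_i. This is a sketch with an arbitrary lattice constant, not a crystallographic tool.

```python
import itertools

# Sites of a simple cubic lattice: every site is R = a*(n1, n2, n3)
# with integer n_i. One rule generates every site -- this periodicity
# is what makes crystals amenable to mathematical modeling.

A_LATTICE = 1.0  # lattice constant, arbitrary units (illustrative)

def cubic_sites(n, a=A_LATTICE):
    """All lattice sites of an n x n x n simple cubic block."""
    return [(a * i, a * j, a * k)
            for i, j, k in itertools.product(range(n), repeat=3)]

def nearest_neighbors(site, sites, a=A_LATTICE):
    """Sites exactly one lattice constant away (6 for a bulk site)."""
    return [s for s in sites
            if abs(sum((u - v) ** 2 for u, v in zip(site, s)) - a * a) < 1e-12]

sites = cubic_sites(3)                                  # 27 sites
center_nn = nearest_neighbors((1.0, 1.0, 1.0), sites)   # bulk site: 6 neighbors
```

The coordination number drops from 6 in the bulk to 3 at a corner, a tiny example of how surfaces break the perfect periodicity on which the general theory is built.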
Solid-state physics
–
An example of a simple cubic lattice
180.
Soft matter
–
Soft matter is a subfield of condensed matter comprising a variety of physical states that are easily deformed by thermal stresses or thermal fluctuations. These include liquids, colloids, polymers, foams and a number of biological materials. These materials share a common feature in that predominant physical behaviors occur at an energy scale comparable with room-temperature thermal energy. At these temperatures, quantum aspects are generally unimportant. The French physicist Pierre-Gilles de Gennes, sometimes called the founding father of the field, is especially noted for inventing the concept of reptation. Interesting behaviors arise from soft matter in ways that cannot be predicted, or are difficult to predict, directly from its atomic or molecular constituents. The properties and interactions of these mesoscopic structures may determine the macroscopic behavior of the material. Soft materials are important in a wide range of technological applications. They may appear as structural and packaging materials, foams and adhesives, detergents and cosmetics, paints, etc. In addition, a number of biological materials are classifiable as soft matter. Liquid crystals, another category of soft matter, exhibit a responsivity to electric fields that makes them very important as materials in display devices. These properties lead to metastable states. Soft materials, such as polymers and lipids, have found applications in nanotechnology as well. An important part of soft condensed matter research is biophysics. Soft condensed matter biophysics may be diverging into two distinct directions: a physical chemistry approach and a complex systems approach.
181.
Mathematical physics
–
Mathematical physics refers to the development of mathematical methods for application to problems in physics. It is a branch of applied mathematics, but deals with physical problems. There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods. The first is the rigorous, abstract and advanced re-formulation of Newtonian mechanics adopting Lagrangian mechanics and Hamiltonian mechanics, even in the presence of constraints. Both formulations are embodied in analytical mechanics. Moreover, they have provided basic ideas in geometry. The theory of partial differential equations is perhaps the branch most closely associated with mathematical physics. These equations were developed intensively from the second half of the eighteenth century until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics and aerodynamics. It has connections to molecular physics. Quantum information theory is another subspecialty. The special and general theories of relativity require a rather different type of mathematics. This was group theory, which played an important role in both quantum field theory and differential geometry. This was, however, gradually supplemented by topology and functional analysis in the mathematical description of cosmological as well as quantum field theory phenomena. In this area both homological algebra and category theory are important nowadays.
Mathematical physics
–
An example of mathematical physics: solutions of Schrödinger's equation for quantum harmonic oscillators (left) with their amplitudes (right).
182.
Nuclear physics
–
Nuclear physics is the field of physics that studies atomic nuclei and their constituents and interactions. The field of particle physics evolved out of nuclear physics and is typically taught in close association with nuclear physics. The history of nuclear physics as a discipline distinct from atomic physics starts with the discovery of radioactivity by Henri Becquerel in 1896. The discovery of the electron by J. J. Thomson a year later was an indication that the atom had internal structure. By the turn of the century physicists had also discovered three types of radiation emanating from atoms, which they named alpha, beta, and gamma radiation. Experiments by Otto Hahn in 1911 and by James Chadwick in 1914 discovered that the beta decay spectrum was continuous rather than discrete. This was a problem for nuclear physics at the time, because it seemed to indicate that energy was not conserved in these decays. Rutherford was awarded the Nobel Prize in Chemistry in 1908 for his "investigations into the disintegration of the elements and the chemistry of radioactive substances". In 1905 Albert Einstein formulated the idea of mass–energy equivalence. In 1906 Ernest Rutherford published "Retardation of the α Particle from Radium in passing through matter." Greatly expanded work was published by Geiger. In 1911–1912 Rutherford went before the Royal Society to explain the experiments and propound the new theory of the atomic nucleus as we now understand it. The plum pudding model had predicted that the alpha particles should come out of the foil with their trajectories being at most slightly bent; instead, some were scattered through large angles. Rutherford likened it to firing a bullet at tissue paper and having it bounce off. The Rutherford model worked quite well until studies of nuclear spin were carried out by Franco Rasetti at the California Institute of Technology in 1929. By 1925 it was known that protons and electrons each had a spin of 1⁄2.
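Einstein's mass–energy equivalence mentioned above, E = mc², is the reason nuclear processes release so much more energy per unit mass than chemical ones. A quick numerical sketch:

```python
# Mass-energy equivalence: E = m * c^2. Even a tiny mass defect in a
# nuclear reaction corresponds to an enormous energy release.

C_LIGHT = 299_792_458.0  # speed of light in vacuum, m/s (exact by definition)

def rest_energy(mass_kg):
    """Energy equivalent of a rest mass, in joules."""
    return mass_kg * C_LIGHT**2

# One gram of mass is equivalent to about 9e13 J, roughly the energy
# released by some 21 kilotons of TNT.
energy_one_gram = rest_energy(1e-3)
```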
183.
Particle physics
–
Particle physics is the branch of physics that studies the nature of the particles that constitute matter and radiation. By our current understanding, these elementary particles are excitations of the quantum fields that also govern their interactions. The currently dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model. In more technical terms, they are described by quantum state vectors in a Hilbert space, which is also treated in quantum field theory. Their interactions observed to date can be described entirely by a quantum field theory called the Standard Model. The Standard Model, as currently formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature and that a more fundamental theory awaits discovery. In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model. The idea that all matter is composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. Throughout the 1950s and 1960s, a bewildering variety of particles were found in scattering experiments; this was referred to informally as the "particle zoo". The current state of the classification of all elementary particles is explained by the Standard Model.
Particle physics
–
Large Hadron Collider tunnel at CERN
184.
Biomechanics
–
Biomechanics is closely related to engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems. However, biological systems are usually much more complex than man-made systems. Numerical methods are hence applied in almost every biomechanical study. Research is done in an iterative process of hypothesis and verification, including several steps of modeling, computer simulation and experimental measurements. Methods from clinical neurophysiology are commonly used in sports biomechanics. Biomechanics in sports can be stated as the muscular, joint and skeletal actions of the body during the execution of a given task, skill or technique. Proper understanding of biomechanics relating to sports skill has the greatest implications for sport performance, rehabilitation and injury prevention, along with sport mastery. As noted by Dr. Michael Yessis, one could say that the best athlete is the one that executes his or her skill the best. The mechanical analysis of biomaterials and biofluids is usually carried out with the concepts of continuum mechanics. This assumption breaks down when the length scales of interest approach the order of the microstructural details of the material. One of the most remarkable characteristics of biomaterials is their hierarchical structure. Biomaterials are classified in two groups: hard and soft tissues. Mechanical deformation of hard tissues may be analysed with the theory of linear elasticity. On the other hand, soft tissues usually undergo large deformations, and thus their analysis relies on finite strain theory and computer simulations.
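The linear-elastic analysis mentioned for hard tissues reduces, in the simplest uniaxial case, to Hooke's law, σ = E·ε. The Young's modulus below is an order-of-magnitude figure for cortical bone, used here only for illustration.

```python
# Uniaxial linear elasticity (Hooke's law): stress = E * strain.
# Adequate for hard tissues at small strains; soft tissues undergo
# large deformations and need finite-strain theory instead.

E_BONE = 17e9  # Young's modulus, Pa -- order of magnitude for cortical bone

def stress(strain, youngs_modulus=E_BONE):
    """Uniaxial stress (Pa) produced by a given small strain."""
    return youngs_modulus * strain

# A 0.1% strain in bone corresponds to a stress of about 17 MPa.
sigma = stress(0.001)
```

The linear model is only the first rung of the hierarchy: once strains grow beyond a few percent, as in tendon or skin, the stress-strain relation becomes strongly nonlinear and this one-line law no longer applies.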
Biomechanics
–
Page of one of the first works of Biomechanics (De Motu Animalium of Giovanni Alfonso Borelli) in the 17th century
Biomechanics
–
Red blood cells
Biomechanics
–
Chinstrap penguin leaping over water
185.
Health physics
–
Health physics is the applied physics of radiation protection for health and health care purposes. It is the science concerned with the control of health hazards to permit the safe use and application of ionizing radiation. Health physics professionals promote excellence in the practice of radiation safety. Practical measurement is essential for health physics. It enables the evaluation of protection measures and the assessment of the radiation dose likely, or actually, received by individuals. The provision of such instruments is normally controlled by law; in the UK it is the Ionising Radiations Regulations 1999. The measuring instruments for radiation protection are both installed (in a fixed position) and portable. Installed instruments are fixed in positions which are known to be important in assessing the general radiation hazard in an area. Examples are installed "area" radiation monitors and airborne contamination monitors. Airborne contamination monitors measure the concentration of radioactive particles in the atmosphere to guard against radioactive particles being deposited in the lungs of personnel. Personnel exit monitors are used to monitor workers who are exiting a "contamination controlled" or potentially contaminated area. These can be in the form of hand-held probes or whole body monitors. They monitor the surface of the worker's body and clothing to check if any radioactive contamination has been deposited. These generally measure alpha or beta or gamma radiation, or combinations of these.
Health physics
–
1947 Oak Ridge National Laboratory poster.
186.
Laser medicine
Laser medicine
–
CW rhodamine dye laser emitting near 590 nm, one typically used in early medical laser systems.
Laser medicine
–
Laser radiation being delivered, via a fiber, for photodynamic therapy to treat cancer.
Laser medicine
–
A 40 watt CO2 laser with applications in ENT, gynecology, dermatology, oral surgery, and podiatry
187.
Neurophysics
188.
Psychophysics
–
Psychophysics quantitatively investigates the relationship between physical stimuli and the sensations and perceptions they produce. Psychophysics also refers to a general class of methods that can be applied to study a perceptual system. Modern applications rely heavily on signal detection theory. Psychophysics has widespread and important practical applications. In the study of digital signal processing, psychophysics has informed the development of methods of lossy compression. These models explain why humans perceive very little loss of signal quality when audio and video signals are formatted using lossy compression. Many of the classical techniques and theories of psychophysics were formulated in 1860 when Gustav Theodor Fechner in Leipzig published Elemente der Psychophysik. He coined the term "psychophysics", describing research intended to relate physical stimuli to the contents of consciousness such as sensations. From Weber's law relating the just-noticeable difference to stimulus intensity, Fechner derived his well-known logarithmic scale, now known as the Fechner scale. Weber's and Fechner's work formed one of the bases of psychology as a science, with Wilhelm Wundt founding the first laboratory for psychological research in Leipzig. Fechner's work systematised the introspectionist approach, which had to contend with the behaviorist approach, in which even verbal responses are as physical as the stimuli. Peirce and Jastrow largely confirmed Fechner's empirical findings, but not all. In particular, a classic experiment of Peirce and Jastrow rejected Fechner's estimation of a threshold of perception of weights, as being far too high. The Peirce–Jastrow experiments were conducted as part of Peirce's application of his pragmaticism program to human perception; other studies considered the perception of light, etc. Jastrow wrote the following summary: "Mr. Peirce's courses in logic gave me my first real experience of intellectual muscle."
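Fechner's logarithmic scale relates sensation S to stimulus intensity I by S = k·ln(I/I0). The constant k and threshold I0 below are arbitrary illustrative values; in practice they are fitted per sensory modality.

```python
import math

# Fechner's logarithmic scale: perceived sensation grows with the
# logarithm of physical stimulus intensity, S = k * ln(I / I0).
# k and I0 are illustrative placeholders, not fitted values.

K_SCALE = 1.0   # scaling constant (illustrative)
I_THRESH = 1.0  # absolute threshold intensity (illustrative)

def sensation(intensity, k=K_SCALE, i0=I_THRESH):
    """Perceived magnitude on Fechner's scale."""
    return k * math.log(intensity / i0)

# Each *doubling* of intensity adds the same increment to sensation --
# equal stimulus ratios map to equal sensation steps.
step_low = sensation(2.0) - sensation(1.0)
step_high = sensation(200.0) - sensation(100.0)
```

This same compressive mapping is the reason lossy audio and video codecs can discard large amounts of physical signal with little perceived loss of quality.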
Psychophysics
–
Diagram showing a specific staircase procedure: Transformed Up/Down Method (1 up/ 2 down rule). Until the first reversal (which is neglected) the simple up/down rule and a larger step size is used.
189.
Atmospheric physics
–
Atmospheric physics is the application of physics to the study of the atmosphere. There are two kinds of remote sensing: passive and active. Passive sensors detect natural radiation emitted or reflected by the object or surrounding area being observed. Reflected sunlight is the most common source of radiation measured by passive sensors. Examples of passive remote sensors include film photography and radiometers. Remote sensing makes it possible to collect data on dangerous or inaccessible areas. Military collection during the Cold War made use of stand-off collection of data about dangerous border areas. Remote sensing also replaces slow collection on the ground, ensuring in the process that areas or objects are not disturbed. Atmospheric physicists typically divide radiation into solar radiation (emitted by the Sun) and terrestrial radiation (emitted by Earth's surface and atmosphere). Solar radiation contains a variety of wavelengths. Visible light has wavelengths between 0.4 and 0.7 micrometres. Shorter wavelengths are known as the ultraviolet part of the spectrum, while longer wavelengths are grouped into the infrared portion of the spectrum. Ozone is most effective in absorbing radiation around 0.25 micrometres, where UV-C rays lie in the spectrum. This increases the temperature of the nearby stratosphere. Snow reflects 88% of UV rays, sand reflects 12%, and water reflects only 4% of incoming UV radiation.
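The split into solar and terrestrial radiation follows from Wien's displacement law, λ_max = b/T: the Sun (about 5778 K) peaks in the visible, while the Earth (about 288 K) peaks in the thermal infrared, so the two spectra barely overlap.

```python
# Wien's displacement law: a blackbody at temperature T radiates most
# strongly at wavelength lambda_max = b / T. This is why atmospheric
# physicists separate (shortwave) solar from (longwave) terrestrial
# radiation -- the two emission peaks barely overlap.

WIEN_B = 2.897771955e-3  # Wien displacement constant, m K

def peak_wavelength(temperature_k):
    """Peak emission wavelength (m) of a blackbody at T kelvin."""
    return WIEN_B / temperature_k

solar_peak = peak_wavelength(5778.0)       # ~0.50 micrometres (visible)
terrestrial_peak = peak_wavelength(288.0)  # ~10 micrometres (infrared)
```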
Atmospheric physics
–
Brightness can indicate reflectivity as in this 1960 weather radar image (of Hurricane Abby). The radar's frequency, pulse form, and antenna largely determine what it can observe.
Atmospheric physics
–
Cloud-to-ground lightning in the global atmospheric electrical circuit.
Atmospheric physics
–
Representation of upper-atmospheric lightning and electrical-discharge phenomena
190.
Cloud physics
–
Cloud physics is the study of the physical processes that lead to the formation, growth and precipitation of atmospheric clouds. Clouds consist of microscopic droplets of liquid water (warm clouds), tiny crystals of ice (cold clouds), or both (mixed phase clouds). Cloud droplets initially form by condensation onto condensation nuclei when the supersaturation of air exceeds a critical value according to Köhler theory. At small radii, the amount of supersaturation needed for condensation to occur is so large that it does not happen naturally. Raoult's law describes how the vapor pressure is dependent on the amount of solute in a solution. At high concentrations, when the cloud droplets are small, the supersaturation required is smaller than without the presence of a nucleus. The large droplets can then combine to form even larger drops. Coalescence is not as important in mixed phase clouds, where the Bergeron process dominates. Advances in satellite technology have also allowed the precise study of clouds on a large scale. The history of cloud microphysics is described in several publications. Otto von Guericke originated the idea that clouds were composed of water bubbles. In 1847 Augustus Waller used spider web to examine droplets under the microscope. These observations were confirmed in 1884. As water evaporates from an area of the surface, the air over that area becomes moist. Moist air is lighter than the surrounding dry air, creating an unstable situation.
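Köhler theory combines the Kelvin (curvature) effect and the Raoult (solute) effect. In the common approximation S(r) ≈ 1 + A/r − B/r³, the curve peaks at the critical radius r_c = sqrt(3B/A); droplets pushed past r_c grow freely. The coefficients A and B below are illustrative placeholders, not values for a specific aerosol.

```python
import math

# Koehler-curve sketch: equilibrium saturation ratio over a solution
# droplet, S(r) ~ 1 + A/r - B/r**3, where A/r is the Kelvin (curvature)
# term and B/r**3 the Raoult (solute) term. Setting dS/dr = 0 gives the
# critical radius. A and B are illustrative placeholders.

A_KELVIN = 1.2e-9   # m, curvature coefficient (illustrative)
B_RAOULT = 1.0e-21  # m^3, solute coefficient (illustrative)

def saturation_ratio(r, a=A_KELVIN, b=B_RAOULT):
    """Equilibrium saturation ratio over a droplet of radius r (m)."""
    return 1.0 + a / r - b / r**3

def critical_radius(a=A_KELVIN, b=B_RAOULT):
    """Radius where the Koehler curve peaks: r_c = sqrt(3B/A)."""
    return math.sqrt(3.0 * b / a)

r_c = critical_radius()       # ~1.6 micrometres with these coefficients
peak = saturation_ratio(r_c)  # the critical (peak) supersaturation
```

Below r_c the solute term lowers the required supersaturation, which is the quantitative content of the sentence above about condensation nuclei making droplet formation possible.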
Cloud physics
–
Late-summer rainstorm in Denmark. The nearly black color of the base indicates that the main cloud in the foreground is probably cumulonimbus.
Cloud physics
–
Windy evening twilight enhanced by the Sun's angle can visually mimic a tornado resulting from orographic lift
191.
Integrated Authority File
–
The Integrated Authority File or GND is an international authority file for the organisation of personal names, subject headings and corporate bodies from catalogues. It is used mainly for documentation in libraries and, increasingly, also by archives and museums. The GND is managed by the German National Library in cooperation with various regional library networks in German-speaking Europe and other partners. The GND falls under the Creative Commons Zero (CC0) license. The GND specification provides a hierarchy of high-level entities and sub-classes, useful in library classification, and an approach to the unambiguous identification of single elements. It also comprises an ontology intended for knowledge representation in the semantic web, available in the RDF format.
Integrated Authority File
–
GND screenshot
192.
National Diet Library
–
The National Diet Library (NDL) is the only national library in Japan. It was established in 1948 for the purpose of assisting members of the National Diet of Japan in researching matters of public policy. The library is similar in purpose and scope to the United States Library of Congress. The National Diet Library consists of two main facilities in Tokyo and Kyoto, and several other branch libraries throughout Japan. The Diet's power in prewar Japan was limited, and its need for information was "correspondingly small." The original Diet libraries "never developed either the collections or the services which might have made them vital adjuncts of genuinely responsible legislative activity." Until Japan's defeat, moreover, the executive had controlled all political documents, depriving the Diet of access to vital information. In 1946, each house of the Diet formed its own National Diet Library Standing Committee. The historian Hani Gorō envisioned the new body as both a "citadel of popular sovereignty" and the means of realizing a "peaceful revolution." The National Diet Library opened in June 1948 with an initial collection of 100,000 volumes. The first Librarian of the Diet Library was the politician Tokujirō Kanamori. The philosopher Masakazu Nakai served as the first Vice Librarian. In 1949, the NDL merged with the National Library and became the only national library in Japan. At this time the collection gained an additional one million volumes previously housed in the former National Library in Ueno. In 1961, the NDL opened at its present location in Nagatachō, adjacent to the National Diet.
National Diet Library
–
Tokyo Main Library of the National Diet Library
National Diet Library
–
Kansai-kan of the National Diet Library
National Diet Library
–
The National Diet Library
National Diet Library
–
Main building in Tokyo
193.
Fluid mechanics
–
Fluid mechanics is a branch of physics concerned with the mechanics of fluids (liquids, gases and plasmas) and the forces on them. Fluid mechanics has a wide range of applications, including mechanical engineering, civil engineering, chemical engineering, geophysics and biology. Fluid mechanics, especially fluid dynamics, is an active field of research with many problems that are partly or wholly unsolved. Many problems in fluid mechanics can best be solved by numerical methods, typically using computers. A modern discipline, called computational fluid dynamics (CFD), is devoted to this approach to solving fluid problems. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of fluid flow. Viscous flow was explored by a multitude of engineers including Jean Léonard Marie Poiseuille and Gotthilf Hagen. Fluid statics, or hydrostatics, is the branch of fluid mechanics that studies fluids at rest. Hydrostatics is fundamental to the engineering of equipment for storing, transporting and using fluids. It is also relevant to meteorology, to medicine and to many other fields. Fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow -- the science of liquids and gases in motion. It has several subdisciplines itself, including aerodynamics and hydrodynamics. Some fluid-dynamical principles are used in traffic engineering and crowd dynamics. Fluid mechanics is a subdiscipline of continuum mechanics. A fluid at rest has no shear stress.
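The viscous pipe flow studied by Poiseuille and Hagen has a closed-form laminar solution, the Hagen–Poiseuille law Q = πΔP·r⁴/(8μL). The numbers below are illustrative, not from a specific experiment.

```python
import math

# Hagen-Poiseuille law: volumetric flow rate of steady laminar flow
# through a circular pipe, Q = pi * dP * r**4 / (8 * mu * L).
# Note the r**4 dependence: halving the radius cuts the flow 16-fold.

def poiseuille_flow(delta_p, radius, viscosity, length):
    """Laminar volumetric flow rate (m^3/s) through a circular pipe."""
    return math.pi * delta_p * radius**4 / (8.0 * viscosity * length)

# Illustrative numbers: a water-like fluid through a 1 cm radius pipe.
q = poiseuille_flow(delta_p=1000.0,   # pressure drop along the pipe, Pa
                    radius=0.01,      # pipe radius, m
                    viscosity=1e-3,   # dynamic viscosity, Pa s
                    length=1.0)       # pipe length, m
```

The fourth-power radius dependence is one of the most consequential results of viscous-flow theory; it is, for instance, why small constrictions in blood vessels have outsized effects on flow.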
Fluid mechanics
–
Balance for some integrated fluid quantity in a control volume enclosed by a control surface.