1.
Continuum mechanics
–
Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century, and research in the area continues to this day. Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Continuum mechanics deals with physical properties of solids and fluids which are independent of any particular coordinate system in which they are observed. These physical properties are represented by tensors, which are mathematical objects that have the required property of being independent of coordinate system. These tensors can be expressed in coordinate systems for computational convenience. Materials, such as solids, liquids and gases, are composed of molecules separated by space, and on a microscopic scale materials have cracks and discontinuities. Nevertheless, certain physical phenomena can be modeled by treating the material as a continuum: a body that can be continually sub-divided into infinitesimal elements with properties being those of the bulk material. More specifically, the continuum hypothesis/assumption hinges on the concept of a representative elementary volume. This condition provides a link between an experimentalist's and a theoretician's viewpoint on constitutive equations, as well as a way of spatial and statistical averaging of the microstructure; the latter then provides a basis for stochastic finite elements. The levels of the statistical volume element (SVE) and the representative volume element (RVE) link continuum mechanics to statistical mechanics; the RVE may be assessed only in a limited way via experimental testing, namely when the constitutive response becomes spatially homogeneous. Specifically for fluids, the Knudsen number is used to assess to what extent the approximation of continuity can be made. As an illustration, consider car traffic on a highway, with just one lane for simplicity.
Somewhat surprisingly, and in a tribute to its effectiveness, continuum mechanics models the movement of cars via a differential equation for the density of cars. The familiarity of this situation empowers us to understand a little of the continuum-discrete dichotomy underlying continuum modelling in general. To start modelling, define: x measures distance along the highway; t is time; ρ(x, t) is the density of cars on the highway; u(x, t) is their flow velocity; and cars do not appear and disappear. Consider any group of cars, from the car at the back of the group located at x = a to the car at the front located at x = b. The total number of cars in this group is N = ∫ₐᵇ ρ dx, and since cars are conserved, dN/dt = 0. The only way an integral can be zero for all intervals is if the integrand is zero for all x; consequently, conservation derives the first-order nonlinear conservation PDE ∂ρ/∂t + ∂(ρu)/∂x = 0 for all positions on the highway. This conservation PDE applies not only to car traffic but also to fluids, solids, crowds, animals, plants, bushfires, financial traders and more. It is one equation with two unknowns, so another equation is needed to form a well-posed problem.
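The car-density conservation law above can be explored numerically. The following is a minimal sketch (the speed, grid and initial "platoon" of cars are all assumed values) that advects a density profile with a first-order upwind scheme for ∂ρ/∂t + ∂(ρu)/∂x = 0, taking the velocity constant for simplicity; real traffic models close the system by making u depend on ρ.

```python
import numpy as np

def advect_density(rho, v=20.0, dx=10.0, dt=0.25, steps=100):
    """Upwind update for  drho/dt + d(rho*v)/dx = 0  with constant speed v."""
    rho = rho.astype(float).copy()
    for _ in range(steps):
        flux = v * rho                       # cars per second passing each point
        # upwind difference (v > 0): information travels from the left
        rho[1:] -= dt / dx * (flux[1:] - flux[:-1])
    return rho

# assumed example: a Gaussian platoon of cars centred at x = 300 m
x = np.linspace(0.0, 1000.0, 101)
rho0 = np.exp(-((x - 300.0) / 50.0) ** 2)
rho = advect_density(rho0)   # after 25 s at 20 m/s the platoon sits near x = 800
```

The scheme conserves the total number of cars up to the (tiny) flux through the boundaries, which is exactly the dN/dt = 0 statement in discrete form.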
Continuum mechanics
–
Figure 1. Configuration of a continuum body
2.
Solid mechanics
–
Solid mechanics is fundamental for civil, aerospace, nuclear, and mechanical engineering, for geology, and for many branches of physics such as materials science. It has specific applications in other areas, such as understanding the anatomy of living beings. One of the most common practical applications of solid mechanics is the Euler-Bernoulli beam equation. Solid mechanics extensively uses tensors to describe stresses, strains, and the relationship between them, and it inhabits a central place within continuum mechanics. The field of rheology presents an overlap between solid and fluid mechanics. A material has a rest shape, and its shape departs from the rest shape due to stress. The amount of departure from the rest shape is called deformation, and the proportion of deformation to original size is called strain. If the applied stress is sufficiently low, the strain is proportional to the stress; this region of deformation is known as the linearly elastic region. It is most common for analysts in solid mechanics to use linear material models; however, real materials often exhibit non-linear behavior, and as new materials are used and old ones are pushed to their limits, non-linear material models are becoming more common. There are four basic models that describe how a solid responds to an applied stress. Elastically – when an applied stress is removed, the material returns to its undeformed state; linearly elastic materials are those that deform proportionally to the applied load. Viscoelastically – materials that behave elastically but also exhibit damping; this implies that the material response has time-dependence. Plastically – materials that behave elastically generally do so when the stress is less than a yield value; when the stress is greater than the yield stress, the material behaves plastically, and deformation that occurs after yield is permanent. Thermoelastically – there is coupling of mechanical with thermal responses; in general, thermoelasticity is concerned with elastic solids under conditions that are neither isothermal nor adiabatic. The simplest such theory involves Fourier's law of heat conduction, as opposed to advanced theories with physically more realistic models.
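The linearly elastic region described above can be sketched in a few lines. This is a minimal illustration, not a general material model; the moduli are assumed steel-like values and the helper name is hypothetical.

```python
# Hooke's law in one dimension: sigma = E * epsilon, valid only while the
# stress stays below the yield strength (the linearly elastic region).
def axial_stress(strain, youngs_modulus_pa, yield_strength_pa):
    """Return (stress in Pa, True if still in the linearly elastic region)."""
    stress = youngs_modulus_pa * strain
    return stress, abs(stress) <= yield_strength_pa

E_STEEL = 200e9        # Pa, assumed typical structural steel
YIELD_STEEL = 250e6    # Pa, assumed yield strength
```

For a strain of 0.001 the model stays elastic; for 0.002 the trial stress exceeds yield and a linear model is no longer valid, which is where the plastic models below take over.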
This theorem includes the method of least work as a special case (1874). In 1922 Timoshenko corrected the Euler-Bernoulli beam equation, and in 1936 Hardy Cross published the moment distribution method, an important innovation in the design of continuous frames.
3.
Compatibility (mechanics)
–
In continuum mechanics, a compatible deformation tensor field in a body is that unique tensor field that is obtained when the body is subjected to a continuous, single-valued displacement field. Compatibility is the study of the conditions under which such a displacement field can be guaranteed. Compatibility conditions are particular cases of integrability conditions and were first derived for linear elasticity by Barré de Saint-Venant in 1864 and proved rigorously by Beltrami in 1886. In the continuum description of a solid body we imagine the body to be composed of a set of infinitesimal volumes or material points. Each volume is assumed to be connected to its neighbors without any gaps or overlaps, and certain mathematical conditions have to be satisfied to ensure that gaps/overlaps do not develop when a continuum body is deformed. A body that deforms without developing any gaps/overlaps is called a compatible body. Compatibility conditions are mathematical conditions that determine whether a particular deformation will leave a body in a compatible state. In the context of infinitesimal strain theory, these conditions are equivalent to stating that the displacements in a body can be obtained by integrating the strains. Such an integration is possible if the Saint-Venant tensor R vanishes in a body, where ε is the infinitesimal strain tensor. For finite deformations the compatibility conditions take the form R := ∇ × F = 0, where F is the deformation gradient. The compatibility conditions in linear elasticity are obtained by observing that there are six strain-displacement relations that are functions of only three unknown displacements. This suggests that the three displacements may be removed from the system of equations without loss of information; the resulting expressions in terms of only the strains provide constraints on the possible forms of a strain field.
We can write these conditions in index notation as e_ikr e_jls ε_ij,kl = 0, where e_ijk is the permutation symbol. In direct tensor notation ∇ × (∇ × ε) = 0, where the curl operator can be expressed in an orthonormal coordinate system as ∇ × ε = e_ijk ε_rj,i e_k ⊗ e_r. The same condition is sufficient to ensure compatibility in a simply connected body. The quantity R_ijkm represents the components of the Riemann-Christoffel curvature tensor. The problem of compatibility in continuum mechanics involves the determination of allowable single-valued continuous fields on simply connected bodies. More precisely, the problem may be stated in the following manner. Consider the deformation of a body shown in Figure 1. Given a symmetric second-order tensor field ε, when is it possible to construct a vector field u such that ε = ½ [∇u + (∇u)ᵀ]? Suppose that there exists u such that the expression for ε holds. Then ε_ik,jl − ε_jk,il − ε_il,jk + ε_jl,ik = 0, or in direct tensor notation ∇ × (∇ × ε) = 0. The above are necessary conditions. If w is the infinitesimal rotation vector then ∇ × ε = ∇w, hence the necessary condition may also be written as ∇ × (∇w) = 0. Let us now assume that the condition ∇ × (∇ × ε) = 0 is satisfied in a portion of a body.
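In two dimensions the Saint-Venant conditions reduce to the single equation ε_xx,yy + ε_yy,xx − 2 ε_xy,xy = 0, which can be checked symbolically. The sketch below (the displacement field is an arbitrary assumed example) derives strains from a smooth single-valued displacement field, so the compatibility residual vanishes by construction.

```python
import sympy as sp

x, y = sp.symbols("x y")
u = x**2 * y           # assumed displacement component u_x
v = sp.sin(x) + y**3   # assumed displacement component u_y

# infinitesimal strains from the strain-displacement relations
eps_xx = sp.diff(u, x)
eps_yy = sp.diff(v, y)
eps_xy = sp.Rational(1, 2) * (sp.diff(u, y) + sp.diff(v, x))

# 2-D Saint-Venant compatibility residual
residual = (sp.diff(eps_xx, y, 2) + sp.diff(eps_yy, x, 2)
            - 2 * sp.diff(eps_xy, x, y))
print(sp.simplify(residual))  # prints 0
```

Conversely, a strain field written down directly, e.g. ε_xx = y² with all other components zero, gives a nonzero residual and admits no single-valued displacement field.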
Compatibility (mechanics)
–
Figure 1. Motion of a continuum body.
4.
Finite strain theory
–
In this case, the undeformed and deformed configurations of the continuum are significantly different and a clear distinction has to be made between them. This is commonly the case with elastomers, plastically-deforming materials and fluids. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration κ₀ to a current or deformed configuration κ_t. A change in the configuration of a continuum body can be described by a displacement field: a field of all displacement vectors for all particles in the body. Relative displacement between particles occurs if and only if deformation has occurred; if displacement occurs without deformation, then it is deemed a rigid-body displacement. The displacement of particles indexed by variable i may be expressed as follows: the vector joining the positions of a particle in the undeformed configuration P_i and deformed configuration p_i is called the displacement vector. The partial derivative of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor ∇_X u; here α_Ji are the direction cosines between the material and spatial coordinate systems with unit vectors E_J and e_i, respectively. Due to the assumption of continuity of the mapping χ, F has the inverse H = F⁻¹, and by the implicit function theorem the Jacobian determinant J must be nonsingular, i.e. J = det F ≠ 0. Consider a particle or material point P with position vector X = X_I E_I in the undeformed configuration. After a displacement of the body, the new position of the particle indicated by p in the new configuration is given by the position vector x = x_i e_i.
The coordinate systems for the undeformed and deformed configurations can be superimposed for convenience. Consider now a material point Q neighboring P, with position vector X + ΔX = (X_I + ΔX_I) E_I. In the deformed configuration this particle has a new position q given by the vector x + Δx. Assuming that the line segments ΔX and Δx joining the particles P and Q in the undeformed and deformed configurations, respectively, are small, we can express them as dX and dx. A geometrically consistent definition of such a derivative requires an excursion into differential geometry. The time derivative of F is Ḟ = ∂F/∂t = ∂/∂t (∂x/∂X) = ∂/∂X (∂x/∂t) = ∂V/∂X, where V is the velocity. The derivative on the right-hand side represents a material velocity gradient. It is common to convert that into a spatial gradient, i.e. Ḟ = ∂V/∂X = ∂V/∂x ⋅ ∂x/∂X = l ⋅ F, where l = ∂V/∂x is the spatial velocity gradient. If the spatial velocity gradient is constant, the equation can be solved exactly to give F = e^(l t), assuming F = 1 at t = 0.
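The closed-form solution F = e^(l t) for a constant spatial velocity gradient can be evaluated with a matrix exponential. The sketch below uses an assumed simple-shear gradient; because that l is nilpotent (l² = 0), the exponential truncates to F = I + l t exactly, which makes the result easy to check by hand.

```python
import numpy as np
from scipy.linalg import expm

def deformation_gradient(l, t):
    """F(t) = expm(l * t) for a constant spatial velocity gradient l, F(0) = I."""
    return expm(l * t)

gamma_dot = 0.5                        # assumed shear rate (1/s)
l = np.array([[0.0, gamma_dot],
              [0.0, 0.0]])             # simple shear: nilpotent gradient
F = deformation_gradient(l, 2.0)       # F = I + l*t = [[1, 1], [0, 1]] here
```

Note that tr l = 0 for simple shear, so det F = 1 and the motion is volume-preserving, consistent with J = det F ≠ 0 above.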
Finite strain theory
–
Figure 1. Motion of a continuum body.
5.
Linear elasticity
–
Linear elasticity is the mathematical study of how solid objects deform and become internally stressed due to prescribed loading conditions. Linear elasticity models materials as continua; it is a simplification of the more general nonlinear theory of elasticity and a branch of continuum mechanics. The fundamental linearizing assumptions of linear elasticity are infinitesimal strains, or small deformations, and linear relationships between the components of stress and strain; in addition, linear elasticity is valid only for stress states that do not produce yielding. These assumptions are reasonable for many engineering materials and engineering design scenarios, so linear elasticity is used extensively in structural analysis and engineering design, often with the aid of finite element analysis. The system of equations is completed by a set of linear algebraic constitutive relations: for elastic materials, Hooke's law represents the material behavior and relates the unknown stresses and strains. Note: the Einstein summation convention of summing on repeated indices is used below. Equilibrium equations: σ_ji,j + F_i = 0; these are 3 independent equations with 6 independent unknowns (the stresses). Strain-displacement equations: ε_ij = ½ (u_i,j + u_j,i), where ε_ij = ε_ji is the strain; these are 6 independent equations relating strains and displacements with 9 independent unknowns. Hooke's law is σ_ij = C_ijkl ε_kl, where C_ijkl is the stiffness tensor; these are 6 independent equations relating stresses and strains. An elastostatic boundary value problem for such a medium is therefore a system of 15 independent equations. Specifying the boundary conditions, the boundary value problem is completely defined. To solve the system, two approaches can be taken according to the boundary conditions of the boundary value problem: a displacement formulation and a stress formulation.
In isotropic media, the stiffness tensor gives the relationship between the stresses and the strains. For an isotropic medium, the stiffness tensor has no preferred direction: an applied force will give the same displacements no matter the direction in which the force is applied. If the medium is also homogeneous, then the elastic moduli will be independent of position in the medium. The constitutive equation may now be written as σ_ij = K δ_ij ε_kk + 2μ (ε_ij − ⅓ δ_ij ε_kk). This expression separates the stress into a part on the left which may be associated with a scalar pressure, and a part on the right which may be associated with shear. A simpler expression is σ_ij = λ δ_ij ε_kk + 2μ ε_ij, where λ is Lamé's first parameter. In terms of strains, ε_ij = (1/2μ) σ_ij − (ν/E) δ_ij σ_kk = (1/E) [(1 + ν) σ_ij − ν δ_ij σ_kk], where ν is Poisson's ratio and E is Young's modulus. Elastostatics is the study of linear elasticity under the conditions of equilibrium, in which all forces on the elastic body sum to zero.
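The isotropic constitutive equation and its inverse above can be verified numerically by a round trip. This is a minimal sketch; the moduli are assumed steel-like values and the function names are hypothetical.

```python
import numpy as np

def stress_from_strain(eps, lam, mu):
    """sigma_ij = lam * delta_ij * eps_kk + 2 * mu * eps_ij"""
    return lam * np.trace(eps) * np.eye(3) + 2.0 * mu * eps

def strain_from_stress(sig, E, nu):
    """eps_ij = ((1 + nu) * sigma_ij - nu * delta_ij * sigma_kk) / E"""
    return ((1.0 + nu) * sig - nu * np.trace(sig) * np.eye(3)) / E

E, nu = 200e9, 0.3                          # assumed Young's modulus, Poisson's ratio
lam = E * nu / ((1 + nu) * (1 - 2 * nu))    # Lame's first parameter
mu = E / (2 * (1 + nu))                     # shear modulus
```

Applying the forward relation and then the inverse to any symmetric strain tensor should reproduce it, confirming that (λ, μ) and (E, ν) describe the same isotropic material.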
Linear elasticity
–
Spherical coordinates (r, θ, φ) as commonly used in physics: radial distance r, polar angle θ (theta), and azimuthal angle φ (phi). The symbol ρ (rho) is often used instead of r.
6.
Plasticity (physics)
–
In physics and materials science, plasticity describes the deformation of a material undergoing non-reversible changes of shape in response to applied forces. For example, a piece of metal being bent or pounded into a new shape displays plasticity as permanent changes occur within the material itself. In engineering, the transition from elastic behavior to plastic behavior is called yield. Plastic deformation is observed in most materials, particularly metals, soils, rocks, concrete, foams, bone and skin; however, the mechanisms that cause plastic deformation can vary widely. At a crystalline scale, plasticity in metals is usually a consequence of dislocations. Such defects are relatively rare in most crystalline materials, but are numerous in some and part of their crystal structure; in such cases, plastic crystallinity can result. In brittle materials such as rock, concrete and bone, plasticity is caused predominantly by slip at microcracks. For many ductile metals, tensile loading applied to a sample will cause it to behave in an elastic manner: each increment of load is accompanied by a proportional increment in extension, and when the load is removed, the piece returns to its original size. However, once the load exceeds a threshold – the yield strength – the extension increases more rapidly than in the elastic region, and when the load is removed, some degree of extension remains. Elastic deformation, however, is an approximation and its quality depends on the time frame considered. If, as indicated in the graph opposite, the deformation includes elastic deformation, it is also often referred to as elasto-plastic deformation or elastic-plastic deformation. Perfect plasticity is a property of materials to undergo irreversible deformation without any increase in stresses or loads; plastic materials with hardening necessitate increasingly higher stresses to result in further plastic deformation. Generally, plastic deformation is also dependent on the deformation speed.
Materials whose plastic deformation depends strongly on deformation rate are said to deform visco-plastically. The plasticity of a material is directly proportional to the ductility and malleability of the material. Plasticity in a crystal of pure metal is primarily caused by two modes of deformation in the crystal lattice: slip and twinning. Slip is a shear deformation which moves the atoms through many interatomic distances relative to their initial positions. Twinning is the plastic deformation which takes place along two planes due to a set of forces applied to a given metal piece. Most metals show more plasticity when hot than when cold. Lead shows sufficient plasticity at room temperature, while cast iron does not possess sufficient plasticity for any forging operation even when hot. This property is of importance in forming, shaping and extruding operations on metals; most metals are rendered plastic by heating and hence shaped hot.
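The elastic/perfect-plastic distinction drawn above can be captured in a one-dimensional sketch: below yield the response is linear elastic, while at yield the stress is capped and the excess strain becomes a permanent set. The moduli are assumed values and the function name is hypothetical.

```python
def stress_perfect_plasticity(total_strain, E, sigma_y):
    """1-D elastic-perfectly-plastic model.

    Returns (stress, plastic_strain): the stress never exceeds the yield
    strength sigma_y, and plastic_strain is the deformation that would
    remain after unloading.
    """
    trial = E * total_strain                 # elastic trial stress
    if abs(trial) <= sigma_y:
        return trial, 0.0                    # elastic: fully recoverable
    sign = 1.0 if trial > 0 else -1.0
    stress = sign * sigma_y                  # perfect plasticity: capped stress
    plastic_strain = total_strain - stress / E
    return stress, plastic_strain
```

A hardening material would be modeled the same way except that the capped stress grows with accumulated plastic strain instead of staying at sigma_y.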
Plasticity (physics)
–
Plasticity under a spherical nanoindenter in (111) copper. All particles in ideal lattice positions are omitted and the color code refers to the von Mises stress field.
Plasticity (physics)
–
1: True elastic limit
7.
Bending
–
In applied mechanics, bending characterizes the behavior of a slender structural element subjected to an external load applied perpendicularly to a longitudinal axis of the element. The structural element is assumed to be such that at least one of its dimensions is a small fraction, typically 1/10 or less, of the other two. When the length is considerably longer than the width and the thickness, the element is called a beam; for example, a closet rod sagging under the weight of clothes on clothes hangers is an example of a beam experiencing bending. On the other hand, a shell is a structure of any geometric form where the length and the width are of the same order of magnitude but the thickness is much smaller; a large-diameter, but thin-walled, short tube supported at its ends and loaded laterally is an example of a shell experiencing bending. In the absence of a qualifier, the term bending is ambiguous because bending can occur locally in all objects. Therefore, to make the usage of the term more precise, engineers refer to a specific object, such as the bending of rods, the bending of beams, or the bending of plates. A beam deforms and stresses develop inside it when a transverse load is applied on it. In the quasi-static case, the amount of bending deflection and the stresses that develop are assumed not to change over time. In a horizontal beam supported at the ends and loaded downwards in the middle, the material at the top of the beam is compressed while the material at the bottom is stretched; these last two forces form a couple or moment as they are equal in magnitude and opposite in direction. This bending moment resists the sagging deformation characteristic of a beam experiencing bending. The stress distribution in a beam can be predicted quite accurately when some simplifying assumptions are used. In the Euler–Bernoulli theory of beams, a major assumption is that plane sections remain plane; in other words, any deformation due to shear across the section is not accounted for. Also, this linear distribution is only applicable if the maximum stress is less than the yield stress of the material; for stresses that exceed yield, refer to the article on plastic bending. At yield, the maximum stress experienced in the section is defined as the flexural strength.
Simple beam bending is often analyzed with the Euler–Bernoulli beam equation. The conditions for using simple bending theory are: the beam is subject to pure bending, meaning that the shear force is zero and that no torsional or axial loads are present; the material is isotropic and homogeneous; the beam is initially straight with a cross section that is constant throughout the beam length; the beam has an axis of symmetry in the plane of bending; and the proportions of the beam are such that it would fail by bending rather than by crushing, wrinkling or sideways buckling.
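Under these assumptions the maximum bending stress follows from σ = M c / I. The sketch below (geometry and load are assumed example values) evaluates this for a simply supported rectangular beam with a central point load, where the maximum moment is M = P L / 4 at midspan.

```python
def max_bending_stress(P, L, b, h):
    """Maximum bending stress (Pa) in a simply supported rectangular beam
    of span L, width b, height h, under a central point load P."""
    M = P * L / 4.0        # maximum bending moment at midspan
    I = b * h**3 / 12.0    # second moment of area of the rectangle
    c = h / 2.0            # distance from neutral axis to the outer fiber
    return M * c / I

# assumed example: 1 kN load on a 2 m span, 50 mm x 100 mm section
sigma_max = max_bending_stress(1000.0, 2.0, 0.05, 0.10)   # 6.0 MPa
```

Per the Euler–Bernoulli assumptions above, this linear stress distribution is valid only while sigma_max stays below the material's yield stress.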
Bending
–
Bending of an I-beam
8.
Fracture mechanics
–
Fracture mechanics is the field of mechanics concerned with the study of the propagation of cracks in materials. It uses methods of solid mechanics to calculate the driving force on a crack. In modern materials science, fracture mechanics is an important tool used to improve the performance of mechanical components. Fractography is widely used with fracture mechanics to understand the causes of failures and to verify theoretical failure predictions against real-life failures. The prediction of crack growth is at the heart of the damage tolerance mechanical design discipline. There are three ways of applying a force to enable a crack to propagate: Mode I fracture – opening mode (a tensile stress normal to the plane of the crack); Mode II fracture – sliding mode (a shear stress acting parallel to the plane of the crack and perpendicular to the crack front); and Mode III fracture – tearing mode (a shear stress acting parallel to the plane of the crack and parallel to the crack front). The processes of material manufacture, processing, machining, and forming may introduce flaws in a finished mechanical component; arising from these processes, interior and surface flaws are found in all metal structures. Not all such flaws are unstable under service conditions. Fracture mechanics is the analysis of flaws to discover those that are safe and those that are liable to propagate as cracks and so cause failure of the flawed structure. Despite these inherent flaws, it is possible to achieve through damage tolerance analysis the safe operation of a structure. Fracture mechanics as a subject for critical study has barely been around for a century and thus is relatively new. Fracture mechanics should attempt to provide answers to the following questions: What crack size can be tolerated under service loading, i.e. what is the maximum permissible crack size? How long does it take for a crack to grow from an initial size, for example the minimum detectable crack size? What is the life of a structure when a certain pre-existing flaw size is assumed to exist? During the period available for crack detection, how often should the structure be inspected for cracks? Fracture mechanics was developed during World War I by English aeronautical engineer A. A.
Griffith to explain the failure of brittle materials. Griffith's work was motivated by two facts: the stress needed to fracture bulk glass is around 100 MPa, while the theoretical stress needed for breaking atomic bonds of glass is approximately 10,000 MPa. A theory was needed to reconcile these conflicting observations. Also, experiments on glass fibers that Griffith himself conducted suggested that the fracture stress increases as the fiber diameter decreases. Hence the uniaxial tensile strength, which had been used extensively to predict material failure before Griffith, could not be a specimen-independent material property. Griffith suggested that the low fracture strength observed in experiments, as well as the size-dependence of strength, was due to the presence of microscopic flaws in the bulk material. To verify the flaw hypothesis, Griffith introduced an artificial flaw in his experimental glass specimens.
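Griffith's flaw hypothesis leads to his well-known criterion: for a through crack of half-length a in a brittle plate, the fracture stress is σ_f = sqrt(2 E γ_s / (π a)), where γ_s is the surface energy. The sketch below uses assumed glass-like values to show how even a micron-scale flaw brings the theoretical bond strength down by orders of magnitude.

```python
import math

def griffith_stress(E, gamma_s, a):
    """Griffith fracture stress (Pa) for a crack of half-length a (m) in a
    brittle material with Young's modulus E (Pa) and surface energy
    gamma_s (J/m^2)."""
    return math.sqrt(2.0 * E * gamma_s / (math.pi * a))

E_GLASS = 70e9       # Pa, assumed
GAMMA_GLASS = 1.0    # J/m^2, assumed
sigma_f = griffith_stress(E_GLASS, GAMMA_GLASS, 1e-6)   # ~2e8 Pa for a 1 um flaw
```

The inverse-square-root dependence on crack length also explains the fiber observation above: thinner fibers can contain only smaller flaws, so they fracture at higher stress.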
Fracture mechanics
–
The S.S. Schenectady split apart by brittle fracture while in harbor, 1943.
Fracture mechanics
–
The three fracture modes.
9.
Fluid
–
In physics, a fluid is a substance that continually deforms under an applied shear stress. Fluids are a subset of the phases of matter and include liquids, gases and plasmas. Fluids are substances that have zero shear modulus, or, in simpler terms, a fluid is a substance which cannot resist any shear force applied to it. Although the term includes both the liquid and gas phases, in common usage, fluid is often used as a synonym for liquid. For example, brake fluid is hydraulic oil and will not perform its required incompressible function if there is gas in it; this colloquial usage of the term is also common in medicine and in nutrition. Liquids form a free surface while gases do not. The distinction between solids and fluids is not entirely obvious; it is made by evaluating the viscosity of the substance. Silly Putty can be considered to behave like a solid or a fluid, depending on the time period over which it is observed, and is best described as a viscoelastic fluid. There are many examples of substances proving difficult to classify; a particularly interesting one is pitch, as demonstrated in the pitch drop experiment currently running at the University of Queensland. Fluids display properties such as not resisting deformation, or resisting it only slightly, and these properties are typically a function of their inability to support a shear stress in static equilibrium. Solids can be subjected to shear stresses and to normal stresses, both compressive and tensile. In contrast, ideal fluids can only be subjected to normal, compressive stress; real fluids display viscosity and so are capable of being subjected to low levels of shear stress. In a solid, shear stress is a function of strain, while in a fluid it is a function of strain rate; a consequence of this behavior is Pascal's law, which describes the role of pressure in characterizing a fluid's state. The study of fluids is fluid mechanics, which is subdivided into fluid dynamics and fluid statics.
10.
Fluid dynamics
–
In physics and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids. It has several subdisciplines, including aerodynamics and hydrodynamics. Before the twentieth century, hydrodynamics was synonymous with fluid dynamics; this is still reflected in the names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability. The foundational axioms of fluid dynamics are the conservation laws, specifically conservation of mass, conservation of linear momentum, and conservation of energy. These are based on classical mechanics and are modified in quantum mechanics. They are expressed using the Reynolds transport theorem. In addition to the above, fluids are assumed to obey the continuum assumption. Fluids are composed of molecules that collide with one another and with solid objects; however, the continuum assumption treats fluids as continuous, rather than discrete, and the fact that the fluid is made up of molecules is ignored. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in a number of ways, all of which make them easier to solve; some of the simplifications allow some simple fluid dynamics problems to be solved in closed form. Three conservation laws are used to solve fluid dynamics problems, and they may be applied to a region of the flow called a control volume. A control volume is a volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. Mass continuity: the rate of change of fluid mass inside a control volume must be equal to the net rate of fluid flow into the volume. Mass flow into the system is accounted as positive, and since the normal vector to the surface is opposite the sense of flow into the system, the term is negated.
The first term on the right is the net rate at which momentum is convected into the volume; the second term on the right is the force due to pressure on the volume's surfaces. The first two terms on the right are negated since momentum entering the system is accounted as positive and the normal vector to the surface is opposite the sense of flow into the system. The third term on the right is the net acceleration of the mass within the volume due to any body forces. Surface forces, such as viscous forces, are represented by F surf. The following is the form of the momentum conservation equation.
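The control-volume bookkeeping described above, where the rate of change of a conserved quantity inside the volume equals the net rate at which it flows in through the surface, can be illustrated with a minimal discrete mass balance. The flow rates and time step below are assumed values.

```python
def mass_after(m0, mdot_in, mdot_out, dt, steps):
    """Integrate dm/dt = mdot_in - mdot_out for a control volume.

    m0        initial mass inside the control volume (kg)
    mdot_in   mass flow rate in through the surface (kg/s), accounted positive
    mdot_out  mass flow rate out through the surface (kg/s)
    """
    m = m0
    for _ in range(steps):
        m += (mdot_in - mdot_out) * dt   # net inflow over one time step
    return m

# assumed example: 2 kg/s in, 1.5 kg/s out for 10 s -> mass grows by 5 kg
final_mass = mass_after(10.0, 2.0, 1.5, 0.1, 100)
```

With equal inflow and outflow the mass is constant, which is the steady-state form of the continuity statement.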
11.
Archimedes' principle
–
Archimedes' principle is a law of physics fundamental to fluid mechanics. It was formulated by Archimedes of Syracuse in On Floating Bodies. Practically, Archimedes' principle allows the buoyancy of an object partially or fully immersed in a liquid to be calculated. The downward force on the object is simply its weight; the upward, or buoyant, force on the object is that stated by Archimedes' principle above. Thus, the net force on the object is the difference between the buoyant force and its weight. If this net force is positive, the object rises; if negative, the object sinks; and if zero, the object is neutrally buoyant. Consider a cube immersed in a fluid with its sides parallel to the direction of gravity. The fluid will exert a normal force on each face, but the forces on the side faces cancel, and therefore only the forces on the top and bottom faces contribute to buoyancy. The pressure difference between the bottom and the top face is directly proportional to the height of the cube; multiplying the pressure difference by the area of a face gives the net force on the cube – the buoyancy, or the weight of the fluid displaced. By extending this reasoning to irregular shapes, we can see that, whatever the shape of the submerged body, the buoyant force is equal to the weight of the fluid displaced. Apparent loss in weight in water = weight of object in air − weight of object in water. The weight of the displaced fluid is directly proportional to the volume of the displaced fluid. The apparent weight of the object in the fluid is reduced because of the buoyant force acting on it. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy. Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting on it. Suppose that, when the rock is lowered into water, it displaces water of weight 3 newtons; the force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyant force: 10 − 3 = 7 newtons.
Buoyancy likewise reduces the apparent weight of objects that have sunk completely to the sea floor, which is why it is generally easier to lift an object up through the water than it is to pull it out of the water. Example: if you drop wood into water, buoyancy will keep it afloat. Example: a helium balloon in a moving car. When increasing speed or driving in a curve, the air moves in the direction opposite to the car's acceleration; however, due to buoyancy, the balloon is pushed out of the way by the denser air and will actually drift in the same direction as the car's acceleration. When an object is immersed in a liquid, the liquid exerts an upward force on it, which is known as the buoyant force. The net force acting on the object, then, is equal to the difference between the weight of the object and the weight of displaced liquid. Equilibrium, or neutral buoyancy, is achieved when these two weights are equal.
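The rock example above is a direct application of the principle and can be sketched in two small helpers; the function names are hypothetical and the densities assumed.

```python
def buoyant_force(fluid_density, displaced_volume, g=9.81):
    """Buoyant force (N) = weight of the displaced fluid (rho * V * g)."""
    return fluid_density * displaced_volume * g

def apparent_weight(weight, fluid_density, displaced_volume, g=9.81):
    """Apparent weight (N) of a submerged object: weight minus buoyancy."""
    return weight - buoyant_force(fluid_density, displaced_volume, g)

# the rock above: 10 N in vacuum, displacing water weighing 3 N -> 7 N on the string
rock_volume = 3.0 / (1000.0 * 9.81)          # volume whose water weighs 3 N
string_tension = apparent_weight(10.0, 1000.0, rock_volume)
```

Neutral buoyancy corresponds to apparent_weight returning zero, the "if zero" case described above.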
12.
Pascal's law
–
The law was established by French mathematician Blaise Pascal in 1647–48. The intuitive explanation of this formula is that the change in pressure between two elevations is due to the weight of the fluid between the elevations. A more correct interpretation, though, is that the change is caused by the change of potential energy per unit volume of the liquid due to the existence of the gravitational field. Note that the variation with height does not depend on any additional pressures; therefore, Pascal's law can be interpreted as saying that any change in pressure applied at any given point of the fluid is transmitted undiminished throughout the fluid. If a U-tube is filled with water and pistons are placed at each end, pressure exerted against one piston will be transmitted throughout the liquid: the pressure that the left piston exerts against the water will be equal to the pressure the water exerts against the right piston. Suppose the tube on the right side is made wider and a piston of a larger area is used, for example one with 50 times the area of the piston on the left. If a 1 N load is placed on the left piston, the difference between force and pressure becomes important: the additional pressure is exerted against the entire area of the larger piston. Since there is 50 times the area, 50 times as much force is exerted on the larger piston; thus, the larger piston will support a 50 N load – fifty times the load on the smaller piston. Forces can be multiplied using such a device: one newton input produces 50 newtons output. By further increasing the area of the larger piston, forces can be multiplied, in principle, by any amount. Pascal's principle underlies the operation of the hydraulic press. The hydraulic press does not violate energy conservation, because a decrease in distance moved compensates for the increase in force: when the small piston is moved downward 100 centimeters, the large piston will be raised only one-fiftieth of this, or 2 centimeters.
Pascal's principle applies to all fluids, whether gases or liquids. A typical application of Pascal's principle for gases and liquids is the automobile lift seen in many service stations. Increased air pressure produced by an air compressor is transmitted through the air to the surface of oil in an underground reservoir; the oil, in turn, transmits the pressure to a piston, which lifts the automobile. The relatively low pressure that exerts the lifting force against the piston is about the same as the air pressure in automobile tires. Hydraulics is employed by modern devices ranging from small to enormous. For example, there are hydraulic pistons in almost all construction machines where heavy loads are involved. Pascal's barrel is the name of a hydrostatics experiment allegedly performed by Blaise Pascal in 1646
Pascal's law
–
The effects of Pascal's law in the (possibly apocryphal) " Pascal's barrel " experiment.
13.
Newtonian fluid
–
That is equivalent to saying that those forces are proportional to the rates of change of the fluid's velocity vector as one moves away from the point in question in various directions. Newtonian fluids are the simplest mathematical models of fluids that account for viscosity. While no real fluid fits the definition perfectly, many common liquids and gases, such as water and air, can be assumed to be Newtonian for practical calculations under ordinary conditions. However, non-Newtonian fluids are relatively common; examples include oobleck, many polymer solutions, molten polymers, many solid suspensions, blood, and most highly viscous fluids. Newtonian fluids are named after Isaac Newton, who first postulated the relation between the strain rate and shear stress for such fluids in differential form. An element of a liquid or gas will suffer forces from the surrounding fluid, and these forces can be approximated to first order by a viscous stress tensor τ. The deformation of that element, relative to some previous state, can be approximated to first order by the velocity gradient ∇v. The tensors τ and ∇v can be expressed as 3×3 matrices. One also defines a total stress tensor σ that combines the shear stress with the conventional pressure p. The diagonal components of the viscosity tensor represent the molecular viscosity of a liquid, while the off-diagonal components represent turbulent eddy viscosity
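For simple shear, Newton's relation reduces to the scalar form τ = μ du/dy: shear stress proportional to the velocity gradient. A minimal sketch (the function name and the sample shear rate are my choices, not from the source):

```python
def newtonian_shear_stress(mu, velocity_gradient):
    """Newton's law of viscosity in simple shear: tau = mu * du/dy.
    mu is the (constant) dynamic viscosity in Pa*s,
    velocity_gradient is du/dy in 1/s."""
    return mu * velocity_gradient

# Water at 20 C has mu of roughly 1.0e-3 Pa*s; at a shear rate of 100 1/s:
tau = newtonian_shear_stress(1.0e-3, 100.0)  # 0.1 Pa
```

The defining property of a Newtonian fluid is that `mu` here is a constant: doubling the shear rate exactly doubles the stress, regardless of how long or how fast the fluid has been sheared.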
14.
Non-Newtonian fluid
–
A non-Newtonian fluid is a fluid that does not follow Newton's law of viscosity. Most commonly, the viscosity of such fluids is dependent on shear rate or shear rate history. Some non-Newtonian fluids with shear-independent viscosity, however, still exhibit normal stress differences or other non-Newtonian behavior. Many salt solutions and molten polymers are non-Newtonian fluids, as are commonly found substances such as ketchup, custard, toothpaste, starch suspensions (such as maizena), paint, and blood. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different and can even be time-dependent. Therefore, a constant coefficient of viscosity cannot be defined; although the concept of viscosity is commonly used in fluid mechanics to characterize the shear properties of a fluid, it can be inadequate to describe non-Newtonian fluids. Their properties are instead studied using tensor-valued constitutive equations, which are common in the field of continuum mechanics. The viscosity of a shear-thickening, or dilatant, fluid appears to increase with the rate of shear; corn starch dissolved in water is a common example: when stirred slowly it looks milky, when stirred vigorously it feels like a very viscous liquid. Note that all thixotropic fluids are shear thinning, but their viscosity is also time-dependent; thus, to avoid confusion, purely rate-dependent shear thinning is more clearly termed pseudoplastic. Another example of a shear-thinning fluid is blood; this behavior is highly favourable within the body, as it allows the viscosity of blood to decrease with increased shear strain rate. Fluids that have a linear shear stress/shear rate relationship but require a finite yield stress before they begin to flow are called Bingham plastics. 
Several examples are clay suspensions, drilling mud, toothpaste, mayonnaise, and chocolate; the surface of a Bingham plastic can hold peaks when it is still, whereas Newtonian fluids have flat, featureless surfaces when still. There are also fluids whose strain rate is a function of time. Fluids that require a gradually increasing shear stress to maintain a constant strain rate are referred to as rheopectic; the opposite case is a fluid that thins out with time and requires a decreasing stress to maintain a constant strain rate (thixotropic). Many common substances exhibit non-Newtonian flows; an uncooked cornflour suspension, known as oobleck, is a familiar example. The name oobleck is derived from the Dr. Seuss book Bartholomew and the Oobleck; because of its properties, oobleck is often used in demonstrations that exhibit its unusual behavior
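The shear-rate dependence described above is commonly captured by the Ostwald–de Waele power-law model, in which the apparent viscosity is K·γ̇^(n−1). A minimal sketch (a standard constitutive model, but the parameter values below are illustrative, not measured):

```python
def apparent_viscosity(K, n, shear_rate):
    """Ostwald-de Waele power-law model: mu_app = K * gamma_dot**(n-1).
    n < 1: shear thinning (pseudoplastic), e.g. ketchup, blood;
    n > 1: shear thickening (dilatant), e.g. cornstarch in water;
    n == 1: Newtonian, mu_app reduces to the constant K."""
    return K * shear_rate ** (n - 1.0)

# Illustrative parameters (assumed for demonstration only):
thinning = apparent_viscosity(K=10.0, n=0.5, shear_rate=100.0)    # 1.0  (viscosity drops)
thickening = apparent_viscosity(K=10.0, n=1.5, shear_rate=100.0)  # 100.0 (viscosity rises)
```

Note this model is still time-independent: it describes pseudoplastic and dilatant behavior, but not thixotropy or rheopecty, which would require the shear-rate history as an additional input.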
Non-Newtonian fluid
–
Demonstration of a non-Newtonian fluid at Universum in Mexico City
Non-Newtonian fluid
–
Classification of fluids with shear stress as a function of shear rate.
Non-Newtonian fluid
–
Oobleck on a subwoofer. Applying force to oobleck, by sound waves in this case, makes the non-Newtonian fluid thicken.
15.
Buoyancy
–
In science, buoyancy or upthrust is an upward force exerted by a fluid that opposes the weight of an immersed object. In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid; thus the pressure at the bottom of a column of fluid is greater than at the top of the column. Similarly, the pressure at the bottom of an object submerged in a fluid is greater than at the top of the object, and this pressure difference results in a net upward force on the object. For this reason, an object whose density is greater than that of the fluid in which it is submerged tends to sink; if the object is either less dense than the liquid or is shaped appropriately, the force can keep the object afloat. This can occur only in a reference frame which either has a gravitational field or is accelerating due to a force other than gravity defining a downward direction. In a situation of fluid statics, the net upward buoyancy force is equal to the magnitude of the weight of fluid displaced by the body. The center of buoyancy of an object is the centroid of the displaced volume of fluid. Archimedes' principle is named after Archimedes of Syracuse, who first discovered this law in 212 B.C. More tersely: buoyancy = weight of displaced fluid. The weight of the displaced fluid is directly proportional to its volume. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy; this force is also known as upthrust. Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting upon it, and suppose that when the rock is lowered into water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyancy force: 10 − 3 = 7 newtons. 
Buoyancy reduces the apparent weight of objects that have sunk completely to the sea floor, making it generally easier to lift an object up through the water than to pull it out of the water. The density of the object relative to the density of the fluid can easily be calculated without measuring any volumes: density of object / density of fluid = weight / (weight − apparent immersed weight). For example, if you drop wood into water, buoyancy will keep it afloat. Another example is a helium balloon in a moving car. During a period of increasing speed, the air mass inside the car moves in the direction opposite to the car's acceleration, and the balloon is also pulled this way. However, because the balloon is buoyant relative to the air, it ends up being pushed out of the way and drifts in the same direction as the car's acceleration. If the car slows down, the same balloon will begin to drift backward; for the same reason, as the car goes round a curve, the balloon will drift towards the inside of the curve
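The rock example and the relative-density formula above can be checked numerically. A minimal sketch (function names are mine, not from the source):

```python
def apparent_weight(weight, displaced_fluid_weight):
    """Archimedes' principle: the buoyant force equals the weight of
    displaced fluid, so the apparent (immersed) weight is W - W_displaced."""
    return weight - displaced_fluid_weight

def relative_density(weight, apparent_immersed_weight):
    """density_object / density_fluid = W / (W - W_apparent_immersed),
    derived without measuring any volumes."""
    return weight / (weight - apparent_immersed_weight)

# The rock from the text: 10 N in vacuum, displacing 3 N of water:
w_app = apparent_weight(10.0, 3.0)          # 7.0 N of tension in the string
ratio = relative_density(10.0, w_app)       # 10/3, i.e. the rock is ~3.3x denser than water
```

Because the rock's relative density exceeds 1, it sinks; a relative density below 1 (e.g. wood) would correspond to a floating object.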
Buoyancy
–
A metallic coin (one British pound coin) floats in mercury due to the buoyancy force upon it and appears to float higher because of the surface tension of the mercury.
Buoyancy
–
The forces at work in buoyancy. Note that the object is floating because the upward force of buoyancy is equal to the downward force of gravity.
16.
Mixing (process engineering)
–
In industrial process engineering, mixing is a unit operation that involves manipulation of a heterogeneous physical system with the intent to make it more homogeneous. A familiar example is pumping the water in a swimming pool to homogenize the water temperature. Mixing is performed to allow heat and/or mass transfer to occur between one or more streams, components or phases; modern industrial processing almost always involves some form of mixing. Some classes of chemical reactors are also mixers. With the right equipment, it is possible to mix a solid, liquid or gas into another solid, liquid or gas. The opposite of mixing is segregation; a classical example of segregation is the brazil nut effect. The type of operation and equipment used during mixing depends on the state of the materials being mixed; in this context, the act of mixing may be synonymous with stirring or kneading processes. Mixing of liquids occurs frequently in process engineering, and the nature of the liquids to blend determines the equipment used: turbulent or transitional mixing is conducted with turbines or impellers. Mixing of liquids that are miscible or at least soluble in each other occurs frequently in process engineering. An everyday example would be the addition of milk or cream to tea or coffee; since both liquids are water-based, they dissolve easily in one another. The momentum of the liquid being added is sometimes enough to cause enough turbulence to mix the two, since the viscosity of both liquids is relatively low. If necessary, a spoon or paddle could be used to complete the mixing process. Blending a more viscous liquid, such as honey, requires more mixing power per unit volume to achieve the same homogeneity in the same amount of time. Blending powders is one of the oldest unit operations in the solids handling industries; for many decades powder blending has been used just to homogenize bulk materials. 
Many different machines have been designed to handle materials with various bulk solids properties. On the basis of the practical experience gained with these different machines, engineering knowledge has been developed to construct reliable equipment and to predict scale-up and mixing behavior. This wide range of applications of mixing equipment requires a high level of knowledge, long experience and extended test facilities to arrive at the optimal selection of equipment. In powder mixing, two different dimensions of the process can be distinguished: convective mixing and intensive mixing. In the case of convective mixing, material in the mixer is transported from one location to another; this type of mixing leads to a less ordered state inside the mixer, in which the components to be mixed are distributed over the other components. With progressing time the mixture becomes more randomly ordered, and after a certain mixing time the ultimate random state is reached
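The distinction above between turbulent, transitional and laminar liquid mixing is conventionally made with the impeller Reynolds number, Re = ρND²/μ. A minimal sketch, with illustrative values and the usual stirred-tank rule of thumb (the thresholds and sample numbers are assumptions, not from the source):

```python
def impeller_reynolds(density, rotational_speed, diameter, viscosity):
    """Impeller Reynolds number: Re = rho * N * D**2 / mu,
    with N in revolutions per second and D the impeller diameter in m.
    Common rule of thumb: Re > ~10,000 turbulent, Re < ~10 laminar."""
    return density * rotational_speed * diameter ** 2 / viscosity

# Water (rho = 1000 kg/m3, mu = 1e-3 Pa*s) stirred at 2 rev/s
# with a 0.1 m impeller (illustrative values):
re = impeller_reynolds(1000.0, 2.0, 0.1, 1.0e-3)  # 200000.0 / 10 = 20000 -> turbulent regime
```

The same calculation explains the honey remark: raising the viscosity by a few orders of magnitude pushes Re into the laminar regime, where turbines are ineffective and more power per unit volume (or different impellers) is needed.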
Mixing (process engineering)
–
Machine for incorporating liquids and finely ground solids
Mixing (process engineering)
–
Schematics of an agitated vessel with a Rushton turbine and baffles
Mixing (process engineering)
–
A magnetic stirrer
Mixing (process engineering)
–
Axial flow impeller (left) and radial flow impeller (right).
17.
Surface tension
–
Surface tension is the elastic tendency of a fluid surface which makes it acquire the least surface area possible. Surface tension allows insects, usually denser than water, to float on a water surface. At liquid–air interfaces, surface tension results from the greater attraction of liquid molecules to each other than to the molecules in the air. The net effect is an inward force at its surface that causes the liquid to behave as if its surface were covered with a stretched elastic membrane. Thus, the surface comes under tension from the imbalanced forces. Because of the relatively high attraction of water molecules to each other through a web of hydrogen bonds, water has a higher surface tension than most other liquids. Surface tension is an important factor in the phenomenon of capillarity. Surface tension has the dimension of force per unit length, or of energy per unit area. The two are equivalent, but when referring to energy per unit of area, it is common to use the term surface energy. In materials science, surface tension is used for either surface stress or surface free energy. The cohesive forces among liquid molecules are responsible for the phenomenon of surface tension. In the bulk of the liquid, each molecule is pulled equally in every direction by neighboring liquid molecules; the molecules at the surface do not have the same molecules on all sides of them and therefore are pulled inwards. This creates some internal pressure and forces liquid surfaces to contract to the minimal area. Surface tension is responsible for the shape of liquid droplets. Although easily deformed, droplets of water tend to be pulled into a spherical shape by the imbalance in cohesive forces of the surface layer. In the absence of other forces, including gravity, drops of virtually all liquids would be approximately spherical. The spherical shape minimizes the necessary wall tension of the surface according to Laplace's law. 
Another way to view surface tension is in terms of energy: a molecule in contact with a neighbor is in a lower state of energy than if it were alone. The interior molecules have as many neighbors as they can possibly have, so for the liquid to minimize its energy state, the number of higher-energy boundary molecules must be minimized. The minimized number of boundary molecules results in a minimal surface area. As a result of surface area minimization, a surface will assume the smoothest shape it can, since any curvature in the surface shape results in greater area and hence higher energy. Consequently, the surface will push back against any curvature in much the same way as a ball pushed uphill will push back to minimize its gravitational potential energy. Bubbles in pure water are unstable; the addition of surfactants, however, can have a stabilizing effect on the bubbles
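Laplace's law mentioned above gives the excess pressure inside a spherical droplet as ΔP = 2γ/r. A minimal sketch (the function name and the sample droplet size are mine, not from the source):

```python
def laplace_pressure_droplet(surface_tension, radius):
    """Young-Laplace excess pressure inside a spherical droplet:
    delta_P = 2 * gamma / r (Pa), with gamma in N/m and r in m."""
    return 2.0 * surface_tension / radius

# A water droplet of radius 1 mm, using gamma ~ 0.0728 N/m at 20 C:
dp = laplace_pressure_droplet(0.0728, 1.0e-3)  # about 145.6 Pa
```

The 1/r dependence shows why small droplets are harder to deform than large ones: halving the radius doubles the internal excess pressure.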
Surface tension
–
Surface tension preventing a paper clip from submerging.
Surface tension
–
A. Water beading on a leaf
Surface tension
–
C. Water striders stay atop the liquid because of surface tension
18.
Capillary action
–
Capillary action is the ability of a liquid to flow in narrow spaces without the assistance of, or even in opposition to, external forces like gravity. It occurs because of intermolecular forces between the liquid and surrounding solid surfaces. If the diameter of the tube is sufficiently small, then the combination of surface tension and adhesive forces between the liquid and the container wall acts to lift the liquid. The first recorded observation of capillary action was by Leonardo da Vinci. A former student of Galileo, Niccolò Aggiunti, was said to have investigated capillary action. Robert Boyle then reported an experiment in which he dipped a capillary tube into red wine and then subjected the tube to a partial vacuum. Some thought that liquids rose in capillaries because air couldn't enter capillaries as easily as liquids; others thought that the particles of liquid were attracted to each other and to the walls of the capillary. Thomas Young and Pierre-Simon Laplace derived the Young–Laplace equation of capillary action, and by 1830 the German mathematician Carl Friedrich Gauss had determined the boundary conditions governing capillary action. In 1871, the British physicist William Thomson determined the effect of the meniscus on a liquid's vapor pressure, a relation known as the Kelvin equation; the German physicist Franz Ernst Neumann subsequently determined the interaction between two immiscible liquids. Albert Einstein's first paper, submitted to Annalen der Physik in 1900, was on capillarity. A common apparatus used to demonstrate the phenomenon is the capillary tube. When the lower end of a glass tube is placed in a liquid such as water, adhesion occurs between the fluid and the inner wall, pulling the liquid column up until there is a sufficient mass of liquid for gravitational forces to balance these intermolecular forces. Thus, a narrow tube will draw a liquid column higher than a wider tube will. Capillary action is essential for the drainage of constantly produced tear fluid from the eye. Wicking is the absorption of a liquid by a material in the manner of a candle wick. 
Paper towels absorb liquid through capillary action, allowing a fluid to be transferred from a surface to the towel; the small pores of a sponge act as small capillaries, causing it to absorb a large amount of fluid. Some textile fabrics are said to use capillary action to wick sweat away from the skin; these are often referred to as wicking fabrics, after the capillary properties of candle and lamp wicks. Capillary action is observed in thin-layer chromatography, in which a solvent moves vertically up a plate via capillary action; in this case the pores are gaps between very small particles. Capillary action draws ink to the tips of fountain pen nibs from a reservoir or cartridge inside the pen. In hydrology, capillary action describes the attraction of water molecules to soil particles. Capillary action is responsible for moving groundwater from wet areas of the soil to dry areas; differences in soil potential drive capillary action in soil. The thinner the space in which the water can travel, the higher it rises. For a water-filled glass tube in air at standard laboratory conditions, γ = 0.0728 N/m at 20 °C, ρ = 1000 kg/m3, and g = 9.81 m/s2
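The rise height for the conditions quoted above follows Jurin's law, h = 2γ cos θ / (ρgr). A minimal sketch using the constants given in the text, with the tube radius and contact angle chosen by me for illustration:

```python
import math

def capillary_rise(surface_tension, contact_angle_deg, density, radius, g=9.81):
    """Jurin's law: h = 2 * gamma * cos(theta) / (rho * g * r).
    gamma in N/m, density in kg/m3, radius in m; returns height in m."""
    theta = math.radians(contact_angle_deg)
    return 2.0 * surface_tension * math.cos(theta) / (density * g * radius)

# Water in glass at the conditions from the text (gamma = 0.0728 N/m,
# rho = 1000 kg/m3, g = 9.81 m/s2), with an assumed tube radius of 1 mm
# and a contact angle of 0 for a clean water/glass interface:
h = capillary_rise(0.0728, 0.0, 1000.0, 1.0e-3)  # about 0.0148 m, i.e. ~15 mm
```

The 1/r dependence makes the narrow-tube observation quantitative: halving the tube radius doubles the rise height.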
Capillary action
–
Capillary flow experiment to investigate capillary flows and phenomena aboard the International Space Station
Capillary action
–
Capillary action of water compared to mercury, in each case with respect to a polar surface such as glass
Capillary action
–
Water height in a capillary plotted against capillary diameter
Capillary action
–
Capillary flow in a brick, with a sorptivity of 5.0 mm min^(−1/2) and a porosity of 0.25.
19.
Boyle's law
–
Boyle's law is an experimental gas law that describes how the pressure of a gas tends to increase as the volume of the container decreases. Mathematically, Boyle's law can be stated as P ∝ 1/V, or P V = k, where P is the pressure of the gas, V is the volume of the gas, and k is a constant. The equation states that the product of pressure and volume is a constant for a given mass of confined gas as long as the temperature is constant. For comparing the same substance under two different sets of conditions, the law can be expressed as P1 V1 = P2 V2. The equation shows that, as volume increases, the pressure of the gas decreases in proportion; similarly, as volume decreases, the pressure of the gas increases. The law was named after the chemist and physicist Robert Boyle, who published the original law in 1662. This relationship between pressure and volume was first noted by Richard Towneley and Henry Power; Robert Boyle confirmed their discovery through experiments and published the results. According to Robert Gunther and other authorities, it was Boyle's assistant, Robert Hooke, who built the experimental apparatus. Boyle's law is based on experiments with air, which he considered to be a fluid of particles at rest in between small invisible springs. At that time, air was still seen as one of the four elements; Boyle's interest was probably to understand air as an essential element of life, and he published, for example, works on the growth of plants without air. Boyle used a closed J-shaped tube and, after pouring mercury in from one side, forced the air on the other side to contract under the pressure of the mercury. The French physicist Edme Mariotte discovered the law independently of Boyle in 1679; thus this law is sometimes referred to as Mariotte's law or the Boyle–Mariotte law. Instead of a static theory, a kinetic theory is needed, which was provided two centuries later by Maxwell and Boltzmann. This law was the first physical law to be expressed in the form of an equation describing the dependence of two variable quantities. 
The law itself can be stated as follows: the pressure and volume of a gas have an inverse relationship; if volume increases, then pressure decreases and vice versa, when temperature is held constant. Therefore, when the volume is halved, the pressure is doubled, and if the volume is doubled, the pressure is halved. Boyle's law states that at constant temperature for a fixed mass, the absolute pressure and the volume of a gas are inversely proportional. The law can also be stated in a slightly different manner: the product of absolute pressure and volume is always constant. Most gases behave like ideal gases at moderate pressures and temperatures; the technology of the 17th century could not produce high pressures or low temperatures, hence the law was not likely to show deviations at the time of publication. The deviation is expressed as the compressibility factor
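The halving/doubling statement above can be checked with the two-state form P1 V1 = P2 V2. A minimal sketch (function name and sample values are mine):

```python
def boyle_final_pressure(p1, v1, v2):
    """Boyle's law at constant temperature: P1*V1 = P2*V2, so P2 = P1*V1/V2."""
    return p1 * v1 / v2

# Halving the volume of a gas at 101.325 kPa doubles its pressure:
p2 = boyle_final_pressure(101.325, 2.0, 1.0)  # 202.65 kPa
```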
20.
Charles's law
–
Charles's law is an experimental gas law that describes how gases tend to expand when heated. A modern statement of Charles's law is: when the pressure on a sample of a dry gas is held constant, the Kelvin temperature and the volume are in direct proportion. This directly proportional relationship can be written as V ∝ T, or V / T = k. This law describes how a gas expands as the temperature increases; conversely, a decrease in temperature will lead to a decrease in volume. The equation shows that, as temperature increases, the volume of the gas also increases in proportion. The law was named after the scientist Jacques Charles, who formulated the original law in his unpublished work from the 1780s. The basic principles had already been described by Guillaume Amontons and Francis Hauksbee a century earlier. Dalton was the first to demonstrate that the law applied generally to all gases. With measurements only at the two thermometric fixed points of water, Gay-Lussac was unable to show that the equation relating volume to temperature was a linear function. On mathematical grounds alone, Gay-Lussac's paper does not permit the assignment of any law stating the linear relation, and the equation he gave does not contain the temperature and so has nothing to do with what became known as Charles's law. Gay-Lussac's value for k was identical to Dalton's earlier value for vapours; Gay-Lussac gave credit for this equation to unpublished statements by his fellow Republican citizen J. Charles in 1787. In the absence of a firm record, the gas law relating volume to temperature cannot strictly be named after Charles. Dalton's measurements had much more scope regarding temperature than Gay-Lussac's, measuring the volume not only at the fixed points of water but over a wider range. His conclusion for vapours is a statement of what became known, wrongly, as Charles's law, and then even more wrongly as Gay-Lussac's law. His first law was that of partial pressures. Charles's law appears to imply that the volume of a gas will descend to zero at a certain temperature, −273.15 °C. 
Gay-Lussac had no experience of liquid air, although he appears to have believed that the permanent gases such as air would follow the same law. However, the zero on the Kelvin temperature scale was originally defined in terms of the second law of thermodynamics, and Thomson did not assume that this was equal to the zero-volume point of Charles's law. The two can be shown to be equivalent by Ludwig Boltzmann's statistical view of entropy. Charles also stated: "The volume of a fixed mass of dry gas increases or decreases by 1⁄273 times the volume at 0 °C for every 1 °C rise or fall in temperature." Thus VT = V0 + (1/273) V0 T, that is, VT = V0 (1 + T/273), where VT is the volume of gas at temperature T in °C and V0 is the volume at 0 °C; under this definition, the demonstration of Charles's law is almost trivial
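In the modern Kelvin form, V1/T1 = V2/T2 at constant pressure. A minimal sketch (function name and sample values are mine):

```python
def charles_final_volume(v1, t1_kelvin, t2_kelvin):
    """Charles's law at constant pressure: V1/T1 = V2/T2,
    with temperatures on the absolute (Kelvin) scale."""
    return v1 * t2_kelvin / t1_kelvin

# Warming a gas from 0 C (273.15 K) to 273.15 C (546.3 K) doubles its volume:
v2 = charles_final_volume(1.0, 273.15, 546.3)  # 2.0
```

Note the absolute scale is essential: plugging Celsius temperatures into this ratio would give wrong answers, which is exactly the 1/273-per-degree statement above rewritten around T = −273.15 °C.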
21.
Gay-Lussac's law
–
Gay-Lussac is most often recognized for the pressure law, which established that the pressure of an enclosed gas is directly proportional to its temperature and which he was the first to formulate. This law is known variously as the pressure law or Amontons's law. For example, Gay-Lussac found that 2 volumes of hydrogen and 1 volume of oxygen would react to form 2 volumes of gaseous water. Based on Gay-Lussac's results, Amedeo Avogadro theorized that, at the same temperature and pressure, equal volumes of gas contain equal numbers of molecules. The law of combining gases was made public by Joseph Louis Gay-Lussac in 1808. Avogadro's hypothesis, however, was not initially accepted by chemists until the Italian chemist Stanislao Cannizzaro was able to convince the First International Chemical Congress in 1860. Amontons discovered the pressure-temperature relationship while building an air thermometer: the pressure of a gas of fixed mass and fixed volume is directly proportional to the gas's absolute temperature. If a gas's temperature increases, then so does its pressure, provided the mass and volume are held constant. The law has a particularly simple mathematical form if the temperature is measured on an absolute scale, such as in kelvins. The law can then be expressed mathematically as P ∝ T, or P / T = k, where P is the pressure of the gas, T is the temperature of the gas, and k is a constant. For comparing the same substance under two different sets of conditions, the law can be written as P1 / T1 = P2 / T2, or equivalently P1 T2 = P2 T1. Because Amontons discovered the law beforehand, Gay-Lussac's name is now generally associated within chemistry with the law of combining volumes discussed above; some introductory physics textbooks still define the pressure-temperature relationship as Gay-Lussac's law. Gay-Lussac primarily investigated the relationship between volume and temperature and published it in 1802, but his work did cover some comparison between pressure and temperature; in recent years, however, the term has fallen out of favor. 
Gay-Lussac's law, Charles's law, and Boyle's law form the combined gas law. These three gas laws, in combination with Avogadro's law, can be generalized by the ideal gas law
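The combined gas law mentioned above merges the three relations into P1 V1 / T1 = P2 V2 / T2 for a fixed amount of gas. A minimal sketch (function name and sample values are mine):

```python
def combined_gas_law_p2(p1, v1, t1, v2, t2):
    """Combined gas law for a fixed amount of gas:
    P1*V1/T1 = P2*V2/T2, so P2 = P1*V1*T2 / (T1*V2). T in kelvins."""
    return p1 * v1 * t2 / (t1 * v2)

# Halving the volume (Boyle) while doubling the absolute temperature
# (Gay-Lussac/Amontons) quadruples the pressure:
p2 = combined_gas_law_p2(100.0, 2.0, 300.0, 1.0, 600.0)  # 400.0
```

Holding any one variable fixed recovers the corresponding individual law: fixed T gives Boyle's law, fixed P gives Charles's law, and fixed V gives the pressure law.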
22.
Rheology
–
The term rheology was coined by Eugene C. Bingham, a professor at Lafayette College, in 1920, from a suggestion by a colleague. The term was inspired by the aphorism of Simplicius, panta rhei, "everything flows", and was first used to describe the flow of liquids and the deformation of solids. Newtonian fluids can be characterized by a single coefficient of viscosity for a specific temperature. Although this viscosity will change with temperature, it does not change with the strain rate. Only a small group of fluids exhibit such constant viscosity. The large class of fluids whose viscosity changes with the strain rate are called non-Newtonian fluids. For example, ketchup can have its viscosity reduced by shaking; ketchup is a shear-thinning material, like yogurt and emulsion paint, exhibiting thixotropy, where an increase in relative flow velocity, for example by stirring, will cause a reduction in viscosity. Some other non-Newtonian materials show the opposite behavior, rheopecty: viscosity going up with relative deformation. Since Sir Isaac Newton originated the concept of viscosity, the study of liquids with strain-rate-dependent viscosity is also often called non-Newtonian fluid mechanics. Materials with the characteristics of a fluid will flow when subjected to a stress, which is defined as the force per area. There are different sorts of stress, and materials can respond differently to different stresses. Much of theoretical rheology is concerned with associating external forces and torques with internal stresses and internal strain gradients and flow velocities. In this sense, a solid undergoing plastic deformation is a fluid. Granular rheology refers to the continuum mechanical description of granular materials. The relevant experimental techniques are known as rheometry and are concerned with the determination of well-defined rheological material functions; such relationships are then amenable to mathematical treatment by the established methods of continuum mechanics. 
The characterization of flow or deformation originating from a shear stress field is called shear rheometry; the study of extensional flows is called extensional rheology. Shear flows are much easier to study, and thus much more experimental data are available for shear flows than for extensional flows. A rheologist is an interdisciplinary scientist or engineer who studies the flow of liquids or the deformation of soft solids. It is not a primary degree subject; there is no qualification of rheologist as such. Most rheologists have a qualification in mathematics, the sciences, engineering, medicine, or certain technologies. Elasticity is an essentially time-independent process, as the strains appear the moment the stress is applied. If the material deformation rate increases linearly with increasing applied stress, then the material is viscous in the Newtonian sense. Viscoelastic materials, by contrast, are characterized by a delay between the applied constant stress and the maximum strain
Rheology
–
Linear structure of cellulose, the most common component of all organic plant life on Earth. Note the evidence of hydrogen bonding, which increases the viscosity at any temperature and pressure. This is an effect similar to that of polymer crosslinking, but less pronounced.
23.
Smart fluid
–
A smart fluid is a fluid whose properties can be changed by applying an electric field or a magnetic field. The most developed smart fluids today are fluids whose viscosity increases when a field is applied. In magnetorheological (MR) fluids, small magnetic dipoles are suspended in a fluid, and the applied magnetic field causes these small magnets to line up. MR fluids have been used in the suspension of the 2002 model of the Cadillac Seville STS automobile and, more recently, in other vehicles; depending on road conditions, the damping fluid's viscosity is adjusted. This is more expensive than traditional systems, but it provides better control. Some haptic devices whose resistance to touch can be controlled are also based on these MR fluids. Another major type of smart fluid is the electrorheological (ER) fluid. Besides fast-acting clutches, brakes, shock absorbers and hydraulic valves, there are other, more esoteric applications; some smart fluids, for example, change their surface tension in the presence of an electric field. Other applications include brakes and seismic dampers, which are used in buildings in seismically active zones to damp the oscillations occurring in an earthquake. Since then it appears that interest has waned a little, possibly due to the existence of various limitations of smart fluids which have yet to be overcome
24.
Magnetorheological fluid
–
A magnetorheological (MR) fluid is a type of smart fluid consisting of magnetic particles in a carrier fluid, usually a type of oil. When subjected to a magnetic field, the fluid greatly increases its apparent viscosity. Importantly, the yield stress of the fluid in its active state can be controlled very accurately by varying the magnetic field intensity; the upshot is that the fluid's ability to transmit force can be controlled with an electromagnet. Extensive discussions of the physics and applications of MR fluids can be found in a recent book. MR fluid is different from a ferrofluid, which has smaller particles: MR fluid particles are primarily on the micrometre scale and are too dense for Brownian motion to keep them suspended, whereas ferrofluid particles are primarily nanoparticles that are suspended by Brownian motion and generally will not settle under normal conditions. As a result, these two fluids have very different applications. When a magnetic field is applied, the particles align themselves along the lines of magnetic flux. To understand and predict the behavior of an MR fluid it is necessary to model the fluid mathematically, a task slightly complicated by its varying material properties. As mentioned above, smart fluids have a low viscosity in the absence of a magnetic field; in the case of MR fluids, the fluid actually assumes properties comparable to a solid when in the activated state. The behavior of an MR fluid can thus be considered similar to that of a Bingham plastic, a material model which has been well investigated. However, an MR fluid does not exactly follow the characteristics of a Bingham plastic; for example, below the yield stress, the fluid behaves as a viscoelastic material, with a complex modulus that is also known to be dependent on the magnetic field intensity. MR fluids are also known to be subject to shear thinning. Low shear strength has been the primary reason for their limited range of applications; in the absence of compression, the maximum shear strength is about 100 kPa. 
Compressing the fluid in the field direction, with a compressive stress of 2 MPa, raises the shear strength substantially, as does replacing the standard magnetic particles with elongated ones. Ferroparticles settle out of the suspension over time due to the inherent density difference between the particles and their carrier fluid; the rate and degree to which this occurs is one of the primary attributes considered in industry when implementing or designing an MR device. Surfactants are typically used to offset this effect, but at a cost to the fluid's magnetic saturation, and thus to the maximum yield stress exhibited in its activated state
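The Bingham-plastic idealisation described above can be sketched numerically. The following is a minimal sketch only: the function names and parameter values (a yield stress that grows with the field and saturates near the ~100 kPa figure quoted above) are illustrative assumptions, not measured MR-fluid data.

```python
def bingham_shear_stress(gamma_dot, tau_y, eta_p):
    """Bingham plastic: tau = tau_y + eta_p * gamma_dot, valid once the
    material is flowing (tau > tau_y). gamma_dot is the shear rate (1/s)."""
    return tau_y + eta_p * gamma_dot

def mr_yield_stress(h, tau_max=100e3, h_sat=250e3):
    """Hypothetical field-dependent yield stress (Pa): grows linearly with
    field strength h, then saturates at tau_max (~100 kPa, as quoted above)."""
    return tau_max * min(h / h_sat, 1.0)

# With no field, only the small viscous term resists shear; near magnetic
# saturation, the yield stress dominates at moderate shear rates.
tau_off = bingham_shear_stress(100.0, mr_yield_stress(0.0), eta_p=0.3)    # 30.0 Pa
tau_on = bingham_shear_stress(100.0, mr_yield_stress(250e3), eta_p=0.3)   # 100030.0 Pa
```

The key design point this illustrates is that the controllable term (the field-dependent yield stress) can be orders of magnitude larger than the off-state viscous stress, which is what makes MR devices such as clutches and dampers practical.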
Magnetorheological fluid
Magnetorheological fluid
–
Schematic of a magnetorheological fluid solidifying and blocking a pipe in response to an external magnetic field. (Animated version available.)
Magnetorheological fluid
Magnetorheological fluid
25.
Electrorheological fluid
–
Electrorheological (ER) fluids are suspensions of extremely fine non-conducting but electrically active particles in an electrically insulating fluid. The apparent viscosity of these fluids changes reversibly by a factor of up to 100,000 in response to an electric field. For example, a typical ER fluid can go from the consistency of a liquid to that of a gel, and back, with response times on the order of milliseconds. The effect is called the Winslow effect after its discoverer, the American inventor Willis Winslow. Common applications are in ER brakes and shock absorbers, and there are many novel uses for these fluids: potential uses include accurate abrasive polishing and haptic controllers, and Motorola filed a patent application for mobile-device applications in 2006. The change in apparent viscosity depends on the applied electric field. The change is not a simple change in viscosity, hence these fluids are now known as ER fluids; the effect is better described as an electric-field-dependent shear yield stress. When activated, an ER fluid behaves as a Bingham plastic, with a yield point determined by the electric field strength. After the yield point is reached, the fluid shears as a fluid; hence the resistance to motion of the fluid can be controlled by adjusting the applied electric field. ER fluids are a type of smart fluid; a simple ER fluid can be made by mixing cornflour into a light vegetable oil or silicone oil. There are two theories to explain the effect: the interfacial tension or "water bridge" theory, and the electrostatic theory. The water bridge theory assumes a three-phase system: the particles contain the third phase, another liquid immiscible with the main-phase liquid. With no applied electric field, the third phase is strongly attracted to and held within the particles.
This means the ER fluid is a suspension of particles which behaves as a liquid. When an electric field is applied, the third phase is driven to one side of the particles by electro-osmosis and binds adjacent particles together to form chains; this chain structure means the ER fluid has become a solid. The electrostatic theory assumes just a two-phase system, with dielectric particles forming chains aligned with the electric field, in a way analogous to how magnetorheological fluids work. An ER fluid has been constructed with the solid phase made from a conductor coated in an insulator; this ER fluid clearly cannot work by the water bridge model. However, although this demonstrates that some ER fluids work by the electrostatic effect, it does not prove that all do. The advantage of an ER fluid which operates on the electrostatic effect is the elimination of leakage current, i.e. potentially there is no direct current
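The size of the apparent-viscosity change quoted above follows directly from the Bingham-plastic description: at low shear rates the field-induced yield stress dwarfs the viscous contribution. A minimal sketch, with purely illustrative numbers (the off-state viscosity and field-on yield stress below are assumptions, not data for any particular ER fluid):

```python
def apparent_viscosity(gamma_dot, tau_y, eta):
    """For a flowing Bingham fluid, tau = tau_y + eta * gamma_dot, so the
    apparent viscosity tau / gamma_dot equals tau_y / gamma_dot + eta."""
    return tau_y / gamma_dot + eta

eta_off = 0.1       # Pa*s, zero-field viscosity (assumed)
tau_y_on = 1000.0   # Pa, field-on yield stress (assumed)

# At a low shear rate (0.1 1/s) the apparent viscosity jumps by roughly
# five orders of magnitude, comparable to the factor quoted above.
ratio = apparent_viscosity(0.1, tau_y_on, eta_off) / eta_off
```

Note that the "change" is rate-dependent: at high shear rates the yield-stress term is diluted and the apparent viscosity approaches the plain off-state value, which is why the effect is better described as a field-dependent yield stress than as a true viscosity change.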
Electrorheological fluid
–
Continuum mechanics
26.
Robert Boyle
–
Robert Boyle FRS was an Anglo-Irish natural philosopher, chemist, physicist and inventor born in Lismore, County Waterford, Ireland. Boyle is largely regarded today as the first modern chemist, and therefore one of the founders of modern chemistry, and one of the pioneers of the modern experimental scientific method. He is best known for Boyle's law, which describes the inversely proportional relationship between the absolute pressure and volume of a gas if the temperature is kept constant within a closed system. Among his works, The Sceptical Chymist is seen as a cornerstone book in the field of chemistry. He was a devout and pious Anglican and is noted for his writings in theology. Boyle was born in Lismore Castle, in County Waterford, Ireland, the seventh son and fourteenth child of Richard Boyle, 1st Earl of Cork, and Catherine Fenton. Richard Boyle had arrived in Dublin from England in 1588 during the Tudor plantations of Ireland and had amassed enormous landholdings by the time Robert was born. As a child, Boyle was fostered to a local family. He received private tutoring in Latin, Greek, and French, and when he was eight years old, following the death of his mother, he was sent to Eton College in England; his father's friend, Sir Henry Wotton, was then the provost of the college. During this time, his father hired a private tutor, Robert Carew, who had knowledge of Irish, to act as private tutor to his sons at Eton. After spending over three years at Eton, Robert travelled abroad with a French tutor. They visited Italy in 1641 and remained in Florence during the winter of that year studying the paradoxes of the great star-gazer Galileo Galilei, who was elderly but still living in 1641. Boyle returned to England from continental Europe in mid-1644 with a keen interest in scientific research. His father had died the previous year and had left him the manor of Stalbridge in Dorset, England, and substantial estates he had acquired in County Limerick in Ireland.
They met frequently in London, often at Gresham College. Having made several visits to his Irish estates beginning in 1647, Robert moved to Ireland in 1652 but became frustrated at his inability to make progress in his chemical work; in one letter, he described Ireland as a country where chemical spirits were so misunderstood. In 1654, Boyle left Ireland for Oxford to pursue his work more successfully. An inscription can be found on the wall of University College, Oxford, on the High Street, marking the spot where Cross Hall stood until the early 19th century; it was here that Boyle rented rooms from the apothecary who owned the Hall. An account of Boyle's work with the air pump was published in 1660 under the title New Experiments Physico-Mechanical, Touching the Spring of the Air. The person who originally formulated the hypothesis was Henry Power in 1661; Boyle in 1662 included a reference to a paper written by Power, and in continental Europe the hypothesis is attributed to Edme Mariotte. In 1680 he was elected president of the society, but declined the honour from a scruple about oaths. His list of 24 possible future inventions is extraordinary because all but a few have come true
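Boyle's law itself is a one-line computation: at constant temperature, the product pV is constant for a fixed quantity of gas. A minimal sketch (the function name and numbers are illustrative, not from any of Boyle's experiments):

```python
def boyle_volume(p1, v1, p2):
    """Boyle's law at constant temperature: p1 * v1 = p2 * v2,
    so the new volume is v2 = p1 * v1 / p2."""
    return p1 * v1 / p2

# Doubling the absolute pressure on 2 m^3 of gas halves its volume.
v2 = boyle_volume(100e3, 2.0, 200e3)   # -> 1.0 m^3
```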
Robert Boyle
–
Robert Boyle (1627–91)
Robert Boyle
–
Sculpture of a young boy, thought to be Boyle, on his parents' monument in St Patrick's Cathedral, Dublin.
Robert Boyle
–
One of Robert Boyle's notebooks (1690–1691) held by the Royal Society of London. The Royal Society archives hold 46 volumes of philosophical, scientific and theological papers by Boyle and seven volumes of his correspondence.
Robert Boyle
–
Plaque at the site of Boyle and Hooke's experiments in Oxford
27.
Augustin-Louis Cauchy
–
Baron Augustin-Louis Cauchy FRS FRSE was a French mathematician who made pioneering contributions to analysis. He was one of the first to state and prove theorems of calculus rigorously, and he almost single-handedly founded complex analysis and the study of permutation groups in abstract algebra. A profound mathematician, Cauchy had great influence over his contemporaries; his writings range widely in mathematics and mathematical physics, and more concepts and theorems have been named for Cauchy than for any other mathematician. Cauchy was a prolific writer: he wrote approximately eight hundred research articles. Cauchy was the son of Louis François Cauchy and Marie-Madeleine Desestre. He married Aloise de Bure in 1818; she was a relative of the publisher who published most of Cauchy's works, and by her he had two daughters, Marie Françoise Alicia and Marie Mathilde. Cauchy's father was a high official in the Parisian police of the Ancien Régime; he lost his position because of the French Revolution, which broke out one month before Augustin-Louis was born. The Cauchy family survived the revolution and the following Reign of Terror by escaping to Arcueil, where Cauchy received his first education, from his father. After the execution of Robespierre, it was safe for the family to return to Paris. There Louis-François Cauchy found himself a new bureaucratic job and quickly moved up the ranks; when Napoleon Bonaparte came to power, he was further promoted. The famous mathematician Lagrange was also a friend of the Cauchy family, and on Lagrange's advice, Augustin-Louis was enrolled in the École Centrale du Panthéon. Most of the curriculum consisted of classical languages, and the young and ambitious Cauchy, being a brilliant student, won many prizes in Latin and the humanities. In spite of these successes, Augustin-Louis chose an engineering career.
In 1805 he placed second out of 293 applicants on the entrance exam for the École Polytechnique; one of the main purposes of this school was to give future civil and military engineers a high-level scientific and mathematical education. The school functioned under military discipline, which caused the young Cauchy some difficulty in adapting. Nevertheless, he finished the Polytechnique in 1807, at the age of 18, and went on to the École des Ponts et Chaussées, from which he graduated in engineering with the highest honors. After finishing school in 1810, Cauchy accepted a job as an engineer in Cherbourg. Cauchy's first two manuscripts were accepted; a third was rejected
Augustin-Louis Cauchy
–
Cauchy around 1840. Lithograph by Zéphirin Belliard after a painting by Jean Roller.
Augustin-Louis Cauchy
–
The title page of a textbook by Cauchy.
Augustin-Louis Cauchy
–
Leçons sur le calcul différentiel, 1829
28.
Leonhard Euler
–
He also introduced much of the modern mathematical terminology and notation, particularly for mathematical analysis, such as the notion of a mathematical function. He is also known for his work in mechanics, fluid dynamics, optics and astronomy. Euler was one of the most eminent mathematicians of the 18th century and is held to be one of the greatest in history; he is also considered the most prolific mathematician of all time. His collected works fill 60 to 80 quarto volumes, more than anybody else in the field. He spent most of his adult life in Saint Petersburg, Russia, and in Berlin, then the capital of Prussia. A statement attributed to Pierre-Simon Laplace expresses Euler's influence on mathematics: "Read Euler, read Euler, he is the master of us all." Leonhard Euler was born on 15 April 1707, in Basel, Switzerland, to Paul III Euler, a pastor of the Reformed Church, and Marguerite née Brucker, a pastor's daughter. He had two sisters, Anna Maria and Maria Magdalena, and a younger brother, Johann Heinrich. Soon after the birth of Leonhard, the Eulers moved from Basel to the town of Riehen. Paul Euler was a friend of the Bernoulli family; Johann Bernoulli was then regarded as Europe's foremost mathematician, and would eventually be the most important influence on young Leonhard. Euler's formal education started in Basel, where he was sent to live with his maternal grandmother. In 1720, aged thirteen, he enrolled at the University of Basel; during that time, he was receiving Saturday afternoon lessons from Johann Bernoulli, who quickly discovered his new pupil's incredible talent for mathematics. In 1726, Euler completed a dissertation on the propagation of sound with the title De Sono; at that time, he was unsuccessfully attempting to obtain a position at the University of Basel. In 1727, he first entered the Paris Academy Prize Problem competition; Pierre Bouguer, who became known as the father of naval architecture, won, and Euler took second place.
Euler later won this annual prize twelve times. Around this time Johann Bernoulli's two sons, Daniel and Nicolaus, were working at the Imperial Russian Academy of Sciences in Saint Petersburg. In November 1726 Euler eagerly accepted an offer to join them, but delayed making the trip to Saint Petersburg while he applied for a physics professorship at the University of Basel. Euler arrived in Saint Petersburg on 17 May 1727; he was promoted from his junior post in the medical department of the academy to a position in the mathematics department. He lodged with Daniel Bernoulli, with whom he worked in close collaboration. Euler mastered Russian and settled into life in Saint Petersburg; he also took on a job as a medic in the Russian Navy. The Academy at Saint Petersburg, established by Peter the Great, was intended to improve education in Russia; as a result, it was made especially attractive to foreign scholars like Euler
Leonhard Euler
–
Portrait by Jakob Emanuel Handmann (1756)
Leonhard Euler
–
1957 Soviet Union stamp commemorating the 250th birthday of Euler. The text says: 250 years from the birth of the great mathematician, academician Leonhard Euler.
Leonhard Euler
–
Stamp of the former German Democratic Republic honoring Euler on the 200th anniversary of his death. Across the centre it shows his polyhedral formula, nowadays written as " v − e + f = 2".
Leonhard Euler
–
Euler's grave at the Alexander Nevsky Monastery
29.
Robert Hooke
–
Robert Hooke FRS was an English natural philosopher, architect and polymath. Allan Chapman has characterised him as "England's Leonardo". Robert Gunther's Early Science in Oxford, a history of science in Oxford during the Protectorate, Restoration and Age of Enlightenment, devotes five of its fourteen volumes to Hooke. Hooke studied at Wadham College, Oxford, during the Protectorate, where he became one of a tightly knit group of ardent Royalists led by John Wilkins. Here he was employed as an assistant to Thomas Willis and to Robert Boyle, and he built some of the earliest Gregorian telescopes and observed the rotations of Mars and Jupiter. In 1665 he inspired the use of microscopes for scientific exploration with his book Micrographia; based on his microscopic observations of fossils, Hooke was an early proponent of biological evolution. Much of Hooke's scientific work was conducted in his capacity as curator of experiments of the Royal Society. Much of what is known of Hooke's early life comes from an autobiography that he commenced in 1696 but never completed; Richard Waller mentions it in his introduction to The Posthumous Works of Robert Hooke. The work of Waller, along with John Ward's Lives of the Gresham Professors and John Aubrey's Brief Lives, forms the major near-contemporaneous biographical accounts of Hooke. Robert Hooke was born in 1635 in Freshwater on the Isle of Wight to John Hooke. Robert was the last of four children, two boys and two girls, and there was an age difference of seven years between him and the next youngest. Their father John was a Church of England priest, the curate of Freshwater's Church of All Saints, and Robert Hooke was expected to succeed in his education and join the Church. John Hooke was also in charge of a school, and so was able to teach Robert. He was a Royalist and almost certainly a member of a group who went to pay their respects to Charles I when he escaped to the Isle of Wight; Robert, too, grew up to be a staunch monarchist.
As a youth, Robert Hooke was fascinated by observation and mechanical works. He dismantled a brass clock and built a wooden replica that, by all accounts, worked well enough, and he learned to draw, making his own materials from coal, chalk and ruddle. Hooke quickly mastered Latin and Greek and made a study of Hebrew. Here, too, he embarked on his study of mechanics. It appears that Hooke was one of a group of students whom Busby educated in parallel to the work of the school; contemporary accounts say he was not much seen in the school. In 1653, Hooke secured a chorister's place at Christ Church, Oxford, where he was employed as an assistant to Dr Thomas Willis. There he met the natural philosopher Robert Boyle and gained employment as his assistant from about 1655 to 1662, constructing, operating and demonstrating Boyle's air pump. He did not take his Master of Arts degree until 1662 or 1663
Robert Hooke
–
Modern portrait of Robert Hooke (Rita Greer 2004), based on descriptions by Aubrey and Waller; no contemporary depictions of Hooke are known to survive.
Robert Hooke
–
Memorial portrait of Robert Hooke at Alum Bay, Isle of Wight, his birthplace, by Rita Greer (2012).
Robert Hooke
–
Robert Boyle
Robert Hooke
–
Diagram of a louse from Hooke's Micrographia
30.
Isaac Newton
–
His book Philosophiæ Naturalis Principia Mathematica, first published in 1687, laid the foundations of classical mechanics. Newton also made contributions to optics, and he shares credit with Gottfried Wilhelm Leibniz for developing the infinitesimal calculus. Newton's Principia formulated the laws of motion and universal gravitation that dominated scientists' view of the universe for the next three centuries. Newton's work on light was collected in his influential book Opticks; he also formulated a law of cooling and made the first theoretical calculation of the speed of sound. Newton was a fellow of Trinity College and the second Lucasian Professor of Mathematics at the University of Cambridge. Politically and personally tied to the Whig party, Newton served two brief terms as Member of Parliament for the University of Cambridge, in 1689–90 and 1701–02. He was knighted by Queen Anne in 1705, and he spent the last three decades of his life in London, serving as Warden and Master of the Royal Mint. His father, also named Isaac Newton, had died three months before his birth. Born prematurely, he was a small child; his mother, Hannah Ayscough, reportedly said that he could have fit inside a quart mug. When Newton was three, his mother remarried and went to live with her new husband, the Reverend Barnabas Smith, leaving her son in the care of his maternal grandmother. Newton's mother had three children from her second marriage. From the age of twelve until he was seventeen, Newton was educated at The King's School, Grantham, which taught Latin and Greek. He was removed from school, and by October 1659 he was to be found at Woolsthorpe-by-Colsterworth; Henry Stokes, master at The King's School, persuaded his mother to send him back to school so that he might complete his education. Motivated partly by a desire for revenge against a bully, he became the top-ranked student.
In June 1661, he was admitted to Trinity College, Cambridge. He started as a subsizar, paying his way by performing valet's duties, until he was awarded a scholarship in 1664, which guaranteed him four more years until he could take his M.A. He set down in his notebook a series of Quaestiones about mechanical philosophy as he found it, and in 1665 he discovered the generalised binomial theorem and began to develop a mathematical theory that later became calculus. Soon after Newton had obtained his B.A. degree in August 1665, the university temporarily closed as a precaution against the Great Plague. In April 1667 he returned to Cambridge, and in October was elected as a fellow of Trinity. Fellows were required to become ordained priests, although this was not enforced in the Restoration years; however, by 1675 the issue could not be avoided, and by then his unconventional views stood in the way. Nevertheless, Newton managed to avoid ordination by means of a special permission from Charles II. He was elected a Fellow of the Royal Society in 1672. Newton's work has been said to distinctly advance every branch of mathematics then studied, and his work on the subject usually referred to as fluxions or calculus, seen in a manuscript of October 1666, is now published among Newton's mathematical papers
Isaac Newton
–
Portrait of Isaac Newton in 1689 (age 46) by Godfrey Kneller
Isaac Newton
–
Newton in a 1702 portrait by Godfrey Kneller
Isaac Newton
–
Isaac Newton (Bolton, Sarah K. Famous Men of Science. NY: Thomas Y. Crowell & Co., 1889)
Isaac Newton
–
Replica of Newton's second Reflecting telescope that he presented to the Royal Society in 1672
31.
Claude-Louis Navier
–
Claude-Louis Navier was a French engineer and physicist who specialized in mechanics; the Navier–Stokes equations are named after him and George Gabriel Stokes. After the death of his father in 1793, Navier's mother left his education in the hands of his uncle Émiland Gauthey, an engineer with the Corps of Bridges and Roads. In 1802, Navier enrolled at the École Polytechnique, and in 1804 continued his studies at the École Nationale des Ponts et Chaussées; he eventually succeeded his uncle as Inspecteur Général at the Corps des Ponts et Chaussées. He directed the construction of bridges at Choisy, Asnières and Argenteuil in the Department of the Seine. In 1824, Navier was admitted into the French Academy of Sciences. Navier formulated the general theory of elasticity in a mathematically usable form, and he is therefore considered a founder of modern structural analysis. His major contribution, however, remains the Navier–Stokes equations, central to fluid mechanics. His name is one of the 72 names inscribed on the Eiffel Tower
Claude-Louis Navier
–
Bust of Claude Louis Marie Henri Navier at the École Nationale des Ponts et Chaussées
32.
Mechanics
–
Mechanics is an area of science concerned with the behaviour of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment. The scientific discipline has its origins in Ancient Greece with the writings of Aristotle; during the early modern period, scientists such as Khayyam, Galileo, Kepler and Newton laid the foundation for what is now known as classical mechanics. Classical mechanics is a branch of physics that deals with particles that are either at rest or are moving with velocities significantly less than the speed of light; it can also be defined as a branch of science which deals with the motion of bodies and the forces acting on them. Historically, classical mechanics came first, while quantum mechanics is a comparatively recent invention. Classical mechanics originated with Isaac Newton's laws of motion in Philosophiæ Naturalis Principia Mathematica; both subjects are commonly held to constitute the most certain knowledge that exists about physical nature. Classical mechanics has especially often been viewed as a model for other so-called exact sciences; essential in this respect is the relentless use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them. Quantum mechanics is of broader scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects; each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers. Quantum mechanics has superseded classical mechanics at the foundational level and is indispensable for the explanation and prediction of processes at the molecular and atomic scales. However, for macroscopic processes classical mechanics is able to solve problems which are difficult in quantum mechanics, and hence it remains useful.
Modern descriptions of such behavior begin with a careful definition of such quantities as displacement, time, velocity, acceleration and mass. Until about 400 years ago, however, motion was explained from a very different point of view. Galileo showed that the speed of falling objects increases steadily during the time of their fall, and that this acceleration is the same for heavy objects as for light ones, provided air friction is discounted. The English mathematician and physicist Isaac Newton improved this analysis by defining force and mass and relating them to acceleration. For objects traveling at speeds close to the speed of light, Newton's laws were superseded by Albert Einstein's theory of relativity; for atomic and subatomic particles, Newton's laws were superseded by quantum theory. For everyday phenomena, however, Newton's three laws of motion remain the cornerstone of dynamics, which is the study of what causes motion. In analogy to the distinction between quantum and classical mechanics, Einstein's general and special theories of relativity have expanded the scope of Newtonian mechanics; the differences between relativistic and Newtonian mechanics become significant and even dominant as the velocity of a massive body approaches the speed of light. Relativistic corrections are also needed for quantum mechanics. General relativity, however, has not been integrated with quantum theory; the two theories remain incompatible, a hurdle which must be overcome in developing a theory of everything
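The claim that relativistic differences "become significant as the velocity approaches the speed of light" can be made concrete with the Lorentz factor, which multiplies the Newtonian momentum mv in special relativity. A minimal sketch (the function name is an illustrative choice):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2 / c^2). Relativistic momentum is gamma*m*v,
    which reduces to the Newtonian m*v when v is much smaller than c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At everyday speeds gamma is indistinguishable from 1, so Newtonian
# mechanics suffices; near light speed gamma grows without bound.
gamma_car = lorentz_factor(30.0)      # highway speed: essentially 1
gamma_fast = lorentz_factor(0.9 * C)  # 90% of c: roughly 2.3
```

This is exactly the correspondence idea described above: the more general theory contains the older one as a limiting case.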
Mechanics
–
Arabic Machine Manuscript. Unknown date (at a guess: 16th to 19th centuries).
33.
Force
–
In physics, a force is any interaction that, when unopposed, will change the motion of an object. In other words, a force can cause an object with mass to change its velocity; force can also be described intuitively as a push or a pull. A force has both magnitude and direction, making it a vector quantity; it is measured in the SI unit of newtons and represented by the symbol F. The original form of Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. In an extended body, each part usually applies forces on the adjacent parts; such internal mechanical stresses cause no acceleration of that body, as the forces balance one another. Pressure, the distribution of many small forces applied over an area of a body, is a simple type of stress that, if unbalanced, can cause the body to accelerate. Stress usually causes deformation of solid materials, or flow in fluids. Early misunderstandings about motion were due in part to an incomplete understanding of the sometimes non-obvious force of friction; a fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Sir Isaac Newton formulated laws of motion that were not improved on for nearly three hundred years. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known; in order of decreasing strength, they are: strong, electromagnetic, weak and gravitational. High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction. Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines.
The mechanical advantage given by a machine allowed less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes, who was especially famous for formulating a treatment of the buoyant forces inherent in fluids. Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different natural places therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place on the ground. He distinguished between the innate tendency of objects to find their natural place, which led to natural motion, and unnatural or forced motion
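Both ideas above, the momentum form of Newton's second law and the force-for-distance trade behind mechanical advantage, reduce to one-line formulas. A minimal sketch (the function names and the lever example are illustrative assumptions):

```python
def net_force(m, dv, dt):
    """Momentum form of Newton's second law: F = dp/dt.
    For constant mass, dp = m * dv, so this is simply F = m * a."""
    return m * dv / dt

def lever_effort(load, load_arm, effort_arm):
    """A lever trades force for distance at equal work:
    effort * effort_arm = load * load_arm."""
    return load * load_arm / effort_arm

f = net_force(2.0, 10.0, 5.0)            # 2 kg gaining 10 m/s over 5 s -> 4.0 N
effort = lever_effort(900.0, 0.5, 1.5)   # 900 N load with 3:1 arms -> 300.0 N
```

The lever example shows why the trade is "for the same amount of work": the smaller effort force must move through three times the distance of the load.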
Force
–
Aristotle famously described a force as anything that causes an object to undergo "unnatural motion"
Force
–
Forces are also described as a push or pull on an object. They can be due to phenomena such as gravity, magnetism, or anything that might cause a mass to accelerate.
Force
–
Though Sir Isaac Newton's most famous equation is F = ma, he actually wrote down a different form for his second law of motion that did not use differential calculus.
Force
–
Galileo Galilei was the first to point out the inherent contradictions contained in Aristotle's description of forces.
34.
Mechanical engineering
–
Mechanical engineering is the discipline that applies the principles of engineering, physics, and materials science to the design, analysis, manufacturing, and maintenance of mechanical systems. It is the branch of engineering that involves the design, production, and operation of machinery, and it is one of the oldest and broadest of the engineering disciplines. The mechanical engineering field requires an understanding of core areas including mechanics, kinematics, thermodynamics, materials science and structural analysis. Mechanical engineering emerged as a field during the Industrial Revolution in Europe in the 18th century; however, mechanical engineering science emerged in the 19th century as a result of developments in the field of physics. The field has continually evolved to incorporate advancements in technology, and mechanical engineers today are pursuing developments in such fields as composites and mechatronics. Mechanical engineers may also work in the field of biomedical engineering, specifically with biomechanics, transport phenomena, biomechatronics and bionanotechnology. Mechanical engineering finds its application in the archives of various ancient societies. In ancient Greece, the works of Archimedes deeply influenced mechanics in the Western tradition, and Heron of Alexandria created the first steam engine. In China, Zhang Heng improved a water clock and invented a seismometer. During the 7th to 15th centuries, the era called the Islamic Golden Age, there were remarkable contributions from Muslim inventors in the field of mechanical technology. Al-Jazari, who was one of them, wrote his famous Book of Knowledge of Ingenious Mechanical Devices in 1206; he is also considered the inventor of mechanical devices which now form the very basis of mechanisms, such as the crankshaft and camshaft. Newton was reluctant to publish his methods and laws for years; Gottfried Wilhelm Leibniz is also credited with creating calculus during the same time frame.
On the European continent, Johann von Zimmermann founded the first factory for grinding machines in Chemnitz. Education in mechanical engineering has historically been based on a strong foundation in mathematics and science, and degrees in mechanical engineering are offered at universities worldwide. In Spain, Portugal and most of South America, where neither B.Sc. nor B.Tech. programs have been adopted, the formal name for the degree is Mechanical Engineer, and the course work is based on five or six years of training. In Italy the course work is based on five years of education and training; in Greece, the coursework is based on a five-year curriculum and the requirement of a Diploma Thesis, upon completion of which a Diploma is awarded rather than a B.Sc. In Australia, mechanical engineering degrees are awarded as Bachelor of Engineering or similar nomenclature, although there are a number of specialisations; the degree takes four years of study to achieve. To ensure quality in engineering degrees, Engineers Australia accredits engineering degrees awarded by Australian universities in accordance with the global Washington Accord. Before the degree can be awarded, the student must complete at least 3 months of on-the-job work experience in an engineering firm. Similar systems are present in South Africa and are overseen by the Engineering Council of South Africa
Mechanical engineering
–
Mechanical engineers design and build engines, power plants, other machines...
Mechanical engineering
–
... structures, and vehicles of all sizes.
Mechanical engineering
–
An oblique view of a four-cylinder inline crankshaft with pistons
Mechanical engineering
–
Training FMS with learning robot SCORBOT-ER 4u, workbench CNC Mill and CNC Lathe
35.
Civil engineering
–
Civil engineering is traditionally broken into a number of sub-disciplines. It is the second-oldest engineering discipline after military engineering, and it is defined to distinguish non-military engineering from military engineering. Civil engineering takes place in the public sector from municipal through to national governments, and in the private sector from individual homeowners through to international companies. Engineering has been an aspect of life since the beginnings of human existence; over time, transportation became increasingly important, leading to the development of the wheel and sailing. The construction of the pyramids in Egypt was among the first instances of large structure construction. The Romans developed civil structures throughout their empire, including especially aqueducts, insulae, harbors, bridges, dams, and roads. In the 18th century, the term civil engineering was coined to incorporate all things civilian as opposed to military engineering. The first self-proclaimed civil engineer was John Smeaton, who constructed the Eddystone Lighthouse. In 1771 Smeaton and some of his colleagues formed the Smeatonian Society of Civil Engineers, a group of leaders of the profession who met informally over dinner. Though there was evidence of some technical meetings, it was little more than a social society. In 1818 the Institution of Civil Engineers was founded in London, and the institution received a Royal Charter in 1828, formally recognising civil engineering as a profession. The first private college to teach engineering in the United States was Norwich University. The first degree in engineering in the United States was awarded by Rensselaer Polytechnic Institute in 1835.
The first such degree to be awarded to a woman was granted by Cornell University to Nora Stanton Blatch in 1905. Throughout ancient and medieval history, most architectural design and construction was carried out by artisans, such as stonemasons and carpenters, rising to the role of master builder. Knowledge was retained in guilds and seldom supplanted by advances; structures, roads, and infrastructure that existed were repetitive, and increases in scale were incremental. Brahmagupta, an Indian mathematician, used arithmetic in the 7th century AD, based on Hindu-Arabic numerals. Civil engineers typically possess an academic degree in civil engineering. The length of study is three to five years, and the degree is designated as a bachelor of engineering. The curriculum generally includes classes in physics, mathematics, project management, and design. After taking basic courses in most sub-disciplines of civil engineering, students move on to specialize in one or more sub-disciplines at advanced levels. In most countries, a degree in engineering represents the first step towards professional certification. After completing a degree program, the engineer must satisfy a range of requirements before being certified. Once certified, the engineer is designated as a professional engineer or a chartered engineer, depending on the country.
Civil engineering
–
A multi-level stack interchange, buildings, houses, and park in Shanghai, China.
Civil engineering
–
Philadelphia City Hall in the United States is still the world's tallest masonry load bearing structure.
Civil engineering
–
Leonhard Euler developed the theory explaining the buckling of columns
Civil engineering
–
John Smeaton, the "father of civil engineering"
36.
Chemical engineering
–
A chemical engineer designs large-scale processes that convert chemicals, raw materials, living cells, microorganisms, and energy into useful forms and products. A 1996 British Journal for the History of Science article cites James F. Donnelly for mentioning an 1839 reference to chemical engineering in relation to the production of sulfuric acid. In the same paper, however, George E. Davis, an English consultant, was credited with having coined the term. The History of Science in United States: An Encyclopedia puts this at around 1890. Chemical engineering, describing the use of mechanical equipment in the chemical industry, became common vocabulary in England after 1850. By 1910, the profession of chemical engineer was already in common use in Britain. Chemical engineering emerged upon the development of unit operations, a fundamental concept of the discipline. Most authors agree that Davis invented the concept of unit operations, if he did not substantially develop it. He gave a series of lectures on unit operations at the Manchester Technical School in 1887. Three years before Davis's lectures, Henry Edward Armstrong taught a degree course in chemical engineering at the City and Guilds of London Institute. Armstrong's course failed simply because its graduates were not especially attractive to employers; employers of the time would rather have hired chemists and mechanical engineers. Starting from 1888, Lewis M. Norton taught at MIT the first chemical engineering course in the United States. Norton's course was contemporaneous and essentially similar to Armstrong's course; both courses, however, simply merged chemistry and engineering subjects. The new discipline's practitioners had difficulty convincing engineers that they were engineers and chemists that they were not simply chemists. Unit operations was introduced into the MIT course by William Hultz Walker in 1905. By the early 1920s, unit operations had become an important aspect of chemical engineering at MIT and other US universities.
For instance, the American Institute of Chemical Engineers (AIChE), in a 1922 report, defined chemical engineering as a science in its own right, based on unit operations, and on that principle published a list of academic institutions which offered satisfactory chemical engineering courses. Meanwhile, promoting chemical engineering as a distinct science in Britain led to the establishment of the Institution of Chemical Engineers (IChemE) in 1922. IChemE likewise helped make unit operations considered essential to the discipline. By the 1940s, however, it became clear that unit operations alone was insufficient for developing chemical reactors. Alongside the continued predominance of unit operations in chemical engineering courses in Britain, other novel concepts, such as process systems engineering (PSE), came to define a second paradigm. Transport phenomena gave an analytical approach to chemical engineering, while PSE focused on its elements, such as control systems.
Chemical engineering
–
Chemical engineers design, construct and operate process plants (distillation columns pictured)
Chemical engineering
–
George E. Davis
Chemical engineering
–
Chemical engineers use computers to control automated systems in plants.
Chemical engineering
–
Operators in a chemical plant using an older analog control board, seen in East Germany, 1986.
37.
Astrophysics
–
Astrophysics is the branch of astronomy that employs the principles of physics and chemistry to ascertain the nature of the heavenly bodies, rather than their positions or motions in space. Among the objects studied are the Sun, other stars, galaxies, extrasolar planets, and the interstellar medium. Their emissions are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Although astronomy is as ancient as recorded history itself, it was long separated from the study of terrestrial physics. For early investigators, the challenge was that the tools had not yet been invented with which to prove such assertions. For much of the nineteenth century, astronomical research was focused on the routine work of measuring the positions and computing the motions of astronomical objects. Kirchhoff deduced that the dark lines in the solar spectrum are caused by absorption by chemical elements in the solar atmosphere. In this way it was proved that the chemical elements found in the Sun were also found on Earth. Among those who extended the study of solar and stellar spectra was Norman Lockyer, who observed a line in the solar spectrum that matched no known element; he claimed the line represented a new element, which was called helium, after the Greek Helios, the Sun personified. By 1890, a catalog of over 10,000 stars had been prepared that grouped them into thirteen spectral types. Most significantly, Cecilia Payne discovered that hydrogen and helium were the principal components of stars. This discovery was so unexpected that her dissertation readers convinced her to modify the conclusion before publication; however, later research confirmed her discovery.
By the end of the 20th century, studies of astronomical spectra had expanded to cover wavelengths extending from radio waves through optical, X-ray, and gamma-ray wavelengths. Observational astronomy is the practice of observing celestial objects by using telescopes and other astronomical apparatus. The majority of astrophysical observations are made using the electromagnetic spectrum. Radio astronomy studies radiation with a wavelength greater than a few millimeters; the study of these waves requires very large radio telescopes. Infrared astronomy studies radiation with a wavelength that is too long to be visible to the naked eye but shorter than radio waves. Infrared observations are made with telescopes similar to the familiar optical telescopes; objects colder than stars are studied at infrared frequencies. Optical astronomy is the oldest kind of astronomy; telescopes paired with a charge-coupled device or spectroscopes are the most common instruments used. The Earth's atmosphere interferes somewhat with optical observations, so adaptive optics and space telescopes are used to obtain the highest possible image quality. In this wavelength range, stars are highly visible, and many chemical spectra can be observed to study the chemical composition of stars, galaxies, and nebulae.
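The wavelength bands described above map to frequencies through the relation c = λν. A quick sketch; the band values below are illustrative round numbers, not precise definitions of the bands:

```python
# Convert representative wavelengths from the bands above to frequencies
# via c = lambda * nu.  The chosen wavelengths are illustrative only.
C = 299_792_458.0  # speed of light in vacuum, m/s

def frequency_hz(wavelength_m):
    """Frequency in Hz for a given wavelength in meters."""
    return C / wavelength_m

for name, lam in [("radio (1 mm)", 1e-3),
                  ("infrared (10 um)", 1e-5),
                  ("optical (500 nm)", 5e-7)]:
    print(f"{name}: {frequency_hz(lam):.3e} Hz")
```

Shorter wavelengths correspond to higher frequencies, which is why the optical band sits above the infrared and radio bands in frequency.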
Astrophysics
–
Early 20th-century comparison of elemental, solar, and stellar spectra
Astrophysics
–
Supernova remnant LMC N 63A imaged in x-ray (blue), optical (green) and radio (red) wavelengths. The X-ray glow is from material heated to about ten million degrees Celsius by a shock wave generated by the supernova explosion.
Astrophysics
–
The stream lines in this simulation of a supernova show the flow of matter behind the shock wave, giving clues as to the origin of pulsars
38.
Biology
–
Biology is a natural science concerned with the study of life and living organisms, including their structure, function, growth, evolution, distribution, identification, and taxonomy. Modern biology is a vast and eclectic field, composed of many branches and subdisciplines. However, despite the broad scope of biology, there are certain unifying concepts within it that consolidate it into a single, coherent field. In general, biology recognizes the cell as the basic unit of life and genes as the basic unit of heredity. It is also understood today that all organisms survive by consuming and transforming energy and by regulating their internal environment to maintain a stable internal condition. The term biology is derived from the Greek word βίος, bios, "life", and the suffix -λογία, -logia, "study of". The Latin-language form of the term first appeared in 1736 when Swedish scientist Carl Linnaeus used biologi in his Bibliotheca botanica; the first German use, Biologie, was in a 1771 translation of Linnaeus's work. In 1797, Theodor Georg August Roose used the term in the preface of a book. Karl Friedrich Burdach used the term in 1800 in a more restricted sense, of the study of human beings from a morphological, physiological, and psychological perspective. The term came into its modern usage with Gottfried Reinhold Treviranus, who announced: "The science that concerns itself with these objects we will indicate by the name biology, or the doctrine of life." Although modern biology is a relatively recent development, sciences related to it have been studied since ancient times. Natural philosophy was studied as early as the ancient civilizations of Mesopotamia, Egypt, and the Indian subcontinent; however, the origins of modern biology and its approach to the study of nature are most often traced back to ancient Greece. While the formal study of medicine dates back to Hippocrates, it was Aristotle who contributed most extensively to the development of biology. Especially important are his History of Animals and other works where he showed naturalist leanings, and later more empirical works that focused on biological causation and the diversity of life.
Aristotle's successor at the Lyceum, Theophrastus, wrote a series of books on botany that survived as the most important contribution of antiquity to the plant sciences, even into the Middle Ages. Scholars of the medieval Islamic world who wrote on biology included al-Jahiz and Al-Dīnawarī, who wrote on botany. Biology began to quickly develop and grow with Anton van Leeuwenhoek's dramatic improvement of the microscope. It was then that scholars discovered spermatozoa, bacteria, and infusoria. Investigations by Jan Swammerdam led to new interest in entomology and helped to develop the basic techniques of microscopic dissection and staining. Advances in microscopy also had a profound impact on biological thinking. In the early 19th century, a number of biologists pointed to the central importance of the cell. Thanks to the work of Robert Remak and Rudolf Virchow, by the 1860s most biologists accepted all three tenets of what came to be known as cell theory. Meanwhile, taxonomy and classification became the focus of natural historians.
Biology
39.
Numerical methods
–
Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. Being able to compute the sides of a triangle is important, for instance, in astronomy and carpentry. Numerical analysis continues this tradition of practical mathematical calculations. Much like the Babylonian approximation of the square root of 2, modern numerical analysis does not seek exact answers. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid 20th century, computers calculate the required functions instead, but these same interpolation formulas nevertheless continue to be used as part of the software algorithms for solving differential equations. Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of differential equations. Car companies can improve the safety of their vehicles by using computer simulations of car crashes; such simulations essentially consist of solving differential equations numerically. Hedge funds use tools from all fields of numerical analysis to attempt to calculate the value of stocks. Airlines use sophisticated optimization algorithms to decide ticket prices and airplane and crew assignments; historically, such algorithms were developed within the overlapping field of operations research. Insurance companies use numerical programs for actuarial analysis. The rest of this section outlines several important themes of numerical analysis. The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients.
The function values are no longer very useful when a computer is available. The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
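The Babylonian approximation of the square root of 2 mentioned above can be illustrated with Heron's (Babylonian) method, which repeatedly averages a guess x with 2/x. A minimal sketch; the starting guess and iteration count are arbitrary choices, and the sexagesimal value is the one shown on the YBC 7289 tablet caption below:

```python
# Heron's (Babylonian) method for sqrt(2): average the guess x with 2/x.
# Starting guess and iteration count are illustrative choices.
def heron_sqrt2(iterations=5, guess=1.5):
    x = guess
    for _ in range(iterations):
        x = (x + 2.0 / x) / 2.0
    return x

# The tablet's sexagesimal digits 1;24,51,10 expressed as a decimal number.
babylonian = 1 + 24/60 + 51/60**2 + 10/60**3

print(heron_sqrt2())  # converges rapidly toward 1.41421356...
print(babylonian)     # 1.41421296..., accurate to about six decimal figures
```

Each iteration roughly doubles the number of correct digits, which is why a handful of steps already exceeds the tablet's accuracy.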
Numerical methods
–
Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) with annotations. The approximation of the square root of 2 is four sexagesimal figures, which is about six decimal figures. 1 + 24/60 + 51/60² + 10/60³ = 1.41421296...
Numerical methods
–
Direct method
Numerical methods
40.
Computational fluid dynamics
–
Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces. With high-speed supercomputers, better solutions can be achieved. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial experimental validation of such software is performed using a wind tunnel, with the final validation coming in full-scale testing. The fundamental basis of almost all CFD problems is the Navier–Stokes equations. These equations can be simplified by removing terms describing viscous actions to yield the Euler equations. Further simplification, by removing terms describing vorticity, yields the full potential equations. Finally, for small perturbations in subsonic and supersonic flows these equations can be linearized to yield the linearized potential equations. Historically, methods were first developed to solve the linearized potential equations. Two-dimensional methods, using conformal transformations of the flow about a cylinder to the flow about an airfoil, were developed in the 1930s. One of the earliest types of calculations resembling modern CFD are those by Lewis Fry Richardson, in the sense that these calculations used finite differences and divided the physical space in cells. Although they failed dramatically, these calculations, together with Richardson's book Weather Prediction by Numerical Process, set the basis for modern CFD; in fact, early CFD calculations during the 1940s using ENIAC used methods close to those in Richardson's 1922 book. The computer power available paced the development of three-dimensional methods. Probably the first work using computers to model fluid flow, as governed by the Navier–Stokes equations, was performed at Los Alamos National Lab, in the T3 group. This group was led by Francis H.
Harlow, who is considered one of the pioneers of CFD. Fromm's vorticity-stream-function method for 2D, transient, incompressible flow was the first treatment of strongly contorting incompressible flows in the world. The first paper with a three-dimensional model was published by John Hess and A. M. O. Smith of Douglas Aircraft in 1967. This method discretized the surface of the geometry with panels, giving rise to this class of programs being called Panel Methods. Their method itself was simplified, in that it did not include lifting flows and hence was mainly applied to ship hulls. The first lifting Panel Code was described in a paper written by Paul Rubbert and Gary Saaris of Boeing Aircraft in 1968. In time, more advanced three-dimensional Panel Codes were developed at Boeing, Lockheed, Douglas, McDonnell Aircraft, and NASA. Some were higher order codes, using higher order distributions of surface singularities, while others used single singularities on each surface panel. The advantage of the lower order codes was that they ran much faster on the computers of the time. Today, VSAERO has grown to be a multi-order code and is the most widely used program of this class. It has been used in the development of submarines, surface ships, automobiles, helicopters, and aircraft.
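The finite-difference idea Richardson pioneered, dividing space into cells and stepping a flow field forward in time, can be sketched on the simplest possible model problem: the 1D linear advection equation du/dt + a du/dx = 0 with a first-order upwind scheme. This is a toy illustration only; the grid size, speed, and time step are arbitrary choices, not taken from any CFD code mentioned above:

```python
# First-order upwind finite differences for 1D linear advection
# du/dt + a*du/dx = 0 on a periodic grid.  All parameters are illustrative.
N = 50                      # number of grid cells
a = 1.0                     # constant advection speed
dx = 1.0 / N                # cell width
dt = 0.5 * dx / a           # time step chosen so the Courant number is 0.5

u = [1.0 if 10 <= i < 20 else 0.0 for i in range(N)]  # square pulse

def step(u):
    c = a * dt / dx  # Courant number
    # u[i-1] with i = 0 wraps to u[-1], giving periodic boundaries
    return [u[i] - c * (u[i] - u[i - 1]) for i in range(N)]

for _ in range(100):
    u = step(u)

print(sum(u) * dx)  # total "mass" is conserved exactly by this scheme
```

The scheme is a convex combination of neighboring values when the Courant number is at most 1, so the solution stays bounded; it also smears the pulse, the numerical diffusion that motivated higher-order CFD methods.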
Computational fluid dynamics
–
Computational physics
Computational fluid dynamics
–
A computer simulation of high velocity air flow around the Space Shuttle during re-entry.
Computational fluid dynamics
–
A simulation of the Hyper-X scramjet vehicle in operation at Mach 7
Computational fluid dynamics
–
Volume rendering of a non-premixed swirl flame as simulated by LES.
41.
Archimedes
–
Archimedes of Syracuse was a Greek mathematician, physicist, engineer, inventor, and astronomer. Although few details of his life are known, he is regarded as one of the leading scientists in classical antiquity. He was also one of the first to apply mathematics to physical phenomena, founding hydrostatics and statics, and he is credited with designing innovative machines, such as his screw pump, compound pulleys, and defensive war machines to protect his native Syracuse from invasion. Archimedes died during the Siege of Syracuse, when he was killed by a Roman soldier despite orders that he should not be harmed. Cicero describes visiting the tomb of Archimedes, which was surmounted by a sphere and a cylinder. Unlike his inventions, the mathematical writings of Archimedes were little known in antiquity. Archimedes was born c. 287 BC in the city of Syracuse, Sicily, at that time a self-governing colony in Magna Graecia. The date of birth is based on a statement by the Byzantine Greek historian John Tzetzes that Archimedes lived for 75 years. In The Sand Reckoner, Archimedes gives his father's name as Phidias, an astronomer about whom nothing else is known. Plutarch wrote in his Parallel Lives that Archimedes was related to King Hiero II. A biography of Archimedes was written by his friend Heracleides, but this work has been lost, leaving the details of his life obscure. It is unknown, for instance, whether he married or had children. During his youth, Archimedes may have studied in Alexandria, Egypt; he referred to Conon of Samos as his friend, while two of his works have introductions addressed to Eratosthenes. Archimedes died c. 212 BC during the Second Punic War. According to the popular account given by Plutarch, Archimedes was contemplating a mathematical diagram when the city was captured. A Roman soldier commanded him to come and meet General Marcellus, but he declined. The soldier was enraged by this, and killed Archimedes with his sword.
Plutarch also gives an account of the death of Archimedes which suggests that he may have been killed while attempting to surrender to a Roman soldier. According to this story, Archimedes was carrying mathematical instruments, and was killed because the soldier thought that they were valuable items. General Marcellus was reportedly angered by the death of Archimedes, as he considered him a valuable asset and had ordered that he not be harmed. Marcellus called Archimedes "a geometrical Briareus". The last words attributed to Archimedes are "Do not disturb my circles", a reference to the circles in the mathematical drawing that he was supposedly studying when disturbed by the Roman soldier. This quote is given in Latin as "Noli turbare circulos meos". The phrase is given in Katharevousa Greek as "μὴ μου τοὺς κύκλους τάραττε".
Archimedes
–
Archimedes Thoughtful by Fetti (1620)
Archimedes
–
Cicero Discovering the Tomb of Archimedes by Benjamin West (1805)
Archimedes
–
Artistic interpretation of Archimedes' mirror used to burn Roman ships. Painting by Giulio Parigi.
Archimedes
–
A sphere has 2/3 the volume and surface area of its circumscribing cylinder including its bases. A sphere and cylinder were placed on the tomb of Archimedes at his request. (see also: Equiareal map)
42.
On Floating Bodies
–
On Floating Bodies is a Greek-language work consisting of two books written by Archimedes of Syracuse, one of the most important mathematicians, physicists, and engineers of antiquity. On Floating Bodies, which is thought to have been written around 250 BC, survives only partly in Greek. It is the first known work on hydrostatics, of which Archimedes is recognized as the founder. The purpose of On Floating Bodies was to determine the positions that various solids will assume when floating in a fluid, according to their form and the variation in their specific gravities. It contains the first statement of what is now known as Archimedes' principle. Archimedes lived in the Greek city-state of Syracuse, Sicily. He is credited with laying the foundations of hydrostatics and statics. A leading scientist of classical antiquity, Archimedes also developed elaborate systems of pulleys to move large objects with a minimum of effort. The Archimedes screw underpins modern hydroengineering, and his machines of war helped to hold back the armies of Rome in the Second Punic War. Archimedes opposed the arguments of Aristotle, pointing out that it was impossible to separate mathematics and nature. The only known copy of On Floating Bodies in Greek comes from the Archimedes Palimpsest. In the first part of the treatise, Archimedes establishes various general principles, such as that a solid denser than a fluid will, when immersed in that fluid, be lighter by the weight of the fluid it displaces. Archimedes spells out the law of equilibrium of fluids, and proves that water will adopt a spherical form around a center of gravity. This may have been an attempt at explaining the theory of contemporary Greek astronomers such as Eratosthenes that the Earth is round. The fluids described by Archimedes are not self-gravitating, since he assumes the existence of a point towards which all things fall in order to derive the spherical shape.
Further, Proposition 5 of Archimedes' treatise On Floating Bodies states that any floating object displaces its own weight of fluid. The second book is a mathematical achievement unmatched in antiquity and rarely equaled since; in it, Archimedes investigates the stable positions of floating paraboloids. The investigation is restricted to the case when the base of the paraboloid lies either entirely above or entirely below the fluid surface. Archimedes' investigation of paraboloids was probably an idealization of the shapes of ships' hulls. Some of his sections float with the base under water and the summit above water. Of the works of Archimedes that survive, the second of his two books of On Floating Bodies is considered his most mature work, commonly described as a tour de force.
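The statement that a floating object displaces its own weight of fluid has a direct consequence: the submerged fraction of a uniform floating solid equals the ratio of its density to the fluid's. A small sketch using illustrative textbook density values:

```python
# Archimedes' principle for a uniform floating solid: weight of solid equals
# weight of displaced fluid, so the submerged volume fraction is the density
# ratio.  Densities below are illustrative textbook values in kg/m^3.
def submerged_fraction(solid_density, fluid_density):
    if solid_density >= fluid_density:
        return 1.0  # the solid sinks (or is neutrally buoyant)
    return solid_density / fluid_density

ice, seawater = 917.0, 1025.0
print(submerged_fraction(ice, seawater))  # ~0.895: most of an iceberg is under water
```

The same ratio explains why a paraboloid hull's floating position depends on its specific gravity, the variable Archimedes studies in the second book.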
On Floating Bodies
–
A page from On Floating Bodies in the Archimedes Palimpsest
43.
Joseph Louis Lagrange
–
Joseph-Louis Lagrange, born Giuseppe Lodovico Lagrangia or Giuseppe Ludovico De la Grange Tournier, was an Italian and French Enlightenment-era mathematician and astronomer. He made significant contributions to the fields of analysis, number theory, and both classical and celestial mechanics. In 1787, at age 51, he moved from Berlin to Paris and became a member of the French Academy of Sciences. He remained in France until the end of his life. Lagrange was one of the creators of the calculus of variations, deriving the Euler–Lagrange equations for extrema of functionals. He also extended the method to take into account possible constraints, arriving at the method of Lagrange multipliers, and he proved that every natural number is a sum of four squares. His treatise Theorie des fonctions analytiques laid some of the foundations of group theory. In calculus, Lagrange developed a novel approach to interpolation and Taylor series. Born as Giuseppe Lodovico Lagrangia, Lagrange was of Italian and French descent; his mother was from the countryside of Turin. He was raised as a Roman Catholic. A career as a lawyer was planned out for Lagrange by his father, and certainly Lagrange seems to have accepted this willingly. He studied at the University of Turin, and his favourite subject was classical Latin. At first he had no enthusiasm for mathematics, finding Greek geometry rather dull. It was not until he was seventeen that he showed any taste for mathematics, his interest in the subject being first excited by a paper by Edmond Halley which he came across by accident. Alone and unaided he threw himself into mathematical studies; at the end of a year's incessant toil he was already an accomplished mathematician. Appointed to teach at the military academy in Turin, Lagrange was the first to teach calculus in an engineering school. In this Academy one of his students was François Daviet de Foncenex. Lagrange is one of the founders of the calculus of variations.
Starting in 1754, he worked on the tautochrone problem, and Lagrange wrote several letters to Leonhard Euler between 1754 and 1756 describing his results. He outlined his δ-algorithm, leading to the Euler–Lagrange equations of variational calculus. Lagrange also applied his ideas to problems of classical mechanics, generalizing the results of Euler and Maupertuis. Euler was very impressed with Lagrange's results. Lagrange published his method in two memoirs of the Turin Society in 1762 and 1773. Many of his early papers are elaborate; one article concludes with a masterly discussion of echoes, beats, and compound sounds. Other articles in this volume are on recurring series and probabilities. The next work he produced was in 1764 on the libration of the Moon, with an explanation as to why the same face was always turned to the Earth.
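Lagrange's four-square theorem mentioned above, that every natural number is a sum of four integer squares, can be checked by brute force for small numbers. A minimal sketch; the search function and its bounds are illustrative and bear no relation to Lagrange's actual proof:

```python
# Brute-force check of Lagrange's four-square theorem for small n.
# Searches quadruples (a, b, c, d) with 0 <= a, b, c, d <= floor(sqrt(n)) + 1.
from itertools import product

def four_squares(n):
    """Return a quadruple (a, b, c, d) with a^2 + b^2 + c^2 + d^2 == n."""
    limit = int(n ** 0.5) + 1
    for a, b, c, d in product(range(limit), repeat=4):
        if a * a + b * b + c * c + d * d == n:
            return (a, b, c, d)
    return None  # never reached for n >= 1, by the theorem

print(four_squares(7))  # → (1, 1, 1, 2), since 1 + 1 + 1 + 4 = 7
```

The number 7 is a useful test case because it cannot be written as a sum of three squares, so all four terms are genuinely needed.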
Joseph Louis Lagrange
–
Joseph-Louis (Giuseppe Luigi), comte de Lagrange
Joseph Louis Lagrange
–
Lagrange's tomb in the crypt of the Panthéon
44.
Gotthilf Hagen
–
Gotthilf Heinrich Ludwig Hagen was a German civil engineer who made important contributions to fluid dynamics, hydraulic engineering, and probability theory. Hagen was born in Königsberg, East Prussia, to Friedrich Ludwig Hagen. His father was a government official, and his mother was the daughter of Christian Reccard, professor of theology at the University of Königsberg, consistorial councillor, and astronomer. He showed promise in mathematics in school and went on to study at the University of Königsberg, where his uncle, Karl Gottfried Hagen, was professor of physics. In 1816 Hagen began studying mathematics and astronomy with Friedrich Wilhelm Bessel; although he later turned to engineering, he remained in close contact with Bessel throughout his life. In 1819 he undertook the examination for surveyors and after graduating took a job as an engineer in the civil service. His main responsibility was for hydraulic engineering and water management. In 1822 he took the examination in Berlin to qualify as a master builder. He became known through his publications about various hydraulic constructions which he had visited during travels in Europe. In 1824 he was appointed director of building by the mercantile community in Königsberg, and in 1825 he became deputy governmental building officer for Danzig. A year later he transferred to become harbor building inspector in Pillau; methods he developed are still relevant to current harbor management in the region. On April 27, 1827 he married his niece Auguste Hagen; his son Ludwig Hagen also became a notable civil engineer. In 1830 Hagen joined the building authority in Berlin and became chief government building surveyor in 1831. From 1834 to 1849 he taught as a professor of engineering at the Bauakademie. Hagen was unusual in stressing the mathematical and theoretical aspects of hydraulic engineering; in particular he was interested in using probability calculus for land surveying, and this interest led to his contributions to probability theory.
In a letter to Bessel dated 2 August 1836, Hagen presented his hypothesis of elementary errors. In 1849 he was appointed as an expert adviser to the Frankfurt National Assembly, and in 1850 he was appointed expert councillor in the Prussian Ministry of Commerce. Hagen played a role in planning the development of numerous German rivers and harbors. The Prussian Admiralty appointed him to supervise the planning of Wilhelmshaven in 1855; Hagen took leave from his post in the Ministry of Trade and became chair of the Commission for the port construction in the Jade Bight. After rejecting the designs of two internationally known experts, he proposed his own design to the Prussian Admiralty on May 29, 1856. This port design met the requirements of the Prussian Admiralty but also allowed for later expansions and additions. The design was approved by order on 25 June 1856.
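Hagen's hypothesis of elementary errors holds that an observational error is the sum of many small, independent "elementary" errors, which makes the total error approximately normally distributed. The idea can be illustrated with a seeded simulation; the sample sizes and error magnitude below are arbitrary illustrative choices:

```python
# Illustration of the hypothesis of elementary errors: summing many small
# independent +/- errors yields an approximately normal total error.
# Sample sizes and magnitudes are arbitrary; the RNG is seeded for
# reproducibility.
import random

random.seed(42)

def observation_error(n_elementary=100, magnitude=0.01):
    """One observation's error: the sum of n_elementary tiny +/- errors."""
    return sum(random.choice((-magnitude, magnitude)) for _ in range(n_elementary))

errors = [observation_error() for _ in range(10_000)]
mean = sum(errors) / len(errors)
var = sum((e - mean) ** 2 for e in errors) / len(errors)
print(mean, var)  # mean near 0; variance near n * magnitude^2 = 0.01
```

A histogram of `errors` would show the familiar bell shape, the limiting behavior later formalized by central limit theorems.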
Gotthilf Hagen
–
Gotthilf Heinrich Ludwig Hagen
Gotthilf Hagen
–
United Artillery and Engineering School, Berlin
Gotthilf Hagen
–
Tomb of Gotthilf Hagen and Auguste on the Invalidenfriedhof, Berlin
Gotthilf Hagen
–
Detailed view of the monument to Hagen in Baltijsk
45.
Andrey Kolmogorov
–
Andrey Kolmogorov was born in Tambov, about 500 kilometers south-southeast of Moscow, in 1903. His mother, Maria Yakovlevna Kolmogorova, died giving birth to him. Andrey was raised by two of his aunts in Tunoshna at the estate of his grandfather, a well-to-do nobleman. Little is known about Andrey's father; he was supposedly named Nikolai Matveevich Kataev and had been an agronomist. Nikolai had been exiled from St. Petersburg to the Yaroslavl province after his participation in the movement against the czars. He disappeared in 1919 and was presumed to have been killed in the Russian Civil War. Andrey Kolmogorov was educated in his aunt Vera's village school, and his earliest literary efforts and mathematical papers were printed in the school journal; Andrey was the editor of the mathematical section of this journal. In 1910, his aunt adopted him, and they moved to Moscow, where he graduated from high school in 1920. Later that same year, Kolmogorov began to study at Moscow State University and, at the same time, at the Mendeleev Moscow Institute of Chemistry and Technology. Kolmogorov writes about this time: "I arrived at Moscow University with a knowledge of mathematics. I knew in particular the beginning of set theory. I studied many questions in articles in the Encyclopedia of Brockhaus and Efron, filling out for myself what was presented too concisely in these articles." Kolmogorov gained a reputation for his wide-ranging erudition. During the same period, Kolmogorov worked out and proved several results in set theory and in the theory of Fourier series. In 1922, Kolmogorov gained international recognition for constructing a Fourier series that diverges almost everywhere. Around this time, he decided to devote his life to mathematics. In 1925, Kolmogorov graduated from Moscow State University and began to study under the supervision of Nikolai Luzin. Kolmogorov became interested in probability theory.
In 1929, Kolmogorov earned his Doctor of Philosophy degree from Moscow State University. In 1930, he went on his first long trip abroad, traveling to Göttingen and Munich, and then to Paris; he had various scientific contacts in Göttingen. His pioneering work, About the Analytical Methods of Probability Theory, was published in 1931, and in the same year he became a professor at Moscow State University. In 1935, Kolmogorov became the first chairman of the department of probability theory at Moscow State University. Around the same years he contributed to the field of ecology and generalized the Lotka–Volterra model of predator-prey systems. In 1936, Kolmogorov and Alexandrov were involved in the persecution of their common teacher Nikolai Luzin, in the so-called Luzin affair. In a 1938 paper, Kolmogorov established the basic theorems for smoothing and predicting stationary stochastic processes, a paper that had military applications during the Cold War
Andrey Kolmogorov
–
Andrey Kolmogorov
Andrey Kolmogorov
–
Kolmogorov (left) delivers a talk at a Soviet information theory symposium. (Tallinn, 1973).
Andrey Kolmogorov
–
Kolmogorov works on his talk (Tallinn, 1973).
46.
Geoffrey Ingram Taylor
–
Sir Geoffrey Ingram Taylor OM was a British physicist and mathematician, and a major figure in fluid dynamics and wave theory. His biographer and one-time student, George Batchelor, described him as one of the most notable scientists of the 20th century. Taylor was born in St. John's Wood, London. His father, Edward Ingram Taylor, was an artist, and his mother, Margaret Boole, came from a family of mathematicians. As a child he was fascinated by science after attending the Royal Institution Christmas Lectures, and performed experiments using paint rollers and sticky-tape. Taylor read mathematics at Trinity College, Cambridge. His first paper was on quanta, showing that Young's slit diffraction experiment produced fringes even with light sources so feeble that less than one photon on average was present at a time. He followed this up with work on shock waves, winning a Smith's Prize. In 1910 he was elected to a Fellowship at Trinity College, and the following year he was appointed to a meteorology post, becoming Reader in Dynamical Meteorology. His work on turbulence in the atmosphere led to the publication of Turbulent motion in fluids. In 1913 Taylor served as a meteorologist aboard the Ice Patrol vessel Scotia, where his observations formed the basis of his later work on a theoretical model of the mixing of the air. At the outbreak of World War I, he was sent to the Royal Aircraft Factory at Farnborough to apply his knowledge to aircraft design. Not content just to sit back and do the science, he learned to fly aeroplanes. After the war Taylor returned to Trinity and worked on an application of turbulent flow to oceanography; he also worked on the problem of bodies passing through a rotating fluid. In 1923 he was appointed to a Royal Society research professorship as a Yarrow Research Professor. This enabled him to stop teaching, which he had been doing for the previous four years, and which he both disliked and had no great aptitude for. 
He also produced another major contribution to turbulent flow, where he introduced a new approach through a statistical study of velocity fluctuations. He likewise helped explain plastic deformation in terms of dislocations, an insight that was critical in developing the science of solid mechanics. During World War II, Taylor again applied his expertise to military problems such as the propagation of blast waves, and he was sent to the United States in 1944–1945 as part of the British delegation to the Manhattan Project. At Los Alamos, Taylor helped solve implosion instability problems in the development of atomic weapons, particularly the plutonium bomb used at Nagasaki on August 9, 1945. In 1944 he also received his knighthood and the Copley Medal from the Royal Society. Taylor was present at the Trinity test on July 16, 1945, as part of General Leslie Groves's VIP list of just 10 people who observed the test from Compania Hill, about 20 miles northwest of the shot tower. By a strange twist, Joan Hinton, another descendant of the mathematician George Boole, had been working on the same project. His estimate of 22 kt for the yield was remarkably close to the accepted value of 20 kt
Geoffrey Ingram Taylor
–
Sir Geoffrey Ingram Taylor
47.
Turbulence
–
Turbulence or turbulent flow is a flow regime in fluid dynamics characterized by chaotic changes in pressure and flow velocity. It is in contrast to the laminar flow regime, which occurs when a fluid flows in parallel layers with no disruption between them. Turbulence is caused by excessive kinetic energy in parts of a fluid flow, which overcomes the damping effect of the fluid's viscosity. For this reason turbulence is easier to create in low-viscosity fluids. In general terms, in turbulent flow, unsteady vortices of many sizes appear and interact with each other; consequently, drag due to friction effects increases. This increases the energy needed to pump fluid through a pipe. However, the effect can also be exploited by devices such as aerodynamic spoilers on aircraft, which deliberately spoil the laminar flow to increase drag and reduce lift. The onset of turbulence can be predicted by a dimensionless constant called the Reynolds number. However, turbulence has long resisted detailed physical analysis, and the interactions within turbulence create a complex situation. Richard Feynman described turbulence as the most important unsolved problem of classical physics. Smoke rising from a cigarette is mostly turbulent flow; for the first few centimeters the flow is laminar, and the smoke plume becomes turbulent as its Reynolds number increases with its flow velocity and characteristic length. If a golf ball were smooth, the boundary-layer flow over the front of the sphere would be laminar at typical conditions. However, the boundary layer would separate early, as the pressure gradient switched from favorable to unfavorable. To prevent this from happening, the surface is dimpled to perturb the boundary layer. This results in higher skin friction, but it moves the point of boundary-layer separation further along, resulting in lower form drag. Other examples of turbulence include the flow conditions in industrial equipment and machines; the external flow over vehicles such as cars, airplanes and ships; the motions of matter in stellar atmospheres; and a jet exhausting from a nozzle into a quiescent fluid. 
As the jet flow emerges into this external fluid, shear layers originating at the lips of the nozzle are created. These layers separate the fast-moving jet from the external fluid, and at a certain critical Reynolds number they become unstable and break down to turbulence. Biologically generated turbulence resulting from swimming animals affects ocean mixing, and snow fences work by inducing turbulence in the wind, forcing it to drop much of its snow load near the fence
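The Reynolds number mentioned above can be computed directly. The sketch below uses the standard definition Re = ρvL/μ; the laminar/transitional/turbulent thresholds (roughly 2300 and 4000) are common engineering conventions for circular-pipe flow, and the fluid properties in the example are assumed values for water at about 20 °C, none of which appear in the text itself.

```python
def reynolds_number(density, velocity, length, viscosity):
    """Re = rho * v * L / mu (dimensionless)."""
    return density * velocity * length / viscosity

def pipe_flow_regime(re):
    # Conventional thresholds for flow in a circular pipe (an assumption,
    # not from the text): below ~2300 laminar, above ~4000 fully turbulent.
    if re < 2300:
        return "laminar"
    elif re < 4000:
        return "transitional"
    return "turbulent"

# Water at ~20 C (998 kg/m^3, 1.002e-3 Pa*s) at 1 m/s in a 5 cm pipe
re = reynolds_number(density=998.0, velocity=1.0, length=0.05, viscosity=1.002e-3)
print(round(re), pipe_flow_regime(re))  # ~49800, turbulent
```

Increasing the velocity or the characteristic length, or lowering the viscosity, raises Re, which matches the cigarette-smoke example above.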
Turbulence
–
Flow visualization of a turbulent jet, made by laser-induced fluorescence. The jet exhibits a wide range of length scales, an important characteristic of turbulent flows.
Turbulence
–
Laminar and turbulent water flow over the hull of a submarine
Turbulence
–
Turbulence in the tip vortex from an airplane wing
48.
Mechanical equilibrium
–
In classical mechanics, a particle is in mechanical equilibrium if the net force on that particle is zero. By extension, a system made up of many parts is in mechanical equilibrium if the net force on each of its individual parts is zero. In addition to defining mechanical equilibrium in terms of force, there are alternative definitions which are all mathematically equivalent. In terms of momentum, a system is in equilibrium if the momentum of each of its parts is constant. In terms of velocity, the system is in equilibrium if velocity is constant. In a rotational mechanical equilibrium, the angular momentum of the object is conserved and the net torque is zero. More generally, in conservative systems, equilibrium is established at a point in space where the gradient of the potential energy with respect to the generalized coordinates is zero. If a particle in equilibrium has zero velocity, that particle is in static equilibrium. Since all particles in equilibrium have constant velocity, it is always possible to find an inertial reference frame in which the particle is stationary with respect to the frame. An important property of systems at mechanical equilibrium is their stability. If we have a function which describes the system's potential energy, we can determine the system's equilibria using calculus: a system is in equilibrium at the critical points of the function describing its potential energy, and we can locate these points using the fact that the derivative of the function is zero there. Second derivative <0: The potential energy is at a local maximum, which means the equilibrium is unstable; if the system is displaced an arbitrarily small distance from the equilibrium state, the forces of the system cause it to move even farther away. Second derivative >0: The potential energy is at a local minimum, a stable equilibrium; the response to a small perturbation is forces that tend to restore the equilibrium. If more than one stable equilibrium state is possible for a system, any equilibria whose potential energy is higher than the absolute minimum represent metastable states. 
Second derivative =0 or does not exist: The state is neutral to the lowest order; to investigate the precise stability of the system, higher-order derivatives must be examined. In a truly neutral state the energy does not vary, and the state of equilibrium has a finite width; this is sometimes referred to as a state that is marginally stable, or in a state of indifference. Generally an equilibrium is only referred to as stable if it is stable in all directions. Sometimes there is not enough information about the forces acting on a body to determine whether it is in equilibrium or not; this makes it an indeterminate system. An important special case of mechanical equilibrium is the static equilibrium of an object at rest
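The second-derivative test described above lends itself to a small numerical sketch. The Python below is illustrative only: the double-well potential U(x) = x⁴ − 2x² and the finite-difference step are assumptions chosen for the example, not anything from the text.

```python
def classify_equilibrium(U, x, h=1e-5, tol=1e-6):
    """Classify a point x of a 1-D potential U via central finite differences."""
    dU  = (U(x + h) - U(x - h)) / (2 * h)           # first derivative (force = -dU/dx)
    d2U = (U(x + h) - 2 * U(x) + U(x - h)) / h**2   # second derivative
    if abs(dU) > tol:
        return "not an equilibrium"   # nonzero net force
    if d2U > tol:
        return "stable"               # local minimum of potential energy
    if d2U < -tol:
        return "unstable"             # local maximum
    return "neutral (to lowest order)"

# Double-well potential: minima at x = +/-1 (stable), maximum at x = 0 (unstable)
U = lambda x: x**4 - 2 * x**2
print(classify_equilibrium(U, 1.0))   # stable
print(classify_equilibrium(U, 0.0))   # unstable
```

A perturbed system at x = 1 feels a restoring force back toward the minimum, matching the "second derivative > 0" case in the text.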
Mechanical equilibrium
–
Force diagram showing the forces acting on an object at rest on a surface. The normal force N is equal and opposite to the gravitational force mg so the net force is zero. Consequently the object is in a state of static mechanical equilibrium.
49.
Altitude
–
Altitude or height is defined based on the context in which it is used. As a general definition, altitude is a distance measurement, usually in the vertical or "up" direction, between a reference datum and a point or object. The reference datum also often varies according to the context. Although the term altitude is commonly used to mean the height above sea level of a location, in geography the term elevation is often preferred for this usage. Vertical distance measurements in the "down" direction are commonly referred to as depth. In aviation, the term altitude can have several meanings, and is always qualified by explicitly adding a modifier; parties exchanging altitude information must be clear which definition is being used. Aviation altitude is measured using either mean sea level or local ground level as the reference datum. When flying at a flight level, the altimeter is always set to standard pressure. On the flight deck, the instrument for measuring altitude is the pressure altimeter. There are several types of altitude. Indicated altitude is the reading on the altimeter when it is set to the local barometric pressure at mean sea level. In UK aviation radiotelephony usage, altitude is the vertical distance of a level, a point or an object considered as a point, measured from mean sea level. Absolute altitude is the height of the aircraft above the terrain over which it is flying; it can be measured using a radar altimeter, and is also referred to as radar height or feet/metres above ground level. True altitude is the actual elevation above mean sea level; it is indicated altitude corrected for temperature and pressure. Height is the elevation above a reference point, commonly the terrain elevation. Pressure altitude is used to indicate flight level, which is the standard for altitude reporting in the U.S. in Class A airspace. Pressure altitude and indicated altitude are the same when the altimeter setting is 29.92 inHg or 1013.25 millibars. 
Density altitude is the altitude corrected for non-ISA (International Standard Atmosphere) conditions. Aircraft performance depends on density altitude, which is affected by barometric pressure, humidity and temperature. On a very hot day, density altitude at an airport may be so high as to preclude takeoff. These types of altitude can be explained more simply as various ways of measuring the altitude: Indicated altitude – the altitude shown on the altimeter
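As a rough illustration of the relationships above, the following sketch computes pressure altitude from station pressure using the ISA barometric formula, and density altitude with a common rule-of-thumb correction of about 120 ft per °C above ISA temperature. The constants (1013.25 hPa, the 0.190284 exponent, 145366.45 ft, the 2 °C per 1000 ft lapse rate) are standard ISA values rather than figures from the text, and the rule of thumb is only an approximation.

```python
def pressure_altitude_ft(station_pressure_hpa):
    """Pressure altitude in feet from station pressure, ISA barometric formula."""
    return (1 - (station_pressure_hpa / 1013.25) ** 0.190284) * 145366.45

def density_altitude_ft(pressure_alt_ft, oat_c):
    """Rule of thumb: add ~120 ft per degree C above ISA temperature."""
    isa_temp_c = 15.0 - 2.0 * (pressure_alt_ft / 1000.0)  # ISA lapse ~2 C / 1000 ft
    return pressure_alt_ft + 120.0 * (oat_c - isa_temp_c)

print(round(pressure_altitude_ft(1013.25)))      # standard pressure -> 0 ft
print(round(density_altitude_ft(5000.0, 35.0)))  # hot day at 5000 ft -> 8600 ft
```

This mirrors the statement in the text: on a very hot day the density altitude can be thousands of feet above the field's pressure altitude.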
Altitude
–
Vertical distance comparison
50.
Hydraulics
–
Hydraulics is a technology and applied science using engineering, chemistry, and other sciences involving the mechanical properties and use of liquids or fluids. At a very basic level, hydraulics is the liquid counterpart of pneumatics, which concerns gases. Fluid mechanics provides the theoretical foundation for hydraulics, which focuses on the applied engineering using the properties of fluids. In fluid power, hydraulics is used for the generation, control, and transmission of power by the use of pressurized liquids. Hydraulic topics range through some parts of science and most of engineering modules, and cover concepts such as pipe flow, dam design, fluidics and fluid control circuitry, and pumps. The principles of hydraulics are in use naturally in the human body, for example within the heart and the vascular system. Free surface hydraulics is the branch of hydraulics dealing with free surface flow, such as occurs in rivers, canals, lakes and estuaries; its sub-field open channel flow studies the flow in open channels. The word hydraulics originates from the Greek word ὑδραυλικός, which in turn originates from ὕδωρ (water) and αὐλός (pipe). Early uses of water power date back to Mesopotamia and ancient Egypt; other early examples of water power include the Qanat system in ancient Persia and the Turpan water system in ancient Central Asia. The Greeks constructed sophisticated water and hydraulic power systems; an example is the construction by Eupalinos, under a public contract, of a watering channel for Samos, the Tunnel of Eupalinos. An early example of the usage of a hydraulic wheel, probably the earliest in Europe, is the Perachora wheel. Also notable is the construction of the first hydraulic automata by Ctesibius and Hero of Alexandria; Hero describes a number of working machines using hydraulic power, such as the force pump. In ancient China there were Sunshu Ao, Ximen Bao, Du Shi, Zhang Heng, and Ma Jun, while medieval China had Su Song and Shen Kuo. 
Du Shi employed a waterwheel to power the bellows of a blast furnace producing cast iron, and Zhang Heng was the first to employ hydraulics to provide motive power in rotating an armillary sphere for astronomical observation. In ancient Sri Lanka, hydraulics were used in the ancient kingdoms of Anuradhapura, where the principle of the valve tower, or valve pit, for regulating the escape of water was discovered. By the first century AD, several large irrigation works had been completed, and the rock citadel at Sigiriya includes cisterns for collecting water. The Romans were among the first to make use of the siphon to carry water across valleys, and they used lead widely in plumbing systems for domestic and public supply. Hydraulic mining was used in the gold-fields of northern Spain, which was conquered by Augustus in 25 BC
Hydraulics
–
Moat and gardens at Sigiriya.
Hydraulics
–
An open channel with a uniform depth. Open channel hydraulics deals with uniform and non-uniform streams.
Hydraulics
–
Aqueduct of Segovia, a 1st-century AD masterpiece.
51.
Plate tectonics
–
Plate tectonics is a scientific theory describing the large-scale motion of the plates making up the Earth's lithosphere. The theoretical model builds on the concept of continental drift developed during the first few decades of the 20th century; the geoscientific community accepted plate-tectonic theory after seafloor spreading was validated in the late 1950s and early 1960s. The lithosphere, which is the rigid outermost shell of a planet, is broken up into tectonic plates. The Earth's lithosphere is composed of seven or eight major plates and many minor plates. Where the plates meet, their relative motion determines the type of boundary: convergent, divergent, or transform. Earthquakes, volcanic activity, mountain-building, and oceanic trench formation occur along these plate boundaries. The relative movement of the plates typically ranges from zero to 100 mm annually. Tectonic plates are composed of oceanic lithosphere and thicker continental lithosphere, each topped by its own kind of crust. Along convergent boundaries, subduction carries plates into the mantle; the material lost is balanced by the formation of new crust along divergent margins by seafloor spreading. In this way, the total surface of the lithosphere remains the same. This prediction of plate tectonics is also referred to as the conveyor-belt principle; earlier theories, since disproven, proposed gradual shrinking or gradual expansion of the globe. Tectonic plates are able to move because the Earth's lithosphere has greater strength than the underlying asthenosphere, and lateral density variations in the mantle result in convection. Plate movement is thought to be driven by a combination of the motion of the seafloor away from the spreading ridge and drag, with downward suction, at the subduction zones. Another explanation lies in the different forces generated by tidal forces of the Sun and Moon. The relative importance of each of these factors and their relationship to each other is unclear. The outer layers of the Earth are divided into the lithosphere and asthenosphere; this division is based on differences in mechanical properties and in the method for the transfer of heat. 
Mechanically, the lithosphere is cooler and more rigid, while the asthenosphere is hotter and flows more easily. In terms of heat transfer, the lithosphere loses heat by conduction, whereas the asthenosphere also transfers heat by convection and has a nearly adiabatic temperature gradient. The key principle of plate tectonics is that the lithosphere exists as separate and distinct tectonic plates. Plate motions range from a typical 10–40 mm/year up to about 160 mm/year. The driving mechanism behind this movement is described below. Tectonic lithosphere plates consist of lithospheric mantle overlain by either or both of two types of crustal material: oceanic crust and continental crust. Average oceanic lithosphere is typically 100 km thick; its thickness is a function of its age, because as time passes it conductively cools
Plate tectonics
–
Remnants of the Farallon Plate, deep in Earth's mantle. It is thought that much of the plate initially went under North America (particularly the western United States and southwest Canada) at a very shallow angle, creating much of the mountainous terrain in the area (particularly the southern Rocky Mountains).
Plate tectonics
–
The tectonic plates of the world were mapped in the second half of the 20th century.
Plate tectonics
–
Plate motion based on Global Positioning System (GPS) satellite data from NASA JPL. The vectors show direction and magnitude of motion.
Plate tectonics
–
Alfred Wegener in Greenland in the winter of 1912-13.
52.
Medicine
–
Medicine is the science and practice of the diagnosis, treatment, and prevention of disease. The word medicine is derived from Latin medicus, meaning "a physician". Medicine encompasses a variety of health care practices evolved to maintain and restore health by the prevention and treatment of illness. Medicine has existed for thousands of years, during most of which it was an art frequently having connections to the religious and philosophical beliefs of local culture. For example, a medicine man would apply herbs and say prayers for healing, or an ancient philosopher and physician would apply bloodletting according to the theories of humorism. In recent centuries, since the advent of modern science, most medicine has become a combination of art and science: while stitching technique for sutures is an art learned through practice, the knowledge of what happens at the cellular and molecular level in the tissues being stitched arises through science. Prescientific forms of medicine are now known as traditional medicine and folk medicine. They remain commonly used with, or instead of, scientific medicine and are thus called alternative medicine. For example, evidence on the effectiveness of acupuncture is variable and inconsistent for any condition; in contrast, treatments outside the bounds of safety and efficacy are termed quackery. Medical availability and clinical practice vary across the world due to regional differences in culture. In modern clinical practice, physicians personally assess patients in order to diagnose, treat, and prevent disease. The doctor-patient relationship typically begins with an interaction including an examination of the patient's medical history and medical record, followed by a medical interview and a physical examination. Basic diagnostic medical devices are typically used; after examination for signs and interviewing for symptoms, the doctor may order medical tests, take a biopsy, or prescribe pharmaceutical drugs or other therapies. 
Differential diagnosis methods help to rule out conditions based on the information provided. During the encounter, properly informing the patient of all relevant facts is an important part of the relationship and of the development of trust. The medical encounter is then documented in the medical record, which is a legal document in many jurisdictions. Follow-ups may be shorter but follow the same general procedure, and the diagnosis and treatment may take only a few minutes or a few weeks depending upon the complexity of the issue. The components of the medical interview and encounter are: Chief complaint, the reason for the current medical visit, given in the patient's own words and recorded along with the duration of each symptom; also called chief concern or presenting complaint. History of present illness, the order of events of symptoms; distinguishable from history of previous illness, often called past medical history
Medicine
–
Early Medicine Bottles
Medicine
–
The Doctor, by Sir Luke Fildes (1891)
Medicine
–
The Hospital of Santa Maria della Scala, fresco by Domenico di Bartolo, 1441–1442
53.
Density
–
The density, or more precisely the volumetric mass density, of a substance is its mass per unit volume. The symbol most often used for density is ρ (the lower-case Greek letter rho), although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume: ρ = m / V, where ρ is the density, m is the mass, and V is the volume. In some cases, density is loosely defined as weight per unit volume. For a pure substance the density has the same numerical value as its mass concentration. Different materials usually have different densities, and density may be relevant to buoyancy and purity. Osmium and iridium are the densest known elements at standard conditions for temperature and pressure, but certain chemical compounds may be denser. A relative density less than one means that the substance floats in water. The density of a material varies with temperature and pressure; this variation is typically small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases the volume of the object and thus increases its density; increasing the temperature of a substance generally decreases its density by increasing its volume. In most materials, heating the bottom of a fluid results in convection of the heat from the bottom to the top, because the heated fluid expands and rises relative to denser unheated material. The reciprocal of the density of a substance is occasionally called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property, in that increasing the amount of a substance does not increase its density. Asked to judge whether a gold wreath had been adulterated, Archimedes knew that the irregularly shaped wreath could be crushed into a cube whose volume could be calculated easily and compared with the mass; upon discovering, in his bath, that immersion offered a way to measure the volume directly, he reportedly leapt from the bath and ran naked through the streets shouting "Eureka!". As a result, the term eureka entered common parlance and is used today to indicate a moment of enlightenment. The story first appeared in written form in Vitruvius' books of architecture, two centuries after it supposedly took place. 
Some scholars have doubted the accuracy of this tale, saying among other things that the method would have required precise measurements that would have been difficult to make at the time. From the equation for density, mass density has units of mass divided by volume. As there are many units of mass and volume covering many different magnitudes, there are a large number of units for mass density in use. The SI unit of kilogram per cubic metre (kg/m3) and the cgs unit of gram per cubic centimetre (g/cm3) are probably the most commonly used units for density; 1,000 kg/m3 equals 1 g/cm3. In industry, other larger or smaller units of mass and/or volume are often more practical; see below for a list of some of the most common units of density
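The defining relation ρ = m/V and the kg/m³-to-g/cm³ conversion stated above can be shown in a couple of lines. A minimal Python sketch; the one-litre-of-water example is an assumed illustration, not a figure from the text.

```python
def density(mass_kg, volume_m3):
    """rho = m / V, in kg/m^3."""
    return mass_kg / volume_m3

# 1,000 kg/m^3 equals 1 g/cm^3, so the conversion factor is 1e-3
KG_PER_M3_TO_G_PER_CM3 = 1e-3

# Roughly 1 kg of water occupies 1 litre (0.001 m^3) at everyday conditions
rho_water = density(mass_kg=1.0, volume_m3=0.001)
print(rho_water, rho_water * KG_PER_M3_TO_G_PER_CM3)  # ~1000 kg/m^3, ~1 g/cm^3
```

A relative density (density divided by that of water) below one then corresponds to a substance that floats, as the text notes.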
Density
–
Air density vs. temperature
54.
Aircraft
–
An aircraft is a machine that is able to fly by gaining support from the air. It counters the force of gravity by using either static lift or the dynamic lift of an airfoil. The human activity that surrounds aircraft is called aviation. Crewed aircraft are flown by an onboard pilot, but unmanned aerial vehicles may be remotely controlled or self-controlled by onboard computers. Aircraft may be classified by different criteria, such as lift type, aircraft propulsion, and usage. Each of the two World Wars led to great technical advances. Consequently, the history of aircraft can be divided into five eras: pioneers of flight; the First World War, 1914 to 1918; aviation between the World Wars, 1918 to 1939; the Second World War, 1939 to 1945; and the postwar era, also called the jet age, 1945 to the present day. Aerostats use buoyancy to float in the air in much the same way that ships float on the water. They are characterized by one or more large gasbags or canopies, filled with a relatively low-density gas such as helium, hydrogen, or hot air, which is less dense than the surrounding air. When the weight of this gas is added to the weight of the aircraft structure, it adds up to the same weight as the air that the craft displaces. A balloon was originally any aerostat, while the term airship was used for large, powered aircraft designs, usually fixed-wing, though none had yet been built. In 1919 Frederick Handley Page was reported as referring to "ships of the air", and in the 1930s, large intercontinental flying boats were also sometimes referred to as ships of the air or flying-ships. The advent of powered balloons, called dirigible balloons, and later of rigid hulls allowing a great increase in size, began to change the way these words were used. Huge powered aerostats, characterized by a rigid outer framework and separate aerodynamic skin surrounding the gas bags, were produced. 
There were still no fixed-wing aircraft or non-rigid balloons large enough to be called airships. Then several accidents, such as the Hindenburg disaster in 1937, led to the demise of these airships. Nowadays a balloon is an unpowered aerostat and an airship is a powered one. A powered, steerable aerostat is called a dirigible; sometimes this term is applied only to non-rigid balloons, and sometimes dirigible balloon is regarded as the definition of an airship. Non-rigid dirigibles are characterized by a moderately aerodynamic gasbag with stabilizing fins at the back; these soon became known as blimps. During the Second World War, this shape was widely adopted for tethered balloons, which were flown in windy weather
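The static lift of an aerostat described above follows Archimedes' principle: the upward force is the weight of the displaced air minus the weight of the lifting gas and the structure. A minimal Python sketch; the densities and masses are assumed round numbers for illustration (air roughly 1.225 kg/m³ and helium roughly 0.179 kg/m³ near sea level), not figures from the text.

```python
G = 9.81  # gravitational acceleration, m/s^2

def net_buoyant_force(volume_m3, gas_density, structure_mass_kg, air_density=1.225):
    """Archimedes: net upward force = weight of displaced air
    minus weight of lifting gas minus weight of the structure (newtons)."""
    displaced_air_mass = volume_m3 * air_density
    lifting_gas_mass = volume_m3 * gas_density
    return (displaced_air_mass - lifting_gas_mass - structure_mass_kg) * G

# A 1000 m^3 helium envelope carrying 600 kg of structure and payload
f = net_buoyant_force(1000.0, 0.179, 600.0)
print(f > 0)  # positive net force: the aerostat floats
```

The craft is neutrally buoyant exactly when gas plus structure weigh the same as the displaced air, which is the balance condition stated in the text.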
Aircraft
–
NASA test aircraft
Aircraft
–
The Mil Mi-8 is the most-produced helicopter in history
Aircraft
–
"Voodoo", a modified P-51 Mustang, is the 2014 Reno Air Race champion
Aircraft
–
A hot air balloon in flight
55.
Petroleum
–
Petroleum is a naturally occurring, yellow-to-black liquid found in geological formations beneath the Earth's surface, which is commonly refined into various types of fuels. Components of petroleum are separated using a technique called fractional distillation; it consists of hydrocarbons of various molecular weights and other organic compounds. The name petroleum covers both naturally occurring unprocessed crude oil and petroleum products that are made up of refined crude oil. A fossil fuel, petroleum is formed when large quantities of dead organisms, usually zooplankton and algae, are buried underneath sedimentary rock and subjected to intense heat and pressure. Petroleum has mostly been recovered by oil drilling, which is carried out after studies of structural geology and sedimentary basin analysis. Petroleum is used in manufacturing a wide variety of materials. There is concern over the depletion of the earth's finite reserves of oil, and the burning of fossil fuels plays the major role in the current episode of global warming. The word petroleum comes from Greek πέτρα, for rocks, and Greek ἔλαιον, for oil. The term was found in 10th-century Old English sources, and was used in the treatise De Natura Fossilium, published in 1546 by the German mineralogist Georg Bauer. Petroleum, in one form or another, has been used since ancient times, and is now important across society, including in economy, politics and technology. Great quantities of it were found on the banks of the river Issus, and ancient Persian tablets indicate the medicinal and lighting uses of petroleum in the upper levels of their society. By 347 AD, oil was produced from bamboo-drilled wells in China. Early British explorers to Myanmar documented a flourishing oil extraction industry based in Yenangyaung that, in 1795, had hundreds of hand-dug wells under production. The mythological origins of the oil fields at Yenangyaung, and their hereditary monopoly control by 24 families, suggest very ancient origins. Pechelbronn is said to be the first European site where petroleum has been explored and used. 
The still-active Erdpechquelle, a spring where petroleum appears mixed with water, has been used since 1498. Oil sands have been mined since the 18th century. In Wietze in Lower Saxony, natural asphalt/bitumen has been explored since the 18th century; both in Pechelbronn and in Wietze, the coal industry dominated the petroleum technologies. In 1848 the Scottish chemist James Young set up a small business refining the crude oil. Young eventually succeeded, by distilling cannel coal at a low heat, in creating a fluid resembling petroleum, which when treated in the same way as the seep oil gave similar products. The production of oils and solid paraffin wax from coal formed the subject of his patent dated 17 October 1850. In 1850 Young & Meldrum and Edward William Binney entered into partnership under the title of E. W. Binney & Co. at Bathgate in West Lothian. The world's first oil refinery was built in 1856 by Ignacy Łukasiewicz. The demand for petroleum as a fuel for lighting in North America and around the world quickly grew. Edwin Drake's 1859 well near Titusville, Pennsylvania, is popularly considered the first modern well
Petroleum
–
Pumpjack pumping an oil well near Lubbock, Texas
Petroleum
–
An oil refinery in Mina-Al-Ahmadi, Kuwait
Petroleum
–
Natural petroleum spring in Korňa, Slovakia
Petroleum
–
Oil derrick in Okemah, Oklahoma, 1922
56.
Nebula
–
A nebula is an interstellar cloud of dust, hydrogen, helium and other ionized gases. Originally, nebula was a name for any diffuse astronomical object; the Andromeda Galaxy, for instance, was once referred to as the Andromeda Nebula before the true nature of galaxies was confirmed in the early 20th century by Vesto Slipher, Edwin Hubble and others. Most nebulae are of vast size, some even hundreds of light-years in diameter. The Orion Nebula, the brightest nebula in the sky, which occupies a region twice the angular diameter of the full Moon, can be viewed with the naked eye but was missed by early astronomers. Many nebulae are visible due to their fluorescence caused by embedded hot stars, while others are so diffuse they can only be detected with long exposures. Some nebulae are variably illuminated by T Tauri variable stars. Nebulae are often star-forming regions, such as the Pillars of Creation in the Eagle Nebula. In these regions the formations of gas, dust, and other materials clump together to form denser regions, which attract further matter and eventually become dense enough to form stars; the remaining material is believed to form planets and other planetary system objects. Around 150 AD, Claudius Ptolemaeus recorded, in books VII–VIII of his Almagest, five stars that appeared nebulous, and he also noted a region of nebulosity between the constellations Ursa Major and Leo that was not associated with any star. The first true nebula, as distinct from a star cluster, was mentioned by the Persian astronomer Abd al-Rahman al-Sufi. He noted "a little cloud" where the Andromeda Galaxy is located, and he also cataloged the Omicron Velorum star cluster as a nebulous star, along with other nebulous objects such as Brocchi's Cluster. The supernova that created the Crab Nebula, SN 1054, was observed by Arabic and Chinese astronomers in 1054. In 1610, Nicolas-Claude Fabri de Peiresc discovered the Orion Nebula using a telescope; this nebula was also observed by Johann Baptist Cysat in 1618. 
However, the first detailed study of the Orion Nebula was not performed until 1659, by Christiaan Huygens. In 1715, Edmond Halley published a list of six nebulae. This number steadily increased during the century, with Jean-Philippe de Cheseaux compiling a list of 20 in 1746; from 1751 to 1753, Nicolas Louis de Lacaille cataloged 42 nebulae from the Cape of Good Hope, most of which were previously unknown. Charles Messier then compiled a catalog of 103 nebulae by 1781, though his interest was detecting comets. The number of nebulae was then greatly expanded by the efforts of William Herschel and his sister Caroline Herschel: their Catalogue of One Thousand New Nebulae and Clusters of Stars was published in 1786, a second catalog of a thousand in 1789, and the third and final catalog of 510 in 1802. During much of their work, William Herschel believed that these nebulae were merely unresolved clusters of stars. In 1790, however, he discovered a star surrounded by nebulosity and concluded that this was a true nebulosity, rather than a more distant cluster.
Nebula
–
The " Pillars of Creation " from the Eagle Nebula. Evidence from the Spitzer Telescope suggests that the pillars may already have been destroyed by a supernova explosion, but the light showing us the destruction will not reach the Earth for another millennium.
Nebula
–
Portion of the Carina Nebula
Nebula
–
The Triangulum Emission Nebula NGC 604
Nebula
–
Herbig–Haro object HH 161 and HH 164.
57.
Explosions
–
An explosion is a rapid increase in volume and release of energy in an extreme manner, usually with the generation of high temperatures and the release of gases. Supersonic explosions created by high explosives are known as detonations and travel via supersonic shock waves; subsonic explosions are created by low explosives through a slower burning process known as deflagration. Explosions may also be caused by devices such as exploding rockets or fireworks. Most natural explosions arise from volcanic processes of various sorts; explosions also occur as a result of impact events and in phenomena such as hydrothermal explosions. Explosions can also occur beyond Earth, in events such as supernovae. Explosions frequently occur during bushfires in eucalyptus forests, where the volatile oils in the tree tops suddenly combust. Solar flares are an example of a common explosion on the Sun; the energy source for solar flare activity comes from the tangling of magnetic field lines resulting from the rotation of the Sun's conductive plasma. Another type of large astronomical explosion occurs when a large meteoroid or an asteroid impacts the surface of another object. The most common artificial explosives are chemical explosives, usually involving a rapid and violent oxidation reaction; gunpowder was the first explosive to be discovered and put to use. Another notable early development in chemical explosive technology was Frederick Augustus Abel's development of nitrocellulose in 1865. Chemical explosions are often initiated by an electric spark or flame. Accidental explosions may occur in fuel tanks, rocket engines, etc. A high-current electrical fault can create an electrical explosion by forming a high-energy electrical arc which rapidly vaporizes metal and insulation material; this arc flash hazard is a danger to persons working on energized switchgear. Also, excessive magnetic pressure within an ultra-strong electromagnet can cause a magnetic explosion.
A mechanical explosion is strictly a physical process, as opposed to a chemical or nuclear one; examples include an overheated boiler or a simple tin can of beans tossed into a fire. Note that the contents of the container may cause a subsequent chemical explosion: in such a case, the effects of the mechanical explosion when the tank fails are added to the effects of the explosion resulting from the released propane in the presence of an ignition source. For this reason, emergency workers often differentiate between the two events. In addition to stellar nuclear explosions, a man-made nuclear weapon is a type of explosive weapon that derives its destructive force from nuclear fission or from a combination of fission and fusion. Explosive force is released in a direction perpendicular to the surface of the explosive. If a grenade is in mid-air during the explosion, the direction of the blast will be 360°. If the surface is cut or shaped, the explosive forces can be focused to produce a greater local effect; this is known as a shaped charge. The speed of the reaction is what distinguishes an explosive reaction from a combustion reaction: unless the reaction occurs rapidly, the thermally expanding gases will be moderately dissipated in the medium.
Explosions
–
Detonation of 16 tons of explosives.
Explosions
–
Gasoline explosions, simulating bomb drops at an airshow.
Explosions
–
Black smoke from an explosion rising after a bomb detonation at Nahr al-Bared, Lebanon.
Explosions
–
Detonation of a MICLIC to destroy a blast-resistant minefield 1 km in depth in Iraq.
58.
Stress (physics)
–
For example, when a solid vertical bar is supporting a weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a closed container under pressure, each particle gets pushed against by all the surrounding particles, and the container walls and the pressure-inducing surface push against them in reaction. These macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Stress inside a material may arise by various mechanisms, such as reaction to external forces applied to the material or to its surface. Any strain (deformation) of a material generates an internal elastic stress, analogous to the reaction force of a spring. In liquids and gases, only deformations that change the volume generate persistent elastic stress; however, if the deformation is gradually changing with time, even in fluids there will usually be some viscous stress opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress. Significant stress may exist even when deformation is negligible or non-existent, and stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress may also be imposed on a material without the application of net forces, for example by changes in temperature or chemical composition. Stress that exceeds certain strength limits of the material will result in permanent deformation or even change its crystal structure and chemical composition. In some branches of engineering, the term stress is occasionally used in a looser sense as a synonym of internal force; for example, in the analysis of trusses, it may refer to the total traction or compression force acting on a beam. Since ancient times humans have been consciously aware of stress inside materials.
Until the 17th century the understanding of stress was largely intuitive and empirical. With the mathematical tools then becoming available, Augustin-Louis Cauchy was able to give the first rigorous and general mathematical model for stress in a homogeneous medium. Cauchy observed that the force across a surface is a linear function of the surface's normal vector. The understanding of stress in liquids started with Newton, who provided a differential formula for friction forces in parallel laminar flow. Stress is defined as the force across a small boundary per unit area of that boundary; following the basic premises of continuum mechanics, stress is a macroscopic concept. In a fluid at rest the force is perpendicular to the surface; in a solid, or in a flow of viscous liquid, the force F may not be perpendicular to S. Hence the stress across a surface must be regarded as a vector quantity, not a scalar. Moreover, its direction and magnitude depend on the orientation of S, so the stress state of the material must be described by a tensor, called the stress tensor. With respect to any chosen coordinate system, the Cauchy stress tensor can be represented as a symmetric matrix of 3×3 real numbers.
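Cauchy's observation can be sketched numerically: relative to a coordinate system the stress state is a symmetric 3×3 matrix, and the traction (stress vector) across a surface is that matrix applied to the surface's unit normal. A minimal sketch in plain Python, with illustrative stress values:

```python
# Cauchy's relation t = sigma · n, sketched with plain lists.
# The stress components (in Pa) are illustrative, not from any real material.
sigma = [
    [100.0,  20.0,   0.0],
    [ 20.0,  50.0,  10.0],
    [  0.0,  10.0,  80.0],
]  # symmetric: sigma[i][j] == sigma[j][i]

def traction(stress, n):
    """Stress vector across a surface with unit normal n: t_i = sum_j sigma_ij n_j."""
    return [sum(stress[i][j] * n[j] for j in range(3)) for i in range(3)]

n = [0.0, 0.0, 1.0]                          # surface normal along z
t = traction(sigma, n)                       # traction vector on that surface
normal = sum(t[i] * n[i] for i in range(3))  # normal stress component
```

Because the traction depends linearly on n, the nine components of sigma fully determine the stress across every possible surface orientation at the point.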
Stress (physics)
–
Built-in strain, inside the plastic protractor, developed by the stress of the shape of the protractor, is revealed by the effect of polarized light.
Stress (physics)
–
Roman -era bridge in Switzerland
Stress (physics)
–
Inca bridge on the Apurimac River
Stress (physics)
–
Glass vase with the craquelé effect. The cracks are the result of brief but intense stress created when the semi-molten piece is briefly dipped in water.
59.
Control volume
–
In continuum mechanics and thermodynamics, a control volume is a mathematical abstraction employed in the process of creating mathematical models of physical processes. In an inertial frame of reference, it is a volume fixed in space or moving with constant flow velocity through which the continuum flows; the surface enclosing the control volume is referred to as the control surface. At steady state, a control volume can be thought of as an arbitrary volume in which the mass of the continuum remains constant: as a continuum moves through the volume, the mass entering the control volume is equal to the mass leaving it. At steady state, and in the absence of work and heat transfer, the energy within the control volume remains constant. The concept is analogous to the classical-mechanics concept of the free body diagram. Typically, to understand how a given physical law applies to the system under consideration, one first begins by considering how it applies to a small control volume. There is nothing special about a particular control volume; it simply represents a small part of the system to which physical laws can be easily applied. This gives rise to what is termed a volumetric, or volume-wise, formulation of the mathematical model; in this way, the corresponding point-wise formulation can be developed so it can describe the physical behaviour of an entire system. In continuum mechanics the conservation equations are in integral form; finding forms of the equations that are independent of the control volume allows simplification of the integral signs. Computations in continuum mechanics often require that the regular time derivative operator d/dt be replaced by the substantive derivative operator D/Dt. This can be seen as follows: consider a bug that is moving through a volume where some scalar, e.g. pressure, varies with time and position, p = p(t, x, y, z). If the bug is just moving with the flow, the rate of change of pressure it experiences follows by the chain rule, giving the substantive derivative of the scalar pressure, Dp/Dt = ∂p/∂t + v · ∇p.
Since the pressure p in this computation is an arbitrary scalar field, the same construction applies to any field carried by the flow.
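The substantive derivative can be checked numerically in one dimension, where Dp/Dt = ∂p/∂t + u ∂p/∂x. The field and velocity below are illustrative, chosen so the partial derivatives are easy to verify by hand:

```python
# Finite-difference sketch of the substantive derivative Dp/Dt = ∂p/∂t + u ∂p/∂x
# for a 1-D pressure field p(t, x). Field, velocity and sample point are illustrative.

def p(t, x):
    return 2.0 * t + 3.0 * x   # linear field: ∂p/∂t = 2, ∂p/∂x = 3

u = 4.0                        # local flow velocity carrying the "bug"
h = 1e-6                       # finite-difference step
t0, x0 = 1.0, 0.5

dp_dt = (p(t0 + h, x0) - p(t0 - h, x0)) / (2 * h)  # local (Eulerian) rate of change
dp_dx = (p(t0, x0 + h) - p(t0, x0 - h)) / (2 * h)  # spatial gradient
Dp_Dt = dp_dt + u * dp_dx                          # rate of change following the flow
```

For this field, Dp/Dt = 2 + 4·3 = 14: the bug sees the pressure change both because the field evolves in time and because it is being carried to a new position.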
Control volume
60.
Statistical mechanics
–
Statistical mechanics is a branch of theoretical physics that uses probability theory to study the average behaviour of a mechanical system where the state of the system is uncertain. A common use of statistical mechanics is in explaining the thermodynamic behaviour of large systems. The branch which treats and extends classical thermodynamics is known as statistical thermodynamics or equilibrium statistical mechanics. Statistical mechanics also finds use outside equilibrium: an important subbranch known as non-equilibrium statistical mechanics deals with the issue of microscopically modelling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions and flows of particles. In physics there are two types of mechanics usually examined: classical mechanics and quantum mechanics. Statistical mechanics fills the disconnection between the laws of mechanics and the practical experience of incomplete knowledge by adding some uncertainty about which state the system is in, in the form of a statistical ensemble: a probability distribution over all states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points; in quantum statistical mechanics, the ensemble is a probability distribution over pure states, and can be compactly summarized as a density matrix. These two meanings are equivalent for many purposes, and will be used interchangeably in this article. However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion; thus, the ensemble itself also evolves, as the systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). One special class of ensembles is those that do not evolve over time.
These ensembles are known as equilibrium ensembles, and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics; non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems. The primary goal of statistical thermodynamics is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles. Whereas statistical mechanics proper involves dynamics, here the attention is focused on statistical equilibrium. Statistical equilibrium does not mean that the particles have stopped moving; rather, only that the ensemble is not evolving. A sufficient condition for statistical equilibrium of an isolated system is that the probability distribution is a function only of conserved properties. There are many different equilibrium ensembles that can be considered, and additional postulates are necessary to motivate why the ensemble for a given system should have one form or another. A common approach found in textbooks is to take the equal a priori probability postulate.
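The idea that an equilibrium ensemble depends only on conserved quantities can be illustrated with the canonical (Boltzmann) distribution over discrete energy levels, p_i ∝ exp(−E_i/kT). The energy levels and temperature below are illustrative, in units where Boltzmann's constant k = 1:

```python
import math

# Canonical equilibrium ensemble over three illustrative energy levels,
# in units where Boltzmann's constant k = 1.
energies = [0.0, 1.0, 2.0]
T = 1.0

weights = [math.exp(-E / T) for E in energies]
Z = sum(weights)                      # partition function (normalization)
probs = [w / Z for w in weights]      # stationary probability of each state

# The probability of a state depends only on its conserved energy, so the
# distribution is unchanged by the dynamics: a statistical-equilibrium ensemble.
```

Lower-energy states carry higher probability, and because each probability is a function of energy alone, the ensemble does not evolve in time.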
Statistical mechanics
–
Statistical mechanics
61.
Kinematic viscosity
–
The viscosity of a fluid is a measure of its resistance to gradual deformation by shear stress or tensile stress. For liquids, it corresponds to the informal concept of thickness: honey, for example, has a much higher viscosity than water. Viscosity is a property of the fluid which opposes the relative motion between two surfaces of the fluid that are moving at different velocities. For a given velocity pattern, the stress required is proportional to the fluid's viscosity. A fluid that has no resistance to shear stress is known as an ideal or inviscid fluid; zero viscosity is observed only at very low temperatures, in superfluids. Otherwise, all fluids have positive viscosity, and are said to be viscous or viscid. A fluid with a very high viscosity, such as pitch, may appear to be a solid. The word viscosity is derived from the Latin viscum, meaning mistletoe. The dynamic viscosity of a fluid expresses its resistance to shearing flows, where adjacent layers move parallel to each other with different speeds. It can be defined through the idealized situation known as a Couette flow: a homogeneous fluid layer trapped between two parallel plates, the bottom one fixed and the top one moving horizontally at constant speed u at a separation y. If the speed of the top plate is small enough, the fluid particles will move parallel to it, and their speed will vary linearly from zero at the bottom to u at the top. Each layer of fluid will move faster than the one just below it; in particular, the fluid will apply on the top plate a force in the direction opposite to its motion, and an equal but opposite one to the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed. The magnitude F of this force is found to be proportional to the speed u and the area A of each plate, and inversely proportional to their separation y. The proportionality factor μ in this formula is the viscosity of the fluid; the ratio u/y is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction perpendicular to the plates. Isaac Newton expressed the viscous forces by the differential equation τ = μ ∂u/∂y, where τ = F/A.
This formula assumes that the flow is moving along parallel lines; the differential form can be used where the velocity does not vary linearly with y, such as in fluid flowing through a pipe. Use of the Greek letter mu (μ) for the dynamic viscosity is common among mechanical and chemical engineers; however, the Greek letter eta (η) is used by chemists and physicists.
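The Couette-flow definition above can be sketched directly: with a linear velocity profile the shear rate is u/y, and Newton's law τ = μ · (u/y) gives the stress. The material and geometry values below are illustrative (μ is roughly that of water at room temperature):

```python
# Newton's law of viscosity, τ = μ ∂u/∂y, for planar Couette flow with a
# linear velocity profile. All numerical values are illustrative.
mu = 1.0e-3        # dynamic viscosity, Pa·s (roughly water at room temperature)
u_top = 0.5        # top plate speed, m/s
gap = 1.0e-3       # plate separation y, m

shear_rate = u_top / gap      # ∂u/∂y for a linear profile, s^-1
tau = mu * shear_rate         # shear stress τ ≈ 0.5 Pa

area = 0.1                    # assumed plate area, m^2
force = tau * area            # force needed to keep the top plate moving, F = τA
```

Doubling the plate speed, or halving the gap, doubles the required force: exactly the proportionalities stated in the text.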
Kinematic viscosity
–
Pitch has a viscosity approximately 230 billion (2.3 × 10^11) times that of water.
Kinematic viscosity
–
A simulation of substances with different viscosities. The substance above has lower viscosity than the substance below
Kinematic viscosity
–
Example of the viscosity of milk and water. Liquids with higher viscosities make smaller splashes when poured at the same velocity.
Kinematic viscosity
–
Honey being drizzled.
62.
Reynolds number
–
The Reynolds number is an important dimensionless quantity in fluid mechanics used to help predict flow patterns in different fluid flow situations. It has wide applications, ranging from liquid flow in a pipe to the passage of air over an aircraft wing. The concept was introduced by George Gabriel Stokes in 1851, but the Reynolds number was named by Arnold Sommerfeld in 1908 after Osborne Reynolds, who popularized its use in 1883. A similar effect is created by the introduction of a stream of higher-velocity fluid; this relative movement generates fluid friction, which is a factor in developing turbulent flow. Counteracting this effect is the viscosity of the fluid, which, as it increases, progressively inhibits turbulence. The Reynolds number quantifies the relative importance of these two types of forces for given flow conditions, and is a guide to when turbulent flow will occur in a particular situation. Such scaling is not linear, and the application of Reynolds numbers to both situations allows scaling factors to be developed. The Reynolds number can be defined for several different situations where a fluid is in relative motion to a surface. These definitions generally include the fluid properties of density and viscosity, plus a velocity and a characteristic length or dimension. This dimension is a matter of convention; for example, radius and diameter are equally valid to describe spheres or circles. For aircraft or ships, the length or width can be used. For flow in a pipe, or for a sphere moving in a fluid, the internal diameter of the pipe, or the diameter of the sphere, is generally used today. Other shapes, such as non-circular pipes or non-spherical objects, have an equivalent diameter defined. For fluids of variable density, such as gases, or of variable viscosity, such as non-Newtonian fluids, special rules apply. The velocity may also be a matter of convention in some circumstances. In practice, matching the Reynolds number is not on its own sufficient to guarantee similitude: fluid flow is generally chaotic, and very small changes to shape and surface roughness can result in very different flows.
Nevertheless, Reynolds numbers are an important guide and are widely used. Osborne Reynolds famously studied the conditions in which the flow of fluid in pipes transitioned from laminar flow to turbulent flow. When the velocity was low, the dyed layer remained distinct through the entire length of the large tube; when the velocity was increased, the layer broke up at a given point. The point at which this happened was the transition point from laminar to turbulent flow. From these experiments came the dimensionless Reynolds number for dynamic similarity: the ratio of inertial forces to viscous forces.
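For pipe flow the usual definition is Re = ρvD/μ, with the internal diameter as the characteristic length. A minimal sketch, using approximate room-temperature water properties and an illustrative pipe; the ~2300 laminar limit used below is a commonly quoted rule of thumb, not a sharp physical constant:

```python
# Reynolds number for pipe flow, Re = ρ v D / μ.
# Fluid properties approximate water at room temperature; pipe values illustrative.
rho = 998.0    # density, kg/m^3
mu = 1.0e-3    # dynamic viscosity, Pa·s
v = 0.2        # mean flow velocity, m/s
D = 0.05       # internal pipe diameter, m

Re = rho * v * D / mu    # ≈ 9980

# Commonly quoted rule of thumb: pipe flow is laminar below roughly Re ≈ 2300.
regime = "laminar" if Re < 2300 else "turbulent"
```

Here Re ≈ 9980, so the flow would be expected to be turbulent: inertial forces dominate viscous forces by roughly four orders of magnitude.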
Reynolds number
–
Sir George Stokes, who introduced Reynolds numbers
Reynolds number
–
Osborne Reynolds popularised the concept
Reynolds number
–
The Moody diagram, which describes the Darcy–Weisbach friction factor f as a function of the Reynolds number and relative pipe roughness.
63.
Boundary layer
–
In the Earth's atmosphere, the atmospheric boundary layer is the air layer near the ground affected by diurnal heat, moisture or momentum transfer to or from the surface. On an aircraft wing, the boundary layer is the part of the flow close to the wing. Laminar boundary layers can be classified according to their structure and the circumstances under which they are created. When a fluid rotates and viscous forces are balanced by the Coriolis effect, an Ekman layer forms. In the theory of heat transfer, a thermal boundary layer occurs. A surface can have multiple types of boundary layer simultaneously. The viscous nature of airflow reduces the local velocities on a surface and is responsible for skin friction; the layer of air over the surface that is slowed down or stopped by viscosity is the boundary layer. There are two different types of boundary layer flow: laminar and turbulent. The laminar boundary layer is a very smooth flow, while the turbulent boundary layer contains swirls or eddies. The laminar flow creates less skin friction drag than the turbulent flow. Boundary layer flow over a wing surface begins as a smooth laminar flow; as the flow continues back from the leading edge, the laminar boundary layer increases in thickness. At some distance back from the leading edge, the smooth laminar flow breaks down and transitions to a turbulent flow; the low-energy laminar flow, however, tends to break down more suddenly than the turbulent layer. The aerodynamic boundary layer was first defined by Ludwig Prandtl in a paper presented on August 12, 1904 at the third International Congress of Mathematicians in Heidelberg. Dividing the flow field into a boundary layer and an outer region allows a closed-form solution for the flow in both areas, a significant simplification of the full Navier–Stokes equations. The majority of the heat transfer to and from a body also takes place within the boundary layer. The pressure in the direction normal to the surface remains constant throughout the boundary layer.
The thickness of the velocity boundary layer is normally defined as the distance from the solid body at which the viscous flow velocity is 99% of the freestream velocity. Displacement thickness is an alternative definition, stating that the boundary layer represents a deficit in mass flow compared to an inviscid flow with slip at the wall: it is the distance by which the wall would have to be displaced in the inviscid case to give the same total mass flow as the viscous case. The no-slip condition requires the flow velocity at the surface of a solid object to be zero. The flow velocity will then increase rapidly within the boundary layer, governed by the boundary layer equations.
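For laminar flow over a flat plate, the classical Blasius solution gives a standard estimate of this 99% thickness, δ ≈ 5.0 x / √Re_x, where Re_x = Ux/ν is the local Reynolds number. A sketch with illustrative values (ν is roughly that of air at room temperature):

```python
import math

# Blasius estimate of the 99% boundary-layer thickness for laminar flow over
# a flat plate: δ ≈ 5.0 x / sqrt(Re_x). All numerical values are illustrative.
nu = 1.5e-5    # kinematic viscosity, m^2/s (roughly air at room temperature)
U = 10.0       # freestream velocity, m/s
x = 0.3        # distance from the leading edge, m

Re_x = U * x / nu                    # local Reynolds number: 2.0e5
delta = 5.0 * x / math.sqrt(Re_x)    # boundary-layer thickness, ≈ 3.35 mm
```

The δ ∝ √x growth matches the text: the laminar boundary layer thickens as the flow continues back from the leading edge.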
Boundary layer
–
Ludwig Prandtl
64.
Drag (physics)
–
In fluid dynamics, drag is a force acting opposite to the relative motion of any object moving with respect to a surrounding fluid. Drag can exist between two fluid layers or between a fluid and a solid surface. Unlike other resistive forces, such as dry friction, which are nearly independent of velocity, drag forces depend on velocity: drag force is proportional to the velocity for a laminar flow and to the squared velocity for a turbulent flow. Even though the ultimate cause of drag is viscous friction, turbulent drag is independent of viscosity. Drag forces always decrease fluid velocity relative to the object in the fluid's path. In the case of viscous drag of fluid in a pipe, the drag force on the immobile pipe decreases fluid velocity relative to the pipe. In the physics of sports, the drag force is necessary to explain the performance of runners, particularly of sprinters. Types of drag are generally divided into the following categories: parasitic drag, consisting of form drag, skin friction and interference drag; lift-induced drag; and wave drag. The phrase parasitic drag is mainly used in aerodynamics, since for lifting wings such drag is in general small compared to lift. For flow around bluff bodies, form and interference drags often dominate. Further, lift-induced drag is only relevant when wings or a lifting body are present, and is therefore usually discussed either in aviation or in the design of semi-planing or planing hulls. Wave drag occurs either when an object is moving through a fluid at or near the speed of sound, or when a solid object is moving along a fluid boundary. Drag depends on the properties of the fluid and on the size, shape and speed of the object. At low Re, C_D is asymptotically proportional to 1/Re, which means that the drag is linearly proportional to the speed. At high Re, C_D is more or less constant. The graph to the right shows how C_D varies with Re for the case of a sphere. As mentioned, the drag equation with a constant drag coefficient gives the force experienced by an object moving through a fluid at relatively large velocity. This is also called quadratic drag. The equation is attributed to Lord Rayleigh, who originally used L² in place of A.
Sometimes a body is a composite of different parts, each with a different reference area. In the case of a wing, the reference areas are the same, and the drag force is in the same ratio to the lift force as the ratio of drag coefficient to lift coefficient; therefore, the reference area for a wing is often the planform area rather than the frontal area. For an object with a smooth surface and non-fixed separation points, like a sphere or circular cylinder, the drag coefficient may vary with Reynolds number Re. For an object with well-defined fixed separation points, like a disk with its plane normal to the flow direction, the drag coefficient is essentially constant.
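The quadratic drag equation, F_D = ½ ρ v² C_D A, is easy to evaluate directly. The values below are illustrative: air density at sea level, a football-sized sphere, and C_D ≈ 0.47, a commonly quoted value for a smooth sphere in the high-Re regime where C_D is roughly constant:

```python
import math

# Quadratic drag, F_D = 1/2 ρ v^2 C_D A, for a sphere. All values illustrative;
# C_D ≈ 0.47 is a commonly quoted figure for a smooth sphere at high Re.
rho = 1.225            # air density at sea level, kg/m^3
v = 30.0               # speed relative to the air, m/s
C_D = 0.47             # drag coefficient (assumed constant here)
r = 0.11               # sphere radius, m

A = math.pi * r**2     # frontal (reference) area, m^2
F_D = 0.5 * rho * v**2 * C_D * A    # drag force, ≈ 9.85 N
```

Because F_D scales with v², halving the speed cuts the drag force by a factor of four: the quadratic dependence the text attributes to turbulent flow.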
Drag (physics)
–
The power curve: form and induced drag vs. airspeed
Drag (physics)
65.
Friction
–
Friction is the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other. There are several types of friction. Dry friction resists relative lateral motion of two solid surfaces in contact, and is subdivided into static friction between non-moving surfaces and kinetic friction between moving surfaces. Fluid friction describes the friction between layers of a viscous fluid that are moving relative to each other. Lubricated friction is a case of fluid friction where a lubricant fluid separates two solid surfaces. Skin friction is a component of drag, the force resisting the motion of a fluid across the surface of a body. Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation. When surfaces in contact move relative to each other, the friction between the two surfaces converts kinetic energy into thermal energy. This property can have dramatic consequences, as illustrated by the use of friction created by rubbing pieces of wood together to start a fire. Kinetic energy is converted to thermal energy whenever motion with friction occurs. Another important consequence of many types of friction can be wear, which may lead to performance degradation and/or damage to components. Friction is a component of the science of tribology. Friction is not itself a fundamental force. Dry friction arises from a combination of adhesion, surface roughness and surface deformation. The complexity of these interactions makes the calculation of friction from first principles impractical and necessitates the use of empirical methods for analysis. Friction is a non-conservative force: work done against friction is path dependent, and in the presence of friction some energy is always lost in the form of heat, so mechanical energy is not conserved. The Greeks, including Aristotle, Vitruvius, and Pliny the Elder, were interested in the cause and mitigation of friction.
They were aware of differences between static and kinetic friction, with Themistius stating in 350 A.D. that it is easier to further the motion of a moving body than to move a body at rest. The classic laws of sliding friction were discovered by Leonardo da Vinci in 1493, a pioneer in tribology; these laws were rediscovered by Guillaume Amontons in 1699. Amontons presented the nature of friction in terms of surface irregularities. The understanding of friction was further developed by Charles-Augustin de Coulomb, who considered the influence of sliding velocity, temperature and humidity. The distinction between static and dynamic friction is made in Coulomb's friction law, although this distinction was already drawn by Johann Andreas von Segner in 1758. John Leslie was equally skeptical about the role of adhesion proposed by Desaguliers; in Leslie's view, friction should be seen as a time-dependent process of flattening, pressing down asperities, which creates new obstacles in what were cavities before.
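The static/kinetic distinction in the Amontons–Coulomb model can be sketched as a simple piecewise rule: static friction matches the applied force up to μ_s·N, after which kinetic friction μ_k·N (with μ_k < μ_s) takes over. The coefficients and mass below are illustrative:

```python
# Amontons–Coulomb sketch of dry friction on a horizontal surface.
# Coefficients and mass are illustrative, not for any specific material pair.
mu_s, mu_k = 0.6, 0.4    # static and kinetic friction coefficients (mu_k < mu_s)
m, g = 10.0, 9.81        # block mass (kg) and gravitational acceleration (m/s^2)
N = m * g                # normal force

def friction_force(applied):
    """Friction opposing an applied horizontal force of the given magnitude."""
    if applied <= mu_s * N:   # static regime: friction exactly balances the load
        return applied
    return mu_k * N           # sliding regime: kinetic friction, independent of load

# friction_force(30.0) returns 30.0 (block stays put);
# friction_force(80.0) returns mu_k * N ≈ 39.24 (block slides).
```

This reproduces the behaviour described in the caption below: resistance grows with the applied force until the block breaks free, then drops to the lower kinetic value.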
Friction
–
When the mass is not moving, the object experiences static friction. The friction increases as the applied force increases until the block moves. After the block moves, it experiences kinetic friction, which is less than the maximum static friction.
66.
Non-newtonian fluid
–
A non-Newtonian fluid is a fluid that does not follow Newton's law of viscosity. Most commonly, the viscosity of non-Newtonian fluids is dependent on shear rate or shear rate history. Some non-Newtonian fluids with shear-independent viscosity, however, still exhibit normal stress differences or other non-Newtonian behavior. Many salt solutions and molten polymers are non-Newtonian fluids, as are commonly found substances such as ketchup, custard, toothpaste, starch suspensions, maizena, paint and blood. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different and can even be time-dependent; therefore, a constant coefficient of viscosity cannot be defined. Although the concept of viscosity is commonly used in fluid mechanics to characterize the shear properties of a fluid, it can be inadequate to describe non-Newtonian fluids. Their properties are studied using tensor-valued constitutive equations, which are common in the field of continuum mechanics. The viscosity of a shear thickening fluid, or dilatant fluid, appears to increase when the shear rate increases. Corn starch dissolved in water is a common example: when stirred slowly it looks milky, when stirred vigorously it feels like a very viscous liquid. Note that all thixotropic fluids are extremely shear thinning, but they are time dependent; thus, to avoid confusion, the latter classification is more clearly termed pseudoplastic. Another example of a shear thinning fluid is blood; this behavior is highly favoured within the body, as it allows the viscosity of blood to decrease with increased shear strain rate. Fluids that have a linear shear stress/shear strain rate relationship but require a finite yield stress before they begin to flow are called Bingham plastics.
Several examples are clay suspensions, drilling mud, toothpaste, mayonnaise and chocolate. The surface of a Bingham plastic can hold peaks when it is still; by contrast, Newtonian fluids have flat, featureless surfaces when still. There are also fluids whose strain rate is a function of time. Fluids that require a gradually increasing shear stress to maintain a constant strain rate are referred to as rheopectic; the opposite case is a fluid that thins out with time and requires a decreasing stress to maintain a constant strain rate. Many common substances exhibit non-Newtonian flows; uncooked cornflour suspended in water has the same properties. The name oobleck is derived from the Dr. Seuss book Bartholomew and the Oobleck. Because of its properties, oobleck is often used in demonstrations that exhibit its unusual behavior.
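Shear thinning and shear thickening are often sketched with the power-law (Ostwald–de Waele) model, one standard constitutive model for such fluids: τ = K·γ̇ⁿ, so the apparent viscosity is μ_app = K·γ̇ⁿ⁻¹. The consistency K and index n below are illustrative:

```python
# Power-law (Ostwald–de Waele) model of non-Newtonian behaviour:
#   apparent viscosity = K * shear_rate**(n - 1)
# n < 1: shear thinning; n > 1: shear thickening; n = 1: Newtonian.
# K and n values below are illustrative.
def apparent_viscosity(K, n, shear_rate):
    return K * shear_rate ** (n - 1.0)

rate = 10.0                                       # shear rate, s^-1
newtonian  = apparent_viscosity(1.0, 1.0, rate)   # constant: 1.0
thinning   = apparent_viscosity(1.0, 0.5, rate)   # drops as the rate rises
thickening = apparent_viscosity(1.0, 1.5, rate)   # grows as the rate rises
```

At n = 1 the model collapses to a constant coefficient of viscosity, which is exactly the Newtonian special case; for any other n, no single viscosity constant describes the fluid.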
Non-newtonian fluid
–
Demonstration of a non-Newtonian fluid at Universum in Mexico City
Non-newtonian fluid
–
Classification of fluids with shear stress as a function of shear rate.
Non-newtonian fluid
–
Oobleck on a subwoofer. Applying force to oobleck, by sound waves in this case, makes the non-Newtonian fluid thicken.
67.
Cartesian coordinate system
–
Each reference line is called a coordinate axis or just axis of the system, and the point where they meet is its origin, usually at ordered pair (0, 0). The coordinates can also be defined as the positions of the perpendicular projections of the point onto the two axes, expressed as signed distances from the origin. One can use the same principle to specify the position of any point in three-dimensional space by three Cartesian coordinates, its signed distances to three mutually perpendicular planes. In general, n Cartesian coordinates specify the point in an n-dimensional Euclidean space for any dimension n; these coordinates are equal, up to sign, to distances from the point to n mutually perpendicular hyperplanes. The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2, centered at the origin of the plane, may be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics and engineering. They are the most common coordinate system used in computer graphics and computer-aided geometric design. Nicole Oresme, a French cleric and friend of the Dauphin in the 14th century, used constructions similar to Cartesian coordinates well before the time of Descartes. The adjective Cartesian refers to the French mathematician and philosopher René Descartes, who published this idea in 1637; it was independently discovered by Pierre de Fermat, who also worked in three dimensions, although Fermat did not publish the discovery. Both authors used a single axis in their treatments and have a variable length measured in reference to this axis.
The concept of using a pair of axes was introduced later, after Descartes' La Géométrie was translated into Latin in 1649 by Frans van Schooten; these commentators introduced several concepts while trying to clarify the ideas contained in Descartes' work. Many other coordinate systems have been developed since Descartes, such as the polar coordinates for the plane. The development of the Cartesian coordinate system would play a role in the development of calculus by Isaac Newton. The two-coordinate description of the plane was later generalized into the concept of vector spaces. Choosing a Cartesian coordinate system for a one-dimensional space, that is, for a straight line, involves choosing a point O of the line, a unit of length, and an orientation for the line. An orientation chooses which of the two half-lines determined by O is positive and which is negative; we then say that the line is oriented from the negative half towards the positive half
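As a concrete illustration of a Cartesian equation, here is a minimal sketch: the circle of radius 2 centred at the origin is exactly the set of points whose coordinates satisfy x² + y² = 4. The helper name `on_circle` is chosen here for illustration only.

```python
import math

def on_circle(x, y, radius=2.0, tol=1e-9):
    """True when (x, y) satisfies the Cartesian equation x**2 + y**2 == radius**2."""
    return math.isclose(x * x + y * y, radius * radius, abs_tol=tol)

# Points on the axes at signed distance 2 lie on the circle; the origin does not.
assert on_circle(2, 0) and on_circle(0, -2)
assert on_circle(math.sqrt(2), math.sqrt(2))   # since 2 + 2 = 4
assert not on_circle(1, 1)                     # since 1 + 1 != 4
```

The same test works in any rotation of the axes, reflecting the coordinate-independence of the underlying geometric shape.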
Cartesian coordinate system
–
The right hand rule.
Cartesian coordinate system
–
Illustration of a Cartesian coordinate plane. Four points are marked and labeled with their coordinates: (2,3) in green, (−3,1) in red, (−1.5,−2.5) in blue, and the origin (0,0) in purple.
Cartesian coordinate system
–
3D Cartesian Coordinate Handedness
68.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN identification format was devised in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966; the 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN was devised in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8, where 340 indicates the publisher, 01381 is their serial number, and 8 is the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s. 
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for a country or territory regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker
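The check-digit arithmetic behind the SBN-to-ISBN conversion above can be sketched directly. An ISBN-10 check digit makes the sum of the digits, weighted 10 down to 2, a multiple of 11; prefixing an SBN with 0 leaves the check digit unchanged because the leading zero contributes nothing to the weighted sum. The helper names below are illustrative, not from any standard library.

```python
def isbn10_check_digit(first9: str) -> str:
    """Compute the ISBN-10 check digit from the first nine digits.

    The full 10-digit number must satisfy: sum of digit_i * (10 - i) ≡ 0 (mod 11).
    A required value of 10 is written as the letter 'X'.
    """
    total = sum((10 - i) * int(d) for i, d in enumerate(first9))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

def sbn_to_isbn10(sbn: str) -> str:
    """Convert a 9-digit SBN to an ISBN-10 by prefixing the digit 0.

    No recalculation is needed: the extra leading zero adds 0 * 10 to the
    weighted sum, so the original check digit remains valid.
    """
    return "0" + sbn

# The Hodder 1965 example from the text: SBN 340 01381 8 -> ISBN 0-340-01381-8.
assert sbn_to_isbn10("340013818") == "0340013818"
assert isbn10_check_digit("034001381") == "8"
```

Running the weighted sum by hand for 0-340-01381: 0·10 + 3·9 + 4·8 + 0·7 + 0·6 + 1·5 + 3·4 + 8·3 + 1·2 = 102, and 11 − (102 mod 11) = 8, matching the printed check digit.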
International Standard Book Number
–
A 13-digit ISBN, 978-3-16-148410-0, as represented by an EAN-13 bar code
69.
McGraw-Hill, Inc.
–
S&P Global Inc. is an American publicly traded corporation headquartered in New York City. Its primary areas of business are financial information and analytics; it is the parent company of S&P Global Ratings, S&P Global Market Intelligence, and S&P Global Platts, and is the majority owner of the S&P Dow Jones Indices joint venture. The predecessor companies of S&P Global have history dating to 1888, when James McGraw purchased his first trade publication; he continued to add further publications, eventually establishing The McGraw Publishing Company in 1899. John Hill had also produced several technical and trade publications and in 1902 formed his own business. In 1909 both men, having known each other's interests, agreed upon an alliance and combined the book departments of their publishing companies into The McGraw-Hill Book Company, with John Hill serving as President and James McGraw as Vice-President. A later buyout made McGraw-Hill the largest educational publisher in the United States. In 1964, the McGraw-Hill Publishing Company and the McGraw-Hill Book Company merged into McGraw-Hill, Inc. In 1979, McGraw-Hill purchased Byte magazine from its owner/publisher Virginia Williamson, who then became a vice-president of McGraw-Hill. In 1995, McGraw-Hill, Inc. became The McGraw-Hill Companies. In 2007, McGraw-Hill launched an online study network, GradeGuru.com, which gave McGraw-Hill an opportunity to connect directly with its end users, the students. The site closed on April 29, 2012. On October 3, 2011, McGraw-Hill announced it was selling its entire television station group to the E. W. Scripps Company for $212 million. The sale was completed on December 30, 2011; the company had been involved in broadcasting since 1972, when it purchased four television stations from a division of Time Inc. McGraw-Hill has produced the Glencoe series of books for decades. 
On November 26, 2012, McGraw-Hill announced it was selling its entire education division; on March 22, 2013, it announced it had completed the sale for $2.4 billion in cash. On May 1, 2013, shareholders of McGraw-Hill voted to change the name to McGraw Hill Financial. McGraw-Hill divested the subsidiary McGraw-Hill Construction to Symphony Technology Group for US$320 million on September 22, 2014; the sale included Engineering News-Record, Architectural Record, Dodge and Sweets. McGraw-Hill Construction has been renamed Dodge Data & Analytics. In February 2016, it was announced that McGraw-Hill Financial would change its name to S&P Global Inc. by the end of April 2016. The company officially changed its name following a vote on April 27, 2016. In April 2016, the company announced it was selling J. D. Power & Associates to investment firm XIO Group for $1.1 billion. S&P Global now organizes its businesses in four units based on the market in which they are involved. S&P Global Ratings provides independent investment research including ratings on various investment instruments; subsidiaries include Leveraged Commentary & Data. Launched on July 2, 2012, S&P Dow Jones Indices is the world's largest global resource for index-based concepts and data; it produces the S&P 500 and the Dow Jones Industrial Average. Headquartered in London, S&P Global Platts is a provider of information and a source of benchmark price assessments for the commodities, energy, petrochemicals, metals, and agriculture markets
McGraw-Hill, Inc.
–
McGraw Hill Financial, Inc.
McGraw-Hill, Inc.
–
1221 Avenue of the Americas, the headquarters of McGraw-Hill
McGraw-Hill, Inc.
–
2008 conference booth
70.
Experimental physics
–
Experimental physics is the category of disciplines and sub-disciplines in the field of physics that are concerned with the observation of physical phenomena and with experiments. Methods vary from discipline to discipline, from simple experiments and observations, such as the Cavendish experiment, to more complicated ones. It is often put in contrast with theoretical physics, which is more concerned with predicting and explaining the physical behaviour of nature than with the acquisition of knowledge about it. Although experimental and theoretical physics are concerned with different aspects of nature, theoretical physics can also offer insight on what data is needed in order to gain a better understanding of the universe, and on what experiments to devise in order to obtain it. In the early 17th century, Galileo made extensive use of experimentation to validate physical theories. Galileo formulated and successfully tested several results in dynamics, in particular the law of inertia, which later became the first law in Newton's laws of motion. In Galileo's Two New Sciences, a dialogue between the characters Simplicio and Salviati discusses the motion of a ship and how that ship's cargo is indifferent to its motion. Huygens used the motion of a boat along a Dutch canal to illustrate an early form of the conservation of momentum. Experimental physics is considered to have reached a high point with the publication of the Philosophiae Naturalis Principia Mathematica in 1687 by Sir Isaac Newton, describing the laws of motion and the law of universal gravitation. Both theories agreed well with experiment; the Principia also included several theories in fluid dynamics. From the late 17th century onward, thermodynamics was developed by physicists and chemists such as Boyle and Young. In 1733, Bernoulli used statistical arguments with classical mechanics to derive thermodynamic results, initiating the field of statistical mechanics. In 1798, Thompson demonstrated the conversion of mechanical work into heat. Ludwig Boltzmann, in the nineteenth century, is responsible for the modern form of statistical mechanics. 
Besides classical mechanics and thermodynamics, another great field of experimental inquiry within physics was the nature of electricity. Observations in the 17th and 18th centuries by scientists such as Robert Boyle and Stephen Gray created the foundation for later work. These observations also established our basic understanding of electrical charge and current. By 1808 John Dalton had discovered that atoms of different elements have different weights and proposed the modern theory of the atom. It was Hans Christian Ørsted who first proposed the connection between electricity and magnetism after observing the deflection of a compass needle by a nearby electric current. By the early 1830s Michael Faraday had demonstrated that changing magnetic fields could induce electric currents. In 1864 James Clerk Maxwell presented to the Royal Society a set of equations that described this relationship between electricity and magnetism. Maxwell's equations also predicted correctly that light is an electromagnetic wave. Starting with astronomy, the principles of natural philosophy crystallized into fundamental laws of physics which were enunciated and improved in the succeeding centuries
Experimental physics
–
A view of the CMS detector, an experimental endeavour of the LHC at CERN.
71.
Theoretical physics
–
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena. The advancement of science depends in general on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigor while giving little weight to experiments and observations. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation. A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations; the quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory similarly differs from a mathematical theory, in the sense that the word theory has a different meaning in mathematical terms. A physical theory involves one or more relationships between various measurable quantities. Archimedes realized that a ship floats by displacing its mass of water; Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles. Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example. For instance, phenomenologists might employ empirical formulas to agree with experimental results, often without deep physical understanding. Modelers often appear much like phenomenologists, but try to model speculative theories that have certain desirable features. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated. 
Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled; e.g. the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics. Theoretical advances may consist in setting aside old, incorrect paradigms or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result. Sometimes though, advances may proceed along different paths. However, an exception to all the above is the wave–particle duality, a theory combining aspects of different, opposing models. Physical theories become accepted if they are able to make correct predictions and no incorrect ones. They are also more likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method. Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories. Theoretical physics began at least 2,300 years ago, under the pre-Socratic philosophy. During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon
Theoretical physics
–
Visual representation of a Schwarzschild wormhole. Wormholes have never been observed, but they are predicted to exist through mathematical models and scientific theory.
72.
Lagrangian mechanics
–
Lagrangian mechanics is a reformulation of classical mechanics, introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in 1788. No new physics is introduced in Lagrangian mechanics compared to Newtonian mechanics. Newton's laws can include non-conservative forces like friction; however, they must include constraint forces explicitly and are best suited to Cartesian coordinates. Lagrangian mechanics is ideal for systems with conservative forces and for bypassing constraint forces in any coordinate system. Dissipative and driven forces can be accounted for by splitting the external forces into a sum of potential and non-potential forces, leading to a set of modified Euler-Lagrange equations. Generalized coordinates can be chosen by convenience, to exploit symmetries in the system or the geometry of the constraints. Lagrangian mechanics also reveals conserved quantities and their symmetries in a direct way, as a special case of Noether's theorem. Lagrangian mechanics is important not just for its broad applications, but also for its role in advancing deep understanding of physics. It can also be applied to other systems by analogy, for instance to coupled electric circuits with inductances and capacitances. Lagrangian mechanics is widely used to solve mechanical problems in physics. Lagrangian mechanics applies to the dynamics of particles, while fields are described using a Lagrangian density. Lagrange's equations are also used in optimisation problems of dynamic systems. In mechanics, Lagrange's equations of the second kind are used much more than those of the first kind. Suppose we have a bead sliding around on a wire, or a swinging simple pendulum; in such cases the constrained motion can be tracked with a single generalized coordinate, such as the position along the wire or the angle of the pendulum. This choice eliminates the need for the constraint force to enter into the resultant system of equations; there are fewer equations since one is not directly calculating the influence of the constraint on the particle at a given moment. For a wide variety of physical systems, if the size and shape of a massive object are negligible, it is a useful simplification to treat it as a point particle. 
For a system of N point particles with masses m1, m2, ..., mN, each particle has a position vector, denoted r1, r2, ..., rN. Cartesian coordinates are often sufficient, so r1 = (x1, y1, z1), r2 = (x2, y2, z2), and so on. In three-dimensional space, each position vector requires three coordinates to uniquely define the location of a point, so there are 3N coordinates to uniquely define the configuration of the system. These are all specific points in space needed to locate the particles. The velocity of each particle is how fast the particle moves along its path of motion, and is the time derivative of its position. In Newtonian mechanics, the equations of motion are given by Newton's laws; the second law, net force equals mass times acceleration, ΣF = m d²r/dt², applies to each particle. For an N-particle system in 3 dimensions, there are 3N second-order differential equations in the positions of the particles to solve for
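For the pendulum mentioned above, the Euler-Lagrange equation in the single generalized coordinate θ reduces to θ'' = -(g/L) sin θ, with no constraint force appearing. A minimal numerical sketch (helper names `pendulum_rk4_step` and `energy` are chosen here for illustration) integrates this equation and checks that mechanical energy is conserved:

```python
import math

def pendulum_rk4_step(theta, omega, dt, g=9.81, length=1.0):
    """One RK4 step of theta'' = -(g/length) * sin(theta), the
    Euler-Lagrange equation for a pendulum in the coordinate theta."""
    def f(th, om):
        return om, -(g / length) * math.sin(th)
    k1 = f(theta, omega)
    k2 = f(theta + 0.5 * dt * k1[0], omega + 0.5 * dt * k1[1])
    k3 = f(theta + 0.5 * dt * k2[0], omega + 0.5 * dt * k2[1])
    k4 = f(theta + dt * k3[0], omega + dt * k3[1])
    theta += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    omega += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return theta, omega

def energy(theta, omega, g=9.81, length=1.0):
    """Mechanical energy per unit mass: kinetic plus gravitational potential."""
    return 0.5 * (length * omega) ** 2 - g * length * math.cos(theta)

# Integrate for one second; with no dissipation, energy stays constant.
theta, omega = 0.1, 0.0
e0 = energy(theta, omega)
for _ in range(1000):
    theta, omega = pendulum_rk4_step(theta, omega, 1e-3)
assert abs(energy(theta, omega) - e0) < 1e-9
```

Solving the same problem in Cartesian coordinates would require two equations plus an explicit rod-tension constraint force; the single-coordinate Lagrangian route needs only one.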
Lagrangian mechanics
–
Joseph-Louis Lagrange (1736–1813)
Lagrangian mechanics
–
Isaac Newton (1642–1726)
Lagrangian mechanics
–
Jean d'Alembert (1717–1783)
73.
Quantum mechanics
–
Quantum mechanics, including quantum field theory, is a branch of physics which is the fundamental theory of nature at small scales and low energies of atoms and subatomic particles. Classical physics, the physics existing before quantum mechanics, derives from quantum mechanics as an approximation valid only at large scales. Early quantum theory was profoundly reconceived in the mid-1920s. The reconceived theory is formulated in various specially developed mathematical formalisms. In one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper titled On the nature of light. This experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. Planck's hypothesis that energy is radiated and absorbed in discrete quanta precisely matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation; Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law. Following Max Planck's solution in 1900 to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect. Among the first to study quantum phenomena in nature were Arthur Compton and C. V. Raman; Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits. 
This phase is known as the old quantum theory. According to Planck, each energy element is proportional to its frequency: E = hν, where h is Planck's constant. Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect; he won the 1921 Nobel Prize in Physics for this work. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle, with a discrete quantum of energy that was dependent on its frequency. The Copenhagen interpretation of Niels Bohr became widely accepted. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons. From Einstein's simple postulation was born a flurry of debating, theorizing, and testing. Thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927
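Planck's relation E = hν can be made concrete with a few lines of arithmetic. The sketch below (the helper name `photon_energy` is illustrative) uses the exact SI value of h and ν = c/λ to find the energy of one quantum of visible light:

```python
# Planck's relation E = h * nu: each energy quantum is proportional to frequency.
h = 6.62607015e-34  # Planck's constant in J*s (exact by SI definition)
c = 299792458.0     # speed of light in m/s (exact)

def photon_energy(wavelength_m: float) -> float:
    """Energy in joules of one quantum of light with the given wavelength,
    using nu = c / wavelength and E = h * nu."""
    nu = c / wavelength_m
    return h * nu

# A 500 nm (green) photon carries roughly 4e-19 J, i.e. about 2.5 eV.
E = photon_energy(500e-9)
assert 3.9e-19 < E < 4.0e-19
```

The tiny magnitude of h is why quantization is invisible at everyday scales: a 1 W lamp emits on the order of 10^18 such quanta every second.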
Quantum mechanics
–
Max Planck is considered the father of the quantum theory.
Quantum mechanics
–
Solution to Schrödinger's equation for the hydrogen atom at different energy levels. The brighter areas represent a higher probability of finding an electron
Quantum mechanics
–
The 1927 Solvay Conference in Brussels.
74.
Field (physics)
–
In physics, a field is a physical quantity, typically a number or tensor, that has a value for each point in space and time. For example, on a weather map, the surface wind velocity is described by assigning a vector to each point on the map. Each vector represents the speed and direction of the movement of air at that point. As another example, an electric field can be thought of as a condition in space emanating from an electric charge and extending throughout the whole of space. When a test electric charge is placed in this electric field, it experiences a force. Physicists have found the notion of a field to be of such practical utility for the analysis of forces that they have come to think of a force as due to a field. In the modern framework of the quantum theory of fields, even without referring to a test particle, a field occupies space and contains energy. This led physicists to consider electromagnetic fields to be a physical entity; the fact that the electromagnetic field can possess momentum and energy makes it very real. A particle makes a field, and a field acts on another particle. In practice, the strength of most fields has been found to diminish with distance to the point of being undetectable. One consequence is that the Earth's gravitational field quickly becomes undetectable on cosmic scales. A field has a unique tensorial character in every point where it is defined: i.e. a field cannot be a scalar field somewhere and a vector field somewhere else. For example, the Newtonian gravitational field is a vector field. Moreover, within each category, a field can be either a classical field or a quantum field, depending on whether it is characterized by numbers or quantum operators respectively. In fact in this theory an equivalent representation of a field is a field particle. To Isaac Newton, his law of universal gravitation simply expressed the gravitational force that acted between any pair of massive objects. 
In the eighteenth century, a new quantity was devised to simplify the bookkeeping of all these gravitational forces and this quantity, the gravitational field, gave at each point in space the total gravitational force which would be felt by an object with unit mass at that point. The development of the independent concept of a field began in the nineteenth century with the development of the theory of electromagnetism. In the early stages, André-Marie Ampère and Charles-Augustin de Coulomb could manage with Newton-style laws that expressed the forces between pairs of electric charges or electric currents. However, it became more natural to take the field approach and express these laws in terms of electric and magnetic fields. The independent nature of the field became more apparent with James Clerk Maxwells discovery that waves in these fields propagated at a finite speed, Maxwell, at first, did not adopt the modern concept of a field as fundamental quantity that could independently exist. Instead, he supposed that the field expressed the deformation of some underlying medium—the luminiferous aether—much like the tension in a rubber membrane. If that were the case, the velocity of the electromagnetic waves should depend upon the velocity of the observer with respect to the aether. Despite much effort, no evidence of such an effect was ever found
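The bookkeeping role of the gravitational field described above can be sketched directly: the field at a point is the vector sum of the inverse-square contributions of every source, i.e. the force that would act on a unit test mass placed there. The helper name `gravitational_field` and the 2D setup are illustrative assumptions:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_field(sources, point):
    """Net gravitational field vector (force per unit test mass, in m/s^2)
    at `point`, summing the contribution of each (mass, (x, y)) source."""
    gx = gy = 0.0
    for mass, (sx, sy) in sources:
        dx, dy = sx - point[0], sy - point[1]
        r = math.hypot(dx, dy)
        a = G * mass / r**2      # magnitude of this source's contribution
        gx += a * dx / r         # unit vector (dx/r, dy/r) points toward the source
        gy += a * dy / r
    return gx, gy

# The Earth's field at its own surface: about 9.8 m/s^2 toward the centre.
earth = [(5.972e24, (0.0, 0.0))]
gx, gy = gravitational_field(earth, (6.371e6, 0.0))
assert abs(gx + 9.8) < 0.1 and abs(gy) < 1e-9
```

Once this field is tabulated, the force on any object of mass m at a point is simply m times the field vector there, which is exactly the simplification the field concept was invented for.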
Field (physics)
–
Illustration of the electric field surrounding a positive (red) and a negative (blue) charge.
75.
Optics
–
Optics is the branch of physics which involves the behaviour and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behaviour of visible, ultraviolet, and infrared light. Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties. Most optical phenomena can be accounted for using the classical electromagnetic description of light. Complete electromagnetic descriptions of light are, however, often difficult to apply in practice, so practical optics is usually done using simplified models. The most common of these, geometric optics, treats light as a collection of rays that travel in straight lines. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics. Historically, the ray-based model of light was developed first, followed by the wave model of light. Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation. Some phenomena depend on the fact that light has both wave-like and particle-like properties; explanation of these effects requires quantum mechanics. When considering light's particle-like properties, the light is modelled as a collection of particles called photons. Quantum optics deals with the application of quantum mechanics to optical systems. Optical science is relevant to and studied in many related disciplines including astronomy, various engineering fields, and photography. Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors, lenses, telescopes, microscopes, lasers, and fibre optics. 
Optics began with the development of lenses by the ancient Egyptians and Mesopotamians. The earliest known lenses, made from polished crystal, often quartz, date from as early as 700 BC for Assyrian lenses such as the Layard/Nimrud lens. The ancient Romans and Greeks filled glass spheres with water to make lenses. The word optics comes from the ancient Greek word ὀπτική, meaning appearance, look. Greek philosophy on optics broke down into two opposing theories on how vision worked: the intromission theory and the emission theory. The intromission approach saw vision as coming from objects casting off copies of themselves that were captured by the eye. Plato first articulated the emission theory, the idea that visual perception is accomplished by rays emitted by the eyes. He also commented on the parity reversal of mirrors in Timaeus. Some hundred years later, Euclid wrote a treatise entitled Optics where he linked vision to geometry, creating geometrical optics. Ptolemy, in his treatise Optics, held a theory of vision in which the rays from the eye formed a cone, the vertex being within the eye. The rays were sensitive, and conveyed information back to the observer's intellect about the distance and orientation of surfaces. He summarised much of Euclid and went on to describe a way to measure the angle of refraction. During the Middle Ages, Greek ideas about optics were resurrected and extended by writers in the Muslim world
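The law of refraction mentioned above, now known as Snell's law (n₁ sin θ₁ = n₂ sin θ₂), is the workhorse of geometric optics. A minimal sketch, with the helper name `refract` chosen for illustration, computes the refracted angle and detects total internal reflection:

```python
import math

def refract(theta_incident_deg, n1, n2):
    """Apply Snell's law n1*sin(theta1) = n2*sin(theta2).

    Returns the refracted angle in degrees, measured from the normal,
    or None when total internal reflection occurs (no refracted ray)."""
    s = (n1 / n2) * math.sin(math.radians(theta_incident_deg))
    if abs(s) > 1.0:
        return None  # total internal reflection: Snell's law has no solution
    return math.degrees(math.asin(s))

# Light entering water (n ~ 1.33) from air at 45 degrees bends toward the normal.
theta2 = refract(45.0, 1.0, 1.33)
assert 32.0 < theta2 < 32.2

# Going from glass (n ~ 1.5) to air at 60 degrees exceeds the critical angle.
assert refract(60.0, 1.5, 1.0) is None
```

Ray-tracing a whole lens system is just this computation applied surface by surface, which is why the ray model remained adequate for instrument design for centuries.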
Optics
–
Optics includes study of dispersion of light.
Optics
–
The Nimrud lens
Optics
–
Reproduction of a page of Ibn Sahl 's manuscript showing his knowledge of the law of refraction, now known as Snell's law
Optics
–
Cover of the first edition of Newton's Opticks
76.
Quantum field theory
–
QFT treats particles as excited states of the underlying physical field, so these are called field quanta. In quantum field theory, quantum mechanical interactions among particles are described by interaction terms among the corresponding underlying quantum fields. These interactions are conveniently visualized by Feynman diagrams, which are a formal tool of relativistically covariant perturbation theory, serving to evaluate particle processes. The first achievement of quantum field theory, namely quantum electrodynamics, is still the paradigmatic example of a successful quantum field theory. Ordinarily, quantum mechanics cannot give an account of photons, which constitute the prime case of relativistic particles. Since photons have rest mass zero, and correspondingly travel in the vacuum at the speed c, a non-relativistic theory such as ordinary QM cannot give even an approximate description. Photons are implicit in the emission and absorption processes which have to be postulated; for instance, the formalism of QFT is needed for an explicit description of photons. In fact most topics in the early development of quantum theory were related to the interaction of radiation and matter. However, quantum mechanics as formulated by Dirac, Heisenberg, and Schrödinger in 1926–27 started from atomic spectra. As soon as the conceptual framework of quantum mechanics was developed, a small group of theoreticians tried to extend quantum methods to electromagnetic fields. A good example is the paper by Born, Jordan & Heisenberg. The basic idea was that in QFT the electromagnetic field should be represented by matrices in the same way that position and momentum were represented in matrix mechanics. The ideas of QM were thus extended to systems having an infinite number of degrees of freedom. The inception of QFT is usually considered to be Dirac's famous 1927 paper on The quantum theory of the emission and absorption of radiation; here Dirac coined the name quantum electrodynamics for the part of QFT that was developed first. 
Employing the theory of the harmonic oscillator, Dirac gave a theoretical description of how photons appear in the quantization of the electromagnetic radiation field. Later, Dirac's procedure became a model for the quantization of other fields as well. These first approaches to QFT were further developed during the following three years. P. Jordan introduced creation and annihilation operators for fields obeying Fermi–Dirac statistics; these differ from the corresponding operators for Bose–Einstein statistics in that the former satisfy anti-commutation relations while the latter satisfy commutation relations. The methods of QFT could be applied to derive equations resulting from the quantum mechanical treatment of particles, e.g. the Dirac equation and the Klein–Gordon equation. Schweber points out that the idea and procedure of second quantization go back to Jordan; in a number of papers from 1927, some difficult problems concerning commutation relations, statistics, and Lorentz invariance were eventually solved. The first comprehensive account of a general theory of quantum fields, in particular
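The distinction between anti-commutation and commutation relations mentioned above can be checked explicitly for a single fermionic mode, where the creation and annihilation operators are 2×2 matrices on the occupation basis {|0⟩, |1⟩}. This is an illustrative sketch using plain nested lists rather than any physics library:

```python
# Fermionic annihilation and creation operators on a single mode,
# acting on the occupation-number basis {|0>, |1>}.
a = [[0.0, 1.0],
     [0.0, 0.0]]      # annihilation: a|1> = |0>, a|0> = 0
adag = [[0.0, 0.0],
        [1.0, 0.0]]   # creation: the transpose of a

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

# Fermi-Dirac statistics require the anti-commutator {a, a+} = a a+ + a+ a
# to equal the identity operator.
anticomm = matadd(matmul(a, adag), matmul(adag, a))
assert anticomm == [[1.0, 0.0], [0.0, 1.0]]

# Pauli exclusion in operator form: creating two quanta in one mode gives zero.
assert matmul(adag, adag) == [[0.0, 0.0], [0.0, 0.0]]
```

For bosonic operators the analogous check involves the commutator [a, a†] = 1, which holds only in the infinite-dimensional (or suitably truncated) oscillator space; that contrast is exactly why the two statistics need different operator algebras.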
Quantum field theory
77.
Accelerator physics
–
Accelerator physics is a branch of applied physics, concerned with designing, building and operating particle accelerators. As such, it can be described as the study of motion, manipulation and observation of relativistic charged particle beams. It is also related to other fields: microwave engineering; optics, with an emphasis on geometrical optics and laser physics; and computer technology, with an emphasis on digital signal processing, e.g. for automated manipulation of the particle beam. The types of experiments done at a particular accelerator facility are determined by characteristics of the particle beam such as average energy, particle type, and intensity. While it is possible to accelerate charged particles using electrostatic fields, as in a Cockcroft-Walton voltage multiplier, this method has limits given by electrical breakdown at high voltages. Furthermore, due to electrostatic fields being conservative, the maximum voltage limits the kinetic energy that is applicable to the particles. To circumvent this problem, linear particle accelerators operate using time-varying fields. The space around a particle beam is evacuated to prevent scattering with gas atoms, requiring it to be enclosed in a vacuum chamber. Due to the electromagnetic fields that follow the beam, it is possible for it to interact with any electrical impedance in the walls of the beam pipe. This may be in the form of a resistive impedance or an inductive/capacitive impedance. These impedances will induce wakefields that can interact with later particles. Since this interaction may have negative effects, it is studied to determine its magnitude, and to determine any actions that may be taken to mitigate it. Due to the high velocity of the particles, and the resulting Lorentz force for magnetic fields, the beam is mainly steered and focused by magnetic fields. In most accelerator concepts, these are applied by dedicated electromagnets with different properties. An important step in the development of these types of accelerators was the understanding of strong focusing. 
Dipole magnets are used to guide the beam through the structure, while quadrupole magnets are used for beam focusing: a particle on the exact design trajectory of the accelerator experiences only dipole field components, while particles with a transverse position deviation x are re-focused toward the design orbit. The general equations of motion originate from relativistic Hamiltonian mechanics, in almost all cases using the paraxial approximation; specialized methods exist even for strongly nonlinear magnetic fields treated without the paraxial approximation. There are many different software packages available for modeling the different aspects of accelerator physics: one must model both the elements that create the electric and magnetic fields and the evolution of the charged particles within those fields. A popular code for beam dynamics, designed by CERN, is MAD, or Methodical Accelerator Design. A vital component of any accelerator is the set of diagnostic devices that allow various properties of the particle bunches to be measured. A typical machine may use many different types of measurement device in order to measure different properties. While many of these rely on well-understood technology, designing a device capable of measuring a beam for a particular machine is a complex task requiring much expertise.
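The linear beam optics described above (dipoles for bending, quadrupoles for focusing, paraxial approximation) is commonly formulated with transfer matrices acting on the transverse state (x, x'). The following sketch uses the thin-lens approximation with illustrative element lengths and focal lengths, not parameters of any real machine:

```python
import numpy as np

# Thin-lens, paraxial beam optics: the transverse state of a particle is
# (x, x'), with x the displacement from the design orbit and x' = dx/ds.
# Element lengths and the focal length below are illustrative values.

def drift(L):
    """Transfer matrix of a field-free drift of length L (metres)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def quad(f):
    """Thin quadrupole of focal length f (f > 0 focuses in this plane)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# One FODO cell: focusing quad, drift, defocusing quad, drift
# (matrices act right to left, so quad(f) is traversed first).
f, L = 5.0, 2.0
cell = drift(L) @ quad(-f) @ drift(L) @ quad(f)

# Stability of the periodic lattice requires |trace| < 2.
print("one-cell matrix:\n", cell)
print("stable:", abs(np.trace(cell)) < 2)

# Track a particle with a 1 mm offset through 100 cells; in a stable
# lattice the offset oscillates but stays bounded.
state = np.array([1e-3, 0.0])
for _ in range(100):
    state = cell @ state
print("offset after 100 cells (m): %.3e" % state[0])
```

Codes such as MAD build on exactly this matrix formalism, adding thick elements, coupling and higher-order effects.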
Accelerator physics
–
Superconducting niobium cavity for acceleration of ultrarelativistic particles from the TESLA project
78.
Nuclear astrophysics
–
In general terms, nuclear astrophysics aims to understand the origin of the chemical elements and the energy generation in stars. Its basic tenets are that only isotopes of hydrogen and helium can be formed in a homogeneous big bang model, while all other elements are formed in stars. The conversion of mass to radiative energy is the source of the energy that allows stars to shine for up to billions of years; the Sun itself has been shining for about 4.5 billion years. While the observational data used to formulate it were impressive, the theory of stellar nucleosynthesis has also been well tested by observation and experiment since it was first formulated. For example, 26Al has a lifetime a bit less than one million years, which is very short on a galactic timescale, so its observation shows that nucleosynthesis is an ongoing process. The observable neutrino flux from nuclear reactors is much larger than that of the Sun, and thus Davis and others were primarily motivated to look for solar neutrinos for astronomical reasons. Although the foundations of the science are bona fide, there are many remaining open questions.
Nuclear astrophysics
79.
Solar physics
–
Solar physics is the branch of astrophysics that specializes in the study of the Sun. It deals with detailed measurements that are possible only for our closest star. Because the Sun is uniquely situated for close-range observing, there is a split between the discipline of observational astrophysics and observational solar physics. The study of solar physics is also important because it is believed that changes in the solar atmosphere and in solar activity can have a significant impact on Earth. The Sun also provides a natural physical laboratory for the study of plasma physics. The Babylonians were keeping a record of solar eclipses, with the oldest record originating from the ancient city of Ugarit, in modern-day Syria; this record dates to about 1300 BC. Ancient Chinese astronomers were also observing solar phenomena with the purpose of keeping track of calendars, which were based on lunar and solar cycles. Unfortunately, records kept before 720 BC are very vague and offer no useful information; however, after 720 BC, 37 solar eclipses were noted over the course of 240 years. Astronomical knowledge flourished in the Islamic world during medieval times: many observatories were built in cities from Damascus to Baghdad, where detailed astronomical observations were taken. In particular, a few solar parameters were measured and detailed observations of the Sun were taken. Solar observations were taken for the purpose of navigation, but mostly for timekeeping; Islam requires its followers to pray five times a day, at times determined by the position of the Sun in the sky. As such, accurate observations of the Sun and its trajectory across the sky were needed. In the late 10th century, the Iranian astronomer Abu-Mahmud Khojandi built a massive observatory near Tehran, where he took measurements of a series of meridian transits of the Sun.
Following the fall of the Western Roman Empire, Western Europe was cut off from all sources of ancient scientific knowledge. This, plus de-urbanisation and diseases such as the Black Death, led to a decline in scientific knowledge in medieval Europe, especially in the early Middle Ages. During this period, observations of the Sun were taken either in relation to the zodiac or to assist in building places of worship such as churches. In astronomy, the Renaissance period started with the work of Nicolaus Copernicus, who proposed that the planets revolve around the Sun and not around the Earth; this model is known as the heliocentric model. His work was expanded by Johannes Kepler and Galileo Galilei. In particular, Galileo used his new telescope to look at the Sun and, in 1610, discovered sunspots on its surface. In the autumn of 1611, Johannes Fabricius wrote the first book on sunspots. Modern-day solar physics is focused on understanding the many phenomena observed with the help of modern telescopes and satellites.
Solar physics
–
The SDO satellite
Solar physics
–
Internal structure
80.
Computational physics
–
Computational physics is the study and implementation of numerical analysis to solve problems in physics for which a quantitative theory already exists. Historically, computational physics was the first application of modern computers in science. In physics, different theories based on mathematical models provide very precise predictions of how systems behave. Unfortunately, it is often the case that solving the mathematical model for a particular system in order to produce a useful prediction is not feasible; this can occur, for instance, when the solution does not have a closed-form expression. In such cases, numerical approximations are required. There is a debate about the status of computation within the scientific method: while computers can be used in experiments for the measurement and recording of data, this clearly does not constitute a computational approach. Physics problems are in general very difficult to solve exactly, for several reasons: lack of algebraic and/or analytic solubility, and sheer complexity. On the more advanced side, mathematical perturbation theory is also sometimes used. In addition, the computational cost and computational complexity of many-body problems tend to grow quickly: a macroscopic system typically has of the order of 10^23 constituent particles, so simulating it directly is somewhat of a problem, and solving quantum mechanical problems is generally of exponential order in the size of the system. Because computational physics covers a broad class of problems, it is generally divided amongst the different mathematical problems it numerically solves, or the methods it applies. Furthermore, computational physics encompasses the tuning of the software and hardware structure used to solve the problems. It is possible to find a corresponding computational branch for every major field in physics, for example computational mechanics, which consists of computational fluid dynamics (CFD) and computational solid mechanics.
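Where no closed-form solution exists, the model must be integrated numerically. A minimal sketch: the nonlinear pendulum has no elementary closed-form solution, so a classic fourth-order Runge-Kutta integrator approximates it (the parameter values here are illustrative):

```python
import math

# The nonlinear pendulum, theta'' = -(g/l) sin(theta), has no elementary
# closed-form solution, so we approximate it numerically with classic
# fourth-order Runge-Kutta and monitor energy conservation as a sanity check.

g, l = 9.81, 1.0

def deriv(state):
    theta, omega = state
    return (omega, -(g / l) * math.sin(theta))

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def energy(state):
    """Total mechanical energy per unit mass (should stay constant)."""
    theta, omega = state
    return 0.5 * (l * omega) ** 2 + g * l * (1 - math.cos(theta))

state = (1.0, 0.0)          # release from 1 rad, at rest
e0 = energy(state)
for _ in range(10000):      # integrate 10 s with dt = 1 ms
    state = rk4_step(state, 1e-3)
print("energy drift: %.2e" % abs(energy(state) - e0))
```

The tiny energy drift is the numerical analogue of the exact conservation law the analytic theory provides, which is how such approximations are validated in practice.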
One subfield at the confluence between CFD and electromagnetic modelling is computational magnetohydrodynamics. The quantum many-body problem leads naturally to the large and rapidly growing field of computational chemistry. Computational solid-state physics is an important division of computational physics dealing directly with material science. A field related to computational condensed matter is computational statistical mechanics; computational statistical physics makes heavy use of Monte Carlo-like methods. More broadly, it concerns itself with topics in the social sciences, network theory, and mathematical models for the propagation of disease. Computational astrophysics is the application of these techniques and methods to astrophysical problems.
81.
Condensed matter physics
–
Condensed matter physics is a branch of physics that deals with the physical properties of condensed phases of matter, in which particles adhere to each other. Condensed matter physicists seek to understand the behavior of these phases by using physical laws; in particular, these include the laws of quantum mechanics, electromagnetism and statistical mechanics. The field overlaps with chemistry, materials science and nanotechnology, and the theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics. A variety of topics in physics, such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid-state physics. Around the 1960s the study of the physical properties of liquids was added to this list, forming the basis for the new specialty of condensed matter physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. References to the condensed state can be traced to earlier sources; as a matter of fact, it would be more correct to unify solids and liquids under the title of condensed bodies. One of the first studies of condensed states of matter was by the English chemist Humphry Davy. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility and high electrical and thermal conductivity. This indicated that the atoms of John Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure. Davy further claimed that elements then believed to be gases, such as nitrogen and hydrogen, could be liquefied under the right conditions and would then behave as metals. In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all other known gaseous elements except nitrogen and hydrogen. By 1908, James Dewar and Heike Kamerlingh Onnes had succeeded in liquefying hydrogen and the then newly discovered helium, respectively.
Paul Drude in 1900 proposed the first theoretical model for an electron moving through a metallic solid. Drude's model described the properties of metals in terms of a gas of free electrons. The phenomenon of superconductivity, observed by Onnes shortly after the liquefaction of helium, completely surprised the best theoretical physicists of the time and remained unexplained for several decades. Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld and Felix Bloch. Pauli realized that the free electrons in a metal must obey Fermi–Dirac statistics; using this idea, he developed the theory of paramagnetism in 1926. Shortly after, Sommerfeld incorporated the Fermi–Dirac statistics into the free electron model and made it better able to explain the heat capacity of metals. Two years later, Bloch used quantum mechanics to describe the motion of an electron in a periodic lattice. Magnetism as a property of matter has been known in China since 4000 BC. Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials, and in 1906 Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets.
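Drude's free-electron picture yields a simple closed-form dc conductivity that can be evaluated directly. A sketch using textbook-style estimates for copper (the carrier density and relaxation time are illustrative values, not data from the article):

```python
# Drude model: treating conduction electrons as a classical gas that
# scatters with mean free time tau gives a dc conductivity
#     sigma = n * e**2 * tau / m.
# The carrier density and relaxation time below are rough room-temperature
# estimates for copper, used only as an illustration.

e = 1.602e-19        # electron charge (C)
m = 9.109e-31        # electron mass (kg)
n = 8.5e28           # conduction-electron density of copper (m^-3)
tau = 2.5e-14        # relaxation time (s)

sigma = n * e**2 * tau / m
print("Drude conductivity: %.2e S/m" % sigma)   # of order 6e7 S/m
```

That the estimate lands near the measured conductivity of copper is part of why the model was influential, even though, as the text notes, it fails badly for heat capacity until Fermi–Dirac statistics are included.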
Condensed matter physics
–
Heike Kamerlingh Onnes and Johannes van der Waals with the helium "liquefactor" in Leiden (1908)
Condensed matter physics
–
A replica of the first point-contact transistor in Bell labs
Condensed matter physics
–
Computer simulation of "nanogears" made of fullerene molecules. It is hoped that advances in nanoscience will lead to machines working on the molecular scale.
82.
Mesoscopic physics
–
(This article refers to the sub-discipline of condensed matter physics, not the branch of meteorology concerned with the study of weather systems smaller than synoptic-scale systems.) Mesoscopic physics is a subdiscipline of condensed matter physics that deals with materials of an intermediate length scale. The scale of these materials can be described as being between the size of a quantity of atoms and that of materials measuring micrometres; the lower limit can also be defined as the size of individual atoms. At the micrometre level are bulk materials. Both mesoscopic and macroscopic objects contain a large number of atoms, but a macroscopic device, when scaled down to a meso-size, starts revealing quantum mechanical properties. For example, at the macroscopic level the conductance of a wire increases continuously with its diameter; at the mesoscopic level, however, the wire's conductance is quantized, increasing in discrete, whole steps. The applied science of mesoscopic physics deals with the potential of building nanodevices. Mesoscopic physics also addresses fundamental practical problems which occur when a macroscopic object is miniaturized. The physical properties of materials change as their size approaches the nanoscale; for bulk materials larger than one micrometre, the percentage of atoms at the surface is insignificant in relation to the number of atoms in the entire material. The subdiscipline has dealt primarily with artificial structures of metal or semiconducting material which have been fabricated by the techniques employed for producing microelectronic circuits. There is no rigid definition for mesoscopic physics, but the systems studied are normally in the range of 100 nm to 1000 nm; 100 nanometers is the approximate upper limit for a nanoparticle. Thus, mesoscopic physics has a close connection to the fields of nanofabrication and nanotechnology, and devices used in nanotechnology are examples of mesoscopic systems. Three categories of new phenomena in such systems are interference effects, quantum confinement effects and charging effects.
Quantum confinement effects describe electrons in terms of energy levels, potential wells, valence bands and conduction bands. Electrons in bulk material can be described by energy bands or electron energy levels: electrons exist at different energy levels or bands. In bulk materials these energy levels are described as continuous, because the difference in energy between adjacent levels is negligible.
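The crossover from quasi-continuous bands to discrete levels can be illustrated with the textbook one-dimensional infinite square well; this is an illustrative model, not a calculation for any particular device:

```python
import math

# Quantum confinement illustrated with the 1-D infinite square well:
#     E_n = n^2 * pi^2 * hbar^2 / (2 m L^2).
# As the box width L shrinks from macroscopic to nanometre scale, the
# spacing between levels grows by many orders of magnitude, turning a
# quasi-continuous band into well-separated discrete levels.

hbar = 1.0546e-34    # J s
m_e = 9.109e-31      # electron mass, kg
eV = 1.602e-19       # J per eV

def level_spacing_eV(L):
    """Energy gap E_2 - E_1 for an electron in a box of width L (metres)."""
    E1 = math.pi**2 * hbar**2 / (2 * m_e * L**2)
    return (4 - 1) * E1 / eV

for L in (1e-2, 1e-6, 5e-9):
    print("L = %7.0e m  ->  E2 - E1 = %.2e eV" % (L, level_spacing_eV(L)))
# At L = 1 cm the spacing is immeasurably small (a continuum for all
# practical purposes); at L = 5 nm it is tens of meV, comparable to the
# room-temperature thermal energy kT ~ 25 meV.
```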
83.
Solid-state physics
–
Solid-state physics is the study of rigid matter, or solids, through methods such as quantum mechanics, crystallography, electromagnetism and metallurgy. It is the largest branch of condensed matter physics. Solid-state physics studies how the large-scale properties of solid materials result from their atomic-scale properties; thus, solid-state physics forms a theoretical basis of materials science. It also has direct applications, for example in the technology of transistors and semiconductors. Solid materials are formed from densely packed atoms, which interact intensely, and these interactions produce the mechanical, thermal, electrical, magnetic and optical properties of solids. Depending on the material involved and the conditions in which it was formed, the atoms may be arranged regularly or irregularly. The bulk of solid-state physics, as a general theory, is focused on crystals. Primarily, this is because the periodicity of atoms in a crystal, its defining characteristic, facilitates mathematical modeling. Likewise, crystalline materials often have electrical, magnetic, optical or mechanical properties that can be exploited for engineering purposes. The forces between the atoms in a crystal can take a variety of forms. For example, a crystal of sodium chloride is made up of ionic sodium and chloride ions held together by ionic bonds. In others, the atoms share electrons and form covalent bonds. In metals, electrons are shared amongst the whole crystal in metallic bonding. Finally, the noble gases do not undergo any of these types of bonding; in solid form, they are held together by van der Waals forces resulting from the polarisation of the electronic charge cloud on each atom. The differences between the types of solid result from the differences between their bonding. The American Physical Society's Division of Solid State Physics (DSSP) catered to industrial physicists, and solid-state physics became associated with the technological applications made possible by research on solids.
By the early 1960s, the DSSP was the largest division of the American Physical Society. Large communities of solid-state physicists also emerged in Europe after World War II, in particular in England, Germany and the Soviet Union. In the United States and Europe, solid state became a prominent field through its investigations into semiconductors, superconductivity and nuclear magnetic resonance. Today, solid-state physics is broadly considered to be the subfield of condensed matter physics that focuses on the properties of solids with regular crystal lattices. Many properties of materials are affected by their crystal structure, and this structure can be investigated using a range of crystallographic techniques, including X-ray crystallography, neutron diffraction and electron diffraction. The sizes of the individual crystals in a crystalline solid material vary depending on the material involved. Real crystals feature defects or irregularities in the ideal arrangements. Properties of materials such as electrical conduction and heat capacity are investigated by solid-state physics; an early model of electrical conduction was the Drude model, which applied kinetic theory to the electrons in a solid.
Solid-state physics
–
An example of a simple cubic lattice
84.
Soft matter
–
Soft matter comprises liquids, colloids, polymers, foams, gels, granular materials, liquid crystals and a number of biological materials. These materials share an important common feature: their predominant physical behaviors occur at an energy scale comparable with room-temperature thermal energy. At these temperatures, quantum aspects are generally unimportant. Pierre-Gilles de Gennes, often called the founding father of the field, is especially noted for inventing the concept of reptation. Interesting behaviors arise from soft matter in ways that cannot be predicted, or are difficult to predict, directly from its atomic or molecular constituents; instead, the material often self-organizes into mesoscopic physical structures, and the properties and interactions of these structures may determine the macroscopic behavior of the material. Soft materials are important in a wide range of technological applications. They may appear as structural and packaging materials, foams and adhesives, detergents and cosmetics, paints, food additives, lubricants and fuel additives, and rubber in tires; in addition, a number of biological materials are classifiable as soft matter. Liquid crystals, another category of soft matter, exhibit a responsivity to electric fields that makes them very important as materials in display devices. These properties lead to large thermal fluctuations, a wide variety of forms, sensitivity of equilibrium structures to external conditions and macroscopic softness. Soft materials, such as polymers and lipids, have found applications in nanotechnology as well. An important part of soft condensed matter research is biophysics; soft condensed matter biophysics may be diverging into two directions, a physical chemistry approach and a complex systems approach.
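The claim that soft-matter behavior is governed by energies comparable to room-temperature thermal energy can be made concrete by evaluating kT at 300 K (the temperature and the comparison figures are illustrative):

```python
k_B = 1.381e-23      # Boltzmann constant, J/K
eV = 1.602e-19       # J per eV

kT = k_B * 300       # thermal energy at room temperature (~300 K)
print("kT at 300 K: %.2e J = %.3f eV" % (kT, kT / eV))   # ~0.026 eV

# A covalent bond is a few eV, roughly 100x kT, so the structure of an
# ordinary solid is untouched by thermal agitation; soft-matter structures
# (entanglements, membranes, colloidal contacts) are held together by
# energies of order kT and therefore fluctuate strongly at room temperature.
```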
85.
Mathematical physics
–
Mathematical physics refers to the development of mathematical methods for application to problems in physics. It is a branch of applied mathematics, but deals specifically with physical problems. There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods. The first is the rigorous, abstract and advanced reformulation of Newtonian mechanics adopting Lagrangian mechanics and Hamiltonian mechanics; both formulations are embodied in analytical mechanics. These approaches and ideas can be, and in fact have been, extended to other areas of physics, such as statistical mechanics, continuum mechanics and classical field theory. Moreover, they have provided several examples and basic ideas in differential geometry. The theory of partial differential equations is perhaps most closely associated with mathematical physics; these equations were developed intensively from the second half of the eighteenth century until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics, electricity, magnetism and aerodynamics. The theory of atomic spectra developed almost concurrently with the mathematical fields of linear algebra and the spectral theory of operators. Nonrelativistic quantum mechanics includes Schrödinger operators, and it has connections to atomic and molecular physics; quantum information theory is another subspecialty. The special and general theories of relativity require a rather different type of mathematics. This was group theory, which played an important role in both quantum field theory and differential geometry. This was, however, gradually supplemented by topology and functional analysis in the mathematical description of cosmological as well as quantum field theory phenomena; in this area both homological algebra and category theory are important nowadays. Statistical mechanics forms a separate field, which includes the theory of phase transitions. It relies upon Hamiltonian mechanics and is related to the more mathematical ergodic theory.
There are increasing interactions between combinatorics and physics, in particular statistical physics. The usage of the term "mathematical physics" is sometimes idiosyncratic: certain parts of mathematics that arose from the development of physics are not, in fact, considered parts of mathematical physics. The term is sometimes used to denote research aimed at studying and solving problems inspired by physics, or thought experiments, within a mathematically rigorous framework.
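One standard example from the nonrelativistic quantum mechanics mentioned above is the spectral problem for Schrödinger operators: for the harmonic oscillator it has closed-form eigenfunctions built from Hermite polynomials. A numerical sketch (in dimensionless units with hbar = m = omega = 1, an assumption of this example) checks their orthonormality:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

# Schrodinger's equation for the harmonic oscillator (hbar = m = omega = 1)
# has eigenfunctions psi_n(x) = N_n * H_n(x) * exp(-x^2/2), where H_n is
# the physicists' Hermite polynomial, with eigenvalues E_n = n + 1/2.
# Below we verify orthonormality of the first few by direct quadrature.

def psi(n, x):
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                 # select H_n from the Hermite series
    return norm * hermval(x, coeffs) * np.exp(-x**2 / 2)

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
for a in range(4):
    for b in range(4):
        overlap = np.sum(psi(a, x) * psi(b, x)) * dx
        expected = 1.0 if a == b else 0.0
        assert abs(overlap - expected) < 1e-8
print("first four eigenfunctions are orthonormal; E_n = n + 1/2")
```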
Mathematical physics
–
An example of mathematical physics: solutions of Schrödinger's equation for quantum harmonic oscillators (left) with their amplitudes (right).
86.
Nuclear physics
–
Nuclear physics is the field of physics that studies atomic nuclei and their constituents and interactions. Other forms of nuclear matter are also studied. Nuclear physics should not be confused with atomic physics, which studies the atom as a whole. Discoveries in nuclear physics have led to applications in many fields; such applications are studied in the field of nuclear engineering. Particle physics evolved out of nuclear physics, and the two fields are typically taught in close association. Nuclear astrophysics, the application of nuclear physics to astrophysics, is crucial in explaining the inner workings of stars. The discovery of the electron by J. J. Thomson a year after Becquerel's discovery of radioactivity was an indication that the atom had internal structure. In the years that followed, radioactivity was extensively investigated, notably by Marie and Pierre Curie as well as by Ernest Rutherford and his collaborators. By the turn of the 20th century, physicists had also discovered three types of radiation emanating from atoms, which they named alpha, beta and gamma radiation. Experiments by Otto Hahn in 1911 and by James Chadwick in 1914 discovered that the beta decay spectrum was continuous rather than discrete. That is, electrons were ejected from the atom with a continuous range of energies, rather than the discrete amounts of energy that were observed in gamma and alpha decays. This was a problem for nuclear physics at the time, because it seemed to indicate that energy was not conserved in these decays. The 1903 Nobel Prize in Physics was awarded jointly to Becquerel, for his discovery, and to Marie and Pierre Curie, for their subsequent research into radioactivity. Rutherford was awarded the Nobel Prize in Chemistry in 1908 for his investigations into the disintegration of the elements and the chemistry of radioactive substances. In 1905 Albert Einstein formulated the idea of mass–energy equivalence, and in 1906 Ernest Rutherford published "Retardation of the α Particle from Radium in passing through matter".
Hans Geiger expanded on this work in a communication to the Royal Society, with experiments he and Rutherford had done passing alpha particles through air, aluminium foil and gold leaf. More work was published in 1909 by Geiger and Ernest Marsden, and in 1911–1912 Rutherford went before the Royal Society to explain the experiments and propound the new theory of the atomic nucleus as we now understand it. The plum pudding model had predicted that the alpha particles should come out of the foil with their trajectories being at most slightly bent. But Rutherford instructed his team to look for something that it shocked him to observe: a few particles were scattered through large angles, even completely backwards in some cases. He likened it to firing a bullet at tissue paper and having it bounce off. As an example, in this model nitrogen-14 consisted of a nucleus with 14 protons and 7 electrons. The Rutherford model worked quite well until studies of nuclear spin were carried out by Franco Rasetti at the California Institute of Technology in 1929.
87.
Particle physics
–
Particle physics is the branch of physics that studies the nature of the particles that constitute matter and radiation. By our current understanding, these particles are excitations of the quantum fields that also govern their interactions. The currently dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model; in more technical terms, the particles are described by quantum state vectors in a Hilbert space, which is also treated in quantum field theory. All particles and their interactions observed to date can be described almost entirely by a quantum field theory called the Standard Model. The Standard Model, as currently formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature. In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model. The idea that all matter is composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. Throughout the 1950s and 1960s, a bewildering variety of particles were found in collisions of particles from increasingly high-energy beams; it was referred to informally as the particle zoo. The current state of the classification of all elementary particles is explained by the Standard Model, which describes the strong, weak and electromagnetic fundamental interactions. The species of gauge bosons are the gluons, the W−, W+ and Z bosons, and the photon.
The Standard Model also contains 24 fundamental fermions, which are the constituents of all matter. Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson. Early in the morning on 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson. The world's major particle physics laboratories include: Brookhaven National Laboratory, whose main facility is the Relativistic Heavy Ion Collider, which collides heavy ions such as gold ions and is the world's first heavy-ion collider and the world's only polarized-proton collider; the Budker Institute of Nuclear Physics, whose main projects are now electron-positron colliders, including VEPP-2000, operated since 2006; CERN, whose main project is now the Large Hadron Collider, which had its first beam circulation on 10 September 2008, is now the world's most energetic collider of protons, and also became the most energetic collider of heavy ions after it began colliding lead ions; and DESY, whose main facility is the Hadron Elektron Ring Anlage (HERA), which collides electrons and positrons with protons.
Particle physics
–
Large Hadron Collider tunnel at CERN
88.
Biomechanics
–
Biomechanics is the study of the structure and function of biological systems such as humans, animals, plants, organs, fungi and cells by means of the methods of mechanics. Biomechanics is closely related to engineering because it often uses traditional engineering sciences to analyze biological systems; some simple applications of Newtonian mechanics and/or materials science can supply correct approximations to the mechanics of many biological systems. However, biological systems are usually far more complex than man-made systems, so numerical methods are applied in almost every biomechanical study. Research is done in an iterative process of hypothesis and verification, including several steps of modeling, computer simulation and experimental measurement, drawing on elements of mechanical engineering, electrical engineering, computer science and gait analysis. Biomechanics in sports can be stated as the muscular, joint and skeletal actions of the body during the execution of a given task, skill and/or technique. Proper understanding of biomechanics relating to sports skill has the greatest implications for sports performance, rehabilitation and injury prevention; as noted by Dr. Michael Yessis, one could say that the best athlete is the one that executes his or her skill the best. The mechanical analysis of biomaterials and biofluids is usually carried out with the concepts of continuum mechanics; this assumption breaks down when the length scales of interest approach the order of the microstructural details of the material. One of the most remarkable characteristics of biomaterials is their hierarchical structure; in other words, the mechanical characteristics of these materials rely on physical phenomena occurring at multiple levels, from the molecular all the way up to the tissue and organ levels. Biomaterials are classified into two groups, hard and soft tissues. Mechanical deformation of hard tissues may be analysed with the theory of linear elasticity.
On the other hand, soft tissues usually undergo large deformations, and thus their analysis relies on finite strain theory and computer simulations. The interest in continuum biomechanics is spurred by the need for realism in the development of medical simulation. Biological fluid mechanics, or biofluid mechanics, is the study of both gas and liquid fluid flows in or around biological organisms. An often studied liquid biofluid problem is that of blood flow in the cardiovascular system. Under certain mathematical circumstances, blood flow can be modelled by the Navier–Stokes equations; in vivo, whole blood is assumed to be an incompressible Newtonian fluid. However, this assumption fails when considering forward flow within arterioles: at the microscopic scale, the effects of individual red blood cells become significant. When the diameter of the vessel is just slightly larger than the diameter of a red blood cell, the Fåhræus–Lindqvist effect occurs. As the diameter of the vessel decreases further, the red blood cells have to squeeze through the vessel, often passing only in single file.
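For the idealized case of steady laminar flow of a Newtonian fluid in a rigid cylindrical vessel, the Navier–Stokes equations reduce to the Hagen–Poiseuille law, which can be evaluated directly. The vessel dimensions, viscosity and pressure drop below are illustrative values only, not measurements from any particular vessel:

```python
import math

# Hagen-Poiseuille law for steady laminar Newtonian flow in a rigid tube:
#     Q = pi * dP * r**4 / (8 * mu * L).
# All parameter values below are illustrative.

mu = 3.5e-3          # apparent viscosity of whole blood, Pa s (typical)
r = 2.0e-3           # vessel radius, m
L = 0.1              # vessel segment length, m
dP = 400.0           # pressure drop along the segment, Pa

Q = math.pi * dP * r**4 / (8 * mu * L)
print("volumetric flow: %.2e m^3/s" % Q)

# The r**4 dependence is the physiologically important feature: halving
# the radius cuts the flow by a factor of 16 at the same pressure drop.
Q_half = math.pi * dP * (r / 2)**4 / (8 * mu * L)
print("ratio after halving radius: %.1f" % (Q / Q_half))   # 16.0
```

This idealization is exactly what breaks down in arterioles, where the Newtonian-fluid assumption and the continuum picture of blood both fail.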
Biomechanics
–
Page of one of the first works of Biomechanics (De Motu Animalium of Giovanni Alfonso Borelli) in the 17th century
Biomechanics
–
Red blood cells
Biomechanics
–
Chinstrap penguin leaping over water
Biomechanics
–
Subdisciplines
89.
Psychophysics
–
Psychophysics quantitatively investigates the relationship between physical stimuli and the sensations and perceptions they produce. Psychophysics also refers to a general class of methods that can be applied to study a perceptual system. Modern applications rely heavily on threshold measurement, ideal observer analysis and signal detection theory. Psychophysics has widespread and important practical applications. For example, in the study of signal processing, psychophysics has informed the development of models of lossy compression; these models explain why humans perceive very little loss of quality when audio and video are compressed using lossy methods. Many of the techniques and theories of psychophysics were formulated in 1860, when Gustav Theodor Fechner in Leipzig published Elemente der Psychophysik. He coined the term psychophysics, describing research intended to relate physical stimuli to the contents of consciousness, such as sensations. As a physicist and philosopher, Fechner aimed at developing a method that relates matter to the mind, connecting the publicly observable world to a person's private impression of it. From this, Fechner derived his well-known logarithmic scale, now known as the Fechner scale. Weber's and Fechner's work formed one of the bases of psychology as a science; Fechner's work systematised the introspectionist approach, which had to contend with the behaviorist approach, in which even verbal responses are as physical as the stimuli. Fechner's work was studied and extended by Charles S. Peirce, who was aided by his student Joseph Jastrow. Peirce and Jastrow largely confirmed Fechner's empirical findings, but not all of them. In particular, an experiment of Peirce and Jastrow rejected Fechner's estimation of a threshold of perception of weights. The Peirce–Jastrow experiments were conducted as part of Peirce's application of his program to human perception; other studies considered the perception of light. Jastrow wrote the summary, "Mr. Peirce's courses in logic gave me my first real experience of intellectual muscle."
He borrowed the apparatus for me, which I took to my room and installed at my window, and with which the observations were made; the results were published over our joint names in the Proceedings of the National Academy of Sciences. This work clearly distinguishes observable cognitive performance from the expression of consciousness. One leading method today is based on signal detection theory, developed for cases of very weak stimuli. However, the subjectivist approach persists among those in the tradition of Stanley Smith Stevens. Stevens revived the idea of a power law suggested by 19th-century researchers, in contrast with Fechner's log-linear function. He also advocated the assignment of numbers in ratio to the strengths of stimuli, and added techniques such as magnitude production and cross-modality matching. He opposed the assignment of stimulus strengths to points on a line that are labeled in order of strength; nevertheless, that sort of response has remained popular in applied psychophysics.
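The contrast between Fechner's log-linear function and Stevens' power law can be sketched numerically. The following Python snippet is an illustration only: the function names, the constants, and the exponent of roughly 0.33 (often cited for brightness) are assumptions for demonstration, not part of the text. It shows that doubling the stimulus adds a constant increment to sensation under Fechner's law, but multiplies sensation by a constant factor under Stevens' law.

```python
import math

def fechner(intensity, threshold=1.0, k=1.0):
    """Fechner's log law: sensation grows with the logarithm of
    stimulus intensity relative to the absolute threshold."""
    return k * math.log(intensity / threshold)

def stevens(intensity, exponent=0.33, k=1.0):
    """Stevens' power law: sensation is a power function of intensity.
    The exponent value here is an illustrative assumption."""
    return k * intensity ** exponent

# Doubling the stimulus adds a constant amount under Fechner's law...
delta1 = fechner(2.0) - fechner(1.0)
delta2 = fechner(4.0) - fechner(2.0)
# ...but multiplies sensation by a constant factor under Stevens' law.
ratio1 = stevens(2.0) / stevens(1.0)
ratio2 = stevens(4.0) / stevens(2.0)
```

Under the log law the two increments are equal, while under the power law the two ratios are equal, which is exactly the structural difference between the two scales.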
Psychophysics
–
Diagram showing a specific staircase procedure: the Transformed Up/Down Method (1-up/2-down rule). Until the first reversal (which is neglected), the simple up/down rule and a larger step size are used.
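The 1-up/2-down rule described in the diagram can be sketched as a short simulation. This Python sketch is illustrative only: the observer model, starting level, step size, and trial count are assumptions, and for brevity it omits the initial simple up/down phase with the larger step size used before the first reversal.

```python
def staircase_1up_2down(respond, start=10.0, step=1.0, n_trials=30):
    """Minimal sketch of the 1-up/2-down transformed staircase.
    `respond(level)` returns True for a correct response.
    Two consecutive correct responses lower the level by `step` (2 down);
    any incorrect response raises it by `step` (1 up)."""
    level, correct_run, levels = start, 0, []
    for _ in range(n_trials):
        levels.append(level)
        if respond(level):
            correct_run += 1
            if correct_run == 2:   # 2 down: step to a harder level
                level -= step
                correct_run = 0
        else:                      # 1 up: step to an easier level
            level += step
            correct_run = 0
    return levels

# Deterministic illustrative observer: correct whenever the level
# is at or above a simulated threshold of 4.2.
track = staircase_1up_2down(lambda lvl: lvl >= 4.2)
```

With this deterministic observer the track descends from the easy starting level and then oscillates around the simulated threshold; with a real (stochastic) observer, the 1-up/2-down rule converges on the stimulus level yielding about 70.7% correct responses.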
90.
Integrated Authority File
–
The Integrated Authority File, or GND, is an international authority file for the organisation of personal names, subject headings and corporate bodies from catalogues. It is used mainly for documentation in libraries and increasingly also by archives. The GND is managed by the German National Library in cooperation with various regional library networks in German-speaking Europe and other partners. The GND falls under the Creative Commons Zero license. The GND specification provides a hierarchy of high-level entities and sub-classes, useful in library classification, and an approach to the unambiguous identification of single elements. It also comprises an ontology intended for knowledge representation in the semantic web, available in RDF format.
Integrated Authority File
–
GND screenshot
91.
National Diet Library
–
The National Diet Library (NDL) is the only national library in Japan. It was established in 1948 to assist members of the National Diet of Japan in researching matters of public policy. The library is similar in purpose and scope to the United States Library of Congress. The National Diet Library consists of two main facilities, in Tokyo and Kyoto, and several other branch libraries throughout Japan. The Diet's power in prewar Japan was limited, and its need for information was correspondingly small; the original Diet libraries never developed either the collections or the services which might have made them vital adjuncts of genuinely responsible legislative activity. Until Japan's defeat, moreover, the executive had controlled all political documents, depriving the people and the Diet of access to vital information. The U.S. occupation forces under General Douglas MacArthur deemed reform of the Diet library system to be an important part of the democratization of Japan after its defeat in World War II. In 1946, each house of the Diet formed its own National Diet Library Standing Committee. Hani Gorō, a Marxist historian who had been imprisoned during the war for thought crimes and had been elected to the House of Councillors after the war, spearheaded the reform efforts. Hani envisioned the new body as both a citadel of popular sovereignty and the means of realizing a peaceful revolution. The National Diet Library opened in June 1948 in the present-day State Guest-House with an initial collection of 100,000 volumes. The first Librarian of the Diet Library was the politician Tokujirō Kanamori; the philosopher Masakazu Nakai served as the first Vice Librarian. In 1949, the NDL merged with the National Library and became the only national library in Japan. At this time the collection gained a million volumes previously housed in the former National Library in Ueno.
In 1961, the NDL opened at its present location in Nagatachō; in 1986, the NDL's Annex was completed to accommodate a combined total of 12 million books and periodicals. The Kansai-kan, which opened in October 2002 in the Kansai Science City, has a collection of 6 million items. In May 2002, the NDL opened a new branch, the International Library of Children's Literature, in the former building of the Imperial Library in Ueno. This branch contains some 400,000 items of literature from around the world. Though the NDL's original mandate was to be a library for the National Diet, it also serves the general public. In the fiscal year ending March 2004, for example, the library reported more than 250,000 reference inquiries. In contrast, as Japan's national library, the NDL collects copies of all publications published in Japan. The NDL has an extensive collection of some 30 million pages of documents relating to the Occupation of Japan after World War II. This collection includes the documents prepared by the General Headquarters of the Supreme Commander for the Allied Powers and the Far Eastern Commission. The NDL also maintains a collection of some 530,000 books and booklets and 2 million microform titles relating to the sciences.
National Diet Library
–
Tokyo Main Library of the National Diet Library
National Diet Library
–
Kansai-kan of the National Diet Library
National Diet Library
–
The National Diet Library
National Diet Library
–
Main building in Tokyo