1.
Pressure
–
Pressure is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure is the pressure relative to the ambient pressure. Various units are used to express pressure. Pressure may be expressed in terms of standard atmospheric pressure: the atmosphere (atm) is equal to this pressure, and the torr is defined as 1⁄760 of it. Manometric units, such as the centimetre of water and the millimetre of mercury, express pressure as the height of a column of a reference fluid. The symbol for pressure is p or P; the IUPAC recommendation is a lower-case p, but upper-case P is also widely used. The choice between P and p depends on the field in which one is working and on the nearby presence of other symbols for quantities such as power and momentum. Mathematically,

p = F / A

where p is the pressure, F is the magnitude of the normal force and A is the area of the surface. Pressure relates the vector surface element (an area with its normal direction) to the normal force acting on it. It is incorrect to say that the pressure is directed in such or such a direction: the pressure, as a scalar, has no direction; the force given by the relationship above does. If the orientation of the surface element is changed, the direction of the normal force changes accordingly. Pressure is transmitted to solid boundaries, or across arbitrary sections of fluid, normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume. The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N/m²); this name for the unit was added in 1971, and before that, pressure in SI was expressed simply in newtons per square metre. Other units of pressure, such as pounds per square inch (psi), are also in common use. The CGS unit of pressure is the barye (Ba), equal to 1 dyn·cm⁻² or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre, but using the names kilogram, gram, kilogram-force, or gram-force as units of force is expressly forbidden in SI.
The technical atmosphere (at) is 1 kgf/cm². Since a system under pressure has the potential to perform work on its surroundings, pressure is a measure of potential energy stored per unit volume. It is therefore related to energy density and may be expressed in units such as joules per cubic metre (J/m³, which is equal to Pa). In meteorology, atmospheric pressure is often given in hectopascals (hPa); similar pressures are given in kilopascals (kPa) in most other fields, where the hecto- prefix is rarely used.
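The definition p = F/A and the unit relationships above can be sketched in a few lines of code. This is a minimal illustration, not part of the original text; the helper name `pascals` and the sample numbers are assumptions.

```python
# Pressure unit conversions, assuming the standard definitions:
# 1 atm = 101325 Pa exactly, 1 torr = 1/760 atm, 1 bar = 100000 Pa,
# 1 technical atmosphere (at) = 1 kgf/cm^2 = 98066.5 Pa.

ATM = 101325.0           # Pa
TORR = ATM / 760.0       # Pa
BAR = 1.0e5              # Pa
AT = 9.80665 * 1.0e4     # Pa (1 kgf / 1 cm^2 = 9.80665 N / 1e-4 m^2)

def pascals(value, unit):
    """Convert a pressure reading in the given unit to pascals."""
    factors = {"Pa": 1.0, "atm": ATM, "torr": TORR, "bar": BAR, "at": AT}
    return value * factors[unit]

# p = F / A: a 10 N force spread over 2 m^2 gives 5 Pa.
force, area = 10.0, 2.0
print(pascals(force / area, "Pa"))   # 5.0
print(pascals(1.0, "atm"))           # 101325.0
```

Note how 760 torr round-trips to exactly one atmosphere, since the torr is defined as a fraction of the atmosphere rather than of the mmHg.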
2.
Continuum mechanics
–
Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century, and research in the area continues today. Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Continuum mechanics deals with physical properties of solids and fluids that are independent of any particular coordinate system in which they are observed. These physical properties are represented by tensors, which are mathematical objects that have the required property of being independent of the coordinate system; the tensors can be expressed in particular coordinate systems for computational convenience. Materials such as solids, liquids and gases are composed of molecules separated by space, and on a microscopic scale materials have cracks and discontinuities. A continuum, by contrast, is a body that can be continually sub-divided into infinitesimal elements whose properties are those of the bulk material. More specifically, the continuum hypothesis (or assumption) hinges on the concept of a representative elementary volume. This condition provides a link between the experimentalist's and the theoretician's viewpoints on constitutive equations, as well as a way of spatial and statistical averaging of the microstructure; the latter then provides a basis for stochastic finite elements. The levels of the statistical volume element (SVE) and the representative volume element (RVE) link continuum mechanics to statistical mechanics; the RVE may be assessed only in a limited way via experimental testing, namely when the constitutive response becomes spatially homogeneous. Specifically for fluids, the Knudsen number is used to assess to what extent the approximation of continuity can be made. As an illustrative example, consider car traffic on a highway, with just one lane for simplicity.
Somewhat surprisingly, and in a tribute to its effectiveness, continuum mechanics effectively models the movement of cars via a differential equation for the density of cars. The familiarity of this situation empowers us to understand a little of the continuum-discrete dichotomy underlying continuum modelling in general. To start modelling, define: x measures distance along the highway; t is time; ρ(x, t) is the density of cars on the highway; and v(x, t) is their velocity. Cars do not appear and disappear: consider any group of cars, from the car at the back of the group located at x = a to the car at the front located at x = b. The total number of cars in this group is

N = ∫ₐᵇ ρ dx.

Since cars are conserved, dN/dt = 0. The only way an integral can be zero for all intervals is if the integrand is zero for all x; consequently, conservation derives the first-order nonlinear conservation PDE

∂ρ/∂t + ∂(ρv)/∂x = 0

for all positions on the highway. This conservation PDE applies not only to car traffic but also to fluids, solids, crowds, animals, plants, bushfires, financial traders, and more. It is one equation with two unknowns, ρ and v, so another equation is needed to form a well-posed problem.
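The conservation PDE above can be demonstrated numerically. The sketch below, a deliberately minimal assumption-laden example (constant car speed, a first-order upwind scheme, periodic boundaries, and an arbitrary initial bump of cars), shows that the discrete scheme conserves the total number of cars N = ∫ρ dx just as the PDE promises.

```python
# A minimal sketch of the car-density conservation PDE
# dρ/dt + d(ρv)/dx = 0, solved with a first-order upwind scheme.
# The constant speed v, grid size and initial profile are all
# illustrative assumptions, not part of the derivation above.

n, L = 200, 10.0            # grid points, highway length
dx = L / n
v = 1.0                     # constant car speed (assumption)
dt = 0.4 * dx / v           # CFL-stable time step

# initial density: a bump of cars in the middle of the highway
rho = [1.0 if 4.0 < i * dx < 6.0 else 0.1 for i in range(n)]

def step(rho):
    """Advance the density one time step; flux q = rho*v, upwind in x.
    Index -1 wraps around, giving periodic boundary conditions."""
    q = [r * v for r in rho]
    return [rho[i] - dt / dx * (q[i] - q[i - 1]) for i in range(n)]

total0 = sum(rho) * dx      # total number of cars, N = integral of rho
for _ in range(500):
    rho = step(rho)
total1 = sum(rho) * dx

print(abs(total1 - total0) < 1e-9)  # True: cars are conserved
```

Because the flux differences telescope around the periodic ring, conservation here is exact up to floating-point rounding, mirroring dN/dt = 0 in the continuum argument.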
3.
Conservation of energy
–
In physics, the law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time. Energy can neither be created nor destroyed; rather, it transforms from one form to another. For instance, chemical energy can be converted to kinetic energy in the explosion of a stick of dynamite. A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist; that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. Ancient philosophers as far back as Thales of Miletus (c. 550 BCE) had inklings of the conservation of some underlying substance of which everything is made. However, there is no particular reason to identify this with what we know today as mass-energy. Empedocles wrote that in his universal system, composed of four roots (earth, air, water, fire), "nothing comes to be or perishes"; instead, these elements suffer continual rearrangement. In 1605, Simon Stevinus was able to solve a number of problems in statics based on the principle that perpetual motion was impossible. Galileo later pointed out that the height a moving body rises is equal to the height from which it falls, and used this observation to infer the idea of inertia. The remarkable aspect of this observation is that the height to which a moving body ascends on a frictionless surface does not depend on the shape of the surface. In 1669, Christiaan Huygens published his laws of collision. Among the quantities he listed as being invariant before and after the collision of bodies were both the sum of their linear momenta and the sum of their kinetic energies. However, the difference between elastic and inelastic collisions was not understood at the time, and this led to a dispute among later researchers as to which of these conserved quantities was the more fundamental.
In his Horologium Oscillatorium, Huygens gave a much clearer statement regarding the height of ascent of a moving body. His study of the dynamics of pendulum motion was based on a single principle: that the center of gravity of a heavy object cannot lift itself. The fact that kinetic energy is scalar, unlike linear momentum which is a vector, did not escape attention for long. It was Leibniz during 1676–1689 who first attempted a mathematical formulation of the kind of energy that is connected with motion. Using Huygens' work on collision, Leibniz noticed that in many mechanical systems of several masses mᵢ, each with velocity vᵢ, the quantity Σ mᵢvᵢ² was conserved so long as the masses did not interact. He called this quantity the vis viva or living force of the system. The principle represents an accurate statement of the approximate conservation of kinetic energy in situations where there is no friction. Many physicists at that time, such as Newton, held that the conservation of momentum (which holds even in systems with friction), as defined by the momentum mv, was the conserved vis viva. It was later shown that both quantities are conserved simultaneously, given the proper conditions such as an elastic collision. In 1687, Isaac Newton published his Principia, which was organized around the concept of force and momentum.
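The conservation of mechanical energy discussed above can be checked with a short calculation. The sketch below is an illustrative assumption-based example (a point mass in free fall under uniform gravity, with invented values for mass and height); it verifies that kinetic plus potential energy stays constant along the closed-form trajectory.

```python
# A small numerical check of mechanical energy conservation for a
# point mass falling under uniform gravity with no friction.
# The numbers g, m, h0 are illustrative assumptions.

g, m = 9.81, 2.0            # gravity (m/s^2), mass (kg)
h0, v0 = 100.0, 0.0         # initial height (m) and speed (m/s)

def energy(h, v):
    """Total mechanical energy: kinetic + gravitational potential."""
    return 0.5 * m * v * v + m * g * h

e0 = energy(h0, v0)
# closed-form state at later times t: v = g*t, h = h0 - g*t^2 / 2
for t in (0.5, 1.0, 2.0):
    v = g * t
    h = h0 - 0.5 * g * t * t
    assert abs(energy(h, v) - e0) < 1e-6  # energy is conserved

print(e0)  # 1962.0 J (up to rounding)
```

The kinetic term grows exactly as fast as the potential term shrinks, which is the frictionless special case Leibniz's vis viva principle describes.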
4.
Conservation of mass
–
The law of conservation of mass states that, for any system closed to all transfers of matter and energy, the mass of the system must remain constant over time. Hence, the quantity of mass is conserved over time. Thus, during any chemical reaction, nuclear reaction, or radioactive decay in an isolated system, the total mass of the reactants must equal the total mass of the products. The concept of mass conservation is widely used in many fields such as chemistry, mechanics, and fluid dynamics. Mass conservation holds exactly only for isolated systems, i.e. those completely isolated from all exchanges with the environment; in this circumstance, the mass-energy equivalence theorem states that mass conservation is equivalent to total energy conservation, which is the first law of thermodynamics. By contrast, for a closed system mass is only approximately conserved: certain types of matter may be created or destroyed, but the total mass and energy of the system remain unchanged (for a discussion, see mass in general relativity). An important idea in ancient Greek philosophy was that "nothing comes from nothing", so that what exists now has always existed: no new matter can come into existence where there was none before. A further principle of conservation was stated by Epicurus who, describing the nature of the Universe, wrote that "the totality of things was always such as it is now, and always will be". Jain philosophy, a non-creationist philosophy based on the teachings of Mahavira, states that the universe and its constituents cannot be created or destroyed; the Jain text Tattvarthasutra states that a substance is permanent, but its modes are characterised by creation and destruction. A principle of the conservation of matter was also stated by Nasīr al-Dīn al-Tūsī, who wrote that "a body of matter cannot disappear completely. It only changes its form, condition, composition, color and other properties." The principle of conservation of mass was first outlined by Mikhail Lomonosov in 1748, who proved it by experiments, though this is sometimes challenged. Antoine Lavoisier had expressed these ideas in 1774. Others whose ideas pre-dated the work of Lavoisier include Joseph Black and Henry Cavendish. The conservation of mass was obscure for millennia because of the buoyancy effect of the Earth's atmosphere on the weight of gases.
For example, a piece of wood weighs less after burning, which seemed to suggest that some of its mass had disappeared; careful experiments with sealed vessels, together with the vacuum pump, which enabled the weighing of gases using scales, resolved the apparent loss. Once understood, the conservation of mass was of great importance in progressing from alchemy to modern chemistry. Later precision research indicated that in certain reactions the loss or gain could not have been more than 2 to 4 parts in 100,000; the difference between the accuracy aimed at and attained by Lavoisier on the one hand, and by these later experimenters on the other, is enormous. In special relativity, the conservation of mass does not apply if the system is open and energy escapes. However, it continues to apply to totally closed (isolated) systems. If energy cannot escape a system, its mass cannot decrease; in relativity theory, so long as any type of energy is retained within a system, this energy exhibits mass.
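The chemical side of mass conservation can be illustrated with a stoichiometric mass balance. The sketch below is an assumption-based example: it uses rounded standard atomic masses and the combustion of methane, CH₄ + 2 O₂ → CO₂ + 2 H₂O, to show that the total mass of reactants equals the total mass of products whenever the atoms balance.

```python
# A minimal sketch of mass conservation in a balanced chemical
# reaction: CH4 + 2 O2 -> CO2 + 2 H2O. Atomic masses are rounded
# standard values (an illustrative assumption).

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol

def molar_mass(formula):
    """Molar mass from a dict of element counts, e.g. {'C': 1, 'H': 4}."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

CH4 = {"C": 1, "H": 4}
O2  = {"O": 2}
CO2 = {"C": 1, "O": 2}
H2O = {"H": 2, "O": 1}

reactants = molar_mass(CH4) + 2 * molar_mass(O2)
products  = molar_mass(CO2) + 2 * molar_mass(H2O)

print(abs(reactants - products) < 1e-9)  # True: mass is conserved
```

The equality holds because every atom on the left appears on the right; burning wood in open air only seems to violate it because the gaseous products escape the scale.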
5.
Momentum
–
In classical mechanics, linear momentum, translational momentum, or simply momentum is the product of the mass and velocity of an object, quantified in kilogram-metres per second. It is dimensionally equivalent to impulse, the product of force and time; Newton's second law of motion states that the change in linear momentum of a body is equal to the net impulse acting on it. For example, a heavy truck moving rapidly has a large momentum; if the truck were lighter, or moving more slowly, it would have less momentum. Linear momentum is also a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum does not change. In classical mechanics, conservation of momentum is implied by Newton's laws. It also holds in special relativity (with a modified definition of momentum) and, with appropriate definitions, a linear momentum conservation law holds in electrodynamics, quantum mechanics, and quantum field theory. It is ultimately an expression of one of the fundamental symmetries of space and time. Linear momentum depends on the frame of reference: observers in different frames would find different values of the linear momentum of a system, but each would observe that the value does not change with time. Momentum has a direction as well as a magnitude; quantities that have both a magnitude and a direction are known as vector quantities. Because momentum has a direction, it can be used to predict the resulting direction of objects after they collide, as well as their speeds. Below, the properties of momentum are described in one dimension; the vector equations are almost identical to the scalar equations. The momentum of a particle is traditionally represented by the letter p. It is the product of two quantities, the mass and velocity:

p = m v.

The units of momentum are the product of the units of mass and velocity. In SI units, if the mass is in kilograms and the velocity in metres per second, then the momentum is in kilogram metres per second (kg·m/s); in CGS units, if the mass is in grams and the velocity in centimetres per second, then the momentum is in gram centimetres per second (g·cm/s).
Being a vector, momentum has magnitude and direction. For example, a 1 kg model airplane, traveling due north at 1 m/s in straight and level flight, has a momentum of 1 kg·m/s due north measured from the ground. The momentum of a system of particles is the sum of their momenta: if two particles have masses m₁ and m₂, and velocities v₁ and v₂, the total momentum is

p = p₁ + p₂ = m₁v₁ + m₂v₂.

If all the particles are moving, the center of mass will generally be moving as well; if the center of mass is moving at velocity v_cm, the momentum is

p = m v_cm

where m is the total mass. This is known as Euler's first law. If a force F is applied to a particle for a time interval Δt, the momentum of the particle changes by an amount

Δp = F Δt.
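The conservation of total momentum p = p₁ + p₂ can be checked concretely for a collision. The sketch below uses the standard one-dimensional elastic-collision formulas with invented masses and velocities (assumptions for illustration), and verifies that both total momentum and total kinetic energy are unchanged.

```python
# A minimal sketch of momentum conservation in a 1-D elastic
# collision between two particles. Masses and velocities are
# illustrative assumptions.

def elastic_collision(m1, v1, m2, v2):
    """Post-collision velocities for a 1-D elastic collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1 = 2.0, 3.0    # kg, m/s
m2, v2 = 1.0, -1.0

u1, u2 = elastic_collision(m1, v1, m2, v2)

p_before  = m1 * v1 + m2 * v2               # total momentum p = m v
p_after   = m1 * u1 + m2 * u2
ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_after  = 0.5 * m1 * u1**2 + 0.5 * m2 * u2**2

print(abs(p_before - p_after) < 1e-9)    # True: momentum conserved
print(abs(ke_before - ke_after) < 1e-9)  # True: kinetic energy too
```

In an inelastic collision the momentum check would still pass but the kinetic-energy check would fail, which is exactly the distinction Huygens' contemporaries had not yet drawn.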
6.
Solid mechanics
–
Solid mechanics is fundamental for civil, aerospace, nuclear, and mechanical engineering, for geology, and for many branches of physics such as materials science. It has specific applications in other areas, such as understanding the anatomy of living beings. One of the most common practical applications of solid mechanics is the Euler-Bernoulli beam equation. Solid mechanics extensively uses tensors to describe stresses, strains, and the relationship between them. Solid mechanics occupies a central place within continuum mechanics; the field of rheology presents an overlap between solid and fluid mechanics. A material has a rest shape, and its shape departs from the rest shape due to stress. The amount of departure from the rest shape is called deformation, and the proportion of deformation to original size is called strain. If the applied stress is sufficiently low, the strain is proportional to the stress; this region of deformation is known as the linearly elastic region. It is most common for analysts in solid mechanics to use linear material models, owing to ease of computation; however, real materials often exhibit non-linear behavior, and as new materials are used and old ones are pushed to their limits, non-linear material models are becoming more common. There are four basic models that describe how a solid responds to an applied stress. (1) Elastically: when an applied stress is removed, the material returns to its undeformed state. Linearly elastic materials, those that deform proportionally to the applied load, can be described by the equations of linear elasticity. (2) Viscoelastically: such materials behave elastically but also exhibit damping; this implies that the material response has time-dependence. (3) Plastically: materials that behave elastically generally do so when the applied stress is less than a yield value. When the stress is greater than the yield stress, the material behaves plastically; that is, deformation occurring after yield is permanent. (4) Thermoelastically: there is coupling of mechanical with thermal responses. In general, thermoelasticity is concerned with elastic solids under conditions that are neither isothermal nor adiabatic. The simplest theory involves Fourier's law of heat conduction, as opposed to advanced theories with physically more realistic models.
This theorem includes the method of least work as a special case (1874). Later milestones include Timoshenko's correction of the Euler-Bernoulli beam equation (1922) and Hardy Cross's publication of the moment distribution method (1936), an important innovation in the design of continuous frames.
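The Euler-Bernoulli beam equation mentioned above has simple closed-form solutions for standard load cases. The sketch below evaluates one of them, the tip deflection δ = PL³/(3EI) of an end-loaded cantilever; the steel beam numbers are illustrative assumptions, not values from the text.

```python
# A quick sketch of the Euler-Bernoulli beam theory applied to a
# cantilever with a point load P at the free end. The closed-form tip
# deflection is delta = P * L^3 / (3 * E * I); the numbers below are
# illustrative assumptions for a small steel beam.

def tip_deflection(P, L, E, I):
    """Tip deflection of an end-loaded cantilever (small deflections)."""
    return P * L**3 / (3.0 * E * I)

P = 1000.0    # end load, N
L = 2.0       # beam length, m
E = 200e9     # Young's modulus of steel, Pa (typical textbook value)
I = 8.0e-6    # second moment of area, m^4

delta = tip_deflection(P, L, E, I)
print(round(delta * 1000, 4), "mm")   # tip deflection in millimetres
```

The formula is valid only in the linearly elastic, small-deflection regime discussed above; a deflection comparable to the beam depth would call for a non-linear model.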
7.
Stress (mechanics)
–
For example, when a solid vertical bar is supporting a weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a container under pressure, each particle gets pushed against by all the surrounding particles, and the container walls and the pressure-inducing surface push against them in reaction. These macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Stress inside a material may arise by various mechanisms, such as reaction to external forces applied to the material or to its surface. Any strain (deformation) of a material generates an internal elastic stress, analogous to the reaction force of a spring. In liquids and gases, only deformations that change the volume generate persistent elastic stress; however, if the deformation is gradually changing with time, even in fluids there will usually be some viscous stress opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress. Significant stress may exist even when deformation is negligible or non-existent. Stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress may also be imposed on a material without the application of net forces, for example by changes in temperature or chemical composition. Stress that exceeds certain strength limits of the material will result in permanent deformation or even change its crystal structure and chemical composition. In some branches of engineering, the term stress is occasionally used in a looser sense as a synonym of internal force; for example, in the analysis of trusses, it may refer to the total traction or compression force acting on a beam rather than the force divided by the area. Since ancient times humans have been consciously aware of stress inside materials.
Until the 17th century the understanding of stress was largely intuitive and empirical; the precise formulation had to await the development of calculus and continuum mechanics. With those tools, Augustin-Louis Cauchy was able to give the first rigorous and general mathematical model for stress in a homogeneous medium. Cauchy observed that the force across an imaginary surface is a linear function of its normal vector and, moreover, that this linear map is symmetric. The understanding of stress in liquids started with Newton, who provided a formula for friction forces in parallel laminar flow. Following the basic premises of continuum mechanics, stress is a macroscopic concept, defined as the force across a small boundary per unit area of that boundary. In a fluid at rest the force is perpendicular to the surface; in a solid, or in a flow of viscous liquid, the force F may not be perpendicular to the surface S. Hence the stress across a surface must be regarded as a vector quantity, not a scalar; moreover, its direction and magnitude depend on the orientation of S. Thus the stress state of the material must be described by a tensor, called the (Cauchy) stress tensor. With respect to any chosen coordinate system, the Cauchy stress tensor can be represented as a symmetric 3×3 matrix of real numbers.
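Cauchy's observation, that the traction on a surface is a linear function of the surface normal, can be sketched directly. The stress values below are invented for illustration; the code builds a symmetric 3×3 stress tensor and computes the traction vector tᵢ = σᵢⱼnⱼ for a chosen normal.

```python
# A minimal sketch of Cauchy's relation: the traction (force per unit
# area) across a surface is t_i = sigma_ij * n_j, a linear function
# of the unit normal n. The stress components are illustrative.

# symmetric Cauchy stress tensor, components in Pa
sigma = [
    [200.0,  50.0,   0.0],
    [ 50.0, 100.0,  30.0],
    [  0.0,  30.0, -80.0],
]

def traction(sigma, n):
    """Traction vector on a surface with unit normal n."""
    return [sum(sigma[i][j] * n[j] for j in range(3)) for i in range(3)]

n = [1.0, 0.0, 0.0]         # surface whose normal is the x-axis
print(traction(sigma, n))   # [200.0, 50.0, 0.0]

# the matrix representation is symmetric: sigma_ij == sigma_ji
assert all(sigma[i][j] == sigma[j][i] for i in range(3) for j in range(3))
```

For the x-normal surface the traction is just the first column of the matrix: a 200 Pa normal (pulling) component plus a 50 Pa shear component, illustrating why a single scalar cannot describe the stress state.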
8.
Deformation (mechanics)
–
Deformation in continuum mechanics is the transformation of a body from a reference configuration to a current configuration. A configuration is a set containing the positions of all particles of the body. A deformation may be caused by external loads, body forces, or changes in temperature, moisture content, chemical reactions, etc. Strain is a description of deformation in terms of relative displacement of particles in the body that excludes rigid-body motions. In a continuous body, a deformation field results from a stress field induced by applied forces or from changes in the temperature field inside the body. The relation between stresses and induced strains is expressed by constitutive equations, e.g. Hooke's law for linear elastic materials. Deformations which are recovered after the stress field has been removed are called elastic deformations; in this case, the continuum completely recovers its original configuration. On the other hand, irreversible deformations remain even after stresses have been removed; one type is plastic deformation, which occurs once stresses have attained a threshold value known as the elastic limit or yield stress. Another type of irreversible deformation is viscous deformation, which is the irreversible part of viscoelastic deformation. In the case of elastic deformations, the response function linking strain to the deforming stress is the compliance tensor of the material. Strain is a measure of deformation representing the displacement between particles in the body relative to a reference length. A general deformation of a body can be expressed in the form x = F(X), where X is the reference position of material points in the body. Such a measure does not distinguish between rigid-body motions and changes in shape of the body, and a deformation has units of length. We could, for example, define strain to be

ε ≐ ∂(x − X)/∂X = F′ − I;

hence strains are dimensionless and are usually expressed as a decimal fraction, a percentage, or in parts-per notation. Strains measure how much a given deformation differs locally from a rigid-body deformation. A strain is in general a tensor quantity.
Physical insight into strains can be gained by observing that a given strain can be decomposed into normal and shear components: the amount of stretch or compression along material line elements (which can arise from elongation, shortening, or volume changes) is the normal strain, and the amount of angular distortion associated with the sliding of plane layers over each other is the shear strain. It is sufficient to know the normal and shear components of strain on a set of three mutually perpendicular directions. Depending on the amount of strain, the analysis of deformation is subdivided into three theories. Finite strain theory deals with deformations in which both rotations and strains are arbitrarily large; in this case, the undeformed and deformed configurations of the continuum are significantly different, and this is commonly the case with elastomers, plastically-deforming materials, and other fluids and biological soft tissue. Infinitesimal strain theory, also called small strain theory, small deformation theory, or small displacement theory, applies where strains and rotations are both small; in this case, the undeformed and deformed configurations of the body can be assumed identical. Large-displacement or large-rotation theory assumes small strains but large rotations. In each of these theories the strain is defined differently.
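The decomposition of local deformation into a strain part and a rigid-rotation part can be sketched numerically. In the small-deformation setting, the symmetric part of the displacement gradient is the strain tensor and the antisymmetric part is the rotation tensor; the 2-D gradient values below are illustrative assumptions.

```python
# A small sketch of splitting a displacement gradient into strain and
# rotation: symmetric part = infinitesimal strain tensor, antisymmetric
# part = rotation tensor. The gradient values are illustrative.

# displacement gradient du_i/dx_j for a 2-D deformation
grad_u = [
    [0.010, 0.004],
    [0.002, -0.006],
]

n = 2
strain   = [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(n)]
            for i in range(n)]
rotation = [[0.5 * (grad_u[i][j] - grad_u[j][i]) for j in range(n)]
            for i in range(n)]

print(strain)
print(rotation)

# the two parts add back to the original gradient
assert all(
    abs(strain[i][j] + rotation[i][j] - grad_u[i][j]) < 1e-12
    for i in range(n) for j in range(n)
)
```

Only the symmetric part measures genuine change of shape; the antisymmetric part corresponds to a local rigid rotation, which strain by definition excludes.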
9.
Compatibility (mechanics)
–
In continuum mechanics, a compatible deformation tensor field in a body is that unique tensor field that is obtained when the body is subjected to a continuous, single-valued displacement field. Compatibility is the study of the conditions under which such a displacement field can be guaranteed. Compatibility conditions are particular cases of integrability conditions; they were first derived for linear elasticity by Barré de Saint-Venant in 1864 and proved rigorously by Beltrami in 1886. In the continuum description of a solid body we imagine the body to be composed of a set of infinitesimal volumes or material points. Each volume is assumed to be connected to its neighbors without any gaps or overlaps, and certain mathematical conditions have to be satisfied to ensure that gaps/overlaps do not develop when a continuum body is deformed. A body that deforms without developing any gaps/overlaps is called a compatible body, and compatibility conditions are mathematical conditions that determine whether a particular deformation will leave a body in a compatible state. In the context of infinitesimal strain theory, these conditions are equivalent to stating that the displacements in a body can be obtained by integrating the strains. Such an integration is possible if the Saint-Venant tensor R(ε) vanishes in a simply connected body, where ε is the infinitesimal strain tensor. For finite deformations the compatibility conditions take the form

R := ∇ × F = 0

where F is the deformation gradient. The compatibility conditions in linear elasticity are obtained by observing that there are six strain-displacement relations that are functions of only three unknown displacements. This suggests that the three displacements may be removed from the system of equations without loss of information; the resulting expressions in terms of only the strains provide constraints on the possible forms of a strain field.
We can write these conditions in index notation as

e_ikr e_jls ε_ij,kl = 0

where e_ijk is the permutation symbol. In direct tensor notation,

∇ × (∇ × ε) = 0

where the curl operator can be expressed in an orthonormal coordinate system as ∇ × ε = e_ijk ε_rj,i e_k ⊗ e_r. This condition is necessary; the same condition is also sufficient to ensure compatibility in a simply connected body. The quantity R_ijkm represents the components of the Riemann-Christoffel curvature tensor. The problem of compatibility in continuum mechanics involves the determination of allowable single-valued continuous fields on simply connected bodies. More precisely, the problem may be stated in the following manner. Consider the deformation of a body shown in Figure 1. Given a symmetric second-order tensor field ε, when is it possible to construct a vector field u such that

ε = ½ (∇u + (∇u)ᵀ)?

Suppose that there exists u such that the expression for ε holds. Then

ε_ik,jl − ε_jk,il − ε_il,jk + ε_jl,ik = 0,

or, in direct tensor notation, ∇ × (∇ × ε) = 0. The above are necessary conditions. If w is the rotation vector, then ∇ × ε = (∇w)ᵀ, so the necessary condition may also be written as ∇ × (∇w)ᵀ = 0. Let us now assume that the condition ∇ × (∇ × ε) = 0 is satisfied in a portion of a body.
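In two dimensions, the index-notation condition above reduces to the single Saint-Venant equation ε₁₁,₂₂ + ε₂₂,₁₁ − 2ε₁₂,₁₂ = 0. The sketch below checks it numerically for a strain field derived from a smooth displacement field (the displacement field and sample point are illustrative assumptions); such a field is compatible by construction, so the residual should vanish.

```python
# A numerical sketch of the 2-D Saint-Venant compatibility condition
# eps11,22 + eps22,11 - 2*eps12,12 = 0. The strain field is derived
# from the (arbitrarily chosen) smooth displacement field
# u(x, y) = (x^2 * y, x * y^2), so the condition must hold.

def u(x, y):
    return x * x * y, x * y * y

h = 1e-4  # finite-difference step

def strain(x, y):
    """Infinitesimal strain (e11, e22, e12) via central differences."""
    dux_dx = (u(x + h, y)[0] - u(x - h, y)[0]) / (2 * h)
    duy_dy = (u(x, y + h)[1] - u(x, y - h)[1]) / (2 * h)
    dux_dy = (u(x, y + h)[0] - u(x, y - h)[0]) / (2 * h)
    duy_dx = (u(x + h, y)[1] - u(x - h, y)[1]) / (2 * h)
    return dux_dx, duy_dy, 0.5 * (dux_dy + duy_dx)

def second(f, x, y, wrt):
    """Second partial derivative of a scalar field f by differences."""
    if wrt == "xx":
        return (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    if wrt == "yy":
        return (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    # mixed derivative d2f/dxdy
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)

e11 = lambda x, y: strain(x, y)[0]
e22 = lambda x, y: strain(x, y)[1]
e12 = lambda x, y: strain(x, y)[2]

x0, y0 = 0.3, 0.7
residual = (second(e11, x0, y0, "yy") + second(e22, x0, y0, "xx")
            - 2 * second(e12, x0, y0, "xy"))
print(abs(residual) < 1e-3)  # True: the strain field is compatible
```

A strain field written down arbitrarily, rather than derived from some u, would generally fail this check; that is precisely the gap/overlap obstruction the compatibility conditions rule out.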
10.
Finite strain theory
–
In finite strain theory, the undeformed and deformed configurations of the continuum are significantly different and a clear distinction has to be made between them. This is commonly the case with elastomers, plastically-deforming materials, and other fluids and biological soft tissue. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration κ₀ to a current or deformed configuration κₜ. A change in the configuration of a continuum body can be described by a displacement field: a field of all displacement vectors for all particles in the body. Relative displacement between particles occurs if and only if deformation has occurred; if displacement occurs without deformation, then it is deemed a rigid-body displacement. The displacement of particles indexed by variable i may be expressed as follows: the vector joining the positions of a particle in the undeformed configuration Pᵢ and the deformed configuration pᵢ is called the displacement vector, u = x − X. The partial derivative of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor ∇_X u; here α_Ji are the direction cosines between the material and spatial coordinate systems, with unit vectors E_J and eᵢ, respectively. The local deformation is characterized by the deformation gradient F = ∂x/∂X. Due to the assumption of continuity of the mapping χ, F has the inverse H = F⁻¹; by the implicit function theorem, the Jacobian determinant J = det F must then be nonsingular, i.e. J ≠ 0. Consider a particle or material point P with position vector X = X_I I_I in the undeformed configuration. After a displacement of the body, the new position of the particle, indicated by p in the new configuration, is given by the position vector x = xᵢ eᵢ.
The coordinate systems for the undeformed and deformed configurations can be superimposed for convenience. Consider now a material point Q neighboring P, with position vector X + ΔX = (X_I + ΔX_I) I_I. In the deformed configuration this particle has a new position q given by the vector x + Δx. Assuming that the line segments ΔX and Δx joining the particles P and Q in the undeformed and deformed configurations, respectively, are small, we can express them as dX and dx. A geometrically consistent definition of the time derivative of the deformation gradient requires an excursion into differential geometry. The time derivative of F is

Ḟ = ∂F/∂t = ∂/∂t (∂x/∂X) = ∂/∂X (∂x/∂t) = ∂V/∂X

where V is the velocity. The derivative on the right-hand side represents a material velocity gradient. It is common to convert that into a spatial gradient, i.e.

Ḟ = ∂V/∂X = (∂V/∂x) · (∂x/∂X) = l · F

where l is the spatial velocity gradient. If the spatial velocity gradient is constant, the equation can be solved exactly to give

F = e^(l t)

assuming F = 1 at t = 0.
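A concrete finite-deformation example makes the deformation gradient tangible. The sketch below uses simple shear, x = X + kY, y = Y, with an invented shear amount k; it builds F, checks that J = det F = 1 (simple shear preserves volume), and computes the Green-Lagrange strain E = ½(FᵀF − I).

```python
# A short sketch computing the deformation gradient F and the
# Green-Lagrange strain E = (F^T F - I) / 2 for 2-D simple shear
# x = X + k*Y, y = Y. The shear amount k is an assumption.

k = 0.5  # shear parameter

# deformation gradient F = dx/dX for simple shear
F = [
    [1.0, k],
    [0.0, 1.0],
]

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][p] * B[p][j] for p in range(2)) for j in range(2)]
            for i in range(2)]

Ft = [[F[j][i] for j in range(2)] for i in range(2)]   # transpose
C = matmul(Ft, F)                                      # right Cauchy-Green tensor
E = [[0.5 * (C[i][j] - (1.0 if i == j else 0.0)) for j in range(2)]
     for i in range(2)]

# J = det F = 1: simple shear preserves volume
J = F[0][0] * F[1][1] - F[0][1] * F[1][0]
print(J)  # 1.0
print(E)  # [[0.0, 0.25], [0.25, 0.125]]
```

Note the nonzero E₂₂ = k²/2 term: unlike the infinitesimal strain, the finite strain measure picks up the second-order stretch that simple shear induces along the shearing direction.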
11.
Infinitesimal strain theory
–
In infinitesimal strain theory, the strains and displacement gradients are assumed to be much smaller than unity; with this assumption, the equations of continuum mechanics are considerably simplified. This approach may also be called small deformation theory or small displacement theory; it is contrasted with finite strain theory, where the opposite assumption is made. In such a linearization, the non-linear or second-order terms of the finite strain tensor are neglected, and the material displacement gradient components and the spatial displacement gradient components are approximately equal. From the geometry of Figure 1, the deformed length of a line element initially of length dx along the x-axis is

a̅b̅ = √(dx² + 2(∂uₓ/∂x)dx² + (∂uₓ/∂x)²dx² + (∂u_y/∂x)²dx²) = dx √(1 + 2 ∂uₓ/∂x + (∂uₓ/∂x)² + (∂u_y/∂x)²).

For very small displacement gradients, i.e. ∂uₓ/∂x ≪ 1, this is approximately dx (1 + ∂uₓ/∂x), so the normal strain in the x-direction is ∂uₓ/∂x. Therefore, the diagonal elements of the infinitesimal strain tensor are the normal strains in the coordinate directions. The results of certain coordinate-independent operations on the strain tensor are called strain invariants. In the principal coordinate system there are no shear strain components. An octahedral plane is one whose normal makes equal angles with the three principal directions. The engineering shear strain on an octahedral plane is called the octahedral shear strain and is given by

γ_oct = (2/3) √((ε₁ − ε₂)² + (ε₂ − ε₃)² + (ε₃ − ε₁)²)

where ε₁, ε₂, ε₃ are the principal strains. Several definitions of equivalent strain can be found in the literature. If the strain components are prescribed arbitrarily, a single-valued continuous displacement field does not generally exist; therefore, some restrictions, named compatibility equations, are imposed upon the strain components. With the addition of the three compatibility equations, the number of independent equations is reduced to three, matching the number of unknown displacement components. These constraints on the strain tensor were discovered by Saint-Venant, and are called the Saint-Venant compatibility equations; they serve to assure a single-valued continuous displacement function uᵢ. When one dimension of a body is very large compared with the others, the strains associated with it, i.e. the normal strain ε₃₃ and the shear strains ε₁₃ and ε₂₃, are constrained to be nearly zero; plane strain is then an acceptable approximation.
The strain tensor for plane strain is written as

ε = | ε₁₁ ε₁₂ 0 |
    | ε₂₁ ε₂₂ 0 |
    | 0   0   0 |

(the underline notation for a second-order tensor is replaced here by the matrix layout); this strain state is called plane strain. The corresponding stress tensor is

σ = | σ₁₁ σ₁₂ 0   |
    | σ₂₁ σ₂₂ 0   |
    | 0   0   σ₃₃ |

in which the non-zero σ₃₃ is needed to maintain the constraint ε₃₃ = 0. This stress term can be removed from the analysis to leave only the in-plane terms, reducing the problem to two dimensions. Antiplane strain is another state of strain that can occur in a body. For infinitesimal deformations, the scalar components of the rotation tensor ω satisfy the condition |ω_ij| ≪ 1. Note that the displacement gradient is small only if both the strain tensor and the rotation tensor are infinitesimal.
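The octahedral shear strain formula above is easy to evaluate in code. The sketch below implements γ_oct = (2/3)√((ε₁−ε₂)² + (ε₂−ε₃)² + (ε₃−ε₁)²) for three invented principal strains (an illustrative assumption).

```python
# A small sketch computing the octahedral shear strain
# gamma_oct = (2/3) * sqrt((e1-e2)^2 + (e2-e3)^2 + (e3-e1)^2)
# from the three principal strains. The values are illustrative.

import math

def octahedral_shear_strain(e1, e2, e3):
    """Engineering shear strain on the octahedral plane."""
    return (2.0 / 3.0) * math.sqrt(
        (e1 - e2) ** 2 + (e2 - e3) ** 2 + (e3 - e1) ** 2
    )

e1, e2, e3 = 0.004, 0.001, -0.002   # principal strains (assumed)
g = octahedral_shear_strain(e1, e2, e3)
print(round(g, 6))
```

Because the formula depends only on differences of principal strains, adding the same hydrostatic strain to all three leaves γ_oct unchanged, which is why it is useful as a distortion measure.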
12.
Elasticity (physics)
–
In physics, elasticity is the ability of a body to resist a distorting influence or deforming force and to return to its original size and shape when that influence or force is removed. Solid objects will deform when adequate forces are applied to them; if the material is elastic, the object will return to its initial shape and size when these forces are removed. The physical reasons for elastic behavior can be different for different materials. In metals, the atomic lattice changes size and shape when forces are applied; when forces are removed, the lattice goes back to its original lower-energy state. For rubbers and other polymers, elasticity is caused by the stretching of polymer chains when forces are applied. Perfect elasticity is an approximation of the real world: the most nearly elastic body found in modern science is quartz fibre, and even it is not a perfectly elastic body, so the perfectly elastic body is an ideal concept only. Most materials which possess elasticity in practice remain purely elastic only up to very small deformations. In engineering, the elasticity of a material is quantified by two types of material parameter. The first type is called a modulus, which measures the amount of force per unit area needed to achieve a given amount of deformation; the SI unit of modulus is the pascal, and a higher modulus typically indicates that the material is harder to deform. The second type measures the elastic limit, the maximum stress that can arise in a material before the onset of permanent deformation; its SI unit is also the pascal. When describing the relative elasticities of two materials, both the modulus and the elastic limit have to be considered. Rubbers typically have a low modulus and tend to stretch a lot; of two rubber materials with the same elastic limit, the one with a lower modulus will appear to be more elastic, though this everyday impression does not match the technical definition precisely.
When an elastic material is deformed due to a force, it experiences internal resistance to the deformation. The various moduli apply to different kinds of deformation: for instance, Young's modulus applies to extension/compression of a body, whereas the shear modulus applies to its shear. The elasticity of materials is described by a stress-strain curve, which shows the relation between stress and strain. The curve is generally nonlinear, but it can be approximated as linear for sufficiently small deformations. For higher stresses, materials exhibit plastic behavior, that is, they deform irreversibly. Elasticity is not exhibited only by solids; non-Newtonian fluids, such as viscoelastic fluids, will also exhibit elasticity in certain conditions. In response to a small, rapidly applied and removed strain, these fluids may deform and then return to their original shape. Under larger strains, or strains applied for longer periods of time, these fluids may start to flow like a viscous liquid. Because the elasticity of a material is described in terms of a stress-strain relation, it is essential that the terms stress and strain be defined without ambiguity.
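The point about modulus versus elastic limit can be made concrete with a small sketch (the moduli and limits below are illustrative placeholders, not measured properties): assuming linear behavior up to the elastic limit, the largest strain a material can sustain elastically is the elastic limit divided by the modulus, so of two materials with the same elastic limit, the lower-modulus one stretches further before permanent deformation.

```python
def max_elastic_strain(modulus_pa, elastic_limit_pa):
    """Largest strain reachable before permanent deformation,
    assuming linear (Hookean) behavior up to the elastic limit."""
    return elastic_limit_pa / modulus_pa

# Two hypothetical rubbers with the same elastic limit (1 MPa)
# but different moduli: the softer one stretches more.
soft = max_elastic_strain(0.5e6, 1.0e6)   # modulus 0.5 MPa
stiff = max_elastic_strain(2.0e6, 1.0e6)  # modulus 2 MPa
```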
13.
Linear elasticity
–
Linear elasticity is the mathematical study of how solid objects deform and become internally stressed under prescribed loading conditions. Linear elasticity models materials as continua; it is a simplification of the more general nonlinear theory of elasticity and a branch of continuum mechanics. The fundamental linearizing assumptions of linear elasticity are infinitesimal strains (small deformations) and linear relationships between the components of stress and strain; in addition, linear elasticity is valid only for stress states that do not produce yielding. These assumptions are reasonable for many engineering materials and engineering design scenarios, so linear elasticity is used extensively in structural analysis and engineering design, often with the aid of finite element analysis. The system of equations is completed by a set of linear algebraic constitutive relations. For elastic materials, Hooke's law represents the material behavior and relates the unknown stresses to the strains. Note: the Einstein summation convention of summing on repeated indices is used below. The equilibrium equations are 3 independent equations with 6 independent unknowns (the stresses). The strain-displacement equations, εij = ½(∂ui/∂xj + ∂uj/∂xi), where εij = εji is the strain, are 6 independent equations relating strains and displacements with 9 independent unknowns. The equation for Hooke's law is σij = Cijkl εkl, where Cijkl is the stiffness tensor; these are 6 independent equations relating stresses and strains. An elastostatic boundary value problem for an elastic medium is thus a system of 15 independent equations. Once the boundary conditions are specified, the boundary value problem is completely defined. To solve it, two approaches can be taken according to the boundary conditions of the boundary value problem: a displacement formulation and a stress formulation.
In isotropic media, the stiffness tensor gives the relationship between the stresses and the strains. For an isotropic medium the stiffness tensor has no preferred direction: an applied force will give the same displacements no matter the direction in which the force is applied. If the medium is also homogeneous, the elastic moduli are independent of position in the medium, and the constitutive equation may be written as σij = K δij εkk + 2μ(εij − ⅓ δij εkk), where K is the bulk modulus and μ is the shear modulus. This expression separates the stress into a scalar part, which may be associated with a scalar pressure, and a traceless shear part. A simpler expression is σij = λ δij εkk + 2μ εij, where λ is Lamé's first parameter. The inverse relation is εij = σij/(2μ) − (ν/E) δij σkk = ((1 + ν) σij − ν δij σkk)/E, where ν is Poisson's ratio and E is Young's modulus. Elastostatics is the study of linear elasticity under the conditions of equilibrium, in which all forces on the elastic body sum to zero.
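These two constitutive relations can be checked against each other in a short sketch (using the standard identities E = μ(3λ + 2μ)/(λ + μ) and ν = λ/(2(λ + μ)), which the excerpt assumes but does not state): applying σij = λ δij εkk + 2μ εij and then the inverse relation should recover the original strain.

```python
def stress_from_strain(eps, lam, mu):
    """sigma_ij = lam * delta_ij * eps_kk + 2*mu*eps_ij (isotropic)."""
    tr = eps[0][0] + eps[1][1] + eps[2][2]
    return [[lam * tr * (i == j) + 2 * mu * eps[i][j] for j in range(3)]
            for i in range(3)]

def strain_from_stress(sig, lam, mu):
    """Inverse relation eps_ij = (1+nu)/E*sig_ij - nu/E*delta_ij*sig_kk."""
    E = mu * (3 * lam + 2 * mu) / (lam + mu)   # Young's modulus
    nu = lam / (2 * (lam + mu))                # Poisson's ratio
    tr = sig[0][0] + sig[1][1] + sig[2][2]
    return [[(1 + nu) / E * sig[i][j] - nu / E * tr * (i == j)
             for j in range(3)] for i in range(3)]
```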
14.
Plasticity (physics)
–
In physics and materials science, plasticity describes the deformation of a material undergoing non-reversible changes of shape in response to applied forces. For example, a piece of metal being bent or pounded into a new shape displays plasticity, as permanent changes occur within the material itself. In engineering, the transition from elastic behavior to plastic behavior is called yield. Plastic deformation is observed in most materials, particularly metals, soils, rocks, concrete, foams, bone and skin; however, the mechanisms that cause plastic deformation can vary widely. At a crystalline scale, plasticity in metals is usually a consequence of dislocations. Such defects are relatively rare in most crystalline materials, but are numerous in some materials and part of their crystal structure; in such cases, plastic crystallinity can result. In brittle materials such as rock, concrete and bone, plasticity is caused predominantly by slip at microcracks. For many ductile metals, tensile loading applied to a sample will initially cause it to behave in an elastic manner: each increment of load is accompanied by a proportional increment in extension, and when the load is removed, the sample returns to its original size. However, once the load exceeds a threshold – the yield strength – the extension increases more rapidly than in the elastic region, and when the load is removed, some amount of extension remains. Elastic deformation, however, is an approximation, and its quality depends on the time frame considered and the loading speed. If, as indicated in the graph opposite, the deformation includes elastic deformation, it is also often referred to as elasto-plastic deformation or elastic-plastic deformation. Perfect plasticity is a property of materials to undergo irreversible deformation without any increase in stresses or loads; plastic materials with hardening necessitate increasingly higher stresses to result in further plastic deformation. Generally, plastic deformation is also dependent on the deformation speed.
Such materials are said to deform visco-plastically. The plasticity of a material is directly proportional to the ductility and malleability of the material. Plasticity in a crystal of pure metal is primarily caused by two modes of deformation in the crystal lattice: slip and twinning. Slip is a shear deformation which moves the atoms through many interatomic distances relative to their initial positions. Twinning is plastic deformation which takes place along two planes due to a set of forces applied to a given metal piece. Most metals show more plasticity when hot than when cold. Lead shows sufficient plasticity at room temperature, while cast iron does not possess sufficient plasticity for any forging operation even when hot. This property is of importance in forming, shaping and extruding operations on metals; most metals are rendered plastic by heating and hence shaped hot.
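The loading behavior described above can be sketched with a one-dimensional elastic-perfectly-plastic model (an idealization; the modulus and yield strength below are illustrative values): stress grows linearly with strain until the yield strength, then stays constant, and on unloading a permanent strain remains.

```python
def stress(strain, modulus, yield_stress):
    """1-D elastic-perfectly-plastic response under monotonic tension:
    linear up to yield, constant thereafter (no hardening)."""
    return min(modulus * strain, yield_stress)

def residual_strain(strain, modulus, yield_stress):
    """Permanent strain left after unloading elastically from `strain`."""
    s = stress(strain, modulus, yield_stress)
    return strain - s / modulus   # elastic recovery is s / modulus
```

A material with hardening would instead need the stress to keep rising beyond yield; this sketch models the perfectly plastic limiting case only.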
15.
Bending
–
In applied mechanics, bending characterizes the behavior of a slender structural element subjected to an external load applied perpendicularly to a longitudinal axis of the element. The structural element is assumed to be such that at least one of its dimensions is a small fraction, typically 1/10 or less, of the other two. When the length is considerably longer than the width and the thickness, the element is called a beam. For example, a closet rod sagging under the weight of clothes on clothes hangers is an example of a beam experiencing bending. On the other hand, a shell is a structure of any geometric form where the length and the width are of the same order of magnitude but the thickness is considerably smaller. A large-diameter, but thin-walled, short tube supported at its ends and loaded laterally is an example of a shell experiencing bending. In the absence of a qualifier, the term bending is ambiguous because bending can occur locally in all objects. Therefore, to make the usage of the term more precise, engineers refer to a specific object, such as the bending of rods, the bending of beams, or the bending of plates. A beam deforms and stresses develop inside it when a transverse load is applied on it. In the quasi-static case, the amount of bending deflection and the stresses that develop are assumed not to change over time. In a horizontal beam supported at the ends and loaded downwards in the middle, the material at the top of the beam is compressed while the material at the bottom is stretched. These last two forces form a couple or moment, as they are equal in magnitude and opposite in direction. This bending moment resists the sagging deformation characteristic of a beam experiencing bending. The stress distribution in a beam can be predicted quite accurately when some simplifying assumptions are used. In the Euler–Bernoulli theory of beams, a major assumption is that plane sections remain plane; in other words, any deformation due to shear across the section is not accounted for. The resulting stress varies linearly over the cross section, and this linear distribution is only applicable if the maximum stress is less than the yield stress of the material. For stresses that exceed yield, refer to the article on plastic bending. At yield, the maximum stress experienced in the section is defined as the flexural strength.
Simple beam bending is often analyzed with the Euler–Bernoulli beam equation. The conditions for using simple bending theory are: the beam is subject to pure bending, meaning that the shear force is zero and that no torsional or axial loads are present; the material is isotropic and homogeneous; the beam is initially straight with a cross section that is constant throughout the beam length; the beam has an axis of symmetry in the plane of bending; and the proportions of the beam are such that it would fail by bending rather than by crushing, wrinkling or sideways buckling.
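Under these assumptions, the peak stress follows from the flexure formula σ = M·c/I. A minimal sketch for a simply supported rectangular beam with a central point load (the inputs are illustrative; the mid-span moment M = FL/4 and the rectangle's second moment of area I = bh³/12 are standard results not stated in the excerpt):

```python
def max_bending_stress(force_n, length_m, width_m, height_m):
    """Peak stress in a simply supported rectangular beam with a
    point load at mid-span, per the Euler-Bernoulli flexure formula."""
    moment = force_n * length_m / 4.0         # max bending moment at mid-span
    inertia = width_m * height_m ** 3 / 12.0  # second moment of area
    c = height_m / 2.0                        # distance to the outer fibre
    return moment * c / inertia
```

Comparing the result with the material's yield stress is the check that the linear stress distribution assumed above still applies.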
16.
Hooke's law
–
Hooke's law is a principle of physics that states that the force needed to extend or compress a spring by some distance X is proportional to that distance. That is, F = kX, where k is a constant factor characteristic of the spring: its stiffness. The law is named after the 17th-century British physicist Robert Hooke. He first stated the law in 1676 as a Latin anagram, and he published the solution of his anagram in 1678 as ut tensio, sic vis ("as the extension, so the force"). Hooke states in the 1678 work that he was aware of the law already in 1660. An elastic body or material for which this equation can be assumed is said to be linear-elastic or Hookean. Hooke's law is only a linear approximation to the real response of springs, and many materials will deviate from Hooke's law well before their elastic limits are reached. On the other hand, Hooke's law is an accurate approximation for most solid bodies, as long as the forces and deformations are small enough. For this reason, Hooke's law is used extensively in all branches of science and engineering. It is also the principle behind the spring scale and the manometer. The modern theory of elasticity generalizes Hooke's law to say that the strain of an object or material is proportional to the stress applied to it. In this general form, Hooke's law makes it possible to deduce the relation between strain and stress for complex objects in terms of the properties of the materials they are made of. Consider a simple helical spring that has one end attached to some fixed object, while the free end is being pulled by a force. Suppose that the spring has reached a state of equilibrium, where its length is not changing anymore. Let X be the amount by which the free end of the spring was displaced from its relaxed position. Hooke's law states that F = kX or, equivalently, X = F/k, where k is a positive real number characteristic of the spring. Moreover, the same formula holds when the spring is compressed, with F and X both negative in that case. According to this formula, the graph of the applied force F as a function of the displacement X will be a straight line passing through the origin.
Hooke's law for a spring is often stated under the convention that F is the restoring force exerted by the spring on whatever is pulling its free end. In that case, the equation becomes F = −kX, since the direction of the restoring force is opposite to that of the displacement.
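Both sign conventions can be sketched directly (the stiffness k below is an illustrative value):

```python
def applied_force(k, x):
    """Force needed to hold the spring at displacement x (F = k x)."""
    return k * x

def spring_force(k, x):
    """Restoring force exerted *by* the spring on its load (F = -k x)."""
    return -k * x
```

The same formula covers compression: a negative displacement simply gives a negative applied force.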
17.
Material failure theory
–
Failure theory is the science of predicting the conditions under which solid materials fail under the action of external loads. The failure of a material is usually classified into brittle failure or ductile failure. Depending on the conditions most materials can fail in a brittle or ductile manner, or both; however, for most practical situations a material may be classified as either brittle or ductile. Failure theory has been in development for over 200 years. In mathematical terms, it is expressed in the form of various failure criteria which are valid for specific materials. Failure criteria are functions in stress or strain space which separate failed states from unfailed states. A precise physical definition of a failed state is not easily quantified, and several working definitions are in use in the engineering community. Quite often, phenomenological failure criteria of similar form are used to predict brittle failure. In materials science, material failure is the loss of load-carrying capacity of a material unit. This definition introduces the fact that failure can be examined on different scales, from microscopic to macroscopic. On the other hand, due to the lack of globally accepted fracture criteria, such methodologies are useful mainly for gaining insight into the cracking of specimens and simple structures under well-defined global load distributions. Microscopic failure considers the initiation and propagation of a crack; failure criteria in this case are related to microscopic fracture. Some of the most popular models in this area are the micromechanical failure models. One such model was proposed by Gurson and extended by Tvergaard; another approach, proposed by Rousselier, is based on continuum damage mechanics and thermodynamics. Both models form a modification of the von Mises yield potential by introducing a scalar quantity which represents the void volume fraction of cavities.
Macroscopic material failure is defined in terms of load-carrying capacity or energy storage capacity. Li presents a classification of macroscopic failure criteria in four categories: stress- or strain-based failure, energy-type failure, damage-based failure, and empirical failure. The material behavior at one level is considered as a collective of its behavior at a sub-level; an efficient deformation and failure model should be consistent at every level. The maximum stress criterion assumes that a material fails when the maximum principal stress σ1 in a material element exceeds the uniaxial tensile strength of the material. Alternatively, the material will fail if the minimum principal stress σ3 is less than the uniaxial compressive strength of the material. Numerous other phenomenological failure criteria can be found in the engineering literature; the degree of success of these criteria in predicting failure has been limited.
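The maximum stress criterion can be sketched as a small predicate (sign convention assumed here: tension positive, with the compressive strength entered as a positive magnitude; the strengths in the test values are illustrative):

```python
def fails_max_stress(s1, s3, tensile_strength, compressive_strength):
    """Maximum (principal) stress failure criterion.
    s1: largest principal stress, s3: smallest principal stress.
    Fails if s1 exceeds the uniaxial tensile strength, or s3 is more
    compressive than the uniaxial compressive strength."""
    return s1 > tensile_strength or s3 < -compressive_strength
```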
18.
Fracture mechanics
–
Fracture mechanics is the field of mechanics concerned with the study of the propagation of cracks in materials. It uses methods of solid mechanics to calculate the driving force on a crack. In modern materials science, fracture mechanics is an important tool used to improve the performance of mechanical components. Fractography is widely used with fracture mechanics to understand the causes of failures and also to verify theoretical failure predictions against real-life failures. The prediction of crack growth is at the heart of the damage tolerance mechanical design discipline. There are three ways of applying a force to enable a crack to propagate: Mode I fracture – opening mode; Mode II fracture – sliding mode; and Mode III fracture – tearing mode. The processes of material manufacture, processing, machining, and forming may introduce flaws in a finished mechanical component. Arising from the manufacturing process, interior and surface flaws are found in all metal structures. Not all such flaws are unstable under service conditions. Fracture mechanics is the analysis of flaws to discover those that are safe and those that are liable to propagate as cracks and so cause failure of the flawed structure. Despite these inherent flaws, it is possible to achieve, through damage tolerance analysis, the safe operation of a structure. Fracture mechanics as a subject for critical study has barely been around for a century and thus is relatively new. Fracture mechanics should attempt to provide answers to the following questions: What crack size can be tolerated under service loading, i.e. what is the maximum permissible crack size? How long does it take for a crack to grow from an initial size, for example the minimum detectable crack size, to the maximum permissible size? What is the life of a structure when a certain pre-existing flaw size is assumed to exist? And during the period available for crack detection, how often should the structure be inspected for cracks? Fracture mechanics was developed during World War I by the English aeronautical engineer A. A.
Griffith, to explain the failure of brittle materials. Griffith's work was motivated by two facts: the stress needed to fracture bulk glass is around 100 MPa, while the theoretical stress needed for breaking the atomic bonds of glass is approximately 10,000 MPa. A theory was needed to reconcile these conflicting observations. Also, experiments on glass fibers that Griffith himself conducted suggested that the fracture stress increases as the fiber diameter decreases. Hence the uniaxial tensile strength, which had been used extensively to predict material failure before Griffith, could not be a specimen-independent material property. Griffith suggested that the low fracture strength observed in experiments, as well as the size-dependence of strength, was due to the presence of microscopic flaws in the bulk material. To verify the flaw hypothesis, Griffith introduced an artificial flaw in his experimental glass specimens.
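Griffith's resolution ties fracture strength to flaw size: for a through-crack of half-length a in a wide, ideally brittle plate, σ_f = √(2Eγ_s/(πa)), where γ_s is the surface energy per unit area. This is the standard Griffith result, not given explicitly in the excerpt, and the numbers below are illustrative; the sketch shows that strength falls as the flaw grows, consistent with the fibre-diameter observation.

```python
import math

def griffith_strength(youngs_modulus, surface_energy, half_crack_length):
    """Griffith fracture stress for a through-crack of half-length a
    in an ideally brittle plate: sigma_f = sqrt(2*E*gamma_s / (pi*a))."""
    return math.sqrt(2 * youngs_modulus * surface_energy
                     / (math.pi * half_crack_length))
```

Because σ_f scales as 1/√a, quadrupling the flaw size halves the predicted strength.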
19.
Contact mechanics
–
Contact mechanics is the study of the deformation of solids that touch each other at one or more points. Central aspects in contact mechanics are the pressures and adhesion acting perpendicular to the bodies' surfaces. This page focuses mainly on the normal direction, i.e. on frictionless contact mechanics; frictional contact mechanics is discussed separately. Current challenges faced in the field may include stress analysis of contact and coupling members and the influence of lubrication and material design on friction and wear. Applications of contact mechanics further extend into the micro- and nanotechnological realm. The original work in contact mechanics dates back to 1882 with the publication of the paper "On the contact of elastic solids" by Heinrich Hertz. Hertz was attempting to understand how the optical properties of multiple stacked lenses might change with the force holding them together. Hertzian contact stress refers to the stresses that develop as two curved surfaces come into contact and deform slightly under the imposed loads. This amount of deformation is dependent on the moduli of elasticity of the materials in contact, and the theory gives the contact stress as a function of the normal contact force, the radii of curvature of both bodies and the moduli of elasticity of both bodies. Hertzian contact stress forms the foundation for the equations for load-bearing capabilities and fatigue life in bearings and gears. Classical contact mechanics is most notably associated with Heinrich Hertz: in 1882, Hertz solved the contact problem of two elastic bodies with curved surfaces. This still-relevant classical solution provides a foundation for modern problems in contact mechanics; for example, in mechanical engineering and tribology, Hertzian contact stress is a description of the stress within mating parts. The Hertzian contact stress refers to the stress close to the area of contact between two spheres of different radii. It was not until nearly one hundred years later that Johnson, Kendall and Roberts found a similar solution for the case of adhesive contact.
This theory was rejected by Boris Derjaguin and co-workers, who proposed a different theory of adhesion in the 1970s. The Derjaguin model came to be known as the DMT model, and the Johnson et al. model came to be known as the JKR model for adhesive elastic contact. This rejection proved to be instrumental in the development of the Tabor and, later, Maugis parameters, which quantify which of the two contact models better represents adhesive contact for a given material. Further advancement in the field of contact mechanics in the mid-twentieth century may be attributed to names such as Bowden and Tabor. Bowden and Tabor were the first to emphasize the importance of surface roughness for bodies in contact; through investigation of the surface roughness, the true contact area between friction partners is found to be less than the apparent contact area. Such understanding also drastically changed the direction of undertakings in tribology. The works of Bowden and Tabor yielded several theories in contact mechanics of rough surfaces. The contributions of Archard must also be mentioned in discussion of pioneering works in this field; Archard concluded that, even for rough elastic surfaces, the contact area is approximately proportional to the normal force.
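The Hertzian solution for two elastic spheres can be sketched as follows (these are the standard Hertz formulas for the effective radius R, the contact modulus E*, the contact radius a = (3FR/4E*)^(1/3), and the peak pressure p0 = 3F/(2πa²); the inputs in the test are illustrative steel-like values):

```python
import math

def hertz_sphere_contact(force, r1, r2, e1, nu1, e2, nu2):
    """Frictionless Hertz contact of two elastic spheres pressed
    together by `force`. Returns (contact radius a, peak pressure p0)."""
    r_eff = 1.0 / (1.0 / r1 + 1.0 / r2)                        # effective radius
    e_star = 1.0 / ((1 - nu1 ** 2) / e1 + (1 - nu2 ** 2) / e2)  # contact modulus
    a = (3.0 * force * r_eff / (4.0 * e_star)) ** (1.0 / 3.0)  # contact radius
    p0 = 3.0 * force / (2.0 * math.pi * a ** 2)                # peak pressure
    return a, p0
```

Note the characteristic nonlinearity: the contact radius grows only as the cube root of the load.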
20.
Frictional contact mechanics
–
Contact mechanics is the study of the deformation of solids that touch each other at one or more points. The forces at the contact interface can be divided into compressive and adhesive forces in the direction perpendicular to the interface, and frictional forces in the tangential direction. Frictional contact mechanics is the study of the deformation of bodies in the presence of frictional effects, and it is concerned with a large range of different scales. At the macroscopic scale, it is applied for the investigation of the motion of contacting bodies; for instance, the bouncing of a rubber ball on a surface depends on the frictional interaction at the contact interface. Here the total force versus indentation and lateral displacement are of main concern. At the intermediate scale, one is interested in the local stresses, strains and deformations of the contacting bodies in and near the contact area, for instance to derive or validate contact models at the macroscopic scale, or to investigate wear. Application areas of this scale are tire-pavement interaction, railway wheel-rail interaction, and roller bearing analysis. Several famous scientists, engineers and mathematicians contributed to our understanding of friction; they include Leonardo da Vinci, Guillaume Amontons, John Theophilus Desaguliers, Leonhard Euler, and Charles-Augustin de Coulomb. Later, Nikolai Pavlovich Petrov, Osborne Reynolds and Richard Stribeck supplemented this understanding with theories of lubrication. Deformation of solid materials was investigated in the 17th and 18th centuries by Robert Hooke and Joseph Louis Lagrange, and in the 19th and 20th centuries by d'Alembert and Timoshenko. With respect to contact mechanics the classical contribution by Heinrich Hertz stands out; further, the fundamental solutions by Boussinesq and Cerruti are of primary importance for the investigation of frictional contact problems in the elastic regime. Classical results for a true frictional contact problem concern the papers by F. W.
Carter and H. Fromm, who independently presented the creep versus creep-force relation for a cylinder on a plane, or for two cylinders, in steady rolling contact using Coulomb's dry friction law. These results are applied to railway locomotive traction and to understanding the hunting oscillation of railway vehicles. With respect to sliding, the classical solutions are due to C. Cattaneo and R. D. Mindlin, who considered the tangential shifting of a sphere on a plane. In the 1950s interest in the rolling contact of railway wheels grew. Johnson presented an approximate approach for the 3D frictional problem with Hertzian geometry. Among other things he found that spin creepage, which is symmetric about the center of the contact patch, nevertheless produces a net lateral force in rolling conditions; this is due to the fore-aft differences in the distribution of tractions in the contact patch. In 1967 Joost Kalker published his milestone PhD thesis on the linear theory for rolling contact. This theory is exact for the situation of an infinite friction coefficient, in which case the slip area vanishes. It does assume Coulomb's friction law, which more or less requires clean surfaces, and this theory is for massive bodies such as the railway wheel-rail contact.
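Coulomb's dry friction law, which underlies these rolling-contact theories, can be sketched as a stick-slip check (μ and the forces below are illustrative): the interface transmits a demanded tangential force only up to the bound μN, beyond which it slips and the transmitted force saturates.

```python
def tangential_response(required_force, normal_force, mu):
    """Coulomb friction: the interface can transmit at most mu*N.
    Returns (transmitted force, sticks): sticks is True when the demanded
    tangential force stays within the friction bound."""
    bound = mu * normal_force
    if abs(required_force) <= bound:
        return required_force, True          # stick: full force transmitted
    sign = 1.0 if required_force > 0 else -1.0
    return sign * bound, False               # slip: saturated at mu*N
```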
21.
Fluid mechanics
–
Fluid mechanics is a branch of physics concerned with the mechanics of fluids and the forces on them. Fluid mechanics has a wide range of applications, including mechanical engineering, civil engineering, chemical engineering, geophysics and astrophysics. Fluid mechanics can be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Fluid mechanics, especially fluid dynamics, is an active field of research with many problems that are partly or wholly unsolved. Fluid mechanics problems can be mathematically complex and can often best be solved by numerical methods. A modern discipline, called computational fluid dynamics, is devoted to this approach to solving fluid mechanics problems. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of fluid flow. Inviscid flow was further analyzed by various mathematicians, and viscous flow was explored by a multitude of engineers including Jean Léonard Marie Poiseuille. Fluid statics or hydrostatics is the branch of fluid mechanics that studies fluids at rest. It embraces the study of the conditions under which fluids are at rest in stable equilibrium, and is contrasted with fluid dynamics. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to some aspects of geophysics and astrophysics, to meteorology, and to medicine. Fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow, the science of liquids and gases in motion. The solution to a fluid dynamics problem typically involves calculating various properties of the fluid, such as velocity, pressure, density and temperature. Fluid dynamics has several subdisciplines itself, including aerodynamics and hydrodynamics. Some fluid-dynamical principles are used in traffic engineering and crowd dynamics.
Fluid mechanics is a subdiscipline of continuum mechanics, as illustrated in the following table. In a mechanical view, a fluid is a substance that does not support shear stress; that is why a fluid at rest has the shape of its containing vessel. A fluid at rest has no shear stress. The assumptions inherent to a fluid mechanical treatment of a physical system can be expressed in terms of mathematical equations; each conservation law can be expressed as an equation in integral form over a control volume. The continuum assumption is an idealization of continuum mechanics under which fluids can be treated as continuous, even though, on a microscopic scale, they are composed of molecules. Under this assumption, fluid properties can vary continuously from one element to another and are average values of the molecular properties. The continuum hypothesis can lead to inaccurate results in applications like supersonic flows or molecular flows on the nanoscale. Those problems for which the continuum hypothesis fails can be solved using statistical mechanics. To determine whether or not the continuum hypothesis applies, the Knudsen number, defined as the ratio of the molecular mean free path to the characteristic length scale, is evaluated. Problems with Knudsen numbers below 0.1 can be evaluated using the continuum hypothesis. The Navier–Stokes equations are differential equations that describe the force balance at a given point within a fluid.
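The continuum check described here is a one-liner (the 0.1 threshold is the one stated above; the mean free path used in the test, about 68 nm, is an illustrative figure for air at ambient conditions):

```python
def knudsen_number(mean_free_path, characteristic_length):
    """Kn = molecular mean free path / characteristic length scale."""
    return mean_free_path / characteristic_length

def continuum_ok(mean_free_path, characteristic_length, threshold=0.1):
    """The continuum hypothesis is considered valid for Kn below ~0.1."""
    return knudsen_number(mean_free_path, characteristic_length) < threshold
```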
22.
Fluid
–
In physics, a fluid is a substance that continually deforms under an applied shear stress. Fluids are a subset of the phases of matter and include liquids, gases and plasmas. Fluids are substances that have zero shear modulus or, in simpler terms, substances which cannot resist any shear force applied to them. Although the term includes both the liquid and gas phases, in common usage "fluid" is often used as a synonym for "liquid". For example, brake fluid is hydraulic oil and will not perform its required incompressible function if there is gas in it; this colloquial usage of the term is also common in medicine and in nutrition. Liquids form a free surface while gases do not. The distinction between solids and fluids is not entirely obvious; the distinction is made by evaluating the viscosity of the substance. Silly Putty can be considered to behave like a solid or a fluid, depending on the time scale over which it is observed; it is best described as a viscoelastic fluid. There are many examples of substances proving difficult to classify; a particularly interesting one is pitch, as demonstrated in the pitch drop experiment currently running at the University of Queensland. Fluids display properties such as not resisting deformation, or resisting it only slightly, and these properties are typically a function of their inability to support a shear stress in static equilibrium. Solids can be subjected to shear stresses, and to normal stresses, both compressive and tensile. In contrast, ideal fluids can only be subjected to normal, compressive stress, called pressure; real fluids display viscosity and so are capable of being subjected to low levels of shear stress. In a solid, shear stress is a function of strain, while in a fluid, shear stress is a function of strain rate. A consequence of this behavior is Pascal's law, which describes the role of pressure in characterizing a fluid's state. The study of fluids is fluid mechanics, which is subdivided into fluid dynamics and fluid statics, depending on whether the fluid is in motion.
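The solid/fluid distinction above (shear stress proportional to strain in a solid, to strain rate in a fluid) can be sketched for a Newtonian fluid, where τ = μ·du/dy (the viscosity and velocity gradient below are illustrative values):

```python
def newtonian_shear_stress(viscosity, velocity_gradient):
    """Shear stress in a Newtonian fluid: tau = mu * du/dy.
    The stress vanishes when the fluid is at rest (zero strain rate),
    however large the accumulated deformation is."""
    return viscosity * velocity_gradient
```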
23.
Fluid statics
–
Fluid statics or hydrostatics is the branch of fluid mechanics that studies incompressible fluids at rest. It encompasses the study of the conditions under which fluids are at rest in stable equilibrium, as opposed to fluid dynamics, the study of fluids in motion. Hydrostatics is categorized as a part of fluid statics, which is the study of all fluids, incompressible or not, at rest. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to geophysics and astrophysics, to meteorology, to medicine, and to many other fields. Some principles of hydrostatics have been known in an empirical and intuitive sense since antiquity by the builders of boats, cisterns, aqueducts and fountains. Archimedes is credited with the discovery of Archimedes' principle, which relates the buoyancy force on an object that is submerged in a fluid to the weight of fluid displaced by the object. The fair cup or Pythagorean cup, which dates from about the 6th century BC, is a hydraulic technology whose invention is credited to the Greek mathematician Pythagoras. It was used as a learning tool. The cup consists of a line carved into the interior of the cup and a small vertical pipe in the center of the cup that leads to the bottom. The height of this pipe is the same as the line carved into the interior of the cup. The cup may be filled to the line without any fluid passing into the pipe in the center of the cup. However, when the amount of fluid exceeds this fill line, fluid will flow into the pipe and, due to the drag that molecules exert on one another, the cup will be emptied. Heron's fountain is a device invented by Heron of Alexandria that consists of a jet of fluid being fed by a reservoir of fluid. The fountain is constructed in such a way that the height of the jet exceeds the height of the fluid in the reservoir. The device consisted of an opening and two containers arranged one above the other.
The intermediate pot, which was sealed, was filled with fluid; trapped air inside the vessels induces a jet of water out of a nozzle, emptying all water from the intermediate reservoir. Pascal made contributions to developments in both hydrostatics and hydrodynamics. Due to the fundamental nature of fluids, a fluid cannot remain at rest under the presence of a shear stress. However, fluids can exert pressure normal to any contacting surface. If a point in the fluid is thought of as an infinitesimally small cube, then it follows from the principles of equilibrium that the pressure on every side of this unit of fluid must be equal. If this were not the case, the fluid would move in the direction of the resulting force. Thus, the pressure on a fluid at rest is isotropic, i.e. it acts with equal magnitude in all directions. This characteristic allows fluids to transmit force through the length of pipes or tubes: a force applied to a fluid in a pipe is transmitted, via the fluid, to the other end of the pipe. This principle was first formulated, in a slightly extended form, by Blaise Pascal. In a fluid at rest, all frictional and inertial stresses vanish. When this condition of V = 0 is applied to the Navier–Stokes equation, the gradient of pressure becomes a function of body forces only.
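The resulting force balance gives the familiar hydrostatic pressure relation p = p0 + ρgh (a standard result; the density of water, g, and the surface pressure used as defaults below are the usual illustrative values):

```python
def hydrostatic_pressure(depth_m, density=1000.0, g=9.81,
                         surface_pressure=101325.0):
    """Absolute pressure at a given depth in a fluid at rest:
    p = p0 + rho * g * h (isotropic, so direction-independent)."""
    return surface_pressure + density * g * depth_m
```

Because the pressure at a point is isotropic, the same value applies on every face of the small cube considered above.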
24.
Fluid dynamics
–
In physics and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids. It has several subdisciplines, including aerodynamics and hydrodynamics. Before the twentieth century, hydrodynamics was synonymous with fluid dynamics; this is still reflected in the names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability. The foundational axioms of fluid dynamics are the conservation laws, specifically conservation of mass, conservation of linear momentum, and conservation of energy. These are based on classical mechanics and are modified in quantum mechanics. They are expressed using the Reynolds transport theorem. In addition to the above, fluids are assumed to obey the continuum assumption. Fluids are composed of molecules that collide with one another and with solid objects; however, the continuum assumption treats fluids as continuous rather than discrete, and the fact that the fluid is made up of molecules is ignored. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in a number of ways, all of which make them easier to solve, and some of the simplifications allow some simple fluid dynamics problems to be solved in closed form. Three conservation laws are used to solve fluid dynamics problems; the conservation laws may be applied to a region of the flow called a control volume. A control volume is a discrete volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. Mass continuity (conservation of mass): the rate of change of fluid mass inside a control volume must be equal to the net rate of fluid flow into the volume. Mass flow into the system is accounted as positive, and since the normal vector to the surface is opposite the sense of flow into the system, the term is negated.
The first term on the right is the net rate at which momentum is convected into the volume; the second term on the right is the force due to pressure on the volume's surfaces. The first two terms on the right are negated since momentum entering the system is accounted as positive. The third term on the right is the net acceleration of the mass within the volume due to any body forces. Surface forces, such as viscous forces, are represented by F surf
25.
Archimedes' principle
–
Archimedes' principle is a law of physics fundamental to fluid mechanics. It was formulated by Archimedes of Syracuse in On Floating Bodies. Practically, Archimedes' principle allows the buoyancy of an object partially or fully immersed in a liquid to be calculated. The downward force on the object is simply its weight; the upward, or buoyant, force on the object is that stated by Archimedes' principle, above. Thus, the net force on the object is the difference between the buoyant force and its weight. If this net force is positive, the object rises; if negative, the object sinks; and if zero, the object is neutrally buoyant. Consider a cube immersed in a fluid with its sides parallel to the direction of gravity. The fluid will exert a normal force on each face, but the forces on the vertical sides cancel, so only the forces on the top and bottom faces contribute to buoyancy. The pressure difference between the bottom and the top face is directly proportional to the height; multiplying the pressure difference by the area of a face gives the net force on the cube – the buoyancy, equal to the weight of the fluid displaced. By extending this reasoning to irregular shapes, we can see that, whatever the shape of the submerged body, the buoyant force is equal to the weight of the fluid displaced. Apparent loss in weight in water = weight of object in air − weight of object in water. The weight of the displaced fluid is directly proportional to its volume. The weight of the object in the fluid is reduced because of the buoyant force acting on it. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy. Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting on it. Suppose that, when the rock is lowered into water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyant force: 10 − 3 = 7 newtons.
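The rock example above is simple enough to check directly in code; this sketch uses exactly the numbers from the text:

```python
# Archimedes' principle applied to the rock example in the text:
# apparent (immersed) weight = weight in air − weight of displaced fluid.

def apparent_weight(weight_in_air, weight_displaced_fluid):
    """Tension in the string when the object hangs fully immersed."""
    return weight_in_air - weight_displaced_fluid

# 10 N rock displacing 3 N of water hangs with 7 N of tension
print(apparent_weight(10.0, 3.0))  # 7.0
```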
Buoyancy reduces the apparent weight of objects that have sunk completely to the sea floor, and it is generally easier to lift an object up through the water than it is to pull it out of the water. Example: if you drop wood into water, buoyancy will keep it afloat. Example: a helium balloon in a moving car. When increasing speed or driving in a curve, the air moves in the direction opposite to the car's acceleration. However, due to buoyancy, the balloon is pushed out of the way by the air and will actually drift in the same direction as the car's acceleration. When an object is immersed in a liquid, the liquid exerts an upward force, which is known as the buoyant force. The net force acting on the object, then, is equal to the difference between the weight of the object and the weight of displaced liquid; equilibrium, or neutral buoyancy, is achieved when these two weights are equal
26.
Bernoulli's principle
–
In fluid dynamics, Bernoulli's principle states that an increase in the speed of a fluid occurs simultaneously with a decrease in pressure or a decrease in the fluid's potential energy. The principle is named after Daniel Bernoulli, who published it in his book Hydrodynamica in 1738. Bernoulli's principle can be applied to various types of fluid flow, resulting in various forms of Bernoulli's equation; there are different forms of Bernoulli's equation for different types of flow. The simple form of Bernoulli's equation is valid for incompressible flows; more advanced forms may be applied to compressible flows at higher Mach numbers. Bernoulli's principle can be derived from the principle of conservation of energy. This states that, in a steady flow, the sum of all forms of energy in a fluid along a streamline is the same at all points on that streamline. This requires that the sum of kinetic energy, potential energy and internal energy remains constant. If the fluid is flowing out of a reservoir, the sum of all forms of energy is the same on all streamlines because in a reservoir the energy per unit volume is the same everywhere. Bernoulli's principle can also be derived directly from Isaac Newton's Second Law of Motion: if a small volume of fluid is flowing horizontally from a region of high pressure to a region of low pressure, then there is more pressure behind than in front. This gives a net force on the volume, accelerating it along the streamline. Fluid particles are subject only to pressure and their own weight. Consequently, within a fluid flowing horizontally, the highest speed occurs where the pressure is lowest, and the lowest speed occurs where the pressure is highest. Most flows of liquids, and of gases at low Mach number, can be treated as incompressible; these flows are called incompressible flows. Bernoulli performed his experiments on liquids, so his equation in its original form is valid only for incompressible flow.
The constant on the right-hand side of the equation depends only on the streamline chosen. For conservative force fields, Bernoulli's equation can be generalized using a force potential Ψ; e.g., for the Earth's gravity, Ψ = gz. The constant in the Bernoulli equation can be normalised. Most gases and liquids are not capable of negative absolute pressure, or even zero pressure, so clearly Bernoulli's equation ceases to be valid before zero pressure is reached. In liquids – when the pressure becomes too low – cavitation occurs. The above equations use a linear relationship between flow speed squared and pressure; at higher flow speeds in gases, or for sound waves in liquid, the changes in mass density become significant. In many applications of Bernoulli's equation, the change in the ρgz term along the streamline is so small compared with the other terms that it can be ignored. For example, in the case of aircraft in flight, the change in height z along a streamline is so small the ρgz term can be omitted. This allows the equation to be presented in the simplified form p + q = p0, where q is the dynamic pressure and p0 is called total pressure
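The simplified form p + q = p0 can be sketched numerically: given a total pressure and a flow speed, the static pressure follows once the dynamic pressure q = ½ρv² is subtracted. The air density and speed below are illustrative assumptions, not values from the text:

```python
# Simplified Bernoulli relation for horizontal incompressible flow:
# p + q = p0, with dynamic pressure q = 0.5 * rho * v**2.
# The density and speed below are illustrative assumptions.

def static_pressure(p_total, rho, v):
    """Static pressure when total pressure, density and flow speed are known."""
    q = 0.5 * rho * v ** 2          # dynamic pressure
    return p_total - q

rho_air = 1.225                     # kg/m^3, sea-level air
p0 = 101_325.0                      # Pa, total pressure
p_static = static_pressure(p0, rho_air, 50.0)   # 50 m/s airflow
print(f"Static pressure: {p_static:.2f} Pa")
```

This is the same bookkeeping a pitot-static probe performs: it measures p0 and p separately and recovers the speed from their difference.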
27.
Pascal's law
–
The law was established by French mathematician Blaise Pascal in 1647–48. The intuitive explanation of this formula is that the change in pressure between two elevations is due to the weight of the fluid between the elevations. A more correct interpretation, though, is that the change is caused by the change of potential energy per unit volume of the liquid due to the existence of the gravitational field. Note that the variation with height does not depend on any additional pressures; therefore, Pascal's law can be interpreted as saying that any change in pressure applied at any given point of the fluid is transmitted undiminished throughout the fluid. If a U-tube is filled with water and pistons are placed at each end, pressure exerted against the left piston will be transmitted throughout the liquid. The pressure that the left piston exerts against the water will be equal to the pressure the water exerts against the right piston. Suppose the tube on the right side is made wider and a piston of a larger area is used; for example, a piston with fifty times the area of the left one. If a 1 N load is placed on the left piston, the difference between force and pressure becomes important: the additional pressure is exerted against the entire area of the larger piston. Since there is 50 times the area, 50 times as much force is exerted on the larger piston. Thus, the larger piston will support a 50 N load – fifty times the load on the smaller piston. Forces can be multiplied using such a device: one newton input produces 50 newtons output. By further increasing the area of the larger piston, forces can be multiplied, in principle, by any amount. Pascal's principle underlies the operation of the hydraulic press. The hydraulic press does not violate energy conservation, because a decrease in distance moved compensates for the increase in force. When the small piston is moved downward 100 centimetres, the large piston will be raised only one-fiftieth of this.
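The 1 N → 50 N example above, together with the inverse relationship between force and stroke, can be sketched directly; the numbers are exactly those used in the text:

```python
# Hydraulic press from the text: equal pressure on both pistons, so
# F_out = F_in * (A_out / A_in), while the output stroke shrinks by the
# same factor. Work in therefore equals work out (no energy is created).

def output_force(f_in, area_ratio):
    """Force on the large piston given the input force and area ratio."""
    return f_in * area_ratio

def output_stroke(stroke_in, area_ratio):
    """Distance the large piston rises for a given input stroke."""
    return stroke_in / area_ratio

f_out = output_force(1.0, 50.0)      # 1 N input supports a 50 N load
d_out = output_stroke(100.0, 50.0)   # 100 cm input stroke raises it 2 cm
print(f_out * d_out == 1.0 * 100.0)  # True: work is conserved
```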
Pascal's principle applies to all fluids, whether gases or liquids. A typical application of Pascal's principle for gases and liquids is the automobile lift seen in many service stations. Increased air pressure produced by an air compressor is transmitted through the air to the surface of oil in an underground reservoir; the oil, in turn, transmits the pressure to a piston, which lifts the automobile. The relatively low pressure that exerts the lifting force against the piston is about the same as the air pressure in automobile tires. Hydraulics is employed by modern devices ranging from small to enormous. For example, there are hydraulic pistons in almost all construction machines where heavy loads are involved. Pascal's barrel is the name of a hydrostatics experiment allegedly performed by Blaise Pascal in 1646
28.
Viscosity
–
The viscosity of a fluid is a measure of its resistance to gradual deformation by shear stress or tensile stress. For liquids, it corresponds to the informal concept of thickness; for example, honey has a much higher viscosity than water. Viscosity is a property of the fluid which opposes the relative motion between two surfaces of the fluid that are moving at different velocities. For a given velocity pattern, the stress required is proportional to the fluid's viscosity. A fluid that has no resistance to shear stress is known as an ideal or inviscid fluid. Zero viscosity is observed only at very low temperatures in superfluids. Otherwise, all fluids have positive viscosity and are said to be viscous or viscid. A fluid with a high viscosity, such as pitch, may appear to be a solid. The word viscosity is derived from the Latin viscum, meaning mistletoe. The dynamic viscosity of a fluid expresses its resistance to shearing flows, where adjacent layers move parallel to each other with different speeds. It can be defined through the idealized situation known as Couette flow, in which a fluid is trapped between a stationary bottom plate and a top plate moving at speed u. If the speed of the top plate is small enough, the fluid particles will move parallel to it, and their speed will vary linearly from zero at the bottom to u at the top. Each layer of fluid will move faster than the one just below it; in particular, the fluid will apply on the top plate a force in the direction opposite to its motion, and an equal but opposite one to the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed. The magnitude F of this force is found to be proportional to the speed u and the area A of each plate, and inversely proportional to their separation y. The proportionality factor μ in this formula is the viscosity of the fluid; the ratio u/y is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction perpendicular to the plates. Isaac Newton expressed the viscous forces by the differential equation τ = μ ∂u/∂y, where τ = F/A.
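Rearranging τ = μ ∂u/∂y for the linear Couette profile gives the drag force F = μ u A / y on the moving plate. A minimal sketch; the plate geometry is an assumption, and the viscosity of water at 20 °C is taken as approximately 1.0 mPa·s for illustration:

```python
# Couette-flow definition of dynamic viscosity from the text:
# tau = mu * du/dy, so for a linear velocity profile the force on a
# plate of area A moving at speed u over a gap y is F = mu * u * A / y.
# Geometry and fluid values below are illustrative assumptions.

def plate_force(mu, u, area, gap):
    """Force needed to drag the top plate at constant speed u."""
    return mu * u * area / gap

mu_water = 1.0e-3   # Pa·s, water near 20 °C (approximate)
F = plate_force(mu_water, 0.5, 0.1, 1.0e-3)  # 0.5 m/s, 0.1 m^2 plate, 1 mm gap
print(f"Drag force: {F:.3f} N")
```

Doubling the speed or halving the gap doubles the force, which is the linearity that defines a Newtonian fluid.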
This formula assumes that the flow is moving along parallel lines; the more general differential equation can be used where the velocity does not vary linearly with y, such as in fluid flowing through a pipe. Use of the Greek letter mu (μ) for the dynamic viscosity is common among mechanical and chemical engineers; however, the Greek letter eta (η) is used by chemists and physicists
29.
Newtonian fluid
–
That is equivalent to saying that those forces are proportional to the rates of change of the fluid's velocity vector as one moves away from the point in question in various directions. Newtonian fluids are the simplest mathematical models of fluids that account for viscosity. While no real fluid fits the definition perfectly, many common liquids and gases, such as water and air, can be assumed to be Newtonian for practical calculations under ordinary conditions. However, non-Newtonian fluids are relatively common and include oobleck; other examples include many polymer solutions, molten polymers, many solid suspensions, blood, and most highly viscous fluids. Newtonian fluids are named after Isaac Newton, who first postulated the relation between the strain rate and shear stress for such fluids in differential form. An element of a liquid or gas will suffer forces from the surrounding fluid. These forces can be approximated to first order by a viscous stress tensor; the deformation of that element, relative to some previous state, can be approximated to first order by a strain tensor. The tensors τ and ∇v can be expressed as 3×3 matrices. One also defines a total stress tensor σ that combines the shear stress with conventional pressure p. The diagonal components of the viscosity tensor represent the molecular viscosity of a liquid, and the off-diagonal components the turbulent eddy viscosity
30.
Non-Newtonian fluid
–
A non-Newtonian fluid is a fluid that does not follow Newton's law of viscosity. Most commonly, the viscosity of non-Newtonian fluids is dependent on shear rate or shear rate history. Some non-Newtonian fluids with shear-independent viscosity, however, still exhibit normal stress differences or other non-Newtonian behavior. Many salt solutions and molten polymers are non-Newtonian fluids, as are commonly found substances such as ketchup, custard, toothpaste, starch suspensions, maizena, paint, and blood. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different and can even be time-dependent. Therefore, a constant coefficient of viscosity cannot be defined. Although the concept of viscosity is commonly used in fluid mechanics to characterize the shear properties of a fluid, it can be inadequate to describe non-Newtonian fluids; their properties are better studied using tensor-valued constitutive equations, which are common in the field of continuum mechanics. The viscosity of a shear thickening fluid, or dilatant fluid, appears to increase when the shear rate increases. Corn starch dissolved in water is a common example: when stirred slowly it looks milky, when stirred vigorously it feels like a very viscous liquid. Note that all thixotropic fluids are extremely shear thinning, but they are significantly time dependent. Thus, to avoid confusion, the shear-thinning classification is more clearly termed pseudoplastic. Another example of a shear thinning fluid is blood. This property is highly favourable within the body, as it allows the viscosity of blood to decrease with increased shear strain rate. Fluids that have a linear shear stress/shear strain relationship but require a finite yield stress before they begin to flow are called Bingham plastics.
Several examples are clay suspensions, drilling mud, toothpaste, mayonnaise, and chocolate; the surface of a Bingham plastic can hold peaks when it is still. By contrast, Newtonian fluids have flat, featureless surfaces when still. There are also fluids whose strain rate is a function of time. Fluids that require a gradually increasing shear stress to maintain a constant strain rate are referred to as rheopectic; the opposite case is a fluid that thins out with time and requires a decreasing stress to maintain a constant strain rate. Many common substances exhibit non-Newtonian flows; an uncooked cornflour suspension has the same properties as the corn starch mixture described above. The name oobleck is derived from the Dr. Seuss book Bartholomew and the Oobleck. Because of its properties, oobleck is often used in demonstrations that exhibit its unusual behavior
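The behaviors described above are commonly modelled with simple constitutive laws; the following sketch uses the standard power-law and Bingham-plastic forms, which are not stated in the text but are the usual first approximations, with illustrative parameter values:

```python
# Two standard non-Newtonian constitutive sketches (parameters assumed):
#   power law:  tau = K * gamma_dot**n   (n < 1 shear thinning / pseudoplastic,
#                                         n > 1 shear thickening / dilatant)
#   Bingham:    no flow until tau exceeds the yield stress tau_y, then
#               gamma_dot = (tau - tau_y) / mu_p

def power_law_stress(K, n, gamma_dot):
    """Shear stress of a power-law fluid at shear rate gamma_dot."""
    return K * gamma_dot ** n

def bingham_rate(tau, tau_y, mu_p):
    """Shear rate of a Bingham plastic under stress tau (zero below yield)."""
    return 0.0 if tau <= tau_y else (tau - tau_y) / mu_p

print(power_law_stress(1.0, 0.5, 100.0))  # shear thinning: stress grows slowly
print(bingham_rate(5.0, 10.0, 0.1))       # below the yield stress: no flow
```

The Bingham branch captures why toothpaste holds a peak: below the yield stress the shear rate is exactly zero, so the material behaves like a solid.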
31.
Buoyancy
–
In science, buoyancy or upthrust is an upward force exerted by a fluid that opposes the weight of an immersed object. In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid; thus the pressure at the bottom of a column of fluid is greater than at the top of the column. Similarly, the pressure at the bottom of an object submerged in a fluid is greater than at the top of the object, and this pressure difference results in a net upward force on the object. For this reason, an object whose density is greater than that of the fluid in which it is submerged tends to sink. If the object is either less dense than the liquid or is shaped appropriately, the force can keep the object afloat. This can occur only in a reference frame which either has a gravitational field or is accelerating due to a force other than gravity defining a downward direction. In a situation of fluid statics, the net upward buoyant force is equal to the magnitude of the weight of fluid displaced by the body. The center of buoyancy of an object is the centroid of the displaced volume of fluid. Archimedes' principle is named after Archimedes of Syracuse, who first discovered this law in 212 B.C. More tersely: buoyancy = weight of displaced fluid. The weight of the displaced fluid is directly proportional to its volume. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy; this is also known as upthrust. Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting upon it. Suppose that when the rock is lowered into water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyancy force: 10 − 3 = 7 newtons.
Buoyancy reduces the apparent weight of objects that have sunk completely to the sea floor, and it is generally easier to lift an object up through the water than it is to pull it out of the water. The density of the object relative to the density of the fluid can easily be calculated without measuring any volumes: density of object / density of fluid = weight / (weight − apparent immersed weight). Example: if you drop wood into water, buoyancy will keep it afloat. Example: a helium balloon in a moving car. During a period of increasing speed, the air mass inside the car moves in the direction opposite to the car's acceleration; the balloon is also pulled this way. However, because the balloon is buoyant relative to the air, it ends up being pushed out of the way and will actually drift in the same direction as the car's acceleration. If the car slows down, the same balloon will begin to drift backward. For the same reason, as the car goes round a curve, the balloon will drift towards the inside of the curve. The pressure inside a fluid in equilibrium is given by the hydrostatic equation
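The density-ratio formula above needs only two weighings, and the rock example from the previous entry (10 N in air, 7 N immersed) plugs straight in:

```python
# Relative density from two weighings, as stated in the text:
# density_object / density_fluid = weight / (weight − apparent immersed weight)

def relative_density(weight, apparent_immersed_weight):
    """Object density divided by fluid density, from weight measurements."""
    return weight / (weight - apparent_immersed_weight)

# The 10 N rock that weighs 7 N when immersed in water:
rd = relative_density(10.0, 7.0)
print(f"Rock is {rd:.2f} times as dense as water")  # 10/3 ≈ 3.33
```

No volume measurement is needed because the denominator (the buoyant force) already encodes the displaced volume times the fluid density.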
32.
Mixing (process engineering)
–
In industrial process engineering, mixing is a unit operation that involves manipulating a heterogeneous physical system with the intent to make it more homogeneous. Familiar examples include pumping of the water in a swimming pool to homogenize the water temperature. Mixing is performed to allow heat and/or mass transfer to occur between one or more streams, components or phases; modern industrial processing almost always involves some form of mixing. Some classes of chemical reactors are also mixers. With the right equipment, it is possible to mix a solid, liquid or gas into another solid, liquid or gas. The opposite of mixing is segregation; a classical example of segregation is the brazil nut effect. The type of operation and equipment used during mixing depends on the state of the materials being mixed. In this context, the act of mixing may be synonymous with stirring or kneading processes. Mixing of liquids occurs frequently in process engineering; the nature of the liquids to blend determines the equipment used. Turbulent or transitional mixing is conducted with turbines or impellers. Mixing of liquids that are miscible or at least soluble in each other occurs frequently in process engineering. An everyday example would be the addition of milk or cream to tea or coffee; since both liquids are water-based, they dissolve easily in one another. The momentum of the liquid being added is sometimes enough to cause enough turbulence to mix the two, since the viscosity of both liquids is relatively low. If necessary, a spoon or paddle could be used to complete the mixing process. Blending a more viscous liquid, such as honey, requires more mixing power per unit volume to achieve the same homogeneity in the same amount of time. Blending powders is one of the oldest unit operations in the solids handling industries; for many decades powder blending has been used just to homogenize bulk materials.
Many different machines have been designed to handle materials with various bulk solids properties. On the basis of the practical experience gained with these different machines, engineering knowledge has been developed to construct reliable equipment and to predict scale-up and mixing behavior. This wide range of applications of mixing equipment requires a high level of knowledge, long experience and extended test facilities to come to the optimal selection of equipment. In powder mixing, two different dimensions of the process can be distinguished: convective mixing and intensive mixing. In the case of convective mixing, material in the mixer is transported from one location to another. This type of mixing leads to a less ordered state inside the mixer; the components that must be mixed are distributed over the other components. With progressing time the mixture becomes more randomly ordered; after a certain mixing time the ultimate random state is reached
33.
Liquid
–
A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a nearly constant volume independent of pressure. As such, it is one of the four fundamental states of matter. A liquid is made up of tiny vibrating particles of matter, such as atoms, held together by intermolecular bonds. Water is, by far, the most common liquid on Earth. Like a gas, a liquid is able to flow and take the shape of a container. Most liquids resist compression, although others can be compressed. Unlike a gas, a liquid does not disperse to fill every space of a container. A distinctive property of the liquid state is surface tension, leading to wetting phenomena. The density of a liquid is usually close to that of a solid; therefore, liquid and solid are both termed condensed matter. On the other hand, liquids and gases share the ability to flow, so they are both called fluids. Although liquid water is abundant on Earth, this state of matter is actually the least common in the known universe, because liquids require a relatively narrow temperature/pressure range to exist. Most known matter in the universe is in gaseous form as interstellar clouds or in plasma form within stars. Liquid is one of the four states of matter, the others being solid, gas and plasma. Unlike a solid, the molecules in a liquid have much greater freedom to move. The forces that bind the molecules together in a solid are only temporary in a liquid. A liquid, like a gas, displays the properties of a fluid: a liquid can flow and assume the shape of a container, and if liquid is placed in a bag, it can be squeezed into any shape. These properties make a liquid suitable for applications such as hydraulics. Liquid particles are bound firmly but not rigidly; they are able to move around one another freely, resulting in a limited degree of particle mobility. As the temperature increases, the increased vibrations of the molecules cause the distances between the molecules to increase. When a liquid reaches its boiling point, the cohesive forces that bind the molecules closely together break.
If the temperature is decreased, the distances between the molecules become smaller. Only two elements are liquid at standard conditions for temperature and pressure: mercury and bromine. Four more elements have melting points slightly above room temperature: francium, caesium, gallium and rubidium. Metal alloys that are liquid at room temperature include NaK, a sodium-potassium alloy, galinstan, a fusible alloy liquid, and some amalgams
34.
Surface tension
–
Surface tension is the elastic tendency of a fluid surface which makes it acquire the least surface area possible. Surface tension allows insects, usually denser than water, to float on a water surface. At liquid-air interfaces, surface tension results from the greater attraction of liquid molecules to each other than to the molecules in the air. The net effect is an inward force at its surface that causes the liquid to behave as if its surface were covered with a stretched elastic membrane. Thus, the surface comes under tension from the imbalanced forces. Because of the relatively high attraction of water molecules for each other through a web of hydrogen bonds, water has a higher surface tension than most other liquids. Surface tension is an important factor in the phenomenon of capillarity. Surface tension has the dimension of force per unit length, or of energy per unit area. The two are equivalent, but when referring to energy per unit of area, it is common to use the term surface energy. In materials science, surface tension is used for either surface stress or surface free energy. The cohesive forces among liquid molecules are responsible for the phenomenon of surface tension. In the bulk of the liquid, each molecule is pulled equally in every direction by neighboring liquid molecules; the molecules at the surface do not have the same molecules on all sides of them and therefore are pulled inwards. This creates some internal pressure and forces liquid surfaces to contract to the minimal area. Surface tension is responsible for the shape of liquid droplets. Although easily deformed, droplets of water tend to be pulled into a spherical shape by the imbalance in cohesive forces of the surface layer. In the absence of other forces, including gravity, drops of virtually all liquids would be approximately spherical. The spherical shape minimizes the necessary wall tension of the surface according to Laplace's law.
Another way to view surface tension is in terms of energy: a molecule in contact with a neighbor is in a lower state of energy than if it were alone. The interior molecules have as many neighbors as they can possibly have, while the boundary molecules are missing neighbors and so sit at higher energy. For the liquid to minimize its energy state, the number of higher-energy boundary molecules must be minimized. The minimized number of boundary molecules results in a minimal surface area. As a result of surface area minimization, a surface will assume the smoothest shape it can, since any curvature in the surface shape results in greater area and hence higher energy. Consequently, the surface will push back against any curvature in much the same way as a ball pushed uphill will push back to minimize its gravitational potential energy. Bubbles in pure water are unstable; the addition of surfactants, however, can have a stabilizing effect on the bubbles
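Laplace's law, referenced above for droplet shape, quantifies this push-back against curvature: the pressure jump across a spherical surface is Δp = 2γ/r. A minimal sketch; the droplet radius is an assumed value, and γ for water at 20 °C is taken as 0.0728 N/m:

```python
# Laplace's law for a spherical droplet: the excess internal pressure is
# delta_p = 2 * gamma / r, so smaller droplets have a larger pressure jump.
# The droplet radius below is an illustrative assumption.

def laplace_pressure(gamma, radius):
    """Excess internal pressure of a spherical droplet of the given radius."""
    return 2.0 * gamma / radius

gamma_water = 0.0728                      # N/m, water at 20 °C
dp = laplace_pressure(gamma_water, 1.0e-6)  # 1-micron droplet
print(f"Pressure jump: {dp:.0f} Pa")
```

The 1/r dependence is why tiny droplets and bubbles are so strongly curved-pressurized, and why bubble nucleation is energetically hard in pure water.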
35.
Capillary action
–
Capillary action is the ability of a liquid to flow in narrow spaces without the assistance of, or even in opposition to, external forces like gravity. It occurs because of intermolecular forces between the liquid and surrounding solid surfaces. If the diameter of the tube is sufficiently small, then the combination of surface tension and adhesive forces between the liquid and the container wall acts to lift the liquid. The first recorded observation of capillary action was by Leonardo da Vinci. A former student of Galileo, Niccolò Aggiunti, was said to have investigated capillary action. Boyle then reported an experiment in which he dipped a capillary tube into red wine and then subjected the tube to a partial vacuum. Some thought that liquids rose in capillaries because air couldn't enter capillaries as easily as liquids; others thought that the particles of liquid were attracted to each other and to the walls of the capillary. Young and Laplace derived the Young–Laplace equation of capillary action, and by 1830 the German mathematician Carl Friedrich Gauss had determined the boundary conditions governing capillary action. In 1871, the British physicist William Thomson determined the effect of the meniscus on a liquid's vapor pressure – a relation known as the Kelvin equation. German physicist Franz Ernst Neumann subsequently determined the interaction between two immiscible liquids. Albert Einstein's first paper, which was submitted to Annalen der Physik in 1900, was on capillarity. A common apparatus used to demonstrate the first phenomenon is the capillary tube. When the lower end of a glass tube is placed in a liquid, such as water, a concave meniscus forms. Adhesion occurs between the fluid and the inner wall, pulling the liquid column up until there is a sufficient mass of liquid for gravitational forces to overcome these intermolecular forces. So, a narrow tube will draw a liquid column higher than a wider tube will. Capillary action is essential for the drainage of constantly produced tear fluid from the eye. Wicking is the absorption of a liquid by a material in the manner of a candle wick.
Paper towels absorb liquid through capillary action, allowing a fluid to be transferred from a surface to the towel. The small pores of a sponge act as small capillaries, causing it to absorb a large amount of fluid. Some textile fabrics are said to use capillary action to wick sweat away from the skin; these are often referred to as wicking fabrics, after the capillary properties of candle and lamp wicks. Capillary action is observed in thin layer chromatography, in which a solvent moves vertically up a plate via capillary action; in this case the pores are gaps between very small particles. Capillary action draws ink to the tips of fountain pen nibs from a reservoir or cartridge inside the pen. In hydrology, capillary action describes the attraction of water molecules to soil particles. Capillary action is responsible for moving groundwater from wet areas of the soil to dry areas; differences in soil potential drive capillary action in soil. Thus, the thinner the space in which the water can travel, the farther up it goes. For a water-filled glass tube in air at standard laboratory conditions, γ = 0.0728 N/m at 20 °C, ρ = 1000 kg/m³, and g = 9.81 m/s²
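The constants quoted above are exactly what the standard capillary-rise (Jurin's law) formula h = 2γ cos θ / (ρ g r) needs. A minimal sketch, assuming a zero contact angle (clean glass and water) and a 1 mm tube radius, both illustrative choices:

```python
# Capillary rise (Jurin's law): h = 2 * gamma * cos(theta) / (rho * g * r),
# using the constants quoted in the text for water in a glass tube.
# Contact angle and tube radius below are illustrative assumptions.
import math

def capillary_rise(gamma, theta, rho, g, radius):
    """Equilibrium height of liquid rise in a tube of the given inner radius."""
    return 2.0 * gamma * math.cos(theta) / (rho * g * radius)

gamma = 0.0728   # N/m, water at 20 °C (from the text)
rho = 1000.0     # kg/m^3 (from the text)
g = 9.81         # m/s^2 (from the text)

h = capillary_rise(gamma, 0.0, rho, g, 1.0e-3)  # 1 mm radius tube
print(f"Rise height: {h * 1000:.1f} mm")
```

Halving the radius doubles the rise, which is the "thinner the space, the farther up it goes" behavior noted in the text.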
36.
Gas
–
Gas is one of the four fundamental states of matter. A pure gas may be made up of individual atoms, elemental molecules made from one type of atom, or compound molecules made from a variety of atoms. A gas mixture would contain a variety of pure gases, much like the air. What distinguishes a gas from liquids and solids is the vast separation of the individual gas particles. This separation usually makes a colorless gas invisible to the human observer. The interaction of gas particles in the presence of electric and gravitational fields is considered negligible, as indicated by the constant velocity vectors in the image. One type of commonly known gas is steam. The gaseous state of matter is found between the liquid and plasma states, the latter of which provides the upper temperature boundary for gases. Bounding the lower end of the temperature scale lie degenerative quantum gases, which are gaining increasing attention. High-density atomic gases supercooled to incredibly low temperatures are classified by their statistical behavior as either a Bose gas or a Fermi gas. For a comprehensive listing of these states of matter, see the list of states of matter. The only chemical elements that are stable multi-atom homonuclear molecules at standard temperature and pressure are hydrogen, nitrogen and oxygen. These gases, when grouped together with the monatomic noble gases, are called elemental gases. Alternatively, they are known as molecular gases to distinguish them from molecules that are also chemical compounds. The word gas is a neologism first used by the early 17th-century Flemish chemist J. B. van Helmont; according to Paracelsus's terminology, chaos meant something like ultra-rarefied water. An alternative story is that Van Helmont's word is corrupted from gahst. These four characteristics were repeatedly observed by scientists such as Robert Boyle, Jacques Charles, John Dalton, Joseph Gay-Lussac and Amedeo Avogadro for a variety of gases in various settings. Their detailed studies ultimately led to a mathematical relationship among these properties expressed by the ideal gas law.
Gas particles are widely separated from one another, and consequently have weaker intermolecular bonds than liquids or solids. These intermolecular forces result from polar interactions between gas particles. Like-charged areas of different gas particles repel, while oppositely charged regions of different gas particles attract one another; transient, randomly induced charges exist across non-polar covalent bonds of molecules, and the electrostatic interactions caused by them are referred to as Van der Waals forces. The interaction of these forces varies within a substance, which determines many of the physical properties unique to each gas. A comparison of boiling points for compounds formed by ionic and covalent bonds leads us to this conclusion. The drifting smoke particles in the image provide some insight into low-pressure gas behavior
37.
Atmosphere
–
An atmosphere is a layer of gases surrounding a planet or other material body that is held in place by the gravity of that body. An atmosphere is more likely to be retained if the gravity of the body is high and the temperature of the atmosphere is low. The atmosphere of Earth is composed mostly of nitrogen, oxygen and argon, with carbon dioxide and other gases in trace amounts. The atmosphere helps protect living organisms from genetic damage by solar ultraviolet radiation, the solar wind and cosmic rays. Its current composition is the product of billions of years of modification of the paleoatmosphere by living organisms. The term stellar atmosphere describes the outer region of a star. Stars with sufficiently low temperatures may form compound molecules in their outer atmosphere. Atmospheric pressure is the force per unit area applied perpendicularly to a surface by the surrounding gas. It is determined by the gravitational force in combination with the total mass of the column of gas above a location. On Earth, units of air pressure are based on the internationally recognized standard atmosphere. Atmospheric pressure is measured with a barometer. The pressure of an atmospheric gas decreases with altitude due to the diminishing mass of gas above. The height at which the pressure of an atmosphere declines by a factor of e is called the scale height and is denoted by H. For an atmosphere of uniform temperature, the pressure declines exponentially with increasing altitude. However, atmospheres are not uniform in temperature, so determining the atmospheric pressure at any particular altitude is more complex. Surface gravity, the force that holds down an atmosphere, differs significantly among the planets. For example, the large gravitational force of the giant planet Jupiter is able to retain light gases such as hydrogen and helium that escape from objects with lower gravity. Also, because they are distant and cold, Titan, Triton, and Pluto are able to retain their atmospheres despite their relatively low gravities. Rogue planets, theoretically, may also retain thick atmospheres. 
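The exponential decline of pressure with altitude can be sketched numerically. The following is an illustrative calculation assuming an idealized isothermal atmosphere; the sea-level pressure and scale height are approximate figures for Earth, not values stated in this article:

```python
import math

def pressure_at_altitude(p0, z, H):
    """Pressure in an idealized isothermal atmosphere: p = p0 * exp(-z / H)."""
    return p0 * math.exp(-z / H)

# Approximate Earth values: sea-level pressure ~101325 Pa, scale height ~8500 m.
# One scale height up, the pressure has dropped by a factor of e.
p_high = pressure_at_altitude(101325.0, 8500.0, 8500.0)
```

Real atmospheres are not isothermal, so this formula is only a first approximation, as the article notes.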
Since a collection of gas molecules moves at a range of velocities, there will always be some fast enough to produce a slow leakage of gas into space. Lighter molecules move faster than heavier ones with the same thermal kinetic energy, so gases of low molecular weight are lost more rapidly. It is thought that Venus and Mars may have lost much of their water when, after being photodissociated into hydrogen and oxygen by solar ultraviolet radiation, the hydrogen escaped. Earth's magnetic field helps to prevent this, as, normally, the solar wind would greatly enhance the escape of hydrogen. However, over the past 3 billion years Earth may have lost gases through the polar regions due to auroral activity.
38.
Boyle's law
–
Boyle's law is an experimental gas law that describes how the pressure of a gas tends to increase as the volume of the container decreases. Mathematically, Boyle's law can be stated as P ∝ 1/V, or PV = k, where P is the pressure of the gas, V is the volume of the gas, and k is a constant. The equation states that the product of pressure and volume is a constant for a given mass of confined gas as long as the temperature is constant. For comparing the same substance under two different sets of conditions, the law can be expressed as P1V1 = P2V2. The equation shows that, as volume increases, the pressure of the gas decreases in proportion; similarly, as volume decreases, the pressure of the gas increases. The law was named after the chemist and physicist Robert Boyle, who published the original law in 1662. This relationship between pressure and volume was first noted by Richard Towneley and Henry Power; Robert Boyle confirmed their discovery through experiments and published the results. According to Robert Gunther and other authorities, it was Boyle's assistant, Robert Hooke, who built the experimental apparatus. Boyle's law is based on experiments with air, which he considered to be a fluid of particles at rest in between small invisible springs. At that time, air was still seen as one of the four elements; Boyle's interest was probably to understand air as an essential element of life, and he published, for example, works on the growth of plants without air. Boyle used a closed J-shaped tube, and after pouring mercury in from one side he forced the air on the other side to contract under the pressure of the mercury. The French physicist Edme Mariotte discovered the law independently of Boyle in 1679, so this law is sometimes referred to as Mariotte's law or the Boyle–Mariotte law. Instead of a static theory, a kinetic theory is needed, which was provided two centuries later by Maxwell and Boltzmann. This law was the first physical law to be expressed in the form of an equation describing the dependence of two variable quantities. 
Boyle's law is a gas law stating that the pressure and volume of a gas have an inverse relationship: if volume increases, then pressure decreases, and vice versa, when temperature is held constant. Therefore, when the volume is halved, the pressure is doubled, and if the volume is doubled, the pressure is halved. Boyle's law states that at constant temperature the absolute pressure and the volume of a fixed mass of gas are inversely proportional. Most gases behave like ideal gases at moderate pressures and temperatures, and the technology of the 17th century could not produce high pressures or low temperatures. Hence, the law was not likely to show deviations at the time of publication. The deviation is expressed as the compressibility factor.
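The halving and doubling behavior follows directly from P1V1 = P2V2. A minimal sketch (the function name and values are illustrative, not from the article):

```python
def boyle_final_pressure(p1, v1, v2):
    """Boyle's law at constant temperature: P1*V1 = P2*V2, so P2 = P1*V1/V2."""
    return p1 * v1 / v2

# Halving the volume of a gas at 100 kPa doubles the pressure to 200 kPa.
p2 = boyle_final_pressure(100.0, 2.0, 1.0)
```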
39.
Charles's law
–
Charles's law is an experimental gas law that describes how gases tend to expand when heated. A modern statement of Charles's law is: when the pressure on a sample of a dry gas is held constant, the Kelvin temperature and the volume are in direct proportion. This directly proportional relationship can be written as V ∝ T, or V/T = k. The law describes how a gas expands as the temperature increases; conversely, a decrease in temperature leads to a decrease in volume. The equation shows that, as temperature increases, the volume of the gas increases in proportion. The law was named after the scientist Jacques Charles, who formulated it in his unpublished work from the 1780s. The basic principles had already been described by Guillaume Amontons and Francis Hauksbee a century earlier. Dalton was the first to demonstrate that the law applied generally to all gases. With measurements only at the two thermometric fixed points of water, Gay-Lussac was unable to show that the equation relating volume to temperature was a linear function. On mathematical grounds alone, Gay-Lussac's paper does not permit the assignment of any law stating the linear relation, and his equation does not contain the temperature and so has nothing to do with what became known as Charles's law. Gay-Lussac's value for k was identical to Dalton's earlier value for vapours. Gay-Lussac gave credit for this equation to unpublished statements by his fellow Republican citizen J. Charles in 1787; in the absence of a record, the gas law relating volume to temperature cannot be named after Charles. Dalton's measurements had much more scope regarding temperature than Gay-Lussac's, measuring the volume not only at the fixed points of water. His conclusion for vapours is a statement of what became known, wrongly, as Charles's law, and then, even more wrongly, as Gay-Lussac's law. His first law was that of partial pressures. Charles's law appears to imply that the volume of a gas will descend to zero at a certain temperature, −273.15 °C. 
Gay-Lussac had no experience of liquefied air, although he appears to have believed that the permanent gases such as air could be liquefied. However, the zero on the Kelvin temperature scale was originally defined in terms of the second law of thermodynamics, and Thomson did not assume that it was equal to the zero-volume point of Charles's law. The two can be shown to be equivalent by Ludwig Boltzmann's statistical view of entropy. Charles also stated that the volume of a fixed mass of dry gas increases or decreases by 1⁄273 times the volume at 0 °C for every 1 °C rise or fall in temperature. Thus VT = V0 + (1/273)V0 × T, or VT = V0(1 + T/273), where VT is the volume of the gas at temperature T in degrees Celsius and V0 is the volume at 0 °C. Under this definition, the demonstration of Charles's law is almost trivial.
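Both forms of the law, the modern Kelvin-scale proportionality and Charles's historical 1⁄273-per-degree statement, can be sketched as follows (function names and values are illustrative):

```python
def charles_final_volume(v1, t1, t2):
    """Charles's law at constant pressure: V1/T1 = V2/T2 (temperatures in kelvins)."""
    return v1 * t2 / t1

def volume_from_celsius(v0, t_celsius):
    """Charles's historical statement: V_T = V0 * (1 + T/273), T in degrees Celsius."""
    return v0 * (1 + t_celsius / 273.0)

# Doubling the absolute temperature doubles the volume at constant pressure.
v2 = charles_final_volume(1.0, 300.0, 600.0)
```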
40.
Gay-Lussac's law
–
Gay-Lussac is most often recognized for the pressure law, which established that the pressure of an enclosed gas is directly proportional to its temperature, and which he was the first to formulate. This law is known variously as the pressure law or Amontons's law. Gay-Lussac also found, for example, that 2 volumes of hydrogen and 1 volume of oxygen react to form 2 volumes of gaseous water. Based on Gay-Lussac's results, Amedeo Avogadro theorized that, at the same temperature and pressure, equal volumes of gas contain equal numbers of molecules. The law of combining gases was made public by Joseph Louis Gay-Lussac in 1808. Avogadro's hypothesis, however, was not initially accepted by chemists until the Italian chemist Stanislao Cannizzaro was able to convince the First International Chemical Congress in 1860. Amontons discovered the pressure law while building an air thermometer: the pressure of a gas of fixed mass and fixed volume is directly proportional to the gas's absolute temperature. If a gas's temperature increases, then so does its pressure, provided the mass and volume of the gas are held constant. The law has a particularly simple mathematical form if the temperature is measured on an absolute scale, such as in kelvins. The law can then be expressed mathematically as P ∝ T, or P/T = k, where P is the pressure of the gas, T is the temperature of the gas, and k is a constant. For comparing the same substance under two different sets of conditions, the law can be written as P1/T1 = P2/T2, or P1T2 = P2T1. Because Amontons discovered the law beforehand, Gay-Lussac's name is now generally associated within chemistry with the law of combining volumes discussed above. Some introductory physics textbooks still define the pressure-temperature relationship as Gay-Lussac's law. Gay-Lussac primarily investigated the relationship between volume and temperature and published it in 1802, but his work did cover some comparison between pressure and temperature; in recent years, however, the term has fallen out of favor. 
Gay-Lussac's law, Charles's law, and Boyle's law form the combined gas law. These three gas laws in combination with Avogadro's law can be generalized by the ideal gas law.
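The pressure law P1/T1 = P2/T2 can be sketched as a small function (names and values are illustrative):

```python
def gay_lussac_final_pressure(p1, t1, t2):
    """Pressure law at constant volume and mass: P1/T1 = P2/T2 (T in kelvins)."""
    return p1 * t2 / t1

# Doubling the absolute temperature of a sealed, rigid container doubles the pressure.
p2 = gay_lussac_final_pressure(100.0, 250.0, 500.0)
```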
41.
Combined gas law
–
The combined gas law is a gas law that combines Charles's law, Boyle's law, and Gay-Lussac's law. There is no official founder for this law; it is merely an amalgamation of the three previously discovered laws, each of which relates one thermodynamic variable to another mathematically while holding everything else constant. Charles's law states that volume and temperature are directly proportional to each other as long as pressure is held constant. Boyle's law asserts that pressure and volume are inversely proportional to each other at fixed temperature. Finally, Gay-Lussac's law introduces a direct proportionality between temperature and pressure at constant volume. By combining Boyle's law (PV = k1) and Charles's law (V/T = k2), we can obtain a new equation with P, V and T: if we divide the first equation by temperature and multiply the second by pressure, we get PV/T = k1/T and PV/T = k2P. As the left-hand side of both equations is the same, we arrive at k1/T = k2P; since the left side depends only on T and the right only on P, each must equal a constant, so PV/T is constant. Substituting in Avogadro's law yields the ideal gas equation. A derivation of the combined gas law using only elementary algebra can contain surprises. A physical derivation, longer but more reliable, begins by realizing that the constant-volume parameter in Gay-Lussac's law will change as the volume changes: at constant volume V1 the law might appear as P = k1T, while at another volume it would carry a different constant. Rather, it should first be determined in what sense these equations are compatible with one another. To gain insight into this, recall that any two of the variables determine the third. Choosing P and V to be independent, we picture the T values forming a surface above the PV-plane. A definite V0 and P0 define a T0, a point on that surface, and the ratio of the slopes of the two constant-parameter lines through that point depends only on the value of P0/V0 there. Note that the form of this relation did not depend on the particular point chosen; the same formula would have arisen for any combination of P and V values. 
Therefore, one can write kV/kP = P/V for all P and all V. This says that each point on the surface has its own pair of lines through it. Whereas the earlier relation was between specific slopes and variable values, this is a relation between slope functions and function variables; it holds true for any point on the surface, i.e. for any and all combinations of P and V values. To solve this equation for the function kV, first separate the variables, V on the left and P on the right: V·kV(V) = P·kP(P). Choose any pressure P1; the right side then evaluates to some value, call it karb, giving V·kV(V) = karb. This particular equation must now hold true not just for one value of V; the only definition of kV that guarantees this for all V and arbitrary karb is kV(V) = karb/V, which may be verified by substitution.
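The resulting relation P1V1/T1 = P2V2/T2 can be sketched numerically (the function name and values are illustrative):

```python
def combined_final_pressure(p1, v1, t1, v2, t2):
    """Combined gas law: P1*V1/T1 = P2*V2/T2, solved for P2 (temperatures in kelvins)."""
    return p1 * v1 * t2 / (t1 * v2)

# Halving the volume while doubling the absolute temperature quadruples the pressure.
p2 = combined_final_pressure(100.0, 2.0, 300.0, 1.0, 600.0)
```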
42.
Plasma (physics)
–
Plasma is one of the four fundamental states of matter, the others being solid, liquid, and gas. Unlike these three states, plasma does not naturally exist on the Earth under normal surface conditions. The term was first introduced by the chemist Irving Langmuir in the 1920s. True plasma production comes from the separation of ions and electrons, which produces an electric field. Depending on the environmental temperature and density, either partially ionised or fully ionised forms of plasma may be produced. The positive charge in ions is achieved by stripping away electrons from atomic nuclei; the number of electrons removed is related to either the increase in temperature or the local density of other ionised matter. Plasma may be the most abundant form of ordinary matter in the universe, although this is currently tentative, based on the existence and unknown properties of dark matter. Plasma is mostly associated with the Sun and stars, extending to the rarefied intracluster medium. Plasma was first identified in a Crookes tube, and so described by Sir William Crookes in 1879. The nature of the Crookes tube cathode ray matter was identified by the British physicist Sir J. J. Thomson in 1897. The term plasma was coined by Irving Langmuir in 1928, perhaps because the glowing discharge molds itself to the shape of the Crookes tube: "we shall use the name plasma to describe this region containing balanced charges of ions and electrons." Plasma is an electrically neutral medium of unbound positive and negative particles. Although these particles are unbound, they are not "free" in the sense of not experiencing forces; in turn, this governs collective behavior with many degrees of variation. The average number of particles in the Debye sphere is given by the plasma parameter. Bulk interactions: the Debye screening length is short compared to the physical size of the plasma. 
This criterion means that interactions in the bulk of the plasma are more important than those at its edges; when this criterion is satisfied, the plasma is quasineutral. Plasma frequency: the electron plasma frequency is large compared to the electron-neutral collision frequency. When this condition is valid, electrostatic interactions dominate over the processes of ordinary gas kinetics. For plasma to exist, ionization is necessary. The term plasma density by itself usually refers to the electron density, that is, the number of free electrons per unit volume. The degree of ionization of a plasma is the proportion of atoms that have lost or gained electrons; even a partially ionized gas in which as little as 1% of the particles are ionized can have the characteristics of a plasma. The degree of ionization, α, is defined as α = ni / (ni + nn), where ni is the number density of ions and nn is the number density of neutral atoms.
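The degree-of-ionization formula is a simple ratio and can be sketched directly (the function name is illustrative):

```python
def degree_of_ionization(n_ions, n_neutrals):
    """Degree of ionization: alpha = n_i / (n_i + n_n)."""
    return n_ions / (n_ions + n_neutrals)

# A gas with 1 ion per 99 neutrals is 1% ionized, yet can show plasma behavior.
alpha = degree_of_ionization(1.0, 99.0)
```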
43.
Rheology
–
The term rheology was coined by Eugene C. Bingham, a professor at Lafayette College, in 1920, from a suggestion by a colleague. The term was inspired by the aphorism of Simplicius, panta rhei ("everything flows"), and was first used to describe the flow of liquids and the deformation of solids. Newtonian fluids can be characterized by a single coefficient of viscosity for a specific temperature. Although this viscosity will change with temperature, it does not change with the strain rate. Only a small group of fluids exhibit such constant viscosity. The large class of fluids whose viscosity changes with the strain rate are called non-Newtonian fluids. For example, ketchup can have its viscosity reduced by shaking: ketchup is a shear-thinning material, like yogurt and emulsion paint, exhibiting thixotropy, where an increase in relative flow velocity causes a reduction in viscosity, for example by stirring. Some other non-Newtonian materials show the opposite behavior, rheopecty: viscosity going up with relative deformation. Since Sir Isaac Newton originated the concept of viscosity, the study of liquids with strain-rate-dependent viscosity is also often called non-Newtonian fluid mechanics. Materials with the characteristics of a fluid will flow when subjected to a stress, which is defined as the force per area. There are different sorts of stress, and materials can respond differently to different stresses. Much of theoretical rheology is concerned with associating external forces and torques with internal stresses and internal strain gradients and flow velocities. In this sense, a solid undergoing plastic deformation is a fluid. Granular rheology refers to the continuum-mechanical description of granular materials. The relevant experimental techniques are known as rheometry and are concerned with the determination of well-defined rheological material functions; such relationships are then amenable to mathematical treatment by the established methods of continuum mechanics. 
The characterization of flow or deformation originating from a shear stress field is called shear rheometry; the study of extensional flows is called extensional rheology. Shear flows are much easier to study, and thus much more experimental data are available for shear flows than for extensional flows. A rheologist is an interdisciplinary scientist or engineer who studies the flow of liquids or the deformation of soft solids. It is not a degree subject; there is no qualification of rheologist as such. Most rheologists have a qualification in mathematics, the sciences, engineering, medicine, or certain technologies. Elasticity is essentially a time-independent process, as the strains appear the moment the stress is applied. If the material deformation rate increases linearly with increasing applied stress, then the material is viscous in the Newtonian sense. Viscoelastic materials are characterized by the delay between the applied constant stress and the maximum strain.
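The shear-thinning and shear-thickening behaviors mentioned above are commonly modeled with the power-law (Ostwald-de Waele) constitutive relation; this standard textbook model is not derived in the article, and the parameter values below are hypothetical:

```python
def power_law_apparent_viscosity(K, n, shear_rate):
    """Power-law fluid: apparent viscosity eta = K * gamma_dot**(n - 1).
    n < 1 gives shear-thinning, n = 1 Newtonian, n > 1 shear-thickening."""
    return K * shear_rate ** (n - 1)

# For a shear-thinning fluid (n < 1), viscosity falls as the shear rate rises.
eta_slow = power_law_apparent_viscosity(2.0, 0.5, 1.0)
eta_fast = power_law_apparent_viscosity(2.0, 0.5, 4.0)
```

Note that the power-law model captures rate dependence but not the time dependence of thixotropic materials like ketchup.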
44.
Viscoelasticity
–
Viscoelasticity is the property of materials that exhibit both viscous and elastic characteristics when undergoing deformation. Viscous materials, like honey, resist shear flow and strain linearly with time when a stress is applied; elastic materials strain when stretched and quickly return to their original state once the stress is removed. Viscoelastic materials have elements of both of these properties and, as such, exhibit time-dependent strain. In the nineteenth century, physicists such as Maxwell, Boltzmann, and Kelvin researched and experimented with the creep and recovery of glasses, metals, and rubbers. Viscoelasticity was further examined in the late twentieth century, when synthetic polymers were engineered and used in a variety of applications. Viscoelasticity calculations depend heavily on the viscosity variable, η; the inverse of η is also known as fluidity, φ. The value of either can be derived as a function of temperature or as a given value. Depending on the change of strain rate versus stress inside a material, the viscosity can be categorized as having a linear, non-linear, or plastic response. When a material exhibits a linear response it is categorized as a Newtonian material; in this case the stress is linearly proportional to the strain rate. If the material exhibits a non-linear response to the strain rate, it is categorized as a non-Newtonian fluid. There is also an interesting case where the viscosity decreases while the shear/strain rate remains constant; a material which exhibits this type of behavior is known as thixotropic. In addition, when the stress is independent of this strain rate, the material exhibits plastic deformation. Many viscoelastic materials exhibit rubber-like behavior explained by the theory of polymer elasticity. Some examples of viscoelastic materials include amorphous polymers, semicrystalline polymers, biopolymers, and metals at very high temperatures. 
Cracking occurs when the strain is applied quickly and outside of the elastic limit. Ligaments and tendons are viscoelastic, so the extent of the potential damage to them depends both on the rate of change of their length and on the force applied. The viscosity of a viscoelastic substance gives the substance a strain-rate dependence on time. Purely elastic materials do not dissipate energy when a load is applied and then removed, but a viscoelastic substance does lose energy: hysteresis is observed in the stress-strain curve, with the area of the loop being equal to the energy lost during the loading cycle. Since viscosity is the resistance to thermally activated plastic deformation, a viscous material will lose energy through a loading cycle. Plastic deformation results in lost energy, which is uncharacteristic of a purely elastic material's reaction to a loading cycle. Specifically, viscoelasticity is a molecular rearrangement. When a stress is applied to a viscoelastic material such as a polymer, parts of the long polymer chain change positions. This movement or rearrangement is called creep. Polymers remain a solid material even when these parts of their chains are rearranging to accommodate the stress, and as this occurs, it creates a back stress in the material.
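The time-dependent behavior described here is often illustrated with the Maxwell model (a spring and dashpot in series), which the article credits Maxwell with pioneering; the sketch below uses hypothetical parameter values and is a standard textbook model rather than anything derived in the article:

```python
import math

def maxwell_relaxed_stress(sigma0, t, eta, E):
    """Maxwell model: under a constant strain, stress relaxes as
    sigma(t) = sigma0 * exp(-t / tau), with relaxation time tau = eta / E,
    where eta is the viscosity and E the elastic modulus."""
    tau = eta / E
    return sigma0 * math.exp(-t / tau)

# Stress decays toward zero; after one relaxation time it has dropped by 1/e.
sigma_later = maxwell_relaxed_stress(10.0, 5.0, 5.0, 1.0)
```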
45.
Rheometer
–
A rheometer is a laboratory device used to measure the way in which a liquid, suspension or slurry flows in response to applied forces. It is used for those fluids which cannot be defined by a single value of viscosity and therefore require more parameters to be set and measured; it measures the rheology of the fluid. There are two distinctively different types of rheometers. Rotational or shear-type rheometers are usually designed as either a native strain-controlled instrument or a native stress-controlled instrument. The word rheometer comes from the Greek and means a device for measuring flow. In the 19th century it was used for devices to measure electric current, and it was also used for the measurement of the flow of liquids in medical practice; this latter use persisted into the second half of the 20th century in some areas. Following the coining of the term rheology, the word came to be applied to instruments for measuring the character rather than the quantity of flow. The principle and working of rheometers is described in several texts. A dynamic shear rheometer, commonly known as a DSR, is used for research. In one design, liquid is forced through a tube of constant cross-section and precisely known dimensions under conditions of laminar flow. Either the flow rate or the pressure drop is fixed and the other measured. Knowing the dimensions, the flow rate can be converted into a value for the shear rate, and varying the pressure or flow allows a flow curve to be determined. In a rotational design, the liquid is placed within the annulus of one cylinder inside another, and one of the cylinders is rotated at a set speed. This determines the shear rate inside the annulus. The liquid tends to drag the other cylinder round, and the force it exerts on that cylinder is measured, which can be converted to a shear stress. One version of this is the Fann V-G Viscometer, which runs at two speeds and therefore gives only two points on the flow curve. 
This is sufficient to define a Bingham plastic model, which used to be used in the oil industry for determining the flow character of drilling fluids. In recent years rheometers that spin at 600, 300, 200, 100, 6 and 3 RPM have been used, allowing more complex fluid models such as Herschel-Bulkley to be fitted. Some models allow the speed to be increased and decreased in a programmed fashion. In the cone-and-plate design, the liquid is placed on a horizontal plate and a cone is lowered into it. The angle between the surface of the cone and the plate is around 1 to 2 degrees but can vary depending on the types of tests being run. Typically the plate is rotated and the torque on the cone is measured.
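For the cone-and-plate geometry just described, standard small-angle formulas convert the set speed and measured torque into a shear rate and a viscosity. A sketch under those textbook assumptions (variable names and values are illustrative, not from the article):

```python
import math

def cone_plate_shear_rate(omega, cone_angle):
    """For a small cone angle, the shear rate is nearly uniform in the gap:
    gamma_dot = omega / angle, with omega in rad/s and cone_angle in radians."""
    return omega / cone_angle

def cone_plate_viscosity(torque, radius, shear_rate):
    """Newtonian viscosity from the measured torque M on a cone of radius R:
    eta = 3*M / (2*pi*R**3*gamma_dot)."""
    return 3.0 * torque / (2.0 * math.pi * radius ** 3 * shear_rate)

# A 2 rad/s rotation against a 0.02 rad (~1.15 degree) cone gives ~100 1/s shear rate.
gamma_dot = cone_plate_shear_rate(2.0, 0.02)
```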
46.
Smart fluid
–
A smart fluid is a fluid whose properties can be changed by applying an electric field or a magnetic field. The most developed smart fluids today are fluids whose viscosity increases when a magnetic field is applied: small magnetic dipoles are suspended in a fluid, and the applied magnetic field causes these small magnets to line up. These magnetorheological (MR) fluids have been used in the suspension of the 2002 model of the Cadillac Seville STS automobile and, more recently, in other vehicles: depending on road conditions, the damping fluid's viscosity is adjusted. This is more expensive than traditional systems, but it provides better control. Some haptic devices whose resistance to touch can be controlled are also based on these MR fluids. Another major type of smart fluid is the electrorheological (ER) fluid, with applications in fast-acting clutches, brakes, shock absorbers and hydraulic valves, as well as other, more esoteric, uses. Other smart fluids change their surface tension in the presence of an electric field. Other applications include brakes and seismic dampers, which are used in buildings in seismically active zones to damp the oscillations occurring in an earthquake. Since the early period of development, it appears that interest has waned a little, possibly due to various limitations of smart fluids which have yet to be overcome.
47.
Magnetorheological fluid
–
A magnetorheological (MR) fluid is a type of smart fluid consisting of magnetic particles in a carrier fluid, usually a type of oil. When subjected to a magnetic field, the fluid greatly increases its apparent viscosity. Importantly, the yield stress of the fluid when in its active state can be controlled very accurately by varying the magnetic field intensity; the upshot is that the fluid's ability to transmit force can be controlled with an electromagnet. Extensive discussions of the physics and applications of MR fluids can be found in a recent book. MR fluid is different from a ferrofluid, which has smaller particles: MR fluid particles are primarily on the micrometre scale and are too dense for Brownian motion to keep them suspended, whereas ferrofluid particles are primarily nanoparticles that are suspended by Brownian motion and generally will not settle under normal conditions. As a result, these two fluids have different applications. When a magnetic field is applied, the particles align themselves along the lines of magnetic flux. To understand and predict the behavior of an MR fluid it is necessary to model the fluid mathematically, a task slightly complicated by its varying material properties. As mentioned above, smart fluids have a low viscosity in the absence of a magnetic field. In the case of MR fluids, the fluid actually assumes properties comparable to a solid when in the activated state. The behavior of an MR fluid can thus be considered similar to a Bingham plastic, a material model which has been well investigated. However, an MR fluid does not exactly follow the characteristics of a Bingham plastic; for example, below the yield stress the fluid behaves as a viscoelastic material, with a complex modulus that is also known to be dependent on the magnetic field intensity. MR fluids are also known to be subject to shear thinning. Low shear strength has been the primary reason for the limited range of applications; in the absence of pressure, the maximum shear strength is about 100 kPa. 
If the fluid is compressed in the field direction under a compressive stress of 2 MPa, the shear strength increases substantially; likewise, replacing the standard magnetic particles with elongated magnetic particles improves the shear strength. Ferroparticles settle out of the suspension over time due to the inherent density difference between the particles and their carrier fluid. The rate and degree to which this occurs is one of the primary attributes considered in industry when implementing or designing an MR device. Surfactants are typically used to offset this effect, but at a cost to the fluid's magnetic saturation, and thus to the maximum yield stress exhibited in its activated state.
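The Bingham plastic model mentioned above relates shear stress to shear rate once the field-dependent yield stress is exceeded. A minimal sketch (the function name and values are illustrative; the mapping from field strength to yield stress would come from measured fluid data, not from this article):

```python
def bingham_shear_stress(yield_stress, plastic_viscosity, shear_rate):
    """Bingham plastic: tau = tau_y + eta_p * gamma_dot, valid above the yield point.
    For an MR fluid, tau_y grows with the applied magnetic field intensity."""
    return yield_stress + plastic_viscosity * shear_rate

# With a field-induced yield stress of 100 kPa, flowing at 10 1/s adds viscous stress.
tau = bingham_shear_stress(100000.0, 500.0, 10.0)
```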
48.
Electrorheological fluid
–
Electrorheological (ER) fluids are suspensions of extremely fine non-conducting but electrically active particles in an electrically insulating fluid. The apparent viscosity of these fluids changes reversibly by a factor of up to 100,000 in response to an electric field. For example, a typical ER fluid can go from the consistency of a liquid to that of a gel, and back, with response times on the order of milliseconds. The effect is called the Winslow effect after its discoverer, the American inventor Willis Winslow. Common applications are in ER brakes and shock absorbers, and there are many novel uses for these fluids: potential uses include accurate abrasive polishing and haptic controllers, and Motorola filed a patent application for mobile-device applications in 2006. The change in apparent viscosity depends on the applied electric field. The change is not a simple change in viscosity, hence these fluids are now known as ER fluids. The effect is better described as an electric-field-dependent shear yield stress. When activated, an ER fluid behaves as a Bingham plastic, with a yield point determined by the electric field strength. After the yield point is reached, the fluid shears as a fluid; hence the resistance to motion of the fluid can be controlled by adjusting the applied electric field. ER fluids are a type of smart fluid. A simple ER fluid can be made by mixing cornflour in a light vegetable oil or silicone oil. There are two main theories to explain the effect: the interfacial tension or "water bridge" theory, and the electrostatic theory. The water bridge theory assumes a three-phase system: the particles contain the third phase, which is another liquid immiscible with the main phase liquid. With no applied electric field, the third phase is strongly attracted to and held within the particles. 
This means the ER fluid is a suspension of particles which behaves as a liquid. When an electric field is applied, the third phase is driven to one side of the particles by electro-osmosis and binds adjacent particles together to form chains; this chain structure means the ER fluid has become a solid. The electrostatic theory assumes just a two-phase system, with dielectric particles forming chains aligned with the electric field, in a way analogous to how magnetorheological fluids work. An ER fluid has been constructed with the solid phase made from a conductor coated in an insulator; this ER fluid clearly cannot work by the water bridge model. However, although this demonstrates that some ER fluids work by the electrostatic effect, it does not prove that all ER fluids do. The advantage of an ER fluid which operates on the electrostatic effect is the elimination of leakage current, i.e. potentially there is no direct current.
49.
Ferrofluid
–
A ferrofluid is a liquid that becomes strongly magnetized in the presence of a magnetic field. Ferrofluid was invented in 1963 by NASA's Steve Papell as a rocket fuel that could be drawn toward a pump inlet in a weightless environment by applying a magnetic field. Ferrofluids are colloidal liquids made of nanoscale ferromagnetic or ferrimagnetic particles suspended in a carrier fluid. Each tiny particle is thoroughly coated with a surfactant to inhibit clumping. Large ferromagnetic particles can be ripped out of a homogeneous colloidal mixture, forming a separate clump of magnetic dust when exposed to strong magnetic fields; the magnetic attraction of nanoparticles, however, is weak enough that the surfactant's Van der Waals force is sufficient to prevent magnetic clumping or agglomeration. Ferrofluids usually do not retain magnetization in the absence of an applied field. The difference between ferrofluids and magnetorheological fluids is the size of the particles: the particles in a ferrofluid primarily consist of nanoparticles which are suspended by Brownian motion and generally will not settle under normal conditions. These two fluids have different applications as a result. Ferrofluids are composed of particles of magnetite, hematite or some other compound containing iron, small enough for thermal agitation to disperse them evenly within a carrier fluid; this is similar to the way that the ions in an aqueous paramagnetic salt solution make the solution paramagnetic. The composition of a typical ferrofluid is about 5% magnetic solids, 10% surfactant and 85% carrier. Particles in ferrofluids are dispersed in a liquid, often using a surfactant, and thus ferrofluids are colloidal suspensions, materials with properties of more than one state of matter; in this case, the two states of matter are the solid metal and the liquid it is in. This ability to change phase with the application of a magnetic field allows them to be used as seals and lubricants. The surfactant coating means that the particles do not agglomerate or phase-separate even in extremely strong magnetic fields. 
However, the surfactant tends to break down over time, and eventually the nanoparticles will agglomerate. The term magnetorheological fluid refers to liquids similar to ferrofluids that solidify in the presence of a magnetic field; magnetorheological fluids have micrometre-scale magnetic particles that are one to three orders of magnitude larger than those of ferrofluids. Ferrofluids lose their magnetic properties at sufficiently high temperatures, known as the Curie temperature. When a paramagnetic fluid is subjected to a sufficiently strong vertical magnetic field, its surface spontaneously forms a regular pattern of peaks and valleys. This effect is known as the Rosensweig or normal-field instability. The instability is driven by the magnetic field, and it can be explained by considering which shape of the fluid minimizes the total energy of the system: from the point of view of magnetic energy, peaks and valleys are energetically favorable.
50.
Daniel Bernoulli
–
Daniel Bernoulli FRS was a Swiss mathematician and physicist and was one of the many prominent mathematicians in the Bernoulli family. He is particularly remembered for his applications of mathematics to mechanics, especially fluid mechanics. Daniel Bernoulli was born in Groningen, in the Netherlands, into a family of distinguished mathematicians. The Bernoulli family came originally from Antwerp, at that time in the Spanish Netherlands. After a brief period in Frankfurt the family moved to Basel. Daniel was the son of Johann Bernoulli and a nephew of Jacob Bernoulli, and he had two brothers, Niklaus and Johann II. Daniel Bernoulli was described by W. W. Rouse Ball as "by far the ablest of the younger Bernoullis". He is said to have had a bad relationship with his father: Johann Bernoulli plagiarized some key ideas from Daniel's book Hydrodynamica in his own book Hydraulica, which he backdated to before Hydrodynamica. Despite Daniel's attempts at reconciliation, his father carried the grudge until his death. Around schooling age, his father, Johann, encouraged him to study business, there being poor rewards awaiting a mathematician. However, Daniel refused, because he wanted to study mathematics; he later gave in to his father's wish and studied business. His father then asked him to study medicine, and Daniel agreed on the condition that his father would teach him mathematics privately. Daniel studied medicine at Basel, Heidelberg, and Strasbourg, and earned a PhD in anatomy and botany in 1721. He was a contemporary and close friend of Leonhard Euler. He went to St. Petersburg in 1724 as professor of mathematics, but was very unhappy there, and a temporary illness in 1733 gave him an excuse for leaving St. Petersburg. He returned to the University of Basel, where he successively held the chairs of medicine, metaphysics, and natural philosophy. In May 1750 he was elected a Fellow of the Royal Society. His earliest mathematical work was the Exercitationes, published in 1724 with the help of Goldbach.
Two years later he pointed out for the first time the frequent desirability of resolving a compound motion into motions of translation and motions of rotation. Together Bernoulli and Euler tried to discover more about the flow of fluids; in particular, they wanted to know about the relationship between the speed at which blood flows and its pressure. Soon physicians all over Europe were measuring patients' blood pressure by sticking point-ended glass tubes directly into their arteries. It was not until about 170 years later, in 1896, that an Italian doctor discovered a less painful method which is still in use today. However, Bernoulli's method of measuring pressure is used today in modern aircraft to measure the speed of the air passing the plane. Taking his discoveries further, Daniel Bernoulli returned to his earlier work on the conservation of energy. It was known that a moving body exchanges its kinetic energy for potential energy when it gains height.
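The aircraft application mentioned above is the pitot tube, which rests on Bernoulli's principle: for incompressible flow, the difference between stagnation and static pressure is the dynamic pressure q = ½ρv². As an illustrative sketch (the function name and sample numbers are mine, not from the source):

```python
import math

def airspeed_from_pitot(dynamic_pressure_pa, air_density_kg_m3=1.225):
    """Bernoulli's principle for incompressible flow: q = 0.5 * rho * v**2,
    so v = sqrt(2 * q / rho).

    dynamic_pressure_pa: stagnation minus static pressure, in pascals.
    air_density_kg_m3: local air density (1.225 kg/m^3 at sea level).
    """
    return math.sqrt(2.0 * dynamic_pressure_pa / air_density_kg_m3)

# A pressure difference of 612.5 Pa at sea-level density gives about 31.6 m/s.
v = airspeed_from_pitot(612.5)
```

Real air-data computers additionally correct for compressibility and local density, but the square-root relationship between pressure difference and speed is the same.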
51.
Robert Boyle
–
Robert William Boyle FRS was an Anglo-Irish natural philosopher, chemist, physicist and inventor born in Lismore, County Waterford, Ireland. Boyle is largely regarded today as the first modern chemist, and therefore one of the founders of modern chemistry, and one of the pioneers of the modern experimental scientific method. He is best known for Boyle's law, which describes the inversely proportional relationship between the absolute pressure and volume of a gas, if the temperature is kept constant within a closed system. Among his works, The Sceptical Chymist is seen as a cornerstone book in the field of chemistry. He was a devout and pious Anglican and is noted for his writings in theology. Boyle was born in Lismore Castle, in County Waterford, Ireland, the seventh son and fourteenth child of Richard Boyle, 1st Earl of Cork, and Catherine Fenton. Richard Boyle had arrived in Dublin from England in 1588 during the Tudor plantations of Ireland and had amassed enormous landholdings by the time Robert was born. As a child, Boyle was fostered to a local family. Boyle received private tutoring in Latin, Greek, and French, and when he was eight years old, following the death of his mother, he was sent to Eton College in England. His father's friend, Sir Henry Wotton, was then the provost of the college. During this time, his father hired a private tutor, Robert Carew, who had knowledge of Irish, to act as private tutor to his sons at Eton. After spending over three years at Eton, Robert travelled abroad with a French tutor. They visited Italy in 1641 and remained in Florence during the winter of that year studying the "paradoxes of the great star-gazer" Galileo Galilei, who was elderly but still living in 1641. Boyle returned to England from continental Europe in mid-1644 with a keen interest in scientific research. His father had died the previous year and had left him the manor of Stalbridge in Dorset, England, and substantial estates in County Limerick in Ireland that he had acquired.
Boyle and his fellow experimenters met frequently in London, often at Gresham College. Having made several visits to his Irish estates beginning in 1647, Robert moved to Ireland in 1652 but became frustrated at his inability to make progress in his chemical work; in one letter, he described Ireland as a country where "chemical spirits were so misunderstood". In 1654, Boyle left Ireland for Oxford to pursue his work more successfully. An inscription can be found on the wall of University College, Oxford, on the High Street, marking the spot where Cross Hall stood until the early 19th century. It was here that Boyle rented rooms from the apothecary who owned the Hall. An account of Boyle's work with the air pump was published in 1660 under the title New Experiments Physico-Mechanical, Touching the Spring of the Air. The person who originally formulated the hypothesis now known as Boyle's law was Henry Power, in 1661; Boyle in 1662 included a reference to a paper written by Power. In continental Europe the hypothesis is attributed to Edme Mariotte. In 1680 he was elected president of the Royal Society, but declined the honour from a scruple about oaths. He also left a list of 24 possible future inventions; the list is extraordinary because all but a few of the 24 have come true.
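Boyle's law, stated above, says that at constant temperature the product of pressure and volume is constant for a fixed amount of gas. A minimal sketch (the function name and example figures are my own):

```python
def boyle_final_pressure(p1, v1, v2):
    """Boyle's law: at constant temperature, p1 * v1 == p2 * v2
    for a fixed amount of gas, so p2 = p1 * v1 / v2."""
    return p1 * v1 / v2

# Halving the volume of a gas initially at 101.325 kPa doubles its pressure.
p2 = boyle_final_pressure(101.325, 2.0, 1.0)  # 202.65 kPa
```

The same relation is used the other way round to find the new volume after a pressure change, since p1·v1 = p2·v2 is symmetric in the two pairs.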
52.
Augustin-Louis Cauchy
–
Baron Augustin-Louis Cauchy FRS FRSE was a French mathematician who made pioneering contributions to analysis. He was one of the first to state and prove theorems of calculus rigorously, and he almost singlehandedly founded complex analysis and the study of permutation groups in abstract algebra. A profound mathematician, Cauchy exercised a great influence over his contemporaries. His writings range widely in mathematics and mathematical physics; more concepts and theorems have been named for Cauchy than for any other mathematician. Cauchy was a prolific writer: he wrote approximately eight hundred research articles. Cauchy was the son of Louis François Cauchy and Marie-Madeleine Desestre. Cauchy married Aloise de Bure in 1818; she was a relative of the publisher who published most of Cauchy's works. By her he had two daughters, Marie Françoise Alicia and Marie Mathilde. Cauchy's father was a high official in the Parisian police of the Ancien Régime. He lost his position because of the French Revolution, which broke out one month before Augustin-Louis was born. The Cauchy family survived the revolution and the following Reign of Terror by escaping to Arcueil, where Cauchy received his first education, from his father. After the execution of Robespierre, it was safe for the family to return to Paris, where Louis-François Cauchy found himself a new bureaucratic job and quickly moved up the ranks. When Napoleon Bonaparte came to power, Louis-François Cauchy was further promoted. The famous mathematician Lagrange was also a friend of the Cauchy family. On Lagrange's advice, Augustin-Louis was enrolled in the École Centrale du Panthéon, where most of the curriculum consisted of classical languages; the young and ambitious Cauchy, being a brilliant student, won many prizes in Latin and Humanities. In spite of these successes, Augustin-Louis chose an engineering career.
In 1805 he placed second out of 293 applicants on the entrance exam for the École Polytechnique; one of the main purposes of this school was to give future civil and military engineers a high-level scientific and mathematical education. The school functioned under military discipline, which caused the young and ambitious Cauchy some trouble in adapting. Nevertheless, he finished the Polytechnique in 1807, at the age of 18, and went on to the École des Ponts et Chaussées, graduating in engineering with the highest honors. After finishing school in 1810, Cauchy accepted a job as an engineer in Cherbourg. Cauchy's first two manuscripts were accepted; the third one was rejected.
53.
Jacques Charles
–
Jacques Alexandre César Charles was a French inventor, scientist, mathematician, and balloonist. He was sometimes called Charles the Geometer. His and the Robert brothers' pioneering use of hydrogen for lift led to this type of balloon being named a Charlière. Charles's law, describing how gases tend to expand when heated, was formulated by Joseph Louis Gay-Lussac in 1802, who credited it to Charles's unpublished work. Charles was elected to the Académie des Sciences in 1795 and subsequently became a professor of physics. Charles was born in Beaugency-sur-Loire in 1746. He married Julie Françoise Bouchaud des Hérettes; Charles outlived her and died in Paris on April 7, 1823. The Robert brothers used alternate strips of red and white silk, though the varnishing process discoloured the silk. Jacques Charles and the Robert brothers launched the world's first hydrogen-filled balloon on August 27, 1783, from the Champ de Mars. The balloon was comparatively small, a 35 cubic metre sphere of rubberised silk, and only capable of lifting about 9 kg. It was filled with hydrogen that had been made by pouring nearly a quarter of a tonne of sulphuric acid onto half a tonne of scrap iron. The hydrogen gas was fed into the balloon via lead pipes. Daily progress bulletins were issued on the inflation, and the crowd was so great that on the 26th the balloon was moved secretly by night to the Champ de Mars, a distance of 4 kilometres. The project was funded by a subscription organised by Barthelemy Faujas de Saint-Fond. At 13:45 on December 1, 1783, Jacques Charles and the Robert brothers launched a new manned balloon from the Jardin des Tuileries in Paris. Jacques Charles was accompanied by Nicolas-Louis Robert as co-pilot of the 380-cubic-metre hydrogen balloon. The envelope was fitted with a hydrogen release valve and was covered with a net from which the basket was suspended. Sand ballast was used to control altitude. They ascended to a height of about 1,800 feet and landed at sunset in Nesles-la-Vallée after a 2-hour 5-minute flight covering 36 km.
The chasers on horseback, led by the Duc de Chartres, held the craft down while the aeronauts alighted. Jacques Charles then decided to ascend again, but alone this time because the balloon had lost some of its hydrogen. This time it ascended rapidly to an altitude of about 3,000 metres, where he began suffering from aching pain in his ears, so he valved to release gas and descended to land gently about 3 km away at Tour du Lay. Unlike the Robert brothers, Charles never flew again, although a hydrogen balloon came to be called a Charlière in his honour. Among the special-enclosure crowd was Benjamin Franklin, the diplomatic representative of the United States of America. Also present was Joseph Montgolfier, whom Charles honoured by asking him to release the small, bright green pilot balloon to assess the wind and weather conditions. This event took place ten days after the world's first manned flight by Jean-François Pilâtre de Rozier using a Montgolfier brothers hot air balloon. Simon Schama wrote in Citizens that Montgolfier's principal scientific collaborator was M. Charles, who had been the first to propose the use of the gas produced by vitriol in place of the burning, dampened straw and wood that had been used in earlier flights. Charles himself was eager to ascend but had run into a firm veto from the King.
54.
Leonhard Euler
–
He also introduced much of the modern mathematical terminology and notation, particularly for mathematical analysis, such as the notion of a mathematical function. He is also known for his work in mechanics, fluid dynamics, optics, and astronomy. Euler was one of the most eminent mathematicians of the 18th century, and is held to be one of the greatest in history. He is also considered to be the most prolific mathematician of all time; his collected works fill 60 to 80 quarto volumes, more than those of anybody else in the field. He spent most of his adult life in Saint Petersburg, Russia, and in Berlin, then the capital of Prussia. A statement attributed to Pierre-Simon Laplace expresses Euler's influence on mathematics: "Read Euler, read Euler, he is the master of us all." Leonhard Euler was born on 15 April 1707, in Basel, Switzerland, to Paul III Euler, a pastor of the Reformed Church, and Marguerite née Brucker, a pastor's daughter. He had two sisters, Anna Maria and Maria Magdalena, and a younger brother, Johann Heinrich. Soon after the birth of Leonhard, the Eulers moved from Basel to the town of Riehen. Paul Euler was a friend of the Bernoulli family; Johann Bernoulli was then regarded as Europe's foremost mathematician, and would eventually be the most important influence on young Leonhard. Euler's formal education started in Basel, where he was sent to live with his maternal grandmother. In 1720, aged thirteen, he enrolled at the University of Basel. During that time, he was receiving Saturday afternoon lessons from Johann Bernoulli, who quickly discovered his new pupil's incredible talent for mathematics. In 1726, Euler completed a dissertation on the propagation of sound with the title De Sono; at that time, he was unsuccessfully attempting to obtain a position at the University of Basel. In 1727, he first entered the Paris Academy Prize Problem competition; Pierre Bouguer, who became known as "the father of naval architecture", won, and Euler took second place.
Euler later won this annual prize twelve times. Around this time Johann Bernoulli's two sons, Daniel and Nicolaus, were working at the Imperial Russian Academy of Sciences in Saint Petersburg. In November 1726 Euler eagerly accepted an offer of a post at the academy, but delayed making the trip to Saint Petersburg while he applied for a physics professorship at the University of Basel. Euler arrived in Saint Petersburg on 17 May 1727 and was promoted from his junior post in the medical department of the academy to a position in the mathematics department. He lodged with Daniel Bernoulli, with whom he worked in close collaboration. Euler mastered Russian and settled into life in Saint Petersburg. He also took on a job as a medic in the Russian Navy. The Academy at Saint Petersburg, established by Peter the Great, was intended to improve education in Russia; as a result, it was made especially attractive to foreign scholars like Euler.
55.
Joseph Louis Gay-Lussac
–
Joseph Louis Gay-Lussac was a French chemist and physicist. Gay-Lussac was born at Saint-Léonard-de-Noblat in the department of Haute-Vienne. His father, Anthony Gay, son of a doctor, was a lawyer and prosecutor, and worked as a judge in Noblat Bridge. Father of two sons and three daughters, he owned much of the Lussac village and usually added the name of this hamlet of the Haute-Vienne to his name; towards the year 1803, father and son finally adopted the name Gay-Lussac. During the Revolution, his father was detained under the Law of Suspects. Gay-Lussac received his early education at the hands of the Catholic Abbey of Bourdeix, though later in life he became an atheist. In the care of the Abbot of Dumonteil he began his education in Paris. Gay-Lussac narrowly avoided conscription, and by the time of his entry to the École Polytechnique his father had been arrested. Three years later, Gay-Lussac transferred to the École des Ponts et Chaussées. In 1802, he was appointed demonstrator to A. F. Fourcroy at the École Polytechnique, where he later became professor of chemistry. From 1808 to 1832, he was professor of physics at the Sorbonne. In 1821, he was elected a foreign member of the Royal Swedish Academy of Sciences. In 1831 he was elected to represent Haute-Vienne in the chamber of deputies, and he was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1832. Gay-Lussac married Geneviève-Marie-Joseph Rojot in 1809; he had first met her when she worked as a linen draper's shop assistant and was studying a chemistry textbook under the counter. He fathered five children, of whom the eldest, Jules, became assistant to Justus Liebig in Giessen; some publications by Jules are mistaken for his father's today since they share the same first initial. Gay-Lussac died in Paris, and his grave is at Père Lachaise Cemetery. His name is one of the 72 names inscribed on the Eiffel Tower.
1802 – Gay-Lussac first formulated the law now known as Gay-Lussac's law, stating that if the mass and volume of a gas are held constant, its pressure increases linearly with absolute temperature. His work was preceded by that of Guillaume Amontons, who established the rough relation without the use of accurate thermometers. The law is written as p = kT, where k is a constant dependent on the mass and volume of the gas. 1804 – He and Jean-Baptiste Biot made a balloon ascent to a height of 7,016 metres in an early investigation of the Earth's atmosphere; he wanted to collect samples of the air at different heights to record differences in temperature and moisture. 1805 – Together with his friend and scientific collaborator Alexander von Humboldt, he discovered that the composition of the atmosphere does not change with decreasing pressure. They also discovered that water is formed by two parts of hydrogen and one part of oxygen. 1808 – He was the co-discoverer of boron.
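The law p = kT above implies that for a sealed, rigid container, pressure scales with the ratio of absolute temperatures. A minimal sketch (the function name and example figures are my own):

```python
def gay_lussac_pressure(p1, t1_kelvin, t2_kelvin):
    """Gay-Lussac's law: at fixed mass and volume, p / T is constant
    (p = k * T), so p2 = p1 * T2 / T1. Temperatures must be absolute (kelvin)."""
    return p1 * t2_kelvin / t1_kelvin

# Heating a sealed container from 300 K to 600 K doubles the pressure.
p2 = gay_lussac_pressure(100.0, 300.0, 600.0)  # 200.0 kPa
```

Using Celsius here is the classic mistake: the law only holds on an absolute temperature scale, which is why the constant k absorbs the mass and volume but not an additive offset.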
56.
Robert Hooke
–
Robert Hooke FRS was an English natural philosopher, architect and polymath. Allan Chapman has characterised him as "England's Leonardo". Robert Gunther's Early Science in Oxford, a history of science in Oxford during the Protectorate, Restoration and Age of Enlightenment, devotes five of its fourteen volumes to Hooke. Hooke studied at Wadham College, Oxford, during the Protectorate, where he became one of a tightly knit group of ardent Royalists led by John Wilkins. Here he was employed as an assistant to Thomas Willis and to Robert Boyle. He built some of the earliest Gregorian telescopes and observed the rotations of Mars and Jupiter. In 1665 he inspired the use of microscopes for scientific exploration with his book Micrographia. Based on his microscopic observations of fossils, Hooke was an early proponent of biological evolution. Much of Hooke's scientific work was conducted in his capacity as curator of experiments of the Royal Society. Much of what is known of Hooke's early life comes from an autobiography that he commenced in 1696 but never completed. Richard Waller mentions it in his introduction to The Posthumous Works of Robert Hooke; the work of Waller, along with John Ward's Lives of the Gresham Professors and John Aubrey's Brief Lives, forms the major near-contemporaneous biographical accounts of Hooke. Robert Hooke was born in 1635 in Freshwater on the Isle of Wight to John Hooke. Robert was the last of four children, two boys and two girls, and there was an age difference of seven years between him and the next youngest. Their father John was a Church of England priest, the curate of Freshwater's Church of All Saints, and Robert Hooke was expected to succeed in his education and join the Church. John Hooke also was in charge of a school, and so was able to teach Robert. He was a Royalist and almost certainly a member of a group who went to pay their respects to Charles I when he escaped to the Isle of Wight; Robert, too, grew up to be a staunch monarchist.
As a youth, Robert Hooke was fascinated by observation and mechanical works. He dismantled a brass clock and built a wooden replica that, by all accounts, worked well enough, and he learned to draw, making his own materials from coal, chalk and ruddle. Hooke quickly mastered Latin and Greek and made some study of Hebrew. Here, too, he embarked on his study of mechanics. It appears that Hooke was one of a group of students whom Busby educated in parallel to the work of the school; contemporary accounts say he was "not much seen" in the school. In 1653, Hooke secured a chorister's place at Christ Church, Oxford. He was employed as an assistant to Dr Thomas Willis. There he met the natural philosopher Robert Boyle and gained employment as his assistant from about 1655 to 1662, constructing, operating and demonstrating Boyle's air pump. He did not take his Master of Arts degree until 1662 or 1663.
57.
Blaise Pascal
–
Blaise Pascal was a French mathematician, physicist, inventor, writer and Christian philosopher. He was a child prodigy who was educated by his father. Pascal also wrote in defence of the scientific method. In 1642, while still a teenager, he started some pioneering work on calculating machines; after three years of effort and 50 prototypes, he built 20 finished machines over the following 10 years. Following Galileo Galilei and Torricelli, in 1647 he rebutted Aristotle's followers who insisted that nature abhors a vacuum. Pascal's results caused many disputes before being accepted. In 1646, he and his sister Jacqueline identified with the religious movement within Catholicism known by its detractors as Jansenism. Following a religious experience in late 1654, he began writing works on philosophy and theology. His two most famous works date from this period: the Lettres provinciales and the Pensées, the former set in the conflict between Jansenists and Jesuits. In that year, he also wrote an important treatise on the arithmetical triangle. Between 1658 and 1659 he wrote on the cycloid and its use in calculating the volume of solids. Pascal had poor health, especially after the age of 18, and he died just two months after his 39th birthday. Pascal was born in Clermont-Ferrand, in France's Auvergne region. He lost his mother, Antoinette Begon, at the age of three. His father, Étienne Pascal, who also had an interest in science and mathematics, was a local judge. Pascal had two sisters, the younger Jacqueline and the elder Gilberte. In 1631, five years after the death of his wife, Étienne moved with his children to Paris. The newly arrived family soon hired Louise Delfault, a maid who eventually became an instrumental member of the family. Étienne, who never remarried, decided that he alone would educate his children, for they all showed extraordinary intellectual ability; the young Pascal showed an amazing aptitude for mathematics and science.
Particularly of interest to Pascal was a work of Desargues on conic sections. Following Desargues' thinking, the young Pascal produced a treatise on conic sections containing what is now known as Pascal's theorem: it states that if a hexagon is inscribed in a circle then the three intersection points of opposite sides lie on a line. Pascal's work was so precocious that Descartes was convinced that Pascal's father had written it. In France at that time offices and positions could be, and were, bought and sold. In 1631 Étienne sold his position as president of the Cour des Aides for 65,665 livres. The money was invested in a government bond which provided, if not a lavish, then certainly a comfortable income, which allowed the Pascal family to move to Paris. But in 1638 Richelieu, desperate for money to carry on the Thirty Years' War, defaulted on the government's bonds. Suddenly Étienne Pascal's worth had dropped from nearly 66,000 livres to less than 7,300. It was only when Jacqueline performed well in a children's play with Richelieu in attendance that Étienne was pardoned.
58.
Isaac Newton
–
His book Philosophiæ Naturalis Principia Mathematica, first published in 1687, laid the foundations of classical mechanics. Newton also made contributions to optics, and he shares credit with Gottfried Wilhelm Leibniz for developing the infinitesimal calculus. Newton's Principia formulated the laws of motion and universal gravitation that dominated scientists' view of the universe for the next three centuries. Newton's work on light was collected in his influential book Opticks. He also formulated an empirical law of cooling and made the first theoretical calculation of the speed of sound. Newton was a fellow of Trinity College and the second Lucasian Professor of Mathematics at the University of Cambridge. Politically and personally tied to the Whig party, Newton served two brief terms as Member of Parliament for the University of Cambridge, in 1689–90 and 1701–02. He was knighted by Queen Anne in 1705, and he spent the last three decades of his life in London, serving as Warden and Master of the Royal Mint. His father, also named Isaac Newton, had died three months before his birth. Born prematurely, he was a small child; his mother Hannah Ayscough reportedly said that he could have fit inside a quart mug. When Newton was three, his mother remarried and went to live with her new husband, the Reverend Barnabas Smith, leaving her son in the care of his maternal grandmother. Newton's mother had three children from her second marriage. From the age of twelve until he was seventeen, Newton was educated at The King's School, Grantham, which taught Latin and Greek. He was removed from school, and by October 1659 he was to be found at Woolsthorpe-by-Colsterworth. Henry Stokes, master at The King's School, persuaded his mother to send him back to school so that he might complete his education. Motivated partly by a desire for revenge against a schoolyard bully, he became the top-ranked student.
In June 1661, he was admitted to Trinity College, Cambridge. He started as a subsizar, paying his way by performing valet's duties, until he was awarded a scholarship in 1664, which guaranteed him four more years until he could get his M.A. He set down in his notebook a series of "Quaestiones" about mechanical philosophy as he found it. In 1665, he discovered the generalised binomial theorem and began to develop a mathematical theory that later became calculus. Soon afterwards, Newton obtained his B.A. degree in August 1665. In April 1667, he returned to Cambridge and in October was elected as a fellow of Trinity. Fellows were required to become ordained priests, although this was not enforced in the Restoration years; however, by 1675 the issue could not be avoided, and by then his unconventional views stood in the way. Nevertheless, Newton managed to avoid ordination by means of a special permission from Charles II. He was elected a Fellow of the Royal Society in 1672. Newton's work has been said to "distinctly advance every branch of mathematics then studied". His work on the subject usually referred to as fluxions or calculus, seen in a manuscript of October 1666, is now published among Newton's mathematical papers.
59.
Claude-Louis Navier
–
Claude-Louis Navier was a French engineer and physicist who specialized in mechanics. The Navier–Stokes equations are named after him and George Gabriel Stokes. After the death of his father in 1793, Navier's mother left his education in the hands of his uncle Émiland Gauthey, an engineer with the Corps of Bridges and Roads. In 1802, Navier enrolled at the École polytechnique, and in 1804 continued his studies at the École Nationale des Ponts et Chaussées. He eventually succeeded his uncle as Inspecteur général at the Corps des Ponts et Chaussées. He directed the construction of bridges at Choisy, Asnières and Argenteuil in the Department of the Seine. In 1824, Navier was admitted into the French Academy of Sciences. Navier formulated the general theory of elasticity in a mathematically usable form, and is therefore considered to be the founder of modern structural analysis. His major contribution, however, remains the Navier–Stokes equations, central to fluid mechanics. His name is one of the 72 names inscribed on the Eiffel Tower.
60.
Sir George Stokes, 1st Baronet
–
Sir George Gabriel Stokes, 1st Baronet, PRS, was a physicist and mathematician. Born in Ireland, Stokes spent all of his career at the University of Cambridge. In physics, Stokes made seminal contributions to fluid dynamics and to physical optics, and in mathematics he formulated the first version of what is now known as Stokes' theorem. He served as secretary, then president, of the Royal Society of London. George Stokes was the youngest son of the Reverend Gabriel Stokes, rector of Skreen, County Sligo, Ireland, where he was born and brought up in an evangelical Protestant family. In accordance with the statutes, he had to resign his fellowship when he married in 1857. He retained his place on the foundation until 1902, when, on the day before his 83rd birthday, he was elected Master of Pembroke College. He did not hold this position for long, for he died at Cambridge on 1 February the following year and was buried in the Mill Road cemetery. In 1849, Stokes was appointed to the Lucasian professorship of mathematics at Cambridge; on 1 June 1899, the jubilee of this appointment was celebrated there in a ceremony attended by numerous delegates from European and American universities. Stokes, who was made a baronet in 1889, further served his university by representing it in Parliament from 1887 to 1892 as one of the two members for the Cambridge University constituency. During a portion of this period he also was president of the Royal Society; since he was also Lucasian Professor at this time, Stokes was the first person to hold all three positions simultaneously (Newton held the same three, although not at the same time). Stokes's original work began about 1840, and from that date onwards the great extent of his output was only less remarkable than the brilliance of its quality. The Royal Society's catalogue of scientific papers gives the titles of over a hundred memoirs by him published down to 1883. Some of these are brief notes, others are short controversial or corrective statements.
His first published papers, which appeared in 1842 and 1843, were on the steady motion of incompressible fluids. His work on fluid motion and viscosity led to his calculating the terminal velocity for a sphere falling in a viscous medium. This became known as Stokes' law: he derived an expression for the frictional force exerted on spherical objects with very small Reynolds numbers. His work is the basis of the falling-sphere viscometer, in which the fluid is stationary in a vertical glass tube. A sphere of known size and density is allowed to descend through the liquid. If correctly selected, it reaches terminal velocity, which can be measured by the time it takes to pass two marks on the tube; electronic sensing can be used for opaque fluids. Knowing the terminal velocity and the size and density of the sphere, the viscosity of the fluid can be calculated. A series of steel ball bearings of different diameters is normally used in the classic experiment to improve the accuracy of the calculation.
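The viscometer calculation above rests on balancing Stokes drag (F = 6πμrv) against buoyant weight, which gives the terminal velocity v_t = 2(ρ_p − ρ_f)gr²/(9μ). A sketch of the forward calculation, assuming representative material values (the function name and sample densities/viscosity are mine, not from the source):

```python
def stokes_terminal_velocity(radius_m, particle_density, fluid_density,
                             viscosity_pa_s, g=9.81):
    """Terminal velocity of a small sphere in creeping (low Reynolds number) flow.

    Balancing Stokes drag 6*pi*mu*r*v against the sphere's weight minus buoyancy
    gives v_t = 2/9 * (rho_p - rho_f) * g * r**2 / mu.
    Valid only when the resulting Reynolds number is very small.
    """
    return 2.0 / 9.0 * (particle_density - fluid_density) * g * radius_m**2 / viscosity_pa_s

# A 1 mm diameter steel ball (~7800 kg/m^3) in glycerol (~1260 kg/m^3, mu ~ 1.4 Pa*s)
# settles at a few millimetres per second.
v = stokes_terminal_velocity(0.5e-3, 7800.0, 1260.0, 1.4)
```

In the viscometer the formula is inverted: v_t is measured between the two marks, and μ is solved for from the same balance.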
61.
Barometer
–
A barometer is a scientific instrument used in meteorology to measure atmospheric pressure. Pressure tendency can forecast short-term changes in the weather, and numerous measurements of air pressure are used within surface weather analysis to help find surface troughs, high-pressure systems and frontal boundaries. Barometers and pressure altimeters are essentially the same instrument, but used for different purposes: an altimeter converts the measured pressure into an altitude, while a barometer is kept at a fixed site to track pressure changes caused by weather. The main exception to this is ships at sea, which can use a barometer because their elevation does not change. Due to the presence of weather systems, aircraft altimeters may need to be adjusted as they fly between regions of varying normalized atmospheric pressure. On July 27, 1630, Giovanni Battista Baliani wrote a letter to Galileo Galilei explaining an experiment he had made in which a siphon, led over a hill about twenty-one metres high, failed to work. Galileo's response, that the water was held up by the power of the vacuum only up to a certain height, was a restatement of the theory of horror vacui, which dates to Aristotle. Galileo's ideas reached Rome in December 1638 in his Discorsi. Raffaele Magiotti and Gasparo Berti were excited by these ideas, and Magiotti devised such an experiment; sometime between 1639 and 1641, Berti carried it out. The bottom end of the tube was opened, and water that had been inside of it poured out into the basin. What was most important about this experiment was that the water had left a space above it in the tube which had no intermediate contact with air to fill it up. This seemed to suggest the possibility of a vacuum existing in the space above the water. Torricelli, a friend and student of Galileo, interpreted the results of the experiments in a novel way: he proposed that the weight of the atmosphere, not an attracting force of the vacuum, held up the column of water. It was traditionally thought that air did not have weight; even Galileo had accepted the weightlessness of air as a simple truth. Torricelli questioned that assumption, and instead proposed that air had weight and that it was the latter which held up the column of water.
He thought that the level the water stayed at was reflective of the force of the air's weight pushing on it. In other words, he viewed the barometer as a balance, an instrument for measurement. He needed to use a liquid that was heavier than water, and from his previous association with Galileo and suggestions by him, he deduced that by using mercury, a shorter tube could be used, since mercury is about 14 times denser than water. Pascal further devised an experiment to test the Aristotelian proposition that it was vapors from the liquid that filled the space in a barometer. His experiment compared water with wine, and since the latter was considered more spiritous, Pascal performed the experiment publicly, inviting the Aristotelians to predict the outcome beforehand. The Aristotelians predicted the wine would stand lower; however, Pascal went even further to test the mechanical theory
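Torricelli's reasoning above can be made concrete with a short sketch: the height of liquid a given air pressure can support is h = P / (ρg), so a denser liquid needs a shorter tube. This is an illustrative calculation using standard values, not a figure from the historical experiments.

```python
# Height of a liquid column balanced by atmospheric pressure: h = P / (rho * g).
G = 9.80665          # standard gravity, m/s^2
P_ATM = 101325.0     # standard atmospheric pressure, Pa

def column_height(density_kg_m3: float) -> float:
    """Height (m) of a liquid column that one atmosphere can support."""
    return P_ATM / (density_kg_m3 * G)

water = column_height(1000.0)     # ~10.3 m: why Berti's water tube was so tall
mercury = column_height(13595.0)  # ~0.76 m: Torricelli's much shorter mercury tube
```

The roughly 14:1 ratio of the two heights mirrors the density ratio of water to mercury, which is exactly the shortcut Torricelli exploited.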
62.
Atmosphere of Earth
–
The atmosphere of Earth is the layer of gases, commonly known as air, that surrounds the planet Earth and is retained by Earth's gravity. The atmosphere of Earth protects life on Earth by absorbing solar radiation and warming the surface through heat retention. By volume, dry air contains 78.09% nitrogen, 20.95% oxygen, 0.93% argon, 0.04% carbon dioxide, and small amounts of other gases. Air also contains a variable amount of water vapor, on average around 1% at sea level. The atmosphere has a mass of about 5.15×10¹⁸ kg and becomes thinner and thinner with increasing altitude, with no definite boundary between the atmosphere and outer space. The Kármán line, at 100 km, or 1.57% of Earth's radius, is often used as the border between the atmosphere and outer space. Atmospheric effects become noticeable during atmospheric reentry of spacecraft at an altitude of around 120 km. Several layers can be distinguished in the atmosphere, based on characteristics such as temperature and composition. The study of Earth's atmosphere and its processes is called atmospheric science; early pioneers in the field include Léon Teisserenc de Bort and Richard Assmann. The three major constituents of air, and therefore of Earth's atmosphere, are nitrogen, oxygen, and argon; water vapor accounts for roughly 0.25% of the atmosphere by mass. The remaining gases are often referred to as trace gases, among which are the greenhouse gases, principally carbon dioxide, methane and nitrous oxide. Filtered air includes trace amounts of many other chemical compounds. Various industrial pollutants also may be present as gases or aerosols, such as chlorine and fluorine compounds; sulfur compounds such as hydrogen sulfide and sulfur dioxide may be derived from natural sources or from industrial air pollution. In general, air pressure and density decrease with altitude in the atmosphere; however, temperature has a more complicated profile with altitude, and may remain relatively constant or even increase with altitude in some regions.
In this way, Earth's atmosphere can be divided into five main layers; excluding the exosphere, it has four primary layers, which are the troposphere, stratosphere, mesosphere, and thermosphere. The exosphere extends from the exobase, which is located at the top of the thermosphere at an altitude of about 700 km above sea level, to about 10,000 km, where it merges into the solar wind. This layer is composed of extremely low densities of hydrogen, helium and several heavier molecules including nitrogen and oxygen. The atoms and molecules are so far apart that they can travel hundreds of kilometers without colliding with one another; thus, the exosphere no longer behaves like a gas, and the particles constantly escape into space. These free-moving particles follow ballistic trajectories and may migrate in and out of the magnetosphere or the solar wind. The exosphere is located too far above Earth for any meteorological phenomena to be possible
63.
Fluid pressure
–
Pressure is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure is the pressure relative to the ambient pressure. Various units are used to express pressure. Pressure may also be expressed in terms of standard atmospheric pressure: the atmosphere (atm) is equal to this pressure, and the torr is defined as 1⁄760 of it. Manometric units, such as the centimetre of water and the millimetre of mercury, express pressure as the height of a column of a particular fluid. Pressure is the amount of force acting per unit area. The symbol for it is p or P; the IUPAC recommendation for pressure is a lower-case p, but upper-case P is widely used. The usage of P vs p depends upon the field in which one is working and on the nearby presence of other symbols for quantities such as power and momentum. Mathematically, p = F/A, where p is the pressure, F is the magnitude of the normal force, and A is the area of the surface; the relation connects the vector surface element with the normal force acting on it. It is incorrect to say the pressure is directed in such or such direction: the pressure, as a scalar, has no direction, while the force given by this relationship does. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume. The SI unit for pressure is the pascal, equal to one newton per square metre; this name for the unit was added in 1971, and before that, pressure in SI was expressed simply in newtons per square metre. Other units of pressure, such as pounds per square inch, are also in common use. The CGS unit of pressure is the barye, equal to 1 dyn·cm−2 or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre, but using the names kilogram, gram, kilogram-force, or gram-force as units of force is expressly forbidden in SI.
The technical atmosphere is 1 kgf/cm2. Since a system under pressure has the potential to perform work on its surroundings, pressure is a measure of potential energy stored per unit volume. It is therefore related to energy density and may be expressed in units such as joules per cubic metre. Similar pressures are given in kilopascals in most other fields, where the hecto- prefix is rarely used
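The defining relation p = F/A above can be sketched in a few lines; the two example cases are illustrative values, not figures from the text.

```python
# Pressure from a normal force distributed over an area: p = F / A.
def pressure(force_n: float, area_m2: float) -> float:
    """Pressure in pascals given a normal force (N) over an area (m^2)."""
    return force_n / area_m2

# The same 100 N force gives very different pressures on different areas:
p_palm = pressure(100.0, 0.01)    # 100 N over 100 cm^2 -> 10 kPa
p_needle = pressure(100.0, 1e-6)  # 100 N over 1 mm^2 -> 100 MPa
```

This is why pressure, not force alone, determines whether a surface yields: the needle concentrates the same force onto a far smaller area.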
64.
Weight
–
In science and engineering, the weight of an object is usually taken to be the force on the object due to gravity. Weight is a vector whose magnitude, often denoted by an italic letter W, is the product of the mass m of the object and the magnitude of the local gravitational acceleration g. The unit of measurement for weight is that of force, which in the International System of Units is the newton. For example, an object with a mass of one kilogram has a weight of about 9.8 newtons on the surface of the Earth; in this sense of weight, a body can be weightless only if it is far away from any other mass. Although weight and mass are scientifically distinct quantities, the terms are often confused with each other in everyday use. There is also a tradition within Newtonian physics and engineering which sees weight as that which is measured when one uses scales. There the weight is a measure of the magnitude of the reaction force exerted on a body. Typically, in measuring an object's weight, the object is placed on scales at rest with respect to the Earth; thus, in a state of free fall, the weight would be zero. In this second sense of weight, terrestrial objects can be weightless: ignoring air resistance, the famous apple falling from the tree, on its way to meet the ground near Isaac Newton, is weightless. Further complications in elucidating the various concepts of weight have to do with the theory of relativity, according to which gravity is modelled as a consequence of the curvature of spacetime. In the teaching community, a debate has existed for over half a century on how to define weight for students. The current situation is that a set of concepts co-exist. Discussion of the concepts of heaviness and lightness dates back to the ancient Greek philosophers; these were typically viewed as inherent properties of objects. Plato described weight as the natural tendency of objects to seek their kin. To Aristotle, weight and levity represented the tendency to restore the natural order of the basic elements: air, earth, fire and water.
He ascribed absolute weight to earth and absolute levity to fire. Archimedes saw weight as a quality opposed to buoyancy, with the conflict between the two determining if an object sinks or floats. The first operational definition of weight was given by Euclid, who defined weight as "the heaviness or lightness of one thing, compared to another, as measured by a balance"; operational balances had, however, been around much longer. According to Aristotle, weight was the direct cause of the falling motion of an object
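The W = mg relation in the entry above can be sketched directly; the lunar gravity value is a standard approximation introduced here for illustration.

```python
# Weight as the product of mass and local gravitational acceleration: W = m * g.
G_EARTH = 9.80665  # standard gravity on Earth, m/s^2
G_MOON = 1.62      # approximate lunar surface gravity, m/s^2

def weight(mass_kg: float, g: float) -> float:
    """Weight in newtons of a mass in a gravitational field of strength g."""
    return mass_kg * g

w_earth = weight(1.0, G_EARTH)  # ~9.8 N, as stated in the text
w_moon = weight(1.0, G_MOON)    # ~1.6 N: same mass, smaller weight
```

The mass argument is identical in both calls; only g changes, which is exactly the mass/weight distinction the entry draws.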
65.
Earth's atmosphere
–
The atmosphere of Earth is the layer of gases, commonly known as air, that surrounds the planet Earth and is retained by Earth's gravity. The atmosphere of Earth protects life on Earth by absorbing solar radiation and warming the surface through heat retention. By volume, dry air contains 78.09% nitrogen, 20.95% oxygen, 0.93% argon, 0.04% carbon dioxide, and small amounts of other gases. Air also contains a variable amount of water vapor, on average around 1% at sea level. The atmosphere has a mass of about 5.15×10¹⁸ kg and becomes thinner and thinner with increasing altitude, with no definite boundary between the atmosphere and outer space. The Kármán line, at 100 km, or 1.57% of Earth's radius, is often used as the border between the atmosphere and outer space. Atmospheric effects become noticeable during atmospheric reentry of spacecraft at an altitude of around 120 km. Several layers can be distinguished in the atmosphere, based on characteristics such as temperature and composition. The study of Earth's atmosphere and its processes is called atmospheric science; early pioneers in the field include Léon Teisserenc de Bort and Richard Assmann. The three major constituents of air, and therefore of Earth's atmosphere, are nitrogen, oxygen, and argon; water vapor accounts for roughly 0.25% of the atmosphere by mass. The remaining gases are often referred to as trace gases, among which are the greenhouse gases, principally carbon dioxide, methane and nitrous oxide. Filtered air includes trace amounts of many other chemical compounds. Various industrial pollutants also may be present as gases or aerosols, such as chlorine and fluorine compounds; sulfur compounds such as hydrogen sulfide and sulfur dioxide may be derived from natural sources or from industrial air pollution. In general, air pressure and density decrease with altitude in the atmosphere; however, temperature has a more complicated profile with altitude, and may remain relatively constant or even increase with altitude in some regions.
In this way, Earth's atmosphere can be divided into five main layers; excluding the exosphere, it has four primary layers, which are the troposphere, stratosphere, mesosphere, and thermosphere. The exosphere extends from the exobase, which is located at the top of the thermosphere at an altitude of about 700 km above sea level, to about 10,000 km, where it merges into the solar wind. This layer is composed of extremely low densities of hydrogen, helium and several heavier molecules including nitrogen and oxygen. The atoms and molecules are so far apart that they can travel hundreds of kilometers without colliding with one another; thus, the exosphere no longer behaves like a gas, and the particles constantly escape into space. These free-moving particles follow ballistic trajectories and may migrate in and out of the magnetosphere or the solar wind. The exosphere is located too far above Earth for any meteorological phenomena to be possible
66.
Elevation
–
GIS, or geographic information system, is a computer system that allows for visualizing, manipulating, capturing, and storing data with associated attributes. GIS offers a better understanding of patterns and relationships of the landscape at different scales; tools inside the GIS allow for manipulation of data for spatial analysis or cartography. A topographical map is the main type of map used to depict elevation. In a Geographic Information System, digital elevation models are commonly used to represent the surface of a place. Digital terrain models are another way to represent terrain in GIS. The USGS is developing a 3D Elevation Program (3DEP) to keep up with growing needs for high-quality topographic data. 3DEP is a collection of enhanced elevation data in the form of high-quality LiDAR data over the conterminous United States and Hawaii. There are three bare-earth DEM layers in 3DEP which are nationally seamless at resolutions of 1/3, 1, and 2 arc-seconds. This map is derived from GTOPO30 data that describes the elevation of Earth's terrain at intervals of 30 arc-seconds; it uses color and shading instead of contour lines to indicate elevation. Hypsography is the study of the distribution of elevations on the surface of the Earth; the term originates from the Greek word ὕψος (hypsos) meaning height. Most often it is used only in reference to elevation of land. Related is the term hypsometry; the measurement of these elevations of a planet's solid surface is taken relative to a mean datum, except for Earth, where it is taken relative to sea level. In the troposphere, temperatures decrease with altitude; this lapse rate is approximately 6.5 °C/km.
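The lapse rate quoted above lends itself to a one-line model. This is a rough sketch assuming a linear temperature decrease from a 15 °C sea-level baseline (the standard-atmosphere convention), valid only within the troposphere.

```python
# Approximate tropospheric temperature using the ~6.5 degC/km lapse rate.
LAPSE_RATE_C_PER_KM = 6.5

def temperature_at(altitude_km: float, sea_level_c: float = 15.0) -> float:
    """Temperature (degC) at a given altitude, assuming a linear lapse rate."""
    return sea_level_c - LAPSE_RATE_C_PER_KM * altitude_km

t_5km = temperature_at(5.0)      # -17.5 degC at 5 km
t_everest = temperature_at(8.8)  # roughly -42 degC near Everest's summit height
```

Real profiles deviate from this (inversions, the tropopause), which is why the entry calls temperature's altitude profile "more complicated" than pressure's.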
67.
International System of Units
–
The International System of Units (SI) is the modern form of the metric system, and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units; the system also establishes a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system was published in 1960 as the result of an initiative that began in 1948. It is based on the metre-kilogram-second system of units rather than any variant of the centimetre-gram-second system. The motivation for the development of the SI was the diversity of units that had sprung up within the CGS systems. The International System of Units has been adopted by most developed countries; however, the adoption has not been universal in all English-speaking countries. The metric system was first implemented during the French Revolution with just the metre and kilogram as standards of length and mass. In the 1830s Carl Friedrich Gauss laid the foundations for a coherent system based on length, mass, and time. In the 1860s a group working under the auspices of the British Association for the Advancement of Science formulated the requirement for a coherent system of units with base units and derived units. Meanwhile, in 1875, the Treaty of the Metre passed responsibility for verification of the kilogram and metre to new international bodies. In 1921, the Treaty was extended to include all physical quantities, including electrical units originally defined in 1893. The units associated with these quantities were the metre, kilogram, second, ampere, kelvin and candela. In 1971, a seventh base quantity, amount of substance, represented by the mole, was added to the definition of SI. On 11 July 1792, the commission proposed the names metre, are, litre and grave for the units of length, area, capacity and mass, respectively.
The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes such as centi for a hundredth. On 10 December 1799, the law by which the metric system was to be definitively adopted in France was passed. Prior to Gauss's work, the strength of the magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a suspended magnet of known mass by the Earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions to the magnetic field based on mass, length and time. A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention. Initially the convention only covered standards for the metre and the kilogram; one of each was selected at random to become the International Prototype Metre and International Prototype Kilogram that replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the prototypes to serve as the national prototype for that country. Initially its prime purpose was a periodic recalibration of national prototype metres. The official language of the Metre Convention is French, and the authoritative version of all official documents published by or on behalf of the CGPM is the French-language version
68.
Pascal (unit)
–
The pascal is the SI derived unit of pressure used to quantify internal pressure, stress, Young's modulus and ultimate tensile strength. It is defined as one newton per square metre and is named after the French polymath Blaise Pascal. Common multiple units of the pascal are the hectopascal, which is equal to one millibar, and the kilopascal. The unit of measurement called standard atmosphere is defined as 101,325 Pa and approximates the average pressure at sea level at the latitude 45° N. Meteorological reports typically state atmospheric pressure in hectopascals. The unit is named after Blaise Pascal, noted for his contributions to hydrodynamics and hydrostatics, and experiments with a barometer. The name pascal was adopted for the SI unit newton per square metre by the 14th General Conference on Weights and Measures in 1971. One pascal is the pressure exerted by a force of magnitude one newton acting perpendicularly upon an area of one square metre. The unit of measurement called atmosphere or standard atmosphere is 101,325 Pa; this value is often used as a reference pressure and specified as such in some national and international standards, such as ISO 2787, ISO 2533 and ISO 5024. In contrast, IUPAC recommends the use of 100 kPa as a standard pressure when reporting the properties of substances. Geophysicists use the gigapascal in measuring or calculating tectonic stresses and pressures within the Earth. Medical elastography measures tissue stiffness non-invasively with ultrasound or magnetic resonance imaging. In materials science and engineering, the pascal measures the stiffness, tensile strength and compressive strength of materials; in engineering use, because the pascal represents a very small pressure, the megapascal is commonly used for these purposes. The pascal is also equivalent to the SI unit of energy density, the joule per cubic metre; this applies not only to the thermodynamics of pressurised gases, but also to the energy density of electric, magnetic, and gravitational fields.
In measurements of sound pressure, or loudness of sound, one pascal is equal to 94 decibels SPL; the quietest sound a human can hear, known as the threshold of hearing, is 0 dB SPL, or 20 µPa. The airtightness of buildings is measured at a pressure difference of 50 Pa. The units of atmospheric pressure commonly used in meteorology were formerly the bar, which was close to the average air pressure on Earth, and the millibar. Since the introduction of SI units, meteorologists generally measure pressures in the hectopascal unit; exceptions include Canada and Portugal, which use kilopascals. In many other fields of science, the SI is preferred. Many countries also use the millibar or hectopascal to give aviation altimeter settings. In practically all other fields, the kilopascal is used instead. See also: centimetre of water, metric prefix, orders of magnitude (pressure), Pascal's law
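The sound-pressure figures above follow from the standard definition SPL = 20·log₁₀(p/p_ref) with p_ref = 20 µPa; a short sketch confirms the two numbers quoted in the text.

```python
import math

# Sound pressure level in dB SPL, referenced to the threshold of hearing.
P_REF = 20e-6  # reference pressure, Pa (20 micropascals)

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level (dB SPL) for an RMS sound pressure in pascals."""
    return 20.0 * math.log10(pressure_pa / P_REF)

spl_1pa = spl_db(1.0)    # ~94 dB SPL, matching the figure in the text
spl_ref = spl_db(P_REF)  # 0 dB SPL by definition of the reference
```

The logarithmic scale is the reason a pascal (a tiny pressure by engineering standards) corresponds to a loud sound: hearing spans many orders of magnitude of pressure.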
69.
Sea level
–
Mean sea level (MSL) is an average level of the surface of one or more of Earth's oceans from which heights such as elevations may be measured. A common and relatively straightforward mean sea-level standard is the midpoint between a mean low and mean high tide at a particular location. Sea levels can be affected by many factors and are known to have varied greatly over geological time scales. The careful measurement of variations in MSL can offer insights into ongoing climate change; the term "above sea level" generally refers to above mean sea level. Precise determination of a mean sea level is a difficult problem because of the many factors that affect sea level. Sea level varies quite a lot on several scales of time; this is because the sea is in constant motion, affected by the tides, wind, atmospheric pressure, local gravitational differences, temperature, salinity and so forth. The easiest way a mean may be calculated is by selecting a location and calculating the mean sea level at that point: for example, a period of 19 years of hourly level observations may be averaged and used to determine the mean sea level at some measurement point. One measures the values of MSL with respect to the land; hence a change in MSL can result from a real change in sea level, or from a change in the height of the land on which the tide gauge operates. In the UK, the Ordnance Datum is the mean sea level measured at Newlyn in Cornwall between 1915 and 1921. Prior to 1921, the datum was MSL at the Victoria Dock. In Hong Kong, mPD is a surveying term meaning "metres above Principal Datum" and refers to heights above a datum 1.230 m below the average sea level. In France, the Marégraphe in Marseilles has measured sea level continuously since 1883; it is used as the official sea level for part of continental Europe and the main part of Africa.
Elsewhere in Europe vertical elevation references are made to the Amsterdam Peil elevation. Satellite altimeters have been making precise measurements of sea level since the launch of TOPEX/Poseidon in 1992; a joint mission of NASA and CNES, TOPEX/Poseidon was followed by Jason-1 in 2001. Height above mean sea level is the elevation or altitude of an object relative to the average sea level datum. It is also used in aviation, where some heights are recorded and reported with respect to mean sea level, and in the atmospheric sciences. An alternative is to base height measurements on an ellipsoid model of the entire Earth; in aviation, the ellipsoid known as World Geodetic System 84 is increasingly used to define heights. However, differences of up to 100 metres exist between this ellipsoid height and mean tidal height. Another alternative is to use a vertical datum such as NAVD88. When referring to geographic features such as mountains on a topographic map, the elevation of a mountain denotes the highest point or summit and is typically illustrated as a small circle on a topographic map with the AMSL height shown in metres, feet or both. In the rare case that a location is below sea level, the elevation is negative; for one such case, see Amsterdam Airport Schiphol
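The averaging procedure described above (long runs of hourly gauge readings reduced to a mean) can be sketched in a few lines. The readings below are invented illustrative values, not real tide-gauge data.

```python
# Mean sea level at a gauge as the arithmetic mean of hourly readings
# taken relative to a fixed local datum.
def mean_sea_level(hourly_heights_m):
    """Average of hourly tide-gauge heights (m) above a local datum."""
    return sum(hourly_heights_m) / len(hourly_heights_m)

# One made-up tidal cycle of readings (m); a real MSL determination would
# average ~19 years of such data to smooth out tidal and weather effects.
readings = [1.9, 1.2, 0.4, 0.1, 0.6, 1.4, 2.0, 1.6, 0.8, 0.2, 0.5, 1.3]
msl = mean_sea_level(readings)
```

The 19-year span mentioned in the entry matters because it covers a full lunar nodal cycle, so slow tidal modulations average out rather than biasing the mean.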
70.
Mass
–
In physics, mass is a property of a physical body. It is the measure of an object's resistance to acceleration when a net force is applied. It also determines the strength of its gravitational attraction to other bodies. The basic SI unit of mass is the kilogram. Mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale, rather than comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity; this is because weight is a force, while mass is the property that determines the strength of this force. In Newtonian physics, mass can be generalized as the amount of matter in an object; however, at very high speeds, special relativity postulates that energy is an additional source of mass. Thus, any body having mass has an equivalent amount of energy. In addition, matter is a loosely defined term in science. There are several distinct phenomena which can be used to measure mass. Active gravitational mass measures the gravitational force exerted by an object. Passive gravitational mass measures the gravitational force exerted on an object in a known gravitational field. The inertial mass of an object determines its acceleration in the presence of an applied force: according to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by F/m. A body's mass also determines the degree to which it generates or is affected by a gravitational field; this is sometimes referred to as gravitational mass. The standard International System of Units unit of mass is the kilogram; the kilogram is 1000 grams, first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. Then in 1889, the kilogram was redefined as the mass of the international prototype kilogram. As of January 2013, there were proposals for redefining the kilogram yet again.
In particle physics, mass often has units of eV/c²; the electronvolt and its multiples, such as the MeV, are commonly used. The atomic mass unit is 1/12 of the mass of a carbon-12 atom; the atomic mass unit is convenient for expressing the masses of atoms and molecules. Outside the SI system, other units of mass include the slug, an Imperial unit of mass, and the pound, a unit of both mass and force, used mainly in the United States
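The eV/c² unit mentioned above comes from mass-energy equivalence, E = mc²: dividing a rest-mass energy in electronvolts by c² gives a mass. A short sketch converts a mass in kilograms to MeV/c² using standard constants.

```python
# Convert a mass in kilograms to MeV/c^2 via E = m * c^2.
C = 2.99792458e8       # speed of light, m/s
J_PER_EV = 1.602176634e-19  # joules per electronvolt

def mass_in_mev(mass_kg: float) -> float:
    """Rest-mass energy in MeV, i.e. the mass expressed in MeV/c^2."""
    return mass_kg * C**2 / J_PER_EV / 1e6

electron = mass_in_mev(9.1093837e-31)  # ~0.511 MeV/c^2, the electron mass
```

The tiny kilogram value maps to a convenient order-unity number, which is exactly why particle physicists prefer eV/c² over SI mass units.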
71.
Newton (unit)
–
The newton is the International System of Units derived unit of force. It is named after Isaac Newton in recognition of his work on classical mechanics; see below for the conversion factors. One newton is the force needed to accelerate one kilogram of mass at the rate of one metre per second squared in the direction of the applied force. In 1948, the 9th CGPM, resolution 7, adopted the name newton for this unit of force. The MKS system then became the blueprint for today's SI system of units, and the newton thus became the standard unit of force in le Système International d'Unités. This SI unit is named after Isaac Newton; as with every International System of Units unit named for a person, the first letter of its symbol is upper case, while the spelled-out unit name begins with a lower-case letter. Note that "degree Celsius" conforms to this rule because the "d" is lowercase. — Based on The International System of Units, section 5.2. Newton's second law of motion states that F = ma, where F is the force applied, m is the mass of the object receiving the force, and a is its acceleration. The newton is therefore 1 N = 1 kg·m/s2, where the symbols are used for the units: N for newton, kg for kilogram, m for metre and s for second. In dimensional analysis, [F] = M L T−2, where F is force, M is mass, L is length and T is time. At average gravity on Earth, a kilogram mass exerts a force of about 9.8 newtons. An average-sized apple exerts about one newton of force, which we measure as the apple's weight. For example, the tractive effort of a Class Y steam train and the thrust of an F100 fighter jet engine are both around 130 kN. One kilonewton, 1 kN, is 102.0 kgf (1 kN = 102 kg × 9.81 m/s2), so for example a platform rated at 321 kilonewtons will safely support a 32,100-kilogram load. Specifications in kilonewtons are common in safety specifications for the holding values of fasteners and Earth anchors, working loads in tension and in shear, thrust of rocket engines and launch vehicles, and clamping forces of the various moulds in injection moulding machines used to manufacture plastic parts
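The kilonewton-to-kilogram-force relation quoted above is a straightforward division by standard gravity; this sketch reproduces the ~102 kg-per-kN figure and the platform example.

```python
# Mass that a force rating in kilonewtons can support against gravity:
# m = F / g, with F in newtons and g the standard gravity.
G = 9.80665  # standard gravity, m/s^2

def max_load_kg(rating_kn: float) -> float:
    """Mass (kg) supported by a force rating given in kilonewtons."""
    return rating_kn * 1000.0 / G

per_kn = max_load_kg(1.0)      # ~102 kg per kilonewton, as stated above
platform = max_load_kg(321.0)  # ~32,700 kg for the 321 kN platform example
```

The entry's 32,100 kg figure is slightly below the computed maximum, as a safety rating should be.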
72.
Units of pressure
–
Pressure is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure is the pressure relative to the ambient pressure. Various units are used to express pressure. Pressure may also be expressed in terms of standard atmospheric pressure: the atmosphere (atm) is equal to this pressure, and the torr is defined as 1⁄760 of it. Manometric units, such as the centimetre of water and the millimetre of mercury, express pressure as the height of a column of a particular fluid. Pressure is the amount of force acting per unit area. The symbol for it is p or P; the IUPAC recommendation for pressure is a lower-case p, but upper-case P is widely used. The usage of P vs p depends upon the field in which one is working and on the nearby presence of other symbols for quantities such as power and momentum. Mathematically, p = F/A, where p is the pressure, F is the magnitude of the normal force, and A is the area of the surface; the relation connects the vector surface element with the normal force acting on it. It is incorrect to say the pressure is directed in such or such direction: the pressure, as a scalar, has no direction, while the force given by this relationship does. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume. The SI unit for pressure is the pascal, equal to one newton per square metre; this name for the unit was added in 1971, and before that, pressure in SI was expressed simply in newtons per square metre. Other units of pressure, such as pounds per square inch, are also in common use. The CGS unit of pressure is the barye, equal to 1 dyn·cm−2 or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre, but using the names kilogram, gram, kilogram-force, or gram-force as units of force is expressly forbidden in SI.
The technical atmosphere is 1 kgf/cm2. Since a system under pressure has the potential to perform work on its surroundings, pressure is a measure of potential energy stored per unit volume. It is therefore related to energy density and may be expressed in units such as joules per cubic metre. Similar pressures are given in kilopascals in most other fields, where the hecto- prefix is rarely used
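The energy-density reading of pressure noted above (1 Pa = 1 J/m³) can be sketched with a deliberately idealised model: for a constant gauge pressure p acting over a volume V, the stored work capacity is p·V. The numbers below are illustrative, and real gas expansion would not hold pressure constant.

```python
# Pressure as energy per unit volume: 1 Pa = 1 J/m^3, so an idealised
# constant-pressure reservoir of volume V stores p * V joules.
def stored_energy_j(gauge_pressure_pa: float, volume_m3: float) -> float:
    """Idealised potential energy (J) of a volume at constant gauge pressure."""
    return gauge_pressure_pa * volume_m3

# 10 litres (0.010 m^3) at 2 bar gauge (200 kPa) -> ~2000 J under this model.
energy = stored_energy_j(200_000.0, 0.010)
```

The unit bookkeeping is the point: Pa × m³ = (N/m²) × m³ = N·m = J, which is why the pascal doubles as the SI unit of energy density.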
73.
Pounds per square inch
–
The pound per square inch or, more accurately, pound-force per square inch (psi) is a unit of pressure or of stress based on avoirdupois units. Converting to standard atmospheres: 1 atm = 101,325 Pa, and 101,325 Pa ÷ 6,894.757293168 Pa/psi ≈ 14.70 psi; therefore, 1 atmosphere is approximately 14.7 pounds per square inch. Pounds per square inch absolute (psia) is used to make it clear that the pressure is relative to a vacuum rather than the ambient atmospheric pressure. Since atmospheric pressure at sea level is around 14.7 psi, this will be added to any pressure reading made in air at sea level. The converse is pounds per square inch gauge (psig) or pounds per square inch gage, indicating that the pressure is relative to atmospheric pressure. For example, a bicycle tire pumped up to 65 psig has been inflated to 65 psi above atmospheric pressure. When gauge pressure is referenced to something other than ambient atmospheric pressure, the unit is pounds per square inch differential (psid). The kilopound per square inch (ksi) is a scaled unit derived from psi, equivalent to a thousand psi. Ksi are not widely used for gas pressures; they are mostly used in materials science, where the tensile strength of a material is measured as a large number of psi. The conversion to SI units is 1 ksi = 6.895 MPa. The megapound per square inch (Mpsi) is another multiple, equal to a million psi; it is used in mechanics for the elastic modulus of materials. The conversion to SI units is 1 Mpsi = 6.895 GPa. Example pressures: inch of water, 0.036 psid; clinically normal human blood pressure, 2.32 psig / 1.55 psig; natural gas piped in for residential consumer appliances, 4–6 psig; boost pressure provided by a turbocharger, 6–15 psig; NFL football, 12.5–13.5 psig; atmospheric pressure at sea level, 14.7 psia
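The conversions worked through above can be collected into a short sketch, using the exact pascals-per-psi factor quoted in the text.

```python
# psi conversions using the factor 1 psi = 6894.757293168 Pa.
PA_PER_PSI = 6894.757293168
PA_PER_ATM = 101325.0

def atm_to_psi(atm: float) -> float:
    """Standard atmospheres to pounds-force per square inch."""
    return atm * PA_PER_ATM / PA_PER_PSI

def psi_to_mpa(psi: float) -> float:
    """Pounds-force per square inch to megapascals."""
    return psi * PA_PER_PSI / 1e6

sea_level = atm_to_psi(1.0)      # ~14.70 psi, as derived in the text
ksi_in_mpa = psi_to_mpa(1000.0)  # ~6.895 MPa, matching the ksi conversion
```

Keeping every conversion routed through pascals avoids accumulating rounding error from chained approximate factors.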
74.
Altimeter
–
An altimeter or an altitude meter is an instrument used to measure the altitude of an object above a fixed level. The measurement of altitude is called altimetry, which is related to the term bathymetry, the measurement of depth under water. Altitude can be determined based on the measurement of atmospheric pressure: the greater the altitude, the lower the pressure. When a barometer is supplied with a nonlinear calibration so as to indicate altitude, the instrument is called a pressure altimeter or barometric altimeter. A pressure altimeter is the altimeter found in most aircraft, and hikers and mountain climbers use wrist-mounted or hand-held altimeters, in addition to other navigational tools such as a map, magnetic compass, or GPS receiver. The calibration of an altimeter follows the equation z = cT log(P₀/P), where c is a constant, T is the absolute temperature, P is the pressure at altitude z, and P₀ is the pressure at sea level. The constant c depends on the acceleration of gravity and the molar mass of the air. A barometric altimeter, used along with a map, can help to verify one's location. An altimeter is the most important piece of skydiving equipment after the parachute itself; altitude awareness is crucial at all times during the jump, and determines the appropriate response to maintain safety. The analogue altimeter is the most basic and common type, and is used by all student skydivers. The common design has a face marked from 0 to 4000 m; the face plate sports sections prominently marked in yellow and red, signifying the recommended deployment altitude and the emergency-procedure decision altitude respectively. Some advanced electronic altimeters are also available which make use of the familiar analogue display. Digital visual altimeters are mounted on the wrist or hand; this type always operates electronically, and conveys the altitude as a number, rather than a pointer on a dial.
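The calibration relation z = cT log(P₀/P) above can be sketched with the hypsometric constant c = R/(Mg). This is a rough standard-atmosphere model assuming an isothermal layer, not a certified altimeter calibration.

```python
import math

# Altitude from pressure via z = (R * T / (M * g)) * ln(P0 / P),
# the isothermal form of the altimeter calibration equation.
R = 8.314462    # universal gas constant, J/(mol*K)
M = 0.0289644   # molar mass of dry air, kg/mol
G = 9.80665     # standard gravity, m/s^2
P0 = 101325.0   # sea-level standard pressure, Pa

def altitude_m(pressure_pa: float, temp_k: float = 288.15) -> float:
    """Altitude (m) for a measured pressure, assuming an isothermal layer."""
    return (R * temp_k / (M * G)) * math.log(P0 / pressure_pa)

z = altitude_m(89875.0)  # roughly 1 km for ~89.9 kPa of measured pressure
```

The temperature dependence of c is why pressure altimeters must be corrected for non-standard conditions, as the aviation entries in this document note.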
An electronic altimeter is turned on on the ground before the jump; if the intended landing zone is at a different elevation than the takeoff point, the user needs to input the appropriate offset using a designated function. Audible altimeters are inserted into one's helmet and emit a warning tone at a predefined altitude; audibles are strictly auxiliary devices, and do not replace, but complement, a visual altimeter, which remains the primary tool for maintaining altitude awareness. Audibles are not recommended for, and often banned from use by, student skydivers; these do not show the precise altitude, but rather help maintain a general indication in one's peripheral vision. The exact choice of altimeters depends heavily on individual preferences, experience level, and primary disciplines. On one end of the spectrum, a demonstration jump with water landing and no free fall might waive the mandated use of altimeters. Another skydiver doing similar types of jumps might wear a digital altimeter as their primary visual one. In aircraft, an aneroid barometer measures the atmospheric pressure from a static port outside the aircraft.
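The calibration relation of the form z = c·T·log(P0/P) can be sketched numerically. The sketch below assumes the isothermal form with c = R/(gM); real altimeters follow the standard atmosphere with a temperature lapse rate, so this is an illustration, not a flight instrument.

```python
import math

# Isothermal-layer sketch of the altimeter relation z = c * T * ln(P0/P),
# with c = R / (g * M). Constants are standard physical values.
R = 8.31446    # J/(mol*K), universal gas constant
g = 9.80665    # m/s^2, standard gravity
M = 0.0289644  # kg/mol, molar mass of dry air

def altitude_m(p_pa: float, p0_pa: float = 101325.0,
               t_k: float = 288.15) -> float:
    """Altitude z at which pressure p_pa is observed (isothermal model)."""
    c = R / (g * M)
    return c * t_k * math.log(p0_pa / p_pa)

# Example: pressure at roughly 90% of sea-level pressure
print(round(altitude_m(0.9 * 101325.0)))
```

At sea-level pressure the function returns zero, as the calibration requires; a 10% pressure drop corresponds to a bit under 900 m in this simplified model.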
75.
Altitude
–
Altitude or height is defined based on the context in which it is used. As a general definition, altitude is a distance measurement, usually in the vertical or "up" direction, between a reference datum and a point or object. The reference datum also often varies according to the context. Although the term altitude is commonly used to mean the height above sea level of a location, in geography the term elevation is often preferred for this usage. Vertical distance measurements in the "down" direction are commonly referred to as depth. In aviation, the term altitude can have several meanings, and is always qualified by explicitly adding a modifier; parties exchanging altitude information must be clear which definition is being used. Aviation altitude is measured using either mean sea level or local ground level as the reference datum. When flying at a flight level, the altimeter is always set to standard pressure. On the flight deck, the instrument for measuring altitude is the pressure altimeter. There are several types of altitude: Indicated altitude is the reading on the altimeter when it is set to the local barometric pressure at mean sea level; in UK aviation radiotelephony usage, it is the vertical distance of a level, a point or an object considered as a point, measured from mean sea level. Absolute altitude is the height of the aircraft above the terrain over which it is flying; it can be measured using a radar altimeter, and is also referred to as radar height or feet/metres above ground level. True altitude is the actual elevation above mean sea level; it is indicated altitude corrected for temperature and pressure. Height is the elevation above a reference point, commonly the terrain elevation. Pressure altitude is used to indicate flight level, which is the standard for altitude reporting in U.S. Class A airspace. Pressure altitude and indicated altitude are the same when the altimeter setting is 29.92 inHg or 1013.25 millibars.
Density altitude is the altitude corrected for non-ISA (International Standard Atmosphere) atmospheric conditions. Aircraft performance depends on density altitude, which is affected by barometric pressure, humidity and temperature. On a very hot day, density altitude at an airport may be so high as to preclude takeoff. These types of altitude can be explained more simply as various ways of measuring the altitude: Indicated altitude – the altitude shown on the altimeter.
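The pressure- and density-altitude notions above can be sketched as follows. The pressure-altitude formula uses conventional standard-atmosphere constants, and the density-altitude helper uses a common rule of thumb (roughly 120 ft per °C of deviation from ISA temperature) rather than the exact relation, so both are illustrations only.

```python
# Sketch: pressure altitude from station pressure (standard-atmosphere
# constants), plus a rule-of-thumb density altitude correction.
def pressure_altitude_ft(station_hpa: float) -> float:
    """Pressure altitude in feet for a station pressure in hPa."""
    return (1.0 - (station_hpa / 1013.25) ** 0.190284) * 145366.45

def density_altitude_ft(pressure_alt_ft: float, oat_c: float) -> float:
    """Rule of thumb: PA + ~120 ft per degC above ISA temperature.
    ISA temperature falls about 2 degC per 1000 ft from 15 degC at MSL."""
    isa_c = 15.0 - 2.0 * pressure_alt_ft / 1000.0
    return pressure_alt_ft + 120.0 * (oat_c - isa_c)

print(round(pressure_altitude_ft(1013.25)))      # 0 at the standard setting
print(round(density_altitude_ft(5000.0, 25.0)))  # hot day at 5000 ft PA
```

The second example shows the "very hot day" effect mentioned above: at 5000 ft pressure altitude with an outside air temperature 20 °C above ISA, density altitude climbs to about 7400 ft.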
76.
Internet
–
The Internet is the global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link devices worldwide. The origins of the Internet date back to research commissioned by the United States federal government in the 1960s to build robust, fault-tolerant communication via computer networks. The primary precursor network, the ARPANET, initially served as a backbone for interconnection of regional academic and military networks in the 1980s. Although the Internet was widely used by academia since the 1980s, Internet use grew rapidly in the West from the mid-1990s and from the late 1990s in the developing world. In the two decades since then, Internet use has grown a hundredfold, measured for the period of one year. Newspaper, book, and other print publishing are adapting to website technology, or are being reshaped into blogging, web feeds and online news aggregators. The entertainment industry was initially the fastest growing segment on the Internet. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking. Business-to-business and financial services on the Internet affect supply chains across entire industries. The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own policies. The term Internet, when used to refer to the global system of interconnected Internet Protocol networks, is a proper noun; in common use and the media, it is often not capitalized. Some guides specify that the word should be capitalized when used as a noun. The Internet is also often referred to as the Net, as a short form of network. Historically, as early as 1849, the word internetted was used uncapitalized as an adjective, meaning interconnected or interwoven. The designers of early computer networks used internet both as a noun and as a verb in shorthand form of internetwork or internetworking, meaning interconnecting computer networks.
The terms Internet and World Wide Web are often used interchangeably in everyday speech; however, the World Wide Web, or the Web, is only one of a large number of Internet services. The Web is a collection of interconnected documents and other web resources, linked by hyperlinks. The term Interweb is a portmanteau of Internet and World Wide Web typically used sarcastically to parody a technically unsavvy user. The ARPANET project led to the development of protocols for internetworking. The third ARPANET site was the Culler-Fried Interactive Mathematics Center at the University of California, Santa Barbara, followed by the University of Utah Graphics Department. In an early sign of future growth, fifteen sites were connected to the young ARPANET by the end of 1971. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing. Early international collaborations on the ARPANET were rare; European developers were concerned with developing the X.25 networks. In December 1974, RFC 675, by Vinton Cerf, Yogen Dalal, and Carl Sunshine, used the term internet as a shorthand for internetworking, and later RFCs repeated this use. Access to the ARPANET was expanded in 1981 when the National Science Foundation funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite was standardized, which permitted worldwide proliferation of interconnected networks; network backbone links were later upgraded to 1.5 Mbit/s and 45 Mbit/s. Commercial Internet service providers emerged in the late 1980s and early 1990s, and the ARPANET was decommissioned in 1990.
77.
United States
–
Forty-eight of the fifty states and the federal district are contiguous and located in North America between Canada and Mexico. The state of Alaska is in the northwest corner of North America, bordered by Canada to the east; the state of Hawaii is an archipelago in the mid-Pacific Ocean. The U.S. territories are scattered about the Pacific Ocean. The geography, climate and wildlife of the country are extremely diverse. At 3.8 million square miles and with over 324 million people, the United States is the world's third- or fourth-largest country by total area and third-largest by land area. It is one of the world's most ethnically diverse and multicultural nations. Paleo-Indians migrated from Asia to the North American mainland at least 15,000 years ago. European colonization began in the 16th century; the United States emerged from 13 British colonies along the East Coast. Numerous disputes between Great Britain and the colonies following the Seven Years' War led to the American Revolution. On July 4, 1776, during the course of the American Revolutionary War, the colonies declared independence. The war ended in 1783 with recognition of the independence of the United States by Great Britain, representing the first successful war of independence against a European power. The current constitution was adopted in 1788, after the Articles of Confederation; the first ten amendments, collectively named the Bill of Rights, were ratified in 1791 and designed to guarantee many fundamental civil liberties. During the second half of the 19th century, the American Civil War led to the end of slavery in the country. By the end of the century, the United States extended into the Pacific Ocean. The Spanish–American War and World War I confirmed the country's status as a global military power. The end of the Cold War and the dissolution of the Soviet Union in 1991 left the United States as the sole superpower. The U.S. is a member of the United Nations, World Bank, International Monetary Fund, and Organization of American States.
The United States is a developed country, with the world's largest economy by nominal GDP. It ranks highly in several measures of socioeconomic performance, including average wage, human development, and per capita GDP. The U.S. economy is considered post-industrial, characterized by the dominance of services and the knowledge economy. The United States is a prominent political and cultural force internationally, and a leader in scientific research and technological innovation. In 1507, the German cartographer Martin Waldseemüller produced a world map on which he named the lands of the Western Hemisphere America, after the Italian explorer and cartographer Amerigo Vespucci.
78.
Colombia
–
Colombia, officially the Republic of Colombia, is a transcontinental country largely situated in the northwest of South America, with territories in Central America. Colombia shares a border to the northwest with Panama, to the east with Venezuela and Brazil, and to the south with Ecuador and Peru; it shares its maritime limits with Costa Rica, Nicaragua, Honduras, Jamaica, Haiti and the Dominican Republic. It is a unitary, constitutional republic comprising thirty-two departments. The territory of what is now Colombia was originally inhabited by indigenous peoples including the Muisca, the Quimbaya and the Tairona. The Spanish arrived in 1499 and initiated a period of conquest and colonization, ultimately creating the Viceroyalty of New Granada. Independence from Spain was won in 1819, but by 1830 the Gran Colombia Federation was dissolved. What is now Colombia and Panama emerged as the Republic of New Granada; the new nation experimented with federalism as the Granadine Confederation, and then the United States of Colombia, before the Republic of Colombia was finally declared in 1886. Since the 1960s the country has suffered from an asymmetric low-intensity armed conflict. Colombia is one of the most ethnically and linguistically diverse countries in the world, and thereby possesses a rich cultural heritage. Cultural diversity has also been influenced by Colombia's varied geography. The urban centres are located in the highlands of the Andes mountains. Colombian territory also encompasses Amazon rainforest, tropical grassland and both Caribbean and Pacific coastlines. Ecologically, it is one of the world's 17 megadiverse countries, and the most densely biodiverse of these per square kilometer. Colombia is a middle power and a regional actor with the fourth-largest economy in Latin America, is part of the CIVETS group of six leading emerging markets, and is an acceding member to the OECD.
Colombia has an economy with macroeconomic stability and favorable growth prospects in the long run. The name Colombia is derived from the last name of Christopher Columbus. It was conceived by the Venezuelan revolutionary Francisco de Miranda as a reference to all the New World, but especially to those portions under Spanish and Portuguese rule. The name was adopted by the Republic of Colombia of 1819. When Venezuela, Ecuador and Cundinamarca came to exist as independent states, New Granada officially changed its name in 1858 to the Granadine Confederation. In 1863 the name was changed again, this time to United States of Colombia. To refer to the country, the Colombian government uses the terms Colombia and República de Colombia. Owing to its location, the present territory of Colombia was a corridor of early human migration from Mesoamerica. The oldest archaeological finds are from the Pubenza and El Totumo sites in the Magdalena Valley, 100 km southwest of Bogotá. These sites date from the Paleoindian period; at Puerto Hormiga and other sites, traces from the Archaic Period have been found.
79.
Mercury (element)
–
Mercury is a chemical element with symbol Hg and atomic number 80. It is commonly known as quicksilver and was formerly named hydrargyrum. Mercury occurs in deposits throughout the world, mostly as cinnabar (mercuric sulfide). The red pigment vermilion is obtained by grinding natural cinnabar or synthetic mercuric sulfide. Mercury thermometers have largely been phased out in favor of alcohol-filled or electronic instruments; likewise, mechanical pressure gauges and electronic strain gauge sensors have replaced mercury sphygmomanometers. Mercury remains in use in research applications and in amalgam for dental restoration in some locales. It is used in fluorescent lighting: electricity passed through mercury vapor in a fluorescent lamp produces short-wave ultraviolet light, which then causes the phosphor in the tube to fluoresce, making visible light. Mercury poisoning can result from exposure to water-soluble forms of mercury. Mercury is a heavy, silvery-white liquid metal. Compared to other metals, it is a poor conductor of heat, but a fair conductor of electricity. It has a melting point of −38.83 °C and a boiling point of 356.73 °C. Upon freezing, the volume of mercury decreases by 3.59%; the coefficient of volume expansion is 181.59 × 10−6 per °C at 0 °C, 181.71 × 10−6 at 20 °C and 182.50 × 10−6 at 100 °C. Solid mercury is malleable and ductile and can be cut with a knife. Mercury has the electron configuration [Xe] 4f14 5d10 6s2; because this configuration strongly resists removal of an electron, mercury behaves similarly to noble gases, which form weak bonds and hence melt at low temperatures. The stability of the 6s shell is due to the presence of a filled 4f shell: an f shell poorly screens the nuclear charge, which increases the attractive Coulomb interaction of the 6s shell and the nucleus. Like silver, mercury reacts with hydrogen sulfide. Mercury reacts with solid sulfur flakes, which are used in mercury spill kits to absorb mercury. Mercury dissolves many other metals such as gold and silver to form amalgams. Iron is an exception, and iron flasks have traditionally been used to trade mercury.
Several other first-row transition metals, with the exception of manganese and copper, also resist forming amalgams; other elements that do not readily form amalgams with mercury include platinum. Sodium amalgam is a reducing agent in organic synthesis, and is also used in high-pressure sodium lamps. Mercury readily combines with aluminium to form a mercury-aluminium amalgam when the two pure metals come into contact; since the amalgam destroys the aluminium oxide layer which protects metallic aluminium from oxidizing in depth, even small amounts of mercury can seriously corrode aluminium. For this reason, mercury is not allowed aboard an aircraft under most circumstances because of the risk of it forming an amalgam with exposed aluminium parts in the aircraft. Mercury embrittlement is the most common type of liquid metal embrittlement.
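As a small illustration of the volume-expansion coefficients quoted above, the fractional volume change of liquid mercury over a modest temperature change can be approximated as β·ΔT. This is a linearized sketch; β itself varies slightly with temperature, as the three quoted values show.

```python
# Linearized volumetric expansion of liquid mercury, using the
# coefficient of volume expansion quoted in the text for 20 degC.
BETA_20C = 181.71e-6  # 1/degC, coefficient of volume expansion at 20 degC

def volume_change_fraction(dt_c: float, beta: float = BETA_20C) -> float:
    """Approximate fractional volume change dV/V for a temperature
    change dt_c, assuming beta is constant over the interval."""
    return beta * dt_c

# A 1 degC rise expands mercury by about 0.018% - the effect a
# mercury-in-glass thermometer or barometer column registers.
print(volume_change_fraction(1.0))
```

The smallness of this number is why mercury thermometers use a narrow capillary: a tiny fractional volume change becomes a visible change in column height.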