1.
Deformation (engineering)
–
In materials science, deformation refers to any change in the shape or size of an object due to an applied force or a change in temperature. The first case can be a result of tensile forces, compressive forces, shear, bending, or torsion. Deformation is often described as strain. As deformation occurs, internal inter-molecular forces arise that oppose the applied force; a larger applied force may lead to a permanent deformation of the object or even to its structural failure. In the figure it can be seen that the compressive loading has caused deformation in the cylinder so that the shape has changed into one with bulging sides. The sides bulge because the material, although strong enough not to crack or otherwise fail, is not strong enough to support the load without change. Internal forces resist the applied load; the concept of a rigid body can be applied if the deformation is negligible. The response depends on the type of material and on the size and geometry of the object. The image to the right shows the engineering stress vs. strain diagram for a typical ductile material such as steel. Different deformation modes may occur under different conditions, as can be depicted using a deformation mechanism map. Elastic deformation is reversible: once the forces are no longer applied, the object returns to its original shape. Elastomers and shape-memory metals such as Nitinol exhibit large elastic deformation ranges, but elasticity is nonlinear in these materials. Normal metals, ceramics and most crystals show linear elasticity and a smaller elastic range. The linear relationship between stress and strain only applies in the elastic range and indicates that the slope of the stress vs. strain curve can be used to find Young's modulus. Engineers often use this calculation in tensile tests. The elastic range ends when the material reaches its yield strength.
At this point plastic deformation begins. Note that not all elastic materials undergo linear elastic deformation; some, such as concrete, gray cast iron, and many polymers, respond in a nonlinear fashion, and for these materials Hooke's law is inapplicable. Plastic deformation, by contrast, is irreversible; however, an object in the plastic deformation range will first have undergone elastic deformation. Soft thermoplastics have a rather large plastic deformation range, as do ductile metals such as copper, silver, and gold. Steel does, too, but not cast iron; hard thermosetting plastics, rubber, crystals, and ceramics have minimal plastic deformation ranges.
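Since the slope of the linear portion of a stress-strain curve gives Young's modulus, the tensile-test calculation mentioned above can be sketched in a few lines of Python. The data points below are illustrative, not measurements of any real material.

```python
# Estimating Young's modulus as the slope of the linear (elastic) portion
# of a stress-strain curve. Data points are illustrative only.
strains = [0.0, 0.001, 0.002, 0.003]      # dimensionless
stresses = [0.0, 200e6, 400e6, 600e6]     # Pa

# Least-squares slope through the origin: E = sum(s*e) / sum(e*e)
E = sum(s * e for s, e in zip(stresses, strains)) / sum(e * e for e in strains)
print(f"Young's modulus ~ {E / 1e9:.0f} GPa")  # about 200 GPa, typical of steel
```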
2.
Continuum mechanics
–
Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century, and research in the area continues to this day. Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Continuum mechanics deals with physical properties of solids and fluids which are independent of any particular coordinate system in which they are observed. These physical properties are represented by tensors, which are mathematical objects that have the required property of being independent of coordinate system; these tensors can be expressed in coordinate systems for computational convenience. Materials, such as solids, liquids and gases, are composed of molecules separated by space, and on a microscopic scale materials have cracks and discontinuities. A continuum, by contrast, is a body that can be continually sub-divided into infinitesimal elements with properties being those of the bulk material. More specifically, the continuum hypothesis/assumption hinges on the concept of a representative elementary volume. This condition provides a link between the experimentalist's and the theoretician's viewpoints on constitutive equations, as well as a way of spatial and statistical averaging of the microstructure; the latter then provides a basis for stochastic finite elements. The levels of SVE (statistical volume element) and RVE (representative volume element) link continuum mechanics to statistical mechanics; the RVE may be assessed only in a limited way via experimental testing, when the constitutive response becomes spatially homogeneous. Specifically for fluids, the Knudsen number is used to assess to what extent the approximation of continuity can be made. As an illustration, consider car traffic on a highway, with just one lane for simplicity.
Somewhat surprisingly, and in a tribute to its effectiveness, continuum mechanics effectively models the movement of cars via a differential equation for the density of cars. The familiarity of this situation lets us understand a little of the continuum-discrete dichotomy underlying continuum modelling in general. To start modelling, let x measure distance along the highway, let t be time, and let ρ(x, t) be the density of cars on the highway; cars do not appear and disappear. Consider any group of cars, from the car at the back of the group located at x = a to the car at the front located at x = b. The total number of cars in this group is N = ∫_a^b ρ dx, and since cars are conserved, dN/dt = 0. The only way an integral can be zero for all intervals is if the integrand is zero for all x; consequently, conservation derives the first-order nonlinear conservation PDE ∂ρ/∂t + ∂(ρv)/∂x = 0 for all positions on the highway, where v is the speed of the cars. This conservation PDE applies not only to car traffic but also to fluids, solids, crowds, animals, plants, bushfires, and financial traders. This PDE is one equation with two unknowns, so another equation is needed to form a well-posed problem.
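The car-conservation argument can be made concrete with a short numerical sketch. Assuming, for simplicity, a constant car speed v and a ring road (periodic boundaries), a first-order upwind scheme advances the density ρ in time while conserving the total number of cars; the grid sizes, speed, and initial density below are all illustrative.

```python
# Minimal sketch of the conservation law d(rho)/dt + d(rho*v)/dx = 0 on a
# ring road with constant speed v, via a first-order upwind scheme.
v, dx, dt = 1.0, 0.1, 0.05   # speed, grid spacing, time step (CFL = 0.5)
rho = [1.0 if 10 <= i < 20 else 0.2 for i in range(100)]  # initial density bump

total_before = sum(rho) * dx
for _ in range(200):
    # upwind difference: information travels in the +x direction;
    # rho[i - 1] wraps around via Python's negative indexing (periodic ring)
    rho = [rho[i] - v * dt / dx * (rho[i] - rho[i - 1]) for i in range(100)]
total_after = sum(rho) * dx

# periodic boundaries mean no cars enter or leave: the total is conserved
print(total_before, total_after)
```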
3.
Conservation of energy
–
In physics, the law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time. Energy can neither be created nor destroyed; rather, it transforms from one form to another. For instance, chemical energy can be converted to kinetic energy in the explosion of a stick of dynamite. A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist; that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. Ancient philosophers as far back as Thales of Miletus c. 550 BCE had inklings of the conservation of some underlying substance of which everything is made. However, there is no particular reason to identify this with what we know today as mass-energy. Empedocles wrote that in his universal system, composed of four roots, nothing comes to be or perishes. In 1605, Simon Stevinus was able to solve a number of problems in statics based on the principle that perpetual motion was impossible. Galileo later pointed out that the height a moving body rises is equal to the height from which it falls, and used this observation to infer the idea of inertia. The remarkable aspect of this observation is that the height to which a moving body ascends does not depend on the shape of the surface that the body is moving on. In 1669, Christiaan Huygens published his laws of collision. Among the quantities he listed as being invariant before and after the collision of bodies were both the sum of their linear momenta and the sum of their kinetic energies. However, the difference between elastic and inelastic collisions was not understood at the time, and this led to a dispute among later researchers as to which of these conserved quantities was the more fundamental.
In his Horologium Oscillatorium, Huygens gave a much clearer statement regarding the height of ascent of a moving body. Huygens' study of the dynamics of pendulum motion was based on a single principle: that the center of gravity of a heavy object cannot lift itself. The fact that energy is a scalar, unlike linear momentum which is a vector, makes it easier to work with. It was Leibniz during 1676–1689 who first attempted a mathematical formulation of the kind of energy which is connected with motion. Using Huygens' work on collision, Leibniz noticed that in many mechanical systems of several masses, each with velocity v, the quantity ∑ m v² was conserved so long as the masses did not interact. He called this quantity the vis viva or living force of the system; the principle represents an accurate statement of the approximate conservation of kinetic energy in situations where there is no friction. Many physicists at that time, such as Newton, held that the conservation of momentum, which holds even in systems with friction, was the more fundamental conserved quantity; it was later shown that both quantities are conserved simultaneously, given the proper conditions such as an elastic collision. In 1687, Isaac Newton published his Principia, which was organized around the concept of force and momentum.
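Huygens' invariants can be checked numerically. The sketch below applies the standard one-dimensional elastic-collision formulas (a known result, not taken from the text above) to illustrative masses and velocities, and confirms that both total momentum and total kinetic energy are unchanged by the collision.

```python
# Check that a 1-D elastic collision conserves both total momentum and
# total kinetic energy. Masses and velocities are illustrative values.
m1, m2 = 2.0, 3.0
u1, u2 = 4.0, -1.0                    # velocities before the collision

# standard 1-D elastic collision formulas
v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)

p_before, p_after = m1 * u1 + m2 * u2, m1 * v1 + m2 * v2
ke_before = 0.5 * m1 * u1**2 + 0.5 * m2 * u2**2
ke_after = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
print(p_before, p_after)              # both 5.0
print(ke_before, ke_after)            # both 17.5
```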
4.
Conservation of mass
–
The law of conservation of mass states that for any system closed to all transfers of matter and energy, the mass of the system must remain constant over time; hence, the quantity of mass is conserved. Thus, during any chemical reaction, nuclear reaction, or radioactive decay in an isolated system, the total mass of the reactants must equal the total mass of the products. The concept of mass conservation is widely used in many fields such as chemistry, mechanics, and fluid dynamics. Strictly, the law holds only for isolated systems, i.e. those completely isolated from all exchanges with the environment; in this circumstance, the mass-energy equivalence theorem states that mass conservation is equivalent to total energy conservation, which is the first law of thermodynamics. By contrast, for a closed system mass is only approximately conserved: certain types of matter may be created or destroyed, but the associated mass-energy remains unchanged in quantity. For a discussion, see mass in general relativity. An important idea in ancient Greek philosophy was that "Nothing comes from nothing", so that what exists now has always existed: no new matter can come into existence where there was none before. A further principle of conservation was stated by Epicurus who, describing the nature of the Universe, wrote that the totality of things was always such as it is now, and always will be. Jain philosophy, a non-creationist philosophy based on the teachings of Mahavira, states that the universe and its constituents cannot be destroyed or created; the Jain text Tattvarthasutra states that a substance is permanent, but its modes are characterised by creation and destruction. A principle of the conservation of matter was also stated by Nasīr al-Dīn al-Tūsī, who wrote that a body of matter cannot disappear completely; it only changes its form, condition, composition, color and other properties. The principle of conservation of mass was first outlined by Mikhail Lomonosov in 1748, who proved it by experiments, though this is sometimes challenged; Antoine Lavoisier had expressed these ideas in 1774. Others whose ideas pre-dated the work of Lavoisier include Joseph Black and Henry Cavendish. The conservation of mass was obscure for millennia because of the buoyancy effect of the Earth's atmosphere on the weight of gases.
For example, a piece of wood weighs less after burning, since the gaseous combustion products escape and are difficult to weigh; the vacuum pump eventually enabled the weighing of gases using scales. Once understood, the conservation of mass was of great importance in progressing from alchemy to modern chemistry. Later, more precise research indicated that in certain reactions the loss or gain of mass could not have been more than from 2 to 4 parts in 100,000. In special relativity, the conservation of mass does not apply if the system is open and energy escapes; however, it continues to apply to totally closed systems. If energy cannot escape a system, its mass cannot decrease: in relativity theory, so long as any type of energy is retained within a system, this energy exhibits mass.
5.
Momentum
–
In classical mechanics, linear momentum, translational momentum, or simply momentum is the product of the mass and velocity of an object, quantified in kilogram-meters per second. It is dimensionally equivalent to impulse, the product of force and time; Newton's second law of motion states that the change in linear momentum of a body is equal to the net impulse acting on it. A heavy, fast-moving truck has a large momentum; if the truck were lighter, or moving more slowly, it would have less momentum. Linear momentum is also a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum cannot change. In classical mechanics, conservation of momentum is implied by Newton's laws. It also holds in special relativity and, with appropriate definitions, a linear momentum conservation law holds in electrodynamics, quantum mechanics, and quantum field theory. It is ultimately an expression of one of the fundamental symmetries of space and time. Linear momentum depends on the frame of reference: observers in different frames would find different values of the linear momentum of a system, but each would observe that the value does not change with time. Momentum has a direction as well as a magnitude; quantities that have both a magnitude and a direction are known as vector quantities. Because momentum has a direction, it can be used to predict the resulting direction of objects after they collide, as well as their speeds. Below, the properties of momentum are described in one dimension; the vector equations are almost identical to the scalar equations. The momentum of a particle is traditionally represented by the letter p. It is the product of two quantities, the mass and the velocity: p = m v. The units of momentum are the product of the units of mass and velocity. In SI units, if the mass is in kilograms and the velocity in meters per second, then the momentum is in kilogram meters/second; in cgs units, if the mass is in grams and the velocity in centimeters per second, then the momentum is in gram centimeters/second.
Being a vector, momentum has magnitude and direction. For example, a 1 kg model airplane, traveling due north at 1 m/s in straight and level flight, has a momentum of 1 kg·m/s due north measured from the ground. The momentum of a system of particles is the sum of their momenta: if two particles have masses m1 and m2, and velocities v1 and v2, the total momentum is p = p1 + p2 = m1 v1 + m2 v2. If the particles are moving, the center of mass will generally be moving as well; if the center of mass is moving at velocity vcm, the total momentum is p = m vcm, where m is the total mass. This is known as Euler's first law. If a force F is applied to a particle for a time interval Δt, the momentum of the particle changes by an amount Δp = F Δt.
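The particle-system relations above (p = m1 v1 + m2 v2, p = m vcm, and Δp = F Δt) can be illustrated in a few lines; all the numbers below are made up for the example, and motion is one-dimensional for simplicity.

```python
# Total momentum of a two-particle system, its equality with m_total * v_cm,
# and the impulse-momentum relation dp = F * dt. Illustrative values only.
m1, m2 = 1.0, 3.0                     # kg
v1, v2 = 2.0, -1.0                    # m/s

p_total = m1 * v1 + m2 * v2           # sum of individual momenta
v_cm = (m1 * v1 + m2 * v2) / (m1 + m2)
assert abs(p_total - (m1 + m2) * v_cm) < 1e-12   # p = m * v_cm

F, dt = 5.0, 0.4                      # net force (N) applied for dt seconds
dp = F * dt                           # impulse changes momentum: dp = F * dt
print(p_total, p_total + dp)          # momentum before and after the impulse
```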
6.
Solid mechanics
–
Solid mechanics is fundamental for civil, aerospace, nuclear, and mechanical engineering, for geology, and for many branches of physics such as materials science. It has specific applications in other areas, such as understanding the anatomy of living beings. One of the most common practical applications of solid mechanics is the Euler-Bernoulli beam equation. Solid mechanics extensively uses tensors to describe stresses, strains, and the relationship between them. Solid mechanics inhabits a central place within continuum mechanics, and the field of rheology presents an overlap between solid and fluid mechanics. A material has a rest shape, and its shape departs from the rest shape due to stress. The amount of departure from the rest shape is called deformation, and the proportion of deformation to original size is called strain; if the applied stress is sufficiently low, the deformation is proportional to the stress, and this region of deformation is known as the linearly elastic region. It is most common for analysts in solid mechanics to use linear material models; however, real materials often exhibit non-linear behavior, which matters increasingly as new materials are used and old ones are pushed to their limits. There are four basic models that describe how a solid responds to an applied stress. Elastically – when an applied stress is removed, the material returns to its undeformed state; linearly elastic materials are those that deform proportionally to the applied load. Viscoelastically – the material response has time-dependence. Plastically – materials that behave elastically generally do so when the stress is less than a yield value; when the stress is greater than the yield stress, the material behaves plastically, and deformation that occurs after yield is permanent. Thermoelastically – there is coupling of mechanical with thermal responses; in general, thermoelasticity is concerned with elastic solids under conditions that are neither isothermal nor adiabatic. The simplest theory involves Fourier's law of heat conduction, as opposed to advanced theories with physically more realistic models.
This theorem includes the method of least work as a special case (1874). In 1922, Timoshenko corrected the Euler-Bernoulli beam equation; 1936 saw Hardy Cross' publication of the moment distribution method, an important innovation in the design of continuous frames.
7.
Stress (mechanics)
–
For example, when a solid vertical bar is supporting a weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a container under pressure, each particle gets pushed against by all the surrounding particles, and the container walls and the pressure-inducing surface push against them in reaction. These macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Stress inside a material may arise by various mechanisms, such as stress applied by external forces to the bulk material or to its surface. Any strain of a solid material generates an internal elastic stress, analogous to the reaction force of a spring. In liquids and gases, only deformations that change the volume generate persistent elastic stress; however, if the deformation is gradually changing with time, even in fluids there will usually be some viscous stress opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress. Significant stress may exist even when deformation is negligible or non-existent, and stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress may also be imposed on a material without the application of net forces, for example by changes in temperature or chemical composition. Stress that exceeds certain strength limits of the material will result in permanent deformation or even change its crystal structure and chemical composition. In some branches of engineering, the term stress is occasionally used in a looser sense as a synonym of internal force; for example, in the analysis of trusses, it may refer to the total traction or compression force acting on a beam. Since ancient times humans have been consciously aware of stress inside materials.
Until the 17th century the understanding of stress was largely intuitive and empirical; the precise mathematical tools arrived later. With those tools, Augustin-Louis Cauchy was able to give the first rigorous and general mathematical model for stress in a homogeneous medium. Cauchy observed that the force across a surface is a linear function of the surface's normal vector. The understanding of stress in liquids started with Newton, who provided a formula for friction forces in parallel laminar flow. Following the basic premises of continuum mechanics, stress is a macroscopic concept, defined as the force across a small boundary per unit area of that boundary. In a fluid at rest the force is perpendicular to the surface; in a solid, or in a flow of viscous liquid, the force F may not be perpendicular to S. Hence the stress across a surface must be regarded as a vector quantity, not a scalar; moreover, its direction and magnitude depend on the orientation of S. Thus the stress state of the material must be described by a tensor, called the (Cauchy) stress tensor. With respect to any chosen coordinate system, the Cauchy stress tensor can be represented as a symmetric 3×3 matrix of real numbers.
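Cauchy's observation that the traction (force per unit area) across a surface is a linear function of the surface normal, t = σ·n, with σ the symmetric 3×3 stress tensor, can be sketched directly; the stress values below are illustrative.

```python
# Cauchy's relation t = sigma . n: the traction across a surface is a
# linear function of the unit normal n. Stress values are illustrative.
sigma = [[10.0, 2.0, 0.0],            # symmetric stress tensor (e.g. in MPa)
         [2.0, 5.0, 1.0],
         [0.0, 1.0, 3.0]]
n = [1.0, 0.0, 0.0]                   # unit normal of the chosen surface

# matrix-vector product t_i = sigma_ij * n_j
t = [sum(sigma[i][j] * n[j] for j in range(3)) for i in range(3)]
print(t)                              # traction vector: [10.0, 2.0, 0.0]
```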
8.
Compatibility (mechanics)
–
In continuum mechanics, a compatible deformation (or strain) tensor field in a body is a unique tensor field that is obtained when the body is subjected to a continuous, single-valued displacement field. Compatibility is the study of the conditions under which such a displacement field can be guaranteed. Compatibility conditions are particular cases of integrability conditions and were first derived for linear elasticity by Barré de Saint-Venant in 1864 and proved rigorously by Beltrami in 1886. In the continuum description of a solid body we imagine the body to be composed of a set of infinitesimal volumes or material points. Each volume is assumed to be connected to its neighbors without any gaps or overlaps, and certain mathematical conditions have to be satisfied to ensure that gaps or overlaps do not develop when a continuum body is deformed. A body that deforms without developing any gaps or overlaps is called a compatible body. Compatibility conditions are mathematical conditions that determine whether a particular deformation will leave a body in a compatible state. In the context of infinitesimal strain theory, these conditions are equivalent to stating that the displacements in a body can be obtained by integrating the strains. Such an integration is possible if Saint-Venant's tensor R vanishes in the body, where ε is the infinitesimal strain tensor. For finite deformations the compatibility conditions take the form ∇ × F = 0, where F is the deformation gradient. The compatibility conditions in linear elasticity are obtained by observing that there are six strain-displacement relations that are functions of only three unknown displacements. This suggests that the three displacements may be eliminated from the system of equations without loss of information; the resulting expressions in terms of only the strains provide constraints on the possible forms of a strain field.
We can write these conditions in index notation as e_ikr e_jls ε_ij,kl = 0, where e_ijk is the permutation symbol. In direct tensor notation, ∇ × (∇ × ε) = 0, where the curl operator can be expressed in an orthonormal coordinate system as ∇ × ε = e_ijk ε_rj,i e_k ⊗ e_r. This condition is necessary; the same condition is also sufficient to ensure compatibility in a simply connected body. The quantity R_ijkm represents the components of the Riemann-Christoffel curvature tensor. The problem of compatibility in continuum mechanics involves the determination of allowable single-valued continuous fields on simply connected bodies. More precisely, the problem may be stated in the following manner. Consider the deformation of a body shown in Figure 1. Given a symmetric second-order tensor field ε, when is it possible to construct a vector field u such that ε = ½(∇u + (∇u)^T)? Suppose that there exists u such that this expression for ε holds. Then ε_ik,jl − ε_jk,il − ε_il,jk + ε_jl,ik = 0; in direct tensor notation, ∇ × (∇ × ε) = 0. The above are necessary conditions. If w is the rotation vector, then ∇ × ε = ∇w; hence the necessary condition may also be written as ∇ × (∇w) = 0. Let us now assume that the condition ∇ × (∇ × ε) = 0 is satisfied in a portion of a body.
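As a sanity check on the compatibility conditions, the sketch below numerically verifies the plane (two-dimensional) Saint-Venant equation ε11,22 + ε22,11 = 2 ε12,12 for a strain field derived from a smooth, single-valued displacement field; the polynomial displacement field is an illustrative choice, not taken from the text.

```python
# Check the 2-D Saint-Venant compatibility equation for strains derived
# from a single-valued displacement field (u_x, u_y). Illustrative field.
def ux(x, y): return x**2 * y
def uy(x, y): return x * y**2

h = 1e-2
def d(f, x, y, wrt):
    """Central finite difference of f with respect to x or y."""
    return ((f(x + h, y) - f(x - h, y)) / (2 * h) if wrt == 'x'
            else (f(x, y + h) - f(x, y - h)) / (2 * h))

def e11(x, y): return d(ux, x, y, 'x')
def e22(x, y): return d(uy, x, y, 'y')
def e12(x, y): return 0.5 * (d(ux, x, y, 'y') + d(uy, x, y, 'x'))

x0, y0 = 0.7, -0.3
lhs = d(lambda x, y: d(e11, x, y, 'y'), x0, y0, 'y') \
    + d(lambda x, y: d(e22, x, y, 'x'), x0, y0, 'x')
rhs = 2 * d(lambda x, y: d(e12, x, y, 'x'), x0, y0, 'y')
print(abs(lhs - rhs))                 # essentially zero: compatibility holds
```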
9.
Finite strain theory
–
In this case, the undeformed and deformed configurations of the continuum are significantly different and a clear distinction has to be made between them. This is commonly the case with elastomers, plastically-deforming materials, and fluids. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration κ0 to a current or deformed configuration κt. A change in the configuration of a continuum body can be described by a displacement field, which is a vector field of all displacement vectors for all particles in the body. Relative displacement between particles occurs if and only if deformation has occurred; if displacement occurs without deformation, then it is deemed a rigid-body displacement. The displacement of particles indexed by variable i may be expressed as follows: the vector joining the positions of a particle in the undeformed configuration P_i and the deformed configuration p_i is called the displacement vector. The partial derivative of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor ∇_X u; here α_Ji are the direction cosines between the material and spatial coordinate systems with unit vectors E_J and e_i, respectively. Due to the assumption of continuity of the mapping χ, F has the inverse H = F^(−1); by the implicit function theorem, the Jacobian determinant J must then be nonsingular, i.e. nonzero. Consider a particle or material point P with position vector X = X_I E_I in the undeformed configuration. After a displacement of the body, the new position of the particle, indicated by p in the new configuration, is given by the vector position x = x_i e_i.
The coordinate systems for the undeformed and deformed configurations can be superimposed for convenience. Consider now a material point Q neighboring P, with position vector X + ΔX. In the deformed configuration this particle has a new position q given by the vector x + Δx. Assuming that the line segments ΔX and Δx joining the particles P and Q in the undeformed and deformed configurations, respectively, are small, we can express them as dX and dx. A geometrically consistent definition of the time derivative of the deformation gradient requires an excursion into differential geometry. The time derivative of F is Ḟ = ∂F/∂t = ∂/∂t (∂x/∂X) = ∂/∂X (∂x/∂t) = ∂V/∂X, where V is the velocity. The derivative on the right-hand side represents a material velocity gradient. It is common to convert that into a spatial gradient, i.e. Ḟ = ∂V/∂X = (∂V/∂x)·(∂x/∂X) = l·F, where l is the spatial velocity gradient. If the spatial velocity gradient is constant, the equation can be solved exactly to give F = e^(l t), assuming F = 1 at t = 0.
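The result F = e^(l t) for a constant spatial velocity gradient can be illustrated with simple shear, where l is nilpotent (l·l = 0) and the matrix-exponential series truncates exactly to F = I + l t; the shear rate and time below are illustrative assumptions.

```python
# F = exp(l*t) for a constant spatial velocity gradient l, using simple
# shear as an example. Shear rate and time are illustrative values.
gdot, t = 0.5, 2.0
l = [[0.0, gdot], [0.0, 0.0]]         # spatial velocity gradient (nilpotent)

# exp(l*t) = I + l*t + (l*t)^2/2! + ...  but (l*t)^2 = 0 here, so:
F = [[1.0, gdot * t], [0.0, 1.0]]

# verify the evolution equation dF/dt = l . F: analytically dF/dt = [[0, gdot], [0, 0]]
lF = [[sum(l[i][k] * F[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(lF)                             # equals [[0.0, 0.5], [0.0, 0.0]]
```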
10.
Infinitesimal strain theory
–
With this assumption, the equations of continuum mechanics are considerably simplified. This approach may also be called small deformation theory or small displacement theory; it is contrasted with finite strain theory, where the opposite assumption is made. In such a linearization, the non-linear or second-order terms of the finite strain tensor are neglected. Therefore, the material displacement gradient components and the spatial displacement gradient components are approximately equal. From the geometry of Figure 1 we have a̅b̅ = dx √(1 + 2 ∂u_x/∂x + (∂u_x/∂x)² + (∂u_y/∂x)²); for very small displacement gradients the squared terms are negligible. The diagonal elements of the infinitesimal strain tensor are therefore the normal strains in the coordinate directions. The results of certain operations on the strain tensor are independent of the coordinate system and are called strain invariants. In the principal coordinate system there are no shear strain components. An octahedral plane is one whose normal makes equal angles with the three principal directions. The engineering shear strain on an octahedral plane is called the octahedral shear strain and is given by γ_oct = (2/3) √((ε1 − ε2)² + (ε2 − ε3)² + (ε3 − ε1)²), where ε1, ε2, ε3 are the principal strains. Several definitions of equivalent strain can be found in the literature. For an arbitrary choice of strain components, a displacement field producing them does not generally exist; therefore, some restrictions, named compatibility equations, are imposed upon the strain components. With the addition of the three compatibility equations the number of independent equations is reduced to three, matching the number of unknown displacement components. These constraints on the strain tensor were discovered by Saint-Venant, and are called the Saint-Venant compatibility equations. The compatibility functions serve to assure a single-valued continuous displacement function u_i. If the strains associated with the out-of-plane direction, i.e. the normal strain ε33 and the associated shear strains, are negligible, plane strain is an acceptable approximation.
The strain tensor for plane strain is written as ε̲̲ = [[ε11, ε12, 0], [ε21, ε22, 0], [0, 0, 0]], in which the double underline indicates a second-order tensor; this strain state is called plane strain. The corresponding stress tensor is σ̲̲ = [[σ11, σ12, 0], [σ21, σ22, 0], [0, 0, σ33]], in which the non-zero σ33 is needed to maintain the constraint ε33 = 0. This stress term can be removed from the analysis to leave only the in-plane terms. Antiplane strain is another state of strain that can occur in a body. For infinitesimal deformations the scalar components of the rotation tensor ω satisfy the condition |ω_ij| ≪ 1; note that the displacement gradient is small only if both the strain tensor and the rotation tensor are infinitesimal.
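The octahedral shear strain formula γ_oct = (2/3)√((ε1 − ε2)² + (ε2 − ε3)² + (ε3 − ε1)²) is straightforward to evaluate; the principal strain values below are illustrative.

```python
# Octahedral shear strain from the three principal strains.
# Principal strain values are illustrative.
import math

e1, e2, e3 = 0.004, 0.001, -0.002
gamma_oct = (2.0 / 3.0) * math.sqrt(
    (e1 - e2) ** 2 + (e2 - e3) ** 2 + (e3 - e1) ** 2)
print(f"{gamma_oct:.6f}")             # about 0.004899
```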
11.
Elasticity (physics)
–
In physics, elasticity is the ability of a body to resist a distorting influence or deforming force and to return to its original size and shape when that influence or force is removed. Solid objects will deform when adequate forces are applied to them; if the material is elastic, the object will return to its initial shape and size when these forces are removed. The physical reasons for elastic behavior can be quite different for different materials. In metals, the atomic lattice changes size and shape when forces are applied; when forces are removed, the lattice goes back to the original lower-energy state. For rubbers and other polymers, elasticity is caused by the stretching of polymer chains when forces are applied. Perfect elasticity is an approximation of the real world: the most nearly elastic body known is quartz fibre, and even it is not a perfectly elastic body, so the perfectly elastic body is an ideal concept only. Most materials which possess elasticity in practice remain purely elastic only up to very small deformations. In engineering, the amount of elasticity of a material is determined by two types of material parameter. The first type of material parameter is called a modulus, which measures the amount of force per unit area needed to achieve a given amount of deformation. The SI unit of modulus is the pascal; a higher modulus typically indicates that the material is harder to deform. The second type of parameter measures the elastic limit, the maximum stress that can arise in a material before the onset of permanent deformation; its SI unit is also the pascal. When describing the relative elasticities of two materials, both the modulus and the elastic limit have to be considered. Rubbers typically have a low modulus and tend to stretch a lot; of two rubber materials with the same elastic limit, the one with the lower modulus will appear to be more elastic.
When an elastic material is deformed due to a force, it experiences internal resistance to the deformation. The various elastic moduli apply to different kinds of deformation; for instance, Young's modulus applies to extension/compression of a body, whereas the shear modulus applies to its shear. The elasticity of materials is described by a stress-strain curve, which shows the relation between stress and strain. In general the curve is nonlinear, but it can be approximated as linear for sufficiently small deformations. For even higher stresses, materials exhibit plastic behavior, that is, they deform irreversibly. Elasticity is not exhibited only by solids: non-Newtonian fluids, such as viscoelastic fluids, will also exhibit elasticity in certain conditions. In response to a small, rapidly applied and removed strain, these fluids may deform and then return to their original shape; under larger strains, or strains applied for longer periods of time, they may start to flow like a viscous liquid. Because the elasticity of a material is described in terms of a stress-strain relation, it is essential that the terms stress and strain be defined without ambiguity.
12.
Linear elasticity
–
Linear elasticity is the mathematical study of how solid objects deform and become internally stressed due to prescribed loading conditions. Linear elasticity models materials as continua; it is a simplification of the more general nonlinear theory of elasticity and a branch of continuum mechanics. The fundamental linearizing assumptions of linear elasticity are infinitesimal strains (small deformations) and linear relationships between the components of stress and strain; in addition, linear elasticity is valid only for stress states that do not produce yielding. These assumptions are reasonable for many engineering materials and engineering design scenarios; linear elasticity is therefore used extensively in structural analysis and engineering design, often with the aid of finite element analysis. The system of differential equations is completed by a set of linear algebraic constitutive relations. For elastic materials, Hooke's law represents the material behavior and relates the unknown stresses and strains. Note: the Einstein summation convention of summing on repeated indices is used below. The equilibrium equations are 3 independent equations with 6 independent unknowns. The strain-displacement equations are ε_ij = ½(u_i,j + u_j,i), where ε_ij = ε_ji is the strain; these are 6 independent equations relating strains and displacements, with 9 independent unknowns. The equation for Hooke's law is σ_ij = C_ijkl ε_kl, where C_ijkl is the stiffness tensor; these are 6 independent equations relating stresses and strains. An elastostatic boundary value problem for a linear-elastic medium is thus a system of 15 independent equations. Once the boundary conditions are specified, the boundary value problem is completely defined. To solve it, two approaches can be taken according to the boundary conditions of the boundary value problem: a displacement formulation and a stress formulation.
In isotropic media, the stiffness tensor gives the relationship between the stresses and the strains. For an isotropic medium, the stiffness tensor has no preferred direction: an applied force will give the same displacements no matter the direction in which the force is applied. If the medium is also homogeneous, the elastic moduli are independent of position in the medium, and the constitutive equation may be written as σ_ij = K δ_ij ε_kk + 2μ(ε_ij − ⅓ δ_ij ε_kk), where K is the bulk modulus and μ is the shear modulus. This expression separates the stress into a scalar part on the left, which may be associated with a scalar pressure, and a traceless (shear) part on the right. A simpler expression is σ_ij = λ δ_ij ε_kk + 2μ ε_ij, where λ is Lamé's first parameter. The inverse relation is ε_ij = (1/2μ) σ_ij − (ν/E) δ_ij σ_kk = (1/E)[(1 + ν) σ_ij − ν δ_ij σ_kk], where ν is Poisson's ratio and E is Young's modulus. Elastostatics is the study of linear elasticity under the conditions of equilibrium, in which all forces on the elastic body sum to zero.
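The isotropic constitutive relation in its Lamé form can be sketched numerically; in this hedged illustration the steel-like values of E and ν are assumptions, not data from the article:

```python
# Isotropic Hooke's law, Lame form: sigma_ij = lam*delta_ij*tr(eps) + 2*mu*eps_ij.
# A minimal sketch; the steel-like E and nu values are illustrative assumptions.
def lame_parameters(E, nu):
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame's first parameter
    mu = E / (2 * (1 + nu))                    # shear modulus
    return lam, mu

def stress_from_strain(eps, E, nu):
    lam, mu = lame_parameters(E, nu)
    tr = eps[0][0] + eps[1][1] + eps[2][2]     # trace of the strain tensor
    return [[lam * tr * (1 if i == j else 0) + 2 * mu * eps[i][j]
             for j in range(3)] for i in range(3)]

E, nu = 200e9, 0.3                              # steel-like values (Pa, -)
eps = [[1e-4, 0, 0], [0, 0, 0], [0, 0, 0]]      # uniaxial strain state
sigma = stress_from_strain(eps, E, nu)
print(sigma[0][0] / 1e6, "MPa")                 # axial stress component
```

The off-axis components σ_22 = σ_33 = λ ε_11 are nonzero here even though only ε_11 was applied, which is the Poisson coupling the text describes.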
13.
Plasticity (physics)
–
In physics and materials science, plasticity describes the deformation of a material undergoing non-reversible changes of shape in response to applied forces. For example, a piece of metal being bent or pounded into a new shape displays plasticity, as permanent changes occur within the material itself. In engineering, the transition from elastic behavior to plastic behavior is called yield. Plastic deformation is observed in most materials, particularly metals, soils, rocks, concrete, foams, bone and skin. However, the physical mechanisms that cause plastic deformation can vary widely. At a crystalline scale, plasticity in metals is usually a consequence of dislocations. Such defects are relatively rare in most crystalline materials, but are numerous in some and part of their crystal structure; in such cases, plastic crystallinity can result. In brittle materials such as rock, concrete and bone, plasticity is caused predominantly by slip at microcracks. For many ductile metals, tensile loading applied to a sample will cause it to behave in an elastic manner: each increment of load is accompanied by a proportional increment in extension, and when the load is removed, the sample returns to its original size. However, once the load exceeds a threshold, the yield strength, the extension increases more rapidly than in the elastic region, and when the load is removed, some degree of extension remains. Elastic deformation, however, is an approximation, and its quality depends on the time frame considered and the loading speed. If, as indicated in the graph opposite, the deformation includes elastic deformation, it is also often referred to as elasto-plastic deformation or elastic-plastic deformation. Perfect plasticity is a property of materials to undergo irreversible deformation without any increase in stresses or loads; plastic materials with hardening necessitate increasingly higher stresses to result in further plastic deformation. Generally, plastic deformation is also dependent on the deformation speed.
Such materials are said to deform visco-plastically. The plasticity of a material is directly proportional to its ductility and malleability. Plasticity in a crystal of pure metal is primarily caused by two modes of deformation in the crystal lattice: slip and twinning. Slip is a shear deformation which moves the atoms through many interatomic distances relative to their initial positions. Twinning is plastic deformation that takes place along two planes due to a set of forces applied to a given metal piece. Most metals show more plasticity when hot than when cold. Lead shows sufficient plasticity at room temperature, while cast iron does not possess sufficient plasticity for any forging operation even when hot. This property is of importance in forming, shaping and extruding operations on metals; most metals are rendered plastic by heating and hence shaped hot.
14.
Bending
–
In applied mechanics, bending characterizes the behavior of a slender structural element subjected to an external load applied perpendicularly to a longitudinal axis of the element. The structural element is assumed to be such that at least one of its dimensions is a small fraction, typically 1/10 or less, of the other two. When the length is considerably longer than the width and the thickness, the element is called a beam. For example, a closet rod sagging under the weight of clothes on clothes hangers is a beam experiencing bending. On the other hand, a shell is a structure of any geometric form where the length and the width are of the same order of magnitude but the thickness is much smaller; a large-diameter but thin-walled short tube supported at its ends is an example of a shell. In the absence of a qualifier, the term bending is ambiguous because bending can occur locally in all objects. Therefore, to make the usage of the term more precise, engineers refer to a specific object, such as the bending of rods, the bending of beams, or the bending of plates. A beam deforms and stresses develop inside it when a transverse load is applied to it. In the quasi-static case, the amount of bending deflection and the stresses that develop are assumed not to change over time. In a horizontal beam supported at the ends and loaded downwards in the middle, the material at the top of the beam is compressed while the material at the bottom is stretched; these two internal forces form a couple, or moment, as they are equal in magnitude and opposite in direction. This bending moment resists the sagging deformation characteristic of a beam experiencing bending. The stress distribution in a beam can be predicted quite accurately when some simplifying assumptions are used. In the Euler–Bernoulli theory of beams, a major assumption is that plane sections remain plane; in other words, any deformation due to shear across the section is not accounted for. Also, this linear stress distribution is only applicable if the maximum stress is less than the yield stress of the material. For stresses that exceed yield, refer to the article on plastic bending. At yield, the maximum stress experienced in the section is defined as the flexural strength.
Simple beam bending is often analyzed with the Euler–Bernoulli beam equation. The conditions for using simple bending theory are: the beam is subject to pure bending, meaning that the shear force is zero and that no torsional or axial loads are present; the material is isotropic and homogeneous; the beam is initially straight with a cross section that is constant throughout the beam length; the beam has an axis of symmetry in the plane of bending; and the proportions of the beam are such that it would fail by bending rather than by crushing, wrinkling or sideways buckling.
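As a rough numerical illustration (not from the article), the standard Euler–Bernoulli results for a simply supported rectangular beam with a central point load can be sketched in Python; the load, span, section size and steel-like modulus are all made-up assumptions:

```python
# Euler-Bernoulli bending of a simply supported rectangular beam with a
# central point load (a hedged sketch; all input values are illustrative).
def max_bending_stress(P, L, b, h):
    """sigma_max = M*c/I, with M = P*L/4, c = h/2, I = b*h**3/12."""
    M = P * L / 4.0          # maximum bending moment at midspan (N*m)
    I = b * h**3 / 12.0      # second moment of area of the section (m^4)
    c = h / 2.0              # distance from neutral axis to outer fiber (m)
    return M * c / I         # Pa

def midspan_deflection(P, L, E, b, h):
    """delta = P*L**3 / (48*E*I) for a central point load."""
    I = b * h**3 / 12.0
    return P * L**3 / (48.0 * E * I)

# Example: 1 kN at midspan of a 2 m steel beam, 40 mm x 60 mm cross section.
sigma = max_bending_stress(1e3, 2.0, 0.04, 0.06)
delta = midspan_deflection(1e3, 2.0, 200e9, 0.04, 0.06)
print(sigma / 1e6, "MPa,", delta * 1e3, "mm")
```

The linear stress distribution assumed here is only valid while sigma stays below the yield stress, as the text notes.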
15.
Hooke's law
–
Hooke's law is a principle of physics that states that the force needed to extend or compress a spring by some distance X is proportional to that distance. That is, F = kX, where k is a constant factor characteristic of the spring (its stiffness). The law is named after the 17th-century British physicist Robert Hooke. He first stated the law in 1676 as a Latin anagram, and he published the solution of his anagram in 1678 as ut tensio, sic vis ("as the extension, so the force"). Hooke states in the 1678 work that he was aware of the law as early as 1660. An elastic body or material for which this equation can be assumed is said to be linear-elastic or Hookean. Hooke's law is only a linear approximation to the real response of springs and other elastic bodies; many materials will deviate from Hooke's law well before their elastic limits are reached. On the other hand, Hooke's law is an accurate approximation for most solid bodies, as long as the forces and deformations are small enough. For this reason, Hooke's law is used extensively in all branches of science and engineering. It is also the principle behind the spring scale and the manometer. The modern theory of elasticity generalizes Hooke's law to say that the strain of an elastic object or material is proportional to the stress applied to it. In this general form, Hooke's law makes it possible to deduce the relation between strain and stress for complex objects in terms of the intrinsic properties of the materials they are made of. Consider a simple helical spring that has one end attached to some fixed object, while the free end is pulled by a force. Suppose that the spring has reached a state of equilibrium, where its length is not changing anymore. Let X be the amount by which the free end of the spring was displaced from its relaxed position. Hooke's law states that F = kX or, equivalently, X = F/k, where k is a positive real number characteristic of the spring. Moreover, the same formula holds when the spring is compressed, with F and X both taken as negative in that case. According to this formula, the graph of the applied force F as a function of the displacement X will be a straight line passing through the origin, whose slope is k.
Hooke's law for a spring is often stated under the convention that F is the restoring force exerted by the spring on whatever is pulling its free end. In that case, the equation becomes F = −kX, since the direction of the restoring force is opposite to that of the displacement.
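The restoring-force convention can be illustrated with a few lines of Python; the stiffness and displacement values are made up for the example:

```python
# Hooke's law for an ideal spring, restoring-force convention F = -k*x.
# A minimal sketch; the stiffness and displacement values are illustrative.
def spring_force(k, x):
    """Restoring force (N) for stiffness k (N/m) and displacement x (m)."""
    return -k * x

k = 250.0                 # spring stiffness, N/m
x = 0.04                  # spring stretched by 4 cm
print(spring_force(k, x)) # negative: the force pulls back toward rest
```

Note the sign convention: stretching (x > 0) gives a negative (pulling-back) force, and compressing (x < 0) gives a positive one, matching F = −kX above.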
16.
Material failure theory
–
Failure theory is the science of predicting the conditions under which solid materials fail under the action of external loads. The failure of a material is usually classified into brittle failure (fracture) or ductile failure (yield). Depending on the conditions, most materials can fail in a brittle or ductile manner, or both; however, for most practical situations a material may be classified as either brittle or ductile. Failure theory has been in development for over 200 years. In mathematical terms, it is expressed in the form of various failure criteria which are valid for specific materials. Failure criteria are functions in stress or strain space which separate failed states from unfailed states. A precise physical definition of a failed state is not easily quantified, and several working definitions are in use in the engineering community. Quite often, phenomenological failure criteria of this form are used to predict brittle failure and ductile yield. In materials science, material failure is the loss of load-carrying capacity of a material unit. This definition in itself introduces the fact that failure can be examined at different scales, from the microscopic to the macroscopic. Due to the lack of globally accepted fracture criteria, such methodologies are useful mainly for gaining insight into the cracking of specimens and simple structures under well-defined global load distributions. Microscopic failure considers the initiation and propagation of a crack; failure criteria in this case are related to microscopic fracture. Some of the most popular models in this area are the micromechanical failure models. One such model was proposed by Gurson and extended by Tvergaard; another approach, proposed by Rousselier, is based on continuum damage mechanics and thermodynamics. Both models form a modification of the von Mises yield potential by introducing a scalar damage quantity which represents the void volume fraction of cavities.
Macroscopic material failure is defined in terms of load-carrying capacity or energy storage capacity. Li presents a classification of macroscopic failure criteria in four categories: stress or strain failure, energy type failure, damage failure, and empirical failure. The material behavior at one level is considered as a collective of its behavior at a sub-level; an efficient deformation and failure model should be consistent at every level. The maximum stress criterion assumes that a material fails when the maximum principal stress σ1 in a material element exceeds the uniaxial tensile strength of the material. Alternatively, the material will fail if the minimum principal stress σ3 is less than the uniaxial compressive strength of the material. Numerous other phenomenological failure criteria can be found in the engineering literature; the degree of success of these criteria in predicting failure has been limited.
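The maximum stress criterion just described can be sketched as a small Python check; the strength values are made-up illustrative numbers, with tension taken as positive and the compressive strength entered as a negative stress:

```python
# Maximum (principal) stress failure criterion, a hedged sketch.
# Convention: tension positive; compressive strength is a negative stress.
def max_stress_fails(sigma1, sigma3, sigma_t, sigma_c):
    """True if the element fails: sigma1 exceeds the tensile strength
    sigma_t, or sigma3 falls below the compressive strength sigma_c."""
    return sigma1 > sigma_t or sigma3 < sigma_c

# Illustrative strengths for a brittle material (made-up values, MPa).
sigma_t, sigma_c = 40.0, -150.0
print(max_stress_fails(35.0, -100.0, sigma_t, sigma_c))  # inside the envelope
print(max_stress_fails(55.0, -100.0, sigma_t, sigma_c))  # tensile failure
```

The criterion defines a box-shaped failure envelope in principal-stress space; more refined criteria (Mohr–Coulomb, von Mises, and others) reshape that envelope.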
17.
Fracture mechanics
–
Fracture mechanics is the field of mechanics concerned with the study of the propagation of cracks in materials. It uses methods of solid mechanics to calculate the driving force on a crack. In modern materials science, fracture mechanics is an important tool used to improve the performance of mechanical components. Fractography is widely used with fracture mechanics to understand the causes of failures and to verify theoretical failure predictions against real-life failures. The prediction of crack growth is at the heart of the damage tolerance mechanical design discipline. There are three ways of applying a force to enable a crack to propagate: Mode I fracture – opening mode (a tensile stress normal to the plane of the crack); Mode II fracture – sliding mode (a shear stress acting parallel to the plane of the crack and perpendicular to the crack front); and Mode III fracture – tearing mode (a shear stress acting parallel to the plane of the crack and parallel to the crack front). The processes of material manufacture, processing, machining and forming may introduce flaws in a finished mechanical component. Arising from the manufacturing process, interior and surface flaws are found in all metal structures. Not all such flaws are unstable under service conditions. Fracture mechanics is the analysis of flaws to discover those that are safe and those that are liable to propagate as cracks and so cause failure of the flawed structure. Despite these inherent flaws, it is possible to achieve, through damage tolerance analysis, the safe operation of a structure. Fracture mechanics as a subject for critical study has barely been around for a century and thus is relatively new. Fracture mechanics should attempt to provide quantitative answers to the following questions: What crack size can be tolerated under service loading, i.e. what is the maximum permissible crack size? How long does it take for a crack to grow from an initial size, for example the minimum detectable crack size, to the maximum permissible size? What is the service life of a structure when a certain pre-existing flaw size is assumed to exist? During the period available for crack detection, how often should the structure be inspected for cracks? Fracture mechanics was developed during World War I by English aeronautical engineer A. A. Griffith to explain the failure of brittle materials. Griffith's work was motivated by two contradictory facts: the stress needed to fracture bulk glass is around 100 MPa, while the theoretical stress needed for breaking the atomic bonds of glass is approximately 10,000 MPa. A theory was needed to reconcile these conflicting observations. Also, experiments on glass fibers that Griffith himself conducted suggested that the fracture stress increases as the fiber diameter decreases. Hence the uniaxial tensile strength, which had been used extensively to predict material failure before Griffith, could not be a specimen-independent material property. Griffith suggested that the low fracture strength observed in experiments, as well as the size-dependence of strength, was due to the presence of microscopic flaws in the bulk material. To verify the flaw hypothesis, Griffith introduced an artificial flaw in his experimental glass specimens.
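Griffith's flaw hypothesis leads to his well-known fracture criterion, σ_f = sqrt(2·E·γ_s / (π·a)) for a through crack of half-length a. A hedged numerical sketch follows; the glass-like property values are order-of-magnitude assumptions, not data from Griffith's experiments:

```python
import math

# Griffith's criterion for brittle fracture: sigma_f = sqrt(2*E*gamma_s/(pi*a)).
# A sketch with glass-like property values (assumed, order-of-magnitude only).
def griffith_fracture_stress(E, gamma_s, a):
    """Fracture stress (Pa) for Young's modulus E (Pa), specific surface
    energy gamma_s (J/m^2) and crack half-length a (m)."""
    return math.sqrt(2.0 * E * gamma_s / (math.pi * a))

E = 70e9          # glass-like Young's modulus, Pa (assumed)
gamma_s = 1.0     # assumed surface energy, J/m^2
for a in (1e-6, 1e-4):   # longer flaws give lower strength
    print(a, griffith_fracture_stress(E, gamma_s, a) / 1e6, "MPa")
```

The 1/sqrt(a) dependence is the key point: strength is set by the largest flaw present, which is why thin fibers (too small to contain large flaws) appear stronger, consistent with Griffith's fiber experiments.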
18.
Contact mechanics
–
Contact mechanics is the study of the deformation of solids that touch each other at one or more points. Central aspects in contact mechanics are the pressures and adhesion acting perpendicular to the contacting bodies' surfaces. This page focuses mainly on the normal direction, i.e. on frictionless contact mechanics; frictional contact mechanics is discussed separately. Current challenges faced in the field may include stress analysis of contact and coupling members and the influence of lubrication and material design on friction and wear. Applications of contact mechanics further extend into the micro- and nanotechnological realm. The original work in contact mechanics dates back to 1882, with the publication of the paper "On the contact of elastic solids" by Heinrich Hertz. Hertz was attempting to understand how the optical properties of multiple, stacked lenses might change with the force holding them together. Hertzian contact stress refers to the stresses that develop as two curved surfaces come in contact and deform slightly under the imposed loads. This amount of deformation is dependent on the moduli of elasticity of the materials in contact. Hertz's theory gives the contact stress as a function of the normal contact force, the radii of curvature of both bodies and the moduli of elasticity of both bodies. Hertzian contact stress forms the foundation for the equations for load-bearing capabilities and fatigue life in bearings and gears. Classical contact mechanics is most notably associated with Heinrich Hertz: in 1882, Hertz solved the contact problem of two elastic bodies with curved surfaces. This still-relevant classical solution provides a foundation for modern problems in contact mechanics; for example, in mechanical engineering and tribology, Hertzian contact stress is a description of the stress within mating parts. The Hertzian contact stress usually refers to the stress close to the area of contact between two spheres of different radii. It was not until nearly one hundred years later that Johnson, Kendall and Roberts found a similar solution for the case of adhesive contact.
This theory was rejected by Boris Derjaguin and co-workers, who proposed a different theory of adhesion in the 1970s. The Derjaguin model came to be known as the DMT model, and the Johnson et al. model came to be known as the JKR model for adhesive elastic contact. This rejection proved to be instrumental in the development of the Tabor parameter, which indicates which of the two models better describes a given adhesive contact. Further advancement in the field of contact mechanics in the mid-twentieth century may be attributed to names such as Bowden and Tabor. Bowden and Tabor were the first to emphasize the importance of surface roughness for bodies in contact: through investigation of the surface roughness, the true contact area between friction partners is found to be less than the apparent contact area. Such understanding also drastically changed the direction of undertakings in tribology. The works of Bowden and Tabor yielded several theories in contact mechanics of rough surfaces. The contributions of Archard must also be mentioned in discussion of pioneering works in this field: Archard concluded that, even for rough elastic surfaces, the contact area is approximately proportional to the normal force.
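Hertz's classical solution for two spheres pressed together can be sketched numerically. In this hedged illustration, the load, radii and steel-like elastic properties are assumed values; the formulas used are the standard Hertz results for the contact radius a = (3FR/4E*)^(1/3) and peak pressure p0 = 3F/(2πa²):

```python
import math

# Hertzian contact of two elastic spheres (a minimal sketch; the material
# and geometry values below are illustrative assumptions).
def hertz_sphere_contact(F, R1, R2, E1, nu1, E2, nu2):
    """Return contact radius a (m) and peak contact pressure p0 (Pa)."""
    R = 1.0 / (1.0 / R1 + 1.0 / R2)                     # effective radius
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # contact modulus
    a = (3.0 * F * R / (4.0 * E_star)) ** (1.0 / 3.0)   # contact radius
    p0 = 3.0 * F / (2.0 * math.pi * a**2)               # max contact pressure
    return a, p0

# Example: 100 N pressing two 10 mm radius steel-like spheres together.
a, p0 = hertz_sphere_contact(100.0, 0.01, 0.01, 200e9, 0.3, 200e9, 0.3)
print(a * 1e6, "um,", p0 / 1e9, "GPa")
```

Even a modest load produces a very small contact patch and a very high peak pressure, which is why Hertzian stress governs fatigue life in bearings and gears.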
19.
Frictional contact mechanics
–
Contact mechanics is the study of the deformation of solids that touch each other at one or more points. The forces at the interface can be divided into compressive and adhesive forces acting perpendicular to the interface, and frictional forces acting tangentially. Frictional contact mechanics is the study of the deformation of bodies in the presence of frictional effects. Frictional contact mechanics is concerned with a large range of different scales. At the macroscopic scale, it is applied for the investigation of the motion of contacting bodies; for instance, the bouncing of a rubber ball on a surface depends on the frictional interaction at the contact interface. Here the total force versus indentation and lateral displacement are of main concern. At the intermediate scale, one is interested in the local stresses, strains and deformations of the contacting bodies in and near the contact area, for instance to derive or validate contact models at the macroscopic scale, or to investigate wear. Application areas of this scale are tire–pavement interaction, railway wheel–rail interaction and roller bearing analysis. Several famous scientists, engineers and mathematicians contributed to our understanding of friction; they include Leonardo da Vinci, Guillaume Amontons, John Theophilus Desaguliers, Leonhard Euler, and Charles-Augustin de Coulomb. Later, Nikolai Pavlovich Petrov, Osborne Reynolds and Richard Stribeck supplemented this understanding with theories of lubrication. Deformation of solid materials was investigated in the 17th and 18th centuries by Robert Hooke and Joseph Louis Lagrange, and in the 19th and 20th centuries by d'Alembert and Timoshenko. With respect to contact mechanics, the classical contribution by Heinrich Hertz stands out; further, the fundamental solutions by Boussinesq and Cerruti are of primary importance for the investigation of frictional contact problems in the elastic regime. Classical results for a true frictional contact problem concern the papers by F. W.
Carter and, independently, H. Fromm; they presented the creep versus creep-force relation for a cylinder on a plane, or for two cylinders, in steady rolling contact using Coulomb's dry friction law. These results are applied to railway locomotive traction and to understanding the hunting oscillation of railway vehicles. With respect to sliding, the classical solutions are due to C. Cattaneo and R. D. Mindlin, who considered the tangential shifting of a sphere on a plane. In the 1950s, interest in the rolling contact of railway wheels grew. Johnson presented an approach for the 3D frictional problem with Hertzian geometry. Among other things, he found that spin creepage, which is symmetric about the center of the contact patch, gives rise to a net lateral force; this is due to the fore–aft differences in the distribution of tractions in the contact patch. In 1967, Joost Kalker published his milestone PhD thesis on the linear theory for rolling contact. This theory is exact for the situation of an infinite friction coefficient, in which case the slip area vanishes. It does assume Coulomb's friction law, which more or less requires clean surfaces. This theory is for massive bodies such as the railway wheel–rail contact.
20.
Fluid mechanics
–
Fluid mechanics is a branch of physics concerned with the mechanics of fluids (liquids, gases and plasmas) and the forces on them. Fluid mechanics has a wide range of applications, including mechanical engineering, civil engineering, chemical engineering, geophysics and astrophysics. Fluid mechanics can be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Fluid mechanics, especially fluid dynamics, is an active field of research with many problems that are partly or wholly unsolved. Fluid mechanics problems can be mathematically complex and can often best be solved by numerical methods. A modern discipline, called computational fluid dynamics, is devoted to this approach to solving fluid mechanics problems. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of fluid flow. Inviscid flow was further analyzed by various mathematicians, and viscous flow was explored by a multitude of engineers including Jean Léonard Marie Poiseuille. Fluid statics or hydrostatics is the branch of fluid mechanics that studies fluids at rest. It embraces the study of the conditions under which fluids are at rest in stable equilibrium, and is contrasted with fluid dynamics. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to some aspects of geophysics and astrophysics, to meteorology and to medicine. Fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow, the science of liquids and gases in motion. The solution to a fluid dynamics problem typically involves calculating various properties of the fluid, such as velocity, pressure and density. It has several subdisciplines itself, including aerodynamics and hydrodynamics. Some fluid-dynamical principles are used in traffic engineering and crowd dynamics.
Fluid mechanics is a subdiscipline of continuum mechanics, as illustrated in the following table. In a mechanical view, a fluid is a substance that does not support shear stress; that is why a fluid at rest has the shape of its containing vessel. A fluid at rest has no shear stress. The assumptions inherent to a fluid mechanical treatment of a physical system can be expressed in terms of mathematical equations; for example, conservation of mass can be expressed as an equation in integral form over a control volume. The continuum assumption is an idealization of continuum mechanics under which fluids can be treated as continuous, even though, on a microscopic scale, they are composed of molecules. Under this assumption, fluid properties can vary continuously from one volume element to another and are average values of the underlying molecular properties. The continuum hypothesis can lead to inaccurate results in applications like supersonic flows or molecular flows at the nanoscale. Those problems for which the continuum hypothesis fails can be solved using statistical mechanics. To determine whether or not the continuum hypothesis applies, the Knudsen number, defined as the ratio of the molecular mean free path to the characteristic length scale, is evaluated. Problems with Knudsen numbers below 0.1 can be evaluated using the continuum hypothesis. The Navier–Stokes equations are differential equations that describe the force balance at a given point within a fluid.
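The Knudsen-number check just described is easy to sketch in code; the mean free path of air (~68 nm at sea level) and the two length scales in this hedged example are illustrative values:

```python
# Knudsen number check for the continuum hypothesis (a minimal sketch;
# the mean-free-path and length-scale values are illustrative).
def knudsen_number(mean_free_path, length_scale):
    return mean_free_path / length_scale

def continuum_valid(Kn, threshold=0.1):
    """Continuum treatment is conventionally taken to hold for Kn below ~0.1."""
    return Kn < threshold

# Air at sea level: mean free path ~68 nm. Compare a 1 mm duct vs a 100 nm pore.
for L in (1e-3, 1e-7):
    Kn = knudsen_number(68e-9, L)
    print(L, Kn, continuum_valid(Kn))
```

For the millimeter-scale duct the continuum treatment holds comfortably; for the 100 nm pore the Knudsen number exceeds 0.1 and a statistical-mechanical (molecular) treatment is needed, as the text states.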
21.
Fluid
–
In physics, a fluid is a substance that continually deforms (flows) under an applied shear stress. Fluids are a subset of the phases of matter and include liquids, gases and plasmas. Fluids are substances that have zero shear modulus or, in simpler terms, substances that cannot resist any shear force applied to them. Although the term includes both the liquid and gas phases, in common usage fluid is often used as a synonym for liquid. For example, brake fluid is hydraulic oil and will not perform its required incompressible function if there is gas in it; this colloquial usage of the term is also common in medicine and in nutrition. Liquids form a free surface while gases do not. The distinction between solids and fluids is not entirely obvious; the distinction is made by evaluating the viscosity of the substance. Silly Putty can be considered to behave like a solid or a fluid, depending on the time period over which it is observed; it is best described as a viscoelastic fluid. There are many examples of substances proving difficult to classify; a particularly interesting one is pitch, as demonstrated in the pitch drop experiment currently running at the University of Queensland. Fluids display properties such as not resisting deformation, or resisting it only slightly (viscosity), and the ability to flow. These properties are typically a function of their inability to support a shear stress in static equilibrium. Solids can be subjected to shear stresses and to normal stresses, both compressive and tensile. In contrast, ideal fluids can only be subjected to normal, compressive stress (pressure); real fluids display viscosity and so are capable of being subjected to low levels of shear stress. In a solid, shear stress is a function of strain, whereas in a fluid it is a function of strain rate. A consequence of this behavior is Pascal's law, which describes the role of pressure in characterizing a fluid's state. The study of fluids is fluid mechanics, which is subdivided into fluid dynamics and fluid statics, depending on whether the fluid is in motion.
22.
Fluid statics
–
Fluid statics or hydrostatics is the branch of fluid mechanics that studies incompressible fluids at rest. It encompasses the study of the conditions under which fluids are at rest in stable equilibrium, as opposed to fluid dynamics, the study of fluids in motion. Hydrostatics is categorized as a part of fluid statics, which is the study of all fluids, incompressible or not, at rest. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids; it is also relevant to geophysics and astrophysics, to meteorology, to medicine, and to many other fields. Some principles of hydrostatics have been known in an empirical and intuitive sense since antiquity by the builders of boats, cisterns, aqueducts and fountains. Archimedes is credited with the discovery of Archimedes' principle, which relates the buoyancy force on an object that is submerged in a fluid to the weight of fluid displaced by the object. The fair cup or Pythagorean cup, which dates from about the 6th century BC, is a hydraulic technology whose invention is credited to the Greek mathematician Pythagoras. It was used as a learning tool. The cup consists of a line carved into its interior and a small vertical pipe in the center of the cup that leads to the bottom. The height of this pipe is the same as that of the line carved into the interior of the cup. The cup may be filled to the line without any fluid passing into the pipe in the center of the cup. However, when the amount of fluid exceeds this fill line, fluid enters the central pipe and, due to the drag that molecules exert on one another (siphon action), the cup will be emptied. Heron's fountain is a device invented by Heron of Alexandria that consists of a jet of fluid being fed by a reservoir of fluid. The fountain is constructed in such a way that the height of the jet exceeds the height of the fluid in the reservoir. The device consisted of an opening and two containers arranged one above the other.
The intermediate pot, which was sealed, was filled with fluid, and trapped air inside the vessels induces a jet of water out of a nozzle, emptying all water from the intermediate reservoir. Pascal made contributions to developments in both hydrostatics and hydrodynamics. Due to the fundamental nature of fluids, a fluid cannot remain at rest under the presence of a shear stress. However, fluids can exert pressure normal to any contacting surface. If a point in the fluid is thought of as an infinitesimally small cube, then it follows from the principles of equilibrium that the pressure on every side of this unit of fluid must be equal. If this were not the case, the fluid would move in the direction of the resulting force. Thus, the pressure on a fluid at rest is isotropic, i.e. it acts with equal magnitude in all directions. This characteristic allows fluids to transmit force through the length of pipes or tubes: a force applied to a fluid in a pipe is transmitted, via the fluid, to the other end of the pipe. This principle was first formulated, in a slightly extended form, by Blaise Pascal. In a fluid at rest, all frictional and inertial stresses vanish; when this condition of V = 0 is applied to the Navier–Stokes equations, the gradient of pressure becomes a function of body forces only.
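For a fluid at rest under gravity, the pressure-gradient balance just mentioned integrates to the familiar hydrostatic relation p = p0 + ρgh. A minimal sketch, using the usual textbook values for water density and standard gravity:

```python
# Hydrostatic pressure at depth h: p = p0 + rho*g*h (a minimal sketch;
# water density and standard gravity are the usual textbook values).
def hydrostatic_pressure(depth, rho=1000.0, g=9.81, p0=101325.0):
    """Absolute pressure (Pa) at a given depth (m) in a fluid at rest."""
    return p0 + rho * g * depth

# Pressure at 10 m depth in fresh water: roughly double atmospheric.
print(hydrostatic_pressure(10.0))
```

Because the pressure in a static fluid is isotropic, this value acts equally in all directions at that depth, which is what lets pipes and hydraulic lines transmit force.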
23.
Fluid dynamics
–
In physics and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids. It has several subdisciplines, including aerodynamics and hydrodynamics. Before the twentieth century, hydrodynamics was synonymous with fluid dynamics; this is still reflected in the names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability. The foundational axioms of fluid dynamics are the conservation laws, specifically conservation of mass, conservation of linear momentum, and conservation of energy. These are based on classical mechanics and are modified in quantum mechanics. They are expressed using the Reynolds transport theorem. In addition to the above, fluids are assumed to obey the continuum assumption. Fluids are composed of molecules that collide with one another and with solid objects; however, the continuum assumption treats fluids as continuous rather than discrete. The fact that the fluid is made up of molecules is ignored. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in a number of ways, all of which make them easier to solve; some of the simplifications allow some simple fluid dynamics problems to be solved in closed form. Three conservation laws are used to solve fluid dynamics problems. The conservation laws may be applied to a region of the flow called a control volume, a volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum or energy within the control volume. Mass continuity (conservation of mass): the rate of change of fluid mass inside a control volume must be equal to the net rate of fluid flow into the volume. Mass flow into the system is accounted as positive, and since the normal vector to the surface is opposite the sense of flow into the system, the term is negated.
In the momentum balance, the first term on the right is the net rate at which momentum is convected into the volume, and the second term on the right is the force due to pressure on the volume's surfaces. The first two terms on the right are negated since momentum entering the system is accounted as positive and the normal vector to the surface is opposite the sense of flow into the system. The third term on the right is the net acceleration of the mass within the volume due to any body forces. Surface forces, such as viscous forces, are represented by F_surf, the net force due to shear forces acting on the volume's surface. The following is the differential form of the momentum conservation equation.
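The control-volume mass-continuity statement above can be illustrated with a steady-flow duct example; the pipe areas and inlet velocity in this hedged sketch are made-up numbers:

```python
# Steady-flow mass continuity for a duct: rho1*A1*v1 = rho2*A2*v2
# (a minimal sketch; the incompressible-flow numbers below are made up).
def outlet_velocity(rho1, A1, v1, rho2, A2):
    """Velocity at section 2 from conservation of mass across a duct."""
    return rho1 * A1 * v1 / (rho2 * A2)

# Water in a pipe narrowing from 0.02 m^2 to 0.005 m^2 at 1.5 m/s inlet.
v2 = outlet_velocity(1000.0, 0.02, 1.5, 1000.0, 0.005)
print(v2)  # flow speeds up as the cross section contracts
```

In steady state the mass inside the control volume does not change, so the mass flow in must equal the mass flow out, which is exactly the balance the integral formulation expresses.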
24.
Archimedes' principle
–
Archimedes' principle is a law of physics fundamental to fluid mechanics. It was formulated by Archimedes of Syracuse in his work On Floating Bodies, where Archimedes suggested that the upward buoyant force exerted on a body immersed in a fluid is equal to the weight of the fluid that the body displaces. Practically, Archimedes' principle allows the buoyancy of an object partially or fully immersed in a liquid to be calculated. The downward force on the object is simply its weight; the upward, or buoyant, force on the object is that stated by Archimedes' principle above. Thus, the net force on the object is the difference between the buoyant force and its weight. If this net force is positive, the object rises; if negative, the object sinks; and if zero, the object is neutrally buoyant. Consider a cube immersed in a fluid with its sides parallel to the direction of gravity. The fluid will exert a normal force on each face, but the forces on opposite vertical faces cancel, and therefore only the forces on the top and bottom faces contribute to buoyancy. The pressure difference between the bottom and the top face is directly proportional to the height of the cube. Multiplying the pressure difference by the area of a face gives the net force on the cube, the buoyancy, which equals the weight of the fluid displaced. By extending this reasoning to irregular shapes, we can see that, whatever the shape of the submerged body, the buoyant force is equal to the weight of the fluid displaced. Apparent loss in weight = weight of object in air − weight of object in water. The weight of the displaced fluid is directly proportional to the volume of the displaced fluid. The weight of the object in the fluid is reduced because of the buoyant force acting on it. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy. Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting on it. Suppose that, when the rock is lowered into water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyant force: 10 − 3 = 7 newtons.
Buoyancy reduces the apparent weight of objects that have sunk completely to the sea floor, and it is generally easier to lift an object up through the water than it is to pull it out of the water. Example: if you drop wood into water, buoyancy will keep it afloat. Example: a helium balloon in a moving car. When the car increases speed or drives round a curve, the air inside moves opposite to the car's acceleration. However, due to buoyancy, the balloon is pushed out of the way by the air, and will actually drift in the same direction as the car's acceleration. When an object is immersed in a liquid, the liquid exerts an upward force, which is known as the buoyant force. The net force acting on the object, then, is equal to the difference between the weight of the object and the weight of displaced liquid; equilibrium, or neutral buoyancy, is achieved when these two weights are equal.
25.
Bernoulli's principle
–
In fluid dynamics, Bernoulli's principle states that an increase in the speed of a fluid occurs simultaneously with a decrease in pressure or a decrease in the fluid's potential energy. The principle is named after Daniel Bernoulli, who published it in his book Hydrodynamica in 1738. Bernoulli's principle can be applied to various types of fluid flow, resulting in various forms of Bernoulli's equation; there are different forms of Bernoulli's equation for different types of flow. The simple form of Bernoulli's equation is valid for incompressible flows; more advanced forms may be applied to compressible flows at higher Mach numbers. Bernoulli's principle can be derived from the principle of conservation of energy. This states that, in a steady flow, the sum of all forms of energy in a fluid along a streamline is the same at all points on that streamline. This requires that the sum of kinetic energy, potential energy and internal energy remains constant. If the fluid is flowing out of a reservoir, the sum of all forms of energy is the same on all streamlines, because in a reservoir the energy per unit volume is the same everywhere. Bernoulli's principle can also be derived directly from Isaac Newton's second law of motion: if a small volume of fluid is flowing horizontally from a region of high pressure to a region of low pressure, then there is more pressure behind than in front. This gives a net force on the volume, accelerating it along the streamline. Fluid particles are subject only to pressure and their own weight. Consequently, within a fluid flowing horizontally, the highest speed occurs where the pressure is lowest, and the lowest speed occurs where the pressure is highest. In most flows of liquids, and of gases at low Mach number, the density can be considered constant, so the fluid can be treated as incompressible; these flows are called incompressible flows. Bernoulli performed his experiments on liquids, so his equation in its original form is valid only for incompressible flow.
The constant on the right-hand side of the equation depends only on the streamline chosen. For conservative force fields, Bernoulli's equation can be generalized: e.g. for the Earth's gravity Ψ = gz. The constant in the Bernoulli equation can be normalised. Most gases and liquids are not capable of negative absolute pressure, or even zero pressure, so clearly Bernoulli's equation ceases to be valid before zero pressure is reached. In liquids, cavitation occurs when the pressure becomes too low. The above equations use a linear relationship between flow speed squared and pressure; this breaks down at higher flow speeds in gases, or for sound waves in liquids. In many applications of Bernoulli's equation, the change in the ρgz term along the streamline is so small compared with the other terms that it can be ignored. For example, in the case of aircraft in flight, the change in height z along a streamline is so small that the ρgz term can be omitted. This allows the equation to be presented in the following simplified form: p + q = p0, where p0 is called total pressure and q is the dynamic pressure.
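The simplified relation p + q = p0 can be sketched numerically. A minimal example (the air density and speeds are illustrative values, not from the source):

```python
# Simplified Bernoulli relation for horizontal incompressible flow:
# p + q = p0, where q = 0.5 * rho * v**2 is the dynamic pressure.
def total_pressure(static_p, density, speed):
    return static_p + 0.5 * density * speed**2

rho_air = 1.225                                  # kg/m^3, sea-level air (illustrative)
p0 = total_pressure(101325.0, rho_air, 50.0)     # total pressure in a slow section
p_fast = p0 - 0.5 * rho_air * 100.0**2           # static pressure where v = 100 m/s
print(p_fast < 101325.0)                         # True: higher speed, lower pressure
```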
26.
Pascal's law
–
The law was established by French mathematician Blaise Pascal in 1647–48. The intuitive explanation of this formula is that the change in pressure between two elevations is due to the weight of the fluid between the elevations. A more correct interpretation, though, is that the pressure change is caused by the change of potential energy per unit volume of the liquid due to the existence of the gravitational field. Note that the variation with height does not depend on any additional pressures; therefore, Pascal's law can be interpreted as saying that any change in pressure applied at any given point of the fluid is transmitted undiminished throughout the fluid. If a U-tube is filled with water and pistons are placed at each end, pressure exerted against the left piston will be transmitted throughout the liquid. The pressure that the left piston exerts against the water will be exactly equal to the pressure the water exerts against the right piston. Suppose the tube on the right side is made wider and a piston of fifty times the area is used, for example. If a 1 N load is placed on the left piston, the difference between force and pressure becomes important: the additional pressure is exerted against the entire area of the larger piston. Since there is 50 times the area, 50 times as much force is exerted on the larger piston; thus, the larger piston will support a 50 N load – fifty times the load on the smaller piston. Forces can be multiplied using such a device: one newton input produces 50 newtons output. By further increasing the area of the larger piston, forces can be multiplied, in principle, by any amount. Pascal's principle underlies the operation of the hydraulic press. The hydraulic press does not violate energy conservation, because a decrease in distance moved compensates for the increase in force. When the small piston is moved downward 100 centimeters, the large piston will be raised only one-fiftieth of this, or 2 centimeters.
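The force multiplication described above can be sketched in a few lines. A minimal example of the hydraulic-press arithmetic (the function and variable names are illustrative):

```python
# Pascal's law: pressure applied to the small piston is transmitted
# undiminished, so force scales with piston area while work is conserved.
def output_force(input_force, small_area, large_area):
    pressure = input_force / small_area      # same pressure throughout the fluid
    return pressure * large_area             # force on the large piston

small_a, large_a = 1.0, 50.0                 # area ratio of 50, as in the text
f_out = output_force(1.0, small_a, large_a)  # 1 N in -> 50 N out

# Energy conservation: the large piston rises only 1/50 as far.
rise = 100.0 * small_a / large_a             # cm, for a 100 cm downward stroke
print(f_out, rise)                           # 50.0 2.0
```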
Pascal's principle applies to all fluids, whether gases or liquids. A typical application of Pascal's principle for gases and liquids is the automobile lift seen in many service stations. Increased air pressure produced by an air compressor is transmitted through the air to the surface of oil in an underground reservoir; the oil, in turn, transmits the pressure to a piston, which lifts the automobile. The relatively low pressure that exerts the lifting force against the piston is about the same as the air pressure in automobile tires. Hydraulics is employed by modern devices ranging from very small to enormous. For example, there are hydraulic pistons in almost all construction machines where heavy loads are involved. Pascal's barrel is the name of a hydrostatics experiment allegedly performed by Blaise Pascal in 1646.
27.
Viscosity
–
The viscosity of a fluid is a measure of its resistance to gradual deformation by shear stress or tensile stress. For liquids, it corresponds to the informal concept of "thickness": honey, for example, has a much higher viscosity than water. Viscosity is the property of a fluid which opposes the relative motion between two surfaces of the fluid that are moving at different velocities. For a given velocity pattern, the stress required is proportional to the fluid's viscosity. A fluid that has no resistance to shear stress is known as an ideal or inviscid fluid. Zero viscosity is observed only at very low temperatures in superfluids; otherwise, all fluids have positive viscosity and are said to be viscous or viscid. A fluid with a relatively high viscosity, such as pitch, may appear to be a solid. The word "viscosity" is derived from the Latin viscum, meaning mistletoe. The dynamic viscosity of a fluid expresses its resistance to shearing flows, where adjacent layers move parallel to each other with different speeds. It can be defined through the idealized situation known as Couette flow: a fluid trapped between two horizontal plates, the bottom one fixed and the top one moving horizontally at constant speed u. If the speed of the top plate is small enough, the fluid particles will move parallel to it, and their speed will vary linearly from zero at the bottom to u at the top. Each layer of fluid will move faster than the one just below it, and friction between them will give rise to a force resisting their relative motion. In particular, the fluid will apply on the top plate a force in the direction opposite to its motion, and an equal but opposite one to the bottom plate. An external force is required in order to keep the top plate moving at constant speed. The magnitude F of this force is found to be proportional to the speed u and the area A of each plate, and inversely proportional to their separation y. The proportionality factor μ in this formula is the viscosity of the fluid. The ratio u/y is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction perpendicular to the plates. Isaac Newton expressed the viscous forces by the differential equation τ = μ ∂u/∂y, where τ = F/A.
This formula assumes that the flow is moving along parallel lines, but the equation can also be used where the velocity does not vary linearly with y, such as in fluid flowing through a pipe. Use of the Greek letter mu (μ) for the dynamic viscosity is common among mechanical and chemical engineers; however, the Greek letter eta (η) is used by chemists and physicists.
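Newton's law of viscosity for the plate geometry above reduces to simple arithmetic. A minimal sketch, with illustrative numbers (water's viscosity of about 1.0 mPa·s at 20 °C is a standard value; the specific plate speed, gap and area are invented for the example):

```python
# Newton's law of viscosity for planar Couette flow: tau = mu * u / y.
def shear_stress(mu, u, y):
    """Shear stress (Pa) for plate speed u (m/s) and gap y (m)."""
    return mu * u / y

def plate_force(mu, u, y, area):
    """Force (N) needed to keep the top plate moving at constant speed u."""
    return shear_stress(mu, u, y) * area

# Water at 20 C: mu ~ 1.0e-3 Pa*s. A 2 m^2 plate, 1 cm gap, 0.5 m/s.
f = plate_force(1.0e-3, u=0.5, y=0.01, area=2.0)
print(f)   # 0.1 N
```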
28.
Newtonian fluid
–
That is equivalent to saying that those forces are proportional to the rates of change of the fluid's velocity vector as one moves away from the point in question in various directions. Newtonian fluids are the simplest mathematical models of fluids that account for viscosity. While no real fluid fits the definition perfectly, many common liquids and gases, such as water and air, can be assumed to be Newtonian for practical calculations under ordinary conditions. However, non-Newtonian fluids are relatively common, and include oobleck as well as many polymer solutions, molten polymers, many solid suspensions, blood, and most highly viscous fluids. Newtonian fluids are named after Isaac Newton, who first postulated the relation between the strain rate and shear stress for such fluids in differential form. An element of a flowing liquid or gas will suffer forces from the surrounding fluid. These forces can be approximated to first order by a viscous stress tensor, as can the deformation of that element relative to some previous state. The tensors τ and ∇v can be expressed by 3×3 matrices. One also defines a total stress tensor σ that combines the shear stress with the conventional pressure p. The diagonal components of the viscosity tensor represent the molecular viscosity of a liquid, and the off-diagonal components the turbulent eddy viscosity.
29.
Non-Newtonian fluid
–
A non-Newtonian fluid is a fluid that does not follow Newton's law of viscosity. Most commonly, the viscosity of non-Newtonian fluids is dependent on shear rate or shear rate history. Some non-Newtonian fluids with shear-independent viscosity, however, still exhibit normal stress differences or other non-Newtonian behavior. Many salt solutions and molten polymers are non-Newtonian fluids, as are many commonly found substances such as ketchup, custard, toothpaste, starch suspensions, maizena, paint and blood. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different and can even be time-dependent; therefore, a constant coefficient of viscosity cannot be defined. Although the concept of viscosity is commonly used in fluid mechanics to characterize the shear properties of a fluid, it can be inadequate to describe non-Newtonian fluids. Their properties are better studied using tensor-valued constitutive equations, which are common in the field of continuum mechanics. The viscosity of a shear thickening fluid, or dilatant fluid, appears to increase when the shear rate increases. Corn starch dissolved in water is a common example: when stirred slowly it looks milky, when stirred vigorously it feels like a very viscous liquid. Note that all thixotropic fluids are extremely shear thinning, but they are significantly time dependent; thus, to avoid confusion, the time-independent classification is more clearly termed pseudoplastic. Another example of a shear thinning fluid is blood; this property is highly favoured within the body, as it allows the viscosity of blood to decrease with increased shear strain rate. Fluids that have a linear shear stress/shear strain relationship but require a finite yield stress before they begin to flow are called Bingham plastics.
Several examples are clay suspensions, drilling mud, toothpaste, mayonnaise and chocolate. The surface of a Bingham plastic can hold peaks when it is still; by contrast, Newtonian fluids have flat featureless surfaces when still. There are also fluids whose strain rate is a function of time. Fluids that require a gradually increasing shear stress to maintain a constant strain rate are referred to as rheopectic; the opposite case is a fluid that thins out with time and requires a decreasing stress to maintain a constant strain rate, termed thixotropic. Many common substances exhibit non-Newtonian flows; an uncooked cornflour suspension has the same properties. The name "oobleck" is derived from the Dr. Seuss book Bartholomew and the Oobleck. Because of its properties, oobleck is often used in demonstrations that exhibit its unusual behavior.
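The shear-thinning and shear-thickening behavior described above is often modeled with a power-law (Ostwald–de Waele) constitutive relation, τ = K·(shear rate)ⁿ. This is one common model rather than anything specified in the source, and the K and n values below are illustrative:

```python
# Power-law model of a non-Newtonian fluid: tau = K * shear_rate**n.
# The apparent viscosity is tau / shear_rate = K * shear_rate**(n - 1).
def apparent_viscosity(K, n, shear_rate):
    return K * shear_rate ** (n - 1)

# n < 1: shear thinning (pseudoplastic), viscosity falls as shear rate rises;
# n > 1: shear thickening (dilatant), as in a corn starch suspension;
# n = 1: Newtonian, where K reduces to the ordinary viscosity coefficient.
thin_lo = apparent_viscosity(K=1.0, n=0.5, shear_rate=1.0)
thin_hi = apparent_viscosity(K=1.0, n=0.5, shear_rate=100.0)
print(thin_lo > thin_hi)   # True: pseudoplastic behavior
```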
30.
Buoyancy
–
In science, buoyancy or upthrust is an upward force exerted by a fluid that opposes the weight of an immersed object. In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid; thus the pressure at the bottom of a column of fluid is greater than at the top. Similarly, the pressure at the bottom of an object submerged in a fluid is greater than at the top of the object, and this pressure difference results in a net upward force on the object. For this reason, an object whose density is greater than that of the fluid in which it is submerged tends to sink. If the object is either less dense than the liquid or is shaped appropriately, the force can keep the object afloat. This can occur only in a non-inertial reference frame, which either has a gravitational field or is accelerating due to a force other than gravity defining a "downward" direction. In a situation of fluid statics, the net upward buoyancy force is equal to the magnitude of the weight of fluid displaced by the body. The center of buoyancy of an object is the centroid of the displaced volume of fluid. Archimedes' principle is named after Archimedes of Syracuse, who first discovered this law in 212 B.C. More tersely: buoyancy = weight of displaced fluid. The weight of the displaced fluid is directly proportional to its volume. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy; this is also known as upthrust. Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting upon it, and that when the rock is lowered into water, it displaces water of weight 3 newtons. The force it exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyancy force: 10 − 3 = 7 newtons.
Buoyancy reduces the apparent weight of objects that have sunk completely to the sea floor, and it is generally easier to lift an object up through the water than it is to pull it out of the water. The density of the object relative to the density of the fluid can easily be calculated without measuring any volumes: density of object / density of fluid = weight / (weight − apparent immersed weight). Example: if you drop wood into water, buoyancy will keep it afloat. Example: a helium balloon in a moving car. During a period of increasing speed, the air mass inside the car moves in the direction opposite to the car's acceleration, and the balloon is also pulled this way. However, because the balloon is buoyant relative to the air, it ends up being pushed out of the way and drifts in the same direction as the car's acceleration. If the car slows down, the same balloon will begin to drift backward; for the same reason, the balloon drifts towards the inside of a curve as the car goes round it. The hydrostatic relation below gives the pressure inside a fluid in equilibrium.
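The relative-density relation just stated can be verified against the rock example. A minimal sketch (function name is an illustrative choice):

```python
# Relative density from weighings alone, no volume measurement needed:
# density_object / density_fluid = weight / (weight - apparent_immersed_weight)
def relative_density(weight, apparent_immersed_weight):
    return weight / (weight - apparent_immersed_weight)

# The rock: 10 N in vacuum, 7 N apparent weight when immersed in water.
rd = relative_density(10.0, 7.0)
print(round(rd, 3))   # 3.333: the rock is about 3.3 times as dense as water
```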
31.
Mixing (process engineering)
–
In industrial process engineering, mixing is a unit operation that involves manipulation of a heterogeneous physical system with the intent to make it more homogeneous. A familiar example is the pumping of the water in a swimming pool to homogenize the water temperature. Mixing is performed to allow heat and/or mass transfer to occur between one or more streams, components or phases; modern industrial processing almost always involves some form of mixing. Some classes of chemical reactors are also mixers. With the right equipment, it is possible to mix a solid, liquid or gas into another solid, liquid or gas. The opposite of mixing is segregation; a classical example of segregation is the brazil nut effect. The type of operation and equipment used during mixing depends on the state of the materials being mixed; in this context, the act of mixing may be synonymous with stirring or kneading processes. Mixing of liquids occurs frequently in process engineering, and the nature of the liquids to be blended determines the equipment used. Turbulent or transitional mixing is conducted with turbines or impellers. Mixing of liquids that are miscible, or at least soluble in each other, occurs frequently in process engineering. An everyday example would be the addition of milk or cream to tea or coffee: since both liquids are water-based, they dissolve easily in one another. The momentum of the liquid being added is sometimes enough to cause sufficient turbulence to mix the two, since the viscosity of both liquids is relatively low. If necessary, a spoon or paddle could be used to complete the mixing process. Blending in a more viscous liquid, such as honey, requires more mixing power per unit volume to achieve the same homogeneity in the same amount of time. Blending powders is one of the oldest unit operations in the solids handling industries; for many decades powder blending has been used just to homogenize bulk materials.
Many different machines have been designed to handle materials with various bulk solids properties. On the basis of the practical experience gained with these different machines, engineering knowledge has been developed to construct reliable equipment and to predict scale-up and mixing behavior. This wide range of applications of mixing equipment requires a high level of knowledge, long-time experience and extended test facilities to come to the optimal selection of equipment. In powder mixing, two different dimensions of the process can be distinguished: convective mixing and intensive mixing. In the case of convective mixing, material in the mixer is transported from one location to another. This type of mixing leads to a less ordered state inside the mixer: the components that must be mixed are distributed over the other components. With progressing time the mixture becomes more randomly ordered; after a certain mixing time the ultimate random state is reached.
32.
Pressure
–
Pressure is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure is the pressure relative to the ambient pressure. Various units are used to express pressure. Pressure may also be expressed in terms of standard atmospheric pressure: the atmosphere (atm) is equal to this pressure, and the torr is defined as 1⁄760 of this. Manometric units, such as the centimetre of water and the millimetre of mercury, express pressure as the height of a column of a particular fluid. Pressure is the amount of force acting per unit area. The symbol for it is p or P. The IUPAC recommendation for pressure is a lower-case p; however, upper-case P is widely used. The usage of P vs p depends upon the field in which one is working and on the nearby presence of other symbols for quantities such as power and momentum. Mathematically, p = F/A, where p is the pressure, F is the magnitude of the normal force and A is the area of the surface of contact. Pressure is a scalar quantity; it relates the vector surface element with the normal force acting on it. It is incorrect to say the pressure is directed in such or such direction: the pressure, as a scalar, has no direction, while the force given by the relationship does. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume. The SI unit for pressure is the pascal (Pa), equal to one newton per square metre. This name for the unit was added in 1971; before that, pressure in SI was expressed simply in newtons per square metre. Other units of pressure, such as pounds per square inch, remain in use. The CGS unit of pressure is the barye, equal to 1 dyn·cm−2 or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre, but using the names kilogram, gram, kilogram-force, or gram-force as units of force is expressly forbidden in SI.
The technical atmosphere is 1 kgf/cm2. Since a system under pressure has the potential to perform work on its surroundings, pressure is a measure of potential energy stored per unit volume. It is therefore related to energy density and may be expressed in units such as joules per cubic metre. Similar pressures are given in kilopascals in most other fields, where the hecto- prefix is rarely used.
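The definition p = F/A and the unit relationships named above reduce to straightforward arithmetic. A minimal sketch (the force and area values are invented for illustration; the conversion constants are the standard definitions):

```python
# Pressure as force per unit area, p = F / A, in pascals (N/m^2).
def pressure(force_n, area_m2):
    return force_n / area_m2

p = pressure(100.0, 0.25)       # 100 N spread over 0.25 m^2
print(p)                        # 400.0 Pa

PA_PER_ATM = 101325.0           # one standard atmosphere, by definition
PA_PER_TORR = PA_PER_ATM / 760  # the torr is defined as 1/760 atm
PA_PER_BARYE = 0.1              # CGS unit: 1 dyn/cm^2 = 0.1 Pa

print(round(PA_PER_TORR, 3))    # 133.322 Pa per torr
```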
33.
Liquid
–
A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a nearly constant volume independent of pressure. As such, it is one of the four fundamental states of matter. A liquid is made up of tiny vibrating particles of matter, such as atoms, held together by intermolecular bonds. Water is, by far, the most common liquid on Earth. Like a gas, a liquid is able to flow and take the shape of a container; most liquids resist compression, although others can be compressed. Unlike a gas, a liquid does not disperse to fill every space of a container and maintains a fairly constant density. A distinctive property of the liquid state is surface tension, leading to wetting phenomena. The density of a liquid is usually close to that of a solid, and much higher than that of a gas; therefore, liquid and solid are both termed condensed matter. On the other hand, as liquids and gases share the ability to flow, they are both called fluids. Although liquid water is abundant on Earth, this state of matter is actually the least common in the known universe, because liquids require a relatively narrow temperature/pressure range to exist. Most known matter in the universe is in gaseous form as interstellar clouds, or in plasma form within stars. Liquid is one of the four states of matter, with the others being solid, gas and plasma. Unlike a solid, the molecules in a liquid have a much greater freedom to move. The forces that bind the molecules together in a solid are only temporary in a liquid. A liquid, like a gas, displays the properties of a fluid: it can flow and assume the shape of a container, and if liquid is placed in a bag, it can be squeezed into any shape. These properties make a liquid suitable for applications such as hydraulics. Liquid particles are bound firmly but not rigidly; they are able to move around one another freely, resulting in a limited degree of particle mobility. As the temperature increases, the increased vibrations of the molecules cause the distances between the molecules to increase. When a liquid reaches its boiling point, the cohesive forces that bind the molecules closely together break.
If the temperature is decreased, the distances between the molecules become smaller. Only two elements are liquid at standard conditions for temperature and pressure: mercury and bromine. Four more elements have melting points slightly above room temperature: francium, caesium, gallium and rubidium. Metal alloys that are liquid at room temperature include NaK, a sodium-potassium alloy; galinstan, a fusible alloy liquid; and some amalgams.
34.
Surface tension
–
Surface tension is the elastic tendency of a fluid surface which makes it acquire the least surface area possible. Surface tension allows insects, usually denser than water, to float on a water surface. At liquid–air interfaces, surface tension results from the greater attraction of liquid molecules to each other than to the molecules in the air. The net effect is an inward force at its surface that causes the liquid to behave as if its surface were covered with a stretched elastic membrane. Thus, the surface comes under tension from the imbalanced forces. Because of the relatively high attraction of water molecules for each other through a web of hydrogen bonds, water has a higher surface tension than most other liquids. Surface tension is an important factor in the phenomenon of capillarity. Surface tension has the dimension of force per unit length, or of energy per unit area. The two are equivalent, but when referring to energy per unit of area, it is common to use the term surface energy. In materials science, surface tension is used for either surface stress or surface free energy. The cohesive forces among liquid molecules are responsible for the phenomenon of surface tension. In the bulk of the liquid, each molecule is pulled equally in every direction by neighboring liquid molecules. The molecules at the surface do not have the same molecules on all sides of them and therefore are pulled inwards. This creates some internal pressure and forces liquid surfaces to contract to the minimal area. Surface tension is responsible for the shape of liquid droplets. Although easily deformed, droplets of water tend to be pulled into a spherical shape by the imbalance in cohesive forces of the surface layer. In the absence of other forces, including gravity, drops of virtually all liquids would be approximately spherical. The spherical shape minimizes the necessary "wall tension" of the surface layer according to Laplace's law.
Another way to view surface tension is in terms of energy: a molecule in contact with a neighbor is in a lower state of energy than if it were alone. The interior molecules have as many neighbors as they can possibly have, but the boundary molecules are missing neighbors and therefore have a higher energy. For the liquid to minimize its energy state, the number of higher-energy boundary molecules must be minimized. The minimized number of boundary molecules results in a minimal surface area. As a result of surface area minimization, a surface will assume the smoothest shape it can, since any curvature in the surface shape results in greater area and hence higher energy. Consequently, the surface will push back against any curvature, in much the same way as a ball pushed uphill will push back to minimize its gravitational potential energy. Bubbles in pure water are unstable; the addition of surfactants, however, can have a stabilizing effect on the bubbles.
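Laplace's law, mentioned above for droplets, gives the excess pressure inside a spherical drop as Δp = 2γ/r. A minimal sketch using water's surface tension at 20 °C (the drop radii are illustrative):

```python
# Laplace's law for a spherical droplet: the pressure inside exceeds the
# pressure outside by delta_p = 2 * gamma / r, so smaller drops have a
# larger excess pressure.
def laplace_pressure(surface_tension, radius):
    return 2.0 * surface_tension / radius

gamma_water = 0.0728                       # N/m for water at 20 C
dp_mm = laplace_pressure(gamma_water, 1e-3)   # a 1 mm radius drop
dp_um = laplace_pressure(gamma_water, 1e-6)   # a 1 micrometre radius drop
print(dp_mm, dp_um)   # ~145.6 Pa vs ~145600 Pa: 1000x larger for the small drop
```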
35.
Capillary action
–
Capillary action is the ability of a liquid to flow in narrow spaces without the assistance of, or even in opposition to, external forces like gravity. It occurs because of intermolecular forces between the liquid and surrounding solid surfaces. If the diameter of the tube is sufficiently small, then the combination of surface tension and adhesion between the liquid and the container wall acts to lift the liquid. The first recorded observation of capillary action was by Leonardo da Vinci. A former student of Galileo, Niccolò Aggiunti, was said to have investigated capillary action. Boyle then reported an experiment in which he dipped a capillary tube into red wine and subjected the tube to a partial vacuum. Some thought that liquids rose in capillaries because air could not enter capillaries as easily as liquids; others thought that the particles of liquid were attracted to each other and to the walls of the capillary. Thomas Young and Pierre-Simon Laplace derived the Young–Laplace equation of capillary action. By 1830, the German mathematician Carl Friedrich Gauss had determined the boundary conditions governing capillary action. In 1871, the British physicist William Thomson determined the effect of the meniscus on a liquid's vapor pressure, a relation known as the Kelvin equation. German physicist Franz Ernst Neumann subsequently determined the interaction between two immiscible liquids. Albert Einstein's first paper, submitted to Annalen der Physik in 1900, was on capillarity. A common apparatus used to demonstrate the first phenomenon is the capillary tube. When the lower end of a glass tube is placed in a liquid such as water, a concave meniscus forms. Adhesion occurs between the fluid and the inner wall, pulling the liquid column up until there is a sufficient mass of liquid for gravitational forces to overcome these intermolecular forces. So, a narrow tube will draw a liquid column higher than a wider tube will. Capillary action is essential for the drainage of constantly produced tear fluid from the eye. Wicking is the absorption of a liquid by a material in the manner of a candle wick.
Paper towels absorb liquid through capillary action, allowing a fluid to be transferred from a surface to the towel; the small pores of a sponge act as small capillaries, causing it to absorb a large amount of fluid. Some textile fabrics are said to use capillary action to wick sweat away from the skin; these are often referred to as wicking fabrics, after the capillary properties of candle and lamp wicks. Capillary action is observed in thin layer chromatography, in which a solvent moves vertically up a plate via capillary action; in this case the pores are gaps between very small particles. Capillary action draws ink to the tips of fountain pen nibs from a reservoir or cartridge inside the pen. In hydrology, capillary action describes the attraction of water molecules to soil particles. Capillary action is responsible for moving groundwater from wet areas of the soil to dry areas; differences in soil potential drive capillary action in soil. Thus, the thinner the space in which the water can travel, the further up it goes. For a water-filled glass tube in air at standard laboratory conditions, γ = 0.0728 N/m at 20 °C, ρ = 1000 kg/m3, and g = 9.81 m/s2.
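The constants just quoted plug into the standard capillary-rise formula, h = 2γ·cos(θ)/(ρgr). A minimal sketch (the 1 mm tube radius and zero contact angle for clean glass are illustrative assumptions, not from the source):

```python
import math

# Capillary rise in a tube: h = 2 * gamma * cos(theta) / (rho * g * r),
# using the constants given in the text for water in glass at 20 C.
def capillary_rise(gamma, contact_angle_rad, rho, g, radius):
    return 2.0 * gamma * math.cos(contact_angle_rad) / (rho * g * radius)

gamma, rho, g = 0.0728, 1000.0, 9.81       # N/m, kg/m^3, m/s^2
theta = 0.0                                # clean glass: near-zero contact angle
h = capillary_rise(gamma, theta, rho, g, radius=1e-3)   # 1 mm tube radius
print(round(h * 1000, 1))                  # ~14.8 mm of rise
```

Halving the tube radius doubles the rise, which is why narrow tubes draw liquid higher than wide ones.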
36.
Gas
–
Gas is one of the four fundamental states of matter. A pure gas may be made up of individual atoms, elemental molecules made from one type of atom, or compound molecules made from a variety of atoms. A gas mixture would contain a variety of pure gases, much like the air. What distinguishes a gas from liquids and solids is the vast separation of the individual gas particles. This separation usually makes a colorless gas invisible to the human observer. The interaction of gas particles in the presence of electric and gravitational fields is considered negligible, as indicated by the constant velocity vectors in the image. One type of commonly known gas is steam. The gaseous state of matter is found between the liquid and plasma states, the latter of which provides the upper temperature boundary for gases. Bounding the lower end of the temperature scale lie degenerative quantum gases, which are gaining increasing attention. High-density atomic gases supercooled to incredibly low temperatures are classified by their statistical behavior as either a Bose gas or a Fermi gas. For a comprehensive listing of these states of matter see list of states of matter. The only chemical elements which are stable multi-atom homonuclear molecules at standard temperature and pressure are hydrogen, nitrogen and oxygen, along with the halogens fluorine and chlorine. These gases, when grouped together with the monatomic noble gases, are called "elemental gases". Alternatively they are known as "molecular gases" to distinguish them from molecules that are also chemical compounds. The word gas is a neologism first used by the early 17th-century Flemish chemist J. B. van Helmont; according to Paracelsus's terminology, chaos meant something like "ultra-rarefied water". An alternative story is that Van Helmont's word is corrupted from gahst, signifying a ghost or spirit. The characteristic properties of gases (pressure, volume, temperature and number of particles) were repeatedly observed by scientists such as Robert Boyle, Jacques Charles, John Dalton, Joseph Gay-Lussac and Amedeo Avogadro for a variety of gases in various settings. Their detailed studies ultimately led to a mathematical relationship among these properties expressed by the ideal gas law.
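The relationship those studies converged on, the ideal gas law pV = nRT, can be sketched in a few lines (the molar volume check below uses standard values; the function name is an illustrative choice):

```python
# Ideal gas law: p * V = n * R * T, relating pressure, volume,
# amount of substance and temperature for an ideal gas.
R = 8.314  # J/(mol K), molar gas constant

def ideal_gas_pressure(n_mol, temp_k, volume_m3):
    return n_mol * R * temp_k / volume_m3

# One mole at 273.15 K confined to about 22.4 litres sits near 1 atm.
p = ideal_gas_pressure(1.0, 273.15, 0.0224)
print(round(p))   # within ~0.1% of 101325 Pa
```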
Gas particles are widely separated from one another, and consequently have weaker intermolecular bonds than liquids or solids. These intermolecular forces result from electrostatic interactions between gas particles. Like-charged areas of different gas particles repel, while oppositely charged regions of different gas particles attract one another. Transient, randomly induced charges exist across the non-polar covalent bonds of molecules, and the electrostatic interactions caused by them are referred to as Van der Waals forces. The interaction of these intermolecular forces varies within a substance, which determines many of the physical properties unique to each gas. A comparison of boiling points for compounds formed by ionic and covalent bonds leads us to this conclusion. The drifting smoke particles in the image provide some insight into low-pressure gas behavior.
37.
Atmosphere
–
An atmosphere is a layer of gases surrounding a planet or other material body that is held in place by the gravity of that body. An atmosphere is more likely to be retained if the gravity it is subject to is high. The atmosphere of Earth is mostly composed of nitrogen, oxygen and argon, with traces of carbon dioxide. The atmosphere helps protect living organisms from genetic damage by solar ultraviolet radiation, the solar wind and cosmic rays. Its current composition is the product of billions of years of modification of the paleoatmosphere by living organisms. The term stellar atmosphere describes the outer region of a star. Stars with sufficiently low temperatures may form compound molecules in their outer atmospheres. Atmospheric pressure is the force per unit area that is applied perpendicularly to a surface by the surrounding gas. It is determined by gravitational force in combination with the total mass of the column of gas above a location. On Earth, units of air pressure are based on the internationally recognized standard atmosphere, and pressure is measured with a barometer. The pressure of an atmospheric gas decreases with altitude due to the diminishing mass of gas above. The height at which the pressure of an atmosphere declines by a factor of e is called the scale height and is denoted by H. For an atmosphere of uniform temperature, the pressure declines exponentially with increasing altitude. However, atmospheres are not uniform in temperature, so determining the atmospheric pressure at any particular altitude is more complex. Surface gravity, the force that holds down an atmosphere, differs significantly among the planets. For example, the large gravitational force of the giant planet Jupiter is able to retain light gases such as hydrogen and helium that escape from objects with lower gravity. Similarly, because low temperatures slow their gas molecules, the distant and cold Titan, Triton, and Pluto are able to retain their atmospheres despite their relatively low gravities. Rogue planets, theoretically, may also retain thick atmospheres. 
Since a collection of gas molecules moves at a range of velocities, some molecules will always be fast enough to escape slowly into space. Lighter molecules move faster than heavier ones with the same thermal kinetic energy, so gases of low molecular weight are lost more readily. It is thought that Venus and Mars may have lost much of their water when, after being photodissociated into hydrogen and oxygen by solar ultraviolet radiation, the hydrogen escaped. Earth's magnetic field helps to prevent this, as, normally, the solar wind would greatly enhance the escape of hydrogen. However, over the past 3 billion years Earth may have lost gases through the polar regions due to auroral activity.
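For an idealized isothermal atmosphere, the exponential pressure decline described above can be sketched numerically. This is an illustrative simplification; the function name and the roughly 8.5 km scale height used for Earth's lower atmosphere are assumptions, not values from the text:

```python
import math

def pressure_at_altitude(p0_pa, altitude_m, scale_height_m=8500.0):
    """Isothermal barometric formula: pressure falls by a factor of e
    for each scale height H of altitude gained."""
    return p0_pa * math.exp(-altitude_m / scale_height_m)

# At one scale height the pressure is p0 / e
p0 = 101325.0  # standard atmosphere in pascals
print(pressure_at_altitude(p0, 8500.0))
```

Real atmospheres are not isothermal, so this formula is only a first approximation to the measured pressure profile.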
38.
Boyle's law
–
Boyle's law is an experimental gas law that describes how the pressure of a gas tends to increase as the volume of the container decreases. Mathematically, Boyle's law can be stated as P ∝ 1/V, or PV = k, where P is the pressure of the gas, V is the volume of the gas, and k is a constant. The equation states that the product of pressure and volume is a constant for a given mass of confined gas as long as the temperature is constant. For comparing the same substance under two different sets of conditions, the law can be expressed as P1V1 = P2V2. The equation shows that, as volume increases, the pressure of the gas decreases in proportion; similarly, as volume decreases, the pressure of the gas increases. The law was named after chemist and physicist Robert Boyle, who published the original law in 1662. This relationship between pressure and volume was first noted by Richard Towneley and Henry Power; Robert Boyle confirmed their discovery through experiments and published the results. According to Robert Gunther and other authorities, it was Boyle's assistant, Robert Hooke, who built the experimental apparatus. Boyle's law is based on experiments with air, which he considered to be a fluid of particles at rest in between small invisible springs. At that time, air was still seen as one of the four elements, and Boyle's interest was probably to understand air as an essential element of life; for example, he published works on the growth of plants without air. Boyle used a closed J-shaped tube, and after pouring mercury in from one side he forced the air on the other side to contract under the pressure of the mercury. The French physicist Edme Mariotte discovered the same law independently of Boyle in 1679, so this law is sometimes referred to as Mariotte's law or the Boyle–Mariotte law. Instead of a static theory, a kinetic theory is needed, which was provided two centuries later by Maxwell and Boltzmann. This law was the first physical law to be expressed in the form of an equation describing the dependence of two variable quantities. 
The law itself can be stated as follows: Boyle's law is a gas law stating that the pressure and volume of a gas have an inverse relationship; if volume increases, then pressure decreases and vice versa, when temperature is held constant. Therefore, when the volume is halved, the pressure is doubled, and if the volume is doubled, the pressure is halved. Equivalently, Boyle's law states that at constant temperature, for a fixed mass, the absolute pressure and the volume of a gas are inversely proportional. Most gases behave like ideal gases at moderate pressures and temperatures; the technology of the 17th century could not produce high pressures or low temperatures, hence the law was not likely to show deviations at the time of publication. The deviation is expressed as the compressibility factor.
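The two-state form P1V1 = P2V2 can be applied directly in code; a minimal sketch (the function name is an illustrative choice, not from the text):

```python
def boyle_v2(p1, v1, p2):
    """Solve P1*V1 = P2*V2 for V2, at constant temperature and fixed mass."""
    return p1 * v1 / p2

# Doubling the pressure on 2.0 L of gas halves its volume
print(boyle_v2(100.0, 2.0, 200.0))  # → 1.0
```

Any consistent pressure and volume units work, since only ratios matter.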
39.
Charles's law
–
Charles's law is an experimental gas law that describes how gases tend to expand when heated. A modern statement of Charles's law is: when the pressure on a sample of a dry gas is held constant, the Kelvin temperature and the volume are in direct proportion. This directly proportional relationship can be written as V ∝ T, or V/T = k. This law describes how a gas expands as the temperature increases; conversely, a decrease in temperature leads to a decrease in volume. The equation shows that, as temperature increases, the volume of the gas also increases in proportion. The law was named after scientist Jacques Charles, who formulated the original law in his unpublished work from the 1780s. The basic principles had already been described by Guillaume Amontons and Francis Hauksbee a century earlier. Dalton was the first to demonstrate that the law applied generally to all gases. With measurements only at the two thermometric fixed points of water, Gay-Lussac was unable to show that the equation relating volume to temperature was a linear function. On mathematical grounds alone, Gay-Lussac's paper does not permit the assignment of any law stating the linear relation; his equation does not contain the temperature and so has nothing to do with what became known as Charles's law. Gay-Lussac's value for k was identical to Dalton's earlier value for vapours, and Gay-Lussac gave credit for this equation to unpublished statements by his fellow Republican citizen J. Charles in 1787. In the absence of a firm record, the gas law relating volume to temperature cannot be named after Charles. Dalton's measurements had much more scope regarding temperature than Gay-Lussac's, measuring the volume not only at the fixed points of water. His conclusion for vapours is a statement of what became known, wrongly, as Charles's law, and then, even more wrongly, as Gay-Lussac's law. His first law was that of partial pressures. Charles's law appears to imply that the volume of a gas will descend to zero at a certain temperature, −273.15 °C. 
Gay-Lussac had no experience of liquefied air, although he appears to have believed that the permanent gases such as air could not be liquefied. However, the zero on the Kelvin temperature scale was originally defined in terms of the second law of thermodynamics; Thomson did not assume that this was equal to the zero-volume point of Charles's law. The two can be shown to be equivalent by Ludwig Boltzmann's statistical view of entropy. Charles also stated: the volume of a fixed mass of dry gas increases or decreases by 1⁄273 times the volume at 0 °C for every 1 °C rise or fall in temperature. Thus V_T = V_0 + (1⁄273)·V_0·T = V_0(1 + T⁄273), where V_T is the volume of gas at temperature T (in °C) and V_0 is the volume at 0 °C. Under this definition, the demonstration of Charles's law is almost trivial.
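The proportionality V/T = k (with temperature in kelvin) gives the two-state form V1/T1 = V2/T2, which can be sketched as a small helper (function name assumed for illustration):

```python
def charles_v2(v1, t1_k, t2_k):
    """Solve V1/T1 = V2/T2 for V2, at constant pressure (T in kelvin)."""
    if t1_k <= 0 or t2_k <= 0:
        raise ValueError("temperatures must be absolute (kelvin) and positive")
    return v1 * t2_k / t1_k

# Doubling the absolute temperature doubles the volume
print(charles_v2(1.0, 300.0, 600.0))  # → 2.0
```

The guard against non-positive temperatures reflects the point above: the law only makes sense on an absolute temperature scale.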
40.
Gay-Lussac's law
–
Gay-Lussac is most often recognized for the pressure law, which established that the pressure of an enclosed gas is directly proportional to its temperature and which he was the first to formulate. This law is known variously as the pressure law or Amontons's law. In his work on combining volumes, Gay-Lussac found, for example, that 2 volumes of hydrogen and 1 volume of oxygen would react to form 2 volumes of gaseous water. Based on Gay-Lussac's results, Amedeo Avogadro theorized that, at the same temperature and pressure, equal volumes of gas contain equal numbers of molecules. The law of combining gases was made public by Joseph Louis Gay-Lussac in 1808. Avogadro's hypothesis, however, was not initially accepted by chemists until the Italian chemist Stanislao Cannizzaro was able to convince the First International Chemical Congress in 1860. Amontons discovered the pressure law while building an air thermometer: the pressure of a gas of fixed mass and fixed volume is directly proportional to the gas's absolute temperature. If a gas's temperature increases, then so does its pressure, provided the mass and volume are held constant. The law has a particularly simple mathematical form if the temperature is measured on an absolute scale, such as in kelvins. The law can then be expressed mathematically as P ∝ T, or P/T = k, where P is the pressure of the gas, T is the temperature of the gas, and k is a constant. For comparing the same substance under two different sets of conditions, the law can be written as P1/T1 = P2/T2, or equivalently P1·T2 = P2·T1. Because Amontons discovered the law beforehand, Gay-Lussac's name is now generally associated in chemistry with the law of combining volumes discussed above, though some introductory physics textbooks still define the pressure–temperature relationship as Gay-Lussac's law. Gay-Lussac primarily investigated the relationship between volume and temperature and published it in 1802, but his work did cover some comparison between pressure and temperature; however, in recent years the term has fallen out of favor. 
Gay-Lussac's law, Charles's law, and Boyle's law form the combined gas law. These three gas laws in combination with Avogadro's law can be generalized by the ideal gas law.
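The two-state form P1/T1 = P2/T2 translates directly into code; a minimal sketch (function name assumed):

```python
def gay_lussac_p2(p1, t1_k, t2_k):
    """Solve P1/T1 = P2/T2 for P2, at constant volume and mass (T in kelvin)."""
    return p1 * t2_k / t1_k

# Heating a sealed container from 300 K to 450 K raises the pressure 1.5x
print(gay_lussac_p2(100.0, 300.0, 450.0))  # → 150.0
```

As with Boyle's and Charles's laws, only the ratio of states matters, so any consistent pressure unit works, but the temperatures must be absolute.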
41.
Combined gas law
–
The combined gas law is a gas law that combines Charles's law, Boyle's law, and Gay-Lussac's law. There is no official founder for this law; it is merely an amalgamation of the three previously discovered laws. These laws each relate one thermodynamic variable to another mathematically while holding everything else constant. Charles's law states that volume and temperature are directly proportional to each other as long as pressure is held constant. Boyle's law asserts that pressure and volume are inversely proportional to each other at fixed temperature. Finally, Gay-Lussac's law introduces a direct proportionality between temperature and pressure as long as the volume is constant. Combining these laws gives a single equation in P, V and T: if we divide Boyle's equation by temperature and multiply Charles's equation by pressure, we get PV/T = k1/T and PV/T = k2·P. As the left-hand side of both equations is the same, we arrive at k1/T = k2·P. Substituting in Avogadro's law yields the ideal gas equation. A derivation of the combined gas law using only elementary algebra can contain surprises. A physical derivation, longer but more reliable, begins by realizing that the constant-volume parameter in Gay-Lussac's law will change as the volume changes. At constant volume V1 the law might appear as P = k1·T. Rather, it should first be determined in what sense these equations are compatible with one another. To gain insight into this, recall that any two of the three variables determine the third. Choosing P and V to be independent, we picture the T values forming a surface above the PV-plane. A definite V0 and P0 define a T0, a point on that surface, and the ratio of the slopes of the two constant-parameter lines through that point depends only on the value of P0/V0 there. Note that the form of this relation did not depend on the particular point chosen; the same formula would have arisen for any combination of P and V values. 
Therefore, one can write k_V/k_P = P/V for all P and all V. This says that each point on the surface has its own pair of lines through it. Whereas the earlier relation held between specific slopes and variable values, this one is a relation between slope functions and function variables; it holds true for any point on the surface, i.e. for any and all combinations of P and V values. To solve this equation for the function k_V, first separate the variables, V on the left: V·k_V(V) = P·k_P(P). Choose any pressure P1; the right side then evaluates to some value, call it k_arb, giving V·k_V(V) = k_arb. This particular equation must now hold true not just for one value of V but for all V. The only definition of k_V that guarantees this for all V and arbitrary k_arb is k_V = k_arb/V, which may be verified by substitution.
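The end result of the derivation, P1·V1/T1 = P2·V2/T2, is straightforward to apply; a minimal sketch (function name assumed) that also shows how the combined law reduces to the individual laws:

```python
def combined_p2(p1, v1, t1_k, v2, t2_k):
    """Solve P1*V1/T1 = P2*V2/T2 for P2 (temperatures in kelvin)."""
    return p1 * v1 * t2_k / (t1_k * v2)

# Holding T fixed recovers Boyle's law: halving V doubles P
print(combined_p2(100.0, 2.0, 300.0, 1.0, 300.0))  # → 200.0
# Holding V fixed recovers Gay-Lussac's law: doubling T doubles P
print(combined_p2(100.0, 1.0, 300.0, 1.0, 600.0))  # → 200.0
```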
42.
Plasma (physics)
–
Plasma is one of the four fundamental states of matter, the others being solid, liquid, and gas. Yet unlike these three states of matter, plasma does not naturally exist on the Earth under normal surface conditions. The term was first introduced by chemist Irving Langmuir in the 1920s. True plasma production comes from the separation of ions and electrons, which produces an electric field. Depending on the environmental temperature and density, either partially ionised or fully ionised forms of plasma may be produced. The positive charge in ions is achieved by stripping away electrons from atomic nuclei; the number of electrons removed is related to either the increase in temperature or the local density of other ionised matter. Plasma may be the most abundant form of ordinary matter in the universe, although this is currently tentative. Plasma is mostly associated with the Sun and stars, extending to the rarefied intracluster medium. Plasma was first identified in a Crookes tube, and so described by Sir William Crookes in 1879. The nature of the Crookes tube cathode-ray matter was identified by British physicist Sir J. J. Thomson. The term plasma was coined by Irving Langmuir in 1928, perhaps because the glowing discharge molds itself to the shape of the Crookes tube: "we shall use the name plasma to describe this region containing balanced charges of ions and electrons." Plasma is an electrically neutral medium of unbound positive and negative particles. Although these particles are unbound, they are not "free" in the sense of not experiencing forces; in turn this governs collective behavior with many degrees of freedom. The average number of particles in the Debye sphere is given by the plasma parameter. Bulk interactions: the Debye screening length is short compared to the physical size of the plasma. 
This criterion means that interactions in the bulk of the plasma are more important than those at its edges; when this criterion is satisfied, the plasma is quasineutral. Plasma frequency: the electron plasma frequency is large compared to the electron–neutral collision frequency. When this condition is valid, electrostatic interactions dominate over the processes of ordinary gas kinetics. For plasma to exist, ionization is necessary. The term plasma density by itself usually refers to the electron density, that is, the number of free electrons per unit volume. The degree of ionization of a plasma is the proportion of atoms that have lost or gained electrons; even a partially ionized gas in which as little as 1% of the particles are ionized can have the characteristics of a plasma. The degree of ionization, α, is defined as α = n_i / (n_i + n_n), where n_i is the number density of ions and n_n is the number density of neutral atoms.
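The definition of the degree of ionization translates directly into code; a minimal sketch (function name assumed):

```python
def ionization_degree(n_i, n_n):
    """Degree of ionization alpha = n_i / (n_i + n_n), where n_i is the
    ion number density and n_n the neutral-atom number density."""
    return n_i / (n_i + n_n)

# A gas with 1 ion per 99 neutrals is already 1% ionized,
# enough to show plasma-like behavior per the text above
print(ionization_degree(1.0, 99.0))  # → 0.01
```

A fully ionized plasma has n_n = 0 and hence α = 1.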
43.
Rheology
–
The term rheology was coined by Eugene C. Bingham, a professor at Lafayette College, in 1920, from a suggestion by a colleague. The term was inspired by the aphorism of Simplicius, panta rhei, "everything flows", and was first used to describe the flow of liquids and the deformation of solids. Newtonian fluids can be characterized by a single coefficient of viscosity for a specific temperature. Although this viscosity will change with temperature, it does not change with the strain rate. Only a small group of fluids exhibit such constant viscosity; the large class of fluids whose viscosity changes with the strain rate are called non-Newtonian fluids. For example, ketchup can have its viscosity reduced by shaking. Ketchup is a shear-thinning material, like yogurt and emulsion paint, exhibiting thixotropy, where an increase in relative flow velocity will cause a reduction in viscosity, for example, by stirring. Some other non-Newtonian materials show the opposite behavior, rheopecty: viscosity going up with relative deformation. Since Sir Isaac Newton originated the concept of viscosity, the study of liquids with strain-rate-dependent viscosity is also often called non-Newtonian fluid mechanics. Materials with the characteristics of a fluid will flow when subjected to a stress, which is defined as the force per unit area. There are different sorts of stress, and materials can respond differently to different stresses. Much of theoretical rheology is concerned with associating external forces and torques with internal stresses and internal strain gradients and flow velocities. In this sense, a solid undergoing plastic deformation is a fluid. Granular rheology refers to the continuum mechanical description of granular materials. Experimental techniques known as rheometry are concerned with the determination of well-defined rheological material functions; such relationships are then amenable to mathematical treatment by the established methods of continuum mechanics. 
The characterization of flow or deformation originating from a shear stress field is called shear rheometry; the study of extensional flows is called extensional rheology. Shear flows are much easier to study, and thus much more experimental data are available for shear flows than for extensional flows. A rheologist is an interdisciplinary scientist or engineer who studies the flow of liquids or the deformation of soft solids. It is not a primary degree subject; there is no qualification of rheologist as such. Most rheologists have a qualification in mathematics, the physical sciences, engineering, medicine, or certain technologies. Elasticity is essentially a time-independent process, as the strains appear the moment the stress is applied. If the material deformation rate increases linearly with increasing applied stress, then the material is viscous in the Newtonian sense. Viscoelastic materials, by contrast, are characterized by a delay between the applied constant stress and the maximum strain.
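The shear-thinning and shear-thickening behavior described above is often summarized with the power-law (Ostwald–de Waele) model. The model and the parameter values below are standard rheological conventions assumed for illustration; they do not come from the text:

```python
def apparent_viscosity(k_consistency, n_index, shear_rate):
    """Power-law model: eta_apparent = K * gamma_dot**(n - 1).
    n < 1: shear-thinning (e.g. ketchup); n = 1: Newtonian; n > 1: shear-thickening."""
    return k_consistency * shear_rate ** (n_index - 1.0)

# A shear-thinning fluid (n = 0.5) loses apparent viscosity as it is sheared faster
print(apparent_viscosity(1.0, 0.5, 1.0))    # low shear rate: higher viscosity
print(apparent_viscosity(1.0, 0.5, 100.0))  # high shear rate: lower viscosity
```

With n = 1 the expression collapses to a constant viscosity K, recovering the Newtonian case described in the text.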
44.
Viscoelasticity
–
Viscoelasticity is the property of materials that exhibit both viscous and elastic characteristics when undergoing deformation. Viscous materials, like honey, resist shear flow and strain linearly with time when a stress is applied; elastic materials strain when stretched and quickly return to their original state once the stress is removed. Viscoelastic materials have elements of both of these properties and, as such, exhibit time-dependent strain. In the nineteenth century, physicists such as Maxwell, Boltzmann, and Kelvin researched and experimented with the creep and recovery of glasses and metals. Viscoelasticity was further examined in the late twentieth century when synthetic polymers were engineered and used in a variety of applications. Viscoelasticity calculations depend heavily on the viscosity variable, η; the inverse of η is also known as fluidity, φ. The value of either can be derived as a function of temperature or as a given value. Depending on the change of strain rate versus stress inside a material, the viscosity can be categorized as having a linear, non-linear, or plastic response. When a material exhibits a linear response it is categorized as a Newtonian material; in this case the stress is linearly proportional to the strain rate. If the material exhibits a non-linear response to the strain rate, it is categorized as non-Newtonian. There is also an interesting case where the viscosity decreases while the shear/strain rate remains constant; a material which exhibits this type of behavior is known as thixotropic. In addition, when the stress is independent of this strain rate, the material exhibits plastic deformation. Many viscoelastic materials exhibit rubber-like behavior explained by the theory of polymer elasticity. Some examples of viscoelastic materials include amorphous polymers, semicrystalline polymers, biopolymers, and metals at very high temperatures. 
Cracking occurs when the strain is applied quickly and outside of the elastic limit. Ligaments and tendons are viscoelastic, so the extent of the potential damage to them depends both on the rate of the change of their length and on the force applied. The viscosity of a viscoelastic substance gives the substance a strain-rate dependence on time. Purely elastic materials do not dissipate energy when a load is applied and then removed; however, a viscoelastic substance loses energy during a loading cycle. Hysteresis is observed in the stress–strain curve, with the area of the loop being equal to the energy lost during the loading cycle. Since viscosity is the resistance to thermally activated plastic deformation, a viscous material will lose energy through a loading cycle. Plastic deformation results in lost energy, which is uncharacteristic of a purely elastic material's reaction to a loading cycle. Specifically, viscoelasticity is a molecular rearrangement. When a stress is applied to a viscoelastic material such as a polymer, parts of the long polymer chain change position. This movement or rearrangement is called creep. Polymers remain a solid material even when these parts of their chains are rearranging to accommodate the stress, and as this occurs, it creates a back stress in the material.
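The time-dependent stress loss described above is classically captured by the Maxwell model, a spring and dashpot in series, associated with the same Maxwell named in the history; the numeric values and function name below are illustrative assumptions:

```python
import math

def maxwell_relaxed_stress(sigma0, t, modulus_e, viscosity_eta):
    """Maxwell model under constant strain: stress relaxes as
    sigma(t) = sigma0 * exp(-t / tau), with relaxation time tau = eta / E."""
    tau = viscosity_eta / modulus_e
    return sigma0 * math.exp(-t / tau)

# After one relaxation time (tau = eta/E = 2.0) the stress has decayed to sigma0/e
print(maxwell_relaxed_stress(1000.0, 2.0, 1.0, 2.0))
```

A purely elastic material corresponds to the limit of infinite viscosity, where the stress never relaxes.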
45.
Rheometer
–
A rheometer is a laboratory device used to measure the way in which a liquid, suspension or slurry flows in response to applied forces. It is used for fluids that cannot be defined by a single value of viscosity and therefore require more parameters to be set and measured. It measures the rheology of the fluid. There are two distinctively different types of rheometers. Rotational or shear-type rheometers are usually designed as either a native strain-controlled instrument or a native stress-controlled instrument. The word rheometer comes from the Greek and means a device for measuring flow. In the 19th century it was used for devices to measure electric current, and it was also used for the measurement of the flow of liquids in medical practice; this latter use persisted to the second half of the 20th century in some areas. Following the coining of the term rheology, the word came to be applied to instruments for measuring the character rather than the quantity of flow. The principle and working of rheometers is described in several texts. A dynamic shear rheometer, commonly known as a DSR, is used for research. In a capillary rheometer, liquid is forced through a tube of constant cross-section and precisely known dimensions under conditions of laminar flow. Either the flow rate or the pressure drop is fixed and the other measured. Knowing the dimensions, the flow rate can be converted into a value for the shear rate; varying the pressure or flow allows a flow curve to be determined. In a rotational-cylinder instrument, the liquid is placed within the annulus of one cylinder inside another, and one of the cylinders is rotated at a set speed. This determines the shear rate inside the annulus. The liquid tends to drag the other cylinder round, and the force it exerts on that cylinder is measured, which can be converted to a shear stress. One version of this is the Fann V-G viscometer, which runs at two speeds and therefore gives only two points on the flow curve. 
This is sufficient to define a Bingham plastic model, which used to be used in the oil industry for determining the flow character of drilling fluids. In recent years, rheometers that spin at 600, 300, 200, 100, 6 and 3 RPM have been used; this allows more complex fluid models such as Herschel–Bulkley to be fitted. Some models allow the speed to be increased and decreased in a programmed fashion. In the cone-and-plate geometry, the liquid is placed on a horizontal plate and a cone is placed into it. The angle between the surface of the cone and the plate is around 1 to 2 degrees but can vary depending on the types of tests being run. Typically the plate is rotated and the torque on the cone is measured.
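A reason the cone angle is kept so small is that the shear rate then becomes nearly uniform across the gap, approximately γ̇ = ω/θ with the cone angle θ in radians; a sketch of this standard relation (the function name is assumed):

```python
import math

def cone_plate_shear_rate(omega_rad_s, cone_angle_deg):
    """Approximate, uniform shear rate in a cone-and-plate rheometer:
    gamma_dot = omega / theta, valid for small cone angles (~1-2 degrees)."""
    theta_rad = math.radians(cone_angle_deg)
    return omega_rad_s / theta_rad

# A plate spinning at 1 rad/s under a 1 degree cone
print(cone_plate_shear_rate(1.0, 1.0))  # ≈ 57.3 s^-1
```

Pairing each imposed shear rate with the measured torque (converted to shear stress) yields one point on the flow curve.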
46.
Smart fluid
–
A smart fluid is a fluid whose properties can be changed by applying an electric field or a magnetic field. The most developed smart fluids today are fluids whose viscosity increases when a field is applied. In magnetorheological (MR) fluids, small magnetic dipoles are suspended in a fluid, and an applied magnetic field causes these small magnets to line up. MR fluids have been used in the suspension of the 2002 model of the Cadillac Seville STS automobile and more recently in other vehicles: depending on road conditions, the damping fluid's viscosity is adjusted. This is more expensive than traditional systems, but it provides better control. Some haptic devices whose resistance to touch can be controlled are also based on these MR fluids. Another major type of smart fluid are electrorheological (ER) fluids, with applications including fast-acting clutches, brakes, shock absorbers and hydraulic valves, as well as other, more esoteric, uses. Other smart fluids change their surface tension in the presence of an electric field. Further applications include brakes and seismic dampers, which are used in buildings in seismically active zones to damp the oscillations occurring in an earthquake. Since the initial burst of research, interest appears to have waned a little, possibly due to various limitations of smart fluids which have yet to be overcome.
47.
Magnetorheological fluid
–
A magnetorheological (MR) fluid is a type of smart fluid consisting of magnetic particles suspended in a carrier fluid, usually a type of oil. When subjected to a magnetic field, the fluid greatly increases its apparent viscosity. Importantly, the yield stress of the fluid in its active state can be controlled very accurately by varying the magnetic field intensity. The upshot is that the fluid's ability to transmit force can be controlled with an electromagnet. Extensive discussions of the physics and applications of MR fluids can be found in a recent book. MR fluid is different from a ferrofluid, which has smaller particles: MR fluid particles are primarily on the micrometre scale and are too dense for Brownian motion to keep them suspended, whereas ferrofluid particles are primarily nanoparticles that are suspended by Brownian motion and generally will not settle under normal conditions. As a result, these two fluids have different applications. When a magnetic field is applied, the particles align themselves along the lines of magnetic flux. To understand and predict the behavior of an MR fluid it is necessary to model the fluid mathematically, a task slightly complicated by the varying material properties. As mentioned above, smart fluids have a low viscosity in the absence of a magnetic field. In the case of MR fluids, the fluid actually assumes properties comparable to a solid when in the activated state. The behavior of an MR fluid can thus be considered similar to a Bingham plastic, a material model which has been well investigated. However, an MR fluid does not exactly follow the characteristics of a Bingham plastic; for example, below the yield stress the fluid behaves as a viscoelastic material, with a complex modulus that is also known to be dependent on the magnetic field intensity. MR fluids are also known to be subject to shear thinning. Low shear strength has been the primary reason for the limited range of applications; in the absence of external pressure the maximum shear strength is about 100 kPa. 
The shear strength can be raised if the fluid is compressed in the field direction, for example under a compressive stress of 2 MPa, and also if the standard spherical magnetic particles are replaced with elongated magnetic particles. Ferroparticles settle out of the suspension over time due to the inherent density difference between the particles and their carrier fluid; the rate and degree to which this occurs is one of the primary attributes considered in industry when implementing or designing an MR device. Surfactants are typically used to offset this effect, but at a cost of the fluid's magnetic saturation, and thus of the maximum yield stress exhibited in its activated state.
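In the flowing regime, the Bingham-plastic idealization mentioned above relates shear stress to shear rate through a yield stress that, for an MR fluid, depends on the applied field. The function below sketches only the constitutive relation; the numeric values are illustrative assumptions, not measurements from the text:

```python
def bingham_shear_stress(yield_stress_pa, plastic_viscosity, shear_rate):
    """Bingham plastic (flowing regime): tau = tau_y + eta_p * gamma_dot.
    For an MR fluid, tau_y grows with the applied magnetic field intensity."""
    return yield_stress_pa + plastic_viscosity * shear_rate

# Field off (tau_y = 0) vs field on (tau_y = 50 kPa), at the same shear rate
print(bingham_shear_stress(0.0, 0.5, 100.0))      # → 50.0
print(bingham_shear_stress(50_000.0, 0.5, 100.0)) # → 50050.0
```

Below the yield stress the real fluid behaves viscoelastically rather than rigidly, which is one way it departs from the ideal Bingham model.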
48.
Electrorheological fluid
–
Electrorheological (ER) fluids are suspensions of extremely fine non-conducting but electrically active particles in an electrically insulating fluid. The apparent viscosity of these fluids changes reversibly by a factor of up to 100,000 in response to an electric field. For example, a typical ER fluid can go from the consistency of a liquid to that of a gel, and back, with response times on the order of milliseconds. The effect is called the Winslow effect after its discoverer, the American inventor Willis Winslow. Common applications are in ER brakes and shock absorbers, and there are many novel uses for these fluids: potential uses include accurate abrasive polishing and haptic controllers, and Motorola filed a patent application for mobile-device applications in 2006. The change in apparent viscosity is dependent on the applied electric field. The change is not a simple change in viscosity, hence these fluids are now known as ER fluids; the effect is better described as an electric-field-dependent shear yield stress. When activated, an ER fluid behaves as a Bingham plastic, with a yield point which is determined by the electric field strength. After the yield point is reached, the fluid shears as a fluid; hence the resistance to motion of the fluid can be controlled by adjusting the applied electric field. ER fluids are a type of smart fluid. A simple ER fluid can be made by mixing cornflour into a light vegetable oil or silicone oil. There are two theories to explain the effect: the interfacial tension or "water bridge" theory, and the electrostatic theory. The water bridge theory assumes a three-phase system: the particles contain the third phase, which is another liquid immiscible with the main phase liquid. With no applied electric field the third phase is strongly attracted to and held within the particles. 
This means the ER fluid is a suspension of particles which behaves as a liquid. When an electric field is applied, the third phase is driven to one side of the particles by electro-osmosis and binds adjacent particles together to form chains; this chain structure means the ER fluid has become a solid. The electrostatic theory assumes just a two-phase system, with dielectric particles forming chains aligned with the electric field in a way analogous to how magnetorheological fluids work. An ER fluid has been constructed with the solid phase made from a conductor coated in an insulator; this ER fluid clearly cannot work by the water bridge model, which demonstrates that at least some ER fluids work by the electrostatic effect. The advantage of an ER fluid which operates on the electrostatic effect is the elimination of leakage current, i.e. potentially there is no direct current.
49.
Ferrofluid
–
A ferrofluid is a liquid that becomes strongly magnetized in the presence of a magnetic field. Ferrofluid was invented in 1963 by NASA's Steve Papell as a rocket fuel that could be drawn toward a pump inlet in a weightless environment by applying a magnetic field. Ferrofluids are colloidal liquids made of nanoscale ferromagnetic or ferrimagnetic particles suspended in a carrier fluid; each tiny particle is thoroughly coated with a surfactant to inhibit clumping. Large ferromagnetic particles can be ripped out of the homogeneous colloidal mixture, but the magnetic attraction of nanoparticles is weak enough that the surfactant's Van der Waals force is sufficient to prevent magnetic clumping or agglomeration. Ferrofluids usually do not retain magnetization in the absence of an applied field. The difference between ferrofluids and magnetorheological fluids is the size of the particles: the particles in a ferrofluid primarily consist of nanoparticles which are suspended by Brownian motion and generally will not settle under normal conditions. These two fluids have different applications as a result. Ferrofluids are composed of particles of magnetite, hematite or some other compound containing iron, small enough for thermal agitation to disperse them evenly within a carrier fluid; this is similar to the way that the ions in an aqueous paramagnetic salt solution make the solution paramagnetic. The composition of a typical ferrofluid is about 5% magnetic solids, 10% surfactant and 85% carrier. Particles in ferrofluids are dispersed in a liquid, often using a surfactant, and thus ferrofluids are colloidal suspensions, materials with properties of more than one state of matter; in this case, the two states of matter are the solid metal and the liquid it is in. This ability to change phases with the application of a magnetic field allows them to be used as seals and lubricants. The surfactant coating means that the particles do not agglomerate or phase separate even in extremely strong magnetic fields. 
However, the surfactant tends to break down over time, and eventually the nanoparticles will agglomerate. The term magnetorheological fluid refers to liquids similar to ferrofluids that solidify in the presence of a magnetic field; magnetorheological fluids have micrometre-scale magnetic particles that are one to three orders of magnitude larger than those of ferrofluids. Ferrofluids lose their magnetic properties at sufficiently high temperatures, known as the Curie temperature. When a paramagnetic fluid is subjected to a strong vertical magnetic field, the surface forms a regular pattern of peaks and valleys. This effect is known as the Rosensweig or normal-field instability. The instability is driven by the magnetic field and can be explained by considering which shape of the fluid minimizes the total energy of the system: from the point of view of the magnetic field energy, peaks and valleys are energetically favorable.
50.
Daniel Bernoulli
–
Daniel Bernoulli FRS was a Swiss mathematician and physicist and was one of the many prominent mathematicians in the Bernoulli family. He is particularly remembered for his applications of mathematics to mechanics, especially fluid mechanics. Daniel Bernoulli was born in Groningen, in the Netherlands, into a family of distinguished mathematicians. The Bernoulli family came originally from Antwerp, at that time in the Spanish Netherlands; after a brief period in Frankfurt the family moved to Basel. Daniel was the son of Johann Bernoulli and a nephew of Jacob Bernoulli, and he had two brothers, Niklaus and Johann II. Daniel Bernoulli was described by W. W. Rouse Ball as "by far the ablest of the younger Bernoullis". He is said to have had a bad relationship with his father: Johann Bernoulli plagiarized some key ideas from Daniel's book Hydrodynamica in his own book Hydraulica, which he backdated to before Hydrodynamica. Despite Daniel's attempts at reconciliation, his father carried the grudge until his death. Around schooling age, his father, Johann, encouraged him to study business, there being poor rewards awaiting a mathematician. Daniel at first refused, because he wanted to study mathematics, but he later gave in to his father's wish and studied business. His father then asked him to study medicine, and Daniel agreed under the condition that his father would teach him mathematics privately. Daniel studied medicine at Basel, Heidelberg, and Strasbourg, and earned a PhD in anatomy and botany in 1721. He was a contemporary and close friend of Leonhard Euler. He went to St. Petersburg in 1724 as professor of mathematics, but was very unhappy there, and a temporary illness in 1733 gave him an excuse for leaving St. Petersburg. He returned to the University of Basel, where he successively held the chairs of medicine, metaphysics, and natural philosophy. In May 1750 he was elected a Fellow of the Royal Society. His earliest mathematical work was the Exercitationes, published in 1724 with the help of Goldbach.
Two years later he pointed out for the first time the frequent desirability of resolving a compound motion into a motion of translation and a motion of rotation. Together, Bernoulli and Euler tried to discover more about the flow of fluids; in particular, they wanted to know about the relationship between the speed at which blood flows and its pressure. Soon physicians all over Europe were measuring patients' blood pressure by sticking point-ended glass tubes directly into their arteries. It was not until about 170 years later, in 1896, that an Italian doctor discovered a less painful method which is still in use today. However, Bernoulli's method of measuring pressure is still used today in modern aircraft to measure the speed of the air passing the plane. Taking his discoveries further, Daniel Bernoulli returned to his work on the conservation of energy. It was known that a moving body exchanges its kinetic energy for potential energy when it gains height.
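Bernoulli's pressure measurement survives in the aircraft pitot tube mentioned above: the difference between total and static pressure gives the airspeed. A minimal numerical sketch, assuming incompressible flow and sea-level air density; the function name and sample pressures are illustrative:

```python
import math

def pitot_airspeed(p_total, p_static, rho=1.225):
    """Airspeed (m/s) from Bernoulli's equation for incompressible flow:
    p_total = p_static + 0.5 * rho * v**2, so v = sqrt(2 * dp / rho)."""
    return math.sqrt(2.0 * (p_total - p_static) / rho)

# Illustrative values: a 613 Pa dynamic pressure in sea-level air
# (density ~1.225 kg/m^3) corresponds to roughly 31.6 m/s.
v = pitot_airspeed(p_total=101938.0, p_static=101325.0)
```

Real aircraft instruments apply further corrections (compressibility, instrument error), so this is only the underlying principle.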
51.
Robert Boyle
–
Robert William Boyle FRS was an Anglo-Irish natural philosopher, chemist, physicist and inventor born in Lismore, County Waterford, Ireland. Boyle is largely regarded today as the first modern chemist, and therefore one of the founders of modern chemistry, and one of the pioneers of the modern experimental scientific method. He is best known for Boyle's law, which describes the inversely proportional relationship between the absolute pressure and volume of a gas if the temperature is kept constant within a closed system. Among his works, The Sceptical Chymist is seen as a cornerstone book in the field of chemistry. He was a devout and pious Anglican and is noted for his writings in theology. Boyle was born in Lismore Castle, in County Waterford, Ireland, the seventh son and fourteenth child of Richard Boyle, 1st Earl of Cork, and Catherine Fenton. Richard Boyle had arrived in Dublin from England in 1588 during the Tudor plantations of Ireland and had amassed enormous landholdings by the time Robert was born. As a child, Boyle was fostered to a local family. He received private tutoring in Latin, Greek, and French, and when he was eight years old, following the death of his mother, he was sent to Eton College in England; his father's friend, Sir Henry Wotton, was then the provost of the college. During this time, his father hired a private tutor, Robert Carew, who had knowledge of Irish, to act as private tutor to his sons at Eton. After spending over three years at Eton, Robert travelled abroad with a French tutor. They visited Italy in 1641 and remained in Florence during the winter of that year studying the "paradoxes of the great star-gazer" Galileo Galilei, who was elderly but still living in 1641. Boyle returned to England from continental Europe in mid-1644 with a keen interest in scientific research. His father had died the previous year and had left him the manor of Stalbridge in Dorset, England, and substantial estates in County Limerick in Ireland that he had acquired.
They met frequently in London, often at Gresham College. Having made several visits to his Irish estates beginning in 1647, Robert moved to Ireland in 1652 but became frustrated at his inability to make progress in his chemical work; in one letter, he described Ireland as a country where "chemical spirits were so misunderstood". In 1654, Boyle left Ireland for Oxford to pursue his work more successfully. An inscription can be found on the wall of University College, Oxford, on the High Street, marking the spot where Cross Hall stood until the early 19th century; it was here that Boyle rented rooms from the apothecary who owned the Hall. An account of Boyle's work with the air pump was published in 1660 under the title New Experiments Physico-Mechanical, Touching the Spring of the Air. The person who originally formulated the hypothesis now known as Boyle's law was Henry Power in 1661; Boyle in 1662 included a reference to a paper written by Power. In continental Europe the hypothesis is attributed to Edme Mariotte. In 1680 he was elected president of the society, but declined the honour from a scruple about oaths. His list of 24 possible future inventions is extraordinary because all but a few of the 24 have come true.
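Boyle's law as stated in this entry (pressure and volume inversely proportional at constant temperature) can be checked in a few lines; the helper name and numbers are illustrative:

```python
def boyle_pressure(p1, v1, v2):
    """Boyle's law: at constant temperature p * V is constant,
    so the new pressure is p2 = p1 * v1 / v2."""
    return p1 * v1 / v2

# Halving the volume of a closed gas sample at 100 kPa doubles its pressure.
p2 = boyle_pressure(p1=100.0, v1=2.0, v2=1.0)  # -> 200.0 kPa
```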
52.
Augustin-Louis Cauchy
–
Baron Augustin-Louis Cauchy FRS FRSE was a French mathematician who made pioneering contributions to analysis. He was one of the first to state and prove theorems of calculus rigorously, and he almost singlehandedly founded complex analysis and the study of permutation groups in abstract algebra. A profound mathematician, Cauchy had a great influence over his contemporaries. His writings range widely in mathematics and mathematical physics, and more concepts and theorems have been named for Cauchy than for any other mathematician. Cauchy was a prolific writer: he wrote approximately eight hundred research articles. Cauchy was the son of Louis François Cauchy and Marie-Madeleine Desestre. Cauchy married Aloïse de Bure in 1818; she was a relative of the publisher who published most of Cauchy's works, and by her he had two daughters, Marie Françoise Alicia and Marie Mathilde. Cauchy's father was a high official in the Parisian police of the Ancien Régime. He lost his position because of the French Revolution, which broke out one month before Augustin-Louis was born. The Cauchy family survived the revolution and the following Reign of Terror by escaping to Arcueil, where Cauchy received his first education from his father. After the execution of Robespierre, it was safe for the family to return to Paris, where Louis-François Cauchy found himself a new bureaucratic job and quickly moved up the ranks. When Napoleon Bonaparte came to power, Louis-François Cauchy was further promoted. The famous mathematician Lagrange was also a friend of the Cauchy family, and on Lagrange's advice Augustin-Louis was enrolled in the École Centrale du Panthéon. Most of the curriculum consisted of classical languages, and the young and ambitious Cauchy, being a brilliant student, won many prizes in Latin and Humanities. In spite of these successes, Augustin-Louis chose an engineering career.
In 1805 he placed second out of 293 applicants on the entrance exam for the École Polytechnique. One of the main purposes of this school was to give future civil and military engineers a high-level scientific and mathematical education. The school functioned under military discipline, which the young Cauchy found difficult; nevertheless, he finished the Polytechnique in 1807, at the age of 18, and went on to the École des Ponts et Chaussées, from which he graduated in engineering with the highest honors. After finishing school in 1810, Cauchy accepted a job as an engineer in Cherbourg. Cauchy's first two manuscripts were accepted; the third was rejected.
53.
Jacques Charles
–
Jacques Alexandre César Charles was a French inventor, scientist, mathematician, and balloonist; he was sometimes called Charles the Geometer. He and the Robert brothers pioneered the use of hydrogen for lift, which led to this type of balloon being named a Charlière. Charles's law, describing how gases tend to expand when heated, was formulated by Joseph Louis Gay-Lussac in 1802, who credited it to unpublished work by Charles. Charles was elected to the Académie des Sciences in 1795 and subsequently became professor of physics there. Charles was born in Beaugency-sur-Loire in 1746. He married Julie Françoise Bouchaud des Hérettes; Charles outlived her and died in Paris on April 7, 1823. For their first balloon they used alternate strips of red and white silk, but the discolouration of the varnishing process left a red and yellow result. Jacques Charles and the Robert brothers launched the world's first hydrogen-filled balloon on August 27, 1783, from the Champ de Mars. The balloon was comparatively small, a 35 cubic metre sphere of rubberised silk, and only capable of lifting about 9 kg. It was filled with hydrogen that had been made by pouring nearly a quarter of a tonne of sulphuric acid onto half a tonne of scrap iron; the hydrogen gas was fed into the balloon via lead pipes. Daily progress bulletins were issued on the inflation, and the crowd was so great that on the 26th the balloon was moved secretly by night to the Champ de Mars, a distance of 4 kilometres. The project was funded by a subscription organised by Barthélemy Faujas de Saint-Fond. At 13:45 on December 1, 1783, Jacques Charles and the Robert brothers launched a new manned balloon from the Jardin des Tuileries in Paris. Jacques Charles was accompanied by Nicolas-Louis Robert as co-pilot of the 380-cubic-metre balloon. The envelope was fitted with a hydrogen release valve and was covered with a net from which the basket was suspended, and sand ballast was used to control altitude. They ascended to a height of about 1,800 feet and landed at sunset in Nesles-la-Vallée after a 2-hour 5-minute flight covering 36 km.
The chasers on horseback, led by the Duc de Chartres, held the craft down while the aeronauts alighted. Jacques Charles then decided to ascend again, but alone this time, because the balloon had lost some of its hydrogen. This time it ascended rapidly to an altitude of about 3,000 metres, where he began suffering from aching pain in his ears, so he valved to release gas and descended to land gently about 3 km away at Tour du Lay. Unlike the Robert brothers, Charles never flew again, although a hydrogen balloon came to be called a Charlière in his honour. Among the special-enclosure crowd was Benjamin Franklin, the diplomatic representative of the United States of America. Also present was Joseph Montgolfier, whom Charles honoured by asking him to release the small, bright green pilot balloon used to assess the wind and weather conditions. This event took place ten days after the world's first manned flight by Jean-François Pilâtre de Rozier using a Montgolfier brothers hot air balloon. Simon Schama wrote in Citizens that Montgolfier's principal scientific collaborator was M. Charles, who had been the first to propose the use of the gas produced by vitriol instead of the burning, dampened straw and wood that had been used in earlier flights. Charles himself was eager to ascend but had run into a firm veto from the King.
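Charles's law mentioned at the top of this entry (gas volume grows in proportion to absolute temperature at constant pressure) can be sketched numerically; the function name and figures are illustrative:

```python
def charles_volume(v1, t1, t2):
    """Charles's law: at constant pressure V / T is constant
    (temperatures in kelvin), so the new volume is v2 = v1 * t2 / t1."""
    return v1 * t2 / t1

# Warming 1.0 m^3 of gas from 273.15 K (0 C) to 373.15 K (100 C)
# expands it to about 1.37 m^3.
v2 = charles_volume(1.0, 273.15, 373.15)
```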
54.
Leonhard Euler
–
He also introduced much of the modern mathematical terminology and notation, particularly for mathematical analysis, such as the notion of a mathematical function. He is also known for his work in mechanics, fluid dynamics, optics, and astronomy. Euler was one of the most eminent mathematicians of the 18th century and is held to be one of the greatest in history. He is also considered the most prolific mathematician of all time: his collected works fill 60 to 80 quarto volumes, more than anybody in the field. He spent most of his adult life in Saint Petersburg, Russia, and in Berlin, then the capital of Prussia. A statement attributed to Pierre-Simon Laplace expresses Euler's influence on mathematics: "Read Euler, read Euler, he is the master of us all." Leonhard Euler was born on 15 April 1707, in Basel, Switzerland, to Paul III Euler, a pastor of the Reformed Church, and Marguerite née Brucker, a pastor's daughter. He had two sisters, Anna Maria and Maria Magdalena, and a younger brother, Johann Heinrich. Soon after the birth of Leonhard, the Eulers moved from Basel to the town of Riehen. Paul Euler was a friend of the Bernoulli family; Johann Bernoulli was then regarded as Europe's foremost mathematician and would eventually be the most important influence on young Leonhard. Euler's formal education started in Basel, where he was sent to live with his maternal grandmother. In 1720, aged thirteen, he enrolled at the University of Basel; during that time he was receiving Saturday afternoon lessons from Johann Bernoulli, who quickly discovered his new pupil's incredible talent for mathematics. In 1726, Euler completed a dissertation on the propagation of sound with the title De Sono; at that time he was unsuccessfully attempting to obtain a position at the University of Basel. In 1727, he first entered the Paris Academy Prize Problem competition; Pierre Bouguer, who became known as "the father of naval architecture", won, and Euler took second place.
Euler later won this annual prize twelve times. Around this time Johann Bernoulli's two sons, Daniel and Nicolaus, were working at the Imperial Russian Academy of Sciences in Saint Petersburg. In November 1726 Euler eagerly accepted an offer of a post there, but delayed making the trip to Saint Petersburg while he applied for a physics professorship at the University of Basel. Euler arrived in Saint Petersburg on 17 May 1727, and he was promoted from his junior post in the medical department of the academy to a position in the mathematics department. He lodged with Daniel Bernoulli, with whom he worked in close collaboration. Euler mastered Russian and settled into life in Saint Petersburg, and he also took on an additional job as a medic in the Russian Navy. The Academy at Saint Petersburg, established by Peter the Great, was intended to improve education in Russia; as a result, it was made especially attractive to foreign scholars like Euler.
55.
Joseph Louis Gay-Lussac
–
Joseph Louis Gay-Lussac was a French chemist and physicist. Gay-Lussac was born at Saint-Léonard-de-Noblat in the department of Haute-Vienne. His father, Anthony Gay, son of a doctor, was a lawyer and prosecutor and worked as a judge in Noblat Bridge. Father of two sons and three daughters, he owned much of the Lussac village and usually added the name of this hamlet of the Haute-Vienne to his name; towards the year 1803, father and son finally adopted the name Gay-Lussac. During the Revolution, his father was arrested under the Law of Suspects. Joseph Louis received his early education at the hands of the Catholic Abbey of Bourdeix, though later in life he became an atheist. In the care of the Abbot of Dumonteil he continued his education in Paris. Gay-Lussac narrowly avoided conscription, and by the time of his entry to the École Polytechnique his father had been arrested. Three years later, Gay-Lussac transferred to the École des Ponts et Chaussées. In 1802, he was appointed demonstrator to A. F. Fourcroy at the École Polytechnique, where he later became professor of chemistry. From 1808 to 1832, he was professor of physics at the Sorbonne. In 1821, he was elected a foreign member of the Royal Swedish Academy of Sciences; in 1831 he was elected to represent Haute-Vienne in the chamber of deputies; and he was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1832. Gay-Lussac married Geneviève-Marie-Joseph Rojot in 1809; he had first met her when she worked as a linen draper's shop assistant and was studying a chemistry textbook under the counter. He fathered five children, of whom the eldest, Jules, became assistant to Justus Liebig in Giessen; some publications by Jules are mistaken as his father's today since they share the same first initial. Gay-Lussac died in Paris, and his grave is at Père Lachaise Cemetery. His name is one of the 72 names inscribed on the Eiffel Tower.
1802 – Gay-Lussac first formulated the law now known as Gay-Lussac's Law, stating that if the mass and volume of a gas are held constant, its pressure increases in proportion to its absolute temperature. His work was preceded by that of Guillaume Amontons, who established the rough relation without the use of accurate thermometers. The law is written as p = kT, where k is a constant dependent on the mass and volume of the gas. 1804 – He and Jean-Baptiste Biot made a balloon ascent to a height of 7,016 metres in an early investigation of the Earth's atmosphere; he wanted to collect samples of the air at different heights to record differences in temperature and moisture. 1805 – Together with his friend and scientific collaborator Alexander von Humboldt, he discovered that the composition of the atmosphere does not change with decreasing pressure; they also discovered that water is formed by two parts of hydrogen and one part of oxygen. 1808 – He was the co-discoverer of boron.
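The relation p = kT above implies that, for a sealed rigid container, pressure ratios equal absolute-temperature ratios. A minimal sketch with illustrative names and numbers:

```python
def gay_lussac_pressure(p1, t1, t2):
    """Gay-Lussac's law: at constant mass and volume p / T is constant
    (temperatures in kelvin), so the new pressure is p2 = p1 * t2 / t1."""
    return p1 * t2 / t1

# Doubling the absolute temperature of a sealed rigid container
# doubles its pressure:
p2 = gay_lussac_pressure(p1=100.0, t1=300.0, t2=600.0)  # -> 200.0 kPa
```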
56.
Robert Hooke
–
Robert Hooke FRS was an English natural philosopher, architect and polymath. Allan Chapman has characterised him as "England's Leonardo". Robert Gunther's Early Science in Oxford, a history of science in Oxford during the Protectorate, Restoration and Age of Enlightenment, devotes five of its fourteen volumes to Hooke. Hooke studied at Wadham College, Oxford, during the Protectorate, where he became one of a tightly knit group of ardent Royalists led by John Wilkins. Here he was employed as an assistant to Thomas Willis and to Robert Boyle, and he built some of the earliest Gregorian telescopes and observed the rotations of Mars and Jupiter. In 1665 he inspired the use of microscopes for scientific exploration with his book Micrographia. Based on his microscopic observations of fossils, Hooke was an early proponent of biological evolution. Much of Hooke's scientific work was conducted in his capacity as curator of experiments of the Royal Society. Much of what is known of Hooke's early life comes from an autobiography that he commenced in 1696 but never completed; Richard Waller mentions it in his introduction to The Posthumous Works of Robert Hooke. The work of Waller, along with John Ward's Lives of the Gresham Professors and John Aubrey's Brief Lives, forms the major near-contemporaneous biographical accounts of Hooke. Robert Hooke was born in 1635 in Freshwater on the Isle of Wight to John Hooke. Robert was the last of four children, two boys and two girls, and there was an age difference of seven years between him and the next youngest. Their father John was a Church of England priest, the curate of Freshwater's Church of All Saints, and Robert Hooke was expected to succeed in his education and join the Church. John Hooke also was in charge of a school, and so was able to teach Robert. He was a Royalist and almost certainly a member of a group who went to pay their respects to Charles I when he escaped to the Isle of Wight; Robert, too, grew up to be a staunch monarchist.
As a youth, Robert Hooke was fascinated by observation and mechanical works. He dismantled a brass clock and built a wooden replica that, by all accounts, worked well enough, and he learned to draw, making his own materials from coal, chalk and ruddle. Hooke quickly mastered Latin and Greek and made a study of Hebrew. Here, too, he embarked on his study of mechanics. It appears that Hooke was one of a group of students whom Busby educated in parallel to the main work of the school; contemporary accounts say he was "not much seen" in the school. In 1653, Hooke secured a chorister's place at Christ Church, Oxford, where he was employed as an assistant to Dr Thomas Willis. There he met the natural philosopher Robert Boyle and gained employment as his assistant from about 1655 to 1662, constructing, operating and demonstrating Boyle's air pump. He did not take his Master of Arts until 1662 or 1663.
57.
Blaise Pascal
–
Blaise Pascal was a French mathematician, physicist, inventor, writer and Christian philosopher. He was a child prodigy who was educated by his father. Pascal also wrote in defence of the scientific method. In 1642, while still a teenager, he started some pioneering work on calculating machines; after three years of effort and 50 prototypes, he built 20 finished machines over the following 10 years. Following Galileo Galilei and Torricelli, in 1647 he rebutted Aristotle's followers who insisted that nature abhors a vacuum, and Pascal's results caused many disputes before being accepted. In 1646, he and his sister Jacqueline identified with the religious movement within Catholicism known by its detractors as Jansenism. Following a religious experience in late 1654, he began writing works on philosophy and theology. His two most famous works date from this period: the Lettres provinciales and the Pensées, the former set in the conflict between Jansenists and Jesuits. In 1654 he also wrote an important treatise on the arithmetical triangle, and between 1658 and 1659 he wrote on the cycloid and its use in calculating the volume of solids. Pascal had poor health, especially after the age of 18, and he died just two months after his 39th birthday. Pascal was born in Clermont-Ferrand, which is in France's Auvergne region. He lost his mother, Antoinette Begon, at the age of three. His father, Étienne Pascal, who also had an interest in science and mathematics, was a local judge. Pascal had two sisters, the younger Jacqueline and the elder Gilberte. In 1631, five years after the death of his wife, Étienne moved with his children to Paris; the newly arrived family soon hired Louise Delfault, a maid who eventually became an instrumental member of the family. Étienne, who never remarried, decided that he alone would educate his children, for they all showed extraordinary intellectual ability; the young Pascal showed an amazing aptitude for mathematics and science.
Of particular interest to Pascal was a work of Desargues on conic sections. Following Desargues' thinking, the sixteen-year-old Pascal produced a treatise on conic sections containing what is now known as Pascal's theorem: if a hexagon is inscribed in a circle, then the three intersection points of opposite sides lie on a line. Pascal's work was so precocious that Descartes was convinced that Pascal's father had written it. In France at that time, offices and positions could be, and were, bought and sold. In 1631 Étienne sold his position as president of the Cour des Aides for 65,665 livres. The money was invested in a government bond which provided, if not a lavish, then certainly a comfortable income which allowed the Pascal family to move to Paris. But in 1638 Richelieu, desperate for money to carry on the Thirty Years' War, defaulted on the government's bonds; suddenly Étienne Pascal's worth had dropped from nearly 66,000 livres to less than 7,300. It was only when Jacqueline performed well in a children's play with Richelieu in attendance that Étienne was pardoned.
58.
Isaac Newton
–
His book Philosophiæ Naturalis Principia Mathematica, first published in 1687, laid the foundations of classical mechanics. Newton also made contributions to optics, and he shares credit with Gottfried Wilhelm Leibniz for developing the infinitesimal calculus. Newton's Principia formulated the laws of motion and universal gravitation that dominated scientists' view of the universe for the next three centuries. Newton's work on light was collected in his influential book Opticks. He also formulated an empirical law of cooling and made the first theoretical calculation of the speed of sound. Newton was a fellow of Trinity College and the second Lucasian Professor of Mathematics at the University of Cambridge. Politically and personally tied to the Whig party, Newton served two brief terms as Member of Parliament for the University of Cambridge, in 1689–90 and 1701–02. He was knighted by Queen Anne in 1705, and he spent the last three decades of his life in London, serving as Warden and Master of the Royal Mint. His father, also named Isaac Newton, had died three months before he was born. Born prematurely, he was a small child; his mother Hannah Ayscough reportedly said that he could have fit inside a quart mug. When Newton was three, his mother remarried and went to live with her new husband, the Reverend Barnabas Smith, leaving her son in the care of his maternal grandmother; Newton's mother had three children from her second marriage. From the age of twelve until he was seventeen, Newton was educated at The King's School, Grantham, which taught Latin and Greek. He was removed from school, and by October 1659 he was to be found at Woolsthorpe-by-Colsterworth. Henry Stokes, master at The King's School, persuaded his mother to send him back to school so that he might complete his education. Motivated partly by a desire for revenge against a schoolyard bully, he became the top-ranked student.
In June 1661, he was admitted to Trinity College, Cambridge. He started as a subsizar, paying his way by performing a valet's duties, until he was awarded a scholarship in 1664, which guaranteed him four more years until he could take his M.A. He set down in his notebook a series of Quaestiones about mechanical philosophy as he found it. In 1665, he discovered the generalised binomial theorem and began to develop a mathematical theory that later became calculus. Soon after Newton had obtained his B.A. degree in August 1665, the university temporarily closed as a precaution against the Great Plague. In April 1667, he returned to Cambridge and in October was elected as a fellow of Trinity. Fellows were required to become ordained priests, although this was not enforced in the Restoration years; however, by 1675 the issue could not be avoided, and by then his unconventional views stood in the way. Nevertheless, Newton managed to avoid ordination by means of a special permission from Charles II. He was elected a Fellow of the Royal Society in 1672. Newton's work has been said "to distinctly advance every branch of mathematics then studied". His work on the subject usually referred to as fluxions or calculus, seen in a manuscript of October 1666, is now published among Newton's mathematical papers.
59.
Claude-Louis Navier
–
Claude-Louis Navier was a French engineer and physicist who specialized in mechanics. The Navier–Stokes equations are named after him and George Gabriel Stokes. After the death of his father in 1793, Navier's mother left his education in the hands of his uncle Émiland Gauthey, an engineer with the Corps of Bridges and Roads. In 1802, Navier enrolled at the École polytechnique, and in 1804 continued his studies at the École Nationale des Ponts et Chaussées; he eventually succeeded his uncle as Inspecteur général at the Corps des Ponts et Chaussées. He directed the construction of bridges at Choisy, Asnières and Argenteuil in the Department of the Seine. In 1824, Navier was admitted into the French Academy of Sciences. Navier formulated the general theory of elasticity in a mathematically usable form, and he is therefore considered to be one of the founders of modern structural analysis. His major contribution, however, remains the Navier–Stokes equations, central to fluid mechanics. His name is one of the 72 names inscribed on the Eiffel Tower.
60.
Sir George Stokes, 1st Baronet
–
Sir George Gabriel Stokes, 1st Baronet, PRS, was a physicist and mathematician. Born in Ireland, Stokes spent all of his career at the University of Cambridge. In physics, Stokes made seminal contributions to fluid dynamics and to physical optics, and in mathematics he formulated the first version of what is now known as Stokes' theorem. He served as secretary, then president, of the Royal Society of London. George Stokes was the youngest son of the Reverend Gabriel Stokes, rector of Skreen, County Sligo, Ireland, where he was born and brought up in an evangelical Protestant family. In accordance with the college statutes, he had to resign his fellowship when he married in 1857. He retained his place on the foundation until 1902, when, on the day before his 83rd birthday, he was elected to the mastership of Pembroke College. He did not hold this position for long, for he died at Cambridge on 1 February the following year and was buried in the Mill Road cemetery. In 1849, Stokes was appointed to the Lucasian professorship of mathematics at Cambridge; on 1 June 1899, the jubilee of this appointment was celebrated there in a ceremony attended by numerous delegates from European and American universities. Stokes, who was made a baronet in 1889, further served his university by representing it in parliament from 1887 to 1892 as one of the two members for the Cambridge University constituency. During a portion of this period he also was president of the Royal Society; since he was also Lucasian Professor at this time, Stokes was the first person to hold all three positions simultaneously (Newton held the same three, although not at the same time). Stokes's original work began about 1840, and from that date onwards the great extent of his output was only less remarkable than the brilliance of its quality. The Royal Society's catalogue of scientific papers gives the titles of over a hundred memoirs by him published down to 1883. Some of these are brief notes, others are short controversial or corrective statements.
His first published papers, which appeared in 1842 and 1843, were on the motion of incompressible fluids. His work on fluid motion and viscosity led to his calculating the terminal velocity for a sphere falling in a viscous medium. This became known as Stokes' law: he derived an expression for the frictional force exerted on spherical objects at very small Reynolds numbers. His work is the basis of the falling-sphere viscometer, in which the fluid is stationary in a vertical glass tube. A sphere of known size and density is allowed to descend through the liquid; if correctly selected, it reaches terminal velocity, which can be measured by the time it takes to pass two marks on the tube (electronic sensing can be used for opaque fluids). Knowing the terminal velocity and the size and density of the sphere, the viscosity of the fluid can be calculated from Stokes' law. A series of steel ball bearings of different diameter is normally used in the classic experiment to improve the accuracy of the calculation.
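The falling-sphere viscometer described above rests on Stokes' law: at terminal velocity the drag balances the sphere's weight minus buoyancy. A sketch under the stated creeping-flow (very small Reynolds number) assumption, with illustrative numbers:

```python
def viscosity_from_terminal_velocity(radius, rho_sphere, rho_fluid,
                                     v_terminal, g=9.81):
    """Dynamic viscosity (Pa*s) from Stokes' law for a sphere at terminal
    velocity in creeping flow:
    mu = 2 * r**2 * (rho_sphere - rho_fluid) * g / (9 * v_terminal)."""
    return 2.0 * radius**2 * (rho_sphere - rho_fluid) * g / (9.0 * v_terminal)

# Illustrative: a 1 mm-diameter steel ball (7800 kg/m^3) falling at a
# measured 0.1 m/s through an oil of density 900 kg/m^3 gives a
# viscosity of roughly 0.038 Pa*s.
mu = viscosity_from_terminal_velocity(0.5e-3, 7800.0, 900.0, 0.1)
```

In practice one would also check that the resulting Reynolds number is indeed small, since Stokes' law fails outside the creeping-flow regime.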
61.
Structural load
–
Structural loads or actions are forces, deformations, or accelerations applied to a structure or its components. Loads cause stresses, deformations, and displacements in structures; assessment of their effects is carried out by the methods of structural analysis. Excess load or overloading may cause structural failure, and hence such a possibility should be either considered in the design or strictly controlled. Mechanical structures, such as aircraft, satellites, rockets, space stations and ships, are subject to their own particular loads and actions. Engineers often evaluate structural loads based upon published regulations, contracts, or specifications, and accepted technical standards are used for testing and inspection. Dead loads are static forces that are constant for an extended time; they can be in tension or compression. The term can refer to a laboratory test method or to the normal usage of a material or structure. Live loads are usually variable or moving loads; these dynamic loads may involve considerations such as impact, momentum, vibration, slosh dynamics of fluids, etc. An impact load is one whose time of application on a material is less than one-third of the natural period of vibration of that material. Cyclic loads on a structure can lead to fatigue damage, cumulative damage, or failure; these loads can be repeated loadings on a structure or can be due to vibration. Structural loads are an important consideration in the design of buildings. Building codes require that structures be designed and built to safely resist all actions that they are likely to face during their service life; minimum loads or actions are specified in these building codes for types of structures, geographic locations, usage and materials of construction. Structural loads are split into categories by their originating cause. In terms of the actual load on a structure, there is no difference between dead and live loading, but the split occurs for use in safety calculations or for ease of analysis on complex models.
To meet the requirement that design strength be higher than maximum loads, building codes prescribe that, for structural design, loads are increased by load factors. These load factors are, roughly, a ratio of the theoretical design strength to the maximum load expected in service. The dead load includes loads that are relatively constant over time, including the weight of the structure itself. The roof is also a dead load; dead loads are also known as permanent or static loads. Building materials are not dead loads until constructed in permanent position. Standards such as IS 875-1987 give unit weights of building materials, parts, and components. Live loads, or imposed loads, are temporary and of short duration. These dynamic loads may involve considerations such as impact, momentum, vibration, slosh dynamics of fluids and material fatigue.
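The factored-load idea can be sketched in a few lines. This is an illustrative sketch only: the factors 1.2 and 1.6 resemble those in some strength-design codes but are hypothetical here, and real design must use the governing code's load combinations:

```python
def factored_design_load(dead_load, live_load, dead_factor=1.2, live_factor=1.6):
    """Combine dead and live loads with load factors (illustrative values only).
    Loads and result share whatever consistent unit is used, e.g. kN."""
    return dead_factor * dead_load + live_factor * live_load

# A member carrying a 10 kN dead load and a 5 kN live load:
design_load = factored_design_load(10.0, 5.0)   # 1.2*10 + 1.6*5 = 20 kN
```

The factored total, not the raw sum of 15 kN, is what the member's design strength must exceed.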
62.
Gravity
–
Gravity, or gravitation, is a natural phenomenon by which all things with mass are brought toward one another, including planets, stars and galaxies. Since energy and mass are equivalent, all forms of energy, including light, also cause gravitation and are under its influence. On Earth, gravity gives weight to physical objects and causes the ocean tides. Gravity has an infinite range, although its effects become increasingly weaker on farther objects. The most extreme example of this curvature of spacetime is a black hole, from which nothing can escape once past its event horizon. Stronger gravity results in gravitational time dilation, where time lapses more slowly at a lower gravitational potential. Gravity is the weakest of the four fundamental interactions of nature; the gravitational attraction is approximately 10^38 times weaker than the strong force, 10^36 times weaker than the electromagnetic force and 10^29 times weaker than the weak force. As a consequence, gravity has a negligible influence on the behavior of subatomic particles. On the other hand, gravity is the dominant interaction at the macroscopic scale; for this reason, in part, the pursuit of a theory of everything, the merging of the general theory of relativity and quantum mechanics into quantum gravity, has become an area of research. While the modern European thinkers are credited with development of gravitational theory, some of the earliest descriptions came from early mathematician-astronomers, such as Aryabhata, who identified the force of gravity to explain why objects are not thrown outward as the Earth rotates. Later, the works of Brahmagupta referred to the presence of this force and described it as an attractive force. Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries; this was a major departure from Aristotle's belief that heavier objects have a higher gravitational acceleration.
Galileo postulated air resistance as the reason that objects with less mass may fall more slowly in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity. In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Calculations by both John Couch Adams and Urbain Le Verrier predicted the position of the planet. A discrepancy in Mercury's orbit pointed out flaws in Newton's theory; the issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit. The simplest way to test the equivalence principle is to drop two objects of different masses or compositions in a vacuum and see whether they hit the ground at the same time. Such experiments demonstrate that all objects fall at the same rate when other forces are negligible.
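Newton's inverse-square law, and the mass-independence of free fall that equivalence-principle experiments test, can be sketched as follows; the function names are illustrative and the constants are standard approximate values:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """Newton's law of universal gravitation: F = G * m1 * m2 / r^2."""
    return G * m1 * m2 / r**2

def gravitational_acceleration(m_source, r):
    """Acceleration a = F/m of a test mass: the test mass cancels out, so all
    objects fall at the same rate when other forces are negligible."""
    return G * m_source / r**2

# Acceleration at the Earth's surface (M ~ 5.972e24 kg, R ~ 6.371e6 m):
g_surface = gravitational_acceleration(5.972e24, 6.371e6)   # roughly 9.8 m/s^2
```

Dividing `gravitational_force` by the test mass gives the same acceleration for a feather and a cannonball, which is exactly what the vacuum drop test demonstrates.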
63.
Electromagnetic force
–
Electromagnetism is a branch of physics involving the study of the electromagnetic force, a type of physical interaction that occurs between electrically charged particles. The electromagnetic force usually manifests itself in electromagnetic fields, such as electric fields, magnetic fields, and light. It is one of the four fundamental interactions in nature; the other three are the strong interaction, the weak interaction, and gravitation. The word electromagnetism is a compound form of two Greek terms, ἤλεκτρον, ēlektron, "amber", and μαγνῆτις λίθος, magnētis lithos, which means "magnesian stone". The electromagnetic force plays a major role in determining the internal properties of most objects encountered in daily life. Ordinary matter takes its form as a result of intermolecular forces between individual atoms and molecules in matter, and is a manifestation of the electromagnetic force. Electrons are bound by the electromagnetic force to atomic nuclei, and this determines their orbital shapes. The electromagnetic force governs the processes involved in chemistry, which arise from interactions between the electrons of neighboring atoms. There are numerous mathematical descriptions of the electromagnetic field. In classical electrodynamics, electric fields are described in terms of electric potential and electric current. Although electromagnetism is considered one of the four fundamental forces, at high energy the weak force and electromagnetic force are unified as a single electroweak force. In the history of the universe, during the quark epoch, the unified force broke into the two separate forces as the universe cooled. Originally, electricity and magnetism were considered to be two separate forces. Magnetic poles attract or repel one another in a manner similar to positive and negative charges, and always exist as pairs: every north pole is yoked to a south pole. An electric current inside a wire creates a corresponding magnetic field outside the wire. Its direction depends on the direction of the current in the wire.
A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or when a magnet is moved towards or away from it. While preparing for a lecture on 21 April 1820, Hans Christian Ørsted made a surprising observation. As he was setting up his materials, he noticed a compass needle deflect away from magnetic north when the electric current from the battery he was using was switched on and off. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon. However, three months later he began more intensive investigations.
64.
Stress (physics)
–
For example, when a solid vertical bar is supporting an overhead weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a closed container under pressure, each particle gets pushed against by all the surrounding particles. The container walls and the pressure-inducing surface push against them in reaction; these macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Stress inside a material may arise by various mechanisms, such as stress applied by external forces to the bulk material or to its surface. Any strain of a solid material generates an internal elastic stress, analogous to the reaction force of a spring, that tends to restore the material to its original undeformed state. In liquids and gases, only deformations that change the volume generate persistent elastic stress; however, if the deformation is gradually changing with time, even in fluids there will usually be some viscous stress opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress. Significant stress may exist even when deformation is negligible or non-existent, and stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress may also be imposed on a material without the application of net forces, for example by changes in temperature or chemical composition. Stress that exceeds certain strength limits of the material will result in permanent deformation or even change its crystal structure and chemical composition. In some branches of engineering, the term stress is occasionally used in a looser sense as a synonym of internal force. For example, in the analysis of trusses, it may refer to the total traction or compression force acting on a beam. Since ancient times humans have been consciously aware of stress inside materials.
Until the 17th century the understanding of stress was largely intuitive and empirical. With the mathematical tools that later became available, Augustin-Louis Cauchy was able to give the first rigorous and general mathematical model for stress in a homogeneous medium. Cauchy observed that the force across an imaginary surface is a linear function of its normal vector. The understanding of stress in liquids started with Newton, who provided a formula for friction forces in parallel laminar flow. Stress is defined as the force across a small boundary per unit area of that boundary; following the basic premises of continuum mechanics, stress is a macroscopic concept. In a fluid at rest the force is perpendicular to the surface; in a solid, or in a flow of viscous liquid, the force F may not be perpendicular to S. Hence the stress across a surface must be regarded as a vector quantity, not a scalar. Moreover, its direction and magnitude depend on the orientation of S. Thus the stress state of the material must be described by a tensor, called the stress tensor. With respect to any chosen coordinate system, the Cauchy stress tensor can be represented as a symmetric matrix of 3×3 real numbers.
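Cauchy's observation, that the force across a surface is a linear function of its normal, can be sketched directly: with the stress tensor stored as a 3×3 matrix, the traction vector across any surface is the matrix-vector product t = σ·n. A minimal dependency-free sketch, with illustrative values:

```python
def traction(stress, normal):
    """Traction vector t = sigma . n across a surface with unit normal n.
    `stress` is a 3x3 nested list; `normal` is a length-3 list."""
    return [sum(stress[i][j] * normal[j] for j in range(3)) for i in range(3)]

# A fluid at rest under pressure p has sigma = -p * I, so the traction is
# perpendicular to every surface, as the text describes:
p = 2.0
sigma_rest = [[-p, 0.0, 0.0], [0.0, -p, 0.0], [0.0, 0.0, -p]]
t = traction(sigma_rest, [0.0, 0.0, 1.0])   # points along -z, i.e. along the normal
```

Because the traction depends on the orientation of the surface, a single vector cannot describe the stress state; the full 3×3 tensor is needed.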
65.
Force
–
In physics, a force is any interaction that, when unopposed, will change the motion of an object. In other words, a force can cause an object with mass to change its velocity; force can also be described intuitively as a push or a pull. A force has both magnitude and direction, making it a vector quantity, and it is measured in the SI unit of newtons and represented by the symbol F. The original form of Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. In an extended body, each part usually applies forces on the adjacent parts; such internal mechanical stresses cause no acceleration of that body, as the forces balance one another. Pressure, the distribution of many small forces applied over an area of a body, is a simple type of stress that if unbalanced can cause the body to accelerate. Stress usually causes deformation of solid materials, or flow in fluids. Early thinkers held fundamental misconceptions about force; in part this was due to an incomplete understanding of the sometimes non-obvious force of friction. A fundamental error was the belief that a force is required to maintain motion. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Sir Isaac Newton formulated laws of motion that were not improved on for nearly three hundred years. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known; in order of decreasing strength, they are: strong, electromagnetic, weak, and gravitational. High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction. Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines.
The mechanical advantage given by a machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes, who was famous for formulating a treatment of buoyant forces inherent in fluids. Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different natural places therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place on the ground. He distinguished between the innate tendency of objects to find their natural place, which led to natural motion, and unnatural or forced motion.
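The trade described above, less force over a greater distance for the same work, can be made concrete. A minimal sketch for an ideal, frictionless machine; the function name is illustrative:

```python
def output_force(input_force, input_distance, output_distance):
    """Ideal machine: work in equals work out, F_in * d_in = F_out * d_out, so a
    small force over a long distance yields a large force over a short one."""
    return input_force * input_distance / output_distance

# A lever: pushing with 50 N through 2 m lifts a load through 0.5 m.
load = output_force(50.0, 2.0, 0.5)   # 200 N, but no extra work is gained
```

The mechanical advantage here is 4, yet the work on both sides is the same 100 J; the machine amplifies force, not energy.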
66.
Yield (engineering)
–
A yield strength or yield point is the material property defined as the stress at which a material begins to deform plastically. Prior to the yield point the material will deform elastically and will return to its original shape when the applied stress is removed. Once the yield point is passed, some fraction of the deformation will be permanent. In the three-dimensional space of the principal stresses, an infinite number of yield points together form a yield surface. The yield point determines the limits of performance for mechanical components; in structural engineering, yielding is a soft failure mode which does not normally cause catastrophic failure or ultimate failure unless it accelerates buckling. It is often difficult to define yielding precisely due to the wide variety of stress–strain curves exhibited by real materials. In addition, there are several possible ways to define yielding. True elastic limit: the lowest stress at which dislocations move. This definition is rarely used, since dislocations move at very low stresses. Elastic limit: beyond the elastic limit, permanent deformation will occur; the elastic limit is therefore the lowest stress at which permanent deformation can be measured. This requires a manual load-and-unload procedure, and the accuracy is critically dependent on the equipment used. For elastomers, such as rubber, the elastic limit is much larger than the proportionality limit. Also, precise strain measurements have shown that plastic strain begins at very low stresses. Yield point: the point in the stress–strain curve at which the curve levels off. Offset yield point: when a yield point is not easily defined based on the shape of the stress–strain curve, an offset yield point is arbitrarily defined. The value for this is commonly set at 0.1% or 0.2% plastic strain. The offset value is given as a subscript, e.g. Rp0.2 = 310 MPa. High-strength steel and aluminum alloys do not exhibit a yield point, so this offset yield point is used on these materials.
Upper and lower yield points: some metals, such as mild steel, reach an upper yield point before dropping rapidly to a lower yield point. The material response is linear up until the upper yield point, but the lower yield point is used in structural engineering as a conservative value. If a metal is only stressed to the upper yield point, and beyond, Lüders bands can develop. A yield criterion, often expressed as a yield surface, or yield locus, is a hypothesis concerning the limit of elasticity under any combination of stresses.
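The offset construction can be sketched numerically: shift a line of slope E by the offset strain and find where it crosses the measured stress–strain data. A minimal sketch with made-up data; the function name, the piecewise-linear interpolation, and the example curve are illustrative assumptions:

```python
def offset_yield_stress(strains, stresses, youngs_modulus, offset=0.002):
    """0.2%-offset yield point: intersection of piecewise-linear stress-strain
    data with the line sigma = E * (strain - offset). Strains must be increasing."""
    pts = list(zip(strains, stresses))
    for (e0, s0), (e1, s1) in zip(pts, pts[1:]):
        # signed distance of the data above the offset line at each segment end
        f0 = s0 - youngs_modulus * (e0 - offset)
        f1 = s1 - youngs_modulus * (e1 - offset)
        if f0 >= 0.0 and f1 < 0.0:   # the curve crosses the offset line here
            t = f0 / (f0 - f1)
            return s0 + t * (s1 - s0)
    return None   # no crossing: the data never fell below the offset line

# Made-up curve for a steel-like material (E = 200 GPa, stresses in MPa):
yield_stress = offset_yield_stress(
    [0.0, 0.001, 0.002, 0.004, 0.010],
    [0.0, 200.0, 300.0, 310.0, 320.0],
    200_000.0,
)
```

For this invented curve the intersection lands a little above 300 MPa, which would be reported as Rp0.2.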
67.
Slip (materials science)
–
In slip, an external force makes parts of the crystal lattice glide along each other, changing the material's geometry. Depending on the type of lattice, different slip systems are present in the material; more specifically, slip occurs on close-packed planes and in close-packed directions. The magnitude and direction of slip are represented by the Burgers vector. The picture on the right shows a schematic view of the slip mechanism. The slip planes and slip directions in a crystal have specific crystallographic forms; often, the slip direction is the direction in which atoms are most closely spaced. A slip plane and a slip direction together constitute a slip system. A critical resolved shear stress is required to initiate a slip. Slip is an important mode of deformation in crystals; for metals and technically used metallic alloys it is by far the most important deformation mechanism and subject to current research in materials science. Slip in face-centered cubic (fcc) crystals occurs along the close-packed plane; specifically, the slip plane is of type {111}, and the direction is of type <110>. In the diagram, the specific plane and direction are (111) and [110], respectively. Given the permutations of the slip plane types and direction types, fcc crystals have 12 slip systems. Slip in body-centered cubic (bcc) crystals occurs along the plane of shortest Burgers vector as well; however, unlike fcc, there are no truly close-packed planes in the bcc crystal structure. Thus, a slip system in bcc requires heat to activate. Some bcc materials can contain up to 48 slip systems: there are six slip planes of type {110}, each with two <111> directions, and there are twenty-four {123} and twelve {112} planes, each with one <111> direction. While the {123} and {112} planes are not exactly identical in activation energy to {110}, they are so close in energy that for all intents and purposes they can be treated as identical. In the diagram on the right the specific plane and direction are (110) and [111], respectively.
The magnitude of the Burgers vector in bcc is |b| = (a/2)|⟨111⟩| = (√3/2)a. Slip in hexagonal close-packed (hcp) metals is much more limited than in bcc and fcc crystal structures; usually, hcp crystal structures allow slip on the densely packed basal planes along the <1120> directions. The activation of other slip planes depends on various parameters, e.g. the c/a ratio. Since there are only two independent slip systems on the basal planes, for arbitrary plastic deformation additional slip or twin systems need to be activated. This typically requires a much higher resolved shear stress and results in the brittle behavior of hcp polycrystals. Cadmium, zinc, magnesium, titanium, and beryllium have an additional slip plane. This creates a total of three slip systems, depending on orientation.
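The critical resolved shear stress mentioned above is compared against the stress actually resolved onto a slip system, given by Schmid's law. A minimal sketch; the function name and example numbers are illustrative:

```python
import math

def resolved_shear_stress(applied_stress, phi_deg, lambda_deg):
    """Schmid's law: tau = sigma * cos(phi) * cos(lambda), where phi is the angle
    between the tensile axis and the slip-plane normal, and lambda is the angle
    between the tensile axis and the slip direction."""
    return (applied_stress
            * math.cos(math.radians(phi_deg))
            * math.cos(math.radians(lambda_deg)))

# The Schmid factor cos(phi)*cos(lambda) peaks at 0.5 when both angles are 45
# degrees, so slip initiates first on systems near that orientation:
tau = resolved_shear_stress(100.0, 45.0, 45.0)   # about half the applied stress
```

Slip begins on whichever system's resolved stress first reaches the critical value, which is why the orientation of a single crystal strongly affects its apparent yield stress.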
68.
Dislocation
–
In materials science, a dislocation is a crystallographic defect or irregularity within a crystal structure. The presence of dislocations strongly influences many of the properties of materials. Some types of dislocations can be visualized as being caused by the termination of a plane of atoms in the middle of a crystal. In such a case, the surrounding planes are not straight, but instead bend around the edge of the terminating plane. The two primary types of dislocations are edge dislocations and screw dislocations; mixed dislocations, which combine characteristics of both, are intermediate between these, and dislocations found in real materials are typically mixed. Mathematically, dislocations are a type of topological defect, sometimes called a soliton. Dislocations behave as stable particles: they can move around, and two dislocations of opposite orientation can cancel when brought together, but a single dislocation typically cannot disappear on its own. A crystalline material consists of a regular array of atoms, arranged into lattice planes. One approach is to begin by considering a 3D representation of a perfect crystal lattice. The viewer may then simplify the representation by visualising planes of atoms instead of the atoms themselves. An edge dislocation is a defect where an extra half-plane of atoms is introduced midway through the crystal, distorting nearby planes of atoms. When enough force is applied from one side of the crystal structure, this extra plane moves through the lattice, breaking and rejoining bonds as it goes. A simple schematic diagram of such atomic planes can be used to illustrate lattice defects such as dislocations. In an edge dislocation, the Burgers vector is perpendicular to the line direction; the stresses caused by an edge dislocation are complex due to its inherent asymmetry. A screw dislocation is much harder to visualize: imagine cutting a crystal along a plane and slipping one half across the other by a lattice vector, the halves fitting back together without leaving a defect.
This is similar to the Riemann surface of the complex logarithm. If the cut only goes part way through the crystal, and the material is then slipped, the boundary of the cut is a screw dislocation. It comprises a structure in which a helical path is traced around the linear defect by the atomic planes in the crystal lattice. Perhaps the closest analogy is a spiral-sliced ham. In pure screw dislocations, the Burgers vector is parallel to the line direction. Despite the difficulty in visualization, the stresses caused by a screw dislocation are less complex than those of an edge dislocation: the shear stress a distance r from the dislocation line has magnitude τ = μb/2πr, where μ is the shear modulus of the material and b is the magnitude of the Burgers vector. This equation suggests a long cylinder of stress radiating outward from the dislocation line and decreasing with distance. Please note, this simple model results in an infinite value for the core of the dislocation at r = 0, and so it is only valid for stresses outside of the core of the dislocation.
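The 1/r stress field just described can be sketched directly from τ = μb/2πr; the function name and the copper-like example values are illustrative:

```python
import math

def screw_shear_stress(shear_modulus, burgers_magnitude, r):
    """Shear stress a distance r from a screw dislocation line: tau = mu*b/(2*pi*r).
    Diverges as r -> 0, so it is only meaningful outside the dislocation core."""
    if r <= 0.0:
        raise ValueError("valid only outside the dislocation core (r > 0)")
    return shear_modulus * burgers_magnitude / (2.0 * math.pi * r)

# Copper-like values: mu ~ 48 GPa, b ~ 0.256 nm; stress 10 nm from the line:
tau = screw_shear_stress(48e9, 0.256e-9, 10e-9)
```

Doubling the distance halves the stress, which is the "long cylinder of stress radiating outward" the text describes.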
69.
Stress measures
–
The most commonly used measure of stress is the Cauchy stress tensor, often called simply the stress tensor or true stress. However, several other measures of stress can be defined. Some such stress measures that are used in continuum mechanics, particularly in the computational context, are: the Kirchhoff stress, the nominal stress, the first Piola-Kirchhoff stress or PK1 stress (the transpose of the nominal stress), the second Piola-Kirchhoff stress or PK2 stress, and the Biot stress. Consider the situation shown in the following figure; the following definitions use the notations shown in the figure. In the reference configuration Ω0, the outward normal to a surface element dΓ0 is N ≡ n0. In the deformed configuration Ω, the surface element changes to dΓ with outward normal n; note that this surface can either be a hypothetical cut inside the body or an actual surface. The quantity F is the deformation gradient tensor, and J is its determinant. The Cauchy stress is a measure of the force acting on an element of area in the deformed configuration. The quantity τ = J σ is called the Kirchhoff stress tensor and is used widely in numerical algorithms in metal plasticity. The PK1 stress, by contrast, relates the force in the deformed configuration to an oriented area vector in the reference configuration. The Biot stress is defined as the symmetric part of the tensor P^T ⋅ R, where R is the rotation tensor obtained from a polar decomposition of the deformation gradient. Therefore, the Biot stress tensor is defined as T = ½(P^T⋅R + R^T⋅P). The Biot stress is also called the Jaumann stress. The quantity T does not have any physical interpretation; note that N and P are not symmetric because F is not symmetric.
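For the special case of a pure stretch (a diagonal deformation gradient, with R = I), the relations between these measures reduce to scalar algebra, which allows a dependency-free sketch; the function name and example values are illustrative:

```python
def stress_measures_diagonal(cauchy_diag, stretches):
    """Given diagonal Cauchy stresses and principal stretches (l1, l2, l3), return
    the diagonals of the Kirchhoff stress tau = J*sigma, the first Piola-Kirchhoff
    stress P = J*sigma*F^{-T}, and the second Piola-Kirchhoff stress
    S = J*F^{-1}*sigma*F^{-T}. For diagonal F these are componentwise formulas."""
    l1, l2, l3 = stretches
    J = l1 * l2 * l3                                       # J = det F
    kirchhoff = [J * s for s in cauchy_diag]
    pk1 = [J * s / l for s, l in zip(cauchy_diag, stretches)]
    pk2 = [J * s / (l * l) for s, l in zip(cauchy_diag, stretches)]
    return kirchhoff, pk1, pk2

# Uniaxial true stress of 100 (arbitrary units) at a stretch of 2 along axis 1:
tau_d, p_d, s_d = stress_measures_diagonal([100.0, 0.0, 0.0], (2.0, 1.0, 1.0))
```

The three measures differ only by geometric factors built from F and J; all collapse to the Cauchy stress when the deformation is negligible (J = 1, stretches = 1).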
70.
Decimal
–
This article aims to be an accessible introduction; for the mathematical definition, see Decimal representation. The decimal numeral system has ten as its base, which, in decimal, is written 10, as is the base in every positional numeral system. It is the base most widely used by modern civilizations. Decimal fractions have terminating decimal representations, and other fractions have repeating decimal representations. Decimal notation is the writing of numbers in a base-ten numeral system. Examples are Brahmi numerals, Greek numerals, Hebrew numerals, and Roman numerals. Roman numerals have symbols for the decimal powers and secondary symbols for half these values. Brahmi numerals have symbols for the nine numbers 1–9, the nine decades 10–90, plus a symbol for 100; Chinese numerals have symbols for 1–9, and additional symbols for powers of ten, which in modern usage reach 10^72. Positional decimal systems include a zero and use symbols for the ten values to represent any number. Positional notation uses positions for each power of ten: units, tens, hundreds, thousands, etc. The position of each digit within a number denotes its multiplier; each position has a value ten times that of the position to its right. There were at least two independent sources of positional decimal systems in ancient civilization: the Chinese counting rod system and the Hindu–Arabic numeral system. Ten is the number which is the count of fingers and thumbs on both hands; the English word digit, as well as its translation in many languages, is also the anatomical term for fingers and toes. In English, decimal means tenth and decimate means reduce by a tenth. The symbols used in different areas are not identical; for instance, Western Arabic numerals differ from the forms used by other Arab cultures. A decimal fraction is a fraction the denominator of which is a power of ten, e.g. the decimal fractions 8/10, 1489/100, 24/100000, and 58900/10000 are expressed in decimal notation as 0.8, 14.89, 0.00024, and 5.8900 respectively.
In English-speaking, some Latin American and many Asian countries, a period or raised period is used as the decimal separator; in many other countries, particularly in Europe, a comma is used. The integer part, or integral part, of a number is the part to the left of the decimal separator. The part from the separator to the right is the fractional part. It is usual for a number that consists only of a fractional part to have a leading zero in its notation. Any rational number with a denominator whose only prime factors are 2 and/or 5 may be expressed as a decimal fraction and has a finite decimal expansion: 1/2 = 0.5, 1/20 = 0.05, 1/5 = 0.2, 1/50 = 0.02, 1/4 = 0.25, 1/40 = 0.025, 1/25 = 0.04, 1/8 = 0.125, 1/125 = 0.008, 1/10 = 0.1.
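The 2-and-5 criterion above is easy to check mechanically. A small sketch; the function name is illustrative:

```python
from math import gcd

def has_finite_decimal_expansion(numerator, denominator):
    """A fraction has a finite decimal expansion iff, after reduction, its
    denominator has no prime factors other than 2 and 5."""
    d = denominator // gcd(numerator, denominator)
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1
```

So 1/8 and 1/40 terminate (0.125 and 0.025), while 1/3 repeats; 3/6 terminates because it reduces to 1/2.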
71.
Percentage
–
In mathematics, a percentage is a number or ratio expressed as a fraction of 100. It is often denoted using the percent sign, %, or the abbreviation pct. A percentage is a dimensionless number. For example, 45% is equal to 45⁄100, 45:100, or 0.45. Percentages are often used to express a proportionate part of a total. If 50% of the number of students in the class are male, that means that half of the students are male; if there are 1000 students, then 500 of them are male. An increase of $0.15 on a price of $2.50 is an increase by a fraction of 0.15/2.50 = 0.06; expressed as a percentage, this is a 6% increase. While many percentage values are between 0 and 100, there is no mathematical restriction, and percentages may take on other values. For example, it is common to refer to 111% or −35%, especially for percent changes. In Ancient Rome, long before the existence of the decimal system, computations were often made in fractions which were multiples of 1⁄100. For example, Augustus levied a tax of 1⁄100 on goods sold at auction known as centesima rerum venalium; computation with these fractions was equivalent to computing percentages. Many arithmetic texts applied these methods to profit and loss and interest rates, and by the 17th century it was standard to quote interest rates in hundredths. The term per cent is derived from the Latin per centum. The sign for per cent evolved by gradual contraction of the Italian term per cento, meaning for a hundred. The per was often abbreviated as p. and eventually disappeared entirely; the cento was contracted to two circles separated by a horizontal line, from which the modern % symbol is derived. The percent value is computed by multiplying the numeric value of the ratio by 100. For example, to find 50 apples as a percentage of 1250 apples, first compute the ratio 50⁄1250 = 0.04, and then multiply by 100 to obtain 4%. The percent value can also be found by multiplying first, so in this example the 50 would be multiplied by 100 to give 5,000, and this result would be divided by 1250 to give 4%.
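The apple and price computations generalize to two small helpers; the function names are illustrative:

```python
def percent_of_total(part, whole):
    """Percent value: the ratio multiplied by 100."""
    return part / whole * 100.0

def apply_percent_change(value, percent):
    """Apply a percent increase (or decrease, for a negative percent)."""
    return value * (1.0 + percent / 100.0)

share = percent_of_total(50, 1250)          # the 4% from the apples example
new_price = apply_percent_change(2.50, 6)   # the 6% rise on $2.50
```

Negative arguments to `apply_percent_change` handle decreases, matching the note that percent changes such as −35% are perfectly legitimate.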
To calculate a percentage of a percentage, convert both percentages to fractions of 100, or to decimals, and multiply them. For example, 50% of 40% is: 50⁄100 × 40⁄100 = 0.50 × 0.40 = 0.20 = 20⁄100 = 20%. It is not correct to divide by 100 and use the percent sign at the same time. Whenever we talk about a percentage, it is important to specify what it is relative to, i.e. what is the total that corresponds to 100%. The following problem illustrates this point: in a certain college 60% of all students are female, and 10% of all students are computer science majors. If 5% of female students are computer science majors, what percentage of computer science majors are female? We are asked to compute the ratio of female computer science majors to all computer science majors.
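Working the college problem through makes the change of base explicit: both given percentages must first be converted to shares of the same whole (the student body) before the final ratio is taken. A sketch of the arithmetic:

```python
# Shares of the whole student body, from the problem statement:
female = 0.60            # 60% of all students are female
cs = 0.10                # 10% of all students are computer science majors
cs_given_female = 0.05   # 5% of female students are CS majors

# Female CS majors as a share of ALL students: 5% of 60% = 3%
female_cs = cs_given_female * female

# Of the CS majors, the fraction who are female: 3% / 10% = 30%
female_share_of_cs = female_cs / cs
```

The answer, 30%, is relative to the CS majors, while the 5% in the statement was relative to the female students; confusing the two bases is exactly the error the text warns against.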
72.
Parts-per notation
–
In science and engineering, the parts-per notation is a set of pseudo-units to describe small values of miscellaneous dimensionless quantities, e.g. mole fraction or mass fraction. Since these fractions are quantity-per-quantity measures, they are pure numbers with no associated units of measurement. Commonly used are ppm (parts-per-million), ppb (parts-per-billion), ppt (parts-per-trillion) and ppq (parts-per-quadrillion). Parts-per notation is often used in describing dilute solutions in chemistry; for instance, the unit "1 ppm" can be used for a mass fraction if a water-borne pollutant is present at one-millionth of a gram per gram of sample solution. When working with aqueous solutions, it is common to assume that the density of water is 1.00 g/mL, and therefore it is common to equate 1 kilogram of water with 1 L of water. Consequently, 1 ppm corresponds to 1 mg/L and 1 ppb corresponds to 1 μg/L. Similarly, parts-per notation is used also in physics and engineering to express the value of various proportional phenomena. For instance, a special metal alloy might expand 1.2 micrometers per meter of length for every degree Celsius. Likewise, the accuracy of distance measurements when using a laser rangefinder might be 1 millimeter per kilometer of distance; this could be expressed as "Accuracy = 1 ppm". Parts-per notations are all dimensionless quantities: in mathematical expressions, the units of measurement always cancel. In fractions like "2 nanometers per meter" the units cancel, so the quotients are pure-number coefficients with positive values less than 1. When parts-per notations, including the percent symbol, are used in regular prose, they are still pure-number dimensionless quantities; however, they take the literal "parts per" meaning of a comparative ratio. Parts-per notations may be expressed in terms of any unit of the same measure. In nuclear magnetic resonance spectroscopy, chemical shift is usually expressed in ppm. It represents the difference of a measured frequency in parts per million from the reference frequency. The reference frequency depends on the instrument's magnetic field and the element being measured.
It is usually expressed in MHz. Typical chemical shifts are rarely more than a few hundred Hz from the reference frequency, so chemical shifts are conveniently expressed in ppm. Parts-per notation gives a quantity that does not depend on the instrument's field strength. One part per hundred is generally represented by the percent symbol (%) and denotes one part per 100 parts, one part in 10², and a value of 1 × 10⁻². This is equivalent to approximately one drop of water diluted into 5 milliliters, or about fifteen minutes out of one day. One part per thousand should generally be spelled out in full; it may also be denoted by the millage symbol (‰). Note, however, that specific disciplines such as oceanography, as well as educational exercises, do use the symbol. One part per thousand denotes one part per 1000 parts, one part in 10³, and a value of 1 × 10⁻³. This is equivalent to one drop of water diluted into 50 milliliters, or about one and a half minutes out of one day. One part per ten thousand is denoted by the permyriad symbol (‱).
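The dilute-aqueous conversions mentioned above (1 ppm ≈ 1 mg/L when the density is about 1.00 g/mL) can be sketched as follows; the function names are illustrative:

```python
def mass_fraction_ppm(solute_mass, solution_mass):
    """Parts-per-million by mass: a pure number scaled by 10^6; any mass unit
    may be used as long as both arguments share it."""
    return 1e6 * solute_mass / solution_mass

def mg_per_litre_to_ppm(mg_per_litre, density_g_per_ml=1.0):
    """For dilute aqueous solutions, 1 L weighs about 1e6 mg, so 1 mg/L ~ 1 ppm.
    The density default is the usual 1.00 g/mL assumption for water."""
    solution_mass_mg = density_g_per_ml * 1e6
    return 1e6 * mg_per_litre / solution_mass_mg
```

Because the ppm value is a mass fraction, the shortcut breaks down for concentrated solutions or liquids whose density differs noticeably from 1.00 g/mL, which is what the density parameter allows for.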
73.
Tensor
–
In mathematics, tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Geometric vectors, often used in physics and engineering applications, and scalars themselves are also tensors. Given a coordinate basis or fixed frame of reference, a tensor can be represented as an organized multidimensional array of numerical values. The order of a tensor is the dimensionality of the array needed to represent it, or equivalently, the number of indices needed to label a component of that array. For example, a linear map is represented by a matrix in a basis, and therefore is a 2nd-order tensor. A vector is represented as a 1-dimensional array in a basis, and is a 1st-order tensor; scalars are single numbers and are thus 0th-order tensors. Because they express a relationship between vectors, tensors themselves must be independent of a particular choice of coordinate system. The precise form of the transformation law determines the type of the tensor. The tensor type is a pair of natural numbers (n, m), where n is the number of contravariant indices and m is the number of covariant indices; the total order of a tensor is the sum of these two numbers. The concept enabled an alternative formulation of the differential geometry of a manifold in the form of the Riemann curvature tensor. There are several approaches to defining tensors; although seemingly different, the approaches just describe the same geometric concept using different languages and at different levels of abstraction. For example, an operator is represented in a basis as a two-dimensional square n × n array. The numbers in the array are known as the scalar components of the tensor or simply its components. They are denoted by indices giving their position in the array, as subscripts and superscripts. For example, the components of an order 2 tensor T could be denoted Tij; whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below.
The total number of indices required to identify each component uniquely is equal to the dimension of the array. However, the term rank generally has another meaning in the context of matrices. Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis.
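The transformation law for the simplest non-trivial case, a linear map (a (1,1) tensor), says that its component matrix T becomes B⁻¹TB when the basis changes by the matrix B whose columns are the new basis vectors. A dependency-free sketch with illustrative matrices:

```python
def matmul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# A linear map in the standard basis of R^2:
T = [[1.0, 2.0], [3.0, 4.0]]

# Change of basis that swaps the two axes; a swap is its own inverse:
B = [[0.0, 1.0], [1.0, 0.0]]
B_inv = B

# Components of the same tensor in the new basis: T' = B^{-1} T B
T_new = matmul(B_inv, matmul(T, B))
```

The object itself is unchanged; only its components are relabelled, which is what independence from the choice of coordinate system means in practice.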
74.
Elastomer
–
An elastomer is a polymer with viscoelasticity and very weak intermolecular forces, generally having a low Young's modulus and a high failure strain compared with other materials. The term, which is derived from elastic polymer, is often used interchangeably with the term rubber. Each of the monomers which link to form the polymer is usually made of carbon, hydrogen, oxygen or silicon. Elastomers are amorphous polymers existing above their glass transition temperature, so that considerable segmental motion is possible; at ambient temperatures, rubbers are thus relatively soft and deformable. Their primary uses are for seals, adhesives and molded flexible parts. Application areas for the different types of rubber are manifold and cover segments as diverse as tires, shoe soles, and damping and insulating elements. The importance of rubbers can be judged from the fact that global revenues are forecast to rise to US$56 billion in 2020. Rubber-like solids with elastic properties are called elastomers. Polymer chains are held together in elastomers by the weakest of intermolecular forces, and these weak binding forces permit the polymers to be stretched. Natural rubber, neoprene rubber, Buna-S and Buna-N are elastomers. Elastomers are usually thermosets but may also be thermoplastic. The long polymer chains cross-link during curing, i.e. vulcanizing. The molecular structure of elastomers can be imagined as a "spaghetti and meatball" structure, with the meatballs signifying cross-links. The elasticity is derived from the ability of the long chains to reconfigure themselves to distribute an applied stress. The covalent cross-linkages ensure that the elastomer will return to its original configuration when the stress is removed. As a result of this extreme flexibility, elastomers can reversibly extend from 5-700%, depending on the specific material. Without the cross-linkages, or with short chains that cannot easily reconfigure, the applied stress would result in a permanent deformation. 
Temperature effects are also present in the demonstrated elasticity of a polymer, and it is also possible for a polymer to exhibit elasticity that is not due to covalent cross-links. Examples of elastomers include butyl rubber, halogenated butyl rubbers, styrene-butadiene rubber, nitrile rubber (also called Buna-N rubber), and hydrogenated nitrile rubbers such as Therban and Zetpol
75.
Soft tissue
–
In anatomy, soft tissue includes the tissues that connect, support, or surround other structures and organs of the body, as opposed to hard tissue such as bone. Soft tissue includes tendons, ligaments, fascia, skin, fibrous tissues, fat, and synovial membranes; it is sometimes defined by what it is not. Soft tissue has been defined as nonepithelial, extraskeletal mesenchyme exclusive of the reticuloendothelial system. The characteristic substances inside the extracellular matrix of this kind of tissue are collagen, elastin and ground substance. Normally the soft tissue is very hydrated because of the ground substance. The fibroblasts are the most common cells responsible for the production of soft tissues' fibers and ground substance; variations of fibroblasts, like chondroblasts, may also produce these substances. At small strains, elastin confers stiffness to the tissue and stores most of the strain energy. The collagen fibers are comparatively inextensible and are usually loose; with increasing tissue deformation the collagen is gradually stretched in the direction of deformation, and when taut, these fibers produce a rapid increase in tissue stiffness. The composite behavior is analogous to a nylon stocking, whose rubber band plays the role of elastin while the nylon plays the role of collagen. In soft tissues the collagen limits the deformation and protects the tissues from injury. Human soft tissue is highly deformable, and its mechanical properties vary significantly from one person to another. Impact testing results showed that the stiffness and the resistance of a test subject's tissue are correlated with the mass and velocity of the impact. Such properties may be useful for forensic investigation when contusions were induced. Soft tissues have the potential to undergo large deformations and still return to the initial configuration when unloaded, i.e. their stress-strain curve is nonlinear. The soft tissues are viscoelastic, incompressible and usually anisotropic. 
Some viscoelastic properties observable in soft tissues are relaxation, creep and hysteresis. In order to describe the mechanical response of soft tissues, several methods have been used. Even though soft tissues have viscoelastic properties, i.e. the stress depends on the strain rate, after some cycles of loading and unloading the material, the mechanical response becomes independent of the strain rate. By this method elasticity theory is used to model an inelastic material; Fung has called this model pseudoelastic to point out that the material is not truly elastic. In the physiological state soft tissues usually present residual stress that may be released when the tissue is excised; physiologists and histologists must be aware of this fact to avoid mistakes when analyzing excised tissues, since this retraction usually causes a visual artifact. In the pseudoelastic description, W is the strain-energy function per unit volume, which is the mechanical strain energy for a given temperature. The Fung model, simplified with the isotropic hypothesis and written in terms of the principal stretches λ1, λ2, λ3, is W = ½ [a (λ1² + λ2² + λ3² − 3) + b (e^{c (λ1² + λ2² + λ3² − 3)} − 1)], where a, b and c are constants
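An isotropic Fung-type strain-energy function of the form W = ½[aQ + b(e^{cQ} − 1)] with Q = λ1² + λ2² + λ3² − 3 can be sketched numerically as follows. The constants a, b and c below are arbitrary placeholder values, not fitted material parameters.

```python
import math

# Sketch of an isotropic Fung-type strain-energy function
#   W = 1/2 * (a*Q + b*(exp(c*Q) - 1)),  Q = l1^2 + l2^2 + l3^2 - 3,
# where l1, l2, l3 are the principal stretches. The constants a, b, c
# are illustrative placeholders, not real tissue parameters.

def fung_strain_energy(l1, l2, l3, a=1.0, b=1.0, c=1.0):
    q = l1 ** 2 + l2 ** 2 + l3 ** 2 - 3.0
    return 0.5 * (a * q + b * (math.exp(c * q) - 1.0))

# The undeformed state (all stretches equal to 1) stores no energy.
print(fung_strain_energy(1.0, 1.0, 1.0))   # 0.0

# Incompressible uniaxial stretch: l, 1/sqrt(l), 1/sqrt(l).
l = 1.2
print(fung_strain_energy(l, 1.0 / math.sqrt(l), 1.0 / math.sqrt(l)))
```

The exponential term reproduces the rapid stiffening at larger stretch that the text attributes to collagen recruitment.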
77.
Normal stress
–
For example, when a solid vertical bar is supporting a weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a closed container under pressure, each particle gets pushed against by all the surrounding particles, and the container walls and the pressure-inducing surface push against them in reaction. These macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Stress inside a material may arise by various mechanisms, such as reaction to external forces applied to the bulk material or to its surface. Any strain (deformation) of a solid material generates an internal elastic stress, analogous to the reaction force of a spring. In liquids and gases, only deformations that change the volume generate persistent elastic stress; however, if the deformation is gradually changing with time, even in fluids there will usually be some viscous stress opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress. Significant stress may exist even when deformation is negligible or non-existent, and stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress may also be imposed on a material without the application of net forces, for example by changes in temperature or chemical composition. Stress that exceeds certain strength limits of the material will result in permanent deformation or even change its crystal structure and chemical composition. In some branches of engineering, the term stress is occasionally used in a looser sense as a synonym of internal force; for example, in the analysis of trusses, it may refer to the total traction or compression force acting on a beam. Since ancient times humans have been consciously aware of stress inside materials. 
Until the 17th century the understanding of stress was largely intuitive and empirical. With the mathematical tools developed later, Augustin-Louis Cauchy was able to give the first rigorous and general mathematical model for stress in a homogeneous medium. Cauchy observed that the force across an imaginary surface is a linear function of its normal vector. The understanding of stress in liquids started with Newton, who provided a formula for friction forces in parallel laminar flow. Stress is defined as the force across a small boundary per unit area of that boundary; following the basic premises of continuum mechanics, stress is a macroscopic concept. In a fluid at rest the force is perpendicular to the surface; in a solid, or in a flow of viscous liquid, the force F may not be perpendicular to S. Hence the stress across a surface must be regarded as a vector quantity, not a scalar, and its direction and magnitude depend on the orientation of S. Thus the stress state of the material must be described by a tensor, called the stress tensor; with respect to any chosen coordinate system, the Cauchy stress tensor can be represented as a symmetric 3×3 matrix of real numbers
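Cauchy's observation that the traction across a surface is a linear function of the surface normal can be sketched directly: the stress tensor is a symmetric 3×3 matrix, and the traction vector is its product with the unit normal. The numerical values below are arbitrary illustrative stresses.

```python
# Sketch of Cauchy's observation: the traction (force per unit area)
# across a surface is linear in the surface normal n, t = sigma @ n,
# with sigma a symmetric 3x3 stress tensor. Values are arbitrary (Pa).

def traction(sigma, n):
    return [sum(sigma[i][j] * n[j] for j in range(3)) for i in range(3)]

sigma = [
    [100.0,  20.0,   0.0],
    [ 20.0,  50.0,   0.0],
    [  0.0,   0.0, -30.0],
]  # symmetric: sigma[i][j] == sigma[j][i]

# Traction on a surface whose outward normal is the x-axis:
print(traction(sigma, [1.0, 0.0, 0.0]))   # [100.0, 20.0, 0.0]

# Traction on a surface whose outward normal is the z-axis
# (purely compressive here):
print(traction(sigma, [0.0, 0.0, 1.0]))   # [0.0, 0.0, -30.0]
```

The diagonal entries act as normal stresses and the off-diagonal entries as shear stresses, matching the decomposition the next entry describes.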
78.
Shear stress
–
A shear stress, often denoted τ, is defined as the component of stress coplanar with a material cross section. Shear stress arises from the force vector component parallel to the cross section; normal stress, on the other hand, arises from the force vector component perpendicular to the material cross section on which it acts. The average shear stress is force per unit area: τ = F / A, where τ is the shear stress, F is the force applied, and A is the cross-sectional area of material with its plane parallel to the applied force vector. Pure shear stress is related to pure shear strain, denoted γ, by the equation τ = γ G, where G is the shear modulus of the isotropic material, G = E / (2 (1 + ν)); here E is Young's modulus and ν is Poisson's ratio. Beam shear is defined as the internal shear stress of a beam caused by the shear force applied to the beam; the beam shear formula is also known as the Zhuravskii shear stress formula, after Dmitrii Ivanovich Zhuravskii, who derived it in 1855. Shear stresses within a semi-monocoque structure may be calculated by idealizing the cross-section of the structure into a set of stringers, and dividing the shear flow by the thickness of a given portion of the structure yields the shear stress. Any real fluid moving along a solid boundary will incur a shear stress on that boundary. The no-slip condition dictates that the speed of the fluid at the boundary is zero, but at some height from the boundary the flow speed must equal that of the bulk fluid; the region between these two points is aptly named the boundary layer. For all Newtonian fluids in laminar flow the shear stress is proportional to the strain rate in the fluid, where the viscosity is the constant of proportionality; for non-Newtonian fluids, this is no longer the case, as for these fluids the viscosity is not constant. The shear stress is imparted onto the boundary as a result of this loss of velocity. Specifically, the wall shear stress is defined as τw ≡ τ(y=0) = μ ∂u/∂y |y=0. 
For an isotropic Newtonian flow the viscosity is a scalar, while for anisotropic Newtonian flows it can be a second-order tensor. Given a shear stress as a function of the flow velocity, the constant one finds in this case is the dynamic viscosity of the flow. Consider, by contrast, a flow in which the viscosity varies with the flow speed; such a non-Newtonian flow is still isotropic, so the viscosity is simply a scalar, for example μ(u) = 1/u. This relationship can be exploited to measure the shear stress
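The two elementary relations above, τ = F / A and τ = γ G, can be sketched with a few lines of code. The numerical inputs are arbitrary illustrative values in SI units, and the shear modulus used in the usage line is only a steel-like order of magnitude.

```python
# Sketch of the elementary shear-stress relations from the text:
# average shear stress tau = F / A, and pure shear tau = gamma * G.
# Input values are arbitrary illustrative numbers in SI units.

def average_shear_stress(force_n, area_m2):
    """tau = F / A, in pascals."""
    return force_n / area_m2

def shear_strain(tau_pa, shear_modulus_pa):
    """Invert tau = gamma * G to recover the shear strain gamma."""
    return tau_pa / shear_modulus_pa

tau = average_shear_stress(2000.0, 0.01)   # 2 kN over 100 cm^2
print(tau)                                 # about 200000.0 Pa
print(shear_strain(tau, 79.3e9))           # steel-like G ~ 79.3 GPa
```

The tiny resulting strain illustrates why metals in the elastic range deform almost imperceptibly under everyday shear loads.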
79.
Isotropic
–
Isotropy is uniformity in all orientations; the word is derived from the Greek isos (equal) and tropos (way). Precise definitions depend on the subject area. Exceptions, or inequalities, are frequently indicated by the prefix an-, hence anisotropy; anisotropy is also used to describe situations where properties vary systematically with direction. Isotropic radiation has the same intensity regardless of the direction of measurement, and an isotropic field exerts the same action regardless of how the test particle is oriented. Within mathematics, isotropy has a few different meanings. Isotropic manifolds: a manifold is isotropic if the geometry on the manifold is the same regardless of direction; a manifold can be homogeneous without being isotropic, but if it is inhomogeneous, it is necessarily anisotropic. Isotropic quadratic form: a quadratic form q is said to be isotropic if there is a non-zero vector v such that q(v) = 0; in complex geometry, a line through the origin in the direction of an isotropic vector is an isotropic line. Isotropic coordinates: isotropic coordinates are coordinates on an isotropic chart for Lorentzian manifolds. Isotropy group: an isotropy group is the group of isomorphisms from any object to itself in a groupoid. Isotropic position: a probability distribution over a vector space is in isotropic position if its covariance matrix is the identity. In physics, isotropy follows from invariance of the Hamiltonian, which in turn is guaranteed for a spherically symmetric potential. The kinetic theory of gases is also an example of isotropy: it is assumed that the molecules move in random directions and, as a consequence, there is an equal probability of a molecule moving in any direction. Thus when there are many molecules in the gas, with high probability there will be very similar numbers moving in one direction as in any other, demonstrating approximate isotropy. 
Fluid dynamics: fluid flow is isotropic if there is no directional preference; an example of anisotropy is flow with a background density gradient, as gravity works in only one direction. The apparent surface separating two differing isotropic fluids would be referred to as an isotrope. Thermal expansion: a solid is said to be isotropic if the expansion of the solid is equal in all directions when thermal energy is provided to it. Electromagnetics: an isotropic medium is one in which the permittivity, ε, and permeability, μ, of the medium are uniform in all directions. Optics: optical isotropy means having the same optical properties in all directions; the individual reflectance or transmittance of the domains is averaged if the macroscopic reflectance or transmittance is to be calculated. Cosmology: the Big Bang theory of the evolution of the observable universe assumes that space is isotropic; it also assumes that space is homogeneous, and these two assumptions together are known as the Cosmological Principle. As of 2006, observations suggest that the universe is approximately isotropic on scales much larger than galaxies and galaxy clusters. Here homogeneous means that the universe is the same everywhere, and isotropic implies that there is no preferred direction. In the study of mechanical properties of materials, isotropic means having identical values of a property in all directions
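The isotropic quadratic form definition above can be made concrete with a minimal sketch: over the reals, q(x, y) = x² − y² admits a non-zero vector on which it vanishes, so it is isotropic in this sense.

```python
# Sketch of an isotropic vector for a quadratic form: q(x, y) = x^2 - y^2
# is isotropic because a non-zero vector, e.g. v = (1, 1), gives q(v) = 0.

def q(x, y):
    return x * x - y * y

print(q(1, 1))   # 0 -> (1, 1) is a non-zero isotropic vector
print(q(1, 0))   # 1 -> (1, 0) is not isotropic
```

By contrast, a positive-definite form such as x² + y² has no non-zero isotropic vectors over the reals, which is why the notion only becomes interesting for indefinite forms or over other fields.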
80.
Rhombus
–
In Euclidean geometry, a rhombus is a simple (non-self-intersecting) quadrilateral whose four sides all have the same length. Another name is equilateral quadrilateral, since equilateral means that all of its sides are equal in length. Every rhombus is a parallelogram and a kite; a rhombus with right angles is a square. The word rhombus comes from Greek ῥόμβος, meaning something that spins, which derives from the verb ῥέμβω, meaning to turn round and round. The word was used both by Euclid and Archimedes, who used the term solid rhombus for two right circular cones sharing a common base; the surface we refer to as rhombus today is a cross section of this solid rhombus through the apex of each of the two cones. The rhombus is a special case of the superellipse, with exponent 1. Every rhombus has two diagonals connecting pairs of opposite vertices, and two pairs of parallel sides. Using congruent triangles, one can prove that the rhombus is symmetric across each of these diagonals. It follows that any rhombus has the following properties: opposite angles of a rhombus have equal measure, and the two diagonals of a rhombus are perpendicular, that is, a rhombus is an orthodiagonal quadrilateral. The first property implies that every rhombus is a parallelogram. In what follows the common side is denoted a and the diagonals p and q. Not every parallelogram is a rhombus, though any parallelogram with perpendicular diagonals is a rhombus. In general, any quadrilateral with perpendicular diagonals, one of which is a line of symmetry, is a kite; every rhombus is a kite, and any quadrilateral that is both a kite and a parallelogram is a rhombus. A rhombus is a tangential quadrilateral, that is, it has an inscribed circle that is tangent to all four sides. As for all parallelograms, the area K of a rhombus is the product of its base and height; the base is simply any side length a, so K = a ⋅ h. The inradius, denoted by r, can be expressed in terms of the diagonals p and q as r = pq / (2 √(p² + q²)). 
The dual polygon of a rhombus is a rectangle: a rhombus has all sides equal, while a rectangle has all angles equal; a rhombus has opposite angles equal, while a rectangle has opposite sides equal; a rhombus has an inscribed circle, while a rectangle has a circumcircle; a rhombus has an axis of symmetry through each pair of opposite vertex angles, while the diagonals of a rectangle are equal in length and the diagonals of a rhombus intersect at equal angles. The figure formed by joining the midpoints of the sides of a rhombus is a rectangle. A rhombohedron is a three-dimensional figure like a cube, except that its six faces are rhombi instead of squares. The rhombic dodecahedron is a polyhedron with 12 congruent rhombi as its faces
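The rhombus formulas above can be checked numerically. The sketch below uses illustrative diagonals p = 6 and q = 8: the area from the diagonals is pq/2, each side is the hypotenuse of a right triangle with legs p/2 and q/2 (since the diagonals bisect each other at right angles), and the inradius follows the formula given in the text.

```python
import math

# Sketch of the rhombus formulas from the text, with example
# diagonals p = 6 and q = 8 (arbitrary illustrative values).

def area_from_diagonals(p, q):
    """Area of a rhombus from its diagonals: K = p*q/2."""
    return p * q / 2.0

def side_from_diagonals(p, q):
    # The diagonals bisect each other at right angles, so each side
    # is the hypotenuse of a right triangle with legs p/2 and q/2.
    return math.hypot(p / 2.0, q / 2.0)

def inradius(p, q):
    """r = p*q / (2*sqrt(p^2 + q^2))."""
    return p * q / (2.0 * math.sqrt(p * p + q * q))

p, q = 6.0, 8.0
print(area_from_diagonals(p, q))   # 24.0
print(side_from_diagonals(p, q))   # 5.0
print(inradius(p, q))              # 2.4
```

As a cross-check, for a tangential polygon the area equals the inradius times the semiperimeter: here 2.4 × (4 × 5 / 2) = 24, agreeing with pq/2.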
81.
Gamma
–
Gamma is the third letter of the Greek alphabet. In the system of Greek numerals it has a value of 3. In Ancient Greek, the letter gamma represented a voiced velar stop /ɡ/; in Modern Greek, it represents either a voiced velar fricative or a voiced palatal fricative, and in the International Phonetic Alphabet and other modern Latin-alphabet based phonetic notations it represents the voiced velar fricative. The Greek letter gamma Γ was derived from the Phoenician letter for the /g/ phoneme, and as such is cognate with Hebrew gimel ג. In Archaic Greece, the shape of gamma was closer to a classical lambda. Letters that arose from the Greek gamma include the Etruscan C and the Latin C and G
82.
Epsilon
–
Epsilon is the fifth letter of the Greek alphabet, corresponding phonetically to a mid front unrounded vowel /e/. In the system of Greek numerals it has the value five. It was derived from the Phoenician letter He. Letters that arose from epsilon include the Roman E, Ë and Ɛ. In essence, the uppercase form of epsilon looks identical to Latin E. The lowercase version has two variants, both inherited from medieval Greek handwriting: one, the most common in typography and inherited from medieval minuscule, looks like a reversed number 3; the other, also known as lunate or uncial epsilon and inherited from earlier uncial writing, looks like a semicircle crossed by a horizontal bar. While in normal typography these are just alternative font variants, they may have different meanings as mathematical symbols, so computer systems offer distinct encodings for them: in Unicode, the character U+03F5 GREEK LUNATE EPSILON SYMBOL is provided specifically for the lunate form, and in TeX, \epsilon denotes the lunate form, while \varepsilon denotes the reversed-3 form. There is also a Latin epsilon or open e, which looks similar to the Greek lowercase epsilon; it is encoded in Unicode as U+025B and U+0190 and is used as an IPA phonetic symbol. The lunate or uncial epsilon has also provided inspiration for the euro sign. The lunate epsilon is not to be confused with the set membership symbol ∈. Mathematicians read the symbol ∈ as "element of", as in "1 is an element of the natural numbers" for 1 ∈ N. As late as 1960, ϵ itself was used for set membership; only gradually did a fully separate stylized symbol take the place of epsilon. In a related context, Peano also introduced the use of a backwards epsilon, ∍, for the phrase "such that". The letter Ε was taken over from the Phoenician letter He when Greeks first adopted alphabetic writing. In archaic Greek writing, its shape is often identical to that of the Phoenician letter. 
Archaic writing often preserves the Phoenician form, with a stem extending slightly below the lowest horizontal bar; in the classical era, through the influence of cursive writing styles, the shape was simplified to the current E glyph. Besides its classical Greek sound value, the short /e/ phoneme, the letter had other early uses: for instance, in early Attic before c. 500 BC it was used both for the long, open /ɛː/ and for the long close /eː/. In the former role, it was later replaced in the classic Greek alphabet by eta. Some dialects used yet other ways of distinguishing between various e-like sounds; in Corinth, the normal function of Ε to denote /e/ and /ɛː/ was taken by a glyph resembling a pointed B, while Ε was used only for the long close /eː/
83.
SI unit
–
The International System of Units (SI) is the modern form of the metric system, and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units; the system also establishes a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system was published in 1960 as the result of an initiative that began in 1948. It is based on the metre-kilogram-second system of units rather than any variant of the centimetre-gram-second system; the motivation for the development of the SI was the diversity of units that had sprung up within the CGS systems. The International System of Units has been adopted by most developed countries; however, the adoption has not been universal in all English-speaking countries. The metric system was first implemented during the French Revolution, with just the metre and kilogram as standards of length and mass. In the 1830s Carl Friedrich Gauss laid the foundations for a coherent system based on length, mass, and time. In the 1860s a group working under the auspices of the British Association for the Advancement of Science formulated the requirement for a coherent system of units with base units and derived units. Meanwhile, in 1875, the Treaty of the Metre passed responsibility for verification of the kilogram and the metre to international bodies; in 1921, the Treaty was extended to include all physical quantities, including electrical units originally defined in 1893. The units associated with these quantities were the metre, kilogram, second, ampere, kelvin and candela; in 1971, a seventh base quantity, amount of substance, represented by the mole, was added to the definition of SI. On 11 July 1792, the commission proposed the names metre, are, litre and grave for the units of length, area, capacity, and mass. 
The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes such as centi for a hundredth. On 10 December 1799, the law by which the metric system was to be definitively adopted in France was passed. Prior to Gauss's work, the strength of the earth's magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a suspended magnet of known mass by the earth's magnetic field with the torque induced on an equivalent system under gravity; the resultant calculations enabled him to assign dimensions to the magnetic field based on mass, length and time. A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention. Initially the convention only covered standards for the metre and the kilogram; one of each was selected at random to become the International Prototype Metre and International Prototype Kilogram that replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the remaining prototypes to serve as the national prototype for that country. Initially the prime purpose of the convention's organisation was the periodic recalibration of national prototype metres. The official language of the Metre Convention is French, and the authoritative version of all official documents published by or on behalf of the CGPM is the French-language version
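The decimal prefix system mentioned above amounts to scaling by powers of ten, which can be sketched in a few lines. Only a handful of the twenty prefixes are listed here, and the conversion is a generic illustration rather than any official algorithm.

```python
# Sketch of SI decimal prefixes as powers of ten; only a handful of
# the twenty prefixes are listed for illustration.

SI_PREFIXES = {
    "kilo": 1e3, "hecto": 1e2, "deca": 1e1,
    "deci": 1e-1, "centi": 1e-2, "milli": 1e-3,
}

def convert(value, from_prefix, to_prefix):
    """Re-express a value from one prefixed unit into another."""
    return value * SI_PREFIXES[from_prefix] / SI_PREFIXES[to_prefix]

print(convert(2.5, "kilo", "milli"))   # 2.5 km expressed in mm
print(convert(150.0, "centi", "deci"))  # 150 cm expressed in dm
```

Because every prefix is a pure power of ten, conversions reduce to one multiplication and one division, which is the coherence property the prefix system was designed to provide.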