1.
Continuum mechanics
–
Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century, and research in the area continues to this day. Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Continuum mechanics deals with physical properties of solids and fluids which are independent of any particular coordinate system in which they are observed. These physical properties are represented by tensors, mathematical objects that have the required property of being independent of coordinate system; the tensors can be expressed in any chosen coordinate system for computational convenience.
Materials such as solids, liquids, and gases are composed of molecules separated by space, and on a microscopic scale materials have cracks and discontinuities. A continuum, by contrast, is a body that can be continually sub-divided into infinitesimal elements whose properties are those of the bulk material. More specifically, the continuum hypothesis hinges on the concept of a representative elementary volume, over which the properties of the microstructure can be averaged. This concept provides a link between the experimentalist's and the theoretician's viewpoints on constitutive equations, as well as a way of spatial and statistical averaging of the microstructure; the latter provides a basis for stochastic finite elements. The levels of the statistical volume element (SVE) and the representative volume element (RVE) link continuum mechanics to statistical mechanics; the RVE may be assessed only in a limited way via experimental testing, namely when the constitutive response becomes spatially homogeneous. Specifically for fluids, the Knudsen number is used to assess to what extent the approximation of continuity can be made. As a familiar illustration, consider car traffic on a highway, with just one lane for simplicity.
Somewhat surprisingly, and in a tribute to its effectiveness, continuum mechanics models the movement of cars via a partial differential equation for the density of cars. The familiarity of this situation helps us understand a little of the continuum-discrete dichotomy underlying continuum modelling in general. To start modelling, let x measure distance along the highway, let t be time, and let ρ(x, t) be the density of cars on the highway; cars do not appear and disappear. Consider any group of cars, from the car at the back of the group, located at x = a, to the car at the front, located at x = b. The total number of cars in this group is N = ∫_a^b ρ dx, and since cars are conserved, dN/dt = 0. The only way an integral can be zero for all intervals is if the integrand is zero for all x; consequently, conservation yields the first-order nonlinear conservation PDE

∂ρ/∂t + ∂(ρu)/∂x = 0

for all positions on the highway, where u is the velocity of the cars. This conservation PDE applies not only to car traffic but also to fluids, solids, crowds, animals, plants, bushfires, and financial traders. The PDE is one equation with two unknowns, so another equation is needed to form a well-posed problem.
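The conservation law above can be sketched numerically. The code below is a minimal illustration, assuming (purely for simplicity) a constant car speed v so that the flux is ρv; real traffic models instead close the system with a density-dependent speed.

```python
# Minimal sketch: conservation of cars on a ring road (periodic domain),
# solving d(rho)/dt + d(rho*v)/dx = 0 with a first-order upwind scheme.
# The constant speed v is an illustrative closure, not a traffic model.
def step(rho, v, dx, dt):
    """Advance the density one upwind time step on a periodic domain."""
    n = len(rho)
    flux = [v * r for r in rho]
    # For v > 0 information travels to the right, so difference backwards.
    return [rho[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

rho = [0.1] * 20
rho[5] = 0.5                 # a local bunch of cars
v, dx, dt = 1.0, 1.0, 0.5    # CFL number v*dt/dx = 0.5 <= 1, so stable
for _ in range(10):
    rho = step(rho, v, dx, dt)

# The total number of cars N = sum(rho) * dx is conserved by construction:
# the flux differences telescope around the ring.
N = sum(rho) * dx
```

The scheme conserves N exactly (up to rounding) because each cell's outflow is its neighbor's inflow, which is the discrete analogue of dN/dt = 0.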
Figure 1. Configuration of a continuum body
2.
Conservation of energy
–
In physics, the law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be conserved over time. Energy can neither be created nor destroyed; rather, it transforms from one form to another. For instance, chemical energy can be converted to kinetic energy in the explosion of a stick of dynamite. A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist; that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. Ancient philosophers as far back as Thales of Miletus (c. 550 BCE) had inklings of the conservation of some underlying substance of which everything is made. However, there is no reason to identify this with what we know today as mass-energy. Empedocles wrote that in his universal system, composed of four roots, nothing comes to be or perishes; instead, these elements suffer continual rearrangement. In 1605, Simon Stevinus was able to solve a number of problems in statics based on the principle that perpetual motion was impossible. Essentially, he pointed out that the height to which a moving body rises is equal to the height from which it falls, and used this observation to infer the idea of inertia. The remarkable aspect of this observation is that the height to which a moving body ascends does not depend on the shape of the surface on which it moves. In 1669, Christiaan Huygens published his laws of collision; among the quantities he listed as being invariant before and after the collision of bodies were both the sum of their linear momenta and the sum of their kinetic energies. However, the difference between elastic and inelastic collisions was not understood at the time, and this led to a dispute among later researchers as to which of these conserved quantities was the more fundamental.
In his Horologium Oscillatorium, Huygens gave a much clearer statement regarding the height of ascent of a moving body. His study of the dynamics of pendulum motion was based on a single principle: that the center of gravity of heavy objects cannot lift itself. The fact that kinetic energy is scalar, unlike linear momentum which is a vector, and hence easier to work with, did not escape the attention of Gottfried Wilhelm Leibniz. It was Leibniz during 1676–1689 who first attempted a mathematical formulation of the kind of energy which is connected with motion. Using Huygens's work on collision, Leibniz noticed that in many mechanical systems of several masses m_i, each with velocity v_i, the quantity Σ m_i v_i² was conserved so long as the masses did not interact. He called this quantity the vis viva or living force of the system. The principle represents an accurate statement of the approximate conservation of kinetic energy in situations where there is no friction. Many physicists at that time, such as Newton, held that the conservation of momentum, which holds even in systems with friction, was the more fundamental; it was later shown that both quantities are conserved simultaneously, given the proper conditions such as an elastic collision. In 1687, Isaac Newton published his Principia, which was organized around the concept of force and momentum.
Gottfried Leibniz
Gaspard-Gustave Coriolis
James Prescott Joule
3.
Momentum
–
In classical mechanics, linear momentum, translational momentum, or simply momentum is the product of the mass and velocity of an object, quantified in kilogram-meters per second. It is dimensionally equivalent to impulse, the product of force and time. Newton's second law of motion states that the change in linear momentum of a body is equal to the net impulse acting on it. For example, a heavy truck moving rapidly has a large momentum; if the truck were lighter, or moving more slowly, it would have less momentum. Linear momentum is also a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum does not change. In classical mechanics, conservation of momentum is implied by Newton's laws. It also holds in special relativity and, with suitably modified definitions, a linear momentum conservation law holds in electrodynamics, quantum mechanics, and quantum field theory. It is ultimately an expression of one of the symmetries of space and time. Linear momentum depends on the frame of reference: observers in different frames would find different values of the linear momentum of a system, but each would observe that its value does not change with time. Momentum has a direction as well as a magnitude; quantities that have both a magnitude and a direction are known as vector quantities. Because momentum has a direction, it can be used to predict the resulting direction of objects after they collide, as well as their speeds. Below, the properties of momentum are described in one dimension; the vector equations are almost identical to the scalar equations. The momentum of a particle is traditionally represented by the letter p. It is the product of two quantities, the mass and the velocity: p = m v. The units of momentum are the product of the units of mass and velocity. In SI units, if the mass is in kilograms and the velocity in meters per second, then the momentum is in kilogram meters/second; in cgs units, if the mass is in grams and the velocity in centimeters per second, then the momentum is in gram centimeters/second.
Being a vector, momentum has magnitude and direction. For example, a 1 kg model airplane, traveling due north at 1 m/s in straight and level flight, has a momentum of 1 kg⋅m/s due north measured from the ground. The momentum of a system of particles is the sum of their momenta: if two particles have masses m1 and m2, and velocities v1 and v2, the total momentum is p = p1 + p2 = m1 v1 + m2 v2. If all the particles are moving, the center of mass will generally be moving as well; if the center of mass is moving at velocity vcm, the momentum is p = m vcm, where m is the total mass. This is known as Euler's first law. If a force F is applied to a particle for a time interval Δt, the momentum of the particle changes by an amount Δp = F Δt.
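The relations above (p = mv, the total momentum of a system, and the impulse relation Δp = FΔt) can be sketched in one dimension; the numbers below are illustrative, not from the text.

```python
# Sketch of the one-dimensional momentum relations: p = m*v for a particle,
# total momentum as the sum over particles, and the impulse relation
# dp = F * dt. Masses in kg, velocities in m/s, forces in N.
def momentum(m, v):
    return m * v

# Two-particle system moving in opposite directions along one axis.
p_total = momentum(2.0, 3.0) + momentum(1.0, -4.0)   # 6.0 - 4.0 kg*m/s

# Impulse: a constant 5 N force applied for 0.4 s changes momentum by F*dt.
dp = 5.0 * 0.4
```

Note that signs carry the direction information in one dimension, which is why the two contributions partially cancel.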
In a game of pool, momentum is conserved; that is, if one ball stops dead after the collision, the other ball will continue away with all the momentum. If the moving ball continues or is deflected then both balls will carry a portion of the momentum from the collision.
4.
Solid mechanics
–
Solid mechanics is fundamental for civil, aerospace, nuclear, and mechanical engineering, for geology, and for many branches of physics such as materials science. It has specific applications in other areas, such as understanding the anatomy of living beings. One of the most common practical applications of solid mechanics is the Euler–Bernoulli beam equation. Solid mechanics extensively uses tensors to describe stresses, strains, and the relationship between them. As shown in the following table, solid mechanics inhabits a central place within continuum mechanics; the field of rheology presents an overlap between solid and fluid mechanics. A material has a rest shape, and its shape departs from the rest shape due to stress. The amount of departure from the rest shape is called deformation, and the proportion of deformation to original size is called strain. If the applied stress is sufficiently low, almost all solid materials deform in direct proportion to the stress; this region of deformation is known as the linearly elastic region. It is most common for analysts in solid mechanics to use linear material models; however, real materials often exhibit non-linear behavior, and as new materials are used and old ones are pushed to their limits, non-linear material models are becoming more common. There are four basic models that describe how a solid responds to an applied stress. Elastically: when an applied stress is removed, the material returns to its undeformed state; linearly elastic materials are those that deform in proportion to the applied load. Viscoelastically: materials that behave elastically but also exhibit damping; this implies that the material response has time-dependence. Plastically: materials that behave elastically generally do so when the applied stress is less than a yield value; when the stress is greater than the yield stress, the material behaves plastically, and deformation that occurs after yield is permanent. Thermoelastically: there is coupling of mechanical with thermal responses; in general, thermoelasticity is concerned with elastic solids under conditions that are neither isothermal nor adiabatic. The simplest theory involves Fourier's law of heat conduction, as opposed to advanced theories with physically more realistic models.
A theorem that includes the method of least work as a special case was established in 1874; in 1922, Timoshenko corrected the Euler–Bernoulli beam equation; and in 1936, Hardy Cross published the moment distribution method, an important innovation in the design of continuous frames.
5.
Stress (mechanics)
–
For example, when a solid vertical bar is supporting a weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a container under pressure, each particle gets pushed against by all the surrounding particles, and the container walls and the pressure-inducing surface push against them in reaction. These macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Strain inside a material may arise by various mechanisms, such as stress applied by external forces to the material or to its surface. Any strain of a material generates an internal elastic stress, analogous to the reaction force of a spring. In liquids and gases, only deformations that change the volume generate persistent elastic stress; however, if the deformation is gradually changing with time, even in fluids there will usually be some viscous stress opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress. Significant stress may exist even when deformation is negligible or non-existent, and stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress may also be imposed on a material without the application of net forces, for example by changes in temperature or chemical composition. Stress that exceeds certain strength limits of the material will result in permanent deformation or even change its crystal structure and chemical composition. In some branches of engineering, the term stress is occasionally used in a looser sense as a synonym of internal force; for example, in the analysis of trusses, it may refer to the total traction or compression force acting on a beam. Since ancient times humans have been consciously aware of stress inside materials.
Until the 17th century, the understanding of stress was largely intuitive and empirical; once the necessary mathematical tools had been developed, Augustin-Louis Cauchy was able to give the first rigorous and general mathematical model for stress in a homogeneous medium. Cauchy observed that the force across an imaginary surface is a linear function of its normal vector. The understanding of stress in liquids started with Newton, who provided a formula for friction forces in parallel laminar flow. Following the basic premises of continuum mechanics, stress is a macroscopic concept: it is defined as the force across a small boundary per unit area of that boundary. In a fluid at rest the force is perpendicular to the surface; in a solid, or in a flow of viscous liquid, the force F may not be perpendicular to S. Hence the stress across a surface must be regarded as a vector quantity, not a scalar; moreover, its direction and magnitude depend on the orientation of S. Thus the stress state of the material must be described by a tensor, called the stress tensor. With respect to any chosen coordinate system, the Cauchy stress tensor can be represented as a symmetric 3×3 matrix of real numbers.
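Cauchy's observation, that the traction across a surface is a linear function of the unit normal, can be sketched directly: the traction vector is t_i = σ_ij n_j. The stress values below are illustrative, not real material data.

```python
# Sketch: traction t across a surface with unit normal n, computed from a
# symmetric 3x3 Cauchy stress tensor as t_i = sigma_ij * n_j.
# The numbers are illustrative stresses (say, in MPa).
sigma = [[10.0,  2.0,  0.0],
         [ 2.0,  5.0,  1.0],
         [ 0.0,  1.0, -3.0]]

def traction(sigma, n):
    """Traction vector: the stress tensor acting linearly on the normal n."""
    return [sum(sigma[i][j] * n[j] for j in range(3)) for i in range(3)]

t = traction(sigma, [1.0, 0.0, 0.0])   # surface whose normal points along x
```

For a normal along the x-axis, the traction is simply the first column of the stress matrix, showing that the columns of σ are the tractions on the coordinate planes.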
Built-in strain inside the plastic protractor, developed by the stresses of forming the protractor's shape, is revealed by the effect of polarized light.
Roman-era bridge in Switzerland
Inca bridge on the Apurimac River
Glass vase with the craquelé effect. The cracks are the result of brief but intense stress created when the semi-molten piece is briefly dipped in water.
6.
Deformation (mechanics)
–
Deformation in continuum mechanics is the transformation of a body from a reference configuration to a current configuration. A configuration is a set containing the positions of all particles of the body. A deformation may be caused by external loads, body forces, or changes in temperature, moisture content, chemical reactions, etc. Strain is a description of deformation in terms of displacement of particles in the body that excludes rigid-body motions. In a continuous body, a deformation field results from a stress field induced by applied forces or from changes in the temperature field inside the body. The relation between stresses and induced strains is expressed by constitutive equations, e.g. Hooke's law for linear elastic materials. Deformations which are recovered after the stress field has been removed are called elastic deformations; in this case, the continuum completely recovers its original configuration. On the other hand, irreversible deformations remain even after stresses have been removed. Another type of deformation is viscous deformation, which is the irreversible part of viscoelastic deformation. In the case of elastic deformations, the response function linking strain to stress is the compliance tensor of the material. Strain is a measure of deformation representing the displacement between particles in the body relative to a reference length. A general deformation of a body can be expressed in the form x = F(X), where X is the position of material points in the body. Such a measure does not distinguish between rigid-body motions and changes in shape of the body, and a deformation has units of length. We could, for example, define strain to be ε ≐ ∂(x − X)/∂X = F′ − I; hence strains are dimensionless and are usually expressed as a decimal fraction, a percentage, or in parts-per notation. Strains measure how much a given deformation differs locally from a rigid-body deformation. A strain is in general a tensor quantity.
Physical insight into strains can be gained by observing that a given strain can be decomposed into normal and shear components: normal strain corresponds to elongation, shortening, or volume changes, while shear strain corresponds to angular distortion. It is sufficient to know the normal and shear components of strain on a set of three mutually perpendicular directions. Depending on the amount of deformation, different theories apply. Finite strain theory deals with the case in which the undeformed and deformed configurations of the continuum are significantly different; this is commonly the case with elastomers, plastically-deforming materials, other fluids, and biological soft tissue. Infinitesimal strain theory, also called small strain theory, small deformation theory, or small displacement theory, covers the case in which the undeformed and deformed configurations of the body can be assumed identical. Large-displacement or large-rotation theory assumes small strains but large rotations. In each of these theories the strain is defined differently.
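The claim that strain excludes rigid-body motion can be sketched by splitting a small displacement gradient H into its symmetric part (the strain) and its antisymmetric part (the rotation); a pure small rotation then produces zero strain. The numbers below are illustrative.

```python
# Sketch: decompose a 2x2 displacement gradient H = du/dX into
# strain eps = (H + H^T)/2 and rotation omega = (H - H^T)/2.
# An antisymmetric H (a small rigid rotation) yields eps = 0,
# illustrating that strain excludes rigid-body motion.
H = [[0.00,  0.01],
     [-0.01, 0.00]]   # antisymmetric: small rigid rotation only

eps   = [[(H[i][j] + H[j][i]) / 2 for j in range(2)] for i in range(2)]
omega = [[(H[i][j] - H[j][i]) / 2 for j in range(2)] for i in range(2)]
```

All the deformation information survives in `omega`, while `eps` vanishes; with a symmetric H the roles would be reversed.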
The deformation of a thin straight rod into a closed loop. The length of the rod remains almost unchanged during the deformation, which indicates that the strain is small. In this particular case of bending, displacements associated with rigid translations and rotations of material elements in the rod are much greater than displacements associated with straining.
7.
Finite strain theory
–
In this case, the undeformed and deformed configurations of the continuum are significantly different and a clear distinction has to be made between them; this is commonly the case with elastomers, plastically-deforming materials, and other fluids. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration κ0 to a current or deformed configuration κt. A change in the configuration of a continuum body can be described by a displacement field, which is a field of all displacement vectors for all particles in the body. Relative displacement between particles occurs if and only if deformation has occurred; if displacement occurs without deformation, then it is deemed a rigid-body displacement. The displacement of particles indexed by variable i may be expressed as follows: the vector joining the positions of a particle in the undeformed configuration P_i and the deformed configuration p_i is called the displacement vector. The partial derivative of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor ∇_X u; here the α_Ji are the direction cosines between the material and spatial coordinate systems, with unit vectors E_J and e_i respectively. Due to the assumption of continuity of χ, F has the inverse H = F⁻¹; by the implicit function theorem, the Jacobian determinant J must be nonsingular, i.e. J = det F ≠ 0. Consider a particle or material point P with position vector X = X_I I_I in the undeformed configuration. After a displacement of the body, the new position of the particle, indicated by p in the new configuration, is given by the vector position x = x_i e_i.
The coordinate systems for the undeformed and deformed configurations can be superimposed for convenience. Consider now a material point Q neighboring P, with position vector X + ΔX = (X_I + ΔX_I) I_I. In the deformed configuration this particle has a new position q given by the vector x + Δx. Assuming that the line segments ΔX and Δx joining the particles P and Q in the undeformed and deformed configurations, respectively, are small, we can express them as dX and dx. A geometrically consistent definition of the time derivative of F requires an excursion into differential geometry. The time derivative of F is Ḟ = ∂F/∂t = ∂/∂t(∂x/∂X) = ∂/∂X(∂x/∂t) = ∂V/∂X, where V is the velocity. The derivative on the right-hand side represents a material velocity gradient. It is common to convert that into a spatial gradient, i.e. Ḟ = ∂V/∂X = (∂V/∂x)·(∂x/∂X) = l·F, where l = ∂V/∂x is the spatial velocity gradient. If the spatial velocity gradient is constant, the equation can be solved exactly to give F = e^{lt}, assuming F = 1 at t = 0.
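The exponential solution of Ḟ = l·F can be checked numerically in the simplest setting: a one-dimensional (scalar) constant velocity gradient l, where F(t) = e^{lt}. The value of l below is illustrative.

```python
import math

# Sketch: for a constant scalar velocity gradient l (the 1-D analogue of
# the spatial velocity gradient), the deformation gradient evolves as
# F(t) = exp(l*t) with F(0) = 1, which satisfies dF/dt = l * F.
l = 0.3                      # illustrative constant velocity gradient, 1/s
def F(t):
    return math.exp(l * t)

t = 2.0
# Central finite difference approximation of dF/dt at time t.
dFdt = (F(t + 1e-6) - F(t - 1e-6)) / 2e-6
```

The numerical derivative agrees with l·F(t) to high accuracy, confirming that the exponential is the exact solution when l does not vary in time; the matrix case replaces `exp` with a matrix exponential.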
Figure 1. Motion of a continuum body.
8.
Infinitesimal strain theory
–
With this assumption, the equations of continuum mechanics are considerably simplified. This approach may also be called small deformation theory or small displacement theory; it is contrasted with the finite strain theory, where the opposite assumption is made. In such a linearization, the non-linear or second-order terms of the strain tensor are neglected. Therefore, the material displacement gradient components and the spatial displacement gradient components are approximately equal. From the geometry of Figure 1 we have

ab̄ = √((dx + (∂u_x/∂x)dx)² + ((∂u_y/∂x)dx)²) = dx √(1 + 2(∂u_x/∂x) + (∂u_x/∂x)² + (∂u_y/∂x)²).

For very small displacement gradients, i.e. ‖∇u‖ ≪ 1, this gives ab̄ ≈ dx + (∂u_x/∂x)dx. Therefore, the diagonal elements of the infinitesimal strain tensor are the normal strains in the coordinate directions. The results of such operations are called strain invariants. In the coordinate system of the principal directions there are no shear strain components. An octahedral plane is one whose normal makes equal angles with the three principal directions; the engineering shear strain on an octahedral plane is called the octahedral shear strain and is given by

γ_oct = (2/3)√((ε₁ − ε₂)² + (ε₂ − ε₃)² + (ε₃ − ε₁)²),

where ε₁, ε₂, ε₃ are the principal strains. Several definitions of equivalent strain can be found in the literature. For an arbitrary choice of strain components, a single-valued continuous displacement field does not generally exist; therefore, some restrictions, named compatibility equations, are imposed upon the strain components. With the addition of the three compatibility equations, the number of independent equations is reduced to three, matching the number of unknown displacement components. These constraints on the strain tensor were discovered by Saint-Venant and are called the Saint-Venant compatibility equations; the compatibility functions serve to assure a single-valued continuous displacement function u_i. If a body is long in one direction, the strains associated with length, i.e. the normal strain ε33, are constrained by nearby material and are small compared to the cross-sectional strains; plane strain is then an acceptable approximation.
The strain tensor for plane strain is written as

ε = [ ε11 ε12 0 ; ε21 ε22 0 ; 0 0 0 ],

where the rows are separated by semicolons; this strain state is called plane strain. The corresponding stress tensor is

σ = [ σ11 σ12 0 ; σ21 σ22 0 ; 0 0 σ33 ],

in which the non-zero σ33 is needed to maintain the constraint ε33 = 0. This stress term can be removed from the analysis to leave only the in-plane terms. Antiplane strain is another state of strain that can occur in a body. For infinitesimal deformations the scalar components of ω satisfy the condition |ω_ij| ≪ 1; note that the displacement gradient is small only if both the strain tensor and the rotation tensor are infinitesimal.
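The octahedral shear strain formula quoted above can be sketched directly from the principal strains; the values below are illustrative.

```python
import math

# Sketch of the octahedral shear strain from the principal strains:
# gamma_oct = (2/3) * sqrt((e1-e2)^2 + (e2-e3)^2 + (e3-e1)^2).
def octahedral_shear_strain(e1, e2, e3):
    return (2.0 / 3.0) * math.sqrt((e1 - e2)**2 + (e2 - e3)**2 + (e3 - e1)**2)

g = octahedral_shear_strain(0.02, 0.0, -0.01)   # illustrative principal strains
```

A purely hydrostatic strain state (ε₁ = ε₂ = ε₃) gives zero, as expected: the octahedral shear strain measures only the distortional part of the deformation.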
Figure 1. Two-dimensional geometric deformation of an infinitesimal material element.
9.
Elasticity (physics)
–
In physics, elasticity is the ability of a body to resist a distorting influence or deforming force and to return to its original size and shape when that influence or force is removed. Solid objects will deform when adequate forces are applied to them; if the material is elastic, the object will return to its initial shape and size when these forces are removed. The physical reasons for elastic behavior can be different for different materials. In metals, the atomic lattice changes size and shape when forces are applied; when forces are removed, the lattice goes back to the original lower-energy state. For rubbers and other polymers, elasticity is caused by the stretching of polymer chains when forces are applied. Perfect elasticity is an approximation of the real world: the most elastic body found in modern science is quartz fibre, which is not a perfectly elastic body either, so a perfectly elastic body is an ideal concept only. Most materials which possess elasticity in practice remain purely elastic only up to very small deformations. In engineering, the amount of elasticity of a material is determined by two types of material parameter. The first type of parameter is called a modulus, which measures the amount of force per unit area needed to achieve a given amount of deformation; the SI unit of modulus is the pascal. A higher modulus typically indicates that the material is harder to deform. The second type of parameter measures the elastic limit, the maximum stress that can arise in a material before the onset of permanent deformation; its SI unit is also the pascal. When describing the relative elasticities of two materials, both the modulus and the elastic limit have to be considered. Rubbers typically have a low modulus and tend to stretch a lot; of two rubber materials with the same elastic limit, the one with the lower modulus will appear to be more elastic, which, however, is not strictly correct.
When an elastic material is deformed due to a force, it experiences internal resistance to the deformation. The various moduli apply to different kinds of deformation: for instance, Young's modulus applies to extension/compression of a body, whereas the shear modulus applies to its shear. The elasticity of materials is described by a stress–strain curve, which shows the relation between stress and strain. The curve is generally nonlinear, but it can be approximated as linear for sufficiently small deformations. For even higher stresses, materials exhibit plastic behavior; that is, they deform irreversibly. Elasticity is not exhibited only by solids: non-Newtonian fluids, such as viscoelastic fluids, will also exhibit elasticity in certain conditions. In response to a small, rapidly applied and removed strain, these fluids may deform and then return to their original shape; under larger strains, or strains applied for longer periods of time, these fluids may start to flow like a viscous liquid. Because the elasticity of a material is described in terms of a stress–strain relation, it is essential that the terms stress and strain be defined without ambiguity.
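The interplay of the two material parameters (modulus and elastic limit) can be sketched with the one-dimensional linear relation σ = Eε, valid below the elastic limit. The material values below are illustrative, not real material data.

```python
# Sketch: 1-D linear elasticity sigma = E * eps, checked against an elastic
# limit. E and sigma_yield are illustrative numbers in pascals.
def stress(E, eps, sigma_yield):
    """Return the elastic stress, or None to flag that yield is exceeded."""
    s = E * eps
    return s if abs(s) <= sigma_yield else None

# Same imposed strain on a stiff material and a compliant one:
stiff_sigma     = stress(200e9, 1e-4, 250e6)   # high modulus -> high stress
compliant_sigma = stress(0.01e9, 1e-4, 25e6)   # low modulus  -> low stress
```

At equal strain, the stiffer material carries far more stress, which is the sense in which a higher modulus means "harder to deform."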
10.
Linear elasticity
–
Linear elasticity is the mathematical study of how solid objects deform and become internally stressed due to prescribed loading conditions. Linear elasticity models materials as continua; it is a simplification of the more general nonlinear theory of elasticity and a branch of continuum mechanics. The fundamental linearizing assumptions of linear elasticity are infinitesimal strains (small deformations) and linear relationships between the components of stress and strain; in addition, linear elasticity is valid only for stress states that do not produce yielding. These assumptions are reasonable for many engineering materials and engineering design scenarios; linear elasticity is therefore used extensively in structural analysis and engineering design, often with the aid of finite element analysis. The system of governing equations is completed by a set of linear algebraic constitutive relations; for elastic materials, Hooke's law represents the material behavior and relates the unknown stresses and strains. Note: the Einstein summation convention of summing on repeated indices is used below. The equilibrium equations are 3 independent equations with 6 independent unknowns (the stresses). The strain-displacement equations, ε_ij = ½(∂u_i/∂x_j + ∂u_j/∂x_i), where ε_ij = ε_ji is the strain, are 6 independent equations relating strains and displacements with 9 independent unknowns. The equation for Hooke's law is σ_ij = C_ijkl ε_kl, where C_ijkl is the stiffness tensor; these are 6 independent equations relating stresses and strains. An elastostatic boundary value problem for a medium is thus a system of 15 independent equations. On specifying the boundary conditions, the boundary value problem is completely defined. To solve it, two approaches can be taken according to the boundary conditions of the boundary value problem: a displacement formulation and a stress formulation.
In isotropic media, the stiffness tensor gives the relationship between the stresses and the strains. For an isotropic medium, the stiffness tensor has no preferred direction: an applied force will give the same displacements no matter the direction in which the force is applied. If the medium is also homogeneous, then the elastic moduli will be independent of position in the medium. The constitutive equation may now be written as

σ_ij = K δ_ij ε_kk + 2μ(ε_ij − ⅓ δ_ij ε_kk).

This expression separates the stress into a scalar part on the left, which may be associated with a scalar pressure, and a traceless part on the right, which may be associated with shear forces. A simpler expression is

σ_ij = λ δ_ij ε_kk + 2μ ε_ij,

where λ is Lamé's first parameter. More simply, the inverse relation is

ε_ij = (1/2μ) σ_ij − (ν/E) δ_ij σ_kk = (1/E)[(1 + ν) σ_ij − ν δ_ij σ_kk],

where ν is Poisson's ratio and E is Young's modulus. Elastostatics is the study of linear elasticity under the conditions of equilibrium, in which all forces on the elastic body sum to zero.
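The isotropic constitutive relation σ_ij = λ δ_ij ε_kk + 2μ ε_ij can be sketched componentwise; the Lamé parameters below are illustrative, not real material data.

```python
# Sketch of isotropic linear elasticity:
# sigma_ij = lam * delta_ij * eps_kk + 2 * mu * eps_ij.
# lam and mu are illustrative Lamé parameters in pascals.
lam, mu = 1.0e9, 0.5e9

def hooke_isotropic(eps):
    """Apply the isotropic Hooke's law to a 3x3 strain tensor."""
    tr = sum(eps[k][k] for k in range(3))          # eps_kk (the trace)
    return [[lam * tr * (1.0 if i == j else 0.0) + 2.0 * mu * eps[i][j]
             for j in range(3)] for i in range(3)]

# Uniaxial strain along x (all other components zero):
eps = [[1e-3, 0.0, 0.0],
       [0.0,  0.0, 0.0],
       [0.0,  0.0, 0.0]]
sigma = hooke_isotropic(eps)
```

For this uniaxial strain, σ11 = (λ + 2μ)ε11 while σ22 = σ33 = λε11: the trace term couples the lateral directions even though they are unstrained.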
Spherical coordinates (r, θ, φ) as commonly used in physics: radial distance r, polar angle θ (theta), and azimuthal angle φ (phi). The symbol ρ (rho) is often used instead of r.
11.
Material failure theory
–
Failure theory is the science of predicting the conditions under which solid materials fail under the action of external loads. The failure of a material is usually classified into brittle failure (fracture) or ductile failure (yield). Depending on the conditions, most materials can fail in a brittle or a ductile manner, or both; however, for most practical situations a material may be classified as either brittle or ductile. Though failure theory has been in development for over 200 years, its level of acceptability has yet to reach that of continuum mechanics. In mathematical terms, failure theory is expressed in the form of various failure criteria which are valid for specific materials. Failure criteria are functions in stress or strain space which separate failed states from unfailed states. A precise physical definition of a failed state is not easily quantified, and several working definitions are in use in the engineering community. Quite often, phenomenological failure criteria of similar form are used to predict brittle failure. In materials science, material failure is the loss of load-carrying capacity of a material unit. This definition per se introduces the fact that failure can be examined on different scales, from microscopic to macroscopic. On the other hand, due to the lack of globally accepted fracture criteria, such methodologies are useful for gaining insight into the cracking of specimens and simple structures under well-defined global load distributions. Microscopic failure considers the initiation and propagation of a crack; failure criteria in this case are related to microscopic fracture. Some of the most popular models in this area are the micromechanical failure models. One such model was proposed by Gurson and extended by Tvergaard; another approach, proposed by Rousselier, is based on continuum damage mechanics and thermodynamics. Both models form a modification of the von Mises yield potential by introducing a scalar quantity which represents the void volume fraction of cavities.
Macroscopic material failure is defined in terms of load-carrying capacity or energy-storage capacity. Li presents a classification of macroscopic failure criteria in four categories: stress or strain failure, energy-type failure, damage failure, and empirical failure. The material behavior at one level is considered as a collective of its behavior at a sub-level, and an efficient deformation and failure model should be consistent at every level. The maximum stress criterion assumes that a material fails when the maximum principal stress σ1 in a material element exceeds the uniaxial tensile strength of the material. Alternatively, the material will fail if the minimum principal stress σ3 is less than the uniaxial compressive strength of the material. Numerous other phenomenological failure criteria can be found in the engineering literature; the degree of success of these criteria in predicting failure has been limited.
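The maximum stress criterion described above can be sketched as a simple check on the principal stresses. This is an illustrative sketch, not any particular library's implementation; the function name and sign convention (tension positive, compression negative) are assumptions.

```python
def max_stress_failure(sigma1, sigma3, tensile_strength, compressive_strength):
    """Maximum stress criterion (illustrative sketch):
    failure if the largest principal stress sigma1 exceeds the uniaxial
    tensile strength, or the smallest principal stress sigma3 falls below
    the uniaxial compressive strength (compression taken as negative)."""
    return sigma1 > tensile_strength or sigma3 < -abs(compressive_strength)

# A material with 200 MPa tensile and 800 MPa compressive strength:
print(max_stress_failure(250.0, 0.0, 200.0, 800.0))     # tensile overload
print(max_stress_failure(150.0, -300.0, 200.0, 800.0))  # safe state
```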
Material failure theory
–
Continuum mechanics
12.
Fracture mechanics
–
Fracture mechanics is the field of mechanics concerned with the study of the propagation of cracks in materials. It uses methods of solid mechanics to calculate the driving force on a crack. In modern materials science, fracture mechanics is an important tool used to improve the performance of mechanical components. Fractography is widely used with fracture mechanics to understand the causes of failures and to verify theoretical failure predictions against real-life failures. The prediction of crack growth is at the heart of the damage tolerance mechanical design discipline. There are three ways of applying a force to enable a crack to propagate: Mode I fracture – opening mode; Mode II fracture – sliding mode; and Mode III fracture – tearing mode. The processes of material manufacture, processing, machining, and forming may introduce flaws in a finished mechanical component. Interior and surface flaws arising from these processes are found in all metal structures. Not all such flaws are unstable under service conditions. Fracture mechanics is the analysis of flaws to discover those that are safe and those that are liable to propagate as cracks and so cause failure of the flawed structure. Despite these inherent flaws, it is possible to achieve the safe operation of a structure through damage tolerance analysis. Fracture mechanics as a subject for critical study has barely been around for a century and thus is relatively new. Fracture mechanics should attempt to provide answers to the following questions. What crack size can be tolerated under service loading, i.e. what is the maximum permissible crack size? How long does it take for a crack to grow from an initial size, for example the minimum detectable crack size, to the maximum permissible crack size? What is the service life of a structure when a certain pre-existing flaw size is assumed to exist? During the period available for crack detection, how often should the structure be inspected for cracks? Fracture mechanics was developed during World War I by English aeronautical engineer A. A.
Griffith, to explain the failure of brittle materials. Griffith's work was motivated by two facts: the stress needed to fracture bulk glass is around 100 MPa, while the theoretical stress needed for breaking the atomic bonds of glass is approximately 10,000 MPa. A theory was needed to reconcile these conflicting observations. Also, experiments on glass fibers that Griffith himself conducted suggested that the fracture stress increases as the fiber diameter decreases. Hence the uniaxial tensile strength, which had been used extensively to predict material failure before Griffith, could not be a specimen-independent material property. Griffith suggested that the low fracture strength observed in experiments, as well as the size dependence of strength, was due to the presence of microscopic flaws in the bulk material. To verify the flaw hypothesis, Griffith introduced an artificial flaw in his experimental glass specimens.
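Griffith's flaw hypothesis leads to his well-known energy-balance result: for a through-crack of half-length a in a brittle plate (plane stress), the fracture stress is σ_f = sqrt(2 E γ_s / (π a)). A minimal sketch of that relation, with illustrative names and values:

```python
import math

def griffith_fracture_stress(E, gamma_s, a):
    """Griffith energy balance (plane stress): stress at which a
    through-crack of half-length a becomes unstable,
    sigma_f = sqrt(2 * E * gamma_s / (pi * a)).
    E: Young's modulus (Pa), gamma_s: surface energy (J/m^2), a: half crack length (m)."""
    return math.sqrt(2.0 * E * gamma_s / (math.pi * a))

# Longer cracks fail at lower stress; sigma_f scales as 1/sqrt(a):
s_small = griffith_fracture_stress(70e9, 1.0, 1e-6)
s_large = griffith_fracture_stress(70e9, 1.0, 1e-4)
print(s_small > s_large)
```

This captures the size dependence Griffith observed: the larger the worst flaw in a specimen, the lower its apparent strength.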
Fracture mechanics
–
The S.S. Schenectady split apart by brittle fracture while in harbor, 1943.
Fracture mechanics
–
The three fracture modes.
13.
Contact mechanics
–
Contact mechanics is the study of the deformation of solids that touch each other at one or more points. Central aspects in contact mechanics are the pressures and adhesion acting perpendicular to the contacting bodies' surfaces. This page focuses mainly on the normal direction, i.e. on frictionless contact mechanics; frictional contact mechanics is discussed separately. Current challenges faced in the field may include stress analysis of contact and coupling members and the influence of lubrication and material design on friction and wear. Applications of contact mechanics further extend into the micro- and nanotechnological realm. The original work in contact mechanics dates back to 1882 with the publication of the paper "On the contact of elastic solids" by Heinrich Hertz, who was attempting to understand how the optical properties of multiple stacked lenses might change with the force holding them together. Hertzian contact stress refers to the stresses that develop as two curved surfaces come into contact and deform slightly under the imposed loads. This amount of deformation depends on the modulus of elasticity of the materials in contact, and the theory gives the contact stress as a function of the normal contact force, the radii of curvature of both bodies, and the modulus of elasticity of both bodies. Hertzian contact stress forms the foundation for the equations for load-bearing capabilities and fatigue life in bearings and gears. Classical contact mechanics is most notably associated with Heinrich Hertz: in 1882, Hertz solved the contact problem of two elastic bodies with curved surfaces. This still-relevant classical solution provides a foundation for modern problems in contact mechanics; for example, in mechanical engineering and tribology, Hertzian contact stress is a description of the stress within mating parts. The Hertzian contact stress usually refers to the stress close to the area of contact between two spheres of different radii. It was not until nearly one hundred years later that Johnson, Kendall, and Roberts found a similar solution for the case of adhesive contact.
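The classical Hertz result for two elastic spheres pressed together by a normal force can be sketched as follows. The formulas are the standard Hertz expressions (effective radius, effective modulus, contact radius, and peak pressure equal to 1.5 times the mean pressure); function and variable names are illustrative.

```python
import math

def hertz_sphere_contact(F, R1, R2, E1, nu1, E2, nu2):
    """Hertzian contact of two elastic spheres pressed together by a
    normal force F. Returns (contact radius a, peak contact pressure p0)."""
    R = 1.0 / (1.0 / R1 + 1.0 / R2)                              # effective radius
    E_star = 1.0 / ((1.0 - nu1**2) / E1 + (1.0 - nu2**2) / E2)   # effective modulus
    a = (3.0 * F * R / (4.0 * E_star)) ** (1.0 / 3.0)            # contact radius
    p0 = 3.0 * F / (2.0 * math.pi * a**2)                        # 1.5 x mean pressure
    return a, p0

# Two 10 mm steel spheres (E = 200 GPa, nu = 0.3) under 100 N:
a, p0 = hertz_sphere_contact(100.0, 0.01, 0.01, 200e9, 0.3, 200e9, 0.3)
print(a, p0)
```

Note how the contact stress depends exactly on the quantities the text lists: the normal force, the radii of curvature, and the elastic moduli of both bodies.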
This theory was rejected by Boris Derjaguin and co-workers, who proposed a different theory of adhesion in the 1970s. The Derjaguin model came to be known as the DMT model, and the Johnson et al. model came to be known as the JKR model for adhesive elastic contact. This rejection proved to be instrumental in the development of the Tabor parameter, which indicates which of the two models better describes a given adhesive contact. Further advancement in the field of contact mechanics in the mid-twentieth century may be attributed to names such as Bowden and Tabor. Bowden and Tabor were the first to emphasize the importance of surface roughness for bodies in contact; through investigation of surface roughness, the true contact area between friction partners is found to be less than the apparent contact area. Such understanding also drastically changed the direction of undertakings in tribology. The works of Bowden and Tabor yielded several theories in contact mechanics of rough surfaces. The contributions of Archard must also be mentioned in discussion of pioneering works in this field: Archard concluded that, even for rough elastic surfaces, the contact area is approximately proportional to the normal force.
Contact mechanics
–
Stresses in a contact area loaded simultaneously with a normal and a tangential force. Stresses were made visible using photoelasticity.
Contact mechanics
–
Contact of an elastic sphere with an elastic half-space
Contact mechanics
–
Contact between two spheres.
Contact mechanics
–
Contact between two crossed cylinders of equal radius.
14.
Frictional contact mechanics
–
Contact mechanics is the study of the deformation of solids that touch each other at one or more points. The forces can be divided into compressive and adhesive forces acting in the direction perpendicular to the interface. Frictional contact mechanics is the study of the deformation of bodies in the presence of frictional effects, and it is concerned with a large range of different scales. At the macroscopic scale, it is applied to the investigation of the motion of contacting bodies; for instance, the bouncing of a rubber ball on a surface depends on the frictional interaction at the contact interface. Here the total force versus indentation and lateral displacement are of main concern. At the intermediate scale, one is interested in the local stresses, strains, and deformations of the contacting bodies in and near the contact area, for instance to derive or validate contact models at the macroscopic scale, or to investigate wear. Application areas at this scale include tire–pavement interaction, railway wheel–rail interaction, and roller bearing analysis. Several famous scientists, engineers, and mathematicians contributed to our understanding of friction; they include Leonardo da Vinci, Guillaume Amontons, John Theophilus Desaguliers, Leonhard Euler, and Charles-Augustin de Coulomb. Later, Nikolai Pavlovich Petrov, Osborne Reynolds, and Richard Stribeck supplemented this understanding with theories of lubrication. Deformation of solid materials was investigated in the 17th and 18th centuries by Robert Hooke and Joseph Louis Lagrange, and in the 19th and 20th centuries by d'Alembert and Timoshenko. With respect to contact mechanics, the classical contribution by Heinrich Hertz stands out; further, the fundamental solutions by Boussinesq and Cerruti are of primary importance for the investigation of frictional contact problems in the elastic regime. Classical results for a true frictional contact problem concern the papers by F. W.
Carter, who presented the creep versus creep-force relation for a cylinder on a plane or for two cylinders in steady rolling contact using Coulomb's dry friction law. These results are applied to railway locomotive traction and to understanding the hunting oscillation of railway vehicles. With respect to sliding, the classical solutions are due to C. Cattaneo and R. D. Mindlin, who considered the tangential shifting of a sphere on a plane. In the 1950s, interest in the rolling contact of railway wheels grew. Johnson presented an approach for the 3D frictional problem with Hertzian geometry; among other results, he found that spin creepage, which is symmetric about the center of the contact patch, nevertheless produces a net tangential force, due to the fore–aft differences in the distribution of tractions in the contact patch. In 1967, Joost Kalker published his milestone PhD thesis on the theory of rolling contact. This theory is exact in the limit of an infinite friction coefficient, in which case the slip area vanishes. It does assume Coulomb's friction law, which more or less requires clean surfaces, and this theory is for massive bodies such as the railway wheel–rail contact.
Frictional contact mechanics
–
In railway applications one wants to know the relation between creepage (velocity difference) and the friction force.
15.
Fluid
–
In physics, a fluid is a substance that continually deforms under an applied shear stress. Fluids are a subset of the phases of matter and include liquids, gases, and plasmas. Fluids are substances that have zero shear modulus or, in simpler terms, substances which cannot resist any shear force applied to them. Although the term includes both the liquid and gas phases, in common usage "fluid" is often used as a synonym for "liquid". For example, brake fluid is hydraulic oil and will not perform its required incompressible function if there is gas in it; this colloquial usage of the term is also common in medicine and in nutrition. Liquids form a free surface while gases do not. The distinction between solids and fluids is not entirely obvious; it is made by evaluating the viscosity of the substance. Silly Putty can be considered to behave like a solid or a fluid, depending on the time scale; it is best described as a viscoelastic fluid. There are many examples of substances proving difficult to classify. A particularly interesting one is pitch, as demonstrated in the pitch drop experiment currently running at the University of Queensland. Fluids display properties such as not resisting deformation, or resisting it only slightly; these properties are typically a function of their inability to support a shear stress in static equilibrium. Solids can be subjected to shear stresses and to normal stresses, both compressive and tensile. In contrast, ideal fluids can only be subjected to normal stress; real fluids display viscosity and so are capable of being subjected to low levels of shear stress. In a solid, shear stress is a function of strain, whereas in a fluid it is a function of strain rate. A consequence of this behavior is Pascal's law, which describes the role of pressure in characterizing a fluid's state. The study of fluids is fluid mechanics, which is subdivided into fluid dynamics and fluid statics.
Fluid
–
Continuum mechanics
16.
Fluid dynamics
–
In physics and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids. It has several subdisciplines, including aerodynamics and hydrodynamics. Before the twentieth century, hydrodynamics was synonymous with fluid dynamics; this is still reflected in the names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability. The foundational axioms of fluid dynamics are the conservation laws, specifically conservation of mass, conservation of linear momentum, and conservation of energy. These are based on classical mechanics and are modified in quantum mechanics. They are expressed using the Reynolds transport theorem. In addition to the above, fluids are assumed to obey the continuum assumption. Fluids are composed of molecules that collide with one another and with solid objects; however, the continuum assumption treats fluids as continuous rather than discrete, and the fact that the fluid is made up of molecules is ignored. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in a number of ways, all of which make them easier to solve; some of the simplifications allow some simple fluid dynamics problems to be solved in closed form. Three conservation laws are used to solve fluid dynamics problems, and they may be applied to a region of the flow called a control volume. A control volume is a volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. Mass continuity: the rate of change of fluid mass inside a control volume must be equal to the net rate of fluid flow into the volume. Mass flow into the system is accounted as positive, and since the normal vector to the surface is opposite the sense of flow into the system, the term is negated.
The first term on the right is the net rate at which momentum is convected into the volume; the second term on the right is the force due to pressure on the volume's surfaces. The first two terms on the right are negated since momentum entering the system is accounted as positive. The third term on the right is the net acceleration of the mass within the volume due to any body forces. Surface forces, such as viscous forces, are represented by F surf. The following is the form of the momentum conservation equation.
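The integral mass balance stated above (the rate of change of mass inside a control volume equals the net inflow rate) can be sketched numerically. This is a minimal illustrative sketch; the function name, step scheme, and values are assumptions, not from the source.

```python
def mass_in_control_volume(mass0, mdot_in, mdot_out, dt, steps):
    """Integrate d(mass)/dt = mdot_in - mdot_out over time with simple
    forward-Euler steps; inflow is counted positive, outflow negative."""
    mass = mass0
    for _ in range(steps):
        mass += (mdot_in - mdot_out) * dt
    return mass

# A tank holding 10 kg, filled at 2 kg/s and drained at 1.5 kg/s for 4 s:
print(mass_in_control_volume(10.0, 2.0, 1.5, 1.0, 4))  # -> 12.0
```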
Fluid dynamics
17.
Pascal's law
–
The law was established by French mathematician Blaise Pascal in 1647–48. The intuitive explanation of this formula is that the change in pressure between two elevations is due to the weight of the fluid between the elevations. A more correct interpretation, though, is that the pressure change is caused by the change of potential energy per unit volume of the liquid due to the existence of the gravitational field. Note that the variation with height does not depend on any additional pressures; therefore, Pascal's law can be interpreted as saying that any change in pressure applied at any given point of the fluid is transmitted undiminished throughout the fluid. If a U-tube is filled with water and pistons are placed at each end, pressure exerted against the left piston will be transmitted throughout the liquid: the pressure that the left piston exerts against the water will be equal to the pressure the water exerts against the right piston. Suppose the tube on the right side is made wider and a piston of fifty times the area is used. If a 1 N load is placed on the left piston, the additional pressure is exerted against the entire area of the larger piston; here the difference between force and pressure is important. Since there is 50 times the area, 50 times as much force is exerted on the larger piston; thus, the larger piston will support a 50 N load, fifty times the load on the smaller piston. Forces can be multiplied using such a device: one newton input produces 50 newtons output. By further increasing the area of the larger piston, forces can be multiplied, in principle, without limit. Pascal's principle underlies the operation of the hydraulic press. The hydraulic press does not violate energy conservation, because a decrease in distance moved compensates for the increase in force: when the small piston is moved downward 100 centimeters, the large piston will be raised only one-fiftieth of this.
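The piston arithmetic above follows directly from Pascal's principle: pressure is transmitted undiminished, so force scales with area, and volume conservation makes displacement scale inversely. A minimal sketch (names are illustrative):

```python
def hydraulic_output_force(f_in, area_in, area_out):
    """Pascal's principle: the pressure f_in / area_in is transmitted
    undiminished, so the large piston feels it over a larger area."""
    return f_in * (area_out / area_in)

def hydraulic_output_travel(d_in, area_in, area_out):
    """Volume conservation: the larger piston moves correspondingly less,
    which is why the press does not violate energy conservation."""
    return d_in * (area_in / area_out)

# 1 N on the small piston, large piston has 50x the area:
print(hydraulic_output_force(1.0, 1.0, 50.0))    # -> 50.0 N supported
print(hydraulic_output_travel(100.0, 1.0, 50.0)) # -> 2.0 cm of lift
```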
Pascal's principle applies to all fluids, whether gases or liquids. A typical application of Pascal's principle for gases and liquids is the automobile lift seen in many service stations. Increased air pressure produced by an air compressor is transmitted through the air to the surface of oil in an underground reservoir; the oil, in turn, transmits the pressure to a piston, which lifts the automobile. The relatively low pressure that exerts the lifting force against the piston is about the same as the air pressure in automobile tires. Hydraulics is employed by modern devices ranging from very small to enormous; for example, there are hydraulic pistons in almost all construction machines where heavy loads are involved. Pascal's barrel is the name of a hydrostatics experiment allegedly performed by Blaise Pascal in 1646.
Pascal's law
–
The effects of Pascal's law in the (possibly apocryphal) "Pascal's barrel" experiment.
Pascal's law
–
Continuum mechanics
18.
Viscosity
–
The viscosity of a fluid is a measure of its resistance to gradual deformation by shear stress or tensile stress. For liquids, it corresponds to the informal concept of thickness: honey, for example, has a much higher viscosity than water. Viscosity is the property of a fluid which opposes the relative motion between two surfaces of the fluid that are moving at different velocities. For a given velocity pattern, the stress required is proportional to the fluid's viscosity. A fluid that has no resistance to shear stress is known as an ideal or inviscid fluid; zero viscosity is observed only at very low temperatures in superfluids. Otherwise, all fluids have positive viscosity and are said to be viscous or viscid. A fluid with a relatively high viscosity, such as pitch, may appear to be a solid. The word "viscosity" is derived from the Latin viscum, meaning mistletoe. The dynamic viscosity of a fluid expresses its resistance to shearing flows, where adjacent layers move parallel to each other with different speeds. It can be defined through the idealized situation known as Couette flow, in which a homogeneous fluid is trapped between two parallel plates, the bottom one fixed and the top one moving horizontally at constant speed u. If the speed of the top plate is small enough, the fluid particles will move parallel to it, and their speed will vary linearly from zero at the bottom to u at the top. Each layer of fluid will move faster than the one just below it. In particular, the fluid will apply on the top plate a force in the direction opposite to its motion, and an equal but opposite one to the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed. The magnitude F of this force is found to be proportional to the speed u and the area A of each plate, and inversely proportional to their separation y. The proportionality factor μ in this formula is the viscosity of the fluid. The ratio u/y is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction perpendicular to the plates. Isaac Newton expressed the viscous forces by the differential equation τ = μ ∂u/∂y, where τ = F/A.
This formula assumes that the flow is moving along parallel lines; the differential equation can also be used where the velocity does not vary linearly with y, such as in fluid flowing through a pipe. Use of the Greek letter mu (μ) for the dynamic viscosity is common among mechanical and chemical engineers; however, the Greek letter eta (η) is used by chemists and physicists.
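The Couette-flow relation above, τ = μ(u/y) with F = τA, can be sketched directly. This is an illustrative computation; the function name and the water-like viscosity value are assumptions.

```python
def plate_drag_force(mu, area, u, gap):
    """Newton's law of viscosity for planar Couette flow with a linear
    velocity profile: tau = mu * u / gap, and the plate force is F = tau * area.
    mu in Pa*s, area in m^2, u in m/s, gap in m."""
    tau = mu * u / gap   # shear stress, Pa
    return tau * area    # force, N

# Water-like fluid (mu ~ 1e-3 Pa*s), 2 m^2 plate moving at 0.5 m/s
# over a 1 mm gap:
print(plate_drag_force(1.0e-3, 2.0, 0.5, 1.0e-3))  # -> 1.0 N
```

The force doubles if the speed or the plate area doubles, and halves if the gap doubles, exactly as the proportionalities in the text state.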
Viscosity
–
Pitch has a viscosity approximately 230 billion (2.3 × 10¹¹) times that of water.
Viscosity
–
Laminar shear of fluid between two plates. Friction between the fluid and the moving boundaries causes the fluid to shear. The force required for this action is a measure of the fluid's viscosity.
Viscosity
–
Example of the viscosity of milk and water. Liquids with higher viscosities make smaller splashes when poured at the same velocity.
Viscosity
–
Honey being drizzled.
19.
Newtonian fluid
–
A Newtonian fluid is a fluid in which the viscous stresses arising from its flow are, at every point, linearly proportional to the local strain rate. That is equivalent to saying that those forces are proportional to the rates of change of the fluid's velocity vector as one moves away from the point in question in various directions. Newtonian fluids are the simplest mathematical models of fluids that account for viscosity. While no real fluid fits the definition perfectly, many common liquids and gases, such as water and air, can be assumed to be Newtonian for practical calculations under ordinary conditions. However, non-Newtonian fluids are relatively common and include oobleck; other examples include many polymer solutions, molten polymers, many solid suspensions, blood, and most highly viscous fluids. Newtonian fluids are named after Isaac Newton, who first postulated the relation between the strain rate and shear stress for such fluids in differential form. An element of a liquid or gas will suffer forces from the surrounding fluid. These forces can be approximated to first order by a viscous stress tensor, and the deformation of that element, relative to some previous state, can be approximated to first order by a strain tensor that changes with time. The tensors τ and ∇v can be expressed by 3×3 matrices. One also defines a total stress tensor σ that combines the shear stress with the conventional pressure p. The diagonal components of the viscosity tensor represent the molecular viscosity of a liquid, and the off-diagonal components the turbulent eddy viscosity.
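The linear relation between viscous stress and velocity gradient can be sketched for the simplest (incompressible, constant-viscosity) case, where τ_ij = μ(∂v_i/∂x_j + ∂v_j/∂x_i). This is an illustrative sketch of that one special case, not a general constitutive law; names are assumptions.

```python
def viscous_stress(grad_v, mu):
    """Viscous stress tensor of an incompressible Newtonian fluid with
    scalar viscosity mu: tau_ij = mu * (dv_i/dx_j + dv_j/dx_i).
    grad_v is the velocity gradient as a nested list, grad_v[i][j] = dv_i/dx_j."""
    n = len(grad_v)
    return [[mu * (grad_v[i][j] + grad_v[j][i]) for j in range(n)]
            for i in range(n)]

# Simple shear du/dy = 1 s^-1 in 2D with mu = 2 Pa*s:
print(viscous_stress([[0.0, 1.0], [0.0, 0.0]], 2.0))  # -> [[0.0, 2.0], [2.0, 0.0]]
```

For simple shear this reduces to τ = μ ∂u/∂y, consistent with the Couette-flow definition of viscosity.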
Newtonian fluid
–
Continuum mechanics
20.
Non-Newtonian fluid
–
A non-Newtonian fluid is a fluid that does not follow Newton's law of viscosity. Most commonly, the viscosity of non-Newtonian fluids is dependent on shear rate or shear-rate history. Some non-Newtonian fluids with shear-independent viscosity, however, still exhibit normal stress differences or other non-Newtonian behavior. Many salt solutions and molten polymers are non-Newtonian fluids, as are many commonly found substances such as ketchup, custard, toothpaste, starch suspensions, maizena, paint, and blood. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different and can even be time-dependent; therefore, a constant coefficient of viscosity cannot be defined. Although the concept of viscosity is commonly used in fluid mechanics to characterize the shear properties of a fluid, it can be inadequate to describe non-Newtonian fluids. These properties are studied using tensor-valued constitutive equations, which are common in the field of continuum mechanics. The viscosity of a shear-thickening fluid, or dilatant fluid, appears to increase when the shear rate increases. Corn starch dissolved in water is a common example: when stirred slowly it looks milky, when stirred vigorously it feels like a very viscous liquid. Note that all thixotropic fluids are extremely shear thinning, but they are also time-dependent; to avoid confusion, time-independent shear thinning is more clearly termed pseudoplastic. Another example of a shear-thinning fluid is blood; this behavior is highly favoured within the body, as it allows the viscosity of blood to decrease with increased shear strain rate. Fluids that have a linear shear stress/shear strain rate relationship but require a finite yield stress before they begin to flow are called Bingham plastics.
Several examples are clay suspensions, drilling mud, toothpaste, mayonnaise, and chocolate. The surface of a Bingham plastic can hold peaks when it is still; by contrast, Newtonian fluids have flat, featureless surfaces when still. There are also fluids whose strain rate is a function of time. Fluids that require a gradually increasing shear stress to maintain a constant strain rate are referred to as rheopectic; the opposite case is a fluid that thins out with time and requires a decreasing stress to maintain a constant strain rate. Many common substances exhibit non-Newtonian flows; uncooked cornflour suspended in water has the same properties as oobleck. The name "oobleck" is derived from the Dr. Seuss book Bartholomew and the Oobleck. Because of its properties, oobleck is often used in demonstrations that exhibit its unusual behavior.
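One common way to model the shear-rate dependence described above is the power-law (Ostwald–de Waele) model, in which apparent viscosity is η = K·γ̇^(n−1). The source does not name this model, so take it as one standard illustrative choice; K, n, and the function name are assumptions.

```python
def apparent_viscosity(K, n, shear_rate):
    """Power-law (Ostwald-de Waele) model: eta = K * shear_rate**(n - 1).
    n < 1 gives shear thinning (pseudoplastic), n > 1 gives shear
    thickening (dilatant), and n == 1 recovers a Newtonian fluid
    with constant viscosity K."""
    return K * shear_rate ** (n - 1.0)

# Shear-thinning fluid (n = 0.5): viscosity drops as stirring speeds up.
print(apparent_viscosity(1.0, 0.5, 1.0), apparent_viscosity(1.0, 0.5, 100.0))
```

Note the model cannot capture a yield stress, so Bingham plastics need a different constitutive equation.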
Non-Newtonian fluid
–
Demonstration of a non-Newtonian fluid at Universum in Mexico City
Non-Newtonian fluid
–
Classification of fluids with shear stress as a function of shear rate.
Non-Newtonian fluid
–
Oobleck on a subwoofer. Applying force to oobleck, by sound waves in this case, makes the non-Newtonian fluid thicken.
21.
Buoyancy
–
In science, buoyancy or upthrust is an upward force exerted by a fluid that opposes the weight of an immersed object. In a column of fluid, pressure increases with depth as a result of the weight of the overlying fluid; thus the pressure at the bottom of a column of fluid is greater than at the top. Similarly, the pressure at the bottom of an object submerged in a fluid is greater than at the top of the object, and this pressure difference results in a net upward force on the object. For this reason, an object whose density is greater than that of the fluid in which it is submerged tends to sink; if the object is either less dense than the liquid or is shaped appropriately, the force can keep the object afloat. This can occur only in a reference frame which either has a gravitational field or is accelerating due to a force other than gravity defining a downward direction. In a situation of fluid statics, the net upward force is equal to the magnitude of the weight of fluid displaced by the body. The center of buoyancy of an object is the centroid of the displaced volume of fluid. Archimedes' principle is named after Archimedes of Syracuse, who first discovered this law in 212 BC. More tersely: buoyancy = weight of displaced fluid. The weight of the displaced fluid is directly proportional to the volume of the displaced fluid. Thus, among completely submerged objects with equal masses, objects with greater volume have greater buoyancy; this is also known as upthrust. Suppose a rock's weight is measured as 10 newtons when suspended by a string in a vacuum with gravity acting upon it. Suppose that when the rock is lowered into water, it displaces water of weight 3 newtons. The force it then exerts on the string from which it hangs would be 10 newtons minus the 3 newtons of buoyancy force: 10 − 3 = 7 newtons.
Buoyancy reduces the apparent weight of objects that have sunk completely to the sea floor; it is generally easier to lift an object up through the water than it is to pull it out of the water. The density of the object relative to the density of the fluid can easily be calculated without measuring any volumes: density of object / density of fluid = weight / (weight − apparent immersed weight). Example: if you drop wood into water, buoyancy will keep it afloat. Example: a helium balloon in a moving car. During a period of increasing speed, the air mass inside the car moves in the direction opposite to the car's acceleration, and the balloon is also pulled this way. However, because the balloon is buoyant relative to the air, it ends up being pushed out of the way instead. If the car slows down, the same balloon will begin to drift backward. For the same reason, the balloon drifts as the car goes round a curve. This is the equation to calculate the pressure inside a fluid in equilibrium.
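The rock example and the density-ratio formula above can be checked with a short sketch (function names are illustrative):

```python
def apparent_immersed_weight(weight, displaced_fluid_weight):
    """Weight read on the string once submerged: true weight minus
    the buoyancy force, i.e. the weight of the displaced fluid."""
    return weight - displaced_fluid_weight

def relative_density(weight, apparent_weight):
    """density_object / density_fluid from the two weighings:
    weight / (weight - apparent immersed weight)."""
    return weight / (weight - apparent_weight)

# The 10 N rock displacing 3 N of water:
print(apparent_immersed_weight(10.0, 3.0))  # -> 7.0 N on the string
print(relative_density(10.0, 7.0))          # rock is ~3.33x denser than water
```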
Buoyancy
–
A metallic coin (one British pound coin) floats in mercury due to the buoyancy force upon it and appears to float higher because of the surface tension of the mercury.
Buoyancy
–
The forces at work in buoyancy. Note that the object is floating because the upward force of buoyancy is equal to the downward force of gravity.
22.
Mixing (process engineering)
–
In industrial process engineering, mixing is a unit operation that involves manipulation of a heterogeneous physical system with the intent to make it more homogeneous. A familiar example is pumping of the water in a swimming pool to homogenize the water temperature. Mixing is performed to allow heat and/or mass transfer to occur between one or more streams, components, or phases. Modern industrial processing almost always involves some form of mixing, and some classes of chemical reactors are also mixers. With the right equipment, it is possible to mix a solid, liquid, or gas into another solid, liquid, or gas. The opposite of mixing is segregation; a classical example of segregation is the brazil nut effect. The type of operation and equipment used during mixing depends on the state of the materials being mixed; in this context, the act of mixing may be synonymous with stirring or kneading processes. Mixing of liquids occurs frequently in process engineering, and the nature of the liquids to be blended determines the equipment used. Turbulent or transitional mixing is conducted with turbines or impellers. Mixing of liquids that are miscible or at least soluble in each other occurs frequently in process engineering. An everyday example would be the addition of milk or cream to tea or coffee: since both liquids are water-based, they dissolve easily in one another. Since the viscosity of such liquids is relatively low, the momentum of the liquid being added is sometimes enough to cause enough turbulence to mix the two; if necessary, a spoon or paddle could be used to complete the mixing process. Blending in a more viscous liquid, such as honey, requires more mixing power per unit volume to achieve the same homogeneity in the same amount of time. Blending powders is one of the oldest unit operations in the solids-handling industries; for many decades powder blending has been used just to homogenize bulk materials.
Many different machines have been designed to handle materials with various bulk-solids properties. On the basis of the practical experience gained with these different machines, engineering knowledge has been developed to construct reliable equipment and to predict scale-up and mixing behavior. This wide range of applications of mixing equipment requires a high level of knowledge, long-time experience, and extended test facilities to come to the optimal selection of equipment. In powder mixing, two different dimensions of the process can be distinguished: convective mixing and intensive mixing. In the case of convective mixing, material in the mixer is transported from one location to another. This type of mixing leads to a less ordered state inside the mixer: the components that must be mixed are distributed over the other components. With progressing time the mixture becomes more randomly ordered, and after a certain mixing time the ultimate random state is reached.
Mixing (process engineering)
–
Machine for incorporating liquids and finely ground solids
Mixing (process engineering)
–
Schematics of an agitated vessel with a Rushton turbine and baffles
Mixing (process engineering)
–
A magnetic stirrer
Mixing (process engineering)
–
Axial flow impeller (left) and radial flow impeller (right).
23.
Atmosphere
–
An atmosphere is a layer of gases surrounding a planet or other material body that is held in place by the gravity of that body. An atmosphere is more likely to be retained if the gravity it is subject to is high. The atmosphere of Earth is mostly composed of nitrogen, oxygen, and argon, with carbon dioxide and other gases in trace amounts. The atmosphere helps protect living organisms from genetic damage by solar ultraviolet radiation, solar wind, and cosmic rays. Its current composition is the product of billions of years of modification of the paleoatmosphere by living organisms. The term stellar atmosphere describes the outer region of a star; stars with sufficiently low temperatures may form compound molecules in their outer atmosphere. Atmospheric pressure is the force per unit area that is applied perpendicularly to a surface by the surrounding gas. It is determined by a body's gravitational force in combination with the total mass of the column of gas above a location. On Earth, units of air pressure are based on the internationally recognized standard atmosphere, and pressure is measured with a barometer. The pressure of an atmospheric gas decreases with altitude due to the diminishing mass of gas above. The height at which the pressure from an atmosphere declines by a factor of e is called the scale height and is denoted by H. For an atmosphere of uniform temperature, the pressure declines exponentially with increasing altitude. However, atmospheres are not uniform in temperature, so the determination of the atmospheric pressure at any particular altitude is more complex. Surface gravity, the force that holds down an atmosphere, differs significantly among the planets; for example, the large gravitational force of the giant planet Jupiter is able to retain light gases such as hydrogen and helium that escape from objects with lower gravity. Thus, the distant and cold Titan, Triton, and Pluto are able to retain their atmospheres despite their relatively low gravities. Rogue planets, theoretically, may also retain thick atmospheres.
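The scale-height behaviour described above, where pressure falls by a factor of e every H of altitude in an isothermal atmosphere, can be sketched as follows (the Earth-like value H ≈ 8.5 km is an illustrative assumption):

```python
import math

def pressure_at_altitude(p0, z, H):
    """Isothermal barometric formula: p(z) = p0 * exp(-z / H), where H is
    the scale height at which pressure has fallen by a factor of e.
    p0: surface pressure (Pa), z: altitude (m), H: scale height (m)."""
    return p0 * math.exp(-z / H)

# Earth-like numbers: p0 = 101325 Pa, H ~ 8500 m.
print(pressure_at_altitude(101325.0, 8500.0, 8500.0))   # p0 / e
print(pressure_at_altitude(101325.0, 17000.0, 8500.0))  # p0 / e^2
```

As the text notes, real atmospheres are not isothermal, so this exponential form is only a first approximation.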
Since a collection of gas molecules moves at a wide range of velocities, some will always be fast enough to produce a slow leakage of gas into space. Lighter molecules move faster than heavier ones with the same thermal kinetic energy, so gases of low molecular weight are lost more rapidly. It is thought that Venus and Mars may have lost much of their water when, after being photodissociated into hydrogen and oxygen by solar ultraviolet radiation, the hydrogen escaped. Earth's magnetic field helps to prevent this; normally, the solar wind would greatly enhance the escape of hydrogen. However, over the past 3 billion years Earth may have lost gases through the polar regions due to auroral activity
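The claim that lighter molecules are faster at the same thermal energy can be illustrated with the root-mean-square speed, v_rms = sqrt(3RT/M). The 300 K temperature and the hydrogen-versus-oxygen comparison are assumptions chosen for illustration:

```python
import math

def v_rms(T, molar_mass):
    """Root-mean-square molecular speed sqrt(3RT/M) at temperature T."""
    R = 8.314  # gas constant, J/(mol*K)
    return math.sqrt(3 * R * T / molar_mass)

# Same temperature, different masses: H2 (0.002 kg/mol) vs O2 (0.032 kg/mol).
v_h2 = v_rms(300, 0.002)   # roughly 1.9 km/s
v_o2 = v_rms(300, 0.032)   # roughly 0.48 km/s
# The ratio is sqrt(0.032 / 0.002) = 4: hydrogen leaks to space far more easily.
```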
Atmosphere
–
Mars's thin atmosphere
Atmosphere
–
Earth's atmospheric gases scatter blue light more than other wavelengths, giving the Earth a blue halo when seen from space.
24.
Boyle's law
–
Boyle's law is an experimental gas law that describes how the pressure of a gas tends to increase as the volume of the container decreases. Mathematically, Boyle's law can be stated as P ∝ 1/V, or PV = k, where P is the pressure of the gas, V is the volume of the gas, and k is a constant. The equation states that the product of pressure and volume is a constant for a given mass of confined gas as long as the temperature is constant. For comparing the same substance under two different sets of conditions, the law can be expressed as P1V1 = P2V2. The equation shows that, as volume increases, the pressure of the gas decreases in proportion; similarly, as volume decreases, the pressure of the gas increases. The law was named after the chemist and physicist Robert Boyle, who published the original law in 1662. The relationship between pressure and volume was first noted by Richard Towneley and Henry Power; Robert Boyle confirmed their discovery through experiments and published the results. According to Robert Gunther and other authorities, it was Boyle's assistant, Robert Hooke, who built the experimental apparatus. Boyle's law is based on experiments with air, which he considered to be a fluid of particles at rest in between small invisible springs. At that time, air was still seen as one of the four elements; Boyle's interest was probably to understand air as an essential element of life, and he published, for example, works on the growth of plants without air. Boyle used a closed J-shaped tube, and after pouring mercury in from one side he forced the air on the other side to contract under the pressure of the mercury. The French physicist Edme Mariotte discovered the same law independently of Boyle in 1679, so the law is also referred to as Mariotte's law or the Boyle–Mariotte law. Instead of a static theory, a kinetic theory is needed, which was provided two centuries later by Maxwell and Boltzmann. This law was the first physical law to be expressed in the form of an equation describing the dependence of two variable quantities. 
The law itself can be stated as follows: Boyle's law is a gas law stating that the pressure and volume of a gas have an inverse relationship when temperature is held constant; if volume increases, then pressure decreases, and vice versa. Therefore, when the volume is halved, the pressure is doubled, and when the volume is doubled, the pressure is halved. Stated another way, Boyle's law holds that at constant temperature, for a fixed mass of gas, the absolute pressure and the volume are inversely proportional. Most gases behave like ideal gases at moderate pressures and temperatures, and the technology of the 17th century could not produce high pressures or low temperatures; hence, deviations from the law were not likely to be noticed at the time of publication. The deviation is expressed as the compressibility factor
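The relation P1 V1 = P2 V2 can be turned into a small worked example. The starting state of 100 kPa and 2.0 L is an arbitrary assumption for illustration:

```python
def boyle_v2(p1, v1, p2):
    """Boyle's law for a fixed mass at constant temperature: P1*V1 = P2*V2.
    Returns the volume V2 at the new pressure P2."""
    return p1 * v1 / p2

# Halving the volume doubles the pressure: P*V stays constant.
k = 100.0 * 2.0            # start at 100 kPa and 2.0 L, so k = 200 kPa*L
p_after = k / 1.0          # volume halved to 1.0 L gives 200 kPa
v_after = boyle_v2(100.0, 2.0, 200.0)  # doubling the pressure halves V
```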
Boyle's law
–
Diving medicine:
Boyle's law
–
Continuum mechanics
25.
Rheology
–
The term rheology was coined by Eugene C. Bingham, a professor at Lafayette College, in 1920, from a suggestion by a colleague. The term was inspired by the aphorism of Simplicius, panta rhei ("everything flows"), and was first used to describe the flow of liquids and the deformation of solids. Newtonian fluids can be characterized by a single coefficient of viscosity for a specific temperature. Although this viscosity will change with temperature, it does not change with the strain rate, and only a small group of fluids exhibit such constant viscosity. The large class of fluids whose viscosity changes with the strain rate are called non-Newtonian fluids. For example, ketchup can have its viscosity reduced by shaking. Ketchup is a shear-thinning material, like yogurt and emulsion paint, exhibiting thixotropy, where an increase in relative flow velocity (for example, by stirring) causes a reduction in viscosity. Some other non-Newtonian materials show the opposite behavior, rheopecty: viscosity going up with relative deformation. Since Sir Isaac Newton originated the concept of viscosity, the study of liquids with strain-rate-dependent viscosity is also often called non-Newtonian fluid mechanics. Materials with the characteristics of a fluid will flow when subjected to a stress, which is defined as the force per area. There are different sorts of stress, and materials can respond differently to different stresses. Much of theoretical rheology is concerned with associating external forces and torques with internal stresses, internal strain gradients and flow velocities. In this sense, a solid undergoing plastic deformation is a fluid. Granular rheology refers to the continuum mechanical description of granular materials. Experimental techniques known as rheometry are concerned with the determination of well-defined rheological material functions; such relationships are then amenable to mathematical treatment by the established methods of continuum mechanics. 
The characterization of flow or deformation originating from a shear stress field is called shear rheometry; the study of extensional flows is called extensional rheology. Shear flows are much easier to study, and thus much more experimental data are available for shear flows than for extensional flows. A rheologist is an interdisciplinary scientist or engineer who studies the flow of liquids or the deformation of soft solids. It is not a degree subject; there is no qualification of rheologist as such, and most rheologists have a qualification in mathematics, the sciences, engineering, medicine, or certain technologies. Elasticity is essentially a time-independent process, as the strains appear the moment the stress is applied. If the material deformation rate increases linearly with increasing applied stress, then the material is viscous in the Newtonian sense. Viscoelastic materials, by contrast, are characterized by a delay between an applied constant stress and the maximum strain
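The contrast drawn above between constant Newtonian viscosity and shear-thinning behavior can be sketched with the power-law (Ostwald–de Waele) model, one common non-Newtonian description not named in the text; the consistency index K and flow index n below are assumed purely for illustration:

```python
def apparent_viscosity(shear_rate, K, n):
    """Power-law (Ostwald-de Waele) model: eta = K * shear_rate**(n - 1).
    n == 1 recovers a constant Newtonian viscosity; n < 1 is shear-thinning."""
    return K * shear_rate ** (n - 1)

rates = (1.0, 10.0, 100.0)  # shear rates in 1/s

# Assumed parameters: a Newtonian oil (n = 1) vs a ketchup-like fluid (n < 1).
newtonian = [apparent_viscosity(g, K=0.1, n=1.0) for g in rates]
thinning = [apparent_viscosity(g, K=10.0, n=0.4) for g in rates]
# The Newtonian viscosity stays at 0.1 Pa*s at every rate;
# the shear-thinning viscosity drops as the shear rate rises.
```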
Rheology
–
Linear structure of cellulose -- the most common component of all organic plant life on Earth. * Note the evidence of hydrogen bonding which increases the viscosity at any temperature and pressure. This is an effect similar to that of polymer crosslinking, but less pronounced.
26.
Viscoelasticity
–
Viscoelasticity is the property of materials that exhibit both viscous and elastic characteristics when undergoing deformation. Viscous materials, like honey, resist shear flow and strain linearly with time when a stress is applied; elastic materials strain when stretched and quickly return to their original state once the stress is removed. Viscoelastic materials have elements of both of these properties and, as such, exhibit time-dependent strain. In the nineteenth century, physicists such as Maxwell, Boltzmann, and Kelvin researched and experimented with creep and recovery of glasses and metals. Viscoelasticity was further examined in the late twentieth century, when synthetic polymers were engineered and used in a variety of applications. Viscoelasticity calculations depend heavily on the viscosity variable, η; the inverse of η is also known as fluidity, φ. The value of either can be derived as a function of temperature or as a given value. Depending on the change of strain rate versus stress inside a material, the viscosity can be categorized as having a linear, non-linear, or plastic response. When a material exhibits a linear response it is categorized as a Newtonian material; in this case the stress is linearly proportional to the strain rate. If the material exhibits a non-linear response to the strain rate, it is categorized as non-Newtonian. There is also an interesting case where the viscosity decreases while the shear/strain rate remains constant; a material which exhibits this type of behavior is known as thixotropic. In addition, when the stress is independent of the strain rate, the material exhibits plastic deformation. Many viscoelastic materials exhibit rubber-like behavior explained by the theory of polymer elasticity. Some examples of viscoelastic materials include amorphous polymers, semicrystalline polymers, biopolymers, and metals at very high temperatures. 
Cracking occurs when the strain is applied quickly and outside of the elastic limit. Ligaments and tendons are viscoelastic, so the extent of the potential damage to them depends both on the rate of the change of their length and on the force applied. The viscosity of a viscoelastic substance gives the substance a strain-rate dependence on time. Purely elastic materials do not dissipate energy when a load is applied and then removed; a viscoelastic substance, however, loses energy, and hysteresis is observed in the stress–strain curve, with the area of the loop being equal to the energy lost during the loading cycle. Since viscosity is the resistance to thermally activated plastic deformation, a viscous material loses energy through a loading cycle. Plastic deformation results in lost energy, which is uncharacteristic of a purely elastic material's reaction to a loading cycle. Specifically, viscoelasticity is a molecular rearrangement: when a stress is applied to a viscoelastic material such as a polymer, parts of its long polymer chains change position. This movement or rearrangement is called creep. Polymers remain a solid material even while these parts of their chains are rearranging to accommodate the stress, and as this occurs, it creates a back stress in the material
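The time-dependent behavior described above is often sketched with the Maxwell model, a spring and dashpot in series. Both the model choice and the parameter values below are illustrative assumptions rather than anything stated in the article:

```python
import math

def maxwell_stress(t, sigma0, E, eta):
    """Stress relaxation of a Maxwell element (spring of modulus E in series
    with a dashpot of viscosity eta) held at constant strain:
    sigma(t) = sigma0 * exp(-t / tau), with relaxation time tau = eta / E."""
    tau = eta / E
    return sigma0 * math.exp(-t / tau)

# Assumed parameters: E = 1e6 Pa, eta = 1e7 Pa*s, so tau = 10 s.
tau = 1e7 / 1e6
s = [maxwell_stress(t, 1000.0, 1e6, 1e7) for t in (0.0, tau, 5 * tau)]
# The stress starts at 1000 Pa and decays by a factor e every relaxation time.
```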
Viscoelasticity
–
Stress–strain curves for a purely elastic material (a) and a viscoelastic material (b). The red area is a hysteresis loop and shows the amount of energy lost (as heat) in a loading and unloading cycle. It is equal to ∮ σ dε, where σ is stress and ε is strain.
Viscoelasticity
–
Different types of responses (σ) to a change in strain rate (dε/dt)
27.
Rheometer
–
A rheometer is a laboratory device used to measure the way in which a liquid, suspension or slurry flows in response to applied forces. It is used for those fluids which cannot be defined by a single value of viscosity and therefore require more parameters to be set and measured; it measures the rheology of the fluid. There are two distinctly different types of rheometers: those that apply shear stress or strain, and those that apply extensional stress or strain. Rotational or shear-type rheometers are usually designed as either a native strain-controlled instrument or a native stress-controlled instrument. The word rheometer comes from the Greek and means a device for measuring flow. In the 19th century it was used for devices to measure electric current, and it was also used in medical practice for the measurement of the flow of liquids; this latter use persisted into the second half of the 20th century in some areas. Following the coining of the term rheology, the word came to be applied to instruments for measuring the character rather than the quantity of flow. The principle and working of rheometers is described in several texts. A dynamic shear rheometer, commonly known as a DSR, is used for research. In one type of instrument, liquid is forced through a tube of constant cross-section and precisely known dimensions under conditions of laminar flow. Either the flow rate or the pressure drop is fixed and the other is measured. Knowing the dimensions, the flow rate can be converted into a value for the shear rate, and varying the pressure or flow allows a flow curve to be determined. In another type, the liquid is placed within the annulus of one cylinder inside another, and one of the cylinders is rotated at a set speed. This determines the shear rate inside the annulus. The liquid tends to drag the other cylinder round, and the force it exerts on that cylinder is measured, which can be converted to a shear stress. One version of this is the Fann V-G viscometer, which runs at two speeds and therefore measures only two points on the flow curve. 
This is sufficient to define a Bingham plastic model, which was historically used in the oil industry for determining the flow character of drilling fluids. In recent years, rheometers that spin at 600, 300, 200, 100, 6 and 3 RPM have been used; this allows more complex fluid models, such as Herschel–Bulkley, to be fitted. Some models allow the speed to be increased and decreased in a programmed fashion. In a cone-and-plate instrument, the liquid is placed on a horizontal plate and a cone is placed into it. The angle between the surface of the cone and the plate is around 1 to 2 degrees but can vary depending on the types of tests being run. Typically the plate is rotated and the torque on the cone is measured
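Fitting the two-parameter Bingham plastic model from two flow-curve points, as a two-speed instrument permits, reduces to solving a straight line for its slope (the plastic viscosity) and intercept (the yield stress). The readings below are hypothetical, invented only to show the arithmetic:

```python
def bingham_fit(rate1, stress1, rate2, stress2):
    """Fit the Bingham model tau = tau0 + mu_p * shear_rate from two
    (shear rate, shear stress) measurements."""
    mu_p = (stress2 - stress1) / (rate2 - rate1)  # plastic viscosity (slope)
    tau0 = stress1 - mu_p * rate1                 # yield stress (intercept)
    return tau0, mu_p

# Hypothetical readings at two shear rates (units arbitrary for illustration):
tau0, mu_p = bingham_fit(300.0, 35.0, 600.0, 55.0)
# slope = 20/300, intercept = 35 - (20/300)*300 = 15
```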
Rheometer
–
Continuum mechanics
28.
Smart fluid
–
A smart fluid is a fluid whose properties can be changed by applying an electric field or a magnetic field. The most developed smart fluids today are fluids whose viscosity increases when a field is applied. In one kind, small magnetic dipoles are suspended in a fluid, and the applied magnetic field causes these small magnets to line up. These magnetorheological or MR fluids have been used in the suspension of the 2002 model of the Cadillac Seville STS automobile and, more recently, other vehicles; depending on road conditions, the damping fluid's viscosity is adjusted. This is more expensive than traditional systems, but it provides better control. Some haptic devices whose resistance to touch can be controlled are also based on these MR fluids. Another major type of smart fluid is the electrorheological or ER fluid; besides fast-acting clutches, brakes, shock absorbers and hydraulic valves, other, more esoteric applications have been suggested. Other smart fluids change their surface tension in the presence of an electric field. Further applications include brakes and seismic dampers, which are used in buildings in seismically active zones to damp the oscillations occurring in an earthquake. More recently, it appears that interest has waned a little, possibly due to various limitations of smart fluids which have yet to be overcome. See also: Continuum mechanics, Electrorheological fluid, Ferrofluid, Fluid mechanics, Magnetorheological fluid, Rheology, Smart glass, Smart metal. http://www.aip.org/tip/INPHFA/vol-9/iss-6/p14.html
Smart fluid
–
Continuum mechanics
29.
Robert Boyle
–
Robert William Boyle FRS was an Anglo-Irish natural philosopher, chemist, physicist and inventor born in Lismore, County Waterford, Ireland. Boyle is largely regarded today as the first modern chemist, and therefore one of the founders of modern chemistry, and one of the pioneers of the modern experimental scientific method. He is best known for Boyle's law, which describes the inversely proportional relationship between the absolute pressure and volume of a gas, if the temperature is kept constant within a closed system. Among his works, The Sceptical Chymist is seen as a cornerstone book in the field of chemistry. He was a devout and pious Anglican and is noted for his writings in theology. Boyle was born in Lismore Castle, in County Waterford, Ireland, the seventh son and fourteenth child of Richard Boyle, 1st Earl of Cork, and Catherine Fenton. Richard Boyle arrived in Dublin from England in 1588 during the Tudor plantations of Ireland, and he had amassed enormous landholdings by the time Robert was born. As a child, Boyle was fostered to a local family. He received private tutoring in Latin, Greek, and French, and when he was eight years old, following the death of his mother, he was sent to Eton College in England. His father's friend, Sir Henry Wotton, was then the provost of the college. During this time, his father hired a private tutor, Robert Carew, who had knowledge of Irish, to act as private tutor to his sons at Eton. After spending over three years at Eton, Robert travelled abroad with a French tutor. They visited Italy in 1641 and remained in Florence during the winter of that year studying the paradoxes of the great star-gazer Galileo Galilei, who was elderly but still living in 1641. Boyle returned to England from continental Europe in mid-1644 with a keen interest in scientific research. His father had died the previous year and had left him the manor of Stalbridge in Dorset, England, and substantial estates in County Limerick in Ireland that he had acquired. 
He took a prominent place among a band of enquirers who met frequently in London, often at Gresham College. Having made several visits to his Irish estates beginning in 1647, Robert moved to Ireland in 1652 but became frustrated at his inability to make progress in his chemical work; in one letter, he described Ireland as a country where chemical spirits were misunderstood. In 1654, Boyle left Ireland for Oxford to pursue his work more successfully. An inscription can be found on the wall of University College, Oxford, on the High Street, marking the spot where Cross Hall stood until the early 19th century; it was here that Boyle rented rooms from the apothecary who owned the Hall. An account of Boyle's work with the air pump was published in 1660 under the title New Experiments Physico-Mechanical, Touching the Spring of the Air. The person who originally formulated the hypothesis behind the law was Henry Power in 1661; Boyle in 1662 included a reference to a paper written by Power. In continental Europe the hypothesis is attributed to Edme Mariotte. In 1680 he was elected president of the society, but declined the honour from a scruple about oaths. He also left a list of 24 possible future inventions; the list is extraordinary because all but a few of the 24 have come true
Robert Boyle
–
Robert Boyle (1627–91)
Robert Boyle
–
Sculpture of a young boy, thought to be Boyle, on his parents' monument in St Patrick's Cathedral, Dublin.
Robert Boyle
–
One of Robert Boyle's notebooks (1690-1691) held by the Royal Society of London. The Royal Society archives holds 46 volumes of philosophical, scientific and theological papers by Boyle and seven volumes of his correspondence.
Robert Boyle
–
Plaque at the site of Boyle and Hooke's experiments in Oxford
30.
Leonhard Euler
–
He also introduced much of the modern mathematical terminology and notation, particularly for mathematical analysis, such as the notion of a mathematical function. He is also known for his work in mechanics, fluid dynamics, optics, and astronomy. Euler was one of the most eminent mathematicians of the 18th century and is held to be one of the greatest in history. He is also considered the most prolific mathematician of all time; his collected works fill 60 to 80 quarto volumes, more than those of anybody else in the field. He spent most of his adult life in Saint Petersburg, Russia, and in Berlin, then the capital of Prussia. A statement attributed to Pierre-Simon Laplace expresses Euler's influence on mathematics: "Read Euler, read Euler, he is the master of us all." Leonhard Euler was born on 15 April 1707, in Basel, Switzerland, to Paul III Euler, a pastor of the Reformed Church, and Marguerite née Brucker, a pastor's daughter. He had two sisters, Anna Maria and Maria Magdalena, and a younger brother, Johann Heinrich. Soon after the birth of Leonhard, the Eulers moved from Basel to the town of Riehen. Paul Euler was a friend of the Bernoulli family; Johann Bernoulli was then regarded as Europe's foremost mathematician and would eventually be the most important influence on young Leonhard. Euler's formal education started in Basel, where he was sent to live with his maternal grandmother. In 1720, aged thirteen, he enrolled at the University of Basel; during that time, he was receiving Saturday afternoon lessons from Johann Bernoulli, who quickly discovered his new pupil's incredible talent for mathematics. In 1726, Euler completed a dissertation on the propagation of sound with the title De Sono. At that time, he was unsuccessfully attempting to obtain a position at the University of Basel. In 1727, he first entered the Paris Academy Prize Problem competition; Pierre Bouguer, who became known as the father of naval architecture, won, and Euler took second place. 
Euler later won this annual prize twelve times. Around this time Johann Bernoulli's two sons, Daniel and Nicolaus, were working at the Imperial Russian Academy of Sciences in Saint Petersburg, and a post at the academy was offered to Euler. In November 1726 Euler eagerly accepted the offer, but delayed making the trip to Saint Petersburg while he applied, unsuccessfully, for a physics professorship at the University of Basel. Euler arrived in Saint Petersburg on 17 May 1727, and he was promoted from his junior post in the medical department of the academy to a position in the mathematics department. He lodged with Daniel Bernoulli, with whom he worked in close collaboration. Euler mastered Russian and settled into life in Saint Petersburg; he also took on an additional job as a medic in the Russian Navy. The Academy at Saint Petersburg, established by Peter the Great, was intended to improve education in Russia; as a result, it was made especially attractive to foreign scholars like Euler
Leonhard Euler
–
Portrait by Jakob Emanuel Handmann (1756)
Leonhard Euler
–
1957 Soviet Union stamp commemorating the 250th birthday of Euler. The text says: 250 years from the birth of the great mathematician, academician Leonhard Euler.
Leonhard Euler
–
Stamp of the former German Democratic Republic honoring Euler on the 200th anniversary of his death. Across the centre it shows his polyhedral formula, nowadays written as " v − e + f = 2".
Leonhard Euler
–
Euler's grave at the Alexander Nevsky Monastery
31.
Robert Hooke
–
Robert Hooke FRS was an English natural philosopher, architect and polymath. Allan Chapman has characterised him as England's Leonardo. Robert Gunther's Early Science in Oxford, a history of science in Oxford during the Protectorate, Restoration and Age of Enlightenment, devotes five of its fourteen volumes to Hooke. Hooke studied at Wadham College, Oxford, during the Protectorate, where he became one of a tightly knit group of ardent Royalists led by John Wilkins. Here he was employed as an assistant to Thomas Willis and to Robert Boyle, and he built some of the earliest Gregorian telescopes and observed the rotations of Mars and Jupiter. In 1665 he inspired the use of microscopes for scientific exploration with his book Micrographia. Based on his microscopic observations of fossils, Hooke was an early proponent of biological evolution. Much of Hooke's scientific work was conducted in his capacity as curator of experiments of the Royal Society. Much of what is known of Hooke's early life comes from an autobiography that he commenced in 1696 but never completed. Richard Waller mentions it in his introduction to The Posthumous Works of Robert Hooke; the work of Waller, along with John Ward's Lives of the Gresham Professors and John Aubrey's Brief Lives, forms the major near-contemporaneous biographical accounts of Hooke. Robert Hooke was born in 1635 in Freshwater on the Isle of Wight to John Hooke. Robert was the last of four children, two boys and two girls, and there was an age difference of seven years between him and the next youngest. Their father John was a Church of England priest, the curate of Freshwater's Church of All Saints, and Robert Hooke was expected to succeed in his education and join the Church. John Hooke also was in charge of a school, and so was able to teach Robert. He was a Royalist and almost certainly a member of a group who went to pay their respects to Charles I when he escaped to the Isle of Wight; Robert, too, grew up to be a staunch monarchist. 
As a youth, Robert Hooke was fascinated by observation and mechanical works. He dismantled a brass clock and built a wooden replica that, by all accounts, worked well enough, and he learned to draw, making his own materials from coal, chalk and ruddle. Hooke quickly mastered Latin and Greek and made a study of Hebrew; at school, too, he embarked on his study of mechanics. It appears that Hooke was one of a group of students whom Busby educated in parallel to the work of the school; contemporary accounts say he was not much seen in the school. In 1653, Hooke secured a chorister's place at Christ Church, Oxford, where he was employed as an assistant to Dr Thomas Willis. There he met the natural philosopher Robert Boyle and gained employment as his assistant from about 1655 to 1662, constructing, operating and demonstrating Boyle's air pump. He did not take his Master of Arts until 1662 or 1663
Robert Hooke
–
Modern portrait of Robert Hooke (Rita Greer 2004), based on descriptions by Aubrey and Waller; no contemporary depictions of Hooke are known to survive.
Robert Hooke
–
Memorial portrait of Robert Hooke at Alum Bay, Isle of Wight, his birthplace, by Rita Greer (2012).
Robert Hooke
–
Robert Boyle
Robert Hooke
–
Diagram of a louse from Hooke's Micrographia
32.
Blaise Pascal
–
Blaise Pascal was a French mathematician, physicist, inventor, writer and Christian philosopher. He was a child prodigy who was educated by his father. Pascal also wrote in defence of the scientific method. In 1642, while still a teenager, he started some pioneering work on calculating machines; after three years of effort and 50 prototypes, he built 20 finished machines over the following 10 years. Following Galileo Galilei and Torricelli, in 1647 he rebutted Aristotle's followers who insisted that nature abhors a vacuum, and Pascal's results caused many disputes before being accepted. In 1646, he and his sister Jacqueline identified with the religious movement within Catholicism known by its detractors as Jansenism. Following a religious experience in late 1654, he began writing works on philosophy. His two most famous works date from this period: the Lettres provinciales and the Pensées, the former set in the conflict between Jansenists and Jesuits. In 1654 he also wrote an important treatise on the arithmetical triangle, and between 1658 and 1659 he wrote on the cycloid and its use in calculating the volume of solids. Pascal had poor health, especially after the age of 18, and he died just two months after his 39th birthday. Pascal was born in Clermont-Ferrand, which is in France's Auvergne region. He lost his mother, Antoinette Begon, at the age of three. His father, Étienne Pascal, who also had an interest in science and mathematics, was a local judge. Pascal had two sisters, the younger Jacqueline and the elder Gilberte. In 1631, five years after the death of his wife, Étienne moved with his children to Paris; the newly arrived family soon hired Louise Delfault, a maid who eventually became an instrumental member of the family. Étienne, who never remarried, decided that he alone would educate his children, for they all showed extraordinary intellectual ability; the young Pascal showed an amazing aptitude for mathematics and science. 
Particularly of interest to Pascal was a work of Desargues on conic sections; following it, Pascal produced a treatise containing what is now known as Pascal's theorem, which states that if a hexagon is inscribed in a circle then the three intersection points of opposite sides lie on a line. Pascal's work was so precocious that Descartes was convinced that Pascal's father had written it. In France at that time offices and positions could be—and were—bought and sold. In 1631 Étienne sold his position as president of the Cour des Aides for 65,665 livres. The money was invested in a government bond which provided, if not a lavish, then certainly a comfortable income, which allowed the Pascal family to live well in Paris. But in 1638 Richelieu, desperate for money to carry on the Thirty Years' War, defaulted on the government's bonds, and suddenly Étienne Pascal's worth had dropped from nearly 66,000 livres to less than 7,300. It was only when Jacqueline performed well in a children's play, with Richelieu in attendance, that Étienne was pardoned
Blaise Pascal
–
Painting of Blaise Pascal made by François II Quesnel for Gérard Edelinck in 1691.
Blaise Pascal
–
An early Pascaline on display at the Musée des Arts et Métiers, Paris
Blaise Pascal
–
Portrait of Pascal
Blaise Pascal
–
Pascal studying the cycloid, by Augustin Pajou, 1785, Louvre
33.
Claude-Louis Navier
–
Claude-Louis Navier was a French engineer and physicist who specialized in mechanics. The Navier–Stokes equations are named after him and George Gabriel Stokes. After the death of his father in 1793, Navier's mother left his education in the hands of his uncle Émiland Gauthey, an engineer with the Corps of Bridges and Roads. In 1802, Navier enrolled at the École polytechnique, and in 1804 continued his studies at the École Nationale des Ponts et Chaussées; he eventually succeeded his uncle as Inspecteur général at the Corps des Ponts et Chaussées. He directed the construction of bridges at Choisy, Asnières and Argenteuil in the Department of the Seine. In 1824, Navier was admitted into the French Academy of Sciences. Navier formulated the general theory of elasticity in a mathematically usable form and is therefore considered to be the founder of modern structural analysis. His major contribution, however, remains the Navier–Stokes equations, central to fluid mechanics. His name is one of the 72 names inscribed on the Eiffel Tower. O'Connor, John J.; Robertson, Edmund F., "Claude-Louis Navier", MacTutor History of Mathematics archive, University of St Andrews
Claude-Louis Navier
–
Bust of Claude Louis Marie Henri Navier at the École Nationale des Ponts et Chaussées
34.
Mechanics
–
Mechanics is an area of science concerned with the behaviour of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment. The scientific discipline has its origins in Ancient Greece with the writings of Aristotle. During the early modern period, scientists such as Khayyam, Galileo, Kepler, and Newton laid the foundation for what is now known as classical mechanics. It is a branch of physics that deals with particles that are either at rest or are moving with velocities significantly less than the speed of light; it can also be defined as a branch of science which deals with the motion of bodies and the forces acting on them. Historically, classical mechanics came first, while quantum mechanics is a comparatively recent invention. Classical mechanics originated with Isaac Newton's laws of motion in Philosophiæ Naturalis Principia Mathematica. Both are commonly held to constitute the most certain knowledge that exists about physical nature. Classical mechanics has especially often been viewed as a model for other so-called exact sciences; essential in this respect is the relentless use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them. Quantum mechanics is of a wider scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects; each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers. Quantum mechanics has superseded classical mechanics at the foundational level and is indispensable for the explanation and prediction of processes at the molecular and atomic scale. However, for macroscopic processes classical mechanics is able to solve problems which are difficult in quantum mechanics and hence remains useful. 
Modern descriptions of such behavior begin with a definition of such quantities as displacement, time, velocity, acceleration and mass. Until about 400 years ago, however, motion was explained from a different point of view. Galileo showed that the speed of falling objects increases steadily during the time of their fall, and that this acceleration is the same for heavy objects as for light ones, provided air friction is discounted. The English mathematician and physicist Isaac Newton improved this analysis by defining force and mass. For objects traveling at speeds close to the speed of light, Newton's laws were superseded by Albert Einstein's theory of relativity; for atomic and subatomic particles, Newton's laws were superseded by quantum theory. For everyday phenomena, however, Newton's three laws of motion remain the cornerstone of dynamics, which is the study of what causes motion. In analogy to the distinction between quantum and classical mechanics, Einstein's general and special theories of relativity have expanded the scope of Newtonian mechanics; the differences between relativistic and Newtonian mechanics become significant and even dominant as the velocity of a massive body approaches the speed of light. Relativistic corrections are also needed for quantum mechanics; general relativity, however, has not been integrated with quantum theory, and the two theories remain incompatible, a hurdle which must be overcome in developing a theory of everything
Mechanics
–
Arabic Machine Manuscript. Unknown date (at a guess: 16th to 19th centuries).
35.
Force
–
In physics, a force is any interaction that, when unopposed, will change the motion of an object. In other words, a force can cause an object with mass to change its velocity; force can also be described intuitively as a push or a pull. A force has both magnitude and direction, making it a vector quantity, and it is measured in the SI unit of newtons and represented by the symbol F. The original form of Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. In an extended body, each part usually applies forces on the adjacent parts; such internal mechanical stresses cause no acceleration of that body, as the forces balance one another. Pressure, the distribution of small forces applied over an area of a body, is a simple type of stress that if unbalanced can cause the body to accelerate. Stress usually causes deformation of materials, or flow in fluids. Early misconceptions about motion persisted in part because the sometimes non-obvious force of friction was not understood; a fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Sir Isaac Newton formulated laws of motion that were not improved on for nearly three hundred years. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known; in order of decreasing strength, they are: strong, electromagnetic, weak, and gravitational. High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction. Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines. 
The mechanical advantage given by a machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes, who was famous for formulating a treatment of buoyant forces inherent in fluids. Aristotle provided a discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different natural places therein. Aristotle believed that objects on Earth, those composed mostly of the elements earth and water, were in their natural place on the ground. He distinguished between the tendency of objects to find their natural place, which led to natural motion, and unnatural or forced motion.
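The momentum form of Newton's second law mentioned above can be checked with a few lines of code: for a body of constant mass, the rate of change of momentum dp/dt reduces to the familiar ma. A minimal sketch with illustrative numbers (none of the values come from the text):

```python
# Newton's second law in its original (momentum) form: F = dp/dt.
# For constant mass m, p = m * v, so dp/dt = m * dv/dt = m * a.
m = 2.0             # mass in kg (illustrative)
v0, v1 = 3.0, 3.5   # velocity (m/s) at two nearby instants
dt = 0.1            # time between the instants (s)

dp_dt = (m * v1 - m * v0) / dt   # rate of change of momentum
a = (v1 - v0) / dt               # acceleration over the interval
f = m * a                        # the familiar form F = ma

assert abs(dp_dt - f) < 1e-9     # the two forms agree when m is constant
```

The momentum form remains valid even when the mass varies, which is why it is regarded as the more general statement.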
Force
–
Aristotle famously described a force as anything that causes an object to undergo "unnatural motion"
Force
–
Forces are also described as a push or pull on an object. They can be due to phenomena such as gravity, magnetism, or anything that might cause a mass to accelerate.
Force
–
Though Sir Isaac Newton's most famous equation is F = ma, he actually wrote down a different form for his second law of motion that did not use differential calculus.
Force
–
Galileo Galilei was the first to point out the inherent contradictions contained in Aristotle's description of forces.
36.
Mechanical engineering
–
Mechanical engineering is the discipline that applies the principles of engineering, physics, and materials science for the design, analysis, manufacturing, and maintenance of mechanical systems. It is the branch of engineering that involves the design, production, and operation of machinery, and it is one of the oldest and broadest of the engineering disciplines. The mechanical engineering field requires an understanding of areas including mechanics, kinematics, thermodynamics, materials science, and structural analysis. Mechanical engineering emerged as a field during the Industrial Revolution in Europe in the 18th century; however, mechanical engineering science emerged in the 19th century as a result of developments in the field of physics. The field has evolved to incorporate advancements in technology, and mechanical engineers today are pursuing developments in such fields as composites and mechatronics. Mechanical engineers may also work in the field of biomedical engineering, specifically with biomechanics, transport phenomena, biomechatronics, and bionanotechnology. Mechanical engineering finds its application in the archives of various ancient and medieval societies. In ancient Greece, the works of Archimedes deeply influenced mechanics in the Western tradition, and Heron of Alexandria created the first steam engine. In China, Zhang Heng improved a water clock and invented a seismometer. During the 7th to 15th century, the era called the Islamic Golden Age, there were remarkable contributions from Muslim inventors in the field of mechanical technology. Al-Jazari, who was one of them, wrote his famous Book of Knowledge of Ingenious Mechanical Devices in 1206, and he is also considered to be the inventor of such mechanical devices which now form the very basis of mechanisms, such as the crankshaft and camshaft. Newton was reluctant to publish his methods and laws for years; Gottfried Wilhelm Leibniz is also credited with creating calculus during the same time frame. 
On the European continent, Johann von Zimmermann founded the first factory for grinding machines in Chemnitz. Education in mechanical engineering has historically been based on a strong foundation in mathematics and science. Degrees in mechanical engineering are offered at universities worldwide. In Spain, Portugal and most of South America, where neither B.Sc. nor B.Tech. programs have been adopted, the formal name for the degree is Mechanical Engineer, and the course work is based on five or six years of training. In Italy the course work is based on five years of education and training. In Greece, the coursework is based on a five-year curriculum and the requirement of a Diploma Thesis, upon completion of which a Diploma is awarded rather than a B.Sc. In Australia, mechanical engineering degrees are awarded as Bachelor of Engineering or similar nomenclature, although there are a number of specialisations. The degree takes four years of study to achieve. To ensure quality in engineering degrees, Engineers Australia accredits engineering degrees awarded by Australian universities in accordance with the global Washington Accord. Before the degree can be awarded, the student must complete at least 3 months of on-the-job work experience in an engineering firm. Similar systems are present in South Africa and are overseen by the Engineering Council of South Africa.
Mechanical engineering
–
Mechanical engineers design and build engines, power plants, other machines...
Mechanical engineering
–
... structures, and vehicles of all sizes.
Mechanical engineering
–
An oblique view of a four-cylinder inline crankshaft with pistons
Mechanical engineering
–
Training FMS with learning robot SCORBOT-ER 4u, workbench CNC Mill and CNC Lathe
37.
Chemical engineering
–
A chemical engineer designs large-scale processes that convert chemicals, raw materials, living cells, microorganisms and energy into useful forms and products. A 1996 British Journal for the History of Science article cites James F. Donnelly for mentioning an 1839 reference to chemical engineering in relation to the production of sulfuric acid. In the same paper, however, George E. Davis, an English consultant, was credited with having coined the term. The History of Science in United States: An Encyclopedia puts this at around 1890. Chemical engineering, describing the use of mechanical equipment in the chemical industry, became common vocabulary in England after 1850. By 1910, the profession of chemical engineer was already in use in Britain. Chemical engineering emerged upon the development of unit operations, a fundamental concept of the discipline. Most authors agree that Davis invented the concept of unit operations, if he did not substantially develop it. He gave a series of lectures on unit operations at the Manchester Technical School in 1887. Three years before Davis's lectures, Henry Edward Armstrong taught a degree course in chemical engineering at the City and Guilds of London Institute. Armstrong's course failed simply because its graduates were not especially attractive to employers; employers of the time would rather have hired chemists and mechanical engineers. Starting from 1888, Lewis M. Norton taught at MIT the first chemical engineering course in the United States. Norton's course was contemporaneous with and essentially similar to Armstrong's course; both courses, however, simply merged chemistry and engineering subjects. Its practitioners had difficulty convincing engineers that they were engineers and chemists that they were not simply chemists. Unit operations was introduced into the course by William Hultz Walker in 1905. By the early 1920s, unit operations became an important aspect of chemical engineering at MIT and other US universities. 
For instance, a 1922 report defined chemical engineering as a science in its own right, based on unit operations, and on that principle a list was published of academic institutions which offered satisfactory chemical engineering courses. Meanwhile, promoting chemical engineering as a distinct science in Britain led to the establishment of the Institution of Chemical Engineers (IChemE) in 1922. IChemE likewise helped make unit operations considered essential to the discipline. By the 1940s, it became clear that unit operations alone was insufficient in developing chemical reactors. While unit operations continued to predominate in chemical engineering courses in Britain, transport phenomena, along with other novel concepts such as process systems engineering (PSE), defined a second paradigm. Transport phenomena gave an analytical approach to chemical engineering, while PSE focused on its elements, such as control systems.
Chemical engineering
–
Chemical engineers design, construct and operate process plants (distillation columns pictured)
Chemical engineering
–
George E. Davis
Chemical engineering
–
Chemical engineers use computers to control automated systems in plants.
Chemical engineering
–
Operators in a chemical plant using an older analog control board, seen in East Germany, 1986.
38.
Geophysics
–
Although geophysics was only recognized as a separate discipline in the 19th century, its origins date back to ancient times. The first magnetic compasses were made from lodestones, and more modern magnetic compasses played an important role in the history of navigation; the first seismic instrument was built in AD 132. Geophysics is applied to societal needs, such as resources and the mitigation of natural hazards. Geophysics is a highly interdisciplinary subject, and geophysicists contribute to every area of the Earth sciences. To provide an idea of what constitutes geophysics, this section describes phenomena that are studied in physics and how they relate to the Earth. The gravitational pull of the Moon and Sun gives rise to two high tides and two low tides every lunar day, or every 24 hours and 50 minutes; therefore, there is a gap of 12 hours and 25 minutes between every high tide and between every low tide. Gravitational forces make rocks press down on deeper rocks, increasing their density as the depth increases. Measurements of gravitational acceleration and gravitational potential are made at the Earth's surface; the surface gravitational field provides information on the dynamics of tectonic plates. The geopotential surface called the geoid is one definition of the shape of the Earth; the geoid would be the global mean sea level if the oceans were in equilibrium and could be extended through the continents. The Earth is cooling, and the resulting heat flow generates the Earth's magnetic field through the geodynamo. The main sources of heat are primordial heat and radioactivity. Some heat is carried up from the bottom of the mantle by mantle plumes. The heat flow at the Earth's surface is about 4.2 × 10¹³ W, and it is a potential source of geothermal energy. Seismic waves are vibrations that travel through the Earth's interior or along its surface; the entire Earth can also oscillate in forms that are called normal modes or free oscillations of the Earth. 
Ground motions from waves or normal modes are measured using seismographs. If the waves come from a localized source such as an earthquake or explosion, measurements at more than one location can be used to locate the source. The locations of earthquakes provide information on plate tectonics and mantle convection, and measurements of seismic waves are a source of information on the region that the waves travel through. If the density or composition of the rock changes suddenly, some waves are reflected; reflections can provide information on near-surface structure. Changes in the direction of travel, called refraction, can be used to infer the deep structure of the Earth. Earthquakes pose a risk to humans; understanding their mechanisms, which depend on the type of earthquake, can lead to better estimates of earthquake risk and improvements in earthquake engineering.
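The tidal and heat-flow figures above invite a quick arithmetic check. A sketch in Python; the Earth's surface area (~5.1 × 10¹⁴ m²) is an assumed textbook value, not from the text:

```python
# Tides: two high tides per lunar day of 24 h 50 min means successive
# high tides are separated by half a lunar day.
lunar_day_min = 24 * 60 + 50          # 1490 minutes
tide_gap_min = lunar_day_min // 2     # 745 minutes
assert tide_gap_min == 12 * 60 + 25   # 12 h 25 min, as stated above

# Mean surface heat flux: the quoted total of ~4.2e13 W spread over the
# Earth's surface area (~5.1e14 m^2, an assumed standard value).
total_heat_flow_w = 4.2e13
earth_surface_m2 = 5.1e14
flux_w_per_m2 = total_heat_flow_w / earth_surface_m2
assert 0.05 < flux_w_per_m2 < 0.1     # roughly 0.08 W/m^2
```

The resulting mean flux is tiny compared with incoming sunlight, which is why geothermal heat is a potential rather than dominant energy source.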
Geophysics
–
Illustration of the deformations of a block by body waves and surface waves (see seismic wave).
Geophysics
–
Age of the sea floor. Much of the dating information comes from magnetic anomalies.
Geophysics
–
Replica of Zhang Heng 's seismoscope, possibly the first contribution to seismology.
39.
Numerical methods
–
Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. Being able to compute the sides of a triangle is important, for instance, in astronomy and carpentry. Numerical analysis continues this tradition of practical mathematical calculations. Much like the Babylonian approximation of the square root of 2, modern numerical analysis does not seek exact answers. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid 20th century, computers calculate the required functions instead, and these same interpolation formulas nevertheless continue to be used as part of the software algorithms for solving differential equations. Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of differential equations. Car companies can improve the safety of their vehicles by using computer simulations of car crashes; such simulations essentially consist of solving differential equations numerically. Hedge funds use tools from all fields of numerical analysis to attempt to calculate the value of stocks. Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments; historically, such algorithms were developed within the overlapping field of operations research. Insurance companies use numerical programs for actuarial analysis. The rest of this section outlines several important themes of numerical analysis. The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. 
The function values printed in such tables are no longer very useful when a computer is available. The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
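The Babylonian approximation of the square root of 2 mentioned above is a good miniature of the whole field: an iterative method producing an approximate answer with a controllable error. A sketch of the Babylonian (Heron) iteration, compared against the sexagesimal value recorded on tablet YBC 7289:

```python
# Babylonian (Heron) iteration for sqrt(2): repeatedly replace a guess x
# by the average of x and 2/x.  Each step roughly doubles the number of
# correct digits.
def babylonian_sqrt2(steps, x=1.5):
    for _ in range(steps):
        x = (x + 2.0 / x) / 2.0
    return x

# The sexagesimal value on tablet YBC 7289: 1;24,51,10 in base 60.
ybc_7289 = 1 + 24/60 + 51/60**2 + 10/60**3

print(ybc_7289)                 # ~1.41421296, matching the tablet
print(babylonian_sqrt2(4))      # ~1.41421356, the true value of sqrt(2)
print(abs(ybc_7289 - 2**0.5))   # error ~6e-7: about six decimal figures
```

This illustrates the point made in the text: the goal is not an exact answer but an approximation with a known, acceptable error.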
Numerical methods
–
Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) with annotations. The approximation of the square root of 2 is four sexagesimal figures, which is about six decimal figures. 1 + 24/60 + 51/60² + 10/60³ = 1.41421296...
Numerical methods
–
Direct method
Numerical methods
40.
Particle image velocimetry
–
Particle image velocimetry (PIV) is an optical method of flow visualization used in education and research. It is used to obtain instantaneous velocity measurements and related properties in fluids. The fluid is seeded with tracer particles which, for sufficiently small particles, are assumed to faithfully follow the flow dynamics. The fluid with entrained particles is illuminated so that the particles are visible, and the motion of the seeding particles is used to calculate the speed and direction of the flow being studied. Other techniques used to measure flows are laser Doppler velocimetry and hot-wire anemometry. The main difference between PIV and those techniques is that PIV produces two-dimensional or even three-dimensional vector fields, while the other techniques measure the velocity at a single point. During PIV, the particle concentration is such that it is possible to identify individual particles in an image. A fiber optic cable or liquid light guide may connect the laser to the lens setup, and PIV software is used to post-process the optical images. While the method of adding particles or objects to a fluid in order to observe its flow is likely to have been used from time to time through the ages, no sustained application of the method is known. The first to use particles to study fluids in a systematic manner was Ludwig Prandtl. Laser Doppler velocimetry predates PIV as a laser-based analysis system to become widespread for research, but it is able to obtain a fluid's velocity measurements only at a specific point. PIV itself found its roots in laser speckle velocimetry, a technique that several groups began experimenting with in the late 1970s. In the early 1980s it was found that it was advantageous to decrease the particle concentration down to levels where individual particles could be observed. The images were recorded using analog cameras and needed an immense amount of computing power to be analyzed. 
With the increasing power of computers and the widespread use of CCD cameras, digital PIV has become increasingly common. The seeding particles are an inherently critical component of the PIV system. Depending on the fluid under investigation, the particles must be able to match the fluid properties reasonably well; otherwise they will not follow the flow satisfactorily enough for the PIV analysis to be considered accurate. Ideal particles will have the same density as the fluid system being used. While the actual particle choice is dependent on the nature of the fluid, generally for macro PIV investigations they are glass beads, polystyrene or polyethylene. The particles are typically of a diameter in the order of 10 to 100 micrometers. For some experiments involving combustion, the seeding particle size may be smaller, in the order of 1 micrometer. Due to the small size of the particles, the particles' motion is dominated by Stokes drag and settling or rising effects. The scattered light from the particles is dominated by Mie scattering; thus the particle size needs to be balanced: large enough to scatter sufficient light to accurately visualize all particles within the laser sheet plane, but small enough to accurately follow the flow.
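The trade-off described above, particles small enough to follow the flow yet large enough to scatter light, can be made concrete with the Stokes settling velocity v = d²(ρp − ρf)g/(18μ). A sketch; the bead and fluid properties below are assumed illustrative values, not from the text:

```python
# Stokes settling velocity of a small sphere in a viscous fluid:
#   v = d^2 * (rho_p - rho_f) * g / (18 * mu)
# A slip velocity much smaller than the flow speed means the tracer
# follows the flow faithfully.
def settling_velocity(d, rho_p, rho_f, mu, g=9.81):
    return d**2 * (rho_p - rho_f) * g / (18 * mu)

# Illustrative case (assumed textbook values): a 10-micrometre glass
# bead (2500 kg/m^3) in water (998 kg/m^3, viscosity 1e-3 Pa*s).
v = settling_velocity(d=10e-6, rho_p=2500.0, rho_f=998.0, mu=1.0e-3)
print(v)   # ~8e-5 m/s: a negligible drift for most macroscopic flows
```

Because the velocity scales with the diameter squared, halving the particle size cuts the settling drift by a factor of four while also reducing the scattered light, which is exactly the balance the text describes.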
Particle image velocimetry
–
Application of PIV in combustion
Particle image velocimetry
–
PIV-Analysis of a vortex pair. The magnification in the upper left shows the increase in spatial resolution that can be achieved using a modern multi-pass window deformation technique.
Particle image velocimetry
–
PIV analysis of a stalled flat plate, shear rate superimposed
41.
On Floating Bodies
–
On Floating Bodies is a Greek-language work consisting of two books written by Archimedes of Syracuse, one of the most important mathematicians, physicists, and engineers of antiquity. On Floating Bodies, which is thought to have been written around 250 BC, survives only partly in Greek. It is the first known work on hydrostatics, of which Archimedes is recognized as the founder. The purpose of On Floating Bodies was to determine the positions that various solids will assume when floating in a fluid, according to their form and the variation in their specific gravities. It contains the first statement of what is now known as Archimedes' principle. Archimedes lived in the Greek city-state of Syracuse, Sicily. He is credited with laying the foundations of hydrostatics and statics. A leading scientist of classical antiquity, Archimedes also developed elaborate systems of pulleys to move large objects with a minimum of effort. The Archimedes screw underpins modern hydroengineering, and his machines of war helped to hold back the armies of Rome in the Second Punic War. Archimedes opposed the arguments of Aristotle, pointing out that it was impossible to separate mathematics and nature. The only known copy of On Floating Bodies in Greek comes from the Archimedes Palimpsest. In the first part of the treatise, Archimedes establishes various general principles, such as that a solid denser than a fluid will, when immersed in that fluid, be lighter by the weight of the fluid it displaces. Archimedes spells out the law of equilibrium of fluids, and proves that water will adopt a spherical form around a center of gravity. This may have been an attempt at explaining the theory of contemporary Greek astronomers such as Eratosthenes that the Earth is round. The fluids described by Archimedes are not self-gravitating, since he assumes the existence of a point towards which all things fall in order to derive the spherical shape. 
Further, Proposition 5 of Archimedes' treatise On Floating Bodies states that any floating object displaces its own weight of fluid. The second book is a mathematical achievement unmatched in antiquity and rarely equaled since. It is restricted to the case when the base of the paraboloid lies either entirely above or entirely below the fluid surface. Archimedes' investigation of paraboloids was probably an idealization of the shapes of ships' hulls; some of his sections float with the base under water and the summit above water. Of his works that survive, the second of his two books of On Floating Bodies is considered his most mature work, commonly described as a tour de force.
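Archimedes' principle, stated in the first book, fixes how deep a uniform solid floats: equating the body's weight ρs·V·g to the buoyant force ρf·Vsub·g gives a submerged volume fraction of ρs/ρf. A small sketch with illustrative densities (not from the text):

```python
# Archimedes' principle: a floating body displaces its own weight of
# fluid.  For a uniform solid (density rho_s) in a fluid (rho_f),
#   rho_s * V * g = rho_f * V_submerged * g
# so the submerged fraction of its volume is rho_s / rho_f.
def submerged_fraction(rho_solid, rho_fluid):
    if rho_solid >= rho_fluid:
        return 1.0   # denser than the fluid: the solid sinks
    return rho_solid / rho_fluid

# Illustrative densities: ice (~917 kg/m^3) in seawater (~1025 kg/m^3).
print(submerged_fraction(917.0, 1025.0))    # ~0.89: mostly under water
print(submerged_fraction(7800.0, 1000.0))   # 1.0: steel sinks
```

The same balance, applied to a paraboloid tilted at an angle, is what the second book analyzes in full generality.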
On Floating Bodies
–
A page from On Floating Bodies in the Archimedes Palimpsest
42.
Jean le Rond d'Alembert
–
Jean-Baptiste le Rond d'Alembert was a French mathematician, mechanician, physicist, philosopher, and music theorist. Until 1759 he was also co-editor with Denis Diderot of the Encyclopédie. D'Alembert's formula for obtaining solutions to the wave equation is named after him, and the wave equation is sometimes referred to as d'Alembert's equation. Born in Paris, d'Alembert was the son of the writer Claudine Guérin de Tencin and the chevalier Louis-Camus Destouches. Destouches was abroad at the time of d'Alembert's birth, and days after the birth his mother left him on the steps of the Saint-Jean-le-Rond de Paris church. According to custom, he was named after the patron saint of the church. D'Alembert was placed in an orphanage for foundling children, but his father found him and placed him with the wife of a glazier, Madame Rousseau. Destouches secretly paid for the education of Jean le Rond, but did not want his paternity officially recognized. D'Alembert first attended a private school; the chevalier Destouches left d'Alembert an annuity of 1200 livres on his death in 1726. Under the influence of the Destouches family, at the age of twelve d'Alembert entered the Jansenist Collège des Quatre-Nations. Here he studied philosophy, law, and the arts, graduating as baccalauréat en arts in 1735. In his later life, d'Alembert scorned the Cartesian principles he had been taught by the Jansenists: physical premotion, innate ideas and the vortices. The Jansenists steered d'Alembert toward an ecclesiastical career, attempting to deter him from pursuits such as poetry. Theology was, however, rather unsubstantial fodder for d'Alembert; he entered law school for two years, and was nominated avocat in 1738. He was also interested in medicine and mathematics. Jean was first registered under the name Daremberg, but later changed it to d'Alembert. The name d'Alembert was proposed by Johann Heinrich Lambert for a moon of Venus. 
In July 1739 he made his first contribution to the field of mathematics, commenting on L'analyse démontrée, at the time a standard work, which d'Alembert himself had used to study the foundations of mathematics. D'Alembert was also a Latin scholar of note and worked in the latter part of his life on a superb translation of Tacitus. In 1740, he submitted his second scientific work, from the field of fluid mechanics: Mémoire sur la réfraction des corps solides. In this work d'Alembert theoretically explained refraction. In 1741, after failed attempts, d'Alembert was elected into the Académie des Sciences.
Jean le Rond d'Alembert
–
Jean-Baptiste le Rond d'Alembert, pastel by Maurice Quentin de La Tour
43.
Pierre-Simon Laplace
–
Pierre-Simon, marquis de Laplace was an influential French scholar whose work was important to the development of mathematics, statistics, physics and astronomy. He summarized and extended the work of his predecessors in his five-volume Mécanique Céleste. This work translated the geometric study of classical mechanics to one based on calculus, opening up a broader range of problems. In statistics, the Bayesian interpretation of probability was developed mainly by Laplace. He formulated Laplace's equation, and pioneered the Laplace transform, which appears in many branches of mathematical physics, a field that he took a leading role in forming. The Laplacian differential operator, widely used in mathematics, is named after him. Laplace is remembered as one of the greatest scientists of all time; sometimes referred to as the French Newton or Newton of France, he has been described as possessing a phenomenal natural mathematical faculty superior to that of any of his contemporaries. Laplace became a count of the Empire in 1806 and was named a marquis in 1817. Laplace was born on 23 March 1749 in Beaumont-en-Auge, a village four miles west of Pont l'Évêque in Normandy. According to W. W. Rouse Ball, his father was Pierre de Laplace, and his great-uncle, Maitre Oliver de Laplace, had held the title of Chirurgien Royal. It would seem that from a pupil he became an usher in the school at Beaumont. However, Karl Pearson is scathing about the inaccuracies in Rouse Ball's account and states that Caen was probably in Laplace's day the most intellectually active of all the towns of Normandy. It was here that Laplace was educated and was provisionally a professor, and it was here he wrote his first paper, published in the Mélanges of the Royal Society of Turin, Tome iv. 1766–1769, at least two years before he went at 22 or 23 to Paris in 1771. Thus before he was 20 he was in touch with Lagrange in Turin. 
He did not go to Paris a raw self-taught country lad with only a peasant background; the École Militaire of Beaumont did not replace the old school until 1776. His parents were from comfortable families; his father was Pierre Laplace, and his mother was Marie-Anne Sochon. The Laplace family was involved in agriculture until at least 1750. Pierre Simon Laplace attended a school in the village run by a Benedictine priory, his father intending that he be ordained in the Roman Catholic Church. At sixteen, to further his father's intention, he was sent to the University of Caen to read theology. At the university, he was mentored by two enthusiastic teachers of mathematics, Christophe Gadbled and Pierre Le Canu, who awoke his zeal for the subject. Here Laplace's brilliance as a mathematician was recognised, and while still at Caen he wrote a memoir Sur le Calcul integral aux differences infiniment petites et aux differences finies. About this time, he recognized that he had no vocation for the priesthood. In this connection reference may perhaps be made to the statement, which has appeared in some notices of him, that he broke altogether with the church and became an atheist. Laplace did not graduate in theology but left for Paris with a letter of introduction from Le Canu to Jean le Rond d'Alembert, who at that time was supreme in scientific circles. According to his great-great-grandson, d'Alembert received him rather poorly, and to get rid of him gave him a mathematics book.
Pierre-Simon Laplace
–
Pierre-Simon Laplace (1749–1827). Posthumous portrait by Jean-Baptiste Paulin Guérin, 1838.
Pierre-Simon Laplace
–
Laplace's house at Arcueil.
Pierre-Simon Laplace
–
Laplace.
Pierre-Simon Laplace
–
Tomb of Pierre-Simon Laplace
44.
Engineers
–
Engineers design materials, structures, and systems while considering the limitations imposed by practicality, regulation, safety, and cost. The word engineer is derived from the Latin words ingeniare and ingenium. The work of engineers forms the link between scientific discoveries and their subsequent applications to human and business needs and quality of life. An engineer's work is predominantly intellectual and varied and not of a routine mental or physical character. It requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of others. He/she is thus placed in a position to make contributions to the development of engineering science or its applications. In due time he/she will be able to give authoritative technical advice. Much of an engineer's time is spent on researching, locating, applying, and transferring information; indeed, research suggests engineers spend 56% of their time engaged in various information behaviours. Engineers must weigh different design choices on their merits and choose the solution that best matches the requirements. Their crucial and unique task is to identify, understand, and interpret the constraints on a design in order to produce a successful result. Engineers apply techniques of engineering analysis in testing, production, or maintenance. Analytical engineers may supervise production in factories and elsewhere, determine the causes of a process failure, and estimate the time and cost required to complete projects. Supervisory engineers are responsible for major components or entire projects. Engineering analysis involves the application of scientific analytic principles and processes to reveal the properties and state of the system, device or mechanism under study. Most engineers specialize in one or more engineering disciplines; numerous specialties are recognized by professional societies, and each of the major branches of engineering has numerous subdivisions. 
Civil engineering, for example, includes structural and transportation engineering, and materials engineering includes ceramic and metallurgical engineering. Mechanical engineering cuts across just about every discipline, since its core essence is applied physics. Engineers also may specialize in one industry, such as vehicles, or in one type of technology. Several recent studies have investigated how engineers spend their time. Research suggests that there are several key themes present in engineers' work: technical work, social work, computer-based work, and information behaviours. Amongst other more detailed findings, a recent work sampling study found that engineers spend 62.92% of their time engaged in technical work and 40.37% in social work, with considerable overlap between the categories. The time engineers spend engaged in such activities is also reflected in the competencies required in engineering roles. There are many branches of engineering, each of which specializes in specific technologies; typically engineers will have deep knowledge in one area and basic knowledge in related areas. When developing a product, engineers work in interdisciplinary teams. For example, when building robots an engineering team will typically have at least three types of engineers: a mechanical engineer would design the body and actuators.
Engineers
–
An electrical engineer, circa 1950
Engineers
–
Engineers conferring on prototype design, 1954
Engineers
–
NASA Launch Control Center Firing Room 2 as it appeared in the Apollo era
Engineers
–
The Challenger disaster is held as a case study of engineering ethics.
45.
George Gabriel Stokes
–
Sir George Gabriel Stokes, 1st Baronet, PRS, was a physicist and mathematician. Born in Ireland, Stokes spent all of his career at the University of Cambridge. In physics, Stokes made seminal contributions to fluid dynamics and to physical optics; in mathematics he formulated the first version of what is now known as Stokes' theorem. He served as secretary, then president, of the Royal Society of London. George Stokes was the youngest son of the Reverend Gabriel Stokes, rector of Skreen, County Sligo, Ireland, where he was born and brought up in an evangelical Protestant family. In accordance with the statutes, he had to resign his fellowship when he married in 1857. He retained his place on the foundation until 1902, when, on the day before his 83rd birthday, he was elected master of his college. He did not hold this position for long, for he died at Cambridge on 1 February the following year, and was buried in the Mill Road cemetery. In 1849, Stokes was appointed to the Lucasian professorship of mathematics at Cambridge; on 1 June 1899, the jubilee of this appointment was celebrated there in a ceremony attended by numerous delegates from European and American universities. Stokes, who was made a baronet in 1889, further served his university by representing it in parliament from 1887 to 1892 as one of the two members for the Cambridge University constituency. During a portion of this period he was also president of the Royal Society; since he was also Lucasian Professor at this time, Stokes was the first person to hold all three positions simultaneously (Newton held the same three, although not at the same time). Stokes's original work began about 1840, and from that date onwards the great extent of his output was only less remarkable than the brilliance of its quality; the Royal Society's catalogue of scientific papers gives the titles of over a hundred memoirs by him published down to 1883. Some of these are brief notes, others are short controversial or corrective statements.
His first published papers, which appeared in 1842 and 1843, were on the motion of incompressible fluids. His work on fluid motion and viscosity led to his calculating the terminal velocity for a sphere falling in a viscous medium. This became known as Stokes' law; he derived an expression for the frictional force exerted on spherical objects at very small Reynolds numbers. His work is the basis of the falling-sphere viscometer, in which the fluid is stationary in a vertical glass tube. A sphere of known size and density is allowed to descend through the liquid; if correctly selected, it reaches terminal velocity, which can be measured by the time it takes to pass two marks on the tube (electronic sensing can be used for opaque fluids). Knowing the terminal velocity and the size and density of the sphere, the viscosity of the fluid can be calculated. A series of steel ball bearings of different diameter is normally used in the classic experiment to improve the accuracy of the calculation.
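The terminal-velocity relation behind the falling-sphere viscometer can be sketched numerically. This is an illustrative example, not from the original text; the material properties (a 1 mm steel ball falling through glycerine) are assumed values chosen so the creeping-flow condition holds.

```python
def stokes_terminal_velocity(radius, rho_sphere, rho_fluid, viscosity, g=9.81):
    """Stokes' law terminal velocity: v = 2 r^2 (rho_s - rho_f) g / (9 mu).

    Valid only for very small Reynolds numbers (creeping flow).
    """
    return 2.0 * radius**2 * (rho_sphere - rho_fluid) * g / (9.0 * viscosity)

# Assumed example values: 1 mm steel ball (7800 kg/m^3) falling in
# glycerine (1260 kg/m^3, dynamic viscosity ~1.49 Pa*s).
v = stokes_terminal_velocity(1e-3, 7800.0, 1260.0, 1.49)  # about 1 cm/s
```

Inverting the same formula for the viscosity, given a measured terminal velocity, is exactly the viscometer calculation described above.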
George Gabriel Stokes
–
Sir George Stokes, Bt.
George Gabriel Stokes
–
Signature
George Gabriel Stokes
–
Fluorspar
George Gabriel Stokes
–
A calcite crystal laid upon a paper with some letters showing the double refraction
46.
Boundary layers
–
In the Earth's atmosphere, the atmospheric boundary layer is the air layer near the ground affected by diurnal heat, moisture or momentum transfer to or from the surface. On an aircraft wing the boundary layer is the part of the flow close to the wing. Boundary layers can be classified according to their structure and the circumstances under which they are created: when a fluid rotates and viscous forces are balanced by the Coriolis effect, an Ekman layer forms; in the theory of heat transfer, a thermal boundary layer occurs. A surface can have multiple types of boundary layer simultaneously. The viscous nature of airflow reduces the local velocities on a surface and is responsible for skin friction. The layer of air over the surface that is slowed down or stopped by viscosity is the boundary layer. There are two different types of boundary layer flow: laminar and turbulent. The laminar boundary layer is a very smooth flow, while the turbulent boundary layer contains swirls or eddies; the laminar flow creates less skin friction drag than the turbulent flow. Boundary layer flow over a wing surface begins as a smooth laminar flow; as the flow continues back from the leading edge, the laminar boundary layer increases in thickness. At some distance back from the leading edge, the smooth laminar flow breaks down and transitions to a turbulent flow; the low-energy laminar flow, however, tends to break down more suddenly than the turbulent layer. The aerodynamic boundary layer was first defined by Ludwig Prandtl in a paper presented on August 12, 1904 at the third International Congress of Mathematicians in Heidelberg. Dividing the flow field into a thin layer near the surface, where viscosity matters, and an outer region, where it can be neglected, allows a closed-form solution for the flow in both areas, a significant simplification of the full Navier–Stokes equations. The majority of the heat transfer to and from a body also takes place within the boundary layer. The pressure in the direction normal to the surface remains constant throughout the boundary layer.
The thickness of the velocity boundary layer is normally defined as the distance from the solid body at which the viscous flow velocity is 99% of the freestream velocity. Displacement thickness is an alternative definition, stating that the boundary layer represents a deficit in mass flow compared to an inviscid flow with slip at the wall; it is the distance by which the wall would have to be displaced in the inviscid case to give the same total mass flow as the viscous case. The no-slip condition requires the flow velocity at the surface of a solid object be zero; the flow velocity then increases rapidly within the boundary layer, governed by the boundary layer equations.
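For a laminar boundary layer on a flat plate, both thicknesses above have standard closed-form estimates from the Blasius solution. The coefficients 5.0 and 1.7208 are the textbook Blasius values, an assumption beyond this text, and the air properties in the comments are illustrative.

```python
import math

def blasius_thickness(x_m, u_inf, nu):
    """99%-velocity thickness of a laminar flat-plate boundary layer:
    delta ~ 5.0 * x / sqrt(Re_x)."""
    re_x = u_inf * x_m / nu  # local Reynolds number based on distance from the leading edge
    return 5.0 * x_m / math.sqrt(re_x)

def displacement_thickness(x_m, u_inf, nu):
    """Distance the wall would be displaced in the inviscid case to give
    the same mass flow: delta* ~ 1.7208 * x / sqrt(Re_x)."""
    re_x = u_inf * x_m / nu
    return 1.7208 * x_m / math.sqrt(re_x)

# Assumed example: air (nu ~ 1.5e-5 m^2/s) at 10 m/s, 0.5 m from the leading edge.
delta = blasius_thickness(0.5, 10.0, 1.5e-5)        # a few millimetres
delta_star = displacement_thickness(0.5, 10.0, 1.5e-5)
```

Note how thin the layer is compared with the plate length, which is what makes Prandtl's two-region simplification useful.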
Boundary layers
–
Ludwig Prandtl
Boundary layers
–
Boundary layer visualization, showing transition from laminar to turbulent condition
47.
Ludwig Prandtl
–
Ludwig Prandtl was a German engineer. In the 1920s he developed the mathematical basis for the fundamental principles of subsonic aerodynamics in particular; his studies identified the boundary layer, thin-airfoil, and lifting-line theories. The Prandtl number was named after him. Prandtl was born in Freising, near Munich, in 1875. His mother suffered from a lengthy illness and, as a result, Ludwig spent more time with his father, a professor of engineering. His father also encouraged him to observe nature and think about his observations. He entered the Technische Hochschule Munich in 1894 and graduated with a Ph.D. under the guidance of Professor August Foeppl within six years. His work at Munich had been in solid mechanics, and his first job was as an engineer designing factory equipment. There he entered the field of fluid mechanics, where he had to design a suction device; after carrying out some experiments, he came up with a new device that worked well and used less power than the one it replaced. In 1901 Prandtl became a professor of mechanics at the technical school in Hannover, and it was here that he developed many of his most important theories. In 1904 he delivered a groundbreaking paper, Fluid Flow in Very Little Friction, in which he described the boundary layer and its importance for drag and streamlining; the paper also described flow separation as a result of the boundary layer. Several of his students made attempts at closed-form solutions, but failed, and in the end the approximation contained in his original paper remains in widespread use. The effect of the paper was so great that Prandtl became director of the Institute for Technical Physics at the University of Göttingen later in the year; over the next decades he developed it into a powerhouse of aerodynamics, leading the world until the end of World War II. In 1925 the university spun off his research arm to create the Kaiser Wilhelm Institute for Flow Research.
Following earlier leads by Frederick Lanchester from 1902–1907, Prandtl worked with Albert Betz on a mathematical treatment of lift from real wings; the results were published in 1918–1919 and are known as the Lanchester–Prandtl wing theory. He also made additions to the theory to study cambered airfoils, like those on World War I aircraft. This work led to the realization that on any wing of finite length, wing-tip effects become very important to the overall performance; considerable work was included on the nature of induced drag and wingtip vortices, which had previously been ignored. These tools enabled aircraft designers to make meaningful theoretical studies of their aircraft before they were built. Prandtl and his student Theodor Meyer developed the first theories of supersonic shock waves and flow in 1908; the Prandtl–Meyer expansion fans allowed for the construction of supersonic wind tunnels. He had little time to work on the problem further until the 1920s; today, all supersonic wind tunnels and rocket nozzles are designed using the same method.
Ludwig Prandtl
–
Ludwig Prandtl
Ludwig Prandtl
–
Ludwig Prandtl 1904 with his fluid test channel
48.
Osborne Reynolds
–
Osborne Reynolds FRS was a prominent innovator in the understanding of fluid dynamics. Separately, his studies of heat transfer between solids and fluids brought improvements in boiler and condenser design. He spent his entire career at what is now called the University of Manchester. Osborne Reynolds was born in Belfast and moved with his parents soon afterward to Dedham. His father worked as a school headmaster and clergyman, but was also a very able mathematician with a keen interest in mechanics, and took out a number of patents for improvements to equipment. Osborne Reynolds attended Queens' College, Cambridge and graduated in 1867 as the seventh wrangler in mathematics; Reynolds had shown an early aptitude and liking for the study of mechanics. For the year following his graduation from Cambridge he again took up a post with an engineering firm: "my attention [was] drawn to various phenomena, for the explanation of which I discovered that a knowledge of mathematics was essential." Reynolds remained at Owens College for the rest of his career – in 1880 the college became a constituent college of the newly founded Victoria University. He was elected a Fellow of the Royal Society in 1877 and awarded the Royal Medal in 1888. Reynolds most famously studied the conditions in which the flow of fluid in pipes transitioned from laminar flow to turbulent flow. In his experiment, a layer of dye was introduced into water flowing through a larger pipe; the pipe was glass so the behaviour of the dyed layer could be observed. When the velocity was low, the dyed layer remained distinct through the entire length of the large tube; when the velocity was increased, the layer broke up at a given point. The point at which this happened was the transition point from laminar to turbulent flow. From these experiments came the dimensionless Reynolds number for dynamic similarity – the ratio of inertial forces to viscous forces.
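The dimensionless ratio Reynolds identified can be sketched in a few lines. The thresholds of roughly 2300 and 4000 for pipe flow are conventional textbook values rather than anything stated in this text, and the water example is an assumption; in reality the transition point also depends on disturbances, as Reynolds himself observed.

```python
def reynolds_number(velocity, diameter, kinematic_viscosity):
    """Re = v * D / nu: the ratio of inertial to viscous forces in pipe flow."""
    return velocity * diameter / kinematic_viscosity

def flow_regime(re):
    """Classify pipe flow using the conventional textbook thresholds."""
    if re < 2300:
        return "laminar"
    if re > 4000:
        return "turbulent"
    return "transitional"

# Assumed example: water (nu ~ 1e-6 m^2/s) at 0.1 m/s in a 20 mm pipe.
re = reynolds_number(0.1, 0.02, 1.0e-6)  # -> 2000, still laminar
```

Because Re is dimensionless, two geometrically similar flows with the same Re behave alike, which is the "dynamic similarity" the text refers to.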
Reynolds also proposed what is now known as Reynolds-averaging of turbulent flows; such averaging allows for a bulk description of turbulent flow, for example using the Reynolds-averaged Navier–Stokes equations. Reynolds' contributions to fluid mechanics were not lost on ship designers; Reynolds himself had a number of papers concerning ship design published in Transactions of the Institution of Naval Architects. His publications in fluid dynamics began in the early 1870s, and his final theoretical model, published in the mid-1890s in the Royal Society paper "On the dynamical theory of incompressible viscous fluids and the determination of the criterion", is still the standard mathematical framework used today.
Osborne Reynolds
–
Osborne Reynolds in 1903
Osborne Reynolds
–
Reynolds' experiment on fluid dynamics in pipes
49.
Andrey Kolmogorov
–
Andrey Kolmogorov was born in Tambov, about 500 kilometers south-southeast of Moscow, in 1903. His mother, Maria Kolmogorova, died giving birth to him, and Andrey was raised by two of his aunts in Tunoshna at the estate of his grandfather, a well-to-do nobleman. Little is known about Andrey's father; he was supposedly named Nikolai Matveevich Kataev and had been an agronomist. Nikolai had been exiled from St. Petersburg to the Yaroslavl province after his participation in the movement against the czars. He disappeared in 1919 and was presumed to have been killed in the Russian Civil War. Andrey Kolmogorov was educated in his aunt Vera's village school, and his earliest literary efforts and mathematical papers were printed in the school journal; Andrey was the editor of its mathematical section. In 1910, his aunt adopted him, and they moved to Moscow. In 1920, Kolmogorov began to study at Moscow State University and, at the same time, at the Mendeleev Moscow Institute of Chemistry and Technology. Kolmogorov writes about this time: "I arrived at Moscow University with a knowledge of mathematics. I knew in particular the beginning of set theory. I studied many questions in articles in the Encyclopedia of Brockhaus and Efron, filling out for myself what was presented too concisely in these articles." Kolmogorov gained a reputation for his wide-ranging erudition. During the same period, Kolmogorov worked out and proved several results in set theory and in the theory of Fourier series. In 1922, Kolmogorov gained international recognition for constructing a Fourier series that diverges almost everywhere; around this time, he decided to devote his life to mathematics. In 1925, Kolmogorov graduated from Moscow State University and began to study under the supervision of Nikolai Luzin, and Kolmogorov became interested in probability theory.
In 1929, Kolmogorov earned his Doctor of Philosophy degree from Moscow State University. In 1930, Kolmogorov went on his first long trip abroad, traveling to Göttingen and Munich, and then to Paris; he had various scientific contacts in Göttingen. His pioneering work, About the Analytical Methods of Probability Theory, was published in 1931; also in 1931, he became a professor at Moscow State University. In 1935, Kolmogorov became the first chairman of the department of probability theory at Moscow State University. Around the same years Kolmogorov contributed to the field of ecology and generalized the Lotka–Volterra model of predator-prey systems. In 1936, Kolmogorov and Alexandrov were involved in the persecution of their common teacher Nikolai Luzin, in the so-called Luzin affair. In a 1938 paper, Kolmogorov established the basic theorems for smoothing and predicting stationary stochastic processes – a paper that had military applications during the Cold War.
Andrey Kolmogorov
–
Andrey Kolmogorov
Andrey Kolmogorov
–
Kolmogorov (left) delivers a talk at a Soviet information theory symposium. (Tallinn, 1973).
Andrey Kolmogorov
–
Kolmogorov works on his talk (Tallinn, 1973).
50.
Geoffrey Ingram Taylor
–
Sir Geoffrey Ingram Taylor OM was a British physicist and mathematician, and a major figure in fluid dynamics and wave theory. His biographer and one-time student, George Batchelor, described him as "one of the most notable scientists of this century". Taylor was born in St. John's Wood, London. His father, Edward Ingram Taylor, was an artist, and his mother, Margaret Boole, came from a family of mathematicians. As a child he was fascinated by science after attending the Royal Institution Christmas Lectures, and performed experiments using paint rollers and sticky-tape. Taylor read mathematics at Trinity College, Cambridge. His first paper was on quanta, showing that Young's slit diffraction experiment produced fringes even with feeble light sources such that, on average, less than one photon was present at a time. He followed this up with work on shock waves, winning a Smith's Prize. In 1910 he was elected to a Fellowship at Trinity College, and the following year he was appointed to a meteorology post, becoming Reader in Dynamical Meteorology; his work on turbulence in the atmosphere led to the publication of "Turbulent motion in fluids". In 1913 Taylor served as a meteorologist aboard the Ice Patrol vessel Scotia, where his observations formed the basis of his later work on a theoretical model of the mixing of the air. At the outbreak of World War I, he was sent to the Royal Aircraft Factory at Farnborough to apply his knowledge to aircraft design; not content just to sit back and do the science, he learned to fly aeroplanes. After the war Taylor returned to Trinity and worked on an application of turbulent flow to oceanography; he also worked on the problem of bodies passing through a rotating fluid. In 1923 he was appointed to a Royal Society research professorship as a Yarrow Research Professor; this enabled him to stop teaching, which he had been doing for the previous four years, and which he both disliked and had no great aptitude for.
He also produced another major contribution to turbulent flow, where he introduced a new approach through a statistical study of velocity fluctuations. The insight was also critical in developing the science of solid mechanics. During World War II, Taylor again applied his expertise to military problems such as the propagation of blast waves, and he was sent to the United States in 1944–1945 as part of the British delegation to the Manhattan Project. At Los Alamos, Taylor helped solve implosion instability problems in the development of atomic weapons, particularly the plutonium bomb used at Nagasaki on August 9, 1945. In 1944 he also received his knighthood and the Copley Medal from the Royal Society. Taylor was present at the Trinity test, July 16, 1945, as part of General Leslie Groves's "VIP List" of just 10 people who observed the test from Compania Hill, about 20 miles northwest of the shot tower. By a strange twist, Joan Hinton, another descendant of the mathematician George Boole, had been working on the same project. His estimate of the test's yield, 22 kt, was remarkably close to the accepted value of 20 kt.
Geoffrey Ingram Taylor
–
Sir Geoffrey Ingram Taylor
51.
Atmospheric pressure
–
Atmospheric pressure, sometimes also called barometric pressure, is the pressure exerted by the weight of air in the atmosphere of Earth. In most circumstances atmospheric pressure is closely approximated by the hydrostatic pressure caused by the weight of air above the measurement point. As elevation increases, there is less overlying atmospheric mass, so atmospheric pressure decreases with increasing elevation. On average, a column of air one square centimetre in cross-section, measured from sea level to the top of the atmosphere, has a mass of about 1.03 kilograms and exerts a force of about 10.1 N, giving a pressure of 10.1 N/cm² or 101 kN/m². A column 1 square inch in cross-section would have a weight of about 14.7 lb or about 65.4 N. Atmospheric pressure is modified by the planetary rotation and by local effects such as wind velocity, density variations due to temperature, and variations in composition. The standard atmosphere is a unit of pressure defined as 101325 Pa. The mean sea level pressure is the average atmospheric pressure at sea level; this is the pressure normally given in weather reports on radio and television. When barometers in the home are set to match the local weather reports, they measure pressure adjusted to sea level. The altimeter setting in aviation is an atmospheric pressure adjustment; average sea-level pressure is 1013.25 mbar. In aviation weather reports, QNH is transmitted around the world in millibars or hectopascals, except in the United States and Canada, where inches of mercury are used; in Canada's public weather reports, however, sea level pressure is instead reported in kilopascals. The highest sea-level pressure on Earth occurs in Siberia, where the Siberian High often attains a sea-level pressure above 1050 mbar; the lowest measurable sea-level pressure is found at the centers of tropical cyclones and tornadoes, with a record low of 870 mbar.
Pressure varies smoothly from the Earth's surface to the top of the mesosphere. Although the pressure changes with the weather, NASA has averaged the conditions for all parts of the earth year-round. As altitude increases, atmospheric pressure decreases, and one can calculate the atmospheric pressure at a given altitude; temperature and humidity also affect the atmospheric pressure, and it is necessary to know these to compute an accurate figure. The graph at right was developed for a temperature of 15 °C; at low altitudes above sea level, the pressure decreases by about 1.2 kPa for every 100 metres. See pressure system for the effects of air pressure variations on weather. Atmospheric pressure shows a diurnal or semidiurnal cycle caused by global atmospheric tides; this effect is strongest in tropical zones, with an amplitude of a few millibars. These variations have two superimposed cycles, a circadian cycle and a semi-circadian cycle. The highest adjusted-to-sea-level barometric pressure recorded on Earth was 1085.7 hPa, measured in Tosontsengel, Mongolia.
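The decrease of pressure with altitude described above can be sketched with the International Standard Atmosphere barometric formula for the troposphere. The constants are standard ISA values (sea-level pressure 101325 Pa, 15 °C, lapse rate 6.5 K/km), not figures from this text, and the formula ignores weather and humidity.

```python
def isa_pressure(h_m):
    """ISA pressure in pascals for altitudes below ~11 km:
    P = P0 * (1 - L*h/T0) ** (g*M / (R*L)),
    i.e. hydrostatic balance combined with a linear temperature lapse."""
    P0 = 101325.0   # sea-level standard pressure, Pa
    T0 = 288.15     # sea-level standard temperature, K
    L = 0.0065      # temperature lapse rate, K/m
    g = 9.80665     # standard gravity, m/s^2
    M = 0.0289644   # molar mass of dry air, kg/mol
    R = 8.31447     # universal gas constant, J/(mol*K)
    return P0 * (1.0 - L * h_m / T0) ** (g * M / (R * L))
```

Near sea level this reproduces the roughly 1.2 kPa drop per 100 m quoted above.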
Atmospheric pressure
–
Kollsman-type barometric aircraft altimeter (as used in North America) displaying an altitude of 80 ft (24 m).
Atmospheric pressure
–
15 year average mean sea level pressure for June, July, and August (top) and December, January, and February (bottom). ERA-15 re-analysis.
Atmospheric pressure
–
A very local storm above Snæfellsjökull, showing clouds formed on the mountain by orographic lift
Atmospheric pressure
–
Hurricane Wilma on 19 October 2005–882 hPa (12.79 psi) in eye
52.
Altitude
–
Altitude or height is defined based on the context in which it is used. As a general definition, altitude is a distance measurement, usually in the vertical or "up" direction; the reference datum also often varies according to the context. Although the term altitude is commonly used to mean the height above sea level of a location, in geography the term elevation is often preferred for this usage. Vertical distance measurements in the "down" direction are commonly referred to as depth. In aviation, the term altitude can have several meanings, and is always qualified by explicitly adding a modifier; parties exchanging altitude information must be clear which definition is being used. Aviation altitude is measured using either mean sea level or local ground level as the reference datum. When flying at a flight level, the altimeter is always set to standard pressure. On the flight deck, the instrument for measuring altitude is the pressure altimeter. There are several types of altitude. Indicated altitude is the reading on the altimeter when it is set to the local barometric pressure at mean sea level; in UK aviation radiotelephony usage, it is the vertical distance of a level, a point or an object considered as a point, measured from mean sea level. Absolute altitude is the height of the aircraft above the terrain over which it is flying; it can be measured using a radar altimeter, and is also referred to as radar height or feet/metres above ground level. True altitude is the actual elevation above mean sea level; it is indicated altitude corrected for non-standard temperature and pressure. Height is the elevation above a reference point, commonly the terrain elevation. Pressure altitude is used to indicate flight level, which is the standard for altitude reporting in the U.S. in Class A airspace. Pressure altitude and indicated altitude are the same when the altimeter setting is 29.92 inHg or 1013.25 millibars.
Density altitude is the altitude corrected for non-ISA (International Standard Atmosphere) atmospheric conditions. Aircraft performance depends on density altitude, which is affected by barometric pressure, humidity and temperature; on a very hot day, density altitude at an airport may be so high as to preclude takeoff. These types of altitude can be explained more simply as various ways of measuring the altitude: indicated altitude – the altitude shown on the altimeter.
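The relationship between indicated, pressure, and density altitude can be sketched with two common flight-training rules of thumb. These approximations (about 1000 ft per inch of mercury, and about 120 ft per °C above ISA temperature) are conventional pilot shorthand assumed here, not figures from this text, and are less accurate than the full ISA equations.

```python
def pressure_altitude_ft(field_elevation_ft, altimeter_inhg):
    """Rule of thumb: ~1000 ft of pressure altitude per inch of mercury
    away from the standard setting of 29.92 inHg."""
    return field_elevation_ft + (29.92 - altimeter_inhg) * 1000.0

def density_altitude_ft(pressure_alt_ft, oat_c):
    """Rule of thumb: add ~120 ft per degree C that the outside air
    temperature exceeds ISA, taking the ISA lapse as ~2 C per 1000 ft."""
    isa_temp_c = 15.0 - 2.0 * (pressure_alt_ft / 1000.0)
    return pressure_alt_ft + 120.0 * (oat_c - isa_temp_c)

# Hypothetical hot day at a 5000 ft pressure-altitude airport, 35 C:
da = density_altitude_ft(5000.0, 35.0)  # well above the field elevation
```

This is the "hot day" effect mentioned above: the aircraft performs as if it were at the higher density altitude, not the field elevation.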
Altitude
–
Vertical distance comparison
53.
Hydraulics
–
Hydraulics is a technology and applied science using engineering, chemistry, and other sciences involving the mechanical properties and use of liquids or fluids. At a very basic level, hydraulics is the liquid counterpart of pneumatics, which concerns gases. Fluid mechanics provides the theoretical foundation for hydraulics, which focuses on the applied engineering using the properties of fluids. In fluid power, hydraulics are used for the generation, control, and transmission of power using pressurized liquids. Hydraulic topics range through some parts of science and most of engineering modules, and cover concepts such as pipe flow, dam design, fluidics and fluid control circuitry, and pumps. The principles of hydraulics are in use naturally in the body, for example within the heart. Free surface hydraulics is the branch of hydraulics dealing with free surface flow, such as occurs in rivers, canals, lakes, and estuaries; its sub-field open channel flow studies the flow in open channels. The word hydraulics originates from the Greek word ὑδραυλικός, which in turn originates from ὕδωρ (water) and αὐλός (pipe). Early uses of water power date back to Mesopotamia and ancient Egypt; other early examples of water power include the Qanat system in ancient Persia and the Turpan water system in ancient Central Asia. The Greeks constructed sophisticated water and hydraulic power systems; an example is the construction by Eupalinos, under a public contract, of a watering channel for Samos, the Tunnel of Eupalinos. An early example of the usage of a hydraulic wheel, probably the earliest in Europe, is the Perachora wheel. Notable is the construction of the first hydraulic automata by Ctesibius and Hero of Alexandria; Hero describes a number of working machines using hydraulic power, such as the force pump. In ancient China there were Sunshu Ao, Ximen Bao, Du Shi, Zhang Heng, and Ma Jun, while medieval China had Su Song and Shen Kuo.
Du Shi employed a waterwheel to power the bellows of a blast furnace producing cast iron; Zhang Heng was the first to employ hydraulics to provide motive power in rotating an armillary sphere for astronomical observation. In ancient Sri Lanka, hydraulics were used in the ancient kingdoms of Anuradhapura. They discovered the principle of the valve tower, or valve pit, for regulating the escape of water from their reservoirs, and by the first century AD several large irrigation works had been completed. The coral on the rock at the site includes cisterns for collecting water. The Romans were among the first to make use of the siphon to carry water across valleys, and they used lead widely in plumbing systems for domestic and public supply; hydraulic mining was used in the gold-fields of northern Spain, which was conquered by Augustus in 25 BC.
Hydraulics
–
Moat and gardens at Sigirya.
Hydraulics
–
An open channel, with a uniform depth, Open Channel Hydraulics deals with uniform and non-uniform streams.
Hydraulics
–
Aqueduct of Segovia, a 1st-century AD masterpiece.
54.
Gravity of Earth
–
The gravity of Earth, which is denoted by g, refers to the acceleration that is imparted to objects due to the distribution of mass within the Earth. In SI units this acceleration is measured in metres per second squared (m/s²) or equivalently in newtons per kilogram (N/kg); the quantity is sometimes referred to informally as little g. The precise strength of Earth's gravity varies depending on location. The nominal average value at the Earth's surface, known as standard gravity, is, by definition, 9.80665 m/s². This quantity is denoted variously as gn, ge, g0, or gee. The weight of an object on the Earth's surface is the downwards force on that object, given by Newton's second law of motion, F = ma. Gravitational acceleration contributes to the total acceleration, but other factors, such as the rotation of the Earth, also contribute. The Earth is not spherically symmetric, but is slightly flatter at the poles while bulging at the Equator; there are consequently slight deviations in both the magnitude and direction of gravity across its surface. The net force as measured by a scale and plumb bob is called effective gravity or apparent gravity; effective gravity includes other factors that affect the net force, such as the centrifugal force at the surface from the Earth's rotation. Effective gravity on the Earth's surface varies by around 0.7%; in large cities, it ranges from 9.766 m/s² in Kuala Lumpur, Mexico City, and Singapore to 9.825 m/s² in Oslo and Helsinki. The surface of the Earth is rotating, so it is not an inertial frame of reference. At latitudes nearer the Equator, the centrifugal force produced by Earth's rotation is larger than at polar latitudes; this counteracts the Earth's gravity to a small degree, up to a maximum of 0.3% at the Equator. The same two factors influence the direction of the effective gravity.
Gravity decreases with altitude as one rises above the Earth's surface, because greater altitude means greater distance from the Earth's centre. All other things being equal, an increase in altitude from sea level to 9,000 metres causes a weight decrease of about 0.29%. It is a common misconception that astronauts in orbit are weightless because they have flown high enough to escape the Earth's gravity; in fact, at an altitude of 400 kilometres, equivalent to a typical orbit of the Space Shuttle, gravity is still nearly 90% as strong as at the surface. Weightlessness actually occurs because orbiting objects are in free-fall. The effect of ground elevation depends on the density of the ground: a person flying at 30,000 ft above sea level over mountains will feel more gravity than someone at the same elevation over the sea; however, a person standing on the Earth's surface feels less gravity when the elevation is higher. The following formula approximates the Earth's gravity variation with altitude: gₕ = g₀ (Rₑ / (Rₑ + h))², where gₕ is the gravitational acceleration at height h above sea level, Rₑ is the Earth's mean radius, and g₀ is the standard gravitational acceleration.
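The free-air formula at the end of the section can be evaluated directly. The mean radius of 6371 km is an assumed standard value; the formula ignores the rotational and density effects discussed above.

```python
def gravity_at_altitude(h_m, g0=9.80665, r_earth_m=6.371e6):
    """Free-air approximation: g_h = g0 * (R / (R + h))**2."""
    return g0 * (r_earth_m / (r_earth_m + h_m)) ** 2

# At a Space Shuttle-type orbital altitude of 400 km, gravity is still
# close to 90% of its sea-level value:
g_orbit = gravity_at_altitude(400e3)
ratio = g_orbit / gravity_at_altitude(0.0)
```

The same function reproduces the roughly 0.3% decrease quoted above for an ascent from sea level to 9,000 m.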
Gravity of Earth
–
Earth's gravity measured by NASA's GRACE mission, showing deviations from the theoretical gravity of an idealized smooth Earth, the so-called earth ellipsoid. Red shows the areas where gravity is stronger than the smooth, standard value, and blue reveals areas where gravity is weaker. (Animated version.)
Gravity of Earth
–
Earth's radial density distribution according to the Preliminary Reference Earth Model (PREM).
55.
Medicine
–
Medicine is the science and practice of the diagnosis, treatment, and prevention of disease. The word medicine is derived from Latin medicus, meaning a physician. Medicine encompasses a variety of health care practices evolved to maintain and restore health by the prevention and treatment of illness. Medicine has existed for thousands of years, during most of which it was an art frequently having connections to the religious and philosophical beliefs of local culture; for example, a medicine man would apply herbs and say prayers for healing. In recent centuries, since the advent of modern science, most medicine has become a combination of art and science. While stitching technique for sutures is an art learned through practice, the knowledge of what happens at the cellular and molecular level in the tissues being stitched arises through science. Prescientific forms of medicine are now known as traditional medicine and folk medicine; they remain commonly used with, or instead of, scientific medicine and are thus called alternative medicine. For example, evidence on the effectiveness of acupuncture is variable and inconsistent for any condition; in contrast, treatments outside the bounds of safety and efficacy are termed quackery. Medical availability and clinical practice vary across the world due to differences in culture. In modern clinical practice, physicians personally assess patients in order to diagnose, treat, and prevent disease. The doctor-patient relationship typically begins with an interaction including an examination of the patient's medical history and medical record, followed by a medical interview and a physical examination. Basic diagnostic medical devices are typically used; after examination for signs and interviewing for symptoms, the doctor may order medical tests, take a biopsy, or prescribe pharmaceutical drugs or other therapies.
Differential diagnosis methods help to rule out conditions based on the information provided. During the encounter, properly informing the patient of all relevant facts is an important part of the relationship and the development of trust. The medical encounter is then documented in the medical record, which is a legal document in many jurisdictions. Follow-ups may be shorter but follow the same general procedure, and the diagnosis and treatment may take only a few minutes or a few weeks depending upon the complexity of the issue. The components of the medical interview and encounter are: Chief complaint – the reason for the current medical visit, given in the patient's own words and recorded along with the duration of each symptom; also called chief concern or presenting complaint. History of present illness – the chronological order of events of symptoms; distinguishable from the history of previous illness, often called past medical history.
Medicine
–
Early Medicine Bottles
Medicine
–
The Doctor, by Sir Luke Fildes (1891)
Medicine
–
The Hospital of Santa Maria della Scala, fresco by Domenico di Bartolo, 1441–1442
56.
Velocity
–
The velocity of an object is the rate of change of its position with respect to a frame of reference, and is a function of time. Velocity is equivalent to a specification of an object's speed and direction of motion. Velocity is an important concept in kinematics, the branch of classical mechanics that describes the motion of bodies. Velocity is a vector quantity; both magnitude and direction are needed to define it. The scalar absolute value of velocity is called speed, a coherent derived quantity measured in the SI system in metres per second (m/s). For example, 5 metres per second is a scalar, whereas 5 metres per second east is a vector. If there is a change in speed, direction or both, then the object has a changing velocity and is said to be undergoing an acceleration. To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path; thus, a constant velocity means motion in a straight line at a constant speed. For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed but not a constant velocity, because its direction changes; hence, the car is considered to be undergoing an acceleration. Speed describes only how fast an object is moving, whereas velocity gives both how fast and in what direction the object is moving. If a car is said to travel at 60 km/h, its speed has been specified; however, if the car is said to move at 60 km/h to the north, its velocity has now been specified. The difference is most noticeable for movement around a circle: the average velocity is calculated from only the displacement between the starting and end points, while the average speed considers the total distance traveled, so over a complete lap the average velocity is zero while the average speed is not. Velocity is defined as the rate of change of position with respect to time; the average velocity can be calculated as v̄ = Δx/Δt. The average velocity is always less than or equal to the average speed of an object.
This can be seen by realizing that while distance is always strictly increasing, displacement can increase or decrease in magnitude as well as change direction. In the one-dimensional case, the area under a velocity vs. time graph is the displacement, x. In calculus terms, the integral of the velocity function v(t) is the displacement function x(t). In the figure, this corresponds to the area under the curve labeled s. The derivative of the position with respect to time gives the change in position divided by the change in time, so velocity is the time derivative of position. Although velocity is defined as the rate of change of position, it is common to start instead with an expression for an object's acceleration. As seen by the three green tangent lines in the figure, an object's instantaneous acceleration at a point in time is the slope of the tangent to the curve of a v(t) graph at that point. In other words, acceleration is defined as the derivative of velocity with respect to time; from there, velocity can be obtained as the area under an acceleration vs. time graph.
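The relation described above, that the area under a velocity vs. time graph is the displacement, can be checked numerically. A minimal sketch, assuming as an example the position x(t) = t², whose velocity is v(t) = 2t; the trapezoid rule over [0, 3] should recover x(3) − x(0) = 9:

```python
# Velocity for the assumed example position x(t) = t**2.
def v(t):
    return 2.0 * t

n = 1000          # number of trapezoid-rule panels
dt = 3.0 / n      # time step over the interval [0, 3]

# Area under the velocity vs. time graph, approximated by trapezoids.
displacement = sum((v(i * dt) + v((i + 1) * dt)) / 2.0 * dt for i in range(n))
print(displacement)   # close to x(3) - x(0) = 9
```

Because v(t) is linear here, the trapezoid rule is exact up to rounding error; for a general velocity profile the approximation improves as the step shrinks.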
Velocity
–
As a change of direction occurs while the cars turn on the curved track, their velocity is not constant.
57.
Nebula
–
A nebula is an interstellar cloud of dust, hydrogen, helium and other ionized gases. Originally, nebula was a name for any diffuse astronomical object; the Andromeda Galaxy, for instance, was once referred to as the Andromeda Nebula before the true nature of galaxies was confirmed in the early 20th century by Vesto Slipher, Edwin Hubble and others. Most nebulae are of vast size, some spanning hundreds of light-years in diameter. The Orion Nebula, the brightest nebula in the sky, occupies a region twice the diameter of the full Moon and can be viewed with the naked eye, yet it was missed by early astronomers. Many nebulae are visible due to fluorescence caused by embedded hot stars, while others are so diffuse they can only be detected with long exposures. Some nebulae are variably illuminated by T Tauri variable stars. Nebulae are often star-forming regions, such as the Pillars of Creation in the Eagle Nebula. In these regions, gas, dust, and other materials clump together to form denser regions, which attract further matter; the remaining material is believed to form planets and other planetary system objects. Around 150 AD, Claudius Ptolemaeus recorded, in books VII–VIII of his Almagest, five stars that appeared nebulous; he also noted a region of nebulosity between the constellations Ursa Major and Leo that was not associated with any star. The first true nebula, as distinct from a star cluster, was mentioned by the Persian astronomer Abd al-Rahman al-Sufi. He noted a little cloud where the Andromeda Galaxy is located, and he also cataloged the Omicron Velorum star cluster as a nebulous star, along with other nebulous objects such as Brocchi's Cluster. The supernova that created the Crab Nebula, SN 1054, was observed by Arabic and Chinese astronomers. In 1610, Nicolas-Claude Fabri de Peiresc discovered the Orion Nebula using a telescope; the nebula was observed again by Johann Baptist Cysat in 1618.
However, the first detailed study of the Orion Nebula was not performed until 1659, by Christiaan Huygens. In 1715, Edmond Halley published a list of six nebulae. This number steadily increased during the century, with Jean-Philippe de Cheseaux compiling a list of 20 in 1746. From 1751 to 1753, Nicolas Louis de Lacaille cataloged 42 nebulae from the Cape of Good Hope, most of which were previously unknown. Charles Messier then compiled a catalog of 103 nebulae by 1781, though his main interest was detecting comets. The number of known nebulae was then greatly expanded by the efforts of William Herschel and his sister Caroline Herschel. Their Catalogue of One Thousand New Nebulae and Clusters of Stars was published in 1786; a second catalog of a thousand was published in 1789, and the third and final catalog of 510 appeared in 1802. During much of their work, William Herschel believed that these nebulae were merely unresolved clusters of stars. In 1790, however, he discovered a star surrounded by nebulosity and concluded that this was a true nebulosity, rather than a more distant cluster.
Nebula
–
The " Pillars of Creation " from the Eagle Nebula. Evidence from the Spitzer Telescope suggests that the pillars may already have been destroyed by a supernova explosion, but the light showing us the destruction will not reach the Earth for another millennium.
Nebula
–
Portion of the Carina Nebula
Nebula
–
The Triangulum Emission Nebula NGC 604
Nebula
–
Herbig–Haro object HH 161 and HH 164.
58.
Traffic engineering (transportation)
–
Traffic engineering is a branch of civil engineering that uses engineering techniques to achieve the safe and efficient movement of people and goods on roadways. It focuses mainly on research for safe and efficient traffic flow, covering road geometry, sidewalks and crosswalks, cycling infrastructure, traffic signs, and road surface markings. Traffic engineering deals with the functional part of the transportation system, as distinct from the infrastructure provided. Typical traffic engineering projects involve designing traffic control device installations and modifications, including signals and signs. Traffic engineers also address traffic safety by investigating locations with high crash rates. Traffic flow management can be short-term or long-term. Traditionally, road improvements have consisted mainly of building additional infrastructure; however, dynamic elements are now being introduced into road traffic management. Dynamic elements have long been used in rail transport. These include sensors to measure flows and automatic, interconnected guidance systems. Traffic flow and speed sensors are also used to detect problems and alert operators, so that the cause of congestion can be determined; these systems are collectively called intelligent transportation systems. At low densities, adding vehicles increases flow; however, above a threshold, increased density reduces speed, and beyond a further threshold, increased density reduces flow as well. Therefore, speeds and lane flows at bottlenecks can be kept high during peak periods by managing traffic density using devices that limit the rate at which vehicles can enter the highway. Ramp meters are signals on entrance ramps that control the rate at which vehicles are allowed to enter the mainline facility. Highway safety engineering is a branch of traffic engineering that deals with reducing the frequency and severity of crashes. It uses physics and vehicle dynamics, as well as road user psychology and human factors engineering. A typical traffic safety investigation follows these steps:
First, locations are selected by looking for sites with higher than average crash rates. Data collection includes obtaining police reports of crashes, observing road user behavior, and collecting information on traffic signs, road surface markings, traffic lights and road geometry. The analyst then looks for collision patterns or road conditions that may be contributing to the problem, and identifies possible countermeasures to reduce the severity or frequency of crashes. The cost/benefit ratios of the alternatives are evaluated, along with whether a proposed improvement will actually solve the problem: for example, preventing left turns at one intersection may eliminate left turn crashes at that location, only to increase them a block away. It must also be asked whether any disadvantages of a proposed improvement are likely to be worse than the problem it is meant to solve. Finally, the result is evaluated; usually, this occurs some time after the implementation.
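The density thresholds described above can be illustrated with the Greenshields model, a common textbook idealization (not prescribed by this text) in which speed falls linearly with density. Flow q = k·v then peaks at half the jam density and drops to zero at the jam density itself; the parameter values here are assumed for illustration:

```python
# Greenshields idealization: linear speed-density relation (assumed values).
V_FREE = 100.0   # free-flow speed, km/h
K_JAM = 120.0    # jam density, vehicles/km

def speed(k):
    """Speed falls linearly from V_FREE at k=0 to zero at the jam density."""
    return V_FREE * (1.0 - k / K_JAM)

def flow(k):
    """Flow = density * speed; rises, peaks at K_JAM/2, then falls."""
    return k * speed(k)

# Flow increases with density at first, then decreases past the threshold.
print(flow(30), flow(60), flow(90))   # 2250.0 3000.0 2250.0
```

This is why ramp metering can keep bottleneck flows high: holding density below the flow-maximizing threshold avoids the falling branch of the curve.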
Traffic engineering (transportation)
–
Complex intersections with multiple vehicle lanes, bike lanes, and crosswalks are common examples of traffic engineering projects
Traffic engineering (transportation)
–
A ramp meter limits the rate at which vehicles can enter the freeway
59.
Shear stress
–
A shear stress, often denoted τ, is defined as the component of stress coplanar with a material cross section. Shear stress arises from the force vector component parallel to the cross section. Normal stress, on the other hand, arises from the force vector component perpendicular to the material cross section on which it acts. The formula to calculate average shear stress is force per unit area, τ = F/A, where τ is the shear stress, F is the force applied, and A is the cross-sectional area of the material, with the area parallel to the applied force vector. Pure shear stress is related to pure shear strain, denoted γ, by the equation τ = γG, where G is the shear modulus of the isotropic material, given by G = E/(2(1 + ν)). Here E is Young's modulus and ν is Poisson's ratio. Beam shear is defined as the internal shear stress of a beam caused by the shear force applied to the beam. The beam shear formula is also known as the Zhuravskii shear stress formula, after Dmitrii Ivanovich Zhuravskii, who derived it in 1855. Shear stresses within a semi-monocoque structure may be calculated by idealizing the cross-section of the structure into a set of stringers and webs; dividing the shear flow by the thickness of a given portion of the structure yields the shear stress. Any real fluid moving along a solid boundary will incur a shear stress on that boundary. The no-slip condition dictates that the speed of the fluid at the boundary is zero, but at some height from the boundary the flow speed must equal that of the fluid; the region between these two points is aptly named the boundary layer. For all Newtonian fluids in laminar flow, the shear stress is proportional to the strain rate in the fluid, where the viscosity is the constant of proportionality. For non-Newtonian fluids, however, this is no longer the case, as for these fluids the viscosity is not constant. The shear stress is imparted onto the boundary as a result of this loss of velocity. Specifically, the wall shear stress is defined as τ_w ≡ μ(∂u/∂y)|_(y=0).
For an isotropic Newtonian flow the viscosity is a scalar, while for anisotropic Newtonian flows it can be a second-order tensor. Conversely, given the shear stress as a function of the flow velocity, the constant one finds in the Newtonian case is the dynamic viscosity of the flow. A flow in which the viscosity depends on the flow velocity, for example μ = 1/u, is non-Newtonian; such a flow can still be isotropic, in which case the viscosity is simply a scalar. This relationship can be exploited to measure the shear stress.
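The two formulas quoted above, average shear stress τ = F/A and the Newtonian relation τ = μ ∂u/∂y, can be sketched directly. The numeric values are illustrative only:

```python
def avg_shear_stress(force_n, area_m2):
    """Average shear stress tau = F / A, in pascals."""
    return force_n / area_m2

def newtonian_shear_stress(mu, du_dy):
    """tau = mu * du/dy for a Newtonian fluid in laminar flow."""
    return mu * du_dy

# 500 N applied over 0.01 m^2 of parallel area:
print(avg_shear_stress(500.0, 0.01))        # ~50000 Pa

# Water-like dynamic viscosity (1e-3 Pa*s) with velocity gradient 200 1/s:
print(newtonian_shear_stress(1.0e-3, 200.0))   # ~0.2 Pa
```

The second function also shows why the wall shear stress needs the gradient evaluated at y = 0: only the near-wall slope of the velocity profile enters the formula.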
Shear stress
–
A shearing force is applied to the top of the rectangle while the bottom is held in place. The resulting shear stress, τ, deforms the rectangle into a parallelogram. The area involved would be the top of the parallelogram.
60.
Conservation of momentum
–
In classical mechanics, linear momentum, translational momentum, or simply momentum is the product of the mass and velocity of an object, quantified in kilogram-meters per second. It is dimensionally equivalent to impulse, the product of force and time. Newton's second law of motion states that the change in linear momentum of a body is equal to the net impulse acting on it. For example, a heavy truck moving rapidly has a large momentum; if the truck were lighter, or moving more slowly, then it would have less momentum. Linear momentum is also a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum cannot change. In classical mechanics, conservation of momentum is implied by Newton's laws. It also holds in special relativity and, with appropriate definitions, a linear momentum conservation law holds in electrodynamics, quantum mechanics, and quantum field theory. It is ultimately an expression of one of the fundamental symmetries of space and time. Linear momentum depends on the frame of reference: observers in different frames would find different values of the linear momentum of a system, but each would observe that the value does not change with time. Momentum has a direction as well as a magnitude; quantities that have both a magnitude and a direction are known as vector quantities. Because momentum has a direction, it can be used to predict the resulting direction of objects after they collide, as well as their speeds. Below, the properties of momentum are described in one dimension; the vector equations are almost identical to the scalar equations. The momentum of a particle is traditionally represented by the letter p. It is the product of two quantities, the mass and velocity: p = mv. The units of momentum are the product of the units of mass and velocity. In SI units, if the mass is in kilograms and the velocity in meters per second, then the momentum is in kilogram meters/second; in cgs units, if the mass is in grams and the velocity in centimeters per second, then the momentum is in gram centimeters/second.
Being a vector, momentum has magnitude and direction. For example, a 1 kg model airplane, traveling due north at 1 m/s in straight and level flight, has a momentum of 1 kg m/s due north measured from the ground. The momentum of a system of particles is the sum of their momenta: if two particles have masses m1 and m2, and velocities v1 and v2, the total momentum is p = p1 + p2 = m1v1 + m2v2. If the particles are moving, the center of mass will generally be moving as well; if the center of mass is moving at velocity vcm, the momentum is p = m vcm. This is known as Euler's first law. If a force F is applied to a particle for a time interval Δt, the momentum of the particle changes by an amount Δp = FΔt.
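The one-dimensional formulas above, p = mv, the sum over particles, and the impulse relation Δp = FΔt, can be sketched in a few lines; the masses, velocities and force values are illustrative only:

```python
def momentum(m, v):
    """Linear momentum p = m * v (kg * m/s in SI units)."""
    return m * v

# Two-particle system (illustrative values; sign encodes direction in 1-D):
m1, v1 = 1.0, 1.0     # e.g. a 1 kg model airplane at 1 m/s
m2, v2 = 3.0, -2.0    # a 3 kg body moving the opposite way
p_total = momentum(m1, v1) + momentum(m2, v2)
print(p_total)        # -5.0 kg*m/s

# Impulse: applying F = 10 N for dt = 0.5 s changes momentum by F*dt = 5.
print(p_total + 10.0 * 0.5)   # 0.0
```

In a closed system with no external force, F = 0 gives Δp = 0, which is the conservation law stated above.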
Conservation of momentum
–
In a game of pool, momentum is conserved; that is, if one ball stops dead after the collision, the other ball will continue away with all the momentum. If the moving ball continues or is deflected then both balls will carry a portion of the momentum from the collision.
61.
Derivative
–
The derivative of a function of a real variable measures the sensitivity to change of the function value with respect to a change in its argument. Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value; for this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. Derivatives may be generalized to functions of several real variables. In this generalization, the derivative is reinterpreted as a linear transformation whose graph is the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables, and it can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector. The process of finding a derivative is called differentiation; the reverse process is called antidifferentiation. The fundamental theorem of calculus states that antidifferentiation is the same as integration; differentiation and integration constitute the two fundamental operations in single-variable calculus. Differentiation is the action of computing a derivative. The derivative of a function y = f(x) of a variable x is a measure of the rate at which the value y of the function changes with respect to the change of the variable x. It is called the derivative of f with respect to x. If x and y are real numbers, and if the graph of f is plotted against x, the derivative is the slope of this graph at each point.
The simplest case, apart from the trivial case of a constant function, is when y is a linear function of x, meaning that the graph of y is a line: y = f(x) = mx + b for real numbers m and b, with slope m = Δy/Δx. This formula is true because y + Δy = f(x + Δx) = m(x + Δx) + b = mx + mΔx + b = y + mΔx. Thus, since y + Δy = y + mΔx, it follows that Δy = mΔx, and this gives an exact value for the slope of the line. If the function f is not linear, however, then the change in y divided by the change in x varies. Differentiation is a method to find an exact value for this rate of change at any given value of x. The idea, illustrated by Figures 1 to 3, is to compute the rate of change as the limiting value of the ratio of the differences Δy / Δx as Δx becomes infinitely small.
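The limiting process described above can be watched numerically: for the assumed example f(x) = x², the difference quotient Δy/Δx at x = 3 equals 6 + Δx, so it approaches the exact derivative 6 as Δx shrinks. A minimal sketch:

```python
def f(x):
    """Example function (assumed for illustration): f(x) = x**2."""
    return x * x

x0 = 3.0
for dx in (1.0, 0.1, 0.001):
    # Difference quotient dy/dx = (f(x0 + dx) - f(x0)) / dx = 6 + dx here.
    print((f(x0 + dx) - f(x0)) / dx)   # approaches the exact derivative 6
```

For very small Δx, floating-point rounding eventually dominates, which is why symbolic differentiation or more careful schemes (e.g. central differences) are used in practice.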
Derivative
–
The graph of a function, drawn in black, and a tangent line to that function, drawn in red. The slope of the tangent line is equal to the derivative of the function at the marked point.
62.
Continuous function
–
In mathematics, a continuous function is a function for which sufficiently small changes in the input result in arbitrarily small changes in the output. Otherwise, a function is said to be a discontinuous function. A continuous function with a continuous inverse function is called a homeomorphism. Continuity of functions is one of the core concepts of topology. The introductory portion of this article focuses on the special case where the inputs and outputs of functions are real numbers. In addition, this article discusses the definition for the general case of functions between two metric spaces. In order theory, especially in domain theory, one considers a related notion of continuity known as Scott continuity. Other forms of continuity do exist but they are not discussed in this article. As an example, consider the function h(t), which describes the height of a growing flower at time t; this function is continuous. By contrast, if M(t) denotes the amount of money in a bank account at time t, then the function jumps at each point in time when money is deposited or withdrawn, so M(t) is discontinuous. A form of the epsilon-delta definition of continuity was first given by Bernard Bolzano in 1817. Cauchy defined continuity in terms of infinitely small changes, and defined infinitely small quantities in terms of variable quantities. The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work wasn't published until the 1930s; all three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872. Note that continuity is a pointwise notion: the function f(x) = 1/x, for example, is continuous on its whole domain R ∖ {0}. A function is continuous at a point if it does not have a hole or jump there; there is a "hole" or "jump" in the graph of a function at a point c if the value of the function at c differs from its limiting value along points that are nearby.
Such a point is called a discontinuity. A function is then continuous if it has no holes or jumps, that is, if it is continuous at every point of its domain; otherwise, the function is discontinuous at the points where its value differs from its limiting value. There are several ways to make this definition mathematically rigorous. These definitions are equivalent to one another, so the most convenient definition can be used to determine whether a given function is continuous or not. In the definitions below, f : I → R is a function defined on a subset I of the set R of real numbers; this subset I is referred to as the domain of f.
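The ε-δ definition can be probed numerically: given ε, continuity at c requires some δ such that |f(x) − f(c)| < ε whenever |x − c| < δ. A minimal sketch (a sampled check, not a proof, with example functions and tolerances assumed for illustration):

```python
def check(f, c, eps, delta, samples=10_000):
    """Sample |x - c| <= delta and test whether |f(x) - f(c)| < eps holds."""
    return all(abs(f(c + delta * (2 * i / samples - 1)) - f(c)) < eps
               for i in range(samples + 1))

def square(x):          # continuous everywhere
    return x * x

def step(x):            # jump discontinuity at x = 0
    return 0.0 if x < 0 else 1.0

print(check(square, 2.0, 0.5, 0.1))    # True: |x-2| < 0.1 keeps |x*x - 4| < 0.5
print(check(step, 0.0, 0.5, 0.01))     # False: the jump defeats every delta
```

A passing check only shows the condition holds at the sampled points; a failing check, as with the step function at its jump, genuinely witnesses that the chosen δ does not work.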
Continuous function
–
Illustration of the ε-δ-definition: for ε=0.5, c=2, the value δ=0.5 satisfies the condition of the definition.
63.
Molecules
–
A molecule is an electrically neutral group of two or more atoms held together by chemical bonds. Molecules are distinguished from ions by their lack of electrical charge; however, in quantum physics, organic chemistry, and biochemistry, the term molecule is often used less strictly, also being applied to polyatomic ions. In the kinetic theory of gases, the term molecule is often used for any gaseous particle regardless of its composition. According to this definition, noble gas atoms are considered molecules, as they are in fact monoatomic molecules. A molecule may be homonuclear, that is, consisting of atoms of one element, as with oxygen (O2), or it may be heteronuclear, consisting of more than one element, as with water (H2O). Atoms and complexes connected by non-covalent interactions, such as hydrogen bonds or ionic bonds, are not typically considered single molecules. Molecules as components of matter are common in organic substances, and they also make up most of the oceans and atmosphere. However, no typical molecule can be defined for ionic crystals and covalent crystals; the theme of repeated unit-cellular-structure also holds for most condensed phases with metallic bonding, which means that solid metals are also not made of molecules. In glasses, atoms may also be held together by chemical bonds with no presence of any definable molecule. The science of molecules is called molecular chemistry or molecular physics, depending on whether the focus is on chemistry or physics; in practice, however, this distinction is vague. In the molecular sciences, a molecule consists of a stable system composed of two or more atoms. Polyatomic ions may sometimes be usefully thought of as electrically charged molecules. The term unstable molecule is used for very reactive species, i.e., short-lived assemblies of electrons and nuclei, such as radicals. According to Merriam-Webster and the Online Etymology Dictionary, the word molecule derives from the Latin moles, or small unit of mass: molecule, "extremely minute particle", from French molécule, from New Latin molecula, diminutive of Latin moles, "mass". A vague meaning at first, the vogue for the word can be traced to the philosophy of Descartes.
The definition of the molecule has evolved as knowledge of the structure of molecules has increased; earlier definitions were less precise, defining molecules as the smallest particles of pure chemical substances that still retain their composition and chemical properties. Molecules are held together by either covalent bonding or ionic bonding. Several types of non-metal elements exist only as molecules in the environment; for example, hydrogen only exists as the hydrogen molecule (H2). A molecule of a compound is made out of two or more elements. A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms.
Molecules
–
Atomic force microscopy image of a PTCDA molecule, which contains five carbon rings in a non-linear arrangement.
Molecules
–
A scanning tunneling microscopy image of pentacene molecules, which consist of linear chains of five carbon rings.
Molecules
–
Arrangement of polyvinylidene fluoride molecules in a nanofiber – transmission electron microscopy image.
Molecules
64.
Statistical mechanics
–
Statistical mechanics is a branch of theoretical physics that uses probability theory to study the average behaviour of a mechanical system where the state of the system is uncertain. A common use of statistical mechanics is in explaining the thermodynamic behaviour of large systems. The branch which treats and extends classical thermodynamics is known as statistical thermodynamics or equilibrium statistical mechanics. Statistical mechanics also finds use outside equilibrium: an important subbranch known as non-equilibrium statistical mechanics deals with the issue of microscopically modelling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions and flows of particles. In physics there are two types of mechanics usually examined: classical mechanics and quantum mechanics. Both describe the precise, deterministic evolution of a single state, whereas in practice the exact state of a large system is never known; statistical mechanics fills this disconnection between the laws of mechanics and the experience of incomplete knowledge by adding some uncertainty about which state the system is in, in the form of the statistical ensemble. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points; in quantum statistical mechanics, the ensemble is a probability distribution over pure states, and can be compactly summarized as a density matrix. These two meanings are equivalent for many purposes, and will be used interchangeably in this article. However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion; thus, the ensemble itself also evolves, as the systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation or the von Neumann equation. One special class of ensembles is those that do not evolve over time.
These ensembles are known as equilibrium ensembles, and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics; non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems. The primary goal of statistical thermodynamics is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles. Whereas statistical mechanics proper involves dynamics, here the attention is focused on statistical equilibrium. Statistical equilibrium does not mean that the particles have stopped moving, only that the ensemble is not evolving. A sufficient condition for statistical equilibrium with an isolated system is that the probability distribution is a function only of conserved properties. There are many different equilibrium ensembles that can be considered, and additional postulates are necessary to motivate why the ensemble for a given system should have one form or another. A common approach found in textbooks is to take the equal a priori probability postulate.
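The idea of an equilibrium ensemble can be illustrated with the canonical ensemble, one common equilibrium ensemble (chosen here as an example; the text does not single it out): states are weighted by exp(−E/kT) and macroscopic quantities are ensemble averages. A minimal sketch for an assumed three-level system, in units where Boltzmann's constant k = 1:

```python
import math

energies = [0.0, 1.0, 2.0]   # assumed three-level system
T = 1.0                      # temperature in units where k = 1

# Boltzmann weights and the partition function Z normalizing them.
weights = [math.exp(-E / T) for E in energies]
Z = sum(weights)
probs = [w / Z for w in weights]

# Ensemble average of the energy: a weighted sum over states.
avg_E = sum(p * E for p, E in zip(probs, energies))
print(probs, avg_E)
```

Because the probabilities depend only on the conserved energy of each state, this distribution satisfies the sufficient condition for statistical equilibrium mentioned above.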
Statistical mechanics
65.
Scale (ratio)
–
The scale ratio of a model represents the proportional ratio of a linear dimension of the model to the same feature of the original. Examples include a 3-dimensional scale model of a building, or the drawings of the elevations or plans of a building. In such cases the scale is dimensionless and exact throughout the model or drawing. The scale can be expressed in four ways: in words, as a ratio, as a fraction, and as a graphical scale. Thus on an architect's drawing one might read "one centimetre to one metre", or 1:100, or 1/100. In general a representation may involve more than one scale at the same time. For example, a drawing showing a new road in elevation might use different horizontal and vertical scales. An elevation of a bridge might be annotated with arrows with a length proportional to a force loading, as in 1 cm to 1000 newtons, and a weather map at some scale may be annotated with wind arrows at a dimensional scale of 1 cm to 20 mph. A town plan may be constructed as an exact scale drawing, but mapping larger areas requires a map projection, and in general the scale of a projection depends on position and direction. The variation of scale may be considerable in small scale maps, which may cover the globe; in large scale maps of small areas, the variation of scale may be insignificant for most purposes. The scale of a map projection must be interpreted as a nominal scale. A scale model is a representation or copy of an object that is larger or smaller than the actual size of the object being represented; very often the model is smaller than the original and used as a guide to making the object in full size. In mathematics, the idea of geometric scaling can be generalized.
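The ratio arithmetic described above, where a 1:100 drawing makes one unit on the plan stand for 100 of the same unit in reality, can be sketched in a couple of helper functions (names and example values are illustrative):

```python
def real_size(drawing_mm, scale_denominator):
    """At 1:N, a drawing length of d units represents d * N units in reality."""
    return drawing_mm * scale_denominator

def drawing_size(real_mm, scale_denominator):
    """Inverse conversion: a real length maps to real / N on the drawing."""
    return real_mm / scale_denominator

print(real_size(10, 100))       # 10 mm on a 1:100 plan -> 1000 mm (1 m) real
print(drawing_size(2500, 100))  # a 2.5 m wall -> 25 mm on the plan
```

Because the scale is dimensionless, the same functions work for any length unit, as long as drawing and reality use the same one.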
Scale (ratio)
–
Da Vinci's Vitruvian Man illustrates the ratios of the dimensions of the human body; a human figure is often used to illustrate the scale of architectural or engineering drawings.
66.
Differential equations
–
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities and the derivatives represent their rates of change. Because such relations are extremely common, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology. In pure mathematics, differential equations are studied from several different perspectives. Only the simplest differential equations are solvable by explicit formulas; however, if a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. Differential equations first came into existence with the invention of calculus by Newton and Leibniz. Jacob Bernoulli proposed the Bernoulli differential equation in 1695; this is a differential equation of the form y′ + P(x)y = Q(x)y^n, for which Leibniz obtained solutions the following year by simplifying it. Historically, the problem of a vibrating string, such as that of a musical instrument, was studied by Jean le Rond d'Alembert, Leonhard Euler, and Daniel Bernoulli. In 1746, d'Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation. The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem: the problem of determining a curve on which a particle will fall to a fixed point in a fixed amount of time. Lagrange solved this problem in 1755 and sent the solution to Euler; both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. In 1822 Fourier published his work on heat flow in Théorie analytique de la chaleur; contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat, and this partial differential equation is now taught to every student of mathematical physics. For example, in classical mechanics, the motion of a body is described by its position and velocity as the time value varies.
Newton's laws allow one to express these variables dynamically as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation may be solved explicitly. An example of modelling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance: the ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is the derivative of its velocity, depends on the velocity, and finding the velocity as a function of time involves solving a differential equation. Differential equations can be divided into several types.
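The falling-ball model above is dv/dt = g − kv, with air resistance proportional to velocity. When no explicit formula is wanted, it can be approximated numerically, as the text notes; a minimal sketch using the forward Euler method, with g and the drag coefficient k as assumed values:

```python
# dv/dt = g - k*v: gravity minus drag proportional to velocity.
g = 9.81     # m/s^2
k = 0.5      # drag coefficient, 1/s (assumed value)

v = 0.0      # released from rest
dt = 0.001   # time step, s

for _ in range(20_000):       # integrate 20 seconds of fall
    v += (g - k * v) * dt     # forward Euler update

print(v)   # approaches the terminal velocity g/k = 19.62 m/s
```

This particular equation also has the closed-form solution v(t) = (g/k)(1 − e^(−kt)), so the numerical answer can be checked against it.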
Differential equations
–
Navier–Stokes differential equations used to simulate airflow around an obstruction.
67.
Kinematic viscosity
–
The viscosity of a fluid is a measure of its resistance to gradual deformation by shear stress or tensile stress. For liquids, it corresponds to the informal concept of thickness. Viscosity is a property of the fluid which opposes the relative motion between two surfaces of the fluid that are moving at different velocities. For a given velocity pattern, the stress required is proportional to the fluid's viscosity. A fluid that has no resistance to shear stress is known as an ideal or inviscid fluid. Zero viscosity is observed only at very low temperatures in superfluids; otherwise, all fluids have positive viscosity and are said to be viscous or viscid. A fluid with a relatively high viscosity, such as pitch, may appear to be a solid. The word viscosity is derived from the Latin viscum, meaning mistletoe. The dynamic viscosity of a fluid expresses its resistance to shearing flows, where adjacent layers move parallel to each other with different speeds. It can be defined through the idealized situation known as Couette flow, in which a fluid is trapped between two horizontal plates, the bottom one fixed and the top one moving at constant speed u. If the speed of the top plate is small enough, the fluid particles will move parallel to it, and their speed will vary linearly from zero at the bottom to u at the top. Each layer of fluid will move faster than the one just below it, and friction between them will give rise to a force resisting their relative motion. In particular, the fluid will apply on the top plate a force in the direction opposite to its motion, and an equal but opposite one to the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed. The magnitude F of this force is found to be proportional to the speed u and the area A of each plate, and inversely proportional to their separation y. The proportionality factor μ in this formula is the viscosity of the fluid. The ratio u/y is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction perpendicular to the plates. Isaac Newton expressed the viscous forces by the differential equation τ = μ ∂u/∂y, where τ = F/A.
This formula assumes that the flow is moving along parallel lines; the differential form of the equation can be used even where the velocity does not vary linearly with y, such as in fluid flowing through a pipe. Use of the Greek letter mu (μ) for the dynamic viscosity is common among mechanical and chemical engineers; however, the Greek letter eta (η) is preferred by chemists and physicists.
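The planar Couette-flow relation above can be sketched directly. This is a minimal numerical illustration; the plate speed, gap, and area below are assumed values, and the viscosity is the familiar figure for water at room temperature.

```python
# Newton's law of viscosity for planar Couette flow with a linear velocity
# profile: tau = mu * (u / y), and the plate force is F = tau * A.
mu = 1.0e-3   # dynamic viscosity of water at ~20 C, Pa*s (approximate)
u = 0.5       # speed of the moving top plate, m/s (assumed)
y = 0.01      # gap between the plates, m (assumed)
A = 0.2       # area of each plate, m^2 (assumed)

shear_rate = u / y        # du/dy for a linear profile, 1/s
tau = mu * shear_rate     # shear stress, Pa
F = tau * A               # force needed to keep the top plate moving, N
print(shear_rate, tau, F)
```

Doubling either the plate speed or the viscosity doubles the required force, which is exactly the linear proportionality the definition of dynamic viscosity expresses.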
Kinematic viscosity
–
Pitch has a viscosity approximately 230 billion (2.3 × 10^11) times that of water.
Kinematic viscosity
–
A simulation of substances with different viscosities. The substance above has lower viscosity than the substance below
Kinematic viscosity
–
Example of the viscosity of milk and water. Liquids with higher viscosities make smaller splashes when poured at the same velocity.
Kinematic viscosity
–
Honey being drizzled.
68.
Calculus
–
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. It has two major branches, differential calculus and integral calculus; these two branches are related to each other by the fundamental theorem of calculus. Both branches make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. Modern calculus is generally considered to have been developed in the 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Today, calculus has widespread uses in science, engineering, and economics, and is a standard part of modern mathematics education. A course in calculus is a gateway to other, more advanced courses in mathematics devoted to the study of functions and limits. Calculus has historically been called the calculus of infinitesimals, or infinitesimal calculus. The word calculus is also used for naming some methods of calculation or theories of computation, such as propositional calculus, the calculus of variations, and the lambda calculus. The ancient period introduced some of the ideas that led to integral calculus. The method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD in order to find the area of a circle. In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method that would later be called Cavalieri's principle to find the volume of a sphere. Indian mathematicians gave a non-rigorous method of a sort of differentiation of some trigonometric functions. In the Middle East, Alhazen derived a formula for the sum of fourth powers; he used the results to carry out what would now be called an integration. Cavalieri's work was not well respected, since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first.
The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory. In other work, Newton developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, because at this time infinitesimal methods were still considered disreputable. These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who is now regarded as an independent inventor of and contributor to calculus. Unlike Newton, Leibniz paid a lot of attention to the formalism, often spending days determining appropriate symbols for concepts. Leibniz and Newton are usually both credited with the invention of calculus. Newton was the first to apply calculus to general physics, and Leibniz developed much of the notation used in calculus today.
Calculus
–
Isaac Newton developed the use of calculus in his laws of motion and gravitation.
Calculus
–
Gottfried Wilhelm Leibniz was the first to publish his results on the development of calculus.
Calculus
–
Maria Gaetana Agnesi
Calculus
–
The logarithmic spiral of the Nautilus shell is a classical image used to depict the growth and change related to calculus
69.
Ideal fluid
–
In physics, a perfect fluid is a fluid that can be completely characterized by its rest frame mass density ρm and isotropic pressure p. Real fluids are "sticky" and conduct heat; perfect fluids are idealized models in which these possibilities are neglected. Specifically, perfect fluids have no shear stresses, viscosity, or heat conduction. Perfect fluids admit a Lagrangian formulation, which allows the techniques used in field theory, in particular quantization, to be applied to fluids. This formulation can be generalized, but unfortunately heat conduction and anisotropic stresses cannot be treated within it. Perfect fluids are often used in general relativity to model idealized distributions of matter, such as the interior of a star or an isotropic universe. In the latter case, the equation of state of the perfect fluid may be used in the Friedmann–Lemaître–Robertson–Walker equations to describe the evolution of the universe.
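The diagonal rest-frame form of the perfect-fluid stress–energy tensor mentioned in the caption below can be sketched as follows. This is a minimal illustration assuming units with c = 1 and a mostly-minus metric signature; the numerical values of density and pressure are illustrative, not physical measurements.

```python
# Rest-frame stress-energy tensor of a perfect fluid: T = diag(rho, p, p, p)
# in units with c = 1. A perfect fluid has no shear (off-diagonal) components.
def perfect_fluid_T(rho, p):
    """Return the 4x4 stress-energy tensor of a perfect fluid in its rest frame."""
    T = [[0.0] * 4 for _ in range(4)]
    T[0][0] = rho                 # energy density
    for i in (1, 2, 3):
        T[i][i] = p               # isotropic pressure on the spatial diagonal
    return T

T = perfect_fluid_T(1.0, 0.25)
off_diagonal = [T[i][j] for i in range(4) for j in range(4) if i != j]
print(all(v == 0.0 for v in off_diagonal))   # no shear stresses or heat flux terms
```

The vanishing off-diagonal entries encode exactly what "no shear stresses, viscosity, or heat conduction" means for this idealized model.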
Ideal fluid
–
The stress–energy tensor of a perfect fluid contains only the diagonal components.
70.
Boundary layer
–
In the Earth's atmosphere, the atmospheric boundary layer is the air layer near the ground affected by diurnal heat, moisture, or momentum transfer to or from the surface. On an aircraft wing, the boundary layer is the part of the flow close to the wing. Laminar boundary layers can be loosely classified according to their structure and the circumstances under which they are created. When a fluid rotates and viscous forces are balanced by the Coriolis effect rather than by convective inertia, an Ekman layer forms. In the theory of heat transfer, a thermal boundary layer occurs. A surface can have multiple types of boundary layer simultaneously. The viscous nature of airflow reduces the local velocities on a surface and is responsible for skin friction. The layer of air over the surface that is slowed down or stopped by viscosity is the boundary layer. There are two different types of boundary layer flow, laminar and turbulent. The laminar boundary layer is a very smooth flow, while the turbulent boundary layer contains swirls or eddies. The laminar flow creates less skin friction drag than the turbulent flow. Boundary layer flow over a wing surface begins as a smooth laminar flow; as the flow continues back from the leading edge, the laminar boundary layer increases in thickness. At some distance back from the leading edge, the flow transitions to turbulence. The low energy laminar flow, however, tends to break down more suddenly than the turbulent layer. The aerodynamic boundary layer was first defined by Ludwig Prandtl in a paper presented on August 12, 1904 at the third International Congress of Mathematicians in Heidelberg. Dividing the flow field into a thin boundary layer near the surface and an effectively inviscid outer region allows a closed-form solution for the flow in both areas, a significant simplification of the full Navier–Stokes equations. The majority of the heat transfer to and from a body also takes place within the boundary layer. The pressure distribution in the direction normal to the surface remains constant throughout the boundary layer.
The thickness of the velocity boundary layer is normally defined as the distance from the solid body at which the viscous flow velocity is 99% of the freestream velocity. Displacement thickness is an alternative definition, stating that the boundary layer represents a deficit in mass flow compared to an inviscid flow with slip at the wall. It is the distance by which the wall would have to be displaced in the inviscid case to give the same total mass flow as the viscous case. The no-slip condition requires the flow velocity at the surface of a solid object to be zero. The flow velocity then increases rapidly within the boundary layer, governed by the boundary layer equations.
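The 99% thickness defined above can be estimated for the simplest case, laminar flow over a flat plate, using the classical Blasius result δ ≈ 5.0·x/√Re_x. This is a sketch for that idealized case only; the freestream velocity and fluid properties below are assumed illustrative values.

```python
import math

def blasius_delta(x, U, nu):
    """99% boundary-layer thickness for laminar flow over a flat plate,
    using the classical Blasius scaling: delta ~ 5.0 * x / sqrt(Re_x)."""
    re_x = U * x / nu                 # local Reynolds number based on distance x
    return 5.0 * x / math.sqrt(re_x)

# Illustrative values: air at room temperature flowing over a plate.
nu_air = 1.5e-5   # kinematic viscosity of air, m^2/s (approximate)
U = 10.0          # freestream velocity, m/s (assumed)
for x in (0.1, 0.5, 1.0):
    print(x, blasius_delta(x, U, nu_air))
```

The thickness grows like the square root of the distance from the leading edge, matching the statement that the laminar boundary layer increases in thickness downstream.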
Boundary layer
–
Ludwig Prandtl
71.
Laminar flow
–
In fluid dynamics, laminar flow occurs when a fluid flows in parallel layers, with no disruption between the layers. At low velocities, the fluid tends to flow without lateral mixing; there are no cross-currents perpendicular to the direction of flow, nor eddies or swirls of fluids. In laminar flow, the motion of the particles of the fluid is very orderly, with particles close to a solid surface moving in straight lines parallel to that surface. Laminar flow is a flow regime characterized by high momentum diffusion and low momentum convection. Laminar flow tends to occur at lower velocities, below a threshold at which it becomes turbulent. Turbulent flow is a less orderly flow regime that is characterised by eddies or small packets of fluid particles, which result in lateral mixing. In non-scientific terms, laminar flow is smooth while turbulent flow is rough. The type of flow occurring in a fluid in a channel is important in fluid dynamics problems and subsequently affects heat and mass transfer in fluid systems. The dimensionless Reynolds number is an important parameter in the equations that describe whether fully developed flow conditions lead to laminar or turbulent flow. Laminar flow generally occurs when the fluid is moving slowly or the fluid is very viscous. If the Reynolds number is very small, much less than 1, then the fluid will exhibit Stokes, or creeping, flow. The specific calculation of the Reynolds number, and the values where laminar flow occurs, will depend on the geometry of the flow system. For flow in a pipe, the Reynolds number is Re = ρVD/μ = VD/ν = QD/(νA), where: Q is the volumetric flow rate; D is the pipe diameter; A is the pipe's cross-sectional area; V is the mean velocity of the fluid; μ is the dynamic viscosity of the fluid; ν is the kinematic viscosity of the fluid, ν = μ/ρ; and ρ is the density of the fluid. For such systems, laminar flow occurs when the Reynolds number is below a critical value of approximately 2,040, though the transition range is typically between 1,800 and 2,100.
For fluid systems occurring on external surfaces, such as flow past objects suspended in the fluid, the particle Reynolds number Rep would be used instead. As with flow in pipes, laminar flow typically occurs at lower Reynolds numbers, while turbulent flow and related phenomena, such as vortex shedding, occur at higher Reynolds numbers. A common application of laminar flow is the smooth flow of a viscous liquid through a tube or pipe. In that case, the velocity of flow varies from zero at the walls to a maximum along the cross-sectional centre of the vessel. The flow profile of laminar flow in a tube can be calculated by dividing the flow into thin cylindrical elements. Another example is the flow of air over an aircraft wing; the boundary layer is a thin sheet of air lying over the surface of the wing.
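The pipe-flow criterion above can be sketched directly. This is a minimal illustration using the transition range quoted in the text; the pipe diameter and velocities are assumed values, and the fluid properties are the familiar figures for water.

```python
def reynolds_pipe(rho, V, D, mu):
    """Reynolds number for pipe flow: Re = rho * V * D / mu."""
    return rho * V * D / mu

def regime(re, low=1800, high=2100):
    """Classify flow using the transition range from the text (critical ~2,040)."""
    if re < low:
        return "laminar"
    if re > high:
        return "turbulent"
    return "transitional"

# Illustrative case: water (rho ~ 998 kg/m^3, mu ~ 1.0e-3 Pa*s) in a 2 cm pipe.
re_slow = reynolds_pipe(998.0, 0.05, 0.02, 1.0e-3)   # mean velocity 5 cm/s
re_fast = reynolds_pipe(998.0, 1.0, 0.02, 1.0e-3)    # mean velocity 1 m/s
print(re_slow, regime(re_slow))
print(re_fast, regime(re_fast))
```

The slow case falls well inside the laminar regime, while a twentyfold increase in mean velocity pushes the flow far past the transition range.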
Laminar flow
–
A sphere in Stokes flow, at very low Reynolds number. An object moving through a fluid experiences a force in the direction opposite to its motion.
72.
Speed of sound
–
The speed of sound is the distance travelled per unit time by a sound wave as it propagates through an elastic medium. In dry air at 20 °C, the speed of sound is 343 metres per second. The speed of sound in an ideal gas depends only on its temperature and composition; the speed has only a weak dependence on frequency and pressure in ordinary air. In common everyday speech, speed of sound refers to the speed of sound waves in air. However, the speed of sound varies from substance to substance: sound travels most slowly in gases, faster in liquids, and faster still in solids. For example, sound travels at 343 m/s in air but at 1,484 m/s in water, and in an exceptionally stiff material such as diamond, sound travels at 12,000 m/s, which is around the maximum speed that sound will travel under normal conditions. Sound waves in solids are composed of compression waves and a different type of sound wave called a shear wave, which occurs only in solids. Shear waves and compression waves in solids usually travel at different speeds, as exhibited in seismology. The speed of compression waves in solids is determined by the medium's compressibility, shear modulus, and density; the speed of shear waves is determined only by the solid material's shear modulus and density. In fluid dynamics, the speed of sound in a fluid medium is used as a relative measure for the speed of an object moving through the medium. The ratio of the speed of an object to the speed of sound in the fluid is called the object's Mach number; objects moving at speeds greater than Mach 1 are said to be traveling at supersonic speeds. During the 17th century, there were several attempts to measure the speed of sound accurately, including attempts by Marin Mersenne in 1630 and Pierre Gassendi in 1635. In 1709, the Reverend William Derham, Rector of Upminster, published a more accurate measure of the speed of sound. Measurements were made of gunshots from a number of local landmarks; the distance was known by triangulation, and thus the speed that the sound had travelled was calculated.
The transmission of sound can be illustrated by using a model consisting of an array of spherical balls interconnected by springs. For a real material, the balls represent the molecules and the springs represent the bonds between them. Sound passes through the model by compressing and expanding the springs, transmitting energy to neighbouring balls, which in turn transmit energy to their springs, and so on. The speed of sound through the model depends on the stiffness of the springs and the mass of the balls. As long as the spacing of the balls remains constant, stiffer springs transmit energy more quickly. Effects like dispersion and reflection can also be understood using this model. In a real material, the stiffness of the springs is known as the elastic modulus.
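The claim that the speed of sound in an ideal gas depends only on temperature and composition can be sketched with the standard formula c = √(γRT/M). This is a minimal illustration; the default values are the familiar figures for dry air and are not taken from the text.

```python
import math

def speed_of_sound_ideal_gas(T, gamma=1.4, M=0.028964):
    """c = sqrt(gamma * R * T / M) for an ideal gas.
    Defaults assume dry air: gamma = 1.4, molar mass ~28.964 g/mol."""
    R = 8.314462618  # universal gas constant, J/(mol*K)
    return math.sqrt(gamma * R * T / M)

c_20C = speed_of_sound_ideal_gas(293.15)   # 20 C in kelvin
c_0C = speed_of_sound_ideal_gas(273.15)    # 0 C in kelvin
print(c_20C, c_0C)
```

The 20 °C result lands very close to the 343 m/s quoted in the text, and the colder air gives a noticeably lower speed, showing the temperature dependence with pressure nowhere in the formula.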
Speed of sound
–
U.S. Navy F/A-18 traveling near the speed of sound. The white halo consists of condensed water droplets formed by the sudden drop in air pressure behind the shock cone around the aircraft (see Prandtl-Glauert singularity).
Speed of sound
–
Pressure-pulse or compression-type wave (longitudinal wave) confined to a plane. This is the only type of sound wave that travels in fluids (gases and liquids)
73.
Drag (physics)
–
In fluid dynamics, drag is a force acting opposite to the relative motion of any object moving with respect to a surrounding fluid. This can exist between two fluid layers or between a fluid and a solid surface. Unlike other resistive forces, such as dry friction, which are nearly independent of velocity, drag forces depend on velocity. Drag force is proportional to the velocity for laminar flow and to the squared velocity for turbulent flow. Even though the ultimate cause of drag is viscous friction, turbulent drag is independent of viscosity. Drag forces always decrease fluid velocity relative to the solid object in the fluid's path. In the case of viscous drag of fluid in a pipe, the drag force on the immobile pipe decreases fluid velocity relative to the pipe. In the physics of sports, the drag force is necessary to explain the performance of runners, particularly of sprinters. Types of drag are generally divided into the following categories: parasitic drag, consisting of form drag, skin friction, and interference drag; lift-induced drag; and wave drag. The phrase parasitic drag is mainly used in aerodynamics, since for lifting wings drag is in general small compared to lift. For flow around bluff bodies, form and interference drags often dominate. Further, lift-induced drag is only relevant when wings or a lifting body are present, and is therefore usually discussed either in aviation or in the design of semi-planing or planing hulls. Wave drag occurs either when an object is moving through a fluid at or near the speed of sound or when a solid object is moving along a fluid boundary. Drag depends on the properties of the fluid and on the size, shape, and speed of the object. At low Re, the drag coefficient C_D is asymptotically proportional to Re^−1, which means that the drag is linearly proportional to the speed. At high Re, C_D is more or less constant. The graph to the right shows how C_D varies with Re for the case of a sphere. As mentioned, the drag equation with a constant drag coefficient gives the force experienced by an object moving through a fluid at relatively large velocity. This is also called quadratic drag. The equation is attributed to Lord Rayleigh, who originally used L^2 in place of A.
Sometimes a body is a composite of different parts, each with a different reference area. In the case of a wing, the reference areas are the same, and the drag force is in the same ratio to the lift force as the ratio of the drag coefficient to the lift coefficient. Therefore, the reference area for a wing is often the planform area rather than the frontal area. For an object with a smooth surface and non-fixed separation points, like a sphere or circular cylinder, the drag coefficient may vary with Reynolds number Re. For an object with well-defined fixed separation points, like a disk with its plane normal to the flow direction, the drag coefficient is essentially constant.
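The quadratic drag regime discussed above can be sketched with the standard drag equation F = ½·ρ·v²·C_D·A. This is a minimal illustration; the drag coefficient and reference area below are assumed example values (roughly those of a small sphere-like object in air), not data from the text.

```python
def drag_force(rho, v, cd, area):
    """Quadratic drag at high Reynolds number: F_d = 0.5 * rho * v^2 * C_D * A."""
    return 0.5 * rho * v * v * cd * area

rho_air = 1.225       # air density at sea level, kg/m^3 (approximate)
cd, area = 0.47, 0.05 # assumed drag coefficient and frontal area, m^2

F_30 = drag_force(rho_air, 30.0, cd, area)
F_60 = drag_force(rho_air, 60.0, cd, area)
print(F_30, F_60)     # doubling the speed quadruples the drag force
```

The factor-of-four jump when the speed doubles is the signature of quadratic drag, in contrast to the linear velocity dependence of the low-Re (Stokes) regime.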
Drag (physics)
–
The power curve: form and induced drag vs. airspeed
74.
Friction
–
Friction is the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other. There are several types of friction. Dry friction resists relative lateral motion of two solid surfaces in contact; it is subdivided into static friction between non-moving surfaces and kinetic friction between moving surfaces. Fluid friction describes the friction between layers of a viscous fluid that are moving relative to each other. Lubricated friction is a case of fluid friction where a lubricant fluid separates two solid surfaces. Skin friction is a component of drag, the force resisting the motion of a fluid across the surface of a body. Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation. When surfaces in contact move relative to each other, the friction between the two surfaces converts kinetic energy into thermal energy. This property can have dramatic consequences, as illustrated by the use of friction created by rubbing pieces of wood together to start a fire. Kinetic energy is converted to thermal energy whenever motion with friction occurs. Another important consequence of many types of friction can be wear, which may lead to performance degradation and/or damage to components. Friction is a component of the science of tribology. Friction is not itself a fundamental force. Dry friction arises from a combination of adhesion, surface roughness, and surface deformation. The complexity of these interactions makes the calculation of friction from first principles impractical and necessitates the use of empirical methods for analysis. Friction is a non-conservative force: work done against friction is path dependent, and in the presence of friction some energy is always lost in the form of heat, so mechanical energy is not conserved. The Greeks, including Aristotle, Vitruvius, and Pliny the Elder, were interested in the cause and mitigation of friction.
They were aware of differences between static and kinetic friction, with Themistius stating in 350 AD that it is easier to further the motion of a moving body than to move a body at rest. The classic laws of sliding friction were discovered by Leonardo da Vinci in 1493, a pioneer in tribology, and these laws were rediscovered by Guillaume Amontons in 1699. Amontons presented the nature of friction in terms of surface irregularities. The understanding of friction was further developed by Charles-Augustin de Coulomb, who considered the influence of sliding velocity, temperature, and humidity. The distinction between static and dynamic friction is made in Coulomb's friction law, although this distinction was already drawn by Johann Andreas von Segner in 1758. Leslie was equally skeptical about the role of adhesion proposed by Desaguliers; in Leslie's view, friction should be seen as a time-dependent process of flattening, pressing down asperities, which creates new obstacles in what were cavities before.
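The static/kinetic distinction in Coulomb's friction law, and the behaviour described in the caption below, can be sketched as follows. This is a minimal model; the block mass and friction coefficients are assumed illustrative values.

```python
def friction_response(applied, normal, mu_s, mu_k):
    """Coulomb dry-friction sketch: static friction matches the applied force
    up to mu_s * N; once sliding begins, kinetic friction is mu_k * N (mu_k < mu_s)."""
    max_static = mu_s * normal
    if applied <= max_static:
        return applied, "static"        # block does not move; friction balances the push
    return mu_k * normal, "kinetic"     # block slides; friction drops to the kinetic value

N = 10.0 * 9.81   # normal force on an assumed 10 kg block resting on a level surface, N
print(friction_response(30.0, N, 0.5, 0.4))   # below the static limit: friction = applied
print(friction_response(60.0, N, 0.5, 0.4))   # above it: kinetic friction, smaller than max static
```

Note the drop: once the applied force exceeds the maximum static friction (here 0.5·N ≈ 49 N), the resisting force falls to the kinetic value, which is why a block "breaks free" and then slides with less resistance.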
Friction
–
When the mass is not moving, the object experiences static friction. The friction increases as the applied force increases until the block moves. After the block moves, it experiences kinetic friction, which is less than the maximum static friction.
75.
Non-Newtonian fluid
–
A non-Newtonian fluid is a fluid that does not follow Newton's law of viscosity. Most commonly, the viscosity of non-Newtonian fluids is dependent on shear rate or shear rate history. Some non-Newtonian fluids with shear-independent viscosity, however, still exhibit normal stress differences or other non-Newtonian behavior. Many salt solutions and molten polymers are non-Newtonian fluids, as are many commonly found substances such as ketchup, custard, toothpaste, starch suspensions, maizena, paint, and blood. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different and can even be time-dependent; therefore, a constant coefficient of viscosity cannot be defined. Although the concept of viscosity is commonly used in fluid mechanics to characterize the shear properties of a fluid, it can be inadequate to describe non-Newtonian fluids. Their properties are instead studied using tensor-valued constitutive equations, which are common in the field of continuum mechanics. The viscosity of a shear-thickening fluid, or dilatant fluid, appears to increase when the shear rate increases. Corn starch suspended in water is a common example: when stirred slowly it looks milky, but when stirred vigorously it feels like a very viscous liquid. Note that all thixotropic fluids are extremely shear thinning, but they are also time dependent; thus, to avoid confusion, the purely shear-rate-dependent classification is more clearly termed pseudoplastic. Another example of a shear-thinning fluid is blood; this property is highly favoured within the body, as it allows the viscosity of blood to decrease with increased shear strain rate. Fluids that have a linear shear stress/shear strain rate relationship but require a finite yield stress before they begin to flow are called Bingham plastics.
Several examples are clay suspensions, drilling mud, toothpaste, mayonnaise, and chocolate. The surface of a Bingham plastic can hold peaks when it is still; by contrast, Newtonian fluids have flat, featureless surfaces when still. There are also fluids whose strain rate is a function of time. Fluids that require a gradually increasing shear stress to maintain a constant strain rate are referred to as rheopectic. An opposite case is a fluid that thins out with time and requires a decreasing stress to maintain a constant strain rate. Many common substances exhibit non-Newtonian flows; uncooked cornflour suspended in water has these properties. The name oobleck is derived from the Dr. Seuss book Bartholomew and the Oobleck. Because of its properties, oobleck is often used in demonstrations that exhibit its unusual behavior.
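The shear-thinning and shear-thickening behaviour discussed above can be sketched with the Ostwald–de Waele power-law model, a standard constitutive form τ = K·γ̇ⁿ. This is a minimal illustration; the consistency index K and the flow indices n below are illustrative, not measured values for any real fluid.

```python
def power_law_stress(shear_rate, K, n):
    """Ostwald-de Waele power-law model: tau = K * gamma_dot**n.
    n < 1: shear thinning; n = 1: Newtonian; n > 1: shear thickening."""
    return K * shear_rate ** n

def apparent_viscosity(shear_rate, K, n):
    """Apparent viscosity = stress / shear rate; constant only when n = 1."""
    return power_law_stress(shear_rate, K, n) / shear_rate

# Compare apparent viscosity at slow and fast shear for three flow indices.
for name, n in [("shear-thinning", 0.5), ("Newtonian", 1.0), ("shear-thickening", 1.5)]:
    slow = apparent_viscosity(1.0, 1.0, n)
    fast = apparent_viscosity(100.0, 1.0, n)
    print(name, slow, fast)
```

Only the Newtonian case (n = 1) gives a constant apparent viscosity; for the other two, the "viscosity" falls or rises with shear rate, which is exactly why a single coefficient of viscosity cannot be defined for such fluids.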
Non-Newtonian fluid
–
Demonstration of a non-Newtonian fluid at Universum in Mexico City
Non-Newtonian fluid
–
Classification of fluids with shear stress as a function of shear rate.
Non-Newtonian fluid
–
Oobleck on a subwoofer. Applying force to oobleck, by sound waves in this case, makes the non-Newtonian fluid thicken.
76.
Sand
–
Sand is a naturally occurring granular material composed of finely divided rock and mineral particles. It is defined by size, being finer than gravel and coarser than silt. Sand can also refer to a textural class of soil or soil type, i.e. a soil containing more than 85% sand-sized particles by mass. The most common constituent of sand in inland continental and non-tropical coastal settings is silica (quartz). The second most common type of sand is calcium carbonate, for example aragonite, which is the primary form of sand in areas where reefs have dominated the ecosystem for millions of years, as in the Caribbean. Sand is a non-renewable resource over human timescales, and sand suitable for making concrete is in high demand. In terms of particle size as used by geologists, sand particles range in diameter from 0.0625 mm to 2 mm; an individual particle in this size range is termed a sand grain. Sand grains are between gravel and silt in size. A 1953 engineering standard published by the American Association of State Highway and Transportation Officials set the minimum sand size at 0.074 mm; a 1938 specification of the United States Department of Agriculture was 0.05 mm. Sand feels gritty when rubbed between the fingers. ISO 14688 grades sands as fine (0.063 mm to 0.2 mm), medium (0.2 mm to 0.63 mm), and coarse (0.63 mm to 2.0 mm). In the United States, sand is commonly divided into five sub-categories based on size: very fine sand, fine sand, medium sand, coarse sand, and very coarse sand. These sizes are based on the Krumbein phi scale, where size in Φ = −log2(D), with D the particle diameter in millimetres. On this scale, for sand the value of Φ varies from −1 to +4, with the divisions between sub-categories at whole numbers. The composition of sand is highly variable, depending on the local rock sources. The gypsum sand dunes of the White Sands National Monument in New Mexico are famous for their bright white color. Arkose is a sand or sandstone with considerable feldspar content, derived from weathering and erosion of a granitic rock outcrop.
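The Krumbein phi scale and the five US sub-categories described above can be sketched directly. This is a minimal illustration of the classification as stated in the text (Φ from −1 to +4, divisions at whole numbers).

```python
import math

def krumbein_phi(d_mm):
    """Krumbein phi scale: phi = -log2(D), with grain diameter D in millimetres."""
    return -math.log2(d_mm)

def sand_class(d_mm):
    """Classify a grain into the five US sand sub-categories from the text."""
    phi = krumbein_phi(d_mm)
    if not (-1 <= phi <= 4):
        return "not sand-sized"
    for upper, name in [(0, "very coarse sand"), (1, "coarse sand"),
                        (2, "medium sand"), (3, "fine sand"), (4, "very fine sand")]:
        if phi <= upper:
            return name

print(krumbein_phi(2.0))     # -1.0 : coarse limit of sand
print(krumbein_phi(0.0625))  #  4.0 : fine limit of sand
print(sand_class(0.3))       # a 0.3 mm grain falls in the medium-sand band
```

Because the scale is a negative base-2 logarithm, each whole-number step in Φ halves the grain diameter, which is why the sub-category boundaries land at convenient powers of two in millimetres.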
Some sands contain magnetite, chlorite, glauconite, or gypsum. Sands rich in magnetite are dark to black in color, as are sands derived from volcanic basalts and obsidian. Chlorite-glauconite bearing sands are typically green in color, as are sands derived from basalts with a high olivine content. Many sands, especially those found extensively in Southern Europe, have iron impurities within the quartz crystals of the sand, giving a deep yellow color. Sand deposits in some areas contain garnets and other resistant minerals. The study of individual grains can reveal much historical information as to the origin and kind of transport of the grain. Quartz sand that is weathered from granite or gneiss quartz crystals will be angular; it is called grus in geology or sharp sand in the building trade, where it is preferred for concrete. Sand that is transported long distances by water or wind will be rounded. People who collect sand as a hobby are known as arenophiles.
Sand
–
Sand dunes in the Idehan Ubari, Libya.
Sand
–
Close-up (1×1 cm) of sand from the Gobi Desert, Mongolia.
Sand
–
Heavy minerals (dark) in a quartz beach sand (Chennai, India).
Sand
–
Sand from Coral Pink Sand Dunes State Park, Utah. These are grains of quartz with a hematite coating providing the orange color.
77.
Euler equations (fluid dynamics)
–
In fluid dynamics, the Euler equations are a set of quasilinear hyperbolic equations governing adiabatic and inviscid flow. They are named after Leonhard Euler. In fact, the Euler equations can be obtained by linearization of some more precise continuity equations, like the Navier–Stokes equations, in a local equilibrium state given by a Maxwellian. The Euler equations can be applied to incompressible flow, assuming the flow velocity is a solenoidal field, and to compressible flow. Historically, only the incompressible equations were derived by Euler; however, fluid dynamics literature often refers to the full set of the more general compressible equations, including the energy equation, together as the Euler equations. From the mathematical point of view, the Euler equations are notably hyperbolic conservation equations in the case without an external field. In fact, like any Cauchy equation, the Euler equations originally formulated in convective form can also be put in conservation form; the convective form emphasizes changes to the state in a frame of reference moving with the fluid. The Euler equations first appeared in published form in Euler's article "Principes généraux du mouvement des fluides", and they were among the first partial differential equations to be written down. At the time Euler published his work, the system of equations consisted of the momentum and continuity equations; an additional equation, which was later called the adiabatic condition, was supplied by Pierre-Simon Laplace in 1816. In convective form, the incompressible Euler equations with constant and uniform density read ∂u/∂t + (u · ∇)u = −∇(p/ρ) + g and ∇ · u = 0, where u is the flow velocity, p is the pressure, ρ is the density, and g represents body accelerations acting on the continuum, for example gravity, inertial accelerations, and electric field acceleration. The first equation is the Euler momentum equation with uniform density; the second is the incompressibility constraint, stating that the flow velocity is a solenoidal field. Notably, the continuity equation would be required as an additional third equation in the case of density varying in time or in space.
The equations above thus represent conservation of mass and momentum respectively. In 3D, for example, N = 3 and the position and flow velocity vectors are explicitly r = (x, y, z) and u = (ux, uy, uz). Flow velocity and pressure are the physical variables. Although Euler first presented these equations in 1755, many fundamental questions about them remain unanswered. In three space dimensions it is not even known whether solutions of the equations are defined for all time or whether they form singularities. In order to make the equations dimensionless, a characteristic length r0 and a characteristic velocity u0 need to be defined. These should be chosen such that the dimensionless variables are all of order one. The limit of high Froude numbers is thus notable and can be studied with perturbation theory. The conservation form emphasizes the mathematical properties of the Euler equations, and especially the contracted form is often the most convenient one for computational fluid dynamics simulations. Computationally, there are advantages in using the conserved variables. This gives rise to a class of numerical methods called conservative methods.
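The conservative methods mentioned above can be sketched on the simplest hyperbolic conservation law, linear advection u_t + (a·u)_x = 0, using the classical Lax–Friedrichs flux. This is a minimal illustration of the conservative finite-volume idea, not a solver for the full Euler system; the grid size, time step, and initial profile are assumed values.

```python
# Conservative finite-volume sketch for u_t + f(u)_x = 0 with f(u) = a*u,
# using the Lax-Friedrichs numerical flux on a periodic domain [0, 1).
a = 1.0                                   # advection speed
nx = 100
dx, dt = 1.0 / nx, 0.4 / nx               # CFL number a*dt/dx = 0.4 (stable)

# Initial condition: a square pulse of "mass" between x = 0.25 and x = 0.5.
u = [1.0 if 0.25 <= (i + 0.5) * dx <= 0.5 else 0.0 for i in range(nx)]

def step(u):
    """One conservative update: u_i -= (dt/dx) * (F_{i+1/2} - F_{i-1/2})."""
    flux = []
    for i in range(nx):
        ul, ur = u[i - 1], u[i]           # periodic wrap via Python's negative index
        # Lax-Friedrichs flux at the interface between cells i-1 and i.
        flux.append(0.5 * (a * ul + a * ur) - 0.5 * (dx / dt) * (ur - ul))
    return [u[i] - (dt / dx) * (flux[(i + 1) % nx] - flux[i]) for i in range(nx)]

total_before = sum(u) * dx
for _ in range(200):
    u = step(u)
total_after = sum(u) * dx
print(total_before, total_after)          # discrete "mass" is conserved to roundoff
```

Because each interface flux is subtracted from one cell and added to its neighbour, the total integral of u is preserved exactly by construction; this telescoping property is precisely what makes conservative methods attractive for the Euler equations.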
Euler equations (fluid dynamics)
–
The "Streamline curvature theorem" states that the pressure at the upper surface of an airfoil is lower than the pressure far away and that the pressure at the lower surface is higher than the pressure far away; hence the pressure difference between the upper and lower surfaces of an airfoil generates a lift force.
78.
Applied physics
–
Applied physics is physics which is intended for a particular technological or practical use. It is usually considered as a bridge or a connection between physics and engineering; this approach is similar to that of applied mathematics. Applied physicists can also be interested in the use of physics for scientific research. For instance, the field of accelerator physics can contribute to research in theoretical physics by working with engineers to enable the design and construction of high-energy colliders.
Applied physics
–
Experiment using a laser
Applied physics
–
A magnetic resonance image
Applied physics
–
Computer modeling of the space shuttle during re-entry
79.
Experimental physics
–
Experimental physics is the category of disciplines and sub-disciplines in the field of physics that are concerned with the observation of physical phenomena and experiments. Methods vary from discipline to discipline, from simple experiments and observations, such as the Cavendish experiment, to more complicated ones. It is often put in contrast with theoretical physics, which is concerned with predicting and explaining the physical behaviour of nature rather than the acquisition of knowledge about it. Although experimental and theoretical physics are concerned with different aspects of nature, theoretical physics can also offer insight on what data is needed in order to gain a better understanding of the universe, and on what experiments to devise in order to obtain it. In the early 17th century, Galileo made extensive use of experimentation to validate physical theories. Galileo formulated and successfully tested several results in dynamics, in particular the law of inertia, which later became the first law in Newton's laws of motion. In Galileo's Two New Sciences, a dialogue between the characters Simplicio and Salviati discusses the motion of a ship and how that ship's cargo is indifferent to its motion. Huygens used the motion of a boat along a Dutch canal to illustrate an early form of the conservation of momentum. Experimental physics is considered to have reached a high point with the publication of the Philosophiae Naturalis Principia Mathematica in 1687 by Sir Isaac Newton, which presented the laws of motion and the law of universal gravitation. Both theories agreed well with experiment; the Principia also included several theories in fluid dynamics. From the late 17th century onward, thermodynamics was developed by physicists and chemists such as Boyle and Young. In 1733, Bernoulli used statistical arguments with classical mechanics to derive thermodynamic results, initiating the field of statistical mechanics. In 1798, Thompson demonstrated the conversion of mechanical work into heat. Ludwig Boltzmann, in the nineteenth century, is responsible for the modern form of statistical mechanics.
Besides classical mechanics and thermodynamics, another great field of experimental inquiry within physics was the nature of electricity. Observations in the 17th and 18th centuries by scientists such as Robert Boyle and Stephen Gray established our basic understanding of electrical charge and current. By 1808 John Dalton had discovered that atoms of different elements have different weights and proposed the modern theory of the atom. It was Hans Christian Ørsted who first proposed the connection between electricity and magnetism after observing the deflection of a compass needle by a nearby electric current. By the early 1830s Michael Faraday had demonstrated that changing magnetic fields could induce electric currents. In 1864 James Clerk Maxwell presented to the Royal Society a set of equations that described this relationship between electricity and magnetism; Maxwell's equations also predicted correctly that light is an electromagnetic wave. Starting with astronomy, the principles of natural philosophy crystallized into fundamental laws of physics which were enunciated and improved in the succeeding centuries.
Experimental physics
–
A view of the CMS detector, an experimental endeavour of the LHC at CERN.
80.
Theoretical physics
–
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena. The advancement of science depends in general on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, a result grounded in experiment. A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations; the quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory similarly differs from a mathematical theory, in the sense that the word theory has a different meaning in mathematical terms. A physical theory involves one or more relationships between various measurable quantities. Archimedes realized that a ship floats by displacing its mass of water, and Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles. Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example: for instance, phenomenologists might employ empirical formulas to agree with experimental results, often without deep physical understanding. Modelers often appear much like phenomenologists, but try to model speculative theories that have certain desirable features. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated.
Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled, e.g. the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics. Theoretical advances may consist in setting aside old, incorrect paradigms or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result. Sometimes though, advances may proceed along different paths. However, an exception to all the above is the wave-particle duality. Physical theories become accepted if they are able to make correct predictions and no incorrect ones. They are also likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method. Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories. Theoretical physics began at least 2,300 years ago, under the Pre-Socratic philosophy; during the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon.
Theoretical physics
–
Visual representation of a Schwarzschild wormhole. Wormholes have never been observed, but they are predicted to exist through mathematical models and scientific theory.
81.
Field (physics)
–
In physics, a field is a physical quantity, typically a number or tensor, that has a value for each point in space and time. For example, on a weather map, the surface wind velocity is described by assigning a vector to each point on the map. Each vector represents the speed and direction of the movement of air at that point. As another example, an electric field can be thought of as a condition in space emanating from an electric charge and extending throughout the whole of space. When a test electric charge is placed in this electric field, the particle accelerates due to a force. Physicists have found the notion of a field to be of such practical utility for the analysis of forces that they have come to think of a force as due to a field. In the modern framework of the quantum theory of fields, even without referring to a test particle, a field occupies space and contains energy. This led physicists to consider electromagnetic fields to be a physical entity; the fact that the electromagnetic field can possess momentum and energy makes it very real. A particle makes a field, and a field acts on another particle. In practice, the strength of most fields has been found to diminish with distance to the point of being undetectable. One consequence is that the Earth's gravitational field quickly becomes undetectable on cosmic scales. A field has a unique tensorial character in every point where it is defined, i.e. a field cannot be a scalar field somewhere and a vector field somewhere else. For example, the Newtonian gravitational field is a vector field. Moreover, within each category, a field can be either a classical field or a quantum field, depending on whether it is characterized by numbers or quantum operators respectively. In fact, in this theory an equivalent representation of a field is a field particle, for example a boson. To Isaac Newton, his law of universal gravitation simply expressed the gravitational force that acted between any pair of massive objects.
In the eighteenth century, a new quantity was devised to simplify the bookkeeping of all these gravitational forces. This quantity, the gravitational field, gave at each point in space the total gravitational force which would be felt by an object with unit mass at that point. The development of the independent concept of a field began in the nineteenth century with the development of the theory of electromagnetism. In the early stages, André-Marie Ampère and Charles-Augustin de Coulomb could manage with Newton-style laws that expressed the forces between pairs of electric charges or electric currents. However, it became more natural to take the field approach and express these laws in terms of electric and magnetic fields. The independent nature of the field became more apparent with James Clerk Maxwell's discovery that waves in these fields propagated at a finite speed. Maxwell, at first, did not adopt the modern concept of a field as a fundamental quantity that could independently exist. Instead, he supposed that the field expressed the deformation of some underlying medium, the luminiferous aether, much like the tension in a rubber membrane. If that were the case, the velocity of the electromagnetic waves should depend upon the velocity of the observer with respect to the aether. Despite much effort, no evidence of such an effect was ever found.
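The idea that a field assigns a value to every point, and that field strength diminishes with distance, can be sketched with the Coulomb field of a single point charge. This is an illustrative example with assumed values, not drawn from the text above.

```python
# Illustrative sketch: a vector field assigns a value to every point in
# space -- here the Coulomb field E = k*q*r_hat / r^2 of a point charge
# at the origin. Charge and positions below are assumed example values.
import numpy as np

K = 8.9875517923e9   # Coulomb constant, N*m^2/C^2

def e_field(q, point):
    """Electric field vector (V/m) of charge q (C) at `point` (m)."""
    r = np.asarray(point, dtype=float)
    dist = np.linalg.norm(r)
    return K * q * r / dist**3   # k*q*r_hat / r^2, written as k*q*r / r^3

# Inverse-square fall-off: moving 10x farther away weakens the field 100x.
near = np.linalg.norm(e_field(1e-9, [0.1, 0.0, 0.0]))
far = np.linalg.norm(e_field(1e-9, [1.0, 0.0, 0.0]))
```

Evaluating the function over a grid of points would give exactly the kind of vector-per-point picture described above for wind velocity on a weather map.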
Field (physics)
–
Illustration of the electric field surrounding a positive (red) and a negative (blue) charge.
82.
Optics
–
Optics is the branch of physics which involves the behaviour and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behaviour of visible, ultraviolet, and infrared light; because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties. Most optical phenomena can be accounted for using the classical electromagnetic description of light. Complete electromagnetic descriptions of light are, however, often difficult to apply in practice, so practical optics is usually done using simplified models. The most common of these, geometric optics, treats light as a collection of rays that travel in straight lines. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics. Historically, the ray-based model of light was developed first, followed by the wave model of light. Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation. Some phenomena depend on the fact that light has both wave-like and particle-like properties; explanation of these effects requires quantum mechanics. When considering light's particle-like properties, the light is modelled as a collection of particles called photons. Quantum optics deals with the application of quantum mechanics to optical systems. Optical science is relevant to and studied in many related disciplines including astronomy, various engineering fields, photography, and medicine. Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors, lenses, telescopes, microscopes, lasers, and fibre optics.
Optics began with the development of lenses by the ancient Egyptians and Mesopotamians. The earliest known lenses, made from polished crystal, often quartz, date from as early as 700 BC for Assyrian lenses such as the Layard/Nimrud lens. The ancient Romans and Greeks filled glass spheres with water to make lenses. The word optics comes from the ancient Greek word ὀπτική, meaning appearance, look. Greek philosophy on optics broke down into two opposing theories on how vision worked: the intromission theory and the emission theory. The intromission approach saw vision as coming from objects casting off copies of themselves that were captured by the eye. Plato first articulated the emission theory, the idea that visual perception is accomplished by rays emitted by the eyes. He also commented on the parity reversal of mirrors in Timaeus. Some hundred years later, Euclid wrote a treatise entitled Optics where he linked vision to geometry, creating geometrical optics. Ptolemy, in his treatise Optics, held an extramission theory of vision: the rays from the eye formed a cone, the vertex being within the eye. The rays were sensitive, and conveyed information back to the observer's intellect about the distance and orientation of surfaces. He summarised much of Euclid and went on to describe a way to measure the angle of refraction. During the Middle Ages, Greek ideas about optics were resurrected and extended by writers in the Muslim world.
Optics
–
Optics includes study of dispersion of light.
Optics
–
The Nimrud lens
Optics
–
Reproduction of a page of Ibn Sahl 's manuscript showing his knowledge of the law of refraction, now known as Snell's law
Optics
–
Cover of the first edition of Newton's Opticks
83.
Geometrical optics
–
Geometrical optics, or ray optics, describes light propagation in terms of rays. The ray in geometric optics is an abstraction useful for approximating the paths along which light propagates. Geometrical optics does not account for certain optical effects such as diffraction and interference. This simplification is useful in practice; it is an excellent approximation when the wavelength is small compared to the size of structures with which the light interacts. The techniques are particularly useful in describing geometrical aspects of imaging, including optical aberrations. A light ray is a line or curve that is perpendicular to the light's wavefronts. Geometrical optics is often simplified by making the paraxial approximation, or small angle approximation. The mathematical behaviour then becomes linear, allowing optical components and systems to be described by simple matrices. Glossy surfaces such as mirrors reflect light in a simple, predictable way. This allows for production of reflected images that can be associated with an actual or extrapolated location in space. With such surfaces, the direction of the reflected ray is determined by the angle the incident ray makes with the surface normal. The incident and reflected rays lie in a single plane, and the angle between the reflected ray and the surface normal is the same as that between the incident ray and the normal. This is known as the Law of Reflection. For flat mirrors, the law of reflection implies that images of objects are upright and the same distance behind the mirror as the objects are in front of the mirror. The image size is the same as the object size. The law also implies that mirror images are parity inverted, which is perceived as a left-right inversion. Mirrors with curved surfaces can be modeled by ray tracing and using the law of reflection at each point on the surface. For mirrors with parabolic surfaces, parallel rays incident on the mirror produce reflected rays that converge at a common focus. Other curved surfaces may also focus light, but with aberrations due to the diverging shape causing the focus to be smeared out in space.
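The linearity gained from the paraxial approximation can be sketched with ray transfer ("ABCD") matrices: a ray becomes a 2-vector of height and angle, and each optical element becomes a 2x2 matrix. The focal length and distances below are arbitrary illustrative values.

```python
# Sketch of the paraxial (small-angle) approximation: rays are 2-vectors
# (height y, angle theta) and elements are 2x2 "ABCD" matrices.
# The element parameters below are arbitrary example values.
import numpy as np

def free_space(d):
    """Propagation over a distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# A ray parallel to the axis (theta = 0) at height 1 mm passes through a
# thin lens of focal length 50 mm, then travels 50 mm of free space:
# paraxially, it crosses the axis exactly at the focal point.
ray = np.array([1.0, 0.0])
out = free_space(50.0) @ thin_lens(50.0) @ ray
# out[0] (height at the focal plane) is 0
```

Chaining more matrices by multiplication is how whole optical systems are analyzed in this regime, which is why the approximation is so widely used.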
In particular, spherical mirrors exhibit spherical aberration. Curved mirrors can form images with magnification greater than or less than one, and the image can be upright or inverted. An upright image formed by reflection in a mirror is always virtual, while an inverted image is real. Refraction occurs when light travels through an area of space that has a changing index of refraction. The simplest case of refraction occurs when there is an interface between a uniform medium with index of refraction n1 and another medium with index of refraction n2; Snell's law then describes the deflection of the light ray at the interface. When light travels from a medium with a higher index into one with a lower index at an angle of incidence greater than the critical angle, it is entirely reflected at the interface. This phenomenon is called total internal reflection and allows for fibre optics technology. As light signals travel down a fibre optic cable, they undergo total internal reflection, allowing for essentially no light lost over the length of the cable.
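The critical angle behind total internal reflection follows directly from Snell's law, n1 sin(t1) = n2 sin(t2): setting t2 = 90 degrees gives sin(tc) = n2/n1, which requires n1 > n2. A small sketch with assumed typical indices:

```python
# Sketch: critical angle for total internal reflection from Snell's law,
# sin(tc) = n2/n1 (requires n1 > n2). Index values are assumed typical
# figures, not taken from the text above.
import math

def critical_angle_deg(n1, n2):
    """Critical angle of incidence (degrees) for light going n1 -> n2."""
    if n1 <= n2:
        raise ValueError("total internal reflection needs n1 > n2")
    return math.degrees(math.asin(n2 / n1))

# A silica fibre core (n ~ 1.45) against air (n = 1.0):
angle = critical_angle_deg(1.45, 1.0)   # roughly 43.6 degrees
```

Any ray striking the core boundary at more than this angle from the normal is trapped, which is why the signal loses essentially no light along the cable.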
Geometrical optics
–
As light travels through space, it oscillates in amplitude. In this image, each maximum amplitude crest is marked with a plane to illustrate the wavefront. The ray is the arrow perpendicular to these parallel surfaces.
84.
Nonlinear optics
–
The nonlinearity is typically observed only at very high light intensities such as those provided by lasers. Above the Schwinger limit, the vacuum itself is expected to become nonlinear. In nonlinear optics, the superposition principle no longer holds. However, some nonlinear effects were discovered before the development of the laser. The theoretical basis for many nonlinear processes was first described in Bloembergen's monograph Nonlinear Optics. Nonlinear optics explains the nonlinear response of properties such as frequency, polarization, phase or path of incident light. Nonlinear processes include: third-harmonic generation, the generation of light with a tripled frequency, in which three photons are destroyed, creating a single photon at three times the frequency; high-harmonic generation, the generation of light with frequencies much greater than the original; sum-frequency generation, the generation of light with a frequency that is the sum of two other frequencies; difference-frequency generation, the generation of light with a frequency that is the difference between two other frequencies; optical parametric amplification, the amplification of a signal input in the presence of a higher-frequency pump wave; optical parametric oscillation, the generation of a signal and idler wave using a parametric amplifier in a resonator; optical parametric generation, like parametric oscillation but without a resonator; spontaneous parametric down-conversion, the amplification of the vacuum fluctuations in the low-gain regime; optical rectification, the generation of quasi-static electric fields; nonlinear light-matter interaction with free electrons and plasmas; the optical Kerr effect, an intensity-dependent refractive index; and self-focusing, an effect due to the optical Kerr effect caused by the spatial variation in the intensity creating a spatial variation in the refractive index.
Further effects include: Kerr-lens modelocking, the use of self-focusing as a mechanism to mode-lock lasers; self-phase modulation, an effect due to the optical Kerr effect caused by the temporal variation in the intensity creating a temporal variation in the refractive index; optical solitons, a solution for either an optical pulse or spatial mode that does not change during propagation due to a balance between dispersion and the Kerr effect; cross-phase modulation, where one wavelength of light can affect the phase of another wavelength of light through the optical Kerr effect; four-wave mixing, which can also arise from other nonlinearities; cross-polarized wave generation, a χ(3) effect in which a wave with a polarization vector perpendicular to the input one is generated; stimulated Brillouin scattering, the interaction of photons with acoustic phonons; multi-photon absorption, the simultaneous absorption of two or more photons, transferring the energy to a single electron; and multiple photoionisation, the near-simultaneous removal of many bound electrons by one photon.
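The intensity dependence underlying the Kerr-effect family above can be sketched numerically: the refractive index acquires a term proportional to intensity, n(I) = n0 + n2*I. The n2 value below is an assumed approximate literature figure for fused silica, used only for illustration.

```python
# Sketch of the optical Kerr effect: n(I) = n0 + n2*I.
# n0 and n2 are assumed illustrative values (n2 of fused silica is
# roughly 2.7e-20 m^2/W); intensities are round example numbers.
def kerr_index(n0, n2, intensity):
    """Intensity-dependent refractive index n(I) = n0 + n2*I."""
    return n0 + n2 * intensity

n_low = kerr_index(1.45, 2.7e-20, 1e4)    # everyday light: negligible shift
n_high = kerr_index(1.45, 2.7e-20, 1e17)  # focused laser pulse: real shift
```

The index shift at laser intensities is what drives self-focusing (spatial intensity variation) and self-phase modulation (temporal intensity variation); at ordinary intensities the shift is immeasurably small, which is why these effects waited for the laser.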
Nonlinear optics
–
Reversal of Linear Momentum and Angular Momentum in Phase Conjugating Mirror.
Nonlinear optics
–
Dark-Red Gallium Selenide in its bulk form.
85.
Theory of relativity
–
The theory of relativity usually encompasses two interrelated theories by Albert Einstein: special relativity and general relativity. Special relativity applies to elementary particles and their interactions, describing all their physical phenomena except gravity. General relativity explains the law of gravitation and its relation to other forces of nature; it applies to the cosmological and astrophysical realm, including astronomy. The theory transformed theoretical physics and astronomy during the 20th century. It introduced concepts including spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions; with relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves. Einstein published the theory of special relativity in 1905; Max Planck, Hermann Minkowski and others did subsequent work. Einstein developed general relativity between 1907 and 1915, with contributions by many others after 1915. The final form of general relativity was published in 1916. The term "theory of relativity" was based on the expression "relative theory" used in 1906 by Planck, who emphasized how the theory uses the principle of relativity. In the discussion section of the same paper, Alfred Bucherer used for the first time the expression "theory of relativity". By the 1920s, the physics community understood and accepted special relativity. It rapidly became a significant and necessary tool for theorists and experimentalists in the new fields of atomic physics, nuclear physics and quantum mechanics. By comparison, general relativity did not appear to be as useful; it seemed to offer little potential for experimental test, as most of its assertions were on an astronomical scale. The mathematics of general relativity seemed difficult and fully understandable only by a small number of people.
Around 1960, general relativity became central to physics and astronomy; new mathematical techniques applied to general relativity streamlined calculations and made its concepts more easily visualized. Special relativity is a theory of the structure of spacetime; it was introduced in Einstein's 1905 paper "On the Electrodynamics of Moving Bodies". Special relativity is based on two postulates which are contradictory in classical mechanics: the laws of physics are the same for all observers in uniform motion relative to one another, and the speed of light in a vacuum is the same for all observers. The resultant theory copes with experiment better than classical mechanics. For instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences. Some of these are relativity of simultaneity (two events, simultaneous for one observer, may not be simultaneous for another) and time dilation (moving clocks are measured to tick more slowly than an observer's stationary clock).
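The size of the time-dilation effect is set by the Lorentz factor, gamma = 1/sqrt(1 - v^2/c^2): for every second of the moving clock's proper time, the stationary observer measures gamma seconds. A minimal sketch with an assumed example speed:

```python
# Sketch of kinematic time dilation: a clock moving at speed v ticks
# slower by the Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2).
# The speed 0.8c below is an arbitrary illustrative choice.
import math

C = 299_792_458.0   # speed of light in vacuum, m/s

def lorentz_factor(v):
    """Lorentz factor for speed v (m/s), v < c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

g = lorentz_factor(0.8 * C)   # 1/sqrt(1 - 0.64) = 1/0.6
```

At 0.8c the factor is 5/3, so the moving clock accumulates only 0.6 s of proper time per second of the observer's time; at everyday speeds gamma is indistinguishable from 1, which is why classical mechanics works so well.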
Theory of relativity
–
USSR stamp dedicated to Albert Einstein
Theory of relativity
–
Key concepts
86.
General relativity
–
General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present; the relation is specified by the Einstein field equations, a system of partial differential equations. Some predictions of general relativity differ significantly from those of classical physics; examples of such differences include gravitational time dilation, gravitational lensing, and the gravitational redshift of light. The predictions of general relativity have been confirmed in all observations to date. Although general relativity is not the only relativistic theory of gravity, it is the simplest theory consistent with experimental data. Einstein's theory has important astrophysical implications; for example, it implies the existence of black holes (regions of space in which space and time are distorted in such a way that nothing, not even light, can escape) as an end-state for massive stars. The bending of light by gravity can lead to the phenomenon of gravitational lensing. General relativity also predicts the existence of gravitational waves, which have since been observed directly by the LIGO collaboration. In addition, general relativity is the basis of current cosmological models of an expanding universe. Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall, he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. The Einstein field equations are nonlinear and very difficult to solve.
Einstein used approximation methods in working out initial predictions of the theory, but as early as 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse. In 1917, Einstein applied his theory to the universe as a whole. In line with contemporary thinking, he assumed a static universe, adding a new parameter, the cosmological constant, to his original field equations to match that observational presumption. By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life.
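The Schwarzschild solution assigns every mass a characteristic radius, r_s = 2GM/c^2, inside which not even light can escape; this is the defining scale of a black hole. A sketch using standard (rounded) physical constants:

```python
# Sketch: Schwarzschild radius r_s = 2*G*M / c^2.
# Constants are standard rounded values; the solar mass is approximate.
G = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0     # speed of light, m/s
M_SUN = 1.989e30      # solar mass, kg (approximate)

def schwarzschild_radius(mass_kg):
    """Schwarzschild radius in metres for the given mass."""
    return 2.0 * G * mass_kg / C**2

r_sun = schwarzschild_radius(M_SUN)        # ~3 km for one solar mass
r_10 = schwarzschild_radius(10 * M_SUN)    # ~30 km for 10 solar masses
```

The radius scales linearly with mass, so a 10-solar-mass stellar remnant has a horizon of only about 30 km, which is why such objects were long considered a mathematical curiosity rather than an astrophysical end-state.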
General relativity
–
A simulated black hole of 10 solar masses within the Milky Way, seen from a distance of 600 kilometers.
General relativity
–
Albert Einstein developed the theories of special and general relativity. Picture from 1921.
General relativity
–
Einstein cross: four images of the same astronomical object, produced by a gravitational lens
General relativity
–
Artist's impression of the space-borne gravitational wave detector LISA
87.
Solar physics
–
Solar physics is the branch of astrophysics that specializes in the study of the Sun. It deals with detailed measurements that are possible only for our closest star. Because the Sun is uniquely situated for close-range observing, there is a split between the related discipline of observational astrophysics and observational solar physics. The study of solar physics is also important as it is believed that changes in the solar atmosphere and solar activity can have a major impact on Earth. The Sun also provides a natural physical laboratory for the study of plasma physics. The Babylonians were keeping a record of solar eclipses, with the oldest record originating from the ancient city of Ugarit, in modern-day Syria; this record dates to about 1300 BC. Ancient Chinese astronomers were also observing solar phenomena with the purpose of keeping track of calendars, which were based on lunar and solar cycles. Unfortunately, records kept before 720 BC are very vague and offer no useful information; however, after 720 BC, 37 solar eclipses were noted over the course of 240 years. Astronomical knowledge flourished in the Islamic world during medieval times. Many observatories were built in cities from Damascus to Baghdad, where detailed astronomical observations were taken. Particularly, a few solar parameters were measured and detailed observations of the Sun were taken. Solar observations were taken with the purpose of navigation, but mostly for timekeeping. Islam requires its followers to pray five times a day, at times defined by the position of the Sun in the sky. As such, accurate observations of the Sun and its trajectory on the sky were needed. In the late 10th century, the Iranian astronomer Abu-Mahmud Khojandi built a massive observatory near Tehran. There, he took measurements of a series of meridian transits of the Sun.
Following the fall of the Western Roman Empire, Western Europe was cut off from many sources of ancient scientific knowledge. This, plus de-urbanisation and diseases such as the Black Death, led to a decline in scientific knowledge in Medieval Europe, especially in the early Middle Ages. During this period, observations of the Sun were taken either in relation to the zodiac, or to assist in building places of worship such as churches. In astronomy, the Renaissance period started with the work of Nicolaus Copernicus. He proposed that planets revolve around the Sun and not around the Earth; this model is known as the heliocentric model. His work was expanded by Johannes Kepler and Galileo Galilei. In particular, Galilei used his new telescope to look at the Sun; in 1610, he discovered sunspots on its surface. In the autumn of 1611, Johannes Fabricius wrote the first book on sunspots. Modern-day solar physics is focused towards understanding the many phenomena observed with the help of modern telescopes and satellites.
Solar physics
–
The SDO satellite
Solar physics
–
Internal structure
88.
Atomic, molecular, and optical physics
–
Atomic, molecular, and optical (AMO) physics is the study of matter-matter and light-matter interactions at the scale of one or a few atoms and energy scales around several electron volts. The three areas are closely interrelated. AMO theory includes classical, semi-classical and quantum treatments. Atomic physics is the subfield of AMO that studies atoms as an isolated system of electrons and an atomic nucleus. The term atomic physics is often associated with nuclear power and nuclear bombs, due to the synonymous use of atomic and nuclear in standard English. However, physicists distinguish between atomic physics, which deals with the atom as a system consisting of a nucleus and electrons, and nuclear physics, which studies nuclei alone. The important experimental techniques are the various types of spectroscopy. Molecular physics, while closely related to atomic physics, also overlaps greatly with theoretical chemistry and physical chemistry. Both subfields are primarily concerned with electronic structure and the dynamical processes by which these arrangements change. Generally this work involves using quantum mechanics; for molecular physics this approach is known as quantum chemistry. One important aspect of molecular physics is that the essential atomic orbital theory in the field of atomic physics expands to the molecular orbital theory. Molecular physics is concerned with atomic processes in molecules, but it is additionally concerned with effects due to the molecular structure. In addition to the electronic states which are known from atoms, molecules are able to rotate and to vibrate. These rotations and vibrations are quantized; there are discrete energy levels. The smallest energy differences exist between different rotational states, therefore pure rotational spectra are in the far infrared region of the electromagnetic spectrum. Vibrational spectra are in the infrared, and spectra resulting from electronic transitions are mostly in the visible region. From measuring rotational and vibrational spectra, properties of molecules like the distance between the nuclei can be calculated.
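The small spacing of rotational levels can be sketched with the rigid-rotor formula E_J = B*J*(J+1). The rotational constant used for CO below is an assumed approximate literature value, included only to show the order of magnitude that puts pure rotational spections in the far-infrared/microwave region.

```python
# Sketch of quantized molecular rotation: rigid-rotor levels
# E_J = B*J*(J+1). B for CO (~1.93 cm^-1) is an assumed approximate
# value, used to show why rotational spectra sit in the far infrared.
def rotational_level(B_cm, J):
    """Rigid-rotor energy E_J = B*J*(J+1), in wavenumbers (cm^-1)."""
    return B_cm * J * (J + 1)

B_CO = 1.93   # rotational constant of CO, cm^-1 (approximate)

# The J=0 -> J=1 transition has energy 2B:
delta = rotational_level(B_CO, 1) - rotational_level(B_CO, 0)
wavelength_um = 1e4 / delta   # ~2600 micrometres, i.e. ~2.6 mm
```

A transition energy of a few cm^-1 corresponds to millimetre-scale wavelengths, orders of magnitude below visible light, exactly the far-infrared/microwave placement described above; vibrational spacings are roughly a thousand times larger, landing in the infrared.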
As with many scientific fields, strict delineation can be highly contrived, and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified. Optical physics differs from general optics and optical engineering in that it is focused on the discovery and application of new phenomena. Often the same people are involved in both the basic research and the applied technology development. Researchers in optical physics use and develop light sources that span the spectrum from microwaves to X-rays.
Atomic, molecular, and optical physics
–
An optical lattice formed by laser interference. Optical lattices are used to simulate interacting condensed matter systems.
89.
Computational physics
–
Computational physics is the study and implementation of numerical analysis to solve problems in physics for which a quantitative theory already exists. Historically, computational physics was the first application of modern computers in science. In physics, different theories based on mathematical models provide very precise predictions of how systems behave. Unfortunately, it is often the case that solving the mathematical model for a particular system in order to produce a useful prediction is not feasible. This can occur, for instance, when the solution does not have a closed-form expression; in such cases, numerical approximations are required. There is a debate about the status of computation within the scientific method: while computers can be used in experiments for the measurement and recording of data, this clearly does not constitute a computational approach. Physics problems are in general very difficult to solve exactly. This is due to several reasons: lack of algebraic and/or analytic solvability, and complexity. On the more advanced side, mathematical perturbation theory is also sometimes used. In addition, the computational cost and computational complexity for many-body problems tend to grow quickly: a macroscopic system typically has a size of the order of 10^23 constituent particles, so exact treatment is somewhat of a problem. Solving quantum mechanical problems is generally of exponential order in the size of the system. Because computational physics covers a broad class of problems, it is generally divided amongst the different mathematical problems it numerically solves, or the methods it applies. Furthermore, computational physics encompasses the tuning of the software and hardware structure to solve the problems. It is possible to find a corresponding computational branch for every major field in physics, for example computational mechanics. Computational mechanics consists of computational fluid dynamics (CFD) and computational solid mechanics.
One subfield at the confluence between CFD and electromagnetic modelling is computational magnetohydrodynamics. The quantum many-body problem leads naturally to the large and rapidly growing field of computational chemistry. Computational solid state physics is an important division of computational physics dealing directly with material science. A field related to computational condensed matter is computational statistical mechanics; computational statistical physics makes heavy use of Monte Carlo-like methods. More broadly, it concerns itself with applications in the social sciences, network theory, and mathematical models for the propagation of disease. Computational astrophysics is the application of computational techniques and numerical methods to astrophysical problems.
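The Monte Carlo methods mentioned above can be sketched with a minimal toy example: estimating π by random sampling. This stands in for the high-dimensional integrals (e.g. over statistical-mechanical configuration spaces) that real codes evaluate; the function name and sample count are purely illustrative.

```python
# Minimal Monte Carlo sketch: estimate pi by sampling points in the unit square
# and counting how many land inside the quarter circle of radius 1.
# Statistical-physics codes use the same sampling idea for integrals whose
# dimensionality makes grid quadrature infeasible.
import random

def estimate_pi(n_samples, seed=0):
    rng = random.Random(seed)         # seeded for reproducibility
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:      # point falls inside the quarter circle
            hits += 1
    return 4.0 * hits / n_samples     # area ratio -> estimate of pi

print(estimate_pi(100_000))           # near 3.14; the error shrinks as 1/sqrt(N)
```

The 1/√N convergence is the key property: it is slow, but independent of dimensionality, which is exactly why Monte Carlo dominates in many-body statistical physics.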
Computational physics
–
Computational physics
90.
Mesoscopic physics
–
Disambiguation: this page refers to the sub-discipline of condensed matter physics, not the branch of meteorology concerned with the study of weather systems smaller than synoptic-scale systems. Mesoscopic physics is a subdiscipline of condensed matter physics that deals with materials of an intermediate length scale. The scale of these materials can be described as being between the size of a quantity of atoms and of materials measuring micrometres; the lower limit can also be defined as being the size of individual atoms. At the micrometre level are bulk materials. Both mesoscopic and macroscopic objects contain a large number of atoms. In other words, a macroscopic device, when scaled down to a meso-size, starts to reveal quantum mechanical properties. For example, at the macroscopic level the conductance of a wire increases continuously with its diameter. However, at the mesoscopic level, the wire's conductance is quantized. The applied science of mesoscopic physics deals with the potential of building nanodevices. Mesoscopic physics also addresses fundamental practical problems which occur when a macroscopic object is miniaturized. The physical properties of materials change as their size approaches the nanoscale. For bulk materials larger than one micrometre, the percentage of atoms at the surface is insignificant in relation to the number of atoms in the entire material. The subdiscipline has dealt primarily with artificial structures of metal or semiconducting material which have been fabricated by the techniques employed for producing microelectronic circuits. There is no rigid definition for mesoscopic physics, but the systems studied are normally in the range of 100 nm to 1000 nm; 100 nanometres is the approximate upper limit for a nanoparticle. Thus, mesoscopic physics has a close connection to the fields of nanofabrication and nanotechnology. Devices used in nanotechnology are examples of mesoscopic systems. Three categories of new phenomena in such systems are interference effects, quantum confinement effects and charging effects.
Quantum confinement effects describe electrons in terms of energy levels, potential wells, valence bands, and conduction bands. Electrons in bulk material can be described by energy bands or electron energy levels. Electrons exist at different energy levels or bands; in bulk materials these energy levels are described as continuous, because the difference in energy is negligible.
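Why bulk levels look continuous while nanoscale levels do not can be made concrete with the textbook particle-in-a-box estimate. This is a sketch of the scaling argument only, not a model of any specific device:

```python
# Particle-in-a-box sketch: for an electron confined to a length L, the gap
# between the two lowest levels is E_2 - E_1 = 3 h^2 / (8 m L^2).
# The spacing collapses toward zero as L grows, which is why the energy
# levels of bulk material are effectively continuous.
H = 6.62607015e-34       # Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electron volt

def level_spacing_eV(L):
    """Energy gap between the two lowest box levels, in eV, for box size L in metres."""
    return 3 * H ** 2 / (8 * M_E * L ** 2) / EV

print(level_spacing_eV(1e-9))   # ~1 eV for a 1 nm box: strongly quantized
print(level_spacing_eV(1e-2))   # ~1e-14 eV for a 1 cm box: effectively continuous
```

The fourteen-orders-of-magnitude difference between the two printed spacings is the quantum confinement effect in miniature: at 1 nm the gap dwarfs thermal energies, while at 1 cm it is utterly negligible.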
Mesoscopic physics
–
Condensed matter physics
91.
Particle physics
–
Particle physics is the branch of physics that studies the nature of the particles that constitute matter and radiation. By our current understanding, these elementary particles are excitations of the quantum fields that also govern their interactions. The currently dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model; in more technical terms, particles are described by quantum state vectors in a Hilbert space, which is also treated in quantum field theory. All particles and their interactions observed to date can be described almost entirely by a quantum field theory called the Standard Model. The Standard Model, as currently formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature. In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model. The idea that all matter is composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. Throughout the 1950s and 1960s, a bewildering variety of particles were found in collisions of particles from increasingly high-energy beams; it was referred to informally as the particle zoo. The current state of the classification of all elementary particles is explained by the Standard Model. It describes the strong, weak, and electromagnetic fundamental interactions, using mediating gauge bosons; the species of gauge bosons are the gluons, the W−, W+ and Z bosons, and the photon.
The Standard Model also contains 24 fundamental fermions, which are the constituents of all matter. Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson. Early in the morning on 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson. The world's major particle physics laboratories include Brookhaven National Laboratory, whose main facility is the Relativistic Heavy Ion Collider, which collides heavy ions such as gold ions; it is the world's first heavy ion collider, and the world's only polarized proton collider. Another is the Budker Institute of Nuclear Physics, whose main projects are now electron–positron colliders, including VEPP-2000, operated since 2006. At CERN, the main project is now the Large Hadron Collider, which had its first beam circulation on 10 September 2008 and is now the world's most energetic collider of protons; it also became the most energetic collider of heavy ions after it began colliding lead ions. At DESY, the main facility was the Hadron Elektron Ring Anlage (HERA), which collided electrons and positrons with protons.
Particle physics
–
Large Hadron Collider tunnel at CERN
92.
Biomechanics
–
Biomechanics is the study of the structure and function of biological systems such as humans, animals, plants, organs, fungi, and cells by means of the methods of mechanics. Biomechanics is closely related to engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems. Usually, however, biological systems are more complex than man-made systems, so numerical methods are applied in almost every biomechanical study. Research is done in an iterative process of hypothesis and verification, including several steps of modeling, computer simulation, and experimental measurement. Elements of mechanical engineering, electrical engineering, and computer science are combined in applications such as gait analysis. Biomechanics in sports can be stated as the muscular, joint and skeletal actions of the body during the execution of a given task, skill and/or technique. Proper understanding of biomechanics relating to sports skill has the greatest implications for sports performance, rehabilitation and injury prevention; as noted by Doctor Michael Yessis, one could say that the best athlete is the one that executes his or her skill the best. The mechanical analysis of biomaterials and biofluids is usually carried forth with the concepts of continuum mechanics; this assumption breaks down when the length scales of interest approach the order of the microstructural details of the material. One of the most remarkable characteristics of biomaterials is their hierarchical structure; in other words, the mechanical characteristics of these materials rely on physical phenomena occurring in multiple levels, from the molecular all the way up to the tissue and organ levels. Biomaterials are classified in two groups, hard and soft tissues. Mechanical deformation of hard tissues may be analysed with the theory of linear elasticity.
On the other hand, soft tissues usually undergo large deformations, and thus their analysis relies on finite strain theory. The interest in continuum biomechanics is spurred by the need for realism in the development of medical simulation. Biological fluid mechanics, or biofluid mechanics, is the study of both gas and liquid fluid flows in or around biological organisms. An often studied liquid biofluids problem is that of blood flow in the cardiovascular system. Under certain mathematical circumstances, blood flow can be modelled by the Navier–Stokes equations; in vivo, whole blood is assumed to be an incompressible Newtonian fluid. However, this assumption fails when considering forward flow within arterioles. At the microscopic scale, the effects of individual red blood cells become significant. When the diameter of the vessel is just slightly larger than the diameter of the red blood cell, the Fahraeus–Lindquist effect occurs. However, as the diameter of the vessel decreases further, the red blood cells must squeeze through the vessel, often passing only in single file.
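The regimes discussed above can be characterized with a Reynolds-number sketch, Re = ρvD/μ. The density, viscosity, speeds and diameters below are assumed order-of-magnitude textbook values, not data from the text:

```python
# Reynolds-number sketch for blood flow: Re = rho * v * D / mu.
# All numerical values are assumed, representative magnitudes only,
# and whole blood is treated as a Newtonian fluid for simplicity.
def reynolds(rho, v, D, mu):
    """Dimensionless Reynolds number for flow of speed v in a vessel of diameter D."""
    return rho * v * D / mu

RHO_BLOOD = 1060.0    # kg/m^3 (assumed typical value)
MU_BLOOD = 3.5e-3     # Pa*s, Newtonian approximation (assumed typical value)

# Large artery: centimetre-scale diameter, tens of cm/s mean speed.
print(reynolds(RHO_BLOOD, 0.4, 2.5e-2, MU_BLOOD))   # order 10^3: inertia matters
# Arteriole: tens of micrometres, millimetres per second.
print(reynolds(RHO_BLOOD, 5e-3, 3e-5, MU_BLOOD))    # far below 1: creeping flow
```

The contrast is the point: continuum Navier–Stokes modelling is reasonable in large vessels, while in arterioles the flow is viscous-dominated and, as the text notes, cell-scale effects dominate the picture entirely.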
Biomechanics
–
Page of one of the first works of Biomechanics (De Motu Animalium of Giovanni Alfonso Borelli) in the 17th century
Biomechanics
–
Red blood cells
Biomechanics
–
Chinstrap penguin leaping over water
Biomechanics
–
Subdisciplines
93.
Health physics
–
Health physics is the applied physics of radiation protection for health and health care purposes. It is the science concerned with the recognition, evaluation, and control of health hazards to permit the safe use and application of ionising radiation. Health physics professionals promote excellence in the science and practice of radiation protection and safety. Practical ionising radiation measurement is essential for health physics; it enables the evaluation of protection measures, and the assessment of the radiation dose likely to be, or actually, received by individuals. The provision of such instruments is normally controlled by law; in the UK it is the Ionising Radiation Regulations 1999. The measuring instruments for radiation protection are both installed and portable. Installed instruments are fixed in positions which are known to be important in assessing the general radiation hazard in an area. Examples are installed area radiation monitors, gamma interlock monitors, and personnel exit monitors. Interlock monitors are used in applications to prevent inadvertent exposure of workers to an excess dose by preventing personnel access to an area when a high radiation level is present. Airborne contamination monitors measure the concentration of radioactive particles in the atmosphere to guard against radioactive particles being deposited in the lungs of personnel. Personnel exit monitors are used to monitor workers who are exiting a contamination-controlled or potentially contaminated area; these can be in the form of hand monitors, clothing frisk probes, or whole body monitors. They monitor the surface of the body and clothing to check if any radioactive contamination has been deposited, and generally measure alpha, beta or gamma radiation, or combinations of these. Portable instruments are hand-held or transportable. The hand-held instrument is used as a survey meter to check an object or person in detail.
They can also be used for personnel exit monitoring or personnel contamination checks in the field; these generally measure alpha, beta or gamma radiation, or combinations of these. Transportable instruments are generally installed on trolleys to allow easy deployment. A number of commonly used instrument types are: ionization chambers, proportional counters, Geiger counters, semiconductor detectors, and scintillation detectors. In the United Kingdom the HSE has issued a guidance note on selecting the correct radiation measurement instrument for the application concerned; this covers all ionising radiation instrument technologies, and is a useful comparative guide. Dosimeters are devices worn by the user which measure the radiation dose that the user is receiving; this is more related to the amount of energy deposited rather than the charge.
Health physics
–
1947 Oak Ridge National Laboratory poster.
Health physics
–
General topics
94.
Psychophysics
–
Psychophysics quantitatively investigates the relationship between physical stimuli and the sensations and perceptions they produce. Psychophysics also refers to a general class of methods that can be applied to study a perceptual system. Modern applications rely heavily on threshold measurement and ideal observer analysis. Psychophysics has widespread and important practical applications. For example, in the study of digital signal processing, psychophysics has informed the development of models; these models explain why humans perceive very little loss of signal quality when audio and video signals are formatted using lossy compression. Many of the classical techniques and theories of psychophysics were formulated in 1860 when Gustav Theodor Fechner in Leipzig published Elemente der Psychophysik. He coined the term psychophysics, describing research intended to relate physical stimuli to the contents of consciousness, such as sensations. As a physicist and philosopher, Fechner aimed at developing a method that relates matter to the mind, connecting the publicly observable world to a person's privately experienced impression of it. From this, Fechner derived his well-known logarithmic scale, now known as the Fechner scale. Weber's and Fechner's work formed one of the bases of psychology as a science. Fechner's work systematised the introspectionist approach, which had to contend with the behaviorist approach in which even verbal responses are as physical as the stimuli. Fechner's work was studied and extended by Charles S. Peirce, who was aided by his student Joseph Jastrow. Peirce and Jastrow largely confirmed Fechner's empirical findings, but not all. In particular, an experiment of Peirce and Jastrow rejected Fechner's estimation of a threshold of perception of weights. The Peirce–Jastrow experiments were conducted as part of Peirce's application of his program to human perception; other studies considered the perception of light. Jastrow wrote the following summary: "Mr. Peirce's courses in logic gave me my first real experience of intellectual muscle."
He borrowed the apparatus for me, which I took to my room, installed at my window, and with which I made the observations; the results were published over our joint names in the Proceedings of the National Academy of Sciences. This work clearly distinguished observable cognitive performance from the expression of consciousness. One leading method is based on signal detection theory, developed for cases of very weak stimuli. However, the subjectivist approach persists among those in the tradition of Stanley Smith Stevens. Stevens revived the idea of a power law suggested by 19th-century researchers, in contrast with Fechner's log-linear function. He also advocated the assignment of numbers in ratio to the strengths of stimuli, and added techniques such as magnitude production and cross-modality matching. He opposed the assignment of stimulus strengths to points on a line that are labeled in order of strength. Nevertheless, that sort of response has remained popular in applied psychophysics.
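The contrast between Fechner's logarithmic law and Stevens' power law described above can be sketched numerically. The constants k and the exponent a are free parameters fitted per sensory modality; the values used below are purely illustrative.

```python
# Sketch of the two classical psychophysical laws discussed above.
# Fechner: sensation grows with the logarithm of stimulus intensity.
# Stevens: sensation grows as a power of stimulus intensity.
import math

def fechner(I, I0=1.0, k=1.0):
    """Fechner's logarithmic law, S = k * ln(I / I0)."""
    return k * math.log(I / I0)

def stevens(I, k=1.0, a=0.33):
    """Stevens' power law, S = k * I**a (the exponent is fit per modality)."""
    return k * I ** a

# Doubling the stimulus adds a constant amount of sensation under Fechner,
# but multiplies sensation by a constant factor under Stevens:
print(fechner(200.0) - fechner(100.0))   # = ln 2, independent of the base level
print(stevens(200.0) / stevens(100.0))   # = 2**0.33, independent of the base level
```

The two laws thus make distinguishable predictions for ratio-scaling experiments, which is precisely what Stevens' magnitude-estimation techniques were designed to probe.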
Psychophysics
–
Diagram showing a specific staircase procedure: Transformed Up/Down Method (1 up/ 2 down rule). Until the first reversal (which is neglected) the simple up/down rule and a larger step size is used.
95.
Atmospheric physics
–
Atmospheric physics is the application of physics to the study of the atmosphere. At the dawn of the space age and the introduction of sounding rockets, aeronomy became a subdiscipline concerning the upper layers of the atmosphere. There are two kinds of remote sensing. Passive sensors detect natural radiation that is emitted or reflected by the object or surrounding area being observed; reflected sunlight is the most common source of radiation measured by passive sensors. Examples of passive remote sensors include film photography, infra-red sensors, charge-coupled devices, and radiometers. Active collection, on the other hand, emits energy in order to scan objects and areas, whereupon a sensor then detects the radiation reflected from the target. Remote sensing makes it possible to collect data on dangerous or inaccessible areas. Military collection during the Cold War made use of stand-off collection of data about dangerous border areas. Remote sensing also replaces costly and slow data collection on the ground. Atmospheric physicists typically divide radiation into solar radiation and terrestrial radiation. Solar radiation contains a variety of wavelengths; visible light has wavelengths between 0.4 and 0.7 micrometres. Shorter wavelengths are known as the ultraviolet (UV) part of the spectrum. Ozone is most effective in absorbing radiation around 0.25 micrometres; this increases the temperature of the nearby stratosphere. Snow reflects 88% of UV rays, while sand reflects 12%. The more glancing the angle is between the atmosphere and the sun's rays, the more likely that energy will be reflected or absorbed by the atmosphere. Terrestrial radiation is emitted at much longer wavelengths than solar radiation. This is because Earth is much colder than the sun; radiation is emitted by Earth across a range of wavelengths, as formalized in Planck's law. The wavelength of maximum energy is around 10 micrometres. Cloud physics is the study of the processes that lead to the formation, growth and precipitation of clouds. Clouds are composed of microscopic droplets of water, tiny crystals of ice, or both.
Under suitable conditions, the droplets combine to form precipitation, which may fall to the earth.
Atmospheric physics
–
Atmospheric sciences
Atmospheric physics
–
Brightness can indicate reflectivity as in this 1960 weather radar image (of Hurricane Abby). The radar's frequency, pulse form, and antenna largely determine what it can observe.
Atmospheric physics
–
Cloud to ground Lightning in the global atmospheric electrical circuit.
Atmospheric physics
–
Representation of upper-atmospheric lightning and electrical-discharge phenomena
96.
Integrated Authority File
–
The Integrated Authority File (Gemeinsame Normdatei, or GND) is an international authority file for the organisation of personal names, subject headings and corporate bodies from catalogues. It is used mainly for documentation in libraries and, increasingly, also by archives and museums. The GND is managed by the German National Library in cooperation with various regional library networks in German-speaking Europe and other partners. The GND falls under the Creative Commons Zero (CC0) license. The GND specification provides a hierarchy of high-level entities and sub-classes, useful in library classification, and an approach to unambiguous identification of single elements. It also comprises an ontology intended for knowledge representation in the semantic web, available in the RDF format.
Integrated Authority File
–
GND screenshot
97.
National Diet Library
–
The National Diet Library (NDL) is the only national library in Japan. It was established in 1948 for the purpose of assisting members of the National Diet of Japan in researching matters of public policy. The library is similar in purpose and scope to the United States Library of Congress. The National Diet Library consists of two main facilities in Tokyo and Kyoto, and several other branch libraries throughout Japan. The Diet's power in prewar Japan was limited, and its need for information was correspondingly small. The original Diet libraries never developed either the collections or the services which might have made them vital adjuncts of genuinely responsible legislative activity. Until Japan's defeat, moreover, the executive had controlled all political documents, depriving the people and the Diet of access to vital information. The U.S. occupation forces under General Douglas MacArthur deemed reform of the Diet library system to be an important part of the democratization of Japan after its defeat in World War II. In 1946, each house of the Diet formed its own National Diet Library Standing Committee. Hani Gorō, a Marxist historian who had been imprisoned during the war for thought crimes and had been elected to the House of Councillors after the war, spearheaded the reform efforts. Hani envisioned the new body as both a citadel of popular sovereignty, and the means of realizing a peaceful revolution. The National Diet Library opened in June 1948 in the present-day State Guest-House with an initial collection of 100,000 volumes. The first Librarian of the Diet Library was the politician Tokujirō Kanamori; the philosopher Masakazu Nakai served as the first Vice Librarian. In 1949, the NDL merged with the National Library and became the only national library in Japan. At this time the collection gained a million volumes previously housed in the former National Library in Ueno.
In 1961, the NDL opened at its present location in Nagatachō, and in 1986 the NDL's Annex was completed to accommodate a combined total of 12 million books and periodicals. The Kansai-kan, which opened in October 2002 in the Kansai Science City, has a collection of 6 million items. In May 2002, the NDL opened a new branch, the International Library of Children's Literature, in the former building of the Imperial Library in Ueno. This branch contains some 400,000 items of children's literature from around the world. Though the NDL's original mandate was to be a library for the National Diet, it also serves the general public; in the fiscal year ending March 2004, for example, the library reported more than 250,000 reference inquiries. In contrast, as Japan's national library, the NDL collects copies of all publications published in Japan. The NDL has an extensive collection of some 30 million pages of documents relating to the Occupation of Japan after World War II. This collection includes the documents prepared by General Headquarters and the Supreme Commander of the Allied Powers, and the Far Eastern Commission. The NDL also maintains a collection of some 530,000 books and booklets and 2 million microform titles relating to the sciences.
National Diet Library
–
Tokyo Main Library of the National Diet Library
National Diet Library
–
Kansai-kan of the National Diet Library
National Diet Library
–
The National Diet Library
National Diet Library
–
Main building in Tokyo