1.
Continuum mechanics
–
Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century, and research in the area continues to this day.

Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Continuum mechanics deals with physical properties of solids and fluids which are independent of any particular coordinate system in which they are observed. These physical properties are represented by tensors, which are mathematical objects that have the required property of being independent of coordinate system; these tensors can be expressed in any coordinate system for computational convenience.

Materials, such as solids, liquids and gases, are composed of molecules separated by space, and on a microscopic scale materials have cracks and discontinuities. A continuum, by contrast, is a body that can be continually sub-divided into infinitesimal elements with properties being those of the bulk material. More specifically, the continuum hypothesis/assumption hinges on the concept of a representative elementary volume. This condition provides a link between the experimentalist's and the theoretician's viewpoints on constitutive equations, as well as a way of spatial and statistical averaging of the microstructure; the latter then provides a basis for stochastic finite elements. The levels of the statistical volume element (SVE) and the representative volume element (RVE) link continuum mechanics to statistical mechanics; the RVE may be assessed only in a limited way via experimental testing, namely when the constitutive response becomes spatially homogeneous. Specifically for fluids, the Knudsen number is used to assess to what extent the approximation of continuity can be made.

As an everyday illustration, consider car traffic on a highway, with just one lane for simplicity.
Somewhat surprisingly, and in a tribute to its effectiveness, continuum mechanics models the movement of cars via a differential equation for the density of cars. The familiarity of this situation helps us understand a little of the continuum–discrete dichotomy underlying continuum modelling in general. To start modelling, let x measure distance along the highway, let t be time, and let ρ(x, t) be the density of cars on the highway; assume cars do not appear or disappear. Consider any group of cars, from the car at the back of the group located at x = a to the car at the front located at x = b. The total number of cars in this group is N = ∫ from a to b of ρ dx, and since cars are conserved, dN/dt = 0. The only way the resulting integral can be zero for all intervals is if the integrand is zero for all x; consequently, conservation yields the first-order nonlinear conservation PDE

  ∂ρ/∂t + ∂(ρv)/∂x = 0

for all positions on the highway, where v(x, t) is the velocity of the cars. This conservation PDE applies not only to car traffic but also to fluids, solids, crowds, animals, plants, bushfires, financial traders and more. It is one equation with two unknowns (ρ and v), so another equation, such as a relation between velocity and density, is needed to form a well-posed problem.
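The conservation idea above can be checked numerically. The following is a minimal sketch (not from the article, and with illustrative parameter values): a car-density profile is advected around a ring road at an assumed constant speed v, using a first-order upwind scheme for the conservation law ∂ρ/∂t + ∂(ρv)/∂x = 0, and the total number of cars is verified to stay constant.

```python
import numpy as np

# Illustrative assumptions: a 1 km ring road, 200 grid cells, constant
# car speed v.  The upwind scheme discretizes d(rho)/dt + d(rho*v)/dx = 0.
L, n = 1000.0, 200          # road length (m), number of grid cells
dx = L / n
v = 20.0                    # constant car speed (m/s), an assumption
dt = 0.4 * dx / v           # time step satisfying the CFL condition
x = np.arange(n) * dx
rho = 0.05 + 0.03 * np.exp(-((x - 300.0) / 50.0) ** 2)  # cars per metre

total_before = rho.sum() * dx   # total number of cars on the ring

for _ in range(500):
    flux = rho * v                        # q = rho * v
    # upwind difference with periodic (ring-road) boundaries
    rho = rho - dt / dx * (flux - np.roll(flux, 1))

total_after = rho.sum() * dx
# On a closed ring, the scheme conserves cars to machine precision.
print(abs(total_after - total_before) < 1e-9)
```

The profile spreads and moves, but the integral of ρ never changes: that is exactly the content of dN/dt = 0.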
Figure 1. Configuration of a continuum body
2.
Conservation of mass
–
The law of conservation of mass states that, for any system closed to all transfers of matter and energy, the mass of the system must remain constant over time; hence, the quantity of mass is conserved. The law implies that during any chemical reaction, nuclear reaction, or radioactive decay in an isolated system, the total mass of the reactants or starting materials must equal the mass of the products. The concept of mass conservation is widely used in many fields such as chemistry, mechanics, and fluid dynamics. For systems completely isolated from all exchanges with the environment, the mass–energy equivalence theorem states that mass conservation is equivalent to total energy conservation, which is the first law of thermodynamics. By contrast, for a closed system mass is only approximately conserved: certain types of matter may be created or destroyed, but in all of these processes the mass and energy associated with such matter remain unchanged in quantity (for a discussion, see mass in general relativity).

An important idea in ancient Greek philosophy was that nothing comes from nothing, so that what exists now has always existed: no new matter can come into existence where there was none before. A further principle of conservation was stated by Epicurus who, describing the nature of the Universe, wrote that the totality of things was always such as it is now, and always will be. Jain philosophy, a non-creationist philosophy based on the teachings of Mahavira, states that the universe and its constituents cannot be destroyed or created; the Jain text Tattvarthasutra states that a substance is permanent, but its modes are characterised by creation and destruction. A principle of the conservation of matter was also stated by Nasīr al-Dīn al-Tūsī, who wrote that a body of matter cannot disappear completely; it only changes its form, condition, composition, color and other properties. The principle of conservation of mass was first outlined by Mikhail Lomonosov in 1748, who demonstrated it by experiments (though this is sometimes challenged); Antoine Lavoisier had expressed these ideas more clearly by 1774. Others whose ideas pre-dated the work of Lavoisier include Joseph Black and Henry Cavendish. The conservation of mass was obscure for millennia because of the buoyancy effect of the Earth's atmosphere on the weight of gases.
For example, a piece of wood weighs less after burning; this seemed to suggest that some of its matter disappears or is transformed. The invention of the vacuum pump later enabled the weighing of gases using scales. Once understood, the conservation of mass was of great importance in progressing from alchemy to modern chemistry. Later, more precise research indicated that in certain reactions the loss or gain could not have been more than from 2 to 4 parts in 100,000, a precision far beyond that aimed at and attained by Lavoisier. In special relativity, the conservation of mass does not apply if the system is open and energy escapes; however, it continues to apply to totally closed (isolated) systems. If energy cannot escape a system, its mass cannot decrease. In relativity theory, so long as any type of energy is retained within a system, this energy exhibits mass.
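A quick order-of-magnitude calculation shows why mass appears exactly conserved in ordinary chemistry. The numbers below are illustrative assumptions (the heat of combustion figure is a rough textbook-scale value, not from the article); by E = mc², the mass carried away by the released energy is minuscule.

```python
# Assumed figure: burning ~1 kg of wood releases on the order of 1.6e7 J.
# By the mass-energy equivalence E = m * c**2, the mass equivalent of
# that energy is far below any chemical balance's measurement error.
c = 299_792_458.0          # speed of light in m/s (exact SI value)
E_released = 1.6e7         # J, assumed heat of combustion (illustrative)

delta_m = E_released / c**2   # mass equivalent of the released energy, kg
print(delta_m)                # roughly 1.8e-10 kg
```

A change of about 10⁻¹⁰ kg per kilogram of fuel is many orders of magnitude below the 2-to-4-parts-in-100,000 precision mentioned above, so mass conservation holds to any chemically measurable accuracy.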
Russian scientist Mikhail Lomonosov discovered the law of mass conservation in 1756 by experiments, and came to the conclusion that phlogiston theory is incorrect.
Antoine Lavoisier's discovery of the law of conservation of mass led to many new findings in the 19th century. Joseph Proust's law of definite proportions and John Dalton's atomic theory branched from the discoveries of Antoine Lavoisier. Lavoisier's quantitative experiments revealed that combustion involved oxygen rather than what was previously thought to be phlogiston.
3.
Momentum
–
In classical mechanics, linear momentum, translational momentum, or simply momentum is the product of the mass and velocity of an object, quantified in kilogram-meters per second. It is dimensionally equivalent to impulse, the product of force and time. Newton's second law of motion states that the change in linear momentum of a body is equal to the net impulse acting on it. For example, a heavy truck moving rapidly has a large momentum; it takes a large or prolonged force to get the truck up to this speed, and a correspondingly large or prolonged force to bring it to a stop afterwards. If the truck were lighter, or moving more slowly, then it would have less momentum.

Linear momentum is also a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum does not change. In classical mechanics, conservation of momentum is implied by Newton's laws. It also holds in special relativity and, with appropriate definitions, a linear momentum conservation law holds in electrodynamics, quantum mechanics, and quantum field theory. It is ultimately an expression of one of the fundamental symmetries of space and time. Linear momentum depends on the frame of reference: observers in different frames would find different values of the linear momentum of a system, but each would observe that the value of linear momentum does not change with time. Momentum has a direction as well as a magnitude; quantities that have both a magnitude and a direction are known as vector quantities. Because momentum has a direction, it can be used to predict the resulting direction of objects after they collide, as well as their speeds.

Below, the properties of momentum are described in one dimension; the vector equations are almost identical to the scalar equations. The momentum of a particle is traditionally represented by the letter p. It is the product of two quantities, the mass and the velocity: p = m v. The units of momentum are the product of the units of mass and velocity. In SI units, if the mass is in kilograms and the velocity in meters per second, then the momentum is in kilogram meters/second; in cgs units, if the mass is in grams and the velocity in centimeters per second, then the momentum is in gram centimeters/second.
Being a vector, momentum has magnitude and direction. For example, a 1 kg model airplane, traveling due north at 1 m/s in straight and level flight, has a momentum of 1 kg m/s due north measured from the ground. The momentum of a system of particles is the vector sum of their momenta: if two particles have masses m1 and m2, and velocities v1 and v2, the total momentum is p = p1 + p2 = m1 v1 + m2 v2. If the particles are moving, the center of mass will generally be moving as well; if the center of mass is moving at velocity vcm, the momentum is p = m vcm. This is known as Euler's first law. If a force F is applied to a particle for a time interval Δt, the momentum of the particle changes by an amount Δp = F Δt.
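The formulas above can be sketched numerically. This is a small illustration with assumed masses and velocities (not from the article): the total momentum equals the sum of the individual momenta, and also equals total mass times the centre-of-mass velocity, and an impulse F Δt shifts the momentum accordingly.

```python
import numpy as np

# Assumed two-particle system (illustrative values, 2-D vectors).
m1, m2 = 2.0, 3.0                       # masses, kg
v1 = np.array([1.0, 0.0])               # velocity of particle 1, m/s
v2 = np.array([-0.5, 2.0])              # velocity of particle 2, m/s

p_total = m1 * v1 + m2 * v2             # p = p1 + p2 = m1*v1 + m2*v2
v_cm = (m1 * v1 + m2 * v2) / (m1 + m2)  # centre-of-mass velocity
p_from_cm = (m1 + m2) * v_cm            # Euler's first law form, p = m*v_cm

print(np.allclose(p_total, p_from_cm))  # the two expressions agree

# Impulse: a force F applied for a time dt changes momentum by F*dt.
F = np.array([0.0, -1.0])               # N, assumed constant force
dt = 0.5                                # s
p_after = p_total + F * dt              # momentum after the impulse
print(p_after)
```

Because momentum is a vector, the same code works unchanged in one, two, or three dimensions.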
In a game of pool, momentum is conserved; that is, if one ball stops dead after the collision, the other ball will continue away with all the momentum. If the moving ball continues or is deflected, then both balls will carry a portion of the momentum from the collision.
4.
Stress (mechanics)
–
For example, when a solid vertical bar is supporting a weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a container under pressure, each particle gets pushed against by all the surrounding particles, and the container walls and the pressure-inducing surface push against them in reaction. These macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Strain inside a material may arise by various mechanisms, such as stress applied by external forces to the bulk material or to its surface. Any strain of a solid material generates an internal elastic stress, analogous to the reaction force of a spring. In liquids and gases, only deformations that change the volume generate persistent elastic stress; however, if the deformation is gradually changing with time, even in fluids there will usually be some viscous stress opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress.

Significant stress may exist even when deformation is negligible or non-existent, and stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress may also be imposed on a material without the application of net forces, for example by changes in temperature or chemical composition. Stress that exceeds certain strength limits of the material will result in permanent deformation, or may even change its crystal structure and chemical composition.

In some branches of engineering, the term stress is occasionally used in a looser sense as a synonym of internal force. For example, in the analysis of trusses, it may refer to the total traction or compression force acting on a beam, rather than the force divided by area. Humans have been consciously aware of stress inside materials since ancient times.
Until the 17th century the understanding of stress was largely intuitive and empirical. With the mathematical tools that later became available, Augustin-Louis Cauchy was able to give the first rigorous and general mathematical model for stress in a homogeneous medium. Cauchy observed that the force across a surface is a linear function of the surface's normal vector. The understanding of stress in liquids started with Newton, who provided a formula for friction forces in parallel laminar flow. Stress is defined as the force across a small boundary per unit area of that boundary; following the basic premises of continuum mechanics, stress is a macroscopic concept. In a fluid at rest the force is perpendicular to the surface. In a solid, or in a flow of viscous liquid, the force F may not be perpendicular to the surface S; hence the stress across a surface must be regarded as a vector quantity, not a scalar. Moreover, its direction and magnitude depend on the orientation of S. Thus the stress state of the material must be described by a tensor, called the stress tensor; with respect to any chosen coordinate system, the Cauchy stress tensor can be represented as a symmetric 3×3 matrix of real numbers.
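The statement that stress is a tensor, not a scalar, can be made concrete. The following sketch uses an assumed stress state (the numbers are illustrative, not from the article): given a symmetric 3×3 Cauchy stress matrix σ, the traction (force per unit area) across a surface with unit normal n is t = σ·n, and in general t is not parallel to n.

```python
import numpy as np

# Assumed stress state in Pa (illustrative values only).
sigma = np.array([[ 50.0, 10.0,  0.0],
                  [ 10.0, 20.0,  5.0],
                  [  0.0,  5.0, -30.0]])
assert np.allclose(sigma, sigma.T)     # the Cauchy stress tensor is symmetric

n = np.array([0.0, 0.0, 1.0])          # unit normal of the chosen surface
t = sigma @ n                          # traction vector t = sigma . n
print(t)                               # has shear components: not parallel to n
```

For a fluid at rest, σ would be −p times the identity matrix, and t would reduce to a pure normal pressure for every choice of n; the off-diagonal entries are what distinguish a solid or flowing viscous liquid.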
Built-in strain inside the plastic protractor, developed by the stress of shaping it, is revealed by the effect of polarized light.
Roman-era bridge in Switzerland
Inca bridge on the Apurimac River
Glass vase with the craquelé effect. The cracks are the result of brief but intense stress created when the semi-molten piece is briefly dipped in water.
5.
Infinitesimal strain theory
–
With this assumption, the equations of continuum mechanics are considerably simplified. This approach may also be called small deformation theory or small displacement theory; it is contrasted with finite strain theory, where the opposite assumption is made. In such a linearization, the non-linear or second-order terms of the strain tensor are neglected. Therefore, the material displacement gradient components and the spatial displacement gradient components are approximately equal. From the geometry of Figure 1 we have

  a′b′ = √[(dx + (∂u_x/∂x) dx)² + ((∂u_y/∂x) dx)²] = dx √[1 + 2(∂u_x/∂x) + (∂u_x/∂x)² + (∂u_y/∂x)²]

For very small displacement gradients the squared terms are negligible, so a′b′ ≈ dx + (∂u_x/∂x) dx. Therefore, the diagonal elements of the infinitesimal strain tensor are the normal strains in the coordinate directions. Certain combinations of the strain components that do not change under rotation of the coordinate system are called strain invariants. In the coordinate system of the principal strains there are no shear strain components. An octahedral plane is one whose normal makes equal angles with the three principal directions. The engineering shear strain on an octahedral plane is called the octahedral shear strain and is given by

  γ_oct = (2/3) √[(ε₁ − ε₂)² + (ε₂ − ε₃)² + (ε₃ − ε₁)²]

where ε1, ε2, ε3 are the principal strains. Several definitions of equivalent strain can be found in the literature. For a prescribed strain field, a single-valued continuous displacement field does not generally exist for an arbitrary choice of strain components; therefore, some restrictions, named compatibility equations, are imposed upon the strain components. With the addition of the three compatibility equations the number of independent equations is reduced to three, matching the number of unknown displacement components. These constraints on the strain tensor were discovered by Saint-Venant, and are called the Saint-Venant compatibility equations; the compatibility equations serve to assure a single-valued continuous displacement function u_i. When one dimension of a structure is much greater than the other two, the strains associated with the length direction, i.e. the normal strain ε33 and the associated shear strains, are small, and plane strain is then an acceptable approximation.
The strain tensor for plane strain is written as

  ε = [ ε11  ε12  0 ;  ε21  ε22  0 ;  0  0  0 ]

in which the rows separated by semicolons are the rows of a 3×3 matrix representing a second-order tensor; this strain state is called plane strain. The corresponding stress tensor is

  σ = [ σ11  σ12  0 ;  σ21  σ22  0 ;  0  0  σ33 ]

in which the non-zero σ33 is needed to maintain the constraint ε33 = 0. This stress term can be removed from the analysis to leave only the in-plane terms. Antiplane strain is another special state of strain that can occur in a body. For infinitesimal deformations the scalar components of the rotation tensor ω satisfy the condition |ω_ij| ≪ 1. Note that the displacement gradient is small only if both the strain tensor and the rotation tensor are infinitesimal.
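The decomposition referred to in the last sentences can be sketched directly. This illustration (with an assumed, made-up displacement gradient) splits the displacement gradient H into the infinitesimal strain tensor (symmetric part) and the infinitesimal rotation tensor (antisymmetric part), and checks that the example is a plane-strain state.

```python
import numpy as np

# Assumed small displacement gradient (illustrative values, dimensionless).
H = np.array([[ 1.0e-4, 2.0e-4, 0.0],
              [ 0.0,   -5.0e-5, 0.0],
              [ 0.0,    0.0,    0.0]])

eps = 0.5 * (H + H.T)      # infinitesimal strain tensor (symmetric part)
omega = 0.5 * (H - H.T)    # infinitesimal rotation tensor (antisymmetric part)

# The gradient is exactly the sum of strain and rotation:
print(np.allclose(H, eps + omega))

# All i3- and 3j-components of the strain vanish: a plane-strain state.
plane_strain = np.allclose(eps[2, :], 0.0) and np.allclose(eps[:, 2], 0.0)
print(plane_strain)
```

Both the strain and the rotation components here are of order 10⁻⁴, so the infinitesimal-strain assumption |ε_ij| ≪ 1, |ω_ij| ≪ 1 holds.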
Figure 1. Two-dimensional geometric deformation of an infinitesimal material element.
6.
Elasticity (physics)
–
In physics, elasticity is the ability of a body to resist a distorting influence or deforming force and to return to its original size and shape when that influence or force is removed. Solid objects will deform when adequate forces are applied to them; if the material is elastic, the object will return to its initial shape and size when these forces are removed. The physical reasons for elastic behavior can be different for different materials. In metals, the atomic lattice changes size and shape when forces are applied; when forces are removed, the lattice goes back to the original lower-energy state. For rubbers and other polymers, elasticity is caused by the stretching of polymer chains when forces are applied. Perfect elasticity is an approximation of the real world: the most elastic body found in modern science is quartz fibre, which is not a perfectly elastic body either, so a perfectly elastic body is an ideal concept only. Most materials which possess elasticity in practice remain purely elastic only up to very small deformations. In engineering, the elasticity of a material is determined by two types of material parameter. The first type of material parameter is called a modulus, which measures the amount of force per unit area needed to achieve a given amount of deformation; the SI unit of a modulus is the pascal. A higher modulus typically indicates that the material is harder to deform. The second type of parameter measures the elastic limit, the maximum stress that can arise in a material before the onset of permanent deformation; its SI unit is also the pascal. When describing the relative elasticities of two materials, both the modulus and the elastic limit have to be considered. Rubbers typically have a low modulus and tend to stretch a lot; of two rubber materials with the same elastic limit, the one with the lower modulus will appear to be more elastic.
When an elastic material is deformed due to a force, it experiences internal resistance to the deformation. The various moduli apply to different kinds of deformation: for instance, Young's modulus applies to extension/compression of a body, whereas the shear modulus applies to its shear. The elasticity of materials is described by a stress–strain curve, which shows the relation between stress and strain. The curve is generally nonlinear, but it can be approximated as linear for sufficiently small deformations. For even higher stresses, materials exhibit plastic behavior, that is, they deform irreversibly. Elasticity is not exhibited only by solids: non-Newtonian fluids, such as viscoelastic fluids, will also exhibit elasticity in certain conditions. In response to a small, rapidly applied and removed strain, these fluids may deform and then return to their original shape; under larger strains, or strains applied for longer periods of time, these fluids may start to flow like a viscous liquid. Because the elasticity of a material is described in terms of a stress–strain relation, it is essential that the terms stress and strain be defined without ambiguity.
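The role of the modulus can be shown with a small worked comparison. The numbers below are assumptions for illustration (a typical textbook value for steel, and an assumed stiff rubber): in the linear regime, stress = modulus × strain, so the same small strain requires vastly different stresses in the two materials.

```python
# Assumed Young's moduli (illustrative): steel ~200 GPa, a stiff rubber ~0.05 GPa.
E_steel  = 200e9     # Pa
E_rubber = 0.05e9    # Pa

strain = 0.001                     # 0.1% extension, within the elastic range
stress_steel  = E_steel * strain   # linear (Hookean) stress, Pa
stress_rubber = E_rubber * strain  # Pa

# The higher-modulus material needs 4000x more stress for the same strain:
ratio = stress_steel / stress_rubber
print(ratio)
```

This is why rubber "appears more elastic" in everyday experience: its low modulus means large strains at low stress, even though its elastic limit may be comparable to that of much stiffer materials.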
7.
Material failure theory
–
Failure theory is the science of predicting the conditions under which solid materials fail under the action of external loads. The failure of a material is usually classified into brittle failure (fracture) or ductile failure (yield). Depending on the conditions, most materials can fail in a brittle or ductile manner or both; however, for most practical situations, a material may be classified as either brittle or ductile. Though failure theory has been in development for over 200 years, its level of acceptability is yet to reach that of continuum mechanics. In mathematical terms, failure theory is expressed in the form of various failure criteria which are valid for specific materials. Failure criteria are functions in stress or strain space which separate failed states from unfailed states. A precise physical definition of a failed state is not easily quantified, and several working definitions are in use in the engineering community. Quite often, phenomenological failure criteria of this form are used to predict brittle failure.

In materials science, material failure is the loss of load-carrying capacity of a material unit. This definition per se introduces the fact that failure can be examined in different scales, from microscopic to macroscopic. On the other hand, due to the lack of globally accepted fracture criteria, such methodologies are useful for gaining insight into the cracking of specimens and simple structures under well-defined global load distributions. Microscopic failure considers the initiation and propagation of a crack; failure criteria in this case are related to microscopic fracture. Some of the most popular models in this area are the micromechanical failure models. One such model was proposed by Gurson and extended by Tvergaard; another approach, proposed by Rousselier, is based on continuum damage mechanics and thermodynamics. Both models form a modification of the von Mises yield potential by introducing a scalar damage quantity, which represents the void volume fraction of cavities.
Macroscopic material failure is defined in terms of load-carrying capacity or energy-storage capacity. Li presents a classification of macroscopic failure criteria in four categories: stress or strain failure, energy-type failure, damage failure, and empirical failure. The material behavior at one level is considered as a collective of its behavior at a sub-level; an efficient deformation and failure model should be consistent at every level. The maximum stress criterion assumes that a material fails when the maximum principal stress σ1 in a material element exceeds the uniaxial tensile strength of the material. Alternatively, the material will fail if the minimum principal stress σ3 is less than the uniaxial compressive strength of the material. Numerous other phenomenological failure criteria can be found in the engineering literature; the degree of success of these criteria in predicting failure has been limited.
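The maximum stress criterion described above can be sketched in a few lines. This is an illustrative implementation under assumed sign conventions (tension positive; the compressive strength σ_c is entered as a positive number, so the compressive check is σ3 < −σ_c), with made-up strength values:

```python
# Sketch of the maximum (principal) stress failure criterion.
def max_stress_fails(principal, sigma_t, sigma_c):
    """principal: (s1, s2, s3) principal stresses in Pa, tension positive.
    sigma_t: uniaxial tensile strength (positive), Pa.
    sigma_c: uniaxial compressive strength (positive), Pa."""
    return max(principal) > sigma_t or min(principal) < -sigma_c

# Assumed strengths for a brittle material (illustrative values).
sigma_t, sigma_c = 40e6, 120e6

safe    = max_stress_fails((30e6, 5e6, -50e6), sigma_t, sigma_c)  # within limits
tensile = max_stress_fails((45e6, 5e6, -50e6), sigma_t, sigma_c)  # s1 > sigma_t
print(safe, tensile)
```

The criterion checks each principal stress independently; combined-stress criteria such as von Mises or Mohr–Coulomb replace this element-wise test with a function of all three principal stresses at once.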
8.
Fluid
–
In physics, a fluid is a substance that continually deforms under an applied shear stress. Fluids are a subset of the phases of matter and include liquids, gases and plasmas. Fluids are substances that have zero shear modulus, or, in simpler terms, a fluid is a substance which cannot resist any shear force applied to it. Although the term includes both the liquid and gas phases, in common usage, fluid is often used as a synonym for liquid. For example, brake fluid is hydraulic oil and will not perform its required incompressible function if there is gas in it; this colloquial usage of the term is also common in medicine and in nutrition. Liquids form a free surface while gases do not. The distinction between solids and fluids is not entirely obvious; the distinction is made by evaluating the viscosity of the substance. Silly Putty can be considered to behave like a solid or a fluid, depending on the time scale of observation; it is best described as a viscoelastic fluid. There are many examples of substances proving difficult to classify; a particularly interesting one is pitch, as demonstrated in the pitch drop experiment currently running at the University of Queensland.

Fluids display properties such as not resisting deformation, or resisting it only slightly, and the ability to flow; these properties are typically a function of their inability to support a shear stress in static equilibrium. Solids can be subjected to shear stresses, and to normal stresses, both compressive and tensile. In contrast, ideal fluids can only be subjected to normal, compressive stress (pressure); real fluids display viscosity and so are capable of being subjected to low levels of shear stress. In a solid, shear stress is a function of strain, but in a fluid, shear stress is a function of strain rate. A consequence of this behavior is Pascal's law, which describes the role of pressure in characterizing a fluid's state. The study of fluids is fluid mechanics, which is subdivided into fluid dynamics and fluid statics, depending on whether the fluid is in motion.
9.
Fluid statics
–
Fluid statics or hydrostatics is the branch of fluid mechanics that studies fluids at rest. It encompasses the study of the conditions under which fluids are at rest in stable equilibrium, as opposed to fluid dynamics, the study of fluids in motion. Hydrostatics is categorized as a part of fluid statics, which is the study of all fluids, incompressible or not, at rest. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids; it is also relevant to geophysics and astrophysics, to meteorology, to medicine, and to many other fields.

Some principles of hydrostatics have been known in an empirical and intuitive sense since antiquity, by the builders of boats, cisterns, aqueducts and fountains. Archimedes is credited with the discovery of Archimedes' principle, which relates the buoyancy force on an object that is submerged in a fluid to the weight of fluid displaced by the object. The fair cup or Pythagorean cup, which dates from about the 6th century BC, is a hydraulic technology whose invention is credited to the Greek mathematician Pythagoras. It was used as a learning tool. The cup consists of a line carved into its interior, and a small vertical pipe in the center of the cup that leads to the bottom. The height of this pipe is the same as the line carved into the interior of the cup, so the cup may be filled to the line without any fluid passing into the pipe in the center. However, when the amount of fluid exceeds this fill line, fluid flows into the pipe and, due to the drag that molecules exert on one another, the cup will be emptied. Heron's fountain is a device invented by Heron of Alexandria that consists of a jet of fluid being fed by a reservoir of fluid; the fountain is constructed in such a way that the height of the jet exceeds the height of the fluid in the reservoir. The device consisted of an opening and two containers arranged one above the other.
The intermediate pot, which was sealed, was filled with fluid; trapped air inside the vessels induces a jet of water out of a nozzle, emptying all water from the intermediate reservoir. Pascal made contributions to developments in both hydrostatics and hydrodynamics. Due to the fundamental nature of fluids, a fluid cannot remain at rest under the presence of a shear stress; however, fluids can exert pressure normal to any contacting surface. If a point in the fluid is thought of as an infinitesimally small cube, then it follows from the principles of equilibrium that the pressure on every side of this unit of fluid must be equal; if this were not the case, the fluid would move in the direction of the resulting force. Thus, the pressure on a fluid at rest is isotropic, i.e. it acts with equal magnitude in all directions. This characteristic allows fluids to transmit force through the length of pipes or tubes: a force applied to a fluid in a pipe is transmitted, via the fluid, to the other end of the pipe. This principle was first formulated, in a slightly extended form, by Blaise Pascal. In a fluid at rest, all frictional and inertial stresses vanish; when this condition of V = 0 is applied to the Navier–Stokes equations, the gradient of pressure becomes a function of body forces only.
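When the only body force is gravity, the pressure gradient reduces to the familiar hydrostatic relation p = p0 + ρgh. A minimal sketch with assumed values (water density, standard surface pressure):

```python
# Hydrostatic pressure p = p0 + rho * g * h for a fluid at rest under
# gravity.  Parameter values are illustrative assumptions.
rho = 1000.0      # kg/m^3, density of water
g = 9.81          # m/s^2, gravitational acceleration
p0 = 101_325.0    # Pa, assumed atmospheric pressure at the surface

def pressure_at_depth(h):
    """Absolute pressure (Pa) at depth h metres below the surface."""
    return p0 + rho * g * h

# At 10 m depth the gauge pressure is roughly one extra atmosphere:
gauge_10m = pressure_at_depth(10.0) - p0
print(gauge_10m)   # 98100.0 Pa
```

Because the pressure at a point is isotropic, this same value acts on every face of a small submerged element, whatever its orientation.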
Table of Hydraulics and Hydrostatics, from the 1728 Cyclopædia
10.
Viscosity
–
The viscosity of a fluid is a measure of its resistance to gradual deformation by shear stress or tensile stress. For liquids, it corresponds to the informal concept of thickness; for example, honey has a much higher viscosity than water. Viscosity is a property of the fluid which opposes the relative motion between two surfaces of the fluid that are moving at different velocities. For a given velocity pattern, the stress required is proportional to the fluid's viscosity. A fluid that has no resistance to shear stress is known as an ideal or inviscid fluid. Zero viscosity is observed only at very low temperatures in superfluids; otherwise, all fluids have positive viscosity and are said to be viscous or viscid. A fluid with a relatively high viscosity, such as pitch, may appear to be a solid. The word viscosity is derived from the Latin viscum, meaning mistletoe.

The dynamic viscosity of a fluid expresses its resistance to shearing flows, where adjacent layers move parallel to each other with different speeds. It can be defined through the idealized situation known as a Couette flow, where a layer of homogeneous fluid is trapped between two parallel plates, one fixed and one moving at constant speed u. If the speed of the top plate is small enough, the fluid particles will move parallel to it, and their speed will vary linearly from zero at the bottom to u at the top. Each layer of fluid will move faster than the one just below it, and friction between them gives rise to a force resisting their relative motion. In particular, the fluid will apply on the top plate a force in the direction opposite to its motion, and an equal but opposite one to the bottom plate, so an external force is required in order to keep the top plate moving at constant speed. The magnitude F of this force is found to be proportional to the speed u and the area A of each plate, and inversely proportional to their separation y: F = μ A u / y. The proportionality factor μ in this formula is the viscosity of the fluid. The ratio u/y is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction perpendicular to the plates. Isaac Newton expressed the viscous forces by the differential equation τ = μ ∂u/∂y, where τ = F/A.
This formula assumes that the flow is moving along parallel lines; the equation can also be used where the velocity does not vary linearly with y, such as in fluid flowing through a pipe. Use of the Greek letter mu (μ) for the dynamic viscosity is common among mechanical and chemical engineers; however, the Greek letter eta (η) is preferred by chemists and physicists.
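The Couette-flow formula above lends itself to a quick worked example. The parameter values here are assumptions chosen for illustration (water-like viscosity, a thin 1 mm gap):

```python
# Force needed to drag the top plate in a Couette flow:
# tau = mu * u / y (linear profile), and F = tau * A.
mu = 1.0e-3      # Pa*s, dynamic viscosity of water at ~20 C
u = 0.5          # m/s, speed of the top plate (assumed)
A = 0.1          # m^2, plate area (assumed)
y = 1.0e-3       # m, gap between the plates (assumed)

tau = mu * u / y        # shear stress, Pa
F = tau * A             # drag force on the plate, N
print(tau, F)
```

Even for a fast-moving plate over a thin film of water, the shear stress is a fraction of a pascal; repeating the calculation with honey (μ on the order of 10 Pa·s) multiplies the force by roughly ten thousand.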
Pitch has a viscosity approximately 230 billion (2.3 × 10¹¹) times that of water.
Laminar shear of fluid between two plates. Friction between the fluid and the moving boundaries causes the fluid to shear. The force required for this action is a measure of the fluid's viscosity.
Example of the viscosity of milk and water. Liquids with higher viscosities make smaller splashes when poured at the same velocity.
Honey being drizzled.
11.
Newtonian fluid
–
A Newtonian fluid is a fluid in which the viscous stresses arising from its flow are, at every point, linearly proportional to the local strain rate. That is equivalent to saying that those forces are proportional to the rates of change of the fluid's velocity vector as one moves away from the point in question in various directions. Newtonian fluids are the simplest mathematical models of fluids that account for viscosity. While no real fluid fits the definition perfectly, many common liquids and gases, such as water and air, can be assumed to be Newtonian for practical calculations under ordinary conditions. However, non-Newtonian fluids are relatively common, and include oobleck; other examples include many polymer solutions, molten polymers, many solid suspensions, blood, and most highly viscous fluids. Newtonian fluids are named after Isaac Newton, who first postulated the relation between the strain rate and shear stress for such fluids in differential form.

An element of a flowing liquid or gas will suffer forces from the surrounding fluid. These forces can be approximated to first order by a viscous stress tensor τ. The deformation of that element, relative to some previous state, can be approximated to first order by a strain tensor that changes with time; its time derivative is related to the velocity gradient ∇v. The tensors τ and ∇v can be expressed by 3×3 matrices relative to any chosen coordinate system. One also defines a total stress tensor σ that combines the shear stress with conventional pressure p. The diagonal components of the viscosity tensor give the molecular viscosity of a liquid, and the off-diagonal components the turbulent eddy viscosity.
12.
Non-Newtonian fluid
–
A non-Newtonian fluid is a fluid that does not follow Newton's law of viscosity. Most commonly, the viscosity of non-Newtonian fluids is dependent on shear rate or shear rate history. Some non-Newtonian fluids with shear-independent viscosity, however, still exhibit normal stress differences or other non-Newtonian behavior. Many salt solutions and molten polymers are non-Newtonian fluids, as are commonly found substances such as ketchup, custard, toothpaste, starch suspensions, maizena, paint and blood. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different and can even be time-dependent; therefore, a constant coefficient of viscosity cannot be defined. Although the concept of viscosity is commonly used in fluid mechanics to characterize the shear properties of a fluid, it can be inadequate to describe non-Newtonian fluids; their properties are better studied using tensor-valued constitutive equations, which are common in the field of continuum mechanics.

The viscosity of a shear thickening fluid, or dilatant fluid, appears to increase when the shear rate increases. Corn starch dissolved in water is a common example: when stirred slowly it looks milky, when stirred vigorously it feels like a very viscous liquid. All thixotropic fluids are extremely shear thinning, but they are significantly time dependent, whereas colloidal shear thinning fluids respond instantaneously to changes in shear rate; thus, to avoid confusion, the latter classification is more clearly termed pseudoplastic. Another example of a shear thinning fluid is blood; this property is highly favoured within the body, as it allows the viscosity of blood to decrease with increased shear strain rate. Fluids that have a linear shear stress/shear strain-rate relationship but require a finite yield stress before they begin to flow are called Bingham plastics.
Several examples are clay suspensions, drilling mud, toothpaste, mayonnaise, and chocolate. The surface of a Bingham plastic can hold peaks when it is still; by contrast, Newtonian fluids have flat, featureless surfaces when still. There are also fluids whose strain rate is a function of time. Fluids that require a gradually increasing shear stress to maintain a constant strain rate are referred to as rheopectic; the opposite case is a fluid that thins out with time and requires a decreasing stress to maintain a constant strain rate. Many common substances exhibit non-Newtonian flows; uncooked cornflour mixed with water, known as oobleck, has the same shear-thickening properties as the corn starch suspension described above. The name oobleck is derived from the Dr. Seuss book Bartholomew and the Oobleck. Because of its properties, oobleck is often used in demonstrations that exhibit its unusual behavior.
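The fluid classes described above can be summarized by their flow curves (shear stress as a function of shear rate). A minimal sketch, with made-up parameter values purely for demonstration:

```python
# Illustrative flow curves for the fluid classes described above.
# All parameter values (mu, K, n, tau_y, mu_p) are assumptions for demonstration.

def newtonian(gd, mu=1.0):
    """Linear through the origin: tau = mu * gamma_dot."""
    return mu * gd

def power_law(gd, K=1.0, n=0.5):
    """n < 1: shear-thinning (pseudoplastic); n > 1: shear-thickening (dilatant)."""
    return K * gd ** n

def bingham(gd, tau_y=5.0, mu_p=1.0):
    """Finite yield stress tau_y must be exceeded before the material flows."""
    return tau_y + mu_p * gd if gd > 0 else 0.0

for gd in (0.1, 1.0, 10.0):
    print(gd, newtonian(gd), power_law(gd), bingham(gd))
```

Note how the Bingham curve is linear but does not pass through the origin, matching the description of a material that holds peaks when still.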
Non-Newtonian fluid
–
Demonstration of a non-Newtonian fluid at
Universum in Mexico City
Non-Newtonian fluid
–
Classification of fluids with shear stress as a function of shear rate.
Non-Newtonian fluid
–
Oobleck on a subwoofer. Applying force to oobleck, by sound waves in this case, makes the non-Newtonian fluid thicken.
13.
Mixing (process engineering)
–
In industrial process engineering, mixing is a unit operation that involves manipulating a heterogeneous physical system with the intent to make it more homogeneous. Familiar examples include pumping of the water in a swimming pool to homogenize the water temperature. Mixing is performed to allow heat and/or mass transfer to occur between one or more streams, components or phases; modern industrial processing almost always involves some form of mixing. Some classes of chemical reactors are also mixers. With the right equipment, it is possible to mix a solid, liquid or gas into another solid, liquid or gas. The opposite of mixing is segregation; a classical example of segregation is the brazil nut effect. The type of operation and equipment used during mixing depends on the state of the materials being mixed; in this context, the act of mixing may be synonymous with stirring or kneading processes. Mixing of liquids occurs frequently in process engineering, and the nature of the liquids to be blended determines the equipment used. Turbulent or transitional mixing is conducted with turbines or impellers. Mixing of liquids that are miscible or at least soluble in each other occurs frequently in process engineering. An everyday example is the addition of milk or cream to tea or coffee; since both liquids are water-based, they dissolve easily in one another. The momentum of the liquid being added is sometimes enough to cause enough turbulence to mix the two, since the viscosity of the liquids is relatively low. If necessary, a spoon or paddle can be used to complete the mixing process. Blending a more viscous liquid, such as honey, requires more mixing power per unit volume to achieve the same homogeneity in the same amount of time. Blending powders is one of the oldest unit operations in the solids handling industries; for many decades powder blending has been used simply to homogenize bulk materials.
Many different machines have been designed to handle materials with various bulk solids properties. On the basis of the practical experience gained with these different machines, engineering knowledge has been developed to construct reliable equipment and to predict scale-up and mixing behavior. This wide range of applications of mixing equipment requires a high level of knowledge, long experience and extensive test facilities to arrive at the optimal selection of equipment. In powder mixing, two different dimensions of the process can be distinguished: convective mixing and intensive mixing. In the case of convective mixing, material in the mixer is transported from one location to another. This type of mixing leads to a less ordered state inside the mixer: the components that must be mixed are distributed over the other components. With progressing time the mixture becomes more randomly ordered; after a certain mixing time the ultimate random state is reached.
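Two standard quantities used when sizing agitated vessels are the impeller Reynolds number, which indicates whether mixing is laminar, transitional or turbulent, and the power draw in the turbulent regime. A rough sketch, where the power number Np and all fluid and impeller values are illustrative assumptions:

```python
# Sketch of standard agitated-vessel calculations; all values are assumptions.

def impeller_reynolds(rho, N, D, mu):
    """Impeller Reynolds number. rho [kg/m^3], N [rev/s], D [m], mu [Pa*s]."""
    return rho * N * D ** 2 / mu

def turbulent_power(Np, rho, N, D):
    """Power draw [W] in the fully turbulent regime: P = Np * rho * N^3 * D^5."""
    return Np * rho * N ** 3 * D ** 5

rho, mu = 1000.0, 1.0e-3     # water
N, D = 2.0, 0.5              # 2 rev/s, 0.5 m impeller diameter (assumed)
Re = impeller_reynolds(rho, N, D, mu)
P = turbulent_power(5.0, rho, N, D)   # Np ~ 5, a typical assumed value for a Rushton turbine
print(Re, P)   # Re = 500000 (well into the turbulent regime), P = 1250 W
```

The quadratic and fifth-power dependence on impeller diameter is why scale-up from laboratory to production vessels is non-trivial, as the text notes.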
Mixing (process engineering)
–
Machine for incorporating liquids and finely ground solids
Mixing (process engineering)
–
Schematics of an agitated vessel with a Rushton turbine and baffles
Mixing (process engineering)
–
A magnetic stirrer
Mixing (process engineering)
–
Axial flow impeller (left) and radial flow impeller (right).
14.
Liquid
–
A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a nearly constant volume independent of pressure. As such, it is one of the four fundamental states of matter. A liquid is made up of tiny vibrating particles of matter, such as atoms. Water is, by far, the most common liquid on Earth. Like a gas, a liquid is able to flow and take the shape of a container; most liquids resist compression, although others can be compressed. Unlike a gas, a liquid does not disperse to fill every space of a container. A distinctive property of the liquid state is surface tension, leading to wetting phenomena. The density of a liquid is usually close to that of a solid; therefore, liquid and solid are both termed condensed matter. On the other hand, as liquids and gases share the ability to flow, they are both called fluids. Although liquid water is abundant on Earth, this state of matter is actually the least common in the known universe, because liquids require a relatively narrow temperature/pressure range to exist. Most known matter in the universe is in gaseous form as interstellar clouds or in plasma form within stars. Liquid is one of the four states of matter, the others being solid, gas and plasma. Unlike a solid, the molecules in a liquid have a much greater freedom to move. The forces that bind the molecules together in a solid are only temporary in a liquid; a liquid, like a gas, displays the properties of a fluid. A liquid can flow and assume the shape of a container; if liquid is placed in a bag, it can be squeezed into any shape. These properties make a liquid suitable for applications such as hydraulics. Liquid particles are bound firmly but not rigidly; they are able to move around one another freely, resulting in a limited degree of particle mobility. As the temperature increases, the increased vibrations of the molecules cause the distances between the molecules to increase. When a liquid reaches its boiling point, the cohesive forces that bind the molecules closely together break.
If the temperature is decreased, the distances between the molecules become smaller. Only two elements are liquid at standard conditions for temperature and pressure: mercury and bromine. Four more elements have melting points slightly above room temperature: francium, caesium, gallium and rubidium. Metal alloys that are liquid at room temperature include NaK, a sodium-potassium alloy, galinstan, a fusible alloy liquid, and some amalgams.
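The "nearly incompressible" claim above can be quantified with the bulk modulus K: the fractional volume change under an applied pressure is approximately ΔV/V = ΔP/K. A small sketch using a commonly quoted approximate value of K for water:

```python
# Quantifying near-incompressibility via the bulk modulus K.
# K ~ 2.2 GPa is a commonly quoted approximate value for water.

def fractional_compression(dP, K):
    """Magnitude of the fractional volume change dV/V under pressure increase dP."""
    return dP / K

K_water = 2.2e9            # Pa (approximate)
dP = 10e6                  # 10 MPa, roughly 100 atmospheres
print(fractional_compression(dP, K_water))  # ~0.0045: under half a percent volume change
```

Even at a hundred atmospheres, water's volume shrinks by less than half a percent, which is why treating liquids as incompressible is usually an excellent approximation.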
Liquid
–
The formation of a spherical
droplet of liquid water minimizes the
surface area, which is the natural result of
surface tension in liquids.
Liquid
–
Thermal image of a sink full of hot water with cold water being added, showing how the hot and the cold water flow into each other.
Liquid
–
Surface waves in
water
15.
Surface tension
–
Surface tension is the elastic tendency of a fluid surface that makes it acquire the least surface area possible. Surface tension allows insects, usually denser than water, to float on a water surface. At liquid-air interfaces, surface tension results from the greater attraction of liquid molecules to each other than to the molecules in the air. The net effect is an inward force at the surface that causes the liquid to behave as if its surface were covered with a stretched elastic membrane. Thus, the surface comes under tension from the imbalanced forces. Because of the relatively high attraction of water molecules to each other through a web of hydrogen bonds, water has a higher surface tension than most other liquids. Surface tension is an important factor in the phenomenon of capillarity. Surface tension has the dimension of force per unit length, or of energy per unit area. The two are equivalent, but when referring to energy per unit of area, it is common to use the term surface energy. In materials science, surface tension is used for either surface stress or surface free energy. The cohesive forces among liquid molecules are responsible for the phenomenon of surface tension. In the bulk of the liquid, each molecule is pulled equally in every direction by neighboring liquid molecules; the molecules at the surface do not have the same molecules on all sides of them and are therefore pulled inwards. This creates some internal pressure and forces liquid surfaces to contract to the minimal area. Surface tension is responsible for the shape of liquid droplets. Although easily deformed, droplets of water tend to be pulled into a spherical shape by the imbalance in cohesive forces of the surface layer. In the absence of other forces, including gravity, drops of virtually all liquids would be approximately spherical. The spherical shape minimizes the necessary wall tension of the surface layer according to Laplace's law.
Another way to view surface tension is in terms of energy: a molecule in contact with a neighbor is in a lower state of energy than if it were alone. The interior molecules have as many neighbors as they can possibly have, while the boundary molecules are missing neighbors and so have higher energy. For the liquid to minimize its energy state, the number of higher-energy boundary molecules must be minimized. The minimized number of boundary molecules results in a minimal surface area. As a result of surface area minimization, a surface will assume the smoothest shape it can, since any curvature in the surface shape results in greater area and hence higher energy. Consequently, the surface will push back against any curvature in much the same way as a ball pushed uphill will push back to minimize its gravitational potential energy. Bubbles in pure water are unstable; the addition of surfactants, however, can have a stabilizing effect on the bubbles.
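Laplace's law, mentioned above, makes the connection between curvature and pressure concrete: the pressure excess inside a spherical droplet is ΔP = 2γ/R, so smaller droplets sustain larger pressure differences. A short sketch using the surface tension of water at room temperature:

```python
# Laplace pressure across a spherical liquid surface: dP = 2 * gamma / R.

def laplace_pressure(gamma, R):
    """Pressure difference [Pa] across a spherical droplet of radius R [m]."""
    return 2.0 * gamma / R

gamma_water = 0.0728           # N/m at 20 degrees C
for R in (1e-3, 1e-6):         # 1 mm and 1 micrometre droplets
    print(R, laplace_pressure(gamma_water, R))
# 1 mm droplet: ~146 Pa; 1 micrometre droplet: ~146 kPa
```

The thousand-fold increase in Laplace pressure at micrometre scale is why fine mists and emulsions behave so differently from bulk liquid.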
Surface tension
–
Surface tension preventing a paper clip from submerging.
Surface tension
–
A. Water beading on a leaf
Surface tension
–
C.
Water striders stay atop the liquid because of surface tension
16.
Capillary action
–
Capillary action is the ability of a liquid to flow in narrow spaces without the assistance of, or even in opposition to, external forces like gravity. It occurs because of intermolecular forces between the liquid and surrounding solid surfaces. If the diameter of the tube is sufficiently small, then the combination of surface tension and adhesive forces between the liquid and the container wall acts to lift the liquid. The first recorded observation of capillary action was by Leonardo da Vinci. A former student of Galileo, Niccolò Aggiunti, was said to have investigated capillary action. Boyle then reported an experiment in which he dipped a capillary tube into red wine and then subjected the tube to a partial vacuum. Some thought that liquids rose in capillaries because air couldn't enter capillaries as easily as liquids; others thought that the particles of liquid were attracted to each other and to the walls of the capillary. Thomas Young and Pierre-Simon Laplace derived the Young–Laplace equation of capillary action; by 1830, the German mathematician Carl Friedrich Gauss had determined the boundary conditions governing capillary action. In 1871, the British physicist William Thomson determined the effect of the meniscus on a liquid's vapor pressure, a relation known as the Kelvin equation; German physicist Franz Ernst Neumann subsequently determined the interaction between two immiscible liquids. Albert Einstein's first paper, which was submitted to Annalen der Physik in 1900, was on capillarity. A common apparatus used to demonstrate the phenomenon is the capillary tube: when the lower end of a glass tube is placed in a liquid such as water, adhesion occurs between the fluid and the inner wall, pulling the liquid column up until there is a sufficient mass of liquid for gravitational forces to overcome these intermolecular forces. Thus, a narrow tube will draw a liquid column higher than a wider tube will. Capillary action is essential for the drainage of constantly produced tear fluid from the eye. Wicking is the absorption of a liquid by a material in the manner of a candle wick.
Paper towels absorb liquid through capillary action, allowing a fluid to be transferred from a surface to the towel; the small pores of a sponge act as small capillaries, causing it to absorb a large amount of fluid. Some textile fabrics are said to use capillary action to wick sweat away from the skin; these are often referred to as wicking fabrics, after the capillary properties of candle and lamp wicks. Capillary action is observed in thin layer chromatography, in which a solvent moves vertically up a plate via capillary action; in this case the pores are gaps between very small particles. Capillary action draws ink to the tips of fountain pen nibs from a reservoir or cartridge inside the pen. In hydrology, capillary action describes the attraction of water molecules to soil particles. Capillary action is responsible for moving groundwater from wet areas of the soil to dry areas; differences in soil potential drive capillary action in soil. Thus, the thinner the space in which the water can travel, the further up it goes. For a water-filled glass tube in air at standard laboratory conditions, γ = 0.0728 N/m at 20 °C, ρ = 1000 kg/m³, and g = 9.81 m/s².
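These constants plug directly into Jurin's law for the equilibrium capillary rise height, h = 2γcos(θ)/(ρgr). A minimal sketch, assuming a contact angle of zero (a common idealization for water on clean glass):

```python
import math

# Jurin's law for capillary rise, using the constants quoted above
# for a water-filled glass tube at standard laboratory conditions.

def capillary_rise(gamma, theta_deg, rho, g, r):
    """Equilibrium rise height [m] in a tube of inner radius r [m]."""
    return 2.0 * gamma * math.cos(math.radians(theta_deg)) / (rho * g * r)

gamma, rho, g = 0.0728, 1000.0, 9.81   # N/m, kg/m^3, m/s^2
theta = 0.0                             # contact angle ~0 for clean glass (assumed)
for r in (1e-3, 1e-4):                  # tube radii: 1 mm and 0.1 mm
    print(r, capillary_rise(gamma, theta, rho, g, r))
# about 1.5 cm of rise in the 1 mm tube versus about 15 cm in the 0.1 mm tube
```

The inverse dependence on radius is exactly the "thinner the space, the further up it goes" behavior described above.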
Capillary action
–
Capillary flow experiment to investigate capillary flows and phenomena aboard the
International Space Station
Capillary action
–
Capillary action of
water compared to
mercury, in each case with respect to a polar surface such as glass
Capillary action
–
Water height in a capillary plotted against capillary diameter
Capillary action
–
Capillary flow in a brick, with a sorptivity of 5.0 mm min −1/2 and a porosity of 0.25.
17.
Atmosphere
–
An atmosphere is a layer of gases surrounding a planet or other material body that is held in place by the gravity of that body. An atmosphere is more likely to be retained if the gravity of the body is high and the atmosphere's temperature is low. The atmosphere of Earth is mostly composed of nitrogen, oxygen and argon, with carbon dioxide and other gases in trace amounts. The atmosphere helps protect living organisms from genetic damage by solar ultraviolet radiation, solar wind and cosmic rays. Its current composition is the product of billions of years of modification of the paleoatmosphere by living organisms. The term stellar atmosphere describes the outer region of a star. Stars with sufficiently low temperatures may form compound molecules in their outer atmosphere. Atmospheric pressure is the force per unit area that is applied perpendicularly to a surface by the surrounding gas. It is determined by the gravitational force in combination with the total mass of a column of gas above a location. On Earth, units of air pressure are based on the internationally recognized standard atmosphere, and pressure is measured with a barometer. The pressure of an atmospheric gas decreases with altitude due to the diminishing mass of gas above. The height at which the pressure from an atmosphere declines by a factor of e is called the scale height and is denoted by H. For such an atmosphere, the pressure declines exponentially with increasing altitude. However, atmospheres are not uniform in temperature, so the determination of the atmospheric pressure at any particular altitude is more complex. Surface gravity, the force that holds down an atmosphere, differs significantly among the planets. For example, the large gravitational force of the giant planet Jupiter is able to retain light gases such as hydrogen and helium that escape from objects with lower gravity. Distance from the Sun also matters, since it determines the energy available to heat atmospheric gas; thus, the distant and cold Titan, Triton, and Pluto are able to retain their atmospheres despite their relatively low gravities. Rogue planets, theoretically, may also retain thick atmospheres.
Since a collection of gas molecules may be moving at a wide range of velocities, there will always be some fast enough to produce a slow leakage of gas into space. Lighter molecules move faster than heavier ones with the same thermal kinetic energy, and so gases of low molecular weight are lost more rapidly. It is thought that Venus and Mars may have lost much of their water when, after being photodissociated into hydrogen and oxygen by solar ultraviolet radiation, the hydrogen escaped. Earth's magnetic field helps to prevent this, as, normally, the solar wind would greatly enhance the escape of hydrogen. However, over the past 3 billion years Earth may have lost gases through the polar regions due to auroral activity.
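The exponential pressure decline and scale height described above can be sketched directly. For an isothermal atmosphere, P(h) = P₀·exp(−h/H); the scale height H ≈ 8.5 km used below is a commonly quoted rough figure for Earth's lower atmosphere:

```python
import math

# Isothermal barometric formula: P(h) = P0 * exp(-h / H).
# H ~ 8.5 km is an assumed, commonly used rough scale height for Earth.

def pressure_at_altitude(P0, h, H):
    """Pressure [Pa] at altitude h [m] given sea-level pressure P0 and scale height H."""
    return P0 * math.exp(-h / H)

P0 = 101325.0          # Pa, standard sea-level pressure
H = 8500.0             # m, assumed scale height
print(pressure_at_altitude(P0, 8500.0, H))   # one scale height up: pressure falls to P0 / e
```

As the text notes, real atmospheres are not isothermal, so this is an idealization; more accurate profiles integrate temperature as a function of altitude.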
Atmosphere
–
Mars's thin atmosphere
Atmosphere
–
Earth's atmospheric
gases scatter blue light more than other wavelengths, giving the
Earth a blue halo when seen from
space.
18.
Combined gas law
–
The combined gas law is a gas law that combines Charles's law, Boyle's law, and Gay-Lussac's law. There is no official founder of this law; it is merely an amalgamation of the three previously discovered laws. These laws each relate one thermodynamic variable to another mathematically while holding everything else constant. Charles's law states that volume and temperature are directly proportional to each other as long as pressure is held constant. Boyle's law asserts that pressure and volume are inversely proportional to each other at fixed temperature. Finally, Gay-Lussac's law introduces a direct proportionality between temperature and pressure as long as the volume is constant. By combining these laws, we can gain a new equation relating P, V and T. If we divide Boyle's law by temperature and multiply Charles's law by pressure, we get PV/T = k₁(T) and PV/T = k₂(P). As the left-hand side of both equations is the same, we arrive at k₁(T) = k₂(P); since one side depends only on T and the other only on P, both must equal the same constant, so PV/T = k. Substituting in Avogadro's law yields the ideal gas equation. A derivation of the combined gas law using only elementary algebra can contain surprises. A physical derivation, longer but more reliable, begins by realizing that the constant-volume parameter in Gay-Lussac's law will change as the volume changes. At constant volume V₁ the law might appear P = k₁T. Rather, it should first be determined in what sense these equations are compatible with one another. To gain insight into this, recall that any two of the variables determine the third. Choosing P and V to be independent, we picture the T values forming a surface above the PV-plane. A definite V₀ and P₀ define a T₀, a point on that surface; the ratio of the slopes of the two lines through that point depends only on the value of P₀/V₀ there. Note that the form of this relation did not depend on the particular point chosen: the same formula would have arisen for any combination of P and V values.
Therefore, one can write kV/kP = P/V for all P and all V. This says that each point on the surface has its own pair of lines through it. Whereas the previous relation was between specific slopes and variable values, this is a relation between slope functions and function variables; it holds true for any point on the surface, i.e. for any and all combinations of P and V values. To solve this equation for the function kV, first separate the variables, V on the left and P on the right: V·kV = P·kP. Choose any pressure P₁. The right side then evaluates to some value; call it k_arb, so that V·kV = k_arb. This particular equation must now hold true not just for one value of V but for all V. The only definition of kV that guarantees this for all V and arbitrary k_arb is kV = k_arb/V, which may be verified by substitution.
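In practical use, the combined gas law P₁V₁/T₁ = P₂V₂/T₂ lets one solve for any one final state variable given the other five. A small sketch solving for the final temperature (the variable names and values are illustrative; temperatures must be absolute, i.e. in kelvin):

```python
# The combined gas law P1*V1/T1 = P2*V2/T2, solved for the final temperature.

def final_temperature(P1, V1, T1, P2, V2):
    """T2 such that P1*V1/T1 == P2*V2/T2. Temperatures in kelvin."""
    return T1 * (P2 * V2) / (P1 * V1)

# Example: halve the volume while doubling the pressure -> temperature unchanged.
print(final_temperature(P1=100.0, V1=2.0, T1=300.0, P2=200.0, V2=1.0))  # 300.0
```

The invariance in the example is exactly the statement that PV/T stays fixed for a closed gas sample.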
Combined gas law
–
Diving medicine:
19.
Viscoelasticity
–
Viscoelasticity is the property of materials that exhibit both viscous and elastic characteristics when undergoing deformation. Viscous materials, like honey, resist shear flow and strain linearly with time when a stress is applied; elastic materials strain when stretched and quickly return to their original state once the stress is removed. Viscoelastic materials have elements of both of these properties and, as such, exhibit time-dependent strain. In the nineteenth century, physicists such as Maxwell, Boltzmann, and Kelvin researched and experimented with creep and recovery of glasses and metals. Viscoelasticity was further examined in the late twentieth century when synthetic polymers were engineered and used in a variety of applications. Viscoelasticity calculations depend heavily on the viscosity variable, η; the inverse of η is also known as fluidity, φ. The value of either can be derived as a function of temperature or as a given value. Depending on the change of strain rate versus stress inside a material, the viscosity can be categorized as having a linear, non-linear, or plastic response. When a material exhibits a linear response it is categorized as a Newtonian material; in this case the stress is linearly proportional to the strain rate. If the material exhibits a non-linear response to the strain rate, it is categorized as a non-Newtonian fluid. There is also an interesting case where the viscosity decreases as the shear/strain rate remains constant; a material which exhibits this type of behavior is known as thixotropic. In addition, when the stress is independent of this strain rate, the material exhibits plastic deformation. Many viscoelastic materials exhibit rubber-like behavior explained by the theory of polymer elasticity. Some examples of viscoelastic materials include amorphous polymers, semicrystalline polymers, biopolymers, and metals at very high temperatures.
Cracking occurs when the strain is applied quickly and outside of the elastic limit. Ligaments and tendons are viscoelastic, so the extent of the potential damage to them depends both on the rate of the change of their length and on the force applied. The viscosity of a viscoelastic substance gives the substance a strain rate dependence on time. Purely elastic materials do not dissipate energy when a load is applied, then removed; a viscoelastic substance, however, loses energy when a load is applied. Hysteresis is observed in the stress–strain curve, with the area of the loop being equal to the energy lost during the loading cycle. Since viscosity is the resistance to thermally activated plastic deformation, a viscous material will lose energy through a loading cycle. Plastic deformation results in lost energy, which is uncharacteristic of a purely elastic material's reaction to a loading cycle. Specifically, viscoelasticity is a molecular rearrangement: when a stress is applied to a viscoelastic material such as a polymer, parts of the long polymer chain change positions. This movement or rearrangement is called creep. Polymers remain a solid material even when these parts of their chains are rearranging in order to accompany the stress, and as this occurs, it creates a back stress in the material.
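The time-dependent stress described above is often introduced through the Maxwell model, the simplest spring-dashpot description of viscoelasticity: held at constant strain, the stress relaxes as σ(t) = σ₀·exp(−t/τ) with relaxation time τ = η/E. A minimal sketch with illustrative parameter values:

```python
import math

# Maxwell-model stress relaxation under constant strain.
# All parameter values below are illustrative assumptions.

def maxwell_stress(sigma0, t, eta, E):
    """Stress [Pa] at time t [s] for viscosity eta [Pa*s] and modulus E [Pa]."""
    tau = eta / E            # relaxation time [s]
    return sigma0 * math.exp(-t / tau)

eta, E = 1.0e5, 1.0e3        # assumed viscosity and elastic modulus
sigma0 = 10.0                # initial stress [Pa]
for t in (0.0, 100.0, 500.0):
    print(t, maxwell_stress(sigma0, t, eta, E))
# at t = tau = 100 s the stress has decayed to sigma0 / e
```

A purely elastic solid would hold σ₀ forever and a purely viscous fluid would relax instantly; the finite relaxation time is the signature of viscoelastic behavior.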
Viscoelasticity
–
Stress–strain curves for a purely elastic material (a) and a viscoelastic material (b). The red area is a
hysteresis loop and shows the amount of energy lost (as heat) in a loading and unloading cycle. It is equal to, where is stress and is strain.
Viscoelasticity
–
Different types of responses () to a change in strain rate (d /dt)
20.
Rheometer
–
A rheometer is a laboratory device used to measure the way in which a liquid, suspension or slurry flows in response to applied forces. It is used for those fluids which cannot be defined by a single value of viscosity and therefore require more parameters to be set and measured; it measures the rheology of the fluid. There are two distinctively different types of rheometer. Rotational or shear rheometers are usually designed as either a native strain-controlled instrument or a native stress-controlled instrument. The word rheometer comes from the Greek, and means a device for measuring flow. In the 19th century it was used for devices to measure electric current, and it was also used for the measurement of the flow of liquids in medical practice; this latter use persisted to the second half of the 20th century in some areas. Following the coining of the term rheology, the word came to be applied to instruments for measuring the character rather than the quantity of flow. The principle and working of rheometers is described in several texts. A dynamic shear rheometer, commonly known as a DSR, is used for research and development. In a capillary rheometer, liquid is forced through a tube of constant cross-section and precisely known dimensions under conditions of laminar flow. Either the flow rate or the pressure drop is fixed and the other measured. Knowing the dimensions, the flow rate can be converted into a value for the shear rate; varying the pressure or flow allows a flow curve to be determined. In a rotational instrument, the liquid is placed within the annulus of one cylinder inside another, and one of the cylinders is rotated at a set speed. This determines the shear rate inside the annulus. The liquid tends to drag the other cylinder round, and the force it exerts on that cylinder is measured, which can be converted to a shear stress. One version of this is the Fann V-G Viscometer, which runs at two speeds and therefore gives only two points on the flow curve.
This is sufficient to define a Bingham plastic model, which used to be used in the oil industry for determining the flow character of drilling fluids. In recent years, rheometers that spin at 600, 300, 200, 100, 6 and 3 RPM have been used; this allows more complex fluid models, such as Herschel-Bulkley, to be fitted. Some models allow the speed to be increased and decreased in a programmed fashion. In a cone-and-plate instrument, the liquid is placed on a horizontal plate and a cone placed into it. The angle between the surface of the cone and the plate is around 1 to 2 degrees, but can vary depending on the types of tests being run. Typically the plate is rotated and the torque on the cone measured.
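The cone-and-plate geometry is popular because, for a small cone angle α, the shear rate is uniform across the gap: γ̇ = Ω/α, while the shear stress follows from the measured torque as τ = 3M/(2πR³). A sketch of the standard data reduction, with assumed instrument values:

```python
import math

# Cone-and-plate data reduction: uniform shear rate Omega/alpha and
# shear stress 3*M/(2*pi*R^3). All instrument values below are assumed.

def cone_plate_viscosity(M, Omega, R, alpha_rad):
    """Apparent viscosity [Pa*s]. M: torque [N*m], Omega: speed [rad/s], R: radius [m]."""
    shear_rate = Omega / alpha_rad
    shear_stress = 3.0 * M / (2.0 * math.pi * R ** 3)
    return shear_stress / shear_rate

# A 2 degree cone of 25 mm radius spinning at 10 rad/s with a measured torque of 0.1 mN*m:
eta = cone_plate_viscosity(M=1.0e-4, Omega=10.0, R=0.025, alpha_rad=math.radians(2.0))
print(eta)   # apparent viscosity in Pa*s
```

Because the shear rate is the same everywhere in the gap, a single measurement gives one well-defined point on the flow curve, which is exactly what is needed for non-Newtonian fluids.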
21.
Smart fluid
–
A smart fluid is a fluid whose properties can be changed by applying an electric field or a magnetic field. The most developed smart fluids today are fluids whose viscosity increases when a magnetic field is applied: small magnetic dipoles are suspended in the fluid, and the applied magnetic field causes these small magnets to line up. These magnetorheological or MR fluids have been used in the suspension of the 2002 model of the Cadillac Seville STS automobile and, more recently, other vehicles; depending on road conditions, the damping fluid's viscosity is adjusted. This is more expensive than traditional systems, but it provides better control. Some haptic devices whose resistance to touch can be controlled are also based on these MR fluids. Another major type of smart fluid is the electrorheological or ER fluid. Besides fast-acting clutches, brakes, shock absorbers and hydraulic valves, other, more esoteric, applications have been proposed. Still other smart fluids change their surface tension in the presence of an electric field. Other applications include brakes and seismic dampers, which are used in buildings in seismically active zones to damp the oscillations occurring in an earthquake. Since then, it appears that interest has waned a little, possibly due to the existence of various limitations of smart fluids which have yet to be overcome.
22.
Magnetorheological fluid
–
A magnetorheological fluid is a type of smart fluid consisting of magnetic particles suspended in a carrier fluid, usually a type of oil. When subjected to a magnetic field, the fluid greatly increases its apparent viscosity. Importantly, the yield stress of the fluid when in its active state can be controlled very accurately by varying the magnetic field intensity. The upshot is that the fluid's ability to transmit force can be controlled with an electromagnet. Extensive discussions of the physics and applications of MR fluids can be found in a recent book. MR fluid is different from a ferrofluid, which has smaller particles: MR fluid particles are primarily on the micrometre scale and are too dense for Brownian motion to keep them suspended, whereas ferrofluid particles are primarily nanoparticles that are suspended by Brownian motion and generally will not settle under normal conditions. As a result, these two fluids have different applications. When a magnetic field is applied, the particles align themselves along the lines of magnetic flux. To understand and predict the behavior of the MR fluid it is necessary to model the fluid mathematically, a task slightly complicated by the varying material properties. As mentioned above, smart fluids have a low viscosity in the absence of an applied field; in the case of MR fluids, the fluid actually assumes properties comparable to a solid when in the activated state. The behavior of a MR fluid can thus be considered similar to that of a Bingham plastic, a material model which has been well investigated. However, a MR fluid does not exactly follow the characteristics of a Bingham plastic; for example, below the yield stress, the fluid behaves as a viscoelastic material, with a complex modulus that is also known to be dependent on the magnetic field intensity. MR fluids are also known to be subject to shear thinning. Low shear strength has been the primary reason for the limited range of applications; in the absence of external pressure, the maximum shear strength is about 100 kPa.
Compressing the fluid in the field direction (for example with a compressive stress of 2 MPa) raises the shear strength considerably, and replacing the standard spherical magnetic particles with elongated magnetic particles also improves it. Ferroparticles settle out of the suspension over time due to the inherent density difference between the particles and their carrier fluid. The rate and degree to which this occurs is one of the primary attributes considered in industry when implementing or designing an MR device. Surfactants are typically used to offset this effect, but at a cost to the fluid's magnetic saturation, and thus to the maximum yield stress exhibited in its activated state.
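The Bingham-plastic description of an activated MR fluid can be sketched in a few lines. The linear dependence of yield stress on field strength used below is an assumption for illustration only (real MR fluids have a more complicated, saturating field dependence), as are all parameter values:

```python
# Bingham-plastic sketch of an MR fluid's active state: below the
# field-dependent yield stress the fluid resists like a solid; above it,
# it flows with plastic viscosity eta_p. The linear tau_y(H) = c * H
# dependence and all values are assumptions for illustration.

def mr_shear_stress(gamma_dot, H, eta_p=0.1, c=50.0):
    """Shear stress [Pa] at shear rate gamma_dot [1/s] and field strength H [kA/m]."""
    tau_y = c * H            # assumed field-dependent yield stress [Pa]
    return tau_y + eta_p * gamma_dot

# Switching the field on raises the stress needed to shear the fluid:
print(mr_shear_stress(10.0, H=0.0))    # 1.0 Pa, field off
print(mr_shear_stress(10.0, H=100.0))  # 5001.0 Pa, field on
```

This is the controllable yield stress that MR dampers and clutches exploit: the electromagnet current sets H, which sets the stress the fluid can transmit.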
Magnetorheological fluid
–
Schematic of a magnetorheological fluid solidifying and blocking a pipe in response to an external magnetic field. (Animated version available.)
23.
Electrorheological fluid
–
Electrorheological fluids are suspensions of extremely fine non-conducting but electrically active particles in an electrically insulating fluid. The apparent viscosity of these fluids changes reversibly by a factor of up to 100,000 in response to an electric field. For example, a typical ER fluid can go from the consistency of a liquid to that of a gel, and back, with response times on the order of milliseconds. The effect is called the Winslow effect after its discoverer, the American inventor Willis Winslow. Other common applications are in ER brakes and shock absorbers, and there are many novel uses for these fluids; potential uses are in accurate abrasive polishing and as haptic controllers. Motorola filed a patent application for mobile device applications in 2006. The change in apparent viscosity is dependent on the applied electric field. The change is not a simple change in viscosity, hence these fluids are now known as ER fluids; the effect is better described as an electric field dependent shear yield stress. When activated, an ER fluid behaves as a Bingham plastic, with a yield point which is determined by the electric field strength. After the yield point is reached, the fluid shears as a fluid; hence the resistance to motion of the fluid can be controlled by adjusting the applied electric field. ER fluids are a type of smart fluid. A simple ER fluid can be made by mixing cornflour in a light vegetable oil or silicone oil. There are two theories to explain the effect: the interfacial tension or water bridge theory, and the electrostatic theory. The water bridge theory assumes a three-phase system: the particles contain the third phase, which is another liquid immiscible with the main phase liquid. With no applied electric field, the third phase is strongly attracted to and held within the particles.
This means the ER fluid is a suspension of particles which behaves as a liquid. When an electric field is applied, the third phase is driven to one side of the particles by electro-osmosis and binds adjacent particles together to form chains; this chain structure means the ER fluid has become a solid. The electrostatic theory assumes just a two-phase system, with dielectric particles forming chains aligned with an electric field, in an analogous way to how magnetorheological fluids work. An ER fluid has been constructed with the solid phase made from a conductor coated in an insulator; this ER fluid clearly cannot work by the water bridge model. However, although this demonstrates that some ER fluids work by the electrostatic effect, it does not prove that all do. The advantage of having an ER fluid which operates on the electrostatic effect is the elimination of leakage current, i.e. potentially there is no direct current.
24.
Daniel Bernoulli
–
Daniel Bernoulli FRS was a Swiss mathematician and physicist and was one of the many prominent mathematicians in the Bernoulli family. He is particularly remembered for his applications of mathematics to mechanics, especially fluid mechanics. Daniel Bernoulli was born in Groningen, in the Netherlands, into a family of distinguished mathematicians. The Bernoulli family came originally from Antwerp, at that time in the Spanish Netherlands. After a brief period in Frankfurt the family moved to Basel. Daniel was the son of Johann Bernoulli and a nephew of Jacob Bernoulli, and he had two brothers, Niklaus and Johann II. Daniel Bernoulli was described by W. W. Rouse Ball as "by far the ablest of the younger Bernoullis". He is said to have had a bad relationship with his father: Johann Bernoulli plagiarized some key ideas from Daniel's book Hydrodynamica in his own book Hydraulica, which he backdated to before Hydrodynamica. Despite Daniel's attempts at reconciliation, his father carried the grudge until his death. Around schooling age, his father, Johann, encouraged him to study business, there being poor rewards awaiting a mathematician. Daniel refused at first, because he wanted to study mathematics, but he later gave in to his father's wish and studied business. His father then asked him to study medicine, and Daniel agreed on the condition that his father would teach him mathematics privately. Daniel studied medicine at Basel, Heidelberg, and Strasbourg, and earned a PhD in anatomy and botany in 1721. He was a contemporary and close friend of Leonhard Euler. He went to St. Petersburg in 1724 as professor of mathematics, but was very unhappy there, and a temporary illness in 1733 gave him an excuse for leaving St. Petersburg. He returned to the University of Basel, where he successively held the chairs of medicine, metaphysics, and natural philosophy. In May 1750 he was elected a Fellow of the Royal Society. His earliest mathematical work was the Exercitationes, published in 1724 with the help of Goldbach.
Two years later he pointed out for the first time the frequent desirability of resolving a compound motion into motions of translation and motions of rotation. Together, Bernoulli and Euler tried to discover more about the flow of fluids. In particular, they wanted to know about the relationship between the speed at which blood flows and its pressure; soon physicians all over Europe were measuring patients' blood pressure by sticking point-ended glass tubes directly into their arteries. It was not until about 170 years later, in 1896, that an Italian doctor discovered a less painful method which is still in use today. However, Bernoulli's method of measuring pressure is used today in modern aircraft to measure the speed of the air passing the plane. Taking his discoveries further, Daniel Bernoulli then returned to his work on conservation of energy. It was known that a moving body exchanges its kinetic energy for potential energy when it gains height.
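The pressure-speed relationship Bernoulli and Euler studied is what a modern aircraft's Pitot tube exploits: the difference between stagnation and static pressure equals the dynamic pressure (1/2)ρv², so a pressure measurement yields an airspeed. A minimal sketch, assuming incompressible flow and a standard sea-level air density:

```python
import math

# Sketch of the Bernoulli relation behind a Pitot tube: the measured pressure
# difference delta_p equals (1/2) * rho * v**2, so v = sqrt(2 * delta_p / rho).
# Incompressible flow is assumed; rho is the standard sea-level air density.

def airspeed(delta_p, rho=1.225):
    """Airspeed (m/s) from pressure difference delta_p (Pa) and density rho (kg/m^3)."""
    return math.sqrt(2.0 * delta_p / rho)

# A measured pressure difference of 2000 Pa at sea level:
v = airspeed(2000.0)
print(round(v, 1))  # 57.1 (m/s)
```

The same relation explains why the method works only for speeds well below the speed of sound, where air compressibility can be neglected.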
25.
Jacques Charles
–
Jacques Alexandre César Charles was a French inventor, scientist, mathematician, and balloonist. He was sometimes called Charles the Geometer, and his and the Robert brothers' pioneering use of hydrogen for lift led to this type of balloon being named a Charlière. Charles's law, describing how gases tend to expand when heated, was formulated by Joseph Louis Gay-Lussac in 1802, who attributed it to unpublished work by Charles. Charles was elected to the Académie des Sciences in 1795 and subsequently became professor of physics there. Charles was born in Beaugency-sur-Loire in 1746. He married Julie Françoise Bouchaud des Hérettes; Charles outlived her and died in Paris on April 7, 1823. Jacques Charles and the Robert brothers launched the world's first hydrogen-filled balloon on August 27, 1783, from the Champ de Mars. They had used alternate strips of red and white silk, but the discolouration of the rubberising process left a red and yellow result. The balloon was comparatively small, a 35 cubic metre sphere of rubberised silk, and only capable of lifting about 9 kg. It was filled with hydrogen that had been made by pouring nearly a quarter of a tonne of sulphuric acid onto half a tonne of scrap iron. The hydrogen gas was fed into the balloon via lead pipes. Daily progress bulletins were issued on the inflation, and the crowd was so great that on the 26th the balloon was moved secretly by night to the Champ de Mars, a distance of 4 kilometres. The project was funded by a subscription organised by Barthelemy Faujas de Saint-Fond. At 13:45 on December 1, 1783, Jacques Charles and the Robert brothers launched a new manned balloon from the Jardin des Tuileries in Paris. Jacques Charles was accompanied by Nicolas-Louis Robert as co-pilot of the 380-cubic-metre balloon. The envelope was fitted with a hydrogen release valve and was covered with a net from which the basket was suspended. Sand ballast was used to control altitude. They ascended to a height of about 1,800 feet and landed at sunset in Nesles-la-Vallée after a 2-hour 5-minute flight covering 36 km.
The chasers on horseback were led by the Duc de Chartres. Jacques Charles then decided to ascend again, but alone this time, because the balloon had lost some of its hydrogen. This time he ascended rapidly to an altitude of about 3,000 metres, where he began suffering from aching pain in his ears, so he valved to release gas and descended to land gently about 3 km away at Tour du Lay. Unlike the Robert brothers, Charles never flew again, although a hydrogen balloon came to be called a Charlière in his honour. Among the special enclosure crowd was Benjamin Franklin, the diplomatic representative of the United States of America. Also present was Joseph Montgolfier, whom Charles honoured by asking him to release the small, bright green pilot balloon used to assess the wind and weather conditions. This event took place ten days after the world's first manned flight by Jean-François Pilâtre de Rozier using a Montgolfier brothers hot air balloon. Simon Schama wrote in Citizens that Montgolfier's principal scientific collaborator was M. Charles, who had been the first to propose using the gas produced by vitriol instead of the burning, dampened straw and wood that Montgolfier had used in earlier flights. Charles himself was eager to ascend but had run into a firm veto from the King.
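Charles's law, mentioned above, states that at constant pressure the volume of a fixed amount of gas is proportional to its absolute temperature, V/T = constant. A small sketch of the calculation; the temperatures and the 35 m³ figure (borrowed from the balloon description only as a convenient number) are illustrative:

```python
# Sketch of Charles's law at constant pressure: V1 / T1 = V2 / T2, with
# temperatures in kelvin. The numbers below are illustrative, not historical data.

def charles_volume(v1, t1_kelvin, t2_kelvin):
    """Volume after changing a gas at constant pressure from t1 to t2 (kelvin)."""
    return v1 * t2_kelvin / t1_kelvin

# Warming 35 m^3 of gas from 280 K to 300 K expands it proportionally:
print(round(charles_volume(35.0, 280.0, 300.0), 2))  # 37.5
```

Note that the law requires absolute temperature; using degrees Celsius directly would give the wrong ratio.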
Jacques Charles
–
Jacques Alexandre César Charles, 1820
Jacques Charles
–
The balloon built by Jacques Charles and the Robert brothers is attacked by terrified villagers in Gonesse.
Jacques Charles
–
Contemporary illustration of the first flight by Prof. Jacques Charles with Nicolas-Louis Robert, December 1, 1783. Viewed from the
Place de la Concorde to the
Tuileries Palace (destroyed in 1871)
Jacques Charles
–
Meusnier's dirigible
26.
Robert Hooke
–
Robert Hooke FRS was an English natural philosopher, architect and polymath. Allan Chapman has characterised him as "England's Leonardo", and Robert Gunther's Early Science in Oxford, a history of science in Oxford during the Protectorate, Restoration and Age of Enlightenment, devotes five of its fourteen volumes to Hooke. Hooke studied at Wadham College, Oxford during the Protectorate, where he became one of a tightly knit group of ardent Royalists led by John Wilkins. Here he was employed as an assistant to Thomas Willis and to Robert Boyle, and he built some of the earliest Gregorian telescopes and observed the rotations of Mars and Jupiter. In 1665 he inspired the use of microscopes for scientific exploration with his book Micrographia. Based on his microscopic observations of fossils, Hooke was an early proponent of biological evolution. Much of Hooke's scientific work was conducted in his capacity as curator of experiments of the Royal Society. Much of what is known of Hooke's early life comes from an autobiography that he commenced in 1696 but never completed. Richard Waller mentions it in his introduction to The Posthumous Works of Robert Hooke; the work of Waller, along with John Ward's Lives of the Gresham Professors and John Aubrey's Brief Lives, forms the major near-contemporaneous biographical accounts of Hooke. Robert Hooke was born in 1635 in Freshwater on the Isle of Wight to John Hooke. Robert was the last of four children, two boys and two girls, and there was an age difference of seven years between him and the next youngest. Their father John was a Church of England priest, the curate of Freshwater's Church of All Saints, and Robert Hooke was expected to succeed in his education and join the Church. John Hooke also was in charge of a school, and so was able to teach Robert. He was a Royalist and almost certainly a member of a group who went to pay their respects to Charles I when he escaped to the Isle of Wight; Robert, too, grew up to be a staunch monarchist.
As a youth, Robert Hooke was fascinated by observation, mechanical works and drawing. He dismantled a brass clock and built a wooden replica that, by all accounts, worked well enough, and he learned to draw, making his own materials from coal, chalk and ruddle. Hooke quickly mastered Latin and Greek and made some study of Hebrew. Here, too, he embarked on his study of mechanics. It appears that Hooke was one of a group of students whom Busby educated in parallel to the work of the school; contemporary accounts say he was not much seen in the school. In 1653, Hooke secured a chorister's place at Christ Church, Oxford. He was employed as an assistant to Dr Thomas Willis. There he met the natural philosopher Robert Boyle, and gained employment as his assistant from about 1655 to 1662, constructing, operating and demonstrating Boyle's air pump. He did not take his Master of Arts until 1662 or 1663.
Robert Hooke
–
Modern portrait of Robert Hooke (Rita Greer 2004), based on descriptions by
Aubrey and
Waller; no contemporary depictions of Hooke are known to survive.
Robert Hooke
–
Memorial portrait of Robert Hooke at
Alum Bay,
Isle of Wight, his birthplace, by Rita Greer (2012).
Robert Hooke
–
Robert Boyle
Robert Hooke
–
Diagram of a
louse from Hooke's
Micrographia
27.
Isaac Newton
–
Isaac Newton's book Philosophiæ Naturalis Principia Mathematica, first published in 1687, laid the foundations of classical mechanics. Newton also made contributions to optics, and he shares credit with Gottfried Wilhelm Leibniz for developing the infinitesimal calculus. Newton's Principia formulated the laws of motion and universal gravitation that dominated scientists' view of the universe for the next three centuries. Newton's work on light was collected in his influential book Opticks. He also formulated a law of cooling and made the first theoretical calculation of the speed of sound. Newton was a fellow of Trinity College and the second Lucasian Professor of Mathematics at the University of Cambridge. Politically and personally tied to the Whig party, Newton served two brief terms as Member of Parliament for the University of Cambridge, in 1689–90 and 1701–02. He was knighted by Queen Anne in 1705, and he spent the last three decades of his life in London, serving as Warden and Master of the Royal Mint. His father, also named Isaac Newton, had died three months before his birth. Born prematurely, he was a small child; his mother Hannah Ayscough reportedly said that he could have fit inside a quart mug. When Newton was three, his mother remarried and went to live with her new husband, the Reverend Barnabas Smith, leaving her son in the care of his maternal grandmother. Newton's mother had three children from her second marriage. From the age of twelve until he was seventeen, Newton was educated at The King's School, Grantham, which taught Latin and Greek. He was removed from school, and by October 1659 he was to be found at Woolsthorpe-by-Colsterworth. Henry Stokes, master at The King's School, persuaded his mother to send him back to school so that he might complete his education. Motivated partly by a desire for revenge against a schoolyard bully, he became the top-ranked student.
In June 1661, he was admitted to Trinity College, Cambridge. He started as a subsizar, paying his way by performing valet's duties, until he was awarded a scholarship in 1664, which guaranteed him four more years until he could take his M.A. He set down in his notebook a series of Quaestiones about mechanical philosophy as he found it. In 1665, he discovered the generalised binomial theorem and began to develop a mathematical theory that later became calculus. Soon after Newton had obtained his B.A. degree in August 1665, the university temporarily closed as a precaution against the Great Plague. In April 1667, he returned to Cambridge and in October was elected as a fellow of Trinity. Fellows were required to become ordained priests, although this was not enforced in the Restoration years. By 1675, however, the issue could not be avoided, and by then his unconventional views stood in the way. Nevertheless, Newton managed to avoid ordination by means of a special permission from Charles II. He was elected a Fellow of the Royal Society in 1672. Newton's work has been said to distinctly advance every branch of mathematics then studied. His work on the subject usually referred to as fluxions or calculus, seen in a manuscript of October 1666, is now published among Newton's mathematical papers.
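The generalised binomial theorem mentioned above extends (1 + x)^a to non-integer exponents a as an infinite series, convergent for |x| < 1; Newton used it to compute roots. A minimal sketch of the expansion:

```python
# Sketch of Newton's generalised binomial series: (1 + x)^a expanded with a
# possibly non-integer exponent a. The k-th coefficient is
# a*(a-1)*...*(a-k+1)/k!, built up incrementally; converges for |x| < 1.

def binomial_series(a, x, terms=20):
    total, coeff = 0.0, 1.0  # coeff starts as C(a, 0) = 1
    for k in range(terms):
        total += coeff * x ** k
        coeff *= (a - k) / (k + 1)  # update C(a, k) -> C(a, k+1)
    return total

# With a = 1/2 the series approximates a square root, e.g. sqrt(1.2):
print(round(binomial_series(0.5, 0.2), 6))  # 1.095445
```

For integer a the series terminates after a + 1 terms and reduces to the ordinary binomial theorem, which is the sense in which Newton's result generalises it.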
Isaac Newton
–
Portrait of Isaac Newton in 1689 (age 46) by
Godfrey Kneller
Isaac Newton
–
Newton in a 1702 portrait by
Godfrey Kneller
Isaac Newton
–
Isaac Newton (Bolton, Sarah K. Famous Men of Science. NY: Thomas Y. Crowell & Co., 1889)
Isaac Newton
–
Replica of Newton's second
Reflecting telescope that he presented to the
Royal Society in 1672
28.
Claude-Louis Navier
–
Claude-Louis Navier was a French engineer and physicist who specialized in mechanics. The Navier–Stokes equations are named after him and George Gabriel Stokes. After the death of his father in 1793, Navier's mother left his education in the hands of his uncle Émiland Gauthey, an engineer with the Corps of Bridges and Roads. In 1802, Navier enrolled at the École polytechnique, and in 1804 continued his studies at the École Nationale des Ponts et Chaussées. He eventually succeeded his uncle as Inspecteur général at the Corps des Ponts et Chaussées. He directed the construction of bridges at Choisy, Asnières and Argenteuil in the Department of the Seine. In 1824, Navier was admitted into the French Academy of Sciences. Navier formulated the general theory of elasticity in a mathematically usable form, and he is therefore considered to be the founder of modern structural analysis. His major contribution, however, remains the Navier–Stokes equations, central to fluid mechanics. His name is one of the 72 names inscribed on the Eiffel Tower.
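For reference, the equations that bear his name can be stated in their standard incompressible form, with velocity field u, pressure p, density ρ, dynamic viscosity μ, and body force per unit mass f:

```latex
% Incompressible Navier–Stokes momentum equation with the continuity
% (incompressibility) constraint, in standard notation.
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \rho\,\mathbf{f},
\qquad \nabla\cdot\mathbf{u} = 0
```

The left-hand side is the fluid's inertia following the flow; the right-hand side balances pressure gradients, viscous diffusion, and body forces such as gravity.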
Claude-Louis Navier
–
Bust of Claude Louis Marie Henri Navier at the
École Nationale des Ponts et Chaussées
29.
Mechanics
–
Mechanics is an area of science concerned with the behaviour of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment. The scientific discipline has its origins in Ancient Greece with the writings of Aristotle. During the early modern period, scientists such as Khayyam, Galileo, Kepler, and Newton laid the foundation for what is now known as classical mechanics. It is a branch of physics that deals with particles that are either at rest or are moving with velocities significantly less than the speed of light. It can also be defined as a branch of science which deals with the motion of, and forces on, bodies. Historically, classical mechanics came first, while quantum mechanics is a comparatively recent invention. Classical mechanics originated with Isaac Newton's laws of motion in Philosophiæ Naturalis Principia Mathematica; both are commonly held to constitute the most certain knowledge that exists about physical nature. Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Essential in this respect is the relentless use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them. Quantum mechanics is of a broader scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects; each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers. Quantum mechanics has superseded classical mechanics at the foundational level and is indispensable for the explanation and prediction of processes at the molecular, atomic and subatomic levels. However, for macroscopic processes classical mechanics is able to solve problems which are difficult in quantum mechanics and hence remains useful.
Modern descriptions of such behavior begin with a careful definition of such quantities as displacement, time, velocity, acceleration, mass, and force. Until about 400 years ago, however, motion was explained from a very different point of view. Galileo showed that the speed of falling objects increases steadily during the time of their fall, and that this acceleration is the same for heavy objects as for light ones, provided air friction is discounted. The English mathematician and physicist Isaac Newton improved this analysis by defining force and mass and relating them to acceleration. For objects traveling at speeds close to the speed of light, Newton's laws were superseded by Albert Einstein's theory of relativity. For atomic and subatomic particles, Newton's laws were superseded by quantum theory. For everyday phenomena, however, Newton's three laws of motion remain the cornerstone of dynamics, which is the study of what causes motion. In analogy to the distinction between quantum and classical mechanics, Einstein's general and special theories of relativity have expanded the scope of Newtonian mechanics. The differences between relativistic and Newtonian mechanics become significant and even dominant as the velocity of a massive body approaches the speed of light. Relativistic corrections are also needed for quantum mechanics; general relativity, however, has not been integrated with quantum theory, and the two remain incompatible, a hurdle which must be overcome in developing a theory of everything.
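The claim that relativistic differences become dominant only near the speed of light can be made concrete by comparing Newtonian kinetic energy, (1/2)mv², with the relativistic expression, (γ − 1)mc². A short sketch; the mass and speed fractions are arbitrary illustrative values:

```python
import math

# Compare Newtonian and relativistic kinetic energy for the same mass and
# speed: at everyday speeds the two agree closely, near the speed of light
# the relativistic value dominates.

C = 299_792_458.0  # speed of light, m/s

def ke_newton(m, v):
    return 0.5 * m * v ** 2

def ke_relativistic(m, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

m = 1.0  # kg (illustrative)
for frac in (0.001, 0.1, 0.9):
    v = frac * C
    ratio = ke_relativistic(m, v) / ke_newton(m, v)
    print(f"v = {frac} c, relativistic/Newtonian = {ratio:.4f}")
```

At 0.001c the ratio is indistinguishable from 1, which is why Newton's laws remain the cornerstone of everyday dynamics; by 0.9c the relativistic energy is more than three times the Newtonian estimate.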
Mechanics
–
Arabic Machine Manuscript. Unknown date (at a guess: 16th to 19th centuries).
30.
Force
–
In physics, a force is any interaction that, when unopposed, will change the motion of an object. In other words, a force can cause an object with mass to change its velocity. Force can also be described intuitively as a push or a pull. A force has both magnitude and direction, making it a vector quantity; it is measured in the SI unit of newtons and represented by the symbol F. The original form of Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. In an extended body, each part usually applies forces on the adjacent parts; such internal mechanical stresses cause no acceleration of that body, as the forces balance one another. Pressure, the distribution of many small forces applied over an area of a body, is a simple type of stress that, if unbalanced, can cause the body to accelerate. Stress usually causes deformation of materials, or flow in fluids. Early philosophers held fundamental misconceptions about force, in part due to an incomplete understanding of the sometimes non-obvious force of friction. A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Sir Isaac Newton formulated laws of motion that were not improved on for nearly three hundred years. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known; in order of decreasing strength, they are: strong, electromagnetic, weak, and gravitational. High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction. Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines.
The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes, who was especially famous for formulating a treatment of buoyant forces inherent in fluids. Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different natural places therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place on the ground. He distinguished between the innate tendency of objects to find their natural place, which led to natural motion, and unnatural or forced motion.
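The force-for-distance trade-off of mechanical advantage can be sketched with an ideal lever: the effort needed scales with the ratio of the lever arms, while the work (force times distance) on each side stays equal. The numbers below are illustrative:

```python
# Sketch of mechanical advantage on an ideal (frictionless, rigid) lever:
# a smaller effort balances a larger load when its lever arm is longer,
# but it must move proportionally farther, so work is conserved.

def effort_force(load, load_arm, effort_arm):
    """Effort (N) needed to balance a load (N) on an ideal lever."""
    return load * load_arm / effort_arm

load = 600.0      # N, weight being lifted (illustrative)
load_arm = 0.5    # m, load's distance from the fulcrum
effort_arm = 2.0  # m, effort's distance from the fulcrum

f = effort_force(load, load_arm, effort_arm)
print(f)  # 150.0 -- a quarter of the load, for a 4:1 arm ratio

# Work balances: lifting the load 0.1 m requires the effort to move 0.4 m.
print(load * 0.1 == f * 0.4)  # True
```

The same accounting applies to pulleys, inclined planes, and the other simple machines mentioned above.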
Force
–
Aristotle famously described a force as anything that causes an object to undergo "unnatural motion"
Force
–
Forces are also described as a push or pull on an object. They can be due to phenomena such as
gravity,
magnetism, or anything that might cause a mass to accelerate.
Force
–
Though Sir Isaac Newton's most famous equation is F = ma, he actually wrote down a different form for his second law of motion that did not use differential calculus.
Force
–
Galileo Galilei was the first to point out the inherent contradictions contained in Aristotle's description of forces.
31.
Geophysics
–
Although geophysics was only recognized as a separate discipline in the 19th century, its origins date back to ancient times. The first magnetic compasses were made from lodestones, and more modern magnetic compasses played an important role in the history of navigation; the first seismic instrument was built in 132 BC. Geophysics is applied to societal needs, such as mineral resources, mitigation of natural hazards and environmental protection. Geophysics is a highly interdisciplinary subject, and geophysicists contribute to every area of the Earth sciences. To provide an idea of what constitutes geophysics, this section describes phenomena that are studied in physics and how they relate to the Earth. The gravitational pull of the Moon and Sun gives rise to two high tides and two low tides every lunar day, or every 24 hours and 50 minutes; therefore, there is a gap of 12 hours and 25 minutes between every high tide and between every low tide. Gravitational forces make rocks press down on deeper rocks, increasing their density as the depth increases. Measurements of gravitational acceleration and gravitational potential are made at the Earth's surface; the surface gravitational field provides information on the dynamics of tectonic plates. The geopotential surface called the geoid is one definition of the shape of the Earth; the geoid would be the global mean sea level if the oceans were in equilibrium and could be extended through the continents. The Earth is cooling, and the resulting heat flow generates the Earth's magnetic field through the geodynamo. The main sources of heat are the primordial heat and radioactivity, and some heat is carried up from the bottom of the mantle by mantle plumes. The heat flow at the Earth's surface is about 4.2 × 10¹³ W, and it is a potential source of geothermal energy. Seismic waves are vibrations that travel through the Earth's interior or along its surface; the entire Earth can also oscillate in forms that are called normal modes or free oscillations of the Earth.
Ground motions from waves or normal modes are measured using seismographs. If the waves come from a localized source such as an earthquake or explosion, measurements at more than one location can be used to locate the source. The locations of earthquakes provide information on plate tectonics and mantle convection, and measurements of seismic waves are a source of information on the region that the waves travel through. If the density or composition of the rock changes suddenly, some waves are reflected; reflections can provide information on near-surface structure. Changes in the direction of the waves, called refraction, can be used to infer the deep structure of the Earth. Earthquakes pose a risk to humans; understanding their mechanisms, which depend on the type of earthquake, can lead to better estimates of earthquake risk and improvements in earthquake engineering.
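The tidal periods quoted above follow from simple arithmetic: while the Earth completes one rotation, the Moon has moved on in its orbit, so the Earth must rotate a little extra for the Moon to return to the same meridian. A sketch, using the synodic month as the Moon's apparent period:

```python
# Sketch of why a lunar day is about 24 h 50 min: each solar day the Moon
# advances 1/29.53 of a circuit (synodic month), so the Earth needs that
# extra fraction of a rotation to catch up. Two tidal bulges then put
# successive high tides half a lunar day apart.

SOLAR_DAY_H = 24.0
SYNODIC_MONTH_DAYS = 29.53

lunar_day_h = SOLAR_DAY_H / (1.0 - 1.0 / SYNODIC_MONTH_DAYS)

minutes = (lunar_day_h - 24.0) * 60.0
print(f"lunar day ~ 24 h {minutes:.0f} min")          # ~ 24 h 50 min
print(f"gap between high tides ~ {lunar_day_h / 2:.2f} h")  # ~ 12.42 h
```

The half-lunar-day gap of about 12.42 hours is the 12 hours 25 minutes between successive high tides stated above.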
Geophysics
–
Illustration of the deformations of a block by body waves and surface waves (see
seismic wave).
Geophysics
–
Age of the sea floor. Much of the dating information comes from magnetic anomalies.
Geophysics
–
Replica of
Zhang Heng's seismoscope, possibly the first contribution to
seismology.
32.
Astrophysics
–
Astrophysics is the branch of astronomy that employs the principles of physics and chemistry to ascertain the nature of the heavenly bodies, rather than their positions or motions in space. Among the objects studied are the Sun, other stars, galaxies, extrasolar planets and the interstellar medium. Their emissions are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Although astronomy is as ancient as recorded history itself, it was long separated from the study of terrestrial physics. The challenge for early natural philosophers was that the tools had not yet been invented with which to prove their assertions about the heavens. For much of the nineteenth century, astronomical research was focused on the routine work of measuring the positions and computing the motions of astronomical objects. Kirchhoff deduced that the dark lines in the solar spectrum are caused by absorption by chemical elements in the solar atmosphere. In this way it was proved that the chemical elements found in the Sun were also found on Earth. Among those who extended the study of solar and stellar spectra was Norman Lockyer, who in 1868 observed a line in the solar spectrum that matched no known element; he thus claimed the line represented a new element, which was called helium, after the Greek Helios, the Sun personified. By 1890, a catalog of over 10,000 stars had been prepared that grouped them into thirteen spectral types. Most significantly, Cecilia Payne later discovered that hydrogen and helium were the principal components of stars. This discovery was so unexpected that her dissertation readers convinced her to modify the conclusion before publication; however, later research confirmed her discovery.
By the end of the 20th century, studies of astronomical spectra had expanded to cover wavelengths extending from radio waves through optical and X-ray to gamma wavelengths. Observational astrophysics is the practice of observing celestial objects by using telescopes and other astronomical apparatus. The majority of observations are made using the electromagnetic spectrum. Radio astronomy studies radiation with a wavelength greater than a few millimeters. The study of these waves requires very large radio telescopes. Infrared astronomy studies radiation with a wavelength that is too long to be visible to the naked eye but is shorter than radio waves. Infrared observations are made with telescopes similar to the familiar optical telescopes. Objects colder than stars are usually studied at infrared frequencies. Optical astronomy is the oldest kind of astronomy; telescopes paired with a charge-coupled device or spectroscopes are the most common instruments used. The Earth's atmosphere interferes somewhat with optical observations, so adaptive optics and space telescopes are used to obtain the highest possible image quality. In this wavelength range, stars are highly visible, and many chemical spectra can be observed to study the chemical composition of stars, galaxies and nebulae.
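The observing bands described above are all tied together by the relation c = λf between wavelength and frequency. A small sketch placing a representative wavelength from each band on the spectrum; the example wavelengths are rough, conventional figures, not band definitions:

```python
# Sketch of the wavelength-frequency relation c = lambda * f for representative
# wavelengths in the bands discussed above. The sample wavelengths are
# illustrative round numbers, not official band boundaries.

C = 299_792_458.0  # speed of light, m/s

def frequency_hz(wavelength_m):
    return C / wavelength_m

bands = {
    "radio (1 cm)": 1e-2,
    "infrared (10 um)": 1e-5,
    "optical (500 nm)": 5e-7,
    "X-ray (1 nm)": 1e-9,
}
for name, lam in bands.items():
    print(f"{name:18s} ~ {frequency_hz(lam):.3e} Hz")
```

The span of roughly eight orders of magnitude in frequency is why each band needs its own instruments, from kilometre-scale radio dishes to X-ray telescopes in orbit.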
Astrophysics
–
Early 20th-century comparison of elemental, solar, and stellar spectra
Astrophysics
–
Supernova remnant LMC N 63A imaged in x-ray (blue), optical (green) and radio (red) wavelengths. The X-ray glow is from material heated to about ten million degrees Celsius by a shock wave generated by the supernova explosion.
Astrophysics
–
The stream lines on this simulation of a
supernova show the flow of matter behind the shock wave, giving clues as to the origin of pulsars.
33.
Biology
–
Biology is a natural science concerned with the study of life and living organisms, including their structure, function, growth, evolution, distribution, identification and taxonomy. Modern biology is a vast and eclectic field, composed of many branches and subdisciplines. However, despite the broad scope of biology, there are certain general and unifying concepts within it that consolidate it into a single, coherent field. In general, biology recognizes the cell as the basic unit of life, genes as the basic unit of heredity, and evolution as the engine that propels the creation of new species. It is also understood today that all organisms survive by consuming and transforming energy and by regulating their internal environment to maintain a stable and vital condition. The term biology is derived from the Greek word βίος, bios, "life", and the suffix -λογία, -logia, "study of". The Latin-language form of the term first appeared in 1736 when Swedish scientist Carl Linnaeus used biologi in his Bibliotheca botanica; the first German use, Biologie, was in a 1771 translation of Linnaeus' work. In 1797, Theodor Georg August Roose used the term in the preface of a book, and Karl Friedrich Burdach used it in 1800 in the more restricted sense of the study of human beings from a morphological, physiological and psychological perspective: "The science that concerns itself with these objects we will indicate by the name biology or the doctrine of life." Although modern biology is a relatively recent development, sciences related to and included within it have been studied since ancient times. Natural philosophy was studied as early as the ancient civilizations of Mesopotamia, Egypt, the Indian subcontinent, and China. However, the origins of modern biology and its approach to the study of nature are most often traced back to ancient Greece. While the formal study of medicine dates back to Hippocrates, it was Aristotle who contributed most extensively to the development of biology. Especially important are his History of Animals and other works where he showed naturalist leanings, and later more empirical works that focused on biological causation and the diversity of life.
Aristotle's successor at the Lyceum, Theophrastus, wrote a series of books on botany that survived as the most important contribution of antiquity to the plant sciences, even into the Middle Ages. Scholars of the medieval Islamic world who wrote on biology included al-Jahiz and Al-Dīnawarī, who wrote on botany. Biology began to quickly develop and grow with Anton van Leeuwenhoek's dramatic improvement of the microscope. It was then that scholars discovered spermatozoa, bacteria, infusoria and the diversity of microscopic life. Investigations by Jan Swammerdam led to new interest in entomology and helped to develop the basic techniques of microscopic dissection and staining. Advances in microscopy also had a profound impact on biological thinking. In the early 19th century, a number of biologists pointed to the central importance of the cell. Thanks to the work of Robert Remak and Rudolf Virchow, by the 1860s most biologists accepted all three tenets of what came to be known as cell theory. Meanwhile, taxonomy and classification became the focus of natural historians.
34.
Numerical methods
–
Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. Being able to compute the sides of a triangle is important, for instance, in astronomy, carpentry and construction. Numerical analysis continues this tradition of practical mathematical calculations. Much like the Babylonian approximation of the square root of 2, modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid 20th century, computers calculate the required functions instead, but these same interpolation formulas nevertheless continue to be used as part of the software algorithms for solving differential equations. Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of differential equations. Car companies can improve the safety of their vehicles by using computer simulations of car crashes; such simulations essentially consist of solving differential equations numerically. Hedge funds use tools from all fields of numerical analysis to attempt to calculate the value of stocks more precisely than other market participants. Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs; historically, such algorithms were developed within the overlapping field of operations research. Insurance companies use numerical programs for actuarial analysis. The rest of this section outlines several important themes of numerical analysis. The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients.
These tabulated function values are no longer very useful when a computer is available, but the large listing of formulas can still be handy. The mechanical calculator was developed as a tool for hand computation; these calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
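As a brief sketch of the interpolation-table idea described above, linear interpolation between two tabulated points can be written in a few lines. The function, the tabulation step of 0.1, and the point 0.55 are illustrative assumptions, not from any particular historical table:

```python
import math

def lerp(x0, y0, x1, y1, x):
    """Linearly interpolate between tabulated points (x0, y0) and (x1, y1)."""
    t = (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

# Hypothetical table of sin(x) tabulated at steps of 0.1; estimate sin(0.55)
estimate = lerp(0.5, math.sin(0.5), 0.6, math.sin(0.6), 0.55)
```

The error of such a linear estimate shrinks quadratically with the table spacing, which is why finely spaced printed tables were practical for hand computation.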
Numerical methods
–
Babylonian clay tablet
YBC 7289 (c. 1800–1600 BC) with annotations. The approximation of the
square root of 2 is four
sexagesimal figures, which is about six
decimal figures. 1 + 24/60 + 51/60² + 10/60³ = 1.41421296...
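The sexagesimal arithmetic in the caption can be checked directly. This is a minimal sketch evaluating the quoted base-60 digits 1;24,51,10 and comparing against the true square root of 2:

```python
# Sexagesimal digits 1;24,51,10 from the YBC 7289 annotation
digits = [1, 24, 51, 10]
approx = sum(d / 60**i for i, d in enumerate(digits))
error = abs(approx - 2 ** 0.5)
print(f"{approx:.8f}")  # 1.41421296
```

The resulting error is on the order of 6 × 10⁻⁷, consistent with the caption's claim of roughly six accurate decimal figures.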
Numerical methods
–
Direct method
Numerical methods
35.
Archimedes
–
Archimedes of Syracuse was a Greek mathematician, physicist, engineer, inventor, and astronomer. Although few details of his life are known, he is regarded as one of the leading scientists in classical antiquity. He was also one of the first to apply mathematics to physical phenomena, founding hydrostatics and statics, and he is credited with designing innovative machines, such as his screw pump, compound pulleys, and defensive war machines to protect his native Syracuse from invasion. Archimedes died during the Siege of Syracuse, when he was killed by a Roman soldier despite orders that he should not be harmed. Cicero describes visiting the tomb of Archimedes, which was surmounted by a sphere and a cylinder. Unlike his inventions, the mathematical writings of Archimedes were little known in antiquity. Archimedes was born c. 287 BC in the city of Syracuse, Sicily, at that time a self-governing colony in Magna Graecia. The date of birth is based on a statement by the Byzantine Greek historian John Tzetzes that Archimedes lived for 75 years. In The Sand Reckoner, Archimedes gives his father's name as Phidias, an astronomer about whom nothing is known. Plutarch wrote in his Parallel Lives that Archimedes was related to King Hiero II. A biography of Archimedes was written by his friend Heracleides, but this work has been lost, leaving the details of his life obscure. It is unknown, for instance, whether he married or had children. During his youth, Archimedes may have studied in Alexandria, Egypt; he referred to Conon of Samos as his friend, while two of his works have introductions addressed to Eratosthenes. Archimedes died c. 212 BC during the Second Punic War. According to the popular account given by Plutarch, Archimedes was contemplating a mathematical diagram when the city was captured. A Roman soldier commanded him to come and meet General Marcellus, but he declined; the soldier was enraged by this, and killed Archimedes with his sword.
Plutarch also gives an account of the death of Archimedes which suggests that he may have been killed while attempting to surrender to a Roman soldier. According to this story, Archimedes was carrying mathematical instruments and was killed because the soldier thought that they were valuable items. General Marcellus was reportedly angered by the death of Archimedes, as he considered him a valuable asset and had ordered that he not be harmed; Marcellus called Archimedes "a geometrical Briareus". The last words attributed to Archimedes are "Do not disturb my circles", a reference to the circles in the mathematical drawing that he was supposedly studying when disturbed by the Roman soldier. This quote is given in Latin as "Noli turbare circulos meos" and in Katharevousa Greek as "μὴ μου τοὺς κύκλους τάραττε".
Archimedes
–
Archimedes Thoughtful by
Fetti (1620)
Archimedes
–
Cicero Discovering the Tomb of Archimedes by
Benjamin West (1805)
Archimedes
–
Artistic interpretation of Archimedes' mirror used to burn Roman ships. Painting by
Giulio Parigi.
Archimedes
–
A sphere has 2/3 the volume and surface area of its circumscribing cylinder including its bases. A
sphere and
cylinder were placed on the tomb of Archimedes at his request. (see also:
Equiareal map)
36.
Evangelista Torricelli
–
Evangelista Torricelli was born on 15 October 1608 in Rome; he later invented the barometer in Florence, Italy. He was the firstborn child of Gaspare Ruberti and Giacoma Torricelli, and his family was from Faenza in the Province of Ravenna, then part of the Papal States. His father was a worker and the family was very poor. Seeing his talents, his parents sent him to be educated in Faenza, under the care of his uncle, Jacobo, a Camaldolese monk, who first ensured that his nephew was given a sound basic education. He then entered young Torricelli into a Jesuit College in 1624, possibly the one in Faenza itself, where he studied mathematics and philosophy until 1626, by which time his father had died. The uncle then sent Torricelli to Rome to study science under the Benedictine monk Benedetto Castelli, a student of Galileo Galilei. Castelli made experiments on running water and was entrusted by Pope Urban VIII with hydraulic undertakings. There is no actual evidence that Torricelli was enrolled at the university; it is almost certain that Torricelli was taught by Castelli, working in exchange as his secretary from 1626 to 1632 as a private arrangement. Because of this, Torricelli was exposed to experiments funded by Pope Urban VIII. While living in Rome, Torricelli also became the student of the brilliant mathematician Bonaventura Cavalieri, with whom he became great friends. It was in Rome that Torricelli also became friends with two students of Castelli, Raffaello Magiotti and Antonio Nardi; Galileo referred to Torricelli, Magiotti, and Nardi affectionately as his "triumvirate" in Rome. Although Galileo promptly invited Torricelli to visit, he did not accept until just three months before Galileo's death; the reason for this was that Torricelli's mother, Caterina Angetti, had died. After Galileo's death on 8 January 1642, Grand Duke Ferdinando II de' Medici asked him to succeed Galileo as the grand-ducal mathematician and chair of mathematics at the University of Pisa.
Right before the appointment, Torricelli was considering returning to Rome because there was nothing left for him in Florence. In this role he solved some of the great mathematical problems of the day, such as finding a cycloid's area and center of gravity. As a result of this study, he wrote the book Opera Geometrica, in which he described his observations; the book was published in 1644. He was interested in optics, and invented a method whereby microscopic lenses might be made of glass which could be easily melted in a lamp. As a result, he designed and built a number of telescopes and simple microscopes, as well as several large lenses. On 11 June 1644, he famously wrote in a letter to Michelangelo Ricci, "Noi viviamo sommersi nel fondo d'un pelago d'aria" ("We live submerged at the bottom of an ocean of air"). Torricelli died in Florence on 25 October 1647, 10 days after his 39th birthday, and left all his belongings to his adopted son Alessandro. His early work owes much to the study of the classics. In Faenza, a statue of Torricelli was created in 1868 as a thank you for all that Torricelli had done in advancing science during his short lifetime.
Evangelista Torricelli
–
Evangelista Torricelli portrayed on the frontpage of Lezioni d'Evangelista Torricelli
Evangelista Torricelli
–
Torricelli's statue in the
Museo di Storia Naturale di Firenze
Evangelista Torricelli
–
Evangelista Torricelli by Lorenzo Lippi (circa 1647, Galleria Silvano Lodi & Due)
Evangelista Torricelli
–
NSRW Torricelli's experiment
37.
Joseph Louis Lagrange
–
Joseph-Louis Lagrange, born Giuseppe Lodovico Lagrangia or Giuseppe Ludovico De la Grange Tournier, was an Italian and French Enlightenment Era mathematician and astronomer. He made significant contributions to the fields of analysis and number theory. In 1787, at age 51, he moved from Berlin to Paris and became a member of the French Academy of Sciences; he remained in France until the end of his life. Lagrange was one of the creators of the calculus of variations, deriving the Euler–Lagrange equations for extrema of functionals. He also extended the method to take into account possible constraints, and he proved that every natural number is a sum of four squares. His treatise Théorie des fonctions analytiques laid some of the foundations of group theory, and in calculus Lagrange developed a novel approach to interpolation and Taylor series. Born as Giuseppe Lodovico Lagrangia, Lagrange was of Italian and French descent; his mother was from the countryside of Turin. He was raised as a Roman Catholic. A career as a lawyer was planned out for Lagrange by his father, and certainly Lagrange seems to have accepted this willingly. He studied at the University of Turin, and his favourite subject was classical Latin. At first he had no enthusiasm for mathematics, finding Greek geometry rather dull. It was not until he was seventeen that he showed any taste for mathematics, his interest in the subject being first excited by a paper by Edmond Halley which he came across by accident. Alone and unaided he threw himself into mathematical studies; at the end of a year's incessant toil he was already an accomplished mathematician. In that capacity, Lagrange was the first to teach calculus in an engineering school, and in this Academy one of his students was François Daviet de Foncenex. Lagrange is one of the founders of the calculus of variations.
Starting in 1754, he worked on the problem of the tautochrone, and Lagrange wrote several letters to Leonhard Euler between 1754 and 1756 describing his results. He outlined his δ-algorithm, leading to the Euler–Lagrange equations of variational calculus. Lagrange also applied his ideas to problems of classical mechanics, generalizing the results of Euler and Maupertuis; Euler was very impressed with Lagrange's results. Lagrange published his method in two memoirs of the Turin Society in 1762 and 1773. Many of these are elaborate papers; one article concludes with a masterly discussion of echoes, beats, and compound sounds, while other articles in the volume are on recurring series and probabilities. The next work he produced, in 1764, was on the libration of the Moon, with an explanation as to why the same face was always turned to the Earth.
Joseph Louis Lagrange
–
Joseph-Louis (Giuseppe Luigi), comte de Lagrange
Joseph Louis Lagrange
–
Lagrange's tomb in the crypt of the
Panthéon
38.
Engineers
–
Engineers design materials, structures, and systems while considering the limitations imposed by practicality, regulation, safety, and cost. The word engineer is derived from the Latin words ingeniare and ingenium, and the work of engineers forms the link between scientific discoveries and their subsequent applications to human and business needs and quality of life. An engineer's work is predominantly intellectual and varied and not of a routine mental or physical character. It requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of others; the engineer is thus placed in a position to make contributions to the development of engineering science or its applications, and in due time will be able to give authoritative technical advice. Much of an engineer's time is spent on researching, locating, applying, and transferring information; indeed, research suggests engineers spend 56% of their time engaged in various information behaviours. Engineers must weigh different design choices on their merits and choose the solution that best matches the requirements; their crucial and unique task is to identify, understand, and interpret the constraints on a design. Engineers apply techniques of engineering analysis in testing, production, or maintenance. Analytical engineers may supervise production in factories and elsewhere and determine the causes of a process failure; they also estimate the time and cost required to complete projects, while supervisory engineers are responsible for major components or entire projects. Engineering analysis involves the application of scientific analytic principles and processes to reveal the properties and state of the system, device or mechanism under study. Most engineers specialize in one or more engineering disciplines; numerous specialties are recognized by professional societies, and each of the major branches of engineering has numerous subdivisions.
Civil engineering, for example, includes structural and transportation engineering; materials engineering includes ceramic and metallurgical engineering; and mechanical engineering cuts across just about every discipline, since its core essence is applied physics. Engineers also may specialize in one industry, such as vehicles, or in one type of technology. Several recent studies have investigated how engineers spend their time; research suggests that there are several key themes present in engineers' work: technical work, social work, computer-based work, and information behaviours. Amongst other more detailed findings, a recent work sampling study found that engineers spend 62.92% of their time engaged in technical work and 40.37% in social work. The time engineers spend engaged in such activities is also reflected in the competencies required in engineering roles. There are many branches of engineering, each of which specializes in specific technologies; typically engineers will have deep knowledge in one area and basic knowledge in related areas. When developing a product, engineers work in interdisciplinary teams. For example, when building robots an engineering team will typically have at least three types of engineers; a mechanical engineer would design the body and actuators.
Engineers
–
An electrical engineer, circa 1950
Engineers
–
Engineers conferring on prototype design, 1954
Engineers
–
NASA Launch Control Center Firing Room 2 as it appeared in the Apollo era
Engineers
–
The
Challenger disaster is held as a case study of
engineering ethics.
39.
Osborne Reynolds
–
Osborne Reynolds FRS was a prominent innovator in the understanding of fluid dynamics. Separately, his studies of heat transfer between solids and fluids brought improvements in boiler and condenser design. He spent his career at what is now called the University of Manchester. Osborne Reynolds was born in Belfast and moved with his parents soon afterward to Dedham. His father worked as a school headmaster and clergyman, but was also a very able mathematician with a keen interest in mechanics, and took out a number of patents for improvements to equipment. Osborne Reynolds attended Queens' College, Cambridge, and graduated in 1867 as the seventh wrangler in mathematics. Reynolds showed an early aptitude and liking for the study of mechanics. For the year following his graduation from Cambridge he again took up a post with an engineering firm: "My attention [was] drawn to various phenomena, for the explanation of which I discovered that a knowledge of mathematics was essential." Reynolds remained at Owens College for the rest of his career; in 1880 the college became a constituent college of the newly founded Victoria University. He was elected a Fellow of the Royal Society in 1877 and awarded the Royal Medal in 1888. Reynolds most famously studied the conditions in which the flow of fluid in pipes transitioned from laminar flow to turbulent flow. The larger pipe was glass so the behaviour of the layer of dyed flow could be observed; when the velocity was low, the dyed layer remained distinct through the entire length of the large tube. When the velocity was increased, the layer broke up at a given point; the point at which this happened was the transition point from laminar to turbulent flow. From these experiments came the dimensionless Reynolds number for dynamic similarity, the ratio of inertial forces to viscous forces.
Reynolds also proposed what is now known as Reynolds-averaging of turbulent flows; such averaging allows for bulk description of turbulent flow, for example using the Reynolds-averaged Navier–Stokes equations. Reynolds' contributions to fluid mechanics were not lost on ship designers, and Reynolds himself had a number of papers concerning ship design published in Transactions of the Institution of Naval Architects. His publications in fluid dynamics began in the early 1870s, and his final theoretical model published in the mid-1890s is still the standard mathematical framework used today. Among his papers in the Proceedings of the Royal Society of London is "On the Dynamical Theory of Incompressible Viscous Fluids and the Determination of the Criterion".
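The dimensionless Reynolds number described above can be sketched directly. The fluid properties below and the commonly cited pipe-flow transition threshold of roughly 2300 are illustrative assumptions, not values from Reynolds' own experiments:

```python
def reynolds_number(density, velocity, diameter, viscosity):
    """Re = rho * U * D / mu: the ratio of inertial to viscous forces."""
    return density * velocity * diameter / viscosity

# Water (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa*s) in a 25 mm pipe at 5 cm/s
re = reynolds_number(1000.0, 0.05, 0.025, 1e-3)
regime = "laminar" if re < 2300 else "turbulent"  # ~2300: typical pipe-flow threshold
print(re, regime)  # 1250.0 laminar
```

Because Re is dimensionless, two geometrically similar flows with the same Reynolds number behave the same way, which is the dynamic-similarity principle Reynolds' experiments established.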
Osborne Reynolds
–
Osborne Reynolds in 1903
Osborne Reynolds
–
Reynolds' experiment on fluid dynamics in pipes
40.
Andrey Kolmogorov
–
Andrey Kolmogorov was born in Tambov, about 500 kilometers south-southeast of Moscow, in 1903. His mother, Mariya Yakovlevna Kolmogorova, died giving birth to him, and Andrey was raised by two of his aunts in Tunoshna at the estate of his grandfather, a well-to-do nobleman. Little is known about Andrey's father; he was supposedly named Nikolai Matveevich Kataev and had been an agronomist. Nikolai had been exiled from St. Petersburg to the Yaroslavl province after his participation in the movement against the czars. He disappeared in 1919 and was presumed to have been killed in the Russian Civil War. Andrey Kolmogorov was educated in his aunt Vera's village school, and his earliest literary efforts appeared in the school journal, of whose mathematical section Andrey was the editor. In 1910, his aunt adopted him, and they moved to Moscow. Later, Kolmogorov began to study at Moscow State University and at the same time at the Mendeleev Moscow Institute of Chemistry and Technology. Kolmogorov writes about this time: "I arrived at Moscow University with a knowledge of mathematics. I knew in particular the beginning of set theory. I studied many questions in articles in the Encyclopedia of Brockhaus and Efron, filling out for myself what was presented too concisely in these articles." Kolmogorov gained a reputation for his wide-ranging erudition. During the same period, Kolmogorov worked out and proved several results in set theory and in the theory of Fourier series. In 1922, Kolmogorov gained international recognition for constructing a Fourier series that diverges almost everywhere; around this time, he decided to devote his life to mathematics. In 1925, Kolmogorov graduated from Moscow State University and began to study under the supervision of Nikolai Luzin, and he became interested in probability theory.
In 1929, Kolmogorov earned his Doctor of Philosophy degree from Moscow State University. In 1930, Kolmogorov went on his first long trip abroad, traveling to Göttingen and Munich, and then to Paris; he had various scientific contacts in Göttingen. His pioneering work, About the Analytical Methods of Probability Theory, was published in 1931, and in the same year he became a professor at Moscow State University. In 1935, Kolmogorov became the first chairman of the department of probability theory at Moscow State University. Around the same years Kolmogorov contributed to the field of ecology and generalized the Lotka–Volterra model of predator-prey systems. In 1936, Kolmogorov and Alexandrov were involved in the persecution of their common teacher Nikolai Luzin, in the so-called Luzin affair. In a 1938 paper, Kolmogorov established the basic theorems for smoothing and predicting stationary stochastic processes, a paper that had military applications during the Cold War.
Andrey Kolmogorov
–
Andrey Kolmogorov
Andrey Kolmogorov
–
Kolmogorov (left) delivers a talk at a Soviet information theory symposium. (
Tallinn, 1973).
Andrey Kolmogorov
–
Kolmogorov works on his talk (
Tallinn, 1973).
41.
Turbulence
–
Turbulence or turbulent flow is a flow regime in fluid dynamics characterized by chaotic changes in pressure and flow velocity. It is in contrast to a laminar flow regime, which occurs when a fluid flows in parallel layers. Turbulence is caused by excessive kinetic energy in parts of a fluid flow overcoming the damping effect of the fluid's viscosity; for this reason turbulence is easier to create in low viscosity fluids. In general terms, in turbulent flow, unsteady vortices of many sizes appear which interact with each other, and consequently drag due to friction effects increases. This increases the energy needed to pump fluid through a pipe; however, the effect can also be exploited by devices such as aerodynamic spoilers on aircraft, which deliberately spoil the laminar flow to increase drag and reduce lift. The onset of turbulence can be predicted by a dimensionless quantity called the Reynolds number. However, turbulence has long resisted detailed physical analysis, and the interactions within turbulence create a complex situation; Richard Feynman has described turbulence as the most important unsolved problem of classical physics. Smoke rising from a cigarette is mostly turbulent flow; for the first few centimeters, however, the flow is laminar. The smoke plume becomes turbulent as its Reynolds number increases, due to its flow velocity and characteristic length increasing. If a golf ball were smooth, the boundary layer flow over the front of the sphere would be laminar at typical conditions; however, the layer would separate early, as the pressure gradient switched from favorable to unfavorable. To prevent this happening, the surface is dimpled to perturb the boundary layer. This results in higher skin friction, but moves the point of boundary layer separation further along, reducing form drag. Examples of turbulence include the flow conditions in industrial equipment and machines, the external flow over all kinds of vehicles such as cars, airplanes, and ships, the motions of matter in stellar atmospheres, and a jet exhausting from a nozzle into a quiescent fluid.
As the jet emerges into this external fluid, shear layers originating at the lips of the nozzle are created; these layers separate the fast-moving jet from the external fluid, and at a certain critical Reynolds number they become unstable and break down to turbulence. Biologically generated turbulence resulting from swimming animals affects ocean mixing, and snow fences work by inducing turbulence in the wind, forcing it to drop much of its snow load near the fence.
Turbulence
–
Flow visualization of a turbulent jet, made by
laser-induced fluorescence. The jet exhibits a wide range of length scales, an important characteristic of turbulent flows.
Turbulence
–
Laminar and turbulent water flow over the hull of a submarine
Turbulence
–
Turbulence in the
tip vortex from an
airplane wing
42.
Mechanical equilibrium
–
In classical mechanics, a particle is in mechanical equilibrium if the net force on that particle is zero. By extension, a system made up of many parts is in mechanical equilibrium if the net force on each of its individual parts is zero. In addition to defining mechanical equilibrium in terms of force, there are alternative definitions which are all mathematically equivalent. In terms of momentum, a system is in equilibrium if the momentum of each of its parts is constant. In terms of velocity, the system is in equilibrium if velocity is constant. In a rotational mechanical equilibrium the angular momentum of the object is conserved and the net torque is zero. More generally, in conservative systems, equilibrium is established at a point in space where the gradient of the potential energy with respect to the generalized coordinates is zero. If a particle in equilibrium has zero velocity, that particle is in static equilibrium. Since all particles in equilibrium have constant velocity, it is always possible to find an inertial reference frame in which the particle is stationary with respect to the frame. An important property of systems at mechanical equilibrium is their stability. If we have a function which describes the system's potential energy, we can determine the system's equilibria using calculus. A system is in mechanical equilibrium at the critical points of the function describing the system's potential energy; we can locate these points using the fact that the derivative of the function is zero there. Second derivative < 0: the potential energy is at a local maximum and the equilibrium is unstable; if the system is displaced an arbitrarily small distance from the equilibrium state, the forces of the system cause it to move even farther away. Second derivative > 0: the potential energy is at a local minimum, and the response to a small perturbation is forces that tend to restore the equilibrium. If more than one stable equilibrium state is possible for a system, any equilibria whose potential energy is higher than the absolute minimum represent metastable states.
Second derivative = 0 or does not exist: the state is neutral to the lowest order, and to investigate the precise stability of the system, higher order derivatives must be examined. In a truly neutral state the energy does not vary and the state of equilibrium has a finite width; this is sometimes referred to as a state that is marginally stable, or in a state of indifference. Generally an equilibrium is only referred to as stable if it is stable in all directions. Sometimes there is not enough information about the forces acting on a body to determine if it is in equilibrium or not; this makes it a statically indeterminate system. The special case of mechanical equilibrium of a motionless object is static equilibrium.
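The second-derivative test above can be sketched numerically. The double-well potential U(x) = x⁴ − 2x², the step size, and the tolerance are hypothetical illustrative choices, not from the text:

```python
def classify_equilibrium(potential, x, h=1e-5, tol=1e-6):
    """Classify a critical point x of a 1-D potential via a central
    finite-difference second derivative: minimum -> stable, maximum -> unstable."""
    d2 = (potential(x + h) - 2 * potential(x) + potential(x - h)) / h**2
    if d2 > tol:
        return "stable"      # local minimum: small perturbations are restored
    if d2 < -tol:
        return "unstable"    # local maximum: forces push the system farther away
    return "neutral"         # to lowest order; examine higher derivatives

# Hypothetical double-well potential U(x) = x^4 - 2x^2, equilibria at x = 0, +1, -1
U = lambda x: x**4 - 2 * x**2
print(classify_equilibrium(U, 1.0))  # stable
print(classify_equilibrium(U, 0.0))  # unstable
```

Here U''(±1) = 8 > 0 (local minima, stable) and U''(0) = −4 < 0 (local maximum, unstable), matching the analytic test.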
Mechanical equilibrium
–
Force diagram showing the
forces acting on an object at rest on a surface. The
normal force N is equal and opposite to the
gravitational force mg so the net force is zero. Consequently the object is in a state of static mechanical equilibrium.
43.
Hydrostatic equilibrium
–
In fluid mechanics, a fluid is said to be in hydrostatic equilibrium or hydrostatic balance when it is at rest, or when the flow velocity at each point is constant over time. This occurs when external forces such as gravity are balanced by a pressure-gradient force. Hydrostatic equilibrium is the current distinguishing criterion between dwarf planets and small Solar System bodies, and has other roles in astrophysics and planetary geology. This qualification typically means that the object is symmetrically rounded into a spheroid or ellipsoid shape; there are 31 observationally confirmed such objects, sometimes called planemos, in the Solar System, seven more that are virtually certain, and a hundred or so more that are likely. Newton's laws of motion state that a volume of fluid that is not in motion, or that is in a state of constant velocity, must have zero net force on it. This means the sum of the forces in a given direction must be opposed by an equal sum of forces in the opposite direction; this force balance is called a hydrostatic equilibrium. The fluid can be split into a large number of cuboid volume elements, and by considering a single element, the action of the fluid can be derived. If the density is ρ, the volume is V and g is the standard gravity, then the volume of this cuboid is equal to the area of the top or bottom times the height, the formula for finding the volume of a cuboid. The sum of the forces equals zero if the velocity is constant. Dividing by A gives 0 = Pbottom − Ptop − ρ·g·h, or Ptop − Pbottom = −ρ·g·h. Ptop − Pbottom is a change in pressure, and h is the height of the volume element, a change in the distance above the ground. By saying these changes are infinitesimally small, the equation can be written in differential form. Density changes with pressure, and gravity changes with height, so the equation would be dP = −ρ·g·dh.
Note finally that this last equation can be derived by solving the three-dimensional Navier–Stokes equations for the equilibrium situation where u = v = ∂p/∂x = ∂p/∂y = 0. Then the only non-trivial equation is the z-equation, which now reads ∂p/∂z + ρg = 0. Thus, hydrostatic balance can be regarded as a particularly simple equilibrium solution of the Navier–Stokes equations. The hydrostatic equilibrium pertains to hydrostatics and the principles of equilibrium of fluids. A hydrostatic balance is a balance for weighing substances in water; hydrostatic balance allows the discovery of their specific gravities. In any given layer of a star, there is a hydrostatic equilibrium between the outward thermal pressure from below and the weight of the material above pressing inward.
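For a constant-density fluid, integrating dP = −ρ·g·dh from the surface downward gives a simple linear pressure law. This is a minimal sketch; the values for water density, g, and surface pressure are assumed round numbers:

```python
def hydrostatic_pressure(depth, rho=1000.0, g=9.81, p_surface=101325.0):
    """Absolute pressure (Pa) at `depth` meters below the surface of a
    constant-density fluid, from integrating dP = -rho * g * dh."""
    return p_surface + rho * g * depth

p = hydrostatic_pressure(10.0)  # pressure 10 m underwater
```

At a depth of 10 m the added term ρ·g·h = 98 100 Pa is close to one atmosphere, so the absolute pressure nearly doubles, a familiar rule of thumb in diving.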
Hydrostatic equilibrium
–
If the highlighted volume of fluid is not moving, the forces on it upwards must equal the forces downwards.
44.
Engineering
–
The term engineering is derived from the Latin ingenium, meaning "cleverness", and ingeniare, meaning "to contrive, devise". Engineering has existed since ancient times as humans devised fundamental inventions such as the wedge, lever, and wheel; each of these inventions is essentially consistent with the modern definition of engineering. The term engineering is derived from the word engineer, which itself dates back to 1390, when an engineer originally referred to a constructor of military engines. In this context, now obsolete, an "engine" referred to a military machine. Notable examples of the obsolete usage which have survived to the present day are military engineering corps. The word engine itself is of even older origin, ultimately deriving from the Latin ingenium, meaning "innate quality, especially mental power, hence a clever invention". The earliest civil engineer known by name is Imhotep; as one of the officials of the Pharaoh Djosèr, he probably designed and supervised the construction of the Pyramid of Djoser at Saqqara in Egypt around 2630–2611 BC. Ancient Greece developed machines in both civilian and military domains; the Antikythera mechanism, the first known mechanical computer, and the mechanical inventions of Archimedes are examples of early mechanical engineering. In the Middle Ages, the trebuchet was developed. The first steam engine was built in 1698 by Thomas Savery, and the development of this device gave rise to the Industrial Revolution in the coming decades. With the rise of engineering as a profession in the 18th century, the term became more narrowly applied to fields in which mathematics and science were applied to these ends; similarly, in addition to military and civil engineering, the fields then known as the mechanic arts became incorporated into engineering. The inventions of Thomas Newcomen and the Scottish engineer James Watt gave rise to modern mechanical engineering. The development of specialized machines and machine tools during the Industrial Revolution led to the rapid growth of mechanical engineering both in its birthplace Britain and abroad.
John Smeaton was the first self-proclaimed civil engineer and is regarded as the father of civil engineering. He was an English civil engineer responsible for the design of bridges, canals, and harbours, and he was also a capable mechanical engineer and an eminent physicist. Smeaton designed the third Eddystone Lighthouse, where he pioneered the use of hydraulic lime; his lighthouse remained in use until 1877 and was dismantled and partially rebuilt at Plymouth Hoe, where it is known as Smeaton's Tower. The United States census of 1850 listed the occupation of engineer for the first time, with a count of 2,000; there were fewer than 50 engineering graduates in the U.S. before 1865. In 1870 there were a dozen U.S. mechanical engineering graduates, and in 1890 there were 6,000 engineers in civil, mining, mechanical and electrical fields. There was no chair of applied mechanism and applied mechanics established at Cambridge until 1875. The theoretical work of James Maxwell and Heinrich Hertz in the late 19th century gave rise to the field of electronics.
Engineering
–
The
steam engine, a major driver in the
Industrial Revolution, underscores the importance of engineering in modern history. This
beam engine is on display in the
Technical University of Madrid.
Engineering
–
Relief map of the
Citadel of Lille, designed in 1668 by
Vauban, the foremost military engineer of his age.
Engineering
–
The Ancient Romans built
aqueducts to bring a steady supply of clean fresh water to cities and towns in the empire.
Engineering
–
The
International Space Station represents a modern engineering challenge from many disciplines.
45.
Meteorology
–
Meteorology is a branch of the atmospheric sciences which includes atmospheric chemistry and atmospheric physics, with a major focus on weather forecasting. The study of meteorology dates back millennia, though significant progress in meteorology did not occur until the 18th century; the 19th century saw modest progress in the field after weather observation networks were formed across broad regions. Prior attempts at prediction of weather depended on historical data. Meteorological phenomena are observable weather events that are explained by the science of meteorology. Different spatial scales are used to describe and predict weather on local, regional, and global levels. Meteorology, climatology, atmospheric physics, and atmospheric chemistry are sub-disciplines of the atmospheric sciences; meteorology and hydrology compose the interdisciplinary field of hydrometeorology. The interactions between Earth's atmosphere and its oceans are part of a coupled ocean-atmosphere system. Meteorology has application in diverse fields such as the military, energy production, transport, and agriculture. The word meteorology is from Greek μετέωρος metéōros "lofty, high" and -λογία -logia "-logy". Varāhamihira's classical work Brihatsamhita, written about 500 AD, provides clear evidence that a deep knowledge of atmospheric processes existed even in those times. In 350 BC, Aristotle wrote Meteorology; Aristotle is considered the founder of meteorology. One of the most impressive achievements described in the Meteorology is the description of what is now known as the hydrologic cycle. Of lightning bolts, Aristotle wrote that "they are all called swooping bolts because they swoop down upon the Earth. Lightning is sometimes smoky, and is then called smoldering lightning; sometimes it darts quickly along, at other times, it travels in crooked lines, and is called forked lightning. When it swoops down upon some object it is called swooping lightning." The Greek scientist Theophrastus compiled a book on weather forecasting, called the Book of Signs.
The work of Theophrastus remained a dominant influence in the study of weather for centuries. In 25 AD, Pomponius Mela, a geographer for the Roman Empire, formalized the climatic zone system. According to Toufic Fahd, around the 9th century, Al-Dinawari wrote the Kitab al-Nabat. Ptolemy wrote on the atmospheric refraction of light in the context of astronomical observations. Roger Bacon was the first to calculate the angular size of the rainbow; he stated that a rainbow summit cannot appear higher than 42 degrees above the horizon. In the late 13th century and early 14th century, Kamāl al-Dīn al-Fārisī and Theodoric of Freiberg were the first to give the correct explanations for the primary rainbow phenomenon, and Theodoric went further and also explained the secondary rainbow. In 1716, Edmund Halley suggested that aurorae are caused by magnetic effluvia moving along the Earth's magnetic field lines. In 1441, King Sejong's son, Prince Munjong, invented the first standardized rain gauge; these were sent throughout the Joseon Dynasty of Korea as an official tool to assess land taxes based upon a farmer's potential harvest. In 1450, Leone Battista Alberti developed a swinging-plate anemometer, known as the first anemometer. In 1607, Galileo Galilei constructed a thermoscope.
Meteorology
–
Atmospheric sciences
Meteorology
–
Parhelion (sundog) at
Savoie
Meteorology
–
Twilight at
Baker Beach
Meteorology
–
A hemispherical cup anemometer
46.
Flow measurement
–
Flow measurement is the quantification of bulk fluid movement, and flow can be measured in a variety of ways. Positive-displacement flow meters accumulate a fixed volume of fluid and then count the number of times the volume is filled to measure flow. Other flow measurement methods rely on forces produced by the flowing stream as it overcomes a known constriction, and flow may also be measured by measuring the velocity of fluid over a known area. Both gas and liquid flow can be measured in volumetric or mass flow rates, such as liters per second or kilograms per second, respectively. These measurements are related by the material's density. The density of a liquid is almost independent of conditions; this is not the case for gases, the densities of which depend greatly upon pressure, temperature and, to a lesser extent, composition. When gases or liquids are transferred for their energy content, as in the sale of natural gas, the flow rate may also be expressed in terms of energy flow. The energy flow rate is the volumetric flow rate multiplied by the energy content per unit volume, or the mass flow rate multiplied by the energy content per unit mass; energy flow rate is usually derived from mass or volumetric flow rate by the use of a flow computer. In engineering contexts, the volumetric flow rate is usually given the symbol Q, and the mass flow rate the symbol ṁ. For a fluid having density ρ, mass and volumetric flow rates may be related by ṁ = ρ × Q. Gases are compressible and change volume when placed under pressure, heated, or cooled; a volume of gas under one set of pressure and temperature conditions is not equivalent to the same gas under different conditions. References will be made to actual flow rate through a meter and standard or base flow rate through a meter, with units such as acm/h, sm3/sec, kscm/h, or LFM. Gas mass flow rate can be measured, independent of pressure and temperature effects, with thermal mass flow meters or Coriolis mass flow meters. In oceanography, a common unit to measure volume transport is the sverdrup (Sv), equivalent to 10^6 m3/s.
A positive displacement meter may be compared to a bucket and a stopwatch: the stopwatch is started when the flow starts, and stopped when the bucket reaches its limit; the volume divided by the time gives the flow rate. For continuous measurements, we need a system of continually filling and emptying buckets to divide the flow without letting it out of the pipe. The piston meter operates on the principle of a piston rotating within a chamber of known volume; for each rotation, a fixed amount of water passes through the piston chamber.
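The two relations above can be sketched in a few lines of Python. This is an illustrative sketch with invented values, not from the source: the mass flow relation ṁ = ρ × Q, and the bucket-and-stopwatch principle behind positive-displacement metering.

```python
# Sketch (values assumed): relating volumetric flow Q to mass flow m_dot,
# plus the positive-displacement "bucket and stopwatch" principle.

def mass_flow_rate(density_kg_m3, volumetric_flow_m3_s):
    """m_dot = rho * Q, in kg/s."""
    return density_kg_m3 * volumetric_flow_m3_s

def bucket_flow_rate(volume_m3, fill_time_s):
    """Positive-displacement principle: flow rate = known volume / fill time."""
    return volume_m3 / fill_time_s

# A 0.01 m3 "bucket" filling in 5 s gives Q = 0.002 m3/s; for water
# (rho ~ 1000 kg/m3, an assumed textbook value) this is 2 kg/s.
q = bucket_flow_rate(0.01, 5.0)
m_dot = mass_flow_rate(1000.0, q)
print(q, m_dot)  # 0.002 2.0
```

For a gas the same Q would correspond to a very different ṁ, which is why the text distinguishes actual from standard flow rates.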
Flow measurement
–
A propeller-type current meter as used for hydroelectric turbine testing.
Flow measurement
–
8-inch (200 mm) V-Cone Flowmeter shown with ANSI 300# raised face
weld neck flanges
Flow measurement
–
A magnetic flow meter at the
Tetley's Brewery in
Leeds,
West Yorkshire.
47.
Velocity
–
The velocity of an object is the rate of change of its position with respect to a frame of reference, and is a function of time. Velocity is equivalent to a specification of an object's speed and direction of motion, and it is an important concept in kinematics, the branch of classical mechanics that describes the motion of bodies. Velocity is a vector quantity; both magnitude and direction are needed to define it. The scalar absolute value (magnitude) of velocity is called speed, a coherent derived unit measured in the SI system in metres per second (m/s or m·s⁻¹). For example, 5 metres per second is a scalar, whereas 5 metres per second east is a vector. If there is a change in speed, direction, or both, then the object has a changing velocity and is said to be undergoing an acceleration. To have a constant velocity, an object must have a constant speed in a constant direction; constant direction constrains the object to motion in a straight path, thus a constant velocity means motion in a straight line at a constant speed. For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed but not a constant velocity, because its direction changes; hence, the car is considered to be undergoing an acceleration. Speed describes only how fast an object is moving, whereas velocity gives both how fast and in what direction the object is moving. If a car is said to travel at 60 km/h, its speed has been specified; however, if the car is said to move at 60 km/h to the north, its velocity has now been specified. The difference becomes apparent when we consider movement around a circle: the average velocity is calculated by considering only the displacement between the starting and end points, while the average speed considers only the total distance traveled. Velocity is defined as the rate of change of position with respect to time; average velocity can be calculated as v̄ = Δx / Δt. The average velocity is always less than or equal to the average speed of an object.
This can be seen by realizing that while distance is always strictly increasing, displacement may decrease as well as increase. In the one-dimensional case, the area under a velocity vs. time graph is the displacement, x; in calculus terms, the integral of the velocity function v(t) is the displacement function x(t). In the figure, this corresponds to the area under the curve labeled s. The derivative of the position with respect to time gives the change in position divided by the change in time. Although velocity is defined as the rate of change of position, it is often common to start with an expression for an object's acceleration. As seen by the three green tangent lines in the figure, an object's instantaneous acceleration at a point in time is the slope of the tangent to the curve of a v(t) graph at that point; in other words, acceleration is defined as the derivative of velocity with respect to time. From there, we can obtain an expression for velocity as the area under an acceleration vs. time graph.
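The round-trip distinction above can be made concrete in a short Python sketch. The numbers are invented for illustration: a walker who returns to the starting point has zero average velocity but nonzero average speed, and the displacement equals the signed area under the velocity-time curve.

```python
# Sketch (invented numbers): average velocity vs. average speed for a round
# trip, and displacement as the area under v(t).

def average_velocity(x_start, x_end, elapsed_s):
    """v_bar = delta_x / delta_t."""
    return (x_end - x_start) / elapsed_s

# Walk 100 m east in 50 s, then 100 m back in 50 s:
# displacement is 0 m, distance is 200 m.
avg_vel = average_velocity(0.0, 0.0, 100.0)   # 0.0 m/s
avg_speed = 200.0 / 100.0                     # 2.0 m/s

# Displacement as the signed area under v(t), integrated numerically:
# v = +2 m/s for the first 50 s, -2 m/s for the next 50 s.
dt = 0.01
disp = sum((2.0 if i * dt < 50.0 else -2.0) * dt for i in range(10000))
print(avg_vel, avg_speed, disp)
```

The result illustrates the inequality stated above: average velocity (0 m/s) is less than average speed (2 m/s), and the two signed areas cancel to give zero displacement.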
Velocity
–
As a change of direction occurs while the cars turn on the curved track, their velocity is not constant.
48.
Density
–
The density, or more precisely, the volumetric mass density, of a substance is its mass per unit volume. The symbol most often used for density is ρ, although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume: ρ = m / V, where ρ is the density, m is the mass, and V is the volume. In some cases, density is loosely defined as weight per unit volume. For a pure substance the density has the same numerical value as its mass concentration. Different materials usually have different densities, and density may be relevant to buoyancy and purity; osmium and iridium are the densest known elements at standard conditions for temperature and pressure, but certain chemical compounds may be denser. A relative density less than one means that the substance floats in water. The density of a material varies with temperature and pressure; this variation is typically small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases the volume of the object and thus increases its density, while increasing the temperature of a substance generally decreases its density by increasing its volume. In most materials, heating the bottom of a fluid results in convection of the heat from the bottom to the top, because the heated fluid expands and rises relative to more dense unheated material. The reciprocal of the density of a substance is occasionally called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property, in that increasing the amount of a substance does not increase its density. Archimedes knew that an irregularly shaped wreath could be crushed into a cube whose volume could be calculated easily and compared with the mass; upon this discovery, he reportedly leapt from his bath and ran naked through the streets shouting "Eureka!" As a result, the term eureka entered common parlance and is used today to indicate a moment of enlightenment. The story first appeared in written form in Vitruvius' books of architecture, two centuries after it supposedly took place.
Some scholars have doubted the accuracy of this tale, saying among other things that the method would have required precise measurements that would have been difficult to make at the time. From the equation for density, mass density has units of mass divided by volume. As there are units of mass and volume covering many different magnitudes, there are a large number of units for mass density in use. The SI unit of kilogram per cubic metre (kg/m3) and the cgs unit of gram per cubic centimetre (g/cm3) are probably the most commonly used units for density; 1,000 kg/m3 equals 1 g/cm3. In industry, other larger or smaller units of mass and/or volume are often more practical; see below for a list of some of the most common units of density.
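The definition ρ = m / V and the relative-density rule for flotation can be sketched directly. This is a minimal illustration; the ice density figure is a textbook value assumed here, not taken from the source.

```python
# Minimal sketch of rho = m / V and the relative-density buoyancy rule.

WATER_DENSITY = 1000.0  # kg/m3, i.e. 1 g/cm3

def density(mass_kg, volume_m3):
    """rho = m / V."""
    return mass_kg / volume_m3

def floats_in_water(rho_kg_m3):
    """Relative density < 1 means the substance floats in water."""
    return rho_kg_m3 / WATER_DENSITY < 1.0

rho_ice = density(917.0, 1.0)   # ~917 kg/m3, assumed textbook value
print(rho_ice, floats_in_water(rho_ice))  # 917.0 True
```

The same unit conversion noted above holds here: 1,000 kg/m3 is exactly 1 g/cm3, so ice at 917 kg/m3 is 0.917 g/cm3.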
Density
–
Air density vs. temperature
49.
Aircraft
–
An aircraft is a machine that is able to fly by gaining support from the air. It counters the force of gravity by using either static lift or the dynamic lift of an airfoil. The human activity that surrounds aircraft is called aviation. Crewed aircraft are flown by an onboard pilot, but unmanned aerial vehicles may be remotely controlled or self-controlled by onboard computers. Aircraft may be classified by different criteria, such as lift type, aircraft propulsion, and usage. Each of the two World Wars led to great technical advances; consequently, the history of aircraft can be divided into five eras: pioneers of flight; the First World War, 1914 to 1918; aviation between the World Wars, 1918 to 1939; the Second World War, 1939 to 1945; and the postwar era, also called the jet age, 1945 to the present day. Aerostats use buoyancy to float in the air in much the same way that ships float on the water. They are characterized by one or more large gasbags or canopies, filled with a relatively low-density gas such as helium, hydrogen, or hot air, which is less dense than the surrounding air. When the weight of this gas is added to the weight of the aircraft structure, it adds up to the same weight as the air that the craft displaces. A balloon was originally any aerostat, while the term airship was used for large, powered aircraft designs, usually fixed-wing. In 1919 Frederick Handley Page was reported as referring to ships of the air; in the 1930s, large intercontinental flying boats were also sometimes referred to as ships of the air or flying-ships. The advent of powered balloons, called dirigible balloons, and later of rigid hulls allowing a great increase in size, began to change the way these words were used. Huge powered aerostats, characterized by an outer framework and separate aerodynamic skin surrounding the gas bags, were produced.
There were still no fixed-wing aircraft or non-rigid balloons large enough to be called airships. Then several accidents, such as the Hindenburg disaster in 1937, led to the demise of these airships. Nowadays a balloon is an unpowered aerostat and an airship is a powered one. A powered, steerable aerostat is called a dirigible; sometimes this term is applied only to non-rigid balloons, and sometimes dirigible balloon is regarded as the definition of an airship. Non-rigid dirigibles are characterized by a moderately aerodynamic gasbag with stabilizing fins at the back, and these soon became known as blimps. During the Second World War, this shape was widely adopted for tethered balloons, which in windy weather both reduces the strain on the tether and stabilizes the balloon.
Aircraft
–
NASA test aircraft
Aircraft
–
The
Mil Mi-8 is the
most-produced helicopter in history
Aircraft
–
"Voodoo" a modified P 51 Mustang is the 2014 Reno Air Race Champion
Aircraft
–
A hot air
balloon in flight
50.
Petroleum
–
Petroleum is a naturally occurring, yellow-to-black liquid found in geological formations beneath the Earth's surface, which is commonly refined into various types of fuels. Components of petroleum are separated using a technique called fractional distillation; it consists of hydrocarbons of various molecular weights and other organic compounds. The name petroleum covers both naturally occurring unprocessed crude oil and petroleum products that are made up of refined crude oil. A fossil fuel, petroleum is formed when large quantities of dead organisms, usually zooplankton and algae, are buried underneath sedimentary rock and subjected to intense heat and pressure. Petroleum has mostly been recovered by oil drilling, which is carried out after studies of structural geology and sedimentary basin analysis. Petroleum is used in manufacturing a wide variety of materials. Concern remains over the depletion of the earth's finite reserves of oil, and the burning of fossil fuels plays the major role in the current episode of global warming. The word petroleum comes from Greek πέτρα (petra) for rocks and ἔλαιον (elaion) for oil; the term was found in 10th-century Old English sources and was used in the treatise De Natura Fossilium, published in 1546 by the German mineralogist Georg Bauer. Petroleum, in one form or another, has been used since ancient times, and is now important across society, including in economy, politics, and technology. Great quantities of it were found on the banks of the river Issus, and ancient Persian tablets indicate the medicinal and lighting uses of petroleum in the upper levels of their society. By 347 AD, oil was produced from bamboo-drilled wells in China. Early British explorers to Myanmar documented a flourishing oil extraction industry based in Yenangyaung that, in 1795, had hundreds of hand-dug wells under production; the mythological origins of the oil fields at Yenangyaung, and their hereditary monopoly control by 24 families, indicate very ancient origins. Pechelbronn is said to be the first European site where petroleum was explored and used.
The still-active Erdpechquelle, a spring where petroleum appears mixed with water, has been used since 1498. Oil sands have been mined since the 18th century; in Wietze in Lower Saxony, natural asphalt/bitumen has been explored since the 18th century. Both in Pechelbronn and in Wietze, the coal industry dominated the petroleum technologies. In 1848 the Scottish chemist James Young set up a small business refining crude oil; Young eventually succeeded, by distilling cannel coal at a low heat, in creating a fluid resembling petroleum, which when treated in the same way as the seep oil gave similar products. The production of oils and solid paraffin wax from coal formed the subject of his patent dated 17 October 1850. In 1850 Young & Meldrum and Edward William Binney entered into partnership under the title of E. W. Binney & Co. at Bathgate in West Lothian. The world's first oil refinery was built in 1856 by Ignacy Łukasiewicz. Demand for petroleum as a fuel for lighting grew in North America; Edwin Drake's 1859 well near Titusville, Pennsylvania, is popularly considered the first modern well.
Petroleum
–
Pumpjack pumping an oil well near
Lubbock, Texas
Petroleum
–
An oil refinery in Mina-Al-Ahmadi, Kuwait
Petroleum
–
Natural petroleum spring in
Korňa, Slovakia
Petroleum
–
Oil derrick in
Okemah, Oklahoma, 1922
51.
Weather
–
Weather is the state of the atmosphere, to the degree that it is hot or cold, wet or dry, calm or stormy, clear or cloudy. Most weather phenomena occur in the lowest level of the atmosphere, the troposphere. Weather refers to day-to-day temperature and precipitation activity, whereas climate is the term for the averaging of atmospheric conditions over longer periods of time. When used without qualification, weather is understood to mean the weather of Earth. Weather is driven by air pressure, temperature, and moisture differences between one place and another; these differences can occur due to the sun's angle at any particular spot, which varies with latitude. The strong temperature contrast between polar and tropical air gives rise to the largest-scale atmospheric circulations: the Hadley Cell, the Ferrel Cell, and the Polar Cell. Weather systems in the mid-latitudes, such as extratropical cyclones, are caused by instabilities of the jet stream flow. Because the Earth's axis is tilted relative to its orbital plane, sunlight is incident at different angles at different times of the year; on Earth's surface, temperatures usually range ±40 °C annually. Over thousands of years, changes in Earth's orbit can affect the amount and distribution of solar energy received by the Earth, thus influencing long-term climate. Surface temperature differences in turn cause pressure differences, and higher altitudes are cooler than lower altitudes, as most atmospheric heating is due to contact with the Earth's surface while radiative losses to space are mostly constant. Weather forecasting is the application of science and technology to predict the state of the atmosphere for a future time and a given location. The Earth's weather system is a chaotic system; as a result, small changes to one part of the system can grow to have large effects on the system as a whole. Human attempts to control the weather have occurred throughout history, and there is evidence that human activities such as agriculture have modified weather patterns. Studying how the weather works on other planets has been helpful in understanding how weather works on Earth.
A famous landmark in the Solar System, Jupiter's Great Red Spot, is a storm known to have existed for at least 300 years. However, weather is not limited to planetary bodies: a star's corona is constantly being lost to space, creating what is essentially a very thin atmosphere throughout the Solar System; the movement of mass ejected from the Sun is known as the solar wind. On Earth, the common weather phenomena include wind, cloud, rain, snow, fog, and dust storms. Less common events include natural disasters such as tornadoes, hurricanes, and typhoons. Almost all familiar weather phenomena occur in the troposphere, although weather does occur in the stratosphere and can affect weather lower down in the troposphere. Weather occurs primarily due to air pressure, temperature, and moisture differences between one place and another. These differences can occur due to the sun angle at any particular spot; the farther from the tropics one lies, the lower the sun angle is, which causes those locations to be cooler due to the spread of the sunlight over a greater surface. The strong temperature contrast between polar and tropical air gives rise to the large-scale atmospheric circulation cells and the jet stream; weather systems in the mid-latitudes, such as extratropical cyclones, are caused by instabilities of the jet stream flow.
Weather
–
Thunderstorm near Garajau,
Madeira
Weather
–
Cumulus mediocris cloud surrounded by
stratocumulus
Weather
–
New Orleans, Louisiana, after being struck by Hurricane Katrina. Katrina was a
Category 3 hurricane when it struck, although it had been a Category 5 hurricane in the
Gulf of Mexico.
Weather
–
Early morning sunshine over
Bratislava, Slovakia
52.
Traffic engineering (transportation)
–
Traffic engineering is a branch of civil engineering that uses engineering techniques to achieve the safe and efficient movement of people and goods on roadways. It focuses mainly on research for safe and efficient traffic flow, covering such elements as road geometry, sidewalks and crosswalks, cycling infrastructure, traffic signs, road surface markings, and traffic signals. Traffic engineering deals with the functional part of the transportation system, as distinct from the physical infrastructure itself. Typical traffic engineering projects involve designing traffic control device installations and modifications, including signals and signs. However, traffic engineers also consider traffic safety by investigating locations with high crash rates. Traffic flow management can be short-term or long-term. Traditionally, road improvements have consisted mainly of building additional infrastructure; however, dynamic elements are now being introduced into road traffic management. Dynamic elements have long been used in rail transport. These include sensors to measure flows and automatic, interconnected guidance systems to manage traffic. Traffic flow and speed sensors are also used to detect problems and alert operators, so that the cause of the congestion can be determined; these systems are collectively called intelligent transportation systems. At low densities, adding vehicles increases flow; however, above a threshold, increased density reduces speed, and beyond a further threshold, increased density reduces flow as well. Therefore, speeds and lane flows at bottlenecks can be kept high during peak periods by managing traffic density, using devices that limit the rate at which vehicles can enter the highway: ramp meters, signals on entrance ramps that control the rate at which vehicles are allowed to enter the mainline facility. Highway safety engineering is a branch of traffic engineering that deals with reducing the frequency and severity of crashes; it uses physics and vehicle dynamics, as well as road user psychology and human factors engineering. A typical traffic safety investigation follows these steps.
Locations are selected by looking for sites with higher-than-average crash rates. Data are then collected: this includes obtaining police reports of crashes, observing road user behavior, and gathering information on traffic signs, road surface markings, traffic lights, and road geometry. Investigators look for collision patterns or road conditions that may be contributing to the problem, then identify possible countermeasures to reduce the severity or frequency of crashes. They evaluate cost/benefit ratios of the alternatives and consider whether a proposed improvement will actually solve the problem; for example, preventing left turns at one intersection may eliminate left-turn crashes at that location, only to increase them a block away. They also ask whether any disadvantages of the proposed improvements are likely to be worse than the problem being solved. Finally, the implemented countermeasure is evaluated; usually, this occurs some time after the implementation.
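The density-speed-flow behaviour described above can be sketched with the classic Greenshields linear model. This is a standard textbook assumption made here for illustration (the source names no particular model, and the free-flow speed and jam density values are invented): speed falls linearly with density, and flow = density × speed rises to a peak at half the jam density before collapsing toward zero, which is why ramp meters try to hold density below that peak.

```python
# Sketch of a fundamental diagram using the Greenshields linear model
# (assumed model and parameters, for illustration only).

V_FREE = 100.0   # free-flow speed, km/h (assumed)
K_JAM = 120.0    # jam density, vehicles/km (assumed)

def speed(k):
    """Speed falls linearly from V_FREE at k = 0 to zero at the jam density."""
    return V_FREE * (1.0 - k / K_JAM)

def flow(k):
    """Flow q = k * v(k): rises with density, peaks, then falls to zero."""
    return k * speed(k)

densities = range(0, 121, 10)
peak_k = max(densities, key=flow)
print(peak_k, flow(peak_k))  # 60 3000.0 -- peak flow at half the jam density
```

In this model the maximum flow (capacity) occurs at K_JAM / 2; keeping mainline density below that value by metering entry is exactly the strategy the passage describes.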
Traffic engineering (transportation)
–
Complex intersections with multiple vehicle lanes, bike lanes, and crosswalks are common examples of traffic engineering projects
Traffic engineering (transportation)
–
A ramp meter limits the rate at which vehicles can enter the freeway
53.
Shear stress
–
A shear stress, often denoted τ, is defined as the component of stress coplanar with a material cross section. Shear stress arises from the force vector component parallel to the cross section; normal stress, on the other hand, arises from the force vector component perpendicular to the material cross section on which it acts. The formula to calculate average shear stress is force per unit area: τ = F / A, where τ is the shear stress, F is the force applied, and A is the cross-sectional area of material with area parallel to the applied force vector. Pure shear stress is related to pure shear strain, denoted γ, by the equation τ = γG, where G is the shear modulus of the isotropic material, given by G = E / (2(1 + ν)); here E is Young's modulus and ν is Poisson's ratio. Beam shear is defined as the internal shear stress of a beam caused by the shear force applied to the beam; the beam shear formula is also known as the Zhuravskii shear stress formula after Dmitrii Ivanovich Zhuravskii, who derived it in 1855. Shear stresses within a semi-monocoque structure may be calculated by idealizing the cross-section of the structure into a set of stringers; dividing the shear flow by the thickness of a given portion of the structure yields the shear stress. Any real fluid moving along a solid boundary will incur a shear stress on that boundary. The no-slip condition dictates that the speed of the fluid at the boundary is zero, but at some height from the boundary the flow speed must equal that of the fluid; the region between these two points is aptly named the boundary layer. For all Newtonian fluids in laminar flow, the shear stress is proportional to the strain rate in the fluid, where the viscosity is the constant of proportionality; however, for non-Newtonian fluids, this is no longer the case, as for these fluids the viscosity is not constant. The shear stress is imparted onto the boundary as a result of this loss of velocity. Specifically, the wall shear stress is defined as τ_w ≡ τ = μ (∂u/∂y) evaluated at y = 0.
For an isotropic Newtonian flow the viscosity is a scalar, while for anisotropic Newtonian flows it can be a second-order tensor. Given a shear stress as a function of the flow velocity gradient, the constant of proportionality one finds in this case is the dynamic viscosity of the flow. On the other hand, one can consider a flow in which the viscosity depends on the speed, say μ = 1/u; such a non-Newtonian flow is isotropic, so the viscosity is simply a scalar. This relationship can be exploited to measure the shear stress.
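The two formulas above, τ = F / A for average shear stress and τ_w = μ ∂u/∂y for the Newtonian wall shear stress, can be sketched numerically. The viscosity figure below is an assumed textbook value for water near room temperature, not a value from the source.

```python
# Hedged sketch: average shear stress tau = F / A, and the Newtonian
# wall shear stress tau_w = mu * (du/dy) at the wall (y = 0).

MU_WATER = 1.0e-3  # dynamic viscosity, Pa*s (assumed textbook value)

def average_shear_stress(force_n, area_m2):
    """tau = F / A, in pascals."""
    return force_n / area_m2

def wall_shear_stress(du_dy_per_s, mu=MU_WATER):
    """tau_w = mu * (du/dy) evaluated at the wall."""
    return mu * du_dy_per_s

print(average_shear_stress(10.0, 0.5))  # 20.0 Pa
print(wall_shear_stress(500.0))         # 0.5 Pa
```

A non-Newtonian fluid would replace the constant mu with a function of the strain rate, which is exactly the distinction the passage draws.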
Shear stress
–
A shearing force is applied to the top of the rectangle while the bottom is held in place. The resulting shear stress,, deforms the rectangle into a
parallelogram. The area involved would be the top of the parallelogram.
54.
Derivative
–
The derivative of a function of a real variable measures the sensitivity to change of the function value with respect to a change in its argument. Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value; for this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. Derivatives may be generalized to functions of several real variables. In this generalization, the derivative is reinterpreted as a linear transformation whose graph is the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables; it can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector. The process of finding a derivative is called differentiation; the reverse process is called antidifferentiation. The fundamental theorem of calculus states that antidifferentiation is the same as integration, and differentiation and integration constitute the two fundamental operations in single-variable calculus. Differentiation is the action of computing a derivative. The derivative of a function y = f(x) of a variable x is a measure of the rate at which the value y of the function changes with respect to the change of the variable x; it is called the derivative of f with respect to x. If x and y are real numbers, and if the graph of f is plotted against x, the derivative is the slope of this graph at each point.
The simplest case, apart from the trivial case of a constant function, is when y is a linear function of x, meaning that the graph of y is a line y = mx + b. In this case, y + Δy = f(x + Δx) = m(x + Δx) + b = mx + mΔx + b = y + mΔx. Thus, since y + Δy = y + mΔx, it follows that Δy = mΔx and m = Δy / Δx; this gives an exact value for the slope of a line. If the function f is not linear, however, then the change in y divided by the change in x varies; differentiation is a method to find an exact value for this rate of change at any given value of x. The idea, illustrated by Figures 1 to 3, is to compute the rate of change as the limiting value of the ratio of the differences Δy / Δx as Δx becomes infinitely small.
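The limiting process just described can be watched numerically. The function below is a choice made here for illustration: for f(x) = x², the ratio Δy / Δx approaches the derivative 2x as Δx shrinks.

```python
# Sketch of the difference quotient converging to the derivative.
# f(x) = x**2 is an illustrative choice; its derivative at x is 2x.

def f(x):
    return x * x

def difference_quotient(func, x, dx):
    """(f(x + dx) - f(x)) / dx, the ratio delta_y / delta_x."""
    return (func(x + dx) - func(x)) / dx

x = 3.0
for dx in (1.0, 0.1, 0.001, 1e-6):
    print(dx, difference_quotient(f, x, dx))  # tends to 6.0 as dx shrinks
```

At x = 3 the quotient starts at 7.0 for Δx = 1 and closes in on the exact derivative 6.0, exactly the convergence Figures 1 to 3 depict.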
Derivative
–
The
graph of a function, drawn in black, and a
tangent line to that function, drawn in red. The
slope of the tangent line is equal to the derivative of the function at the marked point.
55.
Continuous function
–
In mathematics, a continuous function is a function for which sufficiently small changes in the input result in arbitrarily small changes in the output; otherwise, a function is said to be a discontinuous function. A continuous function with a continuous inverse function is called a homeomorphism. Continuity of functions is one of the core concepts of topology. The introductory portion of this article focuses on the special case where the inputs and outputs of functions are real numbers; in addition, this article discusses the definition for the general case of functions between two metric spaces. In order theory, especially in domain theory, one considers a notion of continuity known as Scott continuity. Other forms of continuity do exist but are not discussed in this article. As an example, consider the function h(t), which describes the height of a growing flower at time t; this function is continuous. By contrast, if M(t) denotes the amount of money in an account at time t, then the function jumps at each point in time when money is deposited or withdrawn, so M(t) is discontinuous. A form of the epsilon-delta definition of continuity was first given by Bernard Bolzano in 1817. Cauchy defined continuity in terms of infinitely small quantities. The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work wasn't published until the 1930s; all three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of continuity in 1872. Informally, a function is continuous if its graph can be drawn without lifting the pen; this is not a rigorous definition of continuity, since the function f(x) = 1/x is continuous on its whole domain R ∖ {0} even though its graph cannot be so drawn. A function is continuous at a point if it does not have a hole or jump there; a hole or jump appears in the graph of a function if the value of the function at a point c differs from its limiting value along points that are nearby.
Such a point is called a discontinuity. A function is then continuous if it has no holes or jumps, that is, if it is continuous at every point of its domain; otherwise, the function is discontinuous at the points where its value differs from its limiting value. There are several ways to make this definition mathematically rigorous; these definitions are equivalent to one another, so the most convenient definition can be used to determine whether a given function is continuous or not. In the definitions below, f : I → R is a function defined on a subset I of the set R of real numbers; this subset I is referred to as the domain of f.
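The epsilon-delta idea can be illustrated with a numeric check. The function, the point c, and the particular δ values below are choices made here, not the source's: for f(x) = x² at c = 2 with ε = 0.5, a small enough δ keeps every sampled |f(x) − f(c)| below ε, while a too-large δ does not.

```python
# Numeric illustration of the epsilon-delta definition (illustrative
# choices of f, c, eps, and delta -- a sampled check, not a proof).

def f(x):
    return x * x

C, EPS = 2.0, 0.5

def delta_works(delta, samples=2001):
    """Check |f(x) - f(C)| < EPS on a grid of points with |x - C| <= delta."""
    for i in range(samples):
        x = C - delta + 2.0 * delta * i / (samples - 1)
        if abs(f(x) - f(C)) >= EPS:
            return False
    return True

print(delta_works(0.1))   # True: |x**2 - 4| stays below 0.5 on [1.9, 2.1]
print(delta_works(0.5))   # False: at x = 2.5, |x**2 - 4| = 2.25 >= 0.5
```

Continuity at c = 2 means such a δ can be found for every positive ε, not just ε = 0.5; sampling a grid only demonstrates the idea, it does not prove it.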
Continuous function
–
Illustration of the ε-δ-definition: for ε=0.5, c=2, the value δ=0.5 satisfies the condition of the definition.
56.
Molecules
–
A molecule is an electrically neutral group of two or more atoms held together by chemical bonds. Molecules are distinguished from ions by their lack of electrical charge; however, in quantum physics, organic chemistry, and biochemistry, the term molecule is often used less strictly, also being applied to polyatomic ions. In the kinetic theory of gases, the term molecule is often used for any gaseous particle regardless of its composition; according to this definition, noble gas atoms are considered molecules, as they are in fact monoatomic molecules. A molecule may be homonuclear, that is, consisting of atoms of one element, as with oxygen (O2), or it may be heteronuclear, composed of more than one element, as with water (H2O). Atoms and complexes connected by non-covalent interactions, such as hydrogen bonds or ionic bonds, are generally not considered single molecules. Molecules as components of matter are common in organic substances, and they also make up most of the oceans and atmosphere. However, no typical molecule can be defined for ionic crystals and covalent crystals; the theme of repeated unit-cellular structure also holds for most condensed phases with metallic bonding, which means that solid metals are also not made of molecules. In glasses, atoms may also be held together by chemical bonds with no presence of any definable molecule. The science of molecules is called molecular chemistry or molecular physics; in practice, however, this distinction is vague. In molecular sciences, a molecule consists of a stable system composed of two or more atoms; polyatomic ions may sometimes be thought of as electrically charged molecules. The term unstable molecule is used for very reactive species. According to Merriam-Webster and the Online Etymology Dictionary, the word molecule derives from the Latin moles, a small unit of mass: molecule, "extremely minute particle", from French molécule, from New Latin molecula, diminutive of Latin moles, "mass". A vague meaning at first, the vogue for the word can be traced to the philosophy of Descartes.
The definition of the molecule has evolved as knowledge of the structure of molecules has increased. Earlier definitions were less precise, defining molecules as the smallest particles of pure chemical substances that still retain their composition and chemical properties. Molecules are held together by either covalent bonding or ionic bonding. Several types of non-metal elements exist only as molecules in the environment; for example, hydrogen only exists as the hydrogen molecule (H2). A molecule of a compound is made out of two or more elements. A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms.
Molecules
–
Atomic force microscopy image of a
PTCDA molecule, which contains five carbon rings in a non-linear arrangement.
Molecules
–
A
scanning tunneling microscopy image of
pentacene molecules, which consist of linear chains of five carbon rings.
Molecules
–
Arrangement of
polyvinylidene fluoride molecules in a
nanofiber –
transmission electron microscopy image.
Molecules
57.
Mean free path
–
In physics, the mean free path is the average distance traveled by a moving particle between successive impacts, which modify its direction, energy, or other particle properties. The following table lists typical values for air at different pressures at room temperature. In gamma-ray radiography the mean free path of a pencil beam of mono-energetic photons is the average distance a photon travels between collisions with atoms of the target material. It depends on the material and the energy of the photons: ℓ = 1/μ, where μ is the linear attenuation coefficient. As photons move through the target material, they are attenuated with probabilities depending on their energy; as a result their energy distribution changes, in a process called spectrum hardening. Because of spectrum hardening, the mean free path of the X-ray spectrum changes with distance. Sometimes one measures the thickness of a material in the number of mean free paths; material with the thickness of one mean free path will attenuate the beam to 37% (1/e) of its initial intensity. This concept is related to the half-value layer (HVL): a material with a thickness of one HVL will attenuate 50% of photons. A standard x-ray image is a transmission image; an image formed from the negative logarithm of its intensities is sometimes called a number-of-mean-free-paths image. In particle physics the concept of the mean free path is not commonly used. In particular, for high-energy photons, which mostly interact by electron–positron pair production, the radiation length is used much like the mean free path in radiography. Independent-particle models in nuclear physics require the undisturbed orbiting of nucleons within the nucleus before they interact with other nucleons, and this requirement seems to be in contradiction to the assumptions made in the theory. We are facing one of the fundamental problems of nuclear structure physics which has yet to be solved. If one takes a suspension of non-light-absorbing particles of diameter d with a volume fraction Φ, the mean free path of the photons is ℓ = 2d/(3Φ Qs), where Qs is the scattering efficiency factor.
Qs can be evaluated numerically for spherical particles using Mie theory. In an otherwise empty cavity, the mean free path of a single particle bouncing off the walls is ℓ = 4V/S, where V is the volume of the cavity and S is its total inside surface area. This relation is used in the derivation of the Sabine equation in acoustics. The mean free path is also used in the design of chemical apparatus, e.g. systems for distillation. The sizes of atoms and molecules can be estimated from their mean free path, and the MFP can be used to estimate the resistivity of a material from the mean free path of its electrons. In aerodynamics, the mean free path has the same order of magnitude as the shockwave thickness at Mach numbers greater than one. Imagine a beam of particles being shot through a target; the atoms that might stop a beam particle are shown in red.
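The relations above lend themselves to a quick numerical check. The sketch below (Python; the function names are illustrative, not from any standard library) computes the surviving fraction of a beam after a given number of mean free paths via Beer–Lambert attenuation, and evaluates the cavity formula ℓ = 4V/S:

```python
import math

def transmitted_fraction(thickness_in_mfp: float) -> float:
    """Fraction of photons surviving a slab measured in mean free paths (Beer-Lambert)."""
    return math.exp(-thickness_in_mfp)

def cavity_mean_free_path(volume: float, surface_area: float) -> float:
    """Mean free path of a particle bouncing inside an empty cavity: l = 4V/S."""
    return 4.0 * volume / surface_area

# One mean free path transmits ~37% of the beam (attenuates ~63%):
print(round(transmitted_fraction(1.0), 2))        # -> 0.37
# Unit cube (V = 1, S = 6): l = 4/6
print(round(cavity_mean_free_path(1.0, 6.0), 3))  # -> 0.667
```

The 37% figure in the text is simply e⁻¹, which is why one mean free path and one half-value layer differ by a factor of ln 2.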
Mean free path
–
Mean free path for photons in energy range from 1 keV to 20 MeV for Elements Z = 1 to 100. Based on data from. The discontinuities are due to low density of gas elements. Six bands correspond to neighborhoods of six
noble gases. Also shown are locations of
absorption edges.
58.
Ideal fluid
–
In physics, a perfect fluid is a fluid that can be completely characterized by its rest frame mass density ρm and isotropic pressure p. Real fluids are "sticky" and conduct heat; perfect fluids are idealized models in which these possibilities are neglected. Specifically, perfect fluids have no shear stresses, viscosity, or heat conduction. Perfect fluids admit a Lagrangian formulation, which allows the techniques used in field theory, in particular quantization, to be applied. This formulation can be generalized, but unfortunately heat conduction and anisotropic stresses cannot be treated in these generalized formulations. Perfect fluids are often used in general relativity to model idealized distributions of matter, such as the interior of a star or an isotropic universe. In the latter case, the equation of state of the perfect fluid may be used in the Friedmann–Lemaître–Robertson–Walker equations to describe the evolution of the universe.
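The statement that the stress–energy tensor of a perfect fluid contains only diagonal components (in its rest frame) can be made explicit. A standard form, written here in units with c = 1 and metric signature (−,+,+,+) (a conventional choice, not specified in the text), is:

```latex
T^{\mu\nu} = \left(\rho_m + p\right) u^\mu u^\nu + p\, g^{\mu\nu},
\qquad
T^{\mu\nu}\big|_{\text{rest frame}} = \operatorname{diag}\!\left(\rho_m,\; p,\; p,\; p\right),
```

where \(u^\mu\) is the fluid four-velocity. Setting \(p = 0\) recovers pressureless dust, while \(p = \rho_m/3\) describes a radiation fluid, the two cases most often inserted into the Friedmann–Lemaître–Robertson–Walker equations.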
Ideal fluid
–
The stress–energy tensor of a perfect fluid contains only the diagonal components.
59.
Laminar flow
–
In fluid dynamics, laminar flow occurs when a fluid flows in parallel layers, with no disruption between the layers. At low velocities, the fluid tends to flow without lateral mixing; there are no cross-currents perpendicular to the direction of flow, nor eddies or swirls of fluids. In laminar flow, the motion of the particles of the fluid is very orderly, with particles close to a surface moving in straight lines parallel to that surface. Laminar flow is a flow regime characterized by high momentum diffusion and low momentum convection. Laminar flow tends to occur at lower velocities, below a threshold at which it becomes turbulent. Turbulent flow is a less orderly flow regime that is characterised by eddies or small packets of fluid particles, which result in lateral mixing. In non-scientific terms, laminar flow is smooth while turbulent flow is rough. The type of flow occurring in a fluid in a channel is important in fluid dynamics problems and subsequently affects heat and mass transfer in fluid systems. The dimensionless Reynolds number is an important parameter in the equations that describe whether fully developed flow conditions lead to laminar or turbulent flow. Laminar flow generally occurs when the fluid is moving slowly or the fluid is very viscous. If the Reynolds number is very small, much less than 1, then the fluid will exhibit Stokes, or creeping, flow. The specific calculation of the Reynolds number, and the values where laminar flow occurs, will depend on the geometry of the flow system. For flow in a pipe of diameter D, the Reynolds number can be written Re = ρVD/μ = VD/ν = QD/(νA), where Q is the volumetric flow rate, A is the pipe's cross-sectional area, V is the mean velocity of the fluid, μ is the dynamic viscosity of the fluid, ν is the kinematic viscosity of the fluid (ν = μ/ρ), and ρ is the density of the fluid. For such systems, laminar flow occurs when the Reynolds number is below a critical value of approximately 2,040, though the transition range is typically between 1,800 and 2,100.
For fluid systems occurring on external surfaces, such as flow past objects suspended in the fluid, a particle Reynolds number Rep would be used, for example for particles suspended in flowing fluids. As with flow in pipes, laminar flow typically occurs with lower Reynolds numbers, while turbulent flow and related phenomena, such as vortex shedding, occur with higher Reynolds numbers. A common application of laminar flow is the smooth flow of a viscous liquid through a tube or pipe. In that case, the velocity of flow varies from zero at the walls to a maximum along the centre of the vessel. The flow profile of laminar flow in a tube can be calculated by dividing the flow into thin cylindrical elements and applying the viscous force to them. Another example is the flow of air over an aircraft wing. The boundary layer is a thin sheet of air lying over the surface of the wing.
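The pipe-flow criterion described above reduces to a one-line formula plus a threshold comparison. A minimal sketch (Python; the function names are illustrative), using the approximately 2,040 critical Reynolds number quoted in the text:

```python
def reynolds_number(density, velocity, diameter, dynamic_viscosity):
    """Re = rho * V * D / mu for flow in a circular pipe of diameter D."""
    return density * velocity * diameter / dynamic_viscosity

def flow_regime(re, laminar_limit=2040.0):
    """Classify pipe flow against the ~2,040 critical value."""
    return "laminar" if re < laminar_limit else "turbulent (or transitional)"

# Water (rho ~ 1000 kg/m^3, mu ~ 1e-3 Pa*s) at 0.01 m/s in a 0.02 m pipe:
re = reynolds_number(1000.0, 0.01, 0.02, 1e-3)
print(round(re), flow_regime(re))  # -> 200 laminar
```

Note that the same water moving at 1 m/s in the same pipe gives Re ≈ 20,000, well into the turbulent regime, which illustrates why "slow or very viscous" is the practical rule of thumb for laminar flow.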
Laminar flow
–
A
sphere in Stokes flow, at very low
Reynolds number. An object moving through a fluid experiences a
force in the direction opposite to its motion.
60.
Perpendicular
–
In elementary geometry, the property of being perpendicular is the relationship between two lines which meet at a right angle. The property extends to other related geometric objects. A line is said to be perpendicular to another line if the two lines intersect at a right angle. Perpendicularity can be shown to be symmetric, meaning that if a first line is perpendicular to a second line, then the second line is also perpendicular to the first; for this reason, we may speak of two lines as being perpendicular without specifying an order. Perpendicularity easily extends to segments and rays. In symbols, A B ¯ ⊥ C D ¯ means line segment AB is perpendicular to line segment CD. A line is said to be perpendicular to a plane if it is perpendicular to every line in the plane that it intersects; this definition depends on the definition of perpendicularity between lines. Two planes in space are said to be perpendicular if the dihedral angle at which they meet is a right angle. Perpendicularity is one instance of the more general mathematical concept of orthogonality; perpendicularity is the orthogonality of classical geometric objects. Thus, in advanced mathematics, the word perpendicular is sometimes used to describe much more complicated geometric orthogonality conditions. The word foot is frequently used in connection with perpendiculars. This usage is exemplified in the top diagram, above; the diagram can be in any orientation, and the foot is not necessarily at the bottom. Step 2: construct circles centered at A and B having equal radius, and let Q and R be the points of intersection of the two circles. Step 3: connect Q and R to construct the desired perpendicular PQ. To prove that PQ is perpendicular to AB, use the SSS congruence theorem for triangles QPA and QPB to conclude that angles OPA and OPB are equal. Then use the SAS congruence theorem for triangles OPA and OPB to conclude that angles POA and POB are equal, and hence each is a right angle. To make the perpendicular to the line g at or through the point P using Thales's theorem, see the animation at right.
The Pythagorean theorem can be used as the basis of methods of constructing right angles. For example, by counting links, three pieces of chain can be made with lengths in the ratio 3:4:5. These can be laid out to form a triangle, which will have a right angle opposite its longest side. This method is useful for laying out gardens and fields, where the dimensions are large; the chains can be used repeatedly whenever required. If two lines are perpendicular to a third line, all of the angles formed along the third line are right angles.
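The algebraic counterpart of these constructions is the dot-product test: in Cartesian coordinates, two direction vectors are perpendicular exactly when their dot product is zero. A small sketch (Python; the function name is illustrative) using the legs of the 3-4-5 triangle from the chain example:

```python
def is_perpendicular(v1, v2, tol=1e-9):
    """Two plane vectors are perpendicular iff their dot product is (near) zero."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return abs(dot) < tol

# Legs of a 3-4-5 right triangle meet at a right angle:
print(is_perpendicular((3, 0), (0, 4)))  # -> True
print(is_perpendicular((1, 1), (1, 2)))  # -> False
```

The tolerance parameter matters in floating-point work, where rotated vectors rarely give an exact zero.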
Perpendicular
–
The segment AB is perpendicular to the segment CD because the two angles it creates (indicated in orange and blue) are each 90 degrees.
61.
Friction
–
Friction is the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other. There are several types of friction. Dry friction resists relative lateral motion of two solid surfaces in contact; it is subdivided into static friction between non-moving surfaces, and kinetic friction between moving surfaces. Fluid friction describes the friction between layers of a viscous fluid that are moving relative to each other. Lubricated friction is a case of fluid friction where a lubricant fluid separates two solid surfaces. Skin friction is a component of drag, the force resisting the motion of a fluid across the surface of a body. Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation. When surfaces in contact move relative to each other, the friction between the two surfaces converts kinetic energy into thermal energy. This property can have dramatic consequences, as illustrated by the use of friction created by rubbing pieces of wood together to start a fire. Kinetic energy is converted to thermal energy whenever motion with friction occurs; another important consequence of many types of friction can be wear, which may lead to performance degradation and/or damage to components. Friction is a component of the science of tribology. Friction is not itself a fundamental force. Dry friction arises from a combination of adhesion, surface roughness, and surface deformation. The complexity of these interactions makes the calculation of friction from first principles impractical and necessitates the use of empirical methods for analysis. Friction is a non-conservative force: work done against friction is path dependent, and in the presence of friction some energy is always lost in the form of heat. Thus mechanical energy is not conserved. The Greeks, including Aristotle, Vitruvius, and Pliny the Elder, were interested in the cause and mitigation of friction.
They were aware of differences between static and kinetic friction, with Themistius stating in 350 A.D. that it is easier to further the motion of a moving body than to move a body at rest. The classic laws of sliding friction were discovered by Leonardo da Vinci in 1493, a pioneer in tribology; these laws were rediscovered by Guillaume Amontons in 1699. Amontons presented the nature of friction in terms of surface irregularities. The understanding of friction was further developed by Charles-Augustin de Coulomb, who also considered the influence of sliding velocity, temperature, and humidity. The distinction between static and dynamic friction is made in Coulomb's friction law, although this distinction was already drawn by Johann Andreas von Segner in 1758. Leslie was equally skeptical about the role of adhesion proposed by Desaguliers; in Leslie's view, friction should be seen as a time-dependent process of flattening, pressing down asperities, which creates new obstacles in what were cavities before.
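The static/kinetic distinction can be sketched with the simple Coulomb friction model: friction matches the applied force up to a maximum of μs·N, after which the block slides and the (typically smaller) kinetic friction μk·N opposes motion. The Python helper below is illustrative, not taken from the text:

```python
def friction_force(applied_force, normal_force, mu_static, mu_kinetic):
    """Coulomb model: static friction balances the applied force up to mu_s * N;
    beyond that the block slides and kinetic friction mu_k * N applies."""
    max_static = mu_static * normal_force
    if applied_force <= max_static:
        return applied_force, "static (block at rest)"
    return mu_kinetic * normal_force, "kinetic (block sliding)"

# Block with N = 100 N on a level surface, mu_s = 0.5, mu_k = 0.25:
print(friction_force(30.0, 100.0, 0.5, 0.25))  # friction 30 N, still static
print(friction_force(60.0, 100.0, 0.5, 0.25))  # friction 25 N, now sliding
```

This reproduces the behavior described in the caption below: friction grows with the applied force until the block breaks free, then drops to the kinetic value.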
Friction
–
When the mass is not moving, the object experiences static friction. The friction increases as the applied force increases until the block moves. After the block moves, it experiences kinetic friction, which is less than the maximum static friction.
62.
Non-newtonian fluid
–
A non-Newtonian fluid is a fluid that does not follow Newton's law of viscosity. Most commonly, the viscosity of non-Newtonian fluids is dependent on shear rate or shear rate history. Some non-Newtonian fluids with shear-independent viscosity, however, still exhibit normal stress differences or other non-Newtonian behavior. Many salt solutions and molten polymers are non-Newtonian fluids, as are commonly found substances such as ketchup, custard, toothpaste, starch suspensions, maizena, paint, and blood. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different and can even be time-dependent; therefore, a constant coefficient of viscosity cannot be defined. Although the concept of viscosity is commonly used in fluid mechanics to characterize the shear properties of a fluid, it can be inadequate to describe non-Newtonian fluids. These are better studied using tensor-valued constitutive equations, which are common in the field of continuum mechanics. The viscosity of a shear thickening fluid, or dilatant fluid, appears to increase as the shear rate increases. Corn starch dissolved in water is a common example: when stirred slowly it looks milky, when stirred vigorously it feels like a very viscous liquid. Note that all thixotropic fluids are extremely shear thinning, but they are significantly time dependent; thus, to avoid confusion, the time-independent classification is more clearly termed pseudoplastic. Another example of a shear thinning fluid is blood; this property is highly favoured within the body, as it allows the viscosity of blood to decrease with increased shear strain rate. Fluids that have a linear shear stress/shear strain rate relationship but require a finite yield stress before they begin to flow are called Bingham plastics.
Several examples are clay suspensions, drilling mud, toothpaste, mayonnaise, and chocolate. The surface of a Bingham plastic can hold peaks when it is still; by contrast, Newtonian fluids have flat featureless surfaces when still. There are also fluids whose strain rate is a function of time. Fluids that require a gradually increasing shear stress to maintain a constant strain rate are referred to as rheopectic. The opposite case is a fluid that thins out with time and requires a decreasing stress to maintain a constant strain rate; such fluids are termed thixotropic. Many common substances exhibit non-Newtonian flows; a suspension of uncooked cornflour in water, often called oobleck, has the same shear-thickening properties. The name oobleck is derived from the Dr. Seuss book Bartholomew and the Oobleck. Because of its properties, oobleck is often used in demonstrations that exhibit its unusual behavior.
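Shear-thinning and shear-thickening behavior is often summarized with the power-law (Ostwald–de Waele) model, a standard constitutive model that the text does not name explicitly. The apparent viscosity is η = K·γ̇^(n−1), where K is the consistency index and n the flow-behavior index; n < 1 gives a pseudoplastic (shear-thinning) fluid, n > 1 a dilatant one, and n = 1 recovers a Newtonian fluid. A minimal Python sketch (illustrative function name):

```python
def apparent_viscosity(K, n, shear_rate):
    """Power-law (Ostwald-de Waele) model: eta = K * shear_rate**(n - 1).
    n < 1: shear thinning (pseudoplastic); n > 1: shear thickening (dilatant)."""
    return K * shear_rate ** (n - 1)

# A shear-thinning fluid (n = 0.5) gets thinner as it is sheared faster:
print(apparent_viscosity(K=1.0, n=0.5, shear_rate=1.0))           # -> 1.0
print(round(apparent_viscosity(K=1.0, n=0.5, shear_rate=100.0), 6))  # -> 0.1
```

Bingham plastics need an extra yield-stress term, and thixotropic or rheopectic fluids additionally need a time-dependent K, so this model covers only the rate-dependent cases discussed above.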
Non-newtonian fluid
–
Demonstration of a non-Newtonian fluid at
Universum in Mexico City
Non-newtonian fluid
–
Classification of fluids with shear stress as a function of shear rate.
Non-newtonian fluid
–
Oobleck on a subwoofer. Applying force to oobleck, by sound waves in this case, makes the non-Newtonian fluid thicken.
63.
Cartesian coordinate system
–
Each reference line is called a coordinate axis, or just axis, of the system, and the point where they meet is its origin, usually denoted by the ordered pair (0, 0). The coordinates can also be defined as the positions of the perpendicular projections of the point onto the two axes, expressed as signed distances from the origin. One can use the same principle to specify the position of any point in three-dimensional space by three Cartesian coordinates, its signed distances to three mutually perpendicular planes. In general, n Cartesian coordinates specify the point in an n-dimensional Euclidean space for any dimension n; these coordinates are equal, up to sign, to distances from the point to n mutually perpendicular hyperplanes. The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2, centered at the origin of the plane, may be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, and engineering. They are the most common coordinate system used in computer graphics and computer-aided geometric design. Nicole Oresme, a French cleric and friend of the Dauphin of the 14th century, used constructions similar to Cartesian coordinates well before the time of Descartes. The adjective Cartesian refers to the French mathematician and philosopher René Descartes, who published this idea in 1637. It was independently discovered by Pierre de Fermat, who also worked in three dimensions, although Fermat did not publish the discovery. Both authors used a single axis in their treatments and have a variable length measured in reference to this axis.
The concept of using a pair of axes was introduced later, after Descartes' La Géométrie was translated into Latin in 1649 by Frans van Schooten and his students. These commentators introduced several concepts while trying to clarify the ideas contained in Descartes' work. Many other coordinate systems have been developed since Descartes, such as the polar coordinates for the plane. The development of the Cartesian coordinate system would play a fundamental role in the development of the calculus by Isaac Newton and Gottfried Wilhelm Leibniz. The two-coordinate description of the plane was later generalized into the concept of vector spaces. Choosing a Cartesian coordinate system for a one-dimensional space – that is, for a straight line – involves choosing a point O of the line (the origin), a unit of length, and an orientation for the line. An orientation chooses which of the two half-lines determined by O is positive and which is negative; we then say that the line is oriented from the negative half towards the positive half.
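Since polar coordinates are mentioned above as an alternative system for the plane, a short conversion between the two may be a useful illustration. The sketch below (Python; function names are illustrative) uses the radius-2 circle example:

```python
import math

def to_polar(x, y):
    """Cartesian (x, y) -> polar (r, theta), with theta in radians."""
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    """Polar (r, theta) -> Cartesian (x, y)."""
    return r * math.cos(theta), r * math.sin(theta)

# A point on the circle x^2 + y^2 = 4 (radius 2, centered at the origin):
r, theta = to_polar(0.0, 2.0)
print(r, round(theta, 4))   # -> 2.0 1.5708  (theta = pi/2)
x, y = to_cartesian(2.0, math.pi / 2)
print(round(x, 10), y)      # -> 0.0 2.0
```

Note the use of atan2 rather than atan(y/x): it handles all four quadrants and the x = 0 case, which a naive quotient would not.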
Cartesian coordinate system
–
The
right hand rule.
Cartesian coordinate system
–
Illustration of a Cartesian coordinate plane. Four points are marked and labeled with their coordinates: (2,3) in green, (−3,1) in red, (−1.5,−2.5) in blue, and the origin (0,0) in purple.
Cartesian coordinate system
–
3D Cartesian Coordinate Handedness
64.
Euler equations (fluid dynamics)
–
In fluid dynamics, the Euler equations are a set of quasilinear hyperbolic equations governing adiabatic and inviscid flow. They are named after Leonhard Euler. In fact, the Euler equations can be obtained by linearization of some more precise continuity equations, like the Navier–Stokes equations, in a local equilibrium state given by a Maxwellian. The Euler equations can be applied to incompressible and to compressible flow – assuming, in the incompressible case, that the flow velocity is a solenoidal field. Historically, only the incompressible equations were derived by Euler; however, fluid dynamics literature often refers to the full set – including the energy equation – of the more general compressible equations together as the Euler equations. From the mathematical point of view, the Euler equations are notably hyperbolic conservation equations in the case without external field. In fact, like any Cauchy equation, the Euler equations originally formulated in convective form can also be put in conservation form; the convective form emphasizes changes to the state in a frame of reference moving with the fluid. The Euler equations first appeared in published form in Euler's article Principes généraux du mouvement des fluides, and they were among the first partial differential equations to be written down. At the time Euler published his work, the system of equations consisted of the momentum and continuity equations; an additional equation, which was later to be called the adiabatic condition, was supplied by Pierre-Simon Laplace in 1816. Here g represents body accelerations acting on the continuum, for example gravity, inertial accelerations, and electric field acceleration. The first equation is the Euler momentum equation with uniform density; the second equation is the incompressibility constraint, stating that the flow velocity is a solenoidal field. Notably, the continuity equation would be required also in this case as an additional third equation if the density varies in time or in space.
The equations above thus represent, respectively, conservation of mass and momentum. In 3D, for example, N = 3 and the r and u vectors are explicitly r = (x1, x2, x3) and u = (u1, u2, u3). Flow velocity and pressure are the physical variables. Although Euler first presented these equations in 1755, many fundamental questions about them remain unanswered. In three space dimensions it is not even known whether solutions of the equations are defined for all time or if they form singularities. In order to make the equations dimensionless, a characteristic length r0 and a characteristic velocity u0 need to be defined. These should be chosen such that the dimensionless variables are all of order one. The limit of high Froude numbers is thus notable and can be studied with perturbation theory. The conservation form emphasizes the mathematical properties of the Euler equations, and especially the contracted form is often the most convenient one for computational fluid dynamics simulations. Computationally, there are advantages in using the conserved variables; this gives rise to a class of numerical methods called conservative methods.
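For reference, the incompressible Euler equations discussed above (the momentum equation with uniform density, plus the solenoidal constraint) are conventionally written in convective form as:

```latex
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\frac{\nabla p}{\rho} + \mathbf{g},
\qquad
\nabla \cdot \mathbf{u} = 0,
```

where \(\mathbf{u}\) is the flow velocity, \(p\) the pressure, \(\rho\) the (uniform) density, and \(\mathbf{g}\) the body acceleration field mentioned in the text. The first equation expresses momentum conservation in a frame moving with the fluid; the second is the incompressibility (solenoidal) constraint.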
Euler equations (fluid dynamics)
–
The "Streamline curvature theorem" states that the pressure at the upper surface of an airfoil is lower than the pressure far away and that the pressure at the lower surface is higher than the pressure far away; hence the pressure difference between the upper and lower surfaces of an airfoil generates a lift force.
65.
Secondary flow
–
The flow in these regions is the secondary flow. These regions are usually in the vicinity of the boundary of the fluid adjacent to solid surfaces, where viscous forces are at work. The basic principles of physics and the Coriolis effect satisfactorily explain that, well above ground level, the direction of the wind is parallel to the isobars. The flow of air across the isobars near ground level is a secondary flow: it does not conform to the primary flow, which is parallel to the isobars. At heights well above ground there is a balance between the Coriolis effect, the local pressure gradient, and the velocity of the wind. Closer to the ground, the air is not able to accelerate to the speed necessary for balanced flow. As a result, the wind direction near ground level is only partly parallel to the isobars, and it angles across them toward the lower pressure. Hence, the flow toward the center of a region of low pressure is also drawn upward by the significantly lower pressure at mid altitudes. This slow, widespread ascent of the air in a region of low pressure can cause widespread cloud. In a region of high pressure the secondary flow includes a slow, widespread descent of air from mid altitudes toward ground level, and then outward across the isobars. This descent causes a reduction in humidity and explains why regions of high pressure usually experience cloud-free skies for many days. The primary flow around a tropical cyclone is parallel to the isobars – the closer to the center of the cyclone, the faster is the wind speed. In accordance with Bernoulli's principle, where the speed is fastest the barometric pressure is lowest. Consequently, near the center of the cyclone the barometric pressure is very low and there is a strong pressure gradient across the isobars toward the center of the cyclone. This pressure gradient provides the force necessary for the circular motion of each parcel of air.
This strong pressure gradient, coupled with the slower speed of the air near the Earth's surface, causes a secondary flow at surface level toward the center of the cyclone. The slower speed of the air at the surface prevents the barometric pressure from falling as low as would be expected from the barometric pressure at mid altitudes; this is compatible with Bernoulli's principle. The secondary flow at the Earth's surface is toward the center of the cyclone but is then drawn upward by the significantly lower pressure at mid and high altitudes. As the secondary flow is drawn upward, the air cools and its pressure falls. Tornadoes and dust devils display localised vortex flow.
Secondary flow
–
An example of a dust devil in
Ramadi,
Iraq.
66.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration of recognition was generated in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the stated purpose of encouraging Canadian culture. In the United Kingdom, United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
International Standard Book Number
–
A 13-digit ISBN, 978-3-16-148410-0, as represented by an
EAN-13 bar code
67.
McGraw-Hill, Inc.
–
S&P Global Inc. is an American publicly traded corporation headquartered in New York City. Its primary areas of business are financial information and analytics. It is the parent company of S&P Global Ratings, S&P Global Market Intelligence, and S&P Global Platts, and is the majority owner of the S&P Dow Jones Indices joint venture. The predecessor companies of S&P Global have history dating to 1888, when James H. McGraw purchased the American Journal of Railway Appliances. He continued to add further publications, eventually establishing The McGraw Publishing Company in 1899. John A. Hill had also produced several technical and trade publications, and in 1902 formed his own business. In 1909 both men, having known each other's interests, agreed upon an alliance and combined the book departments of their publishing companies into The McGraw-Hill Book Company, with John Hill serving as President and James McGraw as Vice-President. The buyout made McGraw-Hill the largest educational publisher in the United States. In 1964, after Hill died, both the McGraw-Hill Publishing Company and the McGraw-Hill Book Company merged into McGraw-Hill, Inc. In 1979, McGraw-Hill purchased Byte magazine from its owner/publisher Virginia Williamson, who then became a vice-president of McGraw-Hill. In 1995, McGraw-Hill, Inc. became The McGraw-Hill Companies. In 2007, McGraw-Hill launched an online study network, GradeGuru.com, which gave McGraw-Hill an opportunity to connect directly with its end users, the students. The site closed on April 29, 2012. On October 3, 2011, McGraw-Hill announced it was selling its entire television station group to the E. W. Scripps Company for $212 million. The sale was completed on December 30, 2011. The company had been involved in broadcasting since 1972, when it purchased four television stations from a division of Time Inc. McGraw-Hill has produced the Glencoe series of books for decades.
On November 26, 2012, McGraw-Hill announced it was selling its entire education division; on March 22, 2013, it announced it had completed the sale for $2.4 billion in cash. On May 1, 2013, shareholders of McGraw-Hill voted to change the company's name to McGraw Hill Financial. McGraw-Hill divested the subsidiary McGraw-Hill Construction to Symphony Technology Group for US$320 million on September 22, 2014; the sale included Engineering News-Record, Architectural Record, Dodge and Sweets. McGraw-Hill Construction has been renamed Dodge Data & Analytics. In February 2016, it was announced that McGraw Hill Financial would change its name to S&P Global Inc. by the end of April 2016; the company officially changed its name following a shareholder vote on April 27, 2016. In April 2016, the company announced it was selling J. D. Power & Associates to investment firm XIO Group for $1.1 billion. S&P Global now organizes its businesses in four units based on the market in which they are involved. S&P Global Ratings provides independent investment research, including ratings on various investment instruments; subsidiaries include Leveraged Commentary & Data. Launched on July 2, 2012, S&P Dow Jones Indices is the world's largest global resource for index-based concepts and data; it produces the S&P 500 and the Dow Jones Industrial Average. Headquartered in London, S&P Global Platts is a provider of information and a source of benchmark price assessments for the commodities, energy, petrochemicals, metals, and agriculture markets.
McGraw-Hill, Inc.
–
McGraw Hill Financial, Inc.
McGraw-Hill, Inc.
–
1221 Avenue of the Americas, the headquarters of McGraw-Hill
McGraw-Hill, Inc.
–
2008 conference booth
68.
Applied physics
–
Applied physics is physics which is intended for a particular technological or practical use. It is usually considered a bridge or connection between physics and engineering; this approach is similar to that of applied mathematics. Applied physicists can also be interested in the use of physics for scientific research. For instance, the field of accelerator physics can contribute to research in theoretical physics by working with engineers to enable the design and construction of high-energy colliders
Applied physics
–
Experiment using a
laser
Applied physics
–
A
magnetic resonance image
Applied physics
–
Computer modeling of the
space shuttle during re-entry
69.
Experimental physics
–
Experimental physics is the category of disciplines and sub-disciplines in the field of physics that are concerned with the observation of physical phenomena and with experiments. Methods vary from discipline to discipline, from simple experiments and observations, such as the Cavendish experiment, to more complicated ones, such as those using the Large Hadron Collider. It is often put in contrast with theoretical physics, which is more concerned with predicting and explaining the physical behaviour of nature than with the acquisition of knowledge about it. Although experimental and theoretical physics are concerned with different aspects of nature, theoretical physics can also offer insight on what data are needed in order to gain a better understanding of the universe, and on what experiments to devise in order to obtain them. In the early 17th century, Galileo made extensive use of experimentation to validate physical theories. Galileo formulated and successfully tested several results in dynamics, in particular the law of inertia, which later became the first law in Newton's laws of motion. In Galileo's Two New Sciences, a dialogue between the characters Simplicio and Salviati discusses the motion of a ship and how that ship's cargo is indifferent to its motion. Huygens used the motion of a boat along a Dutch canal to illustrate an early form of the conservation of momentum. Experimental physics is considered to have reached a high point with the publication of the Philosophiae Naturalis Principia Mathematica in 1687 by Sir Isaac Newton, which set out the laws of motion and the law of universal gravitation. Both theories agreed well with experiment; the Principia also included several theories in fluid dynamics. From the late 17th century onward, thermodynamics was developed by physicists and chemists such as Boyle and Young. In 1733, Bernoulli used statistical arguments with classical mechanics to derive thermodynamic results, initiating the field of statistical mechanics. In 1798, Thompson demonstrated the conversion of mechanical work into heat. Ludwig Boltzmann, in the nineteenth century, is responsible for the modern form of statistical mechanics.
Besides classical mechanics and thermodynamics, another great field of experimental inquiry within physics was the nature of electricity. Observations in the 17th and 18th centuries by scientists such as Robert Boyle and Stephen Gray established our basic understanding of electrical charge and current. By 1808 John Dalton had discovered that atoms of different elements have different weights and proposed the modern theory of the atom. It was Hans Christian Ørsted who first proposed the connection between electricity and magnetism, after observing the deflection of a compass needle by a nearby electric current. By the early 1830s Michael Faraday had demonstrated that changing magnetic fields could induce electric currents. In 1864 James Clerk Maxwell presented to the Royal Society a set of equations that described this relationship between electricity and magnetism. Maxwell's equations also predicted correctly that light is an electromagnetic wave. Starting with astronomy, the principles of natural philosophy crystallized into fundamental laws of physics which were enunciated and improved in the succeeding centuries
Experimental physics
–
A view of the
CMS detector, an experimental endeavour of the
LHC at
CERN.
70.
Theoretical physics
–
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena. The advancement of science depends in general on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation. A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations; the quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory similarly differs from a mathematical theory, in the sense that the word theory has a different meaning in mathematical terms. A physical theory involves one or more relationships between various measurable quantities. Archimedes realized that a ship floats by displacing its mass of water; Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles. Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example. For instance, phenomenologists might employ empirical formulas to agree with experimental results, often without deep physical understanding. Modelers often appear much like phenomenologists, but try to model speculative theories that have certain desirable features. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated.
Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled, e.g. the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics. Theoretical advances may consist in setting aside old, incorrect paradigms, or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result. Sometimes, though, advances may proceed along different paths. However, an exception to all the above is the wave–particle duality. Physical theories become accepted if they are able to make correct predictions and no incorrect ones. They are also more likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method. Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories. Theoretical physics began at least 2,300 years ago with Pre-Socratic philosophy. During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon
Theoretical physics
–
Visual representation of a Schwarzschild
wormhole. Wormholes have never been observed, but they are predicted to exist through
mathematical models and
scientific theory.
71.
Energy
–
In physics, energy is the property that must be transferred to an object in order to perform work on, or to heat, the object; it can be converted in form, but not created or destroyed. The SI unit of energy is the joule, which is the energy transferred to an object by the mechanical work of moving it a distance of 1 metre against a force of 1 newton. Mass and energy are closely related; for example, with a sensitive enough scale, one could measure an increase in mass after heating an object. Living organisms require available energy to stay alive, such as the energy humans get from food. Civilisation gets the energy it needs from energy resources such as fossil fuels and nuclear fuel. The processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the sun. The total energy of a system can be subdivided and classified in various ways. It may also be convenient to distinguish gravitational energy, thermal energy, electric energy, and several other types. Many of these classifications overlap; for instance, thermal energy usually consists partly of kinetic and partly of potential energy. Some types of energy are a mix of both potential and kinetic energy; an example is mechanical energy, which is the sum of the kinetic and potential energy in a system. Whenever physical scientists discover that a phenomenon appears to violate the law of energy conservation, new forms of energy are typically added to account for the discrepancy. Heat and work are special cases in that they are not properties of systems; in general we cannot measure how much heat or work is present in an object, but rather only how much energy is transferred among objects in certain ways during the occurrence of a given process. Heat and work are measured as positive or negative depending on which side of the transfer we view them from. The distinctions between different kinds of energy are not always clear-cut. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure.
The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. In 1807, Thomas Young was possibly the first to use the term energy, in its modern sense, instead of vis viva. Gustave-Gaspard Coriolis described kinetic energy in 1829 in its modern sense. The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat. These developments led to the theory of conservation of energy, formalized largely by William Thomson as the field of thermodynamics
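The factor-of-two relationship between Leibniz's vis viva and the modern kinetic energy can be checked directly. The sketch below is illustrative only (the function names are my own, and SI units are assumed), not part of the original article:

```python
def vis_viva(m, v):
    """Leibniz's vis viva, m * v**2 -- an early measure of 'living force'."""
    return m * v ** 2

def kinetic_energy(m, v):
    """Modern kinetic energy in joules: (1/2) * m * v**2."""
    return 0.5 * m * v ** 2

# For a 2 kg mass moving at 3 m/s, vis viva is exactly twice the kinetic energy.
assert vis_viva(2.0, 3.0) == 2 * kinetic_energy(2.0, 3.0)
```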
Energy
–
In a typical
lightning strike, 500
megajoules of
electric potential energy is converted into the same amount of energy in other forms, mostly
light energy,
sound energy and
thermal energy.
Energy
–
Thermal energy is energy of microscopic constituents of matter, which may include both
kinetic and
potential energy.
Energy
–
Thomas Young – the first to use the term "energy" in the modern sense.
Energy
–
A
Turbo generator transforms the energy of pressurised steam into electrical energy
72.
Motion (physics)
–
In physics, motion is a change in position of an object over time. Motion is described in terms of displacement, distance, velocity, acceleration, time and speed. Motion of a body is observed by attaching a frame of reference to an observer and measuring the change in position of the body relative to that frame. If the position of a body is not changing with respect to a given frame of reference, the body is said to be at rest. An object's motion cannot change unless it is acted upon by a force. Momentum is a quantity which is used for measuring the motion of an object. As there is no absolute frame of reference, absolute motion cannot be determined; thus, everything in the universe can be considered to be moving. More generally, motion is a concept that applies to objects, bodies, and matter particles, to radiation, radiation fields and radiation particles, and to space, its curvature and space-time. One can also speak of motion of shapes and boundaries; so, the term motion in general signifies a continuous change in the configuration of a physical system. For example, one can talk about motion of a wave or about motion of a quantum particle. In physics, motion is described through two sets of apparently contradictory laws of mechanics. Motions of all large-scale and familiar objects in the universe are described by classical mechanics, whereas the motion of very small atomic and sub-atomic objects is described by quantum mechanics. Classical mechanics produces very accurate results within its domain, and is one of the oldest and largest subjects in science, engineering and technology. Classical mechanics is fundamentally based on Newton's laws of motion. These laws describe the relationship between the forces acting on a body and the motion of that body. They were first compiled by Sir Isaac Newton in his work Philosophiæ Naturalis Principia Mathematica. His three laws are: a body either is at rest or moves with constant velocity, until and unless an outer force is applied to it.
An object will travel in one direction only until an outer force changes its direction. Whenever one body exerts a force F onto a second body, the second body exerts the force −F on the first body; F and −F are equal in magnitude and opposite in sense, so the body which exerts F will be pushed backwards. Newton's three laws of motion, along with his law of universal gravitation, were the first to provide a mathematical model for understanding orbiting bodies in outer space. This explanation unified the motion of celestial bodies and the motion of objects on Earth. Classical mechanics was later enhanced by Albert Einstein's special relativity, which describes the motion of objects with a velocity approaching the speed of light
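As an illustrative sketch (not from the original article), Newton's second law can be integrated numerically. The Python fragment below uses semi-implicit Euler steps; the mass and force values are assumed for the example:

```python
def simulate(m, F, dt, steps):
    """Integrate Newton's second law F = m*a for a body starting at rest.

    Semi-implicit Euler: update velocity from the acceleration,
    then position from the new velocity.
    """
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = F / m      # second law: acceleration caused by the net force
        v += a * dt
        x += v * dt
    return x, v

# A 2 kg body under a constant 4 N force for 1 s reaches v = a*t = 2 m/s.
x, v = simulate(2.0, 4.0, 0.001, 1000)
```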
Motion (physics)
–
Motion involves a change in position, such as in this perspective of rapidly leaving
Yongsan Station.
73.
Thermodynamics
–
Thermodynamics is a branch of physics concerned with heat and temperature and their relation to energy and work. The behavior of these quantities is governed by the four laws of thermodynamics; the laws of thermodynamics are explained in terms of microscopic constituents by statistical mechanics. Thermodynamics applies to a wide variety of topics in science and engineering, especially physical chemistry and chemical engineering. The initial application of thermodynamics to mechanical heat engines was extended early on to the study of chemical compounds. Chemical thermodynamics studies the nature of the role of entropy in the process of chemical reactions and has provided the bulk of expansion and knowledge of the field. Other formulations of thermodynamics emerged in the following decades. Statistical thermodynamics, or statistical mechanics, concerned itself with statistical predictions of the collective motion of particles from their microscopic behavior. In 1909, Constantin Carathéodory presented a purely mathematical approach to the field in his axiomatic formulation of thermodynamics. A description of any thermodynamic system employs the four laws of thermodynamics that form an axiomatic basis. The first law specifies that energy can be exchanged between physical systems as heat and work. In thermodynamics, interactions between large ensembles of objects are studied and categorized; central to this are the concepts of the thermodynamic system and its surroundings. A system is composed of particles whose average motions define its properties, and these properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.
With these tools, thermodynamics can be used to describe how systems respond to changes in their environment. This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. This article is focused mainly on classical thermodynamics, which primarily studies systems in thermodynamic equilibrium; non-equilibrium thermodynamics is often treated as an extension of the classical treatment, but statistical mechanics has brought many advances to that field. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that nature abhors a vacuum. Shortly after Guericke, the English physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with the English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, Boyle's Law was formulated, which states that pressure and volume are inversely proportional. Later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine; he did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time. Black and Watt performed experiments together, but it was Watt who conceived the idea of the external condenser, which resulted in a large increase in steam engine efficiency. Drawing on all this previous work, Sadi Carnot, the father of thermodynamics, published Reflections on the Motive Power of Fire
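Boyle's Law lends itself to a one-line check. The sketch below is a hypothetical helper of my own (assuming constant temperature and consistent units), not part of the article:

```python
def boyle_volume(p1, v1, p2):
    """Boyle's Law: at constant temperature p1*v1 == p2*v2, so v2 = p1*v1/p2."""
    return p1 * v1 / p2

# Doubling the pressure on 2.0 L of gas at 100 kPa halves its volume to 1.0 L.
assert boyle_volume(100.0, 2.0, 200.0) == 1.0
```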
Thermodynamics
–
Annotated color version of the original 1824
Carnot heat engine showing the hot body (boiler), working body (system, steam), and cold body (water), the letters labeled according to the stopping points in
Carnot cycle
74.
Classical mechanics
–
In physics, classical mechanics is one of the two major sub-fields of mechanics, along with quantum mechanics. Classical mechanics is concerned with the set of physical laws describing the motion of bodies under the influence of a system of forces. The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering and technology. Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets and stars. Within classical mechanics are fields of study that describe the behavior of solids, liquids and gases. Classical mechanics also provides extremely accurate results as long as the domain of study is restricted to large objects and the speeds involved do not approach the speed of light. Where both quantum and classical mechanics cannot apply, such as at the quantum level with high speeds, quantum field theory becomes applicable. Since these aspects of physics were developed long before the emergence of quantum physics and relativity, some sources exclude relativistic mechanics from classical mechanics; however, a number of modern sources do include relativistic mechanics, which in their view represents classical mechanics in its most developed and accurate form. Later, more abstract and general methods were developed, leading to reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. These advances were largely made in the 18th and 19th centuries, and they extend substantially beyond Newton's work, particularly through their use of analytical mechanics. The following introduces the basic concepts of classical mechanics. For simplicity, it often models real-world objects as point particles. The motion of a point particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn. In reality, the kind of objects that classical mechanics can describe always have a non-zero size.
Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom. However, the results for point particles can be used to study such objects by treating them as composite objects, made of a large number of collectively acting point particles. The center of mass of a composite object behaves like a point particle. Classical mechanics uses common-sense notions of how matter and forces exist and interact. It assumes that matter and energy have definite, knowable attributes such as location in space and speed. Non-relativistic mechanics also assumes that forces act instantaneously. The position of a point particle is defined with respect to a fixed reference point in space called the origin O. A simple coordinate system might describe the position of a point P by means of a vector from O to P, designated r. In general, the point particle need not be stationary relative to O, so that r is a function of t, the time
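To make the idea of r as a function of t concrete, here is a small sketch (the trajectory is a hypothetical example of my own, not from the article) that recovers a particle's velocity from its position function by a central finite difference:

```python
def position(t):
    """Hypothetical planar trajectory r(t) = (t, t**2) of a point particle."""
    return (t, t ** 2)

def velocity(t, h=1e-6):
    """Central-difference approximation of the derivative dr/dt."""
    (x0, y0), (x1, y1) = position(t - h), position(t + h)
    return ((x1 - x0) / (2 * h), (y1 - y0) / (2 * h))

# At t = 1 the exact velocity is (1, 2), since dr/dt = (1, 2t).
vx, vy = velocity(1.0)
```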
Classical mechanics
–
Sir
Isaac Newton (1643–1727), an influential figure in the history of physics and whose
three laws of motion form the basis of classical mechanics
Classical mechanics
–
Diagram of orbital motion of a satellite around the earth, showing perpendicular velocity and acceleration (force) vectors.
Classical mechanics
–
Hamilton 's greatest contribution is perhaps the reformulation of
Newtonian mechanics, now called
Hamiltonian mechanics.
75.
Lagrangian mechanics
–
Lagrangian mechanics is a reformulation of classical mechanics, introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in 1788. No new physics is introduced in Lagrangian mechanics compared to Newtonian mechanics. Newton's laws can include non-conservative forces like friction; however, they must include constraint forces explicitly and are best suited to Cartesian coordinates. Lagrangian mechanics is ideal for systems with conservative forces and for bypassing constraint forces in any coordinate system. Dissipative and driven forces can be accounted for by splitting the external forces into a sum of potential and non-potential forces, leading to a set of modified Euler–Lagrange equations. Generalized coordinates can be chosen for convenience, to exploit symmetries in the system or the geometry of the constraints. Lagrangian mechanics also reveals conserved quantities and their symmetries in a direct way, as a special case of Noether's theorem. Lagrangian mechanics is important not just for its broad applications, but also for its role in advancing deep understanding of physics. It can also be applied to other systems by analogy, for instance to coupled electric circuits with inductances and capacitances. Lagrangian mechanics is widely used to solve mechanical problems in physics. Lagrangian mechanics applies to the dynamics of particles, while fields are described using a Lagrangian density. Lagrange's equations are also used in optimisation problems of dynamic systems. In mechanics, Lagrange's equations of the second kind are used much more than those of the first kind. Suppose we have a bead sliding around on a wire, or a swinging simple pendulum. For each such system a convenient set of independent generalized coordinates can be chosen that completely characterizes the possible motion (the distance along the wire, or the pendulum angle). This choice eliminates the need for the constraint force to enter into the resultant system of equations; there are fewer equations since one is not directly calculating the influence of the constraint on the particle at a given moment. For a wide variety of physical systems, if the size and shape of a massive object are negligible, it is a useful simplification to treat it as a point particle.
For a system of N point particles with masses m1, m2, …, mN, each particle has a position vector, denoted r1, r2, …, rN. Cartesian coordinates are often sufficient, so r1 = (x1, y1, z1), r2 = (x2, y2, z2), and so on. In three-dimensional space, each position vector requires three coordinates to uniquely define the location of a point, so there are 3N coordinates to uniquely define the configuration of the system. These are all specific points in space to locate the particles. The velocity of each particle is how fast the particle moves along its path of motion, and is the time derivative of its position. In Newtonian mechanics, the equations of motion are given by Newton's laws. The second law, net force equals mass times acceleration, ΣF = m d²r/dt², applies to each particle. For an N-particle system in three dimensions, there are 3N second-order ordinary differential equations in the positions of the particles to solve for
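The equations of the second kind referred to above are the Euler–Lagrange equations. For a Lagrangian L = T − V in generalized coordinates q_j, they read as follows, with the simple pendulum as a standard worked example (a sketch I am adding for illustration, not part of the original article):

```latex
% Euler--Lagrange equations of the second kind, one per generalized coordinate q_j:
\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial \dot{q}_j}\right)
  - \frac{\partial L}{\partial q_j} = 0

% Worked example: simple pendulum of length l and mass m, with angle \theta
% from the vertical as the single generalized coordinate. Kinetic minus
% potential energy:
L = \tfrac{1}{2} m l^2 \dot{\theta}^2 + m g l \cos\theta

% Substituting q_1 = \theta gives the familiar equation of motion, with no
% rod tension (constraint force) appearing anywhere:
m l^2 \ddot{\theta} + m g l \sin\theta = 0
\quad\Longrightarrow\quad
\ddot{\theta} = -\frac{g}{l}\sin\theta
```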
Lagrangian mechanics
–
Joseph-Louis Lagrange (1736—1813)
Lagrangian mechanics
–
Isaac Newton (1642—1726)
Lagrangian mechanics
–
Jean d'Alembert (1717—1783)
76.
Wave
–
In physics, a wave is an oscillation accompanied by a transfer of energy that travels through a medium. Frequency refers to the number of wave cycles per unit of time. Wave motion transfers energy from one point to another, displacing particles of the transmission medium with little or no associated mass transport. Waves consist, instead, of oscillations or vibrations around almost fixed locations. There are two main types of waves. Mechanical waves propagate through a medium, and the substance of this medium is deformed; restoring forces then reverse the deformation. For example, sound waves propagate via air molecules colliding with their neighbors. When the molecules collide, they also bounce away from each other. This keeps the molecules from continuing to travel in the direction of the wave. The second main type, electromagnetic waves, do not require a medium. Instead, they consist of periodic oscillations of electric and magnetic fields generated by charged particles. These types vary in wavelength, and include radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation and X-rays. Waves are described by a wave equation which sets out how the disturbance proceeds over time; the mathematical form of this equation varies depending on the type of wave. Further, the behavior of particles in quantum mechanics is described by waves. In addition, gravitational waves also travel through space, as a result of a vibration or movement in gravitational fields. While mechanical waves can be transverse and longitudinal, all electromagnetic waves are transverse in free space. A single, all-encompassing definition for the term wave is not straightforward. A vibration can be defined as a back-and-forth motion around a reference value; however, a vibration is not necessarily a wave. An attempt to define the necessary and sufficient characteristics that qualify a phenomenon as a wave results in a blurred line.
The term wave is often understood as referring to a transport of spatial disturbances that are generally not accompanied by a motion of the medium occupying this space as a whole. In a wave, the energy of a vibration is moving away from the source in the form of a disturbance within the surrounding medium. It may appear that the description of waves is closely related to their physical origin for each specific instance of a wave process. For example, acoustics is distinguished from optics in that sound waves are related to a mechanical rather than an electromagnetic wave transfer caused by vibration. Concepts such as mass, momentum, inertia, or elasticity therefore become crucial in describing acoustic wave processes. This difference in origin introduces certain wave characteristics particular to the properties of the medium involved
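The wave equation mentioned above takes its simplest form in one dimension, u_tt = v² u_xx. The Python sketch below (illustrative only, with an assumed waveform and wave speed) verifies numerically that a travelling sinusoid satisfies it:

```python
import math

def u(x, t, v=2.0):
    """A rightward-travelling waveform u(x, t) = sin(x - v*t)."""
    return math.sin(x - v * t)

def second_diff(f, a, h=1e-4):
    """Central second difference, approximating f''(a)."""
    return (f(a + h) - 2.0 * f(a) + f(a - h)) / h ** 2

# Check u_tt = v**2 * u_xx at a sample point (x0, t0).
x0, t0, v = 0.3, 0.7, 2.0
u_tt = second_diff(lambda t: u(x0, t, v), t0)
u_xx = second_diff(lambda x: u(x, t0, v), x0)
assert abs(u_tt - v ** 2 * u_xx) < 1e-4
```

Any sufficiently smooth function of the single argument x − vt passes the same check, which is d'Alembert's observation about travelling-wave solutions.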
Wave
–
Surface waves in
water
Wave
–
Wavelength λ, can be measured between any two corresponding points on a waveform
Wave
–
Light beam exhibiting reflection, refraction, transmission and dispersion when encountering a prism
77.
Field (physics)
–
In physics, a field is a physical quantity, typically a number or tensor, that has a value for each point in space and time. For example, on a weather map, the surface wind velocity is described by assigning a vector to each point on the map. Each vector represents the speed and direction of the movement of air at that point. As another example, an electric field can be thought of as a condition in space emanating from an electric charge and extending throughout the whole of space. When a test electric charge is placed in this electric field, the particle accelerates due to a force. Physicists have found the notion of a field to be of such practical utility for the analysis of forces that they have come to think of a force as due to a field. In the modern framework of the quantum theory of fields, even without referring to a test particle, a field occupies space and contains energy. This led physicists to consider electromagnetic fields to be a physical entity; the fact that the electromagnetic field can possess momentum and energy makes it very real. A particle makes a field, and a field acts on another particle. In practice, the strength of most fields has been found to diminish with distance to the point of being undetectable. One consequence is that the Earth's gravitational field quickly becomes undetectable on cosmic scales. A field has a unique tensorial character in every point where it is defined, i.e. a field cannot be a scalar field somewhere and a vector field somewhere else. For example, the Newtonian gravitational field is a vector field. Moreover, within each category, a field can be either a classical field or a quantum field, depending on whether it is characterized by numbers or quantum operators respectively. In fact in this theory an equivalent representation of a field is a field particle. To Isaac Newton, his law of universal gravitation simply expressed the gravitational force that acted between any pair of massive objects.
In the eighteenth century, a new quantity was devised to simplify the bookkeeping of all these gravitational forces. This quantity, the gravitational field, gave at each point in space the total gravitational force which would be felt by an object with unit mass at that point. The development of the independent concept of a field began in the nineteenth century with the development of the theory of electromagnetism. In the early stages, André-Marie Ampère and Charles-Augustin de Coulomb could manage with Newton-style laws that expressed the forces between pairs of electric charges or electric currents. However, it became more natural to take the field approach and express these laws in terms of electric and magnetic fields. The independent nature of the field became more apparent with James Clerk Maxwell's discovery that waves in these fields propagated at a finite speed. Maxwell, at first, did not adopt the modern concept of a field as a fundamental quantity that could independently exist. Instead, he supposed that the field expressed the deformation of some underlying medium, the luminiferous aether, much like the tension in a rubber membrane. If that were the case, the observed velocity of the electromagnetic waves should depend upon the velocity of the observer with respect to the aether. Despite much effort, no experimental evidence of such an effect was ever found
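As a quick numerical illustration (my own addition, using the standard value of the gravitational constant), the gravitational field of a point mass falls off as the inverse square of distance, which is why it becomes undetectable far away:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def grav_field(M, r):
    """Magnitude of the Newtonian gravitational field g = G*M / r**2,
    i.e. the force per unit test mass at distance r from a point mass M."""
    return G * M / r ** 2

# At the Earth's surface (M ~ 5.972e24 kg, r ~ 6.371e6 m) this gives ~9.8 m/s^2;
# doubling the distance weakens the field by a factor of four.
g_surface = grav_field(5.972e24, 6.371e6)
assert abs(grav_field(5.972e24, 2 * 6.371e6) - g_surface / 4) < 1e-12
```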
Field (physics)
–
Illustration of the electric field surrounding a positive (red) and a negative (blue) charge.
78.
Gravity
–
Gravity, or gravitation, is a natural phenomenon by which all things with mass are brought toward one another, including planets, stars and galaxies. Since energy and mass are equivalent, all forms of energy, including light, also cause gravitation and are under its influence. On Earth, gravity gives weight to physical objects and causes the ocean tides. Gravity has an infinite range, although its effects become increasingly weaker on farther objects. Gravity is most accurately described by the general theory of relativity, which describes gravity not as a force but as a consequence of the curvature of spacetime caused by the uneven distribution of mass. The most extreme example of this curvature of spacetime is a black hole, from which nothing can escape once past its event horizon. More gravity results in gravitational time dilation, where time lapses more slowly at a lower gravitational potential. Gravity is the weakest of the four fundamental interactions of nature. The gravitational attraction is approximately 10^38 times weaker than the strong force, 10^36 times weaker than the electromagnetic force and 10^29 times weaker than the weak force. As a consequence, gravity has a negligible influence on the behavior of subatomic particles. On the other hand, gravity is the dominant interaction at the macroscopic scale. For this reason, in part, the pursuit of a theory of everything, the merging of the general theory of relativity and quantum mechanics into quantum gravity, has become an area of active research. While the modern European thinkers are credited with the development of gravitational theory, some of the earliest descriptions came from early mathematician-astronomers, such as Aryabhata, who identified the force of gravity to explain why objects do not fall off the Earth as it rotates. Later, the works of Brahmagupta referred to the presence of this force and described it as an attractive force. Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries. This was a major departure from Aristotle's belief that heavier objects have a higher gravitational acceleration.
Galileo postulated air resistance as the reason that objects with less mass may fall more slowly in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity. In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Calculations by both John Couch Adams and Urbain Le Verrier predicted the general position of the planet. A discrepancy in Mercury's orbit pointed out flaws in Newton's theory. The issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit. The simplest way to test the equivalence principle is to drop two objects of different masses or compositions in a vacuum and see whether they hit the ground at the same time. Such experiments demonstrate that all objects fall at the same rate when other forces are negligible
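The drop test described above has a simple closed form. In the sketch below (illustrative only, assuming uniform gravity and no air resistance), the fall time depends on the height but not on the mass:

```python
import math

def fall_time(h, g=9.81):
    """Time to fall height h from rest in vacuum: t = sqrt(2*h/g).

    The mass of the object does not appear anywhere -- this is the
    equivalence principle at work: all bodies fall at the same rate.
    """
    return math.sqrt(2.0 * h / g)

# Falling 4.905 m takes 1 s with g = 9.81 m/s^2, for any mass or composition.
assert abs(fall_time(4.905) - 1.0) < 1e-12
```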
Gravity
–
Sir Isaac Newton, an English physicist who lived from 1642 to 1727
Gravity
–
Two-dimensional analogy of spacetime distortion generated by the mass of an object. Matter changes the geometry of spacetime, this (curved) geometry being interpreted as gravity. White lines do not represent the curvature of space but instead represent the coordinate system imposed on the curved spacetime, which would be rectilinear in a flat spacetime.
Gravity
–
Ball falling freely under gravity. See text for description.
79.
Geometrical optics
–
Geometrical optics, or ray optics, describes light propagation in terms of rays. The ray in geometrical optics is an abstraction, or "instrument", used to approximate the paths along which light propagates. Geometrical optics does not account for certain optical effects such as diffraction and interference. This simplification is useful in practice; it is an excellent approximation when the wavelength is small compared to the size of structures with which the light interacts. The techniques are particularly useful in describing geometrical aspects of imaging, including optical aberrations. A light ray is a line or curve that is perpendicular to the light's wavefronts. Geometrical optics is often simplified by making the paraxial approximation, or small angle approximation; the mathematical behavior then becomes linear, allowing optical components and systems to be described by simple matrices. Glossy surfaces such as mirrors reflect light in a simple, predictable way. This allows for production of reflected images that can be associated with an actual or extrapolated location in space. With such surfaces, the direction of the reflected ray is determined by the angle the incident ray makes with the surface normal, and the incident and reflected rays lie in a single plane. This is known as the law of reflection. For flat mirrors, the law of reflection implies that images of objects are upright and the same distance behind the mirror as the objects are in front of the mirror, and the image size is the same as the object size. The law also implies that mirror images are parity inverted, which is perceived as a left-right inversion. Mirrors with curved surfaces can be modeled by ray tracing, using the law of reflection at each point on the surface. For mirrors with parabolic surfaces, parallel rays incident on the mirror produce reflected rays that converge at a common focus. Other curved surfaces may also focus light, but with aberrations due to the diverging shape causing the focus to be smeared out in space. 
In particular, spherical mirrors exhibit spherical aberration. Curved mirrors can form images with magnification greater than or less than one, and the image can be upright or inverted. An upright image formed by reflection in a mirror is always virtual, while an inverted image is real. Refraction occurs when light travels through a region of space that has a changing index of refraction. The simplest case of refraction occurs at an interface between a uniform medium with index of refraction n1 and another medium with index of refraction n2; Snell's law describes the resulting change in the ray's direction. When light passes from a medium of higher index toward one of lower index at a sufficiently steep angle of incidence, it is completely reflected at the interface; this phenomenon is called total internal reflection and allows for fiber optics technology. As light signals travel down a fiber optic cable, they undergo total internal reflection, allowing for essentially no light to be lost over the length of the cable.
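Snell's law and the critical angle behind total internal reflection can be computed directly; the indices below (glass ~1.5, air ~1.0) are illustrative values, not taken from the text.

```python
import math

def refraction_angle(n1, n2, incident_deg):
    """Refraction angle in degrees via Snell's law n1*sin(t1) = n2*sin(t2),
    or None when no transmitted ray exists (total internal reflection)."""
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

def critical_angle(n1, n2):
    """Smallest incidence angle giving total internal reflection (requires n1 > n2)."""
    return math.degrees(math.asin(n2 / n1))

print(critical_angle(1.5, 1.0))        # ~41.8 degrees for a glass-air interface
print(refraction_angle(1.5, 1.0, 60))  # None: beyond the critical angle
```

Rays inside an optical fiber strike the core-cladding boundary beyond this critical angle, which is why essentially no light escapes along the cable.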
Geometrical optics
–
As light travels through space, it oscillates in amplitude. In this image, each maximum amplitude crest is marked with a plane to illustrate the wavefront. The ray is the arrow perpendicular to these parallel surfaces.
80.
Quantum field theory
–
QFT treats particles as excited states of the underlying physical field, so these are called field quanta. In quantum field theory, quantum mechanical interactions among particles are described by interaction terms among the corresponding underlying quantum fields. These interactions are conveniently visualized by Feynman diagrams, which are a formal tool of relativistically covariant perturbation theory, serving to evaluate particle processes. The first achievement of quantum field theory, namely quantum electrodynamics, is still the paradigmatic example of a successful quantum field theory. Ordinarily, quantum mechanics cannot give an account of photons, which constitute the prime case of relativistic particles; since photons have rest mass zero, and correspondingly travel in the vacuum at the speed c, a non-relativistic theory such as ordinary QM cannot give even an approximate description. Photons are implicit in the emission and absorption processes which have to be postulated; for instance, the formalism of QFT is needed for an explicit description of photons. In fact most topics in the early development of quantum theory were related to the interaction of radiation and matter. However, quantum mechanics as formulated by Dirac, Heisenberg, and Schrödinger in 1926–27 started from atomic spectra. As soon as the conceptual framework of quantum mechanics was developed, a small group of theoreticians tried to extend quantum methods to electromagnetic fields; a good example is the paper by Born, Jordan & Heisenberg. The basic idea was that in QFT the electromagnetic field should be represented by matrices in the same way that position and momentum are represented in QM. The ideas of QM were thus extended to systems having an infinite number of degrees of freedom. The inception of QFT is usually considered to be Dirac's famous 1927 paper on "The quantum theory of the emission and absorption of radiation"; here Dirac coined the name quantum electrodynamics for the part of QFT that was developed first. 
Employing the theory of the harmonic oscillator, Dirac gave a theoretical description of how photons appear in the quantization of the electromagnetic radiation field; later, Dirac's procedure became a model for the quantization of other fields as well. These first approaches to QFT were further developed during the following three years. P. Jordan introduced creation and annihilation operators for fields obeying Fermi–Dirac statistics; these differ from the corresponding operators for Bose–Einstein statistics in that the former satisfy anti-commutation relations while the latter satisfy commutation relations. The methods of QFT could be applied to derive equations resulting from the quantum mechanical treatment of particles, e.g. the Dirac equation and the Klein–Gordon equation. Schweber points out that the idea and procedure of second quantization go back to Jordan, in a number of papers from 1927. Some difficult problems concerning commutation relations, statistics, and Lorentz invariance were eventually solved. The first comprehensive account of a theory of quantum fields, in particular
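The creation and annihilation operators mentioned above can be illustrated with a toy numerical sketch (this is an illustration in a truncated basis, not Dirac's or Jordan's original formalism): build the bosonic operators in a finite Fock basis and check the canonical commutation relation [a, a†] = 1, which truncation spoils only in the highest basis state.

```python
import numpy as np

N = 8                                        # truncated Fock-space dimension
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation: a|n> = sqrt(n)|n-1>
a_dag = a.T                                  # creation: a_dag|n> = sqrt(n+1)|n+1>

# Commutator [a, a_dag]; equals the identity except in the top truncated state.
commutator = a @ a_dag - a_dag @ a
print(np.round(commutator[:N-1, :N-1]))      # identity on the low-lying states

# The number operator a_dag a has eigenvalues 0, 1, 2, ...: the photon
# occupation numbers of a quantized radiation-field mode.
number_op = a_dag @ a
print(np.diag(number_op))
```

For Fermi–Dirac fields one would instead impose anti-commutation relations, as Jordan did; the bosonic case shown here is the one underlying Dirac's harmonic-oscillator quantization of radiation.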
Quantum field theory
81.
Theory of relativity
–
The theory of relativity usually encompasses two interrelated theories by Albert Einstein: special relativity and general relativity. Special relativity applies to elementary particles and their interactions, describing all their physical phenomena except gravity. General relativity explains the law of gravitation and its relation to other forces of nature, and it applies to the cosmological and astrophysical realm, including astronomy. The theory transformed theoretical physics and astronomy during the 20th century. It introduced concepts including spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions; with relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves. Einstein published the theory of special relativity in 1905; Max Planck, Hermann Minkowski and others did subsequent work. Einstein developed general relativity between 1907 and 1915, with contributions by many others after 1915, and the final form of general relativity was published in 1916. The term "theory of relativity" was based on the expression "relative theory" used in 1906 by Planck, who emphasized how the theory uses the principle of relativity. In the discussion section of the same paper, Alfred Bucherer used for the first time the expression "theory of relativity". By the 1920s, the physics community understood and accepted special relativity. It rapidly became a significant and necessary tool for theorists and experimentalists in the new fields of atomic physics, nuclear physics, and quantum mechanics. By comparison, general relativity did not appear to be as useful: it seemed to offer little potential for experimental test, as most of its assertions were on an astronomical scale. The mathematics of general relativity seemed difficult and fully understandable only by a small number of people. 
Around 1960, general relativity became central to physics and astronomy; new mathematical techniques applied to general relativity streamlined calculations and made its concepts more easily visualized. Special relativity is a theory of the structure of spacetime; it was introduced in Einstein's 1905 paper "On the Electrodynamics of Moving Bodies". Special relativity is based on two postulates which are contradictory in classical mechanics: the laws of physics are the same for all observers in uniform motion relative to one another, and the speed of light in a vacuum is the same for all observers. The resultant theory copes with experiment better than classical mechanics; for instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences. Some of these are: relativity of simultaneity (two events, simultaneous for one observer, may not be simultaneous for another observer in relative motion) and time dilation (moving clocks are measured to tick more slowly than an observer's stationary clock).
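The time dilation just described is governed by the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2); the 0.6c example speed below is illustrative.

```python
import math

C = 299_792_458.0   # speed of light in vacuum, m/s

def lorentz_factor(v: float) -> float:
    """Lorentz factor gamma for a clock moving at speed v."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A clock moving at 60% of the speed of light is measured to tick
# slower by a factor of 1.25: one of its seconds spans 1.25 seconds
# on the stationary observer's clock.
print(lorentz_factor(0.6 * C))   # 1.25
```

At everyday speeds gamma is indistinguishable from 1, which is why classical mechanics works so well as a low-velocity approximation.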
Theory of relativity
–
USSR stamp dedicated to Albert Einstein
82.
Special relativity
–
In physics, special relativity is the generally accepted and experimentally well-confirmed physical theory regarding the relationship between space and time. In Albert Einstein's original pedagogical treatment, it is based on two postulates: the laws of physics are invariant in all inertial systems, and the speed of light in a vacuum is the same for all observers, regardless of the motion of the light source. It was originally proposed in 1905 by Albert Einstein in the paper "On the Electrodynamics of Moving Bodies". As of today, special relativity is the most accurate model of motion at any speed. Even so, the Newtonian mechanics model remains useful as an approximation at small velocities relative to the speed of light. Not until Einstein developed general relativity, to incorporate general frames of reference and gravity, was the term "special relativity" employed; a translation that has often been used is "restricted relativity", since "special" really means "special case". Special relativity replaced the notion of an absolute universal time with the notion of a time that is dependent on reference frame; rather than an invariant time interval between two events, there is an invariant spacetime interval. A defining feature of special relativity is the replacement of the Galilean transformations of Newtonian mechanics with the Lorentz transformations. Time and space cannot be defined separately from each other; rather, space and time are interwoven into a single continuum known as spacetime. Events that occur at the same time for one observer can occur at different times for another. The theory is "special" in that it applies only in the special case where the curvature of spacetime due to gravity is negligible; in order to include gravity, Einstein formulated general relativity in 1915. Special relativity, contrary to some outdated descriptions, is capable of handling accelerations as well as accelerated frames of reference. 
In conditions of free fall, a locally Lorentz-invariant frame that abides by special relativity can be defined at sufficiently small scales, even in curved spacetime. Galileo Galilei had already postulated that there is no absolute and well-defined state of rest; Einstein extended this principle so that it accounted for the constant speed of light, a phenomenon that had been observed in the Michelson–Morley experiment. He also postulated that it holds for all the laws of physics. Einstein discerned two fundamental propositions that seemed to be the most assured, regardless of the exact validity of the known laws of either mechanics or electrodynamics. These propositions were the constancy of the speed of light and the independence of physical laws from the choice of inertial system. The first is the Principle of Invariant Light Speed: "Light is always propagated in empty space with a definite velocity c which is independent of the state of motion of the emitting body." That is, light in vacuum propagates with the speed c in at least one system of inertial coordinates. Following Einstein's original presentation of special relativity in 1905, many different sets of postulates have been proposed in various alternative derivations; however, the most common set of postulates remains those employed by Einstein in his original paper.
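The invariant spacetime interval mentioned above can be checked numerically: under a Lorentz boost the coordinates (t, x) of an event change, but s^2 = (ct)^2 - x^2 does not. Units with c = 1 are used for clarity, and the event coordinates and boost velocity are illustrative.

```python
import math

def boost(t, x, v):
    """Lorentz boost along x with velocity v, in units where c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

def interval(t, x):
    """Spacetime interval s^2 = t^2 - x^2 (c = 1)."""
    return t * t - x * x

t, x = 5.0, 3.0
t2, x2 = boost(t, x, 0.8)        # view the same event from a frame moving at 0.8c
print(interval(t, x), interval(t2, x2))   # both 16.0 up to rounding
```

The individual coordinates change dramatically (here t goes from 5.0 to about 4.33), yet the interval is preserved, which is exactly what replaces the Galilean notion of an invariant time interval.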
Special relativity
–
Albert Einstein around 1905, the year his "Annus Mirabilis papers" – which included Zur Elektrodynamik bewegter Körper, the paper founding special relativity – were published.
83.
Computational physics
–
Computational physics is the study and implementation of numerical analysis to solve problems in physics for which a quantitative theory already exists. Historically, computational physics was the first application of modern computers in science. In physics, different theories based on mathematical models provide very precise predictions of how systems behave. Unfortunately, it is often the case that solving the mathematical model for a particular system in order to produce a useful prediction is not feasible. This can occur, for instance, when the solution does not have a closed-form expression; in such cases, numerical approximations are required. There is a debate about the status of computation within the scientific method: while computers can be used in experiments for the measurement and recording of data, this clearly does not constitute a computational approach. Physics problems are in general very difficult to solve exactly. This is due to several reasons: lack of algebraic and/or analytic solvability, complexity, and chaos. On the more advanced side, mathematical perturbation theory is also sometimes used. In addition, the computational cost and computational complexity for many-body problems tend to grow quickly: a macroscopic system typically has a size of the order of 10^23 constituent particles, so direct simulation is somewhat of a problem. Solving quantum mechanical problems is generally of exponential order in the size of the system. Because computational physics covers a broad class of problems, it is generally divided amongst the different mathematical problems it numerically solves, or the methods it applies. Furthermore, computational physics encompasses the tuning of the software and hardware structure to solve the problems. It is possible to find a corresponding computational branch for every major field in physics, for example computational mechanics, which consists of computational fluid dynamics (CFD) and computational solid mechanics. 
One subfield at the confluence between CFD and electromagnetic modelling is computational magnetohydrodynamics. The quantum many-body problem leads naturally to the large and rapidly growing field of computational chemistry. Computational solid state physics is an important division of computational physics dealing directly with materials science. A field related to computational condensed matter is computational statistical mechanics; computational statistical physics makes heavy use of Monte Carlo-like methods. More broadly, it concerns itself with problems in the social sciences, network theory, and mathematical models for the propagation of disease. Computational astrophysics is the application of these techniques and methods to astrophysical problems.
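A minimal sketch of the Monte Carlo-style stochastic sampling used in computational statistical physics: estimate pi by drawing random points in the unit square and counting the fraction that land inside the quarter circle. The sample count and seed are arbitrary choices for reproducibility.

```python
import random

def estimate_pi(n_samples: int, seed: int = 42) -> float:
    """Monte Carlo estimate of pi from uniform samples in the unit square."""
    rng = random.Random(seed)          # fixed seed for a reproducible run
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:       # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / n_samples    # area ratio pi/4, rescaled

print(estimate_pi(100_000))            # close to 3.14159...
```

The statistical error shrinks only as 1/sqrt(n), which is characteristic of Monte Carlo methods and explains their cost for high-precision work, but they scale well to the high-dimensional integrals of statistical mechanics.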
84.
Condensed matter physics
–
Condensed matter physics is a branch of physics that deals with the physical properties of condensed phases of matter, in which particles adhere to each other. Condensed matter physicists seek to understand the behavior of these phases by using physical laws; in particular, these include the laws of quantum mechanics, electromagnetism and statistical mechanics. The field overlaps with chemistry, materials science, and nanotechnology, and the theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics. A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc. were treated as distinct areas until the 1940s, when they were grouped together as solid state physics. Around the 1960s, the study of the physical properties of liquids was added to this list, forming the basis for the new specialty of condensed matter physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. References to the "condensed" state can be traced to earlier sources; as one author put it, "as a matter of fact, it would be more correct to unify them under the title of condensed bodies". One of the first studies of condensed states of matter was by the English chemist Humphry Davy. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility and high electrical and thermal conductivity. This indicated that the atoms in John Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure. Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen, could be liquefied under the right conditions and would then behave as metals. In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all known gaseous elements except for nitrogen and hydrogen. By 1908, James Dewar and Heike Kamerlingh Onnes were able to liquefy hydrogen and the then newly discovered helium. 
Paul Drude in 1900 proposed the first theoretical model for an electron moving through a metallic solid; Drude's model described the properties of metals in terms of a gas of free electrons. A few years later, in 1911, Heike Kamerlingh Onnes discovered superconductivity in mercury; the phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades. Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld, and Felix Bloch. Pauli realized that the free electrons in a metal must obey Fermi–Dirac statistics; using this idea, he developed the theory of paramagnetism in 1926. Shortly after, Sommerfeld incorporated the Fermi–Dirac statistics into the free electron model and made it better able to explain the heat capacity. Two years later, Bloch used quantum mechanics to describe the motion of an electron in a periodic lattice. Magnetism as a property of matter has been known in China since 4000 BC. Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials. In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets.
Condensed matter physics
–
Heike Kamerlingh Onnes and Johannes van der Waals with the helium "liquefactor" in Leiden (1908)
Condensed matter physics
–
A replica of the first point-contact transistor in Bell Labs
Condensed matter physics
–
Computer simulation of "nanogears" made of fullerene molecules. It is hoped that advances in nanoscience will lead to machines working on the molecular scale.
85.
Mesoscopic physics
–
Disambiguation: this page refers to the sub-discipline of condensed matter physics, not the branch of meteorology concerned with the study of weather systems smaller than synoptic scale systems. Mesoscopic physics is a subdiscipline of condensed matter physics that deals with materials of an intermediate length scale. The scale of these materials can be described as being between the size of a quantity of atoms and that of materials measuring micrometres; the lower limit can also be defined as being the size of individual atoms. At the micrometre level are bulk materials. Both mesoscopic and macroscopic objects contain a large number of atoms; in other words, a macroscopic device, when scaled down to a meso-size, starts revealing quantum mechanical properties. For example, at the macroscopic level the conductance of a wire increases continuously with its diameter; however, at the mesoscopic level, the wire's conductance is quantized. The applied science of mesoscopic physics deals with the potential of building nanodevices. Mesoscopic physics also addresses fundamental practical problems which occur when a macroscopic object is miniaturized. The physical properties of materials change as their size approaches the nanoscale. For bulk materials larger than one micrometre, the percentage of atoms at the surface is insignificant in relation to the number of atoms in the entire material. The subdiscipline has dealt primarily with artificial structures of metal or semiconducting material which have been fabricated by the techniques employed for producing microelectronic circuits. There is no rigid definition for mesoscopic physics, but the systems studied are normally in the range of 100 nm to 1000 nm; 100 nanometers is the approximate upper limit for a nanoparticle. Thus, mesoscopic physics has a close connection to the fields of nanofabrication and nanotechnology. Devices used in nanotechnology are examples of mesoscopic systems. Three categories of new phenomena in such systems are interference effects, quantum confinement effects and charging effects. 
Quantum confinement effects describe electrons in terms of energy levels, potential wells, valence bands, and conduction bands. Electrons in bulk material can be described by energy bands or electron energy levels; electrons exist at different energy levels or bands. In bulk materials these energy levels are described as continuous, because the difference in energy between them is negligible.
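Why those level spacings are negligible in bulk but not at the nanoscale can be illustrated with the textbook particle-in-a-box formula E_n = n^2 h^2 / (8 m L^2); the box widths below are illustrative, chosen to contrast the nanometre and micrometre regimes.

```python
H = 6.626e-34    # Planck constant, J s
M_E = 9.109e-31  # electron mass, kg
EV = 1.602e-19   # joules per electronvolt

def level_spacing_ev(L: float) -> float:
    """Gap E_2 - E_1 (in eV) for an electron confined to a 1D box of width L metres."""
    e1 = H**2 / (8 * M_E * L**2)   # ground-state energy E_1
    return (2**2 - 1**2) * e1 / EV # E_2 - E_1 = 3 * E_1

print(level_spacing_ev(1e-9))   # ~1.1 eV for a 1 nm box: clearly quantized
print(level_spacing_ev(1e-6))   # ~1e-6 eV for a 1 um box: effectively continuous
```

Because the spacing scales as 1/L^2, shrinking the confinement from a micrometre to a nanometre widens the gaps by a factor of a million, turning a quasi-continuous band into discrete levels.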
86.
Solid-state physics
–
Solid-state physics is the study of rigid matter, or solids, through methods such as quantum mechanics, crystallography, electromagnetism, and metallurgy. It is the largest branch of condensed matter physics. Solid-state physics studies how the large-scale properties of solid materials result from their atomic-scale properties; thus, solid-state physics forms a theoretical basis of materials science. It also has direct applications, for example in the technology of transistors and semiconductors. Solid materials are formed from densely packed atoms, which interact intensely. These interactions produce the mechanical, thermal, electrical, magnetic and optical properties of solids, which vary depending on the material involved and the conditions in which it was formed. The bulk of solid-state physics, as a general theory, is focused on crystals. Primarily, this is because the periodicity of atoms in a crystal (its defining characteristic) facilitates mathematical modeling. Likewise, crystalline materials often have electrical, magnetic, optical, or mechanical properties that can be exploited for engineering purposes. The forces between the atoms in a crystal can take a variety of forms. For example, a crystal of sodium chloride is made up of ionic sodium and chloride ions held together with ionic bonds. In others, the atoms share electrons and form covalent bonds. In metals, electrons are shared amongst the whole crystal in metallic bonding. Finally, the noble gases do not undergo any of these types of bonding; in solid form, the noble gases are held together with van der Waals forces resulting from the polarisation of the electronic charge cloud on each atom. The differences between the types of solid result from the differences between their bonding. The Division of Solid State Physics (DSSP) of the American Physical Society catered to industrial physicists, and solid-state physics became associated with the technological applications made possible by research on solids. 
By the early 1960s, the DSSP was the largest division of the American Physical Society. Large communities of solid state physicists also emerged in Europe after World War II, in particular in England, Germany, and the Soviet Union. In the United States and Europe, solid state became a prominent field through its investigations into semiconductors, superconductivity, and nuclear magnetic resonance. Today, solid-state physics is broadly considered to be the subfield of condensed matter physics that focuses on the properties of solids with regular crystal lattices. Many properties of materials are affected by their crystal structure, and this structure can be investigated using a range of crystallographic techniques, including X-ray crystallography, neutron diffraction and electron diffraction. The sizes of the individual crystals in a crystalline solid material vary depending on the material involved. Real crystals feature defects or irregularities in the ideal arrangements. Properties of materials such as electrical conduction and heat capacity are investigated by solid state physics. An early model of electrical conduction was the Drude model, which applied kinetic theory to the electrons in a solid.
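The Drude model just mentioned predicts a DC conductivity sigma = n e^2 tau / m for a free-electron gas. A rough sketch with illustrative copper-like parameters (the relaxation time tau here is an assumed textbook-scale value, not a measured one):

```python
E = 1.602e-19    # elementary charge, C
M_E = 9.109e-31  # electron mass, kg

n = 8.5e28       # conduction-electron density of copper, m^-3 (approximate)
tau = 2.5e-14    # assumed mean time between electron collisions, s

# Drude DC conductivity: sigma = n * e^2 * tau / m
sigma = n * E**2 * tau / M_E
print(sigma)     # ~6e7 S/m, the right order of magnitude for copper
```

That the crude kinetic-theory picture lands within the right order of magnitude for a real metal is a large part of why the Drude model succeeded, even though explaining the heat capacity correctly had to wait for Sommerfeld's Fermi–Dirac treatment.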
Solid-state physics
–
An example of a simple cubic lattice
87.
Mathematical physics
–
Mathematical physics refers to the development of mathematical methods for application to problems in physics. It is a branch of applied mathematics, but deals specifically with physical problems. There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods. The first is the rigorous, abstract and advanced re-formulation of Newtonian mechanics in terms of Lagrangian mechanics and Hamiltonian mechanics; both formulations are embodied in analytical mechanics. These approaches and ideas can be, and in fact have been, extended to other areas of physics such as statistical mechanics, continuum mechanics, and classical field theory. Moreover, they have provided several examples and basic ideas in differential geometry. The theory of partial differential equations is perhaps the branch most closely associated with mathematical physics; it was developed intensively from the second half of the eighteenth century until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics, electricity, magnetism, and aerodynamics. The theory of atomic spectra developed almost concurrently with the mathematical fields of linear algebra and spectral theory. Nonrelativistic quantum mechanics includes Schrödinger operators, and it has connections to atomic and molecular physics; quantum information theory is another subspecialty. The special and general theories of relativity require a rather different type of mathematics. An important role here was played by group theory, which figured prominently in both quantum field theory and differential geometry. This was, however, gradually supplemented by topology and functional analysis in the mathematical description of cosmological as well as quantum field theory phenomena; in this area both homological algebra and category theory are important nowadays. Statistical mechanics forms a separate field, which includes the theory of phase transitions. It relies upon Hamiltonian mechanics and is related to the more mathematical ergodic theory. 
There are increasing interactions between combinatorics and physics, particularly in statistical physics. The usage of the term "mathematical physics" is sometimes idiosyncratic: certain parts of mathematics that arose from the development of physics are not, in fact, considered parts of mathematical physics. The term is sometimes used to denote research aimed at studying and solving problems inspired by physics or thought experiments within a mathematically rigorous framework.
Mathematical physics
–
An example of mathematical physics: solutions of Schrödinger's equation for quantum harmonic oscillators (left) with their amplitudes (right).
88.
Nuclear physics
–
Nuclear physics is the field of physics that studies atomic nuclei and their constituents and interactions. Other forms of nuclear matter are also studied. Nuclear physics should not be confused with atomic physics, which studies the atom as a whole, including its electrons. Discoveries in nuclear physics have led to applications in many fields; such applications are studied in the field of nuclear engineering. Particle physics evolved out of nuclear physics, and the two fields are typically taught in close association. Nuclear astrophysics, the application of nuclear physics to astrophysics, is crucial in explaining the inner workings of stars. The field began with Henri Becquerel's discovery of radioactivity in 1896; the discovery of the electron by J. J. Thomson a year later was an indication that the atom had internal structure. In the years that followed, radioactivity was extensively investigated, notably by Marie and Pierre Curie as well as by Ernest Rutherford and his collaborators. By the turn of the 20th century, physicists had also discovered three types of radiation emanating from atoms, which they named alpha, beta, and gamma radiation. Experiments by Otto Hahn in 1911 and by James Chadwick in 1914 discovered that the beta decay spectrum was continuous rather than discrete. That is, electrons were ejected from the atom with a continuous range of energies, rather than the discrete amounts of energy that were observed in gamma and alpha decays. This was a problem for physics at the time, because it seemed to indicate that energy was not conserved in these decays. The 1903 Nobel Prize in Physics was awarded jointly to Becquerel, for his discovery, and to Marie and Pierre Curie for their subsequent research into radioactivity. Rutherford was awarded the Nobel Prize in Chemistry in 1908 for his investigations into the disintegration of the elements and the chemistry of radioactive substances. In 1905 Albert Einstein formulated the idea of mass–energy equivalence. In 1906 Ernest Rutherford published "Retardation of the α Particle from Radium in passing through matter". 
Hans Geiger expanded on this work in a communication to the Royal Society, with experiments he and Rutherford had done passing alpha particles through air, aluminum foil and gold leaf. More work was published in 1909 by Geiger and Ernest Marsden, and in 1911–1912 Rutherford went before the Royal Society to explain the experiments and propound the new theory of the atomic nucleus as we now understand it. The plum pudding model had predicted that the alpha particles should come out of the foil with their trajectories being at most slightly bent. But Rutherford instructed his team to look for something that shocked him to observe: a few particles were scattered through large angles, even completely backwards in some cases. He likened it to firing a bullet at tissue paper and having it bounce off. As an example, in this model nitrogen-14 consisted of a nucleus with 14 protons and 7 nuclear electrons. The Rutherford model worked well until studies of nuclear spin were carried out by Franco Rasetti at the California Institute of Technology in 1929.
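The mass–energy equivalence formulated by Einstein in 1905, E = mc^2, underlies why nuclear processes release so much energy from tiny mass defects. A minimal sketch with an illustrative one-gram example:

```python
C = 299_792_458.0   # speed of light in vacuum, m/s

def rest_energy(m_kg: float) -> float:
    """Rest energy E = m * c^2 of a mass m in kilograms, in joules."""
    return m_kg * C**2

# Converting just one gram of mass yields roughly 9e13 joules, on the
# order of the energy released by a large fission explosion.
print(rest_energy(0.001))
```

The factor c^2 ~ 9e16 m^2/s^2 is what makes even the small mass defect of a nuclear reaction correspond to an enormous energy release.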
89.
Psychophysics
–
Psychophysics quantitatively investigates the relationship between physical stimuli and the sensations and perceptions they produce. Psychophysics also refers to a general class of methods that can be applied to study a perceptual system. Modern applications rely heavily on threshold measurement, ideal observer analysis, and signal detection theory. Psychophysics has widespread and important practical applications. For example, in the study of digital signal processing, psychophysics has informed the development of models of lossy compression; these models explain why humans perceive very little loss of signal quality when audio and video signals are compressed. Many of the techniques and theories of psychophysics were formulated in 1860 when Gustav Theodor Fechner in Leipzig published Elemente der Psychophysik. He coined the term "psychophysics", describing research intended to relate physical stimuli to the contents of consciousness, such as sensations. As a physicist and philosopher, Fechner aimed at developing a method that relates matter to the mind, connecting the publicly observable world to a person's privately experienced impressions. From this, Fechner derived his well-known logarithmic scale, now known as the Fechner scale. Weber's and Fechner's work formed one of the bases of psychology as a science. Fechner's work systematised the introspectionist approach, which had to contend with the behaviorist approach, in which even verbal responses are as physical as the stimuli. Fechner's work was studied and extended by Charles S. Peirce, who was aided by his student Joseph Jastrow. Peirce and Jastrow largely confirmed Fechner's empirical findings, but not all. In particular, an experiment of Peirce and Jastrow rejected Fechner's estimation of a threshold of perception of weights. The Peirce–Jastrow experiments were conducted as part of Peirce's application of his program to human perception; other studies considered the perception of light. Jastrow wrote the following summary: "Mr. Peirce's courses in logic gave me my first real experience of intellectual muscle." 
He borrowed the apparatus for me, which I took to my room, installed at my window, and with which the observations were made; the results were published over our joint names in the Proceedings of the National Academy of Sciences." This work clearly distinguishes observable cognitive performance from the expression of consciousness. One leading method is based on signal detection theory, developed for cases of very weak stimuli. However, the subjectivist approach persists among those in the tradition of Stanley Smith Stevens. Stevens revived the idea of a power law suggested by 19th-century researchers, in contrast with Fechner's log-linear function. He also advocated the assignment of numbers in ratio to the strengths of stimuli, and added techniques such as magnitude production and cross-modality matching. He opposed the assignment of stimulus strengths to points on a line that are labeled in order of strength; nevertheless, that sort of response has remained popular in applied psychophysics
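The two competing scales discussed above can be made concrete. Below is a minimal Python sketch, with illustrative constants, contrasting Fechner's logarithmic scale with Stevens' power law; the exponents shown are commonly cited estimates for particular modalities, assumed here for illustration rather than taken from this article:

```python
import math

def fechner(intensity, threshold, k=1.0):
    """Fechner's logarithmic scale: sensation grows with the log of
    intensity relative to the absolute threshold (k is illustrative)."""
    return k * math.log(intensity / threshold)

def stevens(intensity, exponent, k=1.0):
    """Stevens' power law: sensation proportional to intensity raised
    to a modality-specific exponent (k is illustrative)."""
    return k * intensity ** exponent

# Under Fechner's law, every doubling of the stimulus adds the same
# sensation increment (equal ratios -> equal steps):
assert abs(fechner(4, 1) - 2 * fechner(2, 1)) < 1e-9

# Under Stevens' law the curve can be compressive (exponent < 1,
# e.g. ~0.33 is often quoted for brightness) or expansive
# (exponent > 1, e.g. ~3.5 for electric shock):
brightness = stevens(100.0, 0.33)
shock = stevens(2.0, 3.5)
```

The practical difference is what counts as "equal steps": equal stimulus ratios under Fechner, equal sensation ratios under Stevens.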
Psychophysics
–
Diagram showing a specific staircase procedure: the Transformed Up/Down Method (1-up/2-down rule). Until the first reversal (which is neglected), the simple up/down rule and a larger step size are used.
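The 1-up/2-down rule named in the caption can be simulated. A minimal sketch, assuming a hypothetical observer with a logistic psychometric function (all parameter values are illustrative):

```python
import math
import random

def staircase_1up_2down(threshold=0.5, start=1.0, step=0.1,
                        n_trials=200, seed=0):
    """Transformed up/down staircase (1-up/2-down): the stimulus level
    decreases after two consecutive correct responses and increases
    after any incorrect one, converging near the 70.7%-correct point
    of the observer's psychometric function."""
    rng = random.Random(seed)
    level, run = start, 0
    levels = []
    for _ in range(n_trials):
        # hypothetical observer: probability correct rises with level
        p_correct = 1.0 / (1.0 + math.exp(-10.0 * (level - threshold)))
        if rng.random() < p_correct:
            run += 1
            if run == 2:        # "2 down": two correct in a row
                level -= step
                run = 0
        else:                   # "1 up": any single error
            level += step
            run = 0
        levels.append(level)
    return levels

# The track descends from the easy starting level and then oscillates
# in a band around the 70.7% point, above the 50% threshold itself.
track = staircase_1up_2down()
```

Averaging the reversal points of such a track is one common way to estimate the converged level.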
90.
Cloud physics
–
Cloud physics is the study of the physical processes that lead to the formation, growth and precipitation of atmospheric clouds. Clouds consist of microscopic droplets of water, tiny crystals of ice, or both. Cloud droplets initially form by the condensation of water vapor onto condensation nuclei when the supersaturation of air exceeds a critical value according to Köhler theory. Cloud condensation nuclei are necessary for cloud droplet formation because of the Kelvin effect: at small radii, the amount of supersaturation needed for condensation to occur is so large that it does not happen naturally. Raoult's law describes how the vapor pressure depends on the amount of solute in a solution; at high concentrations, when the droplets are small, the supersaturation required is smaller than without the presence of a nucleus. In warm clouds, larger cloud droplets fall at a higher terminal velocity, because at a given velocity the drag force per unit of droplet weight is greater on smaller droplets than on large ones. The large droplets can then collide with small droplets and combine to form even larger drops; when the drops become large enough that their downward velocity is greater than the upward velocity of the surrounding air, the drops can fall to the earth as precipitation. Collision and coalescence are not as important in mixed-phase clouds, where the Bergeron process dominates. Other important processes that form precipitation are riming, when a supercooled liquid drop collides with a solid snowflake, and aggregation, when two solid snowflakes collide and combine. Advances in weather radar and satellite technology have allowed the precise study of clouds on a large scale. The history of cloud microphysics developed in the 19th century and is described in several publications. Otto von Guericke originated the idea that clouds were composed of water bubbles. In 1847 Augustus Waller used spider web to examine droplets under the microscope, and these observations were confirmed by William Henry Dines in 1880 and Richard Assmann in 1884. 
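The Köhler-theory balance described above, between the Kelvin (curvature) effect and the Raoult (solute) effect, can be sketched numerically. The coefficients below are assumed order-of-magnitude values for a small salt nucleus in water, chosen for illustration:

```python
import math

A = 1.1e-9    # Kelvin (curvature) coefficient, metres -- illustrative
B = 1.5e-23   # Raoult (solute) coefficient, cubic metres -- illustrative

def saturation_ratio(r):
    """Equilibrium saturation ratio over a solution droplet of radius
    r (metres): the Kelvin term A/r raises it, the Raoult term B/r^3
    lowers it (simplified Koehler curve, S = 1 + A/r - B/r^3)."""
    return 1.0 + A / r - B / r**3

# The curve peaks at the critical radius r* = sqrt(3B/A); a droplet
# pushed past r* by ambient supersaturation grows freely (activation).
r_star = math.sqrt(3.0 * B / A)
s_crit = saturation_ratio(r_star)   # peak sits slightly above 1
```

With these values the critical supersaturation comes out well under one percent, which is why modest ambient supersaturations suffice to activate droplets on nuclei, while pure-water condensation at small radii does not occur.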
As water evaporates from an area of the surface, the air over that area becomes moist. Moist air is lighter than dry air, creating an unstable situation. When enough moist air has accumulated, all the moist air rises as a single packet; as more moist air forms along the surface, the process repeats, resulting in a series of discrete packets of moist air rising to form clouds. The main mechanism behind this process is adiabatic cooling: atmospheric pressure decreases with altitude, so the rising air expands in a process that expends energy and causes the air to cool, which makes water vapor condense into cloud. Water vapor in saturated air is normally attracted to condensation nuclei such as dust. The water droplets in a cloud have a normal radius of about 0.002 mm. The droplets may collide to form larger droplets, which remain aloft as long as the velocity of the air within the cloud is equal to or greater than the terminal velocity of the droplets
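The terminal-velocity argument above can be checked with Stokes' law, which holds at cloud-droplet sizes. A small sketch using standard physical constants (not values from this article):

```python
def stokes_terminal_velocity(radius_m, rho_water=1000.0, rho_air=1.2,
                             mu_air=1.8e-5, g=9.81):
    """Stokes-law terminal velocity for a small spherical droplet,
    v = 2 r^2 g (rho_water - rho_air) / (9 mu), valid while the
    Reynolds number is much less than 1, i.e. cloud-droplet sizes."""
    return 2.0 * radius_m**2 * g * (rho_water - rho_air) / (9.0 * mu_air)

# A droplet of the ~0.002 mm radius quoted above falls at well under
# a millimetre per second, so even weak in-cloud updrafts keep it aloft.
v_small = stokes_terminal_velocity(2e-6)   # roughly 5e-4 m/s
v_big = stokes_terminal_velocity(20e-6)    # 100x faster: v scales as r^2
```

The quadratic dependence on radius is why collision and coalescence matter: a drop ten times larger falls a hundred times faster, sweeping up the smaller droplets in its path.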
Cloud physics
–
Atmospheric sciences
Cloud physics
–
Late-summer rainstorm in Denmark. The nearly black color of the base indicates that the main cloud in the foreground is probably cumulonimbus.
Cloud physics
–
Windy evening twilight, enhanced by the Sun's angle, can visually mimic a tornado resulting from orographic lift.