1.
Continuum mechanics
–
Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century, and research in the area continues to this day. Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Continuum mechanics deals with physical properties of solids and fluids which are independent of any particular coordinate system in which they are observed. These physical properties are represented by tensors, which are mathematical objects that have the required property of being independent of coordinate system; the tensors can be expressed in particular coordinate systems for computational convenience. Materials, such as solids, liquids and gases, are composed of molecules separated by space, and on a microscopic scale materials have cracks and discontinuities. A continuum, by contrast, is a body that can be continually sub-divided into infinitesimal elements with properties being those of the bulk material. More specifically, the continuum hypothesis (or assumption) hinges on the concept of a representative elementary volume. This provides a link between the experimentalist's and the theoretician's viewpoints on constitutive equations, as well as a way of spatial and statistical averaging of the microstructure; the latter then provides a basis for stochastic finite elements. The levels of the statistical volume element (SVE) and the representative volume element (RVE) link continuum mechanics to statistical mechanics; the RVE may be assessed only in a limited way via experimental testing, namely when the constitutive response becomes spatially homogeneous. Specifically for fluids, the Knudsen number is used to assess to what extent the approximation of continuity can be made. As an everyday illustration, consider car traffic on a highway, with just one lane for simplicity.
Somewhat surprisingly, and in a tribute to its effectiveness, continuum mechanics models the movement of cars via a differential equation for the density of cars. The familiarity of this situation empowers us to understand a little of the continuum-discrete dichotomy underlying continuum modelling in general. To start modelling, let x measure distance along the highway, let t be time, and let ρ(x, t) be the density of cars on the highway; assume that cars do not appear or disappear. Consider any group of cars, from the car at the back of the group located at x = a to the car at the front located at x = b. The total number of cars in this group is N = ∫ₐᵇ ρ dx, and since cars are conserved, dN/dt = 0. The only way an integral can be zero for all intervals is if the integrand is zero for all x; consequently, conservation yields the first-order nonlinear conservation PDE ∂ρ/∂t + ∂(ρv)/∂x = 0 for all positions on the highway, where v(x, t) is the velocity of the cars. This conservation PDE applies not only to car traffic but also to fluids, solids, crowds, animals, plants, bushfires, financial traders and more. The PDE is one equation with two unknowns, so another equation is needed to form a well-posed problem.
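The conservation argument above can be checked numerically. Below is a minimal sketch (an illustration, not from the original text) that evolves a car-density profile with a first-order upwind scheme for ∂ρ/∂t + ∂(ρv)/∂x = 0, assuming for simplicity a constant car velocity v and a periodic road, and confirms that the total number of cars N = ∫ρ dx stays constant.

```python
# Minimal sketch (an illustration, not from the text): conservation of cars
# under the traffic PDE  drho/dt + d(rho*v)/dx = 0, assuming a constant car
# velocity v, a periodic road, and a first-order upwind finite-volume update.
def step(rho, v, dx, dt):
    n = len(rho)
    flux = [v * rho[i] for i in range(n)]              # q = rho * v per cell
    return [rho[i] - dt / dx * (flux[i] - flux[i - 1])  # upwind difference
            for i in range(n)]                          # i-1 wraps (periodic)

dx, dt, v = 1.0, 0.5, 1.0                # satisfies the CFL bound v*dt/dx <= 1
rho = [0.0] * 20
rho[5:10] = [0.2, 0.4, 0.6, 0.4, 0.2]    # a small platoon of cars
total_before = sum(rho) * dx             # N = integral of rho dx
for _ in range(30):
    rho = step(rho, v, dx, dt)
total_after = sum(rho) * dx              # unchanged, mirroring dN/dt = 0
```

Because the finite-volume fluxes telescope around the periodic road, the discrete car count is conserved to rounding error.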
Continuum mechanics
–
Figure 1. Configuration of a continuum body
2.
Kinematics
–
Kinematics as a field of study is often referred to as the geometry of motion, and as such may be seen as a branch of mathematics. The study of the influence of forces acting on masses falls within the purview of kinetics; for further details, see analytical dynamics. Kinematics is used in astrophysics to describe the motion of celestial bodies; in mechanical engineering, robotics, and biomechanics it is used to describe the motion of systems composed of joined parts, such as an engine, a robotic arm or the human skeleton. Kinematic analysis is the process of measuring the quantities used to describe motion. In addition, kinematics applies geometry to the study of the mechanical advantage of a mechanical system or mechanism. The term kinematic is the English version of A. M. Ampère's cinématique, which he constructed from the Greek κίνημα kinema ("movement, motion"), itself derived from κινεῖν kinein ("to move"). Kinematic and cinématique are related to the French word cinéma. Particle kinematics is the study of the trajectory of a particle. The position of a particle is defined to be the vector from the origin of a coordinate frame to the particle; if a tower is 50 m high, for instance, the vector from an origin at its base to the top of the tower points 50 m vertically upward. In the most general case, a three-dimensional coordinate system is used to define the position of a particle; however, if the particle is constrained to move on a surface, a two-dimensional coordinate system is sufficient. All observations in physics are incomplete without those observations being described with respect to a reference frame. The position vector of a particle is a vector drawn from the origin of the reference frame to the particle. It expresses both the distance of the point from the origin and its direction from the origin. The magnitude of the position vector |P| gives the distance between the point P and the origin, |P| = √(xP² + yP² + zP²), and the direction cosines of the position vector provide a quantitative measure of direction. It is important to note that the position vector of a particle isn't unique.
The position vector of a particle is different relative to different frames of reference. The velocity of a particle is a vector quantity that describes the magnitude as well as the direction of motion; more mathematically, it is the rate of change of the position vector of a point with respect to time.
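As a small illustration of the quantities just defined, here is a sketch computing the magnitude |P| and the direction cosines of a position vector; the coordinates of P are made-up values, not from the text.

```python
import math

# Sketch of the position-vector quantities above; the coordinates of P are
# made-up illustrative values.
def magnitude(p):
    x, y, z = p
    return math.sqrt(x**2 + y**2 + z**2)   # |P| = sqrt(xP^2 + yP^2 + zP^2)

def direction_cosines(p):
    r = magnitude(p)
    return tuple(c / r for c in p)         # (cos alpha, cos beta, cos gamma)

P = (3.0, 4.0, 12.0)
print(magnitude(P))                        # 13.0
print(direction_cosines(P))                # components whose squares sum to 1
```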
Kinematics
–
Each particle on the wheel travels in a planar circular trajectory (Kinematics of Machinery, 1876).
Kinematics
–
Kinematic quantities of a classical particle: mass m, position r, velocity v, acceleration a.
Kinematics
–
Illustration of a four-bar linkage from The Kinematics of Machinery, 1876 (http://en.wikisource.org/wiki/The_Kinematics_of_Machinery)
3.
Statistical mechanics
–
Statistical mechanics is a branch of theoretical physics that uses probability theory to study the average behaviour of a mechanical system where the state of the system is uncertain. A common use of statistical mechanics is in explaining the thermodynamic behaviour of large systems. The branch of statistical mechanics which treats and extends classical thermodynamics is known as statistical thermodynamics or equilibrium statistical mechanics. Statistical mechanics also finds use outside equilibrium; an important subbranch known as non-equilibrium statistical mechanics deals with the issue of microscopically modelling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions and flows of particles. In physics, two types of mechanics are usually examined: classical mechanics and quantum mechanics. Both describe a system deterministically once its state is completely specified, whereas in practice the exact state of a large system is never fully known; statistical mechanics fills this disconnection between the laws of mechanics and the experience of incomplete knowledge by adding some uncertainty about which state the system is in. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points; in quantum statistical mechanics, the ensemble is a probability distribution over pure states, and can be compactly summarized as a density matrix. These two meanings are equivalent for many purposes, and will be used interchangeably in this article. However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion; thus, the ensemble itself also evolves, as the systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). One special class of ensembles is those that do not evolve over time.
These ensembles are known as equilibrium ensembles, and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics; non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems. The primary goal of statistical thermodynamics is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles. Whereas statistical mechanics proper involves dynamics, here the attention is focussed on statistical equilibrium. Statistical equilibrium does not mean that the particles have stopped moving, only that the ensemble is not evolving. A sufficient condition for statistical equilibrium of an isolated system is that the probability distribution is a function only of conserved properties. There are many different equilibrium ensembles that can be considered, and additional postulates are necessary to motivate why the ensemble for a given system should have one form or another. A common approach found in textbooks is to take the equal a priori probability postulate.
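The equal a priori probability postulate mentioned above can be illustrated with a toy model (the model itself is an assumption for illustration, not from the text): N two-level spins, each contributing energy 0 or 1, with the total energy fixed, where every compatible microstate receives equal probability.

```python
from itertools import product

# Toy illustration (an assumption, not from the text) of the equal a priori
# probability postulate: for an isolated system, every microstate compatible
# with the fixed total energy gets equal weight.
# Model: N two-level spins, each contributing energy 0 or 1; total energy E.
N, E = 4, 2
microstates = [s for s in product((0, 1), repeat=N) if sum(s) == E]
p = 1.0 / len(microstates)                   # equal a priori probability
avg_e0 = sum(p * s[0] for s in microstates)  # ensemble average for spin 0
print(len(microstates), avg_e0)              # by symmetry, avg_e0 = E/N
```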
4.
Acceleration
–
Acceleration, in physics, is the rate of change of velocity of an object with respect to time. An object's acceleration is the net result of any and all forces acting on the object, as described by Newton's second law. The SI unit for acceleration is the metre per second squared (m/s²). Accelerations are vector quantities and add according to the parallelogram law; as a vector, the calculated net force is equal to the product of the object's mass and its acceleration. For example, when a car starts from a standstill and travels in a straight line at increasing speeds, it is accelerating in the direction of travel. If the car turns, there is an acceleration toward the new direction. In this example, we can call the forward acceleration of the car a linear acceleration, which passengers in the car might experience as a force pushing them back into their seats. When changing direction, we call this non-linear acceleration, which passengers might experience as a sideways force. If the speed of the car decreases, this is an acceleration in the direction opposite to that of the vehicle, sometimes called deceleration; passengers may experience deceleration as a force lifting them forwards. Mathematically, there is no separate formula for deceleration: both are changes in velocity. Each of these accelerations might be felt by passengers until their velocity matches that of the car. An object's average acceleration over a period of time is its change in velocity divided by the duration of the period; mathematically, ā = Δv / Δt. Instantaneous acceleration, meanwhile, is the limit of the average acceleration over an infinitesimal interval of time. The SI unit of acceleration is the metre per second squared, or metre per second per second, as the velocity in metres per second changes by the acceleration value every second. An object moving in a circular motion, such as a satellite orbiting the Earth, is accelerating due to the change of direction of motion, although its speed may be constant; in this case it is said to be undergoing centripetal acceleration.
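The definition ā = Δv/Δt can be sketched directly; the car figures below are illustrative, not from the text.

```python
# Sketch of average acceleration: a_avg = (v2 - v1) / (t2 - t1).
# The car figures are illustrative, not from the text.
def average_acceleration(v1, v2, t1, t2):
    return (v2 - v1) / (t2 - t1)   # change in velocity / duration of period

# A car going from rest to 27 m/s (about 100 km/h) in 9 s:
a = average_acceleration(0.0, 27.0, 0.0, 9.0)
print(a)                           # 3.0 m/s^2
```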
Proper acceleration, the acceleration of a body relative to a free-fall condition, is measured by an instrument called an accelerometer. As speeds approach the speed of light, relativistic effects become increasingly large. The acceleration along a curved path can be resolved into two components, called the tangential acceleration and the normal or radial acceleration. Geometrical analysis of space curves, which explains tangent, normal and binormal, is described by the Frenet–Serret formulas. Uniform or constant acceleration is a type of motion in which the velocity of an object changes by an equal amount in every equal time period. A frequently cited example of uniform acceleration is that of an object in free fall in a uniform gravitational field. The acceleration of a falling body in the absence of resistances to motion is dependent only on the gravitational field strength g.
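For the free-fall example of uniform acceleration, the standard constant-acceleration relations v = g t and s = ½ g t² give a quick sketch (g ≈ 9.81 m/s² is assumed, and air resistance is neglected as in the text).

```python
# Sketch of uniform (constant) acceleration: free fall with g = 9.81 m/s^2
# assumed, air resistance neglected as in the text's example.
g = 9.81

def speed_at(t):                  # v = g t
    return g * t

def fallen_distance(t):           # s = (1/2) g t^2
    return 0.5 * g * t**2

print(speed_at(2.0), fallen_distance(2.0))   # both approx 19.62 after 2 s
```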
Acceleration
–
Components of acceleration for a curved motion. The tangential component a t is due to the change in speed of traversal, and points along the curve in the direction of the velocity vector (or in the opposite direction). The normal component (also called centripetal component for circular motion) a c is due to the change in direction of the velocity vector and is normal to the trajectory, pointing toward the center of curvature of the path.
Acceleration
–
Acceleration is the rate of change of velocity. At any point on a trajectory, the magnitude of the acceleration is given by the rate of change of velocity in both magnitude and direction at that point. The true acceleration at time t is found in the limit as time interval Δt → 0 of Δ v / Δt
5.
Angular momentum
–
In physics, angular momentum is the rotational analog of linear momentum. It is an important quantity in physics because it is a conserved quantity: the angular momentum of a system remains constant unless acted on by an external torque. The definition of angular momentum for a point particle is the pseudovector r × p, the cross product of the particle's position vector r (relative to some origin) and its momentum vector p. This definition can be applied to each point in continua like solids or fluids. Unlike momentum, angular momentum does depend on where the origin is chosen, since the particle's position is measured from it. The angular momentum of an object can also be connected to the angular velocity ω of the object via the moment of inertia I; however, while ω always points in the direction of the rotation axis, the angular momentum L may point in a different direction. Angular momentum is additive: the total angular momentum of a system is the vector sum of the angular momenta of its parts; for continua or fields one uses integration. Torque can be defined as the rate of change of angular momentum, analogous to force. Applications include the gyrocompass, the control moment gyroscope, inertial guidance systems, reaction wheels, and flying discs or Frisbees. In general, conservation limits the possible motion of a system but does not uniquely determine it. In quantum mechanics, angular momentum is an operator with quantized eigenvalues. Angular momentum is subject to the Heisenberg uncertainty principle, meaning only one component can be measured with definite precision; the other two cannot. Also, the spin of elementary particles does not correspond to literal spinning motion. Angular momentum is a vector quantity that represents the product of a body's rotational inertia and rotational velocity about a particular axis, and can be considered an analog of linear momentum. Thus, where linear momentum is proportional to mass m and linear speed v, p = m v, angular momentum is proportional to moment of inertia I and angular speed ω, L = I ω. Unlike mass, which depends only on the amount of matter, moment of inertia is also dependent on the position of the axis of rotation.
Unlike linear speed, which occurs along a straight line, angular speed occurs about a center of rotation. Therefore, strictly speaking, L should be referred to as the angular momentum relative to that center. This simple analysis can also apply to non-circular motion if only the component of the motion which is perpendicular to the radius vector is considered. In that case, L = r m v⊥, where v⊥ = v sin θ is the perpendicular component of the motion. It is this definition, (length of moment arm) × (linear momentum), to which the term moment of momentum refers.
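The two forms of the definition, the vector L = r × p and the scalar L = r m v⊥, can be compared in a short sketch; the mass, position and velocity are made-up values, and for motion perpendicular to the radius vector the two forms agree.

```python
# Sketch comparing the vector definition L = r x p with the scalar form
# L = r m v_perp; the mass, position and velocity are made-up values.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

m = 2.0                           # kg
r = (3.0, 0.0, 0.0)               # m, position measured from the origin
v = (0.0, 4.0, 0.0)               # m/s, perpendicular to r, so v_perp = 4
p = tuple(m * vi for vi in v)     # linear momentum p = m v
L = cross(r, p)
print(L)                          # (0.0, 0.0, 24.0) = r * m * v_perp along z
```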
Angular momentum
–
This gyroscope remains upright while spinning due to the conservation of its angular momentum.
Angular momentum
–
An ice skater conserves angular momentum – her rotational speed increases as her moment of inertia decreases by drawing in her arms and legs.
6.
Potential energy
–
In physics, potential energy is energy possessed by a body by virtue of its position relative to others, stresses within itself, electric charge, and other factors. The unit for energy in the International System of Units is the joule (J). The term potential energy was introduced by the 19th-century Scottish engineer and physicist William Rankine, although it has links to the Greek philosopher Aristotle's concept of potentiality. Potential energy is associated with forces that act on a body in such a way that the work done by these forces on the body depends only on the initial and final positions of the body in space. These forces, which are called potential forces, can be represented at every point in space by vectors expressed as gradients of a scalar function called the potential. Potential energy is the energy of an object by virtue of its position relative to other objects. It is commonly associated with restoring forces such as that of a spring or the force of gravity. The action of stretching the spring or lifting the mass is performed by an external force that works against the force field of the potential. This work is stored in the field, and is said to be stored as potential energy: if the external force is removed, the field acts on the body to perform the work as it moves the body back to the initial position. Suppose a ball of mass m is held at a height h; if the acceleration of free fall is g, the weight of the ball is mg, and its gravitational potential energy is mgh. There are various types of potential energy, each associated with a particular type of force. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of mutual positions of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motions of particles and the potential energy of their mutual positions.
Forces derivable from a potential are also called conservative forces. The work done by a conservative force is W = −ΔU, where ΔU is the change in the potential energy associated with the force. The negative sign provides the convention that work done against a force field increases potential energy, while work done by the force field decreases it. Common notations for potential energy are U, V, and Ep. Potential energy is closely linked with forces: the force can be defined as the negative of the vector gradient of the potential field, F = −∇U. If the work done by a force is independent of the path, then the work done by the force is determined by the values of the potential evaluated at the start and end points of the trajectory.
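A sketch of the relations W = −ΔU and F = −∇U for the gravitational case; the mass and heights are illustrative, and the derivative is taken numerically (a central difference) purely for demonstration.

```python
# Sketch of W = -dU and F = -dU/dh for gravity; the mass and heights are
# illustrative, and the derivative is taken by a small central difference.
m, g = 1.5, 9.81                          # kg, m/s^2 (assumed values)

def U(h):                                 # gravitational PE: U = m g h
    return m * g * h

def force(h, eps=1e-6):                   # F = -dU/dh, numerically
    return -(U(h + eps) - U(h - eps)) / (2 * eps)

work_by_gravity = -(U(0.0) - U(10.0))     # W = -delta U for a 10 m drop
print(work_by_gravity)                    # 147.15 J released by the field
print(force(5.0))                         # approx -m g: constant, downward
```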
Potential energy
–
In the case of a bow and arrow, when the archer does work on the bow, drawing the string back, some of the chemical energy of the archer's body is transformed into elastic potential-energy in the bent limbs of the bow. When the string is released, the force between the string and the arrow does work on the arrow. Thus, the potential energy in the bow limbs is transformed into the kinetic energy of the arrow as it takes flight.
Potential energy
–
A trebuchet uses the gravitational potential energy of the counterweight to throw projectiles over two hundred meters
Potential energy
–
Springs are used for storing elastic potential energy
Potential energy
–
Archery is one of humankind's oldest applications of elastic potential energy
7.
Frame of reference
–
In physics, a frame of reference consists of an abstract coordinate system and the set of physical reference points that uniquely fix the coordinate system and standardize measurements. In n dimensions, n + 1 reference points are sufficient to fully define a reference frame. Using rectangular coordinates, a frame may be defined with a reference point at the origin and reference points at unit distance along the coordinate axes. In Einsteinian relativity, reference frames are used to specify the relationship between an observer and the phenomenon or phenomena under observation. In this context, the phrase often becomes observational frame of reference; a relativistic reference frame includes the coordinate time, which does not correspond across different frames moving relative to each other. The situation thus differs from Galilean relativity, where all possible coordinate times are essentially equivalent. The need to distinguish between the various meanings of frame of reference has led to a variety of terms. For example, sometimes the type of coordinate system is attached as a modifier, as in Cartesian frame of reference. Sometimes the state of motion is emphasized, as in rotating frame of reference; sometimes the way it transforms to frames considered as related is emphasized, as in Galilean frame of reference. Sometimes frames are distinguished by the scale of their observations, as in macroscopic and microscopic frames of reference. In this sense, an observational frame of reference allows study of the effect of motion upon an entire family of coordinate systems that could be attached to this frame. On the other hand, a coordinate system may be employed for many purposes where the state of motion is not the primary concern. For example, a coordinate system may be adopted to take advantage of the symmetry of a system. In a still broader perspective, the formulation of many problems in physics employs generalized coordinates, normal modes or eigenvectors, and it seems useful to divorce the various aspects of a reference frame for the discussion below.
A coordinate system is a concept, amounting to a choice of language used to describe observations. Consequently, an observer in a frame of reference can choose to employ any coordinate system to describe observations made from that frame of reference, and a change in the choice of coordinate system does not change the observer's state of motion. This viewpoint can be found elsewhere as well, which is not to dispute that some coordinate systems may be a better choice for some observations than are others. The choice of what to measure and with what observational apparatus is a matter separate from the observer's state of motion, a point made, for example, by J. D. Norton. The discussion is taken beyond simple space-time coordinate systems by Brading; extension to coordinate systems using generalized coordinates underlies the Hamiltonian and Lagrangian formulations of quantum field theory, classical relativistic mechanics, and quantum gravity.
Frame of reference
–
An observer O, situated at the origin of a local set of coordinates – a frame of reference F. The observer in this frame uses the coordinates (x, y, z, t) to describe a spacetime event, shown as a star.
8.
Impulse (physics)
–
In classical mechanics, impulse is the integral of a force, F, over the time interval, t, for which it acts. Since force is a vector quantity, impulse is also a vector, in the same direction as the force. Impulse applied to an object produces an equivalent vector change in its linear momentum. The SI unit of impulse is the newton second (N·s), and the dimensionally equivalent unit of momentum is the kilogram meter per second (kg·m/s). The corresponding English engineering units are the pound-second (lbf·s) and the slug-foot per second (slug·ft/s). A resultant force causes acceleration and a change in the velocity of the body for as long as it acts; conversely, a small force applied for a long time produces the same change in momentum, the same impulse, as a larger force applied briefly. This is often called the impulse-momentum theorem; as a result, an impulse may also be regarded as the change in momentum of an object to which a resultant force is applied. Impulse has the same units and dimensions as momentum: in the International System of Units, these are kg·m/s = N·s; in English engineering units, they are slug·ft/s = lbf·s. The term impulse is also used to refer to a fast-acting force or impact. This type of impulse is often idealized so that the change in momentum produced by the force happens with no change in time. This sort of change is a step change, and is not physically possible; however, it is a useful model for computing the effects of ideal collisions. The application of Newton's second law for variable mass allows impulse and momentum to be used as analysis tools for rockets; in the case of rockets, the impulse imparted can be normalized by unit of propellant expended, to create a performance parameter, specific impulse. This fact can be used to derive the Tsiolkovsky rocket equation, which relates the vehicle's propulsive change in velocity to the specific impulse. Wave-particle duality defines the impulse of a wave collision; the preservation of momentum in the collision is then called phase matching.
Applications include the Compton effect, nonlinear optics, acousto-optic modulators, and electron-phonon scattering.
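The impulse-momentum theorem described above, J = ∫F dt = Δp, can be sketched with a made-up force profile integrated by the trapezoidal rule.

```python
# Sketch of the impulse-momentum theorem J = integral(F dt) = delta p,
# with a made-up triangular force profile and trapezoidal integration.
def impulse(force_samples, dt):
    total = 0.0
    for f0, f1 in zip(force_samples, force_samples[1:]):
        total += 0.5 * (f0 + f1) * dt     # trapezoid on each interval
    return total

dt = 0.001                                # s between samples
samples = [0.0, 100.0, 200.0, 100.0, 0.0]  # N, a brief impact
J = impulse(samples, dt)                  # N*s
m, v0 = 0.045, 0.0                        # e.g. a 45 g ball at rest
v1 = v0 + J / m                           # delta p = J  =>  v1 = v0 + J/m
print(J, v1)                              # 0.4 N*s gives roughly 8.9 m/s
```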
Impulse (physics)
–
A large force applied for a very short duration, such as a golf shot, is often described as the club giving the ball an impulse.
9.
Momentum
–
In classical mechanics, linear momentum, translational momentum, or simply momentum is the product of the mass and velocity of an object, quantified in kilogram-meters per second. It is dimensionally equivalent to impulse, the product of force and time, and Newton's second law of motion states that the change in linear momentum of a body is equal to the net impulse acting on it. For example, a heavy truck moving rapidly has a large momentum; if the truck were lighter, or moving more slowly, then it would have less momentum. Linear momentum is also a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum does not change. In classical mechanics, conservation of momentum is implied by Newton's laws. It also holds in special relativity and, with appropriate definitions, a linear momentum conservation law holds in electrodynamics, quantum mechanics, and quantum field theory. It is ultimately an expression of one of the fundamental symmetries of space and time. Linear momentum depends on the frame of reference: observers in different frames would find different values of the linear momentum of a system, but each would observe that the value of linear momentum does not change with time. Momentum has a direction as well as magnitude; quantities that have both a magnitude and a direction are known as vector quantities. Because momentum has a direction, it can be used to predict the resulting direction of objects after they collide, as well as their speeds. Below, the properties of momentum are described in one dimension; the vector equations are almost identical to the scalar equations. The momentum of a particle is traditionally represented by the letter p. It is the product of two quantities, the mass and the velocity: p = m v. The units of momentum are the product of the units of mass and velocity. In SI units, if the mass is in kilograms and the velocity in meters per second, then the momentum is in kilogram meters/second; in cgs units, if the mass is in grams and the velocity in centimeters per second, then the momentum is in gram centimeters/second.
Being a vector, momentum has magnitude and direction. For example, a 1 kg model airplane, traveling due north at 1 m/s in straight and level flight, has a momentum of 1 kg m/s due north measured from the ground. The momentum of a system of particles is the vector sum of their momenta: if two particles have masses m1 and m2, and velocities v1 and v2, the total momentum is p = p1 + p2 = m1 v1 + m2 v2. If one or more of the particles is moving, the center of mass of the system will generally be moving as well. If the center of mass is moving at velocity vcm, the momentum is p = m vcm, where m is the total mass; this is known as Euler's first law. If a force F is applied to a particle for a time interval Δt, the momentum of the particle changes by an amount Δp = F Δt.
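The system-momentum relations above can be verified in a couple of lines; the masses and velocities are illustrative one-dimensional values.

```python
# Sketch of system momentum in one dimension with illustrative values:
# p = p1 + p2 = m1 v1 + m2 v2, and the same total via the center of mass.
m1, v1 = 2.0, 3.0                       # kg, m/s
m2, v2 = 1.0, -1.0
p_total = m1 * v1 + m2 * v2             # sum of the individual momenta
v_cm = (m1 * v1 + m2 * v2) / (m1 + m2)  # center-of-mass velocity
p_from_cm = (m1 + m2) * v_cm            # p = m v_cm with m the total mass
print(p_total, p_from_cm)               # both 5.0 kg*m/s
```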
Momentum
–
In a game of pool, momentum is conserved; that is, if one ball stops dead after the collision, the other ball will continue away with all the momentum. If the moving ball continues or is deflected then both balls will carry a portion of the momentum from the collision.
10.
Space
–
Space is the boundless three-dimensional extent in which objects and events have relative position and direction. Physical space is conceived in three linear dimensions, although modern physicists usually consider it, with time, to be part of a boundless four-dimensional continuum known as spacetime. The concept of space is considered to be of fundamental importance to an understanding of the physical universe; however, disagreement continues between philosophers over whether it is itself an entity, a relationship between entities, or part of a conceptual framework. Many of these classical philosophical questions were discussed in the Renaissance and then reformulated in the 17th century. In Isaac Newton's view, space was absolute, in the sense that it existed permanently and independently of whether there was any matter in the space. Other natural philosophers, notably Gottfried Leibniz, thought instead that space was in fact a collection of relations between objects, given by their distance and direction from one another. In the 18th century, the philosopher and theologian George Berkeley attempted to refute the visibility of spatial depth in his Essay Towards a New Theory of Vision. Immanuel Kant referred to the experience of space in his Critique of Pure Reason as being a pure a priori form of intuition. In the 19th and 20th centuries mathematicians began to examine geometries that are non-Euclidean, in which space is conceived as curved rather than flat. According to Albert Einstein's theory of relativity, space around gravitational fields deviates from Euclidean space, and experimental tests of general relativity have confirmed that non-Euclidean geometries provide a better model for the shape of space. In the seventeenth century, the philosophy of space and time emerged as a central issue in epistemology. At its heart was a debate between Gottfried Leibniz, the German philosopher-mathematician, and Isaac Newton over the nature of space. For Leibniz, unoccupied regions are those that could have objects in them, and thus spatial relations with other places.
For Leibniz, then, space was an abstraction from the relations between individual entities or their possible locations, and therefore could not be continuous but must be discrete. Space could be thought of in a way similar to the relations between family members: although people in the family are related to one another, the relations do not exist independently of the people. Leibniz argued that if space existed absolutely, one could imagine a universe in which everything was shifted over by some fixed distance; but since there would be no observational way of telling these universes apart, then, according to the identity of indiscernibles, there would be no real difference between them. According to the principle of sufficient reason, any theory of space that implied that there could be these two possible universes must therefore be wrong. Newton took space to be more than relations between objects, and based his position on observation and experimentation.
Space
–
Gottfried Leibniz
Space
–
A right-handed three-dimensional Cartesian coordinate system used to indicate positions in space.
Space
–
Isaac Newton
Space
–
Immanuel Kant
11.
Speed
–
In everyday use and in kinematics, the speed of an object is the magnitude of its velocity; it is thus a scalar quantity. Speed has the dimensions of distance divided by time. The SI unit of speed is the metre per second, but the most common unit of speed in everyday usage is the kilometre per hour or, in the US and the UK, miles per hour. For air and marine travel the knot is commonly used. The fastest possible speed at which energy or information can travel, according to special relativity, is the speed of light in a vacuum, c = 299,792,458 metres per second. Matter cannot quite reach the speed of light, as this would require an infinite amount of energy. In relativity physics, the concept of rapidity replaces the classical idea of speed. The Italian physicist Galileo Galilei is usually credited with being the first to measure speed by considering the distance covered and the time it takes. Galileo defined speed as the distance covered per unit of time; in equation form, this is v = d/t, where v is speed, d is distance, and t is time. A cyclist who covers 30 metres in a time of 2 seconds, for example, has a speed of 15 metres per second. Objects in motion often have variations in speed. If s is the length of the path travelled until time t, the speed equals the time derivative of s; in the special case where the velocity is constant, this can be simplified to v = s/t. The average speed over a finite time interval is the total distance travelled divided by the time duration. Speed at some instant, or assumed constant during a very short period of time, is called instantaneous speed. By looking at a speedometer, one can read the instantaneous speed of a car at any instant. A car travelling at 50 km/h generally goes for less than one hour at a constant speed, but if it did continue at that speed for a full hour, it would cover 50 km. If the vehicle continued at that speed for half an hour, it would cover half that distance; if it continued for only one minute, it would cover about 833 m. Different from instantaneous speed, average speed is defined as the total distance covered divided by the time interval.
For example, if a distance of 80 kilometres is driven in 1 hour, the average speed is 80 kilometres per hour; likewise, if 320 kilometres are travelled in 4 hours, the average speed is also 80 kilometres per hour. When a distance in kilometres is divided by a time in hours, the result is an average speed in kilometres per hour. Average speed does not describe the speed variations that may have taken place during shorter time intervals, and so average speed is often quite different from a value of instantaneous speed. If the average speed and the time of travel are known, the distance travelled can be calculated by rearranging the definition; using this for an average speed of 80 kilometres per hour on a 4-hour trip, the distance covered is found to be 320 kilometres. Linear speed is the distance travelled per unit of time, while tangential speed is the linear speed of something moving along a circular path.
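The average-speed arithmetic above can be sketched directly:

```python
# Sketch of the average-speed arithmetic quoted above: v = d / t and d = v t.
def average_speed(distance_km, hours):
    return distance_km / hours

print(average_speed(80.0, 1.0))    # 80.0 km/h
print(average_speed(320.0, 4.0))   # 80.0 km/h as well
print(80.0 * 4.0)                  # 320.0 km recovered from speed and time
```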
Speed
–
Speed can be thought of as the rate at which an object covers distance. A fast-moving object has a high speed and covers a relatively large distance in a given amount of time, while a slow-moving object covers a relatively small amount of distance in the same amount of time.
12.
Torque
–
Torque, moment, or moment of force is rotational force. Just as a force is a push or a pull, a torque can be thought of as a twist to an object. Loosely speaking, torque is a measure of the turning force on an object such as a bolt or a flywheel. For example, pushing or pulling the handle of a wrench connected to a nut or bolt produces a torque that loosens or tightens the nut or bolt. The symbol for torque is typically τ, the lowercase Greek letter tau; when it is called moment of force, it is denoted by M. The SI unit for torque is the newton metre; for more on the units of torque, see Units. This article follows US physics terminology in its use of the word torque. In the UK and in US mechanical engineering, this is called moment of force, usually shortened to moment. In US physics and UK physics terminology these terms are interchangeable, unlike in US mechanical engineering. Torque is defined mathematically as the rate of change of angular momentum of an object. The definition implies that one or both of the angular velocity or the moment of inertia of an object are changing. Moment is the term used for the tendency of one or more applied forces to rotate an object about an axis, not necessarily changing its angular momentum. For example, a force applied to a shaft causing acceleration, such as a drill bit accelerating from rest, produces a moment called torque. By contrast, a lateral force on a beam produces a moment, but since the angular momentum of the beam is not changing, this bending moment is not called a torque. Similarly with any force couple on an object that has no change to its angular momentum: such a moment is also not called a torque. This article follows the US physics terminology by calling all moments by the term torque, whether or not they cause the angular momentum of an object to change. The concept of torque, also called moment or couple, originated with the studies of Archimedes on levers. The term torque was apparently introduced into English scientific literature by James Thomson, the brother of Lord Kelvin, in 1884. A force applied at a right angle to a lever multiplied by its distance from the lever's fulcrum is its torque. 
A force of three newtons applied two metres from the fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. More generally, the torque on a particle can be defined as the cross product τ = r × F, where r is the particle's position vector relative to the fulcrum and F is the force acting on the particle. Alternatively, τ = r F⊥, where F⊥ is the amount of force directed perpendicularly to the position of the particle. Any force directed parallel to the particle's position vector does not produce a torque
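The equivalence of the two lever forces above can be checked with a plain cross product; the function name is illustrative:

```python
# tau = r x F for a particle: a plain-Python cross product, used to verify
# that 3 N at 2 m and 1 N at 6 m produce the same torque about the fulcrum.

def cross(r, F):
    rx, ry, rz = r
    Fx, Fy, Fz = F
    return (ry * Fz - rz * Fy,
            rz * Fx - rx * Fz,
            rx * Fy - ry * Fx)

# 3 N applied 2 m from the fulcrum, perpendicular to the lever arm...
tau_a = cross((2.0, 0.0, 0.0), (0.0, 3.0, 0.0))
# ...exerts the same torque as 1 N applied 6 m from the fulcrum.
tau_b = cross((6.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(tau_a, tau_b)  # both (0.0, 0.0, 6.0): 6 N*m about the z-axis
```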
Torque
13.
Velocity
–
The velocity of an object is the rate of change of its position with respect to a frame of reference, and is a function of time. Velocity is equivalent to a specification of an object's speed and direction of motion. Velocity is an important concept in kinematics, the branch of classical mechanics that describes the motion of bodies. Velocity is a vector quantity; both magnitude and direction are needed to define it. The scalar absolute value of velocity is called speed, being a coherent derived unit whose quantity is measured in the SI system as metres per second (m/s or m·s⁻¹). For example, 5 metres per second is a scalar, whereas 5 metres per second east is a vector. If there is a change in speed, direction or both, then the object has a changing velocity and is said to be undergoing an acceleration. To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path; thus, a constant velocity means motion in a straight line at a constant speed. For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes. Hence, the car is considered to be undergoing an acceleration. Speed describes only how fast an object is moving, whereas velocity gives both how fast and in what direction the object is moving. If a car is said to travel at 60 km/h, its speed has been specified; however, if the car is said to move at 60 km/h to the north, its velocity has now been specified. The big difference can be noticed when we consider movement around a circle: the average velocity is calculated by considering only the displacement between the starting and the end points, while the average speed considers only the total distance travelled. Velocity is defined as the rate of change of position with respect to time; average velocity can be calculated as v̄ = Δx/Δt. The average velocity is always less than or equal to the average speed of an object. 
This can be seen by realizing that while distance is always strictly increasing, displacement can increase or decrease in magnitude as well as change direction. In the one-dimensional case it can be seen that the area under a velocity vs. time graph is the displacement, x. In calculus terms, the integral of the velocity function v(t) is the displacement function x(t). In the figure, this corresponds to the area under the curve labeled s. Since the derivative of the position with respect to time gives the change in position divided by the change in time, velocity is the time derivative of position. Although velocity is defined as the rate of change of position, it is often common to start with an expression for an object's acceleration. As seen by the three green tangent lines in the figure, an object's instantaneous acceleration at a point in time is the slope of the tangent to the curve of a v(t) graph at that point. In other words, acceleration is defined as the derivative of velocity with respect to time. From there, we can obtain an expression for velocity as the area under an acceleration vs. time graph
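The area-under-the-curve and slope relations above can be illustrated numerically; the sketch below assumes a simple test profile v(t) = 2t (constant acceleration), integrated with the trapezoid rule:

```python
# Displacement as the area under a velocity-time curve, and acceleration as
# the slope of that curve, on the test profile v(t) = 2t over 0..1 s.

dt = 0.001
ts = [i * dt for i in range(1001)]   # sample times 0 .. 1 s
vs = [2.0 * t for t in ts]           # v(t) = 2t, so a = 2 and x(1) = 1

# Integrate v over time (trapezoid rule) -> displacement x(1) = t^2 = 1
x = sum((vs[i] + vs[i + 1]) / 2 * dt for i in range(len(vs) - 1))

# Differentiate v (forward difference) -> acceleration, the slope of v-t
a = (vs[1] - vs[0]) / dt

print(round(x, 6), round(a, 6))  # approximately 1.0 and 2.0
```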
Velocity
–
As a change of direction occurs while the cars turn on the curved track, their velocity is not constant.
14.
Virtual work
–
Virtual work arises in the application of the principle of least action to the study of forces and movement of a mechanical system. The work of a force acting on a particle as it moves along a displacement will be different for different displacements. Among all the possible displacements that a particle may follow, called virtual displacements, one will minimize the action; this displacement is therefore the displacement followed by the particle according to the principle of least action. The work of a force on a particle along a virtual displacement is known as the virtual work. The principle of virtual work had always been used in some form since antiquity in the study of statics. It was used by the Greeks, medieval Arabs and Latins. Working with Leibnizian concepts, Johann Bernoulli systematized the virtual work principle and made explicit the concept of infinitesimal displacement. He was able to solve problems for both rigid bodies as well as fluids. Bernoulli's version of the virtual work law appeared in his letter to Pierre Varignon in 1715, and this formulation of the principle is today known as the principle of virtual velocities and is commonly considered as the prototype of the contemporary virtual work principles. In 1743 D'Alembert published his Traité de dynamique, where he applied the principle of virtual work, based on Bernoulli's work. His idea was to convert a dynamical problem into a static problem by introducing inertial force. Consider a point particle that moves along a path which is described by a function r(t) from point A, where r(t = t0), to point B, where r(t = t1). It is possible that the particle moves from A to B along a nearby path described by r(t) + δr(t). The variation δr satisfies the requirement δr(t0) = δr(t1) = 0. The components of the variation, δr1, δr2 and δr3, are called virtual displacements. This can be generalized to a mechanical system defined by the generalized coordinates qi, in which case the variation of the qi is defined by the virtual displacements δqi. 
Virtual work is the work done by the applied forces along a virtual displacement. When considering forces applied to a body in equilibrium, the principle of least action requires the virtual work of these forces to be zero. Consider a particle P that moves from a point A to a point B along a trajectory r(t); it is important to notice that the value of the work W depends on the trajectory r(t). Suppose the force F along the nearby path r(t) + δr(t) is the same as F along r(t). The variation of the work δW associated with this nearby path, known as the virtual work, can be computed to be δ W = W ¯ − W = ∫ t 0 t 1 F ⋅ δ v d t, where δv = d(δr)/dt is the variation of the velocity
Virtual work
–
This is an engraving from Mechanics Magazine published in London in 1824.
Virtual work
–
Illustration from Army Service Corps Training on Mechanical Transport, (1911), Fig. 112 Transmission of motion and force by gear wheels, compound train
15.
Analytical mechanics
–
In theoretical physics and mathematical physics, analytical mechanics, or theoretical mechanics, is a collection of closely related alternative formulations of classical mechanics. It was developed by scientists and mathematicians during the 18th century and onward. A scalar is a quantity, whereas a vector is represented by quantity and direction; the equations of motion are derived from a scalar quantity by some underlying principle about the scalar's variation. Analytical mechanics takes advantage of a system's constraints to solve problems. The constraints limit the degrees of freedom the system can have, and can be used to reduce the number of coordinates needed to solve for the motion. The formalism is well suited to arbitrary choices of coordinates, known in this context as generalized coordinates. It does not always work for non-conservative forces or dissipative forces like friction, in which case one may revert to Newtonian mechanics or use the Udwadia–Kalaba equation. Two dominant branches of analytical mechanics are Lagrangian mechanics and Hamiltonian mechanics; there are other formulations such as Hamilton–Jacobi theory and Routhian mechanics. All equations of motion for particles and fields, in any formalism, can be derived from a variational principle. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries. Analytical mechanics does not introduce new physics and is not more general than Newtonian mechanics; rather it is a collection of equivalent formalisms which have broad application. In fact the same principles and formalisms can be used in relativistic mechanics and general relativity. Analytical mechanics is used widely, from physics to applied mathematics. The methods of analytical mechanics apply to discrete systems of particles, each with a finite number of degrees of freedom. They can be modified to describe continuous fields or fluids, which have infinite degrees of freedom; the definitions and equations have a close analogy with those of particle mechanics. 
In Newtonian mechanics, one customarily uses all three Cartesian coordinates, or some other 3D coordinate system, to refer to a body's position during its motion. In physical systems, however, some structure or other system usually constrains the motion from taking certain directions. In the Lagrangian and Hamiltonian formalisms, the constraints are incorporated into the motion's geometry, reducing the coordinates to those actually needed; these are known as generalized coordinates, denoted qi. Generalized coordinates incorporate constraints on the system: there is one generalized coordinate qi for each degree of freedom, i.e. each way the system can change its configuration, such as curvilinear lengths or angles of rotation. Generalized coordinates are not the same as curvilinear coordinates. The foundation on which the subject is built is D'Alembert's principle
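A minimal illustration of a generalized coordinate, assuming a planar pendulum of fixed rod length L (an assumption of this sketch, not from the text): the constraint x² + y² = L² reduces two Cartesian coordinates to one degree of freedom, the angle θ:

```python
# A planar pendulum: the constraint x^2 + y^2 = L^2 leaves one generalized
# coordinate, theta, from which both Cartesian coordinates are recovered.
import math

L = 2.0  # rod length (illustrative value)

def cartesian(theta):
    """Recover (x, y) from the single generalized coordinate theta."""
    return (L * math.sin(theta), -L * math.cos(theta))

x, y = cartesian(math.pi / 6)
print(math.isclose(x * x + y * y, L * L))  # True: constraint holds automatically
```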
Analytical mechanics
–
As the system evolves, q traces a path through configuration space (only some are shown). The path taken by the system (red) has a stationary action (δ S = 0) under small changes in the configuration of the system (δ q).
16.
Routhian mechanics
–
In analytical mechanics, a branch of theoretical physics, Routhian mechanics is a hybrid formulation of Lagrangian mechanics and Hamiltonian mechanics developed by Edward John Routh. Correspondingly, the Routhian is the function which replaces both the Lagrangian and Hamiltonian functions. The Routhian, like the Hamiltonian, can be obtained from a Legendre transform of the Lagrangian, and has a similar mathematical form to the Hamiltonian, but is not exactly the same. The difference between the Lagrangian, Hamiltonian, and Routhian functions is their variables: the Routhian differs in that some coordinates are chosen to have corresponding generalized velocities, the rest to have corresponding generalized momenta. This choice is arbitrary, and can be done to simplify the problem. In each case the Lagrangian and Hamiltonian functions are replaced by a single function, the Routhian. The full set thus has the advantages of both sets of equations, with the convenience of splitting one set of coordinates to the Hamilton equations, and the rest to the Lagrangian equations. The Lagrangian equations are powerful results, used frequently in theory; however, if cyclic coordinates occur there will still be equations to solve for all the coordinates, including the cyclic coordinates despite their absence in the Lagrangian. In the Routhian formulation the cyclic coordinates can be eliminated using their conserved conjugate momenta, so overall fewer equations need to be solved compared to the Lagrangian approach. As with the rest of analytical mechanics, Routhian mechanics is completely equivalent to Newtonian mechanics and all other formulations of classical mechanics, and introduces no new physics. It offers an alternative way to solve mechanical problems. The velocities dqi/dt are expressed as functions of their corresponding momenta by inverting their defining relation; in this context, pi is said to be the momentum canonically conjugate to qi. The Routhian is intermediate between L and H: some coordinates q1, q2, ..., qn are chosen to have corresponding generalized momenta p1, p2, ..., 
pn, and the rest of the coordinates ζ1, ζ2, ..., ζs to have corresponding generalized velocities dζ1/dt, dζ2/dt, ..., dζs/dt; time may appear explicitly. Again the generalized velocity dqi/dt is to be expressed as a function of the generalized momentum pi via its defining relation. The choice of which n coordinates are to have corresponding momenta is arbitrary. The above convention is used by Landau and Lifshitz, and by Goldstein; some authors may define the Routhian to be the negative of the above definition. Below, the Routhian equations of motion are obtained in two ways; in the process other useful derivatives are found that can be used elsewhere. Consider the case of a system with two degrees of freedom, q and ζ, with generalized velocities dq/dt and dζ/dt. Now change variables from the set (q, ζ, dq/dt, dζ/dt) to (q, ζ, p, dζ/dt), simply switching the velocity dq/dt to the momentum p. This change of variables in the differentials is the Legendre transformation. The differential of the new function to replace L will be a sum of differentials in dq, dζ, dp, d(dζ/dt), and dt. Notice the Routhian replaces the Hamiltonian and Lagrangian functions in all the equations of motion. The remaining equation states that the partial time derivatives of L and R are negatives of each other: ∂ L ∂ t = − ∂ R ∂ t
Routhian mechanics
–
Edward John Routh, 1831–1907.
17.
Damping
–
If a frictional force proportional to the velocity is also present, the harmonic oscillator is described as a damped oscillator. Depending on the friction coefficient, the system can oscillate with a frequency lower than in the non-damped case (an underdamped oscillator), or decay to the equilibrium position without oscillating (an overdamped oscillator). The boundary solution between an underdamped oscillator and an overdamped oscillator occurs at a particular value of the friction coefficient and is called critically damped. If an external time-dependent force is present, the oscillator is described as a driven oscillator. Mechanical examples include pendulums, masses connected to springs, and acoustical systems; other analogous systems include electrical harmonic oscillators such as RLC circuits. The harmonic oscillator model is important in physics, because any mass subject to a restoring force in stable equilibrium acts as a harmonic oscillator for small vibrations. Harmonic oscillators occur widely in nature and are exploited in many devices, such as clocks. They are the source of virtually all sinusoidal vibrations and waves. A simple harmonic oscillator is an oscillator that is neither driven nor damped. It consists of a mass m, which experiences a single force, F, which pulls the mass in the direction of the point x = 0 and depends only on the mass's position x. Balance of forces for the system is F = m a = m d 2 x d t 2 = m x ¨ = − k x. Solving this differential equation, we find that the motion is described by the function x(t) = A cos(ωt + φ). The motion is periodic, repeating itself in a sinusoidal fashion with constant amplitude, A. The position at a given time t also depends on the phase, φ. The period and frequency are determined by the size of the mass m and the force constant k, while the velocity and acceleration of a simple harmonic oscillator oscillate with the same frequency as the position but with shifted phases. The velocity is maximum for zero displacement, while the acceleration is in the opposite direction to the displacement. 
The potential energy stored in a harmonic oscillator at position x is U = 12 k x 2. In real oscillators, friction, or damping, slows the motion of the system; due to frictional force, the velocity decreases in proportion to the acting frictional force. While simple harmonic motion oscillates with only the restoring force acting on the system, damped harmonic motion experiences friction in addition
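A small numerical sketch of the undamped case above, assuming illustrative values for m, k and the initial conditions, and using a semi-implicit Euler step (an integration choice of this sketch, not from the text):

```python
# Undamped simple harmonic oscillator: x'' = -(k/m) x, integrated with a
# semi-implicit Euler step and checked against x(t) = A cos(omega t).
import math

m, k = 1.0, 4.0            # illustrative mass and spring constant
omega = math.sqrt(k / m)   # natural angular frequency = 2 rad/s
A = 0.5                    # amplitude, set by the initial displacement

x, v = A, 0.0              # released at rest from x = A, so x(t) = A cos(wt)
dt = 1e-4
t = 0.0
while t < 1.0:
    v += -(k / m) * x * dt  # update velocity from the restoring force
    x += v * dt             # then update position (symplectic Euler)
    t += dt

print(round(x, 3), round(A * math.cos(omega * t), 3))  # the two agree closely
```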
Damping
–
Mass attached to a spring and damper.
18.
Damping ratio
–
In engineering, the damping ratio is a dimensionless measure describing how oscillations in a system decay after a disturbance. Many systems exhibit oscillatory behavior when they are disturbed from their position of static equilibrium; a mass suspended from a spring, for example, might, if pulled and released, bounce up and down. On each bounce, the system is trying to return to its equilibrium position. Sometimes losses damp the system and can cause the oscillations to gradually decay in amplitude towards zero or attenuate. The damping ratio is a measure describing how rapidly the oscillations decay from one bounce to the next. Where the spring–mass system is completely lossless, the mass would oscillate indefinitely, with each bounce of equal height to the last; this hypothetical case is called undamped. If the system contained high losses, for example if the spring–mass experiment were conducted in a viscous fluid, the mass could slowly return to its rest position without ever overshooting; this case is called overdamped. Commonly, the mass tends to overshoot its starting position, and then return, overshooting again; with each overshoot, some energy in the system is dissipated, and the oscillations die towards zero. This case is called underdamped. Between the overdamped and underdamped cases, there exists a certain level of damping at which the system will just fail to overshoot; this case is called critical damping. The key difference between critical damping and overdamping is that, in critical damping, the system returns to equilibrium in the minimum amount of time. The damping ratio is a system parameter, usually denoted by ζ, and it is particularly important in the study of control theory. It is also important in the harmonic oscillator. The damping ratio provides a mathematical means of expressing the level of damping in a system relative to critical damping. The equation of motion can be solved with the approach X = C e s t, where C and s are both complex constants. 
That approach assumes a solution that is oscillatory and/or decaying exponentially. Using it in the ODE gives a condition on the frequency of the damped oscillations, s = −ωn (ζ ± i √(1 − ζ²)). Undamped: the case where ζ → 0 corresponds to the undamped simple harmonic oscillator. Underdamped: if s is a complex number, then the solution is a decaying exponential combined with an oscillatory portion that looks like exp(i ωn √(1 − ζ²) t). This case occurs for ζ < 1, and is referred to as underdamped. Overdamped: if s is a real number, then the solution is simply a decaying exponential with no oscillation. This case occurs for ζ > 1, and is referred to as overdamped. Critically damped: the case where ζ = 1 is the border between the overdamped and underdamped cases, and is referred to as critically damped. This turns out to be a desirable outcome in many cases where engineering design of a damped oscillator is required. The factors Q, damping ratio ζ, and exponential decay rate α are related such that ζ = 1/(2Q) = α/ω0. A lower damping ratio implies a lower decay rate, and so very underdamped systems oscillate for long times
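The ζ regimes above can be expressed as a small classifier, together with the damped oscillation frequency of the underdamped case; the function names and example values are illustrative:

```python
# Classifying a second-order system by its damping ratio zeta, and computing
# the underdamped oscillation frequency omega_n * sqrt(1 - zeta^2).
import math

def classify(zeta):
    if zeta == 0:
        return "undamped"
    if zeta < 1:
        return "underdamped"
    if zeta == 1:
        return "critically damped"
    return "overdamped"

def damped_frequency(omega_n, zeta):
    """Oscillation frequency of the underdamped response (zeta < 1)."""
    return omega_n * math.sqrt(1 - zeta ** 2)

print(classify(0.2), classify(1.0), classify(2.5))
print(round(damped_frequency(10.0, 0.6), 3))  # 8.0 rad/s
```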
Damping ratio
–
The effect of varying damping ratio on a second-order system.
19.
Harmonic oscillator
–
If a frictional force proportional to the velocity is also present, the harmonic oscillator is described as a damped oscillator. Depending on the friction coefficient, the system can oscillate with a frequency lower than in the non-damped case (an underdamped oscillator), or decay to the equilibrium position without oscillating (an overdamped oscillator). The boundary solution between an underdamped oscillator and an overdamped oscillator occurs at a particular value of the friction coefficient and is called critically damped. If an external time-dependent force is present, the oscillator is described as a driven oscillator. Mechanical examples include pendulums, masses connected to springs, and acoustical systems; other analogous systems include electrical harmonic oscillators such as RLC circuits. The harmonic oscillator model is important in physics, because any mass subject to a restoring force in stable equilibrium acts as a harmonic oscillator for small vibrations. Harmonic oscillators occur widely in nature and are exploited in many devices, such as clocks. They are the source of virtually all sinusoidal vibrations and waves. A simple harmonic oscillator is an oscillator that is neither driven nor damped. It consists of a mass m, which experiences a single force, F, which pulls the mass in the direction of the point x = 0 and depends only on the mass's position x. Balance of forces for the system is F = m a = m d 2 x d t 2 = m x ¨ = − k x. Solving this differential equation, we find that the motion is described by the function x(t) = A cos(ωt + φ). The motion is periodic, repeating itself in a sinusoidal fashion with constant amplitude, A. The position at a given time t also depends on the phase, φ. The period and frequency are determined by the size of the mass m and the force constant k, while the velocity and acceleration of a simple harmonic oscillator oscillate with the same frequency as the position but with shifted phases. The velocity is maximum for zero displacement, while the acceleration is in the opposite direction to the displacement. 
The potential energy stored in a harmonic oscillator at position x is U = 12 k x 2. In real oscillators, friction, or damping, slows the motion of the system; due to frictional force, the velocity decreases in proportion to the acting frictional force. While simple harmonic motion oscillates with only the restoring force acting on the system, damped harmonic motion experiences friction in addition
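The period T = 2π√(m/k) and the stored energy U = ½kx² follow from the relations above; a small check with illustrative values:

```python
# Period of a simple harmonic oscillator and the potential energy stored
# in the spring at displacement x; m, k and x below are illustrative.
import math

def period(m, k):
    """T = 2*pi*sqrt(m/k): set by the mass and the spring constant alone."""
    return 2 * math.pi * math.sqrt(m / k)

def potential_energy(k, x):
    """U = (1/2) k x^2 stored at displacement x."""
    return 0.5 * k * x ** 2

T = period(1.0, 4.0)                # omega = 2 rad/s, so T = pi seconds
print(round(T, 4))                  # 3.1416
print(potential_energy(4.0, 0.5))   # 0.5 J
```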
Harmonic oscillator
–
Another damped harmonic oscillator
Harmonic oscillator
–
Dependence of the system behavior on the value of the damping ratio ζ
20.
Newton's law of universal gravitation
–
This is a general physical law derived from empirical observations by what Isaac Newton called inductive reasoning. It is a part of classical mechanics and was formulated in Newton's work Philosophiæ Naturalis Principia Mathematica. In modern language, the law states: every point mass attracts every single other point mass by a force pointing along the line intersecting both points. The force is proportional to the product of the two masses and inversely proportional to the square of the distance between them. The first test of Newton's theory of gravitation between masses in the laboratory was the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798; it took place 111 years after the publication of Newton's Principia. Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used to calculate the magnitude of the electrical force arising between two charged bodies. Both are inverse-square laws, where force is inversely proportional to the square of the distance between the bodies. Coulomb's law has the product of two charges in place of the product of the masses, and the electrostatic constant in place of the gravitational constant. Newton's law has since been superseded by Albert Einstein's theory of general relativity. At the same time Hooke agreed that the Demonstration of the Curves generated thereby was wholly Newton's. In this way the question arose as to what, if anything, Newton owed to Hooke; this is a subject extensively discussed since that time and on which some points, outlined below, continue to excite controversy. Hooke also held that these attractive powers are so much the more powerful in operating the nearer the body wrought upon is to the attracting centre. Thus Hooke clearly postulated mutual attractions between the Sun and planets, in a way that increased with nearness to the attracting body. Hooke's statements up to 1674 made no mention, however, that an inverse square law applies or might apply to these attractions. 
Hooke's gravitation was also not yet universal, though it approached universality more closely than previous hypotheses; he also did not provide accompanying evidence or mathematical demonstration. It was later, in writing on 6 January 1679–80 to Newton, that Hooke communicated his supposition of an inverse square law. Newton, faced in May 1686 with Hooke's claim on the inverse square law, denied that Hooke was to be credited as author of the idea. Among the reasons, Newton recalled that the idea had been discussed with Sir Christopher Wren previous to Hooke's 1679 letter. Newton also pointed out and acknowledged prior work of others, including Bullialdus and Borelli. D. T. Whiteside has described the contribution to Newton's thinking that came from Borelli's book, a copy of which was in Newton's library at his death. Newton further defended his work by saying that had he first heard of the inverse square proportion from Hooke, he would still have some rights to it in view of his demonstrations of its accuracy; Hooke, without evidence in favor of the supposition, could only guess that the inverse square law was approximately valid at great distances from the center. Thus Newton gave a justification, otherwise lacking, for applying the inverse square law to large spherical planetary masses as if they were tiny particles. After his 1679–1680 correspondence with Hooke, Newton adopted the language of inward or centripetal force. His analyses also involved the combination of tangential and radial displacements, which Newton was making in the 1660s. The lesson offered by Hooke to Newton here, although significant, was one of perspective and did not change the analysis. This background shows there was basis for Newton to deny deriving the inverse square law from Hooke. On the other hand, Newton did accept and acknowledge, in all editions of the Principia, that Hooke had separately appreciated the inverse square law in the solar system
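The inverse-square law stated above, F = G·m₁·m₂/r², can be checked numerically; the Earth–Moon figures below are rounded illustrative values:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2.
# G is the CODATA value; the Earth/Moon masses and separation are rounded.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    return G * m1 * m2 / r ** 2

F = gravitational_force(5.97e24, 7.35e22, 3.84e8)
print(f"{F:.3e} N")  # roughly 2e20 N between Earth and Moon

# Inverse-square check: doubling the separation quarters the force.
ratio = gravitational_force(1.0, 1.0, 2.0) / gravitational_force(1.0, 1.0, 1.0)
print(ratio)  # 0.25
```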
Newton's law of universal gravitation
21.
Rigid body
–
In physics, a rigid body is an idealization of a solid body in which deformation is neglected. In other words, the distance between any two given points of a rigid body remains constant in time regardless of external forces exerted on it. Even though such an object cannot physically exist due to relativity, in classical mechanics a rigid body is usually considered as a continuous mass distribution, while in quantum mechanics a rigid body is usually thought of as a collection of point masses. For instance, in quantum mechanics molecules are often seen as rigid bodies. The position of a rigid body is the position of all the particles of which it is composed. To simplify the description of this position, we exploit the property that the body is rigid: if the body is rigid, it is sufficient to describe the position of at least three non-collinear particles. This makes it possible to reconstruct the position of all the other particles. However, typically a different, mathematically more convenient, but equivalent approach is used: the position of the whole body is represented by the position of a reference point together with the orientation of the body. Thus, the position of a rigid body has two components, linear and angular, respectively. The same is true for other kinematic and kinetic quantities describing the motion of a rigid body, such as linear and angular velocity, acceleration, momentum, and impulse. The reference point may define the origin of a coordinate system fixed to the body. There are several ways to numerically describe the orientation of a rigid body, including a set of three Euler angles, a quaternion, or a direction cosine matrix. In general, when a rigid body moves, both its position and orientation vary with time. In the kinematic sense, these changes are referred to as translation and rotation, respectively; indeed, the position of a rigid body can be viewed as a hypothetic translation and rotation of the body starting from a hypothetic reference position. 
Velocity and angular velocity are measured with respect to a frame of reference. The linear velocity of a rigid body is a vector quantity, equal to the time rate of change of its linear position; thus, it is the velocity of a reference point fixed to the body. During purely translational motion, all points on the body move with the same velocity. However, when motion involves rotation, the instantaneous velocity of any two points on the body will generally not be the same. Two points of a rotating body will have the same instantaneous velocity only if they happen to lie on an axis parallel to the instantaneous axis of rotation. Angular velocity is a vector quantity that describes the angular speed at which the orientation of the rigid body is changing. All points on a rigid body experience the same angular velocity at all times
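A standard consequence of the rigid-body kinematics above is that the velocity of any point P follows v_P = v_R + ω × (r_P − r_R), where R is a reference point; a plain-Python sketch with illustrative names:

```python
# Velocity of a point on a rigid body: translation of the reference point
# plus the rotational contribution omega x (r_P - r_R).

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def point_velocity(v_R, omega, r_P, r_R):
    d = tuple(p - r for p, r in zip(r_P, r_R))
    spin = cross(omega, d)
    return tuple(v + w for v, w in zip(v_R, spin))

# Body translating at 1 m/s in x while spinning at 2 rad/s about z:
v = point_velocity((1.0, 0.0, 0.0), (0.0, 0.0, 2.0),
                   (0.0, 1.0, 0.0), (0.0, 0.0, 0.0))
print(v)  # (-1.0, 0.0, 0.0): rotation cancels the translation at this point
```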
Rigid body
–
The position of a rigid body is determined by the position of its center of mass and by its attitude (at least six parameters in total).
22.
Rigid body dynamics
–
Rigid-body dynamics studies the movement of systems of interconnected bodies under the action of external forces. This excludes bodies that display fluid, highly elastic, and plastic behavior. The dynamics of a rigid body system is described by the laws of kinematics and by the application of Newton's second law or its derivative form, Lagrangian mechanics. The formulation and solution of rigid body dynamics is an important tool in the simulation of mechanical systems. If a system of particles moves parallel to a fixed plane, Newton's laws for a rigid system of N particles, Pi, i = 1, ..., N, simplify because there is no movement in the k direction. Determine the resultant force and torque at a reference point R, to obtain F = ∑ i =1 N m i A i, T = ∑ i =1 N ( r i − R ) × m i A i, where ri denotes the planar trajectory of each particle. In this case, the vectors can be simplified by introducing the unit vectors ei from the reference point R to a point ri. Several methods to describe orientations of a rigid body in three dimensions have been developed; they are summarized in the following sections. The first attempt to represent an orientation is attributed to Leonhard Euler. The values of the three rotations are called Euler angles. These are three angles, also known as yaw, pitch and roll, navigation angles, and Cardan angles; in aerospace engineering they are usually referred to as Euler angles. Euler also realized that the composition of two rotations is equivalent to a single rotation about a different fixed axis. Therefore, the composition of the three angles has to be equal to only one rotation, whose axis was complicated to calculate until matrices were developed. Based on this fact he introduced a way to describe any rotation with a vector on the rotation axis and magnitude equal to the value of the angle. Therefore, any orientation can be represented by a vector that leads to it from the reference frame. When used to represent an orientation, the vector is commonly called orientation vector, or attitude vector. 
A similar method, called axis-angle representation, describes a rotation or orientation using a unit vector aligned with the rotation axis together with the rotation angle. With the introduction of matrices the Euler theorems were rewritten: rotations were described by orthogonal matrices referred to as rotation matrices or direction cosine matrices. When used to represent an orientation, a rotation matrix is commonly called orientation matrix
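The axis-angle pair above can be converted to a rotation matrix via Rodrigues' formula; a sketch (the axis is assumed to be a unit vector):

```python
# Axis-angle representation: a rotation by angle theta about a unit axis n,
# built as a 3x3 rotation matrix via Rodrigues' rotation formula.
import math

def rotation_matrix(axis, theta):
    x, y, z = axis  # assumed to be a unit vector
    c, s, C = math.cos(theta), math.sin(theta), 1 - math.cos(theta)
    return [
        [c + x * x * C,      x * y * C - z * s,  x * z * C + y * s],
        [y * x * C + z * s,  c + y * y * C,      y * z * C - x * s],
        [z * x * C - y * s,  z * y * C + x * s,  c + z * z * C],
    ]

# 90 degrees about the z-axis maps the x unit vector onto the y unit vector.
R = rotation_matrix((0.0, 0.0, 1.0), math.pi / 2)
v = [sum(R[i][j] * e for j, e in enumerate((1.0, 0.0, 0.0))) for i in range(3)]
print([round(c, 6) for c in v])  # [0.0, 1.0, 0.0]
```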
Rigid body dynamics
–
Human body modelled as a system of rigid bodies of geometrical solids. Representative bones were added for better visualization of the walking person.
Rigid body dynamics
–
Movement of each of the components of the Boulton & Watt Steam Engine (1784) is modeled by a continuous set of rigid displacements
23.
Vibration
–
Vibration is a mechanical phenomenon whereby oscillations occur about an equilibrium point. The word comes from the Latin vibrationem. The oscillations may be periodic, such as the motion of a pendulum, or random, such as the movement of a tire on a gravel road. Vibration can be desirable: for example, the motion of a tuning fork, or the reed in a woodwind instrument or harmonica. In many cases, however, vibration is undesirable, wasting energy and creating unwanted sound. For example, the vibrational motions of engines, electric motors, or any mechanical device in operation are typically unwanted. Such vibrations could be caused by imbalances in the rotating parts or uneven friction. Careful designs usually minimize unwanted vibrations. The studies of sound and vibration are closely related. Sound, or pressure waves, are generated by vibrating structures; hence, attempts to reduce noise are often related to issues of vibration. Free vibration occurs when a mechanical system is set in motion with an initial input and allowed to vibrate freely. Examples of this type of vibration are pulling a child back on a swing and letting go, or hitting a tuning fork and letting it ring. The mechanical system vibrates at one or more of its natural frequencies and damps down to motionlessness. Forced vibration is when a time-varying disturbance is applied to a mechanical system. The disturbance can be a periodic and steady-state input, a transient input, or a random input. The periodic input can be a harmonic or a non-harmonic disturbance. Damped vibration occurs when the energy of a vibrating system is gradually dissipated by friction and other resistances. The vibrations gradually reduce or change in frequency or intensity, or cease. Vibration testing is accomplished by introducing a forcing function into a structure, usually with some type of shaker. Alternately, a DUT (device under test) is attached to the table of a shaker. Vibration testing is performed to examine the response of a device under test to a defined vibration environment. The measured response may be fatigue life, resonant frequencies, or squeak and rattle sound output.
Squeak and rattle testing is performed with a special type of quiet shaker that produces very low sound levels while under operation. For relatively low frequency forcing, servohydraulic shakers are used; for higher frequencies, electrodynamic shakers are used. Generally, one or more input or control points located on the DUT side of a fixture is kept at a specified acceleration, while other response points experience a maximum vibration level (resonance) or a minimum vibration level (anti-resonance).
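The natural frequency and damping behaviour described above can be sketched for the textbook mass-spring-damper model m x'' + c x' + k x = 0. This is a generic illustration under that standard model, not a procedure from the article:

```python
import math

def free_vibration_params(m, k, c):
    # Natural frequency and damping ratio of a mass-spring-damper system.
    omega_n = math.sqrt(k / m)           # natural frequency, rad/s
    zeta = c / (2.0 * math.sqrt(k * m))  # damping ratio (dimensionless)
    f_n = omega_n / (2.0 * math.pi)      # natural frequency, Hz
    return omega_n, zeta, f_n
```

For zeta below 1 the free response is an underdamped oscillation that decays toward rest, matching the "damps down to motionlessness" behaviour described above.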
Vibration
–
Car Suspension: designing vibration control is undertaken as part of acoustic, automotive or mechanical engineering.
Vibration
–
One of the possible modes of vibration of a circular drum (see other modes).
24.
Centripetal force
–
A centripetal force is a force that makes a body follow a curved path. Its direction is always orthogonal to the motion of the body, toward the center of curvature of the path. Isaac Newton described it as "a force by which bodies are drawn or impelled, or in any way tend, towards a point as to a centre". In Newtonian mechanics, gravity provides the centripetal force responsible for astronomical orbits. One common example involving centripetal force is the case in which a body moves with uniform speed along a circular path; the centripetal force is directed at right angles to the motion, along the radius towards the centre of the circular path. The mathematical description was derived in 1659 by the Dutch physicist Christiaan Huygens. The direction of the force is toward the center of the circle in which the object is moving, or the osculating circle. For an object of mass m moving at speed v along a circle of radius r, the magnitude of the centripetal force is F = mv²/r. The speed in the formula is squared, so twice the speed needs four times the force; the inverse relationship with the radius of curvature shows that half the radial distance requires twice the force. The force can also be expressed using the orbital period T for one revolution of the circle. The rope example is an example involving a pull force; the centripetal force can also be supplied as a push force. Newton's idea of a centripetal force corresponds to what is nowadays referred to as a central force. Another example of centripetal force arises in the helix that is traced out when a charged particle moves in a uniform magnetic field in the absence of other external forces. In this case, the magnetic force is the centripetal force that acts towards the helix axis. Below are three examples of increasing complexity, with derivations of the formulas governing velocity and acceleration. Uniform circular motion refers to the case of constant rate of rotation. Here are two approaches to describing this case. Assume uniform circular motion, which requires three things: the object moves only on a circle; the radius of the circle r does not change in time.
The object moves with constant angular velocity ω around the circle; therefore, θ = ωt, where t is time. Now find the velocity v and acceleration a of the motion by taking derivatives of position with respect to time. This gives a = −ω²r; the negative sign shows that the acceleration is pointed towards the center of the circle, hence it is called centripetal. While objects naturally follow a straight path (due to inertia), this centripetal acceleration describes the circular motion path caused by a centripetal force. The image at right shows the relationships for uniform circular motion. In this subsection, dθ/dt is assumed constant, independent of time. Consequently, dr/dt = lim_{Δt→0} [r(t + Δt) − r(t)]/Δt = dℓ/dt.
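The squared-speed and inverse-radius relationships stated above can be checked numerically. A small sketch (the function name is an assumption):

```python
def centripetal_force(m, v, r):
    # F = m * v**2 / r, directed toward the centre of the circular path.
    return m * v ** 2 / r
```

Doubling the speed quadruples the required force, and halving the radius doubles it, exactly as the prose describes.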
Centripetal force
–
A body experiencing uniform circular motion requires a centripetal force, towards the axis as shown, to maintain its circular path.
25.
Coriolis force
–
In physics, the Coriolis force is an inertial force that acts on objects that are in motion relative to a rotating reference frame. In a reference frame with clockwise rotation, the force acts to the left of the motion of the object; in one with anticlockwise rotation, the force acts to the right. Early in the 20th century, the term Coriolis force began to be used in connection with meteorology. Deflection of an object due to the Coriolis force is called the Coriolis effect. Newton's laws of motion describe the motion of an object in an inertial frame of reference. When Newton's laws are transformed to a rotating frame of reference, the Coriolis force and the centrifugal force appear. Both forces are proportional to the mass of the object; the Coriolis force is proportional to the rotation rate, and the centrifugal force is proportional to its square. The Coriolis force acts in a direction perpendicular to the rotation axis and to the velocity of the body in the rotating frame. The centrifugal force acts outwards in the radial direction and is proportional to the distance of the body from the axis of the rotating frame. These additional forces are termed inertial forces, fictitious forces or pseudo forces, and they allow the application of Newton's laws to a rotating system. They are correction factors that do not exist in a non-accelerating or inertial reference frame. A commonly encountered rotating reference frame is the Earth, and the Coriolis effect is caused by the rotation of the Earth. Such motions are constrained by the surface of the Earth, so only the horizontal component of the Coriolis force is generally important. This force causes moving objects on the surface of the Earth to be deflected to the right in the Northern Hemisphere. The horizontal deflection effect is greater near the poles, since the effective rotation rate about a local vertical axis is largest there, and smallest at the equator. This effect is responsible for the rotation of large cyclones. Riccioli, Grimaldi, and Dechales all described the effect as part of an argument against the heliocentric system of Copernicus.
In other words, they argued that the Earth's rotation should create the effect, and so failure to detect the effect was taken as evidence against a moving Earth. The effect was described in the tidal equations of Pierre-Simon Laplace in 1778. Gaspard-Gustave Coriolis published a paper in 1835 on the energy yield of machines with rotating parts. That paper considered the supplementary forces that are detected in a rotating frame of reference. Coriolis divided these forces into two categories.
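In vector form the Coriolis acceleration is a_C = −2 Ω × v, where Ω is the angular velocity of the frame and v is the velocity measured in the rotating frame. A minimal sketch (the function name is an assumption):

```python
def coriolis_acceleration(omega, v):
    # a_C = -2 * (omega x v): the apparent acceleration seen in a frame
    # rotating with angular velocity omega, for frame-relative velocity v.
    ox, oy, oz = omega
    vx, vy, vz = v
    cx = oy * vz - oz * vy
    cy = oz * vx - ox * vz
    cz = ox * vy - oy * vx
    return (-2.0 * cx, -2.0 * cy, -2.0 * cz)
```

For a frame rotating anticlockwise about z and a velocity along +x, the result points along −y, i.e. to the right of the motion, matching the text.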
Coriolis force
–
This low-pressure system over Iceland spins counter-clockwise due to balance between the Coriolis force and the pressure gradient force.
Coriolis force
–
Coordinate system at latitude φ with x -axis east, y -axis north and z -axis upward (that is, radially outward from center of sphere).
Coriolis force
–
Cloud formations in a famous image of Earth from Apollo 17 make similar circulation directly visible
Coriolis force
–
A carousel is rotating counter-clockwise. Left panel: a ball is tossed by a thrower at 12:00 o'clock and travels in a straight line to the center of the carousel. While it travels, the thrower circles in a counter-clockwise direction. Right panel: The ball's motion as seen by the thrower, who now remains at 12:00 o'clock, because there is no rotation from their viewpoint.
26.
Angular displacement
–
Angular displacement of a body is the angle in radians through which a point or line has been rotated in a specified sense about a specified axis. When an object rotates about its axis, the motion cannot simply be analyzed as a particle, since in circular motion it undergoes a changing velocity and acceleration at any time. When dealing with the rotation of an object, it becomes simpler to consider the body itself rigid. A body is generally considered rigid when the separations between all the particles remain constant throughout the body's motion, so that, for example, parts of its mass are not flying off. In a realistic sense, all things can be deformable; however this impact is minimal and negligible. Thus the rotation of a rigid body over a fixed axis is referred to as rotational motion. In the example illustrated to the right, a particle on object P is at a distance r from the origin, O. It becomes important to represent the position of particle P in terms of its polar coordinates (r, θ). In this particular example, the value of θ is changing. If θ is measured in radians, it provides a very simple relationship between distance traveled around the circle and the distance r from the centre: s = rθ. Therefore, 1 revolution is 2π radians. When the object travels from point P to point Q, as it does in the illustration to the left, over a time δt the radius of the circle goes around a change in angle Δθ = θ2 − θ1, which equals the angular displacement. In three dimensions, angular displacement is an entity with a direction and a magnitude. The direction specifies the axis of rotation, which always exists by virtue of Euler's rotation theorem. This entity is called an axis-angle. Despite having direction and magnitude, angular displacement is not a vector because it does not obey the commutative law for addition. Nevertheless, when dealing with infinitesimal rotations, second order infinitesimals can be discarded. Several ways to describe angular displacement exist, like rotation matrices or Euler angles.
See charts on SO(3) for others. Given that any frame in the space can be described by a rotation matrix, the angular displacement among them can also be described by a rotation matrix. Being A0 and Af two matrices describing the initial and final frames, the angular displacement matrix between them can be obtained as ΔA = Af A0⁻¹. When this product is performed having a very small difference between both frames, we will obtain a matrix close to the identity. In the limit, we will have an infinitesimal rotation matrix. An infinitesimal angular displacement is an infinitesimal rotation matrix; as any rotation matrix, it has a single real eigenvalue, which is +1. Its module can be deduced from the value of the infinitesimal rotation; when it is divided by the time, it will yield the angular velocity vector. Suppose we specify an axis of rotation by a unit vector; then, expanding the rotation matrix as an infinite sum and taking the first order approximation, the rotation matrix ΔR is represented as ΔR = I + A Δθ, where A is the skew-symmetric matrix associated with the axis vector.
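The first-order approximation ΔR = I + A Δθ can be sketched directly, with A the skew-symmetric matrix of the unit axis vector; the function names here are illustrative assumptions:

```python
def skew(u):
    # Skew-symmetric matrix A of a unit axis vector u, so that A w = u x w.
    x, y, z = u
    return [[0.0, -z,  y],
            [z,   0.0, -x],
            [-y,  x,   0.0]]

def small_rotation(u, dtheta):
    # First-order approximation of a rotation: R ~ I + A * dtheta.
    A = skew(u)
    return [[(1.0 if i == j else 0.0) + A[i][j] * dtheta for j in range(3)]
            for i in range(3)]
```

For a tiny angle about the z-axis the result agrees with the exact rotation matrix to second order, which is why second-order infinitesimals can be discarded.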
Angular displacement
–
Rotation of a rigid object P about a fixed axis O.
27.
Isaac Newton
–
His book Philosophiæ Naturalis Principia Mathematica, first published in 1687, laid the foundations of classical mechanics. Newton also made contributions to optics, and he shares credit with Gottfried Wilhelm Leibniz for developing the infinitesimal calculus. Newton's Principia formulated the laws of motion and universal gravitation that dominated scientists' view of the physical universe for the next three centuries. Newton's work on light was collected in his influential book Opticks. He also formulated a law of cooling and made the first theoretical calculation of the speed of sound. Newton was a fellow of Trinity College and the second Lucasian Professor of Mathematics at the University of Cambridge. Politically and personally tied to the Whig party, Newton served two brief terms as Member of Parliament for the University of Cambridge, in 1689–90 and 1701–02. He was knighted by Queen Anne in 1705, and he spent the last three decades of his life in London, serving as Warden and Master of the Royal Mint. His father, also named Isaac Newton, had died three months before Newton was born. Born prematurely, he was a small child; his mother Hannah Ayscough reportedly said that he could have fit inside a quart mug. When Newton was three, his mother remarried and went to live with her new husband, the Reverend Barnabas Smith, leaving her son in the care of his maternal grandmother. Newton's mother had three children from her second marriage. From the age of twelve until he was seventeen, Newton was educated at The King's School, Grantham, which taught Latin and Greek. He was removed from school, and by October 1659 he was to be found at Woolsthorpe-by-Colsterworth. Henry Stokes, master at The King's School, persuaded his mother to send him back to school so that he might complete his education. Motivated partly by a desire for revenge against a schoolyard bully, he became the top-ranked student.
In June 1661, he was admitted to Trinity College, Cambridge. He started as a subsizar, paying his way by performing valet's duties, until he was awarded a scholarship in 1664, which guaranteed him four more years until he could get his M.A. He set down in his notebook a series of Quaestiones about mechanical philosophy as he found it. In 1665, he discovered the generalised binomial theorem and began to develop a mathematical theory that later became calculus. Soon after Newton had obtained his B.A. degree in August 1665, the university temporarily closed as a precaution against the Great Plague. In April 1667, he returned to Cambridge and in October was elected as a fellow of Trinity. Fellows were required to become ordained priests, although this was not enforced in the Restoration years. However, by 1675 the issue could not be avoided, and by then his unconventional views stood in the way. Nevertheless, Newton managed to avoid it by means of a special permission from Charles II. He was elected a Fellow of the Royal Society in 1672. Newton's work has been said to distinctly advance every branch of mathematics then studied. His work on the subject usually referred to as fluxions or calculus, seen in a manuscript of October 1666, is now published among Newton's mathematical papers.
Isaac Newton
–
Portrait of Isaac Newton in 1689 (age 46) by Godfrey Kneller
Isaac Newton
–
Newton in a 1702 portrait by Godfrey Kneller
Isaac Newton
–
Isaac Newton (Bolton, Sarah K. Famous Men of Science. NY: Thomas Y. Crowell & Co., 1889)
Isaac Newton
–
Replica of Newton's second Reflecting telescope that he presented to the Royal Society in 1672
28.
Jeremiah Horrocks
–
Jeremiah Horrocks, sometimes given as Jeremiah Horrox, was an English astronomer. He was born at Lower Lodge Farm in Toxteth Park. His father James had moved to Toxteth Park to be apprenticed to Thomas Aspinwall, a watchmaker, and subsequently married his master's daughter Mary. Both families were well-educated Puritans, and the Horrocks sent their sons to the University of Cambridge. For their unorthodox beliefs the Puritans were excluded from public office. In 1632 Horrocks matriculated at Emmanuel College at the University of Cambridge as a sizar. At Cambridge he associated with the mathematician John Wallis and the Platonist John Worthington; at that time he was one of only a few at Cambridge to accept Copernicus's revolutionary heliocentric theory, and he studied the works of Johannes Kepler, Tycho Brahe and others. In 1635, for reasons not clear, Horrocks left Cambridge without graduating. Now committed to the study of astronomy, Horrocks began to collect astronomical books and equipment; by 1638 he owned the best telescope he could find. Liverpool was a seafaring town, so navigational instruments such as the astrolabe were available, but there was no market for the very specialised astronomical instruments he needed, so he had to make them himself. He was well placed to do this: his father and uncles were watchmakers with expertise in creating precise instruments. While a youth he read most of the astronomical treatises of his day and marked their weaknesses. Tradition has it that after he left home he supported himself by holding a curacy in Much Hoole, near Preston in Lancashire. According to local tradition in Much Hoole, he lived at Carr House, within the Bank Hall Estate, Bretherton. Carr House was a property owned by the Stones family, who were prosperous farmers and merchants. Horrocks was the first to demonstrate that the Moon moved in an elliptical path around the Earth.
He anticipated Isaac Newton in suggesting the influence of the Sun as well as the Earth on the Moon's orbit; in the Principia, Newton acknowledged Horrocks's work in relation to his theory of lunar motion. In the final months of his life Horrocks made detailed studies of tides, attempting to explain the nature and causation of tidal movements. Kepler's tables had predicted a near-miss of a transit of Venus in 1639 but, having made his own observations of Venus for years, Horrocks predicted a transit would indeed occur. Horrocks made a simple helioscope by focusing the image of the Sun through a telescope onto a plane surface. From his location in Much Hoole he calculated the transit would begin at approximately 3:00 pm on 24 November 1639 (Julian calendar). The weather was cloudy, but he first observed the tiny black shadow of Venus crossing the Sun at about 3:15 pm. The 1639 transit was also observed by William Crabtree from his home in Broughton near Manchester. His estimate of the Earth–Sun distance, a figure of 95 million kilometres, was far from the 150 million kilometres known today. His account displayed Horrocks's enthusiastic and romantic nature, including humorous comments and passages of original poetry.
Jeremiah Horrocks
–
Making the first observation of the transit of Venus in 1639
Jeremiah Horrocks
–
A representation of Horrocks' recording of the transit published in 1662 by Johannes Hevelius
Jeremiah Horrocks
–
The title page of Jeremiah Horrocks' Opera Posthuma, published by the Royal Society in 1672.
Jeremiah Horrocks
–
Jeremiah Horrocks Observatory on Moor Park, Preston
29.
Daniel Bernoulli
–
Daniel Bernoulli FRS was a Swiss mathematician and physicist and was one of the many prominent mathematicians in the Bernoulli family. He is particularly remembered for his applications of mathematics to mechanics, especially fluid mechanics. Daniel Bernoulli was born in Groningen, in the Netherlands, into a family of distinguished mathematicians. The Bernoulli family came originally from Antwerp, at that time in the Spanish Netherlands. After a brief period in Frankfurt the family moved to Basel. Daniel was the son of Johann Bernoulli and a nephew of Jacob Bernoulli. He had two brothers, Niklaus and Johann II. Daniel Bernoulli was described by W. W. Rouse Ball as "by far the ablest of the younger Bernoullis". He is said to have had a bad relationship with his father. Johann Bernoulli also plagiarized some key ideas from Daniel's book Hydrodynamica in his own book Hydraulica, which he backdated to before Hydrodynamica. Despite Daniel's attempts at reconciliation, his father carried the grudge until his death. Around schooling age, his father, Johann, encouraged him to study business, there being poor rewards awaiting a mathematician. However, Daniel refused, because he wanted to study mathematics; he later gave in to his father's wish and studied business. His father then asked him to study medicine, and Daniel agreed under the condition that his father would teach him mathematics privately. Daniel studied medicine at Basel, Heidelberg, and Strasbourg, and earned a PhD in anatomy and botany in 1721. He was a contemporary and close friend of Leonhard Euler. He went to St. Petersburg in 1724 as professor of mathematics, but was very unhappy there, and a temporary illness in 1733 gave him an excuse for leaving St. Petersburg. He returned to the University of Basel, where he held the chairs of medicine, metaphysics, and natural philosophy. In May 1750 he was elected a Fellow of the Royal Society. His earliest mathematical work was the Exercitationes, published in 1724 with the help of Goldbach.
Two years later he pointed out for the first time the frequent desirability of resolving a compound motion into motions of translation and motions of rotation. Together Bernoulli and Euler tried to discover more about the flow of fluids. In particular, they wanted to know about the relationship between the speed at which blood flows and its pressure. Soon physicians all over Europe were measuring patients' blood pressure by sticking point-ended glass tubes directly into their arteries. It was not until about 170 years later, in 1896, that an Italian doctor discovered a less painful method which is still in use today. However, Bernoulli's method of measuring pressure is still used today in modern aircraft to measure the speed of the air passing the plane. Taking his discoveries further, Daniel Bernoulli now returned to his work on Conservation of Energy. It was known that a moving body exchanges its kinetic energy for potential energy when it gains height.
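Bernoulli's pressure-speed relationship, as used in a pitot-style airspeed measurement, can be sketched as follows. The function name and the incompressible-flow assumption are mine, not from the text:

```python
import math

def airspeed_from_pressures(p_total, p_static, rho):
    # Bernoulli's principle for incompressible flow:
    # p_total = p_static + 0.5 * rho * v**2
    # => v = sqrt(2 * (p_total - p_static) / rho)
    return math.sqrt(2.0 * (p_total - p_static) / rho)
```

Measuring a total (stagnation) pressure and a static pressure thus yields the speed of the air passing the aircraft, given the air density rho.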
Daniel Bernoulli
–
Daniel Bernoulli
30.
Augustin-Louis Cauchy
–
Baron Augustin-Louis Cauchy FRS FRSE was a French mathematician who made pioneering contributions to analysis. He was one of the first to state and prove theorems of calculus rigorously, and he almost singlehandedly founded complex analysis and the study of permutation groups in abstract algebra. A profound mathematician, Cauchy had a great influence over his contemporaries. His writings range widely in mathematics and mathematical physics; more concepts and theorems have been named for Cauchy than for any other mathematician. Cauchy was a prolific writer; he wrote approximately eight hundred research articles. Cauchy was the son of Louis François Cauchy and Marie-Madeleine Desestre. Cauchy married Aloise de Bure in 1818. She was a relative of the publisher who published most of Cauchy's works. By her he had two daughters, Marie Françoise Alicia and Marie Mathilde. Cauchy's father was a high official in the Parisian police of the Ancien Régime. He lost this position because of the French Revolution, which broke out one month before Augustin-Louis was born. The Cauchy family survived the Revolution and the following Reign of Terror by escaping to Arcueil, where Cauchy received his first education, from his father. After the execution of Robespierre, it was safe for the family to return to Paris. There Louis-François Cauchy found himself a new bureaucratic job and quickly moved up the ranks. When Napoleon Bonaparte came to power, Louis-François Cauchy was further promoted. The famous mathematician Lagrange was also a friend of the Cauchy family. On Lagrange's advice, Augustin-Louis was enrolled in the École Centrale du Panthéon, where most of the curriculum consisted of classical languages; the young and ambitious Cauchy, being a brilliant student, won many prizes in Latin and the humanities. In spite of these successes, Augustin-Louis chose an engineering career and prepared himself for the entrance examination to the École Polytechnique.
In 1805 he placed second out of 293 applicants on this exam and was admitted. One of the main purposes of this school was to give future civil and military engineers a high-level scientific and mathematical education. The school functioned under military discipline, which caused the young Cauchy some problems in adapting. Nevertheless, he finished the Polytechnique in 1807, at the age of 18, and went on to the École des Ponts et Chaussées. He graduated in civil engineering, with the highest honors. After finishing school in 1810, Cauchy accepted a job as a junior engineer in Cherbourg. Cauchy's first two manuscripts were accepted; the third one was rejected.
Augustin-Louis Cauchy
–
Cauchy around 1840. Lithography by Zéphirin Belliard after a painting by Jean Roller.
Augustin-Louis Cauchy
–
The title page of a textbook by Cauchy.
Augustin-Louis Cauchy
–
Leçons sur le calcul différentiel, 1829
31.
Measure (mathematics)
–
In mathematical analysis, a measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size. In this sense, a measure is a generalization of the concepts of length, area, and volume. For instance, the Lebesgue measure of the interval [0, 1] in the real numbers is its length in the everyday sense of the word, specifically, 1. Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set X. It must further be countably additive: the measure of a subset that can be decomposed into a finite (or countably infinite) number of smaller disjoint subsets is the sum of the measures of the smaller subsets. In general, if one wants to associate a consistent size to each subset of a set while satisfying the other axioms of a measure, one only finds trivial examples. This problem was resolved by defining measure only on a sub-collection of all subsets, the so-called measurable subsets, which are required to form a σ-algebra. This means that countable unions, countable intersections and complements of measurable subsets are measurable. Non-measurable sets in a Euclidean space, on which the Lebesgue measure cannot be defined consistently, are complicated in the sense of being badly mixed up with their complement. Indeed, their existence is a consequence of the axiom of choice. Measure theory was developed in successive stages during the late 19th and early 20th centuries by Émile Borel, Henri Lebesgue, and Johann Radon, among others. The main applications of measures are in the foundations of the Lebesgue integral and in Andrey Kolmogorov's axiomatisation of probability theory. Probability theory considers measures that assign to the whole set the size 1, and considers measurable subsets to be events whose probability is given by the measure. Ergodic theory considers measures that are invariant under, or arise naturally from, a dynamical system. Let X be a set and Σ a σ-algebra over X. A function μ from Σ to the extended real number line is called a measure if it satisfies the following properties: non-negativity, for all E in Σ, μ(E) ≥ 0; null empty set, μ(∅) = 0.
Countable additivity (σ-additivity): for all countable collections {E_k}_{k=1}^∞ of pairwise disjoint sets in Σ, μ(⋃_{k=1}^∞ E_k) = ∑_{k=1}^∞ μ(E_k). One may instead require only that at least one set E has finite measure; then the empty set automatically has measure zero because of countable additivity, since μ(E) = μ(E ∪ ∅ ∪ ∅ ∪ ⋯) = μ(E) + μ(∅) + μ(∅) + ⋯, which implies that μ(∅) = 0. If only the second and third conditions of the definition of measure above are met, μ is called a signed measure. The pair (X, Σ) is called a measurable space, and the members of Σ are called measurable sets. If (X, Σ_X) and (Y, Σ_Y) are two measurable spaces, then a function f : X → Y is called measurable if for every Y-measurable set B ∈ Σ_Y, the inverse image f⁻¹(B) is in Σ_X. A triple (X, Σ, μ) is called a measure space. A probability measure is a measure with total measure one, i.e. μ(X) = 1; a probability space is a measure space with a probability measure.
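The additivity axiom can be illustrated with the simplest example, the counting measure on finite sets. This is a standard textbook example rather than one taken from this article:

```python
def counting_measure(E):
    # The counting measure assigns to each finite set its number of elements.
    return len(E)

# Disjoint additivity: mu(A ∪ B) = mu(A) + mu(B) when A and B are disjoint.
A, B = {1, 2}, {3, 4, 5}
assert counting_measure(A | B) == counting_measure(A) + counting_measure(B)
assert counting_measure(set()) == 0  # the empty set has measure zero
```

On a finite set the counting measure is defined on every subset, which is exactly the "trivial example" situation the text alludes to; the difficulty only arises for measures like Lebesgue's on uncountable spaces.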
Measure (mathematics)
–
Informally, a measure has the property of being monotone in the sense that if A is a subset of B, the measure of A is less than or equal to the measure of B. Furthermore, the measure of the empty set is required to be 0.
32.
Kilogram
–
The kilogram or kilogramme is the base unit of mass in the International System of Units (SI) and is defined as being equal to the mass of the International Prototype of the Kilogram (IPK). The avoirdupois pound, used in both the imperial and US customary systems, is defined as exactly 0.45359237 kg, making one kilogram approximately equal to 2.2046 avoirdupois pounds. Other traditional units of weight and mass around the world are also defined in terms of the kilogram. The gram, 1/1000 of a kilogram, was provisionally defined in 1795 as the mass of one cubic centimetre of water at the melting point of ice. The final kilogram, manufactured as a prototype in 1799 and from which the IPK was derived in 1875, had a mass equal to the mass of 1 dm3 of water at its maximum density. The kilogram is the only SI base unit with an SI prefix as part of its name. It is also the only SI unit that is still directly defined by an artifact rather than a fundamental physical property that can be reproduced in different laboratories. Three other base units and 17 derived units in the SI system are defined relative to the kilogram; only 8 other units do not require the kilogram in their definition: temperature, time and frequency, length, and angle. At its 2011 meeting, the CGPM agreed in principle that the kilogram should be redefined in terms of the Planck constant. The decision was originally deferred until 2014; in 2014 it was deferred again until the next meeting. There are currently several different proposals for the redefinition; these are described in the Proposed Future Definitions section below. The International Prototype Kilogram is rarely used or handled. In the decree of 1795, the term gramme thus replaced gravet. The French spelling was adopted in the United Kingdom when the word was used for the first time in English in 1797, with the spelling kilogram being adopted in the United States.
In the United Kingdom both spellings are used, with kilogram having become by far the more common. UK law regulating the units to be used when trading by weight or measure does not prevent the use of either spelling. In the 19th century the French word kilo, a shortening of kilogramme, was imported into the English language, where it has been used to mean both kilogram and kilometre. In 1935 this was adopted by the IEC as the Giorgi system, now known as the MKS system. In 1948 the CGPM commissioned the CIPM to make recommendations for a practical system of units of measurement. This led to the launch of SI in 1960 and the subsequent publication of the SI Brochure. The kilogram is a unit of mass, a property which corresponds to the common perception of how heavy an object is. Mass is an inertial property; that is, it is related to the tendency of an object at rest to remain at rest, or if in motion to remain in motion at a constant velocity. Accordingly, for astronauts in microgravity, no effort is required to hold objects off the cabin floor; they are weightless. However, objects in microgravity still retain their mass and inertia, and when two objects are compared on a balance, the ratio of the force of gravity on the two objects, measured by the scale, is equal to the ratio of their masses. On April 7, 1795, the gram was decreed in France to be the weight of a volume of pure water equal to the cube of the hundredth part of the metre.
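The exact pound definition quoted above makes the kilogram-pound conversion a one-liner; a trivial sketch (the function name is an assumption):

```python
KG_PER_POUND = 0.45359237  # exact, by definition of the avoirdupois pound

def kilograms_to_pounds(kg):
    # One kilogram is approximately 2.2046 avoirdupois pounds.
    return kg / KG_PER_POUND
```

The round-trip through the exact definition is lossless, while the familiar 2.2046 figure is only an approximation.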
Kilogram
–
A domestic-quality one-kilogram weight made of cast iron (the credit card is for scale). The shape follows OIML recommendation R52 for cast-iron hexagonal weights
Kilogram
–
Measurement of weight – gravitational attraction of the measurand causes a distortion of the spring
Kilogram
–
Measurement of mass – the gravitational force on the measurand is balanced against the gravitational force on the weights.
Kilogram
–
The Arago kilogram, an exact copy of the "Kilogramme des Archives" commissioned in 1821 by the US under supervision of French physicist François Arago that served as the US's first kilogram standard of mass until 1889, when the US converted to primary metric standards and received its current kilogram prototypes, K4 and K20.
33.
Weight
–
In science and engineering, the weight of an object is usually taken to be the force on the object due to gravity. Weight is a vector whose magnitude, often denoted by an italic letter W, is the product of the mass m of the object and the magnitude of the local gravitational acceleration g; thus W = mg. The unit of measurement for weight is that of force, which in the International System of Units (SI) is the newton. For example, an object with a mass of one kilogram has a weight of about 9.8 newtons on the surface of the Earth. In this sense of weight, a body can be weightless only if it is far away from any other mass. Although weight and mass are scientifically distinct quantities, the terms are often confused with each other in everyday use. There is also a tradition within Newtonian physics and engineering which sees weight as that which is measured when one uses scales. There the weight is a measure of the magnitude of the reaction force exerted on a body. Typically, in measuring an object's weight, the object is placed on scales at rest with respect to the Earth; thus, in a state of free fall, the weight would be zero. In this second sense of weight, terrestrial objects can be weightless: ignoring air resistance, the famous apple falling from the tree, on its way to meet the ground near Isaac Newton, would be weightless. Further complications in elucidating the various concepts of weight have to do with the theory of relativity, according to which gravity is modelled as a consequence of the curvature of spacetime. In the teaching community, a debate has existed for over half a century on how to define weight for students. The current situation is that a set of concepts co-exist. Discussion of the concepts of heaviness and lightness dates back to the ancient Greek philosophers, and these were typically viewed as inherent properties of objects. Plato described weight as the tendency of objects to seek their kin. To Aristotle, weight and levity represented the tendency to restore the natural order of the basic elements: air, earth, fire and water.
He ascribed absolute weight to earth and absolute levity to fire. Archimedes saw weight as a quality opposed to buoyancy, with the conflict between the two determining if an object sinks or floats. The first operational definition of weight was given by Euclid, who defined weight as the heaviness or lightness of one thing, compared to another, as measured by a balance; operational balances had, however, been around much longer. According to Aristotle, weight was the direct cause of the falling motion of an object.
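The gravitational definition of weight described above, W = mg, can be sketched in a few lines. This is a minimal illustration; the function name and default value of g are assumptions for the example, not from any standard library.

```python
# Gravitational definition of weight: W = m * g.
# Names and defaults here are illustrative only.
def weight(mass_kg, g=9.8):
    """Weight in newtons of a mass in kilograms under gravitational acceleration g (m/s^2)."""
    return mass_kg * g

print(weight(1.0))     # one kilogram weighs about 9.8 N on the Earth's surface
print(weight(1.0, 0))  # in free fall (g effectively zero), the operational weight is zero
```

Setting g to zero models the free-fall case in which, under the operational definition, the weight vanishes while the mass is unchanged.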
Weight
–
Ancient Greek official bronze weights dating from around the 6th century BC, exhibited in the Ancient Agora Museum in Athens, housed in the Stoa of Attalus.
Weight
–
Weighing grain, from the Babur-namah
Weight
–
This top-fuel dragster can accelerate from zero to 160 kilometres per hour (99 mph) in 0.86 seconds. This is a horizontal acceleration of 5.3 g. Combined with the vertical g-force in the stationary case, the Pythagorean theorem yields a g-force of 5.4 g. It is this g-force that determines the driver's weight if one uses the operational definition. If one uses the gravitational definition, the driver's weight is unchanged by the motion of the car.
Weight
–
Measuring weight versus mass
34.
Weighing scale
–
Weighing scales are devices to measure weight or calculate mass. Scales and balances are widely used in commerce, as many products are sold and packaged by mass. Very accurate balances, called analytical balances, are used in fields such as chemistry. Although records dating to the 1700s refer to spring scales for measuring weight, the earliest design for such a device dates to 1770 and credits Richard Salter, an early scale-maker. Postal workers could work more quickly with spring scales than with balance scales because they could be read instantaneously. By the 1940s various electronic devices were being attached to these designs to make them more accurate. A spring scale measures weight by reporting the distance that a spring deflects under a load; this contrasts with a balance, which compares the torque on the arm due to a sample weight to the torque on the arm due to a standard reference weight using a horizontal lever. Spring scales measure force, which is the force of constraint acting on an object. They are usually calibrated so that measured force translates to mass at Earth's gravity; the object to be weighed can be simply hung from the spring or set on a pivot-and-bearing platform. In a spring scale, the spring either stretches or compresses; by Hooke's law, every spring has a proportionality constant that relates how hard it is pulled to how far it stretches. Rack and pinion mechanisms are used to convert the linear spring motion to a dial reading. With proper manufacturing and setup, however, spring scales can be rated as legal for commerce. To remove the temperature error, a commerce-legal spring scale must either have temperature-compensated springs or be used at a fairly constant temperature. To eliminate the effect of gravity variations, a spring scale must be calibrated where it is used. It is also common in high-capacity applications such as crane scales to use hydraulic force to sense weight.
The test force is applied to a piston or diaphragm and transmitted through hydraulic lines to an indicator based on a Bourdon tube or an electronic sensor. A digital bathroom scale is a type of electronic weighing machine; a smart digital bathroom scale offers many functions such as smartphone integration, cloud storage and fitness tracking. In electronic versions of spring scales, the deflection of a beam supporting the load is measured using a strain gauge. The capacity of such devices is limited only by the resistance of the beam to deflection; these scales are used in modern bakery, grocery, delicatessen, seafood, meat, produce and other perishable goods departments.
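The spring-scale principle above, Hooke's law F = kx followed by division by g to convert force into a mass reading, can be sketched as follows. The spring constant and deflection values are assumptions chosen for the example.

```python
# Sketch of a spring scale: by Hooke's law the restoring force is F = k * x,
# so the deflection x under a load is proportional to the force; dividing the
# force by g converts it to a calibrated mass reading.
SPRING_K = 2000.0      # spring constant in N/m (assumed value)
STANDARD_G = 9.80665   # standard gravity in m/s^2

def mass_from_deflection(x_m, k=SPRING_K, g=STANDARD_G):
    force_n = k * x_m   # Hooke's law: force exerted by the deflected spring
    return force_n / g  # reading in kilograms, valid only at the calibration gravity

# A deflection of about 4.9 mm corresponds to roughly 1 kg at standard gravity.
print(mass_from_deflection(0.0049033))
```

Because the conversion divides by a fixed g, the reading is only correct at the gravity where the scale was calibrated, which is why commerce-legal spring scales must be calibrated on site.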
Weighing scale
–
Digital kitchen scale, a strain gauge scale
Weighing scale
–
Scales used for trade purposes in the state of Florida, such as this scale at the checkout in a cafeteria, are inspected for accuracy by the FDACS's Bureau of Weights and Measures.
Weighing scale
–
A two-pan balance
Weighing scale
–
Two 10-decagram masses
35.
Newtonian physics
–
In physics, classical mechanics is one of the two major sub-fields of mechanics, along with quantum mechanics. Classical mechanics is concerned with the set of physical laws describing the motion of bodies under the influence of a system of forces. The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering and technology. Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets and stars. Within classical mechanics are fields of study that describe the behavior of solids, liquids and gases. Classical mechanics provides extremely accurate results as long as the domain of study is restricted to large objects and speeds that do not approach the speed of light; where neither classical nor ordinary quantum mechanics applies, such as at the quantum level with high speeds, quantum field theory becomes applicable. Since these aspects of physics were developed long before the emergence of quantum physics and relativity, some sources exclude relativity from classical mechanics; however, a number of modern sources do include relativistic mechanics, which in their view represents classical mechanics in its most developed and accurate form. Later, more abstract and general methods were developed, leading to reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. These advances were largely made in the 18th and 19th centuries, and they extend substantially beyond Newton's work, particularly through their use of analytical mechanics. The following introduces the basic concepts of classical mechanics. For simplicity, it often models real-world objects as point particles; the motion of a point particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn. In reality, the kind of objects that classical mechanics can describe always have a non-zero size.
Objects with non-zero size have more complicated behavior than hypothetical point particles, because of their additional degrees of freedom. However, the results for point particles can be used to study such objects by treating them as composite objects; the center of mass of a composite object behaves like a point particle. Classical mechanics uses common-sense notions of how matter and forces exist and interact. It assumes that matter and energy have definite, knowable attributes, such as where an object is in space. Non-relativistic mechanics also assumes that forces act instantaneously. The position of a point particle is defined with respect to a fixed reference point in space called the origin O. A simple coordinate system might describe the position of a point P by means of a vector, designated r. In general, the point particle need not be stationary relative to O, so that r is a function of t, the time.
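The idea that a point particle is characterized by a position r that is a function of time t can be illustrated numerically: given r(t), the velocity is the time derivative dr/dt. This is a one-dimensional sketch with made-up initial values, using a central finite difference to approximate the derivative.

```python
# A point particle's position r(t) in one dimension, with assumed
# initial position x0 = 0, initial velocity v0 = 2 m/s and constant
# acceleration a = 3 m/s^2: r(t) = v0*t + a*t^2/2.
def r(t):
    return 2.0 * t + 0.5 * 3.0 * t**2

def velocity(t, dt=1e-6):
    # Central-difference approximation of dr/dt.
    return (r(t + dt) - r(t - dt)) / (2 * dt)

print(velocity(1.0))  # analytically v0 + a*t = 2 + 3*1 = 5 m/s
```

The numerical derivative agrees with the analytic velocity v0 + at to high precision, which is the kind of consistency check one expects when a trajectory is specified as a function of time.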
Newtonian physics
–
Sir Isaac Newton (1643–1727), an influential figure in the history of physics and whose three laws of motion form the basis of classical mechanics
Newtonian physics
–
Diagram of orbital motion of a satellite around the earth, showing perpendicular velocity and acceleration (force) vectors.
Newtonian physics
–
Hamilton's greatest contribution is perhaps the reformulation of Newtonian mechanics, now called Hamiltonian mechanics.
36.
Forms of energy
–
In the context of physical science, several forms of energy have been identified. Some entries in the list constitute or comprise others, and the list is not necessarily complete. Whenever physical scientists discover that a phenomenon appears to violate the law of energy conservation, new forms of energy are typically added to account for the discrepancy. Heat, work, and transfer of matter are special cases in that they are not properties of systems, but are instead properties of processes that transfer energy. In general we cannot measure how much heat or work is present in an object; these energy transfers are measured as positive or negative depending on which side of the transfer we view them from. Some forms of energy are varieties of potential energy; these include gravitational energy, elastic energy and electric energy. Other familiar types of energy are a mix of both potential and kinetic energy. An example is mechanical energy, which is the sum of kinetic and potential energy in a system. Energy may be transformed between different forms at various efficiencies; items that transform between these forms are called transducers. In general non-relativistic mechanics, mechanical energy manifests in many forms, but can be broadly classified into kinetic energy and potential energy. The term potential energy is a general term, because it exists in all force fields, such as gravitation. Potential energy refers to the energy any object gains due to its position in a force field; the relation between mechanical energy and the kinetic and potential energies is simply E = T + V. The Hamiltonian is just an expression for the total energy, rather than a form of energy in itself. Kinetic energy is the energy required to accelerate an object to a given speed. In the relativistic expression, the two terms on the right-hand side are identified with the total energy and the rest energy of the object; this equation reduces to the non-relativistic one at small speeds. The kinetic energy is zero at v = 0, so that at rest the total energy is the rest energy. Potential energy is defined as the work done against a force in changing the position of an object with respect to a reference position.
In other words, it is the work done on the object to give it that much energy. Changes in work and potential energy are related simply: ΔU = −ΔW.
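The relation E = T + V can be made concrete with a falling object: kinetic energy T = mv²/2 plus gravitational potential energy V = mgh stays constant in free fall. The mass and heights below are assumed example values.

```python
import math

# Mechanical energy E = T + V for a freely falling object.
# M and G are illustrative values, not physical constants of any system.
M, G = 2.0, 9.8  # mass in kg, gravitational acceleration in m/s^2

def mechanical_energy(v, h):
    T = 0.5 * M * v**2  # kinetic energy
    V = M * G * h       # potential energy relative to the ground
    return T + V

# Dropped from rest at 10 m; after falling 5 m its speed is sqrt(2*g*5).
print(mechanical_energy(0.0, 10.0))
print(mechanical_energy(math.sqrt(2 * G * 5.0), 5.0))
```

Both calls print the same total, 196 J: the potential energy lost in the fall reappears as kinetic energy, which is exactly what conservation of mechanical energy asserts.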
Forms of energy
–
Thermal energy is energy of microscopic constituents of matter, which may include both kinetic and potential energy.
Forms of energy
37.
Gravitational force
–
Gravity, or gravitation, is a natural phenomenon by which all things with mass are brought toward one another, including planets, stars and galaxies. Since energy and mass are equivalent, all forms of energy, including light, also cause gravitation and are under its influence. On Earth, gravity gives weight to physical objects and causes the ocean tides. Gravity has an infinite range, although its effects become increasingly weaker on farther objects. Gravity is most accurately described by the general theory of relativity, which describes it not as a force but as a consequence of the curvature of spacetime. The most extreme example of this curvature of spacetime is a black hole, from which nothing can escape once past its event horizon. Stronger gravity results in gravitational time dilation, where time lapses more slowly at a lower gravitational potential. Gravity is the weakest of the four fundamental interactions of nature: the gravitational attraction is approximately 10^38 times weaker than the strong force, 10^36 times weaker than the electromagnetic force and 10^29 times weaker than the weak force. As a consequence, gravity has a negligible influence on the behavior of subatomic particles. On the other hand, gravity is the dominant interaction at the macroscopic scale; for this reason, in part, the pursuit of a theory of everything, the merging of the general theory of relativity and quantum mechanics into quantum gravity, has become an area of research. While the modern European thinkers are credited with development of gravitational theory, some of the earliest descriptions came from early mathematician-astronomers, such as Aryabhata, who identified the force of gravity to explain why objects do not fly off as the Earth rotates. Later, the works of Brahmagupta referred to the presence of this force and described it as an attractive force. Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries; this was a major departure from Aristotle's belief that heavier objects have a higher gravitational acceleration.
Galileo postulated air resistance as the reason that objects with less mass may fall more slowly in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity. In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Calculations by both John Couch Adams and Urbain Le Verrier predicted the position of the planet. A discrepancy in Mercury's orbit pointed out flaws in Newton's theory; the issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit. The simplest way to test the equivalence principle is to drop two objects of different masses or compositions in a vacuum and see whether they hit the ground at the same time. Such experiments demonstrate that all objects fall at the same rate when other forces are negligible.
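Newton's inverse-square law of universal gravitation, F = G·m1·m2/r², can be sketched numerically. The rough Earth mass and radius below are standard reference figures used here only for illustration.

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2.
G = 6.674e-11        # gravitational constant, N*m^2/kg^2
M_EARTH = 5.972e24   # approximate mass of the Earth, kg
R_EARTH = 6.371e6    # approximate mean radius of the Earth, m

def gravity_force(m1, m2, r):
    """Magnitude of the gravitational attraction between two point masses."""
    return G * m1 * m2 / r**2

# Force on a 1 kg mass at the Earth's surface: about 9.8 N,
# consistent with the familiar value of g.
print(gravity_force(1.0, M_EARTH, R_EARTH))
```

Recovering roughly 9.8 N per kilogram from G, the Earth's mass and its radius shows how the inverse-square law reproduces the surface value of gravitational acceleration.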
Gravitational force
–
Sir Isaac Newton, an English physicist who lived from 1642 to 1727
Gravitational force
–
Two-dimensional analogy of spacetime distortion generated by the mass of an object. Matter changes the geometry of spacetime, this (curved) geometry being interpreted as gravity. White lines do not represent the curvature of space but instead represent the coordinate system imposed on the curved spacetime, which would be rectilinear in a flat spacetime.
Gravitational force
–
Ball falling freely under gravity. See text for description.
Gravitational force
–
Gravity acts on stars that compose our Milky Way.
38.
Orders of magnitude (mass)
–
To help compare different orders of magnitude, the following lists describe various mass levels between 10^−40 kg and 10^53 kg. The table below is based on the kilogram, the base unit of mass in the International System of Units. The kilogram is the only standard unit to include an SI prefix as part of its name. The gram is an SI derived unit of mass; however, the names of all SI mass units are based on the gram, rather than on the kilogram, thus 10^3 kg is a megagram, not a kilokilogram. The tonne is an SI-compatible unit of mass equal to a megagram. The unit is in common use for masses above about 10^3 kg and is often used with SI prefixes. Other units of mass are also in use; historical units include the stone, the pound, the carat, and the grain. For subatomic particles, physicists use the mass equivalent to the energy represented by an electronvolt. At the atomic level, chemists use the mass of one-twelfth of a carbon-12 atom; astronomers use the mass of the Sun. Unlike other physical quantities, mass-energy does not have an a priori expected minimal quantity, as is the case with time or length; Planck's law allows for the existence of photons with arbitrarily low energies. This series on orders of magnitude does not have a range of larger masses.
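Comparing masses that span from 10^−40 kg to 10^53 kg is easiest on a logarithmic scale. The sketch below reports the order of magnitude of a few illustrative masses in kilograms; the sample values are well-known approximate figures used only as examples.

```python
import math

# Order of magnitude of a mass in kilograms: the integer exponent n
# such that the mass lies in [10^n, 10^(n+1)).
masses_kg = {
    "electron": 9.109e-31,
    "one kilogram": 1.0,
    "tonne (megagram)": 1.0e3,
    "Sun": 1.989e30,
}

for name, kg in masses_kg.items():
    print(f"{name}: 10^{math.floor(math.log10(kg))} kg")
```

Note that the electron's order of magnitude comes out as 10^−31 rather than 10^−30, because 9.109×10^−31 is still below 10^−30; the floor of the base-10 logarithm captures exactly the convention such lists use.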
Orders of magnitude (mass)
–
Iron weights up to 50 kilograms depicted in Dictionnaire encyclopédique de l'épicerie et des industries annexes.
39.
International System of Units
–
The International System of Units is the modern form of the metric system, and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units; the system also establishes a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system was published in 1960 as the result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units rather than any variant of the centimetre–gram–second system. The motivation for the development of the SI was the diversity of units that had sprung up within the CGS systems. The International System of Units has been adopted by most developed countries; however, the adoption has not been universal in all English-speaking countries. The metric system was first implemented during the French Revolution with just the metre and kilogram as standards of length and mass. In the 1830s Carl Friedrich Gauss laid the foundations for a coherent system based on length, mass, and time. In the 1860s a group working under the auspices of the British Association for the Advancement of Science formulated the requirement for a coherent system of units with base units and derived units. Meanwhile, in 1875, the Treaty of the Metre passed responsibility for verification of the kilogram and the metre to the newly formed International Bureau of Weights and Measures. In 1921, the Treaty was extended to include all physical quantities, including the electrical units originally defined in 1893. The units associated with these quantities were the metre, kilogram, second, ampere, kelvin and candela; in 1971, a seventh base quantity, amount of substance, represented by the mole, was added to the definition of SI. On 11 July 1792, the committee proposed the names metre, are, litre and grave for the units of length, area, capacity and mass, respectively.
The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes such as centi for a hundredth. On 10 December 1799, the law by which the metric system was to be definitively adopted in France was passed. Prior to Gauss's work, the strength of the earth's magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a magnet of known mass by the earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions to the magnetic field based on mass, length and time. A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention. Initially the convention only covered standards for the metre and the kilogram; one of each was selected at random to become the International prototype metre and International prototype kilogram that replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the prototypes to serve as the national prototype for that country. Initially the convention's prime purpose was a periodic recalibration of national prototype metres. The official language of the Metre Convention is French, and the authoritative version of all official documents published by or on behalf of the CGPM is the French-language version.
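The coherence of the SI means every derived unit is a product of powers of the seven base units with no extra numerical factor. A minimal sketch of that bookkeeping, representing a unit as a tuple of exponents over the base units (the representation and names are my own illustration, not an official scheme):

```python
# Represent an SI unit as exponents over the seven base units,
# in the order (m, kg, s, A, K, mol, cd). Illustrative sketch only.
BASE = ("m", "kg", "s", "A", "K", "mol", "cd")

def unit(**exps):
    """Build an exponent tuple; unnamed base units get exponent 0."""
    return tuple(exps.get(b, 0) for b in BASE)

def multiply(u, v):
    """Product of two units adds their exponents (coherence: no factors)."""
    return tuple(a + b for a, b in zip(u, v))

newton = unit(m=1, kg=1, s=-2)       # force: kg*m/s^2
joule = multiply(newton, unit(m=1))  # energy: N*m = kg*m^2/s^2
print(newton, joule)
```

Because coherent derived units are formed purely by adding exponents, no conversion constants ever appear; that is exactly the property the SI's designers sought.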
International System of Units
–
Stone marking the Austro-Hungarian /Italian border at Pontebba displaying myriametres, a unit of 10 km used in Central Europe in the 19th century (but since deprecated).
International System of Units
–
The seven base units in the International System of Units
International System of Units
–
Carl Friedrich Gauss
International System of Units
–
Thomson
40.
International prototype kilogram
–
The kilogram or kilogramme is the base unit of mass in the International System of Units and is defined as being equal to the mass of the International Prototype of the Kilogram (IPK). The avoirdupois pound, used in both the imperial and US customary systems, is defined as exactly 0.45359237 kg, making one kilogram approximately equal to 2.2046 avoirdupois pounds. Other traditional units of weight and mass around the world are also defined in terms of the kilogram. The gram, 1/1000 of a kilogram, was provisionally defined in 1795 as the mass of one cubic centimetre of water at the melting point of ice. The final kilogram, manufactured as a prototype in 1799 and from which the IPK was derived in 1875, had a mass equal to the mass of 1 dm3 of water at its maximum density. The kilogram is the only SI base unit with an SI prefix as part of its name, and it is also the only SI unit that is still directly defined by an artifact rather than a fundamental physical property that can be reproduced in different laboratories. Three other base units and 17 derived units in the SI system are defined relative to the kilogram; only 8 other units do not require the kilogram in their definition: those of temperature, time and frequency, length, and angle. At its 2011 meeting, the CGPM agreed in principle that the kilogram should be redefined in terms of the Planck constant; the decision was originally deferred until 2014, and in 2014 it was deferred again until the next meeting. There are currently several different proposals for the redefinition; these are described in the Proposed Future Definitions section below. The International Prototype Kilogram is rarely used or handled. In the decree of 1795, the term gramme thus replaced gravet. The French spelling was adopted in the United Kingdom when the word was used for the first time in English in 1797, with the spelling kilogram being adopted in the United States.
In the United Kingdom both spellings are used, with kilogram having become by far the more common; UK law regulating the units to be used when trading by weight or measure does not prevent the use of either spelling. In the 19th century the French word kilo, a shortening of kilogramme, was imported into the English language, where it has come to be used to mean both kilogram and kilometre. In 1935 this was adopted by the IEC as the Giorgi system, now known as the MKS system. In 1948 the CGPM commissioned the CIPM to make recommendations for a practical system of units of measurement. This led to the launch of SI in 1960 and the subsequent publication of the SI Brochure. The kilogram is a unit of mass, a property which corresponds to the common perception of how heavy an object is. Mass is an inertial property; that is, it is related to the tendency of an object at rest to remain at rest, or if in motion to remain in motion at a constant velocity. Accordingly, for astronauts in microgravity, no effort is required to hold objects off the cabin floor; they are weightless. However, objects in microgravity still retain their mass and inertia. When two objects are compared with a balance, the ratio of the force of gravity on the two objects, measured by the scale, is equal to the ratio of their masses. On April 7, 1795, the gram was decreed in France to be the weight of a volume of pure water equal to the cube of the hundredth part of the metre.
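Since the avoirdupois pound is defined as exactly 0.45359237 kg, the kilogram-to-pound conversion follows by simple division. A minimal sketch:

```python
# The avoirdupois pound is exactly 0.45359237 kg by definition,
# so one kilogram is approximately 2.2046 lb.
LB_IN_KG = 0.45359237  # exact, by definition

def kg_to_lb(kg):
    return kg / LB_IN_KG

def lb_to_kg(lb):
    return lb * LB_IN_KG

print(round(kg_to_lb(1.0), 4))  # about 2.2046
```

Because the defining constant is exact, round-tripping a value through both functions introduces only floating-point error, not definitional error.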
International prototype kilogram
–
A domestic-quality one-kilogram weight made of cast iron (the credit card is for scale). The shape follows OIML recommendation R52 for cast-iron hexagonal weights
International prototype kilogram
–
Measurement of weight – gravitational attraction of the measurand causes a distortion of the spring
International prototype kilogram
–
Measurement of mass – the gravitational force on the measurand is balanced against the gravitational force on the weights.
International prototype kilogram
–
The Arago kilogram, an exact copy of the "Kilogramme des Archives" commissioned in 1821 by the US under supervision of French physicist François Arago that served as the US's first kilogram standard of mass until 1889, when the US converted to primary metric standards and received its current kilogram prototypes, K4 and K20.
41.
Planck constant
–
The Planck constant is a physical constant that is the quantum of action, central in quantum mechanics. The light quantum behaved in some respects as an electrically neutral particle, and was eventually called the photon. The Planck–Einstein relation connects the particulate photon energy E with its associated wave frequency f: E = hf. This energy is extremely small in terms of ordinarily perceived everyday objects. Since the frequency f, wavelength λ, and speed of light c are related by f = c/λ, the relation can also be expressed as E = hc/λ. This leads to another relationship involving the Planck constant: with p denoting the linear momentum of a particle, the de Broglie wavelength λ of the particle is given by λ = h/p. In applications where it is natural to use the angular frequency, it is often useful to absorb a factor of 2π into the Planck constant. The resulting constant is called the reduced Planck constant or Dirac constant; it is equal to the Planck constant divided by 2π, and is denoted ħ: ℏ = h/2π. The energy of a photon with angular frequency ω, where ω = 2πf, is given by E = ℏω, while its linear momentum relates to the angular wavenumber k by p = ℏk. This was confirmed by experiments soon afterwards. It holds throughout quantum theory, including electrodynamics. These two relations are the temporal and spatial component parts of the special relativistic expression using 4-vectors: P^μ = (E/c, p) = ℏK^μ = ℏ(ω/c, k). Classical statistical mechanics requires the existence of h. Eventually, following upon Planck's discovery, it was recognized that physical action cannot take on an arbitrary value; instead, it must be some multiple of a very small quantity, the quantum of action. This is the old quantum theory developed by Bohr and Sommerfeld, in which particle trajectories exist but are hidden; thus there is no value of the action as classically defined. Related to this is the concept of energy quantization, which existed in old quantum theory and also exists in altered form in modern quantum physics.
Classical physics cannot explain either quantization of energy or the lack of classical particle motion. In many cases, such as for light or for atoms, quantization of energy also implies that only certain energy levels are allowed. The Planck constant has dimensions of physical action, i.e. energy multiplied by time, or momentum multiplied by distance; in SI units, the Planck constant is expressed in joule-seconds (equivalently N⋅m⋅s or kg⋅m2⋅s−1). The value of the Planck constant is h = 6.626070040×10^−34 J⋅s = 4.135667662×10^−15 eV⋅s. The value of the reduced Planck constant is ℏ = h/2π = 1.054571800×10^−34 J⋅s = 6.582119514×10^−16 eV⋅s.
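The Planck–Einstein relation E = hf, together with f = c/λ, gives the energy of a photon directly from its wavelength. A small sketch, using a 500 nm (green) photon as the example:

```python
# Photon energy from the Planck-Einstein relation E = h * f, with f = c / lambda.
H = 6.626070040e-34  # Planck constant, J*s
C = 2.99792458e8     # speed of light in vacuum, m/s

def photon_energy(wavelength_m):
    f = C / wavelength_m  # frequency from wavelength
    return H * f          # energy in joules

E = photon_energy(500e-9)  # a 500 nm (green) photon
print(E)                   # a few 10^-19 J
print(E / 1.6021766208e-19)  # the same energy expressed in eV
```

The result, a few times 10^−19 J (about 2.5 eV), illustrates the remark above that single-photon energies are extremely small compared with everyday energies.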
Planck constant
–
Plaque at the Humboldt University of Berlin: "Max Planck, discoverer of the elementary quantum of action h, taught in this building from 1889 to 1928."
42.
Tonne
–
The SI symbol for the tonne is t, adopted at the same time as the unit itself in 1879. Its use is also official for the metric ton within the United States, having been adopted by the US National Institute of Standards and Technology. It is a symbol, not an abbreviation, and should not be followed by a period. Informal and non-approved symbols or abbreviations include T, mT and MT. In French and all English-speaking countries that are predominantly metric, tonne is the correct spelling. Before metrication in the UK the unit used for most purposes was the Imperial ton of 2,240 pounds avoirdupois, equivalent to 1,016 kg, differing by just 1.6% from the tonne. Ton and tonne are both derived from a Germanic word in use in the North Sea area since the Middle Ages to designate a large cask. A full tun, standing about a metre high, could easily weigh a tonne. An English tun of wine weighs roughly a tonne, 954 kg if full of water. In the United States, the unit was originally referred to using the French words millier or tonneau, but these terms are now obsolete. The Imperial and US customary units comparable to the tonne are both spelled ton in English, though they differ in mass. One tonne is equivalent to: in metric/SI units, 1 megagram, equal to 1,000,000 grams or 1,000 kilograms (the megagram, Mg, is the official SI unit; Mg is distinct from mg, the milligram); in pounds, exactly 1000/0.45359237 lb, or approximately 2204.622622 lb; in US/short tons, exactly 1/0.90718474 short tons, or approximately 1.102311311 ST (one short ton is exactly 0.90718474 t); and in Imperial/long tons, exactly 1/1.0160469088 long tons, or approximately 0.9842065276 LT (one long ton is exactly 1.0160469088 t). For multiples of the tonne, it is more usual to speak of thousands or millions of tonnes. Kilotonne, megatonne, and gigatonne are more commonly used for the energy of nuclear explosions and other events.
When used in context, there is little need to distinguish between metric and other tons, and the unit is spelt either as ton or tonne with the relevant prefix attached. Though non-standard, the symbol kt is also sometimes used for the knot, a unit of speed for sea-going vessels, and should not be confused with the kilotonne. A metric ton unit can mean 10 kilograms within metal trading; it traditionally referred to a metric ton of ore containing 1% of metal. In the case of uranium, the acronym MTU is sometimes considered to be metric ton of uranium. In the petroleum industry the tonne of oil equivalent is a unit of energy: the amount of energy released by burning one tonne of crude oil, approximately 42 GJ.
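The exact tonne conversions quoted above (one short ton = 0.90718474 t, one long ton = 1.0160469088 t) translate directly into code. A minimal sketch:

```python
# Exact conversions involving the tonne (1 t = 1000 kg):
# the short ton is exactly 0.90718474 t and the long ton exactly 1.0160469088 t.
T_PER_SHORT_TON = 0.90718474
T_PER_LONG_TON = 1.0160469088

def tonnes_to_short_tons(t):
    return t / T_PER_SHORT_TON

def tonnes_to_long_tons(t):
    return t / T_PER_LONG_TON

print(round(tonnes_to_short_tons(1.0), 6))  # about 1.102311
print(round(tonnes_to_long_tons(1.0), 6))   # about 0.984207
```

The printed values match the approximate equivalences given in the text: one tonne is about 1.1023 short tons and about 0.9842 long tons.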
Tonne
–
Base units
43.
Electronvolt
–
In physics, the electronvolt is a unit of energy equal to approximately 1.6×10^−19 joules. By definition, it is the amount of energy gained by the charge of a single electron moving across an electric potential difference of one volt. Thus it is 1 volt multiplied by the elementary charge; therefore, one electronvolt is equal to 1.6021766208×10^−19 J. The electronvolt is not an SI unit, and its definition is empirical; like the elementary charge on which it is based, it is not an independent quantity but is equal to 1 J/C × √(2hα/μ0c0). It is a common unit of energy within physics, widely used in solid state, atomic, nuclear and particle physics. It is commonly used with the metric prefixes milli-, kilo-, mega- and giga-. In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for billion electronvolts; it is equivalent to the GeV. By mass–energy equivalence, the electronvolt is also a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c2, where c is the speed of light in vacuum; it is also common to express mass simply in terms of eV as a unit of mass. The mass equivalent of 1 eV/c2 is 1 eV/c2 = (1.602×10^−19 C × 1 V)/(2.998×10^8 m/s)2 = 1.783×10^−36 kg. For example, an electron and a positron, each with a mass of 0.511 MeV/c2, can annihilate to yield 1.022 MeV of energy; the proton has a mass of 0.938 GeV/c2. In general, the masses of all hadrons are of the order of 1 GeV/c2. The unified atomic mass unit, 1 gram divided by Avogadro's number, is almost the mass of a hydrogen atom, which is mostly the mass of the proton. To convert to megaelectronvolts, use the formula: 1 u = 931.4941 MeV/c2 = 0.9314941 GeV/c2. In high-energy physics, the electronvolt is often used as a unit of momentum. A potential difference of 1 volt causes an electron to gain a definite amount of energy; this gives rise to the usage of eV as a unit of momentum, for the energy supplied results in acceleration of the particle. The dimensions of momentum units are LMT−1; the dimensions of energy units are L2MT−2.
Then, dividing the units of energy by a fundamental constant that has units of velocity yields units of momentum. In the field of particle physics, the fundamental velocity unit is the speed of light in vacuum c. Thus, dividing energy in eV by the speed of light, one can describe the momentum of a particle in units of eV/c. The fundamental velocity constant c is often dropped from the units of momentum by way of defining units of length such that the value of c is unity.
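The eV-to-joule and eV/c²-to-kilogram conversions described above reduce to multiplication by the elementary charge and division by c². A minimal sketch, checked against the electron's 0.511 MeV/c² mass:

```python
# Electronvolt conversions: 1 eV = elementary charge * 1 volt in joules;
# the mass equivalent follows from E = m c^2, so m = E / c^2.
E_CHARGE = 1.6021766208e-19  # elementary charge, C
C = 2.99792458e8             # speed of light in vacuum, m/s

def ev_to_joules(ev):
    return ev * E_CHARGE

def ev_over_c2_to_kg(ev):
    return ev_to_joules(ev) / C**2

print(ev_to_joules(1.0))            # about 1.602e-19 J
print(ev_over_c2_to_kg(0.511e6))    # electron mass, about 9.11e-31 kg
```

Feeding in the electron's 0.511 MeV/c² recovers the familiar 9.11×10^−31 kg, confirming the text's statement that 1 eV/c² is about 1.783×10^−36 kg.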
Electronvolt
–
γ: Gamma rays
44.
Particle physics
–
Particle physics is the branch of physics that studies the nature of the particles that constitute matter and radiation. By our current understanding, these particles are excitations of the quantum fields that also govern their interactions. The currently dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model; in more technical terms, the particles are described by quantum state vectors in a Hilbert space, which is also treated in quantum field theory. All particles and their interactions observed to date can be described almost entirely by a quantum field theory called the Standard Model. The Standard Model, as formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature. In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model. The idea that all matter is composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. Throughout the 1950s and 1960s, a bewildering variety of particles were found in collisions of particles from increasingly high-energy beams; it was referred to informally as the particle zoo. The current state of the classification of all elementary particles is explained by the Standard Model, which describes the strong, weak, and electromagnetic fundamental interactions. The species of gauge bosons are the gluons, the W−, W+ and Z bosons, and the photon.
The Standard Model also contains 24 fundamental particles, which are the constituents of all matter. Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson. Early in the morning on 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson. The world's major particle physics laboratories include Brookhaven National Laboratory, whose main facility is the Relativistic Heavy Ion Collider, which collides heavy ions such as gold ions and is the world's first heavy ion collider and the world's only polarized proton collider; the Budker Institute of Nuclear Physics, whose main projects are now the electron-positron colliders, including VEPP-2000, operated since 2006; CERN, whose main project is now the Large Hadron Collider, which had its first beam circulation on 10 September 2008, is now the world's most energetic collider of protons, and also became the most energetic collider of heavy ions after it began colliding lead ions; and DESY, whose main facility is the Hadron Elektron Ring Anlage, which collides electrons and positrons with protons
Particle physics
–
Large Hadron Collider tunnel at CERN
45.
Atomic mass unit
–
The unified atomic mass unit or dalton is a standard unit of mass that quantifies mass on an atomic or molecular scale. One unified atomic mass unit is approximately the mass of one nucleon and is equivalent to 1 g/mol. The CIPM has categorised it as a non-SI unit accepted for use with the SI. The amu without the unified prefix is technically an obsolete unit based on oxygen, which was replaced in 1961. However, many still use the term amu but now define it in the same way as u; in this sense, most uses of the term atomic mass unit today actually refer to the unified unit. For standardization a specific atomic nucleus had to be chosen, because the mass per nucleon depends on the count of the nucleons in the atomic nucleus due to the mass defect. This is also why the mass of a proton or neutron by itself is more than 1 u. The atomic mass unit is not the unit of mass in the atomic units system, which is rather the electron rest mass. The relative atomic mass scale has traditionally been a relative value, and the earliest evaluations were made prior to the discovery of the existence of elemental isotopes, which occurred in 1912. The divergence of the resulting values could cause errors in computations: the chemistry amu, based on the relative atomic mass of natural oxygen, was about 1.000282 as massive as the physics amu, based on pure isotopic 16O. For these and other reasons, the standard for both physics and chemistry was changed to carbon-12 in 1961. The choice of carbon-12 was made to minimise further divergence with prior literature. The new and current unit was referred to as the unified atomic mass unit and given a new symbol, u, defined by 1 u = m_u = m(12C)/12. The dalton is another name for this unit. Despite the change, modern sources often use the old term amu but define it as u; therefore, in general, amu likely does not refer to the old oxygen-based unit. The unified atomic mass unit and the dalton are different names for the same unit of measure. 
As with other names such as watt and newton, dalton is not capitalized in English. In 2003 the Consultative Committee for Units, part of the CIPM, recommended a preference for the usage of the dalton over the atomic mass unit as it is shorter. In 2005, the International Union of Pure and Applied Physics endorsed the use of the dalton as an alternative to the atomic mass unit
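The equivalence between 1 u and 1 g/mol stated above can be checked numerically. The short Python sketch below uses the SI defined value of the Avogadro constant, which is not given in this section:

```python
# The dalton is defined so that one mole of particles of mass 1 u
# has a mass of 1 gram; dividing 1 g/mol by the Avogadro constant
# therefore gives the dalton expressed in kilograms.
N_A = 6.022_140_76e23      # Avogadro constant, 1/mol (SI defined value)

u_in_kg = 1e-3 / N_A       # 1 g/mol per particle, in kg
print(u_in_kg)             # about 1.6605e-27 kg, roughly one nucleon mass
```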
Atomic mass unit
–
Base units
46.
Pound (mass)
–
The pound or pound-mass is a unit of mass used in the imperial, United States customary and other systems of measurement. The international standard symbol for the pound is lb. The unit is descended from the Roman libra; the English word pound is cognate with, among others, German Pfund, Dutch pond, and Swedish pund. All ultimately derive from a borrowing into Proto-Germanic of the Latin expression lībra pondō. Usage of the unqualified term pound reflects the historical conflation of mass and weight; this accounts for the modern distinguishing terms pound-mass and pound-force. The United States and countries of the Commonwealth of Nations agreed upon common definitions for the pound and the yard. Since 1 July 1959, the avoirdupois pound has been defined as exactly 0.45359237 kg and the yard as 0.9144 metre exactly; in the United Kingdom, the use of the international pound was implemented in the Weights and Measures Act 1963. An avoirdupois pound is equal to 16 avoirdupois ounces and to exactly 7,000 grains. The conversion factor between the kilogram and the international pound was therefore chosen to be divisible by 7, and a grain is thus equal to exactly 64.79891 milligrams. The US has not adopted the metric system despite many efforts to do so. Historically, in different parts of the world, at different points in time, and for different applications, many different definitions of the pound have been used. The libra is an ancient Roman unit of mass that was equivalent to approximately 328.9 grams. It was divided into 12 unciae, or ounces. The libra is the origin of the abbreviation for pound, lb. A number of different definitions of the pound have historically been used in Britain, amongst them the avoirdupois pound and the obsolete tower, merchants' and London pounds. Historically, a pound sterling was a tower pound of silver. In 1528, the standard was changed to the Troy pound. The avoirdupois pound, also known as the wool pound, first came into general use c. 1300. It was initially equal to 6992 troy grains; the pound avoirdupois was divided into 16 ounces. 
During the reign of Queen Elizabeth, the pound was redefined as 7,000 troy grains. Since then, the grain has often been a part of the avoirdupois system
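The divisibility-by-7 argument above is easy to verify with a quick Python check; the values are the exact definitions quoted in this section:

```python
# The pound is exactly 0.45359237 kg and contains exactly 7,000 grains.
# The kg-to-pound factor was chosen divisible by 7 so the grain comes
# out to an exact decimal number of milligrams.
POUND_MG = 453_592.37          # one avoirdupois pound in milligrams

assert 45_359_237 % 7 == 0     # the defining digits divide evenly by 7
grain_mg = POUND_MG / 7_000    # one grain in milligrams: 64.79891 mg
print(grain_mg)
```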
Pound (mass)
–
Various historic pounds from a German textbook dated 1848
Pound (mass)
–
The Tower Pound
47.
Solar mass
–
The solar mass is a standard unit of mass in astronomy, equal to approximately 1.99 ×1030 kilograms. It is used to indicate the masses of other stars, as well as clusters and nebulae. It is equal to the mass of the Sun, about two nonillion kilograms: M☉ ≈ 1.99 ×1030 kg. The above mass is about 332946 times the mass of Earth. Because Earth follows an elliptical orbit around the Sun, the solar mass can be computed from the equation for the orbital period of a small body orbiting a central mass. The value of the gravitational constant was first derived from measurements made by Henry Cavendish in 1798 with a torsion balance; the value he obtained differs by only 1% from the modern value. The diurnal parallax of the Sun was accurately measured during the transits of Venus in 1761 and 1769, yielding a value of 9″. From the value of the parallax, one can determine the distance to the Sun from the geometry of Earth. The first person to estimate the mass of the Sun was Isaac Newton; in his work Principia, he estimated that the ratio of the mass of Earth to the Sun was about 1/28700. Later he determined that his value was based upon a faulty value for the solar parallax, and he corrected his estimated ratio to 1/169282 in the third edition of the Principia. The current value for the parallax is smaller still, yielding an estimated mass ratio of 1/332946. As a unit of measurement, the solar mass came into use before the AU and the gravitational constant were precisely measured. The mass of the Sun has been decreasing since the time it formed. This occurs through two processes in nearly equal amounts. First, in the Sun's core, hydrogen is converted into helium through nuclear fusion, in particular the p–p chain; this reaction converts some mass into energy in the form of gamma ray photons, and most of this energy eventually radiates away from the Sun. Second, high-energy protons and electrons in the atmosphere of the Sun are ejected directly into outer space as a solar wind. 
The original mass of the Sun at the time it reached the main sequence remains uncertain. The early Sun had much higher mass-loss rates than at present, and it may have lost anywhere from 1–7% of its natal mass over the course of its main-sequence lifetime. The Sun gains a small amount of mass through the impact of asteroids; however, since the Sun already contains 99.86% of the Solar System's total mass, these gains are negligible. Two convenient derived quantities are M☉ G / c2 ≈ 1.48 km and M☉ G / c3 ≈ 4.93 μs.
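The computation mentioned above, obtaining the solar mass from the orbital period of a small body, can be sketched in Python. The sketch below uses Kepler's third law, M = 4π²a³/(GT²); G, the astronomical unit and the year length are standard reference values, not figures quoted in this section:

```python
import math

# Solar mass from Kepler's third law, M = 4*pi^2 * a^3 / (G * T^2),
# treating Earth as a small body orbiting at a distance of 1 AU.
G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
AU = 1.495978707e11       # astronomical unit, m
YEAR = 365.25636 * 86400  # sidereal year, s

M_sun = 4 * math.pi**2 * AU**3 / (G * YEAR**2)
print(M_sun)              # close to the ~1.99e30 kg quoted above
```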
Solar mass
–
Internal structure
Solar mass
–
Size and mass of very large stars: Most massive example, the blue Pistol Star (150 M ☉). Others are Rho Cassiopeiae (40 M ☉), Betelgeuse (20 M ☉), and VY Canis Majoris (17 M ☉). The Sun (1 M ☉) which is not visible in this thumbnail is included to illustrate the scale of example stars. Earth's orbit (grey), Jupiter's orbit (red), and Neptune's orbit (blue) are also given.
48.
Sun
–
The Sun is the star at the center of the Solar System. It is a nearly perfect sphere of hot plasma, with internal convective motion that generates a magnetic field via a dynamo process. It is by far the most important source of energy for life on Earth. Its diameter is about 109 times that of Earth, and its mass is about 330,000 times that of Earth, accounting for about 99.86% of the total mass of the Solar System. About three quarters of the Sun's mass consists of hydrogen; the rest is mostly helium, with smaller quantities of heavier elements, including oxygen, carbon, neon and iron. The Sun is a G-type main-sequence star based on its spectral class. It formed approximately 4.6 billion years ago from the gravitational collapse of matter within a region of a large molecular cloud. Most of this matter gathered in the center, whereas the rest flattened into a disk that became the Solar System. The central mass became so hot and dense that it eventually initiated nuclear fusion in its core; it is thought that almost all stars form by this process. The Sun is roughly middle-aged; it has not changed dramatically for more than four billion years. It is calculated that, in its red giant phase, the Sun will become sufficiently large to engulf the current orbits of Mercury, Venus, and probably Earth. The enormous effect of the Sun on Earth has been recognized since prehistoric times; the synodic rotation of Earth and its orbit around the Sun are the basis of the solar calendar, which is the predominant calendar in use today. The English proper name Sun developed from Old English sunne and may be related to south; all Germanic terms for the Sun stem from Proto-Germanic *sunnōn. The English weekday name Sunday stems from Old English and is ultimately a result of a Germanic interpretation of Latin dies solis. The Latin name for the Sun, Sol, is not common in general English language use; the adjectival form is the related word solar. 
The term sol is used by planetary astronomers to refer to the duration of a solar day on another planet. A mean Earth solar day is approximately 24 hours, whereas a mean Martian sol is 24 hours, 39 minutes, and 35.244 seconds. From at least the 4th Dynasty of Ancient Egypt, the Sun was worshipped as the god Ra, portrayed as a falcon-headed divinity surmounted by the solar disk and surrounded by a serpent. In the New Empire period, the Sun became identified with the dung beetle. In the form of the Sun disc Aten, the Sun had a brief resurgence during the Amarna Period, when it again became the preeminent, if not only, divinity for the Pharaoh Akhenaton. The Sun is viewed as a goddess in Germanic paganism, Sól/Sunna. In ancient Roman culture, Sunday was the day of the Sun god. It was adopted as the Sabbath day by Christians who did not have a Jewish background. The symbol of light was a pagan device adopted by Christians, and perhaps the most important one that did not come from Jewish traditions
Sun
–
The Sun in visible wavelength with filtered white light on 8 July 2014. Characteristic limb darkening and numerous sunspots are visible.
Sun
–
During a total solar eclipse, the solar corona can be seen with the naked eye, during the brief period of totality.
Sun
–
Taken by Hinode 's Solar Optical Telescope on 12 January 2007, this image of the Sun reveals the filamentary nature of the plasma connecting regions of different magnetic polarity.
Sun
–
Visible light photograph of sunspot, 13 December 2006
49.
Compton wavelength
–
The Compton wavelength is a quantum mechanical property of a particle. It was introduced by Arthur Compton in his explanation of the scattering of photons by electrons. The Compton wavelength of a particle is equivalent to the wavelength of a photon whose energy is the same as the rest-mass energy of the particle. The standard Compton wavelength λ of a particle is given by λ = h/(mc), where h is the Planck constant, m is the particle's mass, and c is the speed of light. The significance of this formula is shown in the derivation of the Compton shift formula. The CODATA 2014 value for the Compton wavelength of the electron is 2.4263102367×10−12 m; other particles have different Compton wavelengths. When the Compton wavelength is divided by 2π, one obtains the “reduced” Compton wavelength ƛ = ħ/(mc), i.e. the Compton wavelength for 1 radian instead of 2π radians, where ħ is the “reduced” Planck constant. The reduced Compton wavelength is a natural representation for mass on the quantum scale. It appears in the relativistic Klein–Gordon equation for a free particle, and in the Dirac equation, −iγ^μ ∂_μ ψ + (mc/ħ) ψ = 0. The reduced Compton wavelength also appears in Schrödinger's equation, although its presence is obscured in traditional representations of the equation; dividing through by ħc and rewriting in terms of the fine-structure constant makes it explicit. Equations that pertain to inertial mass, like the Klein–Gordon and Schrödinger equations, use the reduced Compton wavelength. The non-reduced Compton wavelength, by contrast, is a natural representation for mass that has been converted into energy: equations that pertain to the conversion of mass into energy, or to the wavelengths of photons interacting with mass, use the non-reduced Compton wavelength. A particle of mass m has a rest energy of E = mc2; the non-reduced Compton wavelength for this particle is the wavelength of a photon of the same energy. 
For photons of frequency f, energy is given by E = hf = hc/λ = mc2, which yields the non-reduced Compton wavelength formula. The Compton wavelength expresses a fundamental limitation on measuring the position of a particle, taking into account quantum mechanics and special relativity. This limitation depends on the mass m of the particle. To see how, note that we can measure the position of a particle by bouncing light off it, but measuring the position accurately requires light of short wavelength. Light with a short wavelength consists of photons of high energy. If the energy of these photons exceeds mc2, then when one hits the particle whose position is being measured, the collision may yield enough energy to create a new particle of the same type. This renders moot the question of the particle's location
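The formulas above are straightforward to evaluate. The following Python sketch uses CODATA constant values (not quoted in this section) and reproduces the electron value given earlier:

```python
import math

# Standard and reduced Compton wavelengths for the electron:
# lambda = h/(m*c) and lambda-bar = hbar/(m*c) = lambda/(2*pi).
H = 6.62607015e-34        # Planck constant, J*s
C = 299_792_458.0         # speed of light, m/s
M_E = 9.1093837015e-31    # electron mass, kg

lam = H / (M_E * C)            # standard Compton wavelength
lam_bar = lam / (2 * math.pi)  # reduced Compton wavelength
print(lam)      # about 2.426e-12 m, as quoted in the text
print(lam_bar)  # about 3.862e-13 m
```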
Compton wavelength
–
The Schwarzschild radius (r s) represents the ability of mass to cause curvature in space and time.
50.
Black hole
–
A black hole is a region of spacetime exhibiting such strong gravitational effects that nothing, not even particles and electromagnetic radiation such as light, can escape from inside it. The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of the region from which no escape is possible is called the event horizon. Although the event horizon has an enormous effect on the fate and circumstances of an object crossing it, no locally detectable features appear to be observed at it. In many ways a black hole acts like a black body. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with a temperature inversely proportional to the black hole's mass. This temperature is on the order of billionths of a kelvin for black holes of stellar mass. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. Black holes were long considered a mathematical curiosity; it was during the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery of neutron stars sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality. Black holes of stellar mass are expected to form when very massive stars collapse at the end of their life cycle. After a black hole has formed, it can continue to grow by absorbing mass from its surroundings; by absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may form. There is general consensus that supermassive black holes exist in the centers of most galaxies. Despite its invisible interior, the presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter that falls onto a black hole can form an accretion disk heated by friction. 
If there are other stars orbiting a black hole, their orbits can be used to determine the black hole's mass; such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have established that the radio source Sagittarius A*, at the core of the Milky Way, contains a supermassive black hole of about 4.3 million solar masses. On 15 June 2016, a second detection of a gravitational wave event from colliding black holes was announced. The idea of a body so massive that light could not escape was briefly proposed by astronomical pioneer John Michell in a letter published in 1784. Michell correctly noted that such supermassive but non-radiating bodies might be detectable through their effects on nearby visible bodies. In 1915, Albert Einstein developed his theory of general relativity; only a few months later, Karl Schwarzschild found a solution to the Einstein field equations which describes the gravitational field of a point mass and of a spherical mass. A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution for the point mass. This solution had a peculiar behaviour at what is now called the Schwarzschild radius; the nature of this surface was not quite understood at the time
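The Schwarzschild radius mentioned above is given by r_s = 2GM/c². A minimal Python sketch, using standard constant values that are not quoted in this section, evaluates it for one solar mass:

```python
# Schwarzschild radius r_s = 2*G*M / c^2: the radius to which a
# (non-rotating) mass M must be compressed to form a black hole.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0      # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    return 2 * G * mass_kg / C**2

r_sun = schwarzschild_radius(M_SUN)
print(r_sun)   # roughly 3 km: the Sun compressed inside this radius would be a black hole
```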
Black hole
–
Predicted appearance of non-rotating black hole with toroidal ring of ionised matter, such as has been proposed as a model for Sagittarius A*. The asymmetry is due to the Doppler effect resulting from the enormous orbital speed needed for centrifugal balance of the very strong gravitational attraction of the hole.
Black hole
–
Simulation of gravitational lensing by a black hole, which distorts the image of a galaxy in the background
Black hole
–
A simple illustration of a non-spinning black hole
Black hole
–
A simulated event in the CMS detector, a collision in which a micro black hole may be created.
51.
Standard gravitational parameter
–
In celestial mechanics, the standard gravitational parameter μ of a celestial body is the product of the gravitational constant G and the mass M of the body: μ = GM. For several objects in the Solar System, the value of μ is known to greater accuracy than either G or M. The SI units of the standard gravitational parameter are m3 s−2; however, units of km3 s−2 are frequently used in the scientific literature. The central body in an orbital system can be defined as the one whose mass is much larger than the mass of the orbiting body, or M ≫ m. This approximation is standard for planets orbiting the Sun or most moons. Conversely, measurements of the smaller body's orbit only provide information on the product μ, not G and M separately. For circular orbits μ = 4 π2 r3 / T2; this can be generalized for elliptic orbits, μ = 4 π2 a3 / T2, where a is the semi-major axis and T is the orbital period. For parabolic trajectories rv2 is constant and equal to 2μ. For elliptic and hyperbolic orbits μ = 2a| ε |, where ε is the specific orbital energy. The value for the Earth is called the geocentric gravitational constant. However, M can be found only by dividing GM by G, and the uncertainty in G, about 1 part in 7,000, carries over to M. The value for the Sun is called the heliocentric gravitational constant and equals 1.32712440018×1020 m3 s−2. Note that the reduced mass is also denoted by μ. See also: Astronomical system of units, Planetary mass
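The relation μ = 4π²a³/T² can be illustrated with the Moon's orbit around Earth. The Python sketch below uses standard orbital figures that are not quoted in this section; note that what it actually yields is G(M + m), which overshoots the geocentric constant by about 1% because the Moon's mass is not negligible:

```python
import math

# Geocentric gravitational parameter estimated from the Moon's orbit,
# mu = 4*pi^2 * a^3 / T^2. Strictly this gives G*(M_earth + M_moon),
# about 1% above the geocentric constant 3.986e14 m^3/s^2.
A = 3.844e8            # semi-major axis of the Moon's orbit, m
T = 27.322 * 86400     # sidereal month, s

mu = 4 * math.pi**2 * A**3 / T**2
print(mu)              # about 4.0e14 m^3/s^2
```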
52.
Proportionality (mathematics)
–
In mathematics, two variables are proportional if a change in one is always accompanied by a change in the other, and if the changes are always related by use of a constant multiplier. The constant is called the coefficient of proportionality or proportionality constant. If one variable is always the product of the other and a constant, the two are said to be directly proportional: x and y are directly proportional if the ratio y/x is constant. If the product of the two variables is always a constant, the two are said to be inversely proportional: x and y are inversely proportional if the product xy is constant. To express the statement "y is directly proportional to x" mathematically, we write an equation y = cx, where c is the proportionality constant. Symbolically, this is written as y ∝ x. To express the statement "y is inversely proportional to x" mathematically, we write an equation y = c/x; we can equivalently write "y is proportional to 1/x". An equality of two ratios is called a proportion, for example a/c = b/d, where no term is zero. Given two variables x and y, y is proportional to x if there is a non-zero constant k such that y = kx. The relation is denoted, using the ∝ or ~ symbol, as y ∝ x. If an object travels at a constant speed, then the distance traveled is directly proportional to the time spent traveling, with the speed being the constant of proportionality. The circumference of a circle is directly proportional to its diameter, with the constant of proportionality equal to π. Since y = kx is equivalent to x = (1/k)y, it follows that if y is proportional to x with proportionality constant k, then x is proportional to y with proportionality constant 1/k. The concept of inverse proportionality can be contrasted with direct proportionality. Consider two variables said to be inversely proportional to each other: if all other variables are held constant, the magnitude or absolute value of one inversely proportional variable decreases if the other variable increases. 
Formally, two variables are inversely proportional if each of the variables is directly proportional to the multiplicative inverse of the other. As an example, the time taken for a journey is inversely proportional to the speed of travel. The graph of two variables varying inversely on the Cartesian coordinate plane is a rectangular hyperbola. The product of the x and y values of each point on the curve equals the constant of proportionality; since neither x nor y can equal zero, the graph never crosses either axis. A variable y is exponentially proportional to a variable x if y is directly proportional to the exponential function of x, that is, if there exist non-zero constants k and a such that y = k a^x. Likewise, a variable y is logarithmically proportional to a variable x if y is directly proportional to the logarithm of x, that is, if there exist non-zero constants k and a such that y = k log_a(x)
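The defining tests above (constant ratio for direct proportionality, constant product for inverse proportionality) translate directly into code. A small Python sketch, with hypothetical helper names chosen for illustration:

```python
import math

def direct_constant(xs, ys):
    """Return k with y = k*x if ys is directly proportional to xs (x != 0), else None."""
    ratios = [y / x for x, y in zip(xs, ys)]   # y/x must be constant
    k = ratios[0]
    return k if all(math.isclose(r, k) for r in ratios) else None

def inverse_constant(xs, ys):
    """Return c with x*y = c if ys is inversely proportional to xs, else None."""
    products = [x * y for x, y in zip(xs, ys)] # x*y must be constant
    c = products[0]
    return c if all(math.isclose(p, c) for p in products) else None

# Distance at constant speed 3: directly proportional to time.
print(direct_constant([1, 2, 4], [3, 6, 12]))    # 3.0
# Journey time over a fixed distance 60: inversely proportional to speed.
print(inverse_constant([10, 20, 30], [6, 3, 2])) # 60
```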
Proportionality (mathematics)
–
Variable y is directly proportional to the variable x.
53.
Free-fall
–
In Newtonian physics, free fall is any motion of a body where gravity is the only force acting upon it. In the context of general relativity, where gravitation is reduced to a space-time curvature, a body in free fall has no force acting on it. The present article only concerns itself with free fall in the Newtonian domain. An object in the technical sense of free fall may not necessarily be falling down in the usual sense of the term. An object moving upwards would not normally be considered to be falling, but if it is subject only to the force of gravity, it is said to be in free fall; the Moon is thus in free fall. In everyday speech, the term free fall is used more loosely than in the strict sense defined above: falling through an atmosphere without a deployed parachute or lifting device is also often referred to as free fall. The ancient Greek philosopher Aristotle discussed falling objects in Physics, which was perhaps the first book on mechanics. The Italian scientist Galileo Galilei subjected the Aristotelian theories to experimentation and careful observation, and then combined the results of experiments with mathematical analysis in an unprecedented way. According to a tale that may be apocryphal, in 1589–92 Galileo dropped two objects of unequal mass from the Leaning Tower of Pisa. Given the speed at which such a fall would occur, it is doubtful that Galileo could have extracted much information from this experiment. Most of his observations of falling bodies were really of bodies rolling down ramps; this slowed things down enough to the point where he was able to measure the time intervals with water clocks and his own pulse. This he repeated a full hundred times until he had achieved an accuracy such that the deviation between two observations never exceeded one-tenth of a pulse beat. In 1589–92, Galileo wrote De Motu Antiquiora, an unpublished manuscript on the motion of falling bodies. Examples of objects in free fall include a spacecraft with propulsion off, an object dropped at the top of a drop tube, and an object thrown upward or a person jumping off the ground at low speed. 
Technically, an object is in free fall even when moving upwards or instantaneously at rest at the top of its motion; if gravity is the only influence acting, then the acceleration is always downward and has the same magnitude for all bodies, commonly denoted g. Since all objects fall at the same rate in the absence of other forces, objects and people will experience weightlessness in these situations. Examples of objects not in free fall include flying in an aircraft, where lift counteracts gravity; standing on the ground, where the gravitational force is counteracted by the normal force from the ground; and descending to the Earth using a parachute, which balances the force of gravity with a drag force
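For a body in Newtonian free fall from rest, the fall time is t = √(2h/g) and the speed after falling a height h is v = gt = √(2gh). A minimal Python sketch (ignoring air resistance):

```python
import math

G_ACCEL = 9.81   # free-fall acceleration near Earth's surface, m/s^2

def fall_time(height_m: float) -> float:
    """Time to fall a height h from rest: t = sqrt(2*h/g)."""
    return math.sqrt(2 * height_m / G_ACCEL)

def impact_speed(height_m: float) -> float:
    """Speed after falling a height h from rest: v = sqrt(2*g*h)."""
    return math.sqrt(2 * G_ACCEL * height_m)

# Dropping an object from 20 m:
t = fall_time(20.0)
v = impact_speed(20.0)
print(t, v)   # roughly 2.0 s and 19.8 m/s
```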
Free-fall
–
Measured fall time of a small steel sphere falling from various heights. The data is in good agreement with the predicted fall time of, where h is the height and g is the free-fall acceleration due to gravity.
Free-fall
–
Acceleration of a small meteoroid when entering the Earth's atmosphere at different initial velocities.
Free-fall
–
Joseph Kittinger starting his record-breaking skydive in 1960. His record was broken only in 2012.
54.
Moon
–
The Moon is an astronomical body that orbits planet Earth, being Earth's only permanent natural satellite. It is the fifth-largest natural satellite in the Solar System, and the second-densest satellite among those whose densities are known, after Jupiter's satellite Io. The average distance of the Moon from the Earth is 384,400 km. The Moon is thought to have formed about 4.51 billion years ago, not long after Earth. It is the second-brightest regularly visible celestial object in Earth's sky, after the Sun. Its surface is actually dark, although compared to the night sky it appears very bright, with a reflectance just slightly higher than that of worn asphalt. Its prominence in the sky and its cycle of phases have made the Moon an important cultural influence since ancient times on language, calendars and art. The Moon's gravitational influence produces the ocean tides, body tides, and the slight lengthening of the day. The Moon's apparent size in the sky is almost the same as that of the Sun; this matching of apparent visual size will not continue in the far future, because the Moon's linear distance from Earth is currently increasing at a rate of 3.82 ±0.07 centimetres per year. Since the Apollo 17 mission in 1972, the Moon has been visited only by uncrewed spacecraft. The usual English proper name for Earth's natural satellite is the Moon. The noun moon is derived from moone, which developed from mone, which is derived from Old English mōna, which ultimately stems from Proto-Germanic *mǣnōn, like all Germanic language cognates. Occasionally, the name Luna is used: in literature, especially science fiction, Luna is used to distinguish it from other moons, while in poetry the name has been used to denote personification of our moon. The principal modern English adjective pertaining to the Moon is lunar; a less common adjective is selenic, derived from the Ancient Greek Selene, from which is derived the prefix seleno-. 
Both the Greek Selene and the Roman goddess Diana were alternatively called Cynthia. The names Luna, Cynthia, and Selene are reflected in terminology for lunar orbits in words such as apolune, pericynthion, and selenocentric. The name Diana is connected to dies, meaning day. Several mechanisms have been proposed for the Moon's formation 4.51 billion years ago, some 60 million years after the origin of the Solar System, including fission from Earth, capture of a pre-formed body, and co-accretion; these hypotheses cannot account for the high angular momentum of the Earth–Moon system. The prevailing giant-impact hypothesis, although not perfect, perhaps best explains the evidence. Eighteen months prior to an October 1984 conference on lunar origins, Bill Hartmann, Roger Phillips, and Jeff Taylor challenged fellow lunar scientists: "You have eighteen months. Go back to your Apollo data, go back to your computer, do whatever you have to. Don't come to our conference unless you have something to say about the Moon's birth." At the 1984 conference at Kona, Hawaii, the giant impact hypothesis emerged as the most popular; afterward there were only two groups, the giant impact camp and the agnostics. Giant impacts are thought to have been common in the early Solar System, and computer simulations of a giant impact have produced results that are consistent with the mass of the lunar core and the present angular momentum of the Earth–Moon system
Moon
–
Full moon as seen from Earth's northern hemisphere
Moon
–
The Moon, tinted reddish, during a lunar eclipse
Moon
–
Near side of the Moon
Moon
–
Far side of the Moon
55.
Pair production
–
Pair production is the creation of an elementary particle and its antiparticle. Examples include creating an electron and a positron, a muon and an antimuon, or a proton and an antiproton. Pair production often refers specifically to a photon creating an electron–positron pair near a nucleus, but can more generally refer to any neutral boson creating a particle–antiparticle pair. All conserved quantum numbers of the created particles must sum to zero, thus the created particles shall have opposite values of each other; for instance, if one particle has electric charge of +1, the other must have electric charge of −1. The probability of pair production in photon–matter interactions increases with photon energy and also increases approximately as the square of the atomic number of the nearby atom. For photons with high energy, pair production is the dominant mode of photon interaction with matter. These interactions were first observed in Patrick Blackett's counter-controlled cloud chamber. The photon must have higher energy than the sum of the rest mass energies of an electron and positron for the production to occur. The photon must be near a nucleus in order to satisfy conservation of momentum; because of this, when pair production occurs, the atomic nucleus receives some recoil. The reverse of this process is electron–positron annihilation. These properties can be derived through the kinematics of the interaction. Using four-vector notation, the conservation of energy–momentum before and after the interaction gives p_γ + p_nucleus = p_e− + p_e+ + p′_nucleus; squaring both sides gives (p_γ + p_nucleus)2 = (p_e− + p_e+ + p′_nucleus)2. However, in most cases the recoil of the nucleus is much smaller compared to the energy of the photon and can be neglected. This derivation is a semi-classical approximation; an exact derivation of the kinematics can be done taking into account the full quantum mechanical scattering of photon and nucleus. 
Cross sections are tabulated for different materials and energies. In 2008 the Titan laser, aimed at a 1-millimeter-thick gold target, was used to generate positron–electron pairs in large numbers. Pair production is invoked to predict the existence of hypothetical Hawking radiation. According to quantum mechanics, particle pairs are constantly appearing and disappearing as a quantum foam. In a region of strong tidal forces, the two particles in a pair may sometimes be wrenched apart before they have a chance to mutually annihilate. When this happens in the region around a black hole, one particle may escape while its antiparticle partner is captured by the black hole. Supernova SN 2006gy is hypothesized to have been a pair production type supernova. See also: Annihilation, Electron–positron annihilation, Meitner–Hupfeld effect, Pair-instability supernova, Two-photon physics, Dirac equation, Matter creation, Theory of photon-impact bound-free pair production
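The energy threshold described above, a photon energy exceeding the combined electron and positron rest energies, is simple to compute. A Python sketch using CODATA constant values (not given in this section):

```python
# Threshold photon energy for electron-positron pair production:
# E_gamma >= 2 * m_e * c^2 (neglecting the tiny nuclear recoil).
M_E = 9.1093837015e-31     # electron mass, kg
C = 299_792_458.0          # speed of light, m/s
EV = 1.602_176_634e-19     # joules per electronvolt

rest_energy_mev = M_E * C**2 / EV / 1e6   # about 0.511 MeV per electron
threshold_mev = 2 * rest_energy_mev
print(threshold_mev)       # about 1.022 MeV
```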
Pair production
–
Light–matter interaction
56.
Gravitational lens
–
A gravitational lens is a distribution of matter between a distant light source and an observer that is capable of bending the light from the source as the light travels towards the observer. This effect is known as gravitational lensing, and the amount of bending is one of the predictions of Albert Einstein's general theory of relativity. Fritz Zwicky posited in 1937 that the effect could allow galaxy clusters to act as gravitational lenses, but it was not until 1979 that this effect was confirmed by observation of the so-called Twin QSO SBS 0957+561. Unlike an optical lens, a gravitational lens produces a maximum deflection of light that passes closest to its center and a minimum deflection of light that passes farthest from it. Consequently, a gravitational lens has no single focal point, but a focal line. The term lens in the context of light deflection was first used by O. J. Lodge, who remarked that it is not permissible to say that the gravitational field acts like a lens. If the source, the lensing object, and the observer lie in a straight line, the source will appear as a ring around the lensing object; if there is any misalignment, the observer will see an arc segment instead. This phenomenon was first mentioned in 1924 by the St. Petersburg physicist Orest Chwolson, and quantified by Albert Einstein in 1936; it is usually referred to in the literature as an Einstein ring. More commonly, where the lensing mass is complex and does not cause a spherical distortion of space–time, the source will resemble partial arcs scattered around the lens. There are three classes of gravitational lensing. In strong lensing, there are easily visible distortions such as the formation of Einstein rings, arcs, and multiple images. In weak lensing, the distortions are far smaller and the lensing shows up only statistically, as a slight stretching of the background objects perpendicular to the direction to the center of the lens. By measuring the shapes and orientations of large numbers of distant galaxies, this stretching can be measured; this, in turn, can be used to reconstruct the mass distribution in the area, and in particular the background distribution of dark matter.
Since galaxies are intrinsically elliptical and the weak gravitational lensing signal is small, a very large number of galaxies must be used in such surveys. Weak lensing surveys may also provide an important future constraint on dark energy. In microlensing, the third class, no distortion in shape can be seen, but the amount of light received from a background object changes in time. The lensing object may be stars in the Milky Way in one case, with the background source being stars in a remote galaxy, or, in another case, an even more distant quasar. The effect is small, such that even a galaxy with a mass more than 100 billion times that of the Sun will produce multiple images separated by only a few arcseconds; galaxy clusters can produce separations of several arcminutes. In both cases the lensing galaxies and the sources are quite distant, many hundreds of megaparsecs away from our Galaxy.
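The image separations quoted above follow from the Einstein radius of a point-mass lens, θ_E = √(4GM/c² · D_LS/(D_L·D_S)). The sketch below evaluates this formula; the particular distances (a lens halfway to a source 2 Gpc away) and the flat-space distance difference are illustrative assumptions, not values from the text.

```python
import math

# Sketch: angular Einstein radius of a point-mass gravitational lens,
#   theta_E = sqrt(4*G*M/c^2 * D_LS / (D_L * D_S)).
# Distances are illustrative; D_LS is approximated as D_S - D_L.

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
GPC = 3.086e25       # metres per gigaparsec
ARCSEC_PER_RAD = 206265.0

def einstein_radius_arcsec(mass_kg, d_lens_m, d_source_m):
    d_ls = d_source_m - d_lens_m   # flat-space approximation for this sketch
    theta = math.sqrt(4 * G * mass_kg / C**2 * d_ls / (d_lens_m * d_source_m))
    return theta * ARCSEC_PER_RAD

# A ~1e11 solar-mass galaxy halfway to a source 2 Gpc away gives an
# Einstein radius of order an arcsecond, consistent with the
# few-arcsecond image separations mentioned above.
print(einstein_radius_arcsec(1e11 * M_SUN, 1 * GPC, 2 * GPC))
```

Doubling the result gives the rough separation between the two images of a doubly imaged source.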
Gravitational lens
–
Gravitational lensing
Gravitational lens
–
One of Eddington's photographs of the 1919 solar eclipse experiment, presented in his 1920 paper announcing its success
Gravitational lens
–
Bending light around a massive object from a distant source. The orange arrows show the apparent position of the background source. The white arrows show the path of the light from the true position of the source.
Gravitational lens
–
In the formation known as Einstein's Cross, four images of the same distant quasar appear around a foreground galaxy due to strong gravitational lensing.
57.
Spacetime
–
In physics, spacetime is any mathematical model that combines space and time into a single interwoven continuum. Until the turn of the 20th century, the assumption had been that the three-dimensional geometry of the universe was distinct from time. Einstein's theory was framed in terms of kinematics, and showed how measurements of space and time vary for observers in different reference frames; his theory was an advance over Lorentz's 1904 theory of electromagnetic phenomena. A key feature of this interpretation is the definition of a spacetime interval that combines distance in space with distance in time. Although measurements of distance and time between events differ among observers, the interval is independent of the inertial frame of reference in which they are recorded. The resultant spacetime came to be known as Minkowski space. Non-relativistic classical mechanics treats time as a universal quantity of measurement which is uniform throughout space and which is separate from space. Classical mechanics assumes that time has a constant rate of passage that is independent of the state of motion of an observer; furthermore, it assumes that space is Euclidean, which is to say, it assumes that space follows the geometry of common sense. General relativity, in addition, provides an explanation of how gravitational fields can slow the passage of time for an object as seen by an observer outside the field. Mathematically, spacetime is a manifold, which is to say it appears locally flat near each point, by analogy with the way that, at small enough scales, a globe appears flat. An extremely large scale factor, c, relates distances measured in space with distances measured in time. The wave theory of light implied the existence of a medium which waved, but attempts to measure the properties of the hypothetical luminiferous aether provided contradictory results.
For example, the Fizeau experiment of 1851 demonstrated that the speed of light in flowing water was less than the speed of light in air plus the speed of the flowing water; the partial aether-dragging implied by this result was in conflict with measurements of stellar aberration. By 1904, Lorentz had expanded his theory such that he had arrived at equations formally identical with those that Einstein was to derive later, but with a fundamentally different interpretation. As a theory of dynamics, his theory assumed actual physical deformations of the constituents of matter. For example, most physicists believed that the Lorentz contraction would be detectable by such experiments as the Trouton–Noble experiment or the experiments of Rayleigh and Brace. However, these experiments yielded negative results, which Lorentz accounted for in his 1904 theory of the electron. Einstein performed his analyses in terms of kinematics rather than dynamics, and it would appear that he did not at first think geometrically about spacetime. It was Einstein's former mathematics professor, Hermann Minkowski, who was to provide a geometric interpretation of special relativity; Einstein was initially dismissive of this interpretation.
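The frame-independence of the spacetime interval described above can be demonstrated directly. The sketch below, in units where c = 1, applies a Lorentz boost to an event's coordinates and checks that s² = t² − x² is unchanged even though t and x individually change; the numeric values are arbitrary illustrations.

```python
import math

# Sketch: the spacetime interval s^2 = (c*t)^2 - x^2 is invariant under a
# Lorentz boost, while t and x separately are not. Units chosen so c = 1.

def boost(t, x, v):
    """Lorentz boost along x with speed v (in units of c)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

def interval_sq(t, x):
    return t * t - x * x

t, x = 5.0, 3.0
t2, x2 = boost(t, x, 0.6)      # view the same event from a frame at 0.6c
print((t, x), "->", (t2, x2))  # coordinates differ between frames
print(interval_sq(t, x), interval_sq(t2, x2))  # intervals agree
```

This invariance is exactly what distinguishes the Minkowski interval from the separate space and time measurements of non-relativistic mechanics.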
Spacetime
–
Key concepts
58.
Gravitational time dilation
–
Gravitational time dilation is a form of time dilation, an actual difference of elapsed time between two events as measured by observers situated at varying distances from a gravitating mass. The lower the gravitational potential (the closer the clock is to the source of gravitation), the slower time passes. Albert Einstein originally predicted this effect in his theory of relativity, and it has since been demonstrated by noting that atomic clocks at differing altitudes will eventually show different times. The effects detected in such Earth-bound experiments are small, with differences being measured in nanoseconds; demonstrating larger effects would require greater distances from the Earth or a larger gravitational source. Gravitational time dilation was first described by Albert Einstein in 1907 as a consequence of special relativity in accelerated frames of reference. In general relativity, it is considered to be a difference in the passage of time at different positions as described by a metric tensor of spacetime. The existence of gravitational time dilation was first confirmed directly by the Pound–Rebka experiment in 1959. Clocks that are far from massive bodies run more quickly; for example, considered over the total age of the Earth, a clock set at the peak of Mount Everest would be about 39 hours ahead of a clock set at sea level. This is because gravitational time dilation is manifested in accelerated frames of reference or, by virtue of the equivalence principle, in the gravitational field of massive objects (according to general relativity, inertial mass and gravitational mass are the same). Consider a family of observers along a straight vertical line, each of whom experiences a distinct constant g-force directed along this line. Let g(h) be the dependence of g-force on height h, a coordinate along the aforementioned line. For a Rindler family of observers in flat space-time, the dependence would be g(h) = c²/(H + h) with constant H, which yields the dilation factor T_d(h) = e^(ln(H + h) − ln H) = (H + h)/H.
On the other hand, when g is nearly constant and gh is much smaller than c², this reduces to the approximation T_d(h) ≈ 1 + gh/c². See the Ehrenfest paradox for application of the formula to a rotating reference frame in flat space-time. In comparison, a clock on the surface of the Sun will accumulate around 66.4 fewer seconds in one year. In the Schwarzschild metric, free-falling objects can be in circular orbits if the orbital radius is larger than 3/2 r_s (where r_s is the Schwarzschild radius); the proper time of such an orbiting clock is t_0 = t_f √(1 − (3/2)·(r_s/r)). According to the general theory of relativity, gravitational time dilation is copresent with the existence of an accelerated reference frame. An exception is the center of a concentric distribution of matter, where there is no accelerated reference frame yet clocks still run slowly. Additionally, all phenomena in similar circumstances undergo time dilation equally, according to the equivalence principle used in the general theory of relativity.
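The Mount Everest figure quoted above can be checked with the weak-field approximation T_d ≈ 1 + gh/c². The sketch below multiplies the fractional rate difference g·h/c² by an assumed ~4.6-billion-year age of the Earth; the constants and the exact age are illustrative round values.

```python
# Sketch: weak-field gravitational time dilation. A clock at height h runs
# fast relative to a sea-level clock by a fractional rate of about g*h/c^2.
# Used here to check the ~39-hour Mount Everest figure quoted above.

G_SURFACE = 9.80665                   # m/s^2, standard surface gravity
C = 2.998e8                           # m/s
EVEREST_H = 8848.0                    # m, height of Mount Everest
EARTH_AGE_S = 4.6e9 * 365.25 * 86400  # assumed age of the Earth, seconds

def elevated_clock_gain_s(height_m, duration_s):
    """Extra elapsed time (s) for a clock at height_m, to first order."""
    return G_SURFACE * height_m / C**2 * duration_s

gain_hours = elevated_clock_gain_s(EVEREST_H, EARTH_AGE_S) / 3600.0
print(gain_hours)  # close to the ~39 hours quoted above
```

The fractional rate (~10⁻¹²) also shows why Earth-bound clock comparisons over days yield only nanosecond-scale differences, as stated in the text.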
Gravitational time dilation
–
Key concepts
Gravitational time dilation
–
Satellite clocks are slowed by their orbital speed but sped up by their distance out of the Earth's gravitational well.
59.
Gravity Probe B
–
Gravity Probe B was a satellite-based mission which launched on 20 April 2004 on a Delta II rocket; the spaceflight phase lasted until 2005. Its aim was to measure spacetime curvature near Earth, providing a test of general relativity, gravitomagnetism and related models. The principal investigator was Francis Everitt. Initial results confirmed the expected geodetic effect to an accuracy of about 1%, while the expected frame-dragging effect was similar in magnitude to the current noise level; work continued to model and account for these sources of error, thus permitting extraction of the frame-dragging signal. Gravity Probe B was a relativity gyroscope experiment funded by NASA; efforts were led by the Stanford University physics department with Lockheed Martin as the primary subcontractor. Mission scientists viewed it as the second gravity experiment in space. The mission plan was to test two unverified predictions of general relativity, the geodetic effect and frame-dragging. The gyroscopes were intended to be so free from disturbance that they would provide a near-perfect space-time reference system; this would allow them to reveal how space and time are warped by the presence of the Earth, and by how much the Earth's rotation drags space-time around with it. The geodetic effect is caused by space-time being curved by the mass of the Earth: a gyroscope's axis, when parallel transported around the Earth in one complete revolution, does not end up pointing in exactly the same direction as before. The angle missing may be thought of as the amount the gyroscope leans over into the slope of the space-time curvature. A more precise explanation for the space part of the geodetic precession is obtained by using a nearly flat cone to model the space curvature of the Earth's gravitational field. Such a cone is made by cutting out a thin pie-slice from a circle; the spatial geodetic precession is a measure of the missing pie-slice angle.
Gravity Probe B was expected to measure this effect to an accuracy of one part in 10,000. The much smaller frame-dragging effect is an example of gravitomagnetism, an analog of magnetism in classical electrodynamics caused by rotating masses rather than rotating electric charges. However, Lorenzo Iorio claimed that the level of total uncertainty of the tests conducted with the two LAGEOS satellites has likely been greatly underestimated. A recent analysis of Mars Global Surveyor data has claimed to confirm the frame-dragging effect to a precision of 0.5%, and the Lense–Thirring effect of the Sun has also been investigated in view of a possible detection with the inner planets in the near future. The launch was planned for 19 April 2004 at Vandenberg Air Force Base but was scrubbed within 5 minutes of the launch window due to changing winds in the upper atmosphere. An unusual feature of the mission is that it had only a one-second launch window due to the precise orbit required by the experiment. On 20 April, at 9:57:23 AM PDT, the spacecraft was launched successfully; the satellite was placed in orbit at 11:12:33 AM after a cruise period over the south pole and a short second burn. Some preliminary results were presented at a session during the American Physical Society meeting in April 2007.
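The size of the geodetic effect GP-B measured can be sketched from the de Sitter precession rate for a circular orbit, Ω = (3/2)·(GM)^{3/2}/(c²·r^{5/2}). The orbit radius below (Earth's mean radius plus a ~642 km altitude, approximating GP-B's polar orbit) and the constants are illustrative round values.

```python
# Sketch: de Sitter (geodetic) precession rate of a gyroscope in a circular
# orbit of radius r:  Omega = (3/2) * (GM)^(3/2) / (c^2 * r^(5/2))  [rad/s].
# Orbit radius approximates Gravity Probe B's ~642 km polar orbit.

GM_EARTH = 3.986004e14       # m^3/s^2
C = 2.998e8                  # m/s
R_ORBIT = 6.371e6 + 6.42e5   # Earth mean radius + altitude, m
ARCSEC_PER_RAD = 206265.0
SECONDS_PER_YEAR = 3.156e7

def geodetic_precession_arcsec_per_year(r_m):
    omega = 1.5 * GM_EARTH**1.5 / (C**2 * r_m**2.5)  # rad/s
    return omega * SECONDS_PER_YEAR * ARCSEC_PER_RAD

# Comes out near the ~6.6 arcsec/yr drift GP-B was built to detect,
# which its gyroscopes confirmed to about 1%.
print(geodetic_precession_arcsec_per_year(R_ORBIT))
```

The frame-dragging signal GP-B sought is roughly 170 times smaller, which is why it sat near the experiment's noise level.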
Gravity Probe B
–
Gravity Probe B
Gravity Probe B
–
Gravity Probe B with solar panels folded.
Gravity Probe B
–
At the time, the fused quartz gyroscopes created for Gravity Probe B were the most nearly perfect spheres ever created by humans. The gyroscopes differ from a perfect sphere by no more than 40 atoms of thickness, refracting the image of Einstein in background.
Gravity Probe B
60.
Frequency
–
Frequency is the number of occurrences of a repeating event per unit time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency. The period is the duration of time of one cycle in a repeating event; for example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period—the time interval between beats—is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, and radio waves. For cyclical processes, such as rotation, oscillations, or waves, frequency is defined as a number of cycles per unit time. In physics and engineering disciplines, such as optics, acoustics, and radio, frequency is usually denoted by a Latin letter f or by the Greek letter ν (nu). For a repetitive motion, the relation between the frequency and the period T is given by f = 1/T. The SI unit of frequency is the hertz, named after the German physicist Heinrich Hertz; a previous name for this unit was cycles per second. The SI unit for period is the second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period rather than frequency, while short and fast waves, like audio and radio, are usually described by their frequency instead of period. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes: for a wave written y = sin(θ(x)), the spatial frequency is the wavenumber k = dθ/dx. In the case of more than one spatial dimension, wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength. Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave, f = v/λ. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ.
When waves from a monochromatic source travel from one medium to another, their frequency remains the same—only their wavelength and speed change. Frequency can be measured by counting the number of events in a fixed time interval: for example, if 71 events occur within 15 seconds, the frequency is 71/15 ≈ 4.7 Hz. This method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2Tm), or a fractional error of Δf/f = 1/(2fTm), where Tm is the timing interval. This error decreases with frequency, so it is a problem at low frequencies where the number of counts N is small. An older method of measuring the frequency of rotating or vibrating objects is to use a stroboscope.
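The counting method and its gating error can be sketched directly. This minimal example reproduces the 71-events-in-15-seconds case from the text; the function name is an illustrative choice.

```python
# Sketch of the counting method described above: estimate a frequency by
# counting events over a gate interval Tm, with average gating error
# df = 1/(2*Tm) and fractional error df/f = 1/(2*f*Tm).

def frequency_estimate(count, gate_s):
    f = count / gate_s          # estimated frequency, Hz
    df = 1.0 / (2.0 * gate_s)   # average gating error, Hz
    return f, df

f, df = frequency_estimate(71, 15.0)
print(f, df, df / f)  # ~4.73 Hz, ~0.033 Hz error, ~0.7% fractional error
```

Lengthening the gate shrinks both the absolute and fractional error, which is why low frequencies need long timing intervals.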
Frequency
–
A resonant-reed frequency meter, an obsolete device used from about 1900 to the 1940s for measuring the frequency of alternating current. It consists of a strip of metal with reeds of graduated lengths, vibrated by an electromagnet. When the unknown frequency is applied to the electromagnet, the reed which is resonant at that frequency will vibrate with large amplitude, visible next to the scale.
Frequency
–
As time elapses – represented here as a movement from left to right, i.e. horizontally – the five sinusoidal waves shown vary regularly (i.e. cycle), but at different rates. The red wave (top) has the lowest frequency (i.e. varies at the slowest rate) while the purple wave (bottom) has the highest frequency (varies at the fastest rate).
Frequency
Frequency
–
Modern frequency counter
61.
Spectroscopy
–
Spectroscopy /spɛkˈtrɒskəpi/ is the study of the interaction between matter and electromagnetic radiation. Historically, spectroscopy originated through the study of visible light dispersed according to its wavelength; later the concept was expanded greatly to include any interaction with radiative energy as a function of its wavelength or frequency. Spectroscopic data are often represented by an emission spectrum, a plot of the response of interest as a function of wavelength or frequency. Spectroscopy and spectrography are terms used to refer to the measurement of radiation intensity as a function of wavelength and are used to describe experimental spectroscopic methods. Spectral measurement devices are referred to as spectrometers, spectrophotometers, spectrographs or spectral analyzers. Daily observations of color can be related to spectroscopy. Neon lighting is an application of atomic spectroscopy: neon and other noble gases have characteristic emission frequencies, and neon lamps use collision of electrons with the gas to excite these emissions. Inks, dyes and paints include chemical compounds selected for their spectral characteristics in order to generate specific colors. A commonly encountered molecular spectrum is that of nitrogen dioxide: gaseous nitrogen dioxide has a characteristic red absorption feature, and this gives air polluted with nitrogen dioxide a reddish-brown color. Rayleigh scattering is a spectroscopic scattering phenomenon that accounts for the color of the sky. Spectroscopy is used in physical and analytical chemistry because atoms and molecules have unique spectra; as a result, these spectra can be used to detect, identify and quantify information about the atoms and molecules. Spectroscopy is also used in astronomy and remote sensing on Earth, where the measured spectra are used to determine the composition and physical properties of astronomical objects. One of the central concepts in spectroscopy is a resonance and its corresponding resonant frequency.
Resonances were first characterized in mechanical systems such as pendulums: mechanical systems that vibrate or oscillate will experience large amplitude oscillations when they are driven at their resonant frequency. A plot of amplitude vs. excitation frequency will have a peak centered at the resonance frequency; this plot is one type of spectrum, with the peak often referred to as a spectral line, and most spectral lines have a similar appearance. In quantum mechanical systems, the analogous resonance is a coupling of two quantum mechanical stationary states of one system, such as an atom, via an oscillatory source of energy such as a photon. The coupling of the two states is strongest when the energy of the source matches the energy difference between the two states. The energy of a photon is related to its frequency by E = hν, where h is Planck's constant; spectra of atoms and molecules thus often consist of a series of spectral lines, each one representing a resonance between two different quantum states.
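The resonance condition E = hν connects a transition energy to the frequency and wavelength of the photon that drives it. The sketch below evaluates this relation; the ~2.105 eV input (roughly the sodium D line) is an illustrative example, not from the text.

```python
# Sketch: E = h*nu relates a transition energy to the photon frequency,
# and lambda = c/nu gives the corresponding wavelength of the spectral line.

H = 6.62607015e-34     # Planck constant, J*s
C = 2.998e8            # speed of light, m/s
EV = 1.602176634e-19   # joules per electronvolt

def photon_frequency_hz(energy_ev):
    return energy_ev * EV / H

def photon_wavelength_nm(energy_ev):
    return C / photon_frequency_hz(energy_ev) * 1e9

# A ~2.105 eV atomic transition corresponds to yellow light near 589 nm
# (illustratively, the sodium D line region).
print(photon_wavelength_nm(2.105))
```

Scanning a source's frequency across such a transition traces out exactly the peaked spectral line described above.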
Spectroscopy
–
Analysis of white light by dispersing it with a prism is an example of spectroscopy.
Spectroscopy
–
A huge diffraction grating at the heart of the ultra-precise ESPRESSO spectrograph.
Spectroscopy
–
UVES is a high-resolution spectrograph on the Very Large Telescope.
62.
Watt balance
–
A watt balance is an experimental electromechanical weight-measuring instrument that measures the weight of a test object very precisely via the strength of an electric current and a voltage. In 2016, metrologists agreed to rename watt balances as Kibble balances, in honour of the instrument's inventor, Bryan Kibble. It is being developed as a metrological instrument that may one day provide a definition of the kilogram unit of mass based on electronic units, a so-called electronic or electrical kilogram. The name watt balance comes from the fact that the weight of the test mass is proportional to the product of a current and a voltage, a product which is measured in units of watts. In this new application, the balance will be used in the opposite sense: the weight of the kilogram is used to compute its mass by accurately determining the local gravitational acceleration, thereby defining the mass of a kilogram in terms of electrical quantities. The principle used in the watt balance was proposed by B. P. Kibble of the UK National Physical Laboratory in 1975 for measurement of the gyromagnetic ratio. The main weakness of the ampere-balance method is that the result depends on the accuracy with which the dimensions of the coils are measured. The watt balance method has an extra step in which the effect of the geometry of the coils is eliminated: this extra step involves moving the force coil through a known magnetic flux at a known speed. This step was first done in 1990. In 2014, NRC researchers published the most accurate measurement of the Planck constant to date, with a relative uncertainty of 1.8×10−8. A conducting wire of length L that carries an electric current I perpendicular to a magnetic field of strength B will experience a Laplace force equal to BLI. In the watt balance, the current is varied so that this force exactly counteracts the weight w of a mass m; this is also the principle behind the ampere balance. The weight w is given by the mass m multiplied by the local gravitational acceleration g.
Kibble's watt balance avoids the problems of measuring B and L with a second, calibration step: the same wire is moved through the same magnetic field at a known speed v, and by Faraday's law of induction a potential difference U is generated across the ends of the wire. The unknown product BL can then be eliminated from the equations to give UI = mgv. With U, I, g, and v accurately measured, this gives an accurate value for m. Both sides of the equation have the dimensions of power, measured in watts in the International System of Units; the current watt balance experiments are thus equivalent to measuring the value of the conventional watt in SI units. The importance of such measurements is that they are also a direct measurement of the Planck constant h, via h = 4/(K_J² R_K), where K_J is the Josephson constant and R_K is the von Klitzing constant. The principle of the electronic kilogram would be to define the value of the Planck constant, in the same way that the metre is defined by the speed of light.
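The two-step cancellation described above can be sketched in a few lines. The weighing step gives BLI = mg and the moving step gives U = BLv, so BL drops out and m = UI/(gv); the voltage, current, speed, and g values below are illustrative round numbers, not data from any real balance.

```python
# Sketch of the Kibble/watt balance relation derived above:
#   weighing step:  B*L*I = m*g
#   moving step:    U = B*L*v
# so the geometry factor B*L cancels and  m = U*I / (g*v).

def kibble_mass_kg(u_volts, i_amps, g_ms2, v_ms):
    """Mass inferred from U*I = m*g*v, with B*L eliminated."""
    return u_volts * i_amps / (g_ms2 * v_ms)

# Illustrative values: U = 1 V induced at v = 2 mm/s, with I = 19.6 mA
# needed to balance the weight at g = 9.81 m/s^2 -> roughly a 1 kg mass.
print(kibble_mass_kg(1.0, 0.0196, 9.81, 0.002))
```

Note that U·I and m·g·v are both powers in watts, which is where the instrument's name comes from.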
Watt balance
–
The NIST watt balance; the vacuum chamber dome, which lowers over the entire apparatus, is visible at top
Watt balance
–
Precision Ampere balance at the US National Bureau of Standards (now NIST) in 1927. The current coils are visible under the balance, attached to the right balance arm. The Watt balance is a development of the Ampere balance.
63.
Mass versus weight
–
In common usage, the mass of an object is often referred to as its weight, though these are in fact different concepts and quantities. In scientific contexts, mass refers loosely to the amount of matter in an object, whereas weight refers to the force experienced by an object due to gravity. In other words, an object with a mass of 1.0 kilogram will weigh approximately 9.81 newtons on the surface of the Earth; its weight will be less on Mars, more on Saturn, and negligible in space far from any significant source of gravity. Objects on the surface of the Earth have weight, although sometimes this weight is difficult to measure: a seemingly weightless object floating in water actually transfers its weight to the bottom of the container. Similarly, a balloon has mass but may appear to have no weight or even negative weight; the weight of the balloon and the gas inside it has merely been transferred to a large area of the Earth's surface, making the weight difficult to measure. The weight of an airplane is similarly distributed to the ground: if the airplane is in flight, the same weight-force is distributed to the surface of the Earth as when the plane was on the runway. A better scientific definition of mass describes it in terms of inertia, the tendency of an object to remain at constant velocity unless acted upon by an outside force; weight, by contrast, is the force produced when a mass is acted upon by a gravitational field, and this force can be added to by any other kind of force. While the weight of an object varies in proportion to the strength of the gravitational field, its mass is constant. Accordingly, for an astronaut on a spacewalk in orbit, no effort is required to hold a communications satellite in front of him: it is weightless. However, objects in orbit retain their mass and inertia, so force is still needed to set the satellite in motion or to stop it. On Earth, a swing seat can demonstrate this relationship between force, mass, and acceleration: pushing a heavy adult produces only a modest speed, while applying the same impetus to a child would produce a much greater speed. Inertia is also seen when a bowling ball is pushed horizontally on a level, smooth surface.
This inertia is quite distinct from the ball's weight, which is the gravitational force of the bowling ball one must counter when holding it off the floor. The weight of the ball on the Moon would be one-sixth of that on the Earth, although its mass would be unchanged. Consequently, whenever the physics of recoil kinetics dominate and the influence of gravity is a negligible factor, the behavior of objects remains consistent even where gravity is relatively weak. In the physical sciences, the terms mass and weight are rigidly defined as separate measures, as they are different physical properties; in everyday usage, however, the distinction is often blurred. For example, in commerce, the net weight of products actually refers to mass. Conversely, the load index rating on automobile tires, which specifies the maximum structural load for a tire in kilograms, refers to weight, that is, to force.
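The mass/weight distinction drawn above reduces to w = m·g with a body-dependent g. The sketch below keeps the mass fixed and varies only the surface gravity; the gravity values are approximate round numbers and the bowling-ball mass is an illustrative choice.

```python
# Sketch: the same mass has different weights under different surface
# gravities, while its mass (and hence its inertia) stays fixed.

SURFACE_G = {        # approximate surface gravities, m/s^2
    "Earth": 9.81,
    "Mars": 3.71,
    "Moon": 1.62,
}

def weight_newtons(mass_kg, body):
    """Weight w = m*g for the given body's surface gravity."""
    return mass_kg * SURFACE_G[body]

# A 7.26 kg bowling ball: one mass, three weights.
for body in SURFACE_G:
    print(body, weight_newtons(7.26, body))
```

The Moon value comes out close to one-sixth of the Earth value, matching the ratio quoted above.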
Mass versus weight
–
If one were to stand behind this girl at the bottom of the arc and try to stop her, one would be acting against her inertia, which arises from mass, not weight.
Mass versus weight
–
Matter's mass strongly influences many familiar kinetic properties.
Mass versus weight
–
A hot air balloon when it has neutral buoyancy has no weight for the men to support but still retains its great mass and inertia.
Mass versus weight
–
A balance-type weighing scale: Unaffected by the strength of gravity.
64.
Gravity of Earth
–
The gravity of Earth, which is denoted by g, refers to the acceleration that is imparted to objects due to the distribution of mass within the Earth. In SI units this acceleration is measured in metres per second squared or, equivalently, in newtons per kilogram; this quantity is sometimes referred to informally as little g. The precise strength of Earth's gravity varies depending on location. The nominal average value at the Earth's surface, known as standard gravity, is, by definition, 9.80665 m/s²; this quantity is denoted variously as gn, ge, g0, or gee. The weight of an object on the Earth's surface is the downwards force on that object, given by Newton's second law of motion, F = ma. Gravitational acceleration contributes to the total acceleration, but other factors, such as the rotation of the Earth, also contribute. The Earth is not spherically symmetric but is slightly flatter at the poles while bulging at the Equator; there are consequently slight deviations in both the magnitude and direction of gravity across its surface. The net force as measured by a scale and plumb bob is called effective gravity or apparent gravity; effective gravity includes other factors that affect the net force, such as the centrifugal force at the surface arising from the Earth's rotation. Effective gravity on the Earth's surface varies by around 0.7%; in large cities, it ranges from 9.766 m/s² in Kuala Lumpur, Mexico City, and Singapore to 9.825 m/s² in Oslo and Helsinki. The surface of the Earth is rotating, so it is not an inertial frame of reference. At latitudes nearer the Equator, the centrifugal force produced by Earth's rotation is larger than at polar latitudes; this counteracts the Earth's gravity to a small degree, up to a maximum of 0.3% at the Equator. The same two factors influence the direction of the effective gravity.
Gravity decreases with altitude as one rises above the Earth's surface, because greater altitude means greater distance from the Earth's centre; all other things being equal, an increase in altitude from sea level to 9,000 metres causes a weight decrease of about 0.29%. It is a common misconception that astronauts in orbit are weightless because they have flown high enough to escape the Earth's gravity; in fact, at an altitude of 400 kilometres, equivalent to a typical orbit of the Space Shuttle, gravity is still nearly 90% as strong as at the Earth's surface. Weightlessness actually occurs because orbiting objects are in free-fall. The effect of ground elevation depends on the density of the ground: a person flying at 30,000 ft above sea level over mountains will feel more gravity than someone at the same elevation over the sea; however, a person standing on the Earth's surface feels less gravity when the elevation is higher. The following formula approximates the Earth's gravity variation with altitude: g_h = g_0 (R_e / (R_e + h))², where g_h is the acceleration at height h above sea level, R_e is the Earth's mean radius, and g_0 is standard gravity.
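The altitude formula above can be evaluated directly. This minimal sketch treats the Earth as a sphere of mean radius R_e; the Space Shuttle altitude is used to check the "still nearly 90%" figure from the text.

```python
# Sketch of the altitude formula quoted above:
#   g_h = g_0 * (R_e / (R_e + h))^2
# treating the Earth as a sphere of mean radius R_e.

G0 = 9.80665        # standard gravity, m/s^2
R_EARTH = 6.371e6   # Earth's mean radius, m

def gravity_at_altitude(h_m):
    return G0 * (R_EARTH / (R_EARTH + h_m)) ** 2

# At Space Shuttle altitude (~400 km), gravity is still close to 90% of
# its sea-level value, as stated above; orbiting crews are weightless
# because they are in free-fall, not because gravity has vanished.
print(gravity_at_altitude(400e3) / G0)
```

The same function reproduces the small sea-level-to-9,000 m decrease (a fraction of a percent) mentioned in the text.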
Gravity of Earth
–
Earth's gravity measured by NASA's GRACE mission, showing deviations from the theoretical gravity of an idealized smooth Earth, the so-called earth ellipsoid. Red shows the areas where gravity is stronger than the smooth, standard value, and blue reveals areas where gravity is weaker. (Animated version.)
Gravity of Earth
–
Earth's radial density distribution according to the Preliminary Reference Earth Model (PREM).
65.
Kilograms
–
The kilogram or kilogramme is the base unit of mass in the International System of Units and is defined as being equal to the mass of the International Prototype of the Kilogram (IPK). The avoirdupois pound, used in both the imperial and US customary systems, is defined as exactly 0.45359237 kg, making one kilogram approximately equal to 2.2046 avoirdupois pounds; other traditional units of weight and mass around the world are also defined in terms of the kilogram. The gram, 1/1000 of a kilogram, was provisionally defined in 1795 as the mass of one cubic centimetre of water at the melting point of ice. The final kilogram, manufactured as a prototype in 1799 and from which the IPK was derived in 1875, had a mass equal to the mass of 1 dm³ of water at its maximum density. The kilogram is the only SI base unit with an SI prefix as part of its name; it is also the only SI unit that is still directly defined by an artifact rather than a fundamental physical property that can be reproduced in different laboratories. Three other base units and 17 derived units in the SI system are defined relative to the kilogram; only 8 other units do not require the kilogram in their definition: those of temperature, time and frequency, length, and angle. At its 2011 meeting, the CGPM agreed in principle that the kilogram should be redefined in terms of the Planck constant; the decision was originally deferred until 2014, and in 2014 it was deferred again until the next meeting. There are currently several different proposals for the redefinition; these are described in the Proposed Future Definitions section below. The International Prototype Kilogram is rarely used or handled. In the decree of 1795, the term gramme thus replaced gravet. The French spelling was adopted in the United Kingdom when the word was used for the first time in English in 1797, with the spelling kilogram being adopted in the United States.
In the United Kingdom both spellings are used, with kilogram having become by far the more common; UK law regulating the units to be used when trading by weight or measure does not prevent the use of either spelling. In the 19th century the French word kilo, a shortening of kilogramme, was imported into the English language, where it has been used to mean both kilogram and kilometre. In 1935 this was adopted by the IEC as the Giorgi system, now known as the MKS system. In 1948 the CGPM commissioned the CIPM to make recommendations for a single practical system of units of measurement; this led to the launch of SI in 1960 and the subsequent publication of the SI Brochure. The kilogram is a unit of mass, a property which corresponds to the common perception of how heavy an object is. Mass is an inertial property: it is related to the tendency of an object at rest to remain at rest, or if in motion to remain in motion at a constant velocity. Accordingly, for astronauts in microgravity, no effort is required to hold objects off the cabin floor; they are weightless. However, objects in microgravity still retain their mass and inertia. The ratio of the force of gravity on two objects, as measured by a scale, is equal to the ratio of their masses. On April 7, 1795, the gram was decreed in France to be the weight of a volume of pure water equal to the cube of the hundredth part of the metre.
Kilograms
–
A domestic-quality one-kilogram weight made of cast iron (the credit card is for scale). The shape follows OIML recommendation R52 for cast-iron hexagonal weights
Kilograms
–
Measurement of weight – gravitational attraction of the measurand causes a distortion of the spring
Kilograms
–
Measurement of mass – the gravitational force on the measurand is balanced against the gravitational force on the weights.
Kilograms
–
The Arago kilogram, an exact copy of the "Kilogramme des Archives" commissioned in 1821 by the US under supervision of French physicist François Arago that served as the US's first kilogram standard of mass until 1889, when the US converted to primary metric standards and received its current kilogram prototypes, K4 and K20.
66.
Newtons
–
The newton is the International System of Units (SI) derived unit of force. It is named after Isaac Newton in recognition of his work on classical mechanics; see below for the conversion factors. One newton is the force needed to accelerate one kilogram of mass at the rate of one metre per second squared in the direction of the applied force. In 1948, the 9th CGPM, in resolution 7, adopted the name newton for this unit of force; the MKS system then became the blueprint for today's SI system of units, and the newton thus became the unit of force in le Système International d'Unités. As with every SI unit named for a person, the first letter of its symbol is upper case. Note that "degree Celsius" conforms to this rule because the "d" is lowercase. — Based on The International System of Units, section 5.2. Newton's second law of motion states that F = ma, where F is the applied force, m is the mass of the object receiving the force, and a is the resulting acceleration. The newton is therefore 1 N = 1 kg⋅m/s2, where the symbols are those used for the units: N for newton, kg for kilogram, m for metre, and s for second. In dimensional analysis, [F] = MLT−2, where F is force, M is mass, L is length, and T is time. At average gravity on Earth, a kilogram mass exerts a force of about 9.8 newtons. An average-sized apple exerts about one newton of force, which we measure as the apple's weight. For example, the tractive effort of a Class Y steam train and the thrust of an F100 fighter-jet engine are both around 130 kN. One kilonewton, 1 kN, is 102.0 kgf (1 kN = 102 kg × 9.81 m/s2), so, for example, a platform rated at 321 kilonewtons will safely support a 32,100-kilogram load. Specifications in kilonewtons are common in safety specifications for: the holding values of fasteners and Earth anchors; working loads in tension and in shear; the thrust of rocket engines and launch vehicles; and the clamping forces of the various moulds in injection-moulding machines used to manufacture plastic parts
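The definitions above can be checked numerically. A small Python sketch (the function name and constant are illustrative) applying F = ma and the kilonewton-to-kilogram-force conversion mentioned in the text:

```python
STANDARD_GRAVITY = 9.80665  # m/s^2, the conventional value behind kgf

def force_newtons(mass_kg: float, accel_m_s2: float) -> float:
    """Newton's second law: F = m * a, returning force in newtons."""
    return mass_kg * accel_m_s2

# A 1 kg mass at standard gravity weighs about 9.8 N:
print(force_newtons(1.0, STANDARD_GRAVITY))

# One kilonewton expressed in kilograms-force: 1000 N / g is about 102 kgf
print(round(1000.0 / STANDARD_GRAVITY, 1))
```
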
Newtons
–
Base units
67.
Fermion
–
In particle physics, a fermion is any subatomic particle characterized by Fermi–Dirac statistics. These particles obey the Pauli exclusion principle. Fermions include all quarks and leptons, as well as any composite particle made of an odd number of these, such as all baryons and many atoms and nuclei. Fermions differ from bosons, which obey Bose–Einstein statistics. A fermion can be an elementary particle, such as the electron, or it can be a composite particle, such as the proton. According to the spin–statistics theorem, in any reasonable relativistic quantum field theory, particles with integer spin are bosons, while particles with half-integer spin are fermions. Besides this spin characteristic, fermions have another specific property: they possess conserved baryon or lepton quantum numbers. Therefore, what is usually referred to as the spin–statistics relation is in fact a spin–statistics–quantum-number relation. As a consequence of the Pauli exclusion principle, only one fermion can occupy a particular quantum state at any given time. If multiple fermions have the same spatial probability distribution, then at least one property of each fermion, such as its spin, must be different. Weakly interacting fermions can also display bosonic behavior under extreme conditions; at low temperature fermions show superfluidity for uncharged particles and superconductivity for charged particles. Composite fermions, such as protons and neutrons, are the key building blocks of everyday matter. The Standard Model recognizes two types of elementary fermions: quarks and leptons. In all, the model distinguishes 24 different fermions: there are six quarks and six leptons, along with the corresponding antiparticle of each of these. Mathematically, fermions come in three types: Weyl fermions, Dirac fermions, and Majorana fermions. Most Standard Model fermions are believed to be Dirac fermions, although it is unknown at this time whether the neutrinos are Dirac or Majorana fermions.
Dirac fermions can be treated as a combination of two Weyl fermions; in July 2015, Weyl fermions were experimentally realized in Weyl semimetals. Composite particles can be bosons or fermions depending on their constituents. More precisely, because of the relation between spin and statistics, a particle containing an odd number of fermions is itself a fermion, while one containing an even number is a boson. Examples include the following: a baryon, such as the proton or neutron, contains three fermionic quarks and is therefore a fermion; the nucleus of a carbon-13 atom contains six protons and seven neutrons and is therefore a fermion; the atom helium-3 is made of two protons, one neutron, and two electrons, and therefore it is a fermion. The number of bosons within a composite made up of simple particles bound with a potential has no effect on whether it is a boson or a fermion. Fermionic or bosonic behavior of a composite particle is only seen at large distances. At proximity, where spatial structure begins to be important, a composite particle behaves according to its constituent makeup. Fermions can exhibit bosonic behavior when they become loosely bound in pairs
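The parity rule above — odd number of fermionic constituents gives a fermion, even gives a boson — is simple enough to state as code. A minimal sketch (the function name is illustrative), using the examples from the text:

```python
def is_fermion(n_constituent_fermions: int) -> bool:
    """A composite with an odd number of fermionic constituents is a fermion;
    an even number gives a boson. Bosonic constituents have no effect."""
    return n_constituent_fermions % 2 == 1

# carbon-13 nucleus: 6 protons + 7 neutrons = 13 fermions -> fermion
print(is_fermion(6 + 7))       # True
# helium-3 atom: 2 protons + 1 neutron + 2 electrons = 5 -> fermion
print(is_fermion(2 + 1 + 2))   # True
# helium-4 atom: 2 + 2 + 2 = 6 -> boson
print(is_fermion(6))           # False
```
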
Fermion
–
Enrico Fermi
Fermion
–
Large Hadron Collider tunnel at CERN
68.
Force carrier
–
In particle physics, force carriers or messenger particles or intermediate particles are particles that give rise to forces between other particles. These particles are bundles of energy of a particular kind of field. There is one kind of field for every type of elementary particle; for instance, there is an electron field whose quanta are electrons, and an electromagnetic field whose quanta are photons. The force carrier particles that mediate the electromagnetic, weak, and strong interactions are called gauge bosons. In particle physics, quantum field theories such as the Standard Model describe nature in terms of fields. Each field has a complementary description as the set of particles of a particular type, and the energy of a wave in a field is quantized. The Standard Model contains the following particles, each of which is an excitation of a particular field: gluons, excitations of the strong gauge field; photons, W bosons, and Z bosons, excitations of the electroweak gauge fields; Higgs bosons, excitations of one component of the Higgs field; and several types of fermions, described as excitations of fermionic fields. In addition, composite particles such as mesons can be described as excitations of an effective field. Gravity is not a part of the Standard Model, but it is thought that there may be particles called gravitons which are the excitations of gravitational waves; the status of this particle is still tentative, because the theory is incomplete. When one particle scatters off another, altering its trajectory, there are two ways to think about the process. In the field picture, we imagine that the field generated by one particle caused a force on the other. Alternatively, we can imagine one particle emitting a virtual particle which is absorbed by the other; the virtual particle transfers momentum from one particle to the other. The description of forces in terms of particles is limited by the applicability of the perturbation theory from which it is derived.
In certain situations, such as low-energy QCD and the description of bound states, perturbation theory breaks down. The electromagnetic force between charged particles can be described by the exchange of virtual photons, and the nuclear force binding protons and neutrons can be described by a field of which mesons are the excitations. At sufficiently large energies, the interaction between quarks can be described by the exchange of virtual gluons. Beta decay is an example of an interaction due to the exchange of a W boson, and gravitation may be due to the exchange of virtual gravitons. In time, this relationship became known as Coulomb's law
Force carrier
–
Large Hadron Collider tunnel at CERN
69.
Little group
–
In mathematics, an action of a group is a way of interpreting the elements of the group as acting on some space in a way that preserves the structure of that space. Common examples of spaces that groups act on are sets, vector spaces, and topological spaces; actions of groups on vector spaces are called representations of the group. Some groups can be interpreted as acting on spaces in a canonical way; more generally, symmetry groups such as the homeomorphism group of a topological space or the general linear group of a vector space, as well as their subgroups, also admit canonical actions. A common way of specifying non-canonical actions is to describe a homomorphism φ from a group G to the group of symmetries of a set X. The action of an element g ∈ G on a point x ∈ X is assumed to be identical to the action of its image φ(g) ∈ Sym(X) on the point x. The homomorphism φ is also called the action of G. Thus, if G is a group and X is a set, an action may be defined as a group homomorphism from G to the symmetric group of X; if X has additional structure, then φ is only called an action if for each g ∈ G, the permutation φ(g) preserves the structure of X. The abstraction provided by group actions is a powerful one, because it allows geometrical ideas to be applied to more abstract objects. Many objects in mathematics have natural group actions defined on them; in particular, groups can act on other groups, or even on themselves. Because of this generality, the theory of group actions contains wide-reaching theorems, such as the orbit–stabilizer theorem. When such an action is given, the group G is said to act on X, and the set X is called a G-set. In complete analogy, one can define a right group action of G on X as an operation X × G → X mapping (x, g) to x·g and satisfying x·(gh) = (x·g)·h for all g, h in G and all x in X. For a left action h acts first and is followed by g, while for a right action g acts first and is followed by h. Because of the formula (gh)−1 = h−1g−1, one can construct a left action from a right action by composing with the inverse operation of the group.
Also, a right action of a group G on X is the same thing as a left action of its opposite group Gop on X. It is thus sufficient to consider only left actions without any loss of generality. The trivial action of any group G on any set X is defined by g.x = x for all g in G and all x in X; that is, every group element induces the identity permutation on X. In every group G, left multiplication is an action of G on itself: g.x = gx for all g, x in G
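The action axioms can be checked concretely. A minimal Python sketch (the function name `act` is illustrative) of the cyclic group C3 acting on the three vertices of an equilateral triangle by rotation:

```python
# The cyclic group C3 = {0, 1, 2} (rotations by 0°, 120°, 240°,
# composed by addition mod 3) acting on the vertex set {0, 1, 2}.

def act(g: int, x: int) -> int:
    """Left action of C3 on the vertices: rotation g sends x to (g + x) mod 3."""
    return (g + x) % 3

# Identity axiom: the neutral element fixes every point.
assert all(act(0, x) == x for x in range(3))

# Compatibility axiom: g.(h.x) = (gh).x, where gh = (g + h) mod 3.
assert all(act(g, act(h, x)) == act((g + h) % 3, x)
           for g in range(3) for h in range(3) for x in range(3))
```
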
Little group
–
Given an equilateral triangle, the counterclockwise rotation by 120° around the center of the triangle maps every vertex of the triangle to another one. The cyclic group C 3 consisting of the rotations by 0°, 120° and 240° acts on the set of the three vertices.
70.
Standard Model
–
The Standard Model of particle physics is a theory concerning the electromagnetic, weak, and strong interactions, as well as classifying all the known elementary particles. It was developed throughout the latter half of the 20th century. The current formulation was finalized in the mid-1970s upon experimental confirmation of the existence of quarks; since then, discoveries of the top quark, the tau neutrino, and the Higgs boson have given further credence to the Standard Model. Despite its success in explaining a wide variety of experimental results, it does not incorporate the full theory of gravitation as described by general relativity, or account for the accelerating expansion of the Universe. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology, and it also does not incorporate neutrino oscillations. The development of the Standard Model was driven by theoretical and experimental particle physicists alike; for theorists, the Standard Model is a paradigm of a quantum field theory. The first step towards the Standard Model was Sheldon Glashow's discovery in 1961 of a way to combine the electromagnetic and weak interactions. In 1967 Steven Weinberg and Abdus Salam incorporated the Higgs mechanism into Glashow's electroweak interaction, giving it its modern form. The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model, including the masses of the W and Z bosons. The W± and Z0 bosons were discovered experimentally in 1983, and the ratio of their masses was found to be as the Standard Model predicted. The theory of the strong interaction, to which many contributed, acquired its modern form around 1973–74. At present, matter and energy are best understood in terms of the kinematics and interactions of elementary particles; to date, physics has reduced the laws governing the behavior and interaction of all known forms of matter and energy to a small set of fundamental laws and theories.
The Standard Model includes members of several classes of elementary particles, which can be summarized as follows. The Standard Model includes 12 elementary particles of spin ½, known as fermions. According to the spin–statistics theorem, fermions respect the Pauli exclusion principle. Each fermion has a corresponding antiparticle. The fermions of the Standard Model are classified according to how they interact: there are six quarks and six leptons. Pairs from each classification are grouped together to form a generation, with corresponding particles exhibiting similar physical behavior. The defining property of the quarks is that they carry color charge; a phenomenon called color confinement results in quarks being very strongly bound to one another, forming color-neutral composite particles containing either a quark and an antiquark or three quarks
Standard Model
–
Large Hadron Collider tunnel at CERN
Standard Model
–
The Standard Model of elementary particles (more schematic depiction), with the three generations of matter, gauge bosons in the fourth column, and the Higgs boson in the fifth.
71.
Field (physics)
–
In physics, a field is a physical quantity, typically a number or tensor, that has a value for each point in space and time. For example, on a weather map, the surface wind velocity is described by assigning a vector to each point on the map; each vector represents the speed and direction of the movement of air at that point. As another example, an electric field can be thought of as a condition in space emanating from an electric charge and extending throughout the whole of space. When a test electric charge is placed in this electric field, it experiences a force. Physicists have found the notion of a field to be of such practical utility for the analysis of forces that they have come to think of a force as due to a field. In the modern framework of the quantum theory of fields, even without referring to a test particle, a field occupies space and contains energy. This led physicists to consider electromagnetic fields to be a physical entity; the fact that the electromagnetic field can possess momentum and energy makes it very real. A particle makes a field, and a field acts on another particle. In practice, the strength of most fields has been found to diminish with distance to the point of being undetectable; one consequence is that the Earth's gravitational field quickly becomes undetectable on cosmic scales. A field has a unique tensorial character in every point where it is defined: i.e. a field cannot be a scalar field somewhere and a vector field somewhere else. For example, the Newtonian gravitational field is a vector field. Moreover, within each category, a field can be either a classical field or a quantum field, depending on whether it is characterized by numbers or quantum operators respectively. In fact in this theory an equivalent representation of a field is a field particle. To Isaac Newton, his law of universal gravitation simply expressed the gravitational force that acted between any pair of massive objects.
In the eighteenth century, a new quantity was devised to simplify the bookkeeping of all these gravitational forces. This quantity, the gravitational field, gave at each point in space the total gravitational force which would be felt by an object with unit mass at that point. The development of the independent concept of a field began in the nineteenth century with the development of the theory of electromagnetism. In the early stages, André-Marie Ampère and Charles-Augustin de Coulomb could manage with Newton-style laws that expressed the forces between pairs of electric charges or electric currents. However, it became more natural to take the field approach and express these laws in terms of electric and magnetic fields. The independent nature of the field became more apparent with James Clerk Maxwell's discovery that waves in these fields propagated at a finite speed. Maxwell, at first, did not adopt the modern concept of a field as a fundamental quantity that could independently exist; instead, he supposed that the field expressed the deformation of some underlying medium—the luminiferous aether—much like the tension in a rubber membrane. If that were the case, the velocity of the electromagnetic waves should depend upon the velocity of the observer with respect to the aether. Despite much effort, no evidence of such an effect was ever found
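The gravitational field described above — the force per unit mass at each point in space — can be sketched directly in code. A minimal illustration (the function name and the rounded constants are assumptions for the example) of the Newtonian field of a point mass:

```python
import math

G = 6.674e-11  # Newtonian gravitational constant, m^3 kg^-1 s^-2

def gravitational_field(source_mass, source_pos, point):
    """Newtonian field g = -G*M*r_vec/|r|^3 (force per unit mass, m/s^2)
    at `point` due to a point mass at `source_pos`."""
    d = [p - s for p, s in zip(point, source_pos)]
    r = math.sqrt(sum(c * c for c in d))
    scale = -G * source_mass / r**3  # the field points toward the source
    return [scale * c for c in d]

# Field of the Earth, modelled as a point mass of 5.972e24 kg, evaluated
# one Earth radius (6.371e6 m) away along the x-axis:
g = gravitational_field(5.972e24, (0.0, 0.0, 0.0), (6.371e6, 0.0, 0.0))
print(round(-g[0], 2))  # magnitude about 9.8 m/s^2, directed toward the origin
```
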
Field (physics)
–
Illustration of the electric field surrounding a positive (red) and a negative (blue) charge.
72.
General theory of relativity
–
General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present; the relation is specified by the Einstein field equations, a system of partial differential equations. Some predictions of general relativity differ significantly from those of classical physics; examples of such differences include gravitational time dilation, gravitational lensing, and the gravitational redshift of light. The predictions of general relativity have been confirmed in all observations to date. Although general relativity is not the only relativistic theory of gravity, it is the simplest theory that is consistent with experimental data. Einstein's theory has important astrophysical implications: for example, it implies the existence of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape—as an end-state for massive stars. The bending of light by gravity can lead to the phenomenon of gravitational lensing, and general relativity also predicts the existence of gravitational waves, which have since been observed directly by the physics collaboration LIGO. In addition, general relativity is the basis of current cosmological models of an expanding universe. Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall, he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. The Einstein field equations are nonlinear and very difficult to solve.
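The field equations referred to above are, in their standard modern statement (including the cosmological constant discussed further on),

```latex
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu}
  = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

where R_{\mu\nu} is the Ricci curvature tensor, R the scalar curvature, g_{\mu\nu} the metric tensor, Λ the cosmological constant, G Newton's gravitational constant, c the speed of light, and T_{\mu\nu} the stress–energy tensor of matter and radiation.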
Einstein used approximation methods in working out initial predictions of the theory, but as early as 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse. In 1917, Einstein applied his theory to the universe as a whole; in line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption. By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life
General theory of relativity
–
A simulated black hole of 10 solar masses within the Milky Way, seen from a distance of 600 kilometers.
General theory of relativity
–
Albert Einstein developed the theories of special and general relativity. Picture from 1921.
General theory of relativity
–
Einstein cross: four images of the same astronomical object, produced by a gravitational lens
General theory of relativity
–
Artist's impression of the space-borne gravitational wave detector LISA
73.
Leaning Tower of Pisa
–
The Leaning Tower of Pisa or simply the Tower of Pisa is the campanile, or freestanding bell tower, of the cathedral of the Italian city of Pisa, known worldwide for its unintended tilt. It is situated behind Pisa's cathedral and is the third oldest structure in the city's Cathedral Square, after the cathedral and the baptistry. The tower's tilt began during construction, caused by an inadequate foundation on ground too soft on one side to properly support the structure's weight. The tilt increased in the decades before the structure was completed, and grew further until the structure was stabilized by remediation efforts in the late 20th and early 21st centuries. The height of the tower is 55.86 metres from the ground on the low side and 56.67 metres on the high side; the width of the walls at the base is 2.44 m. Its weight is estimated at 14,500 metric tons. The tower has 296 or 294 steps; the seventh floor has two fewer steps on the north-facing staircase. Prior to restoration work performed between 1990 and 2001, the tower leaned at an angle of 5.5 degrees; the top of the tower is now displaced horizontally 3.9 metres from the centre. There has been controversy about the identity of the architect of the Leaning Tower of Pisa. For many years, the design was attributed to Guglielmo and Bonanno Pisano; Pisano left Pisa in 1185 for Monreale, Sicily, only to come back and die in his home town. A piece of cast metal bearing his name was discovered at the foot of the tower in 1820. Construction of the tower occurred in three stages over 199 years. Work on the ground floor of the white marble campanile began on August 14, 1173, during a period of military success. This ground floor is a blind arcade articulated by engaged columns with classical Corinthian capitals. The tower began to sink after construction had progressed to the second floor in 1178.
This was due to a mere three-metre foundation, set in weak, unstable subsoil. Construction was subsequently halted for almost a century, because the Republic of Pisa was almost continually engaged in battles with Genoa, Lucca, and Florence. This allowed time for the underlying soil to settle; otherwise, the tower would almost certainly have toppled. In 1198, clocks were temporarily installed on the third floor of the unfinished construction. In 1272, construction resumed under Giovanni di Simone, architect of the Camposanto. In an effort to compensate for the tilt, the engineers built the upper floors with one side taller than the other; because of this, the tower is actually curved. Construction was halted again in 1284 when the Pisans were defeated by the Genoans in the Battle of Meloria. The seventh floor was completed in 1319, and the bell-chamber was finally added in 1372. It was built by Tommaso di Andrea Pisano, who succeeded in harmonizing the Gothic elements of the bell-chamber with the Romanesque style of the tower. There are seven bells, one for each note of the musical major scale
Leaning Tower of Pisa
–
Leaning Tower of Pisa
Leaning Tower of Pisa
–
Pisa Cathedral & Leaning Tower of Pisa
Leaning Tower of Pisa
–
Leaning Tower of Pisa in 2004
74.
Torsion balance
–
A torsion spring is a spring that works by torsion or twisting; that is, a flexible elastic object that stores mechanical energy when it is twisted. When it is twisted, it exerts a torque in the opposite direction, proportional to the angle it is twisted through. A torsion bar is a straight bar of metal or rubber that is subjected to twisting about its axis by torque applied at its ends. A more delicate form used in sensitive instruments, called a torsion fiber, consists of a fiber of silk, glass, or quartz under tension. This terminology can be confusing because in a helical torsion spring the forces acting on the wire are actually bending stresses. As long as the spring is not twisted beyond its elastic limit, the torque it exerts is proportional to the twist angle: τ = −κθ, where κ is a constant called the torsion coefficient. It is analogous to the spring constant of a linear spring; the negative sign indicates that the direction of the torque is opposite to the direction of twist. Other uses are in the large, coiled torsion springs used to counterbalance the weight of garage doors, and the small, coiled torsion springs used to operate pop-up doors found on small consumer goods like digital cameras. A torsion-bar suspension absorbs road shocks as the wheel goes over bumps and rough road surfaces; torsion-bar suspensions are used in many modern cars and trucks, as well as military vehicles. The sway bar used in vehicle suspension systems also uses the torsion spring principle. The torsion pendulum used in torsion pendulum clocks is a wheel-shaped weight suspended from its center by a wire torsion spring. The weight rotates about the axis of the spring, twisting it; the force of the spring reverses the direction of rotation, so the wheel oscillates back and forth, driven at the top by the clock's gears. The balance spring or hairspring in mechanical watches is a fine, spiral-shaped torsion spring that pushes the balance wheel back toward its center position as it rotates back and forth. The balance wheel and spring function similarly to the torsion pendulum above in keeping time for the watch. The D'Arsonval movement used in mechanical pointer-type meters to measure electric current is a type of torsion balance.
A coil of wire attached to the pointer twists in a magnetic field against the resistance of a torsion spring; Hooke's law ensures that the angle of the pointer is proportional to the current. A DMD, or digital micromirror device, chip is at the heart of many video projectors. It uses hundreds of thousands of tiny mirrors on tiny torsion springs fabricated on a silicon surface to reflect light onto the screen. The torsion balance consists of a bar suspended from its middle by a thin fiber; the fiber acts as a very weak torsion spring
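The angular Hooke's law for torsion springs, and the resulting oscillation of a torsion pendulum or balance wheel, can be sketched numerically. A minimal example (the function names and sample values are illustrative):

```python
import math

def torsion_torque(kappa, theta):
    """Angular Hooke's law: tau = -kappa * theta.
    The minus sign makes the torque oppose the twist."""
    return -kappa * theta

def torsion_period(moment_of_inertia, kappa):
    """Period of a torsion pendulum: T = 2*pi*sqrt(I / kappa)."""
    return 2 * math.pi * math.sqrt(moment_of_inertia / kappa)

# A 0.2 rad twist against a 0.05 N*m/rad spring gives roughly -0.01 N*m:
print(torsion_torque(0.05, 0.2))

# Doubling the wheel's moment of inertia lengthens the period by sqrt(2):
t1 = torsion_period(1e-7, 1e-6)
t2 = torsion_period(2e-7, 1e-6)
print(round(t2 / t1, 3))  # 1.414
```
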
Torsion balance
–
A mousetrap powered by a helical torsion spring
75.
Apollo 15
–
Apollo 15 was the ninth manned mission in the United States Apollo program, the fourth to land on the Moon, and the eighth successful manned mission. It was the first of what were termed J missions, featuring long stays on the Moon, and it was also the first mission on which the Lunar Roving Vehicle was used. The mission began on July 26, 1971, and ended on August 7; at the time, NASA called it the most successful manned flight ever achieved. Commander David Scott and Lunar Module Pilot James Irwin spent three days on the Moon, including 18½ hours outside the spacecraft on lunar extra-vehicular activity. The mission landed near Hadley Rille, in an area of the Mare Imbrium called Palus Putredinus. The crew explored the area using the first lunar rover, which allowed them to travel farther from the Lunar Module than had been possible on missions without the rover, and they collected 77 kilograms of lunar surface material. Scott had attended the University of Michigan, but left before graduating to accept an appointment to the United States Military Academy. The crewmen did their undergraduate work at either the United States Military Academy or the United States Naval Academy. The support crew were C. Gordon Fullerton, Joseph P. Allen, Robert A. Parker, and Karl G. Henize. The Earth parking orbit had an apogee of 171.3 km and an inclination of 29.679°, with a period of about 88 minutes. There had been a rivalry between the prime and backup crews on that mission, with the prime crew being all United States Navy. Originally Apollo 15 would have been an H mission, like Apollos 12, 13 and 14, but on September 2, 1970, NASA announced it was canceling what were to be the current incarnations of the Apollo 15 and Apollo 19 missions. To maximize the return from the remaining missions, Apollo 15 would now fly as a J mission and have the honor of carrying the first lunar rover. One of the biggest changes in the training for Apollo 15 was the geology training: although on previous flights the crews had trained in field geology, the emphasis for Apollo 15 was far greater.
Scott and Irwin would train with Leon Silver, a Caltech geologist interested in the Precambrian. Silver had been suggested by Harrison Schmitt as an alternative to the classroom lecturers that NASA had previously used. Among other things, Silver had made important refinements to the methods for dating rocks using the decay of uranium into lead in the late 1950s. Crews began to wear mock-ups of the backpacks they would carry, and to communicate using walkie-talkies to a CAPCOM in a tent. The CAPCOM was accompanied by a group of geologists unfamiliar with the area who would rely on the crew's descriptions to interpret the findings. The decision to land at Hadley came in September 1970. The Site Selection Committee had narrowed the field down to two sites — Hadley Rille, or the crater Marius, near which were a group of low, possibly volcanic, domes. Although the choice was not ultimately his decision, the commander of a mission always held great sway. To David Scott the choice was clear, with Hadley being "exploration at its finest".
Apollo 15
–
Jim Irwin with the Lunar Roving Vehicle on the first lunar surface EVA of Apollo 15
Apollo 15
–
Commander David Scott during geology training in New Mexico on March 19, 1971
Apollo 15
–
Apollo 15 launches on July 26, 1971
76.
Theoretical physics
–
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena. The advancement of science depends in general on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigor while giving little weight to experiments and observations; conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, a quintessentially experimental result. A physical theory is a model of physical events; it is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory differs from a mathematical theory, in the sense that the word theory has a different meaning in mathematical terms. A physical theory involves one or more relationships between various measurable quantities: Archimedes realized that a ship floats by displacing its mass of water, and Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles. Theoretical physics consists of several different approaches; in this regard, theoretical particle physics forms a good example. For instance, phenomenologists might employ empirical formulas to agree with experimental results, often without deep physical understanding. Modelers often appear much like phenomenologists, but try to model speculative theories that have certain desirable features, and some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated.
Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled; e.g., the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics. Theoretical advances may consist in setting aside old, incorrect paradigms, or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result; sometimes though, advances may proceed along different paths. However, an exception to all the above is the wave–particle duality. Physical theories become accepted if they are able to make correct predictions and no incorrect ones; they are also likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method. Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories. Theoretical physics began at least 2,300 years ago, with Pre-Socratic philosophy. During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon
Theoretical physics
–
Visual representation of a Schwarzschild wormhole. Wormholes have never been observed, but they are predicted to exist by mathematical models and scientific theory.
77.
Gravitational interaction
–
Gravity, or gravitation, is a natural phenomenon by which all things with mass are brought toward one another, including planets, stars and galaxies. Since energy and mass are equivalent, all forms of energy, including light, also cause gravitation and are influenced by it. On Earth, gravity gives weight to physical objects and causes the ocean tides. Gravity has an infinite range, although its effects become increasingly weaker on farther objects. In general relativity, gravity is described as a consequence of the curvature of spacetime caused by mass; the most extreme example of this curvature of spacetime is a black hole, from which nothing can escape once past its event horizon. Gravity also results in time dilation, where time lapses more slowly at a lower gravitational potential. Gravity is the weakest of the four fundamental interactions of nature: the gravitational attraction is approximately 10³⁸ times weaker than the strong force, 10³⁶ times weaker than the electromagnetic force and 10²⁹ times weaker than the weak force. As a consequence, gravity has a negligible influence on the behavior of subatomic particles. On the other hand, gravity is the dominant interaction at the macroscopic scale; for this reason, in part, the pursuit of a theory of everything, the merging of the general theory of relativity and quantum mechanics into quantum gravity, has become an active area of research. While modern European thinkers are credited with the development of gravitational theory, some of the earliest descriptions came from early mathematician-astronomers, such as Aryabhata, who identified the force of gravity to explain why objects are not flung outward as the Earth rotates. Later, the works of Brahmagupta referred to the presence of this force and described it as an attractive force. Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries; this was a major departure from Aristotle's belief that heavier objects have a higher gravitational acceleration.
Galileo postulated air resistance as the reason that objects with less mass may fall more slowly in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity. In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets; calculations by both John Couch Adams and Urbain Le Verrier predicted the position of the planet. A discrepancy in Mercury's orbit pointed out flaws in Newton's theory. The issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit. The simplest way to test the equivalence principle is to drop two objects of different masses or compositions in a vacuum and see whether they hit the ground at the same time; such experiments demonstrate that all objects fall at the same rate when other forces are negligible.
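The inverse-square law hypothesized in the Principia states that the attractive force between two masses is F = G·m₁·m₂/r². A minimal sketch of that formula in Python (the gravitational constant and the figures for Earth's mass and radius are standard reference values, not taken from this article):

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def gravitational_force(m1, m2, r):
    """Attractive force (newtons) between point masses m1, m2 (kg) separated by r (m)."""
    return G * m1 * m2 / r**2

# Force on a 1 kg mass at Earth's surface (mass ~5.972e24 kg, radius ~6.371e6 m):
f = gravitational_force(5.972e24, 1.0, 6.371e6)
print(round(f, 2))  # roughly 9.8 N, matching the familiar surface value of g
```

Doubling the separation r quarters the force, which is the "inverse-square" behavior the text refers to.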
Gravitational interaction
–
Sir Isaac Newton, an English physicist who lived from 1642 to 1727
Gravitational interaction
–
Two-dimensional analogy of spacetime distortion generated by the mass of an object. Matter changes the geometry of spacetime, this (curved) geometry being interpreted as gravity. White lines do not represent the curvature of space but instead represent the coordinate system imposed on the curved spacetime, which would be rectilinear in a flat spacetime.
Gravitational interaction
–
Ball falling freely under gravity. See text for description.
Gravitational interaction
–
Gravity acts on the stars that form our Milky Way.
78.
Anubis
–
Anubis or Anpu is the Greek name of a god associated with mummification and the afterlife in ancient Egyptian religion, usually depicted as a canine or a man with a canine head. Like many ancient Egyptian deities, Anubis assumed different roles in various contexts. Depicted as a protector of graves as early as the First Dynasty, Anubis was also an embalmer. By the Middle Kingdom he was replaced by Osiris in his role as lord of the underworld. One of his prominent roles was as a god who ushered souls into the afterlife: he attended the weighing scale during the "Weighing of the Heart". Despite being one of the most ancient and one of the most frequently depicted and mentioned gods in the Egyptian pantheon, Anubis played almost no role in Egyptian myths. Anubis was depicted in black, a color that symbolized both rebirth and the discoloration of the corpse after embalming. Anubis is associated with Wepwawet, another Egyptian god portrayed with a dog's head or in canine form, but with grey or white fur; historians assume that the two figures were eventually combined. His daughter is the serpent goddess Kebechet. "Anubis" is a Greek rendering of this god's Egyptian name. In Egypt's Early Dynastic period, Anubis was portrayed in full animal form, with a jackal head and body. A jackal god, probably Anubis, is depicted in stone inscriptions from the reigns of Hor-Aha and Djer. The oldest known textual mention of Anubis is in the Pyramid Texts of the Old Kingdom, where he is associated with the burial of the pharaoh. In the Old Kingdom, Anubis was the most important god of the dead; he was replaced in that role by Osiris during the Middle Kingdom. In the Roman era, which started in 30 BC, tomb paintings depict him holding the hand of deceased persons to guide them to Osiris. The parentage of Anubis varied between myths, times and sources; in early mythology, he was portrayed as a son of Ra.
In the Coffin Texts, which were written in the First Intermediate Period, another tradition depicted him as the son of Ra and Nephthys. George Hart sees this story as an attempt to incorporate the independent deity Anubis into the Osirian pantheon, and an Egyptian papyrus from the Roman period simply called Anubis the son of Isis. In the Ptolemaic period, when Egypt became a Hellenistic kingdom ruled by Greek pharaohs, Anubis was merged with the Greek god Hermes; the two gods were considered similar because they both guided souls to the afterlife. The center of this cult was in uten-ha/Sa-ka/Cynopolis, a place whose Greek name means "city of dogs". In Book XI of The Golden Ass by Apuleius, there is evidence that the worship of this god was continued in Rome through at least the 2nd century; indeed, Hermanubis also appears in the alchemical and hermetical literature of the Middle Ages. In contrast to real wolves, Anubis was a protector of graves and cemeteries, and several epithets attached to his name in Egyptian texts and inscriptions referred to that role. The Jumilhac papyrus recounts a tale in which Anubis protected the body of Osiris from Set: Set attempted to attack the body of Osiris by transforming himself into a leopard, but Anubis stopped and subdued him, and branded Set's skin with a hot iron rod.
Anubis
–
Anubis attending the mummy of the deceased.
Anubis
–
Statue of Hermanubis, a hybrid of Anubis and the Greek god Hermes (Vatican Museums)
Anubis
–
The "weighing of the heart," from the book of the dead of Hunefer. Anubis is portrayed as both guiding the deceased forward and manipulating the scales, under the scrutiny of the ibis-headed Thoth.
Anubis
–
A crouching or "recumbent" statue of Anubis as a black-coated wolf (from the Tomb of Tutankhamun)
79.
Ratio
–
In mathematics, a ratio is a relationship between two numbers indicating how many times the first number contains the second. For example, if a bowl of fruit contains eight oranges and six lemons, then the ratio of oranges to lemons is eight to six (that is, 8:6, which is equivalent to 4:3); thus, a ratio can also be expressed as a fraction rather than a whole number. Likewise, in this example the ratio of lemons to oranges is 6:8. The numbers compared in a ratio can be any quantities of a comparable kind, such as objects, persons or lengths. A ratio is written "a to b" or a:b; when the two quantities have the same units, as is often the case, their ratio is a dimensionless number. A rate is a quotient of variables having different units, but in many applications the word ratio is often used for this more general notion as well. The numbers A and B are sometimes called terms, with A being the antecedent and B the consequent. The proportion expressing the equality of the ratios A:B and C:D is written A:B = C:D or A:B::C:D. This latter form, when spoken or written in the English language, is expressed as "A is to B as C is to D". A, B, C and D are called the terms of the proportion; A and D are called the extremes, and B and C are called the means. The equality of three or more ratios is called a continued proportion. Ratios are sometimes used with three or more terms: the ratio of the dimensions of a "two by four" that is ten inches long is 2:4:10, and a good concrete mix is sometimes quoted as 1:2:4 for the ratio of cement to sand to gravel. It is impossible to trace the origin of the concept of ratio, because the ideas from which it developed would have been familiar to preliterate cultures. For example, the idea of one village being twice as large as another is so basic that it would have been understood in prehistoric society. However, it is possible to trace the origin of the word ratio to the Ancient Greek λόγος. Early translators rendered this into Latin as ratio; a more modern interpretation of Euclid's meaning is more akin to computation or reckoning.
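The extremes-and-means terminology above comes with the classical cross-multiplication rule: A:B = C:D exactly when the product of the extremes (A·D) equals the product of the means (B·C). A small sketch in Python (the helper names are illustrative, not from the article):

```python
from math import gcd

def simplify(a, b):
    """Reduce the ratio a:b to lowest terms."""
    g = gcd(a, b)
    return (a // g, b // g)

def is_proportion(a, b, c, d):
    """A:B = C:D exactly when the product of the extremes equals the product of the means."""
    return a * d == b * c

print(simplify(6, 8))               # (3, 4): lemons to oranges from the fruit-bowl example
print(is_proportion(2, 4, 10, 20))  # True: 2:4 = 10:20
```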
Medieval writers used the word proportio ("proportion") to indicate ratio and proportionalitas ("proportionality") for the equality of ratios. Euclid collected the results appearing in the Elements from earlier sources. The Pythagoreans developed a theory of ratio and proportion as applied to numbers; the discovery of a theory of ratios that does not assume commensurability is probably due to Eudoxus of Cnidus. The exposition of the theory of proportions that appears in Book VII of the Elements reflects the earlier theory of ratios of commensurables. The existence of multiple theories seems unnecessarily complex to modern sensibility, since ratios are, to a large extent, identified with quotients. This is a comparatively recent development, however, as can be seen from the fact that modern geometry textbooks still use distinct terminology and notation for ratios and quotients.
Ratio
–
The ratio of width to height of standard-definition television.
80.
Balance scale
–
Weighing scales are devices to measure weight or calculate mass. Scales and balances are widely used in commerce, as many products are sold by weight. Very accurate balances, called analytical balances, are used in fields such as chemistry. Although records dating to the 1700s refer to spring scales for measuring weight, the earliest design for such a device dates to 1770 and credits Richard Salter, an early scale-maker. Postal workers could work more quickly with spring scales than balance scales because they could be read instantaneously. By the 1940s, various electronic devices were being attached to these designs to make them more accurate. A spring scale measures weight by reporting the distance that a spring deflects under a load. This contrasts with a balance, which compares the torque on the arm due to a sample weight to the torque on the arm due to a standard reference weight, using a horizontal lever. Spring scales measure force, which is the force of constraint acting on an object; they are usually calibrated so that the measured force translates to mass at Earth's gravity. The object to be weighed can be simply hung from the spring or set on a pivot-and-bearing platform. In a spring scale, the spring either stretches or compresses; by Hooke's law, every spring has a proportionality constant that relates how hard it is pulled to how far it stretches. Rack and pinion mechanisms are used to convert the linear spring motion to a dial reading. With proper manufacturing and setup, spring scales can be rated as legal for commerce. To remove the temperature error, a commerce-legal spring scale must either have temperature-compensated springs or be used at a fairly constant temperature, and to eliminate the effect of gravity variations, a spring scale must be calibrated where it is used. It is also common in high-capacity applications such as crane scales to use hydraulic force to sense weight.
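The Hooke's-law relation described above, deflection proportional to force, is what lets a spring scale report mass. A minimal sketch, assuming an ideal linear spring of stiffness k and calibration at standard gravity (the function name and the numbers are illustrative, not from the article):

```python
# Hooke's law: a spring deflects in proportion to the applied force, F = k * x.
# A scale calibrated for Earth's surface converts deflection to mass via m = k * x / g.
STANDARD_GRAVITY = 9.80665  # m/s^2, the conventional calibration value

def mass_from_deflection(k, x, g=STANDARD_GRAVITY):
    """Mass (kg) indicated by a spring of stiffness k (N/m) stretched by x (m)."""
    return k * x / g

# A 500 N/m spring stretched 2 cm indicates about 1.02 kg:
print(round(mass_from_deflection(500.0, 0.02), 2))
```

Changing g without recalibrating skews the reading, which is why, as the text notes, a commerce-legal spring scale must be calibrated where it is used.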
The test force is applied to a piston or diaphragm and transmitted through hydraulic lines to an indicator based on a Bourdon tube or electronic sensor. A digital bathroom scale is a type of electronic weighing machine; some are "smart scales" with functions such as smartphone integration, cloud storage and fitness tracking. In electronic versions of spring scales, the deflection of a beam supporting the weight is measured using a strain gauge. The capacity of such devices is limited only by the resistance of the beam to deflection. These scales are used in modern bakery, grocery, delicatessen, seafood, meat, produce and other perishable-goods departments.
Balance scale
–
Digital kitchen scale, a strain gauge scale
Balance scale
–
Scales used for trade purposes in the state of Florida, such as this scale at the checkout in a cafeteria, are inspected for accuracy by the FDACS's Bureau of Weights and Measures.
Balance scale
–
A two-pan balance
Balance scale
–
Two 10-decagram masses
81.
Carob
–
Ceratonia siliqua, commonly known as the carob tree, St John's-bread, locust bean, or simply locust-tree, is a species of flowering evergreen shrub or tree in the pea family, Fabaceae. It is widely cultivated for its edible pods, and as an ornamental tree in gardens. The ripe, dried pod is often ground to carob powder, and carob bars, an alternative to chocolate bars, are often available in health-food stores. The carat, a unit of mass for gemstones and of purity for gold, takes its name, indirectly, from the Greek word for a carob seed. The Ceratonia siliqua tree grows up to 15 m tall. The crown is broad and semispherical, supported by a thick trunk with brown rough bark. Leaves are 10 to 20 cm long, alternate and pinnate, and the tree is frost-tolerant to roughly 20 °F. Most carob trees are dioecious and some are hermaphrodite; the male trees do not produce fruit. The fruit is a legume that can be elongated, compressed, straight or curved; the pods take a full year to develop and ripen. The sweet ripe pods eventually fall to the ground and are eaten by mammals, such as swine. The seeds contain leucodelphinidin, a chemical compound. Although used extensively for agriculture, carob can still be found growing wild in eastern Mediterranean regions. The common Greek names are χαρουπιά and ξυλοκερατιά, the latter meaning "wooden horn"; in Turkey, it is known as keçiboynuzu, meaning "goat's horn". The various trees known as algarrobo in Latin America belong to a different subfamily. The carob genus, Ceratonia, belongs to the Fabaceae family, and is believed to be an archaic remnant of a part of this family now generally considered extinct. It grows well in temperate and subtropical areas, and tolerates hot climates. As a xerophyte species, carob is well adapted to the conditions of the Mediterranean region with 250 to 500 mm of rainfall per year. Carob trees can survive long dry periods, but need more water to grow fruit.
Trees prefer well-drained, sandy loams and are intolerant of waterlogging. After irrigation with saline water in summer, carob trees could possibly also recover during winter rainfalls. In some experiments, young carob trees maintained basic physiological functions at 40 mmol NaCl/l. Not all legume species can develop a symbiosis with rhizobia to use atmospheric nitrogen.
Carob
–
Carob tree (Portuguese: alfarroba; Greek: χαρουπιά, ξυλοκερατιά; Turkish: keçiboynuzu)
Carob
–
Carob tree in Sardinia, Italy
Carob
–
Ceratonia siliqua, ripe carob fruit pods
Carob
–
Chocolate chip cookies with carob powder instead of cocoa powder
82.
Siliqua
–
The siliqua is the modern name given to small, thin, Roman silver coins produced in the 4th century A.D. and later. When the coins were in circulation, the Latin word siliqua was a unit of weight defined as one twenty-fourth of the weight of a Roman solidus: "siliqua vicesima quarta pars solidi est, ab arbore, cuius semen est, vocabulum tenens" ("a siliqua is one twenty-fourth of a solidus, and takes its name from the seed of a tree"). The term siliqua comes from the siliqua graeca, the seed of the carob tree, which in the Roman weight system is equivalent to 1/6 of a scruple. The term has been applied in modern times to various silver coins on the premise that the coins were valued at 1/24 of the gold solidus, and therefore represented a siliqua of gold in value, since gold was worth about 14 times as much as silver in ancient Rome; however, there is little historical evidence to support this premise. The term is one of convenience, as no name for these coins is indicated by contemporary sources. Thin silver coins down to the 7th century which weigh about 2 to 3 grams are known as siliquae by numismatic convention. The majority of surviving examples suffer striking cracks or extensive clipping. See also: Roman currency; the Hoxne Hoard, a hoard of 14,212 silver siliquae dating from the early 5th century.
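The 1/24-of-a-solidus premise above can be checked arithmetically. Taking the solidus at its conventional weight of about 4.5 g (a standard reference figure, not stated in this article) and the 14:1 gold-to-silver value ratio quoted in the text:

```python
SOLIDUS_GRAMS = 4.5   # conventional weight of the Roman gold solidus (assumed figure)
GOLD_TO_SILVER = 14   # gold-to-silver value ratio quoted in the article

siliqua_weight = SOLIDUS_GRAMS / 24                   # the siliqua as a unit of weight
silver_equivalent = siliqua_weight * GOLD_TO_SILVER   # silver matching 1/24 solidus in value

print(round(siliqua_weight, 4))     # 0.1875 g
print(round(silver_equivalent, 3))  # 2.625 g, inside the 2-3 g range of the actual coins
```

The arithmetic shows why the premise is attractive: a silver coin worth 1/24 of a solidus would indeed weigh roughly what the surviving coins do, even though, as the text notes, direct historical evidence is lacking.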
Siliqua
–
Jovian siliqua, c. 363
Siliqua
–
Constantine III (usurper)
83.
Tycho Brahe
–
Tycho Brahe, born Tyge Ottesen Brahe, was a Danish nobleman known for his accurate and comprehensive astronomical and planetary observations. He was born in Scania, then a Danish peninsula. Well known in his lifetime as an astronomer, astrologer and alchemist, he has been described as "the first competent mind in modern astronomy to feel ardently the passion for exact empirical facts". His observations were some five times more accurate than the best available observations at the time. An heir to several of Denmark's principal noble families, he received a comprehensive education. He took an interest in astronomy and in the creation of more accurate instruments of measurement. His system correctly saw the Moon as orbiting Earth, and the planets as orbiting the Sun. Furthermore, he was the last of the major naked-eye astronomers, working without telescopes for his observations. In his De nova stella of 1573, he refuted the Aristotelian belief in an unchanging celestial realm. Using similar measurements, he showed that comets were also not atmospheric phenomena, as previously thought. On the island of Hven he founded manufactories, such as a paper mill, to provide material for printing his results. He later built an observatory at Benátky nad Jizerou; there, from 1600 until his death in 1601, he was assisted by Johannes Kepler, who later used Tycho's astronomical data to develop his three laws of planetary motion. Tycho's body has been exhumed twice, in 1901 and 2010, to examine the circumstances of his death. Both of his grandfathers and all of his great-grandfathers had served as members of the Danish king's Privy Council. His paternal grandfather and namesake Thyge Brahe was the lord of Tosterup Castle in Scania; Tycho's father Otte Brahe, like his father a royal Privy Councilor, married Beate Bille, who was herself a powerful figure at the Danish court, holding several royal land titles.
Both parents are buried under the floor of Kågeröd Church, four kilometres east of Knutstorp. Tycho was born at his family's ancestral seat of Knutstorp Castle, about eight kilometres north of Svalöv in then-Danish Scania. He was the oldest of 12 siblings, 8 of whom lived to adulthood; his twin brother died before being baptized. Tycho later wrote an ode in Latin to his dead twin. An epitaph, originally from Knutstorp but now on a plaque near the church door, shows the whole family, including Tycho as a boy. When he was two years old, Tycho was taken away to be raised by his uncle Jørgen Thygesen Brahe. It is unclear why Otte Brahe reached this arrangement with his brother, but Tycho later wrote that Jørgen Brahe "raised me and generously provided for me during his life until my eighteenth year; he always treated me as his own son and made me his heir". From ages 6 to 12, Tycho attended Latin school, probably in Nykøbing. At age 12, on 19 April 1559, Tycho began studies at the University of Copenhagen. There, following his uncle's wishes, he studied law, but also studied a variety of other subjects. At the University, Aristotle was a staple of scientific theory, and Tycho likely received a thorough training in Aristotelian physics and cosmology. He experienced the solar eclipse of 21 August 1560, and was greatly impressed by the fact that it had been predicted.
Tycho Brahe
–
Brahe wearing the Order of the Elephant
Tycho Brahe
–
Portrait of Tycho Brahe (1596) Skokloster Castle
Tycho Brahe
–
An artificial nose of the kind Tycho wore. This particular example did not belong to Tycho.
Tycho Brahe
–
Tycho Brahe's grave in Prague, new tomb stone from 1901
84.
Elliptical
–
In mathematics, an ellipse is a curve in a plane surrounding two focal points such that the sum of the distances to the two focal points is constant for every point on the curve. As such, it is a generalization of a circle, which is the special type of ellipse having both focal points at the same location. The shape of an ellipse is represented by its eccentricity, which for an ellipse can be any number from 0 to arbitrarily close to (but less than) 1. Ellipses are the closed type of conic section: a plane curve resulting from the intersection of a cone by a plane. Ellipses have many similarities with the other two forms of conic sections, parabolas and hyperbolas, both of which are open and unbounded. The cross section of a cylinder is an ellipse, unless the section is parallel to the axis of the cylinder. An ellipse may also be defined in terms of a focal point and a line outside the ellipse called the directrix: for all points on the ellipse, the ratio of the distance to the focus and the distance to the directrix is constant; this ratio is called the eccentricity of the ellipse. Ellipses are common in physics, astronomy and engineering. For example, the orbit of each planet in our solar system is approximately an ellipse with the barycenter of the planet–Sun pair at one of the focal points. The same is true for moons orbiting planets and all other systems having two astronomical bodies, and the shapes of planets and stars are often well described by ellipsoids. The ellipse is also the simplest Lissajous figure, formed when the horizontal and vertical motions are sinusoids with the same frequency; a similar effect leads to elliptical polarization of light in optics. The name, ἔλλειψις ("omission" or "falling short"), was given by Apollonius of Perga in his Conics. An ellipse with foci F1, F2 can be defined as the set of points E = {P : |PF1| + |PF2| = 2a}; in order to omit the special case of a line segment, one presumes 2a > |F1F2|. The midpoint C of the line segment joining the foci is called the center of the ellipse. The line through the foci is called the major axis; it contains the vertices V1, V2, which have distance a to the center. The distance c of the foci to the center is called the focal distance or linear eccentricity. The quotient e = c/a is the eccentricity; the case F1 = F2 yields a circle and is included as a special type of ellipse.
The circle with center F2 and radius 2a is called the circular directrix of the ellipse; this property should not be confused with the definition of an ellipse with the help of a directrix (a line) given above. Using Cartesian coordinates with the foci at (c, 0) and (−c, 0), for an arbitrary point (x, y) the distance to the first focus is sqrt((x − c)² + y²) and to the second focus sqrt((x + c)² + y²). Hence the point is on the ellipse whenever the condition sqrt((x − c)² + y²) + sqrt((x + c)² + y²) = 2a is fulfilled. Removing the radicals by suitable squarings produces the standard equation x²/a² + y²/b² = 1, where b² = a² − c². The shape parameters a and b are called the semi-major and semi-minor axes. The points V3 = (0, b) and V4 = (0, −b) are the co-vertices. It follows from the equation that the ellipse is symmetric with respect to both of the coordinate axes and hence symmetric with respect to the origin.
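The two characterizations above, the standard equation x²/a² + y²/b² = 1 and the constant focal-distance sum 2a, can be checked against each other numerically. A small sketch with illustrative semi-axes a = 5, b = 3 (so c = 4):

```python
from math import sqrt

def on_ellipse(x, y, a, b, tol=1e-9):
    """Check the standard equation x^2/a^2 + y^2/b^2 = 1 (semi-axes a >= b > 0)."""
    return abs(x * x / (a * a) + y * y / (b * b) - 1.0) < tol

def sum_of_focal_distances(x, y, a, b):
    """Distance from (x, y) to the two foci (+-c, 0), with c = sqrt(a^2 - b^2)."""
    c = sqrt(a * a - b * b)
    return sqrt((x - c) ** 2 + y ** 2) + sqrt((x + c) ** 2 + y ** 2)

a, b = 5.0, 3.0
x, y = 4.0, 1.8  # 16/25 + 3.24/9 = 0.64 + 0.36 = 1, so (4, 1.8) lies on the ellipse
print(on_ellipse(x, y, a, b))                        # True
print(round(sum_of_focal_distances(x, y, a, b), 6))  # 10.0, i.e. 2*a
```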
Elliptical
–
Drawing an ellipse with two pins, a loop, and a pen
Elliptical
–
An ellipse obtained as the intersection of a cone with an inclined plane.
85.
Square (algebra)
–
In mathematics, a square is the result of multiplying a number by itself; the verb "to square" is used to denote this operation. Squaring is the same as raising to the power 2, and is denoted by a superscript 2; for instance, the square of 3 may be written as 3², which is the number 9. In some cases when superscripts are not available, as for instance in programming languages or plain text files, the notations x^2 or x**2 may be used instead. The adjective which corresponds to squaring is quadratic. The square of an integer may also be called a square number or a perfect square. In algebra, the operation of squaring is often generalized to polynomials and other expressions; for instance, the square of the linear polynomial x + 1 is the quadratic polynomial x² + 2x + 1. One of the important properties of squaring, for numbers as well as in many other mathematical systems, is that the square of a number equals the square of its additive inverse: the function satisfies the identity x² = (−x)². This can also be expressed by saying that the squaring function is an even function. The squaring function preserves the order of non-negative numbers: larger numbers have larger squares. In other words, squaring is a monotonic function on the interval [0, +∞). Hence, zero is its global minimum. The only cases where the square x² of a number is less than x occur when 0 < x < 1, that is, when x belongs to the open interval (0, 1). This implies that the square of an integer is never less than the original number. Every positive real number is the square of exactly two numbers, one of which is strictly positive and the other of which is strictly negative. Zero is the square of only one number, itself. For this reason, it is possible to define the square root function. No square root can be taken of a negative number within the system of real numbers, because squares of all real numbers are non-negative. There are several uses of the squaring function in geometry. The name of the squaring function shows its importance in the definition of area: the area depends quadratically on the size, and the area of a shape n times larger is n² times greater.
The squaring function is related to distance through the Pythagorean theorem and its generalization, the parallelogram law. Euclidean distance is not a smooth function: the three-dimensional graph of distance from a fixed point forms a cone, with a non-smooth point at the tip of the cone. However, the square of the distance, which has a paraboloid as its graph, is a smooth and analytic function. The dot product of a Euclidean vector with itself is equal to the square of its length: v⋅v = v².
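The even-function identity and the 0 < x < 1 behavior described above are easy to verify directly (a trivial sketch):

```python
def square(x):
    """Square of x; an even function, so square(-x) == square(x)."""
    return x * x

print(square(-3) == square(3))  # True: the even-function identity x^2 = (-x)^2
print(square(0.5) < 0.5)        # True: squares of numbers in (0, 1) shrink
print(square(5))                # 25, the area of a 5-by-5 square
```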
Square (algebra)
–
The composition of a tiling of the plane (understood as a function on the complex plane) with the complex square function
Square (algebra)
–
5⋅5, or 5 2 (5 squared), can be shown graphically using a square. Each block represents one unit, 1⋅1, and the entire square represents 5⋅5, or the area of the square.
86.
Vincenzo Viviani
–
Vincenzo Viviani was an Italian mathematician and scientist, a pupil of Torricelli and a disciple of Galileo. Born and raised in Florence, Viviani studied at a Jesuit school. There, Grand Duke Ferdinando II de' Medici furnished him a scholarship to purchase mathematical books. He became a pupil of Evangelista Torricelli and worked on physics and geometry. In 1639, at the age of 17, he became an assistant of Galileo Galilei in Arcetri, and he remained a disciple until Galileo's death in 1642. From 1655 to 1656, Viviani edited the first edition of Galileo's collected works. After Torricelli's death in 1647, Viviani was appointed to fill his position at the Accademia dell'Arte del Disegno in Florence. Viviani was also one of the first members of the Grand Duke's experimental academy, the Accademia del Cimento. In 1660, Viviani and Giovanni Alfonso Borelli conducted an experiment to determine the speed of sound; the currently accepted value is 331.29 m/s at 0 °C, or 340.29 m/s at sea level. It has also been claimed that in 1661 he experimented with the rotation of pendulums, 190 years before the famous demonstration by Foucault. By 1666, Viviani started to receive many job offers as his reputation as a mathematician grew; that same year, Louis XIV of France offered him a position at the Académie Royale, and John II Casimir of Poland offered Viviani a post as his astronomer. Fearful of losing Viviani, the Grand Duke appointed him court mathematician; Viviani accepted this post and turned down his other offers. In 1687, he published a book on engineering, Discorso intorno al difendersi da riempimenti e dalle corrosione de' fiumi. Upon his death, Viviani left an almost completed work on the resistance of solids, which was subsequently completed and published by Luigi Guido Grandi.
In 1737, the Church finally allowed Galileo to be reburied in a grave with an elaborate monument. The monument that was created in the church of Santa Croce was constructed with the help of funds left by Viviani for that specific purpose, and Viviani's own remains were moved to Galileo's new grave as well. The lunar crater Viviani is named after him. In Florence, Viviani had Galileo's life and achievements written in Latin on the façade of his palace, which was then renamed Palazzo dei Cartelloni. See also: Racconto istorico della vita di Galileo Galilei; Galileo's Leaning Tower of Pisa experiment; Viviani's theorem; Viviani's curve; the Viviani page at Rice University's Galileo Project.
Vincenzo Viviani
–
Vincenzo Viviani
Vincenzo Viviani
–
The "Palazzo Viviani" or "Palazzo dei Cartelloni" with plaques and bust dedicated by Viviani to Galilei
87.
Ball
–
A ball is a round object with various uses. It is used in games, where the play of the game follows the state of the ball as it is hit, kicked or thrown by players. Balls can also be used for simpler activities, such as catch or marbles. Balls made from hard-wearing materials are used in engineering applications to provide very low friction bearings, and black-powder weapons use stone and metal balls as projectiles. Although many types of balls are today made from rubber, this form was unknown outside the Americas until after the voyages of Columbus: the Spanish were the first Europeans to see bouncing rubber balls, which were employed most notably in the Mesoamerican ballgame. Balls used in various sports in other parts of the world prior to Columbus were made from other materials such as animal bladders or skins, stuffed with various materials. Balls are among the most familiar spherical objects to humans. No Old English representative of the word's cognates is known; if ball- was native in Germanic, it may have been a cognate with the Latin foll-is in the sense of a thing "blown up or inflated". In the later Middle English spelling balle, the word coincided graphically with the French balle "ball"; French balle is itself assumed to be of Germanic origin. In Ancient Greek the word πάλλα for "ball" is attested besides the word σφαίρα. A ball, as the essential feature in many forms of gameplay requiring physical exertion, must date from the very earliest times: a rolling object appeals not only to a baby but to a kitten. Some form of game with a ball is found portrayed on Egyptian monuments. In Homer, Nausicaa was playing at ball with her maidens when Odysseus first saw her in the land of the Phaeacians, and Halios and Laodamas performed before Alcinous and Odysseus with ball play. Of regular rules for the playing of ball games, little trace remains, if there were any such.
Pollux mentions a game called episkyros, which has often been looked on as the origin of football. It seems to have been played by two sides, arranged in lines; how far there was any form of goal seems uncertain. Among the Romans, ball games were looked upon as an adjunct to the bath, and were graduated to the age and health of the bathers. One such ball was struck from player to player, who wore a kind of gauntlet on the arm. These games are known to us through the Romans, though the names are Greek. The various modern games played with a ball or balls and subject to rules are treated under their various names, such as polo, cricket, football, etc. Several sports use a ball in the shape of a prolate spheroid. See also: Buckminsterfullerene; Football; Kickball; Marbles; Penny floater; Prisoner Ball; Shuttlecock; Super Ball.
Ball
–
Russian leather balls (Russian: мячи), 12th-13th century.
Ball
–
Football from association football (soccer)
Ball
–
Baoding balls
Ball
–
Baseball
88.
Groove (engineering)
–
A groove is a long, narrow channel or depression built into a material. Examples include: a canal cut in a hard material, usually metal, which can be round, oval or an arc in order to fit another component such as a boss; it can also be on the circumference of a dowel or a bolt, and may receive a circlip, an o-ring or a gasket. Another example is a depression on the circumference of a cast or machined wheel, which may receive a cable, a rope or a belt. A third is a longitudinal channel formed in a hot-rolled rail profile, such as a grooved rail, where the groove is for the flange on a train wheel. See also: fluting; glass run channel; labyrinth seal; tongue and groove; tread.
Groove (engineering)
–
89.
Parchment
–
Parchment is a writing material made from specially prepared untanned skins of animals, primarily sheep, calves, and goats. It has been used as a medium for over two millennia. Vellum is a finer quality parchment made from the skins of kids, lambs and young calves. It may be called animal membrane by libraries and museums that wish to avoid distinguishing between parchment and the more restricted term vellum. Today the term parchment is often used in non-technical contexts to refer to any animal skin, particularly goat, sheep or cow, that has been scraped or dried under tension, while vellum in theory refers exclusively to calfskin and is used to denote a quality of material. The term parchment originally referred only to the skin of sheep and, occasionally, goats. The word parchment evolved from the name of the city of Pergamon, which was a thriving center of parchment production during the Hellenistic period. This account, which originated in the writings of Pliny the Elder, is dubious because parchment had been in use in Anatolia and elsewhere long before the rise of Pergamon; in the 2nd century BC a great library was set up in Pergamon that rivaled the famous Library of Alexandria. Writing on prepared animal skins had a long history, however: surviving examples include a roll unrolled by H. Ibscher and preserved in the Cairo Museum, a roll of the Twelfth Dynasty now in Berlin, and a text now in the British Museum. Though the Assyrians and the Babylonians impressed their cuneiform on clay tablets, early Islamic texts are also found on parchment. In the later Middle Ages, especially the 15th century, parchment was largely replaced by paper for most uses except luxury manuscripts. New techniques in paper milling allowed it to be much cheaper than parchment; it was still made of textile rags and of very high quality. With the advent of printing in the fifteenth century, the demands of printers far exceeded the supply of animal skins for parchment.
Although most copies of the Gutenberg Bible are on paper, some were printed on parchment: 12 of the 48 surviving copies. In 1490, Johannes Trithemius preferred the older methods, because "handwriting placed on parchment will be able to endure a thousand years. But how long will printing last, which is dependent on paper? If it lasts for two hundred years that is a long time." In fact high quality paper from this period has survived 500 years or more very well. The heyday of parchment use was during the medieval period, but there has been a growing revival of its use among artists since the late 20th century. Although parchment never stopped being used, it had ceased to be a primary choice for artists' supports by the end of the 15th-century Renaissance. This was partly due to its expense and partly due to its working properties: when the water in paint media touches parchment's surface, the collagen melts slightly, forming a bed for the paint.
Parchment
–
Central European (Northern) type of finished parchment made of goatskin stretched on a wooden frame
Parchment
–
Latin Grant written on fine parchment or vellum with seal dated 1329
Parchment
–
A 1385 copy of the Sachsenspiegel, a German legal code, written on parchment with straps and clasps on the binding
Parchment
–
A Sefer Torah, the traditional form of the Hebrew Bible, is a scroll of parchment.
90.
Sidereal orbital period
–
A sidereal year is the time taken by the Earth to orbit the Sun once with respect to the fixed stars. Hence it is also the time taken for the Sun to return to the same position with respect to the fixed stars after apparently travelling once around the ecliptic. It equals 365.25636 SI days for the J2000.0 epoch. The sidereal year differs from the tropical year, the period of time required for the ecliptic longitude of the Sun to increase 360 degrees, due to the precession of the equinoxes: the sidereal year is 20 min 24.5 s longer than the tropical year at J2000.0. Before the discovery of the precession of the equinoxes by Hipparchus in the Hellenistic period, the difference between the sidereal and tropical year was unknown. See also: anomalistic year, Gaussian year, orbital period, Julian year, precession, sidereal time, tropical year.
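The quoted 20 min 24.5 s difference follows from the two year lengths; a minimal check, assuming a J2000.0 tropical year of 365.24219 days (a value not stated in the text):

```python
# Difference between the sidereal and tropical year at J2000.0.
sidereal_days = 365.25636   # from the text
tropical_days = 365.24219   # assumed J2000.0 tropical year

diff_seconds = (sidereal_days - tropical_days) * 86400
minutes, seconds = divmod(diff_seconds, 60)
print(f"{int(minutes)} min {seconds:.1f} s")  # close to the quoted 20 min 24.5 s
```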
91.
Sidereal year
–
A sidereal year is the time taken by the Earth to orbit the Sun once with respect to the fixed stars. Hence it is also the time taken for the Sun to return to the same position with respect to the fixed stars after apparently travelling once around the ecliptic. It equals 365.25636 SI days for the J2000.0 epoch. The sidereal year differs from the tropical year, the period of time required for the ecliptic longitude of the Sun to increase 360 degrees, due to the precession of the equinoxes: the sidereal year is 20 min 24.5 s longer than the tropical year at J2000.0. Before the discovery of the precession of the equinoxes by Hipparchus in the Hellenistic period, the difference between the sidereal and tropical year was unknown. See also: anomalistic year, Gaussian year, orbital period, Julian year, precession, sidereal time, tropical year.
92.
Robert Hooke
–
Robert Hooke FRS was an English natural philosopher, architect and polymath. Allan Chapman has characterised him as "England's Leonardo". Robert Gunther's Early Science in Oxford, a history of science in Oxford during the Protectorate, Restoration and Age of Enlightenment, devotes five of its fourteen volumes to Hooke. Hooke studied at Wadham College, Oxford, during the Protectorate, where he became one of a tightly knit group of ardent Royalists led by John Wilkins. Here he was employed as an assistant to Thomas Willis and to Robert Boyle, and he built some of the earliest Gregorian telescopes and observed the rotations of Mars and Jupiter. In 1665 he inspired the use of microscopes for scientific exploration with his book Micrographia. Based on his microscopic observations of fossils, Hooke was an early proponent of biological evolution. Much of Hooke's scientific work was conducted in his capacity as curator of experiments of the Royal Society. Much of what is known of Hooke's early life comes from an autobiography that he commenced in 1696 but never completed. Richard Waller mentions it in his introduction to The Posthumous Works of Robert Hooke; the work of Waller, along with John Ward's Lives of the Gresham Professors and John Aubrey's Brief Lives, forms the major near-contemporaneous biographical accounts of Hooke. Robert Hooke was born in 1635 in Freshwater on the Isle of Wight to John Hooke. Robert was the last of four children, two boys and two girls, and there was an age difference of seven years between him and the next youngest. Their father John was a Church of England priest, the curate of Freshwater's Church of All Saints, and Robert Hooke was expected to succeed in his education and join the Church. John Hooke also was in charge of a school, and so was able to teach Robert. He was a Royalist and almost certainly a member of a group who went to pay their respects to Charles I when he escaped to the Isle of Wight. Robert, too, grew up to be a staunch monarchist.
As a youth, Robert Hooke was fascinated by observation and mechanical works. He dismantled a brass clock and built a wooden replica that, by all accounts, worked well enough, and he learned to draw, making his own materials from coal, chalk and ruddle. At Westminster School, Hooke quickly mastered Latin and Greek and made a study of Hebrew. Here, too, he embarked on his study of mechanics. It appears that Hooke was one of a group of students whom Busby educated in parallel to the main work of the school; contemporary accounts say he was not much seen in the school. In 1653, Hooke secured a chorister's place at Christ Church, Oxford. He was employed as an assistant to Dr Thomas Willis. There he met the natural philosopher Robert Boyle, and gained employment as his assistant from about 1655 to 1662, constructing and operating experimental apparatus. He did not take his Master of Arts until 1662 or 1663.
Robert Hooke
–
Modern portrait of Robert Hooke (Rita Greer 2004), based on descriptions by Aubrey and Waller; no contemporary depictions of Hooke are known to survive.
Robert Hooke
–
Memorial portrait of Robert Hooke at Alum Bay, Isle of Wight, his birthplace, by Rita Greer (2012).
Robert Hooke
–
Robert Boyle
Robert Hooke
–
Diagram of a louse from Hooke's Micrographia
93.
Escape velocity
–
The escape velocity from Earth is about 11.186 km/s at the surface. More generally, escape velocity is the speed at which the sum of an object's kinetic energy and its gravitational potential energy is equal to zero. An object launched with escape velocity in a direction pointing away from the ground of a massive body will move away from it forever, slowing down but never quite reaching zero speed; once escape velocity is achieved, no further impulse need be applied for it to continue in its escape. When given a speed V greater than the escape speed v_e, the object will asymptotically approach a hyperbolic excess speed v_∞ satisfying v_∞² = V² − v_e². In these equations atmospheric friction is not taken into account. Escape velocity is only required to send a ballistic object on a trajectory that will allow the object to escape the gravity well of the mass M. The existence of escape velocity is a consequence of conservation of energy: adding speed to the object expands the set of places it can reach, until with enough energy that set becomes infinite. For a given gravitational potential energy at a given position, the escape velocity is the minimum speed an object without propulsion needs to be able to escape from the gravity. Escape velocity is actually a speed rather than a velocity because it does not specify a direction: no matter what the direction of travel is, the object can escape. The simplest way of deriving the formula for escape velocity is to use conservation of energy. Imagine that a spaceship of mass m is at a distance r from the center of mass of the planet, and its initial speed is equal to its escape velocity, v_e. At its final state, it will be an infinite distance away from the planet, and its speed will be negligibly small. Setting the total energy in the two states equal, (1/2) m v_e² − GMm/r = 0, gives v_e = √(2GM/r). The same result is obtained by a relativistic calculation, in which case the variable r represents the radial coordinate or reduced circumference of the Schwarzschild metric. All speeds and velocities are measured with respect to the field. Additionally, the escape velocity at a point in space is equal to the speed that an object would have if it started at rest from an infinite distance and was pulled by gravity to that point. In common usage, the point is on the surface of a planet or moon. On the surface of the Earth, the escape velocity is about 11.2 km/s.
However, at 9,000 km altitude in space, it is less than 7.1 km/s. The escape velocity is independent of the mass of the escaping object: it does not matter if the mass is 1 kg or 1,000 kg; what differs is the amount of energy required. For an object of mass m, the energy required to escape the Earth's gravitational field is GMm / r. A related quantity is the specific orbital energy, which is essentially the sum of the kinetic and potential energy divided by the mass. An object has reached escape velocity when the specific orbital energy is greater than or equal to zero.
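The figures above can be reproduced from v_e = √(2GM/r); a minimal sketch, assuming standard values for G and for the Earth's mass and radius (with these assumed constants the 9,000 km altitude figure comes out near 7.2 km/s, close to the one quoted):

```python
import math

# Escape velocity v_e = sqrt(2*G*M/r); all constants are assumed values.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the Earth, kg
R_earth = 6.371e6    # mean radius of the Earth, m

def escape_velocity(r):
    """Minimum ballistic speed (m/s) to escape from distance r (m) from the centre."""
    return math.sqrt(2 * G * M_earth / r)

print(escape_velocity(R_earth) / 1000)          # about 11.19 km/s at the surface
print(escape_velocity(R_earth + 9.0e6) / 1000)  # escape speed at 9,000 km altitude
```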
Escape velocity
–
Luna 1, launched in 1959, was the first man-made object to attain escape velocity from Earth.
94.
Thought experiment
–
A thought experiment considers some hypothesis, theory, or principle for the purpose of thinking through its consequences. Given the structure of the experiment, it may not be possible to perform it. Perhaps the key experiment in the history of modern science is Galileo's demonstration that falling objects must fall at the same rate regardless of their masses. The experiment is described by Galileo in Discorsi e dimostrazioni matematiche thus: "do you not agree with me in this opinion? ... Hence the heavier body moves with less speed than the lighter; thus you see how, from your assumption that the heavier body moves more rapidly than the lighter one, I infer that the heavier body moves more slowly." Although the extract does not convey the elegance and power of the demonstration terribly well, it is clear that it is a thought experiment. Instead of this interpretation, many philosophers prefer to consider thought experiments to be merely the use of a hypothetical scenario to help understand the way things are. Thought experiments have been used in a variety of fields, including philosophy, law and physics. In philosophy, they have been used at least since classical antiquity. In law, they were well known to Roman lawyers quoted in the Digest. In physics and other sciences, notable thought experiments date from the 19th and especially the 20th century, but examples can be found at least as early as Galileo. Johann Witt-Hansen established that Hans Christian Ørsted was the first to use the Latin-German mixed term Gedankenexperiment, circa 1812; Ørsted was also the first to use its entirely German equivalent, Gedankenversuch, in 1820. The English term "thought experiment" was coined as a calque of Mach's Gedankenexperiment. Prior to its emergence, the activity of posing hypothetical questions that employed subjunctive reasoning had existed for a very long time.
However, people had no way of categorizing it or speaking about it; this helps to explain the extremely wide and diverse range of application of the term "thought experiment" once it had been introduced into English. In physics and other sciences many thought experiments date from the 19th and especially the 20th century. In Galileo's thought experiment, for example, the rearrangement of empirical experience consists in the original idea of combining bodies of different weight. Thought experiments have been used in philosophy, physics, and other fields. In law, the hypothetical is frequently used for such experiments. Regardless of their goal, all thought experiments display a patterned way of thinking that is designed to allow us to explain, predict and control events in a better and more productive way, or to ensure the avoidance of past failures. Scientists tend to use thought experiments as imaginary proxy experiments conducted prior to a real, physical experiment; in these cases, the result of the proxy experiment will often be so clear that there will be no need to conduct a physical experiment at all. However, thought experiments may make the theories they examine irrelevant, and could create new problems that are just as difficult. Scientists also use thought experiments when particular physical experiments are impossible to conduct, such as Einstein's thought experiment of chasing a light beam, leading to special relativity.
Thought experiment
–
Temporal representation of a prefactual thought experiment.
Thought experiment
–
A famous example, Schrödinger's cat (1935), presents a cat that might be alive or dead, depending on an earlier random event. It illustrates the problem of the Copenhagen interpretation applied to everyday objects.
Thought experiment
–
Temporal representation of a counterfactual thought experiment.
Thought experiment
–
Temporal representation of a semifactual thought experiment.
95.
Celestial spheres
–
The celestial spheres, or celestial orbs, were the fundamental entities of the cosmological models developed by Plato, Eudoxus, Aristotle, Ptolemy, Copernicus and others. Since it was believed that the fixed stars did not change their positions relative to one another, it was argued that they must be on the surface of a single starry sphere. In modern thought, the orbits of the planets are viewed as the paths of those planets through mostly empty space; when scholars applied Ptolemy's epicycles, by contrast, they presumed that each planetary sphere was exactly thick enough to accommodate them. In Greek antiquity the ideas of celestial spheres and rings first appeared in the cosmology of Anaximander in the early 6th century BC. In his cosmology all the wheel rims had originally been formed out of an original sphere of fire wholly encompassing the Earth, which had disintegrated into many individual rings. Hence, in Anaximander's cosmogony, in the beginning was the sphere, out of which celestial rings were formed. As viewed from the Earth, the ring of the Sun was highest, that of the Moon was lower, and the sphere of the stars was lowest. Following Anaximander, his pupil Anaximenes held that the stars, Sun, Moon, and planets are all made of fire. But whilst the stars are fastened on a crystal sphere like nails or studs, the Sun, Moon, and planets ride upon the air. And unlike Anaximander, he relegated the fixed stars to the region most distant from the Earth. After Anaximenes, Pythagoras, Xenophanes and Parmenides all held that the universe was spherical. Much later, in the fourth century BC, Plato's Timaeus proposed that the body of the cosmos was made in the most perfect and uniform shape, that of a sphere, but it posited that the planets were spherical bodies set in rotating bands or rings rather than wheel rims as in Anaximander's cosmology. Each planet is attached to the innermost of its own set of spheres. In his Metaphysics, Aristotle developed a cosmology of concentric spheres. Aristotle considers that these spheres are made of a fifth element, the aether.
Each of these concentric spheres is moved by its own god, an unchanging divine unmoved mover. By using eccentrics and epicycles, Ptolemy's geometrical model achieved greater mathematical detail and predictive accuracy than had been exhibited by earlier concentric spherical models of the cosmos. In Ptolemy's physical model, each planet is contained in two or more spheres, but in Book 2 of his Planetary Hypotheses Ptolemy depicted thick circular slices rather than spheres, as in its Book 1. The planetary spheres were arranged outwards from the spherical, stationary Earth at the centre of the universe in this order: the spheres of the Moon, Mercury, Venus, Sun, Mars, Jupiter and Saturn. In more detailed models the seven planetary spheres contained other secondary spheres within them. In antiquity the order of the lower planets was not universally agreed: Plato and his followers ordered them Moon, Sun, Mercury, Venus. A series of astronomers, beginning with the Muslim astronomer al-Farghānī, used the Ptolemaic model of nesting spheres to compute distances to the stars and planetary spheres. Al-Farghānī's distance to the stars was 20,110 Earth radii which, on the assumption that the radius of the Earth was 3,250 miles, came to 65,357,500 miles.
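Al-Farghānī's figure is just the product of the two quoted numbers, which is easy to verify:

```python
# al-Farghani's distance to the sphere of the fixed stars,
# computed from the nesting-spheres model figures quoted above.
distance_earth_radii = 20_110
earth_radius_miles = 3_250

distance_miles = distance_earth_radii * earth_radius_miles
print(distance_miles)  # 65357500, i.e. the 65,357,500 miles quoted
```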
Celestial spheres
–
The Earth within seven celestial spheres, from Bede, De natura rerum, late 11th century
Celestial spheres
–
Geocentric celestial spheres; Peter Apian's Cosmographia (Antwerp, 1539)
Celestial spheres
–
Thomas Digges' 1576 Copernican heliocentric model of the celestial orbs
Celestial spheres
–
Kepler's diagram of the celestial spheres, and of the spaces between them, following the opinion of Copernicus (Mysterium Cosmographicum, 2nd ed., 1621)
96.
Unit conversion
–
Conversion of units is the conversion between different units of measurement for the same quantity, typically through multiplicative conversion factors. The process of conversion depends on the situation and the intended purpose. This may be governed by regulation, contract, technical specifications or other published standards. Engineering judgment may include such factors as: the precision and accuracy of measurement and the associated uncertainty of measurement; the statistical confidence interval or tolerance interval of the initial measurement; the number of significant figures of the measurement; the intended use of the measurement, including the engineering tolerances; and historical definitions of the units and their derivatives used in old measurements, e.g. the international foot vs. the US survey foot. Some conversions from one system of units to another need to be exact; this is sometimes called soft conversion. It does not involve changing the configuration of the item being measured. By contrast, a hard conversion or an adaptive conversion may not be exactly equivalent. It changes the measurement to convenient and workable numbers and units in the new system, and it sometimes involves a slightly different configuration, or size substitution, of the item. Nominal values are sometimes allowed and used. A conversion factor is used to change the units of a quantity without changing its value. The unity bracket method of unit conversion consists of a fraction in which the denominator is equal to the numerator, but expressed in different units. Because of the identity property of multiplication, the value of a number will not change as long as it is multiplied by one; so long as the numerator and denominator of the fraction are equivalent, they will not affect the value of the measured quantity. There are many applications that offer conversions among thousands of units.
For example, the free software movement offers the command-line utility GNU units for Linux. This article gives lists of conversion factors for each of a number of physical quantities, which are listed in the index. For each physical quantity, a number of different units are shown. Conversions between units in the metric system can be discerned by their prefixes and are thus not listed in this article; exceptions are made if the unit is commonly known by another name. Within each table, the units are listed alphabetically, and the SI units are highlighted. Note: see Weight for detail of the mass/weight distinction and conversion.
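A minimal numerical sketch of the unity bracket method described above, converting 65 mile/h to metres per second; the two factors used are the exact definition of the international mile and of the hour, so each bracket has the value one:

```python
# Unity-bracket method: multiply by fractions whose numerator and
# denominator are equal quantities, so each factor has the value one.
speed_mph = 65.0

metres_per_mile = 1609.344   # exact, by definition of the international mile
seconds_per_hour = 3600.0    # exact

# 65 mile/h * (1609.344 m / 1 mile) * (1 h / 3600 s)
speed_ms = speed_mph * metres_per_mile / seconds_per_hour
print(round(speed_ms, 4))  # 29.0576 m/s
```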
97.
Cavendish experiment
–
Because of the unit conventions then in use, the gravitational constant does not appear explicitly in Cavendish's work; instead, the result was originally expressed as the specific gravity of the Earth. His experiment gave the first accurate values for these geophysical constants. The experiment was devised sometime before 1783 by geologist John Michell, who constructed a torsion balance apparatus for it. However, Michell died in 1793 without completing the work; after his death the apparatus passed to Francis John Hyde Wollaston and then to Henry Cavendish, who rebuilt the apparatus but kept close to Michell's original plan. Cavendish then carried out a series of measurements with the equipment. The apparatus constructed by Cavendish was a torsion balance made of a six-foot wooden rod suspended from a wire, with a 2-inch diameter, 1.61-pound lead sphere attached to each end. Two 12-inch, 348-pound lead balls were located near the smaller balls, about 9 inches away; the experiment measured the faint gravitational attraction between the small balls and the larger ones. The two large balls were positioned on alternate sides of the horizontal arm of the balance. Their mutual attraction to the small balls caused the arm to rotate; the arm stopped rotating when it reached an angle where the twisting force of the wire balanced the combined gravitational force of attraction between the large and small lead spheres. By measuring the angle of the rod and knowing the twisting force of the wire for a given angle, Cavendish found that the Earth's density was 5.448 ± 0.033 times that of water. The oscillation period of the balance was about 20 minutes; the torsion coefficient could be calculated from this and the mass and dimensions of the balance. Actually, the rod was never at rest: Cavendish had to measure the angle of the rod while it was oscillating. Cavendish's equipment was remarkably sensitive for its time. The force involved in twisting the torsion balance was very small, 1.74×10⁻⁷ N, about 1⁄50,000,000 of the weight of the small balls. Through two holes in the walls of the shed, Cavendish used telescopes to observe the movement of the torsion balance's horizontal rod; the motion of the rod was only about 0.16 inches. Cavendish was able to measure this small deflection to an accuracy of better than one hundredth of an inch using vernier scales on the ends of the rod. Cavendish's accuracy was not exceeded until C. V. Boys's experiment in 1895. In time, Michell's torsion balance became the dominant technique for measuring the gravitational constant, and this is why Cavendish's experiment became the Cavendish experiment. The formulation of Newtonian gravity in terms of a gravitational constant did not become standard until long after Cavendish's time; indeed, one of the first references to G is in 1873, 75 years after Cavendish's work. Cavendish expressed his result in terms of the density of the Earth; later authors reformulated his results in modern terms. For this reason, historians of science have argued that Cavendish did not measure the gravitational constant. Physicists, however, often use units where the gravitational constant takes a different form.
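The reformulation of Cavendish's density result into a value of G, mentioned above, can be sketched: from g = GM/R² and M = ρ·(4/3)πR³ it follows that G = 3g/(4πρR). The surface gravity g and Earth radius R below are assumed modern values, not Cavendish's own.

```python
import math

# Reformulating Cavendish's Earth-density result as a value of G.
# From g = G*M/R^2 and M = rho*(4/3)*pi*R^3:  G = 3*g / (4*pi*rho*R)
rho = 5.448e3   # Cavendish's density of the Earth, kg/m^3 (5.448 x water)
g = 9.81        # assumed surface gravity, m/s^2
R = 6.371e6     # assumed mean radius of the Earth, m

G = 3 * g / (4 * math.pi * rho * R)
print(G)  # about 6.7e-11 m^3 kg^-1 s^-2, close to the modern value
```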
Cavendish experiment
–
Detail showing torsion balance arm (m), large ball (W), small ball (x), and isolating box (ABCDE).
Cavendish experiment
–
Vertical section drawing of Cavendish's torsion balance instrument including the building in which it was housed. The large balls were hung from a frame so they could be rotated into position next to the small balls by a pulley from outside. Figure 1 of Cavendish's paper.
98.
Calibration
–
Calibration in measurement technology and metrology is the comparison of measurement values delivered by a device under test with those of a calibration standard of known accuracy. Such a standard could be another measurement device of known accuracy, a device generating the quantity to be measured, or a physical artifact such as a metre rule. Strictly speaking, the term calibration means just the act of comparison, and does not include any subsequent adjustment. The calibration standard is normally traceable to a national standard held by a National Metrology Institute. This definition states that the calibration process is purely a comparison, but introduces the concept of measurement uncertainty in relating the accuracies of the device under test and the standard. The increasing need for known accuracy and uncertainty, and the need for consistent and comparable standards internationally, has led to the establishment of national metrology institutes. In many countries a National Metrology Institute (NMI) will exist which will maintain primary standards of measurement which will be used to provide traceability to customers' instruments by calibration. The NMI supports the metrological infrastructure in that country by establishing an unbroken chain from the top level of standards to the instruments used for measurement. Examples of National Metrology Institutes are NPL in the UK, NIST in the United States, PTB in Germany and many others. Calibration may be done by national standards laboratories operated by the government or by private firms offering metrology services. Quality management systems call for an effective metrology system which includes formal, periodic, and documented calibration of all measuring instruments; ISO 9000 and ISO 17025 require that these actions be performed to a high standard. To communicate the quality of a calibration, the calibration value is often accompanied by a traceable uncertainty statement to a stated confidence level. This is evaluated through careful uncertainty analysis. Sometimes a DFS is required to operate machinery in a degraded state. Whenever this does happen, it must be in writing and authorized by a manager with the assistance of a calibration technician.
Measuring devices and instruments are categorized according to the quantities they are designed to measure. These categories vary internationally, e.g. NIST 150-2G in the U.S., and the standard instrument for each test device varies accordingly, e.g. a dead weight tester for pressure gauge calibration and a dry block temperature tester for temperature gauge calibration. In the common view, an instrument is "calibrated" when it has been adjusted to read correctly; this is the perception of the instrument's end-user. However, very few instruments can be adjusted to exactly match the standards they are compared to. For the vast majority of calibrations, the process is actually the comparison of an unknown to a known. The calibration process begins with the design of the measuring instrument that needs to be calibrated. The design has to be able to hold a calibration through its calibration interval; in other words, the design has to be capable of measurements that are within engineering tolerance when used within the stated environmental conditions over some reasonable period of time. Having a design with these characteristics increases the likelihood of the measuring instruments performing as expected.
Calibration
–
An example of a device whose calibration is off: a weighing scale that reads ½ ounce without any load.
Calibration
–
Indirect reading design showing a Bourdon tube from the front (left) and the rear (right).
Calibration
–
Gas pump with rotary flow indicator (yellow) and nozzle (red)
99.
Ernst Mach
–
Ernst Waldfried Josef Wenzel Mach was an Austrian physicist and philosopher, noted for his contributions to physics such as his study of shock waves. The ratio of the speed of an object to the speed of sound is named the Mach number in his honor. Mach was born in Brno-Chrlice, Moravia. His father, who had graduated from Charles University in Prague, acted as tutor to the noble Brethon family in Zlín, eastern Moravia. His grandfather, Wenzl Lanhaus, an administrator of the estate Chirlitz, was master builder of the streets there; his activities in that field later influenced the theoretical work of Ernst Mach. Some sources give Mach's birthplace as Turas/Tuřany, the site of the Chirlitz registry office; Peregrin Weiss baptized Ernst Mach into the Roman Catholic Church in Turas/Tuřany. Despite his Catholic background, he later became an atheist. Up to the age of 14, Mach received his education at home from his parents. He then entered a Gymnasium in Kroměříž, where he studied for three years. In 1855 he became a student at the University of Vienna; his early work focused on the Doppler effect in optics and acoustics. During that period, Mach continued his work in psycho-physics and in sensory perception. In 1867, he took the chair of Experimental Physics at the Charles University, Prague, where he stayed for 28 years before returning to Vienna. Mach's main contribution to physics involved his description and photographs of spark shock-waves: he described how when a bullet or shell moved faster than the speed of sound, it created a compression of air in front of it. Using schlieren photography, he and his son Ludwig were able to photograph the shadows of the shock waves. During the early 1890s Ludwig was able to invent an interferometer which allowed for much clearer photographs. One of the best-known of Mach's ideas is the so-called Mach principle, concerning the physical origin of inertia.
This principle was never written down by Mach, but was given a verbal form, attributed by Philipp Frank to Mach himself, as: "When the subway jerks, it's the fixed stars that throw you down." Mach also became known for his philosophy, developed in close interplay with his science. Mach defended a type of phenomenalism recognizing only sensations as real; this position seemed incompatible with the view of atoms and molecules as external, mind-independent things. He famously declared his disbelief in atoms after an 1897 lecture by Ludwig Boltzmann at the Imperial Academy of Science in Vienna. From about 1908 to 1911 Mach's reluctance to acknowledge the reality of atoms was criticized by Max Planck as being incompatible with physics. In 1898 Mach suffered from cardiac arrest, and in 1901 he retired from the University of Vienna and was appointed to the upper chamber of the Austrian parliament.
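The Mach number mentioned in the opening lines is simply a ratio; a minimal sketch, where the 340 m/s speed of sound and the projectile speed are illustrative assumed values:

```python
# Mach number: ratio of an object's speed to the local speed of sound.
speed_of_sound = 340.0   # m/s in air, assumed illustrative value

def mach_number(speed):
    """Return the Mach number for a given speed in m/s."""
    return speed / speed_of_sound

print(mach_number(680.0))  # 2.0, i.e. supersonic, like the bullet Mach photographed
```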
Ernst Mach
–
Ernst Mach (1838–1916)
Ernst Mach
–
Ernst Mach’s photography of a bow shockwave around a supersonic bullet, in 1888.
Ernst Mach
–
Bust of Mach in the Rathauspark (City Hall Park) in Vienna, Austria.
Ernst Mach
–
Spinning chair devised by Mach to investigate the experience of motion
100.
Newton's second law
–
Newton's laws of motion are three physical laws that, together, laid the foundation for classical mechanics. They describe the relationship between a body and the forces acting upon it, and its motion in response to those forces. More precisely, the first law defines the force qualitatively, the second law offers a quantitative measure of the force, and the third asserts that a single isolated force does not exist. These three laws have been expressed in several different ways over nearly three centuries. The three laws of motion were first compiled by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica, and Newton used them to explain and investigate the motion of many physical objects and systems. For example, in the third volume of the text, Newton showed that these laws of motion, combined with his law of universal gravitation, explained Kepler's laws of planetary motion. Newton's laws are applied to objects which are idealised as single point masses, in the sense that the size and shape of the body are neglected to focus on its motion. This can be done when the object is small compared to the distances involved in its analysis, or when the deformation and rotation of the body are of no importance. In this way, even a planet can be idealised as a particle for analysis of its orbital motion around a star. In their original form, Newton's laws of motion are not adequate to characterise the motion of rigid bodies and deformable bodies. Leonhard Euler in 1750 introduced a generalisation of Newton's laws of motion for rigid bodies called Euler's laws of motion; if a body is represented as an assemblage of discrete particles, each governed by Newton's laws of motion, then Euler's laws can be derived from Newton's laws. Euler's laws can, however, be taken as axioms describing the laws of motion for extended bodies. Newton's laws hold only with respect to a certain set of frames of reference called Newtonian or inertial reference frames. Some authors interpret the first law as defining what an inertial reference frame is; other authors treat the first law as a corollary of the second. The explicit concept of an inertial frame of reference was not developed until long after Newton's death.
In the interpretation given here, mass, acceleration, momentum, and force are assumed to be externally defined quantities. This is the most common interpretation, but not the only one: alternatively, one can consider the laws to be a definition of these quantities. Newtonian mechanics has been superseded by special relativity, but it remains useful as an approximation when the speeds involved are much slower than the speed of light. The first law states that if the net force on an object is zero, then the velocity of the object is constant. When the mass is a non-zero constant, the first law can be stated mathematically as ∑ F = 0 ⇔ d v / d t = 0. Consequently, an object that is at rest will stay at rest unless a force acts upon it, and an object that is in motion will not change its velocity unless a force acts upon it. This is known as uniform motion: an object continues to do whatever it happens to be doing unless a force is exerted upon it. If it is at rest, it continues in a state of rest; if it is moving, it continues to move without turning or changing its speed
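The two laws can be illustrated with a minimal numerical sketch; the mass, force, and time step below are illustrative values, not taken from the article.

```python
# Minimal sketch of Newton's first and second laws via Euler integration.
# All numbers (mass, force, time step) are illustrative.

def step(v, f, m, dt):
    """Advance velocity by one Euler step of dv/dt = F/m."""
    return v + (f / m) * dt

m = 2.0          # kg
v = 3.0          # m/s, initial velocity

# First law: with zero net force, the velocity never changes.
for _ in range(1000):
    v = step(v, 0.0, m, 0.01)
assert v == 3.0

# Second law: a constant net force of 4 N gives a = F/m = 2 m/s^2,
# so after 5 s the velocity has grown by 10 m/s.
v = 3.0
for _ in range(500):          # 500 steps of 0.01 s = 5 s
    v = step(v, 4.0, m, 0.01)
print(round(v, 6))            # 13.0
```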
Newton's second law
–
Newton's First and Second laws, in Latin, from the original 1687 Principia Mathematica.
Newton's second law
–
Isaac Newton (1643–1727), the physicist who formulated the laws
101.
Deuterium
–
Deuterium is one of two stable isotopes of hydrogen. The nucleus of deuterium, called a deuteron, contains one proton and one neutron, whereas the nucleus of the far more common hydrogen isotope, protium, contains just one proton. Deuterium has a natural abundance in Earth's oceans of about one atom in 6,420 of hydrogen; thus deuterium accounts for approximately 0.0156% of all the naturally occurring hydrogen in the oceans, and the abundance changes slightly from one kind of natural water to another. The isotope's name is formed from the Greek deuteros, meaning "second". Deuterium was discovered and named in 1931 by Harold Urey; when the neutron was discovered in 1932, this made the structure of deuterium obvious. Soon after deuterium's discovery, Urey and others produced samples of water in which the deuterium content had been highly concentrated. Deuterium is destroyed in the interiors of stars faster than it is produced, and other natural processes are thought to produce only an insignificant amount of it. Nearly all deuterium found in nature was produced in the Big Bang 13.8 billion years ago, and the resulting primordial deuterium-to-hydrogen ratio is the ratio found in the gas giant planets, such as Jupiter. However, other bodies are found to have different ratios of deuterium to hydrogen-1. This is thought to be a result of natural isotope separation processes that occur from solar heating of ices in comets; like the water cycle in Earth's weather, such heating processes may enrich deuterium with respect to protium. The analysis of deuterium/protium ratios in comets has found results very similar to the mean ratio in Earth's oceans, reinforcing theories that much of Earth's ocean water is of cometary origin. However, the deuterium/protium ratio of the comet 67P/Churyumov-Gerasimenko, as measured by the Rosetta space probe, is about three times that of Earth's water, the highest yet measured in a comet. Deuterium/protium ratios thus continue to be an active topic of research in both astronomy and climatology. 
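The two abundance figures quoted above are the same number in different forms, as a quick arithmetic check shows:

```python
# One deuterium atom per ~6,420 hydrogen atoms in ocean water should
# match the quoted natural abundance of about 0.0156%.
abundance = 1 / 6420
print(f"{abundance:.4%}")   # 0.0156%
```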
Deuterium is frequently represented by the chemical symbol D; since it is an isotope of hydrogen with mass number 2, it is also represented by 2H. IUPAC allows both D and 2H, although 2H is preferred; the distinct chemical symbol is used for convenience because of the isotope's common use in various scientific processes. In quantum mechanics, the energy levels of electrons in atoms depend on the reduced mass of the system of electron and nucleus. For hydrogen, this correction factor is about 1837/1836, or 1.000545; for the heavier deuterium nucleus it is smaller, and the energies of spectroscopic lines for deuterium and light hydrogen therefore differ by the ratio of the two correction factors, which is about 1.000272. The wavelengths of all deuterium spectroscopic lines are accordingly slightly shorter than the corresponding lines of light hydrogen
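The isotope shift can be reproduced from the reduced mass directly. The sketch below uses standard nucleus-to-electron mass ratios; the 656.28 nm Balmer-alpha wavelength is an illustrative example, not from the article.

```python
# Isotope shift from the reduced mass mu = m_e * M / (m_e + M) of the
# electron-nucleus system. Mass ratios are CODATA-style values.
M_P = 1836.153   # proton mass / electron mass
M_D = 3670.483   # deuteron mass / electron mass

def reduced_mass_factor(M):
    """mu / m_e for an electron bound to a nucleus of mass M * m_e."""
    return M / (1 + M)

# Level energies scale with mu, so line energies differ by this ratio:
shift = reduced_mass_factor(M_D) / reduced_mass_factor(M_P)
print(round(shift, 6))           # 1.000272

# Deuterium lines sit at slightly shorter wavelengths; e.g. for the
# Balmer-alpha line of light hydrogen near 656.28 nm:
print(round(656.28 / shift, 2))  # ~656.1 nm
```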
Deuterium
–
Deuterium discharge tube
Deuterium
–
Full table
Deuterium
–
Ionized deuterium in a fusor reactor giving off its characteristic pinkish-red glow
Deuterium
–
Harold Urey
102.
International Bureau of Weights and Measures
–
The organisation is usually referred to by its French initialism, BIPM. The BIPM reports to the International Committee for Weights and Measures (CIPM), which in turn reports to the General Conference on Weights and Measures (CGPM); these organizations are also commonly referred to by their French initialisms. The BIPM was created on 20 May 1875, following the signing of the Metre Convention. Under the authority of the Convention, the BIPM helps to ensure uniformity of SI weights and measures around the world. It does so through a series of consultative committees, whose members are the national metrology laboratories of the Convention's member states. The BIPM carries out measurement-related research; it takes part in and organises international comparisons of national measurement standards and performs calibrations for member states. The BIPM has an important role in maintaining accurate worldwide time of day: it combines, analyses, and averages the official atomic time standards of member nations around the world to create a single, official Coordinated Universal Time. The BIPM is also the keeper of the international prototype of the kilogram. See also: Metrologia; Institute for Reference Materials and Measurements; International Organization for Standardization; National Institute of Standards and Technology; Official website
International Bureau of Weights and Measures
–
Pavillon de Breteuil in Sèvres, France.
International Bureau of Weights and Measures
–
Seal of the BIPM
103.
Proposed redefinition of SI base units
–
The metric system was originally conceived as a system of measurement that was derivable from nature. When the metric system was first introduced in France in 1799, technical limitations necessitated the use of physical artefacts such as the prototype metre. In 1960 the metre was redefined in terms of the wavelength of light from a specified source, making it once again derivable from nature. If the proposed redefinition is accepted, the system will, for the first time, be wholly derivable from nature rather than from physical artefacts. The proposal can be summarised as follows: there will still be the same seven base units. The second, metre, and candela are already defined by physical constants; the new definitions will improve the SI without changing the size of any units, thus ensuring continuity with present measurements. Further details are found in the draft chapter of the Ninth SI Units Brochure. The last major overhaul of the system was in 1960, when the International System of Units (SI) was formally published as a coherent set of units of measure. SI is structured around seven base units with apparently arbitrary definitions: although the set of units forms a coherent system, the definitions do not. The proposal before the CIPM seeks to remedy this by using fixed quantities of nature as the basis for deriving the base units. This will mean, amongst other things, that the international prototype kilogram will cease to serve as the definitive realisation of the kilogram. The second and the metre are already defined in such a manner. The basic structure of SI was developed over a period of about 170 years, and since 1960 technological advances have made it possible to address weaknesses in it. Originally, the metre was defined as one ten-millionth of the distance from the North Pole to the Equator; although such definitions were chosen so that nobody would "own" the units, they could not be measured with sufficient convenience or precision for practical use. 
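The logic of a constant-based definition can be sketched numerically. The speed-of-light value below is the exact defined value; the Planck-constant value is the one later fixed exactly in 2019, and the frequency is purely illustrative.

```python
# Fixing a constant fixes a unit. Since 1983 the speed of light is
# exactly 299_792_458 m/s, so one metre is the distance light travels
# in 1/299_792_458 of a second:
c = 299_792_458              # m/s, exact by definition
t = 1 / 299_792_458          # s
metre = c * t
print(metre)                 # ~1.0 (up to floating point)

# The proposed kilogram works the same way with the Planck constant:
# once h is fixed (6.62607015e-34 J s is the value later made exact),
# a mass can be realised from a measured frequency via m = h*nu / c**2.
h = 6.62607015e-34           # J s
nu = 1.0e35                  # Hz, illustrative frequency
m = h * nu / c**2            # kg equivalent of that photon energy
print(m)
```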
Instead, copies were created in the form of the mètre des Archives and the kilogramme des Archives. In 1875, by which time the use of the metric system had become widespread in Europe and in Latin America, twenty industrially developed nations met for the Convention of the Metre. The convention established two standing bodies: the CGPM, a conference that meets every four to six years, and the CIPM, a committee of eighteen eminent scientists, each from a different country, nominated by the CGPM. The CIPM meets annually and is tasked to advise the CGPM; it has set up a number of sub-committees, each charged with a particular area of interest. One of these, the Consultative Committee for Units (CCU), advises the CIPM on matters concerning units of measurement. Amongst other things, the first CGPM formally approved the use of 40 prototype metres and 40 prototype kilograms, made by the British firm Johnson Matthey, as the standards mandated by the Convention of the Metre
Proposed redefinition of SI base units
–
Mass drift over time of national prototypes K21–K40, plus two of the International Prototype Kilogram's (IPK's) sister copies: K32 and K8(41). All mass changes are relative to the IPK.
Proposed redefinition of SI base units
–
Current (2013) SI system: Dependence of base unit definitions on other base units (for example, the metre is defined in terms of the distance traveled by light in a specific fraction of a second)
Proposed redefinition of SI base units
–
A watt balance which is being used to measure the Planck constant in terms of the international prototype kilogram.
Proposed redefinition of SI base units
–
A near-perfect sphere of ultra-pure silicon - part of the Avogadro project, an International Avogadro Coordination project to determine the Avogadro number
104.
Mass in special relativity
–
Mass in special relativity incorporates the general understandings from the concept of mass–energy equivalence. Roche states that about 60% of modern authors just use rest mass and avoid relativistic mass. For a discussion of mass in general relativity, see mass in general relativity; for a general discussion including mass in Newtonian mechanics, see the article on mass. The term mass in special relativity usually refers to the rest mass of the object, which is the Newtonian mass as measured by an observer moving along with the object. The invariant mass is another name for the rest mass of single particles; the more general invariant mass loosely corresponds to the rest mass of a system. In the system's center-of-momentum frame the invariant mass is equal to the relativistic mass. The concept of invariant mass does not require bound systems of particles, however; as such, it may also be applied to systems of particles in high-speed relative motion, and because of this it is employed in particle physics for systems which consist of widely separated high-energy particles. If such systems were derived from a single particle, then the calculation of the invariant mass of such systems, which is a never-changing quantity, will give the rest mass of the parent particle. It is often convenient in calculation that the invariant mass of a system is the total energy of the system, divided by c2, in the center-of-momentum (COM) frame. As with energy and momentum, the invariant mass of an isolated system cannot be destroyed or changed. The term relativistic mass is also sometimes used: this is the sum total quantity of energy in a body or system, divided by c2. As seen from the center of momentum frame, the relativistic mass is also the invariant mass, as discussed above. For other frames, the relativistic mass includes a contribution from the net kinetic energy of the body. Thus, unlike the invariant mass, the relativistic mass depends on the frame of reference. 
However, for given single frames of reference and for isolated systems, the relativistic mass is also a conserved quantity. Although some authors present relativistic mass as a fundamental concept of the theory, it has been argued that this is wrong, as the fundamentals of the theory relate to space–time; there is also disagreement over whether the concept is pedagogically useful. The notion of mass as a property of an object from Newtonian mechanics does not bear a precise relationship to the concept in relativity; Roche suggests that relativistic mass is only an artifact of convention. If a stationary box contains many particles, it weighs more in its rest frame the faster those particles are moving: any energy in the box adds to the mass, so that the relative motion of the particles contributes to the mass of the box
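The distinction between frame-dependent relativistic mass and frame-independent invariant mass can be sketched with the relation m²c⁴ = E² − (pc)². The numbers below are illustrative, in units where c = 1.

```python
import math

# Invariant mass from energy and momentum, m^2 = E^2 - p^2 in units
# where c = 1. Rest mass and Lorentz factor are illustrative values.
def invariant_mass(E, p):
    return math.sqrt(E**2 - p**2)

m0 = 1.0                     # rest mass
gamma = 2.0                  # Lorentz factor of some observer's frame
E = gamma * m0               # total energy in that frame
p = math.sqrt(E**2 - m0**2)  # momentum in that frame

# The "relativistic mass" E/c^2 depends on the frame...
print(E)                                  # 2.0
# ...but the invariant mass comes out the same in every frame:
print(round(invariant_mass(E, p), 12))    # 1.0
```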
Mass in special relativity
–
Dependency between the rest mass and E, given in 4-momentum (p 0, p 1) coordinates; p 0 c = E
105.
Relativistic mass
–
Mass in special relativity incorporates the general understandings from the concept of mass–energy equivalence. Roche states that about 60% of modern authors just use rest mass and avoid relativistic mass. For a discussion of mass in general relativity, see mass in general relativity; for a general discussion including mass in Newtonian mechanics, see the article on mass. The term mass in special relativity usually refers to the rest mass of the object, which is the Newtonian mass as measured by an observer moving along with the object. The invariant mass is another name for the rest mass of single particles; the more general invariant mass loosely corresponds to the rest mass of a system. In the system's center-of-momentum frame the invariant mass is equal to the relativistic mass. The concept of invariant mass does not require bound systems of particles, however; as such, it may also be applied to systems of particles in high-speed relative motion, and because of this it is employed in particle physics for systems which consist of widely separated high-energy particles. If such systems were derived from a single particle, then the calculation of the invariant mass of such systems, which is a never-changing quantity, will give the rest mass of the parent particle. It is often convenient in calculation that the invariant mass of a system is the total energy of the system, divided by c2, in the center-of-momentum (COM) frame. As with energy and momentum, the invariant mass of an isolated system cannot be destroyed or changed. The term relativistic mass is also sometimes used: this is the sum total quantity of energy in a body or system, divided by c2. As seen from the center of momentum frame, the relativistic mass is also the invariant mass, as discussed above. For other frames, the relativistic mass includes a contribution from the net kinetic energy of the body. Thus, unlike the invariant mass, the relativistic mass depends on the frame of reference. 
However, for given single frames of reference and for isolated systems, the relativistic mass is also a conserved quantity. Although some authors present relativistic mass as a fundamental concept of the theory, it has been argued that this is wrong, as the fundamentals of the theory relate to space–time; there is also disagreement over whether the concept is pedagogically useful. The notion of mass as a property of an object from Newtonian mechanics does not bear a precise relationship to the concept in relativity; Roche suggests that relativistic mass is only an artifact of convention. If a stationary box contains many particles, it weighs more in its rest frame the faster those particles are moving: any energy in the box adds to the mass, so that the relative motion of the particles contributes to the mass of the box
Relativistic mass
–
Dependency between the rest mass and E, given in 4-momentum (p 0, p 1) coordinates; p 0 c = E
106.
Rest energy
–
More precisely, the invariant mass is a characteristic of the system's total energy and momentum that is the same in all frames of reference related by Lorentz transformations. If a center of momentum frame exists for the system, then the invariant mass of the system is equal to its total mass in that rest frame. In other reference frames, where the system's momentum is nonzero, the total mass of the system is greater than the invariant mass. Due to mass–energy equivalence, the rest energy of the system is simply the invariant mass times the speed of light squared; similarly, the total energy of the system is its total (relativistic) mass times the speed of light squared. Systems whose four-momentum is a null vector have zero invariant mass; a physical object or particle moving faster than the speed of light would have space-like four-momenta, and these do not appear to exist. Any time-like four-momentum possesses a reference frame where the momentum is zero; in this case the invariant mass is positive and is referred to as the rest mass. If objects within a system are in relative motion, then the invariant mass of the whole system will differ from the sum of the objects' rest masses; it is also equal to the total energy of the system divided by c2. See mass–energy equivalence for a discussion of definitions of mass. For example, a scale would measure the kinetic energy of the molecules in a bottle of gas as part of the invariant mass of the bottle, and thus also its rest mass. The same is true for massless particles in such a system: they add invariant mass and also rest mass to systems. For an isolated massive system, the center of mass of the system moves in a straight line with a steady sub-luminal velocity; thus, an observer can always be placed to move along with it. In this frame, which is the center of momentum frame, the total momentum is zero. In this frame, which exists under these assumptions, the invariant mass of the system is equal to the total system energy divided by c2. 
This total energy in the center of momentum frame is the minimum energy which the system may be observed to have. Note that, for the reasons above, such a rest frame does not exist for a single photon; when two or more photons move in different directions, however, a center of mass frame does exist. Thus, the invariant mass of a system of several photons moving in different directions is positive, even though rest mass and invariant mass are zero for the individual photons; the photons nevertheless add mass to the invariant mass of the system. For this reason, invariant mass is in general not an additive quantity. Consider the simple case of a two-body system, where object A is moving towards another object B which is initially at rest
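The photon example above can be checked numerically from the four-momentum norm. The energies below are illustrative, in units where c = 1.

```python
import math

# A single photon has zero invariant mass, but a system of photons
# moving in different directions has positive invariant mass.
def system_invariant_mass(photons):
    """photons: list of (E, px, py) with E = |p| for each photon; c = 1."""
    E  = sum(e for e, _, _ in photons)
    px = sum(x for _, x, _ in photons)
    py = sum(y for _, _, y in photons)
    return math.sqrt(E**2 - px**2 - py**2)

# One photon: massless, so the invariant mass is zero.
print(system_invariant_mass([(1.0, 1.0, 0.0)]))           # 0.0

# Two equal photons moving in opposite directions: total momentum
# cancels, so the system's invariant mass is the full 2E, not 0 + 0.
print(system_invariant_mass([(1.0, 1.0, 0.0),
                             (1.0, -1.0, 0.0)]))          # 2.0
```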
Rest energy
–
Possible 4-momenta of particles. One has zero invariant mass, the other is massive
107.
Pedagogy
–
Pedagogy is the discipline that deals with the theory and practice of education; it thus concerns the study of how best to teach. Spanning a broad range of practice, its aims range from furthering liberal education to the specifics of vocational education. Instructive strategies are governed by the pupil's background knowledge and experience, situation, and environment, as well as the learning goals set by the student and teacher. One example would be the Socratic schools of thought. The teaching of adults, as a specific group, is referred to as andragogy. Johann Friedrich Herbart is considered the father of the conceptualization of pedagogy. Herbart's educational philosophy and pedagogy highlighted the correlation between personal development and the resulting benefits to society; in other words, Herbart proposed that humans become fulfilled once they establish themselves as productive citizens. Herbartianism refers to the movement underpinned by Herbart's theoretical perspectives. Referring to the teaching process, Herbart suggested five steps as crucial components: preparation, presentation, association, generalization, and application. Herbart suggests that pedagogy relates to having assumptions as an educator and a specific set of abilities with a deliberate end goal in mind. The word is a derivative of the Greek παιδαγωγία, from παιδαγωγός, itself a synthesis of παῖς ("child") and ἄγω ("I lead"); it is pronounced variously, as /ˈpɛdəɡɒdʒi/, /ˈpɛdəɡoʊdʒi/, or /ˈpɛdəɡɒɡi/. Negative connotations of pedantry have sometimes been intended, or taken. The degree Doctor of Pedagogy is awarded honorarily by some US universities to distinguished teachers, and the term is also used to denote an emphasis in education as a specialty in a field. The word pedagogue derives from the Greek for one who leads children. In Denmark, a pedagogue is a practitioner of pedagogy; the term is used for individuals who occupy jobs in pre-school education in Scandinavia. But a pedagogue can occupy various kinds of jobs, e.g. 
in retirement homes, prisons, and orphanages; these practitioners are often recognized as social pedagogues, as they perform their work on behalf of society. The pedagogue's job is usually distinguished from a teacher's by its primary focus on teaching children life-preparing knowledge such as social skills, with a very strong emphasis on the care and well-being of the child. Many pedagogical institutions also practice social inclusion, and the pedagogue's work also consists of supporting the child in their mental and social development. In Denmark, all pedagogues are educated at a series of institutes for social educators located in all major cities
Pedagogy
–
Douris Man with wax tablet
108.
Atomic nuclei
–
After the discovery of the neutron in 1932, models for a nucleus composed of protons and neutrons were quickly developed by Dmitri Ivanenko and Werner Heisenberg. Almost all of the mass of an atom is located in the nucleus, where protons and neutrons are bound together by the nuclear force. The diameter of the nucleus is in the range of 1.75 fm for hydrogen (the diameter of a single proton) to about 15 fm for the heaviest atoms; these dimensions are much smaller than the diameter of the atom itself, by a factor of about 23,000 (uranium) to about 145,000 (hydrogen). The branch of physics concerned with the study and understanding of the nucleus, including its composition and the forces which bind it together, is called nuclear physics. The nucleus was discovered in 1911, as a result of Ernest Rutherford's efforts to test Thomson's plum pudding model of the atom. The electron had already been discovered earlier by J. J. Thomson; knowing that atoms are electrically neutral, Thomson postulated that there must be a positive charge as well. In his plum pudding model, Thomson suggested that an atom consisted of negative electrons randomly scattered within a sphere of positive charge. When Rutherford's team fired alpha particles at a thin metal foil, to his surprise many of the particles were deflected at very large angles. This justified the idea of an atom with a dense center of positive charge. The term nucleus is from the Latin word nucleus, a diminutive of nux ("nut"); in 1844, Michael Faraday used the term to refer to the central point of an atom. The modern atomic meaning was proposed by Ernest Rutherford in 1912. The adoption of the term nucleus in atomic theory, however, was not immediate: in 1916, for example, Gilbert N. Lewis was still describing the atom in other terms. The nuclear strong force extends far enough from each baryon to bind the neutrons and protons together against the repulsive electrical force between the positively charged protons. The nuclear strong force has a short range, and essentially drops to zero just beyond the edge of the nucleus. 
The collective action of the positively charged nucleus is to hold the electrically negative electrons in their orbits about the nucleus. The collection of negatively charged electrons orbiting the nucleus displays an affinity for certain configurations. Which chemical element an atom represents is determined by the number of protons in the nucleus; the neutral atom will have an equal number of electrons orbiting that nucleus. Individual chemical elements can create more stable electron configurations by combining to share their electrons, and it is that sharing of electrons to create stable electronic orbits about the nucleus that appears to us as the chemistry of our macro world. Protons define the entire charge of a nucleus, and hence its chemical identity; neutrons are electrically neutral, but contribute to the mass of a nucleus to nearly the same extent as the protons. Neutrons explain the phenomenon of isotopes: varieties of the same chemical element which differ only in their atomic mass. Protons and neutrons are sometimes viewed as two different quantum states of the same particle, the nucleon
Atomic nuclei
–
Nuclear physics
109.
Nucleon
–
In chemistry and physics, nucleons are the proton and the neutron, which are the constituents of the atomic nucleus and are themselves composed of up and down quarks. The number of nucleons defines an isotope's mass number. The atom is made up of a nucleus surrounded by electrons. Until the 1960s, nucleons were thought to be elementary particles; now they are known to be composite particles, made of three quarks bound together by the so-called strong interaction. The interaction between two or more nucleons is called the internucleon interaction or nuclear force, which is ultimately caused by the strong interaction. Nucleons sit at the boundary where particle physics and nuclear physics overlap. Particle physics, particularly quantum chromodynamics, provides the fundamental equations that explain the properties of quarks and of the strong interaction. These equations explain quantitatively how quarks can bind together into protons and neutrons; however, when multiple nucleons are assembled into an atomic nucleus, these fundamental equations become too difficult to solve directly. Instead, nuclides are studied within nuclear physics, which studies nucleons and their interactions by approximations and models. These models can successfully explain nuclide properties, for example, whether or not a certain nuclide undergoes radioactive decay. The proton and neutron are both baryons and both fermions; one carries a positive net charge and the other carries a zero net charge, and the proton's mass is only 0.1% less than the neutron's. Thus, they can be viewed as two states of the nucleon, and together they form an isospin doublet. In isospin space, neutrons can be transformed into protons via SU(2) symmetries; the nucleons are acted upon equally by the strong interaction, which is invariant under rotation in isospin space. 
According to Noether's theorem, isospin is conserved with respect to the strong interaction. Protons and neutrons are most important and best known for constituting atomic nuclei, but they can also be found on their own, not as part of a larger nucleus. A proton on its own is the nucleus of the hydrogen-1 atom; a neutron on its own is unstable, but free neutrons can be found in nuclear reactions and are used in scientific analysis. Both the proton and neutron are made of three quarks: the proton is made of two up quarks and one down quark, while the neutron is made of one up quark and two down quarks. The quarks are held together by the strong force, or equivalently, by gluons. An up quark has electric charge +2⁄3 e, and a down quark has charge −1⁄3 e, so the total electric charges of the proton and neutron are +e and 0, respectively. The word neutron comes from the fact that it is electrically neutral. The masses of the proton and neutron are quite similar: the proton is 1.6726×10−27 kg or 938.27 MeV/c2, while the neutron is 1.6749×10−27 kg or 939.57 MeV/c2, so the neutron is roughly 0.1% heavier. The similarity in mass can be explained roughly by the slight difference in the masses of the up quark and down quark composing the nucleons; however, a detailed explanation remains an unsolved problem in particle physics
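The quark charges and nucleon masses quoted above can be checked with a short sketch; the quoted MeV/c² masses are taken from the text.

```python
from fractions import Fraction

# Nucleon charges from quark content: up = +2/3 e, down = -1/3 e.
up, down = Fraction(2, 3), Fraction(-1, 3)

proton  = [up, up, down]     # uud
neutron = [up, down, down]   # udd

print(sum(proton))           # 1  -> total charge +e
print(sum(neutron))          # 0  -> electrically neutral

# Mass difference quoted in the text: the neutron is ~0.1% heavier.
m_p, m_n = 938.27, 939.57    # MeV/c^2
print(round((m_n - m_p) / m_p * 100, 2))   # 0.14 (percent)
```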
110.
Thermal energy
–
In thermodynamics, thermal energy refers to the internal energy present in a system due to its temperature. Heat is energy transferred spontaneously from a hotter system or body to a colder one; heat is energy in transfer, not a property of the system. Internal energy, on the other hand, is a property of a system. The internal energy of an ideal gas can in this sense be regarded as thermal energy; in this case thermal energy and internal energy are identical. Systems that are more complex than ideal gases can undergo phase transitions, which can change the internal energy of the system without changing its temperature. Therefore, the thermal energy of such a system cannot be defined solely by the temperature. For these reasons, the concept of the thermal energy of a system is ill-defined and is not used in formal thermodynamics. In an 1847 lecture entitled On Matter, Living Force, and Heat, James Prescott Joule characterized various terms that are related to thermal energy. He identified the terms latent heat and sensible heat as forms of heat each affecting distinct physical phenomena, namely the potential and kinetic energy of particles, respectively. See also: Heat transfer; Ocean thermal energy conversion; Thermal science
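The ideal-gas case mentioned above can be made concrete. The sketch assumes a monatomic ideal gas, for which the internal energy is purely kinetic, U = (3/2) n R T; the amount and temperature below are illustrative.

```python
# Internal energy of a monatomic ideal gas, U = (3/2) n R T.
# Here the internal energy is entirely kinetic, so "thermal energy"
# and internal energy coincide. Values are illustrative.
R = 8.314          # J/(mol K), gas constant
n = 1.0            # mol
T = 300.0          # K

U = 1.5 * n * R * T
print(round(U, 1))     # 3741.3 (joules)
```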
Thermal energy
–
Thermal radiation in visible light can be seen on this hot metalwork. Thermal energy would ideally be the amount of heat required to warm the metal to its temperature, but this quantity is not well-defined, as there are many ways to obtain a given body at a given temperature, and each of them may require a different amount of total heat input. Thermal energy, unlike internal energy, is therefore not a state function.
111.
Mass in general relativity
–
The concept of mass in general relativity is more complex than the concept of mass in special relativity; in fact, general relativity does not offer a single definition of the term mass, and under some circumstances the mass of a system in general relativity may not even be defined. Concisely, in fundamental units where c = 1, the mass of a system in special relativity is the norm of its energy–momentum four-vector; in other units, the norm of the four-vector is m c. Generalizing this definition to general relativity, however, is problematic: how, then, does one define a concept such as a system's total mass, which is easily defined in classical mechanics? Loosely speaking, the ADM energy measures all of the energy contained in spacetime. Several kinds of proofs exist that both the ADM mass and the Bondi mass are indeed positive; in particular, this means that Minkowski space is indeed stable. There is, however, a variety of proposed quasi-local definitions, such as the Hawking energy, the Geroch energy, or Penrose's quasi-local energy–momentum based on twistor methods. A non-technical definition of a stationary spacetime is a spacetime where none of the metric coefficients g μ ν are functions of time. The Schwarzschild metric of a black hole and the Kerr metric of a rotating black hole are common examples of stationary spacetimes. By definition, a stationary spacetime exhibits time translation symmetry; this is technically called a time-like Killing vector. Because the system has a time translation symmetry, Noether's theorem guarantees that it has a conserved energy, and because a stationary system also has a well-defined rest frame in which its momentum can be considered to be zero, this defines the mass of the system. In general relativity, this mass is called the Komar mass of the system; Komar mass can only be defined for stationary systems. Komar mass can also be defined by a flux integral, similar to the way that Gauss's law defines the charge enclosed by a surface as the normal component of the electric field multiplied by the area. 
The flux integral used to define Komar mass is slightly different from that used to define the electric field, however: the relevant force is not the locally measured force, but the force measured at infinity. See the main article for more detail. Of the two definitions, the description of Komar mass in terms of a time translation symmetry provides the deeper insight. Space-times that become flat far from the source are known as asymptotically flat space-times; for systems in which space-time is asymptotically flat, the ADM and Bondi energy, momentum, and mass can be defined. Note that mass is computed as the length of the energy–momentum four-vector; when the total momentum Pi = 0, the mass of the system is just E/c2
Mass in general relativity
–
General relativity
112.
Gravitational mass
–
In physics, mass is a property of a physical body. It is the measure of an object's resistance to acceleration when a net force is applied. It also determines the strength of its mutual gravitational attraction to other bodies. The basic SI unit of mass is the kilogram (kg). Mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale, rather than comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity, but it would still have the same mass; this is because weight is a force, while mass is the property that determines the strength of this force. In Newtonian physics, mass can be generalized as the amount of matter in an object. However, at very high speeds, special relativity shows that energy is an additional source of mass; thus, any body having mass has an equivalent amount of energy. In addition, matter is a loosely defined term in science. There are several distinct phenomena which can be used to measure mass. Active gravitational mass measures the gravitational force exerted by an object; passive gravitational mass measures the gravitational force exerted on an object in a known gravitational field; and inertial mass measures an object's resistance to acceleration in the presence of an applied force: according to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by F/m. A body's mass also determines the degree to which it generates or is affected by a gravitational field; this is sometimes referred to as gravitational mass. The standard International System of Units (SI) unit of mass is the kilogram. The kilogram is 1000 grams, first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. Then in 1889, the kilogram was redefined as the mass of the international prototype kilogram. As of January 2013, there are proposals for redefining the kilogram yet again. 
In particle physics, mass often has units of eV/c2; the electronvolt and its multiples, such as the MeV, are commonly used. The atomic mass unit, defined as 1/12 of the mass of a carbon-12 atom, is convenient for expressing the masses of atoms and molecules. Outside the SI system, other units of mass include the slug, an Imperial unit of mass, and the pound, a unit of both mass and force, used mainly in the United States.
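The mass/weight distinction above can be made concrete with a short sketch. The helper names and the standard surface-gravity values (9.81 m/s2 for Earth, 1.62 m/s2 for the Moon) are illustrative choices, not from the article:

```python
def weight(mass_kg, g):
    """Weight is the gravitational force on a mass: F = m * g (newtons)."""
    return mass_kg * g

def acceleration(force_n, mass_kg):
    """Newton's second law rearranged: a = F / m."""
    return force_n / mass_kg

m = 10.0                      # mass is the same everywhere
print(weight(m, 9.81))        # ~98.1 N on Earth
print(weight(m, 1.62))        # ~16.2 N on the Moon: less weight, same mass
print(acceleration(98.1, m))  # 9.81 m/s^2
```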
Gravitational mass
–
Depiction of early balance scales in the Papyrus of Hunefer (dated to the 19th dynasty, ca. 1285 BC). The scene shows Anubis weighing the heart of Hunefer.
Gravitational mass
–
The kilogram is one of the seven SI base units and one of three which is defined ad hoc (i.e. without reference to another base unit).
Gravitational mass
–
Galileo Galilei (1636)
Gravitational mass
–
Distance traveled by a freely falling ball is proportional to the square of the elapsed time
113.
Inertial mass
–
In physics, inertial mass measures a body's resistance to acceleration when a net force is applied: according to Newton's second law of motion, a body of fixed mass m subjected to a single force F has acceleration a = F/m. Inertial mass is one of several distinct notions of mass; see the entry on gravitational mass above for the full summary of mass, its measurement, and its units.
114.
Nonlinear system
–
In mathematics and the physical sciences, a nonlinear system is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, physicists and mathematicians because nonlinear systems may appear chaotic, unpredictable or counterintuitive, in contrast with the much simpler linear systems. In a nonlinear system of equations, the equations to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be nonlinear regardless of whether known linear functions appear in the equations; in particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives. As nonlinear equations are difficult to solve, nonlinear systems are commonly approximated by linear equations. This works well up to some accuracy and some range of input values, and it follows that some aspects of the behavior of a nonlinear system appear commonly to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is not random. For example, some aspects of the weather are seen to be chaotic, and this nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology. Some authors use the term nonlinear science for the study of nonlinear systems; this usage is disputed by others, on the grounds that using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals. In mathematics, a linear function f is one which satisfies both of the following properties: additivity (superposition), f(x + y) = f(x) + f(y), and homogeneity, f(αx) = αf(x). Additivity implies homogeneity for any rational α and, for continuous functions, for any real α; for a complex α, homogeneity does not follow from additivity. For example, the complex conjugate map is additive but not homogeneous.
An equation of the form f(x) = C is called homogeneous if C = 0; if f contains differentiation with respect to x, the result is a differential equation. Nonlinear algebraic equations, which are also called polynomial equations, are defined by equating polynomials to zero; for example, x2 + x − 1 = 0. For a single equation, root-finding algorithms can be used to find solutions to the equation. However, systems of equations are more complicated; their study is one motivation for the field of algebraic geometry, and it can even be difficult to decide whether a given system has complex solutions.
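Both ideas above, linear approximation and root finding, come together in Newton's method, which solves a nonlinear equation by repeatedly linearizing it. A minimal sketch applied to the example x2 + x − 1 = 0 (function names are illustrative):

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method: repeatedly linearize f at x and solve the linear model."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Solve x^2 + x - 1 = 0; the positive root is (sqrt(5) - 1) / 2 ≈ 0.618034.
root = newton(lambda x: x * x + x - 1, lambda x: 2 * x + 1, x0=1.0)
print(root)
```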
Nonlinear system
–
Linearizations of a pendulum
115.
Wave function
–
A wave function in quantum physics is a description of the quantum state of a system. The wave function is a complex-valued probability amplitude, and the probabilities for the possible results of measurements made on the system can be derived from it. The most common symbols for a wave function are the Greek letters ψ and Ψ. The wave function is a function of the degrees of freedom corresponding to some set of commuting observables; once such a representation is chosen, the wave function can be derived from the quantum state. For a given system, the choice of which commuting degrees of freedom to use is not unique. Some particles, like electrons and photons, have nonzero spin, and the wave function for such particles includes spin as an intrinsic, discrete degree of freedom; other discrete variables, such as isospin, can also be included. These discrete values are often displayed in a column matrix. According to the superposition principle of quantum mechanics, wave functions can be added together and multiplied by complex numbers to form new wave functions. The Schrödinger equation determines how wave functions evolve over time, and a wave function behaves qualitatively like other waves, such as water waves or waves on a string, because the Schrödinger equation is mathematically a type of wave equation. This explains the name wave function and gives rise to wave–particle duality. However, the wave function in quantum mechanics describes a kind of physical phenomenon, still open to different interpretations, which fundamentally differs from that of classical mechanical waves. The integral of the squared modulus of the wave function, over all the degrees of freedom, must equal 1; this general requirement a wave function must satisfy is called the normalization condition. Since the wave function is complex-valued, only its relative phase and relative magnitude can be measured.
In 1905 Einstein postulated the proportionality between the frequency of a photon and its energy, E = hf, and in 1916 the corresponding relation between a photon's momentum and wavelength, λ = h/p; these equations represent wave–particle duality for both massless and massive particles. In the 1920s and 1930s, quantum mechanics was developed using calculus and linear algebra. Those who used the techniques of calculus included Louis de Broglie, Erwin Schrödinger, and others, developing wave mechanics; those who applied the methods of linear algebra included Werner Heisenberg, Max Born, and others, developing matrix mechanics. Schrödinger subsequently showed that the two approaches were equivalent. However, no one was clear on how to interpret the wave function. At first, Schrödinger and others thought that wave functions represent particles that are spread out, with most of the particle being where the wave function is large. This was shown to be incompatible with the scattering of a wave packet representing a particle off a target: while a scattered particle may scatter in any direction, it does not break up.
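The normalization condition mentioned above can be illustrated numerically. This is a minimal one-dimensional sketch, not from the article: a Gaussian wave packet is discretized on a grid and rescaled so that the Riemann-sum approximation of ∫ |ψ(x)|2 dx equals 1.

```python
import math

# Discretize an unnormalized Gaussian ψ(x) = e^{-x^2} on x ∈ [-5, 5] and
# enforce the normalization condition ∫ |ψ|^2 dx = 1 with a Riemann sum.
dx = 0.001
xs = [i * dx for i in range(-5000, 5001)]
psi = [math.exp(-x * x) for x in xs]

norm2 = sum(abs(p) ** 2 for p in psi) * dx    # ≈ ∫ e^{-2x^2} dx = sqrt(π/2)
psi = [p / math.sqrt(norm2) for p in psi]     # rescale so the integral is 1

print(sum(abs(p) ** 2 for p in psi) * dx)     # ≈ 1.0
```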
Wave function
–
The electron probability density for the first few hydrogen atom electron orbitals shown as cross-sections. These orbitals form an orthonormal basis for the wave function of the electron. Different orbitals are depicted with different scale.
116.
Covariance and contravariance of vectors
–
In multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometric or physical entities changes with a change of basis. In physics, a basis is sometimes thought of as a set of reference axes; a change of scale on the reference axes corresponds to a change of units in the problem. For instance, in changing scale from meters to centimeters, the components of a velocity vector will multiply by 100. Vectors exhibit this behavior of changing scale inversely to changes in the scale of the reference axes; as a result, vectors often have units of distance or distance times some other unit. In contrast, dual vectors (covectors) typically have units of the inverse of distance or the inverse of distance times some other unit; an example of a dual vector is the gradient, which has units of a spatial derivative, or distance−1. The components of dual vectors change in the same way as changes to the scale of the reference axes. For a vector to be basis-independent, its components must contra-vary with a change of basis to compensate: the matrix that transforms the vector of components must be the inverse of the matrix that transforms the basis vectors. The components of such vectors are said to be contravariant; in Einstein notation, contravariant components are denoted with upper indices, as in v = v^i e_i. For a dual vector to be basis-independent, the components of the dual vector must co-vary with a change of basis to remain representing the same covector: the components must be transformed by the same matrix as the change-of-basis matrix. The components of dual vectors are said to be covariant. Examples of covariant vectors generally appear when taking the gradient of a function; in Einstein notation, covariant components are denoted with lower indices, as in w = w_i e^i. Curvilinear coordinate systems, such as cylindrical or spherical coordinates, are often used in physical and geometric problems.
Tensors are objects in multilinear algebra that can have aspects of both covariance and contravariance. In physics, a vector typically arises as the outcome of a measurement or series of measurements and is represented as a list of numbers such as (v1, v2, v3). The numbers in the list depend on the choice of coordinate system; for a vector to represent a geometric object, it must be possible to describe how it looks in any other coordinate system. That is to say, the components of the vector will transform in a prescribed way in passing from one coordinate system to another. A contravariant vector has components that transform as the coordinates do under changes of coordinates, including rotation and dilation. The vector itself does not change under these operations; instead, the components of the vector change in a way that cancels the change in the spatial axes. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way; similarly, if the reference axes were stretched in one direction, the components of the vector would shrink in an exactly compensating way. This important requirement is what distinguishes a contravariant vector from any other triple of physically meaningful quantities.
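The inverse-matrix transformation rule can be demonstrated in two dimensions. A minimal sketch (illustrative values, not from the article): the new basis vectors are the columns of A, so contravariant components transform with A's inverse, and longer basis vectors yield proportionally smaller components.

```python
# Under a basis change e'_j = Σ_i A[i][j] e_i, contravariant components
# transform with the inverse matrix, so the vector itself is unchanged.
A = [[2.0, 0.0], [0.0, 4.0]]          # new basis vectors are 2x and 4x longer
v = [6.0, 8.0]                        # components in the old basis

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]

v_new = [Ainv[0][0] * v[0] + Ainv[0][1] * v[1],
         Ainv[1][0] * v[0] + Ainv[1][1] * v[1]]
print(v_new)  # [3.0, 2.0]: longer basis vectors, proportionally smaller components
```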
Covariance and contravariance of vectors
–
Tangent basis vectors (yellow, left: e1, e2, e3) to the coordinate curves (black)
117.
Dirac equation
–
In particle physics, the Dirac equation is a relativistic wave equation derived by British physicist Paul Dirac in 1928. In its free form, or including electromagnetic interactions, it describes all spin-1/2 massive particles such as electrons, and it was validated by accounting for the fine details of the hydrogen spectrum in a completely rigorous way. The equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved; moreover, in the limit of zero mass, the Dirac equation reduces to the Weyl equation. This accomplishment has been described as fully on a par with the works of Newton and Maxwell. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-1/2 particles. The Dirac equation in the form originally proposed by Dirac is

(β mc2 + c (α1 p1 + α2 p2 + α3 p3)) ψ(x, t) = iħ ∂ψ(x, t)/∂t,

where ψ is the wave function for an electron of rest mass m with spacetime coordinates x, t. The p1, p2, p3 are the components of the momentum, c is the speed of light, and ħ is the Planck constant divided by 2π. These fundamental physical constants reflect special relativity and quantum mechanics, respectively. Dirac's purpose in casting this equation was to explain the behavior of the relativistically moving electron, and so to allow the atom to be treated in a manner consistent with relativity. His rather modest hope was that the corrections introduced this way might have a bearing on the problem of atomic spectra. The new elements in this equation are the 4 × 4 matrices αk and β, and the four-component wave function ψ. There are four components in ψ because its evaluation at any point in configuration space is a bispinor; it is interpreted as a superposition of a spin-up electron, a spin-down electron, a spin-up positron, and a spin-down positron. These matrices and the form of the wave function have a deep mathematical significance: the algebraic structure represented by the matrices had been created some 50 years earlier by the English mathematician W. K. Clifford.
In turn, Clifford's ideas had emerged from the work of the German mathematician Hermann Grassmann in his Lineale Ausdehnungslehre, which had been regarded as well-nigh incomprehensible by most of his contemporaries. The appearance of something so seemingly abstract, at such a late date, and in such a direct physical manner, is one of the most remarkable chapters in the history of physics. The Dirac equation is superficially similar to the Schrödinger equation for a massive free particle,

−(ħ2/2m) ∇2 φ = iħ ∂φ/∂t.

The left side represents the square of the momentum operator divided by twice the mass. In the relativistic generalization obtained from the energy–momentum relation, space and time derivatives both enter to second order. This has a consequence for the interpretation of the equation: because the equation is second order in the time derivative, one must specify initial values both of the wave function itself and of its first time derivative in order to solve definite problems.
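The Clifford algebra mentioned above can be checked at the 2 × 2 level: the Pauli matrices, out of which Dirac's 4 × 4 matrices are built, satisfy the anticommutation relation σi σj + σj σi = 2 δij I. A pure-Python verification sketch (helper names are illustrative):

```python
# Verify the Clifford relation {σ_i, σ_j} = 2 δ_ij I for the Pauli matrices.
I2 = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

def mul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

for i, si in enumerate((sx, sy, sz)):
    for j, sj in enumerate((sx, sy, sz)):
        anti = add(mul(si, sj), mul(sj, si))                      # {σ_i, σ_j}
        expected = [[2 * I2[r][c] * (i == j) for c in range(2)] for r in range(2)]
        assert anti == expected
print("Pauli matrices satisfy the Clifford relation")
```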
Dirac equation
118.
Natural units
–
In physics, natural units are physical units of measurement based only on universal physical constants. For example, the elementary charge e is a natural unit of electric charge. Setting such constants to 1 simplifies expressions but precludes their interpretation in terms of the suppressed physical constants, such as e and c; in this case, the powers of e, c, and so on must be reinserted by dimensional analysis to recover an expression in ordinary units. Natural units are "natural" because the origin of their definition comes only from properties of nature. Planck units are often, without qualification, called natural units, although they constitute only one of several systems of natural units, albeit the best known such system. As with other systems of units, the units of a set of natural units will include definitions and values for length, mass, time, temperature, and electric charge. It is possible to disregard temperature as a fundamental physical quantity, since it states the energy per degree of freedom of a particle; virtually every system of natural units normalizes the Boltzmann constant kB to 1. There are two common ways to relate charge to mass, length, and time: in Lorentz–Heaviside units, Coulomb's law is F = q1q2/4πr2, and in Gaussian units, Coulomb's law is F = q1q2/r2. Both possibilities are incorporated into different natural unit systems. Here α is the fine-structure constant, ≈ 0.007297, and αG is the gravitational coupling constant, ≈ 1.752×10−45. Natural units are most commonly used by setting the chosen units to one. For example, many natural unit systems include the equation c = 1 in the unit-system definition, where c is the speed of light. If a velocity v is half the speed of light, then since v = c/2 and c = 1, the equation v = 1/2 means the velocity v has the value one-half when measured in Planck units, or that the velocity v is one-half the Planck unit of velocity. The equation c = 1 can be substituted anywhere else; for example, Einstein's equation E = mc2 can be rewritten in Planck units as E = m.
This equation means "the energy of a particle, measured in Planck units of energy, equals the mass of the particle, measured in Planck units of mass." By comparison, the special relativity equation E2 = p2c2 + m2c4 appears somewhat complicated; in natural units it becomes simply E2 = p2 + m2. Natural unit systems automatically subsume dimensional analysis. Physical interpretation: in Planck units, the units are defined by properties of quantum mechanics and gravity; not coincidentally, the Planck unit of length is approximately the distance at which quantum gravity effects become important. Likewise, atomic units are based on the mass and charge of an electron. No prototypes: a prototype is a physical object that defines a unit, such as the International Prototype Kilogram, a physical cylinder of metal whose mass is by definition exactly one kilogram. A prototype definition always has imperfect reproducibility between different places and between different times, and it is an advantage of natural unit systems that they use no prototypes. Less precise measurements: SI units are designed to be used in precision measurements; for example, the second is defined by an atomic transition frequency in cesium atoms, because this transition frequency can be precisely reproduced with atomic clock technology.
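Reinserting the suppressed powers of c can be illustrated by converting a mass quoted "in MeV/c2" back to SI. A minimal sketch using approximate CODATA-level constant values (chosen for illustration):

```python
# In natural units E = m; in SI the same statement reads E = m c^2.
c = 2.99792458e8        # speed of light, m/s
m_e = 9.1093837e-31     # electron mass, kg (approximate)
MeV = 1.602176634e-13   # joules per MeV

E = m_e * c ** 2        # rest energy in joules
print(E / MeV)          # ≈ 0.511: the electron mass expressed in MeV/c^2
```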
Natural units
–
Base units
119.
Higgs mechanism
–
In the Standard Model of particle physics, the Higgs mechanism is essential to explain the generation mechanism of the property "mass" for gauge bosons. Without the Higgs mechanism, all bosons would be massless, but measurements show that the W+, W−, and Z bosons actually have relatively large masses; the Higgs field resolves this conundrum. The simplest description of the mechanism adds a quantum field, the Higgs field, that permeates all space; below some extremely high temperature, the field causes spontaneous symmetry breaking during interactions. The breaking of symmetry triggers the Higgs mechanism, causing the bosons it interacts with to have mass. In the Standard Model, the phrase "Higgs mechanism" refers specifically to the generation of masses for the W± and Z weak gauge bosons through electroweak symmetry breaking. The Higgs mechanism was incorporated into modern particle physics by Steven Weinberg and Abdus Salam. In the Standard Model, at temperatures high enough that electroweak symmetry is unbroken, all elementary particles are massless; at a critical temperature, the Higgs field becomes tachyonic and the symmetry is broken by condensation. Fermions, such as the leptons and quarks in the Standard Model, can also acquire mass as a result of their interaction with the Higgs field. In the Standard Model, the Higgs field is an SU(2) doublet; the Higgs field, through the interactions specified by its potential, induces spontaneous breaking of three out of the four generators of the gauge group SU(2) × U(1). This is often written as SU(2) × U(1) → U(1), because the phase factor of the unbroken U(1) also acts on other fields, in particular quarks. Three out of its four components would ordinarily amount to Goldstone bosons. The gauge group of the electroweak part of the Standard Model is SU(2) × U(1); the group SU(2) is the group of all 2-by-2 unitary matrices with unit determinant. Rotating the coordinates so that the second basis vector points in the direction of the Higgs boson makes the vacuum expectation value of H the spinor (0, v).
The generators for rotations about the x, y, and z axes are given by half the Pauli matrices σx, σy, and σz. While the Tx and Ty generators mix up the top and bottom components of the spinor, the Tz rotations only multiply each by opposite phases. This phase can be undone by a U(1) rotation of angle θ/2; consequently, under both an SU(2) Tz-rotation and a U(1) rotation by an amount θ/2, the vacuum is invariant. This combination of generators preserves the vacuum and defines the unbroken gauge group in the Standard Model; the part of the gauge field in this direction stays massless and amounts to the physical photon. In spite of the introduction of spontaneous symmetry breaking, the mass terms preclude chiral gauge invariance; for these fields, the mass terms should always be replaced by a gauge-invariant "Higgs" mechanism. The quantities γμ are the Dirac matrices, and Gψ is the already-mentioned Yukawa coupling parameter. The mass generation follows the same principle as above, namely from the existence of a finite vacuum expectation value |⟨ϕ⟩|; again, this is crucial for the existence of the property "mass". Spontaneous symmetry breaking offered a framework to introduce bosons into relativistic quantum field theories. However, according to Goldstone's theorem, these bosons should be massless; the only observed particles which could be approximately interpreted as Goldstone bosons were the pions, which Yoichiro Nambu related to chiral symmetry breaking.
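The nonzero vacuum expectation value can be illustrated with a toy "Mexican hat" potential. This is a sketch under an assumed sign convention, V(φ) = −μ2φ2 + λφ4, chosen for illustration and not the Standard Model normalization:

```python
import math

# Toy symmetry-breaking potential: V(φ) = -μ² φ² + λ φ⁴ (illustrative convention).
mu2, lam = 1.0, 0.25

def V(phi):
    return -mu2 * phi ** 2 + lam * phi ** 4

# Setting dV/dφ = 0 gives the nonzero minimum φ = sqrt(μ² / (2λ)): the vacuum
# expectation value that spontaneously breaks the symmetry φ → -φ.
v = math.sqrt(mu2 / (2 * lam))
print(v)               # sqrt(2) ≈ 1.414
print(V(0.0) > V(v))   # True: the symmetric point φ = 0 is not the lowest state
```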
Higgs mechanism
–
Large Hadron Collider tunnel at CERN
Higgs mechanism
–
Philip W. Anderson, the first to propose the mechanism in 1962.
Higgs mechanism
–
Five of the six 2010 APS Sakurai Prize Winners – (L to R) Tom Kibble, Gerald Guralnik, Carl Richard Hagen, François Englert, and Robert Brout
Higgs mechanism
–
Number six: Peter Higgs 2009
120.
Electroweak symmetry breaking
–
In the Standard Model of particle physics, electroweak symmetry breaking is the spontaneous symmetry breaking through which the W± and Z weak gauge bosons acquire mass via the Higgs mechanism; see the entry on the Higgs mechanism above for the full summary.
121.
Causality
–
In general, a process has many causes, which are said to be causal factors for it, and all lie in its past; an effect can in turn be a cause of other effects. The concept of causality is like those of agency and efficacy; for this reason, a leap of intuition may be needed to grasp it. Accordingly, causality is built into the structure of ordinary language. In this context, failure to recognize that different kinds of cause are being considered can lead to futile debate. Of Aristotle's four explanatory modes, the one nearest to the concerns of the present article is the "efficient" one. The topic remains a staple in contemporary philosophy. In studying the meaning of causality, semantics traditionally appeals to the chicken-or-the-egg causality dilemma (which came first, the chicken or the egg?) and then identifies its constituent elements: a cause, an effect, and the link between them. The nature of cause and effect is a concern of the subject known as metaphysics. A general metaphysical question about cause and effect is what kind of entity can be a cause, and what kind of entity can be an effect. One viewpoint on this question is that cause and effect are of one and the same kind of entity, with causality an asymmetric relation between them. That is to say, it would make good sense grammatically to say either "A is the cause and B the effect" or "B is the cause and A the effect", though only one of those two can actually be true. In this view, one opinion, proposed as a metaphysical principle in process philosophy, is that every cause and every effect is respectively some process, event, or becoming; an example is "his tripping over the step was the cause, and his breaking his ankle the effect". Another view is that causes and effects are "states of affairs", with the exact natures of those entities being less restrictively defined than in process philosophy. Another viewpoint on the question is the classical one, that a cause and its effect can be of different kinds of entity; for example, in Aristotle's efficient causal explanation, an action can be a cause while an enduring object is its effect.
Since causality is a subtle metaphysical notion, considerable effort is needed to establish knowledge of it in particular empirical circumstances. Causality has the properties of antecedence and contiguity; these are topological, and are ingredients for space-time geometry. As developed by Alfred Robb, these properties allow the derivation of the notions of time and space. Max Jammer writes that the Einstein postulate "opens the way to a straightforward construction of the causal topology ... of Minkowski space". Causal efficacy propagates no faster than light; thus, the notion of causality is metaphysically prior to the notions of time and space.
Causality
–
The Illustrated Sutra of Cause and Effect. 8th century, Japan
Causality
–
Key concepts
Causality
–
Time Portal
122.
Phase transition
–
The term phase transition is most commonly used to describe transitions between solid, liquid and gaseous states of matter and, in rare cases, plasma. A phase of a thermodynamic system and the states of matter have uniform physical properties. For example, a liquid may become gas upon heating to the boiling point; the measurement of the external conditions at which the transformation occurs is termed the phase transition point. Phase transitions are common in nature and used today in many technologies. Examples of phase transitions include: a eutectic transformation, in which a two-component single-phase liquid is cooled and transforms into two solid phases (the same process, but beginning with a solid instead of a liquid, is called a eutectoid transformation); a peritectic transformation, in which a two-component single-phase solid is heated and transforms into a solid phase and a liquid phase; a spinodal decomposition, in which a single phase is cooled and separates into two different compositions of that same phase; transition to a mesophase between solid and liquid, such as one of the liquid crystal phases; the transition between the ferromagnetic and paramagnetic phases of magnetic materials at the Curie point; the transition between differently ordered, commensurate or incommensurate, magnetic structures, such as in cerium antimonide; the martensitic transformation, which occurs as one of the many phase transformations in carbon steel and stands as a model for displacive phase transformations; changes in the crystallographic structure, such as between ferrite and austenite of iron; order-disorder transitions, such as in alpha-titanium aluminides; the dependence of the adsorption geometry on coverage and temperature, such as for hydrogen on iron; the emergence of superconductivity in certain metals and ceramics when cooled below a critical temperature; the superfluid transition in liquid helium; and the breaking of symmetries in the laws of physics during the history of the universe as its temperature cooled. Isotope fractionation also occurs during a phase transition: the ratio of light to heavy isotopes in the involved molecules changes.
When water vapor condenses, the heavier water isotopes become enriched in the liquid phase while the lighter isotopes tend toward the vapor phase. Phase transitions occur when the thermodynamic free energy of a system is non-analytic for some choice of thermodynamic variables; this condition generally stems from the interactions of a large number of particles in a system. Phase transitions can also occur and are defined for non-thermodynamic systems; examples include quantum phase transitions, dynamic phase transitions, and topological phase transitions. In these types of systems, other parameters take the place of temperature.
Phase transition
–
A small piece of rapidly melting solid argon simultaneously shows the transitions from solid to liquid and liquid to gas.
Phase transition
–
This diagram shows the nomenclature for the different phase transitions.
123.
Superluminal
–
Faster-than-light (FTL) communication and travel refer to the propagation of information or matter faster than the speed of light. The special theory of relativity implies that only particles with zero rest mass may travel at the speed of light. Although according to current theories matter is still required to travel subluminally with respect to the locally distorted spacetime region, apparent FTL is not excluded by general relativity; examples of apparent FTL proposals are the Alcubierre drive and the traversable wormhole. This is not quite the same as traveling faster than light, since some processes propagate faster than c but cannot carry information. Neither of these phenomena violates special relativity or creates problems with causality. In the following examples, certain influences may appear to travel faster than light, but they do not convey energy or information faster than light, so they do not violate special relativity. For an Earthbound observer, objects in the sky complete one revolution around the Earth in one day. Proxima Centauri, the nearest star outside the solar system, is about four light-years away. In a geostatic frame, Proxima Centauri has a speed many times greater than c, since the rim speed of an object moving in a circle is the product of the radius and the angular speed. It is also possible in a geostatic view for objects such as comets to vary their speed from subluminal to superluminal: comets may have orbits which take them out to more than 1000 AU, and the circumference of a circle with a radius of 1000 AU is greater than one light-day. In other words, a comet at such a distance is superluminal in a geostatic, and therefore non-inertial, frame. If a laser beam is swept across a distant object, the spot of laser light can easily be made to move across the object at a speed greater than c.
Similarly, a shadow projected onto a distant object can be made to move across the object faster than c. In neither case does the light travel from the source to the object faster than c, nor does any information travel faster than light. If the uniform motion of a charged source is removed by a change of reference frame, the direction of its static field changes immediately; but this is not a change of position that propagates, and it cannot be used to transmit information from the source. No information or matter can be FTL-transmitted or propagated from source to receiver/observer by an electromagnetic field. The rate at which two objects in motion in a single frame of reference approach each other is called the mutual or closing speed. This may approach twice the speed of light, as in the case of two particles travelling at close to the speed of light in opposite directions with respect to the reference frame. Imagine two fast-moving particles approaching each other from opposite sides of a collider-type accelerator. The closing speed is the rate at which the distance between the two particles decreases; from the point of view of an observer standing at rest relative to the accelerator, this rate will be slightly less than twice the speed of light. Special relativity does not prohibit this. It does tell us that it is wrong to use Galilean relativity to compute the velocity of one of the particles as it would be measured by an observer traveling alongside the other particle.
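A minimal sketch of the closing-speed argument, using the relativistic velocity-composition law w = (u + v)/(1 + uv/c²):

```python
# Work in units where c = 1. Two particles move toward each other at
# 0.99c in the accelerator (lab) frame.
u, v = 0.99, 0.99

# Closing speed in the lab frame: the distance between the particles
# shrinks at u + v, which may approach 2c. No single object moves
# faster than c, so special relativity is not violated.
closing = u + v
print(closing)  # 1.98

# What an observer riding one particle measures for the other: Galilean
# addition (u + v) is wrong; the relativistic composition law applies
# and always gives a result below c.
w = (u + v) / (1 + u * v)
print(w)  # ~0.99995
```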
Superluminal
–
History of the universe - gravitational waves are hypothesized to arise from cosmic inflation, a faster-than-light expansion just after the Big Bang (17 March 2014).
124.
Quantum field theory
–
QFT treats particles as excited states of an underlying physical field, so these are called field quanta. In quantum field theory, quantum mechanical interactions among particles are described by interaction terms among the corresponding underlying quantum fields. These interactions are conveniently visualized by Feynman diagrams, a formal tool of relativistically covariant perturbation theory, serving to evaluate particle processes. The first achievement of quantum field theory, namely quantum electrodynamics, is still the paradigmatic example of a successful quantum field theory. Ordinary quantum mechanics cannot give an account of photons, which constitute the prime case of relativistic particles: since photons have zero rest mass, and correspondingly travel in the vacuum at the speed c, a non-relativistic theory such as ordinary QM cannot give even an approximate description. Photons are implicit in the emission and absorption processes which have to be postulated; the formalism of QFT is needed for an explicit description of photons. In fact, most topics in the early development of quantum theory were related to the interaction of radiation and matter. However, quantum mechanics as formulated by Dirac, Heisenberg, and Schrödinger in 1926–27 started from atomic spectra. As soon as the conceptual framework of quantum mechanics was developed, a small group of theoreticians tried to extend quantum methods to electromagnetic fields. A good example is the paper by Born, Jordan, and Heisenberg. The basic idea was that in QFT the electromagnetic field should be represented by matrices in the same way that position and momentum were represented in QM. The ideas of QM were thus extended to systems having an infinite number of degrees of freedom. The inception of QFT is usually considered to be Dirac's famous 1927 paper on "The quantum theory of the emission and absorption of radiation"; here Dirac coined the name quantum electrodynamics for the part of QFT that was developed first.
Employing the theory of the harmonic oscillator, Dirac gave a theoretical description of how photons appear in the quantization of the electromagnetic radiation field. Dirac's procedure later became a model for the quantization of other fields as well. These first approaches to QFT were further developed during the following three years. P. Jordan introduced creation and annihilation operators for fields obeying Fermi–Dirac statistics; these differ from the corresponding operators for Bose–Einstein statistics in that the former satisfy anti-commutation relations while the latter satisfy commutation relations. The methods of QFT could be applied to derive equations resulting from the quantum-mechanical treatment of particles, e.g. the Dirac equation and the Klein–Gordon equation. Schweber points out that the idea and procedure of second quantization go back to Jordan, in a number of papers from 1927. Some difficult problems concerning commutation relations, statistics, and Lorentz invariance were eventually solved. The first comprehensive account of a general theory of quantum fields, in particular the method of canonical quantization, was developed by Heisenberg and Pauli in 1929.
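The contrast between commutation and anti-commutation relations can be made concrete with finite matrices: a truncated Fock space for the bosonic case, a two-level space for the fermionic one. This is an illustrative sketch, not Jordan's original construction:

```python
import math

def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    n = len(A)
    return [[A[j][i] for j in range(n)] for i in range(n)]

# Bosonic annihilation operator on a Fock space truncated at N levels:
# a|n> = sqrt(n)|n-1>, so a[i][j] = sqrt(j) when j = i + 1.
N = 6
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(N)] for i in range(N)]
adag = transpose(a)

# Commutator [a, a+] = a a+ - a+ a: the identity on every level except
# the last, an artifact of truncating the infinite-dimensional space.
AAd, AdA = matmul(a, adag), matmul(adag, a)
comm = [[AAd[i][j] - AdA[i][j] for j in range(N)] for i in range(N)]
print([round(comm[i][i], 6) for i in range(N)])  # [1.0, 1.0, 1.0, 1.0, 1.0, -5.0]

# Fermionic operator c on a two-level space: the ANTI-commutator
# {c, c+} = c c+ + c+ c is exactly the identity.
c = [[0.0, 1.0], [0.0, 0.0]]
cdag = transpose(c)
CCd, CdC = matmul(c, cdag), matmul(cdag, c)
anti = [[CCd[i][j] + CdC[i][j] for j in range(2)] for i in range(2)]
print(anti)  # [[1.0, 0.0], [0.0, 1.0]]
```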
Quantum field theory
125.
Complex number
–
A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit, satisfying the equation i² = −1. In this expression, a is the real part and b is the imaginary part of the complex number: if z = a + bi, then ℜ(z) = a and ℑ(z) = b. Complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part; the complex number a + bi can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary, whereas a complex number whose imaginary part is zero is a real number. In this way, the complex numbers are a field extension of the ordinary real numbers. As well as their use within mathematics, complex numbers have applications in many fields, including physics, chemistry, biology, economics, and electrical engineering. The Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers; he called them fictitious during his attempts to find solutions to cubic equations in the 16th century. Complex numbers allow solutions to certain equations that have no solutions in real numbers. For example, the equation (x + 1)² = −9 has no real solution, since the square of a real number cannot be negative; complex numbers provide a solution to this problem. The idea is to extend the real numbers with the imaginary unit i, where i² = −1. According to the fundamental theorem of algebra, all polynomial equations with real or complex coefficients in a single variable have a solution in complex numbers. A complex number is a number of the form a + bi; for example, −3.5 + 2i is a complex number. The real number a is called the real part of the complex number a + bi, and the real number b is called the imaginary part. By this convention the imaginary part does not include the imaginary unit: it is b, not bi. The real part of a complex number z is denoted by Re(z) or ℜ(z), and the imaginary part by Im(z) or ℑ(z). For example, Re(−3.5 + 2i) = −3.5 and Im(−3.5 + 2i) = 2. Hence, in terms of its real and imaginary parts, a complex number z is equal to Re(z) + Im(z)·i. 
This expression is known as the Cartesian form of z. A real number a can be regarded as a complex number a + 0i whose imaginary part is 0
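Many programming languages implement complex numbers directly; in Python, for instance, the conventions above (real part, imaginary part without the unit, Cartesian form) look like this:

```python
# Python's built-in complex type uses j for the imaginary unit i.
z = -3.5 + 2j

print(z.real)  # -3.5  (the real part, Re z)
print(z.imag)  # 2.0   (the imaginary part, Im z: b without the unit i)

# Cartesian form: z equals Re(z) + Im(z)*i.
assert z == z.real + z.imag * 1j

# The defining property i^2 = -1:
print(1j ** 2)  # (-1+0j)
```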
Complex number
–
A complex number can be visually represented as a pair of numbers (a, b) forming a vector on a diagram called an Argand diagram, representing the complex plane. "Re" is the real axis, "Im" is the imaginary axis, and i is the imaginary unit, which satisfies i² = −1.
126.
Square root
–
In mathematics, a square root of a number a is a number y such that y² = a; in other words, a number y whose square is a. For example, 4 and −4 are square roots of 16 because 4² = (−4)² = 16. Every nonnegative real number a has a unique nonnegative square root, called the principal square root, which is denoted by √a, where √ is called the radical sign or radix. For example, the principal square root of 9 is 3, denoted √9 = 3. The term whose root is being considered is known as the radicand; the radicand is the number or expression underneath the radical sign, in this example 9. Every positive number a has two square roots: √a, which is positive, and −√a, which is negative. Together, these two roots are denoted ±√a. Although the principal square root of a positive number is only one of its two square roots, the designation "the square root" is often used to refer to the principal square root. For positive a, the principal square root can also be written in exponent notation, as a^(1/2). Square roots of negative numbers can be discussed within the framework of complex numbers. In Ancient India, the knowledge of theoretical and applied aspects of square and square root was at least as old as the Sulba Sutras; a method for finding very good approximations to the square roots of 2 and 3 is given in the Baudhayana Sulba Sutra. Aryabhata, in the Aryabhatiya, has given a method for finding the square root of numbers having many digits. It was known to the ancient Greeks that square roots of positive numbers that are not perfect squares are always irrational numbers, i.e. numbers not expressible as a ratio of two integers. This is the theorem Euclid X, 9, almost certainly due to Theaetetus, dating back to circa 380 BC. The particular case √2 is assumed to date back earlier, to the Pythagoreans, and is traditionally attributed to Hippasus. Mahāvīra, a 9th-century Indian mathematician, was the first to state that square roots of negative numbers do not exist. A symbol for square roots, written as an elaborate R, was invented by Regiomontanus. 
An R was also used for radix to indicate square roots in Gerolamo Cardano's Ars Magna. According to historian of mathematics D. E. Smith, Aryabhata's method for finding the square root was first introduced in Europe by Cataneo in 1546. According to Jeffrey A. Oaks, Arabs used the letter jīm/ĝīm, placed over a number, to indicate its square root; the initial form of the letter jīm resembles the present square root shape. Its usage goes as far as the end of the twelfth century, in the works of the Moroccan mathematician Ibn al-Yasamin. The symbol √ for the square root was first used in print in 1525, in Christoph Rudolff's Coss
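One classical way to obtain the "very good approximations" mentioned above is the Babylonian (Heron's) iteration. The sketch below is an illustration of that idea, not the specific digit-by-digit method of the Aryabhatiya:

```python
def heron_sqrt(a, tol=1e-12):
    """Principal square root of a nonnegative number a via the
    Babylonian (Heron's) iteration: x -> (x + a/x) / 2."""
    if a < 0:
        raise ValueError("no real square root for negative a")
    if a == 0:
        return 0.0
    x = a if a >= 1 else 1.0  # any positive starting guess converges
    while abs(x * x - a) > tol * a:
        x = (x + a / x) / 2
    return x

print(heron_sqrt(9))  # ~3.0
print(heron_sqrt(2))  # ~1.41421356...
```

The iteration converges quadratically: each step roughly doubles the number of correct digits, which is why a handful of iterations already gives machine precision.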
Square root
–
First leaf of the complex square root
Square root
–
The mathematical expression "the (principal) square root of x"
127.
Dark energy
–
In physical cosmology and astronomy, dark energy is an unknown form of energy which is hypothesized to permeate all of space, tending to accelerate the expansion of the universe. Dark energy is the most accepted hypothesis to explain the observations since the 1990s indicating that the universe is expanding at an accelerating rate. Assuming that the standard model of cosmology is correct, the best current measurements indicate that dark energy contributes 68.3% of the total energy in the present-day observable universe, while the mass–energy of dark matter and ordinary matter contribute 26.8% and 4.9%, respectively. The density of dark energy is very low, much less than the density of ordinary matter or dark matter within galaxies. However, it comes to dominate the mass–energy of the universe because it is uniform across space. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant can be formulated to be equivalent to the zero-point radiation of space, i.e. the vacuum energy. Scalar fields that change in space can be difficult to distinguish from a cosmological constant because the change may be extremely slow. High-precision measurements of the expansion of the universe are required to understand how the expansion rate changes over time. In general relativity, the evolution of the expansion rate is estimated from the curvature of the universe and the cosmological equation of state. Measuring the equation of state for dark energy is one of the biggest efforts in observational cosmology today. The equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion; likewise, a universe which contracts slightly will continue contracting. These sorts of disturbances are inevitable, due to the uneven distribution of matter throughout the universe. 
Further, observations made by Edwin Hubble in 1929 showed that the universe appears to be expanding; Einstein reportedly referred to his failure to predict the idea of a dynamic universe, in contrast to a static universe, as his greatest blunder. Alan Guth and Alexei Starobinsky proposed in 1980 that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe. Inflation postulates that some repulsive force, qualitatively similar to dark energy, resulted in an enormous and exponential expansion of the universe slightly after the Big Bang; such expansion is a feature of most current models of the Big Bang. It is unclear what relation, if any, exists between dark energy and inflation. Even after inflationary models became accepted, the cosmological constant was thought to be irrelevant to the current universe. Nearly all inflation models predict that the total density of the universe should be very close to the critical density. During the 1980s, most cosmological research focused on models with critical density in matter only, usually 95% cold dark matter and 5% ordinary matter. Then in 2001, the 2dF Galaxy Redshift Survey gave strong evidence that the matter density is around 30% of critical
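A rough sketch of why a dark-energy-dominated universe accelerates: in the second Friedmann equation the deceleration is proportional to the sum of Ωᵢ(1 + 3wᵢ) over the components, so the expansion accelerates when that sum is negative, i.e. when a component with equation of state w < −1/3 dominates. The density fractions below are the measured values quoted above; w = −1 assumes dark energy behaves as a cosmological constant:

```python
# (density fraction Omega, equation-of-state parameter w = p / (rho c^2))
components = {
    "dark energy": (0.683, -1.0),     # cosmological constant: w = -1
    "dark matter": (0.268, 0.0),      # pressureless matter: w = 0
    "ordinary matter": (0.049, 0.0),
}

# Deceleration is proportional to sum_i Omega_i * (1 + 3 * w_i);
# a negative value means accelerating expansion.
s = sum(omega * (1 + 3 * w) for omega, w in components.values())
print(s)      # about -1.05
print(s < 0)  # True: the expansion accelerates
```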
Dark energy
–
Diagram representing the accelerated expansion of the universe due to dark energy.
Dark energy
–
A Type Ia supernova (bright spot on the bottom-left) near a galaxy
Dark energy
–
The equation of state of Dark Energy for 4 common models by Redshift. A: CPL Model, B: Jassal Model, C: Barboza & Alcaniz Model, D: Wetterich Model
128.
Pressure
–
Pressure is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure is the pressure relative to the ambient pressure. Various units are used to express pressure. Pressure may also be expressed in terms of standard atmospheric pressure: the atmosphere (atm) is equal to this pressure, and the torr is defined as 1⁄760 of this. Manometric units, such as the centimetre of water and the millimetre of mercury, express pressure as the height of a column of a particular fluid. Pressure is the amount of force acting per unit area. The symbol for it is p or P. The IUPAC recommendation for pressure is a lower-case p; however, upper-case P is widely used. The usage of P vs p depends upon the field in which one is working, and on the nearby presence of other symbols for quantities such as power and momentum. Mathematically, p = F/A, where p is the pressure, F is the magnitude of the normal force, and A is the area of the surface on contact. Pressure is a scalar quantity; it relates the vector surface element with the normal force acting on it. It is incorrect to say the pressure is directed in such or such direction: the pressure, as a scalar, has no direction, while the force given by the relationship does. If we change the orientation of the surface element, the direction of the normal force changes accordingly. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume. The SI unit for pressure is the pascal (Pa), equal to one newton per square metre. This name for the unit was added in 1971; before that, pressure in SI was expressed simply in newtons per square metre. Other units of pressure, such as pounds per square inch, are also in common use. The CGS unit of pressure is the barye, equal to 1 dyn·cm−2 or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre, but using the names kilogram, gram, kilogram-force, or gram-force as units of force is expressly forbidden in SI. 
The technical atmosphere is 1 kgf/cm². Since a system under pressure has the potential to perform work on its surroundings, pressure is a measure of potential energy stored per unit volume; it is therefore related to energy density and may be expressed in units such as joules per cubic metre (which equal pascals). Similar pressures are given in kilopascals in most other fields, where the hecto- prefix is rarely used
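A small worked example of p = F/A and of the unit definitions above (the values of the atmosphere and the barye are exact definitions):

```python
# p = F / A: a 100 N force spread over a 10 cm x 10 cm surface.
F = 100.0   # newtons
A = 0.01    # square metres (0.10 m x 0.10 m)
p = F / A
print(p)    # 10000.0 pascals (N/m^2)

# Unit conversions mentioned above (values in pascals):
ATM = 101325.0    # standard atmosphere (exact definition)
TORR = ATM / 760  # the torr is defined as 1/760 atm
BARYE = 0.1       # CGS unit: 1 dyn/cm^2 = 0.1 Pa

print(p / ATM)         # ~0.0987 atm
print(round(TORR, 2))  # 133.32
```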
Pressure
–
Mercury column
Pressure
–
Pressure as exerted by particle collisions inside a closed container.
Pressure
–
The effects of an external pressure of 700 bar on an aluminum cylinder with 5 mm wall thickness
Pressure
–
Low-pressure chamber in the Bundesleistungszentrum Kienbaum, Germany
129.
Effective mass (solid-state physics)
–
The effective mass is a quantity that is used to simplify band structures by constructing an analogy to the behavior of a free particle with that mass. For some purposes and some materials, the effective mass can be considered to be a simple constant of a material. In general, however, the value of the effective mass depends on the purpose for which it is used. For electrons or electron holes in a solid, the effective mass is usually stated in units of the rest mass of an electron, me; in these units it is usually in the range 0.01 to 10. It can be shown that electrons placed in these bands behave as free electrons except with a different mass, as long as their energy stays within the range of validity of the parabolic approximation. As a result, the electron mass in models such as the Drude model must be replaced with the effective mass. One remarkable property is that the effective mass can become negative. This explains the existence of valence-band holes, the positive-charge, positive-mass quasiparticles that can be found in semiconductors. In any case, if the band structure has the simple parabolic form described above, then the value of effective mass is unambiguous. Unfortunately, this parabolic form is not valid for describing most materials. In such complex materials there is no single definition of effective mass but instead multiple definitions, each suited to a particular purpose. The rest of the article describes these effective masses in detail. In some important semiconductors the lowest energies of the conduction band are not symmetrical: the constant-energy surfaces are now ellipsoids, rather than the spheres of the isotropic case, with each band minimum offset from zero wavevector by k0,x, k0,y, and k0,z. Still, in crystals such as silicon the overall properties such as conductivity appear to be isotropic. This is because there are multiple valleys, each with effective masses rearranged along different axes. 
The valleys collectively act together to give an isotropic conductivity, and it is possible to average the different axes' effective masses together in some way, to regain the free electron picture. Other effective masses are more relevant to directly measurable phenomena. A classical particle under the influence of a force accelerates according to Newton's second law, a = m⁻¹F. This intuitive principle appears identically in semiclassical approximations derived from band structure, where the force changes the crystal momentum ℏk and the velocity is the group velocity of the band E(k). Combining these two equations yields a_i = ℏ⁻² (∂²E/∂k_i∂k_j) F_j, where the index j is contracted by the use of Einstein notation. Since Newton's second law uses the inertial mass, we can identify the inverse of this mass in the equation above as the tensor [M⁻¹]_ij = ℏ⁻² ∂²E/∂k_i∂k_j. This tensor expresses the change in velocity due to a change in crystal momentum. Its inverse, Minert, is known as the effective mass tensor; the only cases in which it remains constant are those of parabolic bands, described above
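The tensor definition above can be checked numerically. The sketch below assumes an illustrative anisotropic parabolic band with silicon-like mass values, in units where ℏ = 1 and the electron mass is 1, and recovers the constant effective-mass tensor by central finite differences:

```python
# Inverse effective-mass tensor: (M^-1)_ij = hbar^-2 * d^2E/(dk_i dk_j),
# estimated by central finite differences. Units: hbar = 1, m_e = 1.
# The dispersion below is an ASSUMED anisotropic parabolic band with
# silicon-like longitudinal/transverse masses, purely for illustration.
ml, mt = 0.92, 0.19

def E(kx, ky):
    return kx**2 / (2 * ml) + ky**2 / (2 * mt)

def inv_mass(i, j, k=(0.3, 0.4), h=1e-4):
    """Second partial derivative of E with respect to k_i and k_j."""
    def shifted(point, idx, d):
        p = list(point)
        p[idx] += d
        return tuple(p)
    kpp = shifted(shifted(k, i, h), j, h)
    kpm = shifted(shifted(k, i, h), j, -h)
    kmp = shifted(shifted(k, i, -h), j, h)
    kmm = shifted(shifted(k, i, -h), j, -h)
    return (E(*kpp) - E(*kpm) - E(*kmp) + E(*kmm)) / (4 * h * h)

# For a parabolic band the tensor is constant: diagonal entries are 1/m,
# and off-diagonal entries vanish.
print(round(inv_mass(0, 0) * ml, 6))       # 1.0
print(round(inv_mass(1, 1) * mt, 6))       # 1.0
print(abs(round(inv_mass(0, 1), 6)))       # 0.0
```

For a non-parabolic E(k) the same finite-difference tensor would vary with k, which is precisely why the effective mass is only constant for parabolic bands.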
Effective mass (solid-state physics)
–
Constant energy ellipsoids in silicon near the six conduction band minima. For each valley (band minimum), the effective masses are m ℓ = 0.92 m e ("longitudinal"; along one axis) and m t = 0.19 m e ("transverse"; along two axes).
Effective mass (solid-state physics)
–
Bulk band structure for Si, Ge, GaAs and InAs generated with a tight binding model. Note that Si and Ge are indirect with minima at X and L, while GaAs and InAs are direct band gap materials.
130.
University of Chicago Press
–
The University of Chicago Press is the largest and one of the oldest university presses in the United States. One of its quasi-independent projects is the BiblioVault, a repository for scholarly books. The Press building is located just south of the Midway Plaisance on the University of Chicago campus. The University of Chicago Press was founded in 1891, making it one of the oldest continuously operating university presses in the United States. Its first published book was Robert F. Harper's Assyrian and Babylonian Letters Belonging to the Kouyunjik Collections of the British Museum. For its first three years, the Press was an entity discrete from the university: it was operated by the Boston publishing house D. C. Heath in conjunction with the Chicago printer R. R. Donnelley. This arrangement proved unworkable, however, and in 1894 the university officially assumed responsibility for the Press. In 1902, as part of the university, the Press started working on the Decennial Publications, composed of articles and monographs by scholars and administrators on the state of the university and its faculty's research. The Decennial Publications prompted a radical reorganization of the Press. This allowed the Press, by 1905, to begin publishing books by scholars not of the University of Chicago. A manuscript editing and proofreading department was added to the staff of printers and typesetters, leading, in 1906, to the publication of the Manual of Style. By 1931, the Press was an established, leading academic publisher. Leading books of that era include Dr. Edgar J. Goodspeed's The New Testament: An American Translation and its successor, Goodspeed and J. M. Powis Smith's The Bible: An American Translation. In 1956, the Press first published books under its Phoenix Books imprint. Of the Press's best-known books, most date from the 1950s, including translations of the Complete Greek Tragedies and Richmond Lattimore's The Iliad of Homer. 
That decade also saw the first edition of A Greek-English Lexicon of the New Testament and Other Early Christian Literature. In 1966, Morris Philipson began his thirty-four-year tenure as director of the University of Chicago Press. As the Press's scholarly volume expanded, the Press also advanced as a trade publisher: in 1992, Norman Maclean's books A River Runs Through It and Young Men and Fire were national best sellers. In 1982, Philipson became the first director of an academic press to win the Publisher Citation, one of PEN's most prestigious awards. Paula Barker Duffy served as director of the Press from 2000 to 2007; under her administration, the Press expanded its distribution operations and created the Chicago Digital Distribution Center and BiblioVault. The Press also launched an electronic work, The Chicago Manual of Style Online. Garrett P. Kiely became the 15th director of the University of Chicago Press on September 1, 2007. The Press publishes over 50 new trade titles per year, across many subject areas. It also publishes regional titles, such as The Encyclopedia of Chicago. The Press has recently expanded its digital offerings to include most newly published books as well as key backlist titles
University of Chicago Press
–
University of Chicago Press
131.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book: for example, an e-book, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration of recognition was generated in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number, identifies periodical publications such as magazines. The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns published by Hodder in 1965 has SBN 340-01381-8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Numbers (EAN-13). 
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, where the service is provided by non-government-funded organisations, the issuing of ISBNs requires the payment of a fee. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker
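Both check-digit schemes are easy to compute. The sketch below implements the standard ISBN-10 (mod 11) and ISBN-13 (mod 10) rules and verifies them against the two examples in this article:

```python
def isbn13_check_digit(first12):
    """Check digit for a 13-digit ISBN: digits are weighted 1, 3, 1, 3,
    ... and the check digit makes the total a multiple of 10."""
    s = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(first12))
    return (10 - s % 10) % 10

def isbn10_check_digit(first9):
    """Check digit for a 10-digit ISBN: digits are weighted 10, 9, ..., 2
    and the check digit makes the total a multiple of 11 ('X' means 10)."""
    s = sum((10 - i) * int(d) for i, d in enumerate(first9))
    r = (11 - s % 11) % 11
    return "X" if r == 10 else str(r)

# SBN 340-01381-8 converted to an ISBN by prefixing 0: the check digit
# is unchanged, as stated above.
print(isbn10_check_digit("034001381"))     # 8
# The EAN-13 example 978-3-16-148410-0:
print(isbn13_check_digit("978316148410"))  # 0
```

Because the leading 0 has weight 10 but contributes 0 to the sum, prefixing an SBN with 0 leaves the mod-11 check digit unchanged, which is why no re-calculation is needed.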
International Standard Book Number
–
A 13-digit ISBN, 978-3-16-148410-0, as represented by an EAN-13 bar code
132.
Scientific American
–
Scientific American is an American popular science magazine. Many famous scientists, including Albert Einstein, have contributed articles in the past 170 years, and it is the oldest continuously published monthly magazine in the United States. Scientific American was founded by inventor and publisher Rufus M. Porter in 1845 as a weekly newspaper. Throughout its early years, much emphasis was placed on reports of what was going on at the U.S. Patent Office. Current issues include a "this date in history" section, featuring excerpts from articles originally published 50, 100, and 150 years earlier; topics include humorous incidents, wrong-headed theories, and noteworthy advances in the history of science. Porter sold the publication to Alfred Ely Beach and Orson Desaix Munn a mere ten months after founding it. Until 1948, it remained owned by Munn & Company. Under Munn's grandson, Orson Desaix Munn III, it had evolved into something of a workbench publication, similar to the twentieth-century incarnation of Popular Science. In the years after World War II, the magazine fell into decline. Thus the partners (publisher Gerard Piel, editor Dennis Flanagan, and general manager Donald H. Miller, Jr.) created essentially a new magazine. Miller retired in 1979, Flanagan and Piel in 1984, when Gerard Piel's son Jonathan became president and editor; circulation had grown fifteen-fold since 1948. In 1986, it was sold to the Holtzbrinck group of Germany. In the fall of 2008, Scientific American was put under the control of Nature Publishing Group, a division of Holtzbrinck. Donald Miller died in December 1998, Gerard Piel in September 2004. Mariette DiChristina is the current editor-in-chief, after John Rennie stepped down in June 2009. Scientific American published its first foreign edition in 1890, the Spanish-language La America Cientifica. A Russian edition, V Mire Nauki, was launched in the Soviet Union in 1983, and continues in the present-day Russian Federation. 
Kexue, a simplified Chinese edition launched in 1979, was the first Western magazine published in the People's Republic of China. Founded in Chongqing, the simplified Chinese magazine was transferred to Beijing in 2001. Later, in 2005, a new edition, Global Science, was published instead of Kexue. A traditional Chinese edition, known as 科學人, was introduced to Taiwan in 2002. The Hungarian edition Tudomány existed between 1984 and 1992. In 1986, an Arabic edition, Oloom magazine, was published. In 2002, a Portuguese edition was launched in Brazil. From 1902 to 1911, Scientific American supervised the publication of the Encyclopedia Americana. It originally styled itself "The Advocate of Industry and Enterprise" and "Journal of Mechanical and other Improvements". On the front page of the first issue was the engraving of "Improved Rail-Road Cars". The masthead had a commentary as follows: "Scientific American published every Thursday morning at No. 11 Spruce Street, New York, No. 16 State Street, Boston, and No. 21 Arcade Philadelphia, by Rufus Porter. Five copies will be sent to one address six months for four dollars in advance."
Scientific American
–
Cover of the March 2005 issue
Scientific American
–
PDF of first issue: Scientific American Vol. 1, No. 01 published August 28, 1845
Scientific American
–
Special Navy Supplement, 1898
133.
Dialogue Concerning the Two Chief World Systems
–
The Dialogue Concerning the Two Chief World Systems is a 1632 Italian-language book by Galileo Galilei comparing the Copernican system with the traditional Ptolemaic system. It was translated into Latin as Systema cosmicum in 1635 by Matthias Bernegger. The book was dedicated to Galileo's patron, Ferdinando II de' Medici, Grand Duke of Tuscany, who received the first printed copy on February 22, 1632. In the Copernican system, the Earth and other planets orbit the Sun, while in the Ptolemaic system everything in the universe circles around the Earth. The Dialogue was published in Florence under a formal license from the Inquisition. In 1633, Galileo was found to be vehemently suspect of heresy based on the book; in an action that was not announced at the time, the publication of anything else he had written or ever might write was also banned in Catholic countries. While writing the book, Galileo referred to it as his Dialogue on the Tides, and when the manuscript went to the Inquisition for approval, the title was Dialogue on the Ebb and Flow of the Sea. As a result, the title on the title page is simply Dialogue, which is followed by Galileo's name and academic posts. This must be kept in mind when discussing Galileo's motives for writing the book. Although the book is presented formally as a consideration of both systems, there is no question that the Copernican side gets the better of the argument. Salviati argues for the Copernican position; he is named after Galileo's friend Filippo Salviati. Sagredo is an intelligent layman who is initially neutral; he is named after Galileo's friend Giovanni Francesco Sagredo. Simplicio, a dedicated follower of Ptolemy and Aristotle, presents the traditional views and the arguments against the Copernican position. He is supposedly named after Simplicius of Cilicia, a commentator on Aristotle, though Simplicio may also have been modeled in part on Lodovico delle Colombe; Colombe was the leader of a group of Florentine opponents of Galileo's. The discussion is not narrowly limited to astronomical topics, but ranges over much of contemporary science. 
Some of this is to show what Galileo considered good science; other parts are important to the debate, answering erroneous arguments against the Earth's motion. A classic argument against the Earth's motion is the lack of any sensation of speed at the Earth's surface, even though, by the Earth's rotation, it is moving at high speed. The bulk of Galileo's arguments may be divided into three classes, including rebuttals to the objections raised by traditional philosophers, for example, the experiment on the ship. Generally, these arguments have held up well in terms of the knowledge of the subsequent four centuries. Just how convincing they ought to have been to a reader in 1632 remains a contentious issue. Galileo also attempted a further class of argument: a direct physical argument for the Earth's motion, by means of an explanation of the tides. As an account of the causation of tides or a proof of the Earth's motion, it fails: the fundamental argument is internally inconsistent and actually leads to the conclusion that tides do not exist
Dialogue Concerning the Two Chief World Systems
–
A copy of The Dialogo, Florence edition, located at the Tom Slick rare book collection at Southwest Research Institute, in Texas.
Dialogue Concerning the Two Chief World Systems
–
Frontispiece and title page of the Dialogue, 1632
Dialogue Concerning the Two Chief World Systems
–
Actual path of cannonball B is from C to D
134.
Cambridge University Press
–
Cambridge University Press is the publishing business of the University of Cambridge. Granted letters patent by Henry VIII in 1534, it is the world's oldest publishing house, and it also holds letters patent as the Queen's Printer. The Press's mission is to further the University's mission by disseminating knowledge in the pursuit of education, learning and research. Cambridge University Press is a department of the University of Cambridge and is both an academic and an educational publisher, with a global presence, publishing hubs, and offices in more than 40 countries. Its publishing includes journals, monographs, reference works and textbooks. Cambridge University Press is an enterprise that transfers part of its annual surplus back to the university. It is both the oldest publishing house in the world and the oldest university press: it originated from the letters patent granted to the University of Cambridge by Henry VIII in 1534, and has been producing books continuously since the first University Press book was printed. Cambridge is one of the two privileged presses. Authors published by Cambridge have included John Milton, William Harvey, Isaac Newton, Bertrand Russell, and Stephen Hawking. In 1591, Thomas's successor, John Legate, printed the first Cambridge Bible; the London Stationers objected strenuously, claiming that they had a monopoly on Bible printing. The university's response was to point out the provision in its charter to print "all manner of books". In July 1697 the Duke of Somerset made a loan of £200 to the university towards "the house and presse", and James Halman, Registrary of the University. It was in Bentley's time, in 1698, that a body of scholars was appointed to be responsible to the university for the Press's affairs. The Press Syndicate's publishing committee still meets regularly, and this review remains part of its role. John Baskerville became University Printer in the mid-eighteenth century.
Baskerville's concern was the production of the finest possible books using his own type-design. A technological breakthrough was badly needed, and it came when Lord Stanhope perfected the making of stereotype plates. This involved making a mould of the surface of a page of type. The Press was the first to use this technique, and in 1805 put it to technically successful use. Under the stewardship of C. J. Clay, who was University Printer from 1854 to 1882, the Press increased the size and scale of its academic and educational publishing operation. An important factor in this increase was the inauguration of its list of schoolbooks. During Clay's administration, the Press also undertook a sizable co-publishing venture with Oxford: the Revised Version of the Bible, which was begun in 1870 and completed in 1885. It was Wright who devised the plan for one of the most distinctive Cambridge contributions to publishing, the Cambridge Histories; the Cambridge Modern History was published between 1902 and 1912.
Cambridge University Press
–
The University Printing House, on the main site of the Press
Cambridge University Press
–
The letters patent of Cambridge University Press by Henry VIII allow the Press to print "all manner of books". The fine initial with the king's portrait inside it and the large first line of script are still discernible.
Cambridge University Press
–
The Pitt Building in Cambridge, which used to be the headquarters of Cambridge University Press, and now serves as a conference centre for the Press.
Cambridge University Press
–
On the main site of the Press
135.
Perseus Books
–
Perseus Books Group was an American publishing company founded in 1996 by investor Frank Pearl. It was named Publisher of the Year in 2007 by Publishers Weekly magazine for its role in taking on publishers formerly distributed by Publishers Group West. In April 2016, its publishing business was acquired by Hachette Book Group and its distribution business by Ingram Content Group. After the death of Frank Pearl, Perseus was sold to Centre Lane Partners. The Perseus Books Group has 12 imprints. Before Avalon Publishing Group was integrated into the Perseus Books Group, it published on 14 imprint presses; in 2007, some of these imprints were integrated into the Perseus Books Group, and Perseus also sold one of its imprints in the restructuring process. Its distribution businesses included Publishers Group West, founded in 1976 and based in Berkeley; Consortium Book Sales and Distribution, founded in 1985 and based in St. Paul, Minnesota; Perseus Distribution, founded in 1999 and based in New York City; and Legato Publishers Group, founded in 2013 and based in Chicago.
Perseus Books
–
Perseus Books Group
136.
Frank Wilczek
–
Frank Anthony Wilczek is an American theoretical physicist, mathematician and Nobel laureate. Wilczek, along with David Gross and H. David Politzer, was awarded the Nobel Prize in Physics in 2004 for their discovery of asymptotic freedom in the theory of the strong interaction. He is on the Scientific Advisory Board for the Future of Life Institute. Born in Mineola, New York, of Polish and Italian origin, Wilczek was educated in the schools of Queens. It was around this time that Wilczek's parents realized he was exceptional, partly as a result of his having been administered an IQ test. Wilczek holds the Herman Feshbach Professorship of Physics at the MIT Center for Theoretical Physics. He worked at the Institute for Advanced Study in Princeton and the Institute for Theoretical Physics at the University of California, Santa Barbara, and was also a visiting professor at NORDITA. Wilczek became a member of the Royal Netherlands Academy of Arts and Sciences. He was awarded the Lorentz Medal in 2002, and won the Lilienfeld Prize of the American Physical Society in 2003. In the same year he was awarded the Faculty of Mathematics and Physics Commemorative Medal from Charles University in Prague, and he was co-recipient of the 2003 High Energy and Particle Physics Prize of the European Physical Society. Wilczek was also co-recipient of the 2005 King Faisal International Prize for Science. On January 25, 2013, Wilczek received an honorary doctorate from the Faculty of Science and Technology at Uppsala University, Sweden. He currently serves on the board of the Society for Science & the Public. Wilczek has appeared on an episode of Penn & Teller: Bullshit!, where Penn referred to him as the smartest person they had ever had on the show. In 2014, Wilczek penned a letter, along with Stephen Hawking and two other scholars, warning that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."
The theory, which was independently discovered by H. David Politzer, was important for the development of quantum chromodynamics. Wilczek has helped reveal and develop axions, anyons, asymptotic freedom, and the color superconducting phases of quark matter, and he has worked on condensed matter physics, astrophysics, and particle physics. In 2012 he proposed the idea of a space-time crystal; in 2017, that theory seems to have been proven correct. His books include A Beautiful Question: Finding Nature's Deep Design (Allen Lane, 2015); The Lightness of Being: Mass, Ether, and the Unification of Forces; Fantastic Realities: 49 Mind Journeys and a Trip to Stockholm; On the world's numerical recipe (Daedalus 131, 2002, pp. 142-47); and Longing for the Harmonies: Themes and Variations from Modern Physics. Media appearances include foraTV's The Large Hadron Collider and Unified Field Theory, and a radio interview with Frank Wilczek aired on the Lewis Burke Frumkes Radio Show on 10 April 2011.
Frank Wilczek
–
Frank Wilczek
137.
John Baez
–
John Carlos Baez is an American mathematical physicist and a professor of mathematics at the University of California, Riverside in Riverside, California. He is known for his work on spin foams in loop quantum gravity; for some time, his research has focused on applications of higher categories to physics and other subjects. Baez is also known to fans as the author of This Week's Finds in Mathematical Physics. He started This Week's Finds in 1993 for the Usenet community, and it now has a following in its new form; This Week's Finds anticipated the concept of a personal weblog. Additionally, Baez is known on the World Wide Web as the author of the crackpot index. Baez was born in San Francisco, California. He graduated from Princeton University in Princeton, New Jersey, with a Bachelor of Arts in mathematics in 1982; in 1986, he graduated from the Massachusetts Institute of Technology in Cambridge, Massachusetts, with a Doctor of Philosophy under the direction of Irving Segal. After a post-doctoral period at Yale University in New Haven, Connecticut, he joined the faculty at UC Riverside. From 2010 to 2012, he took a leave of absence to work at the Centre for Quantum Technologies in Singapore and has since worked there in the summers. Baez is also co-founder of the n-Category Café, a blog concerning higher category theory and its applications; the founders of the blog are Baez, David Corfield and Urs Schreiber. The n-Café community is associated with the nLab wiki and nForum forum, which now run independently of the n-Café. The blog is hosted on The University of Texas at Austin's official website. His physicist uncle, Albert Baez, interested him in physics as a child. John Baez is married to Lisa Raphals, who is a professor of Chinese.
John Baez
–
John C. Baez (August 2009)
138.
SI base unit
–
The International System of Units defines seven units of measure as a basic set from which all other SI units can be derived. The SI base units form a set of mutually independent dimensions, as required by the dimensional analysis commonly employed in science. SI unit names are written in lowercase, but the symbols of units named after persons are capitalized; thus the kelvin, named after Lord Kelvin, has the symbol K, and the ampere, named after André-Marie Ampère, has the symbol A. Many other units, such as the litre, are not part of the SI. The definitions of the base units have been modified several times since the Metre Convention in 1875. Since the redefinition of the metre in 1960, the kilogram is the only unit that is directly defined in terms of a physical artifact; moreover, the mole, the ampere, and the candela are linked through their definitions to the mass of the platinum–iridium cylinder stored in a vault near Paris. It has long been an objective in metrology to define the kilogram in terms of a fundamental constant; two possibilities have attracted particular attention, the Planck constant and the Avogadro constant. The 23rd CGPM decided to postpone any formal change until the next General Conference in 2011.
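The seven base units can be tabulated in a few lines; this is a minimal Python sketch using only the unit names and symbols given in the entry:

```python
# The seven SI base units, keyed by the physical quantity each one measures.
SI_BASE_UNITS = {
    "length": ("metre", "m"),
    "mass": ("kilogram", "kg"),
    "time": ("second", "s"),
    "electric current": ("ampere", "A"),
    "thermodynamic temperature": ("kelvin", "K"),
    "amount of substance": ("mole", "mol"),
    "luminous intensity": ("candela", "cd"),
}

for quantity, (unit, symbol) in SI_BASE_UNITS.items():
    print(f"{quantity}: {unit} ({symbol})")
```

All other SI units are products of powers of these seven, which is why a table like this is the natural starting point for any unit-handling code.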
SI base unit
–
The seven SI base units and the interdependency of their definitions: for example, to extract the definition of the metre from the speed of light, the definition of the second must be known, while the ampere and candela are both dependent on the definition of energy, which in turn is defined in terms of length, mass and time.
139.
Length
–
In geometric measurements, length is the most extended dimension of an object. In the International System of Quantities, length is any quantity with the dimension of distance; in other contexts, length is the measured dimension of an object. For example, it is possible to cut a length of wire which is shorter than the wire's thickness. Length may be distinguished from height, which is vertical extent, and from width or breadth. Length is a measure of one dimension, whereas area is a measure of two dimensions and volume is a measure of three dimensions. In most systems of measurement, the unit of length is a base unit. Measurement has been important ever since humans settled from nomadic lifestyles and started using building materials, occupying land and trading with neighbours. As society has become more technologically oriented, much higher accuracies of measurement are required in a diverse set of fields. One of the oldest units of measurement used in the ancient world was the cubit, which was the length of the arm from the tip of the finger to the elbow. This could then be subdivided into shorter units like the foot, hand or finger, but the cubit could vary considerably due to the different sizes of people. After Albert Einstein's special relativity, length can no longer be thought of as being constant in all reference frames. Thus a ruler that is one metre long in one frame of reference will not be one metre long in a frame that is travelling at a velocity relative to the first frame; this means the length of an object is variable depending on the observer. In the physical sciences and engineering, when one speaks of units of length, the word length is synonymous with distance. There are several units that are used to measure length. In the International System of Units, the basic unit of length is the metre, now defined in terms of the speed of light. The centimetre and the kilometre, derived from the metre, are commonly used units. In U.S.
customary units, the English or Imperial system of units, commonly used units of length are the inch, the foot, the yard, and the mile. Units used to denote distances in the vastness of space, as in astronomy, are much longer than those typically used on Earth and include the astronomical unit and the light-year. See also: Dimension, Distance, Orders of magnitude, Reciprocal length, Smoot, Unit of length.
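The customary units named above can be related to the metre with a short sketch. The conversion factors below are the exact international definitions (fixed by the 1959 international yard and pound agreement), which are facts not stated in the entry itself:

```python
# Exact metre equivalents of the customary length units named in the entry.
METRES_PER = {
    "inch": 0.0254,
    "foot": 0.3048,      # 12 inches
    "yard": 0.9144,      # 3 feet
    "mile": 1609.344,    # 1760 yards
}

def to_metres(value, unit):
    """Convert a length given in a customary unit to metres."""
    return value * METRES_PER[unit]

print(to_metres(1, "mile"))  # 1609.344
```

Because each factor is exact by definition, chained conversions (12 inches to a foot, 3 feet to a yard) agree to floating-point precision.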
Length
–
Base quantity
140.
Second
–
The second is the base unit of time in the International System of Units. It is qualitatively defined as the second division of the hour by sixty, the first division being the minute. The SI definition of the second is the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom. Seconds may be measured using a mechanical, electrical or atomic clock. SI prefixes are combined with the word second to denote subdivisions of the second, e.g. the millisecond, the microsecond, and the nanosecond, though SI prefixes may also be used to form multiples of the second, such as the kilosecond. The second is also the base unit of time in other systems of measurement: the centimetre–gram–second, metre–kilogram–second and metre–tonne–second systems. The definition refers to a caesium atom at rest at absolute zero; absolute zero implies no movement, and therefore zero external radiation effects. The second thus defined is consistent with the ephemeris second, which was based on astronomical measurements. The realization of the second is described briefly in a special publication from the National Institute of Standards and Technology. One international second is equal to 1⁄60 minute, 1⁄3,600 hour, 1⁄86,400 day, and 1⁄31,557,600 Julian year. The Hellenistic astronomers Hipparchus and Ptolemy subdivided the day into sixty parts. They also used the hour and simple fractions of an hour, but no sexagesimal unit of the day was used as an independent unit of time. The modern second is subdivided using decimals, although the third (1⁄60 of a second) remains in some languages. The earliest clocks to display seconds appeared during the last half of the 16th century. The second became accurately measurable with the development of mechanical clocks keeping mean time, as opposed to the apparent time displayed by sundials. The earliest spring-driven timepiece with a hand that marked seconds is an unsigned clock depicting Orpheus in the Fremersdorf collection.
During the third quarter of the 16th century, Taqi al-Din built a clock with marks every 1/5 minute. In 1579, Jost Bürgi built a clock for William of Hesse that marked seconds. In 1581, Tycho Brahe redesigned clocks that displayed minutes at his observatory so they also displayed seconds; however, they were not yet accurate enough for seconds. In 1587, Tycho complained that his four clocks disagreed by plus or minus four seconds. In 1670, London clockmaker William Clement added the seconds pendulum to the original pendulum clock of Christiaan Huygens. From 1670 to 1680, Clement made many improvements to his clock; it used an anchor escapement mechanism with a seconds pendulum to display seconds in a small subdial.
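The unit relations given in this entry (1 s = 1⁄60 minute = 1⁄3,600 hour = 1⁄86,400 day = 1⁄31,557,600 Julian year) can be checked with a few lines of Python:

```python
# Seconds in each larger unit, as listed in the entry.
SECONDS_PER = {
    "minute": 60,
    "hour": 3_600,              # 60 * 60
    "day": 86_400,              # 24 * 3600
    "Julian year": 31_557_600,  # 365.25 * 86_400
}

def seconds_to(unit, seconds):
    """Express a duration in seconds as a multiple of a larger unit."""
    return seconds / SECONDS_PER[unit]

print(seconds_to("day", 86_400))  # 1.0
```

The Julian-year figure is consistent with the other entries: 365.25 days of 86,400 seconds each.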
Second
–
FOCS 1, a continuous cold caesium fountain atomic clock in Switzerland, started operating in 2004 at an uncertainty of one second in 30 million years.
Second
–
Key concepts
141.
Electric current
–
An electric current is a flow of electric charge. In electric circuits this charge is often carried by moving electrons in a wire. It can also be carried by ions in an electrolyte, or by both ions and electrons, as in an ionised gas. The SI unit for measuring an electric current is the ampere. Electric current is measured using a device called an ammeter. Electric currents cause Joule heating, which creates light in incandescent light bulbs. They also create magnetic fields, which are used in motors, inductors and generators. The particles that carry the charge in an electric current are called charge carriers. In metals, one or more electrons from each atom are loosely bound to the atom; these conduction electrons are the charge carriers in metal conductors. The conventional symbol for current is I, which originates from the French phrase intensité de courant; current intensity is often referred to simply as current. The I symbol was used by André-Marie Ampère, after whom the unit of current is named, in formulating the eponymous Ampère's force law. The notation travelled from France to Great Britain, where it became standard. In a conductive material, the moving charged particles that constitute the electric current are called charge carriers. In other materials, notably the semiconductors, the carriers can be positive or negative. Positive and negative charge carriers may even be present at the same time. A flow of positive charges gives the same electric current, and has the same effect in a circuit, as an equal flow of negative charges in the opposite direction. Since current can be the flow of either positive or negative charges, a convention is needed: the direction of current is arbitrarily defined as the direction in which positive charges flow. This is called the conventional direction of the current I. If the current actually flows in the opposite direction, the variable I has a negative value. When analyzing electrical circuits, the direction of current through a specific circuit element is usually unknown.
Consequently, the directions of currents are often assigned arbitrarily
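The two relations touched on in this entry, current as charge per unit time and the V = IR relation noted in the circuit figure, can be sketched numerically; the example values below are arbitrary illustrations:

```python
def average_current(charge_coulombs, time_seconds):
    """I = Q / t: average current from the charge passing a point in a given time."""
    return charge_coulombs / time_seconds

def current_from_ohms_law(voltage_volts, resistance_ohms):
    """I = V / R, Ohm's law for a simple resistive circuit."""
    return voltage_volts / resistance_ohms

print(average_current(10.0, 2.0))        # 5.0 amperes
print(current_from_ohms_law(12.0, 4.0))  # 3.0 amperes
```

A negative result simply means the current flows opposite to the arbitrarily chosen reference direction, as described above.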
Electric current
–
A simple electric circuit, where current is represented by the letter i. The relationship between the voltage (V), resistance (R), and current (I) is V=IR; this is known as Ohm's Law.
142.
Ampere
–
The ampere, often shortened to amp, is a unit of electric current. In the International System of Units the ampere is one of the seven SI base units. It is named after André-Marie Ampère, the French mathematician and physicist considered the father of electrodynamics. The SI defines the ampere in terms of other base units by measuring the electromagnetic force between electrical conductors carrying electric current. The ampere had earlier been defined as one coulomb of charge per second; in the SI, the unit of charge, the coulomb, is instead defined as the charge carried by one ampere during one second. In the future, the SI definition may shift back to charge as the base unit. Ampère's force law states that there is an attractive or repulsive force between two parallel wires carrying an electric current; this force is used in the definition of the ampere. The SI unit of charge, the coulomb, is the quantity of electricity carried in 1 second by a current of 1 ampere; conversely, a current of one ampere is one coulomb of charge going past a given point per second: 1 A = 1 C/s. In general, the charge Q is determined by a steady current I flowing for a time t as Q = It. Constant, instantaneous and average current are expressed in amperes, and the charge accumulated, or passed through a circuit over a period of time, is expressed in coulombs. The relation of the ampere to the coulomb is the same as that of the watt to the joule. The ampere was originally defined as one tenth of the unit of electric current in the centimetre–gram–second system of units. That unit, now known as the abampere, was defined as the amount of current that generates a force of two dynes per centimetre of length between two wires one centimetre apart. The size of the unit was chosen so that the units derived from it in the MKSA system would be conveniently sized. The international ampere was an early realization of the ampere, defined as the current that would deposit 0.001118 grams of silver per second from a silver nitrate solution.
Later, more accurate measurements revealed that this current is 0.99985 A. At present, techniques to establish the realization of an ampere have a relative uncertainty of approximately a few parts in 10^7, and involve realizations of the watt, the ohm and the volt. Rather than a definition in terms of the force between two current-carrying wires, it has been proposed that the ampere should be defined in terms of the rate of flow of elementary charges, since a coulomb is equal to approximately 6.2415093 × 10^18 elementary charges. The proposed change would define 1 A as the current in the direction of flow of a particular number of elementary charges per second. In 2005, the International Committee for Weights and Measures agreed to study the proposed change; the new definition was discussed at the 25th General Conference on Weights and Measures in 2014 but for the time being was not adopted. The current drawn by typical constant-voltage energy distribution systems is usually dictated by the power consumed by the system; for this reason the examples given below are grouped by voltage level.
Ampere
–
Demonstration model of a moving iron ammeter. As the current through the coil increases, the plunger is drawn further into the coil and the pointer deflects to the right.
143.
Thermodynamic temperature
–
Thermodynamic temperature is the absolute measure of temperature and is one of the principal parameters of thermodynamics. Thermodynamic temperature is defined by the third law of thermodynamics, in which the theoretically lowest temperature is the null or zero point. At this point, absolute zero, the constituents of matter have minimal motion; in the quantum-mechanical description, matter at absolute zero is in its ground state. The International System of Units specifies a particular scale for thermodynamic temperature: it uses the Kelvin scale for measurement and selects the triple point of water at 273.16 K as the fundamental fixing point. Other scales have been in use historically; the Rankine scale, using the degree Fahrenheit as its unit interval, is still in use as part of the English Engineering Units in the United States in some engineering fields. ITS-90 gives a practical means of estimating the thermodynamic temperature to a very high degree of accuracy. Internal energy is called heat energy or thermal energy in conditions when no work is done upon the substance by its surroundings. Internal energy may be stored in a number of ways within a substance, each way constituting a degree of freedom. At equilibrium, each degree of freedom will have on average the same energy, kBT/2, where kB is the Boltzmann constant. Temperature is a measure of the random submicroscopic motions and vibrations of the constituents of matter; these motions comprise the internal energy of a substance. More specifically, the thermodynamic temperature of any bulk quantity of matter is the measure of the average kinetic energy per classical degree of freedom of its constituent particles. Translational motions are almost always in the classical regime. Translational motions are ordinary, whole-body movements in three-dimensional space in which particles move about and exchange energy in collisions.
Figure 1 below shows translational motion in gases; Figure 4 below shows translational motion in solids. Zero kinetic energy remains in a substance at absolute zero. Throughout the scientific world, where measurements are made in SI units, thermodynamic temperature is measured in kelvins; many engineering fields in the U.S., however, measure thermodynamic temperature using the Rankine scale. By international agreement, the kelvin and its scale are defined by two points: absolute zero, and the triple point of Vienna Standard Mean Ocean Water. Absolute zero, the lowest possible temperature, is defined as being precisely 0 K; the triple point of water is defined as being precisely 273.16 K and 0.01 °C. This definition does three things: it fixes the magnitude of the kelvin as being precisely 1 part in 273.16 of the difference between absolute zero and the triple point of water.
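The equipartition relation quoted above, an average energy of kBT/2 per classical degree of freedom, can be evaluated numerically. The Boltzmann-constant value used here is the one quoted in the Kelvin entry of this collection:

```python
K_B = 1.3806505e-23  # Boltzmann constant in J/K (value quoted in the Kelvin entry)

def mean_energy_per_dof(temperature_kelvin):
    """Equipartition: each classical degree of freedom carries k_B * T / 2 on average."""
    return K_B * temperature_kelvin / 2

# Average translational kinetic energy of one particle (three translational
# degrees of freedom) at roughly room temperature:
print(3 * mean_energy_per_dof(300))  # about 6.2e-21 joules
```

The factor of three reflects the three independent translational degrees of freedom described in the text.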
Thermodynamic temperature
–
Fig. 6 Ice and water: two phases of the same substance
Thermodynamic temperature
Thermodynamic temperature
–
Fig. 8 When many of the chemical elements, such as the noble gases and platinum-group metals, freeze to a solid — the most ordered state of matter — their crystal structures have a closest-packed arrangement. This yields the greatest possible packing density and the lowest energy state.
Thermodynamic temperature
–
Helium-4 is a superfluid at or below 2.17 kelvins (2.17 Celsius degrees above absolute zero).
144.
Kelvin
–
The kelvin is a unit of measure for temperature based upon an absolute scale. It is one of the seven base units in the International System of Units and is assigned the unit symbol K. The kelvin is defined as the fraction 1⁄273.16 of the thermodynamic temperature of the triple point of water; in other words, it is defined such that the triple point of water is exactly 273.16 K. The Kelvin scale is named after the Belfast-born, Glasgow University engineer and physicist William Thomson, Lord Kelvin. Unlike the degree Fahrenheit and degree Celsius, the kelvin is not referred to or typeset as a degree. The kelvin is the primary unit of temperature measurement in the physical sciences, but is often used in conjunction with the degree Celsius. The definition implies that absolute zero is equivalent to −273.15 °C; Kelvin calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time. This absolute scale is known today as the Kelvin thermodynamic temperature scale. When spelled out or spoken, the unit is pluralised using the same grammatical rules as for other SI units such as the volt or ohm. When reference is made to the Kelvin scale, the word kelvin, which is normally a noun, functions adjectivally to modify the noun scale and is capitalized. As with most other SI unit symbols, there is a space between the numeric value and the kelvin symbol. Before the 13th CGPM in 1967–1968, the unit kelvin was called a degree: it was distinguished from the other scales with either the adjective suffix Kelvin or with absolute, and its symbol was °K. The latter term, which was the official name from 1948 until 1954, was ambiguous since it could also be interpreted as referring to the Rankine scale. Before the 13th CGPM, the plural form was degrees absolute. The 13th CGPM changed the unit name to simply kelvin. Its measured value was 0.01028 °C with an uncertainty of 60 µK. The use of SI prefixed forms of the degree Celsius to express a temperature interval has not been widely adopted.
In 2005 the CIPM embarked on a programme to redefine the kelvin using a more experimentally rigorous methodology; the definition current as of 2016 is unsatisfactory for temperatures below 20 K and above 1300 K. In particular, the committee proposed redefining the kelvin such that the Boltzmann constant takes the exact value 1.3806505 × 10^−23 J/K. From a scientific point of view, this will link temperature to the rest of the SI and result in a stable definition that is independent of any particular substance; from a practical point of view, the redefinition will pass unnoticed. The kelvin is often used in the measure of the colour temperature of light sources. Colour temperature is based upon the principle that a black body radiator emits light whose colour depends on the temperature of the radiator: black bodies with temperatures below about 4000 K appear reddish, whereas those above about 7500 K appear bluish.
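The fixed offsets stated in this entry (absolute zero at −273.15 °C; the triple point of water at 273.16 K and 0.01 °C) give a one-line conversion in each direction:

```python
def kelvin_to_celsius(k):
    """Celsius temperature from a thermodynamic temperature in kelvins."""
    return k - 273.15

def celsius_to_kelvin(c):
    """Thermodynamic temperature in kelvins from a Celsius temperature."""
    return c + 273.15

print(kelvin_to_celsius(0))               # -273.15, absolute zero in Celsius
print(round(celsius_to_kelvin(0.01), 2))  # 273.16, the triple point of water
```

Because the kelvin and the degree Celsius have the same magnitude, temperature intervals are numerically identical on both scales; only the zero point differs.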
Kelvin
–
Lord Kelvin, the namesake of the unit
Kelvin
–
A thermometer calibrated in degrees Celsius (left) and kelvins (right).
145.
Dimension of a physical quantity
–
Converting from one dimensional unit to another is often somewhat complex. Dimensional analysis, or more specifically the factor-label method, also known as the unit-factor method, is a widely used technique for such conversions using the rules of algebra. The concept of physical dimension was introduced by Joseph Fourier in 1822. Physical quantities that are commensurable have the same dimension and can be directly compared to each other, even if they are originally expressed in differing units of measure. If physical quantities have different dimensions, they cannot be compared in this way; hence, it is meaningless to ask whether a kilogram is greater than, equal to, or less than an hour. Any physically meaningful equation will have the same dimensions on its left and right sides, and checking for this dimensional homogeneity is a common application of dimensional analysis. Dimensional analysis is routinely used as a check of the plausibility of derived equations and computations. It is also generally used to categorize types of quantities and units based on their relationship to or dependence on other units. Many parameters and measurements in the sciences and engineering are expressed as a concrete number: a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of other quantities; for example, speed is a combination of length and time. Compound relations with "per" are expressed with division, e.g. 60 mi / 1 h; other relations can involve multiplication, powers, or combinations thereof. A base unit is a unit that cannot be expressed as a combination of other units; for example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the units of length. Sometimes the names of units obscure the fact that they are derived units: for example, an ampere is a unit of electric current, which is equivalent to electric charge per unit time and is measured in coulombs per second, so 1 A = 1 C/s.
Similarly, one newton is 1 kg⋅m/s². Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions; in other words, the % sign can be read as 1/100. Taking a derivative with respect to a quantity adds the dimension of the variable one is differentiating with respect to, in the denominator. Thus, position has the dimension L; the derivative of position with respect to time has dimension LT⁻¹ (length from position, time from the derivative); and the second derivative has dimension LT⁻². In economics, one distinguishes between stocks and flows: a stock has units of "units", while a flow is a derivative of a stock and has units of "units" per time. In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions.
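The dimensional-homogeneity check described above can be sketched by representing a dimension as a tuple of exponents of (length, mass, time). This toy representation is illustrative only, not a standard library:

```python
# A dimension as exponents of (L, M, T): length, mass, time.
LENGTH = (1, 0, 0)
MASS   = (0, 1, 0)
TIME   = (0, 0, 1)

def mul(a, b):
    """Multiplying two quantities adds their dimension exponents."""
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    """Dividing one quantity by another subtracts dimension exponents."""
    return tuple(x - y for x, y in zip(a, b))

SPEED        = div(LENGTH, TIME)        # L T^-1
ACCELERATION = div(SPEED, TIME)         # L T^-2
FORCE        = mul(MASS, ACCELERATION)  # M L T^-2: the newton, 1 kg*m/s^2

# Dimensional homogeneity: both sides of F = m * a must carry the same exponents.
print(FORCE == mul(MASS, ACCELERATION))  # True
```

An equation whose two sides reduce to different exponent tuples fails the homogeneity check and cannot be physically meaningful, which is exactly the plausibility test the entry describes.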
Dimension of a physical quantity
–
Base quantity
146.
History of the metric system
–
Concepts similar to those behind the metric system had been discussed in the 16th and 17th centuries: Simon Stevin had published his ideas for a decimal notation, and John Wilkins had published a proposal for a system of measurement based on natural units. The work of reforming the old system of weights and measures was sponsored by the French revolutionary government; the metric system was to be, in the words of philosopher and mathematician Condorcet, "for all people for all time". Reference copies for both units, the metre and the kilogram, were manufactured and placed in the custody of the French Academy of Sciences. By 1812, due to the unpopularity of the new metric system, France had reverted to units similar to those of the old system. In 1837 the metric system was re-adopted by France, and during the first half of the 19th century it was also adopted by the scientific community. Maxwell proposed three base units: length, mass and time. This concept worked well with mechanics, but attempts to describe electromagnetic forces in terms of these units encountered difficulties. The impasse was resolved by Giovanni Giorgi, who in 1901 proved that a coherent system that incorporated electromagnetic units had to have an electromagnetic unit as a fourth base unit. The mole was added as a base unit in 1971. Since the end of the 20th century, an effort has been undertaken to redefine the ampere, kilogram and mole in terms of invariant constants of physics. The first practical implementation of the metric system was the system implemented by French revolutionaries towards the end of the 18th century. Its key features were that it was decimal in nature; it derived its unit sizes from nature; units with different dimensions were related to each other in a rational manner; and prefixes were used to denote multiples and sub-multiples of its units. These features had already been explored and expounded by various scholars and academics in the two centuries before the French metric system was implemented.
Simon Stevin is credited with introducing the decimal system into general use in Europe. Twentieth-century writers such as Bigourdan and McGreevy credit the French cleric Gabriel Mouton as the originator of the metric system; in 2007 a proposal for a coherent decimal system of measurement by the English cleric John Wilkins received publicity. During the early medieval era, Roman numerals were used in Europe to represent numbers, while the Arabs represented numbers using the Hindu numeral system. In about 1202, Fibonacci published his book Liber Abaci, which introduced the concept of positional notation into Europe; these symbols evolved into the numerals 0, 1, 2 and so on. At that time there was dispute regarding the difference between rational and irrational numbers, and there was no consistency in the way decimal fractions were represented. In 1586, Simon Stevin published a pamphlet called De Thiende, which historians credit as the basis of the modern notation for decimal fractions.
History of the metric system
–
Frontispiece of the publication where John Wilkins proposed a metric system of units in which length, mass, volume and area would be related to each other
History of the metric system
–
James Watt, British inventor and advocate of an international decimalized system of measure
History of the metric system
–
A clock of the republican era showing both decimal and standard time
History of the metric system
–
Repeating circle – the instrument used for triangulation when measuring the meridian
147.
Outline of the metric system
–
The metric system can be described as all of the following: a system – a set of interacting or interdependent components forming an integrated whole – and a system of measurement – a set of units which can be used to specify anything which can be measured. Historically, systems of measurement were initially defined and regulated to support trade and internal commerce. Units were arbitrarily defined by fiat by the ruling entities and were not necessarily well inter-related or self-consistent. When later analysed scientifically, some quantities were designated as fundamental units. The International System of Units (SI) is the system of units that has been officially endorsed under the Metre Convention since 1960. In 1861 the concept of unit coherence was introduced by Maxwell; the base units were the centimetre, gram and second. The history of metrication describes the process by which legacy, nation-specific systems of measurement were replaced by the metric system. The centimetre–gram–second system of units was the variant of the metric system that evolved in stages until it was superseded by SI. The gravitational metric system was a variant of the metric system that normalised the acceleration due to gravity. The metre–tonne–second system of units was a variant of the system used in France. Between 1812 and 1839 France used a transitional system of units, the mesures usuelles; the history of the metre is a closely related topic. Prior to 1875 the metric system was controlled by the French government. In that year, seventeen nations signed the Metre Convention, and management of the metric system passed to international control; the Metre Convention article describes the 1875 treaty and its development to the modern day. Three organisations – the CGPM, the CIPM and the BIPM – were set up under the convention. The General Conference on Weights and Measures (CGPM) is a meeting, every four to six years, of delegates from all member states.
The International Committee for Weights and Measures (CIPM) is an advisory body to the CGPM consisting of prominent metrologists. Both the European Union and the International Organization for Standardization have issued directives and recommendations to harmonise the use of units of measure; these documents endorse the use of SI for most purposes (the European units of measurement directives and ISO/IEC 80000). The proposed new SI definitions are changes to the metric system – more specifically, to the International System of Units – that are expected to occur in 2018. Related topics include the Metric Association, the Metric Commission, the Metrication Board, and Wilkins's An Essay towards a Real Character and a Philosophical Language.
Outline of the metric system
–
"The metric system is for all people for all time." (Condorcet 1791) Four objects used in making measurements in everyday situations that have metric calibrations are shown: a tape measure calibrated in centimetres, a thermometer calibrated in degrees Celsius, a kilogram mass, and an electrical multimeter which measures volts, amps and ohms.
148.
Position (vector)
–
Usually denoted x, r, or s, it corresponds to the straight-line distance along each axis from O to P: r = OP→. The term position vector is used mostly in the fields of geometry and mechanics. Frequently it is used in two-dimensional or three-dimensional space, but it can be generalized to Euclidean spaces of any number of dimensions. Different coordinates and corresponding basis vectors can represent the same position vector; more general curvilinear coordinates could be used instead, and are in contexts like continuum mechanics. Linear algebra allows for the abstraction of an n-dimensional position vector. The notion of position space is intuitive, since each xi can be any value; the dimension of the position space is n. The coordinates of the vector r with respect to the basis vectors ei are the xi, and the vector of coordinates forms the coordinate vector or n-tuple (x1, x2, …, xn). Each coordinate xi may be parameterized by a number of parameters t: one parameter would describe a curved 1D path, two parameters a curved 2D surface, three a curved 3D volume of space, and so on. The linear span of a basis set B = {e1, e2, …, en} equals the position space Rn. Position vector fields are used to describe continuous and differentiable space curves, in which case the independent parameter need not be time but can instead be the arc length of the curve. In the case of one dimension, the position has only one component, so it could be, say, a vector in the x-direction or the radial r-direction; equivalent notations for this single component include x, r, or s. For a position vector r that is a function of time t, the time derivatives can be computed; these derivatives have common utility in the study of kinematics, control theory, engineering and other sciences. Velocity is v = dr/dt, where dr is an infinitesimally small displacement. By extension, the higher-order derivatives can be computed in a similar fashion; study of these higher-order derivatives can improve approximations of the original displacement function.
Such higher-order terms are required in order to represent the displacement function as a sum of an infinite sequence, enabling several analytical techniques in engineering and physics. A displacement vector can be defined as the action of uniformly translating spatial points in a given direction over a given distance, so that the addition of displacement vectors expresses the composition of these displacement actions, and scalar multiplication expresses scaling of the distance. With this in mind, we may define a position vector of a point in space as the displacement vector mapping a given origin to that point. Note that position vectors thus depend on a choice of origin for the space.
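As a numerical sketch of these time derivatives, the velocity and acceleration of a parameterized path can be estimated by finite differences. The helix path and the step sizes below are illustrative assumptions, not from the text:

```python
import math

def r(t):
    # Hypothetical example path: a helix, r(t) = (cos t, sin t, t)
    return (math.cos(t), math.sin(t), t)

def velocity(f, t, h=1e-6):
    # Central difference approximation of v = dr/dt, componentwise
    a, b = f(t - h), f(t + h)
    return tuple((bi - ai) / (2 * h) for ai, bi in zip(a, b))

def acceleration(f, t, h=1e-4):
    # Second central difference approximation of a = d2r/dt2
    lo, mid, hi = f(t - h), f(t), f(t + h)
    return tuple((x - 2 * y + z) / h**2 for x, y, z in zip(hi, mid, lo))

v = velocity(r, 1.0)      # exact value: (-sin 1, cos 1, 1)
a = acceleration(r, 1.0)  # exact value: (-cos 1, -sin 1, 0)
```

Higher derivatives amplify rounding error, which is why the second-difference step `h` is chosen larger than the first-difference step.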
Position (vector)
–
Space curve in 3D. The position vector r is parameterized by a scalar t. At r = a the red line is the tangent to the curve, and the blue plane is normal to the curve.
149.
Radian
–
The radian is the standard unit of angular measure, used in many areas of mathematics. The length of an arc of a unit circle is numerically equal to the measurement in radians of the angle that it subtends. The unit was formerly an SI supplementary unit, but this category was abolished in 1995; separately, the SI unit of solid angle measurement is the steradian. The radian is represented by the symbol rad, so for example a value of 1.2 radians could be written as 1.2 rad, 1.2 r, 1.2rad, or 1.2c. The radian describes the angle subtended by a circular arc as the length of the arc divided by the radius of the arc: one radian is the angle subtended at the center of a circle by an arc that is equal in length to the radius of the circle. Conversely, the length of the arc is equal to the radius multiplied by the magnitude of the angle in radians. As the ratio of two lengths, the radian is a pure number that needs no unit symbol, and in mathematical writing the symbol rad is almost always omitted; when quantifying an angle in the absence of any symbol, radians are assumed. It follows that the magnitude in radians of one complete revolution is the length of the entire circumference divided by the radius, or 2πr/r, or 2π. Thus 2π radians is equal to 360 degrees, meaning that one radian is equal to 180/π degrees. The concept of radian measure, as opposed to the degree of an angle, is normally credited to Roger Cotes in 1714: he described the radian in everything but name, and he recognized its naturalness as a unit of angular measure. The idea of measuring angles by the length of the arc was already in use by other mathematicians; for example, al-Kashi used so-called diameter parts as units, where one diameter part was 1/60 radian. The term radian first appeared in print on 5 June 1873, in examination questions set by James Thomson at Queen's College, Belfast.
He had used the term as early as 1871, while in 1869 Thomas Muir, then of the University of St Andrews, had vacillated between rad, radial and radian; in 1874, after a consultation with James Thomson, Muir adopted radian. As stated, one radian is equal to 180/π degrees; thus, to convert from radians to degrees, multiply by 180/π, and to convert from degrees to radians, multiply by π/180. A complete revolution is 2π radians or 400 gradians, so to convert from radians to gradians multiply by 200/π, and to convert from gradians to radians multiply by π/200. Radians are preferred in mathematics because they have a mathematical naturalness that leads to a more elegant formulation of a number of important results; most notably, results in analysis involving trigonometric functions are simple and elegant when the functions' arguments are expressed in radians. Because of these and other properties, the trigonometric functions appear in solutions to problems that are not obviously related to the functions' geometrical meanings.
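The conversion rules above can be collected into small helper functions; this is a minimal sketch, with function names of our own choosing:

```python
import math

def deg_to_rad(deg):
    # Degrees to radians: multiply by pi/180
    return deg * math.pi / 180

def rad_to_deg(rad):
    # Radians to degrees: multiply by 180/pi
    return rad * 180 / math.pi

def rad_to_grad(rad):
    # Radians to gradians: multiply by 200/pi
    return rad * 200 / math.pi

print(deg_to_rad(180))          # ≈ pi
print(rad_to_deg(math.pi / 2))  # ≈ 90
print(rad_to_grad(math.pi))     # ≈ 200
```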
Radian
–
A chart to convert between degrees and radians
Radian
–
An arc of a circle with the same length as the radius of that circle corresponds to an angle of 1 radian. A full circle corresponds to an angle of 2 π radians.
150.
Solid angle
–
In geometry, a solid angle is the two-dimensional angle in three-dimensional space that an object subtends at a point. It is a measure of how large the object appears to an observer looking from that point; in the International System of Units, a solid angle is expressed in a dimensionless unit called the steradian. A small object nearby may subtend the same solid angle as a larger object farther away. For example, although the Moon is much smaller than the Sun, it is also much closer to Earth; indeed, as viewed from any point on Earth, both objects have approximately the same solid angle as well as apparent size. This is evident during a solar eclipse. An object's solid angle in steradians is equal to the area of the segment of a unit sphere, centered at the angle's vertex, that the object covers. A solid angle in steradians equals the area of a segment of a unit sphere in the same way a planar angle in radians equals the length of an arc of a unit circle. Solid angles are used in physics, in particular astrophysics. The solid angle of an object that is far away is roughly proportional to the ratio of area to squared distance; here, area means the area of the object when projected along the viewing direction. The solid angle of a sphere measured from any point in its interior is 4π sr. Solid angles can also be measured in square degrees, in square minutes and square seconds, or in fractions of the sphere, a unit also known as the spat. In spherical coordinates there is a formula for the differential, dΩ = sin θ dθ dφ, where θ is the colatitude; at the equator you see all of the celestial sphere, at either pole only one half. Let OABC be the vertices of a tetrahedron with an origin at O subtended by the triangular face ABC, where a→, b→, c→ are the positions of the vertices A, B and C. Define the vertex angle θa to be the angle BOC, and define θb and θc correspondingly. Let φab be the dihedral angle between the planes that contain the tetrahedral faces OAC and OBC, and define φac and φbc correspondingly.
When implementing the above equation, care must be taken with the arctan function to avoid negative or incorrect solid angles. One source of errors is that the scalar triple product can be negative if a, b, c have the wrong winding; computing the absolute value is a sufficient solution, since no other portion of the equation depends on the winding. The other pitfall arises when the scalar triple product is positive but the divisor is negative. Indices are cycled: s0 = sn and s1 = sn+1. The solid angle of a latitude-longitude rectangle on a globe is (sin φN − sin φS)(θE − θW) sr, where φN and φS are the north and south lines of latitude and θE and θW are the east and west lines of longitude, all expressed in radians. Mathematically, this represents an arc of angle φN − φS swept around the sphere by θE − θW radians. When longitude spans 2π radians and latitude spans π radians, the solid angle is that of the whole sphere.
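The latitude-longitude rectangle formula can be checked numerically. The sketch below (function name and argument conventions are our own) recovers the full-sphere value of 4π sr when longitude spans 2π and latitude spans π:

```python
import math

def latlon_solid_angle(phi_n, phi_s, theta_e, theta_w):
    # Solid angle (sr) of a latitude-longitude rectangle on a sphere:
    # Omega = (sin(phi_N) - sin(phi_S)) * (theta_E - theta_W)
    # All angles in radians; phi is latitude measured from the equator.
    return (math.sin(phi_n) - math.sin(phi_s)) * (theta_e - theta_w)

# Whole sphere: latitude from -pi/2 to pi/2, longitude from 0 to 2*pi
full = latlon_solid_angle(math.pi / 2, -math.pi / 2, 2 * math.pi, 0)
print(full)  # ≈ 4*pi ≈ 12.566
```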
Solid angle
–
Any area on a sphere which is equal in area to the square of its radius, when observed from its center, subtends precisely one steradian.
151.
Steradian
–
The steradian or square radian is the SI unit of solid angle. It is used in three-dimensional geometry, and is analogous to the radian, which quantifies planar angles. The name is derived from the Greek stereos, meaning solid, and the Latin radius, meaning ray. The steradian, like the radian, is dimensionless; it is useful, however, to distinguish between dimensionless quantities of a different nature, so the symbol sr is used to indicate a solid angle. For example, radiant intensity can be measured in watts per steradian. The steradian was formerly an SI supplementary unit, but this category was abolished in 1995 and the steradian is now considered an SI derived unit. A steradian can be defined as the solid angle subtended at the center of a unit sphere by a unit area on its surface. For a general sphere of radius r, any portion of its surface with area A = r² subtends one steradian. Because the surface area A of a sphere is 4πr², the definition implies that a sphere measures 4π steradians; by the same argument, the maximum solid angle that can be subtended at any point is 4π sr. Since the area subtending one steradian is A = r², it corresponds to the area of a spherical cap. Therefore one steradian corresponds to the solid angle of a simple cone subtending the plane angle 2θ, with θ given by θ = arccos((r − h)/r) = arccos(1 − h/r) = arccos(1 − 1/2π) ≈ 0.572 rad, where h is the height of the cap. This corresponds to the plane angle 2θ ≈ 1.144 rad or 65.54°. A steradian is also equal to the spherical area of a polygon having an angle excess of 1 radian, and to 1/4π of a complete sphere. The solid angle of a cone whose cross-section subtends the angle 2θ is Ω = 2π(1 − cos θ) sr. In two dimensions, an angle is related to the length of the arc that it spans: θ = l/r rad, where l is arc length and r is the radius of the circle. So, for example, a measurement of the angular width of an object would be given in radians, while at the same time its visible area over one's visual field would be given in steradians. Just as the area of a circle is related to its diameter or radius, so the area of a spherical section is related to its solid angle.
One-dimensional circular measure has units of radians or degrees, while two-dimensional spherical measure is expressed in steradians. In higher-dimensional mathematical spaces, units for analogous solid angles have not been explicitly named; when they are used, they are dealt with by analogy with the circular or spherical cases, that is, as a proportion of the relevant unit hypersphere taken up by the generalized angle, or point set expressed in spherical coordinates.
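The cone relation Ω = 2π(1 − cos θ) can be verified numerically: solving it for Ω = 1 sr reproduces the half-angle θ ≈ 0.572 rad quoted above. A minimal sketch, with function names of our choosing:

```python
import math

def cone_solid_angle(theta):
    # Solid angle (sr) of a cone with half-apex angle theta (radians):
    # Omega = 2*pi*(1 - cos(theta))
    return 2 * math.pi * (1 - math.cos(theta))

# Half-angle whose cone subtends exactly one steradian:
# 1 = 2*pi*(1 - cos(theta))  =>  theta = arccos(1 - 1/(2*pi))
theta = math.acos(1 - 1 / (2 * math.pi))
print(theta)                    # ≈ 0.572 rad
print(cone_solid_angle(theta))  # ≈ 1.0 sr
```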
Steradian
–
A graphical representation of 1 steradian. The sphere has radius r, and in this case the area A of the highlighted surface patch is r². The solid angle Ω equals A/r² sr, which is 1 sr in this example. The entire sphere has a solid angle of 4π sr.
152.
Kinematic viscosity
–
The viscosity of a fluid is a measure of its resistance to gradual deformation by shear stress or tensile stress. For liquids, it corresponds to the informal concept of thickness: honey, for example, has a higher viscosity than water. Viscosity is the property of a fluid which opposes the relative motion between two surfaces of the fluid that are moving at different velocities. For a given velocity pattern, the stress required is proportional to the fluid's viscosity. A fluid that has no resistance to shear stress is known as an ideal or inviscid fluid; zero viscosity is observed only at very low temperatures, in superfluids. Otherwise, all fluids have positive viscosity and are said to be viscous or viscid. A fluid with a high viscosity, such as pitch, may appear to be a solid. The word viscosity is derived from the Latin viscum, meaning mistletoe. The dynamic viscosity of a fluid expresses its resistance to shearing flows, where adjacent layers move parallel to each other with different speeds. It can be defined through the idealized situation known as a Couette flow, in which a homogeneous layer of fluid is trapped between two plates, one fixed and one moving at constant speed u. If the speed of the top plate is small enough, the fluid particles will move parallel to it, and their speed will vary linearly from zero at the bottom to u at the top. Each layer of fluid will move faster than the one just below it; in particular, the fluid will apply on the top plate a force in the direction opposite to its motion, and an equal but opposite one on the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed. The magnitude F of this force is found to be proportional to the speed u and the area A of each plate, and inversely proportional to their separation y: F = μA(u/y). The proportionality factor μ in this formula is the viscosity of the fluid. The ratio u/y is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction perpendicular to the plates. Isaac Newton expressed the viscous forces by the differential equation τ = μ ∂u/∂y, where τ = F/A.
This formula assumes that the flow is moving along parallel lines; the equation can also be used where the velocity does not vary linearly with y, such as in fluid flowing through a pipe. Use of the Greek letter mu (μ) for the dynamic viscosity is common among mechanical and chemical engineers; however, the Greek letter eta (η) is also used, particularly by chemists and physicists.
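Newton's relation can be sketched numerically. The plate geometry and the viscosity value for water below are illustrative assumptions, not taken from the text:

```python
def shear_stress(mu, du_dy):
    # Newton's law of viscosity: tau = mu * du/dy
    # mu in Pa*s, du/dy in 1/s, result in Pa
    return mu * du_dy

def plate_force(mu, area, u, y):
    # Force needed to drag the top plate of a Couette flow,
    # assuming a linear velocity profile: F = mu * A * u / y
    return mu * area * u / y

# Illustrative numbers: water (~1.0e-3 Pa*s at room temperature),
# a 1 m^2 plate moving at 1 m/s over a 1 mm fluid gap.
F = plate_force(1.0e-3, 1.0, 1.0, 1.0e-3)
print(F)  # 1.0 N
```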
Kinematic viscosity
–
Pitch has a viscosity approximately 230 billion (2.3 × 10¹¹) times that of water.
Kinematic viscosity
–
A simulation of substances with different viscosities. The substance above has lower viscosity than the substance below
Kinematic viscosity
–
Example of the viscosity of milk and water. Liquids with higher viscosities make smaller splashes when poured at the same velocity.
Kinematic viscosity
–
Honey being drizzled.
153.
Kilogram square metre
–
It depends on the body's mass distribution and the axis chosen, with larger moments requiring more torque to change the body's rotation. It is an additive property: the moment of inertia of a composite system is the sum of the moments of inertia of its component subsystems, all taken about the same axis. One of its definitions is the second moment of mass with respect to distance from an axis r: I = ∫Q r² dm. For bodies constrained to rotate in a plane, it is sufficient to consider their moment of inertia about an axis perpendicular to the plane. When a body is rotating, or free to rotate, around an axis, the amount of torque needed to cause any given angular acceleration is proportional to the moment of inertia of the body. Moment of inertia may be expressed in units of kilogram metre squared (kg·m²) in SI units. Moment of inertia plays the role in rotational kinetics that mass plays in linear kinetics: both characterize the resistance of a body to changes in its motion. The moment of inertia depends on how mass is distributed around an axis of rotation. For a point-like mass, the moment of inertia about some axis is given by mr², where r is the distance of the point from the axis and m is the mass. For an extended body, the moment of inertia is the sum of all the small pieces of mass multiplied by the square of their distances from the axis in question; for an extended body of a regular shape and uniform density, this summation sometimes produces a simple expression that depends on the dimensions, shape and total mass of the object. In 1673 Christiaan Huygens introduced this parameter in his study of the oscillation of a body hanging from a pivot, known as a compound pendulum. The term moment of inertia was introduced by Leonhard Euler in his book Theoria motus corporum solidorum seu rigidorum in 1765, and it is incorporated into Euler's second law. Comparison of the oscillation frequency of a compound pendulum to that of a simple pendulum consisting of a single point of mass provides a mathematical formulation for the moment of inertia of an extended body. Moment of inertia also appears in momentum, kinetic energy, and in Newton's laws of motion for a rigid body, as a physical parameter that combines its shape and mass.
There is a difference in the way moment of inertia appears in planar and in spatial movement. The moment of inertia of a flywheel is used in a machine to resist variations in applied torque and so to smooth its rotational output. Moment of inertia I is defined as the ratio of the angular momentum L of a system to its angular velocity ω around a principal axis. If the angular momentum of a system is constant, then as the moment of inertia gets smaller, the angular velocity must increase; this occurs when spinning figure skaters pull in their arms or divers curl their bodies into a tuck position during a dive. For a simple pendulum, this definition yields a formula for the moment of inertia I in terms of the mass m of the pendulum and its distance r from the pivot point: I = mr². Thus, moment of inertia depends on both the mass m of a body and its geometry, or shape, as defined by the distance r to the axis of rotation.
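The point-mass sum I = Σ mᵢrᵢ² and the skater example can be sketched as follows; the masses and radii are made-up illustrative values, and the skater's arms are crudely modelled as two point masses:

```python
def moment_of_inertia(masses, radii):
    # I = sum(m_i * r_i**2) for point masses at distances r_i from the axis
    return sum(m * r**2 for m, r in zip(masses, radii))

# Two 2 kg "arm" point masses, first extended (0.8 m) then pulled in (0.2 m)
I_out = moment_of_inertia([2.0, 2.0], [0.8, 0.8])   # 2.56 kg*m^2
I_in = moment_of_inertia([2.0, 2.0], [0.2, 0.2])    # 0.16 kg*m^2

# Conservation of angular momentum: L = I * omega is constant
omega_out = 2.0                        # initial spin, rad/s
omega_in = I_out * omega_out / I_in    # spin after pulling arms in
print(omega_in)  # 32.0 rad/s: a 16x smaller I gives a 16x faster spin
```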
Kilogram square metre
–
Tightrope walker Samuel Dixon using the long rod's moment of inertia for balance while crossing the Niagara River in 1890.
Kilogram square metre
–
Flywheels have large moments of inertia to smooth out mechanical motion. This example is in a Russian museum.
Kilogram square metre
–
Spinning figure skaters can reduce their moment of inertia by pulling in their arms, allowing them to spin faster due to conservation of angular momentum.
Kilogram square metre
–
Pendulums used in Mendenhall gravimeter apparatus, from 1897 scientific journal. The portable gravimeter developed in 1890 by Thomas C. Mendenhall provided the most accurate relative measurements of the local gravitational field of the Earth.
154.
List of equations in classical mechanics
–
Classical mechanics is the branch of physics used to describe the motion of macroscopic objects. It is the most familiar of the theories of physics; the concepts it covers, such as mass, acceleration, and force, are commonly used and known. The subject is based upon a three-dimensional Euclidean space with fixed axes; the point of concurrency of the three axes is known as the origin of the particular space. Classical mechanics utilises many equations, as well as other mathematical concepts, which relate various physical quantities to one another; these include differential equations, manifolds, Lie groups, and ergodic theory. This article lists equations from Newtonian mechanics; see analytical mechanics for the more general formulation of classical mechanics. Every conservative force has a potential energy. By following two principles one can consistently assign a non-relative value to U: wherever the force is zero, its potential energy is defined to be zero as well, and whenever the force does work, potential energy is lost. In the following rotational definitions, the angle can be any angle about the specified axis of rotation. It is customary to use θ, but this does not have to be the angle used in polar coordinate systems. The unit axial vector n̂ = êr × êθ defines the axis of rotation. The precession angular speed of a spinning top is given by Ω = wr/(Iω), where w is the weight of the spinning flywheel, r is the distance from the pivot to the centre of mass, I is the moment of inertia and ω is the spin angular speed. Euler also worked out analogous laws of motion to those of Newton; these extend the scope of Newton's laws to rigid bodies, but are essentially the same as above. A new equation Euler formulated is I·α + ω × (I·ω) = τ, where I is the moment of inertia tensor. The previous equations for planar motion can be used here; corollaries of momentum, angular momentum etc. can immediately follow by applying the above definitions.
For any object moving in any path in a plane, r = r(t) = r êr, and the following results apply to the particle. If acceleration is not constant, then the general calculus equations above must be used, found by integrating the definitions of position, velocity and acceleration. For classical mechanics, the transformation law from one inertial or accelerating frame to another is the Galilean transform. If frame F′ moves at velocity V relative to frame F, then conversely F moves at velocity −V relative to F′; the situation is similar for relative accelerations. SHM, DHM, SHO, and DHO refer to simple harmonic motion, damped harmonic motion, simple harmonic oscillator and damped harmonic oscillator, respectively.
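Integrating the definitions of velocity and acceleration for the special case of constant acceleration gives the familiar results v = v₀ + at and x = x₀ + v₀t + ½at². A short sketch, with illustrative free-fall numbers of our own:

```python
def kinematics(x0, v0, a, t):
    # Constant-acceleration results of integrating a = dv/dt, v = dx/dt:
    #   v(t) = v0 + a*t
    #   x(t) = x0 + v0*t + a*t**2 / 2
    v = v0 + a * t
    x = x0 + v0 * t + 0.5 * a * t**2
    return x, v

# Free fall from rest for 2 s at g ≈ 9.8 m/s^2 (downward taken as positive)
x, v = kinematics(0.0, 0.0, 9.8, 2.0)
print(x, v)  # 19.6 m fallen, at 19.6 m/s
```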
List of equations in classical mechanics
–
Kinematic quantities of a classical particle: mass m, position r, velocity v, acceleration a.
155.
Newton (unit)
–
The newton is the International System of Units (SI) derived unit of force. It is named after Isaac Newton in recognition of his work on classical mechanics; see below for the conversion factors. One newton is the force needed to accelerate one kilogram of mass at the rate of one metre per second squared in the direction of the applied force. In 1948, the 9th CGPM, in resolution 7, adopted the name newton for this unit of force. The MKS system then became the blueprint for today's SI system of units, and the newton thus became the standard unit of force in le Système international d'unités. This SI unit is named after Isaac Newton; as with every International System of Units unit named for a person, the first letter of its symbol is upper case. Note that "degree Celsius" conforms to this rule because the "d" is lowercase. (Based on The International System of Units, section 5.2.) Newton's second law of motion states that F = ma, where F is the applied force, m is the mass of the object receiving the force and a is its acceleration. The newton is therefore 1 N = 1 kg·m/s², where the following symbols are used for the units: N for newton, kg for kilogram, m for metre and s for second. In dimensional analysis, F = MLT⁻², where F is force, M is mass, L is length and T is time. At average gravity on Earth, a kilogram mass exerts a force of about 9.8 newtons. An average-sized apple exerts about one newton of force, which we measure as the apple's weight. For example, the tractive effort of a Class Y steam train and the thrust of an F100 fighter jet engine are both around 130 kN. One kilonewton, 1 kN, is equivalent to 102.0 kgf: 1 kN = 102 kg × 9.81 m/s². So, for example, a platform rated at 321 kilonewtons will safely support a 32,100-kilogram load. Specifications in kilonewtons are common in safety specifications for the holding values of fasteners and Earth anchors, working loads in tension and in shear, the thrust of rocket engines and launch vehicles, and the clamping forces of the various moulds in injection moulding machines used to manufacture plastic parts.
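The kilogram-force conversions quoted above follow directly from standard gravity; a minimal sketch, with function names of our own:

```python
G_N = 9.80665  # standard gravity in m/s^2

def kgf_to_newton(kgf):
    # 1 kgf is the force exerted by standard gravity on 1 kg of mass
    return kgf * G_N

def newton_to_kgf(n):
    return n / G_N

print(kgf_to_newton(1))     # 9.80665 N
print(newton_to_kgf(1000))  # ≈ 102 kgf, i.e. 1 kN supports about 102 kg
```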
Newton (unit)
–
Base units
156.
Joule
–
The joule, symbol J, is a derived unit of energy in the International System of Units. It is equal to the energy transferred to an object when a force of one newton acts on that object in the direction of its motion through a distance of one metre. It is also the energy dissipated as heat when an electric current of one ampere passes through a resistance of one ohm for one second. It is named after the English physicist James Prescott Joule. One joule can also be defined as the work required to move an electric charge of one coulomb through an electrical potential difference of one volt, or one coulomb-volt (this relationship can be used to define the volt); or as the work required to produce one watt of power for one second, or one watt-second (this relationship can be used to define the watt). This SI unit is named after James Prescott Joule; as with every International System of Units unit named for a person, the first letter of its symbol is upper case. Note that "degree Celsius" conforms to this rule because the "d" is lowercase. (Based on The International System of Units, section 5.2.) The CGPM has given the unit of energy the name joule. The use of newton metres for torque and joules for energy is helpful to avoid misunderstandings and miscommunications. The distinction lies in the fact that energy is a scalar, the dot product of a force vector and a displacement vector; by contrast, torque is a vector, the cross product of a distance vector and a force vector. Torque and energy are related to one another by the equation E = τθ, where E is energy, τ is torque, and θ is the angle swept in radians. Since radians are dimensionless, it follows that torque and energy have the same dimensions. One joule in everyday life represents approximately: the energy required to lift a medium-size tomato (about 100 g) 1 m vertically from the surface of the Earth; the energy released when that same tomato falls back down to the ground; the energy required to accelerate a 1 kg mass at 1 m·s⁻² through a 1 m distance in space.
It also represents the heat required to raise the temperature of 1 g of water by 0.24 °C; the typical energy released as heat by a person at rest every 1/60 s; the kinetic energy of a 50 kg human moving very slowly; the kinetic energy of a 56 g tennis ball moving at 6 m/s; the kinetic energy of an object with mass 1 kg moving at √2 ≈ 1.4 m/s; and the amount of electricity required to light a 1 W LED for 1 s. Since the joule is also a watt-second, and the common unit for electricity sales to homes is the kW·h, one kW·h is 3,600,000 joules or 3.6 MJ. For additional examples, see orders of magnitude (energy). The zeptojoule is equal to one sextillionth (10⁻²¹) of one joule; 160 zeptojoules is equivalent to one electronvolt. The nanojoule is equal to one billionth (10⁻⁹) of one joule; one nanojoule is about 1/160 of the kinetic energy of a flying mosquito.
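Several of the everyday examples above follow from the kinetic-energy formula E = ½mv², as this short sketch shows (values taken from the examples listed):

```python
def kinetic_energy(m, v):
    # E = m * v**2 / 2, in joules when m is in kg and v is in m/s
    return 0.5 * m * v**2

tennis_ball = kinetic_energy(0.056, 6.0)  # 56 g ball at 6 m/s
one_kg = kinetic_energy(1.0, 2**0.5)      # 1 kg at sqrt(2) m/s
print(tennis_ball)  # ≈ 1 J
print(one_kg)       # ≈ 1 J (exactly 1 J analytically)
```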
Joule
–
Base units
157.
Integrated Authority File
–
The Integrated Authority File (German: Gemeinsame Normdatei, GND) is an international authority file for the organisation of personal names, subject headings and corporate bodies from catalogues. It is used mainly for documentation in libraries and increasingly also by archives and museums. The GND is managed by the German National Library in cooperation with various regional library networks in German-speaking Europe and other partners, and falls under the Creative Commons Zero (CC0) licence. The GND specification provides a hierarchy of high-level entities and sub-classes useful in library classification, and an approach to unambiguous identification of single elements. It also comprises an ontology intended for knowledge representation in the semantic web, available in RDF format.
Integrated Authority File
–
GND screenshot