1.
Matter
–
All the everyday objects that we can bump into, touch, or squeeze are ultimately composed of atoms. This ordinary atomic matter is in turn made up of interacting subatomic particles—usually a nucleus of protons and neutrons, and a cloud of orbiting electrons. Typically, science considers these composite particles matter because they have both rest mass and volume. By contrast, massless particles, such as photons, are not considered matter. However, not all particles with rest mass have a classical volume, since fundamental particles such as quarks and leptons are considered point particles with no effective size or volume. Nevertheless, quarks and leptons together make up ordinary matter. Matter exists in states (or phases): the classical solid, liquid, and gas, as well as the more exotic plasma, Bose–Einstein condensates, fermionic condensates, and quark–gluon plasma. For much of the history of the natural sciences, people have contemplated the nature of matter. The idea that matter was built of discrete building blocks, the so-called particulate theory of matter, was first put forward by the Greek philosophers Leucippus and Democritus. Matter should not be confused with mass, as the two are not the same in modern physics. Matter is itself a physical substance of which systems may be composed, while mass is not a substance but rather a quantitative property of matter and other substances or systems. While there are different views on what should be considered matter, the mass of a substance or system is the same irrespective of any such definition. Another difference is that matter has an opposite called antimatter; antimatter has the same mass property as its normal-matter counterpart. Different fields of science use the term matter in different, and sometimes incompatible, ways. Some of these ways are based on loose historical meanings, from a time when there was no reason to distinguish mass from simply a quantity of matter. As such, there is no universally agreed scientific meaning of the word matter. Scientifically, mass is well-defined, but matter can be defined in several ways.
Sometimes, in the field of physics, matter is simply equated with particles that exhibit rest mass, such as quarks and leptons. However, in physics and chemistry, matter exhibits both wave-like and particle-like properties, the so-called wave–particle duality. A definition of matter based on its physical and chemical structure is: matter is made up of atoms. Such atomic matter is sometimes termed ordinary matter. As an example, deoxyribonucleic acid (DNA) molecules are matter under this definition because they are made of atoms. This definition can be extended to include charged atoms and molecules, so as to include plasmas and electrolytes, which are not obviously included in the atoms definition. Alternatively, one can adopt the protons, neutrons, and electrons definition. At a microscopic level, the constituent particles of matter such as protons, neutrons, and electrons obey the laws of quantum mechanics and exhibit wave–particle duality.
2.
Continuum mechanics
–
Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century, and research in the area continues today. Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Continuum mechanics deals with physical properties of solids and fluids which are independent of any particular coordinate system in which they are observed. These physical properties are represented by tensors, which are mathematical objects that have the required property of being independent of coordinate system. These tensors can be expressed in coordinate systems for computational convenience. Materials, such as solids, liquids, and gases, are composed of molecules separated by space, and on a microscopic scale materials have cracks and discontinuities. A continuum, by contrast, is a body that can be continually sub-divided into infinitesimal elements with properties being those of the bulk material. More specifically, the continuum hypothesis (or assumption) hinges on the concept of a representative elementary volume. This condition provides a link between an experimentalist's and a theoretician's viewpoint on constitutive equations, as well as a way of spatial and statistical averaging of the microstructure. The latter then provide a basis for stochastic finite elements. The levels of the statistical volume element (SVE) and the representative volume element (RVE) link continuum mechanics to statistical mechanics; the RVE may be assessed only in a limited way via experimental testing, when the constitutive response becomes spatially homogeneous. Specifically for fluids, the Knudsen number is used to assess to what extent the approximation of continuity can be made. As an illustration, consider car traffic on a highway, with just one lane for simplicity.
Somewhat surprisingly, and in a tribute to its effectiveness, continuum mechanics effectively models the movement of cars via a partial differential equation (PDE) for the density of cars. The familiarity of this situation empowers us to understand a little of the continuum–discrete dichotomy underlying continuum modelling in general. To start modelling, define: x measures distance along the highway; t is time; ρ(x,t) is the density of cars on the highway; and u(x,t) is the speed of the cars. Cars do not appear and disappear: consider any group of cars, from the car at the back of the group located at x = a to the car at the front located at x = b. The total number of cars in this group is N = ∫_a^b ρ dx. Since cars are conserved, dN/dt = 0. The only way an integral can be zero for all intervals is if the integrand is zero for all x. Consequently, conservation derives the first-order nonlinear conservation PDE ∂ρ/∂t + ∂(ρu)/∂x = 0 for all positions on the highway. This conservation PDE applies not only to car traffic but also to fluids, solids, crowds, animals, plants, bushfires, and financial traders. This PDE is one equation with two unknowns, so another equation is needed to form a well-posed problem.
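The conservation argument above can be illustrated numerically. The following sketch (the highway length, grid size, constant car speed u, and initial density are illustrative assumptions, not from the text; real traffic models make u a function of ρ) integrates the conservation PDE with a first-order upwind scheme and checks that the total number of cars is unchanged:

```python
import numpy as np

# Minimal sketch of the car-density conservation law
#   drho/dt + d(rho*u)/dx = 0
# solved with a first-order upwind finite-volume scheme on a periodic highway.
L = 10.0            # highway length (assumption)
n = 200             # number of grid cells
dx = L / n
u = 1.0             # constant car speed (assumption)
dt = 0.4 * dx / u   # time step satisfying the CFL stability condition

x = (np.arange(n) + 0.5) * dx
rho = 1.0 + 0.5 * np.sin(2 * np.pi * x / L)   # initial density of cars

total_before = rho.sum() * dx                 # total number of cars, N
for _ in range(500):
    flux = rho * u                            # cars crossing a point per unit time
    # upwind difference: with u > 0, information comes from the cell to the left
    rho = rho - dt / dx * (flux - np.roll(flux, 1))
total_after = rho.sum() * dx

# dN/dt = 0: the scheme conserves the total number of cars
print(abs(total_after - total_before) < 1e-9)
```

The conservative (flux-difference) update guarantees that whatever leaves one cell enters its neighbour, which is exactly the discrete analogue of dN/dt = 0.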
Continuum mechanics
–
Figure 1. Configuration of a continuum body
3.
Couple (mechanics)
–
In mechanics, a couple is a system of forces with a resultant moment but no resultant force. A better term is force couple or pure moment; its effect is to create rotation without translation, or more generally without any acceleration of the centre of mass. In rigid body mechanics, force couples are free vectors, meaning their effects on a body are independent of the point of application. The resultant moment of a couple is called a torque. This is not to be confused with the term torque as it is used in physics, where it is merely a synonym of moment; instead, torque here is a special case of moment. Torque has special properties that moment does not have, in particular the property of being independent of reference point. Definition: a couple is a pair of forces, equal in magnitude, oppositely directed, and displaced by a perpendicular distance or moment. The simplest kind of couple consists of two equal and opposite forces whose lines of action do not coincide; this is called a simple couple. The forces have a turning effect or moment called a torque about an axis which is normal to the plane of the forces. The SI unit for the torque of the couple is the newton metre. When d is taken as a vector between the points of action of the forces, then the torque of the couple is the cross product of d and F, i.e. τ = |d × F|. The moment of a force is defined with respect to a certain point P, and in general, when P is changed, the moment changes. However, the moment of a couple is independent of the reference point P: in other words, a torque vector, unlike any other moment vector, is a free vector. The proof of this claim is as follows. Suppose there is a set of force vectors F1, F2, etc. that form a couple, with position vectors r1, r2, etc. about a point P. The moment about P is M = r1 × F1 + r2 × F2 + ⋯. Now we pick a new reference point P′ that differs from P by the vector r. The new moment is M′ = (r1 + r) × F1 + (r2 + r) × F2 + ⋯. The distributive property of the cross product implies M′ = (r1 × F1 + r2 × F2 + ⋯) + r × (F1 + F2 + ⋯). However, the definition of a force couple means that F1 + F2 + ⋯ = 0.
Therefore, M′ = r1 × F1 + r2 × F2 + ⋯ = M. This proves that the moment is independent of reference point, which is proof that a couple is a free vector. A force F applied to a rigid body at a distance d from the center of mass has the same effect as the same force applied directly to the center of mass together with a couple of moment Fd. The couple produces an angular acceleration of the rigid body at right angles to the plane of the couple. The force at the center of mass accelerates the body in the direction of the force without change in orientation. Conversely, a couple and a force in the plane of the couple can be replaced by a single force, appropriately located.
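The reference-point independence proved above is easy to check numerically. In this sketch the force and position vectors are arbitrary assumptions, chosen only so that they form a couple (F1 + F2 = 0):

```python
import numpy as np

# Sketch: the moment of a couple is independent of the reference point.
F1 = np.array([0.0, 3.0, 0.0])
F2 = -F1                           # equal magnitude, opposite direction
r1 = np.array([1.0, 0.0, 0.0])     # points of application, relative to P
r2 = np.array([4.0, 2.0, 0.0])

M = np.cross(r1, F1) + np.cross(r2, F2)   # moment about P

# Shift the reference point by an arbitrary vector r; positions become r_i + r.
r = np.array([-2.0, 5.0, 1.0])
M_prime = np.cross(r1 + r, F1) + np.cross(r2 + r, F2)

# The extra term r x (F1 + F2) vanishes because F1 + F2 = 0, so M' = M.
print(np.allclose(M, M_prime))
```

Repeating the last step with any other shift vector r gives the same moment, which is what makes the torque of a couple a free vector.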
Couple (mechanics)
–
Classical mechanics
4.
D'Alembert's principle
–
D'Alembert's principle, also known as the Lagrange–d'Alembert principle, is a statement of the fundamental classical laws of motion. It is named after its discoverer, the French physicist and mathematician Jean le Rond d'Alembert. It is the dynamic analogue to the principle of virtual work for applied forces in a static system, and in fact is more general than Hamilton's principle, avoiding restriction to holonomic systems. A holonomic constraint depends only on the coordinates and time; it does not depend on the velocities. The principle does not apply for irreversible displacements, such as sliding friction. D'Alembert's contribution was to demonstrate that in the totality of a dynamic system the forces of constraint vanish. That is to say that the generalized forces Q_j need not include constraint forces. The principle is equivalent to the somewhat more cumbersome Gauss's principle of least constraint. The general statement of d'Alembert's principle mentions the time derivatives of the momenta of the system. The momentum of the i-th mass is the product of its mass and velocity: p_i = m_i v_i. In many applications, the masses are constant and this reduces to ṗ_i = m_i v̇_i = m_i a_i. However, some applications involve changing masses, and in those cases both terms ṁ_i v_i and m_i v̇_i have to remain present. To date, nobody has shown that d'Alembert's principle is equivalent to Newton's second law in general; this is true only for very special cases, e.g. rigid body constraints. However, a solution to this problem does exist. Consider Newton's law for a system of particles, indexed by i. If arbitrary virtual displacements are assumed to be in directions that are orthogonal to the constraint forces, the constraint forces do no work. Such displacements are said to be consistent with the constraints. This leads to the formulation of d'Alembert's principle, which states that the difference of applied forces and inertial forces for a dynamic system does no virtual work.
There is also a corresponding principle for static systems called the principle of virtual work for applied forces. D'Alembert showed that one can transform an accelerating rigid body into an equivalent static system by adding the so-called inertial force and inertial torque (moment); the inertial force must act through the center of mass, while the inertial torque can act anywhere. The system can then be analyzed exactly as a static system subjected to this force and moment. The advantage is that, in the equivalent static system, one can take moments about any point. This often leads to simpler calculations because any force can be eliminated from the moment equations by choosing the appropriate point about which to apply the moment equation. In textbooks of engineering dynamics, and even in courses on the fundamentals of dynamics and kinematics of machines, this is sometimes referred to as d'Alembert's principle.
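The static-equivalence idea above can be made concrete with a small calculation. The scenario (a mass hanging from a wire whose support accelerates upward) and all numbers are illustrative assumptions, not taken from the text:

```python
# Sketch of d'Alembert's device: adding the inertial force -m*a turns an
# accelerating body into an equivalent static system.
m = 5.0    # kg, mass hanging from the wire (assumption)
g = 9.81   # m/s^2, gravitational acceleration
a = 2.0    # m/s^2, upward acceleration of the support (assumption)

# Dynamic view (Newton's second law): T - m*g = m*a, so T = m*(g + a).
T_dynamic = m * (g + a)

# d'Alembert view: add the inertial "force" m*a acting downward (opposite to
# the acceleration) and require static equilibrium: T - m*g - m*a = 0.
T_static = m * g + m * a

# Both viewpoints give the same wire tension.
print(abs(T_dynamic - T_static) < 1e-9)
```

This is the same bookkeeping shown in the free body diagram below: the inertia "force" ma is drawn alongside the weight W, and the problem is then solved as statics.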
D'Alembert's principle
–
Jean d'Alembert (1717–1783)
D'Alembert's principle
–
Free body diagram of a wire pulling on a mass with weight W, showing the d’Alembert inertia “force” ma.
D'Alembert's principle
–
Free body diagram depicting an inertia moment and an inertia force on a rigid body in free fall with an angular velocity.
5.
Energy
–
In physics, energy is the property that must be transferred to an object in order to perform work on, or to heat, the object; it can be converted in form, but not created or destroyed. The SI unit of energy is the joule, which is the energy transferred to an object by the mechanical work of moving it a distance of 1 metre against a force of 1 newton. Mass and energy are closely related; for example, with a sensitive enough scale, one could measure an increase in mass after heating an object. Living organisms require available energy to stay alive, such as the energy humans get from food. Civilisation gets the energy it needs from energy resources such as fossil fuels and nuclear fuel. The processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the sun. The total energy of a system can be subdivided and classified in various ways. It may be convenient to distinguish gravitational energy, thermal energy, several types of nuclear energy, and electric energy, among others. Many of these overlap; for instance, thermal energy usually consists partly of kinetic and partly of potential energy. Some types of energy are a mix of both potential and kinetic energy; an example is mechanical energy, which is the sum of kinetic and potential energy in a system. Whenever physical scientists discover that a phenomenon appears to violate the law of energy conservation, new forms of energy are typically added that account for the discrepancy. Heat and work are special cases in that they are not properties of systems; in general we cannot measure how much heat or work is present in an object, but rather only how much energy is transferred among objects in certain ways during the occurrence of a given process. Heat and work are measured as positive or negative depending on which side of the transfer we view them from. The distinction between different kinds of energy is not always clear-cut. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness.
The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. In 1807, Thomas Young was possibly the first to use the term energy instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described kinetic energy in 1829 in its modern sense. The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat. These developments led to the theory of conservation of energy, formalized largely by William Thomson as the field of thermodynamics.
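The mass-energy relationship mentioned above (heating an object measurably increases its mass) can be illustrated with a back-of-the-envelope calculation. Heating 1 kg of water by 50 K is an illustrative assumption:

```python
# Sketch: the mass increase from heating, via E = m c^2, is real but far too
# small for any everyday scale to detect.
c = 299_792_458.0             # speed of light, m/s
specific_heat_water = 4186.0  # J/(kg*K), approximate value for liquid water
mass = 1.0                    # kg of water (assumption)
delta_T = 50.0                # temperature rise, K (assumption)

heat_added = mass * specific_heat_water * delta_T  # energy added, joules
delta_m = heat_added / c**2                        # E = m c^2  =>  m = E / c^2

print(f"{heat_added:.0f} J added; mass increases by about {delta_m:.2e} kg")
```

The result, a few picograms, shows why mass conservation is an excellent approximation in everyday chemistry and mechanics even though it is not exact.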
Energy
–
In a typical lightning strike, 500 megajoules of electric potential energy is converted into the same amount of energy in other forms, mostly light energy, sound energy and thermal energy.
Energy
–
Thermal energy is energy of microscopic constituents of matter, which may include both kinetic and potential energy.
Energy
–
Thomas Young – the first to use the term "energy" in the modern sense.
Energy
–
A turbo generator transforms the energy of pressurised steam into electrical energy
6.
Potential energy
–
In physics, potential energy is the energy possessed by a body by virtue of its position relative to others, stresses within itself, its electric charge, or other factors. The unit for energy in the International System of Units (SI) is the joule. The term potential energy was introduced by the 19th-century Scottish engineer and physicist William Rankine, although it has links to the Greek philosopher Aristotle's concept of potentiality. Potential energy is associated with forces that act on a body in such a way that the work done by these forces on the body depends only on the initial and final positions of the body in space. These forces, which are called potential forces, can be represented at every point in space by vectors expressed as gradients of a scalar function called the potential. Potential energy is the energy of an object by virtue of its position relative to other objects. It is associated with restoring forces such as that of a spring or the force of gravity. The action of stretching the spring or lifting the mass is performed by an external force that works against the force field of the potential. This work is stored in the force field, and is said to be stored as potential energy. If the external force is removed, the force field acts on the body to perform the work as it moves the body back to the initial position. For example, suppose a ball of mass m is raised to a height h; if the acceleration of free fall is g, the weight of the ball is mg, and the work done against gravity, stored as gravitational potential energy, is mgh. There are various types of potential energy, each associated with a particular type of force. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of mutual positions of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motions of particles and the potential energy of their mutual positions.
Forces derivable from a potential are also called conservative forces. The work done by a conservative force is W = −ΔU, where ΔU is the change in the potential energy associated with the force. The negative sign provides the convention that work done against a force field increases potential energy. Common notations for potential energy are U, V, and Ep. Potential energy is closely linked with forces; in this case, the force can be defined as the negative of the vector gradient of the potential field. If the work done by a force is independent of the path, then the work done by the force is evaluated from the start and end points of the trajectory of the point of application.
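The relation W = −ΔU, and the force as the negative gradient of the potential, can be checked with gravity near Earth's surface. The mass and heights below are illustrative assumptions:

```python
# Sketch: work done by a conservative force equals minus the change in its
# potential energy, W = -delta U, here for gravitational potential U = m g h.
g = 9.81   # m/s^2, standard gravity
m = 2.0    # kg (assumption)

def U(h):
    """Gravitational potential energy (J) of the mass at height h (m)."""
    return m * g * h

h_start, h_end = 5.0, 1.0                    # the body descends from 5 m to 1 m
work_by_gravity = -(U(h_end) - U(h_start))   # W = -delta U, positive on descent

# The force is the negative gradient of the potential: F = -dU/dh = -m*g.
dh = 1e-6
force = -(U(h_end + dh) - U(h_end)) / dh     # numerical derivative

print(round(work_by_gravity, 2), round(force, 2))
```

The positive work on descent and the constant downward force −mg both drop straight out of the single scalar function U, which is the practical appeal of working with potentials.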
Potential energy
–
In the case of a bow and arrow, when the archer does work on the bow, drawing the string back, some of the chemical energy of the archer's body is transformed into elastic potential-energy in the bent limbs of the bow. When the string is released, the force between the string and the arrow does work on the arrow. Thus, the potential energy in the bow limbs is transformed into the kinetic energy of the arrow as it takes flight.
Potential energy
–
A trebuchet uses the gravitational potential energy of the counterweight to throw projectiles over two hundred meters
Potential energy
–
Springs are used for storing elastic potential energy
Potential energy
–
Archery is one of humankind's oldest applications of elastic potential energy
7.
Force
–
In physics, a force is any interaction that, when unopposed, will change the motion of an object. In other words, a force can cause an object with mass to change its velocity. Force can also be described intuitively as a push or a pull. A force has both magnitude and direction, making it a vector quantity; it is measured in the SI unit of newtons and represented by the symbol F. The original form of Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. In an extended body, each part usually applies forces on the adjacent parts; such internal mechanical stresses cause no acceleration of that body as the forces balance one another. Pressure, the distribution of many small forces applied over an area of a body, is a simple type of stress that if unbalanced can cause the body to accelerate. Stress usually causes deformation of solid materials, or flow in fluids. Early thinkers held fundamental misconceptions about force; in part this was due to an incomplete understanding of the sometimes non-obvious force of friction. A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Sir Isaac Newton formulated laws of motion that were not improved on for nearly three hundred years. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known; in order of decreasing strength, they are: strong, electromagnetic, weak, and gravitational. High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction. Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines.
The mechanical advantage given by a machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes, who was especially famous for formulating a treatment of buoyant forces inherent in fluids. Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different natural places therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place on the ground. He distinguished between the innate tendency of objects to find their natural place, which led to natural motion, and unnatural or forced motion.
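The trade described above (less force over more distance, same work) is easy to quantify for a lever. The load and lever dimensions below are illustrative assumptions, not from the text:

```python
# Sketch: mechanical advantage of a simple machine trades a smaller force
# for a proportionally longer distance, leaving the work unchanged.
load = 600.0       # N, force needed to lift the load directly (assumption)
load_arm = 0.5     # m, fulcrum-to-load distance (assumption)
effort_arm = 2.0   # m, fulcrum-to-effort distance (assumption)

mechanical_advantage = effort_arm / load_arm   # = 4 for this lever
effort = load / mechanical_advantage           # smaller applied force

# ...but the effort end moves proportionally farther than the load rises.
load_rise = 0.125                              # m, how far the load is lifted
effort_travel = load_rise * mechanical_advantage

work_on_load = load * load_rise
work_by_effort = effort * effort_travel
print(effort, work_on_load == work_by_effort)
```

A quarter of the force over four times the distance: the machine redistributes the effort but cannot reduce the work, which is exactly the point the ancient analysis of simple machines was groping toward.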
Force
–
Aristotle famously described a force as anything that causes an object to undergo "unnatural motion"
Force
–
Forces are also described as a push or pull on an object. They can be due to phenomena such as gravity, magnetism, or anything that might cause a mass to accelerate.
Force
–
Though Sir Isaac Newton's most famous equation is F = ma, he actually wrote down a different form for his second law of motion that did not use differential calculus.
Force
–
Galileo Galilei was the first to point out the inherent contradictions contained in Aristotle's description of forces.
8.
Inertia
–
Inertia is the resistance of any physical object to any change in its state of motion; this includes changes to its speed, direction, or state of rest. It is the tendency of objects to keep moving in a straight line at constant velocity. The principle of inertia is one of the fundamental principles of classical physics that are used to describe the motion of objects. Inertia comes from the Latin word iners, meaning idle or sluggish. Inertia is one of the primary manifestations of mass, which is a quantitative property of physical systems. In common usage, the term inertia may refer to an object's amount of resistance to change in velocity, or sometimes to its momentum. Thus, an object will continue moving at its current velocity until some force causes its speed or direction to change. On the surface of the Earth, inertia is often masked by gravity and the effects of friction and air resistance, both of which tend to decrease the speed of moving objects. Aristotle explained the continued motion of projectiles, which are separated from their projector, by the action of the surrounding medium; hence Aristotle concluded that such violent motion in a void was impossible. Despite its general acceptance, Aristotle's concept of motion was disputed on several occasions by notable philosophers over nearly two millennia. For example, Lucretius stated that the default state of matter was motion, not stasis, and Philoponus proposed that motion was not maintained by the action of a surrounding medium. Although this was not the modern concept of inertia, for there was still the need for a power to keep a body in motion, it proved a fundamental step in that direction. This view was opposed by Averroes and by many scholastic philosophers who supported Aristotle. However, it did not go unchallenged in the Islamic world. In the 14th century, Jean Buridan rejected the notion that a motion-generating property, which he named impetus, dissipated spontaneously.
Buridan's position was that a moving object would be arrested by the resistance of the air and the weight of the body, which would oppose its impetus. Buridan also maintained that impetus increased with speed; thus, his idea of impetus was similar in many ways to the modern concept of momentum. Buridan also believed that impetus could be not only linear, but also circular in nature. Buridan's thought was followed up by his pupil Albert of Saxony and the Oxford Calculators, who performed various experiments that further undermined the classical, Aristotelian view. Their work in turn was elaborated by Nicole Oresme, who pioneered the practice of demonstrating laws of motion in the form of graphs. Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects forced into circular motion. The law of inertia states that it is the tendency of an object to resist a change in motion. According to Newton, an object will stay at rest or stay in motion unless acted on by a net external force, whether it results from gravity, friction, or contact.
Inertia
–
Galileo Galilei
9.
Moment of inertia
–
The moment of inertia of a rigid body is a quantity that determines the torque needed for a desired angular acceleration about a rotational axis. It depends on the body's mass distribution and the axis chosen, with larger moments requiring more torque to change the body's rotation. It is an additive property: the moment of inertia of a composite system is the sum of the moments of inertia of its component subsystems (all taken about the same axis). One of its definitions is the second moment of mass with respect to distance from an axis r, integrated over the body Q: I = ∫_Q r² dm. For bodies constrained to rotate in a plane, it is sufficient to consider their moment of inertia about an axis perpendicular to the plane. When a body is rotating, or free to rotate, around an axis, the amount of torque needed to cause any given angular acceleration is proportional to the moment of inertia of the body. Moment of inertia may be expressed in units of kilogram metre squared (kg·m²) in SI units. Moment of inertia plays the role in rotational kinetics that mass plays in linear kinetics: both characterize the resistance of a body to changes in its motion. The moment of inertia depends on how mass is distributed around an axis of rotation. For a point-like mass, the moment of inertia about some axis is given by mr², where r is the distance of the point from the axis and m is its mass. For an extended body, the moment of inertia is just the sum of all the pieces of mass multiplied by the square of their distances from the axis in question. For an extended body of regular shape and uniform density, this summation sometimes produces a simple expression that depends on the dimensions, shape, and total mass of the object. In 1673 Christiaan Huygens introduced this parameter in his study of the oscillation of a body hanging from a pivot, known as a compound pendulum. The term moment of inertia was introduced by Leonhard Euler in his book Theoria motus corporum solidorum seu rigidorum in 1765, and it is incorporated into Euler's second law. Comparison of the oscillation frequency of a compound pendulum to that of a simple pendulum consisting of a single point of mass provides a mathematical formulation for the moment of inertia of an extended body. Moment of inertia appears in angular momentum, kinetic energy, and Newton's laws of motion for a rigid body, as a physical parameter that combines its shape and mass.
There is a difference in the way moment of inertia appears in planar and in spatial movement. The moment of inertia of a flywheel is used in a machine to resist variations in applied torque to smooth its rotational output. The moment of inertia I is defined as the ratio of the angular momentum L of a system to its angular velocity ω around a principal axis. If the angular momentum of a system is constant, then as the moment of inertia gets smaller, the angular velocity must increase. This occurs when spinning figure skaters pull in their arms or divers curl their bodies into a tuck position during a dive. For a simple pendulum, this yields a formula for the moment of inertia I in terms of the mass m of the pendulum and its distance r from the pivot point: I = mr². Thus, moment of inertia depends on both the mass m of a body and its geometry, or shape, as defined by the distance r to the axis of rotation.
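The sum-of-point-masses picture described above can be checked against a standard closed-form result. Here a thin uniform rod rotating about one end is discretized into point masses; the rod's mass, length, and the number of elements are illustrative assumptions:

```python
import numpy as np

# Sketch: moment of inertia as the sum of mass elements times squared distance,
#   I = sum(m_i * r_i^2),
# compared with the known result for a thin uniform rod about one end,
#   I = M L^2 / 3.
M = 3.0       # total mass, kg (assumption)
L = 2.0       # rod length, m (assumption)
n = 100_000   # number of point-mass elements

r = (np.arange(n) + 0.5) * (L / n)   # distance of each element from the axis
dm = M / n                           # mass of each element (uniform density)
I_numeric = np.sum(dm * r**2)        # discrete sum approximating the integral

I_exact = M * L**2 / 3               # closed-form result for a rod about its end
print(abs(I_numeric - I_exact) / I_exact < 1e-6)
```

Mass far from the axis dominates the sum because of the r² weighting, which is why the tightrope walker's long rod and the flywheel in the figures below are so effective.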
Moment of inertia
–
Tightrope walker Samuel Dixon using the long rod's moment of inertia for balance while crossing the Niagara River in 1890.
Moment of inertia
–
Flywheels have large moments of inertia to smooth out mechanical motion. This example is in a Russian museum.
Moment of inertia
–
Spinning figure skaters can reduce their moment of inertia by pulling in their arms, allowing them to spin faster due to conservation of angular momentum.
Moment of inertia
–
Pendulums used in Mendenhall gravimeter apparatus, from 1897 scientific journal. The portable gravimeter developed in 1890 by Thomas C. Mendenhall provided the most accurate relative measurements of the local gravitational field of the Earth.
10.
Power (physics)
–
In physics, power is the rate of doing work: the amount of energy converted per unit time. Having no direction, it is a scalar quantity. In the SI system, the unit of power is the joule per second, known as the watt in honour of James Watt; another common and traditional measure is horsepower. Being the rate of work, power at any instant can be written as P = dW/dt; because the work integral depends on the trajectory of the point of application of the force and torque, the calculation of work is said to be path dependent. As a physical concept, power requires both a change in the physical universe and a specified time in which the change occurs. This is distinct from the concept of work, which is measured only in terms of a net change in the state of the physical universe. The output power of a motor is the product of the torque that the motor generates and the angular velocity of its output shaft. The power involved in moving a vehicle is the product of the traction force of the wheels and the velocity of the vehicle. The dimension of power is energy divided by time. The SI unit of power is the watt, which is equal to one joule per second. Other units of power include ergs per second, horsepower, metric horsepower, and foot-pounds per minute. One horsepower is equivalent to 33,000 foot-pounds per minute, or the power required to lift 550 pounds by one foot in one second. Other units include dBm, a logarithmic measure with 1 milliwatt as reference; food calories per hour; and Btu per hour. This shows how power is an amount of energy consumed per unit time. If ΔW is the amount of work performed during a period of time of duration Δt, the average power over that period is P_avg = ΔW / Δt; it is the average amount of work done or energy converted per unit of time. The average power is simply called power when the context makes it clear. The instantaneous power is then the limiting value of the average power as the time interval Δt approaches zero: P = lim_{Δt→0} P_avg = lim_{Δt→0} ΔW/Δt = dW/dt. In the case of constant power P, the amount of work performed during a period of duration t is given by W = P t.
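The average-power and constant-power relations above amount to two one-line formulas. The motor and all numbers in this sketch are illustrative assumptions, not from the text:

```python
# Sketch: average power P_avg = delta W / delta t, and work at constant power
# W = P * t, with an approximate unit conversion to horsepower.
work_done = 9000.0   # J, energy converted by a hypothetical motor (assumption)
duration = 30.0      # s

p_avg = work_done / duration   # average power, watts
print(p_avg)                   # 300.0

# At constant power, work grows linearly with time: W = P * t.
t = 120.0                      # s
w = p_avg * t
print(w)                       # 36000.0

# One horsepower is 33,000 ft-lbf/min, approximately 745.7 W.
p_in_hp = p_avg / 745.7
```

The instantaneous power P = dW/dt is just this same ratio taken over a vanishingly short interval.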
Power (physics)
–
Ansel Adams photograph of electrical wires of the Boulder Dam Power Units, 1941–1942
11.
Work (physics)
–
In physics, a force is said to do work if, when acting, there is a displacement of the point of application in the direction of the force. For example, when a ball is held above the ground and then dropped, the work done on the ball as it falls is equal to the weight of the ball (a force) multiplied by the distance to the ground (a displacement). The SI unit of work is the joule (J), which is defined as the work expended by a force of one newton through a distance of one metre. The dimensionally equivalent newton-metre (N⋅m) is sometimes used as the measuring unit for work, but this can be confused with the measurement unit of torque. Usage of N⋅m is discouraged by the SI authority, since it can lead to confusion as to whether the quantity expressed in newton metres is a torque measurement or a measurement of energy. Non-SI units of work include the erg, the foot-pound, the foot-poundal, the kilowatt hour, and the litre-atmosphere. Because work has the same physical dimension as heat, measurement units typically reserved for heat or energy content, such as the therm and the BTU, are occasionally used as measuring units for work. The work done by a constant force of magnitude F on a point that moves a distance s in a straight line in the direction of the force is the product W = Fs. For example, if a force of 10 newtons (F = 10 N) acts along a point that travels 2 metres (s = 2 m), then W = Fs = (10 N)(2 m) = 20 J. This is approximately the work done lifting a 1 kg weight from ground level to over a person's head against the force of gravity. Notice that the work is doubled either by lifting twice the weight the same distance or by lifting the same weight twice the distance. Work is closely related to energy. The work–energy principle states that an increase in the kinetic energy of a rigid body is caused by an equal amount of positive work done on the body by the resultant force acting on that body. Conversely, a decrease in kinetic energy is caused by an equal amount of negative work done by the resultant force. From Newton's second law, it can be shown that work on a free, rigid body is equal to the change in the kinetic energy corresponding to the velocity and rotation of that body. The work of forces generated by a potential function is known as potential energy, and the forces are said to be conservative.
These formulas demonstrate that work is the energy associated with the action of a force, so work subsequently possesses the physical dimensions, and units, of energy. The work/energy principles discussed here are identical to electric work/energy principles. Constraint forces determine the movement of components in a system, constraining the object within a boundary. Constraint forces ensure that the velocity in the direction of the constraint is zero, which means the constraint forces do no work on the system; this only applies for a single-particle system. For example, in an Atwood machine, the rope does work on each body; there are, however, cases where this is not true.
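The constant-force product W = Fs described above can be checked numerically. A minimal sketch (function and variable names are illustrative, not from the source):

```python
# Work done by a constant force acting along the direction of motion: W = F * s
def work(force_n, distance_m):
    """Work in joules for a force in newtons acting through a distance in metres."""
    return force_n * distance_m

w = work(10.0, 2.0)            # 10 N through 2 m -> 20 J, as in the text
w_lift = work(1.0 * 9.8, 2.0)  # lifting 1 kg by 2 m against gravity (g ~ 9.8 m/s^2): ~19.6 J
# Doubling either the force or the distance doubles the work:
w_double = work(20.0, 2.0)
```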
Work (physics)
–
A baseball pitcher does positive work on the ball by applying a force to it over the distance it moves while in his grip.
Work (physics)
–
A force of constant magnitude and perpendicular to the lever arm
Work (physics)
–
Gravity F = mg does work W = mgh along any descending path
Work (physics)
–
Lotus type 119B gravity racer at Lotus 60th celebration.
12.
Space
–
Space is the boundless three-dimensional extent in which objects and events have relative position and direction. Physical space is usually conceived in three linear dimensions, although modern physicists usually consider it, with time, to be part of a boundless four-dimensional continuum known as spacetime. The concept of space is considered to be of fundamental importance to an understanding of the physical universe. However, disagreement continues between philosophers over whether it is itself an entity, a relationship between entities, or part of a conceptual framework. Many of these classical philosophical questions were discussed in the Renaissance and then reformulated in the 17th century. In Isaac Newton's view, space was absolute, in the sense that it existed permanently and independently of whether there was any matter in the space. Other natural philosophers, notably Gottfried Leibniz, thought instead that space was in fact a collection of relations between objects, given by their distance and direction from one another. In the 18th century, the philosopher and theologian George Berkeley attempted to refute the visibility of spatial depth in his Essay Towards a New Theory of Vision. Later, Immanuel Kant referred to the experience of space in his Critique of Pure Reason as being a pure a priori form of intuition. In the 19th and 20th centuries mathematicians began to examine geometries that are non-Euclidean, in which space is conceived as curved rather than flat. According to Albert Einstein's theory of relativity, space around gravitational fields deviates from Euclidean space. Experimental tests of general relativity have confirmed that non-Euclidean geometries provide a better model for the shape of space. In the seventeenth century, the philosophy of space and time emerged as a central issue in epistemology. At its heart were Gottfried Leibniz, the German philosopher-mathematician, and Isaac Newton, who set out two opposing theories of what space is. For Leibniz, space was a collection of relations between objects; unoccupied regions are those that could have objects in them, and thus spatial relations with other places.
For Leibniz, then, space was an abstraction from the relations between individual entities or their possible locations, and therefore could not be continuous but must be discrete. Space could be thought of in a way similar to the relations between family members: although people in the family are related to one another, the relations do not exist independently of the people. Leibniz argued that space could not exist independently of objects in the world, because that would imply a difference between two universes exactly alike except for the location of the material world in each; but since there would be no observational way of telling these universes apart, then, according to the identity of indiscernibles, there would be no real difference between them. According to the principle of sufficient reason, any theory of space that implied that there could be two such possible universes must therefore be wrong. Newton took space to be more than relations between material objects and based his position on observation and experimentation.
Space
–
Gottfried Leibniz
Space
–
A right-handed three-dimensional Cartesian coordinate system used to indicate positions in space.
Space
–
Isaac Newton
Space
–
Immanuel Kant
13.
Speed
–
In everyday use and in kinematics, the speed of an object is the magnitude of its velocity; it is thus a scalar quantity. Speed has the dimensions of distance divided by time. The SI unit of speed is the metre per second, but the most common unit of speed in everyday usage is the kilometre per hour or, in the US and the UK, miles per hour. For air and marine travel the knot is commonly used. The fastest possible speed at which energy or information can travel, according to special relativity, is the speed of light in a vacuum, c = 299,792,458 metres per second. Matter cannot quite reach the speed of light, as this would require an infinite amount of energy. In relativity physics, the concept of rapidity replaces the classical idea of speed. The Italian physicist Galileo Galilei is usually credited with being the first to measure speed by considering the distance covered and the time it takes. Galileo defined speed as the distance covered per unit of time; in equation form, this is v = d/t, where v is speed, d is distance, and t is time. A cyclist who covers 30 metres in a time of 2 seconds, for example, has a speed of 15 metres per second. Objects in motion often have variations in speed. If s is the length of the path travelled until time t, the speed equals the time derivative of s; in the special case where the velocity is constant, this can be simplified to v = s/t. The average speed over a finite time interval is the total distance travelled divided by the time duration. Speed at some instant, or assumed constant during a very short period of time, is called instantaneous speed. By looking at a speedometer, one can read the instantaneous speed of a car at any instant. A car travelling at 50 km/h generally goes for less than one hour at a constant speed, but if it did travel at that speed for a full hour, it would cover 50 km. If the vehicle continued at that speed for half an hour, it would cover half that distance. If it continued for only one minute, it would cover about 833 m. Different from instantaneous speed, average speed is defined as the total distance covered divided by the time interval.
For example, if a distance of 80 kilometres is driven in 1 hour, the average speed is 80 kilometres per hour. Likewise, if 320 kilometres are travelled in 4 hours, the average speed is also 80 kilometres per hour. When a distance in kilometres is divided by a time in hours, the result is in kilometres per hour. Average speed does not describe the speed variations that may have taken place during shorter time intervals, and so average speed is often quite different from a value of instantaneous speed. If the average speed and the time of travel are known, the distance travelled can be calculated by rearranging the definition to d = vt. Using this equation for an average speed of 80 kilometres per hour on a 4-hour trip, the distance covered is found to be 320 kilometres. Linear speed is the distance travelled per unit of time, while tangential speed is the linear speed of something moving along a circular path.
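The average-speed relations above can be sketched in a few lines of code (the function names are illustrative):

```python
# Average speed = total distance / total time; rearranged, distance = average speed * time
def average_speed(distance_km, time_h):
    return distance_km / time_h

def distance_covered(avg_speed_kmh, time_h):
    return avg_speed_kmh * time_h

v1 = average_speed(80.0, 1.0)    # 80 km in 1 h  -> 80 km/h
v2 = average_speed(320.0, 4.0)   # 320 km in 4 h -> also 80 km/h
d = distance_covered(80.0, 4.0)  # 80 km/h for 4 h -> 320 km
```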
Speed
–
Speed can be thought of as the rate at which an object covers distance. A fast-moving object has a high speed and covers a relatively large distance in a given amount of time, while a slow-moving object covers a relatively small amount of distance in the same amount of time.
14.
Torque
–
Torque, moment, or moment of force is rotational force. Just as a force is a push or a pull, a torque can be thought of as a twist to an object. Loosely speaking, torque is a measure of the turning force on an object such as a bolt or a flywheel. For example, pushing or pulling the handle of a wrench connected to a nut or bolt produces a torque that loosens or tightens the nut or bolt. The symbol for torque is typically τ, the lowercase Greek letter tau. When it is called moment of force, it is denoted by M. The SI unit for torque is the newton metre; for more on the units of torque, see Units. This article follows US physics terminology in its use of the word torque; in the UK and in US mechanical engineering, this is called moment of force, usually shortened to moment. In US physics and UK physics terminology these terms are interchangeable, unlike in US mechanical engineering. Torque is defined mathematically as the rate of change of angular momentum of an object. The definition of torque states that one or both of the angular velocity or the moment of inertia of an object are changing. Moment is the general term used for the tendency of one or more applied forces to rotate an object about an axis. For example, a rotational force applied to a shaft causing acceleration, such as a drill bit accelerating from rest, results in a moment called torque. By contrast, a lateral force on a beam produces a moment, but since the angular momentum of the beam is not changing, this bending moment is not called a torque. Similarly with any force couple on an object that has no change to its angular momentum: such a moment is also not called a torque. This article follows the US physics terminology by calling all moments by the term torque, whether or not they cause the angular momentum of an object to change. The concept of torque, also called moment or couple, originated with the studies of Archimedes on levers. The term torque was apparently introduced into English scientific literature by James Thomson, the brother of Lord Kelvin, in 1884. A force applied at a right angle to a lever multiplied by its distance from the lever's fulcrum is its torque.
A force of three newtons applied two metres from the fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. More generally, the torque on a particle can be defined as the cross product τ = r × F, where r is the particle's position vector relative to the fulcrum and F is the force acting on the particle. Alternatively, τ = rF⊥, where F⊥ is the amount of force directed perpendicularly to the position of the particle; any force directed parallel to the particle's position vector does not produce a torque.
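The cross-product definition τ = r × F, and the lever example of three newtons at two metres versus one newton at six metres, can be checked directly (the helper function `cross` is written out for illustration):

```python
# Torque as a cross product: tau = r x F (3D vectors, right-hand rule)
def cross(r, f):
    return (r[1] * f[2] - r[2] * f[1],
            r[2] * f[0] - r[0] * f[2],
            r[0] * f[1] - r[1] * f[0])

# 3 N applied 2 m from the fulcrum, force perpendicular to the lever arm:
tau_a = cross((2.0, 0.0, 0.0), (0.0, 3.0, 0.0))
# 1 N applied 6 m from the fulcrum gives the same torque:
tau_b = cross((6.0, 0.0, 0.0), (0.0, 1.0, 0.0))
# A force parallel to the position vector produces no torque:
tau_parallel = cross((2.0, 0.0, 0.0), (5.0, 0.0, 0.0))
```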
Torque
15.
Velocity
–
The velocity of an object is the rate of change of its position with respect to a frame of reference, and is a function of time. Velocity is equivalent to a specification of an object's speed and direction of motion. Velocity is an important concept in kinematics, the branch of classical mechanics that describes the motion of bodies. Velocity is a vector quantity; both magnitude and direction are needed to define it. The scalar absolute value (magnitude) of velocity is called speed, a quantity measured in the SI (metric) system as metres per second (m/s). For example, 5 metres per second is a scalar, whereas 5 metres per second east is a vector. If there is a change in speed, direction or both, then the object has a changing velocity and is said to be undergoing an acceleration. To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path; thus, a constant velocity means motion in a straight line at a constant speed. For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes. Hence, the car is considered to be undergoing an acceleration. Speed describes only how fast an object is moving, whereas velocity gives both how fast and in what direction the object is moving. If a car is said to travel at 60 km/h, its speed has been specified. However, if the car is said to move at 60 km/h to the north, its velocity has now been specified. The big difference can be noticed when we consider movement around a circle: something that returns to its starting point has an average velocity of zero but a nonzero average speed. This is because the average velocity is calculated by considering only the displacement between the starting and the end points, while the average speed considers only the total distance travelled. Velocity is defined as the rate of change of position with respect to time; average velocity can be calculated as v̄ = Δx/Δt. The average velocity is always less than or equal to the average speed of an object.
This can be seen by realizing that while distance is always strictly increasing, displacement can increase or decrease in magnitude as well as change direction. Instantaneous velocity is the derivative of position with respect to time, v = dx/dt. From this derivative equation, in the one-dimensional case it can be seen that the area under a velocity vs. time graph is the displacement, x. In calculus terms, the integral of the velocity function v(t) is the displacement function x(t). In the figure, this corresponds to the area under the curve labeled s (s being an alternative notation for displacement). Since the derivative of the position with respect to time gives the change in position divided by the change in time, velocity is measured in units such as metres per second. Although velocity is defined as the rate of change of position, it is often common to start with an expression for an object's acceleration. As seen by the three green tangent lines in the figure, an object's instantaneous acceleration at a point in time is the slope of the tangent to the curve of a v(t) graph at that point. In other words, acceleration is defined as the derivative of velocity with respect to time. From there, we can obtain an expression for velocity as the area under an acceleration vs. time (a vs. t) graph.
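The statement that displacement is the area under a velocity vs. time graph can be illustrated with a simple trapezoidal sum; this is a rough numerical sketch, not a method named in the source:

```python
# Displacement as the area under a velocity vs. time graph (trapezoidal sum)
def displacement(times, velocities):
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        total += 0.5 * (velocities[i] + velocities[i - 1]) * dt
    return total

ts = [0.0, 1.0, 2.0, 3.0, 4.0]
# Constant velocity 5 m/s for 4 s: area is a rectangle, 20 m
x_const = displacement(ts, [5.0] * 5)
# Constant acceleration with v = 2t: x = t^2, so 16 m after 4 s
x_accel = displacement(ts, [2.0 * t for t in ts])
```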
Velocity
–
As a change of direction occurs while the cars turn on the curved track, their velocity is not constant.
16.
Routhian mechanics
–
In analytical mechanics, a branch of theoretical physics, Routhian mechanics is a hybrid formulation of Lagrangian mechanics and Hamiltonian mechanics developed by Edward John Routh. Correspondingly, the Routhian is the function which replaces both the Lagrangian and Hamiltonian functions. The Routhian, like the Hamiltonian, can be obtained from a Legendre transform of the Lagrangian, and has a similar mathematical form to the Hamiltonian, but is not exactly the same. The difference between the Lagrangian, Hamiltonian, and Routhian functions are their variables. The Routhian differs from these functions in that some coordinates are chosen to have corresponding generalized velocities, the rest to have corresponding generalized momenta. This choice is arbitrary, and can be done to simplify the problem. In each case the Lagrangian and Hamiltonian functions are replaced by a single function, the Routhian. The full set thus has the advantages of both sets of equations, with the convenience of splitting one set of coordinates to the Hamilton equations, and the rest to the Lagrangian equations. The Lagrangian equations are powerful results, used frequently in theory and practice, since the equations of motion in the coordinates are easy to set up. However, if cyclic coordinates occur there will still be equations to solve for all the coordinates, including the cyclic coordinates despite their absence in the Lagrangian. The Routhian approach eliminates the cyclic coordinates, so overall fewer equations need to be solved compared to the Lagrangian approach. As with the rest of analytical mechanics, Routhian mechanics is completely equivalent to Newtonian mechanics and all other formulations of classical mechanics, and introduces no new physics. It offers an alternative way to solve mechanical problems. The velocities dqi/dt are expressed as functions of their corresponding momenta by inverting their defining relation; in this context, pi is said to be the momentum canonically conjugate to qi. The Routhian is intermediate between L and H: some coordinates q1, q2, ..., qn are chosen to have corresponding generalized momenta p1, p2, ..., pn, and the rest of the coordinates ζ1, ζ2, ..., ζs to have generalized velocities dζ1/dt, dζ2/dt, ..., dζs/dt, and time may appear explicitly. Here again each generalized velocity dqi/dt is to be expressed as a function of the generalized momentum pi via its defining relation. The choice of which n coordinates are to have corresponding momenta is arbitrary. The above convention is used by Landau and Lifshitz, and Goldstein; some authors may define the Routhian to be the negative of the above definition. Below, the Routhian equations of motion are obtained in two ways; in the process other useful derivatives are found that can be used elsewhere. Consider the case of a system with two degrees of freedom, q and ζ, with generalized velocities dq/dt and dζ/dt. Now change variables from the set (q, ζ, dq/dt, dζ/dt) to (q, ζ, p, dζ/dt), simply switching the velocity dq/dt to the momentum p. This change of variables in the differentials is the Legendre transformation. The differential of the new function to replace L will be a sum of differentials in dq, dζ, dp, d(dζ/dt), and dt. Notice that the Routhian replaces the Hamiltonian and Lagrangian functions in all the equations of motion; the remaining equation states that the partial time derivatives of L and R are negatives, ∂L/∂t = −∂R/∂t, for i = 1, 2, ..., n and j = 1, 2, ..., s.
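For the two-degree-of-freedom case discussed above, the definition and the resulting mixed equations of motion can be summarized as follows (a sketch using the sign convention of Landau and Lifshitz named in the text; other authors negate R):

```latex
R(q, \zeta, p, \dot{\zeta}, t) = p\,\dot{q} - L(q, \zeta, \dot{q}, \dot{\zeta}, t),
\qquad
\dot{q} = \frac{\partial R}{\partial p}, \qquad
\dot{p} = -\frac{\partial R}{\partial q}, \qquad
\frac{d}{dt}\frac{\partial R}{\partial \dot{\zeta}} = \frac{\partial R}{\partial \zeta}.
```

The coordinate q obeys Hamilton-type equations in (q, p), while ζ obeys a Lagrange-type equation with R in place of L, which is exactly the hybrid character described above.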
Routhian mechanics
–
Edward John Routh, 1831–1907.
17.
Displacement (vector)
–
A displacement is a vector whose length is the shortest distance from the initial to the final position of a point P. It quantifies both the distance and direction of an imaginary motion along a straight line from the initial position to the final position of the point. The velocity so defined is then distinct from the speed, which is the time rate of change of the distance travelled along a specific path. The velocity may be equivalently defined as the time rate of change of the position vector. For motion over a given interval of time, the displacement divided by the length of the time interval defines the average velocity. In dealing with the motion of a rigid body, the term displacement may also include the rotations of the body. In this case, the displacement of a particle of the body is called linear displacement, while the rotation of the body is called angular displacement. For a position vector s that is a function of time t, the derivatives can be computed with respect to t. These derivatives have common utility in the study of kinematics, control theory, vibration sensing and other sciences. By extension, the higher order derivatives can be computed in a similar fashion. Study of these higher order derivatives can improve approximations of the original displacement function. Such higher-order terms are required in order to accurately represent the displacement function as a sum of an infinite series, enabling several analytical techniques in engineering. The fourth order derivative is called jounce.
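The contrast between displacement (the straight-line vector) and distance travelled along a path can be shown with a small example (helper names are illustrative):

```python
# Displacement: vector from initial to final position, versus distance along a path
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def norm(v):
    return sum(x * x for x in v) ** 0.5

start, end = (0.0, 0.0), (3.0, 4.0)
disp = sub(end, start)     # displacement vector (3, 4)
disp_mag = norm(disp)      # 5.0: the shortest distance from start to end
# A path that goes 3 m east then 4 m north covers more distance (7 m):
path_distance = norm((3.0, 0.0)) + norm((0.0, 4.0))
```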
Displacement (vector)
–
Displacement versus distance traveled along a path
18.
Equations of motion
–
In mathematical physics, equations of motion are equations that describe the behaviour of a physical system in terms of its motion as a function of time. The most general choice of variables are generalized coordinates, which can be any convenient variables characteristic of the physical system. The functions are defined in a Euclidean space in classical mechanics, but are replaced by curved spaces in relativity. If the dynamics of a system is known, the equations of motion are the solutions for the differential equations describing the motion of the dynamics. There are two main descriptions of motion: dynamics and kinematics. Dynamics is general, since the momenta, forces and energy of the particles are taken into account. In this instance, sometimes the term dynamics refers to the differential equations that the system satisfies, and sometimes to the solutions to those equations. However, kinematics is simpler, as it concerns only variables derived from the positions of objects and time. Equations of motion can therefore be grouped under these main classifiers of motion. In all cases, the main types of motion are translations, rotations, and oscillations. A differential equation of motion, usually identified as some physical law, is used to set up an equation for the problem. Solving the differential equation will lead to a general solution with arbitrary constants, the arbitrariness corresponding to a family of solutions. A particular solution can be obtained by setting the initial values, which fixes the values of the constants. To state this formally: in general an equation of motion M is a function of the position r of the object, its velocity, its acceleration, and time t. Euclidean vectors in 3D are denoted throughout in bold. This is equivalent to saying that an equation of motion in r is a second order ordinary differential equation in r, M(r, ṙ, r̈, t) = 0, where t is time, and each overdot denotes one time derivative. The initial conditions are given by the constant values at t = 0, r(0) and ṙ(0). The solution r(t) to the equation of motion, with specified initial values, describes the system for all times t after t = 0.
Sometimes, the equation will be linear and is more likely to be exactly solvable. In general, the equation will be non-linear and cannot be solved exactly, so a variety of approximations must be used. The solutions to nonlinear equations may show chaotic behavior depending on how sensitive the system is to the initial conditions. Despite the great strides made in the development of geometry by the Ancient Greeks and of surveying in Rome, the exposure of Europe to Arabic numerals and their ease in computations encouraged first the scholars to learn them and then the merchants, and invigorated the spread of knowledge throughout Europe. These studies led to a new body of knowledge that is now known as physics. Thomas Bradwardine, one of those scholars, extended Aristotelian quantities such as distance and velocity, and assigned intensity and extension to them. Bradwardine suggested a law involving force, resistance, distance, velocity and time. Nicholas Oresme further extended Bradwardine's arguments. For writers on kinematics before Galileo, since small time intervals could not be measured, the affinity between time and motion was obscure. They used time as a function of distance, and in free fall, greater velocity as a result of greater elevation. De Soto's comments are shockingly correct regarding the definitions of acceleration and the observation that during the violent motion of ascent acceleration would be negative.
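A second order equation of motion with specified initial values, as described above, can be integrated numerically; a minimal sketch for constant-acceleration free fall (step size and names are illustrative assumptions):

```python
# Equation of motion r'' = -g with initial conditions r(0) = 100 m, v(0) = 0,
# integrated with small time steps (semi-implicit Euler) and compared with
# the exact solution r(t) = r(0) + v(0) * t - g * t**2 / 2.
g = 9.8
r, v = 100.0, 0.0            # initial conditions at t = 0
dt, steps = 0.001, 2000      # integrate up to t = 2 s
for _ in range(steps):
    v -= g * dt              # update velocity from acceleration
    r += v * dt              # update position from velocity
exact = 100.0 - 0.5 * g * 2.0 ** 2
```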
Equations of motion
–
Kinematic quantities of a classical particle of mass m: position r, velocity v, acceleration a.
19.
Fictitious force
–
A fictitious force F does not arise from any physical interaction between two objects, but rather from the acceleration a of the non-inertial reference frame itself. As stated by Iro, such an additional force due to relative motion of two reference frames is called a pseudo-force. Assuming Newton's second law in the form F = ma, fictitious forces are always proportional to the mass m. A fictitious force on an object arises when the frame of reference used to describe the object's motion is accelerating compared to a non-accelerating frame. As a frame can accelerate in any arbitrary way, so can fictitious forces be as arbitrary. Gravitational force would also be a fictitious force based upon a field model in which particles distort spacetime due to their mass, as in general relativity. The role of fictitious forces in Newtonian mechanics is described by Tonnelat. To solve classical mechanics problems exactly in an Earth-bound reference frame, the Euler force is typically ignored because the variations in the angular velocity of the rotating Earth's surface are usually insignificant. Both of the other fictitious forces, the Coriolis force and the centrifugal force, are weak compared to most typical forces in everyday life. For example, Léon Foucault was able to show that the Coriolis force results from the Earth's rotation using the Foucault pendulum. If the Earth were to rotate a thousand times faster, people could easily get the impression that such fictitious forces are pulling on them. Other accelerations also give rise to fictitious forces, as described mathematically below. An example of the detection of a non-inertial, rotating reference frame is the precession of a Foucault pendulum. In the non-inertial frame of the Earth, the fictitious Coriolis force is necessary to explain observations; in an inertial frame outside the Earth, no such fictitious force is necessary. Figure 1 shows an accelerating car. When a car accelerates, a passenger feels like they are being pushed back into the seat.
In an inertial frame of reference attached to the road, there is no physical force moving the rider backward. However, in the rider's non-inertial reference frame attached to the accelerating car, there is a backward fictitious force. We mention two possible ways of viewing the situation to clarify the existence of the force. In Figure 1, to an observer at rest in an inertial reference frame, the car will seem to accelerate. In order for the passenger to stay inside the car, a force must be exerted on the passenger.
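The backward push described above follows from the proportionality to mass stated earlier: in the car's frame the pseudo-force is −m times the frame's acceleration. A minimal sketch (the numbers are illustrative, not from the source):

```python
# Pseudo-force in an accelerating frame: F_fict = -m * a_frame,
# directed opposite to the frame's acceleration.
def fictitious_force(mass_kg, frame_accel):
    return tuple(-mass_kg * a for a in frame_accel)

# Car accelerating forward at 3 m/s^2; a 70 kg passenger feels a 210 N
# fictitious force pushing them backward into the seat:
f = fictitious_force(70.0, (3.0, 0.0, 0.0))
```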
Fictitious force
20.
Inertial frame of reference
–
In classical physics and special relativity, an inertial frame of reference is a frame of reference that describes time and space homogeneously, isotropically, and in a time-independent manner. The physics of a system in an inertial frame have no causes external to the system. All inertial frames are in a state of constant, rectilinear motion with respect to one another; an accelerometer moving with any of them would detect zero acceleration. Measurements in one inertial frame can be converted to measurements in another by a simple transformation. In general relativity, in any region small enough for the curvature of spacetime and tidal forces to be negligible, one can find a set of inertial frames that approximately describe that region. Systems in non-inertial frames in general relativity do not have external causes, because of the principle of geodesic motion. Physical laws take the same form in all inertial frames; in a non-inertial frame, by contrast, the usual forces must be supplemented by fictitious forces. For example, a ball dropped towards the ground does not go exactly straight down because the Earth is rotating. Someone rotating with the Earth must account for the Coriolis effect (in this case thought of as a force) to predict the horizontal motion. Another example of such a fictitious force associated with rotating reference frames is the centrifugal effect, or centrifugal force. The motion of a body can only be described relative to something else: other bodies, observers, or sets of coordinates. These are called frames of reference. If the coordinates are chosen badly, the laws of motion may be more complex than necessary. For example, suppose a free body that has no external forces on it is at rest at some instant. In many coordinate systems, it would begin to move at the next instant, even though there are no forces on it; however, a frame of reference can always be chosen in which it remains stationary. Similarly, if space is not described uniformly or time independently, a coordinate system could describe the simple motion of a free body in a needlessly complicated way. Indeed, an intuitive summary of inertial frames can be given: in an inertial reference frame, the laws of mechanics take their simplest form.
In an inertial frame, Newton's first law, the law of inertia, is satisfied: any free motion has a constant magnitude and direction. The force F is the vector sum of all real forces on the particle, such as electromagnetic, gravitational, nuclear and so forth. The extra terms in the force F′ are the fictitious forces for this frame; the first extra term is the Coriolis force, the second the centrifugal force. Also, fictitious forces do not drop off with distance. For example, the centrifugal force that appears to emanate from the axis of rotation in a rotating frame increases with distance from the axis. All observers agree on the real forces, F; only non-inertial observers need fictitious forces. The laws of physics in the inertial frame are simpler because unnecessary forces are not present. In Newton's time the fixed stars were invoked as a reference frame.
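The two extra terms named above, the Coriolis and centrifugal forces in a frame rotating with angular velocity ω, can be sketched with their standard cross-product expressions (function names are illustrative):

```python
# Fictitious forces in a rotating frame:
#   F_centrifugal = -m * omega x (omega x r)
#   F_coriolis    = -2 * m * omega x v
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def centrifugal(m, omega, r):
    return tuple(-m * c for c in cross(omega, cross(omega, r)))

def coriolis(m, omega, v):
    return tuple(-2.0 * m * c for c in cross(omega, v))

# omega about the z-axis, particle on the x-axis: the centrifugal force
# points outward along x with magnitude m * omega**2 * r, growing with r.
f_near = centrifugal(1.0, (0.0, 0.0, 2.0), (1.5, 0.0, 0.0))   # 6 N outward
f_far = centrifugal(1.0, (0.0, 0.0, 2.0), (3.0, 0.0, 0.0))    # 12 N outward
```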
Inertial frame of reference
–
Figure 1: Two frames of reference moving with relative velocity. Frame S' has an arbitrary but fixed rotation with respect to frame S. They are both inertial frames provided a body not subject to forces appears to move in a straight line. If that motion is seen in one frame, it will also appear that way in the other.
21.
Mechanics of planar particle motion
–
This article describes a particle in planar motion when observed from non-inertial reference frames. The most famous examples of planar motion are related to the motion of two spheres that are gravitationally attracted to one another, and the generalization of this problem to planetary motion. See centrifugal force, the two-body problem, orbit, and Kepler's laws of planetary motion. Those problems fall in the general field of analytical dynamics, the determination of orbits from given laws of force. This article is focused more on the kinematical issues surrounding planar motion. The Lagrangian approach to fictitious forces is also introduced. Unlike real forces such as electromagnetic forces, fictitious forces do not originate from physical interactions between objects. The appearance of fictitious forces normally is associated with use of a non-inertial frame of reference. For solving problems of mechanics in non-inertial reference frames, the advice given in textbooks is to treat the fictitious forces like real forces; elaboration of this point and some citations on the subject follow. Observations can be expressed in a variety of coordinate systems, examples being Cartesian coordinates, polar coordinates and curvilinear coordinates, or as seen from a rotating frame. A time-dependent description of observations does not change the frame of reference in which the observations are made. In discussion of a particle moving in a circular orbit, in an inertial frame of reference one can identify the centripetal and tangential forces. It then seems to be no problem to switch hats and change perspective; that switch is unconscious, but real. Suppose we sit on a particle in planar motion. What analysis underlies a switch of hats to introduce fictitious forces such as the centrifugal force? To explore that question, begin in an inertial frame of reference. In Figure 1, the arc length s is the distance the particle has traveled along its path in time t. The path r with components x, y in Cartesian coordinates is described using arc length s as r(s) = [x(s), y(s)].
One way to look at the use of s is to think of the path of the particle as sitting in space, like the trail left by a skywriter, independent of time. Any position on this path is described by stating its distance s from some starting point on the path. Then an incremental displacement along the path ds is described by dr = [dx(s), dy(s)] = [x′(s), y′(s)] ds, where primes are introduced to denote derivatives with respect to s. The magnitude of this displacement is ds, showing that [x′(s)]² + [y′(s)]² = 1. (Eq. 1) This displacement is necessarily tangent to the curve at s, and the unit magnitude of the tangent and normal vectors built from x′(s) and y′(s) is a consequence of Eq. 1. As an aside, notice that the use of vectors that are not aligned along the Cartesian xy-axes does not mean we are no longer in an inertial frame; all it means is that we are using unit vectors that vary with s to describe the path. The radius of curvature is introduced completely formally as 1/ρ = dθ/ds.
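Eq. 1 and the curvature relation 1/ρ = dθ/ds can be checked on the simplest arc-length-parametrized path, a circle of radius ρ (the function names are illustrative):

```python
import math

# Circle of radius rho parametrized by arc length:
#   r(s) = (rho * cos(s / rho), rho * sin(s / rho))
# The tangent (x'(s), y'(s)) then has unit magnitude, as Eq. 1 requires,
# and the turning rate dtheta/ds equals 1 / rho.
rho = 2.0

def tangent(s):
    return (-math.sin(s / rho), math.cos(s / rho))

tx, ty = tangent(0.7)
unit_check = tx * tx + ty * ty   # should equal 1 for any s
curvature = 1.0 / rho            # dtheta/ds: theta = s / rho for this path
```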
Mechanics of planar particle motion
–
The arc length s(t) measures distance along the skywriter's trail. Image from NASA ASRS
Mechanics of planar particle motion
–
Figure 2: Two coordinate systems differing by a displacement of origin. Radial motion with constant velocity v in one frame is not radial in the other frame.
22.
Rigid body dynamics
–
Rigid-body dynamics studies the movement of systems of interconnected bodies under the action of external forces. This excludes bodies that display fluid, highly elastic, and plastic behavior. The dynamics of a rigid body system is described by the laws of kinematics and by the application of Newton's second law or its derivative form, Lagrangian mechanics. The formulation and solution of rigid body dynamics is an important tool in the simulation of mechanical systems. If a system of particles moves parallel to a fixed plane, the system is said to be constrained to planar movement. In this case, Newton's laws for a rigid system of N particles, Pi, i = 1, ..., N, simplify because there is no movement in the k direction. Determine the resultant force and torque at a reference point R to obtain F = Σᵢ₌₁ᴺ mᵢAᵢ, T = Σᵢ₌₁ᴺ (rᵢ − R) × mᵢAᵢ, where rᵢ denotes the planar trajectory of each particle. In this case, the vectors can be simplified by introducing the unit vectors eᵢ from the reference point R to a point rᵢ. Several methods to describe orientations of a rigid body in three dimensions have been developed; they are summarized in the following sections. The first attempt to represent an orientation is attributed to Leonhard Euler, who realized that by starting with a fixed reference frame and performing three rotations, he could get any other reference frame in the space. The values of these three rotations are called Euler angles. These are three angles, also known as yaw, pitch and roll, navigation angles and Cardan angles; in aerospace engineering they are usually referred to as Euler angles. Euler also realized that the composition of two rotations is equivalent to a single rotation about a different fixed axis. Therefore, the composition of the three angles has to be equal to only one rotation, whose axis was complicated to calculate until matrices were developed. Based on this fact he introduced a vectorial way to describe any rotation, with a vector on the rotation axis. Therefore, any orientation can be represented by a rotation vector that leads to it from the reference frame. When used to represent an orientation, the rotation vector is commonly called the orientation vector, or attitude vector.
A similar method, called the axis-angle representation, describes a rotation or orientation using a unit vector aligned with the rotation axis, together with the rotation angle. With the introduction of matrices the Euler theorems were rewritten; the rotations were described by orthogonal matrices referred to as rotation matrices or direction cosine matrices. When used to represent an orientation, a rotation matrix is commonly called an orientation matrix.
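Building a rotation matrix from an axis-angle pair can be sketched with Rodrigues' rotation formula, a standard construction consistent with (though not named in) the text above:

```python
import math

# Rotation matrix from an axis-angle pair (Rodrigues' rotation formula).
# The axis must be a unit vector; the angle is in radians.
def rotation_matrix(axis, angle):
    x, y, z = axis
    c, s = math.cos(angle), math.sin(angle)
    t = 1.0 - c
    return [[t * x * x + c,     t * x * y - s * z, t * x * z + s * y],
            [t * x * y + s * z, t * y * y + c,     t * y * z - s * x],
            [t * x * z - s * y, t * y * z + s * x, t * z * z + c]]

# A 90-degree rotation about the z-axis sends the x-axis to the y-axis:
R = rotation_matrix((0.0, 0.0, 1.0), math.pi / 2)
vx = [R[0][0], R[1][0], R[2][0]]   # image of (1, 0, 0)
```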
Rigid body dynamics
–
Human body modelled as a system of rigid bodies of geometrical solids. Representative bones were added for better visualization of the walking person.
Rigid body dynamics
–
Movement of each of the components of the Boulton & Watt Steam Engine (1784) is modeled by a continuous set of rigid displacements
23.
Circular motion
–
In physics, circular motion is a movement of an object along the circumference of a circle or rotation along a circular path. It can be uniform, with constant angular rate of rotation and constant speed, or non-uniform. The rotation around a fixed axis of a three-dimensional body involves circular motion of its parts. The equations of motion describe the movement of the center of mass of a body. Since the object's velocity vector is constantly changing direction, the moving object is undergoing acceleration by a centripetal force in the direction of the center of rotation; without this acceleration, the object would move in a straight line.

In physics, uniform circular motion describes the motion of a body traversing a circular path at constant speed. Since the body describes circular motion, its distance from the axis of rotation remains constant at all times. Though the body's speed is constant, its velocity is not: velocity, a vector quantity, depends on both the body's speed and its direction of travel. This changing velocity indicates the presence of an acceleration of constant magnitude, which is in turn produced by a force that is also constant in magnitude. For motion in a circle of radius r, the circumference of the circle is C = 2πr. The axis of rotation is shown as a vector ω perpendicular to the plane of the orbit with a magnitude ω = dθ/dt; the direction of ω is chosen using the right-hand rule. Likewise, the acceleration is given by a = ω × v = ω × (ω × r), a vector perpendicular to both ω and v, of magnitude ω|v| = ω²r and directed exactly opposite to r.

In the simplest case the speed, mass and radius are constant. Consider a body of one kilogram, moving in a circle of radius one metre, with an angular velocity of one radian per second. The speed is one metre per second; the inward acceleration, v²/r, is one metre per second squared, so the body is subject to a force of one newton (one kilogram metre per second squared). The momentum of the body is one kg·m·s⁻¹ and the moment of inertia is one kg·m². The angular momentum is one kg·m²·s⁻¹ and the kinetic energy is 1/2 joule. The circumference of the orbit is 2π metres, and the period of the motion is 2π seconds per turn.

In polar coordinates it is convenient to introduce the unit vector û_θ orthogonal to the radial unit vector û_R; it is customary to orient û_θ to point in the direction of travel along the orbit. The velocity is the derivative of the displacement: v = dr/dt = (dR/dt) û_R + R dû_R/dt. Because the radius of the circle is constant, the radial component of the velocity is zero.
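The one-kilogram worked example above can be checked directly from the standard formulas; this is just the arithmetic from the text, written out.

```python
import math

# The one-kilogram example from the text: m = 1 kg, r = 1 m, omega = 1 rad/s.
m, r, omega = 1.0, 1.0, 1.0

v = omega * r            # speed: 1 m/s
a = v**2 / r             # centripetal acceleration: 1 m/s^2
F = m * a                # centripetal force: 1 N
p = m * v                # momentum: 1 kg*m/s
I = m * r**2             # moment of inertia: 1 kg*m^2
L = I * omega            # angular momentum: 1 kg*m^2/s
KE = 0.5 * m * v**2      # kinetic energy: 0.5 J
C = 2 * math.pi * r      # circumference: 2*pi m
T = 2 * math.pi / omega  # period: 2*pi s per turn

print(v, a, F, p, I, L, KE, C, T)
```
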
Circular motion
–
Figure 1: Velocity v and acceleration a in uniform circular motion at angular rate ω; the speed is constant, but the velocity is always tangent to the orbit; the acceleration has constant magnitude, but always points toward the center of rotation
24.
Centripetal force
–
A centripetal force is a force that makes a body follow a curved path. Its direction is orthogonal to the motion of the body. Isaac Newton described it as "a force by which bodies are drawn or impelled, or in any way tend, towards a point as to a centre". In Newtonian mechanics, gravity provides the centripetal force responsible for astronomical orbits. One common example involving centripetal force is the case in which a body moves with uniform speed along a circular path; the centripetal force is directed at right angles to the motion and also along the radius towards the centre of the circular path. The mathematical description was derived in 1659 by the Dutch physicist Christiaan Huygens. The direction of the force is toward the center of the circle in which the object is moving, or the osculating circle. The speed in the formula is squared, so twice the speed needs four times the force; the inverse relationship with the radius of curvature shows that half the radial distance requires twice the force. The formula can also be expressed using the orbital period T for one revolution of the circle. The rope example is an example involving a pull force; the centripetal force can also be supplied as a push force. Newton's idea of a centripetal force corresponds to what is nowadays referred to as a central force. Another example of centripetal force arises in the helix that is traced out when a particle moves in a uniform magnetic field in the absence of other external forces. In this case, the magnetic force is the centripetal force that acts towards the helix axis.

Below are three examples of increasing complexity, with derivations of the formulas governing velocity and acceleration. Uniform circular motion refers to the case of constant rate of rotation; here are two approaches to describing this case. Assume uniform circular motion, which requires three things: the object moves only on a circle; the radius of the circle r does not change in time; and the object moves with constant angular velocity ω around the circle, so that θ = ωt where t is time. Now find the velocity v and acceleration a of the motion by taking derivatives of position with respect to time; consequently, a = −ω²r. The negative sign shows that the acceleration is pointed towards the center of the circle, hence it is called centripetal. While objects naturally follow a straight path, this centripetal acceleration describes the circular motion path caused by a centripetal force. The image at right shows the relationships for uniform circular motion. In this subsection, dθ/dt is assumed constant, independent of time; consequently, dr/dt = lim(Δt→0) [r(t + Δt) − r(t)]/Δt = dℓ/dt.
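The result a = −ω²r can be checked numerically: differentiating the position (r cos ωt, r sin ωt) twice with a central difference recovers an acceleration equal to −ω² times the position. The radius, rate, time, and step size below are arbitrary.

```python
import math

# Numerical check of the derivation: for theta = omega*t, the position
# (r*cos(omega*t), r*sin(omega*t)) has acceleration -omega^2 * position,
# a vector of magnitude omega^2 * r pointing toward the centre.
r, omega, t, h = 2.0, 3.0, 0.7, 1e-4   # arbitrary values; h is the step

def pos(t):
    return (r * math.cos(omega * t), r * math.sin(omega * t))

# Central-difference second derivative of each coordinate.
p0, pm, pp = pos(t), pos(t - h), pos(t + h)
a = tuple((pp[i] - 2 * p0[i] + pm[i]) / h**2 for i in range(2))

expected = tuple(-omega**2 * p0[i] for i in range(2))
print(a, expected)  # the two agree to numerical precision
```
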
Centripetal force
–
A body experiencing uniform circular motion requires a centripetal force, towards the axis as shown, to maintain its circular path.
25.
Centrifugal force
–
In Newtonian mechanics, the centrifugal force is an inertial force directed away from the axis of rotation that appears to act on all objects when viewed in a rotating reference frame; it does not appear when the same objects are analyzed in an inertial coordinate system. The term has also been used for the force that is a reaction to a centripetal force. The centrifugal force is an outward force apparent only in a rotating reference frame.

All measurements of position and velocity must be made relative to some frame of reference. An inertial frame of reference is one that is not accelerating. The use of an inertial frame of reference, which will be the case for all elementary calculations, is often not explicitly stated but may generally be assumed unless stated otherwise. In terms of an inertial frame of reference, the centrifugal force does not exist; all calculations can be performed using only Newton's laws of motion. In its current usage the term centrifugal force has no meaning in an inertial frame. In an inertial frame, an object that has no force acting on it travels in a straight line. When measurements are made with respect to a rotating reference frame, however, this is no longer true: if it is desired to apply Newton's laws in the rotating frame, it is necessary to introduce new, fictitious forces.

Consider a stone being whirled round on a string, in a horizontal plane. The only real force acting on the stone in the horizontal plane is the tension in the string; there are no other forces acting on the stone, so there is a net force on the stone in the horizontal plane. In an inertial frame of reference, were it not for this net force acting on the stone, the stone would travel in a straight line. In order to keep the stone moving in a circular path, this force, known as the centripetal force, must be continuously applied to the stone; as soon as it is removed the stone moves in a straight line. In this inertial frame, the concept of centrifugal force is not required, as all motion can be properly described using only real forces and Newton's laws of motion.

In a frame of reference rotating with the stone around the same axis as the stone, the stone is at rest. However, the tension in the string is still acting on the stone; if Newton's laws were applied in their usual form, the stone would accelerate in the direction of the net applied force, towards the axis of rotation, which it does not do. The centrifugal force, a fictitious outward force, is introduced to account for this: with this new force included, the net force on the stone is zero, and with the addition of this extra inertial or fictitious force Newton's laws can be applied in the rotating frame as if it were an inertial frame.
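The whirling-stone bookkeeping can be made concrete with hypothetical numbers: in the rotating frame the fictitious centrifugal force mω²r exactly cancels the string tension, so the stone is in equilibrium in that frame.

```python
# Sketch of the whirling-stone example (hypothetical numbers).
# Inertial frame: the string tension m*omega^2*r is the net (centripetal) force.
# Rotating frame: add the fictitious centrifugal force m*omega^2*r outward;
# the net force on the stone is then zero, matching its rest state there.
m, omega, r = 0.2, 8.0, 1.5       # kg, rad/s, m

tension = m * omega**2 * r        # inward (centripetal) force, in newtons
centrifugal = m * omega**2 * r    # outward fictitious force in rotating frame

net_rotating_frame = centrifugal - tension
print(tension, net_rotating_frame)  # tension in newtons; zero net in rotating frame
```
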
Centrifugal force
–
The interface of two immiscible liquids rotating around a vertical axis is an upward-opening circular paraboloid.
26.
Reactive centrifugal force
–
In classical mechanics, a reactive centrifugal force forms part of an action–reaction pair with a centripetal force. In accordance with Newton's first law of motion, an object moves in a straight line in the absence of any external forces acting on it; a curved path may however ensue when a physical force acts on it. By Newton's third law, the object then exerts an equal and opposite reaction on whatever supplies the centripetal force. The two forces will only have the same magnitude in the special cases where circular motion arises and where the axis of rotation is the origin of the rotating frame of reference. It is this reaction force that is the subject of this article. Any force directed away from a center can be called centrifugal; centrifugal simply means directed outward from the center, just as centripetal means directed toward the center. The reactive centrifugal force discussed in this article is not the same thing as the centrifugal pseudoforce, which is usually what is meant by the term centrifugal force.

The figure at right shows a ball in circular motion held to its path by a massless string tied to an immovable post; the figure is an example of a real force. In this system a centripetal force upon the ball provided by the string maintains the circular motion. In this model, the string is assumed massless and the rotational motion frictionless. The string transmits the reactive centrifugal force from the ball to the fixed post, pulling upon the post. Again according to Newton's third law, the post exerts a reaction upon the string, labeled the post reaction; the two forces upon the string are equal and opposite, exerting no net force upon the string but placing the string under tension. It should be noted, however, that the reason the post appears to be immovable is that it is fixed to the earth; if the rotating ball were tethered to the mast of a boat, for example, the anchoring point would no longer be effectively immovable. Even though the reactive centrifugal force is rarely used in analyses in the physics literature, the concept is applied within some mechanical engineering contexts.

An example of this kind of engineering concept is an analysis of the stresses within a rapidly rotating turbine blade. The blade can be treated as a stack of layers going from the axis out to the edge of the blade. Each layer exerts an outward force on the immediately adjacent, radially inward layer; at the same time the inner layer exerts a centripetal force on the middle layer, while the outer layer exerts an elastic centrifugal force on it. It is the stresses in the blade and their causes that mainly interest mechanical engineers in this situation. Another example of a rotating device in which a reactive centrifugal force can be identified and used to describe the system behavior is the centrifugal clutch, which is used in small engine-powered devices such as saws and go-karts.
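The layer picture of the turbine blade can be sketched as a sum. The blade parameters below are made up; the point is that the tension a layer must carry is the accumulated centripetal force needed by everything radially outward of it, which for a uniform blade gives (1/2)ρAω²L² at the root.

```python
# Sketch of the layer analysis (hypothetical blade parameters): summing
# the centripetal force rho*A*omega^2*x*dx needed by each thin layer at
# radius x gives the tension carried at the root, (1/2)*rho*A*omega^2*L^2.
rho, A, omega, L, n = 8000.0, 1e-4, 500.0, 0.5, 100000
# density kg/m^3, cross-section m^2, rad/s, blade length m, layer count

dx = L / n
# Tension at the root: integral of rho*A*omega^2*x dx from 0 to L
# approximated with the midpoint rule over n thin layers.
root_tension = sum(rho * A * omega**2 * (i + 0.5) * dx * dx for i in range(n))

analytic = 0.5 * rho * A * omega**2 * L**2
print(root_tension, analytic)  # the layer sum matches the closed form
```
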
Reactive centrifugal force
–
A two-shoe centrifugal clutch. The motor spins the input shaft that makes the shoes go around, and the outer drum (removed) turns the output power shaft.
27.
Coriolis force
–
In physics, the Coriolis force is an inertial force that acts on objects that are in motion relative to a rotating reference frame. In a reference frame with clockwise rotation, the force acts to the left of the motion of the object; in one with anticlockwise rotation, the force acts to the right. Early in the 20th century, the term Coriolis force began to be used in connection with meteorology. Deflection of an object due to the Coriolis force is called the Coriolis effect.

Newton's laws of motion describe the motion of an object in an inertial frame of reference. When Newton's laws are transformed to a rotating frame of reference, the Coriolis force and the centrifugal force appear. Both forces are proportional to the mass of the object; the Coriolis force is proportional to the rotation rate, and the centrifugal force is proportional to its square. The Coriolis force acts in a direction perpendicular to the rotation axis and to the velocity of the body in the rotating frame. The centrifugal force acts outwards in the radial direction and is proportional to the distance of the body from the axis of the rotating frame. These additional forces are termed inertial forces, fictitious forces or pseudo forces; they allow the application of Newton's laws to a rotating system. They are correction factors that do not exist in a non-accelerating or inertial reference frame.

A commonly encountered rotating reference frame is the Earth, and the Coriolis effect is caused by the rotation of the Earth. Such motions are constrained by the surface of the Earth, so only the horizontal component of the Coriolis force is generally important. This force causes moving objects on the surface of the Earth to be deflected to the right in the Northern Hemisphere. The horizontal deflection effect is greater near the poles, since the effective rotation rate about a local vertical axis is largest there, and smallest at the equator. This effect is responsible for the rotation of large cyclones.

Riccioli, Grimaldi, and Dechales all described the effect as part of an argument against the heliocentric system of Copernicus: they argued that the Earth's rotation should create the effect, and its apparent absence counted against heliocentrism. The effect was described in the tidal equations of Pierre-Simon Laplace in 1778. Gaspard-Gustave Coriolis published a paper in 1835 on the energy yield of machines with rotating parts; that paper considered the supplementary forces that are detected in a rotating frame of reference. Coriolis divided these forces into two categories.
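The two inertial forces can be computed from their standard formulas, a_Coriolis = −2Ω × v and a_centrifugal = −Ω × (Ω × r). The Earth-like rotation rate and the velocity and position below are illustrative only.

```python
# Sketch: Coriolis and centrifugal accelerations in a frame rotating at
# Omega about the z axis. Note a_cor scales with the rotation rate and
# a_cf with its square, as stated in the text.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

Omega = (0.0, 0.0, 7.2921e-5)   # Earth-like rotation rate, rad/s
v = (10.0, 0.0, 0.0)            # velocity in the rotating frame, m/s
r = (0.0, 6.371e6, 0.0)         # position relative to the axis, m

a_cor = tuple(-2 * c for c in cross(Omega, v))           # -2 Omega x v
a_cf = tuple(-c for c in cross(Omega, cross(Omega, r)))  # -Omega x (Omega x r)
print(a_cor)  # deflection perpendicular to both Omega and v
print(a_cf)   # points outward, away from the rotation axis
```
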
Coriolis force
–
This low-pressure system over Iceland spins counter-clockwise due to balance between the Coriolis force and the pressure gradient force.
Coriolis force
–
Coordinate system at latitude φ with x -axis east, y -axis north and z -axis upward (that is, radially outward from center of sphere).
Coriolis force
–
Cloud formations in a famous image of Earth from Apollo 17 make similar circulation directly visible
Coriolis force
–
A carousel is rotating counter-clockwise. Left panel: a ball is tossed by a thrower at 12:00 o'clock and travels in a straight line to the center of the carousel. While it travels, the thrower circles in a counter-clockwise direction. Right panel: The ball's motion as seen by the thrower, who now remains at 12:00 o'clock, because there is no rotation from their viewpoint.
28.
Angular velocity
–
In physics, angular velocity is the rate of change of the angular position of a rotating body. This rate can be measured in the SI unit of angular velocity, radians per second, or in terms of degrees per second, degrees per hour, etc. Angular velocity is usually represented by the symbol omega (ω). The direction of the angular velocity vector is perpendicular to the plane of rotation, in a direction that is usually specified by the right-hand rule.

The angular velocity of a particle is measured around or relative to a point, called the origin. As shown in the diagram, if a line is drawn from the origin to the particle, then the velocity of the particle has a component along the radius and a component perpendicular to it. If there is no radial component, then the particle moves in a circle; on the other hand, if there is no cross-radial component, it moves along a straight line from the origin. A radial motion produces no change in the direction of the particle relative to the origin, so, for the purpose of finding the angular velocity, the radial component can be ignored; the rotation is completely produced by the perpendicular motion around the origin. The angular velocity in two dimensions is a pseudoscalar, a quantity that changes its sign under a parity inversion. The positive direction of rotation is taken, by convention, to be in the direction towards the y axis from the x axis; if the parity is inverted, but the orientation of a rotation is not, then the sign of the angular velocity changes. There are three types of angular velocity involved in the movement on an ellipse, corresponding to the three anomalies.

In three dimensions, the angular velocity becomes a bit more complicated. The angular velocity in this case is generally thought of as a vector, or more precisely, a pseudovector. It now has not only a magnitude, but a direction as well: the magnitude is the angular speed, and the direction describes the axis of rotation that Euler's rotation theorem guarantees must exist. The right-hand rule indicates the direction of the angular velocity pseudovector. Let u be a vector along the instantaneous rotation axis.

Angular velocities satisfy the definition of a vector space; the only property that presents difficulties to prove is the commutativity of the addition. This can be proven from the fact that the angular velocity tensor W is skew-symmetric; therefore, R = e^(Wt) is a rotation matrix, and in a time dt it is an infinitesimal rotation matrix that can be expanded as R = I + W·dt + (1/2)(W·dt)² + ….

A rotating frame, in which each vector has constant module, is a particular case of the moving particle analyzed above. Though it is just a special case, it is a very important one for its relationship with the study of rigid bodies. There are two ways to describe the angular velocity of a rotating frame: the angular velocity vector and the angular velocity tensor. Both entities are related, and they can be calculated from each other. Consistently with the general definition, the angular velocity of a frame is defined as the angular velocity of each of the three vectors of the frame.
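The expansion R = I + W·dt + (1/2)(W·dt)² + … can be verified for a small rotation about the z axis; the rate and step below are chosen arbitrarily. Truncating the series at second order reproduces the cosine and sine entries of the exact rotation matrix to high accuracy.

```python
import math

# Sketch: the angular velocity tensor W is skew-symmetric, and R = e^(W*t)
# is a rotation matrix. For rotation about the z axis at rate w, truncating
# R ~ I + W*dt + (W*dt)^2/2 at small dt reproduces cos/sin to O(dt^3).
w = 0.3        # rad/s about the z axis (arbitrary)
dt = 1e-3      # small time step (arbitrary)

W = [[0.0, -w, 0.0],
     [w, 0.0, 0.0],
     [0.0, 0.0, 0.0]]       # skew-symmetric: W[i][j] == -W[j][i]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

I = [[float(i == j) for j in range(3)] for i in range(3)]
Wdt = [[W[i][j] * dt for j in range(3)] for i in range(3)]
W2 = matmul(Wdt, Wdt)
R = [[I[i][j] + Wdt[i][j] + 0.5 * W2[i][j] for j in range(3)] for i in range(3)]

print(R[0][0], math.cos(w * dt))  # agree to about O(dt^3)
print(R[1][0], math.sin(w * dt))
```
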
Angular velocity
–
The angular velocity of the particle at P with respect to the origin O is determined by the perpendicular component of the velocity vector v.
29.
Galileo Galilei
–
Galileo Galilei was an Italian polymath: astronomer, physicist, engineer, philosopher, and mathematician. He played a major role in the scientific revolution of the seventeenth century. Galileo also worked in applied science and technology, inventing an improved military compass. Galileo's championing of heliocentrism and Copernicanism was controversial during his lifetime, when most subscribed to either geocentrism or the Tychonic system. He met with opposition from astronomers, who doubted heliocentrism because of the absence of an observed stellar parallax. He was tried by the Inquisition, found "vehemently suspect of heresy", and spent the rest of his life under house arrest. He has been called the father of observational astronomy, the father of modern physics, the father of the scientific method, and the father of science.

Galileo was born in Pisa, Italy, on 15 February 1564, the first of six children of Vincenzo Galilei, a famous lutenist, composer, and music theorist, and Giulia. Three of Galileo's five siblings survived infancy. The youngest, Michelangelo, also became a noted lutenist and composer, although he contributed to financial burdens during Galileo's young adulthood: Michelangelo was unable to contribute his fair share of their father's promised dowries to their brothers-in-law, who would later attempt to seek legal remedies for payments due, and he would also occasionally have to borrow funds from Galileo to support his musical endeavours. These financial burdens may have contributed to Galileo's early drive to develop inventions that would bring him additional income. When Galileo Galilei was eight, his family moved to Florence, and he was then educated in the Vallombrosa Abbey, about 30 km southeast of Florence. His ancestor Galileo Bonaiuti was buried in the church of the Basilica of Santa Croce in Florence.

It was common for mid-sixteenth century Tuscan families to name the eldest son after the parents' surname; hence, Galileo Galilei was not necessarily named after his ancestor Galileo Bonaiuti. The Italian male given name Galileo derives from the Latin Galilaeus, meaning "of Galilee". The biblical roots of Galileo's name and surname were to become the subject of a famous pun: in 1614, during the Galileo affair, one of Galileo's opponents made a point of quoting Acts 1:11, "Ye men of Galilee, why stand ye gazing up into heaven?"

Despite being a genuinely pious Roman Catholic, Galileo fathered three children out of wedlock with Marina Gamba: two daughters, Virginia and Livia, and a son, Vincenzo. The girls' only worthy alternative was the religious life; both were accepted by the convent of San Matteo in Arcetri and remained there for the rest of their lives. Virginia took the name Maria Celeste upon entering the convent; she died on 2 April 1634, and is buried with Galileo at the Basilica of Santa Croce, Florence. Livia took the name Sister Arcangela and was ill for most of her life. Vincenzo was later legitimised as the legal heir of Galileo and married Sestilia Bocchineri.
Galileo Galilei
–
Portrait of Galileo Galilei by Giusto Sustermans
Galileo Galilei
–
Galileo's beloved elder daughter, Virginia (Sister Maria Celeste), was particularly devoted to her father. She is buried with him in his tomb in the Basilica of Santa Croce, Florence.
Galileo Galilei
–
Galileo Galilei. Portrait by Leoni
Galileo Galilei
–
Cristiano Banti 's 1857 painting Galileo facing the Roman Inquisition
30.
Isaac Newton
–
His book Philosophiæ Naturalis Principia Mathematica, first published in 1687, laid the foundations of classical mechanics. Newton also made contributions to optics, and he shares credit with Gottfried Wilhelm Leibniz for developing the infinitesimal calculus. Newton's Principia formulated the laws of motion and universal gravitation that dominated scientists' view of the universe for the next three centuries. Newton's work on light was collected in his influential book Opticks. He also formulated a law of cooling and made the first theoretical calculation of the speed of sound. Newton was a fellow of Trinity College and the second Lucasian Professor of Mathematics at the University of Cambridge. Politically and personally tied to the Whig party, Newton served two brief terms as Member of Parliament for the University of Cambridge, in 1689–90 and 1701–02. He was knighted by Queen Anne in 1705, and he spent the last three decades of his life in London, serving as Warden and Master of the Royal Mint.

His father, also named Isaac Newton, had died three months before he was born. Born prematurely, he was a small child; his mother Hannah Ayscough reportedly said that he could have fit inside a quart mug. When Newton was three, his mother remarried and went to live with her new husband, the Reverend Barnabas Smith, leaving her son in the care of his maternal grandmother. Newton's mother had three children from her second marriage. From the age of twelve until he was seventeen, Newton was educated at The King's School, Grantham, which taught Latin and Greek. He was removed from school, and by October 1659 he was to be found at Woolsthorpe-by-Colsterworth. Henry Stokes, master at The King's School, persuaded his mother to send him back to school so that he might complete his education. Motivated partly by a desire for revenge against a schoolyard bully, he became the top-ranked student.

In June 1661, he was admitted to Trinity College, Cambridge. He started as a subsizar, paying his way by performing valet's duties, until he was awarded a scholarship in 1664, which guaranteed him four more years until he could get his M.A. He set down in his notebook a series of "Quaestiones" about mechanical philosophy as he found it. In 1665, he discovered the generalised binomial theorem and began to develop a mathematical theory that later became calculus, soon after he had obtained his B.A. degree in August 1665. In April 1667, he returned to Cambridge and in October was elected as a fellow of Trinity. Fellows were required to become ordained priests, although this was not enforced in the Restoration years. However, by 1675 the issue could not be avoided, and by then his unconventional views stood in the way. Nevertheless, Newton managed to avoid ordination by means of a special permission from Charles II, and he was elected a Fellow of the Royal Society in 1672. Newton's work has been said to distinctly advance every branch of mathematics then studied. His work on the subject usually referred to as fluxions or calculus, seen in a manuscript of October 1666, is now published among Newton's mathematical papers.
Isaac Newton
–
Portrait of Isaac Newton in 1689 (age 46) by Godfrey Kneller
Isaac Newton
–
Newton in a 1702 portrait by Godfrey Kneller
Isaac Newton
–
Isaac Newton (Bolton, Sarah K. Famous Men of Science. NY: Thomas Y. Crowell & Co., 1889)
Isaac Newton
–
Replica of Newton's second Reflecting telescope that he presented to the Royal Society in 1672
31.
Edmond Halley
–
Edmond Halley, FRS, was an English astronomer, geophysicist, mathematician, meteorologist, and physicist who is best known for computing the orbit of Halley's Comet. He was the second Astronomer Royal in Britain, succeeding John Flamsteed. Halley was born in Haggerston, in east London. His father, Edmond Halley Sr., came from a Derbyshire family and was a wealthy soap-maker in London. As a child, Halley was very interested in mathematics. He studied at St Paul's School, and from 1673 at The Queen's College, Oxford. While still an undergraduate, Halley published papers on the Solar System and sunspots. While at Oxford, Halley was introduced to John Flamsteed; influenced by Flamsteed's project to compile a catalog of northern stars, Halley proposed to do the same for the Southern Hemisphere.

In 1676, Halley visited the south Atlantic island of Saint Helena. While there he observed a transit of Mercury, and realised that a similar transit of Venus could be used to determine the absolute size of the Solar System. He returned to England in May 1678, and in the following year he went to Danzig on behalf of the Royal Society to help resolve a dispute: because astronomer Johannes Hevelius did not use a telescope, his observations had been questioned by Robert Hooke. Halley stayed with Hevelius, and he observed and verified the quality of Hevelius's observations. In 1679, Halley published the results from his observations on St. Helena as Catalogus Stellarum Australium, which included details of 341 southern stars. These additions to contemporary star maps earned him comparison with Tycho Brahe, e.g. as "the southern Tycho", as described by Flamsteed. Halley was awarded his M.A. degree at Oxford. In 1686, Halley published the second part of the results from his Helenian expedition, being a paper and chart on trade winds and monsoons.

The symbols he used to represent trailing winds still exist in most modern day weather chart representations. In this article he identified solar heating as the cause of atmospheric motions. He also established the relationship between barometric pressure and height above sea level. His charts were an important contribution to the field of information visualisation. Halley spent most of his time on lunar observations, but was also interested in the problems of gravity. One problem that attracted his attention was the proof of Kepler's laws of planetary motion. Halley's first calculations with comets were thereby for the orbit of comet Kirch, based on Flamsteed's observations in 1680-1. Although he was later to calculate the orbit of the comet of 1682, his calculations for comet Kirch were less successful: they indicated a periodicity of 575 years, thus appearing in the years 531 and 1106, whereas it is now known to have an orbital period of circa 10,000 years. In 1691, Halley built a diving bell, a device in which the atmosphere was replenished by way of weighted barrels of air sent down from the surface.
Edmond Halley
–
Bust of Halley (Royal Observatory, Greenwich)
Edmond Halley
–
Portrait by Thomas Murray, c. 1687
Edmond Halley
–
Halley's grave
Edmond Halley
–
Plaque in South Cloister of Westminster Abbey
32.
Jean le Rond d'Alembert
–
Jean-Baptiste le Rond d'Alembert was a French mathematician, mechanician, physicist, philosopher, and music theorist. Until 1759 he was also co-editor with Denis Diderot of the Encyclopédie. D'Alembert's formula for obtaining solutions to the wave equation is named after him, and the wave equation is sometimes referred to as d'Alembert's equation.

Born in Paris, d'Alembert was the son of the writer Claudine Guérin de Tencin and the chevalier Louis-Camus Destouches. Destouches was abroad at the time of d'Alembert's birth, and days after the birth his mother left him on the steps of the Saint-Jean-le-Rond de Paris church. According to custom, he was named after the patron saint of the church. D'Alembert was placed in an orphanage for foundling children, but his father found him and placed him with the wife of a glazier, Madame Rousseau. Destouches secretly paid for the education of Jean le Rond, but did not want his paternity officially recognized. D'Alembert first attended a private school; the chevalier Destouches left d'Alembert an annuity of 1200 livres on his death in 1726. Under the influence of the Destouches family, at the age of twelve d'Alembert entered the Jansenist Collège des Quatre-Nations. Here he studied philosophy, law, and the arts, graduating as baccalauréat en arts in 1735. In his later life, d'Alembert scorned the Cartesian principles he had been taught by the Jansenists: physical promotion, innate ideas and the vortices. The Jansenists steered d'Alembert toward an ecclesiastical career, attempting to deter him from pursuits such as poetry. Theology was, however, rather unsubstantial fodder for d'Alembert; he entered law school for two years, and was nominated avocat in 1738. He was also interested in medicine and mathematics. Jean was first registered under the name Daremberg, but later changed it to d'Alembert. The name d'Alembert was proposed by Johann Heinrich Lambert for a moon of Venus.

In July 1739 he made his first contribution to the field of mathematics, commenting on L'analyse démontrée, at the time a standard work, which d'Alembert himself had used to study the foundations of mathematics. D'Alembert was also a Latin scholar of note and worked in the latter part of his life on a superb translation of Tacitus. In 1740, he submitted his second scientific work, from the field of fluid mechanics, Mémoire sur la réfraction des corps solides; in this work d'Alembert theoretically explained refraction. In 1741, after several failed attempts, d'Alembert was elected into the Académie des Sciences.
Jean le Rond d'Alembert
–
Jean-Baptiste le Rond d'Alembert, pastel by Maurice Quentin de La Tour
33.
William Rowan Hamilton
–
Sir William Rowan Hamilton PRIA FRSE was an Irish physicist, astronomer, and mathematician, who made important contributions to classical mechanics, optics, and algebra. His studies of mechanical and optical systems led him to discover new mathematical concepts and his best known contribution to mathematical physics is the reformulation of Newtonian mechanics, now called Hamiltonian mechanics. This work has proven central to the study of classical field theories such as electromagnetism. In pure mathematics, he is best known as the inventor of quaternions, Hamilton is said to have shown immense talent at a very early age. Astronomer Bishop Dr. John Brinkley remarked of the 18-year-old Hamilton, This young man, I do not say will be, but is, Hamilton also invented icosian calculus, which he used to investigate closed edge paths on a dodecahedron that visit each vertex exactly once. Hamilton was the fourth of nine born to Sarah Hutton and Archibald Hamilton. Hamiltons father, who was from Dunboyne, worked as a solicitor, by the age of three, Hamilton had been sent to live with his uncle James Hamilton, a graduate of Trinity College who ran a school in Talbots Castle in Trim, Co. His uncle soon discovered that Hamilton had an ability to learn languages. At the age of seven he had made very considerable progress in Hebrew. These included the classical and modern European languages, and Persian, Arabic, Hindustani, Sanskrit, in September 1813 the American calculating prodigy Zerah Colburn was being exhibited in Dublin. Colburn was 9, an older than Hamilton. The two were pitted against each other in a mental arithmetic contest with Colburn emerging the clear victor, in reaction to his defeat, Hamilton dedicated less time to studying languages and more time to studying mathematics. Hamilton was part of a small but well-regarded school of mathematicians associated with Trinity College, Dublin, which he entered at age 18. 
He studied both classics and mathematics, and was appointed Professor of Astronomy in 1827, prior to his taking up residence at Dunsink Observatory, where he spent the rest of his life. Hamilton made important contributions to optics and to classical mechanics. His first discovery was in an early paper that he communicated in 1823 to Dr. Brinkley, who presented it under the title of Caustics in 1824 to the Royal Irish Academy. It was referred as usual to a committee; while their report acknowledged its novelty and value, they recommended further development and simplification before publication. Between 1825 and 1828 the paper grew to an immense size, but it also became more intelligible, and the features of the new method were now easily seen. Until this period Hamilton himself seems not to have fully understood either the nature or the importance of his methods in optics; it was in the third supplement to his Systems of Rays, read in 1832, that he first predicted the existence of conical refraction.
William Rowan Hamilton
–
Quaternion Plaque on Broom Bridge
William Rowan Hamilton
–
William Rowan Hamilton (1805–1865)
William Rowan Hamilton
–
Irish commemorative coin celebrating the 200th Anniversary of his birth.
34.
Augustin-Louis Cauchy
–
Baron Augustin-Louis Cauchy FRS FRSE was a French mathematician who made pioneering contributions to analysis. He was one of the first to state and prove theorems of calculus rigorously, and he almost singlehandedly founded complex analysis and the study of permutation groups in abstract algebra. A profound mathematician, Cauchy had a strong influence over his contemporaries. His writings range widely in mathematics and mathematical physics; more concepts and theorems have been named for Cauchy than for any other mathematician. Cauchy was a prolific writer; he wrote approximately eight hundred research articles. Cauchy was the son of Louis François Cauchy and Marie-Madeleine Desestre. Cauchy married Aloise de Bure in 1818. She was a relative of the publisher who published most of Cauchy's works; by her he had two daughters, Marie Françoise Alicia and Marie Mathilde. Cauchy's father was a high official in the Parisian police of the Ancien Régime. He lost his position because of the French Revolution, which broke out one month before Augustin-Louis was born. The Cauchy family survived the revolution and the following Reign of Terror by escaping to Arcueil, where Cauchy received his first education from his father. After the execution of Robespierre, it was safe for the family to return to Paris; there Louis-François Cauchy found himself a new bureaucratic job and quickly moved up the ranks. When Napoleon Bonaparte came to power, Louis-François Cauchy was further promoted. The famous mathematician Lagrange was also a friend of the Cauchy family, and on Lagrange's advice Augustin-Louis was enrolled in the École Centrale du Panthéon. Most of the curriculum consisted of classical languages; the young and ambitious Cauchy, being a brilliant student, won many prizes in Latin and the humanities. In spite of these successes, Augustin-Louis chose an engineering career. 
In 1805 he placed second out of 293 applicants on the entrance examination for the École Polytechnique; one of the main purposes of this school was to give future civil and military engineers a high-level scientific and mathematical education. The school functioned under military discipline, which caused the young Cauchy some difficulty in adapting. Nevertheless, he finished the Polytechnique in 1807, at the age of 18, and went on to the École des Ponts et Chaussées, graduating in engineering with the highest honors. After finishing school in 1810, Cauchy accepted a job as an engineer in Cherbourg. Cauchy's first two manuscripts were accepted; the third one was rejected.
Augustin-Louis Cauchy
–
Cauchy around 1840. Lithography by Zéphirin Belliard after a painting by Jean Roller.
Augustin-Louis Cauchy
–
The title page of a textbook by Cauchy.
Augustin-Louis Cauchy
–
Leçons sur le calcul différentiel, 1829
35.
Net force
–
In physics, net force is the overall force acting on an object. In order to calculate the net force, the body is isolated and the forces acting on it are identified. It is always possible to determine the torque associated with a point of application of a net force so that it maintains the movement of the object under the original system of forces. With its associated torque, the net force becomes the resultant force and has the same effect on the rotational motion of the object as all actual forces taken together. It is possible for a system of forces to define a torque-free resultant force; in this case, the net force, when applied at the proper line of action, has the same effect on the body as all of the forces at their points of application. It is not always possible to find a torque-free resultant force. The sum of forces acting on a particle is called the total force or the net force. The net force is a single force that replaces the effect of the original forces on the particle's motion. It gives the particle the same acceleration as all the actual forces together, as described by Newton's second law of motion. Force is a vector quantity, which means that it has a magnitude and a direction. Graphically, a force is represented as a line segment from its point of application A to a point B which defines its direction; the length of the segment AB represents the magnitude of the force. Vector calculus was developed in the late 1800s and early 1900s; the parallelogram rule used for the addition of forces, however, dates from antiquity and is noted explicitly by Galileo and Newton. The diagram shows the addition of the forces F1 and F2; the sum F of the two forces is drawn as the diagonal of a parallelogram defined by the two forces. Forces applied to a body can have different points of application. Forces are bound vectors and can be added only if they are applied at the same point. The net force on a body applied at a single point with the appropriate torque is known as the resultant force. 
A force is a vector, which means it has a direction and a magnitude, so a convenient way to represent a force F is by a directed segment from a point A to a point B. If we denote the coordinates of the points as A = (Ax, Ay, Az) and B = (Bx, By, Bz), then the vector B − A defines the force, and its length gives the magnitude of F: |F| = √((Bx − Ax)² + (By − Ay)² + (Bz − Az)²). The sum of two forces F1 and F2 applied at A can be computed from the sum of the segments that define them
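The component-wise addition and the magnitude formula described above can be sketched in a few lines of Python. This is only an illustration of the vector arithmetic; the function names `add_forces` and `magnitude` are ours, not any standard API:

```python
import math

def add_forces(*forces):
    """Sum force vectors component-wise (the parallelogram rule, generalised)."""
    return tuple(sum(components) for components in zip(*forces))

def magnitude(f):
    """Length of a force vector: |F| = sqrt(Fx**2 + Fy**2 + Fz**2)."""
    return math.sqrt(sum(c * c for c in f))

# Two forces applied at the same point A, components in newtons:
F1 = (3.0, 0.0, 0.0)
F2 = (0.0, 4.0, 0.0)

net = add_forces(F1, F2)
print(net, magnitude(net))  # (3.0, 4.0, 0.0) with magnitude 5.0 N
```

Note that the forces must share a point of application for this simple sum to be the resultant; otherwise the associated torque must be tracked as well, as the entry explains.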
Net force
–
A diagrammatic method for the addition of forces.
Net force
–
How a force accelerates a body.
Net force
–
Graphical placing of the resultant force.
36.
Gravitation
–
Gravity, or gravitation, is a natural phenomenon by which all things with mass are brought toward one another, including planets, stars and galaxies. Since energy and mass are equivalent, all forms of energy, including light, also cause gravitation and are under its influence. On Earth, gravity gives weight to physical objects and causes the ocean tides. Gravity has an infinite range, although its effects become increasingly weaker on farther objects. In general relativity, gravity is described not as a force but as a consequence of the curvature of spacetime caused by the uneven distribution of mass. The most extreme example of this curvature of spacetime is a black hole, from which nothing can escape once past its event horizon. Gravity also results in gravitational time dilation, where time lapses more slowly at a lower gravitational potential. Gravity is the weakest of the four fundamental interactions of nature: the gravitational attraction is approximately 10³⁸ times weaker than the strong force, 10³⁶ times weaker than the electromagnetic force and 10²⁹ times weaker than the weak force. As a consequence, gravity has a negligible influence on the behavior of subatomic particles. On the other hand, gravity is the dominant interaction at the macroscopic scale. For this reason, in part, the pursuit of a theory of everything, the merging of the general theory of relativity and quantum mechanics into quantum gravity, has become an active area of research. While the modern European thinkers are usually credited with the development of gravitational theory, some of the earliest descriptions came from early mathematician-astronomers, such as Aryabhata, who identified the force of gravity to explain why objects are not thrown off as the Earth rotates. Later, the works of Brahmagupta referred to the presence of this force and described it as an attractive force. Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries; this was a major departure from Aristotle's belief that heavier objects have a higher gravitational acceleration. 
Galileo postulated air resistance as the reason that objects with less mass may fall more slowly in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity. In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Calculations by both John Couch Adams and Urbain Le Verrier predicted the position of the planet. A discrepancy in Mercury's orbit pointed out flaws in Newton's theory; the issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit. The simplest way to test the equivalence principle is to drop two objects of different masses or compositions in a vacuum and see whether they hit the ground at the same time. Such experiments demonstrate that all objects fall at the same rate when other forces are negligible
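Newton's inverse-square law mentioned above, F = G·m₁·m₂/r², can be sketched numerically. This is a minimal illustration using widely published approximate values for G and the Earth's mass and radius; the function name is ours:

```python
G = 6.674e-11  # gravitational constant, N·m²/kg² (approximate)

def gravitational_force(m1, m2, r):
    """Magnitude of the attraction between two point masses (kg) separated by r (m)."""
    return G * m1 * m2 / r**2

# Force on a 1 kg mass at the Earth's surface
# (Earth mass ~ 5.972e24 kg, mean radius ~ 6.371e6 m):
f = gravitational_force(5.972e24, 1.0, 6.371e6)
print(f)  # roughly 9.8 N, matching the familiar surface value of g
```

The same expression with the Sun's and a planet's mass reproduces orbital accelerations, which is how the Adams and Le Verrier calculations for Neptune proceeded in spirit.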
Gravitation
–
Sir Isaac Newton, an English physicist who lived from 1642 to 1727
Gravitation
–
Two-dimensional analogy of spacetime distortion generated by the mass of an object. Matter changes the geometry of spacetime, this (curved) geometry being interpreted as gravity. White lines do not represent the curvature of space but instead represent the coordinate system imposed on the curved spacetime, which would be rectilinear in a flat spacetime.
Gravitation
–
Ball falling freely under gravity. See text for description.
Gravitation
–
Gravity acts on stars that form our Milky Way.
37.
SI unit
–
The International System of Units (SI) is the modern form of the metric system, and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units; the system also establishes a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system was published in 1960 as the result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units rather than any variant of the centimetre–gram–second system. The motivation for the development of the SI was the diversity of units that had sprung up within the CGS systems. The International System of Units has been adopted by most developed countries; however, the adoption has not been universal in all English-speaking countries. The metric system was first implemented during the French Revolution, with just the metre and kilogram as standards of length and mass. In the 1830s Carl Friedrich Gauss laid the foundations for a coherent system based on length, mass, and time. In the 1860s a group working under the auspices of the British Association for the Advancement of Science formulated the requirement for a coherent system of units with base units and derived units. Meanwhile, in 1875, the Treaty of the Metre passed responsibility for verification of the kilogram and the metre to the newly formed International Bureau of Weights and Measures. In 1921, the Treaty was extended to include all physical quantities, including the electrical units originally defined in 1893. The units associated with these quantities were the metre, kilogram, second, ampere, kelvin, and candela. In 1971, a seventh base quantity, amount of substance, represented by the mole, was added to the definition of SI. On 11 July 1792, the commission proposed the names metre, are, litre and grave for the units of length, area, capacity, and mass, respectively. 
The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes, such as centi for a hundredth. On 10 December 1799, the law by which the metric system was to be definitively adopted in France was passed. Prior to Gauss's work, the strength of the earth's magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a magnet of known mass by the earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions to the magnetic field based on mass, length, and time. A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention. Initially the convention only covered standards for the metre and the kilogram; one of each was selected at random to become the International prototype metre and International prototype kilogram, which replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the prototypes to serve as the national prototype for that country. Initially the organisation's prime purpose was the periodic recalibration of national prototype metres. The official language of the Metre Convention is French, and the definitive version of all official documents published by or on behalf of the CGPM is the French-language version
SI unit
–
Stone marking the Austro-Hungarian/Italian border at Pontebba displaying myriametres, a unit of 10 km used in Central Europe in the 19th century (but since deprecated).
SI unit
–
The seven base units in the International System of Units
SI unit
–
Carl Friedrich Gauss
SI unit
–
Thomson
38.
Kilogram
–
The kilogram or kilogramme is the base unit of mass in the International System of Units and is defined as being equal to the mass of the International Prototype of the Kilogram (IPK). The avoirdupois pound, used in both the imperial and US customary systems, is defined as exactly 0.45359237 kg, making one kilogram approximately equal to 2.2046 avoirdupois pounds. Other traditional units of weight and mass around the world are also defined in terms of the kilogram. The gram, 1/1000 of a kilogram, was provisionally defined in 1795 as the mass of one cubic centimeter of water at the melting point of ice. The final kilogram, manufactured as a prototype in 1799 and from which the IPK was derived in 1875, had a mass equal to the mass of 1 dm³ of water at its maximum density. The kilogram is the only SI base unit with an SI prefix as part of its name, and it is also the only SI unit that is still directly defined by an artifact rather than a fundamental physical property that can be reproduced in different laboratories. Three other base units and 17 derived units in the SI system are defined relative to the kilogram; only 8 other units do not require the kilogram in their definition: those of temperature, time and frequency, length, and angle. At its 2011 meeting, the CGPM agreed in principle that the kilogram should be redefined in terms of the Planck constant; the decision was originally deferred until 2014, and in 2014 it was deferred again until the next meeting. There are currently several different proposals for the redefinition; these are described in the Proposed Future Definitions section below. The International Prototype Kilogram is rarely used or handled. In the decree of 1795, the term gramme thus replaced gravet. The French spelling was adopted in the United Kingdom when the word was used for the first time in English in 1797, with the spelling kilogram being adopted in the United States. 
In the United Kingdom both spellings are used, with kilogram having become by far the more common. UK law regulating the units to be used when trading by weight or measure does not prevent the use of either spelling. In the 19th century the French word kilo, a shortening of kilogramme, was imported into the English language, where it has been used to mean both kilogram and kilometer. In 1935 Giorgi's metre–kilogram–second proposal was adopted by the IEC as the Giorgi system, now known as the MKS system. In 1948 the CGPM commissioned the CIPM to make recommendations for a practical system of units of measurement. This led to the launch of SI in 1960 and the subsequent publication of the SI Brochure. The kilogram is a unit of mass, a property which corresponds to the common perception of how heavy an object is. Mass is an inertial property; that is, it is related to the tendency of an object at rest to remain at rest, or if in motion to remain in motion at a constant velocity. Accordingly, for astronauts in microgravity, no effort is required to hold objects off the cabin floor; they are weightless. However, objects in microgravity still retain their mass and inertia, and the ratio of the force of gravity on two objects, as measured by a balance scale, is equal to the ratio of their masses. On April 7, 1795, the gram was decreed in France to be the weight of a volume of pure water equal to the cube of the hundredth part of the metre, at the temperature of melting ice
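The pound–kilogram relationship stated above is an exact definition, so the conversion is simple arithmetic. A minimal sketch (the function names are ours):

```python
KG_PER_LB = 0.45359237  # exact, by definition of the avoirdupois pound

def lb_to_kg(lb):
    """Convert avoirdupois pounds to kilograms."""
    return lb * KG_PER_LB

def kg_to_lb(kg):
    """Convert kilograms to avoirdupois pounds."""
    return kg / KG_PER_LB

print(kg_to_lb(1.0))  # about 2.2046, as quoted in the entry
```

Because 0.45359237 is exact, the ~2.2046 figure in the text is simply its reciprocal rounded to five significant figures.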
Kilogram
–
A domestic-quality one-kilogram weight made of cast iron (the credit card is for scale). The shape follows OIML recommendation R52 for cast-iron hexagonal weights
Kilogram
–
Measurement of weight – gravitational attraction of the measurand causes a distortion of the spring
Kilogram
–
Measurement of mass – the gravitational force on the measurand is balanced against the gravitational force on the weights.
Kilogram
–
The Arago kilogram, an exact copy of the "Kilogramme des Archives" commissioned in 1821 by the US under supervision of French physicist François Arago that served as the US's first kilogram standard of mass until 1889, when the US converted to primary metric standards and received its current kilogram prototypes, K4 and K20.
39.
Weight
–
In science and engineering, the weight of an object is usually taken to be the force on the object due to gravity. Weight is a vector whose magnitude, often denoted by an italic letter W, is the product of the mass m of the object and the magnitude of the local gravitational acceleration g; thus W = mg. The unit of measurement for weight is that of force, which in the International System of Units is the newton. For example, an object with a mass of one kilogram has a weight of about 9.8 newtons on the surface of the Earth. In this sense of weight, a body can be weightless only if it is far away from any other mass. Although weight and mass are scientifically distinct quantities, the terms are often confused with each other in everyday use. There is also a tradition within Newtonian physics and engineering which sees weight as that which is measured when one uses scales. There the weight is a measure of the magnitude of the reaction force exerted on a body. Typically, in measuring an object's weight, the object is placed on scales at rest with respect to the earth; thus, in a state of free fall, the weight would be zero. In this second sense of weight, terrestrial objects can be weightless: ignoring air resistance, the famous apple falling from the tree, on its way to meet the ground near Isaac Newton, is weightless. Further complications in elucidating the various concepts of weight have to do with the theory of relativity, according to which gravity is modelled as a consequence of the curvature of spacetime. In the teaching community, a debate has existed for over half a century on how to define weight for students. The current situation is that multiple sets of concepts co-exist. Discussion of the concepts of heaviness and lightness dates back to the ancient Greek philosophers; these were typically viewed as inherent properties of objects. Plato described weight as the tendency of objects to seek their kin. To Aristotle, weight and levity represented the tendency to restore the natural order of the basic elements: air, earth, fire and water. 
He ascribed absolute weight to earth and absolute levity to fire. Archimedes saw weight as a quality opposed to buoyancy, with the conflict between the two determining if an object sinks or floats. The first operational definition of weight was given by Euclid, who defined weight as "the heaviness or lightness of one thing, compared to another, as measured by a balance." Operational balances had, however, been around much longer. According to Aristotle, weight was the cause of the falling motion of an object
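The gravitational definition W = mg from this entry can be illustrated directly. The sketch below uses the standard value of g for Earth and a commonly quoted approximate value for the Moon; the function name is ours:

```python
g_earth = 9.80665  # standard gravitational acceleration, m/s²
g_moon = 1.62      # approximate lunar surface gravity, m/s²

def weight(mass_kg, g):
    """Gravitational definition of weight: W = m * g, in newtons."""
    return mass_kg * g

m = 1.0  # kg
print(weight(m, g_earth))  # about 9.8 N on Earth, as stated in the entry
print(weight(m, g_moon))   # about 1.6 N on the Moon: same mass, different weight
```

The contrast between the two printed values is the mass-versus-weight distinction the entry emphasizes: the mass m never changes, only the local g does.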
Weight
–
Ancient Greek official bronze weights dating from around the 6th century BC, exhibited in the Ancient Agora Museum in Athens, housed in the Stoa of Attalus.
Weight
–
Weighing grain, from the Babur-namah
Weight
–
This top-fuel dragster can accelerate from zero to 160 kilometres per hour (99 mph) in 0.86 seconds. This is a horizontal acceleration of 5.3 g. Combined with the vertical g-force in the stationary case, the Pythagorean theorem yields a g-force of 5.4 g. It is this g-force that determines the driver's weight if one uses the operational definition. If one uses the gravitational definition, the driver's weight is unchanged by the motion of the car.
Weight
–
Measuring weight versus mass
40.
Weighing scale
–
Weighing scales are devices to measure weight or calculate mass. Scales and balances are widely used in commerce, as many products are sold and packaged by mass. Very accurate balances, called analytical balances, are used in fields such as chemistry. Although records dating to the 1700s refer to spring scales for measuring weight, the earliest design for such a device dates to 1770 and credits Richard Salter, an early scale-maker. Postal workers could work more quickly with spring scales than balance scales because they could be read instantaneously. By the 1940s various electronic devices were being attached to these designs to make readings more accurate. A spring scale measures weight by reporting the distance that a spring deflects under a load. This contrasts with a balance, which compares the torque on the arm due to a sample weight to the torque on the arm due to a standard reference weight using a horizontal lever. Spring scales measure force, which is the force of constraint acting on an object. They are usually calibrated so that measured force translates to mass at earth's gravity. The object to be weighed can be simply hung from the spring or set on a pivot and bearing platform. In a spring scale, the spring either stretches or compresses; by Hooke's law, every spring has a proportionality constant that relates how hard it is pulled to how far it stretches. Rack and pinion mechanisms are often used to convert the linear spring motion to a dial reading. With proper manufacturing and setup, however, spring scales can be rated as legal for commerce. To remove the temperature error, a commerce-legal spring scale must either have temperature-compensated springs or be used at a fairly constant temperature. To eliminate the effect of gravity variations, a commerce-legal spring scale must be calibrated where it is used. It is also common in high-capacity applications such as crane scales to use hydraulic force to sense weight. 
The test force is applied to a piston or diaphragm and transmitted through hydraulic lines to an indicator based on a Bourdon tube or electronic sensor. A digital bathroom scale is a type of electronic weighing machine; a smart digital bathroom scale offers functions such as smartphone integration, cloud storage, and fitness tracking. In electronic versions of spring scales, the deflection of a beam supporting the unknown weight is measured using a strain gauge. The capacity of such devices is limited only by the resistance of the beam to deflection. These scales are used in the modern bakery, grocery, delicatessen, seafood, meat, produce and other perishable goods departments
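The Hooke's-law relationship the entry describes, F = k·x, combined with F = m·g, is how a spring scale turns a deflection into a mass reading. A minimal sketch under the stated calibration assumption (the function name and the spring constant are illustrative):

```python
def spring_mass_kg(deflection_m, k, g=9.80665):
    """Infer mass from spring deflection.

    Hooke's law gives F = k * x, and weight gives F = m * g,
    so m = k * x / g. Valid only at the gravity g the scale
    was calibrated for, which is why commerce-legal spring
    scales must be calibrated where they are used.
    """
    return k * deflection_m / g

# A spring with constant k = 980.665 N/m deflecting 1 cm reads 1 kg:
print(spring_mass_kg(0.01, 980.665))  # 1.0 kg
```

The g parameter makes the entry's point about gravity variation concrete: the same deflection at a different local g would be reported as a different mass unless the scale is recalibrated.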
Weighing scale
–
Digital kitchen scale, a strain gauge scale
Weighing scale
–
Scales used for trade purposes in the state of Florida, such as this scale at the checkout in a cafeteria, are inspected for accuracy by the FDACS's Bureau of Weights and Measures.
Weighing scale
–
A two-pan balance
Weighing scale
–
Two 10-decagram masses
41.
Gravitational force
–
Gravity, or gravitation, is a natural phenomenon by which all things with mass are brought toward one another, including planets, stars and galaxies. Since energy and mass are equivalent, all forms of energy, including light, also cause gravitation and are under its influence. On Earth, gravity gives weight to physical objects and causes the ocean tides. Gravity has an infinite range, although its effects become increasingly weaker on farther objects. In general relativity, gravity is described not as a force but as a consequence of the curvature of spacetime caused by the uneven distribution of mass. The most extreme example of this curvature of spacetime is a black hole, from which nothing can escape once past its event horizon. Gravity also results in gravitational time dilation, where time lapses more slowly at a lower gravitational potential. Gravity is the weakest of the four fundamental interactions of nature: the gravitational attraction is approximately 10³⁸ times weaker than the strong force, 10³⁶ times weaker than the electromagnetic force and 10²⁹ times weaker than the weak force. As a consequence, gravity has a negligible influence on the behavior of subatomic particles. On the other hand, gravity is the dominant interaction at the macroscopic scale. For this reason, in part, the pursuit of a theory of everything, the merging of the general theory of relativity and quantum mechanics into quantum gravity, has become an active area of research. While the modern European thinkers are usually credited with the development of gravitational theory, some of the earliest descriptions came from early mathematician-astronomers, such as Aryabhata, who identified the force of gravity to explain why objects are not thrown off as the Earth rotates. Later, the works of Brahmagupta referred to the presence of this force and described it as an attractive force. Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries; this was a major departure from Aristotle's belief that heavier objects have a higher gravitational acceleration. 
Galileo postulated air resistance as the reason that objects with less mass may fall more slowly in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity. In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets. Calculations by both John Couch Adams and Urbain Le Verrier predicted the position of the planet. A discrepancy in Mercury's orbit pointed out flaws in Newton's theory; the issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit. The simplest way to test the equivalence principle is to drop two objects of different masses or compositions in a vacuum and see whether they hit the ground at the same time. Such experiments demonstrate that all objects fall at the same rate when other forces are negligible
Gravitational force
–
Sir Isaac Newton, an English physicist who lived from 1642 to 1727
Gravitational force
–
Two-dimensional analogy of spacetime distortion generated by the mass of an object. Matter changes the geometry of spacetime, this (curved) geometry being interpreted as gravity. White lines do not represent the curvature of space but instead represent the coordinate system imposed on the curved spacetime, which would be rectilinear in a flat spacetime.
Gravitational force
–
Ball falling freely under gravity. See text for description.
Gravitational force
–
Gravity acts on stars that form our Milky Way.
42.
General relativity
–
General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present; the relation is specified by the Einstein field equations, a system of partial differential equations. Some predictions of general relativity differ significantly from those of classical physics; examples of such differences include gravitational time dilation, gravitational lensing, and the gravitational redshift of light. The predictions of general relativity have been confirmed in all observations to date. Although general relativity is not the only relativistic theory of gravity, it is the simplest theory that is consistent with experimental data. Einstein's theory has important astrophysical implications. For example, it implies the existence of black holes (regions of space in which space and time are distorted in such a way that nothing, not even light, can escape) as an end-state for massive stars. The bending of light by gravity can lead to the phenomenon of gravitational lensing. General relativity also predicts the existence of gravitational waves, which have since been observed directly by the physics collaboration LIGO. In addition, general relativity is the basis of current cosmological models of an expanding universe. Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework. In 1907 he began with a thought experiment involving an observer in free fall. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. The Einstein field equations are nonlinear and very difficult to solve. 
Einstein used approximation methods in working out initial predictions of the theory, but as early as 1916 the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse. In 1917, Einstein applied his theory to the universe as a whole. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations, the cosmological constant, to match that observational presumption. By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life.
General relativity
–
A simulated black hole of 10 solar masses within the Milky Way, seen from a distance of 600 kilometers.
General relativity
–
Albert Einstein developed the theories of special and general relativity. Picture from 1921.
General relativity
–
Einstein cross: four images of the same astronomical object, produced by a gravitational lens
General relativity
–
Artist's impression of the space-borne gravitational wave detector LISA
43.
Orders of magnitude (mass)
–
To help compare different orders of magnitude, the following lists describe various mass levels between 10⁻⁴⁰ kg and 10⁵³ kg. The table below is based on the kilogram, the base unit of mass in the International System of Units. The kilogram is the only standard unit to include an SI prefix as part of its name. The gram is an SI derived unit of mass; however, the names of all SI mass units are based on gram, rather than on kilogram; thus 10³ kg is a megagram, not a kilokilogram. The tonne is an SI-compatible unit of mass equal to a megagram. The unit is in common use for masses above about 10³ kg and is often used with SI prefixes. Other units of mass are also in use; historical units include the stone, the pound, the carat, and the grain. For subatomic particles, physicists use the mass equivalent of the energy represented by an electronvolt. At the atomic level, chemists use the mass of one-twelfth of a carbon-12 atom. Astronomers use the mass of the Sun. Unlike other physical quantities, mass-energy does not have an a priori expected minimal quantity, as is the case with time or length; Planck's law allows for the existence of photons with arbitrarily low energies. This series on orders of magnitude does not have a range of larger masses.
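The electronvolt-based mass convention mentioned above follows from mass–energy equivalence, m = E/c². A minimal sketch converting an energy in eV to a mass in kilograms (the function name is ours; the constants are the standard published values):

```python
EV_IN_JOULES = 1.602176634e-19  # 1 eV expressed in joules
C = 299792458.0                 # speed of light in vacuum, m/s

def ev_to_kg(energy_ev):
    """Mass equivalent of an energy given in electronvolts: m = E / c**2."""
    return energy_ev * EV_IN_JOULES / C**2

# An electron's rest energy is about 511 keV:
print(ev_to_kg(511e3))  # roughly 9.1e-31 kg, the electron rest mass
```

This is why particle masses are usually quoted in eV (or MeV, GeV) rather than kilograms: the SI figures sit some thirty orders of magnitude below the bottom of everyday mass scales.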
Orders of magnitude (mass)
–
Iron weights up to 50 kilograms depicted in Dictionnaire encyclopédique de l'épicerie et des industries annexes.
44.
SI base units
–
The International System of Units defines seven units of measure as a basic set from which all other SI units can be derived. The SI base units form a set of mutually independent dimensions, as required by the dimensional analysis commonly employed in science. Some base units are named after scientists: the kelvin, named after Lord Kelvin, has the symbol K, and the ampere, named after André-Marie Ampère, has the symbol A. Many other units, such as the litre, are not part of the SI. The definitions of the base units have been modified several times since the Metre Convention in 1875. Since the redefinition of the metre in 1960, the kilogram is the only unit that is directly defined in terms of a physical artifact. However, the mole, the ampere, and the candela are linked through their definitions to the mass of the platinum–iridium cylinder stored in a vault near Paris. It has long been an objective in metrology to define the kilogram in terms of a fundamental constant; two possibilities have attracted particular attention: the Planck constant and the Avogadro constant. The 23rd CGPM decided to postpone any formal change until the next General Conference in 2011.
SI base units
–
The seven SI base units and the interdependency of their definitions: for example, to extract the definition of the metre from the speed of light, the definition of the second must be known while the ampere and candela are both dependent on the definition of energy which in turn is defined in terms of length, mass and time.
45.
International prototype kilogram
–
The kilogram or kilogramme is the base unit of mass in the International System of Units and is defined as being equal to the mass of the International Prototype of the Kilogram. The avoirdupois pound, used in both the imperial and US customary systems, is defined as exactly 0.45359237 kg, making one kilogram approximately equal to 2.2046 avoirdupois pounds. Other traditional units of weight and mass around the world are also defined in terms of the kilogram. The gram, 1/1000 of a kilogram, was provisionally defined in 1795 as the mass of one cubic centimeter of water at the melting point of ice. The final kilogram, manufactured as a prototype in 1799 and from which the IPK was derived in 1875, had a mass equal to the mass of 1 dm³ of water at its maximum density. The kilogram is the only SI base unit with an SI prefix as part of its name, and it is also the only SI unit that is still directly defined by an artifact rather than a fundamental physical property that can be reproduced in different laboratories. Three other base units and 17 derived units in the SI system are defined relative to the kilogram; only 8 other units do not require the kilogram in their definition: those of temperature, time and frequency, length, and angle. At its 2011 meeting, the CGPM agreed in principle that the kilogram should be redefined in terms of the Planck constant. The decision was originally deferred until 2014; in 2014 it was deferred again until the next meeting. There are currently several different proposals for the redefinition; these are described in the Proposed Future Definitions section below. The International Prototype Kilogram is rarely used or handled. In the decree of 1795, the term gramme thus replaced gravet. The French spelling was adopted in the United Kingdom when the word was used for the first time in English in 1797, with the spelling kilogram being adopted in the United States.
In the United Kingdom both spellings are used, with kilogram having become by far the more common. UK law regulating the units to be used when trading by weight or measure does not prevent the use of either spelling. In the 19th century the French word kilo, a shortening of kilogramme, was imported into the English language, where it has been used to mean both kilogram and kilometer. In 1935 the MKS system was adopted by the IEC as the Giorgi system, now known as the MKS system. In 1948 the CGPM commissioned the CIPM to make recommendations for a practical system of units of measurement. This led to the launch of SI in 1960 and the subsequent publication of the SI Brochure. The kilogram is a unit of mass, a property which corresponds to the common perception of how heavy an object is. Mass is an inertial property; that is, it is related to the tendency of an object at rest to remain at rest, or if in motion to remain in motion at a constant velocity. Accordingly, for astronauts in microgravity, no effort is required to hold objects off the cabin floor; they are weightless. However, since objects in microgravity still retain their mass and inertia, the ratio of the force of gravity on two objects, measured by a scale, is equal to the ratio of their masses. On April 7, 1795, the gram was decreed in France to be the weight of a volume of pure water equal to the cube of the hundredth part of the metre.
International prototype kilogram
–
A domestic-quality one-kilogram weight made of cast iron (the credit card is for scale). The shape follows OIML recommendation R52 for cast-iron hexagonal weights
International prototype kilogram
–
Measurement of weight – gravitational attraction of the measurand causes a distortion of the spring
International prototype kilogram
–
Measurement of mass – the gravitational force on the measurand is balanced against the gravitational force on the weights.
International prototype kilogram
–
The Arago kilogram, an exact copy of the "Kilogramme des Archives" commissioned in 1821 by the US under supervision of French physicist François Arago that served as the US's first kilogram standard of mass until 1889, when the US converted to primary metric standards and received its current kilogram prototypes, K4 and K20.
46.
Tonne
–
The SI symbol for the tonne is t, adopted at the same time as the unit itself in 1879. Its use is also official for the metric ton within the United States, having been adopted by the US National Institute of Standards and Technology. It is a symbol, not an abbreviation, and should not be followed by a period. Informal and non-approved symbols or abbreviations include T, mT, and MT. In French and all English-speaking countries that are predominantly metric, tonne is the correct spelling. Before metrication in the UK the unit used for most purposes was the Imperial ton of 2,240 pounds avoirdupois, equivalent to 1,016 kg, differing by just 1.6% from the tonne. Ton and tonne are both derived from a Germanic word in use in the North Sea area since the Middle Ages to designate a large cask. A full tun, standing about a metre high, could easily weigh a tonne. An English tun of wine weighs roughly a tonne (954 kg if full of water). In the United States, the unit was originally referred to using the French words millier or tonneau, but these terms are now obsolete. The Imperial and US customary units comparable to the tonne are both spelled ton in English, though they differ in mass. One tonne is equivalent to: Metric/SI: 1 megagram, equal to 1,000,000 grams or 1,000 kilograms; megagram (Mg) is the official SI unit (Mg is distinct from mg, milligram). Pounds: exactly 1000/0.45359237 lb, or approximately 2,204.622622 lb. US/short tons: exactly 1/0.90718474 short tons, or approximately 1.102311311 ST; one short ton is exactly 0.90718474 t. Imperial/long tons: exactly 1/1.0160469088 long tons, or approximately 0.9842065276 LT; one long ton is exactly 1.0160469088 t. For multiples of the tonne, it is more usual to speak of thousands or millions of tonnes. Kilotonne, megatonne, and gigatonne are more often used for the energy of nuclear explosions and other events.
When used in context, there is little need to distinguish between metric and other tons, and the unit is spelt either as ton or tonne with the relevant prefix attached. *The equivalent units columns use the short scale large-number naming system used in most English-language countries. †Values in the equivalent short and long tons columns are rounded to five significant figures. ǂThough non-standard, the symbol kt is also sometimes used for knot, a unit of speed for sea-going vessels, and should not be confused with kilotonne. A metric ton unit can mean 10 kilograms within metal trading; it traditionally referred to a metric ton of ore containing 1% of metal. In the case of uranium, the acronym MTU is sometimes considered to be metric ton of uranium. In the petroleum industry the tonne of oil equivalent is a unit of energy: the amount of energy released by burning one tonne of crude oil, approximately 42 GJ.
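The exact conversion factors quoted above can be verified directly from the definitional relations (pound = 0.45359237 kg, short ton = 2,000 lb, long ton = 2,240 lb), a minimal sketch:

```python
# Exact definitional relations quoted in the text.
KG_PER_LB = 0.45359237        # avoirdupois pound, exact
KG_PER_TONNE = 1000.0         # 1 tonne = 1 megagram
LB_PER_SHORT_TON = 2000       # US short ton
LB_PER_LONG_TON = 2240        # imperial long ton

lb_per_tonne = KG_PER_TONNE / KG_PER_LB                           # ~2204.62 lb
short_tons_per_tonne = lb_per_tonne / LB_PER_SHORT_TON            # ~1.1023 ST
tonnes_per_long_ton = LB_PER_LONG_TON * KG_PER_LB / KG_PER_TONNE  # 1.0160469088 t

print(f"1 tonne = {lb_per_tonne:.6f} lb = {short_tons_per_tonne:.9f} short tons")
print(f"1 long ton = {tonnes_per_long_ton} t")
```

The computed values reproduce the figures in the text (2,204.622622 lb, 1.102311311 short tons, 1.0160469088 t) to the stated precision.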
Tonne
–
Base units
47.
Electronvolt
–
In physics, the electronvolt is a unit of energy equal to approximately 1.6×10⁻¹⁹ joules. By definition, it is the amount of energy gained by the charge of an electron moving across an electric potential difference of one volt. Thus it is 1 volt multiplied by the elementary charge; therefore, one electronvolt is equal to 1.6021766208×10⁻¹⁹ J. The electronvolt is not an SI unit, and its definition is empirical; like the elementary charge on which it is based, it is not an independent quantity but is equal to 1 J/C × √(2hα/μ₀c₀). It is a common unit of energy within physics, widely used in solid state, atomic, nuclear, and particle physics. It is commonly used with the metric prefixes milli- and kilo-, among others. In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for billion electronvolts; it is equivalent to the GeV. By mass–energy equivalence, the electronvolt is also a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c², where c is the speed of light in vacuum. It is also common to express mass simply in terms of eV as a unit of mass, effectively using a system of natural units with c set to 1. The mass equivalent of 1 eV/c² is 1 eV/c² = (1.602×10⁻¹⁹ C × 1 V)/c² = 1.783×10⁻³⁶ kg. For example, an electron and a positron, each with a mass of 0.511 MeV/c², can annihilate to yield 1.022 MeV of energy; the proton has a mass of 0.938 GeV/c². In general, the masses of all hadrons are of the order of 1 GeV/c². The unified atomic mass unit, 1 gram divided by Avogadro's number, is almost the mass of a hydrogen atom, which is mostly the mass of the proton. To convert to megaelectronvolts, use the formula 1 u = 931.4941 MeV/c² = 0.9314941 GeV/c². In high-energy physics, the electronvolt is often used as a unit of momentum. A potential difference of 1 volt causes an electron to gain an amount of energy (namely 1 eV). This gives rise to the usage of eV as a unit of momentum, for the energy supplied results in acceleration of the particle. The dimensions of momentum units are LMT⁻¹; the dimensions of energy units are L²MT⁻².
Dividing a unit of energy by a fundamental constant that has units of velocity therefore yields a unit of momentum. In the field of particle physics, the fundamental velocity constant is the speed of light in vacuum, c. Thus, dividing energy in eV by the speed of light expresses momentum in units of eV/c. The fundamental velocity constant c is often dropped from the units of momentum by way of defining units of length such that the value of c is unity.
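The eV-to-SI conversions described above follow directly from the two constants involved. A minimal sketch, using the elementary charge and speed of light values quoted in the text:

```python
# Mass-energy bookkeeping with electronvolts, using the values quoted above.
E_PER_EV = 1.6021766208e-19   # J per eV (elementary charge times 1 V)
C = 299_792_458.0             # speed of light in vacuum, m/s

def ev_to_joules(ev):
    """Energy in eV -> joules."""
    return ev * E_PER_EV

def ev_per_c2_to_kg(ev):
    """Mass given in eV/c^2 -> kilograms, via E = mc^2."""
    return ev * E_PER_EV / C**2

electron_mass = ev_per_c2_to_kg(0.511e6)   # 0.511 MeV/c^2 -> ~9.11e-31 kg
proton_mass = ev_per_c2_to_kg(0.938e9)     # 0.938 GeV/c^2 -> ~1.67e-27 kg
print(f"1 eV/c^2 = {ev_per_c2_to_kg(1):.4e} kg")
print(f"electron: {electron_mass:.4e} kg, proton: {proton_mass:.4e} kg")
```

The 1 eV/c² result reproduces the 1.783×10⁻³⁶ kg figure given in the text.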
Electronvolt
–
γ: Gamma rays
48.
Particle physics
–
Particle physics is the branch of physics that studies the nature of the particles that constitute matter and radiation. By our current understanding, these particles are excitations of the quantum fields that also govern their interactions. The currently dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model. In more technical terms, particles are described by quantum state vectors in a Hilbert space, which is also treated in quantum field theory. All particles and their interactions observed to date can be described almost entirely by the quantum field theory called the Standard Model. The Standard Model, as formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature. In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model. The idea that all matter is composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. Throughout the 1950s and 1960s, a variety of particles were found in collisions of particles from increasingly high-energy beams; this collection was referred to informally as the particle zoo. The current state of the classification of all elementary particles is explained by the Standard Model. It describes the strong, weak, and electromagnetic fundamental interactions; the species of gauge bosons are the gluons, the W−, W+ and Z bosons, and the photon.
The Standard Model also contains 24 fundamental fermions, which are the constituents of all matter. Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson. Early in the morning on 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson. The world's major particle physics laboratories include Brookhaven National Laboratory, whose main facility is the Relativistic Heavy Ion Collider, which collides heavy ions such as gold ions; it is the world's first heavy ion collider and the world's only polarized proton collider. The Budker Institute's main projects are now the electron–positron colliders, including VEPP-2000, operated since 2006. CERN's main project is now the Large Hadron Collider, which had its first beam circulation on 10 September 2008, and is now the world's most energetic collider of protons; it also became the most energetic collider of heavy ions after it began colliding lead ions. DESY's main facility is the Hadron Elektron Ring Anlage, which collides electrons and positrons with protons.
Particle physics
–
Large Hadron Collider tunnel at CERN
49.
Imperial units
–
The system of imperial units or the imperial system is the system of units first defined in the British Weights and Measures Act of 1824, which was later refined and reduced. The imperial units replaced the Winchester Standards, which were in effect from 1588 to 1825, and the system came into official use across the British Empire. The imperial system developed from what were first known as English units. The Weights and Measures Act of 1824 was initially scheduled to go into effect on 1 May 1825; however, the Weights and Measures Act of 1825 pushed back the date to 1 January 1826. The 1824 Act allowed the continued use of pre-imperial units provided that they were customary, widely known, and clearly marked with imperial equivalents. Apothecaries' units are mentioned neither in the act of 1824 nor in that of 1825. At the time, apothecaries' weights and measures were regulated in England, Wales, and Berwick-upon-Tweed by the London College of Physicians, and in Ireland by the Dublin College of Physicians. In Scotland, apothecaries' units were unofficially regulated by the Edinburgh College of Physicians. The three colleges published, at infrequent intervals, pharmacopoeiae, the London and Dublin editions having the force of law. The Medical Act of 1858 transferred to The Crown the right to publish the official pharmacopoeia and to regulate apothecaries' weights. Metric equivalents in this article usually assume the latest official definition. Before this date, the most precise measurement of the imperial Standard Yard was 0.914398416 metres. In 1824, the various different gallons in use in the British Empire were replaced by the imperial gallon, a unit close in volume to the ale gallon. It was originally defined as the volume of 10 pounds of distilled water weighed in air with brass weights, with the barometer standing at 30 inches of mercury at a temperature of 62 °F.
The Weights and Measures Act of 1985 switched to a gallon of exactly 4.54609 L. Apothecaries' measures were in use from 1826, when the new imperial gallon was defined, but were officially abolished in the United Kingdom on 1 January 1971. In the USA, though no longer recommended, the apothecaries' system is still used occasionally in medicine. The troy pound was made the primary unit of mass by the 1824 Act; however, its use was abolished in the UK on 1 January 1879, with only the troy ounce retained. The Weights and Measures Act 1855 made the avoirdupois pound the primary unit of mass. In all the systems, the fundamental unit is the pound, and all other units are defined as fractions or multiples of it. For the yard, the length of a pendulum beating seconds at the latitude of Greenwich at Mean Sea Level in vacuo was defined as 39.01393 inches. The imperial system is one of many systems of English units. Although most of the units are defined in more than one system, some units were used to a much greater extent, or for different purposes. The distinctions between these systems are often not drawn precisely. One such distinction is that between these systems and older British/English units/systems or newer additions. The US customary system is historically derived from the English units that were in use at the time of settlement.
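The 1824 water-based definition of the gallon and the exact 1985 value can be cross-checked numerically. A minimal sketch; the water density used below is an assumed reference value near 62 °F, not something defined in the text, and the small residual discrepancy reflects the original "weighed in air with brass weights" buoyancy condition:

```python
# Consistency check of the two gallon definitions mentioned above.
KG_PER_LB = 0.45359237          # avoirdupois pound, exact
L_PER_IMP_GALLON = 4.54609      # exact since the 1985 Act

water_mass_kg = 10 * KG_PER_LB  # 10 lb of distilled water = 4.5359237 kg
DENSITY_62F = 0.99890           # kg/L, assumed density of water near 62 degF

implied_volume_l = water_mass_kg / DENSITY_62F
print(f"10 lb of water ~ {implied_volume_l:.5f} L vs exact {L_PER_IMP_GALLON} L")
```

The naive volume comes out within about 0.15% of the exact 4.54609 L; the gap is expected because the 1824 definition specified apparent weight in air rather than true mass.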
Imperial units
–
The former Weights and Measures office in Seven Sisters, London.
Imperial units
–
Imperial standards of length 1876 in Trafalgar Square, London.
Imperial units
–
A baby bottle that measures in three measurement systems—metric, imperial (UK), and US customary.
Imperial units
–
A one US gallon gas can purchased near the US-Canada border. It shows equivalences in imperial gallons and litres.
50.
Pound (mass)
–
The pound or pound-mass is a unit of mass used in the imperial, United States customary and other systems of measurement. The international standard symbol for the pound is lb. The unit is descended from the Roman libra; the English word pound is cognate with, among others, German Pfund, Dutch pond, and Swedish pund. All ultimately derive from a borrowing into Proto-Germanic of the Latin expression lībra pondō. Usage of the unqualified term pound reflects the historical conflation of mass and weight; this accounts for the modern distinguishing terms pound-mass and pound-force. The United States and countries of the Commonwealth of Nations agreed upon common definitions for the pound and the yard. Since 1 July 1959, the avoirdupois pound has been defined as exactly 0.45359237 kg. In the United Kingdom, the use of the international pound was implemented in the Weights and Measures Act 1963, which also defined the yard as 0.9144 metre exactly. An avoirdupois pound is equal to 16 avoirdupois ounces and to exactly 7,000 grains; the conversion factor between the kilogram and the international pound was therefore chosen to be divisible by 7, and a grain is thus equal to exactly 64.79891 milligrams. The US has not adopted the metric system despite many efforts to do so. Historically, in different parts of the world, at different points in time, and for different applications, many different definitions of the pound have been used. The libra is an ancient Roman unit of mass that was equivalent to approximately 328.9 grams. It was divided into 12 unciae, or ounces. The libra is the origin of the abbreviation for pound, lb. A number of different definitions of the pound have historically been used in Britain. Amongst these were the avoirdupois pound and the tower, merchants', and London pounds. Historically, a pound sterling was a tower pound of silver. In 1528, the standard was changed to the troy pound. The avoirdupois pound, also known as the wool pound, first came into general use c. 1300. It was initially equal to 6992 troy grains, and the pound avoirdupois was divided into 16 ounces.
During the reign of Queen Elizabeth, the avoirdupois pound was redefined as 7,000 troy grains. Since then, the grain has often been a part of the avoirdupois system.
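The exact subdivisions quoted above (16 ounces and 7,000 grains to the pound, a grain of exactly 64.79891 mg) follow directly from the 1959 definition and can be verified in a few lines:

```python
# Verifying the avoirdupois relations quoted above.
KG_PER_LB = 0.45359237    # international avoirdupois pound, exact
GRAINS_PER_LB = 7000      # exact
OUNCES_PER_LB = 16        # exact

mg_per_grain = KG_PER_LB * 1e6 / GRAINS_PER_LB   # -> 64.79891 mg exactly
g_per_ounce = KG_PER_LB * 1e3 / OUNCES_PER_LB    # -> ~28.349523 g

print(f"1 grain = {mg_per_grain:.5f} mg, 1 oz = {g_per_ounce:.6f} g")
```

The grain works out to exactly 64.79891 mg because 0.45359237 was deliberately chosen divisible by 7, as the text notes.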
Pound (mass)
–
Various historic pounds from a German textbook dated 1848
Pound (mass)
–
The Tower Pound
51.
Solar mass
–
The solar mass is a standard unit of mass in astronomy, equal to approximately 1.99×10³⁰ kilograms. It is used to indicate the masses of other stars, as well as clusters, nebulae, and galaxies. It is equal to the mass of the Sun, about two nonillion kilograms: M☉ ≈ 1.99×10³⁰ kg. The above mass is about 332,946 times the mass of Earth. Because Earth follows an elliptical orbit around the Sun, the solar mass can be computed from the equation for the orbital period of a small body orbiting a central mass. The gravitational constant was first measured by Henry Cavendish; the value he obtained differs by only 1% from the modern value. The diurnal parallax of the Sun was accurately measured during the transits of Venus in 1761 and 1769, yielding a value of 9″. From the value of the parallax, one can determine the distance to the Sun from the geometry of Earth. The first person to estimate the mass of the Sun was Isaac Newton; in his work Principia, he estimated that the ratio of the mass of Earth to the Sun was about 1/28700. Later he determined that his value was based upon a faulty value for the solar parallax, and he corrected his estimated ratio to 1/169282 in the third edition of the Principia. The current value for the solar parallax is smaller still, yielding an estimated mass ratio of 1/332946. As a unit of measurement, the solar mass came into use before the AU and the gravitational constant were precisely measured. The mass of the Sun has been decreasing since the time it formed. This occurs through two processes in nearly equal amounts. First, in the Sun's core, hydrogen is converted into helium through nuclear fusion, in particular the p–p chain; this reaction converts some mass into energy in the form of gamma ray photons. Most of this energy eventually radiates away from the Sun. Second, high-energy protons and electrons in the atmosphere of the Sun are ejected directly into outer space as the solar wind.
The original mass of the Sun at the time it reached the main sequence remains uncertain. The early Sun had much higher mass-loss rates than at present, and it may have lost anywhere from 1–7% of its natal mass over the course of its main-sequence lifetime. The Sun gains a small amount of mass through the impact of asteroids; however, as the Sun already contains 99.86% of the Solar System's total mass, these gains are negligible. The solar mass can also be expressed in geometrized units as a length or a time: M☉G/c² ≈ 1.48 km and M☉G/c³ ≈ 4.93 μs.
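The computation mentioned above, deriving the solar mass from the orbital period of a small body, can be sketched via Kepler's third law, M = 4π²a³/(GT²). The values of G, a, and T below are standard reference figures assumed for illustration:

```python
import math

# Estimating the solar mass from Earth's orbit via Kepler's third law.
G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
A_EARTH = 1.495978707e11      # semi-major axis of Earth's orbit (1 au), m
T_EARTH = 365.25636 * 86400   # sidereal year, s

M_SUN = 4 * math.pi**2 * A_EARTH**3 / (G * T_EARTH**2)
print(f"Solar mass ~ {M_SUN:.4e} kg")   # close to the quoted 1.99e30 kg
```

Note that Earth's own mass is neglected here (the small-body approximation), which is safe since the mass ratio is about 1/332,946.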
Solar mass
–
Internal structure
Solar mass
–
Size and mass of very large stars: Most massive example, the blue Pistol Star (150 M ☉). Others are Rho Cassiopeiae (40 M ☉), Betelgeuse (20 M ☉), and VY Canis Majoris (17 M ☉). The Sun (1 M ☉) which is not visible in this thumbnail is included to illustrate the scale of example stars. Earth's orbit (grey), Jupiter's orbit (red), and Neptune's orbit (blue) are also given.
52.
Sun
–
The Sun is the star at the center of the Solar System. It is a nearly perfect sphere of hot plasma, with internal convective motion that generates a magnetic field via a dynamo process. It is by far the most important source of energy for life on Earth. Its diameter is about 109 times that of Earth, and its mass is about 330,000 times that of Earth, accounting for about 99.86% of the total mass of the Solar System. About three quarters of the Sun's mass consists of hydrogen; the rest is mostly helium, with smaller quantities of heavier elements, including oxygen, carbon, and neon. The Sun is a G-type main-sequence star based on its spectral class. It formed approximately 4.6 billion years ago from the gravitational collapse of matter within a region of a large molecular cloud. Most of this matter gathered in the center, whereas the rest flattened into a disk that became the Solar System. The central mass became so hot and dense that it eventually initiated nuclear fusion in its core. It is thought that almost all stars form by this process. The Sun is roughly middle-aged; it has not changed dramatically for more than four billion years. It is calculated that the Sun will eventually become sufficiently large to engulf the current orbits of Mercury, Venus, and probably Earth. The enormous effect of the Sun on Earth has been recognized since prehistoric times. The synodic rotation of Earth and its orbit around the Sun are the basis of the solar calendar, which is the predominant calendar in use today. The English proper name Sun developed from Old English sunne and may be related to south; all Germanic terms for the Sun stem from Proto-Germanic *sunnōn. The English weekday name Sunday stems from Old English and is ultimately a result of a Germanic interpretation of Latin dies solis. The Latin name for the Sun, Sol, is not common in general English language use; the adjectival form is the related word solar.
The term sol is used by planetary astronomers to refer to the duration of a solar day on another planet. A mean Earth solar day is approximately 24 hours, whereas a mean Martian sol is 24 hours, 39 minutes, and 35.244 seconds. From at least the 4th Dynasty of Ancient Egypt, the Sun was worshipped as the god Ra, portrayed as a falcon-headed divinity surmounted by the solar disk, and surrounded by a serpent. In the New Empire period, the Sun became identified with the dung beetle. In the form of the Sun disc Aten, the Sun had a brief resurgence during the Amarna Period, when it again became the preeminent, if not only, divinity for the Pharaoh Akhenaton. The Sun is viewed as a goddess in Germanic paganism, Sól/Sunna. In ancient Roman culture, Sunday was the day of the Sun god. It was adopted as the Sabbath day by Christians who did not have a Jewish background. The symbol of light was a pagan device adopted by Christians, and perhaps the most important one that did not come from Jewish traditions.
Sun
–
The Sun in visible wavelength with filtered white light on 8 July 2014. Characteristic limb darkening and numerous sunspots are visible.
Sun
–
During a total solar eclipse, the solar corona can be seen with the naked eye, during the brief period of totality.
Sun
–
Taken by Hinode 's Solar Optical Telescope on 12 January 2007, this image of the Sun reveals the filamentary nature of the plasma connecting regions of different magnetic polarity.
Sun
–
Visible light photograph of sunspot, 13 December 2006
53.
Black hole
–
A black hole is a region of spacetime exhibiting such strong gravitational effects that nothing (not even particles and electromagnetic radiation such as light) can escape from inside it. The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of the region from which no escape is possible is called the event horizon. Although the event horizon has an enormous effect on the fate and circumstances of an object crossing it, it appears to have no locally detectable features. In many ways a black hole acts like an ideal black body, as it reflects no light. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to the black hole's mass. This temperature is on the order of billionths of a kelvin for black holes of stellar mass, making it essentially impossible to observe. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. Black holes were long considered a mathematical curiosity; it was during the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery of neutron stars sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality. Black holes of stellar mass are expected to form when very massive stars collapse at the end of their life cycle. After a black hole has formed, it can continue to grow by absorbing mass from its surroundings; by absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may form. There is general consensus that supermassive black holes exist in the centers of most galaxies. Despite its invisible interior, the presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter that falls onto a black hole can form an accretion disk heated by friction.
If there are other stars orbiting a black hole, their orbits can be used to determine the black hole's mass. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have established that the core of the Milky Way contains a supermassive black hole of about 4.3 million solar masses. On 15 June 2016, a detection of a gravitational wave event from colliding black holes was announced. The idea of a body so massive that light could not escape was briefly proposed by astronomical pioneer John Michell in a letter published in 1783–84. Michell correctly noted that such supermassive but non-radiating bodies might be detectable through their effects on nearby visible bodies. In 1915, Albert Einstein developed his theory of general relativity. Only a few months later, Karl Schwarzschild found a solution to the Einstein field equations which describes the gravitational field of a point mass and a spherical mass. A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution for the point mass. This solution had a peculiar behaviour at what is now called the Schwarzschild radius; the nature of this surface was not quite understood at the time.
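The Schwarzschild radius mentioned above is given by r_s = 2GM/c². A minimal sketch; the masses used are illustrative reference values, not figures from the text:

```python
# Schwarzschild radius r_s = 2GM/c^2 for black holes of various masses.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0      # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg (reference value)

def schwarzschild_radius(mass_kg):
    """Radius of the event horizon of a non-rotating mass, in metres."""
    return 2 * G * mass_kg / C**2

r_sun = schwarzschild_radius(M_SUN)           # ~2.95 km for one solar mass
r_stellar = schwarzschild_radius(10 * M_SUN)  # 10-solar-mass stellar black hole
r_smbh = schwarzschild_radius(4e6 * M_SUN)    # supermassive, millions of M_sun

print(f"1 M_sun: {r_sun/1e3:.2f} km, 10 M_sun: {r_stellar/1e3:.1f} km")
print(f"4e6 M_sun: {r_smbh/1e9:.1f} million km")
```

The linear scaling with mass is why supermassive black holes have horizons millions of kilometres across while stellar-mass ones are only tens of kilometres.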
Black hole
–
Predicted appearance of non-rotating black hole with toroidal ring of ionised matter, such as has been proposed as a model for Sagittarius A*. The asymmetry is due to the Doppler effect resulting from the enormous orbital speed needed for centrifugal balance of the very strong gravitational attraction of the hole.
Black hole
–
Simulation of gravitational lensing by a black hole, which distorts the image of a galaxy in the background
Black hole
–
A simple illustration of a non-spinning black hole
Black hole
–
A simulated event in the CMS detector, a collision in which a micro black hole may be created.
54.
Standard gravitational parameter
–
In celestial mechanics, the standard gravitational parameter μ of a celestial body is the product of the gravitational constant G and the mass M of the body: μ = GM. For several objects in the Solar System, the value of μ is known to greater accuracy than either G or M. The SI units of the standard gravitational parameter are m³ s⁻²; however, units of km³ s⁻² are frequently used in the scientific literature. The central body in an orbital system can be defined as the one whose mass is much larger than the mass of the orbiting body, or M ≫ m. This approximation is standard for planets orbiting the Sun or most moons. Conversely, measurements of the smaller body's orbit only provide information on the product μ, not G and M separately. For a circular orbit, μ = rv²; this can be generalized for elliptic orbits: μ = 4π²a³/T², where a is the semi-major axis and T is the orbital period. For parabolic trajectories rv² is constant and equal to 2μ. For elliptic and hyperbolic orbits μ = 2a|ε|, where ε is the specific orbital energy. The value for the Earth is called the geocentric gravitational constant. However, M can be found only by dividing GM by G; the uncertainty of measurements of G is about 1 part in 7,000, so M will have the same relative uncertainty. The value for the Sun is called the heliocentric gravitational constant and equals 1.32712440018×10²⁰ m³ s⁻². Note that the reduced mass is also denoted by μ.
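The point about dividing GM by G can be made concrete: μ for the Sun is quoted to eleven digits, but the derived mass inherits the much larger relative uncertainty of G. A minimal sketch using the values in the text:

```python
# Recovering M from mu = GM: mu is known far more precisely than G, so the
# mass inherits G's relative uncertainty (about 1 part in 7,000 per the text).
MU_SUN = 1.32712440018e20   # m^3 s^-2, heliocentric gravitational constant
G = 6.674e-11               # m^3 kg^-1 s^-2, gravitational constant

M_sun = MU_SUN / G
rel_uncertainty_G = 1 / 7000
abs_uncertainty_M = M_sun * rel_uncertainty_G

print(f"M_sun = {M_sun:.4e} kg (+/- {abs_uncertainty_M:.1e} kg from G alone)")
```

Even though μ itself carries eleven significant figures, the mass is only good to about four, entirely because of G.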
Standard gravitational parameter
–
The Schwarzschild radius (r s) represents the ability of mass to cause curvature in space and time.
55.
Physical science
–
Physical science is a branch of natural science that studies non-living systems, in contrast to life science. It in turn has many branches, each referred to as a physical science. In natural science, hypotheses must be verified scientifically to be regarded as scientific theory. Validity, accuracy, and social mechanisms ensuring quality control, such as peer review and repeatability of findings, are amongst the criteria. Natural science can be broken into two branches: life science (for example, biology) and physical science. Each of these branches, and all of their sub-branches, are referred to as natural sciences. Physics – natural and physical science that involves the study of matter and its motion through space and time, along with related concepts such as energy and force; more broadly, it is the general analysis of nature, conducted in order to understand how the universe behaves. Branches of astronomy. Chemistry – studies the composition, structure, and properties of matter. Branches of chemistry. Earth science – all-embracing term referring to the fields of science dealing with planet Earth; Earth science is the study of how the natural environment works, and it includes the study of the atmosphere, hydrosphere, lithosphere, and biosphere. Branches of Earth science. History of physical science – history of the branch of science that studies non-living systems; it in turn has many branches, each referred to as a physical science. However, the term physical creates an unintended, somewhat arbitrary distinction, since many branches of physical science also study biological phenomena. History of astrodynamics – history of the application of ballistics and celestial mechanics to the problems concerning the motion of rockets. History of astrometry – history of the branch of astronomy that involves precise measurements of the positions and movements of stars. History of cosmology – history of the discipline that deals with the nature of the Universe as a whole.
Further sub-histories include the history of physical cosmology (the study of the largest-scale structures of the Universe), the history of planetary science (the scientific study of planets, moons, and planetary systems, in particular those of the Solar System and the processes that form them), the history of neurophysics (the branch of biophysics dealing with the nervous system), the history of chemical physics (the branch of physics that studies chemical processes from the point of view of physics), the history of computational physics (the study and implementation of algorithms to solve problems in physics for which a quantitative theory already exists), the history of condensed matter physics (the study of the properties of condensed phases of matter), the history of cryogenics (the study of the production of very low temperatures), the history of biomechanics (the study of the structure and function of biological systems such as humans, animals, plants, and organs), and the history of fluid mechanics (the study of fluids and the forces on them).
Physical science
–
Chemistry, "the central science": a partial ordering of the sciences proposed by Balaban and Klein.
56.
Proportionality (mathematics)
–
In mathematics, two variables are proportional if a change in one is always accompanied by a change in the other, and if the changes are always related by a constant multiplier. The constant is called the coefficient of proportionality or proportionality constant. If one variable is always the product of the other and a constant, the two are said to be directly proportional: x and y are directly proportional if the ratio y/x is constant. If the product of the two variables is always a constant, the two are said to be inversely proportional: x and y are inversely proportional if the product xy is constant. To express the statement "y is directly proportional to x" mathematically, we write the equation y = cx, where c is the proportionality constant; symbolically, this is written as y ∝ x. To express the statement "y is inversely proportional to x", we write the equation y = c/x; we can equivalently say "y is proportional to 1/x". An equality of two ratios is called a proportion, for example a/c = b/d, where no term is zero. Given two variables x and y, y is proportional to x if there is a non-zero constant k such that y = kx. The relation is denoted, using the ∝ or ~ symbol, as y ∝ x. If an object travels at a constant speed, then the distance traveled is directly proportional to the time spent traveling; the circumference of a circle is directly proportional to its diameter, with the constant of proportionality equal to π. Since y = kx is equivalent to x = (1/k)y, it follows that if y is proportional to x with proportionality constant k, then x is proportional to y with proportionality constant 1/k. The concept of inverse proportionality can be contrasted with direct proportionality. Consider two variables said to be inversely proportional to each other: if all other variables are held constant, the magnitude or absolute value of one inversely proportional variable decreases as the other variable increases.
Formally, two variables are inversely proportional if each of the variables is directly proportional to the multiplicative inverse of the other. As an example, the time taken for a journey is inversely proportional to the speed of travel. The graph of two variables varying inversely on the Cartesian coordinate plane is a rectangular hyperbola; the product of the x and y values of each point on the curve equals the constant of proportionality. Since neither x nor y can equal zero, the graph never crosses either axis. A variable y is exponentially proportional to a variable x if y is directly proportional to the exponential function of x, that is, if there exist non-zero constants k and a such that y = k·a^x. Likewise, a variable y is logarithmically proportional to a variable x if y is directly proportional to the logarithm of x, that is, if there exist non-zero constants k and a such that y = k·log_a(x).
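The direct and inverse relationships above can be checked numerically; a minimal sketch (the data values are made up for illustration):

```python
# Direct proportionality: y = k*x, so the ratio y/x is the same for every pair.
distances = [30.0, 60.0, 90.0]   # km traveled (illustrative values)
times = [0.5, 1.0, 1.5]          # hours spent traveling

ratios = [d / t for d, t in zip(distances, times)]
assert all(abs(r - ratios[0]) < 1e-9 for r in ratios)
k = ratios[0]  # constant of proportionality: a steady 60 km/h

# Inverse proportionality: x*y is constant, e.g. speed * time for a fixed trip.
speeds = [40.0, 60.0, 120.0]     # km/h
journey_times = [3.0, 2.0, 1.0]  # hours for the same 120 km journey

products = [s * t for s, t in zip(speeds, journey_times)]
assert all(abs(p - products[0]) < 1e-9 for p in products)
print(k, products[0])  # 60.0 120.0
```

The same check works for any proportional data set: a constant ratio signals direct proportionality, a constant product signals inverse proportionality.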
Proportionality (mathematics)
–
Variable y is directly proportional to the variable x.
57.
Pair production
–
Pair production is the creation of an elementary particle and its antiparticle. Examples include creating an electron and a positron, a muon and an antimuon, or a proton and an antiproton. Pair production often refers specifically to a photon creating an electron–positron pair near a nucleus, but can more generally refer to any neutral boson creating a particle–antiparticle pair. All conserved quantum numbers of the created particles must sum to zero, so the created particles have opposite values of each other; for instance, if one particle has an electric charge of +1, the other must have an electric charge of −1. The probability of pair production in photon–matter interactions increases with photon energy and also increases approximately as the square of the atomic number of the nearby atom. For photons with high energy, pair production is the dominant mode of photon interaction with matter. These interactions were first observed in Patrick Blackett's counter-controlled cloud chamber. The photon must have energy higher than the sum of the rest mass energies of an electron and positron for the production to occur, and the photon must be near a nucleus in order to satisfy conservation of momentum; because of this, when pair production occurs, the atomic nucleus receives some recoil. The reverse of this process is electron–positron annihilation. These properties can be derived through the kinematics of the interaction: using four-vector notation, conservation of energy–momentum before and after the interaction gives p_γ = p_e− + p_e+ + p_R, where p_R is the recoil four-momentum of the nucleus, and squaring both sides gives (p_γ)² = (p_e− + p_e+ + p_R)². In most cases, however, the recoil of the nucleus is much smaller than the energy of the photon and can be neglected. This derivation is a semi-classical approximation; an exact derivation of the kinematics can be done taking into account the full quantum mechanical scattering of photon and nucleus.
Cross sections are tabulated for different materials and energies. In 2008 the Titan laser, aimed at a 1-millimetre-thick gold target, was used to generate positron–electron pairs in large numbers. Pair production is invoked to predict the existence of hypothetical Hawking radiation: according to quantum mechanics, particle pairs are constantly appearing and disappearing as a quantum foam. In a region of strong tidal forces, the two particles in a pair may sometimes be wrenched apart before they have a chance to mutually annihilate. When this happens in the region around a black hole, one particle may escape while its antiparticle partner is captured by the black hole. Supernova SN 2006gy is hypothesized to have been a pair-production-type supernova. See also: annihilation, electron–positron annihilation, the Meitner–Hupfeld effect, pair-instability supernovae, two-photon physics, the Dirac equation, matter creation, and the theory of photon-impact bound-free pair production.
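The minimum photon energy for electron–positron pair production follows directly from the rest energies mentioned above; a quick check using standard constants:

```python
# Threshold for electron-positron pair production: the photon must carry
# at least the combined rest energy of the pair, E >= 2 * m_e * c^2.
m_e = 9.1093837015e-31      # electron mass, kg (CODATA value)
c = 2.99792458e8            # speed of light, m/s (exact by definition)
e_charge = 1.602176634e-19  # joules per electronvolt

rest_energy_mev = m_e * c**2 / e_charge / 1e6  # ~0.511 MeV per particle
threshold_mev = 2 * rest_energy_mev            # photon threshold energy

print(round(threshold_mev, 3))  # 1.022
```

Any photon below roughly 1.022 MeV cannot pair-produce, which is why the process only dominates photon–matter interactions at high energies.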
Pair production
–
Light–matter interaction
58.
Nuclear fusion
–
In nuclear physics, nuclear fusion is a reaction in which two or more atomic nuclei come close enough to form one or more different atomic nuclei and subatomic particles. The difference in mass between the products and reactants is manifested as the release of large amounts of energy; this difference in mass arises due to the difference in nuclear binding energy between the atomic nuclei before and after the reaction. Fusion is the process that powers active or main-sequence stars. A fusion process that produces a nucleus lighter than iron-56 or nickel-62 will generally yield a net energy release; these elements have the smallest mass per nucleon and the largest binding energy per nucleon. The opposite is true for the reverse process, nuclear fission. This means that the lighter elements, such as hydrogen and helium, are in general more fusible, while the heavier elements are not. The extreme astrophysical event of a supernova can produce enough energy to fuse nuclei into elements heavier than iron. In the 1930s the steps of the main cycle of nuclear fusion in stars were worked out by Hans Bethe. Research into fusion for military purposes began in the early 1940s as part of the Manhattan Project; fusion was accomplished in 1951 with the Greenhouse Item nuclear test. Nuclear fusion on a large scale in an explosion was first carried out on November 1, 1952. Research into developing controlled thermonuclear fusion for civil purposes also began in earnest in the 1950s. Protons are positively charged and repel each other, but they nonetheless stick together, demonstrating the existence of another force, referred to as nuclear attraction. This force, called the nuclear force, overcomes electric repulsion at very close range; the effect of this force is not observed outside the nucleus.
The same force also pulls nucleons together, allowing ordinary matter to exist. Light nuclei are sufficiently small and proton-poor to allow the nuclear force to overcome the repulsive Coulomb force: such a nucleus is small enough that all nucleons feel the short-range attractive force at least as strongly as they feel the infinite-range Coulomb repulsion. Building up these nuclei from lighter nuclei by fusion thus releases the energy from the net attraction of these particles. For larger nuclei, however, no energy is released, since the nuclear force is short-range; thus, energy is no longer released when such nuclei are made by fusion, and instead energy is absorbed. Fusion reactions of light elements power the stars and produce virtually all elements in a process called nucleosynthesis. The fusion of light elements in stars releases energy and the mass that always accompanies it.
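The mass-difference bookkeeping described above can be illustrated for the well-known deuterium–tritium reaction, using standard tabulated atomic masses:

```python
# D + T -> He-4 + n: energy release from the mass defect, E = dm * c^2.
# Masses in unified atomic mass units (u); 1 u corresponds to ~931.494 MeV.
U_TO_MEV = 931.494

m_deuterium = 2.014102  # u
m_tritium = 3.016049    # u
m_helium4 = 4.002602    # u
m_neutron = 1.008665    # u

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)  # ~0.0189 u
energy_mev = mass_defect * U_TO_MEV

print(round(energy_mev, 1))  # 17.6
```

The products are lighter than the reactants by about 0.4% of the total mass, and that missing mass appears as the famous ~17.6 MeV released per D–T fusion.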
Nuclear fusion
–
The Sun is a main-sequence star, and thus generates its energy by nuclear fusion of hydrogen nuclei into helium. In its core, the Sun fuses 620 million metric tons of hydrogen each second.
Nuclear fusion
–
The Tokamak à configuration variable, research fusion reactor, at the École Polytechnique Fédérale de Lausanne (Switzerland).
Nuclear fusion
–
The only man-made fusion device to achieve ignition to date is the hydrogen bomb. The detonation of the first device, codenamed Ivy Mike, occurred in 1952 and is shown here.
59.
Atomic clocks
–
The principle of operation of an atomic clock is based not on nuclear physics but on atomic physics: it uses the microwave signal that electrons in atoms emit when they change energy levels. Early atomic clocks were based on masers at room temperature. Currently, the most accurate atomic clocks first cool the atoms to near absolute zero temperature by slowing them with lasers and probing them in atomic fountains in a microwave-filled cavity; an example is the NIST-F1 atomic clock, one of the national primary time and frequency standards. The accuracy of an atomic clock depends on two factors. The first is the temperature of the sample atoms: colder atoms move much more slowly, allowing longer probe times. The second is the frequency and intrinsic linewidth of the electronic transition: higher frequencies and narrower lines increase the precision. National standards agencies in many countries maintain a network of atomic clocks which are intercompared; these clocks collectively define a continuous and stable time scale, International Atomic Time (TAI). For civil time, another time scale is disseminated, Coordinated Universal Time (UTC). UTC is derived from TAI but approximately synchronised, by using leap seconds, to UT1, which is based on the actual rotation of the Earth with respect to solar time. The idea of using atomic transitions to measure time was suggested by Lord Kelvin in 1879. Magnetic resonance, developed in the 1930s by Isidor Rabi, became the practical method for doing this. In 1945, Rabi first publicly suggested that atomic beam magnetic resonance might be used as the basis of a clock. The first atomic clock was an ammonia maser device built in 1949 at the U.S. National Bureau of Standards; it was less accurate than existing quartz clocks, but served to demonstrate the concept. Calibration of the caesium standard atomic clock was carried out by the use of the astronomical time scale ephemeris time.
This led to the agreed definition of the latest SI second being based on atomic time. Equality of the ET second with the SI second has been verified to within 1 part in 10^10; the SI second thus inherits the effect of decisions by the original designers of the ephemeris time scale in determining the length of the ET second. Since the beginning of development in the 1950s, atomic clocks have been based on transitions in hydrogen-1, caesium-133, and rubidium-87. The first commercial atomic clock was the Atomichron, manufactured by the National Company; more than 50 were sold between 1956 and 1960. This bulky and expensive instrument was subsequently replaced by much smaller rack-mountable devices, such as the Hewlett-Packard model 5060 caesium frequency standard. In August 2004, NIST scientists demonstrated a chip-scale atomic clock; according to the researchers, the clock was believed to be one-hundredth the size of any other. It requires no more than 125 mW, making it suitable for battery-driven applications; this technology became available commercially in 2011. Ion trap experimental optical clocks are more precise than the current caesium standard. As of March 2017, NASA planned to deploy the Deep Space Atomic Clock, a miniaturized, ultra-precise mercury-ion atomic clock, into outer space.
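Clock accuracies quoted as "one second in N years" (as for the fountain clock pictured below this entry) translate directly into fractional frequency uncertainties; a back-of-the-envelope sketch:

```python
# Convert "loses at most 1 s in 30 million years" into a fractional
# frequency uncertainty, and into the expected drift per year.
SECONDS_PER_YEAR = 365.25 * 86400  # Julian year in seconds

total_seconds = 30e6 * SECONDS_PER_YEAR       # ~9.5e14 s of running time
fractional_uncertainty = 1.0 / total_seconds  # ~1.1e-15

drift_per_year_ns = fractional_uncertainty * SECONDS_PER_YEAR * 1e9
print(f"{fractional_uncertainty:.1e}", round(drift_per_year_ns))  # ~1e-15, ~33 ns
```

At that level the clock gains or loses only tens of nanoseconds per year, which is why such standards can anchor TAI.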
Atomic clocks
–
FOCS 1, a continuous cold caesium fountain atomic clock in Switzerland, started operating in 2004 at an uncertainty of one second in 30 million years.
Atomic clocks
–
The master atomic clock ensemble at the U.S. Naval Observatory in Washington, D.C., which provides the time standard for the U.S. Department of Defense. The rack mounted units in the background are Symmetricom (formerly HP) 5071A caesium beam clocks. The black units in the foreground are Symmetricom (formerly Sigma-Tau) MHM-2010 hydrogen maser standards.
Atomic clocks
–
Louis Essen (right) and Jack Parry (left) standing next to the world's first caesium-133 atomic clock.
Atomic clocks
–
Chip-scale atomic clocks, such as this one unveiled in 2004, are expected to greatly improve GPS location.
60.
Watt balance
–
A watt balance is an experimental electromechanical weight-measuring instrument that measures the weight of a test object very precisely by means of an electric current and a voltage. In 2016, metrologists agreed to rename watt balances as Kibble balances, in honour of the instrument's inventor, Bryan Kibble. It is being developed as a metrological instrument that may one day provide a definition of the kilogram unit of mass based on electronic units, a so-called electronic or electrical kilogram. The name watt balance comes from the fact that the weight of the test mass is proportional to the product of the current and the voltage, which is measured in units of watts. In this new application, the balance will be used in the opposite sense: the weight of the kilogram is then used to compute the mass of the kilogram by accurately determining the local gravitational acceleration. This will define the mass of a kilogram in terms of a current and a voltage. The principle used in the watt balance was proposed by B. P. Kibble of the UK National Physical Laboratory in 1975 for measurement of the gyromagnetic ratio. The main weakness of the ampere-balance method is that the result depends on the accuracy with which the dimensions of the coils are measured. The watt balance method has an extra step in which the effect of the geometry of the coils is eliminated: this extra step involves moving the force coil through a known magnetic flux at a known speed. This step was first done in 1990. In 2014, NRC researchers published the most accurate measurement of the Planck constant to date, with a relative uncertainty of 1.8×10^−8. A conducting wire of length L that carries an electric current I perpendicular to a magnetic field of strength B will experience a Laplace force equal to BLI. In the watt balance, the current is varied so that this force exactly counteracts the weight w of a mass m (this is also the principle behind the ampere balance); w is given by the mass m multiplied by the local gravitational acceleration g.
Kibble's watt balance avoids the problems of measuring B and L with a calibration step: the same wire is moved through the same magnetic field at a known speed v, and by Faraday's law of induction a potential difference U is generated across the ends of the wire. The unknown product BL can then be eliminated from the equations to give UI = mgv. With U, I, g, and v accurately measured, this gives an accurate value for m. Both sides of the equation have the dimensions of power, measured in watts in the International System of Units; the current watt balance experiments are thus equivalent to measuring the value of the conventional watt in SI units. The importance of such measurements is that they are also a direct measurement of the Planck constant h: h = 4/(K_J² R_K), where K_J is the Josephson constant and R_K is the von Klitzing constant. The principle of the electronic kilogram would be to define the value of the Planck constant, in the same way that the metre is defined by the speed of light.
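The elimination step above reduces to m = UI/(gv); a sketch with made-up instrument readings (the numbers are illustrative, not real experimental values):

```python
# Kibble/watt balance principle: the weighing phase gives B*L*I = m*g,
# the moving phase gives U = B*L*v; dividing eliminates B*L, so m = U*I/(g*v).
U = 1.0e-3    # induced voltage in the moving phase, V (illustrative)
I = 9.81e-3   # balancing current in the weighing phase, A (illustrative)
g = 9.81      # local gravitational acceleration, m/s^2
v = 1.0e-3    # coil velocity in the moving phase, m/s (illustrative)

m = (U * I) / (g * v)  # mass in kg, with coil geometry B*L cancelled out
print(round(m, 6))     # 0.001
```

The point of the two-phase protocol is visible in the algebra: the hard-to-measure geometric factor BL appears in both phases and cancels.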
Watt balance
–
The NIST watt balance; the vacuum chamber dome, which lowers over the entire apparatus, is visible at top
Watt balance
–
Precision Ampere balance at the US National Bureau of Standards (now NIST) in 1927. The current coils are visible under the balance, attached to the right balance arm. The Watt balance is a development of the Ampere balance.
61.
Mass versus weight
–
In common usage, the mass of an object is often referred to as its weight, though these are in fact different concepts and quantities. In scientific contexts, mass refers loosely to the amount of matter in an object, whereas weight refers to the force experienced by an object due to gravity. In other words, an object with a mass of 1.0 kilogram will weigh approximately 9.81 newtons on the surface of the Earth; its weight will be less on Mars, more on Saturn, and negligible in space far from any significant source of gravity. Objects on the surface of the Earth have weight, although sometimes this weight is difficult to measure. Thus, a seemingly weightless object floating in water actually transfers its weight to the bottom of the container. Similarly, a balloon has mass but may appear to have no weight or even negative weight; however, the weight of the balloon and the gas inside it has merely been transferred to a large area of the Earth's surface, making the weight difficult to measure. The weight of a flying airplane is similarly distributed to the ground: if the airplane is in flight, the same weight-force is distributed to the surface of the Earth as when the plane was on the runway. A better scientific definition of mass is its description as a measure of inertia. While the weight of an object varies in proportion to the strength of the gravitational field, its mass is constant; accordingly, for an astronaut on a spacewalk in orbit, no effort is required to hold a communications satellite in front of him: it is weightless. However, since objects in orbit retain their mass and inertia, effort is still required to accelerate the satellite. On Earth, a swing seat can demonstrate this relationship between force, mass, and acceleration: applying the same impetus to a child as to an adult produces a much greater speed in the child. Mass is an inertial property, that is, the tendency of an object to remain at constant velocity unless acted upon by an outside force. Inertia is seen when a bowling ball is pushed horizontally on a level, smooth surface and continues in horizontal motion.
This is quite distinct from its weight, which is the downwards gravitational force of the bowling ball that one must counter when holding it off the floor. The weight of the bowling ball on the Moon would be one-sixth of that on the Earth, although its mass would be unchanged; consequently, whenever the physics of recoil kinetics dominate and the influence of gravity is a negligible factor, an object's behaviour is governed by its mass rather than its weight. In the physical sciences, the terms mass and weight are rigidly defined as separate measures, as they are different physical properties, but in everyday use the two are often conflated. For example, in commerce, the net weight of products actually refers to mass. Conversely, the load index rating on automobile tires, which specifies the maximum structural load for a tire in kilograms, refers to weight, that is, the force due to gravity.
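The entry's opening numbers (1.0 kg weighing about 9.81 N on Earth, less on Mars, one-sixth on the Moon) are just W = mg with different local gravities; a quick illustration using approximate surface-gravity values:

```python
# Weight is W = m * g: the same mass yields different weights
# under different local gravitational accelerations.
mass_kg = 1.0
surface_gravity = {   # m/s^2, approximate values
    "Earth": 9.81,
    "Moon": 1.62,
    "Mars": 3.71,
}

for body, g in surface_gravity.items():
    print(body, round(mass_kg * g, 2), "N")
# Earth 9.81 N, Moon 1.62 N, Mars 3.71 N -- the mass stays 1.0 kg throughout
```

The mass never changes; only the force measured by a spring scale does.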
Mass versus weight
–
If one were to stand behind this girl at the bottom of the arc and try to stop her, one would be acting against her inertia, which arises from mass, not weight.
Mass versus weight
–
Matter's mass strongly influences many familiar kinetic properties.
Mass versus weight
–
A hot air balloon when it has neutral buoyancy has no weight for the men to support but still retains its great mass and inertia.
Mass versus weight
–
A balance-type weighing scale: Unaffected by the strength of gravity.
62.
Gravity of Earth
–
The gravity of Earth, which is denoted by g, refers to the acceleration that is imparted to objects due to the distribution of mass within the Earth. In SI units this acceleration is measured in metres per second squared (m/s^2) or, equivalently, in newtons per kilogram; the quantity is sometimes referred to informally as "little g". The precise strength of Earth's gravity varies depending on location. The nominal average value at the Earth's surface, known as standard gravity, is, by definition, 9.80665 m/s^2; this quantity is denoted variously as gn, ge, g0, or gee. The weight of an object on the Earth's surface is the downwards force on that object, given by Newton's second law of motion, F = ma. Gravitational acceleration contributes to the total acceleration, but other factors, such as the rotation of the Earth, also contribute. The Earth is not spherically symmetric, but is slightly flatter at the poles while bulging at the Equator; there are consequently slight deviations in both the magnitude and direction of gravity across its surface. The net force as measured by a scale and plumb bob is called effective gravity or apparent gravity; effective gravity includes other factors that affect the net force, such as the centrifugal force at the surface arising from the Earth's rotation. Effective gravity on the Earth's surface varies by around 0.7%: in large cities, it ranges from 9.766 m/s^2 in Kuala Lumpur, Mexico City, and Singapore to 9.825 m/s^2 in Oslo and Helsinki. The surface of the Earth is rotating, so it is not an inertial frame of reference. At latitudes nearer the Equator, the outward centrifugal force produced by Earth's rotation is larger than at polar latitudes; this counteracts the Earth's gravity to a small degree, up to a maximum of 0.3% at the Equator. The same two factors influence the direction of the effective gravity.
Gravity decreases with altitude as one rises above the Earth's surface, because greater altitude means greater distance from the Earth's centre. All other things being equal, an increase in altitude from sea level to 9,000 metres causes a weight decrease of about 0.29%. It is a common misconception that astronauts in orbit are weightless because they have flown high enough to escape the Earth's gravity; in fact, at an altitude of 400 kilometres, equivalent to a typical orbit of the Space Shuttle, gravity is still nearly 90% as strong as at the Earth's surface. Weightlessness actually occurs because orbiting objects are in free fall. The effect of ground elevation depends on the density of the ground: a person flying at 30,000 ft above sea level over mountains will feel more gravity than someone at the same elevation over the open sea; however, a person standing on the Earth's surface feels less gravity when the elevation is higher. The following formula approximates the Earth's gravity variation with altitude: g_h = g_0 (r_e / (r_e + h))^2, where g_h is the gravitational acceleration at height h above sea level, r_e is the Earth's mean radius, and g_0 is the standard gravitational acceleration.
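The free-air formula at the end of the entry can be evaluated directly; a sketch using the Earth's mean radius (the exact percentage depends slightly on which radius is used):

```python
# Free-air approximation: g_h = g_0 * (r_e / (r_e + h))**2
G0 = 9.80665       # standard gravity, m/s^2
R_EARTH = 6.371e6  # Earth's mean radius, m

def gravity_at_altitude(h_m: float) -> float:
    """Approximate gravitational acceleration at height h above sea level."""
    return G0 * (R_EARTH / (R_EARTH + h_m)) ** 2

g_9000 = gravity_at_altitude(9000.0)
decrease_pct = (1 - g_9000 / G0) * 100
print(round(g_9000, 3), round(decrease_pct, 2))  # ~9.779 m/s^2, ~0.28 % decrease
```

This reproduces, to within rounding, the roughly 0.29% weight decrease at 9,000 m quoted in the entry.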
Gravity of Earth
–
Earth's gravity measured by NASA's GRACE mission, showing deviations from the theoretical gravity of an idealized smooth Earth, the so-called earth ellipsoid. Red shows the areas where gravity is stronger than the smooth, standard value, and blue reveals areas where gravity is weaker. (Animated version.)
Gravity of Earth
–
Earth's radial density distribution according to the Preliminary Reference Earth Model (PREM).
63.
Kilograms
–
The kilogram or kilogramme is the base unit of mass in the International System of Units (SI) and is defined as being equal to the mass of the International Prototype of the Kilogram (IPK). The avoirdupois pound, used in both the imperial and US customary systems, is defined as exactly 0.45359237 kg, making one kilogram approximately equal to 2.2046 avoirdupois pounds. Other traditional units of weight and mass around the world are also defined in terms of the kilogram. The gram, 1/1000 of a kilogram, was provisionally defined in 1795 as the mass of one cubic centimetre of water at the melting point of ice. The final kilogram, manufactured as a prototype in 1799 and from which the IPK was derived in 1875, had a mass equal to the mass of 1 dm^3 of water at its maximum density. The kilogram is the only SI base unit with an SI prefix as part of its name; it is also the only SI unit that is still directly defined by an artifact rather than by a fundamental physical property that can be reproduced in different laboratories. Three other base units and 17 derived units in the SI system are defined relative to the kilogram; only eight other units do not require the kilogram in their definition: those of temperature, time and frequency, length, and angle. At its 2011 meeting, the CGPM agreed in principle that the kilogram should be redefined in terms of the Planck constant; the decision was originally deferred until 2014, and in 2014 it was deferred again until the next meeting. There are currently several different proposals for the redefinition. The International Prototype Kilogram is rarely used or handled. In the decree of 1795, the term gramme thus replaced gravet. The French spelling was adopted in the United Kingdom when the word was used for the first time in English in 1797, with the spelling kilogram being adopted in the United States.
In the United Kingdom both spellings are used, with kilogram having become by far the more common; UK law regulating the units to be used when trading by weight or measure does not prevent the use of either spelling. In the 19th century the French word kilo, a shortening of kilogramme, was imported into the English language, where it has been used to mean both kilogram and kilometre. In 1935 the MKS (metre–kilogram–second) system was adopted by the IEC as the Giorgi system. In 1948 the CGPM commissioned the CIPM to make recommendations for a practical system of units of measurement; this led to the launch of SI in 1960 and the subsequent publication of the SI Brochure. The kilogram is a unit of mass, a property which corresponds to the common perception of how heavy an object is. Mass is an inertial property, that is, it is related to the tendency of an object at rest to remain at rest, or, if in motion, to remain in motion at a constant velocity. Accordingly, for astronauts in microgravity, no effort is required to hold objects off the cabin floor: they are weightless. However, objects in microgravity still retain their mass and inertia, and the ratio of the force of gravity on two objects, measured by a scale, is equal to the ratio of their masses. On April 7, 1795, the gram was decreed in France to be the weight of a volume of pure water equal to the cube of the hundredth part of the metre.
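The exact avoirdupois definition quoted in the entry makes unit conversion a matter of one constant:

```python
# The avoirdupois pound is defined as exactly 0.45359237 kg,
# so one kilogram is approximately 2.2046 pounds.
LB_IN_KG = 0.45359237  # exact, by definition

def kg_to_lb(kg: float) -> float:
    """Convert kilograms to avoirdupois pounds."""
    return kg / LB_IN_KG

print(round(kg_to_lb(1.0), 4))   # 2.2046
print(round(kg_to_lb(75.0), 1))  # 165.3
```

Because the pound is defined in terms of the kilogram, the conversion is exact rather than an empirical approximation.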
Kilograms
–
A domestic-quality one-kilogram weight made of cast iron (the credit card is for scale). The shape follows OIML recommendation R52 for cast-iron hexagonal weights
Kilograms
–
Measurement of weight – gravitational attraction of the measurand causes a distortion of the spring
Kilograms
–
Measurement of mass – the gravitational force on the measurand is balanced against the gravitational force on the weights.
Kilograms
–
The Arago kilogram, an exact copy of the "Kilogramme des Archives" commissioned in 1821 by the US under supervision of French physicist François Arago that served as the US's first kilogram standard of mass until 1889, when the US converted to primary metric standards and received its current kilogram prototypes, K4 and K20.
64.
Newtons
–
The newton is the International System of Units (SI) derived unit of force. It is named after Isaac Newton in recognition of his work on classical mechanics. One newton is the force needed to accelerate one kilogram of mass at the rate of one metre per second squared in the direction of the applied force. In 1948, the 9th CGPM, in resolution 7, adopted the name newton for this unit of force; the MKS system then became the blueprint for today's SI system of units, and the newton thus became the standard unit of force in le Système International d'Unités. As with every International System of Units unit named for a person, the first letter of its symbol is upper case (note that "degree Celsius" conforms to this rule because the "d" is lowercase). (Based on The International System of Units, section 5.2.) Newton's second law of motion states that F = ma, where F is the applied force, m is the mass of the object receiving the force, and a is the resulting acceleration. The newton is therefore 1 N = 1 kg·m/s^2, where the symbols used for the units are N for newton, kg for kilogram, and m for metre. In dimensional analysis, [F] = MLT^−2, where M is mass, L is length, and T is time. At average gravity on Earth, a kilogram mass exerts a force of about 9.8 newtons; an average-sized apple exerts about one newton of force, which we measure as the apple's weight. For example, the tractive effort of a Class Y steam train and the thrust of an F100 fighter jet engine are both around 130 kN. One kilonewton, 1 kN, is about 102.0 kgf: 1 kN = 102 kg × 9.81 m/s^2. So, for example, a platform rated at 321 kilonewtons will safely support a 32,100-kilogram load. Specifications in kilonewtons are common in safety specifications for: holding values of fasteners and Earth anchors; working loads in tension and in shear; thrust of rocket engines and launch vehicles; and clamping forces of the moulds in injection-moulding machines used to manufacture plastic parts.
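The F = ma definition and the kilogram-force conversion quoted in the entry can be reproduced directly:

```python
# One newton is the force that accelerates 1 kg at 1 m/s^2 (F = m * a).
g = 9.80665  # standard gravity, m/s^2

weight_1kg_n = 1.0 * g    # a 1 kg mass weighs ~9.81 N, as the entry states
kgf_per_kn = 1000.0 / g   # 1 kN expressed in kilograms-force, ~102 kgf

print(round(weight_1kg_n, 2), round(kgf_per_kn, 1))  # 9.81 102.0
```

The "1 kN is about 102.0 kgf" figure is just 1000 N divided by standard gravity.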
Newtons
–
Base units
65.
Weightlessness
–
Counterintuitively, a uniform gravitational field does not by itself cause stress or strain, and a body in free fall in such an environment experiences no g-force acceleration and feels weightless. This is also termed zero-g, where the term is more correctly understood as meaning zero g-force. When a body is supported or accelerated by non-gravitational forces, a sensation of weight, in the sense of a state of stress, can occur; in such cases, g-forces are felt, and bodies are not weightless. When the gravitational field is non-uniform, a body in free fall suffers tidal effects and is not stress-free; near a black hole, such tidal effects can be very strong. In the case of the Earth, the effects are minor, especially on objects of relatively small dimension; this condition is known as microgravity, and it prevails in orbiting spacecraft. In October 2015, the NASA Office of Inspector General issued a report on health hazards related to human spaceflight. In Newtonian mechanics the term weight is given two distinct interpretations by engineers. Weight1: under this interpretation, the weight of a body is the gravitational force exerted on the body. Near the surface of the Earth, a body whose mass is 1 kg has a weight of approximately 9.81 N, independent of its state of motion, free fall or not. Weightlessness in this sense can be achieved by removing the body far away from the source of gravity; it can also be attained by placing the body at a neutral point between two gravitating masses. Weight2: weight can also be interpreted as that quantity which is measured when one uses scales; what is being measured there is the force exerted by the body on the scales. In a standard weighing operation, the body being weighed is in a state of equilibrium as a result of a force exerted on it by the weighing machine cancelling the gravitational field. By Newton's third law, there is an equal and opposite force exerted by the body on the machine; typically, it is a contact force, not uniform across the mass of the body.
If the body is placed on the scales in a lift in free fall in pure uniform gravity, the scale would read zero; this describes the condition in which the body is stress-free and undeformed. This is the weightlessness of free fall in a uniform gravitational field. To sum up, we have two notions of weight, of which weight1 is dominant; yet weightlessness is typically exemplified not by the absence of weight1 but by the absence of the stress associated with weight2. This is the intended sense of weightlessness in what follows below. A body is stress-free, and exerts zero weight2, when the only force acting on it is weight1, as when in free fall in a uniform gravitational field.
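The weight2 (scale-reading) interpretation can be sketched as the contact force on a scale riding in an accelerating lift; a minimal illustration (the 70 kg mass is arbitrary):

```python
# weight2 is what a scale reads: the contact force N = m * (g + a),
# where a is the upward acceleration of the lift carrying the scale.
# weight1 (the gravitational force m*g) is unchanged in every case below.
g = 9.81  # m/s^2

def scale_reading(mass_kg: float, lift_accel: float) -> float:
    """Scale reading in newtons for a body in a lift accelerating upward."""
    return mass_kg * (g + lift_accel)

print(round(scale_reading(70.0, 0.0), 1))   # lift at rest: ordinary weight
print(round(scale_reading(70.0, 2.0), 1))   # accelerating upward: reads heavier
print(round(scale_reading(70.0, -g), 1))    # free fall: 0.0 -- weightlessness
```

The free-fall case is exactly the lift scenario in the entry: weight1 still acts, but the scale (weight2) reads zero.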
Weightlessness
–
Astronauts on the International Space Station experience only microgravity and thus display an example of weightlessness. Michael Foale can be seen exercising in the foreground.
Weightlessness
–
Zero gravity flight maneuver
Weightlessness
–
NASA's KC-135A plane ascending for a zero gravity maneuver
Weightlessness
–
Zero-gravity testing at the NASA Zero Gravity Research Facility
66.
Earth's gravity
–
The gravity of Earth, which is denoted by g, refers to the acceleration that is imparted to objects due to the distribution of mass within the Earth. In SI units this acceleration is measured in metres per second squared (m/s2) or, equivalently, in newtons per kilogram (N/kg), and this quantity is sometimes referred to informally as little g. The precise strength of Earth's gravity varies depending on location; the nominal average value at the Earth's surface, known as standard gravity, is, by definition, 9.80665 m/s2. This quantity is denoted variously as gn, ge, g0, or gee. The weight of an object on the Earth's surface is the downward force on that object, given by Newton's second law of motion, F = ma. Gravitational acceleration contributes to the total acceleration, but other factors, such as the rotation of the Earth, also contribute. The Earth is not spherically symmetric, but is slightly flatter at the poles while bulging at the Equator; there are consequently slight deviations in both the magnitude and direction of gravity across its surface. The net force as measured by a scale and plumb bob is called effective gravity or apparent gravity; effective gravity includes other factors that affect the net force, such as the centrifugal force at the surface arising from the Earth's rotation. Effective gravity on the Earth's surface varies by around 0.7%; among large cities, it ranges from 9.766 m/s2 in Kuala Lumpur, Mexico City, and Singapore to 9.825 m/s2 in Oslo and Helsinki. The surface of the Earth is rotating, so it is not an inertial frame of reference. At latitudes nearer the Equator, the centrifugal effect produced by Earth's rotation is larger than at polar latitudes. This counteracts the Earth's gravity to a small degree, up to a maximum of 0.3% at the Equator, and the same two factors influence the direction of the effective gravity.
Gravity decreases with altitude as one rises above the Earth's surface, because greater altitude means greater distance from the Earth's centre. All other things being equal, an increase in altitude from sea level to 9,000 metres causes a weight decrease of about 0.29%. It is a misconception that astronauts in orbit are weightless because they have flown high enough to escape the Earth's gravity. In fact, at an altitude of 400 kilometres, equivalent to a typical orbit of the Space Shuttle, gravity is still nearly 90% as strong as at the Earth's surface. Weightlessness actually occurs because orbiting objects are in free fall. The effect of ground elevation depends on the density of the ground: a person flying at 30,000 ft above sea level over mountains will feel more gravity than someone at the same elevation over the sea; however, a person standing on the Earth's surface feels less gravity when the elevation is higher. The following formula approximates the variation of Earth's gravity with altitude: gh = g0 × (re / (re + h))^2, where gh is the gravitational acceleration at height h above sea level, re is the Earth's mean radius, and g0 is the standard gravitational acceleration.
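The altitude formula can be checked with a short computation; the mean-radius value below is a standard approximation:

```python
def gravity_at_altitude(h_m, g0=9.80665, earth_radius_m=6.371e6):
    """Approximate gravitational acceleration (m/s^2) at height h_m above sea level,
    using inverse-square falloff from the Earth's centre: g_h = g0 * (r_e/(r_e+h))^2."""
    return g0 * (earth_radius_m / (earth_radius_m + h_m)) ** 2

print(gravity_at_altitude(0))      # 9.80665 at sea level
print(gravity_at_altitude(9000))   # a decrease of roughly 0.3%, as stated in the text
print(gravity_at_altitude(400e3))  # ~8.7 m/s^2 in low orbit: still close to 90% of surface gravity
```

The 400 km result is why orbital weightlessness cannot be explained by "escaping" gravity: the field there is only about 11% weaker than at the surface.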
Earth's gravity
–
Earth's gravity measured by NASA's GRACE mission, showing deviations from the theoretical gravity of an idealized smooth Earth, the so-called earth ellipsoid. Red shows the areas where gravity is stronger than the smooth, standard value, and blue reveals areas where gravity is weaker.
Earth's gravity
–
Earth's radial density distribution according to the Preliminary Reference Earth Model (PREM).
67.
Force carrier
–
In particle physics, force carriers, messenger particles, or intermediate particles are particles that give rise to forces between other particles. These particles are bundles of energy (quanta) of a particular kind of field. There is one kind of field for every type of elementary particle: for instance, there is an electron field whose quanta are electrons, and an electromagnetic field whose quanta are photons. The force carrier particles that mediate the electromagnetic, weak, and strong interactions are called gauge bosons. In particle physics, quantum field theories such as the Standard Model describe nature in terms of fields. Each field has a complementary description as the set of particles of a particular type, and the energy of a wave in a field is quantized. The Standard Model contains the following particles, each of which is an excitation of a particular field: gluons, excitations of the strong gauge field; photons, W bosons, and Z bosons, excitations of the electroweak gauge fields; Higgs bosons, excitations of one component of the Higgs field; and several types of fermions, described as excitations of fermionic fields. In addition, composite particles such as mesons can be described as excitations of an effective field. Gravity is not part of the Standard Model, but it is thought that there may be particles called gravitons which are the excitations of gravitational waves; the status of this particle is still tentative, because the theory is incomplete. When one particle scatters off another, altering its trajectory, there are two ways to think about the process. In the field picture, we imagine that the field generated by one particle caused a force on the other. Alternatively, we can imagine one particle emitting a virtual particle which is absorbed by the other; the virtual particle transfers momentum from one particle to the other. The description of forces in terms of particles is limited by the applicability of the perturbation theory from which it is derived.
In certain situations, such as low-energy QCD and the description of bound states, perturbation theory breaks down. Between charged particles, the electromagnetic force can be described by the exchange of virtual photons. The nuclear force binding protons and neutrons can be described by an effective field of which mesons are the excitations, and at sufficiently large energies, the interaction between quarks can be described by the exchange of virtual gluons. Beta decay is an example of an interaction due to the exchange of a W boson. Gravitation may be due to the exchange of virtual gravitons.
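The link between a force carrier's mass and the force's range can be made quantitative: the range is roughly the reduced Compton wavelength hbar/(mc) of the carrier. A rough sketch, using standard published masses for the pion and the W boson purely for illustration:

```python
HBAR_C_MEV_FM = 197.327  # hbar*c in MeV*fm, a standard conversion constant

def force_range_fm(carrier_mass_mev):
    """Approximate range (femtometres) of a force mediated by a carrier of the
    given rest-mass energy, via the reduced Compton wavelength hbar/(m*c)."""
    return HBAR_C_MEV_FM / carrier_mass_mev

print(force_range_fm(139.6))    # pion: ~1.4 fm, the scale of the nuclear force
print(force_range_fm(80379.0))  # W boson: ~0.0025 fm, hence the weak force's very short range
```

A massless carrier such as the photon gives an infinite range by the same reasoning, consistent with the long-range character of electromagnetism.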
Force carrier
–
Large Hadron Collider tunnel at CERN
68.
Little group
–
In mathematics, an action of a group is a way of interpreting the elements of the group as acting on some space in a way that preserves the structure of that space. Common examples of spaces that groups act on are sets, vector spaces, and topological spaces; actions of groups on vector spaces are called representations of the group. Some groups can be interpreted as acting on spaces in a canonical way; more generally, symmetry groups such as the homeomorphism group of a topological space or the general linear group of a vector space, as well as their subgroups, also admit canonical actions. A common way of specifying non-canonical actions is to describe a homomorphism φ from a group G to the group of symmetries of a set X. The action of an element g ∈ G on a point x ∈ X is then defined to be the action of its image φ(g) ∈ Sym(X) on the point x, and the homomorphism φ itself is also called the action of G. Thus, if G is a group and X is a set, a group action may be given as a homomorphism from G to the symmetric group of X; if X has additional structure, then φ is only called an action if, for each g ∈ G, the permutation φ(g) preserves the structure of X. The abstraction provided by group actions is a powerful one, because it allows geometrical ideas to be applied to more abstract objects. Many objects in mathematics have natural group actions defined on them; in particular, groups can act on other groups, or even on themselves. Because of this generality, the theory of group actions contains wide-reaching theorems, such as the orbit-stabilizer theorem. When such a homomorphism exists, the group G is said to act on X, and the set X is called a G-set. In complete analogy, one can define a right group action of G on X as an operation X × G → X mapping (x, g) to x.g and satisfying x.(gh) = (x.g).h for all g, h in G and all x in X. The difference between left and right actions is the order in which a product gh acts on x: for a left action h acts first and is followed by g, while for a right action g acts first and is followed by h. Because of the formula (gh)⁻¹ = h⁻¹g⁻¹, one can construct a left action from a right action by composing with the inverse operation of the group.
Also, a right action of a group G on X is the same thing as a left action of its opposite group Gop on X. It is thus sufficient to consider only left actions, without any loss of generality. The trivial action of any group G on any set X is defined by g.x = x for all g in G and all x in X; that is, every group element induces the identity permutation on X. In every group G, left multiplication is an action of G on itself: g.x = gx for all g, x in G.
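As a concrete illustration, the cyclic group C3 of rotations by 0°, 120°, and 240° acts on the three vertices of an equilateral triangle, and the left-action axioms can be checked directly. A small sketch, representing each rotation as a permutation of vertex indices:

```python
from itertools import product

def rotation(k):
    """Permutation of the vertex set {0, 1, 2} induced by rotating the
    triangle counterclockwise by k * 120 degrees: vertex i goes to (i + k) mod 3."""
    return {i: (i + k) % 3 for i in range(3)}

C3 = [rotation(k) for k in range(3)]  # the three group elements

def act(g, x):
    """Apply the permutation g to the vertex x."""
    return g[x]

def compose(g, h):
    """Composition g after h (h acts first), itself a permutation."""
    return {i: g[h[i]] for i in range(3)}

# Left-action axioms: the identity acts trivially, and (gh).x = g.(h.x).
identity = rotation(0)
assert all(act(identity, x) == x for x in range(3))
for g, h, x in product(C3, C3, range(3)):
    assert act(compose(g, h), x) == act(g, act(h, x))
print("C3 acts on the triangle's vertices")
```

The vertex set here is a G-set for G = C3, and each group element induces a permutation, exactly as in the abstract definition.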
Little group
–
Given an equilateral triangle, the counterclockwise rotation by 120° around the center of the triangle maps every vertex of the triangle to another one. The cyclic group C 3 consisting of the rotations by 0°, 120° and 240° acts on the set of the three vertices.
69.
Higgs field
–
The Higgs boson is an elementary particle in the Standard Model of particle physics. It is the excitation of the Higgs field, a fundamental field of crucial importance to particle physics theory, first suspected to exist in the 1960s. Unlike other known fields such as the electromagnetic field, it has a non-zero constant value in vacuum. The question of the Higgs field's existence became the last unverified part of the Standard Model of particle physics; the field also resolves several other long-standing puzzles, such as the reason for the weak force's extremely short range. Although the Higgs field is believed to permeate the entire Universe, it can in principle be proved to exist by detecting its excitations, which manifest as Higgs particles, but these are extremely difficult to produce and to detect. On 4 July 2012, the discovery of a new particle with a mass between 125 and 127 GeV/c2 was announced; physicists suspected that it was the Higgs boson, and this also means it is the first elementary scalar particle discovered in nature. The Higgs boson is named after Peter Higgs, one of six physicists who proposed the mechanism in the 1964 PRL symmetry breaking papers. On December 10, 2013, two of them, Peter Higgs and François Englert, were awarded the Nobel Prize in Physics for their work and prediction. Although Higgs's name has come to be associated with this theory, several researchers developed it independently. In the Standard Model, the Higgs particle is a boson with no spin, electric charge, or colour charge. It is also unstable, decaying into other particles almost immediately. It is an excitation of one of the four components of the Higgs field; the latter constitutes a scalar field with two neutral and two electrically charged components that form a complex doublet of the weak isospin SU(2) symmetry. The Higgs field is tachyonic, which does not refer to faster-than-light speeds: it has a Mexican-hat-shaped potential with nonzero strength everywhere, which in its vacuum state breaks the weak isospin symmetry of the electroweak interaction.
When this happens, three components of the Higgs field are absorbed by the SU(2) and U(1) gauge bosons to become the longitudinal components of the now-massive W and Z bosons of the weak force. The remaining electrically neutral component either manifests as a Higgs particle or can couple separately to other particles known as fermions. Some versions of the theory predicted more than one kind of Higgs field, and alternative Higgsless models might have been considered if the Higgs boson had not been discovered. In the Standard Model, the forces in nature arise from properties of our universe called gauge invariance; the forces themselves are transmitted by particles known as gauge bosons. Gauge field theories had been used with great success in understanding the electromagnetic field and the strong force, but the problem was that the requirements of gauge theory predicted that both electromagnetism's gauge boson and the weak force's gauge bosons should have zero mass.
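The "Mexican hat" potential mentioned above can be written, for a single real field component, in the generic form V(φ) = -μ²φ² + λφ⁴, whose minimum sits at a non-zero field value v = μ/√(2λ) rather than at φ = 0; this is what "nonzero strength everywhere in vacuum" means. A small numeric sketch, where μ and λ are arbitrary illustrative values, not the measured Standard Model parameters:

```python
import math

def potential(phi, mu=1.0, lam=0.25):
    """Mexican-hat potential V(phi) = -mu^2 * phi^2 + lambda * phi^4 (toy parameters)."""
    return -mu**2 * phi**2 + lam * phi**4

def vacuum_expectation(mu=1.0, lam=0.25):
    """Field value at the potential's minimum: v = mu / sqrt(2*lambda),
    found by setting dV/dphi = -2 mu^2 phi + 4 lambda phi^3 = 0."""
    return mu / math.sqrt(2 * lam)

v = vacuum_expectation()
assert potential(v) < potential(0.0)  # the vacuum lies at non-zero field strength
print(v, potential(v))
```

Because the lowest-energy state is at φ = v rather than φ = 0, the symmetric point is unstable, which is the sense in which the field is called tachyonic.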
Higgs field
–
Large Hadron Collider tunnel at CERN
Higgs field
–
Candidate Higgs boson events from collisions between protons in the LHC. The top event in the CMS experiment shows a decay into two photons (dashed yellow lines and green towers). The lower event in the ATLAS experiment shows a decay into 4 muons (red tracks).
Higgs field
–
The six authors of the 1964 PRL papers, who received the 2010 J. J. Sakurai Prize for their work. From left to right: Kibble, Guralnik, Hagen, Englert, Brout. Right: Higgs.
Higgs field
70.
Observable universe
–
There are at least two trillion galaxies in the observable universe, containing more stars than all the grains of sand on planet Earth. Assuming the universe is isotropic, the distance to the edge of the observable universe is roughly the same in every direction; that is, the observable universe is a spherical volume centered on the observer. Every location in the Universe has its own observable universe, which may or may not overlap with the one centered on Earth. The word observable used in this sense does not depend on whether modern technology actually permits detection of radiation from an object in this region; it simply indicates that it is possible in principle for light or other signals from the object to reach an observer on Earth. In practice, we can see light only from as far back as the time of photon decoupling in the recombination epoch, when particles were first able to emit photons that were not quickly re-absorbed by other particles; before then, the Universe was filled with a plasma that was opaque to photons. The detection of gravitational waves indicates there is now a possibility of detecting signals from before the recombination epoch. The surface of last scattering is the collection of points in space at the exact distance that photons from the time of photon decoupling just reach us today; these are the photons we detect today as cosmic microwave background radiation. However, with future technology, it may be possible to observe the still older relic neutrino background, or even more distant events via gravitational waves. It is estimated that the diameter of the observable universe is about 28.5 gigaparsecs (about 93 billion light-years). The total mass of matter in the observable universe can be calculated using the critical density. Some parts of the Universe are too far away for the light emitted since the Big Bang to have had time to reach Earth. In the future, light from distant galaxies will have had more time to travel, so additional regions will become observable.
This fact can be used to define a type of cosmic event horizon whose distance from the Earth changes over time. Both popular and professional research articles in cosmology often use the term universe to mean observable universe. It is plausible that the galaxies within our observable universe represent only a fraction of the galaxies in the Universe. If the Universe is finite but unbounded, it is also possible that the Universe is smaller than the observable universe. In this case, what we take to be very distant galaxies may actually be duplicate images of nearby galaxies; it is difficult to test this hypothesis experimentally because different images of a galaxy would show different eras in its history, and consequently might appear quite different.
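The quoted diameter of the observable universe can be cross-checked by unit conversion: one parsec is about 3.2616 light-years, so 28.5 gigaparsecs comes out near 93 billion light-years. A quick sketch:

```python
LY_PER_PARSEC = 3.2616  # light-years per parsec (standard conversion factor)

def gigaparsecs_to_billion_ly(gpc):
    """Convert a distance in gigaparsecs to billions of light-years."""
    return gpc * LY_PER_PARSEC

diameter_gly = gigaparsecs_to_billion_ly(28.5)
print(diameter_gly)  # ~93 billion light-years
```

Note that 93 billion light-years is far larger than 13.8 billion light-years, because the universe has expanded while the light was in transit; this is the misconception addressed by the museum plaque pictured in this section.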
Observable universe
–
Hubble Ultra-Deep Field image of a region of the observable universe (equivalent sky area size shown in bottom left corner), near the constellation Fornax. Each spot is a galaxy, consisting of billions of stars. The light from the smallest, most red-shifted galaxies originated nearly 14 billion years ago.
Observable universe
–
Visualization of the whole observable universe. The scale is such that the fine grains represent collections of large numbers of superclusters. The Virgo Supercluster – home of Milky Way – is marked at the center, but is too small to be seen.
Observable universe
–
An example of one of the most common misconceptions about the size of the observable universe. Despite the fact that the universe is 13.8 billion years old, the distance to the edge of the observable universe is not 13.8 billion light-years, because the universe is expanding. This plaque appears at the Rose Center for Earth and Space in New York City.
Observable universe
–
Image (computer simulated) of an area of space more than 50 million light years across, presenting a possible large-scale distribution of light sources in the universe - precise relative contributions of galaxies and quasars are unclear.
71.
Proton
–
A proton is a subatomic particle, symbol p or p+, with a positive electric charge of +1e (elementary charge) and a mass slightly less than that of a neutron. Protons and neutrons, each with a mass of approximately one atomic mass unit, are collectively referred to as nucleons. One or more protons are present in the nucleus of every atom; the number of protons in the nucleus is the defining property of an element, and is referred to as the atomic number. Since each element has a unique number of protons, each element has its own unique atomic number. The word proton is Greek for first, and this name was given to the hydrogen nucleus by Ernest Rutherford in 1920. In previous years, Rutherford had discovered that the hydrogen nucleus could be extracted from the nuclei of nitrogen by atomic collisions. Protons were therefore a candidate to be a fundamental particle, and hence a building block of nitrogen and all other atomic nuclei. In the modern Standard Model of particle physics, protons are hadrons and, like neutrons, are composed of quarks. Although protons were originally considered fundamental or elementary particles, they are now known to be composed of three valence quarks: two up quarks and one down quark. The rest masses of the quarks contribute only about 1% of a proton's mass; the remainder of a proton's mass is due to quantum chromodynamics binding energy, which includes the kinetic energy of the quarks and the energy of the gluon fields that bind the quarks together. At sufficiently low temperatures, free protons will bind to electrons; however, the character of such bound protons does not change, and they remain protons. A fast proton moving through matter will slow by interactions with electrons and nuclei; the result is a protonated atom, which is a chemical compound of hydrogen. In a vacuum, when free electrons are present, a sufficiently slow proton may pick up a single free electron, becoming a neutral hydrogen atom. Such free hydrogen atoms tend to react chemically with many other types of atoms at sufficiently low energies.
When free hydrogen atoms react with each other, they form neutral hydrogen molecules. Protons are spin-½ fermions and are composed of three quarks, making them baryons. Protons have an approximately exponentially decaying positive charge distribution with a root-mean-square charge radius of about 0.8 fm. Protons and neutrons are both nucleons, which may be bound together by the nuclear force to form atomic nuclei. The nucleus of the most common isotope of the hydrogen atom is a lone proton.
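The claim that quark rest masses account for only about 1% of the proton's mass can be checked against rough published values; the quark masses below are approximate current-quark masses, which carry sizable uncertainties:

```python
PROTON_MASS_MEV = 938.3   # proton rest-mass energy, MeV/c^2
UP_QUARK_MEV = 2.2        # approximate up-quark current mass, MeV/c^2
DOWN_QUARK_MEV = 4.7      # approximate down-quark current mass, MeV/c^2

# A proton's valence-quark content is uud: two up quarks and one down quark.
quark_rest_mass = 2 * UP_QUARK_MEV + 1 * DOWN_QUARK_MEV
fraction = quark_rest_mass / PROTON_MASS_MEV
print(f"{fraction:.1%} of the proton's mass is quark rest mass")
```

The remaining roughly 99% is QCD binding energy: quark kinetic energy plus the energy of the gluon fields, as the text states.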
Proton
–
Ernest Rutherford at the first Solvay Conference, 1911
Proton
–
The quark structure of the proton. The color assignment of individual quarks is arbitrary, but all three colors must be present. Forces between quarks are mediated by gluons.
72.
General theory of relativity
–
General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present; the relation is specified by the Einstein field equations, a system of partial differential equations. Some predictions of general relativity differ from those of classical physics; examples of such differences include gravitational time dilation, gravitational lensing, and the gravitational redshift of light. The predictions of general relativity have been confirmed in all observations and experiments to date, and although general relativity is not the only relativistic theory of gravity, it is the simplest theory consistent with the experimental data. Einstein's theory has important astrophysical implications. For example, it implies the existence of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape—as an end-state for massive stars. The bending of light by gravity can lead to the phenomenon of gravitational lensing, and general relativity also predicts the existence of gravitational waves, which have since been observed directly by the physics collaboration LIGO. In addition, general relativity is the basis of current cosmological models of an expanding universe. Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall, he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. The Einstein field equations are nonlinear and very difficult to solve.
Einstein used approximation methods in working out initial predictions of the theory, but as early as 1916 the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse. In 1917, Einstein applied his theory to the universe as a whole; in line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations, the cosmological constant, to match that observational presumption. By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life.
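The Schwarzschild solution associates with any mass a characteristic length, the Schwarzschild radius r_s = 2GM/c², inside which not even light can escape. A quick sketch, using standard constant values, for a 10-solar-mass black hole like the one pictured in this section:

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
SOLAR_MASS = 1.989e30  # kg

def schwarzschild_radius_m(mass_kg):
    """Event-horizon radius r_s = 2*G*M/c^2 of a non-rotating mass, in metres."""
    return 2 * G * mass_kg / C**2

r = schwarzschild_radius_m(10 * SOLAR_MASS)
print(r / 1000)  # roughly 30 km for a 10-solar-mass black hole
```

The smallness of this radius relative to a star's size is why black holes arise only as an end-state of gravitational collapse.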
General theory of relativity
–
A simulated black hole of 10 solar masses within the Milky Way, seen from a distance of 600 kilometers.
General theory of relativity
–
Albert Einstein developed the theories of special and general relativity. Picture from 1921.
General theory of relativity
–
Einstein cross: four images of the same astronomical object, produced by a gravitational lens
General theory of relativity
–
Artist's impression of the space-borne gravitational wave detector LISA
73.
Inclined plane
–
An inclined plane, also known as a ramp, is a flat supporting surface tilted at an angle, with one end higher than the other, used as an aid for raising or lowering a load. The inclined plane is one of the six simple machines defined by Renaissance scientists. Moving an object up an inclined plane requires less force than lifting it straight up, at the cost of an increase in the distance moved. The mechanical advantage of an inclined plane, the factor by which the force is reduced, is equal to the ratio of the length of the sloped surface to the height it spans. The angle of friction, also called the angle of repose, is the maximum angle at which a load can rest motionless on an inclined plane due to friction; this angle is equal to the arctangent of the coefficient of static friction μs between the surfaces. Two other simple machines are often considered to be derived from the inclined plane: the wedge can be considered a moving inclined plane or two inclined planes connected at the base, and the screw consists of a narrow inclined plane wrapped around a cylinder. The term may also refer to a specific implementation: a straight ramp cut into a steep hillside for transporting goods up and down the hill, which may include cars on rails or pulled up by a cable system. Inclined planes are widely used in the form of loading ramps to load and unload goods on trucks, ships, and planes. Wheelchair ramps allow people in wheelchairs to get over vertical obstacles without exceeding their strength. Escalators and slanted conveyor belts are also forms of inclined plane, and in a funicular or cable railway a railroad car is pulled up a steep inclined plane using cables. Inclined planes also allow heavy objects, including humans, to be safely lowered down a vertical distance by using the normal force of the plane to reduce the gravitational force; aircraft evacuation slides allow people to rapidly and safely reach the ground from the height of a passenger airliner. Other inclined planes are built into permanent structures.
Similarly, pedestrian paths and sidewalks have gentle ramps to limit their slope. Inclined planes are also used as entertainment for people to slide down in a controlled way, as in playground slides, water slides, ski slopes, and skateboard parks. Inclined planes have been used by people since prehistoric times to move heavy objects: the Egyptian pyramids were constructed using inclined planes, and siege ramps enabled ancient armies to surmount fortress walls. The ancient Greeks constructed a paved ramp 6 km long, the Diolkos, to drag ships overland across the Isthmus of Corinth. However, the inclined plane was the last of the six classic simple machines to be recognised as a machine, probably because it is a passive, motionless device. Although they understood its use in lifting heavy objects, the ancient Greek philosophers who defined the other five simple machines did not include the inclined plane as a machine. This view persisted among a few later scientists; as late as 1826 Karl von Langsdorf wrote that an inclined plane "is no more a machine than is the slope of a mountain." The problem of calculating the force required to push a weight up an inclined plane was attempted by the Greek philosophers Heron of Alexandria and Pappus of Alexandria.
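The two quantitative relations above, the mechanical advantage MA = length/height and the angle of repose θ = arctan(μs), can be sketched directly (the friction coefficient used below is an illustrative value, not from the text):

```python
import math

def mechanical_advantage(slope_length, height):
    """Ideal mechanical advantage of a frictionless inclined plane:
    the ratio of the sloped surface's length to the height it spans."""
    return slope_length / height

def angle_of_repose_deg(mu_static):
    """Maximum angle (degrees) at which a load rests motionless on the plane,
    equal to the arctangent of the coefficient of static friction."""
    return math.degrees(math.atan(mu_static))

print(mechanical_advantage(5.0, 1.0))  # 5.0: a 100 N load needs only ~20 N of force along the slope
print(angle_of_repose_deg(0.5))        # ~26.6 degrees for mu_s = 0.5 (illustrative coefficient)
```

The trade-off is the one the text describes: the force is reduced by the mechanical advantage, but the load must be moved a proportionally longer distance along the slope.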
Inclined plane
–
Wheelchair ramp, Hotel Montescot, Chartres, France
Inclined plane
–
Using ramps to load a car on a truck
Inclined plane
–
Loading a truck on a ship using a ramp
Inclined plane
–
Aircraft emergency evacuation slide
74.
David Scott
–
David Randolph "Dave" Scott is an American engineer, retired U.S. Air Force officer, former test pilot, and former NASA astronaut. He belonged to the third group of NASA astronauts, selected in October 1963. As an astronaut, Scott became the seventh person to walk on the Moon. Before becoming an astronaut, Scott graduated from the United States Military Academy at West Point, and he later graduated from the Air Force Experimental Test Pilot School and Aerospace Research Pilot School. Scott retired from the Air Force in 1975 with the rank of colonel. As an astronaut, Scott made his first flight into space as pilot of the Gemini 8 mission, along with Neil Armstrong, in March 1966, spending just under eleven hours in low Earth orbit. Scott then spent ten days in orbit as Command Module Pilot aboard Apollo 9, his second spaceflight, along with Commander James McDivitt; during this mission, Scott became the last American to fly solo in Earth orbit. Scott was born June 6, 1932, on Randolph Field near San Antonio, Texas, and was active in the Boy Scouts of America, where he achieved its second-highest rank, Life Scout. Scott was educated at Texas Military Institute and Riverside Polytechnic High School in Riverside, California, and then attended Western High School in Washington, D.C., graduating in June 1949. In D.C., he was a member of the school swim team. Because of his standing in his class, he was able to choose which branch of the military he would serve in, and Scott chose the Air Force because he wanted to fly jets. He completed Undergraduate Pilot Training at Webb Air Force Base, Texas, in 1955 and then reported for gunnery training at Laughlin Air Force Base, Texas, and Luke Air Force Base, Arizona. He was assigned to the 32d Tactical Fighter Squadron at Soesterberg Air Base, Netherlands, from April 1956 to July 1960, flying F-86 Sabres. Upon completing this tour of duty, he returned to the United States for study at the Massachusetts Institute of Technology.
He received both a Master of Science degree in Aeronautics/Astronautics and the degree of Engineer in Aeronautics/Astronautics from MIT in 1962, and he also received an Honorary Doctorate of Astronautical Science from the University of Michigan in 1971. In 1959 he married his first wife, Ann, with whom he has two children, Tracy and Douglas. He is of Scottish descent, and his interests include swimming, handball, and skiing. Scott was the first of the Group Three astronauts to be selected to fly and was also the first of the group to command a mission of his own. On Gemini 8, the crew performed the first successful docking of two vehicles in space and demonstrated great piloting skill in overcoming a thruster malfunction and bringing the spacecraft to a safe landing. Scott would later perform EVAs on his two subsequent flights. Scott served as Command Module Pilot for Apollo 9.
David Scott
–
David Randolph Scott
David Scott
–
Recovery of the Gemini 8 spacecraft from the western Pacific Ocean
David Scott
–
Scott stands in the open hatch of the Apollo 9 Command Module Gumdrop
David Scott
–
One of the first day covers
75.
Apollo 15
–
Apollo 15 was the ninth manned mission in the United States Apollo program, the fourth to land on the Moon, and the eighth successful manned mission. It was the first of what were termed J missions, featuring longer stays on the Moon, and it was also the first mission on which the Lunar Roving Vehicle was used. The mission began on July 26, 1971, and ended on August 7; at the time, NASA called it the most successful manned flight ever achieved. Commander David Scott and Lunar Module Pilot James Irwin spent three days on the Moon, including 18½ hours outside the spacecraft on lunar extra-vehicular activity. The mission landed near Hadley Rille, in an area of the Mare Imbrium called Palus Putredinus. The crew explored the area using the first lunar rover, which allowed them to travel farther from the Lunar Module than had been possible on missions without the rover, and they collected 77 kilograms of lunar surface material. Scott had attended the University of Michigan but left before graduating to accept an appointment to the United States Military Academy; the crewmen did their undergraduate work at either the United States Military Academy or the United States Naval Academy. The support crew comprised C. Gordon Fullerton, Joseph P. Allen, Robert A. Parker, and Karl G. Henize. The initial Earth parking orbit had an apogee of 171.3 km, an inclination of 29.679°, and a period of just over 87 minutes. There had been a rivalry between the prime and backup crew on that mission, with the prime crew being all United States Navy. Originally Apollo 15 would have been an H mission, like Apollos 12, 13, and 14, but on September 2, 1970, NASA announced it was canceling what were to be the current incarnations of the Apollo 15 and Apollo 19 missions. To maximize the return from the remaining missions, Apollo 15 would now fly as a J mission and have the honor of carrying the first lunar rover. One of the changes in the training for Apollo 15 was the geology training, which went further than the field geology training the crews had received on previous flights.
Scott and Irwin would train with Leon Silver, a Caltech geologist whose work on Earth concerned the Precambrian; Silver had been suggested by Harrison Schmitt as an alternative to the classroom lecturers that NASA had previously used. Among other things, Silver had made important refinements in the late 1950s to the methods for dating rocks using the decay of uranium into lead. During field exercises, crews began to wear mock-ups of the backpacks they would carry on the Moon, and to communicate using walkie-talkies with a CAPCOM in a tent. The CAPCOM was accompanied by a group of geologists unfamiliar with the area, who would rely on the crew's descriptions to interpret the findings. The decision to land at Hadley came in September 1970. The site selection committees had narrowed the field down to two sites: Hadley Rille, or the crater Marius, near which lay a group of low, possibly volcanic, domes. Although the choice was not ultimately his to make, the commander of a mission always held great sway; to David Scott the choice was clear, with Hadley being "exploration at its finest."
Apollo 15
–
Jim Irwin with the Lunar Roving Vehicle on the first lunar surface EVA of Apollo 15
Apollo 15
Apollo 15
–
Commander David Scott during geology training in New Mexico on March 19, 1971
Apollo 15
–
Apollo 15 launches on July 26, 1971
76.
Theoretical physics
–
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena. The advancement of science depends in general on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigor while giving little weight to experiments and observations. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation. A physical theory is a model of physical events, and it is judged by the extent to which its predictions agree with empirical observations; the quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory similarly differs from a mathematical theory, in the sense that the word theory has a different meaning in mathematical terms. A physical theory involves one or more relationships between various measurable quantities: Archimedes realized that a ship floats by displacing its mass of water, and Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles. Theoretical physics consists of several different approaches; in this regard, theoretical particle physics forms a good example. For instance, phenomenologists might employ empirical formulas to agree with experimental results, often without deep physical understanding. Modelers often appear much like phenomenologists, but try to model speculative theories that have certain desirable features. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated.
Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled, e.g. the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics. Theoretical advances may consist in setting aside old, incorrect paradigms, or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result; sometimes, though, advances may proceed along different paths. However, an exception to all the above is the wave–particle duality. Physical theories become accepted if they are able to make correct predictions and no incorrect ones. They are also likely to be accepted if they connect a wide range of phenomena, and testing the consequences of a theory is part of the scientific method. Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories. Theoretical physics began at least 2,300 years ago, under the Pre-Socratic philosophy. During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon.
Theoretical physics
–
Visual representation of a Schwarzschild wormhole. Wormholes have never been observed, but they are predicted to exist through mathematical models and scientific theory.
77.
Balance scales
–
Weighing scales are devices to measure weight or calculate mass. Scales and balances are used in commerce, as many products are sold by weight. Very accurate balances, called analytical balances, are used in fields such as chemistry. Although records dating to the 1700s refer to spring scales for measuring weight, the earliest design for such a device dates to 1770 and credits Richard Salter, an early scale-maker. Postal workers could work more quickly with spring scales than balance scales because they could be read instantaneously. By the 1940s various electronic devices were being attached to these designs to make them more accurate. A spring scale measures weight by reporting the distance that a spring deflects under a load; this contrasts with a balance, which compares the torque on the arm due to a sample weight to the torque on the arm due to a standard reference weight using a horizontal lever. Spring scales measure force, which is the force of constraint acting on an object. They are usually calibrated so that measured force translates to mass at Earth's gravity; the object to be weighed can be simply hung from the spring or set on a pivot and bearing platform. In a spring scale, the spring either stretches or compresses; by Hooke's law, every spring has a proportionality constant that relates how hard it is pulled to how far it stretches. Rack and pinion mechanisms are used to convert the linear spring motion to a dial reading. With proper manufacturing and setup, however, spring scales can be rated as legal for commerce. To remove the temperature error, a commerce-legal spring scale must either have temperature-compensated springs or be used at a fairly constant temperature. To eliminate the effect of gravity variations, a spring scale must be calibrated where it is used. It is also common in high-capacity applications such as crane scales to use hydraulic force to sense weight.
The test force is applied to a piston or diaphragm and transmitted through hydraulic lines to an indicator based on a Bourdon tube or electronic sensor. A digital bathroom scale is a type of electronic weighing machine; it may be a smart scale with many functions, such as smartphone integration, cloud storage and fitness tracking. In electronic versions of spring scales, the deflection of a beam supporting the weight is measured using a strain gauge. The capacity of such devices is limited only by the resistance of the beam to deflection. These scales are used in the modern bakery, grocery, delicatessen, seafood, meat, produce and other perishable goods departments.
Balance scales
–
Digital kitchen scale, a strain gauge scale
Balance scales
–
Scales used for trade purposes in the state of Florida, such as this scale at the checkout in a cafeteria, are inspected for accuracy by the FDACS's Bureau of Weights and Measures.
Balance scales
–
A two-pan balance
Balance scales
–
Two 10-decagram masses
78.
Nineteenth dynasty of Egypt
–
The Nineteenth Dynasty of ancient Egypt was one of the periods of the Egyptian New Kingdom. It was founded by Vizier Ramesses I, whom Pharaoh Horemheb chose as his successor to the throne. The warrior kings of the early 18th Dynasty had encountered only little resistance from neighbouring kingdoms, allowing them to expand their realm of influence easily, but the situation had changed radically towards the end of the 18th Dynasty. The Hittites gradually extended their influence into Syria and Canaan to become a major power in international politics, a power that both Seti I and his son Ramesses II would need to deal with. The Pharaohs of the 19th Dynasty ruled for one hundred and ten years. Seti I's reign is considered to be 11 years, not 15 years, by both J. von Beckerath and Peter Brand, who wrote a biography on this pharaoh's reign. Consequently, his reign is amended to 11 years, or 1290-1279 BC; therefore, Seti's father and predecessor would have ruled Egypt between 1292-1290 BC. Many of the pharaohs were buried in the Valley of the Kings in Thebes; more information can be found on the Theban Mapping Project website. New Kingdom Egypt reached the zenith of its power under Seti I and Ramesses II, who campaigned vigorously against the Libyans and the Hittites. The city of Kadesh was first captured by Seti I, who decided to concede it to Muwatalli of Hatti in a peace treaty between Egypt and Hatti. Ramesses II ultimately accepted that a campaign against the Hittites was a drain on Egypt's treasury and military. In his 21st regnal year, Ramesses signed the first recorded peace treaty with Urhi-Teshub's successor, Hattusili III, and Ramesses II even married two Hittite princesses, the first after his second Sed Festival. At least as early as Josephus, it was believed that Moses lived during the reign of Ramesses II. This dynasty declined as internal fighting between the heirs of Merneptah for the throne increased.
Amenmesse apparently usurped the throne from Merneptah's son and successor, Seti II; after Amenmesse's death, Seti regained power and destroyed most of Amenmesse's monuments. Both Bay and Seti's chief wife Twosret had a sinister reputation in Ancient Egyptian folklore. After Siptah's death, Twosret ruled Egypt for two years, but she proved unable to maintain her hold on power amid the conspiracies. She was likely ousted in a revolt led by Setnakhte, founder of the Twentieth Dynasty.
Nineteenth dynasty of Egypt
–
Seti I
Nineteenth dynasty of Egypt
–
Egyptian and Hittite Empires, around the time of the Battle of Kadesh (1274 BC).
Nineteenth dynasty of Egypt
–
Ramesses II
Nineteenth dynasty of Egypt
–
Seti II
79.
Anubis
–
Anubis or Anpu is the Greek name of a god associated with mummification and the afterlife in ancient Egyptian religion, usually depicted as a canine or a man with a canine head. Like many ancient Egyptian deities, Anubis assumed different roles in various contexts. Depicted as a protector of graves as early as the First Dynasty, Anubis was also an embalmer. By the Middle Kingdom he was replaced by Osiris in his role as lord of the underworld. One of his prominent roles was as a god who ushered souls into the afterlife, and he attended the weighing scale during the Weighing of the Heart. Despite being one of the most ancient and one of the most frequently depicted and mentioned gods in the Egyptian pantheon, Anubis played almost no role in Egyptian myths. Anubis was depicted in black, a color that symbolized both rebirth and the discoloration of the corpse after embalming. Anubis is associated with Wepwawet, another Egyptian god portrayed with a dog's head or in canine form, but with grey or white fur; historians assume that the two figures were eventually combined. His daughter is the serpent goddess Kebechet. Anubis is a Greek rendering of this god's Egyptian name. In Egypt's Early Dynastic period, Anubis was portrayed in full animal form, with a jackal head and body. A jackal god, probably Anubis, is depicted in stone inscriptions from the reigns of Hor-Aha and Djer. The oldest known textual mention of Anubis is in the Pyramid Texts of the Old Kingdom, where he is associated with the burial of the pharaoh. In the Old Kingdom, Anubis was the most important god of the dead; he was replaced in that role by Osiris during the Middle Kingdom. In the Roman era, which started in 30 BC, tomb paintings depict him holding the hand of deceased persons to guide them to Osiris. The parentage of Anubis varied between myths, times and sources; in early mythology, he was portrayed as a son of Ra.
In the Coffin Texts, which were written in the First Intermediate Period, another tradition depicted him as the son of Ra and Nephthys. George Hart sees this story as an attempt to incorporate the independent deity Anubis into the Osirian pantheon, and an Egyptian papyrus from the Roman period simply called Anubis the son of Isis. In the Ptolemaic period, when Egypt became a Hellenistic kingdom ruled by Greek pharaohs, Anubis was merged with the Greek god Hermes; the two gods were considered similar because they both guided souls to the afterlife. The center of this cult was in uten-ha/Sa-ka/Cynopolis, a place whose Greek name means city of dogs. In Book XI of The Golden Ass by Apuleius, there is evidence that the worship of this god continued in Rome through at least the 2nd century. Indeed, Hermanubis also appears in the alchemical and hermetical literature of the Middle Ages. In contrast to real wolves, Anubis was a protector of graves and cemeteries, and several epithets attached to his name in Egyptian texts and inscriptions referred to that role. The Jumilhac papyrus recounts another tale where Anubis protected the body of Osiris from Set: Set attempted to attack the body of Osiris by transforming himself into a leopard, but Anubis stopped and subdued him, and branded Set's skin with a hot iron rod.
Anubis
–
Anubis attending the mummy of the deceased.
Anubis
–
Statue of Hermanubis, a hybrid of Anubis and the Greek god Hermes (Vatican Museums)
Anubis
–
The "weighing of the heart," from the book of the dead of Hunefer. Anubis is portrayed as both guiding the deceased forward and manipulating the scales, under the scrutiny of the ibis-headed Thoth.
Anubis
–
A crouching or "recumbent" statue of Anubis as a black-coated wolf (from the Tomb of Tutankhamun)
80.
Balance scale
–
Weighing scales are devices to measure weight or calculate mass. Scales and balances are used in commerce, as many products are sold by weight. Very accurate balances, called analytical balances, are used in fields such as chemistry. Although records dating to the 1700s refer to spring scales for measuring weight, the earliest design for such a device dates to 1770 and credits Richard Salter, an early scale-maker. Postal workers could work more quickly with spring scales than balance scales because they could be read instantaneously. By the 1940s various electronic devices were being attached to these designs to make them more accurate. A spring scale measures weight by reporting the distance that a spring deflects under a load; this contrasts with a balance, which compares the torque on the arm due to a sample weight to the torque on the arm due to a standard reference weight using a horizontal lever. Spring scales measure force, which is the force of constraint acting on an object. They are usually calibrated so that measured force translates to mass at Earth's gravity; the object to be weighed can be simply hung from the spring or set on a pivot and bearing platform. In a spring scale, the spring either stretches or compresses; by Hooke's law, every spring has a proportionality constant that relates how hard it is pulled to how far it stretches. Rack and pinion mechanisms are used to convert the linear spring motion to a dial reading. With proper manufacturing and setup, however, spring scales can be rated as legal for commerce. To remove the temperature error, a commerce-legal spring scale must either have temperature-compensated springs or be used at a fairly constant temperature. To eliminate the effect of gravity variations, a spring scale must be calibrated where it is used. It is also common in high-capacity applications such as crane scales to use hydraulic force to sense weight.
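The torque comparison a two-pan balance performs, as described above, can be sketched numerically. This is a hypothetical illustration (arm length, masses and tolerance are invented values); note that g appears on both sides of the comparison and cancels, which is why a balance effectively compares masses and reads the same anywhere on Earth.

```python
# Sketch of the torque comparison in a two-pan balance (hypothetical values).

G = 9.80665  # m/s^2; any value of g gives the same comparison result

def torque(mass_kg, arm_m, g=G):
    """Torque (N*m) about the pivot from a mass hung at distance arm_m."""
    return mass_kg * g * arm_m

def balances(sample_kg, reference_kg, arm_m=0.2, tol=1e-9):
    """True if sample and reference produce equal torque on equal arms."""
    return abs(torque(sample_kg, arm_m) - torque(reference_kg, arm_m)) < tol

print(balances(0.500, 0.500))  # True: equal masses balance
print(balances(0.500, 0.505))  # False: a 5 g difference tips the beam
```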
The test force is applied to a piston or diaphragm and transmitted through hydraulic lines to an indicator based on a Bourdon tube or electronic sensor. A digital bathroom scale is a type of electronic weighing machine; it may be a smart scale with many functions, such as smartphone integration, cloud storage and fitness tracking. In electronic versions of spring scales, the deflection of a beam supporting the weight is measured using a strain gauge. The capacity of such devices is limited only by the resistance of the beam to deflection. These scales are used in the modern bakery, grocery, delicatessen, seafood, meat, produce and other perishable goods departments.
Balance scale
–
Digital kitchen scale, a strain gauge scale
Balance scale
–
Scales used for trade purposes in the state of Florida, such as this scale at the checkout in a cafeteria, are inspected for accuracy by the FDACS's Bureau of Weights and Measures.
Balance scale
–
A two-pan balance
Balance scale
–
Two 10-decagram masses
81.
Tycho Brahe
–
Tycho Brahe, born Tyge Ottesen Brahe, was a Danish nobleman known for his accurate and comprehensive astronomical and planetary observations. He was born in the then Danish peninsula of Scania. Well known in his lifetime as an astronomer, astrologer and alchemist, he has been described as the first competent mind in modern astronomy to feel ardently the passion for exact empirical facts. His observations were some five times more accurate than the best available observations at the time. An heir to several of Denmark's principal noble families, he received a comprehensive education. He took an interest in astronomy and in the creation of more accurate instruments of measurement. His system correctly saw the Moon as orbiting Earth, and the planets as orbiting the Sun. Furthermore, he was the last of the major naked-eye astronomers, working without telescopes for his observations. In his De nova stella of 1573, he refuted the Aristotelian belief in an unchanging celestial realm. Using similar measurements he showed that comets were also not atmospheric phenomena, as previously thought. On the island of Hven he founded manufactories, such as a paper mill, to provide material for printing his results. He built an observatory at Benátky nad Jizerou; there, from 1600 until his death in 1601, he was assisted by Johannes Kepler, who later used Tycho's astronomical data to develop his three laws of planetary motion. Tycho's body has been exhumed twice, in 1901 and 2010, to examine the circumstances of his death. Both of his grandfathers and all of his great-grandfathers had served as members of the Danish king's Privy Council. His paternal grandfather and namesake Thyge Brahe was the lord of Tosterup Castle in Scania. Tycho's father Otte Brahe, like his father a royal Privy Councilor, married Beate Bille, who was herself a powerful figure at the Danish court, holding several royal land titles.
Both parents are buried under the floor of Kågeröd Church, four kilometres east of Knutstorp. Tycho was born at his family's ancestral seat of Knutstorp Castle, about eight kilometres north of Svalöv in then Danish Scania. He was the oldest of 12 siblings, 8 of whom lived to adulthood; his twin brother died before being baptized. Tycho later wrote an ode in Latin to his dead twin. An epitaph, originally from Knutstorp but now on a plaque near the church door, shows the whole family, including Tycho as a boy. When he was two years old, Tycho was taken away to be raised by his uncle Jørgen Thygesen Brahe; it is unclear why Otte Brahe reached this arrangement with his brother. Tycho later wrote that Jørgen Brahe "raised me and generously provided for me during his life until my eighteenth year; he always treated me as his own son and made me his heir." From ages 6 to 12, Tycho attended Latin school, probably in Nykøbing. At age 12, on 19 April 1559, Tycho began studies at the University of Copenhagen. There, following his uncle's wishes, he studied law, but also studied a variety of other subjects. At the University, Aristotle was a staple of scientific theory, and Tycho likely received a thorough training in Aristotelian physics and cosmology. He experienced the solar eclipse of 21 August 1560, and was impressed by the fact that it had been predicted.
Tycho Brahe
–
Brahe wearing the Order of the Elephant
Tycho Brahe
–
Portrait of Tycho Brahe (1596) Skokloster Castle
Tycho Brahe
–
An artificial nose of the kind Tycho wore. This particular example did not belong to Tycho.
Tycho Brahe
–
Tycho Brahe's grave in Prague, new tomb stone from 1901
82.
Elliptical
–
In mathematics, an ellipse is a curve in a plane surrounding two focal points such that the sum of the distances to the two focal points is constant for every point on the curve. As such, it is a generalization of a circle, which is the special type of ellipse having both focal points at the same location. The shape of an ellipse is represented by its eccentricity, which for an ellipse can be any number from 0 to arbitrarily close to (but less than) 1. Ellipses are the closed type of conic section: a plane curve resulting from the intersection of a cone by a plane. Ellipses have many similarities with the other two forms of conic sections, parabolas and hyperbolas, both of which are open and unbounded. The cross section of a cylinder is an ellipse, unless the section is parallel to the axis of the cylinder. An ellipse may also be defined as the set of points whose distance to a fixed point (a focus) is in a constant ratio, less than 1, to the distance to a fixed line (a directrix); this ratio is called the eccentricity of the ellipse. Ellipses are common in physics, astronomy and engineering. For example, the orbit of each planet in our solar system is approximately an ellipse with the barycenter of the planet–Sun pair at one of the focal points. The same is true for moons orbiting planets and all other systems having two astronomical bodies. The shapes of planets and stars are often well described by ellipsoids. The ellipse is also the simplest Lissajous figure, formed when the horizontal and vertical motions are sinusoids with the same frequency; a similar effect leads to elliptical polarization of light in optics. The name, ἔλλειψις, was given by Apollonius of Perga in his Conics. In order to omit the special case of a line segment, one presumes 2a > |F1F2|, and defines the ellipse as the set E = {P : |PF1| + |PF2| = 2a}. The midpoint C of the line segment joining the foci is called the center of the ellipse. The line through the foci is called the major axis, and it contains the vertices V1, V2, which have distance a to the center. The distance c of the foci to the center is called the focal distance or linear eccentricity. The quotient c/a is the eccentricity e. The case F1 = F2 yields a circle and is included as a special type of ellipse.
The circle c2 with midpoint F2 and radius 2a is called the circular directrix of the ellipse; this property should not be confused with the definition of an ellipse with the help of a directrix (a line) below. For an arbitrary point (x, y) the distance to the focus (c, 0) is √((x − c)² + y²) and to the second focus (−c, 0) it is √((x + c)² + y²). Hence the point is on the ellipse if the condition √((x − c)² + y²) + √((x + c)² + y²) = 2a is fulfilled. The shape parameters a and b are called the semi-major and semi-minor axes. The points V3 = (0, b) and V4 = (0, −b) are the co-vertices. It follows from the equation that the ellipse is symmetric with respect to both of the coordinate axes and hence symmetric with respect to the origin.
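The defining focal-sum property, that |PF1| + |PF2| = 2a for every point P on the curve, is easy to check numerically. A small sketch (the values of a and b are arbitrary examples, not from the text): points generated by the standard parametrization (a cos t, b sin t) all yield the same focal sum.

```python
import math

# Check the focal-sum property for points on the ellipse x^2/a^2 + y^2/b^2 = 1.
# a and b are arbitrary example values with a > b > 0.
a, b = 5.0, 3.0
c = math.sqrt(a * a - b * b)  # linear eccentricity: distance of each focus from center

def focal_sum(x, y):
    """Sum of distances from (x, y) to the two foci (c, 0) and (-c, 0)."""
    return math.hypot(x - c, y) + math.hypot(x + c, y)

# Sample points on the ellipse via the parametrization (a cos t, b sin t).
for t in [0.0, 0.7, 1.9, 3.1]:
    x, y = a * math.cos(t), b * math.sin(t)
    assert abs(focal_sum(x, y) - 2 * a) < 1e-9
print("focal sums all equal 2a =", 2 * a)
```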
Elliptical
–
Drawing an ellipse with two pins, a loop, and a pen
Elliptical
–
An ellipse obtained as the intersection of a cone with an inclined plane.
83.
Square (algebra)
–
In mathematics, a square is the result of multiplying a number by itself. The verb "to square" is used to denote this operation. Squaring is the same as raising to the power 2, and is denoted by a superscript 2; for instance, the square of 3 may be written as 3², which is the number 9. In some cases when superscripts are not available, as for instance in programming languages or plain text files, the notations x^2 or x**2 may be used instead. The adjective which corresponds to squaring is quadratic. The square of an integer may also be called a square number or a perfect square. In algebra, the operation of squaring is often generalized to polynomials and other expressions; for instance, the square of the linear polynomial x + 1 is the quadratic polynomial x² + 2x + 1. One of the important properties of squaring, for numbers as well as in other mathematical systems, is that the square of a number equals the square of its negation: the function satisfies the identity x² = (−x)². This can also be expressed by saying that the squaring function is an even function. The squaring function preserves the order of positive numbers: larger numbers have larger squares. In other words, squaring is a monotonic function on the interval [0, +∞). Hence, zero is its global minimum. The only cases where the square x² of a number is less than x occur when 0 < x < 1, that is, when x belongs to the open interval (0, 1). This implies that the square of an integer is never less than the original number. Every positive real number is the square of exactly two numbers, one of which is strictly positive and the other of which is strictly negative; zero is the square of only one number, itself. For this reason, it is possible to define the square root function. No square root can be taken of a negative number within the system of real numbers, because squares of all real numbers are non-negative. There are several uses of the squaring function in geometry. The name of the squaring function shows its importance in the definition of area: the area depends quadratically on the size, so the area of a shape n times larger is n² times greater.
The squaring function is related to distance through the Pythagorean theorem and its generalization, the parallelogram law. Euclidean distance is not a smooth function: the three-dimensional graph of distance from a fixed point forms a cone, with a non-smooth point at the tip of the cone. However, the square of the distance, which has a paraboloid as its graph, is a smooth and analytic function. The dot product of a Euclidean vector with itself is equal to the square of its length: v⋅v = v².
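The properties listed above (evenness, monotonicity on the non-negative reals, and the behaviour on the interval (0, 1)) can be verified with a few quick checks. This is a simple illustration, not part of the source.

```python
# Quick numerical checks of the squaring properties described above.

def square(x):
    return x * x

# Even function: x^2 == (-x)^2 for every x
assert all(square(x) == square(-x) for x in [0, 1, 2.5, -7])

# Monotonic on [0, +inf): larger non-negative numbers have larger squares
assert square(3) < square(4)

# For 0 < x < 1 the square is smaller than the number itself...
assert square(0.5) < 0.5
# ...while for x > 1 it is larger, so integers never shrink when squared
assert square(1.5) > 1.5

print("all squaring identities hold")
```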
Square (algebra)
–
The composition of a tiling (understood as a function on the complex plane) with the complex square function
Square (algebra)
–
5⋅5, or 5² (5 squared), can be shown graphically using a square. Each block represents one unit, 1⋅1, and the entire square represents 5⋅5, or the area of the square.
84.
Solar System
–
The Solar System is the gravitationally bound system comprising the Sun and the objects that orbit it, either directly or indirectly. Of those objects that orbit the Sun directly, the largest eight are the planets, with the remainder being significantly smaller objects, such as dwarf planets. Of the objects that orbit the Sun indirectly, the moons, two are larger than the smallest planet, Mercury. The Solar System formed 4.6 billion years ago from the collapse of a giant interstellar molecular cloud. The vast majority of the system's mass is in the Sun. The four smaller inner planets, Mercury, Venus, Earth and Mars, are terrestrial planets, being composed primarily of rock. The four outer planets are giant planets, being substantially more massive than the terrestrials. All planets have almost circular orbits that lie within a nearly flat disc called the ecliptic. The Solar System also contains smaller objects. The asteroid belt, which lies between the orbits of Mars and Jupiter, mostly contains objects composed, like the terrestrial planets, of rock and metal. Beyond Neptune's orbit lie the Kuiper belt and scattered disc, which are populations of trans-Neptunian objects composed mostly of ices; within these populations are several dozen to possibly tens of thousands of objects large enough that they have been rounded by their own gravity. Such objects are categorized as dwarf planets; identified dwarf planets include the asteroid Ceres and the trans-Neptunian objects Pluto and Eris. In addition to these two regions, various other small-body populations, including comets, centaurs and interplanetary dust clouds, freely travel between regions. Six of the planets and at least four of the dwarf planets are orbited by natural satellites, and each of the outer planets is encircled by planetary rings of dust and other small objects. The solar wind, a stream of charged particles flowing outwards from the Sun, creates a bubble-like region in the interstellar medium known as the heliosphere. The heliopause is the point at which pressure from the solar wind is equal to the opposing pressure of the interstellar medium; it extends out to the edge of the scattered disc.
The Oort cloud, which is thought to be the source for long-period comets, may also exist at a distance roughly a thousand times further than the heliosphere. The Solar System is located in the Orion Arm, 26,000 light-years from the center of the Milky Way. For most of history, humanity did not recognize or understand the concept of the Solar System; the invention of the telescope led to the discovery of further planets and moons. The principal component of the Solar System is the Sun, a G2 main-sequence star that contains 99.86% of the system's known mass. The Sun's four largest orbiting bodies, the giant planets, account for 99% of the remaining mass, with Jupiter and Saturn together comprising more than 90%. The remaining objects of the Solar System together comprise less than 0.002% of the Solar System's total mass. Most large objects in orbit around the Sun lie near the plane of Earth's orbit, known as the ecliptic.
Solar System
–
The Sun and planets of the Solar System (distances not to scale)
Solar System
–
Solar System
Solar System
–
Andreas Cellarius 's illustration of the Copernican system, from the Harmonia Macrocosmica (1660)
Solar System
–
The eight planets of the Solar System (by decreasing size) are Jupiter, Saturn, Uranus, Neptune, Earth, Venus, Mars and Mercury.
85.
Vincenzo Viviani
–
Vincenzo Viviani was an Italian mathematician and scientist. He was a pupil of Torricelli and a disciple of Galileo. Born and raised in Florence, Viviani studied at a Jesuit school. There, Grand Duke Ferdinando II de' Medici furnished him a scholarship to purchase mathematical books. He became a pupil of Evangelista Torricelli and worked on physics and geometry. In 1639, at the age of 17, he was an assistant of Galileo Galilei in Arcetri, and he remained a disciple until Galileo's death in 1642. From 1655 to 1656, Viviani edited the first edition of Galileo's collected works. After Torricelli's death in 1647, Viviani was appointed to fill his position at the Accademia dell'Arte del Disegno in Florence. Viviani was also one of the first members of the Grand Duke's experimental academy, the Accademia del Cimento. In 1660, Viviani and Giovanni Alfonso Borelli conducted an experiment to determine the speed of sound; the currently accepted value is 331.29 m/s at 0 °C, or 340.29 m/s at sea level. It has also been claimed that in 1661 he experimented with the rotation of pendulums, 190 years before the famous demonstration by Foucault. By 1666, Viviani started to receive many job offers as his reputation as a mathematician grew; that same year, Louis XIV of France offered him a position at the Académie Royale, and John II Casimir of Poland offered Viviani a post as his astronomer. Fearful of losing Viviani, the Grand Duke appointed him court mathematician; Viviani accepted this post and turned down his other offers. In 1687, he published a book on engineering, Discorso intorno al difendersi da riempimenti e dalle corrosione de fiumi. Upon his death, Viviani left an almost completed work on the resistance of solids, which was subsequently completed and published by Luigi Guido Grandi.
In 1737, the Church finally allowed Galileo to be reburied in a grave with an elaborate monument; the monument, created in the church of Santa Croce, was constructed with the help of funds left by Viviani for that specific purpose. Viviani's own remains were moved to Galileo's new grave as well. The lunar crater Viviani is named after him. In Florence, Viviani had Galileo's life and achievements written in Latin on the façade of his palace, which was then renamed the Palazzo dei Cartelloni.
Vincenzo Viviani
–
Vincenzo Viviani
Vincenzo Viviani
–
The "Palazzo Viviani" or "Palazzo dei Cartelloni" with plaques and bust dedicated by Viviani to Galilei
86.
Ball
–
A ball is a round object with various uses. It is used in games, where the play of the game follows the state of the ball as it is hit, kicked or thrown by players. Balls can also be used for simpler activities, such as catch or marbles. Balls made from hard-wearing materials are used in engineering applications to provide very low friction bearings, and black-powder weapons use stone and metal balls as projectiles. Although many types of balls are today made from rubber, this form was unknown outside the Americas until after the voyages of Columbus. The Spanish were the first Europeans to see bouncing rubber balls, which were employed most notably in the Mesoamerican ballgame. Balls used in various sports in other parts of the world prior to Columbus were made from other materials such as animal bladders or skins, stuffed with various materials. As balls are one of the most familiar spherical objects to humans, the word has cognates in many Germanic languages, though no Old English representative of any of these is known. If ball- was native in Germanic, it may have been a cognate with the Latin foll-is in the sense of a thing blown up or inflated. In the later Middle English spelling balle the word coincided graphically with the French balle; French balle is itself assumed to be of Germanic origin. In Ancient Greek the word πάλλα for ball is attested besides the word σφαίρα. A ball, as the essential feature in many forms of gameplay requiring physical exertion, must date from the very earliest times. A rolling object appeals not only to a baby but also to a kitten. Some form of game with a ball is found portrayed on Egyptian monuments. In Homer, Nausicaa was playing at ball with her maidens when Odysseus first saw her in the land of the Phaeacians, and Halios and Laodamas performed before Alcinous and Odysseus with ball play. Of regular rules for the playing of ball games, little trace remains, if there were any such.
Pollux mentions a game called episkyros, which has often been looked on as the origin of football. It seems to have been played by two sides, arranged in lines; how far there was any form of goal seems uncertain. Among the Romans, ball games were looked upon as an adjunct to the bath, and were graduated to the age and health of the bathers. The ball was struck from player to player, who wore a kind of gauntlet on the arm. These games are known to us through the Romans, though the names are Greek. The various modern games played with a ball or balls and subject to rules are treated under their various names, such as polo, cricket, football, etc. Several sports use a ball in the shape of a prolate spheroid. See also: buckminsterfullerene, football, kickball, marbles, penny floater, prisoner ball, shuttlecock, Super Ball.
Ball
–
Russian leather balls (Russian: мячи), 12th-13th century.
Ball
–
Football from association football (soccer)
Ball
–
Baoding balls
Ball
–
Baseball
87.
Groove (engineering)
–
Examples include: a canal cut in a hard material, usually metal. This canal can be round, oval or an arc, in order to receive another component such as a boss. It can also be on the circumference of a dowel or a bolt; such a canal may receive a circlip, an o-ring or a gasket. A depression on the circumference of a cast or machined wheel; this depression may receive a cable, a rope or a belt. A longitudinal channel formed in a hot-rolled rail profile such as a grooved rail; this groove is for the flange on a train wheel. See also: fluting, glass run channel, labyrinth seal, tongue and groove, tread.
Groove (engineering)
–
88.
Sidereal orbital period
–
A sidereal year is the time taken by the Earth to orbit the Sun once with respect to the fixed stars. Hence it is also the time taken for the Sun to return to the same position with respect to the fixed stars after apparently travelling once around the ecliptic. It equals 365.25636 SI days for the J2000.0 epoch. The sidereal year differs from the tropical year, the period of time required for the ecliptic longitude of the Sun to increase 360 degrees, due to the precession of the equinoxes. The sidereal year is 20 min 24.5 s longer than the tropical year at J2000.0. Before the discovery of the precession of the equinoxes by Hipparchus in the Hellenistic period, the difference between the sidereal and tropical year was unknown. See also: anomalistic year, Gaussian year, orbital period, Julian year, precession, sidereal time, tropical year.
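As a quick numerical check, the stated year lengths reproduce the quoted difference of roughly 20 minutes. The tropical-year length used below (365.24219 days) is a commonly quoted reference value assumed for illustration, not a figure from the text. A minimal Python sketch:

```python
# Difference between the sidereal and tropical year near J2000.0.
sidereal = 365.25636   # SI days, from the text
tropical = 365.24219   # SI days, assumed reference value

diff_seconds = (sidereal - tropical) * 86400  # one SI day = 86,400 s
minutes, seconds = divmod(diff_seconds, 60)
print(f"{int(minutes)} min {seconds:.1f} s")  # close to the quoted 20 min 24.5 s
```

The small discrepancy from the quoted 20 min 24.5 s comes from rounding in the assumed tropical-year value.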
Sidereal orbital period
–
89.
Celestial bodies
–
An astronomical object or celestial object is a naturally occurring physical entity, association, or structure that current astronomy has demonstrated to exist in the observable universe. In astronomy, the terms object and body are often used interchangeably. Examples of astronomical objects include planetary systems, star clusters, nebulae and galaxies, while asteroids, moons, planets, and stars are astronomical bodies. A comet may be identified as both body and object: it is a body when referring to its nucleus of ice and dust, and an object when describing the entire comet with its diffuse coma and tail. The universe can be viewed as having a hierarchical structure. At the largest scales, the fundamental component of assembly is the galaxy. Galaxies are organized into groups and clusters, often within larger superclusters. Disc galaxies encompass lenticular and spiral galaxies with features such as spiral arms. At the core, most galaxies have a supermassive black hole, which may result in an active galactic nucleus. Galaxies can also have satellites in the form of dwarf galaxies. The constituents of a galaxy are formed out of gaseous matter that assembles through gravitational self-attraction in a hierarchical manner. At this level, the fundamental components are the stars. The great variety of stellar forms is determined almost entirely by the mass and composition of the stars. Stars may be found in multi-star systems that orbit about each other in a hierarchical organization. A star may also have a planetary system of various objects such as asteroids, comets and debris in orbit around it. The various distinctive types of stars are shown by the Hertzsprung–Russell diagram, a plot of stellar luminosity versus surface temperature. Each star follows an evolutionary track across this diagram. If this track takes the star through a region containing an intrinsic variable type, then its physical properties can cause it to become a variable star. An example of this is the instability strip, a region of the H-R diagram that includes Delta Scuti, RR Lyrae and Cepheid variables. The table below lists the general categories of bodies and objects by their location or structure.
See also: International Astronomical Naming Commission, List of light sources, List of Solar System objects, Lists of astronomical objects. External links: SkyChart; Sky & Telescope monthly skymaps for every location on Earth.
Celestial bodies
90.
Calculus
–
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. It has two branches, differential calculus and integral calculus; these two branches are related to each other by the fundamental theorem of calculus. Both branches make use of the notions of convergence of infinite sequences and infinite series to a well-defined limit. Generally, modern calculus is considered to have been developed in the 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Today, calculus has widespread uses in science, engineering and economics. Calculus is a part of modern mathematics education; a course in calculus is a gateway to other, more advanced courses in mathematics devoted to the study of functions and limits. Calculus has historically been called "the calculus of infinitesimals", or "infinitesimal calculus". The word calculus is also used for naming some methods of calculation or theories of computation, such as propositional calculus, the calculus of variations and the lambda calculus. The ancient period introduced some of the ideas that led to integral calculus; the method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD in order to find the area of a circle. In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method that would later be called Cavalieri's principle to find the volume of a sphere. Indian mathematicians gave a non-rigorous method of a sort of differentiation of some trigonometric functions. In the Middle East, Alhazen derived a formula for the sum of fourth powers. He used the results to carry out what would now be called an integration. Cavalieri's work was not well respected, since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first.
The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory. In other work, Newton developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, since at this time infinitesimal methods were considered disreputable. These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who is now regarded as an independent inventor of and contributor to calculus. Unlike Newton, Leibniz paid a lot of attention to the formalism, often spending days determining appropriate symbols for concepts. Leibniz and Newton are usually both credited with the invention of calculus. Newton was the first to apply calculus to general physics, and Leibniz developed much of the notation used in calculus today.
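The fundamental theorem of calculus mentioned above can be illustrated numerically: integrating a function's derivative over an interval recovers the function's net change. A minimal sketch (the function f(x) = x² and the midpoint Riemann sum are illustrative choices, not anything from the text):

```python
# Numerical illustration of the fundamental theorem of calculus:
# integrating f'(x) = 2x over [0, 3] should recover f(3) - f(0) = 9
# for f(x) = x**2.
def integrate(f, a, b, n=100_000):
    """Midpoint Riemann-sum approximation of the integral of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

approx = integrate(lambda x: 2 * x, 0.0, 3.0)
print(round(approx, 6))  # 9.0, matching f(b) - f(a)
```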
Calculus
–
Isaac Newton developed the use of calculus in his laws of motion and gravitation.
Calculus
–
Gottfried Wilhelm Leibniz was the first to publish his results on the development of calculus.
Calculus
–
Maria Gaetana Agnesi
Calculus
–
The logarithmic spiral of the Nautilus shell is a classical image used to depict the growth and change related to calculus
91.
Royal Society
–
Founded in November 1660, it was granted a royal charter by King Charles II as "The Royal Society". The society is governed by its Council, which is chaired by the Society's President, according to a set of statutes and standing orders. The members of Council and the President are elected from and by its Fellows, the basic members of the society. As of 2016, there are about 1,600 fellows, allowed to use the postnominal title FRS. There are also royal fellows, honorary fellows and foreign members, the last of whom are allowed to use the postnominal title ForMemRS. The Royal Society President is Venkatraman Ramakrishnan, who took up the post on 30 November 2015. Since 1967, the society has been based at 6–9 Carlton House Terrace, a Grade I listed building in central London which was previously used by the Embassy of Germany, London. The Royal Society started from groups of physicians and natural philosophers, meeting at a variety of locations. They were influenced by the new science, as promoted by Francis Bacon in his New Atlantis, from approximately 1645 onwards. A group known as The Philosophical Society of Oxford was run under a set of rules still retained by the Bodleian Library; after the English Restoration, there were regular meetings at Gresham College. It is widely held that these groups were the inspiration for the foundation of the Royal Society, though a contemporary demurred: "I will not say, that Mr Oldenburg did rather inspire the French to follow the English, or, at least, did help them, and hinder us. But 'tis well known who were the men that began and promoted that design." This initial royal favour has continued and, since then, every monarch has been the patron of the society. The society's early meetings included experiments performed first by Hooke and then by Denis Papin, who was appointed in 1684. These experiments varied in their subject area, and were important in some cases and trivial in others.
The Society returned to Gresham in 1673. There had been an attempt in 1667 to establish a permanent college for the society. Michael Hunter argues that this was influenced by Solomon's House in Bacon's New Atlantis and, to a lesser extent, by J. V. The first proposal was given by John Evelyn to Robert Boyle in a letter dated 3 September 1659; he suggested a grander scheme, with apartments for members. The society's ideas were simpler and only included residences for a handful of staff. These plans were progressing by November 1667, but never came to anything, given the lack of contributions from members and the unrealised—perhaps unrealistic—aspirations of the society. During the 18th century, the gusto that had characterised the early years of the society faded, with relatively few scientific greats compared to other periods. The pointed lightning conductor had been invented by Benjamin Franklin in 1749. During the same time period, it became customary to appoint society fellows to serve on government committees where science was concerned, something that still continues. The 18th century also featured remedies to many of the society's early problems.
Royal Society
–
The entrance to the Royal Society in Carlton House Terrace, London
Royal Society
–
The President, Council, and Fellows of the Royal Society of London for Improving Natural Knowledge
Royal Society
–
John Evelyn, who helped to found the Royal Society
Royal Society
–
Mace granted by Charles II
92.
Escape velocity
–
The escape velocity from Earth is about 11.186 km/s at the surface. More generally, escape velocity is the speed at which the sum of an object's kinetic energy and its gravitational potential energy is equal to zero. An object given escape velocity in a direction pointing away from the ground of a massive body will move away indefinitely; once escape velocity is achieved, no further impulse need be applied for it to continue in its escape. When given a speed V greater than the escape speed v_e, the object will asymptotically approach a hyperbolic excess speed v_∞ satisfying v_∞² = V² − v_e². In these equations atmospheric friction is not taken into account. Escape velocity is only required to send a ballistic object on a trajectory that will allow the object to escape the gravity well of the mass M. The existence of escape velocity is a consequence of conservation of energy: adding speed (and hence kinetic energy) to an object expands the set of places it can reach, until, with enough energy, that set becomes infinite. For a given gravitational potential energy at a given position, the escape velocity is the minimum speed an object without propulsion needs to be able to escape from the gravity well. Escape velocity is actually a speed (not a velocity) because it does not specify a direction: no matter what the direction of travel is, the object can escape the gravitational field. The simplest way of deriving the formula for escape velocity is to use conservation of energy. Imagine that a spaceship of mass m is at a distance r from the center of mass of the planet, and its initial speed is equal to its escape velocity, v_e. At its final state, it will be an infinite distance away from the planet, and its speed will be negligibly small. The same result is obtained by a relativistic calculation, in which case the variable r represents the radial coordinate or reduced circumference of the Schwarzschild metric. All speeds and velocities are measured with respect to the field. Additionally, the escape velocity at a point in space is equal to the speed that an object would have if it started at rest from an infinite distance and was pulled by gravity to that point. In common usage, the initial point is on the surface of a planet or moon. On the surface of the Earth, the escape velocity is about 11.2 km/s.
However, at 9,000 km altitude in space, it is less than 7.1 km/s. The escape velocity is independent of the mass of the escaping object: it does not matter if the mass is 1 kg or 1,000 kg; what differs is the amount of energy required. For an object of mass m, the energy required to escape the Earth's gravitational field is GMm / r, where r is its initial distance from the Earth's center. A related quantity is the specific orbital energy, which is essentially the sum of the kinetic and potential energy divided by the mass. An object has reached escape velocity when the specific orbital energy is greater than or equal to zero.
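The surface figure of about 11.2 km/s can be reproduced from the formula v_e = √(2GM/r). The values of GM and the Earth's radius below are standard reference figures assumed for illustration, not taken from the text:

```python
from math import sqrt

# Escape velocity v_e = sqrt(2*G*M/r), ignoring atmospheric friction.
GM_EARTH = 3.986004418e14   # m^3/s^2, standard gravitational parameter G*M
R_EARTH = 6.371e6           # m, mean radius of the Earth

def escape_velocity(gm, r):
    """Minimum ballistic speed needed to escape from distance r."""
    return sqrt(2 * gm / r)

v_surface = escape_velocity(GM_EARTH, R_EARTH)
print(f"{v_surface / 1000:.3f} km/s")   # ≈ 11.186 km/s at the surface
```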
Escape velocity
–
Luna 1, launched in 1959, was the first man-made object to attain escape velocity from Earth (see table below).
Escape velocity
–
93.
Newton's cannonball
–
Newton's cannonball was a thought experiment Isaac Newton used to hypothesize that the force of gravity was universal, and that it was the key force for planetary motion. It appeared in his book A Treatise of the System of the World. In this experiment from his book, Newton visualizes a cannon on top of a very high mountain. If there were no forces of gravitation or air resistance, the cannonball should follow a straight line away from Earth. If a gravitational force acts on the cannonball, it will follow a different path depending on its initial velocity. If the speed is low, it will simply fall back on Earth (for example, a horizontal speed of 0 to 7,000 m/s for Earth). If the speed is higher than the orbital speed at that altitude but not high enough to escape, the cannonball will continue revolving around the Earth in a closed orbit. If the speed is very high, it will leave Earth in a parabolic or hyperbolic trajectory (for example, a horizontal speed of approximately greater than 10,000 m/s for Earth). An image of the page from the System of the World showing Newton's diagram of this experiment was included on the Voyager Golden Record.
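The speed regimes in the thought experiment can be sketched as a small classifier. The thresholds computed below use standard surface values of GM and r (assumptions made here; Newton's mountain-top launch point would shift them slightly, which is why they differ a little from the rounded figures quoted above):

```python
from math import sqrt

# Classify the cannonball's fate by its horizontal launch speed,
# ignoring air resistance. GM and r are standard Earth surface values.
GM = 3.986004418e14   # m^3/s^2
r = 6.371e6           # m

v_circular = sqrt(GM / r)       # ≈ 7.9 km/s: speed for a circular orbit
v_escape = sqrt(2 * GM / r)     # ≈ 11.2 km/s: speed for parabolic escape

def trajectory(v):
    if v < v_circular:
        return "falls back to Earth"
    if v < v_escape:
        return "closed (circular or elliptical) orbit"
    return "parabolic or hyperbolic escape"

print(trajectory(5_000))    # falls back to Earth
print(trajectory(9_000))    # closed (circular or elliptical) orbit
print(trajectory(12_000))   # parabolic or hyperbolic escape
```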
Newton's cannonball
–
94.
Thought experiment
–
A thought experiment considers some hypothesis, theory, or principle for the purpose of thinking through its consequences. Given the structure of the experiment, it may not be possible to perform it. Perhaps the key thought experiment in the history of modern science is Galileo's demonstration that falling objects must fall at the same rate regardless of their masses. The experiment is described by Galileo in Discorsi e dimostrazioni matematiche thus: "Do you not agree with me in this opinion? ... Hence the heavier body moves with less speed than the lighter; thus you see how, from your assumption that the heavier body moves more rapidly than the lighter one, I infer that the heavier body moves more slowly." Although the extract does not convey the elegance and power of the demonstration terribly well, it is clear that it is a thought experiment. Many philosophers, however, prefer to consider thought experiments to be merely the use of a hypothetical scenario to help understand the way things are. Thought experiments have been used in a variety of fields, including philosophy, law and physics. In philosophy, they have been used at least since classical antiquity. In law, they were well known to Roman lawyers quoted in the Digest; in physics and other sciences, notable thought experiments date from the 19th and especially the 20th century, but examples can be found at least as early as Galileo. Johann Witt-Hansen established that Hans Christian Ørsted was the first to use the Latin-German mixed term Gedankenexperiment circa 1812; Ørsted was also the first to use its entirely German equivalent, Gedankenversuch, in 1820. The English term "thought experiment" was coined from Mach's Gedankenexperiment. Prior to its emergence, the activity of posing hypothetical questions that employed subjunctive reasoning had existed for a very long time.
However, people had no way of categorizing it or speaking about it; this helps to explain the extremely wide and diverse range of application of the term "thought experiment" once it had been introduced into English. In physics and other sciences many thought experiments date from the 19th and especially the 20th century. In Galileo's thought experiment, for example, the rearrangement of empirical experience consists in the original idea of combining bodies of different weight. Thought experiments have been used in philosophy, physics, and other fields. In law, the synonym "hypothetical" is frequently used for such experiments. Regardless of their goal, all thought experiments display a patterned way of thinking that is designed to allow us to explain, predict and control events in a better and more productive way. However, they may make those theories themselves irrelevant, and could create new problems that are just as difficult to resolve. Scientists tend to use thought experiments as imaginary "proxy" experiments prior to a real, physical experiment; in these cases, the result of the proxy experiment will often be so clear that there will be no need to conduct a physical experiment at all. Scientists also use thought experiments when particular physical experiments are impossible to conduct, such as Einstein's thought experiment of chasing a light beam, leading to special relativity.
Thought experiment
–
Temporal representation of a prefactual thought experiment.
Thought experiment
–
A famous example, Schrödinger's cat (1935), presents a cat that might be alive or dead, depending on an earlier random event. It illustrates the problem of the Copenhagen interpretation applied to everyday objects.
Thought experiment
–
Temporal representation of a counterfactual thought experiment.
Thought experiment
–
Temporal representation of a semifactual thought experiment.
95.
Celestial spheres
–
The celestial spheres, or celestial orbs, were the fundamental entities of the cosmological models developed by Plato, Eudoxus, Aristotle, Ptolemy, Copernicus and others. Since it was believed that the fixed stars did not change their positions relative to one another, it was argued that they must be on the surface of a single starry sphere. In modern thought, the orbits of the planets are viewed as the paths of those planets through mostly empty space. When scholars applied Ptolemy's epicycles, they presumed that each planetary sphere was exactly thick enough to accommodate them. In Greek antiquity the ideas of celestial spheres and rings first appeared in the cosmology of Anaximander in the early 6th century BC. All these wheel rims had originally been formed out of an original sphere of fire wholly encompassing the Earth, which had disintegrated into many individual rings. Hence, in Anaximander's cosmogony, in the beginning was the sphere, out of which celestial rings were formed. As viewed from the Earth, the ring of the Sun was highest, that of the Moon was lower, and the sphere of the stars was lowest. Following Anaximander, his pupil Anaximenes held that the stars, Sun, Moon, and planets are all made of fire; but whilst the stars are fastened on a crystal sphere like nails or studs, the Sun, Moon, and planets are not so fixed. Unlike Anaximander, he relegated the fixed stars to the region most distant from the Earth. After Anaximenes, Pythagoras, Xenophanes and Parmenides all held that the universe was spherical. Much later, in the fourth century BC, Plato's Timaeus proposed that the body of the cosmos was made in the most perfect and uniform shape, that of a sphere, but it posited that the planets were spherical bodies set in rotating bands or rings rather than wheel rims as in Anaximander's cosmology. Each planet is attached to the innermost of its own particular set of spheres. In his Metaphysics, Aristotle developed a physical cosmology of spheres. Aristotle considers that these spheres are made of a fifth element, the aether.
Each of these concentric spheres is moved by its own god, an unchanging divine unmoved mover. By using eccentrics and epicycles, Ptolemy's geometrical model achieved greater mathematical detail and predictive accuracy than had been exhibited by earlier concentric spherical models of the cosmos. In Ptolemy's physical model, each planet is contained in two or more spheres, but in Book 2 of his Planetary Hypotheses Ptolemy depicted thick circular slices rather than spheres, as in its Book 1. The planetary spheres were arranged outwards from the spherical, stationary Earth at the centre of the universe in this order: the spheres of the Moon, Mercury, Venus, the Sun, Mars, Jupiter and Saturn, with the fixed stars beyond. In more detailed models the seven planetary spheres contained other secondary spheres within them. In antiquity the order of the lower planets was not universally agreed: Plato and his followers ordered them Moon, Sun, Mercury, Venus. A series of astronomers, beginning with the Muslim astronomer al-Farghānī, used the Ptolemaic model of nesting spheres to compute distances to the stars and planetary spheres. Al-Farghānī's distance to the stars was 20,110 Earth radii which, on the assumption that the radius of the Earth was 3,250 miles, came to 65,357,500 miles.
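The closing arithmetic can be checked directly; a one-line sketch:

```python
# Check of al-Farghānī's figure quoted above: 20,110 Earth radii
# at an assumed Earth radius of 3,250 miles.
distance_in_radii = 20_110
radius_miles = 3_250
print(distance_in_radii * radius_miles)   # 65357500, i.e. 65,357,500 miles
```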
Celestial spheres
–
The Earth within seven celestial spheres, from Bede, De natura rerum, late 11th century
Celestial spheres
–
Geocentric celestial spheres; Peter Apian's Cosmographia (Antwerp, 1539)
Celestial spheres
–
Thomas Digges' 1576 Copernican heliocentric model of the celestial orbs
Celestial spheres
–
Kepler's diagram of the celestial spheres, and of the spaces between them, following the opinion of Copernicus (Mysterium Cosmographicum, 2nd ed., 1621)
96.
Cavendish experiment
–
Because of the unit conventions then in use, the gravitational constant does not appear explicitly in Cavendish's work. Instead, the result was originally expressed as the specific gravity of the Earth. His experiment gave the first accurate values for these geophysical constants. The experiment was devised sometime before 1783 by geologist John Michell, who constructed a torsion balance apparatus for it. However, Michell died in 1793 without completing the work. After his death the apparatus passed to Francis John Hyde Wollaston and then to Henry Cavendish, who rebuilt the apparatus but kept close to Michell's original plan. Cavendish then carried out a series of measurements with the equipment. The apparatus constructed by Cavendish was a torsion balance made of a six-foot wooden rod suspended from a wire, with a 2-inch diameter, 1.61-pound lead sphere attached to each end. Two 12-inch, 348-pound lead balls were located near the smaller balls, about 9 inches away. The experiment measured the faint gravitational attraction between the small balls and the larger ones. The two large balls were positioned on alternate sides of the horizontal arm of the balance. Their mutual attraction to the small balls caused the arm to rotate; the arm stopped rotating when it reached an angle where the twisting force of the wire balanced the combined gravitational force of attraction between the large and small lead spheres. By measuring the angle of the rod and knowing the twisting force of the wire for a given angle, Cavendish found that the Earth's density was 5.448 ± 0.033 times that of water. The oscillation period of the balance was about 20 minutes; the torsion coefficient could be calculated from this and the mass and dimensions of the balance. Actually, the rod was never at rest; Cavendish had to measure the angle of the rod while it was oscillating. Cavendish's equipment was remarkably sensitive for its time. The force involved in twisting the torsion balance was very small, 1.74×10−7 N, about 1⁄50,000,000 of the weight of the small balls. Through two holes in the walls of the shed, Cavendish used telescopes to observe the movement of the torsion balance's horizontal rod; the motion of the rod was only about 0.16 inches. Cavendish was able to measure this small deflection to an accuracy of better than one hundredth of an inch using vernier scales on the ends of the rod. Cavendish's accuracy was not exceeded until C. V. Boys's experiment in 1895. In time, Michell's torsion balance became the dominant technique for measuring the gravitational constant, and this is why Cavendish's experiment became known as the Cavendish experiment. The formulation of Newtonian gravity in terms of a gravitational constant did not become standard until long after Cavendish's time; indeed, one of the first references to G is in 1873, 75 years after Cavendish's work. Cavendish expressed his result in terms of the density of the Earth; later authors reformulated his results in modern terms. For this reason, historians of science have argued that Cavendish did not measure the gravitational constant. Physicists, however, often use units where the gravitational constant takes a different form.
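As the passage notes, later authors reformulated Cavendish's density result in terms of G. A sketch of that conversion, using modern reference values for surface gravity and the Earth's radius (assumptions introduced here, not part of Cavendish's own formulation):

```python
from math import pi

# From g = G*M/R^2 and M = density * (4/3)*pi*R^3, one gets
# G = 3*g / (4*pi*density*R).
density = 5.448 * 1000    # kg/m^3: Cavendish's 5.448 times water
g = 9.81                  # m/s^2, modern value of surface gravity (assumed)
R = 6.371e6               # m, modern mean Earth radius (assumed)

G = 3 * g / (4 * pi * density * R)
print(f"G ≈ {G:.3e} m^3 kg^-1 s^-2")   # roughly 6.75e-11, near the modern 6.674e-11
```

The small excess over the modern value mirrors Cavendish's density (5.448) being slightly below today's accepted 5.51 or so.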
Cavendish experiment
–
Detail showing torsion balance arm (m), large ball (W), small ball (x), and isolating box (ABCDE).
Cavendish experiment
–
Vertical section drawing of Cavendish's torsion balance instrument including the building in which it was housed. The large balls were hung from a frame so they could be rotated into position next to the small balls by a pulley from outside. Figure 1 of Cavendish's paper.
97.
Weighing
–
In science and engineering, the weight of an object is usually taken to be the force on the object due to gravity. Weight is a vector whose magnitude, often denoted by an italic letter W, is the product of the mass m of the object and the magnitude of the local gravitational acceleration g; thus W = mg. The unit of measurement for weight is that of force, which in the International System of Units is the newton. For example, an object with a mass of one kilogram has a weight of about 9.8 newtons on the surface of the Earth. In this sense of weight, a body can be weightless only if it is far away from any other mass. Although weight and mass are scientifically distinct quantities, the terms are often confused with each other in everyday use. There is also a rival tradition within Newtonian physics and engineering which sees weight as that which is measured when one uses scales. There the weight is a measure of the magnitude of the reaction force exerted on a body. Typically, in measuring an object's weight, the object is placed on scales at rest with respect to the Earth; thus, in a state of free fall, the weight would be zero. In this second sense of weight, terrestrial objects can be weightless: ignoring air resistance, the famous apple falling from the tree, on its way to meet the ground near Isaac Newton, would be weightless. Further complications in elucidating the various concepts of weight have to do with the theory of relativity, according to which gravity is modelled as a consequence of the curvature of spacetime. In the teaching community, a debate has existed for over half a century on how to define weight for students. The current situation is that multiple sets of concepts co-exist. Discussion of the concepts of heaviness and lightness dates back to the ancient Greek philosophers. These were typically viewed as inherent properties of objects. Plato described weight as the natural tendency of objects to seek their kin. To Aristotle, weight and levity represented the tendency to restore the natural order of the basic elements: air, earth, fire and water.
He ascribed absolute weight to earth and absolute levity to fire. Archimedes saw weight as a quality opposed to buoyancy, with the conflict between the two determining if an object sinks or floats. The first operational definition of weight was given by Euclid, who defined weight as "the heaviness or lightness of one thing, compared to another, as measured by a balance". Operational balances had, however, been around much longer. According to Aristotle, weight was the direct cause of the falling motion of an object.
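The gravitational definition above, W = mg, can be sketched in a few lines. The values of g are rounded reference figures assumed for illustration:

```python
# Weight as the gravitational force W = m*g, per the definition above.
def weight(mass_kg, g=9.8):
    """Return weight in newtons for a mass (kg) under acceleration g (m/s^2)."""
    return mass_kg * g

print(weight(1.0))            # 9.8 N: one kilogram on the Earth's surface
print(weight(1.0, g=1.62))    # 1.62 N: the same mass on the Moon (assumed lunar g)
print(weight(1.0, g=0.0))     # 0.0 N: free fall, "weightless" in the second sense
```

The mass argument never changes between the three calls; only the local g, and hence the weight, does.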
Weighing
–
Ancient Greek official bronze weights dating from around the 6th century BC, exhibited in the Ancient Agora Museum in Athens, housed in the Stoa of Attalus.
Weighing
–
Weighing grain, from the Babur-namah
Weighing
–
This top-fuel dragster can accelerate from zero to 160 kilometres per hour (99 mph) in 0.86 seconds. This is a horizontal acceleration of 5.3 g. Combined with the vertical g-force in the stationary case, the Pythagorean theorem yields a g-force of 5.4 g. It is this g-force that determines the driver's weight if one uses the operational definition. If one uses the gravitational definition, the driver's weight is unchanged by the motion of the car.
Weighing
–
Measuring weight versus mass
98.
Spring scales
–
A spring scale, spring balance or newton meter is a type of weighing scale. It consists of a spring fixed at one end with a hook to attach an object at the other. It works by Hooke's law, which states that the force needed to extend a spring is proportional to the distance that spring is extended from its rest position. Therefore, the scale markings on the spring balance are equally spaced. A spring scale cannot measure mass, only weight. Also, the spring in the scale can permanently stretch with repeated use. A spring scale will only read correctly in a frame of reference where the acceleration in the spring's axis is constant. This can be shown by taking a spring scale into an elevator, where the reading changes as the elevator accelerates. If two or more spring balances are hung one below the other in series, each of the scales will read approximately the same: the full weight of the body hung on the lower scale. The scale on top would read slightly heavier, due to also supporting the weight of the lower scale itself. Spring balances come in different sizes. Generally, small scales that measure newtons will have a less firm spring than larger ones that measure tens, hundreds or thousands of newtons. The largest spring scales range in measurement from 5,000 to 8,000 newtons. A spring balance may be labeled in both units of force and mass; strictly speaking, only the force values are correctly labeled. The main uses of spring balances are industrial, especially those related to weighing heavy loads such as trucks and storage silos. They are also common in science education as basic accelerators. They are used when the accuracy afforded by other types of scales can be sacrificed for simplicity and cheapness. A spring balance measures the weight of an object by opposing the force of gravity acting on it with the force of an extended spring. The first spring balance in Britain was made around 1770 by Richard Salter of Bilston. He and his nephews John & George founded the firm of George Salter & Co.
The firm, still a notable maker of scales and balances, patented the spring balance in 1838. They also applied the same spring-balance principle to steam locomotive safety valves. See also: weighing scale.
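The proportionality between extension and force that makes the markings equally spaced can be sketched as follows; the spring constant is an assumed, illustrative value:

```python
# A spring scale converts extension to indicated force via Hooke's law, F = k*x.
def reading_newtons(extension_m, k):
    """Force (N) shown by a spring of stiffness k (N/m) stretched by extension_m (m)."""
    return k * extension_m

k = 196.0                       # N/m, assumed spring constant
w = reading_newtons(0.05, k)    # 5 cm of stretch
print(w)                        # ≈ 9.8 N, about a 1 kg load under standard gravity

# Two balances hung in series: each spring carries the full load, so each
# reads ≈ 9.8 N; only the total stretch of the pair doubles.
print(2 * w / k)                # combined extension of the two springs (m)
```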
Spring scales
–
Spring balance, measuring in grams.
Spring scales
–
Example of spiral balancer for sash windows
99.
Hooke's law
–
Hooke's law is a principle of physics that states that the force needed to extend or compress a spring by some distance X is proportional to that distance. That is, F = kX, where k is a constant factor characteristic of the spring: its stiffness. The law is named after the 17th-century British physicist Robert Hooke. He first stated the law in 1676 as a Latin anagram, and published the solution of his anagram in 1678 as ut tensio, sic vis ("as the extension, so the force"). Hooke states in the 1678 work that he was aware of the law already in 1660. An elastic body or material for which this equation can be assumed is said to be linear-elastic or Hookean. Hooke's law is only a first-order linear approximation to the real response of springs. Many materials will deviate noticeably from Hooke's law well before those elastic limits are reached. On the other hand, Hooke's law is an accurate approximation for most solid bodies, as long as the forces and deformations are small enough. For this reason, Hooke's law is extensively used in all branches of science and engineering. It is also the principle behind the spring scale and the manometer. The modern theory of elasticity generalizes Hooke's law to say that the strain of an elastic object or material is proportional to the stress applied to it. In this general form, Hooke's law makes it possible to deduce the relation between strain and stress for complex objects in terms of intrinsic properties of the materials they are made of. Consider a simple helical spring that has one end attached to some fixed object, while the free end is being pulled by a force. Suppose that the spring has reached a state of equilibrium, where its length is not changing anymore. Let X be the amount by which the free end of the spring was displaced from its relaxed position. Hooke's law states that F = kX or, equivalently, X = F / k, where k is a positive real number, characteristic of the spring. Moreover, the same formula holds when the spring is compressed, with F and X both negative in that case. According to this formula, the graph of the applied force F as a function of the displacement X will be a straight line passing through the origin, whose slope is k.
Hookes law for a spring is often stated under the convention that F is the force exerted by the spring on whatever is pulling its free end. In that case, the equation becomes F = − k X since the direction of the force is opposite to that of the displacement
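The linear relation F = kX can be sketched numerically; the spring constant and displacement below are illustrative values, not measurements of any real spring:

```python
# Hooke's law: F = k * x, with k in N/m and x in m.
# Minimal sketch; k = 150 N/m is an assumed stiffness.

def spring_force(k, x):
    """Force (N) needed to hold a linear-elastic spring at displacement x (m)."""
    return k * x

def restoring_force(k, x):
    """Force (N) the spring exerts back on whatever displaces it: F = -k*x."""
    return -k * x

k = 150.0  # N/m, assumed stiffness
x = 0.04   # m, a 4 cm extension
print(spring_force(k, x))     # force applied to the spring
print(restoring_force(k, x))  # equal and opposite force from the spring
```

Compression is covered by the same formula with a negative displacement.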
Hooke's law
–
Hooke's law: the force is proportional to the extension
100.
Ernst Mach
–
Ernst Waldfried Josef Wenzel Mach was an Austrian physicist and philosopher, noted for contributions to physics such as his study of shock waves. The ratio of the speed of an object to the speed of sound is named the Mach number in his honor. Mach was born in Brno-Chrlice, Moravia. His father, who had graduated from Charles University in Prague, acted as tutor to the noble Brethon family in Zlín, eastern Moravia. His grandfather, Wenzl Lanhaus, an administrator of the Chirlitz estate, was master builder of the streets there, and his activities in that role later influenced the theoretical work of Ernst Mach. Some sources give Mach's birthplace as Turas/Tuřany, the site of the Chirlitz registry office, where Peregrin Weiss baptized him into the Roman Catholic Church. Despite his Catholic background, Mach later became an atheist. Up to the age of 14, Mach received his education at home from his parents; he then entered a Gymnasium in Kroměříž, where he studied for three years. In 1855 he became a student at the University of Vienna, and his early work focused on the Doppler effect in optics and acoustics. During that period Mach also continued his work in psychophysics and sensory perception. In 1867 he took the chair of Experimental Physics at Charles University in Prague, where he stayed for 28 years before returning to Vienna. Mach's main contribution to physics involved his description and photographs of spark shock waves: he described how, when a bullet or shell moves faster than the speed of sound, it creates a compression of air in front of it. Using schlieren photography, he and his son Ludwig were able to photograph the shadows of the shock waves; during the early 1890s Ludwig invented an interferometer which allowed much clearer photographs. One of the best-known of Mach's ideas is the so-called Mach principle, concerning the physical origin of inertia. 
This was never written down by Mach, but was given a verbal form, attributed by Philipp Frank to Mach himself, as: "When the subway jerks, it's the fixed stars that throw you down." Mach also became known for the philosophy he developed in close interplay with his science. Mach defended a type of phenomenalism recognizing only sensations as real, a position that seemed incompatible with the view of atoms and molecules as external, mind-independent things; he famously questioned the reality of atoms after an 1897 lecture by Ludwig Boltzmann at the Imperial Academy of Science in Vienna. From about 1908 to 1911 Mach's reluctance to acknowledge the reality of atoms was criticized by Max Planck as being incompatible with physics. In 1898 Mach suffered a cardiac arrest; in 1901 he retired from the University of Vienna and was appointed to the upper chamber of the Austrian parliament.
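The Mach number named for him is simply the ratio of an object's speed to the local speed of sound. A minimal sketch, using an assumed sound speed of 343 m/s (dry air at about 20 °C) and an illustrative bullet speed:

```python
def mach_number(speed_m_s, sound_speed_m_s=343.0):
    """Ratio of object speed to the local speed of sound (dimensionless).
    343 m/s is an assumed value for dry air at ~20 degrees C."""
    return speed_m_s / sound_speed_m_s

# A rifle bullet at ~900 m/s, of the kind in Mach's shock-wave photographs,
# is well into the supersonic regime:
print(round(mach_number(900.0), 2))  # ~2.62
```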
Ernst Mach
–
Ernst Mach (1838–1916)
Ernst Mach
–
Ernst Mach's 1888 photograph of a bow shock wave around a supersonic bullet.
Ernst Mach
–
Bust of Mach in the Rathauspark (City Hall Park) in Vienna, Austria.
Ernst Mach
–
Spinning chair devised by Mach to investigate the experience of motion
101.
Percy W. Bridgman
–
Percy Williams Bridgman was an American physicist who won the 1946 Nobel Prize in Physics for his work on the physics of high pressures. He also wrote extensively on the scientific method and on other aspects of the philosophy of science. Known to family and friends as Peter, Bridgman was born in Cambridge, Massachusetts; his parents were both born in New England. His father, Raymond Landon Bridgman, was religious and idealistic; his mother, Mary Ann Maria Williams, was described as more conventional and sprightly. Bridgman attended both elementary and high school in Auburndale, where he excelled at competitions in the classroom, on the playground, and while playing chess. Described as both shy and proud, his home life consisted of family music, card games, and domestic chores. The family was religious, reading the Bible each morning and attending a Congregational church. Bridgman entered Harvard University in 1900 and studied physics through to his Ph.D. From 1910 until his retirement he taught at Harvard, becoming a full professor in 1919. In 1905 he began investigating the properties of matter under high pressure. A machinery malfunction led him to modify his pressure apparatus; the result was a new device enabling him to create pressures eventually exceeding 100,000 kgf/cm2. This was a vast improvement over previous machinery, which could achieve pressures of only 3,000 kgf/cm2. Bridgman is also known for his studies of electrical conduction in metals; he developed the Bridgman seal and is the eponym of Bridgman's thermodynamic equations. Bridgman made many improvements to his pressure apparatus over the years. His philosophy of science book The Logic of Modern Physics advocated operationalism. In 1938 he participated in the International Committee composed to organise the International Congresses for the Unity of Science, and he was one of the 11 signatories to the Russell–Einstein Manifesto. Bridgman married Olive Ware, of Hartford, Connecticut, in 1912. 
Ware's father, Edmund Asa Ware, was the founder and first president of Atlanta University. The couple had two children and were married for 50 years, living most of that time in Cambridge; the family also had a home in Randolph, New Hampshire. Bridgman was a penetrating analytical thinker with a fertile mechanical imagination; he was a skilled plumber and carpenter, known to shun the assistance of professionals in these matters. He was also fond of music, played the piano, and took pride in his flower and vegetable gardens. Bridgman died by suicide by gunshot after suffering from metastatic cancer for some time.
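The pressures above are quoted in the older unit kgf/cm² (the technical atmosphere). A quick conversion to SI units, using the exact factor 1 kgf/cm² = 98,066.5 Pa (from the standard gravity 9.80665 m/s²), puts Bridgman's figures in modern terms; the input numbers are just the ones from the text:

```python
KGF_PER_CM2_TO_PA = 98_066.5  # exact, by definition of standard gravity

def kgf_cm2_to_gpa(p_kgf_cm2):
    """Convert a pressure in kgf/cm^2 to gigapascals."""
    return p_kgf_cm2 * KGF_PER_CM2_TO_PA / 1e9

print(round(kgf_cm2_to_gpa(3_000), 3))    # earlier apparatus, ~0.29 GPa
print(round(kgf_cm2_to_gpa(100_000), 2))  # Bridgman's apparatus, ~9.8 GPa
```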
Percy W. Bridgman
–
Percy Williams Bridgman
102.
Newton's second law
–
Newton's laws of motion are three physical laws that together laid the foundation for classical mechanics. They describe the relationship between a body and the forces acting upon it, and its motion in response to those forces. More precisely, the first law defines force qualitatively, the second law offers a quantitative measure of force, and the third asserts that a single isolated force does not exist. These three laws have been expressed in different ways over nearly three centuries, and can be summarised as follows. The three laws of motion were first compiled by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica (1687); Newton used them to explain and investigate the motion of many physical objects and systems. For example, in the third volume of the text, Newton showed that these laws of motion, combined with his law of universal gravitation, explained Kepler's laws of planetary motion. Newton's laws are applied to objects which are idealised as single point masses, in the sense that the size and shape of the body are neglected so as to focus on its motion. This can be done when the object is small compared to the distances involved in its analysis, or when the deformation and rotation of the body are of no importance. In this way, even a planet can be idealised as a particle for analysis of its orbital motion around a star. In their original form, Newton's laws of motion are not adequate to characterise the motion of rigid bodies and deformable bodies; Leonhard Euler in 1750 introduced a generalisation of Newton's laws for rigid bodies called Euler's laws of motion. If a body is represented as an assemblage of discrete particles, each governed by Newton's laws of motion, then Euler's laws can be derived from Newton's laws. Euler's laws can, however, be taken as axioms describing the laws of motion for extended bodies. Newton's laws hold only with respect to a certain set of frames of reference called Newtonian or inertial reference frames. Some authors interpret the first law as defining what an inertial reference frame is; other authors treat the first law as a corollary of the second. The explicit concept of an inertial frame of reference was not developed until long after Newton's death. 
In the given interpretation, mass, acceleration, momentum, and force are assumed to be externally defined quantities. This is the most common interpretation, but not the only one: one can instead consider the laws to be a definition of these quantities. Newtonian mechanics has been superseded by special relativity, but it is still useful as an approximation when the speeds involved are much slower than the speed of light. The first law states that if the net force (the vector sum of all forces acting on an object) is zero, then the velocity of the object is constant. When the mass is a non-zero constant, the first law can be stated mathematically as ∑F = 0 ⇔ dv/dt = 0. Consequently, an object that is at rest will stay at rest unless a force acts upon it, and an object that is in motion will not change its velocity unless a force acts upon it. This is known as uniform motion: an object continues to do whatever it happens to be doing unless a force is exerted upon it. If it is at rest, it continues in a state of rest; if it is moving, it continues to move without turning or changing its speed.
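The second law for constant mass, a = F/m, can be sketched with made-up force and mass values; the first law then appears as the zero-force special case:

```python
def acceleration(net_force_n, mass_kg):
    """Newton's second law for constant mass: a = F / m, in m/s^2."""
    return net_force_n / mass_kg

# Illustrative values: a 10 N net force on a 2 kg body.
print(acceleration(10.0, 2.0))  # 5.0 m/s^2

# First law as the special case F = 0: zero net force, zero acceleration,
# so the velocity does not change.
print(acceleration(0.0, 2.0))   # 0.0 m/s^2
```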
Newton's second law
–
Newton's First and Second laws, in Latin, from the original 1687 Principia Mathematica.
Newton's second law
–
Isaac Newton (1643–1727), the physicist who formulated the laws
103.
Newton's third law
–
Newton's laws of motion are three physical laws that together laid the foundation for classical mechanics. They describe the relationship between a body and the forces acting upon it, and its motion in response to those forces. More precisely, the first law defines force qualitatively, the second law offers a quantitative measure of force, and the third asserts that a single isolated force does not exist. These three laws have been expressed in different ways over nearly three centuries, and can be summarised as follows. The three laws of motion were first compiled by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica (1687); Newton used them to explain and investigate the motion of many physical objects and systems. For example, in the third volume of the text, Newton showed that these laws of motion, combined with his law of universal gravitation, explained Kepler's laws of planetary motion. Newton's laws are applied to objects which are idealised as single point masses, in the sense that the size and shape of the body are neglected so as to focus on its motion. This can be done when the object is small compared to the distances involved in its analysis, or when the deformation and rotation of the body are of no importance. In this way, even a planet can be idealised as a particle for analysis of its orbital motion around a star. In their original form, Newton's laws of motion are not adequate to characterise the motion of rigid bodies and deformable bodies; Leonhard Euler in 1750 introduced a generalisation of Newton's laws for rigid bodies called Euler's laws of motion. If a body is represented as an assemblage of discrete particles, each governed by Newton's laws of motion, then Euler's laws can be derived from Newton's laws. Euler's laws can, however, be taken as axioms describing the laws of motion for extended bodies. Newton's laws hold only with respect to a certain set of frames of reference called Newtonian or inertial reference frames. Some authors interpret the first law as defining what an inertial reference frame is; other authors treat the first law as a corollary of the second. The explicit concept of an inertial frame of reference was not developed until long after Newton's death. 
In the given interpretation, mass, acceleration, momentum, and force are assumed to be externally defined quantities. This is the most common interpretation, but not the only one: one can instead consider the laws to be a definition of these quantities. Newtonian mechanics has been superseded by special relativity, but it is still useful as an approximation when the speeds involved are much slower than the speed of light. The first law states that if the net force (the vector sum of all forces acting on an object) is zero, then the velocity of the object is constant. When the mass is a non-zero constant, the first law can be stated mathematically as ∑F = 0 ⇔ dv/dt = 0. Consequently, an object that is at rest will stay at rest unless a force acts upon it, and an object that is in motion will not change its velocity unless a force acts upon it. This is known as uniform motion: an object continues to do whatever it happens to be doing unless a force is exerted upon it. If it is at rest, it continues in a state of rest; if it is moving, it continues to move without turning or changing its speed.
Newton's third law
–
Newton's First and Second laws, in Latin, from the original 1687 Principia Mathematica.
Newton's third law
–
Isaac Newton (1643–1727), the physicist who formulated the laws
104.
Deuterium
–
Deuterium is one of two stable isotopes of hydrogen. The nucleus of deuterium, called a deuteron, contains one proton and one neutron, whereas the nucleus of the far more common hydrogen isotope, protium, contains no neutron. Deuterium has a natural abundance in Earth's oceans of about one atom in 6,420 of hydrogen; thus deuterium accounts for approximately 0.0156% of all the naturally occurring hydrogen in the oceans. The abundance of deuterium changes slightly from one kind of natural water to another. The isotope's name is formed from the Greek deuteros, meaning "second". Deuterium was discovered and named in 1931 by Harold Urey; when the neutron was discovered in 1932, this made the structure of deuterium obvious. Soon after deuterium's discovery, Urey and others produced samples of water in which the deuterium content had been highly concentrated. Deuterium is destroyed in the interiors of stars faster than it is produced, and other natural processes are thought to produce only an insignificant amount of it. Nearly all deuterium found in nature was produced in the Big Bang 13.8 billion years ago, and the primordial ratio from that time is the ratio found in the gas giant planets, such as Jupiter. However, other bodies are found to have different ratios of deuterium to hydrogen-1. This is thought to be a result of natural isotope separation processes that occur from solar heating of ices in comets; like the water cycle in Earth's weather, such heating processes may enrich deuterium with respect to protium. The analysis of deuterium/protium ratios in comets has found results very similar to the mean ratio in Earth's oceans, which reinforces theories that much of Earth's ocean water is of cometary origin. By contrast, the deuterium/protium ratio of the comet 67P/Churyumov–Gerasimenko, as measured by the Rosetta space probe, is about three times that of Earth's water; this figure is the highest yet measured in a comet. Deuterium/protium ratios thus continue to be an active topic of research in both astronomy and climatology. 
Deuterium is frequently represented by the chemical symbol D; since it is an isotope of hydrogen with mass number 2, it is also represented by 2H. IUPAC allows both D and 2H, although 2H is preferred; a distinct chemical symbol is used for convenience because of the isotope's common use in various scientific processes. In quantum mechanics the energy levels of electrons in atoms depend on the reduced mass of the system of electron and nucleus. For hydrogen this amount is about 1837/1836, or 1.000545, and for deuterium it is even smaller: 3671/3670, or 1.0002725. The energies of spectroscopic lines for deuterium and light hydrogen therefore differ by the ratio of these two numbers, which is 1.000272. The wavelengths of all deuterium spectroscopic lines are correspondingly shorter than those of light hydrogen.
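The isotope shift quoted above follows directly from the reduced-mass formula μ = mM/(m + M). A short check, using the approximate nuclear masses from the text (proton ≈ 1836 electron masses, deuteron ≈ 3670 electron masses):

```python
def reduced_mass(nucleus_mass_in_me):
    """Reduced mass of the electron-nucleus system, in electron masses:
    mu = M / (1 + M), for a nuclear mass M given in electron masses."""
    M = nucleus_mass_in_me
    return M / (1.0 + M)

mu_h = reduced_mass(1836.0)  # protium (approximate proton mass)
mu_d = reduced_mass(3670.0)  # deuterium (approximate deuteron mass)

# Energy levels scale with the reduced mass, so line energies differ by:
print(round(mu_d / mu_h, 6))  # ~1.000272, as quoted in the text
```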
Deuterium
–
Deuterium discharge tube
Deuterium
–
Ionized deuterium in a fusor reactor giving off its characteristic pinkish-red glow
Deuterium
–
Harold Urey
105.
International Bureau of Weights and Measures
–
The International Bureau of Weights and Measures is an intergovernmental organisation, usually referred to by its French initialism, BIPM (Bureau international des poids et mesures). The BIPM reports to the International Committee for Weights and Measures (CIPM), which in turn reports to the General Conference on Weights and Measures (CGPM); these organizations are also commonly referred to by their French initialisms. The BIPM was created on 20 May 1875, following the signing of the Metre Convention. Under the authority of that convention, the BIPM helps to ensure uniformity of SI weights and measures around the world. It does so through a series of consultative committees, whose members are the national metrology laboratories of the convention's member states. The BIPM carries out measurement-related research; it takes part in and organises international comparisons of national measurement standards and performs calibrations for member states. The BIPM has an important role in maintaining accurate worldwide time of day: it combines, analyses, and averages the official atomic time standards of member nations around the world to create a single, official Coordinated Universal Time (UTC). The BIPM is also the keeper of the international prototype of the kilogram.
International Bureau of Weights and Measures
–
Pavillon de Breteuil in Sèvres, France.
International Bureau of Weights and Measures
–
Seal of the BIPM
106.
Proposed redefinition of SI base units
–
The metric system was originally conceived as a system of measurement that was derivable from nature. When the metric system was first introduced in France in 1799, technical limitations necessitated the use of physical artefacts such as the prototype metre. In 1960 the metre was redefined in terms of the wavelength of light from a specified source, making it once more derivable from nature. If the proposed redefinition is accepted, the system will, for the first time, be wholly derivable from nature. The proposal can be summarised as follows: there will still be the same seven base units; of these, the kilogram, ampere, kelvin and mole will be redefined by choosing exact numerical values for the Planck constant, the elementary charge, the Boltzmann constant and the Avogadro constant, respectively. The second, metre and candela are already defined by physical constants. The new definitions will improve the SI without changing the size of any units, thus ensuring continuity with present measurements. Further details are found in the relevant chapter of the Ninth SI Units Brochure. The last major overhaul of the system was in 1960, when the International System of Units (SI) was formally published as a coherent set of units of measure. SI is structured around seven base units whose definitions appear somewhat arbitrary; although the set of units forms a coherent system, the definitions do not. The proposal before the CIPM seeks to remedy this by using invariant quantities of nature as the basis for deriving the base units. This will mean, amongst other things, that the prototype kilogram will cease to be used as the definitive replica of the kilogram. The second and the metre are already defined in such a manner. The basic structure of SI was developed over a period of about 170 years, and since 1960 technological advances have made it possible to address weaknesses in SI. Originally, the metre was defined as one ten-millionth of the distance from the North Pole to the Equator; although such definitions were chosen so that nobody would "own" the units, they could not be measured with sufficient convenience or precision for practical use. 
Instead, copies were created in the form of artefacts such as the mètre des Archives. In 1875, by which time the use of the metric system had become widespread in Europe and in Latin America, twenty industrially developed nations met for the Convention of the Metre. Among the bodies it created were the CGPM, a conference that meets every four to six years, and the CIPM, a committee of eighteen eminent scientists, each from a different country, nominated by the CGPM. The CIPM meets annually and is tasked to advise the CGPM; it has set up a number of sub-committees, each charged with a particular area of interest, one of which is the Consultative Committee for Units (CCU). Amongst other things, the first CGPM formally approved the use of 40 prototype metres and 40 prototype kilograms, made by the British firm Johnson Matthey, as the standards mandated by the Convention of the Metre.
Proposed redefinition of SI base units
–
Mass drift over time of national prototypes K21–K40, plus two of the International Prototype Kilogram's (IPK's) sister copies, K32 and K8(41). All mass changes are relative to the IPK.
Proposed redefinition of SI base units
–
Current (2013) SI system: Dependence of base unit definitions on other base units (for example, the metre is defined in terms of the distance traveled by light in a specific fraction of a second)
Proposed redefinition of SI base units
–
A watt balance, used to measure the Planck constant in terms of the international prototype kilogram.
Proposed redefinition of SI base units
–
A near-perfect sphere of ultra-pure silicon, part of the International Avogadro Coordination project to determine the Avogadro constant.
107.
Relativistic energy-momentum equation
–
In special relativity, the energy–momentum equation is E² = (pc)² + (m₀c²)². Unlike the rest-energy relation E₀ = m₀c² or the relativistic-mass relation E = mc², it relates the total energy E to the rest mass m₀; all three equations hold true simultaneously. Special cases of the relation include the following. If the body is a massless particle (m₀ = 0), the relation reduces to E = pc; for photons, this is the relation, discovered in 19th-century classical electromagnetism, between radiant momentum and radiant energy. If the body's speed v is much less than c, the relation reduces to E = ½m₀v² + m₀c², that is, the total energy is simply the classical kinetic energy plus the rest energy. If the body is at rest, i.e. in its rest frame, we have E = E₀ and m = m₀, and the energy–momentum relation reduces to the rest-energy relation. A more general form of the relation holds for general relativity. The invariant mass is an invariant for all frames of reference, not just in inertial frames in flat spacetime, but also in accelerated frames travelling through curved spacetime. In flat spacetime the quantities E, p in one frame and E′, p′ in another are related by a Lorentz transformation, yet E² − (pc)² = E′² − (p′c)² = (m₀c²)². This relation allows one to sidestep Lorentz transformations when determining only the magnitudes of the energy and momenta, by equating the relation in different frames. In relativistic quantum theory, it is applicable to all particles. This article uses the notation for the square of a vector as the dot product of the vector with itself. The equation can be derived in a number of ways; two of the simplest are considering the relativistic dynamics of a massive particle and evaluating the norm of the four-momentum of the system. The second approach is completely general for all particles and is easy to extend to multi-particle systems; it also eliminates the Lorentz factor, and with it any implicit velocity dependence or inference to the relativistic mass of a massive particle. The first approach is not fully general, as massless particles are not covered: naively setting m₀ = 0 there would mean that E = 0 and p = 0, so that no energy–momentum relation could be derived, which is not correct. 
In Minkowski space, energy (divided by c) and momentum are two components of a Minkowski four-vector, namely the four-momentum P = (E/c, p). The Minkowski inner product of this vector with itself is the invariant P·P = (E/c)² − p·p = (m₀c)², with the sign conventions fixed by the choice of metric tensor; in curved spacetime, where each component of the metric can have space and time dependence, this flat-spacetime form applies only locally. In natural units where c = 1, the energy–momentum equation reduces to E² = p² + m₀². In particle physics, energy is typically given in units of electron volts, momentum in units of eV·c⁻¹, and mass in units of eV·c⁻². Energy may also in theory be expressed in units of grams; for example, the first atomic bomb liberated about 1 gram of heat, and the largest thermonuclear bombs have generated a kilogram or more of heat.
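A quick numeric check of the relation E² = (pc)² + (m₀c²)², in natural units (c = 1) and with illustrative values rather than data for any real particle:

```python
import math

def total_energy(p, m0):
    """Total energy from the energy-momentum relation, natural units (c=1):
    E = sqrt(p^2 + m0^2)."""
    return math.sqrt(p**2 + m0**2)

def invariant_mass(E, p):
    """Rest mass recovered from energy and momentum: m0 = sqrt(E^2 - p^2)."""
    return math.sqrt(E**2 - p**2)

# Illustrative: a particle of rest mass 3 with momentum 4 (any consistent
# energy units) has total energy 5.
E = total_energy(4.0, 3.0)
print(E)                       # 5.0
print(invariant_mass(E, 4.0))  # 3.0 -- the same in every inertial frame

# Massless particle: the relation reduces to E = p (i.e. E = pc with c = 1).
print(total_energy(2.5, 0.0))  # 2.5
```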
Relativistic energy-momentum equation
108.
Pedagogy
–
Pedagogy is the discipline that deals with the theory and practice of education; it thus concerns the study of how best to teach. Spanning a broad range of practice, its aims range from furthering liberal education to the narrower specifics of vocational education. Instructive strategies are governed by the pupil's background knowledge and experience, situation, and environment, as well as the learning goals set by the student and teacher. One example would be the Socratic schools of thought. The teaching of adults, as a specific group, is referred to as andragogy. Johann Friedrich Herbart is regarded as the father of the conceptualization of pedagogy, or the theory of education. Herbart's educational philosophy and pedagogy highlighted the correlation between personal development and the resulting benefits to society; in other words, Herbart proposed that humans become fulfilled once they establish themselves as productive citizens. Herbartianism refers to the movement underpinned by Herbart's theoretical perspectives. Referring to the teaching process, Herbart suggested five steps as crucial components: preparation, presentation, association, generalization, and application. Herbart suggests that pedagogy relates to having assumptions as an educator and a specific set of abilities with a deliberate end goal in mind. The word is a derivative of the Greek παιδαγωγία (paidagōgia), from παιδαγωγός (paidagōgos), itself a synthesis of ἄγω ("I lead") and παῖς ("child"): "to lead a child". It is pronounced variously, as /ˈpɛdəɡɒdʒi/, /ˈpɛdəɡoʊdʒi/, or /ˈpɛdəɡɒɡi/. Negative connotations of pedantry have sometimes been intended, or taken. A Doctor of Pedagogy is awarded honorarily by some US universities to distinguished teachers; the term is also used to denote an emphasis in education as a specialty in a field. In Denmark, a pedagogue is a practitioner of pedagogy; the term is used for individuals who occupy jobs in pre-school education in Scandinavia, but a pedagogue can occupy various other kinds of jobs, e.g. in retirement homes, prisons, and orphanages. These practitioners are often recognized as social pedagogues, as they perform on behalf of society. The pedagogue's job is usually distinguished from a teacher's by primarily focusing on teaching children life-preparing knowledge such as social skills; there is also a strong focus on the care and well-being of the child. Many pedagogical institutions also practice social inclusion. The pedagogue's work also consists of supporting the child in their mental and social development. In Denmark all pedagogues are educated at a series of institutes for social educators located in all major cities.
Pedagogy
–
Douris Man with wax tablet
Pedagogy
109.
Atomic nuclei
–
After the discovery of the neutron in 1932, models for a nucleus composed of protons and neutrons were quickly developed by Dmitri Ivanenko and Werner Heisenberg. Almost all of the mass of an atom is located in the nucleus, where protons and neutrons are bound together by the nuclear force. The diameter of the nucleus is in the range of 1.75 fm for hydrogen to about 15 fm for the heaviest atoms; these dimensions are much smaller than the diameter of the atom itself, by a factor of about 23,000 to about 145,000. The branch of physics concerned with the study and understanding of the nucleus, including its composition and the forces which bind it together, is called nuclear physics. The nucleus was discovered in 1911, as a result of Ernest Rutherford's efforts to test Thomson's "plum pudding" model of the atom; the electron had already been discovered earlier by J. J. Thomson himself. Knowing that atoms are electrically neutral, Thomson postulated that there must be a positive charge as well; in his plum pudding model, he suggested that an atom consisted of negative electrons randomly scattered within a sphere of positive charge. To Rutherford's surprise, many of the particles in his scattering experiments were deflected at very large angles; this justified the idea of an atom with a dense center of positive charge. The term nucleus is from the Latin word nucleus, a diminutive of nux ("nut"). In 1844, Michael Faraday used the term to refer to the central point of an atom; the modern atomic meaning was proposed by Ernest Rutherford in 1912, though the adoption of the term into atomic theory was not immediate. The nuclear strong force extends far enough from each baryon to bind the neutrons and protons together against the repulsive electrical force between the positively charged protons. The nuclear strong force has a very short range, and essentially drops to zero just beyond the edge of the nucleus. 
The collective action of the positively charged nucleus is to hold the negatively charged electrons in their orbits about the nucleus. The collection of electrons orbiting the nucleus displays an affinity for certain configurations and numbers of electrons that make their orbits stable. Which chemical element an atom represents is determined by the number of protons in the nucleus; the neutral atom will have an equal number of electrons orbiting that nucleus. Individual chemical elements can create more stable electron configurations by combining to share their electrons, and it is that sharing of electrons to create stable electronic orbits about the nuclei that appears to us as the chemistry of our macroscopic world. Protons define the entire charge of a nucleus, and hence its chemical identity; neutrons are electrically neutral, but contribute to the mass of a nucleus to nearly the same extent as the protons. Neutrons explain the phenomenon of isotopes: varieties of the same chemical element which differ only in their atomic mass. Protons and neutrons are sometimes viewed as two different quantum states of the same particle, the nucleon.
Atomic nuclei
–
Nuclear physics
110.
Conservation of energy
–
In physics, the law of conservation of energy states that the total energy of an isolated system remains constant: it is said to be conserved over time. Energy can neither be created nor destroyed; rather, it transforms from one form to another. For instance, chemical energy can be converted to kinetic energy in the explosion of a stick of dynamite. A consequence of the law of conservation of energy is that a perpetual motion machine of the first kind cannot exist: that is to say, no system without an external energy supply can deliver an unlimited amount of energy to its surroundings. Ancient philosophers as far back as Thales of Miletus (c. 550 BCE) had inklings of the conservation of some underlying substance of which everything is made. However, there is no particular reason to identify this with what we know today as mass–energy. Empedocles wrote that in his universal system, composed of four roots, "nothing comes to be or perishes"; instead, these elements suffer continual rearrangement. In 1605, Simon Stevinus was able to solve a number of problems in statics based on the principle that perpetual motion was impossible. Galileo essentially pointed out that the height a moving body rises is equal to the height from which it falls, and used this observation to infer the idea of inertia. The remarkable aspect of this observation is that the height to which a moving body ascends does not depend on the shape of the surface that the body is moving on. In 1669, Christiaan Huygens published his laws of collision; among the quantities he listed as being invariant before and after the collision of bodies were both the sum of their linear momenta and the sum of their kinetic energies. However, the difference between elastic and inelastic collisions was not understood at the time, and this led to a dispute among later researchers as to which of these conserved quantities was the more fundamental. 
In his Horologium Oscillatorium, Huygens gave a much clearer statement regarding the height of ascent of a moving body; his study of the dynamics of pendulum motion was based on a single principle: that the center of gravity of a heavy object cannot lift itself. The fact that kinetic energy is scalar, unlike linear momentum which is a vector, and hence easier to work with, did not escape the attention of Gottfried Wilhelm Leibniz. It was Leibniz, during 1676–1689, who first attempted a mathematical formulation of the kind of energy which is connected with motion. Using Huygens' work on collision, Leibniz noticed that in many mechanical systems the sum of the masses multiplied by their squared velocities was conserved so long as the masses did not interact. He called this quantity the vis viva or "living force" of the system. The principle represents an accurate statement of the approximate conservation of kinetic energy in situations where there is no friction. Many physicists at that time, such as Newton, held instead that the conservation of momentum, which holds even in systems with friction, identified the conserved quantity. It was later shown that both quantities are conserved simultaneously, given the proper conditions, such as an elastic collision. In 1687, Isaac Newton published his Principia, which was organized around the concept of force and momentum.
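The observation that a body rises to the height from which it fell is the mechanical-energy statement E = ½mv² + mgh = constant. A small numeric sketch with illustrative mass and height values:

```python
G = 9.81  # m/s^2, approximate surface gravity, assumed constant

def mechanical_energy(m, v, h):
    """Kinetic plus gravitational potential energy, in joules."""
    return 0.5 * m * v**2 + m * G * h

# Illustrative: a 2 kg body released from rest at 10 m.
m, h0 = 2.0, 10.0
E_top = mechanical_energy(m, 0.0, h0)

# Speed at the ground from energy conservation: all potential -> kinetic,
# v = sqrt(2 g h).
v_ground = (2 * G * h0) ** 0.5
E_bottom = mechanical_energy(m, v_ground, 0.0)

print(round(E_top, 6))     # 196.2 J at the top
print(round(E_bottom, 6))  # 196.2 J at the bottom -- unchanged, absent friction
```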
Conservation of energy
–
Gottfried Leibniz
Conservation of energy
–
Gaspard-Gustave Coriolis
Conservation of energy
–
James Prescott Joule
111.
Mass in general relativity
–
The concept of mass in general relativity is more complex than the concept of mass in special relativity. In fact, general relativity does not offer a definition of the term mass. Under some circumstances, the mass of a system in general relativity may not even be defined, concisely, in fundamental units where c =1, the mass of a system in special relativity is the norm of its energy–momentum four vector, otherwise, it is m c. Generalizing this definition to general relativity, however, is problematic, in fact, how, then, does one define a concept as a systems total mass – which is easily defined in classical mechanics. In a way, the ADM energy measures all of the contained in spacetime. Several kinds of proofs that both the ADM mass and the Bondi mass are indeed positive exist, in particular, this means that Minkowski space is indeed stable. However, while there is a variety of proposed definitions such as the Hawking energy, the Geroch energy or Penroses quasi-local energy–momentum based on twistor methods, a non-technical definition of a stationary spacetime is a spacetime where none of the metric coefficients g μ ν are functions of time. The Schwarzschild metric of a hole and the Kerr metric of a rotating black hole are common examples of stationary spacetimes. By definition, a stationary spacetime exhibits time translation symmetry and this is technically called a time-like Killing vector. Because the system has a translation symmetry, Noethers theorem guarantees that it has a conserved energy. Because a stationary system also has a well defined rest frame in which its momentum can be considered to be zero, in general relativity, this mass is called the Komar mass of the system. Komar mass can only be defined for stationary systems, Komar mass can also be defined by a flux integral. This is similar to the way that Gausss law defines the charge enclosed by a surface as the electric force multiplied by the area. 
The flux integral used to define Komar mass is slightly different from that used to define the electric field, however: the normal component of the integrand is not the actual force. See the main article for more detail. Of the two definitions, the description of Komar mass in terms of a time translation symmetry provides the deeper insight. For systems in which spacetime is asymptotically flat, the ADM and Bondi energy, momentum, and mass can be defined; such space-times are known as asymptotically flat space-times. Note that mass is computed as the length of the energy–momentum four-vector: when the total momentum Pi = 0, the mass of the system is just E/c2.
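The special-relativistic starting point above (mass as the norm of the energy–momentum four-vector) can be illustrated numerically. This is a minimal sketch in units where c = 1; the function name `invariant_mass` is illustrative, not from the source.

```python
import math

def invariant_mass(E, px, py, pz):
    """Norm of the energy-momentum four-vector in units where c = 1:
    m^2 = E^2 - |p|^2 (metric signature +---)."""
    m2 = E**2 - (px**2 + py**2 + pz**2)
    if m2 < 0:
        raise ValueError("spacelike four-momentum: no rest mass")
    return math.sqrt(m2)

# A particle at rest: mass equals its energy (c = 1).
print(invariant_mass(5.0, 0.0, 0.0, 0.0))  # 5.0

# A moving particle: E = 5, |p| = 3, so m = 4 (a 3-4-5 triple).
print(invariant_mass(5.0, 3.0, 0.0, 0.0))  # 4.0
```

The point of the section is that no such single formula survives the passage to general relativity, which is why several inequivalent mass definitions (ADM, Bondi, Komar) coexist.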
Mass in general relativity
–
General relativity
112.
Inertial mass
–
In physics, mass is a property of a physical body: it is the measure of an object's resistance to acceleration when a net force is applied, and it also determines the strength of its gravitational attraction to other bodies. The basic SI unit of mass is the kilogram. Mass is not the same as weight, even though mass is often determined by measuring an object's weight using a spring scale rather than comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity; this is because weight is a force, while mass is the property that determines the strength of this force. In Newtonian physics, mass can be generalized as the amount of matter in an object; however, at very high speeds, special relativity shows that energy is an additional source of mass. Thus, any body having mass has an equivalent amount of energy. In addition, matter is only loosely defined in science. There are several distinct phenomena which can be used to measure mass: active gravitational mass measures the gravitational force exerted by an object; passive gravitational mass measures the force exerted on an object in a known gravitational field; and inertial mass determines an object's acceleration in the presence of an applied force. According to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by F/m. A body's mass also determines the degree to which it generates or is affected by a gravitational field; this is sometimes referred to as gravitational mass. The standard International System of Units unit of mass is the kilogram. The kilogram is 1000 grams, first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. Then in 1889, the kilogram was redefined as the mass of the prototype kilogram. As of January 2013, there were proposals for redefining the kilogram yet again.
In particle physics, mass typically has units of eV/c2; the electronvolt and its multiples, such as the MeV, are commonly used. The atomic mass unit is 1/12 of the mass of a carbon-12 atom and is convenient for expressing the masses of atoms and molecules. Outside the SI system, other units of mass include the slug, an Imperial unit of mass, and the pound, a unit of both mass and force, used mainly in the United States.
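The distinction the section draws between mass and weight, and Newton's second law a = F/m, can be sketched as follows. This is a minimal illustration; the function names and the approximate surface-gravity values are assumptions for the example.

```python
def acceleration(force_n, mass_kg):
    """Newton's second law: a = F / m."""
    return force_n / mass_kg

def weight(mass_kg, g):
    """Weight is a force: W = m * g. The mass itself is unchanged."""
    return mass_kg * g

m = 10.0                        # kg, the same on Earth and on the Moon
g_earth, g_moon = 9.81, 1.62    # m/s^2, approximate surface gravities

print(weight(m, g_earth))       # ~98.1 N on Earth
print(weight(m, g_moon))        # ~16.2 N on the Moon: less weight, same mass
print(acceleration(20.0, m))    # 2.0 m/s^2 from a 20 N net force
```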
Inertial mass
–
Depiction of early balance scales in the Papyrus of Hunefer (dated to the 19th dynasty, ca. 1285 BC). The scene shows Anubis weighing the heart of Hunefer.
Inertial mass
–
The kilogram is one of the seven SI base units and one of three which is defined ad hoc (i.e. without reference to another base unit).
Inertial mass
–
Galileo Galilei (1636)
Inertial mass
–
Distance traveled by a freely falling ball is proportional to the square of the elapsed time
113.
Nonlinear system
–
In mathematics and the physical sciences, a nonlinear system is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, physicists and mathematicians because nonlinear systems may appear chaotic, unpredictable or counterintuitive, contrasting with the much simpler linear systems. In other words, in a nonlinear system of equations, the equations to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear regardless of whether or not known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives. As nonlinear equations are difficult to solve, nonlinear systems are commonly approximated by linear equations. This works well up to some accuracy and some range for the input values. It follows that some aspects of the behavior of a nonlinear system commonly appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is not random. For example, some aspects of the weather are seen to be chaotic, and this nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology. Some authors use the term nonlinear science for the study of nonlinear systems; this is disputed by others: using a term like nonlinear science is like referring to the bulk of zoology as the study of non-elephant animals. In mathematics, a linear function f(x) is one which satisfies both of the following properties: additivity (or superposition), f(x + y) = f(x) + f(y), and homogeneity, f(αx) = αf(x). Additivity implies homogeneity for any rational α and, for continuous functions, for any real α; for a complex α, homogeneity does not follow from additivity. For example, an antilinear map is additive but not homogeneous.
The equation f(x) = C is called homogeneous if C = 0; if f contains differentiation with respect to x, the result will be a differential equation. Nonlinear algebraic equations, which are also called polynomial equations, are defined by equating polynomials to zero, for example x2 + x − 1 = 0. For a single equation, root-finding algorithms can be used to find solutions to the equation. However, systems of algebraic equations are more complicated; their study is one motivation for the field of algebraic geometry. It is even difficult to decide whether a given algebraic system has complex solutions.
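The two ideas above, that a nonlinear map violates additivity and that single polynomial equations yield to root-finding algorithms, can both be shown on the example equation x2 + x − 1 = 0. This is a minimal sketch using bisection (one of many root-finding algorithms, chosen here for simplicity); the function names are illustrative.

```python
def f(x):
    # the example polynomial from the text: f(x) = x^2 + x - 1
    return x * x + x - 1.0

# f is nonlinear: additivity f(x + y) = f(x) + f(y) fails.
print(f(1 + 2), f(1) + f(2))  # 11.0 versus 6.0

def bisect(func, lo, hi, tol=1e-12):
    """Bisection root-finder; assumes func changes sign on [lo, hi]."""
    assert func(lo) * func(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if func(lo) * func(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

root = bisect(f, 0.0, 1.0)
print(root)  # ~0.618034, i.e. (sqrt(5) - 1) / 2
```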
Nonlinear system
–
Linearizations of a pendulum
114.
Higgs boson
–
The Higgs boson is an elementary particle in the Standard Model of particle physics. It is the excitation of the Higgs field, a fundamental field of crucial importance to particle physics theory, first suspected to exist in the 1960s. Unlike other known fields such as the electromagnetic field, it has a non-zero constant value in vacuum. The question of the Higgs field's existence became the last unverified part of the Standard Model of particle physics, and its confirmation also resolves several other long-standing puzzles, such as the reason for the weak force's extremely short range. Although the Higgs field is believed to permeate the entire Universe, it can in principle be proved to exist by detecting its excitations, which manifest as Higgs particles, but these are extremely difficult to produce and to detect. On 4 July 2012, the discovery of a new particle with a mass between 125 and 127 GeV/c2 was announced; physicists suspected that it was the Higgs boson, and this also means it is the first elementary scalar particle discovered in nature. The Higgs boson is named after Peter Higgs, one of six physicists who proposed the mechanism in the 1964 PRL symmetry breaking papers. On 10 December 2013, two of them, Peter Higgs and François Englert, were awarded the Nobel Prize in Physics for their work and prediction, although Higgs's name has come to be associated with a theory to which several researchers contributed. In the Standard Model, the Higgs particle is a boson with no spin, electric charge, or colour charge. It is also unstable, decaying into other particles almost immediately. It is an excitation of one of the four components of the Higgs field. The latter constitutes a scalar field, with two neutral and two electrically charged components that form a complex doublet of the weak isospin SU(2) symmetry. The Higgs field is tachyonic, which does not refer to faster-than-light speeds: the Higgs field has a Mexican-hat-shaped potential with nonzero strength everywhere, which in its vacuum state breaks the weak isospin symmetry of the electroweak interaction.
When this happens, three components of the Higgs field are absorbed by the SU(2) and U(1) gauge bosons to become the longitudinal components of the now-massive W and Z bosons of the weak force. The remaining electrically neutral component either manifests as a Higgs particle, or can couple separately to other particles known as fermions. Some versions of the theory predicted more than one kind of Higgs field; alternative Higgsless models might have been considered if the Higgs boson had not been discovered. In the Standard Model, the forces in nature arise from properties of our universe called gauge invariance, and the forces themselves are transmitted by particles known as gauge bosons. Gauge field theories had been used with great success in understanding the electromagnetic field and the strong force. The problem was that the symmetry requirements in gauge theory predicted that both electromagnetism's gauge boson and the weak force's gauge bosons should have zero mass.
Higgs boson
–
Large Hadron Collider tunnel at CERN
Higgs boson
–
Candidate Higgs boson events from collisions between protons in the LHC. The top event in the CMS experiment shows a decay into two photons (dashed yellow lines and green towers). The lower event in the ATLAS experiment shows a decay into 4 muons (red tracks).
Higgs boson
–
The six authors of the 1964 PRL papers, who received the 2010 J. J. Sakurai Prize for their work. From left to right: Kibble, Guralnik, Hagen, Englert, Brout. Right: Higgs.
Higgs boson
115.
Particle
–
A particle is a minute fragment or quantity of matter. In the physical sciences, a particle is a small localized object to which several physical or chemical properties, such as volume or mass, can be ascribed. Particles can also be used to create models of even larger objects, depending on their density. The term particle is rather general in meaning and is refined as needed by various scientific fields; something that is composed of particles may be referred to as being particulate. However, the noun particulate is most frequently used to refer to pollutants in the Earth's atmosphere. The concept of particles is particularly useful when modelling nature, as the full treatment of many phenomena can be complex; it can be used to make simplifying assumptions concerning the processes involved. Francis Sears and Mark Zemansky, in University Physics, give the example of calculating the landing location and speed of a baseball thrown in the air. The treatment of large numbers of particles is the realm of statistical physics. The term particle is usually applied differently to three classes of sizes. The term macroscopic particle usually refers to particles much larger than atoms and molecules; these are usually abstracted as point-like particles, even though they have volumes, shapes and structures. Examples of macroscopic particles would include powder, dust, sand, pieces of debris during a car accident, or even objects as big as the stars of a galaxy. Another type, microscopic particles, usually refers to particles of sizes ranging from atoms to molecules, such as carbon dioxide molecules and nanoparticles; these particles are studied in chemistry, as well as in atomic and molecular physics. The smallest of particles are the subatomic particles, which refer to particles smaller than atoms. These particles are studied in particle physics. Because of their extremely small size, the study of microscopic and subatomic particles falls in the realm of quantum mechanics.
Particles can also be classified according to composition. Composite particles refer to particles that have composition, that is, particles which are made of other particles; for example, a carbon-14 atom is made of six protons, eight neutrons and six electrons. By contrast, elementary particles refer to particles that are not made of other particles. According to our current understanding of the world, only a very small number of these exist, such as the leptons, quarks and gluons; however, it is possible that some of these might turn out to be composite particles after all. While composite particles can very often be considered point-like, elementary particles are truly point-like. Both elementary and composite particles are known to undergo particle decay.
Particle
–
Arc welders need to protect themselves from welding sparks, which are heated metal particles that fly off the welding surface. Different particles are formed at different temperatures.
Particle
–
Galaxies are so large that stars can be considered particles relative to them
116.
Phase transition
–
The term phase transition is most commonly used to describe transitions between solid, liquid and gaseous states of matter, and, in rare cases, plasma. A phase of a thermodynamic system and the states of matter have uniform physical properties. For example, a liquid may become gas upon heating to the boiling point; the measurement of the external conditions at which the transformation occurs is termed the phase transition. Phase transitions are common in nature and used today in many technologies. Examples include: a eutectic transformation, in which a two-component single-phase liquid is cooled and transforms into two solid phases (the same process, but beginning with a solid instead of a liquid, is called a eutectoid transformation); a peritectic transformation, in which a two-component single-phase solid is heated and transforms into a solid phase and a liquid phase; a spinodal decomposition, in which a single phase is cooled and separates into two different compositions of that same phase; a transition to a mesophase between solid and liquid, such as one of the liquid crystal phases; the transition between the ferromagnetic and paramagnetic phases of magnetic materials at the Curie point; the transition between differently ordered, commensurate or incommensurate, magnetic structures, such as in cerium antimonide; the martensitic transformation, which occurs as one of the many phase transformations in carbon steel and stands as a model for displacive phase transformations; changes in the crystallographic structure, such as between ferrite and austenite of iron; order-disorder transitions, such as in alpha-titanium aluminides; the dependence of the adsorption geometry on coverage and temperature, such as for hydrogen on iron; the emergence of superconductivity in certain metals and ceramics when cooled below a critical temperature (the superfluid transition in liquid helium is an example of this); and the breaking of symmetries in the laws of physics during the history of the universe as its temperature cooled. Isotope fractionation also occurs during a phase transition: the ratio of light to heavy isotopes in the involved molecules changes.
When water vapor condenses, the heavier water isotopes become enriched in the liquid phase while the lighter isotopes tend toward the vapor phase. Phase transitions occur when the thermodynamic free energy of a system is non-analytic for some choice of thermodynamic variables; this condition generally stems from the interactions of a large number of particles in a system. It is important to note that phase transitions can occur and are defined for non-thermodynamic systems as well; examples include quantum phase transitions, dynamic phase transitions, and topological phase transitions. In these types of transition, other parameters take the place of temperature.
Phase transition
–
A small piece of rapidly melting solid argon simultaneously shows the transitions from solid to liquid and liquid to gas.
Phase transition
–
This diagram shows the nomenclature for the different phase transitions.
117.
Superluminal
–
Faster-than-light (FTL) communication and travel refer to the propagation of information or matter faster than the speed of light. The special theory of relativity implies that only particles with zero rest mass may travel at the speed of light. Although according to current theories matter is still required to travel subluminally with respect to the locally distorted spacetime region, apparent FTL is not excluded by general relativity; examples of apparent FTL proposals are the Alcubierre drive and the traversable wormhole. This is not quite the same as traveling faster than light, since some processes propagate faster than c but cannot carry information. Neither of these phenomena violates special relativity or creates problems with causality. In the following examples, certain influences may appear to travel faster than light, but they do not convey energy or information faster than light, so they do not violate special relativity. For an Earthbound observer, objects in the sky complete one revolution around the Earth in one day. Proxima Centauri, the nearest star outside the solar system, is about four light-years away. In a geostatic view, Proxima Centauri has a speed many times greater than c, as the rim speed of an object moving in a circle is a product of the radius and angular speed. It is also possible in a geostatic view for objects such as comets to vary their speed from subluminal to superluminal. Comets may have orbits which take them out to more than 1000 AU. The circumference of a circle with a radius of 1000 AU is greater than one light day; in other words, a comet at such a distance is superluminal in a geostatic, and therefore non-inertial, frame. If a laser beam is swept across a distant object, the spot of light can easily be made to move across the object at a speed greater than c.
Similarly, a shadow projected onto a distant object can be made to move across the object faster than c. In neither case does the light travel from the source to the object faster than c, nor does any information travel faster than light. However, the uniform motion of a source may be removed with a change in reference frame, causing the direction of its static field to change immediately. This is not a change of position which propagates, and thus this change cannot be used to transmit information from the source; no information or matter can be FTL-transmitted or propagated from source to receiver/observer by an electromagnetic field. The rate at which two objects in motion in a single frame of reference get closer together is called the mutual or closing speed. This may approach twice the speed of light, as in the case of two particles travelling at close to the speed of light in opposite directions with respect to the reference frame. Imagine two fast-moving particles approaching each other from opposite sides of a particle accelerator of the collider type. The closing speed would be the rate at which the distance between the two particles is decreasing; from the point of view of an observer standing at rest relative to the accelerator, this rate will be slightly less than twice the speed of light. Special relativity does not prohibit this; it tells us, rather, that it is wrong to use Galilean relativity to compute the velocity of one of the particles as it would be measured by an observer traveling alongside the other particle.
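The closing-speed argument above can be made concrete. In units of c, the closing speed in the lab frame is simply u + v and may exceed 1, while the speed of one particle as measured alongside the other follows the relativistic velocity-addition formula (u + v)/(1 + uv/c2) and never exceeds 1. This is a minimal sketch; the function names are illustrative.

```python
def closing_speed(u, v):
    """Rate at which the gap between two approaching objects shrinks,
    as seen in the lab frame: simply u + v (may exceed c)."""
    return u + v

def relative_speed(u, v, c=1.0):
    """Speed of one particle as measured by an observer riding along
    with the other: relativistic velocity addition (u+v)/(1 + u*v/c^2)."""
    return (u + v) / (1.0 + u * v / c**2)

u = v = 0.9  # speeds in units of c
print(closing_speed(u, v))   # 1.8 c, allowed: no single object exceeds c
print(relative_speed(u, v))  # ~0.9945 c, still below c
```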
Superluminal
–
History of the universe - gravitational waves are hypothesized to arise from cosmic inflation, a faster-than-light expansion just after the Big Bang (17 March 2014).
118.
Ferromagnetism
–
Not to be confused with ferrimagnetism; for an overview see magnetism. Ferromagnetism is the basic mechanism by which certain materials form permanent magnets. In physics, several different types of magnetism are distinguished. An everyday example of ferromagnetism is a refrigerator magnet used to hold notes on a refrigerator door. The attraction between a magnet and ferromagnetic material is the quality of magnetism first apparent to the ancient world. Permanent magnets are either ferromagnetic or ferrimagnetic, as are the materials that are noticeably attracted to them. Only a few substances are ferromagnetic; the common ones are iron, nickel, cobalt and most of their alloys, some compounds of rare earth metals, and a few naturally occurring minerals, including some varieties of lodestone. Historically, the term ferromagnetism was used for any material that could exhibit spontaneous magnetization, and this general definition is still in common use. In particular, a material is ferromagnetic in the narrower sense only if all of its magnetic ions add a positive contribution to the net magnetization. If some of the magnetic ions subtract from the net magnetization, the material is ferrimagnetic; if the moments of the aligned and anti-aligned ions balance completely so as to have zero net magnetization, despite the magnetic ordering, then it is an antiferromagnet. These alignment effects only occur at temperatures below a certain critical temperature. Among the first investigations of ferromagnetism are the works of Aleksandr Stoletov on measurement of the magnetic permeability of ferromagnetics. The table on the right lists a selection of ferromagnetic and ferrimagnetic compounds. Ferromagnetism is a property not just of the chemical make-up of a material, but of its crystalline structure and microstructure.
There are ferromagnetic metal alloys whose constituents are not themselves ferromagnetic, called Heusler alloys; conversely, there are non-magnetic alloys, such as types of stainless steel, composed almost exclusively of ferromagnetic metals. Amorphous ferromagnetic metallic alloys can be made by rapid quenching of a liquid alloy. These have the advantage that their properties are nearly isotropic; this results in low coercivity, low hysteresis loss and high permeability. One such typical material is a transition metal-metalloid alloy, made from about 80% transition metal. A relatively new class of exceptionally strong ferromagnetic materials are the rare-earth magnets. They contain lanthanide elements that are known for their ability to carry large magnetic moments in well-localized f-orbitals. A number of actinide compounds are ferromagnets at room temperature or exhibit ferromagnetism upon cooling. PuP is a paramagnet with cubic symmetry at room temperature; in its ferromagnetic state, PuP's easy axis is in the <100> direction. In NpFe2 the easy axis is <111>; above TC ≈ 500 K, NpFe2 is also paramagnetic and cubic.
Ferromagnetism
–
A magnet made of alnico, an iron alloy, with its keeper. Ferromagnetism is the theory which explains how materials become magnets.
119.
Scalar field
–
In mathematics and physics, a scalar field associates a scalar value to every point in a space. The scalar may either be a pure mathematical number or a physical quantity. Examples used in physics include the temperature distribution throughout space and the pressure distribution in a fluid. These fields are the subject of scalar field theory. Mathematically, a scalar field on a region U is a real- or complex-valued function or distribution on U; a scalar field is a tensor field of order zero. Physically, a scalar field is additionally distinguished by having units of measurement associated with it. Scalar fields are contrasted with other physical quantities such as vector fields; more subtly, scalar fields are often contrasted with pseudoscalar fields. In physics, scalar fields often describe the potential energy associated with a particular force. The force is a vector field, which can be obtained as the gradient of the potential energy scalar field. Examples include potential fields, such as the Newtonian gravitational potential, and a temperature, humidity or pressure field, such as those used in meteorology. In quantum field theory, a scalar field is associated with spin-0 particles. The scalar field may be real or complex valued; complex scalar fields represent charged particles. These include the charged Higgs field of the Standard Model, as well as the charged pions mediating the strong nuclear interaction. In the Standard Model, a scalar Higgs field is used to give particles their mass; this mechanism is known as the Higgs mechanism, and a candidate for the Higgs boson was first detected at CERN in 2012. In scalar theories of gravitation, scalar fields are used to describe the gravitational field; scalar-tensor theories represent the gravitational interaction through both a tensor and a scalar. Such attempts are, for example, the Jordan theory as a generalization of the Kaluza–Klein theory. Scalar fields like the Higgs field can be found within scalar-tensor theories, using as scalar field the Higgs field of the Standard Model.
This field interacts gravitationally and Yukawa-like with the particles that get mass through it. Scalar fields are also found within superstring theories as dilaton fields, breaking the conformal symmetry of the string, though balancing the quantum anomalies of this tensor.
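The statement that a force field can be obtained as the gradient of a potential-energy scalar field can be sketched in one dimension with F = −dU/dx. This is a minimal numerical illustration, assuming a harmonic potential U(x) = ½kx²; the function names and the choice k = 2 are for the example only.

```python
def potential(x):
    """A scalar field: harmonic potential energy U(x) = 0.5 * k * x^2."""
    k = 2.0
    return 0.5 * k * x * x

def force(x, h=1e-6):
    """Force as the negative gradient of the scalar potential,
    F = -dU/dx, approximated by a central finite difference."""
    return -(potential(x + h) - potential(x - h)) / (2.0 * h)

# For U = 0.5 * 2 * x^2 the exact force is F = -2x.
print(force(1.5))  # ~ -3.0
```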
Scalar field
–
A scalar field such as temperature or pressure, where intensity of the field is represented by different hues of color.
120.
Complex number
–
A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit, satisfying the equation i2 = −1. In this expression, a is the real part and b is the imaginary part of the complex number. If z = a + bi, then ℜz = a and ℑz = b. Complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part. The complex number a + bi can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary, whereas a complex number whose imaginary part is zero is a real number. In this way, the complex numbers are a field extension of the ordinary real numbers. As well as their use within mathematics, complex numbers have practical applications in many fields, including physics, chemistry, biology, economics and electrical engineering. The Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers; he called them fictitious during his attempts to find solutions to cubic equations in the 16th century. Complex numbers allow solutions to certain equations that have no solutions in real numbers. For example, the equation (x + 1)2 = −9 has no real solution; complex numbers provide a solution to this problem. The idea is to extend the real numbers with the imaginary unit i, where i2 = −1. According to the fundamental theorem of algebra, all polynomial equations with real or complex coefficients in a single variable have a solution in complex numbers. A complex number is a number of the form a + bi; for example, −3.5 + 2i is a complex number. The real number a is called the real part of the complex number a + bi. By this convention the imaginary part does not include the imaginary unit: hence b, not bi, is the imaginary part. The real part of a complex number z is denoted by Re(z) or ℜ(z). For example, Re(−3.5 + 2i) = −3.5 and Im(−3.5 + 2i) = 2. Hence, in terms of its real and imaginary parts, a complex number z is equal to Re(z) + Im(z)·i.
This expression is known as the Cartesian form of z. A real number a can be regarded as a complex number a + 0i whose imaginary part is 0.
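The definitions above map directly onto Python's built-in complex type, which writes the imaginary unit as j. A short sketch of the real/imaginary parts, the defining property i2 = −1, and the example equation (x + 1)2 = −9:

```python
# Python's built-in complex type; the imaginary unit is written j.
z = -3.5 + 2j
print(z.real)  # -3.5, the real part a
print(z.imag)  # 2.0, the imaginary part b, without the unit

# The defining property of the imaginary unit: i^2 = -1.
print(1j * 1j)  # (-1+0j)

# (x + 1)^2 = -9 has no real solution, but x = -1 + 3i works:
x = -1 + 3j
print((x + 1) ** 2)  # (-9+0j)
```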
Complex number
–
A complex number can be visually represented as a pair of numbers (a, b) forming a vector on a diagram called an Argand diagram, representing the complex plane. "Re" is the real axis, "Im" is the imaginary axis, and i is the imaginary unit which satisfies i 2 = −1.
121.
Particle decay
–
Particle decay is the spontaneous process of one unstable subatomic particle transforming into multiple other particles. The particles created in this process must each be less massive than the original. A particle is unstable if there is at least one allowed final state that it can decay into; unstable particles will often have multiple ways of decaying, each with its own associated probability. Decays are mediated by one or several fundamental forces. The particles in the final state may themselves be unstable and subject to further decay. All data is from the Particle Data Group. Note that this section uses natural units, where c = ℏ = 1. The lifetime of a particle is given by the inverse of its decay rate, Γ. One may integrate over phase space to obtain the total decay rate for the specified final state. If a particle has multiple decay branches or modes with different final states, the branching ratio for each mode is given by its decay rate divided by the full decay rate. Say a parent particle of mass M decays into two particles, labeled 1 and 2. In the rest frame of the parent particle, |p→1| = |p→2| = (1/2M)·√([M2 − (m1 + m2)2][M2 − (m1 − m2)2]). Also, in spherical coordinates, d3p→ = |p→|2 d|p→| dϕ d(cos θ). The mass of an unstable particle is formally a complex number, with the real part being its mass in the usual sense and the imaginary part being its decay rate in natural units. When the imaginary part is large compared to the real part, the particle is usually considered a resonance rather than a particle. For a particle of mass M + iΓ, the particle can travel for a time of order 1/M, but decays after a time of order 1/Γ. If Γ > M then the particle usually decays before it completes its travel. See also: relativistic Breit–Wigner distribution, particle physics, list of particles, weak interaction.
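The two-body rest-frame momentum formula above can be evaluated directly. This is a minimal sketch in natural units (masses in MeV, c = 1); the pion and muon masses below are approximate figures used for illustration, and the function name is not from the source.

```python
import math

def two_body_momentum(M, m1, m2):
    """Magnitude of either daughter's momentum in the parent's rest frame:
    |p| = sqrt([M^2 - (m1+m2)^2] * [M^2 - (m1-m2)^2]) / (2*M)."""
    term = (M**2 - (m1 + m2) ** 2) * (M**2 - (m1 - m2) ** 2)
    if term < 0:
        raise ValueError("decay is kinematically forbidden")
    return math.sqrt(term) / (2.0 * M)

# Example: pi+ -> mu+ + nu, with approximate masses in MeV
# and the neutrino taken as massless.
p = two_body_momentum(139.57, 105.66, 0.0)
print(p)  # ~29.79 MeV
```

With m2 = 0 the formula reduces to |p| = (M2 − m12)/(2M), which is a quick sanity check on the general expression.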
Particle decay
–
...while in the Lab Frame the parent particle is probably moving at a speed close to the speed of light so the two emitted particles would come out at angles different from those in the center of momentum frame.
122.
Eigenvalue
–
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector whose direction does not change when that linear transformation is applied to it. This condition can be written as the equation T(v) = λv, where λ is a scalar known as the eigenvalue associated with the eigenvector v. There is a correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space to itself; for this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations. Geometrically, an eigenvector corresponding to a real eigenvalue points in a direction that is stretched by the transformation; if the eigenvalue is negative, the direction is reversed. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen for proper, inherent, own, individual, special, specific, peculiar, or characteristic. In essence, an eigenvector v of a linear transformation T is a non-zero vector that does not change direction when T is applied to it: applying T to the eigenvector only scales the eigenvector by the scalar value λ. This condition can be written as the equation T(v) = λv, referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar; for example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. The Mona Lisa example pictured at right provides a simple illustration: each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping; the vectors pointing to each point in the original image are tilted right or left and made longer or shorter by the transformation.
Notice that points along the horizontal axis do not move at all when this transformation is applied; therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces; alternatively, the transformation can take the form of an n-by-n matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix. The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace or characteristic space of T. If the set of eigenvectors of T forms a basis of the domain of T, then T is said to be diagonalizable. Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms: in the 18th century, Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.
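The shear-mapping observation above can be checked by hand with a 2-by-2 matrix. This is a minimal sketch (no linear-algebra library), assuming the standard horizontal shear matrix ((1, 1), (0, 1)); the helper name `apply` is illustrative.

```python
def apply(matrix, v):
    """Apply a 2x2 matrix (given as two row tuples) to a 2-vector."""
    (a, b), (c, d) = matrix
    return (a * v[0] + b * v[1], c * v[0] + d * v[1])

shear = ((1.0, 1.0),
         (0.0, 1.0))  # horizontal shear, as in the Mona Lisa example

# A horizontal vector is an eigenvector: the shear leaves it unchanged,
# so its eigenvalue is 1.
print(apply(shear, (1.0, 0.0)))  # (1.0, 0.0)

# A vector with a vertical component is tilted, so it is not an eigenvector.
print(apply(shear, (0.0, 1.0)))  # (1.0, 1.0)
```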
Eigenvalue
–
In this shear mapping the red arrow changes direction but the blue arrow does not. The blue arrow is an eigenvector of this shear mapping because it doesn't change direction, and since its length is unchanged, its eigenvalue is 1.
123.
Dark energy
–
In physical cosmology and astronomy, dark energy is an unknown form of energy which is hypothesized to permeate all of space, tending to accelerate the expansion of the universe. Dark energy is the most accepted hypothesis to explain the observations since the 1990s indicating that the universe is expanding at an accelerating rate. Assuming that the standard model of cosmology is correct, the best current measurements indicate that dark energy contributes 68.3% of the total energy in the present-day observable universe; the mass–energy of dark matter and ordinary matter contribute 26.8% and 4.9%, respectively. The density of dark energy is very low, much less than the density of ordinary matter or dark matter within galaxies; however, it comes to dominate the mass–energy of the universe because it is uniform across space. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant can be formulated to be equivalent to the zero-point radiation of space, i.e. the vacuum energy. Scalar fields that change in space can be difficult to distinguish from a cosmological constant because the change may be extremely slow. High-precision measurements of the expansion of the universe are required to understand how the expansion rate changes over time. In general relativity, the evolution of the expansion rate is estimated from the curvature of the universe and the cosmological equation of state; measuring the equation of state for dark energy is one of the biggest efforts in observational cosmology today. In Einstein's static-universe model, the equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion; likewise, a universe which contracts slightly will continue contracting. These sorts of disturbances are inevitable, due to the uneven distribution of matter throughout the universe.
Further, observations made by Edwin Hubble in 1929 showed that the universe appears to be expanding; Einstein reportedly referred to his failure to predict the idea of a dynamic universe, in contrast to a static universe, as his greatest blunder. Alan Guth and Alexei Starobinsky proposed in 1980 that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe. Inflation postulates that some repulsive force, qualitatively similar to dark energy, resulted in an enormous and exponential expansion of the universe slightly after the Big Bang. Such expansion is a feature of most current models of the Big Bang. It is unclear what relation, if any, exists between dark energy and inflation. Even after inflationary models became accepted, the cosmological constant was thought to be irrelevant to the current universe. Nearly all inflation models predict that the total density of the universe should be very close to the critical density. During the 1980s, most cosmological research focused on models with critical density in matter only, usually 95% cold dark matter. Then, in 2001, the 2dF Galaxy Redshift Survey gave strong evidence that the matter density is around 30% of critical.
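One widely used parametrization of the dark-energy equation of state, the CPL (Chevallier–Polarski–Linder) model mentioned among the common models in this entry, is simple enough to sketch numerically. The parameter values below are illustrative, not measured values:

```python
def w_cpl(z, w0=-1.0, wa=0.0):
    """Chevallier-Polarski-Linder (CPL) equation of state for dark
    energy: w(z) = w0 + wa * z / (1 + z).  With w0 = -1 and wa = 0
    it reduces to a cosmological constant (w = -1 at all redshifts).
    """
    return w0 + wa * z / (1.0 + z)

print(w_cpl(0.0))                   # -1.0 (cosmological constant today)
print(w_cpl(1.0, w0=-0.9, wa=0.2))  # an illustrative evolving model
```

Measuring whether w deviates from −1, and whether it evolves with redshift, is exactly the observational effort described above.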
Dark energy
–
Diagram representing the accelerated expansion of the universe due to dark energy.
Dark energy
–
A Type Ia supernova (bright spot on the bottom-left) near a galaxy
Dark energy
–
The equation of state of dark energy for four common models, as a function of redshift. A: CPL model; B: Jassal model; C: Barboza & Alcaniz model; D: Wetterich model
124.
Pressure
–
Pressure is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure is the pressure relative to the ambient pressure. Various units are used to express pressure. Pressure may also be expressed in terms of standard atmospheric pressure; the atmosphere (atm) is equal to this pressure, and the torr is defined as 1⁄760 of this. Manometric units such as the centimetre of water and the millimetre of mercury express pressure as the height of a column of a particular fluid. Pressure is the amount of force acting per unit area. The symbol for it is p or P; the IUPAC recommendation for pressure is a lower-case p, but upper-case P is also widely used. The usage of P vs p depends upon the field in which one is working and on the nearby presence of other symbols for quantities such as power and momentum. Mathematically, p = F/A, where p is the pressure, F is the magnitude of the normal force, and A is the area of the surface in contact. Pressure is a scalar quantity; it relates the vector surface element to the normal force acting on it. It is incorrect to say the pressure is directed in such or such direction: the pressure, as a scalar, has no direction, while the force given by the relationship does. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume. The SI unit for pressure is the pascal (Pa), equal to one newton per square metre; this name for the unit was added in 1971, before which pressure in SI was expressed simply in newtons per square metre. Other units of pressure, such as pounds per square inch, are also in use. The CGS unit of pressure is the barye, equal to 1 dyn·cm−2 or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre, but using the names kilogram, gram, kilogram-force, or gram-force as units of force is expressly forbidden in SI.
The technical atmosphere is 1 kgf/cm². Since a system under pressure has the potential to perform work on its surroundings, pressure is a measure of potential energy stored per unit volume. It is therefore related to energy density and may be expressed in units such as joules per cubic metre. Pressures are given in kilopascals in most other fields, where the hecto- prefix is rarely used.
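The defining relation p = F/A and the unit relationships above can be sketched numerically. The conversion factors below are the standard defined values; the force and area figures are purely illustrative:

```python
def pressure(force_n, area_m2):
    """p = F / A: pressure in pascals from a normal force (N) and an area (m^2)."""
    return force_n / area_m2

# Defined conversion factors mentioned in the text.
PA_PER_ATM = 101325.0            # standard atmosphere
PA_PER_TORR = 101325.0 / 760     # the torr is defined as 1/760 atm
PA_PER_BARYE = 0.1               # CGS barye, 1 dyn/cm^2
PA_PER_KGF_PER_CM2 = 98066.5     # technical atmosphere, 1 kgf/cm^2

p = pressure(1000.0, 0.25)       # 1000 N spread over 0.25 m^2
print(p)                         # 4000.0 Pa
```

Note that halving the area while keeping the force fixed doubles the pressure, which is why pressure, not force alone, determines whether a surface yields.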
Pressure
–
Mercury column
Pressure
–
Pressure as exerted by particle collisions inside a closed container.
Pressure
–
The effects of an external pressure of 700 bar on an aluminum cylinder with 5 mm wall thickness
Pressure
–
Low-pressure chamber in Bundesleistungszentrum Kienbaum, Germany
125.
International System of Quantities
–
The International System of Quantities (ISQ) is a system based on seven base quantities: length, mass, time, electric current, thermodynamic temperature, amount of substance, and luminous intensity. Other quantities such as area, pressure, and electrical resistance are derived from these base quantities by clear, non-contradictory equations. The ISQ defines the quantities that are measured with the SI units and also includes many other quantities in modern science and technology. The ISQ is defined in the international standard ISO/IEC 80000, and was finalised in 2009 with the publication of ISO 80000-1. The 14 parts of ISO/IEC 80000 define quantities used in disciplines such as mechanics, light, acoustics, electromagnetism, information technology, chemistry, and mathematics. A base quantity is a quantity in a subset of a given system of quantities that is chosen by convention. The ISQ defines seven base quantities; the symbols for them, as for other quantities, are written in italics. The dimension of a quantity does not include magnitude or units. The conventional symbolic representation of the dimension of a base quantity is a single upper-case letter in roman sans-serif type. A derived quantity is a quantity in a system of quantities that is defined in terms of the base quantities of that system. The ISQ defines many derived quantities; the conventional symbolic representation of the dimension of a derived quantity is the product of powers of the dimensions of the base quantities according to the definition of the derived quantity. The dimension of a quantity is denoted L^a M^b T^c I^d Θ^e N^f J^g, where a symbol may be omitted if its exponent is zero. For example, in the ISQ, the quantity dimension of velocity is denoted L T^−1. The following table lists some quantities defined by the ISQ. A quantity of dimension one is historically known as a dimensionless quantity; all its dimensional exponents are zero. Such a quantity can be regarded as the ratio of two quantities of the same dimension.
In the ISQ, the level of a quantity Q is defined as log_r(Q/Q₀), where r is the base of the logarithm and Q₀ is a reference value of the quantity; an example of a level is sound pressure level. All levels of the ISQ are derived quantities.
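The dimension algebra described above (products of powers of the seven base dimensions) can be modelled with tuples of exponents; multiplying two quantities adds their exponent tuples. A minimal sketch, with the base-dimension ordering chosen here for illustration:

```python
# The ISQ dimension of a quantity as a tuple of exponents over the
# seven base dimensions (L, M, T, I, Θ, N, J).  The dimension of a
# product of quantities is the sum of the exponent tuples.
BASES = ("L", "M", "T", "I", "Θ", "N", "J")

def dim(**exponents):
    """Build a dimension tuple, e.g. dim(L=1, T=-1) for velocity."""
    return tuple(exponents.get(b, 0) for b in BASES)

def mul(d1, d2):
    """Dimension of the product of two quantities."""
    return tuple(a + b for a, b in zip(d1, d2))

velocity = dim(L=1, T=-1)                        # L T^-1
force = mul(dim(M=1), mul(velocity, dim(T=-1)))  # M L T^-2
dimensionless = dim()                            # all exponents zero

print(velocity)  # (1, 0, -1, 0, 0, 0, 0)
```

A quantity of dimension one is exactly the all-zero tuple, matching the definition in the text.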
International System of Quantities
–
Base quantity
126.
Avogadro constant
–
In chemistry and physics, the Avogadro constant is the number of constituent particles, usually atoms or molecules, that are contained in the amount of substance given by one mole. Thus, it is the proportionality factor that relates the molar mass of a compound to the mass of a sample. Avogadro's constant, often designated with the symbol N_A or L, has the value 6.022140857×10²³ mol⁻¹ in the International System of Units; this number is also known as the Loschmidt constant in German literature. The constant was later redefined as the number of atoms in 12 grams of the isotope carbon-12. For instance, to a first approximation, 1 gram of hydrogen, having atomic number 1, has 6.022×10²³ hydrogen atoms. Similarly, 12 grams of ¹²C, with mass number 12, has the same number of carbon atoms. Avogadro's number is a dimensionless quantity, and has the same numerical value as the Avogadro constant given in base units. In contrast, the Avogadro constant has the dimension of reciprocal amount of substance. The Avogadro constant can also be expressed as 0.602214 mL·mol⁻¹·Å⁻³, which can be used to convert from volume per molecule in cubic ångströms to molar volume in millilitres per mole. Revisions in the base set of SI units necessitated redefinitions of the concepts of chemical quantity; Avogadro's number, and its definition, was deprecated in favor of the Avogadro constant. The French physicist Jean Perrin in 1909 proposed naming the constant in honor of Avogadro. Perrin won the 1926 Nobel Prize in Physics, largely for his work in determining the Avogadro constant by several different methods. Accurate determinations of Avogadro's number require the measurement of a single quantity on both the atomic and macroscopic scales using the same unit of measurement.
This became possible for the first time when American physicist Robert Millikan measured the charge on an electron in 1910. The electric charge per mole of electrons is a constant called the Faraday constant and had been known since 1834, when Michael Faraday published his works on electrolysis. By dividing the charge on a mole of electrons by the charge on a single electron, the value of Avogadro's number is obtained. Since 1910, newer calculations have more accurately determined the values for the Faraday constant and the elementary charge. Perrin originally proposed the name Avogadro's number to refer to the number of molecules in one gram-molecule of oxygen. With this recognition, the Avogadro constant was no longer a pure number, but had a unit of measurement, the reciprocal mole. While it is rare to use units of amount of substance other than the mole, the Avogadro constant can also be expressed in units such as the pound mole: N_A = 2.73159734×10²⁶ (lb-mol)⁻¹ = 1.707248434×10²⁵ (oz-mol)⁻¹. Avogadro's constant is a scaling factor between macroscopic and microscopic observations of nature. As such, it provides the relationship between other physical constants and properties. The Avogadro constant also enters into the definition of the atomic mass unit. The earliest accurate method to measure the value of the Avogadro constant was based on coulometry.
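The coulometric determination described above amounts to a single division: the Faraday constant over the elementary charge. The constants below are standard reference values, quoted here for illustration:

```python
# Dividing the charge on a mole of electrons (the Faraday constant)
# by the charge on a single electron yields Avogadro's number.
FARADAY_C_PER_MOL = 96485.33212        # Faraday constant, C/mol
ELEMENTARY_CHARGE_C = 1.602176634e-19  # elementary charge, C

N_A = FARADAY_C_PER_MOL / ELEMENTARY_CHARGE_C
print(f"{N_A:.4e}")   # ≈ 6.0221e+23 per mole
```

The precision of the result is limited by the less precisely known of the two inputs, which is why improved measurements of the Faraday constant and the elementary charge refined Avogadro's number over the twentieth century.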
Avogadro constant
–
Amedeo Avogadro
Avogadro constant
–
Achim Leistner at the Australian Centre for Precision Optics (ACPO) holding a one-kilogram single-crystal silicon sphere for the International Avogadro Coordination.
127.
University of Chicago Press
–
The University of Chicago Press is the largest and one of the oldest university presses in the United States. One of its quasi-independent projects is the BiblioVault, a repository for scholarly books. The Press building is located just south of the Midway Plaisance on the University of Chicago campus. The University of Chicago Press was founded in 1891, making it one of the oldest continuously operating university presses in the United States. Its first published book was Robert F. Harper's Assyrian and Babylonian Letters Belonging to the Kouyunjik Collections of the British Museum. For its first three years, the Press was an entity discrete from the university: it was operated by the Boston publishing house D. C. Heath in conjunction with the Chicago printer R. R. Donnelley. This arrangement proved unworkable, however, and in 1894 the university officially assumed responsibility for the Press. In 1902, as part of the university, the Press started working on the Decennial Publications, composed of articles and monographs by scholars and administrators on the state of the university and its faculty's research. The Decennial Publications prompted a radical reorganization of the Press. This allowed the Press, by 1905, to begin publishing books by scholars not of the University of Chicago. A manuscript editing and proofreading department was added to the staff of printers and typesetters, leading, in 1906, to the first edition of the Manual of Style. By 1931, the Press was an established, leading academic publisher; leading books of that era include Dr. Edgar J. Goodspeed's The New Testament: An American Translation and its successor, by Goodspeed and J. M. In 1956, the Press first published books under its paperback imprint. Of the Press's best-known books, most date from the 1950s, including translations of the Complete Greek Tragedies and Richmond Lattimore's The Iliad of Homer.
That decade also saw the first edition of A Greek-English Lexicon of the New Testament and Other Early Christian Literature. In 1966, Morris Philipson began his thirty-four-year tenure as director of the University of Chicago Press. As the Press's scholarly volume expanded, the Press also advanced as a trade publisher. In 1992, Norman Maclean's books A River Runs Through It and Young Men and Fire were national best sellers, and in 1982, Philipson was the first director of an academic press to win the Publisher Citation, one of PEN's most prestigious awards. Paula Barker Duffy served as director of the Press from 2000 to 2007; under her administration, the Press expanded its distribution operations and created the Chicago Digital Distribution Center and BiblioVault. The Press also launched an electronic work, The Chicago Manual of Style Online. Garrett P. Kiely became the 15th director of the University of Chicago Press on September 1, 2007. The Press publishes over 50 new trade titles per year, across many subject areas. It also publishes regional titles, such as The Encyclopedia of Chicago. The Press has recently expanded its digital offerings to include most newly published books as well as key backlist titles.
University of Chicago Press
–
University of Chicago Press
128.
Dialogue Concerning the Two Chief World Systems
–
The Dialogue Concerning the Two Chief World Systems is a 1632 Italian-language book by Galileo Galilei comparing the Copernican system with the traditional Ptolemaic system. It was translated into Latin as Systema cosmicum in 1635 by Matthias Bernegger. The book was dedicated to Galileo's patron, Ferdinando II de' Medici, Grand Duke of Tuscany, who received the first printed copy on February 22, 1632. In the Copernican system, the Earth and other planets orbit the Sun, while in the Ptolemaic system, everything in the universe circles around the Earth. The Dialogue was published in Florence under a formal license from the Inquisition. In 1633, Galileo was found to be "vehemently suspect of heresy" based on the book, and in an action that was not announced at the time, the publication of anything else he had written or ever might write was also banned in Catholic countries. While writing the book, Galileo referred to it as his Dialogue on the Tides, and when the manuscript went to the Inquisition for approval, the title was Dialogue on the Ebb and Flow of the Sea. As a result, the title on the title page is simply Dialogue, followed by Galileo's name and academic posts. This must be kept in mind when discussing Galileo's motives for writing the book. Although the book is presented formally as a consideration of both systems, there is no question that the Copernican side gets the better of the argument. Salviati argues for the Copernican position; he is named after Galileo's friend Filippo Salviati. Sagredo is an intelligent layman who is initially neutral; he is named after Galileo's friend Giovanni Francesco Sagredo. Simplicio, a dedicated follower of Ptolemy and Aristotle, presents the traditional views and the arguments against the Copernican position. He is supposedly named after Simplicius of Cilicia, a commentator on Aristotle. Colombe was the leader of a group of Florentine opponents of Galileo's. The discussion is not narrowly limited to astronomical topics, but ranges over much of contemporary science.
Some of this is to show what Galileo considered good science; other parts are important to the debate, answering erroneous arguments against the Earth's motion. A classic argument against the Earth's motion is the lack of any sensation of speed at the Earth's surface, even though the surface moves at great speed by the Earth's rotation. The bulk of Galileo's arguments may be divided into three classes; the first consists of rebuttals to the objections raised by traditional philosophers, for example, the thought experiment on the ship. Generally, these arguments have held up well in terms of the knowledge of the subsequent four centuries. Just how convincing they ought to have been to a reader in 1632 remains a contentious issue. Galileo also attempted a further class of argument: a direct physical argument for the Earth's motion, by means of an explanation of the tides. As an account of the causation of tides or a proof of the Earth's motion, it fails; the fundamental argument is internally inconsistent and actually leads to the conclusion that tides do not exist.
Dialogue Concerning the Two Chief World Systems
–
A copy of The Dialogo, Florence edition, located at the Tom Slick rare book collection at Southwest Research Institute, in Texas.
Dialogue Concerning the Two Chief World Systems
–
Frontispiece and title page of the Dialogue, 1632
Dialogue Concerning the Two Chief World Systems
–
Actual path of cannonball B is from C to D
129.
ArXiv
–
In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008; by 2014 the submission rate had grown to more than 8,000 per month. The arXiv was made possible by the low-bandwidth TeX file format. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Additional modes of access were added: FTP in 1991 and Gopher in 1992. The term e-print was quickly adopted to describe the articles. Its original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the rapidly expanding technology, in 1999 Ginsparg changed institutions to Cornell University, and it is now hosted principally by Cornell, with 8 mirrors around the world. Its existence was one of the factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists regularly upload their papers to arXiv.org for worldwide access. Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. The annual budget for arXiv is approximately $826,000 for 2013 to 2017, funded jointly by Cornell University Library and member institutions; annual donations were envisaged to vary in size between $2,300 and $4,000, based on each institution's usage. As of 14 January 2014, 174 institutions had pledged support for the period 2013–2017 on this basis. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it "was supposed to be a three-hour tour"; however, Ginsparg remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee.
The lists of moderators for many sections of the arXiv are publicly available. Additionally, an endorsement system was introduced in 2004 as part of an effort to ensure content that is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors, but to check whether the paper is appropriate for the intended subject area. New authors from recognized academic institutions generally receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for allegedly restricting scientific inquiry. Perelman appears content to forgo the traditional peer-reviewed journal process, stating, "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it." The arXiv generally re-classifies such works, e.g. into General mathematics, rather than deleting them. Papers can be submitted in any of several formats, including LaTeX, and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the software if generating the final PDF file fails or if any image file is too large. arXiv now allows one to store and modify an incomplete submission; the time stamp on the article is set when the submission is finalized.
ArXiv
–
arXiv
ArXiv
–
A screenshot of the arXiv taken in 1994, using the browser NCSA Mosaic. At the time, HTML forms were a new technology.
130.
Perseus Books
–
Perseus Books Group was an American publishing company founded in 1996 by investor Frank Pearl. It was named Publisher of the Year in 2007 by Publishers Weekly magazine for its role in taking on publishers formerly distributed by Publishers Group West. In April 2016, its publishing business was acquired by Hachette Book Group and its distribution business by Ingram Content Group. After the death of Frank Pearl, Perseus was sold to Centre Lane Partners. The Perseus Books Group currently has 12 imprints. Before Avalon Publishing Group was integrated into the Perseus Books Group, it published on 14 imprint presses; in 2007, some of these imprints were integrated into the Perseus Books Group, and Perseus also sold one of their imprints in the restructuring process. Its distribution businesses include Publishers Group West, founded in 1976, based in Berkeley; Consortium Book Sales and Distribution, founded in 1985, based in St. Paul, Minnesota; Perseus Distribution, founded in 1999, based in New York City; and Legato Publishers Group, founded in 2013, based in Chicago.
Perseus Books
–
Perseus Books Group
131.
Wikisource
–
Wikisource is an online digital library of free-content textual sources on a wiki, operated by the Wikimedia Foundation. Wikisource is the name of the project as a whole and the name for each instance of that project. The project's aims are to host all forms of free text, in many languages, and translations. Originally conceived as an archive to store useful or important historical texts, the project officially began on November 24, 2003 under the name Project Sourceberg. The name Wikisource was adopted later that year, and it received its own domain name seven months later. The project has come under criticism for lack of reliability, but it is also cited by organisations such as the National Archives and Records Administration. The project holds works that are either in the public domain or freely licensed: professionally published works or historical source documents, not vanity products. Verification was initially made offline, or by trusting the reliability of other digital libraries. Now works are supported by online scans via the ProofreadPage extension, and some individual Wikisources, each representing a specific language, now only allow works backed up with scans. While the bulk of its collection are texts, Wikisource as a whole hosts other media. Some Wikisources allow user-generated annotations, subject to the specific policies of the Wikisource in question. Wikisource's early history included several changes of name and location. The original concept for Wikisource was as storage for useful or important historical texts. These texts were intended to support Wikipedia articles by providing evidence and original source texts, and the collection was focused on important historical and cultural material. The project was originally called Project Sourceberg during its planning stages. In 2001, there was a dispute on Wikipedia regarding the addition of primary source material, leading to edit wars over their inclusion or deletion.
Project Sourceberg was suggested as a solution to this. Larry Sanger wrote: "perhaps Project Sourceberg can mainly work as an interface for easily linking from Wikipedia to a Project Gutenberg file, and as an interface for people to easily submit new work to PG. We'd want to complement Project Gutenberg--how, exactly?" And Jimmy Wales added: "like Larry, I'm interested that we think it over to see what we can add to Project Gutenberg. It seems unlikely that primary sources should in general be editable by anyone -- I mean, Shakespeare is Shakespeare, unlike our commentary on his work." The project began its activity at ps.wikipedia.org. The contributors understood the PS subdomain to mean either "primary sources" or Project Sourceberg; however, this resulted in Project Sourceberg occupying the subdomain of the Pashto Wikipedia. A vote on the name changed it to Wikisource on December 6, 2003. Despite the change in name, the project did not move to its permanent URL until July 23, 2004. Since Wikisource was initially called Project Sourceberg, its first logo was a picture of an iceberg.
Wikisource
–
The original Wikisource logo
Wikisource
–
Screenshot of wikisource.org home page
Wikisource
–
Original text
Wikisource
–
Action of the modernizing tool
132.
Frank Wilczek
–
Frank Anthony Wilczek is an American theoretical physicist, mathematician and Nobel laureate. Wilczek, along with David Gross and H. David Politzer, was awarded the Nobel Prize in Physics in 2004 for their discovery of asymptotic freedom in the theory of the strong interaction. He is on the Scientific Advisory Board for the Future of Life Institute. Born in Mineola, New York, of Polish and Italian origin, Wilczek was educated in the public schools of Queens. It was around this time that Wilczek's parents realized he was exceptional, as a result of his having been administered an IQ test. Wilczek holds the Herman Feshbach Professorship of Physics at the MIT Center for Theoretical Physics. He worked at the Institute for Advanced Study in Princeton and the Institute for Theoretical Physics at the University of California, Santa Barbara, and was also a visiting professor at NORDITA. Wilczek became a member of the Royal Netherlands Academy of Arts and Sciences. He was awarded the Lorentz Medal in 2002, and won the Lilienfeld Prize of the American Physical Society in 2003. In the same year he was awarded the Faculty of Mathematics and Physics Commemorative Medal from Charles University in Prague, and he was the co-recipient of the 2003 High Energy and Particle Physics Prize of the European Physical Society. Wilczek was also the co-recipient of the 2005 King Faisal International Prize for Science. On January 25, 2013, Wilczek received an honorary doctorate from the Faculty of Science and Technology at Uppsala University, Sweden. He currently serves on the board for the Society for Science & the Public. Wilczek has appeared on an episode of Penn & Teller: Bullshit!, where Penn referred to him as "the smartest person" they had ever had on the show. In 2014, Wilczek penned a letter, along with Stephen Hawking and two other scholars, warning that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."
The theory of asymptotic freedom, which was also independently discovered by H. David Politzer, was important for the development of quantum chromodynamics. Wilczek has helped reveal and develop axions, anyons, asymptotic freedom, and the color superconducting phases of quark matter, and he has worked on condensed matter physics, astrophysics, and particle physics. In 2012 he proposed the idea of a space-time crystal; in 2017, that proposal appeared to be borne out experimentally. His works include: A Beautiful Question: Finding Nature's Deep Design (Allen Lane, 2015); The Lightness of Being: Mass, Ether, and the Unification of Forces; Fantastic Realities: 49 Mind Journeys and a Trip to Stockholm; "On the world's numerical recipe", Daedalus 131, 142-47 (2002); Longing for the Harmonies: Themes and Variations from Modern Physics; and foraTV: The Large Hadron Collider and Unified Field Theory. A radio interview with Frank Wilczek aired on the Lewis Burke Frumkes Radio Show on April 10, 2011.
Frank Wilczek
–
Frank Wilczek
133.
SI base unit
–
The International System of Units defines seven units of measure as a basic set from which all other SI units can be derived. The SI base units form a set of mutually independent dimensions as required by the dimensional analysis commonly employed in science. The symbols of units named after persons are capitalized: thus, the kelvin, named after Lord Kelvin, has the symbol K, and the ampere, named after André-Marie Ampère, has the symbol A. Many other units, such as the litre, are not part of the SI. The definitions of the base units have been modified several times since the Metre Convention in 1875. Since the redefinition of the metre in 1960, the kilogram has been the only unit that is directly defined in terms of a physical artifact. However, the mole, the ampere, and the candela are linked through their definitions to the mass of the platinum–iridium cylinder stored in a vault near Paris. It has long been an objective in metrology to define the kilogram in terms of a fundamental constant; two possibilities have attracted particular attention: the Planck constant and the Avogadro constant. The 23rd CGPM decided to postpone any formal change until the next General Conference in 2011.
SI base unit
–
The seven SI base units and the interdependency of their definitions: for example, to extract the definition of the metre from the speed of light, the definition of the second must be known while the ampere and candela are both dependent on the definition of energy which in turn is defined in terms of length, mass and time.
134.
Length
–
In geometric measurements, length is the most extended dimension of an object. In the International System of Quantities, length is any quantity with the dimension of distance. In other contexts, length is the measured dimension of an object. For example, it is possible to cut a length of wire which is shorter than the wire's thickness. Length may be distinguished from height, which is vertical extent, and width or breadth, which is the distance from side to side. Length is a measure of one dimension, whereas area is a measure of two dimensions and volume is a measure of three dimensions. In most systems of measurement, the unit of length is a base unit. Measurement has been important ever since humans settled from nomadic lifestyles and started using building materials, occupying land and trading with neighbours. As society has become more technologically oriented, much higher accuracies of measurement are required in a diverse set of fields. One of the oldest units of measurement used in the ancient world was the cubit, which was the length of the arm from the tip of the finger to the elbow. This could then be subdivided into shorter units like the foot, hand or finger. The cubit could vary considerably due to the different sizes of people. After Albert Einstein's special relativity, length can no longer be thought of as being constant in all reference frames. Thus a ruler that is one metre long in one frame of reference will not be one metre long in a frame that is travelling at a velocity relative to the first frame. This means the measured length of an object depends on the observer. In the physical sciences and engineering, when one speaks of units of length, the word length is synonymous with distance. There are several units that are used to measure length. In the International System of Units, the basic unit of length is the metre, now defined in terms of the speed of light. The centimetre and the kilometre, derived from the metre, are also commonly used units. In U.S.
customary units, the English or Imperial system of units, commonly used units of length are the inch, the foot, the yard, and the mile. Units used to denote distances in the vastness of space, as in astronomy, are much longer than those typically used on Earth and include the astronomical unit and the light-year. See also: Dimension, Distance, Orders of magnitude (length), Reciprocal length, Smoot, Unit of length.
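The customary units mentioned above are tied to the metre by the exact definition of the international inch as 0.0254 m, with the foot, yard and mile as exact multiples. A short sketch (unit abbreviations chosen here for illustration):

```python
# Exact definitions: 1 in = 0.0254 m; larger customary units
# are integer multiples of the inch.
M_PER_INCH = 0.0254
M_PER_FOOT = 12 * M_PER_INCH       # 0.3048 m
M_PER_YARD = 3 * M_PER_FOOT        # 0.9144 m
M_PER_MILE = 1760 * M_PER_YARD     # 1609.344 m

def to_metres(value, unit):
    """Convert a length in a U.S. customary unit to metres."""
    factors = {"in": M_PER_INCH, "ft": M_PER_FOOT,
               "yd": M_PER_YARD, "mi": M_PER_MILE, "m": 1.0}
    return value * factors[unit]

print(to_metres(100, "yd"))   # metres in 100 yards
```

Because every factor is a defined constant, these conversions are exact rather than measured.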
Length
–
Base quantity
135.
Electric current
–
An electric current is a flow of electric charge. In electric circuits this charge is often carried by moving electrons in a wire. It can also be carried by ions in an electrolyte, or by both ions and electrons, such as in an ionised gas. The SI unit for measuring an electric current is the ampere. Electric current is measured using a device called an ammeter. Electric currents cause Joule heating, which creates light in incandescent light bulbs. They also create magnetic fields, which are used in motors, inductors and generators. The particles that carry the charge in an electric current are called charge carriers. In metals, one or more electrons from each atom are loosely bound to the atom; these conduction electrons are the charge carriers in metal conductors. The conventional symbol for current is I, which originates from the French phrase intensité de courant; current intensity is often referred to simply as current. The I symbol was used by André-Marie Ampère, after whom the unit of current is named, in formulating the eponymous Ampère's force law. The notation travelled from France to Great Britain, where it became standard. In a conductive material, the moving charged particles which constitute the electric current are called charge carriers. In other materials, notably the semiconductors, the charge carriers can be positive or negative. Positive and negative charge carriers may even be present at the same time. A flow of positive charges gives the same electric current, and has the same effect in a circuit, as an equal flow of negative charges in the opposite direction. Since current can be the flow of either positive or negative charges, or both, a convention is needed for the direction of current that is independent of the type of charge carriers. The conventional direction of current is arbitrarily defined as the direction in which positive charges flow; this is called the direction of the current I. If the current flows in the opposite direction, the variable I has a negative value. When analyzing electrical circuits, the actual direction of current through a specific circuit element is usually unknown.
Consequently, the directions of currents are often assigned arbitrarily.
Electric current
–
A simple electric circuit, where current is represented by the letter i. The relationship between the voltage (V), resistance (R), and current (I) is V=IR; this is known as Ohm's Law.
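The caption's relationship V = IR can be illustrated with a minimal sketch (the battery and resistor values here are hypothetical, chosen only for the example):

```python
# Ohm's law: V = I * R, so the current through a resistor is I = V / R.

def current(voltage, resistance):
    """Return the current I (amperes) for a voltage V (volts) across R (ohms)."""
    return voltage / resistance

# A 9 V source across a 450-ohm resistor drives 0.02 A (20 mA):
I = current(9.0, 450.0)
print(I)  # 0.02
```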
136.
Kelvin
–
The kelvin is a unit of measure for temperature based upon an absolute scale. It is one of the seven base units in the International System of Units and is assigned the unit symbol K. The kelvin is defined as the fraction 1⁄273.16 of the thermodynamic temperature of the triple point of water. In other words, it is defined such that the triple point of water is exactly 273.16 K. The Kelvin scale is named after the Belfast-born, Glasgow University engineer and physicist William Thomson, 1st Baron Kelvin. Unlike the degree Fahrenheit and degree Celsius, the kelvin is not referred to or typeset as a degree. The kelvin is the primary unit of temperature measurement in the physical sciences, but it is often used in conjunction with the degree Celsius. The definition implies that absolute zero is equivalent to −273.15 °C; Kelvin calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time. This absolute scale is known today as the Kelvin thermodynamic temperature scale. When spelled out or spoken, the unit is pluralised using the same grammatical rules as for other SI units such as the volt or ohm. When reference is made to the Kelvin scale, the word kelvin, which is normally a noun, functions adjectivally to modify the noun scale and is capitalized. As with most other SI unit symbols, there is a space between the numeric value and the kelvin symbol. Before the 13th CGPM in 1967–1968, the unit kelvin was called a degree; it was distinguished from the other scales either with the adjective suffix Kelvin or with absolute, and its symbol was °K. The latter term, which was the official name from 1948 until 1954, was ambiguous since it could also be interpreted as referring to the Rankine scale. Before the 13th CGPM, the plural form was degrees absolute. The 13th CGPM changed the name to simply kelvin. Its measured value was 0.01028 °C with an uncertainty of 60 µK. The use of SI prefixed forms of the degree Celsius to express a temperature interval has not been widely adopted. 
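The fixed offset between the Kelvin and Celsius scales stated above (absolute zero at −273.15 °C) makes conversion a simple addition, sketched here:

```python
# Kelvin <-> Celsius conversion based on the offset of 273.15.

def celsius_to_kelvin(t_c):
    """Convert a Celsius temperature to kelvins."""
    return t_c + 273.15

def kelvin_to_celsius(t_k):
    """Convert a temperature in kelvins to degrees Celsius."""
    return t_k - 273.15

print(round(celsius_to_kelvin(0.01), 2))  # 273.16  (triple point of water)
print(kelvin_to_celsius(0.0))             # -273.15 (absolute zero)
```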
In 2005 the CIPM embarked on a programme to redefine the kelvin using a more experimentally rigorous methodology; the definition current as of 2016 is unsatisfactory for temperatures below 20 K and above 1300 K. In particular, the committee proposed redefining the kelvin such that Boltzmann's constant takes the exact value 1.3806505×10⁻²³ J/K. From a scientific point of view, this will link temperature to the rest of the SI and result in a stable definition that is independent of any particular substance. From a practical point of view, the redefinition will pass unnoticed. The kelvin is often used in the measure of the colour temperature of light sources. Colour temperature is based upon the principle that a black-body radiator emits light whose colour depends on the temperature of the radiator. Black bodies with temperatures below about 4000 K appear reddish, whereas those above about 7500 K appear bluish.
Kelvin
–
Lord Kelvin, the namesake of the unit
Kelvin
–
A thermometer calibrated in degrees Celsius (left) and kelvins (right).
137.
Amount of substance
–
Amount of substance is a standards-defined quantity that measures the size of an ensemble of elementary entities, such as atoms, molecules, electrons, and other particles. It is sometimes referred to as chemical amount. The International System of Units defines the amount of substance to be proportional to the number of elementary entities present; the SI unit for amount of substance is the mole, and it has the unit symbol mol. The proportionality constant is the inverse of the Avogadro constant. The mole is defined as the amount of substance that contains the same number of elementary entities as there are atoms in 12 g of the isotope carbon-12. This number is called Avogadro's number and has the value 6.022140857×10²³; it is the numerical value of the Avogadro constant, which has the unit 1/mol and relates the molar mass of an amount of substance to its mass. Therefore, the amount of substance of a sample is calculated as the sample mass divided by the molar mass of the substance. Amount of substance appears in thermodynamic relations such as the ideal gas law. Another unit of amount of substance in use in engineering in the United States is the pound-mole. When quoting an amount of substance, it is necessary to specify the entity involved, unless there is no risk of ambiguity. One mole of chlorine could refer either to chlorine atoms, as in 58.44 g of sodium chloride, or to chlorine molecules. The simplest way to avoid ambiguity is to replace the term substance by the name of the entity or to quote the empirical formula. The main derived quantity in which amount of substance enters into the numerator is amount of substance concentration. This name is often abbreviated to amount concentration, except in clinical chemistry, where substance concentration is the preferred term to avoid ambiguity with mass concentration. 
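The rule that the amount of substance is the sample mass divided by the molar mass can be sketched directly (the sodium chloride figure of 58.44 g/mol comes from the text; treating it as the molar mass is the only assumption):

```python
# Amount of substance from mass: n = m / M.

AVOGADRO = 6.022140857e23  # mol^-1, the value quoted above

def amount_of_substance(mass_g, molar_mass_g_per_mol):
    """Return the amount of substance n in moles for a sample of given mass."""
    return mass_g / molar_mass_g_per_mol

# 58.44 g of sodium chloride (molar mass ~58.44 g/mol) is 1 mol of NaCl:
n = amount_of_substance(58.44, 58.44)
print(n)             # 1.0
print(n * AVOGADRO)  # number of NaCl formula units (~6.02e23)
```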
The term molar concentration is incorrect, but commonly used. The alchemists, and especially the early metallurgists, probably had some notion of amount of substance, but there are no surviving records of any generalization of the idea beyond a set of recipes. In 1758, Mikhail Lomonosov questioned the idea that mass was the only measure of the quantity of matter. The development of the concept of amount of substance was coincidental with, and vital to, the development of modern chemistry. In 1777, Wenzel published Lessons on Affinity, in which he demonstrated that the proportions of the base component and the acid component remain the same during reactions between two neutral salts. In 1789, Lavoisier published the Treatise of Elementary Chemistry, introducing the concept of a chemical element. In 1792, Richter published the first volume of Stoichiometry or the Art of Measuring the Chemical Elements; the term stoichiometry was used for the first time, and the first tables of equivalent weights were published for acid–base reactions. Richter also noted that, for a given acid, the equivalent mass of the acid is proportional to the mass of oxygen in the base. In 1794, Proust's law of definite proportions generalized the concept of equivalent weights to all types of chemical reaction. In 1805, Dalton published his first paper on modern atomic theory, including a Table of the relative weights of the ultimate particles of gaseous and other bodies.
Amount of substance
–
Base quantity
138.
Mole (unit)
–
The mole is the unit of measurement in the International System of Units for amount of substance. The number of elementary entities in one mole is given by the Avogadro constant, which has a value of 6.022140857×10²³ mol⁻¹. The mole is one of the base units of the SI, and has the unit symbol mol. The mole is used in chemistry as a convenient way to express amounts of reactants and products of chemical reactions. For example, the chemical equation 2 H2 + O2 → 2 H2O implies that 2 moles of dihydrogen and 1 mole of dioxygen react to form 2 moles of water. The mole may also be used to express the number of atoms or ions. The concentration of a solution is commonly expressed by its molarity, defined as the number of moles of the dissolved substance per litre of solution. For example, the relative molecular mass of natural water is about 18.015; therefore, one mole of water has a mass of about 18.015 g. The term gram-molecule was formerly used for essentially the same concept; the term gram-atom has been used for a related but distinct concept, namely a quantity of a substance that contains Avogadro's number of atoms, whether isolated or combined in molecules. Thus, for example, 1 mole of MgBr2 is 1 gram-molecule of MgBr2 but 3 gram-atoms of MgBr2. In honor of the unit, some chemists celebrate October 23, a reference to the 10²³ scale of the Avogadro constant, as Mole Day; some also do the same for February 6 and June 2. By definition, one mole of pure 12C has a mass of exactly 12 g. It also follows from the definition that X moles of any substance will contain the same number of molecules as X moles of any other substance. The mass per mole of a substance is called its molar mass. The number of elementary entities in a sample of a substance is technically called its amount; therefore, the mole is a convenient unit for that physical quantity. One can determine the chemical amount of a known substance, in moles, by dividing the sample's mass by the substance's molar mass. 
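The stoichiometry of the equation 2 H2 + O2 → 2 H2O above can be sketched numerically; the molar mass of water (~18.015 g/mol) is taken from the text, and the helper function is purely illustrative:

```python
# Mole ratios for 2 H2 + O2 -> 2 H2O: 2 mol H2 consumes 1 mol O2
# and yields 2 mol H2O.

WATER_MOLAR_MASS = 18.015  # g/mol, as quoted above

def water_from_hydrogen(moles_h2):
    """Return (moles of O2 consumed, moles of H2O produced)."""
    moles_o2 = moles_h2 / 2
    moles_h2o = moles_h2
    return moles_o2, moles_h2o

o2, h2o = water_from_hydrogen(2.0)
print(o2, h2o)                  # 1.0 2.0
print(h2o * WATER_MOLAR_MASS)   # 36.03  (grams of water produced)
```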
Other methods include the use of the molar volume or the measurement of electric charge. The mass of one mole of a substance depends not only on its molecular formula, but also on the proportions of the isotopes present. Since the definition of the gram is not mathematically tied to that of the atomic mass unit, the number NA of molecules in a mole must be determined experimentally. The value adopted by CODATA in 2010 is NA = 6.02214129×10²³ ± 0.00000027×10²³; in 2011 the measurement was refined to 6.02214078×10²³ ± 0.00000018×10²³. The number of moles of a sample is the sample mass divided by the molar mass of the material. The history of the mole is intertwined with that of molecular mass, the atomic mass unit, and Avogadro's number. The first table of relative atomic masses was published by John Dalton in 1805.
Mole (unit)
–
Base units
139.
Dimension of a physical quantity
–
Converting from one dimensional unit to another is often somewhat complex. Dimensional analysis, or more specifically the factor-label method, also known as the unit-factor method, is a widely used technique for such conversions using the rules of algebra. The concept of physical dimension was introduced by Joseph Fourier in 1822. Physical quantities that are commensurable have the same dimension and can be directly compared to each other, even if they are originally expressed in differing units of measure. If physical quantities have different dimensions, they cannot be compared in this way; hence, it is meaningless to ask whether a kilogram is greater than, equal to, or less than an hour. Any physically meaningful equation will have the same dimensions on its left and right sides. Checking for dimensional homogeneity is a common application of dimensional analysis. Dimensional analysis is routinely used as a check of the plausibility of derived equations and computations. It is also generally used to categorize types of quantities and units based on their relationship to or dependence on other units. Many parameters and measurements in the sciences and engineering are expressed as a concrete number – a numerical quantity and a corresponding unit. Often a quantity is expressed in terms of other quantities; for example, speed is a combination of length and time. Compound relations with per are expressed with division, e.g. 60 mi/1 h; other relations can involve multiplication, powers, or combinations thereof. A base unit is a unit that cannot be expressed as a combination of other units; for example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the units of length. Sometimes the names of units obscure the fact that they are derived units; for example, an ampere is a unit of electric current, which is equivalent to electric charge per unit time and is measured in coulombs per second, so 1 A = 1 C/s. 
Similarly, one newton is 1 kg⋅m/s². Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions; in other words, the % sign can be read as 1/100. Taking a derivative with respect to a quantity adds the dimension of the variable one is differentiating with respect to, in the denominator. Thus, position has the dimension L; the derivative of position with respect to time has dimension LT⁻¹ (length from position, time from the derivative); and the second derivative has dimension LT⁻². In economics, one distinguishes between stocks and flows: a stock is measured in units of some quantity, while a flow is a derivative of a stock with respect to time, and so is measured in that quantity per unit time. In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions.
Dimension of a physical quantity
–
Base quantity
140.
History of the metric system
–
Concepts similar to those behind the metric system had been discussed in the 16th and 17th centuries. Simon Stevin had published his ideas for a decimal notation and John Wilkins had published a proposal for a system of measurement based on natural units. The work of reforming the old system of weights and measures was sponsored by the French revolutionary government; the metric system was to be, in the words of the philosopher and mathematician Condorcet, for all people for all time. Reference copies for both units were manufactured and placed in the custody of the French Academy of Sciences. By 1812, due to the unpopularity of the new metric system, France had reverted to units similar to those of its old system. In 1837 the metric system was re-adopted by France, and during the first half of the 19th century it was also adopted by the scientific community. Maxwell proposed three base units: length, mass and time. This concept worked well with mechanics, but attempts to describe electromagnetic forces in terms of these units encountered difficulties. The impasse was resolved by Giovanni Giorgi, who in 1901 proved that a coherent system that incorporated electromagnetic units had to have an electromagnetic unit as a fourth base unit. The mole was added as a base unit in 1971. Since the end of the 20th century, an effort has been undertaken to redefine the ampere, kilogram, mole and kelvin in terms of invariant constants of physics. The first practical implementation of the metric system was the system implemented by the French Revolutionaries towards the end of the 18th century. Its key features were that it was decimal in nature, it derived its unit sizes from nature, units that have different dimensions are related to each other in a rational manner, and prefixes are used to denote multiples and sub-multiples of its units. These features had already been explored and expounded by various scholars and academics in the two centuries prior to the French metric system being implemented. 
Simon Stevin is credited with introducing the decimal system into general use in Europe. Twentieth-century writers such as Bigourdan and McGreevy credit the French cleric Gabriel Mouton as the originator of the metric system; in 2007 a proposal for a coherent decimal system of measurement by the English cleric John Wilkins received publicity. During the early medieval era, Roman numerals were used in Europe to represent numbers, but the Arabs represented numbers using the Hindu numeral system. In about 1202, Fibonacci published his book Liber Abaci, which introduced the concept of positional notation into Europe; these symbols evolved into the numerals 0, 1, 2, etc. At that time there was dispute regarding the difference between rational numbers and irrational numbers, and there was no consistency in the way in which decimal fractions were represented. In 1586, Simon Stevin published a pamphlet called De Thiende, which historians credit as being the basis of modern notation for decimal fractions.
History of the metric system
–
Frontispiece of the publication where John Wilkins proposed a metric system of units in which length, mass, volume and area would be related to each other
History of the metric system
–
James Watt, British inventor and advocate of an international decimalized system of measure
History of the metric system
–
A clock of the republican era showing both decimal and standard time
History of the metric system
–
Repeating circle – the instrument used for triangulation when measuring the meridian
141.
System of measurement
–
A system of measurement is a collection of units of measurement and rules relating them to each other. Systems of measurement have historically been important, regulated and defined for the purposes of science and commerce. Systems of measurement in modern use include the metric system, the imperial system, and United States customary units. The French Revolution gave rise to the metric system, and this has spread around the world. In most systems, length, mass, and time are base quantities; later scientific developments showed that either electric charge or electric current could be added to extend the set of base quantities, by which many other metrological units could be easily defined. Other quantities, such as power and speed, are derived from the base set; for example, speed is distance per unit time. Historically, such arrangements were satisfactory in their own contexts, and the preference for a more universal and consistent system only gradually spread with the growth of science. Changing a measurement system has substantial financial and cultural costs which must be offset against the advantages to be obtained from using a more rational system. However, pressure built up, including from scientists and engineers, for conversion to a more rational system. The unifying characteristic of early units is that each was defined against some standard; eventually cubits and strides gave way to customary units that met the needs of merchants and scientists. In the metric system and other recent systems, a single basic unit is used for each base quantity, and secondary units are often derived from the basic units by multiplying by powers of ten. Thus the basic unit of length is the metre, and a distance of 1.234 m is 1,234 millimetres. Metrication is complete or nearly complete in almost all countries. US customary units are heavily used in the United States and to some degree in Liberia. Traditional Burmese units of measurement are used in Burma. U.S. 
units are used in limited contexts in Canada due to the large volume of trade; there is also considerable use of Imperial weights and measures, despite de jure Canadian conversion to metric. In the United States, metric units are used almost universally in science, widely in the military, and partially in industry, but customary units predominate in household use. At retail stores, the liter is a commonly used unit for volume, especially on bottles of beverages. Some other standard non-SI units are still in use, such as nautical miles and knots in aviation. Metric systems of units have evolved since the adoption of the first well-defined system in France in 1795. During this evolution the use of these systems has spread throughout the world, first to non-English-speaking countries, and then to English-speaking countries.
System of measurement
–
A baby bottle that measures in three measurement systems—imperial (UK), US customary, and metric.
142.
SI units
–
The International System of Units is the modern form of the metric system, and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units. The system also establishes a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system was published in 1960 as the result of an initiative that began in 1948. It is based on the metre-kilogram-second system of units rather than any variant of the centimetre-gram-second system. The motivation for the development of the SI was the diversity of units that had sprung up within the CGS systems. The International System of Units has been adopted by most developed countries; however, the adoption has not been universal in all English-speaking countries. The metric system was first implemented during the French Revolution with just the metre and kilogram as standards of length and mass. In the 1830s Carl Friedrich Gauss laid the foundations for a coherent system based on length, mass, and time. In the 1860s a group working under the auspices of the British Association for the Advancement of Science formulated the requirement for a coherent system of units with base units and derived units. Meanwhile, in 1875, the Treaty of the Metre passed responsibility for verification of the kilogram and the metre to the CGPM. In 1921, the Treaty was extended to include all physical quantities, including electrical units originally defined in 1893. The units associated with these quantities were the metre, kilogram, second, ampere, kelvin and candela. In 1971, a seventh base quantity, amount of substance, represented by the mole, was added to the definition of the SI. On 11 July 1792, the commission proposed the names metre, are, litre and grave for the units of length, area, capacity and mass, respectively. 
The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes such as centi for a hundredth. On 10 December 1799, the law by which the metric system was to be definitively adopted in France was passed. Prior to Gauss's measurements, the strength of the earth's magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a magnet of known mass by the earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions based on mass, length and time. A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention. Initially the convention only covered standards for the metre and the kilogram; one of each was selected at random to become the International prototype metre and International prototype kilogram, which replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the prototypes to serve as the national prototype for that country. Initially the convention's prime purpose was a periodic recalibration of national prototype metres. The official language of the Metre Convention is French, and the authoritative version of all official documents published by or on behalf of the CGPM is the French-language version.
SI units
–
Stone marking the Austro-Hungarian /Italian border at Pontebba displaying myriametres, a unit of 10 km used in Central Europe in the 19th century (but since deprecated).
SI units
–
The seven base units in the International System of Units
SI units
–
Carl Friedrich Gauss
SI units
–
Thomson
143.
Distance
–
Distance is a numerical description of how far apart objects are. In physics or everyday usage, distance may refer to a physical length. In most cases, distance from A to B is interchangeable with distance from B to A. In mathematics, a distance function or metric is a generalization of the concept of physical distance; a metric is a function that behaves according to a specific set of rules. For a rolling wheel, the distance travelled per revolution is its circumference, 2π × radius; assuming the radius to be 1, each revolution corresponds to a distance of 2π, and in engineering ω = 2πƒ is often used, where ƒ is the frequency. Chessboard distance, formalized as Chebyshev distance, is the minimum number of moves a king must make on a chessboard to travel between two squares. Distance measures in cosmology are complicated by the expansion of the universe. The term distance is also used by analogy to measure non-physical entities in certain ways. In computer science, there is the notion of the edit distance between two strings: for example, the strings dog and dot, which vary by only one letter, are closer than dog and cat. In this way, many different types of distances can be calculated, such as for traversal of graphs, or comparison of distributions and curves. Distance cannot be negative, and distance travelled never decreases. Distance is a scalar quantity or a magnitude, whereas displacement is a vector quantity with both magnitude and direction. Directed distance is a positive, zero, or negative scalar quantity. The distance covered by a vehicle, person, animal, or object along a curved path from a point A to a point B should be distinguished from the straight-line distance from A to B: for example, whatever the distance covered during a trip from A to B and back to A, the displacement is zero. In general the straight-line distance does not equal distance travelled, except for journeys in a straight line. Directed distances are distances with a directional sense. 
They can be determined along straight lines and along curved lines. For instance, just labelling the two endpoints as A and B can indicate the sense, if the ordered sequence is assumed, which implies that A is the starting point. A displacement is a kind of directed distance defined in mechanics; a directed distance is called a displacement when it is the distance along a straight line from A to B. Displacement implies motion of the particle. The distance travelled by a particle must always be greater than or equal to the magnitude of its displacement, with equality occurring only when the particle moves along a straight path.
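The distinction between distance travelled and displacement can be sketched for a piecewise-straight path; the helper function and sample points are illustrative:

```python
# Distance travelled vs. displacement along a polyline path. For a round trip
# A -> B -> A, the distance accumulates while the displacement returns to zero.
import math

def path_metrics(points):
    """points: list of (x, y). Returns (distance travelled, displacement magnitude)."""
    travelled = sum(math.dist(points[i], points[i + 1])
                    for i in range(len(points) - 1))
    displacement = math.dist(points[0], points[-1])
    return travelled, displacement

# Out 5 units and straight back: 10 units travelled, zero displacement.
d, s = path_metrics([(0, 0), (3, 4), (0, 0)])
print(d, s)  # 10.0 0.0
```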
Distance
–
d(A, B) ≤ d(A, C) + d(C, B) — the triangle inequality satisfied by any metric
144.
Area
–
Area is the quantity that expresses the extent of a two-dimensional figure or shape, or planar lamina, in the plane. Surface area is its analog on the surface of a three-dimensional object. Area can be understood as the two-dimensional analog of the length of a curve or the volume of a solid. The area of a shape can be measured by comparing the shape to squares of a fixed size. In the International System of Units, the standard unit of area is the square metre, which is the area of a square whose sides are one metre long. A shape with an area of three square metres would have the same area as three such squares. In mathematics, the unit square is defined to have area one. There are several well-known formulas for the areas of simple shapes such as triangles and rectangles. Using these formulas, the area of any polygon can be found by dividing the polygon into triangles. For shapes with a curved boundary, calculus is usually required to compute the area; indeed, the problem of determining the area of plane figures was a major motivation for the historical development of calculus. For a solid shape such as a sphere, cone, or cylinder, the area of its boundary surface is called the surface area. Formulas for the areas of simple shapes were computed by the ancient Greeks. Area plays an important role in modern mathematics: in addition to its obvious importance in geometry and calculus, area is related to the definition of determinants in linear algebra, and is a basic property of surfaces in differential geometry. In analysis, the area of a subset of the plane is defined using Lebesgue measure; in general, area in higher mathematics is seen as a special case of volume for two-dimensional regions. Area can be defined through the use of axioms, defining it as a function of a collection of certain plane figures to the set of real numbers, and it can be proved that such a function exists. 
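The claim that the area of any polygon can be found by dividing it into triangles can be sketched with the shoelace formula, which in effect sums signed triangle areas fanned out from the origin (the function name and example vertices are illustrative):

```python
# Area of a simple polygon via the shoelace formula.

def polygon_area(vertices):
    """vertices: list of (x, y) points in order around a simple polygon."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        s += x0 * y1 - x1 * y0  # twice the signed area of triangle O-v_i-v_{i+1}
    return abs(s) / 2.0

# A 3 m by 2 m rectangle has an area of 6 square metres:
print(polygon_area([(0, 0), (3, 0), (3, 2), (0, 2)]))  # 6.0
```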
An approach to defining what is meant by area is through axioms. Area can be defined as a function a from a collection M of a special kind of plane figures to the set of real numbers which satisfies the following properties: For all S in M, a(S) ≥ 0. If S and T are in M then so are S ∪ T and S ∩ T. If S and T are in M with S ⊆ T, then T − S is in M and a(T − S) = a(T) − a(S). If a set S is in M and S is congruent to T, then T is also in M and a(S) = a(T). Every rectangle R is in M; if the rectangle has length h and breadth k then a(R) = hk. Let Q be a set enclosed between two step regions S and T.
Area
–
A square metre quadrat made of PVC pipe.
Area
–
The combined area of these three shapes is approximately 15.57 squares.
145.
Solid angle
–
In geometry, a solid angle is the two-dimensional angle in three-dimensional space that an object subtends at a point. It is a measure of how large the object appears to an observer looking from that point. In the International System of Units, a solid angle is expressed in a dimensionless unit called a steradian. A small object nearby may subtend the same solid angle as a larger object farther away. For example, although the Moon is much smaller than the Sun, it is also much closer to Earth; indeed, as viewed from any point on Earth, both objects have approximately the same solid angle as well as apparent size. This is evident during a solar eclipse. An object's solid angle in steradians is equal to the area of the segment of a unit sphere, centered at the angle's vertex, that the object covers. A solid angle in steradians equals the area of a segment of a unit sphere in the same way a planar angle in radians equals the length of an arc of a unit circle. Solid angles are often used in physics, in particular astrophysics. The solid angle of an object that is far away is roughly proportional to the ratio of area to squared distance; here area means the area of the object when projected along the viewing direction. The solid angle of a sphere measured from any point in its interior is 4π sr. Solid angles can also be measured in square degrees, in square minutes and square seconds, or in fractions of the sphere, also known as spat. In spherical coordinates there is a formula for the differential, dΩ = sin θ dθ dφ, where θ is the colatitude. At the equator all of the celestial sphere is visible; at either pole, only one half. Let OABC be the vertices of a tetrahedron with an origin at O subtended by the triangular face ABC, where a→, b→, c→ are the positions of the vertices A, B and C. Define the vertex angle θa to be the angle BOC, and define θb, θc correspondingly. Let φab be the dihedral angle between the planes that contain the tetrahedral faces OAC and OBC, and define φac, φbc correspondingly. 
When implementing the above equation, care must be taken with the function to avoid negative or incorrect solid angles. One source of errors is that the scalar triple product can be negative if a, b, c have the wrong winding; computing its absolute value is a sufficient solution, since no other portion of the equation depends on the winding. The other pitfall arises when the scalar triple product is positive but the divisor is negative. Indices are cycled: s0 = sn and s1 = sn+1. The solid angle of a latitude-longitude rectangle on a globe is (sin φN − sin φS)(θE − θW) sr, where φN and φS are north and south lines of latitude, and θE and θW are east and west lines of longitude. Mathematically, this represents an arc of angle φN − φS swept around a sphere by θE − θW radians. When longitude spans 2π radians and latitude spans π radians, the solid angle is that of a sphere.
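The latitude-longitude rectangle formula (sin φN − sin φS)(θE − θW) can be sketched and checked against the full-sphere value of 4π sr quoted above; the function name is illustrative:

```python
# Solid angle of a latitude-longitude rectangle on a sphere:
# Omega = (sin(phi_N) - sin(phi_S)) * (theta_E - theta_W) steradians.
import math

def latlon_solid_angle(phi_n, phi_s, theta_e, theta_w):
    """All angles in radians; returns the solid angle in steradians."""
    return (math.sin(phi_n) - math.sin(phi_s)) * (theta_e - theta_w)

# Latitude from -pi/2 to pi/2 and longitude spanning 2*pi covers the whole
# sphere, giving 4*pi steradians:
full = latlon_solid_angle(math.pi / 2, -math.pi / 2, 2 * math.pi, 0.0)
print(round(full / math.pi, 6))  # 4.0
```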
Solid angle
–
Any area on a sphere which is equal in area to the square of its radius, when observed from its center, subtends precisely one steradian.
146.
Kinematic viscosity
–
The viscosity of a fluid is a measure of its resistance to gradual deformation by shear stress or tensile stress. For liquids, it corresponds to the informal concept of thickness; for example, honey has a much higher viscosity than water. Viscosity is a property of the fluid which opposes the relative motion between two surfaces of the fluid that are moving at different velocities. For a given velocity pattern, the stress required is proportional to the fluid's viscosity. A fluid that has no resistance to shear stress is known as an ideal or inviscid fluid. Zero viscosity is observed only at very low temperatures in superfluids; otherwise, all fluids have positive viscosity, and are said to be viscous or viscid. A fluid with a relatively high viscosity, such as pitch, may appear to be a solid. The word viscosity is derived from the Latin viscum, meaning mistletoe. The dynamic viscosity of a fluid expresses its resistance to shearing flows, where adjacent layers move parallel to each other with different speeds. It can be defined through the idealized situation known as a Couette flow, in which a fluid is trapped between two plates, one fixed and one moving at constant speed u; the fluid is assumed to be homogeneous in the layer. If the speed of the top plate is small enough, the fluid particles will move parallel to it, and their speed will vary linearly from zero at the bottom to u at the top. Each layer of fluid will move faster than the one just below it. In particular, the fluid will apply on the top plate a force in the direction opposite to its motion, and an equal but opposite one to the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed. The magnitude F of this force is found to be proportional to the speed u and the area A of each plate. The proportionality factor μ in this formula is the dynamic viscosity of the fluid. The ratio u/y is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction perpendicular to the plates. Isaac Newton expressed the viscous forces by the differential equation τ = μ ∂u/∂y, where τ = F/A. 
This formula assumes that the flow is moving along parallel lines. This equation can also be used where the velocity does not vary linearly with y, such as in fluid flowing through a pipe. Use of the Greek letter mu (μ) for the dynamic viscosity is common among mechanical and chemical engineers; however, the Greek letter eta (η) is also used by chemists and physicists.
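The Couette-flow relationship described above, F = μ A u/y for a linear velocity profile, can be sketched numerically; the plate dimensions and the rough viscosity of water (~10⁻³ Pa·s) are assumed values for illustration:

```python
# Force on the moving plate in planar Couette flow: F = mu * A * u / y,
# where u/y is the shear rate for a linear velocity profile.

def couette_force(mu, area, speed, gap):
    """mu: dynamic viscosity (Pa*s); area (m^2); plate speed u (m/s); gap y (m)."""
    return mu * area * speed / gap

# Water-like fluid (mu ~ 1.0e-3 Pa*s), a 0.5 m^2 plate moving at 1 m/s
# over a 1 mm fluid layer:
print(couette_force(1.0e-3, 0.5, 1.0, 1.0e-3))  # 0.5  (newtons)
```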
Kinematic viscosity
–
Pitch has a viscosity approximately 230 billion (2.3 × 10¹¹) times that of water.
Kinematic viscosity
–
A simulation of substances with different viscosities. The substance above has lower viscosity than the substance below
Kinematic viscosity
–
Example of the viscosity of milk and water. Liquids with higher viscosities make smaller splashes when poured at the same velocity.
Kinematic viscosity
–
Honey being drizzled.
147.
Kilogram square metre
–
The moment of inertia of a rigid body determines the torque needed for a desired angular acceleration about a rotational axis. It depends on the body's mass distribution and the axis chosen, with larger moments requiring more torque to change the body's rotation. It is an additive property: the moment of inertia of a composite system is the sum of the moments of inertia of its component subsystems, all taken about the same axis. One of its definitions is the second moment of mass with respect to distance from an axis r: I = ∫_Q r² dm, where the integral is taken over the mass distribution Q. For bodies constrained to rotate in a plane, it is sufficient to consider their moment of inertia about an axis perpendicular to the plane. When a body is rotating, or free to rotate, around an axis, the amount of torque needed to cause any given angular acceleration is proportional to the moment of inertia of the body. Moment of inertia may be expressed in units of kilogram metre squared (kg·m²) in SI units. Moment of inertia plays the role in rotational kinetics that mass plays in linear kinetics - both characterize the resistance of a body to changes in its motion. The moment of inertia depends on how mass is distributed around an axis of rotation. For a point-like mass, the moment of inertia about some axis is given by mr², where r is the distance of the point to the axis and m is the mass. For an extended body, the moment of inertia is just the sum of all the pieces of mass multiplied by the square of their distances from the axis in question. For an extended body of regular shape and uniform density, this summation sometimes produces a simple expression that depends on the dimensions, shape, and total mass of the object. In 1673 Christiaan Huygens introduced this parameter in his study of the oscillation of a body hanging from a pivot, known as a compound pendulum. The term moment of inertia was introduced by Leonhard Euler in his book Theoria motus corporum solidorum seu rigidorum in 1765, and it is incorporated into Euler's second law. Comparison of the oscillation frequency of a compound pendulum to that of a simple pendulum consisting of a single point of mass provides a mathematical formulation for the moment of inertia of an extended body. Moment of inertia appears in angular momentum, kinetic energy, and Newton's laws of motion for a rigid body, as a physical parameter that combines the body's shape and mass.
There is a difference in the way moment of inertia appears in planar and in spatial movement. The moment of inertia of a flywheel is used in a machine to resist variations in applied torque and smooth its rotational output. Moment of inertia I is defined as the ratio of the angular momentum L of a system to its angular velocity ω around a principal axis, that is, I = L/ω. If the angular momentum of a system is constant, then as the moment of inertia gets smaller, the angular velocity must increase. This occurs when spinning figure skaters pull in their arms or divers curl their bodies into a tuck position during a dive. For a simple pendulum, this definition yields a formula for the moment of inertia I in terms of the mass m of the pendulum and its distance r from the pivot point: I = mr². Thus, moment of inertia depends on both the mass m of a body and its geometry, or shape, as defined by the distance r to the axis of rotation.
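The skater example above follows from conservation of L = Iω. A minimal sketch, treating each arm as a single point mass; all numbers are invented for illustration:

```python
# Toy illustration of I = sum(m * r^2) and conservation of L = I * omega.
# The skater's arms are modeled as two 4 kg point masses; the radii and
# spin rate are made-up values.

def moment_of_inertia(point_masses):
    """I = sum of m * r^2 over (mass, radius) pairs, in kg·m^2."""
    return sum(m * r ** 2 for m, r in point_masses)

I_out = moment_of_inertia([(4.0, 0.8), (4.0, 0.8)])  # arms extended
I_in = moment_of_inertia([(4.0, 0.2), (4.0, 0.2)])   # arms pulled in

omega_out = 2.0                      # rad/s with arms out
omega_in = I_out * omega_out / I_in  # L conserved: I_out*w_out = I_in*w_in
print(omega_in)                      # the spin rate rises as I shrinks
```

Because r enters as r², halving the arm radius quarters each arm's contribution to I, which is why the speed-up is so dramatic.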
Kilogram square metre
–
Tightrope walker Samuel Dixon using the long rod's moment of inertia for balance while crossing the Niagara River in 1890.
Kilogram square metre
–
Flywheels have large moments of inertia to smooth out mechanical motion. This example is in a Russian museum.
Kilogram square metre
–
Spinning figure skaters can reduce their moment of inertia by pulling in their arms, allowing them to spin faster due to conservation of angular momentum.
Kilogram square metre
–
Pendulums used in the Mendenhall gravimeter apparatus, from an 1897 scientific journal. The portable gravimeter developed in 1890 by Thomas C. Mendenhall provided the most accurate relative measurements of the local gravitational field of the Earth.
148.
Joule
–
The joule, symbol J, is a derived unit of energy in the International System of Units. It is equal to the energy transferred to an object when a force of one newton acts on that object in the direction of its motion through a distance of one metre. It is also the energy dissipated as heat when an electric current of one ampere passes through a resistance of one ohm for one second. It is named after the English physicist James Prescott Joule. One joule can also be defined as the work required to move an electric charge of one coulomb through an electrical potential difference of one volt, or one coulomb volt; this relationship can be used to define the volt. It is likewise the work required to produce one watt of power for one second, or one watt second; this relationship can be used to define the watt. This SI unit is named after James Prescott Joule. As with every International System of Units unit named for a person, the first letter of its symbol is uppercase (J), while the spelled-out unit name is written in lowercase (joule) except where any word would be capitalized; note that "degree Celsius" conforms to this rule because the "d" is lowercase. — Based on The International System of Units, section 5.2. The CGPM has given the unit of energy the name joule, but has given no special name to the unit of torque; the use of newton metres for torque and joules for energy is helpful to avoid misunderstandings and miscommunications. The distinction may be seen also in the fact that energy is a scalar - the dot product of a force vector and a displacement vector. By contrast, torque is a vector - the cross product of a position vector and a force vector. Torque and energy are related to one another by the equation E = τθ, where E is energy, τ is the magnitude of the torque, and θ is the angle swept in radians. Since radians are dimensionless, it follows that torque and energy have the same dimensions. One joule in everyday life represents approximately: the energy required to lift a medium-size tomato (about 100 g) 1 m vertically from the surface of the Earth; the energy released when that same tomato falls back down to the ground; the energy required to accelerate a 1 kg mass at 1 m·s⁻² through a 1 m distance in space.
It also approximates: the heat required to raise the temperature of 1 g of water by 0.24 °C; the typical energy released as heat by a person at rest every 1/60 s; the kinetic energy of a 50 kg human moving very slowly; the kinetic energy of a 56 g tennis ball moving at 6 m/s; the kinetic energy of an object with mass 1 kg moving at √2 ≈ 1.4 m/s; and the amount of electricity required to light a 1 W LED for 1 s. The joule is also a watt-second, while the common unit for electricity sales to homes is the kW·h; one kW·h is 3.6 megajoules. For additional examples, see Orders of magnitude (energy). The zeptojoule is equal to one sextillionth (10⁻²¹) of one joule; 160 zeptojoules is equivalent to one electronvolt. The nanojoule is equal to one billionth (10⁻⁹) of one joule; one nanojoule is about 1/160 of the kinetic energy of a flying mosquito.
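Several of the everyday values above can be reproduced from KE = ½mv² and E = mgh. A quick numerical check; the ~102 g tomato mass is an assumption chosen so that lifting it 1 m takes about one joule:

```python
import math

# Reproducing a few of the everyday one-joule examples listed above.
# The tomato mass (~102 g) is an illustrative assumption.

def kinetic_energy(m, v):
    """KE = (1/2) m v^2, in joules for SI inputs."""
    return 0.5 * m * v ** 2

def lift_energy(m, h, g=9.81):
    """Work done against gravity: E = m g h."""
    return m * g * h

print(kinetic_energy(0.056, 6.0))         # tennis ball at 6 m/s: ~1.0 J
print(kinetic_energy(1.0, math.sqrt(2)))  # 1 kg at ~1.4 m/s: ~1.0 J
print(lift_energy(0.102, 1.0))            # tomato lifted 1 m: ~1.0 J
```

All three come out within about 1% of one joule, matching the list of everyday equivalents.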
Joule
–
Base units
149.
Mass
–
In physics, mass is a property of a physical body. It is the measure of a body's resistance to acceleration when a net force is applied, and it also determines the strength of its gravitational attraction to other bodies. The basic SI unit of mass is the kilogram (kg). Mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale, rather than comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity, but it would still have the same mass; this is because weight is a force, while mass is the property that determines the strength of this force. In Newtonian physics, mass can be generalized as the amount of matter in an object. However, at very high speeds, special relativity postulates that energy is an additional source of mass; thus, any body having mass has an equivalent amount of energy. In addition, matter is a loosely defined term in science. There are several distinct phenomena which can be used to measure mass: inertial mass measures an object's resistance to being accelerated by a force; active gravitational mass measures the gravitational force exerted by an object; and passive gravitational mass measures the force exerted on an object in a known gravitational field. The mass of an object determines its acceleration in the presence of an applied force: according to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by F/m. A body's mass also determines the degree to which it generates or is affected by a gravitational field, and this is sometimes referred to as gravitational mass. The standard International System of Units unit of mass is the kilogram. The kilogram is 1000 grams, first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. Then in 1889, the kilogram was redefined as the mass of the international prototype kilogram. As of January 2013, there are proposals for redefining the kilogram yet again.
In this context, the mass has units of eV/c². The electronvolt and its multiples, such as the MeV (megaelectronvolt), are commonly used in particle physics. The atomic mass unit (u) is 1/12 of the mass of a carbon-12 atom; the atomic mass unit is convenient for expressing the masses of atoms and molecules. Outside the SI system, other units of mass include the slug, an Imperial unit of mass (about 14.6 kg), and the pound, a unit of both mass and force, used mainly in the United States (about 0.45 kg).
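The mass versus weight distinction described above can be illustrated with W = mg; the surface gravity values below are rounded approximations, and the 10 kg mass is an arbitrary example:

```python
# Weight is a force (W = m * g); mass is the same everywhere. The
# surface gravity values are rounded approximations.

def weight(mass_kg, g):
    """Weight in newtons of a mass under local gravitational acceleration g."""
    return mass_kg * g

G_EARTH = 9.81  # m/s^2, approximate
G_MOON = 1.62   # m/s^2, approximate

m = 10.0                    # kg; unchanged between Earth and Moon
print(weight(m, G_EARTH))   # about 98.1 N on Earth
print(weight(m, G_MOON))    # about 16.2 N on the Moon

# Newton's second law, a = F / m: the same net force accelerates the
# same mass identically in either location.
print(50.0 / m)             # 5.0 m/s^2 for a 50 N net force
```

The spring-scale caveat in the text follows directly: a scale calibrated for G_EARTH would under-read the same mass on the Moon by the ratio of the two g values.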
Mass
–
Depiction of early balance scales in the Papyrus of Hunefer (dated to the 19th dynasty, ca. 1285 BC). The scene shows Anubis weighing the heart of Hunefer.
Mass
–
The kilogram is one of the seven SI base units and one of three that are defined ad hoc (i.e. without reference to another base unit).
Mass
–
Galileo Galilei (1636)
Mass
–
Distance traveled by a freely falling ball is proportional to the square of the elapsed time