In quantum mechanics and particle physics, spin is an intrinsic form of angular momentum carried by elementary particles, composite particles, and atomic nuclei. Spin is one of two types of angular momentum in quantum mechanics, the other being orbital angular momentum; the orbital angular momentum operator is the quantum-mechanical counterpart to the classical angular momentum of orbital revolution, and appears when a particle's wavefunction has periodic structure as the angle varies. The existence of spin angular momentum is inferred from experiments, such as the Stern–Gerlach experiment, in which silver atoms were observed to possess two possible discrete angular momenta despite having no orbital angular momentum. In some ways, spin is like a vector quantity. All elementary particles of a given kind have the same magnitude of spin angular momentum, indicated by assigning the particle a spin quantum number; spin carries the same SI units as classical angular momentum (N·m·s, or equivalently kg·m²·s⁻¹). In practice, spin is given as a dimensionless spin quantum number by dividing the spin angular momentum by the reduced Planck constant ħ, which has the same units as angular momentum, although the spin quantum number s is not itself the magnitude of the spin angular momentum divided by ħ; that magnitude is ħ√(s(s+1)).
Often the "spin quantum number" is simply called "spin", its meaning as a unitless quantum number left to be inferred from context. When combined with the spin–statistics theorem, the spin of electrons results in the Pauli exclusion principle, which in turn underlies the periodic table of chemical elements. Wolfgang Pauli in 1924 was the first to propose a doubling of electron states due to a two-valued non-classical "hidden rotation". In 1925, George Uhlenbeck and Samuel Goudsmit at Leiden University suggested the simple physical interpretation of a particle spinning around its own axis, in the spirit of the old quantum theory of Bohr and Sommerfeld. Ralph Kronig had anticipated the Uhlenbeck–Goudsmit model in a discussion with Hendrik Kramers several months earlier in Copenhagen, but did not publish; the mathematical theory was worked out in depth by Pauli in 1927. When Paul Dirac derived his relativistic quantum mechanics in 1928, electron spin was an essential part of it. As the name suggests, spin was originally conceived as the rotation of a particle around some axis.
This picture is correct insofar as spin obeys the same mathematical laws as quantized orbital angular momenta do. On the other hand, spin has some peculiar properties that distinguish it from orbital angular momenta: spin quantum numbers may take half-integer values, and although the direction of its spin can be changed, an elementary particle cannot be made to spin faster or slower. The spin of a charged particle is associated with a magnetic dipole moment with a g-factor differing from 1; this could only occur classically if the internal charge of the particle were distributed differently from its mass. The conventional definition of the spin quantum number, s, is s = n/2, where n can be any non-negative integer. Hence the allowed values of s are 0, 1/2, 1, 3/2, 2, etc. The value of s for an elementary particle depends only on the type of particle and cannot be altered in any known way. The spin angular momentum, S, of any physical system is quantized; its allowed magnitudes are S = ħ√(s(s+1)) = (h/4π)√(n(n+2)), where h is the Planck constant and ħ = h/2π is the reduced Planck constant.
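The quantization rule above is simple to evaluate numerically. The following sketch (function name is illustrative, not from the text) computes the spin magnitude ħ√(s(s+1)) for a given n, working in units of ħ:

```python
import math

def spin_magnitude(n, hbar=1.0):
    """|S| = hbar * sqrt(s(s+1)) with s = n/2, n a non-negative integer."""
    if n < 0 or n != int(n):
        raise ValueError("n must be a non-negative integer")
    s = n / 2
    return hbar * math.sqrt(s * (s + 1))

# Electron (n = 1, s = 1/2): |S| = hbar * sqrt(3)/2, not hbar/2.
print(spin_magnitude(1))
# Photon (n = 2, s = 1): |S| = hbar * sqrt(2).
print(spin_magnitude(2))
```

Note that the magnitude for a spin-1/2 particle is ħ√3/2, larger than the maximum measurable projection ħ/2 along any axis.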
In contrast, orbital angular momentum can only take on integer values of s. Particles with half-integer spins, such as 1/2, 3/2 and 5/2, are known as fermions, while particles with integer spins, such as 0, 1 and 2, are known as bosons; the two families of particles obey different rules and broadly have different roles in the world around us. A key distinction between the two families is that fermions obey the Pauli exclusion principle: two identical fermions cannot occupy the same quantum state at the same time. In contrast, bosons obey the rules of Bose–Einstein statistics and have no such restriction, so they may "bunch together" in identical states. Composite particles can have spins different from their component particles. For example, a helium atom in the ground state has spin 0 and behaves like a boson, even though the quarks and electrons which make it up are all fermions. This has some profound consequences: quarks and leptons, which make up what is classically known as matter, are all fermions with spin 1/2. The common idea that "matter takes up space" comes from the Pauli exclusion principle acting on these particles to prevent the fermions that make up matter from being in the same quantum state.
Further compaction would require electrons to occupy the same energy states, so a kind of pressure (degeneracy pressure) acts to resist the fermions being overly close. Elementary fermions with other spins are not known to exist. Elementary particles which are thought of as carrying forces are all bosons with spin 1; they include the photon, which carries the electromagnetic force, the gluon, and the W and Z bosons. The ability of bosons to occupy the same quantum state underlies phenomena such as the laser and superfluidity.
Wave–particle duality is the concept in quantum mechanics that every particle or quantum entity may be described in terms not only of particles but also of waves. It expresses the inability of the classical concepts "particle" and "wave" to fully describe the behaviour of quantum-scale objects. As Albert Einstein wrote: "It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality." Through the work of Max Planck, Albert Einstein, Louis de Broglie, Arthur Compton, Niels Bohr and many others, current scientific theory holds that all particles exhibit a wave nature and vice versa. This phenomenon has been verified not only for elementary particles, but also for compound particles like atoms and molecules. For macroscopic particles, because of their extremely short wavelengths, wave properties usually cannot be detected. Although the use of wave–particle duality has worked well in physics, its meaning or interpretation has not been satisfactorily resolved.
Bohr regarded the "duality paradox" as a fundamental or metaphysical fact of nature. A given kind of quantum object will exhibit sometimes wave, sometimes particle, character in different physical settings, and he saw such duality as one aspect of the concept of complementarity. Bohr regarded renunciation of the cause–effect relation, or complementarity, of the space-time picture as essential to the quantum mechanical account. Werner Heisenberg considered the question further. He saw the duality as present for all quantum entities, but not quite in the usual quantum mechanical account considered by Bohr. He saw it in what is called second quantization, which generates a new concept of fields existing in ordinary space-time, causality still being visualizable. Classical field values are replaced by a new kind of field value, as considered in quantum field theory. Turning the reasoning around, ordinary quantum mechanics can be deduced as a specialized consequence of quantum field theory. Democritus argued that all things in the universe, including light, are composed of indivisible sub-components.
At the beginning of the 11th century, the Arab scientist Ibn al-Haytham wrote the first comprehensive Book of Optics, describing reflection and the operation of a pinhole lens via rays of light traveling from the point of emission to the eye. He asserted that these rays were composed of particles of light. In 1630, René Descartes popularized the opposing wave description in his treatise on light, The World, showing that the behavior of light could be re-created by modeling wave-like disturbances in a universal medium, the luminiferous aether. Beginning in 1670 and progressing over three decades, Isaac Newton developed and championed his corpuscular theory, arguing that the perfectly straight lines of reflection demonstrated light's particle nature, since only particles could travel in such straight lines. He explained refraction by positing that particles of light accelerated laterally upon entering a denser medium. Around the same time, Newton's contemporaries Robert Hooke and Christiaan Huygens, and later Augustin-Jean Fresnel, mathematically refined the wave viewpoint, showing that if light traveled at different speeds in different media, refraction could be explained as the medium-dependent propagation of light waves.
The resulting Huygens–Fresnel principle was extremely successful at reproducing light's behavior and was subsequently supported by Thomas Young's discovery of wave interference of light in his double-slit experiment of 1801. The wave view did not immediately displace the ray and particle view, but it began to dominate scientific thinking about light in the mid 19th century, since it could explain polarization phenomena that the alternatives could not. James Clerk Maxwell discovered that he could apply his previously discovered equations, along with a slight modification, to describe self-propagating waves of oscillating electric and magnetic fields, and it became apparent that visible light, ultraviolet light and infrared light were all electromagnetic waves of differing frequency. In 1901, Max Planck published an analysis that succeeded in reproducing the observed spectrum of light emitted by a glowing object. To accomplish this, Planck had to make a mathematical assumption of quantized energy of the oscillators (i.e. the atoms of the black body) that emit radiation.
Einstein proposed that electromagnetic radiation itself is quantized, not merely the energy of radiating atoms. Black-body radiation, the emission of electromagnetic energy due to an object's heat, could not be explained from classical arguments alone. The equipartition theorem of classical mechanics, the basis of all classical thermodynamic theories, stated that an object's energy is partitioned equally among the object's vibrational modes. But applying the same reasoning to the electromagnetic emission of such a thermal object was not so successful. That thermal objects emit light had long been known, and since light was known to consist of waves of electromagnetism, physicists hoped to describe this emission via classical laws; this became known as the black-body problem. Since the equipartition theorem worked so well in describing the vibrational modes of the thermal object itself, it was natural to assume that it would perform equally well in describing the radiative emission of such objects. But a problem arose: if each mode received an equal partition of energy, the short-wavelength modes would consume nearly all the energy.
This became clear when the predicted spectrum was plotted: the classical Rayleigh–Jeans law reproduced the observed intensity at long wavelengths but diverged at short wavelengths, predicting infinite total emitted energy, a failure later dubbed the ultraviolet catastrophe.
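The divergence can be made concrete by comparing the classical Rayleigh–Jeans spectral radiance with Planck's law, which agrees with it at low frequency but is exponentially suppressed at high frequency. A minimal sketch (constants rounded; function names are illustrative):

```python
import math

h = 6.626e-34   # Planck constant, J s
k = 1.381e-23   # Boltzmann constant, J/K
c = 2.998e8     # speed of light, m/s

def rayleigh_jeans(nu, T):
    """Classical spectral radiance B(nu, T) = 2 nu^2 k T / c^2.
    Grows without bound as nu^2: the ultraviolet catastrophe."""
    return 2 * nu**2 * k * T / c**2

def planck(nu, T):
    """Planck's law: B(nu, T) = (2 h nu^3 / c^2) / (exp(h nu / k T) - 1).
    The exponential denominator starves the high-frequency modes."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

T = 5000.0                     # temperature in kelvin (illustrative)
for nu in (1e13, 1e14, 1e15):  # infrared -> ultraviolet
    print(nu, rayleigh_jeans(nu, T), planck(nu, T))
```

At ν = 10¹³ Hz the two laws nearly agree, while at ν = 10¹⁵ Hz the classical prediction overshoots Planck's by several orders of magnitude.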
The Davisson–Germer experiment was a 1923–27 experiment by Clinton Davisson and Lester Germer at Western Electric (later Bell Labs), in which electrons, scattered by the surface of a crystal of nickel metal, displayed a diffraction pattern. This confirmed the hypothesis of wave–particle duality advanced by Louis de Broglie in 1924, and was an experimental milestone in the creation of quantum mechanics. According to Maxwell's equations in the late 19th century, light was thought to consist of waves of electromagnetic fields, and matter was thought to consist of localized particles. However, this was challenged in Albert Einstein's 1905 paper on the photoelectric effect, which described light as discrete and localized quanta of energy and won him the Nobel Prize in Physics in 1921. In 1924 Louis de Broglie presented his thesis concerning the wave–particle duality theory, which proposed the idea that all matter displays the wave–particle duality of photons. According to de Broglie, for all matter and for radiation alike, the energy E of the particle is related to the frequency ν of its associated wave by the Planck relation E = hν, and the momentum p of the particle is related to its wavelength λ by what is now known as the de Broglie relation λ = h/p, where h is Planck's constant.
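The two relations above are easy to evaluate. A quick numerical sketch (function names are mine; h is the exact SI value):

```python
h = 6.62607015e-34  # Planck constant, J s (exact by SI definition)

def photon_energy(nu):
    """Planck relation E = h * nu for a wave of frequency nu (Hz)."""
    return h * nu

def de_broglie_wavelength(p):
    """de Broglie relation lambda = h / p for momentum p (kg m/s)."""
    return h / p

# A 1 kg mass moving at 1 m/s has a wavelength of ~6.6e-34 m, which is
# why wave behavior is unobservable for macroscopic objects.
print(de_broglie_wavelength(1.0))
# A visible-light photon (nu ~ 5e14 Hz) carries ~3.3e-19 J of energy.
print(photon_energy(5e14))
```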
An important contribution to the Davisson–Germer experiment was made by Walter M. Elsasser in Göttingen in the 1920s, who remarked that the wave-like nature of matter might be investigated by electron-scattering experiments on crystalline solids, just as the wave-like nature of X-rays had been confirmed through X-ray scattering experiments on crystalline solids. This suggestion of Elsasser's was communicated by his senior colleague Max Born to physicists in England. When the Davisson and Germer experiment was performed, the results were explained by Elsasser's proposition; however, the initial intention of the experiment was not to confirm the de Broglie hypothesis, but rather to study the surface of nickel. In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target; the angular dependence of the reflected electron intensity was measured, and was determined to have the same diffraction pattern as that predicted by Bragg for X-rays.
At the same time George Paget Thomson independently demonstrated the same effect by firing electrons through metal films to produce a diffraction pattern, and Davisson and Thomson shared the Nobel Prize in Physics in 1937. The Davisson–Germer experiment confirmed the de Broglie hypothesis that matter has wave-like behavior. This, in combination with the Compton effect discovered by Arthur Compton, established the wave–particle duality hypothesis, a fundamental step in quantum theory. Davisson began work in 1921 to study electron bombardment and secondary electron emissions, and a series of experiments continued through 1925. Davisson and Germer's actual objective was to study the surface of a piece of nickel by directing a beam of electrons at the surface and observing how many electrons bounced off at various angles. They expected that, because of the small size of electrons, even the smoothest crystal surface would be too rough, and thus the electron beam would experience diffuse reflection. The experiment consisted of firing an electron beam at a nickel crystal, perpendicular to the surface of the crystal, and measuring how the number of reflected electrons varied as the angle between the detector and the nickel surface varied.
The electron gun was a heated filament that released thermally excited electrons, which were accelerated through an electric potential difference, giving them a certain amount of kinetic energy, towards the nickel crystal. To avoid collisions of the electrons with other atoms on their way towards the surface, the experiment was conducted in a vacuum chamber. To measure the number of electrons that were scattered at different angles, a Faraday cup electron detector that could be moved on an arc path about the crystal was used; the detector was designed to accept only elastically scattered electrons. During the experiment, air accidentally entered the chamber, producing an oxide film on the nickel surface. To remove the oxide, Davisson and Germer heated the specimen in a high-temperature oven, not knowing that this caused the polycrystalline structure of the nickel to form large single-crystal areas with crystal planes continuous over the width of the electron beam. When they started the experiment again and the electrons hit the surface, they were scattered by nickel atoms in the crystal planes of the crystal.
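Combining the de Broglie relation with the kinetic energy the electrons gain in the gun gives their wavelength directly. A non-relativistic sketch (function name is mine; the 54 V figure is the accelerating voltage commonly cited for Davisson and Germer's strongest diffraction peak, an assumption not stated in the text above):

```python
import math

h = 6.62607015e-34      # Planck constant, J s
m_e = 9.1093837015e-31  # electron mass, kg
e = 1.602176634e-19     # elementary charge, C

def electron_wavelength(volts):
    """Non-relativistic de Broglie wavelength of an electron accelerated
    from rest through a potential difference of `volts`.
    Kinetic energy e*V gives momentum p = sqrt(2 m_e e V); lambda = h/p."""
    p = math.sqrt(2 * m_e * e * volts)
    return h / p

# At 54 V the wavelength is about 1.67 angstroms, comparable to the
# spacing of atomic planes in a nickel crystal, so diffraction occurs.
print(electron_wavelength(54) * 1e10, "angstrom")
```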
This, in 1925, generated a diffraction pattern with unexpected peaks. On a break, Davisson attended the Oxford meeting of the British Association for the Advancement of Science in summer 1926, where he learned of the recent advances in quantum mechanics. To Davisson's surprise, Max Born gave a lecture that used diffraction curves from Davisson's 1923 research, which he had published in Science that year, using the data as confirmation of the de Broglie hypothesis. Davisson also learned that in prior years other scientists – Walter Elsasser, E. G. Dymond, Blackett, James Chadwick and Charles Ellis – had attempted similar diffraction experiments, but were unable to generate low enough vacuums or detect the low-intensity beams needed. Returning to the United States, Davisson made modifications to the apparatus and the experimental method in order to test the de Broglie hypothesis directly.
The Afshar experiment is a variation of the double-slit experiment in quantum mechanics, devised and carried out by Shahriar Afshar while at the private, Boston-based Institute for Radiation-Induced Mass Studies. The results were presented at a Harvard seminar in March 2004. The experiment gives information about which of two paths a photon takes through the apparatus, while at the same time allowing interference between the two paths to be observed, by showing that a grid of wires placed at the nodes of the interference pattern does not alter the beams. Afshar claimed that the experiment violates the principle of complementarity of quantum mechanics, which states that the particle and wave aspects of quantum objects cannot be observed at the same time, as well as the Englert–Greenberger duality relation. The experiment has been repeated by a number of investigators and its results have been confirmed, but its interpretation is controversial: several critics disagree that it violates complementarity, while disagreeing amongst themselves as to why.
Afshar's experiment uses a variant of Thomas Young's classic double-slit experiment to create interference patterns to investigate complementarity. Such interferometer experiments have two "arms" or paths a photon may take. One of Afshar's assertions is that, in his experiment, it is possible to check for interference fringes of a photon stream while at the same time observing each photon's path. Shahriar S. Afshar's experimental work was done at the Institute for Radiation-Induced Mass Studies in Boston in 2001 and reproduced at Harvard University in 2003, while he was a research scholar there. The results were presented at a Harvard seminar in March 2004 and published as a conference proceeding by The International Society for Optical Engineering. The experiment was featured as the cover story in the July 2004 edition of New Scientist. The New Scientist feature article itself generated many responses, including various letters to the editor that appeared in the August 7 and August 14, 2004 issues arguing against the conclusions being drawn by Afshar, together with John G. Cramer's response.
Afshar presented his work at the American Physical Society meeting in Los Angeles in late March 2005, and his peer-reviewed paper was published in Foundations of Physics in January 2007. Afshar claims that his experiment invalidates the complementarity principle and has far-reaching implications for the understanding of quantum mechanics, challenging the Copenhagen interpretation. According to Cramer, Afshar's results support Cramer's own transactional interpretation of quantum mechanics and challenge the many-worlds interpretation; this claim has not been published in a peer-reviewed journal. The experiment uses a setup similar to that for the double-slit experiment. In Afshar's variant, light generated by a laser passes through two closely spaced circular pinholes. After the dual pinholes, a lens refocuses the light so that the image of each pinhole falls on a separate photon-detector. A photon that goes through pinhole number one impinges only on detector number one, and if it goes through pinhole number two it impinges only on detector number two, which is why the pinholes are seen separately in the image plane close to the mirrors before the photon-detectors.
When the light acts as a wave, quantum interference produces regions that the photons avoid, called dark fringes. A grid of thin wires is placed just before the lens so that the wires lie in the dark fringes of the interference pattern produced by the dual-pinhole setup. If one of the pinholes is blocked, the interference pattern is no longer formed, and some of the light is blocked by the wires, so the image quality is reduced: with one pinhole closed, the grid of wires causes appreciable diffraction in the light and blocks a certain amount of the light received by the corresponding photon-detector. However, when both pinholes are open, the effect of the wires is negligible, comparable to the case in which no wires are placed in front of the lens, because the wires lie in the dark fringes, which the photons avoid; the effect is not dependent on the light intensity. Afshar's conclusion is that the light exhibits wave-like behavior when going past the wires, since it passes through the spaces between the wires but avoids the wires themselves when both pinholes are open, yet exhibits particle-like behavior after going through the lens, with each photon arriving at a particular photo-detector.
Afshar argues that this behavior contradicts the principle of complementarity, since it shows both complementary wave and particle characteristics in the same experiment for the same photons. A number of scientists have published criticisms of Afshar's interpretation of his results; some reject the claim of a violation of complementarity, while differing in how they explain the way complementarity copes with the experiment. Afshar has responded to these critics in his academic talks, his blog and other forums. Afshar's core claim, that the Englert–Greenberger duality relation is violated, is contested: one group re-ran the experiment using a different method for measuring the visibility of the interference pattern than that used by Afshar and found no violation of complementarity, concluding "This result demonstrates that the experiment can be explained by the Copenhagen interpretation of quantum mechanics." Below is a synopsis of the papers by critics, highlighting their main arguments and the disagreements they have amongst themselves.
Quantum mechanics, including quantum field theory, is a fundamental theory in physics which describes nature at the smallest scales of energy levels of atoms and subatomic particles. Classical physics, the physics existing before quantum mechanics, describes nature at ordinary (macroscopic) scale. Most theories in classical physics can be derived from quantum mechanics as an approximation valid at large scale. Quantum mechanics differs from classical physics in that energy, angular momentum and other quantities of a bound system are restricted to discrete values (quantization). Quantum mechanics arose from theories to explain observations which could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem and the correspondence between energy and frequency in Albert Einstein's 1905 paper which explained the photoelectric effect. Early quantum theory was profoundly re-conceived in the mid-1920s by Erwin Schrödinger, Werner Heisenberg, Max Born and others. The modern theory is formulated in various specially developed mathematical formalisms.
In one of them, a mathematical function, the wave function, provides information about the probability amplitude of position and other physical properties of a particle. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the laser, the transistor and semiconductors such as the microprocessor, and medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macromolecule DNA. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he described in a paper titled "On the nature of light and colours".
This experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. These studies were followed by the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck. Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" precisely matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation, known as Wien's law in his honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, Wien's law underestimated the radiance at low frequencies. Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law, which led to the development of quantum mechanics. Following Max Planck's solution in 1900 to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect.
Around 1900–1910, the atomic theory and the corpuscular theory of light first came to be widely accepted as scientific fact. Among the first to study quantum phenomena in nature were Arthur Compton, C. V. Raman and Pieter Zeeman, each of whom has a quantum effect named after him. Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. At the same time, Ernest Rutherford experimentally discovered the nuclear model of the atom, for which Niels Bohr developed his theory of atomic structure, later confirmed by the experiments of Henry Moseley. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits, a concept also introduced by Arnold Sommerfeld; this phase is known as the old quantum theory. According to Planck, each energy element is proportional to its frequency: E = hν, where h is Planck's constant. Planck cautiously insisted that this was only an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself.
In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material; he won the 1921 Nobel Prize in Physics for this work. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle, with a discrete quantum of energy dependent on its frequency. The foundations of quantum mechanics were established during the first half of the 20th century by Max Planck, Niels Bohr, Werner Heisenberg, Louis de Broglie, Arthur Compton, Albert Einstein, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Enrico Fermi, Wolfgang Pauli, Max von Laue, Freeman Dyson, David Hilbert, Wilhelm Wien and others.
In quantum mechanics, the uncertainty principle is any of a variety of mathematical inequalities asserting a fundamental limit to the precision with which certain pairs of physical properties of a particle, known as complementary variables or canonically conjugate variables, such as position x and momentum p, can be known. Introduced first in 1927 by the German physicist Werner Heisenberg, it states that the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa. The formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard that year and by Hermann Weyl in 1928: σx σp ≥ ħ/2, where ħ is the reduced Planck constant, h/2π. The uncertainty principle is often confused with a related effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the systems, that is, without changing something in the system. Heisenberg utilized such an observer effect at the quantum level as a physical "explanation" of quantum uncertainty.
It has since become clearer that the uncertainty principle is inherent in the properties of all wave-like systems, and that it arises in quantum mechanics simply due to the matter-wave nature of all quantum objects. Thus, the uncertainty principle states a fundamental property of quantum systems and is not a statement about the observational success of current technology. It must be emphasized that measurement does not mean only a process in which a physicist-observer takes part, but rather any interaction between classical and quantum objects regardless of any observer. Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. Certain experiments may deliberately test a particular form of the uncertainty principle as part of their main research program; these include, for example, tests of number–phase uncertainty relations in superconducting or quantum-optics systems. Applications dependent on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational-wave interferometers.
The uncertainty principle is not readily apparent on the macroscopic scales of everyday experience, so it is helpful to demonstrate how it applies to more easily understood physical situations. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave-mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix-mechanics picture formulates it in a way that generalizes more easily. Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another. A nonzero function and its Fourier transform cannot both be sharply localized. A similar tradeoff between the variances of Fourier conjugates arises in all systems underlain by Fourier analysis, for example in sound waves: a pure tone is a sharp spike at a single frequency, while its Fourier transform gives the shape of the sound wave in the time domain, a completely delocalized sine wave.
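The Fourier tradeoff can be checked numerically. The sketch below (function names are mine) estimates the standard deviations of the probability densities of a Gaussian wave packet and of its analytically known Fourier transform, and shows that their product sits at the minimum value 1/2; multiplying by ħ via p = ħk recovers σx σp = ħ/2:

```python
import math

def moments(f, lo, hi, n=20001):
    """Mean and standard deviation of the density |f|^2 on [lo, hi],
    estimated with a simple Riemann sum (fine for smooth, decaying f)."""
    dx = (hi - lo) / (n - 1)
    xs = [lo + i * dx for i in range(n)]
    w = [abs(f(x)) ** 2 for x in xs]
    norm = sum(w) * dx
    mean = sum(x * wi for x, wi in zip(xs, w)) * dx / norm
    var = sum((x - mean) ** 2 * wi for x, wi in zip(xs, w)) * dx / norm
    return mean, math.sqrt(var)

sigma = 0.5
psi = lambda x: math.exp(-x * x / (4 * sigma * sigma))  # position space
phi = lambda k: math.exp(-sigma * sigma * k * k)        # Fourier transform

_, sx = moments(psi, -10, 10)
_, sk = moments(phi, -10, 10)
print(sx * sk)  # ~0.5: the Gaussian saturates sigma_x * sigma_k = 1/2
```

Narrowing the packet (smaller `sigma`) shrinks σx but widens σk by exactly the compensating factor; the Gaussian is the unique shape that achieves the bound with equality.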
In quantum mechanics, the two key points are that the position of the particle takes the form of a matter wave, and that momentum is its Fourier conjugate, as assured by the de Broglie relation p = ħk, where k is the wavenumber. In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables is subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value. For example, if a measurement of an observable A is performed, then the system is in a particular eigenstate Ψ of that observable. However, the particular eigenstate of the observable A need not be an eigenstate of another observable B: if so, then it does not have a unique associated measurement value for B, as the system is not in an eigenstate of that observable. According to the de Broglie hypothesis, every object in the universe is associated with a wave, a situation which gives rise to this phenomenon. The position of the particle is described by a wave function Ψ.
The time-independent wave function of a single-moded plane wave of wavenumber k₀ or momentum p₀ is ψ(x) ∝ e^(i k₀ x) = e^(i p₀ x / ħ). The Born rule states that this should be interpreted as a probability density amplitude function, in the sense that the probability of finding the particle between a and b is P = ∫_a^b |ψ(x)|² dx. In the case of the single-moded plane wave, |ψ(x)|² is a uniform distribution. In other words, the particle position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet.
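For a normalizable wave function the Born-rule integral can be evaluated numerically. A sketch using the ground state of an infinite square well, a standard textbook example chosen here for illustration (function names are mine):

```python
import math

L = 1.0  # well width (arbitrary units)

def psi(x):
    """Normalized ground state of an infinite square well of width L:
    psi(x) = sqrt(2/L) * sin(pi x / L) for 0 <= x <= L."""
    return math.sqrt(2 / L) * math.sin(math.pi * x / L)

def probability(a, b, n=10001):
    """Born rule: P = integral from a to b of |psi(x)|^2 dx,
    approximated by a Riemann sum over n sample points."""
    dx = (b - a) / (n - 1)
    return sum(abs(psi(a + i * dx)) ** 2 for i in range(n)) * dx

print(probability(0, L))      # ~1.0: total probability is normalized
print(probability(0, L / 2))  # ~0.5: half the well, by symmetry
```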
Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of machinery, and of astronomical objects, such as spacecraft, planets and galaxies. If the present state of an object is known, it is possible to predict by the laws of classical mechanics how it will move in the future and how it has moved in the past. The earliest development of classical mechanics is often referred to as Newtonian mechanics. It consists of the physical concepts employed by, and the mathematical methods invented by, Isaac Newton, Gottfried Wilhelm Leibniz and others in the 17th century to describe the motion of bodies under the influence of a system of forces. Later, more abstract methods were developed, leading to the reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. These advances, made predominantly in the 18th and 19th centuries, extend beyond Newton's work through their use of analytical mechanics. They are, with some modification, used in all areas of modern physics.
Classical mechanics provides accurate results when studying objects that are not extremely massive and have speeds not approaching the speed of light. When the objects being examined are about the size of an atom in diameter, it becomes necessary to introduce the other major sub-field of mechanics: quantum mechanics. To describe velocities that are not small compared to the speed of light, special relativity is needed. In cases where objects become extremely massive, general relativity becomes applicable. However, a number of modern sources do include relativistic mechanics in classical physics, which in their view represents classical mechanics in its most developed and accurate form. The following introduces the basic concepts of classical mechanics. For simplicity, it models real-world objects as point particles, objects with negligible size. The motion of a point particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn. In reality, the kind of objects that classical mechanics can describe always have a non-zero size.
Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom: a baseball can spin while it is moving, for example. However, the results for point particles can be used to study such objects by treating them as composite objects, made of a large number of collectively acting point particles; the center of mass of a composite object behaves like a point particle. Classical mechanics uses common-sense notions of how matter and forces interact; it assumes that matter and energy have definite, knowable attributes such as location in space and speed. Non-relativistic mechanics also assumes that forces act instantaneously. The position of a point particle is defined in relation to a coordinate system centered on an arbitrary fixed reference point in space called the origin O. A simple coordinate system might describe the position of a particle P with a vector notated by an arrow labeled r that points from the origin O to point P. In general, the point particle need not be stationary relative to O.
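The claim that the center of mass of a composite object behaves like a point particle can be sketched as a mass-weighted average of the constituent positions; the function name and the two-particle example here are illustrative, not from the text.

```python
# Hedged sketch: center of mass of a composite object treated as a
# collection of point particles, computed as the mass-weighted mean position.
def center_of_mass(masses, positions):
    """Return the mass-weighted average of (x, y) position tuples."""
    total = sum(masses)
    return tuple(
        sum(m * p[i] for m, p in zip(masses, positions)) / total
        for i in range(2)
    )

# two equal point masses at x = 0 and x = 2 balance at x = 1
com = center_of_mass([1.0, 1.0], [(0.0, 0.0), (2.0, 0.0)])
```

Newton's laws applied to the total force on the composite then govern this single point, which is what lets the point-particle results carry over.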
In cases where P is moving relative to O, r is defined as a function of time. In pre-Einstein physics, time is considered an absolute: the time interval observed to elapse between any given pair of events is the same for all observers. In addition to relying on absolute time, classical mechanics assumes Euclidean geometry for the structure of space. The velocity, or the rate of change of position with time, is defined as the derivative of the position with respect to time: v = dr/dt. In classical mechanics, velocities are directly additive and subtractive. For example, if one car travels east at 60 km/h and passes another car traveling in the same direction at 50 km/h, the slower car perceives the faster car as traveling east at 60 − 50 = 10 km/h. However, from the perspective of the faster car, the slower car is moving 10 km/h to the west, denoted −10 km/h, where the negative sign implies the opposite direction. Velocities are directly additive as vector quantities. Mathematically, if the velocity of the first object in the previous discussion is denoted by the vector u = ud and the velocity of the second object by the vector v = ve, where u is the speed of the first object, v is the speed of the second object, and d and e are unit vectors in the directions of motion of each object, then the velocity of the first object as seen by the second object is u′ = u − v. Similarly, the first object sees the velocity of the second object as v′ = v − u.
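The two-car example above can be sketched directly from the rules u′ = u − v and v′ = v − u, taking east as the positive direction along a single axis:

```python
# Sketch of classical relative velocity in one dimension (east positive).
u = 60.0          # faster car's velocity, km/h
v = 50.0          # slower car's velocity, km/h
u_prime = u - v   # faster car as seen by the slower car: +10 km/h (east)
v_prime = v - u   # slower car as seen by the faster car: -10 km/h (west)
```

In more than one dimension the same subtraction applies componentwise to the velocity vectors.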
When both objects are moving in the same direction, this equation can be simplified to u′ = (u − v)d. Or, by ignoring direction, the difference can be given in terms of speed only: u′ = u − v. The acceleration, or rate of change of velocity, is the derivative of the velocity with respect to time: a = dv/dt.
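The successive derivatives defining velocity and acceleration can be approximated numerically; this sketch uses central finite differences on an assumed example trajectory x(t) = 5t², chosen so the acceleration is a constant 10 units.

```python
# Hedged sketch: velocity and acceleration as successive time derivatives
# of position, approximated by central finite differences.
dt = 1e-4  # small time step for the difference quotients

def x(t):
    """Assumed example trajectory: constant acceleration of 10 units."""
    return 5.0 * t * t

def velocity(t):
    # v = dx/dt, approximated as (x(t+dt) - x(t-dt)) / (2 dt)
    return (x(t + dt) - x(t - dt)) / (2 * dt)

def acceleration(t):
    # a = dv/dt, approximated the same way from the velocity
    return (velocity(t + dt) - velocity(t - dt)) / (2 * dt)
```

For this quadratic trajectory the central difference is exact up to floating-point error, so velocity(1.0) ≈ 10 and acceleration(1.0) ≈ 10.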