Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of machinery, and of astronomical objects such as spacecraft, planets and galaxies. If the present state of an object is known, the laws of classical mechanics make it possible to predict how it will move in the future and how it has moved in the past. The earliest development of classical mechanics is referred to as Newtonian mechanics. It consists of the physical concepts and mathematical methods developed by Isaac Newton, Gottfried Wilhelm Leibniz and others in the 17th century to describe the motion of bodies under the influence of a system of forces. Later, more abstract methods were developed, leading to the reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics; these advances, made predominantly in the 18th and 19th centuries, extend beyond Newton's work through their use of analytical mechanics. They are, with some modification, used in all areas of modern physics.
Classical mechanics provides accurate results when studying objects that are not extremely massive and whose speeds do not approach the speed of light. When the objects being examined are roughly the size of an atom, it becomes necessary to introduce the other major sub-field of mechanics: quantum mechanics. To describe velocities that are not small compared to the speed of light, special relativity is needed. Where objects become extremely massive, general relativity becomes applicable. However, a number of modern sources do include relativistic mechanics in classical physics, which in their view represents classical mechanics in its most developed and accurate form. The following introduces the basic concepts of classical mechanics. For simplicity, it models real-world objects as point particles; the motion of a point particle is characterized by a small number of parameters: its position and the forces applied to it. Each of these parameters is discussed in turn. In reality, the kind of objects that classical mechanics can describe always have a non-zero size.
Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom: a baseball, for example, can spin while it is moving. However, the results for point particles can be used to study such objects by treating them as composite objects, made of a large number of collectively acting point particles; the center of mass of a composite object behaves like a point particle. Classical mechanics uses common-sense notions of how matter and forces interact; it assumes that matter and energy have definite, knowable attributes such as location in space and speed. Non-relativistic mechanics also assumes that forces act instantaneously. The position of a point particle is defined in relation to a coordinate system centered on an arbitrary fixed reference point in space called the origin O. A simple coordinate system might describe the position of a particle P with a vector, notated by an arrow labeled r, that points from the origin O to point P. In general, the point particle does not need to be stationary relative to O.
In cases where P is moving relative to O, r is defined as a function of time. In pre-Einstein relativity, time is considered an absolute: the time interval observed to elapse between any given pair of events is the same for all observers. In addition to relying on absolute time, classical mechanics assumes Euclidean geometry for the structure of space. The velocity, or the rate of change of position with time, is defined as the derivative of the position with respect to time: v = dr/dt. In classical mechanics, velocities are directly additive and subtractive. For example, if one car travels east at 60 km/h and passes another car traveling in the same direction at 50 km/h, the slower car perceives the faster car as traveling east at 60 − 50 = 10 km/h. From the perspective of the faster car, the slower car is moving 10 km/h to the west, denoted as −10 km/h, where the sign implies the opposite direction. Velocities are directly additive as vector quantities. Mathematically, if the velocity of the first object in the previous discussion is denoted by the vector u = ud and the velocity of the second object by the vector v = ve, where u is the speed of the first object, v is the speed of the second object, and d and e are unit vectors in the directions of motion of each object, then the velocity of the first object as seen by the second object is u′ = u − v. Similarly, the first object sees the velocity of the second object as v′ = v − u.
When both objects are moving in the same direction, this equation can be simplified to u′ = (u − v)d. Or, by ignoring direction, the difference can be given in terms of speed only: u′ = u − v. The acceleration, or rate of change of velocity, is the derivative of the velocity with respect to time (the second derivative of the position with respect to time): a = dv/dt.
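The relative-velocity relations u′ = u − v and v′ = v − u can be illustrated with a short numerical sketch. The following Python/NumPy snippet uses the car example above; the vector representation and variable names are assumptions made purely for illustration.

```python
import numpy as np

# Illustrative sketch of the relative-velocity relations u' = u - v and
# v' = v - u; the speeds are the example values from the text (km/h),
# expressed as eastward vectors.
east = np.array([1.0, 0.0])          # unit vector pointing east (d = e here)

u = 60.0 * east                      # velocity of the faster car
v = 50.0 * east                      # velocity of the slower car

u_rel = u - v                        # faster car as seen from the slower car
v_rel = v - u                        # slower car as seen from the faster car

print(u_rel)   # [10.  0.]  -> 10 km/h east
print(v_rel)   # [-10.  0.] -> 10 km/h, i.e. 10 km/h to the west
```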
Quantum tunnelling or tunneling is the quantum mechanical phenomenon in which a subatomic particle passes through a potential barrier that it could not surmount according to classical mechanics. Quantum tunnelling plays an essential role in several physical phenomena, such as the nuclear fusion that occurs in main sequence stars like the Sun, and it has important applications in the tunnel diode, quantum computing and the scanning tunnelling microscope. The effect was predicted in the early 20th century, and its acceptance as a general physical phenomenon came mid-century. Fundamental quantum mechanical concepts are central to this phenomenon, which makes quantum tunnelling one of the novel implications of quantum mechanics. Quantum tunnelling is projected to create physical limits to the size of the transistors used in microprocessors, because electrons are able to tunnel past transistors that are too small. Tunnelling is often explained in terms of the Heisenberg uncertainty principle and the wave-particle duality of matter, according to which a quantum object can behave as a wave or as a particle.
Quantum tunnelling was developed from the study of radioactivity, discovered in 1896 by Henri Becquerel. Radioactivity was examined further by Marie Curie and Pierre Curie, for which they earned the Nobel Prize in Physics in 1903. Ernest Rutherford and Egon Schweidler studied its nature, which was verified empirically by Friedrich Kohlrausch; the idea of the half-life and the possibility of predicting decay emerged from their work. In 1901, Robert Francis Earhart, while investigating the conduction of gases between closely spaced electrodes using the Michelson interferometer to measure the spacing, discovered an unexpected conduction regime. J. J. Thomson commented that the finding warranted further investigation. In 1911 and again in 1914, then-graduate student Franz Rother, employing Earhart's method for controlling and measuring the electrode separation but with a sensitive platform galvanometer, directly measured steady field emission currents. In 1926, Rother, using a still newer platform galvanometer of sensitivity 26 pA, measured the field emission currents in a "hard" vacuum between closely spaced electrodes.
Quantum tunnelling was first noticed in 1927 by Friedrich Hund while he was calculating the ground state of the double-well potential, and independently in the same year by Leonid Mandelstam and Mikhail Leontovich in their analysis of the implications of the new Schrödinger wave equation for the motion of a particle in a confining potential of limited spatial extent. Its first application was a mathematical explanation for alpha decay, given in 1928 by George Gamow and independently by Ronald Gurney and Edward Condon; the researchers solved the Schrödinger equation for a model nuclear potential and derived a relationship between the half-life of the particle and the energy of emission that depended directly on the mathematical probability of tunnelling. After attending a seminar by Gamow, Max Born recognised the generality of tunnelling: he realised that it was not restricted to nuclear physics, but was a general result of quantum mechanics that applies to many different systems. Shortly thereafter, both groups considered the case of particles tunnelling into the nucleus.
The study of semiconductors and the development of transistors and diodes led to the acceptance of electron tunnelling in solids by 1957. Leo Esaki demonstrated tunnelling in semiconductors, Ivar Giaever demonstrated it in superconductors, and Brian Josephson predicted the tunnelling of superconducting Cooper pairs; the three shared the Nobel Prize in Physics in 1973. In 2016, the quantum tunnelling of water was discovered. Quantum tunnelling falls under the domain of quantum mechanics: the study of what happens at the quantum scale. This process cannot be directly perceived, and much of its understanding is shaped by the microscopic world, which classical mechanics cannot adequately explain. To understand the phenomenon, particles attempting to travel across a potential barrier can be compared to a ball trying to roll over a hill. Classical mechanics predicts that particles that do not have enough energy to classically surmount a barrier will not be able to reach the other side. Thus, a ball without sufficient energy to surmount the hill would roll back down.
Or, lacking the energy to penetrate a wall, it would bounce back or, in the extreme case, bury itself inside the wall. In quantum mechanics, these particles can, with a small probability, tunnel to the other side, thus crossing the barrier. Here, the "ball" could, in a sense, borrow energy from its surroundings to tunnel through the wall or "roll over the hill", paying it back by making the reflected electrons more energetic than they otherwise would have been. The reason for this difference comes from the treatment of matter in quantum mechanics as having properties of both waves and particles. One interpretation of this duality involves the Heisenberg uncertainty principle, which places a limit on how precisely the position and the momentum of a particle can be known at the same time. This implies that there are no solutions with a probability of exactly zero (or one): if, for example, the position of a particle were known with certainty (probability 1), the uncertainty in its momentum would have to be infinite.
Hence, the probability of a given particle's existence on the opposite side of an intervening barrier is non-zero, and such particles will appear on the 'other' side with a relative frequency proportional to this probability. The wave function of a particle summarises everything that can be known about a physical system.
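To make the barrier picture concrete, the following sketch evaluates the standard transmission coefficient for a one-dimensional rectangular barrier with particle energy below the barrier height; the barrier height, width and particle energy are assumed example values, not taken from the text.

```python
import numpy as np

# Tunnelling through a rectangular barrier of height V0 and width L,
# for a particle of energy E < V0.  The textbook transmission coefficient is
#   T = [1 + V0^2 sinh^2(kappa L) / (4 E (V0 - E))]^(-1),
#   kappa = sqrt(2 m (V0 - E)) / hbar.
hbar = 1.054571817e-34        # reduced Planck constant, J s
m_e  = 9.1093837015e-31       # electron mass, kg
eV   = 1.602176634e-19        # 1 electronvolt in joules

def transmission(E, V0, L, m=m_e):
    """Transmission probability for E < V0 (rectangular barrier)."""
    kappa = np.sqrt(2 * m * (V0 - E)) / hbar
    return 1.0 / (1.0 + (V0**2 * np.sinh(kappa * L)**2) / (4 * E * (V0 - E)))

# A 1 eV electron meeting a 2 eV barrier: classically it never gets through,
# but quantum mechanically a small fraction does, falling rapidly with width.
for L_nm in (0.1, 0.5, 1.0):
    print(L_nm, "nm :", transmission(1 * eV, 2 * eV, L_nm * 1e-9))
```

The rapid decrease of the printed probability with barrier width is the behaviour that limits, for example, how thin a transistor's insulating layers can be made before leakage by tunnelling becomes significant.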
In physics, interference is a phenomenon in which two waves superpose to form a resultant wave of greater, lower, or the same amplitude. Constructive and destructive interference result from the interaction of waves that are correlated or coherent with each other, either because they come from the same source or because they have the same or nearly the same frequency. Interference effects can be observed with all types of waves, for example, radio, surface water waves, gravity waves, or matter waves; the resulting images or graphs are called interferograms. The principle of superposition of waves states that when two or more propagating waves of same type are incident on the same point, the resultant amplitude at that point is equal to the vector sum of the amplitudes of the individual waves. If a crest of a wave meets a crest of another wave of the same frequency at the same point the amplitude is the sum of the individual amplitudes—this is constructive interference. If a crest of one wave meets a trough of another wave the amplitude is equal to the difference in the individual amplitudes—this is known as destructive interference.
Constructive interference occurs when the phase difference between the waves is an even multiple of π, whereas destructive interference occurs when the difference is an odd multiple of π. If the difference between the phases is intermediate between these two extremes, the magnitude of the displacement of the summed waves lies between the minimum and maximum values. Consider, for example, what happens when two identical stones are dropped into a still pool of water at different locations; each stone generates a circular wave propagating outwards from the point where it was dropped. When the two waves overlap, the net displacement at a particular point is the sum of the displacements of the individual waves. At some points these will be in phase and will produce a maximum displacement. In other places the waves will be in anti-phase, and there will be no net displacement at these points. Thus, parts of the surface will be stationary; these appear as stationary lines radiating outwards from between the two sources.
Interference of light is a common phenomenon that can be explained classically by the superposition of waves; however, a deeper understanding of light interference requires knowledge of the wave-particle duality of light, which is due to quantum mechanics. Prime examples of light interference are the famous double-slit experiment, laser speckle, anti-reflective coatings and interferometers. Traditionally the classical wave model is taught as a basis for understanding optical interference, based on the Huygens–Fresnel principle. The above can be demonstrated in one dimension by deriving the formula for the sum of two waves. The equation for the amplitude of a sinusoidal wave traveling to the right along the x-axis is W1 = A cos(kx − ωt), where A is the peak amplitude, k = 2π/λ is the wavenumber and ω = 2πf is the angular frequency of the wave. Suppose a second wave of the same frequency and amplitude but with a different phase is also traveling to the right: W2 = A cos(kx − ωt + φ), where φ is the phase difference between the waves in radians.
The two waves will superpose and add: the sum of the two waves is W1 + W2 = A[cos(kx − ωt) + cos(kx − ωt + φ)]. Using the trigonometric identity for the sum of two cosines, cos a + cos b = 2 cos((a − b)/2) cos((a + b)/2), this can be written W1 + W2 = 2A cos(φ/2) cos(kx − ωt + φ/2). This represents a wave at the original frequency, traveling to the right like its components, whose amplitude is proportional to the cosine of φ/2. Constructive interference: if the phase difference is an even multiple of π: φ = …, − 4π, − 2π, 0, 2π, 4π, …
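A short numerical check of this superposition formula is given below; the values chosen for A, k, ω and φ are arbitrary illustrative assumptions.

```python
import numpy as np

# Numerical check of W1 + W2 = 2 A cos(phi/2) cos(kx - wt + phi/2)
# for assumed example values of the wave parameters.
A, wavelength, f, phi = 1.0, 0.5, 3.0, np.pi / 3
k, w = 2 * np.pi / wavelength, 2 * np.pi * f

x = np.linspace(0, 2, 1000)
t = 0.1

W1 = A * np.cos(k * x - w * t)
W2 = A * np.cos(k * x - w * t + phi)
summed    = W1 + W2
predicted = 2 * A * np.cos(phi / 2) * np.cos(k * x - w * t + phi / 2)

print(np.allclose(summed, predicted))                    # True
print(np.max(np.abs(summed)), 2 * A * np.cos(phi / 2))   # peak amplitude ~ 2A cos(phi/2)
```

Setting phi to an even multiple of π makes the peak amplitude 2A (constructive interference), while an odd multiple of π makes it zero (destructive interference).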
Quantum teleportation is a process by which quantum information can be transmitted from one location to another, with the help of classical communication and shared quantum entanglement between the sending and receiving locations. Because it depends on classical communication, which can proceed no faster than the speed of light, it cannot be used for faster-than-light transport or communication of classical bits. While it has proven possible to teleport one or more qubits of information between two entangled quantum systems, this has not yet been achieved between anything larger than molecules. Although the name is inspired by the teleportation used in fiction, quantum teleportation is limited to the transfer of information rather than matter itself. Quantum teleportation is not a form of transportation, but of communication: it provides a way of transporting a qubit from one location to another without having to move a physical particle along with it. The term was coined by physicist Charles Bennett. The seminal paper first expounding the idea of quantum teleportation was published by C. H. Bennett, G. Brassard, C.
Crépeau, R. Jozsa, A. Peres and W. K. Wootters in 1993. Quantum teleportation was first realized with single photons and later demonstrated in various material systems such as atoms, ions and superconducting circuits; the latest reported record distance for quantum teleportation is 1,400 km, achieved by the group of Jian-Wei Pan using the Micius satellite for space-based quantum teleportation. In matters relating to quantum or classical information theory, it is convenient to work with the simplest possible unit of information: the two-state system. In classical information this is a bit, represented using a one or a zero; the quantum analog of a bit is the qubit. Qubits encode a type of information, called quantum information, which differs from "classical" information. For example, quantum information can be neither copied nor destroyed. Quantum teleportation provides a mechanism for moving a qubit from one location to another, without having to physically transport the underlying particle to which that qubit is attached. Much like the invention of the telegraph allowed classical bits to be transported at high speed across continents, quantum teleportation holds the promise that one day, qubits could be moved likewise.
As of 2015, the quantum states of single photons, photon modes, single atoms, atomic ensembles, defect centers in solids, single electrons and superconducting circuits have been employed as information bearers. The movement of qubits does not require the movement of "things" any more than communication over the internet does: no quantum object needs to be transported, but it is necessary to communicate two classical bits per teleported qubit from the sender to the receiver. The actual teleportation protocol requires that an entangled quantum state or Bell state be created and its two parts shared between the two locations. In essence, a certain kind of quantum channel between the two sites must be established before a qubit can be moved. Teleportation also requires a classical information channel to be established, as two classical bits must be transmitted to accompany each qubit; the reason for this is that the results of the measurements must be communicated, and this must be done over ordinary classical communication channels.
The need for such classical channels may, at first, seem disappointing. What's more, Bell states are most easily shared using photons from lasers, so teleportation could be done, in principle, through open space, i.e. without the need to send the light through cables or optical fibers. The quantum states of single atoms have been teleported. An atom consists of several parts: the qubits in the electronic state or electron shells surrounding the atomic nucleus, the qubits in the nucleus itself, and the electrons, protons and neutrons making up the atom. Physicists have teleported the qubits encoded in the electronic state of atoms; it is therefore inaccurate to say that "an atom has been teleported". The quantum state of an atom has. Thus, performing this kind of teleportation requires a stock of atoms at the receiving site, available for having qubits imprinted on them. The importance of teleporting the nuclear state is unclear: the nuclear state does affect the atom, e.g. in hyperfine splitting, but whether such state would need to be teleported in some futuristic "practical" application is debatable.
An important aspect of quantum information theory is entanglement, which imposes statistical correlations between otherwise distinct physical systems. These correlations hold even when measurements are chosen and performed independently, out of causal contact with one another, as verified in Bell test experiments. Thus, an observation resulting from a measurement choice made at one point in spacetime seems to instantaneously affect outcomes in another region, even though light has not yet had time to travel the distance; however, such correlations can never be used to transmit any information faster than the speed of light, a statement encapsulated in the no-communication theorem. Thus teleportation, as a whole, can never be superluminal, as a qubit cannot be reconstructed until the accompanying classical information arrives. Understanding quantum teleportation requires a good grounding in finite-dimensional linear algebra, Hilbert spaces and projection matrices. A qubit is described by a state vector in a two-dimensional complex Hilbert space.
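The protocol outlined above, in which a shared Bell pair, two local operations and measurements by the sender, two classical bits, and a conditional correction by the receiver move one qubit of information, can be simulated directly with small matrices. The following Python/NumPy sketch is illustrative only; the random input state, variable names and gate construction are assumptions for the example, not a description of any particular experiment.

```python
import numpy as np

# Minimal simulation of the standard teleportation protocol:
# qubit 0 holds the state to teleport, qubits 1 and 2 form a shared Bell pair.
rng = np.random.default_rng(7)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

# Random single-qubit state to teleport, plus the Bell pair (|00> + |11>)/sqrt(2).
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)                       # qubit 0 is the leftmost factor

# Sender's operations: CNOT(0 -> 1), then Hadamard on qubit 0.
state = np.kron(CNOT, I2) @ state
state = np.kron(np.kron(H, I2), I2) @ state

# Sender measures qubits 0 and 1; the outcome (a, b) is sent as two classical bits.
probs = [np.linalg.norm(state[4 * a + 2 * b: 4 * a + 2 * b + 2]) ** 2
         for a in (0, 1) for b in (0, 1)]
a, b = divmod(rng.choice(4, p=probs), 2)

# Receiver's qubit collapses accordingly; applying Z^a X^b recovers |psi>.
received = state[4 * a + 2 * b: 4 * a + 2 * b + 2].copy()
received /= np.linalg.norm(received)
received = np.linalg.matrix_power(Z, a) @ np.linalg.matrix_power(X, b) @ received

print("fidelity:", abs(np.vdot(psi, received)) ** 2)   # ~1.0
```

Note that until the two classical bits (a, b) arrive, the receiver's qubit is in one of four equally likely states and carries no usable information, which is why the protocol cannot be superluminal.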
History of quantum mechanics
The history of quantum mechanics is a fundamental part of the history of modern physics. Quantum mechanics' history, as it interlaces with the history of quantum chemistry, began with a number of different scientific discoveries, among them the 1838 discovery of cathode rays by Michael Faraday. Albert Einstein in 1905, in order to explain the photoelectric effect reported by Heinrich Hertz in 1887, postulated, consistently with Max Planck's quantum hypothesis, that light itself is made of individual quantum particles, which in 1926 came to be called photons by Gilbert N. Lewis. The photoelectric effect was observed upon shining light of particular wavelengths on certain materials, such as metals, which caused electrons to be ejected from those materials only if the light quantum energy was greater than the work function of the metal's surface. The phrase "quantum mechanics" was coined by the group of physicists including Max Born, Werner Heisenberg and Wolfgang Pauli at the University of Göttingen in the early 1920s, and was first used in Born's 1924 paper "Zur Quantenmechanik".
In the years to follow, this theoretical basis began to be applied to chemical structure and bonding. Ludwig Boltzmann suggested in 1877 that the energy levels of a physical system, such as a molecule, could be discrete; he was a founder of the Austrian Mathematical Society, together with the mathematicians Gustav von Escherich and Emil Müller. Boltzmann's rationale for the presence of discrete energy levels in molecules such as those of iodine gas had its origins in his statistical thermodynamics and statistical mechanics theories and was backed up by mathematical arguments, as would also be the case twenty years later with the first quantum theory put forward by Max Planck. In 1900, the German physicist Max Planck reluctantly introduced the idea that energy is quantized in order to derive a formula for the observed frequency dependence of the energy emitted by a black body, called Planck's law, that included a Boltzmann distribution. Planck's law can be stated as follows: I(ν, T) = (2hν³/c²) · 1/(e^(hν/kT) − 1), where I is the energy per unit time radiated per unit area of emitting surface in the normal direction per unit solid angle per unit frequency by a black body at temperature T, h is the Planck constant, ν is the frequency, c is the speed of light and k is the Boltzmann constant.
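As a concrete illustration, the following sketch evaluates Planck's law numerically, along with its high-frequency limit (the Wien approximation discussed in the next paragraph); the temperature and frequencies are assumed example values.

```python
import numpy as np

# Planck's law as stated above,
#   I(nu, T) = (2 h nu^3 / c^2) * 1 / (exp(h nu / k T) - 1),
# and the Wien limit exp(-h nu / k T), valid when h nu >> k T.
h  = 6.62607015e-34      # Planck constant, J s
c  = 2.99792458e8        # speed of light, m/s
kB = 1.380649e-23        # Boltzmann constant, J/K

def planck(nu, T):
    """Spectral radiance per unit frequency of a black body at temperature T."""
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

def wien(nu, T):
    """Wien approximation, the h*nu >> k*T limit of Planck's law."""
    return (2 * h * nu**3 / c**2) * np.exp(-h * nu / (kB * T))

T = 5800.0                               # roughly the Sun's surface temperature, K
for nu in (1e13, 1e14, 1e15):            # infrared to ultraviolet, Hz
    print(nu, planck(nu, T), wien(nu, T))
```

At the highest frequency shown, hν is several times kT and the two expressions nearly coincide, while at low frequency the Wien form underestimates the Planck result.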
The earlier Wien approximation may be derived from Planck's law by assuming hν ≫ kT. Moreover, the application of Planck's quantum theory to the electron allowed Ștefan Procopiu in 1911–1913, and subsequently Niels Bohr in 1913, to calculate the magnetic moment of the electron, later called the "magneton". In 1905, Einstein explained the photoelectric effect by postulating that light, or more generally all electromagnetic radiation, can be divided into a finite number of "energy quanta" that are localized points in space. From the introduction section of his March 1905 quantum paper, "On a heuristic viewpoint concerning the emission and transformation of light", Einstein states: "According to the assumption to be contemplated here, when a light ray is spreading from a point, the energy is not distributed continuously over ever-increasing spaces, but consists of a finite number of 'energy quanta' that are localized in points in space, move without dividing, and can be absorbed or generated only as a whole." This statement has been called the most revolutionary sentence written by a physicist of the twentieth century.
These energy quanta later came to be called "photons", a term introduced by Gilbert N. Lewis in 1926. The idea that each photon had to consist of energy in terms of quanta was a remarkable achievement. In 1913, Bohr explained the spectral lines of the hydrogen atom, again by using quantization, in his paper of July 1913, On the Constitution of Atoms and Molecules. These theories, though successful, were phenomenological: during this time, there was no rigorous justification for quantization, aside from Henri Poincaré's discussion of Planck's theory in his 1912 paper Sur la théorie des quanta.
Popper's experiment is an experiment proposed by the philosopher Karl Popper to put different interpretations of quantum mechanics to the test. As early as 1934, Popper started criticising the increasingly accepted Copenhagen interpretation, a popular subjectivist interpretation of quantum mechanics. In his most famous book, Logik der Forschung, he therefore proposed a first experiment alleged to empirically discriminate between the Copenhagen interpretation and a realist interpretation, which he advocated. Einstein wrote a letter to Popper about the experiment in which he raised some crucial objections, and Popper himself declared that this first attempt was "a gross mistake for which I have been sorry and ashamed of since". Popper came back to the foundations of quantum mechanics from 1948, when he developed his criticism of determinism in both quantum and classical physics. Indeed, Popper intensified his research activities on the foundations of quantum mechanics throughout the 1950s and 1960s, developing his interpretation of quantum mechanics in terms of real existing probabilities, thanks to the support of a number of distinguished physicists.
In 1980, Popper proposed his more important, yet still overlooked, contribution to QM: a "new simplified version of the EPR experiment". The experiment was, however, published only two years later, in the third volume of the Postscript to the Logic of Scientific Discovery. The best-known interpretation of quantum mechanics is the Copenhagen interpretation put forward by Niels Bohr and his school. It maintains that observations lead to a wavefunction collapse, thereby suggesting the counter-intuitive result that two well-separated, non-interacting systems require action-at-a-distance. Popper argued that such non-locality conflicts with common sense and would lead to a subjectivist interpretation of phenomena, depending on the role of the 'observer'. While the EPR argument was always meant to be a thought experiment, put forward to shed light on the intrinsic paradoxes of QM, Popper proposed an experiment which could have been experimentally implemented, and he took part in a physics conference organised in Bari in 1983 to present his experiment and propose to the experimentalists that they carry it out.
The actual realisation of Popper's experiment required new techniques making use of the phenomenon of spontaneous parametric down-conversion, which had not yet been developed at that time, so the experiment was performed only in 1999, five years after Popper had died. In contrast to the first proposal of 1934, Popper's experiment of 1980 exploits pairs of entangled particles in order to put Heisenberg's uncertainty principle to the test. Indeed, Popper maintains: I wish to suggest a crucial experiment to test whether knowledge alone is sufficient to create 'uncertainty' and, with it, scatter, or whether it is the physical situation that is responsible for the scatter. Popper's proposed experiment consists of a low-intensity source of particles that can generate pairs of particles traveling to the left and to the right along the x-axis; the beam's low intensity is "so that the probability is high that two particles recorded at the same time on the left and on the right are those which have interacted before emission." There are two slits, one each in the paths of the two particles.
Behind the slits are semicircular arrays of counters which can detect the particles after they pass through the slits. "These counters are coincidence counters, so that they only detect particles that have passed at the same time through A and B." Popper argued that because the slits localize the particles to a narrow region along the y-axis, from the uncertainty principle they experience large uncertainties in the y-components of their momenta. This larger spread in momentum will show up as particles being detected at positions that lie outside the regions where particles would reach based on their initial momentum spread. Popper suggests that we count the particles in coincidence, i.e. we count only those particles behind slit B whose partner has gone through slit A. Particles which are not able to pass through slit A are ignored. The Heisenberg scatter for both the beams of particles going to the right and to the left is tested "by making the two slits A and B wider or narrower. If the slits are narrower, counters should come into play which are higher up and lower down, seen from the slits.
The coming into play of these counters is indicative of the wider scattering angles which go with a narrower slit, according to the Heisenberg relations." Now the slit at A is made small and the slit at B wide. Popper wrote that, according to the EPR argument, we have measured position "y" for both particles with the precision Δy, and not just for the particle passing through slit A; this is because from the initial entangled EPR state we can calculate the position of particle 2, once the position of particle 1 is known, with the same precision. We can do this, argues Popper, even though slit B is wide open. Therefore, Popper states that "fairly precise "knowledge"" about the y position of particle 2 is obtained, and since it is, according to the Copenhagen interpretation, our knowledge, described by the theory and by the Heisenberg relations, it should be expected that the momentum py of particle 2 scatters as much as that of particle 1, even though slit A is much narrower than the widely opened slit at B.
In geometry and physics, spinors are elements of a vector space that can be associated with Euclidean space. Like geometric vectors and more general tensors, spinors transform linearly when the Euclidean space is subjected to a slight rotation. However, when a sequence of such small rotations is composed to form an overall final rotation, the resulting spinor transformation depends on which sequence of small rotations was used: unlike vectors and tensors, a spinor transforms to its negative when the space is rotated through a complete turn from 0° to 360°; this property characterizes spinors. It is possible to associate a similar notion of spinor to Minkowski space, in which case the Lorentz transformations of special relativity play the role of rotations. Spinors were introduced in geometry by Élie Cartan in 1913. In the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or "spin", of the electron and other subatomic particles. Spinors are characterized by the specific way in which they behave under rotations.
They change in different ways depending not just on the overall final rotation, but on the details of how that rotation was achieved. There are two topologically distinguishable classes of paths through rotations that result in the same overall rotation, as famously illustrated by the belt trick puzzle; these two inequivalent classes yield spinor transformations of opposite sign. The spin group is the group of all rotations keeping track of the class; it doubly covers the rotation group, since each rotation can be obtained in two inequivalent ways as the endpoint of a path. The space of spinors is by definition equipped with a linear representation of the spin group, meaning that elements of the spin group act as linear transformations on the space of spinors, in a way that genuinely depends on the homotopy class. In mathematical terms, spinors are described by a double-valued projective representation of the rotation group SO(3). Although spinors can be defined purely as elements of a representation space of the spin group, they are typically defined as elements of a vector space that carries a linear representation of the Clifford algebra.
The Clifford algebra is an associative algebra that can be constructed from Euclidean space and its inner product in a basis-independent way. Both the spin group and its Lie algebra are embedded inside the Clifford algebra in a natural way, and in applications the Clifford algebra is often the easiest to work with. After choosing an orthonormal basis of Euclidean space, a representation of the Clifford algebra is generated by gamma matrices, matrices that satisfy a set of canonical anti-commutation relations; the spinors are the column vectors on which these matrices act. In three Euclidean dimensions, for instance, the Pauli spin matrices are a set of gamma matrices, and the two-component complex column vectors on which these matrices act are spinors. However, the particular matrix representation of the Clifford algebra, and hence what constitutes a "column vector", involves the choice of basis and gamma matrices in an essential way. As a representation of the spin group, this realization of spinors as column vectors will either be irreducible if the dimension is odd, or it will decompose into a pair of so-called "half-spin" or Weyl representations if the dimension is even.
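As a concrete check of these statements in three Euclidean dimensions, the following sketch verifies that the Pauli matrices satisfy the canonical anticommutation relations and that a two-component spinor picks up a minus sign under a full 2π rotation; the rotation axis and the example spinor are arbitrary choices made for illustration.

```python
import numpy as np

# The Pauli matrices serve as gamma matrices in three Euclidean dimensions:
# they satisfy {sigma_i, sigma_j} = 2 * delta_ij * I.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

for i, si in enumerate(paulis):
    for j, sj in enumerate(paulis):
        anti = si @ sj + sj @ si
        assert np.allclose(anti, 2 * (i == j) * np.eye(2))

# Rotation of a spinor about the z-axis by angle theta: U = exp(-i theta sz / 2),
# which for the diagonal sz is simply diag(exp(-i theta/2), exp(i theta/2)).
def spin_rotation(theta):
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

psi = np.array([1, 0], dtype=complex)
print(spin_rotation(2 * np.pi) @ psi)   # -psi: a 360° rotation flips the sign
print(spin_rotation(4 * np.pi) @ psi)   # +psi: a 720° rotation returns the spinor
```

The sign flip after 2π and the return to the original spinor after 4π is the algebraic counterpart of the belt trick and of the double covering of the rotation group by the spin group described above.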
What characterizes spinors and distinguishes them from geometric vectors and other tensors is subtle. Consider applying a rotation to the coordinates of a system. No object in the system itself has moved, only the coordinates have, so there will always be a compensating change in those coordinate values when applied to any object of the system. Geometrical vectors, for example, have components that will undergo the same rotation as the coordinates. More broadly, any tensor associated with the system has coordinate descriptions that adjust to compensate for changes to the coordinate system itself. Spinors do not appear at this level of the description of a physical system, when one is concerned only with the properties of a single isolated rotation of the coordinates. Rather, spinors appear when we imagine that instead of a single rotation, the coordinate system is rotated between some initial and final configuration. For any of the familiar and intuitive quantities associated with the system, the transformation law does not depend on the precise details of how the coordinates arrived at their final configuration.
Spinors, on the other hand, are constructed in such a way that makes them sensitive to how the gradual rotation of the coordinates arrived there: they exhibit path-dependence. It turns out that, for any final configuration of the coordinates, there are two inequivalent gradual rotations of the coordinate system that result in this same configuration; this ambiguity is called the homotopy class of the gradual rotation. The belt trick puzzle famously demonstrates two different rotations, one through an angle of 2π and the other through an angle of 4π, having the same final configurations but different classes. Spinors exhibit a sign-reversal that genuinely depends on this homotopy class; this distinguishes them from other tensors, none of which can feel the class. Spinors can be exhibited as concrete objects using a choice of Cartesian coordinates. In three Euclidean dimensions, for instance, spinors can be constructed by making a choice of Pauli spin matrices corresponding to the three coordinate axes.