Quantum tunnelling (or tunneling) is the quantum mechanical phenomenon in which a subatomic particle passes through a potential barrier that it could not surmount according to classical mechanics. Quantum tunnelling plays an essential role in several physical phenomena, such as the nuclear fusion that occurs in main-sequence stars like the Sun, and it has important applications in the tunnel diode, quantum computing, and the scanning tunnelling microscope. The effect was predicted in the early 20th century, and its acceptance as a general physical phenomenon came mid-century. Fundamental quantum mechanical concepts are central to this phenomenon, which makes quantum tunnelling one of the novel implications of quantum mechanics. Quantum tunnelling is projected to set physical limits on the size of the transistors used in microprocessors, because electrons can tunnel through transistors that are too small. Tunnelling is often explained in terms of the Heisenberg uncertainty principle and the fact that a quantum object can behave as a wave or as a particle.
Quantum tunnelling was developed from the study of radioactivity, discovered in 1896 by Henri Becquerel. Radioactivity was examined further by Marie Curie and Pierre Curie, for which they earned the Nobel Prize in Physics in 1903. Ernest Rutherford and Egon Schweidler studied its nature, which was later verified empirically by Friedrich Kohlrausch; the idea of the half-life and the possibility of predicting decay grew out of their work. In 1901, Robert Francis Earhart, while investigating the conduction of gases between closely spaced electrodes using the Michelson interferometer to measure the spacing, discovered an unexpected conduction regime, on which J. J. Thomson commented. In 1911 and again in 1914, then-graduate student Franz Rother, employing Earhart's method for controlling and measuring the electrode separation but with a sensitive platform galvanometer, directly measured steady field emission currents. In 1926, using a still newer platform galvanometer with a sensitivity of 26 pA, Rother measured the field emission currents in a "hard" vacuum between closely spaced electrodes.
Quantum tunnelling was first noticed in 1927 by Friedrich Hund while he was calculating the ground state of the double-well potential, and independently in the same year by Leonid Mandelstam and Mikhail Leontovich in their analysis of the implications of the new Schrödinger wave equation for the motion of a particle in a confining potential of limited spatial extent. Its first application was a mathematical explanation for alpha decay, provided in 1928 by George Gamow and independently by Ronald Gurney and Edward Condon; the researchers solved the Schrödinger equation for a model nuclear potential and derived a relationship between the half-life of the particle and the energy of emission that depended directly on the mathematical probability of tunnelling. After attending a seminar by Gamow, Max Born recognised the generality of tunnelling: he realised that it was not restricted to nuclear physics but was a general result of quantum mechanics that applies to many different systems. Shortly thereafter, both groups considered the case of particles tunnelling into the nucleus.
The study of semiconductors and the development of transistors and diodes led to the acceptance of electron tunnelling in solids by 1957. Leo Esaki and Ivar Giaever demonstrated tunnelling in semiconductors and superconductors respectively, and Brian Josephson predicted the tunnelling of superconducting Cooper pairs; the three shared the Nobel Prize in Physics in 1973. In 2016, the quantum tunnelling of water was discovered. Quantum tunnelling falls under the domain of quantum mechanics, the study of what happens at the quantum scale. The process cannot be directly perceived, but much of our understanding of it is shaped by the microscopic world, which classical mechanics cannot adequately explain. To understand the phenomenon, a particle attempting to travel across a potential barrier can be compared to a ball trying to roll over a hill. Classical mechanics predicts that a particle without enough energy to classically surmount a barrier cannot reach the other side. Thus, a ball without sufficient energy to surmount the hill would roll back down.
Or, lacking the energy to penetrate a wall, it would bounce back or, in the extreme case, bury itself inside the wall. In quantum mechanics, these particles can, with a small probability, tunnel to the other side, thus crossing the barrier. Here, the "ball" could, in a sense, borrow energy from its surroundings to tunnel through the wall or "roll over the hill", paying it back by making the reflected electrons more energetic than they otherwise would have been. The reason for this difference comes from the treatment of matter in quantum mechanics as having properties of both waves and particles. One interpretation of this duality involves the Heisenberg uncertainty principle, which defines a limit on how precisely the position and the momentum of a particle can be known at the same time. This implies that no solution has a probability of exactly zero: if, for example, the calculation for the particle's position were taken as a certainty (probability 1), the uncertainty in its momentum would have to be infinite.
Hence, the probability of a given particle's existence on the opposite side of an intervening barrier is non-zero, and such particles will appear on the 'other' side with a relative frequency proportional to this probability. The wave function of a particle summarises everything that can be known about a physical system.
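The non-zero barrier-crossing probability described above can be made concrete with the standard thick-barrier estimate T ≈ e^(−2κL), with decay constant κ = √(2m(V₀ − E))/ℏ inside the barrier. The following Python sketch applies it to an illustrative scenario (the particular energies and barrier width are assumptions for the example, not values from the text):

```python
import math

def transmission(E, V0, L, m=9.109e-31, hbar=1.055e-34):
    """Approximate transmission probability for a particle of energy E (J)
    through a rectangular barrier of height V0 (J) and width L (m),
    using the thick-barrier estimate T ~ exp(-2*kappa*L)."""
    if E >= V0:
        return 1.0  # classically allowed: no tunnelling needed in this crude model
    kappa = math.sqrt(2 * m * (V0 - E)) / hbar  # exponential decay constant in the barrier
    return math.exp(-2 * kappa * L)

# An electron with 5 eV facing a 10 eV, 0.5 nm barrier: classically impossible,
# quantum mechanically a small but non-zero probability.
eV = 1.602e-19
T = transmission(5 * eV, 10 * eV, 0.5e-9)
```

The probability comes out tiny (of order 10⁻⁵ here) but strictly positive, which is the whole point: classically the answer would be exactly zero.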
In physics, interference is a phenomenon in which two waves superpose to form a resultant wave of greater, lower, or the same amplitude. Constructive and destructive interference result from the interaction of waves that are correlated or coherent with each other, either because they come from the same source or because they have the same or nearly the same frequency. Interference effects can be observed with all types of waves, for example, radio, surface water waves, gravity waves, or matter waves; the resulting images or graphs are called interferograms. The principle of superposition of waves states that when two or more propagating waves of the same type are incident on the same point, the resultant amplitude at that point is equal to the vector sum of the amplitudes of the individual waves. If a crest of a wave meets a crest of another wave of the same frequency at the same point, the amplitude is the sum of the individual amplitudes—this is constructive interference. If a crest of one wave meets a trough of another wave, the amplitude is equal to the difference in the individual amplitudes—this is known as destructive interference.
Constructive interference occurs when the phase difference between the waves is an even multiple of π, whereas destructive interference occurs when the difference is an odd multiple of π. If the difference between the phases is intermediate between these two extremes, the magnitude of the displacement of the summed waves lies between the minimum and maximum values. Consider, for example, what happens when two identical stones are dropped into a still pool of water at different locations; each stone generates a circular wave propagating outwards from the point where the stone was dropped. When the two waves overlap, the net displacement at a particular point is the sum of the displacements of the individual waves. At some points these will be in phase and will produce a maximum displacement. In other places the waves will be in anti-phase, and there will be no net displacement at these points. Thus, parts of the surface will be stationary—these are seen in the figure above and to the right as stationary blue-green lines radiating from the centre.
Interference of light is a common phenomenon that can be explained classically by the superposition of waves; however, a deeper understanding of light interference requires knowledge of the wave–particle duality of light, which is due to quantum mechanics. Prime examples of light interference are the famous double-slit experiment, laser speckle, anti-reflective coatings and interferometers. Traditionally the classical wave model is taught as a basis for understanding optical interference, based on the Huygens–Fresnel principle. The above can be demonstrated in one dimension by deriving the formula for the sum of two waves. The equation for the amplitude of a sinusoidal wave traveling to the right along the x-axis is W₁ = A cos(kx − ωt), where A is the peak amplitude, k = 2π/λ is the wavenumber and ω = 2πf is the angular frequency of the wave. Suppose a second wave of the same frequency and amplitude but with a different phase is also traveling to the right: W₂ = A cos(kx − ωt + φ), where φ is the phase difference between the waves in radians.
The two waves will superpose and add: the sum of the two waves is W₁ + W₂ = A[cos(kx − ωt) + cos(kx − ωt + φ)]. Using the trigonometric identity for the sum of two cosines, cos a + cos b = 2 cos((a − b)/2) cos((a + b)/2), this can be written W₁ + W₂ = 2A cos(φ/2) cos(kx − ωt + φ/2). This represents a wave at the original frequency, traveling to the right like its components, whose amplitude is proportional to the cosine of φ/2. Constructive interference: if the phase difference is an even multiple of π: φ = …, −4π, −2π, 0, 2π, 4π, …
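The two-wave identity can be checked numerically. This short Python sketch compares the direct sum A cos(kx − ωt) + A cos(kx − ωt + φ) with the closed form 2A cos(φ/2) cos(kx − ωt + φ/2) at several points (the particular values of A, k, ω and φ are arbitrary choices for the demonstration):

```python
import math

A, k, w, phi = 1.0, 2.0, 3.0, 0.7  # arbitrary amplitude, wavenumber, frequency, phase

def superposed(x, t):
    """Direct sum of the two component waves W1 + W2."""
    return A * math.cos(k * x - w * t) + A * math.cos(k * x - w * t + phi)

def closed_form(x, t):
    """The identity: 2A cos(phi/2) cos(kx - wt + phi/2)."""
    return 2 * A * math.cos(phi / 2) * math.cos(k * x - w * t + phi / 2)

# The two expressions agree at every sampled point and time.
assert all(abs(superposed(x, t) - closed_form(x, t)) < 1e-12
           for x in (0.0, 0.3, 1.7) for t in (0.0, 0.5))
```

Setting φ = π makes cos(φ/2) = 0, so the summed wave vanishes everywhere: destructive interference falls out of the same formula.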
Quantum entanglement is a physical phenomenon that occurs when pairs or groups of particles are generated, interact, or share spatial proximity in ways such that the quantum state of each particle cannot be described independently of the state of the others, even when the particles are separated by a large distance. Measurements of physical properties such as position, momentum and polarization performed on entangled particles are found to be correlated. For example, if a pair of particles is generated in such a way that their total spin is known to be zero, and one particle is found to have clockwise spin on a certain axis, then the spin of the other particle, measured on the same axis, will be found to be counterclockwise, as is to be expected due to their entanglement. However, this behavior gives rise to seemingly paradoxical effects: any measurement of a property of a particle performs an irreversible collapse on that particle and will change the original quantum state. In the case of entangled particles, such a measurement acts on the entangled system as a whole.
Such phenomena were the subject of a 1935 paper by Albert Einstein, Boris Podolsky and Nathan Rosen, and of several papers by Erwin Schrödinger shortly thereafter, describing what came to be known as the EPR paradox. Einstein and others considered such behavior to be impossible, as it violated the local realism view of causality, and argued that the accepted formulation of quantum mechanics must therefore be incomplete. However, the counterintuitive predictions of quantum mechanics were later verified experimentally in tests where the polarization or spin of entangled particles was measured at separate locations, statistically violating Bell's inequality. In earlier tests it could not be ruled out that the result at one point could have been subtly transmitted to the remote point, affecting the outcome at the second location; however, so-called "loophole-free" Bell tests have since been performed in which the locations were separated such that communication at the speed of light would have taken longer (in one case 10,000 times longer) than the interval between the measurements.
According to some interpretations of quantum mechanics, the effect of one measurement occurs instantly. Other interpretations, which do not recognize wavefunction collapse, dispute that there is any "effect" at all. However, all interpretations agree that entanglement produces correlation between the measurements and that the mutual information between the entangled particles can be exploited, but that any transmission of information at faster-than-light speeds is impossible. Quantum entanglement has been demonstrated experimentally with photons, electrons, molecules as large as buckyballs, and even small diamonds; the utilization of entanglement in communication and computation is an active area of research. The counterintuitive predictions of quantum mechanics about correlated systems were first discussed by Albert Einstein in 1935, in a joint paper with Boris Podolsky and Nathan Rosen. In this study, the three formulated the EPR paradox, a thought experiment that attempted to show that quantum mechanical theory was incomplete.
They wrote: "We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete." However, the three scientists did not coin the word entanglement, nor did they generalize the special properties of the state they considered. Following the EPR paper, Erwin Schrödinger wrote a letter to Einstein in German in which he used the word Verschränkung "to describe the correlations between two particles that interact and then separate, as in the EPR experiment." Schrödinger shortly thereafter published a seminal paper defining and discussing the notion of "entanglement." In the paper he recognized the importance of the concept, and stated: "I would not call [entanglement] one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought." Like Einstein, Schrödinger was dissatisfied with the concept of entanglement, because it seemed to violate the speed limit on the transmission of information implicit in the theory of relativity.
Einstein famously derided entanglement as "spukhafte Fernwirkung" or "spooky action at a distance." The EPR paper generated significant interest among physicists and inspired much discussion about the foundations of quantum mechanics, but produced little other published work. So, despite the interest, the weak point in EPR's argument was not discovered until 1964, when John Stewart Bell proved that one of their key assumptions, the principle of locality, as applied to the kind of hidden-variables interpretation hoped for by EPR, was mathematically inconsistent with the predictions of quantum theory. Bell demonstrated an upper limit, expressed in Bell's inequality, on the strength of correlations that can be produced in any theory obeying local realism, and showed that quantum theory predicts violations of this limit for certain entangled systems. His inequality is experimentally testable, and there have been numerous relevant experiments, starting with the pioneering work of Stuart Freedman and John Clauser in 1972 and Alain Aspect's experiments in 1982, all of which have shown agreement with quantum mechanics rather than the principle of local realism.
However, each of these experiments had left open at least one loophole by which it was possible to question the validity of the results. In 2015 an experiment was performed that closed both the detection and locality loopholes, and it was heralded as "loophole-free".
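The size of the quantum violation of Bell's inequality can be computed directly. For spin measurements on a singlet pair along directions at angles a and b, quantum mechanics predicts the correlation E(a, b) = −cos(a − b); plugging the standard CHSH measurement angles into the CHSH combination gives 2√2, above the local-realist bound of 2. A minimal Python sketch (the angle choices are the textbook-standard ones, used here as an illustration):

```python
import math

def E(a, b):
    """Quantum correlation for spin measurements at angles a and b (radians)
    on a singlet pair: E(a, b) = -cos(a - b)."""
    return -math.cos(a - b)

# Standard CHSH settings: Alice uses a, a'; Bob uses b, b'.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination; local realism bounds |S| by 2.
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
```

Here S equals 2√2 ≈ 2.83, the maximal quantum violation (Tsirelson's bound), which is what the Freedman–Clauser and Aspect experiments and the later loophole-free tests confirmed.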
The Aharonov–Bohm effect, sometimes called the Ehrenberg–Siday–Aharonov–Bohm effect, is a quantum mechanical phenomenon in which an electrically charged particle is affected by an electromagnetic potential, despite being confined to a region in which both the magnetic field B and the electric field E are zero. The underlying mechanism is the coupling of the electromagnetic potential with the complex phase of a charged particle's wave function, and the Aharonov–Bohm effect is accordingly illustrated by interference experiments. The most commonly described case, sometimes called the Aharonov–Bohm solenoid effect, takes place when the wave function of a charged particle passing around a long solenoid experiences a phase shift as a result of the enclosed magnetic field, despite the magnetic field being negligible in the region through which the particle passes and the particle's wavefunction being negligible inside the solenoid. This phase shift has been observed experimentally. There are also magnetic Aharonov–Bohm effects on bound energies and scattering cross sections, but these cases have not been experimentally tested.
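For the solenoid case, the standard result is that a charge q encircling an enclosed magnetic flux Φ picks up the phase difference Δφ = qΦ/ℏ, even though it never passes through the field itself. A small Python sketch of this relation (the numerical constants are the usual SI values; the choice of one superconducting flux quantum h/2e as the example flux is an illustrative assumption):

```python
import math

def ab_phase(flux_Wb, q=1.602e-19, hbar=1.055e-34):
    """Aharonov-Bohm phase difference for a charge q (C) whose two paths
    enclose magnetic flux `flux_Wb` (Wb): delta_phi = q * Phi / hbar."""
    return q * flux_Wb / hbar

# Example: a single electron charge encircling one superconducting flux
# quantum h/(2e) acquires a phase shift of exactly pi.
h = 2 * math.pi * 1.055e-34
phi0 = h / (2 * 1.602e-19)
dphi = ab_phase(phi0)
```

A phase shift of π turns constructive interference into destructive interference, which is why the effect shows up so cleanly in electron interferometry.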
An electric Aharonov–Bohm phenomenon was also predicted, in which a charged particle is affected by regions with different electrical potentials but zero electric field, but this has no experimental confirmation yet. A separate "molecular" Aharonov–Bohm effect was proposed for nuclear motion in multiply connected regions, but this has been argued to be a different kind of geometric phase, as it is "neither nonlocal nor topological", depending only on local quantities along the nuclear path. Werner Ehrenberg and Raymond E. Siday first predicted the effect in 1949, and Yakir Aharonov and David Bohm published their analysis in 1959. After publication of the 1959 paper, Bohm was informed of Ehrenberg and Siday's work, which was acknowledged and credited in Bohm and Aharonov's subsequent 1961 paper. The effect was confirmed experimentally, with a large error, while Bohm was still alive. By the time the error was down to a respectable value, Bohm had died. In the 18th and 19th centuries, physics was dominated by Newtonian dynamics, with its emphasis on forces.
Electromagnetic phenomena were elucidated by a series of experiments involving the measurement of forces between charges and magnets in various configurations. A description arose according to which charges and magnets acted as local sources of propagating force fields, which acted on other charges and currents locally through the Lorentz force law. In this framework, because one of the observed properties of the electric field was that it was irrotational, and one of the observed properties of the magnetic field was that it was divergenceless, it was possible to express an electrostatic field as the gradient of a scalar potential and a stationary magnetic field as the curl of a vector potential. The language of potentials generalised seamlessly to the dynamic case but, since all physical effects were describable in terms of the fields, which were the derivatives of the potentials, potentials were not uniquely determined by physical effects: potentials were only defined up to an arbitrary additive constant in the electrostatic potential and an arbitrary irrotational term added to the stationary magnetic vector potential.
The Aharonov–Bohm effect is important conceptually because it bears on three issues apparent in the recasting of classical electromagnetic theory as a gauge theory, which before the advent of quantum mechanics could be argued to be a mathematical reformulation with no physical consequences. The Aharonov–Bohm thought experiments and their experimental realization imply that the issues were not just philosophical; the first of these issues is whether potentials are "physical" or just a convenient tool for calculating force fields. For reasons like these, the Aharonov–Bohm effect was chosen by New Scientist magazine as one of the "seven wonders of the quantum world". It is argued that the Aharonov–Bohm effect illustrates the physicality of electromagnetic potentials, Φ and A, in quantum mechanics. Classically it was possible to argue that only the electromagnetic fields are physical, while the electromagnetic potentials are purely mathematical constructs that, due to gauge freedom, are not unique for a given electromagnetic field.
However, Vaidman has challenged this interpretation by showing that the Aharonov–Bohm effect can be explained without the use of potentials, so long as one gives a full quantum mechanical treatment to the source charges that produce the electromagnetic field. According to this view, the potential in quantum mechanics is just as physical as it was classically. Aharonov and Rohrlich responded that the effect may be due to a local gauge potential or due to non-local gauge-invariant fields. Two papers published in the journal Physical Review A in 2017 have demonstrated a quantum mechanical solution for the system; their analysis shows that the phase shift can be viewed as generated by the solenoid's vector potential acting on the electron, or the electron's vector potential acting on the solenoid, or the electron and solenoid currents acting on the quantized vector potential. The Aharonov–Bohm effect illustrates that the Lagrangian approach to dynamics, based on energies, is not just a computational aid to the Newtonian approach, based on forces.
Thus the Aharonov–Bohm effect validates the view that forces are an incomplete way to formulate physics, and that potential energies must be used instead.
The Schrödinger equation is a linear partial differential equation that describes the wave function, or state function, of a quantum-mechanical system. It is a key result in quantum mechanics, and its discovery was a significant landmark in the development of the subject. The equation is named after Erwin Schrödinger, who derived it in 1925 and published it in 1926, forming the basis for the work that resulted in his Nobel Prize in Physics in 1933. In classical mechanics, Newton's second law is used to make a mathematical prediction as to what path a given physical system will take over time, following a set of known initial conditions. Solving this equation gives the position and the momentum of the physical system as a function of the external force F on the system; those two parameters are sufficient to describe its state at each time instant. In quantum mechanics, the analogue of Newton's law is Schrödinger's equation. The concept of a wave function is a fundamental postulate of quantum mechanics.
Using these postulates, Schrödinger's equation can be derived from the fact that the time-evolution operator must be unitary, and must therefore be generated by the exponential of a self-adjoint operator, the quantum Hamiltonian. This derivation is explained below. In the Copenhagen interpretation of quantum mechanics, the wave function is the most complete description that can be given of a physical system. Solutions to Schrödinger's equation describe not only molecular and subatomic systems, but also macroscopic systems, possibly even the whole universe. Schrödinger's equation is central to all applications of quantum mechanics, including quantum field theory, which combines special relativity with quantum mechanics. Theories of quantum gravity, such as string theory, do not modify Schrödinger's equation. The Schrödinger equation is not the only way to study quantum mechanical systems and make predictions. Other formulations of quantum mechanics include matrix mechanics, introduced by Werner Heisenberg, and the path integral formulation, developed chiefly by Richard Feynman.
Paul Dirac incorporated matrix mechanics and the Schrödinger equation into a single formulation. The form of the Schrödinger equation depends on the physical situation. The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time: iℏ (d/dt)|Ψ(t)⟩ = Ĥ|Ψ(t)⟩, where i is the imaginary unit, ℏ = h/2π is the reduced Planck constant, Ψ is the state vector of the quantum system, t is time, and Ĥ is the Hamiltonian operator. The position-space wave function of the quantum system is nothing but the components in the expansion of the state vector in terms of the position eigenvectors |r⟩; it is a scalar function, expressed as Ψ(r, t) = ⟨r|Ψ⟩. The momentum-space wave function can similarly be defined as Ψ̃(p, t) = ⟨p|Ψ⟩, where |p⟩ is the momentum eigenvector. The most famous example is the nonrelativistic Schrödinger equation for the wave function in position space Ψ(r, t) of a single particle subject to a potential V(r), such as that due to an electric field: iℏ ∂Ψ/∂t = [−(ℏ²/2m)∇² + V(r)]Ψ, where m is the particle's mass and ∇² is the Laplacian.
This is a diffusion equation, but unlike the heat equation, it is also a wave equation, given the imaginary unit present in the transient term. The term "Schrödinger equation" can refer both to the general equation and to the specific nonrelativistic version. The general equation is indeed quite general, used throughout quantum mechanics, for everything from the Dirac equation to quantum field theory, by plugging in diverse expressions for the Hamiltonian. The specific nonrelativistic version is a classical approximation to reality and yields accurate results in many situations, but only to a certain extent. To apply the Schrödinger equation, write down the Hamiltonian for the system, accounting for the kinetic and potential energies of the particles constituting the system, then insert it into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system. The time-dependent Schrödinger equation described above predicts that wave functions can form standing waves, called stationary states.
These states are important, as their individual study simplifies the task of solving the time-dependent Schrödinger equation for any state. Stationary states can be described by a simpler form of the Schrödinger equation, the time-independent Schrödinger equation.
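The stationary states of a simple system can be found numerically by discretizing the time-independent Schrödinger equation. The sketch below does this for a particle in an infinite square well in natural units (ℏ = m = 1, well width L = 1, an illustrative convention chosen here, not one from the text), replacing the second derivative by a finite difference and diagonalizing the resulting matrix:

```python
import numpy as np

# Finite-difference sketch of the time-independent Schrodinger equation
# H psi = E psi for an infinite square well (V = 0 inside, psi = 0 at walls),
# in natural units hbar = m = 1, well width L = 1.
N = 400                                  # number of interior grid points
dx = 1.0 / (N + 1)

# H = -(1/2) d^2/dx^2 discretized: tridiagonal matrix with 1/dx^2 on the
# diagonal and -1/(2 dx^2) on the off-diagonals.
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)[:3]     # three lowest eigenvalues
exact = np.array([(n * np.pi) ** 2 / 2 for n in (1, 2, 3)])
```

The numerical levels converge to the textbook result Eₙ = n²π²/2 (in these units) as the grid is refined, illustrating how stationary states arise as eigenfunctions of the Hamiltonian.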
History of quantum mechanics
The history of quantum mechanics is a fundamental part of the history of modern physics. Quantum mechanics' history, as it interlaces with the history of quantum chemistry, began with a number of different scientific discoveries, among them the 1838 discovery of cathode rays by Michael Faraday. In 1905, in order to explain the photoelectric effect reported by Heinrich Hertz in 1887, Albert Einstein postulated, consistently with Max Planck's quantum hypothesis, that light itself is made of individual quantum particles, which in 1926 came to be called photons by Gilbert N. Lewis. The photoelectric effect was observed upon shining light of particular wavelengths on certain materials, such as metals, which caused electrons to be ejected from those materials only if the light quantum energy was greater than the work function of the metal's surface. The phrase "quantum mechanics" was coined by the group of physicists including Max Born, Werner Heisenberg and Wolfgang Pauli at the University of Göttingen in the early 1920s, and was first used in Born's 1924 paper "Zur Quantenmechanik".
In the years to follow, this theoretical basis began to be applied to chemical structure and bonding. Ludwig Boltzmann suggested in 1877 that the energy levels of a physical system, such as a molecule, could be discrete; he was also a founder of the Austrian Mathematical Society, together with the mathematicians Gustav von Escherich and Emil Müller. Boltzmann's rationale for the presence of discrete energy levels in molecules such as those of iodine gas had its origins in his statistical thermodynamics and statistical mechanics theories and was backed up by mathematical arguments, as would be the case twenty years later with the first quantum theory put forward by Max Planck. In 1900, the German physicist Max Planck reluctantly introduced the idea that energy is quantized in order to derive a formula for the observed frequency dependence of the energy emitted by a black body, called Planck's law, that included a Boltzmann distribution. Planck's law can be stated as follows: I(ν, T) = (2hν³/c²) · 1/(e^(hν/kT) − 1), where I is the energy per unit time radiated per unit area of emitting surface in the normal direction per unit solid angle per unit frequency by a black body at temperature T.
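Planck's law is easy to evaluate directly. The Python sketch below implements the formula above alongside the earlier Wien approximation I ≈ (2hν³/c²) e^(−hν/kT), which is accurate when hν ≫ kT; the particular frequency and temperature chosen for comparison are illustrative values, not from the text:

```python
import math

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23  # Planck const, speed of light, Boltzmann const (SI)

def planck(nu, T):
    """Planck's law: spectral radiance I(nu, T) = (2 h nu^3 / c^2) / (exp(h nu / kB T) - 1)."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

def wien(nu, T):
    """Wien approximation, valid when h nu >> kB T: drops the -1 in the denominator."""
    return (2 * h * nu**3 / c**2) * math.exp(-h * nu / (kB * T))

# Green light (~5.5e14 Hz) radiated by a body at roughly the Sun's surface
# temperature: here h nu / kB T is about 4.5, so Wien is already close to Planck.
nu, T = 5.5e14, 5800.0
```

At this frequency the two formulas differ by only about one percent; at low frequencies (hν ≪ kT) the Wien form fails badly, which is exactly the regime that forced Planck to his full law.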
The earlier Wien approximation may be derived from Planck's law by assuming hν ≫ kT. Moreover, the application of Planck's quantum theory to the electron allowed Ștefan Procopiu in 1911–1913, and subsequently Niels Bohr in 1913, to calculate the magnetic moment of the electron, later called the "magneton". In 1905, Einstein explained the photoelectric effect by postulating that light, or more generally all electromagnetic radiation, can be divided into a finite number of "energy quanta" that are localized points in space. From the introduction section of his March 1905 quantum paper, "On a heuristic viewpoint concerning the emission and transformation of light", Einstein states: "According to the assumption to be contemplated here, when a light ray is spreading from a point, the energy is not distributed continuously over ever-increasing spaces, but consists of a finite number of 'energy quanta' that are localized in points in space, move without dividing, and can be absorbed or generated only as a whole." This statement has been called the most revolutionary sentence written by a physicist of the twentieth century.
These energy quanta later came to be called "photons", a term introduced by Gilbert N. Lewis in 1926. The idea that each photon had to consist of energy in terms of quanta was a remarkable achievement. In 1913, Bohr explained the spectral lines of the hydrogen atom, again by using quantization, in his paper of July 1913, On the Constitution of Atoms and Molecules. These theories, though successful, were strictly phenomenological: during this time, there was no rigorous justification for quantization, aside from Henri Poincaré's discussion of Planck's theory in his 1912 paper Sur la théorie des quanta.
In quantum mechanics, the uncertainty principle is any of a variety of mathematical inequalities asserting a fundamental limit to the precision with which certain pairs of physical properties of a particle, known as complementary variables or canonically conjugate variables, such as position x and momentum p, can be known. First introduced in 1927 by the German physicist Werner Heisenberg, it states that the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa. The formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard later that year and by Hermann Weyl in 1928: σx σp ≥ ℏ/2, where ℏ is the reduced Planck constant, h/2π. The uncertainty principle has often been confused with a related effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the system, that is, without changing something in the system. Heisenberg himself utilized such an observer effect at the quantum level as a physical "explanation" of quantum uncertainty.
It has since become clearer that the uncertainty principle is inherent in the properties of all wave-like systems, and that it arises in quantum mechanics simply due to the matter-wave nature of all quantum objects. Thus, the uncertainty principle states a fundamental property of quantum systems and is not a statement about the observational success of current technology. It must be emphasized that measurement does not mean only a process in which a physicist-observer takes part, but rather any interaction between classical and quantum objects, regardless of any observer. Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program; these include, for example, tests of number–phase uncertainty relations in superconducting or quantum optics systems. Applications dependent on the uncertainty principle for their operation include low-noise technology such as that required in gravitational wave interferometers.
The uncertainty principle is not readily apparent on the macroscopic scales of everyday experience, so it is helpful to demonstrate how it applies to more easily understood physical situations. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix mechanics picture formulates it in a way that generalizes more easily. Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another. A nonzero function and its Fourier transform cannot both be sharply localized. A similar tradeoff between the variances of Fourier conjugates arises in all systems underlain by Fourier analysis, for example in sound waves: a pure tone is a sharp spike at a single frequency, while its Fourier transform gives the shape of the sound wave in the time domain, a completely delocalized sine wave.
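The Fourier tradeoff can be demonstrated numerically: a Gaussian wave packet of position-space width σ has a Fourier transform of width 1/(2σ), so the product of the two widths is pinned at 1/2. The following Python sketch checks this with a discrete Fourier transform (the packet width and grid are arbitrary choices for the demonstration):

```python
import numpy as np

# A Gaussian wave packet and its Fourier transform: the product of their
# widths is fixed at sigma_x * sigma_k = 1/2, the Fourier-analysis fact
# behind the position-momentum uncertainty relation (with p = hbar k).
sigma = 0.7
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))        # chosen so |psi|^2 has std dev sigma

prob_x = np.abs(psi)**2
prob_x /= prob_x.sum() * dx                  # normalize the position distribution
sigma_x = np.sqrt((x**2 * prob_x).sum() * dx)

k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))  # angular wavenumbers
dk = k[1] - k[0]
phi = np.fft.fftshift(np.fft.fft(psi))       # discrete Fourier transform of psi
prob_k = np.abs(phi)**2
prob_k /= prob_k.sum() * dk                  # normalize the wavenumber distribution
sigma_k = np.sqrt((k**2 * prob_k).sum() * dk)
```

Numerically sigma_x · sigma_k comes out at 1/2 to high accuracy; squeezing the packet in x (smaller sigma) fattens it in k by exactly the compensating factor, and a Gaussian is the shape that saturates the bound.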
In quantum mechanics, the two key points are that the position of the particle takes the form of a matter wave, and that momentum is its Fourier conjugate, assured by the de Broglie relation p = ħk, where k is the wavenumber. In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables is subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value. For example, if a measurement of an observable A is performed, then the system is in a particular eigenstate Ψ of that observable. However, the particular eigenstate of the observable A need not be an eigenstate of another observable B: if it is not, it does not have a unique associated measurement value, as the system is not in an eigenstate of that observable. According to the de Broglie hypothesis, every object in the universe is a wave, a situation which gives rise to this phenomenon. The position of the particle is described by a wave function Ψ.
The time-independent wave function of a single-mode plane wave of wavenumber k0 or momentum p0 is ψ(x) ∝ e^(i k0 x) = e^(i p0 x / ℏ). The Born rule states that this should be interpreted as a probability amplitude, in the sense that the probability of finding the particle between a and b is P = ∫ₐᵇ |ψ(x)|² dx. In the case of the single-mode plane wave, |ψ(x)|² is a uniform distribution. In other words, the particle position is extremely uncertain, in the sense that it could be essentially anywhere.
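The Born rule integral has a simple closed form for a normalized Gaussian wave packet, the opposite extreme from the fully delocalized plane wave. A short Python sketch (the unit width s = 1 is an illustrative assumption), using the error function to evaluate P = ∫ₐᵇ |ψ|² dx:

```python
import math

# Born-rule sketch: for a normalized Gaussian wave packet with
# |psi(x)|^2 = exp(-x^2 / (2 s^2)) / (s * sqrt(2 * pi)),
# the probability of finding the particle in [a, b] follows from erf.
def prob_between(a, b, s=1.0):
    """P = integral of |psi|^2 from a to b for the Gaussian packet above."""
    cdf = lambda x: 0.5 * (1 + math.erf(x / (s * math.sqrt(2))))
    return cdf(b) - cdf(a)

# Probability of finding the particle within one width s of the centre.
p = prob_between(-1.0, 1.0)
```

The one-width interval captures about 68% of the probability, and integrating over all x gives 1, as the Born rule requires of a normalized wave function.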