The Schrödinger equation is a linear partial differential equation that describes the wave function, or state function, of a quantum-mechanical system. It is a key result in quantum mechanics, and its discovery was a significant landmark in the development of the subject. The equation is named after Erwin Schrödinger, who derived it in 1925 and published it in 1926, forming the basis for the work that resulted in his Nobel Prize in Physics in 1933. In classical mechanics, Newton's second law is used to make a mathematical prediction of the path a given physical system will take over time, given a set of known initial conditions. Solving that equation yields the position and momentum of the physical system as functions of time under the external force F on the system; these two quantities are sufficient to describe its state at each instant. In quantum mechanics, the analogue of Newton's law is Schrödinger's equation, and the concept of a wave function is a fundamental postulate of quantum mechanics.
Using these postulates, Schrödinger's equation can be derived from the fact that the time-evolution operator must be unitary and must therefore be generated by the exponential of a self-adjoint operator, the quantum Hamiltonian. This derivation is explained below. In the Copenhagen interpretation of quantum mechanics, the wave function is the most complete description that can be given of a physical system. Solutions to Schrödinger's equation describe not only molecular and subatomic systems, but also macroscopic systems, possibly even the whole universe. Schrödinger's equation is central to all applications of quantum mechanics, including quantum field theory, which combines special relativity with quantum mechanics. Theories of quantum gravity, such as string theory, do not modify Schrödinger's equation. The Schrödinger equation is not the only way to study quantum-mechanical systems and make predictions. Other formulations of quantum mechanics include matrix mechanics, introduced by Werner Heisenberg, and the path integral formulation, developed chiefly by Richard Feynman.
Paul Dirac incorporated matrix mechanics and the Schrödinger equation into a single formulation. The form of the Schrödinger equation depends on the physical situation. The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time: iℏ (d/dt)|Ψ(t)⟩ = Ĥ|Ψ(t)⟩, where i is the imaginary unit, ℏ = h/2π is the reduced Planck constant, |Ψ⟩ is the state vector of the quantum system, t is time, and Ĥ is the Hamiltonian operator. The position-space wave function of the quantum system is nothing but the components in the expansion of the state vector in terms of the position eigenvectors |r⟩; it is a scalar function, expressed as Ψ(r, t) = ⟨r|Ψ⟩. The momentum-space wave function can be defined as Ψ̃(p, t) = ⟨p|Ψ⟩, where |p⟩ is the momentum eigenvector. The most famous example is the nonrelativistic Schrödinger equation for the position-space wave function Ψ of a single particle subject to a potential V, such as that due to an electric field: iℏ ∂Ψ/∂t = [−(ℏ²/2m)∇² + V]Ψ, where m is the particle's mass and ∇² is the Laplacian.
This has the form of a diffusion equation, but unlike the heat equation it behaves as a wave equation, owing to the imaginary unit in the transient term. The term "Schrödinger equation" can refer either to the general equation or to the specific nonrelativistic version. The general equation is indeed quite general, used throughout quantum mechanics for everything from the Dirac equation to quantum field theory, by plugging in diverse expressions for the Hamiltonian. The specific nonrelativistic version is an approximation that yields accurate results in many situations, but only to a certain extent. To apply the Schrödinger equation, write down the Hamiltonian for the system, accounting for the kinetic and potential energies of the particles constituting the system, and insert it into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system. The time-dependent Schrödinger equation described above predicts that wave functions can form standing waves, called stationary states.
These states are important in their own right, and their study simplifies the task of solving the time-dependent Schrödinger equation for any state. Stationary states can be described by a simpler form of the Schrödinger equation, the time-independent Schrödinger equation.
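A stationary state evolves in time only by an overall phase factor, which is why its observable properties are "standing". The following minimal sketch (not from the article; natural units and the toy grid values are assumptions for illustration) applies the phase factor exp(−iEt/ℏ) to a sample of a spatial wave function and checks that the probability density |Ψ|² is unchanged:

```python
import cmath

# Assumption for this illustration: natural units with hbar = 1.
HBAR = 1.0

def evolve(psi_x, energy, t):
    """Evolve a stationary state of energy E: Psi(x, t) = psi(x) exp(-iEt/hbar)."""
    phase = cmath.exp(-1j * energy * t / HBAR)
    return [amp * phase for amp in psi_x]

# A toy (unnormalized) spatial profile sampled on a few grid points.
psi0 = [0.2, 0.5, 0.8, 0.5, 0.2]
psi_t = evolve(psi0, energy=2.0, t=1.3)

# The probability density |Psi|^2 is the same at t = 0 and t = 1.3.
density0 = [abs(a) ** 2 for a in psi0]
density_t = [abs(a) ** 2 for a in psi_t]
print(all(abs(d0 - dt) < 1e-12 for d0, dt in zip(density0, density_t)))  # True
```

This is exactly the sense in which stationary states simplify the time-dependent problem: once the spatial profile and energy are known, the full time dependence is a single complex phase.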
Wave–particle duality is the concept in quantum mechanics that every particle or quantum entity may be described in terms not only of particles, but also of waves. It expresses the inability of the classical concepts "particle" or "wave" to fully describe the behaviour of quantum-scale objects. As Albert Einstein wrote: It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality. Through the work of Max Planck, Albert Einstein, Louis de Broglie, Arthur Compton, Niels Bohr, and many others, current scientific theory holds that all particles exhibit a wave nature and vice versa; this phenomenon has been verified not only for elementary particles, but also for compound particles like atoms and molecules. For macroscopic particles, because of their extremely short wavelengths, wave properties usually cannot be detected. Although the use of wave–particle duality has worked well in physics, the meaning or interpretation has not been satisfactorily resolved.
Bohr regarded the "duality paradox" as a fundamental, metaphysical fact of nature. A given kind of quantum object will exhibit sometimes wave, sometimes particle, character in different physical settings; he saw such duality as one aspect of the concept of complementarity. Bohr regarded renunciation of the cause-effect relation, or complementarity, of the space-time picture as essential to the quantum-mechanical account. Werner Heisenberg considered the question further. He saw the duality as present for all quantum entities, but not quite in the usual quantum-mechanical account considered by Bohr. He saw it in what is called second quantization, which generates a new concept of fields existing in ordinary space-time, causality still being visualizable. Classical field values are replaced by a new kind of field value, as considered in quantum field theory. Turning the reasoning around, ordinary quantum mechanics can be deduced as a specialized consequence of quantum field theory. Democritus argued that all things in the universe, including light, are composed of indivisible sub-components.
At the beginning of the 11th century, the Arab scientist Ibn al-Haytham wrote the first comprehensive Book of Optics, describing reflection and the operation of a pinhole via rays of light traveling from the point of emission to the eye; he asserted that these rays were composed of particles of light. In 1630, René Descartes popularized the opposing wave description in his treatise on light, The World, showing that the behavior of light could be re-created by modeling wave-like disturbances in a universal medium, i.e. the luminiferous aether. Beginning in 1670 and progressing over three decades, Isaac Newton developed and championed his corpuscular theory, arguing that the perfectly straight lines of reflection demonstrated light's particle nature, since only particles could travel in such straight lines. He explained refraction by positing that particles of light accelerated laterally upon entering a denser medium. Around the same time, Newton's contemporaries Robert Hooke and Christiaan Huygens, and later Augustin-Jean Fresnel, mathematically refined the wave viewpoint, showing that if light traveled at different speeds in different media, refraction could be explained as the medium-dependent propagation of light waves.
The resulting Huygens–Fresnel principle was extremely successful at reproducing light's behavior and was subsequently supported by Thomas Young's discovery of wave interference of light in his double-slit experiment of 1801. The wave view did not immediately displace the ray and particle view, but began to dominate scientific thinking about light in the mid-19th century, since it could explain polarization phenomena that the alternatives could not. James Clerk Maxwell discovered that he could apply his previously discovered Maxwell's equations, along with a slight modification, to describe self-propagating waves of oscillating electric and magnetic fields; it quickly became apparent that visible light, ultraviolet light, and infrared light were all electromagnetic waves of differing frequency. In 1901, Max Planck published an analysis that succeeded in reproducing the observed spectrum of light emitted by a glowing object. To accomplish this, Planck had to make a mathematical assumption of quantized energy of the oscillators, i.e. the atoms of the black body that emit radiation.
Einstein later proposed that electromagnetic radiation itself is quantized, not merely the energy of radiating atoms. Black-body radiation, the emission of electromagnetic energy due to an object's heat, could not be explained from classical arguments alone. The equipartition theorem of classical mechanics, the basis of all classical thermodynamic theories, stated that an object's energy is partitioned equally among the object's vibrational modes. That thermal objects emit light had long been known, and since light was known to be waves of electromagnetism, physicists hoped to describe this emission via classical laws; this became known as the black-body problem. Since the equipartition theorem worked so well in describing the vibrational modes of the thermal object itself, it was natural to assume that it would perform equally well in describing the radiative emission of such objects. But applying the same reasoning to the electromagnetic emission was not so successful: if each mode received an equal partition of energy, the short-wavelength modes would consume all the energy.
This became clear when the predicted black-body spectrum was plotted: classical theory implied ever-growing emission at short wavelengths, a failure that came to be known as the ultraviolet catastrophe.
In physics, interference is a phenomenon in which two waves superpose to form a resultant wave of greater, lower, or the same amplitude. Constructive and destructive interference result from the interaction of waves that are correlated or coherent with each other, either because they come from the same source or because they have the same or nearly the same frequency. Interference effects can be observed with all types of waves, for example radio waves, surface water waves, gravity waves, or matter waves; the resulting images or graphs are called interferograms. The principle of superposition of waves states that when two or more propagating waves of the same type are incident on the same point, the resultant amplitude at that point is equal to the vector sum of the amplitudes of the individual waves. If a crest of a wave meets a crest of another wave of the same frequency at the same point, the amplitude is the sum of the individual amplitudes; this is constructive interference. If a crest of one wave meets a trough of another wave, the amplitude is equal to the difference between the individual amplitudes; this is known as destructive interference.
Constructive interference occurs when the phase difference between the waves is an even multiple of π, whereas destructive interference occurs when the difference is an odd multiple of π. If the difference between the phases is intermediate between these two extremes, the magnitude of the displacement of the summed waves lies between the minimum and maximum values. Consider, for example, what happens when two identical stones are dropped into a still pool of water at different locations. Each stone generates a circular wave propagating outwards from the point where the stone was dropped. When the two waves overlap, the net displacement at a particular point is the sum of the displacements of the individual waves. At some points these will be in phase and will produce a maximum displacement. At other points the waves will be in anti-phase, and there will be no net displacement at these points. Thus, parts of the surface will be stationary; these are seen in the figure above and to the right as stationary blue-green lines radiating from the centre.
Interference of light is a common phenomenon that can be explained classically by the superposition of waves; however, a deeper understanding of light interference requires knowledge of the wave–particle duality of light, due to quantum mechanics. Prime examples of light interference are the famous double-slit experiment, laser speckle, anti-reflective coatings, and interferometers. Traditionally the classical wave model is taught as a basis for understanding optical interference, based on the Huygens–Fresnel principle. The above can be demonstrated in one dimension by deriving the formula for the sum of two waves. The equation for the amplitude of a sinusoidal wave traveling to the right along the x-axis is W₁ = A cos(kx − ωt), where A is the peak amplitude, k = 2π/λ is the wavenumber and ω = 2πf is the angular frequency of the wave. Suppose a second wave of the same frequency and amplitude but with a different phase is also traveling to the right: W₂ = A cos(kx − ωt + φ), where φ is the phase difference between the waves in radians.
The two waves will superpose and add: the sum of the two waves is W₁ + W₂ = A[cos(kx − ωt) + cos(kx − ωt + φ)]. Using the trigonometric identity for the sum of two cosines, cos a + cos b = 2 cos((a − b)/2) cos((a + b)/2), this can be written W₁ + W₂ = 2A cos(φ/2) cos(kx − ωt + φ/2). This represents a wave at the original frequency, traveling to the right like its components, whose amplitude is proportional to the cosine of φ/2. Constructive interference: if the phase difference is an even multiple of π: φ = …, −4π, −2π, 0, 2π, 4π, …
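The derivation above can be verified numerically. This sketch (not from the article; A, k, ω, and the sample points are arbitrary illustrative values) compares the direct sum of the two cosines against the closed form 2A cos(φ/2) cos(kx − ωt + φ/2), and checks the constructive and destructive limiting cases:

```python
import math

# Arbitrary illustrative wave parameters.
A = 1.5
k = 2 * math.pi / 0.5      # wavenumber for wavelength 0.5
w = 2 * math.pi * 3.0      # angular frequency for f = 3 Hz

def superpose(x, t, phi):
    """Direct sum of the two component waves W1 + W2."""
    return A * math.cos(k * x - w * t) + A * math.cos(k * x - w * t + phi)

def closed_form(x, t, phi):
    """Closed form 2 A cos(phi/2) cos(k x - w t + phi/2)."""
    return 2 * A * math.cos(phi / 2) * math.cos(k * x - w * t + phi / 2)

# The two expressions agree for assorted phase differences.
x, t = 0.123, 0.456
for phi in (0.0, math.pi / 3, math.pi, 2 * math.pi):
    assert abs(superpose(x, t, phi) - closed_form(x, t, phi)) < 1e-12

# Constructive interference (phi = 0): amplitude factor doubles to 2A = 3.0.
print(2 * A * math.cos(0.0))  # 3.0
# Destructive interference (phi = pi): amplitude factor vanishes.
print(abs(2 * A * math.cos(math.pi / 2)) < 1e-12)  # True
```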
The Stern–Gerlach experiment demonstrated that the spatial orientation of angular momentum is quantized. Thus an atomic-scale system was shown to have intrinsically quantum properties. In the original experiment, silver atoms were sent through a spatially varying magnetic field, which deflected them before they struck a detector screen, such as a glass slide. Particles with non-zero magnetic moment are deflected from a straight path by the magnetic field gradient. The screen reveals discrete points of accumulation, rather than a continuous distribution, owing to the atoms' quantized spin. This experiment was decisive in convincing physicists of the reality of angular-momentum quantization in all atomic-scale systems. The experiment was first conducted by the German physicists Otto Stern and Walther Gerlach in 1922. The Stern–Gerlach experiment involves sending a beam of silver atoms through an inhomogeneous magnetic field and observing their deflection. The results show that particles possess an intrinsic angular momentum, analogous to the angular momentum of a classically spinning object, but one that takes only certain quantized values.
Another important result is that only one component of a particle's spin can be measured at one time, meaning that measuring the spin along the z-axis destroys information about the particle's spin along the x and y axes. The experiment is conducted using electrically neutral particles such as silver atoms; this avoids the large deflection in the path of a charged particle moving through a magnetic field and allows spin-dependent effects to dominate. If the particle is treated as a classical spinning magnetic dipole, it will precess in a magnetic field because of the torque that the magnetic field exerts on the dipole. If it moves through a homogeneous magnetic field, the forces exerted on opposite ends of the dipole cancel each other out and the trajectory of the particle is unaffected. However, if the magnetic field is inhomogeneous, the force on one end of the dipole will be greater than the opposing force on the other end, so that there is a net force which deflects the particle's trajectory.
If the particles were classical spinning objects, one would expect the distribution of their spin angular momentum vectors to be random and continuous. Each particle would be deflected by an amount proportional to its magnetic moment, producing a smooth density distribution on the detector screen. Instead, the particles passing through the Stern–Gerlach apparatus are deflected either up or down by a specific amount. This was a measurement of the quantum observable now known as spin angular momentum, and it demonstrated the possible outcomes of a measurement whose observable has a discrete set of values, or point spectrum. Although some discrete quantum phenomena, such as atomic spectra, were observed much earlier, the Stern–Gerlach experiment allowed scientists to observe separation between discrete quantum states for the first time in the history of science. By now, it is known that quantum angular momentum of any kind has a discrete spectrum, which is sometimes expressed as "angular momentum is quantized".
If the experiment is conducted using charged particles like electrons, there will be a Lorentz force that tends to bend the trajectory in a circle. This force can be cancelled by an electric field of appropriate magnitude oriented transverse to the charged particle's path. Electrons are spin-1/2 particles. These have only two possible spin angular momentum values measured along any axis, +ℏ/2 or −ℏ/2, a purely quantum-mechanical phenomenon. Because its magnitude is always the same, it is regarded as an intrinsic property of electrons and is sometimes known as "intrinsic angular momentum". If one measures the spin along a vertical axis, electrons are described as "spin up" or "spin down", based on the magnetic moment pointing up or down, respectively. To mathematically describe the experiment with spin-1/2 particles, it is easiest to use Dirac's bra–ket notation. As the particles pass through the Stern–Gerlach device, they are deflected either up or down and are observed by the detector, which resolves either spin up or spin down.
These outcomes are described by the angular momentum quantum number j, which can take on one of the two possible allowed values, +ℏ/2 or −ℏ/2. The act of observing the angular momentum along the z-axis corresponds to the operator J_z. In mathematical terms, the initial state of the particles is |ψ⟩ = c₁|ψ_{j=+ℏ/2}⟩ + c₂|ψ_{j=−ℏ/2}⟩, where the constants c₁ and c₂ are complex numbers; this initial state spin can point in any direction. The squared magnitudes |c₁|² and |c₂|² give the probabilities of obtaining each of the two measurement outcomes.
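The state decomposition above can be sketched numerically via the Born rule. In this illustration (a sketch, not from the article; the particular amplitudes c₁ and c₂ are arbitrary choices), the squared magnitudes of the expansion coefficients give the probabilities of the detector resolving "spin up" or "spin down" along z, and a physical state must be normalized so they sum to one:

```python
# Arbitrary illustrative amplitudes for |psi> = c1|+z> + c2|-z>.
c1 = complex(1, 1) / 2    # amplitude for the +hbar/2 ("spin up") outcome
c2 = complex(1, -1) / 2   # amplitude for the -hbar/2 ("spin down") outcome

# Born rule: outcome probabilities are the squared magnitudes.
p_up = abs(c1) ** 2
p_down = abs(c2) ** 2

# Normalization: the two probabilities sum to one.
print(abs(p_up + p_down - 1.0) < 1e-12)  # True
print(round(p_up, 10), round(p_down, 10))  # 0.5 0.5
```

With these amplitudes the spin points along the x-axis, so a z-axis measurement is an even coin flip between the two deflections, exactly the two-spot pattern Stern and Gerlach observed.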
The Davisson–Germer experiment was a 1923–1927 experiment by Clinton Davisson and Lester Germer at Western Electric, in which electrons scattered by the surface of a crystal of nickel metal displayed a diffraction pattern. This confirmed the hypothesis of wave–particle duality, advanced by Louis de Broglie in 1924, and was an experimental milestone in the creation of quantum mechanics. According to Maxwell's equations in the late 19th century, light was thought to consist of waves of electromagnetic fields, and matter was thought to consist of localized particles. However, this was challenged in Albert Einstein's 1905 paper on the photoelectric effect, which described light as discrete and localized quanta of energy, and which won him the Nobel Prize in Physics in 1921. In 1924 Louis de Broglie presented his thesis concerning the wave–particle duality theory, which proposed the idea that all matter displays the wave–particle duality of photons. According to de Broglie, for all matter and for radiation alike, the energy E of the particle is related to the frequency ν of its associated wave by the Planck relation E = hν, and the momentum p of the particle is related to its wavelength by what is now known as the de Broglie relation λ = h/p, where h is Planck's constant.
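The de Broglie relation can be evaluated directly for the electrons in this experiment. The sketch below (an illustration, not from the article; 54 eV is the accelerating energy commonly quoted for the Davisson–Germer runs) computes the nonrelativistic wavelength λ = h/p with p = √(2mE):

```python
import math

# Standard SI constants.
H = 6.62607015e-34       # Planck constant, J s
M_E = 9.1093837015e-31   # electron rest mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def de_broglie_wavelength(kinetic_energy_ev):
    """Nonrelativistic de Broglie wavelength: lambda = h / sqrt(2 m E)."""
    p = math.sqrt(2 * M_E * kinetic_energy_ev * EV)
    return H / p

lam = de_broglie_wavelength(54.0)
print(f"{lam * 1e9:.3f} nm")  # 0.167 nm
```

A wavelength of about 0.167 nm is comparable to the spacing of atomic planes in a nickel crystal, which is why the crystal could act as a diffraction grating for the electron beam.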
An important contribution to the Davisson–Germer experiment was made by Walter M. Elsasser in Göttingen in the 1920s, who remarked that the wave-like nature of matter might be investigated by electron-scattering experiments on crystalline solids, just as the wave-like nature of X-rays had been confirmed through X-ray scattering experiments on crystalline solids. This suggestion of Elsasser's was communicated by his senior colleague Max Born to physicists in England. When the Davisson and Germer experiment was performed, its results were explained by Elsasser's proposition; however, the initial intention of the experiment was not to confirm the de Broglie hypothesis, but rather to study the surface of nickel. In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target. The angular dependence of the reflected electron intensity was measured and was determined to have the same diffraction pattern as that predicted by Bragg for X-rays.
At the same time, George Paget Thomson independently demonstrated the same effect by firing electrons through metal films to produce a diffraction pattern, and Davisson and Thomson shared the Nobel Prize in Physics in 1937. The Davisson–Germer experiment confirmed the de Broglie hypothesis that matter has wave-like behavior. This, in combination with the Compton effect discovered by Arthur Compton, established the wave–particle duality hypothesis, a fundamental step in quantum theory. Davisson began work in 1921 to study electron bombardment and secondary electron emissions, and a series of experiments continued through 1925. Davisson and Germer's actual objective was to study the surface of a piece of nickel by directing a beam of electrons at the surface and observing how many electrons bounced off at various angles. They expected that, because of the small size of electrons, even the smoothest crystal surface would be too rough, and thus the electron beam would undergo diffuse reflection. The experiment consisted of firing an electron beam at a nickel crystal, perpendicular to the surface of the crystal, and measuring how the number of reflected electrons varied as the angle between the detector and the nickel surface varied.
The electron gun was a heated filament that released thermally excited electrons, which were accelerated through an electric potential difference, giving them a certain amount of kinetic energy, towards the nickel crystal. To avoid collisions of the electrons with other atoms on their way towards the surface, the experiment was conducted in a vacuum chamber. To measure the number of electrons that were scattered at different angles, a Faraday cup electron detector that could be moved on an arc path about the crystal was used; the detector was designed to accept only elastically scattered electrons. During the experiment, air accidentally entered the chamber, producing an oxide film on the nickel surface. To remove the oxide, Davisson and Germer heated the specimen in a high-temperature oven, not knowing that this caused the formerly polycrystalline structure of the nickel to form large single-crystal areas with crystal planes continuous over the width of the electron beam. When they started the experiment again and the electrons hit the surface, they were scattered by nickel atoms in crystal planes of the crystal.
This, in 1925, generated a diffraction pattern with unexpected peaks. While on a break, Davisson attended the Oxford meeting of the British Association for the Advancement of Science in the summer of 1926. At this meeting, he learned of the recent advances in quantum mechanics. To Davisson's surprise, Max Born gave a lecture that used diffraction curves from Davisson's 1923 research, which he had published in Science that year, using the data as confirmation of the de Broglie hypothesis. Davisson also learned that in prior years other scientists – Walter Elsasser, E. G. Dymond, Blackett, James Chadwick, and Charles Ellis – had attempted similar diffraction experiments, but had been unable to generate low enough vacuums or detect the low-intensity beams needed. Returning to the United States, Davisson made modifications to the experimental design.
The Stark effect is the shifting and splitting of spectral lines of atoms and molecules due to the presence of an external electric field. It is the electric-field analogue of the Zeeman effect, where a spectral line is split into several components due to the presence of a magnetic field. Although initially coined for the static case, the term is also used in the wider context to describe the effect of time-dependent electric fields. In particular, the Stark effect is responsible for the pressure broadening of spectral lines by charged particles in plasmas. For the majority of spectral lines, the Stark effect is either linear or quadratic to a high accuracy. The Stark effect can be observed both for emission and for absorption lines; the latter case is sometimes called the inverse Stark effect, but this term is no longer used in the modern literature. The effect is named after the German physicist Johannes Stark, who discovered it in 1913. It was independently discovered in the same year by the Italian physicist Antonino Lo Surdo, and in Italy it is thus sometimes called the Stark–Lo Surdo effect.
The discovery of this effect contributed to the development of quantum theory and was rewarded with the Nobel Prize in Physics for Johannes Stark in 1919. Inspired by the magnetic Zeeman effect, and by Lorentz's explanation of it, Woldemar Voigt performed classical mechanical calculations of quasi-elastically bound electrons in an electric field. Using experimental indices of refraction, he gave an estimate of the Stark splittings; this estimate was a few orders of magnitude too low. Not deterred by this prediction, Stark undertook measurements on excited states of the hydrogen atom and succeeded in observing splittings. By the use of the Bohr–Sommerfeld quantum theory, Paul Epstein and Karl Schwarzschild were independently able to derive equations for the linear and quadratic Stark effect in hydrogen. Four years later, Hendrik Kramers derived formulas for the intensities of spectral transitions. Kramers also included the effect of fine structure, which comprises corrections for relativistic kinetic energy and coupling between electron spin and orbital motion.
The first quantum-mechanical treatment was by Wolfgang Pauli. Erwin Schrödinger discussed the Stark effect at length in his third paper on quantum theory, once in the manner of the 1916 work of Epstein and once by his perturbation approach. Epstein then reconsidered the linear and quadratic Stark effect from the point of view of the new quantum theory; he derived equations for the line intensities which were a decided improvement over Kramers' results obtained by the old quantum theory. While the first-order perturbation effects for the Stark effect in hydrogen are in agreement between the Bohr–Sommerfeld model and the quantum-mechanical theory of the atom, higher-order effects are not. Measurements of the Stark effect under high field strengths confirmed the correctness of the quantum theory over the Bohr model. An electric field pointing from left to right, for example, tends to pull nuclei to the right and electrons to the left. In another way of viewing it, if an electronic state has its electron disproportionately to the left, its energy is lowered, while if it has the electron disproportionately to the right, its energy is raised.
Other things being equal, the effect of the electric field is greater for outer electron shells, because the electron is more distant from the nucleus, so it travels farther left and farther right. The Stark effect can lead to splitting of degenerate energy levels. For example, in the Bohr model, an electron has the same energy whether it is in the 2s state or any of the 2p states. However, in an electric field, there will be hybrid orbitals of the 2s and 2p states in which the electron tends to be to the left, which will acquire a lower energy, and other hybrid orbitals in which the electron tends to be to the right, which will acquire a higher energy. Therefore, the degenerate energy levels will split into lower and higher energy levels. The Stark effect originates from the interaction between a charge distribution and an external electric field. Before turning to quantum mechanics, we describe the interaction classically and consider a continuous charge distribution ρ(r). If this charge distribution is non-polarizable, its interaction energy with an external electrostatic potential V(r) is E_int = ∫ ρ(r) V(r) d³r.
If the electric field is of macroscopic origin and the charge distribution is microscopic, it is reasonable to assume that the electric field is uniform over the charge distribution. That is, V is given by a two-term Taylor expansion, V(r) = V(0) − ∑ᵢ₌₁³ rᵢ Fᵢ, with the electric field Fᵢ ≡ −(∂V/∂rᵢ)|₀, where the origin 0 is taken somewhere within the distribution.
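The splitting of the degenerate 2s/2p level described above can be sketched with a toy two-level model (an illustration, not the article's derivation; the energy scale and coupling value are arbitrary assumptions). In the basis of the two degenerate states, the field adds an off-diagonal coupling d, and diagonalizing H = [[E0, d], [d, E0]] yields the split levels E0 − d and E0 + d, with the eigenvectors being the equal-weight hybrid orbitals:

```python
import math

# Illustrative parameters: degenerate unperturbed energy and a field-
# proportional coupling between the two basis states (e.g. 2s and 2p_z).
E0 = 0.0   # unperturbed energy, arbitrary units
d = 0.25   # coupling strength, proportional to the applied field

# Eigenvalues of the symmetric 2x2 matrix [[E0, d], [d, E0]] are E0 -/+ d:
# the degenerate level splits into a lower and a higher level.
levels = (E0 - d, E0 + d)
print(levels)  # (-0.25, 0.25)

# The corresponding eigenstates are the symmetric/antisymmetric hybrids,
# mixing the two basis states with equal weight.
hybrid_low = (1 / math.sqrt(2), -1 / math.sqrt(2))   # eigenvalue E0 - d
hybrid_high = (1 / math.sqrt(2), 1 / math.sqrt(2))   # eigenvalue E0 + d
print(abs(hybrid_low[0] ** 2 + hybrid_low[1] ** 2 - 1.0) < 1e-12)  # True
```

The linear dependence of the splitting on d is the two-level analogue of the linear Stark effect in hydrogen, where the 2s and 2p states are degenerate before the field is applied.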
Wheeler's delayed-choice experiment refers to several thought experiments in quantum physics proposed by John Archibald Wheeler, the most prominent of which appeared in 1978 and 1984. These experiments attempt to decide whether light somehow "senses" the experimental apparatus in the double-slit experiment it will travel through, adjusting its behavior to fit by assuming an appropriate determinate state, or whether light remains in an indeterminate state, neither wave nor particle, until measured. The common intention of these several types of experiments is first to do something that, according to some interpretations of the theory, would make each photon "decide" whether it was going to behave as a particle or as a wave, and then, before the photon has had time to reach the detection device, to create another change in the system that would make it seem that the photon had "chosen" to behave in the opposite way. Some interpreters of these experiments contend that a photon either is a wave or is a particle, and that it cannot be both at the same time.
Wheeler's intent was to investigate the time-related conditions under which a photon makes this transition between alleged states of being. His work has been productive of many revealing experiments. He may not have anticipated the possibility that other researchers would tend toward the conclusion that a photon retains both its "wave nature" and "particle nature" until the time it ends its life, e.g. by being absorbed by an electron, which acquires its energy and thereby rises to a higher-energy orbital in its atom. However, he himself seems to have been clear on this point. He says: the thing that causes people to argue about when and how the photon learns that the experimental apparatus is in a certain configuration, and changes from wave to particle to fit the demands of the experiment's configuration, is the assumption that a photon had some physical form before the astronomers observed it. Either it was a wave or a particle. Quantum phenomena are neither waves nor particles but are intrinsically undefined until the moment they are measured.
This line of experimentation proved difficult to carry out when it was first conceived. It has nevertheless proven valuable over the years, since it has led researchers to provide "increasingly sophisticated demonstrations of the wave–particle duality of single quanta". As one experimenter explains, "Wave and particle behavior can coexist simultaneously." "Wheeler's delayed-choice experiment" refers to a series of thought experiments in quantum physics, the first being proposed by him in 1978. Another prominent version was proposed in 1983. All of these experiments try to get at the same fundamental issues in quantum physics. Many of them are discussed in Wheeler's 1978 article "The 'Past' and the 'Delayed-Choice' Double-Slit Experiment", reproduced in A. R. Marlow's Mathematical Foundations of Quantum Theory, pp. 9–48. According to the complementarity principle, a photon can manifest properties of a particle or of a wave, but not both at the same time. Which characteristic is manifested depends on whether experimenters use a device intended to observe particles or to observe waves.
When this statement is applied strictly, one could argue that by determining the detector type one could force the photon to become manifest only as a particle or only as a wave. Detection of a photon is a destructive process: when a photon is detected, it "appears" in the consequences of its demise, e.g. by being absorbed by an electron in a photomultiplier that accepts its energy, which is then used to trigger the cascade of events that produces a "click" from that device. A photon always appears at some localized point in space and time. In apparatuses that detect photons, the locations on the detection screen that indicate reception of a photon give an indication of whether or not it was manifesting its wave nature during its flight from photon source to detection device. Therefore, it is said that in a double-slit experiment a photon exhibits its wave nature when it passes through both of the slits and appears as a dim wash of illumination across the detection screen, and exhibits its particle nature when it passes through only one slit and appears on the screen as a localized scintillation.
Given the interpretation of quantum physics that says a photon is either in its guise as a wave or in its guise as a particle, the question arises: when does the photon decide whether it is going to travel as a wave or as a particle? Suppose that a traditional double-slit experiment is prepared so that either of the slits can be blocked. If both slits are open and a series of photons is emitted by the laser, an interference pattern will emerge on the detection screen. The interference pattern can only be explained as a consequence of wave phenomena, so experimenters might conclude that each photon "decides" to travel as a wave as soon as it is emitted. If only one slit is available, there will be no interference pattern, so experimenters might conclude that each photon "decides" to travel as a particle as soon as it is emitted. One way to investigate the question of when a photon decides whether to act as a wave or a particle in an experiment is to use the interferometer method. Here is a simple schematic diagram of an interferometer in two configurations: if a single photon is emitted into the entry port of the apparatus at the lower-left corner, it encounters a beam-splitter.
Because of the equal probabilities for transmission or reflection, the photon will either continue straight ahead, be reflected by the mirror at the lower-right corner, and be detected by the detector at the top of the apparatus, or it will
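The two interferometer configurations described above can be sketched with 2×2 beam-splitter transformations on the two path amplitudes (a sketch, not from the article; the symmetric 50/50 beam-splitter convention used here is a standard modeling assumption). With no second beam splitter ("open"), each detector fires half the time, as expected for a particle taking one path; with the second beam splitter in place ("closed"), the amplitudes interfere and one detector fires with certainty, as expected for a wave taking both paths:

```python
import math

def beam_splitter(a, b):
    """Symmetric 50/50 beam splitter on path amplitudes:
    (a, b) -> ((a + i b)/sqrt(2), (i a + b)/sqrt(2))."""
    s = 1 / math.sqrt(2)
    return (s * (a + 1j * b), s * (1j * a + b))

# Photon enters in one port: amplitude 1 on path a, 0 on path b.
state = beam_splitter(1 + 0j, 0 + 0j)

# "Open" configuration: no recombining beam splitter, so each which-path
# detector fires with probability 1/2 (particle-like behaviour).
p_open = (abs(state[0]) ** 2, abs(state[1]) ** 2)
print(round(p_open[0], 10), round(p_open[1], 10))  # 0.5 0.5

# "Closed" configuration: a second beam splitter recombines the paths, the
# amplitudes interfere, and only one detector ever fires (wave-like behaviour).
closed = beam_splitter(*state)
p_closed = (abs(closed[0]) ** 2, abs(closed[1]) ** 2)
print(round(p_closed[0], 10), round(p_closed[1], 10))  # 0.0 1.0
```

The delayed-choice element is the decision to insert or remove the second beam splitter after the photon has already passed the first one; the arithmetic above is identical either way, which is why the experiment probes when, if ever, the photon "commits" to one behaviour.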