Quantum entanglement is a physical phenomenon that occurs when pairs or groups of particles are generated, interact, or share spatial proximity in ways such that the quantum state of each particle cannot be described independently of the state of the others, even when the particles are separated by a large distance. Measurements of physical properties such as position, momentum and polarization, performed on entangled particles, are found to be correlated. For example, if a pair of particles is generated in such a way that their total spin is known to be zero, and one particle is found to have clockwise spin on a certain axis, the spin of the other particle, measured on the same axis, will be found to be counterclockwise, as is to be expected due to their entanglement. However, this behavior gives rise to seemingly paradoxical effects: any measurement of a property of a particle performs an irreversible collapse on that particle and will change the original quantum state. In the case of entangled particles, such a measurement affects the entangled system as a whole.
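To make the spin example concrete, here is a minimal numerical sketch (added for illustration, not part of the original article) of the two-particle singlet state; measuring both particles along the same axis always yields opposite results.

```python
import numpy as np

# Illustrative sketch: the spin singlet state |psi> = (|01> - |10>)/sqrt(2)
# of two spin-1/2 particles, measured along the same axis.
up = np.array([1, 0])
down = np.array([0, 1])
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Pauli-Z measures spin along the z axis; its eigenvalues +1/-1 stand in
# for "clockwise" and "counterclockwise".
Z = np.diag([1, -1])

# Expectation of the product of the two outcomes: -1 means the results
# are always opposite, exactly the anticorrelation described above.
correlation = singlet @ np.kron(Z, Z) @ singlet
print(correlation)  # -1.0
```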
Such phenomena were the subject of a 1935 paper by Albert Einstein, Boris Podolsky and Nathan Rosen, and several papers by Erwin Schrödinger shortly thereafter, describing what came to be known as the EPR paradox. Einstein and others considered such behavior to be impossible, as it violated the local realism view of causality, and argued that the accepted formulation of quantum mechanics must therefore be incomplete. However, the counterintuitive predictions of quantum mechanics were later verified experimentally in tests where the polarization or spin of entangled particles was measured at separate locations, statistically violating Bell's inequality. In earlier tests, it could not be ruled out that the result at one point could have been subtly transmitted to the remote point, affecting the outcome at the second location; however, so-called "loophole-free" Bell tests have since been performed in which the locations were separated such that communication at the speed of light would have taken longer (in one case, 10,000 times longer) than the interval between the measurements.
According to some interpretations of quantum mechanics, the effect of one measurement occurs instantly. Other interpretations, which do not recognize wavefunction collapse, dispute that there is any "effect" at all. However, all interpretations agree that entanglement produces correlation between the measurements and that the mutual information between the entangled particles can be exploited, but that any transmission of information at faster-than-light speeds is impossible. Quantum entanglement has been demonstrated experimentally with photons, electrons, molecules as large as buckyballs, and even small diamonds; the utilization of entanglement in communication and computation is an active area of research. The counterintuitive predictions of quantum mechanics about correlated systems were first discussed by Albert Einstein in 1935, in a joint paper with Boris Podolsky and Nathan Rosen. In this study, the three formulated the EPR paradox, a thought experiment that attempted to show that quantum-mechanical theory was incomplete.
They wrote: "We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete." However, the three scientists did not coin the word entanglement, nor did they generalize the special properties of the state they considered. Following the EPR paper, Erwin Schrödinger wrote a letter to Einstein in German in which he used the word Verschränkung "to describe the correlations between two particles that interact and then separate, as in the EPR experiment." Schrödinger shortly thereafter published a seminal paper defining and discussing the notion of "entanglement." In the paper, he recognized the importance of the concept and stated: "I would not call [entanglement] one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought." Like Einstein, Schrödinger was dissatisfied with the concept of entanglement, because it seemed to violate the speed limit on the transmission of information implicit in the theory of relativity.
Einstein famously derided entanglement as "spukhafte Fernwirkung" or "spooky action at a distance." The EPR paper generated significant interest among physicists and inspired much discussion about the foundations of quantum mechanics, but produced little other published work. Despite the interest, the weak point in EPR's argument was not discovered until 1964, when John Stewart Bell proved that one of their key assumptions, the principle of locality, as applied to the kind of hidden-variables interpretation hoped for by EPR, was mathematically inconsistent with the predictions of quantum theory. Bell demonstrated an upper limit, seen in Bell's inequality, on the strength of correlations that can be produced in any theory obeying local realism, and showed that quantum theory predicts violations of this limit for certain entangled systems. His inequality is experimentally testable, and there have been numerous relevant experiments, starting with the pioneering work of Stuart Freedman and John Clauser in 1972 and Alain Aspect's experiments in 1982, all of which have shown agreement with quantum mechanics rather than the principle of local realism.
However, each such experiment had left open at least one loophole by which it was possible to question the validity of the results. In 2015, an experiment was performed that closed both the detection and locality loopholes and was heralded as "loophole-free".
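Bell's limit can be made concrete with the CHSH form of the inequality: local realism bounds a combination S of four correlation measurements by 2, while quantum mechanics predicts up to 2√2. The sketch below is illustrative only; the angle settings are the standard optimal choices, not values from the text.

```python
import numpy as np

# CHSH form of Bell's inequality: local realism requires
# S = |E(a,b) - E(a,b') + E(a',b) + E(a',b')| <= 2,
# while quantum mechanics allows up to 2*sqrt(2) for the singlet state.
def E(a, b):
    # Quantum prediction for the singlet state: E(a, b) = -cos(a - b)
    return -np.cos(a - b)

a, a2 = 0.0, np.pi / 2            # Alice's two measurement angles
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement angles

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # ~2.828, i.e. 2*sqrt(2) > 2: the local-realist bound is violated
```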
The Avogadro constant, named after the scientist Amedeo Avogadro, is the number of constituent particles (usually molecules, atoms or ions) that are contained in the amount of substance given by one mole. It is the proportionality factor that relates the molar mass of a substance to the mass of a sample; it is designated with the symbol NA or L and has the value 6.022140857×10²³ mol⁻¹ in the International System of Units. Previous definitions of chemical quantity involved the Avogadro number, a historical term closely related to the Avogadro constant but defined differently: the Avogadro number was initially defined by Jean Baptiste Perrin as the number of atoms in one gram-molecule of atomic hydrogen, meaning one gram of hydrogen. This number is also known as the Loschmidt constant in German literature. The constant was later redefined as the number of atoms in 12 grams of the isotope carbon-12, and still later generalized to relate amounts of a substance to their molecular weight. For instance, the number of nucleons in one mole of any sample of ordinary matter is, to a first approximation, 6×10²³ times its molecular weight.
For example, 12 grams of ¹²C, with mass number 12, contains 6.022×10²³ carbon atoms. The Avogadro number is a dimensionless quantity and has the same numerical value as the Avogadro constant when the latter is given in base units. In contrast, the Avogadro constant has the dimension of reciprocal amount of substance; the Avogadro constant can also be expressed as 0.6022... mL⋅mol⁻¹⋅Å⁻³, which can be used to convert from volume per molecule in cubic ångströms to molar volume in millilitres per mole. Pending revisions in the base set of SI units necessitated redefinitions of the concepts of chemical quantity; the Avogadro number, and its definition, was deprecated in favor of the Avogadro constant and its definition. Based on measurements made through the middle of 2017, which yielded a value for the Avogadro constant of NA = 6.022140758×10²³ mol⁻¹, the redefinition of SI units is planned to take effect on 20 May 2019, at which point the value of the constant will be fixed at exactly 6.02214076×10²³ mol⁻¹. The Avogadro constant is named after the early 19th-century Italian scientist Amedeo Avogadro, who, in 1811, first proposed that the volume of a gas is proportional to the number of atoms or molecules regardless of the nature of the gas.
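That conversion factor is just the Avogadro constant restated in convenient units, as the short sketch below shows (the 30 Å³ molecular volume is an assumed example value, roughly that of a water molecule, not a figure from the text):

```python
# Sketch of the unit conversion mentioned above: N_A expressed in
# mL * mol^-1 * Angstrom^-3 turns volume per molecule into molar volume.
N_A = 6.022140857e23            # mol^-1
ML_PER_CUBIC_ANGSTROM = 1e-24   # 1 Angstrom^3 = 1e-24 cm^3 = 1e-24 mL

factor = N_A * ML_PER_CUBIC_ANGSTROM
print(f"conversion factor: {factor:.4f} mL mol^-1 A^-3")  # ~0.6022

# Example (assumed value): a molecule occupying ~30 A^3 corresponds to
print(f"molar volume: {30 * factor:.1f} mL/mol")  # ~18.1 mL/mol, like liquid water
```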
The French physicist Jean Perrin in 1909 proposed naming the constant in honor of Avogadro. Perrin won the 1926 Nobel Prize in Physics largely for his work in determining the Avogadro constant by several different methods. The value of the Avogadro constant was first indicated by Johann Josef Loschmidt, who in 1865 estimated the average diameter of the molecules in air by a method that is equivalent to calculating the number of particles in a given volume of gas. This latter value, the number density n₀ of particles in an ideal gas, is now called the Loschmidt constant in his honor and is related to the Avogadro constant, NA, by n₀ = p₀NA/(RT₀), where p₀ is the pressure, R is the gas constant and T₀ is the absolute temperature. The connection with Loschmidt is the origin of the symbol L sometimes used for the Avogadro constant, and German-language literature may refer to both constants by the same name, distinguished only by their units of measurement. Accurate determinations of the Avogadro constant require the measurement of a single quantity on both the atomic and macroscopic scales using the same unit of measurement.
This became possible for the first time when American physicist Robert Millikan measured the charge on an electron in 1910. The electric charge per mole of electrons is a constant called the Faraday constant and had been known since 1834, when Michael Faraday published his works on electrolysis. By dividing the charge on a mole of electrons by the charge on a single electron, the value of the Avogadro number is obtained. Since 1910, newer calculations have more accurately determined the values for the Faraday constant and the elementary charge. Perrin originally proposed the name Avogadro's number to refer to the number of molecules in one gram-molecule of oxygen, and this term is still used, especially in introductory works. The change in name to Avogadro constant came with the introduction of the mole as a base unit in the International System of Units in 1971, which regarded amount of substance as an independent dimension of measurement. With this recognition, the Avogadro constant was no longer a pure number, but had a unit of measurement: the reciprocal mole.
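Both of the relations above are simple enough to evaluate directly. The sketch below is illustrative: CODATA-style values for the Faraday constant and the elementary charge are assumed, along with p₀ = 101325 Pa and T₀ = 273.15 K for the Loschmidt relation.

```python
# Illustrative sketch: two classic determinations tied to the Avogadro
# constant. Values below are CODATA-style constants assumed for the example.
F = 96485.33289       # C mol^-1, Faraday constant (charge per mole of electrons)
e = 1.6021766208e-19  # C, elementary charge

# Dividing the charge on a mole of electrons by the charge on one electron:
N_A = F / e
print(f"N_A = {N_A:.6e} mol^-1")  # ~6.022141e23

# Loschmidt constant n0 = p0 * N_A / (R * T0) at standard conditions:
R = 8.3144598   # J mol^-1 K^-1, gas constant
p0 = 101325.0   # Pa
T0 = 273.15     # K
n0 = p0 * N_A / (R * T0)
print(f"n0  = {n0:.4e} m^-3")  # ~2.687e25 particles per cubic metre
```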
While it is rare to use units of amount of substance other than the mole, the Avogadro constant can also be expressed in units such as the pound-mole and the ounce-mole. The current definition of the mole links it to the kilogram; the revised definition breaks that link by making a mole a specific number of entities of the substance in question. Previous definition: The mole is the amount of substance of a system that contains as many elementary entities as there are atoms in 0.012 kilogram of carbon-12. When the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, other particles, or specified groups of such particles. 2019 definition: The mole, symbol mol, is the SI unit of amount of substance. One mole contains exactly 6.02214076×10²³ elementary entities. This number is the fixed numerical value of the Avogadro constant, NA, when expressed in the unit mol⁻¹, and is called the Avogadro number.
Quantum dots are tiny semiconductor particles a few nanometres in size, having optical and electronic properties that differ from those of larger particles. They are a central theme in nanotechnology. When quantum dots are illuminated by UV light, some of the electrons receive enough energy to break free from the atoms. This capability allows them to move around the nanoparticle, creating a conduction band in which electrons are free to move through the material and conduct electricity. When these electrons drop back into the outer orbit around the atom, they emit light, and the color of that light depends on the energy difference between the conduction band and the valence band. In the language of materials science, nanoscale semiconductor materials confine either electrons or electron holes. Quantum dots are sometimes referred to as artificial atoms, emphasizing their singularity, having bound, discrete electronic states, like naturally occurring atoms or molecules.
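The energy-to-color relationship described above can be made quantitative with a simple particle-in-a-sphere (Brus-type) confinement model. The sketch below is purely illustrative: the CdSe-like material parameters are rough, assumed literature-style values, not figures from this article. It also anticipates the size dependence discussed next: smaller dots have wider effective gaps and emit bluer light.

```python
import numpy as np

# Illustrative Brus-type estimate of quantum dot emission color from the
# band-gap energy difference described above. All material parameters are
# assumed, rough values for a CdSe-like dot.
HBAR = 1.054571817e-34  # J s, reduced Planck constant
M_E = 9.1093837015e-31  # kg, electron rest mass
H = 6.62607015e-34      # J s, Planck constant
C = 2.99792458e8        # m/s, speed of light
EV = 1.602176634e-19    # J per electronvolt

E_GAP_BULK = 1.74 * EV  # assumed bulk band gap (~CdSe)
M_EFF_E = 0.13 * M_E    # assumed effective electron mass
M_EFF_H = 0.45 * M_E    # assumed effective hole mass

def emission_wavelength_nm(radius_nm):
    """Confinement widens the gap as the dot shrinks, shifting emission blue."""
    r = radius_nm * 1e-9
    confinement = (HBAR * np.pi / r) ** 2 / 2 * (1 / M_EFF_E + 1 / M_EFF_H)
    return H * C / (E_GAP_BULK + confinement) * 1e9

for radius in (1.5, 2.0, 3.0):  # dot radii in nm
    print(f"radius {radius} nm -> emission near {emission_wavelength_nm(radius):.0f} nm")
```

Running this shows the expected trend: the 3 nm radius dot emits near the yellow-orange end, while the smallest dot is pushed toward the blue and ultraviolet.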
Quantum dots have properties intermediate between those of bulk semiconductors and those of discrete atoms or molecules. Their optoelectronic properties change as a function of both size and shape. Larger QDs of 5–6 nm diameter emit longer wavelengths, with colors such as orange or red. Smaller QDs emit shorter wavelengths, yielding colors like blue and green, although the specific colors and sizes vary depending on the exact composition of the QD. Because of their highly tunable properties, QDs are of wide interest. Potential applications include transistors, solar cells, LEDs, diode lasers, second-harmonic generation, quantum computing and medical imaging. Their small size allows QDs to be suspended in solution, which may lead to uses in inkjet printing and spin-coating. They have also been used in Langmuir–Blodgett thin films. These processing techniques result in less expensive and less time-consuming methods of semiconductor fabrication. There are several ways to prepare quantum dots, the principal ones involving colloids. Colloidal semiconductor nanocrystals are synthesized from solutions, much like traditional chemical processes.
The main difference is that the product neither precipitates as a bulk solid nor remains dissolved. Heating the solution at high temperature, the precursors decompose, forming monomers which then nucleate and generate nanocrystals. Temperature is a critical factor in determining optimal conditions for nanocrystal growth: it must be high enough to allow for rearrangement and annealing of atoms during the synthesis process while being low enough to promote crystal growth. The concentration of monomers is another critical factor that has to be stringently controlled during nanocrystal growth. The growth process of nanocrystals can occur in two different regimes, "focusing" and "defocusing". At high monomer concentrations, the critical size (the size at which nanocrystals neither grow nor shrink) is relatively small, resulting in growth of nearly all particles. In this regime, smaller particles grow faster than large ones, resulting in a "focusing" of the size distribution that yields nearly monodisperse particles. The size focusing is optimal when the monomer concentration is kept such that the average nanocrystal size present is always larger than the critical size.
Over time, the monomer concentration diminishes, the critical size becomes larger than the average size present, and the distribution "defocuses". There are colloidal methods to produce many different semiconductors. Typical dots are made of binary compounds such as lead sulfide, lead selenide, cadmium selenide, cadmium sulfide, cadmium telluride, indium arsenide and indium phosphide. Dots may also be made from ternary compounds such as cadmium selenide sulfide. These quantum dots can contain as few as 100 to 100,000 atoms within the quantum dot volume, with a diameter of ≈10 to 50 atoms. This corresponds to about 2 to 10 nanometers; at 10 nm in diameter, nearly 3 million quantum dots could be lined up end to end and fit within the width of a human thumb. Large batches of quantum dots may be synthesized via colloidal synthesis. Due to this scalability and the convenience of benchtop conditions, colloidal synthetic methods are promising for commercial applications, and colloidal synthesis is acknowledged to be the least toxic of all the different forms of synthesis.
Plasma synthesis has evolved to be one of the most popular gas-phase approaches for the production of quantum dots, especially those with covalent bonds. For example, silicon and germanium quantum dots have been synthesized by using nonthermal plasma. The size, shape and composition of quantum dots can all be controlled in nonthermal plasma. Doping, which seems quite challenging for quantum dots, has also been realized in plasma synthesis. Quantum dots synthesized by plasma are in the form of powder, for which surface modification may be carried out; this can lead to excellent dispersion of quantum dots in either organic solvents or water. Self-assembled quantum dots are typically between 5 and 50 nm in size. Quantum dots defined by lithographically patterned gate electrodes, or by etching on two-dimensional electron gases in semiconductor heterostructures, can have lateral dimensions between 20 and 100 nm. Some quantum dots are small regions of one material buried in another with a larger band gap. These can be so-called core–shell structures, e.g. with CdSe in the core and ZnS in the shell, or from special forms of silica called ormosil.
Sub-monolayer shells can also be effective ways of passivating the quantum dots, as with PbS cores with sub-monolayer CdS shells. Quantum dots sometimes occur spontaneously in quantum well structures due to monolayer fluctuations in the well's thickness.
Photon polarization is the quantum mechanical description of the classical polarized sinusoidal plane electromagnetic wave. An individual photon can be described as having right or left circular polarization, or a superposition of the two. Equivalently, a photon can be described as having horizontal or vertical linear polarization, or a superposition of the two. The description of photon polarization contains many of the physical concepts and much of the mathematical machinery of more involved quantum descriptions, such as the quantum mechanics of an electron in a potential well. Polarization is an example of a qubit degree of freedom, which forms a fundamental basis for an understanding of more complicated quantum phenomena. Much of the mathematical machinery of quantum mechanics, such as state vectors, probability amplitudes, unitary operators and Hermitian operators, emerges from the classical Maxwell's equations in this description. The quantum polarization state vector for the photon, for instance, is identical with the Jones vector used to describe the polarization of a classical wave.
Unitary operators emerge from the classical requirement of the conservation of energy of a classical wave propagating through lossless media that alter the polarization state of the wave. Hermitian operators then follow for infinitesimal transformations of a classical polarization state. Many of the implications of the mathematical machinery are verified experimentally; in fact, many of the experiments can be performed with two pairs of polaroid sunglasses. The connection with quantum mechanics is made through the identification of a minimum packet size, called a photon, for energy in the electromagnetic field. The identification is based on the theories of Planck and the interpretation of those theories by Einstein. The correspondence principle then allows the identification of momentum and angular momentum, as well as energy, with the photon. The wave is linearly polarized when the phase angles α_x and α_y are equal, α_x = α_y ≝ α; this represents a wave with phase α, polarized at an angle θ with respect to the x axis.
In that case the Jones vector can be written |ψ⟩ = e^(iα) (cos θ, sin θ)ᵀ. The state vectors for linear polarization in x or y are special cases of this state vector. If unit vectors are defined such that |x⟩ ≝ (1, 0)ᵀ and |y⟩ ≝ (0, 1)ᵀ, then the linearly polarized state can be written in the "x–y basis" as |ψ⟩ = cos θ e^(iα)|x⟩ + sin θ e^(iα)|y⟩ = ψ_x|x⟩ + ψ_y|y⟩. If the phase angles α_x and α_y differ by exactly π/2 and the x amplitude equals the y amplitude, the wave is circularly polarized; the Jones vector then becomes |ψ⟩ = (1/√2) (1, ±i)ᵀ e^(iα), where the plus sign indicates right circular polarization and the minus sign indicates left circular polarization. In the case of circular polarization, the electric field vector of constant magnitude rotates in the x–y plane. If unit vectors are defined such that |R⟩ ≝ (1/√2)(1, i)ᵀ and |L⟩ ≝ (1/√2)(1, −i)ᵀ, an arbitrary polarization state can be written in the "R–L basis" as a superposition of |R⟩ and |L⟩.
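The basis states above translate directly into code. This NumPy sketch (illustrative; it follows the sign convention just stated, with the plus sign for right circular polarization) verifies that a 45° linear polarization is an equal-weight superposition of |R⟩ and |L⟩:

```python
import numpy as np

# Jones vectors, following the notation and sign convention in the text.
x = np.array([1, 0], dtype=complex)  # |x>: horizontal linear polarization
y = np.array([0, 1], dtype=complex)  # |y>: vertical linear polarization

def linear(theta, alpha=0.0):
    """Linear polarization at angle theta with overall phase alpha."""
    return np.exp(1j * alpha) * (np.cos(theta) * x + np.sin(theta) * y)

R = (x + 1j * y) / np.sqrt(2)  # right circular (plus sign, per the text)
L = (x - 1j * y) / np.sqrt(2)  # left circular (minus sign)

# A 45-degree linear state projects equally onto the two circular states:
psi = linear(np.pi / 4)
print(abs(np.vdot(R, psi))**2, abs(np.vdot(L, psi))**2)  # 0.5 0.5
```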
Quantum field theory
In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines classical field theory, special relativity and quantum mechanics, and is used to construct physical models of subatomic particles and quasiparticles. QFT treats particles as excited states of their underlying fields, which are, in a sense, more fundamental than the particles themselves. Interactions between particles are described by interaction terms in the Lagrangian involving their corresponding fields; each interaction can be visually represented by Feynman diagrams, which are formal computational tools of relativistic perturbation theory. As a successful theoretical framework today, quantum field theory emerged from the work of generations of theoretical physicists spanning much of the 20th century. Its development began in the 1920s with the description of interactions between light and electrons, culminating in the first quantum field theory: quantum electrodynamics. A major theoretical obstacle soon followed with the appearance and persistence of various infinities in perturbative calculations, a problem only resolved in the 1950s with the invention of the renormalization procedure.
A second major barrier came with QFT's apparent inability to describe the weak and strong interactions, to the point where some theorists called for the abandonment of the field-theoretic approach. The development of gauge theory and the completion of the Standard Model in the 1970s led to a renaissance of quantum field theory. Quantum field theory results from the combination of classical field theory, quantum mechanics and special relativity, and a brief overview of these theoretical precursors is in order. The earliest successful classical field theory is one that emerged from Newton's law of universal gravitation, despite the complete absence of the concept of fields from his 1687 treatise Philosophiæ Naturalis Principia Mathematica. The force of gravity as described by Newton is an "action at a distance": its effects on faraway objects are instantaneous, no matter the distance. In an exchange of letters with Richard Bentley, however, Newton stated that "it is inconceivable that inanimate brute matter should, without the mediation of something else which is not material, operate upon and affect other matter without mutual contact."
It was not until the 18th century that mathematical physicists discovered a convenient description of gravity based on fields: a numerical quantity assigned to every point in space indicating the action of gravity on any particle at that point. However, this was considered merely a mathematical trick. Fields began to take on an existence of their own with the development of electromagnetism in the 19th century. Michael Faraday coined the English term "field" in 1845. He introduced fields as properties of space having physical effects, argued against "action at a distance", and proposed that interactions between objects occur via space-filling "lines of force". This description of fields remains to this day. The theory of classical electromagnetism was completed in 1862 with Maxwell's equations, which described the relationship between the electric field, the magnetic field, electric current and electric charge. Maxwell's equations implied the existence of electromagnetic waves, a phenomenon whereby electric and magnetic fields propagate from one spatial point to another at a finite speed, which turns out to be the speed of light.
Action at a distance was thus conclusively refuted. Despite the enormous success of classical electromagnetism, it was unable to account for the discrete lines in atomic spectra, nor for the distribution of blackbody radiation across different wavelengths. Max Planck's study of blackbody radiation marked the beginning of quantum mechanics. He treated atoms, which absorb and emit electromagnetic radiation, as tiny oscillators with the crucial property that their energies can only take on a series of discrete, rather than continuous, values. These are known as quantum harmonic oscillators, and this process of restricting energies to discrete values is called quantization. Building on this idea, Albert Einstein proposed in 1905 an explanation for the photoelectric effect: that light is composed of individual packets of energy called photons. This implied that electromagnetic radiation, while being waves in the classical electromagnetic field, also exists in the form of particles. In 1913, Niels Bohr introduced the Bohr model of atomic structure, wherein electrons within atoms can only take on a series of discrete, rather than continuous, energies.
This is another example of quantization. The Bohr model explained the discrete nature of atomic spectral lines. In 1924, Louis de Broglie proposed the hypothesis of wave-particle duality: that microscopic particles exhibit both wave-like and particle-like properties under different circumstances. Uniting these scattered ideas, a coherent discipline, quantum mechanics, was formulated between 1925 and 1926, with important contributions from de Broglie, Werner Heisenberg, Max Born, Erwin Schrödinger, Paul Dirac and Wolfgang Pauli. In the same year as his paper on the photoelectric effect, Einstein published his theory of special relativity, built on Maxwell's electromagnetism. New rules, called Lorentz transformations, were given for the way time and space coordinates of an event change under changes in the observer's velocity, and the distinction between time and space was blurred. It was proposed that all physical laws must be the same for observers at different velocities, i.e. that physical laws be invariant under Lorentz transformations.
Two difficulties remained.
Hermann von Helmholtz
Hermann Ludwig Ferdinand von Helmholtz was a German physician and physicist who made significant contributions in several scientific fields. The largest German association of research institutions, the Helmholtz Association, is named after him. In physiology and psychology, he is known for his mathematics of the eye, his theories of vision, his ideas on the visual perception of space, his color vision research, and his work on the sensation of tone, the perception of sound, and empiricism in the physiology of perception. In physics, he is known for his theories on the conservation of energy, his work in electrodynamics and chemical thermodynamics, and his work on a mechanical foundation of thermodynamics. As a philosopher, he is known for his philosophy of science, his ideas on the relation between the laws of perception and the laws of nature, the science of aesthetics, and his ideas on the civilizing power of science. Helmholtz was born in Potsdam, the son of the local Gymnasium headmaster, Ferdinand Helmholtz, who had studied classical philology and philosophy and who was a close friend of the publisher and philosopher Immanuel Hermann Fichte.
Helmholtz's work was influenced by the philosophy of Johann Gottlieb Fichte and Immanuel Kant; he tried to trace their theories in empirical matters such as physiology. As a young man, Helmholtz was interested in natural science, but his father wanted him to study medicine at the Charité because there was financial support for medical students. Trained in physiology, Helmholtz nevertheless wrote on many other topics, ranging from theoretical physics to the age of the Earth and the origin of the Solar System. Helmholtz's first academic position was as a teacher of anatomy at the Academy of Arts in Berlin in 1848. He then moved to take a post as associate professor of physiology at the Prussian University of Königsberg, where he was appointed in 1849. In 1855 he accepted a full professorship of physiology at the University of Bonn; he was not happy in Bonn, however, and three years later he transferred to the University of Heidelberg, in Baden, where he served as professor of physiology. In 1871 he accepted his final university position, as professor of physics at the Humboldt University in Berlin.
His first important scientific achievement, an 1847 treatise on the conservation of energy, was written in the context of his medical studies and philosophical background. His work on energy conservation came about while studying muscle metabolism: he tried to demonstrate that no energy is lost in muscle movement, motivated by the implication that there were no vital forces necessary to move a muscle. This was a rejection of the speculative tradition of Naturphilosophie, at that time a dominant philosophical paradigm in German physiology. Drawing on the earlier work of Sadi Carnot, Benoît Paul Émile Clapeyron and James Prescott Joule, he postulated a relationship between mechanics, heat, light, electricity and magnetism by treating them all as manifestations of a single force (energy, in today's terminology), and he published his theories in his book Über die Erhaltung der Kraft (On the Conservation of Force). In the 1850s and 60s, building on the publications of William Thomson, Helmholtz and William Rankine popularized the idea of the heat death of the universe.
In fluid dynamics, Helmholtz made several contributions, including Helmholtz's theorems for vortex dynamics in inviscid fluids. Helmholtz was also a pioneer in the scientific study of human audition. Inspired by psychophysics, he was interested in the relationships between measurable physical stimuli and the corresponding human perceptions. For example, the amplitude of a sound wave can be varied, causing the sound to appear louder or softer, but a linear step in sound pressure amplitude does not result in a linear step in perceived loudness; the physical sound must be increased exponentially in order for equal steps to seem linear, a fact that is used in current electronic devices to control volume. Helmholtz paved the way in experimental studies on the relationship between physical energy and its appreciation, with the goal of developing "psychophysical laws." The sensory physiology of Helmholtz was the basis of the work of Wilhelm Wundt, a student of Helmholtz, who is considered one of the founders of experimental psychology.
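The logarithmic character of loudness perception is why audio levels are quoted in decibels today. The short sketch below is illustrative (the 20 µPa reference is the conventional threshold of hearing, not a value from the text): exponential growth in pressure amplitude reads as equal-sized steps in level.

```python
import math

# Illustrative sketch of the logarithmic loudness relation described above.
# The 20 micropascal reference is the conventional threshold of human hearing.
def spl_db(pressure_pa, p_ref=20e-6):
    """Sound pressure level in decibels relative to the hearing threshold."""
    return 20 * math.log10(pressure_pa / p_ref)

# Each tenfold increase in pressure amplitude adds a constant 20 dB:
for p in (2e-4, 2e-3, 2e-2):
    print(f"{p:.0e} Pa -> {spl_db(p):.0f} dB")
# 2e-04 Pa -> 20 dB, 2e-03 Pa -> 40 dB, 2e-02 Pa -> 60 dB
```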
More explicitly than Helmholtz, Wundt described his research as a form of empirical philosophy and as a study of the mind as something separate. Helmholtz had, in his early repudiation of Naturphilosophie, stressed the importance of materialism and was focusing more on the unity of "mind" and body. In 1851, Helmholtz revolutionized the field of ophthalmology with the invention of the ophthalmoscope, an instrument used to examine the inside of the human eye; this made him world-famous overnight. Helmholtz's interests at that time were focused on the physiology of the senses. His main publication, titled Handbuch der Physiologischen Optik (Handbook of Physiological Optics), provided empirical theories on depth perception, color vision and motion perception, and became the fundamental reference work in his field during the second half of the nineteenth century. In the third and final volume, published in 1867, Helmholtz described the importance of unconscious inferences for perception. The Handbuch was first translated into English under the editorship of James P. C. Southall on behalf of the Optical Society of America in 1924–25.
His theory of accommodation went unchallenged until the final decade of the 20th century. Helmholtz continued to work for several decades on several editions of the handbook, updating his work because of his dispute with Ewald Hering, who held opposing views on spatial and color vision; this dispute divided the discipline.
Black-body radiation is the thermal electromagnetic radiation within or surrounding a body in thermodynamic equilibrium with its environment, or emitted by a black body (an opaque and non-reflective body). It has a specific spectrum and intensity that depends only on the body's temperature, which is assumed for the sake of calculations and theory to be uniform and constant. The thermal radiation spontaneously emitted by many ordinary objects can be approximated as black-body radiation. A perfectly insulated enclosure that is in thermal equilibrium internally contains black-body radiation and will emit it through a hole made in its wall, provided the hole is small enough to have a negligible effect upon the equilibrium. A black body at room temperature appears black, as most of the energy it radiates is infra-red and cannot be perceived by the human eye; because the human eye cannot perceive light waves at lower frequencies, a black body, viewed in the dark at the lowest just faintly visible temperature, subjectively appears grey, even though its objective physical spectrum peak is in the infrared range.
When it becomes a little hotter, it appears dull red. As its temperature increases further it becomes yellow, and eventually blue-white. Although planets and stars are neither in thermal equilibrium with their surroundings nor perfect black bodies, black-body radiation is used as a first approximation for the energy they emit. Black holes are near-perfect black bodies in the sense that they absorb all the radiation that falls on them, and it has been proposed that they emit black-body radiation, with a temperature that depends on the mass of the black hole. The term black body was introduced by Gustav Kirchhoff in 1860. Black-body radiation is also called thermal radiation, cavity radiation, complete radiation or temperature radiation. Black-body radiation has a characteristic, continuous frequency spectrum that depends only on the body's temperature, called the Planck spectrum or Planck's law. The spectrum is peaked at a characteristic frequency that shifts to higher frequencies with increasing temperature, and at room temperature most of the emission is in the infrared region of the electromagnetic spectrum.
As the temperature increases past about 500 degrees Celsius, black bodies start to emit significant amounts of visible light. Viewed in the dark by the human eye, the first faint glow appears as a "ghostly" grey. With rising temperature, the glow becomes visible even when there is some background surrounding light: first as a dull red, then yellow, and eventually a "dazzling bluish-white" as the temperature rises. When the body appears white, it is emitting a substantial fraction of its energy as ultraviolet radiation. The Sun, with an effective temperature of 5800 K, is an approximate black body with an emission spectrum peaked in the central, yellow-green part of the visible spectrum, but with significant power in the ultraviolet as well. Black-body radiation provides insight into the thermodynamic equilibrium state of cavity radiation. All normal matter emits electromagnetic radiation; the radiation represents a conversion of a body's internal energy into electromagnetic energy, and is therefore called thermal radiation.
It is a spontaneous process of radiative distribution of entropy. Conversely, all normal matter absorbs electromagnetic radiation to some degree. An object that absorbs all radiation falling on it, at all wavelengths, is called a black body. When a black body is at a uniform temperature, its emission has a characteristic frequency distribution that depends on the temperature; its emission is called black-body radiation. The concept of the black body is an idealization. Graphite and lamp black, with emissivities greater than 0.95, are good approximations to a black material. Experimentally, black-body radiation may be established best as the stable steady-state equilibrium radiation in a cavity in a rigid body, at a uniform temperature, that is entirely opaque and only partly reflective. A closed box with graphite walls at a constant temperature, with a small hole on one side, produces a good approximation to ideal black-body radiation emanating from the opening. Black-body radiation has the unique stable distribution of radiative intensity that can persist in thermodynamic equilibrium in a cavity.
In equilibrium, for each frequency the total intensity of radiation that is emitted and reflected from a body is determined solely by the equilibrium temperature and does not depend upon the shape, material or structure of the body. For a black body there is no reflected radiation, so the spectral radiance is entirely due to emission. In addition, a black body is a diffuse emitter (its emission is independent of direction). Black-body radiation may thus be viewed as the radiation from a black body at thermal equilibrium. Black-body radiation becomes a visible glow of light if the temperature of the object is high enough; the Draper point is the temperature at which all solids glow a dim red, about 798 K. At 1000 K, a small opening in the wall of a large, uniformly heated, opaque-walled cavity, viewed from outside, looks red. No matter how the oven is constructed, or of what material, as long as it is built so that all light entering is absorbed by its walls, it will contain a good approximation to black-body radiation. The spectrum, and therefore color, of the light that comes out will be a function of the cavity temperature alone.
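That temperature dependence of the emitted color is captured quantitatively by Wien's displacement law, λ_peak = b/T. The law is standard physics rather than a formula stated in the text above; the sketch below evaluates it at a few of the temperatures mentioned:

```python
# Hedged sketch of Wien's displacement law, lambda_peak = b / T, which gives
# the wavelength at which the black-body spectrum peaks. The law itself is
# standard physics, not quoted from the article above.
WIEN_B = 2.897771955e-3  # m*K, Wien displacement constant

for temp_k in (300, 798, 1000, 5800):
    peak_nm = WIEN_B / temp_k * 1e9
    print(f"{temp_k:5d} K -> spectral peak near {peak_nm:7.0f} nm")

# 300 K  -> ~9659 nm: deep infrared, typical of room-temperature objects.
# 798 K  -> ~3631 nm: still infrared at the Draper point; the dim red glow
#           comes from the short-wavelength tail of the spectrum.
# 1000 K -> ~2898 nm: the opening of the heated cavity looks red.
# 5800 K -> ~500 nm: yellow-green, matching the Sun described earlier.
```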