Optics

Optics is the branch of physics that studies the behaviour and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics describes the behaviour of visible and infrared light; because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays and radio waves exhibit similar properties. Most optical phenomena can be accounted for using the classical electromagnetic description of light. Complete electromagnetic descriptions of light are, however, difficult to apply in practice. Practical optics is done using simplified models; the most common of these, geometric optics, treats light as a collection of rays that travel in straight lines and bend when they pass through or reflect from surfaces. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics; the ray-based model of light was developed first, followed by the wave model of light.

Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation. Some phenomena depend on the fact that light has both wave-like and particle-like properties. Explanation of these effects requires quantum mechanics; when considering light's particle-like properties, the light is modelled as a collection of particles called "photons". Quantum optics deals with the application of quantum mechanics to optical systems. Optical science is relevant to and studied in many related disciplines including astronomy, various engineering fields and medicine. Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors, telescopes, microscopes and fibre optics. Optics began with the development of lenses by the Mesopotamians; the earliest known lenses, made from polished crystal quartz, date from as early as 700 BC, with Assyrian examples such as the Layard/Nimrud lens. The ancient Romans and Greeks filled glass spheres with water to make lenses.

These practical developments were followed by the development of theories of light and vision by ancient Greek and Indian philosophers, and by the development of geometrical optics in the Greco-Roman world. The word optics comes from the ancient Greek word ὀπτική, meaning "appearance, look". Greek philosophy on optics broke down into two opposing theories on how vision worked, the "intromission theory" and the "emission theory"; the intromission approach saw vision as coming from objects casting off copies of themselves that were captured by the eye. With many propagators, including Democritus, Epicurus and their followers, this theory seems to have some contact with modern theories of vision, but it remained only speculation lacking any experimental foundation. Plato first articulated the emission theory, the idea that visual perception is accomplished by rays emitted by the eyes; he also commented on the parity reversal of mirrors in the Timaeus. Some hundred years later, Euclid wrote a treatise entitled Optics in which he linked vision to geometry, creating geometrical optics.

He based his work on Plato's emission theory, wherein he described the mathematical rules of perspective and described the effects of refraction qualitatively, although he questioned that a beam of light from the eye could instantaneously light up the stars every time someone blinked. Ptolemy, in his treatise Optics, held an extramission-intromission theory of vision: the rays from the eye formed a cone, the vertex being within the eye and the base defining the visual field; the rays were sensitive and conveyed information back to the observer's intellect about the distance and orientation of surfaces. He summarised much of Euclid and went on to describe a way to measure the angle of refraction, though he failed to notice the empirical relationship between it and the angle of incidence. During the Middle Ages, Greek ideas about optics were resurrected and extended by writers in the Muslim world. One of the earliest of these was Al-Kindi, who wrote on the merits of Aristotelian and Euclidean ideas of optics, favouring the emission theory since it could better quantify optical phenomena.

In 984, the Persian mathematician Ibn Sahl wrote the treatise On Burning Mirrors and Lenses, describing a law of refraction equivalent to Snell's law. He used this law to compute optimum shapes for curved mirrors. In the early 11th century, Alhazen wrote the Book of Optics, in which he explored reflection and refraction and proposed a new system for explaining vision and light based on observation and experiment. He rejected the "emission theory" of Ptolemaic optics, with its rays emitted by the eye, and instead put forward the idea that light reflected in all directions in straight lines from all points of the objects being viewed and then entered the eye, although he was unable to explain how the eye captured the rays. Alhazen's work was ignored in the Arabic world, but it was anonymously translated into Latin around 1200 AD and further summarised and expanded on by the Polish monk Witelo, making it a standard text on optics in Europe for the next 400 years. In the 13th century in medieval Europe, the English bishop Robert Grosseteste wrote on a wide range of scientific topics and discussed light from four different perspectives: an epistemology of light, a metaphysics or cosmogony of light, an etiology or physics of light, and a theology of light, basing his analysis on the works of Aristotle and on Platonism.

Grosseteste's most famous disciple, Roger Bacon, wrote w

Impulse (physics)

In classical mechanics, impulse is the integral of a force, F, over the time interval, t, for which it acts. Since force is a vector quantity, impulse is also a vector, in the same direction. Impulse applied to an object produces an equivalent vector change in its linear momentum, also in the same direction. The SI unit of impulse is the newton second (N·s), and the dimensionally equivalent unit of momentum is the kilogram metre per second (kg·m/s). The corresponding English engineering unit is the slug-foot per second. A resultant force causes acceleration and a change in the velocity of the body for as long as it acts. A resultant force applied over a longer time therefore produces a bigger change in linear momentum than the same force applied briefly: the change in momentum is equal to the product of the average force and the duration. Conversely, a small force applied for a long time produces the same change in momentum—the same impulse—as a larger force applied briefly: J = F_avg (t₂ − t₁). The impulse is the integral of the resultant force with respect to time: J = ∫ F dt. The impulse J produced from time t₁ to t₂ is defined to be J = ∫_{t₁}^{t₂} F dt, where F is the resultant force applied from t₁ to t₂.

From Newton's second law, force is related to momentum p by F = dp/dt. Therefore, J = ∫_{t₁}^{t₂} (dp/dt) dt = ∫_{p₁}^{p₂} dp = p₂ − p₁ = Δp, where Δp is the change in linear momentum from time t₁ to t₂. This is called the impulse-momentum theorem; as a result, an impulse may be regarded as the change in momentum of an object to which a resultant force is applied. The impulse may be expressed in a simpler form when the mass is constant: J = ∫_{t₁}^{t₂} F dt = Δp = mv₂ − mv₁, where F is the resultant force applied, t₁ and t₂ are the times when the impulse begins and ends, m is the mass of the object, v₂ is the final velocity of the object at the end of the time interval, and v₁ is the initial velocity of the object when the time interval begins. Impulse has the same dimensions as momentum. In the International System of Units, these are kg⋅m/s = N⋅s. In English engineering units, they are slug⋅ft/s = lbf⋅s. The term "impulse" is also used to refer to a fast-acting force or impact. This type of impulse is idealized so that the change in momentum produced by the force happens with no change in time.
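The impulse-momentum theorem is easy to check numerically: integrating an arbitrary force profile over time should reproduce mΔv. A minimal sketch, using a hypothetical half-sine force pulse and an illustrative 2 kg mass (none of these numbers come from the text):

```python
import numpy as np

# Numerically verify the impulse-momentum theorem J = ∫F dt = Δp = mΔv
# for an assumed half-sine force pulse acting on a 2 kg mass.
m = 2.0                              # mass (kg), assumed
t = np.linspace(0.0, 0.1, 10_001)    # time samples over the pulse (s)
F = 50.0 * np.sin(np.pi * t / 0.1)   # force profile (N), zero at both ends

# trapezoidal rule for J = ∫F dt
J = np.sum((F[1:] + F[:-1]) / 2.0 * np.diff(t))
dv = J / m                           # change in velocity implied by Δp = mΔv

J_exact = 2.0 / np.pi * 50.0 * 0.1   # analytic impulse of a half-sine pulse
print(J, dv)
```

The numerically integrated impulse agrees with the closed-form value (2/π)·F_peak·Δt to the accuracy of the trapezoidal rule.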

This sort of change is a step change and is not physically possible. However, it is a useful model for computing the effects of ideal collisions. Additionally, in rocketry, the term "total impulse" is used and is considered synonymous with the term "impulse". The application of Newton's second law for variable mass allows impulse and momentum to be used as analysis tools for jet- or rocket-propelled vehicles. In the case of rockets, the impulse imparted can be normalized by unit of propellant expended to create a performance parameter, specific impulse. This fact can be used to derive the Tsiolkovsky rocket equation, which relates the vehicle's propulsive change in velocity to the engine's specific impulse and the vehicle's propellant-mass ratio. Wave–particle duality defines the impulse of a wave collision; the preservation of momentum in the collision is called phase matching. Applications include the Compton effect, nonlinear optics, acousto-optic modulators and electron–phonon scattering.
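The specific-impulse relation mentioned above feeds into the Tsiolkovsky rocket equation, Δv = v_e · ln(m₀/m_f), where the effective exhaust velocity is v_e = I_sp · g₀. A minimal sketch with made-up engine and mass numbers (not from the text):

```python
import math

# Tsiolkovsky rocket equation: Δv = Isp · g0 · ln(m0 / mf).
# All numbers below are illustrative, not a real vehicle.
g0 = 9.80665       # standard gravity (m/s²)
isp = 300.0        # specific impulse (s), assumed
m0 = 10_000.0      # initial mass, propellant included (kg), assumed
mf = 4_000.0       # final (dry) mass (kg), assumed

delta_v = isp * g0 * math.log(m0 / mf)
print(delta_v)     # propulsive change in velocity (m/s)
```

Note how Δv depends only on the exhaust velocity and the propellant-mass ratio m₀/m_f, not on burn time.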


Powder diffraction

Powder diffraction is a scientific technique using X-ray, neutron, or electron diffraction on powder or microcrystalline samples for structural characterization of materials. An instrument dedicated to performing such powder measurements is called a powder diffractometer. Powder diffraction stands in contrast to single crystal diffraction techniques, which work best with a single, well-ordered crystal. A diffractometer produces waves at a known frequency, determined by their source. Often the source is X-rays, because their wavelengths are comparable to inter-atomic spacings; however, electrons and neutrons are also common sources, with their frequency determined by their de Broglie wavelength. When these waves reach the sample, the atoms of the sample act just like a diffraction grating, producing bright spots at particular angles. By measuring the angle at which these bright spots occur, the spacing of the diffraction grating can be determined from Bragg's law; because the sample itself is the diffraction grating, this spacing is the atomic spacing.

The distinction between powder and single crystal diffraction is the degree of texturing in the sample. Single crystals have maximal texturing and are said to be anisotropic. In contrast, in powder diffraction, every possible crystalline orientation is represented in a powdered sample, the isotropic case. PXRD operates under the assumption that the sample is randomly oriented. Therefore, a statistically significant number of each plane of the crystal structure will be in the proper orientation to diffract the X-rays, and each plane will thus be represented in the signal. In practice, it is sometimes necessary to rotate the sample orientation to eliminate the effects of texturing and achieve true randomness. Mathematically, crystals can be described by a Bravais lattice with some regularity in the spacing between atoms; because of this regularity, we can describe this structure in a different way using the reciprocal lattice, which is related to the original structure by a Fourier transform. This three-dimensional space can be described with reciprocal axes x*, y*, z* or, alternatively, in spherical coordinates q, φ*, χ*.
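The reciprocal-lattice construction just described can be sketched numerically. Under the common crystallographic convention b_i · a_j = 2π δ_ij, the reciprocal basis vectors are the rows of 2π(A⁻¹)ᵀ, where the rows of A are the direct lattice vectors; the cubic lattice constant below is illustrative:

```python
import numpy as np

# Reciprocal lattice vectors from direct lattice vectors, using the
# convention b_i · a_j = 2π δ_ij. Simple cubic cell with an assumed
# lattice constant of 4 Å.
a = 4.0  # lattice constant in Å, illustrative
A = np.array([[a, 0.0, 0.0],
              [0.0, a, 0.0],
              [0.0, 0.0, a]])       # rows are the direct vectors a1, a2, a3

B = 2.0 * np.pi * np.linalg.inv(A).T  # rows are the reciprocal vectors b1, b2, b3

# Check the defining relation: the matrix of dot products b_i · a_j is 2π·I
print(np.allclose(A @ B.T, 2.0 * np.pi * np.eye(3)))
```

For a simple cubic lattice each reciprocal vector has length 2π/a; for non-orthogonal cells the same two lines of linear algebra still apply.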

In powder diffraction, intensity is homogeneous over φ* and χ*, and only q remains as an important measurable quantity. This is because orientational averaging causes the three-dimensional reciprocal space studied in single crystal diffraction to be projected onto a single dimension. When the scattered radiation is collected on a flat plate detector, the rotational averaging leads to smooth diffraction rings around the beam axis, rather than the discrete Laue spots observed in single crystal diffraction. The angle between the beam axis and the ring is called the scattering angle, and in X-ray crystallography it is always denoted as 2θ. In accordance with Bragg's law, each ring corresponds to a particular reciprocal lattice vector G in the sample crystal; this leads to the definition of the scattering vector as: |G| = q = 2k sin θ = (4π/λ) sin θ. In this equation, G is the reciprocal lattice vector, q is the length of the reciprocal lattice vector, k = 2π/λ is the magnitude of the wave vector, θ is half of the scattering angle, and λ is the wavelength of the radiation.
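In practice, converting a diffractogram's 2θ axis into q is a one-line application of this formula. A sketch, assuming Cu Kα radiation (λ ≈ 1.5406 Å); the angles are illustrative:

```python
import numpy as np

# Convert scattering angle 2θ (degrees) to scattering-vector length q (Å⁻¹)
# via q = (4π/λ) sin θ, for an assumed Cu Kα wavelength.
lam = 1.5406  # wavelength in Å, Cu Kα (assumed source)

def two_theta_to_q(two_theta_deg):
    theta = np.radians(two_theta_deg) / 2.0   # θ is half the scattering angle
    return 4.0 * np.pi / lam * np.sin(theta)

q = two_theta_to_q(np.array([10.0, 30.0, 60.0]))
print(q)
```

Because q already folds in the wavelength, diffractograms from different sources plotted against q line up directly, which is the comparability advantage mentioned below.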

Powder diffraction data are presented as a diffractogram in which the diffracted intensity, I, is shown as a function either of the scattering angle 2θ or as a function of the scattering vector length q. The latter variable has the advantage that the diffractogram no longer depends on the value of the wavelength λ; the advent of synchrotron sources has widened the choice of wavelength considerably. To facilitate comparability of data obtained with different wavelengths, the use of q is therefore recommended and is gaining acceptance. Relative to other methods of analysis, powder diffraction allows for rapid, non-destructive analysis of multi-component mixtures without the need for extensive sample preparation; this gives laboratories around the world the ability to analyze unknown materials and perform materials characterization in such fields as metallurgy, forensic science, condensed matter physics, and the biological and pharmaceutical sciences. Identification is performed by comparison of the diffraction pattern to a known standard or to a database such as the International Centre for Diffraction Data's Powder Diffraction File or the Cambridge Structural Database.

Advances in hardware and software, including improved optics and fast detectors, have improved the analytical capability of the technique, particularly the speed of analysis. The fundamental physics upon which the technique is based provides high precision and accuracy in the measurement of interplanar spacings, sometimes to fractions of an Ångström, resulting in authoritative identification used in patents, criminal cases and other areas of law enforcement; the ability to analyze multiphase materials also allows analysis of how materials interact in a particular matrix such as a pharmaceutical tablet, a circuit board, a mechanical weld, a geologic core sampling, concrete, or a pigment found in a historic painting. The method has been used for the identification and classification of minerals, but it can be used for nearly any material, even amorphous ones, so long as a suitable reference pattern is known or can be constructed; the most widespread use of powder diffract

Wave

In physics and related fields, a wave is a disturbance of a field in which a physical attribute oscillates at each point or propagates from each point to neighboring points, or seems to move through space. The waves most studied in physics are mechanical and electromagnetic. A mechanical wave is a local deformation in some physical medium that propagates from particle to particle by creating local stresses that cause strain in neighboring particles too. For example, sound waves in air are variations of the local pressure that propagate by collisions between gas molecules. Other examples of mechanical waves are seismic waves, gravity waves and shock waves. An electromagnetic wave consists of a combination of variable electric and magnetic fields, that propagates through space according to Maxwell's equations. Electromagnetic waves can travel through vacuum. Other types of waves include gravitational waves, which are disturbances in a gravitational field that propagate according to general relativity.

Mechanical and electromagnetic waves may seem to travel through space. In mathematics and electronics, waves are studied as signals. On the other hand, some waves do not appear to move at all, like hydraulic jumps. Some, like the probability waves of quantum mechanics, may be static in both space and time. A plane wave seems to travel in a definite direction and has constant value over any plane perpendicular to that direction. Mathematically, the simplest waves are the sinusoidal ones. Complicated waves can be described as the sum of many sinusoidal plane waves. A plane wave is transverse if its effect at each point is described by a vector perpendicular to the direction of propagation or energy transfer. While mechanical waves can be both transverse and longitudinal, electromagnetic waves are transverse in free space. Consider a traveling transverse wave on a string. Consider the string to have a single spatial dimension, and consider this wave as traveling in the x direction in space. For example, let the positive x direction be to the right and the negative x direction be to the left.

Such a wave travels with constant amplitude u, with constant velocity v (where v is independent of wavelength and independent of amplitude), and with constant waveform, or shape. This wave can be described by the two-dimensional functions u(x, t) = F(x − vt) or u(x, t) = G(x + vt), or, more generally, by d'Alembert's formula: u(x, t) = F(x − vt) + G(x + vt), representing two component waveforms F and G traveling through the medium in opposite directions. A generalized representation of this wave can be obtained as the partial differential equation (1/v²) ∂²u/∂t² = ∂²u/∂x². General solutions are based upon Duhamel's principle. The form or shape of F in d'Alembert's formula involves the argument x − vt. Constant values of this argument correspond to constant values of F, and these constant values occur if x increases at the same rate that vt increases; that is, the wave shaped like the function F will move in the positive x-direction at velocity v. In the case of a periodic function F with period λ, that is, F(x + λ) = F(x), the periodicity of F in space means that a snapshot of the wave at a given time t finds the wave varying periodically in space with period λ.
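A quick numerical check that d'Alembert's form satisfies the wave equation: evaluate u(x, t) = F(x − vt) + G(x + vt) for two arbitrary smooth waveforms and compare finite-difference second derivatives. The Gaussian pulses and evaluation point below are illustrative choices:

```python
import numpy as np

# Check numerically that u(x,t) = F(x − vt) + G(x + vt) satisfies
# (1/v²) ∂²u/∂t² = ∂²u/∂x², using assumed Gaussian component waveforms.
v = 2.0
F = lambda s: np.exp(-s**2)               # right-moving waveform (assumed)
G = lambda s: 0.5 * np.exp(-(s - 1.0)**2) # left-moving waveform (assumed)
u = lambda x, t: F(x - v * t) + G(x + v * t)

x, t, h = 0.3, 0.7, 1e-4   # arbitrary evaluation point, small step
u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2  # ∂²u/∂t²
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2  # ∂²u/∂x²

residual = abs(u_tt / v**2 - u_xx)
print(residual)   # ≈ 0 up to discretization and round-off error
```

Any sufficiently smooth F and G pass the same check, which is the content of d'Alembert's solution.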

In a similar fashion, this periodicity of F implies a periodicity in time as well: F(x − v(t + T)) = F(x − vt) provided vT = λ, so an observation of the wave at a fixed location x finds the wave undulating periodically in time with period T = λ/v. The amplitude of a wave may be constant, or may be modulated so as to vary with time and/or position; the outline of the variation in amplitude is called the envelope of the w

Electron diffraction

Electron diffraction refers to the wave nature of electrons. However, from a technical or practical point of view, it may be regarded as a technique used to study matter by firing electrons at a sample and observing the resulting interference pattern; this phenomenon is known as wave–particle duality, which states that a particle of matter can be described as a wave. For this reason, an electron can be regarded as a wave, much like water waves; this technique is similar to neutron diffraction. Electron diffraction is most often used in solid state physics and chemistry to study the crystal structure of solids. Experiments are performed in a transmission electron microscope, or in a scanning electron microscope as electron backscatter diffraction. In these instruments, electrons are accelerated by an electrostatic potential in order to gain the desired energy and determine their wavelength before they interact with the sample to be studied; the periodic structure of a crystalline solid acts as a diffraction grating, scattering the electrons in a predictable manner.

Working back from the observed diffraction pattern, it may be possible to deduce the structure of the crystal producing the pattern. However, the technique is limited by the phase problem. Apart from the study of "periodically perfect" crystals, i.e. electron crystallography, electron diffraction is also a useful technique to study the short-range order of amorphous solids, the short-range ordering of imperfections such as vacancies, and the geometry of gaseous molecules. The de Broglie hypothesis, formulated in 1924, predicts that particles should also behave as waves. De Broglie's formula was confirmed three years later for electrons with the observation of electron diffraction in two independent experiments. At the University of Aberdeen, George Paget Thomson passed a beam of electrons through a thin metal film and observed the predicted interference patterns. Around the same time at Bell Labs, Clinton Joseph Davisson and Lester Halbert Germer guided their beam through a crystalline grid.

In 1937, Thomson and Davisson shared the Nobel Prize in Physics for their discovery. Unlike other types of radiation used in diffraction studies of materials, such as X-rays and neutrons, electrons are charged particles and interact with matter through the Coulomb force; this means that the incident electrons feel the influence of both the positively charged atomic nuclei and the surrounding electrons. In comparison, X-rays interact with the spatial distribution of the valence electrons, while neutrons are scattered by the atomic nuclei through the strong nuclear forces. In addition, the magnetic moment of neutrons is non-zero, and they are therefore also scattered by magnetic fields. Because of these different forms of interaction, the three types of radiation are suitable for different studies. In the kinematical approximation for electron diffraction, the intensity of a diffracted beam is given by: I_g = |ψ_g|² ∝ |F_g|². Here ψ_g is the wavefunction of the diffracted beam and F_g is the so-called structure factor, given by: F_g = Σ_i f_i e^(−2πi g·r_i), where g is the scattering vector of the diffracted beam, r_i is the position of an atom i in the unit cell, and f_i is the scattering power of the atom, called the atomic form factor.

The sum is over all atoms in the unit cell. The structure factor describes the way in which an incident beam of electrons is scattered by the atoms of a crystal unit cell, taking into account the different scattering power of the elements through the factor f_i. Since the atoms are spatially distributed in the unit cell, there will be a difference in phase when considering the scattered amplitude from two atoms; this phase shift is taken into account by the exponential term in the equation. The atomic form factor, or scattering power, of an element depends on the type of radiation considered; because electrons interact with matter through different processes than, for example, X-rays, the atomic form factors for the two cases are not the same. The wavelength of an electron is given by the de Broglie equation λ = h/p, where h is Planck's constant and p is the relativistic momentum of the electron; λ is called the de Broglie wavelength. The electrons are accelerated in an electric potential U to the desired velocity: v = √(2eU/m₀), where m₀ is the rest mass of the electron.
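Combining λ = h/p with the relativistic momentum of an electron accelerated through a potential U gives the standard working formula p = √(2m₀eU(1 + eU/(2m₀c²))). A minimal sketch with rounded physical constants; the 100 kV value is a common TEM benchmark:

```python
import math

# De Broglie wavelength of an electron accelerated through potential U,
# using the relativistic momentum. Constants are rounded CODATA values.
h = 6.62607e-34    # Planck constant (J·s)
m0 = 9.10938e-31   # electron rest mass (kg)
e = 1.60218e-19    # elementary charge (C)
c = 2.99792e8      # speed of light (m/s)

def electron_wavelength(U):
    """Wavelength in metres for accelerating voltage U in volts."""
    p = math.sqrt(2.0 * m0 * e * U * (1.0 + e * U / (2.0 * m0 * c**2)))
    return h / p

print(electron_wavelength(100e3) * 1e12)  # ≈ 3.70 pm at 100 kV
```

The relativistic correction term matters at microscope voltages: at 100 kV it already shortens the wavelength by about 5% relative to the non-relativistic estimate.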

Wavelength

In physics, the wavelength is the spatial period of a periodic wave—the distance over which the wave's shape repeats. It is thus the inverse of the spatial frequency. Wavelength is determined by considering the distance between consecutive corresponding points of the same phase, such as crests, troughs, or zero crossings, and is a characteristic of both traveling waves and standing waves, as well as other spatial wave patterns. Wavelength is designated by the Greek letter lambda (λ); the term wavelength is sometimes also applied to modulated waves, to the sinusoidal envelopes of modulated waves, or to waves formed by interference of several sinusoids. Assuming a sinusoidal wave moving at a fixed wave speed, wavelength is inversely proportional to the frequency of the wave: waves with higher frequencies have shorter wavelengths, and waves with lower frequencies have longer wavelengths. Wavelength depends on the medium. Examples of wave-like phenomena are sound waves, water waves and periodic electrical signals in a conductor.

A sound wave is a variation in air pressure, while in light and other electromagnetic radiation the strength of the electric and the magnetic field vary. Water waves are variations in the height of a body of water. In a crystal lattice vibration, atomic positions vary. Wavelength is a measure of the distance between repetitions of a shape feature such as peaks, valleys, or zero-crossings, not a measure of how far any given particle moves. For example, in sinusoidal waves over deep water a particle near the water's surface moves in a circle of the same diameter as the wave height, unrelated to wavelength; the range of wavelengths or frequencies for wave phenomena is called a spectrum. The name originated with the visible light spectrum but now can be applied to the entire electromagnetic spectrum as well as to a sound spectrum or vibration spectrum. In linear media, any wave pattern can be described in terms of the independent propagation of sinusoidal components; the wavelength λ of a sinusoidal waveform traveling at constant speed v is given by λ = v/f, where v is called the phase speed of the wave and f is the wave's frequency.

In a dispersive medium, the phase speed itself depends upon the frequency of the wave, making the relationship between wavelength and frequency nonlinear. In the case of electromagnetic radiation—such as light—in free space, the phase speed is the speed of light, about 3×10⁸ m/s; thus the wavelength of a 100 MHz electromagnetic wave is about 3×10⁸ m/s divided by 10⁸ Hz = 3 metres. The wavelength of visible light ranges from deep red, roughly 700 nm, to violet, roughly 400 nm. For sound waves in air, the speed of sound is 343 m/s; the wavelengths of the sound frequencies audible to the human ear (20 Hz to 20 kHz) are thus between approximately 17 m and 17 mm, respectively. Note that the wavelengths in audible sound are much longer than those in visible light. A standing wave is an undulatory motion that stays in one place. A sinusoidal standing wave includes stationary points of no motion, called nodes, and the wavelength is twice the distance between nodes; the upper figure shows three standing waves in a box. The walls of the box are considered to require the wave to have nodes at the walls, determining which wavelengths are allowed.
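The numerical examples above (a 100 MHz radio wave, and the limits of audible sound) follow directly from λ = v/f:

```python
# λ = v/f for the examples in the text.
c = 3.0e8          # speed of light (m/s, approximate)
v_sound = 343.0    # speed of sound in air (m/s)

lam_radio = c / 100e6        # 100 MHz electromagnetic wave → 3.0 m
lam_low = v_sound / 20.0     # 20 Hz tone → about 17 m
lam_high = v_sound / 20e3    # 20 kHz tone → about 17 mm (0.01715 m)
print(lam_radio, lam_low, lam_high)
```

The six-orders-of-magnitude gap between lam_high and the 400-700 nm of visible light is the comparison made in the text.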

For example, for an electromagnetic wave, if the box has ideal metal walls, the condition for nodes at the walls results because the metal walls cannot support a tangential electric field, forcing the wave to have zero amplitude at the wall. The stationary wave can be viewed as the sum of two traveling sinusoidal waves of oppositely directed velocities. Consequently, wavelength and wave velocity are related just as for a traveling wave. For example, the speed of light can be determined from observation of standing waves in a metal box containing an ideal vacuum. Traveling sinusoidal waves are represented mathematically in terms of their velocity v, frequency f and wavelength λ as: y(x, t) = A cos(2π(x/λ − ft)) = A cos((2π/λ)(x − vt)), where y is the value of the wave at any position x and time t, and A is the amplitude of the wave. They are also commonly expressed in terms of wavenumber k and angular frequency ω as: y(x, t) = A cos(kx − ωt) = A cos(k(x − vt)), in which wavelength and wavenumber are related to velocity and frequency as: k = 2π/λ = 2πf/v = ω/v.
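These relations can be verified in a few lines; the sound-wave numbers below are illustrative:

```python
import math

# Check the relations k = 2π/λ = 2πf/v = ω/v for a traveling sinusoid.
v = 343.0            # phase speed (m/s), sound in air as an example
f = 440.0            # frequency (Hz), an assumed audio tone

lam = v / f                  # wavelength λ = v/f (m)
k = 2.0 * math.pi / lam      # wavenumber (rad/m)
omega = 2.0 * math.pi * f    # angular frequency (rad/s)

print(math.isclose(k, 2.0 * math.pi * f / v))  # k = 2πf/v
print(math.isclose(k, omega / v))              # k = ω/v
```

All three expressions for k are algebraically identical, so the checks hold for any positive v and f.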

Condensed matter physics

Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter. In particular it is concerned with the "condensed" phases that appear whenever the number of constituents in a system is large and the interactions between the constituents are strong; the most familiar examples of condensed phases are solids and liquids, which arise from the electromagnetic forces between atoms. Condensed matter physicists seek to understand the behavior of these phases by using physical laws. In particular, they include the laws of quantum mechanics and statistical mechanics; the most familiar condensed phases are solids and liquids while more exotic condensed phases include the superconducting phase exhibited by certain materials at low temperature, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, the Bose–Einstein condensate found in ultracold atomic systems. The study of condensed matter physics involves measuring various material properties via experimental probes along with using methods of theoretical physics to develop mathematical models that help in understanding physical behavior.

The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division of the American Physical Society. The field overlaps with chemistry, materials science and nanotechnology, and relates to atomic physics and biophysics; the theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics. A variety of topics in physics, such as crystallography and elasticity, were treated as distinct areas until the 1940s, when they were grouped together as solid state physics. Around the 1960s, the study of the physical properties of liquids was added to this list, forming the basis for the new, related specialty of condensed matter physics. According to physicist Philip Warren Anderson, the term was coined by him and Volker Heine when they changed the name of their group at the Cavendish Laboratories, Cambridge, from Solid state theory to Theory of Condensed Matter in 1967, as they felt the new name did not exclude their interests in the study of liquids, nuclear matter, and so on.

Although Anderson and Heine helped popularize the name "condensed matter", it had been present in Europe for some years, most prominently in the form of a journal published in English and German by Springer-Verlag titled Physics of Condensed Matter, launched in 1963. The funding environment and Cold War politics of the 1960s and 1970s were factors that led some physicists to prefer the name "condensed matter physics", which emphasized the commonality of scientific problems encountered by physicists working on solids, liquids and other complex matter, over "solid state physics", which was associated with the industrial applications of metals and semiconductors. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. References to the "condensed" state can be traced to earlier sources. For example, in the introduction to his 1947 book Kinetic Theory of Liquids, Yakov Frenkel proposed that "The kinetic theory of liquids must accordingly be developed as a generalization and extension of the kinetic theory of solid bodies.

As a matter of fact, it would be more correct to unify them under the title of 'condensed bodies'". One of the first studies of condensed states of matter was by English chemist Humphry Davy, in the first decades of the nineteenth century. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre and high electrical and thermal conductivity; this indicated that the atoms in John Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure. Davy further claimed that elements that were believed to be gases, such as nitrogen and hydrogen, could be liquefied under the right conditions and would then behave as metals. In 1823, Michael Faraday, then an assistant in Davy's lab, liquefied chlorine and went on to liquefy all known gaseous elements except for nitrogen, hydrogen and oxygen. Later, in 1869, Irish chemist Thomas Andrews studied the phase transition from a liquid to a gas and coined the term critical point to describe the condition where a gas and a liquid were indistinguishable as phases, and Dutch physicist Johannes van der Waals supplied the theoretical framework that allowed the prediction of critical behavior based on measurements at much higher temperatures.

By 1908, James Dewar and Heike Kamerlingh Onnes were able to liquefy hydrogen and the newly discovered helium, respectively. Paul Drude in 1900 proposed the first theoretical model for a classical electron moving through a metallic solid. Drude's model described properties of metals in terms of a gas of free electrons, and was the first microscopic model to explain empirical observations such as the Wiedemann–Franz law. However, despite the success of Drude's free electron model, it had one notable problem: it was unable to explain the electronic contribution to the specific heat and magnetic properties of metals, or the temperature dependence of resistivity at low temperatures. In 1911, three years after helium was first liquefied, Onnes, working at the University of Leiden, discovered superconductivity in mercury when he observed the electrical resistivity of mercury to vanish at temperatures below a certain value; the phenomenon surprised the best theoretical physicists of the time, and it remain