Brewster's angle is an angle of incidence at which light with a particular polarization is perfectly transmitted through a transparent dielectric surface, with no reflection. When unpolarized light is incident at this angle, the light reflected from the surface is therefore polarized. This special angle of incidence is named after the Scottish physicist Sir David Brewster. When light encounters a boundary between two media with different refractive indices, some of it is usually reflected, as shown in the figure; the fraction reflected is described by the Fresnel equations and depends on the incoming light's polarization and angle of incidence. The Fresnel equations predict that light with the p polarization (electric field in the plane of incidence) will not be reflected if the angle of incidence is θB = arctan(n2/n1), where n1 is the refractive index of the initial medium through which the light propagates and n2 is the index of the other medium. This equation is known as Brewster's law, and the angle defined by it is Brewster's angle. The physical mechanism can be qualitatively understood from the manner in which electric dipoles in the media respond to p-polarized light.
One can imagine that light incident on the surface is absorbed and then re-radiated by oscillating electric dipoles at the interface between the two media. The polarization of freely propagating light is always perpendicular to the direction in which the light is travelling; the dipoles that produce the transmitted (refracted) light oscillate in the polarization direction of that light. These same oscillating dipoles also generate the reflected light. However, dipoles do not radiate any energy along the direction of the dipole moment. If the refracted light is p-polarized and propagates exactly perpendicular to the direction in which the light is predicted to be specularly reflected, the dipoles point along the specular reflection direction and therefore no light can be reflected. With simple geometry this condition can be expressed as θ1 + θ2 = 90°, where θ1 is the angle of incidence (equal to the angle of reflection) and θ2 is the angle of refraction. Using Snell's law, n1 sin θ1 = n2 sin θ2, one can calculate the incident angle θ1 = θB at which no light is reflected: n1 sin θB = n2 sin(90° − θB) = n2 cos θB.
Solving for θB gives θB = arctan(n2/n1). For a glass medium (n ≈ 1.5) in air (n ≈ 1), Brewster's angle for visible light is approximately 56°, while for an air-water interface (n ≈ 1.33) it is approximately 53°. Since the refractive index of a given medium changes with the wavelength of light, Brewster's angle also varies with wavelength. The phenomenon of light being polarized by reflection from a surface at a particular angle was first observed by Étienne-Louis Malus in 1808. He attempted to relate the polarizing angle to the refractive index of the material, but was frustrated by the inconsistent quality of glasses available at that time. In 1815, Brewster experimented with higher-quality materials and showed that this angle was a function of the refractive index, defining Brewster's law. Brewster's angle is often referred to as the "polarizing angle", because light that reflects from a surface at this angle is polarized perpendicular to the plane of incidence (s-polarized). A glass plate or a stack of plates placed at Brewster's angle in a light beam can thus be used as a polarizer.
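The relation θB = arctan(n2/n1) is easy to check numerically. A minimal sketch (the function name and the rounded index values are illustrative, not from the source):

```python
import math

def brewster_angle(n1: float, n2: float) -> float:
    """Brewster's angle in degrees for light going from medium n1 into medium n2."""
    return math.degrees(math.atan(n2 / n1))

# Air (n ≈ 1.0) into glass (n ≈ 1.5): about 56 degrees
print(round(brewster_angle(1.0, 1.5)))    # 56
# Air into water (n ≈ 1.333): about 53 degrees
print(round(brewster_angle(1.0, 1.333)))  # 53
```

The two printed values match the 56° and 53° figures quoted above for glass and water.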
The concept of a polarizing angle can be extended to the concept of a Brewster wavenumber to cover planar interfaces between two linear bianisotropic materials. In the case of reflection at Brewster's angle, the reflected and refracted rays are mutually perpendicular. For magnetic materials, Brewster's angle can exist for only one of the incident wave polarizations, as determined by the relative strengths of the dielectric permittivity and magnetic permeability; this has implications for the existence of generalized Brewster angles for dielectric metasurfaces. Polarized sunglasses use the principle of Brewster's angle to reduce glare from the sun reflecting off horizontal surfaces such as water or road. Over a large range of angles around Brewster's angle, the reflection of p-polarized light is lower than that of s-polarized light; thus, if the sun is low in the sky, reflected light is mostly s-polarized. Polarizing sunglasses use a polarizing material such as Polaroid sheet to block horizontally polarized light, preferentially blocking reflections from horizontal surfaces.
The effect is stro
Argus was a two-beam high-power infrared neodymium-doped silica glass laser with a 20 cm output aperture, built at Lawrence Livermore National Laboratory in 1976 for the study of inertial confinement fusion (ICF). Argus advanced the study of laser-target interaction and paved the way for the construction of its successor, the 20-beam Shiva laser. It was known from some of the earlier experiments in ICF that when large laser systems amplified their beams beyond a certain point, nonlinear optical effects would begin to appear due to the intense nature of the light. The most serious effect among these was Kerr lensing: because the beam is so intense, during its passage through either air or glass the electric field of the light alters the index of refraction of the material and causes the beam at its most intense points to "self-focus" down to filament-like structures of high intensity. When a beam collapses into high-intensity filaments like this, it can exceed the optical damage threshold of laser glass and other optics, damaging them by creating pits and gray tracks through the glass.
These effects became so severe after just the first few amplification stages of early lasers that it was seen as impossible to exceed the gigawatt level for ICF lasers without destroying the laser itself after just a few shots. To improve the quality of the amplified beams, LLNL had started experimenting with the use of spatial filters in the single-beam Cyclops laser, built the previous year. The basic idea was to extend the laser device into a long "beamline", along which any imperfections that accumulated in the beam would be successively removed after every amplification stage. A series of tubes with lenses on either end would focus the light down to a point where it would pass through a pinhole that rejected stray unfocused light, smoothing the beam and eliminating the high-intensity spots which would otherwise have been further amplified and damaged down-beam optics. The technique was so successful on Argus that it was referred to as "the savior of laser ICF". After the success of Cyclops in beam smoothing, the next step was to further increase the energy and power in the resulting beams.
Argus used a series of five groups of amplifiers and spatial filters arranged along the beamlines, each one boosting power until it reached a total of about 1 kilojoule and 1-2 terawatts per beam. These intensities would have been impossible to achieve without the use of spatial filtering. Argus was designed to characterize large laser beamlines and laser-target interactions; there was no attempt to achieve fusion ignition in the device, as this was understood to be impossible at the energies Argus was capable of delivering. Argus was, however, used to further explore higher yields of the so-called "exploding pusher" type targets and to develop x-ray diagnostic cameras to view the hot plasma in such targets, a technique crucial to the characterization of target performance on ICF lasers. Argus was capable of producing a total of about 4 terawatts of power in short pulses of up to about 100 picoseconds, or about 2 terawatts in a longer 1 nanosecond pulse, on a 100 micrometer diameter fusion fuel capsule target.
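The power figures follow directly from pulse energy divided by pulse duration. A small sketch of that arithmetic (the helper function is illustrative, not from the source; the energies are the approximate per-beam values quoted above):

```python
def peak_power_tw(energy_joules: float, duration_seconds: float) -> float:
    """Approximate peak power in terawatts for a pulse of given energy and duration."""
    return energy_joules / duration_seconds / 1e12

# Two beams at roughly 1 kJ each, delivered in a 1 ns pulse: about 2 TW total
print(peak_power_tw(2 * 1000.0, 1e-9))
```

This is consistent with the quoted total of about 2 terawatts for the 1 nanosecond pulse mode.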
It became the first laser to perform experiments using X-rays produced by irradiating a hohlraum. The reduced production of hard X-ray energy (via the production of hot electrons) when using frequency-doubled and frequency-tripled laser light was first noticed on Argus; this technique was validated in the direct-drive mode and subsequently used to enhance the coupling efficiency of laser energy to target plasma in experiments on nearly all subsequent laser inertial confinement devices. Argus was shut down and dismantled in September 1981. The maximum fusion yield for target implosions on Argus was about 10⁹ neutrons per shot.
The W71 nuclear warhead was a US thermonuclear warhead developed at Lawrence Livermore National Laboratory in California and deployed on the LIM-49A Spartan missile, a component of the Safeguard Program, an anti-ballistic missile defense system deployed by the US in the 1970s. The W71 was designed to intercept incoming enemy warheads at long range, as far as 450 miles from the launch point. The interception took place at very high altitudes, comparable to low Earth orbit, where there is essentially no air. At these altitudes, the x-rays resulting from the nuclear explosion can destroy incoming reentry vehicles at distances on the order of 10 miles, which made the problem of guiding the missile to the required accuracy much simpler than for earlier designs that had lethal ranges of less than 1,000 feet. The W71 had a yield of around 5 megatons of TNT. The warhead package was a cylinder 42 inches in diameter and 101 inches long; the complete warhead weighed around 2,850 pounds. The W71 was designed to produce great amounts of x-rays while minimizing fission output and debris, to reduce the radar-blackout effect that fission products and debris produce on anti-ballistic missile radar systems.
The W71 design emerged in the mid-1960s as the result of studies of earlier high-altitude nuclear tests carried out before the Partial Nuclear Test Ban Treaty of 1963. A number of tests, notably those of Operation Fishbowl in 1962, demonstrated several poorly understood or underestimated effects. Among these was the behaviour of the x-rays created during the explosion; at low altitudes these tended to be absorbed by the atmosphere within a few tens of meters. At high altitudes, lacking an atmosphere to interact with, the mean free path of the x-rays could be on the order of tens of kilometers. This presented a new method of attacking enemy nuclear reentry vehicles while still at long range from their targets: x-rays hitting the reentry vehicle's outermost layer heat a thin layer of the material so rapidly that shock waves develop, which can cause the heat-shield material on the outside of the RV to separate or flake off; the RV would then break up during reentry. The major advantage of this attack is that it takes place over long distances, as great as 30 kilometres, which covers the majority of the threat tube containing the warhead and the various radar decoys and clutter material that accompany it.
By contrast, an ABM relying on neutron heating had to approach within less than 800 feet of the warhead to damage it, which presented a serious problem of locating the warhead within a threat tube at least a kilometer across and about ten long. Bell received a contract to begin conversion of the earlier LIM-49 Nike Zeus missile for the extended-range role in March 1965; the result was the Zeus EX, or DM-15X2, which used the original Zeus' first stage as its second stage along with a new first stage to offer much greater range. The design was renamed Spartan, keeping the original LIM-49 designation. Tests of the new missile started in April 1970 from Meck Island, part of the Kwajalein Test Range originally set up to test the earlier Nike Zeus system. Because of a perceived need to deploy the system quickly, the team took a "do it once, do it right" approach in which the original test items were designed to be the production models. The warhead for Spartan was designed by Lawrence Livermore National Laboratory, drawing on previous experience from Operation Plowshare.
A nuclear explosion at high altitude has the disadvantage of creating a significant amount of electronic noise and an effect known as nuclear blackout that blinds radars over a large area. Some of these effects are due to the fission fragments released by the explosion, so care was taken to design the bomb to be "clean" in order to reduce them. Operation Plowshare had explored the design of such clean bombs as part of an effort to use nuclear explosives for civilian purposes, where the production of long-lived radionuclides had to be minimized. To maximize the production of x-rays, the W71 is reported to have used a gold tamper rather than the usual depleted uranium or lead. The tamper serves the primary purpose of capturing x-ray energy within the bomb casing while the primary is exploding and triggering the secondary. For this purpose almost any high-Z metal will work; depleted uranium is normally used because the neutrons released by the secondary cause fission in this material and add a significant amount of energy to the total explosive release.
In this case the increase in blast energy would have little effect, as there is little or no atmosphere at the interception altitude to carry that energy, so this reaction is of little value. The use of gold maximizes the production of x-rays, as gold efficiently radiates thermal x-rays; this efficient release of x-rays when heated is the same reason that inertial confinement fusion experiments such as the National Ignition Facility use gold-covered targets. In Congressional testimony on the potential dismantling of the W71, a DOE official described the warhead as "a gold mine". Another advantage of using a gold tamper and lining is that neutron capture events form Au-198, which has a half-life of 2.697 days and a decay energy of 0.41 MeV, in the hard x-ray to gamma-ray spectrum; this helps reduce the nuclear blackout effects. Under good conditions, the W71 had a lethal exo-atmospheric radius of as much as 30 miles, although it was stated to be 12 miles against "soft" targets and as little as 4 miles against hardened warheads. Some 30 to 39 units were produced between 1974 and 1975.
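A half-life of 2.697 days means the activated gold decays away quickly compared with long-lived fission products. A small sketch of the standard half-life formula, N(t) = N0 · 0.5^(t/t½) (the function name is illustrative, not from the source):

```python
def fraction_remaining(t_days: float, half_life_days: float = 2.697) -> float:
    """Fraction of a radionuclide remaining after t_days, given its half-life.

    Default half-life is that of Au-198, as quoted in the text."""
    return 0.5 ** (t_days / half_life_days)

# After one half-life (about 2.7 days), half the Au-198 remains
print(round(fraction_remaining(2.697), 3))  # 0.5
# After ten days, less than a tenth remains
print(round(fraction_remaining(10.0), 3))   # 0.077
```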
The weapons went into service, but were taken r
Neodymium is a chemical element with symbol Nd and atomic number 60. It is a soft silvery metal. Neodymium was discovered in 1885 by the Austrian chemist Carl Auer von Welsbach, and it is present in significant quantities in the ore minerals monazite and bastnäsite. Neodymium is not found in metallic form or unmixed with other lanthanides, and it is usually refined for general use. Although neodymium is classed as a rare earth, it is a fairly common element, no rarer than cobalt, nickel, or copper, and is widely distributed in the Earth's crust. Most of the world's commercial neodymium is mined in China. Neodymium compounds were first commercially used as glass dyes in 1927, and they remain a popular additive in glasses. The color of neodymium compounds, due to the Nd3+ ion, is often a reddish-purple, but it changes with the type of lighting, due to the interaction of the sharp light-absorption bands of neodymium with ambient light enriched with the sharp visible emission bands of mercury, trivalent europium, or terbium. Some neodymium-doped glasses are used in lasers that emit infrared light with wavelengths between 1047 and 1062 nanometers.
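The photon energy corresponding to these infrared laser wavelengths follows from E = hc/λ. A quick back-of-the-envelope check (the helper function is illustrative, not from the source; the constants are standard CODATA values):

```python
H = 6.62607015e-34          # Planck constant, J*s
C = 299792458.0             # speed of light in vacuum, m/s
EV = 1.602176634e-19        # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a given vacuum wavelength in nanometers."""
    return H * C / (wavelength_nm * 1e-9) / EV

# Nd-doped glass / Nd:YAG emission near 1047-1064 nm: roughly 1.2 eV photons
print(round(photon_energy_ev(1064), 2))  # 1.17
```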
These have been used in extremely-high-power applications, such as experiments in inertial confinement fusion. Neodymium is also used with various other substrate crystals, such as yttrium aluminium garnet in the Nd:YAG laser; this laser usually emits infrared at a wavelength of about 1064 nanometers. The Nd:YAG laser is one of the most commonly used solid-state lasers. Another important use of neodymium is as a component in the alloys used to make high-strength neodymium magnets, powerful permanent magnets. These magnets are widely used in such products as microphones, professional loudspeakers, in-ear headphones, high-performance hobby DC electric motors, and computer hard disks, where low magnet mass or strong magnetic fields are required. Larger neodymium magnets are used in generators. Neodymium, a rare-earth metal, was present in classical mischmetal at a concentration of about 18%. Metallic neodymium has a bright, silvery metallic luster, but as one of the more reactive lanthanide rare-earth metals, it quickly oxidizes in ordinary air.
The oxide layer that forms then peels off, exposing the metal to further oxidation; thus, a centimeter-sized sample of neodymium oxidizes completely within a year. Neodymium exists in two allotropic forms, with a transformation from a double hexagonal to a body-centered cubic structure taking place at about 863 °C. Neodymium metal tarnishes in air and burns at about 150 °C to form neodymium(III) oxide:

4 Nd + 3 O2 → 2 Nd2O3

Neodymium is a quite electropositive element; it reacts slowly with cold water, but quite quickly with hot water, to form neodymium hydroxide:

2 Nd + 6 H2O → 2 Nd(OH)3 + 3 H2

Neodymium metal reacts vigorously with all the halogens:

2 Nd + 3 F2 → 2 NdF3
2 Nd + 3 Cl2 → 2 NdCl3
2 Nd + 3 Br2 → 2 NdBr3
2 Nd + 3 I2 → 2 NdI3

Neodymium dissolves readily in dilute sulfuric acid to form solutions that contain the lilac Nd(III) ion; these exist as [Nd(H2O)9]3+ complexes:

2 Nd + 3 H2SO4 → 2 Nd3+ + 3 SO42− + 3 H2

Neodymium compounds include halides such as neodymium fluoride. Naturally occurring neodymium is a mixture of five stable isotopes (142Nd, 143Nd, 145Nd, 146Nd, and 148Nd, with 142Nd being the most abundant) and two long-lived radioisotopes, 144Nd and 150Nd.
In all, 31 radioisotopes of neodymium had been detected as of 2010, with the most stable radioisotopes being the naturally occurring ones, 144Nd and 150Nd. All of the remaining radioactive isotopes have half-lives shorter than eleven days, and the majority of these have half-lives shorter than 70 seconds. Neodymium also has 13 known meta states, with the most stable ones being 139mNd, 135mNd, and 133m1Nd. The primary decay modes for isotopes lighter than the most abundant stable isotope, 142Nd, are electron capture and positron decay, and the primary mode for heavier isotopes is beta-minus decay. The primary decay products of isotopes lighter than 142Nd are praseodymium (Pr) isotopes, and the primary products of heavier isotopes are promethium (Pm) isotopes. Neodymium was discovered by Baron Carl Auer von Welsbach, an Austrian chemist, in Vienna in 1885. He separated neodymium, as well as the element praseodymium, from a material known as didymium by means of fractional crystallization of the double ammonium nitrate tetrahydrates from nitric acid, while following the separation by spectroscopic analysis.
The name neodymium is derived from the Greek words neos, meaning new, and didymos, meaning twin. Double nitrate crystallization was the means of commercial neodymium purification until the 1950s. Lindsay Chemical Division was the first to commercialize large-scale ion-exchange purification of neodymium. Starting in the 1950s, high purity
Optics is the branch of physics that studies the behaviour and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behaviour of visible and infrared light; because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays and radio waves exhibit similar properties. Most optical phenomena can be accounted for using the classical electromagnetic description of light. Complete electromagnetic descriptions of light are, however, often difficult to apply in practice, so practical optics is usually done using simplified models. The most common of these, geometric optics, treats light as a collection of rays that travel in straight lines and bend when they pass through or reflect from surfaces. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics. Historically, the ray-based model of light was developed first, followed by the wave model of light.
Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation. Some phenomena depend on the fact that light has both wave-like and particle-like properties. Explanation of these effects requires quantum mechanics; when considering light's particle-like properties, the light is modelled as a collection of particles called "photons". Quantum optics deals with the application of quantum mechanics to optical systems. Optical science is relevant to and studied in many related disciplines, including astronomy, various engineering fields, and medicine. Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors, telescopes, microscopes, and fibre optics. Optics began with the development of lenses by the ancient Mesopotamians; the earliest known lenses, made from polished crystal quartz, date from as early as 700 BC, for example the Assyrian Layard/Nimrud lens. The ancient Romans and Greeks filled glass spheres with water to make lenses.
These practical developments were followed by the development of theories of light and vision by ancient Greek and Indian philosophers, and the development of geometrical optics in the Greco-Roman world. The word optics comes from the ancient Greek word ὀπτική, meaning "appearance, look". Greek philosophy on optics broke down into two opposing theories on how vision worked, the "intromission theory" and the "emission theory". The intromission approach saw vision as coming from objects casting off copies of themselves that were captured by the eye. With many propagators, including Democritus, Epicurus, and their followers, this theory seems to have some contact with modern theories of what vision really is, but it remained only speculation lacking any experimental foundation. Plato first articulated the emission theory, the idea that visual perception is accomplished by rays emitted by the eyes; he also commented on the parity reversal of mirrors in Timaeus. Some hundred years later, Euclid wrote a treatise entitled Optics in which he linked vision to geometry, creating geometrical optics.
He based his work on Plato's emission theory, wherein he described the mathematical rules of perspective and described the effects of refraction qualitatively, although he questioned that a beam of light from the eye could instantaneously light up the stars every time someone blinked. Ptolemy, in his treatise Optics, held an extramission-intromission theory of vision: the rays from the eye formed a cone, the vertex being within the eye and the base defining the visual field. The rays were sensitive, and conveyed information back to the observer's intellect about the distance and orientation of surfaces. He summarised much of Euclid and went on to describe a way to measure the angle of refraction, though he failed to notice the empirical relationship between it and the angle of incidence. During the Middle Ages, Greek ideas about optics were resurrected and extended by writers in the Muslim world. One of the earliest of these was Al-Kindi, who wrote on the merits of Aristotelian and Euclidean ideas of optics, favouring the emission theory since it could better quantify optical phenomena.
In 984, the Persian mathematician Ibn Sahl wrote the treatise "On burning mirrors and lenses", describing a law of refraction equivalent to Snell's law. He used this law to compute optimum shapes for curved mirrors. In the early 11th century, Alhazen wrote the Book of Optics, in which he explored reflection and refraction and proposed a new system for explaining vision and light based on observation and experiment. He rejected the "emission theory" of Ptolemaic optics, with its rays emitted by the eye, and instead put forward the idea that light reflected in all directions in straight lines from all points of the objects being viewed and then entered the eye, although he was unable to explain how the eye captured the rays. Alhazen's work was largely ignored in the Arabic world, but it was anonymously translated into Latin around 1200 A.D. and further summarised and expanded on by the Polish monk Witelo, making it a standard text on optics in Europe for the next 400 years. In the 13th century in medieval Europe, the English bishop Robert Grosseteste wrote on a wide range of scientific topics and discussed light from four different perspectives: an epistemology of light, a metaphysics or cosmogony of light, an etiology or physics of light, and a theology of light, basing this on the works of Aristotle and on Platonism.
Grosseteste's most famous disciple, Roger Bacon, wrote w
IBM Blue Gene
Blue Gene is an IBM project aimed at designing supercomputers that can reach operating speeds in the PFLOPS range with low power consumption. The project created three generations of supercomputers: Blue Gene/L, Blue Gene/P, and Blue Gene/Q. Blue Gene systems have led the TOP500 and Green500 rankings of the most powerful and most power-efficient supercomputers, respectively, and have also consistently scored top positions in the Graph500 list. The project was awarded the 2009 National Medal of Technology and Innovation. As of 2015, IBM seems to have ended development of the Blue Gene family, though no public announcement has been made. IBM's continuing efforts on the supercomputer scene seem to be concentrated around OpenPOWER, using accelerators such as FPGAs and GPUs to battle the end of Moore's law. In December 1999, IBM announced a US$100 million research initiative for a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding.
The project had two main goals: to advance our understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. Major areas of investigation included how to use this novel platform to meet its scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets at a reasonable cost through novel machine architectures. The initial design for Blue Gene was based on an early version of the Cyclops64 architecture, designed by Monty Denneau. The initial research and development work was pursued at IBM T. J. Watson Research Center, led by William R. Pulleyblank. At IBM, Alan Gara started working on an extension of the QCDOC architecture into a more general-purpose supercomputer: the 4D nearest-neighbor interconnection network was replaced by a network supporting routing of messages from any node to any other. The DOE started funding the development of this system, and it became known as Blue Gene/L.
In November 2004 a 16-rack system, with each rack holding 1,024 compute nodes, achieved first place in the TOP500 list with a Linpack performance of 70.72 TFLOPS, thereby overtaking NEC's Earth Simulator, which had held the title of the fastest computer in the world since 2002. From 2004 through 2007 the Blue Gene/L installation at LLNL gradually expanded to 104 racks, achieving 478 TFLOPS Linpack and 596 TFLOPS peak. The LLNL Blue Gene/L installation held the first position in the TOP500 list for 3.5 years, until in June 2008 it was overtaken by IBM's Cell-based Roadrunner system at Los Alamos National Laboratory, the first system to surpass the 1 petaFLOPS mark. The system was built at the IBM plant in Rochester, Minnesota. While the LLNL installation was the largest Blue Gene/L installation, many smaller installations followed. In November 2006, there were 27 computers on the TOP500 list using the Blue Gene/L architecture; all of these were listed as having an architecture of "eServer Blue Gene Solution". For example, three racks of Blue Gene/L were housed at the San Diego Supercomputer Center.
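The per-node contribution implied by these figures is easy to work out: 16 racks of 1,024 nodes give 16,384 nodes sharing 70.72 TFLOPS. A small sketch of that arithmetic (the helper function is illustrative, not from the source):

```python
def per_node_gflops(racks: int, nodes_per_rack: int, linpack_tflops: float) -> float:
    """Average Linpack GFLOPS per compute node for a homogeneous system."""
    nodes = racks * nodes_per_rack
    return linpack_tflops * 1000.0 / nodes

# The November 2004 16-rack Blue Gene/L system: 16,384 nodes, 70.72 TFLOPS Linpack
print(round(per_node_gflops(16, 1024, 70.72), 2))  # 4.32
```

The low per-node figure reflects the design trade-off described below: slow, low-power nodes in very large numbers.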
While the TOP500 measures performance on a single benchmark application, Blue Gene/L also set records for performance on a wider set of applications. Blue Gene/L was the first supercomputer to run over 100 TFLOPS sustained on a real-world application, namely a three-dimensional molecular dynamics code simulating solidification of molten metal under high pressure and temperature conditions; this achievement won the 2005 Gordon Bell Prize. In June 2006, NNSA and IBM announced that Blue Gene/L had achieved 207.3 TFLOPS on a quantum chemical application. At Supercomputing 2006, Blue Gene/L was awarded the winning prize in all HPC Challenge classes of awards. In 2007, a team from the IBM Almaden Research Center and the University of Nevada ran an artificial neural network half as complex as the brain of a mouse for the equivalent of a second. The name Blue Gene comes from what it was designed to do: help biologists understand the processes of protein folding and gene development. "Blue" is a traditional moniker that IBM uses for many of its products and for the company itself.
The original Blue Gene design was renamed "Blue Gene/C" and eventually Cyclops64. The "L" in Blue Gene/L comes from "Light", as that design's original name was "Blue Light". The "P" version was designed to be a petascale design; "Q" is just the letter after "P". There is no Blue Gene/R. The Blue Gene/L supercomputer was unique in the following aspects. It traded processor speed for lower power consumption: Blue Gene/L used low-frequency, low-power embedded PowerPC cores with floating-point accelerators, and while the performance of each chip was relatively low, the system could achieve better power efficiency for applications that could use large numbers of nodes. It used dual processors per node with two working modes, including a co-processor mode in which one processor handles computation and the other handles communication. It used a system-on-a-chip design: all node components were embedded on one chip, with the exception of 512 MB of external DRAM. It connected a large number of nodes through a three-dimensional torus interconnect, with auxiliary networks for global communications, I/O, and management. Finally, it ran a lightweight OS per node for minimum system overhead (system noise).
In optics, the refractive index or index of refraction of a material is a dimensionless number that describes how fast light propagates through the material. It is defined as n = c/v, where c is the speed of light in vacuum and v is the phase velocity of light in the medium. For example, the refractive index of water is 1.333, meaning that light travels 1.333 times as fast in vacuum as in water. The refractive index determines how much the path of light is bent, or refracted, when entering a material; this is described by Snell's law of refraction, n1 sin θ1 = n2 sin θ2, where θ1 and θ2 are the angles of incidence and refraction of a ray crossing the interface between two media with refractive indices n1 and n2. The refractive indices also determine the amount of light that is reflected on reaching the interface, as well as the critical angle for total internal reflection and Brewster's angle. The refractive index can be seen as the factor by which the speed and the wavelength of the radiation are reduced with respect to their vacuum values: the speed of light in a medium is v = c/n, and the wavelength in that medium is λ = λ0/n, where λ0 is the wavelength of that light in vacuum.
This implies that vacuum has a refractive index of 1, and that the frequency of the wave is not affected by the refractive index. As a result, the energy of the photon, and therefore the perceived color of the refracted light to a human eye (which depends on photon energy), is not affected by the refraction or the refractive index of the medium. Since the refractive index varies with the wavelength, and hence the frequency and energy, of the light, the bending angle differs from color to color, which causes white light to split into its constituent colors when refracted; this is called dispersion. It can be observed in prisms and rainbows, and as chromatic aberration in lenses. Light propagation in absorbing materials can be described using a complex-valued refractive index; the imaginary part then handles the attenuation, while the real part accounts for refraction. The concept of refractive index applies within the full electromagnetic spectrum, from X-rays to radio waves, and it can even be applied to wave phenomena such as sound. In that case the speed of sound is used instead of that of light, and a reference medium other than vacuum must be chosen.
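Snell's law as stated above can be checked numerically; the same formula also reveals the critical angle, since sin θ2 cannot exceed 1. A minimal sketch (the function name and the example values are illustrative, not from the source):

```python
import math

def refraction_angle(n1: float, n2: float, theta1_deg: float) -> float:
    """Angle of refraction in degrees from Snell's law: n1 sin θ1 = n2 sin θ2.

    Raises ValueError when the incidence angle exceeds the critical angle
    (total internal reflection, no refracted ray)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

# Air (n = 1.0) into water (n = 1.333) at 45° incidence: bent toward the normal
print(round(refraction_angle(1.0, 1.333, 45.0), 1))  # 32.0
```

Going the other way, from water into air at a steep enough angle, the function raises, which is exactly the total-internal-reflection condition mentioned above.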
The refractive index n of an optical medium is defined as the ratio of the speed of light in vacuum, c = 299792458 m/s, to the phase velocity v of light in the medium: n = c/v. The phase velocity is the speed at which the crests or the phase of the wave moves, which may be different from the group velocity, the speed at which the pulse of light or the envelope of the wave moves. The definition above is sometimes referred to as the absolute refractive index or the absolute index of refraction, to distinguish it from definitions in which the speed of light in a reference medium other than vacuum is used. Historically, air at a standardized pressure and temperature has been common as a reference medium. Thomas Young was presumably the person who first used, and invented, the name "index of refraction", in 1807. At the same time he changed this value of refractive power into a single number, instead of the traditional ratio of two numbers; the ratio had the disadvantage of different appearances. Newton, who called it the "proportion of the sines of incidence and refraction", wrote it as a ratio of two numbers, like "529 to 396".
Hauksbee, who called it the "ratio of refraction", wrote it as a ratio with a fixed numerator, like "10000 to 7451.9". Hutton wrote it as a ratio with a fixed denominator, like "1.3358 to 1". Young did not use a symbol for the index of refraction in 1807. In later years, others started using different symbols: n, m, and µ; the symbol n gradually prevailed. For visible light most transparent media have refractive indices between 1 and 2. A few examples are given in the adjacent table; these values are measured at the yellow doublet D-line of sodium, with a wavelength of 589 nanometers, as is conventionally done. Gases at atmospheric pressure have refractive indices close to 1 because of their low density. Almost all solids and liquids have refractive indices above 1.3, with aerogel as the clear exception; aerogel is a very low density solid that can be produced with a refractive index in the range from 1.002 to 1.265. Moissanite lies at the other end of the range, with a refractive index as high as 2.65. Most plastics have refractive indices in the range from 1.3 to 1.7, but some high-refractive-index polymers can have values as high as 1.76.
For infrared light refractive indices can be considerably higher. Germanium is transparent in the wavelength region from 2 to 14 µm and has a refractive index of about 4. A new type of material, called a topological insulator, was recently found to have a refractive index of up to 6 in the near- to mid-infrared frequency range; moreover, topological insulators are transparent, and these properties make them significant materials for infrared optics. According to the theory of relativity, no information can travel faster than the speed of light in vacuum, but this does not mean that the refractive index cannot be lower than 1; the refractive index measures the phase velocity of light, which does not carry information. The phase velocity is the speed at which the crests of the wave move, and it can be faster than the speed of light in vacuum, thereby giving a refractive index below 1. This can occur close to resonance frequencies, in absorbing media, in plasmas, and for X-rays. In the X-ray regime the refractive indices are