In optics, the refractive index or index of refraction of a material is a dimensionless number that describes how fast light propagates through the material. It is defined as n = c/v, where c is the speed of light in vacuum and v is the phase velocity of light in the medium. For example, the refractive index of water is 1.333, meaning that light travels 1.333 times as fast in vacuum as in water. The refractive index determines how much the path of light is bent, or refracted, when entering a material; this is described by Snell's law of refraction, n1 sin θ1 = n2 sin θ2, where θ1 and θ2 are the angles of incidence and refraction of a ray crossing the interface between two media with refractive indices n1 and n2. The refractive indices also determine the amount of light reflected at the interface, as well as the critical angle for total internal reflection and Brewster's angle. The refractive index can be seen as the factor by which the speed and the wavelength of the radiation are reduced with respect to their vacuum values: the speed of light in a medium is v = c/n, and the wavelength in that medium is λ = λ0/n, where λ0 is the wavelength of that light in vacuum.
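The relations above are easy to compute directly. A minimal sketch in Python (the air and water indices are the usual textbook values; the function names are ours, chosen for illustration):

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Angle of refraction from Snell's law: n1 sin(theta1) = n2 sin(theta2)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None  # total internal reflection: no refracted ray exists
    return math.degrees(math.asin(s))

def critical_angle(n1, n2):
    """Critical angle for total internal reflection (requires n1 > n2)."""
    return math.degrees(math.asin(n2 / n1))

def brewster_angle(n1, n2):
    """Angle of incidence at which reflected light is fully polarized."""
    return math.degrees(math.atan(n2 / n1))

n_air, n_water = 1.000, 1.333
print(refraction_angle(n_air, n_water, 45.0))  # bends toward the normal, ~32.0 deg
print(critical_angle(n_water, n_air))          # ~48.6 deg
print(brewster_angle(n_air, n_water))          # ~53.1 deg
```

Note how a ray going from water into air at 60° exceeds the critical angle, so `refraction_angle` returns `None`: the light is totally internally reflected.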
This implies that vacuum has a refractive index of 1, and that the frequency of the wave is not affected by the refractive index. As a result, the energy of the photon, and therefore the perceived color of the refracted light to a human eye (which depends on photon energy), is not affected by the refraction or the refractive index of the medium. Since the refractive index varies with wavelength, so does the bending angle, and this causes white light to split into its constituent colors; this is called dispersion. It can be observed in prisms and rainbows, and as chromatic aberration in lenses. Light propagation in absorbing materials can be described using a complex-valued refractive index; the imaginary part handles the attenuation, while the real part accounts for refraction. The concept of refractive index applies across the full electromagnetic spectrum, from X-rays to radio waves, and it can also be applied to wave phenomena such as sound. In that case the speed of sound is used instead of the speed of light, and a reference medium other than vacuum must be chosen.
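The role of the imaginary part κ of a complex index n + iκ can be made concrete: the intensity of a plane wave falls off as exp(−αz), with absorption coefficient α = 4πκ/λ0. A small sketch (the value of κ below is purely illustrative, not a measured material constant):

```python
import math

def absorption_coefficient(kappa, wavelength_vacuum):
    """Intensity absorption coefficient alpha = 4*pi*kappa / lambda0 (1/m)."""
    return 4 * math.pi * kappa / wavelength_vacuum

def transmitted_fraction(kappa, wavelength_vacuum, depth):
    """Fraction of intensity remaining after traveling `depth` meters."""
    return math.exp(-absorption_coefficient(kappa, wavelength_vacuum) * depth)

# Illustrative numbers: kappa = 1e-7 at 500 nm, through 1 cm of material.
print(transmitted_fraction(1e-7, 500e-9, 0.01))
```

Even a tiny κ produces measurable attenuation over macroscopic distances, which is why the imaginary part matters for nominally "transparent" media.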
The refractive index n of an optical medium is defined as the ratio of the speed of light in vacuum, c = 299792458 m/s, to the phase velocity v of light in the medium: n = c/v. The phase velocity is the speed at which the crests, or the phase, of the wave move, which may be different from the group velocity, the speed at which the pulse of light or the envelope of the wave moves. The definition above is sometimes referred to as the absolute refractive index or the absolute index of refraction, to distinguish it from definitions in which the speed of light in a reference medium other than vacuum is used. Air at a standardized pressure and temperature has been common as such a reference medium. Thomas Young was the first to use, and indeed invent, the name "index of refraction", in 1807. At the same time he changed the value of refractive power into a single number, instead of the traditional ratio of two numbers; the ratio had the disadvantage of appearing in different forms. Newton, who called it the "proportion of the sines of incidence and refraction", wrote it as a ratio of two numbers, like "529 to 396".
Hauksbee, who called it the "ratio of refraction", wrote it as a ratio with a fixed numerator, like "10000 to 7451.9". Hutton wrote it as a ratio with a fixed denominator, like 1.3358 to 1. In 1807, Young did not use a symbol for the index of refraction. In later years, others started using different symbols: n, m, and µ; the symbol n prevailed. For visible light most transparent media have refractive indices between 1 and 2. A few examples are given in the adjacent table; these values are measured at the yellow doublet D-line of sodium, with a wavelength of 589 nanometers, as is conventionally done. Gases at atmospheric pressure have refractive indices close to 1 because of their low density. Almost all solids and liquids have refractive indices above 1.3, with aerogel as the clear exception. Aerogel is a very low density solid that can be produced with a refractive index in the range from 1.002 to 1.265. Moissanite lies at the other end of the range with a refractive index as high as 2.65. Most plastics have refractive indices in the range from 1.3 to 1.7, but some high-refractive-index polymers can have values as high as 1.76.
For infrared light refractive indices can be considerably higher. Germanium is transparent in the wavelength region from 2 to 14 µm and has a refractive index of about 4. A new class of materials, called topological insulators, was found to have refractive indices of up to 6 in the near- to mid-infrared frequency range. Moreover, topological insulators are transparent in this range; these properties make them significant materials for infrared optics. According to the theory of relativity, no information can travel faster than the speed of light in vacuum, but this does not mean that the refractive index cannot be lower than 1. The refractive index measures the phase velocity of light, which is the speed at which the crests of the wave move; this can be faster than the speed of light in vacuum, thereby giving a refractive index below 1. This can occur close to resonance frequencies, in absorbing media, in plasmas, and for X-rays. In the X-ray regime the refractive indices are lower than, but very close to, 1.
The viscosity of a fluid is a measure of its resistance to deformation at a given rate. For liquids, it corresponds to the informal concept of "thickness": for example, syrup has a higher viscosity than water. Viscosity can be conceptualized as quantifying the frictional force that arises between adjacent layers of fluid that are in relative motion. For instance, when a fluid is forced through a tube, it flows more quickly near the tube's axis than near its walls. In such a case, experiments show that some stress, such as a pressure difference between the two ends of the tube, is needed to sustain the flow; this is because a force is required to overcome the friction between the layers of the fluid which are in relative motion: the strength of this force is proportional to the viscosity. A fluid that has no resistance to shear stress is known as an inviscid fluid. Zero viscosity is observed only at very low temperatures in superfluids. Otherwise, the second law of thermodynamics requires all fluids to have positive viscosity. A fluid with a very high viscosity, such as pitch, may appear to be a solid. The word "viscosity" is derived from the Latin "viscum", meaning mistletoe, and also a viscous glue made from mistletoe berries.
In materials science and engineering, one is often interested in understanding the forces, or stresses, involved in the deformation of a material. For instance, if the material were a simple spring, the answer would be given by Hooke's law, which says that the force experienced by a spring is proportional to the distance displaced from equilibrium. Stresses which can be attributed to the deformation of a material from some rest state are called elastic stresses. In other materials, stresses are present which can be attributed to the rate of change of the deformation over time; these are called viscous stresses. For instance, in a fluid such as water the stresses which arise from shearing the fluid do not depend on the distance the fluid has been sheared; rather, they depend on how quickly the shearing occurs. Viscosity is the material property which relates the viscous stresses in a material to the rate of change of a deformation. Although it applies to general flows, it is easy to visualize and define in a simple shearing flow, such as a planar Couette flow. In the Couette flow, a fluid is trapped between two infinitely large plates, one fixed and one in parallel motion at constant speed u.
If the speed of the top plate is low enough, then in steady state the fluid particles move parallel to it, and their speed varies from 0 at the bottom to u at the top. Each layer of fluid moves faster than the one just below it, and friction between them gives rise to a force resisting their relative motion. In particular, the fluid applies on the top plate a force in the direction opposite to its motion, and an equal but opposite force on the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed. In many fluids, the flow velocity is observed to vary linearly from zero at the bottom to u at the top. Moreover, the magnitude F of the force acting on the top plate is found to be proportional to the speed u and the area A of each plate, and inversely proportional to their separation y: F = μA u/y. The proportionality factor μ is the viscosity of the fluid, with units of Pa·s. The ratio u/y is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction perpendicular to the plates.
If the velocity does not vary linearly with y, the appropriate generalization is τ = μ ∂u/∂y, where τ = F/A and ∂u/∂y is the local shear velocity. This expression is referred to as Newton's law of viscosity. In shearing flows with planar symmetry, it is what defines μ. It is a special case of the general definition of viscosity, which can be expressed in coordinate-free form. Use of the Greek letter mu (μ) for the viscosity is common among mechanical and chemical engineers, as well as physicists. However, the Greek letter eta (η) is used by chemists and the IUPAC. The viscosity μ is sometimes referred to as the shear viscosity. However, at least one author discourages the use of this terminology, noting that μ can appear in nonshearing flows in addition to shearing flows. In general terms, the viscous stresses in a fluid are defined as those resulting from the relative velocity of different fluid particles; as such, the viscous stresses must depend on spatial gradients of the flow velocity. If the velocity gradients are small, then to a first approximation the viscous stresses depend only on the first derivatives of the velocity.
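Putting the Couette-flow force law and Newton's law of viscosity together, a minimal numeric sketch (the plate dimensions and speed are illustrative; μ for water at 20 °C is about 1.0 × 10⁻³ Pa·s):

```python
def shear_stress(mu, du_dy):
    """Newton's law of viscosity: tau = mu * du/dy (Pa)."""
    return mu * du_dy

def plate_force(mu, area, speed, separation):
    """Force on the moving plate in planar Couette flow: F = mu * A * u / y (N)."""
    return shear_stress(mu, speed / separation) * area

# Water (mu ~ 1.0e-3 Pa*s), 1 m^2 plates 1 mm apart, top plate moving at 0.1 m/s.
print(plate_force(1.0e-3, 1.0, 0.1, 1.0e-3))  # 0.1 N
```

The linear velocity profile makes ∂u/∂y constant and equal to u/y, which is why the two formulas agree for this flow.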
Epoxy is either any of the basic components or the cured end products of epoxy resins, as well as a colloquial name for the epoxide functional group. Epoxy resins, also known as polyepoxides, are a class of reactive prepolymers and polymers which contain epoxide groups. Epoxy resins may be reacted either with themselves through catalytic homopolymerisation, or with a wide range of co-reactants including polyfunctional amines, phenols and thiols; these co-reactants are referred to as hardeners or curatives, and the cross-linking reaction is referred to as curing. Reaction of polyepoxides with themselves or with polyfunctional hardeners forms a thermosetting polymer with favorable mechanical properties and high thermal and chemical resistance. Epoxy has a wide range of applications, including metal coatings, use in electronics/electrical components/LEDs, high tension electrical insulators, paint brush manufacturing, fiber-reinforced plastic materials and structural adhesives. Epoxy is sometimes used as a glue.
Epoxy resins are low molecular weight pre-polymers or higher molecular weight polymers which contain at least two epoxide groups. The epoxide group is also sometimes referred to as a glycidyl or oxirane group. A wide range of epoxy resins are produced industrially; the raw materials for epoxy resin production are today largely petroleum derived, although some plant derived sources are now becoming commercially available. Epoxy resins are polymeric or semi-polymeric materials or an oligomer, and as such rarely exist as pure substances, since variable chain length results from the polymerisation reaction used to produce them. High purity grades can be produced for certain applications, e.g. using a distillation purification process. One downside of high purity liquid grades is their tendency to form crystalline solids due to their regular structure, which then require melting to enable processing. An important criterion for epoxy resins is the epoxide group content; this is expressed as the specific amount of substance of epoxide groups in the material under consideration, calculated as the amount of substance of epoxide groups in a sample, n, divided by the mass m of that sample, in this case the mass of the resin.
The SI unit for this quantity is mol/kg, or multiples thereof. Several deprecated quantities are still in use. One is the so-called "epoxide number", which is not actually a number and should therefore not be referred to as such, but is instead the ratio of the amount of substance of epoxide groups, n, to the mass m of the material, with the SI unit mol/kg. The inverse of the epoxide number is called the "epoxide equivalent weight": the ratio of the mass of a sample of the resin to the amount of substance of epoxide groups present in that sample, with the SI unit kg/mol; it too is a deprecated quantity. The specific amount of substance of epoxide groups is used to calculate the mass of co-reactant to use when curing epoxy resins. Epoxies are typically cured with stoichiometric or near-stoichiometric quantities of curative to achieve maximum physical properties. As with other classes of thermoset polymer materials, blending different grades of epoxy resin, as well as the use of additives, plasticizers or fillers, is common to achieve the desired processing or final properties, or to reduce cost.
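The stoichiometric calculation described above is simple arithmetic. A sketch (the resin equivalent weight and amine hydrogen equivalent weight below are illustrative round numbers, not data for any specific product):

```python
def epoxide_content_from_eew(eew_g_per_eq):
    """Specific amount of epoxide groups (mol/kg) from equivalent weight (g/eq)."""
    return 1000.0 / eew_g_per_eq

def hardener_phr(resin_eew, amine_hydrogen_eq_weight):
    """Parts of amine hardener per hundred parts resin for 1:1 stoichiometry."""
    return 100.0 * amine_hydrogen_eq_weight / resin_eew

# Illustrative: a liquid resin with EEW 190 g/eq cured with an amine of AHEW 30 g/eq.
print(epoxide_content_from_eew(190.0))  # ~5.26 mol/kg of epoxide groups
print(hardener_phr(190.0, 30.0))        # ~15.8 parts hardener per hundred resin
```

In practice formulators adjust slightly around the stoichiometric ratio, but the equivalent-weight arithmetic is the starting point.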
Use of blending and fillers is often referred to as formulating. The most important epoxy resins are produced by combining epichlorohydrin and bisphenol A to give bisphenol A diglycidyl ethers. Increasing the ratio of bisphenol A to epichlorohydrin during manufacture produces higher molecular weight linear polyethers with glycidyl end groups, which are semi-solid to hard crystalline materials at room temperature depending on the molecular weight achieved; this route of synthesis is known as the "taffy" process. A more modern method of manufacturing higher molecular weight epoxy resins is to start with liquid epoxy resin, add a calculated amount of bisphenol A, and then add a catalyst and heat the reaction to circa 160 °C; this process is known as "advancement". There are numerous patents and articles on this process, which has been popular for over 20 years. As the molecular weight of the resin increases, the epoxide content reduces and the material behaves more and more like a thermoplastic. High molecular weight polycondensates form a class known as phenoxy resins and contain no epoxide groups.
These resins do, however, contain hydroxyl groups throughout the backbone, which may undergo other cross-linking reactions, e.g. with aminoplasts and isocyanates. Bisphenol F may undergo epoxy resin formation in a similar fashion to bisphenol A; these resins have lower viscosity and a higher mean epoxy content per gramme than bisphenol A resins, which gives them increased chemical resistance once cured. Reaction of phenols with formaldehyde and subsequent glycidylation with epichlorohydrin produces epoxidised novolacs, such as epoxy phenol novolacs and epoxy cresol novolacs; these are viscous to solid resins with a typical mean epoxide functionality of around 2 to 6. The high epoxide functionality of these resins forms a highly crosslinked polymer network displaying high temperature and chemical resistance, but low flexibility. A related class is cycloaliphatic epoxy resin, which contains one or more cycloaliphatic rings in the molecule (e.g. 3,4-epoxycyclohexylmethyl-3,4-epoxycyclohexanecarboxylate).
Telecommunication is the transmission of signs, messages, writings and sounds, or information of any nature, by wire, optical or other electromagnetic systems. Telecommunication occurs when the exchange of information between communication participants includes the use of technology; the information is transmitted either electrically over physical media, such as cables, or via electromagnetic radiation. Such transmission paths are often divided into communication channels, which afford the advantages of multiplexing. Since the Latin term communicatio is considered the social process of information exchange, the term telecommunications is often used in its plural form because it involves many different technologies. Early means of communicating over a distance included visual signals, such as beacons, smoke signals, semaphore telegraphs, signal flags and optical heliographs. Other examples of pre-modern long-distance communication included audio messages, such as coded drumbeats, lung-blown horns and loud whistles. 20th- and 21st-century technologies for long-distance communication involve electrical and electromagnetic technologies, such as telegraph and teleprinter, radio, microwave transmission, fiber optics and communications satellites.
A revolution in wireless communication began in the first decade of the 20th century with the pioneering developments in radio communications by Guglielmo Marconi, who won the Nobel Prize in Physics in 1909, and by other notable pioneering inventors and developers in the field of electrical and electronic telecommunications. These included Charles Wheatstone and Samuel Morse, Alexander Graham Bell, Edwin Armstrong and Lee de Forest, as well as Vladimir K. Zworykin, John Logie Baird and Philo Farnsworth. The word telecommunication is a compound of the Greek prefix tele-, meaning distant, far off, or afar, and the Latin communicare, meaning to share. Its modern use is adapted from the French, because its written use was first recorded in 1904 by the French engineer and novelist Édouard Estaunié. Communication was first used as an English word in the late 14th century; it comes from Old French comunicacion, from Latin communicationem, a noun of action from the past participle stem of communicare, "to share, divide out".
Homing pigeons have been used throughout history by different cultures. Pigeon post had Persian roots, and was later used by the Romans to aid their military. Frontinus said that Julius Caesar used pigeons as messengers in his conquest of Gaul, and the Greeks conveyed the names of the victors at the Olympic Games to various cities using homing pigeons. In the early 19th century, the Dutch government used the system in Sumatra, and in 1849, Paul Julius Reuter started a pigeon service to fly stock prices between Aachen and Brussels, a service that operated for a year until the gap in the telegraph link was closed. In the Middle Ages, chains of beacons were used on hilltops as a means of relaying a signal. Beacon chains suffered the drawback that they could only pass a single bit of information, so the meaning of a message such as "the enemy has been sighted" had to be agreed upon in advance. One notable instance of their use was during the Spanish Armada, when a beacon chain relayed a signal from Plymouth to London. In 1792, Claude Chappe, a French engineer, built the first fixed visual telegraphy system between Lille and Paris.
However, semaphore suffered from the need for skilled operators and expensive towers at intervals of ten to thirty kilometres. As a result of competition from the electrical telegraph, the last commercial semaphore line was abandoned in 1880. On 25 July 1837 the first commercial electrical telegraph was demonstrated by English inventor Sir William Fothergill Cooke and English scientist Sir Charles Wheatstone. Both inventors viewed their device as "an improvement to the electromagnetic telegraph", not as a new device. Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on 2 September 1837; his code was an important advance over Wheatstone's signaling method. The first transatlantic telegraph cable was completed on 27 July 1866, allowing transatlantic telecommunication for the first time. The conventional telephone was invented independently by Alexander Bell and Elisha Gray in 1876. Antonio Meucci had invented the first device that allowed the electrical transmission of voice over a line in 1849.
However, Meucci's device was of little practical value because it relied upon the electrophonic effect and thus required users to place the receiver in their mouth to "hear" what was being said. The first commercial telephone services were set up in 1878 and 1879 on both sides of the Atlantic, in the cities of New Haven and London. Starting in 1894, Italian inventor Guglielmo Marconi began developing wireless communication using the newly discovered phenomenon of radio waves, showing by 1901 that they could be transmitted across the Atlantic Ocean; this was the start of wireless telegraphy by radio. Early transmissions of voice and music had little success. World War I accelerated the development of radio for military communications. After the war, commercial radio AM broadcasting began in the 1920s and became an important mass medium for entertainment and news. World War II again accelerated development of radio for the wartime purposes of aircraft and land communication, radio navigation and radar. Development of stereo FM broadcasting of radio
The optical microscope, often referred to as the light microscope, is a type of microscope that uses visible light and a system of lenses to magnify images of small objects. Optical microscopes are the oldest design of microscope and were invented in their present compound form in the 17th century. Basic optical microscopes can be very simple, although many complex designs aim to improve resolution and sample contrast. They are commonly used in the classroom and at home, unlike the electron microscope, which is used for viewing at much higher magnification. The image from an optical microscope can be captured by normal, photosensitive cameras to generate a micrograph. Originally images were captured by photographic film, but modern developments in CMOS and charge-coupled device (CCD) cameras allow the capture of digital images. Purely digital microscopes are now available which use a CCD camera to examine a sample, showing the resulting image directly on a computer screen without the need for eyepieces. Alternatives to optical microscopy which do not use visible light include scanning electron microscopy, transmission electron microscopy and scanning probe microscopy.
On 8 October 2014, the Nobel Prize in Chemistry was awarded to Eric Betzig, William Moerner and Stefan Hell for "the development of super-resolved fluorescence microscopy", which brings "optical microscopy into the nanodimension". There are two basic types of optical microscopes: simple microscopes and compound microscopes. A simple microscope is one which uses a single lens or group of lenses for magnification. A compound microscope uses several lenses to enhance the magnification of an object. The vast majority of modern research microscopes are compound microscopes, while some cheaper commercial digital microscopes are simple single-lens microscopes. Compound microscopes can be further divided into a variety of other types of microscopes which differ in their optical configurations and intended purposes. A simple microscope uses a lens or set of lenses to enlarge an object through angular magnification alone, giving the viewer an erect enlarged virtual image; the use of a single convex lens or groups of lenses is found in simple magnification devices such as the magnifying glass and eyepieces for telescopes and microscopes.
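As a rough numeric illustration of angular magnification (the focal lengths and lens powers below are illustrative; 250 mm is the conventional near-point distance used in these formulas):

```python
def simple_magnifier_power(focal_length_mm, near_point_mm=250.0):
    """Angular magnification of a simple magnifier, image at infinity: M = 250/f."""
    return near_point_mm / focal_length_mm

def compound_magnification(objective_power, eyepiece_power):
    """Total magnification of a compound microscope: objective x eyepiece."""
    return objective_power * eyepiece_power

print(simple_magnifier_power(50.0))        # a 50 mm lens gives about 5x
print(compound_magnification(40.0, 10.0))  # 40x objective with a 10x eyepiece: 400x
```

This is why a compound objective/eyepiece combination reaches far higher magnification than a single lens: the two stages multiply.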
A compound microscope uses a lens close to the object being viewed to collect light which focuses a real image of the object inside the microscope. That image is magnified by a second lens or group of lenses that gives the viewer an enlarged inverted virtual image of the object; the use of a compound objective/eyepiece combination allows for much higher magnification. Common compound microscopes feature exchangeable objective lenses, allowing the user to adjust the magnification. A compound microscope enables more advanced illumination setups, such as phase contrast. There are many variants of the compound optical microscope design for specialized purposes; some of these are physical design differences allowing specialization for certain purposes: Stereo microscope, a low-powered microscope which provides a stereoscopic view of the sample used for dissection. Comparison microscope, which has two separate light paths allowing direct comparison of two samples via one image in each eye. Inverted microscope, for studying samples from below.
Fiber optic connector inspection microscope, designed for connector end-face inspection. Traveling microscope, for studying samples of high optical resolution. Other microscope variants are designed for different illumination techniques: Petrographic microscope, whose design includes a polarizing filter, rotating stage and gypsum plate to facilitate the study of minerals or other crystalline materials whose optical properties can vary with orientation. Polarizing microscope, similar to the petrographic microscope. Phase contrast microscope, which applies the phase contrast illumination method. Epifluorescence microscope, designed for analysis of samples which include fluorophores. Confocal microscope, a widely used variant of epifluorescent illumination which uses a scanning laser to illuminate a sample for fluorescence. Two-photon microscope, used to image fluorescence deeper in scattering media and reduce photobleaching in living samples. Student microscope – a low-power microscope with simplified controls and sometimes low-quality optics, designed for school use or as a starter instrument for children.
Ultramicroscope, an adapted light microscope that uses light scattering to allow viewing of tiny particles whose diameter is below or near the wavelength of visible light. Microscopes can be partly or wholly computer-controlled, with various levels of automation. Digital microscopy allows greater analysis of a microscope image, for example measurements of distances and areas, and quantitation of a fluorescent or histological stain. Low-powered digital microscopes, USB microscopes, are also commercially available; these are essentially webcams with a high-powered macro lens and generally do not use transillumination. The camera attaches directly to the USB port of a computer, so that the images are shown directly on the monitor; they offer modest magnifications without the need to use eyepieces, and at low cost. High power illumination is usually provided by an LED source or sources adjacent to the camera lens. Digital microscopy with very low light levels, to avoid damage to vulnerable biological samples, is available using sensitive photon-counting digital cameras.
Conservation and restoration of cultural heritage
The conservation and restoration of cultural heritage focuses on protection and care of tangible cultural heritage, including artworks, architecture and museum collections. Conservation activities include preventive conservation, documentation, research and education; this field is allied with conservation science and registrars. Conservation of cultural heritage involves protection and restoration using "any methods that prove effective in keeping that property in as close to its original condition as possible for as long as possible." Conservation of cultural heritage is associated with art collections and museums and involves collection care and management through tracking, documentation, storage, preventative conservation, restoration. The scope has widened from art conservation, involving protection and care of artwork and architecture, to conservation of cultural heritage including protection and care of a broad set of other cultural and historical works. Conservation of cultural heritage can be described as a type of ethical stewardship.
Conservation of cultural heritage applies simple ethical guidelines, such as minimal intervention. There are often compromises between preserving appearance, maintaining original design and material properties, and the ability to reverse changes. Reversibility is now emphasized so as to reduce problems with future treatment and use. In order for conservators to decide upon an appropriate conservation strategy and apply their professional expertise accordingly, they must take into account the views of the stakeholders, the values and meaning of the work, and the physical needs of the material. Cesare Brandi, in his Theory of Restoration, describes restoration as "the methodological moment in which the work of art is appreciated in its material form and in its historical and aesthetic duality, with a view to transmitting it to the future". Some consider the tradition of conservation of cultural heritage in Europe to have begun in 1565 with the restoration of the Sistine Chapel frescoes, but more ancient examples include the work of Cassiodorus.
The care of cultural heritage has a long history, one aimed primarily at fixing and mending objects for their continued use and aesthetic enjoyment. Until the early 20th century, artists were normally the ones called upon to repair damaged artworks. During the 19th century, the fields of science and art became increasingly intertwined as scientists such as Michael Faraday began to study the damaging effects of the environment on works of art. Louis Pasteur carried out scientific analysis on paint as well; however, the first organized attempt to apply a theoretical framework to the conservation of cultural heritage came with the founding in the United Kingdom of the Society for the Protection of Ancient Buildings in 1877. The society was founded by William Morris and Philip Webb, both of whom were influenced by the writings of John Ruskin. During the same period, a French movement with similar aims was being developed under the direction of Eugène Viollet-le-Duc, an architect and theorist famous for his restorations of medieval buildings.
Conservation of cultural heritage as a distinct field of study developed in Germany, where in 1888 Friedrich Rathgen became the first chemist to be employed by a museum, the Königlichen Museen, Berlin. He not only developed a scientific approach to the care of objects in the collections, but disseminated this approach by publishing a Handbook of Conservation in 1898. The early development of conservation of cultural heritage in any area of the world is usually linked to the creation of positions for chemists within museums. In the United Kingdom, pioneering research into painting materials and conservation, and into stone conservation, was conducted by Arthur Pillans Laurie, academic chemist and Principal of Heriot-Watt University, from 1900. Laurie's interests were fostered by William Holman Hunt. In 1924 the chemist Dr Harold Plenderleith began to work at the British Museum with Dr Alexander Scott in the newly created Research Laboratory, although he was formally employed by the Department of Scientific and Industrial Research in the early years.
Plenderleith's appointment may be said to have given birth to the conservation profession in the UK, although there had been craftsmen in many museums and in the commercial art world for generations. This department was created by the museum to address the deteriorating condition of objects in the collection, damage which was a result of their being stored in the London Underground tunnels during the First World War. The creation of this department moved the focus for the development of conservation theory and practice from Germany to Britain, and made the latter a prime force in this fledgling field. In 1956 Plenderleith wrote a significant handbook called The Conservation of Antiquities and Works of Art, which supplanted Rathgen's earlier tome and set new standards for the development of art and conservation science. In the United States, the development of conservation of cultural heritage can be traced to the Fogg Art Museum and Edward Waldo Forbes, its director from 1909 to 1944. He encouraged technical investigation, and was Chairman of the Advisory Committee for the first technical journal, Technical Studies in the Field of the Fine Arts, published by the Fogg from 1932 to 1942.
He also brought chemists onto the museum staff. Rutherford John Gettens was the first such chemist in the US to be permanently employed by an art museum; he worked with George L. Stout, the founder and first editor of Technical Studies. Gettens and Stout co-authored Painting Materials: A Short Encyclopaedia.
Photodetectors, also called photosensors, are sensors of light or other electromagnetic radiation. A photodetector has a p–n junction that converts light photons into current; the absorbed photons make electron–hole pairs in the depletion region. Photodiodes and phototransistors are a few examples of photodetectors. Solar cells convert some of the light energy absorbed into electrical energy. Photodetectors may be classified by their mechanism for detection: Photoemission or photoelectric effect: photons cause electrons to transition from the conduction band of a material to free electrons in a vacuum or gas. Thermal: photons cause electrons to transition to mid-gap states, then decay back to lower bands, inducing phonon generation and thus heat. Polarization: photons induce changes in the polarization states of suitable materials, which may lead to a change in index of refraction or other polarization effects. Photochemical: photons induce a chemical change in a material. Weak interaction effects: photons induce secondary effects, such as in photon drag detectors, or gas pressure changes in Golay cells.
Photodetectors may be used in different configurations. Single sensors may detect overall light levels. A 1-D array of photodetectors, as in a spectrophotometer or a line scanner, may be used to measure the distribution of light along a line. A 2-D array of photodetectors may be used as an image sensor to form images from the pattern of light before it. A photodetector or array is typically covered by an illumination window, sometimes having an anti-reflective coating. There are a number of performance metrics, also called figures of merit, by which photodetectors are characterized and compared:
- Spectral response: the response of a photodetector as a function of photon frequency.
- Quantum efficiency: the number of carriers generated per photon.
- Responsivity: the output current divided by the total light power falling upon the photodetector.
- Noise-equivalent power: the amount of light power needed to generate a signal comparable in size to the noise of the device.
- Detectivity: the square root of the detector area divided by the noise-equivalent power.
- Gain: the output current of a photodetector divided by the current directly produced by the photons incident on the detector, i.e. the built-in current gain.
- Dark current: the current flowing through a photodetector in the absence of light.
- Response time: the time needed for a photodetector to go from 10% to 90% of final output.
- Noise spectrum: the intrinsic noise voltage or current as a function of frequency; this can be represented in the form of a noise spectral density.
- Nonlinearity: the RF output is limited by the nonlinearity of the photodetector.
Grouped by mechanism, photodetectors include the following devices:
- Gaseous ionization detectors are used in experimental particle physics to detect photons and particles with sufficient energy to ionize gas atoms or molecules. Electrons and ions generated by ionization cause a current flow.
- Photomultiplier tubes contain a photocathode which emits electrons when illuminated; the electrons are then amplified by a chain of dynodes.
- Phototubes contain a photocathode which emits electrons when illuminated, such that the tube conducts a current proportional to the light intensity.
- Microchannel plate detectors and silicon-based photomultipliers.
- Active-pixel sensors are image sensors made in a complementary metal-oxide-semiconductor (CMOS) process; known as CMOS image sensors, APSs are used in cell phone cameras, web cameras, and some DSLRs.
- Cadmium zinc telluride radiation detectors can operate in direct-conversion mode at room temperature, unlike some other materials which require liquid-nitrogen cooling. Their relative advantages include high sensitivity to X-rays and gamma rays, due to the high atomic numbers of Cd and Te, and better energy resolution than scintillator detectors.
- Charge-coupled devices are used to record images in astronomy, digital photography, and digital cinematography. Before the 1990s, photographic plates were most common in astronomy; the next generation of astronomical instruments, such as the Astro-E2, include cryogenic detectors.
- HgCdTe infrared detectors. Detection occurs when an infrared photon of sufficient energy kicks an electron from the valence band to the conduction band.
Such an electron is collected by a suitable external readout integrated circuit and transformed into an electric signal.
- LEDs can be reverse-biased to act as photodiodes; see LEDs as Photodiode Light Sensors.
- Photoresistors or light-dependent resistors (LDRs): the resistance of an LDR decreases with increasing intensity of the light falling on it.
- Photodiodes can operate in photovoltaic mode or photoconductive mode. Photodiodes are typically combined with low-noise analog electronics to convert the photocurrent into a voltage that can be digitized.
- Phototransistors act like amplifying photodiodes.
- Quantum dot photoconductors or photodiodes can handle wavelengths in the visible and infrared spectral regions.
- Semiconductor detectors are employed as particle detectors.
- Silicon drift detectors are X-ray radiation detectors used in X-ray spectrometry and electron microscopy.
- Photovoltaic cells or solar cells produce a voltage and supply an electric current when illuminated.
- Bolometers measure the power of incident electromagnetic radiation via the heating of a material with a temperature-dependent electrical resistance.
A microbolometer is a specific type of bolometer used as a detector in a thermal camera. Cryogenic detectors are sufficiently sensitive to measure the energy of single X-ray and infrared photons.
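The figures of merit listed earlier are simple ratios and can be computed directly from measured quantities. The sketch below shows responsivity, quantum efficiency, and detectivity as defined above; the example numbers (5 µA of photocurrent from 10 µW of light at 850 nm, a 1 mm² detector, a noise-equivalent power of 1e-14 W) are hypothetical values for a generic silicon photodiode, not taken from the source.

```python
# Illustrative calculation of photodetector figures of merit.
# Responsivity R = I / P; quantum efficiency eta = R*h*c/(q*lambda);
# detectivity D = sqrt(area) / NEP. Input values are hypothetical.

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light in vacuum, m/s
Q = 1.602176634e-19  # elementary charge, C

def responsivity(photocurrent_a: float, optical_power_w: float) -> float:
    """Responsivity: output current divided by incident light power (A/W)."""
    return photocurrent_a / optical_power_w

def quantum_efficiency(resp_a_per_w: float, wavelength_m: float) -> float:
    """Quantum efficiency: carriers generated per photon, eta = R*h*c/(q*lambda)."""
    return resp_a_per_w * H * C / (Q * wavelength_m)

def detectivity(area_m2: float, nep_w: float) -> float:
    """Detectivity: square root of detector area divided by the NEP."""
    return area_m2 ** 0.5 / nep_w

r = responsivity(5e-6, 1e-5)         # 5 uA photocurrent from 10 uW of light
eta = quantum_efficiency(r, 850e-9)  # carriers per photon at 850 nm
d = detectivity(1e-6, 1e-14)         # 1 mm^2 detector, 1e-14 W NEP
print(f"R = {r:.2f} A/W, QE = {eta:.2f}, D = {d:.2e}")
```

A responsivity of 0.5 A/W at 850 nm corresponds to a quantum efficiency of roughly 0.73, illustrating how the two figures of merit are linked through the photon energy.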