In the broadest definition, a sensor is a device, module, or subsystem whose purpose is to detect events or changes in its environment and send the information to other electronics, frequently a computer processor. A sensor is always used with other electronics. Sensors are used in everyday objects such as touch-sensitive elevator buttons and lamps which dim or brighten by touching the base, as well as in innumerable applications of which most people are never aware. With advances in micromachinery and easy-to-use microcontroller platforms, the uses of sensors have expanded beyond the traditional fields of temperature, pressure, or flow measurement, for example into MARG (magnetic, angular rate, and gravity) sensors. Moreover, analog sensors such as potentiometers and force-sensing resistors are still widely used. Applications include manufacturing and machinery, aerospace, medicine, and many other aspects of day-to-day life. A sensor's sensitivity indicates how much the sensor's output changes when the input quantity being measured changes. For instance, if the mercury in a thermometer moves 1 cm when the temperature changes by 1 °C, the sensitivity is 1 cm/°C.
Some sensors can affect what they measure; sensors are therefore designed to have a small effect on what is measured. Technological progress allows more and more sensors to be manufactured on a microscopic scale as microsensors using MEMS technology. In most cases, a microsensor reaches a higher speed and sensitivity compared with macroscopic approaches. A good sensor obeys the following rules: it is sensitive to the measured property, it is insensitive to any other property likely to be encountered in its application, and it does not influence the measured property. Most sensors have a linear transfer function; the sensitivity is then defined as the ratio between the output signal and the measured property. For example, if a sensor measures temperature and has a voltage output, the sensitivity is a constant with units of volts per degree; the sensitivity is the slope of the transfer function. Converting the sensor's electrical output to the measured units requires dividing the electrical output by the slope. In addition, an offset is often added or subtracted.
For example, a constant offset such as −40 may need to be added to the output. For an analog sensor signal to be processed or used in digital equipment, it needs to be converted to a digital signal using an analog-to-digital converter. Since sensors cannot replicate an ideal transfer function, several types of deviations can occur which limit sensor accuracy. Since the range of the output signal is always limited, the output signal will reach a minimum or maximum when the measured property exceeds the limits; the full scale range defines the maximum and minimum values of the measured property. The sensitivity may in practice differ from the value specified; this is called a sensitivity error, an error in the slope of a linear transfer function. If the output signal differs from the correct value by a constant, the sensor has an offset error or bias, an error in the y-intercept of a linear transfer function. Nonlinearity is the deviation of a sensor's transfer function from a straight-line transfer function; it is defined by the amount the output differs from ideal behavior over the full range of the sensor, often noted as a percentage of the full range.
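As a concrete sketch of the slope-and-offset conversion described above, the following assumes a hypothetical linear temperature sensor; the slope and offset values are illustrative, not taken from any real device.

```python
# Linear transfer function: V = slope * T + offset, so T = (V - offset) / slope.
# Both calibration constants below are hypothetical examples.
SLOPE_V_PER_DEG = 0.01  # sensitivity: 10 mV per degree Celsius (assumed)
OFFSET_V = 0.5          # output voltage at 0 degrees Celsius (assumed)

def voltage_to_celsius(v_out: float) -> float:
    """Convert the sensor's electrical output back to the measured units."""
    return (v_out - OFFSET_V) / SLOPE_V_PER_DEG

print(voltage_to_celsius(0.75))  # ~25 degrees Celsius
```

The same two constants describe both directions of the transfer function, which is why a sensitivity error (wrong slope) and an offset error (wrong intercept) corrupt the reading in distinct ways.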
Deviation caused by rapid changes of the measured property over time is a dynamic error. This behavior is described with a Bode plot showing the sensitivity error and phase shift as a function of the frequency of a periodic input signal. If the output signal changes independently of the measured property, this is defined as drift. Long-term drift over months or years is caused by physical changes in the sensor. Noise is a random deviation of the signal. A hysteresis error causes the output value to vary depending on the previous input values: if a sensor's output differs depending on whether a specific input value was reached by increasing or by decreasing the input, the sensor has a hysteresis error. If the sensor has a digital output, the output is an approximation of the measured property; this error is called quantization error. If the signal is monitored digitally, the sampling frequency can cause a dynamic error, and if the input variable or added noise changes periodically at a frequency near a multiple of the sampling rate, aliasing errors may occur.
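The quantization error introduced above can be illustrated with a toy model of an ideal ADC; the full-scale range and bit depth below are arbitrary choices for the example.

```python
# An ideal N-bit ADC quantizes a 0-5 V input into 2**N levels; with
# rounding to the nearest level, the worst-case error is half a step (LSB/2).
FULL_SCALE_V = 5.0
BITS = 10
LSB = FULL_SCALE_V / 2 ** BITS  # size of one quantization step

def quantize(v: float) -> float:
    code = round(v / LSB)                    # nearest digital code
    code = max(0, min(2 ** BITS - 1, code))  # clip to the output range
    return code * LSB

# Sweep the input range and confirm the error never exceeds LSB/2:
worst = max(abs(quantize(v) - v) for v in (i * 0.000513 for i in range(9000)))
print(worst <= LSB / 2)  # True
```

Raising the bit depth shrinks the LSB, which is why higher-resolution converters reduce, but never eliminate, quantization error.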
The sensor may to some extent be sensitive to properties other than the property being measured. For example, most sensors are influenced by the temperature of their environment. All these deviations can be classified as systematic errors or random errors. Systematic errors can sometimes be compensated for by means of some kind of calibration strategy. Noise is a random error that can be reduced by signal processing, such as filtering, at the expense of the dynamic behavior of the sensor. The resolution of a sensor is the smallest change it can detect in the quantity that it is measuring. The resolution of a sensor with a digital output is the resolution of the digital output; the resolution is related to the precision with which the mea
Digital cinematography is the process of capturing a motion picture using digital image sensors rather than film stock. As digital technology has improved in recent years, this practice has become dominant; since the mid-2010s, most movies across the world have been both captured and distributed digitally. Many vendors have brought products to market, including traditional film camera vendors like Arri and Panavision, new vendors like RED, Silicon Imaging, and Vision Research, and companies which have traditionally focused on consumer and broadcast video equipment, like Sony, GoPro, and Panasonic. Although, as of 2017, professional 4K digital film cameras are approximately equal to 35mm film in their resolution and dynamic range capacity, digital film still has a different look to analog film, and some filmmakers still prefer to use analog picture formats to achieve the desired results. Beginning in the late 1980s, Sony began marketing the concept of "electronic cinematography," utilizing its analog Sony HDVS professional video cameras.
The effort met with little success; however, it led to one of the earliest feature movies shot on high-definition video, Julia and Julia. Rainbow was the world's first film utilizing extensive digital post-production techniques. It was shot with Sony's first solid-state electronic cinematography cameras and featured over 35 minutes of digital image processing and visual effects; all post-production, sound effects, and scoring were completed digitally, and the digital high-definition image was transferred to 35mm negative via electron beam recorder for theatrical release. The first digitally filmed and post-produced feature film was Windhorse, shot in Tibet and Nepal in 1996 on a prototype of the digital-beta Sony DVW-700WS and the prosumer Sony DCR-VX1000; the offline editing and the online post and color work were all digital. The film, transferred to 35mm negative for theatrical release, won Best U.S. Feature at the Santa Barbara Film Festival in 1998. In 1998, with the introduction of HDCAM recorders and 1920 × 1080 pixel digital professional video cameras based on CCD technology, the idea, now re-branded as "digital cinematography," began to gain traction in the market.
Shot and released in 1998, The Last Broadcast is believed by some to be the first feature-length movie shot and edited on consumer-level digital equipment. In May 1999, George Lucas challenged the supremacy of film as a movie-making medium for the first time by including footage filmed with high-definition digital cameras in Star Wars: Episode I – The Phantom Menace; the digital footage blended seamlessly with the footage shot on film, and he announced that year that he would film its sequels on high-definition digital video. In 1999, digital projectors were installed in four theaters for the showing of The Phantom Menace. In June 2000, Star Wars: Episode II – Attack of the Clones began principal photography, shot with a Sony HDW-F900 camera as Lucas had stated; the film was released in May 2002. In May 2001, Once Upon a Time in Mexico was shot in 24-frame-per-second high-definition digital video, a format partly developed by George Lucas, using a Sony HDW-F900 camera, following Robert Rodriguez's introduction to the camera at Lucas' Skywalker Ranch facility while editing the sound for Spy Kids.
Two lesser-known movies, among them Russian Ark, had been shot with the same camera; Russian Ark notably consists of a single long take. Today, cameras from companies like Sony, Panasonic, JVC, and Canon offer a variety of choices for shooting high-definition video. At the high end of the market, there has been an emergence of cameras aimed at the digital cinema market; these cameras from Sony, Vision Research, Silicon Imaging, Grass Valley, and Red offer resolution and dynamic range that exceed those of traditional video cameras, which are designed for the limited needs of broadcast television. In 2009, Slumdog Millionaire became the first movie shot mainly in digital to be awarded the Academy Award for Best Cinematography, and the highest-grossing movie in the history of cinema not only was shot on digital cameras but also earned its main box-office revenues from digital rather than film projection. In late 2013, Paramount became the first major studio to distribute movies to theaters in digital format, eliminating 35mm film entirely.
Anchorman 2 was the last Paramount production to include a 35mm film version, while The Wolf of Wall Street was the first major movie distributed entirely digitally. Digital cinematography captures motion pictures digitally in a process analogous to digital photography. While there is no clear technical distinction separating the images captured in digital cinematography from video, the term "digital cinematography" is usually applied only in cases where digital acquisition is substituted for film acquisition, such as when shooting a feature film; the term is not applied when digital acquisition is substituted for analog video acquisition, as with live broadcast television programs. Professional cameras include the Sony CineAlta series, Blackmagic Cinema Camera, RED ONE, Arriflex D-20, D-21 and Alexa, Panavision's Genesis, Silicon Imaging SI-2K, Thomson Viper, Vision Research Phantom, the IMAX 3D camera based on two Vision Research Phantom cores, Weisscam HS-1 and HS-2, GS Vitec noX, and the Fusion Camera System. Independent micro-budget filmmakers have pressed low-cost consumer and prosumer cameras into service for digital filmmaking.
Flagship smartphones like the Apple iPhone have been used to shoot movies like Unsane and Tangerine and in January 2018, Unsane's director and Oscar winner Ste
A dynode is an electrode in a vacuum tube that serves as an electron multiplier through secondary emission. The first tube to incorporate a dynode was the dynatron, an ancestor of the magnetron, which used a single dynode. Photomultiplier and video camera tubes include a series of dynodes, each at a more positive electrical potential than its predecessor. Secondary emission occurs at the surface of each dynode; such an arrangement is able to amplify the tiny current emitted by the photocathode by a factor of one million. The electrons emitted from the cathode are accelerated toward the first dynode, which is maintained 90 to 100 V positive with respect to the cathode; each accelerated photoelectron that strikes the dynode surface produces several electrons. These electrons are accelerated toward the second dynode, held 90 to 100 V more positive than the first; each electron that strikes the surface of the second dynode produces several more electrons, which are accelerated toward the third dynode, and so on.
By the time this process has been repeated at each of the dynodes, 10^5 to 10^7 electrons have been produced for each incident photon, depending on the number of dynodes. For conventional dynode materials, such as BeO and MgO, a multiplication factor of about 10 can be achieved by each dynode stage; the dynode takes its name from the dynatron. Albert Hull did not use the term dynode in his 1918 paper on the dynatron, but used the term extensively in his 1922 paper. In the latter paper, he defined a dynode as a "plate that emits impact electrons... when it is part of a dynatron."

See also: microchannel plate detector, photoelectric effect, particle detector, photodetector.
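The multiplication figures above follow directly from compounding a per-stage gain; this sketch simply evaluates the product for a typical secondary-emission factor of about 10.

```python
# Overall photomultiplier gain: each of n dynode stages multiplies the
# electron count by the secondary-emission factor delta, so G = delta ** n.
def pmt_gain(delta: float, n_dynodes: int) -> float:
    return delta ** n_dynodes

# With delta of about 10 (typical for BeO or MgO dynodes), 5 to 7 stages
# give the 10^5 to 10^7 electrons per incident photon quoted above.
for n in (5, 6, 7):
    print(n, pmt_gain(10, n))
```

This exponential scaling is why adding even one dynode stage changes the tube's gain by roughly an order of magnitude.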
In optics, the refractive index or index of refraction of a material is a dimensionless number that describes how fast light propagates through the material. It is defined as n = c/v, where c is the speed of light in vacuum and v is the phase velocity of light in the medium. For example, the refractive index of water is 1.333, meaning that light travels 1.333 times as fast in vacuum as in water. The refractive index determines how much the path of light is bent, or refracted, when entering a material; this is described by Snell's law of refraction, n1 sinθ1 = n2 sinθ2, where θ1 and θ2 are the angles of incidence and refraction of a ray crossing the interface between two media with refractive indices n1 and n2. The refractive indices also determine the amount of light reflected when reaching the interface, as well as the critical angle for total internal reflection and Brewster's angle. The refractive index can be seen as the factor by which the speed and the wavelength of the radiation are reduced with respect to their vacuum values: the speed of light in a medium is v = c/n, and the wavelength in that medium is λ = λ0/n, where λ0 is the wavelength of that light in vacuum.
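Snell's law and the critical angle described above can be evaluated numerically; the sketch below uses the standard textbook indices for air and water.

```python
import math

# n1 * sin(theta1) = n2 * sin(theta2)  =>  theta2 = asin(n1 * sin(theta1) / n2)
n_air, n_water = 1.000, 1.333

def refraction_angle_deg(theta1_deg: float, n1: float, n2: float) -> float:
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta1_deg)) / n2))

def critical_angle_deg(n1: float, n2: float) -> float:
    # Total internal reflection: light going from the denser medium n1 into n2.
    return math.degrees(math.asin(n2 / n1))

print(refraction_angle_deg(45.0, n_air, n_water))  # ~32 degrees
print(critical_angle_deg(n_water, n_air))          # ~48.6 degrees
```

The critical angle exists only when going from the higher-index medium toward the lower, since otherwise the argument of the arcsine would exceed 1.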
This implies that vacuum has a refractive index of 1, and that the frequency of the wave is not affected by the refractive index. As a result, the energy of the photon, and therefore the perceived color of the refracted light to a human eye (which depends on photon energy), is not affected by the refraction or the refractive index of the medium. Because the refractive index varies with wavelength, so does the angle of refraction; the resulting difference in bending angle causes white light to split into its constituent colors. This is called dispersion, and it can be observed in prisms and rainbows, and as chromatic aberration in lenses. Light propagation in absorbing materials can be described using a complex-valued refractive index; the imaginary part handles the attenuation, while the real part accounts for refraction. The concept of refractive index applies within the full electromagnetic spectrum, from X-rays to radio waves, and it can even be applied to wave phenomena such as sound. In this case the speed of sound is used instead of that of light, and a reference medium other than vacuum must be chosen.
The refractive index n of an optical medium is defined as the ratio of the speed of light in vacuum, c = 299792458 m/s, to the phase velocity v of light in the medium: n = c/v. The phase velocity is the speed at which the crests or the phase of the wave moves, which may be different from the group velocity, the speed at which the pulse of light or the envelope of the wave moves. The definition above is sometimes referred to as the absolute refractive index or the absolute index of refraction to distinguish it from definitions where the speed of light in other reference media than vacuum is used. Air at a standardized pressure and temperature has been common as a reference medium. Thomas Young was presumably the first person to use, and invent, the name "index of refraction", in 1807. At the same time he changed this value of refractive power into a single number, instead of the traditional ratio of two numbers; the ratio had the disadvantage of differing appearances. Newton, who called it the "proportion of the sines of incidence and refraction", wrote it as a ratio of two numbers, like "529 to 396".
Hauksbee, who called it the "ratio of refraction", wrote it as a ratio with a fixed numerator, like "10000 to 7451.9". Hutton wrote it as a ratio with a fixed denominator, like "1.3358 to 1". Young did not use a symbol for the index of refraction in 1807. In the following years, others started using different symbols: n, m, and µ; the symbol n prevailed. For visible light most transparent media have refractive indices between 1 and 2. A few examples are given in the adjacent table; these values are measured at the yellow doublet D-line of sodium, with a wavelength of 589 nanometers, as is conventionally done. Gases at atmospheric pressure have refractive indices close to 1 because of their low density. All solids and liquids have refractive indices above 1.3, with aerogel as the clear exception. Aerogel is a low-density solid that can be produced with a refractive index in the range from 1.002 to 1.265. Moissanite lies at the other end of the range with a refractive index as high as 2.65. Most plastics have refractive indices in the range from 1.3 to 1.7, but some high-refractive-index polymers can have values as high as 1.76.
For infrared light refractive indices can be considerably higher. Germanium is transparent in the wavelength region from 2 to 14 µm and has a refractive index of about 4. A new class of materials, called topological insulators, was found to have refractive indices of up to 6 in the near- to mid-infrared frequency range. Moreover, topological insulators are transparent in this range; these properties make them significant materials for infrared optics. According to the theory of relativity, no information can travel faster than the speed of light in vacuum, but this does not mean that the refractive index cannot be lower than 1. The refractive index measures the phase velocity of light, the speed at which the crests of the wave move, which can be faster than the speed of light in vacuum, thereby giving a refractive index below 1. This can occur close to resonance frequencies, in absorbing media, in plasmas, and for X-rays. In the X-ray regime the refractive indices are
A charge-coupled device (CCD) is a device for the movement of electrical charge from within the device to an area where the charge can be manipulated, for example converted into a digital value. This is achieved by "shifting" the signals between stages within the device one at a time. CCDs move charge between capacitive bins in the device, with the shift allowing for the transfer of charge between bins. In recent years the CCD has become a major technology for digital imaging. In a CCD image sensor, pixels are represented by p-doped metal–oxide–semiconductor (MOS) capacitors; these capacitors are biased above the threshold for inversion when image acquisition begins, allowing the conversion of incoming photons into electron charges at the semiconductor–oxide interface. Although CCDs are not the only technology to allow for light detection, CCD image sensors are used in professional and scientific applications where high-quality image data are required. In applications with less exacting quality demands, such as consumer and professional digital cameras, active pixel sensors, known as CMOS sensors (complementary metal–oxide–semiconductor sensors), are used.
The charge-coupled device was invented in 1969 in the United States at AT&T Bell Labs by Willard Boyle and George E. Smith; the lab was working on semiconductor bubble memory when Boyle and Smith conceived of the design of what they termed, in their notebook, "Charge 'Bubble' Devices". The device could be used as a shift register; the essence of the design was the ability to transfer charge along the surface of a semiconductor from one storage capacitor to the next. The concept was similar in principle to the bucket-brigade device, developed at Philips Research Labs during the late 1960s. The first patent on the application of CCDs to imaging was assigned to Michael Tompsett. The initial paper describing the concept listed possible uses as a memory, a delay line, and an imaging device. The first experimental device demonstrating the principle was a row of closely spaced metal squares on an oxidized silicon surface electrically accessed by wire bonds. The first working CCD made with integrated circuit technology was a simple 8-bit shift register.
This device had input and output circuits and was used to demonstrate its use as a shift register and as a crude eight-pixel linear imaging device. Development of the device progressed at a rapid rate. By 1971, Bell researchers led by Michael Tompsett were able to capture images with simple linear devices. Several companies, including Fairchild Semiconductor, RCA, and Texas Instruments, picked up on the invention and began development programs. Fairchild's effort, led by ex-Bell researcher Gil Amelio, was the first with commercial devices, and by 1974 had a linear 500-element device and a 2-D 100 × 100 pixel device. Steven Sasson, an electrical engineer working for Kodak, invented the first digital still camera using a Fairchild 100 × 100 CCD in 1975. The first KH-11 KENNEN reconnaissance satellite equipped with charge-coupled device array technology for imaging was launched in December 1976. Under the leadership of Kazuo Iwama, Sony started a large development effort on CCDs involving a significant investment.
Sony managed to mass-produce CCDs for their camcorders, though Iwama died in August 1982 before this happened. In January 2006, Boyle and Smith were awarded the National Academy of Engineering Charles Stark Draper Prize, and in 2009 they were awarded the Nobel Prize in Physics for their invention of the CCD concept. Michael Tompsett was awarded the 2010 National Medal of Technology and Innovation for pioneering work on electronic technologies, including the design and development of the first charge-coupled device imagers; he was also awarded the 2012 IEEE Edison Medal "For pioneering contributions to imaging devices including CCD Imagers and thermal imagers". In a CCD for capturing images, there is a photoactive region and a transmission region made out of a shift register. An image is projected through a lens onto the capacitor array, causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A one-dimensional array, used in line-scan cameras, captures a single slice of the image, whereas a two-dimensional array, used in video and still cameras, captures a two-dimensional picture corresponding to the scene projected onto the focal plane of the sensor.
Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor. The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage. By repeating this process, the controlling circuit converts the entire contents of the array in the semiconductor to a sequence of voltages. In a digital device, these voltages are sampled and stored in memory. Before the MOS capacitors are exposed to light, they are biased into the depletion region; the gate is biased at a positive potential, above the threshold for strong inversion, which will result in the creation
An image sensor or imager is a sensor that detects and conveys information used to make an image. It does so by converting the variable attenuation of light waves into signals, small bursts of current that convey the information; the waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, medical imaging equipment, night vision equipment such as thermal imaging devices, radar, and others; as technology changes, digital imaging tends to replace analog imaging. Early analog sensors for visible light were video camera tubes. The main types currently in use are semiconductor charge-coupled devices (CCDs) and active pixel sensors in complementary metal–oxide–semiconductor (CMOS) or N-type metal–oxide–semiconductor (NMOS) technologies. Analog sensors for invisible radiation tend to involve vacuum tubes of various kinds. Digital sensors include flat-panel detectors. In February 2018, researchers at Dartmouth College announced a new image sensing technology that the researchers call QIS, for Quanta Image Sensor.
Instead of pixels, QIS chips have what the researchers call "jots." Each jot can detect a single particle of light, called a photon. Cameras integrated in small consumer products use CMOS sensors, which are cheaper and have lower power consumption in battery-powered devices than CCDs. CCD sensors are used for high-end broadcast-quality video cameras, while CMOS sensors dominate in still photography and consumer goods where overall cost is a major concern. Both types of sensor accomplish the same task of capturing light and converting it into electrical signals. Each cell of a CCD image sensor is an analog device. When light strikes the chip it is held as a small electrical charge in each photo sensor. The charges in the line of pixels nearest to the output amplifiers are amplified and output; then each line of pixels shifts its charges one line closer to the amplifiers, filling the empty line closest to the amplifiers. This process is repeated until all the lines of pixels have had their charge amplified and output.
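The line-by-line shift-and-read cycle described above can be sketched in one dimension; the charge values below are arbitrary stand-ins for accumulated photo-charge.

```python
# One-dimensional sketch of CCD readout: on each clock cycle the bin next
# to the amplifier dumps its charge, and every other bin shifts one step
# closer, with an empty bin entering at the far end.
def ccd_readout(charges):
    bins, out = list(charges), []
    for _ in range(len(bins)):
        out.append(bins.pop())  # last capacitor dumps into the charge amplifier
        bins.insert(0, 0)       # empty bin enters at the opposite end
    return out

print(ccd_readout([3, 1, 4, 1, 5]))  # [5, 1, 4, 1, 3]
```

The output sequence is the spatial charge pattern reversed in time, which is why a CCD converts an image into a serial stream of voltages with a single amplifier.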
A CMOS image sensor has an amplifier for each pixel, compared to the few amplifiers of a CCD. This results in less area for the capture of photons than a CCD, but this problem has been overcome by using microlenses in front of each photodiode, which focus light into the photodiode that would have otherwise hit the amplifier and not been detected. Some CMOS imaging sensors also use back-side illumination to increase the number of photons that hit the photodiode. CMOS sensors can be implemented with fewer components, use less power, and/or provide faster readout than CCD sensors, and they are less vulnerable to static electricity discharges. Another design, a hybrid CCD/CMOS architecture, consists of CMOS readout integrated circuits that are bump bonded to a CCD imaging substrate – a technology that was developed for infrared staring arrays and has been adapted to silicon-based detector technology. Another approach is to utilize the fine dimensions available in modern CMOS technology to implement a CCD-like structure entirely in CMOS technology: such structures can be achieved by separating individual poly-silicon gates by a small gap.
There are many parameters that can be used to evaluate the performance of an image sensor, including dynamic range, signal-to-noise ratio, and low-light sensitivity. For sensors of comparable types, the signal-to-noise ratio and dynamic range improve as the size increases. There are several main types of color image sensors, differing by the type of color-separation mechanism. The Bayer filter sensor, low-cost and most common, uses a color filter array that passes red, green, or blue light to selected pixel sensors; each individual sensor element is made sensitive to red, green, or blue by means of a color gel made of chemical dye placed over each individual element. Although inexpensive to manufacture, this technique lacks the color purity of dichroic filters. Because each color gel segment must be separated from the others by a frame, less of the areal density of a Bayer filter sensor is available to capture light, making the Bayer filter sensor less sensitive than other color sensors of similar size. This loss can be negated by using a larger sensor size, albeit at greater cost.
The most common Bayer filter matrix uses two green pixels for each red and blue pair. This results in less resolution for red and blue colors, corresponding to the human eye's reduced sensitivity at the limits of the visual spectrum. The missing color samples may be interpolated using a demosaicing algorithm, or ignored altogether by lossy compression. In order to improve color information, techniques like color co-site sampling use a piezo mechanism to shift the color sensor in pixel steps. The Foveon X3 sensor uses an array of layered pixel sensors, separating light via the inherent wavelength-dependent absorption property of silicon, such that every location senses all three color channels; this method is similar to how color film for photography works. The 3CCD design uses three discrete image sensors, with the color separation done by a dichroic prism. The dichroic elements provide a sharper color separation, thus improving color quality; because each sensor is sensitive within its passband and at full resolution, 3-CCD sensors produce better color quality and better low-light performance.
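The two-green arrangement described above can be made concrete by generating the mosaic for a small sensor and counting photosites; the layout below is the common RGGB variant (other Bayer variants rotate the same pattern).

```python
# Build the RGGB Bayer pattern for a small sensor: even rows alternate
# R, G; odd rows alternate G, B, giving twice as many green photosites.
def bayer_pattern(height, width):
    return [['RG'[x % 2] if y % 2 == 0 else 'GB'[x % 2] for x in range(width)]
            for y in range(height)]

flat = [c for row in bayer_pattern(4, 4) for c in row]
print(flat.count('G'), flat.count('R'), flat.count('B'))  # 8 4 4
```

A demosaicing algorithm then estimates the two missing channels at each photosite, typically by interpolating from the nearest neighbors of each color.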
3-CCD sensors produce a full 4:4:4 signal, preferred in television broadcasting, video editing and chroma key visual effects. Special sensors are
A photodiode is a semiconductor device that converts light into an electrical current. The current is generated when photons are absorbed in the photodiode. Photodiodes may contain optical filters and built-in lenses, and may have large or small surface areas. Photodiodes have a slower response time as their surface area increases; the common, traditional solar cell used to generate electric solar power is a large-area photodiode. Photodiodes are similar to regular semiconductor diodes except that they may be either exposed or packaged with a window or optical fiber connection to allow light to reach the sensitive part of the device. Many diodes designed for use specifically as a photodiode use a PIN junction rather than a p–n junction, to increase the speed of response. A photodiode is designed to operate in reverse bias. A photodiode is a p–n junction or PIN structure; when a photon of sufficient energy strikes the diode, it creates an electron–hole pair. This mechanism is known as the inner photoelectric effect. If the absorption occurs in the junction's depletion region, or one diffusion length away from it, these carriers are swept from the junction by the built-in electric field of the depletion region.
Thus holes move toward the anode and electrons toward the cathode, and a photocurrent is produced. The total current through the photodiode is the sum of the dark current (the current generated in the absence of light) and the photocurrent, so the dark current must be minimized to maximize the sensitivity of the device. When used in zero-bias or photovoltaic mode, the flow of photocurrent out of the device is restricted and a voltage builds up. This mode exploits the photovoltaic effect, the basis for solar cells – a traditional solar cell is just a large-area photodiode. In photoconductive mode, by contrast, the diode is reverse biased; this reduces the response time because the additional reverse bias increases the width of the depletion layer, which decreases the junction's capacitance. The reverse bias also increases the dark current without much change in the photocurrent. For a given spectral distribution, the photocurrent is linearly proportional to the illuminance. Although this mode is faster, the photoconductive mode tends to exhibit more electronic noise; the leakage current of a good PIN diode is so low that the Johnson–Nyquist noise of the load resistance in a typical circuit dominates.
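The current balance described above, total current as dark current plus a photocurrent linear in optical power, can be sketched as follows; the responsivity and dark-current values are illustrative assumptions, not data for a specific part.

```python
# Photoconductive-mode model: I = I_dark + R * P, with responsivity R in A/W.
RESPONSIVITY_A_PER_W = 0.5  # assumed, in the range typical of silicon photodiodes
I_DARK_A = 2e-9             # assumed dark current

def photodiode_current(optical_power_w: float) -> float:
    return I_DARK_A + RESPONSIVITY_A_PER_W * optical_power_w

# The photocurrent component scales linearly with optical power:
i1 = photodiode_current(1e-6) - I_DARK_A
i2 = photodiode_current(2e-6) - I_DARK_A
print(i2 / i1)  # ~2.0
```

The additive dark-current term is why low-light measurements favor devices and operating modes that minimize it.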
Avalanche photodiodes are photodiodes with a structure optimized for operating with high reverse bias, approaching the reverse breakdown voltage. This allows each photo-generated carrier to be multiplied by avalanche breakdown, resulting in internal gain within the photodiode, which increases the effective responsivity of the device. A phototransistor is a light-sensitive transistor. A common type of phototransistor, called a photobipolar transistor, is in essence a bipolar transistor encased in a transparent case so that light can reach the base–collector junction. It was invented by Dr. John N. Shive at Bell Labs in 1948, but it was not announced until 1950. The electrons that are generated by photons in the base–collector junction are injected into the base, and this photodiode current is amplified by the transistor's current gain β. If the base and collector leads are used and the emitter is left unconnected, the phototransistor becomes a photodiode. While phototransistors have a higher responsivity for light, they are not able to detect low levels of light any better than photodiodes.
Phototransistors also have longer response times. Field-effect phototransistors, known as photoFETs, are light-sensitive field-effect transistors. Unlike photobipolar transistors, photoFETs control drain-source current by creating a gate voltage. A solaristor is a two-terminal gate-less phototransistor; a compact class of such two-terminal phototransistors, or solaristors, was demonstrated in 2018 by ICN2 researchers. The novel concept is a two-in-one power source plus transistor device that runs on solar energy by exploiting a memristive effect in the flow of photogenerated carriers. The material used to make a photodiode is critical to defining its properties, because only photons with sufficient energy to excite electrons across the material's bandgap will produce significant photocurrents. Materials used to produce photodiodes include silicon and germanium, among others; because of its greater bandgap, silicon-based photodiodes generate less noise than germanium-based photodiodes. Any p–n junction, if illuminated, is a photodiode.
Semiconductor devices such as diodes, transistors, and ICs contain p–n junctions and will not function correctly if they are illuminated by unwanted electromagnetic radiation of a wavelength suitable to produce a photocurrent; this is avoided by encapsulating devices in opaque housings. If these housings are not opaque to high-energy radiation, transistors and ICs can malfunction due to induced photocurrents. Background radiation from the packaging can also be significant. Radiation hardening mitigates these effects. In some cases, the effect is wanted, for example to use LEDs as light-sensitive devices or for energy harvesting, sometimes called light-emitting and -absorbing diodes. Critical performance parameters of a photodiode include: Responsivity: the spectral responsivity is the ratio of the generated photocurrent to incident light power, expressed in A/W when used in photoconductive mode; the wavelength dependence may also be expressed as a quantum efficiency, the ratio of the number of photogenerated carriers to incident photons, a unitless quantity. Dark current: the current th