In photography, a viewfinder is what the photographer looks through to compose, and in many cases to focus, the picture. Most viewfinders are separate from the taking lens and suffer parallax, while the single-lens reflex camera lets the viewfinder use the main optical system. Viewfinders are used in many types of camera: still and movie, film and digital. A zoom camera usually zooms its finder in sync with its lens, one exception being rangefinder cameras. Before the development of microelectronics and electronic display devices, only optical viewfinders existed. Direct viewfinders are miniature Galilean telescopes; a declining minority of point-and-shoot cameras use them. Parallax error results from the viewfinder being offset from the lens axis so that it points above and to one side of the lens; the error varies with distance, being negligible for distant scenes and large for close-ups. Viewfinders show lines to indicate the edges of the region that will be included in the photograph. Some sophisticated twentieth-century cameras with direct viewfinders had a coincidence rangefinder, either with windows separate from the viewfinder or integrated with it.
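To make the distance dependence concrete, here is a minimal Python sketch of the angular framing error caused by a viewfinder offset; the 40 mm offset is an assumed, purely illustrative value:

```python
import math

def parallax_error_deg(offset_m: float, subject_distance_m: float) -> float:
    """Angular framing error, in degrees, when the viewfinder axis is
    displaced from the lens axis by offset_m metres."""
    return math.degrees(math.atan2(offset_m, subject_distance_m))

# An assumed 40 mm offset is negligible at 10 m but significant close up:
for d in (0.5, 1.0, 10.0):
    print(f"{d:5.1f} m -> {parallax_error_deg(0.04, d):.2f} deg")
```

The error shrinks roughly in proportion to distance, which is why frame lines in direct viewfinders typically need parallax-correction marks only for close-ups.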
Cameras with interchangeable lenses had to indicate the field of view of each lens in the viewfinder. Simple reflecting viewfinders comprised a small forward-looking lens, a small mirror at 45° behind it, and a lens at the top; such viewfinders were integrated into box cameras and fitted to the side of folding cameras. These viewfinders were fitted to inexpensive cameras. For many sports and press applications, optical viewfinders gave too small an image and were inconvenient to use for scenes that were changing rapidly. For these purposes a simple arrangement of two wire rectangles, a smaller one nearer the eye and a larger one further away, was used, with no optics; this was fast and convenient to use, but not accurate. Such a sportsfinder, sometimes known as an Albada finder, is a viewfinder used with the camera held at eye level. Twin-lens reflex cameras had a large lens above the taking lens and a large mirror at 45°, projecting an image onto a ground-glass screen viewable from above, with the camera at waist level.
The viewfinder lens was of similar size and focal length to the taking lens, though its optical quality was less critical. These cameras were expensive. Both single- and twin-lens reflexes allowed focussing by adjusting the lens until the image was sharp. Single-lens reflex cameras viewed the scene through the taking lens. Early SLRs were plate cameras, with a mechanism to insert a mirror between the lens and the film which reflected the light upwards, where it could be seen at waist level on a ground-glass screen; when ready to take the picture, the mirror was pivoted out of the way. Later SLRs had a mechanism which flipped the mirror out of the way automatically when the shutter button was pressed, followed by the shutter opening. Instead of a waist-level arrangement, a prism was used to allow the camera to be held to the eye; the big advantage of the SLR was that any lens or other optical device could be used, with the viewfinder always showing the scene as it would be recorded. The live preview feature of digital cameras shares this advantage of the SLR, as it shows the image as it will be recorded, with no additional optics or parallax error.
Viewfinders can be optical or electronic. An optical viewfinder is a reversed telescope mounted to see what the camera will see; it has several drawbacks, but also advantages. An electronic viewfinder is a CRT, LCD or OLED based display device, though only the LCD is commonplace today due to size and weight. In addition to its primary purpose, an electronic viewfinder can be used to replay captured material and as an on-screen display to browse through menus. A still camera's optical viewfinder typically has one or more small supplementary LED displays surrounding the view of the scene. On a film camera, these displays show shooting information such as the shutter speed and aperture and, for autofocus cameras, provide an indication that the image is in focus. Digital still cameras will also display information such as the current ISO setting and the number of remaining shots which can be taken in a burst. Another display overlays the view of the scene, showing the location and state of the camera's autofocus points.
This overlay can provide lines or a grid which assist in picture composition. It is not uncommon for a camera to have two viewfinders. For example, a digital still camera may have both an optical and an electronic one; the latter can be used to replay captured material, provides an on-screen display, and can be switched off to save power. A camcorder may have two viewfinders, both electronic: the first is viewed through a magnifying eyepiece, while the second, typically a flip-out panel, is viewed directly.
Computer data storage
Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers; the central processing unit of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but larger and cheaper options farther away; the fast volatile technologies are referred to as "memory", while slower persistent technologies are referred to as "storage". In the von Neumann architecture, the CPU consists of two main parts: the control unit and the arithmetic logic unit; the former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result; it would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors and other specialized devices.
Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions. Most modern computers are von Neumann machines. A modern digital computer represents data using the binary numeral system. Text, pictures and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0; the most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes with one byte per character. Data are encoded by assigning a bit pattern to each character, digit, or multimedia object.
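As a rough check of that figure, the sketch below encodes text at one byte per character and redoes the arithmetic; the 4,000 characters per page is an assumed, illustrative value chosen to match the quoted total:

```python
text = "What a piece of work is a man!"
encoded = text.encode("ascii")     # one byte per character
print(len(encoded), "bytes for", len(text), "characters")

# Assumed: ~1250 pages at ~4000 characters per page
pages, chars_per_page = 1250, 4000
print(pages * chars_per_page / 1e6, "MB")   # 5.0 MB, matching the estimate
```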
Many standards exist for encoding. By adding bits to each encoded unit, redundancy allows the computer to both detect errors in coded data and correct them based on mathematical algorithms. Errors generally occur with low probability due to random bit-value flipping, "physical bit fatigue" (the loss of a physical bit's ability to maintain a distinguishable value), or errors in inter- or intra-computer communication. A random bit flip is corrected upon detection. A bit or a group of malfunctioning physical bits is automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, and the corrected bit values are restored. The cyclic redundancy check method is typically used in communications and storage for error detection; a detected error is then retried. Data compression methods allow, in many cases, a string of bits to be represented by a shorter bit string and reconstructed when needed; this uses less storage for many types of data at the cost of more computation.
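A minimal illustration of detection-then-retry using a cyclic redundancy check, via Python's standard zlib.crc32 (the record contents here are made up):

```python
import zlib

record = bytearray(b"account=42;balance=100")
stored_crc = zlib.crc32(record)        # checksum kept alongside the data

record[8] ^= 0x01                      # simulate a single random bit flip
if zlib.crc32(record) != stored_crc:
    print("CRC mismatch: error detected, re-read or retransmit the record")
```

Note that a bare CRC only detects the error; correcting it in place requires the redundancy schemes described above.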
An analysis of the trade-off between the storage cost saved and the cost of the related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not. For security reasons, certain types of data may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. The lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary and off-line storage is guided by cost per bit. In contemporary usage, "memory" is usually semiconductor read-write random-access memory, typically DRAM, or other forms of fast but temporary storage. "Storage" consists of storage devices and their media not directly accessible by the CPU (hard disk drives, optical disc drives and other devices slower than RAM but non-volatile). Memory has also been called core memory, main memory, real storage or internal memory. Meanwhile, non-volatile storage devices have been referred to as secondary storage, external memory or auxiliary/peripheral storage.
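The storage-versus-computation trade-off can be seen directly with Python's zlib: compressing repetitive data saves space but costs CPU time. The payload here is an assumed, highly compressible example:

```python
import time, zlib

payload = b"sensor,timestamp,value\n" * 50_000    # repetitive log-style data

start = time.perf_counter()
packed = zlib.compress(payload, level=9)          # maximum compression effort
elapsed = time.perf_counter() - start

print(f"raw {len(payload)} B -> {len(packed)} B "
      f"({len(packed) / len(payload):.1%}) in {elapsed * 1e3:.1f} ms")
```

Real-world data compresses far less dramatically than this synthetic example, which is why the trade-off analysis mentioned above is done per data type.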
Primary storage, often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner. Early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were replaced by magnetic core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive; this led to modern random-access memory (RAM).
Image sensor
An image sensor or imager is a sensor that detects and conveys information used to make an image. It does so by converting the variable attenuation of light waves into signals, small bursts of current that convey the information; the waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, medical imaging equipment, night vision equipment such as thermal imaging devices, radar and others. As technology changes, digital imaging tends to replace analog imaging. Early analog sensors for visible light were video camera tubes; the types most used today are semiconductor charge-coupled devices (CCD) or active pixel sensors in complementary metal–oxide–semiconductor (CMOS) or N-type metal-oxide-semiconductor (NMOS) technologies. Analog sensors for invisible radiation tend to involve vacuum tubes of various kinds. Digital sensors include flat-panel detectors. In February 2018, researchers at Dartmouth College announced a new image sensing technology that the researchers call QIS, for Quanta Image Sensor.
Instead of pixels, QIS chips have what the researchers call "jots"; each jot can detect a single particle of light, called a photon. Cameras integrated in small consumer products generally use CMOS sensors, which are cheaper and have lower power consumption in battery-powered devices than CCDs. CCD sensors are used for high-end broadcast-quality video cameras, while CMOS sensors dominate in still photography and consumer goods where overall cost is a major concern. Both types of sensor accomplish the same task of capturing light and converting it into electrical signals. Each cell of a CCD image sensor is an analog device. When light strikes the chip, it is held as a small electrical charge in each photo sensor. The charges in the line of pixels nearest the output amplifiers are amplified and output; then each line of pixels shifts its charges one line closer to the amplifiers, filling the empty line closest to them. This process is repeated until all the lines of pixels have had their charge amplified and output.
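The line-by-line CCD readout described above can be mimicked with a small NumPy sketch; the 4×6 charge array is an arbitrary stand-in for accumulated photoelectrons:

```python
import numpy as np

rng = np.random.default_rng(0)
ccd = rng.integers(0, 256, size=(4, 6))    # accumulated charge per photosite

rows_out = []
frame = ccd.copy()
for _ in range(frame.shape[0]):
    rows_out.append(frame[-1].copy())      # read the row nearest the amplifiers
    frame[1:] = frame[:-1].copy()          # shift every row one step closer
    frame[0] = 0                           # the vacated top row is left empty
image = np.array(rows_out[::-1])           # rows emerge bottom-first

assert np.array_equal(image, ccd)          # the whole frame has been read out
```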
A CMOS image sensor has an amplifier for each pixel, compared to the few amplifiers of a CCD. This results in less area for the capture of photons than a CCD, but the problem has been overcome by using microlenses in front of each photodiode, which focus light into the photodiode that would otherwise have hit the amplifier and not been detected. Some CMOS imaging sensors also use back-side illumination to increase the number of photons that hit the photodiode. CMOS sensors can be implemented with fewer components, use less power, and/or provide faster readout than CCD sensors; they are also less vulnerable to static electricity discharges. Another design, a hybrid CCD/CMOS architecture, consists of CMOS readout integrated circuits that are bump-bonded to a CCD imaging substrate, a technology that was developed for infrared staring arrays and has been adapted to silicon-based detector technology. Another approach is to utilize the fine dimensions available in modern CMOS technology to implement a CCD-like structure entirely in CMOS technology: such structures can be achieved by separating individual poly-silicon gates by a small gap.
There are many parameters that can be used to evaluate the performance of an image sensor, including dynamic range, signal-to-noise ratio and low-light sensitivity. For sensors of comparable types, the signal-to-noise ratio and dynamic range improve as the size increases. There are several main types of color image sensors, differing by the type of color-separation mechanism. The Bayer filter sensor is low-cost and most common, using a color filter array that passes red, green, or blue light to selected pixel sensors; each individual sensor element is made sensitive to red, green, or blue by means of a color gel made of chemical dye placed over each individual element. Although inexpensive to manufacture, this technique lacks the color purity of dichroic filters. Because each color gel segment must be separated from the others by a frame, less of the areal density of a Bayer filter sensor is available to capture light, making the Bayer filter sensor less sensitive than other color sensors of similar size. This loss can be negated by using a larger sensor, albeit at greater cost.
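One common way to quantify the size effect is to express dynamic range as the ratio of a pixel's full-well capacity to its read noise; the sketch below uses assumed, illustrative electron counts:

```python
import math

def dynamic_range(full_well_e: float, read_noise_e: float) -> tuple[float, float]:
    """Dynamic range in decibels and in photographic stops, computed from
    full-well capacity and read noise (both in electrons)."""
    ratio = full_well_e / read_noise_e
    return 20 * math.log10(ratio), math.log2(ratio)

# Larger photosites hold more charge, so dynamic range grows with pixel size:
print(dynamic_range(20_000, 5))   # smaller pixel: ~72 dB, ~12 stops
print(dynamic_range(80_000, 5))   # larger pixel:  ~84 dB, ~14 stops
```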
The most common Bayer filter matrix uses two green pixels for each red and blue pixel. This results in less resolution for red and blue colors, corresponding to the human eye's reduced sensitivity at the limits of the visual spectrum; the missing color samples may be interpolated using a demosaicing algorithm, or ignored altogether by lossy compression. In order to improve color information, techniques like color co-site sampling use a piezo mechanism to shift the color sensor in pixel steps. The Foveon X3 sensor uses an array of layered pixel sensors, separating light via the inherent wavelength-dependent absorption property of silicon, such that every location senses all three color channels; this method is similar to how color film for photography works. 3CCD sensors use three discrete image sensors, with the color separation done by a dichroic prism. The dichroic elements provide a sharper color separation, thus improving color quality; because each sensor detects its full passband at full resolution, 3-CCD sensors produce better color quality and better low-light performance, and deliver a full 4:4:4 signal, which is preferred in television broadcasting, video editing and chroma-key visual effects. Special sensors are
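Returning to the Bayer pattern described above: the sampling it performs is easy to state in code. This NumPy sketch builds an RGGB mosaic from a full-color image, showing why two thirds of the color information at each photosite must later be reconstructed by demosaicing:

```python
import numpy as np

def bayer_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Sample an H x W x 3 image through an RGGB Bayer pattern:
    each photosite keeps exactly one of the three color channels."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green sites (two per 2x2 cell)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue sites
    return mosaic

img = np.random.default_rng(0).random((4, 4, 3))
print(bayer_mosaic(img))   # a demosaicing step must estimate the missing channels
```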
Digital camera
A digital camera or digicam is a camera that captures photographs in digital memory. Most cameras produced today are digital, and while there are still dedicated digital cameras, many more cameras are now incorporated into mobile devices such as portable touchscreen computers, which can, among many other purposes, use their cameras to initiate live videotelephony and directly edit and upload imagery to others. However, high-end, high-definition dedicated cameras are still used by professionals. Digital and film cameras share an optical system, typically using a lens with a variable diaphragm to focus light onto an image pickup device. The diaphragm and shutter admit the correct amount of light to the imager, just as with film, but the image pickup device is electronic rather than chemical. However, unlike film cameras, digital cameras can display images on a screen immediately after being recorded, and can store and delete images from memory. Many digital cameras can also record moving videos with sound, and some can perform other elementary image editing.
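The statement that the diaphragm and shutter together admit the correct amount of light can be made precise with the standard exposure-value relation EV = log2(N² / t), where N is the f-number and t the shutter time; a quick sketch:

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """EV = log2(N^2 / t); combinations with equal EV admit equal light."""
    return math.log2(f_number ** 2 / shutter_s)

# f/8 at 1/125 s and f/5.6 at 1/250 s admit (almost exactly) the same light:
print(exposure_value(8.0, 1 / 125))    # ~12.97
print(exposure_value(5.6, 1 / 250))    # ~12.94 (f/5.6 is a rounded sqrt(32))
```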
The history of the digital camera began with Eugene F. Lally of the Jet Propulsion Laboratory, who was thinking about how to use a mosaic photosensor to capture digital images; his 1961 idea was to take pictures of the planets and stars while travelling through space to give information about the astronauts' position. As with Texas Instruments employee Willis Adcock's filmless camera in 1972, the technology had yet to catch up with the concept. The Cromemco Cyclops was an all-digital camera introduced as a commercial product in 1975. Its design was published as a hobbyist construction project in the February 1975 issue of Popular Electronics magazine; it used a 32×32 metal-oxide-semiconductor sensor. Steven Sasson, an engineer at Eastman Kodak, invented and built the first self-contained electronic camera that used a charge-coupled device image sensor in 1975. Early uses were military and scientific. In 1986, Japanese company Nikon unveiled an early electronic single-lens reflex camera, the Nikon SVC. In the mid-to-late 1990s, digital cameras became common among consumers.
By the mid-2000s, digital cameras had largely replaced film cameras. In 2000, Sharp introduced the first digital camera phone, the J-SH04 J-Phone, in Japan. By the mid-2000s, higher-end cell phones had an integrated digital camera, and by the beginning of the 2010s almost all smartphones had one. The two major types of digital image sensor are CCD and CMOS. A CCD sensor has one amplifier for all the pixels, while each pixel in a CMOS active-pixel sensor has its own amplifier. Compared to CCDs, CMOS sensors use less power. Cameras with a small sensor often use a back-side-illuminated CMOS sensor. Overall final image quality is more dependent on the image processing capability of the camera than on sensor type. The resolution of a digital camera is limited by the image sensor that turns light into discrete signals. The brighter the image at a given point on the sensor, the larger the value read for that pixel. Depending on the physical structure of the sensor, a color filter array may be used, which requires demosaicing to recreate a full-color image.
The number of pixels in the sensor determines the camera's "pixel count". In a typical sensor, the pixel count is the product of the number of rows and the number of columns: for example, a 1,000 by 1,000 pixel sensor would have 1 megapixel. The final quality of an image depends on all the optical transformations in the chain of producing the image; Carl Zeiss points out that the weakest link in an optical chain determines the final image quality. In the case of a digital camera, a simplistic way of expressing it is that the lens determines the maximum sharpness of the image while the image sensor determines the maximum resolution; one could, for example, compare a lens with poor sharpness on a camera with high resolution to a lens with good sharpness on a camera with lower resolution. Since the first digital backs were introduced, there have been three main methods of capturing the image, each based on the hardware configuration of the sensor and color filters. Single-shot capture systems use either one sensor chip with a Bayer filter mosaic, or three separate image sensors which are exposed to the same image via a beam splitter.
Multi-shot exposes the sensor to the image in a sequence of three or more openings of the lens aperture. There are several methods of applying the multi-shot technique; the most common was to use a single image sensor with three filters passed in front of the sensor in sequence to obtain the additive color information. Another multiple-shot method is called microscanning; this method uses a single sensor chip with a Bayer filter and physically moves the sensor on the focus plane of the lens to construct a higher-resolution image than the native resolution of the chip. A third version combines the two methods without a Bayer filter on the chip. The third main method is called scanning because the sensor moves across the focal plane much like the sensor of an image scanner. The linear or tri-linear sensors in scanning cameras utilize only a single line of photosensors, or three lines for the three colors. Scanning may also be accomplished by rotating the whole camera. A digital rotating line camera offers images of very high total resolution.
The choice of method for a given capture is determined by the subject matter; it is inappropriate to attempt to capture a subject that moves with anything but a single-shot system.
Canon EF-S lens mount
The Canon EF-S lens mount is a derivative of the EF lens mount created for a subset of Canon digital single-lens reflex cameras with APS-C sized image sensors. It was released in 2003. Cameras with the EF-S mount are backward compatible with EF lenses and, as such, have a flange focal distance of 44.0 mm. Such cameras, however, have more clearance, allowing lens elements to be closer to the sensor than in the EF mount. Only Canon cameras released after 2003 with APS-C sized sensors support the EF-S mount. The "S" in EF-S has variously been described by Canon as coming from either "small image circle" or "short back focus". The combination of a smaller sensor and shorter back-focus distance enhances the possibilities for wide-angle and very wide-angle lenses, and enables lenses designed for the EF-S mount to be made smaller, lighter and less expensive. Although not all Canon EF-S lenses use this short back focus, they cannot be mounted on DSLRs with sensors larger than APS-C. However, some lenses produced by third-party manufacturers may feature the standard EF mount if they do not require the shorter back focus but only have a small image circle.
Such lenses will give noticeable vignetting if used on full-frame sensor cameras. To a lesser degree, vignetting also occurs with APS-H sensor sizes, such as on several cameras of the 1D series. By design, it is physically impossible to mount EF-S lenses on EF-only cameras: the increased proximity of the lens to the sensor means that on full-frame sensor or 35mm film EF cameras the lens itself would obstruct the mirror's movement and cause damage to the lens and/or camera. While it is possible to modify the lens such that the physical obstruction is removed, allowing for mounting to EF mount cameras, the rear of the lens would still obstruct the mirror. An additional reason is that the lenses produce a smaller image circle of illumination. The EF-S lens alignment mark is a small white rectangle, whereas the EF mount employs a small red dot; the lens will insert into the body when the alignment marks on each are matched, after which the lens can be rotated and locked into the operating position.
EF-S camera bodies have both EF and EF-S alignment marks, while EF bodies have only the EF mark. Some have reported success attaching EF-S lenses to full-frame bodies with the use of an extension tube. Attachment of EF-S lenses to EF bodies can also be accomplished by removing the small plastic ring at the rear of the lens. Although vignetting is still an issue, photos can be taken and infinity focus achieved; this modification comes with caveats, however, one being that on some lenses, like the EF-S 10-22mm at the 10mm setting, the rear element protrudes too far back toward the mirror of EF mount camera bodies. The 10D, D60 and earlier cameras share the EF-only mount with the full-frame EOS camera bodies and with the APS-H size EOS camera bodies, despite having a smaller sensor and therefore a smaller mirror. The EF-S lens mount is a relatively recent offering from Canon, so the selection of available lenses is limited compared to the full EF range, but since it is backward compatible with the EF mount it can still accept all EF lenses. The variety of EF-S prime lenses is limited in comparison to EF-S zoom lenses, with three primes to nine zooms.
EF-S lenses are popular due to their lower cost, and the zoom lenses in particular are preferred by amateur photographers. As of April 2017, no EF-S lens has been produced with the "L" designation or with diffractive optics, and only three EF-S prime lenses have been produced.
Liquid-crystal display
A liquid-crystal display (LCD) is a flat-panel display or other electronically modulated optical device that uses the light-modulating properties of liquid crystals. Liquid crystals do not emit light directly, instead using a backlight or reflector to produce images in color or monochrome. LCDs are available to display arbitrary images or fixed images with low information content, which can be displayed or hidden, such as preset words and seven-segment displays, as in a digital clock. They use the same basic technology, except that arbitrary images are made up of a large number of small pixels, while other displays have larger elements. LCDs can either be normally on (positive) or off (negative), depending on the polarizer arrangement. For example, a character positive LCD with a backlight will have black lettering on a background that is the color of the backlight, while a character negative LCD will have a black background with the letters being of the same color as the backlight. Optical filters are added to white-on-blue LCDs to give them their characteristic appearance.
LCDs are used in a wide range of applications, including LCD televisions, computer monitors, instrument panels, aircraft cockpit displays, and indoor and outdoor signage. Small LCD screens are common in portable consumer devices such as digital cameras, watches and mobile telephones, including smartphones. LCD screens are also used on consumer electronics products such as DVD players, video game devices and clocks. LCD screens have replaced bulky cathode ray tube (CRT) displays in nearly all applications, and are available in a wider range of screen sizes than CRT and plasma displays, from tiny digital watches to large television receivers. LCDs are slowly being replaced by OLEDs, which can be made into different shapes and have a faster response time, a wider color gamut, virtually infinite color contrast, wider viewing angles, lower weight for a given display size, a slimmer profile and lower power consumption. OLEDs, however, are more expensive for a given display size due to the expensive electroluminescent materials or phosphors that they use.
Due to the use of phosphors, OLEDs also suffer from screen burn-in, and there is currently no way to recycle OLED displays, whereas LCD panels can be recycled, although the technology required to recycle LCDs is not yet widespread. One attempt to keep LCDs competitive is the quantum-dot display, which offers similar performance to an OLED display, but the quantum-dot sheet that gives these displays their characteristics cannot yet be recycled. Since LCD screens do not use phosphors, they rarely suffer image burn-in when a static image is displayed on a screen for a long time, e.g. the table frame for an airline flight schedule on an indoor sign. LCDs are, however, susceptible to image persistence. The LCD screen can be disposed of more safely than a CRT, and its low electrical power consumption enables it to be used in battery-powered electronic equipment more efficiently than CRTs can be. By 2008, annual sales of televisions with LCD screens exceeded sales of CRT units worldwide, and the CRT became obsolete for most purposes.
Each pixel of an LCD consists of a layer of molecules aligned between two transparent electrodes and two polarizing filters, the axes of transmission of which are perpendicular to each other. Without the liquid crystal between the polarizing filters, light passing through the first filter would be blocked by the second polarizer. Before an electric field is applied, the orientation of the liquid-crystal molecules is determined by the alignment at the surfaces of the electrodes. In a twisted nematic (TN) device, the surface alignment directions at the two electrodes are perpendicular to each other, so the molecules arrange themselves in a helical structure, or twist; this induces the rotation of the polarization of the incident light, and the device appears gray. If the applied voltage is large enough, the liquid crystal molecules in the center of the layer are completely untwisted and the polarization of the incident light is not rotated as it passes through the liquid crystal layer; this light will then be polarized perpendicular to the second filter, and thus be blocked, and the pixel will appear black.
By controlling the voltage applied across the liquid crystal layer in each pixel, light can be allowed to pass through in varying amounts, thus constituting different levels of gray. Color LCD systems use the same technique, with color filters used to generate red, green, and blue pixels. The optical effect of a TN device in the voltage-on state is far less dependent on variations in the device thickness than that in the voltage-off state. Because of this, TN displays with low information content and no backlighting are operated between crossed polarizers such that they appear bright with no voltage. As most 2010-era LCDs are used in television sets and smartphones, they have high-resolution matrix arrays of pixels to display arbitrary images using backlighting with a dark background. When no image is displayed, different arrangements are used. For this purpose, TN LCDs are operated between parallel polarizers, whereas IPS LCDs feature crossed polarizers. In many applications IPS LCDs have replaced TN LCDs, in particular in smartphones.
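The voltage-to-gray-level behaviour described above can be caricatured with a toy Malus-law model of a normally-white TN pixel; the threshold and saturation voltages are assumed values, and real response curves are measured rather than derived:

```python
import math

def tn_transmission(v: float, v_th: float = 1.5, v_sat: float = 3.0) -> float:
    """Toy model: below v_th the 90-degree twist rotates the polarization and
    the pixel is bright; above v_sat the untwisted layer leaves the light
    crossed with the second polarizer and the pixel is dark."""
    frac = min(max((v - v_th) / (v_sat - v_th), 0.0), 1.0)
    return math.cos(frac * math.pi / 2) ** 2   # Malus-law style falloff

for v in (0.0, 1.5, 2.0, 2.5, 3.0):
    print(f"{v:.1f} V -> transmission {tn_transmission(v):.2f}")
```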
Field of view
The field of view is the extent of the observable world that is seen at any given moment. In the case of optical instruments or sensors, it is a solid angle through which a detector is sensitive to electromagnetic radiation. In the context of human vision, the term "field of view" is typically used only in the sense of a restriction to what is visible by external apparatus, such as when wearing spectacles or virtual reality goggles. Note that eye movements are allowed in this definition but do not change the field of view. If the analogy of the eye's retina working as a sensor is drawn upon, the corresponding concept in human vision is the visual field, defined as "the number of degrees of visual angle during stable fixation of the eyes". Note that eye movements are excluded in that definition. Different animals have different visual fields, depending, among other things, on the placement of their eyes. Humans have a forward-facing horizontal arc of slightly more than 210 degrees in their visual field, while some birds have a complete or nearly complete 360-degree visual field.
The vertical range of the visual field in humans is around 150 degrees. The range of visual abilities is not uniform across the visual field, and varies from animal to animal. For example, binocular vision, which is the basis for stereopsis and is important for depth perception, covers 114 degrees of the visual field in humans; some birds have a scant 20 degrees of binocular vision. Color vision and the ability to perceive shape and motion also vary across the visual field. The physiological basis for this is the much higher concentration of color-sensitive cone cells and color-sensitive parvocellular retinal ganglion cells in the fovea (the central region of the retina), together with a larger representation in the visual cortex, in comparison with the higher concentration of color-insensitive rod cells and motion-sensitive magnocellular retinal ganglion cells in the visual periphery, which have a smaller cortical representation. Since cone cells require brighter light sources to be activated, the result of this distribution is that peripheral vision is much more sensitive at night relative to foveal vision.
Many optical instruments, particularly binoculars or spotting scopes, are advertised with their field of view specified in one of two ways: angular field of view and linear field of view. Angular field of view is specified in degrees, while linear field of view is a ratio of lengths. For example, binoculars with a 5.8-degree field of view might be advertised as having a field of view of 102 mm per meter. As long as the FOV is less than about 10 degrees or so, the following approximation formulas allow one to convert between linear and angular field of view. Let $A$ be the angular field of view in degrees, and let $M$ be the linear field of view in millimeters per meter. Then, using the small-angle approximation:

$$A \approx \frac{360^\circ}{2\pi} \cdot \frac{M}{1000} \approx 0.0573 \times M$$

$$M \approx \frac{2\pi \cdot 1000}{360^\circ} \cdot A \approx 17.45 \times A$$

In machine vision, the lens focal length and image sensor size set up a fixed relationship between the field of view and the working distance. Field of view is the area of the inspection captured on the camera's imager; the size of the field of view and the size of the camera's imager directly affect the image resolution.
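The two approximation formulas translate directly into code; a small sketch, reusing the binocular example from above:

```python
import math

def linear_from_angular(a_deg: float) -> float:
    """Linear FOV in mm per metre from angular FOV in degrees (small angle)."""
    return 2 * math.pi * 1000 / 360 * a_deg          # ~17.45 * A

def angular_from_linear(m_mm_per_m: float) -> float:
    """Angular FOV in degrees from linear FOV in mm per metre (small angle)."""
    return 360 / (2 * math.pi) * m_mm_per_m / 1000   # ~0.0573 * M

print(linear_from_angular(5.8))    # ~101.2 mm/m, close to the advertised 102
print(angular_from_linear(102))    # ~5.84 degrees
```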
Working distance is the distance between the back of the lens and the target object. In remote sensing, the solid angle through which a detector element is sensitive to electromagnetic radiation at any one time is called the instantaneous field of view, or IFOV. A measure of the spatial resolution of a remote sensing imaging system, it is expressed as dimensions of visible ground area, for some known sensor altitude. Single-pixel IFOV is closely related to the concepts of resolved pixel size, ground resolved distance, ground sample distance and modulation transfer function. In astronomy, the field of view is expressed as an angular area viewed by the instrument, in square degrees, or for higher-magnification instruments, in square arc-minutes. For reference, the Wide Field Channel on the Advanced Camera for Surveys on the Hubble Space Telescope has a field of view of 10 sq. arc-minutes, and the High Resolution Channel of the same instrument has a field of view of 0.15 sq. arc-minutes. Ground-based survey telescopes have much wider fields of view.
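Under the same small-angle reasoning, the ground footprint of a single detector element is just the IFOV times the altitude; a minimal sketch with assumed, illustrative numbers:

```python
def ground_footprint_m(ifov_rad: float, altitude_m: float) -> float:
    """Side length of the ground patch seen by one detector element,
    using the small-angle approximation (footprint = IFOV * altitude)."""
    return ifov_rad * altitude_m

# e.g. an assumed 0.1 mrad IFOV from 700 km altitude resolves ~70 m patches:
print(ground_footprint_m(0.1e-3, 700_000))   # 70.0
```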
The photographic plates used by the UK Schmidt Telescope had a field of view of 30 sq. degrees. The 1.8 m Pan-STARRS telescope, with the most advanced digital camera to date, has a field of view of 7 sq. degrees. In the near infra-red, WFCAM on UKIRT has a field of view of 0.2 sq. degrees, and the VISTA telescope has a field of view of 0.6 sq. degrees. Until recently, digital cameras could only cover a small field of view compared to photographic plates, although they beat photographic plates in quantum efficiency.