1.
Angle of view
–
In photography, angle of view describes the angular extent of a given scene that is imaged by a camera. It is used interchangeably with the more general term field of view. It is important to distinguish the angle of view from the angle of coverage, which describes the angular range over which the lens projects a usable image; typically the image circle produced by a lens is large enough to cover the film or sensor completely, possibly including some vignetting toward the edge.

A camera's angle of view depends not only on the lens but also on the size of the film or sensor. Digital sensors are usually smaller than 35 mm film, which gives the lens a narrower angle of view than it would have with 35 mm film; in everyday digital cameras, the crop factor can range from around 1 to 1.6 or more. For lenses projecting rectilinear images of distant objects, the angle of view can be calculated from the chosen image dimension and the effective focal length. Calculations for lenses producing non-rectilinear images are more complex and, in the end, not very useful in most practical applications.

Angle of view may be measured horizontally, vertically, or diagonally. For example, for 35 mm film, which is 36 mm wide and 24 mm high, d = 36 mm would be used to obtain the horizontal angle of view and d = 24 mm for the vertical angle. For a lens focused at infinity, α = 2 arctan(d / 2f). Because arctan is not a linear function, the angle of view does not vary quite linearly with the reciprocal of the focal length. However, except for wide-angle lenses, it is reasonable to approximate α ≈ d/f radians, or 180d/(πf) degrees. For a lens focused at infinity, the effective focal length is equal to the stated focal length of the lens. Angle of view can also be determined using FOV tables or paper or software lens calculators.

As an example, consider a 35 mm camera with a lens having a focal length of F = 50 mm. The dimensions of the 35 mm image format are 24 mm × 36 mm. Here α is defined to be the angle of view, since it is the angle enclosing the largest object whose image can fit on the film. We want to find the relationship between the angle α, the opposite side of the triangle, d/2 (half the film dimension), and the adjacent side, S2 (the distance from the lens to the image plane). Using basic trigonometry, we find tan(α/2) = (d/2)/S2, so α = 2 arctan(d / 2S2). Unless we are doing macro photography, we can neglect the difference between S2 and F.
From the thin lens formula, 1/F = 1/S1 + 1/S2, so when the subject distance S1 is comparable to F, as in macro photography, S2 differs appreciably from F and must be used in its place. A second effect which comes into play in macro photography is lens asymmetry: an asymmetric lens causes an offset between the nodal plane and pupil positions, which also affects the angle of view.
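The rectilinear formula above is straightforward to evaluate directly. The following sketch (the function name is illustrative, not from any particular library) computes α = 2 arctan(d / 2f) for a lens focused at infinity:

```python
import math

def angle_of_view(d_mm, f_mm):
    """Angle of view in degrees for a rectilinear lens focused at infinity.

    d_mm: dimension of the film or sensor (horizontal, vertical, or diagonal).
    f_mm: focal length of the lens.
    """
    return math.degrees(2 * math.atan(d_mm / (2 * f_mm)))

# 35 mm film (36 mm x 24 mm) with a 50 mm lens:
horizontal = angle_of_view(36, 50)  # ~39.6 degrees
vertical = angle_of_view(24, 50)    # ~27.0 degrees
```

For the 35 mm example in the text this gives roughly 39.6° horizontally and 27.0° vertically, and halving the focal length widens the angle, as expected.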
2.
Cropping (image)
–
Cropping is the removal of the outer parts of an image to improve framing, accentuate subject matter or change the aspect ratio. Depending on the application, this may be performed on a photograph, artwork or film footage. The practice is common in the film, broadcasting, printing, graphic design and photography industries, where cropping means the removal of unwanted areas from a photographic or illustrated image. It is considered one of the few editing actions permissible in modern photojournalism, along with tonal balance and colour correction. A crop made from the top and bottom of a photograph may produce an aspect ratio that mimics the panoramic format; true panoramic formats, however, are not cropped as such, but rather the product of highly specialised optical configurations and camera design.

Aspect ratio concerns are also an issue in film making. Rather than cropping, the cinematographer traditionally uses mattes to increase the latitude for alternative aspect ratios in projection and broadcast. Cropping archive material has become standard practice in United Kingdom TV shows in which many archive clips are used, which gives them a zoomed-in, cramped image with significantly reduced resolution. Another option is pillarboxing, in which bands are placed down the sides of the screen. Various methods may be used following cropping, or on the original image; using texture synthesis, for example, it is possible to artificially add a band around an image, synthetically uncropping it. An uncrop plug-in exists for the GIMP image editor.
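The geometry of a ratio-changing crop is simple arithmetic. As a sketch (the helper name is hypothetical), the largest centered crop of an image matching a target aspect ratio can be computed like this:

```python
def center_crop_box(width, height, target_w, target_h):
    """Largest centered crop of a width x height image with ratio target_w:target_h.

    Returns (left, top, right, bottom) in pixel coordinates.
    """
    target = target_w / target_h
    if width / height > target:            # frame too wide: trim the sides
        new_w, new_h = round(height * target), height
    else:                                  # frame too tall: trim top and bottom
        new_w, new_h = width, round(width / target)
    left = (width - new_w) // 2
    top = (height - new_h) // 2
    return (left, top, left + new_w, top + new_h)

# Crop a 4:3 frame (4000 x 3000) to a 2.39:1 panoramic-style ratio:
box = center_crop_box(4000, 3000, 239, 100)  # (0, 663, 4000, 2337)
```

The resulting box can be passed to any image library's crop routine; the top-and-bottom trim here is exactly the panoramic-mimicking crop described above.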
3.
Aspect ratio
–
The aspect ratio of a geometric shape is the ratio of its sizes in different dimensions. For example, the aspect ratio of a rectangle is the ratio of its longer side to its shorter side, the ratio of width to height. The aspect ratio is expressed as two numbers separated by a colon (x:y). The values x and y do not represent actual widths and heights but, rather, their proportion; as an example, 8:5, 16:10 and 1.6:1 are three ways of representing the same aspect ratio. In objects of more than two dimensions, such as hyperrectangles, the aspect ratio can still be defined as the ratio of the longest side to the shortest side. The term is most commonly used with reference to graphics and images: image aspect ratio and display aspect ratio.

A square has the smallest possible rectangle aspect ratio, 1:1. An ellipse with an aspect ratio of 1:1 is a circle. The Diameter-Width Aspect Ratio (DWAR) of a compact set is the ratio of its diameter to its width; a circle has the minimal DWAR, which is 1, while a square has a DWAR of sqrt(2). The Cube-Volume Aspect Ratio (CVAR) of a set is the d-th root of the ratio of the d-volume of the smallest enclosing axes-parallel d-cube to the set's own d-volume. A square has the minimal CVAR, which is 1; a circle has a CVAR of sqrt(4/π). An axis-parallel rectangle of width W and height H, where W > H, has a CVAR of sqrt(W²/(WH)) = sqrt(W/H). If the dimension d is fixed, then all reasonable definitions of aspect ratio are equivalent to within constant factors. In digital images there is a subtle distinction between the Display Aspect Ratio and the Storage Aspect Ratio.

See also: Ratio, Equidimensional ratios in 3D, List of film formats, Squeeze mapping, Vertical orientation.
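Reducing pixel dimensions to the simplest x:y form is a greatest-common-divisor computation; a minimal sketch (the function name is illustrative):

```python
from math import gcd

def aspect_ratio(x, y):
    """Reduce pixel dimensions x:y to the simplest integer ratio."""
    g = gcd(x, y)
    return (x // g, y // g)

# 1920x1200 and 1280x800 both reduce to 8:5, i.e. the same shape as 16:10:
assert aspect_ratio(1920, 1200) == (8, 5)
assert aspect_ratio(1280, 800) == (8, 5)
```

This makes concrete the point that x and y describe a proportion, not actual widths and heights.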
4.
Interpolation
–
In the mathematical field of numerical analysis, interpolation is a method of constructing new data points within the range of a discrete set of known data points. In engineering and science, one often has a number of data points, obtained by sampling or experimentation, which represent the values of a function, and it is often required to interpolate the value of that function for an intermediate value of the independent variable. A different but related problem is the approximation of a complicated function by a simple function: suppose the formula for some given function is known, but too complex to evaluate efficiently; a few known data points from the original function can then be used to create an interpolation based on a simpler function. In a quite different usage, in mathematical analysis, the classical results about interpolation of operators are the Riesz–Thorin theorem and the Marcinkiewicz theorem, with many other subsequent results.

For example, suppose we have a table that gives some values of an unknown function f. Interpolation provides a means of estimating the function at intermediate points. There are many different interpolation methods, some of which are described below; some of the concerns to take into account when choosing an appropriate algorithm are how accurate the method is, how expensive it is, how smooth the interpolant is, and how many data points are needed. The simplest interpolation method is to locate the nearest data value. One of the simplest smooth methods is linear interpolation. Consider the above example of estimating f(2.5): since 2.5 is midway between 2 and 3, it is reasonable to take f(2.5) midway between f(2) = 0.9093 and f(3) = 0.1411, which yields 0.5252. One disadvantage is that the interpolant is not differentiable at the data points xk; the following error estimate shows that linear interpolation is also not very precise. Denote the function which we want to interpolate by g, and suppose x lies between xa and xb. Then the linear interpolation error is |f(x) − g(x)| ≤ C(xb − xa)², where C = (1/8) max over r in [xa, xb] of |g″(r)|.
In words, the error is proportional to the square of the distance between the data points; the error in some other methods, including polynomial interpolation and spline interpolation, is proportional to higher powers of the distance between the data points. These methods also produce smoother interpolants. Polynomial interpolation is a generalization of linear interpolation: note that the linear interpolant is a linear function, which we now replace with a polynomial of higher degree. Consider again the problem given above. A sixth-degree polynomial goes through all seven points, and substituting x = 2.5 we find that f(2.5) ≈ 0.5965. Generally, if we have n data points, there is exactly one polynomial of degree at most n−1 going through all the data points.
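The linear interpolation step in the worked example can be sketched in a few lines (the function name is illustrative):

```python
def lerp(x, x0, y0, x1, y1):
    """Linearly interpolate between known points (x0, y0) and (x1, y1)."""
    t = (x - x0) / (x1 - x0)
    return (1 - t) * y0 + t * y1

# Estimate f(2.5) from the table values f(2) = 0.9093 and f(3) = 0.1411:
estimate = lerp(2.5, 2, 0.9093, 3, 0.1411)  # 0.5252
```

Since 2.5 is the midpoint, the result is simply the average of the two neighbouring values, as the text describes.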
5.
Optical resolution
–
Optical resolution describes the ability of an imaging system to resolve detail in the object that is being imaged. An imaging system may have many individual components, including a lens and recording and display components; each of these contributes to the resolution of the system. Resolution depends on the distance between two distinguishable radiating points. The sections below describe the theoretical estimates of resolution, but the real values may differ. The results below are based on mathematical models of Airy discs, which assume an adequate level of contrast; in low-contrast systems, the resolution may be lower than predicted by the theory outlined below. Real optical systems are complex, and practical difficulties often increase the distance between distinguishable point sources.

The resolution of a system is based on the minimum distance r at which the points can be distinguished as individuals. Several standards are used to determine, quantitatively, whether or not the points can be distinguished. One of the methods specifies that, on the line between the center of one point and the next, the contrast between the maximum and minimum intensity be at least 26% lower than the maximum. This corresponds to the overlap of one Airy disk on the first dark ring in the other. This standard for separation is known as the Rayleigh criterion. In symbols, the distance is defined as follows: r = 1.22λ / (2n sin θ) = 0.61λ / NA. In confocal laser-scanned microscopes, the full-width at half-maximum (FWHM) of the point spread function is often used to avoid the difficulty of measuring the Airy disc. This, combined with the raster-scanned illumination pattern, results in better resolution: R = 0.4λ / NA. Also common in the literature is a formula for resolution that treats the above-mentioned concerns about contrast differently. The resolution predicted by this formula is proportional to the Rayleigh-based formula, so for estimating theoretical resolution it may be adequate.
R = λ / (2n sin θ) = λ / (2 NA). When a condenser is used to illuminate the sample, the shape of the pencil of light emanating from the condenser must also be included: R = 1.22λ / (NA_obj + NA_cond). In a properly configured microscope, NA_cond = NA_obj. The above estimates of resolution are specific to the case in which two identical, very small samples radiate incoherently in all directions.
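The Rayleigh-criterion formula above, r = 0.61λ / NA, is easy to evaluate numerically; a minimal sketch (the function name and the example objective are illustrative):

```python
def rayleigh_resolution(wavelength_nm, na):
    """Minimum resolvable separation (same units as wavelength) by the
    Rayleigh criterion: r = 0.61 * wavelength / NA."""
    return 0.61 * wavelength_nm / na

# Green light (550 nm) with a hypothetical 1.4 NA oil-immersion objective:
r = rayleigh_resolution(550, 1.4)  # ~240 nm
```

Halving the numerical aperture doubles the minimum resolvable distance, which is why high-NA objectives resolve finer detail.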
6.
Lossy compression
–
In information technology, lossy compression or irreversible compression is the class of data encoding methods that uses inexact approximations and partial data discarding to represent the content. These techniques are used to reduce data size for storage, handling and transmission; different versions of the photo of the cat above show how higher degrees of approximation create coarser images as more details are removed. This is opposed to lossless data compression (reversible compression), which does not degrade the data. The amount of data reduction possible using lossy compression is often much higher than through lossless techniques. Well-designed lossy compression technology often reduces file sizes significantly before degradation is noticed by the end-user; even when noticeable by the user, further data reduction may be desirable. Lossy compression is most commonly used to compress multimedia data, especially in applications such as streaming media. By contrast, lossless compression is required for text and data files, such as bank records.

A picture, for example, is converted to a digital file by considering it to be an array of dots and specifying the color of each dot. If the picture contains an area of the same color, it can be compressed without loss by saying "200 red dots" instead of "red dot, red dot, ..., red dot". The original data contains a certain amount of information, and there is a lower limit to the size of file that can carry all the information. Basic information theory says that there is an absolute limit in reducing the size of this data: when data is compressed, its entropy increases, and it cannot increase indefinitely. As an intuitive example, most people know that a compressed ZIP file is smaller than the original file, but repeatedly compressing the same file will not reduce the size to nothing. Most compression algorithms can recognize when further compression would be pointless. In many cases, however, files or data streams contain more information than is needed for a particular purpose.
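The "200 red dots" example above is run-length encoding, a lossless scheme; a minimal sketch:

```python
def rle_encode(pixels):
    """Run-length encode a sequence of pixel values as [(count, value), ...]."""
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, p])       # start a new run
    return [(count, value) for count, value in runs]

def rle_decode(runs):
    """Expand [(count, value), ...] back into the original pixel sequence."""
    return [value for count, value in runs for _ in range(count)]

row = ["red"] * 200 + ["blue"] * 3
encoded = rle_encode(row)             # [(200, 'red'), (3, 'blue')]
assert rle_decode(encoded) == row     # lossless round trip
```

Because the decode reproduces the input exactly, no information is discarded; lossy methods go further by dropping detail the viewer is unlikely to notice.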
Developing lossy compression techniques as closely matched to human perception as possible is a complex task. The terms "irreversible" and "reversible" are preferred over "lossy" and "lossless" respectively for some applications, such as medical image compression, to circumvent the negative implications of "loss". The type and amount of loss can affect the utility of the images: artifacts or undesirable effects of compression may be clearly discernible yet the result still useful for the intended purpose, and lossy compressed images may even be visually lossless, as in the case of much medical imaging. Lossily compressed audio can reduce file size while maintaining both bit rate and bit depth, whereas uncompressed audio can only reduce file size by lowering bit rate or depth; the compression becomes a loss of the least significant data. From this point of view, perceptual encoding is not essentially about discarding data, but about a better representation of it, allocating precision to components such as red, green and blue in proportion to human sensitivity to each component.
7.
Image compression
–
Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic compression methods. Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. Lossy compression methods, especially when used at low bit rates, introduce compression artifacts, but are suitable for natural images such as photographs in applications where minor loss of fidelity is acceptable to achieve a substantial reduction in bit rate. Lossy compression that produces negligible differences may be called visually lossless.

One lossy method is to reduce the color space to the most common colors in the image: the selected colors are specified in the color palette in the header of the compressed image, and each pixel just references the index of a color in the color palette. Transform coding is the most commonly used method; in particular, a Fourier-related transform such as the Discrete Cosine Transform (N. Ahmed, T. Natarajan and K. R. Rao, "Discrete Cosine Transform", IEEE Trans. Computers, 1974) is widely used. The DCT is sometimes referred to as "DCT-II" in the context of a family of discrete cosine transforms. The more recently developed wavelet transform is also used extensively, followed by quantization and entropy coding.

Scalability refers to a quality reduction achieved by manipulation of the bitstream or file; other names for scalability are progressive coding or embedded bitstreams. Despite its contrary nature, scalability may also be found in lossless codecs, usually in the form of coarse-to-fine pixel scans. Scalability is especially useful for previewing images while downloading them or for providing variable-quality access to, e.g., databases. There are several types of scalability. Quality progressive (or layer progressive): the bitstream successively refines the reconstructed image. Resolution progressive: first encode a lower image resolution, then encode the difference to higher resolutions.
Component progressive: first encode the grey component, then colour. Region-of-interest coding: certain parts of the image are encoded with higher quality than others; this may be combined with scalability. Compressed data may also contain meta information about the image which may be used to categorize, search, or browse images; such information may include color and texture statistics and small preview images. Compression algorithms require different amounts of processing power to encode and decode, and some high-compression algorithms require high processing power. The quality of a compression method often is measured by the peak signal-to-noise ratio (PSNR), which measures the amount of noise introduced by lossy compression, although the viewer's subjective judgment is also an important measure.
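The color-palette scheme described above (a palette in the header, with each pixel storing only an index into it) can be sketched as follows; the helper is hypothetical and ignores palette-size limits:

```python
def to_indexed(pixels):
    """Convert a list of (r, g, b) pixels into a palette plus per-pixel indices."""
    palette = []
    index_of = {}
    indices = []
    for p in pixels:
        if p not in index_of:
            index_of[p] = len(palette)   # assign the next free palette slot
            palette.append(p)
        indices.append(index_of[p])
    return palette, indices

pixels = [(255, 0, 0), (255, 0, 0), (0, 0, 255)]
palette, indices = to_indexed(pixels)
# Each index is much smaller than a full (r, g, b) triple, and decoding is a lookup:
assert [palette[i] for i in indices] == pixels
```

Real indexed formats additionally quantize the image down to the most common colors so the palette stays small (e.g., 256 entries); this sketch only shows the index/palette split itself.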
8.
Camera phone
–
A camera phone is a mobile phone which is able to capture photographs; most camera phones also record video. Most camera phones are simpler than separate digital cameras. Their usual fixed-focus lenses and smaller sensors limit their performance in poor lighting, and, lacking a physical shutter, some have a long shutter lag. Photoflash is typically provided by an LED source, which illuminates less intensely over a longer exposure time than a bright, near-instantaneous xenon flash. Optical zoom and tripod screws are rare, and none has a hot shoe for attaching an external flash; some also lack a USB connection or a removable memory card. Most have Bluetooth and WiFi, and can make geotagged photographs. Some of the more expensive camera phones have only a few of these technical disadvantages, and, with bigger image sensors, their capabilities approach those of low-end point-and-shoot cameras. In the smartphone era, the rising sales of camera phones caused point-and-shoot camera sales to peak around 2010 and decline thereafter. Most model lines improve their cameras every year or two.

Most smartphones only have a menu choice to start a camera application program and an on-screen button to activate the shutter; some also have a separate camera button, for quickness and convenience. The principal advantages of camera phones are cost and compactness; indeed, for a user who carries a mobile phone anyway, the addition is negligible. Smartphones that are camera phones may run mobile applications to add capabilities such as geotagging and image stitching. However, the touchscreen, being a general-purpose control, lacks the agility of a separate camera's dedicated buttons. Nearly all camera phones use CMOS image sensors, due to reduced power consumption compared to the CCD type, which is also used in a few. Some camera phones even use the more expensive back-side-illuminated (BSI) CMOS sensors, which use less energy than conventional CMOS, although they cost more than CMOS and CCD. The latest generation of camera phones also applies software correction for distortion and vignetting.
Most camera phones have a digital zoom feature. An external camera can be added, coupled wirelessly to the phone by Wi-Fi; such cameras are compatible with most smartphones. Images are usually saved in the JPEG file format, except on some high-end camera phones which also offer RAW capture, a facility added in Android 5.0 Lollipop. Windows Phones can be configured to operate as a camera even when the phone is asleep. An external flash can be employed to improve performance. Phones usually store pictures and video in a directory called /DCIM in the internal memory.
9.
Focal length
–
The focal length of an optical system is a measure of how strongly the system converges or diverges light. For an optical system in air, it is the distance over which initially collimated rays are brought to a focus. A system with a shorter focal length has greater optical power than one with a long focal length; that is, it bends the rays more sharply. For a thin lens in air, the focal length is the distance from the center of the lens to the principal foci of the lens. For a converging lens, the focal length is positive, and is the distance at which a beam of collimated light will be focused to a single spot. For a diverging lens, the focal length is negative, and is the distance to the point from which a collimated beam appears to be diverging after passing through the lens.

The focal length of a converging lens can be easily measured by using it to form an image of a distant light source on a screen. The lens is moved until a sharp image is formed on the screen. In this case 1/u is negligible, and the focal length is then given by f ≈ v. Back focal length (or back focal distance) is the distance from the vertex of the last optical surface of the system to the rear focal point. For an optical system in air, the effective focal length gives the distance from the front and rear principal planes to the corresponding focal points. If the surrounding medium is not air, then the distance is multiplied by the refractive index of the medium. Some authors call these distances the front/rear focal lengths, distinguishing them from the front/rear focal distances defined above. In general, the effective focal length (EFL) is the value that describes the ability of the optical system to focus light; the other parameters are used in determining where an image will be formed for a given object position. The quantity 1/f is also known as the optical power of the lens. The corresponding front focal distance is FFD = f (1 + (n − 1)d / (nR2)). In the sign convention used here, the value of R1 will be positive if the first lens surface is convex, and negative if it is concave; the value of R2 is negative if the second surface is convex, and positive if it is concave.
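The screen-and-distant-source measurement described above follows from the thin lens formula 1/f = 1/u + 1/v; a minimal sketch (the function name and distances are illustrative):

```python
def thin_lens_f(u, v):
    """Focal length from object distance u and image distance v: 1/f = 1/u + 1/v."""
    return 1 / (1 / u + 1 / v)

# A very distant source (u large) makes 1/u negligible, so f ~ v:
f_far = thin_lens_f(10_000_000, 50.0)   # ~50 mm: the screen distance is the focal length
f_near = thin_lens_f(1000.0, 52.6)      # a closer object: f is noticeably less than v
```

This is why focusing a window across the room onto a wall gives a quick, serviceable estimate of a converging lens's focal length.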