1.
Adobe Photoshop
–
Adobe Photoshop is a raster graphics editor developed and published by Adobe Systems for macOS and Windows. Photoshop was created in 1988 by Thomas and John Knoll. It can edit and compose raster images in multiple layers and supports masks, alpha compositing, and several color models, including RGB, CMYK, CIELAB, spot color, and duotone. Photoshop has vast support for file formats but also uses its own PSD format. In addition to raster graphics, it has limited abilities to edit or render text, vector graphics, and 3D graphics. Photoshop's feature set can be expanded by Photoshop plug-ins, programs developed and distributed independently of Photoshop that can run inside it. Photoshop's naming scheme was initially based on version numbers, and Photoshop CS3 through CS6 were also distributed in two different editions, Standard and Extended. In June 2013, with the introduction of Creative Cloud branding, Photoshop's licensing scheme was changed to a software-as-a-service rental model and the CS suffixes were replaced with CC. Historically, Photoshop was bundled with software such as Adobe ImageReady, Adobe Fireworks, Adobe Bridge, and Adobe Device Central. Alongside Photoshop, Adobe also develops and publishes Photoshop Elements, Photoshop Lightroom, and Photoshop Express; collectively, they are branded as the Adobe Photoshop Family. It is currently licensed software. Photoshop was developed in 1987 by the American brothers Thomas and John Knoll, who sold the distribution license to Adobe Systems Incorporated in 1988. Thomas Knoll, a PhD student at the University of Michigan, began writing a program on his Macintosh Plus to display images on a monochrome display. This program, called Display, caught the attention of his brother John Knoll, an Industrial Light & Magic employee, and Thomas took a six-month break from his studies in 1988 to collaborate with his brother on the program. Thomas renamed the program ImagePro, but the name was already taken. During this time, John traveled to Silicon Valley and gave a demonstration of the program to engineers at Apple and to Russell Brown, art director at Adobe. Both showings were successful, and Adobe decided to purchase the license to distribute the program in September 1988. While John worked on plug-ins in California, Thomas remained in Ann Arbor writing code. Photoshop 1.0 was released on 19 February 1990 exclusively for Macintosh. The Barneyscan version included advanced color editing features that were stripped from the first version Adobe shipped, and the handling of color slowly improved with each release from Adobe. At the time Photoshop 1.0 was released, digital retouching on dedicated high-end systems, such as the Scitex, cost around $300 an hour for basic photo retouching. Photoshop files have the default file extension .PSD, which stands for Photoshop Document. A PSD file stores an image with support for most imaging options available in Photoshop, including layers with masks, transparency, text, alpha channels, spot colors, and clipping paths; this is in contrast to many other file formats that restrict content to provide streamlined, predictable functionality. A PSD file has a maximum height and width of 30,000 pixels.
2.
Digital Negative
–
Digital Negative (DNG) is a patented, open, non-free, lossless raw image format developed by Adobe and used for digital photography. It was launched on September 27, 2004, accompanied by the first version of the DNG specification, plus various products, including a free-of-charge DNG converter utility. All Adobe photo manipulation software released since the launch supports DNG. DNG is based on the TIFF/EP standard format and mandates significant use of metadata. Adobe has stated openness to having DNG controlled by a standards body if there were consensus for it, and has submitted DNG to ISO for incorporation into ISO's revision of TIFF/EP. Given the existence of other raw image formats, Adobe created DNG with particular objectives in mind; these objectives, the associated characteristics of DNG, and assessments of whether the objectives are met are described below. Increasingly, professional archivists and conservationists working for respected organizations have recommended DNG, and these objectives are repeatedly emphasized in Adobe documents. Digital image preservation: to be suitable for the purpose of preserving digital images as an authentic resource for future generations. Assessment: the US Library of Congress presents DNG as an alternative to other raw image formats, listing RAW under "less desirable file formats" and DNG among suggested alternatives. An unresolved restriction is that any edit/development settings stored in the DNG file by a product are unlikely to be recognized by a product from a different company. In-camera use by manufacturers: to be suitable for many camera manufacturers to use as a native or optional raw image format in many cameras. Assessment: about 12 camera manufacturers and about 38 camera models have used DNG in-camera, and raw image formats for more than 230 camera models can be converted to DNG. Multi-vendor interoperability: to be suitable for workflows where different hardware and software components share raw image files and/or transmit and receive them. Self-contained file format: a DNG file contains the data needed to render an image without needing additional knowledge of the characteristics of the camera. Version control scheme: DNG has a version scheme built into it that allows the DNG specification, DNG writers, and DNG readers to evolve. Lossless and lossy compression: DNG supports optional lossless and also lossy compression, and the lossy compression's losses are practically indistinguishable in real-world images. A DNG file always contains data for one main image, plus metadata, and optionally contains at least one JPEG preview. It normally has the extension .dng or .DNG. DNG conforms to TIFF/EP and is structured according to TIFF. DNG supports various formats of metadata and specifies a set of mandated metadata. DNG is both a raw image format and a format that supports non-raw, or partly processed, images; the latter format is known as Linear DNG. All images that can be supported as raw images can also be supported as Linear DNG, which also accommodates images from the Foveon X3 sensor or similar, hence especially Sigma cameras. DNG can contain raw image data from sensors with various configurations of color filter array.
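As a rough illustration of the self-contained objective, the sketch below decodes a DNG file with the third-party rawpy package (a LibRaw wrapper); the package choice and the filename are assumptions for this example, not anything mandated by the DNG specification.

```python
# A minimal sketch, assuming: pip install rawpy imageio
import rawpy
import imageio

# rawpy/LibRaw parses the TIFF-structured DNG directly; the color metadata
# needed for rendering is carried inside the file itself.
with rawpy.imread("photo.dng") as raw:
    rgb = raw.postprocess()          # demosaic and convert to an RGB array

imageio.imwrite("photo_rendered.png", rgb)
```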
3.
Minolta RD-175
–
The Minolta RD-175 was probably the first hand-portable digital SLR. Until its introduction in 1995, the only digital SLR on the market required a bulky external digital storage system. Other primitive digital cameras existed, but they had lower resolution. Minolta combined an existing SLR with a three-way splitter and three separate CCD image sensors, giving 1.75-megapixel resolution; the base of the DSLR was the Minolta Maxxum 500si Super. Agfa produced a version of the RD-175 retailed as the Agfa ActionCam. The RD-175 was also notable as the first consumer digital camera to be used professionally, being used to create the full-motion claymation adventure game The Neverhood. The light bundled on the sensor area increased the effective sensitivity by 2⅔ stops; the only usable ISO was 800. The three images were combined digitally and interpolated to the final size of 1.75 megapixels. Images were stored on an internal PCMCIA hard drive, and the camera used Minolta AF A-mount lenses with a crop factor of 2.
4.
Red Digital Cinema Camera Company
–
The Red Digital Cinema Camera Company is an American company that manufactures digital cinematography and photography cameras and accessories. The company's headquarters are in Irvine, California, with studios in Hollywood; it has offices in London, Shanghai, and Singapore, retail stores in Hollywood, New York, and Miami, and various authorized resellers and service centers around the world. Red Digital Cinema was founded in 2005 by Jim Jannard, who had previously founded Oakley. The company started with the intent to deliver an affordable 4K digital cinema camera. At the 2006 NAB show, Jannard announced that Red would build a 4K digital cinema camera. In March 2007, director Peter Jackson completed a camera test of two prototype Red One cameras, which became the 12-minute World War I film Crossing the Line. On seeing the film, director Steven Soderbergh told Jannard, "I have to shoot with this," and took two prototype Red Ones into the jungle to shoot his film. Red Digital delivered the first Red camera in August 2007. Called the Red One, it was able to capture 4K images at up to 60 frames per second in the proprietary Redcode format. The Red One provided filmmakers customizable features and out-of-the-box functionality, with the feature-film image quality known from 35 mm film cameras. In 2009, Red released Redcine-X, a post-production workflow for both motion and stills, released the R3D Software Development Kit, and introduced the world to the concept of DSMC. In 2010, Red offered owners of the original Mysterium sensor an upgrade to the newer M-X sensor; also in that year, Red acquired the historic Ren-Mar Studios in Hollywood. In 2013, Red began taking pre-orders for their newest camera. In 2015, Red announced a new camera body called DSMC2. The Weapon 8K VV and Weapon 6K were the first two cameras announced within this line, followed by the Red Raven 4.5K and Scarlet-W 5K; all of these cameras leveraged Red Dragon sensor technology. In 2016, a new 8K S35 sensor called Helium was introduced in two new cameras, the Red Epic-W and Weapon 8K S35. In early January 2017, this sensor was given the highest sensor score ever, 108. The Red One was Red Digital Cinema's first production camera. It was able to capture up to 120 frames per second at 2K resolution and 30 frames per second at 4K resolution; later, an upgrade to a new 14-megapixel sensor was offered. The DSMC (Digital Still and Motion Camera) system was introduced with the Epic-X, a camera for both digital stills and motion. After this, a new line called Scarlet was introduced that provided lower-end specifications at a more affordable price. The Scarlet was initially equipped with a 5K imaging sensor; upgrades to a 6K sensor with higher dynamic range were announced later. The DSMC2 family of cameras was introduced in 2015 as the new form factor for all cameras up to 2020.
5.
Sony Cyber-shot DSC-R1
–
The Sony Cyber-shot DSC-R1 is a bridge digital camera announced by Sony in 2005. It featured a 10.3-megapixel APS-C CMOS sensor, a size used in DSLRs; this was the first time such a sensor was incorporated into a bridge camera. Besides the APS-C sensor, the DSC-R1 also featured a 14.3–71.5 mm Carl Zeiss Vario-Sonnar T* lens and an electronic viewfinder. An optical viewfinder, by contrast, does not amplify the light, so it becomes difficult to frame and manually focus when there is not sufficient light. The camera supports RAW. Its disadvantages: there are no interchangeable lenses, and the supplied lens only covers the 24–120 mm zoom range. Furthermore, the electronic viewfinder shows some small time shift (i.e. the image appears with a small delay), and the camera has a low frame rate and slow contrast-detection autofocus.
6.
Cyber-shot
–
Cyber-shot is Sony's line of point-and-shoot digital cameras, introduced in 1996. Cyber-shot model names use a DSC prefix, which is an initialism for Digital Still Camera. Many Cyber-shot models feature Carl Zeiss trademarked lenses, while others use Sony or Sony G lenses. All Cyber-shot cameras accept Sony's proprietary Memory Stick or Memory Stick PRO Duo flash memory, and select models have also supported CompactFlash. Current Cyber-shot cameras support Memory Stick PRO Duo, SD, and SDHC. Currently, the W and T series use Sony N-type batteries, while most H-series models use G-type batteries. From 2006 to 2009, Sony Ericsson used the Cyber-shot brand in a line of mobile phones. On March 31, 2012, Sony unveiled the Cyber-shot DSC-W690 as the world's thinnest 10x optical zoom camera. Some Cyber-shot models can take 3D stills by shooting two images using two different focus settings; the technology uses only one lens, and users can later see the images on a 3D TV or on a regular 2D screen. Such cameras have been available since 2010. Cyber-shot models such as the DSC-HX20V and the DSC-HX200V have a built-in GPS so the user can have their photos automatically geotagged as they are being taken; the feature can also serve as a compass, as it shows the position on the camera screen. Tru Black is a display technology developed by Sony which allows better visibility of the screen, even when there is too much light. It enables LCD screens to change the display contrast by controlling reflectance; in other words, a display with Tru Black technology controls reflections when light hits it. All current Cyber-shot cameras are equipped with a panoramic technology branded as Sweep Panorama, which enables the user to capture wide-format photographs using only one lens. The photos can be taken and displayed in 2D or 3D.
7.
Image sensor
–
An image sensor or imaging sensor is a sensor that detects and conveys the information that constitutes an image. It does so by converting the variable attenuation of waves into signals; the waves can be light or other electromagnetic radiation. As technology changes, digital imaging tends to replace analog imaging. Early analog sensors for visible light were video camera tubes; the types currently used are semiconductor charge-coupled devices (CCDs) or active pixel sensors in complementary metal-oxide-semiconductor (CMOS) or N-type metal-oxide-semiconductor technologies. Analog sensors for invisible radiation tend to involve vacuum tubes of various kinds, while digital sensors include flat panel detectors. Today, most digital cameras use a CMOS sensor, because CMOS sensors generally perform better than CCDs; one example is that they can incorporate processing on the same integrated circuit. CCDs are still in use for cheap entry-level cameras, but are weak in burst mode. Both types of sensor accomplish the same task of capturing light. Each cell of a CCD image sensor is an analog device; when light strikes the chip, it is held as a small electrical charge in each photo sensor. The charges are read out and amplified one line of pixels at a time, and this process is repeated until all the lines of pixels have had their charge amplified. A CMOS image sensor, by contrast, has an amplifier for each pixel, compared to the few amplifiers of a CCD. Some CMOS imaging sensors also use back-side illumination to increase the number of photons that hit the photodiode. CMOS sensors can potentially be implemented with fewer components, use less power, and/or provide faster readout than CCD sensors. They are also less vulnerable to static electricity discharges. Another approach is to utilize the very fine dimensions available in modern CMOS technology to implement a CCD-like structure entirely in CMOS technology; this can be achieved by separating individual polysilicon gates by a very small gap. These hybrid sensors are still in the research phase and can potentially harness the benefits of both CCD and CMOS imagers. There are many parameters that can be used to evaluate the performance of an image sensor, including dynamic range and signal-to-noise ratio. For sensors of comparable types, the signal-to-noise ratio and dynamic range improve as the sensor size increases. In order to avoid interpolated color information, techniques like color co-site sampling use a mechanism to shift the color sensor in pixel steps. 3CCD designs use three discrete image sensors, with the color separation done by a dichroic prism. While digital cameras in general use a flat sensor, Sony prototyped a curved sensor in 2014 to reduce or eliminate the Petzval field curvature that occurs with a flat sensor.
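To make those evaluation parameters concrete, here is a small worked sketch computing dynamic range and peak signal-to-noise ratio from a full-well capacity and read noise; the figures are illustrative, not from any particular sensor.

```python
import math

# Hypothetical sensor figures, chosen purely for illustration.
full_well_e = 30000.0   # largest charge a pixel can hold, in electrons
read_noise_e = 5.0      # smallest resolvable signal, set by read noise

# Dynamic range: ratio of largest to smallest usable signal, in dB and stops.
dr_db = 20 * math.log10(full_well_e / read_noise_e)
dr_stops = math.log2(full_well_e / read_noise_e)

# Peak SNR: at full well, photon shot noise dominates and equals sqrt(signal).
peak_snr_db = 20 * math.log10(full_well_e / math.sqrt(full_well_e))

print(f"dynamic range: {dr_db:.1f} dB ({dr_stops:.1f} stops)")
print(f"peak SNR: {peak_snr_db:.1f} dB")
```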
8.
Digital camera
–
A digital camera or digicam is a camera that produces digital images that can be stored in a computer, displayed on a screen, and printed. Most cameras sold today are digital, and digital cameras are incorporated into many devices, ranging from PDAs and mobile phones to vehicles. Digital and film cameras share an optical system, typically using a lens with a variable diaphragm to focus light onto an image pickup device. The diaphragm and shutter admit the correct amount of light to the imager, just as with film. However, unlike film cameras, digital cameras can display images on a screen immediately after being recorded, and can store and delete images from memory. Many digital cameras can also record moving videos with sound, and some can crop and stitch pictures and perform other elementary image editing. The history of the digital camera began with Eugene F. Lally of the Jet Propulsion Laboratory. His 1961 idea was to take pictures of the planets and stars while travelling through space to give information about the astronauts' position. Unfortunately, as with Texas Instruments employee Willis Adcock's filmless camera in 1972, the technology had yet to catch up with the concept. Steven Sasson, as an engineer at Eastman Kodak, invented and built the first electronic camera using a charge-coupled device image sensor in 1975; earlier ones used a camera tube, and later ones digitized the signal. Early uses were military and scientific, followed by medical. In the mid-to-late 1990s, digital cameras became common among consumers. By the mid-2000s, digital cameras had largely replaced film cameras, and higher-end cell phones had an integrated digital camera. By the beginning of the 2010s, almost all smartphones had a digital camera. The two major types of image sensor are CCD and CMOS. A CCD sensor has one amplifier for all the pixels, while each pixel in a CMOS active-pixel sensor has its own amplifier; compared to CCDs, CMOS sensors use less power. Cameras with a small sensor often use a back-side-illuminated CMOS sensor. Overall final image quality is more dependent on the image processing capability of the camera than on sensor type. The resolution of a camera is often limited by the image sensor that turns light into discrete signals. The brighter the image at a given point on the sensor, the larger the value that is read for that pixel. Depending on the structure of the sensor, a color filter array may be used. The number of pixels in the sensor determines the camera's pixel count; in a typical sensor, the pixel count is the product of the number of rows and the number of columns. For example, a 1,000 by 1,000 pixel sensor would have 1,000,000 pixels. The final quality of an image depends on all the optical transformations in the chain of producing the image.
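A quick sketch of that pixel-count arithmetic (the sensor dimensions are just the illustrative figures from the text):

```python
# Pixel count is simply rows x columns; "megapixels" divides by one million.
rows, cols = 1000, 1000                # illustrative sensor dimensions
pixel_count = rows * cols
print(pixel_count)                     # 1000000
print(f"{pixel_count / 1e6:.1f} MP")   # 1.0 MP
```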
9.
Image scanner
–
Commonly used in offices are variations of the desktop flatbed scanner, where the document is placed on a glass window for scanning. Mechanically driven scanners that move the document are typically used for large-format documents. A rotary scanner, used for high-speed document scanning, is a type of drum scanner that uses a CCD array instead of a photomultiplier. Non-contact planetary scanners essentially photograph delicate books and documents. All these scanners produce two-dimensional images of subjects that are usually flat but sometimes solid; 3D scanners produce information on the three-dimensional structure of solid objects. Digital cameras can be used for the same purposes as dedicated scanners, but when compared to a true scanner, a camera image is subject to a degree of distortion, reflections, shadows, low contrast, and blur due to camera shake. Resolution is sufficient for less demanding applications, and digital cameras offer advantages of speed, portability, and non-contact digitizing of thick documents without damaging the book spine. As of 2010, scanning technologies were combining 3D scanners with digital cameras to create full-color, photo-realistic models of objects. In the biomedical research area, detection devices for DNA microarrays are called scanners as well. These scanners are high-resolution systems, similar to microscopes; the detection is done via a CCD or a photomultiplier tube. Modern scanners are considered the successors of early telephotography and fax input devices. One early telephotography device used electromagnets to drive and synchronize movement of pendulums at the source and the distant location, to scan and reproduce images; it could transmit handwriting, signatures, or drawings within an area of up to 150 × 100 mm. Édouard Belin's Belinograph of 1913, which scanned using a photocell and transmitted over ordinary phone lines, formed the basis for the AT&T Wirephoto service. In Europe, services similar to a wirephoto were called a Belino. Such equipment was used by news agencies from the 1920s to the mid-1990s and consisted of a rotating drum with a single photodetector turning at a standard speed of 60 or 120 rpm, sending a linear analog AM signal through standard telephone lines to receptors. Color photos were sent as three separate RGB-filtered images consecutively, but only for special events, due to transmission costs. Drum scanners capture image information with photomultiplier tubes (PMTs), rather than the charge-coupled device arrays found in flatbed scanners and inexpensive film scanners. Modern color drum scanners use three matched PMTs, which read red, blue, and green light, respectively. Light from the artwork is split into separate red, blue, and green beams. Photomultipliers offer superior dynamic range, and for this reason drum scanners can extract more detail from very dark areas of a transparency than flatbed scanners using CCD sensors. The smaller dynamic range of the CCD sensors, versus photomultiplier tubes, can lead to loss of shadow detail. While mechanics vary by manufacturer, most drum scanners pass light from halogen lamps through a focusing system to illuminate both reflective and transmissive originals. The drum scanner gets its name from the clear acrylic cylinder on which originals are mounted; depending on size, it is possible to mount originals up to 20 × 28 inches, but maximum size varies by manufacturer. One of the features of drum scanners is the ability to control the sample area.
10.
Raster graphics editor
–
An image viewer program is usually preferred over a raster graphics editor for merely viewing images. Vector graphics editors are often contrasted with raster graphics editors, yet their capabilities complement each other. The technical difference between vector and raster editors stems from the difference between vector and raster images. Vector graphics are created mathematically, using geometric formulas. A raster image is made up of rows and columns of dots, called pixels, and is generally more photo-realistic; this is the usual form for output from digital cameras, whether as a raw file or a .jpg file. The image is represented pixel by pixel, like a jigsaw puzzle. Typical raster editors allow selecting colors using color models such as RGB or HSV, or by using a color dropper, and allow editing and converting between various color models.
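A tiny sketch of that pixel-by-pixel representation, using the third-party Pillow library (an assumption for this example; any raster library would do):

```python
# A minimal sketch, assuming: pip install Pillow
from PIL import Image

img = Image.new("RGB", (4, 4))        # a raster image: 4 columns x 4 rows of pixels
img.putpixel((0, 0), (255, 0, 0))     # set the top-left pixel to pure red
print(img.getpixel((0, 0)))           # (255, 0, 0) -- each pixel is stored explicitly
img.save("tiny.png")
```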
11.
Gamut
–
In color reproduction, including computer graphics and photography, the gamut, or color gamut /ˈɡæmət/, is a certain complete subset of colors. The most common usage refers to the subset of colors which can be accurately represented in a given circumstance. Another sense, less frequently used but no less correct, refers to the complete set of colors found within an image at a given time. In the 1850s, the term was applied to a range of colors or hue, for example by Thomas De Quincey. In color theory, the gamut of a device or process is that portion of the color space that can be represented, or reproduced. Certain colors cannot be expressed within a given color model; for example, while pure red can be expressed in the RGB color space, it cannot be expressed in the CMYK color space. A device that is able to reproduce the entire visible color space is an unrealized goal within the engineering of color displays and printing processes; while modern techniques allow increasingly good approximations, the complexity of these systems often makes them impractical. While processing an image, the most convenient color model to use is the RGB model. Printing the image requires transforming it from the original RGB color space to the printer's CMYK color space; during this process, the colors from RGB which are out of gamut must somehow be converted to approximate values within the CMYK space gamut. Simply trimming only the colors which are out of gamut to the closest colors in the destination space would burn the image. There are several algorithms approximating this transformation, but none of them can be truly perfect; this is why identifying the colors in an image which are out of gamut in the target color space as early as possible during processing is critical for the quality of the final product. Gamuts are commonly represented as areas in the CIE 1931 chromaticity diagram. When gamuts are instead drawn as three-dimensional solids, the added dimension corresponds to brightness. The axes in such diagrams are the responses of the short-wavelength, middle-wavelength, and long-wavelength cones of the human eye; the other letters in gamut diagrams indicate black, red, green, blue, cyan, magenta, yellow, and white colors. The exact positions of the apexes depend on the spectra of the phosphors in the computer monitor. The gamut of the CMYK color space is, ideally, approximately the same as that for RGB, with slightly different apexes, depending on both the exact properties of the dyes and the light source. In practice, due to the way raster-printed colors interact with each other and with the paper, and due to their non-ideal absorption spectra, the printed gamut is smaller and has rounded corners. The gamut of colors found in nature has a similar, though more rounded, shape. An object that reflects only a narrow band of wavelengths will have a color close to the edge of the CIE diagram.
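As a toy illustration of the "trimming" approach the text warns about, the sketch below clips hypothetical out-of-range channel values after a conversion into a smaller target space; this is naive per-channel clipping, not a real gamut-mapping algorithm.

```python
# Suppose converting a saturated color into a smaller target space yields
# components outside the representable [0, 1] range (values are invented):
out_of_gamut = (1.20, -0.05, 0.30)

# Naive clipping pulls each channel independently back into range. It changes
# the color's hue and flattens detail -- the "burning" the text describes --
# which is why smarter gamut-mapping algorithms are used instead.
clipped = tuple(min(1.0, max(0.0, c)) for c in out_of_gamut)
print(clipped)   # (1.0, 0.0, 0.3): in gamut now, but no longer the same color
```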
12.
Color space
–
A color space is a specific organization of colors. In combination with physical device profiling, it allows for reproducible representations of color; for example, Adobe RGB and sRGB are two different absolute color spaces, both based on the RGB color model. When defining a color space, the usual reference standard is the CIELAB or CIEXYZ color space. For example, although several specific color spaces are based on the RGB color model, there is no single RGB color space. Colors can be created in printing with color spaces based on the CMYK color model, using the subtractive primary colors of pigment; the resulting 3-D space provides a position for every possible color that can be created by combining those pigments. Colors can be created on computer monitors with color spaces based on the RGB color model; a three-dimensional representation would assign each of the three colors to the X, Y, and Z axes. Note that colors generated on a given monitor will be limited by the reproduction medium, such as the phosphor or filters. Another way of creating colors on a monitor is with an HSL or HSV color space, based on hue, saturation, and lightness or value; with such a space, the variables are assigned to cylindrical coordinates. Many color spaces can be represented as three-dimensional values in this manner, but some have more, or fewer, dimensions. Color space conversion is the translation of the representation of a color from one basis to another. The RGB color model is implemented in different ways, depending on the capabilities of the system used. By far the most common general-use incarnation as of 2006 is the 24-bit implementation, with 8 bits, or 256 discrete levels, of color per channel. Any color space based on such a 24-bit RGB model is limited to a range of 256 × 256 × 256 ≈ 16.7 million colors. Some implementations use 16 bits per component, for 48 bits total; this is especially important when working with wide-gamut color spaces, or when a large number of digital filtering algorithms are used consecutively. The same principle applies to any color space based on the same color model but implemented at a different bit depth. The CIE 1931 XYZ color space was one of the first attempts to produce a color space based on measurements of human color perception, and the CIE RGB color space is a companion of CIE XYZ. Additional derivatives of CIE XYZ include CIELUV, CIEUVW, and CIELAB. RGB uses additive color mixing, because it describes what kind of light needs to be emitted to produce a given color. RGB stores individual values for red, green, and blue; RGBA is RGB with an additional channel, alpha, to indicate transparency. Common color spaces based on the RGB model include sRGB, Adobe RGB, ProPhoto RGB, and scRGB. CMYK, by contrast, uses subtractive color mixing: one starts with a white substrate and uses ink to subtract color from white to create an image.
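A small sketch of moving a color between two such representations, using Python's standard-library colorsys module; RGB and HSV here are the abstract models, not calibrated absolute color spaces.

```python
import colorsys

# colorsys works on floats in [0, 1]; scale 8-bit channel values accordingly.
r, g, b = 255, 128, 0                 # an orange, in 24-bit RGB
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
print(h * 360, s, v)                  # ~30.1 degree hue, full saturation and value

# The conversion is invertible: back from cylindrical HSV to RGB.
r2, g2, b2 = (round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))
print(r2, g2, b2)                     # 255 128 0
```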
13.
JPEG
–
JPEG is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality. JPEG compression is used in a number of image file formats; these format variations are often not distinguished, and are simply called JPEG. The term "JPEG" is an initialism/acronym for the Joint Photographic Experts Group. The MIME media type for JPEG is image/jpeg, except in older Internet Explorer versions, which provide a MIME type of image/pjpeg when uploading JPEG images. JPEG files usually have a filename extension of .jpg or .jpeg. JPEG/JFIF supports a maximum image size of 65,535 × 65,535 pixels. JPEG stands for Joint Photographic Experts Group, the name of the committee that created the JPEG standard. The "Joint" stood for ISO TC97 WG8 and CCITT SGVIII; in 1987 ISO TC97 became ISO/IEC JTC1, and in 1992 CCITT became ITU-T. Currently, on the JTC1 side, JPEG is one of two sub-groups of ISO/IEC Joint Technical Committee 1, Subcommittee 29, Working Group 1, titled Coding of still pictures; on the ITU-T side, ITU-T SG16 is the respective body. The original JPEG group was organized in 1986, issuing the first JPEG standard in 1992, which was approved in September 1992 as ITU-T Recommendation T.81 and in 1994 as ISO/IEC 10918-1. The JPEG standard specifies the codec, which defines how an image is compressed into a stream of bytes and decompressed back into an image; the Exif and JFIF standards define the commonly used file formats for interchange of JPEG-compressed images. JPEG standards are formally named "Information technology – Digital compression and coding of continuous-tone still images"; ISO/IEC 10918 consists of several parts. Ecma International TR/98 specifies the JPEG File Interchange Format; its first edition was published in June 2009. The JPEG compression algorithm is at its best on photographs and paintings of scenes with smooth variations of tone. For web usage, where the amount of data used for an image is important, JPEG is very popular. JPEG/Exif is also the most common format saved by digital cameras. On the other hand, JPEG may not be as well suited for line drawings and other textual or iconic graphics, where the sharp contrasts between adjacent pixels can cause noticeable artifacts; such images may be saved in a lossless graphics format such as TIFF, GIF, or PNG.
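A minimal sketch of the size/quality tradeoff using the third-party Pillow library (the input filename is just an example):

```python
# Assuming: pip install Pillow
import os
from PIL import Image

img = Image.open("photo.png").convert("RGB")   # JPEG has no alpha channel

for quality in (95, 75, 30):
    out = f"photo_q{quality}.jpg"
    img.save(out, "JPEG", quality=quality)     # higher quality => larger file
    print(quality, os.path.getsize(out), "bytes")
```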
14.
Negative (photography)
–
In the case of color negatives, the colors are also reversed into their respective complementary colors. Typical color negatives have a dull orange tint due to an automatic color-masking feature that ultimately results in improved color reproduction. Negatives are normally used to make prints on photographic paper by projecting the negative onto the paper with a photographic enlarger or by making a contact print. The paper is also darkened in proportion to its exposure to light, so a second reversal results, which restores light and dark to their normal tones. Negatives were once commonly made on a thin sheet of glass rather than a plastic film, and some of the earliest negatives were made on paper. It is incorrect to call an image a negative solely because it is on a transparent material; transparent prints can be made by printing a negative onto special positive film, as is done to make traditional motion picture film prints for use in theaters. Some films used in cameras are designed to be developed by reversal processing, which produces a positive image directly. Positives on film or glass are known as transparencies or diapositives, and if mounted in small frames designed for use in a slide projector or magnifying viewer they are commonly called slides. A positive image is a normal image. A negative image is a total inversion, in which light areas appear dark and vice versa; a negative color image is additionally color-reversed, with red areas appearing cyan, greens appearing magenta, and blues appearing yellow. Film negatives usually have less contrast, but a wider dynamic range, than the final printed positive images. The contrast typically increases when they are printed onto photographic paper. When negative film images are brought into the digital realm, their contrast may be adjusted at the time of scanning or, more usually, during subsequent post-processing. Film for cameras that use the 35 mm still format is sold as a strip of emulsion-coated and perforated plastic spooled in a light-tight cassette. Before each exposure, a mechanism inside the camera is used to pull an unexposed area of the film out of the cassette. When all exposures have been made, the strip is rewound into the cassette. After the film is chemically developed, the strip shows a series of small negative images; it is usually cut into sections for easier handling. Each of these images may be referred to as a negative, and they are the images from which all positive prints will derive. However, when an image is created from a negative image, a positive image results. This makes most chemical-based photography a two-step process, which uses negative film. Special films and development processes have been devised so that positive images can be created directly on the film; these are called positive, slide, or reversal films and reversal processing.
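Digitally, a negative is simply the per-channel inversion of pixel values; a minimal sketch with the third-party Pillow library (assumed installed):

```python
# Each 8-bit channel value v becomes 255 - v, so light areas appear dark
# and vice versa, and each color flips to its complement.
from PIL import Image, ImageOps

img = Image.open("photo.jpg").convert("RGB")
negative = ImageOps.invert(img)     # reds -> cyans, greens -> magentas, blues -> yellows
negative.save("photo_negative.jpg")
```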
15.
Photographic processing
–
Photographic processing or development is the chemical means by which photographic film or paper is treated after photographic exposure to produce a negative or positive image. Photographic processing transforms the latent image into a visible image and makes it permanent. All processes based upon the gelatin-silver process are similar, regardless of the film or paper's manufacturer. Exceptional variations include instant films such as those made by Polaroid. Kodachrome required Kodak's proprietary K-14 process; Kodachrome film production ceased in 2009, and K-14 processing is no longer available as of December 30, 2010. Ilfochrome materials use the dye destruction process. All photographic processing uses a series of chemical baths. Processing, especially the development stages, requires very close control of temperature, agitation, and time. The film may first be soaked in water to swell the gelatin layer, facilitating the action of the subsequent chemical treatments. The developer converts the latent image to macroscopic particles of metallic silver. A stop bath,† typically a dilute solution of acetic acid or citric acid, halts the action of the developer; a rinse with water may be substituted. The fixer makes the image permanent and light-resistant by dissolving remaining silver halide; a common fixer is hypo, specifically ammonium thiosulfate. Washing in clean water removes any remaining fixer; residual fixer can corrode the silver image, leading to discolouration, staining, and fading. The washing time can be reduced, and the fixer more completely removed, if a hypo clearing agent is used after the fixer. Film may be rinsed in a dilute solution of a non-ionic wetting agent to assist uniform drying. Film is then dried in a dust-free environment, cut, and placed into protective sleeves. Once the film is processed, it is referred to as a negative, and the negative may now be printed: the negative is placed in an enlarger and projected onto photographic paper. Many different techniques can be used during the enlargement process; two examples of enlargement techniques are dodging and burning. Alternatively, the negative may be scanned for digital printing or web viewing after adjustment, retouching, and/or manipulation. † In modern automatic processing machines, the stop bath is replaced by mechanical squeegee or pinching rollers.
16.
Photographic film
–
This article is mainly concerned with still photography film; for motion picture film, see film stock. Photographic film is a strip or sheet of transparent plastic film base coated on one side with a gelatin emulsion containing microscopically small light-sensitive silver halide crystals. The sizes and other characteristics of the crystals determine the sensitivity, contrast, and resolution of the film. The emulsion will gradually darken if left exposed to light, but the process is too slow and incomplete to be of any practical use. Instead, a short exposure to the image formed by a camera lens is used to produce only a very slight chemical change. This creates an invisible latent image in the emulsion, which can be developed into a visible photograph. In addition to visible light, all films are sensitive to ultraviolet light and X-rays. Unmodified silver halide crystals are sensitive only to the blue part of the visible spectrum. This problem was overcome with the discovery that certain dyes, called sensitizing dyes, make the crystals respond to other colors as well; first orthochromatic and finally panchromatic films were developed. Panchromatic film renders all colors in shades of gray approximately matching their subjective brightness. By similar techniques, special-purpose films can be made sensitive to the infrared region of the spectrum. In black-and-white photographic film, there is one layer of silver halide crystals; when the exposed silver halide grains are developed, the silver halide crystals are converted to metallic silver. Color film has at least three sensitive layers, incorporating different combinations of sensitizing dyes. Typically the blue-sensitive layer is on top, followed by a filter layer to stop any remaining blue light from affecting the layers below. Next come a green-and-blue-sensitive layer and a red-and-blue-sensitive layer, which record the green and red images respectively. During development, the exposed silver halide crystals are converted to metallic silver, and the by-products of development form color dyes; because the by-products are created in direct proportion to the amount of exposure and development, so are the dyes. Following development, the silver is converted back to silver halide crystals in the bleach step. It is removed from the film during the process of fixing the image on the film with a solution of ammonium thiosulfate or sodium thiosulfate; fixing leaves behind only the formed color dyes, which combine to make up the colored visible image. Later color films, like Kodacolor II, have as many as 12 emulsion layers. The earliest practical photographic process, the daguerreotype, introduced in 1839, did not use film; the light-sensitive chemicals were formed on the surface of a copper sheet.
17.
Color balance
–
In photography and image processing, color balance is the global adjustment of the intensities of the colors. An important goal of this adjustment is to render specific colors, particularly neutral colors, correctly; hence, the general method is sometimes called gray balance or neutral balance. Color balance changes the mixture of colors in an image and is used for color correction. Generalized versions of color balance are used to correct colors other than neutrals or to deliberately change them for effect. Image data acquired by sensors, either film or electronic image sensors, must be transformed from the acquired values to new values that are appropriate for color reproduction or display. In film photography, color balance is achieved by using color correction filters over the lights or on the camera lens. It is particularly important that neutral colors in a scene appear neutral in the reproduction. Most digital cameras have means to select a color correction based on the type of scene lighting, using either manual lighting selection, automatic white balance, or custom white balance; the algorithms for these processes perform generalized chromatic adaptation. Many methods exist for color balancing. Setting a button on a camera is a way for the user to indicate to the processor the nature of the scene lighting. Another option on some cameras is a button which one may press when the camera is pointed at a gray card or other neutral-colored object; this captures an image of the ambient light, which enables a digital camera to set the color balance for that light. There is a considerable literature on how one might estimate the ambient lighting from the camera data. A variety of algorithms have been proposed, and the quality of these has been debated; examples are Retinex, artificial neural networks, and Bayesian methods, and examination of the references therein will lead the reader to many others. Color balancing an image affects not only the neutrals, but other colors as well. An image that is not color balanced is said to have a color cast, and color balancing may be thought of in terms of removing this color cast. Color balance is related to color constancy, and algorithms and techniques used to achieve color constancy are frequently used for color balancing as well.
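A minimal sketch of one classic automatic approach, the gray-world assumption (the scene is assumed to average out to neutral gray); this illustrates the idea rather than any particular camera's actual pipeline.

```python
import numpy as np

def gray_world_balance(img: np.ndarray) -> np.ndarray:
    """Scale each channel so the image's mean color becomes neutral.

    img: float array of shape (H, W, 3) with values in [0, 1].
    Assumes the gray-world hypothesis: the average scene color is gray.
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)    # mean R, G, B
    gains = channel_means.mean() / channel_means       # per-channel correction
    return np.clip(img * gains, 0.0, 1.0)

# Toy usage: an image with a warm (reddish) cast gets pulled toward neutral.
cast = np.random.rand(8, 8, 3) * np.array([1.0, 0.8, 0.6])
balanced = gray_world_balance(cast)
print(balanced.reshape(-1, 3).mean(axis=0))            # roughly equal channel means
```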
18.
Color grading
–
Color grading is the process of altering and enhancing the color of a motion picture, video image, or still image, whether electronically, photo-chemically, or digitally. Color grading encompasses both color correction and the generation of color effects. Whether for theatrical film, video distribution, or print, color grading is now generally performed digitally in a color suite; the earlier photo-chemical film process, known as color timing, was performed at a photographic laboratory. The earliest film grading technique, known as timing, involved changing the duration of exposure processes during film development. Color timing was used for color correction but could also be used for artistic purposes. Color timing was specified in printer points; since it could not be performed in real time, color timing for film processing involved considerable skill in being able to predict correct exposures. For complex work, wedges were sometimes processed to aid the choice of the correct grading. With the advent of television, broadcasters quickly realized the limitations of live television broadcasts, and they turned to broadcasting feature films from release prints directly from a telecine. This was before 1956, when Ampex introduced the first Quadruplex videotape recorder, the VRX-1000. Live television shows could also be recorded to film and aired at different times in different time zones by filming a video monitor; the heart of this system was the kinescope, a device for recording a television broadcast to film. The early telecine hardware was the film chain for broadcasting from film, which utilized a film projector connected to a video camera. Today, telecine is synonymous with color timing, as tools and technologies have advanced to make color timing ubiquitous in a video environment. In a cathode-ray tube (CRT) telecine, an electron beam is projected at a phosphor-coated envelope, producing a spot of light the size of a single pixel. This beam is scanned across a film frame from left to right, and vertical scanning of the frame is accomplished as the film moves past the CRT's beam. Once this photon beam passes through the film frame, it encounters a series of dichroic mirrors which separate the image into its primary red, green, and blue components; from there, each beam is reflected onto a photomultiplier tube, where the photons are converted into an electronic signal to be recorded to tape. In a charge-coupled device (CCD) telecine, a light is shone through the exposed film image onto a prism, and each beam of colored light is projected at a different CCD. The CCD converts the light into a signal, and the telecine electronics modulate these into a video signal that can then be color graded. The Ursa Gold brought about color grading in the full 4:4:4 color space. Color correction control systems started with the Rank Cintel TOPSY in 1978; in 1984, Da Vinci Systems introduced their first color corrector, an interface that would manipulate the color voltages on the Rank Cintel MkIII systems.
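As a rough sketch of what a basic digital grade does mathematically, here is the common lift/gamma/gain adjustment; this is a generic formulation for illustration, not any particular grading suite's implementation.

```python
import numpy as np

def lift_gamma_gain(img, lift=0.0, gamma=1.0, gain=1.0):
    """Apply a basic lift/gamma/gain grade to a float image in [0, 1].

    lift raises the shadows, gain scales the highlights, and gamma
    bends the midtones; together they form a simple primary grade.
    """
    graded = np.clip(img * gain + lift, 0.0, 1.0) ** (1.0 / gamma)
    return np.clip(graded, 0.0, 1.0)

# Toy usage: grade the channels slightly differently to warm up a frame.
frame = np.random.rand(4, 4, 3)
warmed = lift_gamma_gain(frame, lift=0.02, gamma=1.1,
                         gain=np.array([1.05, 1.0, 0.92]))  # boost R, cut B
```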
19.
Dynamic range
–
Dynamic range, abbreviated DR, DNR, or DYR, is the ratio between the largest and smallest values that a certain quantity can assume. It is often used in the context of signals, like sound and light. It is measured either as a ratio or as a base-10 or base-2 logarithmic value of the ratio between the smallest and largest signal values, in parallel to the common usage for audio signals. The human senses of sight and hearing have a high dynamic range. A human is capable of hearing anything from a faint murmur in a quiet room to the sound of the loudest heavy metal concert; such a difference can exceed 100 dB, which represents a factor of 100,000 in amplitude. A human cannot, however, perform these feats of perception at both extremes of the scale at the same time. The eyes take time to adjust to different light levels, and the instantaneous dynamic range of human audio perception is similarly subject to masking, so that, for example, a whisper cannot be heard in loud surroundings. In practice, it is difficult to achieve the full dynamic range experienced by humans using electronic equipment. For example, a good-quality LCD has a dynamic range of around 1000:1, and paper reflectance can achieve a dynamic range of about 100:1. A professional ENG camcorder such as the Sony Digital Betacam achieves a dynamic range of greater than 90 dB in audio recording. A nighttime scene will usually contain duller colours and will often be lit with blue lighting. The dynamic range of human hearing is roughly 140 dB, varying with frequency, from the threshold of hearing to the threshold of pain, while the dynamic range of music as normally perceived in a concert hall does not exceed 80 dB. The dynamic range differs from the ratio of the maximum to minimum amplitude a given device can record, as a properly dithered recording device can record signals well below the noise RMS amplitude. Digital audio with undithered 20-bit digitization is theoretically capable of 120 dB dynamic range, while 24-bit digital audio calculates to 144 dB dynamic range. Multiple noise processes determine the noise floor of a system; noise can be picked up from microphone self-noise, preamp noise, wiring and interconnection noise, media noise, etc. Early 78 rpm phonograph discs had a dynamic range of up to 40 dB, soon reduced to 30 dB. Ampex tape recorders in the 1950s achieved 60 dB in practical usage. The peak of professional analog magnetic recording tape technology reached 90 dB dynamic range in the midband frequencies at 3% distortion, or about 80 dB in practical broadband applications. Dolby SR noise reduction gave a further 20 dB increase, resulting in 110 dB in the midband frequencies at 3% distortion. Specialized bias and record-head improvements by Nakamichi and Tandberg, combined with Dolby C noise reduction, yielded 72 dB dynamic range for the cassette.
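The decibel figures in this passage follow directly from the definition; a short worked sketch:

```python
import math

def amplitude_ratio_to_db(ratio: float) -> float:
    """Dynamic range in dB for an amplitude (not power) ratio."""
    return 20 * math.log10(ratio)

print(amplitude_ratio_to_db(100_000))   # 100.0 dB, the hearing example above

# For undithered n-bit digital audio the largest/smallest ratio is 2**n,
# giving the familiar ~6.02 dB per bit.
for bits in (16, 20, 24):
    print(bits, round(amplitude_ratio_to_db(2 ** bits), 1), "dB")
# 16 -> 96.3 dB, 20 -> 120.4 dB, 24 -> 144.5 dB
```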
20.
Endianness
–
Endianness refers to the sequential order used to numerically interpret a range of bytes in computer memory as a larger, composed word value. It also describes the order of byte transmission over a digital link. Big-endian format stores the most significant byte of a word at the first (lowest) memory location; little-endian format reverses the order of the sequence and stores the least significant byte at the first location, with the most significant byte stored last. The order of bits within a byte can also have endianness; however, both big and little forms of endianness are widely used in digital electronics. As examples, the IBM z/Architecture mainframes use big-endian while the Intel x86 processors use little-endian; the designers chose their endianness in the 1960s and 1970s respectively. Big-endian is the most common format in data networking: fields in the protocols of the Internet protocol suite, such as IPv4, IPv6, and TCP, are big-endian, and for this reason big-endian byte order is also referred to as network byte order. Little-endian storage is popular for microprocessors, in part due to significant influence on microprocessor designs by Intel Corporation. Mixed forms also exist; for instance, the ordering of bytes in a 16-bit word may differ from the ordering of 16-bit words within a 32-bit word. Such cases are sometimes referred to as mixed-endian or middle-endian. There are also some bi-endian processors that operate in either little-endian or big-endian mode. Big-endianness may be demonstrated by writing a decimal number, say one hundred twenty-three, on paper in the usual positional notation understood by a numerate reader: 123. The digits are written starting from the left and proceeding to the right, with the most significant digit, 1, written first; this is analogous to the lowest address of memory being used first. This is an example of a big-endian convention taken from daily life. The little-endian way of writing the same number, one hundred twenty-three, would place the hundreds digit 1 in the right-most position: 321. A person following conventional big-endian place-value order, who is not aware of this special ordering, would read a different number, three hundred twenty-one. Endianness in computing is similar, but it applies to the ordering of bytes rather than of digits. Danny Cohen introduced the terms Little-Endian and Big-Endian for byte ordering in an article from 1980. Computer memory consists of a sequence of storage cells, and each cell is identified in hardware and software by its memory address; if the total number of cells in memory is n, addresses are enumerated from 0 to n−1. Computer programs often use data structures or fields that consist of more data than can be stored in one memory cell. For the purpose of this article, such a field is considered where its use as an operand of an instruction is relevant; in addition to that, it has to be of a numeric type in some positional number system.
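A minimal sketch of both byte orders using Python's standard-library struct module, where '>' requests big-endian packing and '<' little-endian:

```python
import struct
import sys

value = 0x0A0B0C0D
print(struct.pack(">I", value).hex())   # 0a0b0c0d  (most significant byte first)
print(struct.pack("<I", value).hex())   # 0d0c0b0a  (least significant byte first)

# sys.byteorder reports the host CPU's native convention.
print(sys.byteorder)                    # e.g. 'little' on x86 machines
```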
21.
Color filter array
–
In photography, a color filter array (CFA), or color filter mosaic, is a mosaic of tiny color filters placed over the pixel sensors of an image sensor to capture color information. Color filters are needed because the typical photosensors detect light intensity with little or no wavelength specificity; since sensors are made of semiconductors, they obey solid-state physics. The color filters filter the light by wavelength range, such that the separate filtered intensities include information about the color of the light. For example, the Bayer filter gives information about the intensity of light in the red, green, and blue wavelength regions. The raw image data captured by the image sensor is then converted to a full-color image by a demosaicing algorithm which is tailored for each type of color filter. The spectral transmittance of the CFA elements along with the demosaicing algorithm jointly determine the color rendition. The sensor's passband quantum efficiency and the span of the CFA's spectral responses are typically wider than the visible spectrum, thus all visible colors can be distinguished. The responses of the filters do not generally correspond to the CIE color matching functions, so a translation is required to convert the tristimulus values into a common color space. The Foveon X3 sensor uses a different structure, in which each pixel utilizes the properties of multi-junctions to stack blue, green, and red sensors vertically; this arrangement does not require a demosaicing algorithm, because each pixel has information about each color. Dick Merrill of Foveon distinguishes the approaches as "vertical color filter" for the Foveon X3 versus "lateral color filter" for the CFA. Sugiyama filed for a patent on such an arrangement in 2005. A CYGM matrix is a CFA that uses mostly secondary colors; other variants include CMY and CMYW matrices. Diazonaphthoquinone-novolac photoresist is one material used as the carrier for making color filters from color dyes. There is some interference between the dyes and the light needed to properly expose the polymer, though solutions have been found for this problem. Color photoresists sometimes used include those with chemical monikers CMCR101R, CMCR101G, CMCR101B, CMCR106R, and CMCR106G. A few sources discuss other specific chemical substances, their optical properties, and optimal manufacturing processes of color filter arrays. For instance, Nakamura said that materials for on-chip color filter arrays fall into two categories, pigment and dye; pigment-based CFAs have become the dominant option, because they offer higher heat resistance and light resistance compared to dye-based CFAs. In either case, thicknesses ranging up to 1 micrometre are readily available. He provides a bibliography focusing on the number, types, aliasing effects, moiré patterns, and spatial frequencies of the absorptive filters. Theuwissen makes no mention of the materials utilized in CFA manufacture. At least one early example of an on-chip design utilized gelatin filters; the gelatin is sectionalized via photolithography and subsequently dyed. Aoki reveals that a CYWG arrangement was used, with the G filter being an overlap of the Y and C filters. Adams et al. state that several factors influence the CFA's design. First, the individual CFA filters are usually layers of transmissive organic or pigment dyes; ensuring that the dyes have the right mechanical properties, such as ease of application and durability, makes it difficult, at best, to fine-tune the spectral responsivities.
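A toy sketch of how a Bayer CFA samples one color per pixel, leaving demosaicing to reconstruct the rest; this is illustrative only, and real demosaicing algorithms are far more sophisticated.

```python
import numpy as np

def bayer_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Simulate an RGGB Bayer CFA: keep one color sample per pixel.

    rgb: float array (H, W, 3); H and W assumed even for simplicity.
    Returns a single-channel array of raw sensor samples.
    """
    h, w, _ = rgb.shape
    raw = np.zeros((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R at even rows, even cols
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G at even rows, odd cols
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G at odd rows, even cols
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B at odd rows, odd cols
    return raw

# Each pixel now records only one of R, G, B; a demosaicing algorithm must
# interpolate the two missing channels at every location.
scene = np.random.rand(4, 4, 3)
print(bayer_mosaic(scene))
```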
22.
ICC profile
–
In color management, an ICC profile is a set of data that characterizes a color input or output device, or a color space, according to standards promulgated by the International Color Consortium (ICC). Profiles describe the color attributes of a particular device or viewing requirement by defining a mapping between the device source or target color space and a profile connection space (PCS). This PCS is either CIELAB or CIEXYZ. Mappings may be specified using tables, to which interpolation is applied, or through a series of parameters for transformations. Every device that captures or displays color can be profiled. The ICC defines the format precisely but does not define algorithms or processing details; this means there is room for variation between different applications and systems that work with ICC profiles. Since late 2010, the current version of the specification is 4.3; details are at http://www.color.org/iccmax/. To see how this works in practice, suppose we have a particular RGB and a particular CMYK color space. The first step is to obtain the two ICC profiles concerned. To perform the conversion, each RGB triplet is first converted to the profile connection space using the RGB profile. If necessary, the PCS is converted between CIELAB and CIEXYZ, a well-defined transformation. Then the PCS is converted to the four values of C, M, Y, and K required, using the second profile. So a profile is essentially a mapping from a color space to the PCS, and from the PCS to the color space. The profile might do this using tables of values to be interpolated. A profile might define several mappings, according to rendering intent; these mappings allow a choice between closest possible color matching and remapping the entire color range to allow for different gamuts. The reference illuminant of the profile connection space is a 16-bit fractional approximation of D50; different source/destination white points are adapted using the Bradford transformation. Another kind of profile is the device link profile; instead of mapping between a device color space and a PCS, it maps between two specific device spaces. While this is less flexible, it allows for an accurate or purposeful conversion of color between devices. For example, a conversion between two CMYK devices could ensure that colors using only black ink convert to target colors using only black ink. The ICC profile specification, currently being progressed as International Standard ISO 15076-1:2005, is widely referred to in other standards; a number of international and de facto standards are known to refer to ICC profiles.
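A minimal sketch of the RGB-to-CMYK conversion described above, using the third-party Pillow library's ImageCms module (built on LittleCMS); the CMYK profile filename is a placeholder for whatever printer profile you actually have.

```python
# Assuming Pillow is installed with LittleCMS support: pip install Pillow
from PIL import Image, ImageCms

src = Image.open("photo.jpg")                          # an RGB image
rgb_profile = ImageCms.createProfile("sRGB")           # built-in sRGB source profile
cmyk_profile = ImageCms.getOpenProfile("printer.icc")  # placeholder profile path

# Build an RGB -> CMYK transform through the profile connection space.
# The default rendering intent (perceptual) remaps out-of-gamut colors.
transform = ImageCms.buildTransform(rgb_profile, cmyk_profile, "RGB", "CMYK")
cmyk = ImageCms.applyTransform(src, transform)
cmyk.save("photo_cmyk.tif")
```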
23.
Database
–
A database is an organized collection of data. It is the collection of schemas, tables, queries, reports, views, and other objects. A database management system (DBMS) is a computer software application that interacts with the user, other applications, and the database itself to capture and analyze data. A general-purpose DBMS is designed to allow the definition, creation, querying, update, and administration of databases. Well-known DBMSs include MySQL, PostgreSQL, MongoDB, MariaDB, Microsoft SQL Server, Oracle, Sybase, SAP HANA, MemSQL, and IBM DB2. Sometimes a DBMS is loosely referred to as a database. Formally, a database refers to a set of related data and the way it is organized. The DBMS provides various functions that allow entry, storage, and retrieval of large quantities of information; because of the close relationship between them, the term "database" is often used casually to refer to both a database and the DBMS used to manipulate it. Outside the world of information technology, the term database is often used to refer to any collection of related data. This article is concerned only with databases where the size and usage requirements necessitate use of a database management system. Update means insertion, modification, and deletion of the actual data; retrieval means providing information in a form directly usable or for further processing by other applications. The retrieved data may be made available in a form basically the same as it is stored in the database, or in a new form obtained by altering or combining existing data from the database. Both a database and its DBMS conform to the principles of a particular database model; "database system" refers collectively to the database model, the database management system, and the database. Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage; RAID is used for recovery of data if any of the disks fail. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large-volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions. Since DBMSs comprise a significant market, computer and storage vendors often take DBMS requirements into account in their own development plans. Databases are used to support internal operations of organizations, to underpin online interactions with customers and suppliers, and to hold administrative information and more specialized data. A general-purpose DBMS has evolved into a complex software system, and its development typically requires thousands of human-years of development effort. Some general-purpose DBMSs such as Adabas, Oracle, and DB2 have been undergoing upgrades since the 1970s. General-purpose DBMSs aim to meet the needs of as many applications as possible, which adds to the complexity.
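A minimal sketch of the definition/update/retrieval cycle using Python's standard-library sqlite3 module; the table and rows are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")          # a throwaway in-memory database
cur = conn.cursor()

# Definition: create a schema.
cur.execute("CREATE TABLE cameras (model TEXT, year INTEGER)")

# Update: insert rows (parameterized to avoid SQL injection).
cur.executemany("INSERT INTO cameras VALUES (?, ?)",
                [("Minolta RD-175", 1995), ("Sony DSC-R1", 2005)])

# Retrieval: query the data back in a directly usable form.
for model, year in cur.execute("SELECT model, year FROM cameras ORDER BY year"):
    print(model, year)

conn.close()
```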
24.
Exif
–
Exif (Exchangeable image file format) is a standard that specifies formats for images, sound, and ancillary tags used by digital cameras and other systems handling the files they record. It is not used in JPEG 2000, PNG, or GIF. The standard consists of the Exif image file specification and the Exif audio file specification. The Japan Electronic Industries Development Association produced the initial definition of Exif. Version 2.1 of the specification is dated 12 June 1998. JEITA established Exif version 2.2, dated 20 February 2002 and released in April 2002. Version 2.21 is dated 11 July 2003, but was released in September 2003 following the release of DCF 2.0. The latest version, 2.3, released on 26 April 2010 and revised in May 2013, was jointly formulated by JEITA and CIPA. Exif is supported by almost all camera manufacturers. The metadata tags defined in the Exif standard cover a broad spectrum, including date and time information, which digital cameras record and save in the metadata, and a thumbnail for previewing the picture on the camera's LCD screen, in file managers, or in photo manipulation software. The Exif tag structure is borrowed from TIFF files. For several image-specific properties, there is a large overlap between the tags defined in the TIFF, Exif, TIFF/EP, and DCF standards. For descriptive metadata, there is an overlap between Exif, the IPTC Information Interchange Model and XMP, which can also be embedded in a JPEG file; the Metadata Working Group has guidelines on mapping tags between these standards. When Exif is employed for JPEG files, the Exif data are stored in one of JPEG's defined utility application segments, the APP1. When Exif is employed in TIFF files, the TIFF private tag 0x8769 defines a sub-Image File Directory that holds the Exif-specified TIFF tags. Formats specified in the Exif standard are defined as structures based on Exif-JPEG; when these formats are used as Exif/DCF files together with the DCF specification, their scope covers devices and recording media. The Exif format has standard tags for location information. As of 2014 many cameras and most mobile phones have a built-in GPS receiver that stores the location information in the Exif header when a picture is taken. Some other cameras have a separate GPS receiver that fits into the flash connector or hot shoe. The process of adding geographic information to a photograph is known as geotagging. Photo-sharing communities like Panoramio, locr or Flickr equally allow their users to upload geocoded pictures or to add geolocation information online. Exif data are embedded within the image file itself. Many recent image manipulation programs recognize and preserve Exif data when writing to a modified image, and many image gallery programs also recognise Exif data and optionally display it alongside the images. The Exif format has a number of drawbacks, mostly relating to its use of legacy file structures; for this reason most image editors damage or remove the Exif metadata to some extent upon saving. The standard defines a MakerNote tag, which allows camera manufacturers to place any custom-format metadata in the file.
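As a minimal illustration, the following sketch reads the Exif tags from a JPEG's APP1 segment with the Pillow library; the file name is a placeholder, and numeric tag IDs are mapped to their standard names via Pillow's ExifTags table.

```python
from PIL import Image
from PIL.ExifTags import TAGS

image = Image.open("photo.jpg")  # hypothetical input file
exif = image.getexif()           # parsed contents of the APP1 segment

for tag_id, value in exif.items():
    # Map the numeric tag ID to the name defined in the Exif standard,
    # falling back to the raw ID for unknown (e.g. MakerNote) tags.
    name = TAGS.get(tag_id, hex(tag_id))
    print(f"{name}: {value}")
```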
25.
Thumbnail
–
Thumbnails are reduced-size versions of pictures or videos, used to help in recognizing and organizing them, serving the same role for images as a normal text index does for words. Some web designers produce thumbnails with HTML or client-side scripting that makes the user's browser shrink the picture; this saves no bandwidth, and the visual quality of browser resizing is usually less than ideal. Displaying a significant part of the picture instead of the full frame can allow the use of a smaller thumbnail while maintaining recognizability; for example, when thumbnailing a full-body portrait of a person, it may be better to show the face slightly reduced. However, this may mislead the viewer about what the image contains, so it is more suited to artistic presentations than to searching or catalogue browsing. In 2002, the court in the US case Kelly v. Arriba Soft Corporation ruled that it was fair use for Internet search engines to use thumbnail images to help web users find what they seek. The word thumbnail is a reference to the human thumbnail and alludes to the small size of the image or picture. The word was then used figuratively, in noun and adjective form, to refer to anything small or concise, such as a biographical essay; the use of thumbnail in the specific context of computer images, as a small graphical representation of a larger graphic, came later. The Denver Public Library Digitization and Cataloguing Program produces thumbnails that are 160 pixels in the long dimension. The California Digital Library Guidelines for Digital Images recommend 150-200 pixels for each dimension. Picture Australia requires thumbnails to be 150 pixels in the long dimension. The International Dunhuang Project Standards for Digitization and Image Management specifies a height of 96 pixels at 72 ppi. DeviantArt automatically produces thumbnails that are a maximum of 150 pixels in the long dimension. Flickr automatically produces thumbnails that are a maximum of 240 pixels in the long dimension, as well as smaller 75×75 pixel versions, and applies an unsharp mask to them. Picasa automatically produces thumbnails that are a maximum of 144 pixels in the long dimension, plus 160×160 pixel album thumbnails. The term vignette is sometimes used to describe an image that is smaller than the original but larger than a thumbnail. Thumbnail sketches are similar to doodles, but may include as much detail as a small sketch.
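As an illustration of the size limits quoted above, the following sketch produces a server-side thumbnail capped at 240 pixels in the long dimension (Flickr's stated maximum) with Pillow, avoiding the wasted bandwidth of browser-side resizing; the file names are placeholders.

```python
from PIL import Image

image = Image.open("photo.jpg")   # hypothetical input file

# thumbnail() resizes in place, preserving the aspect ratio, so that
# neither dimension exceeds the given bound.
image.thumbnail((240, 240))
image.save("photo_thumb.jpg")

print(image.size)  # the long dimension is now at most 240 pixels
```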
26.
Keykode
–
Keykode is a variation of time code used in the post-production process, designed to uniquely identify film frames in a film stock. Edge numbers are a series of numbers with key lettering printed along the edge of a 35 mm negative at intervals of one foot, and along the edge of a 16 mm negative at intervals of six inches. The numbers are placed on the negative at the time of manufacture by one of two methods. Latent image exposure marks the edge of the film while it passes through the perforation machine; this method is primarily utilized for color negative films. Visible ink is sometimes utilized to imprint the edge of the film, again in manufacturing at the time of perforation. The ink, which is not affected by photographic chemicals, is normally printed onto the base surface of the film, and the numbers are visible on both the raw stock and the processed film; this method is primarily utilized for black & white negative film. The edge numbers serve a number of purposes. Every key frame is numbered with a multi-digit identifier that may be referred to later. In addition, the date of manufacture and the emulsion type are imprinted, and this information is transferred from the negative to the positive prints, so the print may be edited and handled while the original negative remains safely untouched. Laboratories can also imprint their own edge numbers on the processed film negative or print, normally in yellow ink, to identify the film for their own purposes. To make edge numbers machine-readable, Kodak utilized the USS-128 barcode alongside the human-readable edge numbers, and also improved the quality and readability of the printed information to make it easier to identify. The Keykode consists of 12 characters in human-readable form followed by the same information in barcode form; Keykode is thus a form of metadata identifier for film negatives. The next six numbers in the Keykode are the identification number for that roll of film. On Kodak film stocks this number remains consistent for the entire roll, while Fuji stocks increment it when the frame number advances past 9999. Computers read the frame offset from digits added to the Keykode after a plus sign; digits such as 02 after the plus sign specify a frame offset of two frames. The number of frames within a film foot depends on both the film width and the frame pulldown itself, and can also be uneven within the same roll. Consider the example edge number EASTMAN52791673301122 KD. These numbers are consistent for a batch of film: EASTMAN is the manufacturer, 5279 is the stock type identifier, and the next three numbers are the batch number.
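As an illustration, the following sketch parses the human-readable portion of the example edge number above. Only the manufacturer, the four-digit stock type and the three-digit batch number are fields named in this section; the treatment of the remaining digits is an assumption for display purposes, not part of the Keykode specification.

```python
import re

def parse_edge_number(text: str) -> dict:
    """Split a human-readable edge number like 'EASTMAN52791673301122 KD'."""
    match = re.match(r"(?P<maker>[A-Z]+)(?P<digits>\d+)\s*(?P<suffix>[A-Z]*)", text)
    if match is None:
        raise ValueError(f"unrecognized edge number: {text!r}")
    digits = match.group("digits")
    return {
        "manufacturer": match.group("maker"),  # e.g. EASTMAN
        "stock_type": digits[:4],              # e.g. 5279
        "batch": digits[4:7],                  # next three digits
        "remainder": digits[7:],               # roll/frame digits; layout varies
        "suffix": match.group("suffix"),       # e.g. KD
    }

print(parse_edge_number("EASTMAN52791673301122 KD"))
# {'manufacturer': 'EASTMAN', 'stock_type': '5279', 'batch': '167',
#  'remainder': '3301122', 'suffix': 'KD'}
```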
27.
Phase One (company)
–
Phase One is a Danish company specializing in high-end digital photography equipment and software. It manufactures open-platform medium format camera systems and solutions, and its own raw processing software, Capture One, supports many DSLRs besides Phase One's backs. PODAS workshops are a series of photography workshops designed for digital photographers interested in working with medium format; PODAS is part of the Phase One educational division, and each attendee receives a Phase One digital camera system for the duration of the workshop. On 18 February 2014, it was announced that UK-based private equity firm Silverfleet Capital would acquire a 60% majority stake in the company. In 2009, Phase One purchased a major stake in the Japanese manufacturer Mamiya, and the two companies develop products together. The V-Grip Air supports a Profoto Air flash trigger for wireless flash synchronization. The 645DF+/645DF supports digital back interfaces including the IQ and P+ series digital backs as well as third-party digital backs from Hasselblad, Leaf and others. In 2012, Phase One released two specialty cameras: the iXR, designed specifically for reproduction work, and the iXA, made specifically for aerial photography. Both use the same 645 lenses as the normal 645 cameras; the main difference is that they have no viewfinder and very few mechanical moving parts. In 2013, Phase One signed a distribution agreement with Digital Transitions to deliver advanced digitization solutions for cultural heritage preservation imaging projects worldwide. In 2014, Phase One launched the IQ250, a medium format digital back with a CMOS/active pixel sensor; all Phase One digital backs launched prior to the IQ250 have sensors based on CCD technology. In 2015, Phase One introduced the XF camera system, a new medium format digital camera platform. At the same time, the IQ3 series digital backs were introduced: the first of the company's backs to utilize a USB 3 connection, which was not very widespread at the time of release, and the first to include a high-resolution multi-touch LCD, similar to the Retina screen used in the iPhone 4. The P series are fully untethered backs available for many different camera mounts, while the H series are tethered backs available for different camera mounts; the camera back connects through a standard 6-pin IEEE 1394 port. Originally this type of camera back was released as the LightPhase, a continuation of Phase One's previous tradition of using the word "phase" in product names; this changed with the release of the H20, which was originally called the LightPhase H20. The scan backs are tethered digital scan backs; all use a SCSI connection except the PowerPhase FX, which uses IEEE 1394 FireWire. The very early models, known as the CB6x and FC70, were made of plastic and had an external control unit that connected to a computer through NuBus.
28.
Hasselblad
–
Victor Hasselblad AB is a Swedish manufacturer of medium-format cameras, photographic equipment and image scanners based in Gothenburg, Sweden. The company is best known for the classic medium-format cameras it has produced since World War II. Perhaps the most famous use of the Hasselblad camera was during the Apollo program missions, when humans first landed on the Moon; almost all of the photographs taken during these missions used modified Hasselblad cameras. The company was established in 1841 in Gothenburg, Sweden, by Fritz Wiktor Hasselblad, as the trading company F. W. Hasselblad and Co. The founder's son, Arvid Viktor Hasselblad, was interested in photography; Hasselblad's corporate website quotes him as saying, "I certainly don't think that we will earn much money on this, but at least it will allow us to take pictures for free." In 1877, Arvid Hasselblad commissioned the construction of Hasselblad's long-time headquarters building. While on honeymoon, Arvid Hasselblad met George Eastman, founder of Eastman Kodak, and in 1888 Hasselblad became the sole Swedish distributor of Eastman's products. The business was so successful that in 1908 the photographic operations were spun off into their own corporation, Fotografiska AB, whose operations included a network of shops and photo labs. Management of the company passed to Karl Erik Hasselblad, Arvid's son. Karl Erik wanted his son, Victor Hasselblad, to have a thorough understanding of the camera business. Due to disputes within the family, particularly with his father, Victor left the business and in 1937 started his own store and lab in Gothenburg. During World War II, the Swedish military captured a fully functioning German aerial surveillance camera from a downed German plane, probably a Handkamera HK 12.5/7×9, which bore the codename GXN. The Swedish government realised the advantage of developing an aerial camera for its own use. By late 1941, the operation had over 20 employees, and the Swedish Air Force asked for another camera, one which would have a larger negative. Between 1941 and 1945, Hasselblad delivered 342 cameras to the Swedish military. In 1942, Karl Erik Hasselblad died and Victor took control of the family business. During the war, in addition to the military cameras, Hasselblad produced watch and clock parts, over 95,000 by the war's end. After the war, watch and clock production continued, and other work was also carried out, including producing a slide projector. Victor Hasselblad's real ambition, however, was to make high-quality civilian cameras. In 1945-1946, the first design drawings and wooden models were made for a camera to be called the Rossex. An internal design competition was held for elements of the camera; one of the winners was Sixten Sason. In 1948, the camera later known as the 1600 F was released.
29.
Kodak
–
Eastman Kodak Company, commonly referred to as Kodak, is an American technology company that produces imaging products, with its historic basis in photography. The company is headquartered in Rochester, New York, and is incorporated in New Jersey. Kodak provides packaging, functional printing, graphic communications and professional services for businesses around the world. Its main business segments are Print Systems, Enterprise Inkjet Systems, Micro 3D Printing and Packaging, and Software and Solutions. It is best known for photographic film products. Kodak was founded by George Eastman and Henry A. Strong. During most of the 20th century, Kodak held a dominant position in photographic film. The company's ubiquity was such that its "Kodak moment" tagline entered the lexicon to describe a personal event that demanded to be recorded for posterity. Kodak began to struggle financially in the late 1990s as a result of the decline in sales of photographic film. As part of a turnaround strategy, Kodak began to focus on digital photography and digital printing, and attempted to generate revenues through aggressive patent litigation. In January 2012, Kodak filed for Chapter 11 bankruptcy protection in the United States District Court for the Southern District of New York. In February 2012, Kodak announced that it would stop making digital cameras, pocket video cameras and digital picture frames. In January 2013, the court approved financing for Kodak to emerge from bankruptcy by mid-2013. Kodak sold many of its patents for approximately $525,000,000 to a group of companies under the names Intellectual Ventures and RPX Corporation. On September 3, 2013, the company emerged from bankruptcy, having shed its large legacy liabilities and exited several businesses; Personalized Imaging and Document Imaging are now part of Kodak Alaris. On March 12, 2014, it was announced that the board of directors had elected Jeffrey J. Clarke as chief executive officer and a member of the board. As late as 1976, Kodak commanded 90% of US film sales. Japanese competitor Fujifilm entered the U.S. market with lower-priced film and supplies, but Kodak did not believe that American consumers would ever desert its brand. Kodak passed on the opportunity to become the official film of the 1984 Los Angeles Olympics; Fuji won these sponsorship rights. Fuji also opened a film plant in the U.S., and its aggressive marketing and price cutting began taking market share from Kodak: Fuji went from a 10% share in the early 1990s to 17% in 1997. Fuji's films soon also found a competitive edge in higher-speed negative films, with a tighter grain structure. A complaint about the Japanese film market was lodged by the United States with the World Trade Organization, but on January 30, 1998, the WTO announced a sweeping rejection of Kodak's complaints about the film market in Japan. Although Kodak developed a digital camera in 1975, the first of its kind, the product was shelved for fear it would threaten film sales. In the 1990s, Kodak planned a journey to move to digital technology, and CEO George M. C. Fisher reached out to Microsoft. Apple's pioneering QuickTake consumer digital cameras, introduced in 1994, had the Apple label but were produced by Kodak.
30.
Canon Inc.
–
Canon Inc. is a Japanese multinational corporation specializing in imaging and optical products, headquartered in Ōta, Tokyo, Japan. Canon has a listing on the Tokyo Stock Exchange and is a constituent of the TOPIX index; it also has a listing on the New York Stock Exchange. At the beginning of 2015, Canon was the tenth largest public company in Japan when measured by market capitalization. The company was originally named Seikikōgaku kenkyūsho (Precision Optical Instruments Laboratory). In 1934 it produced the Kwanon, a prototype for Japan's first-ever 35 mm camera with a focal-plane shutter. In 1947 the company name was changed to Canon Camera Co., Inc., shortened to Canon Inc. in 1969. The name Canon comes from the Buddhist bodhisattva Guan Yin, previously transliterated as Kuanyin, Kwannon, or Kwanon in English. The origins of Canon date back to the founding of Precision Optical Instruments Laboratory in Japan in 1937 by Takeshi Mitarai, Goro Yoshida, Saburo Uchida and Takeo Maeda. During its early years the company did not have any facilities to produce its own optical glass. Between 1933 and 1936 "The Kwanon", a copy of the Leica design and Japan's first 35 mm focal-plane-shutter camera, was developed in prototype form. In 1940 Canon developed Japan's first indirect X-ray camera. Canon introduced a field zoom lens for television broadcasting in 1958, and in 1959 introduced the Reflex Zoom 8, the world's first movie camera with a zoom lens, and the Canonflex. In 1961 Canon introduced the rangefinder camera Canon 7, and in 1965 the Canon Pellix, a single-lens reflex camera with a semi-transparent stationary mirror which enabled the taking of pictures through the mirror. In 1971 Canon introduced the F-1, a high-end SLR camera, and in 1976 the AE-1, the world's first camera with an embedded micro-computer. In 1982 the "Wildlife as Canon Sees It" print ads first appeared in National Geographic magazine. Canon introduced the world's first inkjet printer using bubble-jet technology in 1985. Canon introduced the Canon Electro-Optical System (EOS) in 1987, named after the goddess of the dawn, with the EOS 650 autofocus SLR camera. Also in 1987 the Canon Foundation was established, and in 1988 Canon introduced its Kyosei philosophy. The EOS-1 flagship professional SLR line was launched in 1989; in the same year the EOS RT, the world's first AF SLR with a fixed, semi-transparent pellicle mirror, was unveiled. In 1992 Canon launched the EOS 5, the first camera with eye-controlled AF. In 1995 Canon introduced the first commercially available SLR lens with image stabilization, as well as the EOS-1N RS, the world's fastest AF SLR camera at the time with a shooting speed of 10 frames per second.
31.
Seiko Epson
–
Seiko Epson Corporation, or simply Epson, is a Japanese electronics company and one of the world's largest manufacturers of computer printers and information- and imaging-related equipment. It is one of three core companies of the Seiko Group, a name traditionally known for manufacturing Seiko timepieces since its founding. In 1968 the company moved its UK headquarters to Audenshaw, Manchester, after acquiring the Jones Sewing Machine Company. Daiwa Kogyo, supported by an investment from the Hattori family, began as a manufacturer of watch parts for Daini Seikosha. The company started operation in a 2,500-square-foot renovated miso storehouse with 22 employees. In 1943, Daini Seikosha established a factory in Suwa for manufacturing Seiko watches with Daiwa Kogyo. In 1959, the Suwa factory of Daini Seikosha was split off and merged with Daiwa Kogyo to form Suwa Seikosha Co., Ltd, the forerunner of the Seiko Epson Corporation. The company has developed many timepiece technologies; the watches made by the company are sold through the Seiko Watch Corporation, a subsidiary of Seiko Holdings Corporation. In 1961, Suwa Seikosha established a company called Shinshu Seiki Co. as a subsidiary to supply parts for Seiko watches. In September 1968, Shinshu Seiki launched the world's first mini-printer, the EP-101. In June 1975, the name Epson was coined for the next generation of printers based on the EP-101, which had been released to the public. In April of the same year Epson America Inc. was established to sell printers for Shinshu Seiki Co. In June 1978, the TX-80, an eighty-column dot-matrix printer, was released to the market, and was mainly used as a system printer for the Commodore PET computer. After two years of development, an improved model, the MX-80, was launched in October 1980; it was soon described in the company's advertising as the best-selling printer in the United States. In November 1985, Suwa Seikosha Co., Ltd. and the Epson Corporation merged to form Seiko Epson Corporation. Shortly after, in 1994, Epson released the first high-resolution color inkjet printer; newer models of the Stylus series employed Epson's special DURABrite ink. Epson also sold two hard drives, the HD850 and the HD860, with an MFM interface; the specifications are given in the Winn L. Rosch Hardware Bible, 3rd edition (SAMS Publishing). In 1994 Epson started outsourcing sales reps to help sell its products in retail stores in the United States, launching the Epson Weekend Warrior sales program. The purpose of the program was to help improve sales, improve retail sales reps' knowledge of Epson products and to address Epson customer service in a retail environment. Reps were assigned weekend shifts, typically around 12-20 hours a week. Epson ran the Weekend Warrior program first with TMG Marketing, later with Keystone Marketing Inc., then with Mosaic, and now with Campaigners Inc. The Mosaic contract expired on June 24, 2007, and Epson is now represented by Campaigners. The sales reps of Campaigners, Inc. are not outsourced, as Epson hired rack jobbers to ensure that retail customers displayed products properly; this frees up the regular sales force to concentrate on profitable sales solutions for VARs and system integrators. In June 2003, the company became public following its listing on the first section of the Tokyo Stock Exchange.
32.
Mamiya
–
Mamiya Digital Imaging Co., Ltd. is a Japanese company that manufactures high-end cameras and other related photographic and optical equipment. With headquarters in Tokyo, it has two manufacturing plants and a workforce of over 200 people. The company was founded in May 1940 by camera designer Seiichi Mamiya and financial backer Tsunejiro Sugawara. Mamiya originally achieved fame for its professional medium-format film cameras such as the Mamiya Six; it later developed the industry workhorse RB67 series, the RZ67 and the twin-lens reflex Mamiya C series, used by advanced amateur and professional photographers. Many Mamiya models over the past six decades have become collectors' items. Mamiya also manufactured the last models in the Omega line of medium format cameras. Mamiya entered other business markets over time by purchasing other companies; until 2000, it made fishing equipment such as fishing rods and fishing reels. In 2006, Mamiya Op Co., Ltd. transferred the camera and optical business to what is now Mamiya Digital Imaging; the original company, doing business as Mamiya-OP, continues to exist and makes a variety of industrial and electronics products, as well as golf clubs, golf club shafts and grips. In 2009, Phase One, a medium format digital camera back manufacturer from Denmark, purchased a major stake in Mamiya; the partnership offers combined product development and the establishment of a more efficient customer sales channel. Mamiya started manufacturing 135-film cameras in 1949, with 135-film point-and-shoot compact cameras being introduced later. The excellent Mamiya-35 series of rangefinder cameras was followed by the Mamiya Prismat SLR in 1961 and the Mamiya TL/DTL in the mid-to-late 1960s. The SX, XTL and NC1000 were other 135-film SLR camera models introduced by Mamiya. One of Mamiya's last 135-film SLR designs was the Z series. The Mamiya ZM, introduced in 1982, was essentially an improved version of the ZE-2 and was the last Mamiya 135-film camera produced. It had aperture-priority automatic exposure control based on center-weighted TTL readings, an automatic shutter-speed range from 4 seconds to 1/1000, and a manual range from 2 seconds to 1/1000. Visual and audio signals indicated over- or under-exposure and pending battery failure; metering modes, shutter release, self-timer, manual time settings and the ergonomics of the camera body were also improved. Mamiya made a series of square-format twin-lens reflex cameras throughout the middle of the twentieth century; these were developed into the C cameras, which have interchangeable lenses as well as bellows focusing. In 1970, Mamiya introduced the RB67, a 6×7 cm professional single-lens reflex. The RB67, a large, heavy, medium-format camera with built-in close-up bellows, was innovative and successful. It had a revolving back which enabled photographs to be taken in either landscape or portrait orientation without rotating the camera, at the expense of additional weight, and it soon became widely used by professional studio photographers. The 6×7 frame was described as ideal, as the 56 mm × 67 mm negatives required very little cropping to fit on standard 10 × 8 inch paper. When comparing the RB67 to full-frame 135 cameras there is a so-called crop factor of about a half.
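As a quick check of that figure: crop factor is conventionally the ratio of the 135 full-frame diagonal to the other format's diagonal, and the short sketch below compares a 36 × 24 mm frame with the RB67's nominal 56 × 67 mm frame.

```python
import math

def diagonal(width_mm: float, height_mm: float) -> float:
    """Frame diagonal in millimetres."""
    return math.hypot(width_mm, height_mm)

full_frame = diagonal(36, 24)  # 135 full frame, ~43.3 mm
rb67 = diagonal(67, 56)        # RB67 nominal 6x7 frame, ~87.3 mm

# Ratio of diagonals gives the crop factor relative to 135 full frame.
print(round(full_frame / rb67, 2))  # ~0.5, i.e. a crop factor of a half
```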
33.
Leaf (Israeli company)
–
Leaf, previously a division of Scitex and later Kodak, is now a subsidiary of Phase One. Leaf manufactures high-end digital backs for medium format and large format cameras. In 1991, Leaf introduced the first medium format digital camera back, the Leaf DCB1, nicknamed "The Brick", which had a resolution of 4 million pixels. As of 2012, Leaf produces the Credo line of digital camera backs; until 2010, Leaf also produced the photography workflow software Leaf Capture. After Leaf's DCB, the digital backs evolved into two lines, the Aptus and the Valeo. The main difference is that the Aptus models have a 3.5-inch touchscreen, while the Valeos can still be used untethered by using the DP-67 software or the more recent WiView software on a Compaq iPAQ; the iPAQs are connected via Bluetooth to the digital backs. For untethered usage, battery packs are needed for both the Aptus and the Valeo; otherwise the back must be powered via a FireWire connection to a computer. The Valeo models need a Leaf Digital Magazine as well for untethered use. Credo is the current generation of digital camera backs from Leaf, and the first Leaf back based on the Phase One platform rather than Leaf legacy platforms. Among the changes, the back no longer has a cooling fan, and the new high-resolution capacitive touch screen no longer requires a stylus pen. The back also boots in one second with the Phase One OS, where the previous Windows CE operating system took up to 10 seconds to boot. The differences between the Credo and the Aptus II are as follows. Interface: the Credo has both FW800 and USB 3 interfaces, while the Aptus II has only FW800. The Aptus uses a proprietary L-shaped FireWire connector, while the Credo uses a straight regular connector; thick-headed connectors will not fit into either back. The Credo uses a standard USB 3 type B connector, and standard cables may be used; officially only 3-meter USB 3 cables are supported, but 5-meter and 7.5-meter cables have been tested and worked with repeaters, and FireWire cables are supported up to 10 meters on both the Credo and the Aptus. Battery: the Credo uses an internal battery, and the Aptus uses an external battery. The Credo cannot operate without a battery inside it, while a tethered Aptus can work without a battery. The Credo battery is charged when using powered FW800, but not when using USB. It is possible to use low-powered Windows tablets to tether the Credo. The Aptus can take a double-capacity, double-size battery, which the Credo cannot, since the Credo's battery goes into a slot in the body. Screen: the Credo has a smaller screen, but with more resolution.