A digital camera or digicam is a camera that captures photographs in digital memory. Most cameras produced today are digital. While dedicated digital cameras still exist, many more cameras are now incorporated into mobile devices such as portable touchscreen computers, which can, among many other purposes, use their cameras to initiate live videotelephony and directly edit and upload imagery to others. High-end, high-definition dedicated cameras are still used by professionals. Digital and movie cameras share an optical system, typically using a lens with a variable diaphragm to focus light onto an image pickup device. The diaphragm and shutter admit the correct amount of light to the imager, just as with film, but the image pickup device is electronic rather than chemical. Unlike film cameras, digital cameras can display images on a screen immediately after they are recorded, and can store and delete images from memory. Many digital cameras can also record moving videos with sound, and some can perform elementary image editing.
The history of the digital camera began with Eugene F. Lally of the Jet Propulsion Laboratory, who thought about how to use a mosaic photosensor to capture digital images. His 1961 idea was to take pictures of the planets and stars while travelling through space to give information about the astronauts' position. As with Texas Instruments employee Willis Adcock's filmless camera of 1972, the technology had yet to catch up with the concept. The Cromemco Cyclops was an all-digital camera introduced as a commercial product in 1975. Its design was published as a hobbyist construction project in the February 1975 issue of Popular Electronics magazine; it used a 32×32 metal-oxide-semiconductor sensor. Steven Sasson, an engineer at Eastman Kodak, built the first self-contained electronic camera that used a charge-coupled device image sensor in 1975. Early uses were military and scientific. In 1986, the Japanese company Nikon introduced the first digital single-lens reflex camera, the Nikon SVC. In the mid-to-late 1990s, digital cameras became common among consumers.
By the mid-2000s, digital cameras had largely replaced film cameras. In 2000, Sharp introduced the world's first digital camera phone, the J-SH04 J-Phone, in Japan. By the mid-2000s, higher-end cell phones had an integrated digital camera, and by the beginning of the 2010s almost all smartphones had one. The two major types of digital image sensor are CCD and CMOS. A CCD sensor has one amplifier for all the pixels, while each pixel in a CMOS active-pixel sensor has its own amplifier. Compared to CCDs, CMOS sensors use less power. Cameras with a small sensor often use a back-side-illuminated CMOS sensor. Overall final image quality depends more on the image processing capability of the camera than on the sensor type. The resolution of a digital camera is limited by the image sensor that turns light into discrete signals. The brighter the image at a given point on the sensor, the larger the value read for that pixel. Depending on the physical structure of the sensor, a color filter array may be used, which requires demosaicing to recreate a full-color image.
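As an illustrative sketch only (real camera pipelines use far more sophisticated algorithms), the following shows the idea of demosaicing a Bayer color filter array by simple bilinear averaging. The RGGB layout and the function names are assumptions for the example, not part of any standard:

```python
def bayer_color(row, col):
    # Assumed RGGB pattern: even rows alternate R,G; odd rows alternate G,B.
    # Real sensors vary in which corner the pattern starts.
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

def demosaic(mosaic):
    """Naive bilinear demosaic of a 2-D list of raw single-channel values."""
    h, w = len(mosaic), len(mosaic[0])

    def avg(row, col, want):
        # Average every pixel in the 3x3 neighbourhood (including self)
        # that physically sampled the wanted channel.
        vals = [mosaic[r][c]
                for r in range(max(0, row - 1), min(h, row + 2))
                for c in range(max(0, col - 1), min(w, col + 2))
                if bayer_color(r, c) == want]
        return sum(vals) / len(vals)

    return [[tuple(avg(r, c, ch) for ch in 'RGB') for c in range(w)]
            for r in range(h)]

# A uniformly lit patch reconstructs to a uniform gray in all channels.
rgb = demosaic([[100] * 4 for _ in range(4)])
```

The point of the sketch is only that each output pixel must borrow the two color channels its photosite never measured from its neighbors.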
The number of pixels in the sensor determines the camera's "pixel count". In a typical sensor, the pixel count is the product of the number of rows and the number of columns. For example, a 1,000 by 1,000 pixel sensor would have 1 megapixel. The final quality of an image depends on all the optical transformations in the chain of producing the image. Carl Zeiss points out that the weakest link in an optical chain determines the final image quality. In the case of a digital camera, a simplistic way of expressing it is that the lens determines the maximum sharpness of the image while the image sensor determines the maximum resolution. The illustration on the right can be said to compare a lens with poor sharpness on a camera with high resolution to a lens with good sharpness on a camera with lower resolution. Since the first digital backs were introduced, there have been three main methods of capturing the image, each based on the hardware configuration of the sensor and color filters. Single-shot capture systems use either one sensor chip with a Bayer filter mosaic, or three separate image sensors which are exposed to the same image via a beam splitter.
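The pixel-count arithmetic above can be made concrete in a few lines; the function name here is an arbitrary choice for the example:

```python
def megapixels(columns, rows):
    # Pixel count is columns x rows; one megapixel = 1,000,000 pixels.
    return columns * rows / 1_000_000

# A 1,000 x 1,000 sensor yields 1.0 megapixel;
# a 4,000 x 3,000 sensor yields 12.0 megapixels.
mp_small = megapixels(1000, 1000)
mp_large = megapixels(4000, 3000)
```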
Multi-shot exposes the sensor to the image in a sequence of three or more openings of the lens aperture. There are several ways of applying the multi-shot technique; the most common was to use a single image sensor with three filters passed in front of the sensor in sequence to obtain the additive color information. Another multiple-shot method is called microscanning; it uses a single sensor chip with a Bayer filter and physically moves the sensor on the focus plane of the lens to construct a higher-resolution image than the native resolution of the chip. A third version combined the two methods without a Bayer filter on the chip. The third main method is called scanning because the sensor moves across the focal plane much like the sensor of an image scanner. The linear or tri-linear sensors in scanning cameras utilize only a single line of photosensors, or three lines for the three colors. Scanning may also be accomplished by rotating the whole camera; a digital rotating line camera offers images of very high total resolution.
The choice of method for a given capture is determined by the subject matter. It is usually inappropriate to attempt to capture a subject that moves with anything but a single-shot system.
Interoperability is a characteristic of a product or system whose interfaces are completely understood, to work with other products or systems, at present or in the future, in either implementation or access, without any restrictions. While the term was initially defined for information technology or systems engineering services to allow for information exchange, a broader definition takes into account social and organizational factors that impact system-to-system performance. A related challenge is the task of building coherent services for users when the individual components are technically different and managed by different organizations. If two or more systems are capable of communicating with each other, they exhibit syntactic interoperability when using specified data formats and communication protocols. XML and SQL standards are among the tools of syntactic interoperability. This is also true for lower-level data formats, such as ensuring that alphabetical characters are stored in the same variation of ASCII or a Unicode format in all the communicating systems.
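A minimal sketch of syntactic interoperability, using JSON over UTF-8 as the agreed format and encoding (JSON is chosen here for brevity; the text's XML example would work the same way):

```python
import json

# "System A" serialises a record using the agreed data format (JSON)
# and character encoding (UTF-8).
record = {"name": "Müller", "id": 42}
wire = json.dumps(record, ensure_ascii=False).encode("utf-8")

# "System B" shares nothing with System A except the agreed format and
# encoding, yet reconstructs the record exactly -- syntactic interoperability.
received = json.loads(wire.decode("utf-8"))
assert received == record
```

If the two systems disagreed even on the character encoding (say, Latin-1 versus UTF-8), the non-ASCII name would already be corrupted in transit.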
Beyond the ability of two or more computer systems to exchange information, semantic interoperability is the ability to automatically interpret the information exchanged meaningfully and accurately in order to produce useful results as defined by the end users of both systems. To achieve semantic interoperability, both sides must refer to a common information exchange reference model; the content of the information exchange requests is then unambiguously defined: what is sent is the same as what is understood. The possibility of promoting this result by user-driven convergence of disparate interpretations of the same information has been the object of study by research prototypes such as S3DB. Cross-domain interoperability involves multiple social, political and legal entities working together for a common interest and/or information exchange. Interoperability implies open standards ab initio, i.e. by definition, and implies exchanges between a range of products, or similar products from several different vendors, or between past and future revisions of the same product.
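The idea of a common reference model can be sketched as follows. The field names, the two local schemas and the mappings are all hypothetical, invented for this illustration:

```python
# A shared reference model both systems agree to map into.
REFERENCE_FIELDS = {"patient_id", "birth_date"}

# Hypothetical per-system vocabularies mapped onto the reference model.
MAPPING_A = {"pid": "patient_id", "dob": "birth_date"}
MAPPING_B = {"subject": "patient_id", "born": "birth_date"}

def to_reference(record, mapping):
    # Translate a system-local record into the common model.
    out = {mapping[k]: v for k, v in record.items() if k in mapping}
    assert set(out) <= REFERENCE_FIELDS
    return out

a = to_reference({"pid": 7, "dob": "1990-01-01"}, MAPPING_A)
b = to_reference({"subject": 7, "born": "1990-01-01"}, MAPPING_B)
# Both systems now agree on what was said, not merely how it was encoded.
```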
Interoperability may also be developed post facto, as a special measure between two products, while excluding the rest, by using open standards. When a vendor is forced to adapt its system to a dominant system that is not based on open standards, it is not interoperability but only compatibility. Open standards rely on a broadly consultative and inclusive group, including representatives from vendors and others holding a stake in the development, that discusses and debates the technical and economic merits and feasibility of a proposed common protocol. After the doubts and reservations of all members are addressed, the resulting common document is endorsed as a common standard. This document is subsequently released to the public and henceforth becomes an open standard. It is published and available free or at a nominal cost to any and all comers, with no further encumbrances. Various vendors and individuals can use the standards document to make products that implement the common protocol defined in the standard and are thus interoperable by design, with no specific liability or advantage for any customer for choosing one product over another on the basis of standardised features.
The vendors' products compete on the quality of their implementation, user interface, ease of use, price and a host of other factors, while keeping the customer's data intact and transferable should they choose to switch to a competing product for business reasons. Post facto interoperability may be the result of the absolute market dominance of a particular product in contravention of any applicable standards, or because effective standards were not present at the time of that product's introduction. The vendor behind that product can then choose to ignore any forthcoming standards and not co-operate in any standardisation process at all, using its near-monopoly to insist that its product sets the de facto standard by its market dominance. This is not a problem if the product's implementation is open and minimally encumbered, but it may as well be both closed and encumbered. Because of the network effect, achieving interoperability with such a product is both critical for any other vendor that wishes to remain relevant in the market, and difficult to accomplish because of the lack of co-operation on equal terms with the original vendor, who may well see the new vendor as a potential competitor and threat.
The newer implementations often rely on clean-room reverse engineering in the absence of technical data to achieve interoperability. The original vendors can provide such technical data to others in the name of "encouraging competition", but such data is invariably encumbered and may be of limited use. Availability of such data is not equivalent to an open standard, because the data is provided by the original vendor on a discretionary basis. That vendor has every interest in blocking the effective implementation of competing solutions, and may subtly alter or change its product in newer revisions so that competitors' implementations are almost, but not quite, interoperable, leading customers to consider them unreliable or of a lower quality. These changes can either not be passed on to other vendors at all, or passed on after a strategic delay, maintaining the market dominance of the original vendor. The data itself may also be encumbered, e.g. by patents or pricing, leading to a dependence of all competing solutions on the original vendor and channelling a revenue stream from the competitors' customers back to the original vendor.
This revenue stream is only a result of the original vendor's market dominance.
USB mass storage device class
The USB mass storage device class is a set of computing communications protocols defined by the USB Implementers Forum that makes a USB device accessible to a host computing device and enables file transfers between the host and the USB device. To a host, the USB device acts as an external hard drive. Devices connected to computers via this standard include external magnetic hard drives; external optical drives, including CD and DVD reader and writer drives; portable flash memory devices; solid-state drives; adapters between standard flash memory cards and USB connections; digital cameras; digital audio and portable media players; card readers; PDAs; and mobile phones. Devices supporting this standard are known as MSC devices. While MSC is the original abbreviation, UMS has also come into common use. Most mainstream operating systems include support for USB mass storage devices. Microsoft Windows has supported MSC since Windows 2000. There is no support for USB supplied by Microsoft in versions of Windows before Windows 95 and Windows NT 4.0.
Windows 95 OSR2.1, an update to the operating system, featured limited support for USB. During that time no generic USB mass-storage driver was produced by Microsoft, and a device-specific driver was needed for each type of USB storage device. Third-party freeware drivers became available for Windows 98 and Windows 98SE, and third-party drivers are also available for Windows NT 4.0. Windows 2000 has support for standard USB mass-storage devices. Windows Mobile supports accessing most USB mass-storage devices formatted with FAT on devices with USB Host. However, portable devices cannot provide enough power for hard-drive disk enclosures without a self-powered USB hub. A Windows Mobile device cannot display its file system as a mass-storage device unless the device implementer adds that functionality, although third-party applications add MSC emulation to most WM devices. Only memory cards can be exported, due to file-system issues. The AutoRun feature of Windows worked on all removable media, allowing USB storage devices to become a portal for computer viruses.
Beginning with Windows 7, Microsoft limited AutoRun to CD and DVD drives, and issued updates bringing this behaviour to previous Windows versions. Neither MS-DOS nor most compatible operating systems included support for USB. Third-party generic drivers, such as Duse, USBASPI and DOSUSB, are available to support USB mass-storage devices. FreeDOS supports USB mass storage as an Advanced SCSI Programming Interface interface. Apple's Mac OS 9 and Mac OS X support USB mass storage. The Linux kernel has supported USB mass-storage devices since its 2.4 series, and a backport to kernel 2.2.18 has been made. In Linux, more features exist in addition to the generic drivers for USB mass-storage class devices, including quirks, bug fixes and additional functionality for specific devices and controllers. This includes a certain portion of Android-based devices, which support USB OTG, since Android uses the Linux kernel. Solaris has supported such devices since its version 2.8, NetBSD since its version 1.5, FreeBSD since its version 4.0 and OpenBSD since its version 2.7.
Digital UNIX has supported USB and USB mass-storage devices since its version 4.0E. AIX has supported USB mass-storage devices since its 6.1 T3 release. The Xbox 360 and PlayStation 3 support most mass-storage devices for the transfer of media such as pictures and music; as of April 2010, the Xbox 360 could use a mass-storage device for saved games, and the PS3 allowed transfers between devices using a mass-storage device. Independent developers have released drivers for the TI-84 Plus and TI-84 Plus Silver Edition to access USB mass-storage devices; in these calculators, the usb8x driver supports the msd8x user-interface application. The USB mass-storage specification provides an interface to a number of industry-standard command sets, allowing a device to disclose the command set it supports via its subclass code. In practice, there is little support for specifying a command set via its subclass. Subclass codes specify the following command sets: Reduced Block Commands (RBC); SFF-8020i and MMC-2; QIC-157; Uniform Floppy Interface (UFI); SFF-8070i; and the SCSI transparent command set. The specification does not require a particular file system on conforming devices.
Based on the specified command set, the device provides a means to read and write sectors of data, and operating systems may treat a USB mass-storage device like a hard drive. Because of its relative simplicity, the most common file system on embedded devices such as USB flash drives is FAT.
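The sector-level view that MSC exposes to the host can be sketched with a toy in-memory "disk". This is purely illustrative: a real host issues SCSI READ/WRITE commands over USB rather than calling Python methods, and the class name here is invented for the example:

```python
SECTOR_SIZE = 512  # bytes; the common logical block size for such devices

class RamDisk:
    """Toy stand-in for a mass-storage device: a flat array of sectors,
    addressed by logical block address (LBA), with no file system on top."""

    def __init__(self, sectors):
        self.data = bytearray(sectors * SECTOR_SIZE)

    def read_sector(self, lba):
        off = lba * SECTOR_SIZE
        return bytes(self.data[off:off + SECTOR_SIZE])

    def write_sector(self, lba, payload):
        # Devices transfer whole sectors; partial writes are not allowed.
        assert len(payload) == SECTOR_SIZE
        off = lba * SECTOR_SIZE
        self.data[off:off + SECTOR_SIZE] = payload

disk = RamDisk(sectors=8)
disk.write_sector(2, b"\xAA" * SECTOR_SIZE)
```

Everything above the sector interface, such as FAT, is the operating system's concern, which is exactly why the specification can stay silent about file systems.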
Standardization or standardisation is the process of implementing and developing technical standards based on the consensus of different parties that include firms, interest groups, standards organizations and governments. Standardization can help to maximize compatibility, safety, repeatability, or quality. It can also facilitate the commoditization of formerly custom processes. In the social sciences, including economics, the idea of standardization is close to the solution for a coordination problem, a situation in which all parties can realize mutual gains, but only by making mutually consistent decisions. This view includes the case of "spontaneous standardization processes", which produce de facto standards. Standard weights and measures were developed by the Indus Valley Civilization. The centralized weight and measure system served the commercial interest of Indus merchants, as smaller weight measures were used to measure luxury goods while larger weights were employed for buying bulkier items, such as food grains.
Weights existed in multiples of a standard weight and in categories. Technical standardisation enabled gauging devices to be used effectively in angular measurement and measurement for construction. Uniform units of length were used in the planning of towns such as Lothal, Kalibangan, Dholavira and Mohenjo-daro. The weights and measures of the Indus civilization also reached Persia and Central Asia, where they were further modified. Shigeo Iwata describes the excavated weights unearthed from the Indus civilization: a total of 558 weights were excavated from Mohenjodaro, Harappa and Chanhu-daro, not including defective weights. They did not find statistically significant differences between weights that were excavated from five different layers, each measuring about 1.5 m in depth. This was evidence that strong control existed for at least a 500-year period. The 13.7-g weight seems to be one of the units used in the Indus valley. The notation was based on decimal systems. 83% of the weights which were excavated from the above three cities were cubic, and 68% were made of chert. The implementation of standards in industry and commerce became important with the onset of the Industrial Revolution and the need for high-precision machine tools and interchangeable parts.
Henry Maudslay developed the first industrially practical screw-cutting lathe in 1800. This allowed for the standardisation of screw thread sizes for the first time and paved the way for the practical application of interchangeability to nuts and bolts. Before this, screw threads were made by chipping and filing. Nuts were rare; metal bolts passing through wood framing to a metal fastening on the other side were fastened in non-threaded ways. Maudslay standardized the screw threads used in his workshop and produced sets of taps and dies that would make nuts and bolts to those standards, so that any bolt of the appropriate size would fit any nut of the same size. This was a major advance in workshop technology. Maudslay's work, as well as the contributions of other engineers, accomplished a modest amount of industry standardization. Joseph Whitworth's screw thread measurements were adopted as the first national standard by companies around the country in 1841. It came to be known as the British Standard Whitworth and was widely adopted in other countries.
This new standard specified a 55° thread angle, a thread depth of 0.640327p and a radius of 0.137329p, where p is the pitch. The thread pitch increased with diameter in steps specified on a chart. An example of the use of the Whitworth thread is the Royal Navy's Crimean War gunboats; these were the first instance of "mass-production" techniques being applied to marine engineering. With the adoption of BSW by British railway lines, many of which had previously used their own standard both for threads and for bolt head and nut profiles, and with improving manufacturing techniques, it came to dominate British manufacturing. American Unified Coarse was originally based on almost the same imperial fractions. The Unified thread angle is 60° and has flattened crests. Thread pitch is the same in both systems except that the thread pitch for the 1⁄2 in bolt is 12 threads per inch in BSW versus 13 tpi in UNC. By the end of the 19th century, differences in standards between companies were making trade increasingly difficult and strained. For instance, an iron and steel dealer recorded his displeasure in The Times: "Architects and engineers specify such unnecessarily diverse types of sectional material or given work that anything like economical and continuous manufacture becomes impossible.
In this country no two professional men are agreed upon the size and weight of a girder to employ for given work." The Engineering Standards Committee was established in London in 1901 as the world's first national standards body. It subsequently extended its standardization work and became the British Engineering Standards Association in 1918, adopting the name British Standards Institution in 1931 after receiving its Royal Charter in 1929. The national standards were adopted universally throughout the country and enabled the markets to act more rationally and efficiently, with an increased level of cooperation. After the First World War, similar national bodies were established in other countries. The Deutsches Institut für Normung was set up in Germany in 1917, followed by its counterparts, the American National Standards Institute and the French Commission Permanente de Standardisation, both established in 1918.
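The Whitworth geometry quoted earlier (depth 0.640327p, radius 0.137329p, where p is the pitch) lends itself to a short worked example; the function name and the returned dictionary layout are choices made for this sketch:

```python
def whitworth(tpi):
    """Whitworth thread dimensions, in inches, from threads per inch."""
    p = 1.0 / tpi  # pitch: the axial distance between adjacent threads
    return {
        "pitch": p,
        "depth": 0.640327 * p,   # thread depth per the BSW formula
        "radius": 0.137329 * p,  # crest/root radius per the BSW formula
    }

# The 1/2 in BSW bolt uses 12 threads per inch (UNC uses 13 tpi instead).
half_inch_bsw = whitworth(12)
```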
Konica Minolta, Inc. is a Japanese multinational technology company headquartered in Marunouchi, Tokyo, with offices in 49 countries worldwide. The company manufactures business and industrial imaging products, including copiers, laser printers, multi-functional peripherals and digital print systems for the production printing market. Konica Minolta's managed print service is called Optimised Print Services. The company also makes optical devices, including lenses and LCD film. It once had camera and photo operations inherited from Konica and Minolta, but these were sold in 2006 to Sony, with Sony's Alpha series being the successor SLR division brand. Konica Minolta was formed by a merger between the Japanese imaging firms Konica and Minolta, announced on January 7, 2003, with the corporate structure completing the re-organization in October 2003. Different group companies, such as the operations in the headquarters and the national operating companies, began the process around the same time, though the exact dates vary for each group company.
Konica Minolta uses a "Globe Mark" logo, similar to but different from that of the former Minolta company, and it uses the same corporate slogan as the former Minolta company: "The Essentials of Imaging". On January 19, 2006, the company announced that it was quitting the camera business due to heavy financial losses. SLR camera service operations were handed over to Sony starting on March 31, 2006, and Sony has continued development of cameras that are compatible with Minolta autofocus lenses. In the negotiations, Konica Minolta wanted cooperation with Sony in camera equipment production rather than a sell-out deal, but Sony vehemently refused, saying that it would either acquire everything or leave everything that had to do with the camera equipment sector of KM. Konica Minolta withdrew from the photography business on September 30, 2006, and 3,700 employees were laid off. Konica Minolta closed down its photo imaging division in March 2007; the color film, color paper, photo chemical and digital mini-lab machine divisions have all ceased operations.
Dai Nippon Printing purchased Konica's Odawara factory, with plans to continue producing paper under Dai Nippon's brand, and CPAC acquired the Konica chemical factory. Konica Minolta has since expanded its business presence and sells its products in the Americas, Asia Pacific, the Middle East and Africa. Minolta had been a competitor in the 35 mm SLR market since the development of the manual-focus SRT and other models in the mid-1960s. Minolta positioned most of its cameras to compete in the amateur market, though it did produce a high-quality MF SLR in the XD-11. Minolta's last MF SLR cameras were the X-370 and X-700. Shanghai Optical Co. purchased tools and production plant from Minolta at different times, making some X300-series cameras for Minolta branding, and continues to release MD-mount film SLRs compatible with the old system under the Seagull name. Until the sale of Konica Minolta's Photo Imaging unit to Sony in 2006, Konica Minolta produced the former Minolta range of 35 mm autofocus single-lens reflex cameras, variously named "Minolta Maxxum" in North America, "Minolta Dynax" in Europe, and "Minolta Alpha" in Japan and the rest of Asia.
This range was introduced in 1985 with the Minolta Maxxum 7000 and culminated with the professional Maxxum 9, made in a titanium body, and the technically advanced Maxxum 7. The final Minolta 35 mm AF SLR camera was the Maxxum 70, built in China. Konica Minolta had a line of digital point-and-shoot cameras to compete in the digital photography market; their Dimage line included imaging software as well as film scanners. They created a new category of "SLR-like" cameras with the introduction of the DiMAGE 7 and DiMAGE 5. These cameras mixed many of the features of a traditional SLR camera with the special abilities of a digital camera. They had a mechanical zoom ring and an electronic focus ring on the lens barrel and used an electronic viewfinder showing 100 per cent of the lens view. They also added many high-level features, such as a histogram, and the cameras were TTL-compatible with Minolta's final generation of flashes for film SLRs. The controls were designed to be used by people familiar with SLR cameras; however, the manual-zoom auto-focus lens was not interchangeable.
The model 5 had a 1/1.8-inch sensor with 3.3 megapixels, and its fixed zoom was equivalent to a 35–250 mm lens. The Dimage 7, 7i, 7Hi and A1 had 5-megapixel sensors, for which the same lens provided 28–200 mm equivalent coverage; the A2 and A200 increased the sensor resolution to 8 megapixels. The original Dimage 5 and 7 models were more sensitive to infrared light than later models, which incorporated more aggressive IR sensor filters, and so have become popular for infrared photography. The A1/A2/A200 integrated a piezoelectrically actuated anti-camera-shake system. Before the closure of the Photo Imaging unit, the Dimâge lineup included the long-zoom Z line, the E/G lines, the thin/light X line and the advanced A line. The DiMAGE G500 was a five-megapixel compact digital camera manufactured by Konica Minolta in 2003. It came in a stainless steel case, with a 3x zoom lens with a retractable barrel and dual Secure Digital and MagicGate card slots, and had a 1.3-second startup time. Minolta made some early forays into digital SLRs with the RD-175 in 1995 and the Minolta Dimâge RD 3000 in 1999, but were among the last of the major camera manufacturers to introduce a digital SLR based on their film camera system.
In computing, a file system or filesystem controls how data is stored and retrieved. Without a file system, information placed in a storage medium would be one large body of data with no way to tell where one piece of information stops and the next begins. By separating the data into pieces and giving each piece a name, the information is easily isolated and identified. Taking its name from the way paper-based information systems are named, each group of data is called a "file", and the structure and logic rules used to manage the groups of information and their names are called a "file system". There are many different kinds of file systems; each one has a different structure and logic, and different properties of speed, security and more. Some file systems have been designed to be used for specific applications; for example, the ISO 9660 file system is designed for optical discs. File systems can be used on numerous types of storage devices that use different kinds of media. As of 2019, hard disk drives have been key storage devices and are projected to remain so for the foreseeable future.
Other kinds of media that are used include SSDs, magnetic tapes and optical discs. In some cases, such as with tmpfs, the computer's main memory is used to create a temporary file system for short-term use. Some file systems are used on local data storage devices; others provide file access via a network protocol. Some file systems are "virtual", meaning that the supplied "files" are computed on request or are merely a mapping into a different file system used as a backing store. The file system manages access to both the content of files and the metadata about those files. It is responsible for arranging storage space. Before the advent of computers the term file system was used to describe a method of storing and retrieving paper documents. By 1961 the term was being applied to computerized filing alongside the original meaning, and by 1964 it was in general use. A file system consists of two or three layers. Sometimes the layers are explicitly separated, and sometimes the functions are combined. The logical file system is responsible for interaction with the user application. It provides the application program interface for file operations — OPEN, CLOSE, READ, etc. — and passes the requested operation to the layer below it for processing.
The logical file system "manage[s] open file table entries and per-process file descriptors" and provides "file access, directory operations and protection". The second, optional layer is the virtual file system: "This interface allows support for multiple concurrent instances of physical file systems, each of which is called a file system implementation." The third layer is the physical file system. This layer is concerned with the physical operation of the storage device; it processes physical blocks being read or written. It handles buffering and memory management and is responsible for the physical placement of blocks in specific locations on the storage medium. The physical file system interacts with the device drivers or with the channel to drive the storage device. (Note: this only applies to file systems used in storage devices.) File systems allocate space in a granular manner, usually multiple physical units on the device. The file system is responsible for organizing files and directories, and for keeping track of which areas of the media belong to which file and which are not being used.
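The layering above can be illustrated with a deliberately tiny sketch: a "logical" layer that owns names, an open-file table and descriptors, sitting on a "physical" layer that only knows how to place fixed-size blocks. All class names, block sizes and policies here are invented for the illustration:

```python
class PhysicalLayer:
    """Bottom layer: reads and writes fixed-size blocks on the 'medium'."""
    BLOCK = 64

    def __init__(self):
        self.blocks = {}  # block number -> bytes

    def write_block(self, n, data):
        self.blocks[n] = data[:self.BLOCK]

    def read_block(self, n):
        return self.blocks.get(n, b"")

class LogicalLayer:
    """Top layer: filenames, the open-file table and file descriptors.
    Toy simplification: each file occupies exactly one block."""

    def __init__(self, phys):
        self.phys = phys
        self.names = {}       # filename -> block number
        self.open_table = {}  # descriptor -> filename
        self.next_fd = 3      # 0-2 conventionally reserved
        self.next_block = 0

    def open(self, name):
        fd = self.next_fd
        self.next_fd += 1
        self.open_table[fd] = name
        return fd

    def write(self, fd, data):
        name = self.open_table[fd]
        if name not in self.names:  # allocate a block on first write
            self.names[name] = self.next_block
            self.next_block += 1
        self.phys.write_block(self.names[name], data)

    def read(self, fd):
        return self.phys.read_block(self.names[self.open_table[fd]])

    def close(self, fd):
        del self.open_table[fd]

fs = LogicalLayer(PhysicalLayer())
fd = fs.open("notes.txt")
fs.write(fd, b"hello")
fs.close(fd)
```

Note how the application only ever sees descriptors and names, while the physical layer only ever sees block numbers, mirroring the separation the text describes.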
For example, Apple DOS of the early 1980s used a track/sector map for the 256-byte sectors of its 140-kilobyte floppy disks. Granular allocation results in unused space when a file is not an exact multiple of the allocation unit, sometimes referred to as slack space. For a 512-byte allocation, the average unused space is 256 bytes; for 64 KB clusters, the average unused space is 32 KB. The size of the allocation unit is chosen when the file system is created. Choosing the allocation size based on the average size of the files expected to be in the file system can minimize the amount of unusable space, and the default allocation may provide reasonable usage. Choosing an allocation size that is too small results in excessive overhead if the file system will contain very large files. File system fragmentation occurs when unused space or single files are not contiguous. As a file system is used, files are created and deleted. When a file is created the file system allocates space for the data. Some file systems permit or require specifying an initial space allocation and subsequent incremental allocations as the file grows.
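The slack-space arithmetic discussed above is simple enough to compute directly; the function name is an arbitrary choice for this sketch:

```python
def slack(file_size, cluster):
    """Bytes wasted in the final, partially filled allocation unit."""
    # Round the file size up to a whole number of clusters.
    allocated = -(-file_size // cluster) * cluster
    return allocated - file_size

# A 1,000-byte file in 512-byte clusters occupies 1,024 bytes: 24 wasted.
# The same file in 64 KB clusters occupies 65,536 bytes: 64,536 wasted.
small_cluster_waste = slack(1000, 512)
large_cluster_waste = slack(1000, 64 * 1024)
```

Averaged over files of random sizes, the waste per file is about half a cluster, which is where the text's 256-byte and 32 KB figures come from.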
As files are deleted, the space they were allocated is considered available for use by other files. This creates alternating used and unused areas of various sizes; this is free space fragmentation. When a file is created and there is not an area of contiguous space available for its initial allocation, the space must be assigned in fragments. When a file is modified such that it becomes larger, it may exceed the space initially allocated to it; another allocation must be assigned elsewhere and the file becomes fragmented. A filename is used to identify a storage location in the file system. Most file systems have restrictions on the length of filenames. In some file systems, filenames are not case sensitive. Most modern file systems allow filenames to contain a wide range of characters from the Unicode character set. However, they may have restrictions on the use of certain special characters, disallowing them within filenames.
Exchangeable image file format (Exif) is a standard that specifies the formats for images and ancillary tags used by digital cameras and other systems handling image and sound files recorded by digital cameras. The specification uses the following existing file formats with the addition of specific metadata tags: JPEG discrete cosine transform for compressed image files, TIFF Rev. 6.0 for uncompressed image files, and RIFF WAV for audio files. It is not used in JPEG 2000 or GIF. The standard consists of the Exif image file specification and the Exif audio file specification. The Japan Electronic Industries Development Association produced the initial definition of Exif. Version 2.1 of the specification is dated 12 June 1998. JEITA established Exif version 2.2, dated 20 February 2002 and released in April 2002. Version 2.21 is dated 11 July 2003, but was released in September 2003 following the release of DCF 2.0. The latest version, 2.3, released on 26 April 2010 and revised in May 2013, was jointly formulated by JEITA and CIPA. Exif is supported by almost all camera manufacturers.
The metadata tags defined in the Exif standard cover a broad spectrum: date and time information, which digital cameras save in the metadata; camera settings, including static information such as the camera model and make, and information that varies with each image such as orientation, shutter speed, focal length, metering mode, and ISO speed; a thumbnail for previewing the picture on the camera's LCD screen, in file managers, or in photo manipulation software; descriptions; and copyright information. The Exif tag structure is borrowed from TIFF files. For several image-specific properties, there is a large overlap between the tags defined in the TIFF, Exif, TIFF/EP, and DCF standards. For descriptive metadata, there is an overlap between Exif, the IPTC Information Interchange Model, and XMP, which can all be embedded in a JPEG file; the Metadata Working Group has guidelines on mapping tags between these standards. When Exif is employed for JPEG files, the Exif data are stored in one of JPEG's defined utility Application Segments, APP1, which in effect holds an entire TIFF file within.
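The APP1 embedding can be sketched with a marker walk over a hand-built JPEG fragment; the sample bytes below are a hypothetical minimal file (SOI, one APP1 segment, EOI), not output from any real camera:

```python
import struct

# A minimal sketch of locating the Exif APP1 segment in a JPEG stream.
# The APP1 payload starts with "Exif\0\0", followed by a TIFF header
# (here little-endian "II", magic number 42, first-IFD offset 8).

tiff_header = b"II" + struct.pack("<H", 42) + struct.pack("<I", 8)
payload = b"Exif\x00\x00" + tiff_header
app1 = b"\xff\xe1" + struct.pack(">H", len(payload) + 2) + payload
jpeg = b"\xff\xd8" + app1 + b"\xff\xd9"  # SOI + APP1 + EOI

def find_exif(data):
    """Walk JPEG marker segments; return the TIFF blob inside the
    Exif APP1 segment, or None if there is no such segment."""
    pos = 2  # skip the 2-byte SOI marker
    while pos + 4 <= len(data) and data[pos] == 0xFF:
        marker = data[pos + 1]
        length = struct.unpack(">H", data[pos + 2:pos + 4])[0]
        if marker == 0xE1 and data[pos + 4:pos + 10] == b"Exif\x00\x00":
            return data[pos + 10:pos + 2 + length]  # the embedded TIFF
        pos += 2 + length  # segment length includes its own 2 bytes
    return None

print(find_exif(jpeg)[:2])  # b'II' -- little-endian TIFF byte order
```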
When Exif is employed in TIFF files, the TIFF Private Tag 0x8769 defines a sub-Image File Directory (IFD) that holds the Exif-specified TIFF tags. In addition, Exif defines a Global Positioning System sub-IFD using the TIFF Private Tag 0x8825, holding location information, and an "Interoperability IFD" specified within the Exif sub-IFD, using the Exif tag 0xA005. Formats specified in the Exif standard are defined as folder structures that are based on Exif-JPEG and recording formats for memory; when these formats are used as Exif/DCF files together with the DCF specification, their scope covers the devices, recording media, and application software that handle them. The Exif format has standard tags for location information. As of 2014, many cameras and mobile phones have a built-in GPS receiver that stores the location information in the Exif header when a picture is taken. Some other cameras have a separate GPS receiver that fits into the hot shoe. Recorded GPS data can be added to any digital photograph on a computer, either by correlating the time stamps of the photographs with a GPS record from a hand-held GPS receiver or manually by using a map or mapping software.
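The time-stamp correlation described above can be sketched as follows; the track points and photo time are hypothetical, and a real tool would also account for time-zone offsets and camera clock drift:

```python
import bisect
from datetime import datetime

# Hypothetical GPS track: (timestamp, latitude, longitude), time-sorted.
track = [
    (datetime(2014, 6, 1, 12, 0, 0), 48.8584, 2.2945),
    (datetime(2014, 6, 1, 12, 5, 0), 48.8600, 2.2970),
    (datetime(2014, 6, 1, 12, 10, 0), 48.8620, 2.3000),
]
times = [t for t, _, _ in track]

def locate(photo_time):
    """Return the track point nearest in time to the photo's timestamp."""
    i = bisect.bisect_left(times, photo_time)
    candidates = track[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda p: abs(p[0] - photo_time))

# A photo taken at 12:04 is matched to the 12:05 track point.
print(locate(datetime(2014, 6, 1, 12, 4, 0))[1:])  # (48.86, 2.297)
```

The matched coordinates would then be written into the GPS sub-IFD tags mentioned above.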
The process of adding geographic information to a photograph is known as geotagging. Photo-sharing communities like Panoramio, locr, or Flickr allow their users to upload geocoded pictures or to add geolocation information online. Exif data are embedded within the image file itself. While many recent image manipulation programs recognize and preserve Exif data when writing to a modified image, this is not the case for most older programs. Many image gallery programs also recognize Exif data and optionally display it alongside the images. Software libraries, such as libexif for C, Adobe XMP Toolkit or Exiv2 for C++, Metadata Extractor for Java, PIL/Pillow for Python, or ExifTool for Perl, parse Exif data from files and read/write Exif tag values. The Exif format has a number of drawbacks, mostly relating to its use of legacy file structures. The derivation of Exif from the TIFF file structure, which uses offset pointers in the files, means that data can be spread anywhere within a file; software is liable to corrupt any pointers or corresponding data that it doesn't decode/encode.
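The offset-pointer structure that causes these problems can be sketched with a hand-built, hypothetical little-endian TIFF containing a single IFD entry, the Exif sub-IFD pointer (tag 0x8769): values larger than four bytes, and whole sub-IFDs, live elsewhere in the file and are reachable only through offsets.

```python
import struct

# Hand-built little-endian TIFF: 8-byte header, then a one-entry IFD
# whose single entry is the Exif sub-IFD pointer (tag 0x8769, LONG).
# All offsets are measured from the start of the TIFF data.
header = b"II" + struct.pack("<HI", 42, 8)       # byte order, magic, IFD offset
entry = struct.pack("<HHII", 0x8769, 4, 1, 26)   # tag, type, count, offset 26
ifd = struct.pack("<H", 1) + entry + struct.pack("<I", 0)  # count, entry, next=0
tiff = header + ifd                              # a sub-IFD would start at 26

def read_ifd(data, offset):
    """Parse one IFD; return {tag: (type, count, value_or_offset)}."""
    (n,) = struct.unpack_from("<H", data, offset)
    entries = {}
    for i in range(n):
        tag, typ, cnt, val = struct.unpack_from("<HHII", data, offset + 2 + 12 * i)
        entries[tag] = (typ, cnt, val)
    return entries

ifd0 = read_ifd(tiff, 8)
print(ifd0[0x8769])  # (4, 1, 26): LONG, count 1, sub-IFD at offset 26
```

An editor that rewrites such a file without understanding every pointer can silently leave offsets dangling, which is exactly the corruption risk described above.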
For this reason, most image editors remove or damage the Exif metadata to some extent upon saving. The standard defines a MakerNote tag, which allows camera manufacturers to place any custom-format metadata in the file; this is used by camera manufacturers to store camera settings not listed in the Exif standard, such as shooting modes, post-processing settings, serial number, focusing modes, etc. As the tag contents are proprietary and manufacturer-specific, it can be difficult to retrieve this information from an image or to properly preserve it when rewriting an image. Some manufacturers encrypt portions of the information. Exif is often used in images created by scanners, but the standard makes no provisions for any scanner-specific information. Photo manipulation software sometimes fails to update the embedded thumbnail after an editing operation.