The Eastman Kodak Company is an American technology company that produces camera-related products, with its historical basis in photography. The company is headquartered in Rochester, New York, and incorporated in New Jersey. Kodak provides packaging, functional printing, graphic communications and professional services for businesses around the world. Its main business segments are Print Systems, Enterprise Inkjet Systems, Micro 3D Printing and Packaging, Software and Solutions, and Consumer and Film. It is best known for photographic film products. Kodak was founded by George Eastman and Henry A. Strong on September 4, 1888. During most of the 20th century, Kodak held a dominant position in photographic film; the company's ubiquity was such that its "Kodak moment" tagline entered the common lexicon to describe a personal event that demanded to be recorded for posterity. Kodak began to struggle financially in the late 1990s as a result of the decline in sales of photographic film and its slowness in transitioning to digital photography, despite having developed the first self-contained digital camera.
As part of a turnaround strategy, Kodak began to focus on digital photography and digital printing, and attempted to generate revenues through aggressive patent litigation. In January 2012, Kodak filed for Chapter 11 bankruptcy protection in the United States Bankruptcy Court for the Southern District of New York. In February 2012, Kodak announced that it would stop making digital cameras, pocket video cameras and digital picture frames and focus on the corporate digital imaging market. Digital cameras are still sold under the Kodak brand by JK Imaging Ltd under an agreement with Kodak. In August 2012, Kodak announced its intention to sell its photographic film, commercial scanners and kiosk operations, but not its motion picture film operations, as a measure to emerge from bankruptcy. In January 2013, the court approved financing for Kodak to emerge from bankruptcy by mid-2013. Kodak sold many of its patents for $525 million to a group of companies led by Intellectual Ventures and RPX Corporation.
On September 3, 2013, the company emerged from bankruptcy, having shed its large legacy liabilities and exited several businesses. Personalized Imaging and Document Imaging are now part of Kodak Alaris, a separate company owned by the UK-based Kodak Pension Plan. From the company's founding by George Eastman in 1888, Kodak followed the razor-and-blades strategy of selling inexpensive cameras and making large margins from consumables – film and paper. As late as 1976, Kodak commanded 90% of film sales and 85% of camera sales in the U.S. Japanese competitor Fujifilm entered the U.S. market with lower-priced film and supplies, but Kodak did not believe that American consumers would desert its brand. Kodak passed on the opportunity to become the official film of the 1984 Los Angeles Olympics; Fuji won those sponsorship rights, opened a film plant in the U.S., and its aggressive marketing and price cutting began taking market share from Kodak. Fuji went from a 10% share in the early 1990s to 17% in 1997. Fuji also made headway into the professional market with specialty transparency films such as Velvia and Provia, which competed with Kodak's signature professional product, Kodachrome, but used the more economical and common E-6 processing machines that were standard in most processing labs, rather than the dedicated machines required by Kodachrome.
Fuji's films soon found a competitive edge in higher-speed negative films with a tighter grain structure. In May 1995, Kodak filed a petition with the US Commerce Department under Section 301 of the Trade Act, arguing that its poor performance in the Japanese market was a direct result of unfair practices adopted by Fuji; the complaint was lodged by the United States with the World Trade Organization. On January 30, 1998, the WTO announced a "sweeping rejection of Kodak's complaints" about the film market in Japan. Kodak's financial results for the year ending December 1997 showed that the company's revenues dropped from $15.97 billion in 1996 to $14.36 billion in 1997, a fall of more than 10%. Kodak's market share in the United States declined from 80.1% to 74.7%, a one-year drop of more than five percentage points that had observers suggesting Kodak was slow to react to changes and had underestimated its rivals. From the 1970s, both Fuji and Kodak recognized the upcoming threat of digital photography, and both sought diversification as a mitigation strategy; Fuji was more successful at diversification.
Although Kodak developed a digital camera in 1975, the first of its kind, the product was dropped for fear it would threaten Kodak's photographic film business. In the 1990s, Kodak planned a decade-long journey to move to digital technology. CEO George M. C. Fisher reached out to new consumer merchandisers: Apple's pioneering QuickTake consumer digital cameras, introduced in 1994, had the Apple label but were produced by Kodak, and Kodak's own DC-20 and DC-25 launched in 1996. Overall, though, there was little implementation of the new digital strategy. Kodak's core business faced no pressure from competing technologies, and as Kodak executives could not fathom a world without traditional film, there was little incentive to deviate from that course. Consumers gradually switched to the digital offerings from companies such as Sony. In 2001 film sales dropped, which Kodak attributed to the financial shocks caused by the September 11 attacks. Executives hoped that Kodak might be able to slow the shift to digital photography.
Range encoding is an entropy coding method defined by G. Nigel N. Martin in a 1979 paper, which rediscovered the FIFO arithmetic code first introduced by Richard Clark Pasco in 1976. Given a stream of symbols and their probabilities, a range coder produces a space-efficient stream of bits to represent these symbols and, given the stream and the probabilities, a range decoder reverses the process. Range coding is very similar to arithmetic coding, except that encoding is done with digits in any base instead of with bits, so it is faster when using larger bases (e.g. bytes) at a small cost in compression efficiency. After the expiration of the first arithmetic coding patent, range encoding appeared to be free of patent encumbrances; this drove interest in the technique in the open-source community. Since that time, patents on various well-known arithmetic coding techniques have also expired. Range encoding conceptually encodes all the symbols of the message into one number, unlike Huffman coding, which assigns each symbol a bit pattern and concatenates all the bit patterns together.
Thus range encoding can achieve greater compression ratios than the one-bit-per-symbol lower bound on Huffman encoding, and it does not suffer the inefficiencies that Huffman does when dealing with probabilities that are not exact powers of two. The central concept behind range encoding is this: given a large-enough range of integers and a probability estimation for the symbols, the initial range can easily be divided into sub-ranges whose sizes are proportional to the probability of the symbol they represent. Each symbol of the message can then be encoded in turn, by reducing the current range down to just that sub-range which corresponds to the next symbol to be encoded. The decoder must have the same probability estimation the encoder used, which can either be sent in advance, derived from already-transferred data, or be built into the compressor and decompressor. When all symbols have been encoded, identifying the sub-range is enough to communicate the entire message. A single integer within the final range is sufficient to identify the sub-range, and it may not even be necessary to transmit the entire integer.
Suppose we want to encode the message "AABA<EOM>", where <EOM> is the end-of-message symbol. For this example it is assumed that the decoder knows that we intend to encode exactly five symbols in the base-10 number system (allowing for 10^5 different combinations of symbols within the range [0, 100000)) using the probability distribution {A: 0.60, B: 0.20, <EOM>: 0.20}. The encoder breaks down the range [0, 100000) into three sub-ranges:

A:     [     0,  60000)
B:     [ 60000,  80000)
<EOM>: [ 80000, 100000)

Since our first symbol is an A, it reduces our initial range down to [0, 60000). The second symbol choice leaves us with three sub-ranges of this range; we show them following the already-encoded 'A':

AA:     [     0,  36000)
AB:     [ 36000,  48000)
A<EOM>: [ 48000,  60000)

With two symbols encoded, our range is now [0, 36000) and our third symbol leads to the following choices:

AAA:     [     0,  21600)
AAB:     [ 21600,  28800)
AA<EOM>: [ 28800,  36000)

This time it is the second of our three choices that represents the message we want to encode, and our range becomes [21600, 28800). It may look harder to determine our sub-ranges in this case, but it is not: we can merely subtract the lower bound from the upper bound to determine that there are 7200 numbers in our range.
Adding back the lower bound gives us our ranges:

AABA:     [21600, 25920)
AABB:     [25920, 27360)
AAB<EOM>: [27360, 28800)

Finally, with our range narrowed down to [21600, 25920), we have just one more symbol to encode. Using the same technique as before for dividing up the range between the lower and upper bound, we find the three sub-ranges are:

AABAA:     [21600, 24192)
AABAB:     [24192, 25056)
AABA<EOM>: [25056, 25920)

And since <EOM> is our final symbol, our final range is [25056, 25920). Because all five-digit integers starting with "251" fall within our final range, "251" is one of the three-digit prefixes we could transmit that would unambiguously convey our original message. The central problem may appear to be selecting an initial range large enough that, no matter how many symbols we have to encode, we will always have a current range large enough to divide into non-zero sub-ranges. In practice, however, this is not a problem, because instead of starting with a very large range and gradually narrowing it down, the encoder works with a smaller range of numbers at any given time.
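The sub-range arithmetic above is mechanical enough to check in a few lines of code. The following Python sketch is illustrative only (the frequency tables and names are assumptions, not part of the article); it reproduces the narrowing of [0, 100000) for "AABA<EOM>" using integer arithmetic so the bounds match the worked example exactly.

    freq  = {"A": 6, "B": 2, "EOM": 2}   # symbol frequencies out of 10
    start = {"A": 0, "B": 6, "EOM": 8}   # cumulative starts of the sub-ranges
    TOTAL = 10

    low, size = 0, 100000                      # current range [low, low + size)
    for s in ["A", "A", "B", "A", "EOM"]:
        low = low + size * start[s] // TOTAL   # lower bound of the sub-range
        size = size * freq[s] // TOTAL         # width of the sub-range
        print(s, "->", (low, low + size))
    # Ends with EOM -> (25056, 25920); any integer in that range, e.g. 25100
    # (prefix "251"), unambiguously identifies the message.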
After some number of digits have been encoded, the leftmost digits of the range will not change. In the example above, after encoding just three symbols we already knew that our final result would start with "2". More digits are shifted in on the right as settled digits on the left are sent off; this is illustrated in the sketch following this paragraph. To finish off we may need to emit a few extra digits: the top digit of low is probably too small, so we need to increment it, but we have to make sure we don't increment it past low + range, so first we need to make sure range is large enough. One problem that can occur with such an Encode function is that range might become quite small while low and low + range still have differing first digits.
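A minimal Python sketch of that digit-shifting loop, under the same base-10, five-digit assumptions as the worked example (the names encode, low and rng are illustrative):

    TOP = 10000   # place value of the leading digit of a five-digit range

    low, rng = 0, 100000   # coder state: the current range is [low, low + rng)
    digits = []            # leading digits emitted so far

    def encode(start, size, total):
        """Narrow the range to the sub-range [start, start + size) out of total."""
        global low, rng
        rng //= total
        low += start * rng
        rng *= size
        # Once the leading digits of low and low + rng agree, that digit can
        # never change again: emit it and shift a new digit in on the right.
        while low // TOP == (low + rng) // TOP:
            digits.append(low // TOP)
            low = (low % TOP) * 10
            rng *= 10

    # Encode "AABA<EOM>" with frequencies A=6, B=2, <EOM>=2 out of 10.
    for start, size in [(0, 6), (0, 6), (6, 2), (0, 6), (8, 2)]:
        encode(start, size, 10)
    print(digits)   # [2, 5]; flushing low's (incremented) top digit then
                    # yields the "251" prefix from the worked example.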
University of Reading
The University of Reading is a public university located in Reading, England. It was founded in 1892 as University College, Reading, a University of Oxford extension college; the institution received the power to grant its own degrees in 1926 by Royal Charter from King George V and was the only university to receive such a charter between the two world wars. The university is categorised as a red brick university, reflecting its original foundation in the 19th century. It has four major campuses: in the United Kingdom, the campuses on London Road and Whiteknights are in the town of Reading itself and Greenlands is on the banks of the River Thames in Buckinghamshire, while a fourth campus is in Iskandar Puteri, Malaysia. The university has been arranged into 16 academic schools since 2016. Reading was ranked 35th in the UK amongst multi-faculty institutions for the quality of its research and 28th for its Research Power in the 2014 Research Excellence Framework. In total, 98% of the university's research was rated as 'internationally recognised', 78% as 'internationally excellent' and 27% as 'world leading'.
Reading was the first university to win a Queen's Award for Export Achievement, in 1989. The annual income of the institution for 2016–17 was £275.3 million, of which £35.4 million was from research grants and contracts, with an expenditure of £297.5 million. The university owes its first origins to the Schools of Art and Science established in Reading in 1860 and 1870. In 1892 the college at Reading was founded as an extension college by Christ Church, a college of the University of Oxford; the first president was the geographer Sir Halford John Mackinder. The Schools of Art and Science were transferred to the new college by Reading Town Council in the same year, and the new college received its first treasury grant in 1901. Three years later it was given a site, now the university's London Road campus, by the Palmer family of Huntley & Palmers fame; the same family supported the opening of Wantage Hall in 1908 and of the Research Institute in Dairying in 1912. The college first petitioned for a royal charter in 1920 but was unsuccessful at that time.
However, a second petition, in 1925, was successful, and the charter was granted on 17 March 1926. With the charter, the college became the University of Reading, the only new university to be created in the United Kingdom between the two world wars. It was added to the Combined English Universities constituency in 1928, in time for the 1929 general election. In 1947 the university purchased Whiteknights Park, which was to become its principal campus. In 1984 the university started a merger with Bulmershe College of Higher Education, completed in 1989. In October 2006, the Senior Management Board proposed the closure of the Physics Department to future undergraduate applications; this was ascribed to financial reasons and a lack of alternative ideas, and caused considerable controversy, not least a debate in Parliament over the closure which prompted heated discussion of higher education issues in general. On 10 October the Senate voted to close the Department of Physics, a move confirmed by the Council on 20 November.
Other departments closed in recent years include Music, Sociology and Mechanical Engineering. The university council decided in March 2009 to close the School of Health and Social Care, even though its courses had been oversubscribed. In January 2008, the university announced its merger with Henley Management College to create the university's new Henley Business School, bringing together Henley's expertise in MBAs with the university's existing business school and ICMA Centre. The merger took formal effect on 1 August 2008, with the new business school split across the university's existing Whiteknights campus and its new Greenlands campus, which had housed Henley Management College. A restructuring of the university was announced in September 2009, which would bring together all the academic schools into three faculties: the Faculty of Science, the Faculty of Humanities and Social Sciences, and Henley Business School. The move was predicted to result in the loss of some jobs in the film and television department, which has since moved into a brand-new £11.5 million building on the Whiteknights campus.
In late 2009 it was announced that the London Road campus was to undergo a £30 million renovation, preparatory to becoming the new home of the university's Institute of Education. The Institute moved to its new home in January 2012. The refurbishment was funded by the sale of the adjoining site of Mansfield Hall, a former hall of residence, for demolition and replacement by private-sector student accommodation. The university is a lead sponsor of UTC Reading, a university technical college which opened in September 2013. In 2016 a move to reorganise the structure of the university provoked student protests, and on 21 March 2016 staff announced a vote of no confidence in the Vice-Chancellor Sir David Bell; 88% of those who voted backed the no-confidence motion. In 2019 The Guardian reported that the university was in "a financial and governance crisis" after reporting itself to regulators over a £121 million loan: the university is sole trustee of the charitable National Institute for Research in Dairying trust and, after selling trust land, had borrowed the £121 million proceeds from the trust, despite the potential conflict of interest in the decision-making.
Including this loan, the university has debts of £300 million, as well as having run an operating deficit of over £40 million over the past two years. The university maintains over 1.6 square kilometres of grounds across its four distinct campuses.
Lempel–Ziv–Storer–Szymanski (LZSS) is a lossless data compression algorithm, a derivative of LZ77, created in 1982 by James Storer and Thomas Szymanski. LZSS was described in the article "Data compression via textual substitution", published in the Journal of the ACM. LZSS is a dictionary encoding technique: it attempts to replace a string of symbols with a reference to a dictionary location of the same string. The main difference between LZ77 and LZSS is that in LZ77 the dictionary reference could actually be longer than the string it was replacing; in LZSS, such references are omitted and a literal is emitted instead. Furthermore, LZSS uses one-bit flags to indicate whether the next chunk of data is a literal or a reference to an offset/length pair. Below is the beginning of Dr. Seuss's Green Eggs and Ham, with character numbers at the beginning of lines for convenience. Green Eggs and Ham is an apt example to illustrate LZSS compression because the book itself only contains 50 unique words despite having a word count of 170; words are thus repeated, though not in succession.
0: I am Sam
9:
10: Sam I am
19:
20: That Sam-I-am!
35: That Sam-I-am!
50: I do not like
64: that Sam-I-am!
79:
80: Do you like green eggs and ham?
112:
113: I do not like them, Sam-I-am.
143: I do not like green eggs and ham.

This text takes 177 bytes in uncompressed form. Assuming a break-even point of 2 bytes (and one-byte newlines), this text compressed with LZSS becomes 94 bytes long; the literal portions are shown below, with the offset/length pairs omitted:

0: I am Sam
9:
10:
16:
17: That-I-am! I do not like
45: t
49: Do you green eggs and ham?
78: them..

Note: this does not include the 12 bytes of flags indicating whether the next chunk of text is a pointer or a literal. Adding them, the text becomes 106 bytes long, which is still shorter than the original 177 bytes. Many popular archivers, such as PKZip, ARJ, RAR, ZOO and LHarc, use LZSS rather than LZ77 as the primary compression algorithm. Most implementations stem from 1989 code by Haruhiko Okumura. Version 4 of the Allegro library can encode and decode an LZSS format, but the feature was cut from version 5; the Game Boy Advance BIOS can decode a modified LZSS format.
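To make the flag-plus-reference scheme concrete, here is a minimal Python sketch of a greedy LZSS encoder. It is only a sketch: the window size, minimum match length and tagged-tuple token format are illustrative choices, not those of PKZip, Okumura's code or any other implementation mentioned above.

    def lzss_encode(data: bytes, window: int = 255, min_match: int = 3):
        """Greedy LZSS: emit ('lit', byte) and ('ref', offset, length) tokens."""
        tokens, i = [], 0
        while i < len(data):
            best_len = best_off = 0
            for j in range(max(0, i - window), i):   # candidate match starts
                length = 0
                # Overlapping matches are allowed, as in LZ77-family coders.
                while i + length < len(data) and data[j + length] == data[i + length]:
                    length += 1
                if length > best_len:
                    best_len, best_off = length, i - j
            if best_len >= min_match:           # long enough to pay for itself
                tokens.append(("ref", best_off, best_len))
                i += best_len
            else:                               # too short: emit a literal
                tokens.append(("lit", data[i]))
                i += 1
        return tokens

    print(lzss_encode(b"I am Sam Sam I am"))

A real encoder would pack the one-bit flags and the offset/length pairs into a bitstream; emitting tagged tuples keeps the sketch readable.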
Asymmetric numeral systems
Asymmetric numeral systems (ANS) is a family of entropy encoding methods introduced by Jarosław Duda of Jagiellonian University, used in data compression since 2014 due to its improved performance compared to previously used methods, being up to 30 times faster. ANS combines the compression ratio of arithmetic coding with a processing cost similar to that of Huffman coding. In the tabled ANS (tANS) variant, this is achieved by constructing a finite-state machine to operate on a large alphabet without using multiplication. Among others, ANS is used in the Facebook Zstandard compressor, the Apple LZFSE compressor, the Google Draco 3D compressor and PIK image compressor, the CRAM DNA compressor from the SAMtools utilities, and the Dropbox DivANS compressor; it was also considered for the AV1 open video coding format from the Alliance for Open Media. The basic idea is to encode information into a single natural number x. In the standard binary number system, we can add a bit s ∈ {0, 1} of information to x by appending s at the end of x, which gives us x′ = 2x + s.
For an entropy coder, this is optimal if Pr(0) = Pr(1) = 1/2. ANS generalizes this process for an arbitrary set of symbols s ∈ S with an accompanying probability distribution (p_s)_{s∈S}. In ANS, if x′ is the result of appending the information from s to x, then x′ ≈ x/p_s. Equivalently, log₂(x′) ≈ log₂(x) + log₂(1/p_s), where log₂(x) is the number of bits of information stored in the number x and log₂(1/p_s) is the number of bits contained in the symbol s. For the encoding rule, the set of natural numbers is split into disjoint subsets corresponding to the different symbols – like into even and odd numbers – but with densities corresponding to the probability distribution of the symbols to encode. To add the information from symbol s into the information already stored in the current number x, we go to the number x′ = C(s, x) ≈ x/p_s, which is the position of the x-th appearance in the s-th subset. There are alternative ways to apply this in practice – direct mathematical formulas for the encoding and decoding steps, or the entire behaviour can be put into a table (the tANS variant).
Renormalization is used to prevent x from growing to infinity, by transferring accumulated bits to or from the bitstream. Imagine you want to encode a sequence of 1,000 zeros and ones, which would take 1000 bits to store directly. However, if it is somehow known that the sequence contains only 1 zero and 999 ones, it is sufficient to encode the zero's position, which requires only ⌈log₂(1000)⌉ = 10 bits here instead of the original 1000 bits. Such length-n sequences containing pn zeros and (1−p)n ones, for some probability p ∈ (0, 1), are called combinations. Using Stirling's approximation, their asymptotic number is ≈ 2^(n·h(p)) for large n, where h(p) = −p·log₂(p) − (1−p)·log₂(1−p) is the Shannon entropy. Hence, to choose one such sequence we need approximately n·h(p) bits.
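A direct, if deliberately inefficient, way to see the encoding rule x′ = C(s, x) at work is to label the natural numbers with symbols at densities matching their probabilities and then scan for the x-th appearance. The Python sketch below does exactly that for a binary alphabet; the labeling rule and the names symbol_of, encode and decode are illustrative assumptions, not taken from the article.

    from fractions import Fraction

    p = Fraction(3, 10)   # Pr(symbol 1); Fraction keeps the labeling exact

    def symbol_of(n: int) -> int:
        """Label n with 0 or 1 so that a fraction p of all numbers carry a 1."""
        return int((n + 1) * p) - int(n * p)

    def encode(s: int, x: int) -> int:
        """C(s, x): position of the x-th appearance (0-indexed) of symbol s."""
        n, seen = 0, -1
        while True:
            if symbol_of(n) == s:
                seen += 1
                if seen == x:
                    return n
            n += 1

    def decode(n: int):
        """Invert C: recover (s, x) such that encode(s, x) == n."""
        s = symbol_of(n)
        return s, sum(1 for m in range(n) if symbol_of(m) == s)

    x = 1                       # initial state
    for s in [1, 0, 0, 1, 0]:   # message symbols
        x = encode(s, x)        # x grows by roughly a factor 1/p_s per step
    while x > 1:                # pop the symbols back out, in reverse order
        s, x = decode(x)
        print(s)

Real implementations replace the scans with closed-form formulas or precomputed tables (tANS) and renormalize x into a fixed interval; note also that ANS is last-in-first-out, so the decoder recovers the symbols in reverse order.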
Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic data compression methods which are used for other digital data. Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and for medical imaging, technical drawings, clip art, or comics. Lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy methods are suitable for natural images such as photographs in applications where minor loss of fidelity is acceptable to achieve a substantial reduction in bit rate. Lossy compression that produces negligible differences may be called visually lossless.

Methods for lossless image compression include:
- Run-length encoding – used as the default method in PCX and as one of several possible methods in BMP, TGA and TIFF
- Area image compression
- DPCM and predictive coding
- Entropy encoding
- Adaptive dictionary algorithms such as LZW – used in GIF and TIFF
- DEFLATE – used in PNG, MNG and TIFF
- Chain codes

Methods for lossy compression include:
- Reducing the color space to the most common colors in the image. The selected colors are specified in the colour palette in the header of the compressed image, and each pixel just references the index of a color in the palette; this method can be combined with dithering to avoid posterization.
- Chroma subsampling. This takes advantage of the fact that the human eye perceives spatial changes of brightness more sharply than those of color, by averaging or dropping some of the chrominance information in the image.
- Transform coding. This is the most commonly used method. In particular, a Fourier-related transform such as the Discrete Cosine Transform (DCT) is used (N. Ahmed, T. Natarajan and K. R. Rao, "Discrete Cosine Transform," IEEE Trans. Computers, 90–93, Jan. 1974); the DCT is sometimes referred to as "DCT-II" in the context of a family of discrete cosine transforms. The more recently developed wavelet transform is also used extensively, followed by quantization and entropy coding.
- Fractal compression.

The best image quality at a given compression rate is the main goal of image compression; however, there are other important properties of image compression schemes. Scalability refers to a quality reduction achieved by manipulation of the bitstream or file.
Other names for scalability are progressive coding or embedded bitstreams. Despite its contrary nature, scalability may also be found in lossless codecs, usually in the form of coarse-to-fine pixel scans. Scalability is useful for previewing images while downloading them or for providing variable-quality access to e.g. databases. There are several types of scalability:
- Quality progressive or layer progressive: the bitstream successively refines the reconstructed image.
- Resolution progressive: first encode a lower image resolution, then the differences to higher resolutions.
- Component progressive: first encode the grey-scale version, then the color information.

Region-of-interest coding: certain parts of the image are encoded with higher quality than others; this may be combined with scalability. Meta information: compressed data may contain information about the image which may be used to categorize, search, or browse images; such information may include color and texture statistics, small preview images, and author or copyright information. Processing power: compression algorithms require different amounts of processing power to decode.
Some high-compression algorithms require high processing power. The quality of a compression method is often measured by the peak signal-to-noise ratio (PSNR), which measures the amount of noise introduced through a lossy compression of the image; however, the subjective judgment of the viewer is also regarded as an important measure, perhaps the most important one.
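As a minimal illustration of the first lossless method listed above, run-length encoding of a row of pixels can be sketched in a few lines of Python; the (count, value) token format is an illustrative choice, not the on-disk format of PCX, BMP, TGA or TIFF.

    def rle_encode(row: bytes) -> list:
        """Run-length encode one row of pixel values as (count, value) pairs."""
        runs = []
        for value in row:
            if runs and runs[-1][1] == value and runs[-1][0] < 255:
                runs[-1] = (runs[-1][0] + 1, value)   # extend the current run
            else:
                runs.append((1, value))               # start a new run
        return runs

    def rle_decode(runs) -> bytes:
        return b"".join(bytes([value]) * count for count, value in runs)

    row = bytes([0, 0, 0, 0, 255, 255, 7])
    assert rle_decode(rle_encode(row)) == row
    print(rle_encode(row))   # [(4, 0), (2, 255), (1, 7)]

The 255 cap on the run length mirrors the single-byte counters used by simple formats; longer runs simply split into several tokens.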