Rhythm and Hues Studios
Rhythm & Hues is an American visual effects and animation company that received the Academy Award for Best Visual Effects in 1995 for Babe, in 2008 for The Golden Compass, and in 2013 for Life of Pi. It has also received four Scientific and Technical Academy Awards. The company filed for Chapter 11 bankruptcy in 2013. Rhythm & Hues Studios was established in Los Angeles, California in 1987 by former employees of Robert Abel and Associates. The company uses its own proprietary software for its photo-realistic character animation and visual effects, as well as for more stylized work. In 1999, Rhythm & Hues Studios acquired the visual effects house VIFX from 20th Century Fox. By 2012, the company had become a global one, with offices and artists in India, Malaysia and Taiwan. Director Ang Lee approached Rhythm & Hues in August 2009 to discuss a planned film adaptation of the fantasy novel Life of Pi. R&H VFX Supervisor Bill Westenhofer noted that Lee "knew we had done the lion in the first Narnia movie.
He asked, 'Does a digital character look more or less real in 3D?' We looked at each other and thought, a pretty good question." He stated that during these meetings, Lee said, "'I look forward to making art with you.' This was for me one of the most rewarding things I've worked on and the first chance to combine art with VFX. Every shot was artistic exploration; to make the ocean a character and make it interesting we had to strive to make it as visually stunning as possible." Rhythm & Hues spent a year on research and development, "building upon its vast knowledge of CG animation" to develop the tiger. Artist Abdul Rahman in the Malaysian branch underscored the global nature of the effects process, saying that "the special thing about Life of Pi is that it was the first time we did something called remote rendering, where we engaged our cloud infrastructure in Taiwan called CAVE". The resulting film, Life of Pi, was released in theaters in November 2012 and was a critical and commercial success.
The British Film Institute's Sight & Sound magazine suggested that "Life of Pi can be seen as the film Rhythm & Hues has been building up to all these years, by taking things they learned from each production from Cats & Dogs to Yogi Bear, integrating their animals in different situations and environments, pushing them to do more, understanding how all of this can succeed both visually and dramatically." On February 11, 2013, Rhythm & Hues Studios filed for bankruptcy under Chapter 11, three months after Life of Pi was released. Around 254 people were laid off at that time. This led to a demonstration by nearly 500 VFX artists, who protested outside the 2013 Academy Awards, at which Rhythm & Hues was nominated for an Oscar for Life of Pi. Inside, during the Oscars, when R&H visual effects supervisor Bill Westenhofer brought up R&H during his acceptance speech for Life of Pi, the microphone was cut off as the music of Jaws took over. This started an uproar among many visual effects industry professionals, who changed their profile pictures on social media sites such as Facebook and Twitter to show the green key color, in order to raise awareness of general negative trends in the effects industry.
In addition, director Ang Lee was criticized by the community for not acknowledging the artists' work on the effects-laden film in his acceptance speech, despite thanking many other people, and for earlier having complained about the costs of visual effects. On March 29, 34x118 Holdings, LLC, an affiliate of Prana Studios, won the bidding for Rhythm & Hues in a bankruptcy auction; the sale was valued at about $30 million. The Malaysian unit was not part of the sale and became an independent stand-alone entity, now known as Tau Films. In May, the El Segundo headquarters building was sold for $25 million to real estate developers, who planned to turn it into a campus for companies. In February 2014, Christina Lee Storm and Scott Leberecht released the documentary Life After Pi on YouTube; it details both the reasons behind the bankruptcy and the general difficulties faced by the visual effects community. It contains a number of interviews with former Rhythm & Hues employees, including co-founders John Hughes and Keith Goldfarb.
In it, Bill Westenhofer discusses his experience at the Oscars as he accepted the Visual Effects award for Rhythm & Hues' work on Life of Pi.

Selected filmography:
2019: Hellboy
2018: Slender Man
2017: Midnight, Texas; The Mist
2015: Fear the Walking Dead; The SpongeBob Movie: Sponge Out of Water; Alvin and the Chipmunks: The Road Chip
2014: 300: Rise of an Empire; Seventh Son; Winter's Tale; Tammy; Into the Storm; X-Men: Days of Future Past
2013: Percy Jackson: Sea of Monsters; Machete Kills; R.I.P.D.; Grown Ups 2; The Secret Life of Walter Mitty
2012: The Bourne Legacy; Big Miracle; Django Unchained; Chronicle; Red Dawn; The Hunger Games; This Is 40; Life of Pi; Snow White and the Huntsman
2011: Alvin and the Chipmunks: Chipwrecked; Hop; Moneyball; The Cabin in the Woods; Game of Thrones; Red Riding Hood; Mr. Popper's Penguins; X-Men: First Class
2010: The A-Team; Marmaduke; Charlie St. Cloud; Percy Jackson & the Olympians: The Lightning Thief; The Wolfman; Knight and Day; Little Fockers; Hot Tub Time Machine; Yogi Bear
2009: Aliens in the Attic; Ghosts of Girlfriends Past; Fast & Furious; The Time Traveler's Wife; Cirque du Freak: The Vampire's Assistant; State of Play; Land of the Lost; Alvin and the Chipmunks: The Squeakquel; Night at the Museum: Battle of the Smithsonian
2008: The Incredible Hulk; The Mummy: Tomb of the Dragon Emperor
2007: The Golden Compass; Live Free or Die Hard
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android had risen to around 70% usage by 2017. According to third-quarter 2016 data, Android on smartphones was dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a year-over-year decrease in market share of 5.2 percent, while other operating systems amounted to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems such as Solaris and Linux, as well as non-Unix-like ones such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to yield time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
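The difference between the two styles can be illustrated with a toy cooperative scheduler in Python, where each task must explicitly yield the processor; this is a simulation of the idea, not a real OS scheduler, and all names here are illustrative:

```python
from collections import deque

def task(name, steps):
    """A cooperative task: does one unit of work, then yields control."""
    for i in range(steps):
        # ... one time slice of work would happen here ...
        yield f"{name} step {i}"

def round_robin(tasks):
    """Minimal cooperative round-robin: each task runs until it yields."""
    queue = deque(tasks)
    trace = []
    while queue:
        t = queue.popleft()
        try:
            trace.append(next(t))   # give the task one time slice
            queue.append(t)         # requeue it behind the others
        except StopIteration:
            pass                    # task finished; drop it
    return trace

print(round_robin([task("A", 2), task("B", 2)]))
# → ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

A preemptive system differs precisely in that the `yield` is not voluntary: a timer interrupt forces the switch, so a task that never yields cannot monopolize the processor.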
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with it at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In OS, distributed, and cloud computing contexts, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines like PDAs with limited autonomy. They are able to operate with a limited number of resources and are compact and efficient by design. Windows CE and Minix 3 are examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri
In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation, so as to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems which include very small and very large real numbers and which require fast processing times. A number is, in general, represented to a fixed number of significant digits and scaled using an exponent in some fixed base. A number that can be represented exactly is of the following form: significand × base^exponent, where the significand is an integer, the base is an integer greater than or equal to two, and the exponent is an integer. For example: 1.2345 = 12345 (significand) × 10 (base)^−4 (exponent). The term floating point refers to the fact that a number's radix point can "float"; its position is indicated by the exponent component, so the floating-point representation can be thought of as a kind of scientific notation. A floating-point system can be used to represent, with a fixed number of digits, numbers of very different orders of magnitude: e.g. the distance between galaxies or the diameter of an atomic nucleus can be expressed with the same unit of length.
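The significand × base^exponent form can be checked with a tiny sketch; exact rational arithmetic via Python's `fractions` module is used here to sidestep binary rounding, and the function name is illustrative:

```python
from fractions import Fraction

def fp_value(significand, base, exponent):
    """Decode significand × base**exponent exactly (the form described above)."""
    return Fraction(significand) * Fraction(base) ** exponent

# 1.2345 represented as significand 12345, base 10, exponent -4:
print(fp_value(12345, 10, -4))         # → 2469/2000 (exactly 1.2345)
print(float(fp_value(12345, 10, -4)))  # → 1.2345
```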
The result of this dynamic range is that the numbers that can be represented are not uniformly spaced; the gap between consecutive representable numbers grows with the magnitude of the numbers. Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s the most commonly encountered representations are those defined by the IEEE. The speed of floating-point operations, measured in terms of FLOPS, is an important characteristic of a computer system for applications that involve intensive mathematical calculations. A floating-point unit is a part of a computer system specially designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number as a string of digits. There are several mechanisms. In common mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character there. If the radix point is not specified, the string implicitly represents an integer and the unstated radix point would be off the right-hand end of the string, next to the least significant digit.
In fixed-point systems, a position in the string is specified for the radix point. So a fixed-point scheme might use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345. In scientific notation, the given number is scaled by a power of 10, so that it lies within a certain range, typically between 1 and 10, with the radix point appearing after the first digit. The scaling factor, as a power of ten, is indicated separately at the end of the number. For example, the orbital period of Jupiter's moon Io is 152,853.5047 seconds, a value that would be represented in standard-form scientific notation as 1.528535047×10^5 seconds. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of: A signed digit string of a given length in a given base; this digit string is referred to as the significand, mantissa, or coefficient. The length of the significand determines the precision. The radix point position is assumed always to be somewhere within the significand: often just after or just before the most significant digit, or to the right of the rightmost digit.
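The 8-digit fixed-point scheme above can be sketched as follows; the digit counts are taken from the example, not from any standard:

```python
def fixed_point(digits, frac_digits=4):
    """Decode a decimal fixed-point string with `frac_digits` digits
    after the implied radix point (the "00012345" scheme above)."""
    return int(digits) / 10 ** frac_digits

print(fixed_point("00012345"))  # → 1.2345
```

The radix point never moves: every value in this scheme is an integer multiple of 0.0001, which is exactly the uniform spacing that floating point gives up in exchange for range.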
This article follows the convention that the radix point is set just after the most significant digit. A signed integer exponent. To derive the value of the floating-point number, the significand is multiplied by the base raised to the power of the exponent, equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent: to the right if the exponent is positive or to the left if the exponent is negative. Using base-10 as an example, the number 152,853.5047, which has ten decimal digits of precision, is represented as the significand 1,528,535,047 together with 5 as the exponent. To determine the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by 10^5 to give 1.528535047×10^5, or 152,853.5047. In storing such a number, the base need not be stored, since it is the same for the entire range of supported numbers and can thus be inferred. Symbolically, this final value is: s / b^(p−1) × b^e, where s is the significand (ignoring any implied decimal point), p is the precision (the number of digits in the significand), b is the base, and e is the exponent.
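The decoding formula s / b^(p−1) × b^e can be sketched directly; the function name and argument order here are illustrative, and `fractions` keeps the arithmetic exact:

```python
from fractions import Fraction

def fp_decode(s, p, b, e):
    """Value of a floating-point number with integer significand s,
    precision p (digits in s), base b, and exponent e, under the
    convention that the radix point sits just after the first digit:
        value = s / b**(p-1) * b**e
    """
    return Fraction(s) / b ** (p - 1) * Fraction(b) ** e

# The Io orbital-period example: significand 1,528,535,047, exponent 5
v = fp_decode(1528535047, 10, 10, 5)
print(float(v))  # → 152853.5047
```

Dividing by b^(p−1) is what places the implied radix point after the first of the p digits; the multiplication by b^e then "floats" it to its true position.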
CMYK color model
The CMYK color model is a subtractive color model, used in color printing, and is also used to describe the printing process itself. CMYK refers to the four inks used in some color printing: cyan, magenta, yellow, and key (black). The CMYK model works by partially or entirely masking colors on a lighter, usually white, background. The ink reduces the light that would otherwise be reflected; such a model is called subtractive because inks "subtract" the colors red, green and blue from white light. White light minus red leaves cyan, white light minus green leaves magenta, and white light minus blue leaves yellow. In additive color models, such as RGB, white is the "additive" combination of all primary colored lights, while black is the absence of light. In the CMYK model, it is the opposite: white is the natural color of the paper or other background, while black results from a full combination of colored inks. To save cost on ink, and to produce deeper black tones, unsaturated and dark colors are produced by using black ink instead of the combination of cyan, magenta, and yellow. With CMYK printing, halftoning allows for less than full saturation of the primary colors.
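The "white minus a primary" relationship amounts to taking the complement of the additive primaries. A minimal sketch on a 0-to-1 ink scale (idealized inks; real inks deviate from this, as discussed below):

```python
def cmy_from_rgb(r, g, b):
    """Ideal subtractive complement: each ink removes one additive
    primary from white, so C = 1-R, M = 1-G, Y = 1-B (0..1 scale)."""
    return (1 - r, 1 - g, 1 - b)

print(cmy_from_rgb(1, 0, 0))  # red  → (0, 1, 1): magenta + yellow ink
print(cmy_from_rgb(0, 1, 1))  # cyan → (1, 0, 0): cyan ink only
```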
Magenta printed with a 20% halftone, for example, produces a pink color, because the eye perceives the tiny magenta dots on the large white paper as lighter and less saturated than the color of pure magenta ink. Without halftoning, the three primary process colors could be printed only as solid blocks of color, and therefore could produce only seven colors: the three primaries themselves, plus three secondary colors produced by layering two of the primaries (cyan and yellow produce green, cyan and magenta produce blue, and yellow and magenta produce red), plus the black that results from layering all three of them. With halftoning, a full continuous range of colors can be produced. To improve print quality and reduce moiré patterns, the screen for each color is set at a different angle; the angles used depend on how many colors are printed and the preference of the press operator. The "black" generated by mixing commercially practical cyan, magenta, and yellow inks is unsatisfactory, so four-color printing uses black ink in addition to the subtractive primaries.
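The perceived lightening from halftoning can be approximated as an area-weighted average of ink and paper colors. This linear mix is a deliberate simplification of real ink-on-paper optics, and the function name is illustrative:

```python
def halftone_tint(ink_rgb, coverage, paper_rgb=(1.0, 1.0, 1.0)):
    """Average color the eye perceives when `coverage` (0..1) of the
    paper area is covered by dots of `ink_rgb`: a crude area-weighted
    mix of ink color and paper color."""
    return tuple(coverage * i + (1 - coverage) * p
                 for i, p in zip(ink_rgb, paper_rgb))

magenta = (1.0, 0.0, 1.0)
print(halftone_tint(magenta, 0.20))  # → (1.0, 0.8, 1.0), a light pink
```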
Common reasons for using black ink include: In traditional preparation of color separations, a red keyline on the black line art marked the outline of solid or tint color areas. In some cases a black keyline was used when it served as both a color indicator and an outline to be printed in black. Because the black plate contained the keyline, the K in CMYK represents the keyline or black plate, sometimes called the key plate. Text is typically printed in black and includes fine detail; to reproduce text or other finely detailed outlines without slight blurring, using three inks would require impractically accurate registration. A combination of 100% cyan, magenta, and yellow inks soaks the paper with ink, making it slower to dry, causing bleeding, or weakening the paper so much that it tears. Although a combination of 100% cyan, magenta, and yellow inks should, in theory, absorb the entire visible spectrum of light and produce a perfect black, practical inks fall short of their ideal characteristics and the result is a dark muddy color that does not quite appear black.
Adding black ink absorbs more light and yields much better blacks. Using black ink is also less expensive than using the corresponding amounts of colored inks. When a particularly dark area is desirable, a colored or gray CMY "bedding" is applied first, then a full black layer is applied on top, making a rich, deep black. A black made with just CMY inks is sometimes called a composite black. The amount of black used to replace amounts of the other inks is variable, and the choice depends on the technology and ink in use. Processes called under color removal, under color addition, and gray component replacement are used to decide on the final mix. CMYK or process color printing is contrasted with spot color printing, in which specific colored inks are used to generate the colors appearing on paper. Some printing presses are capable of printing with both four-color process inks and additional spot color inks at the same time. High-quality printed materials, such as marketing brochures and books, may include photographs requiring process-color printing, other graphic effects requiring spot colors, and finishes such as varnish, which enhances the glossy appearance of the printed piece.
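A naive sketch of the gray-component-replacement idea: the gray part shared by the three chromatic inks is pulled out and printed as K instead. Real separation pipelines use measured ink characteristics and ICC profiles; the formula and `gcr` parameter here are purely illustrative:

```python
def cmyk_from_rgb(r, g, b, gcr=1.0):
    """Naive RGB → CMYK with gray component replacement: move the
    gray shared by C, M, Y into the K channel. `gcr` is the fraction
    of the gray component replaced by black ink (illustrative only)."""
    c, m, y = 1 - r, 1 - g, 1 - b
    k = gcr * min(c, m, y)              # black generation
    if k == 1.0:
        return (0.0, 0.0, 0.0, 1.0)     # pure black: K ink only
    # remove the replaced gray from each channel and rescale
    return tuple((x - k) / (1 - k) for x in (c, m, y)) + (k,)

print(cmyk_from_rgb(0.0, 0.0, 0.0))  # black    → (0.0, 0.0, 0.0, 1.0)
print(cmyk_from_rgb(1.0, 0.0, 0.0))  # pure red → (0.0, 1.0, 1.0, 0.0)
```

With `gcr=1.0` every neutral gray is printed with black ink alone; lowering `gcr` leaves some composite-black "bedding" under the black layer, as described above.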
CMYK process printing has a comparatively small color gamut. Processes such as Pantone's proprietary six-color Hexachrome expand the gamut. Light, saturated colors often cannot be created with CMYK, and light colors in general may make the halftone pattern visible. Using a CcMmYK process, with the addition of light cyan and light magenta inks to CMYK, can solve these problems; such a process is used by many inkjet printers, including desktop models. Comparisons between RGB displays and CMYK prints can be difficult, since the color reproduction technologies and properties are different. A computer monitor mixes shades of red, green, and blue light to create color pictures. A CMYK printer instead uses light-absorbing cyan, magenta, and yellow inks, whose colors are mixed using dithering, halftoning, or some other optical technique. Similar to monitors, the inks used in printing produ
Digital image processing
In computer science, digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing: it allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions, digital image processing may be modeled in the form of multidimensional systems. Many of the techniques of digital image processing, or digital picture processing as it was often called, were developed in the 1960s at the Jet Propulsion Laboratory, the Massachusetts Institute of Technology, Bell Laboratories, the University of Maryland, and a few other research facilities, with application to satellite imagery, wire-photo standards conversion, medical imaging, character recognition, and photograph enhancement. The cost of processing was high, however, with the computing equipment of that era.
That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. Images could then be processed in real time for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and computationally intensive operations. With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing, and is used because it is not only the most versatile method but also the cheapest. Digital image processing technology for medical applications was inducted into the Space Foundation Space Technology Hall of Fame in 1994. Digital image processing allows the use of much more complex algorithms, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means. In particular, digital image processing is the only practical technology for: classification, feature extraction, multi-scale signal analysis, pattern recognition, and projection. Some techniques which are used in digital image processing include: anisotropic diffusion, hidden Markov models, image editing, image restoration, independent component analysis, linear filtering, neural networks, partial differential equations, pixelation, principal components analysis, self-organizing maps, and wavelets. Digital filters are used to blur and sharpen digital images.
Filtering can be performed in the spatial domain by convolution with specially designed kernels, or in the frequency domain by masking specific frequency regions. Images are padded before being transformed to Fourier space, and the choice of padding matters: a highpass filter shows extra edges when the image is zero-padded, compared with repeated-edge padding. Affine transformations enable basic image transformations, including scaling, rotation, translation, and shearing. To apply an affine matrix to an image, the image is converted to a matrix in which each entry corresponds to the pixel intensity at that location. Each pixel's location can then be represented as a vector indicating the coordinates of that pixel in the image, where x and y are the row and column of the pixel in the image matrix. This allows the coordinate to be multiplied by an affine-transformation matrix, which gives the position that the pixel value will be copied to in the output image.
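A spatial-domain highpass filter of the kind described above can be sketched in plain Python; the kernel and the tiny test image are illustrative, and no imaging library is assumed:

```python
def convolve2d(img, kernel):
    """Spatial-domain filtering: slide `kernel` over `img` (lists of
    lists of intensities), using repeated-edge padding at the borders."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for j in range(kh):
                for i in range(kw):
                    # clamp coordinates: repeated-edge padding
                    yy = min(max(y + j - kh // 2, 0), h - 1)
                    xx = min(max(x + i - kw // 2, 0), w - 1)
                    acc += img[yy][xx] * kernel[j][i]
            out[y][x] = acc
    return out

# A Laplacian-style highpass kernel: its entries sum to zero, so flat
# regions map to 0 and only edges produce a response.
highpass = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]
flat = [[5.0] * 4 for _ in range(4)]
print(convolve2d(flat, highpass)[1][1])  # → 0.0 (no edges in a flat image)
```

Replacing the clamped indexing with a hard zero outside the image reproduces the zero-padding artifact mentioned above: the border then looks like an artificial edge and the filter responds to it.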
However, to allow transformations that require translation, 3-dimensional homogeneous coordinates are needed. The third dimension is set to a non-zero constant, usually 1, so that the new coordinate is [x, y, 1]. This allows the coordinate vector to be multiplied by a 3-by-3 matrix, and the constant third component is what makes translation possible. Because matrix multiplication is associative, multiple affine transformations can be combined into a single affine transformation by multiplying the matrices of the individual transformations in the order that the transformations are applied. This results in a single matrix that, when applied to a point vector, gives the same result as all the individual transformations performed on the vector in sequence; thus a sequence of affine-transformation matrices can be reduced to a single affine-transformation matrix. For example, 2-dimensional coordinates only allow rotation about the origin, but 3-dimensional homogeneous coordinates can be used to first translate any point to the origin, then perform the rotation, and lastly translate the origin back to the original point.
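The translate-rotate-translate composition can be sketched with plain 3-by-3 homogeneous matrices; pure Python is used here and all helper names are illustrative:

```python
import math

def matmul(a, b):
    """3×3 matrix product (row-major lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, x, y):
    """Apply a 3×3 affine matrix to the homogeneous point (x, y, 1)."""
    v = [x, y, 1]
    return tuple(sum(m[i][k] * v[k] for k in range(3)) for i in range(2))

# Rotate 90° about the point (2, 0): translate the pivot to the origin,
# rotate, then translate back -- composed into ONE matrix. Note that
# the matrix applied first appears rightmost in the product.
m = matmul(translate(2, 0), matmul(rotate(math.pi / 2), translate(-2, 0)))
x, y = apply(m, 3, 0)            # the point one unit right of the pivot
print(round(x, 9), round(y, 9))  # → 2.0 1.0
```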
These three affine transformations can be combined into a single matrix, thus allowing rotation around any point in the image. Digital cameras include specialized digital image processing hardware (either dedicated chips or added circuitry on other chips) to convert the raw data from their image sensor into a color-corrected image in a standard image file format. Westworld was the first feature film to use digital image processing, pixellating photography to simulate an android's point of view. See also: Computer graphics, Computer vision, CVIPtools, Digitizing, Free boundary condition, GPGPU, Homomorphic filtering, Image analysis, IEEE Intelligent Transportation Systems Society, Multidimensional systems, Remote sensing software, Standard test image, Superresolution.
GNOME is a free and open-source desktop environment for Unix-like operating systems. GNOME was originally an acronym for GNU Network Object Model Environment, but the acronym was dropped because it no longer reflected the vision of the GNOME project. GNOME is part of the GNU Project and is developed by The GNOME Project, composed of both volunteers and paid contributors, the largest corporate contributor being Red Hat. It is an international project that aims to develop software frameworks for the development of software, to program end-user applications based on these frameworks, and to coordinate efforts for internationalization, localization, and accessibility of that software. GNOME 3 is the default desktop environment on many major Linux distributions, including Fedora, Ubuntu, SUSE Linux Enterprise, Red Hat Enterprise Linux, CentOS, Oracle Linux, Scientific Linux, SteamOS, Kali Linux, and Endless OS. MATE, a continued fork of the last GNOME 2 release, is the default on many distributions that target low usage of system resources.
GNOME was started on August 15, 1997, by Miguel de Icaza and Federico Mena as a free software project to develop a desktop environment and applications for it. It was founded in part because K Desktop Environment, which was growing in popularity, relied on the Qt widget toolkit, which used a proprietary software license until version 2.0. In place of Qt, the GTK toolkit was chosen as the base of GNOME. GTK uses the GNU Lesser General Public License, a free software license that allows software linking to it to use a much wider set of licenses, including proprietary ones. GNOME itself is licensed under the LGPL for its libraries and the GNU General Public License for its applications. The name "GNOME" was an acronym for GNU Network Object Model Environment, referring to the original intention of creating a distributed object framework similar to Microsoft's OLE, but the acronym was dropped because it no longer reflected the vision of the GNOME project. The California startup Eazel developed the Nautilus file manager from 1999 to 2001.
De Icaza and Nat Friedman founded Helix Code in 1999 in Massachusetts. During the transition to GNOME 2 around 2001, and shortly thereafter, there were brief talks about creating a GNOME Office suite. On September 15, 2003, GNOME-Office 1.0, consisting of AbiWord 2.0, GNOME-DB 1.0, and Gnumeric 1.2.0, was released. Although some release planning for GNOME Office 1.2 happened on the gnome-office mailing list, and Gnumeric 1.4 was announced as a part of it, the 1.2 release of the suite itself never materialized. As of May 4, 2014, the GNOME wiki only mentions "GNOME/Gtk applications that are useful in an office environment". GNOME 2 was similar to a conventional desktop interface, featuring a simple desktop in which users could interact with virtual objects such as windows and files. GNOME 2 started out with Sawfish as its default window manager, but later switched to Metacity. The handling of windows and files in GNOME 2 is similar to that of contemporary desktop operating systems. In the default configuration of GNOME 2, the desktop has a launcher menu for quick access to installed programs and file locations.
However, these features can be moved to any position or orientation the user desires, replaced with other functions, or removed altogether. As of 2009, GNOME 2 was the default desktop for OpenSolaris. GNOME 1 and 2 followed the traditional desktop metaphor. GNOME 3, released in 2011, changed this with GNOME Shell, a more abstract metaphor where switching between different tasks and virtual desktops takes place in a separate area called the "Overview". Since Mutter replaced Metacity as the default window manager, the minimize and maximize buttons no longer appear by default, and the title bar, menu bar, and tool bar are combined into one horizontal bar called the "header bar", via the client-side decoration mechanism. Adwaita replaced Clearlooks as the default theme. Many GNOME Core Applications went through redesigns to provide a more consistent user experience. The release of GNOME 3, notable for its move away from the traditional menu bar and taskbar, caused considerable controversy in the GNU and Linux community.
Many users and developers have expressed concerns about usability. A few projects have been initiated to continue development of GNOME 2.x or to modify GNOME 3.x to be more like the 2.x releases. GNOME 3 aims to provide a single interface for desktop computers and tablet computers; this means using only input techniques that work on all those devices, requiring abandonment of certain concepts to which desktop users were accustomed, such as right-clicking, or saving files on the desktop. These major changes evoked widespread criticism; the MATE desktop environment was forked from the GNOME 2 code-base with the intent of retaining the traditional GNOME 2 interface, whilst keeping compatibility with modern Linux technology, such as GTK 3. The Linux Mint team addressed the issue in another way by developing the "Mint GNOME Shell Extensions" that ran on top of GNOME Shell and allowed it to be used via the traditional desktop metaphor; this led to the creation of the Cinnamon user interface, forked from the GNOME 3 codebase.
Among those critical of the early releases of GNOME 3 is Linus Torvalds, the creator of the Linux kernel. Torvalds abandoned GNOME for a wh
RGB color model
The RGB color model is an additive color model in which red, green, and blue light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors: red, green, and blue. The main purpose of the RGB color model is for the sensing and display of images in electronic systems, such as televisions and computers, though it has also been used in conventional photography. Before the electronic age, the RGB color model already had a solid theory behind it, based in human perception of colors. RGB is a device-dependent color model: different devices detect or reproduce a given RGB value differently, since the color elements and their response to the individual R, G, and B levels vary from manufacturer to manufacturer, or even in the same device over time; thus an RGB value does not define the same color across devices without some kind of color management. Typical RGB input devices are color TV and video cameras, image scanners, and digital cameras. Typical RGB output devices are TV sets of various technologies, mobile phone displays, video projectors, multicolor LED displays, and large screens such as the JumboTron.
Color printers, on the other hand, are subtractive color devices. This article discusses concepts common to all the different color spaces that use the RGB color model, which are used in one implementation or another in color image-producing technology. To form a color with RGB, three light beams (one red, one green, and one blue) must be superimposed. Each of the three beams is called a component of that color, and each of them can have an arbitrary intensity, from fully off to fully on, in the mixture. The RGB color model is additive in the sense that the three light beams are added together, their light spectra adding, wavelength for wavelength, to make the final color's spectrum. This is the opposite of the subtractive color model, which applies to paints, inks, and other substances whose color depends on reflecting the light under which we see them. Because of these additive properties, the three primary colors together create white; this is in stark contrast to physical pigments, such as dyes, which create black when mixed. Zero intensity for each component gives the darkest color (no light, considered black), and full intensity of each gives white.
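The additive mixing described above can be sketched as a channel-wise sum of intensities, clamped to the display's maximum. This is a minimal illustration, not from the article, and the helper name `mix` is hypothetical:

```python
def mix(*colors, full=255):
    """Additively mix RGB colors: sum each channel, clamped to full intensity."""
    return tuple(min(sum(c[i] for c in colors), full) for i in range(3))

# Full-intensity red, green, and blue together give white; no light gives black.
print(mix((255, 0, 0), (0, 255, 0), (0, 0, 255)))  # (255, 255, 255) -> white
print(mix((0, 0, 0)))                              # (0, 0, 0) -> black
```

The clamp models the physical limit of a display channel: once a component is fully on, adding more light of that primary cannot make it brighter.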
When the intensities of all the components are the same, the result is a shade of gray, darker or lighter depending on the intensity. When the intensities are different, the result is a colorized hue, more or less saturated depending on the difference between the strongest and weakest of the intensities of the primary colors employed. When one of the components has the strongest intensity, the color is a hue near that primary color; when two components have the same strongest intensity, the color is a hue of a secondary color. A secondary color is formed by the sum of two primary colors of equal intensity: cyan is green+blue, magenta is red+blue, and yellow is red+green. Every secondary color is the complement of one primary color. The RGB color model itself does not define what is meant by red, green, and blue colorimetrically, so the results of mixing them are not specified as absolute, but relative to the primary colors. When the exact chromaticities of the red, green, and blue primaries are defined, the color model becomes an absolute color space, such as sRGB or Adobe RGB.
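The secondary colors and their complement relationship can be demonstrated directly from the channel arithmetic. A brief sketch under the common 8-bit convention (0–255 per channel); the helper names `add` and `complement` are illustrative, not from the article:

```python
RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

def add(a, b, full=255):
    # Additive mix of two colors: channel-wise sum, clamped to full intensity.
    return tuple(min(x + y, full) for x, y in zip(a, b))

def complement(color, full=255):
    # The complement inverts each channel relative to full intensity.
    return tuple(full - c for c in color)

cyan    = add(GREEN, BLUE)   # (0, 255, 255)
magenta = add(RED, BLUE)     # (255, 0, 255)
yellow  = add(RED, GREEN)    # (255, 255, 0)

# Each secondary color is the complement of the remaining primary:
assert complement(RED) == cyan
assert complement(GREEN) == magenta
assert complement(BLUE) == yellow
```

The assertions hold because each secondary is full intensity exactly in the two channels where its complementary primary is zero, which is what channel inversion produces.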
The choice of primary colors is related to the physiology of the human eye. The three normal kinds of light-sensitive photoreceptor cells (cones) in the human eye respond most to yellow, green, and violet light. The difference in the signals received from the three kinds allows the brain to differentiate a wide gamut of colors, while being most sensitive to yellowish-green light and to differences between hues in the green-to-orange region. As an example, suppose that light in the orange range of wavelengths enters the eye and strikes the retina. Light of these wavelengths would activate both the medium- and long-wavelength cones of the retina, but not equally: the long-wavelength cells would respond more. The difference in the response can be detected by the brain, and this difference is the basis of our perception of orange. Thus, the orange appearance of an object results from light from the object entering our eye and stimulating the different cones to different degrees. Use of the three primary colors is not sufficient to reproduce all colors.
The RGB color model is based on the Young–Helmholtz theory of trichromatic color vision, developed by Thomas Young and Hermann Helmholtz in the early to mid nineteenth century, on James Clerk Maxwell's c