ISDB-T International, also known as ISDB-Tb or SBTVD (short for Sistema Brasileiro de Televisão Digital, "Brazilian Digital Television System"), is a technical standard for digital television broadcasting used in Brazil, Peru, Chile, Venezuela, Costa Rica, the Philippines, Nicaragua, El Salvador and Uruguay, based on the Japanese ISDB-T standard. It entered commercial operation on December 2, 2007, in São Paulo, Brazil, as SBTVD. ISDB-Tb differs from the original ISDB-T in using H.264/MPEG-4 AVC as its video compression standard, a presentation rate of 30 frames per second on portable devices, and powerful interactivity provided by the Ginga middleware, composed of the Ginga-NCL and Ginga-J modules. The ISDB-T International standard was developed as SBTVD by a study group coordinated by the Brazilian Ministry of Communications and led by the Brazilian Telecommunications Agency (ANATEL), with technical support from the Telecommunications Research and Development Centre (CPqD). The study group comprised members of ten other Brazilian ministries, the National Institute for Information Technology, several Brazilian universities, broadcast professional organizations, and manufacturers of broadcast and reception devices.
The objective of the group was to develop and implement a DTV standard in Brazil, addressing not only technical and economic issues but also the digital divide, that is, promoting the inclusion of those living apart from today's information society. Another goal was to enable access to e-government, i.e. to bring government closer to the population, since in Brazil 95.1% of households have at least one TV set. In January 2009, the Brazilian-Japanese study group for digital TV finished and published a specification document joining the Japanese ISDB-T with the Brazilian SBTVD, resulting in a specification now called "ISDB-T International". ISDB-T International is the system proposed by Japan and Brazil for use in other countries in South America and around the world. The history of SBTVD development can be divided into two major periods. a) Initial studies and tests: since 1994, a group composed of technicians from the Brazilian Society for Television Engineering (SET) and the Brazilian Association of Radio and Television Broadcasters (ABERT) had been analyzing existing digital TV standards and their technical aspects, but the discussion became a robust study only in 1998.
From 1998 to 2000, the ABERT and SET group, supported by Universidade Presbiteriana Mackenzie, developed a complete study based on several tests, considering not only the technical characteristics of each standard but also signal quality, both indoors and outdoors. It was the first complete study comparing all three major DTV standards in the world by an independent entity, and it was considered a rigorous and robust study by the DTV technical community. The results of the "Brazilian digital television tests" showed the insufficient quality of indoor reception presented by ATSC; between DVB-T and ISDB-T, the latter presented superior performance in indoor reception and flexibility in accessing digital services and TV programs through fixed, mobile or portable receivers, with impressive quality. In parallel, in 1998, the Brazilian Ministry of Communications ordered the National Telecommunications Agency (ANATEL) to carry out studies to select and implement a DTV standard in Brazil. Due to the completeness and quality of the ABERT/SET/Mackenzie study, ANATEL considered it the official result and supported it, regarding ISDB-T as the better standard to be implemented in Brazil.
However, the final decision about the selected standard was not announced at that moment because of three main points, one being that some groups of society wanted to be more involved in that decision. In the light of those points, the Brazilian Government created a more structured discussion group to review the first studies and to address these new points. The SBTVD program was deployed on November 26, 2003 by Presidential Act No. 4.901, focusing on the creation of a reference model for national terrestrial digital TV in Brazil. The National Telecommunications Agency was charged by the Brazilian Ministry of Communications to lead this work, with the technical support of CPqD and the contributions of 10 other Brazilian ministries, the National Institute for Information Technology, 25 organizations related to the matter, 75 universities/R&D institutes, and electro-electronic manufacturers. More than 1,200 researchers and professionals were mobilized. The DTV Work Group was organized in a structure with three areas of development, including a Development Committee to define and implement a political and regulatory basis.
Digital television is the transmission of television signals, including the sound channel, using digital encoding, in contrast to the earlier television technology, analog television, in which the video and audio are carried by analog signals. It is an innovative advance that represents the first significant evolution in television technology since color television in the 1950s. Digital TV can transmit in a new image format, HDTV, with greater resolution than analog TV, in a wide-screen aspect ratio similar to recent movies, in contrast to the narrower screen of analog TV; it also makes more economical use of scarce radio spectrum space. A transition from analog to digital broadcasting began around 2006 in some countries; many industrialized countries have now completed the changeover, while other countries are in various stages of adaptation. Different digital television broadcasting standards have been adopted in different parts of the world. Digital Video Broadcasting – Terrestrial (DVB-T) has been adopted in Europe and much of Asia, in total about 60 countries.
The Advanced Television System Committee (ATSC) standard uses eight-level vestigial sideband (8VSB) for terrestrial broadcasting. This standard has been adopted by six countries, including the United States, Mexico, South Korea, the Dominican Republic and Honduras. Integrated Services Digital Broadcasting (ISDB) is a system designed to provide good reception to fixed receivers as well as portable or mobile receivers; it utilizes two-dimensional interleaving, supports hierarchical transmission of up to three layers, and uses MPEG-2 video and Advanced Audio Coding. This standard has been adopted in Japan and the Philippines. ISDB-T International, an adaptation of this standard using H.264/MPEG-4 AVC, has been adopted in most of South America and is being embraced by Portuguese-speaking African countries. Digital Terrestrial Multimedia Broadcasting (DTMB) adopts time-domain synchronous OFDM technology with a pseudo-random signal frame serving as the guard interval of the OFDM block and the training symbol; the DTMB standard has been adopted in the People's Republic of China, including Hong Kong and Macau.
Digital Multimedia Broadcasting (DMB) is a digital radio transmission technology developed in South Korea as part of the national IT project for sending multimedia such as TV, radio and datacasting to mobile devices such as mobile phones, laptops and GPS navigation systems. Digital TV's roots have been tied closely to the availability of inexpensive, high-performance computers; it wasn't until the 1990s that digital TV became a real possibility. In the mid-1980s, as Japanese consumer electronics firms forged ahead with the development of HDTV technology and the analog MUSE format was proposed by Japan's public broadcaster NHK as a worldwide standard, Japanese advancements were seen as pacesetters that threatened to eclipse U.S. electronics companies. Until June 1990, the Japanese MUSE standard, based on an analog system, was the front-runner among the more than 23 different technical concepts under consideration. Then an American company, General Instrument, demonstrated the feasibility of a digital television signal; this breakthrough was of such significance that the FCC was persuaded to delay its decision on an ATV standard until a digitally based standard could be developed.
In March 1990, when it became clear that a digital standard was feasible, the FCC made a number of critical decisions. First, the Commission declared that the new ATV standard must be more than an enhanced analog signal: it had to provide a genuine HDTV signal with at least twice the resolution of existing television images. To ensure that viewers who did not wish to buy a new digital television set could continue to receive conventional television broadcasts, it dictated that the new ATV standard must be capable of being "simulcast" on different channels. This allowed the new DTV signal to be based on entirely new design principles. Although incompatible with the existing NTSC standard, the new DTV standard would be able to incorporate many improvements. The final standard adopted by the FCC did not require a single standard for scanning formats, aspect ratios, or lines of resolution. This outcome resulted from a dispute between the consumer electronics industry and the computer industry over which of the two scanning processes, interlaced or progressive, is superior.
Interlaced scanning, used in televisions worldwide, scans even-numbered lines first, then odd-numbered ones. Progressive scanning, the format used in computers, scans lines in sequence from top to bottom. The computer industry argued that progressive scanning is superior because it does not "flicker" in the manner of interlaced scanning, that it enables easier connections with the Internet, and that it is more cheaply converted to interlaced formats than vice versa. The film industry also supported progressive scanning because it offers a more efficient means of converting filmed programming into digital formats. For their part, the consumer electronics industry and broadcasters argued that interlaced scanning was the only technology that could transmit the highest-quality pictures then feasible, i.e. 1,080 lines per picture and 1,920 pixels per line. Broadcasters also favored interlaced scanning because their vast archive of interlaced programming is not easily compatible with a progressive format.
Aspect ratio (image)
The aspect ratio of an image describes the proportional relationship between its width and its height. It is expressed as two numbers separated by a colon, as in 16:9. For an x:y aspect ratio, no matter how big or small the image is, if the width is divided into x units of equal length and the height is measured using this same length unit, the height will be measured to be y units. For example, in a group of images that all have an aspect ratio of 16:9, one image might be 16 inches wide and 9 inches high, another 16 centimeters wide and 9 centimeters high, a third might be 8 yards wide and 4.5 yards high. Thus, aspect ratio concerns the relationship of the width to the height, not an image's actual size; the most common aspect ratios used today in the presentation of films in cinemas are 1.85:1 and 2.39:1. Two common videographic aspect ratios are 4:3, the universal video format of the 20th century, 16:9, universal for high-definition television and European digital television. Other cinema and video aspect ratios are used infrequently.
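The x:y relationship described above can be sketched in code. The following minimal Python example (the function name is ours, purely illustrative) reduces pixel dimensions to their simplest ratio using the greatest common divisor, showing that images of very different sizes share the same aspect ratio:

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce pixel dimensions to the simplest x:y aspect ratio."""
    g = gcd(width, height)
    return f"{width // g}:{height // g}"

# Images of very different sizes share the same ratio:
print(aspect_ratio(1920, 1080))  # 16:9
print(aspect_ratio(16, 9))       # 16:9
print(aspect_ratio(640, 480))    # 4:3
```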
In still camera photography, the most common aspect ratios are 4:3 and 3:2, mostly found in consumer cameras, as well as 16:9. Other aspect ratios, such as 5:3, 5:4 and 1:1, are used in photography as well, particularly in medium format and large format. With television, DVD and Blu-ray Disc, converting formats of unequal ratios is achieved by enlarging the original image to fill the receiving format's display area and cutting off any excess picture information (pan and scan), by adding horizontal mattes or vertical mattes to retain the original format's aspect ratio (letterboxing or pillarboxing), by stretching the image to fill the receiving format's ratio, or by scaling by different factors in both directions, possibly scaling by a different factor in the center and at the edges. In motion picture formats, the physical size of the film area between the sprocket perforations determines the image's size. The universal standard is a frame four perforations high. The film itself is 35 mm wide, but the area between the perforations is 24.89 mm × 18.67 mm, leaving a de facto ratio of 4:3, or 1.33:1.
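The matte-based conversion above can be made concrete with a small sketch. The following hypothetical Python function (its name and return convention are ours) computes the picture size and matte thickness when an image of one aspect ratio is fitted inside a display of another while preserving the original ratio, covering both the letterbox and pillarbox cases:

```python
def fit_with_mattes(src_ratio: float, disp_w: int, disp_h: int):
    """Scale a source image to fit entirely inside the display while
    preserving its ratio. Returns (picture_w, picture_h,
    horizontal_matte_each, vertical_matte_each)."""
    disp_ratio = disp_w / disp_h
    if src_ratio > disp_ratio:            # wider than display: letterbox
        pic_w, pic_h = disp_w, disp_w / src_ratio
    else:                                 # narrower than display: pillarbox
        pic_w, pic_h = disp_h * src_ratio, disp_h
    return pic_w, pic_h, (disp_h - pic_h) / 2, (disp_w - pic_w) / 2

# A 2.39:1 film on a 1920x1080 (16:9) display leaves black bars
# of about 138 pixels above and below the picture:
w, h, top, side = fit_with_mattes(2.39, 1920, 1080)
```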
With a space designated for the standard optical soundtrack, the frame size was reduced to maintain an image wider than tall; this resulted in the Academy aperture of 22 mm × 16 mm, or a 1.375:1 aspect ratio. The motion picture industry convention assigns a value of 1.0 to the image's height. After 1952, a number of aspect ratios were experimented with for anamorphic productions, including 2.66:1 and 2.55:1. A SMPTE specification for anamorphic projection from 1957 standardized the aperture to 2.35:1. An update in 1970 changed the aspect ratio to 2.39:1, and this ratio was confirmed by the most recent revision, from August 1993. In American cinemas, the common projection ratios are 1.85:1 and 2.39:1. Some European countries have 1.66:1 as the wide-screen standard. The "Academy ratio" of 1.375:1 was used for all cinema films in the sound era until 1953. During that time, television, which had a similar aspect ratio of 1.33:1, became a perceived threat to movie studios. Hollywood responded by creating a large number of wide-screen formats: CinemaScope, Todd-AO and VistaVision, to name just a few.
The "flat" 1.85:1 aspect ratio was introduced in May 1953 and became one of the most common cinema projection standards in the U.S. and elsewhere. The goal of these various lenses and aspect ratios was to capture as much of the frame as possible onto as large an area of the film as possible, in order to fully utilize the film being used. Some of the aspect ratios were chosen to utilize smaller film sizes in order to save film costs, while other aspect ratios were chosen to use larger film sizes in order to produce a wider, higher-resolution image. In either case the image was squeezed horizontally to fit the film's frame size and avoid any unused film area. The development of various film camera systems must cater to the placement of the frame in relation to the lateral constraints of the perforations and the optical soundtrack area. One clever wide-screen alternative, VistaVision, used standard 35 mm film running sideways through the camera gate, so that the sprocket holes were above and below the frame, allowing a larger horizontal negative size per frame, as only the vertical size was now restricted by the perforations.
Only a limited number of projectors were constructed to run the print film horizontally. Instead, the 1.50:1 ratio of the initial VistaVision image was optically converted to a vertical print to show with the standard projectors available at theaters, and was masked in the projector to the US standard of 1.85:1. The format was revived by Lucasfilm in the late 1970s for special-effects work that required a larger negative size, but it fell into obsolescence due to better cameras and film stocks available for standard four-perforation formats, in addition to the increased lab costs of making prints compared with more standard vertical processes. Super 16 mm film was also used for television production.
DVD is a digital optical disc storage format invented and developed in 1995. The medium can store any kind of digital data and is used for software and other computer files as well as for video programs watched using DVD players. DVDs offer higher storage capacity than compact discs. Prerecorded DVDs are mass-produced using molding machines that physically stamp data onto the DVD; such discs are a form of DVD-ROM because the data can only be read, not written or erased. Blank recordable DVD discs can be recorded once using a DVD recorder and then function as a DVD-ROM; rewritable DVDs can be erased and rerecorded many times. DVDs are used in the DVD-Video consumer digital video format and in the DVD-Audio consumer digital audio format, as well as for authoring DVD discs written in a special AVCHD format to hold high-definition material. DVDs containing other types of information may be referred to as DVD data discs. The Oxford English Dictionary comments that, "In 1995 rival manufacturers of the product named digital video disc agreed that, in order to emphasize the flexibility of the format for multimedia applications, the preferred abbreviation DVD would be understood to denote digital versatile disc."
The OED states that in 1995, "The companies said the official name of the format will be DVD. Toshiba had been using the name 'digital video disc', but switched to 'digital versatile disc' after computer companies complained that it left out their applications." "Digital versatile disc" is the explanation provided in a DVD Forum Primer from 2000 and in the DVD Forum's mission statement. There were several formats developed for recording video on optical discs before the DVD. Optical recording technology was invented by David Paul Gregg and James Russell in 1958 and first patented in 1961. A consumer optical disc data format known as LaserDisc was developed in the United States and first came to market in Atlanta, Georgia in 1978; it used much larger discs than the later formats. Due to the high cost of players and discs, consumer adoption of LaserDisc was low in both North America and Europe, and it was not widely used anywhere outside Japan and the more affluent areas of Southeast Asia, such as Hong Kong, Singapore and Taiwan.
CD Video, released in 1987, used analog video encoding on optical discs matching the established standard 120 mm size of audio CDs. Video CD, introduced in 1993, became one of the first formats for distributing digitally encoded films on such discs. In the same year, two new optical disc storage formats were being developed. One was the Multimedia Compact Disc (MMCD), backed by Philips and Sony; the other was the Super Density (SD) disc, supported by Toshiba, Time Warner, Matsushita Electric, Mitsubishi Electric, Thomson and JVC. By the time of the press launches for both formats in January 1995, the MMCD nomenclature had been dropped and Philips and Sony were referring to their format as the Digital Video Disc. Representatives from the SD camp asked IBM for advice on the file system to use for their disc and sought support for their format for storing computer data. Alan E. Bell, a researcher at IBM's Almaden Research Center, received that request and also learned of the MMCD development project. Wary of being caught in a repeat of the costly videotape format war between VHS and Betamax in the 1980s, he convened a group of computer industry experts, including representatives from Apple, Sun Microsystems and many others.
This group was referred to as the Technical Working Group, or TWG. On August 14, 1995, an ad hoc group formed from five computer companies issued a press release stating that they would only accept a single format, and the TWG voted to boycott both formats unless the two camps agreed on a converged standard. They recruited Lou Gerstner, president of IBM, to pressure the executives of the warring factions. In one significant compromise, the MMCD and SD groups agreed to adopt proposal SD 9, which specified that both layers of the dual-layered disc be read from the same side, instead of proposal SD 10, which would have created a two-sided disc that users would have to turn over. As a result, the DVD specification provided a storage capacity of 4.7 GB for a single-layered, single-sided disc and 8.5 GB for a dual-layered, single-sided disc. The DVD specification ended up similar to Toshiba and Matsushita's Super Density Disc, except for the dual-layer option and the EFMPlus modulation designed by Kees Schouhamer Immink.
Philips and Sony decided that it was in their best interests to end the format war and agreed to unify with the companies backing the Super Density Disc to release a single format incorporating technologies from both. After further compromises between MMCD and SD, the computer companies, through the TWG, won the day, and a single format was agreed upon. The TWG collaborated with the Optical Storage Technology Association on the use of its implementation of the ISO 13346 file system for use on the new DVDs. Movie and home entertainment distributors adopted the DVD format to replace the ubiquitous VHS tape as the primary consumer video distribution format. They embraced DVD because it produced higher-quality video and sound, provided superior data lifespan, and could be interactive. Interactivity on LaserDiscs had proven desirable to consumers, especially collectors, when LaserDisc prices dropped from $100 per
Letterboxing is the practice of transferring film shot in a widescreen aspect ratio to standard-width video formats while preserving the film's original aspect ratio. The resulting videographic image has mattes above and below it; LBX or LTBX are the identifying abbreviations for images so formatted. The term refers to the shape of a letter box, a slot in a wall or door through which mail is delivered, being rectangular and wider than it is high. Letterboxing is used as an alternative to a full-screen, pan-and-scan transfer of a widescreen film image to videotape or videodisc. In pan-and-scan transfers, the original image is cropped to the narrower aspect ratio of the destination format, usually the 1.33:1 ratio of the standard television screen, whereas letterboxing preserves the film's original image composition as seen in the cinema. Letterboxing was developed for use in 4:3 television displays before widescreen television screens were available, but it is also necessary to represent on a 16:9 widescreen display the unaltered original composition of a film with a wider aspect ratio, such as Panavision's 2.35:1 ratio.
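The trade-off between pan-and-scan and letterboxing is easy to quantify. As an illustration (the function is ours, not from any standard), the fraction of the original picture width discarded when a widescreen frame is cropped to a narrower screen is:

```python
def pan_and_scan_loss(film_ratio: float, screen_ratio: float = 4 / 3) -> float:
    """Fraction of the original picture width lost when cropping a
    widescreen frame to a narrower screen (pan-and-scan)."""
    return 1 - screen_ratio / film_ratio

# Cropping a 2.35:1 Panavision frame to a 4:3 television screen
# discards roughly 43% of the image width; letterboxing discards none.
loss = pan_and_scan_loss(2.35)
```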
Letterbox mattes are usually symmetrical, but in some instances the picture can be elevated so the bottom matte is much larger, for the purpose of placing "hard" subtitles within the matte to avoid overlapping the image. This was done for letterbox widescreen anime on VHS, though the practice of "hiding" subtitles within the lower matte is sometimes done with symmetrical mattes as well, albeit with less space available. The placing of "soft" subtitles within the picture or matte varies according to the DVD player being used, though it appears to depend on the movie for Blu-ray Disc. The first use of letterboxing in consumer video appeared with the RCA Capacitance Electronic Disc (CED) videodisc format. Letterboxing was initially limited to several key sequences of a film, such as opening and closing credits, but was later used for entire films. The first fully letterboxed CED release was Amarcord in 1984; several others followed, including The Long Goodbye, Monty Python and the Holy Grail and The King of Hearts. Each disc bore a label noting the use of "RCA's innovative wide-screen mastering technique."
In some regions, such as North America, most VHS titles were released only in pan-and-scan versions; however, most LaserDiscs and some VHS releases were issued in their original widescreen versions. A good number of North American NTSC DVD releases, especially early DVDs, family films and more popular titles, offered both "Widescreen" and "Fullscreen" versions on the same disc or on a "flipper" format DVD, while others were available in separate versions. Some DVD releases of films are in full screen only, either because of whatever existing master is available or based on what format demographics prefer for a certain title. In other territories, such as Europe and Asia, widescreen versions of films on VHS and LaserDisc were much more common, and widescreen is ubiquitous on Region 2 DVDs. Movies such as The Graduate and Woodstock that made use of the full width of the movie screen have the sides cut off and look different in non-letterboxed copies from the original theatrical release; this is most apparent in pan-and-scanned movies that remain on the center area of the film image.
The term "SmileBox" is a registered trademark used to describe a type of letterboxing for Cinerama films, such as on the Blu-ray release of How the West Was Won; the image is produced with 3D mapping technology to approximate a curved screen. With the advent of widescreen HDTV units in nearly every household, few films are offered in 4:3 pan-and-scan releases today, except for older material shot in 4:3 only, such as TV shows, made-for-TV productions and pre-1953 films. Some titles that were released in full screen only on VHS and DVD are now being issued in their original widescreen ratio on recent DVDs and Blu-rays. Digital broadcasting allows 1.78:1 (16:9) widescreen format transmissions without losing resolution, and thus widescreen is becoming the television norm. Most television channels in Europe broadcast standard-definition programming in 16:9, while in the United States such programming is downscaled to letterbox. When using a 4:3 television, it is possible to display such programming in either a letterbox format or a 4:3 centre-cut format.
A letterboxed 14:9 compromise ratio was broadcast in analogue transmissions in European countries making the transition from 4:3 to 16:9. In addition, recent years have seen an increase of "fake" 2.35:1 letterbox mattes on television, used to give the impression of a cinema film in adverts, trailers and television programmes such as Top Gear. Current high-definition television systems use video displays with a wider aspect ratio than older television sets, making it easier to display widescreen films. In addition to films produced for the cinema, some television programming is produced in high definition and therefore widescreen. On a widescreen television set, a 1.78:1 image fills the screen. Because the 1.85:1 aspect ratio does not exactly match the 1.78:1 aspect ratio of widescreen DVDs and high-definition video, slight letterboxing occurs; such matting of 1.85:1 film is often eliminated by cropping to match the 1.78:1 aspect ratio in the DVD and HD image transfer. Letterbox mattes are not always black; IBM has used blue mattes for many of their
The refresh rate is the number of times per second that display hardware updates its buffer. This is distinct from frame rate: the refresh rate includes the repeated drawing of identical frames, while the frame rate measures how often a video source can feed an entire frame of new data to a display. For example, most movie projectors advance from one frame to the next 24 times each second, but each frame is illuminated two or three times before the next frame is projected, using a shutter in front of the lamp; as a result, the projector has a 48 or 72 Hz refresh rate. On cathode ray tube (CRT) displays, increasing the refresh rate decreases flickering, thereby reducing eye strain. However, if a refresh rate is specified beyond what is recommended for the display, damage to the display can occur. For computer programs or telemetry, the term is applied to how often a datum is updated with a new external value from another source. In a CRT, the scan rate is controlled by the vertical blanking signal generated by the video controller, ordering the monitor to position the beam at the upper left corner of the raster, ready to paint another frame.
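The projector example above is simple multiplication; a tiny sketch (the function name is illustrative):

```python
def projector_refresh_hz(frame_rate: int, flashes_per_frame: int) -> int:
    """Refresh rate of a film projector whose shutter illuminates each
    frame several times before the next frame is pulled down."""
    return frame_rate * flashes_per_frame

# 24 fps film with a two- or three-bladed shutter:
assert projector_refresh_hz(24, 2) == 48   # 48 Hz
assert projector_refresh_hz(24, 3) == 72   # 72 Hz
```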
The refresh rate is limited by the monitor's maximum horizontal scan rate and the resolution, since a higher resolution means more scan lines. It can be estimated from the horizontal scan rate by dividing the scanning frequency by the number of horizontal lines multiplied by 1.05 (since about 5% of the time is spent in vertical blanking). For instance, a monitor with a horizontal scanning frequency of 96 kHz at a resolution of 1280 × 1024 yields a refresh rate of 96,000 ÷ (1024 × 1.05) ≈ 89 Hz. CRT refresh rates have historically been an important factor in electronic game programming. Traditionally, one of the principles of video/computer game programming is to avoid altering the computer's video buffer except during the vertical retrace; this is necessary to prevent screen tearing. Some video game consoles, such as the Famicom/Nintendo Entertainment System, did not allow any graphics changes except during the retrace. Although liquid-crystal displays do not flicker between refreshes, it is still necessary to avoid modifying graphics data except during the retrace phase, to prevent tearing from an image rendered faster than the display operates.
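The estimate above can be written as a short function; this is a sketch of the stated rule of thumb, with the 1.05 factor treated as an assumed blanking overhead rather than an exact constant:

```python
def refresh_rate_hz(h_scan_hz: float, vertical_lines: int,
                    overhead: float = 1.05) -> float:
    """Approximate vertical refresh rate from the horizontal scan rate.
    The ~5% overhead accounts for time spent in vertical blanking."""
    return h_scan_hz / (vertical_lines * overhead)

# 96 kHz horizontal scan at 1280 x 1024:
rate = refresh_rate_hz(96_000, 1024)   # ≈ 89 Hz
```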
The refresh rate, or temporal resolution, of an LCD is the number of times per second that the display draws the data it is given. Since activated LCD pixels do not flash on and off between frames, LCD monitors exhibit no refresh-induced flicker, no matter how low the refresh rate. However, high refresh rates may result in visual artifacts that distort the image in unpleasant ways. High-end LCD televisions now feature up to 600 Hz refresh rates, which require advanced digital processing to insert additional interpolated frames between the real images to smooth the image motion. Such high refresh rates may not be supported by pixel response times: for a refresh rate of 600 Hz to be displayed, an LCD would require a response time of 1.667 milliseconds (GtG). On smaller CRT monitors, few people notice any discomfort at 60–72 Hz. On larger CRT monitors, most people experience mild discomfort unless the refresh is set to 72 Hz or higher. A rate of 100 Hz is comfortable at almost any size. However, this does not apply to LCD monitors.
The closest equivalent to a refresh rate on an LCD monitor is its frame rate, which is often locked at 60 fps. But this is rarely a problem, because the only part of an LCD monitor that could produce CRT-like flicker, its backlight, typically operates at a minimum of around 200 Hz. Different operating systems set the default refresh rate differently. Microsoft Windows 95 and Windows 98 set the refresh rate to the highest rate that they believe the display supports. Windows NT-based operating systems, such as Windows 2000 and its descendants Windows XP, Windows Vista and Windows 7, set the default refresh rate to a conservative 60 Hz. The many variations of Linux set a refresh rate chosen by the user during setup of the display manager. Some fullscreen applications, including many games, now allow the user to reconfigure the refresh rate before entering fullscreen mode, but most default to a conservative resolution and refresh rate and let the user increase the settings in the options. Old monitors could be damaged if a user set the video card to a refresh rate higher than the highest rate supported by the monitor.
Some models of monitors display a notice when the video signal uses an unsupported refresh rate. Some LCDs support adapting their refresh rate to the current frame rate delivered by the graphics card; two technologies that allow this are FreeSync and G-Sync. When LCD shutter glasses are used for stereo 3D displays, the effective refresh rate is halved, because each eye needs a separate picture. For this reason, it is recommended to use a display capable of at least 120 Hz, because divided in half this rate is again 60 Hz. Higher refresh rates result in greater image stability: for example, 72 Hz non-stereo is 144 Hz stereo, and 90 Hz non-stereo is 180 Hz stereo.
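The halving described above, sketched as a trivial helper (the function name is ours):

```python
def per_eye_rate(display_hz: float) -> float:
    """With LCD shutter glasses, each eye sees every other frame,
    so the effective per-eye refresh rate is half the display's."""
    return display_hz / 2

# A 120 Hz display delivers 60 Hz to each eye in stereo mode:
assert per_eye_rate(120) == 60
```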
Rendering (computer graphics)
Rendering or image synthesis is the automatic process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of computer programs; the result of displaying such a model can be called a render. A scene file contains objects in a strictly defined language or data structure; the data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" may be by analogy with an "artist's rendering" of a scene. Though the technical details of rendering methods vary, the general challenges to overcome in producing a 2D image from a 3D representation stored in a scene file are outlined as the graphics pipeline along a rendering device, such as a GPU. A GPU is a purpose-built device able to assist a CPU in performing complex rendering calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software should solve the rendering equation. The rendering equation does not account for all lighting phenomena, but is a general lighting model for computer-generated imagery. "Rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output.
Rendering is one of the major sub-topics of 3D computer graphics, and in practice it is always connected to the others. In the graphics pipeline, it is the last major step, giving the final appearance to the models and animation. With the increasing sophistication of computer graphics since the 1970s, it has become a more distinct subject. Rendering has uses in architecture, video games, movie or TV visual effects, and design visualization, each employing a different balance of features and techniques. A wide variety of renderers are available as products; some are integrated into larger modeling and animation packages, some are stand-alone, and some are free open-source projects. On the inside, a renderer is a carefully engineered program based on a selective mixture of disciplines related to light physics, visual perception and software development. In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in real time. Pre-rendering is a computationally intensive process typically used for movie creation, while real-time rendering is done for 3D video games, which rely on graphics cards with 3D hardware accelerators.
When the pre-image is complete, rendering is used, which adds in bitmap textures or procedural textures, bump mapping and relative position to other objects. The result is the completed image the intended viewer sees. For movie animations, several images (frames) must be rendered and stitched together in a program capable of making an animation of this sort; most 3D image-editing programs can do this. A rendered image can be understood in terms of a number of visible features. Rendering research and development has been motivated largely by finding ways to simulate these efficiently; some relate directly to particular techniques, while others are produced together.

Shading – how the color and brightness of a surface varies with lighting
Texture-mapping – a method of applying detail to surfaces
Bump-mapping – a method of simulating small-scale bumpiness on surfaces
Fogging/participating medium – how light dims when passing through non-clear atmosphere or air
Shadows – the effect of obstructing light
Soft shadows – varying darkness caused by partially obscured light sources
Reflection – mirror-like or glossy reflection
Transparency or opacity – sharp transmission of light through solid objects
Translucency – scattered transmission of light through solid objects
Refraction – bending of light associated with transparency
Diffraction – bending and interference of light passing by an object or aperture that disrupts the ray
Indirect illumination – surfaces illuminated by light reflected off other surfaces, rather than directly from a light source
Caustics – reflection of light off a shiny object, or focusing of light through a transparent object, to produce bright highlights on another object
Depth of field – objects appear blurry or out of focus when too far in front of or behind the object in focus
Motion blur – objects appear blurry due to high-speed motion, or the motion of the camera
Non-photorealistic rendering – rendering of scenes in an artistic style, intended to look like a painting or drawing

Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image.
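Among the features listed, shading is the most fundamental. A minimal sketch of one common shading model, Lambertian (diffuse) shading, where brightness depends on the cosine of the angle between the surface normal and the light direction (the helper names are ours, purely illustrative):

```python
import math

def lambert_shade(normal, light_dir, base_color):
    """Diffuse (Lambertian) shading: brightness is proportional to the
    cosine of the angle between the surface normal and the light."""
    def norm(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)
    n, l = norm(normal), norm(light_dir)
    # Clamp to zero: surfaces facing away from the light receive none.
    intensity = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * intensity for c in base_color)

# A surface facing the light is fully lit; one at 90 degrees is dark:
print(lambert_shade((0, 0, 1), (0, 0, 1), (255, 0, 0)))  # (255.0, 0.0, 0.0)
print(lambert_shade((0, 0, 1), (1, 0, 0), (255, 0, 0)))  # (0.0, 0.0, 0.0)
```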
Tracing every particle of light in a scene is nearly always impractical and would take a stupendous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted. Therefore, a few loose families of more efficient light-transport modelling techniques have emerged. Rasterization, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects.
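Rasterization's geometric step can be sketched with a simple pinhole-camera projection of a 3D point onto a 2D image plane; all parameter names below are illustrative assumptions, not part of any particular API:

```python
def project(point, focal=1.0, width=640, height=480):
    """Perspective-project a 3D camera-space point onto an image plane
    (the geometric step of rasterization, with no optical effects)."""
    x, y, z = point
    if z <= 0:
        return None                      # behind the camera plane
    sx = x * focal / z                   # perspective divide
    sy = y * focal / z
    # Map normalized plane coordinates [-1, 1] to pixel coordinates.
    px = int((sx + 1) / 2 * width)
    py = int((1 - sy) / 2 * height)
    return px, py

# A point straight ahead of the camera lands in the image center:
print(project((0.0, 0.0, 5.0)))  # (320, 240)
```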