A scan line is one line, or row, in a raster scanning pattern, such as a line of video on a cathode ray tube (CRT) display of a television set or computer monitor. On CRT screens the horizontal scan lines are visually discernible when viewed from a distance, as alternating colored lines and black lines, when a progressive scan signal with below-maximum vertical resolution is displayed; this is sometimes used today as a visual effect in computer graphics. The term is also used for a single row of pixels in a raster graphics image. Scan lines are important in representations of image data, because many image file formats have special rules for data at the end of a scan line. For example, there may be a rule that each scan line starts on a particular byte boundary; this means that otherwise compatible raster data may need to be analyzed at the level of scan lines in order to convert between formats. See also: interlaced video, progressive video, scanline rendering, flicker, stroboscopic effect.
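As an illustration of such an end-of-line rule, one widely known case is the BMP format, which pads each scan line to a multiple of four bytes. The following Python sketch (function name and defaults are hypothetical, for illustration only) computes the padded row stride a format converter would need:

```python
def row_stride(width_px: int, bits_per_pixel: int, alignment: int = 4) -> int:
    """Bytes per scan line once padded to the given byte alignment."""
    unpadded = (width_px * bits_per_pixel + 7) // 8        # round up to whole bytes
    return (unpadded + alignment - 1) // alignment * alignment  # round up to alignment

# A 101-pixel-wide, 24-bit image: 303 raw bytes per row, padded to 304.
print(row_stride(101, 24))  # -> 304
```

A converter that assumed rows were tightly packed (303 bytes here) would read each subsequent row one byte off, which is exactly why such data must be handled scan line by scan line.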
Computer graphics are pictures and films created using computers. The term usually refers to computer-generated image data created with the help of specialized graphical hardware and software, and it is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, though sometimes erroneously referred to as computer-generated imagery (CGI). Some topics in computer graphics include user interface design, sprite graphics, vector graphics, 3D modeling, shaders, GPU design, implicit surface visualization with ray tracing, and computer vision, among others; the overall methodology depends heavily on the underlying sciences of geometry, optics, and physics. Computer graphics is responsible for displaying art and image data effectively and meaningfully to the consumer, and it is also used for processing image data received from the physical world. Computer graphics development has had a significant impact on many types of media and has revolutionized animation, advertising, video games, and graphic design in general.
The term computer graphics has been used in a broad sense to describe "almost everything on computers that is not text or sound". More specifically, the term refers to several different things: the representation and manipulation of image data by a computer; the various technologies used to create and manipulate images; and the sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content (the study of computer graphics). Today, computer graphics is widespread; such imagery is found in and on television, in weather reports, and in a variety of medical investigations and surgical procedures. A well-constructed graph can present complex statistics in a form that is easier to understand and interpret. In the media, "such graphs are used to illustrate papers, theses", and other presentation material. Many tools have been developed to visualize data. Computer-generated imagery can be categorized into several different types: two dimensional (2D), three dimensional (3D), and animated graphics. As technology has improved, 3D computer graphics have become more common, but 2D computer graphics are still widely used.
Computer graphics has emerged as a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Over the past decade, other specialized fields have developed, like information visualization and scientific visualization, the latter more concerned with "the visualization of three dimensional phenomena, where the emphasis is on realistic renderings of volumes, illumination sources, and so forth, perhaps with a dynamic component". The precursors to the development of modern computer graphics were the advances in electrical engineering and television that took place during the first half of the twentieth century. Screens could display art since the Lumiere brothers' use of mattes to create special effects for the earliest films dating from 1895, but such displays were limited and not interactive. The first cathode ray tube, the Braun tube, was invented in 1897; it in turn would permit the oscilloscope and the military control panel, the more direct precursors of the field, as they provided the first two-dimensional electronic displays that responded to programmatic or user input.
Computer graphics remained relatively unknown as a discipline until the 1950s and the post-World War II period, during which time it emerged from a combination of pure university and laboratory academic research into more advanced computers and the United States military's further development of wartime technologies such as radar, advanced aviation, and rocketry. New kinds of displays were needed to process the wealth of information resulting from such projects, leading to the development of computer graphics as a discipline. Early projects like the Whirlwind and SAGE projects introduced the CRT as a viable display and interaction interface and introduced the light pen as an input device. Douglas T. Ross of the Whirlwind SAGE system performed a personal experiment in which a small program he wrote captured the movement of his finger and displayed its vector on a display scope. One of the first interactive video games to feature recognizable, interactive graphics, Tennis for Two, was created for an oscilloscope by William Higinbotham to entertain visitors at Brookhaven National Laboratory in 1958; it simulated a tennis match.
In 1959, Douglas T. Ross innovated again while working at MIT on transforming mathematical statements into computer-generated 3D machine tool vectors, taking the opportunity to create a display scope image of a Disney cartoon character. Electronics pioneer Hewlett-Packard went public in 1957, having incorporated the decade prior, and established strong ties with Stanford University through its founders, who were alumni; this began the decades-long transformation of the southern San Francisco Bay Area into the world's leading computer technology hub, now known as Silicon Valley. The field of computer graphics developed with the emergence of computer graphics hardware, and further advances in computing led to greater advancements in interactive computer graphics. In 1959, the TX-2 computer was developed at MIT's Lincoln Laboratory; the TX-2 integrated a number of new man-machine interfaces. A light pen could be used to draw sketches on the computer using Ivan Sutherland's revolutionary Sketchpad software.
Using a light pen, Sketchpad allowed one to draw simple shapes on the computer screen, save them, and recall them later. The light pen itself had a small photoelectric cell in its tip.
Central processing unit
A central processing unit (CPU), also called a central processor or main processor, is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic, and input/output operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, and more specifically to its processing unit and control unit, distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. The form and implementation of CPUs have changed over the course of their history, but their fundamental operation remains largely unchanged. Principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching and execution of instructions by directing the coordinated operations of the ALU, registers, and other components.
Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer. Some computers employ a multi-core processor, a single chip containing two or more CPUs called "cores". Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". Since the term "CPU" is generally defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer had been present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner.
On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types; the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, namely the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC, however, was not the first stored-program computer. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities; this standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit.
The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys. While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also used a stored-program design, albeit one using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both.
Most modern CPUs are primarily von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications. In early CPU designs, relays and vacuum tubes were used as switching elements; the overall speed of a system is dependent on the speed of its switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages they afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were common at this time, limited largely by the speed of the switching devices they were built with.
Virtual camera system
In 3D video games, a virtual camera system aims at controlling a camera or a set of cameras to display a view of a 3D virtual world. Camera systems are used in video games, where their purpose is to show the action at the best possible angle; unlike filmmakers, virtual camera system creators have to deal with a world that is interactive and unpredictable. It is not possible to know where the player's character is going to be heading in the next few seconds, so shots cannot be composed in advance. To solve this issue, the system relies on certain rules or artificial intelligence to select the most appropriate shots. There are three main types of camera systems. In fixed camera systems, the camera does not move at all and the system displays the player's character in a succession of still shots. Tracking cameras, on the other hand, follow the character's movements. Finally, interactive camera systems are partially automated and allow the player to directly change the view. To implement camera systems, video game developers use techniques such as constraint solvers, artificial intelligence scripts, or autonomous agents. In video games, "third-person" refers to a graphical perspective rendered from a fixed distance behind and slightly above the player character.
This viewpoint allows players to see a more strongly characterized avatar, and is most common in action games and action-adventure games. Games with this perspective often make use of positional audio, where the volume of ambient sounds varies depending on the position of the avatar. The first of the three types of third-person camera systems is the fixed camera system, in which the camera positions are set during the game's creation. In this kind of system, the developers set the properties of the camera, such as its position, orientation, or field of view, during the game creation; the camera views will not change dynamically, so the same place will always be shown under the same set of views. An early example of this kind of camera system can be seen in Alone in the Dark: while the characters are in 3D, the background on which they evolve has been pre-rendered. The early Resident Evil games are notable examples of games that use fixed cameras, and the God of War series of video games is also known for this technique. One advantage of this camera system is that it allows the game designers to use the language of film.
Indeed, like filmmakers, they have the possibility to create a mood through camerawork and careful selection of shots, and games that use this kind of technique are often praised for their cinematic qualities. As the name suggests, a tracking camera follows the characters from behind. The player does not control the camera in any way; they cannot, for example, rotate it or move it to a different position. This type of camera system was common in early 3D games such as Crash Bandicoot or Tomb Raider, since it is simple to implement. However, there are a number of issues with it. In particular, if the current view is not suitable, it cannot be changed, since the player does not control the camera. Sometimes this viewpoint causes difficulty when a character stands face out against a wall; the camera may end up in awkward positions. The interactive camera system is an improvement over the tracking camera system: while the camera is still tracking the character, some of its parameters, such as its orientation or distance to the character, can be changed.
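The tracking behavior described above can be sketched in a few lines. The following Python sketch (the class, its fields, and the coordinate conventions are all hypothetical, not taken from any real engine) places the camera behind the character, opposite its facing direction, at an adjustable distance and height:

```python
from dataclasses import dataclass

@dataclass
class TrackingCamera:
    """Follows a character from behind at an adjustable distance and height."""
    distance: float = 6.0
    height: float = 2.0

    def position(self, char_pos, char_facing):
        # Place the camera behind the character, opposite its facing direction.
        fx, fy, fz = char_facing
        x, y, z = char_pos
        return (x - fx * self.distance,
                y + self.height,
                z - fz * self.distance)

cam = TrackingCamera(distance=5.0, height=1.5)
# Character at (10, 0, 4) facing +x: the camera sits 5 units behind, 1.5 up.
print(cam.position((10.0, 0.0, 4.0), (1.0, 0.0, 0.0)))  # -> (5.0, 1.5, 4.0)
```

In a pure tracking system `distance` and `height` are fixed; an interactive system exposes them (and the orbit angle) to the player.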
On video game consoles, the camera is often controlled by an analog stick to provide good accuracy, whereas on PC games it is usually controlled by the mouse. This is the case in games such as Super Mario Sunshine or The Legend of Zelda: The Wind Waker. Fully interactive camera systems are often difficult to implement in the right way; GameSpot argues that much of Super Mario Sunshine's difficulty comes from having to control the camera. The Legend of Zelda: The Wind Waker was more successful at it; IGN called its camera system "so smart that it rarely needs manual correction". One of the first games to offer an interactive camera system was Super Mario 64; the game had two types of camera systems between which the player could alternate. The first one was a standard tracking camera system, except that it was partly driven by artificial intelligence: the system was "aware" of the structure of the level and therefore could anticipate certain shots. For example, in the first level, when the path to the hill is about to turn left, the camera automatically starts looking towards the left too, thus anticipating the player's movements.
The second type allows the player to control the camera relative to Mario's position. By pressing the left or right buttons, the camera rotates around Mario, while pressing up or down moves the camera closer to or away from Mario. There is a large body of research on how to implement such camera systems. The role of a constraint solver is to generate the best possible shot given a set of visual constraints. In other words, the constraint solver is given a requested shot composition such as "show this character and ensure that he covers at least 30 percent of the screen space". The solver will then use various methods to try to create a shot that satisfies this request. Once a suitable shot is found, the solver outputs the coordinates and rotation of the camera, which can then be used by the graphic engine renderer to display the view. In some camera systems, if no solution can be found, constraints are relaxed.
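A toy version of such a constraint solver, under heavy simplifying assumptions (apparent size shrinks linearly with distance under a pinhole-style model, only a fixed set of candidate distances is searched, and all names and defaults are hypothetical), might look like this:

```python
def solve_camera_distance(char_size=1.8, fov_height_at_1m=1.0,
                          min_coverage=0.30, candidates=(2.0, 4.0, 8.0, 16.0)):
    """Pick the farthest candidate distance at which the character still
    covers at least `min_coverage` of the screen height.  If no candidate
    satisfies the constraint, relax it (here: halve the target) and retry."""
    def coverage(d):
        # Apparent on-screen fraction of a character of height `char_size`,
        # shrinking linearly with distance in this simplified model.
        return min(1.0, char_size / (fov_height_at_1m * d))

    while True:
        ok = [d for d in candidates if coverage(d) >= min_coverage]
        if ok:
            return max(ok), min_coverage
        min_coverage /= 2  # constraint relaxation, as described above

print(solve_camera_distance())  # -> (4.0, 0.3)
```

Real solvers search continuous camera poses and juggle many constraints at once (visibility, occlusion, framing), but the shape of the loop, propose, test against constraints, relax on failure, is the same.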
A game engine is a software-development environment designed for people to build video games. Developers use game engines to construct games for consoles, mobile devices, and personal computers. The core functionality typically provided by a game engine includes a rendering engine for 2D or 3D graphics, a physics engine or collision detection, scripting, artificial intelligence, streaming, memory management, localization support, and a scene graph, and may include video support for cinematics. Implementers often economize on the process of game development by reusing or adapting, in large part, the same game engine to produce different games or to aid in porting games to multiple platforms. In many cases game engines provide a suite of visual development tools in addition to reusable software components; these tools are generally provided in an integrated development environment to enable simplified, rapid development of games in a data-driven manner. Game engine developers attempt to "pre-invent the wheel" by developing robust software suites which include many elements a game developer may need to build a game.
Most game engine suites provide facilities that ease development, such as graphics, physics, and AI functions. These game engines are sometimes called "middleware" because, as with the business sense of the term, they provide a flexible and reusable software platform which provides all the core functionality needed, right out of the box, to develop a game application while reducing costs and time-to-market, all critical factors in the highly competitive video game industry. As of 2001, Gamebryo, JMonkeyEngine, and RenderWare were widely used middleware programs of this type. Like other types of middleware, game engines usually provide platform abstraction, allowing the same game to be run on various platforms including game consoles and personal computers with few, if any, changes made to the game source code. Often, game engines are designed with a component-based architecture that allows specific systems in the engine to be replaced or extended with more specialized game middleware components. Some game engines are instead designed as a series of loosely connected game middleware components that can be selectively combined to create a custom engine, rather than the more common approach of extending or customizing a flexible integrated product.
However extensibility is achieved, it remains a high priority for game engines due to the wide variety of uses to which they are applied. Despite the specificity of the name, game engines are often used for other kinds of interactive applications with real-time graphical needs, such as marketing demos, architectural visualizations, training simulations, and modeling environments. Some game engines only provide real-time 3D rendering capabilities instead of the wide range of functionality needed by games. These engines rely upon the game developer to implement the rest of this functionality or to assemble it from other game middleware components; such engines are generally referred to as a "graphics engine", "rendering engine", or "3D engine" instead of the more encompassing term "game engine". This terminology is inconsistently used, as many full-featured 3D game engines are referred to simply as "3D engines". A few examples of graphics engines are: Crystal Space, Genesis3D, Irrlicht, OGRE, RealmForge, Truevision3D, and Vision Engine.
Modern game or graphics engines generally provide a scene graph, an object-oriented representation of the 3D game world which simplifies game design and can be used for more efficient rendering of vast virtual worlds. As technology ages, the components of an engine may become outdated or insufficient for the requirements of a given project. Since the complexity of programming a new engine may result in unwanted delays, a development team may elect to update their existing engine with newer functionality or components. Such a framework is composed of a multitude of different components. The actual game logic has to be implemented by some algorithms; it is distinct from sound or input work. The rendering engine generates animated 3D graphics by any of a number of methods. Instead of being programmed and compiled to be executed on the CPU or GPU directly, most rendering engines are built upon one or multiple rendering application programming interfaces (APIs), such as Direct3D, OpenGL, or Vulkan, which provide a software abstraction of the graphics processing unit.
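The scene-graph idea mentioned above can be sketched minimally: each node carries a transform relative to its parent, and world positions fall out of a single depth-first traversal. The sketch below uses only a 2D translation per node for brevity (real engines compose full transformation matrices), and all names are hypothetical:

```python
class SceneNode:
    """Minimal scene-graph node: a local translation plus a list of children."""
    def __init__(self, name, local=(0.0, 0.0)):
        self.name, self.local, self.children = name, local, []

    def add(self, child):
        self.children.append(child)
        return child

    def world_positions(self, parent=(0.0, 0.0), out=None):
        # Depth-first traversal: each node's world position is its parent's
        # world position composed with its own local offset.
        out = {} if out is None else out
        world = (parent[0] + self.local[0], parent[1] + self.local[1])
        out[self.name] = world
        for c in self.children:
            c.world_positions(world, out)
        return out

root = SceneNode("world")
tank = root.add(SceneNode("tank", local=(10.0, 5.0)))
tank.add(SceneNode("turret", local=(0.0, 1.0)))
print(root.world_positions())
# -> {'world': (0.0, 0.0), 'tank': (10.0, 5.0), 'turret': (10.0, 6.0)}
```

Moving the tank moves its turret for free, which is precisely why the hierarchy simplifies game design; the same traversal order also lets a renderer cull whole subtrees at once.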
Low-level libraries such as DirectX, Simple DirectMedia Layer, and OpenGL are also commonly used in games, as they provide hardware-independent access to other computer hardware such as input devices, network cards, and sound cards. Before hardware-accelerated 3D graphics, software renderers had been used. Software rendering is still used in some modeling tools or for still-rendered images when visual accuracy is valued over real-time performance, or when the computer hardware does not meet needs such as shader support. With the advent of hardware-accelerated physics processing, various physics APIs such as PAL and the physics extensions of COLLADA became available to provide a software abstraction of the physics processing unit of different middleware providers and console platforms. Game engines can be written in any programming language, such as C++, C, or Java, though each language is structurally different and may provide different levels of access to specific functions. The audio engine is the component which consists of algorithms related to the loading and output of sound through the client's speaker system.
At a minimum, it is able to load, decompress, and play sound files.
3D rendering is the 3D computer graphics process of automatically converting 3D wire frame models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic styles. Rendering is the final process of creating the actual 2D image or animation from the prepared scene; this can be compared to taking a photo or filming the scene after the setup is finished in real life. Several different, and often specialized, rendering methods have been developed; these range from the distinctly non-realistic wireframe rendering through polygon-based rendering, to more advanced techniques such as scanline rendering, ray tracing, or radiosity. Rendering may take from fractions of a second to days for a single image/frame. In general, different methods are better suited for either photorealistic rendering or real-time rendering. Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time rendering, the goal is to show as much information as the eye can process in a fraction of a second.
The primary goal is to achieve an as high as possible degree of photorealism at an acceptable minimum rendering speed. In fact, exploitations can be applied in the way the eye "perceives" the world; as a result, the final image presented is not necessarily that of the real world, but one close enough for the human eye to tolerate. Rendering software may simulate such visual effects as depth of field or motion blur; these are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the human eye. These effects can lend an element of realism to a scene, even if the effect is merely a simulated artifact of a camera. This is the basic method employed in games, interactive worlds, and VRML. The rapid increase in computer processing power has allowed a progressively higher degree of realism even for real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's GPU. Animations for non-interactive media, such as feature films and video, are rendered much more slowly.
Non-real-time rendering enables the leveraging of limited processing power in order to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk and can then be transferred to other media such as motion picture film or optical disk; these frames are displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement. When the goal is photorealism, techniques such as ray tracing, path tracing, photon mapping, or radiosity are employed; this is the basic method employed in artistic works. Techniques have also been developed for the purpose of simulating other naturally occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems, volumetric sampling, and subsurface scattering. The rendering process is computationally expensive, given the complex variety of physical processes being simulated.
Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system. The output of the renderer is often used as only one small part of a completed motion-picture scene; many layers of material may be rendered separately and integrated into the final shot using compositing software. Models of reflection/scattering and shading are used to describe the appearance of a surface. Although these issues may seem like problems all on their own, they are studied almost exclusively within the context of rendering. Modern 3D computer graphics rely heavily on a simplified reflection model called the Phong reflection model. In the refraction of light, an important concept is the refractive index; in most 3D programming implementations, the term for this value is "index of refraction".
Shading can be broken down into two different techniques, which are often studied independently:

- Surface shading: how light spreads across a surface
- Reflection/scattering: how light interacts with a surface at a given point

Popular surface shading algorithms in 3D computer graphics include:

- Flat shading: a technique that shades each polygon of an object based on the polygon's "normal" and the position and intensity of a light source.
- Gouraud shading: invented by H. Gouraud in 1971, a fast and resource-conscious vertex shading technique used to simulate smoothly shaded surfaces.
- Phong shading: invented by Bui Tuong Phong, used to simulate specular highlights and smooth shaded surfaces.
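The flat shading entry above amounts to computing one Lambertian (diffuse) term per polygon from the polygon's normal and the light direction. A minimal Python sketch of that computation (function and parameter names are hypothetical):

```python
import math

def flat_shade(normal, light_dir, intensity=1.0):
    """Flat shading: one Lambertian term for the whole polygon, computed
    from the face normal and the direction toward the light source."""
    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    nx, ny, nz = normalize(normal)
    lx, ly, lz = normalize(light_dir)
    # Dot product = cosine of the angle between normal and light direction,
    # clamped at zero: faces turned away from the light receive none.
    return intensity * max(0.0, nx * lx + ny * ly + nz * lz)

print(flat_shade((0, 0, 1), (0, 0, 1)))   # face toward the light -> 1.0
print(flat_shade((0, 0, 1), (0, 0, -1)))  # face away from it     -> 0.0
```

Gouraud shading performs essentially this computation per vertex and interpolates the results across the polygon, while Phong shading interpolates the normals and shades per pixel, which is why it captures specular highlights that the other two miss.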
Video game graphics
A variety of computer graphic techniques have been used to display video game content throughout the history of video games. The predominance of individual techniques has evolved over time, primarily due to hardware advances and restrictions such as the processing power of central or graphics processing units. Some of the earliest video games were text games or text-based games that used text characters instead of bitmapped or vector graphics. Examples include MUDs, where players could read or view depictions of rooms, other players, and actions performed in the virtual world. Some of the earliest text games were developed for computer systems which had no video display at all. Text games are typically easier to write and require less processing power than graphical games, and thus were more common from 1970 to 1990. However, terminal emulators are still in use today, and people continue to play MUDs and explore interactive fiction. Many beginning programmers still create these types of games to familiarize themselves with a programming language, and contests are held even today on who can finish programming a roguelike within a short time period, such as seven days.
Vector graphics refers to the use of geometrical primitives such as points and curves, instead of resolution-dependent bitmap graphics, to represent images in computer graphics. In video games this type of graphics is somewhat rare, but has become more common in recent years in browser-based gaming with the advent of Flash and HTML5 Canvas, since these support vector graphics natively. An earlier example for the personal computer is Starglider. "Vector game" can also refer to a video game that uses a vector graphics display capable of projecting images using an electron beam to draw images instead of with pixels, much like a laser show. Many early arcade games used such displays, as they were capable of displaying more detailed images than raster displays on the hardware available at that time. Many vector-based arcade games used full-color overlays to complement the otherwise monochrome vector images. Other uses of these overlays were detailed drawings of the static gaming environment, while the moving objects were drawn by the vector beam.
Games of this type were produced by Atari and Sega, among others. Examples of vector games include Asteroids, Armor Attack, Lunar Lander, Space Fury, Space Wars, Star Trek, Tac/Scan, and Zektor; the Vectrex home console also used a vector display. After 1985, the use of vector graphics declined due to improvements in sprite technology. Full motion video (FMV) games are video games that rely upon pre-recorded television- or movie-quality recordings and animations rather than sprites, vectors, or 3D models to display action in the game. FMV-based games were popular during the early 1990s as CD-ROMs and LaserDiscs made their way into living rooms, providing an alternative to the low-capacity ROM cartridges of most consoles at the time. Although FMV-based games did manage to look better than many contemporary sprite-based games, they occupied a niche market; as a result, the format became a well-known failure in video gaming, and the popularity of FMV games declined after 1995 as more advanced consoles started to become available.
A number of different types of games utilized this format. Some resembled modern music/dance games, where the player presses buttons in time according to on-screen instructions. Others included early rail shooters such as Surgical Strike and Sewer Shark. Full motion video was also used in several interactive movie adventure games, such as The Beast Within: A Gabriel Knight Mystery and Phantasmagoria. Games utilizing parallel projection make use of two-dimensional bitmap graphics as opposed to 3D-rendered triangle-based geometry, allowing developers to create large, complex gameworlds efficiently and with relatively few art assets by dividing the art into sprites or tiles and reusing them repeatedly. Top-down perspective, sometimes referred to as bird's-eye view, Godview, overhead view, or helicopter view, when used in video games refers to a camera angle that shows the player and the area around them from above. While not exclusive to video games that utilize parallel projection, it was at one time common in 2D role-playing video games and construction and management simulation games such as SimCity, Pokémon, and Railroad Tycoon, as well as in action and action-adventure games such as the early The Legend of Zelda and Grand Theft Auto games.
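The sprite- and tile-reuse idea above can be sketched in a few lines: the map stores small indices into a shared tile set, so the same art asset is drawn many times without duplicating it. All names and data below are hypothetical, for illustration only:

```python
# A shared tile set: a handful of art assets referenced by index.
TILES = {0: "grass", 1: "water", 2: "road"}

# The world is just a grid of indices into TILES, so a large map
# costs only one small integer per cell, not one image per cell.
WORLD = [
    [0, 0, 2, 0],
    [1, 1, 2, 0],
    [0, 0, 2, 2],
]

def tile_at(col, row):
    """Look up which art asset to draw for a given map cell."""
    return TILES[WORLD[row][col]]

print(tile_at(2, 0))  # -> road
print(tile_at(0, 1))  # -> water
```

A renderer then simply iterates over the visible cells and blits the referenced tile image at each grid position, which is what makes large parallel-projection worlds cheap in both memory and art effort.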
A side-scrolling game or side-scroller is a video game in which the viewpoint is taken from the side, and the onscreen characters generally move to the left or right. Games of this type make use of scrolling computer display technology, sometimes with parallax scrolling to suggest added depth. In many games the screen follows the player character such that the player character is always positioned near the center of the screen. In other games, the position of the screen will change according to the player character's movement, such that the player character is off-center and more space is shown in front of the character than behind. Sometimes, the screen will scroll not only forwards, but also backwards.