1.
Computer graphics
–
Computer graphics are pictures and films created using computers. Usually, the term refers to computer-generated image data created with help from specialized hardware and software. It is a vast and recent area of computer science; the phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, though sometimes referred to as CGI. The overall methodology depends heavily on the sciences of geometry and optics. Computer graphics is responsible for displaying art and image data effectively and meaningfully to the user, and it is also used for processing image data received from the physical world. Computer graphics development has had a significant impact on many types of media and has revolutionized animation, movies, advertising, and video games. The term computer graphics has been used in a broad sense to describe almost everything on computers that is not text or sound. Such imagery is found in and on television, newspapers, and weather reports; a well-constructed graph can present complex statistics in a form that is easier to understand and interpret. In the media, such graphs are used to illustrate papers, reports, and theses, and many tools have been developed to visualize data. Computer-generated imagery can be categorized into different types: two-dimensional, three-dimensional, and animated graphics. As technology has improved, 3D computer graphics have become more common. Computer graphics has emerged as a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Screens could display art since the Lumière brothers' use of mattes to create effects for the earliest films, dating from 1895. New kinds of displays were needed to process the wealth of information resulting from such projects; early projects like Whirlwind and SAGE introduced the CRT as a viable display and interaction interface and introduced the light pen as an input device. 
Douglas T. Ross of the Whirlwind SAGE system performed an experiment in 1954 in which a small program he wrote captured the movement of his finger. Electronics pioneer Hewlett-Packard went public in 1957, after incorporating the decade prior, and established ties with Stanford University through its founders. This began the transformation of the southern San Francisco Bay Area into the world's leading computer technology hub, now known as Silicon Valley. The field of computer graphics developed with the emergence of computer graphics hardware, and further advances in computing led to greater advancements in interactive computer graphics. In 1959, the TX-2 computer was developed at MIT's Lincoln Laboratory; the TX-2 integrated a number of new man-machine interfaces
2.
Graphics processing units
–
GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. In a personal computer, a GPU can be present on a video card. The term GPU was popularized by Nvidia in 1999, which marketed the GeForce 256 as the world's first GPU, or Graphics Processing Unit, presenting it as a processor with integrated transform, lighting, and triangle setup/clipping. Rival ATI Technologies coined the term visual processing unit, or VPU, with the release of the Radeon 9700 in 2002. Arcade system boards have been using specialized graphics chips since the 1970s. In early video game hardware, the RAM for frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor. Fujitsu's MB14241 video shifter was used to accelerate the drawing of sprite graphics for various 1970s arcade games from Taito and Midway, such as Gun Fight and Sea Wolf. The Namco Galaxian arcade system in 1979 used specialized graphics hardware supporting RGB color, multi-colored sprites, and tilemap backgrounds. The Galaxian hardware was used during the golden age of arcade video games by game companies such as Namco, Centuri, Gremlin, Irem, Konami, Midway, Nichibutsu, and Sega. In the home market, the Atari 2600 in 1977 used a video shifter called the Television Interface Adaptor. In the later Atari 8-bit computers, the ANTIC video processor allowed 6502 machine code subroutines to be triggered on scan lines by setting a bit on a display list instruction. ANTIC also supported smooth vertical and horizontal scrolling independent of the CPU, and it became one of the best known of what were known as graphics processing units in the 1980s. The Williams Electronics arcade games Robotron: 2084, Joust, and Sinistar featured custom blitter hardware. In 1985, the Commodore Amiga featured a custom graphics chip with a blitter unit accelerating bitmap manipulation, line draw, and area fill functions. It also included a coprocessor with its own instruction set, capable of manipulating graphics hardware registers in sync with the video beam. 
In 1986, Texas Instruments released the TMS34010, the first microprocessor with on-chip graphics capabilities; it could run general-purpose code, but it had a very graphics-oriented instruction set. In 1990–1992, this chip became the basis of the Texas Instruments Graphics Architecture Windows accelerator cards. In 1987, the IBM 8514 graphics system was released as one of the first video cards for IBM PC compatibles to implement fixed-function 2D primitives in electronic hardware. Fujitsu later competed with the FM Towns computer, released in 1989 with support for a full 16,777,216-color palette. In 1988, the first dedicated polygonal 3D graphics boards were introduced in arcades with the Namco System 21 and Taito Air System. In 1991, S3 Graphics introduced the S3 86C911, which its designers named after the Porsche 911 as an implication of the performance increase it promised. The 86C911 spawned a host of imitators; by 1995, all major PC graphics chip makers had added 2D acceleration support to their chips. By this time, fixed-function Windows accelerators had surpassed expensive general-purpose graphics coprocessors in Windows performance, and throughout the 1990s, 2D GUI acceleration continued to evolve. As manufacturing capabilities improved, so did the level of integration of graphics chips; arcade systems such as the Sega Model 2 and Namco Magic Edge Hornet Simulator in 1993 were capable of hardware T&L years before it appeared in consumer graphics cards
3.
Floating-point
–
In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation, so as to support a trade-off between range and precision. A number is, in general, represented approximately to a fixed number of significant digits and scaled using an exponent in some fixed base. For example, 1.2345 = 12345 × 10^−4, where 12345 is the significand, 10 the base, and −4 the exponent. The term floating point refers to the fact that a number's radix point can float; that is, it can be placed anywhere relative to the significant digits of the number. This position is indicated by the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A consequence of this dynamic range is that the numbers that can be represented are not uniformly spaced. Over the years, a variety of floating-point representations have been used in computers; however, since the 1990s, the most commonly encountered representation is that defined by the IEEE 754 standard. A floating-point unit is a part of a computer system designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number, usually as a string of digits, and there are several mechanisms by which strings of digits can represent numbers. In common mathematical notation, the string can be of any length. If the radix point is not specified, then the string implicitly represents an integer; in fixed-point systems, a position in the string is specified for the radix point. So a fixed-point scheme might be to use a string of 8 decimal digits with the point in the middle. The scaling factor, as a power of ten, is then indicated separately at the end of the number. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of: a signed digit string of a given length in a given base. 
This digit string is referred to as the significand, or mantissa; the length of the significand determines the precision to which numbers can be represented. The radix point position is assumed always to be somewhere within the significand, often just after or just before the most significant digit; this article generally follows the convention that the radix point is set just after the most significant digit. A floating-point number also consists of a signed integer exponent, which modifies the magnitude of the number. Using base 10 as an example, the number 152853.5047, which has ten decimal digits of precision, is represented as the significand 1528535047 together with 5 as the exponent. In storing such a number, the base need not be stored, since it will be the same for the entire range of supported numbers. Symbolically, this value is s ÷ b^(p−1) × b^e, where s is the significand, p is the precision (the number of digits in the significand), b is the base, and e is the exponent
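As a sketch of the convention just described, the short Python snippet below (the function name is ours, not from any standard library) decodes a (significand, base, exponent) triple with the radix point placed just after the most significant digit:

```python
# Decode a floating-point triple under the point-after-first-digit
# convention described above: value = s / b**(p - 1) * b**e,
# where the precision p is the number of digits in the significand.
def decode(significand: int, base: int, exponent: int) -> float:
    p = len(str(abs(significand)))      # precision: digit count of s
    return significand / base ** (p - 1) * base ** exponent

print(decode(1528535047, 10, 5))   # ~152853.5047, the worked example above
print(decode(12345, 10, 0))        # ~1.2345
```

Note that the result is itself a binary float, so the decoded values are only approximations of the decimal quantities, which is exactly the range-versus-precision trade-off the section describes.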
4.
Texture map
–
Texture mapping is a method for defining high-frequency detail, surface texture, or color information on a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Edwin Catmull in 1974. Texture mapping originally referred to a method that simply wrapped and mapped pixels from a texture onto a 3D surface. A texture map is an image applied to the surface of a shape or polygon; this may be a bitmap image or a procedural texture. Texture maps may be stored in image file formats and referenced by 3D model formats or material definitions. They may have one to three dimensions, although two dimensions are most common for visible surfaces. For use with modern hardware, texture map data may be stored in swizzled or tiled orderings to improve cache coherency. Rendering APIs typically manage texture map resources as buffers or surfaces. Texture maps usually contain RGB color data, and sometimes an additional channel for alpha blending, especially for billboards and decal overlay textures; it is possible to use the alpha channel for other purposes such as specularity. Multiple texture maps may be combined for control over specularity, normals, and displacement, and multiple texture images may be combined in texture atlases or array textures to reduce state changes on modern hardware. Modern hardware often supports cube map textures with multiple faces for environment mapping. Textures may be acquired by scanning or digital photography, authored in image manipulation software such as Photoshop, or painted onto 3D surfaces directly in a 3D paint tool such as Mudbox or ZBrush. The process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate; this may be done through explicit assignment of vertex attributes, manually edited in a 3D modelling package through UV unwrapping tools. It is also possible to associate a procedural transformation from 3D space to texture space with the material. 
This might be accomplished via planar projection or, alternatively, cylindrical or spherical mapping; more complex mappings may consider the distance along a surface to minimize distortion. These coordinates are interpolated across the faces of polygons to sample the texture map during rendering, and UV unwrapping tools typically provide a view in texture space for manual editing of texture coordinates. Some rendering techniques, such as subsurface scattering, may be performed approximately by texture-space operations. Multitexturing is the use of more than one texture at a time on a polygon. For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Microtextures or detail textures are used to add higher-frequency details, and dirt maps may add weathering and variation; modern graphics may use in excess of 10 layers, combined using shaders, for greater fidelity. Bump mapping has become popular in recent video games, as graphics hardware has become powerful enough to accommodate it in real time. The way that samples are calculated from the texels is governed by texture filtering
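To illustrate how a texture coordinate selects a texel, here is a minimal nearest-neighbour sampling sketch in Python (the 2×2 checker texture and function name are invented for the example; real hardware additionally performs the filtering mentioned above):

```python
# Nearest-neighbour texture sampling: map a (u, v) coordinate in
# [0, 1] x [0, 1] to the closest texel of a small RGB texture,
# stored as rows of (r, g, b) tuples.
def sample_nearest(texture, u, v):
    h = len(texture)
    w = len(texture[0])
    # Scale to texel indices; clamp so u == 1.0 or v == 1.0 stays in range.
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

WHITE, BLACK = (255, 255, 255), (0, 0, 0)
checker = [[WHITE, BLACK],
           [BLACK, WHITE]]

print(sample_nearest(checker, 0.1, 0.1))  # top-left texel: white
print(sample_nearest(checker, 0.9, 0.1))  # top-right texel: black
```

During rendering, the (u, v) pair fed to such a sampler is the value interpolated across the polygon's face from the texture coordinates assigned at its vertices.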
5.
Two-dimensional space
–
In physics and mathematics, two-dimensional space is a geometric model of the planar projection of the physical universe. The two dimensions are commonly called length and width, and both directions lie in the same plane. A sequence of n numbers can be understood as a location in n-dimensional space; when n = 2, the set of all such locations is called two-dimensional space or bi-dimensional space. Each reference line is called a coordinate axis, or just axis, of the system. The coordinates can also be defined as the positions of the perpendicular projections of the point onto the two axes, expressed as signed distances from the origin. The idea of this system was developed in 1637 in writings by Descartes and independently by Pierre de Fermat. Both authors used a single axis in their treatments, with lengths measured in reference to this axis. The concept of using a pair of axes was introduced later, after Descartes' La Géométrie was translated into Latin in 1649 by Frans van Schooten and his students; these commentators introduced several concepts while trying to clarify the ideas contained in Descartes' work. Later, the plane was thought of as a field, where any two points could be multiplied and, except for 0, divided; this was known as the complex plane. The complex plane is sometimes called the Argand plane because it is used in Argand diagrams. These are named after Jean-Robert Argand, although they were first described by the Norwegian–Danish land surveyor Caspar Wessel. Argand diagrams are frequently used to plot the positions of the poles and zeroes of a function in the complex plane. In mathematics, analytic geometry describes every point in two-dimensional space by means of two coordinates. Two perpendicular coordinate axes are given which cross each other at the origin; they are usually labeled x and y. Another widely used system is the polar coordinate system, which specifies a point in terms of its distance from the origin and its angle relative to a reference direction. 
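The relationship between the Cartesian and polar systems just described can be sketched in a few lines of Python (function names are ours, for illustration):

```python
import math

# Convert between Cartesian (x, y) and polar (r, theta) coordinates,
# the two plane coordinate systems described above. theta is the angle
# from the positive x-axis, in radians.
def to_polar(x, y):
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

r, theta = to_polar(3.0, 4.0)      # r = 5.0 (the 3-4-5 right triangle)
x, y = to_cartesian(r, theta)      # back to (3.0, 4.0), up to rounding
print(r, x, y)
```

Round-tripping a point through both conversions recovers the original coordinates up to floating-point rounding, which is why the two systems describe the same plane.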
In two dimensions, there are infinitely many regular polytopes: the polygons. The Schläfli symbol {p} represents a regular p-gon. The regular henagon and regular digon can be considered degenerate regular polygons; they can exist nondegenerately in non-Euclidean spaces, such as on a 2-sphere or a 2-torus. There also exist infinitely many non-convex regular polytopes in two dimensions, whose Schläfli symbols consist of rational numbers; they are called star polygons and share the vertex arrangements of the convex regular polygons
6.
Vector (mathematics)
–
A vector space is a collection of objects called vectors, which may be added together and multiplied by numbers, called scalars in this context. Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms. Euclidean vectors are an example of a vector space; they represent physical quantities such as forces: any two forces can be added to yield a third, and the multiplication of a force vector by a real multiplier is another force vector. In the same vein, but in a more geometric sense, vectors representing displacements in the plane or in three-dimensional space also form vector spaces. Vector spaces are the subject of linear algebra and are well characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. Infinite-dimensional vector spaces arise naturally in mathematical analysis as function spaces; these vector spaces are generally endowed with additional structure, which may be a topology, allowing the consideration of issues of proximity and continuity. Among these topologies, those that are defined by a norm or inner product are most commonly used; this is particularly the case for Banach spaces and Hilbert spaces. Historically, the first ideas leading to vector spaces can be traced back as far as the 17th century's analytic geometry, matrices, systems of linear equations, and Euclidean vectors. Today, vector spaces are applied throughout mathematics, science, and engineering; furthermore, vector spaces furnish an abstract, coordinate-free way of dealing with geometrical and physical objects such as tensors. This in turn allows the examination of local properties of manifolds by linearization techniques. Vector spaces may be generalized in several ways, leading to more advanced notions in geometry and abstract algebra. 
The concept of a vector space will first be explained by describing two particular examples. The first example of a vector space consists of arrows in a fixed plane, all starting at one fixed point; this is used in physics to describe forces or velocities. Given any two such arrows, v and w, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows and is denoted v + w. The other operation is multiplication of an arrow by a real number a: when a is positive, av is an arrow in the same direction as v, stretched or shrunk by the factor a; when a is negative, av is defined as the arrow pointing in the opposite direction instead. The second example consists of pairs of real numbers. Such a pair is written as (x, y); the sum of two such pairs and the multiplication of a pair by a number are defined componentwise: (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2) and a(x, y) = (ax, ay). The first example above reduces to this one if the arrows are represented by the pairs of Cartesian coordinates of their end points. A vector space over a field F is a set V together with two operations that satisfy eight axioms. Elements of V are commonly called vectors, and elements of F are commonly called scalars. The first operation, vector addition, takes any two vectors v and w and gives a third vector v + w; the second operation, scalar multiplication, takes any scalar a and any vector v and gives another vector av. In this article, vectors are represented in boldface to distinguish them from scalars
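The componentwise definitions for coordinate pairs can be written out directly; the following Python sketch (helper names are ours) implements them and spot-checks two of the vector-space axioms on concrete values:

```python
# The coordinate-pair vector space R^2: addition is componentwise and
# a scalar multiplies every component, matching the definitions
# (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2) and a(x, y) = (ax, ay).
def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(a, v):
    return (a * v[0], a * v[1])

v, w = (1.0, 2.0), (3.0, -1.0)
print(add(v, w))      # (4.0, 1.0)
print(scale(2.0, v))  # (2.0, 4.0)

# Two of the axioms, checked on these particular vectors:
assert add(v, w) == add(w, v)                                    # commutativity
assert scale(2.0, add(v, w)) == add(scale(2.0, v), scale(2.0, w))  # distributivity
```

A check on sample values is of course not a proof of the axioms; here the equalities hold for all pairs because addition and multiplication of real numbers themselves satisfy these laws in each component.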
7.
2D computer graphics
–
2D computer graphics is the computer-based generation of digital images, mostly from two-dimensional models and by techniques specific to them. The term may also stand for the branch of computer science that comprises such techniques. A model-based representation is more flexible than a fixed raster image, since it can be rendered at different resolutions to suit different output devices; for these reasons, documents and illustrations are often stored or transmitted as 2D graphic files. 2D computer graphics started in the 1950s, based on vector graphics devices. These were largely supplanted by raster-based devices in the following decades. The PostScript language and the X Window System protocol were landmark developments in the field. 2D graphics models may combine geometric models, digital images, text to be typeset, and mathematical functions and equations; these components can be modified and manipulated by two-dimensional geometric transformations such as translation, rotation, and scaling. In object-oriented graphics, the image is described indirectly by an object endowed with a self-rendering method, a procedure which assigns colors to the pixels by an arbitrary algorithm. Complex models can be built by combining simpler objects, in the paradigms of object-oriented programming. In Euclidean geometry, a translation moves every point a constant distance in a specified direction. A translation can be described as a rigid motion; other rigid motions include rotations and reflections. A translation can also be interpreted as the addition of a constant vector to every point. A translation operator is an operator T_δ such that T_δ f(v) = f(v + δ). If v is a fixed vector, then the translation Tv will work as Tv(p) = p + v. If T is a translation, then the image of a subset A under the function T is the translate of A by T; the translate of A by Tv is often written A + v. 
In a Euclidean space, any translation is an isometry. The set of all translations forms the translation group T, which is isomorphic to the space itself and is a normal subgroup of the Euclidean group E. The quotient group of E by T is isomorphic to the orthogonal group O: E / T ≅ O. To represent a translation as a matrix operation, we write the 3-dimensional vector w = (wx, wy, wz) using 4 homogeneous coordinates as w = (wx, wy, wz, 1). The product of two translation matrices is obtained by adding their translation vectors; because addition of vectors is commutative, multiplication of translation matrices is therefore also commutative, unlike multiplication of arbitrary matrices. In linear algebra, a rotation matrix is a matrix that is used to perform a rotation in Euclidean space. The matrix R = ((cos θ, −sin θ), (sin θ, cos θ)) rotates points in the xy-Cartesian plane counterclockwise through an angle θ about the origin of the Cartesian coordinate system
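The homogeneous-coordinate trick above (for the 2D case, a point (x, y) becomes (x, y, 1)) can be sketched in plain Python; the helper names are ours, and a real renderer would use a linear-algebra library instead:

```python
import math

# 2D transforms in homogeneous coordinates: translation becomes a
# 3x3 matrix product, and rotation embeds the matrix R given above.
def translation(dx, dy):
    return [[1, 0, dx],
            [0, 1, dy],
            [0, 0, 1]]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0],
            [s,  c, 0],
            [0,  0, 1]]

def apply(m, p):
    # Multiply the 3x3 matrix m by the homogeneous point p = (x, y, w).
    x, y, w = p
    return tuple(row[0] * x + row[1] * y + row[2] * w for row in m)

p = (1.0, 0.0, 1.0)                      # the point (1, 0)
print(apply(translation(2, 3), p))       # (3.0, 3.0, 1.0)
print(apply(rotation(math.pi / 2), p))   # approximately (0.0, 1.0, 1.0)
```

With both operations expressed as matrices, a chain of translations and rotations collapses into a single matrix product, which is the main reason graphics pipelines use homogeneous coordinates.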
8.
Graphical user interface
–
GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces, which require commands to be typed on a computer keyboard. The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, and smartphones, as well as smaller household, office, and industrial controls. Designing the visual composition and temporal behavior of a GUI is an important part of application programming in the area of human–computer interaction. Its goal is to enhance the efficiency and ease of use of the underlying logical design of a stored program. Methods of user-centered design are used to ensure that the visual language introduced in the design is well tailored to the tasks. The visible graphical interface features of an application are sometimes referred to as chrome or GUI. Typically, users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of an interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller allows a structure in which the interface is independent from, and indirectly linked to, application functions; this allows users to select or design a different skin at will. Good user interface design relates more to users and less to system architecture. Large widgets, such as windows, usually provide a frame or container for the main presentation content, such as a web page; smaller ones usually act as user-input tools. A GUI may be designed for the requirements of a vertical market as an application-specific graphical user interface. By the 1990s, cell phones and handheld game systems also employed application-specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation–multimedia center combinations. 
A GUI uses a combination of technologies and devices to provide a platform that users can interact with; a series of elements conforming to a visual language has evolved to represent information stored in computers. This makes it easier for people with few computer skills to work with and use computer software. The most common combination of such elements in GUIs is the windows, icons, menus, pointer (WIMP) paradigm, especially in personal computers. The WIMP style of interaction uses a virtual input device to represent the position of a pointing device, most often a mouse. Available commands are compiled together in menus, and actions are performed by making gestures with the pointing device. A window manager facilitates the interactions between windows, applications, and the windowing system. The windowing system handles hardware devices such as pointing devices and graphics hardware; window managers and other software combine to simulate the desktop environment with varying degrees of realism. Smaller mobile devices such as personal digital assistants and smartphones typically use the WIMP elements with different unifying metaphors, due to constraints in space
9.
3D models
–
In 3D computer graphics, 3D modeling is the process of developing a mathematical representation of any three-dimensional surface of an object via specialized software. The product is called a 3D model; it can be displayed as a two-dimensional image through a process called 3D rendering or used in a computer simulation of physical phenomena. The model can also be physically created using 3D printing devices. Models may be created automatically or manually; the manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. 3D modeling software is a class of 3D computer graphics software used to produce 3D models; individual programs of this class are called modeling applications or modelers. Three-dimensional models represent a physical body using a collection of points in 3D space, connected by various geometric entities such as triangles, lines, and curved surfaces. Being a collection of data, 3D models can be created by hand or algorithmically, and their surfaces may be further defined with texture mapping. 3D models are used anywhere 3D graphics and CAD are used; in fact, their use predates the use of 3D graphics on personal computers. Many computer games used pre-rendered images of 3D models as sprites before computers could render them in real time. Today, 3D models are used in a wide variety of fields. The medical industry uses detailed models of organs; these may be created with multiple 2D image slices from an MRI or CT scan. The movie industry uses them as characters and objects for animated and real-life motion pictures. The video game industry uses them as assets for computer and video games. The science sector uses them as highly detailed models of chemical compounds. 
The architecture industry uses them to demonstrate proposed buildings and landscapes in lieu of traditional, physical architectural models. The engineering community uses them as designs of new devices, vehicles, and structures, as well as for a host of other uses. In recent decades, the earth science community has started to construct 3D geological models as a standard practice. 3D models can also be the basis for physical devices that are built with 3D printers or CNC machines. Almost all 3D models can be divided into two categories: solid models, which define the volume of the object they represent, and shell or boundary models, which represent only the surface of the object. Almost all visual models used in games and film are shell models. Solid and shell modeling can create functionally identical objects. Shell models must be manifold to be meaningful as a real object. Polygonal meshes are by far the most common representation. Level sets are a useful representation for deforming surfaces which undergo many topological changes, such as fluids
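A polygonal mesh of the kind described above is just vertices plus faces that index into them. The Python sketch below (data and names are a made-up example) builds a tetrahedron and checks the manifold property mentioned for shell models, using the fact that in a closed manifold triangle mesh every edge is shared by exactly two faces:

```python
from collections import Counter

# A minimal shell (boundary) model: 3D vertex positions plus triangles
# given as vertex-index triples -- the polygonal-mesh representation.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]
triangles = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]  # tetrahedron faces

# Count how many faces share each (undirected) edge.
edges = Counter()
for a, b, c in triangles:
    for e in ((a, b), (b, c), (a, c)):
        edges[tuple(sorted(e))] += 1

# For a closed manifold shell, every edge count is exactly 2.
print(all(n == 2 for n in edges.values()))  # True for this tetrahedron
```

This edge-count test is only a necessary condition for a watertight shell, but it already catches holes (edges with one face) and non-manifold joins (edges with three or more).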
10.
Bitmap image
–
In computing, a bitmap is a mapping from some domain (for example, a range of integers) to bits, that is, values which are zero or one. It is also called a bit array or bitmap index. In computer graphics, when the domain is a rectangle, a bitmap gives a way to store a binary image, that is, an image in which each pixel is either black or white. The more general term pixmap refers to a map of pixels, where each one may store more than two colors, though bitmap is often used for this as well. In some contexts, the term bitmap implies one bit per pixel, while pixmap is used for images with multiple bits per pixel. A bitmap is a type of memory organization or image file format used to store digital images. The term bitmap comes from computer programming terminology, meaning just a map of bits; now, along with pixmap, it commonly refers to the similar concept of a spatially mapped array of pixels. Raster images in general may be referred to as bitmaps or pixmaps, whether synthetic or photographic. Besides BMP, other file formats that store literal bitmaps include InterLeaved Bitmap, Portable Bitmap, X Bitmap, and Wireless Application Protocol Bitmap. Similarly, most other image file formats, such as JPEG, TIFF, PNG, and GIF, also store bitmap images. In typical uncompressed bitmaps, image pixels are stored with a color depth of 1, 4, 8, 16, 24, 32, or 48 bits per pixel. Pixels of 8 bits and fewer can represent either grayscale or indexed color. An alpha channel may be stored in a separate bitmap, where it is similar to a grayscale bitmap, or in a fourth channel that, for example, converts a 24-bit image to 32 bits per pixel. The bits representing the bitmap pixels may be packed or unpacked, depending on the color depth; a pixel in the picture will occupy at least n/8 bytes, where n is the bit depth. This formula excludes header size and color palette size; due to the effects of row padding to align each row start to a storage unit boundary such as a word, additional bytes may be needed. Microsoft called these device-independent bitmaps, or DIBs, and the format for them is called DIB file format or BMP file format. 
According to Microsoft support, a device-independent bitmap (DIB) is a format used to define device-independent bitmaps in various color resolutions. The main purpose of DIBs is to allow bitmaps to be moved from one device to another. A DIB is an external format, in contrast to a device-dependent bitmap, which appears in the system as a bitmap object. A DIB is normally transported in metafiles, BMP files, and the Clipboard. Here, device independent refers to the format, or storage arrangement, and should not be confused with device-independent color. The X Window System uses a similar XBM format for black-and-white images. Numerous other uncompressed bitmap file formats are in use, though most are not widely used
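The n/8-bytes-per-pixel figure and the row-padding caveat above combine into a simple storage calculation. The Python sketch below (function name is ours) computes the padded size of one row, assuming the 4-byte row alignment that BMP files use:

```python
# Bytes needed for one bitmap row at n bits per pixel, padded so each
# row starts on a 4-byte boundary (the alignment used by BMP files).
def row_size(width_px: int, bits_per_pixel: int, align: int = 4) -> int:
    raw = (width_px * bits_per_pixel + 7) // 8   # round up to whole bytes
    return (raw + align - 1) // align * align    # pad to the boundary

print(row_size(100, 24))  # 300 bytes: already a multiple of 4
print(row_size(101, 24))  # 303 raw bytes, padded to 304
print(row_size(7, 1))     # 7 bits fit in 1 raw byte, padded to 4
```

The image's pixel-data size is then row_size multiplied by the height; header and palette bytes, as noted above, come on top of that.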
12.
Lightsource
–
Light is electromagnetic radiation within a certain portion of the electromagnetic spectrum. The word usually refers to visible light, which is visible to the human eye and is responsible for the sense of sight. Visible light is defined as having wavelengths in the range of 400–700 nanometres, or 4.00 × 10^−7 to 7.00 × 10^−7 m. This wavelength range corresponds to a frequency range of roughly 430–750 terahertz. The main source of light on Earth is the Sun. Sunlight provides the energy that green plants use to create sugars, mostly in the form of starches, which release energy into the living things that digest them. This process of photosynthesis provides virtually all the energy used by living things. Historically, another important source of light for humans has been fire; with the development of electric lights and power systems, electric lighting has effectively replaced firelight. Some species of animals generate their own light, a process called bioluminescence: for example, fireflies use light to locate mates, and vampire squids use it to hide themselves from prey. Visible light, like all types of electromagnetic radiation, is experimentally found to always move at the speed of light in a vacuum. In physics, the term light sometimes refers to electromagnetic radiation of any wavelength; in this sense, gamma rays, X-rays, microwaves, and radio waves are also light. Like all types of EM radiation, visible light is emitted and absorbed in tiny packets called photons and exhibits properties of both waves and particles; this property is referred to as the wave–particle duality. The study of light, known as optics, is an important research area in modern physics. Generally, EM radiation, or EMR, is classified by wavelength into radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. The behavior of EMR depends on its wavelength: higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths. When EMR interacts with single atoms and molecules, its behavior depends on the amount of energy per quantum it carries. 
There exist animals that are sensitive to various types of infrared; infrared sensing in snakes depends on a kind of natural thermal imaging, in which tiny packets of cellular water are raised in temperature by the infrared radiation. EMR in this range causes molecular vibration and heating effects, which is how these animals detect it. Beyond the range of visible light, ultraviolet light becomes invisible to humans, mostly because it is absorbed by the cornea below 360 nanometers and the internal lens below 400. Furthermore, the rods and cones located in the retina of the eye cannot detect the very short ultraviolet wavelengths and are in fact damaged by ultraviolet. Many animals with eyes that do not require lenses are able to detect ultraviolet by quantum photon-absorption mechanisms. Various sources define visible light as narrowly as 420–680 nm or as broadly as 380–800 nm
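The wavelength figures quoted above relate to the frequency range through ν = c/λ. A minimal sketch of that conversion (the function name is illustrative, not from the source):

```python
# Converting visible-light wavelengths to frequencies via nu = c / lambda.
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_to_frequency_thz(wavelength_nm):
    """Return the frequency in terahertz for a wavelength given in nanometres."""
    wavelength_m = wavelength_nm * 1e-9
    return C / wavelength_m / 1e12

# The 400-700 nm visible range maps to roughly 430-750 THz.
print(round(wavelength_to_frequency_thz(700)))  # 428 (red end)
print(round(wavelength_to_frequency_thz(400)))  # 749 (violet end)
```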
13.
3D rendering
–
3D rendering is the 3D computer graphics process of automatically converting 3D wire frame models into 2D images with 3D photorealistic effects or non-photorealistic rendering on a computer. Rendering is the process of creating the actual 2D image or animation from the prepared scene; this can be compared to taking a photo or filming the scene after the setup is finished in real life. Several different, and often specialized, rendering methods have been developed, ranging from the distinctly non-realistic wireframe rendering through polygon-based rendering to advanced techniques such as scanline rendering and ray tracing. Rendering may take from fractions of a second to days for a single image/frame; in general, different methods are better suited for either photo-realistic rendering or real-time rendering. Rendering for interactive media, such as games and simulations, is calculated and displayed in real time; here the goal is to show as much information as the eye can process in a fraction of a second. The primary goal is to achieve as high a degree of photorealism as possible at an acceptable minimum rendering speed. Rendering software may simulate such visual effects as lens flares, depth of field or motion blur; these are attempts to simulate visual phenomena resulting from the characteristics of cameras. These effects can lend an element of realism to a scene, and this is the basic method employed in games, interactive worlds and VRML. The rapid increase in processing power has allowed a progressively higher degree of realism even for real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's GPU. Animations for non-interactive media, such as feature films and video, are rendered much more slowly. 
Non-real-time rendering enables the leveraging of limited processing power to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk and can then be transferred to media such as motion picture film or optical disk. These frames are then displayed sequentially at high rates, typically 24, 25, or 30 frames per second. When the goal is photo-realism, techniques such as ray tracing or radiosity are employed; this is the basic method employed in digital media and artistic works. Techniques have been developed for the purpose of simulating other naturally occurring effects; examples include particle systems, volumetric sampling, caustics, and subsurface scattering. The rendering process is expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a higher degree of realistic rendering
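The playback rates mentioned above imply a fixed time budget per frame for real-time rendering. A small illustrative sketch:

```python
# Per-frame time budget at common playback rates: a real-time renderer must
# finish each frame within this window, while offline rendering may take
# seconds to days per frame.
def frame_budget_ms(fps):
    """Milliseconds available to render one frame at a given frame rate."""
    return 1000.0 / fps

for fps in (24, 25, 30):
    print(f"{fps} fps -> {frame_budget_ms(fps):.2f} ms per frame")
```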
14.
Digital painting
–
Digital painting is a method of creating an art object digitally and/or a technique for making digital art in the computer. As a technique, it refers to a graphics software program that uses a virtual canvas and a virtual painting box of brushes, colors and other instruments. The virtual box contains many instruments that do not exist outside the computer, and the specific visual characteristics of a digital painting can be traced back to the software. The option to undo, without a trace, up to twenty or more brush strokes or other actions permits a more spontaneous way of working. The choice of program determines whether the output has the characteristics of a watercolor, lino cut, screen print, oil painting etc. Thus, digital painting is not so much a new medium as a new appearance of the range of existing mediums. Painting is one of at least five directions that can be distinguished in early art. Computer generated art springs directly from artificial intelligence: the image is the result of a string of zeros and ones, much the same as music notes on a score are not music themselves. Video art likewise relies on the manipulation of moving images. Traditional digital painting creates an image in a stroke-by-stroke, brush-in-hand fashion. Within the category of computer generated painting, a distinction is made between code-mode and design-mode. The difference can be clarified with the aid of web page design: a web designer who wants to give a web page a black background can do so by writing, in a language that the computer can understand, <body bgcolor=#000000>. The earliest digital paintings were made in this method, where the artist writes code; code-mode painting offered a lot of freedom in style and idiom, though intricate forms were difficult to program. Modern programs used for web design usually offer a design mode alongside a code mode; the advantage of a design mode is that it allows the user to build web pages without the need for programming. 
The designer can choose to construct an image and the software will generate the necessary code. Graphics programs used for digital painting take this one step further: the design mode is the only mode, and the image is translated into the codes that are needed for viewing, printing etc. without interference from the artist. Most of these programs feature a number of ready-made shapes, such as circles, ellipses, squares, and many brush points. While it is not possible for a human hand to create exactly identical shapes or construct a perfect circle, it is possible to subject shapes to a variety of mathematical operations; programs for fractal art, for instance, assist the artist in creating visually complex structures of great mathematical regularity. The creative process in traditional and digital painting is more or less the same, but the finished digital painting resides on the hard disk of a computer. The usual way to make it presentable and salable is to project it on a carrier, such as paper
15.
Zbrush
–
ZBrush is a digital sculpting tool that combines 3D/2.5D modeling, texturing and painting. It uses a proprietary pixol technology which stores lighting, color, material, and depth information. The main difference between ZBrush and more traditional modeling packages is that it is more akin to sculpting. ZBrush is used for creating models for use in movies and games. ZBrush uses dynamic levels of resolution to allow sculptors to make global or local changes to their models. ZBrush is best known for being able to sculpt medium- to high-frequency details that were traditionally painted in bump maps. The resulting mesh details can then be exported as normal maps to be used on a low-poly version of that same model; they can also be exported as a displacement map, although in that case the lower-poly version generally requires more resolution. Or, once completed, the 3D model can be projected to the background, and work can then begin on another 3D model which can be used in the same scene. This feature lets users work with complicated scenes without heavy processor overhead. ZBrush was developed by the company Pixologic Inc, founded by Ofer Alon and Jack Rimokh. The software was presented in 1999 at SIGGRAPH; demo version 1.55 was released in 2002, and version 3.1 was released in 2007. ZBrush 4 for Windows and Mac systems was announced on April 21, 2009 for an August release; version 3.5 was made available in September the same year, and includes some of the newer features initially intended for ZBrush 4. Through GoZ, available in Version 4, ZBrush offers integration with Autodesk Maya, Autodesk 3ds Max, Cinema 4D, LightWave 3D, Poser Pro, DAZ Studio and EIAS. Like a pixel, each pixol contains information on X and Y position and color values; additionally, it contains information on depth, orientation and material. ZBrush related files store pixol information, but when these maps are exported they are flattened and the pixol data is lost. 
This technique is similar in concept to a voxel, another kind of 3D pixel. ZBrush comes with many features to aid in the sculpting of models and meshes. 3D Brushes: the initial ZBrush download comes with 30 3D sculpting brushes, with more available for download; each brush offers unique attributes as well as allowing general control over hardness, intensity, and size. Alphas: used to create a pattern or shape. Polypaint: polypainting allows users to paint on a surface without the need to first assign a texture map, by adding color directly to the polygons. Illustration: ZBrush also gives the ability to sculpt in 2.5D; a pixol put down when sculpting or illustrating in 2.5D contains information on its own color, depth, material, position, and lighting. Transpose: ZBrush also has a feature that is similar to animation in other 3D programs; the transpose feature allows a user to isolate a part of the model and pose it. GoZ: introduced in ZBrush 3.2 OSX, GoZ automates setting up shading networks for normal, displacement, and texture maps of the 3D models in GoZ-enabled applications
16.
Unit vector
–
In mathematics, a unit vector in a normed vector space is a vector of length 1. A unit vector is denoted by a lowercase letter with a circumflex, or hat. The term direction vector is used to describe a unit vector being used to represent spatial direction. 2D spatial directions represented this way are equivalent numerically to points on the unit circle, and the same construct is used to specify spatial directions in 3D, where each direction is equivalent numerically to a point on the unit sphere. The normalized vector or versor û of a non-zero vector u is the unit vector in the direction of u, i.e. û = u / ‖u‖, where ‖u‖ is the norm (length) of u. The term normalized vector is used as a synonym for unit vector. Unit vectors are often chosen to form the basis of a vector space: every vector in the space may be written as a linear combination of unit vectors. By definition, in a Euclidean space the dot product of two unit vectors is a scalar value amounting to the cosine of the smaller subtended angle. In three-dimensional Euclidean space, the cross product of two arbitrary unit vectors is a third vector orthogonal to both of them having length equal to the sine of the smaller subtended angle. Unit vectors may be used to represent the axes of a Cartesian coordinate system; they are often denoted using normal vector notation rather than standard unit vector notation. In most contexts it can be assumed that i, j, and k denote the unit vectors of a 3D Cartesian coordinate system. The notations x̂, ŷ, ẑ, with or without hat, are also used, particularly in contexts where i, j, k might lead to confusion with another quantity. When a unit vector in space is expressed, with Cartesian notation, as a combination of i, j, k, the value of each component is equal to the cosine of the angle formed by the vector with the respective basis vector. This is one of the methods used to describe the orientation of a straight line, segment of straight line, or oriented axis. 
It is important to note that ρ̂ and φ̂ are functions of φ, and when differentiating or integrating in cylindrical coordinates these unit vectors themselves must also be operated on; for a more complete description, see Jacobian matrix. To minimize degeneracy, the polar angle is usually taken as 0° ≤ θ ≤ 180°. It is especially important to note the context of any ordered triplet written in spherical coordinates; here, the American physics convention is used
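The normalization û = u/‖u‖ described in this entry can be sketched directly; the helper names below are illustrative, not from the source:

```python
import math

# Normalizing a vector: divide each component by the Euclidean norm,
# producing the unit vector pointing in the same direction.
def norm(u):
    return math.sqrt(sum(c * c for c in u))

def normalize(u):
    """Return the unit vector u-hat = u / ||u||."""
    n = norm(u)
    if n == 0:
        raise ValueError("the zero vector has no direction")
    return tuple(c / n for c in u)

u_hat = normalize((3.0, 4.0, 0.0))
print(u_hat)                          # (0.6, 0.8, 0.0)
print(math.isclose(norm(u_hat), 1.0)) # True
```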
17.
3D space
–
Three-dimensional space is a geometric setting in which three values are required to determine the position of an element. This is the informal meaning of the term dimension. In physics and mathematics, a sequence of n numbers can be understood as a location in n-dimensional space; when n = 3, the set of all such locations is called three-dimensional Euclidean space. It is commonly represented by the symbol ℝ3, and it serves as a three-parameter model of the physical universe in which all known matter exists. However, this space is only one example of a large variety of spaces in three dimensions called 3-manifolds. Furthermore, in this case, these three values can be labeled by any combination of three chosen from the terms width, height, depth, and breadth. In mathematics, analytic geometry describes every point in space by means of three coordinates. Three coordinate axes are given, each perpendicular to the other two at the origin, the point at which they cross; they are usually labeled x, y, and z. Two distinct points determine a line. Three distinct points are either collinear or determine a unique plane; four distinct points can either be collinear, coplanar, or determine the entire space. Two distinct lines can intersect, be parallel or be skew. Two parallel lines, or two intersecting lines, lie in a plane, so skew lines are lines that do not meet and do not lie in a common plane. Two distinct planes can either meet in a line or be parallel. Three distinct planes, no pair of which are parallel, can either meet in a common line, meet in a unique common point, or have no point in common; in the last case, the three lines of intersection of each pair of planes are mutually parallel. A line can lie in a given plane, intersect that plane in a unique point, or be parallel to the plane; in the last case, there will be lines in the plane that are parallel to the given line. A hyperplane is a subspace of one dimension less than the dimension of the full space. The hyperplanes of a three-dimensional space are the two-dimensional subspaces, that is, the planes
18.
Homogeneous coordinate
–
They have the advantage that the coordinates of points, including points at infinity, can be represented using finite coordinates. Formulas involving homogeneous coordinates are often simpler and more symmetric than their Cartesian counterparts; if the homogeneous coordinates of a point are multiplied by a non-zero scalar then the resulting coordinates represent the same point. Since homogeneous coordinates are also given to points at infinity, the number of coordinates required to allow this extension is one more than the dimension of the projective space being considered. For example, two homogeneous coordinates are required to specify a point on the projective line and three homogeneous coordinates are required to specify a point in the projective plane. The real projective plane can be thought of as the Euclidean plane with additional points added, which are called points at infinity and are considered to lie on a new line, the line at infinity. There is a point at infinity corresponding to each direction, informally defined as the limit of a point that moves in that direction away from the origin. Parallel lines in the Euclidean plane are said to intersect at a point at infinity corresponding to their common direction. Given a point (x, y) on the Euclidean plane, for any non-zero real number Z, the triple (xZ, yZ, Z) is called a set of homogeneous coordinates for the point. By this definition, multiplying the three homogeneous coordinates by a common, non-zero factor gives a new set of homogeneous coordinates for the same point. In particular, (x, y, 1) is such a system of homogeneous coordinates for the point (x, y). For example, the Cartesian point (1, 2) can be represented in homogeneous coordinates as (1, 2, 1) or (2, 4, 2). The original Cartesian coordinates are recovered by dividing the first two positions by the third; thus unlike Cartesian coordinates, a single point can be represented by infinitely many homogeneous coordinates. 
The equation of a line through the origin may be written nx + my = 0 where n and m are not both 0; in parametric form this can be written x = mt, y = −nt. Let Z = 1/t, so the coordinates of a point on the line may be written (m, −n, 1/t). In the limit, as t approaches infinity, in other words, as the point moves away from the origin, Z approaches 0 and the homogeneous coordinates of the point become (m, −n, 0). Thus we define (m, −n, 0) as the homogeneous coordinates of the point at infinity corresponding to the direction of the line nx + my = 0. To summarize: any point in the projective plane is represented by a triple (X, Y, Z), called the homogeneous coordinates or projective coordinates of the point. The point represented by a given set of homogeneous coordinates is unchanged if the coordinates are multiplied by a common factor. Conversely, two sets of homogeneous coordinates represent the same point if and only if one is obtained from the other by multiplying all the coordinates by the same non-zero constant. When Z is not 0 the point represented is the point (X/Z, Y/Z) in the Euclidean plane; when Z is 0 the point represented is a point at infinity
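The Cartesian-to-homogeneous correspondence described in this entry can be sketched as follows (the helper names are illustrative, not from the source):

```python
# A Cartesian point (x, y) has homogeneous coordinates (xZ, yZ, Z) for any
# non-zero Z; dividing the first two positions by the third recovers (x, y).
def to_homogeneous(x, y, z=1.0):
    """Any non-zero Z gives a valid set of homogeneous coordinates."""
    return (x * z, y * z, z)

def to_cartesian(triple):
    """Z == 0 represents a point at infinity, with no Cartesian equivalent."""
    x, y, z = triple
    if z == 0:
        raise ValueError("point at infinity has no Cartesian equivalent")
    return (x / z, y / z)

p = to_homogeneous(1.0, 2.0)        # (1.0, 2.0, 1.0)
q = to_homogeneous(1.0, 2.0, 2.0)   # (2.0, 4.0, 2.0) -- same projective point
print(to_cartesian(p) == to_cartesian(q))  # True
```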
19.
3D vector
–
In mathematics, physics, and engineering, a Euclidean vector is a geometric object that has magnitude and direction. Vectors can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B, and denoted by A B →. A vector is what is needed to carry the point A to the point B; the term was first used by 18th century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points and the direction refers to the direction of displacement from A to B. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space. Vectors play an important role in physics: the velocity and acceleration of a moving object, and many other physical quantities, can be usefully thought of as vectors. Although most of them do not represent distances, their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors. The concept of vector, as we know it today, evolved gradually over a period of more than 200 years; about a dozen people made significant contributions. Giusto Bellavitis abstracted the basic idea in 1835 when he established the concept of equipollence: working in a Euclidean plane, he made equipollent any pair of line segments of the same length and orientation. Essentially he realized an equivalence relation on the pairs of points in the plane. The term vector was introduced by William Rowan Hamilton as part of a quaternion, which is a sum q = s + v of a real number s and a 3-dimensional vector v. 
Like Bellavitis, Hamilton viewed vectors as representative of classes of equipollent directed segments. Grassmann's work was largely neglected until the 1870s. Peter Guthrie Tait carried the quaternion standard after Hamilton; his 1867 Elementary Treatise of Quaternions included extensive treatment of the nabla or del operator ∇. In 1878 Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product, and this approach made vector calculations available to engineers and others working in three dimensions and skeptical of the fourth. Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901 Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures. In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a direction. It is formally defined as a directed line segment, or arrow
20.
SIMD
–
Single instruction, multiple data (SIMD) is a class of parallel computers in Flynn's taxonomy. It describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously. Thus, such machines exploit data level parallelism, but not concurrency: there are simultaneous (parallel) computations, but only a single process (instruction) at a given moment. SIMD is particularly applicable to common tasks like adjusting the contrast in a digital image or adjusting the volume of digital audio. Most modern CPU designs include SIMD instructions in order to improve the performance of multimedia use. Vector processing was especially popularized by Cray in the 1970s and 1980s. The first era of modern SIMD machines was characterized by massively parallel processing-style supercomputers such as the Thinking Machines CM-1; these machines had many limited-functionality processors that would work in parallel. Supercomputing moved away from the SIMD approach when inexpensive scalar MIMD approaches based on commodity processors such as the Intel i860 XP became more powerful. The current era of SIMD processors grew out of the desktop-computer market rather than the supercomputer market. Sun Microsystems introduced SIMD integer instructions in its VIS instruction set extensions in 1995, and MIPS followed suit with their similar MDMX system. The first widely deployed desktop SIMD was Intel's MMX extensions to the x86 architecture in 1996; this sparked the introduction of the much more powerful AltiVec system in the Motorola PowerPC and IBM POWER systems. Intel responded in 1999 by introducing the all-new SSE system; since then, there have been several extensions to the SIMD instruction sets for both architectures. A modern supercomputer is almost always a cluster of MIMD machines, and a modern desktop computer is often a multiprocessor MIMD machine where each processor can execute short-vector SIMD instructions. 
An application that may take advantage of SIMD is one where the same value is being added to (or subtracted from) a large number of data points. One example would be changing the brightness of an image: each pixel of an image consists of three values for the brightness of the red, green and blue portions of the color. To change the brightness, the R, G and B values are read from memory, a value is added to them, and the resulting values are written back out to memory. With a SIMD processor there are two improvements to this process. For one, the data is understood to be in blocks: instead of a series of instructions saying retrieve this pixel, now retrieve the next pixel, a SIMD processor will have a single instruction that effectively says retrieve n pixels. For a variety of reasons, this can take much less time than retrieving each pixel individually. Another advantage is that the instruction operates on all loaded data in a single operation; in other words, if the SIMD system works by loading up eight data points at once, the add operation being applied to the data will happen to all eight values at the same time. Not all algorithms can be vectorized easily. Note: batch-pipeline systems are most advantageous for cache control when implemented with SIMD intrinsics, but they are not exclusive to SIMD features
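The brightness example above can be modelled in plain Python. A real SIMD implementation would use hardware intrinsics (e.g. SSE or AltiVec); here the "single operation applied to a whole block of data" idea is only simulated:

```python
# Pure-Python sketch of the brightness adjustment described above: one value
# is added to every R, G, B channel, clamped to the valid 0-255 range.
def adjust_brightness(pixels, delta):
    """Add delta to every channel value, clamping to 0-255."""
    return [min(255, max(0, value + delta)) for value in pixels]

# Interleaved R, G, B values for two pixels.
image = [100, 150, 200, 0, 255, 30]
print(adjust_brightness(image, 60))  # [160, 210, 255, 60, 255, 90]
```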
21.
Matrix (mathematics)
–
In mathematics, a matrix is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. For example, a matrix with two rows and three columns has dimensions 2 × 3. The individual items in an m × n matrix A, often denoted by ai,j, where 1 ≤ i ≤ m and 1 ≤ j ≤ n, are called its elements or entries. Provided that they have the same size, two matrices can be added or subtracted element by element. The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second. Any matrix can be multiplied element-wise by a scalar from its associated field. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. The product of two matrices is a matrix that represents the composition of two linear transformations. Another application of matrices is in the solution of systems of linear equations. If the matrix is square, it is possible to deduce some of its properties by computing its determinant; for example, a square matrix has an inverse if and only if its determinant is not zero. Insight into the geometry of a linear transformation is obtainable from the matrix's eigenvalues. Applications of matrices are found in most scientific fields; in computer graphics, they are used to manipulate 3D models and project them onto a 2-dimensional screen. Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions, and matrices are used in economics to describe systems of economic relationships. A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations. Matrix decomposition methods simplify computations, both theoretically and practically, and algorithms that are tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, expedite computations in the finite element method and other computations. 
Infinite matrices occur in planetary theory and in atomic theory; a simple example of an infinite matrix is the matrix representing the derivative operator, which acts on the Taylor series of a function. A matrix is a rectangular array of numbers or other mathematical objects for which operations such as addition and multiplication are defined. Most commonly, a matrix over a field F is a rectangular array of scalars, each of which is a member of F. Most of this article focuses on real and complex matrices, that is, matrices whose elements are real numbers or complex numbers. More general types of entries are discussed below
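The operations described in this entry — entrywise addition, the multiplication rule (columns of the first must equal rows of the second), and the determinant test for invertibility — can be sketched minimally (helper names are illustrative):

```python
# Entrywise addition, matrix multiplication, and the 2x2 determinant.
def mat_add(a, b):
    """Add two matrices of the same size element by element."""
    return [[a[i][j] + b[i][j] for j in range(len(a[0]))] for i in range(len(a))]

def mat_mul(a, b):
    """Multiply a (m x n) by b (n x p); columns of a must equal rows of b."""
    assert len(a[0]) == len(b), "columns of A must equal rows of B"
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def det2(m):
    """Determinant of a 2x2 matrix; non-zero means the matrix is invertible."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

a = [[1, 2], [3, 4]]
b = [[0, 1], [1, 0]]
print(mat_mul(a, b))  # [[2, 1], [4, 3]]
print(det2(a))        # -2, so a has an inverse
```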
22.
Transformation matrix
–
In linear algebra, linear transformations can be represented by matrices. If T is a linear transformation mapping Rn to Rm and x → is a column vector with n entries, then T(x →) = A x → for some m×n matrix A, called the transformation matrix of T. There are alternative expressions of transformation matrices involving row vectors that are preferred by some authors. Matrices allow arbitrary linear transformations to be represented in a consistent format, suitable for computation; this also allows transformations to be concatenated easily. Linear transformations are not the only ones that can be represented by matrices: some transformations that are non-linear on an n-dimensional Euclidean space Rn can be represented as linear transformations on the (n+1)-dimensional space Rn+1, and these include both affine transformations and projective transformations. For this reason, 4×4 transformation matrices are used in 3D computer graphics. With respect to an n-dimensional matrix, an n+1-dimensional matrix can be described as an augmented matrix. The distinction between active and passive transformations is important: by default, by transformation, mathematicians usually mean active transformations, while physicists could mean either. Put differently, a passive transformation refers to description of the same object as viewed from two different coordinate frames. The columns of A are the images of the basis vectors; in other words, A = [T(e →1) T(e →2) ⋯ T(e →n)]. For example, the function T(x) = 5x is a linear transformation; nevertheless, the method to find the components remains the same. Being diagonal means that all coefficients a i,j except a i,i are zero, leaving only one term in the sum ∑ a i,j e → i above. The surviving diagonal elements, a i,i, are known as eigenvalues and designated with λ i in the defining equation, and the resulting equation is known as the eigenvalue equation. The eigenvectors and eigenvalues are derived from it via the characteristic polynomial. With diagonalization, it is often possible to translate to and from eigenbases. 
In two dimensions, linear transformations can be represented using a 2×2 transformation matrix. A stretch in the xy-plane is a linear transformation which enlarges all distances in a particular direction by a constant factor but does not affect distances in the perpendicular direction. We only consider stretches along the x-axis and y-axis: a stretch along the x-axis has the form x′ = kx, y′ = y for some positive constant k. (In formats such as SVG, the y axis points down.) For shear mapping, there are two possibilities: a shear parallel to the x axis has x ′ = x + k y and y ′ = y. To project a vector orthogonally onto a line through the origin with direction vector u → = (ux, uy), use the matrix A = (1 / ∥ u → ∥2) [ [ux², ux uy], [ux uy, uy²] ]. As with reflections, the orthogonal projection onto a line that does not pass through the origin is an affine, not linear, transformation
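The 2×2 stretch and shear described in this entry can be sketched directly (the function names are illustrative, not from the source):

```python
# 2x2 transforms from the text: a stretch along x (x' = kx, y' = y) and a
# shear parallel to the x axis (x' = x + ky, y' = y), applied to a 2D vector.
def apply(m, v):
    (a, b), (c, d) = m
    x, y = v
    return (a * x + b * y, c * x + d * y)

def stretch_x(k):
    """[[k, 0], [0, 1]]: enlarges x-distances by k, leaves y unchanged."""
    return ((k, 0.0), (0.0, 1.0))

def shear_x(k):
    """[[1, k], [0, 1]]: shear parallel to the x axis."""
    return ((1.0, k), (0.0, 1.0))

print(apply(stretch_x(2.0), (3.0, 5.0)))  # (6.0, 5.0)
print(apply(shear_x(0.5), (2.0, 4.0)))    # (4.0, 4.0)
```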
23.
Homogeneous coordinates
–
They have the advantage that the coordinates of points, including points at infinity, can be represented using finite coordinates. Formulas involving homogeneous coordinates are often simpler and more symmetric than their Cartesian counterparts; if the homogeneous coordinates of a point are multiplied by a non-zero scalar then the resulting coordinates represent the same point. Since homogeneous coordinates are also given to points at infinity, the number of coordinates required to allow this extension is one more than the dimension of the projective space being considered. For example, two homogeneous coordinates are required to specify a point on the projective line and three homogeneous coordinates are required to specify a point in the projective plane. The real projective plane can be thought of as the Euclidean plane with additional points added, which are called points at infinity and are considered to lie on a new line, the line at infinity. There is a point at infinity corresponding to each direction, informally defined as the limit of a point that moves in that direction away from the origin. Parallel lines in the Euclidean plane are said to intersect at a point at infinity corresponding to their common direction. Given a point (x, y) on the Euclidean plane, for any non-zero real number Z, the triple (xZ, yZ, Z) is called a set of homogeneous coordinates for the point. By this definition, multiplying the three homogeneous coordinates by a common, non-zero factor gives a new set of homogeneous coordinates for the same point. In particular, (x, y, 1) is such a system of homogeneous coordinates for the point (x, y). For example, the Cartesian point (1, 2) can be represented in homogeneous coordinates as (1, 2, 1) or (2, 4, 2). The original Cartesian coordinates are recovered by dividing the first two positions by the third; thus unlike Cartesian coordinates, a single point can be represented by infinitely many homogeneous coordinates. 
The equation of a line through the origin may be written nx + my = 0 where n and m are not both 0; in parametric form this can be written x = mt, y = −nt. Let Z = 1/t, so the coordinates of a point on the line may be written (m, −n, 1/t). In the limit, as t approaches infinity, in other words, as the point moves away from the origin, Z approaches 0 and the homogeneous coordinates of the point become (m, −n, 0). Thus we define (m, −n, 0) as the homogeneous coordinates of the point at infinity corresponding to the direction of the line nx + my = 0. To summarize: any point in the projective plane is represented by a triple (X, Y, Z), called the homogeneous coordinates or projective coordinates of the point. The point represented by a given set of homogeneous coordinates is unchanged if the coordinates are multiplied by a common factor. Conversely, two sets of homogeneous coordinates represent the same point if and only if one is obtained from the other by multiplying all the coordinates by the same non-zero constant. When Z is not 0 the point represented is the point (X/Z, Y/Z) in the Euclidean plane; when Z is 0 the point represented is a point at infinity
24.
Bounding box
–
In geometry, the minimum or smallest bounding or enclosing box for a point set in N dimensions is the box with the smallest measure (area, volume, or hypervolume in higher dimensions) within which all the points lie. When other kinds of measure are used, the minimum box is usually named accordingly, e.g., the minimum-perimeter bounding box. The minimum bounding box of a point set is the same as the minimum bounding box of its convex hull. The term box/hyperrectangle comes from its usage in the Cartesian coordinate system; in the two-dimensional case it is called the minimum bounding rectangle. The axis-aligned minimum bounding box for a given point set is its minimum bounding box subject to the constraint that the edges of the box are parallel to the coordinate axes. Bounding boxes are used, for example, in geometry and its applications when it is required to find intersections in a set of objects, since checking two boxes for overlap is usually a less expensive operation than checking for the actual intersection of the objects they enclose. The arbitrarily oriented minimum bounding box is the minimum bounding box computed subject to no constraint on orientation; a three-dimensional rotating calipers algorithm can find the minimum-volume arbitrarily-oriented bounding box of a three-dimensional point set in cubic time. See also: bounding sphere, bounding volume, minimum bounding rectangle
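The axis-aligned case and the cheap overlap test it enables can be sketched in a few lines (helper names are illustrative):

```python
# Axis-aligned minimum bounding box of a 2D point set, plus the inexpensive
# box-overlap test used as a first pass before exact intersection checks.
def aabb(points):
    """Return ((min_x, min_y), (max_x, max_y)) for a non-empty point set."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

def overlaps(box_a, box_b):
    """Two axis-aligned boxes overlap iff they overlap on every axis."""
    (ax0, ay0), (ax1, ay1) = box_a
    (bx0, by0), (bx1, by1) = box_b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

box = aabb([(1, 4), (-2, 0), (3, 2)])
print(box)                              # ((-2, 0), (3, 4))
print(overlaps(box, ((2, 3), (5, 6))))  # True
```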
25.
Blend modes
–
Blend modes in digital image editing are used to determine how two layers are blended into each other. The default blend mode in most applications is simply to hide the lower layer with whatever is present in the top layer; however, since each pixel has a numerical representation, a large number of ways to blend two layers is possible. The top layer is not necessarily called a layer in the application; it may be applied with a painting or editing tool. Most graphics editing programs, like Adobe Photoshop and GIMP, allow the user to modify the basic blend modes, for example by applying different levels of opacity to the top picture. In the formulas shown here, values go from 0.0 to 1.0. Normal is the standard blend mode, which uses the top layer alone: F = b, where a is the value of a color channel in the underlying layer and b is that of the corresponding channel of the upper layer. The result is most typically merged into the bottom using simple alpha compositing; the compositing step results in the top shape, as defined by its alpha channel. The dissolve mode takes random pixels from both layers: with high opacity, most pixels are taken from the top layer; with low opacity, most pixels are taken from the bottom layer. No anti-aliasing is used with this blend mode, so the pictures may look grainy and harsh. Multiply and Screen blend modes are basic modes for darkening and lightening images respectively. There are several different combinations of them, such as Overlay, Soft Light, Vivid Light, and Linear Light. Multiply blend mode multiplies the numbers for each pixel of the top layer with the corresponding pixel for the bottom layer. The result is a darker picture: F = ab, where a is the base layer value and b is the top layer value. This mode is symmetric: exchanging the two layers does not change the result. If the two layers contain the same picture, multiply blend mode is equivalent to a quadratic curve, or gamma correction with γ = 2. 
If one layer contains a uniform color, multiplying scales the other layer by that value; this is also equivalent to using that value as opacity when doing a “normal mode” blend with a black bottom layer. With Screen blend mode, the values of the pixels in the two layers are inverted, multiplied, and then inverted again; this yields the opposite effect to multiply. The result is a brighter picture: F = 1 − (1 − a)(1 − b), where a is the base layer value and b is the top layer value
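The per-channel formulas above, with channel values in the range 0.0 to 1.0, translate directly into code:

```python
def blend_normal(a, b):
    # normal mode uses the top layer alone: F = b
    return b

def blend_multiply(a, b):
    # multiply darkens: F = a * b (symmetric in the two layers)
    return a * b

def blend_screen(a, b):
    # screen lightens: invert both values, multiply, invert the result
    return 1 - (1 - a) * (1 - b)
```

Note the duality: screening with white (1.0) always yields white, just as multiplying by black (0.0) always yields black.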
26.
Alpha compositing
–
In computer graphics, alpha compositing is the process of combining an image with a background to create the appearance of partial or full transparency. It is often useful to render image elements in separate passes; for example, compositing is used extensively when combining computer-rendered image elements with live footage. In order to combine these image elements correctly, it is necessary to keep an associated matte for each element. To store matte information, the concept of an alpha channel was introduced by Alvy Ray Smith in the late 1970s. In a 2D image element, which stores a color for each pixel, additional data is stored in the alpha channel with a value between 0 and 1. A value of 0 means that the pixel does not have any coverage information and is transparent; a value of 1 means that the pixel is opaque because the geometry completely overlapped the pixel. If an alpha channel is used in an image, there are two common representations available: straight alpha and premultiplied alpha. With straight alpha, the RGB components represent the color of the object or pixel; with premultiplied alpha, the RGB components represent the color of the object or pixel, adjusted for its opacity by multiplication. An obvious advantage of this is that, in certain situations, it can save a multiplication at compositing time; however, the most significant advantages of using premultiplied alpha are for correctness and simplicity rather than performance: premultiplied alpha allows correct filtering and blending. In addition, premultiplied alpha allows regions of regular alpha blending and regions of additive blending to be encoded within the same image. Assume that the color is expressed using straight RGBA tuples. If a pixel were fully green at 50% opacity, its RGBA would be (0, 1, 0, 0.5); if this pixel instead uses premultiplied alpha, all of the RGB values are multiplied by the alpha of 0.5 and then the alpha is appended to the end to yield (0, 0.5, 0, 0.5). 
In this case, the 0.5 value for the G channel actually indicates 100% green intensity; for this reason, knowing whether a file uses straight or premultiplied alpha is essential to correctly process or composite it. Premultiplied alpha has some advantages over normal alpha blending because premultiplied alpha blending is associative. Moreover, premultiplied alpha has a single representation for transparent pixels. Ordinary interpolation without premultiplied alpha leads to RGB information leaking out of fully transparent regions; when interpolating or filtering images with abrupt borders between transparent and opaque regions, this can result in borders of colors that were not visible in the original image. Errors also occur in areas of semitransparency because the RGB components are not correctly weighted. With the existence of an alpha channel, it is possible to express compositing image operations using a compositing algebra. For example, given two image elements A and B, the most common compositing operation is to combine the images such that A appears in the foreground; this can be expressed as A over B
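A minimal sketch of the "A over B" operator in both representations, assuming RGBA tuples with channels in 0..1 (straight alpha for the first function, premultiplied for the second); this is an illustration of the formulas above, not any particular library's API:

```python
def over_straight(fg, bg):
    # "A over B" with straight (non-premultiplied) alpha
    fr, fgreen, fb, fa = fg
    br, bgreen, bb, ba = bg
    out_a = fa + ba * (1 - fa)
    if out_a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    # each channel needs weighting by both alphas and a divide by out_a
    blend = lambda f, b: (f * fa + b * ba * (1 - fa)) / out_a
    return (blend(fr, br), blend(fgreen, bgreen), blend(fb, bb), out_a)

def over_premultiplied(fg, bg):
    # same operator with premultiplied alpha: one multiply-add per channel,
    # applied uniformly to R, G, B and alpha alike
    fa = fg[3]
    return tuple(f + b * (1 - fa) for f, b in zip(fg, bg))
```

The premultiplied form is simpler and, being associative, lets a chain of over operations be grouped in any order.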
27.
Alpha blending
–
Alpha blending is another name for the alpha-compositing process described in the previous entry: a translucent foreground color is combined with a background color, with the foreground's alpha value weighting the two, to produce a new blended color
28.
Rendering equation
–
In computer graphics, the rendering equation is an integral equation in which the equilibrium radiance leaving a point is given as the sum of emitted plus reflected radiance. It was simultaneously introduced into computer graphics by David Immel et al. and James Kajiya in 1986. The various realistic rendering techniques in computer graphics attempt to solve this equation. The physical basis for the rendering equation is the law of conservation of energy. Assuming that L denotes radiance, we have that at each position and direction, the outgoing light L_o is the sum of the emitted light L_e and the reflected light. The reflected light itself is the sum, over all incoming directions, of the incoming light multiplied by the surface reflectance and by an attenuation factor often written as cos θ_i, where θ_i is the angle between the incoming direction and the surface normal. Two noteworthy features are its linearity (it is composed only of multiplications and additions) and its spatial homogeneity; these mean a wide range of factorings and rearrangements of the equation are possible. It is a Fredholm integral equation of the second kind, similar to those that arise in quantum field theory. Note the equation's spectral and time dependence: L_o may be sampled at, or integrated over, sections of the spectrum to obtain, for example, a trichromatic color value. A pixel value for a frame in an animation may be obtained by fixing t. Note that a solution to the equation is the function L_o; the function L_i is related to L_o via a ray-tracing operation. Solving the rendering equation for any given scene is the primary challenge in realistic rendering. One approach to solving the equation is based on finite element methods, leading to the radiosity algorithm. Another approach, using Monte Carlo methods, has led to many different algorithms including path tracing, photon mapping, and Metropolis light transport, among others. Although the equation is general, it does not capture every aspect of light reflection
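Written out (in the form commonly given, omitting the spectral and time arguments for brevity), the equation reads:

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, d\omega_i
```

where L_o is the outgoing radiance at point x in direction ω_o, L_e is the emitted radiance, f_r is the bidirectional reflectance distribution function, L_i is the incoming radiance, and (ω_i · n) is the cos θ_i attenuation factor, with the integral taken over the hemisphere Ω around the surface normal n.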
29.
Mathematical model
–
A mathematical model is a description of a system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in the sciences and engineering disciplines; physicists, engineers, statisticians, operations research analysts, and economists use mathematical models most extensively. A model may help to explain a system, to study the effects of different components, and to make predictions about behaviour. Mathematical models can take many forms, including dynamical systems, statistical models, and differential equations. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with results of repeatable experiments; lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. In the physical sciences, the traditional mathematical model contains four major elements: governing equations, defining equations, constitutive equations, and constraints. Mathematical models are composed of relationships and variables. Relationships can be described by operators, such as algebraic operators, functions, and differential operators; variables are abstractions of system parameters of interest that can be quantified. A model is considered linear if all the operators in it exhibit linearity, and nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context. For example, in a statistical linear model it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model. 
If one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model. Nonlinearity, even in simple systems, is often associated with phenomena such as chaos. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity
30.
Simulated
–
Simulation is the imitation of the operation of a real-world process or system over time. The act of simulating something first requires that a model be developed; the model represents the system itself, whereas the simulation represents the operation of the system over time. Simulation is used in many contexts, such as simulation of technology for performance optimization, safety engineering, testing, training, education, and video games. Often, computer experiments are used to study simulation models. Simulation is also used with scientific modelling of natural systems or human systems to gain insight into their functioning, as in economics. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Physical simulation refers to simulation in which physical objects are substituted for the real thing; these physical objects are often chosen because they are smaller or cheaper than the actual object or system. Simulation fidelity is used to describe the accuracy of a simulation, and is broadly classified into one of three categories: low, medium, and high. Simulation in failure analysis refers to simulation in which an environment or set of conditions is created to identify the cause of equipment failure; this can be the best and fastest method to identify the failure cause. A computer simulation is an attempt to model a real-life or hypothetical situation on a computer so that it can be studied to see how the system works. By changing variables in the simulation, predictions may be made about the behaviour of the system; it is a tool to investigate the behaviour of the system under study. A good example of the usefulness of using computers to simulate can be found in the field of network traffic simulation; in such simulations, the model behaviour will change each simulation run according to the set of initial parameters assumed for the environment. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed-form analytic solutions are not possible. 
Several software packages exist for running computer-based simulation modeling that make the modeling almost effortless. Modern usage of the term computer simulation may encompass virtually any computer-based representation. In some cases the computer simulates the subject machine itself; accordingly, in theoretical computer science the term simulation is a relation between state transition systems, useful in the study of operational semantics. Less theoretically, an application of computer simulation is to simulate computers using computers. For example, simulators have been used to debug a microprogram or sometimes commercial application programs. Simulators may also be used to interpret fault trees, or to test VLSI logic designs before they are constructed. Symbolic simulation uses variables to stand for unknown values. In the field of optimization, simulations of physical processes are often used in conjunction with evolutionary computation to optimize control strategies. Simulation is extensively used for educational purposes, and it is frequently used by way of adaptive hypermedia
31.
Spatial anti-aliasing
–
In digital signal processing, spatial anti-aliasing is the technique of minimising the distortion artifacts known as aliasing when representing a high-resolution image at a lower resolution. Anti-aliasing is used in photography, computer graphics, and digital audio. Anti-aliasing means removing signal components that have a higher frequency than the recording device is able to resolve; this removal is done before sampling at the lower resolution. When sampling is performed without removing this part of the signal, it causes undesirable artifacts such as the black-and-white noise near the top of Figure 1-a. In digital photography, optical anti-aliasing filters are made of birefringent materials; the anti-aliasing filter essentially blurs the image slightly in order to reduce the resolution to or below that achievable by the digital sensor. In computer graphics, anti-aliasing improves the appearance of polygon edges; however, it incurs a performance cost for the graphics card and uses more video memory. The level of anti-aliasing determines how smooth polygon edges are. Figure 1-a illustrates the visual distortion that occurs when anti-aliasing is not used: near the top of the image, where the checker-board tiles become very small, the pattern can no longer be resolved and degenerates into noise. In contrast, Figure 1-b shows an anti-aliased version of the scene: the checker-board near the top blends into grey, which is usually the desired effect when the resolution is insufficient to show the detail. Even near the bottom of the image, the edges appear much smoother in the anti-aliased image. Figure 1-c shows another anti-aliasing algorithm, based on the sinc filter, which is considered better than the algorithm used in 1-b. Figure 2 shows magnified portions of Figures 1-a and 1-c for comparison; in Figure 1-c, anti-aliasing has interpolated the brightness of the pixels at the boundaries to produce grey pixels, since the space is occupied by both black and white tiles. 
These grey pixels help make Figure 1-c appear much smoother than Figure 1-a at the original magnification. Anti-aliasing is often applied in rendering text on a computer screen, to suggest smooth contours that better emulate the appearance of text produced by conventional ink-and-paper printing. Particularly with fonts displayed on typical LCD screens, it is common to use subpixel rendering techniques like ClearType; sub-pixel rendering requires special colour-balanced anti-aliasing filters to turn what would be severe colour distortion into barely-noticeable colour fringes. Pixel geometry affects all of this, whether the anti-aliasing and sub-pixel addressing are done in software or hardware. A simple approach is to compute, for each pixel, the fraction of its area covered by the shape and use that coverage as the pixel's brightness; this is a fairly fast function, but it is relatively low-quality, and it gets slower as the complexity of the shape increases. For purposes requiring very high-quality graphics or very complex vector shapes, this is probably not the best approach. Note: a DrawPixel routine in such an approach cannot blindly set the colour value to the coverage fraction calculated; it must add the new value to the value already at that location, up to a maximum of 1. Otherwise, the brightness of each pixel will equal only the last value calculated for that location, which produces a bad result. In the signal-processing approach, the image is instead regarded as a signal
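A minimal sketch of supersampling anti-aliasing on a checker-board like the one in the figures: each output pixel averages several samples of the underlying pattern, so pixels straddling a tile boundary come out grey instead of snapping to black or white (the function names and tile size are illustrative):

```python
def checker(x, y, size=2.5):
    # ideal (infinite-resolution) checker-board: 1.0 = white tile, 0.0 = black
    return float((int(x // size) + int(y // size)) % 2)

def render_aliased(w, h):
    # one point sample per pixel: every pixel is pure black or pure white
    return [[checker(x, y) for x in range(w)] for y in range(h)]

def render_antialiased(w, h, factor=4):
    # factor x factor samples per pixel, averaged; boundary pixels become grey
    img = []
    for y in range(h):
        row = []
        for x in range(w):
            s = sum(checker(x + (sx + 0.5) / factor, y + (sy + 0.5) / factor)
                    for sy in range(factor) for sx in range(factor))
            row.append(s / (factor * factor))
        img.append(row)
    return img
```

The aliased renderer produces only 0.0 and 1.0 values; the supersampled one produces intermediate greys exactly where a tile edge crosses a pixel, which is the interpolation effect described for Figure 1-c.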
32.
Visual artefact
–
Visual artifacts are anomalies in the visual representation of, e.g., digital graphics and imagery. The cases can differ, but common causes include: fan or cooling issues; drivers configured with values the card does not support; and overclocking beyond the capabilities of the video card. The specific appearance of visual artifacting can also differ from case to case. In microscopy, an artifact is an apparent structural detail that is caused by the processing of the specimen and is thus not a legitimate feature of the specimen. For example, the artificial elongation and distortion produced when smearing cells or tissue for microscopy is an artifact
33.
Pixel
–
In digital imaging, a pixel, or picture element, is a physical point in a raster image, or the smallest addressable element in a display device. The address of a pixel corresponds to its physical coordinates. LCD pixels are manufactured in a two-dimensional grid, and are often represented using dots or squares. Each pixel is a sample of an original image; more samples typically provide more accurate representations of the original. The intensity of each pixel is variable; in color imaging systems, a color is typically represented by three or four component intensities such as red, green, and blue, or cyan, magenta, yellow, and black. The word pixel is based on a contraction of pix and el. It was first published in 1965 by Frederic C. Billingsley of JPL, to describe the elements of video images from space probes to the Moon. Billingsley had learned the word from Keith E. McFarland, at the Link Division of General Precision in Palo Alto; McFarland said simply that it was "in use at the time". The word is a combination of pix, for picture, and element. The word pix appeared in Variety magazine headlines in 1932, as an abbreviation for the word pictures, in reference to movies; by 1938, pix was being used in reference to still pictures by photojournalists. The concept of a picture element dates to the earliest days of television, and some authors explain pixel as picture cell, as early as 1972. In graphics and in image and video processing, pel is often used instead of pixel; for example, IBM used it in their Technical Reference for the original PC. A pixel is generally thought of as the smallest single component of a digital image; however, the definition is highly context-sensitive. For example, there can be printed pixels in a page, or pixels carried by electronic signals, or represented by digital values, or pixels on a display device, or pixels in a digital camera. This list is not exhaustive and, depending on context, synonyms include pel, sample, byte, bit, and dot. Pixels can be used as a unit of measure, such as 2400 pixels per inch, 640 pixels per line, or spaced 10 pixels apart. 
For example, a high-quality photographic image may be printed with 600 ppi on a 1200 dpi inkjet printer; even higher dpi numbers, such as the 4800 dpi quoted by printer manufacturers since 2002, do not mean much in terms of achievable resolution. The more pixels used to represent an image, the closer the result can resemble the original. The number of pixels in an image is sometimes called the resolution, though resolution has a more specific definition; pixel counts can also be expressed as a single number, as in a "three-megapixel" digital camera, which has a nominal three million pixels. The pixels, or color samples, that form an image may or may not be in one-to-one correspondence with screen pixels. In computing, an image composed of pixels is known as a bitmapped image or a raster image
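The pixel-count and density arithmetic above is straightforward; as a small illustration (the dimensions and helper names are just examples):

```python
def megapixels(width, height):
    # total pixel count, in millions of pixels
    return width * height / 1_000_000

def print_size_inches(width, height, ppi):
    # physical size of a printed image at a given pixels-per-inch density
    return width / ppi, height / ppi
```

A 2048 × 1536 image holds about 3.1 million pixels, hence "3 megapixels", and at 600 ppi it prints at roughly 3.4 × 2.6 inches.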
34.
Ambient occlusion
–
In computer graphics, ambient occlusion is a shading and rendering technique used to calculate how exposed each point in a scene is to ambient lighting. The interior of a tube, for example, is typically more occluded than its outer surfaces, and it becomes darker the deeper you go inside the tube. Ambient occlusion can be seen as an accessibility value that is calculated for each surface point. The result is a diffuse, non-directional shading effect that casts no clear shadows but that darkens enclosed and sheltered areas and can affect the rendered image's overall tone; it is often used as a post-processing effect. Unlike local methods such as Phong shading, ambient occlusion is a global method, meaning that the illumination at each point is a function of other geometry in the scene; however, it is a crude approximation to full global illumination. The appearance achieved by ambient occlusion alone is similar to the way an object might appear on an overcast day. Newer technologies are making true ambient occlusion feasible even in real time. Ambient occlusion is related to accessibility shading, which determines appearance based on how easy it is for a surface to be touched by various elements, and it has been popularized in production animation due to its relative simplicity and efficiency. In the industry, ambient occlusion is often referred to as sky light. The ambient occlusion shading model has the nice property of offering a better perception of the 3D shape of the displayed objects. One approach is to render the view from a surface point p̄ by rasterizing black geometry against a white background; this is an example of a gathering or inside-out approach, whereas other algorithms employ scattering or outside-in techniques. In addition to the ambient occlusion value, a bent normal vector n̂_b is often generated; the bent normal can be used to look up incident radiance from an environment map to approximate image-based lighting
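The accessibility idea can be sketched as a Monte Carlo estimate: cast random rays over the hemisphere around the surface normal and count how many are blocked by nearby geometry. Here `is_occluded` is assumed to be a scene-provided visibility test; this is an illustrative sketch, not a production algorithm:

```python
import math
import random

def ambient_occlusion(point, normal, is_occluded, n_samples=64):
    # estimate the fraction of the hemisphere around `normal` that is open:
    # 1.0 = fully exposed to ambient light, 0.0 = fully occluded
    hits = 0
    for _ in range(n_samples):
        # random direction on the unit sphere via normalized Gaussians
        d = [random.gauss(0.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in d))
        d = [c / length for c in d]
        # flip the direction into the hemisphere around the surface normal
        if sum(a * b for a, b in zip(d, normal)) < 0.0:
            d = [-c for c in d]
        if is_occluded(point, d):
            hits += 1
    return 1.0 - hits / n_samples
```

A point on an open plane returns 1.0 (no rays blocked); a point deep inside a closed cavity returns values near 0.0, producing the darkening of sheltered areas described above.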
35.
Global illumination
–
Global illumination, or indirect illumination, is a general name for a group of algorithms used in 3D computer graphics that are meant to add more realistic lighting to 3D scenes. Theoretically, reflections, refractions, and shadows are all examples of global illumination, because when simulating them, one object affects the rendering of another. In practice, however, only the simulation of diffuse inter-reflection or caustics is called global illumination. Images rendered using global illumination algorithms often appear more photorealistic than those rendered using only direct illumination algorithms; however, such images are computationally more expensive and consequently much slower to generate. One common approach is to compute the global illumination of a scene and store that information with the geometry; the stored data can then be used to generate images from different viewpoints, for generating walkthroughs of a scene without having to go through expensive lighting calculations repeatedly. A crude approximation of indirect light is a constant ambient lighting term. Though this method of approximation is easy to perform computationally, when used alone it does not provide a realistic effect: ambient lighting is known to flatten shadows in 3D scenes, making the overall visual effect more bland. However, used properly, ambient lighting can be an efficient way to make up for a lack of processing power. Increasingly, specialized algorithms are used in 3D programs that can effectively simulate global illumination. These algorithms are numerical approximations to the rendering equation; well-known algorithms for computing global illumination include path tracing, photon mapping, and radiosity. Another way to simulate real global illumination is the use of high-dynamic-range images, also known as environment maps; this process is known as image-based lighting
36.
Array data structure
–
In computer science, an array data structure, or simply an array, is a data structure consisting of a collection of elements, each identified by at least one array index or key. An array is stored so that the position of each element can be computed from its index tuple by a mathematical formula. The simplest type of data structure is a linear array, also called a one-dimensional array. For example, an array of 10 32-bit integer variables, with indices 0 through 9, may be stored as 10 words at memory addresses 2000, 2004, 2008, …, 2036, so that the element with index i has the address 2000 + 4 × i. The memory address of the first element of an array is called the first address, foundation address, or base address. Because the mathematical concept of a matrix can be represented as a two-dimensional grid, two-dimensional arrays are also sometimes called matrices. In some cases the term vector is used in computing to refer to an array. Arrays are often used to implement tables, especially lookup tables; the word table is sometimes used as a synonym of array. Arrays are among the oldest and most important data structures, and are used by almost every program. They are also used to implement many other data structures, such as lists and strings. They effectively exploit the addressing logic of computers: in most modern computers and many external storage devices, the memory is a one-dimensional array of words, whose indices are their addresses. Processors, especially vector processors, are often optimized for array operations. Arrays are useful mostly because the element indices can be computed at run time; among other things, this feature allows a single iterative statement to process arbitrarily many elements of an array. For that reason, the elements of an array data structure are required to have the same size and should use the same data representation. The set of valid index tuples and the addresses of the elements are usually, but not always, fixed while the array is in use. Array types are often implemented by array structures; however, in some languages they may be implemented by hash tables, linked lists, search trees, or other data structures. 
The first digital computers used machine-language programming to set up and access array structures for data tables, and for vector and matrix computations. John von Neumann wrote the first array-sorting program in 1945, during the building of the first stored-program computer (p. 159). Array indexing was originally done by self-modifying code, and later using index registers. Some mainframes designed in the 1960s, such as the Burroughs B5000 and its successors, used memory segmentation to perform index-bounds checking in hardware. Assembly languages generally have no special support for arrays, other than what the machine itself provides. The earliest high-level programming languages, including FORTRAN, Lisp, COBOL, and ALGOL 60, had support for multi-dimensional arrays. In C++, class templates exist for multi-dimensional arrays whose dimension is fixed at runtime as well as for runtime-flexible arrays. Arrays are used to implement mathematical vectors and matrices, as well as other kinds of rectangular tables. Many databases, small and large, consist of one-dimensional arrays whose elements are records. Arrays are used to implement other data structures, such as lists, heaps, hash tables, deques, queues, stacks, and strings. One or more large arrays are sometimes used to emulate in-program dynamic memory allocation, particularly memory pool allocation; historically, this has sometimes been the only way to allocate dynamic memory portably
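The addressing formula from the example above (base address 2000, 4-byte elements), together with its row-major two-dimensional generalization, can be sketched as:

```python
def element_address(base, elem_size, index):
    # 1-D array: element i lives at base + elem_size * i
    return base + elem_size * index

def row_major_address(base, elem_size, n_cols, row, col):
    # 2-D row-major array: rows are stored one after another, so the element
    # at (row, col) is the (row * n_cols + col)-th element overall
    return base + elem_size * (row * n_cols + col)
```

This constant-time address computation, independent of how many elements precede the one requested, is exactly why array indexing is O(1).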