1.
Computer graphics
–
Computer graphics are pictures and films created using computers. Usually, the term refers to computer-generated image data created with help from specialized hardware and software. It is a vast and recent area of computer science; the phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, though sometimes referred to as CGI. The overall methodology depends heavily on the sciences of geometry and optics. Computer graphics is responsible for displaying art and image data effectively and meaningfully to the user, and it is also used for processing image data received from the physical world. Computer graphic development has had a significant impact on many types of media and has revolutionized animation, movies, advertising, and video games. The term computer graphics has been used in a broad sense to describe almost everything on computers that is not text or sound. Such imagery is found in and on television, newspapers, and weather reports; a well-constructed graph can present complex statistics in a form that is easier to understand and interpret. In the media such graphs are used to illustrate papers, reports, and theses, and many tools have been developed to visualize data. Computer-generated imagery can be categorized into different types: two-dimensional (2D), three-dimensional (3D), and animated graphics. As technology has improved, 3D computer graphics have become more common. Computer graphics has emerged as a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Screens could display art since the Lumière brothers' use of mattes to create effects for the earliest films dating from 1895. New kinds of displays were needed to process the wealth of information resulting from such projects; early projects like Whirlwind and SAGE introduced the CRT as a viable display and interaction interface and introduced the light pen as an input device.
Douglas T. Ross of the Whirlwind SAGE system performed an experiment in 1954 in which a small program he wrote captured the movement of his finger. Electronics pioneer Hewlett-Packard went public in 1957, after incorporating the decade prior, and established ties with Stanford University through its founders. This began the transformation of the southern San Francisco Bay Area into the world's leading computer technology hub, now known as Silicon Valley. The field of computer graphics developed with the emergence of computer graphics hardware, and further advances in computing led to greater advancements in interactive computer graphics. In 1959, the TX-2 computer was developed at MIT's Lincoln Laboratory; the TX-2 integrated a number of new man-machine interfaces.

2.
3D computer graphics
–
Such images may be stored for viewing later or displayed in real time. 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model, and 3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences: a 3D model is the representation of any three-dimensional object, and a model is not technically a graphic until it is displayed. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations. With 3D printing, 3D models are rendered into a 3D physical representation of the model. William Fetter was credited with coining the term computer graphics in 1961 to describe his work at Boeing. 3D computer graphics software began appearing for home computers in the late 1970s; the earliest known example is 3D Art Graphics, a set of 3D computer graphics effects written by Kazumasa Mitazawa. Models can also be produced procedurally or via physical simulation. Basically, a 3D model is formed from points called vertices that define the shape; a polygon is an area formed from at least three vertices. A polygon of n points is an n-gon. The overall integrity of the model and its suitability for use in animation depend on the structure of the polygons. Before rendering into an image, objects must be laid out in a scene. This defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object. These techniques are often used in combination; as with animation, physical simulation also specifies motion. Rendering converts a model into an image either by simulating light transport to get photo-realistic images, or by applying an art style as in non-photorealistic rendering.
The two basic operations in realistic rendering are transport and scattering, and this step is usually performed using 3D computer graphics software or a 3D graphics API. Altering the scene into a suitable form for rendering also involves 3D projection. There are a multitude of websites designed to help and educate; some are managed by software developers and content providers, but there are standalone sites as well. These communities allow members to seek advice and post tutorials. Not all computer graphics that appear 3D are based on a wireframe model.
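The 3D projection step mentioned above can be sketched in a few lines. The following Python snippet is an illustrative sketch only; the focal length and the cube coordinates are arbitrary assumptions, not values from the text. It perspective-projects a model's vertices onto a 2D image plane by dividing by depth:

```python
def project(vertex, f=2.0):
    """Perspective-project a 3D point (x, y, z) onto the plane z = f.

    Dividing by z is what produces foreshortening: distant points
    land closer to the center of the image.
    """
    x, y, z = vertex
    return (f * x / z, f * y / z)

# Eight vertices of a unit cube placed in front of the virtual camera
# (illustrative model data).
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (4, 5)]

projected = [project(v) for v in cube]
```

Because the two z faces of the cube sit at different depths, their projected squares come out at different sizes, which is the wire-frame-style depth cue real renderers build on.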

3.
Polygon
–
In elementary geometry, a polygon /ˈpɒlɪɡɒn/ is a plane figure that is bounded by a finite chain of straight line segments closing in a loop to form a closed polygonal chain or circuit. These segments are called its edges or sides, and the points where two edges meet are the vertices or corners. The interior of the polygon is called its body. An n-gon is a polygon with n sides; for example, a triangle is a 3-gon. A polygon is a 2-dimensional example of the more general polytope in any number of dimensions. The basic geometrical notion of a polygon has been adapted in various ways to suit particular purposes. Mathematicians are often concerned only with the bounding closed polygonal chain and with simple polygons which do not self-intersect, and they often define a polygon accordingly. A polygonal boundary may be allowed to intersect itself, creating star polygons; these and other generalizations of polygons are described below. The word polygon derives from the Greek adjective πολύς (much, many), and it has been suggested that γόνυ (knee) may be the origin of "gon". Polygons are primarily classified by the number of sides. Polygons may also be characterized by their convexity or type of non-convexity. Convex: any line drawn through the polygon meets its boundary exactly twice; as a consequence, all its interior angles are less than 180°. Equivalently, any line segment with endpoints on the boundary passes through only interior points between its endpoints. Non-convex: a line may be found which meets its boundary more than twice; equivalently, there exists a line segment between two boundary points that passes outside the polygon. Simple: the boundary of the polygon does not cross itself. Concave: non-convex and simple; there is at least one interior angle greater than 180°. Star-shaped: the interior is visible from at least one point; the polygon must be simple, and may be convex or concave. Self-intersecting: the boundary of the polygon crosses itself.
Branko Grünbaum calls these coptic, though this term does not seem to be widely used. Star polygon: a polygon which self-intersects in a regular way; a polygon cannot be both a star and star-shaped. Equiangular: all corner angles are equal. Cyclic: all corners lie on a single circle, called the circumcircle. Isogonal or vertex-transitive: all corners lie within the same symmetry orbit. Regular: the polygon is both cyclic and equiangular.
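The convexity property described above has a standard computational analogue: a simple polygon is convex exactly when every turn along its boundary has the same orientation (the cross product of consecutive edge vectors never changes sign). A minimal Python sketch, with illustrative vertex data:

```python
def is_convex(polygon):
    """Return True if a simple polygon, given as (x, y) vertices in
    boundary order, is convex: all turns have the same orientation."""
    n = len(polygon)
    signs = set()
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        cx, cy = polygon[(i + 2) % n]
        # Cross product of edge A->B with edge A->C: its sign gives
        # the turn direction at vertex B.
        cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) <= 1

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
arrow = [(0, 0), (4, 0), (2, 1), (4, 2), (0, 2)]  # reflex angle at (2, 1)
```

The square is convex (every turn is counterclockwise), while the arrow shape is concave: the vertex (2, 1) produces an interior angle greater than 180°, flipping the cross-product sign.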

4.
Rendering (computer graphics)
–
Rendering or image synthesis is the process of generating an image from a 2D or 3D model by means of computer programs. The result of such a model can also be called a rendering. A scene file contains objects in a strictly defined language or data structure; it would contain geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the file is then passed to a rendering program to be processed. The term rendering may be by analogy with an artist's rendering of a scene. A GPU is a device able to assist a CPU in performing complex rendering calculations. If a scene is to look realistic and predictable under virtual lighting, the rendering software should solve the rendering equation. The rendering equation doesn't account for all lighting phenomena, but is a general lighting model for computer-generated imagery. Rendering is also used to describe the process of calculating effects in a video editing program to produce final video output. Rendering is one of the major sub-topics of 3D computer graphics; in the graphics pipeline, it is the last major step, giving the final appearance to the models and animation. With the increasing sophistication of computer graphics since the 1970s, it has become a distinct subject. Rendering has uses in architecture, video games, simulators, and movie or TV visual effects; as a product, a wide variety of renderers are available. Some are integrated into larger modeling and animation packages, and some are stand-alone. On the inside, a renderer is a carefully engineered program based on a selective mixture of disciplines related to light physics, visual perception, mathematics, and software development. In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in real time. When the pre-image (usually a wireframe sketch) is complete, rendering is used, which adds in bitmap textures or procedural textures, lights, and bump mapping; the result is a completed image the consumer or intended viewer sees.
For movie animations, several images must be rendered and stitched together in a program capable of making an animation of this sort; most 3D image editing programs can do this. A rendered image can be understood in terms of a number of visible features, and rendering research and development has been largely motivated by finding ways to simulate these efficiently. Some relate directly to particular algorithms and techniques, while others are produced together. Tracing every particle of light in a scene is nearly always completely impractical and would take a stupendous amount of time. Even tracing a portion large enough to produce an image takes an extremely large amount of time if the sampling is not intelligently restricted.

5.
Wire-frame model
–
A wire-frame model is a visual presentation of a 3-dimensional or physical object used in 3D computer graphics. It is created by specifying each edge of the object where two mathematically continuous smooth surfaces meet, or by connecting an object's constituent vertices using straight lines or curves. The object is projected into space by drawing lines at the location of each edge. The term wire frame comes from designers using metal wire to represent the shape of solid objects. 3D wire frame allows the construction and manipulation of solids and solid surfaces; 3D solid modeling techniques efficiently draw higher-quality representations of solids than conventional line drawing. Using a wire-frame model allows visualization of the underlying design structure of a 3D model. Traditional two-dimensional views and drawings can be created by appropriate rotation of the object. Since wire-frame renderings are relatively simple and fast to calculate, they are often used in cases where a high screen frame rate is needed. When greater graphical detail is desired, surface textures can be added automatically after completion of the initial rendering of the wire frame. This allows the designer to quickly review solids or rotate the object to new desired views without the long delays associated with more realistic rendering. The wire-frame format is also well suited to, and widely used in, programming tool paths for direct numerical control machine tools. Hand-drawn wire-frame-like illustrations date back as far as the Italian Renaissance. Wire-frame models are also used as the input for computer-aided manufacturing. There are mainly three types of 3D CAD models; wire frame is one of them, and it is the most abstract and least realistic. The other types of 3D CAD models are surface and solid. This method of modelling consists of only lines, points and curves defining the edges of an object. Wireframing is one of the methods used in geometric modelling systems.
A wireframe model represents the shape of an object with its characteristic lines. The approach has pros and cons: the user gives a simple input to create a shape, which is useful in developing systems, but a wireframe model does not include information about inside and outside boundary surfaces. Today, wireframe models are used to define complex solid objects: the designer makes a wireframe model of a solid object, and then the CAD operator reconstructs the object.
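A wire-frame model of the kind described, lines connecting an object's constituent vertices, can be represented directly as a vertex list plus an edge list. A minimal Python sketch, using a unit cube as an assumed example (not an object from the text):

```python
# Eight corner vertices of a unit cube.
cube_vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# Two cube corners are joined by an edge exactly when they differ in
# one coordinate; collect each edge once as a pair of vertex indices.
cube_edges = [
    (i, j)
    for i in range(8)
    for j in range(i + 1, 8)
    if sum(a != b for a, b in zip(cube_vertices[i], cube_vertices[j])) == 1
]
```

A renderer would then draw one line segment per entry of `cube_edges`, which is why wire-frame rendering is cheap: the cost is proportional to the number of edges, with no surface or shading computation.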

6.
Computer animation
–
Computer animation is the process used for generating animated images. The more general term computer-generated imagery encompasses both static scenes and dynamic images, while computer animation refers only to the moving images. Modern computer animation usually uses 3D computer graphics, although 2D computer graphics are still used for stylistic, low-bandwidth, and faster real-time renderings. Sometimes, the target of the animation is the computer itself. Computer animation is essentially a digital successor to the stop motion techniques used in traditional animation, with 3D models and frame-by-frame animation of 2D illustrations. It can also allow a single graphic artist to produce such content without the use of actors or expensive set pieces. To create the illusion of movement, an image is displayed on the computer monitor and repeatedly replaced by a new image that is similar to it. This technique is identical to how the illusion of movement is achieved with television and motion pictures. For 3D animations, objects are built on the computer monitor and 3D figures are rigged with a virtual skeleton. For 2D figure animations, separate objects and separate transparent layers are used, with or without that virtual skeleton; then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in appearance between key frames are calculated by the computer in a process known as tweening or morphing. For 3D animations, all frames must be rendered after the modeling is complete; for 2D vector animations, the rendering process is the key frame illustration process, while tweened frames are rendered as needed. For pre-recorded presentations, the rendered frames are transferred to a different format or medium. The frames may also be rendered in real time as they are presented to the end-user audience; low-bandwidth animations transmitted via the internet often use software on the end user's computer to render in real time, as an alternative to streaming or pre-loaded high-bandwidth animations.
To trick the eye and the brain into thinking they are seeing a smoothly moving object, the pictures should be drawn at around 12 frames per second or faster. With rates above 75-120 frames per second, no improvement in realism or smoothness is perceivable due to the way the eye and the brain both process images. At rates below 12 frames per second, most people can detect jerkiness associated with the drawing of new images that detracts from the illusion of realistic movement. Conventional hand-drawn cartoon animation often uses 15 frames per second in order to save on the number of drawings needed; to produce more realistic imagery, computer animation demands higher frame rates. Films seen in theaters in the United States run at 24 frames per second; for high resolution, adapters are used. Early digital computer animation was developed at Bell Telephone Laboratories in the 1960s by Edward E. Zajac, Frank W. Sinden, and others; other digital animation was also practiced at the Lawrence Livermore National Laboratory. An early step in the history of computer animation was Futureworld (1976), the sequel to the 1973 film Westworld.
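The tweening process described above is, at its simplest, linear interpolation between the values stored at two key frames. A Python sketch (the 2D key-frame positions and frame count are illustrative assumptions):

```python
def tween(key_a, key_b, t):
    """Linearly interpolate between two key frames, given as tuples of
    equal length, for a parameter t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(key_a, key_b))

# Generate 5 frames moving a point from (0, 0) to (10, 4): the two
# endpoints are the key frames, the three middle frames are tweened.
frames = [tween((0.0, 0.0), (10.0, 4.0), i / 4) for i in range(5)]
```

Real animation systems apply the same idea to rotation angles, scales, and skeleton joint parameters, and usually replace the linear ramp with easing curves, but the per-frame calculation is this interpolation.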

7.
Decimal mark
–
A decimal mark is a symbol used to separate the integer part from the fractional part of a number written in decimal form. Different countries officially designate different symbols for the decimal mark. The choice of symbol for the decimal mark also affects the choice of symbol for the thousands separator used in digit grouping, so the latter is also treated in this article. In mathematics the decimal mark is a type of radix point. In the Middle Ages, before printing, a bar over the units digit was used to separate the integral part of a number from its fractional part, e.g. 9995 with a bar over the second 9, meaning 99.95. A similar notation remains in common use as an underbar to superscript digits, especially for monetary values without a decimal mark, e.g. 99⁹⁵. Later, a separatrix between the units and tenths position became the norm among Arab mathematicians, e.g. 99ˌ95; when this character was typeset, it was convenient to use the existing comma or full stop instead. The separatrix was also used in England as an L-shaped or vertical bar before the popularization of the period. Gerbert of Aurillac marked triples of columns with an arc when using his Hindu–Arabic numeral-based abacus in the 10th century. Fibonacci followed this convention when writing numbers in his influential work Liber Abaci in the 13th century. In France, the full stop was already in use in printing to make Roman numerals more readable, so the comma was chosen for the decimal mark. Many other countries, such as Italy, also chose to use the comma to mark the decimal units position, and it has been made standard by the ISO for international blueprints. However, English-speaking countries took the comma to separate sequences of three digits. In some countries, a raised dot or dash may be used for grouping or as a decimal mark; this is particularly common in handwriting.
In the United States, the full stop or period was used as the standard decimal mark. In the United Kingdom, however, the mid dot was already in use in the mathematics world to indicate multiplication; in the event, the point on the line was decided on by the Ministry of Technology in 1968. The three most spoken international auxiliary languages, Ido, Esperanto, and Interlingua, all use the comma as the decimal mark. Interlingua has used the comma as its decimal mark since the publication of the Interlingua Grammar in 1951. Esperanto also uses the comma as its official decimal mark, while thousands are separated by non-breaking spaces, e.g. 12 345 678,9. Ido's Kompleta Gramatiko Detaloza di la Linguo Internaciona Ido officially states that commas are used for the decimal mark while full stops are used to separate thousands and millions; so the number 12,345,678.90123 (in American notation), for instance, is written 12.345.678,90123 in Ido. The 1931 grammar of Volapük by Arie de Jong uses the comma as its decimal mark, and uses the middle dot as the thousands separator. In 1958, disputes between European and American delegates over the representation of the decimal mark nearly stalled the development of the ALGOL computer programming language. ALGOL ended up allowing different decimal marks, but most computer languages use the full stop. The 22nd General Conference on Weights and Measures declared in 2003 that the symbol for the decimal marker shall be either the point on the line or the comma on the line. It further reaffirmed that numbers may be divided in groups of three in order to facilitate reading; neither dots nor commas are ever inserted in the spaces between groups.
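The notational differences described above are easy to demonstrate in code. The following Python helper is an illustrative sketch, not an implementation of any particular national or ISO rule set; it reuses Python's built-in comma grouping and then swaps in the requested symbols:

```python
def format_decimal(value, decimal_mark=",", grouping=" "):
    """Format a number with a chosen decimal mark and a chosen
    thousands separator (4 fractional digits, for illustration)."""
    integer, _, fraction = f"{value:,.4f}".partition(".")
    integer = integer.replace(",", grouping)
    return integer + decimal_mark + fraction

# Comma on the line with spaces between groups of three digits,
# versus the American point-and-comma convention.
spaced_comma = format_decimal(12345678.9)
american = format_decimal(12345678.9, decimal_mark=".", grouping=",")
```

For locale-aware formatting in real programs, the standard-library `locale` module is the usual route; the helper above only illustrates that the grouping symbol and the decimal mark are two independent choices.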

8.
Floating-point arithmetic
–
In computing, floating-point arithmetic is arithmetic using formulaic representation of real numbers as an approximation so as to support a trade-off between range and precision. A number is, in general, represented approximately to a fixed number of significant digits and scaled using an exponent in some fixed base. For example, 1.2345 = 12345 × 10⁻⁴, where 12345 is the significand, 10 is the base, and −4 is the exponent. The term floating point refers to the fact that a number's radix point can float: it can be placed anywhere relative to the significant digits of the number. This position is indicated by the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A result of this dynamic range is that the numbers that can be represented are not uniformly spaced. Over the years, a variety of floating-point representations have been used in computers; however, since the 1990s, the most commonly encountered representation is that defined by the IEEE 754 Standard. A floating-point unit is a part of a computer system designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number, usually as a string of digits; there are several mechanisms by which strings of digits can represent numbers. In common mathematical notation, the string can be of any length. If the radix point is not specified, then the string implicitly represents an integer. In fixed-point systems, a position in the string is specified for the radix point; so a fixed-point scheme might be to use a string of 8 decimal digits with the point in the middle. In scientific notation, the scaling factor, as a power of ten, is indicated separately at the end of the number, and floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of: a signed digit string of a given length in a given base.
This digit string is referred to as the significand or mantissa; the length of the significand determines the precision to which numbers can be represented. The radix point position is assumed always to be somewhere within the significand, often just after or just before the most significant digit; this article generally follows the convention that the radix point is set just after the most significant digit. A floating-point number also includes a signed integer exponent, which modifies the magnitude of the number. Using base 10 as an example, the number 152853.5047, which has ten decimal digits of precision, is represented as the significand 1528535047 together with 5 as the exponent. In storing such a number, the base need not be stored, since it will be the same for the entire range of supported numbers. Symbolically, this value is s / b^(p−1) × b^e, where s is the significand, p is the precision, b is the base, and e is the exponent.
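The decomposition just described can be checked numerically. A small Python sketch, using the example values from the text for the decimal case and the standard-library `math.frexp` for the binary case:

```python
import math

# Decimal example from the text: significand 1528535047, precision 10,
# base 10, exponent 5 reconstructs the value 152853.5047 via
# s / b**(p-1) * b**e.
s, p, b, e = 1528535047, 10, 10, 5
value = s / b ** (p - 1) * b ** e

# Binary floating point: Python floats are IEEE 754 doubles.
# math.frexp splits a float into a mantissa in [0.5, 1) and a
# power-of-two exponent, here 6.0 = 0.75 * 2**3.
mantissa, exponent = math.frexp(6.0)
```

The same significand/exponent split is what the hardware stores; only the base (2 instead of 10) and the fixed field widths differ.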

9.
Scan line
–
A scanline is one line, or row, in a raster scanning pattern, such as a line of video on a cathode ray tube display of a television set or computer monitor. This is sometimes used today as a visual effect in computer graphics. The term is used, by analogy, for a single row of pixels in a raster graphics image. Scan lines are important in representations of image data, because many image file formats have special rules for data at the end of a scan line. For example, there may be a rule that each line starts on a particular byte boundary. This means that even otherwise compatible raster data may need to be analyzed at the level of scan lines in order to convert between formats. See also: screen-door effect, fax, interlaced video, native resolution, progressive video, scanline fill, scanline rendering, flicker.
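The end-of-line rules mentioned above usually amount to padding each row out to an alignment boundary. A Python sketch; the 24-bit depth and 4-byte alignment mirror the rule used by the BMP format, taken here as an assumed example:

```python
def row_stride(width_px, bits_per_pixel=24, align=4):
    """Bytes occupied by one scan line when each line must start on an
    `align`-byte boundary: round the raw byte count up to a multiple
    of `align`."""
    raw = (width_px * bits_per_pixel + 7) // 8  # bytes of actual pixel data
    return (raw + align - 1) // align * align   # round up to the boundary
```

A 3-pixel-wide 24-bit row holds 9 bytes of pixel data but occupies 12 bytes on disk; converters that ignore this padding read rows misaligned, which is exactly why formats must be analyzed at the scan-line level.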

10.
Fraction (mathematics)
–
A fraction represents a part of a whole or, more generally, any number of equal parts. When spoken in everyday English, a fraction describes how many parts of a certain size there are, for example, one-half, eight-fifths, three-quarters. A common, vulgar, or simple fraction consists of an integer numerator displayed above a line and a non-zero integer denominator displayed below that line. Numerators and denominators are also used in fractions that are not common, including compound fractions, complex fractions, and mixed numerals. The numerator represents a number of equal parts, and the denominator indicates how many of those parts make up a unit or a whole. For example, in the fraction 3/4, the numerator, 3, tells us that the fraction represents 3 equal parts, and the denominator, 4, tells us that 4 parts make up a whole. The picture to the right illustrates 3/4 of a cake. Fractional numbers can also be written without using explicit numerators or denominators, by using decimals or percent signs. An integer such as the number 7 can be thought of as having an implicit denominator of one: 7 equals 7/1. Other uses for fractions are to represent ratios and to represent division; thus the fraction 3/4 is also used to represent the ratio 3:4 and the division 3 ÷ 4. The test for a number being a rational number is that it can be written in that form. In a fraction, the number of equal parts being described is the numerator, and the type or variety of the parts is the denominator. Informally, they may be distinguished by placement alone, but in formal contexts they are separated by a fraction bar. The fraction bar may be horizontal, oblique, or diagonal; these marks are respectively known as the horizontal bar, the slash or stroke, the division slash, and the fraction slash. In typography, horizontal fractions are known as en or nut fractions and diagonal fractions as em fractions. The denominators of English fractions are generally expressed as ordinal numbers. When the denominator is 1, it may be expressed in terms of wholes but is more commonly ignored.
When the numerator is one, it may be omitted. A fraction may be expressed as a single composition, in which case it is hyphenated, or as a number of fractions with a numerator of one, in which case they are not. Fractions should always be hyphenated when used as adjectives. Alternatively, a fraction may be described by reading it out as the numerator over the denominator, with the denominator expressed as a cardinal number. The term over is used even in the case of solidus fractions. Fractions with large denominators that are not powers of ten are often rendered in this fashion, while those with denominators divisible by ten are typically read in the normal ordinal fashion. A simple fraction is a rational number written as a/b, or as a over b with a horizontal fraction bar.
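Python's standard-library `fractions` module implements exactly the numerator/denominator arithmetic described above; the values below echo the article's own examples:

```python
from fractions import Fraction

three_quarters = Fraction(3, 4)       # the fraction 3/4
division = Fraction(3) / 4            # the division 3 ÷ 4, same value
whole = Fraction(7)                   # implicit denominator of one: 7/1

# "one-half" plus "eight-fifths", kept exact rather than as floats.
total = Fraction(1, 2) + Fraction(8, 5)
```

Because every value is stored as an integer numerator over an integer denominator, results like `total` stay exact (21/10) where binary floating point would only approximate them.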

11.
Bresenham's line algorithm
–
Bresenham's line algorithm is an algorithm that determines the points of an n-dimensional raster that should be selected in order to form a close approximation to a straight line between two points. It is an incremental error algorithm, and it is one of the earliest algorithms developed in the field of computer graphics. An extension to the original algorithm may be used for drawing circles. The algorithm is used in hardware such as plotters and in the graphics chips of modern graphics cards. It can also be found in many software graphics libraries. Because the algorithm is very simple, it is often implemented in either the firmware or the graphics hardware of modern graphics cards. The label Bresenham is used today for a family of algorithms extending or modifying Bresenham's original algorithm. Bresenham's line algorithm is named after Jack Elton Bresenham, who developed it in 1962 at IBM. In 2001 Bresenham wrote: "I was working in the lab at IBM's San Jose development lab. A Calcomp plotter had been attached to an IBM 1401 via the 1407 typewriter console. [The routine] was in use by summer 1962. Programs in those days were freely exchanged among corporations so Calcomp had copies. When I returned to Stanford in Fall 1962, I put a copy in the Stanford comp center library. A description of the line drawing routine was accepted for presentation at the 1963 ACM national convention in Denver. It was a year in which no proceedings were published, only the agenda of speakers and topics in an issue of Communications of the ACM. A person from the IBM Systems Journal asked me after I made my presentation if they could publish the paper. I happily agreed, and they printed it in 1965." Bresenham's algorithm was later extended to produce circles, the resulting algorithm being sometimes known as either Bresenham's circle algorithm or the midpoint circle algorithm.
The following conventions will be used: the top-left is the origin, such that pixel coordinates increase in the right and down directions, and the endpoints of the line are the pixels at (x0, y0) and (x1, y1), where the first coordinate of the pair is the column and the second is the row. The derivation below covers the octant in which the line goes down and to the right with slope between 0 and 1. In this octant, for each column x between x0 and x1, there is exactly one row y containing a pixel of the line. Bresenham's algorithm chooses the integer y corresponding to the pixel center that is closest to the ideal (fractional) y for the same x. The general equation of the line through the endpoints is given by (y − y0)/(y1 − y0) = (x − x0)/(x1 − x0). Since we know the column, x, the row, y, is given by rounding the quantity y0 + (x − x0)·(y1 − y0)/(x1 − x0) to the nearest integer. The slope m = (y1 − y0)/(x1 − x0) depends on the endpoint coordinates only and can be precomputed. The algorithm tracks an accumulated error measuring the distance from the ideal line to the current pixel center: this value is first set to m − 0.5, and is incremented by m each time the x coordinate is incremented by one.
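The error-accumulation idea just described leads to an integer-only implementation. The Python sketch below follows the commonly published all-octant variant of Bresenham's algorithm (the derivation above covers only one octant), scaling the error terms so no fractions or rounding are needed:

```python
def bresenham_line(x0, y0, x1, y1):
    """Return the raster points approximating the line from (x0, y0)
    to (x1, y1), inclusive, using only integer arithmetic."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1  # step direction along each axis
    sy = 1 if y0 < y1 else -1
    err = dx + dy              # scaled error term, updated incrementally
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:           # error says: step in x
            err += dy
            x0 += sx
        if e2 <= dx:           # error says: step in y
            err += dx
            y0 += sy
    return points
```

Because the loop body uses only additions, comparisons, and sign flips, it maps directly onto the simple firmware and hardware implementations the text mentions.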

12.
Triangle
–
A triangle is a polygon with three edges and three vertices. It is one of the basic shapes in geometry. A triangle with vertices A, B, and C is denoted △ABC. In Euclidean geometry, any three points, when non-collinear, determine a unique triangle and a unique plane. This article is about triangles in Euclidean geometry except where otherwise noted. Triangles can be classified according to the lengths of their sides. An equilateral triangle has all sides the same length; an equilateral triangle is also a regular polygon with all angles measuring 60°. An isosceles triangle has two sides of equal length. Some mathematicians define an isosceles triangle to have exactly two equal sides, whereas others define an isosceles triangle as one with at least two equal sides; the latter definition would make all equilateral triangles isosceles triangles. The 45–45–90 right triangle, which appears in the tetrakis square tiling, is isosceles. A scalene triangle has all its sides of different lengths; equivalently, it has all angles of different measure. Hatch marks, also called tick marks, are used in diagrams of triangles to identify sides of equal length. A side can be marked with a pattern of ticks, short line segments in the form of tally marks; two sides have equal lengths if they are both marked with the same pattern. In a triangle, the pattern is usually no more than 3 ticks. Similarly, patterns of 1, 2, or 3 concentric arcs inside the angles are used to indicate equal angles. Triangles can also be classified according to their internal angles, measured here in degrees. A right triangle has one of its interior angles measuring 90°. The side opposite to the right angle is the hypotenuse, the longest side of the triangle. The other two sides are called the legs or catheti of the triangle. Special right triangles are right triangles with additional properties that make calculations involving them easier. One of the two most famous is the 3–4–5 right triangle, where 3² + 4² = 5²; in this situation, 3, 4, and 5 are a Pythagorean triple.
The other one is the isosceles right triangle, which has two angles that each measure 45 degrees. Triangles that do not have an angle measuring 90° are called oblique triangles. A triangle with all interior angles measuring less than 90° is an acute triangle or acute-angled triangle; if c is the length of the longest side, then a² + b² > c². A triangle with one interior angle measuring more than 90° is an obtuse triangle or obtuse-angled triangle; if c is the length of the longest side, then a² + b² < c². A triangle with an interior angle of 180° is degenerate.
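The a² + b² versus c² comparisons above translate directly into a classifier. A Python sketch (side lengths are assumed to be positive numbers forming a valid or degenerate triangle):

```python
def classify_by_angles(a, b, c):
    """Classify a triangle by its largest angle, comparing a² + b²
    with c² where c is the longest side, as described in the text."""
    a, b, c = sorted((a, b, c))    # make c the longest side
    if a + b <= c:
        return "degenerate"        # vertices are collinear (180° angle)
    lhs, rhs = a * a + b * b, c * c
    if lhs > rhs:
        return "acute"
    if lhs < rhs:
        return "obtuse"
    return "right"
```

Sorting first means the caller can pass the sides in any order; the Pythagorean equality case lands exactly on the 3–4–5 triple mentioned above.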

13.
Barycentric coordinates (mathematics)
–
Barycentric coordinates also extend outside the simplex, where one or more coordinates become negative. The system was introduced by August Ferdinand Möbius. The vertices themselves have the coordinates x1 = (1, 0, 0, …, 0), x2 = (0, 1, 0, …, 0), …, xn = (0, 0, …, 0, 1). Barycentric coordinates are not unique: for any b not equal to zero, (b·a1, …, b·an) are also barycentric coordinates of p. When the coordinates are not negative, the point p lies in the convex hull of x1, …, xn. Sometimes the values of the coordinates are restricted with the condition ∑ ai = 1, which makes them unique; the classical terminology in this case is that of absolute barycentric coordinates. Areal and trilinear coordinates are used for similar purposes in geometry. Barycentric or areal coordinates are useful in engineering applications involving triangular subdomains. These often make analytic integrals easier to evaluate, and Gaussian quadrature tables are presented in terms of area coordinates. Consider a triangle T defined by its three vertices, r1, r2 and r3; each point r located inside this triangle can be written as a unique convex combination of the three vertices. The coefficients are often denoted as α, β, γ instead of λ1, λ2, λ3. Note that although there are three coordinates, there are only two degrees of freedom, since λ1 + λ2 + λ3 = 1; thus every point is defined by any two of the barycentric coordinates. To explain why these coordinates are signed ratios of areas, assume that we work in the Euclidean space E³. Here, consider the Cartesian coordinate system Oxyz and its associated basis, and consider also the positively oriented triangle ABC lying in the Oxy plane. Expanding the vector AP→ in the basis given by AB→ and AC→, we obtain AP→ = mB · AB→ + mC · AC→ for certain coefficients mB and mC.
Given the positive orientation of the triangle ABC, the denominator of both mB and mC is precisely twice the area of the triangle ABC. This m-letter notation for the barycentric coordinates comes from the fact that the point P may be interpreted as the center of mass for the masses mA, mB, mC located at A, B and C. Switching back and forth between barycentric coordinates and other coordinate systems makes some problems much easier to solve
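The signed-area interpretation above translates directly into code: each absolute barycentric coordinate of a 2D point p is the signed area of the sub-triangle opposite the corresponding vertex, divided by the area of the whole triangle. A minimal sketch under that assumption (function names are illustrative):

```python
def barycentric(p, a, b, c):
    """Absolute barycentric (areal) coordinates of 2D point p
    with respect to triangle a, b, c, via signed-area ratios."""
    def double_signed_area(o, u, v):
        # 2x the signed area of triangle o-u-v (2D cross product)
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])

    area = double_signed_area(a, b, c)  # 2x area of the whole triangle
    l1 = double_signed_area(p, b, c) / area  # coordinate opposite vertex a
    l2 = double_signed_area(p, c, a) / area  # coordinate opposite vertex b
    l3 = double_signed_area(p, a, b) / area  # coordinate opposite vertex c
    return l1, l2, l3  # always sums to 1; all non-negative iff p is inside
```

For the centroid of a triangle, all three coordinates come out equal to 1/3, and a point outside the triangle yields at least one negative coordinate, matching the text above.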

14.
Ray tracing (graphics)
–
The technique is capable of producing a very high degree of visual realism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. Ray tracing is capable of simulating a variety of optical effects, such as reflection, refraction and scattering. Optical ray tracing describes a method for producing visual images constructed in 3D computer graphics environments; it works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color of the object visible through it. Scenes in ray tracing are described mathematically by a programmer or by a visual artist; scenes may also incorporate data from images and models captured by means such as digital photography. Typically, each ray must be tested for intersection with some subset of all the objects in the scene; certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene. It may at first seem counterintuitive or backwards to send rays away from the camera, rather than into it; the shortcut taken in ray tracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel. In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One can think of this ray as a stream of photons traveling along the same path; in a perfect vacuum this ray will be a straight line. Any combination of four things can happen with this light ray: absorption, reflection, refraction and fluorescence. A surface may absorb part of the ray, resulting in a loss of intensity of the reflected and/or refracted light. It might also reflect all or part of the light ray, and if the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some of the spectrum. 
Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for: a surface cannot, for instance, reflect 66% of an incoming light ray and refract 50%, since the two would add up to 116%. From here, the reflected and/or refracted rays may strike other surfaces; some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image. The first ray tracing algorithm used for rendering was presented by Arthur Appel in 1968; this algorithm has since been termed ray casting. The idea behind ray casting is to shoot rays from the eye, one per pixel, and find the closest object blocking the path of that ray. Think of an image as a screen door, with each square in the screen being a pixel. This is then the object the eye sees through that pixel; using the material properties and the effect of the lights in the scene, this algorithm can determine the shading of this object
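The core of ray casting as described above is the per-pixel intersection test. A minimal sketch for the classic sphere case, solving the quadratic that results from substituting the ray equation into the sphere equation (function names and the choice of sphere primitive are illustrative, not from the source):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive hit distance t along a ray origin + t*direction
    (direction assumed normalized), or None if the ray misses the sphere."""
    oc = [o - c for o, c in zip(origin, center)]       # origin relative to center
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c                             # a == 1 for a unit direction
    if disc < 0:
        return None                                    # no real roots: ray misses
    t = (-b - math.sqrt(disc)) / 2.0                   # nearer of the two roots
    return t if t > 0 else None                        # ignore hits behind the eye

# A ray down the z-axis hits a unit sphere centered 5 units away at t = 4
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

In a full ray caster, this test would be run for every object per pixel, keeping the smallest positive t; shading is then computed at the hit point from the material and lights, as the text describes.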

15.
Low poly
–
Low poly is a polygon mesh in 3D computer graphics that has a relatively small number of polygons. Low poly meshes occur in real-time applications and contrast with high poly meshes in animated movies. Polygon meshes are one of the major methods of modelling a 3D object for display by a computer. Polygons can, in theory, have any number of sides, but are commonly broken down into triangles for display. In general, the more triangles in a mesh, the more detailed the object is; in order to decrease render times, the number of triangles in the scene must be reduced, for example by using low poly meshes. Computer-generated imagery, for example for films or still images, has a higher polygon budget because rendering does not need to be done in real time, which would require high frame rates. In addition, computer processing power in such situations is typically less limited, and each frame can take hours to create despite the enormous computing power involved. A common example of the difference this makes is full motion video sequences in computer games, which, because they can be pre-rendered, look much smoother than the games themselves. Objects that are said to be low poly often appear blocky, but low poly meshes do not necessarily look bad; for example, a flat sheet of paper represented by one polygon looks extremely accurate. As computer graphics hardware has become more powerful, low poly graphics may instead be used deliberately to achieve a certain retro style similar to pixel art, evoking classic video games. Computer graphics techniques such as normal and bump mapping have been designed to make a low poly object appear to contain more polygons than it does; this is done by altering the shading of polygons to convey internal detail that is not in the mesh. For example, Super Mario 64 would be considered low poly today. Similarly, in 2009, using hundreds of polygons on a leaf in the background of a scene would be considered high poly, but using that many polygons on the main character would be considered low poly. 
Physics engines have presented a new role for low poly meshes: a simplified low poly version of a mesh is often used to speed up the calculation of collisions with other meshes; in some cases this is as simple as a six-polygon bounding box. See also: polygon, bump mapping, normal mapping, sprites, NURBS
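The six-polygon bounding-box proxy mentioned above is usually an axis-aligned bounding box (AABB): collisions are tested against the box instead of the full mesh, and two boxes overlap exactly when their intervals overlap on every axis. A minimal sketch of that idea (function names are illustrative):

```python
def aabb(vertices):
    """Axis-aligned bounding box (min corner, max corner) of a vertex list.
    This box is the 6-face collision proxy for the full mesh."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def aabb_overlap(box_a, box_b):
    """Two AABBs collide iff their extents overlap on all three axes."""
    (amin, amax), (bmin, bmax) = box_a, box_b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

near = aabb([(0, 0, 0), (1, 1, 1)])
far = aabb([(3, 3, 3), (4, 4, 4)])
print(aabb_overlap(near, far))  # False
```

Real physics engines typically use such cheap box tests as a broad phase, falling back to the detailed (often still simplified) mesh only when the boxes overlap.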

16.
Polygon mesh
–
A polygon mesh is a collection of vertices, edges and faces that defines the shape of a polyhedral object in 3D computer graphics and solid modeling. The study of polygon meshes is a large sub-field of computer graphics; different representations of polygon meshes are used for different applications and goals. The variety of operations performed on meshes may include Boolean logic, smoothing and simplification. Volumetric meshes are distinct from polygon meshes in that they explicitly represent both the surface and the volume of a structure, while polygon meshes only explicitly represent the surface. As polygonal meshes are widely used in computer graphics, algorithms also exist for ray tracing and collision detection on them. Objects created with polygon meshes must store different types of elements; these include vertices, edges, faces, polygons and surfaces. In many applications, only vertices, edges and either faces or polygons are stored. A renderer may support only 3-sided faces, so polygons must be constructed of many of these. However, many renderers either support quads and higher-sided polygons, or are able to convert polygons to triangles on the fly; also, in certain applications like head modeling, it is desirable to be able to create both 3- and 4-sided polygons. Vertex: a position along with other information such as color and normal vector. Edge: a connection between two vertices. Face: a closed set of edges, in which a triangle face has three edges and a quad face has four edges. A polygon is a set of faces. In systems that support multi-sided faces, polygons and faces are equivalent; however, most rendering hardware supports only 3- or 4-sided faces, so polygons are represented as multiple faces. Mathematically, a polygonal mesh may be considered an unstructured grid, or undirected graph, with additional properties of geometry and shape. 
Surfaces, more often called smoothing groups, are useful but not required to group smooth regions. Consider a cylinder with caps, such as a soda can: for smooth shading of the sides, all surface normals must point horizontally away from the center, while the normals of the caps must point straight up; rendered as a single, Phong-shaded surface, the crease vertices would have incorrect normals. Thus, some way of determining where to cease smoothing is needed to group smooth parts of a mesh. As an alternative to providing surfaces/smoothing groups, a mesh may contain other data for calculating the same information, such as a splitting angle. Additionally, very high resolution meshes are less subject to issues that would require smoothing groups. Further, another alternative exists in the possibility of simply detaching the surfaces themselves from the rest of the mesh, since renderers do not attempt to smooth edges across noncontiguous polygons. Materials: generally, materials will be defined, allowing different portions of the mesh to use different shaders when rendered. It is also possible for meshes to contain other vertex attribute information such as colour, tangent vectors and weight maps to control animation. Polygon meshes may be represented in a variety of ways, using different methods to store the vertex, edge and face data
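One of the simplest representations alluded to above is the face-vertex mesh: a shared list of vertex positions plus faces stored as index triples into that list. A minimal sketch, including the cross-product face normal used for the shading decisions discussed above (class and method names are illustrative):

```python
class Mesh:
    """Minimal face-vertex polygon mesh: shared vertex list + index triples."""

    def __init__(self):
        self.vertices = []  # list of (x, y, z) positions
        self.faces = []     # list of (i, j, k) vertex indices, CCW winding

    def add_vertex(self, x, y, z):
        self.vertices.append((x, y, z))
        return len(self.vertices) - 1  # index used by faces (shared storage)

    def add_face(self, i, j, k):
        self.faces.append((i, j, k))

    def face_normal(self, f):
        """Unnormalized face normal: cross product of two edge vectors."""
        a, b, c = (self.vertices[i] for i in self.faces[f])
        u = tuple(b[n] - a[n] for n in range(3))  # edge a->b
        v = tuple(c[n] - a[n] for n in range(3))  # edge a->c
        return (u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0])
```

Because vertices are shared between adjacent faces, per-vertex normals can be obtained by averaging the normals of incident faces, which is exactly where smoothing groups come in: averaging is only done within a group, so creases (like the rim of the soda can) stay sharp.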