1.
3D computer graphics
–
Such images may be stored for viewing later or displayed in real time. 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model. 3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences: a 3D model is the mathematical representation of any three-dimensional object. A model is not technically a graphic until it is displayed; a model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations. With 3D printing, 3D models are rendered into a physical three-dimensional representation of the model.

William Fetter was credited with coining the term "computer graphics" in 1961 to describe his work at Boeing. 3D computer graphics software began appearing for home computers in the late 1970s. The earliest known example is 3D Art Graphics, a set of 3D computer graphics effects written by Kazumasa Mitazawa. Models can also be produced procedurally or via physical simulation. Basically, a 3D model is formed from points called vertices that define its shape; a polygon is an area formed from at least three vertices. A polygon of n points is an n-gon. The overall integrity of the model and its suitability for use in animation depend on the structure of its polygons.

Before rendering into an image, objects must be laid out in a scene. This defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object. These techniques are often used in combination; as with animation, physical simulation also specifies motion. Rendering converts a model into an image, either by simulating light transport to obtain photo-realistic images, or by applying an art style, as in non-photorealistic rendering.
The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is usually performed using 3D computer graphics software or a 3D graphics API. Altering the scene into a form suitable for rendering also involves 3D projection, which displays a three-dimensional image in two dimensions. There are a multitude of websites designed to help and educate; some are managed by software developers and content providers, but there are standalone sites as well. These communities allow members to seek advice and post tutorials. Not all computer graphics that appear 3D are based on a wireframe model.
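As a concrete illustration of the vertex/polygon structure and the projection step described above, here is a minimal sketch in Python; the tetrahedron data and all names are illustrative, not taken from the article:

```python
# A minimal 3D model: vertices (points in space) and triangles
# (triples of indices into the vertex list).

# Four vertices of a tetrahedron
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]

# Each face is a triangle (an n-gon with n = 3), given as vertex indices
triangles = [
    (0, 1, 2),
    (0, 1, 3),
    (0, 2, 3),
    (1, 2, 3),
]

def project_orthographic(vertex):
    """3D projection in miniature: map a 3D point to 2D by dropping z."""
    x, y, _z = vertex
    return (x, y)

projected = [project_orthographic(v) for v in vertices]
```

Real renderers use perspective rather than orthographic projection, but the data layout (vertex list plus index list) is the same idea.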
2.
Frustum
–
In geometry, a frustum is the portion of a solid that lies between one or two parallel planes cutting it. A right frustum is a parallel truncation of a right pyramid or right cone. The term is used in computer graphics to describe the viewing frustum, the three-dimensional region visible on the screen, which is formed by a clipped pyramid; in particular, frustum culling is a method of hidden surface determination. In the aerospace industry, frustum is the term for the fairing between two stages of a multistage rocket, which is shaped like a truncated cone.

Each plane section is a floor or base of the frustum, and its axis, if any, is that of the original cone or pyramid. A frustum is circular if it has circular bases; it is right if the axis is perpendicular to both bases, and oblique otherwise. The height of a frustum is the perpendicular distance between the planes of the two bases. Cones and pyramids can be viewed as degenerate cases of frusta, and the pyramidal frusta are a subclass of the prismatoids. Two frusta joined at their bases make a bifrustum.

The Egyptians knew the correct formula for obtaining the volume of a truncated square pyramid, but no proof of this equation is given in the Moscow papyrus. The volume of a conical or pyramidal frustum is the volume of the solid before the apex was cut off, minus the volume of the cut-off apex: V = (h1·B1 − h2·B2)/3, where B1 and B2 are the areas of the base and top, and h1 and h2 are the heights of the complete and the cut-off solid, measured from the apex. Since B1/h1² = B2/h2² = α, this becomes V = α(h1³ − h2³)/3. By factoring the difference of two cubes we get V = α(h1 − h2)(h1² + h1·h2 + h2²)/3, where h1 − h2 = h, the height of the frustum. Distributing α and substituting from its definition yields the Heronian mean of the areas B1 and B2; the alternative formula is therefore V = (h/3)(B1 + √(B1·B2) + B2). Heron of Alexandria is noted for deriving this formula and, with it, encountering the imaginary number, the square root of negative one. In particular, the volume of a circular cone frustum is V = (π·h/3)(R1² + R1·R2 + R2²), where π is 3.14159265… and R1, R2 are the radii of the two bases. The volume of a frustum whose bases are n-sided regular polygons with side lengths a1 and a2 is V = (n·h/12)(a1² + a1·a2 + a2²)·cot(π/n). The surface area of a frustum whose bases are similar regular n-sided polygons with side lengths a1 and a2 is A = (n/4)[(a1² + a2²)·cot(π/n) + (a1 + a2)·√((a1 − a2)²·cot²(π/n) + 4h²)].
A pyramidal frustum appears on the reverse of the Great Seal of the United States, printed on the back of the one-dollar bill, and certain ancient Native American mounds also form the frustum of a pyramid. The John Hancock Center in Chicago, Illinois is a frustum whose bases are rectangles, and the Washington Monument is a narrow square-based pyramidal frustum topped by a small pyramid. The viewing frustum in 3D computer graphics is a photographic or video camera's usable field of view modeled as a pyramidal frustum.
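The Heronian-mean volume formula and its circular special case above can be checked with a short sketch; the function names are my own:

```python
from math import pi, sqrt

def frustum_volume(h, b1, b2):
    """Volume of a frustum of height h with base areas b1 and b2:
    V = (h/3) * (b1 + sqrt(b1*b2) + b2), the Heronian mean of the areas
    times the height."""
    return h * (b1 + sqrt(b1 * b2) + b2) / 3.0

def conical_frustum_volume(h, r1, r2):
    """Volume of a circular cone frustum with base radii r1 and r2:
    V = (pi*h/3) * (r1^2 + r1*r2 + r2^2)."""
    return pi * h * (r1**2 + r1 * r2 + r2**2) / 3.0
```

Setting one base area (or radius) to zero recovers the ordinary pyramid or cone volume, and equal bases recover a prism or cylinder, consistent with frusta degenerating to those shapes.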
3.
Field of view in video games
–
In first person video games, the field of view or field of vision (FOV) is the extent of the observable game world that is seen on the display at any given moment. It is typically measured as an angle, although whether this angle is the horizontal, vertical, or diagonal component of the field of view varies from game to game. The FOV in a video game may change depending on the aspect ratio of the rendering resolution; in computer games and modern game consoles the FOV normally increases with a wider aspect ratio of the rendering resolution.

The field of view is usually given as an angle for the horizontal or vertical component of the FOV. A larger angle indicates a larger field of view; however, depending on the FOV scaling method used by the game, it may affect only the horizontal or only the vertical component. The different values for horizontal and vertical FOV may lead to confusion because games often just mention FOV without specifying which component. Including peripheral vision, the visual field of the average person is approximately 170-180 degrees.

Console games are played on a TV at a large distance from the viewer. Many PC games released after 2000 are ported from consoles; ideally, the developer will set a wider FOV in the PC release, or offer a setting to change the FOV to the player's preference. However, in some cases the narrow FOV of the console release is retained in the PC version. This results in an uncomfortable sensation likened to viewing the scene through binoculars.

The terms for FOV scaling methods were originally coined by members of the Widescreen Gaming Forum. Hor+ (horizontal plus) is the most common scaling method for the majority of video games. In games with hor+ scaling the vertical FOV is fixed, while the horizontal FOV is expandable depending on the aspect ratio of the rendering resolution. Since the majority of screens used for gaming nowadays are widescreen, this method is favored, and it becomes especially important in more exotic setups like ultra-wide monitor or triple-monitor gaming. Modern games using anamorphic scaling typically have a 16:9 aspect ratio; if this method is used by a game with a 4:3 aspect ratio, the image will be pillarboxed on widescreen resolutions.
Pixel-based scaling is almost exclusively used in games with two-dimensional graphics. With pixel-based scaling, the amount of content displayed on screen is directly tied to the rendering resolution: a larger horizontal resolution directly increases the horizontal field of view. Vert- (vertical minus) is a method used by some games that support a wide variety of resolutions. In vert- games, as the aspect ratio widens, the vertical component of the field of view is reduced to compensate.
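Under hor+ scaling as described above, the horizontal FOV follows from the fixed vertical FOV and the display aspect ratio. A minimal sketch, assuming the aspect ratio is width divided by height; the function name is illustrative:

```python
from math import atan, tan, radians, degrees

def horizontal_fov(vertical_fov_deg, aspect_ratio):
    """Hor+ scaling: derive the horizontal FOV (degrees) from a fixed
    vertical FOV and the aspect ratio (width / height).

    tan(H/2) = tan(V/2) * aspect, so widening the screen widens H
    while V stays fixed."""
    v = radians(vertical_fov_deg)
    return degrees(2 * atan(tan(v / 2) * aspect_ratio))
```

For example, a fixed vertical FOV yields a wider horizontal FOV at 16:9 than at 4:3, which is exactly the behavior the article describes for widescreen and multi-monitor setups.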
4.
Angle of view
–
In photography, angle of view describes the angular extent of a given scene that is imaged by a camera. It is used interchangeably with the more general term field of view. It is important to distinguish the angle of view from the angle of coverage, which describes the angular range that a lens can image; typically the image circle produced by a lens is large enough to cover the film or sensor completely, possibly including some vignetting toward the edge. A camera's angle of view depends not only on the lens, but also on the sensor size. Digital sensors are usually smaller than 35 mm film, and this causes the lens to have a narrower angle of view than with 35 mm film. In everyday digital cameras, the crop factor can range from around 1 to 1.6.

For lenses projecting rectilinear images of distant objects, the effective focal length and the image format dimensions completely define the angle of view: α = 2·arctan(d/(2f)). Calculations for lenses producing non-rectilinear images are more complex and in the end not very useful in most practical applications. Angle of view may be measured horizontally, vertically, or diagonally; for example, for 35 mm film, which is 36 mm wide and 24 mm high, d = 36 mm would be used to obtain the horizontal angle of view and d = 24 mm for the vertical angle. Because this is a trigonometric function, the angle of view does not vary quite linearly with the reciprocal of the focal length. However, except for wide-angle lenses, it is reasonable to approximate α ≈ d/f radians or 180d/(πf) degrees. The effective focal length is nearly equal to the stated focal length of the lens, except in macro photography.

Angle of view can also be determined using FOV tables or paper or software lens calculators. Consider a 35 mm camera with a lens having a focal length of F = 50 mm. The dimensions of the 35 mm image format are 24 mm × 36 mm. Here α is defined to be the angle of view, since it is the angle enclosing the largest object whose image can fit on the film. We want to find the relationship between the angle α, the opposite side of the right triangle, d/2, and the adjacent side, S2, the distance from the lens to the image plane. Using basic trigonometry, we find tan(α/2) = d/(2·S2). For macro photography, we cannot neglect the difference between S2 and F.
From the thin lens formula, 1/F = 1/S1 + 1/S2. A second effect which comes into play in macro photography is lens asymmetry; the lens asymmetry causes an offset between the nodal plane and pupil positions.
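The rectilinear formula α = 2·arctan(d/(2f)) can be evaluated for the 35 mm example above. A small sketch; the function name is my own:

```python
from math import atan, degrees

def angle_of_view(d_mm, f_mm):
    """Angle of view in degrees for a rectilinear lens focused at
    infinity: alpha = 2 * arctan(d / (2 f)), where d is the chosen
    format dimension and f the effective focal length."""
    return degrees(2 * atan(d_mm / (2 * f_mm)))

# 35 mm film (36 mm x 24 mm) with a 50 mm lens:
horizontal = angle_of_view(36, 50)  # roughly 39.6 degrees
vertical = angle_of_view(24, 50)    # roughly 27.0 degrees
```

Using a smaller d (as with a cropped digital sensor) gives a narrower angle, which is the crop-factor effect described above.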
5.
Rendering (computer graphics)
–
Rendering or image synthesis is the process of generating an image from a 2D or 3D model by means of computer programs. Also, the result of displaying such a model can be called a rendering. A scene file contains objects in a strictly defined language or data structure; it would contain geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed. The term rendering may be by analogy with an artist's rendering of a scene. A GPU is a purpose-built device able to assist a CPU in performing complex rendering calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software should solve the rendering equation. The rendering equation doesn't account for all lighting phenomena, but is a general lighting model for computer-generated imagery. Rendering is also used to describe the process of calculating effects in a video editing program to produce final video output.

Rendering is one of the major sub-topics of 3D computer graphics; in the graphics pipeline, it is the last major step, giving the final appearance to the models and animation. With the increasing sophistication of computer graphics since the 1970s, it has become a more distinct subject. Rendering has uses in architecture, video games, simulators, and movie or TV visual effects. As a product, a wide variety of renderers are available; some are integrated into larger modeling and animation packages, some are stand-alone. On the inside, a renderer is a carefully engineered program, based on a selective mixture of disciplines related to light physics, visual perception, mathematics, and software development. In the case of 3D graphics, rendering may be done slowly, as in pre-rendering, or in real time. When the pre-image is complete, rendering is used, which adds in bitmap textures or procedural textures, lights, and bump mapping; the result is a completed image the consumer or intended viewer sees.
For movie animations, several images (frames) must be rendered and stitched together in a program capable of making an animation of this sort; most 3D image editing programs can do this. A rendered image can be understood in terms of a number of visible features, and rendering research and development has been largely motivated by finding ways to simulate these efficiently. Some relate directly to particular algorithms and techniques, while others are produced together. Tracing every particle of light in a scene is nearly always completely impractical and would take a stupendous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted.
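The rendering equation referred to above can be written in its standard form (due to Kajiya) as:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Here L_o is the outgoing radiance at surface point x in direction ω_o, L_e is emitted radiance, f_r is the bidirectional reflectance distribution function, L_i is incoming radiance, and the integral runs over the hemisphere Ω about the surface normal n. The two basic operations of transport and scattering correspond to L_i and f_r respectively.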
6.
Bounding volume
–
In computer graphics and computational geometry, a bounding volume for a set of objects is a closed volume that completely contains the union of the objects in the set. Bounding volumes are used to improve the efficiency of geometrical operations by using simple volumes to contain more complex objects. Normally, simpler volumes have simpler ways to test for overlap. A bounding volume for a set of objects is also a bounding volume for the single object consisting of their union, and the other way around; therefore, it is possible to confine the description to the case of a single object.

Bounding volumes are most often used to accelerate certain kinds of tests. In ray tracing, bounding volumes are used in ray-intersection tests; if the ray or viewing frustum does not intersect the bounding volume, it cannot intersect the object contained within, allowing trivial rejection. Similarly, if the frustum contains the entirety of the bounding volume, the contained object may be accepted without further tests. These intersection tests produce a list of objects that must be displayed. In collision detection, when two bounding volumes do not intersect, the contained objects cannot collide. Testing against a bounding volume is typically much faster than testing against the object itself, because of the bounding volume's simpler geometry. This is because an object is typically composed of polygons or data structures that are reduced to polygonal approximations. In either case, it is wasteful to test each polygon against the view volume if the object is not visible. To obtain the bounding volumes of complex objects, a common way is to break the objects/scene down using a scene graph or, more specifically, a bounding volume hierarchy. The basic idea behind this is to organize a scene in a tree-like structure where the root comprises the whole scene.
The precision of the intersection test is related to the amount of space within the bounding volume not associated with the bounded object; sophisticated bounding volumes generally allow for less such void space but are more computationally expensive. It is common to use several types in conjunction, such as a cheap one for a quick but rough test together with a more precise but more expensive type. The types treated here all give convex bounding volumes; if the object being bounded is known to be convex, this is not a restriction. If non-convex bounding volumes are required, an approach is to represent them as a union of a number of convex bounding volumes. Unfortunately, intersection tests quickly become more expensive as the bounding volumes become more sophisticated. A bounding box is a cuboid, or in 2-D a rectangle, containing the object. In many applications the bounding box is aligned with the axes of the co-ordinate system, and it is then known as an axis-aligned bounding box (AABB). To distinguish the general case from an AABB, an arbitrary bounding box is sometimes called an oriented bounding box.
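The cheap overlap test that makes AABBs attractive can be sketched as follows; this is a minimal illustrative class, not taken from any particular engine:

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box given by its min and max corners."""
    min_x: float
    min_y: float
    min_z: float
    max_x: float
    max_y: float
    max_z: float

    def intersects(self, other: "AABB") -> bool:
        """Two AABBs overlap iff their intervals overlap on every axis."""
        return (self.min_x <= other.max_x and other.min_x <= self.max_x and
                self.min_y <= other.max_y and other.min_y <= self.max_y and
                self.min_z <= other.max_z and other.min_z <= self.max_z)
```

Six comparisons suffice, which is why a failed AABB test lets a renderer or collision system reject an object without ever touching its polygons.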
7.
Normal (geometry)
–
In geometry, a normal is an object such as a line or vector that is perpendicular to a given object. For example, in the two-dimensional case, the normal line to a curve at a given point is the line perpendicular to the tangent line to the curve at the point. In the three-dimensional case, a surface normal, or simply normal, to a surface at a point P is a vector perpendicular to the tangent plane of that surface at P. The word normal is also used as an adjective: a line normal to a plane, the normal component of a force. The concept of normality generalizes to orthogonality.

The concept has been generalized to differentiable manifolds of arbitrary dimension embedded in a Euclidean space. The normal vector space or normal space of a manifold at a point P is the set of the vectors which are orthogonal to the tangent space at P. In the case of differential curves, the curvature vector is a normal vector of special interest.

For a convex polygon, a surface normal can be calculated as the vector cross product of two non-parallel edges of the polygon. For a plane given by the equation ax + by + cz + d = 0, the vector (a, b, c) is a normal. For a hyperplane in n+1 dimensions, given by the equation r = a0 + α1·a1 + ⋯ + αn·an, where a0 is a point on the hyperplane and the ai for i = 1, …, n are linearly independent vectors lying on the hyperplane, a normal to the hyperplane is any vector in the null space of the matrix A whose rows are the ai. That is, any vector orthogonal to all in-plane vectors is by definition a surface normal.

If a surface S is parameterized by a system of curvilinear coordinates x(s, t), with s and t real variables, then a normal is given by the cross product of the partial derivatives, ∂x/∂s × ∂x/∂t. For a surface S given explicitly as a function f of the independent variables x, y (e.g., z = f(x, y)), a normal can be found in at least two ways. The first one is obtaining its implicit form F = z − f(x, y) = 0, from which the normal follows readily as the gradient ∇F. The second way of obtaining the normal follows directly from the gradient ∇f: by inspection, ∇F = k̂ − ∇f. Note that this is equal to ∇F = k̂ − (∂f/∂x)î − (∂f/∂y)ĵ. If a surface does not have a tangent plane at a point, it does not have a normal at that point either.
For example, a cone does not have a normal at its tip, nor does it have a normal along the edge of its base; however, the normal to the cone is defined almost everywhere. In general, it is possible to define a normal almost everywhere for a surface that is Lipschitz continuous. A normal to a surface does not have a unique direction: the vector pointing in the opposite direction of a surface normal is also a surface normal. For an oriented surface, the normal is usually determined by the right-hand rule.
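The cross-product construction of a polygon normal described above can be sketched as follows; the helper names are my own:

```python
def subtract(a, b):
    """Vector difference a - b of two 3D points."""
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(u, v):
    """Cross product of two 3D vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def polygon_normal(p0, p1, p2):
    """Surface normal of a (convex) polygon, computed from two of its
    edges; the result follows the right-hand rule for the vertex order."""
    return cross(subtract(p1, p0), subtract(p2, p0))
```

Reversing the vertex order flips the sign of the result, reflecting the fact that a surface normal has no unique direction until an orientation is chosen.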
8.
Aspect ratio (image)
–
The aspect ratio of an image describes the proportional relationship between its width and its height. It is commonly expressed as two numbers separated by a colon, as in 16:9. The most common aspect ratios used today in the presentation of films in cinemas are 1.85:1 and 2.39:1. Two common videographic aspect ratios are 4:3, the universal video format of the 20th century, and 16:9, universal for high-definition television. Other cinema and video aspect ratios exist, but are used infrequently. In still camera photography, the most common aspect ratios are 4:3, 3:2, and, more recently being found in consumer cameras, 16:9. Other aspect ratios, such as 5:3 and 5:4, are also used.

In motion picture formats, the physical size of the film area between the sprocket perforations determines the image's size. The universal standard is a frame that is four perforations high. The film itself is 35 mm wide, but the area between the perforations is 24.89 mm × 18.67 mm, leaving the de facto ratio of 4:3, or 1.33:1. A 4:3 ratio mimics the human eyesight visual angle of 155° horizontal × 120° vertical. The motion picture industry convention assigns a value of 1.0 to the image's height; an anamorphic frame is often incorrectly described as 2.40:1 or 2.40. After 1952, a number of aspect ratios were experimented with for anamorphic productions, and a SMPTE specification for anamorphic projection from 1957 finally standardized the aperture to 2.35:1. An update in 1970 changed the ratio to 2.39:1 in order to make splices less noticeable. This aspect ratio of 2.39:1 was confirmed by the most recent revision, from August 1993. In American cinemas, the common projection ratios are 1.85:1 and 2.39:1. Some European countries have 1.66:1 as the wide screen standard. The Academy ratio of 1.375:1 was used for all cinema films in the sound era until 1953. During that time, television, which had an aspect ratio of 1.33:1, was drawing audiences away from cinemas. Hollywood responded by creating a number of wide-screen formats: CinemaScope and Todd-AO.
The flat 1.85:1 aspect ratio was introduced in May 1953, and became one of the most common cinema projection standards in the U.S. and elsewhere. The goal of these lenses and aspect ratios was to capture as much of the frame as possible, onto as large an area of the film as possible. In either case the image was squeezed horizontally to fit the film's frame size. Development of various film camera systems must ultimately cater to the placement of the frame in relation to the constraints of the perforations.
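The arithmetic behind ratios such as 4:3 or 1.85:1 can be sketched briefly; the function names are illustrative:

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce integer dimensions to the simplest x:y ratio."""
    g = gcd(width, height)
    return (width // g, height // g)

def decimal_ratio(width, height):
    """Ratio normalized against a height of 1.0, the motion picture
    industry convention (e.g. 1.85:1, 2.39:1)."""
    return round(width / height, 2)
```

For instance, a 1920 x 1080 frame reduces to 16:9, and the 24.89 mm x 18.67 mm silent aperture works out to roughly 1.33, i.e. 4:3.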
9.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country.

The initial ISBN configuration of recognition was generated in 1967 based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines.

The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
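The SBN-to-ISBN conversion and the ISBN-10 check digit mentioned above can be sketched as follows; the check digit uses the standard mod-11 weighted sum, and the function names are my own:

```python
def isbn10_check_digit(first9: str) -> str:
    """ISBN-10 check digit: the weighted sum of the first nine digits
    (weights 10 down to 2) plus the check digit must be divisible by 11.
    A check value of 10 is written as 'X'."""
    total = sum((10 - i) * int(d) for i, d in enumerate(first9))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

def sbn_to_isbn10(sbn: str) -> str:
    """Convert a 9-digit SBN to an ISBN-10 by prefixing the digit 0.
    The check digit is unchanged, since the extra leading zero has
    weight 10 and contributes nothing to the mod-11 sum."""
    return "0" + sbn
```

Running this on the article's example confirms that SBN 340 01381 8 converts to ISBN 0-340-01381-8 with the check digit 8 still valid.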
10.
Computer graphics
–
Computer graphics are pictures and films created using computers. Usually, the term refers to computer-generated image data created with help from specialized hardware and software. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, though sometimes referred to as CGI (computer-generated imagery). The overall methodology depends heavily on the sciences of geometry, optics, and physics.

Computer graphics is responsible for displaying art and image data effectively and meaningfully to the user, and it is also used for processing image data received from the physical world. Computer graphic development has had a significant impact on many types of media and has revolutionized animation, movies, advertising, and video games. The term computer graphics has been used in a broad sense to describe almost everything on computers that is not text or sound. Such imagery is found in and on television, newspapers, and weather reports; a well-constructed graph can present complex statistics in a form that is easier to understand and interpret. In the media such graphs are used to illustrate papers, reports, and theses, and many tools have been developed to visualize data.

Computer-generated imagery can be categorized into different types: two-dimensional, three-dimensional, and animated graphics. As technology has improved, 3D computer graphics have become more common. Computer graphics has emerged as a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Screens could display art since the Lumière brothers' use of mattes to create special effects for the earliest films dating from 1895. New kinds of displays were needed to process the wealth of information resulting from such projects; early projects like the Whirlwind and SAGE projects introduced the CRT as a viable display and interaction interface and introduced the light pen as an input device.
Douglas T. Ross of the Whirlwind SAGE system performed an experiment in 1954 in which a small program he wrote captured the movement of his finger. Electronics pioneer Hewlett-Packard went public in 1957, after incorporating the decade prior, and established ties with Stanford University through its founders. This began the transformation of the southern San Francisco Bay Area into the world's leading computer technology hub, now known as Silicon Valley. The field of computer graphics developed with the emergence of computer graphics hardware, and further advances in computing led to greater advancements in interactive computer graphics. In 1959, the TX-2 computer was developed at MIT's Lincoln Laboratory; the TX-2 integrated a number of new man-machine interfaces.