Ray tracing (graphics)
In computer graphics, ray tracing is a rendering technique for generating an image by tracing the path of light as pixels in an image plane and simulating the effects of its encounters with virtual objects. The technique can produce a degree of visual realism higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited for applications where a long per-frame render time can be tolerated, such as still images and film and television visual effects, and more poorly suited for real-time applications such as video games, where speed is critical. Ray tracing can simulate a wide variety of optical effects, such as reflection, refraction and dispersion phenomena. Optical ray tracing describes a method for producing visual images constructed in 3D computer graphics environments, with more photorealism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through each pixel in a virtual screen and calculating the color of the object visible through it.
Scenes in ray tracing are described mathematically by a visual artist. Scenes may also incorporate data from images and models captured by means such as digital photography. Each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm estimates the incoming light at the point of intersection, examines the material properties of the object, and combines this information to calculate the final color of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene. It may at first seem counterintuitive or "backward" to send rays away from the camera, rather than into it, but doing so is many orders of magnitude more efficient. Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could waste a tremendous amount of computation on light paths that are never recorded.
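The backward process described above can be sketched in a few lines of code. The following is a minimal illustration, not a production renderer: a single eye ray is tested against a list of spheres, the nearest hit is kept, and the pixel is shaded with a simple Lambertian (diffuse) term from one point light. The scene, the function names and the shading model are our assumptions, not taken from the text.

```python
# Minimal backward ray tracing sketch: intersect an eye ray with spheres,
# keep the nearest hit, and shade it with a Lambertian term from a point light.
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t, or None if the ray misses.
    `direction` is assumed normalized, so the quadratic's leading coefficient is 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def trace(origin, direction, spheres, light):
    """Find the nearest hit among all spheres; return a grey shade in [0, 1]."""
    nearest = None
    for center, radius in spheres:
        t = intersect_sphere(origin, direction, center, radius)
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, center, radius)
    if nearest is None:
        return 0.0  # ray escaped the scene: background
    t, center, radius = nearest
    hit = [o + t * d for o, d in zip(origin, direction)]
    normal = [(h - c) / radius for h, c in zip(hit, center)]
    to_light = [l - h for l, h in zip(light, hit)]
    norm = math.sqrt(sum(x * x for x in to_light))
    to_light = [x / norm for x in to_light]
    # Lambert's cosine law: brightness is the dot of normal and light direction.
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))

# One sphere in front of the eye, one light above the scene.
spheres = [([0.0, 0.0, -5.0], 1.0)]
shade = trace([0.0, 0.0, 0.0], [0.0, 0.0, -1.0], spheres, [0.0, 5.0, 0.0])
```

Note that this sketch omits the re-cast rays mentioned above (shadows, reflection, refraction); a full tracer would recurse from the hit point.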
Therefore, the shortcut taken in ray tracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and the pixel's value is updated. On input we have:

E ∈ ℝ³ – eye position
T ∈ ℝ³ – target position
θ ∈ [0, π) – field of view; for humans we can assume θ ≈ π/2 rad = 90°
m, k ∈ ℕ – numbers of square pixels on the viewport in the vertical and horizontal direction respectively
i, j ∈ ℕ, 1 ≤ i ≤ k ∧ 1 ≤ j ≤ m – indices of the actual pixel
w⃗ ∈ ℝ³ – vertical vector which indicates where up and down are; it supplies the roll component, which determines the viewport rotation around the point C

The idea is to find the position of each viewport pixel center P_ij, which allows us to find the line going from the eye E through that pixel, giving the ray described by the point E and the vector R⃗_ij = P_ij − E. First we need to find the coordinates of the bottom-left viewport pixel P_1m; each further pixel is then found by a shift along the directions parallel to the viewport, multiplied by the size of a pixel.
Below we introduce formulas which include the distance d between the eye and the viewport; this value cancels out during normalization of the ray vector r⃗_ij. Pre-calculations: first find and normalize the viewing vector t⃗ and the vectors b⃗ and v⃗ which are parallel to the viewport (reconstructed here from the definitions above, since the original formulas were cut off):

t⃗ = (T − E) / ‖T − E‖,   b⃗ = (w⃗ × t⃗) / ‖w⃗ × t⃗‖,   v⃗ = t⃗ × b⃗
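The pre-calculations above can be sketched in code. This is a minimal illustration under the stated definitions, with our own assumptions spelled out in comments: θ is treated as the horizontal field of view, pixel (i, j) is indexed from the top-left with its center sampled, and d is set to 1 since it cancels when the ray is normalized. The function name `viewport_ray` is ours, not from the text.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def viewport_ray(E, T, w, theta, k, m, i, j):
    """Normalized ray direction from eye E through the center of pixel (i, j)
    on a k x m viewport with horizontal field of view theta (assumption: square
    pixels, so the viewport height/width ratio is m/k)."""
    t = normalize([T[a] - E[a] for a in range(3)])  # viewing direction
    b = normalize(cross(w, t))                      # viewport horizontal axis
    v = cross(t, b)                                 # viewport vertical axis
    d = 1.0                                         # eye-viewport distance; cancels on normalize
    half_width = d * math.tan(theta / 2.0)
    half_height = half_width * m / k
    px = (2.0 * (i - 0.5) / k - 1.0) * half_width   # offset of pixel center along b
    py = (1.0 - 2.0 * (j - 0.5) / m) * half_height  # offset of pixel center along v
    P = [E[a] + d * t[a] + px * b[a] + py * v[a] for a in range(3)]
    return normalize([P[a] - E[a] for a in range(3)])
```

For the central pixel of an odd-sized viewport, both offsets vanish and the ray coincides with the viewing direction t⃗, which is a quick sanity check on the formulas.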
A display device is an output device for the presentation of information in visual or tactile form. When the input information is supplied as an electrical signal, the display is called an electronic display. Common applications for electronic visual displays are televisions and computer monitors. In the history of display technology, a variety of display devices and technologies have been used. There are various designs for display devices, and several components are common to most of them:

Display, or screen: the portion of the device that displays the changeable image
Bezel: the area surrounding the portion that displays changing information
Housing: the enclosure of the display

These are the technologies used to create the various displays in use today:

Electroluminescent display
Liquid crystal display, including LED-backlit LCD
Light-emitting diode (LED) display
OLED display
AMOLED display
Plasma display
Quantum dot display

Some displays can show only digits or alphanumeric characters.
They are called segment displays, because they are composed of several segments that switch on and off to give the appearance of the desired glyph. The segments are single LEDs or liquid crystals; they are used in digital watches and pocket calculators. There are several types:

Seven-segment display
Fourteen-segment display
Sixteen-segment display
HD44780 LCD controller, a widely accepted protocol for LCDs

Underlying technologies for segment displays include incandescent filaments, vacuum fluorescent displays, cold cathode gas discharge, light-emitting diodes, liquid crystal displays, and physical vanes with electromagnetic activation. Two-dimensional displays that cover a full area are called video displays, since this is the main modality of presenting video. Full-area 2-dimensional displays are used in, for example:

Television sets
Computer monitors
Head-mounted displays
Broadcast reference monitors
Medical monitors

Underlying technologies for full-area 2-dimensional displays include:

Cathode ray tube display
Light-emitting diode display
Electroluminescent display
Electronic paper, E Ink
Plasma display panel
Liquid crystal display, including High-Performance Addressing and thin-film transistor displays
Organic light-emitting diode display
Digital Light Processing display
Surface-conduction electron-emitter display
Field emission display
Laser TV
Carbon nanotube display
Quantum dot display
Interferometric modulator display
Digital microshutter display

The multiplexed display technique is used to drive most display devices.
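As a concrete illustration of how a segment display forms glyphs, here is the conventional encoding of decimal digits onto the seven segments, which are customarily labelled a through g. The mapping below is the standard seven-segment layout; the function name is our own.

```python
# Seven segments, conventionally labelled a (top), b (top right), c (bottom
# right), d (bottom), e (bottom left), f (top left), g (middle bar).
# Each digit switches on a subset of the segments.
SEGMENTS = {
    0: "abcdef", 1: "bc",     2: "abged",   3: "abgcd", 4: "fgbc",
    5: "afgcd",  6: "afgedc", 7: "abc",     8: "abcdefg", 9: "abcfgd",
}

def lit_segments(number):
    """Return, for each decimal digit of `number`, the sorted list of
    segments that switch on to show it."""
    return [sorted(SEGMENTS[int(ch)]) for ch in str(number)]
```

For example, showing "1" lights only the two right-hand segments, while "8" lights all seven, which is why the digit 8 is the worst case for multiplexed drive current.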
Other display types include:

Swept-volume display
Varifocal mirror display
Emissive volume display
Laser display
Holographic display
Light field display
Ticker tape
Split-flap display
Flip-disc display
Rollsign

Tactile electronic displays are intended for the blind. They use electro-mechanical parts to dynamically update a tactile image so that the image may be felt by the fingers. The Optacon uses metal rods instead of light in order to convey images to blind people by tactile sensation. The Society for Information Display is an international professional organization dedicated to the study of display technology. The University of Waterloo Stratford Campus offers students the opportunity to display their work on the school's 3-storey Christie MicroTile wall.
Graphics processing unit
A graphics processing unit (GPU) is a specialized electronic circuit designed to manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in embedded systems, mobile phones, personal computers and game consoles. Modern GPUs are efficient at manipulating computer graphics and image processing; their parallel structure makes them more efficient than general-purpose CPUs for algorithms that process large blocks of data in parallel. In a personal computer, a GPU can be present on a video card or embedded on the motherboard; in certain CPUs, they are embedded on the CPU die. The term GPU has been in use since at least the 1980s. It was popularized by Nvidia in 1999, who marketed the GeForce 256 as "the world's first GPU", presenting it as a "single-chip processor with integrated transform, triangle setup/clipping, rendering engines". Rival ATI Technologies coined the term "visual processing unit" or VPU with the release of the Radeon 9700 in 2002.
Arcade system boards have been using specialized graphics chips since the 1970s. In early video game hardware, the RAM for frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor. Fujitsu's MB14241 video shifter was used to accelerate the drawing of sprite graphics for various 1970s arcade games from Taito and Midway, such as Gun Fight, Sea Wolf and Space Invaders; the Namco Galaxian arcade system in 1979 used specialized graphics hardware supporting RGB color, multi-colored sprites and tilemap backgrounds. The Galaxian hardware was used during the golden age of arcade video games, by game companies such as Namco, Gremlin, Konami, Nichibutsu and Taito. In the home market, the Atari 2600 in 1977 used a video shifter called the Television Interface Adaptor; the Atari 8-bit computers had ANTIC, a video processor which interpreted instructions describing a "display list"—the way the scan lines map to specific bitmapped or character modes and where the memory is stored.
6502 machine code subroutines could be triggered on scan lines by setting a bit on a display list instruction. ANTIC also supported smooth vertical and horizontal scrolling independent of the CPU. The NEC µPD7220 was one of the first implementations of a graphics display controller as a single Large Scale Integration (LSI) integrated circuit chip, enabling the design of low-cost, high-performance video graphics cards such as those from Number Nine Visual Technology; it became one of the best known graphics display controllers of the 1980s. The Williams Electronics arcade games Robotron: 2084, Joust and Bubbles, all released in 1982, contain custom blitter chips for operating on 16-color bitmaps. In 1985, the Commodore Amiga featured a custom graphics chip, with a blitter unit accelerating bitmap manipulation, line drawing, and area fill functions. It also included a coprocessor with its own primitive instruction set, capable of manipulating graphics hardware registers in sync with the video beam, or driving the blitter. In 1986, Texas Instruments released the TMS34010, the first microprocessor with on-chip graphics capabilities.
It could run general-purpose code, but it had a graphics-oriented instruction set. In 1990–1992, this chip became the basis of the Texas Instruments Graphics Architecture Windows accelerator cards. In 1987, the IBM 8514 graphics system was released as one of the first video cards for IBM PC compatibles to implement fixed-function 2D primitives in hardware. The same year, Sharp released the X68000, which used a custom graphics chipset that was powerful for a home computer at the time, with a 65,536-color palette and hardware support for sprites and multiple playfields; it served as a development machine for Capcom's CP System arcade board. Fujitsu competed with the FM Towns computer, released in 1989 with support for a full 16,777,216-color palette. In 1988, the first dedicated polygonal 3D graphics boards were introduced in arcades with the Namco System 21 and Taito Air System. In 1991, S3 Graphics introduced the S3 86C911, which its designers named after the Porsche 911 as an indication of the performance increase it promised.
The 86C911 spawned a host of imitators: by 1995, all major PC graphics chip makers had added 2D acceleration support to their chips. By this time, fixed-function Windows accelerators had surpassed expensive general-purpose graphics coprocessors in Windows performance, and those coprocessors faded from the PC market. Throughout the 1990s, 2D GUI acceleration continued to evolve; as manufacturing capabilities improved, so did the level of integration of graphics chips. Additional application programming interfaces arrived for a variety of tasks, such as Microsoft's WinG graphics library for Windows 3.x and their DirectDraw interface for hardware acceleration of 2D games within Windows 95 and later. In the early and mid-1990s, real-time 3D graphics were becoming common in arcade and console games, which led to increasing public demand for hardware-accelerated 3D graphics. Early examples of mass-market 3D graphics hardware can be found in arcade system boards such as the Sega Model 1, Namco System 22 and Sega Model 2, and in fifth-generation video game consoles such as the Saturn, PlayStation and Nintendo 64.
Arcade systems such as the Sega Model 2 and Namco Magic Edge Hornet Simulator in 1993 were capable of hardware T&L (transform and lighting) years before it appeared in consumer graphics cards.
Quadro is Nvidia's brand for graphics cards intended for use in workstations running professional computer-aided design, computer-generated imagery, digital content creation applications, scientific calculations and machine learning. The GPU chips on Quadro-branded graphics cards are identical to those used on GeForce-branded graphics cards; the Quadro cards differ in their ECC memory and enhanced floating-point precision, which greatly reduce the risk of calculation errors. The Nvidia Quadro product line directly competes with AMD's Radeon Pro line of professional workstation cards. The Quadro line of GPU cards emerged from an effort at market segmentation by Nvidia. In introducing Quadro, Nvidia was able to charge a premium for the same graphics hardware in professional markets and direct resources to properly serve the needs of those markets. To differentiate their offerings, Nvidia used driver software and firmware to selectively enable features vital to segments of the workstation market, such as high-performance anti-aliased lines and two-sided lighting, in the Quadro product.
The Quadro line received improved support through a certified driver program. These features were of little value to the gamers that Nvidia's products were sold to, but their lack prevented high-end customers from using the less expensive products. There are parallels between the market segmentation used to sell the Quadro line of products to workstation markets and that used to sell the Tesla line of products to engineering and HPC markets. In a settlement of a patent infringement lawsuit between SGI and Nvidia, SGI acquired rights to speed-binned Nvidia graphics chips, which they shipped under the VPro product label; these designs were separate from the SGI Odyssey based VPro products sold with their IRIX workstations, which used a different bus. SGI's Nvidia-based VPro line included the VPro V3, VPro VR3, VPro V7 and VPro VR7. Additional SDI Capture and SDI Output cards are available only for Quadro 4000 cards and higher. Quadro Plex is a line of external servers for rendering videos; a Quadro Plex contains multiple Quadro FX video cards.
A client computer connects to a Quadro Plex to initiate rendering. Scalable Link Interface (SLI) is the next generation of Plex. SLI can improve frame rendering and FSAA. Quadro SLI supports Mosaic for 8 monitors; with a Quadro SYNC card, a maximum of 16 monitors is possible. Most cards have an SLI bridge slot for 3 or 4 cards on one main board. Acceleration of scientific calculations is possible with CUDA and OpenCL. Nvidia has 4 types of SLI bridges: standard bridge, LED bridge, high-bandwidth bridge, and PCIe lanes reserved only for SLI. Nvidia supports supercomputing with its 8-GPU Visual Computing Appliance (VCA). Nvidia Iray, Chaos Group V-Ray and Nvidia OptiX accelerate ray tracing for Maya, 3DS Max, Cinema 4D, Rhinoceros and others. All software using CUDA or OpenCL, such as ANSYS, NASTRAN, ABAQUS and OpenFOAM, can benefit from VCA. The DGX-1 is available with 8 GP100 cards. The Quadro RTX series is based on the Turing microarchitecture and features real-time ray tracing.
This is accelerated by the use of new RT cores, which are designed to process quadtrees and spherical hierarchies and speed up collision tests with individual triangles. The ray tracing performed by the RT cores can be used to produce reflections and shadows, replacing traditional raster techniques such as cube maps and depth maps. Rather than replacing rasterization outright, however, the information gathered from ray tracing can be used to augment the shading with information that is much more photo-realistic, especially regarding off-camera action. Tensor cores further accelerate ray tracing and are used to fill in the blanks in a rendered image, a technique known as de-noising. The Tensor core applies the result of deep learning performed on supercomputers to, for example, increase the resolution of images. In the Tensor core's primary usage, a problem to be solved is analyzed on a supercomputer, which is taught by example what results are desired; the supercomputer then determines a method to achieve those results, which is executed on the consumer's Tensor cores.
These methods are delivered "over the air" to consumers. RTX is the name of the development platform introduced for the Quadro RTX series; RTX leverages Microsoft's DXR, Nvidia's OptiX, and Vulkan for access to ray tracing. Many of these cards use the same core as the game-oriented GeForce video cards by Nvidia. Those cards that are identical to the desktop cards can be software-modified to identify themselves as the equivalent Quadro cards, which allows optimized drivers intended for the Quadro cards to be installed on the system. While this may not offer all of the performance of the equivalent Quadro card, it can improve performance in certain applications, though it may require installing the MAXtreme driver for comparable speed. The performance difference comes from the firmware controlling the card. Given the importance of speed in a game, a system used for gaming can cut short textures, shading, or rendering after only approximating a final output, in order to keep the overall frame rate high. The algorithms on a CAD-oriented card tend instead to complete all rendering operations, even if that introduces delays or variations in timing, prioritising accuracy and rendering quality over speed.
A GeForce card focuses more on texture fillrates and high framerates with lighting and sound, while Quadro cards prioritize wireframe rendering and object
In elementary geometry, a polygon is a plane figure described by a finite number of straight line segments connected to form a closed polygonal chain or polygonal circuit. The solid plane region, the bounding circuit, or the two together may be called a polygon. The segments of a polygonal circuit are called its edges or sides, and the points where two edges meet are the polygon's vertices or corners. The interior of a solid polygon is sometimes called its body. An n-gon is a polygon with n sides. A simple polygon is one which does not intersect itself. Mathematicians are often concerned only with the bounding polygonal chains of simple polygons, and they often define a polygon accordingly. A polygonal boundary may be allowed to cross over itself, creating star polygons and other self-intersecting polygons. A polygon is a 2-dimensional example of the more general polytope in any number of dimensions. There are many more generalizations of polygons defined for different purposes. The word polygon derives from the Greek adjective πολύς "much", "many" and γωνία "corner" or "angle".
It has been suggested that γόνυ (gónu) "knee" may be the origin of "gon". Polygons are classified by the number of sides; see the table below. Polygons may also be characterized by their convexity or type of non-convexity:

Convex: any line drawn through the polygon meets its boundary exactly twice. As a consequence, all its interior angles are less than 180°. Equivalently, any line segment with endpoints on the boundary passes through only interior points between its endpoints.
Non-convex: a line may be found which meets its boundary more than twice. Equivalently, there exists a line segment between two boundary points that passes outside the polygon.
Simple: the boundary of the polygon does not cross itself. All convex polygons are simple.
Concave: non-convex and simple. There is at least one interior angle greater than 180°.
Star-shaped: the whole interior is visible from at least one point, without crossing any edge. The polygon must be simple, and may be convex or concave. All convex polygons are star-shaped.
Self-intersecting: the boundary of the polygon crosses itself.
The term complex is sometimes used in contrast to simple, but this usage risks confusion with the idea of a complex polygon as one which exists in the complex Hilbert plane consisting of two complex dimensions.

Star polygon: a polygon which self-intersects in a regular way. A polygon cannot be both a star and star-shaped.
Equiangular: all corner angles are equal.
Cyclic: all corners lie on a single circle, called the circumcircle.
Isogonal or vertex-transitive: all corners lie within the same symmetry orbit. The polygon is also cyclic and equiangular.
Equilateral: all edges are of the same length. The polygon need not be convex.
Tangential: all sides are tangent to an inscribed circle.
Isotoxal or edge-transitive: all sides lie within the same symmetry orbit. The polygon is also equilateral and tangential.
Regular: the polygon is both isogonal and isotoxal. Equivalently, it is both cyclic and equilateral, or both equilateral and equiangular. A non-convex regular polygon is called a regular star polygon.
Rectilinear: the polygon's sides meet at right angles, i.e. all its interior angles are 90 or 270 degrees.
Monotone with respect to a given line L: every line orthogonal to L intersects the polygon not more than twice.

Euclidean geometry is assumed throughout. Any polygon has as many corners as it has sides; each corner has several angles. The two most important ones are:

Interior angle – The sum of the interior angles of a simple n-gon is (n − 2)π radians or (n − 2) × 180 degrees. This is because any simple n-gon can be considered to be made up of (n − 2) triangles, each of which has an angle sum of π radians or 180 degrees. The measure of any interior angle of a convex regular n-gon is (180 − 360/n) degrees. The interior angles of regular star polygons were first studied by Poinsot, in the same paper in which he describes the four regular star polyhedra: for a regular p/q-gon (a p-gon of density q), each interior angle is π(p − 2q)/p radians, or 180(p − 2q)/p degrees.
Exterior angle – The exterior angle is the supplementary angle to the interior angle. Tracing around a convex n-gon, the angle "turned" at a corner is the exterior angle. Tracing all the way around the polygon makes one full turn, so the sum of the exterior angles must be 360°.
This argument can be generalized to concave simple polygons, if external angles that turn in the opposite direction are subtracted from the total turned. Tracing around an n-gon in general, the sum of the exterior angles can be any integer multiple d of 360°, e.g. 720° for a pentagram and 0° for an angular "eight" or antiparallelogram, where d is the density or starriness of the polygon. See orbit. In this section, the vertices of the polygon under consideration are taken to be (x₁, y₁), (x₂, y₂), …, (xₙ, yₙ).
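Given a vertex list as above, the area of a simple polygon follows from the standard shoelace formula, and the interior-angle sum from the triangle decomposition described earlier. A minimal sketch (the function names are ours):

```python
def polygon_area(vertices):
    """Signed area of a simple polygon via the shoelace formula:
    A = 1/2 * sum(x_i * y_{i+1} - x_{i+1} * y_i), indices taken mod n.
    Positive for counter-clockwise vertex order, negative for clockwise."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return s / 2.0

def interior_angle_sum(n):
    """Sum of interior angles of a simple n-gon, in degrees: (n - 2) * 180."""
    return (n - 2) * 180

# A unit square traced counter-clockwise has signed area +1.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

The sign of the shoelace result also encodes the tracing direction discussed above: reversing the vertex order flips the sign, just as tracing the polygon the other way reverses every exterior angle.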
In computer graphics, image tracing, raster-to-vector conversion, or vectorization is the conversion of raster graphics into vector graphics. A raster image does not have any structure: it is just a collection of marks on paper, grains in film, or pixels in a bitmap. While such an image is useful, it has some limits. If the image is magnified enough, its artifacts appear: the halftone dots, film grains and pixels become apparent, and images of sharp edges become jagged (see, for example, pixelation). Ideally, a vector image does not have the same problem: edges and filled areas are represented as mathematical curves or gradients, and they can be magnified arbitrarily. The task in vectorization is to convert a two-dimensional image into a two-dimensional vector representation of the image. It does not examine the image and attempt to recognize or extract a three-dimensional model which may be depicted. For most applications, vectorization also does not involve optical character recognition; in vectorization the shape of each character is preserved, so artistic embellishments remain.
Synthetic images such as maps, logos, clip art and technical drawings are suitable for vectorization. Such images could have been made as vector images in the first place, because they are based on geometric shapes or drawn with simple curves. Continuous-tone photographs, by contrast, are not good candidates for vectorization. The input to vectorization is an image, but an image may come in many forms, such as a photograph, a drawing on paper, or one of several raster file formats. Programs that do raster-to-vector conversion may accept bitmap formats such as TIFF, BMP and PNG; the output is a vector file format. Common vector formats are SVG, DXF, EPS, EMF and AI. Vectorization can also be used to recover lost work. Personal computers often come with a simple paint program that produces a bitmap output file; these programs allow users to make simple illustrations by adding text, drawing outlines, and filling an outline with a specific color. Only the results of these operations, not the operations themselves, are saved in the resulting bitmap. Vectorization can be used to recapture some of the information that was lost.
Vectorization is also used to recover information that was once in a vector format but has been lost or become unavailable. A company may have commissioned a logo from a graphic arts firm. Although the graphics firm used a vector format, the client company may not have received a copy of that format; the company may then acquire a vector format by scanning and vectorizing a paper copy of the logo. Vectorization starts with an image. The image can be vectorized manually: a person could look at the image, make some measurements, and write the output file by hand. That was the case for the vectorization of a technical illustration about neutrinos. The illustration has a lot of text, and the original image did not have any curves, so the conversion was straightforward. Curves make the conversion more complicated. Manual vectorization of complicated shapes can be facilitated by the tracing function built into some vector graphics editing programs. If the image is not yet in machine-readable form, it has to be scanned into a usable file format.
Once there is a machine-readable bitmap, the image can be imported into a graphics editing program, and a person can manually trace the elements of the image using the program's editing features. Curves in the original image can be approximated with lines and Bézier curves; an illustration program allows spline knots to be adjusted for a close fit. Manual vectorization is possible. Although graphics drawing programs have been around for a long time, artists may find the freehand drawing facilities awkward even when a drawing tablet is used. Instead of tracing freehand in a program, Pepper recommends making an initial sketch on paper. Rather than scanning the sketch and tracing it freehand in the computer, Pepper states: "Those proficient with a graphic tablet and stylus could make the following changes directly in CorelDRAW by using a scan of the sketch as an underlay and drawing over it. I prefer to use pen and ink, a light table". The line-drawing image was scanned at 600 dpi, cleaned up in a paint program, and then automatically traced with a program.
Once the black-and-white image was in the graphics program, some other elements were added and the figure was colored. Ploch recreated a design from a digital photograph: the JPEG was imported, and some "basic shapes" were traced by hand and colored in the graphics drawing program. Ploch used a bitmap editor to crop the more complex image components; he then printed the image and traced it by hand onto tracing paper to get a clean black-and-white line drawing. That drawing was scanned and vectorized with a program. There are also programs that perform automatic vectorization; example programs are Adobe Streamline, Corel's PowerTRACE, and Potrace. Some of these programs have a command line interface, while others are interactive and allow the user to adjust the conversion settings and view the result. Adobe Streamline is not only an interactive program, it also allows a user to manually edit the input bitmap and the output curves. Corel's PowerTRACE is accessed through CorelDRAW.
PowerVR is a division of Imagination Technologies that develops hardware and software for 2D and 3D rendering, for video encoding and associated image processing, and for DirectX, OpenGL ES, OpenVG and OpenCL acceleration. The PowerVR product line was originally introduced to compete in the desktop PC market for 3D hardware accelerators, with a better price–performance ratio than existing products like those from 3dfx Interactive. Rapid changes in that market, notably the introduction of OpenGL and Direct3D, led to rapid consolidation. PowerVR introduced new versions with low-power electronics aimed at the laptop computer market. Over time, this developed into a series of designs that could be incorporated into system-on-a-chip architectures suitable for handheld devices. PowerVR accelerators are not manufactured by PowerVR; instead, their integrated circuit designs and patents are licensed to other companies, such as Texas Instruments, Intel, NEC, BlackBerry, Samsung, STMicroelectronics, Apple, NXP Semiconductors and many others.
The PowerVR chipset uses a method of 3D rendering known as tile-based deferred rendering: tile-based rendering combined with PowerVR's proprietary method of hidden surface removal and hierarchical scheduling technology. As the polygon-generating program feeds triangles to the PowerVR, it stores them in memory in a triangle strip or an indexed format. Unlike other architectures, polygon rendering is not performed until all polygon information has been collated for the current frame. Furthermore, the expensive operations of texturing and shading pixels are delayed, whenever possible, until the visible surface at a pixel is determined; hence rendering is deferred. In order to render, the display is split into rectangular sections in a grid pattern; each section is known as a tile. Associated with each tile is a list of the triangles that overlap it. Each tile is rendered in turn to produce the final image. Tiles are rendered using a process similar to ray casting: rays are numerically simulated as if cast onto the triangles associated with the tile, and a pixel is rendered from the triangle closest to the camera.
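The per-tile process described above can be sketched in code. This is a deliberately simplified illustration of tile binning and deferred per-pixel depth selection, not PowerVR's actual pipeline: primitives are reduced to axis-aligned rectangles at constant depth, only a color is "shaded", and all names are our own.

```python
# Simplified sketch of tile-based deferred rendering: primitives are binned
# into the tiles they overlap, then each pixel takes the color of the nearest
# primitive covering it. Primitives here are axis-aligned rectangles with a
# constant depth, which keeps the intersection test trivial.

TILE = 4  # tile edge length in pixels

def bin_primitives(prims, width, height):
    """Build, for each tile, the list of primitives whose bounds overlap it."""
    tiles = {}
    for p in prims:
        x0, y0, x1, y1 = p["bounds"]  # inclusive pixel bounds
        for ty in range(max(0, y0) // TILE, min(height - 1, y1) // TILE + 1):
            for tx in range(max(0, x0) // TILE, min(width - 1, x1) // TILE + 1):
                tiles.setdefault((tx, ty), []).append(p)
    return tiles

def render(prims, width, height, background=0):
    tiles = bin_primitives(prims, width, height)
    image = [[background] * width for _ in range(height)]
    for (tx, ty), plist in tiles.items():
        # Each tile is processed independently (on real hardware it would sit
        # in fast on-chip memory); shading is deferred to the nearest surface.
        for y in range(ty * TILE, min((ty + 1) * TILE, height)):
            for x in range(tx * TILE, min((tx + 1) * TILE, width)):
                covering = [p for p in plist
                            if p["bounds"][0] <= x <= p["bounds"][2]
                            and p["bounds"][1] <= y <= p["bounds"][3]]
                if covering:
                    image[y][x] = min(covering, key=lambda p: p["depth"])["color"]
    return image

prims = [
    {"bounds": (0, 0, 7, 7), "depth": 5.0, "color": 1},  # far square fills the frame
    {"bounds": (2, 2, 5, 5), "depth": 1.0, "color": 2},  # near square occludes its center
]
img = render(prims, 8, 8)
```

Note how the occluded part of the far square is never "shaded" with its own color at the overlapped pixels: only the nearest surface per pixel contributes, which is the essence of the deferral described above.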
The PowerVR hardware calculates the depths associated with each polygon for one tile row in one cycle. This method has the advantage that, unlike more traditional early-Z-rejection-based hierarchical systems, no calculations need to be made to determine what a polygon looks like in an area where it is obscured by other geometry. It also allows for correct rendering of transparent polygons, independent of the order in which they are processed by the polygon-producing application. Moreover, as the rendering is limited to one tile at a time, the whole tile can be held in fast on-chip memory and flushed to video memory before processing the next tile. Under normal circumstances, each tile is visited just once per frame. PowerVR is a pioneer of tile-based deferred rendering. Microsoft also conceptualised the idea with their abandoned Talisman project. Gigapixel, a company that developed IP for tile-based 3D graphics, was purchased by 3dfx, which in turn was subsequently purchased by Nvidia. Nvidia has since been shown to use tile rendering in the Maxwell and Pascal microarchitectures for a limited amount of geometry.
ARM began developing another major tile-based architecture, known as Mali, after their acquisition of Falanx. Intel uses a similar concept in their integrated graphics solutions; however, their method, coined zone rendering, does not perform full hidden surface removal and deferred texturing, and therefore wastes fillrate and texture bandwidth on pixels that are not visible in the final image. Recent advances in hierarchical Z-buffering have incorporated ideas previously only used in deferred rendering, including the idea of splitting a scene into tiles and of being able to accept or reject tile-sized pieces of polygon. Today, the PowerVR software and hardware suite includes ASICs for video encoding and associated image processing, along with virtualisation and DirectX, OpenGL ES, OpenVG and OpenCL acceleration. The newest PowerVR Wizard GPUs have fixed-function ray tracing unit hardware and support hybrid rendering. The first series of PowerVR cards was designed as 3D-only accelerator boards that would use the main 2D video card's memory as a framebuffer over PCI.
Videologic's first PowerVR PC product to market was the 3-chip Midas3, which saw limited availability in some OEM Compaq PCs. This card had poor compatibility with all but the first Direct3D games, and most SGL games did not run. However, its internal 24-bit color precision rendering was notable for the time. The single-chip PCX1 was released at retail as the VideoLogic Apocalypse 3D and featured an improved architecture with more texture memory, ensuring better game compatibility. This was followed by the further refined PCX2, which was clocked 6 MHz higher, offloaded some driver work by including more chip functionality, and added bilinear filtering; it was released at retail on the Matrox M3D and Videologic Apocalypse 3Dx cards. There was also the Videologic Apocalypse 5D Sonic, which combined the PCX2 accelerator with a Tseng ET6100 2D core and ESS Agogo sound on a single PCI board. The PowerVR PCX cards were placed in the market as budget solutions and performed well in the games of their time, but weren't quite as fully featured as the 3dfx Voodoo accelerators