First-person shooter engine
A first-person shooter engine is a video game engine specialized for simulating 3D environments for use in a first-person shooter video game. First-person refers to the view; shooter refers to games which revolve around wielding firearms and killing other entities in the game world, either non-player characters or other players. The development of FPS graphics engines is characterized by a steady increase in technologies, with some breakthroughs. Attempts at defining distinct generations lead to arbitrary choices of what constitutes a modified version of an 'old engine' and what is a brand-new engine; the classification is further complicated because game engines blend old and new technologies, and features considered advanced in a new game one year become the expected standard the next. Games with a combination of both older and newer features are the norm. For example, Jurassic Park: Trespasser introduced physics to the FPS genre in 1998, but physics did not become common until around 2002, and Red Faction (2001) featured a destructible environment, something still not common in engines years later. Game rendering for the earliest generation of FPS games was from the first-person perspective and built around the need to shoot things, but the graphics were made up of vector graphics.
There are two possible claimants for the first FPS: Maze War and Spasim. Maze War was developed in 1973 and involved a single player making his way through a maze of corridors rendered using a fixed perspective. Multiplayer capabilities, in which players attempted to shoot each other, were added and networked in 1974. Spasim was developed in 1974 and involved players moving through a wire-frame 3D universe; it could be played by up to 32 players on the PLATO network. Developed in-house by Incentive Software, the Freescape engine is considered to be one of the first proprietary 3D engines to be used for computer games, although the engine was not used commercially outside of Incentive's own titles; the first game to use this engine was the puzzle game Driller in 1987. Games of the following generation are often regarded as Doom clones. They were not capable of full 3D rendering, but used 2.5D ray casting techniques to draw the environment, and sprites instead of 3D models to draw enemies. However, these games began to use textures to render the environment instead of simple wire-frame models or solid colors.
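The 2.5D ray casting idea is simple enough to sketch in a few lines of Python. One ray is cast per screen column across a flat 2D grid, and each wall hit becomes a vertical slice whose height is inversely proportional to its distance. The grid map, field of view and screen dimensions below are all illustrative, not taken from any particular engine:

```python
import math

# 2D grid map: "1" = wall cell, "0" = empty cell (illustrative layout)
MAP = [
    "1111",
    "1001",
    "1001",
    "1111",
]

def cast_ray(px, py, angle, step=0.01, max_dist=10.0):
    """March a ray from (px, py) until it enters a wall cell;
    return the distance travelled, or max_dist if nothing is hit."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        dist += step
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)] == "1":
            return dist
    return max_dist

def render_column_heights(px, py, facing, fov=math.pi / 3, width=20, screen_h=40):
    """One wall slice per screen column: slice height ~ 1 / distance."""
    heights = []
    for col in range(width):
        angle = facing - fov / 2 + fov * col / (width - 1)
        d = cast_ray(px, py, angle)
        heights.append(min(screen_h, int(screen_h / d)))
    return heights

heights = render_column_heights(1.5, 2.0, facing=0.0)
```

The fixed-step ray march here is the naive version; engines of the era stepped exactly from grid line to grid line (a DDA walk) so each ray cost only a handful of cell tests.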
Hovertank 3D, from id Software, was the first game to use this technique, in 1991, but it still did not use textures, a capability added shortly afterwards in Catacomb 3D and carried into the Wolfenstein 3D engine, which was used for several other games. Catacomb 3D was also the first game to show the player's hand on-screen, furthering the player's immersion in the character's role. The Wolfenstein 3D engine was still primitive: it did not apply textures to the floor and ceiling, the ray casting restricted walls to a fixed height, and levels were all on the same plane. id Tech 1, also from id Software and first used in Doom, removed these limitations; though still not true 3D, it introduced the concept of binary space partitioning. Another breakthrough was the introduction of multiplayer abilities in the engine. However, because the engine was still 2.5D, it was impossible to look up and down properly in Doom, and all Doom levels were effectively two-dimensional. Due to the lack of a z-axis, the engine did not allow for room-over-room support.
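Binary space partitioning is what let Doom draw walls in correct order without sorting them every frame: the level is preprocessed into a tree, and a simple recursive walk then visits walls back-to-front (or front-to-back) from any viewpoint. A toy Python sketch, assuming an input set in which no wall straddles a splitting line (a real builder splits such walls in two):

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class Wall:
    a: Point
    b: Point
    name: str

def side(wall: Wall, p: Point) -> float:
    """> 0 if p lies on the front side of the wall's infinite line, < 0 if behind."""
    (ax, ay), (bx, by) = wall.a, wall.b
    return (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)

@dataclass
class Node:
    wall: Wall
    front: Optional["Node"] = None
    back: Optional["Node"] = None

def build(walls: List[Wall]) -> Optional[Node]:
    """Pick the first wall as the splitter and partition the rest.
    A real builder also splits walls that straddle the splitting line;
    this sketch assumes the input contains no straddling walls."""
    if not walls:
        return None
    splitter, rest = walls[0], walls[1:]
    front = [w for w in rest if side(splitter, w.a) >= 0 and side(splitter, w.b) >= 0]
    back = [w for w in rest if w not in front]
    return Node(splitter, build(front), build(back))

def back_to_front(node: Optional[Node], eye: Point) -> List[str]:
    """Painter's-algorithm order: far child first, then splitter, then near child."""
    if node is None:
        return []
    if side(node.wall, eye) >= 0:
        return back_to_front(node.back, eye) + [node.wall.name] + back_to_front(node.front, eye)
    return back_to_front(node.front, eye) + [node.wall.name] + back_to_front(node.back, eye)

walls = [Wall((0, 0), (4, 0), "w1"), Wall((0, 2), (4, 2), "w2"), Wall((0, -1), (4, -1), "w3")]
tree = build(walls)
print(back_to_front(tree, (2, 3)))    # ['w3', 'w1', 'w2']  (eye above: w3 is farthest)
print(back_to_front(tree, (2, -2)))   # ['w2', 'w1', 'w3']  (eye below: order flips)
```

The tree is built once per level; only the cheap traversal runs per frame, which is exactly the trade a 1993-era CPU needed.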
Doom's success spawned several games using the same engine or similar techniques, giving them the name Doom clones. The Build engine, used in Duke Nukem 3D, removed some of the limitations of id Tech 1; for example, it supported room-over-room by stacking sectors on top of sectors. The underlying techniques, however, remained the same. In the mid-1990s, game engines began to recreate true 3D worlds with arbitrary level geometry; instead of sprites, these engines used textured polygonal objects. FromSoftware released King's Field, a fully polygonal, free-roaming, real-time first-person action title for the Sony PlayStation, in December 1994. Sega's 32X release Metal Head was a first-person shooter mecha simulation game that used texture-mapped 3D polygonal graphics. A year earlier, Exact had released the Sharp X68000 computer game Geograph Seal, a 3D polygonal first-person shooter that employed platform game mechanics and set most of the action in free-roaming outdoor environments rather than the corridor labyrinths of Wolfenstein 3D.
The following year, Exact released its successor for the PlayStation console, Jumping Flash!, which used the same game engine but adapted it to place more emphasis on platforming rather than shooting; the Jumping Flash! series continued to use the same engine. Dark Forces, released in 1995 by LucasArts, has been regarded as one of the first "true 3D" first-person shooter games. Its engine, the Jedi Engine, was one of the first to support an environment in three dimensions: areas can exist next to each other in all three planes, including on top of each other. Though most of the objects in Dark Forces are sprites, the game does include support for textured 3D-rendered objects. Another game regarded as one of the first true 3D first-person shooters is Parallax Software's 1994 shooter Descent. The Quake engine used fewer animated sprites and relied on true 3D geometry and lighting, employing elaborate techniques such as z-buffering to speed up rendering. Quake was the first true-3D game to use a special map design system to preprocess and pre-render the 3D environment: the 3D environment in which the game took place was simplified d
3D computer graphics
3D computer graphics, or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for later viewing or displayed in real-time. 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, 2D applications may use 3D techniques to achieve effects such as lighting, and 3D graphics may use 2D rendering techniques. 3D representations of such data are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences: a 3D model is the mathematical representation of any three-dimensional object, and a model is not technically a graphic until it is displayed. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations.
With 3D printing, 3D models are rendered into a 3D physical representation of the model, with limitations on how accurately the physical rendering can match the virtual model. William Fetter was credited with coining the term computer graphics in 1961 to describe his work at Boeing. One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and a hand that had appeared in the 1972 experimental short A Computer Animated Hand, created by University of Utah students Edwin Catmull and Fred Parke. 3D computer graphics software began appearing for home computers in the late 1970s. The earliest known example is 3D Art Graphics, a set of 3D computer graphics effects written by Kazumasa Mitazawa and released in June 1978 for the Apple II. 3D computer graphics creation falls into three basic phases: 3D modeling, the process of forming a computer model of an object's shape; layout and animation, the placement and movement of objects within a scene; and 3D rendering, the computer calculations that, based on light placement, surface types and other qualities, generate the image. Modeling describes the process of forming the shape of an object.
The two most common sources of 3D models are those that an artist or engineer originates on the computer with some kind of 3D modeling tool, and models scanned into a computer from real-world objects. Models can also be produced procedurally or via physical simulation. A 3D model is formed from points called vertices that define its shape and form polygons. A polygon is an area formed from at least three vertices. A polygon of n points is an n-gon. The overall integrity of the model and its suitability for use in animation depend on the structure of the polygons. Materials and textures are properties that the render engine uses to render the model: in an unbiased render engine such as Blender's Cycles, one can give the model materials that tell the engine how to treat light when it hits the surface. Textures are used to give the material color using a color or albedo map, to give the surface features using a bump or normal map, or to deform the model itself using a displacement map. Before rendering into an image, objects must be laid out in a scene.
This defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object, that is, how it moves and deforms over time; these techniques are often used in combination. As with animation, physical simulation also specifies motion. Rendering converts a model into an image, either by simulating light transport to get photo-realistic images or by applying an art style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is performed using 3D computer graphics software or a 3D graphics API. Altering the scene into a suitable form for rendering also involves 3D projection, which displays a three-dimensional image in two dimensions. Although 3D modeling and CAD software may perform 3D rendering as well, exclusive 3D rendering software exists. 3D computer graphics software produces computer-generated imagery through 3D modeling and 3D rendering, or produces 3D models for analytic and industrial purposes. 3D modeling software is a class of 3D computer graphics software; individual programs of this class are called modelers.
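These ideas, a model as a shared vertex list plus n-gon faces, fan triangulation, and perspective projection from 3D to 2D, can be sketched in a few lines of Python. The class and function names are illustrative, not from any particular package:

```python
class Mesh:
    """Minimal polygon mesh: a shared vertex list plus faces that index it."""
    def __init__(self):
        self.vertices = []   # (x, y, z) tuples
        self.faces = []      # each face: a list of >= 3 vertex indices (an n-gon)

    def add_vertex(self, x, y, z):
        self.vertices.append((x, y, z))
        return len(self.vertices) - 1

    def triangulate(self):
        """Fan-triangulate every n-gon: (v0,v1,v2), (v0,v2,v3), ...
        Valid for convex faces; concave faces need real ear clipping."""
        return [(f[0], f[i], f[i + 1])
                for f in self.faces for i in range(1, len(f) - 1)]

def project(vertex, focal=1.0):
    """Perspective projection onto an image plane at distance `focal`:
    x' = f*x/z, y' = f*y/z (camera looks down +z; assumes z > 0)."""
    x, y, z = vertex
    return (focal * x / z, focal * y / z)

# A unit quad at depth 2: the 4-gon becomes two triangles, and the
# projected quad is half the size it would be at depth 1.
m = Mesh()
for x, y in [(0, 0), (1, 0), (1, 1), (0, 1)]:
    m.add_vertex(x, y, 2.0)
m.faces.append([0, 1, 2, 3])
print(m.triangulate())                    # [(0, 1, 2), (0, 2, 3)]
print([project(v) for v in m.vertices])   # [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]
```

The divide-by-z step is the whole of perspective: points twice as deep project half as far from the image center.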
3D modelers allow users to alter models via their 3D mesh. Users can add, subtract, and otherwise change the mesh to their desire. Models can be viewed from a variety of angles simultaneously, can be rotated, and the view can be zoomed in and out. 3D modelers can export their models to files, which can then be imported into other applications as long as the metadata are compatible. Many modelers allow importers and exporters to be plugged in, so they can read and write data in the native formats of other applications. Most 3D modelers contain a number of related features, such as ray tracers and other rendering alternatives, and texture mapping facilities. Some contain features that support or allow animation of models, and some may be able to generate full-motion video of a series of rendered scenes. Computer-aided design software may employ the same fundamental 3D modeling techniques that 3D modeling software uses, but their goals differ: they are used in computer-aided engineering and computer-aided manufacturing.
Open-source software is a type of computer software in which source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose. Open-source software may be developed in a collaborative public manner, and is a prominent example of open collaboration. Open-source software development can generate a more diverse scope of design perspectives than any one company is capable of developing and sustaining long term. A 2008 report by the Standish Group stated that adoption of open-source software models has resulted in savings of about $60 billion per year for consumers. In the early days of computing, programmers and developers shared software in order to learn from each other and evolve the field of computing. The open-source notion moved to the wayside with the commercialization of software in the years 1970-1980. However, academics still often developed software collaboratively: for example, Donald Knuth in 1979 with the TeX typesetting system, and Richard Stallman in 1983 with the GNU operating system.
In 1997, Eric Raymond published The Cathedral and the Bazaar, a reflective analysis of the hacker community and free-software principles. The paper received significant attention in early 1998 and was one factor in motivating Netscape Communications Corporation to release their popular Netscape Communicator Internet suite as free software; this source code subsequently became the basis for SeaMonkey, Mozilla Firefox and KompoZer. Netscape's act prompted Raymond and others to look into how to bring the Free Software Foundation's free-software ideas and perceived benefits to the commercial software industry. They concluded that the FSF's social activism was not appealing to companies like Netscape, and looked for a way to rebrand the free software movement to emphasize the business potential of sharing and collaborating on software source code. The new term they chose was "open source", soon adopted by Bruce Perens, publisher Tim O'Reilly, Linus Torvalds and others. The Open Source Initiative was founded in February 1998 to encourage the use of the new term and to evangelize open-source principles.
While the Open Source Initiative sought to encourage the use of the new term and evangelize the principles it adhered to, commercial software vendors found themselves threatened by the concept of freely distributed software and universal access to an application's source code. A Microsoft executive publicly stated in 2001 that "open source is an intellectual property destroyer. I can't imagine something that could be worse than this for the software business and the intellectual-property business." However, while free and open-source software has historically played a role outside of the mainstream of private software development, companies as large as Microsoft have begun to develop official open-source presences on the Internet. IBM, Oracle and State Farm are just a few of the companies with a serious public stake in today's competitive open-source market. There has been a significant shift in the corporate philosophy concerning the development of FOSS. The free-software movement was launched in 1983. In 1998, a group of individuals advocated that the term free software should be replaced by open-source software as an expression which is less ambiguous and more comfortable for the corporate world.
Software licenses grant rights to users which would otherwise be reserved by copyright law to the copyright holder. Several open-source software licenses have qualified within the boundaries of the Open Source Definition; the most prominent and popular example is the GNU General Public License, which "allows free distribution under the condition that further developments and applications are put under the same licence", and is thus also free. The open source label came out of a strategy session held on April 7, 1998 in Palo Alto in reaction to Netscape's January 1998 announcement of a source code release for Navigator. The group at the session included Tim O'Reilly, Linus Torvalds, Tom Paquin, Jamie Zawinski, Larry Wall, Brian Behlendorf, Sameer Parekh, Eric Allman, Greg Olson, Paul Vixie, John Ousterhout, Guido van Rossum, Philip Zimmermann, John Gilmore and Eric S. Raymond. They used the opportunity before the release of Navigator's source code to clarify a potential confusion caused by the ambiguity of the word "free" in English.
Many people have claimed that the birth of the Internet in 1969 started the open-source movement, while others do not distinguish between the open-source and free-software movements. The Free Software Foun
A game engine is a software-development environment designed for people to build video games. Developers use game engines to construct games for consoles, mobile devices, and personal computers. The core functionality provided by a game engine includes a rendering engine for 2D or 3D graphics, a physics engine or collision detection, scripting, artificial intelligence, streaming, memory management, localization support, and a scene graph, and may include video support for cinematics. Implementers economize on the process of game development by reusing or adapting, in large part, the same game engine to produce different games or to aid in porting games to multiple platforms. In many cases game engines provide a suite of visual development tools in addition to reusable software components; these tools are provided in an integrated development environment to enable simplified, rapid development of games in a data-driven manner. Game engine developers attempt to "pre-invent the wheel" by developing robust software suites which include many elements a game developer may need to build a game.
Most game engine suites provide facilities that ease development, such as graphics, physics and AI functions. These game engines are sometimes called "middleware" because, as with the business sense of the term, they provide a flexible and reusable software platform which provides all the core functionality needed, right out of the box, to develop a game application while reducing costs and time-to-market, all critical factors in the competitive video game industry. As of 2001, Gamebryo, JMonkeyEngine and RenderWare were widely used middleware programs of this kind. Like other types of middleware, game engines provide platform abstraction, allowing the same game to be run on various platforms, including game consoles and personal computers, with few, if any, changes made to the game source code. Game engines are often designed with a component-based architecture that allows specific systems in the engine to be replaced or extended with more specialized game middleware components. Some game engines are designed as a series of loosely connected game middleware components that can be selectively combined to create a custom engine, instead of the more common approach of extending or customizing a flexible integrated product.
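The component-based idea can be sketched as subsystems behind a common interface, so that any one of them, for example physics, can be swapped for a middleware wrapper without touching the rest. All class names here are hypothetical:

```python
class Subsystem:
    """Common interface every engine component implements."""
    def update(self, dt: float) -> str:
        raise NotImplementedError

class SimplePhysics(Subsystem):
    def update(self, dt):
        return f"physics step {dt:.3f}s"

class SimpleRenderer(Subsystem):
    def update(self, dt):
        return "rendered frame"

class MiddlewarePhysics(Subsystem):
    """Stand-in for third-party physics middleware wrapped behind
    the same interface, so it can replace SimplePhysics directly."""
    def update(self, dt):
        return f"middleware physics step {dt:.3f}s"

class Engine:
    def __init__(self, **subsystems):
        self.subsystems = subsystems   # name -> Subsystem, replaceable at will

    def replace(self, name, subsystem):
        self.subsystems[name] = subsystem

    def tick(self, dt):
        # One frame: let every subsystem advance by dt, in order.
        return [s.update(dt) for s in self.subsystems.values()]

engine = Engine(physics=SimplePhysics(), renderer=SimpleRenderer())
engine.tick(1 / 60)
engine.replace("physics", MiddlewarePhysics())   # swap in middleware, nothing else changes
```

The design cost is the fixed interface: every replacement component, in-house or licensed, must speak it.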
However extensibility is achieved, it remains a high priority for game engines due to the wide variety of uses to which they are applied. Despite the specificity of the name, game engines are often used for other kinds of interactive applications with real-time graphical needs, such as marketing demos, architectural visualizations, training simulations and modeling environments. Some game engines only provide real-time 3D rendering capabilities instead of the wide range of functionality needed by games. These engines rely upon the game developer to implement the rest of this functionality or to assemble it from other game middleware components; these types of engines are generally referred to as a "graphics engine", "rendering engine" or "3D engine" instead of the more encompassing term "game engine". This terminology is inconsistently used, as many full-featured 3D game engines are referred to simply as "3D engines". A few examples of graphics engines are: Crystal Space, Genesis3D, Irrlicht, OGRE, RealmForge, Truevision3D and Vision Engine.
Modern game or graphics engines provide a scene graph, an object-oriented representation of the 3D game world which simplifies game design and can be used for more efficient rendering of vast virtual worlds. As technology ages, the components of an engine may become outdated or insufficient for the requirements of a given project. Since the complexity of programming a new engine may result in unwanted delays, a development team may elect to update their existing engine with newer functionality or components. Such a framework is composed of a multitude of different components. The actual game logic has to be implemented by some algorithms; it is distinct from any rendering, sound or input work. The rendering engine generates animated 3D graphics by any of a number of methods. Instead of being programmed and compiled to be executed on the CPU or GPU directly, most rendering engines are built upon one or multiple rendering application programming interfaces, such as Direct3D, OpenGL or Vulkan, which provide a software abstraction of the graphics processing unit.
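A scene graph in miniature, restricted to translations for brevity: each node stores a transform relative to its parent, and a traversal composes them on the way down, so moving a parent automatically moves everything attached to it:

```python
class SceneNode:
    """Scene-graph node: a local translation plus child nodes.
    World position = parent's world position + local offset."""
    def __init__(self, name, offset=(0.0, 0.0, 0.0)):
        self.name = name
        self.offset = offset
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def walk(self, parent_pos=(0.0, 0.0, 0.0)):
        """Yield (name, world position) for this node and all descendants."""
        pos = tuple(p + o for p, o in zip(parent_pos, self.offset))
        yield self.name, pos
        for child in self.children:
            yield from child.walk(pos)

# Moving the tank moves its turret with it, because the turret's
# transform is expressed relative to its parent node.
root = SceneNode("world")
tank = root.add(SceneNode("tank", (10.0, 0.0, 0.0)))
tank.add(SceneNode("turret", (0.0, 2.0, 0.0)))
print(dict(root.walk()))
# {'world': (0.0, 0.0, 0.0), 'tank': (10.0, 0.0, 0.0), 'turret': (10.0, 2.0, 0.0)}
```

Real engines store full 4x4 matrices per node rather than offsets, but the composition-down-the-tree idea is the same.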
Low-level libraries such as DirectX, Simple DirectMedia Layer and OpenGL are commonly used in games as they provide hardware-independent access to other computer hardware such as input devices, network cards and sound cards. Before hardware-accelerated 3D graphics, software renderers were used. Software rendering is still used in some modeling tools, or for still-rendered images when visual accuracy is valued over real-time performance, or when the computer hardware does not meet needs such as shader support. With the advent of hardware-accelerated physics processing, various physics APIs such as PAL and the physics extensions of COLLADA became available to provide a software abstraction of the physics processing units of different middleware providers and console platforms. Game engines can be written in any programming language, such as C++, C or Java, though each language is structurally different and may provide different levels of access to specific functions. The audio engine is the component which consists of algorithms related to the loading and output of sound through the client's speaker system.
At a minimum i
In computer graphics, a shader is a type of computer program originally used for shading, but which now performs a variety of specialized functions in various fields of computer graphics special effects, performs video post-processing unrelated to shading, or even performs functions unrelated to graphics at all. Shaders calculate rendering effects on graphics hardware with a high degree of flexibility. Most shaders are coded for a graphics processing unit. Shading languages are used to program the programmable GPU rendering pipeline, which has largely superseded the fixed-function pipeline that allowed only common geometry transformation and pixel-shading functions. The position, hue, saturation, brightness and contrast of all pixels, vertices or textures used to construct a final image can be altered on the fly, using algorithms defined in the shader, and can be modified by external variables or textures introduced by the program calling the shader. Shaders are used widely in cinema post-processing, computer-generated imagery and video games to produce a wide range of effects.
Beyond simple lighting models, more complex uses of shaders include altering the hue, brightness or contrast of an image; producing blur, light bloom, volumetric lighting, normal mapping for depth effects, cel shading, bump mapping, chroma keying, edge detection and motion detection, psychedelic effects, and many others. The modern use of "shader" was introduced to the public by Pixar with their "RenderMan Interface Specification, Version 3.0", published in May 1988. As graphics processing units evolved, major graphics software libraries such as OpenGL and Direct3D began to support shaders. The first shader-capable GPUs only supported pixel shading, but vertex shaders were quickly introduced once developers realized the power of shaders. The first video card with a programmable pixel shader was the Nvidia GeForce 3, released in 2001. Geometry shaders were introduced with Direct3D 10 and OpenGL 3.2. Graphics hardware has since evolved toward a unified shader model. Shaders are simple programs that describe the traits of either a vertex or a pixel.
Vertex shaders describe the traits of a vertex, and a vertex shader is called for each vertex in a primitive; each vertex is then rendered as a series of pixels onto a surface that will be sent to the screen. Shaders replace a section of the graphics hardware called the fixed-function pipeline, so called because it performed lighting and texture mapping in a hard-coded manner; shaders provide a programmable alternative to this hard-coded approach. The basic graphics pipeline is as follows: the CPU sends instructions and geometry data to the graphics processing unit, located on the graphics card. Within the vertex shader, the geometry is transformed. If a geometry shader is present in the graphics processing unit and active, some changes to the geometry in the scene are performed. If a tessellation shader is present and active, the geometry in the scene can be subdivided. The calculated geometry is triangulated, and triangles are broken down into fragment quads. Fragment quads are modified according to the fragment shader.
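The stages just listed can be caricatured as a software pipeline in plain Python. The rasterizer here is reduced to snapping transformed vertices to pixels purely to keep the sketch short; a real rasterizer fills every fragment inside each triangle:

```python
def vertex_stage(vertices, offset):
    """'Vertex shader': apply a transform (here a 2D translation) per vertex."""
    return [(x + offset[0], y + offset[1]) for x, y in vertices]

def rasterize(vertices):
    """Stand-in rasterizer: snap each transformed vertex to an integer pixel.
    A real rasterizer emits fragments for the triangle's whole interior."""
    return [(round(x), round(y)) for x, y in vertices]

def fragment_stage(fragments, color):
    """'Fragment shader': compute a color for every fragment."""
    return {frag: color for frag in fragments}

verts = [(0.2, 0.1), (3.8, 0.1), (1.9, 2.7)]
frame = fragment_stage(rasterize(vertex_stage(verts, offset=(1.0, 1.0))), color=(255, 0, 0))
print(frame)   # {(1, 1): (255, 0, 0), (5, 1): (255, 0, 0), (3, 4): (255, 0, 0)}
```

Each stage consumes the previous stage's output, which is why GPUs can run them as a deep hardware pipeline; the depth-test and blending stages described next would sit after the fragment stage.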
The depth test is then performed; fragments that pass are written to the screen and might be blended into the frame buffer. The graphics pipeline uses these steps in order to transform three-dimensional data into useful two-dimensional data for display, in general a large pixel matrix or "frame buffer". There are three types of shaders in common use, with one more recently added. While older graphics cards utilize separate processing units for each shader type, newer cards feature unified shaders which are capable of executing any type of shader; this allows graphics cards to make more efficient use of processing power. 2D shaders act on digital images, called textures in computer graphics work, and modify the attributes of pixels. 2D shaders may also take part in rendering 3D geometry. Currently the only 2D shader type is the pixel shader. Pixel shaders, also known as fragment shaders, compute color and other attributes of each "fragment", a unit of rendering work affecting at most a single output pixel. The simplest kinds of pixel shaders output one screen pixel as a color value.
Pixel shaders range from always outputting the same color, to applying a lighting value, to doing bump mapping, specular highlights and other phenomena. They can alter the depth of the fragment, or output more than one color if multiple render targets are active. In 3D graphics, a pixel shader alone cannot produce some kinds of complex effects, because it operates only on a single fragment, without knowledge of a scene's geometry. However, pixel shaders do have knowledge of the screen coordinate being drawn, and can sample the screen and nearby pixels if the contents of the entire screen are passed as a texture to the shader. This technique can enable a wide variety of two-dimensional postprocessing effects, such as blur, or edge detection/enhancement for cartoon/cel shaders. Pixel shaders may be applied in intermediate stages to any two-dimensional images—sp
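A full-screen postprocessing pass of the kind described, written as a per-pixel function over a small grayscale image; the neighbor sampling, possible because the whole screen is available as a texture, is exactly what makes edge detection work:

```python
def edge_pass(image):
    """Full-screen 'pixel shader' pass: for each pixel, sample the
    left/right/up/down neighbours and output the gradient magnitude,
    so uniform regions go to 0 and boundaries light up."""
    h, w = len(image), len(image[0])

    def sample(x, y):
        # Clamp-to-edge texture sampling, as GPUs do at screen borders.
        return image[max(0, min(h - 1, y))][max(0, min(w - 1, x))]

    out = []
    for y in range(h):
        row = []
        for x in range(w):
            gx = sample(x + 1, y) - sample(x - 1, y)
            gy = sample(x, y + 1) - sample(x, y - 1)
            row.append(abs(gx) + abs(gy))
        out.append(row)
    return out

# A vertical black/white boundary: only pixels next to the seam respond.
image = [[0, 0, 255, 255]] * 4
print(edge_pass(image)[0])   # [0, 255, 255, 0]
```

On a GPU the same per-pixel function would run for every fragment in parallel, with `sample` becoming a texture fetch.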
Java virtual machine
A Java virtual machine is a virtual machine that enables a computer to run Java programs as well as programs written in other languages that are compiled to Java bytecode. The JVM is detailed by a specification that formally describes what is required of a JVM implementation. Having a specification ensures interoperability of Java programs across different implementations so that program authors using the Java Development Kit need not worry about idiosyncrasies of the underlying hardware platform; the JVM reference implementation is developed by the OpenJDK project as open source code and includes a JIT compiler called HotSpot. The commercially supported Java releases available from Oracle Corporation are based on the OpenJDK runtime. Eclipse OpenJ9 is another open source JVM for OpenJDK; the Java virtual machine is an abstract computer defined by a specification. The garbage-collection algorithm used and any internal optimization of the Java virtual machine instructions are not specified; the main reason for this omission is to not unnecessarily constrain implementers.
Any Java application can be run only inside some concrete implementation of the abstract specification of the Java virtual machine. Starting with Java Platform, Standard Edition 5.0, changes to the JVM specification have been developed under the Java Community Process as JSR 924. As of 2006, changes to the specification to support changes proposed to the class file format are being done as a maintenance release of JSR 924. The specification for the JVM was published as the blue book, whose preface states: "We intend that this specification should sufficiently document the Java Virtual Machine to make possible compatible clean-room implementations." Oracle provides tests that verify the proper operation of implementations of the Java Virtual Machine. One of Oracle's JVMs is named HotSpot; the other, inherited from BEA Systems, is JRockit. Clean-room Java implementations include Kaffe, IBM J9 and Skelmir's CEE-J. Oracle owns the Java trademark and may allow its use to certify implementation suites as compatible with Oracle's specification.
One of the organizational units of JVM byte code is a class. A class loader implementation must be able to recognize and load anything that conforms to the Java class file format. Any implementation is free to recognize other binary forms besides class files, but it must recognize class files. The class loader performs three basic activities in this strict order:
Loading: finds and imports the binary data for a type.
Linking: performs verification, preparation and resolution. Verification ensures the correctness of the imported type; preparation allocates memory for class variables and initializes the memory to default values; resolution transforms symbolic references from the type into direct references.
Initialization: invokes Java code that initializes class variables to their proper starting values.
In general, there are two types of class loader: the bootstrap class loader and user-defined class loaders. Every Java virtual machine implementation must have a bootstrap class loader, capable of loading trusted classes.
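The strict ordering can be sketched in Python. The class names, the dictionary "class files", and the eager resolution below are all illustrative; real JVMs typically resolve references lazily, which also copes with circular references between classes:

```python
class ToyClassLoader:
    """Sketch of the loading -> linking -> initialization order the
    specification mandates (the 'bytecode' format here is made up)."""
    def __init__(self):
        self.loaded = {}

    def load(self, name, class_file):
        if name in self.loaded:
            return self.loaded[name]          # each class is loaded at most once
        # Loading: find and import the binary data for the type.
        cls = {"name": name, "fields": dict(class_file["fields"])}
        # Linking step 1, verification: check the imported type is well-formed.
        assert all(isinstance(t, str) for t in cls["fields"].values())
        # Linking step 2, preparation: allocate class variables, default values.
        cls["statics"] = {f: 0 for f in cls["fields"]}
        # Linking step 3, resolution: turn symbolic references into direct ones
        # (done eagerly here for simplicity; real JVMs defer this).
        cls["refs"] = [self.load(r, CLASS_PATH[r]) for r in class_file.get("uses", [])]
        # Initialization: run the class's own initializer code.
        cls["statics"].update(class_file.get("init", {}))
        self.loaded[name] = cls
        return cls

CLASS_PATH = {
    "Util": {"fields": {"count": "int"}, "init": {"count": 1}},
    "Main": {"fields": {"answer": "int"}, "uses": ["Util"], "init": {"answer": 42}},
}
main = ToyClassLoader().load("Main", CLASS_PATH["Main"])
```

Loading "Main" transitively loads "Util" first, mirroring how resolving a reference can trigger loading of another class.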
The Java virtual machine specification leaves many implementation details unspecified. The JVM operates on primitive values (integers and floating-point numbers) and references. The JVM is fundamentally a 32-bit machine: long and double types, which are 64 bits, are supported natively, but consume two units of storage in a frame's local variables or operand stack, since each unit is 32 bits. Boolean, byte and char types are widened and operated on as 32-bit integers, the same as int types, and the smaller types only have a few type-specific instructions for loading, storing and type conversion. Boolean is operated on with 0 representing false and 1 representing true. The JVM has a garbage-collected heap for storing objects and arrays. Code and other class data are stored in the "method area". The method area is logically part of the heap, but implementations may treat the method area separately from the heap, and for example might not garbage collect it. Each JVM thread has its own call stack, which stores frames. A new frame is created each time a method is called, and the frame is destroyed when that method exits.
Each frame provides an "operand stack" and an array of "local variables". The operand stack is used for operands to computations and for receiving the return value of a called method, while local variables serve the same purpose as registers and are also used to pass method arguments. Thus, the JVM is both a stack machine and a register machine. The JVM has instructions for groups of tasks such as load and store, arithmetic, type conversion, object creation and manipulation, operand stack management, control transfer, and method invocation and return. The aim is binary compatibility: each particular host operating system needs its own implementation of the JVM and runtime. These JVMs interpret the bytecode semantically the same way, but the actual implementation may be different. More complex than just emulating bytecode is compatibly and efficiently im
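A frame's operand stack and local-variable array can be sketched as a tiny interpreter for a few JVM-like instructions. The mnemonics are simplified stand-ins; real bytecode is a binary format with typed instructions:

```python
def run(code, locals_):
    """Execute a list of simplified JVM-style instructions against a frame
    made of an operand stack and a local-variable array."""
    stack = []
    for op, *args in code:
        if op == "iload":          # push a local variable onto the operand stack
            stack.append(locals_[args[0]])
        elif op == "iconst":       # push an int constant
            stack.append(args[0])
        elif op == "iadd":         # pop two operands, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "imul":         # pop two operands, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "istore":       # pop into a local variable
            locals_[args[0]] = stack.pop()
        elif op == "ireturn":      # pop the frame's return value
            return stack.pop()
        else:
            raise ValueError(f"unknown opcode {op}")

# Compute locals[0] * locals[1] + 7, roughly the shape of code a compiler
# would emit for the expression `a * b + 7`.
program = [
    ("iload", 0), ("iload", 1), ("imul",),
    ("iconst", 7), ("iadd",),
    ("ireturn",),
]
print(run(program, [6, 7]))   # 49
```

Arguments arrive in the local-variable array (register-like), while all arithmetic flows through the operand stack, which is the dual nature the text describes.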
A computing platform or digital platform is the environment in which a piece of software is executed. It may be the hardware or the operating system, a web browser and associated application programming interfaces, or other underlying software, as long as the program code is executed with it. Computing platforms have different abstraction levels, including a computer architecture, an OS, or runtime libraries. A computing platform is the stage on which computer programs can run. A platform can be seen both as a constraint on the software development process, in that different platforms provide different functionality and restrictions, and as an assistance to it, in that platforms provide low-level functionality ready-made. For example, an OS may be a platform that abstracts the underlying differences in hardware and provides a generic command for saving files or accessing the network. Platforms may include: hardware alone, in the case of small embedded systems, which can access hardware directly without an OS; or a browser, in the case of web-based software (the browser itself runs on a hardware-plus-OS platform, but this is not relevant to software running within the browser).
An application, such as a spreadsheet or word processor, which hosts software written in an application-specific scripting language, such as an Excel macro; this can be extended to writing fully-fledged applications with the Microsoft Office suite as a platform. Software frameworks. Cloud computing and Platform as a Service: extending the idea of a software framework, these allow application developers to build software out of components that are hosted not by the developer but by the provider, with internet communication linking them together; the social networking sites Twitter and Facebook are also considered development platforms. A virtual machine such as the Java virtual machine or the .NET CLR; applications are compiled into a format similar to machine code, known as bytecode, which is executed by the VM. A virtualized version of a complete system, including virtualized hardware, OS and storage; these allow, for instance, a typical Windows program to run on. Some architectures have multiple layers, with each layer acting as a platform to the one above it.
In general, a component only has to be adapted to the layer beneath it. For instance, a Java program has to be written to use the Java virtual machine and associated libraries as a platform, but does not have to be adapted to run on the Windows, Linux or Macintosh OS platforms. However, the JVM, the layer beneath the application, does have to be built separately for each OS.

Operating systems (desktop and server): AmigaOS, AmigaOS 4; FreeBSD, NetBSD, OpenBSD; IBM i; Linux; Microsoft Windows; OpenVMS; Classic Mac OS; macOS; OS/2; Solaris; Tru64 UNIX; VM; QNX; z/OS.

Mobile operating systems: Android, Bada, BlackBerry OS, Firefox OS, iOS, Embedded Linux, Palm OS, Symbian, Tizen, WebOS, LuneOS, Windows Mobile, Windows Phone.

Software frameworks: Binary Runtime Environment for Wireless; Cocoa; Cocoa Touch; Common Language Infrastructure; Mono; .NET Framework; Silverlight; Flash; AIR; GNU; the Java platform (Java ME, Java SE, Java EE, JavaFX, JavaFX Mobile); LiveCode; Microsoft XNA; Mozilla Prism, XUL and XULRunner; Open Web Platform; Oracle Database; Qt; SAP NetWeaver; Shockwave; Smartface; Universal Windows Platform; Windows Runtime; Vexi.

Hardware platforms, ordered from more common types to less common types:
Commodity computing platforms: Wintel, that is, Intel x86 or compatible personal computer hardware with the Windows operating system; Macintosh, custom Apple Inc. hardware with the Classic Mac OS and macOS operating systems (68k-based, then PowerPC-based, now migrated to x86); ARM architecture based mobile devices, such as iPhone smartphones and iPad tablet computers running iOS from Apple; Gumstix or Raspberry Pi full-function miniature computers with Linux; Newton devices running the Newton OS from Apple; x86 with Unix-like systems such as Linux or BSD variants; CP/M computers based on the S-100 bus, maybe the earliest microcomputer platform.
Video game consoles, of any variety: 3DO Interactive Multiplayer, licensed to manufacturers; Apple Pippin, a multimedia player platform for video game console development.
RISC processor based machines running Unix variants: SPARC architecture computers running Solaris or illumos operating systems; DEC Alpha clusters running OpenVMS or Tru64 UNIX.
Midrange computers with their custom operating systems, such as IBM OS/400.
Mainframe computers with their custom operating systems, such as IBM z/OS.
Supercomputer architectures.

See also: Cross-platform; Platform virtualization; Third platform.