1.
Software developer
–
A software developer is a person concerned with facets of the software development process, including the research, design, programming, and testing of computer software. Other job titles used with similar meanings are programmer and software analyst. According to developer Eric Sink, the differences between system design, software development, and programming are becoming more apparent, so much so that some developers become systems architects: those who design the multi-leveled architecture or component interactions of a large software system. In a large company, there may be employees whose sole responsibility consists of only one of the phases above; in smaller development environments, a few people or even a single individual might handle the complete process. The word "software" was coined as a prank as early as 1953. Before this time, computers were programmed either by customers or by the few commercial computer vendors of the era, such as UNIVAC and IBM. The first company founded to provide software products and services was Computer Usage Company, in 1955. The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities, as universities, government, and business customers created a demand for software. Many of these programs were written in-house by full-time staff programmers, and some were distributed freely between users of a particular machine for no charge. Others were produced on a contract basis, and firms such as Computer Sciences Corporation started to grow. The computer and hardware makers started bundling operating systems, systems software, and programming environments with their machines; new software was built for microcomputers, and other manufacturers, including IBM, quickly followed DEC's example, resulting in the IBM AS/400 amongst others. The industry expanded greatly with the rise of the personal computer in the mid-1970s, which in the following years created a growing market for games and applications. DOS, Microsoft's first operating system product, was the dominant operating system at the time. By 2014 the role of cloud developer had been defined, and in this context one general definition of a developer was published: "Developers make software for the world to use. The job of a developer is to crank out code -- fresh code for new products, code fixes for maintenance, code for business logic."
2.
Id Software
–
Id Software LLC is an American video game developer headquartered in Dallas, Texas. The company was founded on February 1, 1991, by four members of the computer company Softdisk, among them programmers John Carmack and John Romero and game designer Tom Hall; business manager Jay Wilbur was also involved. Id Software made important technological developments in video game technologies for the PC, including work done for the Wolfenstein and Doom franchises. Id's work was particularly important in 3D computer graphics technology and in game engines that are heavily used throughout the video game industry. The company was heavily involved in the creation of the first-person shooter genre: Wolfenstein 3D is often considered the first true FPS, and Doom popularized the genre and PC gaming in general. On June 24, 2009, ZeniMax Media acquired the company. In 2015, id opened a studio in Frankfurt, Germany. The founders of id Software met in the offices of Softdisk while developing multiple games for Softdisk's monthly publications; these included Dangerous Dave and other titles. In September 1990, John Carmack developed an efficient technique for performing rapid side-scrolling graphics on the PC. Upon making this breakthrough, Carmack and Hall stayed up late into the night making a replica of the first level of the popular 1988 NES game Super Mario Bros. 3, inserting stock graphics of Romero's Dangerous Dave character in lieu of Mario. When Romero saw the demo, entitled Dangerous Dave in Copyright Infringement, he realized that Carmack's breakthrough could have potential. Despite their work, Nintendo turned them down, saying it had no interest in expanding to the PC market. Scott Miller suggested that they develop shareware games that he would distribute. On December 14, 1990, the first episode was released as shareware by Miller's company, Apogee, and orders began rolling in. In a legal settlement, the team was required to provide a game to Softdisk every two months for a period of time, but they would do so on their own. On February 1, 1991, id Software was founded. The shareware distribution method was initially employed by id Software through Apogee Software to sell their products, such as the Commander Keen, Wolfenstein, and Doom games: they would release the first part of a trilogy as shareware, and only later did id Software release their games via more traditional shrink-wrapped boxes in stores. Id Software moved from the cube-shaped Mesquite office to a newly built location in Richardson. On June 24, 2009, it was announced that id Software had been acquired by ZeniMax Media; the deal would eventually affect publishing deals id Software had made before the acquisition, namely Rage. On June 26, 2013, id Software president Todd Hollenshead quit after 17 years of service. He was the last of the founders to leave the company. The company's name is presented as a reference to the id; evidence of the reference can be found as early as Wolfenstein 3D, with the statement "that's id, as in the id, ego, and superego in the psyche" appearing in the game's documentation.
3.
John Carmack
–
John D. Carmack is an American game programmer and aerospace and virtual reality engineer. Carmack was the lead programmer of the id video games Commander Keen, Wolfenstein 3D, Doom, Quake, and Rage. In August 2013, Carmack took the position of CTO at Oculus VR. Carmack, son of local television news reporter Stan Carmack, grew up in the Kansas City Metropolitan Area, where he became interested in computers at an early age. He attended Shawnee Mission East High School in Prairie Village, Kansas and Raytown South High School in nearby Raytown. Carmack was introduced to video games with the 1978 shoot 'em up title Space Invaders in the arcades during a summer vacation as a child. The 1980 maze chase arcade game Pac-Man also left an impression on him, and he cited Shigeru Miyamoto as the game developer he most admired. As reported in David Kushner's Masters of Doom, when Carmack was 14, he broke into a school to help a group of kids steal Apple II computers. To gain entry to the building, Carmack concocted a sticky substance of thermite mixed with Vaseline that melted through the windows; however, an overweight accomplice struggled to get through the hole and opened the window, setting off a silent alarm and alerting police. John was arrested and sent for psychiatric evaluation, and Carmack was then sentenced to a year in a juvenile home. He attended the University of Missouri–Kansas City for two semesters before withdrawing to work as a freelance programmer. Softdisk, a company in Shreveport, Louisiana, hired Carmack to work on Softdisk G-S, introducing him to John Romero. Softdisk later placed this team in charge of a new publication; afterwards, Carmack left Softdisk to co-found id Software. Carmack's engines have also been licensed for use in other influential first-person shooters such as Half-Life and Call of Duty. In 2007, when Carmack was on vacation with his wife, he ended up playing games on his cellphone. On August 7, 2013, Carmack joined Oculus VR as their CTO, and on November 22, 2013, he resigned from id Software to work full-time at Oculus VR. Carmack's stated reason for leaving was that id's parent company ZeniMax Media did not want to support the Oculus Rift. Carmack's role at both companies later became central to a ZeniMax lawsuit against Oculus' parent company Facebook, claiming that Oculus stole ZeniMax's virtual reality intellectual property. The trial jury absolved Carmack of liability, though Oculus and other officers were held liable on trademark and copyright claims. In February 2017 Carmack sued ZeniMax, claiming the company had refused to pay him the remaining $22.5 million owed to him from their purchase of id Software. Around 2000, Carmack became interested in rocketry, a hobby of his youth. Reviewing how much money he was spending on customizing Ferraris, Carmack realized he could do significant work in rocketry, and he began by giving financial support to a few local amateur groups before starting Armadillo Aerospace.
4.
Michael Abrash
–
A later book, Zen of Graphics Programming, applied these ideas to 2D and 3D graphics prior to the advent of hardware accelerators for the PC. Though not strictly a game programmer, Abrash has worked on the technology behind games such as Quake. He frequently begins a discussion with an anecdote that draws parallels between a real-life experience of his and the article's subject matter, and his prose encourages readers to think outside the box and to approach solving technical problems in an innovative way. Abrash first bought a microcomputer while doing postgraduate studies at the University of Pennsylvania. Before getting into technical writing, Abrash was a programmer in the early days of the IBM PC. His first commercial game was a clone of Space Invaders published by Datamost in 1982 as Space Strike, and he co-authored several PC games with Dan Illowsky, who had previously written the successful Pac-Man clone Snack Attack for the Apple II. Abrash and Illowsky worked together on the Galaxian-like Cosmic Crusader and the maze game Snack Attack II. After working at Microsoft on graphics and assembly code for Windows NT 3.1, he returned to the game industry in the mid-1990s to work on Quake for id Software. Some of the technology behind Quake is documented in Abrash's Ramblings in Realtime, published in Dr. Dobb's Journal, and he mentions Quake as his favourite game of all time. After Quake was released, Abrash returned to Microsoft to work on natural language research, then moved to the Xbox team. At the end of 2005, Pixomatic was acquired by Intel. When developing Pixomatic, he and Mike Sartain designed a new architecture called Larrabee. Gabe Newell, managing director of Valve, said that he had been trying to hire Michael Abrash "forever": "About once a quarter we go for dinner and I say 'are you ready to work here yet?'" In 2011 Abrash made the move to join Valve. On March 28, 2014, Abrash joined the virtual reality company Oculus VR, three days after Facebook announced agreements to purchase Oculus VR. Michael Abrash was a columnist in the 1980s for a magazine called Programmer's Journal; those articles were collected in the 1989 book Power Graphics Programming. His second book, Zen of Assembly Language Volume 1: Knowledge, focused on writing efficient assembly code for the 16-bit 8086 processor, but was released after the 80486 CPU was already available. In addition to assembly-level optimization, the book focused on parts of the system that silently affect code performance. A key point of Zen of Assembly Language is that performance must always be measured. In the early to mid-1990s, Abrash wrote a PC graphics programming column for Dr. Dobb's Journal called Ramblings in Realtime. In 1991 he introduced Mode X, a 320x240 VGA graphics mode with square pixels instead of the slightly elongated pixels of the standard 320x200 mode. At the same time, he introduced readers to a lesser-known part of the VGA standard allowing multiple pixels to be written at once.
5.
Software release life cycle
–
Usage of the alpha/beta test terminology originated at IBM. As long ago as the 1950s, IBM used similar terminology for its hardware development: an "A" test was the verification of a new product before public announcement, a "B" test was the verification before releasing the product to be manufactured, and a "C" test was the final test before general availability of the product. Martin Belsky, a manager on some of IBM's earlier software projects, claimed to have invented the terminology. IBM dropped the alpha/beta terminology during the 1960s, but by then it had received fairly wide notice. The usage of "beta test" to refer to testing done by customers was not done at IBM; rather, IBM used the term "field test". Pre-alpha refers to all activities performed during the project before formal testing. These activities can include requirements analysis, software design, and software development. In typical open-source development, there are several types of pre-alpha versions; milestone versions include specific sets of functions and are released as soon as the functionality is complete. The alpha phase of the release life cycle is the first phase to begin software testing. In this phase, developers generally test the software using white-box techniques; additional validation is then performed using black-box or gray-box techniques, by another testing team. Moving to black-box testing inside the organization is known as alpha release. Alpha software can be unstable and could cause crashes or data loss, and it may not contain all of the features that are planned for the final version. In general, external availability of alpha software is uncommon in proprietary software, while open source software often has publicly available alpha versions. The alpha phase usually ends with a feature freeze, indicating that no more features will be added to the software; at this time, the software is said to be feature complete. Beta, named after the second letter of the Greek alphabet, is the software development phase following alpha. Software in the beta stage is also known as betaware. The beta phase generally begins when the software is feature complete but likely to contain a number of known or unknown bugs. Software in the beta phase will generally have many more bugs in it than completed software, as well as speed/performance issues. The focus of beta testing is reducing impacts on users, often incorporating usability testing. The process of delivering a beta version to the users is called beta release, and this is typically the first time that the software is available outside of the organization that developed it. Beta version software is useful for demonstrations and previews within an organization.
6.
Repository (version control)
–
In revision control systems, a repository is an on-disk data structure which stores metadata for a set of files and/or directory structure. Some of the metadata that a repository contains includes, among other things, a set of references to commit objects, called heads. The main purpose of a repository is to store a set of files, along with the history of changes made to those files. Differences in methodology have generally led to diverse uses of revision control by different groups, depending on their needs.
7.
C (programming language)
–
C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs, and used to re-implement the Unix operating system. C has been standardized by the American National Standards Institute since 1989. C is an imperative procedural language; therefore, C was useful for applications that had formerly been coded in assembly language. Despite its low-level capabilities, the language was designed to encourage cross-platform programming: a standards-compliant and portably written C program can be compiled for a very wide variety of computer platforms and operating systems with few changes to its source code, and the language has become available on a wide range of platforms. In C, all executable code is contained within subroutines, which are called functions. Function parameters are passed by value; pass-by-reference is simulated in C by explicitly passing pointer values. C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements. The C language also exhibits the following characteristics. There is a small, fixed number of keywords, including a full set of flow-of-control primitives: for, if/else, while, and switch. User-defined names are not distinguished from keywords by any kind of sigil. There is a large number of arithmetical and logical operators, such as +, +=, ++, &, ~, etc. More than one assignment may be performed in a single statement, and function return values can be ignored when not needed. Typing is static, but weakly enforced: all data has a type. C has no "define" keyword; instead, a statement beginning with the name of a type is taken as a declaration. There is no "function" keyword; instead, a function is indicated by the parentheses of an argument list. User-defined and compound types are possible: heterogeneous aggregate data types allow related data elements to be accessed and assigned as a unit. Array indexing is a secondary notation, defined in terms of pointer arithmetic. Unlike structs, arrays are not first-class objects: they cannot be assigned or compared using single built-in operators. There is no "array" keyword, in use or definition; instead, square brackets indicate arrays syntactically, for example month[11]. Enumerated types are possible with the enum keyword; they are not tagged and are freely interconvertible with integers. Strings are not a distinct data type, but are conventionally implemented as null-terminated arrays of characters. Low-level access to memory is possible by converting machine addresses to typed pointers.
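A short, self-contained sketch (purely illustrative, not drawn from any reference implementation) of several of the characteristics listed above: pass-by-value versus explicit pointer passing, declarations that begin with a type name, an enum freely mixed with integers, square-bracket array syntax, and a null-terminated string.

```c
#include <stdio.h>

enum color { RED, GREEN, BLUE };          /* enum values interconvert with int */

/* Pass-by-value: 'n' is a copy, so the caller's variable is untouched. */
void bump_copy(int n) { n++; }

/* Pass-by-reference is simulated by explicitly passing a pointer. */
void bump_actual(int *n) { (*n)++; }

int main(void) {
    int month[12];                        /* square brackets declare an array     */
    int x = 0;                            /* declaration begins with a type name  */
    const char *name = "C";               /* null-terminated array of characters  */
    enum color c = GREEN;

    month[11] = 31;                       /* indexing, defined via pointer arithmetic */
    bump_copy(x);                         /* x is still 0 afterwards              */
    bump_actual(&x);                      /* x becomes 1                          */

    printf("%s: x=%d, month[11]=%d, color as int=%d\n", name, x, month[11], (int)c);
    return 0;
}
```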
8.
Assembly language
–
Each assembly language is specific to a particular computer architecture; in contrast, most high-level programming languages are generally portable across multiple architectures. Assembly language may also be called symbolic machine code. Assembly language is converted into machine code by a utility program referred to as an assembler; the conversion process is referred to as assembly, or assembling the source code, and assembly time is the computational step where an assembler is run. Assembly language uses a mnemonic to represent each low-level machine instruction or opcode, and typically also each architectural register and flag. Depending on the architecture, these elements may also be combined for specific instructions or addressing modes using offsets or other data as well as fixed addresses. Many assemblers offer additional mechanisms to facilitate development and to control the assembly process. A macro assembler includes a facility so that assembly language text can be represented by a name. A cross assembler is an assembler that is run on a computer or operating system of a different type from the system on which the code is to run; cross-assembling facilitates the development of programs for systems that do not have the resources to support software development. A microassembler is a program that helps prepare a microprogram, called firmware, to control the low-level operation of a computer. A meta-assembler is a term used in some circles for a program that accepts the syntactic and semantic description of an assembly language. An assembler program creates object code by translating combinations of mnemonics and syntax for operations; this representation typically includes an operation code as well as other control bits and data. The assembler also calculates constant expressions and resolves symbolic names for memory locations. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include facilities for performing textual substitution, e.g. to generate common short sequences of instructions inline. Some assemblers may also be able to perform some simple types of instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86 assemblers from various vendors; most of them are able to perform jump-instruction replacements in any number of passes, on request. Like early programming languages such as Fortran, Algol, Cobol and Lisp, assemblers have been available since the 1950s; however, assemblers came first, as they are far simpler to write than compilers for high-level languages. There may be several assemblers with different syntax for a particular CPU or instruction set architecture; despite different appearances, different syntactic forms generally generate the same numeric machine code (see further below). A single assembler may also have different modes in order to support variations in syntactic forms as well as their exact semantic interpretations. There are two types of assemblers, based on how many passes through the source are needed to produce the executable program.
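To make the role of symbolic references concrete, here is a minimal sketch in C of how a two-pass assembler might resolve labels for a toy instruction set. The mnemonics, source lines, and one-word-per-instruction layout are all invented for the example; a real assembler is far more involved.

```c
#include <stdio.h>
#include <string.h>

struct symbol { char name[16]; int address; };

static struct symbol symtab[32];
static int nsyms = 0;

static void define_label(const char *name, int address) {
    strncpy(symtab[nsyms].name, name, sizeof symtab[nsyms].name - 1);
    symtab[nsyms].address = address;
    nsyms++;
}

static int lookup(const char *name) {
    for (int i = 0; i < nsyms; i++)
        if (strcmp(symtab[i].name, name) == 0)
            return symtab[i].address;
    return -1;                               /* undefined symbol */
}

int main(void) {
    /* Toy source: each instruction or data word occupies one address. */
    const char *source[] = { "start: LOAD x", "JMP start", "x: DATA 42" };
    int nlines = 3;

    /* Pass 1: record the address of every label (text before a colon). */
    for (int addr = 0; addr < nlines; addr++) {
        char label[16];
        if (strchr(source[addr], ':') &&
            sscanf(source[addr], "%15[^:]:", label) == 1)
            define_label(label, addr);
    }

    /* Pass 2: emit "object code", replacing symbolic names with addresses. */
    printf("JMP start -> opcode JMP, operand %d\n", lookup("start"));
    printf("LOAD x    -> opcode LOAD, operand %d\n", lookup("x"));
    return 0;
}
```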
9.
Operating system
–
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function. Operating systems are found on many devices that contain a computer, from cellular phones to supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 83.3%; macOS by Apple Inc. is in second place, and the varieties of Linux are in third position. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run only one program at a time, while multi-tasking may be characterized as preemptive or cooperative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs; Unix-like operating systems such as Solaris and Linux use preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner: 16-bit versions of Microsoft Windows used cooperative multi-tasking, while 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine, and when computers in a group work in cooperation, they form a distributed system. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines like PDAs with less autonomy, and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design; Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking it uses specialized scheduling algorithms to meet its deadlines. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could run different programs in succession to speed up processing.
10.
Computing platform
–
A computing platform, in the most general sense, is whatever environment a piece of software is executed in. It may be the hardware or the operating system, or even a web browser or other application. The term computing platform can refer to different abstraction levels, including a hardware architecture or an operating system; in total it can be said to be the stage on which programs can run. For example, an OS may be a platform that abstracts the underlying differences in hardware. Platforms may also include hardware alone, in the case of small embedded systems; embedded systems can access hardware directly, without an OS, which is referred to as running on bare metal. A browser is the platform in the case of web-based software; the browser itself runs on a platform, but this is not relevant to software running within the browser. An application, such as a spreadsheet or word processor, can host software written in a scripting language; this can be extended to writing fully-fledged applications with the Microsoft Office suite as a platform. Software frameworks provide ready-made functionality. Cloud computing and Platform as a Service are further examples, and the social networking sites Twitter and Facebook are also considered development platforms. A virtual machine such as the Java virtual machine is a platform: applications are compiled into a format similar to machine code, known as bytecode, which is then executed by the VM. Finally, a virtualized version of a complete system, including virtualized hardware, OS, and software, is a platform; these allow, for instance, a typical Windows program to run on what is physically a Mac. Some architectures have multiple layers, with each layer acting as a platform to the one above it. In general, a component only has to be adapted to the layer immediately beneath it; however, the JVM, the layer beneath the application, does have to be built separately for each OS.
11.
Game engine
–
A game engine is a software framework designed for the creation and development of video games. Developers use them to build games for consoles and mobile devices. The process of game development is often economized, in large part, by reusing or adapting the same game engine to create different games. In many cases game engines provide a suite of development tools in addition to reusable software components; these tools are provided in an integrated development environment to enable simplified, rapid development. Game engine developers attempt to "pre-invent the wheel" by developing robust software suites which include many elements a game developer may need to build a game. Most game engine suites provide facilities that ease development, such as graphics, sound, physics, and AI functions; Gamebryo, JMonkey Engine, and RenderWare are such widely used middleware programs. However extensibility is achieved, it remains a high priority for game engines due to the wide variety of uses to which they are applied. Some game engines only provide real-time 3D rendering capabilities instead of the full range of functionality needed by games; these engines rely upon the developer to implement the rest of this functionality or assemble it from other game middleware components. These types of engines are referred to as a graphics engine or rendering engine, and the terminology is used loosely, as many full-featured 3D game engines are referred to simply as 3D engines. A few examples of such engines are Crystal Space, Genesis3D, Irrlicht, OGRE, RealmForge, and Truevision3D. As technology ages, the components of an engine may become outdated or insufficient for the requirements of a given project, and the complexity of programming a new engine may result in unwanted delays. Such a framework is composed of a multitude of different components. The actual game logic has to be implemented by some algorithms; it is distinct from any rendering, sound, or input work. The rendering engine generates 3D animated graphics by the chosen method; before hardware-accelerated 3D graphics, software renderers had been used. Game engines can be written in any programming language, such as C++, C, or Java, though each language is structurally different.
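As a rough illustration of the separation described above (game logic kept distinct from rendering, sound, and input), here is a minimal sketch of the kind of main loop a game engine framework is built around. Every function in it is a hypothetical stub standing in for an engine subsystem, not any real engine's API.

```c
#include <stdbool.h>
#include <stdio.h>

static int frames_left = 3;                  /* stand-in for a real quit signal */

static bool quit_requested(void)    { return frames_left-- <= 0; }
static void poll_input(void)        { /* read keyboard/controller state */ }
static void update_logic(double dt) { printf("logic step, dt=%.3f s\n", dt); }
static void render_frame(void)      { printf("render frame\n"); }
static void mix_audio(void)         { /* submit sound for this frame */ }

int main(void) {
    const double dt = 1.0 / 60.0;            /* fixed 60 Hz timestep for the sketch */
    while (!quit_requested()) {
        poll_input();                        /* input layer                          */
        update_logic(dt);                    /* game rules, independent of rendering */
        render_frame();                      /* rendering engine                     */
        mix_audio();                         /* audio layer                          */
    }
    return 0;
}
```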
12.
Software license
–
A software license is a legal instrument governing the use or redistribution of software. Under United States copyright law, all software is copyright protected, in source code as well as object code form; the only exception is software in the public domain. Most distributed software can be categorized according to its license type. Two common categories for software under copyright law, and therefore with licenses which grant the licensee specific rights, are proprietary software and free and open-source software. Unlicensed software outside of copyright protection is either public domain software or software which is non-distributed, non-licensed and handled as an internal business trade secret. Contrary to popular belief, distributed unlicensed software is copyright protected; examples of this are unauthorized software leaks or software projects which are placed on public software repositories like GitHub without a specified license. As voluntarily handing software into the public domain is problematic in some international law domains, there are also licenses granting PD-like rights. Therefore, the owner of a copy of software is legally entitled to use that copy of software. Hence, if the end-user of software is the owner of the respective copy, they may legally use it; many proprietary licenses only enumerate the rights that the user already has under 17 U.S.C. §117, and yet proclaim to take rights away from the user. Proprietary software licenses often proclaim to give software publishers more control over the way their software is used by keeping ownership of each copy of software with the software publisher. The form of the relationship, whether it is a lease or a purchase, has been disputed, for example in UMG v. Augusto and Vernor v. Autodesk. The ownership of goods like software applications and video games is challenged by "licensed, not sold" models; the Swiss-based company UsedSoft innovated the resale of business software. This feature of proprietary software licenses means that certain rights regarding the software are reserved by the software publisher. Therefore, it is typical of EULAs to include terms which define the uses of the software. The most significant effect of this form of licensing is that, if ownership of the software remains with the software publisher, then the end-user must accept the software license; in other words, without acceptance of the license, the end-user may not use the software at all. One example of such a proprietary software license is the license for Microsoft Windows. The most common licensing models are per single user or per user in the appropriate volume discount level. Licensing per concurrent/floating user also occurs, where all users in a network have access to the program, but only a specific number at the same time. Another license model is licensing per dongle, which allows the owner of the dongle to use the program on any computer. Licensing per server, CPU or points, regardless of the number of users, is common practice, as are site or company licenses.
13.
GNU General Public License
–
The GNU General Public License is a widely used free software license, which guarantees end users the freedom to run, study, share, and modify the software. The license was written by Richard Stallman of the Free Software Foundation for the GNU Project. The GPL is a copyleft license, which means that derivative work can only be distributed under the same license terms. This is in distinction to permissive free software licenses, of which the BSD licenses are a widely used example. GPL was the first copyleft license for general use. Historically, the GPL license family has been one of the most popular software licenses in the free and open-source software domain; prominent free software licensed under the GPL includes the Linux kernel. In 2007, the third version of the license was released to address some perceived problems with the second version that were discovered during its long-time usage. To keep the license up to date, the GPL includes an optional "any later version" clause; developers can omit it when licensing their software, and for instance the Linux kernel is licensed under GPLv2 without the "any later version" clause. The GPL was written by Richard Stallman in 1989, for use with programs released as part of the GNU project. The original GPL was based on a unification of similar licenses used for early versions of GNU Emacs, the GNU Debugger, and the GNU C Compiler. These licenses contained similar provisions to the modern GPL, but were specific to each program, rendering them incompatible; Stallman's goal was to produce one license that could be used for any project, thus making it possible for many projects to share code. The second version of the license, version 2, was released in 1991. Version 3 was developed to attempt to address these concerns and was officially released on 29 June 2007. Version 1 of the GNU GPL was released on 25 February 1989. The first problem it addressed was that distributors may publish binary files only: executable, but not readable or modifiable by humans. To prevent this, GPLv1 stated that any vendor distributing binaries must also make the source code available under the same licensing terms. The second problem was that distributors might add restrictions, either to the license or by combining the software with other software that had other restrictions on distribution; the union of two sets of restrictions would apply to the combined work, thus adding unacceptable restrictions. To prevent this, GPLv1 stated that modified versions, as a whole, had to be distributed under the terms in GPLv1. Therefore, software distributed under the terms of GPLv1 could be combined with software under more permissive terms. According to Richard Stallman, the major change in GPLv2 was the "Liberty or Death" clause, as he calls it (Section 7). The section says that licensees may distribute a GPL-covered work only if they can satisfy all of the license's obligations; in other words, the obligations of the license may not be severed due to conflicting obligations. This provision is intended to discourage any party from using a patent infringement claim or other litigation to impair users' freedom under the license.
14.
Quake (video game)
–
Quake is a first-person shooter video game, developed by id Software and published by GT Interactive in 1996. It is the first game in the Quake series. In the game, players must find their way through various maze-like, medieval environments while battling a variety of monsters using a wide array of weapons. The successor to id Software's Doom series, Quake built upon that technology; unlike the Doom engine before it, the Quake engine offered full real-time 3D rendering and had early support for 3D acceleration through OpenGL. After Doom helped popularize multiplayer deathmatches, Quake added various multiplayer options. It features music composed by Trent Reznor and Nine Inch Nails. In Quake's single-player mode, players explore and navigate to the exit of each Gothic and dark level, facing monsters; usually there are switches to activate or keys to collect in order to open doors before the exit can be reached. Reaching the exit takes the player to the next level. Before accessing an episode, there is a set of three pathways with easy, medium, and hard skill levels. Quake's single-player campaign is organized into four episodes with seven to eight levels in each. As items are collected, they are carried to the next level; if the player's character dies, he must restart at the beginning of the level. The game may be saved at any time. Upon completing an episode, the player is returned to the hub START level, where another episode can be chosen. Each episode starts the player from scratch, without any previously collected items. Episode one has the most traditional ideology of a boss in the last level. The ultimate objective at the end of each episode is to recover a magic rune; after all of the runes are collected, the floor of the hub level opens up to reveal an entrance to the END level, which contains the boss of the game. In multiplayer mode, players on several computers connect to a server. When players die in multiplayer mode, they can immediately respawn, but will lose any items that were collected; similarly, items that have been picked up previously respawn after some time. The most popular multiplayer modes are all forms of deathmatch. Deathmatch modes typically consist of either free-for-all, one-on-one duels, or organized teamplay with two or more players per team. Teamplay is also played with one or another mod. Monsters are not normally present in teamplay, as they serve no purpose other than to get in the way. The gameplay in Quake was considered unique for its time because of the different ways the player can maneuver through the game.
15.
3D computer graphics
–
Such images may be stored for viewing later or displayed in real-time. 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model. 3D computer graphics are often referred to as 3D models; apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences: a 3D model is the representation of any three-dimensional object, and a model is not technically a graphic until it is displayed. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations; with 3D printing, 3D models are rendered into a 3D physical representation of the model. William Fetter was credited with coining the term "computer graphics" in 1961 to describe his work at Boeing. 3D computer graphics software began appearing for home computers in the late 1970s; the earliest known example is 3D Art Graphics, a set of 3D computer graphics effects written by Kazumasa Mitazawa. Models can also be produced procedurally or via physical simulation. Basically, a 3D model is formed from points called vertices that define the shape; a polygon is an area formed from at least three vertices, and a polygon of n points is an n-gon. The overall integrity of the model and its suitability for use in animation depend on the structure of the polygons. Before rendering into an image, objects must be laid out in a scene; this defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object. These techniques are often used in combination; as with animation, physical simulation also specifies motion. Rendering converts a model into an image either by simulating light transport to get photo-realistic images, or by applying an art style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport and scattering. This step is usually performed using 3D computer graphics software or a 3D graphics API; altering the scene into a suitable form for rendering also involves 3D projection. There are a multitude of websites designed to help and educate; some are managed by software developers and content providers, but there are standalone sites as well. These communities allow members to seek advice and post tutorials. Not all computer graphics that appear 3D are based on a wireframe model.
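As a rough sketch of the vertex-and-polygon representation described above (field names and layout are illustrative and do not correspond to any particular file format or package), a small triangle mesh in C might look like this:

```c
#include <stdio.h>

typedef struct { float x, y, z; } Vertex;
typedef struct { int v[3]; } Triangle;       /* indices into the vertex array */

int main(void) {
    /* A single polygon: three vertices and one triangle referencing them. */
    Vertex vertices[] = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0} };
    Triangle tris[]   = { {{0, 1, 2}} };

    for (int i = 0; i < 3; i++)
        printf("vertex %d: (%.1f, %.1f, %.1f)\n",
               tris[0].v[i],
               vertices[tris[0].v[i]].x,
               vertices[tris[0].v[i]].y,
               vertices[tris[0].v[i]].z);
    return 0;
}
```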
16.
Fork (software development)
–
The term often implies not merely a development branch, but also a split in the developer community, a form of schism. Free and open-source software is that which, by definition, may be forked from the original development team without prior permission and without violating copyright law; however, licensed forks of proprietary software also happen. The word "fork" stems from the Latin word furca, meaning a fork or similarly shaped instrument, and "fork" in the meaning of "to divide in branches, go separate ways" has been used as early as the 14th century. In the software environment, the word evokes the fork system call; however, "fork" was in use in the present sense by 1995 to describe the XEmacs split, and was an understood usage in the GNU Project by 1996. Thus, there is a penalty associated with forking. The relationship between the different teams can be cordial or very bitter, as noted in the Jargon File. David A. Wheeler notes four possible outcomes of a fork, with examples. The death of the fork is by far the most common case: it is easy to declare a fork, but it takes considerable effort to continue independent development and support. With a DVCS such as Mercurial or Git, the normal way to contribute to a project is to first branch the repository. Forks often restart version numbering from 0.1 or 1.0 even if the original software was at version 3.0 or 4.0; an exception is when the software is designed to be a drop-in replacement for the original project. In proprietary software, the copyright is held by the employing entity. A proprietary fork is almost always an economic decision to generate a greater market share; a notable proprietary fork not of this kind is the many varieties of proprietary Unix, almost all derived from AT&T Unix under license and all called "Unix", but increasingly mutually incompatible. The BSD licenses permit forks to become proprietary software, and some say that commercial incentives thus make proprietisation almost inevitable. Examples include macOS, Cedega and CrossOver, EnterpriseDB, Supported PostgreSQL with their proprietary ESM storage system, and Netezza's proprietary highly scalable derivative of PostgreSQL. Some of these vendors contribute back changes to the community project, while some keep their changes as their own competitive advantage.
17.
Quake II
–
Action Quake 2 is a mod for the video game Quake II, created by The A-Team. Action Quake 2 was developed to recreate the look and feel of an action movie, with a fast pace. It features many maps recreating realistic settings, such as city streets and office buildings, along with a range of weapons. Action Quake 2 is essentially a version of deathmatch and team deathmatch in which most of the elements have been modified to some degree; damage to extremities is modelled, so a shot to the leg with any weapon will cause damage. With a then-unique style of play, realistic weapons, and a fast pace, Action Quake 2 became one of the most popular Quake II mods. The mod caught the attention of id Software in June 1998 and was included in Extremities, a commercial add-on product for Quake II. Members of the development team would go on to work on titles such as Action Half-Life. There are two modes of gameplay in Action Quake 2: Deathmatch and Teamplay. In Deathmatch, spawn points are distributed over most of the map, and the map changes when one of two conditions is met: either the timelimit is reached, or the fraglimit is reached. By default each player spawns with only the MK23 pistol, and staying alive is rewarded with more frags for each player killed. In Teamplay, players are split into two teams and play is round based; players select one primary weapon and one item to use, in addition to the default pistol and knife. The teams spawn on opposite sides of the map and are let loose to kill each other. If one team eliminates the other, they win the round.
18.
Quake III Arena
–
Quake III Arena is a multiplayer-focused first-person shooter video game released in December 1999. The game was developed by id Software and featured music composed by Sonic Mayhem and the founder of Front Line Assembly. Quake III Arena is the third game in the Quake series and differs from previous games by excluding a traditional single-player element, instead focusing on multiplayer action; the single-player mode is played against computer-controlled bots. Quake III Arena is available on a number of platforms and contains mature content. The game was praised by reviewers who, for the most part, described the gameplay as fun, and many liked the graphics and the focus on multiplayer. Quake III Arena has also been used extensively in professional electronic sports tournaments such as QuakeCon, the Cyberathlete Professional League, and Dreamhack. Unlike its predecessors, Quake III Arena does not have a plot-based single-player campaign; instead, it simulates the multiplayer experience with computer-controlled players known as bots. The game's story is brief: the greatest warriors of all time fight for the amusement of a race called the Vadrigar in the Arena Eternal. The introduction video shows the abduction of such a warrior, Sarge, while making a last stand. Continuity with prior games in the Quake series and even Doom is maintained by the inclusion of player models and biographical information. A familiar mixture of gothic and technological map architecture as well as equipment is included, such as the Quad Damage power-up and the infamous rocket launcher. In Quake III Arena, the player progresses through tiers of maps, combating different bot characters that increase in difficulty; as the game progresses, the fights take place in more complex arenas and against tougher opponents. While deathmatch maps are designed for up to 16 players, tournament maps are designed for duels between 2 players and in the single-player game could be considered boss battles. The weapons are balanced by role, with each weapon having advantages in certain situations, such as the railgun at long range. Weapons appear as level items, spawning at regular intervals in set locations on the map. If a player dies, all of their weapons are lost and they receive the spawn weapons for the current map, usually the gauntlet and machine gun. Players also drop the weapon they were using when killed, which other players can then pick up. Quake III Arena was specifically designed for multiplayer: the game allows players whose computers are connected by a network or to the internet to play against each other in real time, and it incorporates a handicap system. It employs a client–server model, requiring all players' clients to connect to a server. Quake III Arena's focus on multiplayer gameplay spawned a community similar to QuakeWorld's.
19.
Doom engine
–
Id Tech 1, also known as the Doom engine, is the game engine that powers the id Software games Doom and Doom II: Hell on Earth. It is also used in Heretic, Hexen: Beyond Heretic, Strife: Quest for the Sigil, Hacx: Twitch 'n Kill, and Freedoom. It was created by John Carmack, with auxiliary functions written by Mike Abrash, John Romero, Dave Taylor, and Paul Radek. Originally developed on NeXT computers, it was ported to DOS for Doom's initial release and was later ported to several game consoles. The source code to the Linux version of Doom was released to the public under a license that granted rights to non-commercial use on December 23, 1997; the source code was later re-released under the GNU General Public License on October 3, 1999. Although the engine renders a 3D space, that space is projected from a two-dimensional floor plan. The line of sight is always parallel to the floor, and walls must be perpendicular to the floors. Despite these limitations, the engine represented a leap from id's previous Wolfenstein 3D engine. The Doom engine was renamed id Tech 1 in order to categorize it in the list of id Software's long line of game engines. Viewed from the top down, all Doom levels are actually two-dimensional, demonstrating one of the key limitations of the Doom engine: room-over-room is not possible. This limitation, however, has a silver lining: a map mode can be easily displayed, which represents the walls. The base unit is the vertex, which represents a single 2D point; vertices are then joined to form lines, known as linedefs. Each linedef can have one or two sides, which are known as sidedefs. Sidedefs are then grouped together to form polygons; these are called sectors, and sectors represent particular areas of the level. Each sector contains a number of properties: a floor height, a ceiling height, a light level, and a floor texture. To have a different light level in a particular area, for example, a new sector must be created for that area. One-sided linedefs therefore represent solid walls, while two-sided linedefs represent bridge lines between sectors. Sidedefs are used to store wall textures; these are completely separate from the floor and ceiling textures. Each sidedef can have three textures, called the middle, upper, and lower textures. In one-sided linedefs, only the middle texture is used for the texture on the wall. In two-sided linedefs, the situation is more complex: the lower and upper textures are used to fill the gaps where adjacent sectors have different floor and ceiling heights; lower textures are used for steps, for example.
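A rough sketch of how these map primitives relate to one another. Field names, sizes, and the sample values are simplified for illustration and do not match the engine's actual on-disk WAD layout.

```c
#include <stdio.h>

typedef struct { short x, y; } Vertex;               /* a single 2D point          */

typedef struct {
    short floor_height, ceiling_height;
    short light_level;
    char  floor_texture[9];                          /* texture name               */
} Sector;

typedef struct {
    char upper_texture[9];                           /* fills ceiling-height gaps  */
    char middle_texture[9];                          /* the wall itself            */
    char lower_texture[9];                           /* fills floor-height steps   */
    int  sector;                                     /* index of the facing sector */
} Sidedef;

typedef struct {
    int start_vertex, end_vertex;                    /* endpoints (vertex indices) */
    int right_sidedef;
    int left_sidedef;                                /* -1 means one-sided (solid wall) */
} Linedef;

int main(void) {
    Vertex  v[2] = { {0, 0}, {128, 0} };
    Sector  room = { 0, 128, 160, "FLOOR4_8" };
    Sidedef side = { "-", "STARTAN3", "-", 0 };
    Linedef wall = { 0, 1, 0, -1 };                  /* one-sided: a solid wall    */

    printf("wall from (%d,%d) to (%d,%d), texture %s, sector light %d\n",
           v[wall.start_vertex].x, v[wall.start_vertex].y,
           v[wall.end_vertex].x,   v[wall.end_vertex].y,
           side.middle_texture, room.light_level);
    return 0;
}
```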
20.
Binary space partitioning
–
In computer science, binary space partitioning is a method for recursively subdividing a space into convex sets by hyperplanes. This subdivision gives rise to a representation of objects within the space by means of a tree data structure known as a BSP tree. Binary space partitioning is a process of recursively dividing a scene into two until the partitioning satisfies one or more requirements. When used in graphics to render scenes composed of planar polygons, the specific choice of partitioning plane and the criterion for terminating the partitioning process vary depending on the purpose of the BSP tree. For example, in computer graphics rendering, the scene is divided until each node of the BSP tree contains only polygons that can be rendered in arbitrary order. In collision detection or ray tracing, a scene may be divided up into primitives on which collision or ray intersection tests are straightforward. Binary space partitioning arose from the computer graphics need to rapidly draw three-dimensional scenes composed of polygons. A simple way to draw such scenes is the painter's algorithm, which draws polygons in order of distance from the viewer, back to front. This approach has two disadvantages: the time required to sort polygons in back-to-front order, and the possibility of errors with overlapping polygons. A disadvantage of binary space partitioning is that generating a BSP tree can be time-consuming; typically, it is performed once on static geometry, as a pre-calculation step. The expense of constructing a BSP tree makes it difficult and inefficient to directly implement moving objects into a tree. BSP trees are often used by 3D video games, particularly first-person shooters and those with indoor environments. Game engines utilising BSP trees include the Doom engine and the Quake engine. In video games, BSP trees containing the static geometry of a scene are often used together with a Z-buffer, to correctly merge movable objects such as doors and characters onto the background scene. Binary space partitioning provides a convenient way to store and retrieve information about polygons in a scene. The canonical use of a BSP tree is for rendering polygons with the painter's algorithm. Each polygon is designated with a front side and a back side, which can be chosen arbitrarily and only affect the structure of the tree but not the required result. Such a tree is constructed from an unsorted list of all the polygons in a scene. The recursive algorithm for construction of a BSP tree from that list of polygons is: choose a polygon P from the list; make a node N in the BSP tree, and add P to the list of polygons at that node. For each other polygon in the list: if that polygon is wholly in front of the plane containing P, move it to the list of nodes in front of P; if that polygon is wholly behind the plane containing P, move it to the list of nodes behind P; if that polygon is intersected by the plane containing P, split it into two polygons and move them to the lists of polygons behind and in front of P; if that polygon lies in the plane containing P, add it to the list of polygons at node N. Then apply this algorithm to the list of polygons in front of P, and apply this algorithm to the list of polygons behind P.
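A minimal sketch of the painter's-algorithm use of a BSP tree described above, simplified to 2D with line segments standing in for polygons. The tree is assumed to be already built; traversal draws the far side of each partition before the near side, so nearer geometry is painted over farther geometry.

```c
#include <stdio.h>

typedef struct { float x, y; } Point;

typedef struct BSPNode {
    Point a, b;                      /* two points defining the partition line */
    const char *name;                /* label standing in for polygon data     */
    struct BSPNode *front, *back;    /* subtrees on each side of the line      */
} BSPNode;

/* Which side of the node's partition line is the viewer on? (2D cross product) */
static float side_of(const BSPNode *n, Point viewer) {
    return (n->b.x - n->a.x) * (viewer.y - n->a.y)
         - (n->b.y - n->a.y) * (viewer.x - n->a.x);
}

/* Painter's algorithm: draw the far subtree, then this node, then the near subtree. */
static void draw_back_to_front(const BSPNode *n, Point viewer) {
    if (!n) return;
    if (side_of(n, viewer) >= 0) {           /* viewer is on the "front" side */
        draw_back_to_front(n->back, viewer);
        printf("draw %s\n", n->name);
        draw_back_to_front(n->front, viewer);
    } else {                                 /* viewer is on the "back" side  */
        draw_back_to_front(n->front, viewer);
        printf("draw %s\n", n->name);
        draw_back_to_front(n->back, viewer);
    }
}

int main(void) {
    BSPNode left  = { {0, 0}, {0, 1}, "left wall",   NULL,  NULL };
    BSPNode right = { {2, 0}, {2, 1}, "right wall",  NULL,  NULL };
    BSPNode root  = { {1, 0}, {1, 1}, "middle wall", &left, &right };
    Point viewer  = { -1.0f, 0.5f };          /* viewer stands to the left       */
    draw_back_to_front(&root, viewer);        /* right wall, middle wall, left wall */
    return 0;
}
```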
21.
Gouraud shading
–
Gouraud shading, named after Henri Gouraud, is an interpolation method used in computer graphics to produce continuous shading of surfaces represented by polygon meshes. Gouraud first published the technique in 1971. The technique rests on estimates of the surface normal at each vertex of the mesh. Using these estimates, lighting computations based on a reflection model, e.g. the Phong reflection model, are then performed to produce colour intensities at the vertices; for each screen pixel that is covered by the polygonal mesh, a colour is then interpolated from the intensities computed at the vertices. Gouraud shading is considered superior to flat shading and requires significantly less processing than Phong shading, but usually results in a faceted look. In comparison to Phong shading, Gouraud shading's strength and weakness lies in its interpolation. The problem is easily spotted in a rendering which ought to have a specular highlight moving smoothly across the surface of a model as it rotates. This problem can be reduced by increasing the density of vertices in the object.
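As a concrete sketch of the per-pixel interpolation step (barycentric form over a single triangle, one intensity channel; the per-vertex intensities are assumed to come from a prior lighting pass), not the method as originally published:

```c
#include <stdio.h>

/* Intensity at point (px,py) inside a triangle with vertices (x0,y0)..(x2,y2)
 * whose per-vertex intensities are i0, i1, i2. */
static float gouraud_intensity(float x0, float y0, float i0,
                               float x1, float y1, float i1,
                               float x2, float y2, float i2,
                               float px, float py) {
    float area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0);
    float w0 = ((x1 - px) * (y2 - py) - (x2 - px) * (y1 - py)) / area;
    float w1 = ((x2 - px) * (y0 - py) - (x0 - px) * (y2 - py)) / area;
    float w2 = 1.0f - w0 - w1;
    return w0 * i0 + w1 * i1 + w2 * i2;   /* linear blend of vertex intensities */
}

int main(void) {
    /* A triangle with one bright vertex and two dark ones; sample its centroid. */
    float centre = gouraud_intensity(0, 0, 1.0f,
                                     4, 0, 0.2f,
                                     0, 4, 0.2f,
                                     4.0f / 3.0f, 4.0f / 3.0f);
    printf("intensity at centroid: %.3f\n", centre);   /* ~0.467 */
    return 0;
}
```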
22.
Lightmap
–
A lightmap is a data structure used in lightmapping, a form of surface caching in which the brightness of surfaces in a virtual scene is pre-calculated and stored in texture maps for later use. John Carmack's Quake was the first computer game to use lightmaps to augment rendering. Before lightmaps were invented, realtime applications relied purely on Gouraud shading to interpolate vertex lighting for surfaces; this only allowed low-frequency lighting information, and could create clipping artefacts close to the camera without perspective-correct interpolation. Quake's software rasterizer used surface caching to apply lighting calculations in texture space once, when polygons initially appear within the viewing frustum. As consumer 3D graphics hardware capable of multitexturing became available, light-mapping became more popular. Lightmaps are composed of lumels, analogous to texels in texture mapping. Smaller lumels yield a higher resolution lightmap, providing finer lighting detail at the price of reduced performance; for example, a lightmap scale of 4 lumels per world unit would give lower quality than a scale of 16 lumels per world unit. Lightmap resolution and scaling may also be limited by the amount of available storage space or bandwidth/download time. Some implementations attempt to pack multiple lightmaps together in a process known as atlasing to help circumvent these limitations. Lightmap resolution and scale are two different things: the resolution is the area, in pixels, available for storing one or more surfaces' lightmaps, and the number of surfaces that can fit on a lightmap is determined by the scale. Lower scale values mean higher quality and more space taken on a lightmap; higher scale values mean lower quality and less space taken. A surface can have a lightmap that has the same area, i.e. a 1:1 ratio, or a smaller one, in which case the lightmap is stretched to fit. Lightmaps in games are usually colored texture maps, or per-vertex colors. They are usually flat, without information about the light's direction, whilst some game engines use multiple lightmaps to provide approximate directional information to combine with normal-maps. Lightmaps may also store separate precalculated components of lighting information for semi-dynamic lighting with shaders. When creating lightmaps, any lighting model may be used, because the lighting is entirely precomputed and real-time performance is not always a necessity; a variety of techniques may be used, including ambient occlusion and direct lighting with sampled shadow edges. Modern 3D packages include specific plugins for applying light-map UV-coordinates, atlas-ing multiple surfaces into single texture sheets, and rendering the maps themselves. Alternatively, game engine pipelines may include custom lightmap creation tools. An additional consideration is the use of compressed DXT textures, which are subject to blocking artifacts; individual surfaces must not collide on 4x4 texel chunks for best results. In all cases, soft shadows for static geometry are possible if simple occlusion tests are used to determine which lumels are visible to the light; however, the softness of the shadows is determined by how the engine interpolates the lumel data across a surface. Photon mapping can be used to calculate global illumination for light maps. In vertex lighting, lighting information is computed per vertex and stored in vertex color attributes.
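A minimal sketch (assumed conventions, not any real engine's format) of the two ideas above: sizing a lightmap from a surface's world-space extent and a lumel scale, and modulating a base texture colour by a sampled lumel at draw time.

```c
#include <stdio.h>

/* One lumel per (1/scale) world units; +1 so both edges of the surface get a sample. */
static int lightmap_extent(float world_units, float lumels_per_unit) {
    return (int)(world_units * lumels_per_unit) + 1;
}

/* Combine a base texture colour channel with the precomputed brightness. */
static float apply_lightmap(float texel, float lumel) {
    return texel * lumel;            /* modulate: 0 = black, 1 = full brightness */
}

int main(void) {
    float wall_w = 8.0f, wall_h = 4.0f;      /* wall size in world units */
    float scale  = 2.0f;                     /* lumels per world unit    */
    printf("lightmap size: %d x %d lumels\n",
           lightmap_extent(wall_w, scale), lightmap_extent(wall_h, scale));
    printf("lit texel: %.2f\n", apply_lightmap(0.8f, 0.5f));
    return 0;
}
```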
23.
Quake II engine
–
The Quake II engine, later dubbed id Tech 2, is a game engine developed by id Software for use in their 1997 first-person shooter Quake II. It is the successor to the Quake engine. Since its release, the Quake II engine has been licensed for use in several other games. One of the engine's most notable features was out-of-the-box support for hardware-accelerated graphics, specifically OpenGL; another interesting feature was the subdivision of some of the components into dynamic-link libraries. This allowed both software and OpenGL renderers, which were selected by loading and unloading separate libraries. Libraries were also used for the game logic, for two reasons: id could release the source code to allow modifications while keeping the remainder of the engine proprietary, and, since the libraries were compiled for their target platforms instead of running in an interpreter, they could run faster than Quake's solution. The level format, as with previous id Software engines, used binary space partitioning. Id Software released the source code on 22 December 2001 under the terms of the GNU General Public License. The Java port, Jake2, has since been used by Sun as an example of Java Web Start capabilities for games distribution over the Internet, and in 2006 it was used to experiment with playing 3D games with eye tracking. The performance of Jake2 is on par with the original C version.
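To illustrate the idea of selecting a renderer by loading a separate library, here is a minimal POSIX sketch using dlopen. The library file names and the exported symbol are hypothetical placeholders; the real engine used its own loader and exports on each platform.

```c
#include <dlfcn.h>
#include <stdio.h>

typedef void (*draw_frame_fn)(void);

int main(int argc, char **argv) {
    /* Pick a renderer library by name, e.g. "./ref_soft.so" or "./ref_gl.so". */
    const char *lib = (argc > 1) ? argv[1] : "./ref_soft.so";

    void *handle = dlopen(lib, RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "failed to load renderer %s: %s\n", lib, dlerror());
        return 1;
    }

    /* Look up the renderer's entry point (symbol name is hypothetical). */
    draw_frame_fn draw_frame = (draw_frame_fn)dlsym(handle, "draw_frame");
    if (draw_frame)
        draw_frame();
    else
        fprintf(stderr, "renderer has no draw_frame symbol\n");

    dlclose(handle);
    return 0;
}
```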
24.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing, and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem. In English, the word was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: "Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris," which translates as: "Algorism is the art by which at present we use those Indian figures." The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals. An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. But humans can do something equally useful, in the case of certain enumerably infinite sets: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability; that notion is central for explaining how formal systems come into being starting from a set of axioms. In logic, the time that an algorithm requires to complete cannot be measured. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term. Algorithms are essential to the way computers process data; thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute. According to Gurevich, Turing's informal argument in favor of his thesis justifies a stronger thesis; according to Savage, an algorithm is a computational process defined by a Turing machine. Typically, when an algorithm is associated with processing information, data can be read from an input source and written to an output device. Stored data are regarded as part of the state of the entity performing the algorithm; in practice, the state is stored in one or more data structures. For such a computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be dealt with, case by case.
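A classic concrete example of an algorithm in the sense described above is Euclid's method for the greatest common divisor: a finite, unambiguous sequence of steps that reads two inputs, always terminates, and produces a result. A minimal version in C:

```c
#include <stdio.h>

/* Euclid's algorithm: repeatedly replace the pair (a, b) by (b, a mod b)
 * until the remainder is zero; the last non-zero value is the gcd. */
static unsigned gcd(unsigned a, unsigned b) {
    while (b != 0) {
        unsigned r = a % b;
        a = b;
        b = r;
    }
    return a;
}

int main(void) {
    printf("gcd(1071, 462) = %u\n", gcd(1071, 462));   /* prints 21 */
    return 0;
}
```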
25.
John Romero
–
Alfonso John Romero is an American director, designer, programmer, and developer in the video game industry. He is best known as a co-founder of id Software and designer for many of their games, including Wolfenstein 3D, Dangerous Dave, Hexen and Doom, and he is credited with coining the FPS multiplayer term deathmatch. Among his early influences, he credited Namco's maze chase arcade game Pac-Man as the title that had the biggest influence on his career, and he also cited programmer Bill Budge as another influence. John Romero's first published game, Scout Search, appeared in the June 1984 issue of inCider magazine, and his first company, Capitol Ideas Software, was listed as the developer for at least 12 of his earliest published games. Romero captured the December cover of the Apple II magazine Nibble for three years in a row starting in 1987, and he entered a programming contest in A+ magazine during its first year of publishing with his game Cavern Crusader. The first game Romero created that was later published was Jumpster, in Uptime; Jumpster was created in 1983 and published in 1987, making it his earliest created, then published, game. Romero's first industry job was at Origin Systems in 1987, after he had been programming games for 8 years. He worked on the Apple II to Commodore 64 port of 2400 A.D., which was scrapped due to slow sales of the Apple II version. Romero then moved onto Space Rogue, a game by Paul Neurath. During this time, Romero was asked if he would be interested in joining Paul's soon-to-start company Blue Sky Productions, eventually renamed Looking Glass Technologies. Instead, Romero left Origin Systems to co-found a game company named Inside Out Software; during this short time, Romero did the artwork for the Apple IIGS version of Dark Castle, a port from the Macintosh. During this period, John and his friend Lane Roathe also co-founded a company named Ideas from the Deep and wrote versions of a game named Zappa Roidz for the Apple II, PC and Apple IIGS. Their last collaboration was an Apple II disk operating system for Infocom's games Zork Zero, Arthur and Shogun. Romero moved to Shreveport, Louisiana in March 1989 and joined Softdisk as a programmer in its Special Projects division. After several months of helping on the PC monthly disk magazine Big Blue Disk, Romero hired John D. Carmack; Romero and the others then left Softdisk in February 1991 to form id Software. Romero worked at id Software from its incorporation in 1991 until 1996, and he was involved in the creation of several milestone games, including Commander Keen, Wolfenstein 3D, Doom, Doom II: Hell on Earth and Quake. He served as producer on Heretic and Hexen. He designed most of the first episode of Doom, most of the levels in Quake, and half the levels in Commander Keen and Wolfenstein 3D. During the production of Quake, Romero clashed with John Carmack over the future direction of id. Although Romero relented on his vision and joined a death-march effort to finish the game, this did not resolve the tensions within the company. In level 30 of Doom II, Icon of Sin, the boss is supposed to be a giant demon head with a fragment missing from its forehead.
26.
Sega AM2
–
Sega-AM2 Co. Ltd. is a division of Japanese video game developer Sega, serving as Sega's second development division for arcade software. Several games produced by Sega-AM2 have influenced and innovated the video gaming industry from a technical and developmental perspective. Sega popularized the term and innovated this design through games such as Hang-On, OutRun, Space Harrier, and After Burner and their more elaborate arcade set-ups; all the aforementioned games were created by the second arcade department at Sega, which started to stand out relatively quickly. From 1990 onwards the game development groups at Sega became their own divisions. Development teams became bigger, and many of the planners, designers and programmers of the small teams of before became producers and managers of their own teams and departments. Thus Amusement Machine Research and Development Division No. 2 (AM2) was headed by Yu Suzuki and Toshihiro Nagoshi; Daytona USA was the first game to use the palm-tree AM2 logo. However, for more financial stability, Sega began consolidating its studios into five main ones in 2003, and merged them back into a uniform R&D structure in 2004. SEGA-AM2 was established as an independent studio but kept its name, headed by Hiroshi Kataoka, Yu Suzuki and Makoto Osaki. After the integration back into Sega, its lineage as the second arcade software R&D division continues, now headed by Hiroshi Kataoka and Makoto Osaki. As creators of famous franchises such as Virtua Fighter, OutRun and Daytona USA, AM2 supervises their implementation in guest appearances such as in Dead or Alive 5 and Sonic & Sega All-Stars Racing.
27.
Fighting game
–
A fighting game is a video game genre in which the player controls an on-screen character and engages in close combat with an opponent, which can be either AI-controlled or controlled by another player. Fight matches typically consist of several rounds and take place in an arena. Each character has widely differing abilities, and players must master techniques such as blocking, counter-attacking, and chaining attacks together into combos. Since the early 1990s, most fighting games allow the player to perform special attacks by entering specific input combinations. The fighting game genre is related to but distinct from beat 'em ups. The first video game to feature fist fighting was the arcade game Heavyweight Champ in 1976, but it was Karate Champ which popularized one-on-one martial arts games in arcades in 1984. In 1985, Yie Ar Kung-Fu featured antagonists with differing fighting styles, and in 1987, Street Fighter introduced hidden special attacks. In 1991, Capcom's highly successful Street Fighter II refined and popularized many of the conventions of the genre, and the fighting game subsequently became the preeminent genre for competitive video gaming in the early to mid-1990s, particularly in arcades. Fighting games are a type of action game in which two on-screen characters fight each other. These games typically feature special moves that are triggered using rapid sequences of carefully timed button presses, and they traditionally show fighters from a side view, even as the genre has progressed from two-dimensional to three-dimensional graphics. Fighting games typically involve hand-to-hand combat, but may also feature melee weapons. The genre is distinct from beat 'em ups, another action genre involving combat, where the player character must fight many weaker enemies at the same time. During the 1980s, publications used the terms "fighting game" and "beat 'em up" interchangeably; with hindsight, critics have argued that the two types of game gradually became dichotomous as they evolved, though the two terms may still be conflated. Fighting games are sometimes grouped with games that feature boxing, UFC, or wrestling; as such, boxing games, mixed martial arts games, and wrestling games are often described as distinct genres, without comparison to fighting games. Fighting games involve combat between pairs of fighters using highly exaggerated martial arts moves. They typically revolve around brawling or combat sport, though some variations feature weaponry. Games usually display on-screen fighters from a side view, and even 3D fighting games play largely within a 2D plane of motion. Games usually confine characters to moving left and right and jumping, although some, such as Fatal Fury, allow movement between parallel planes. Recent games tend to be rendered in three dimensions and allow side-stepping, but otherwise play like those rendered in two dimensions. Aside from moving around a restricted space, fighting games limit the player's actions to different offensive and defensive maneuvers. Players must learn which attacks and defenses are effective against each other, often by trial and error. Blocking is a basic technique that allows a player to defend against attacks, and some games feature more advanced blocking techniques: for example, Capcom's Street Fighter III features a move termed parrying which causes the attacker to become momentarily incapacitated. In addition to blows such as punches and kicks, players can utilize throwing or grappling to circumvent blocks.
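As a rough illustration of how "specific input combinations" can be recognized, the following sketch keeps a short history of timestamped inputs and checks whether a command pattern was entered, in order, within a time window. It is a generic, hypothetical example, not taken from any particular fighting game.

    /* Hypothetical input-buffer sketch: record inputs per frame and test whether
       a pattern such as "DRP" (down, right, punch) was completed recently. */
    #include <string.h>

    #define HISTORY 16

    typedef struct { char input; int frame; } InputEvent;

    static InputEvent history[HISTORY];
    static int head = 0;               /* total number of inputs recorded so far */

    void record_input(char input, int frame)
    {
        history[head % HISTORY] = (InputEvent){ input, frame };
        head++;
    }

    /* Returns 1 if the pattern was entered within max_frames of current_frame. */
    int command_entered(const char *pattern, int max_frames, int current_frame)
    {
        int p = (int)strlen(pattern) - 1;          /* match the pattern back to front */
        for (int i = 1; i <= HISTORY && p >= 0; i++) {
            if (head - i < 0)
                break;                             /* not enough history yet */
            int idx = (head - i) % HISTORY;
            if (current_frame - history[idx].frame > max_frames)
                break;                             /* too old: outside the timing window */
            if (history[idx].input == pattern[p])
                p--;                               /* intervening inputs are simply skipped */
        }
        return p < 0;
    }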
28.
Virtua Fighter
–
Virtua Fighter is a series of fighting games created by Sega studio AM2 and designers Yu Suzuki and Seiichi Ishii. The original Virtua Fighter was released in October 1993 and has received four main sequels; the highly influential first Virtua Fighter game is widely recognized as the first 3D fighting game ever released. If a character is knocked out of the ring, the opponent wins the round. A fourth round is necessary if a double knockout occurred in a previous round and the match is tied one round each; in this fourth round, players fight on a stage wherein one hit equals victory. The basic control scheme is simple, using only a control stick and three buttons; through various timings, positions, and button combinations, players input normal and special moves for each character. Traditionally, in the arcade mode, the player runs a gauntlet of the characters in the game all the way to the final boss. The original is considered the first polygon-based fighting game; it introduced the eight initial fighters as well as the boss, Dural. Virtua Fighter 2 was released in November 1994, adding two new fighters, Shun Di and Lion Rafale. It was built using the Model 2 hardware, rendering characters and backgrounds with filtered texture mapping and motion capture. A slightly tweaked upgrade, Virtua Fighter 2.1, followed soon after. Virtua Fighter 3 came out in 1996, with the introduction of Taka-Arashi and Aoi Umenokoji; aside from improving the graphics via use of the Model 3 hardware, the game also introduced undulations in some stages. Virtua Fighter 3tb in 1997 was the first major update in series history, implementing tournament battles featuring more than two characters. Virtua Fighter 4, which introduced Vanessa Lewis and Lei-Fei and removed Taka-Arashi, was released on the NAOMI 2 hardware in 2001 instead of hardware from a joint collaboration with Lockheed Martin. The game also removed the uneven battlegrounds and the Dodge button of the previous game. The title is popular in its home arcade market. Virtua Fighter 4: Evolution, released in 2003, was the first update to add new characters, and Virtua Fighter 4: Final Tuned, an upgrade to Evolution, was released in arcades in early 2005. In Japan, Virtua Fighter 4 was famous for spearheading and opening the market for internet functionality in arcades: VF.NET started in Japan in 2001, and since then other companies have created their own networks, such as e-Amusement by Konami and NESiCAxLive by Taito and Square Enix. Virtua Fighter 5 was released in Japan on July 12, 2006 for Sega's Lindbergh arcade board and, similar to its predecessor, two revisions were later released: Virtua Fighter 5 R, released on July 24, 2008, saw the return of Taka-Arashi while introducing a new fighter, and Virtua Fighter 5 Final Showdown was released in arcades on July 29, 2010. The first Virtua Fighter was ported to the Saturn in 1994; the console port, which was nearly identical to the arcade game, sold at a nearly 1:1 ratio with the Saturn hardware at launch.
29.
Beat 'em up
–
Beat 'em up is a video game genre featuring hand-to-hand combat between the protagonist and an improbably large number of opponents. These games typically take place in urban settings and feature crime-fighting and revenge-based plots, though some games may employ historical or fantasy themes. Traditional beat 'em ups take place in scrolling, two-dimensional levels. These games are noted for their simple gameplay, a source of both critical acclaim and derision; two-player cooperative gameplay and multiple playable characters are also hallmarks of the genre. The first influential beat 'em up was 1984's Kung-Fu Master, with 1986's Renegade introducing the urban settings. Games such as Streets of Rage, Final Fight and Golden Axe are other classics to emerge from this period. The genre has been less popular since the emergence of 3D-based mass-market games. A beat 'em up is a type of action game in which the player character must fight a large number of enemies in unarmed combat or with melee weapons. Gameplay consists of walking through a level, one section at a time, defeating a group of enemies before advancing to the next section; arcade versions of these games are often quite difficult to win, causing players to spend more money to try to win. Beat 'em ups are related to, but distinct from, fighting games, which are based around one-on-one matches rather than scrolling levels. Such terminology is loosely applied, however, as some commentators prefer to conflate the two terms. At times, both one-on-one fighting games and scrolling beat 'em ups have influenced each other in terms of graphics and style, and occasionally a game will feature both kinds of gameplay. Beat 'em up games usually employ vigilante crime-fighting and revenge plots, with the action taking place on city streets, though historical and fantasy settings also appear. Players must walk from one end of the game world to the other. Some later beat 'em ups dispense with 2D-based scrolling levels, instead allowing the player to roam around larger 3D environments, though they retain the same simple gameplay. Throughout the level, players may acquire weapons that they can use, as well as power-ups that replenish the player's health. As players walk through the level, they are stopped by groups of enemies who must be defeated before they can continue; the level ends when all the enemies are defeated. Each level contains many identical groups of enemies, making these games notable for their repetition. In beat 'em up games, players fight a boss, an enemy much stronger than the other enemies, at the end of each level. Beat 'em ups often allow the player to choose between a selection of protagonists, each with their own strengths, weaknesses, and set of moves. Attacks can include rapid combinations of basic attacks as well as jumping and grappling attacks; characters often have their own special attacks, which leads to different strategies depending on which character the player selects.
30.
Central processing unit
–
The computer industry has used the term central processing unit at least since the early 1960s. The form, design and implementation of CPUs have changed over the course of their history. Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit chip; an IC that contains a CPU may also contain memory, peripheral interfaces, and other components. Some computers employ a multi-core processor, which is a single chip containing two or more CPUs called cores; in that context, one can speak of such single chips as sockets. Array processors or vector processors have multiple processors that operate in parallel, and there also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be rewired to perform different tasks. Since the term CPU is generally defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner. On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC; it was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a number of instructions of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the machine for a new task; with von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit. The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers, and both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design using punched paper tape rather than electronic memory. Relays and vacuum tubes were used as switching elements; a useful computer requires thousands or tens of thousands of switching devices, and the overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages they afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs; clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time. The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices.
31.
Brush (video game)
–
Brushes are templates, used in some 3D video games such as games based on the Quake engine, the Source engine, or Unreal Engine, to construct levels. Brushes can be simple geometric shapes, pre-defined shapes, or custom shapes. During the map compilation process, brushes are turned into meshes that can be rendered by the game engine. Often brushes are restricted to convex shapes only, as this reduces the complexity of the binary space partitioning process. However, using CSG operations, complex rooms and objects can be created by adding, subtracting and intersecting brushes. Additionally, brushes can be used as liquids or as area triggers.
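A minimal sketch of the convexity idea: if a brush is stored as the set of planes that bound it, a point is inside the brush exactly when it lies behind every plane, which is part of what keeps CSG and BSP processing simple. The structure below is illustrative, not any particular engine's actual format.

    /* A convex brush described by its bounding planes, with a point-inside test. */
    typedef struct { float nx, ny, nz, dist; } Plane;   /* plane: n . p = dist */

    typedef struct {
        const Plane *planes;
        int          num_planes;
    } Brush;

    int point_inside_brush(const Brush *b, float x, float y, float z)
    {
        for (int i = 0; i < b->num_planes; i++) {
            const Plane *pl = &b->planes[i];
            float d = pl->nx * x + pl->ny * y + pl->nz * z - pl->dist;
            if (d > 0.0f)
                return 0;          /* in front of any bounding plane: outside */
        }
        return 1;                  /* behind (or on) every plane: inside the convex solid */
    }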
32.
Skybox (video games)
–
A skybox is a method of creating backgrounds to make a video game level look bigger than it really is. When a skybox is used, the level is enclosed in a cuboid; the sky, distant mountains, distant buildings, and other unreachable objects are projected onto the cube's faces, creating the illusion of distant three-dimensional surroundings. A skydome employs the same concept but uses either a sphere or a hemisphere instead of a cube. Processing of 3D graphics is computationally expensive, especially in real-time games, and levels have to be processed at tremendous speeds, making it difficult to render vast skyscapes in real time. Additionally, realtime graphics generally have depth buffers with limited bit-depth. To compensate for these problems, games often employ skyboxes. Traditionally, these are simple cubes with up to 6 different textures placed on the faces. By careful alignment, a viewer in the exact middle of the skybox will perceive the illusion of a real 3D world around it, made up of those 6 faces. As a viewer moves through a 3D scene, it is common for the skybox to remain stationary with respect to the viewer. This technique gives the skybox the illusion of being very far away, since other objects in the scene appear to move while the skybox does not. This imitates real life, where distant objects such as clouds and stars appear to stay in place; effectively, everything in a skybox will always appear to be infinitely distant from the viewer. The source of a skybox can be any form of texture, including photographs and hand-drawn images. Usually, these textures are created and aligned in 6 directions, with viewing angles of 90 degrees. As technology progressed, it became clear that the default skybox had severe disadvantages: it could not be animated, and all objects in it appeared to be infinitely distant. A later approach instead built the distant scenery as actual geometry in a separate part of the map; this constructed skybox was placed in an unreachable location, typically outside the bounds of the playable portion of the level, to prevent players from touching it. In older versions of this technology, such as the one presented in the game Unreal, animation was limited to movements in the sky, such as the movements of clouds. Elements could be changed from level to level, such as the positions of objects or the color of the sky. The skybox in this game would still appear to be infinitely far away, because the skybox, although containing 3D geometry, did not move along with the player. Newer engines, such as the Source engine, continue this idea, allowing the skybox to move along with the player, although at a different speed. Because depth perception relies on the relative movement of objects, making the skybox move more slowly than the level causes it to appear far away. It is also possible, but not required, to include 3D geometry which will surround the playing environment.
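The "skybox remains stationary with respect to the viewer" behaviour can be sketched in a few lines: before the sky cube is drawn, the translation part of the camera's view matrix is discarded so only its rotation applies. The column-major matrix layout and the draw_sky_cube callback below are assumptions for illustration, not any specific engine's API.

    /* Strip the camera translation so the sky cube stays centred on the viewer. */
    #include <string.h>

    void make_sky_view(const float view[16], float sky_view[16])
    {
        memcpy(sky_view, view, sizeof(float) * 16);
        /* column-major layout: elements 12, 13, 14 hold the translation */
        sky_view[12] = 0.0f;
        sky_view[13] = 0.0f;
        sky_view[14] = 0.0f;
    }

    void render_sky(const float view[16], void (*draw_sky_cube)(const float m[16]))
    {
        float sky_view[16];
        make_sky_view(view, sky_view);
        /* drawn first (or with depth writes disabled) so the level renders over it;
           because the cube never translates, it reads as infinitely distant */
        draw_sky_cube(sky_view);
    }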
33.
Doom 3
–
Doom 3 is a science fiction first-person shooter video game developed by id Software and published by Activision. Doom 3 was first released for Microsoft Windows on August 3, 2004. The game was adapted for Linux, as well as being ported by Aspyr Media to Mac OS X, and developer Vicarious Visions ported the game to the Xbox console. British developers Splash Damage also assisted in design for the multiplayer elements of the game. The game is a reboot of id Software's Doom franchise, set in a research facility on Mars where experiments with teleportation are under way; however, the teleportation experiments inadvertently open a gateway to Hell, resulting in a catastrophic invasion by demons. The player, a space marine, must fight their way through the base. Doom 3 features a new game engine, id Tech 4, which has since been licensed out to other developers. The game was a critical and commercial success for id Software; with more than 3.5 million copies sold, it is the most successful game by the developer to date. The game was followed by Resurrection of Evil, an expansion pack developed by Nerve Software and released in April 2005, and a series of novelizations of Doom 3, written by Matthew J. Costello, debuted in February 2008. An expanded and improved BFG Edition was released in late 2012. Doom 3 is an action game played from a first-person perspective. As with previous Doom games, the main objective is to successfully pass through its levels. Doom 3's more story-centered approach, however, means that the player often encounters friendly non-player characters. Enemies come in multiple forms and with different abilities and tactics, but fall into two broad categories of either zombies or demons; the corpses of demons are reduced to ashes after death, leaving no trace of their body behind. The game's levels are fairly linear in nature and incorporate several horror elements, the most prominent of which is darkness. In addition, the levels are regularly strewn with corpses, dismembered body parts and blood, and frequent radio transmissions through the player's communications device also add to the atmosphere by broadcasting certain sounds and messages from non-player characters meant to unsettle the player. Early in the game, during and directly after the event that plunges the base into chaos, the ambient sound is extended to the base itself through such things as hissing pipes, footsteps, and occasional jarringly loud noises from machinery or other sources. Often, ambient sounds can be heard that resemble deep breathing or unexplained voices. Early in the game, the player is provided with a personal data assistant (PDA). PDAs contain security clearance levels, allowing the player to access certain areas that are otherwise locked.
34.
Rage (video game)
–
Rage is a first-person shooter video game developed by id Software. It uses the company's id Tech 5 game engine and was released in October 2011. The game was first shown as a tech demo on June 11, 2007 at the Apple Worldwide Developers Conference and was officially announced on August 2, 2007 at QuakeCon; on the same day, a trailer for the game was released by GameTrailers. Rage was the final game released by id Software under the supervision of founder John Carmack. The game is set in a post-apocalyptic near future, following the impact of the asteroid 99942 Apophis on Earth, and has been described as similar to the movie Mad Max 2 and to video games such as Fallout and Borderlands. Influences on the driving and racing gameplay include games such as MotorStorm, and players can upgrade their cars with racing certificates won from races. Upon its release, the game received praise from critics. The game primarily consists of first-person shooter and driving segments, with the player using their vehicle to explore the world. There are several types of ammunition available for each weapon, to allow the player to further customize their play style; as an example, the crossbow's primary ammunition is metal bolts, but it can also shoot electrified bolts and explosive bolts. There are a variety of events for the player to participate in, including races. Racing events may or may not have opponents, and some of them are armed races while others are not. Players have the ability to augment their cars with various items and upgrades they can gain by completing events. Rage also features some role-playing game elements, including a looting system; players have the option to customize their weapons and vehicles, as well as build a wide assortment of items using collected recipes. The vehicles can be used for more than racing and, like other open-world sandbox games, there are also side missions and a number of other minor exploratory elements. Rage has two multiplayer modes, Road Rage and Legends of the Wasteland. In Road Rage, up to four players compete in a match that takes place in an arena designed to make use of the vehicles; the objective is to collect rally points that appear around the arena while killing one's opponents. Legends of the Wasteland is a series of two-player co-op missions based on stories that are told throughout the single-player campaign, with a total of 9 objectives in this game type. On August 23, 2029, the asteroid 99942 Apophis collides with Earth, destroying human civilization.
35.
Z-buffering
–
In computer graphics, z-buffering, also known as depth buffering, is the management of image depth coordinates in 3D graphics, usually done in hardware, sometimes in software. It is one solution to the visibility problem, which is the problem of deciding which elements of a rendered scene are visible and which are hidden. The painter's algorithm is another solution which, though less efficient, can also handle non-opaque scene elements. When an object is rendered, the depth of each generated pixel is stored in a buffer, which is usually arranged as a two-dimensional array with one element for each screen pixel. If another object of the scene must be rendered in the same pixel, the method compares the two depths and keeps the object closer to the observer; the chosen depth is then saved to the z-buffer, replacing the old one. In the end, the z-buffer allows the method to correctly reproduce the usual depth perception: a closer object hides one farther away. The granularity of a z-buffer has a large influence on the scene quality: a 16-bit z-buffer can produce artifacts when two objects are very close to each other, while a 24-bit or 32-bit z-buffer behaves much better, although the problem cannot be eliminated entirely without additional algorithms. An 8-bit z-buffer is almost never used, since it has too little precision. The Z-buffer is a technology used in almost all contemporary computers, laptops and mobile phones for performing 3D graphics, for example for computer games, and is implemented as hardware in the silicon ICs within these devices. The Z-buffer is also used for producing computer-generated special effects for films. Furthermore, Z-buffer data obtained from rendering a surface from a light's point of view permits the creation of shadows by the shadow mapping technique. Even with small enough granularity, quality problems may arise when precision in the z-buffer's distance values is not spread evenly over distance: nearer values are more precise than values which are farther away. Generally, this is desirable, but sometimes it will cause artifacts to appear as objects become more distant. A variation on z-buffering which results in more evenly distributed precision is called w-buffering. The invention of the concept is most often attributed to Edwin Catmull. On recent PC graphics cards, z-buffer management uses a significant chunk of the available memory bandwidth. In rendering, z-culling is early pixel elimination based on depth; it is a direct consequence of z-buffering, where the depth of each pixel candidate is compared to the depth of existing geometry behind which it might be hidden. When using a z-buffer, a pixel can be culled as soon as its depth is known, and time-consuming pixel shaders will generally not be executed for the culled pixels.
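The core of the technique is a per-pixel comparison, which can be sketched as follows. This assumes a simple software framebuffer in which a smaller z value means closer to the viewer, and in which the z-buffer has been cleared to the far-plane value before each frame so the first fragment written to a pixel always passes the test.

    /* Per-pixel depth test: keep a fragment only if it is nearer than what is stored. */
    typedef struct {
        float        *zbuffer;   /* one depth value per pixel */
        unsigned int *color;     /* one packed RGBA value per pixel */
        int width, height;
    } Framebuffer;

    void write_pixel(Framebuffer *fb, int x, int y, float z, unsigned int rgba)
    {
        if (x < 0 || y < 0 || x >= fb->width || y >= fb->height)
            return;
        int i = y * fb->width + x;
        if (z < fb->zbuffer[i]) { /* new fragment is closer than the stored one */
            fb->zbuffer[i] = z;   /* remember the new, nearer depth */
            fb->color[i]   = rgba;
        }                         /* otherwise the fragment is hidden and discarded */
    }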
36.
Big O notation
–
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others. In computer science, big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows. Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as the order of the function. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o, Ω, ω and Θ, which describe other kinds of bounds on asymptotic growth rates. Big O notation is also used in many other fields to provide similar estimates. Let f and g be two functions defined on some subset of the real numbers. One writes f(x) = O(g(x)) as x approaches infinity if and only if there exist a positive real number M and a real number x0 such that |f(x)| <= M|g(x)| for all x >= x0. In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated, and one writes more simply f(x) = O(g(x)). If f is a sum of several terms, the term with the largest growth rate can be kept and all others omitted; if f is a product of several factors, any constants (factors that do not depend on x) can be omitted. For example, let f(x) = 6x^4 - 2x^3 + 5, and suppose we wish to simplify this function, using O notation, to describe its growth rate as x approaches infinity. This function is the sum of three terms: 6x^4, -2x^3, and 5. Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of x, namely 6x^4. Now one may apply the second rule: 6x^4 is a product of 6 and x^4, in which the first factor does not depend on x. Omitting this factor results in the simplified form x^4. Thus, we say that f(x) is a "big O" of x^4; mathematically, we can write f(x) = O(x^4). One may confirm this calculation using the formal definition: let f(x) = 6x^4 - 2x^3 + 5 and g(x) = x^4. Applying the formal definition from above, the statement that f(x) = O(x^4) is equivalent to its expansion, |f(x)| <= M|x^4| for all x >= x0, for some choice of x0 and M. To prove this, let x0 = 1 and M = 13. Big O notation has two main areas of application: in mathematics, it is used to describe how closely a finite series approximates a given function, and in computer science, it is useful in the analysis of algorithms. In both applications, the function g appearing within the O is typically chosen to be as simple as possible, omitting constant factors and lower order terms.
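The final step of that calculation can be written out explicitly; this is simply the arithmetic behind the stated choice of x0 = 1 and M = 13:

    \[
    |6x^4 - 2x^3 + 5| \;\le\; 6x^4 + 2x^3 + 5 \;\le\; 6x^4 + 2x^4 + 5x^4 \;=\; 13x^4
    \qquad \text{for all } x \ge 1,
    \]

so |f(x)| <= 13|x^4| whenever x >= x0 = 1, which is exactly the inequality required by the formal definition.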
37.
Binary tree
–
In computer science, a binary tree is a tree data structure in which each node has at most two children, which are referred to as the left child and the right child. A recursive definition using just set theory notions is that a binary tree is a triple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set containing the root. Some authors allow the binary tree itself to be the empty set as well. From a graph theory perspective, binary trees as defined here are actually arborescences; a binary tree may thus be also called a bifurcating arborescence, a term which appears in some very old programming books, before the modern computer science terminology prevailed. It is also possible to interpret a binary tree as an undirected, rather than a directed, graph. Some authors use "rooted binary tree" instead of "binary tree" to emphasize the fact that the tree is rooted, but as defined above, a binary tree is always rooted. A binary tree is a special case of an ordered K-ary tree. In computing, binary trees are seldom used solely for their structure; much more typical is to define a labeling function on the nodes. Binary trees labelled this way are used to implement binary search trees and binary heaps. The designation of non-root nodes as left or right child even when there is only one child present matters in some of these applications; in particular, it is significant in binary search trees. In mathematics, what is termed binary tree can vary significantly from author to author: some use the definition commonly used in computer science, but others define it as every non-leaf having exactly two children, and don't necessarily order the children either. Another way of defining a full binary tree is a recursive definition: a full binary tree is either a single vertex, or a graph formed by taking two full binary trees, adding a vertex, and adding an edge directed from the new vertex to the root of each binary tree. This also does not establish the order of children, but does fix a specific root node. To actually define a binary tree in general, we must allow for the possibility that only one of the children may be empty; an artifact, which in some textbooks is called an extended binary tree, is needed for that purpose. Another way of imagining this construction is to use, instead of the empty set, a different type of node, for instance square nodes if the regular ones are circles. A binary tree is a rooted tree that is also an ordered tree in which every node has at most two children. A rooted tree naturally imparts a notion of levels; thus, for every node, a notion of children may be defined as the nodes connected to it a level below. Ordering of these children makes it possible to distinguish a left child from a right child, but this still doesn't distinguish a node with a left but no right child from one with a right but no left child.
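A minimal sketch of the labelled-node representation described above, in C, together with insertion into a binary search tree (smaller keys to the left, larger keys to the right); the empty tree is represented here by a null pointer.

    /* A binary tree node carrying an integer label, used as a binary search tree. */
    #include <stdlib.h>

    typedef struct Node {
        int          key;     /* the label attached to the node */
        struct Node *left;    /* left child, or NULL for the empty tree */
        struct Node *right;   /* right child, or NULL */
    } Node;

    Node *bst_insert(Node *root, int key)
    {
        if (root == NULL) {                       /* empty subtree: create the node here */
            Node *n = malloc(sizeof *n);
            n->key  = key;
            n->left = n->right = NULL;
            return n;
        }
        if (key < root->key)
            root->left = bst_insert(root->left, key);
        else if (key > root->key)
            root->right = bst_insert(root->right, key);
        return root;                              /* duplicates are ignored in this sketch */
    }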
38.
Texture mapping
–
Texture mapping is a method for defining high-frequency detail, surface texture, or color information on a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Edwin Catmull in 1974. Texture mapping originally referred to a method that simply wrapped and mapped pixels from a texture onto a 3D surface. A texture map is an image applied to the surface of a shape or polygon; this may be a bitmap image or a procedural texture. Texture maps may be stored in image file formats and referenced by 3D model formats or material definitions. They may have 1 to 3 dimensions, although 2 dimensions are most common for visible surfaces. For use with modern hardware, texture map data may be stored in swizzled or tiled orderings to improve cache coherency. Rendering APIs typically manage texture map resources as buffers or surfaces. Textures usually contain RGB color data, and sometimes an additional channel for alpha blending, especially for billboards and decal overlay textures; it is possible to use the alpha channel for other purposes such as specularity. Multiple texture maps may be combined for control over specularity, normals, or displacement, and multiple texture images may be combined in texture atlases or array textures to reduce state changes for modern hardware. Modern hardware often supports cube map textures with multiple faces for environment mapping. Textures may be acquired by scanning or digital photography, authored in image manipulation software such as Photoshop, or painted onto 3D surfaces directly in a 3D paint tool such as Mudbox or ZBrush. This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate; this may be done through explicit assignment of vertex attributes, manually edited in a 3D modelling package through UV unwrapping tools. It is also possible to associate a procedural transformation from 3D space to texture space with the material; this might be accomplished via planar projection or, alternatively, cylindrical or spherical mapping, and more complex mappings may consider the distance along a surface to minimize distortion. These coordinates are interpolated across the faces of polygons to sample the texture map during rendering. UV unwrapping tools typically provide a view in texture space for manual editing of texture coordinates, and some rendering techniques such as subsurface scattering may be performed approximately by texture-space operations. Multitexturing is the use of more than one texture at a time on a polygon. For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Microtextures or detail textures are used to add higher-frequency details, and dirt maps may add weathering and variation; modern graphics may use in excess of 10 layers, combined using shaders, for greater fidelity. Bump mapping has become popular in recent video games, as graphics hardware has become powerful enough to accommodate it in real time. The way that samples are calculated from the texels is governed by texture filtering.
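The basic wrap-and-map idea can be sketched as a texel lookup: interpolated (u, v) coordinates in the range [0, 1] select a texel from the image. The structure below is a simplified illustration; real renderers add filtering, mipmapping and perspective correction on top of this.

    /* Nearest-neighbour texel lookup with wrap-around addressing. */
    typedef struct {
        const unsigned int *texels;  /* packed RGBA values, row-major */
        int w, h;
    } Texture;

    unsigned int sample_nearest(const Texture *t, float u, float v)
    {
        /* wrap the coordinates so the texture tiles */
        u -= (int)u; if (u < 0.0f) u += 1.0f;
        v -= (int)v; if (v < 0.0f) v += 1.0f;

        int x = (int)(u * t->w); if (x >= t->w) x = t->w - 1;
        int y = (int)(v * t->h); if (y >= t->h) y = t->h - 1;

        return t->texels[y * t->w + x];
    }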
39.
Image scaling
–
In computer graphics and digital imaging, image scaling refers to the resizing of a digital image. In video technology, the magnification of digital material is known as upscaling or resolution enhancement. When scaling a vector graphic image, the graphic primitives that make up the image can be scaled using geometric transformations with no loss of image quality. When scaling a raster image, a new image with a higher or lower number of pixels must be generated; in the case of decreasing the pixel number this usually results in a quality loss. Image scaling can be interpreted as a form of image resampling or image reconstruction from the view of the Nyquist sampling theorem: when downsampling, the image is reduced to the information that can be carried by the smaller image, and in the case of upsampling, a reconstruction filter takes the place of the anti-aliasing filter. A more sophisticated approach to upscaling treats the problem as an inverse problem, solving the question of generating a plausible image which, when scaled down, would look like the input image. A variety of techniques have been applied for this, including optimization techniques with regularization terms. An image size can be changed in several ways. With nearest-neighbor interpolation, diagonal lines, for example, show a stairway shape. Bilinear interpolation produces smoother results; although this is desirable for continuous-tone images, this algorithm reduces contrast in a way that may be undesirable for line art. Bicubic interpolation yields substantially better results, with only a small increase in computational complexity. Sinc resampling in theory provides the best possible reconstruction for a perfectly bandlimited signal; in practice, the assumptions behind sinc resampling are not completely met by real-world digital images. Lanczos resampling, an approximation to the sinc method, yields better results, and bicubic interpolation can be regarded as a computationally efficient approximation to Lanczos resampling. One weakness of bilinear, bicubic and related algorithms is that they sample a specific number of pixels. The trivial solution to this issue is box sampling, which is to consider the target pixel as a box on the original image and sample all pixels inside the box; this ensures that all input pixels contribute to the output. The major weakness of this algorithm is that it is hard to optimize. Another solution to the downscaling problem of bi-sampling scaling is mipmaps: a mipmap is a prescaled set of downscaled copies of the image. When downscaling, the nearest larger mipmap is used as the origin. This algorithm is fast and easy to optimize.
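A sketch of bilinear interpolation for a single-channel image, blending the four nearest source pixels by the sample point's fractional position between them; bicubic and Lanczos filters follow the same sampling idea with wider, higher-order kernels. The caller is assumed to map each destination pixel to a floating-point coordinate (x, y) in the source image.

    /* Bilinear sample of an 8-bit, single-channel image at a fractional position. */
    unsigned char sample_bilinear(const unsigned char *src, int w, int h,
                                  float x, float y)
    {
        int x0 = (int)x,  y0 = (int)y;
        int x1 = x0 + 1 < w ? x0 + 1 : x0;     /* clamp at the right/bottom edge */
        int y1 = y0 + 1 < h ? y0 + 1 : y0;
        float fx = x - x0, fy = y - y0;        /* fractional position inside the cell */

        float top = src[y0 * w + x0] * (1 - fx) + src[y0 * w + x1] * fx;
        float bot = src[y1 * w + x0] * (1 - fx) + src[y1 * w + x1] * fx;
        return (unsigned char)(top * (1 - fy) + bot * fy + 0.5f);   /* round to nearest */
    }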
40.
Floating-point unit
–
A floating-point unit is a part of a computer system specially designed to carry out operations on floating-point numbers. Typical operations are addition, subtraction, multiplication, division and square root; some systems can also perform various transcendental functions such as exponential or trigonometric calculations, though in most modern processors these are done with software library routines. The unit could be an integrated circuit, an entire circuit board or a cabinet. Where floating-point calculation hardware has not been provided, floating-point calculations are done in software; emulation can be implemented on any of several levels: in the CPU as microcode, as an operating system function, or in user-space code. When only integer functionality is available, the CORDIC floating-point emulation methods are most commonly used. In most modern computer architectures, there is some division of floating-point operations from integer operations. This division varies significantly by architecture; some, like the Intel x86, have dedicated floating-point registers, and in earlier superscalar architectures without general out-of-order execution, floating-point operations were sometimes pipelined separately from integer operations. Since the early 1990s, many microprocessors for desktops and servers have had more than one FPU. The modular architecture of the Bulldozer microarchitecture uses a special FPU named FlexFPU, which uses simultaneous multithreading; each physical integer core, two per module, is single-threaded, in contrast with Intel's Hyperthreading, where two virtual simultaneous threads share the resources of a single physical core. Some floating-point hardware only supports the simplest operations, such as addition and subtraction, but even the most complex floating-point hardware has a finite number of operations it can support; for example, none of them directly support arbitrary-precision arithmetic. When a CPU is executing a program that calls for a floating-point operation not directly supported by the hardware, the operation must be emulated. In systems without any floating-point hardware, the CPU emulates it using a series of simpler fixed-point arithmetic operations that run on the integer arithmetic logic unit. The software that lists the series of operations to emulate floating-point operations is often packaged in a floating-point library. In some cases, FPUs may be specialized and divided between simpler floating-point operations and more complicated operations, like division; in some cases, only the simple operations may be implemented in hardware or microcode, while the more complex operations are implemented as software. In the 1980s, it was common in IBM PC/compatible microcomputers for the FPU to be separate from the CPU, and it would only be purchased if needed to speed up or enable math-intensive programs. The IBM PC, XT, and most compatibles based on the 8088 or 8086 had a socket for the optional 8087 coprocessor, and other companies manufactured co-processors for the Intel x86 series. Coprocessors were also available for the Motorola 68000 family, the 68881 and 68882; these were common in Motorola 68020/68030-based workstations like the Sun 3 series. There are also add-on FPU coprocessor units for microcontrollers and single-board computers, which serve to provide floating-point arithmetic capability. These add-on FPUs are host-processor-independent, possess their own programming requirements and are provided with their own integrated development environments.
41.
Quake engine
–
The Quake engine is the game engine developed by id Software to power their 1996 video game Quake. It featured true 3D real-time rendering and is now licensed under the terms of the GNU General Public License. After release, it immediately forked, as did the level design, and much of the engine remained in Quake II and Quake III Arena. The Quake engine, like the Doom engine, used binary space partitioning to optimise the world rendering; it also used Gouraud shading for moving objects. Historically, the Quake engine has been treated as a separate engine from its successor, id Tech 2, and the codebases for Quake and Quake II were separate GPL releases. The Quake engine was developed from 1995 for the video game Quake, released on June 22, 1996. John Carmack did most of the programming of the engine, with help from Michael Abrash on algorithms. The Quake II engine was based on it. The 3D environment in which the game takes place is referred to as a map. The map editor program uses a number of simple convex 3D geometric objects known as brushes that are sized and rotated to build the environment; the brushes are placed and oriented to create an enclosed, empty, volumetric space. The preprocessor then strips away the back-faces of the individual brushes which are outside the game space, leaving only the few polygons that define the outer perimeter of the enclosed game space. Generally, once a map has been preprocessed it cannot be re-edited in a normal fashion because the original brushes have been cut into small pieces; instead, the original map editor data with the brushes is retained and used to create new versions of the map. It is, however, possible to edit a processed map by opening it in a special vertex editor and editing the raw vertex data. A processed map file can have a lower polygon count than the original unprocessed map. On the 50–75 MHz PCs of the time, it was common for this preprocessing step to take many hours to complete on a map. Quake also incorporated the use of lightmaps and 3D light sources, and id Software's innovation has been used for many 3D games released since, particularly first-person shooters, though id Software switched to a unified lighting and shadowing model for Doom 3. Just because a polygon is not visible does not mean it is excluded from the scene calculations; the Quake engine was optimized specifically to obviate this problem. The engine could determine ahead of time that it did not need to calculate rendering for objects in any space out of the player's view. This effect is noticeable in the game as small tunnels with sharp 90-degree bends leading from one space into another. A binary space partitioning tree is built from the map, reducing the complexity of searching for a polygon to O(log n). Each leaf represents some area of 3D space; the leaves of this binary tree have polygons of the original map associated with them, which are then used for computing each area's visibility.
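The O(log n) point lookup mentioned above can be sketched as a walk down the BSP tree: at each split plane the camera position is classified to one side, and the search descends into that child until a leaf (a convex region of the map) is reached. The node layout below is illustrative, not the engine's actual on-disk format.

    /* Walking a BSP tree to find the leaf containing a given point. */
    typedef struct BspNode {
        int   is_leaf;                 /* 1 for leaves (convex regions), 0 for split nodes */
        int   leaf_id;                 /* valid only when is_leaf is set */
        float nx, ny, nz, dist;        /* splitting plane: n . p = dist */
        struct BspNode *front, *back;  /* children of a split node */
    } BspNode;

    int find_leaf(const BspNode *node, float x, float y, float z)
    {
        while (!node->is_leaf) {
            float side = node->nx * x + node->ny * y + node->nz * z - node->dist;
            node = (side >= 0.0f) ? node->front : node->back;
        }
        return node->leaf_id;   /* each split discards part of the map, hence the logarithmic search */
    }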