GNU General Public License
The GNU General Public License (GPL) is a widely used free software license that guarantees end users the freedom to run, study, share and modify the software. The license, originally written by Richard Stallman of the Free Software Foundation (FSF) for the GNU Project, grants the recipients of a computer program the rights of the Free Software Definition. The GPL is a copyleft license, which means that derivative works can only be distributed under the same license terms. This is in contrast to permissive free software licenses, of which the BSD licenses and the MIT License are widely used examples. The GPL was the first copyleft license for general use, and the GPL family has been one of the most popular software licenses in the free and open-source software domain. Prominent free-software programs licensed under the GPL include the Linux kernel and the GNU Compiler Collection. David A. Wheeler argues that the copyleft provided by the GPL was crucial to the success of Linux-based systems, giving the programmers who contributed to the kernel the assurance that their work would benefit the whole world and remain free, rather than being exploited by software companies that would not have to give anything back to the community.
In 2007, the third version of the license (GPLv3) was released to address perceived problems with the second version that had been identified over its years of use. To keep the license up to date, it includes an optional "any later version" clause, allowing users to choose between the original terms and the terms of newer versions as updated by the FSF; developers can omit the clause when licensing their software. The GPL was originally written by Richard Stallman in 1989 for use with programs released as part of the GNU Project. The original GPL was based on a unification of similar licenses used for early versions of GNU Emacs, the GNU Debugger and the GNU C Compiler. These licenses contained provisions similar to the modern GPL, but each was specific to a single program, rendering them incompatible with one another despite containing essentially the same terms. Stallman's goal was to produce one license that could be used for any project, thus making it possible for many projects to share code. The second version of the license, version 2, was released in 1991. Over the following 15 years, members of the free software community became concerned over problems in the GPLv2 license that could let someone exploit GPL-licensed software in ways contrary to the license's intent.
These problems included tivoization, compatibility issues with licenses such as the Affero General Public License, and patent deals between Microsoft and distributors of free and open-source software, which some viewed as an attempt to use patents as a weapon against the free software community. Version 3 was developed to address these concerns and was released on 29 June 2007. Version 1 of the GNU GPL, released on 25 February 1989, prevented the two main ways that software distributors restricted the freedoms that define free software. The first problem was that distributors might publish binary files only: executable, but not readable or modifiable by humans. To prevent this, GPLv1 stated that anyone copying or distributing copies of any portion of the program must also make the human-readable source code available under the same licensing terms. The second problem was that distributors might add restrictions, either to the license or by combining the software with other software that had other restrictions on distribution.
The union of the two sets of restrictions would then apply to the combined work, adding unacceptable restrictions. To prevent this, GPLv1 stated that modified versions, as a whole, had to be distributed under the terms of GPLv1. Therefore, software distributed under the terms of GPLv1 could be combined with software under more permissive terms, as this would not change the terms under which the whole could be distributed. However, software distributed under GPLv1 could not be combined with software distributed under a more restrictive license, as this would conflict with the requirement that the whole be distributable under the terms of GPLv1. According to Richard Stallman, the major change in GPLv2 was the "Liberty or Death" clause, as he calls it, Section 7. The section says that licensees may distribute a GPL-covered work only if they can satisfy all of the license's obligations, despite any other legal obligations they might have; in other words, the obligations of the license may not be severed due to conflicting obligations.
This provision is intended to discourage any party from using a patent infringement claim or other litigation to impair users' freedom under the license. By 1990, it was becoming apparent that a less restrictive license would be strategically useful for the C library and for software libraries that did the job of existing proprietary ones; when GPLv2 was released in 1991, a second license, the GNU Library General Public License, was therefore introduced alongside it and numbered version 2 to show that the two were complementary. The version numbers diverged in 1999 when version 2.1 of the LGPL was released, renaming it the GNU Lesser General Public License to reflect its place in the GNU philosophy. Most users of the license state "GPLv2 or any later version", which allows upgrading to GPLv3. In late 2005, the Free Software Foundation announced work on version 3 of the GPL. On 16 January 2006, the first "discussion draft" of GPLv3 was published and the public consultation began; the consultation was originally planned for nine to fifteen months.
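In practice, the "GPLv2 or any later version" choice is expressed in the short license notice that developers place at the top of each source file. The following Java file header is an abridged paraphrase of the standard notice recommended by the FSF; the project name, author and year are placeholders, not taken from any real project.

```java
/*
 * ExampleProgram - hypothetical program used only to illustrate the notice.
 * Copyright (C) <year>  <name of author>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.   <-- the optional "any later version" clause
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */
public class ExampleProgram {
    // Program code follows; omitting the "or any later version" wording
    // would pin the work to GPLv2 only.
}
```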
Silicon Graphics
Silicon Graphics, Inc. (SGI) was an American high-performance computing manufacturer, producing computer hardware and software. Founded in Mountain View, California in November 1981 by Jim Clark, its initial market was 3D graphics computer workstations, but its products and market positions developed over time. Early systems were based on the Geometry Engine that Clark and Marc Hannah had developed at Stanford University, and were derived from Clark's broader background in computer graphics. The Geometry Engine was the first very-large-scale integration (VLSI) implementation of a geometry pipeline, specialized hardware that accelerated the "inner-loop" geometric computations needed to display three-dimensional images. For much of its history, the company focused on 3D imaging and was a major supplier of both hardware and software in this market. Silicon Graphics reincorporated as a Delaware corporation in January 1990. Through the mid to late 1990s, the improving performance of commodity Wintel machines began to erode SGI's stronghold in the 3D market.
The porting of Maya to other platforms was a major event in this process. SGI made several attempts to address this, including a disastrous move from its existing MIPS platforms to the Intel Itanium, as well as introducing its own Linux-based, Intel IA-32 based workstations and servers, which failed in the market. In the mid-2000s the company repositioned itself as a supercomputer vendor, a move that also failed. On April 1, 2009, SGI filed for Chapter 11 bankruptcy protection and announced that it would sell all of its assets to Rackable Systems, a deal finalized on May 11, 2009, with Rackable assuming the name Silicon Graphics International. The remains of Silicon Graphics, Inc. became Graphics Properties Holdings, Inc. James H. Clark left his position as an electrical engineering associate professor at Stanford University to found SGI in 1982 along with a group of seven graduate students and research staff from Stanford: Kurt Akeley, David J. Brown, Tom Davis, Rocky Rhodes, Marc Hannah, Herb Kuta and Mark Grossman.
Ed McCracken was CEO of Silicon Graphics from 1984 to 1997. During those years, SGI grew from annual revenues of $5.4 million to $3.7 billion. The addition of 3D graphics capabilities to PCs, and the ability of clusters of Linux- and BSD-based PCs to take on many of the tasks of larger SGI servers, ate into SGI's core markets. The porting of Maya to Linux, Mac OS X and Microsoft Windows further eroded the low end of SGI's product line. In response to the challenges faced in the marketplace and a falling share price, Ed McCracken was fired and SGI brought in Richard Belluzzo to replace him. Under Belluzzo's leadership a number of initiatives were taken which are considered to have accelerated the corporate decline. One such initiative was selling workstations running Windows NT, called Visual Workstations, instead of only ones that ran IRIX, the company's version of UNIX; this put the company in more direct competition with the likes of Dell, making it more difficult to justify a price premium. The product line was abandoned a few years later.
SGI's premature announcement of its migration from MIPS to Itanium and its abortive ventures into IA-32 architecture systems damaged SGI's credibility in the market. In 1999, in an attempt to clarify its market position as more than a graphics company, Silicon Graphics Inc. changed its corporate identity to "SGI", although its legal name was unchanged. At the same time, SGI announced a new logo consisting of only the letters "sgi" in a proprietary font called "SGI", created by branding and design consulting firm Landor Associates in collaboration with designer Joe Stitzlein. SGI continued to use the "Silicon Graphics" name for its workstation product line and re-adopted the cube logo for some workstation models. In November 2005, SGI announced that it had been delisted from the New York Stock Exchange because its common stock had fallen below the minimum share price for listing on the exchange. SGI's market capitalization dwindled from a peak of over seven billion dollars in 1995 to just $120 million at the time of delisting.
In February 2006, SGI acknowledged that its financial difficulties were continuing. In mid-2005, SGI had hired Alix Partners to advise it on returning to profitability and had received a new line of credit. SGI announced it was postponing its scheduled annual December stockholders meeting until March 2006, and proposed a reverse stock split to deal with the de-listing from the New York Stock Exchange. In January 2006, SGI hired Dennis McKenna as its new chairman of the board of directors; McKenna succeeded Robert Bishop. On May 8, 2006, SGI announced that it had filed for Chapter 11 bankruptcy protection for itself and its U.S. subsidiaries as part of a plan to reduce debt by $250 million. Two days later, the U.S. Bankruptcy Court approved its first-day motions and its use of a $70 million financing facility provided by a group of its bondholders; foreign subsidiaries were unaffected. On September 6, 2006, SGI announced the end of development for the MIPS/IRIX line and the IRIX operating system. Production would end on December 29, 2006, and the last orders would be fulfilled by March 2007.
Support for these products would end after December 2013. SGI emerged from bankruptcy protection on October 17, 2006. Its stock symbol at that point, SGID.pk, was canceled, and new stock was issued on the NASDAQ exchange under the symbol SGIC. This new stock was distributed to the company's creditors rather than to the holders of the canceled SGID common stock.
3D computer graphics
3D computer graphics, or three-dimensional computer graphics, are graphics that use a three-dimensional representation of geometric data, stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for later viewing or displayed in real time. 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, 2D applications may use 3D techniques to achieve effects such as lighting, and 3D applications may use 2D rendering techniques. 3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences: a 3D model is the mathematical representation of any three-dimensional object, and a model is not technically a graphic until it is displayed. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations.
With 3D printing, 3D models are rendered into a 3D physical representation of the model, with limitations on how accurately the physical rendering can match the virtual model. William Fetter was credited with coining the term computer graphics in 1961 to describe his work at Boeing. One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and a hand that had appeared in the 1972 experimental short A Computer Animated Hand, created by University of Utah students Edwin Catmull and Fred Parke. 3D computer graphics software began appearing for home computers in the late 1970s. The earliest known example is 3D Art Graphics, a set of 3D computer graphics effects written by Kazumasa Mitazawa and released in June 1978 for the Apple II. 3D computer graphics creation falls into three basic phases: 3D modeling, the process of forming a computer model of an object's shape; layout and animation, the placement and movement of objects within a scene; and 3D rendering, the computer calculations that, based on light placement, surface types and other qualities, generate the image. Modeling describes the process of forming the shape of an object.
The two most common sources of 3D models are those that an artist or engineer originates on the computer with some kind of 3D modeling tool, and models scanned into a computer from real-world objects. Models can also be produced procedurally or via physical simulation. A 3D model is formed from points called vertices that define its shape and form polygons. A polygon is an area formed from at least three vertices. A polygon of n points is an n-gon. The overall integrity of the model and its suitability for use in animation depend on the structure of the polygons. Materials and textures are properties that the render engine uses to render the model: in an unbiased render engine such as Blender Cycles, materials tell the engine how to treat light when it hits the surface, while textures give the material color using a color or albedo map, add surface features using a bump or normal map, or deform the model itself using a displacement map.
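As a minimal illustration of the vertex-and-polygon structure described above, the sketch below stores a model as a flat array of vertex positions plus triangle indices into that array. The class and field names are hypothetical and not taken from any particular modeling package.

```java
// Minimal sketch of an indexed triangle mesh: vertices plus triangles that
// reference them by index. Names are illustrative only.
public class TriangleMesh {
    // Vertex positions, packed as x, y, z triples.
    final float[] positions;
    // Each consecutive group of three indices selects the vertices of one triangle.
    final int[] triangleIndices;

    public TriangleMesh(float[] positions, int[] triangleIndices) {
        this.positions = positions;
        this.triangleIndices = triangleIndices;
    }

    public int vertexCount()   { return positions.length / 3; }
    public int triangleCount() { return triangleIndices.length / 3; }

    public static void main(String[] args) {
        // A unit square in the z = 0 plane, built from four vertices and two triangles
        // (an n-gon such as this quad is commonly stored as n - 2 triangles).
        TriangleMesh quad = new TriangleMesh(
            new float[] { 0, 0, 0,   1, 0, 0,   1, 1, 0,   0, 1, 0 },
            new int[]   { 0, 1, 2,   0, 2, 3 });
        System.out.println(quad.vertexCount() + " vertices, " + quad.triangleCount() + " triangles");
    }
}
```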
Before rendering into an image, objects must be laid out in a scene. This defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object, that is, how it moves and deforms over time; these techniques are often used in combination. As with animation, physical simulation specifies motion. Rendering converts a model into an image either by simulating light transport to get photo-realistic images, or by applying an art style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is performed using 3D computer graphics software or a 3D graphics API. Altering the scene into a suitable form for rendering involves 3D projection, which displays a three-dimensional image in two dimensions. Although 3D modeling and CAD software may perform 3D rendering as well, exclusive 3D rendering software exists. 3D computer graphics software produces computer-generated imagery through 3D modeling and 3D rendering, or produces 3D models for analytic and industrial purposes. 3D modeling software is a class of 3D computer graphics software used to produce 3D models. Individual programs of this class are called modeling applications or modelers.
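The 3D projection step mentioned above can be illustrated with a simple perspective divide: a point in camera space is mapped to the image plane by scaling its x and y coordinates by the focal length divided by its depth. This is a simplified sketch (no clipping, no viewport transform), and the method names are hypothetical.

```java
// Simplified perspective projection of a camera-space point onto a 2D image plane.
// Assumes the camera looks down the +z axis and the point lies in front of it (z > 0).
public final class Project {
    public static double[] perspectiveProject(double x, double y, double z, double focalLength) {
        if (z <= 0) {
            throw new IllegalArgumentException("point must lie in front of the camera");
        }
        double u = focalLength * x / z;   // image-plane x
        double v = focalLength * y / z;   // image-plane y
        return new double[] { u, v };
    }

    public static void main(String[] args) {
        // The same offset appears smaller when the point is farther away.
        double[] near = perspectiveProject(1.0, 1.0, 2.0, 1.0);
        double[] far  = perspectiveProject(1.0, 1.0, 8.0, 1.0);
        System.out.printf("near: (%.3f, %.3f), far: (%.3f, %.3f)%n", near[0], near[1], far[0], far[1]);
    }
}
```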
3D modelers allow users to alter models via their 3D mesh. Users can add, subtract and otherwise change the mesh to their liking. Models can be viewed from a variety of angles simultaneously, can be rotated, and the view can be zoomed in and out. 3D modelers can export their models to files, which can then be imported into other applications as long as the metadata are compatible. Many modelers allow importers and exporters to be plugged in, so they can read and write data in the native formats of other applications. Most 3D modelers contain a number of related features, such as ray tracers and other rendering alternatives and texture-mapping facilities; some contain features that support or allow animation of models, and some may be able to generate full-motion video of a series of rendered scenes. Computer-aided design (CAD) software may employ the same fundamental 3D modeling techniques that 3D modeling software uses, but their goals differ: CAD tools are used in computer-aided engineering and computer-aided manufacturing.
Head-mounted display
A head-mounted display or helmet-mounted display, both abbreviated HMD, is a display device, worn on the head or as part of a helmet, that has a small display optic in front of one or each eye. An HMD has many uses, including in gaming, aviation and medicine. A head-mounted display is the primary component of virtual reality headsets. A related device is the optical head-mounted display, a wearable display that can reflect projected images while allowing the user to see through it. A typical HMD has one or two small displays, with lenses and semi-transparent mirrors embedded in eyeglasses, a visor, or a helmet. The display units are miniaturised and may include cathode ray tubes, liquid crystal displays, liquid crystal on silicon, or organic light-emitting diodes. Some vendors employ multiple micro-displays to increase the total field of view. HMDs differ in whether they can display only computer-generated imagery (CGI), only live imagery from the physical world, or a combination of the two. Most HMDs can display only a computer-generated image, sometimes referred to as a virtual image.
Some HMDs allow CGI to be superimposed on a real-world view. This is sometimes referred to as augmented or mixed reality. Combining the real-world view with CGI can be done by projecting the CGI through a partially reflective mirror while viewing the real world directly; this method is called optical see-through. It can also be done electronically by accepting video from a camera and mixing it with CGI; this method is called video see-through. An optical head-mounted display uses an optical mixer made of partly silvered mirrors; it can reflect artificial images, let real images cross the lens, and let the user look through it. Various methods have existed for see-through HMDs, most of which can be summarized into two main families based on curved mirrors or waveguides. Curved mirrors have been used by Laster Technologies and by Vuzix in their Star 1200 product. Various waveguide methods have existed for years; these include diffraction optics, holographic optics, polarized optics and reflective optics.
Augmented reality systems expert Karl Guttag compared the optics of diffractive waveguides against the competing technology, reflective waveguides. Major HMD applications include military and civilian-commercial uses. In 1962, Hughes Aircraft Company revealed the Electrocular, a compact CRT head-mounted monocular display that reflected a TV signal into a transparent eyepiece. Ruggedized HMDs are being integrated into the cockpits of modern helicopters and fighter aircraft; these are fully integrated with the pilot's flying helmet and may include protective visors, night vision devices and displays of other symbology. Military personnel and firefighters use HMDs to display tactical information such as maps or thermal imaging data while viewing a real scene. Recent applications have included the use of HMDs by paratroopers. In 2005, the Liteye HMD was introduced for ground combat troops as a rugged, waterproof, lightweight display that clips into a standard US PVS-14 military helmet mount. The self-contained color monocular organic light-emitting diode display replaces the NVG tube and connects to a mobile computing device.
The LE has see-through ability and can be used as a standard HMD or for augmented reality applications. The design is optimized to provide high-definition data under all lighting conditions, in covered or see-through modes of operation. The LE has low power consumption, operating on four AA batteries for 35 hours or receiving power via a standard Universal Serial Bus connection. The Defense Advanced Research Projects Agency continues to fund research in augmented reality HMDs as part of the Persistent Close Air Support (PCAS) program. Vuzix is working on a system for PCAS that will use holographic waveguides to produce see-through augmented reality glasses that are only a few millimeters thick. Engineers and scientists use HMDs to provide stereoscopic views of computer-aided design schematics. Virtual reality, when applied to engineering and design, is a key factor in the integration of the human into the design process. By enabling engineers to interact with their designs at full life-size scale, products can be validated for issues that might not have been visible until physical prototyping.
The use of HMDs for VR is seen as supplemental to the conventional use of CAVEs for VR simulation. HMDs are predominantly used for single-person interaction with the design, while CAVEs allow for more collaborative virtual reality sessions. Head-mounted display systems are also used in the maintenance of complex systems, as they can give a technician simulated x-ray vision by combining computer graphics such as system diagrams and imagery with the technician's natural vision. There are applications in surgery, wherein radiographic data are combined with the surgeon's natural view of the operation, and in anesthesia, where the patient's vital signs are within the anesthesiologist's field of view at all times. Research universities use HMDs to conduct studies related to vision, balance and neuroscience. As of 2010, the use of predictive visual tracking measurement to identify mild traumatic brain injury was being studied. In visual tracking tests, an HMD unit with eye tracking ability shows an object moving in a regular pattern.
People without brain injury are able to track the moving object with smooth pursuit eye movements and correct trajectory.
Real-time computer graphics
Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface to real-time image analysis, but is most often used in reference to interactive 3D computer graphics rendered with a graphics processing unit (GPU). One example of this concept is a video game that renders changing 3D environments to produce an illusion of motion. Computers have been capable of generating 2D images such as simple lines and polygons in real time since their invention; however, rendering detailed 3D objects is a daunting task for traditional Von Neumann architecture-based systems. An early workaround to this problem was the use of sprites, 2D images that could imitate 3D graphics. Different techniques for rendering now exist, such as rasterization. Using these techniques and advanced hardware, computers can now render images quickly enough to create the illusion of motion while accepting user input.
This means that the user can respond to rendered images in real time, producing an interactive experience. The goal of real-time computer graphics is to generate computer-generated images, or frames, while meeting certain desired metrics; one such metric is the number of frames generated in a given second. Real-time computer graphics systems differ from traditional rendering systems in that non-real-time graphics typically rely on ray tracing. In this process, millions or billions of rays are traced from the camera into the world for detailed rendering; this expensive operation can take hours or days to render a single frame. Real-time graphics systems, by contrast, must render each image in less than 1/30th of a second, so ray tracing is far too slow for them. Instead they rely on rasterization: every object is decomposed into individual primitives, usually triangles; each triangle is positioned and scaled on the screen, and rasterizer hardware generates pixels inside each triangle. These triangles are then decomposed into atomic units called fragments that are suitable for display on a screen.
The fragments are drawn on the screen using a color that is computed in several steps. For example, a texture can be used to "paint" a triangle based on a stored image, and shadow mapping can alter a triangle's colors based on line-of-sight to light sources. Real-time graphics optimizes image quality subject to hardware constraints; GPUs and other hardware advances have steadily increased the achievable image quality. GPUs are capable of handling millions of triangles per frame, and current DirectX 11 / OpenGL 4.x class hardware is capable of generating complex effects, such as shadow volumes, motion blurring and triangle generation, in real time. The advancement of real-time graphics is evidenced by the progressive narrowing of the gap between actual gameplay graphics and the pre-rendered cutscenes traditionally found in video games: cutscenes are now often rendered in real time and may be interactive. Although the gap in quality between real-time graphics and traditional off-line graphics is narrowing, offline rendering remains much more accurate. Real-time graphics are employed when interactivity is crucial.
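The 1/30-second budget mentioned above translates directly into the structure of a real-time render loop: read input, update the scene, draw, and only then present the frame, repeating roughly 30 or more times per second. The sketch below uses hypothetical processInput/updateScene/renderScene methods to show the timing; it is not tied to any specific graphics API.

```java
// Skeleton of a real-time render loop with a fixed frame budget (~33 ms for 30 fps).
// processInput, updateScene and renderScene are hypothetical placeholders.
public abstract class RenderLoop {
    private static final long FRAME_BUDGET_NANOS = 1_000_000_000L / 30;   // 1/30 s per frame
    private volatile boolean running = true;

    protected abstract void processInput();                 // react to the user immediately
    protected abstract void updateScene(double dtSeconds);  // physics, animation, game logic
    protected abstract void renderScene();                  // rasterize the current scene

    public void run() throws InterruptedException {
        long previous = System.nanoTime();
        while (running) {
            long frameStart = System.nanoTime();
            double dt = (frameStart - previous) / 1e9;       // seconds since the last frame
            previous = frameStart;

            processInput();
            updateScene(dt);
            renderScene();

            // Sleep away whatever is left of the frame budget so we stay near 30 fps.
            long remaining = FRAME_BUDGET_NANOS - (System.nanoTime() - frameStart);
            if (remaining > 0) {
                Thread.sleep(remaining / 1_000_000L, (int) (remaining % 1_000_000L));
            }
        }
    }

    public void stop() { running = false; }
}
```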
When graphics are pre-rendered for films, the director has complete control of what has to be drawn on each frame, which can sometimes involve lengthy decision-making; teams of people are involved in making these decisions. In real-time computer graphics, by contrast, the user operates an input device to influence what is about to be drawn on the display. For example, when the user wants to move a character on the screen, the system updates the character's position before drawing the next frame. The display's response time is far slower than the input device's; this is acceptable because of the immense difference between the response time of a human being's motion and the perceptive speed of the human visual system. This difference has other effects too: because input devices must be fast to keep up with human motion response, advancements in input devices take much longer to achieve than comparable advancements in display devices. Another important factor controlling real-time computer graphics is the combination of physics and animation.
These techniques dictate what is to be drawn on the screen, especially where to draw objects in the scene, and they help realistically imitate real-world behavior, adding to the degree of realism of the computer graphics. Real-time previewing in graphics software, for example when adjusting lighting effects, can increase work speed; some parameter adjustments in fractal-generating software may likewise be made while viewing the changes to the image in real time. The graphics rendering pipeline is the foundation of real-time graphics. Its main function is to render a two-dimensional image in relation to a virtual camera, three-dimensional objects, light sources, lighting models and more. The architecture of the real-time rendering pipeline can be divided into conceptual stages: application, geometry and rasterization. The application stage is responsible for generating "scenes", or 3D settings, that are drawn to a 2D display. This stage is implemented in software and may perform processing such as collision detection, speed-up techniques and force feedback, in addition to handling user input.
Collision detection is an example of an operation that is typically performed in the application stage.
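The conceptual stages described above (application, geometry, rasterization) can be pictured as three successive passes over the frame's data. The sketch below wires hypothetical stage objects together in that order; all the type and method names are placeholders meant only to show the data flow, not a real renderer.

```java
import java.util.List;

// Conceptual data flow of the real-time rendering pipeline.
// All types and methods here are hypothetical placeholders.
interface ApplicationStage   { List<Object> buildScene(); }                              // input, collision detection, game logic
interface GeometryStage      { List<Object> transformAndProject(List<Object> scene); }   // per-vertex work, 3D projection
interface RasterizationStage { void drawFragments(List<Object> primitives); }            // per-pixel (fragment) work

final class Pipeline {
    private final ApplicationStage app;
    private final GeometryStage geometry;
    private final RasterizationStage raster;

    Pipeline(ApplicationStage app, GeometryStage geometry, RasterizationStage raster) {
        this.app = app;
        this.geometry = geometry;
        this.raster = raster;
    }

    // One frame = one trip through all three stages, in order.
    void renderFrame() {
        List<Object> scene = app.buildScene();
        List<Object> screenSpacePrimitives = geometry.transformAndProject(scene);
        raster.drawFragments(screenSpacePrimitives);
    }
}
```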
Sun Microsystems
Sun Microsystems, Inc. was an American company that sold computers, computer components and information technology services, and that created the Java programming language, the Solaris operating system, ZFS, the Network File System (NFS) and the SPARC processor architecture. Sun contributed to the evolution of several key computing technologies, among them Unix, RISC processors, thin client computing and virtualized computing. Sun was founded on February 24, 1982. At its height, the Sun headquarters were in Santa Clara, California, on the former west campus of the Agnews Developmental Center. On April 20, 2009, it was announced that Oracle Corporation would acquire Sun; the deal was completed on January 27, 2010. Sun products included computer servers and workstations built on its own RISC-based SPARC processor architecture, as well as on x86-based AMD Opteron and Intel Xeon processors. Sun also developed its own storage systems and a suite of software products, including the Solaris operating system, developer tools, Web infrastructure software and identity management applications. Other technologies included the Java platform and NFS.
In general, Sun was a proponent of open systems, particularly Unix. It was also a major contributor to open-source software, as evidenced by its $1 billion purchase, in 2008, of MySQL, an open-source relational database management system. At various times, Sun had manufacturing facilities in several locations worldwide, including Newark, California; however, by the time the company was acquired by Oracle, it had outsourced most manufacturing responsibilities. The initial design for what became Sun's first Unix workstation, the Sun-1, was conceived by Andy Bechtolsheim when he was a graduate student at Stanford University in Palo Alto, California. Bechtolsheim designed the SUN workstation for the Stanford University Network communications project as a personal CAD workstation. It was designed around the Motorola 68000 processor with an advanced memory management unit to support the Unix operating system with virtual memory. He built the first ones from spare parts obtained from Stanford's Department of Computer Science and from Silicon Valley supply houses.
On February 24, 1982, Vinod Khosla, Andy Bechtolsheim and Scott McNealy, all Stanford graduate students, founded Sun Microsystems. Bill Joy of Berkeley, a primary developer of the Berkeley Software Distribution, joined soon after and is counted as one of the original founders. The Sun name is derived from the initials of the Stanford University Network. Sun was profitable from its first quarter in July 1982. By 1983 Sun was known for producing 68k-based systems with high-quality graphics that were the only computers other than DEC's VAX to run 4.2BSD. It licensed the computer design to other manufacturers, which used it to build Multibus-based systems running Unix from UniSoft. Sun's initial public offering was in 1986 under the stock symbol SUNW, for Sun Workstations; the symbol was changed in 2007 to JAVA. Sun's logo, which features four interleaved copies of the word sun in the form of a rotationally symmetric ambigram, was designed by professor Vaughan Pratt of Stanford. The initial version of the logo was orange and had the sides oriented horizontally and vertically, but it was subsequently rotated to stand on one corner and re-colored purple, and later blue.
During the dot-com bubble, Sun began making much more money, and its shares rose dramatically. It also began spending much more, hiring workers and building itself out; some of this was because of genuine demand, but much was from web start-up companies anticipating business that would never happen. In 2000, the bubble burst. Sales in Sun's important hardware division went into free-fall as customers closed shop and auctioned off high-end servers. Several quarters of steep losses led to executive departures, rounds of layoffs and other cost cutting. In December 2001, the stock fell to its 1998, pre-bubble level of about $100, but it kept falling, faster than many other tech companies; a year later it had dipped below $10 but later bounced back to $20. In mid-2004, Sun closed its Newark, California factory and consolidated all manufacturing in Hillsboro, Oregon. In 2006, the rest of the Newark campus was put on the market. In 2004, Sun also canceled two major processor projects which had emphasized high instruction-level parallelism and operating frequency.
Instead, the company chose to concentrate on processors optimized for multi-threading and multiprocessing, such as the UltraSPARC T1 processor. The company also announced a collaboration with Fujitsu to use the Japanese company's processor chips in mid-range and high-end Sun servers; these servers were announced on April 17, 2007, as the M-Series, part of the SPARC Enterprise series. In February 2005, Sun announced the Sun Grid, a grid computing deployment on which it offered utility computing services priced at US$1 per CPU/hour for processing and US$1 per GB/month for storage. This offering built upon an existing 3,000-CPU server farm used for internal R&D for over 10 years, which Sun marketed as being able to achieve 97% utilization. In August 2005, the first commercial use of this grid was announced, for financial risk simulations, which was later launched as Sun's first software-as-a-service product. In January 2005, Sun reported a net profit of $19 million for its fiscal 2005 second quarter, the first quarterly profit in three years.
This was followed by a net loss of $9 million on a GAAP basis for the third quarter of fiscal 2005, as reported on April 14, 2005. In January 2007, Sun reported a net GAAP profit of $126 million.
jMonkeyEngine
jMonkeyEngine is a game engine made for modern 3D development; it uses shader technology extensively. 3D games can be written for both desktop and mobile devices using this engine. jMonkeyEngine uses LWJGL as its default renderer, and OpenGL 2 through OpenGL 4 is supported. jMonkeyEngine is a community-centric open-source project released under the New BSD license, and it is used by educational institutions. The default jMonkeyEngine 3 download comes integrated with an advanced SDK. By itself, jMonkeyEngine is a collection of libraries; coupled with an IDE like the official jMonkeyEngine 3 SDK, it becomes a higher-level game development environment with multiple graphical components. The SDK is based on the NetBeans Platform. Alongside the default NetBeans update centers, the SDK includes its own plugin repository and a selection between stable point releases or nightly updates. Since March 5, 2016, the SDK is no longer supported by the core team, but it is still being maintained by the community. Note: the "jMonkeyPlatform" and the "jMonkeyEngine 3 SDK" are the same thing.
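To make the "collection of libraries" point concrete, the following is a minimal sketch of the typical jMonkeyEngine 3 starter application, assuming a jME 3.x classpath: the application subclasses SimpleApplication, builds the scene graph in simpleInitApp(), and attaches geometry to rootNode so the engine renders it each frame.

```java
import com.jme3.app.SimpleApplication;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Box;

// Minimal jME3 application sketch: a single colored box in an empty scene.
public class HelloJme3 extends SimpleApplication {

    public static void main(String[] args) {
        new HelloJme3().start();   // opens a window and starts the render loop
    }

    @Override
    public void simpleInitApp() {
        Box box = new Box(1, 1, 1);                        // 2x2x2 box mesh
        Geometry geom = new Geometry("Box", box);          // scene-graph node holding the mesh
        Material mat = new Material(assetManager,
                "Common/MatDefs/Misc/Unshaded.j3md");      // unshaded material definition shipped with jME3
        mat.setColor("Color", ColorRGBA.Blue);
        geom.setMaterial(mat);
        rootNode.attachChild(geom);                        // anything under rootNode gets rendered
    }
}
```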
jMonkeyEngine was built to fill the lack of full-featured graphics engines written in Java. The project has a distinct two-part story, as the current core development team includes none of the original creators. Versions 0.1 to 2.0 of jMonkeyEngine mark the time from when the project was first established in 2003 until the last 2.0 version was released in 2008. When the core developers of that time discontinued work on the project through the end of 2007 and the beginning of 2008, the 2.0 version had not yet been made stable. Regardless, the codebase was adopted for commercial use, and the community supported the 2.0 version more than any other. 2003: Initial work on jMonkeyEngine was begun by Mark Powell as a side project to see if a full-featured graphics API could be written in Java. Much of the early work on the API was inspired by David Eberly's C++ book 3D Game Engine Design. January 2004: Mark was joined by Joshua Slack, and together over the following two years, with the help of other community contributors, a commercially viable API was developed.
August 15, 2008: Joshua Slack announced that he was stepping back from active development of jMonkeyEngine. After the departure of jME's core developers in late 2008, the codebase remained stagnant for several months, although the community kept committing patches. Version 3.0 started as nothing more than an experiment. The first preview release of jME3 in early 2009 created a lot of buzz in the community, and the majority agreed that this new branch would be the official successor to jME 2.0. From there on, all the formalities were sorted out between the new and old developers; the jME core team is now composed of eight committed individuals. April 1, 2009: Kirill Vainer ("shadowislord") started a new branch in the official jMonkeyEngine repository and committed the first publicly available code for jMonkeyEngine 3.0. Soon after, the branch was renamed to reflect its "test" status. June 24, 2009: The project saw a new beginning in the official jMonkeyEngine 3.0 branch, designed and developed by Kirill Vainer. Management responsibilities were picked up by Erlend Sogge Heggen, shortly accompanied by Skye Book.
May 17, 2010: The first alpha of jMonkeyEngine 3 was released. The same date marked the first alpha release of the jMonkeyEngine SDK, only a few months after the first planning stages; the "jMonkeyEngine SDK" has since become the default product download recommended to all jME3 developers. September 7, 2010: The jMonkeyEngine website was re-designed. A new domain, jmonkeyengine.org, is dedicated to all community activities, while the old jmonkeyengine.com was re-purposed as a product promotion site. October 22, 2011: A stable update track for the jMonkeyEngine SDK was introduced as an alternative to downloading bleeding-edge nightly builds. February 15, 2014: jMonkeyEngine 3 SDK Stable was released; in spite of being technically stable for a long time, the official 3.0 SDK release was delayed until February 2014. Notable projects built with jMonkeyEngine include: Nord, a browser-based MMO on Facebook created by Skygoblin; Grappling Hook, a first-person action and puzzle game made by a single independent developer; Drohtin, a real-time strategy game with single-player and multiplayer modes in which the player builds a village and leads its citizens; Chaos, a 3D fantasy cooperative RPG by 4Realms; Skullstone, a retro-styled single-player dungeon crawler with modern 3D graphics created by Black Torch Games; Spoxel, a 2D action-adventure sandbox video game created by Epaga Games; Lightspeed Frontier, a space sandbox game with RPG and exploration elements created by Crowdwork Studios; and Subspace Infinity, a 2D top-down space-fighter MMO. Recognition for the engine includes a JavaOne 2008 presentation and a finalist placing in the PacktPub Open Source Graphics Software Award 2010. Ardor3D began life on September 23, 2008 as a fork of jMonkeyEngine by Joshua Slack and Rikard Herlitz, due to what they perceived as irreconcilable issues with naming, provenance and community structure in that engine, as well as a desire to back a powerful open-source Java engine with organized corporate support. The first public release came on January 2, 2009, with new releases following every few months thereafter. In 2011, Ardor3D was used in the Mars Curiosity mission by both NASA Ames and NASA JPL, for visualization.