Sun Microsystems
Sun Microsystems, Inc. was an American company that sold computers, computer components, and information technology services, and that created the Java programming language, the Solaris operating system, ZFS, the Network File System (NFS), and the SPARC processor architecture. Sun contributed to the evolution of several key computing technologies, among them Unix, RISC processors, thin client computing, and virtualized computing. Sun was founded on February 24, 1982. At its height, the Sun headquarters were in Santa Clara, California, on the former west campus of the Agnews Developmental Center. On April 20, 2009, it was announced that Oracle Corporation would acquire Sun; the deal was completed on January 27, 2010. Sun products included computer servers and workstations built on its own RISC-based SPARC processor architecture, as well as on x86-based AMD Opteron and Intel Xeon processors. Sun also developed its own storage systems and a suite of software products, including the Solaris operating system, developer tools, Web infrastructure software, and identity management applications. Other technologies included the Java platform and NFS.
In general, Sun was a proponent of open systems, particularly Unix. It was a major contributor to open-source software, as evidenced by its $1 billion purchase, in 2008, of MySQL, an open-source relational database management system. At various times, Sun had manufacturing facilities in several locations worldwide, including Newark, California; however, by the time the company was acquired by Oracle, it had outsourced most manufacturing responsibilities. The initial design for what became Sun's first Unix workstation, the Sun-1, was conceived by Andy Bechtolsheim when he was a graduate student at Stanford University in Palo Alto, California. Bechtolsheim designed the SUN workstation for the Stanford University Network communications project as a personal CAD workstation. It was designed around the Motorola 68000 processor, with an advanced memory management unit to support the Unix operating system with virtual memory. He built the first ones from spare parts obtained from Stanford's Department of Computer Science and Silicon Valley supply houses.
On February 24, 1982, Vinod Khosla, Andy Bechtolsheim, and Scott McNealy, all Stanford graduate students, founded Sun Microsystems. Bill Joy of Berkeley, a primary developer of the Berkeley Software Distribution, joined soon after and is counted as one of the original founders; the Sun name is derived from the initials of the Stanford University Network. Sun was profitable from its first quarter in July 1982. By 1983 Sun was known for producing 68k-based systems with high-quality graphics that were the only computers other than DEC's VAX to run 4.2BSD. It licensed the computer design to other manufacturers, which used it to build Multibus-based systems running Unix from UniSoft. Sun's initial public offering was in 1986, under the stock symbol SUNW, for Sun Workstations; the symbol was changed in 2007 to JAVA. Sun's logo, which features four interleaved copies of the word "sun" in the form of a rotationally symmetric ambigram, was designed by professor Vaughan Pratt of Stanford. The initial version of the logo was orange and had the sides oriented horizontally and vertically, but it was subsequently rotated to stand on one corner and re-colored purple, and later blue.
In the dot-com bubble, Sun began making much more money, and its shares rose dramatically. It also began spending much more, hiring workers and building itself out; some of this was because of genuine demand, but much was from web start-up companies anticipating business that would never happen. In 2000, the bubble burst. Sales in Sun's important hardware division went into free-fall as customers closed shop and auctioned off high-end servers. Several quarters of steep losses led to executive departures, rounds of layoffs, and other cost cutting. In December 2001, the stock fell to the 1998, pre-bubble level of about $100, but it kept falling, faster than many other tech companies. A year later it had dipped below $10, but it later bounced back to $20. In mid-2004, Sun closed its Newark, California, factory and consolidated all manufacturing in Hillsboro, Oregon. In 2006, the rest of the Newark campus was put on the market. In 2004, Sun also canceled two major processor projects, which had emphasized high instruction-level parallelism and operating frequency.
Instead, the company chose to concentrate on processors optimized for multi-threading and multiprocessing, such as the UltraSPARC T1 processor. The company also announced a collaboration with Fujitsu to use the Japanese company's processor chips in mid-range and high-end Sun servers; these servers were announced on April 17, 2007, as the M-Series, part of the SPARC Enterprise series. In February 2005, Sun announced the Sun Grid, a grid computing deployment on which it offered utility computing services priced at US$1 per CPU/hour for processing and US$1 per GB/month for storage; this offering built upon an existing 3,000-CPU server farm used for internal R&D for over 10 years, which Sun marketed as being able to achieve 97% utilization. In August 2005, the first commercial use of this grid was announced for financial risk simulations, later launched as Sun's first software-as-a-service product. In January 2005, Sun reported a net profit of $19 million for its fiscal 2005 second quarter, its first quarterly profit in three years.
This was followed by a net loss of $9 million on a GAAP basis for the third quarter of fiscal 2005, as reported on April 14, 2005. In January 2007, Sun reported a net GAAP profit of $126 million for its fiscal second quarter.
Mainframe computer
Mainframe computers or mainframes are computers used primarily by large organizations for critical applications. They are larger and have more processing power than some other classes of computers, such as minicomputers, servers, and personal computers. The term originally referred to the large cabinets, called "main frames", that housed the central processing unit and main memory of early computers; it was later used to distinguish high-end commercial machines from less powerful units. Most large-scale computer system architectures were established in the 1960s, but they continue to evolve. Mainframe computers are often used as servers. Modern mainframe design is characterized less by raw computational speed and more by:
- redundant internal engineering, resulting in high reliability and security;
- extensive input-output facilities, with the ability to offload to separate engines;
- strict backward compatibility with older software;
- high hardware and computational utilization rates, through virtualization, to support massive throughput; and
- hot-swapping of hardware, such as processors and memory.
Their high stability and reliability enable these machines to run uninterrupted for long periods of time, with mean time between failures measured in decades. Mainframes have high availability, one of the primary reasons for their longevity, since they are typically used in applications where downtime would be costly or catastrophic; the term reliability, availability, and serviceability (RAS) is a defining characteristic of mainframe computers, though proper planning and implementation are required to realize these features. In addition, mainframes are more secure than other computer types: the NIST vulnerabilities database, US-CERT, rates traditional mainframes such as IBM Z, Unisys Dorado, and Unisys Libra as among the most secure, with vulnerabilities in the low single digits as compared with thousands for Windows, UNIX, and Linux. Software upgrades usually require setting up the operating system or portions thereof, and are non-disruptive only when using virtualizing facilities such as IBM z/OS and Parallel Sysplex, or Unisys XPCL, which support workload sharing so that one system can take over another's application while it is being refreshed.
In the late 1950s, mainframes had only a rudimentary interactive interface and used sets of punched cards, paper tape, or magnetic tape to transfer data and programs. They operated in batch mode to support back-office functions such as payroll and customer billing, most of which were based on repeated tape-based sorting and merging operations followed by line printing to preprinted continuous stationery. When interactive user terminals were introduced, they were used exclusively for applications rather than for program development. Typewriter and Teletype devices were common control consoles for system operators through the early 1970s, although they were ultimately supplanted by keyboard/display devices. By the early 1970s, many mainframes had acquired interactive user terminals and operated as timesharing computers, supporting hundreds of users along with batch processing. Users gained access through keyboard/typewriter terminals and specialized text terminal CRT displays with integral keyboards, or later from personal computers equipped with terminal emulation software.
By the 1980s, many mainframes supported graphic display terminals and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the 1990s due to the advent of personal computers provided with GUIs. After 2000, modern mainframes partially or entirely phased out classic "green screen" and color display terminal access for end-users in favour of Web-style user interfaces. Infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM claimed that its newer mainframes reduced data center energy costs for power and cooling, and reduced physical space requirements compared to server farms. Modern mainframes can run multiple different instances of operating systems at the same time; this technique of virtual machines allows applications to run as if they were on physically distinct computers. In this role, a single mainframe can replace higher-functioning hardware services available to conventional servers.
While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication. Mainframes can add or hot swap system capacity without disrupting system function, with a specificity and granularity not available with most server solutions. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer two levels of virtualization: logical partitions and virtual machines. Many mainframe customers run two machines: one in their primary data center, and one in their backup data center—fully active, partially active, or on standby—in case there is a catastrophe affecting the first building. Test, development, and production workloads for applications and databases can run on a single machine, except for very large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages.
In practice, many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD, or with shared, geographically dispersed storage provided by EMC.
Distributed object
In distributed computing, distributed objects are objects that are distributed across different address spaces, either in different processes on the same computer or in multiple computers connected via a network, but which work together by sharing data and invoking methods. This usually involves location transparency, where remote objects appear the same as local objects. The main method of distributed object communication is remote method invocation by message-passing: one object sends a message to another object in a remote machine or process to perform some task, and the results are sent back to the calling object. Distributed objects were popular in the late 1990s and early 2000s, but have since fallen out of favor. The term may also refer to one of the extensions of the basic object concept used in the context of distributed computing, such as replicated objects or live distributed objects. Replicated objects are groups of software components that run a distributed multi-party protocol to achieve a high degree of consistency between their internal states, and that respond to requests in a coordinated manner.
Referring to the group of replicas jointly as an object reflects the fact that interacting with any of them exposes the same externally visible state and behavior. Live distributed objects generalize the replicated object concept to groups of replicas that might internally use any distributed protocol, perhaps resulting in only a weak consistency between their local states. Live distributed objects can be defined as running instances of distributed multi-party protocols, viewed from the object-oriented perspective as entities that have a distinct identity and that can encapsulate distributed state and behavior. Local and distributed objects differ in many respects, including:
- Life cycle: creation and deletion of distributed objects differ from those of local objects.
- Reference: remote references to distributed objects are more complex than simple pointers to memory addresses.
- Request latency: a distributed object request is orders of magnitude slower than a local method invocation.
- Object activation: distributed objects may not always be available to serve an object request at any point in time.
- Parallelism: distributed objects may be executed in parallel.
- Communication: there are different communication primitives available for distributed object requests.
- Failure: distributed objects have far more points of failure than typical local objects.
- Security: distribution makes them vulnerable to attack.
The RPC facilities of the cross-platform serialization protocol Cap'n Proto amount to a distributed object protocol; distributed object method calls can be executed through interface references/capabilities. Implementations of the distributed object concept include:
- Objective-C's Distributed Objects, using the Cocoa API with the NSConnection class and supporting objects.
- Java RMI.
- CORBA, which lets one build distributed mixed object systems.
- DCOM, a framework for distributed objects on the Microsoft platform.
- DDObjects, a framework for distributed objects using Borland Delphi.
- Jt, a framework for distributed components using a messaging paradigm.
- JavaSpaces, a Sun specification for a distributed, shared memory.
- Pyro, a framework for distributed objects using the Python programming language.
- Distributed Ruby (DRb), a framework for distributed objects using the Ruby programming language.
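The proxy/stub machinery these frameworks provide can be illustrated with a minimal sketch. The following C++ program uses invented names throughout (Calculator, CalculatorProxy, Request, dispatch); a real system such as CORBA, DCOM, or Java RMI generates the proxy and dispatcher from an interface definition and carries the request message over a network rather than a direct function call.

```cpp
// Minimal sketch of remote method invocation via a proxy (hypothetical names).
#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

// The shared interface: both the local proxy and the remote object obey it,
// which is what gives distributed objects their location transparency.
struct Calculator {
    virtual ~Calculator() = default;
    virtual int add(int a, int b) = 0;
};

// A request message: a method name plus marshalled arguments.
struct Request {
    std::string method;
    std::vector<int> args;
};

// "Server side": the real object that does the work.
struct CalculatorImpl : Calculator {
    int add(int a, int b) override { return a + b; }
};

// The skeleton/dispatcher: unpacks a message and invokes the real object.
// In a real system this runs in another process and reads from a socket.
int dispatch(Calculator& target, const Request& req) {
    if (req.method == "add") return target.add(req.args[0], req.args[1]);
    throw std::runtime_error("unknown method: " + req.method);
}

// The proxy: looks like a local object, but turns each call into a message.
struct CalculatorProxy : Calculator {
    explicit CalculatorProxy(Calculator& remote) : remote_(remote) {}
    int add(int a, int b) override {
        Request req{"add", {a, b}};     // marshal the call
        return dispatch(remote_, req);  // stands in for a network round trip
    }
    Calculator& remote_;
};

int main() {
    CalculatorImpl server;           // lives in the "remote" address space
    CalculatorProxy client(server);  // lives in the caller's address space
    std::cout << client.add(2, 3) << '\n';  // prints 5
}
```

Because the caller cannot tell a CalculatorProxy from a local Calculator, the differences listed above (latency, activation, failure modes) are hidden behind the same interface.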
See also: fragmented object, distributed object communication, object request broker.
macOS
macOS is a series of graphical operating systems developed and marketed by Apple Inc. since 2001. It is the primary operating system for Apple's Mac family of computers. Within the market of desktop and home computers, by web usage, it is the second most used desktop OS, after Microsoft Windows. macOS is the second major series of Macintosh operating systems. The first is colloquially called the "classic" Mac OS, introduced in 1984, whose final release was Mac OS 9 in 1999. The first desktop version, Mac OS X 10.0, was released in March 2001, with its first update, 10.1, arriving later that year. After this, Apple began naming its releases after big cats, which lasted until OS X 10.8 Mountain Lion. Since OS X 10.9 Mavericks, releases have been named after locations in California. Apple shortened the name to "OS X" in 2012 and changed it to "macOS" in 2016, adopting the nomenclature that it was using for its other operating systems, iOS, watchOS, and tvOS. The latest version is macOS Mojave, publicly released in September 2018.
Between 1999 and 2009, Apple also sold a separate series of operating systems called Mac OS X Server. The initial version, Mac OS X Server 1.0, was released in 1999 with a user interface similar to Mac OS 8.5. After this, new versions were introduced concurrently with the desktop version of Mac OS X. Beginning with Mac OS X 10.7 Lion, the server functions were made available as a separate package on the Mac App Store. macOS is based on technologies developed between 1985 and 1997 at NeXT, a company that Apple co-founder Steve Jobs created after leaving the company. The "X" in Mac OS X and OS X is the Roman numeral for ten and is pronounced as such; the X was a prominent part of the operating system's brand identity and marketing in its early years, but it receded in prominence since the release of Snow Leopard in 2009. UNIX 03 certification was achieved for the Intel version of Mac OS X 10.5 Leopard, and all releases from Mac OS X 10.6 Snow Leopard up to the current version have UNIX 03 certification. macOS shares its Unix-based core, named Darwin, and many of its frameworks with iOS, tvOS, and watchOS.
A modified version of Mac OS X 10.4 Tiger was used for the first-generation Apple TV. Releases of Mac OS X from 1999 to 2005 ran on the PowerPC-based Macs of that period. After Apple announced that it was switching to Intel CPUs from 2006 onwards, versions were released for 32-bit and 64-bit Intel-based Macs. Versions from Mac OS X 10.7 Lion onward run only on 64-bit Intel CPUs (in contrast to the ARM architecture used on iOS and watchOS devices) and do not support PowerPC applications. The heritage of what would become macOS originated at NeXT, a company founded by Steve Jobs following his departure from Apple in 1985. There, the Unix-like NeXTSTEP operating system was developed, and then launched in 1989. The kernel of NeXTSTEP is based upon the Mach kernel, developed at Carnegie Mellon University, with additional kernel layers and low-level user space code derived from parts of BSD. Its graphical user interface was built on top of an object-oriented GUI toolkit using the Objective-C programming language. Throughout the early 1990s, Apple had tried to create a "next-generation" OS to succeed its classic Mac OS through the Taligent, Copland, and Gershwin projects, but all of them were eventually abandoned.
This led Apple to purchase NeXT in 1996, allowing NeXTSTEP, by then called OPENSTEP, to serve as the basis for Apple's next generation operating system. This purchase also led to Steve Jobs returning to Apple as the interim, and later permanent, CEO, shepherding the transformation of the programmer-friendly OPENSTEP into a system that would be adopted by Apple's primary market of home users and creative professionals. The project was first code-named "Rhapsody" and later officially named Mac OS X. Mac OS X was presented as the tenth major version of Apple's operating system for Macintosh computers. Previous Macintosh operating systems were named using Arabic numerals, as with Mac OS 8 and Mac OS 9. The letter "X" in Mac OS X's name refers to a Roman numeral, and it is therefore pronounced "ten" in this context. However, it is also commonly pronounced like the letter "X". The first version of Mac OS X, Mac OS X Server 1.0, was a transitional product, featuring an interface resembling the classic Mac OS, though it was not compatible with software designed for the older system.
Consumer releases of Mac OS X included more backward compatibility. Mac OS applications could be rewritten to run natively via the Carbon API. The consumer version of Mac OS X was launched in 2001 with Mac OS X 10.0. Reviews were variable, with extensive praise for its sophisticated, glossy Aqua interface, but criticism of its sluggish performance. With Apple's popularity at a low, the makers of several classic Mac applications such as FrameMaker and PageMaker declined to develop new versions of their software for Mac OS X. Ars Technica columnist John Siracusa, who reviewed every major OS X release up to 10.10, described the early releases in retrospect as "dog-slow, feature poor" and Aqua as "unbearably slow and a huge resource hog". Apple developed several new releases of Mac OS X, and Siracusa's review of version 10.3, Panther, noted: "It's strange to have gone from years of uncertainty and vaporware to a steady annual supply of major new operating system releases." Version 10.4, Tiger, reportedly shocked executives at Microsoft by offering a number of features, such as fast file searching and improved graphics processing, that Microsoft had spent several years struggling to add to Windows with acceptable performance.
WebObjects
WebObjects is a Java web application server and a server-based web application framework originally developed by NeXT Software, Inc. As of 2009, the software has been independently maintained by a volunteer community. WebObjects' hallmark features are its object-orientation, database connectivity, and prototyping tools. Applications created with WebObjects can be deployed as web sites, Java Web Start desktop applications, and/or standards-based web services. The deployment runtime is pure Java, allowing developers to deploy WebObjects applications on platforms that support Java. One can use the included WebObjects Java SE application server or deploy on third-party Java EE application servers such as JBoss, Apache Tomcat, WebLogic Server, or IBM WebSphere. WebObjects was created by NeXT Software, Inc., first publicly demonstrated at the Object World conference in 1995, and released to the public in March 1996. The time and cost benefits of rapid, object-oriented development attracted major corporations to WebObjects in the early days of e-commerce, with clients including BBC News, Dell Computer, DreamWorks SKG, Fannie Mae, GE Capital, Merrill Lynch, and Motorola.
However, following NeXT's merger into Apple Inc. in 1997, WebObjects' public profile languished. Many early adopters switched to alternative technologies, and Apple remains the biggest client for the software, relying on it to power parts of its online Apple Store and the iTunes Store — WebObjects' highest-profile implementation. WebObjects was part of Apple's strategy of using software to drive hardware sales, and in 2000 the price was lowered from $50,000 to $699. From May 2001, WebObjects was included with Mac OS X Server and no longer required a license key for development or deployment. WebObjects transitioned from a stand-alone product to being a part of Mac OS X with the release of version 5.3 in June 2005. The developer tools and frameworks, which had sold for US$699, were bundled with Apple's Xcode IDE, and support for other platforms, such as Windows, was discontinued. Apple said that it would further integrate WebObjects development tools with Xcode in future releases; this included a new EOModeler Plugin for Xcode.
This strategy, however, was not pursued further. In 2006, Apple announced the deprecation of Mac OS X's Cocoa-Java bridge with the release of Xcode 2.4 at the August 2006 Worldwide Developers Conference, and with it all dependent features, including the entire suite of WebObjects developer applications: EOModeler, the EOModeler Plugin, WebObjects Builder, WebServices Assistant, RuleEditor, and WOALauncher. Apple had decided to concentrate its engineering resources on the runtime engine of WebObjects, leaving the future responsibility for developer applications with the open-source community. The main open-source alternative — the Eclipse IDE with the WOLips suite of plugins — had matured to such an extent that its capabilities had, in many areas, surpassed those of Apple's own tools, which had not seen significant updates for a number of years. Apple promised to provide assistance to the community in its efforts to extend such tools and develop new ones. In a posting to the webobjects-dev mailing list, Daryl Lee from Apple's WebObjects team publicly disclosed the company's new strategy for WebObjects.
It promised to "make WebObjects the best server-side runtime environment" by: Improving performance and standards compliance Making WebObjects work well with Ant and the most popular IDEs, including Xcode and Eclipse Opening and making public all standards and formats that WebObjects depends uponWebObjects 5.4, which shipped with Mac OS X Leopard in October 2007, removed the license key requirement for both development and deployment of WebObjects applications on all platforms. All methods for checking license limitations were deprecated. In 2009, Apple stopped issuing new releases of WebObjects outside Apple; the community decided to continue development with Project Wonder, an open-source framework built on top of the core WebObjects frameworks and extends them. For example, Project Wonder has updated development tools and provides a REST framework, not part of the original WebObjects package. Though once included in the default installation of Mac OS X Server, WebObjects was no longer installed by default starting with Mac OS X Snow Leopard Server and shortly after, Apple ceased promoting or selling WebObjects.
As of 2016, WebObjects is supported by its developer community, the "WOCommunity Association", by extending the core frameworks and providing fixes with Project Wonder. The organization last held a Worldwide WebObjects Developer Conference, WOWODC, in 2013. In May 2016, Apple confirmed that WebObjects had been discontinued. As of 2016, most WebObjects architects and engineers use the tools being developed by the WebObjects community, and these tools are open-source. The WebObjects plug-ins for Eclipse are known as WOLips. Building WebObjects frameworks and applications for deployment is achieved using the WOProject set of tools for Apache Ant or Apache Maven; these tools are distributed with WOLips. A WebObjects application is a server-side executable, created by combining prebuilt application framework objects with the developer's own custom code. WebObjects' frameworks can be broken down into three core parts. The WebObjects Framework is at the highest level of the system; it is responsible for the application's user interface and state management.
It uses a template-based approach to take an object graph and turn it into HTML, or other tag-based information display standards such as XML or SMIL, and it provides an environment in which developers can create reusable components. Components are chunks of presentation and functionality.
Graphical user interface
The graphical user interface (GUI) is a form of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation, instead of text-based user interfaces, typed command labels, or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces, which require commands to be typed on a computer keyboard. The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, and smaller household and industrial controls. The term GUI tends not to be applied to other lower-display-resolution types of interfaces, such as video games, or to displays not including flat screens, like volumetric displays, because the term is restricted to the scope of two-dimensional display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.
Designing the visual composition and temporal behavior of a GUI is an important part of software application programming in the area of human–computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline named usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well-tailored to the tasks. The visible graphical interface features of an application are sometimes referred to as chrome or GUI. Users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller structure allows flexible designs in which the interface is independent of, and indirectly linked to, application functions, so the GUI can be customized easily; this allows users to select or design a different skin at will, and eases the designer's work to change the interface as user needs evolve.
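A minimal model–view–controller sketch in C++ can make that separation concrete. All names here (CounterModel, textView, barView, onClick) are invented for the example; the point is that the model knows nothing about any particular view, so a "skin" can be swapped by registering a different observer, without touching the model or controller.

```cpp
// Minimal model-view-controller sketch (hypothetical names throughout).
// Model: holds state and notifies observers. Views: interchangeable
// observers. Controller: maps user input onto model operations.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

class CounterModel {
public:
    void increment() { ++value_; notify(); }
    int value() const { return value_; }
    void observe(std::function<void(int)> cb) { observers_.push_back(std::move(cb)); }
private:
    void notify() { for (auto& cb : observers_) cb(value_); }
    int value_ = 0;
    std::vector<std::function<void(int)>> observers_;
};

// Two interchangeable "skins": swapping views never touches the model.
void textView(int v) { std::cout << "count = " << v << '\n'; }
void barView(int v)  { std::cout << std::string(v, '#') << '\n'; }

// Controller: translates a (simulated) user event into a model operation.
void onClick(CounterModel& m) { m.increment(); }

int main() {
    CounterModel model;
    model.observe(textView);  // selecting a skin is just registering a view
    model.observe(barView);
    onClick(model);           // simulated user events
    onClick(model);
}
```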
Good user interface design relates more to users, and less to system architecture. Large widgets, such as windows, provide a frame or container for the main presentation content, such as a web page, email message, or drawing. Smaller ones act as user-input tools. A GUI may be designed for the requirements of a vertical market as an application-specific graphical user interface. Examples include automated teller machines, point-of-sale touchscreens at restaurants, self-service checkouts used in retail stores, airline self-ticketing and check-in, information kiosks in public spaces like train stations or museums, and monitors or control screens in embedded industrial applications which employ a real-time operating system. By the 1980s, cell phones and handheld game systems also employed application-specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation/multimedia center combinations. A GUI uses a combination of technologies and devices to provide a platform that users can interact with, for the tasks of gathering and producing information.
A series of elements conforming to a visual language have evolved to represent information stored in computers. This makes it easier for people with few computer skills to use computer software. The most common combination of such elements in GUIs is the windows, icons, menus, pointer (WIMP) paradigm, especially in personal computers. The WIMP style of interaction uses a virtual input device to represent the position of a pointing device, most often a mouse, and presents information organized in windows and represented with icons. Available commands are compiled together in menus, and actions are performed making gestures with the pointing device. A window manager facilitates the interactions between windows and the windowing system. The windowing system handles hardware devices such as pointing devices and graphics hardware, as well as the positioning of the pointer. In personal computers, all these elements are modeled through a desktop metaphor to produce a simulation called a desktop environment, in which the display represents a desktop on which documents and folders of documents can be placed.
Window managers and other software combine to simulate the desktop environment with varying degrees of realism. Smaller mobile devices such as personal digital assistants and smartphones typically use the WIMP elements with different unifying metaphors, due to constraints in space and available input devices. Applications for which WIMP is not well suited may use newer interaction techniques, collectively termed post-WIMP user interfaces. As of 2011, some touchscreen-based operating systems such as Apple's iOS and Android use the class of GUIs named post-WIMP; these support styles of interaction using more than one finger in contact with a display, which allows actions such as pinching and rotating, which are unsupported by one pointer and mouse. Human interface devices for efficient interaction with a GUI include a computer keyboard, used together with keyboard shortcuts; pointing devices for cursor control, such as the mouse, pointing stick, and trackball; virtual keyboards; and head-up displays. There are also actions performed by programs that affect the GUI.
For example, there are components like inotify or D-Bus to facilitate communication between computer programs. Ivan Sutherland developed Sketchpad in 1963, widely held as the first graphical computer-aided design program.
C++
C++ is a general-purpose programming language developed by Bjarne Stroustrup as an extension of the C language, originally called "C with Classes". It has imperative, object-oriented, and generic programming features, while also providing facilities for low-level memory manipulation. It is almost always implemented as a compiled language, and many vendors provide C++ compilers, including the Free Software Foundation, Intel, and IBM, so it is available on many platforms. C++ was designed with a bias toward system programming and embedded, resource-constrained software and large systems, with performance and flexibility of use as its design highlights. C++ has also been found useful in many other contexts, with key strengths being software infrastructure and resource-constrained applications, including desktop applications and performance-critical applications. C++ is standardized by the International Organization for Standardization (ISO), with the latest standard version ratified and published by ISO in December 2017 as ISO/IEC 14882:2017.
The C++ programming language was first standardized in 1998 as ISO/IEC 14882:1998, which was then amended by the C++03, C++11, and C++14 standards. The current C++17 standard supersedes these with an enlarged standard library. Before the initial standardization in 1998, C++ was developed by Danish computer scientist Bjarne Stroustrup at Bell Labs since 1979 as an extension of the C language. C++20 is the next planned standard, in keeping with the current trend of a new version every three years. In 1979, Bjarne Stroustrup, a Danish computer scientist, began work on "C with Classes", the predecessor to C++. The motivation for creating a new language originated from Stroustrup's experience in programming for his Ph.D. thesis. Stroustrup found that Simula had features that were helpful for large software development, but the language was too slow for practical use, while BCPL was fast but too low-level to be suitable for large software development. When Stroustrup started working in AT&T Bell Labs, he had the problem of analyzing the UNIX kernel with respect to distributed computing.
Remembering his Ph.D. experience, Stroustrup set out to enhance the C language with Simula-like features. C was chosen because it was general-purpose, fast, portable, and widely used. As well as C and Simula's influences, other languages also influenced C++, including ALGOL 68, Ada, CLU, and ML. Stroustrup's "C with Classes" added features to the C compiler, including classes, derived classes, strong typing, and default arguments. In 1983, "C with Classes" was renamed to "C++", adding new features that included virtual functions, function name and operator overloading, constants, type-safe free-store memory allocation, improved type checking, and BCPL-style single-line comments with two forward slashes. Furthermore, it included the development of a standalone compiler for C++, Cfront. In 1985, the first edition of The C++ Programming Language was released, which became the definitive reference for the language, as there was not yet an official standard. The first commercial implementation of C++ was released in October of the same year.
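A short sketch can illustrate the early feature set just listed. The example classes below (Shape, Rectangle) are invented for illustration and written in modern C++ syntax, not the 1983 dialect, but each marked construct corresponds to a feature named above.

```cpp
// Illustrative sketch of early C++ features: classes, derived classes,
// virtual functions, operator overloading, default arguments, constants,
// free-store allocation with new, and // single-line comments.
#include <iostream>

class Shape {
public:
    virtual double area() const = 0;  // virtual function, dispatched at run time
    virtual ~Shape() = default;
};

class Rectangle : public Shape {      // derived class
public:
    Rectangle(double w, double h = 1.0) : w_(w), h_(h) {}  // default argument
    double area() const override { return w_ * h_; }
    // operator overloading: "+" joins two rectangles side by side
    Rectangle operator+(const Rectangle& other) const {
        return Rectangle(w_ + other.w_, h_);
    }
private:
    double w_, h_;
};

int main() {
    const Rectangle a(2.0, 3.0);      // a constant object
    Rectangle b(4.0);                 // height defaults to 1.0
    Shape* s = new Rectangle(a + b);  // type-safe free-store allocation via new
    std::cout << a.area() << ' ' << b.area() << ' ' << s->area() << '\n';
    delete s;
}
```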
In 1989, C++ 2.0 was released, followed by the updated second edition of The C++ Programming Language in 1991. New features in 2.0 included multiple inheritance, abstract classes, static member functions, const member functions, and protected members. In 1990, The Annotated C++ Reference Manual was published; this work became the basis for the future standard. Later feature additions included templates, exceptions, namespaces, new casts, and a boolean type. After the 2.0 update, C++ evolved slowly until, in 2011, the C++11 standard was released, adding numerous new features, enlarging the standard library further, and providing more facilities to C++ programmers. After a minor C++14 update released in December 2014, various new additions were introduced in C++17, with further changes planned for 2020. As of 2017, C++ remains the third most popular programming language, behind Java and C. On January 3, 2018, Stroustrup was announced as the 2018 winner of the Charles Stark Draper Prize for Engineering, "for conceptualizing and developing the C++ programming language".
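The C++ 2.0 additions listed above can likewise be sketched in a few lines. Again the classes (Serializable, Counted, Document) are invented examples in modern syntax, one per feature named in the text.

```cpp
// Illustrative sketch of C++ 2.0 additions: multiple inheritance, abstract
// classes, static member functions, const member functions, protected members.
#include <iostream>

class Serializable {                       // abstract class: pure virtual method
public:
    virtual void save() const = 0;
    virtual ~Serializable() = default;
};

class Counted {
public:
    static int count() { return count_; }  // static member function
protected:
    Counted() { ++count_; }                // protected member: only derived
                                           // classes may construct a Counted
private:
    static inline int count_ = 0;          // shared across all instances
};

class Document : public Serializable, public Counted {  // multiple inheritance
public:
    void save() const override {           // const member function
        std::cout << "documents so far: " << count() << '\n';
    }
};

int main() {
    Document d;
    d.save();                              // prints "documents so far: 1"
}
```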
According to Stroustrup, "the name signifies the evolutionary nature of the changes from C". The name is credited to Rick Mascitti and was first used in December 1983; when Mascitti was questioned informally in 1992 about the naming, he indicated that it was given in a tongue-in-cheek spirit. The name comes from C's ++ operator, which increments the value of a variable, and a common naming convention of using "+" to indicate an enhanced computer program. During C++'s development period, the language had been referred to as "new C" and "C with Classes" before acquiring its final name. Throughout C++'s life, its development and evolution has been guided by a set of principles:
- It must be driven by actual problems, and its features should be useful in real-world programs.
- Every feature should be implementable.
- Programmers should be free to pick their own programming style, and that style should be supported by C++.
- Allowing a useful feature is more important than preventing every possible misuse of C++.
- It should provide facilities for organising programs into separate, well-defined parts, and provide facilities for combining separately developed parts.
- No implicit violations of the type system (but allow explicit violations, that is, those explicitly requested by the programmer).