A computing platform or digital platform is the environment in which a piece of software is executed. It may be the hardware or the operating system (OS), a web browser and associated application programming interfaces, or other underlying software, as long as the program code is executed with it. Computing platforms have different abstraction levels, including a computer architecture, an OS, or runtime libraries. A computing platform is the stage on which computer programs can run. A platform can be seen both as a constraint on the software development process, in that different platforms provide different functionality and restrictions, and as an assistance to that process, in that it provides ready-made low-level functionality. For example, an OS may be a platform that abstracts the underlying differences in hardware and provides a generic command for saving files or accessing the network. Platforms may include: Hardware alone, in the case of small embedded systems; embedded systems can access hardware directly, without an OS. A browser, in the case of web-based software; the browser itself runs on a hardware+OS platform, but this is not relevant to software running within the browser.
An application, such as a spreadsheet or word processor, which hosts software written in an application-specific scripting language, such as an Excel macro; this can be extended to writing fully fledged applications with the Microsoft Office suite as a platform. Software frameworks. Cloud computing and Platform as a Service; extending the idea of a software framework, these allow application developers to build software out of components that are hosted not by the developer but by the provider, with internet communication linking them together. The social networking sites Twitter and Facebook are also considered development platforms. A virtual machine such as the Java virtual machine or the .NET CLR; applications are compiled into a format similar to machine code, known as bytecode, which is then executed by the VM. A virtualized version of a complete system, including virtualized hardware, OS and storage; these allow, for instance, a typical Windows program to run on a Mac. Some architectures have multiple layers, with each layer acting as a platform to the one above it.
In general, a component only has to be adapted to the layer beneath it. For instance, a Java program has to be written to use the Java virtual machine and associated libraries as a platform, but does not have to be adapted to run on the Windows, Linux or Macintosh OS platforms. However, the JVM, the layer beneath the application, does have to be built separately for each OS.
Operating systems: AmigaOS, AmigaOS 4; FreeBSD, NetBSD, OpenBSD; IBM i; Linux; Microsoft Windows; OpenVMS; Classic Mac OS; macOS; OS/2; Solaris; Tru64 UNIX; VM; QNX; z/OS.
Mobile operating systems: Android; Bada; BlackBerry OS; Firefox OS; iOS; Embedded Linux; Palm OS; Symbian; Tizen; WebOS; LuneOS; Windows Mobile; Windows Phone.
Software frameworks and runtime environments: Binary Runtime Environment for Wireless; Cocoa; Cocoa Touch; Common Language Infrastructure; Mono; .NET Framework; Silverlight; Flash; AIR; GNU; Java platform (Java ME, Java SE, Java EE, JavaFX, JavaFX Mobile); LiveCode; Microsoft XNA; Mozilla Prism, XUL and XULRunner; Open Web Platform; Oracle Database; Qt; SAP NetWeaver; Shockwave; Smartface; Universal Windows Platform; Windows Runtime; Vexi.
Hardware examples, ordered from more common types to less common types: Commodity computing platforms such as Wintel, that is, Intel x86 or compatible personal computer hardware with the Windows operating system; Macintosh, custom Apple Inc. hardware with the Classic Mac OS and macOS operating systems (68k-based, then PowerPC-based, now migrated to x86); ARM architecture based mobile devices, such as iPhone smartphones and iPad tablet computers running iOS from Apple; Gumstix or Raspberry Pi, full-function miniature computers with Linux; Newton devices running the Newton OS from Apple; x86 with Unix-like systems such as Linux or BSD variants; CP/M computers based on the S-100 bus, maybe the earliest microcomputer platform; video game consoles of any variety; the 3DO Interactive Multiplayer, licensed to manufacturers; Apple Pippin, a multimedia player platform for video game console development; RISC processor based machines running Unix variants; SPARC architecture computers running Solaris or illumos operating systems; DEC Alpha clusters running OpenVMS or Tru64 UNIX; midrange computers with their custom operating systems, such as IBM OS/400; mainframe computers with their custom operating systems, such as IBM z/OS; supercomputer architectures.
See also: Cross-platform, Platform virtualization, Third platform.
An open standard is a standard that is publicly available and has various rights to use associated with it; it may also have various properties of how it was designed. There is no single definition, and interpretations vary with usage; the terms open and standard have a wide range of meanings associated with their usage. There are a number of definitions of open standards which emphasize different aspects of openness, including the openness of the resulting specification, the openness of the drafting process, and the ownership of rights in the standard; the term "standard" is sometimes restricted to technologies approved by formalized committees that are open to participation by all interested parties and operate on a consensus basis. The definitions of the term open standard used by academics, the European Union and some of its member governments or parliaments, such as Denmark and Spain, preclude open standards requiring fees for use, as do the New Zealand, South African and Venezuelan governments. On the standards organisation side, the World Wide Web Consortium ensures that its specifications can be implemented on a royalty-free basis.
Many definitions of the term standard permit patent holders to impose "reasonable and non-discriminatory" royalty fees and other licensing terms on implementers or users of the standard. For example, the rules for standards published by the major internationally recognized standards bodies, such as the Internet Engineering Task Force, International Organization for Standardization, International Electrotechnical Commission, and ITU-T, permit their standards to contain specifications whose implementation will require payment of patent licensing fees. Among these organizations, only the IETF and ITU-T explicitly refer to their standards as "open standards", while the others refer only to producing "standards"; the IETF and ITU-T use definitions of "open standard" that allow "reasonable and non-discriminatory" patent licensing fee requirements. There are those in the open-source software community who hold that an "open standard" is only open if it can be freely adopted and extended. While open standards or architectures are considered non-proprietary in the sense that the standard is either unowned or owned by a collective body, they can still be publicly shared and not tightly guarded.
The typical example of “open source” that has become a standard is the personal computer originated by IBM and now referred to as Wintel, the combination of the Microsoft operating system and Intel microprocessor. There are three others that are most accepted as “open”: the GSM phone standard, the Open Group, which promotes UNIX and the like, and the Internet Engineering Task Force, which created the first standards of SMTP and TCP/IP. Buyers tend to prefer open standards, which they believe offer them cheaper products and more choice for access due to network effects and increased competition between vendors. Open standards which specify formats are sometimes referred to as open formats. Many specifications that are sometimes referred to as standards are proprietary, and only available under restrictive contract terms from the organization that owns the copyright on the specification; as such, these specifications are not considered to be open. Joel West has argued that "open" standards are not black and white but have many different levels of "openness".
A standard needs to be open enough that it will become adopted and accepted in the market, but still closed enough that firms can get a return on their investment in developing the technology around the standard. A more open standard tends to occur when the knowledge of the technology becomes dispersed enough that competition is increased and others are able to start copying the technology as they implement it; this occurred with the Wintel architecture. Less open standards exist when a particular firm has much power over the standard, which can occur when a firm's platform “wins” in standard setting or when the market makes one platform most popular. On August 12, 2012, the Institute of Electrical and Electronics Engineers, Internet Society, World Wide Web Consortium, Internet Engineering Task Force and Internet Architecture Board jointly affirmed a set of principles which have contributed to the exponential growth of the Internet and related technologies; the "OpenStand Principles" establish the building blocks for innovation.
Standards developed using the OpenStand principles are developed through an open, participatory process, support interoperability, foster global competition, are voluntarily adopted on a global level, and serve as building blocks for products and services targeted to meet the needs of markets and consumers. This drives innovation which, in turn, contributes to the creation of new markets and the growth and expansion of existing markets. There are five key OpenStand Principles, as outlined below: 1. Cooperation - Respectful cooperation between standards organizations, whereby each respects the autonomy, integrity and intellectual property rules of the others. 2. Adherence to Principles - Adherence to the five fundamental principles of standards development, namely: Due process: decisions are made with equity and fairness among participants; no one party guides standards development; standards processes are transparent, and opportunities exist to appeal decisions; processes for periodic standards review and updating are well defined.
Broad consensus: processes allow for all views to be considered and addressed, such that agreement can be found across a range of interests. Transparency
The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point arithmetic established in 1985 by the Institute of Electrical and Electronics Engineers. The standard addressed many problems found in the diverse floating-point implementations that made them difficult to use reliably and portably. Many hardware floating-point units use the IEEE 754 standard. The standard defines: arithmetic formats, sets of binary and decimal floating-point data, which consist of finite numbers (including signed zeros and subnormal numbers), infinities, and special "not a number" values (NaNs); interchange formats, encodings that may be used to exchange floating-point data in an efficient and compact form; rounding rules, properties to be satisfied when rounding numbers during arithmetic and conversions; operations, arithmetic and other operations on arithmetic formats; and exception handling, indications of exceptional conditions. The current version, the IEEE 754-2008 revision published in August 2008, includes nearly all of the original IEEE 754-1985 standard plus the IEEE 854-1987 Standard for Radix-Independent Floating-Point Arithmetic.
The current version, IEEE 754-2008 published in August 2008, is derived from and replaces IEEE 754-1985, the previous version, following a seven-year revision process chaired by Dan Zuras and edited by Mike Cowlishaw. The international standard ISO/IEC/IEEE 60559:2011 has been approved for adoption through JTC1/SC 25 under the ISO/IEEE PSDO Agreement and published; the binary formats in the original standard are included in the new standard along with three new basic formats, one binary and two decimal. To conform to the current standard, an implementation must implement at least one of the basic formats as both an arithmetic format and an interchange format; as of September 2015, the standard is being revised to incorporate errata. An IEEE 754 format is a "set of representations of numerical values and symbols". A format may also include how the set is encoded. A floating-point format is specified by: a base b, either 2 or 10 in IEEE 754; a precision p; and an exponent range from emin to emax, with emin = 1 − emax for all IEEE 754 formats. A format comprises: finite numbers, which can be described by three integers: s = a sign, c = a significand having no more than p digits when written in base b, and q = an exponent such that emin ≤ q + p − 1 ≤ emax.
The numerical value of such a finite number is (−1)^s × c × b^q. Moreover, there are two zero values, called signed zeros: the sign bit specifies whether a zero is +0 or −0. There are two infinities, +∞ and −∞, and two kinds of NaN: a quiet NaN and a signaling NaN. For example, if b = 10, p = 7 and emax = 96 (so that emin = −95), then the significand satisfies 0 ≤ c ≤ 9,999,999 and the exponent satisfies −101 ≤ q ≤ 90; the smallest non-zero positive number that can be represented is 1×10^−101, and the largest is 9999999×10^90, so the full range of numbers is −9.999999×10^96 through 9.999999×10^96. The numbers ±b^(1−emax), that is, ±b^emin, are the smallest (in magnitude) normal numbers. Some numbers may have several possible exponential format representations: for instance, if b = 10 and p = 7, then −12.345 can be represented by −12345×10^−3, −123450×10^−4, or −1234500×10^−5. However, for most operations, such as arithmetic operations, the result does not depend on the representation of the inputs. For the decimal formats, any representation is valid, and the set of these representations is called a cohort.
When a result can have several representations, the standard specifies which member of the cohort is chosen. For the binary formats, the representation is made unique by choosing the smallest representable exponent allowing the value to be represented exactly. Further, the exponent is not represented directly, but a bias is added so that the smallest representable exponent is represented as 1, with 0 used for subnormal numbers. For numbers with an exponent in the normal range, the leading bit of the significand will always be 1. Consequently, that leading 1 can be implied rather than explicitly present in the memory encoding, and under the standard the explicitly represented part of the significand will lie between 0 and 1; this rule is called the implicit bit convention, or hidden bit convention. This rule allows the binary format to have an extra bit of precision. The leading bit convention cannot be used for the subnormal numbers, as they have an exponent outside the normal exponent range and are scaled by the smallest representable exponent, as used for the smallest normal numbers.
Due to the possibility of multiple encodings, a NaN may carry other information: a sign bit and a payload, intended for diagnostic information indicating the source of the NaN. The standard defines five basic formats that are named for their numeric base and the number of bits used in their interchange encoding. There are three binary floating-point basic formats (encoded with 32, 64 and 128 bits) and two decimal floating-point basic formats (encoded with 64 and 128 bits); the binary32 and binary64 formats are the single and double formats of IEEE 754-1985, respectively. A conforming implementation must implement at least one of the basic formats.
International Organization for Standardization
The International Organization for Standardization is an international standard-setting body composed of representatives from various national standards organizations. Founded on 23 February 1947, the organization promotes worldwide proprietary, industrial and commercial standards; it is headquartered in Geneva and works in 164 countries. It was one of the first organizations granted general consultative status with the United Nations Economic and Social Council. The International Organization for Standardization is an independent, non-governmental organization, the members of which are the standards organizations of the 164 member countries. It is the world's largest developer of voluntary international standards and facilitates world trade by providing common standards between nations. Over twenty thousand standards have been set, covering everything from manufactured products and technology to food safety and healthcare. Use of the standards aids in the creation of products and services that are safe, reliable and of good quality.
The standards help businesses increase productivity while minimizing errors and waste. By enabling products from different markets to be directly compared, they facilitate companies in entering new markets and assist in the development of global trade on a fair basis. The standards also serve to safeguard consumers and the end-users of products and services, ensuring that certified products conform to the minimum standards set internationally. The three official languages of the ISO are English, French and Russian; the name of the organization in French is Organisation internationale de normalisation, and in Russian, Международная организация по стандартизации. ISO is not an acronym; the organization adopted ISO as its abbreviated name in reference to the Greek word isos, as its name in the three official languages would have different acronyms. During the founding meetings of the new organization, the Greek word explanation was not invoked, so this meaning may have been made public later. ISO gives this explanation of the name: "Because 'International Organization for Standardization' would have different acronyms in different languages, our founders decided to give it the short form ISO.
ISO is derived from the Greek isos, meaning equal. Whatever the country, whatever the language, the short form of our name is always ISO." Both the name ISO and the ISO logo are registered trademarks, and their use is restricted. The organization today known as ISO began in 1928 as the International Federation of the National Standardizing Associations (ISA). It was suspended in 1942 during World War II, but after the war ISA was approached by the recently formed United Nations Standards Coordinating Committee (UNSCC) with a proposal to form a new global standards body. In October 1946, ISA and UNSCC delegates from 25 countries met in London and agreed to join forces to create the new International Organization for Standardization. ISO is a voluntary organization whose members are recognized authorities on standards, each one representing one country. Members meet annually at a General Assembly to discuss ISO's strategic objectives. The organization is coordinated by a Central Secretariat based in Geneva. A Council with a rotating membership of 20 member bodies provides guidance and governance, including setting the Central Secretariat's annual budget.
The Technical Management Board is responsible for over 250 technical committees, which develop the ISO standards. ISO has formed two joint committees with the International Electrotechnical Commission (IEC) to develop standards and terminology in the areas of electrical and electronic related technologies. ISO/IEC Joint Technical Committee 1 was created in 1987 to "develop, maintain and facilitate IT standards", where IT refers to information technology. ISO/IEC Joint Technical Committee 2 was created in 2009 for the purpose of "standardization in the field of energy efficiency and renewable energy sources". ISO has 164 national members, in three membership categories: Member bodies are national bodies considered the most representative standards body in each country; these are the only members of ISO with voting rights. Correspondent members are countries that do not have their own standards organization; these members do not participate in standards promulgation. Subscriber members are countries with small economies; they pay reduced membership fees but can follow the development of standards. Participating members are called "P" members, as opposed to observing members, who are called "O" members.
ISO is funded by a combination of: organizations that manage the specific projects or loan experts to participate in the technical work; subscriptions from member bodies, which are in proportion to each country's gross national product and trade figures; and the sale of standards. ISO's main products are international standards. ISO also publishes technical reports, technical specifications, publicly available specifications, technical corrigenda, and guides. International standards These are designated using the format ISO[/IEC][/ASTM] [IS] nnnnn[-p]:[yyyy] Title, where nnnnn is the number of the standard, p is an optional part number, yyyy is the year published, and Title describes the subject. IEC, for International Electrotechnical Commission, is included if the standard results from the work of ISO/IEC JTC1. ASTM is used for standards developed in cooperation with ASTM International. yyyy and IS are not used for an incomplete or unpublished standard and may under some
C99 is an informal name for ISO/IEC 9899:1999, a past version of the C programming language standard. It extends the previous version (C90) with new features for the language and the standard library, and helps implementations make better use of available computer hardware, such as IEEE 754-1985 floating-point arithmetic, and of compiler technology. The C11 version of the C programming language standard, published in 2011, replaces C99. After ANSI produced the official standard for the C programming language in 1989, which became an international standard in 1990, the C language specification remained static for some time, while C++ continued to evolve during its own standardization effort. Normative Amendment 1 created a new standard for C in 1995, but only to correct some details of the 1989 standard and to add more extensive support for international character sets. The standard underwent further revision in the late 1990s, leading to the publication of ISO/IEC 9899:1999 in 1999, which was adopted as an ANSI standard in May 2000.
The language defined by that version of the standard is referred to as "C99". The international C standard is maintained by the working group ISO/IEC JTC1/SC22/WG14. C99 is, for the most part, backward compatible with C89. In particular, a declaration that lacks a type specifier no longer has int implicitly assumed; the C standards committee decided that it was of more value for compilers to diagnose inadvertent omission of the type specifier than to silently process legacy code that relied on implicit int. In practice, compilers are likely to display a warning, then assume int and continue translating the program. C99 introduced several new features, many of which had already been implemented as extensions in several compilers: inline functions; intermingled declarations and code (variable declaration is no longer restricted to file scope or the start of a compound statement, facilitating static single assignment form); several new data types, including long long int, optional extended integer types, an explicit boolean data type, and a complex type to represent complex numbers; variable-length arrays; flexible array members; support for one-line comments beginning with //, as in BCPL, C++ and Java; new library functions, such as snprintf; new headers, such as <stdbool.h>, <complex.h>, <tgmath.h> and <inttypes.h>; type-generic math functions, in <tgmath.h>, which select a math library function based upon float, double, or long double arguments; improved support for IEEE floating point; designated initializers; compound literals; support for variadic macros; restrict qualification, which allows more aggressive code optimization, removing compile-time array access advantages previously held by FORTRAN over ANSI C; universal character names, which allow user variables to contain other characters than the standard character set; and the keyword static in array indices in parameter declarations. Parts of the C99 standard are included in the current version of the C++ standard, including integer types and library functions.
Variable-length arrays are not among these included parts because C++'s Standard Template Library already includes similar functionality. A major feature of C99 is its numerics support, in particular its support for access to the features of IEEE 754-1985 floating-point hardware present in the vast majority of modern processors. Platforms without IEEE 754 hardware can implement it in software. On platforms with IEEE 754 floating point: float is defined as IEEE 754 single precision, double is defined as double precision, and long double is defined as IEEE 754 extended precision, or some form of quad precision where available; the four arithmetic operations and square root are rounded as defined by IEEE 754. Expression evaluation is defined to be performed in one of three well-defined methods, indicating whether floating-point variables are first promoted to a more precise format in expressions: FLT_EVAL_METHOD == 2 indicates that all internal intermediate computations are performed by default at long double precision where available, FLT_EVAL_METHOD == 1 performs all internal intermediate expressions in double precision, while FLT_EVAL_METHOD == 0 specifies that each operation is evaluated only at the precision of the widest operand of each operator.
The intermediate result types for operands of a given precision are summarized in the adjacent table. FLT_EVAL_METHOD == 2 tends to limit the risk of rounding errors affecting numerically unstable expressions and is the default method for x87 hardware, but it yields unintuitive behaviour for the unwary user. Before C99, compilers could round intermediate results inconsistently when using x87 floating-point hardware, leading to compiler-specific behaviour.
Arm Holdings is a British multinational semiconductor and software design company, owned by SoftBank Group and its Vision Fund. With its headquarters in Cambridge, in the United Kingdom, its primary business is the design of ARM processors, although it also designs software development tools under the DS-5, RealView and Keil brands, as well as systems and platforms, system-on-a-chip infrastructure and software. As a holding company, it also holds shares of other companies. It is considered to be market dominant for processors in mobile phones and tablet computers, and is one of the best-known "Silicon Fen" companies. Processors based on designs licensed from Arm, or designed by licensees of one of the Arm instruction set architectures, are used in all classes of computing devices. Examples of those processors range from the world's smallest computer to the processors in some supercomputers on the TOP500 list. Processors designed by Arm or by Arm licensees are used as microcontrollers in embedded systems, including real-time safety systems, biometrics systems, smart TVs and all modern smartwatches, and are used as general-purpose processors in smartphones, laptops, desktops and supercomputers/HPC, e.g. as a CPU "option" in Cray's supercomputers.
Arm's Mali line of graphics processing units is used in laptops, in over 50% of Android tablets by market share, and in some versions of Samsung's smartphones and smartwatches; it is the third most popular GPU in mobile devices. Systems, including iPhone smartphones, include many chips, from many different providers, that contain one or more licensed Arm cores, in addition to those in the main Arm-based processor. Arm's core designs are also used in chips that support many common network-related technologies in smartphones: Bluetooth, WiFi and broadband, in addition to corresponding equipment such as Bluetooth headsets, 802.11ac routers, and network providers' cellular LTE equipment. Arm's main CPU competitors in servers include Intel and AMD. In mobile applications, Intel's x86 Atom is a competitor; AMD sells Arm-based chips as well as x86. Arm's main GPU competitors include mobile GPUs from Imagination Technologies, Nvidia and Intel. Despite competing in GPUs, Qualcomm and Nvidia have combined their GPUs with Arm-licensed CPUs.
Arm was a constituent of the FTSE 100 Index and had a secondary listing on NASDAQ. However, Japanese telecommunications company SoftBank Group made an agreed offer for Arm on 18 July 2016, subject to approval by Arm's shareholders, valuing the company at £23.4 billion; the transaction was completed on 5 September 2016. The acronym ARM was first used in 1983 and stood for "Acorn RISC Machine". Acorn Computers' first RISC processor was used in the original Acorn Archimedes and was one of the first RISC processors used in small computers. However, when the company was incorporated in 1990, the acronym was changed to "Advanced RISC Machines", in light of the company's name "Advanced RISC Machines Ltd."; according to an interview with Steve Furber, the name change was at the behest of Apple, which did not wish to have the name of a former competitor, namely Acorn, in the name of the company. At the time of the IPO in 1998, the company name was changed to "ARM Holdings", often just called ARM, like the processors.
On 1 August 2017, the company's branding and logo were changed. The logo is now all lowercase, and other uses of 'ARM' are in sentence case except where the whole sentence is upper case; so, for instance, it is now 'Arm Holdings'. The company was founded in November 1990 as Advanced RISC Machines Ltd and structured as a joint venture between Acorn Computers, Apple Computer and VLSI Technology. The new company intended to further the development of the Acorn RISC Machine processor, which was used in the Acorn Archimedes and had been selected by Apple for their Newton project. Its first profitable year was 1993. The company's Silicon Valley and Tokyo offices were opened in 1994. Arm invested in Palmchip Corporation in 1997 to provide system on chip platforms and to enter the disk drive market. In 1998, the company changed its name from Advanced RISC Machines Ltd to ARM Ltd. The company was first listed on the London Stock Exchange and NASDAQ in 1998, and by February 1999, Apple's shareholding had fallen to 14.8%. In 2010, Arm joined with IBM, Texas Instruments, Samsung, ST-Ericsson and Freescale Semiconductor in forming a non-profit open source engineering company, Linaro.
Arm's acquisitions include: Micrologic Solutions, a software consulting company based in Cambridge; Allant Software, a developer of debugging software; Infinite Designs, a design company based in Sheffield; EuroMIPS, a smart card design house in Sophia Antipolis, France; the engineering team of Noral Micrologics, a debug hardware and software company based in Blackburn, England; Adelante Technologies of Belgium, creating its OptimoDE data engines business, a form of lightweight DSP engine; Axys Design Automation, a developer of ESL design tools; Artisan Components, a designer of physical IP, the building blocks of integrated circuits; and KEIL Software, a leading developer of software development tools for the microcontroller market, including 8051 and C16x platforms. Arm also acquired the engineering team of PowerEscape, and Falanx, a developer of 3D graphics accelerators
C++ is a general-purpose programming language, developed by Bjarne Stroustrup as an extension of the C language, or "C with Classes". It has imperative, object-oriented and generic programming features, while also providing facilities for low-level memory manipulation. It is almost always implemented as a compiled language, and many vendors provide C++ compilers, including the Free Software Foundation, Intel and IBM, so it is available on many platforms. C++ was designed with a bias toward system programming and embedded, resource-constrained software and large systems, with performance and flexibility of use as its design highlights. C++ has also been found useful in many other contexts, with key strengths being software infrastructure and resource-constrained applications, including desktop applications and performance-critical applications. C++ is standardized by the International Organization for Standardization, with the latest standard version ratified and published by ISO in December 2017 as ISO/IEC 14882:2017.
The C++ programming language was initially standardized in 1998 as ISO/IEC 14882:1998, which was then amended by the C++03, C++11 and C++14 standards. The current C++17 standard supersedes these with new features and an enlarged standard library. Before the initial standardization in 1998, C++ was developed by Danish computer scientist Bjarne Stroustrup at Bell Labs since 1979 as an extension of the C language. C++20 is the next planned standard, in keeping with the current trend of a new version every three years. In 1979, Bjarne Stroustrup, a Danish computer scientist, began work on "C with Classes", the predecessor to C++. The motivation for creating a new language originated from Stroustrup's experience in programming for his Ph.D. thesis. Stroustrup found that Simula had features that were helpful for large software development, but the language was too slow for practical use, while BCPL was fast but too low-level to be suitable for large software development. When Stroustrup started working in AT&T Bell Labs, he had the problem of analyzing the UNIX kernel with respect to distributed computing.
Remembering his Ph.D. experience, Stroustrup set out to enhance the C language with Simula-like features. C was chosen because it was general-purpose, fast and widely used. As well as C and Simula's influences, other languages also influenced C++, including ALGOL 68, Ada, CLU and ML. Stroustrup's "C with Classes" added features to the C compiler, including classes, derived classes, strong typing and default arguments. In 1983, "C with Classes" was renamed to "C++", adding new features that included virtual functions, function name and operator overloading, constants, type-safe free-store memory allocation, improved type checking, and BCPL-style single-line comments with two forward slashes. Furthermore, it included the development of a standalone compiler for C++, Cfront. In 1985, the first edition of The C++ Programming Language was released, which became the definitive reference for the language, as there was not yet an official standard. The first commercial implementation of C++ was released in October of the same year.
In 1989, C++ 2.0 was released, followed by the updated second edition of The C++ Programming Language in 1991. New features in 2.0 included multiple inheritance, abstract classes, static member functions, const member functions, and protected members. In 1990, The Annotated C++ Reference Manual was published; this work became the basis for the future standard. Later feature additions included templates, namespaces, new casts, and a boolean type. After the 2.0 update, C++ evolved slowly until, in 2011, the C++11 standard was released, adding numerous new features, enlarging the standard library further, and providing more facilities to C++ programmers. After a minor C++14 update released in December 2014, various new additions were introduced in C++17, with further changes planned for 2020. As of 2017, C++ remained the third most popular programming language, behind Java and C. On January 3, 2018, Stroustrup was announced as the 2018 winner of the Charles Stark Draper Prize for Engineering, "for conceptualizing and developing the C++ programming language".
According to Stroustrup: "the name signifies the evolutionary nature of the changes from C". The name is credited to Rick Mascitti and was first used in December 1983; when Mascitti was questioned informally in 1992 about the naming, he indicated that it was given in a tongue-in-cheek spirit. The name comes from C's ++ operator and a common naming convention of using "+" to indicate an enhanced computer program. During C++'s development period, the language had been referred to as "new C" and "C with Classes" before acquiring its final name. Throughout C++'s life, its development and evolution has been guided by a set of principles: It must be driven by actual problems, and its features should be useful in real-world programs. Every feature should be implementable. Programmers should be free to pick their own programming style, and that style should be supported by C++. Allowing a useful feature is more important than preventing every possible misuse of C++. It should provide facilities for organising programs into separate, well-defined parts, and provide facilities for combining separately developed parts.
No implicit violations of the type system (but allow explicit violations).