1.
Programming paradigm
–
Programming paradigms are a way to classify programming languages based on their features. Some paradigms are concerned mainly with implications for the execution model of the language, others are concerned mainly with the way that code is organized, and yet others are concerned mainly with the style of syntax and grammar. In object-oriented programming, code is organized into objects that contain state that is only modified by the code that is part of the object. Most object-oriented languages are also imperative languages. In contrast, languages that fit the declarative paradigm do not state the order in which to execute operations; instead, they supply a number of operations that are available in the system, and the implementation of the language's execution model tracks which operations are free to execute and chooses the order on its own. (More at Comparison of multi-paradigm programming languages.) Just as software engineering is defined by differing methodologies, so programming languages are defined by differing paradigms. Some languages are designed to support one paradigm, while other programming languages support multiple paradigms; for example, programs written in C++, Object Pascal or PHP can be purely procedural, purely object-oriented, or can contain elements of both or other paradigms. Software designers and programmers decide how to use those paradigm elements. In object-oriented programming, programs are treated as a set of interacting objects, while in functional programming, programs are treated as a sequence of function evaluations. When programming computers or systems with many processors, process-oriented programming lets programs be treated as sets of concurrent processes acting on logically shared data structures. Many programming paradigms are as well known for the techniques they forbid as for those they enable.
For instance, pure functional programming disallows use of side-effects, while structured programming disallows use of the goto statement. Partly for this reason, new paradigms are often regarded as doctrinaire or overly rigid by those accustomed to earlier styles. Yet avoiding certain techniques can make it easier to understand program behavior. Programming paradigms can also be compared with programming models, which allow invoking an external execution model by using only an API. Programming models can also be classified into paradigms based on features of the execution model. For parallel computing, using a programming model instead of a language is common. The reason is that details of the parallel hardware leak into the abstractions used to program the hardware. This causes the programmer to have to map patterns in the algorithm onto patterns in the execution model; as a consequence, no one parallel programming language maps well to all computation problems. It is thus convenient to use a base sequential language and insert API calls to parallel execution models. These can be considered flavors of programming paradigm that apply only to parallel languages. Different approaches to programming have developed over time, being identified as such either at the time or retrospectively. An early approach consciously identified as such is structured programming, advocated since the mid-1960s. The lowest-level programming paradigms are machine code and assembly language; these are sometimes called first- and second-generation languages.
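The contrast between side-effect-free and side-effecting code can be illustrated even in an imperative language such as C. The following sketch (function names are hypothetical, chosen for illustration only) shows a pure function, whose result depends solely on its arguments, next to an impure one that mutates shared state:

```c
#include <assert.h>

/* A pure function: its result depends only on its arguments,
 * and calling it changes no external state. */
static int add_pure(int a, int b) {
    return a + b;
}

/* An impure counterpart: each call mutates a global variable,
 * so repeated calls with the same argument yield different results. */
static int counter = 0;
static int add_impure(int a) {
    counter += a;     /* side effect: observable outside the function */
    return counter;
}
```

Pure functional programming forbids the second style entirely, which is exactly the kind of prohibition the text describes: restrictive, but making program behavior easier to reason about.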
2.
Structured programming
–
Structured programming emerged in the late 1950s with the appearance of the ALGOL 58 and ALGOL 60 programming languages, with the latter including support for block structures. Edsger W. Dijkstra coined the term structured programming. Structured programming is most frequently used with deviations that allow for clearer programs in some particular cases, such as when exception handling has to be performed. Following the structured program theorem, all programs are seen as composed of control structures. Sequence: ordered statements or subroutines executed in sequence. Selection: one or a number of statements is executed depending on the state of the program; this is usually expressed with keywords such as if..then..else..endif. Iteration: a statement or block is executed until the program reaches a certain state; this is usually expressed with keywords such as while, repeat, for or do..until. Often it is recommended that each loop should only have one entry point. Recursion: a statement is executed by repeatedly calling itself until termination conditions are met. While similar in practice to iterative loops, recursive loops may be more computationally efficient in some cases. Subroutines: callable units such as procedures, functions, methods, or blocks are used to enable groups of statements to be treated as if they were one statement. It is possible to do structured programming in any programming language. Structured programming enforces a logical structure on the program being written to make it more efficient and easier to understand and modify. The structured program theorem provides the basis of structured programming. It states that three ways of combining programs—sequencing, selection, and iteration—are sufficient to express any computable function. Therefore, a processor is always executing a structured program in this sense, even if the instructions it reads from memory are not part of a structured program.
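The control structures named above can be sketched in C (a hypothetical example, not drawn from any particular program): recursion as a self-calling function, and sequence, selection, and iteration combined inside a single callable subroutine.

```c
#include <assert.h>

/* Recursion: the function calls itself until a termination
 * condition (n <= 1) is met. */
static int factorial(int n) {
    return (n <= 1) ? 1 : n * factorial(n - 1);
}

/* A subroutine combining the other three structured constructs. */
static int sum_even(const int *xs, int n) {
    int total = 0;                  /* sequence: statements run in order */
    for (int i = 0; i < n; i++) {   /* iteration: single-entry loop */
        if (xs[i] % 2 == 0) {       /* selection: depends on program state */
            total += xs[i];
        }
    }
    return total;
}
```

Note that no goto appears: every block has one entry and one exit, which is the discipline the structured program theorem shows is sufficient for any computable function.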
However, authors usually credit the result to a 1966 paper by Böhm and Jacopini. The structured program theorem does not address how to write and analyze a usefully structured program; these issues were addressed during the late 1960s and early 1970s, with contributions by Dijkstra, Robert W. Floyd, Tony Hoare, and Ole-Johan Dahl. P. J. Plauger, an early adopter of structured programming, later recalled that "neither the proof by Böhm and Jacopini nor our repeated successes at writing structured code brought them one day sooner than they were ready to convince themselves." Donald Knuth accepted the principle that programs must be written with provability in mind. In his 1974 paper, Structured Programming with Goto Statements, he gave examples where he believed that a direct jump leads to clearer and more efficient code without sacrificing provability. Many of those knowledgeable in compilers and graph theory have advocated allowing only reducible flow graphs. As late as 1987 it was still possible to raise the question of structured programming in a computer science journal: Frank Rubin did so in that year with a letter titled "'GOTO considered harmful' considered harmful". Numerous objections followed, including a response from Dijkstra that sharply criticized both Rubin and the concessions other writers made when responding to him.
3.
Software developer
–
A software developer is a person concerned with facets of the software development process, including the research, design, programming, and testing of computer software. Other job titles used with similar meanings are programmer and software analyst. According to developer Eric Sink, the differences between system design, software development, and programming are more apparent today, with some developers becoming systems architects: those who design the multi-leveled architecture or component interactions of a large software system. In a large company, there may be employees whose sole responsibility consists of only one of the phases above; in smaller development environments, a few people or even an individual might handle the complete process. The word software was coined as a prank as early as 1953. Before this time, computers were programmed either by customers or by the few commercial computer vendors of the time, such as UNIVAC and IBM. The first company founded to provide software products and services was Computer Usage Company in 1955. The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities, as universities, government, and business customers created a demand for software. Many of these programs were written in-house by full-time staff programmers; some were distributed freely between users of a particular machine for no charge, while others were sold on a commercial basis, and firms such as Computer Sciences Corporation started to grow. The computer/hardware makers started bundling operating systems, systems software, and programming environments with their machines. New software was built for microcomputers, so other manufacturers, including IBM, followed DEC's example quickly, resulting in the IBM AS/400 amongst others. The industry expanded greatly with the rise of the personal computer in the mid-1970s, and in the following years it created a growing market for games and applications.
DOS, Microsoft's first operating system product, was the dominant operating system at the time. By 2014 the role of cloud developer had been defined, and in this context one definition of a developer in general was published: "Developers make software for the world to use. The job of a developer is to crank out code -- fresh code for new products, code fixes for maintenance, code for business logic."
4.
Software release life cycle
–
Usage of the alpha/beta test terminology originated at IBM. As long ago as the 1950s, IBM used similar terminology for their hardware development: an A test was the verification of a new product before public announcement, a B test was the verification before releasing the product to be manufactured, and a C test was the final test before general availability of the product. Martin Belsky, a manager on some of IBM's earlier software projects, claimed to have invented the terminology. IBM dropped the alpha/beta terminology during the 1960s, but by then it had received fairly wide notice. The usage of "beta test" to refer to testing done by customers was not used within IBM; rather, IBM used the term "field test". Pre-alpha refers to all activities performed during the project before formal testing. These activities can include requirements analysis, software design, and software development. In typical open source development, there are several types of pre-alpha versions; milestone versions include specific sets of functions and are released as soon as the functionality is complete. The alpha phase of the release life cycle is the first phase to begin software testing. In this phase, developers generally test the software using white-box techniques; additional validation is then performed using black-box or gray-box techniques, by another testing team. Moving to black-box testing inside the organization is known as alpha release. Alpha software can be unstable and could cause crashes or data loss, and it may not contain all of the features that are planned for the final version. In general, external availability of alpha software is uncommon in proprietary software, while open source software often has publicly available alpha versions. The alpha phase usually ends with a feature freeze, indicating that no more features will be added to the software.
At this time, the software is said to be feature complete. Beta, named after the second letter of the Greek alphabet, is the software development phase following alpha; software in this stage is also known as betaware. The beta phase generally begins when the software is feature complete but likely to contain a number of known or unknown bugs. Software in this phase will generally have many more bugs in it than completed software, as well as speed/performance issues. The focus of beta testing is reducing impacts to users, often incorporating usability testing. The process of delivering a beta version to the users is called beta release, and this is typically the first time that the software is available outside of the organization that developed it. Beta version software is useful for demonstrations and previews within an organization.
5.
C (programming language)
–
C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs, and used to re-implement the Unix operating system. C has been standardized by the American National Standards Institute since 1989. C is an imperative procedural language; it was therefore useful for applications that had formerly been coded in assembly language. Despite its low-level capabilities, the language was designed to encourage cross-platform programming: a standards-compliant and portably written C program can be compiled for a very wide variety of computer platforms and operating systems with few changes to its source code, and the language has become available on a wide range of platforms. In C, all executable code is contained within subroutines, which are called functions. Function parameters are passed by value; pass-by-reference is simulated in C by explicitly passing pointer values. C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements. The C language also exhibits the following characteristics. There is a small, fixed number of keywords, including a full set of flow-of-control primitives: for, if/else, while, switch. User-defined names are not distinguished from keywords by any kind of sigil. There is a large number of arithmetical and logical operators, such as +, +=, ++, &, ~, etc. More than one assignment may be performed in a single statement, and function return values can be ignored when not needed. Typing is static, but weakly enforced: all data has a type, but implicit conversions are possible. C has no define keyword; instead, a statement beginning with the name of a type is taken as a declaration. There is no function keyword; instead, a function is indicated by the parentheses of an argument list. User-defined and compound types are possible: heterogeneous aggregate data types (structs) allow related data elements to be accessed and assigned as a unit, and array indexing is a secondary notation, defined in terms of pointer arithmetic.
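Two of the characteristics just listed can be shown in a few lines of standard C: parameters are passed by value (so the callee gets a copy), pass-by-reference is simulated with an explicit pointer, and assignment is an expression, so several assignments can appear in one statement. The function names here are hypothetical, chosen only for the demonstration.

```c
#include <assert.h>

/* Pass-by-value: inc_copy receives a copy of its argument,
 * so the caller's variable is left unchanged. */
static void inc_copy(int x) { x++; }

/* Pass-by-reference simulated by explicitly passing a pointer:
 * the caller's variable is modified through *x. */
static void inc_ref(int *x) { (*x)++; }

/* More than one assignment in a single statement: assignment is an
 * expression whose value flows right to left through the chain. */
static int chain(void) {
    int a, b, c;
    a = b = c = 7;
    return a + b + c;   /* 21 */
}
```

The pointer variant is the conventional C idiom for "out parameters"; nothing in the language marks the call site, so the & at the caller is the only visible signal that the argument may change.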
Unlike structs, arrays are not first-class objects: they cannot be assigned or compared using single built-in operators. There is no array keyword in use or definition; instead, square brackets indicate arrays syntactically, for example month[11]. Enumerated types are possible with the enum keyword. They are not tagged, and are freely interconvertible with integers. Strings are not a distinct data type, but are conventionally implemented as null-terminated arrays of characters. Low-level access to memory is possible by converting machine addresses to typed pointers.
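These last points are concrete enough to demonstrate directly: indexing defined as pointer arithmetic, enum values interconvertible with integers, and strings as null-terminated char arrays. The helper names below are hypothetical, written only for illustration.

```c
#include <assert.h>

/* Array indexing is defined in terms of pointer arithmetic:
 * a[i] is by definition *(a + i). */
static int third(const int *a) {
    return *(a + 2);            /* identical to a[2] */
}

/* Enumerated types are freely interconvertible with integers;
 * without explicit values, enumerators count up from 0. */
enum month { JAN, FEB, MAR };   /* JAN == 0, FEB == 1, MAR == 2 */

/* Strings are not a separate type: just a char array terminated
 * by a null byte, which this function scans for. */
static int str_len(const char *s) {
    int n = 0;
    while (s[n] != '\0') n++;
    return n;
}
```

Because a "string" is only a convention, nothing stops a char array from lacking the terminator; functions like str_len simply rely on the caller upholding it.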
6.
Computing platform
–
A computing platform, in the most general sense, is wherever a piece of software is executed. It may be the hardware or the operating system, even a web browser or another application. The term computing platform can refer to different abstraction levels, including a hardware architecture or an operating system; in total, it can be said to be the stage on which programs can run. For example, an OS may be a platform that abstracts the underlying differences in hardware. Platforms may also include: hardware alone, in the case of small embedded systems, which can access hardware directly, without an OS (this is referred to as running on bare metal); a browser, in the case of web-based software (the browser itself runs on a platform, but this is not relevant to software running within the browser); an application, such as a spreadsheet or word processor, which hosts software written in a scripting language (this can be extended to writing fully-fledged applications with the Microsoft Office suite as a platform); software frameworks that provide ready-made functionality; cloud computing and Platform as a Service (the social networking sites Twitter and Facebook are also considered development platforms); a virtual machine such as the Java virtual machine, where applications are compiled into a format similar to machine code, known as bytecode, which is then executed by the VM; or a virtualized version of a complete system, including virtualized hardware, OS, and software. These allow, for instance, a typical Windows program to run on what is physically a Mac. Some architectures have multiple layers, with each layer acting as a platform to the one above it. In general, a component only has to be adapted to the layer immediately beneath it; however, the JVM, the layer beneath the application, does have to be built separately for each OS.
7.
Operating system
–
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 83.3%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third position. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run only one program at a time. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs; Unix-like operating systems, e.g. Solaris and Linux, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing; distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they form a distributed system. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems.
They are designed to operate on small machines like PDAs with less autonomy, and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design; Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could run different programs in succession to speed up processing.
8.
MacOS
–
Within the market of desktop, laptop and home computers, and by web usage, it is the second most widely used desktop OS after Microsoft Windows. Launched in 2001 as Mac OS X, the series is the latest in the family of Macintosh operating systems. Mac OS X succeeded classic Mac OS, which was introduced in 1984 and whose final release was Mac OS 9 in 1999. An initial, early version of the system, Mac OS X Server 1.0, was released in 1999; the first desktop version, Mac OS X 10.0, followed in March 2001. In 2012, Apple rebranded Mac OS X to OS X. Releases were code-named after big cats from the original release up until OS X 10.8 Mountain Lion. Beginning in 2013 with OS X 10.9 Mavericks, releases have been named after landmarks in California. In 2016, Apple rebranded OS X to macOS, adopting the nomenclature that it uses for its other operating systems, iOS, watchOS, and tvOS. The latest version of macOS is macOS 10.12 Sierra. macOS is based on technologies developed at NeXT between 1985 and 1997, when Apple acquired the company. The X in Mac OS X and OS X is pronounced "ten". macOS shares its Unix-based core, named Darwin, and many of its frameworks with iOS, tvOS and watchOS. A heavily modified version of Mac OS X 10.4 Tiger was used for the first-generation Apple TV. Apple also used to have a separate line of releases of Mac OS X designed for servers; beginning with Mac OS X 10.7 Lion, the server functions were made available as a separate package on the Mac App Store. Releases of Mac OS X from 1999 to 2005 can run only on the PowerPC-based Macs from the time period. Mac OS X 10.5 Leopard was released as a Universal binary, meaning the installer disc supported both Intel and PowerPC processors. In 2009, Apple released Mac OS X 10.6 Snow Leopard; in 2011, Apple released Mac OS X 10.7 Lion, which no longer supported 32-bit Intel processors and also did not include Rosetta.
All versions of the system released since then run exclusively on 64-bit Intel CPUs. The heritage of what would become macOS had originated at NeXT, a company founded by Steve Jobs following his departure from Apple in 1985. There, the Unix-like NeXTSTEP operating system was developed and then launched in 1989; its graphical user interface was built on top of an object-oriented GUI toolkit using the Objective-C programming language. This led Apple to purchase NeXT in 1996, allowing NeXTSTEP, then called OPENSTEP, to serve as the basis for Apple's next-generation operating system. Previous Macintosh operating systems were named using Arabic numerals, e.g. Mac OS 8 and Mac OS 9. The letter X in Mac OS X's name refers to the number 10, and it is therefore correctly pronounced "ten" /ˈtɛn/ in this context; however, a common mispronunciation is "X" /ˈɛks/. Consumer releases of Mac OS X included more backward compatibility: Mac OS applications could be rewritten to run natively via the Carbon API. The consumer version of Mac OS X was launched in 2001 with Mac OS X 10.0. Reviews were variable, with praise for its sophisticated, glossy Aqua interface.
9.
Microsoft Windows
–
Microsoft Windows is a metafamily of graphical operating systems developed, marketed, and sold by Microsoft. It consists of several families of operating systems, each of which caters to a certain sector of the computing industry, with the OS typically associated with IBM PC compatible architecture. Active Windows families include Windows NT and Windows Embedded; defunct Windows families include Windows 9x, Windows Mobile and Windows Phone. Windows 10 Mobile is an active product, unrelated to the defunct family Windows Mobile. Microsoft introduced an operating environment named Windows on November 20, 1985. Microsoft Windows came to dominate the world's personal computer market with over 90% market share, overtaking Mac OS, which had been introduced in 1984. Apple came to see Windows as an encroachment on their innovation in GUI development as implemented on products such as the Lisa. On PCs, Windows is still the most popular operating system; however, in 2014, Microsoft admitted losing the majority of the overall operating system market to Android, because of the massive growth in sales of Android smartphones. In 2014, the number of Windows devices sold was less than 25% that of Android devices sold. This comparison, however, may not be fully relevant, as the two operating systems traditionally target different platforms. As of September 2016, the most recent version of Windows for PCs, tablets and smartphones is Windows 10; the most recent version for server computers is Windows Server 2016. A specialized version of Windows runs on the Xbox One game console. Microsoft, the developer of Windows, has registered several trademarks, each of which denotes a family of Windows operating systems that target a specific sector of the computing industry. It now consists of three operating system subfamilies that are released almost at the same time and share the same kernel: Windows, the operating system for personal computers and tablets.
The latest version is Windows 10; the main competitors of this family are macOS by Apple Inc. for personal computers and Android for mobile devices. Windows Server, the operating system for server computers: the latest version is Windows Server 2016. Unlike its client sibling, it has adopted a strong naming scheme; the main competitor of this family is Linux. Windows PE, a lightweight version of its Windows sibling, meant to operate as a live operating system, used for installing Windows on bare-metal computers. The latest version is Windows PE 10.0.10586.0. Windows Embedded: initially, Microsoft developed Windows CE as a general-purpose operating system for every device that was too resource-limited to be called a full-fledged computer. The following Windows families are no longer being developed: Windows 9x, whose consumer market Microsoft now caters to with Windows NT, and Windows Mobile, the predecessor to Windows Phone, which was a mobile operating system.
10.
Linux
–
Linux is a Unix-like computer operating system assembled under the model of free and open-source software development and distribution. The defining component of Linux is the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. The Free Software Foundation uses the name GNU/Linux to describe the operating system, which has led to some controversy. Linux was originally developed for personal computers based on the Intel x86 architecture. Because of the dominance of Android on smartphones, Linux has the largest installed base of all operating systems. Linux is also the leading operating system on servers and other big iron systems such as mainframe computers. It is used by around 2.3% of desktop computers; the Chromebook, which runs on Chrome OS, dominates the US K–12 education market and represents nearly 20% of the sub-$300 notebook sales in the US. Linux also runs on embedded systems, devices whose operating system is built into the firmware and is highly tailored to the system; this includes TiVo and similar DVR devices, network routers, facility automation controls, and televisions, while many smartphones and tablet computers run Android and other Linux derivatives. The development of Linux is one of the most prominent examples of free and open-source software collaboration. The underlying source code may be used, modified and distributed—commercially or non-commercially—by anyone under the terms of its respective licenses, such as the GNU General Public License. Typically, Linux is packaged in a form known as a Linux distribution for both desktop and server use. Distributions intended to run on servers may omit all graphical environments from the standard install. Because Linux is freely redistributable, anyone may create a distribution for any intended use.
The Unix operating system was conceived and implemented in 1969 at AT&T's Bell Laboratories in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and others. First released in 1971, Unix was written entirely in assembly language, as was common practice at the time. Later, in a key pioneering approach in 1973, it was rewritten in the C programming language by Dennis Ritchie. The availability of a high-level language implementation of Unix made its porting to different computer platforms easier. Due to an earlier antitrust case forbidding it from entering the computer business, AT&T was required to license the operating system's source code to anyone who asked; as a result, Unix grew quickly and became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs; freed of the legal obligation requiring free licensing, Bell Labs began selling Unix as a proprietary product. The GNU Project, started in 1983 by Richard Stallman, has the goal of creating a complete Unix-compatible software system composed entirely of free software. Later, in 1985, Stallman started the Free Software Foundation. By the early 1990s, many of the programs required in an operating system were completed, although low-level elements such as device drivers, daemons, and the kernel were stalled and incomplete. Linus Torvalds has stated that if the GNU kernel had been available at the time, he probably would not have decided to write his own. Although not released until 1992 due to legal complications, development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux. Torvalds has also stated that if 386BSD had been available at the time, he probably would not have created Linux. Although the complete source code of MINIX was freely available, the licensing terms prevented it from being free software until the licensing changed in April 2000.
11.
Software license
–
A software license is a legal instrument governing the use or redistribution of software. Under United States copyright law, all software is copyright protected, in source code as also object code form. The only exception is software in the public domain. Most distributed software can be categorized according to its license type. Two common categories for software under copyright law, and therefore with licenses which grant the licensee specific rights, are proprietary software and free and open-source software. Unlicensed software outside of copyright protection is either public domain software or software which is non-distributed, non-licensed and handled as an internal business trade secret. Contrary to popular belief, distributed unlicensed software is copyright protected; examples of this are unauthorized software leaks or software projects which are placed on public software repositories like GitHub without a specified license. As voluntarily handing software into the public domain is problematic in some international law domains, there are also licenses granting PD-like rights. The owner of a copy of software is legally entitled to use that copy of the software; hence, if the end-user of software is the owner of the respective copy, many proprietary licenses only enumerate the rights that the user already has under 17 U.S.C. § 117, and yet proclaim to take these rights away from the user. Proprietary software licenses often proclaim to give software publishers more control over the way their software is used by keeping ownership of each copy of the software with the software publisher. The form of the relationship determines if it is a lease or a purchase, for example UMG v. Augusto or Vernor v. Autodesk. The ownership of goods like software applications and video games is challenged by "licensed, not sold" EULAs of digital distributors. The Swiss-based company UsedSoft innovated the resale of business software. This feature of proprietary software licenses means that certain rights regarding the software are reserved by the software publisher.
Therefore, it is typical of EULAs to include terms which define the uses of the software. The most significant effect of this form of licensing is that, if ownership of the software remains with the software publisher, then the end-user must accept the software license; in other words, without acceptance of the license, the end-user may not use the software at all. One example of such a proprietary software license is the license for Microsoft Windows. The most common licensing models are per single user or per user in the appropriate volume discount level. Licensing per concurrent/floating user also occurs, where all users in a network have access to the program, but only a specific number at the same time. Another license model is licensing per dongle, which allows the owner of the dongle to use the program on any computer. Licensing per server, CPU or points, regardless of the number of users, is common practice, as are site or company licenses.
12.
Dylan (programming language)
–
Dylan was created in the early 1990s by a group led by Apple Computer. A concise and thorough overview of the language may be found in the Dylan Reference Manual. Dylan derives from Scheme and Common Lisp and adds an integrated object system derived from the Common Lisp Object System. In Dylan, all values are first-class objects. Dylan supports multiple inheritance, polymorphism, multiple dispatch, keyword arguments, object introspection, pattern-based syntax extension macros, and many other advanced features. Programs can express fine-grained control over dynamism, admitting programs that occupy a continuum between dynamic and static programming and supporting evolutionary development. Dylan's main design goal is to be a dynamic language well-suited for developing commercial software; Dylan attempts to address performance issues by introducing natural limits to the full flexibility of Lisp systems. Apple ended their Dylan development effort in 1995, though they made a technology release version available that included an advanced IDE. The surviving implementations are now open source; the Harlequin implementation is now known as Open Dylan and is maintained by a group of volunteers, the Dylan Hackers. The Dylan language was code-named Ralph; James Joaquin chose the name Dylan for "DYnamic LANguage". Dylan uses an ALGOL-like syntax designed by Michael Kahl, and it is described in great detail in the Dylan Reference Manual. This page shows examples of some features that are more unusual; many of them come from Dylan's Lisp heritage. A simple class with several slots: by convention, all classes are named with angle brackets. The class could be named Point, but that is never done. In "end class <point>", both "class" and "<point>" are optional. This is true for all end clauses; for example, you may write "end if" or just "end" to terminate an if statement.
The same class, rewritten in the most minimal way possible; the slots must now be initialized manually. By convention, constant names begin with $. A factorial function: there is no explicit return statement; the result of a method or function is the last expression evaluated, and it is common style to leave off the semicolon after an expression in return position. Identifiers in Dylan may contain more special characters than in most languages; n! and <integer> are just normal identifiers
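Among the features listed above, multiple dispatch is the one least familiar from mainstream object-oriented languages: a generic function selects an implementation by the runtime types of all its arguments, not just the first. The sketch below illustrates the idea in Python rather than Dylan; the registry-based dispatcher and the combine example are purely illustrative, not how Dylan implements its generic functions.

```python
# Illustrative multiple-dispatch registry: an implementation is chosen
# by the runtime types of *all* arguments, as in Dylan's generic functions.
_methods = {}

def defmethod(*types):
    """Register an implementation of a named function for a tuple of types."""
    def register(fn):
        _methods[(fn.__name__,) + types] = fn
        return fn
    return register

def dispatch(name, *args):
    """Look up and call the implementation matching the arguments' types."""
    fn = _methods[(name,) + tuple(type(a) for a in args)]
    return fn(*args)

@defmethod(int, int)
def combine(a, b):
    return a + b          # integer version

@defmethod(str, str)
def combine(a, b):
    return a + " " + b    # string version
```

Calling `dispatch("combine", 1, 2)` selects the integer method, while `dispatch("combine", "hi", "there")` selects the string one; in Dylan both would be `define method` clauses on one generic function.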
13.
Smalltalk
–
Smalltalk is an object-oriented, dynamically typed, reflective programming language. Smalltalk was created as the language to underpin the new world of computing exemplified by human–computer symbiosis. The language was first generally released as Smalltalk-80. Smalltalk-like languages are in continuing development and have gathered loyal communities of users around them. ANSI Smalltalk was ratified in 1998 and represents the standard version of Smalltalk. There are a number of Smalltalk variants; the unqualified word Smalltalk is often used to indicate the Smalltalk-80 language. Smalltalk was the product of research led by Alan Kay at Xerox Palo Alto Research Center; Alan Kay designed most of the early Smalltalk versions, which Dan Ingalls implemented. A later variant actually used for research work is now known as Smalltalk-72. Its syntax and execution model were very different from modern Smalltalk variants; after significant revisions which froze some aspects of execution semantics to gain performance, Smalltalk-76 was created. This system had a development environment featuring most of the now familiar tools. Later, a general-availability implementation, known as Smalltalk-80 Version 2, was released as an image and a virtual machine specification. ANSI Smalltalk has been the standard language reference since 1998. Two of the currently popular Smalltalk implementation variants are descendants of those original Smalltalk-80 images: Squeak is an open-source implementation derived from Smalltalk-80 Version 1 by way of Apple Smalltalk, while VisualWorks is derived from Smalltalk-80 Version 2 by way of Smalltalk-80 2.5 and ObjectWorks. As an interesting link between generations, in 2001 Vassili Bykov implemented Hobbes, a virtual machine running Smalltalk-80 inside VisualWorks.
During the late 1980s to mid-1990s, Smalltalk environments, including support and training, were sold by two competing organizations: ParcPlace Systems and Digitalk. ParcPlace Systems tended to focus on the Unix/Sun Microsystems market, while Digitalk focused on Intel-based PCs running Microsoft Windows or IBM's OS/2. IBM initially supported the Digitalk product, but then entered the market with a Smalltalk product in 1995 called VisualAge/Smalltalk. Easel introduced Enfin at this time on Windows and OS/2. Enfin became far more popular in Europe, as IBM introduced it into IT shops before their development of IBM Smalltalk. Enfin was later acquired by Cincom Systems, and is now sold under the name ObjectStudio as part of the Cincom Smalltalk product suite. In 1995, ParcPlace and Digitalk merged into ParcPlace-Digitalk and then rebranded in 1997 as ObjectShare, located in Irvine. ObjectShare was traded publicly until 1999, when it was delisted and dissolved. The merged firm never managed to find an effective answer to Java as to market positioning. VisualWorks was sold to Cincom and is now part of Cincom Smalltalk. Cincom has backed Smalltalk strongly, releasing multiple new versions of VisualWorks and ObjectStudio each year since 1999
14.
Scala (programming language)
–
Scala is a general-purpose programming language providing support for functional programming and a strong static type system. Designed to be concise, many of Scala's design decisions were intended to address criticisms of Java. Scala source code is intended to be compiled to Java bytecode, so that the resulting executable code runs on a Java virtual machine. Scala provides language interoperability with Java, so that libraries written in either language may be referenced directly in Scala or Java code. Like Java, Scala is object-oriented and uses a syntax reminiscent of the C programming language. Unlike Java, Scala has many features of functional programming languages like Scheme, Standard ML and Haskell, including currying, type inference, immutability, and lazy evaluation. It also has a type system supporting algebraic data types, covariance and contravariance, and higher-order types. Other features of Scala not present in Java include operator overloading, optional parameters, named parameters, and raw strings. The name Scala is a portmanteau of scalable and language, signifying that it is designed to grow with the demands of its users. The design of Scala started in 2001 at the École Polytechnique Fédérale de Lausanne by Martin Odersky; it followed on from work on Funnel, a programming language combining ideas from functional programming and Petri nets. Odersky formerly worked on Generic Java and javac, Sun's Java compiler. After an internal release in late 2003, Scala was released publicly in early 2004 on the Java platform; a second version followed in March 2006. Scala has had extensive support for functional programming from its inception. On 17 January 2011 the Scala team won a research grant of over €2.3 million from the European Research Council. On 12 May 2011, Odersky and collaborators launched Typesafe Inc., a company to provide commercial support and training for Scala.
Typesafe received a $3 million investment in 2011 from Greylock Partners. Scala runs on the Java platform and is compatible with existing Java programs. The reference Scala software distribution, including compiler and libraries, is released under a BSD license. Scala.js is a Scala compiler that compiles to JavaScript, making it possible to write Scala programs that can run in web browsers. Scala Native is a Scala compiler that targets the LLVM compiler infrastructure to create code that uses a lightweight managed runtime with the Boehm garbage collector; the project is led by Denys Shabalin and had its first release, version 0.1. A reference Scala compiler targeting the .NET Framework and its Common Language Runtime was released in June 2004, but was officially dropped in 2012. Indeed, Scala's compilation and execution model is identical to that of Java. A shorter version of the Hello World Scala program is possible because Scala includes interactive shell and scripting support: saved in a file named HelloWorld2.scala, a program can be run as a script with no prior compiling. Value types are capitalized: Int, Double, Boolean instead of int, double, boolean
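Currying, one of the functional features mentioned above, turns a multi-argument function into a chain of single-argument functions. In Scala this is written with multiple parameter lists, e.g. def add(x: Int)(y: Int) = x + y; the sketch below shows the same idea using plain closures, in Python for consistency with the other examples on this page (the names add and increment are illustrative).

```python
# Currying sketch: a two-argument addition becomes a chain of
# one-argument functions, each closing over the arguments seen so far.
def add(x):
    def inner(y):
        return x + y
    return inner

# Partial application falls out for free: add(1) is a function
# still waiting for its second argument.
increment = add(1)
```

`add(2)(3)` evaluates to 5, and `increment(41)` to 42, mirroring Scala's `add(2)(3)` and `val increment = add(1) _`.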
15.
Apple Inc.
–
Apple is an American multinational technology company headquartered in Cupertino, California that designs, develops, and sells consumer electronics, computer software, and online services. Apple's consumer software includes the macOS and iOS operating systems, the iTunes media player, and the Safari web browser. Its online services include the iTunes Store, the iOS App Store and Mac App Store, and Apple Music. Apple was founded by Steve Jobs, Steve Wozniak, and Ronald Wayne in April 1976 to develop and sell personal computers. It was incorporated as Apple Computer, Inc. in January 1977. Apple joined the Dow Jones Industrial Average in March 2015. In November 2014, Apple became the first U.S. company to be valued at over US$700 billion, in addition to being the largest publicly traded corporation in the world by market capitalization. The company employed 115,000 full-time employees as of July 2015, and it operates the online Apple Store and iTunes Store, the latter of which is the world's largest music retailer. Consumers used more than one billion Apple products worldwide as of March 2016. Apple's worldwide annual revenue totaled $233 billion for the fiscal year ending in September 2015, accounting for approximately 1.25% of the total United States GDP. The corporation receives significant criticism regarding the labor practices of its contractors and its environmental and business practices, including the origins of source materials. Apple was founded on April 1, 1976, by Steve Jobs, Steve Wozniak, and Ronald Wayne. The Apple I kits were computers single-handedly designed and hand-built by Wozniak and first shown to the public at the Homebrew Computer Club. The Apple I was sold as a motherboard, which is less than what is now considered a complete personal computer. The Apple I went on sale in July 1976 and was market-priced at $666.66. Apple was incorporated January 3, 1977, without Wayne, who sold his share of the company back to Jobs and Wozniak for $800.
Multimillionaire Mike Markkula provided essential business expertise and funding of $250,000 during the incorporation of Apple. During the first five years of operations, revenues grew exponentially, doubling about every four months. Between September 1977 and September 1980, yearly sales grew from $775,000 to $118 million. The Apple II, also invented by Wozniak, was introduced on April 16, 1977, at the first West Coast Computer Faire. It differed from its rivals, the TRS-80 and Commodore PET, because of its character cell-based color graphics. While early Apple II models used ordinary cassette tapes as storage devices, they were superseded by the introduction of a 5 1/4 inch floppy disk drive and interface called the Disk II. The Apple II was chosen to be the platform for the first killer app of the business world, VisiCalc. VisiCalc created a market for the Apple II and gave home users an additional reason to buy one. Before VisiCalc, Apple had been a distant third-place competitor to Commodore. By the end of the 1970s, Apple had a staff of computer designers and a production line
16.
Claris Home Page
–
Claris Home Page was one of the earliest true WYSIWYG HTML editors, developed from 1994 on. The project was code-named Loma Prieta; Claris purchased it from San Andreas Systems, reworked it to use the user interface common to all their products, and released it in 1996. Home Page supported all of the features common in HTML at the time. It also had a number of bugs and display glitches, which Claris had still not fixed as late as 1997, when a later version improved the table editor and included a clip art library of several thousand images in GIF format. In January 1998, the third and final version of Home Page was released; this version contained templates and tools for building database-driven websites using FileMaker Pro 4.1 and Claris Dynamic Markup Language. Within weeks of the final Home Page release, parent company Apple Computer reorganized Claris into FileMaker Inc. Home Page was discontinued in 2001, though it continued to run in the Classic Environment of Mac OS X through version 10.4. Home Page was designed for HTML 3.2, and consequently does not support HTML 4.0. Home Page cannot display Portable Network Graphics images, and if it is used to upload them, the PNG files will be corrupted and rendered unviewable. Programmer Sam Schillace, who had developed Claris Home Page with his partner from 1994 on, would later go on to lead the development of Google Docs
17.
HTML
–
Hypertext Markup Language (HTML) is the standard markup language for creating web pages and web applications. With Cascading Style Sheets and JavaScript, it forms a triad of cornerstone technologies for the World Wide Web. Web browsers receive HTML documents from a web server or from local storage and render them into multimedia web pages. HTML describes the structure of a web page semantically and originally included cues for the appearance of the document. HTML elements are the building blocks of HTML pages. With HTML constructs, images and other objects, such as interactive forms, may be embedded into the rendered page. HTML provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. HTML elements are delineated by tags, written using angle brackets; tags such as <img /> and <input /> introduce content into the page directly. By carefully following the W3C's compatibility guidelines (for example, including explicit close tags for elements that permit content but are left empty), a user agent should be able to interpret a document equally as HTML or XHTML. This applies to documents that are XHTML 1.0 and have been made compatible in this way. When delivered as XHTML, browsers should use an XML parser. HTML 4 defined three different versions of the language: Strict, Transitional and Frameset. The Transitional and Frameset versions allow for presentational markup, which is omitted in the Strict version; instead, cascading style sheets are encouraged to improve the presentation of HTML documents. XHTML 1 only defines an XML syntax for the language defined by HTML 4, and the loose versions of the specification are maintained for legacy support. However, contrary to popular misconceptions, the move to XHTML does not imply a removal of this legacy support; rather, the X in XML stands for extensible, and the W3C is modularizing the entire specification and opening it up to independent extensions.
The primary achievement in the move from XHTML 1.0 to XHTML 1.1 is the modularization of the entire specification. The strict version of HTML is deployed in XHTML 1.1 through a set of modular extensions to the base XHTML 1.1 specification. Likewise, someone looking for the loose or frameset specifications will find similar extended XHTML 1.1 support. The modularization also allows separate features to develop on their own timetable; for example, XHTML 1.1 allows quicker migration to emerging XML standards such as MathML. In summary, the HTML 4 specification primarily reined in all the various HTML implementations into a single clearly written specification based on SGML. XHTML 1.0 ported this specification, as is, to XML. Next, XHTML 1.1 takes advantage of the extensible nature of XML and modularizes the whole specification. XHTML 2.0 was intended to be the first step in adding new features to the specification in a standards-body-based approach. The WHATWG considers its work a living standard HTML, reflecting the state of the art in major browser implementations by Apple, Google, Mozilla and Opera; HTML5 is specified by the HTML Working Group of the W3C following the W3C process. HTML lacks some of the features found in earlier hypertext systems, such as source tracking and fat links
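The distinction drawn above between void tags like <img />, which introduce content directly, and container elements like <p>, which enclose it, can be observed with Python's standard html.parser module. The snippet and its sample markup are illustrative, not part of any specification.

```python
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    """Record start tags, end tags, and text in the order the parser sees them."""
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))
    def handle_endtag(self, tag):
        self.events.append(("end", tag))
    def handle_data(self, data):
        self.events.append(("data", data))

parser = TagLogger()
parser.feed('<p>Hello <img src="photo.png"></p>')
# The <p> element opens and later closes around its content, while the
# void element <img> contributes content directly and has no end tag.
```

Inspecting `parser.events` shows a start and an end event for `p`, but only a start event for `img`.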
18.
Memory management
–
Memory management is a form of resource management applied to computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and to free them for reuse when no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time. Several methods have been devised to increase the effectiveness of memory management; the quality of the virtual memory manager can have an extensive effect on overall system performance. Modern general-purpose computer systems manage memory at two levels: operating system level and application level. Application-level memory management is generally categorized as either automatic memory management, usually involving garbage collection, or manual memory management. The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool of memory called the heap or free store. At any given time, some parts of the heap are in use, while some are free. The allocator's metadata can also inflate the size of small allocations; this is often managed by chunking. The memory management system must track outstanding allocations to ensure that they do not overlap and that no memory is ever lost as a memory leak. The specific dynamic memory allocation algorithm implemented can impact performance significantly. A study conducted in 1994 by Digital Equipment Corporation illustrates the overheads involved for a variety of allocators; the lowest average instruction path length required to allocate a single memory slot was 52. Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually through a pointer reference.
This works well for simple embedded systems where no large objects need to be allocated. Due to the significantly reduced overhead, this method can substantially improve performance for objects that need frequent allocation and de-allocation, and it is often used in video games. In the buddy system, all blocks of a particular size are kept in a linked list or tree. If a smaller size is requested than is available, the smallest available size is selected and split in two; one of the resulting parts is selected, and the process repeats until the request is complete. When a block is allocated, the allocator will start with the smallest sufficiently large block to avoid needlessly breaking blocks. When a block is freed, it is compared to its buddy; if both are free, they are combined and placed in the correspondingly larger-sized buddy-block list. Virtual memory is a method of decoupling the memory organization from the physical hardware; applications operate on memory via virtual addresses. Each time an attempt to access stored data is made, the virtual memory subsystem translates the virtual address to a physical address. In this way, the addition of virtual memory enables granular control over memory systems and methods of access. In virtual memory systems, the system limits how a process can access the memory. Even though the memory allocated for specific processes is normally isolated, shared memory is one of the fastest techniques for inter-process communication
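The buddy-system behavior described above, splitting down to the smallest sufficient power-of-two block and recombining freed buddies, can be sketched in a few lines. The fixed 64-unit pool, the offset-based addresses, and the function names below are all illustrative assumptions, not any particular allocator's design.

```python
# Minimal buddy-system sketch over a 64-unit address space.
# free_lists[k] holds start offsets of free blocks of size 2**k.
POOL_ORDER = 6          # pool size is 2**6 = 64 units

def new_pool():
    free = {k: [] for k in range(POOL_ORDER + 1)}
    free[POOL_ORDER] = [0]          # one free 64-unit block at offset 0
    return free

def alloc(free, size):
    """Round the request up to a power of two, then split the smallest
    sufficiently large free block until it fits."""
    k = max(size - 1, 0).bit_length()
    j = k
    while j <= POOL_ORDER and not free[j]:
        j += 1                      # search upward for any free block
    if j > POOL_ORDER:
        raise MemoryError("out of memory")
    start = free[j].pop()
    while j > k:                    # split: the upper half becomes a free buddy
        j -= 1
        free[j].append(start + 2**j)
    return start, 2**k

def dealloc(free, start, size):
    """Free a block, merging with its buddy whenever the buddy is also free."""
    k = size.bit_length() - 1
    while k < POOL_ORDER:
        buddy = start ^ 2**k        # a buddy's offset differs in exactly one bit
        if buddy in free[k]:
            free[k].remove(buddy)
            start = min(start, buddy)   # merged block starts at the lower half
            k += 1
        else:
            break
    free[k].append(start)
```

For example, allocating 5 units from a fresh pool splits the 64-unit block down to an 8-unit block (leaving free buddies of 32, 16, and 8 units); freeing it recombines everything back into the original 64-unit block.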
19.
Compiler
–
A compiler is a computer program that transforms source code written in a programming language into another computer language, with the latter often having a binary form known as object code. The most common reason for converting source code is to create an executable program. The name compiler is primarily used for programs that translate source code from a high-level programming language to a lower-level language. If the compiled program can run on a computer whose CPU or operating system is different from the one on which the compiler runs, the compiler is known as a cross-compiler. More generally, compilers are a specific type of translator. A program that translates from a low-level language to a higher-level one is a decompiler, and a program that translates between high-level languages is called a source-to-source compiler or transpiler. A language rewriter is usually a program that translates the form of expressions without a change of language. The term compiler-compiler is sometimes used to refer to a parser generator, a tool often used to help create the lexer and parser. A compiler is likely to perform many or all of the following operations: lexical analysis, preprocessing, parsing, semantic analysis, and code generation. Program faults caused by incorrect compiler behavior can be difficult to track down and work around; therefore, compiler implementers invest significant effort to ensure compiler correctness. Software for early computers was written in assembly language. The notion of a high-level programming language dates back to 1943; no actual implementation occurred until the 1970s, however. The first actual compilers date from the 1950s. Identifying the very first is hard, because there is subjectivity in deciding when programs become advanced enough to count as the full concept rather than a precursor. 1952 saw two important advances. Grace Hopper wrote the compiler for the A-0 programming language, though the A-0 functioned more as a loader or linker than the modern notion of a full compiler.
Also in 1952, the first autocode compiler was developed by Alick Glennie for the Mark 1 computer at the University of Manchester; this is considered by some to be the first compiled programming language. The FORTRAN team led by John Backus at IBM is generally credited as having introduced the first unambiguously complete compiler. COBOL was an early language to be compiled on multiple architectures, in 1960. In many application domains the idea of using a higher-level language quickly caught on. Because of the expanding functionality supported by newer programming languages and the increasing complexity of computer architectures, compilers have become more complex. Early compilers were written in assembly language. The first self-hosting compiler, capable of compiling its own source code in a high-level language, was created in 1962 for the Lisp programming language by Tim Hart and Mike Levin at MIT. Since the 1970s, it has become common practice to implement a compiler in the language it compiles
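The pipeline of operations named above (lexical analysis, parsing, code generation) can be made concrete with a toy compiler for arithmetic expressions. The three-phase split, the tuple-based AST, and the tiny stack machine below are a schematic sketch under simplifying assumptions, not any production compiler design.

```python
import re

def lex(src):
    """Lexical analysis: raw source text -> a stream of tokens."""
    return re.findall(r"\d+|[+*()]", src)

def parse(tokens):
    """Parsing: tokens -> an abstract syntax tree of nested tuples.
    Grammar: expr := term ('+' term)* ; term := factor ('*' factor)*"""
    pos = 0
    def factor():
        nonlocal pos
        tok = tokens[pos]; pos += 1
        if tok == "(":
            node = expr(); pos += 1   # consume ')'
            return node
        return ("num", int(tok))
    def term():
        nonlocal pos
        node = factor()
        while pos < len(tokens) and tokens[pos] == "*":
            pos += 1
            node = ("*", node, factor())
        return node
    def expr():
        nonlocal pos
        node = term()
        while pos < len(tokens) and tokens[pos] == "+":
            pos += 1
            node = ("+", node, term())
        return node
    return expr()

def codegen(node):
    """Code generation: AST -> postfix instructions for a stack machine."""
    if node[0] == "num":
        return [("PUSH", node[1])]
    op = "ADD" if node[0] == "+" else "MUL"
    return codegen(node[1]) + codegen(node[2]) + [(op,)]

def run(code):
    """A tiny stack machine standing in for the target hardware."""
    stack = []
    for instr in code:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == "ADD" else a * b)
    return stack[0]
```

Chaining the phases, `run(codegen(parse(lex("2+3*4"))))` evaluates the source text through token stream, tree, and instruction list, the same division of labor a real compiler uses at far larger scale.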
20.
PHP
–
PHP is a server-side scripting language designed primarily for web development but also used as a general-purpose programming language. Originally created by Rasmus Lerdorf in 1994, the PHP reference implementation is now produced by The PHP Development Team. PHP originally stood for Personal Home Page, but it now stands for the recursive acronym PHP: Hypertext Preprocessor. PHP code may be embedded into HTML or HTML5 code, or it can be used in combination with various web template systems, web content management systems and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in the web server or as a Common Gateway Interface executable. The web server combines the results of the interpreted and executed PHP code with the generated web page. PHP code may also be executed with a command-line interface and can be used to implement standalone graphical applications. The standard PHP interpreter, powered by the Zend Engine, is free software released under the PHP License. PHP has been widely ported and can be deployed on most web servers on almost every operating system and platform, free of charge. The PHP language evolved without a formal specification or standard until 2014; since then, work has gone on to create a formal PHP specification. PHP development began in 1995 when Rasmus Lerdorf wrote several Common Gateway Interface programs in C, which he used to maintain his personal homepage. He extended them to work with web forms and to communicate with databases; PHP/FI could help to build simple, dynamic web applications. This release already had the basic functionality that PHP has as of 2013, including Perl-like variables, form handling, and the ability to embed HTML. The syntax resembled that of Perl but was simpler, more limited and less consistent.
A development team began to form and, after months of work and beta testing, officially released PHP/FI 2 in November 1997. The fact that PHP lacked an original overall design but instead developed organically has led to inconsistent naming of functions and inconsistent ordering of their parameters. Zeev Suraski and Andi Gutmans rewrote the parser in 1997; it formed the base of PHP3, whose name was changed to the recursive acronym PHP: Hypertext Preprocessor. Afterwards, public testing of PHP3 began, and the launch came in June 1998. Suraski and Gutmans then started a new rewrite of PHP's core; they also founded Zend Technologies in Ramat Gan, Israel. On May 22, 2000, PHP4, powered by the Zend Engine 1.0, was released; as of August 2008 this branch had reached version 4.4.9. PHP4 is no longer under development, nor will any security updates be released. On July 13, 2004, PHP5 was released, powered by the new Zend Engine II. PHP5 included new features such as improved support for object-oriented programming and the PHP Data Objects extension. In 2008, PHP5 became the only stable version under development
21.
Python (programming language)
–
Python is a widely used high-level programming language for general-purpose programming, created by Guido van Rossum and first released in 1991. The language provides constructs intended to enable writing clear programs on both a small and large scale, and it has a large and comprehensive standard library. Python interpreters are available for many operating systems, allowing Python code to run on a wide variety of systems. CPython, the reference implementation of Python, is open-source software and has a community-based development model. CPython is managed by the non-profit Python Software Foundation. About the origin of Python, Van Rossum wrote in 1996: "Over six years ago, in December 1989, I was looking for a hobby programming project that would keep me occupied during the week around Christmas. My office would be closed, but I had a home computer. I decided to write an interpreter for the new scripting language I had been thinking about lately. I chose Python as a working title for the project, being in a slightly irreverent mood." Python 2.0 was released on 16 October 2000 and had major new features, including a cycle-detecting garbage collector. With this release the development process was changed and became more transparent. Python 3.0, a major, backwards-incompatible release, was released on 3 December 2008 after a long period of testing. Many of its features have been backported to the backwards-compatible Python 2.6.x and 2.7.x version series. The End Of Life date for Python 2.7 was initially set at 2015. Many other paradigms are supported via extensions, including design by contract and logic programming. Python uses dynamic typing and a mix of reference counting and a cycle-detecting garbage collector for memory management. An important feature of Python is dynamic name resolution, which binds method and variable names during program execution. The design of Python offers some support for functional programming in the Lisp tradition.
The language has map, reduce and filter functions, list comprehensions, dictionaries, and sets, and the standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python can also be embedded in existing applications that need a programmable interface. While offering choice in coding methodology, the Python philosophy rejects exuberant syntax, such as that of Perl, in favor of a sparser, less-cluttered grammar. As Alex Martelli put it: "To describe something as clever is not considered a compliment in the Python culture." Python's philosophy rejects the Perl "there is more than one way to do it" approach to language design in favor of "there should be one—and preferably only one—obvious way to do it". Python's developers strive to avoid premature optimization, and moreover reject patches to non-critical parts of CPython that would offer an increase in speed at the cost of clarity
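The functional constructs just listed look like this in practice; the numbers and variable names are arbitrary examples.

```python
from functools import reduce   # reduce lives in functools in Python 3

nums = [1, 2, 3, 4, 5]

# map and filter, and the equivalent comprehension styles
squares = list(map(lambda n: n * n, nums))
evens = [n for n in nums if n % 2 == 0]

# reduce, one of the tools housed in the functools module
total = reduce(lambda a, b: a + b, nums, 0)

# dictionaries and sets can also be built by comprehension
square_of = {n: n * n for n in nums}
parities = {n % 2 for n in nums}
```

Here `squares` is [1, 4, 9, 16, 25], `evens` is [2, 4], and `total` is 15; the comprehension forms are generally considered the more idiomatic Python.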
22.
.NET Framework
–
.NET Framework is a software framework developed by Microsoft that runs primarily on Microsoft Windows. It includes a class library named Framework Class Library (FCL) and provides language interoperability across several programming languages. FCL and CLR together constitute .NET Framework. FCL provides user interface, data access, database connectivity, cryptography, web application development, numeric algorithms, and network communications. Programmers produce software by combining their source code with .NET Framework and other libraries. The framework is intended to be used by most new applications created for the Windows platform. Silverlight was available as a web browser plugin. Microsoft began developing .NET Framework in the late 1990s, originally under the name of Next Generation Windows Services. By late 2000, the first beta versions of .NET 1.0 were released. In August 2000, Microsoft, Hewlett-Packard, and Intel worked to standardize the Common Language Infrastructure (CLI) and C#. By December 2001, both were ratified as Ecma International standards; the International Organization for Standardization (ISO) followed in April 2003. The current version of the ISO standards is ISO/IEC 23271:2012. While Microsoft and their partners hold patents for CLI and C#, ECMA and ISO require that all patents essential to implementation be made available under reasonable and non-discriminatory terms. The firms agreed to meet these terms, and to make the patents available royalty-free. However, this did not apply to the part of .NET Framework not covered by ECMA-ISO standards, which included Windows Forms, ADO.NET, and ASP.NET. Patents that Microsoft holds in these areas may have deterred non-Microsoft implementations of the full framework. On 3 October 2007, Microsoft announced that the source code for .NET Framework 3.5 libraries was to become available under the Microsoft Reference Source License. The source code became available online on 16 January 2008 and included BCL, ASP.NET, ADO.NET, Windows Forms, and WPF.
Scott Guthrie of Microsoft promised that LINQ, WCF, and WF libraries were being added. In November 2014, Microsoft also produced an update to its patent grants, which further extends the scope beyond its prior pledges. The new grant does maintain the restriction that any implementation must maintain compliance with the mandatory parts of the CLI specification. On 31 March 2016, Microsoft announced at Microsoft Build that they would completely relicense Mono under an MIT License, even in scenarios where formerly a commercial license was needed, and that the Mono Project was contributed to the .NET Foundation. These developments followed the acquisition of Xamarin, which began in February 2016 and was finished on 18 March 2016. Microsoft's press release highlights that the cross-platform commitment now allows for a fully open-source .NET stack; however, Microsoft does not plan to release the source for WPF or Windows Forms. The core aspects of .NET Framework are implemented within the scope of CLI. Microsoft's implementation of CLI is the Common Language Runtime (CLR), which serves as the execution engine of .NET Framework