1.
Software developer
–
A software developer is a person concerned with facets of the software development process, including the research, design, programming, and testing of computer software. Other job titles used with similar meanings are programmer and software analyst. According to developer Eric Sink, the differences between system design, software development, and programming are more apparent; even more so when developers become systems architects, those who design the multi-level architecture or component interactions of a large software system. In a large company, there may be employees whose sole responsibility consists of only one of the phases above. In smaller development environments, a few people or even a single individual might handle the complete process. The word "software" was coined as a prank as early as 1953; before this time, computers were programmed either by customers or by the few commercial computer vendors of the time, such as UNIVAC and IBM. The first company founded to provide software products and services was Computer Usage Company, in 1955. The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities. Universities, government agencies, and business customers created a demand for software. Many of these programs were written in-house by full-time staff programmers; some were distributed freely between users of a particular machine for no charge, while others were sold on a commercial basis, and firms such as Computer Sciences Corporation started to grow. The computer/hardware makers started bundling operating systems, systems software, and programming environments with their machines; new software was built for microcomputers, and other manufacturers, including IBM, quickly followed DEC's example, resulting in the IBM AS/400 amongst others. The industry expanded greatly with the rise of the personal computer in the mid-1970s, which in the following years created a growing market for games and applications. DOS, Microsoft's first operating system product, was the dominant operating system at the time. By 2014 the role of cloud developer had been defined, and in this context one definition of a developer in general was published: "Developers make software for the world to use. The job of a developer is to crank out code -- fresh code for new products, code fixes for maintenance, code for business logic."
2.
Software release life cycle
–
Usage of the alpha/beta test terminology originated at IBM. As long ago as the 1950s, IBM used similar terminology for its hardware development: the "A" test was the verification of a new product before public announcement, the "B" test was the verification before releasing the product to be manufactured, and the "C" test was the final test before general availability of the product. Martin Belsky, a manager on some of IBM's earlier software projects, claimed to have invented the terminology. IBM dropped the alpha/beta terminology during the 1960s, but by then it had received fairly wide notice. The usage of "beta test" to refer to testing done by customers was not done at IBM; rather, IBM used the term "field test". Pre-alpha refers to all activities performed during the project before formal testing. These activities can include requirements analysis, software design, and software development. In typical open source development, there are several types of pre-alpha versions; milestone versions include specific sets of functions and are released as soon as the functionality is complete. The alpha phase of the release life cycle is the first phase to begin software testing. In this phase, developers generally test the software using white-box techniques; additional validation is then performed using black-box or gray-box techniques, by another testing team. Moving to black-box testing inside the organization is known as alpha release. Alpha software can be unstable and could cause crashes or data loss, and it may not contain all of the features that are planned for the final version. In general, external availability of alpha software is uncommon in proprietary software, while open source software often has publicly available alpha versions. The alpha phase usually ends with a feature freeze, indicating that no more features will be added to the software; at this time, the software is said to be feature complete. Beta, named after the second letter of the Greek alphabet, is the software development phase following alpha. Software in the beta stage is also known as betaware. The beta phase generally begins when the software is feature complete but likely to contain a number of known or unknown bugs. Software in the beta phase will generally have many more bugs in it than completed software, as well as speed/performance issues. The focus of beta testing is reducing impacts to users, often incorporating usability testing. The process of delivering a beta version to the users is called beta release, and this is typically the first time that the software is available outside of the organization that developed it. Beta version software is often useful for demonstrations and previews within an organization.
3.
Repository (version control)
–
In revision control systems, a repository is an on-disk data structure which stores metadata for a set of files and/or directory structure. Some of the metadata that a repository contains includes, among other things, a set of references to commit objects, called heads. The main purpose of a repository is to store a set of files, as well as the history of changes made to those files. Exactly how each revision control system stores those changes differs between systems, and these differences in methodology have generally led to diverse uses of revision control by different groups, depending on their needs.
4.
Haskell (programming language)
–
Haskell /ˈhæskəl/ is a standardized, general-purpose purely functional programming language, with non-strict semantics and strong static typing. It is named after logician Haskell Curry. The latest standard of Haskell is Haskell 2010; as of May 2016, a group is working on the next version. Haskell features a type system with type inference and lazy evaluation; type classes first appeared in the Haskell programming language. Its main implementation is the Glasgow Haskell Compiler. Haskell is based on the semantics, but not the syntax, of the language Miranda, and is used widely in academia and also in industry. Following the release of Miranda by Research Software Ltd in 1985, more than a dozen non-strict, purely functional programming languages existed by 1987. Of these, Miranda was used most widely, but it was proprietary software. A committee was formed whose purpose was to consolidate the existing functional languages into a common one that would serve as a basis for future research in functional-language design. The first version of Haskell was defined in 1990, and the committee's efforts resulted in a series of language definitions. The committee expressly welcomed creating extensions and variants of Haskell 98 via adding and incorporating experimental features. In February 1999, the Haskell 98 language standard was originally published as The Haskell 98 Report. In January 2003, a revised version was published as Haskell 98 Language and Libraries: The Revised Report. The language continues to evolve rapidly, with the Glasgow Haskell Compiler implementation representing the current de facto standard. In early 2006, the process of defining a successor to the Haskell 98 standard, informally named Haskell Prime, began. This was intended to be an ongoing incremental process to revise the language definition, producing a new revision up to once per year. The first revision, named Haskell 2010, was announced in November 2009; it introduces the Language-Pragma-Syntax-Extension which allows for code designating a Haskell source as Haskell 2010 or requiring certain extensions to the Haskell language. Haskell features lazy evaluation, pattern matching, list comprehension, and type classes. It is a purely functional language, which means that in general, functions in Haskell have no side effects. A distinct construct exists to represent side effects, orthogonal to the type of functions: a pure function may return a side effect which is subsequently executed, modeling the impure functions of other languages. Haskell has a strong, static type system based on Hindley–Milner type inference. Haskell's principal innovation in this area is to add type classes, originally conceived as a principled way to add overloading to the language, but since finding many more uses. The construct which represents side effects is an example of a monad. Monads are a general framework which can model different kinds of computation, including error handling, nondeterminism, parsing, and software transactional memory. Monads are defined as ordinary datatypes, but Haskell provides some syntactic sugar for their use. Haskell has an open, published specification, and multiple implementations exist.
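The features above are easiest to see in a few lines of code. The following is a minimal sketch, not drawn from any official Haskell report, illustrating lazy evaluation over an infinite list, a list comprehension, a type-class constraint, and the separation of pure code from IO; it should compile with GHC as-is:

```haskell
-- Lazy evaluation: an infinite list is fine as long as we only
-- demand a finite prefix of it.
naturals :: [Integer]
naturals = [0 ..]

-- A list comprehension filtering with a predicate.
evens :: [Integer] -> [Integer]
evens xs = [x | x <- xs, even x]

-- A type-class constraint: works for any type with a Num instance.
double :: Num a => a -> a
double x = x + x

-- Side effects live in the IO monad, kept apart from the pure code above.
main :: IO ()
main = do
  print (take 5 (evens naturals))  -- [0,2,4,6,8]
  print (double (21 :: Int))       -- 42
```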
5.
C (programming language)
–
C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs, and used to re-implement the Unix operating system. C has been standardized by the American National Standards Institute since 1989. C is an imperative procedural language, designed to provide low-level access to memory and language constructs that map efficiently to machine instructions, with minimal run-time support. Therefore, C was useful for applications that had formerly been coded in assembly language. Despite its low-level capabilities, the language was designed to encourage cross-platform programming: a standards-compliant and portably written C program can be compiled for a very wide variety of computer platforms and operating systems with few changes to its source code, and the language has become available on a very wide range of platforms. In C, all executable code is contained within subroutines, which are called functions. Function parameters are passed by value; pass-by-reference is simulated in C by explicitly passing pointer values. C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements. The C language also exhibits the following characteristics. There is a small, fixed number of keywords, including a full set of flow-of-control primitives: for, if/else, while, switch. User-defined names are not distinguished from keywords by any kind of sigil. There are a large number of arithmetic and logical operators, such as +, +=, ++, &, ~, etc. More than one assignment may be performed in a single statement, and function return values can be ignored when not needed. Typing is static, but weakly enforced: all data has a type, but implicit conversions are possible. C has no "define" keyword; instead, a statement beginning with the name of a type is taken as a declaration. There is no "function" keyword; instead, a function is indicated by the parentheses of an argument list. User-defined and compound types are possible. Heterogeneous aggregate data types (struct) allow related data elements to be accessed and assigned as a unit. Array indexing is a secondary notation, defined in terms of pointer arithmetic. Unlike structs, arrays are not first-class objects: they cannot be assigned or compared using single built-in operators. There is no "array" keyword, in use or definition; instead, square brackets indicate arrays syntactically, for example month[11]. Enumerated types are possible with the enum keyword; they are not tagged, and are freely interconvertible with integers. Strings are not a distinct data type, but are conventionally implemented as null-terminated arrays of characters. Low-level access to memory is possible by converting machine addresses to typed pointers.
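A short, self-contained sketch (illustrative only, written for this article rather than taken from any standard document) showing several of the characteristics listed above: a struct as a heterogeneous aggregate, an enum interconvertible with integers, pass-by-value with a pointer simulating pass-by-reference, array indexing as pointer arithmetic, and a null-terminated string:

```c
#include <stdio.h>

/* A heterogeneous aggregate type: related data accessed as a unit. */
struct point { int x, y; };

/* Enumerated types are freely interconvertible with integers. */
enum month { JAN = 1, FEB, MAR };

/* Parameters are passed by value; pass-by-reference is simulated
   by explicitly passing a pointer value. */
static void move(struct point *p, int dx, int dy) {
    p->x += dx;
    p->y += dy;
}

int main(void) {
    struct point p = { 0, 0 };
    move(&p, 3, 4);                 /* the pointer lets move() modify p */

    int a[3] = { 10, 20, 30 };
    /* Array indexing is defined in terms of pointer arithmetic:
       a[1] is equivalent to *(a + 1). */
    printf("a[1] = %d, *(a + 1) = %d\n", a[1], *(a + 1));

    /* Strings are conventionally null-terminated char arrays. */
    char s[] = "hi";                /* actually {'h', 'i', '\0'} */
    printf("p = (%d, %d), s = %s, FEB = %d\n", p.x, p.y, s, (int)FEB);
    return 0;
}
```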
6.
Operating system
–
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 83.3%; macOS by Apple Inc. is in second place, and the varieties of Linux are in third place. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run only one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs; Unix-like operating systems such as Solaris and Linux support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing; distributed computations are carried out on more than one machine, and when computers in a group work in cooperation, they form a distributed system. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines like PDAs with less autonomy, and are able to operate with a limited number of resources. They are very compact and extremely efficient by design; Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could run different programs in succession to speed up processing.
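To make the cooperative model concrete, here is a minimal sketch of cooperative multitasking in C using the POSIX ucontext API: each task must explicitly yield the CPU back to a round-robin scheduler, and the scheduler cannot preempt a task that never yields. This assumes a POSIX system providing <ucontext.h> (e.g. Linux with glibc) and is illustrative only, not taken from any real operating system:

```c
#include <stdio.h>
#include <ucontext.h>

#define NTASKS 2

static ucontext_t sched_ctx, task_ctx[NTASKS];
static int current;                 /* index of the task now running */
static int done[NTASKS];

/* Cooperative yield: the running task voluntarily hands the CPU
   back to the scheduler. Nothing forces it to do so. */
static void yield(void) {
    swapcontext(&task_ctx[current], &sched_ctx);
}

static void task(void) {
    int id = current;
    for (int i = 0; i < 3; i++) {
        printf("task %d, step %d\n", id, i);
        yield();                    /* omitting this would hog the CPU */
    }
    done[id] = 1;                   /* mark finished before returning */
}

int main(void) {
    static char stack[NTASKS][16384];
    for (int t = 0; t < NTASKS; t++) {
        getcontext(&task_ctx[t]);
        task_ctx[t].uc_stack.ss_sp = stack[t];
        task_ctx[t].uc_stack.ss_size = sizeof stack[t];
        task_ctx[t].uc_link = &sched_ctx;  /* resume scheduler on exit */
        makecontext(&task_ctx[t], task, 0);
    }
    /* Round-robin scheduler: keeps switching until every task is done. */
    int remaining = NTASKS;
    while (remaining > 0) {
        for (int t = 0; t < NTASKS; t++) {
            if (!done[t]) {
                current = t;
                swapcontext(&sched_ctx, &task_ctx[t]);
                if (done[t]) remaining--;
            }
        }
    }
    return 0;
}
```

Running it interleaves the two tasks' output, but only at the points where each task chooses to call yield(); a preemptive kernel would instead interrupt the tasks on a timer.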
7.
Linux
–
Linux is a Unix-like computer operating system assembled under the model of free and open-source software development and distribution. The defining component of Linux is the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. The Free Software Foundation uses the name GNU/Linux to describe the operating system, which has led to some controversy. Linux was originally developed for personal computers based on the Intel x86 architecture. Because of the dominance of Android on smartphones, Linux has the largest installed base of all operating systems. Linux is also the leading operating system on servers and other big iron systems such as mainframe computers, and is used by around 2.3% of desktop computers. The Chromebook, which runs on Chrome OS, dominates the US K–12 education market and represents nearly 20% of sub-$300 notebook sales in the US. Linux also runs on embedded systems – devices whose operating system is built into the firmware and is highly tailored to the system; this includes TiVo and similar DVR devices, network routers, facility automation controls, and televisions, while many smartphones and tablet computers run Android and other Linux derivatives. The development of Linux is one of the most prominent examples of free and open-source software collaboration. The underlying source code may be used, modified and distributed, commercially or non-commercially, by anyone under the terms of its respective licenses, such as the GNU General Public License. Typically, Linux is packaged in a form known as a Linux distribution for both desktop and server use. Distributions intended to run on servers may omit all graphical environments from the standard install, and because Linux is freely redistributable, anyone may create a distribution for any intended use. The Unix operating system was conceived and implemented in 1969 at AT&T's Bell Laboratories in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and others. First released in 1971, Unix was written entirely in assembly language, as was common practice at the time. Later, in a key pioneering approach in 1973, it was rewritten in the C programming language by Dennis Ritchie. The availability of a high-level language implementation of Unix made its porting to different computer platforms easier. Due to an earlier antitrust case forbidding it from entering the computer business, AT&T was required to license the operating system's source code to anyone who asked; as a result, Unix grew quickly and became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs; freed of the legal obligation requiring free licensing, Bell Labs began selling Unix as a proprietary product. The GNU Project, started in 1983 by Richard Stallman, has the goal of creating a complete Unix-compatible software system composed entirely of free software. Later, in 1985, Stallman started the Free Software Foundation. By the early 1990s, many of the programs required in an operating system were completed, although low-level elements such as device drivers, daemons, and the kernel were stalled and incomplete. Linus Torvalds has stated that if the GNU kernel had been available at the time, he probably would not have decided to write his own. Although not released until 1992 due to legal complications, development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux; Torvalds has also stated that if 386BSD had been available at the time, he probably would not have created Linux. Although the complete source code of MINIX was freely available, its licensing terms prevented it from being free software until the licensing changed in April 2000.
8.
macOS
–
Within the market of desktop, laptop and home computers, and by web usage, macOS is the second most widely used desktop OS after Microsoft Windows. Launched in 2001 as Mac OS X, the series is the latest in the family of Macintosh operating systems. Mac OS X succeeded classic Mac OS, which was introduced in 1984 and whose final release was Mac OS 9 in 1999. An initial, early version of the system, Mac OS X Server 1.0, was released in 1999; the first desktop version, Mac OS X 10.0, followed in March 2001. In 2012, Apple rebranded Mac OS X to OS X. Releases were code-named after big cats from the original release up until OS X 10.8 Mountain Lion; beginning in 2013 with OS X 10.9 Mavericks, releases have been named after landmarks in California. In 2016, Apple rebranded OS X to macOS, adopting the nomenclature that it uses for its other operating systems, iOS, watchOS, and tvOS. The latest version of macOS is macOS 10.12 Sierra. macOS is based on technologies developed at NeXT between 1985 and 1997, when Apple acquired the company. The X in Mac OS X and OS X is pronounced "ten". macOS shares its Unix-based core, named Darwin, and many of its frameworks with iOS, tvOS and watchOS. A heavily modified version of Mac OS X 10.4 Tiger was used for the first-generation Apple TV. Apple also used to have a separate line of releases of Mac OS X designed for servers; beginning with Mac OS X 10.7 Lion, the server functions were made available as a separate package on the Mac App Store. Releases of Mac OS X from 1999 to 2005 can run only on the PowerPC-based Macs from the time period. Mac OS X 10.5 Leopard was released as a Universal binary, meaning the installer disc supported both Intel and PowerPC processors. In 2009, Apple released Mac OS X 10.6 Snow Leopard. In 2011, Apple released Mac OS X 10.7 Lion, which no longer supported 32-bit Intel processors and also did not include Rosetta; all versions of the system released since then run exclusively on 64-bit Intel CPUs. The heritage of what would become macOS had originated at NeXT, a company founded by Steve Jobs following his departure from Apple in 1985. There, the Unix-like NeXTSTEP operating system was developed, and then launched in 1989. Its graphical user interface was built on top of an object-oriented GUI toolkit using the Objective-C programming language. This led Apple to purchase NeXT in 1996, allowing NeXTSTEP, then called OPENSTEP, to serve as the basis for Apple's next-generation operating system. Previous Macintosh operating systems were named using Arabic numerals, e.g. Mac OS 8 and Mac OS 9. The letter X in Mac OS X's name refers to the number 10; it is therefore correctly pronounced "ten" /ˈtɛn/ in this context, although a common mispronunciation is "X" /ˈɛks/. Consumer releases of Mac OS X included more backward compatibility; Mac OS applications could be rewritten to run natively via the Carbon API. The consumer version of Mac OS X was launched in 2001 with Mac OS X 10.0. Reviews were variable, with praise for its sophisticated, glossy Aqua interface.
9.
iOS
–
iOS is a mobile operating system created and developed by Apple Inc. exclusively for its hardware. It is the operating system that presently powers many of the company's mobile devices, including the iPhone and iPad, and it is the second most popular mobile operating system globally after Android. iPad tablets are also the second most popular tablets by sales. Originally unveiled in 2007 for the iPhone, iOS has been extended to support other Apple devices such as the iPod Touch and the iPad. As of January 2017, Apple's App Store contains more than 2.2 million iOS applications, 1 million of which are native for iPads; these mobile apps have collectively been downloaded more than 130 billion times. The iOS user interface is based upon direct manipulation, using multi-touch gestures. Interface control elements consist of sliders, switches, and buttons. Internal accelerometers are used by some applications to respond to shaking the device or rotating it in three dimensions. Apple has been praised for incorporating thorough accessibility functions into iOS, enabling users with vision and hearing disabilities to use its products. Major versions of iOS are released annually; the current version, iOS 10, was released on September 13, 2016. In iOS, there are four abstraction layers: the Core OS, Core Services, Media, and Cocoa Touch layers. In 2005, when Steve Jobs began planning the iPhone, he had a choice to either shrink the Mac or enlarge the iPod. Scott Forstall was responsible for creating a software development kit for programmers to build iPhone apps, as well as an App Store within iTunes. The operating system was unveiled with the iPhone at the Macworld Conference & Expo on January 9, 2007, and released in June of that year. At the time of its unveiling in January, Steve Jobs claimed that "iPhone runs OS X" and runs applications, but at the time of the iPhone's release the operating system was renamed iPhone OS. Initially, third-party native applications were not supported; Steve Jobs' reasoning was that developers could build web applications through the Safari web browser that would behave like native apps on the iPhone. In October 2007, Apple announced that a native Software Development Kit was under development, and on March 6, 2008, Apple held a press event announcing the iPhone SDK. The iOS App Store was opened on July 10, 2008 with an initial 500 applications available, a number that grew to 2.2 million by January 2017. As of March 2016, 1 million apps are natively compatible with the iPad tablet computer. These apps have collectively been downloaded more than 130 billion times. App intelligence firm Sensor Tower has estimated that the App Store will reach 5 million apps by the year 2020. On September 5, 2007, Apple released the iPod Touch, and Apple also sold more than one million iPhones during the 2007 holiday season.
10.
Windows 2000
–
Windows 2000 is an operating system for use on both client and server computers. It was produced by Microsoft and released to manufacturing on December 15, 1999. It is the successor to Windows NT 4.0, and is the last version of Microsoft Windows to display the "Windows NT" designation; it was succeeded by Windows XP and Windows Server 2003. During development, Windows 2000 was known as Windows NT 5.0. Four editions of Windows 2000 were released: Professional, Server, Advanced Server, and Datacenter Server. Windows 2000 introduces NTFS 3.0, the Encrypting File System, as well as basic and dynamic disk storage. Support for people with disabilities was improved over Windows NT 4.0 with a number of new assistive technologies. The Windows 2000 Server family has additional features, including the ability to provide Active Directory services. Windows 2000 can be installed through either a manual or unattended installation. Microsoft marketed Windows 2000 as the most secure Windows version ever at the time; however, it became the target of a number of high-profile virus attacks such as Code Red and Nimda. For ten years after its release, it continued to receive patches for security vulnerabilities nearly every month until reaching the end of its lifecycle on July 13, 2010. Windows 2000 is a continuation of the Microsoft Windows NT family of operating systems. The original name for the operating system was Windows NT 5.0; its Beta 1 was released in September 1997, followed by Beta 2 in August 1998. On October 27, 1998, Microsoft announced that the name of the final version of the operating system would be Windows 2000. Windows 2000 Beta 3 was released in January 1999. NT 5.0 Beta 1 was similar to NT 4.0, including a very similarly themed logo, while NT 5.0 Beta 2 introduced a new boot screen. The new login prompt from the final version made its first appearance in Beta 3 build 1946. The new, updated icons first appeared in Beta 3 build 1976, and the Windows 2000 boot screen in the final version first appeared in Beta 3 build 1994. According to Dave Thompson of the Windows NT team, Windows 2000 itself did not have a codename; Windows 2000 Service Pack 1 was codenamed "Asteroid" and Windows 2000 64-bit was codenamed "Janus". During development, there was a build for the Alpha architecture, which was abandoned some time after RC1, after Compaq announced it had dropped support for Windows NT on Alpha. From here, Microsoft issued three release candidates between July and November 1999, and finally released the operating system to partners on December 12, 1999. The public could buy the full version of Windows 2000 on February 17, 2000.
11.
FreeBSD
–
FreeBSD is a free and open-source Unix-like operating system descended from Research Unix via the Berkeley Software Distribution (BSD). Although for legal reasons FreeBSD cannot use the Unix trademark, it is a direct descendant of BSD. FreeBSD has similarities with Linux, with two major differences in scope and licensing: FreeBSD maintains a complete operating system, i.e. the project delivers the kernel, device drivers, userland utilities, and documentation, whereas Linux delivers only a kernel and drivers and relies on third parties for other system software. The FreeBSD project includes a security team overseeing all software shipped in the base distribution. A wide range of additional third-party applications may be installed using the pkgng package management system or the FreeBSD Ports collection, or by directly compiling source code. FreeBSD's roots go back to the University of California, Berkeley, where the university acquired a UNIX source license from AT&T. The BSD project was founded in 1976 by Bill Joy, but since BSD contained code from AT&T Unix, all recipients had to get a license from AT&T first in order to use BSD. In June 1989, "Networking Release 1", or simply Net-1, the first public version of BSD, was released. After releasing Net-1, Keith Bostic, a developer of BSD, suggested replacing all AT&T code with freely redistributable code under the original BSD license. Work on replacing AT&T code began and, after 18 months, much of the AT&T code was replaced; however, six files containing AT&T code remained in the kernel. The BSD developers decided to release "Networking Release 2" (Net-2) without those six files. William Jolitz wrote replacements for those six files and released 386BSD via an anonymous FTP server. The first version of FreeBSD was released in November 1993. In the early days of the project's inception, a company named Walnut Creek CDROM, upon the suggestion of two FreeBSD developers, agreed to release the operating system on CD-ROM. By 1997, FreeBSD was Walnut Creek's most successful product; the company itself was later renamed The FreeBSD Mall and later iXsystems. Today, FreeBSD is used by many IT companies such as IBM, Nokia, and Juniper Networks. Certain parts of Apple's Mac OS X operating system are based on FreeBSD, and the PlayStation 3 operating system also borrows certain components from FreeBSD. Netflix, WhatsApp, and FlightAware are also examples of big, successful and heavily network-oriented companies which are running FreeBSD. 386BSD and FreeBSD were both derived from 1992's BSD release. In January 1992, BSDi started to release BSD/386, later called BSD/OS, an operating system similar to FreeBSD and based on 1992's BSD release. AT&T filed a lawsuit against BSDi and alleged distribution of AT&T source code in violation of license agreements. The lawsuit was settled out of court and the exact terms were not all disclosed; the only term that became public was that BSDi would migrate its source base to the newer 4.4BSD-Lite sources. Although not involved in the litigation, it was suggested to FreeBSD that the project should also move to 4.4BSD-Lite. FreeBSD 2.0, released in November 1994, was the first version of FreeBSD without any code from AT&T. Although FreeBSD does not install the X Window System by default, it is available in the FreeBSD ports collection, as are a number of desktop environments such as GNOME, KDE and Xfce. Although FreeBSD explicitly focuses on the IA-32 and x86-64 platforms, it also supports others such as ARM, PowerPC and MIPS to a lesser degree, which makes it usable on embedded systems.
12.
Solaris (operating system)
–
Solaris is a Unix operating system originally developed by Sun Microsystems. It superseded their earlier SunOS in 1993. Oracle Solaris, so named as of 2010, has been owned by Oracle Corporation since the Sun acquisition by Oracle in January 2010. Solaris is known for its scalability, especially on SPARC systems. Solaris supports SPARC-based and x86-based workstations and servers from Oracle and other vendors, with efforts underway to port it to additional platforms. Solaris is registered as compliant with the Single UNIX Specification. Historically, Solaris was developed as proprietary software. In June 2005, Sun Microsystems released most of the codebase under the CDDL license; with OpenSolaris, Sun wanted to build a developer and user community around the software. After the acquisition of Sun Microsystems in January 2010, Oracle decided to discontinue the OpenSolaris distribution. In August 2010, Oracle discontinued providing public updates to the source code of the Solaris kernel, effectively turning Solaris 11 back into a closed-source proprietary operating system. Following that, in 2011 the Solaris 11 kernel source code leaked to BitTorrent; however, through the Oracle Technology Network, industry partners can still gain access to the in-development Solaris source code. Source code for the open-source components of Solaris 11 is available for download from Oracle. In 1987, AT&T Corporation and Sun announced that they were collaborating on a project to merge the most popular Unix variants on the market at that time: BSD, System V, and Xenix. This became Unix System V Release 4 (SVR4). On September 4, 1991, Sun announced that it would replace its existing BSD-derived Unix, SunOS 4, with one based on SVR4. This was identified internally as SunOS 5, but a new marketing name was introduced at the same time: Solaris 2. The justification for this new "overbrand" was that it encompassed not only SunOS, but also the OpenWindows graphical user interface and Open Network Computing functionality. Although SunOS 4.1.x micro releases were retroactively named Solaris 1 by Sun, the Solaris name is used almost exclusively for releases based on SunOS 5, whose minor version is included in the Solaris release number. For example, Solaris 2.4 incorporates SunOS 5.4. After Solaris 2.6, the "2." was dropped from the name, so Solaris 7 incorporates SunOS 5.7. Although SunSoft stated in its initial Solaris 2 press release its intent to support both SPARC and x86 systems, the first two Solaris 2 releases, 2.0 and 2.1, were SPARC-only. An x86 version of Solaris 2.1 was released in June 1993, about six months after the SPARC version, as a desktop and uniprocessor workstation system; it included the Wabi emulator to support Windows applications. At the time, Sun also offered the Interactive Unix system that it had acquired from Interactive Systems Corporation. In 1994, Sun released Solaris 2.4, supporting both SPARC and x86 systems from a unified source code base. Solaris uses a common code base for the platforms it supports, SPARC and x86.
13.
Computing platform
–
A computing platform, in the most general sense, is whatever environment a piece of software is executed in. It may be the hardware or the operating system, even a web browser or other application. The term computing platform can refer to different abstraction levels, including a hardware architecture, an operating system, or runtime libraries. In total it can be said to be the stage on which computer programs can run. For example, an OS may be a platform that abstracts the underlying differences in hardware. Platforms may also include: hardware alone, in the case of small embedded systems, which can access hardware directly, without an OS (this is referred to as running on "bare metal"); a browser, in the case of web-based software (the browser itself runs on a platform, but this is not relevant to software running within the browser); an application, such as a spreadsheet or word processor, which hosts software written in a scripting language (this can be extended to writing fully-fledged applications with the Microsoft Office suite as a platform); software frameworks that provide ready-made functionality; cloud computing and Platform as a Service (the social networking sites Twitter and Facebook are also considered development platforms); a virtual machine such as the Java virtual machine, for which applications are compiled into a format similar to machine code, known as bytecode, which is then executed by the VM; and a virtualized version of a complete system, including virtualized hardware, OS, and software, which allows, for instance, a typical Windows program to run on what is physically a Mac. Some architectures have multiple layers, with each layer acting as a platform for the one above it. In general, a component only has to be adapted to the layer immediately beneath it; for instance, a Java program has to be written to use the Java virtual machine and associated libraries as a platform, but does not have to be adapted to run on Windows, Linux or macOS platforms. However, the JVM, the layer beneath the application, does have to be built separately for each OS.
14.
x86
–
x86 is a family of backward-compatible instruction set architectures based on the Intel 8086 CPU and its Intel 8088 variant. The term "x86" came into being because the names of several successors to Intel's 8086 processor end in "86". Many additions and extensions have been added to the x86 instruction set over the years, almost consistently with full backward compatibility. The architecture has been implemented in processors from Intel, Cyrix, AMD, VIA and many other companies; there are also open implementations. In the 1980s and early 1990s, when the 8088 and 80286 were still in common use, the term x86 usually represented any 8086-compatible CPU; today, however, x86 usually implies binary compatibility with the 32-bit instruction set of the 80386. An 8086 system, including coprocessors such as the 8087 and 8089, was described under Intel's naming scheme as an iAPX 86 system; there were also the terms iRMX, iSBC, and iSBX, all together under the heading Microsystem 80. However, this naming scheme was quite temporary, lasting for a few years during the early 1980s. Today, x86 is ubiquitous in both stationary and portable computers, and is also used in midrange computers, workstations, and servers. A large amount of software, including operating systems such as DOS, Windows, Linux, BSD, Solaris and macOS, functions with x86-based hardware. There have been attempts, including by Intel itself, to end the market dominance of the "inelegant" x86 architecture designed directly from the first simple 8-bit microprocessors. Examples of this are the iAPX 432, the Intel 960, the Intel 860, and the Intel/Hewlett-Packard Itanium architecture. However, the continuous refinement of x86 microarchitectures, circuitry and semiconductor manufacturing would make it hard to replace x86 in many segments. Many processor models and model series implement variations of the x86 instruction set, each characterized by significantly improved or commercially successful microarchitecture designs. Such x86 implementations are seldom simple copies, but often employ different internal microarchitectures as well as different solutions at the electronic and physical levels. Quite naturally, early compatible microprocessors were 16-bit, while 32-bit designs were developed much later. For the personal computer market, real quantities started to appear around 1990 with i386- and i486-compatible processors. Other companies which designed or manufactured x86 or x87 processors include ITT Corporation, National Semiconductor, ULSI System Technology, and Weitek. Some early versions of these microprocessors had heat dissipation problems. AMD later managed to establish itself as a serious contender with the K6 set of processors, which gave way to the very successful Athlon and Opteron. There were also other contenders, such as Centaur Technology and Rise Technology. VIA Technologies' energy-efficient C3 and C7 processors, which were designed by the Centaur company, have been sold for many years. Centaur's newest design, the VIA Nano, is their first processor with superscalar and speculative execution; it was, perhaps interestingly, introduced at about the same time as Intel's first "in-order" processor since the P5 Pentium, the Intel Atom. The instruction set architecture has twice been extended to a larger word size. In 1999-2003, AMD extended this 32-bit architecture to 64 bits and referred to it as x86-64 in early documents. Intel soon adopted AMD's architectural extensions under the name IA-32e, later using the name EM64T and finally using Intel 64.
15.
x86-64
–
x86-64 is the 64-bit version of the x86 instruction set. It supports vastly larger amounts of virtual memory and physical memory than is possible on its 32-bit predecessors. x86-64 also provides 64-bit general-purpose registers and numerous other enhancements, and it is fully backward compatible with 16-bit and 32-bit x86 code. The original specification, created by AMD and released in 2000, has been implemented by AMD, Intel, and VIA. The AMD K8 processor was the first to implement the architecture; this was the first significant addition to the x86 architecture designed by a company other than Intel. Intel was forced to follow suit and introduced a modified NetBurst family which was fully software-compatible with AMD's design. VIA Technologies introduced x86-64 in their VIA Isaiah architecture, with the VIA Nano. The x86-64 specification is distinct from the Intel Itanium architecture, which is not compatible on the native instruction set level with the x86 architecture; AMD64 was created as an alternative to the radically different IA-64 architecture. The first AMD64-based processor, the Opteron, was released in April 2003. AMD's processors implementing the AMD64 architecture include Opteron, Athlon 64, Athlon 64 X2, Athlon 64 FX, Athlon II, Turion 64, Turion 64 X2, Sempron, Phenom, Phenom II, FX, Fusion and Ryzen. The primary defining characteristic of AMD64 is the availability of 64-bit general-purpose processor registers and 64-bit integer arithmetic and logical operations, but the designers took the opportunity to make other improvements as well. Some of the most significant changes are described below. Pushes and pops on the stack default to 8-byte strides, and pointers are 8 bytes wide. Additional registers: in addition to increasing the size of the general-purpose registers, the number of named general-purpose registers is increased from eight in x86 to 16. AMD64 still has fewer registers than many common RISC instruction sets or VLIW-like machines such as the IA-64; however, an AMD64 implementation may have far more internal registers than the number of architectural registers exposed by the instruction set. Additional XMM registers: similarly, the number of 128-bit XMM registers is also increased from 8 to 16. Larger virtual address space: the AMD64 architecture defines a 64-bit virtual address format, of which the low-order 48 bits are used in current implementations; this allows up to 256 TB of virtual address space. The architecture definition allows this limit to be raised in future implementations to the full 64 bits. This is compared to just 4 GB for x86, and it means that very large files can be operated on by mapping the entire file into the process's address space, rather than having to map regions of the file into and out of the address space. Larger physical address space: the original implementation of the AMD64 architecture implemented 40-bit physical addresses; current implementations extend this to 48-bit physical addresses and therefore can address up to 256 TB of RAM. The architecture permits extending this to 52 bits in the future. For comparison, 32-bit x86 processors are limited to 64 GB of RAM in Physical Address Extension mode, or 4 GB of RAM without PAE mode. Any implementation therefore allows the same physical address limit as under long mode.
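As a small illustrative check of these numbers (a sketch, assuming an LP64 x86-64 platform such as Linux or macOS with GCC or Clang), the following C program shows the 8-byte pointer width mentioned above and derives the 256 TB figure from 48-bit addressing:

```c
#include <stdio.h>
#include <inttypes.h>

int main(void) {
    /* On an LP64 x86-64 system, pointers (and long) are 8 bytes wide,
       matching the 8-byte stack strides described above. */
    printf("sizeof(void *) = %zu bytes\n", sizeof(void *));
    printf("sizeof(long)   = %zu bytes\n", sizeof(long));

    /* 48 usable address bits give 2^48 bytes of virtual address space;
       dividing by 2^40 bytes (1 TB) yields 2^8 = 256 TB. */
    uint64_t vspace_bytes = UINT64_C(1) << 48;
    printf("2^48 bytes = %" PRIu64 " TB\n", vspace_bytes >> 40);
    return 0;
}
```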
16.
ARM architecture
–
ARM, originally Acorn RISC Machine, later Advanced RISC Machine, is a family of reduced instruction set computing (RISC) architectures for computer processors, configured for various environments. ARM Holdings also designs cores that implement this instruction set and licenses these designs to a number of companies that incorporate those core designs into their own products. A RISC-based computer design approach means processors require fewer transistors than typical complex instruction set computing (CISC) x86 processors in most personal computers. This approach reduces costs, heat and power use; these characteristics are desirable for light, portable, battery-powered devices, including smartphones, laptops and tablet computers, and other embedded systems. For supercomputers, which consume large amounts of electricity, ARM could also be a power-efficient solution. ARM Holdings periodically releases updates to the architecture and its core designs. Some older cores can also provide hardware execution of Java bytecodes. The ARMv8-A architecture, announced in October 2011, adds support for a 64-bit address space and 64-bit arithmetic. With over 100 billion ARM processors produced as of 2017, ARM is the most widely used instruction set architecture in terms of quantity produced. Currently, the widely used Cortex cores and older "classic" cores are available. The British computer manufacturer Acorn Computers first developed the Acorn RISC Machine architecture in the 1980s to use in its personal computers; its first ARM-based products were coprocessor modules for the BBC Micro series of computers. According to Sophie Wilson, all the tested processors at that time performed about the same, with about a 4 Mbit/second bandwidth. After testing all available processors and finding them lacking, Acorn decided it needed a new architecture. Inspired by white papers on the Berkeley RISC project, Acorn considered designing its own processor. Wilson developed the instruction set, writing a simulation of the processor in BBC BASIC that ran on a BBC Micro with a 6502 second processor. This convinced Acorn engineers they were on the right track, and Wilson approached Acorn's CEO, Hermann Hauser, and requested more resources. Hauser gave his approval and assembled a small team to implement Wilson's model in hardware. The official Acorn RISC Machine project started in October 1983. Acorn chose VLSI Technology as the silicon partner, as it was a source of ROMs and custom chips for Acorn. Wilson and Steve Furber led the design, implementing it with a similar efficiency ethos as the 6502. A key design goal was achieving low-latency input/output handling like the 6502: the 6502's memory access architecture had let developers produce fast machines without costly direct memory access hardware. The first samples of ARM silicon worked properly when first received and tested on 26 April 1985. Wilson subsequently rewrote BBC BASIC in ARM assembly language; the in-depth knowledge gained from designing the instruction set enabled the code to be very dense. The original aim of a principally ARM-based computer was achieved in 1987 with the release of the Acorn Archimedes. In 1992, Acorn once more won the Queen's Award for Technology for the ARM. The ARM2 featured a 32-bit data bus, a 26-bit address space and 27 32-bit registers.
17.
Compiler
–
A compiler is a computer program that transforms source code written in a programming language into another computer language, with the latter often having a binary form known as object code. The most common reason for converting source code is to create an executable program. The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a lower-level language. If the compiled program can run on a computer whose CPU or operating system is different from the one on which the compiler runs, the compiler is known as a cross-compiler. More generally, compilers are a specific type of translator. A program that translates from a low-level language to a higher-level one is a decompiler, and a program that translates between high-level languages is called a source-to-source compiler or transpiler. A language rewriter is usually a program that translates the form of expressions without a change of language. The term compiler-compiler is sometimes used to refer to a parser generator, a tool often used to help create the lexer and parser. A compiler is likely to perform many or all of the following operations: lexical analysis, preprocessing, parsing, semantic analysis, and code generation. Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementors invest significant effort to ensure compiler correctness. Software for early computers was written in assembly language. The notion of a high-level programming language dates back to 1943, with Konrad Zuse's Plankalkül, though no actual implementation occurred until the 1970s. The first actual compilers date from the 1950s; identifying the very first is hard, because there is subjectivity in deciding when programs become advanced enough to count as the full concept rather than a precursor. The year 1952 saw two important advances. Grace Hopper wrote the compiler for the A-0 programming language, though the A-0 functioned more as a loader or linker than as the modern notion of a full compiler. Also in 1952, the first autocode compiler was developed by Alick Glennie for the Mark 1 computer at the University of Manchester; this is considered by some to be the first compiled programming language. The FORTRAN team led by John Backus at IBM is generally credited as having introduced the first unambiguously complete compiler, in 1957. COBOL was an early language to be compiled on multiple architectures, in 1960. In many application domains the idea of using a higher-level language quickly caught on, and because of the expanding functionality supported by newer programming languages and the increasing complexity of computer architectures, compilers have become more complex. Early compilers were written in assembly language. The first self-hosting compiler, capable of compiling its own source code in a high-level language, was created in 1962 for the Lisp programming language by Tim Hart and Mike Levin at MIT. Since the 1970s, it has become common practice to implement a compiler in the language it compiles.
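To ground the first of those phases, here is a minimal sketch of lexical analysis in C. It is illustrative only: the token kinds and the lex function are invented for this example rather than taken from any real compiler, and it turns a character stream into a stream of number, identifier, and operator tokens:

```c
#include <ctype.h>
#include <stdio.h>

/* Token kinds produced by the lexical-analysis phase. */
enum kind { TOK_NUM, TOK_IDENT, TOK_OP, TOK_END };

/* A tiny lexer: consumes characters from s, writes one token's
   text into `text`, reports its kind via `k`, and returns the
   position where the next token starts. */
static const char *lex(const char *s, enum kind *k, char *text) {
    while (isspace((unsigned char)*s)) s++;        /* skip whitespace */
    int n = 0;
    if (*s == '\0') {
        *k = TOK_END;
    } else if (isdigit((unsigned char)*s)) {
        *k = TOK_NUM;
        while (isdigit((unsigned char)*s)) text[n++] = *s++;
    } else if (isalpha((unsigned char)*s)) {
        *k = TOK_IDENT;
        while (isalnum((unsigned char)*s)) text[n++] = *s++;
    } else {
        *k = TOK_OP;                               /* single-char operator */
        text[n++] = *s++;
    }
    text[n] = '\0';
    return s;
}

int main(void) {
    const char *src = "x1 = 42 + y";
    enum kind k;
    char text[64];
    for (;;) {
        src = lex(src, &k, text);
        if (k == TOK_END) break;
        printf("kind=%d text=%s\n", (int)k, text);
    }
    return 0;
}
```

A parser would then consume this token stream to build a syntax tree, which the later phases (semantic analysis, code generation) operate on.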
18.
Software license
–
A software license is a legal instrument governing the use or redistribution of software. Under United States copyright law, all software is copyright protected, in source code as well as object code form. The only exception is software in the public domain. Most distributed software can be categorized according to its license type. Two common categories for software under copyright law, and therefore with licenses which grant the licensee specific rights, are proprietary software and free and open-source software. Unlicensed software outside copyright protection is either public domain software or software which is non-distributed, non-licensed and handled as an internal business trade secret. Contrary to popular belief, distributed unlicensed software is copyright protected; examples of this are unauthorized software leaks or software projects which are placed on public software repositories like GitHub without a specified license. As voluntarily handing software into the public domain is problematic in some international law domains, there are also licenses granting PD-like rights. Under 17 U.S.C. §117, the owner of a copy of software is legally entitled to use that copy of software. Hence, if the end-user of software is the owner of the respective copy, the end-user may legally use the software without an additional license. As many proprietary licenses only enumerate the rights that the user already has under 17 U.S.C. §117, and yet proclaim to take rights away from the user, such contracts may lack consideration. Proprietary software licenses often proclaim to give software publishers more control over the way their software is used by keeping ownership of each copy of software with the software publisher. Courts have examined the form of the relationship, whether it is a lease or a purchase, in cases such as UMG v. Augusto and Vernor v. Autodesk. The ownership of digital goods, like software applications and video games, is challenged by "licensed, not sold" models; the Swiss-based company UsedSoft innovated the resale of business software. This feature of proprietary software licenses means that certain rights regarding the software are reserved by the software publisher; therefore, it is typical of EULAs to include terms which define the permitted uses of the software. The most significant effect of this form of licensing is that, if ownership of the software remains with the software publisher, then the end-user must accept the software license. In other words, without acceptance of the license, the end-user may not use the software at all. One example of such a proprietary software license is the license for Microsoft Windows. The most common licensing models are per single user or per user in the appropriate volume discount level. Licensing per concurrent/floating user also occurs, where all users in a network have access to the program, but only a specific number at the same time. Another license model is licensing per dongle, which allows the owner of the dongle to use the program on any computer. Licensing per server, CPU or points, regardless of the number of users, is common practice, as are site or company licenses.
19.
Open-source model
–
Open-source software may be developed in a collaborative public manner. According to scientists who have studied it, open-source software is a prominent example of open collaboration. A 2008 report by the Standish Group states that adoption of open-source software models has resulted in savings of about $60 billion per year to consumers. In the early days of computing, programmers and developers shared software in order to learn from each other; eventually the open-source notion moved to the wayside with the commercialization of software in the years 1970-1980. In 1997, Eric Raymond published The Cathedral and the Bazaar, which was one factor in motivating Netscape Communications Corporation to release its Netscape Communicator Internet suite as free software; this source code subsequently became the basis behind SeaMonkey, Mozilla Firefox, Thunderbird and KompoZer. Netscape's act prompted Raymond and others to look into how to bring the Free Software Foundation's free software ideas and perceived benefits to the commercial software industry. The new term they chose was "open source", which was soon adopted by Bruce Perens, publisher Tim O'Reilly, Linus Torvalds, and others. The Open Source Initiative was founded in February 1998 to encourage use of the new term. A Microsoft executive publicly stated in 2001 that "open source is an intellectual property destroyer. I can't imagine something that could be worse than this for the software business." Yet IBM, Oracle, Google and State Farm are just a few of the companies with a serious public stake in today's competitive open-source market, and there has been a significant shift in the corporate philosophy concerning the development of FOSS. The free software movement was launched in 1983. In 1998, a group of individuals advocated that the term free software should be replaced by open-source software as an expression which is less ambiguous. Software developers may want to publish their software with an open-source license. The Open Source Definition, notably, presents an open-source philosophy and further defines the terms of usage, modification and redistribution of open-source software. Software licenses grant rights to users which would otherwise be reserved by law to the copyright holder. Several open-source software licenses have qualified within the boundaries of the Open Source Definition. The "open source" label came out of a strategy session held on April 7, 1998 in Palo Alto, in reaction to Netscape's January 1998 announcement of a source code release for Navigator. The participants used the opportunity before the release of Navigator's source code to clarify a potential confusion caused by the ambiguity of the word "free" in English. Many people claimed that the birth of the Internet in 1969 started the open-source movement. The Free Software Foundation, started in 1985, intended the word "free" to mean freedom to distribute and not freedom from cost; since a great deal of free software already was free of charge, such software became associated with zero cost. The Open Source Initiative was formed in February 1998 by Eric Raymond and Bruce Perens. They sought to bring a higher profile to the practical benefits of freely available source code, and they wanted to bring major software businesses and other high-tech industries into open source. Perens attempted to register "open source" as a service mark for the OSI. The Open Source Initiative's definition is recognized by governments internationally as the standard or de facto definition; OSI uses The Open Source Definition to determine whether it considers a software license open source.
20.
Machine code
–
Machine code or machine language is a set of instructions executed directly by a computer's central processing unit (CPU). Each instruction performs a very specific task, such as a load, a jump, or an arithmetic operation on a unit of data in a CPU register or memory. Every program directly executed by a CPU is made up of a series of such instructions. Numerical machine code may be regarded as the lowest-level representation of a compiled or assembled computer program, or as a primitive and hardware-dependent programming language. While it is possible to write programs directly in numerical machine code, it is tedious and error-prone to manage individual bits and calculate numerical addresses; for this reason, machine code is almost never used to write programs in modern contexts. Almost all practical programs today are written in higher-level languages or assembly language. Programs in interpreted languages are not represented by machine code themselves; however, the interpreter, which may be seen as an executor or processor performing the instructions of the source code, typically consists of directly executable machine code. Every processor or processor family has its own machine code instruction set. Instructions are patterns of bits that by physical design correspond to different commands to the machine; thus, the instruction set is specific to a class of processors using the same architecture. Successor or derivative processor designs often include all the instructions of a predecessor. Systems may also differ in other details, such as memory arrangement, operating systems, or peripheral devices; because a program normally relies on such factors, different systems will not run the same machine code. A machine code instruction set may have all instructions of the same length, or it may have variable-length instructions. How the patterns are organized varies strongly with the particular architecture and often also with the type of instruction. Not all machines or individual instructions have explicit operands. An accumulator machine has a combined left operand and result in an implicit accumulator for most arithmetic instructions; other architectures have accumulator versions of common instructions, with the accumulator regarded as one of the general registers by longer instructions. A stack machine has most or all of its operands on an implicit stack. Special-purpose instructions also often lack explicit operands. This distinction between explicit and implicit operands is important in machine code generators, especially in the register allocation and live range tracking parts. A good code optimizer can track implicit as well as explicit operands, which may allow more frequent constant propagation and constant folding of registers. A computer program is a sequence of instructions that are executed by a CPU. While simple processors execute instructions one after another, superscalar processors are capable of executing several instructions at once. Program flow may be influenced by special jump instructions that transfer execution to an instruction other than the numerically following one; conditional jumps are taken or not, depending on some condition. A much more readable rendition of machine language, called assembly language, uses mnemonic codes to refer to machine code instructions, rather than using the instructions' numeric values directly. For example, on the Zilog Z80 processor, the machine code 00000101, which causes the CPU to decrement the B processor register, would be represented in assembly language as DEC B. The MIPS instruction set provides a specific example of a machine code whose instructions are always 32 bits long.
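The idea that machine code is simply bytes the CPU executes directly can be demonstrated by placing hand-written instruction bytes into executable memory and calling them. The sketch below assumes an x86-64 Linux (or similar POSIX) system; the six bytes encode mov eax, 42 followed by ret, and hardened systems that forbid writable-and-executable mappings may refuse the mmap call:

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* x86-64 machine code for:  mov eax, 42 ; ret  */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* Allocate a page of memory we are allowed to execute. */
    void *mem = mmap(NULL, sizeof code,
                     PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(mem, code, sizeof code);

    /* Treat the raw bytes as a function and call them directly. */
    int (*fn)(void) = (int (*)(void))mem;
    printf("machine code returned %d\n", fn());   /* prints 42 */

    munmap(mem, sizeof code);
    return 0;
}
```

The 0xB8 byte is the opcode for "move an immediate 32-bit value into the EAX register", the next four bytes are the little-endian value 42, and 0xC3 is the return instruction; the CPU executes the buffer exactly as it would the compiler's own output.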
21.
Computer programming
–
Computer programming is a process that leads from an original formulation of a computing problem to executable computer programs. Source code is written in one or more programming languages. The purpose of programming is to find a sequence of instructions that will automate performing a specific task or solving a given problem. The process of programming thus often requires expertise in many different subjects, including knowledge of the application domain and specialized algorithms. Related tasks include testing, debugging, and maintaining the source code, as well as implementation of the build system. Software engineering combines engineering techniques with software development practices; within software engineering, programming is regarded as one phase in a software development process. There is an ongoing debate on the extent to which the writing of programs is an art form, a craft, or an engineering discipline. In general, good programming is considered to be the application of all three, with the goal of producing an efficient and evolvable software solution. Because the discipline covers many areas, which may or may not include critical applications, in most cases the discipline is self-governed by the entities which require the programming, and sometimes very strict environments are defined. Another ongoing debate is the extent to which the language used in writing computer programs affects the form that the final program takes. Different language patterns yield different patterns of thought; this idea challenges the possibility of representing the world perfectly with language, because it acknowledges that the mechanisms of any language condition the thoughts of its speaker community. In the 1880s, Herman Hollerith invented the concept of storing data in machine-readable form. However, with the concept of the stored-program computer introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory. Machine code was the language of early programs, written in the instruction set of the particular machine. Assembly languages were developed that let the programmer specify instructions in a text format, with abbreviations for each operation code. However, because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets also have different assembly languages. High-level languages allow the programmer to write programs in terms that are more abstract; they harness the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula directly. Programs were mostly still entered using punched cards or paper tape; see computer programming in the punch card era. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers, and text editors were developed that allowed changes and corrections to be made much more easily than with punched cards. Whatever the approach to development may be, the final program must satisfy some fundamental properties.
22.
Programming language
–
A programming language is a formal computer language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs to control the behavior of a machine or to express algorithms. From the early 1800s, programs were used to direct the behavior of machines such as Jacquard looms. Thousands of different programming languages have been created, mainly in the computer field, and more are still being created. Many programming languages require computation to be specified in an imperative form, while other languages use other forms of program specification, such as the declarative form. The description of a language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document, while other languages have a dominant implementation that is treated as a reference. Some languages have both, with the language defined by a standard and extensions taken from the dominant implementation being common. A programming language is a notation for writing programs, which are specifications of a computation or algorithm. Some, but not all, authors restrict the term programming language to those languages that can express all possible algorithms. For example, PostScript programs are frequently created by another program to control a computer printer or display; an illustrative sketch of one program generating another appears below. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language. In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way. Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The theory of computation classifies languages by the computations they are capable of expressing; all Turing-complete languages can implement the same set of algorithms. ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete. Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages. Programming languages may, however, share the syntax with markup languages if a computational semantics is defined; XSLT, for example, is a Turing-complete XML dialect. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing-complete subset. The term computer language is sometimes used interchangeably with programming language.
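Since the entry notes that PostScript programs are frequently created by other programs, here is a hedged C sketch of the same idea, one program emitting the source code of another (the file and program names in the usage line are hypothetical):

    #include <stdio.h>

    /* Write a small C program to stdout. Redirecting the output to a
     * file and compiling it yields a runnable program that no human
     * wrote by hand, much as PostScript is usually generated by other
     * software rather than written directly.                          */
    int main(void)
    {
        printf("#include <stdio.h>\n");
        printf("int main(void) {\n");
        printf("    puts(\"generated, not hand-written\");\n");
        printf("    return 0;\n");
        printf("}\n");
        return 0;
    }

A plausible use, with hypothetical names: ./gen > out.c && cc out.c -o out && ./out.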
23.
Simon Peyton Jones
–
Simon Peyton Jones FRS is a British computer scientist who researches the implementation and applications of functional programming languages, particularly lazy functional programming. He is an honorary Professor of Computer Science at the University of Glasgow. Peyton Jones graduated from Trinity College, Cambridge in 1980 and went on to complete the Cambridge Diploma in Computer Science. He worked in industry for two years before serving as a lecturer at University College London and, from 1990 to 1998, as a professor at the University of Glasgow. Since 1998 he has worked as a researcher at Microsoft Research in Cambridge. He is a major contributor to the design of the Haskell programming language and a lead developer of the Glasgow Haskell Compiler (GHC). He was also a contributor to the 1999 book Cybernauts Awake. Peyton Jones chairs the Computing At School group, an organisation which aims to promote the teaching of computer science at school. In 2004 he was inducted as a Fellow of the Association for Computing Machinery for contributions to programming languages. In 2011 he received membership in the Academia Europaea, and in the same year he and Simon Marlow were awarded the SIGPLAN Programming Languages Software Award for their work on GHC. In 2013 he received an honorary doctorate from the University of Glasgow. He was elected a Fellow of the Royal Society in 2016.
24.
University of Glasgow
–
The University of Glasgow is the fourth-oldest university in the English-speaking world, the second-oldest in Scotland after St Andrews, and one of Scotland's four ancient universities. Along with the University of Edinburgh, the University was part of the Scottish Enlightenment during the 18th century. It is currently a member of Universitas 21, the international network of research universities, and the Russell Group. Glasgow University served its students by preparing them for the professions: the law, medicine, the civil service, and teaching. It also trained smaller but growing numbers for careers in science. Originally located in the city's High Street, since 1870 the main University campus has been located at Gilmorehill in the West End of the city; additionally, a number of university buildings are located elsewhere, such as the Veterinary School in Bearsden. The universities of St Andrews, Glasgow and Aberdeen were ecclesiastical foundations. As one of the Ancient Universities of the United Kingdom, Glasgow University is one of only eight institutions to award undergraduate master's degrees in certain disciplines. The University has been without its original Bull since the mid-sixteenth century. In 1560, during the political unrest accompanying the Scottish Reformation, the then chancellor, Archbishop James Beaton, a supporter of the Marian cause, fled to France. He took with him, for safe-keeping, many of the archives and valuables of the Cathedral and the University, including the Mace; although the Mace was sent back in 1590, the archives were not. If they had not been lost by this time, they certainly went astray during the French Revolution, when the Scots College was under threat and its records and valuables were moved for safe-keeping out of the city of Paris. The Bull remains the authority by which the University awards degrees. Teaching at the University began in the chapterhouse of Glasgow Cathedral, subsequently moving to nearby Rottenrow. The University was given 13 acres of land belonging to the Black Friars on High Street by Mary, Queen of Scots, in 1563. The Lion and Unicorn Staircase was also transferred from the old site and is now attached to the Main Building. John Anderson, while professor of natural philosophy, pioneered vocational education, and to continue this work in his will he founded Anderson's College. In 1973, Delphine Parrott became its first woman professor, as Gardiner Professor of Immunology. In October 2014, the university court voted for the University to become the first academic institution in Europe to divest from the fossil fuel industry. The University is currently spread over a number of different campuses; the main one is the Gilmorehill campus, in Hillhead. The University has also established joint departments with the Glasgow School of Art. The University's initial accommodation, including Glasgow University Library, was part of the complex of religious buildings in the precincts of Glasgow Cathedral. In the mid-seventeenth century, the Hamilton Building was replaced with a very grand two-court building with a decorated west front facing the High Street, called the Nova Erectio. Over the following centuries, the University's size and scope continued to expand. In 1757 it built the Macfarlane Observatory and later Scotland's first public museum. It was a centre of the Scottish Enlightenment and subsequently of the Industrial Revolution, and its expansion in the High Street was constrained.
25.
Profiling (computer programming)
–
Profiling is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization. Profiling is achieved by instrumenting either the source code or its binary executable form using a tool called a profiler. Profilers may use a number of different techniques, such as event-based, statistical, instrumented, and simulation methods, and they collect data using a wide variety of mechanisms, including hardware interrupts, code instrumentation, instruction set simulation, operating system hooks, and performance counters. Profilers are used in the performance engineering process. The size of a trace is linear in the program's instruction path length; a trace may therefore be initiated at one point in a program and terminated at another point to limit the output. An ongoing interaction with the hypervisor provides the opportunity to switch a trace on or off at any desired point during execution, in addition to viewing ongoing metrics about the program; it also provides the opportunity to suspend asynchronous processes at critical points to examine interactions with other processes in more detail. Early timer-driven measurements of program behaviour were an example of sampling; in early 1974, instruction-set simulators permitted full trace and other performance-monitoring features. Profiler-driven program analysis on Unix dates back to at least 1979, when Unix systems included a basic tool, prof, which listed each function and how much of program execution time it used. In 1982 gprof extended the concept to a complete call graph analysis. In 1994, Amitabh Srivastava and Alan Eustace of Digital Equipment Corporation published a paper describing ATOM. The ATOM platform converts a program into its own profiler: at compile time, it inserts code into the program to be analyzed, and that inserted code outputs analysis data. This technique, modifying a program to analyze itself, is known as instrumentation; a hand-rolled sketch of the idea appears below. In 2004 both the gprof and ATOM papers appeared on the list of the 50 most influential PLDI papers for the 20-year period ending in 1999. Flat profilers compute the average call times from the calls, and do not break down call times based on the callee or the context. Call graph profilers show the call times and frequencies of the functions, together with the call chains involved; in some tools, full context is not preserved. Input-sensitive profilers add a further dimension to flat or call-graph profilers by relating performance measures to features of the input workloads, such as input size or input values; they generate charts that characterize how an application's performance scales as a function of its input. Profilers, which are also programs themselves, analyze target programs by collecting information on their execution, and based on their data granularity and on how they collect information, they are classified as event-based or statistical profilers. Managed runtimes such as Java expose a profiling agent interface; the runtime then provides various callbacks into the agent, for trapping events like method JIT compilation, method entry and exit, object creation, and so on.
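The ATOM-style idea of a program carrying its own measurement code can be sketched by hand. The C fragment below (our illustration; names such as work_instrumented are hypothetical) counts and times calls to one function and prints a tiny flat profile at exit; real tools such as prof and gprof automate this kind of instrumentation at compile time:

    #include <stdio.h>
    #include <time.h>

    static long   work_calls;     /* number of calls observed         */
    static double work_seconds;   /* total CPU time spent in work()   */

    static long work(long n)      /* the function being profiled      */
    {
        long sum = 0;
        for (long i = 0; i < n; i++)
            sum += i % 7;
        return sum;
    }

    static long work_instrumented(long n)  /* inserted measurement code */
    {
        clock_t start = clock();
        long result = work(n);
        work_calls++;
        work_seconds += (double)(clock() - start) / CLOCKS_PER_SEC;
        return result;
    }

    int main(void)
    {
        for (int i = 0; i < 100; i++)
            work_instrumented(100000);
        printf("work: %ld calls, %.6f s total, %.6f s avg\n",
               work_calls, work_seconds, work_seconds / work_calls);
        return 0;
    }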
26.
Microsoft Research
–
Microsoft Research is the research division of Microsoft. It is co-led by corporate vice presidents Peter Lee and Jeannette Wing; Lee leads Microsoft Research New Experiences and Technologies, while Wing leads the organization's core research labs. Microsoft has research labs around the world. Microsoft Research Redmond was founded on the Microsoft Redmond campus in 1991; it has about 350 researchers and is headed by Eric Horvitz. Microsoft Research Cambridge was founded in the United Kingdom in 1997 by Roger Needham and is headed by Christopher Bishop. Microsoft Research Asia was founded in Beijing in November 1998. Microsoft Research India, located in Bangalore, was founded in January 2005; it collaborates extensively with research institutions and universities in India and abroad to support scientific progress and innovation. Microsoft Research Station Q, located on the campus of the University of California, Santa Barbara, has collaborators who explore theoretical and experimental approaches to creating the quantum analog of the traditional bit, the qubit. The group is led by Dr. Michael Freedman, a mathematician who has won the prestigious Fields Medal. Microsoft Research New England was established in 2008 in Cambridge, Massachusetts, and Microsoft Research New York City was established on May 3, 2012; Jennifer Chayes serves as Managing Director of both locations. Microsoft Research Silicon Valley, located in Mountain View, California, was founded in August 2001 and closed in September 2014.
27.
Cambridge
–
Cambridge is a university city and the county town of Cambridgeshire, England, on the River Cam about 50 miles north of London. At the United Kingdom Census 2011, its population was 123,867. There is archaeological evidence of settlement in the area in the Bronze Age and in Roman Britain, and under Viking rule Cambridge became an important trading centre. The first town charters were granted in the 12th century, although city status was not conferred until 1951. The University of Cambridge, founded in 1209, is one of the top five universities in the world. The university includes the Cavendish Laboratory, King's College Chapel, and the Cambridge University Library; the city's skyline is dominated by the last two buildings, along with the spire of the Our Lady and the English Martyrs Church, the chimney of Addenbrooke's Hospital, and St John's College Chapel tower. Anglia Ruskin University, which evolved from the Cambridge School of Art, also has a campus in the city. Cambridge is at the heart of the high-technology Silicon Fen, with industries such as software and bioscience and many start-up companies spun out of the university. More than 40% of the workforce has a higher education qualification. The Cambridge Biomedical Campus, one of the largest biomedical research clusters in the world, is soon to be home to AstraZeneca, a hotel, and the relocated Papworth Hospital. Parker's Piece hosted the first ever game of Association football; the Strawberry Fair music and arts festival and Midsummer Fairs are held on Midsummer Common, and the annual Cambridge Beer Festival takes place on Jesus Green. The city is adjacent to the M11 and A14 roads. Settlements have existed around the Cambridge area since prehistoric times; the earliest clear evidence of occupation is the remains of a 3,500-year-old farmstead. The principal Roman site is a small fort, Duroliponte, on Castle Hill, just northwest of the city centre around the location of the earlier British village. The fort was bounded on two sides by the lines formed by the present Mount Pleasant, continuing across Huntingdon Road into Clare Street; the eastern side followed Magrath Avenue, with the southern side running near to Chesterton Lane and Kettle's Yard before turning northwest at Honey Hill. It was constructed around AD 70 and converted to civilian use around 50 years later. Evidence of more widespread Roman settlement has been discovered, including numerous farmsteads, and evidence exists that the invading Anglo-Saxons had begun occupying the area by the end of the century. Their settlement, also on and around Castle Hill, became known as Grantebrycge; Anglo-Saxon grave goods have been found in the area. During this period, Cambridge benefited from good trade links across the hard-to-travel fenlands. By the 7th century, the town was less significant and was described by Bede as a little ruined city containing the burial site of Etheldreda. Cambridge was on the border between the East and Middle Anglian kingdoms, and the settlement slowly expanded on both sides of the river. The arrival of the Vikings was recorded in the Anglo-Saxon Chronicle in 875, and Viking rule, the Danelaw, had been imposed by 878. The Vikings' vigorous trading habits caused the town to grow rapidly. During this period the centre of the town shifted from Castle Hill on the left bank of the river to the area now known as the Quayside on the right bank. In 1068, two years after his conquest of England, William of Normandy built a castle on Castle Hill; like the rest of the newly conquered kingdom, Cambridge fell under the control of the King and his deputies.
28.
Source code
–
In computing, source code is any collection of computer instructions, possibly with comments, written using a human-readable programming language, usually as ordinary text. The source code of a program is designed to facilitate the work of computer programmers, who specify the actions to be performed by a computer mostly by writing source code. The source code is often transformed by an assembler or compiler into binary machine code understood by the computer; the machine code might then be stored for execution at a later time. Alternatively, source code may be interpreted and thus immediately executed. Most application software is distributed in a form that includes only executable files; if the source code were included, it would be useful to a user, programmer, or system administrator, any of whom might wish to study or modify the program. The Linux Information Project defines source code as the version of software as it is originally written by a human in plain text. The notion of source code may also be taken more broadly, to include machine code and notations in graphical languages; on this view, source code is construed to include machine code, very high-level languages, and executable graphical representations of systems. Often there are several steps of program translation or minification between the source code typed by a human and an executable program. The earliest programs for stored-program computers were entered in binary through the front panel switches of the computer; this first-generation programming language had no distinction between source code and machine code. When IBM first offered software to work with its machines, the source code was provided at no additional charge; at that time, the cost of developing and supporting software was included in the price of the hardware. For decades, IBM distributed source code with its software product licenses, until 1983. Most early computer magazines published source code as type-in programs. Source code can also be stored in a database or elsewhere. The source code for a piece of software may be contained in a single file or in many files. Though the practice is uncommon, a program's source code can be written in different programming languages; for example, a program written primarily in the C programming language might have portions written in assembly language. In some languages, such as Java, the integration of separately compiled components can be done at run time. The code base of a programming project is the larger collection of all the source code of all the computer programs which make up the project. It has become common practice to maintain code bases in version control systems. Moderately complex software customarily requires the compilation or assembly of several, sometimes dozens or even hundreds, of different source code files; in these cases, instructions for compilation, such as a Makefile, are included with the source code. A minimal sketch of the source-to-executable pipeline appears below.
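To tie the entry's terms together, here is a minimal sketch of how one source file becomes an executable; the file name hello.c is hypothetical, while -S, -c, and -o are standard gcc options:

    /* hello.c
     *
     * One plausible toolchain invocation, step by step:
     *   gcc -S hello.c          translate source to assembly (hello.s)
     *   gcc -c hello.c          assemble to binary object code (hello.o)
     *   gcc hello.o -o hello    link the object code into an executable
     *   ./hello                 run the resulting machine code
     */
    #include <stdio.h>

    int main(void)
    {
        puts("Hello from compiled machine code");
        return 0;
    }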