Computer programming is the process of designing and building an executable computer program to accomplish a specific computing task. Programming involves tasks such as analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and implementing algorithms in a chosen programming language. The source code of a program is written in one or more languages that are intelligible to programmers, rather than in machine code, which is directly executed by the central processing unit. The purpose of programming is to find a sequence of instructions that will automate the performance of a task on a computer, typically to solve a given problem. The process of programming thus requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic. Tasks accompanying and related to programming include testing, source code maintenance, implementation of build systems, and management of derived artifacts, such as the machine code of computer programs.
These might be considered part of the programming process, but the term software development is used for this larger process, with the terms programming, implementation, or coding reserved for the actual writing of code. Software engineering combines engineering techniques with software development practices; reverse engineering is the opposite process. A hacker is any skilled computer expert who uses their technical knowledge to overcome a problem, though in common usage the word often means a security hacker. Programmable devices have existed at least as far back as 1206 AD, when the automata of Al-Jazari could be programmed, via pegs and cams, to play various rhythms and drum patterns. However, the first computer program is dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. Women would continue to dominate the field of computer programming until the mid-1960s. In the 1880s, Herman Hollerith invented the concept of storing data in machine-readable form.
A control panel added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604 was programmed by control panels in a similar way. However, with the stored-program computers introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory. Machine code was the language of early programs, written in the instruction set of the particular machine, in binary notation. Assembly languages were soon developed, letting the programmer specify instructions in a text format, with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, two machines with different instruction sets have different assembly languages. Kathleen Booth created one of the first assembly languages in 1950 for various computers at Birkbeck College. High-level languages allow the programmer to write programs in terms that are syntactically richer and more abstract, making the code targetable to varying machine instruction sets via compilation.
The first compiler for a programming language was developed by Grace Hopper. When Hopper went to work on UNIVAC in 1949, she brought the idea of using compilers with her. Compilers harness the power of computers to make programming easier, for example by allowing programmers to specify calculations as formulas in infix notation. FORTRAN, the first widely used high-level language to have a functional implementation, which permitted the abstraction of reusable blocks of code, came out in 1957. In 1951, Frances E. Holberton developed the first sort-merge generator, which ran on the UNIVAC I. Adele Mildred Koss, another woman working at UNIVAC, developed a precursor to report generators. In the USSR, Kateryna Yushchenko developed the Address programming language for the MESM in 1955. The idea for COBOL began in 1959, when Mary K. Hawes, who worked for Burroughs Corporation, set up a meeting to discuss creating a common business language; she invited six people, including Grace Hopper.
Hopper was involved in developing COBOL as a business language and in creating "self-documenting" programming. Her contribution to COBOL was based on her own programming language, FLOW-MATIC. In 1961, Jean E. Sammet developed FORMAC; she later published Programming Languages: History and Fundamentals, which went on to be a standard work on programming languages. Programs were still entered using punched cards or paper tape (see computer programming in the punch card era). By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into computers. Frances Holberton created code to allow keyboard input while she worked at UNIVAC. Text editors were developed that allowed changes and corrections to be made much more easily than with punched cards. Sister Mary Kenneth Keller worked on developing the programming language BASIC as a graduate student at Dartmouth in the 1960s. One of the first object-oriented programming languages, Smalltalk, was developed by seven programmers, including Adele Goldberg, in the 1970s.
In 1985, Radia Perlman developed the Spanning Tree Protocol, which allows network bridges to compute a loop-free topology.
Robert "Rob" C. Pike is author, he is best known for his work on the Go programming language and at Bell Labs, where he was a member of the Unix team and was involved in the creation of the Plan 9 from Bell Labs and Inferno operating systems, as well as the Limbo programming language. He co-developed the Blit graphical terminal for Unix. Pike is the sole inventor named in AT&T's US patent 4,555,775 or "backing store patent", part of the X graphic system protocol and one of the first software patents. Over the years Pike has written many text editors. Pike, with Brian Kernighan, is the co-author of The Practice of Programming and The Unix Programming Environment. With Ken Thompson he is the co-creator of UTF-8. Pike developed lesser systems such as the vismon program for displaying faces of email authors. Pike appeared once on Late Night with David Letterman, as a technical assistant to the comedy duo Penn & Teller. Pike works for Google, where he is involved in the creation of the programming languages Go and Sawzall.
Pike is married to illustrator Renée French. He also designed the plumber, the interprocess communication mechanism used in Plan 9 and Inferno, and created Mark V. Shaney, an artificial Usenet poster.
Stack (abstract data type)
In computer science, a stack is an abstract data type that serves as a collection of elements, with two principal operations: push, which adds an element to the collection, and pop, which removes the most recently added element that has not yet been removed. The order in which elements come off a stack gives rise to its alternative name, LIFO (last in, first out). Additionally, a peek operation may give access to the top element without modifying the stack. The name "stack" for this type of structure comes from the analogy to a set of physical items stacked on top of each other: it is easy to take an item off the top of the stack, while getting to an item deeper in the stack may require taking off multiple other items first. Considered as a linear data structure, or more abstractly a sequential collection, the push and pop operations occur only at one end of the structure, referred to as the top of the stack; this makes it possible to implement a stack with a singly linked list and a pointer to the top element. A stack may be implemented to have a bounded capacity.
If the stack is full and does not contain enough space to accept another entity, the stack is considered to be in an overflow state. The pop operation removes an item from the top of the stack. Stacks entered the computer science literature in 1946, when Alan M. Turing used the terms "bury" and "unbury" as a means of calling and returning from subroutines; subroutines had already been implemented in Konrad Zuse's Z4 in 1945. Klaus Samelson and Friedrich L. Bauer of the Technical University of Munich proposed the idea in 1955 and filed a patent in 1957, and in March 1988 Bauer received the Computer Pioneer Award for the invention of the stack principle. The same concept was developed independently by the Australian Charles Leonard Hamblin in the first half of 1954. Stacks are often described by analogy to a spring-loaded stack of plates in a cafeteria: clean plates are placed on top of the stack, pushing down any already there, and when a plate is removed, the one below it pops up to become the new top. In many implementations, a stack has more operations than just "push" and "pop".
An example is "top of stack", or "peek", which observes the top-most element without removing it from the stack. Since this can be done with a "pop" followed by a "push" of the same data, it is not essential. An underflow condition can occur in the "stack top" operation if the stack is empty, just as with "pop". Many implementations also provide a function that simply returns whether the stack is empty. A stack can be implemented either through an array or a linked list. What identifies the data structure as a stack in either case is not the implementation but the interface: the user is only allowed to pop or push items onto the array or linked list, with few other helper operations. The following demonstrates both implementations. An array can be used to implement a stack, with the first element (at index 0) as the bottom, so that array[0] is the first element pushed onto the stack and the last element popped off. The program must keep track of the size of the stack, using a variable top that records the number of items pushed so far, and therefore points to the place in the array where the next element is to be inserted.
Thus, the stack itself can be implemented as a three-element structure:

    structure stack:
        maxsize : integer
        top : integer
        items : array of item

    procedure initialize(stk : stack, size : integer):
        stk.items ← new array of size items, initially empty
        stk.maxsize ← size
        stk.top ← 0

The push operation adds an element and increments the top index, after checking for overflow:

    procedure push(stk : stack, x : item):
        if stk.top = stk.maxsize:
            report overflow error
        else:
            stk.items[stk.top] ← x
            stk.top ← stk.top + 1

Similarly, pop checks for underflow, then decrements the top index and returns the item that was previously the top one:

    procedure pop(stk : stack):
        if stk.top = 0:
            report underflow error
        else:
            stk.top ← stk.top − 1
            r ← stk.items[stk.top]
            return r

Using a dynamic array, it is possible to implement a stack that can grow or shrink as much as needed. The size of the stack is simply the size of the dynamic array, which is an efficient implementation since adding items to or removing items from the end of a dynamic array requires amortized O(1) time.
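As a concrete illustration of the dynamic-array approach, the following is a minimal sketch in Go (not from the original text; the IntStack name and error messages are invented for this example). A Go slice serves as the dynamic array, so append gives amortized O(1) pushes:

    package main

    import "fmt"

    // IntStack is a stack of ints backed by a dynamic array (a Go slice).
    type IntStack struct {
        items []int
    }

    // Push adds x on top of the stack; append grows the slice as needed.
    func (s *IntStack) Push(x int) {
        s.items = append(s.items, x)
    }

    // Pop removes and returns the top element, reporting underflow when empty.
    func (s *IntStack) Pop() (int, error) {
        if len(s.items) == 0 {
            return 0, fmt.Errorf("underflow: pop from empty stack")
        }
        top := s.items[len(s.items)-1]
        s.items = s.items[:len(s.items)-1]
        return top, nil
    }

    // Peek returns the top element without removing it.
    func (s *IntStack) Peek() (int, error) {
        if len(s.items) == 0 {
            return 0, fmt.Errorf("underflow: peek on empty stack")
        }
        return s.items[len(s.items)-1], nil
    }

    func main() {
        var s IntStack
        s.Push(1)
        s.Push(2)
        if v, err := s.Pop(); err == nil {
            fmt.Println(v) // prints 2: last in, first out
        }
    }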
Another option for implementing stacks is to use a singly linked list. A stack is then a pointer to the "head" of the list, with a counter to keep track of the size of the list:

    structure frame:
        data : item
        next : frame or nil

    structure stack:
        head : frame or nil
        size : integer

    procedure initialize(stk : stack):
        stk.head ← nil
        stk.size ← 0

Pushing and popping items happens at the head of the list. Some languages, notably those in the Forth family, are designed around language-defined stacks that are directly visible to and manipulated by the programmer, and languages such as Common Lisp provide built-in push and pop operators for treating lists as stacks. Several of the C++ standard library container types provide push_back and pop_back operations with LIFO semantics.
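For comparison with the array version above, here is a hedged Go sketch of the linked-list variant just described, mirroring the frame/head/size structure (ListStack and frame are invented names for this example):

    package main

    import "fmt"

    // frame is one node of the singly linked list.
    type frame struct {
        data int
        next *frame
    }

    // ListStack is a stack implemented as a pointer to the head of a list.
    type ListStack struct {
        head *frame
        size int
    }

    // Push inserts a new frame at the head of the list.
    func (s *ListStack) Push(x int) {
        s.head = &frame{data: x, next: s.head}
        s.size++
    }

    // Pop removes the head frame and returns its data, reporting underflow when empty.
    func (s *ListStack) Pop() (int, error) {
        if s.head == nil {
            return 0, fmt.Errorf("underflow: pop from empty stack")
        }
        x := s.head.data
        s.head = s.head.next
        s.size--
        return x, nil
    }

    func main() {
        var s ListStack
        s.Push(10)
        s.Push(20)
        v, _ := s.Pop()
        fmt.Println(v, s.size) // prints 20 1
    }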
Plan 9 from Bell Labs
Plan 9 from Bell Labs is a distributed operating system, originating in the Computing Sciences Research Center at Bell Labs in the mid-1980s and building on UNIX concepts first developed there in the late 1960s. The final official release was in early 2015. Under Plan 9, UNIX's "everything is a file" metaphor was extended via a pervasive network-centric filesystem, and the graphical user interface was assumed as a basis for all functionality, though the system retained a text-centric ideology. The name Plan 9 from Bell Labs is a reference to the Ed Wood 1959 cult science fiction Z-movie Plan 9 from Outer Space, and Glenda, the Plan 9 Bunny, is a reference to Wood's film Glen or Glenda. The system continues to be used and developed by operating system researchers and hobbyists. Plan 9 from Bell Labs was developed, starting in the mid-1980s, by members of the Computing Science Research Center at Bell Labs, the same group that developed Unix and C. The Plan 9 team was led by Rob Pike, Ken Thompson, Dave Presotto and Phil Winterbottom, with support from Dennis Ritchie as head of the Computing Techniques Research Department.
Over the years, many notable developers have contributed to the project, including Brian Kernighan, Tom Duff, Doug McIlroy, Bjarne Stroustrup and Bruce Ellis. Plan 9 replaced Unix as Bell Labs's primary platform for operating systems research. It explored several changes to the original Unix model that facilitate the use and programming of the system, notably in distributed multi-user environments. After several years of development and internal use, Bell Labs shipped the operating system to universities in 1992. Three years later, in 1995, Plan 9 was made available to commercial parties by AT&T via the book publisher Harcourt Brace. With source licenses costing $350, AT&T targeted the embedded systems market rather than the computer market at large. By early 1996, the Plan 9 project had been "put on the back burner" by AT&T in favor of Inferno, intended as a rival to Sun Microsystems' Java platform. In the late 1990s, Bell Labs' new owner, Lucent Technologies, dropped commercial support for the project, and in 2000 a third release was distributed under an open-source license.
A fourth release, under a new free software license, occurred in 2002. A user and development community, including current and former Bell Labs personnel, produced minor daily releases in the form of ISO images, with Bell Labs hosting the development. The development source tree is accessible over the 9P and HTTP protocols and is used to update existing installations. In addition to the official components of the OS included in the ISOs, Bell Labs hosts a repository of externally developed applications and tools. As Bell Labs has moved on to other projects in recent years, development of the official Plan 9 system has stopped. Unofficial development continues on the 9front fork, where active contributors provide monthly builds and new functionality; so far, the 9front fork has provided the system with Wi-Fi drivers, audio drivers, USB support and a built-in game emulator, among other features. Other recent Plan 9-inspired operating systems include Harvey OS and Jehanne OS. Plan 9 is a distributed operating system, designed to make a network of heterogeneous and geographically separated computers function as a single system.
In a typical Plan 9 installation, users work at terminals running the window system rio and access CPU servers that handle computation-intensive processes; permanent data storage is provided by additional network hosts acting as file servers and archival storage. Its designers state that "the foundations of the system are built on two ideas: a per-process name space and a simple message-oriented file system protocol." The first idea means that, unlike on most operating systems, processes each have their own view of the namespace, corresponding to what other operating systems call the file system; the potential complexity of this setup is controlled by a set of conventional locations for common resources. The second idea means that processes can offer their services to other processes by providing virtual files that appear in the other processes' namespace; the client process's input/output on such a file becomes inter-process communication between the two processes. In this way, Plan 9 generalizes the Unix notion of the filesystem as the central point of access to computing resources.
It carries over Unix's idea of device files to provide access to peripheral devices, and the possibility to mount filesystems residing on physically distinct systems into a hierarchical namespace, but adds the possibility to mount a connection to a server program that speaks a standardized protocol and to treat its services as part of the namespace. For example, the original window system, called 8½, exploited these possibilities. Plan 9 represents the user interface on a terminal by means of three pseudo-files: mouse, which can be read by a program to get notification of mouse movements and button clicks; cons, which can be used to perform textual input/output; and bitblt, writing to which enacts graphics operations. The window system multiplexes these devices: when creating a new window to run some program in, it first sets up a new namespace in which mouse, cons and bitblt are connected to itself, hiding the actual device files to which it itself has access. The window system thus receives all input and output commands from the program and handles them appropriately, sending output to the actual screen device and giving the focused program the keyboard and mouse input.
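The idea of exposing services as files in a per-process namespace can be sketched in a few lines of Go. This is an illustrative toy, not Plan 9's actual 9P implementation; the NameSpace and Service types and the echoService example are invented:

    package main

    import "fmt"

    // Service is anything that can be read and written like a file.
    type Service interface {
        Read() string
        Write(data string)
    }

    // echoService stores whatever is written and returns it on read,
    // standing in for a server program mounted into the namespace.
    type echoService struct{ last string }

    func (e *echoService) Read() string      { return e.last }
    func (e *echoService) Write(data string) { e.last = data }

    // NameSpace maps per-process path names to services, loosely
    // mimicking Plan 9's per-process namespaces.
    type NameSpace map[string]Service

    // Mount binds a service at a path in this process's namespace.
    func (ns NameSpace) Mount(path string, s Service) { ns[path] = s }

    func main() {
        ns := NameSpace{}
        ns.Mount("/dev/echo", &echoService{})

        // File I/O on the virtual file is really communication
        // with the program serving it.
        ns["/dev/echo"].Write("hello")
        fmt.Println(ns["/dev/echo"].Read()) // prints "hello"
    }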
Steven Pemberton is one of the developers of the ABC programming language and of the Views system. He is chair of the W3C XHTML2 and XForms Working Groups and a member of the RDFa Task Force. Pemberton was editor-in-chief of the SIGCHI Bulletin from 1993 to 1999 and of ACM Interactions from 1998 to 2004. In 2009, Pemberton was awarded the CHI Lifetime Service Award by SIGCHI.
In computer science, computer engineering and programming language implementations, a stack machine is a type of computer; in some cases, the term refers to a software scheme. The main difference from other computers is that most of its instructions operate on a pushdown stack of numbers rather than on numbers in registers. Most computer systems use a stack for subroutine calls; this alone does not make them stack machines. The common alternative to a stack machine is a register machine, in which each instruction explicitly names the specific registers for its operands and result. A "stack machine" is a computer that uses a last-in, first-out stack to hold short-lived temporary values; most of its instructions assume that operands will be taken from the stack and results placed on the stack. For a typical instruction such as Add, the computer takes both operands from the topmost values of the stack and replaces those two values with the sum, which it calculates when it performs the Add instruction. The instruction's operands are "popped" off the stack, and its result is "pushed" back onto the stack, ready for the next instruction.
Most stack instructions have only an opcode commanding an operation, with no additional fields to identify a constant, register or memory cell. The stack can hold more than two inputs or more than one result, so a richer set of operations can be computed. Integer constant operands are pushed by separate Load Immediate instructions. Memory is accessed by separate Load or Store instructions containing a memory address or calculating the address from values on the stack. For speed, a stack machine often implements some part of its stack with registers: the operands of the arithmetic logic unit may be the top two registers of the stack, and the result from the ALU is stored in the top register of the stack. Some stack machines have a stack of limited size, implemented as a register file, which the ALU accesses with an index. Some machines have a stack of unlimited size, implemented as an array in RAM accessed by a "top of stack" address register; this is slower, but the number of flip-flops is smaller, making a less expensive, more compact CPU.
Its topmost N values may be cached for speed. A few machines have both an expression stack and a separate register stack; in this case, software, or an interrupt, may move data between them. The instruction set carries out most ALU actions with postfix operations that work only on the expression stack, not on data registers or main memory cells. This can be convenient for executing high-level languages, because most arithmetic expressions can be easily translated into postfix notation (a small interpreter in this style is sketched below). In contrast, register machines hold temporary values in a small, fast array of registers. Accumulator machines have only one general-purpose register. Belt machines use a FIFO queue to hold temporary values. Memory-to-memory machines do not have any temporary registers usable by a programmer. Stack machines may have their expression stack and their call-return stack separated or integrated as one structure. If they are separated, the instructions of the stack machine can be pipelined with fewer interactions and less design complexity, and can run faster.
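To make the postfix connection concrete, here is a minimal stack-machine interpreter sketch in Go (an invented toy, not any real machine's instruction set; the op type and opcode names are assumptions for this example):

    package main

    import "fmt"

    // op is one instruction of a tiny stack machine: most instructions
    // are just an opcode; only "push" carries an operand (a Load Immediate).
    type op struct {
        code string // "push", "add" or "mul"
        arg  int    // used only by "push"
    }

    // run executes a postfix instruction sequence on a pushdown stack.
    func run(prog []op) int {
        var stack []int
        for _, o := range prog {
            switch o.code {
            case "push":
                stack = append(stack, o.arg)
            case "add", "mul":
                // Pop the two topmost values and push the result.
                a := stack[len(stack)-1]
                b := stack[len(stack)-2]
                stack = stack[:len(stack)-2]
                if o.code == "add" {
                    stack = append(stack, b+a)
                } else {
                    stack = append(stack, b*a)
                }
            }
        }
        return stack[len(stack)-1]
    }

    func main() {
        // (2 + 3) * 4 in postfix: 2 3 + 4 *
        prog := []op{{"push", 2}, {"push", 3}, {"add", 0}, {"push", 4}, {"mul", 0}}
        fmt.Println(run(prog)) // prints 20
    }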
Some technical handheld calculators use reverse Polish notation in their keyboard interface, instead of having parenthesis keys; this is a form of stack machine. The Plus key relies on its two operands being at the correct topmost positions of the user-visible stack. Stack machines have much smaller instructions than other styles of machines. Loads and stores to memory are separate instructions, however, so stack code requires roughly twice as many instructions as the equivalent code for register machines; the total code size is still smaller for stack machines. In stack machine code, the most frequent instructions consist of just an opcode selecting the operation, which can fit in 6 bits or less. Branches, load immediates and load/store instructions require an argument field, but stack machines arrange that the frequent cases of these still fit together with the opcode into a compact group of bits. The selection of operands from prior results is done implicitly by the ordering of the instructions. In contrast, register machines require two or three register-number fields per ALU instruction to select operands.
The instructions for accumulator or memory-to-memory machines are not padded out with multiple register fields; instead, they use compiler-managed anonymous variables for subexpression values. These temporary locations require extra memory reference instructions, which take more code space than for the stack machine, or even for compact register machines. All practical stack machines have variants of the load–store opcodes for accessing local variables and formal parameters without explicit address calculations; this can be by offsets from the current top-of-stack address, or by offsets from a stable frame-base register. Register machines handle this with a register-plus-offset address mode, but use a wider offset field. Dense machine code was very valuable in the 1960s, when main memory was expensive and limited, even on mainframes; it became important again on the initially tiny memories of minicomputers and microprocessors. Density remains important today, for smartphone applications, for applications downloaded into browsers over slow Internet connections, and in ROMs for embedded applications.
A more general advantage of increased density is improved effectiveness of caches and instruction prefetch. Some of the density of Burroughs B6700 code was due to moving vital operand information elsewhere
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input/output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, usage in 2017 is up to 70% for Google's Android; according to third-quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a per-year decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing, in which the available processor time is divided between multiple processes, each of which is interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs; Unix-like operating systems, such as Solaris and Linux, as well as non-Unix-like ones, such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to yield time to the other processes in a defined manner.
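As a rough illustration of the cooperative variant (an invented toy simulation, not real kernel code; the task type and makeTask helper are assumptions for this example), the Go sketch below runs tasks round-robin, and the "scheduler" regains control only because each task voluntarily returns after a step of work; a preemptive scheduler would instead interrupt tasks on a timer:

    package main

    import "fmt"

    // task is a unit of work that runs one slice, then voluntarily
    // yields by returning. It returns false when it has finished.
    type task func() bool

    func main() {
        counters := []int{0, 0}
        makeTask := func(id, limit int) task {
            return func() bool {
                counters[id]++ // do one slice of work, then yield
                fmt.Printf("task %d ran (step %d)\n", id, counters[id])
                return counters[id] < limit
            }
        }

        // Cooperative round-robin: a task that never returned
        // would starve all the others.
        tasks := []task{makeTask(0, 2), makeTask(1, 3)}
        for len(tasks) > 0 {
            var next []task
            for _, t := range tasks {
                if t() {
                    next = append(next, t)
                }
            }
            tasks = next
        }
    }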
16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked to and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS context, particularly in distributed and cloud computing, templating refers to creating a single virtual machine image as a guest operating system and then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines, such as PDAs, with less autonomy. They are able to operate with a limited number of resources and are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. Such an event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled the use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri