John R. Levine
John R. Levine is an Internet author and consultant specializing in email infrastructure, spam filtering, and software patents. He chaired the Anti-Spam Research Group of the Internet Research Task Force, is president of CAUCE, is a member of the ICANN Security and Stability Advisory Committee, and runs Taughannock Networks. He has co-authored many books, including The Internet For Dummies, UNIX For Dummies, Fighting Spam for Dummies, and flex & bison, and was mayor of the village of Trumansburg, New York from March 2004 until March 2007. Levine graduated from Yale University in 1975 and earned his Ph.D. in computer science from Yale in 1984 with a thesis on the design and implementation of small databases. His roommate at Yale was economist Paul Krugman. Levine was a co-founder and board member of Segue Software and a senior programmer at Javelin Software, and he was a member of the R.E.S.I.S.T.O.R.S., one of the first computer clubs in the United States. Levine has been the sole moderator of the comp.compilers Usenet group for 32 years.
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, usage in 2017 was up to 70% for Google's Android; according to third-quarter 2016 data, Android on smartphones was dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a yearly decrease in market share of 5.2 percent, while other operating systems amounted to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently; this is achieved by time-sharing, where the available processor time is divided between multiple processes. Each process is repeatedly interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like systems, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to yield time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with it at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
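The difference between the two multitasking models can be sketched in a few lines. Below is a toy cooperative scheduler, an illustrative sketch rather than any real OS's code: each task is a Python generator that runs until it voluntarily yields control, which is the discipline that cooperative systems such as 16-bit Windows relied on.

```python
from collections import deque

def task(name, steps):
    # A cooperative task: it must yield control voluntarily.
    for i in range(steps):
        print(f"{name} step {i}")
        yield  # hand control back to the scheduler

def run(tasks):
    # Round-robin scheduler: each task runs until it yields.
    # A misbehaving task that never yields would hang everything --
    # the key weakness of cooperative multitasking, and the reason
    # preemptive systems interrupt tasks on a timer instead.
    queue = deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            next(t)          # resume the task until its next yield
            queue.append(t)  # reschedule it at the back of the queue
        except StopIteration:
            pass             # task finished; drop it

run([task("A", 2), task("B", 2)])
# interleaves: A step 0, B step 0, A step 1, B step 1
```

In a preemptive system the scheduler would not wait for `yield`; a clock interrupt would force the switch, so no single task could monopolize the CPU.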
In an OS context, and in distributed and cloud computing, templating refers to creating a single virtual machine image as a guest operating system and then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines, such as PDAs, with less autonomy. They are able to operate with a limited number of resources and are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled the use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri
The IBM System/360 is a family of mainframe computer systems, announced by IBM on April 7, 1964, and delivered between 1965 and 1978. It was the first family of computers designed to cover the complete range of applications, from small to large, both commercial and scientific. The design made a clear distinction between architecture and implementation, allowing IBM to release a suite of compatible designs at different prices. All but the incompatible Model 44 and the most expensive systems used microcode to implement the instruction set, which featured 8-bit byte addressing and binary and floating-point calculations. The launch of the System/360 family introduced IBM's Solid Logic Technology, a new technology that marked the start of more powerful but smaller computers. The slowest System/360 model announced in 1964, the Model 30, could perform up to 34,500 instructions per second, with memory from 8 to 64 KB. High-performance models came later; the 1967 IBM System/360 Model 91 could execute up to 16.6 million instructions per second.
The larger 360 models could have up to 8 MB of main memory, though main memory that big was unusual—a large installation might have as little as 256 KB of main storage, but 512 KB, 768 KB or 1024 KB was more common. Up to 8 megabytes of slower Large Capacity Storage was also available. The IBM 360 was successful in the market, allowing customers to purchase a smaller system with the knowledge that they would always be able to migrate upward if their needs grew, without reprogramming application software or replacing peripheral devices. Many consider the design one of the most successful computers in history, influencing computer design for years to come. The chief architect of System/360 was Gene Amdahl, and the project was managed by Fred Brooks, who answered to Chairman Thomas J. Watson Jr. The commercial release was piloted by another of Watson's lieutenants, John R. Opel, who managed the launch of the family in 1964. Application-level compatibility for System/360 software is maintained to the present day with the System z mainframe servers.
Contrasting with then-normal industry practice, IBM created an entire new series of computers, from small to large, low- to high-performance, all using the same instruction set. This feat allowed customers to use a cheaper model and upgrade to larger systems as their needs increased, without the time and expense of rewriting software. Before the introduction of System/360, business and scientific applications used different computers with different instruction sets and operating systems; different-sized computers also had their own instruction sets. IBM was the first manufacturer to exploit microcode technology to implement a compatible range of computers of differing performance, although the largest models had hard-wired logic instead. This flexibility lowered barriers to entry: with most other vendors, customers had to choose between machines they could outgrow and machines that were too powerful and thus too costly, which meant that many companies did not buy computers at all. IBM announced a series of six computers and forty common peripherals.
IBM eventually delivered fourteen models, including rare one-off models for NASA. The least expensive was the Model 20, with as little as 4096 bytes of core memory, eight 16-bit registers instead of the sixteen 32-bit registers of other System/360 models, and an instruction set that was a subset of that used by the rest of the range. The initial announcement in 1964 included Models 30, 40, 50, 60, 62 and 70. The first three were low- to middle-range systems aimed at the IBM 1400 series market; all three first shipped in mid-1965. The last three, intended to replace the 7000 series machines, never shipped and were replaced with the 65 and 75, which were first delivered in November 1965 and January 1966, respectively. Additions to the low end included Models 20, 22 and 25; the Model 20 had several sub-models. The Model 22 was a recycled Model 30 with minor limitations: a smaller maximum memory configuration and slower I/O channels, which limited it to slower and lower-capacity disk and tape devices than on the 30. The Model 44 was a specialized model, designed for scientific computing and for real-time computing and process control, featuring some additional instructions but with all storage-to-storage instructions and five other complex instructions eliminated.
A succession of high-end machines included the Models 67, 85, 91, 95 and 195. The 85 design was intermediate between the System/360 line and the follow-on System/370, and was the basis for the 370/165. There was also a System/370 version of the 195. The implementations differed, using different native data path widths and the presence or absence of microcode, yet were compatible: except where documented, the models were architecturally compatible. The 91, for example, was designed for scientific computing and provided out-of-order instruction execution, but lacked the decimal instruction set used in commercial applications. New features could be added without violating architectural definitions: the 65 had a dual-processor version with extensions for inter-CPU signalling. Models 44, 75, 91, 95 and 195 were implemented with hardwired logic, rather than microcoded as
Backward compatibility is a property of a system, product, or technology that allows for interoperability with an older legacy system, or with input designed for such a system, in telecommunications and computing. Backward compatibility is sometimes called downward compatibility. Modifying a system in a way that does not allow backward compatibility is sometimes called "breaking" backward compatibility. A complementary concept is forward compatibility: a forward-compatible design has a roadmap for compatibility with future standards and products. The benefits associated with backward compatibility are the appeal to an existing user base through an inexpensive upgrade path, as well as the network effect, which is important because it increases the value of goods and services in proportion to the size of the user base. One example of this is the Sony PlayStation 2, which was backward compatible with games for its predecessor, the PlayStation. While the selection of PS2 games available at launch was small, sales of the console were nonetheless strong in 2000-2001 thanks to the large library of games for the preceding PS1.
This bought time for the PS2 to grow a large installed base and for developers to release more quality PS2 games for the crucial 2001 holiday season. The associated costs of backward compatibility are a higher bill of materials if hardware is required to support the legacy systems. A notable example is the Sony PlayStation 3: the first PS3 iteration was expensive to manufacture in part because it included the Emotion Engine from the preceding PS2 in order to run PS2 games, since the PS3 architecture was different from the PS2's. Subsequent PS3 hardware revisions eliminated the Emotion Engine, which saved production costs but removed the ability to run PS2 titles, after Sony found that backward compatibility was not a major selling point for the PS3, in contrast to the PS2. The PS3's chief competitor, the Microsoft Xbox 360, took a different approach to backward compatibility, using software emulation to run games from the first Xbox rather than including legacy hardware; however, Microsoft stopped releasing emulation profiles after 2007.
A simple example of both backward and forward compatibility is the introduction of FM radio in stereo. FM radio was originally mono, with only one audio channel represented by one signal. With the introduction of two-channel stereo FM radio, a large number of listeners had only mono FM receivers. Forward compatibility for mono receivers with stereo signals was achieved by sending the sum of both left and right audio channels in one signal and the difference in another signal. This allows mono FM receivers to receive and decode the sum signal while ignoring the difference signal, which is necessary only for separating the audio channels. Stereo FM receivers can receive a mono signal and decode it without the need for a second signal, and they can separate a sum signal into left and right channels if both sum and difference signals are received. Without the requirement for backward compatibility, a simpler method could have been chosen. Full backward compatibility is particularly important in computer instruction set architectures, one of the most successful being the x86 family of microprocessors.
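The sum-and-difference scheme can be illustrated numerically. This is a simplified sketch that ignores the real pilot tone and 38 kHz subcarrier modulation; the function names and sample values are invented for illustration.

```python
def encode(left, right):
    # Transmit the sum (which mono receivers decode directly)
    # and the difference (which only stereo receivers use).
    return left + right, left - right

def decode_mono(s, d):
    # A mono receiver uses only the sum signal and ignores the rest.
    return s

def decode_stereo(s, d):
    # A stereo receiver recombines both signals to recover each channel.
    left = (s + d) / 2
    right = (s - d) / 2
    return left, right

# Sample amplitudes chosen to be exact binary fractions.
s, d = encode(0.75, 0.25)
assert decode_stereo(s, d) == (0.75, 0.25)  # stereo set recovers both channels
assert decode_mono(s, d) == 1.0             # mono set hears the sum of both
```

Backward compatibility also holds in the other direction: a mono broadcast simply carries a zero difference signal, and a stereo receiver then decodes identical left and right channels.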
Their full backward compatibility spans back to the 16-bit Intel 8086/8088 processors introduced in 1978. Backward-compatible processors can process the same binary executable software instructions as their predecessors, allowing the use of a newer processor without having to acquire new applications or operating systems. The success of the Wi-Fi digital communication standard is attributed to its broad forward and backward compatibility. Compiler backward compatibility may refer to the ability of a compiler of a newer version of the language to accept programs or data that worked under the previous version. A data format is said to be backward compatible with its predecessor if every message or file that is valid under the old format is still valid, retaining its meaning, under the new format.
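The data-format case can be made concrete with a toy example. The format below is entirely hypothetical; the point is that a reader for the newer version accepts every file that was valid under the old version and preserves its meaning.

```python
import json

# Hypothetical JSON config format: version 1 had only a "name" field;
# version 2 adds an optional "tags" field. A v2 reader stays backward
# compatible by accepting every valid v1 document unchanged.
def read_config(text):
    doc = json.loads(text)
    return {
        "name": doc["name"],
        # Defaulting the new field preserves the v1 document's meaning.
        "tags": doc.get("tags", []),
    }

v1_file = '{"name": "server"}'                    # valid under the old format
v2_file = '{"name": "server", "tags": ["prod"]}'  # valid under the new format

assert read_config(v1_file) == {"name": "server", "tags": []}
assert read_config(v2_file) == {"name": "server", "tags": ["prod"]}
```

Breaking backward compatibility here would mean, for example, making `tags` mandatory, so that previously valid v1 files are rejected.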
Douglas W. Jones
Douglas W. Jones is an American computer scientist at the University of Iowa; his research focuses on computer security and electronic voting. Jones received a B.S. in physics from Carnegie Mellon University in 1973, and an M.S. and Ph.D. in computer science from the University of Illinois at Urbana-Champaign in 1976 and 1980, respectively. Jones' involvement with electronic voting research began in 1994, when he was appointed to the Iowa Board of Examiners for Voting Machines and Electronic Voting Systems; he chaired the board from 1999 to 2003 and has testified before the United States Commission on Civil Rights, the United States House Committee on Science and the Federal Election Commission on voting issues. In 2005 he participated as an election observer for the presidential election in Kazakhstan. Jones was the technical advisor for HBO's documentary on electronic voting machine issues, "Hacking Democracy", released in 2006, and he was a member of the ACCURATE electronic voting project from 2005 to 2011.
On December 11, 2009, the Election Assistance Commission appointed Jones to the Technical Guidelines Development Committee. Together with Barbara Simons, Jones has published a book on electronic voting entitled Broken Ballots: Will Your Vote Count?. Jones's most cited work centers on the evaluation of priority queue implementations; this work has been credited with helping relaunch the empirical study of algorithm performance. In related work, Jones applied splay trees to data compression and developed algorithms for applying parallel computing to discrete event simulation. Jones's PhD thesis was in the area of capability-based addressing, and he has published on other aspects of computer architecture on an occasional basis, such as his proposal for a one-instruction set computer.
A punched card or punch card is a piece of stiff paper that can be used to contain digital data represented by the presence or absence of holes in predefined positions. The data can be used for data processing applications or, in earlier examples, to directly control automated machinery. Punched cards were used through much of the 20th century in the data processing industry, where specialized and complex unit record machines, organized into semiautomatic data processing systems, used punched cards for data input and storage. Many early digital computers used punched cards, prepared using keypunch machines, as the primary medium for input of both computer programs and data. While punched cards are now obsolete as a storage medium, as of 2012 some voting machines still used punched cards to record votes. Basile Bouchon developed the control of a loom by punched holes in paper tape in 1725; the design was improved by his assistant Jean-Baptiste Falcon and by Jacques Vaucanson. Although these improvements controlled the patterns woven, they still required an assistant to operate the mechanism.
In 1804 Joseph Marie Jacquard demonstrated a mechanism to automate loom operation: a number of punched cards were linked into a chain of any length, and each card held the instructions for selecting the shuttle for a single pass. It is considered an important step in the history of computing hardware. Semyon Korsakov was reputedly the first to propose punched cards in informatics for information storage and search; Korsakov announced his new method and machines in September 1832. Charles Babbage proposed the use of "Number Cards", "pierced with certain holes and stand opposite levers connected with a set of figure wheels... advanced they push in those levers opposite to which there are no holes on the cards and thus transfer that number together with its sign", in his description of the Calculating Engine's Store. In 1881 Jules Carpentier developed a method of recording and playing back performances on a harmonium using punched cards. The system was called the Mélographe Répétiteur and "writes down ordinary music played on the keyboard dans la langage de Jacquard" ("in the language of Jacquard"), that is, as holes punched in a series of cards.
By 1887 Carpentier had separated the mechanism into the Melograph, which recorded the player's key presses, and the Melotrope, which played the music. At the end of the 1800s Herman Hollerith invented the recording of data on a medium that could be read by a machine. "After some initial trials with paper tape, he settled on punched cards...", developing punched card data processing technology for the 1890 US census. His tabulating machines read and summarized data stored on punched cards, and they began to be used for government and commercial data processing. These electromechanical machines only counted holes, but by the 1920s they had units for carrying out basic arithmetic operations. Hollerith founded the Tabulating Machine Company, one of four companies that were amalgamated to form a fifth company, the Computing-Tabulating-Recording Company, later renamed International Business Machines Corporation. Other companies entering the punched card business included The Tabulator Limited, Deutsche Hollerith-Maschinen Gesellschaft mbH, Powers Accounting Machine Company, Remington Rand, and H. W. Egli Bull. These companies and others marketed a variety of punched cards and unit record machines for creating and tabulating punched cards, even after the development of electronic computers in the 1950s. Both IBM and Remington Rand tied punched card purchases to machine leases, a violation of the 1914 Clayton Antitrust Act. In 1932, the US government took both to court on this issue. Remington Rand settled quickly; IBM, which viewed its business as providing a service, fought all the way to the Supreme Court and lost in 1936. IBM "had 32 presses at work in Endicott, N.Y., printing and stacking five to 10 million punched cards every day." Punched cards were used as legal documents, such as U.S. Government checks and savings bonds. During World War II punched card equipment was used by the Allies in some of their efforts to decrypt Axis communications; see, for example, Central Bureau in Australia. At Bletchley Park in England, 2,000,000 punched cards were used each week for storing decrypted German messages.
Punched card technology developed into a powerful tool for business data-processing, and by 1950 punched cards had become ubiquitous in government. "Do not fold, spindle or mutilate," a generalized version of the warning that appeared on some punched cards, became a motto for the post-World War II era. In 1955 IBM signed a consent decree requiring, amongst other things, that IBM would by 1962 have no more than one-half of the punched card manufacturing capacity in the United States. Tom Watson Jr.'s decision to sign this decree, where IBM saw the punched card provisions as the most significant point, completed the transfer of power to him from Thomas Watson Sr. The UNITYPER introduced magnetic tape for data entry in the 1950s. During the 1960s, the punched card was replaced as the primary means of data storage by magnetic tape, as better, more capable computers became available. Mohawk Data Sciences introduced a magnetic tape encoder in 1965, a system marketed as a keypunch replacement that was somewhat successful.
Punched cards were still c
In computing, memory refers to the computer hardware integrated circuits that store information for immediate use in a computer. Computer memory operates at high speed, for example random-access memory, in distinction to storage, which provides slow-to-access information but offers higher capacities. If needed, contents of the computer memory can be transferred to secondary storage. An archaic synonym for memory is store. The term "memory", meaning "primary storage" or "main memory", is associated with addressable semiconductor memory, i.e. integrated circuits consisting of silicon-based transistors, used for example as primary storage, but also for other purposes in computers and other digital electronic devices. There are two main kinds of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are ROM, PROM, EPROM and EEPROM. Examples of volatile memory are primary storage, which is typically dynamic random-access memory (DRAM), and fast CPU cache memory, which is typically static random-access memory (SRAM), fast but energy-consuming and offering lower memory areal density than DRAM.
Most semiconductor memory is organized into memory cells or bistable flip-flops, each storing one bit; flash memory organization includes multiple bits per cell. The memory cells are grouped into words of fixed word length, for example 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory; this implies that processor registers are not normally considered memory, since they only store one word and do not include an addressing mechanism. Typical secondary storage devices are hard disk drives and solid-state drives. In the early 1940s, memory technology permitted a capacity of a few bytes. The first electronic programmable digital computer, the ENIAC, using thousands of octal-base radio vacuum tubes, could perform simple calculations involving 20 numbers of ten decimal digits which were held in the vacuum tube accumulators. The next significant advance in computer memory came with acoustic delay line memory, developed by J. Presper Eckert in the early 1940s.
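The relationship between address width and capacity described above can be checked directly. This is a small sketch; the particular widths are only illustrative examples.

```python
def addressable_words(address_bits):
    # An N-bit binary address can select 2**N distinct words,
    # since each bit doubles the number of addressable locations.
    return 2 ** address_bits

# A 16-bit address reaches 65,536 words; with 1-byte words
# that is 64 KB of addressable memory.
assert addressable_words(16) == 65_536
assert addressable_words(16) == 64 * 1024

# A 32-bit address with 1-byte words reaches 4 GiB.
assert addressable_words(32) == 4 * 1024**3
```

The same arithmetic explains why a processor register falls outside this definition of memory: a single register holds one word and has no address decoding at all.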
Through the construction of a glass tube filled with mercury and plugged at each end with a quartz crystal, delay lines could store bits of information in the form of sound waves propagating through the mercury, with the quartz crystals acting as transducers to read and write bits. Delay line memory was limited to a capacity of up to a few hundred thousand bits to remain efficient. Two alternatives to the delay line, the Williams tube and the Selectron tube, originated in 1946, both using electron beams in glass tubes as the means of storage. Using cathode ray tubes, Fred Williams invented the Williams tube, which was the first random-access computer memory. The Williams tube proved less expensive, but also frustratingly sensitive to environmental disturbances. Efforts began in the late 1940s to find non-volatile memory. Jay Forrester, Jan A. Rajchman and An Wang developed magnetic-core memory, which allowed for recall of memory after power loss. Magnetic-core memory would become the dominant form of memory until the development of transistor-based memory in the late 1960s.
Developments in technology and economies of scale have made possible so-called Very Large Memory computers. The term "memory", when used with reference to computers, most often refers to random-access memory. Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). SRAM retains its contents as long as the power is connected and is easy to interface, but uses six transistors per bit. Dynamic RAM is more complicated to interface and control, needing regular refresh cycles to prevent losing its contents, but uses only one transistor and one capacitor per bit, allowing it to reach much higher densities and much cheaper per-bit costs. SRAM is not worthwhile for desktop system memory, where DRAM dominates, but is used for cache memories. SRAM is also commonplace in small embedded systems. Forthcoming volatile memory technologies that aim to replace or compete with SRAM and DRAM include Z-RAM and A-RAM. Non-volatile memory is computer memory that can retain the stored information even when not powered.
Examples of non-volatile memory include read-only memory, flash memory, most types of magnetic computer storage devices, optical discs, and early computer storage methods such as paper tape and punched cards. Forthcoming non-volatile memory technologies include FERAM, CBRAM, PRAM, STT-RAM, SONOS, RRAM, racetrack memory, NRAM, 3D XPoint and millipede memory. A third category of memory is "semi-volatile"; the term describes a memory which has some limited non-volatile duration after power is removed, after which the data is lost. A typical goal when using a semi-volatile memory is to provide the high performance and durability associated with volatile memories, while providing some benefits of a true non-volatile memory. For example, some non-volatile memory types can wear out, where a "worn" cell has increased volatility but otherwise continues to work. Data locations which are written