A microcomputer is a small, inexpensive computer with a microprocessor as its central processing unit. It includes a microprocessor and minimal input/output circuitry mounted on a single printed circuit board. Microcomputers became popular in the 1970s and 1980s with the advent of increasingly powerful microprocessors; their predecessors, mainframes and minicomputers, were comparatively much larger and more expensive. Many microcomputers are personal computers; the abbreviation "micro" was common during the 1970s and 1980s, but has since fallen out of common usage. The term "microcomputer" came into popular use after the introduction of the minicomputer, although Isaac Asimov had used it in his short story "The Dying Night" as early as 1956. Most notably, the microcomputer replaced the many separate components that made up the minicomputer's CPU with a single integrated microprocessor chip. The French developers of the Micral N filed their patents with the term "micro-ordinateur", a literal equivalent of "microcomputer", to designate a solid-state machine designed around a microprocessor.
In the USA, the earliest models, such as the Altair 8800, were sold as kits to be assembled by the user. They came with as little as 256 bytes of RAM and no input/output devices other than indicator lights and switches, but were useful as a proof of concept to demonstrate what such a simple device could do. However, as microprocessors and semiconductor memory became less expensive, microcomputers in turn grew cheaper and easier to use. Increasingly inexpensive logic chips such as the 7400 series allowed cheap dedicated circuitry for improved user interfaces such as keyboard input, instead of a row of switches for toggling in bits one at a time. Audio cassettes provided inexpensive data storage and replaced manual re-entry of a program every time the device was powered on. Large, cheap arrays of silicon logic gates in the form of read-only memory and EPROMs allowed utility programs and self-booting kernels to be stored within microcomputers; these stored programs could automatically load further, more complex software from external storage devices without user intervention, forming an inexpensive turnkey system that did not require a computer expert to understand or use the device.
Random-access memory became cheap enough that dedicating one to two kilobytes of memory to a video display controller's frame buffer was affordable, enabling a 40×25 or 80×25 text display or blocky color graphics on a common household television. This replaced the slow and expensive teletypewriter that was then common as an interface to minicomputers and mainframes. All these improvements in cost and usability resulted in an explosion in popularity during the late 1970s and early 1980s. A large number of computer makers packaged microcomputers for use in small-business applications. By 1979, companies such as Cromemco, Processor Technology, IMSAI, North Star Computers, Southwest Technical Products Corporation, Ohio Scientific, Altos Computer Systems, Morrow Designs and others produced systems designed for a resourceful end user or consulting firm to deliver business systems such as accounting, database management and word processing to small businesses. This allowed businesses unable to afford the lease of a minicomputer or a time-sharing service to automate business functions without hiring full-time staff to operate the computers.
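The frame buffer sizes mentioned above are easy to verify with a back-of-the-envelope calculation; the sketch below assumes one byte per character cell (character code only, no attribute byte), which is how the simplest text-mode displays of the era worked.

```python
# Memory needed for a character-cell text-mode frame buffer,
# assuming one byte per cell (no separate attribute byte).

def framebuffer_bytes(cols: int, rows: int, bytes_per_cell: int = 1) -> int:
    """Return the bytes required for a cols x rows text display."""
    return cols * rows * bytes_per_cell

print(framebuffer_bytes(40, 25))  # 1000 bytes -- fits in 1 KB of RAM
print(framebuffer_bytes(80, 25))  # 2000 bytes -- fits in 2 KB of RAM
```

This is why one to two kilobytes of RAM sufficed: a 40×25 display needs exactly 1,000 cells and an 80×25 display needs 2,000.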
A representative system of this era would have used an S-100 bus, an 8-bit processor such as the Intel 8080 or Zilog Z80, and either the CP/M or MP/M operating system. The increasing availability and power of desktop computers for personal use attracted the attention of more software developers. In time, as the industry matured, the market for personal computers standardized around IBM PC compatibles running DOS and, later, Windows. Modern desktop computers, video game consoles, tablet PCs and many types of handheld devices, including mobile phones, pocket calculators and industrial embedded systems, may all be considered examples of microcomputers according to the definition given above. Everyday use of the expression "microcomputer" has declined since the mid-1980s and had become uncommon by 2000; the term is most associated with the first wave of all-in-one 8-bit home computers and small-business microcomputers. Although, or perhaps because, a diverse range of modern microprocessor-based devices fit the definition of "microcomputer", they are no longer referred to as such in everyday speech.
In common usage, "microcomputer" has been supplanted by the term "personal computer" or "PC", which denotes a computer designed to be used by one person at a time; the term was first coined in 1959. IBM promoted the term "personal computer" to differentiate the IBM PC from other microcomputers, then called "home computers", as well as from IBM's own mainframes and minicomputers. However, following its release, the IBM PC itself was widely imitated, as was the term: its component parts were available to other producers, and the BIOS was reverse-engineered through cleanroom design techniques. IBM PC compatible "clones" became commonplace, and the terms "personal computer" and "PC" stuck with the general public, specifically for a DOS- or Windows-compatible computer. Monitors and other devices for inpu
In computing, a computer keyboard is a typewriter-style device which uses an arrangement of buttons or keys to act as mechanical levers or electronic switches. Following the decline of punch cards and paper tape, interaction via teleprinter-style keyboards became the main input method for computers. Keyboard keys typically have characters engraved or printed on them, and each press of a key corresponds to a single written symbol. However, producing some symbols may require pressing and holding several keys simultaneously or in sequence. While most keyboard keys produce letters, numbers or signs, other keys or simultaneous key presses can produce actions or execute computer commands. In normal usage, the keyboard is used as a text entry interface for typing text and numbers into a word processor, text editor or any other program. In a modern computer, the interpretation of key presses is left to the software: a computer keyboard distinguishes each physical key from every other key and reports all key presses to the controlling software.
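The division of labor described above, where the keyboard reports raw key identities and software decides what symbol they produce, can be sketched as a lookup from scancodes plus modifier state to characters. The scancode values and table below are invented for illustration, not taken from any real keyboard specification.

```python
# Hypothetical sketch: the keyboard sends only a raw scancode; driver-level
# software combines it with modifier state (Shift) to choose the symbol.

SCANCODE_MAP = {
    0x1E: ("a", "A"),   # (unshifted, shifted) -- values are illustrative
    0x1F: ("s", "S"),
    0x02: ("1", "!"),
}

def translate(scancode: int, shift_held: bool) -> str:
    """Map a raw scancode to a written symbol, honoring the Shift modifier."""
    unshifted, shifted = SCANCODE_MAP[scancode]
    return shifted if shift_held else unshifted

print(translate(0x1E, shift_held=False))  # a
print(translate(0x02, shift_held=True))   # !
```

Because the mapping lives in software, the same physical keyboard can serve different layouts (QWERTY, AZERTY, Dvorak) simply by swapping the table.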
Keyboards are also used for computer gaming, either regular keyboards or keyboards with special gaming features, which can expedite frequently used keystroke combinations. A keyboard is used to give commands to the operating system of a computer, such as Windows' Control-Alt-Delete combination. On pre-Windows 95 Microsoft operating systems this combination forced a reboot; now it brings up a system security options screen. A command-line interface is a type of user interface navigated using a keyboard, or some other device that does the job of one. While typewriters are the definitive ancestor of all key-based text entry devices, the computer keyboard as a device for electromechanical data entry and communication derives largely from the utility of two devices: teleprinters and keypunches. As early as the 1870s, teleprinter-like devices were used to type and transmit stock market text data from the keyboard across telegraph lines to stock ticker machines, to be copied and displayed on ticker tape.
The teleprinter, in its more contemporary form, was developed from 1907 to 1910 by American mechanical engineer Charles Krum and his son Howard, with early contributions by electrical engineer Frank Pearne; earlier models had been developed separately by individuals such as Royal Earl House and Frederick G. Creed. Earlier still, Herman Hollerith developed the first keypunch devices, which by the 1930s had evolved to include keys for text and number entry akin to normal typewriters. The keyboard on the teleprinter played a strong role in point-to-point and point-to-multipoint communication for most of the 20th century, while the keyboard on the keypunch device played an equally long-lived role in data entry and storage. The earliest computers incorporated electric typewriter keyboards: the ENIAC used a keypunch device as both its input and its paper-based output device, while the BINAC used an electromechanically controlled typewriter both for data entry onto magnetic tape and for data output.
The keyboard remained the primary, most integrated computer peripheral well into the era of personal computing, until the introduction of the mouse as a consumer device in 1984. By this time, text-only user interfaces with sparse graphics had given way to comparatively graphics-rich icons on screen. However, keyboards remain central to human-computer interaction to the present day, as mobile personal computing devices such as smartphones and tablets adapt the keyboard as an optional virtual, touchscreen-based means of data entry. One factor determining the size of a keyboard is the presence of duplicate keys, such as a separate numeric keypad or two each of Shift, Alt and Ctrl for convenience. Keyboard size also depends on the extent to which the system maps a single action to a combination of sequential or simultaneous keystrokes, or to multiple presses of a single key. A keyboard with few keys is called a keypad. Another factor determining the size of a keyboard is the spacing of the keys.
Reduction is limited by the practical consideration that the keys must be large enough to be pressed by fingers; alternatively, a tool is used for pressing small keys. Standard alphanumeric keyboards have keys on three-quarter-inch centers with a key travel of at least 0.150 inches. Desktop computer keyboards, such as the 101-key US traditional keyboards or the 104-key Windows keyboards, include alphabetic characters, punctuation symbols, numbers and a variety of function keys. The internationally common 102/104-key keyboards have a smaller left Shift key and an additional key with some more symbols between that and the letter to its right; the Enter key is also shaped differently. Computer keyboards are similar to electric-typewriter keyboards but contain additional keys, such as the Command or Windows keys. There is no single standard computer keyboard; there are three different PC keyboards: the original PC keyboard with 84 keys, the AT keyboard, also with 84 keys, and the enhanced keyboard with 101 keys.
The three differ somewhat in the placement of the function keys, the control keys, the return key and the shift keys. Keyboards on laptop and notebook computers have a shorter travel distance for the keystroke, a shorter over-travel distance and a reduced set of keys; they may not have a numeric keypad, and the function keys may be placed in locations that differ from their placement on a standard, full-sized keyboard. The switch
An electronic component is any basic discrete device or physical entity in an electronic system used to affect electrons or their associated fields. Electronic components are industrial products, available in a singular form, and are not to be confused with electrical elements, which are conceptual abstractions representing idealized electronic components. Electronic components have a number of electrical terminals, or leads; these leads connect to other components to create an electronic circuit with a particular function. Basic electronic components may be packaged discretely, as arrays or networks of like components, or integrated inside packages such as semiconductor integrated circuits, hybrid integrated circuits or thick-film devices. The following list of electronic components focuses on the discrete version of these components, treating such packages as components in their own right. Components can be classified as passive, active or electromechanical. The strict physics definition treats passive components as ones that cannot supply energy themselves, whereas a battery would be seen as an active component, since it acts as a source of energy.
However, electronic engineers who perform circuit analysis use a more restrictive definition of passivity. When concerned only with the energy of signals, it is convenient to ignore the so-called DC circuit and pretend that the power supplying components such as transistors or integrated circuits is absent, even though it may in reality be supplied by the DC circuit. The analysis then concerns only the AC circuit, an abstraction that ignores the DC voltages and currents present in the real-life circuit. This fiction, for instance, lets us view an oscillator as "producing energy", even though in reality the oscillator consumes more energy from a DC power supply, which we have chosen to ignore. Under that restriction, the terms as used in circuit analysis are defined as follows: active components rely on a source of energy and can inject power into a circuit, though this is not part of the definition. Active components include amplifying components such as transistors, triode vacuum tubes and tunnel diodes. Passive components cannot introduce net energy into the circuit.
They cannot rely on a source of power, except for what is available from the circuit they are connected to. As a consequence they cannot amplify, although they may increase a current or voltage. Passive components include two-terminal components such as resistors, capacitors, inductors and transformers. Electromechanical components can carry out electrical operations by using moving parts or electrical connections. Most passive components with more than two terminals can be described in terms of two-port parameters that satisfy the principle of reciprocity, though there are rare exceptions; in contrast, active components lack that property.

Diodes conduct electricity in one direction, among more specific behaviors:
Diode, diode bridge
Schottky diode – super-fast diode with lower forward voltage drop
Zener diode – passes current in the reverse direction to provide a constant voltage reference
Transient voltage suppression diode, unipolar or bipolar – used to absorb high-voltage spikes
Varicap, tuning diode, variable-capacitance diode – a diode whose AC capacitance varies according to the DC voltage applied
Light-emitting diode – a diode that emits light
Photodiode – passes current in proportion to incident light
Avalanche photodiode – photodiode with internal gain
Solar cell, photovoltaic cell, PV array or panel – produces power from light
DIAC (trigger diode, SIDAC) – used to trigger an SCR
Constant-current diode
Peltier cooler – a semiconductor heat pump
Tunnel diode – fast diode based on quantum-mechanical tunneling

Transistors were considered the invention of the twentieth century that changed electronic circuits forever. A transistor is a semiconductor device used to amplify and switch electronic signals and electrical power. Transistors include:
Bipolar junction transistor – NPN or PNP
Phototransistor – amplified photodetector
Darlington transistor – NPN or PNP
Photo-Darlington – amplified photodetector
Sziklai pair
Field-effect transistors: JFET (N-channel or P-channel), MOSFET (N-channel or P-channel), MESFET, HEMT
Thyristors:
Silicon-controlled rectifier – passes current only after being triggered by a sufficient control voltage on its gate
TRIAC – bidirectional SCR
Unijunction transistor
Programmable unijunction transistor
SIT, SITh
Composite transistors: IGBT

Digital electronics
Analog:
Hall-effect sensor – senses a magnetic field
Current sensor – senses a current through it

Optoelectronics:
Opto-isolator, opto-coupler, photo-coupler – photodiode, BJT, JFET, SCR, TRIAC, zero-crossing TRIAC, open-collector IC, CMOS IC, solid-state relay
Slotted optical switch, opto switch, optical switch
LED display – seven-segment display, sixteen-segment display, dot-matrix display

Current:
Filament lamp
Vacuum fluorescent display
Cathode ray tube (monochro
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input, output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to OS functions or is interrupted by them. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android accounted for up to 70% of use in 2017; according to third-quarter 2016 data, Android on smartphones was dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a yearly decrease in market share of 5.2 percent, while other operating systems amounted to just 0.3 percent.
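The intermediary role described above can be made concrete with a tiny example: the program below never touches disk hardware directly; each low-level call is a thin wrapper over a system call (open, write, close) that the operating system services on the program's behalf. The file name is an arbitrary choice for illustration.

```python
import os
import tempfile

# Application code requests a hardware-backed service (writing a file)
# through system calls; the OS mediates all access to the actual device.
path = os.path.join(tempfile.gettempdir(), "os_demo.txt")  # illustrative name

fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)  # open() syscall
os.write(fd, b"hello, kernel\n")                           # write() syscall
os.close(fd)                                               # close() syscall

with open(path, "rb") as f:
    data = f.read()
print(data)  # b'hello, kernel\n'
```

The same pattern holds for memory allocation and device I/O: the application expresses intent, and the kernel decides how, and whether, to carry it out.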
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently; this is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux, as well as non-Unix-like systems, such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to give time to the other processes in a defined manner; 16-bit versions of Microsoft Windows used cooperative multitasking.
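The cooperative model described above can be sketched with Python generators: each "task" runs until it voluntarily yields control, and a simple round-robin loop stands in for the scheduler. This is a toy illustration of the concept, not how any real OS is implemented.

```python
# Toy cooperative scheduler: each task is a generator that runs until it
# voluntarily yields, mirroring cooperative multitasking, where each
# process must hand control back in a defined manner.

def task(name, steps, log):
    for i in range(steps):
        log.append(f"{name}{i}")  # do a unit of "work"
        yield                     # voluntarily give up the CPU

def run(tasks):
    ready = list(tasks)
    while ready:
        current = ready.pop(0)        # round-robin: take the front task
        try:
            next(current)             # run until it yields...
            ready.append(current)     # ...then send it to the back
        except StopIteration:
            pass                      # task finished; drop it

log = []
run([task("A", 2, log), task("B", 3, log)])
print(log)  # ['A0', 'B0', 'A1', 'B1', 'B2']
```

The weakness is visible in the design: if a task never yields, every other task starves, which is exactly why preemptive multitasking, where the OS forcibly interrupts tasks on a timer, replaced the cooperative model.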
32-bit versions of both Windows NT and Win9x used preemptive multitasking. Single-user operating systems have no facilities to distinguish users but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with it at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked to and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when the computers in a group work in cooperation, they form a distributed system.
In an OS, distributed, and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system and then saving it as a tool for running multiple virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines, such as PDAs, with less autonomy. They are able to operate with a limited number of resources and are compact and efficient by design. Windows CE and MINIX 3 are examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or on external events, while time-sharing operating systems switch tasks based on clock interrupts.
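The priority-driven task selection that distinguishes an event-driven real-time system can be sketched with a priority queue: whenever the scheduler must choose, the highest-priority ready task runs next. The task names and priority values below are invented for illustration.

```python
import heapq

# Sketch of priority-based task selection, as in an event-driven real-time
# system: the ready task with the highest priority always runs first.
# Lower number = higher priority; task names are hypothetical.

ready_queue = []  # min-heap of (priority, task name)

def make_ready(priority, name):
    heapq.heappush(ready_queue, (priority, name))

make_ready(3, "logging")
make_ready(1, "motor-control")   # urgent deadline
make_ready(2, "sensor-poll")

order = []
while ready_queue:
    _, name = heapq.heappop(ready_queue)  # always pick highest priority
    order.append(name)

print(order)  # ['motor-control', 'sensor-poll', 'logging']
```

Because the choice depends only on priorities, not on elapsed clock time, the dispatch order is deterministic, which is the property real-time systems are built around.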
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled the use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri
Machine code monitor
A machine code monitor is software that allows a user to enter commands to view and change memory locations on a computer, with options to load and save memory contents from and to secondary storage. Some full-featured machine code monitors provide detailed control over the execution of machine language programs and include absolute-address code assembly and disassembly capabilities. Machine code monitors became popular during the home computer era of the 1970s and 1980s and were sometimes available as resident firmware in some computers; it was not unheard of to perform all of one's programming in a monitor in lieu of a full-fledged symbolic assembler. Even after full-featured assemblers became available, a machine code monitor remained indispensable for debugging programs. The usual technique was to set breakpoints in the code under test and start the program; when the microprocessor encountered a breakpoint, the test program would be interrupted and control would be transferred to the machine code monitor. This would trigger a register dump, and the monitor would then await programmer input.
Activities at this point might include examining memory contents, patching code and/or altering the processor registers prior to restarting the test program. The general decline of scratch-written assembly language software has made the use of a machine code monitor somewhat of a lost art. In most systems where higher-level languages are employed, debuggers are used to present a more abstract and friendly view of what is happening within a program. However, the use of machine code monitors persists in the area of hobby-built computers.
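The examine-and-deposit workflow described above can be sketched in a few lines: a command parser over a byte array, with one command to view a memory location and one to change it. The terse command names ("m" and "p") and the hexadecimal syntax are invented here, though real monitors used similarly terse conventions.

```python
# Hobby-style sketch of a machine code monitor's two core commands:
# "m addr" examines a memory location; "p addr value" deposits a byte.
# Command names and syntax are hypothetical, in the spirit of real monitors.

memory = bytearray(256)  # stand-in for the machine's RAM

def monitor(command: str) -> str:
    parts = command.split()
    if parts[0] == "m":                      # m 10     -> show byte at 0x10
        addr = int(parts[1], 16)
        return f"{addr:02X}: {memory[addr]:02X}"
    if parts[0] == "p":                      # p 10 FF  -> store 0xFF at 0x10
        addr, value = int(parts[1], 16), int(parts[2], 16)
        memory[addr] = value
        return f"{addr:02X}: {memory[addr]:02X}"
    return "?"                               # unknown command

print(monitor("p 10 FF"))  # 10: FF
print(monitor("m 10"))     # 10: FF
```

A full monitor would layer load/save, breakpoint, and disassembly commands over the same examine/deposit core shown here.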
Anders Hejlsberg is a prominent Danish software engineer who co-designed several popular and commercially successful programming languages and development tools. He was the chief architect of Delphi and now works for Microsoft as the lead architect of C# and a core developer on TypeScript. Hejlsberg was born in Copenhagen and studied electrical engineering at the Technical University of Denmark. While at the university in 1980, he began writing programs for the Nascom microcomputer, including a Pascal compiler marketed as Blue Label Software Pascal for the Nascom-2. He soon rewrote it for CP/M and DOS, marketing it first as Compas Pascal and later as PolyPascal. The product was then licensed to Borland and integrated into an IDE to become the Turbo Pascal system, which competed with PolyPascal. The compiler itself was inspired by the "Tiny Pascal" compiler in Niklaus Wirth's "Algorithms + Data Structures = Programs", one of the most influential computer science books of the time. In Borland's hands, Turbo Pascal became one of the most commercially successful Pascal compilers.
Microsoft doubled the bonus to US$1,000,000. Hejlsberg left Borland in October 1996.