1.
Personal computer
–
A personal computer (PC) is a multi-purpose electronic computer whose size, capabilities, and price make it feasible for individual use. PCs are intended to be operated directly by an end user, rather than by an expert or technician. In the 2010s, PCs are typically connected to the Internet, allowing access to the World Wide Web; they may also be connected to a local area network, either by a cable or a wireless connection. A PC may be a multi-component desktop computer designed for use in a fixed location, a laptop computer designed for easy portability, or a tablet computer. PCs run an operating system such as Microsoft Windows or Linux.

The very earliest microcomputers, equipped with a front panel, required hand-loading of a bootstrap program in order to load programs from external storage. Before long, automatic booting from permanent read-only memory became universal. In the 2010s, users have access to a wide range of commercial software, free software, and free and open-source software, which are provided in ready-to-run or ready-to-compile form. Since the early 1990s, Microsoft operating systems and Intel hardware have dominated much of the personal computer market, first with MS-DOS and later with Windows. Alternatives to Microsoft's Windows operating systems occupy a minority share of the industry; these include Apple's OS X and free open-source Unix-like operating systems such as Linux and the Berkeley Software Distribution. Advanced Micro Devices provides the main alternative to Intel's processors.

"PC" is an initialism for "personal computer". Some PCs, including the OLPC XOs, are equipped with x86 or x64 processors but are not designed to run Microsoft Windows. "PC" is also used in contrast with "Mac", an Apple Macintosh computer; this sense of the word was used in the "Get a Mac" advertisement campaign that ran between 2006 and 2009, as well as the rival "I'm a PC" campaign that appeared in 2008. Since Apple's transition to Intel processors starting in 2005, all Macintosh computers are now PCs.

An early commentator speculated that the "brain" may one day come down to our level and help with our income-tax and book-keeping calculations, "but this is speculation and there is no sign of it so far." In the history of computing there were many examples of computers designed to be used by one person, as opposed to terminals connected to mainframe computers. Using the narrow definition of "operated by one person", the first personal computer was the ENIAC, which became operational in 1946, although it did not meet further definitions of affordable or easy to use. An example of an early single-user computer was the LGP-30, created in 1956 by Stan Frankel and used for science; it came with a retail price of $47,000, equivalent to about $414,000 today. Introduced at the 1965 New York World's Fair, the Programma 101 was a programmable calculator described in advertisements as a desktop computer. It was manufactured by the Italian company Olivetti and invented by the Italian engineer Pier Giorgio Perotto. The Soviet MIR series of computers was developed from 1965 to 1969 by a group headed by Victor Glushkov.
2.
Operating system
–
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function. Operating systems are found on many devices that contain a computer, from cellular phones to supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 83.3%; macOS by Apple Inc. is in second place, and the varieties of Linux are in third place. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications.

A single-tasking system can run only one program at a time. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs; Unix-like operating systems such as Solaris and Linux support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to yield time to the other processes in a defined manner: 16-bit versions of Microsoft Windows used cooperative multi-tasking, while 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem.

A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing; distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they form a distributed system. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines like PDAs with less autonomy, and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design; Windows CE and Minix 3 are examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking it uses specialized scheduling algorithms to achieve deterministic behavior. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could run different programs in succession to speed up processing.
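As a rough illustration of the cooperative model described above, the toy C program below runs two tasks in a round-robin loop; each task performs one unit of work and then voluntarily returns control, which is the essence of cooperative multitasking. The task names and the fixed number of rounds are invented for the example; a preemptive system would instead interrupt the running task on a timer tick.

```c
#include <stdio.h>

/* Toy illustration of cooperative multitasking: each "task" does a small
   unit of work and then voluntarily returns (yields), trusting the simple
   round-robin loop below to give the other tasks their turn.  A preemptive
   kernel would instead interrupt tasks on a timer tick.  Hypothetical
   example, not taken from any real operating system. */

static void task_a(void) { puts("task A: one unit of work"); }
static void task_b(void) { puts("task B: one unit of work"); }

int main(void) {
    void (*tasks[])(void) = { task_a, task_b };
    const int n_tasks = 2;

    for (int round = 0; round < 3; round++)   /* three scheduling rounds */
        for (int i = 0; i < n_tasks; i++)
            tasks[i]();                       /* runs until it yields (returns) */
    return 0;
}
```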
3.
IBM PC DOS
–
IBM PC DOS is a discontinued operating system for the IBM Personal Computer, manufactured and sold by IBM from 1981 into the 2000s. Before version 6.1, PC DOS was an IBM-branded version of MS-DOS; from version 6.1 on, PC DOS became IBM's independent product.

The IBM task force assembled to develop the PC decided that critical components of the machine, including the operating system, would come from outside vendors. This radical break from the company's tradition of in-house development was one of the key decisions that made the IBM PC an industry standard. The private company Microsoft, founded five years before by Bill Gates, was selected to provide the operating system. IBM wanted Microsoft to retain ownership of whatever software it developed. According to task force member Jack Sams, "The reasons were internal. We had a problem being sued by people claiming we had stolen their stuff. It could be expensive for us to have our programmers look at code that belonged to someone else because they would then come back and say we stole it. We had lost a series of suits on this, and so we didn't want to have a product which was clearly someone else's product worked on by IBM people. We went to Microsoft on the proposition that we wanted this to be their product."

IBM first contacted Microsoft to look the company over in July 1980; negotiations continued over the next months, and the paperwork was officially signed in early November. Although IBM expected that most customers would use PC DOS, the IBM PC also supported the CP/M-86 operating system, which became available six months after PC DOS, and the UCSD p-System. IBM's expectation proved correct: one survey found that 96.3% of PCs were ordered with the $40 PC DOS compared to 3.4% with the $240 CP/M-86.

Microsoft first licensed, then purchased, 86-DOS from Seattle Computer Products. Bob O'Rear got 86-DOS to run on the prototype PC in February 1981. 86-DOS had to be converted from 8-inch to 5.25-inch floppy disks and integrated with the BIOS. IBM had more people writing requirements for the computer than Microsoft had writing code, and O'Rear often felt overwhelmed by the number of people he had to deal with at the ESD facility in Boca Raton. 86-DOS was rebranded IBM PC DOS 1.0 for its August 1981 release with the IBM PC. The initial version of DOS was largely based on CP/M-80 1.x, mirroring much of its architecture and function calls; the most significant difference was that it introduced a different file system, FAT12. Unlike all later DOS versions, the DATE and TIME commands were separate programs rather than part of COMMAND.COM. Single-sided 160-kilobyte 5.25-inch floppies were the only disk format supported. In late 1981, Tim Paterson, now at Microsoft, began writing PC DOS 1.10. It debuted in May 1982 along with the Revision B IBM PC; support for the new double-sided drives was added, allowing 320 kB per disk.
4.
Interactive Systems Corporation
–
Interactive Systems Corporation (ISC) was a US-based software company and the first vendor of the Unix operating system outside AT&T, operating from Santa Monica, California. ISC was acquired by the Eastman Kodak Company in 1988, which sold its ISC Unix operating system assets to Sun Microsystems on September 26, 1991; Kodak sold the remaining parts of ISC to SHL Systemhouse Inc in 1993. ISC's 1977 offering, IS/1, was a Version 6 Unix variant enhanced for office automation running on the PDP-11; IS/3 and IS/5 were enhanced versions of Unix System III and System V for the PDP-11 and VAX. ISC's Unix ports to the IBM PC included a variant of System III, developed under contract to IBM and known as PC/IX, with later versions branded 386/ix. ISC was AT&T's Principal Publisher for System V.4 on the Intel platform, and was also involved in the development of VM/IX and IX/370.

According to Bob Blake, the PC/IX product manager for IBM, their primary objective was to make a credible Unix system rather than to IBM-ize the product. PC/IX was not, however, the first Unix port to the XT: Venix/86 preceded PC/IX by about a year, although it was based on the older Version 7 Unix. The main addition to PC/IX was the INed screen editor from ISC. INed offered multiple windows and context-sensitive help, paragraph justification and margin changes, although it was not a fully fledged word processor. PC/IX omitted the System III FORTRAN compiler and the tar file archiver; one reason for not porting these was that in PC/IX individual applications were limited to a single segment of 64 KB of RAM. To achieve good performance, PC/IX addressed the XT hard drive directly rather than going through the BIOS; the 8088 chip, moreover, provided no memory protection. The PC/IX distribution came on 19 floppy disks and was accompanied by a printed manual; installed, PC/IX took approximately 4.5 MB of disk space. An editorial by Bill Machrone in PC Magazine at the time of PC/IX's launch flagged the $900 price as a show-stopper given its lack of compatibility with MS-DOS applications.

PC/IX was succeeded by 386/ix in 1985, a System V Release 3 derivative. Later versions were termed Interactive UNIX System V/386 and were based on System V 3.2; the SVR3.2 kernel meant diminished compatibility with other Unix ports in the early nineties, but Interactive Unix was praised by a PC Magazine reviewer for its stability. The last version was System V/386 Release 3.2 Version 4.1.1; official support ended on July 23, 2006, five years after Sun withdrew the product from sale. Until version ISA 3.0.1, Interactive Unix supported only 16 MB of RAM; later versions supported 256 MB of RAM and the PCI bus, and EISA versions always supported 256 MB of RAM.
5.
Xenix
–
Xenix is a discontinued version of the Unix operating system for various microcomputer platforms, licensed by Microsoft from AT&T Corporation in the late 1970s. The Santa Cruz Operation (SCO) later acquired rights to the software. In the mid-to-late 1980s, Xenix was the most common Unix variant; Microsoft chairman Bill Gates said in 1996 that for a long time the company had the highest-volume AT&T Unix license. Bell Labs, the developer of Unix, was part of the regulated Bell System and could not sell Unix directly to most customers; it instead licensed the software to others. Because Microsoft was not able to license the "UNIX" name itself, it called Xenix a "universal operating environment".

The first version of Xenix was very close to the original UNIX Version 7 source on the PDP-11, Microsoft said in 1981, and later versions were to incorporate its own fixes and improvements. The first port was for the Z8001 16-bit processor; the first customer shipment was in January 1981, for Central Data Corporation of Illinois. The first 8086 port was for the Altos Computer Systems non-PC-compatible 8600-series computers. Intel sold complete computers with Xenix under their Intel System 86 brand; these included processor boards like the iSBC 86/12 and also MMU boards such as the iSBC 309. The first Intel Xenix systems shipped in July 1982. Seattle Computer Products also made 8086 computers bundled with Xenix, like their Gazelle II, which used the S-100 bus and was available in late 1983 or early 1984. There was also a port for the IBM System 9000. SCO had initially worked on its own PDP-11 port of V7, called Dynix, but then struck an agreement with Microsoft for joint development and technology exchange on Xenix in 1982. In 1984, a port to the 68000-based Apple Lisa was jointly developed by SCO and Microsoft. The difficulty in porting to the various 8086 and Z8000-based machines, said Microsoft in its 1983 OEM directory, had been the lack of a standardized memory management unit and protection facilities. A generally available port to the unmapped Intel 8086/8088 architecture was done by The Santa Cruz Operation around 1983. SCO Xenix for the PC XT shipped sometime in 1984; it contained some enhancements from 4.2BSD and also supported the Micnet local area networking.

The later 286 version of Xenix leveraged the integrated MMU present on this chip. The 286 Xenix was accompanied by new hardware from Xenix OEMs; for example, the Sperry PC/IT, an IBM PC AT clone, was advertised as capable of supporting eight simultaneous dumb-terminal users under this version. It was followed by a System V.2 codebase in Xenix 5.0. Microsoft hoped that Xenix would become the choice for software production and exchange, and referred to its own MS-DOS as its single-user, single-tasking operating system; Microsoft's Chris Larson described MS-DOS 2.0's Xenix compatibility as its second most important feature. After the breakup of the Bell System, however, AT&T started selling System V; Microsoft, believing that it could not compete with Unix's developer, decided to abandon Xenix. The decision was not immediately transparent, which led to the term "vaporware". Microsoft agreed with IBM to develop OS/2, and the Xenix team was assigned to that project.
6.
Microsoft Windows
–
Microsoft Windows is a metafamily of graphical operating systems developed, marketed, and sold by Microsoft. It consists of several families of operating systems, each of which caters to a certain sector of the computing industry, with the OS typically associated with IBM PC compatible architecture. Active Windows families include Windows NT, Windows Embedded and Windows Phone; defunct Windows families include Windows 9x and Windows Mobile. Windows 10 Mobile is an active product, unrelated to the defunct Windows Mobile family.

Microsoft introduced an operating environment named Windows on November 20, 1985. Microsoft Windows came to dominate the world's personal computer market with over 90% market share, overtaking Mac OS, which had been introduced in 1984. Apple came to see Windows as an encroachment on their innovation in GUI development as implemented on products such as the Lisa. On PCs, Windows is still the most popular operating system; however, in 2014 Microsoft admitted losing the majority of the overall operating system market to Android, because of the massive growth in sales of Android smartphones. In 2014, the number of Windows devices sold was less than 25% that of Android devices sold. This comparison, however, may not be fully relevant, as the two operating systems traditionally target different platforms. As of September 2016, the most recent version of Windows for PCs, tablets and smartphones is Windows 10; the most recent version for server computers is Windows Server 2016. A specialized version of Windows also runs on the Xbox One game console.

Microsoft, the developer of Windows, has registered several trademarks, each of which denotes a family of Windows operating systems that target a specific sector of the computing industry. It now consists of three operating system subfamilies that are released at almost the same time and share the same kernel. Windows: the operating system for personal computers and tablets; the latest version is Windows 10, and the main competitors of this family are macOS by Apple Inc. for personal computers and Android for mobile devices. Windows Server: the operating system for server computers; the latest version is Windows Server 2016, and unlike its client sibling it has adopted a strong naming scheme; the main competitor of this family is Linux. Windows PE: a lightweight version of its Windows sibling, meant to operate as a standalone operating system used for installing Windows on bare-metal computers; the latest version is Windows PE 10.0.10586.0. Windows Embedded: initially, Microsoft developed Windows CE as a general-purpose operating system for every device that was too resource-limited to be called a full-fledged computer.

The following Windows families are no longer being developed: Windows 9x (Microsoft now caters to the consumer market with Windows NT) and Windows Mobile, the predecessor to Windows Phone, which was a mobile operating system.
7.
Central processing unit
–
The computer industry has used the term "central processing unit" at least since the early 1960s. The form, design and implementation of CPUs have changed over the course of their history. Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer. Some computers employ a multi-core processor, which is a single chip containing two or more CPUs called cores; in that context, one can speak of such single chips as sockets. Array processors or vector processors have multiple processors that operate in parallel. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources.

Early computers such as the ENIAC had to be rewired to perform different tasks. Since the term CPU is generally defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner. On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled "First Draft of a Report on the EDVAC"; it was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the wiring of the computer. This overcame a severe limitation of ENIAC, namely the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory.

Early CPUs were custom designs used as part of a larger and sometimes one-of-a-kind computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit. The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines; modern microprocessors appear in electronic devices ranging from automobiles to cellphones. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design, using punched paper tape rather than electronic memory.

Relays and vacuum tubes were used as switching elements; a useful computer requires thousands or tens of thousands of switching devices. The overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs; clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time. The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices.
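To make the stored-program idea concrete, here is a minimal sketch in C of a fetch-decode-execute loop in which the program and its data share one memory array; changing the machine's behaviour only requires changing the memory contents, not rewiring. The three-instruction set (LOAD, ADD, HALT) is invented purely for illustration and does not correspond to EDVAC or any real machine.

```c
#include <stdio.h>
#include <stdint.h>

/* Minimal sketch of the stored-program idea: the "program" lives in the
   same memory array as its data, so changing the machine's behaviour only
   requires changing memory contents.  The instruction set is hypothetical. */

enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };

int main(void) {
    /* memory[]: instructions first (opcode, operand-address pairs), data after */
    uint8_t memory[16] = {
        OP_LOAD, 8,    /* acc = memory[8]          */
        OP_ADD,  9,    /* acc += memory[9]         */
        OP_HALT, 0,
        0, 0,
        40, 2          /* data at addresses 8 and 9 */
    };

    uint8_t pc = 0, acc = 0;          /* program counter and accumulator */
    for (;;) {                        /* fetch-decode-execute cycle      */
        uint8_t op = memory[pc], addr = memory[pc + 1];
        pc += 2;
        if (op == OP_LOAD)      acc = memory[addr];
        else if (op == OP_ADD)  acc += memory[addr];
        else                    break;            /* OP_HALT */
    }
    printf("result: %d\n", acc);      /* prints 42 */
    return 0;
}
```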
8.
Intel 80286
–
The Intel 80286 is a 16-bit microprocessor that was introduced on 1 February 1982. It was the first 8086-based CPU with separate, non-multiplexed address and data buses, and also the first with memory management. The 80286 was employed for the IBM PC/AT, introduced in 1984, and then widely used in most PC/AT-compatible computers until the early 1990s. Although now long obsolete for use in personal computers, 80286-based processors are still widely used in embedded microcontroller applications.

Intel's first 80286 chips were specified for a maximum clock rate of 4, 6 or 8 MHz; AMD and Harris later produced 16 MHz, 20 MHz and 25 MHz parts, respectively. Intersil and Fujitsu also designed fully static CMOS versions of Intel's original depletion-load nMOS implementation. The 6 MHz, 10 MHz and 12 MHz models were reportedly measured to operate at 0.9 MIPS, 1.5 MIPS and 2.66 MIPS respectively. The later E-stepping level of the 80286 was free of the several significant errata that caused problems for programmers.

The 80286 was designed for multi-user systems with multitasking applications, including communications and real-time process control. It had 134,000 transistors and consisted of four independent units: the address unit, bus unit, instruction unit and execution unit. The significantly increased performance over the 8086 was primarily due to the non-multiplexed address and data buses, more address-calculation hardware and a faster multiplier. It was produced in 68-pin packages, including PLCC and LCC. The performance increase of the 80286 over the 8086 could be more than 100% per clock cycle in many programs. This was a large increase, fully comparable to the speed improvements around a decade later when the i486 or the original Pentium were introduced. It was partly due to the non-multiplexed address and data buses: address calculations were performed by a dedicated unit in the 80286, while the older 8086 had to do effective-address computation using its general ALU. The 80286 was also more efficient in the prefetch of instructions, buffering and execution of jumps. Together with the contemporaneous 80186, the 286 added new instructions: ENTER, LEAVE, BOUND, INS, OUTS, PUSHA, POPA, PUSH immediate, IMUL immediate and multi-bit shifts.

The Intel 80286 had a 24-bit address bus and was able to address up to 16 MB of RAM; however, the cost and initial rarity of software using memory above 1 MB meant that 80286 computers were rarely shipped with more than one megabyte of RAM. Additionally, there was a performance penalty involved in accessing extended memory from real mode. The 286 was the first of the x86 CPU family to support protected mode. In addition, it was the first commercially available microprocessor with on-chip MMU capabilities; this would allow IBM compatibles to have advanced multitasking operating systems for the first time and to compete in the Unix-dominated server/workstation market. Several additional instructions were introduced in the protected mode of the 80286 which are helpful for multitasking operating systems. Another important feature of the 80286 is the prevention of unauthorized access. This is achieved by forming separate segments for data, code and stack, and by privilege levels: a segment with a lower privilege level cannot access a segment with a higher privilege level. By design, the 286 could not revert from protected mode to the basic 8086-compatible real mode without a hardware-initiated reset; though the method worked correctly, it imposed a huge performance penalty.
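The addressing arithmetic behind these figures can be sketched as follows; the segment and offset values are arbitrary examples. In real mode a physical address is segment × 16 + offset, limited to 2²⁰ bytes, while the 286's 24-bit address bus gives protected mode a 2²⁴-byte (16 MB) physical space.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Real mode keeps the 8086 scheme: physical = segment * 16 + offset,
       which covers at most 2^20 bytes (1 MB).  Example values only. */
    uint32_t segment = 0xB800, offset = 0x0010;
    uint32_t real_mode_addr = (segment << 4) + offset;
    printf("real-mode address: 0x%05X\n", (unsigned)real_mode_addr);

    /* The 286's protected mode drives a 24-bit address bus, so the
       physical address space grows to 2^24 bytes = 16 MB. */
    printf("real-mode space:     %lu bytes\n", 1UL << 20);
    printf("286 protected space: %lu bytes\n", 1UL << 24);
    return 0;
}
```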
9.
Megabyte
–
The megabyte is a multiple of the unit byte for digital information. Its recommended unit symbol is MB, but sometimes MByte is used. The unit prefix mega is a multiplier of 1,000,000 in the International System of Units (SI); therefore, one megabyte is one million bytes of information. This definition has been incorporated into the International System of Quantities. However, in the computer and information fields, several other definitions are used that arose for historical reasons of convenience. A common usage has been to designate one megabyte as 1,048,576 bytes (1024²). However, most standards bodies have deprecated this usage in favor of a set of binary prefixes. Less common is a convention that used the megabyte to mean 1000 × 1024 (1,024,000) bytes.

The megabyte is thus commonly used to mean either 1000² bytes or 1024² bytes. The interpretation using base 1024 originated as compromise technical jargon for byte multiples that needed to be expressed by powers of 2 but lacked a convenient name: as 1024 approximates 1000, roughly corresponding to the SI prefix kilo-, it became a convenient shorthand for the binary multiple. In 1998 the International Electrotechnical Commission (IEC) proposed standards for binary prefixes requiring the use of megabyte to strictly denote 1000² bytes and mebibyte to denote 1024² bytes. By the end of 2009, the IEC standard had been adopted by the IEEE, EU and ISO. The Mac OS X 10.6 file manager is a notable example of this usage in software: since Snow Leopard, file sizes are reported in decimal units.

In the base-2 convention, 1 MB = 1,048,576 bytes is the definition used by Microsoft Windows in reference to computer memory, such as RAM; this definition is synonymous with the binary prefix mebibyte. In the mixed convention, 1 MB = 1,024,000 bytes is the definition used to describe the formatted capacity of the 1.44 MB 3.5-inch HD floppy disk. Semiconductor memory doubles in size for each address lane added to an integrated circuit package, which favors capacities that are powers of two. The capacity of a disk drive, by contrast, is the product of the sector size, number of sectors per track, number of tracks per side, and the number of disk platters in the drive; changes in any of these factors would not usually double the size. Sector sizes were set as powers of two for convenience in processing, and it was an extension of this to give the capacity of a disk drive in multiples of the sector size, giving a mix of decimal and binary multiples.

Depending on compression methods and file format, a megabyte of data can roughly be: a 4-megapixel JPEG image with normal compression; approximately 1 minute of 128 kbit/s MP3-compressed music; 6 seconds of uncompressed CD audio; or a typical English book volume in plain text format. The human genome consists of DNA representing about 800 MB of data.
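A short C sketch of the three conventions, including the mixed unit used for the "1.44 MB" floppy, is shown below; the printed values simply restate the definitions above.

```c
#include <stdio.h>

int main(void) {
    unsigned long decimal_mb = 1000UL * 1000UL;   /* SI definition: 1,000,000 bytes */
    unsigned long binary_mb  = 1024UL * 1024UL;   /* binary (mebibyte): 1,048,576   */
    unsigned long mixed_mb   = 1000UL * 1024UL;   /* mixed: 1,024,000 bytes         */

    printf("decimal MB: %lu\n", decimal_mb);
    printf("binary  MB: %lu\n", binary_mb);
    printf("mixed   MB: %lu\n", mixed_mb);

    /* The "1.44 MB" HD floppy uses the mixed unit: its true capacity is
       1440 KiB = 1,474,560 bytes, i.e. 1.44 x 1,024,000. */
    printf("1.44 MB floppy: %lu bytes\n", 1440UL * 1024UL);
    return 0;
}
```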
10.
IBM Personal Computer XT
–
The IBM Personal Computer XT, often shortened to the IBM XT, PC XT, or simply XT, is a version of the IBM PC with a built-in hard drive. It was released as IBM Machine Type number 5160 on March 8, 1983. Apart from the hard drive, it was essentially the same as the original PC, with only minor improvements. The XT was mainly intended as an enhanced IBM PC for business users; later floppy-only models would effectively replace the original model 5150 PC. A corresponding 3270 PC featuring 3270 terminal emulation was released later, in October 1983. The motherboard had an Intel 8088 microprocessor running at 4.77 MHz, with a socket for an optional 8087 math coprocessor.

IBM recognized soon after the IBM PC's release in 1981 that its five 8-bit I/O channel expansion slots were insufficient; an internal IBM publication stated in October 1981 about the number of slots that "In my opinion, it could be a problem." A BYTE Magazine article commented that DOS 2.0 "is more revolutionary and advanced than the computer itself." Unlike many hard disk systems on microcomputers at the time, the XT was able to boot directly off the hard drive. Aside from the hard disk, a serial port card was also standard equipment on the XT, all other cards being optional. By the end of 1983, the XT was neck-and-neck with the original PC in sales.

The XT had eight expansion slots: two were behind the floppy drive and shorter than the PC's slots, while the other six fit into the same space as the PC's five slots. Most PC cards would not fit into the two short slots, and some would not fit into the six standard-length, but narrower, slots. The floppy and hard drive adapters and the serial port card occupied several of these slots in the standard configuration. The basic specification was soon upgraded to have 256 KB of RAM as standard, and expansion slots could be used for I/O devices or for memory expansion. Available video cards were initially the Monochrome Display Adapter and the Color Graphics Adapter, with the Enhanced Graphics Adapter and Professional Graphics Controller becoming available in 1984.

The XT had a case similar to that of the IBM PC. It weighed 32 pounds and was approximately 19.5 inches wide by 16 inches deep by 5.5 inches high. The power supply of the original XT sold in the US was configured for 120 V AC only and could not be used with 240 V mains supplies; XTs with 240 V-compatible power supplies were sold in international markets. Both were rated at 130 watts. The operating system usually sold with the XT was PC DOS 2.0 or, by the time the XT was discontinued in early 1987, DOS 3.2. Like the original PC, the XT came with IBM BASIC in its ROM; the XT BIOS also displayed a memory count during the POST, unlike the PC's. The XT was discontinued in the spring of 1987, replaced by the PS/2 Model 30. XT motherboards came in two different versions.
11.
IBM Personal System/2
–
The Personal System/2, or PS/2, was IBM's third generation of personal computers. Released in 1987, it replaced the IBM PC, XT and AT. The PS/2 line was created by IBM in an attempt to regain control of the PC market by introducing an advanced yet proprietary architecture. IBM's considerable market presence, plus the reliability of the PS/2, ensured that the systems would sell in large numbers. At the same time, however, the evolving Wintel architecture was seeing a period of dramatic reductions in price. The OS/2 operating system was announced at the same time as the PS/2 line and was intended to be the primary operating system for models with Intel 286 or later processors. However, at the time of the first shipments only PC DOS was available; OS/2 1.0 and Microsoft's Windows 2.0 became available several months later. IBM also released AIX PS/2, a UNIX operating system, for PS/2 models with Intel 386 or later processors.

For years before IBM released the PS/2, rumors spread about IBM's plans for successors to its IBM PC, XT and AT personal computers. Among the rumors that did not come true: that the company would use proprietary components, that it would release a version of its VM mainframe operating system for the new machines, and that it would design the new computers to make third-party communications products more difficult to design. IBM's PS/2 was designed to remain software compatible with the PC/XT/AT line of computers upon which the large PC clone market was built, but the hardware was quite different. The Compatibility BIOS (CBIOS) was so compatible that it even included Cassette BASIC; while IBM did not publish the BIOS source code, it did promise to publish BIOS entry points.

With the IBM PS/2 line, the Micro Channel Architecture (MCA) was also introduced. MCA was conceptually similar to the channel architecture of the IBM System/360 mainframes. MCA was technically superior to ISA and allowed for higher-speed communications within the system, and it featured many advances not seen in other standards until several years later. Transfer speeds were on par with the much later PCI standard; MCA allowed one-to-one, card-to-card, and multi-card-to-processor simultaneous transaction management, which is a feature of the PCI-X bus format. Bus-mastering capability, bus arbitration, and a form of plug-and-play BIOS management of hardware were all benefits of MCA. (One book from the year 2000 writes that MCA used a version of what we now know as "Plug-N-Play", requiring a special setup disk for each machine.) MCA never gained wide acceptance outside of the PS/2 line, due to IBM's anti-clone practices and incompatibilities with ISA. IBM offered to sell an MCA license to anyone who could afford the royalty; however, royalties were required for every MCA-compatible machine sold, as well as a payment for every IBM-compatible machine the particular maker had made in the past. There was nothing unique in IBM insisting on payment of royalties on the use of its patents applied to Micro Channel based machines; up until that time, however, some companies had failed to pay IBM for the use of its patents on the earlier generation of personal computers.
12.
IBM Personal Computer
–
The IBM Personal Computer, commonly known as the IBM PC, is the original version and progenitor of the IBM PC compatible hardware platform. It is IBM model number 5150, and it was introduced on August 12, 1981. It was created by a team of engineers and designers under the direction of Don Estridge of the IBM Entry Systems Division in Boca Raton, Florida. "IBM compatible" became an important criterion for sales growth; only the Apple Macintosh family kept significant market share without compatibility with the IBM personal computer.

International Business Machines, one of the world's largest companies, had a 62% share of the mainframe computer market in 1981. Its share of the overall computer market, however, had declined from 60% in 1970 to 32% in 1980. In 1979 BusinessWeek asked, "Is IBM just another stodgy, mature company?" and by 1981 its stock price had declined by 22%. IBM's earnings for the first half of the year grew by 5.3%, one third of the inflation rate, while those of minicomputer maker Digital Equipment Corporation grew by more than 35%; I.B.M., it was said, no longer dominated the computer business. IBM wished to avoid the same outcome with the new personal computer industry, dominated by the Commodore PET, the Atari 8-bit family, the Apple II and Tandy Corporation's TRS-80, an industry with $150 million in sales by 1979 and projected growth of more than 40% in the early 1980s. The Japanese project, codenamed "Go", ended before the 1981 release of the American-designed IBM PC, codenamed "Chess". Whether IBM had waited too long to enter an industry in which Apple and others were already successful was unclear. An observer stated that IBM bringing out a computer would be like "teaching an elephant to tap dance". Successful microcomputer company Vector Graphic's fiscal 1980 revenue was $12 million. IBM, by contrast, only sold through its internal sales force, had no experience with resellers or retail stores, and did not introduce the first product designed to work with non-IBM equipment until 1980. Another observer claimed that IBM made decisions very slowly; as with other large computer companies, its new products typically required about four to five years for development. IBM had to learn how to develop, mass-produce and market new computers quickly.

The potential importance to microcomputers of a company so prestigious that a saying in American companies stated "No one ever got fired for buying IBM" was nonetheless clear. InfoWorld, which described itself as "The Newsweekly for Microcomputer Users", stated that the IBM PC "is far and away the media star, not because of its features, but because it exists at all"; when the number-eight company in the Fortune 500 enters the field, the influence of a personal computer made by a company whose name has literally come to mean "computer" to most of the world is hard to contemplate. The editorial acknowledged that some factions in the industry had looked upon IBM as the enemy.

Desktop-sized programmable calculators by Hewlett-Packard had evolved into the HP 9830 BASIC-language computer by 1972. In 1972–1973 a team led by Dr. Paul Friedl developed the SCAMP prototype, which emulated an IBM 1130 minicomputer in order to run APL\1130. In 1973 APL was generally available only on mainframe computers, and most desktop-sized microcomputers such as the Wang 2200 or HP 9800 offered only BASIC.
13.
IBM
–
International Business Machines Corporation (IBM) is an American multinational technology company headquartered in Armonk, New York, United States, with operations in over 170 countries. The company originated in 1911 as the Computing-Tabulating-Recording Company (CTR) and was renamed International Business Machines in 1924. IBM manufactures and markets computer hardware, middleware and software, and offers hosting and consulting services in areas ranging from mainframe computers to nanotechnology. IBM is also a major research organization, holding the record for most patents generated by a business for 24 consecutive years. IBM has continually shifted its business mix by exiting commoditizing markets and focusing on higher-value, more profitable markets. Also in 2014, IBM announced that it would go fabless, continuing to design semiconductors but offloading manufacturing to GlobalFoundries. Nicknamed Big Blue, IBM is one of 30 companies included in the Dow Jones Industrial Average and one of the world's largest employers, with nearly 380,000 employees. Known as IBMers, IBM employees have been awarded five Nobel Prizes, six Turing Awards and ten National Medals of Technology.

In the 1880s, technologies emerged that would ultimately form the core of what would become International Business Machines. On June 16, 1911, four of these companies were amalgamated in New York State by Charles Ranlett Flint, forming a fifth company, the Computing-Tabulating-Recording Company, based in Endicott, New York. The five companies had 1,300 employees and offices and plants in Endicott and Binghamton, New York; Dayton, Ohio; Detroit, Michigan; Washington, D.C.; and Toronto. They manufactured machinery for sale and lease, ranging from commercial scales and industrial time recorders, meat and cheese slicers, to tabulators and punched cards. Thomas J. Watson, Sr., fired from the National Cash Register Company by John Henry Patterson, called on Flint and joined CTR as General Manager; 11 months later he was made President, when court cases relating to his time at NCR were resolved. Having learned Patterson's pioneering business practices, Watson proceeded to put the stamp of NCR onto CTR's companies, and his favorite slogan, "THINK", became a mantra for each company's employees. During Watson's first four years, revenues more than doubled to $9 million. Watson had never liked the clumsy hyphenated title of the CTR, and in 1924 he chose to replace it with the more expansive title International Business Machines. By 1933 most of the subsidiaries had been merged into one company.

In 1937, IBM's tabulating equipment enabled organizations to process unprecedented amounts of data, its clients including the U.S. Government. During the Second World War the company produced small arms for the American war effort. In 1949, Thomas Watson, Sr. created IBM World Trade Corporation, a subsidiary of IBM focused on foreign operations. In 1952, he stepped down after almost 40 years at the company helm. In 1957, the FORTRAN scientific programming language was developed. In 1961, IBM developed the SABRE reservation system for American Airlines, and in 1963 IBM employees and computers helped NASA track the orbital flight of the Mercury astronauts. A year later the company moved its corporate headquarters from New York City to Armonk, New York. The latter half of the 1960s saw IBM continue its support of space exploration. On April 7, 1964, IBM announced the first computer system family, the IBM System/360.
14.
Microprocessor
–
A microprocessor is a computer processor that incorporates the functions of a computer's central processing unit on a single integrated circuit, or at most a few integrated circuits. Microprocessors contain both combinational logic and sequential digital logic, and they operate on numbers and symbols represented in the binary numeral system. The integration of a whole CPU onto a single chip or a few chips greatly reduced the cost of processing power. Integrated circuit processors are produced in large numbers by highly automated processes, resulting in a low per-unit cost. Single-chip processors also increase reliability, as there are many fewer electrical connections that can fail. As microprocessor designs improve, the cost of manufacturing a chip generally stays the same. Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits; microprocessors combined this into one or a few large-scale ICs.

The internal arrangement of a microprocessor varies depending on the age of the design and the intended purposes of the microprocessor, and advancing technology makes more complex and powerful chips feasible to manufacture. A minimal hypothetical microprocessor might include only an arithmetic logic unit (ALU) and a control logic section. The ALU performs arithmetic operations such as addition and subtraction, and logic operations such as AND or OR. Each operation of the ALU sets one or more flags in a status register, which indicate the results of the last operation. The control logic retrieves instruction codes from memory and initiates the sequence of operations required for the ALU to carry out the instruction; a single operation code might affect many individual data paths, registers, and other elements of the processor.

As integrated circuit technology advanced, it became feasible to manufacture more and more complex processors on a single chip. The size of data objects became larger: putting more transistors on a chip allowed word sizes to increase from 4- and 8-bit words up to today's 64-bit words. Additional features were added to the architecture, and more on-chip registers sped up programs. Floating-point arithmetic, for example, was often not available on 8-bit microprocessors; integration of the floating-point unit, first as a separate integrated circuit and then as part of the same microprocessor chip, sped up floating-point calculations. Occasionally, physical limitations of integrated circuits made such practices as a bit-slice approach necessary: instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each data word. With the ability to put large numbers of transistors on one chip, it became feasible to integrate memory on the same die as the processor; this CPU cache has the advantage of faster access than off-chip memory and increases the processing speed of the system for many applications. Processor clock frequency has increased more rapidly than external memory speed, except in the recent past, so cache memory is necessary if the processor is not to be delayed by slower external memory.

A microprocessor is a general-purpose system. Several specialized processing devices have followed from the technology: a digital signal processor (DSP) is specialized for signal processing, while graphics processing units (GPUs) are processors designed primarily for real-time rendering of 3D images.
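The ALU-and-status-register arrangement described above can be sketched in a few lines of C. The flag layout below is invented for illustration and does not follow any particular microprocessor; the point is only that each operation produces both a result and flags that later instructions can test.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical flag bits; real CPUs define their own status-register layout. */
#define FLAG_ZERO  0x1
#define FLAG_CARRY 0x2

static uint8_t flags;

/* 8-bit add that updates the status register, as an ALU would. */
static uint8_t alu_add(uint8_t a, uint8_t b) {
    uint16_t wide = (uint16_t)a + b;        /* widen to detect carry out */
    uint8_t result = (uint8_t)wide;
    flags = 0;
    if (result == 0)   flags |= FLAG_ZERO;
    if (wide > 0xFF)   flags |= FLAG_CARRY;
    return result;
}

int main(void) {
    uint8_t r = alu_add(200, 100);          /* 300 overflows an 8-bit word */
    printf("result=%u zero=%d carry=%d\n",
           (unsigned)r, !!(flags & FLAG_ZERO), !!(flags & FLAG_CARRY));
    return 0;
}
```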
15.
Protected mode
–
In computing, protected mode, also called protected virtual address mode, is an operational mode of x86-compatible central processing units. It allows system software to use features such as virtual memory and paging. When a processor that supports x86 protected mode is powered on, it begins executing instructions in real mode; protected mode may only be entered after the system software sets up several descriptor tables and enables the Protection Enable (PE) bit in control register 0 (CR0).

Protected mode was first added to the x86 architecture in 1982, with the release of Intel's 80286 processor. The Intel 8086, the predecessor to the 286, was originally designed with a 20-bit address bus for its memory. This allowed the processor to access 2²⁰ bytes of memory, equivalent to 1 megabyte; as the cost of memory decreased and memory use increased, the 1 MB limitation became a significant problem. Intel intended to solve this limitation, along with others, with the release of the 286. The initial protected mode, released with the 286, was not widely used; it was used, for example, by Microsoft Xenix, Coherent and Minix. Several shortcomings, such as the inability to access the BIOS or DOS calls due to the inability to switch back to real mode without resetting the processor, prevented widespread usage. The 286 maintained backwards compatibility with its precursor, the 8086, by initially entering real mode on power-up. Real mode functioned virtually identically to the 8086, allowing the vast majority of existing 8086 software to run unmodified on the newer 286. Real mode also served as a more basic mode in which protected mode could be set up. The 286's protected mode enabled 24-bit addressing, which allowed the processor to access 2²⁴ bytes of memory, equivalent to 16 megabytes.

With the release of the 386 in 1985, many of the issues preventing widespread adoption of protected mode were addressed. The 386 was released with an address bus 32 bits wide, and segment sizes were also increased to 32 bits, meaning that the full address space of 4 gigabytes could be accessed without the need to switch between multiple segments. Beyond the larger address bus and segment registers, the 386 added further enhancements to protected mode. Protected mode is now used in all modern operating systems that run on the x86 architecture, such as Microsoft Windows and Linux. Hardware support for virtualizing protected mode itself, however, had to wait for another 20 years.

IBM devised a workaround for returning to real mode, which involved resetting the CPU via the keyboard controller and saving the system registers and stack pointer; this allowed the BIOS to restore the CPU to a similar state after the reset. Later, a triple fault was used to reset the 286 CPU, which was a lot faster and cleaner than the keyboard-controller method. To enter protected mode, the Global Descriptor Table (GDT) must first be created with a minimum of three entries: a null descriptor, a code segment descriptor and a data segment descriptor. In an IBM-compatible machine, the A20 line also must be enabled to allow the use of all the address lines, so that the CPU can access memory beyond 1 megabyte.
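As an illustration of the descriptor tables mentioned above, the C sketch below lays out the 8-byte segment descriptor format used in the GDT and builds the minimal three-entry table (null, code, data). The field split shown is the 80386 layout; on the 80286 the last two bytes are reserved and must be zero. The flat-segment values are conventional examples, not taken from any specific operating system, and the packed-struct attribute assumes a GCC-style compiler.

```c
#include <stdint.h>

/* Sketch of the 8-byte segment descriptor format held in the Global
   Descriptor Table (GCC-style packed struct).  The split base and limit
   fields reflect the 80386 layout; on the 80286 the last two bytes are
   reserved and must be zero. */
struct gdt_entry {
    uint16_t limit_low;   /* segment limit, bits 0-15                     */
    uint16_t base_low;    /* segment base address, bits 0-15              */
    uint8_t  base_mid;    /* base, bits 16-23                             */
    uint8_t  access;      /* present bit, privilege level, segment type   */
    uint8_t  gran_limit;  /* limit bits 16-19 plus granularity/size flags */
    uint8_t  base_high;   /* base, bits 24-31                             */
} __attribute__((packed));

/* A minimal table as described in the text: a null descriptor, a code
   segment descriptor and a data segment descriptor (flat 4 GB segments,
   access bytes 0x9A and 0x92 respectively). */
static struct gdt_entry gdt[3] = {
    {0,      0, 0, 0,    0,    0},   /* null descriptor     */
    {0xFFFF, 0, 0, 0x9A, 0xCF, 0},   /* ring-0 code segment */
    {0xFFFF, 0, 0, 0x92, 0xCF, 0},   /* ring-0 data segment */
};
```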
16.
Industry Standard Architecture
–
Industry Standard Architecture (ISA) is a retronym for the 16-bit internal bus of the IBM PC/AT and similar computers based on the Intel 80286 and its immediate successors during the 1980s. The bus was backward compatible with the 8-bit bus of the 8088-based IBM PC. Originally referred to as the PC/AT bus, it was also termed the I/O Channel by IBM. The 16-bit ISA bus was used with 32-bit processors for several years. An attempt to extend it to 32 bits, called the Extended Industry Standard Architecture (EISA), was not very successful; later buses such as the VESA Local Bus and PCI were used instead, often along with ISA slots on the same mainboard. Derivatives of the AT bus structure were and still are used in ATA/IDE, the PCMCIA standard, CompactFlash and the PC/104 bus. Compaq created the term "Industry Standard Architecture" to replace "PC compatible".

The ISA bus was developed by a team led by Mark Dean at IBM as part of the IBM PC project in 1981. It originated as an 8-bit system; the newer 16-bit standard, the IBM AT bus, was introduced in 1984. The ISA bus was synchronous with the CPU clock. Designed to connect peripheral cards to the motherboard, ISA allows for bus mastering, although only the first 16 MB of main memory are available for direct access. The 8-bit bus ran at 4.77 MHz, while the 16-bit bus operated at 6 or 8 MHz; the IBM RT PC also used the 16-bit bus. It was also available on some non-IBM-compatible machines, such as Motorola 68k-based Apollo and Amiga 3000 workstations, the short-lived AT&T Hobbit and the later PowerPC-based BeBox.

IBM's Micro Channel Architecture (MCA) had many features that would later appear in PCI, the successor of ISA. MCA was far more advanced than the AT bus, and computer manufacturers responded with the Extended Industry Standard Architecture and, later, the VESA Local Bus (VLB); in fact, VLB used some electronic parts originally intended for MCA because component manufacturers were already equipped to manufacture them. Both EISA and VLB were backwards-compatible expansions of the AT bus. Users of ISA-based machines had to know special information about the hardware they were adding to the system: while a handful of devices were essentially plug-and-play, this was rare, and users frequently had to configure several parameters when adding a new device, such as the IRQ line, I/O address or DMA channel. MCA had done away with this complication, and PCI later incorporated many of the ideas first explored with MCA. In reality, ISA plug-and-play could be troublesome and did not become well supported until the architecture was in its final days. PCI slots were the first physically incompatible expansion ports to directly squeeze ISA off the motherboard. At first, motherboards were largely ISA, including a few PCI slots; by the mid-1990s, the two types were roughly balanced, and ISA slots soon were in the minority of consumer systems. Legacy devices on the motherboard still needed an ISA-compatible interface, however, which was why the software-compatible LPC bus was created. In late 2008, even floppy disk drives and serial ports were disappearing, and the extinction of vestigial ISA from chipsets was on the horizon.
17.
Motherboard
–
A motherboard is the main printed circuit board found in general-purpose microcomputers and other expandable systems. It holds, and allows communication between, many of the electronic components of a system, such as the central processing unit and memory. In very old designs, copper wires were the discrete connections between card connector pins, but printed circuit boards soon became the standard practice: the central processing unit, memory, and peripherals were housed on individual printed circuit boards, which were plugged into a backplane. The ubiquitous S-100 bus of the 1970s is an example of this type of backplane system.

During the late 1980s and 1990s, it became economical to move an increasing number of peripheral functions onto the motherboard. Business PCs, workstations and servers were more likely to need expansion cards, either for more robust functions or for higher speeds; laptop and notebook computers that were developed in the 1990s integrated the most common peripherals. This even included motherboards with no upgradeable components, a trend that would continue as smaller systems were introduced after the turn of the century: memory, processors, network controllers, power source and storage would be integrated into some systems.

A motherboard provides the electrical connections by which the other components of the system communicate. Unlike a backplane, it contains the central processing unit and hosts other subsystems. A typical desktop computer has its microprocessor, main memory and other essential components connected to the motherboard. An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components; this chipset determines, to an extent, the features and capabilities of the motherboard. Modern motherboards include sockets in which one or more microprocessors may be installed; in the case of CPUs in ball grid array packages, such as the VIA C3, the CPU is soldered directly to the motherboard. As of 2007, some graphics cards require more power than the motherboard can provide, and thus dedicated connectors have been introduced to attach them directly to the power supply. Motherboards also provide connectors for hard drives, typically SATA only; disk drives also connect to the power supply. Additionally, nearly all motherboards include logic and connectors to support commonly used devices, such as USB for mouse devices. Early personal computers such as the Apple II or IBM PC included only this minimal peripheral support on the motherboard. Occasionally, video interface hardware was also integrated into the motherboard, for example on the Apple II and, rarely, on IBM-compatible computers such as the IBM PCjr; additional peripherals such as disk controllers and serial ports were provided as expansion cards. Given the high thermal design power of high-speed computer CPUs and components, modern motherboards nearly always include heat sinks and mounting points for fans to dissipate excess heat.

Motherboards are produced in a variety of sizes and shapes called computer form factors; however, the motherboards used in IBM-compatible systems are designed to fit various case sizes. As of 2007, most desktop computer motherboards use the ATX standard form factor, even those found in Macintosh and Sun computers. A case's motherboard and PSU form factors must all match, though some smaller form-factor motherboards of the same family will fit larger cases.
18.
Direct memory access
–
Direct memory access (DMA) is a feature of computer systems that allows certain hardware subsystems to access main system memory independently of the central processing unit. Without DMA, when the CPU is using programmed input/output, it is fully occupied for the entire duration of the read or write operation. This feature is useful at any time that the CPU cannot keep up with the rate of data transfer. Many hardware systems use DMA, including disk drive controllers, graphics cards, network cards and sound cards; DMA is also used for data transfer in multi-core processors. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without DMA channels. DMA can also be used for memory-to-memory copying or moving of data within memory, and it can offload expensive memory operations, such as copies or scatter-gather operations, from the CPU. An implementation example is the I/O Acceleration Technology.

Standard DMA, also called third-party DMA, uses a DMA controller. A DMA controller can generate memory addresses and initiate memory read or write cycles. It contains several hardware registers that can be written and read by the CPU, including a memory address register, a byte count register and one or more control registers. To carry out an input, output or memory-to-memory operation, the host processor initializes the DMA controller with a count of the number of words to transfer and the memory address to use. The CPU then sends commands to a peripheral device to initiate the transfer of data. The DMA controller then provides addresses and read/write control lines to the system memory; each time a byte of data is ready to be transferred between the peripheral device and memory, the DMA controller increments its internal address register until the full block of data is transferred.

In a bus mastering system, also known as a first-party DMA system, a peripheral can become bus master and directly write to system memory without involvement of the CPU, providing memory addresses and control signals as required. Some measure must be provided to put the processor into a hold condition so that bus contention does not occur. DMA transfers can occur either one byte at a time or all at once in burst mode. In burst mode, an entire block of data is transferred in one contiguous sequence while the CPU is put on hold; the mode is also called block transfer mode. When memory cycles are much faster than processor cycles, an interleaved DMA cycle is possible, in which the DMA controller uses the memory while the CPU cannot. The cycle stealing mode is used in systems in which the CPU should not be disabled for the length of time needed for burst transfer modes: in cycle stealing mode, control of the system bus is released back to the CPU after each byte of data is transferred; it is then continually requested again via BR, transferring one byte of data per request, until the entire block of data has been transferred. By continually obtaining and releasing control of the system bus, the DMA controller interleaves its transfers with the CPU's work: the CPU processes an instruction, then the DMA controller transfers one data value, and so on.
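The register-driven behaviour of a third-party DMA controller can be modelled in ordinary C as below. This is purely an illustrative model: the "controller" is a struct with an address register and a byte-count register, programmed by the "CPU" and then left to move the data on its own; a real controller does this with bus signals rather than a software loop.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy model of a third-party DMA controller: an address register, a byte
   count register and a data source stand in for the real hardware. */
struct dma_controller {
    uint8_t       *address;  /* memory address register              */
    size_t         count;    /* byte count register                  */
    const uint8_t *device;   /* data source (a peripheral's buffer)  */
};

static void dma_run(struct dma_controller *dma) {
    while (dma->count--)
        *dma->address++ = *dma->device++;  /* one byte per bus cycle */
}

int main(void) {
    const uint8_t device_data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    uint8_t main_memory[8] = {0};

    /* The host processor initialises the controller with the address
       and count ... */
    struct dma_controller dma = { main_memory, sizeof device_data, device_data };
    /* ... and the transfer then proceeds without further CPU involvement. */
    dma_run(&dma);

    printf("last byte transferred: %u\n", main_memory[7]);
    return 0;
}
```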
19.
Intel 8259
–
The Intel 8259 is a programmable interrupt controller (PIC) designed for the Intel 8085 and Intel 8086 microprocessors. The initial part was the 8259; a later version with an "A" suffix was compatible with, and usable with, the 8086 or 8088 processor. The 8259A was the interrupt controller for the ISA bus in the original IBM PC. The 8259 was introduced as part of Intel's MCS-85 family in 1976. The 8259A was included in the original PC introduced in 1981 and retained by the PC/XT when it was introduced in 1983; a second 8259A was added with the introduction of the PC/AT. The 8259 has coexisted with the Intel APIC architecture since its introduction in symmetric multiprocessor PCs. Modern PCs have begun to phase out the 8259A in favor of the Intel APIC architecture; however, while no longer a separate chip, the 8259A interface is still provided by the southbridge chipset on modern x86 motherboards.

Other connections include CAS0 through CAS2 for cascading between 8259s. Up to eight slave 8259s may be cascaded to a master 8259 to provide up to 64 IRQs; 8259s are cascaded by connecting the INT line of a slave 8259 to an IRQ line of the master 8259. There are three registers: an Interrupt Mask Register (IMR), an Interrupt Request Register (IRR) and an In-Service Register (ISR). End-of-interrupt (EOI) operations support specific EOI, non-specific EOI and automatic EOI. A specific EOI specifies the IRQ level it is acknowledging in the ISR; a non-specific EOI resets the highest-priority IRQ level in the ISR; auto-EOI resets the IRQ level in the ISR immediately after the interrupt is acknowledged. Edge and level interrupt trigger modes are supported by the 8259A, and both fixed-priority and rotating-priority modes are supported. The 8259 may be configured to work with an 8080/8085 or an 8086/8088: on the 8086/8088, the interrupt controller provides an interrupt number on the data bus when an interrupt occurs, while the interrupt cycle of the 8080/8085 issues three bytes on the data bus. The 8259A provides additional functionality compared to the 8259 and is upward compatible with it.

The way the 8259A is used in the PC raises two issues, the first of which is more or less the root of the second. DOS device drivers are expected to send a non-specific EOI to the 8259s when they finish servicing their device, which prevents the use of any of the 8259's other EOI modes in DOS. The second issue concerns the use of IRQ2 and IRQ9, which arose from the introduction of a slave 8259 in the PC/AT. The slave 8259's INT output is connected to the master's IR2; the IRQ2 line of the ISA bus, originally connected to this IR2, was rerouted to IR1 of the slave. Thus the old IRQ2 line now generates IRQ9 in the CPU. To allow backwards compatibility with DOS device drivers that still set up for IRQ2, a handler is installed by the BIOS for IRQ9 that redirects interrupts to the original IRQ2 handler. On the PC, the BIOS traditionally maps the master 8259's interrupt requests to interrupt vector offset 8; this was done despite the first 32 interrupt vectors being reserved by the processor for internal exceptions.
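As a sketch of how the non-specific EOI described above is used on the PC, the freestanding C fragment below writes the EOI command to the master and slave 8259A command ports (0x20 and 0xA0). It assumes a GCC-style compiler on x86 with ring-0 I/O privilege, as in a hobby kernel; it is not a complete interrupt handler.

```c
#include <stdint.h>

/* Port I/O helper (GCC inline assembly, x86 only). */
static inline void outb(uint16_t port, uint8_t val) {
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

#define PIC1_CMD 0x20   /* master 8259A command port on the PC   */
#define PIC2_CMD 0xA0   /* slave 8259A command port on the PC    */
#define PIC_EOI  0x20   /* non-specific End Of Interrupt command */

/* Acknowledge a hardware interrupt at the end of its handler.  IRQs 8-15
   arrive through the slave controller, so the slave is acknowledged first
   and then the master it cascades into. */
void pic_send_eoi(uint8_t irq) {
    if (irq >= 8)
        outb(PIC2_CMD, PIC_EOI);
    outb(PIC1_CMD, PIC_EOI);
}
```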
20.
Intel 8237
–
The Intel 8237 is a direct memory access (DMA) controller, a part of the MCS-85 microprocessor family. The 8237 can be expanded by cascading to include any number of DMA channel inputs, and it is capable of DMA transfers at rates of up to 1.6 MB per second. Each channel is capable of addressing a full 64 KB section of memory and can transfer up to 64 KB with a single programming. A single 8237 was used as the DMA controller in the original IBM PC; the IBM PC AT added a second 8237 in a master-slave configuration, increasing the number of DMA channels from four to seven. Later IBM-compatible personal computers may have chip sets that emulate the functions of the 8237 for backward compatibility. The controller offers several transfer modes. Block: the transfer progresses until the word count reaches zero or the EOP signal goes active. Demand: transfers continue until TC or EOP goes active or DREQ goes inactive; the CPU is permitted to use the bus when no transfer is requested. Cascade: used to cascade additional DMA controllers; DREQ and DACK are matched with HRQ and HLDA of the next chip to establish a priority chain, and the actual bus signals are generated by the cascaded chip. The 8237 can also perform memory-to-memory transfers, meaning data can be transferred from one memory location to another: the channel 0 Current Address register is the source for the transfer and channel 1's Current Address register is the destination. Channel 0 is used for DRAM refresh on IBM PC compatibles. In auto-initialize mode the address and count values are restored upon reception of an end-of-process (EOP) signal; this happens without any CPU intervention and is used to repeat the last transfer. The terminal count (TC) signals the end of a transfer to ISA cards; at the end of a transfer an auto-initialize will occur if the channel is configured to do so. In single mode only one byte is transferred per request; for every transfer, the counting register is decremented and the address is incremented or decremented depending on programming. When the counting register reaches zero, the terminal count TC signal is sent to the card. The DMA request DREQ must be raised by the card and held active until it is acknowledged by the DMA acknowledge DACK. In block transfers, the transfer is activated by DREQ, which can be deactivated once acknowledged by DACK; the transfer then continues until end-of-process EOP is activated, which triggers terminal count TC to the card, and auto-initialization may be programmed in this mode. In demand transfers, the transfer is activated by DREQ and acknowledged by DACK and continues until either TC, external EOP or DREQ goes inactive; only TC or external EOP may activate auto-initialization if it is programmed. The 8237 is compatible with the 8086/88 microprocessors. Attempts to cross a 64 KiB boundary in a DMA transfer will wrap around within one 64 KiB block of memory. The IBM PC AT and 100% compatibles use an 80286 CPU and a 16-bit system bus architecture.
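As an illustration of the programming sequence sketched above (mask the channel, clear the byte-pointer flip-flop, load address, page and count, set the mode, unmask), the following C fragment programs channel 2, the channel traditionally wired to the floppy controller on the PC, for one device-to-memory transfer. The port map, the 0x46 mode byte and the outb helper are assumptions based on the conventional IBM PC wiring; a real driver would also have to keep the buffer within one 64 KiB block, as noted above, and coordinate with the peripheral itself.

    #include <stdint.h>

    extern void outb(uint16_t port, uint8_t val);  /* assumed port-I/O helper */

    /* Conventional PC I/O map for the first 8237 (assumption: IBM PC layout). */
    #define DMA_CH2_ADDR   0x04   /* channel 2 address register            */
    #define DMA_CH2_COUNT  0x05   /* channel 2 count register              */
    #define DMA_MASK_REG   0x0A   /* single-channel mask register          */
    #define DMA_MODE_REG   0x0B   /* mode register                         */
    #define DMA_CLEAR_FF   0x0C   /* clear byte-pointer flip-flop          */
    #define DMA_CH2_PAGE   0x81   /* page register for channel 2           */

    /* Program channel 2 for a device-to-memory transfer of 'len' bytes
     * starting at physical address 'phys' (16-bit offset plus page register),
     * single transfer mode as classically used for the floppy controller.
     * Illustrative sketch only. */
    static void dma_setup_ch2_read(uint32_t phys, uint16_t len)
    {
        uint16_t count = len - 1;                 /* the 8237 counts n-1     */

        outb(DMA_MASK_REG, 0x04 | 2);             /* mask channel 2          */
        outb(DMA_CLEAR_FF, 0x00);                 /* reset low/high flip-flop */

        outb(DMA_CH2_ADDR, phys & 0xFF);          /* address, low then high  */
        outb(DMA_CH2_ADDR, (phys >> 8) & 0xFF);
        outb(DMA_CH2_PAGE, (phys >> 16) & 0xFF);  /* page bits above A15     */

        outb(DMA_CLEAR_FF, 0x00);
        outb(DMA_CH2_COUNT, count & 0xFF);        /* count, low then high    */
        outb(DMA_CH2_COUNT, (count >> 8) & 0xFF);

        outb(DMA_MODE_REG, 0x46);                 /* single mode, write to
                                                     memory, channel 2       */
        outb(DMA_MASK_REG, 2);                    /* unmask channel 2        */
    }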
21.
Real-time clock
–
A real-time clock (RTC) is a computer clock that keeps track of the current time. Although the term often refers to the devices in personal computers, servers and embedded systems, RTCs are present in almost any electronic device that needs to keep accurate time. The term real-time clock is used to avoid confusion with ordinary hardware clocks, which are only signals that govern digital electronics; RTC should also not be confused with real-time computing, which shares its three-letter acronym but does not directly relate to time of day. One beneficiary of an accurate RTC is a GPS receiver, which can compare the current time with the time at which it last had a valid signal: if it has been less than a few hours, then the previous ephemeris is still usable. RTCs often have an alternate source of power, so they can continue to keep time while the primary source of power is off or unavailable. This alternate source of power is normally a battery, and it can also supply power to battery-backed RAM. Most RTCs use a crystal oscillator, but some use the power line frequency; in many cases, the oscillator frequency is 32.768 kHz. Many integrated circuit manufacturers make RTCs, including Epson, Intersil, IDT, Maxim, NXP Semiconductors, Texas Instruments and STMicroelectronics. The RTC was introduced to PC compatibles by the IBM PC/AT in 1984, which used a Motorola MC146818 RTC; in newer systems, the RTC is integrated into the southbridge chip. Some microcontrollers have a real-time clock built in, generally only the ones with many other features and peripherals. Some modern computers receive clock information by radio and use it to keep their clocks aligned with broadcast time standards. Some older computer designs, such as Novas and PDP-8s, used a clock derived from the AC mains that was notable for its high accuracy, simplicity and flexibility: the computer's power supply produces a pulse at logic voltages for either each half-wave or each zero crossing of the AC mains, and a wire carries the pulse to an interrupt input. The interrupt handler software counts cycles, seconds and so on, and in this way it can provide an entire clock and calendar. Such a clock also usually formed the basis of the computer's software timing chains; the counting timers used in modern computers provide similar features at lower precision, and may trace their requirements to this type of clock. A software-based clock must be set each time its computer is turned on; originally this was done by computer operators, and when the Internet became commonplace, network time protocols were used to set clocks of this type automatically.
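The PC/AT-style RTC mentioned above is conventionally reached through the CMOS index/data port pair at 70h/71h. A minimal polled read of the time of day might look like the following C sketch; the register numbers, the BCD encoding and the outb/inb helpers are assumptions reflecting common MC146818-compatible behaviour rather than a specific datasheet.

    #include <stdint.h>
    #include <stdio.h>

    extern void outb(uint16_t port, uint8_t val);  /* assumed port-I/O helpers */
    extern uint8_t inb(uint16_t port);

    #define CMOS_INDEX 0x70
    #define CMOS_DATA  0x71

    static uint8_t cmos_read(uint8_t reg)
    {
        outb(CMOS_INDEX, reg);
        return inb(CMOS_DATA);
    }

    /* RTC values default to BCD encoding on PCs. */
    static uint8_t bcd_to_bin(uint8_t v) { return (uint8_t)((v & 0x0F) + (v >> 4) * 10); }

    /* Read the time of day: registers 0x00, 0x02 and 0x04 hold seconds,
     * minutes and hours; status register 0x0A bit 7 is set while an update
     * cycle is in progress, so wait for it to clear first. */
    static void rtc_print_time(void)
    {
        while (cmos_read(0x0A) & 0x80)
            ;                                   /* wait out the update cycle */

        uint8_t sec  = bcd_to_bin(cmos_read(0x00));
        uint8_t min  = bcd_to_bin(cmos_read(0x02));
        uint8_t hour = bcd_to_bin(cmos_read(0x04));

        printf("%02u:%02u:%02u\n", hour, min, sec);
    }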
22.
Nonvolatile BIOS memory
–
Nonvolatile BIOS memory refers to a small memory on PC motherboards that is used to store BIOS settings. It is traditionally called CMOS RAM because it uses a volatile, low-power complementary metal-oxide-semiconductor (CMOS) SRAM powered by a small battery when system power is off. The typical NVRAM capacity is 256 bytes. The CMOS RAM and the real-time clock have since been integrated as a part of the southbridge chipset. The memory battery is generally a CR2032 lithium coin cell. This cell typically lasts about three years when the power supply unit is unplugged or when its power switch is turned off; other common battery types can last significantly longer or shorter periods, and higher temperatures and longer power-off times will shorten battery cell life. When the battery cell is replaced, the system time and CMOS BIOS settings may revert to default values. An unwanted BIOS reset may be avoided by replacing the battery cell with the power supply switch turned on: on ATX motherboards, the power supply then delivers 5 V standby power to the motherboard, keeping the CMOS memory energized while the computer is turned off.
23.
BIOS
–
The BIOS (Basic Input/Output System) is a type of firmware used to perform hardware initialization during the booting process on IBM PC compatible computers, and to provide runtime services for operating systems and programs. The BIOS firmware is built into personal computers, and it is the first software they run when powered on; the name itself originates from the Basic Input/Output System used in the CP/M operating system in 1975. Originally proprietary to the IBM PC, the BIOS has been reverse-engineered by companies looking to create compatible systems. The fundamental purposes of the BIOS in modern PCs are to initialize and test the hardware components and to hand control to further boot software. Variations in the hardware are hidden by the BIOS from programs that use BIOS services instead of directly accessing the hardware. MS-DOS, which was the dominant PC operating system from the early 1980s until the mid-1990s, relied on BIOS services for disk, keyboard, and text display functions. Most BIOS implementations are specifically designed to work with a particular computer or motherboard model. Storing the BIOS in rewritable memory allows easy updates to the BIOS firmware so new features can be added or bugs can be fixed. Unified Extensible Firmware Interface (UEFI) was designed as a successor to BIOS, aiming to address its technical shortcomings, and as of 2014, new PC hardware predominantly ships with UEFI firmware. Together with the underlying hardware-specific, but operating system-independent System BIOS, which resides in ROM, it represents the analogue of the CP/M BIOS. With the introduction of PS/2 machines, IBM divided the System BIOS into real-mode and protected-mode portions. The BIOS of the original IBM PC XT had no interactive user interface; options on the IBM PC and XT were set by switches and jumpers on the main board, while early AT-class machines relied on a configuration program supplied on a diskette. That disk was supplied with the computer, and if it was lost the settings could not easily be changed. Starting around the mid-1990s, it became typical for the BIOS ROM to include a BIOS configuration utility or BIOS setup utility, accessed at system power-up by a particular key sequence. This program allowed the user to set configuration options of the type formerly set using DIP switches. Instead of battery-backed RAM, the modern Wintel machine may store the BIOS configuration settings in flash ROM. Early Intel processors started at physical address 000FFFF0h. When a modern x86 microprocessor is reset, it starts in pseudo 16-bit real mode: the code segment register is initialized with selector F000h, base FFFF0000h, and limit FFFFh, so that execution starts at 4 GB minus 16 bytes. The platform logic maps this address into the system ROM, mirroring address 000FFFF0h. If the system has just been powered up or the reset button was pressed, the full power-on self-test is run; on a warm reboot, parts of the test may be skipped, which saves the time used to detect and test all memory. If the download was apparently successful, the BIOS would verify a checksum on it and then run it.
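The reset address described above can be re-derived with a few lines of arithmetic; the following tiny C program only checks the numbers quoted in the text (segment base FFFF0000h plus instruction pointer FFF0h) and is not a model of the processor itself.

    #include <stdint.h>
    #include <assert.h>

    int main(void)
    {
        /* Reset state described above: CS base FFFF0000h, IP FFF0h. */
        uint64_t cs_base = 0xFFFF0000u;
        uint64_t ip      = 0xFFF0u;
        uint64_t vector  = cs_base + ip;

        assert(vector == 0xFFFFFFF0u);        /* 4 GiB minus 16 bytes */
        assert(vector == (4ull << 30) - 16);
        return 0;
    }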
24.
DIP switch
–
A DIP switch is a manual electric switch that is packaged with others in a group in a standard dual in-line package (DIP). The term may refer to each individual switch or to the unit as a whole. DIP switches are an alternative to jumper blocks; their main advantages are that they are quicker to change and there are no parts to lose. The DIP switch with sliding levers was granted US Patent 4012608 in 1976; it had been applied for in 1974 and was used in 1977 in an Atari pinball (Flipper) game. There are many different kinds of DIP switches; some of the most common are the rotary, slide, and rocker types. Rotary DIP switches contain multiple electrical contacts, one of which is selected by rotating the switch to align it with a number printed on the package; these may be large like thumbwheels, or so small that a screwdriver must be used to change them. The slide and rocker types, which are very common, are arrays of simple single-pole, single-throw contacts. This allows each switch to select a one-bit binary value, and the values of all switches in the package can also be interpreted as one number; for example, seven switches offer 128 combinations, allowing them to select a standard ASCII character, while eight switches offer 256 combinations, which is equivalent to one byte. The DIP switch package also has socket pins or mounting leads to provide an electrical path from the switch contacts to the circuit board. Although circuits can use the contacts directly, it is more common to convert them into high and low logic signals; in this case, the board also needs interface circuitry for the DIP switch, consisting of a series of pull-up or pull-down resistors, a buffer, and decode logic. Typically, the device's firmware reads the DIP switches when the device is powered on (see the sketch below). DIP switches were used extensively on ISA-architecture PC expansion cards to select IRQs and memory addresses. DIP switches were also very commonly used to set security codes on garage door openers as well as on some early cordless phones; this design, which used up to 12 switches in a group, was used to avoid RF interference from other nearby door-opener remotes or other devices. Current garage door openers use rolling-code systems for better security. These types of switches were also used on early video cards for early computers to facilitate compatibility with other video standards; for example, CGA cards allowed for MDA compatibility. Recently, DIP switches have become less common in consumer electronics; reasons include the trend toward smaller products and the demand for easier configuration through software menus. DIP switches are still used in some remote controls to prevent interference, for example to control a ceiling fan that was retrofitted to a single-circuit junction box, and rotary switches are used in X10 home automation to select house codes.
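As a sketch of the firmware-reads-the-switches-at-power-on pattern described above, the following C fragment decodes an 8-position DIP bank wired through pull-up resistors. The memory-mapped port address, the active-low polarity and the meaning assigned to each switch are all hypothetical, chosen only for illustration.

    #include <stdint.h>

    /* Hypothetical memory-mapped input port wired to an 8-position DIP bank
     * through pull-up resistors: an open switch reads 1, a closed switch
     * pulls the line to ground and reads 0. The address is an assumption for
     * the sketch, not a real device. */
    #define DIP_PORT (*(volatile uint8_t *)0x40001000u)

    struct board_config {
        uint8_t device_id;   /* switches 1-4 read together as a 4-bit value */
        uint8_t fast_baud;   /* switch 5                                    */
        uint8_t test_mode;   /* switch 6                                    */
    };

    /* Typically called once at power-on, as the article notes. */
    static struct board_config read_dip_switches(void)
    {
        uint8_t raw = (uint8_t)~DIP_PORT;        /* invert so "on" becomes 1 */
        struct board_config cfg;

        cfg.device_id = raw & 0x0F;              /* low four switches        */
        cfg.fast_baud = (raw >> 4) & 1;
        cfg.test_mode = (raw >> 5) & 1;
        return cfg;
    }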
25.
Read-only memory
–
Read-only memory (ROM) is a type of non-volatile memory used in computers and other electronic devices. Data stored in ROM can only be modified slowly, with difficulty, or not at all. Strictly speaking, read-only memory refers to memory that is hard-wired, such as a diode matrix and the later mask ROM, which cannot be changed after manufacture. Although discrete circuits can in principle be altered, integrated circuits cannot, and the fact that such memory can never be changed is a disadvantage in many applications, as bugs and security issues cannot be fixed and new features cannot be added. More recently, ROM has come to include memory that is read-only in normal operation. The simplest type of solid-state ROM is as old as semiconductor technology itself: combinational logic gates can be joined manually to map an n-bit address input onto arbitrary values of m-bit data output. With the invention of the integrated circuit came mask ROM, in which the data is encoded in the circuit itself. This leads to a number of disadvantages: (1) it is only economical to buy mask ROM in large quantities; (2) the turnaround time between completing the design for a mask ROM and receiving the finished product is long; (3) for the same reason, mask ROM is impractical for R&D work, since designers frequently need to modify the contents of memory as they refine a design; and (4) if a product is shipped with faulty mask ROM, the only way to fix it is to recall the product. Subsequent developments have addressed these shortcomings. PROM, invented in 1956, allowed users to program its contents exactly once by physically altering its structure with the application of high-voltage pulses; this addressed problems 1 and 2 above, since a company can order a large batch of fresh PROM chips and program them as needed. The 1971 invention of EPROM essentially solved problem 3, since EPROM can be reset to its unprogrammed state by exposure to strong ultraviolet light. All of these technologies improved the flexibility of ROM, but at a significant cost per chip. Rewriteable technologies were envisioned as replacements for mask ROM; the most recent development is NAND flash, also invented at Toshiba, and as of 2007 NAND has partially achieved this goal by offering throughput comparable to hard disks, higher tolerance of physical shock, extreme miniaturization, and much lower power consumption. Every stored-program computer may use a form of non-volatile storage to store the initial program that runs when the computer is powered on or otherwise begins execution, and likewise, every non-trivial computer needs some form of mutable memory to record changes in its state as it executes. Forms of read-only memory were employed as non-volatile storage for programs in most early stored-program computers, and for many years ROM could be implemented at a lower cost per bit than RAM. Most home computers of the 1980s stored a BASIC interpreter or operating system in ROM, as other forms of storage such as magnetic disk drives were too costly.
26.
Model F keyboard
–
The Model F was a series of computer keyboards produced from 1981 to 1994 by IBM and later Lexmark. Its mechanical-key design consisted of a buckling spring over a capacitive PCB; the Model F first appeared with the IBM System/23 Datamaster all-in-one computer. The capacitive design is considered superior to that of the later membrane design used on the Model M: it has a lighter actuation force of about 60 g, a crisper feel and louder feedback, and it also has a higher MTBF of over 100 million keypresses. The top metal plates in Model F keyboards are prone to corrosion, and the internal foam can also rot from age, which often requires cleaning and a protective coating to prevent further corrosion. All Model F internal assemblies are held together with metal tabs. A characteristic feature of the Model F is a plastic top shell painted with a cream paint to create a rough texture; the later Model M keyboards used injection-molded plastic rather than paint to achieve this texture. The plastic used in the Model F is quite brittle and prone to hairline cracks, and the paint can wear off from excessive use. It is possible to use a programmable microcontroller to connect to a Model F controller and convert the keyboard to a USB-capable device with unlimited rollover, along with modifying the layout to ANSI.
27.
System request
–
System request (SysRq) is a key on personal computer keyboards that has no standard use. A special BIOS routine — software interrupt 0x15, subfunction 0x85 — was added to signal the OS when SysRq was pushed or released; unlike most keys, nothing is stored in the keyboard buffer when it is pressed. The specific low-level function that the SysRq key was meant for was to switch between operating systems. When the original IBM PC was created in 1980, there were three leading competing operating systems — PC DOS, CP/M-86, and UCSD p-System — while Xenix was added in 1983-1984. The SysRq key was added so that multiple operating systems could be run on the same computer. A special key was needed because most software of the day operated at a low level, often bypassing the OS entirely, and typically made use of many hotkey combinations; the use of Terminate and Stay Resident (TSR) programs further complicated matters. To implement a task-switching or multitasking environment, it was thought that a special, separate key was needed, similar to the way Control-Alt-Delete is used under Windows NT. On 84-key keyboards, SysRq was a key of its own; on the later 101-key keyboard, it shares a key with the Print Screen function, and one must hold down the Alt key while pressing this dual-function key to invoke SysRq. The default BIOS keyboard routines simply ignore SysRq and return without taking action, as did the MS-DOS input routines, and the keyboard routines in libraries supplied with many high-level languages followed suit. Although it is included on most PC keyboards manufactured, and though it is used by some debugging software, the key has no standard function for ordinary use. On the Hyundai/Hynix Super-16 computer, pressing Ctrl+SysRq will hard boot the system. In Linux, the kernel can be configured to provide functions for system debugging and crash recovery through this key; this use is known as the Magic SysRq key. Microsoft has also used SysRq for various OS- and application-level debuggers: in the CodeView debugger, it was used to break into the debugger during program execution, and for the Windows NT remote kernel debugger, it can be used to force the system into the debugger. In embedded systems, the SysRq key is usually used to assert a low level on the RESET# signal.
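On Linux systems where the Magic SysRq facility mentioned above is compiled in and permitted by the kernel.sysrq setting, the same handlers can also be invoked from user space through /proc/sysrq-trigger. The short C program below asks for an emergency sync as an example; it needs root privileges and does nothing on kernels where the facility is disabled.

    #include <stdio.h>

    /* Writing a command character to /proc/sysrq-trigger invokes the
     * corresponding Magic SysRq handler: 's' asks the kernel to sync
     * mounted filesystems; other letters map to other handlers. Whether
     * the file exists, and which commands are permitted, depends on the
     * kernel configuration and the kernel.sysrq sysctl. */
    int main(void)
    {
        FILE *f = fopen("/proc/sysrq-trigger", "w");
        if (!f) {
            perror("sysrq-trigger");      /* facility disabled, or not root */
            return 1;
        }
        fputc('s', f);                    /* emergency sync                  */
        fclose(f);
        return 0;
    }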
28.
Light-emitting diode
–
A light-emitting diode (LED) is a two-lead semiconductor light source. It is a p-n junction diode which emits light when activated: when a suitable voltage is applied to the leads, electrons are able to recombine with electron holes within the device, releasing energy in the form of photons. This effect is called electroluminescence, and the color of the light is determined by the band gap of the semiconductor. LEDs are typically small, and integrated optical components may be used to shape the radiation pattern. Appearing as practical electronic components in 1962, the earliest LEDs emitted low-intensity infrared light; infrared LEDs are still used as transmitting elements in remote-control circuits. The first visible-light LEDs were also of low intensity and limited to red. Modern LEDs are available across the visible, ultraviolet, and infrared wavelengths, with very high brightness. Early LEDs were often used as indicator lamps for electronic devices; they were soon packaged into numeric readouts in the form of seven-segment displays and were commonly seen in digital clocks. Recent developments permit LEDs to be used in environmental and task lighting, have allowed new displays and sensors to be developed, and their high switching rates are also used in advanced communications technology. LEDs have many advantages over incandescent light sources, including lower energy consumption, longer lifetime, improved physical robustness, smaller size, and faster switching. Light-emitting diodes are now used in applications as diverse as aviation lighting, automotive headlamps, advertising, general lighting, traffic signals and camera flashes. As of 2017, LED lamps for home room lighting are as cheap as or cheaper than compact fluorescent lamp sources of comparable output; they are also more energy efficient and, arguably, have fewer environmental concerns linked to their disposal. Electroluminescence as a phenomenon was discovered in 1907 by the British experimenter H. J. Round of Marconi Labs, using a crystal of silicon carbide. Russian inventor Oleg Losev reported creation of the first LED in 1927; his research was distributed in Soviet, German and British scientific journals. Rubin Braunstein of the Radio Corporation of America reported on infrared emission from gallium arsenide and other semiconductor alloys in 1955. Braunstein observed infrared emission generated by simple diode structures using gallium antimonide, gallium arsenide and indium phosphide. In 1957, Braunstein further demonstrated that the rudimentary devices could be used for non-radio communication across a short distance. As noted by Kroemer, Braunstein "had set up a simple optical communications link": the emitted light was detected by a PbS diode some distance away, and this signal was fed into an amplifier and played back by a loudspeaker. "Intercepting the beam stopped the music and we had a great deal of fun playing with this setup." This setup presaged the use of LEDs for optical communication applications. By October 1961, researchers had demonstrated efficient light emission and signal coupling between a GaAs p-n junction light emitter and an electrically isolated semiconductor photodetector.
29.
DIN connector
–
A DIN connector is an electrical connector that was originally standardized in the early 1970s by the Deutsches Institut für Normung (DIN), the German national standards organization. Some of these connectors have also been used in analog video applications and for power connections. The original DIN standards for these connectors are no longer in print and have been replaced with the equivalent international standard IEC 60130-9. While DIN connectors appear superficially similar to the newer professional XLR connectors, they are not compatible. All male connectors of this family feature a 13.2 mm diameter metal shield with a notch that limits the orientation in which the plug can be inserted; the pins are 1.45 mm in diameter and equally spaced in a 7.0 mm diameter circle. The skirt is keyed to ensure that the plug is inserted with the correct orientation, and the basic design also ensures that the shielding is connected between socket and plug prior to any signal path connection being made. There are seven common patterns, with any number of pins from three to eight. Three different five-pin connectors exist, known as 180°, 240°, and 270° after the angle of the arc swept between the first and last pin. There are also two variations each of the six-pin, seven-pin and eight-pin connectors, one of which has the pins arranged over the full 360°. 3-pin and 180° 5-pin connectors will also fit the 270° 7-pin sockets. In addition to these connectors, there are also connectors with 10, 12 and 14 pins. Screw-locking versions of this connector have also been used in instrumentation and process control; in North America this variant is called a small Tuchel connector, after one of the major manufacturers (Tuchel is now a division of Amphenol). The pin and socket inserts are nearly identical to those used in non-locking connectors, and in some cases locking and non-locking connectors can be mated. Additional configurations with up to 24 pins are also offered in the same shell size. In addition to this, the pin numbering of such a connector is inverted with respect to the DIN standards. Some manufacturers offered panel-mounted jacks with potential-free auxiliary contacts that would open if a plug were inserted. A polarised two-pin unshielded connector, designed for connecting a loudspeaker to a power amplifier, is known as the DIN 41529 loudspeaker connector. It exists as a panel-mounting female version and as line-mounted male and female versions. The male version has a central spade pin and a circular pin mounted off-centre; the circular pin is connected to the positive line while the spade is connected to the negative line. The panel-mounting female version is available with or without an auxiliary contact that disconnects the internal speaker of the device if an external speaker connector is inserted.
30.
Num lock
–
Num Lock or Numeric Lock is a key on the numeric keypad of most computer keyboards. It is a lock key, like Caps Lock and Scroll Lock: its state affects the function of some of the keypad keys and is displayed by an LED built into the keyboard. The Num Lock key exists because earlier 84-key IBM PC keyboards did not have cursor control or arrow keys separate from the numeric keypad. Most earlier computer keyboards had separate number keys and cursor keys; however, to reduce cost, the IBM PC combined both functions on the numeric keypad, and Num Lock is used to choose between the two functions. On some laptop computers, the Num Lock key is used to convert part of the main keyboard to act as a numeric keypad rather than letters; on other laptop computers, the Num Lock key is absent. Since Apple keyboards never had a combination of arrow keys and numeric keypad, Apple has keyboards with a separate numeric keypad but no functional Num Lock key; instead, these include a Clear key.
31.
Floppy disk
–
Floppy disks are read and written by a floppy disk drive (FDD). Floppy disks, initially produced as 8-inch media and later in 5¼-inch and 3½-inch sizes, were a common form of data storage and exchange from the mid-1970s into the mid-2000s; the older formats are now usually handled only by legacy equipment. These disks and their associated drives were produced and improved upon by IBM and other companies such as Memorex, Shugart Associates, and Burroughs Corporation. The term floppy disk appeared in print as early as 1970. In 1976, Shugart Associates introduced the first 5¼-inch FDD, and by 1978 there were more than 10 manufacturers producing such drives. There were competing floppy disk formats, with hard- and soft-sector versions and encoding schemes such as FM, MFM and GCR. The 5¼-inch format displaced the 8-inch one for most applications; the most common capacity of the 5¼-inch format in DOS-based PCs was 360 kB, and in 1984 IBM introduced the 1.2 MB dual-sided floppy disk along with its PC-AT model. IBM started using the 720 kB double-density 3½-inch microfloppy disk on its Convertible laptop computer in 1986, and these disk drives could be added to older PC models. In 1988 IBM introduced a drive for 2.88 MB DSED diskettes in its top-of-the-line PS/2 models. Throughout the early 1980s, limitations of the 5¼-inch format became clear: originally designed to be more practical than the 8-inch format, it was itself too large, and as the quality of recording media grew, data could be stored in a smaller area. A number of solutions were developed, with drives at 2-, 2½-, 3- and 3½-inch sizes offered by various companies, but the large market share of the 5¼-inch format made it difficult for these new formats to gain significant market share. A variant on the Sony design, introduced in 1982 by a number of manufacturers, was then rapidly adopted. By the end of the 1980s, 5¼-inch disks had been superseded by 3½-inch disks, and by the mid-1990s 5¼-inch drives had virtually disappeared as the 3½-inch disk became the predominant floppy disk. Floppy disks became ubiquitous during the 1980s and 1990s in their use with personal computers to distribute software, transfer data and create backups. Before hard disks became affordable to the general population, floppy disks were often used to store a computer's operating system, although most home computers from that period have a primary OS and BASIC stored in ROM. By the early 1990s, increasing software size meant that large packages like Windows or Adobe Photoshop required a dozen disks or more. In 1996, there were an estimated five billion standard floppy disks in use. Then, distribution of larger packages was gradually replaced by CD-ROMs and DVDs. External USB-based floppy disk drives are available, and many modern systems provide firmware support for booting from such drives.
32.
Disk sector
–
In computer disk storage, a sector is a subdivision of a track on a magnetic disk or optical disc. Each sector stores a fixed amount of data, traditionally 512 bytes for hard disk drives and 2048 bytes for CD-ROMs and DVD-ROMs; newer HDDs use 4096-byte sectors, known as the Advanced Format. The sector is the minimum storage unit of a hard drive. Most disk partitioning schemes are designed to have files occupy a whole number of sectors regardless of a file's actual size; files that do not fill a whole sector will have the remainder of their last sector filled with zeroes. In practice, operating systems typically operate on blocks of data, which may span multiple sectors. Geometrically, the word sector means a portion of a disk between a center, two radii and a corresponding arc, shaped like a slice of a pie; thus, the disk sector refers to the intersection of a track and such a geometrical sector. In disk drives, each physical sector is made up of three basic parts: the sector header, the data area and the error-correcting code (ECC). The sector header contains information used by the drive and controller, and it may also include an alternate address to be used if the data area is undependable. The address identification is used to ensure that the mechanics of the drive have positioned the read/write head over the correct location. The standard sector size of 512 bytes for magnetic disks was established with the inception of the hard disk drive in 1956. In the 1970s IBM introduced the Direct Access Storage Device with fixed-block architecture using sizes of 512, 1024, 2048 and 4096 bytes, and Cray Research had an 819 disk controller in 1975 that transferred 512 64-bit words per sector. Later, hard disk drives supporting 1024-byte sectors began to be integrated into consumer electronics such as portable media players; however, by far the majority of drives shipped up to the start of the 2010s still used the traditional 512-byte sector size. By the end of 2007, Samsung and Toshiba began shipments of 1.8-inch hard disk drives with 4096-byte sectors. While sector specifically means the physical disk area, the term block has been used loosely to refer to a small chunk of data, and it has multiple meanings depending on the context. In the context of data storage, a filesystem block is an abstraction over disk sectors, possibly encompassing multiple sectors. In other contexts, it may be a unit of a data stream or a unit of operation for a utility. For example, the Unix program dd allows one to set the block size to be used during execution with the parameter bs=bytes, which specifies the size of the chunks of data as delivered by dd. In Linux, the disk sector size can be determined with fdisk -l | grep "Sector size" and the block size can be determined with blockdev --getbsz /dev/sda. Because each sector contains the same number of bytes, the outer sectors have lower bit density than the inner ones.
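Besides the fdisk and blockdev commands quoted above, the sector sizes can be queried programmatically on Linux. The sketch below uses the BLKSSZGET and BLKPBSZGET ioctls; the /dev/sda path is only an example, and the program must be run with permission to open the device.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>    /* BLKSSZGET, BLKPBSZGET (Linux-specific) */

    /* Report the logical and physical sector sizes of a block device,
     * the same information the fdisk/blockdev commands above expose. */
    int main(void)
    {
        const char *dev = "/dev/sda";        /* example device path */
        int fd = open(dev, O_RDONLY);
        if (fd < 0) {
            perror(dev);
            return 1;
        }

        int logical = 0, physical = 0;
        if (ioctl(fd, BLKSSZGET, &logical) == 0 &&
            ioctl(fd, BLKPBSZGET, &physical) == 0)
            printf("%s: logical %d bytes, physical %d bytes per sector\n",
                   dev, logical, physical);
        else
            perror("ioctl");

        close(fd);
        return 0;
    }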
33.
Hard disk drive
–
A hard disk drive (HDD) stores data on rapidly rotating platters coated with magnetic material. The platters are paired with magnetic heads, usually arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored or retrieved in any order. HDDs are a type of non-volatile storage, retaining stored data even when powered off. Introduced by IBM in 1956, HDDs became the dominant secondary storage device for computers by the early 1960s; continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDDs historically, though after extensive industry consolidation most current units are manufactured by Seagate, Toshiba and Western Digital. As of 2016, HDD production is growing, although unit shipments and sales revenues are declining. While SSDs have higher cost per bit, SSDs are replacing HDDs where speed, power consumption, small size and durability are important. The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000. The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as PATA, SATA, USB or SAS cables. Hard disk drives were introduced in 1956, as data storage for an IBM real-time transaction processing computer, and were developed for use with general-purpose mainframes and minicomputers. The first IBM drive, the 350 RAMAC of 1956, was approximately the size of two medium-sized refrigerators and stored five million six-bit characters on a stack of 50 disks. In 1962 the IBM 350 RAMAC disk storage unit was superseded by the IBM 1301 disk storage unit; cylinder-mode read/write operations were supported, and the heads flew about 250 micro-inches above the platter surface. Motion of the head array depended upon a binary system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three home refrigerators placed side by side, storing the equivalent of about 21 million eight-bit bytes, and access time was about a quarter of a second. Also in 1962, IBM introduced the model 1311 disk drive, which used removable disk packs; users could buy additional packs and interchange them as needed, much like reels of magnetic tape. Later models of removable-pack drives, from IBM and others, became the norm in most computer installations, while non-removable HDDs were called fixed disk drives. Some high-performance HDDs were manufactured with one head per track so that no time was lost physically moving the heads to a track; known as fixed-head or head-per-track disk drives, they were very expensive and are no longer in production. In 1973, IBM introduced a new type of HDD code-named Winchester; its primary distinguishing feature was that the disk heads were not withdrawn completely from the stack of disk platters when the drive was powered down. Instead, the heads were allowed to land on an area of the disk surface upon spin-down.
34.
Color Graphics Adapter
–
The Color Graphics Adapter (CGA), introduced in 1981, was IBM's first color graphics card for the IBM PC; for this reason, it also became that computer's first color display standard. Built around the Motorola MC6845 display controller, the CGA card featured several graphics and text modes; the highest display resolution of any mode was 640×200, and the highest color depth supported was 4-bit (16 colors). Among its modes, CGA supports 320×200 graphics in 4 colors chosen from a 16-color hardware palette. The 40×25 text and 320×200 graphics modes are usable with a television, while the 80×25 text and 640×200 graphics modes are intended for a monitor. Despite varying bit depths among the CGA graphics modes, CGA processes colors in its palette in four bits. In graphics modes, colors are set per pixel; in text modes, colors are set per character, with an independent foreground and background color for each character. These four bits are passed on unmodified to the DE-9 connector at the back of the card, leaving all color processing to the RGBI monitor connected to it. With respect to the RGBI color model described above, the monitor converts the digital four-bit color number into analog voltages ranging from 0.0 to 1.0 using an approximately fixed weighting per bit (one common approximation is sketched below). For the composite output, these four-bit color numbers are encoded by the CGA's onboard hardware into an NTSC-compatible signal fed to the card's RCA output jack. For cost reasons, this is not done using an RGB-to-YIQ converter as called for by the NTSC standard; consequently, the hues seen are lacking in purity: notably, both cyan and yellow have a greenish tint, and color 6 again looks dark yellow instead of brown. When the CGA was introduced in 1981, IBM did not offer an RGBI monitor; instead, customers were supposed to use the RCA output with an RF modulator to connect the CGA to their television set. The IBM 5153 Personal Computer Color Display would not be introduced until March 1983. This is relevant insofar as an application or game programmer who used either one of these configurations would have expected color 6 to look dark yellow instead of brown. CGA offers four BIOS text modes. In the 40×25 text modes, up to 16 colors are available and each character is a pattern of 8×8 dots; the effective screen resolution in this mode is 320×200 pixels, though individual pixels cannot be addressed independently. The choice of patterns for any location is thus limited to one of the 256 available characters, the patterns for which are stored in a ROM chip on the card itself; the display font in text mode is fixed and cannot be changed. The card has sufficient video RAM for eight different text pages in this mode. BIOS modes 0 and 1 select 40-column text modes; the difference between the two modes can only be seen on a composite monitor, as mode 0 disables the color burst while mode 1 enables it, allowing for color. Mode 0 and mode 1 are functionally identical on RGB monitors and on later adapters that emulate CGA without supporting composite color output. The 80×25 text modes offer up to 16 colors; each character is again an 8×8 dot pattern, with a pixel aspect ratio of 1:2.4. The effective screen resolution of this mode is 640×200 pixels; again, the pixels cannot be individually addressed.
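The following C sketch shows one common approximation of the RGBI-to-analog conversion referred to above: each colour bit contributes roughly two thirds of full scale and the intensity bit adds roughly one third to every channel, with colour 6 clamped to produce brown in the way attributed to the IBM 5153. The exact weightings differ between monitors, so treat the numbers as illustrative assumptions rather than measured values.

    /* Convert a 4-bit IRGB colour number (0..15) into approximate analog
     * channel levels in the range 0.0-1.0. The 2/3 + 1/3 weighting and the
     * colour-6 special case (dark yellow turned into brown by halving green)
     * are assumptions for illustration, not datasheet figures. */
    struct rgb { double r, g, b; };

    static struct rgb rgbi_to_analog(unsigned colour)
    {
        double i = (colour & 8) ? 1.0 / 3.0 : 0.0;     /* intensity bit      */
        struct rgb out = {
            .r = ((colour & 4) ? 2.0 / 3.0 : 0.0) + i,
            .g = ((colour & 2) ? 2.0 / 3.0 : 0.0) + i,
            .b = ((colour & 1) ? 2.0 / 3.0 : 0.0) + i,
        };

        if (colour == 6)        /* dark yellow -> brown on the 5153 */
            out.g /= 2.0;
        return out;
    }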
35.
IBM Monochrome Display Adapter
–
The Monochrome Display Adapter (MDA) is IBM's standard video display card and computer display standard for the IBM PC, introduced in 1981. The MDA did not have any pixel-addressable graphics modes; it had only a single monochrome text mode, which could display 80 columns by 25 lines of high-resolution text characters or symbols useful for drawing forms. Based on the IBM Datamaster's display system, the standard IBM MDA card was equipped with four kilobytes of video memory. The MDA's high character resolution was a feature meant to facilitate business and word-processing use: each character was rendered in a box of 9×14 pixels, though some characters, such as the lowercase m, were rendered eight pixels across. The theoretical total screen display resolution of the MDA was 720×350 pixels, a number arrived at by multiplying character width by columns of text and character height by rows of text (9 × 80 = 720 and 14 × 25 = 350). However, the MDA again could not address individual pixels; it could work only in text mode. The character patterns were stored in ROM on the card, and the character set could not be changed from hardware code page 437. The only way to simulate graphical screen content was through ASCII art, and because of the lack of pixel-addressable graphics, MDA owners could not play most graphics-based games. At least one game, IBM's One Hundred And One Monochrome Mazes, worked within these limits. Code page 437 included the standard 127 ASCII characters but also another 127 characters, like the aforementioned characters for drawing forms; some of these shapes would later show up in Unicode as box-drawing characters. The characters were also used in early PC games, such as early BBS door games or games like Castle Adventure by Kevin Bales. IBM's original MDA included a printer port, thus avoiding the need for a separate parallel interface on computers fitted with an MDA. The card's registers also allow a monochrome mode to be set, yet there was no software to actually control that feature. Other boards offered MDA compatibility, although with differences in how attributes are displayed or in the font used. PC Magazine reported in June 1983 that while the IBM monochrome display is absolutely beautiful for text and wonderfully easy on the eyes, it is limited to simple character graphics, and that text quality on displays connected to the color/graphics adapter is at best of medium quality and is conducive to eyestrain over the long haul. Introduced in 1982, the non-IBM Hercules Graphics Card offered both an MDA-compatible high-resolution text mode and a monochrome graphics mode. It could address individual pixels and display a black-and-white picture of 720×348 pixels, a resolution better than even the highest monochrome resolution CGA cards could offer. Thus, even without a color capability of any kind, the Hercules adapter's offer of monochrome graphics without sacrificing MDA-equivalent text quality made it a desirable choice for many. Adapters exist to convert MDA output for VGA-input LCD monitors, for example a GBS-8219 converter fed with the video signal and the horizontal and vertical sync lines, set to RGB with separate sync.
36.
Enhanced Graphics Adapter
–
The Enhanced Graphics Adapter (EGA) was introduced in October 1984 by IBM, shortly after its new PC/AT. The EGA standard was made obsolete by the introduction in 1987 of MCGA and VGA with the PS/2 computer line. Shortly before the introduction of VGA, Genoa Systems introduced a half-size graphics card built around a proprietary chip set, which they called Super EGA. EGA produces a display of sixteen simultaneous colors from a palette of sixty-four, at resolutions of up to 640×350. EGA also includes full sixteen-color versions of the CGA 640×200 and 320×200 graphics modes; only the sixteen CGA/RGBI colors are available in these modes. EGA is dual-sync: it scans at 21.8 kHz when 350-line modes are used and at 15.7 kHz when 200-line modes are used. The original CGA modes are also present, though EGA is not 100% hardware compatible with CGA. EGA can also drive an MDA monitor via a setting of switches on the board, though only 640×350 high-resolution monochrome graphics are then available. EGA cards use the PC ISA bus and were produced in both eight- and sixteen-bit versions. The original IBM EGA card had 64 KB of onboard RAM; all third-party cards came with 128 KB already installed and some even 256 KB, allowing multiple graphics pages. A few third-party EGA clones feature a range of extended graphics modes as well as automatic monitor-type detection. EGA supports 640×350 with 16 colors (pixel aspect ratio 1:1.37), 640×350 with 2 colors (1:1.37), 640×200 with 16 colors (1:2.4) and 320×200 with 16 colors (1:1.2); this also allows the CGA's alternate brown color to be used without any additional display hardware. The later VGA standard built on this by allowing each of the 64 colors to be further customized. The extended color palette cannot be used in 200-line modes. When selecting a color from the EGA palette, two bits are used for each of the red, green and blue channels, allowing each channel a value of 0, 1, 2 or 3. To select the color magenta, the red and blue values would be medium intensity: for magenta, the most significant bit in the red and blue values is a 1, so the uppercase R and B placeholders become 1. All other digits are zeros, giving the binary number 000101 for the color magenta; this is 5 in decimal, so setting a palette entry to 5 would result in it being set to magenta. The remaining colors follow the same pattern. The EGA uses a female nine-pin D-subminiature connector which looks identical to the CGA connector, and the hardware signal interface, including the pin configuration, is largely compatible with CGA. However, if a CGA monitor is wired with pin two as its ground, it will not work with the EGA, though it will work with a CGA. Almost all EGA cards have DIP switches on the back of the card to select the monitor type; if CGA is selected, the card will operate in 200-line mode and use 8×8 characters in text mode.
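The magenta example above can be reproduced mechanically. The C helper below packs three 2-bit channel values into the 6-bit rgbRGB palette entry following the bit layout described in the text; the asserts simply re-check the worked example (magenta = 5) plus the brightest white (63).

    #include <stdint.h>
    #include <assert.h>

    /* Pack three 2-bit channel values (0-3 each) into the 6-bit EGA palette
     * entry: the high bit of each channel lands in the low-order "RGB"
     * positions (bits 2, 1, 0) and the low bit in the high-order "rgb"
     * positions (bits 5, 4, 3), giving the rgbRGB layout described above. */
    static uint8_t ega_palette_entry(unsigned red, unsigned green, unsigned blue)
    {
        return (uint8_t)(((red   & 2) ? 0x04 : 0) | ((red   & 1) ? 0x20 : 0) |
                         ((green & 2) ? 0x02 : 0) | ((green & 1) ? 0x10 : 0) |
                         ((blue  & 2) ? 0x01 : 0) | ((blue  & 1) ? 0x08 : 0));
    }

    int main(void)
    {
        assert(ega_palette_entry(2, 0, 2) == 5);   /* magenta, as in the text */
        assert(ega_palette_entry(3, 3, 3) == 63);  /* brightest white         */
        return 0;
    }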
37.
Professional Graphics Controller
–
The Professional Graphics Controller (PGC) is a graphics card manufactured by IBM for PCs. It consists of three interconnected PCBs and contains its own processor and memory. The PGC was, at the time of its release, the most advanced graphics card for the IBM XT, aimed at tasks such as CAD. It was intended for the computer-aided design market and included 320 kB of display RAM. The 8088 was placed directly on the card to permit rapid updates of video memory; other cards forced the PC's CPU to write to video memory through the slower ISA bus. While never widespread in consumer-class personal computers, its US$4,290 list price compared favorably to the US$50,000 dedicated CAD workstations of the time. It was discontinued in 1987 with the arrival of VGA and the 8514. The board was targeted at the CAD market, so limited software support is to be expected; among the software known to support the PGC are IBM's Graphical Kernel System, P-CAD 4.5 and Canyon State Systems' CompuShow. The PGC supports 640×480 with 256 colors from a palette of 4,096, as well as Color Graphics Adapter text modes; its text modes use the EGA 14-pixel font and have 400 rows of pixels. The display adapter was composed of three physical circuit boards and occupied two adjacent expansion slots on the XT or AT motherboard, with the third board sandwiched between the other two. The PGC could not be used in the original IBM PC without modification due to the different spacing of its slots. In addition to its native 640×480 mode, the PGC optionally supported the text and graphics modes of the Color Graphics Adapter; however, it was only partly register-compatible with CGA. The PGC's matching display was the IBM 5175, an analog RGB monitor that is unique to it and not compatible with any other video card without modification. With hardware modification, the 5175 can be used with VGA or Macintosh video; some surplus 5175s in VGA-converted form were still sold by catalog retailers such as COMB as late as the early 1990s.
38.
8250 UART
–
The 8250 UART is an integrated circuit designed for implementing the interface for serial communications. The part was manufactured by the National Semiconductor Corporation and was commonly used in PCs and related equipment such as printers or modems. The chip designations carry suffix letters for later versions of the same chip series: for example, the original 8250 was soon followed by the 8250A. In particular, the original 8250 could repeat transmission of a character if the CTS line was asserted asynchronously during the first transmission attempt. Due to the demand, other manufacturers soon began offering compatible chips; Western Digital offered the WD8250 chip under the Async Communications Interface Adapter and Async Communications Element names. The 16450 UART, commonly used in IBM PC/AT-series computers, improved on the 8250 by permitting higher serial line speeds. The line interface consists of the SOUT, SIN, /RTS, /DTR, DSR, /DCD, /CTS and /RI signals; the clock interface consists of XIN, XOUT, /BAUDOUT and RCLK; the remaining pins form the computer (bus) interface. The interrupt signal is reset to a low level upon the appropriate interrupt service or a reset operation. The 8250 UART was introduced with the IBM PC; the 8250A and 8250B revisions were later released, and the 16450 was introduced with the IBM Personal Computer/AT. The main difference between releases was the maximum allowed communication speed. A very similar, but slightly incompatible, variant of this chip is the Intel 8251.
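As a concrete illustration of the divisor-latch mechanism these UARTs use, the sketch below sets up a polled 9600-8-N-1 link on an 8250/16450 at the conventional COM1 base address 3F8h, assuming the PC's 1.8432 MHz UART clock and outb/inb helpers supplied by the environment; the register offsets follow the usual PC convention, and the code is illustrative rather than a complete driver.

    #include <stdint.h>

    extern void outb(uint16_t port, uint8_t val);  /* assumed port-I/O helpers */
    extern uint8_t inb(uint16_t port);

    #define COM1     0x3F8        /* conventional base address of COM1        */
    #define REG_DLL  (COM1 + 0)   /* divisor latch low byte (when DLAB set)   */
    #define REG_DLM  (COM1 + 1)   /* divisor latch high byte                  */
    #define REG_IER  (COM1 + 1)   /* interrupt enable register                */
    #define REG_LCR  (COM1 + 3)   /* line control register                    */
    #define REG_LSR  (COM1 + 5)   /* line status register                     */

    /* With the PC's 1.8432 MHz UART clock, the divisor for 9600 baud is
     * 115200 / 9600 = 12. A real driver would also program the MCR and
     * interrupts; this sketch runs the port polled. */
    static void uart_init_9600(void)
    {
        outb(REG_IER, 0x00);      /* no interrupts, polled operation          */
        outb(REG_LCR, 0x80);      /* DLAB=1: expose the divisor latch         */
        outb(REG_DLL, 12);        /* divisor low byte                         */
        outb(REG_DLM, 0);         /* divisor high byte                        */
        outb(REG_LCR, 0x03);      /* DLAB=0, 8 data bits, no parity, 1 stop   */
    }

    static void uart_putc(char c)
    {
        while (!(inb(REG_LSR) & 0x20))
            ;                      /* wait for transmit holding register empty */
        outb(COM1, (uint8_t)c);
    }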
39.
X87
–
x87 is a floating-point-related subset of the x86 architecture instruction set. It originated as an extension of the 8086 instruction set in the form of optional floating-point coprocessors that worked in tandem with corresponding x86 CPUs. These microchips had names ending in "87", and the unit was also known as the NPX (numeric processor extension). Most x86 processors since the Intel 80486 have had these x87 instructions implemented in the main CPU. Before x87 instructions were standard in PCs, compilers or programmers had to use rather slow library calls to perform floating-point operations, a method that is still common in embedded systems. The x87 registers are organized as a stack: there are instructions to push, calculate, and pop values on top of this stack, and dyadic operations can also be reversed on an instruction-by-instruction basis, with ST(0) as the unmodified operand and ST(x) as the destination. Furthermore, the contents of ST(0) can be exchanged with another stack register using the FXCH ST(x) instruction. These properties make the x87 stack usable as seven freely addressable registers plus a dedicated accumulator. Such a stack-based interface can potentially minimize the need to save scratch variables in function calls compared with a register-based interface. The x87 provides single-precision, double-precision and 80-bit double-extended-precision binary floating-point arithmetic as per the IEEE 754-1985 standard; by default, the x87 processors all use 80-bit double-extended precision internally. A given sequence of operations may thus behave slightly differently compared to a strict single-precision or double-precision IEEE 754 FPU. In published clock-cycle counts for typical x87 FPU instructions, an A~B notation covers timing variations dependent on transient pipeline status as well as the arithmetic precision chosen (it also includes variations due to numerical cases), while an L→H notation depicts values corresponding to the lowest and the highest maximum clock frequencies that were available; an effective zero clock delay is often possible via superscalar execution. The 5 MHz 8087 was the original x87 processor; compared to typical software-implemented floating-point routines on an 8086, the speed-up factors would be even larger, perhaps by another factor of 10. The 8087 was the first math coprocessor for 16-bit processors designed by Intel, and it was built to be paired with the Intel 8088 or 8086 microprocessors. However, the Intel 8231 floating-point processor was an earlier design: it was a licensed version of AMD's Am9511 of 1977, a family that included the 32-bit Am9511 and Am9511A and the later 64-bit Am9512. The 80187 is the math coprocessor for the Intel 80186 CPU; it is incapable of operating with the 80188, as the 80188 has an 8-bit data bus. The 80187 did not appear at the same time as the 80186 and 80188, but was in fact launched after the 80287 and the 80387. The 80287 is the math coprocessor for the Intel 80286 series of microprocessors; Intel's models included variants with specified upper frequency limits ranging from 6 up to 12 MHz. Later followed the i80287XL with 387 microarchitecture and the i80287XLT, a version intended for laptops.
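The practical effect of the 80-bit internal format can be observed from C on platforms whose long double maps onto the x87 double-extended type; this is a compiler and ABI choice, so the result below is not guaranteed everywhere. The example compares the significand widths and shows a sum that survives in extended precision but rounds away in double, illustrating the "slightly different results" point above.

    #include <float.h>
    #include <stdio.h>

    /* On most x86 compilers 'long double' is the x87 80-bit format with a
     * 64-bit significand, versus 53 bits for 'double'. Some compilers (MSVC,
     * for one) treat long double as double, so treat the output as
     * illustrative rather than universal. */
    int main(void)
    {
        printf("double      significand bits: %d\n", DBL_MANT_DIG);   /* 53 */
        printf("long double significand bits: %d\n", LDBL_MANT_DIG);  /* 64 with x87 */

        /* 1 + 2^-60 is representable with a 64-bit significand but rounds
         * back to 1.0 with a 53-bit one. */
        long double x = 1.0L + 1.0L / (1ULL << 60);
        double      y = 1.0  + 1.0  / (1ULL << 60);
        printf("extended keeps the tiny term: %d, double keeps it: %d\n",
               x > 1.0L, y > 1.0);
        return 0;
    }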
40.
Switched-mode power supply
–
A switched-mode power supply (SMPS) is an electronic power supply that incorporates a switching regulator to convert electrical power efficiently. Like other power supplies, an SMPS transfers power from a DC or AC source to DC loads, such as a computer, while converting voltage. Ideally, a switched-mode power supply dissipates no power; voltage regulation is achieved by varying the ratio of on-to-off time. In contrast, a linear power supply regulates the output voltage by continually dissipating power in the pass transistor. This higher power efficiency is an important advantage of a switched-mode power supply. Switched-mode power supplies may also be smaller and lighter than a linear supply due to the smaller transformer size. Switching regulators are used as replacements for linear regulators when higher efficiency, smaller size or lighter weight is required. They are, however, more complicated; their switching currents can cause electrical noise problems if not carefully suppressed, and simple designs may have a poor power factor. In 1836, induction coils used switches to generate high voltages. In 1910, an inductive discharge ignition system invented by Charles F. Kettering and his company, Dayton Engineering Laboratories Company, went into production for Cadillac; the Kettering ignition system is a mechanically switched version of a flyback boost converter, and variations of this system were used in all non-diesel internal combustion engines until the 1960s, when it was displaced by capacitive discharge ignition systems. On 23 June 1926, British inventor Philip Ray Coursey applied for a patent in his country and in the United States; the patent mentions high-frequency welding and furnaces, among other uses. Around 1936, car radios used electromechanical vibrators to transform the 6 V battery supply to a suitable B+ voltage for the vacuum tubes. In 1959, a transistor oscillation and rectifying converter power supply system was the subject of a U.S. patent. In 1972, the HP-35, Hewlett-Packard's first pocket calculator, was introduced with a transistor switching power supply for its light-emitting diodes, clocks, timing and ROM. In 1973, Xerox used switching power supplies in the Alto minicomputer, and in 1977 the Apple II was designed with a switching-mode power supply. Rod Holt was brought in as engineer, and there were several flaws in the Apple II that were never publicized; one thing Holt has to his credit is that he created the power supply that "allowed us to do a very lightweight computer". In 1980, the HP 8662A 10 kHz-1.28 GHz synthesized signal generator shipped with a switched-mode power supply. A linear regulator provides the desired output voltage by dissipating excess power in ohmic losses, whereas ideal switching elements have no resistance when closed and carry no current when open, so the conversion can in principle be performed without loss. In a boost arrangement the output can exceed the input because the inductor responds to changes in current by inducing its own voltage to counter the change in current, and this voltage adds to the source voltage while the switch is open; this boost converter acts like a transformer for DC signals.
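The duty-cycle relationships implied above can be written down for the ideal, lossless case in continuous conduction: a buck converter scales the input down by the duty cycle D, and the boost converter described in the last paragraph steps it up by 1/(1-D). The tiny C example below just evaluates those textbook formulas; real supplies deviate because of switch, diode and winding losses.

    #include <stdio.h>

    /* Ideal, lossless steady-state relationships for the two basic switching
     * topologies in continuous conduction. D is the fraction of each cycle
     * during which the switch is on (0 < D < 1). */
    static double buck_vout(double vin, double duty)  { return duty * vin; }
    static double boost_vout(double vin, double duty) { return vin / (1.0 - duty); }

    int main(void)
    {
        printf("buck : 12 V in, D=0.42 -> %.2f V out\n", buck_vout(12.0, 0.42));
        printf("boost:  5 V in, D=0.60 -> %.2f V out\n", boost_vout(5.0, 0.60));
        return 0;
    }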
41.
Coercivity
–
Coercivity is the ability of a ferromagnetic material to withstand an external magnetic field without becoming demagnetized; an analogous property, electric coercivity, is the ability of a ferroelectric material to withstand an external electric field without becoming depolarized. Thus coercivity measures the resistance of a material to becoming demagnetized. Coercivity is usually measured in oersted or ampere/meter units and is denoted HC; it can be measured using a B-H analyzer or a magnetometer. Ferromagnetic materials with high coercivity are called magnetically hard materials and are used to make permanent magnets. Materials with low coercivity are said to be magnetically soft; the latter are used in transformer and inductor cores, recording heads, microwave devices, and magnetic shielding. Typically the coercivity of a material is determined by measurement of the magnetic hysteresis loop, also called the magnetization curve. The apparatus used to acquire the data is typically a vibrating-sample or alternating-gradient magnetometer, and the applied field where the data line crosses zero is the coercivity. If an antiferromagnet is present in the sample, the coercivities measured in increasing and decreasing fields may be unequal as a result of the exchange bias effect. The coercivity of a material also depends on the time scale over which the magnetization curve is measured: the magnetization of a material measured at an applied reversed field which is smaller than the coercivity may, over a long time scale, relax to zero. Relaxation occurs when reversal of magnetization by domain wall motion is thermally activated and is dominated by magnetic viscosity. At the coercive field, the vector component of the magnetization of a ferromagnet measured along the applied field direction is zero. There are two modes of magnetization reversal: single-domain rotation and domain wall motion. When the magnetization of a material reverses by rotation, the magnetization component along the applied field is zero because the vector points in a direction orthogonal to the applied field; when the magnetization reverses by domain wall motion, the net magnetization is small in every vector direction because the moments of all the individual domains sum to zero. Magnetization curves dominated by rotation and magnetocrystalline anisotropy are found in relatively perfect magnetic materials used in fundamental research. The role of domain walls in determining coercivity is complicated, since defects may pin domain walls in addition to nucleating them; the dynamics of domain walls in ferromagnets is similar to that of grain boundaries. Common dissipative processes in magnetic materials include magnetostriction and domain wall motion. The coercivity is a measure of the degree of magnetic hysteresis, and the squareness and coercivity are figures of merit for hard magnets, although energy product is most commonly quoted. The 1980s saw the development of rare-earth magnets with high energy products, and since the 1990s new exchange-spring hard magnets with high coercivities have been developed.
42.
Trademark
–
The trademark owner can be an individual, business organization, or any legal entity. A trademark may be located on a package, a label, or on the product itself, and for the sake of corporate identity, trademarks are often displayed on company buildings. A trademark identifies the owner of a particular product or service. The unauthorized usage of trademarks by producing and trading counterfeit consumer goods is known as brand piracy, and the owner of a trademark may pursue legal action against trademark infringement. Most countries require formal registration of a trademark as a precondition for pursuing this type of action. The United States, Canada and other countries also recognize common-law trademark rights, which means action can be taken to protect an unregistered trademark if it is in use; still, common-law trademarks offer the holder in general less legal protection than registered trademarks. A trademark may be designated by the symbols ™, ℠ and ®. A trademark is typically a name, word, phrase, logo, symbol, design or image, and there is also a range of non-conventional trademarks comprising marks which do not fall into these categories, such as those based on colour or smell. Trademarks which are considered offensive are often rejected according to a nation's trademark law. The term trademark is also used informally to refer to any distinguishing attribute by which an individual is readily identified, such as the well-known characteristics of celebrities. When a trademark is used in relation to services rather than products, it may sometimes be called a service mark. In other words, trademarks serve to identify a particular business as the source of goods or services, and the use of a trademark in this way is known as trademark use; certain exclusive rights attach to a registered mark. Different goods and services have been classified by the International Classification of Goods and Services. In trademark treatises it is usually reported that blacksmiths who made swords in the Roman Empire are thought of as being the first users of trademarks; other notable trademarks that have been used for a long time include Löwenbräu. The first trademark legislation was passed by the Parliament of England under the reign of King Henry III in 1266, while the first modern trademark laws emerged in the late 19th century. In France the first comprehensive trademark system in the world was passed into law in 1857 with the Manufacture and Goods Mark Act. In Britain, the Merchandise Marks Act 1862 made it an offense to imitate another's trade mark with intent to defraud or to enable another to defraud, and in 1875 the Trade Marks Registration Act was passed, which allowed formal registration of marks at the UK Patent Office for the first time. Registration was considered to comprise prima facie evidence of ownership of a trade mark. In the United States, Congress first attempted to establish a federal trademark regime in 1870.
43.
ATX
–
ATX (Advanced Technology eXtended) is a motherboard configuration specification developed by Intel in 1995 to improve on previous de facto standards like the AT design. It was the first major change in desktop computer enclosure, motherboard and power supply design in many years, improving standardization and interchangeability of parts. The specification defines the key dimensions, mounting points, I/O panel, and power and connector interfaces between the computer case, the motherboard and the power supply. ATX is the most common motherboard design; other standards for smaller boards usually keep the basic rear layout but reduce the size of the board and the number of expansion slots. Dimensions of a full-size ATX board are 12 × 9.6 in (305 × 244 mm). The official ATX specifications were released by Intel in 1995 and have been revised numerous times since; the most recent ATX motherboard specification is version 2.2, and the most recent ATX12V power supply unit specification is 2.31, released in February 2008. In 2004, Intel announced the BTX standard, intended as a replacement for ATX; as of 2016, the ATX design still remains popular, while BTX has been adopted by only some manufacturers. On the back of the case, some major changes were made relative to the AT standard. Originally, AT-style cases had only a keyboard connector and expansion slots for add-on card backplates; any other onboard interfaces had to be connected via flying leads to connectors which were mounted either on spaces provided by the case or on brackets placed in unused expansion slot positions. ATX cases are instead usually fitted with a rear panel, also known as an I/O plate or I/O shield. If necessary, I/O plates can be replaced to suit the motherboard that is being fitted; the computer will operate correctly without a plate fitted, although there will be open gaps in the case and the EMI/RFI screening will be compromised. Panels were also made that allowed fitting an AT motherboard in an ATX case. ATX also made the PS/2-style mini-DIN keyboard and mouse connectors ubiquitous; AT systems used a 5-pin DIN connector for the keyboard and were typically used with serial-port mice. Many modern motherboards are phasing out the PS/2-style keyboard and mouse connectors in favor of the more modern Universal Serial Bus. Other legacy connectors that are slowly being phased out of modern ATX motherboards include 25-pin parallel ports and 9-pin RS-232 serial ports; in their place are onboard peripheral ports such as Ethernet, FireWire, eSATA, audio and video. Standard ATX provides seven expansion slot positions at 0.8 in spacing; the popular microATX size removes 2.4 inches and three slots, leaving four. Here, width refers to the distance along the external connector edge. Note that AOpen has conflated the term Mini-ATX with a more recent 15 × 15 cm design. A number of manufacturers have added one, two or three additional expansion slots to the standard 12-inch ATX motherboard width. The Server System Infrastructure (SSI) Forum's Compact Electronics Bay (CEB) form factor measures 12 × 10.5 in.