Amateur radio, also known as ham radio, is the use of the radio frequency spectrum for the non-commercial exchange of messages, wireless experimentation, self-training, private recreation, radiosport and emergency communication. The term "amateur" is used to specify "a duly authorised person interested in radioelectric practice with a purely personal aim and without pecuniary interest". The amateur radio service is established by the International Telecommunication Union through the Radio Regulations. National governments regulate the technical and operational characteristics of transmissions and issue individual station licenses with an identifying call sign. Prospective amateur operators are tested on their understanding of key concepts in electronics and of the host government's radio regulations. Radio amateurs use a variety of voice, text and data communications modes and have access to frequency allocations throughout the RF spectrum; this enables communication across a city, a country, the world, or into space.
In many countries, amateur radio operators may also send, receive, or relay radio communications between computers or transceivers connected to secure virtual private networks on the Internet. Amateur radio is represented and coordinated by the International Amateur Radio Union (IARU), which is organized in three regions and has as its members the national amateur radio societies that exist in most countries. According to an estimate made in 2011 by the American Radio Relay League, two million people throughout the world are involved with amateur radio. About 830,000 amateur radio stations are located in IARU Region 2, followed by IARU Region 3 with about 750,000 stations; a smaller number, about 400,000, are located in IARU Region 1. The origins of amateur radio can be traced to the late 19th century, but amateur radio as practiced today began in the early 20th century. The First Annual Official Wireless Blue Book of the Wireless Association of America, produced in 1909, contains a list of amateur radio stations.
This radio callbook lists wireless telegraph stations in Canada and the United States, including 89 amateur radio stations. As with radio in general, amateur radio was associated with various amateur experimenters and hobbyists. Amateur radio enthusiasts have contributed to science, engineering and social services. Research by amateur operators has founded new industries, built economies, empowered nations and saved lives in times of emergency. Ham radio can also be used in the classroom to teach English, map skills, math and computer skills. The term "ham" was first a pejorative used in professional wired telegraphy during the 19th century to mock operators with poor Morse code sending skills. It continued to be used after the invention of radio and the proliferation of amateur experimentation with wireless telegraphy; the use of "ham" meaning "amateurish or unskilled" survives today in other disciplines. The amateur radio community subsequently began to reclaim the word as a label of pride, and by the mid-20th century it had lost its pejorative meaning.
Although not an acronym, it is sometimes mistakenly written as "HAM" in capital letters. The many facets of amateur radio attract practitioners with a wide range of interests. Many amateurs begin with a fascination with radio communication and then combine other personal interests to make pursuit of the hobby rewarding; some of the focal areas amateurs pursue include radio contesting, radio propagation study, public service communication, technical experimentation and computer networking. Amateur radio operators use various modes of transmission to communicate; the two most common modes for voice transmissions are frequency modulation (FM) and single sideband (SSB). FM offers high-quality audio signals, while SSB is better at long-distance communication when bandwidth is restricted. Radiotelegraphy using Morse code, known as "CW" from "continuous wave", is the wireless extension of the landline telegraphy developed by Samuel Morse and dates to the earliest days of radio. Although computer-based modes and methods have largely replaced CW for commercial and military applications, many amateur radio operators still enjoy using the CW mode—particularly on the shortwave bands and for experimental work, such as earth-moon-earth communication, because of its inherent signal-to-noise ratio advantages.
Morse, using internationally agreed message encodings such as the Q code, enables communication between amateurs who speak different languages. It is popular with homebrewers and in particular with "QRP" (very-low-power) enthusiasts, since CW-only transmitters are simpler to construct, and the human ear-brain signal processing system can pull weak CW signals out of noise where voice signals would be inaudible. A similar "legacy" mode popular with home constructors is amplitude modulation, pursued by many vintage amateur radio enthusiasts and aficionados of vacuum tube technology. Demonstrating proficiency in Morse code was for many years a requirement to obtain an amateur license to transmit on frequencies below 30 MHz. Following changes in international regulations in 2003, countries are no longer required to demand proficiency; the United States Federal Communications Commission, for example, eliminated the Morse code requirement for all license classes in 2007.
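As a toy illustration of the internationally standardized encoding mentioned above, the following Python sketch encodes text to Morse. The table covers only a handful of characters, not the full ITU alphabet, and the function name is a hypothetical choice for this example:

```python
# Minimal Morse code encoder -- covers only a few characters of the
# ITU Morse alphabet as an illustration, not a complete table.
MORSE = {
    "A": ".-", "C": "-.-.", "D": "-..", "E": ".", "O": "---",
    "Q": "--.-", "R": ".-.", "S": "...", "T": "-", "X": "-..-",
    " ": "/",  # the word gap is conventionally written as a slash
}

def encode(text: str) -> str:
    """Encode text to Morse, separating letters with single spaces."""
    return " ".join(MORSE[ch] for ch in text.upper())

print(encode("CQ DX"))  # -.-. --.- / -.. -..-
```

"CQ" is the conventional general call, and "DX" denotes distant stations; the slash-separated output mirrors how Morse is usually transcribed on paper.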
Internet service provider
An Internet service provider (ISP) is an organization that provides services for accessing, using, or participating in the Internet. Internet service providers may be organized in various forms, such as commercial, community-owned, non-profit, or otherwise owned. Internet services typically provided by ISPs include Internet access, Internet transit, domain name registration, web hosting, Usenet service and colocation. The Internet was originally developed as a network between government research laboratories and participating departments of universities. Other companies and organizations joined by direct connection to the backbone, or by arrangements through other connected companies, sometimes using dialup tools such as UUCP. By the late 1980s, a process was set in place towards commercial use of the Internet; the remaining restrictions were removed by 1991, shortly after the introduction of the World Wide Web. During the 1980s, online service providers such as CompuServe and America Online began to offer limited capabilities to access the Internet, such as e-mail interchange, but full access to the Internet was not available to the general public.
In 1989, the first Internet service providers, companies offering the public direct access to the Internet for a monthly fee, were established in Australia and the United States. In Brookline, Massachusetts, The World became the first commercial ISP in the US; its first customer was served in November 1989. These companies offered dial-up connections, using the public telephone network to provide last-mile connections to their customers. The barriers to entry for dial-up ISPs were low, and many providers emerged. However, cable television companies and the telephone carriers already had wired connections to their customers and could offer Internet connections at much higher speeds than dial-up, using broadband technology such as cable modems and digital subscriber line (DSL). As a result, these companies became the dominant ISPs in their service areas, and what was once a competitive ISP market became a monopoly or duopoly in countries with a commercial telecommunications market, such as the United States. On 23 April 2014, the U.
S. Federal Communications Commission (FCC) was reported to be considering a new rule that would permit ISPs to offer content providers a faster track to send content, thus reversing its earlier net neutrality position. A possible solution to net neutrality concerns may be municipal broadband, according to Professor Susan Crawford, a legal and technology expert at Harvard Law School. On 15 May 2014, the FCC decided to consider two options regarding Internet services: first, permit fast and slow broadband lanes, thereby compromising net neutrality; and second, reclassify broadband as a telecommunication service, thereby preserving net neutrality. On 10 November 2014, President Barack Obama recommended that the FCC reclassify broadband Internet service as a telecommunications service in order to preserve net neutrality. On 16 January 2015, Republicans presented legislation, in the form of a U. S. Congress H. R. discussion draft bill, that made concessions to net neutrality but prohibited the FCC from accomplishing that goal or enacting any further regulation affecting Internet service providers. On 31 January 2015, AP News reported that the FCC would present the notion of applying Title II of the Communications Act of 1934 to the Internet in a vote expected on 26 February 2015.
Adoption of this notion would reclassify Internet service from one of information to one of telecommunications and, according to Tom Wheeler, chairman of the FCC, ensure net neutrality. The FCC was expected to enforce net neutrality in its vote, according to The New York Times. On 26 February 2015, the FCC ruled in favor of net neutrality by applying Title II of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996 to the Internet. The FCC chairman, Tom Wheeler, commented, "This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech. They both stand for the same concept." On 12 March 2015, the FCC released the specific details of the net neutrality rules. On 13 April 2015, the FCC published the final rule on its new "net neutrality" regulations; these rules went into effect on 12 June 2015. Upon becoming FCC chairman in April 2017, Ajit Pai proposed an end to net neutrality, awaiting votes from the commission. On 21 November 2017, Pai announced that a vote would be held by FCC members on 14 December on whether to repeal the policy.
On 11 June 2018, the repeal of the FCC's network neutrality rules took effect. Access-provider ISPs provide Internet access, employing a range of technologies to connect users to their network. Available technologies have ranged from computer modems with acoustic couplers on telephone lines, to television cable, Wi-Fi and fiber optics. For users and small businesses, traditional options include copper wires providing dial-up, DSL (typically asymmetric digital subscriber line), cable modem or Integrated Services Digital Network (ISDN). Running fiber optics to end users is called Fiber To The Home (FTTH) or similar names. Customers with more demanding requirements can use higher-speed DSL, metropolitan Ethernet, gigabit Ethernet, Frame Relay, ISDN Primary Rate Interface, ATM and synchronous optical networking. Wireless access is another option, including satellite Internet access. A mailbox provider is an organization that provides services for hosting electronic mail domains with access to storage for mail boxes
Acorn Archimedes
The Acorn Archimedes is a family of personal computers designed by Acorn Computers Ltd in Cambridge and sold from the late 1980s to the mid-1990s; it was Acorn's first general-purpose home computer based on its own ARM architecture. The first Archimedes was launched in 1987. The ARM RISC design, a 32-bit CPU running at 8 MHz, was stated as achieving 4.5+ MIPS, which provided a significant upgrade from 8-bit home computers such as Acorn's previous machines. Claims of being the fastest micro in the world, running at 18 MIPS, were made during tests. Some models in the family omitted the Archimedes part of the name. Archimedes machines are no longer sold, but computers such as the Raspberry Pi can still run their operating system, RISC OS, as they use compatible ARM chips. The Acorn Archimedes was the first RISC-based home computer. The first models were released in June 1987 as the 300 and 400 series; the 400 series included an ST-506 controller for an internal hard drive. Both series included the Arthur operating system, the BBC BASIC programming language and an emulator for Acorn's earlier BBC Micro, and were mounted in two-part cases with a small central unit, monitor on top, a separate keyboard and a three-button mouse.
All models featured eight-channel 8-bit stereo sound and were capable of displaying 256 colours on screen. Three models were released with different amounts of memory: the A305, A310 and A440. These were soon replaced with the A420/1 and A440/1, which featured an upgraded MEMC1a. The A540, unveiled in September 1990, supported up to 16 MiB of RAM and included higher-speed SCSI and provision for connecting Genlock devices. The 300 and 400 series were followed by a number of machines with minor changes and upgrades. Work began on a successor to the Arthur operating system, initially named Arthur 2; it was renamed RISC OS 2 and shipped with new computers. A number of new machines were introduced along with RISC OS 2, and in May 1989 the 300 series was phased out in favour of the new Acorn A3000. Earlier models which shipped with Arthur could be upgraded to RISC OS 2 by replacing the ROM chips containing the operating system. The A3000 used an 8 MHz ARM2 and was supplied with 1 MB of RAM. Unlike the previous models, the A3000 came in a single-part case similar to the BBC Micro, Amiga 500 and Atari ST computers, with the keyboard integrated in the base unit.
This kind of housing consumes a lot of desktop space, a problem that Acorn tried to overcome by offering a monitor stand that attached to the base unit. The new model sported only a single internal expansion slot, physically different from that of the earlier models, although electronically similar. An external connector could interface to existing expansion cards, although they needed to be housed in an external case joined to the main unit. A300 series, A400 series, R140 and A3000 machines had the VIDC1a video chip, which provided a wide variety of screen resolutions, such as those provided by the operating system:
160 × 256 with 4, 16 or 256 possible colours
320 × 256 with 2, 4, 16 or 256 possible colours
640 × 256 with 2, 4, 16 or 256 possible colours
640 × 512 with 2, 4, 16 or 256 possible colours
800 × 600 with 2, 4 or 16 possible colours
while the chip could be made to run others, such as:
1152 × 896 with 2 possible colours
where the palette range was 4096 colours and the VIDC1a had 16 hardware palette registers.
This meant that in screen modes with sixteen colours or fewer, the colours could be mapped to any of the 4096 available. However, in 256-colour modes, 4 bits of the colour data were hardware-derived and could not be adjusted; the net result was 256 colours, but only 16 of them could be assigned as desired, covering a range of the 4096 available colours. The chip had no horizontal sync interrupt, meaning that it was difficult, but not impossible, to display additional colours by changing the palette for each scan line, thanks to the 2 MHz IOC timer 1; many demos managed to display 4096 colours on screen, or more with dithering. It had one hardware sprite, 32 pixels wide and of unlimited height, where each pixel is coded in two bits: value 0 is for transparency and the other three values are chosen from the 4096-colour palette. In 1991, the A5000 was launched; it featured the new 25 MHz ARM3 processor, 2 or 4 MB of RAM, either a 40 MB or an 80 MB hard drive and a more conventional pizza-box-style two-part case.
Its enhanced video capabilities allowed the A5000 to comfortably display VGA resolutions of up to 800 × 600 pixels. It was the first Archimedes to feature a high-density-capable floppy disc drive as standard; this natively supported various formats, including Atari discs. A later version of the A5000 featured a 33 MHz ARM3, 4 or 8 MB of RAM and an 80 or 120 MB hard drive. The A5000 ran the new 3.0 version of RISC OS, although several bugs were identified.
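The memory cost of the VIDC screen modes listed earlier follows directly from resolution and colour depth, since pixels are packed at log2(colours) bits each. A small Python sketch (the function name is an arbitrary choice for this illustration):

```python
from math import log2

def framebuffer_bytes(width: int, height: int, colours: int) -> int:
    """Bytes of screen memory for a packed-pixel mode.
    Bits per pixel is log2(colours): 256 colours -> 8 bpp, 16 -> 4 bpp,
    and so on down to 2 colours -> 1 bpp."""
    bpp = int(log2(colours))
    return width * height * bpp // 8

# A few of the VIDC1a modes listed above:
print(framebuffer_bytes(320, 256, 256))   # 81920 bytes = 80 KiB
print(framebuffer_bytes(640, 512, 16))    # 163840 bytes = 160 KiB
print(framebuffer_bytes(1152, 896, 2))    # 129024 bytes = 126 KiB
```

This shows why high-resolution, high-colour modes were a meaningful fraction of a 1 MB machine's RAM: 640 × 512 at 256 colours alone would take 320 KiB.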
Internet protocol suite
The Internet protocol suite is the conceptual model and set of communications protocols used in the Internet and similar computer networks. It is commonly known as TCP/IP because the foundational protocols in the suite are the Transmission Control Protocol (TCP) and the Internet Protocol (IP); it is also known as the Department of Defense model because the development of the networking method was funded by the United States Department of Defense through DARPA. The Internet protocol suite provides end-to-end data communication, specifying how data should be packetized, transmitted and received; this functionality is organized into four abstraction layers, which classify all related protocols according to the scope of networking involved. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment; the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications. The technical standards underlying the Internet protocol suite and its constituent protocols are maintained by the Internet Engineering Task Force.
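As a minimal sketch of the suite from an application's point of view, the loopback TCP echo below uses Python's standard socket API: the application layer sees only a reliable transport-layer byte stream, while TCP segmentation, IP packets and link frames are built underneath. The port is chosen ephemerally and the message text is arbitrary:

```python
import socket
import threading

def echo_server(server: socket.socket) -> None:
    """Accept one connection and echo back whatever the peer sends."""
    conn, _addr = server.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Bind to an ephemeral port on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_server, args=(server,))
t.start()

# Client side: the application only reads and writes bytes; the
# lower layers of the suite handle delivery and ordering.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, internet")
    reply = client.recv(1024)

t.join()
server.close()
print(reply)  # b'hello, internet'
```

The same layering is why the application code is identical whether the peer is on the same machine, across an Ethernet segment, or on another continent.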
The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems. The Internet protocol suite resulted from research and development conducted by the Defense Advanced Research Projects Agency (DARPA) in the late 1960s. After initiating the pioneering ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf, who had helped develop the existing ARPANET Network Control Program protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET. By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation in which the differences between local network protocols were hidden by using a common internetwork protocol and, instead of the network being responsible for reliability as in the ARPANET, this function was delegated to the hosts.
Cerf credits Hubert Zimmermann and Louis Pouzin, designer of the CYCLADES network, with important influences on this design. The protocol was implemented as the Transmission Control Program, first published in 1974. Initially, the Transmission Control Program managed both datagram transmissions and routing, but as the protocol grew, other researchers recommended a division of functionality into protocol layers. Advocates included Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Request for Comments (RFC) series, the technical and strategic document series that has both documented and catalyzed Internet development. Postel stated, "We are screwing up in our design of Internet protocols by violating the principle of layering." Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers, whereas a monolithic design would lead to scalability issues. The Transmission Control Program was accordingly split into two distinct protocols, the Transmission Control Protocol and the Internet Protocol.
The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. This design is known as the end-to-end principle. Using this design, it became possible to connect any network to the ARPANET, irrespective of the local characteristics, thereby solving Kahn's initial internetworking problem. One popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, can run over "two tin cans and a string." Years later, as a joke, the IP over Avian Carriers formal protocol specification was created and successfully tested. A computer called a router is provided with an interface to each network; it forwards network packets back and forth between them. A router was originally called a gateway, but the term was changed to avoid confusion with other types of gateways. From 1973 to 1974, Cerf's networking research group at Stanford worked out details of the idea, resulting in the first TCP specification.
A significant technical influence was the early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which existed around that time. DARPA then contracted with BBN Technologies, Stanford University and University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, a split into TCP v3 and IP v3, and TCP/IP v4; the last protocol is still in use today. In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London. In November 1977, a three-network TCP/IP test was conducted between sites in the US, the UK and Norway. Several other TCP/IP prototypes were developed at multiple research centers between 1978 and 1983. In March 1982, the US Department of Defense declared TCP/IP the standard for all military computer networking. In the same year, Peter T. Kirstein's research group at University College London adopted the protocol. The migration of the ARPANET to TCP/IP was completed on flag day, January 1, 1983, when the new protocols were permanently activated.
In 1985, the Internet Advisory Board held a three-day TCP/IP workshop for the computer industry, promoting the protocol and leading to its increasing commercial use.
Automatic Packet Reporting System
Automatic Packet Reporting System (APRS) is an amateur radio-based system for real-time digital communication of information of immediate value in the local area. Data can include an object's Global Positioning System (GPS) coordinates, weather station telemetry, text messages, announcements and other telemetry. APRS data can be displayed on a map, which can show stations, tracks of moving objects, weather stations, search and rescue data and direction finding data. APRS data are transmitted on a single shared frequency to be repeated locally by area relay stations for widespread local consumption. In addition, all such data are ingested into the APRS Internet System (APRS-IS) via Internet-connected receivers and distributed globally for ubiquitous and immediate access. Data shared via radio or the Internet are collected by all users and can be combined with external map data to build a shared live view. APRS has been developed since the late 1980s by Bob Bruninga, call sign WB4APR, a senior research engineer at the United States Naval Academy.
He still maintains the main APRS Web site. The initialism "APRS" was derived from his call sign. Bruninga implemented the earliest ancestor of APRS on an Apple II computer in 1982; this early version was used to map high-frequency Navy position reports. The first use of APRS was in 1984, when Bruninga developed a more advanced version on a Commodore VIC-20 for reporting the position and status of horses in a 100-mile endurance run. During the next two years, Bruninga continued to develop the system, which he now called the Connectionless Emergency Traffic System (CETS). Following a series of Federal Emergency Management Agency exercises using CETS, the system was ported to the IBM Personal Computer. During the early 1990s, CETS continued to evolve into its current form. As GPS technology became more widely available, "Position" was replaced with "Packet" to better describe the more generic capabilities of the system and to emphasize its uses beyond mere position reporting.
APRS is a digital communications protocol for exchanging information among a large number of stations covering a large area. As a multi-user data network, it is quite different from conventional packet radio. Rather than using connected data streams, where stations connect to each other and packets are acknowledged and retransmitted if lost, APRS operates in an unconnected broadcast fashion, using unnumbered AX.25 frames. APRS packets are transmitted for all other stations to use. Packet repeaters, called digipeaters, form the backbone of the APRS system and use store-and-forward technology to retransmit packets. All stations operate on the same radio channel, and packets move through the network from digipeater to digipeater, propagating outward from their point of origin. All stations within radio range of each digipeater receive the packet. At each digipeater, the packet path is adjusted; the packet will only be repeated through a certain number of digipeaters, or hops, depending upon the all-important "PATH" setting.
Digipeaters keep track of the packets they forward for a period of time, thus preventing duplicate packets from being retransmitted; this keeps packets from circulating in endless loops inside the ad hoc network. Most packets are heard by an APRS Internet Gateway, called an IGate, and the packets are routed on to the Internet APRS backbone for display or analysis by other users connected to an APRS-IS server, or on a Web site designed for the purpose. While it would seem that using unconnected and unnumbered packets, without acknowledgment and retransmission, on a shared and sometimes congested channel would result in poor reliability due to packets being lost, this is not the case, because the packets are transmitted to everyone and multiplied many times over by each digipeater; all digipeaters and stations in range get a copy and then proceed to broadcast it to all other digipeaters and stations within their range. The end result is that a packet has many chances of being received. Therefore, packets can sometimes be heard some distance from the originating station.
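The hop-limiting behaviour described above can be sketched under the common WIDEn-N path convention, where a digipeater decrements the N of the first unexhausted path element and drops the packet once no hops remain. This is a deliberate simplification: real digipeaters also insert their own call sign, mark used elements with an asterisk, and apply the duplicate-suppression described above. The function name is a hypothetical choice for this sketch:

```python
import re

def digipeat_path(path: str):
    """Decrement the first unexhausted WIDEn-N element of an APRS path.
    Returns the updated path string, or None when no hops remain
    (i.e. the packet should not be retransmitted).
    Simplified: real digipeaters also add their call sign and flag
    used path elements with '*'."""
    elements = path.split(",")
    for i, el in enumerate(elements):
        m = re.fullmatch(r"WIDE(\d)-(\d)", el)
        if m and int(m.group(2)) > 0:
            n, remaining = m.group(1), int(m.group(2)) - 1
            elements[i] = f"WIDE{n}-{remaining}" if remaining else f"WIDE{n}"
            return ",".join(elements)
    return None  # path exhausted: drop the packet

print(digipeat_path("WIDE1-1,WIDE2-1"))  # WIDE1,WIDE2-1
print(digipeat_path("WIDE2-0"))          # None -> packet is dropped
```

A packet launched with path "WIDE1-1,WIDE2-1" can thus be repeated at most twice before every element is exhausted, which is what bounds how far it radiates from its origin.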
Packets can be digipeated tens or even hundreds of kilometers, depending on the height and range of the digipeaters in the area. When a packet is transmitted, it is duplicated many times as it radiates out, taking all available paths until the number of "hops" allowed by the path setting is consumed. APRS contains a number of packet types, including position/object/item, messages, weather reports and telemetry. The position/object/item packets contain the latitude and longitude and a symbol to be displayed on the map, and have many optional fields for altitude, speed, radiated power, antenna height above average terrain, antenna gain and voice operating frequency. Positions of fixed stations are configured in the APRS software. Moving stations automatically derive their position information from a GPS receiver connected to the APRS equipment. The map display uses these fields to plot the communication range of all participants and facilitate the ability to contact users during both routine and emergency situations.
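A simplified parse of the uncompressed position report just described can be sketched as follows: latitude is encoded as degrees and decimal minutes (ddmm.mmN/S), longitude as dddmm.mmE/W, with a symbol-table character and symbol code surrounding the longitude. Real APRS has many more packet types and optional fields, and the sample payload below is made up for illustration:

```python
import re

# Bare-bones parser for the uncompressed APRS position report body:
# '!' or '=', ddmm.mmN/S, symbol table char, dddmm.mmE/W, symbol code,
# then a free-text comment. Optional extensions (altitude, PHG, etc.)
# are ignored here.
POS_RE = re.compile(
    r"[!=](\d{2})(\d{2}\.\d{2})([NS])(.)(\d{3})(\d{2}\.\d{2})([EW])(.)(.*)"
)

def parse_position(info: str):
    """Return lat/lon in decimal degrees plus symbol and comment,
    or None if the payload is not an uncompressed position report."""
    m = POS_RE.match(info)
    if not m:
        return None
    lat = int(m.group(1)) + float(m.group(2)) / 60
    if m.group(3) == "S":
        lat = -lat
    lon = int(m.group(5)) + float(m.group(6)) / 60
    if m.group(7) == "W":
        lon = -lon
    return {"lat": round(lat, 4), "lon": round(lon, 4),
            "symbol": m.group(4) + m.group(8), "comment": m.group(9)}

# Hypothetical payload (the part after a header like "CALL>APRS,WIDE2-1:")
print(parse_position("=4903.50N/07201.75W-Test 001234"))
```

Note the minutes-to-degrees division by 60: 4903.50N is 49° 03.50′, i.e. roughly 49.0583°, not 49.0350°.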
Each position/object/item packet can use any of several hundred different symbols. Positions, objects and items can contain weather information or can use any of dozens of standardised weather symbols. Each symbol on an APRS map can display many attributes, discriminated either by colour or
DOS
DOS is a family of disk operating systems, hence the name. DOS primarily consists of MS-DOS and a rebranded version under the name IBM PC DOS, both of which were introduced in 1981. Other compatible systems from other manufacturers include DR-DOS, ROM-DOS, PTS-DOS and FreeDOS. MS-DOS dominated the x86-based IBM PC compatible market between 1981 and 1995. Dozens of other operating systems also use the acronym "DOS", including the mainframe DOS/360 from 1966. Others are Apple DOS, Apple ProDOS, Atari DOS, Commodore DOS, TRSDOS and AmigaDOS. Fictional operating systems have used the acronym as well, such as GLaDOS from the video game Portal. IBM PC DOS and its predecessor, 86-DOS, resembled Digital Research's CP/M—the dominant disk operating system for 8-bit Intel 8080 and Zilog Z80 microcomputers—but instead ran on Intel 8086 16-bit processors. When IBM introduced the IBM PC, built with the Intel 8088 microprocessor, they needed an operating system. Seeking an 8088-compatible build of CP/M, IBM approached Microsoft CEO Bill Gates.
IBM was sent to Digital Research, and a meeting was set up. However, the initial negotiations for the use of CP/M broke down; Digital Research founder Gary Kildall refused IBM's offer, and IBM withdrew. IBM again approached Bill Gates, and Gates in turn approached Seattle Computer Products. There, programmer Tim Paterson had developed a variant of CP/M-80, intended as an internal product for testing SCP's new 16-bit Intel 8086 CPU card for the S-100 bus. The system was initially named QDOS before being made commercially available as 86-DOS. Microsoft purchased 86-DOS for $50,000, and it became the Microsoft Disk Operating System, MS-DOS, introduced in 1981. Within a year Microsoft licensed MS-DOS to over 70 other companies, which supplied the operating system for their own hardware, sometimes under their own names. Microsoft later required the use of the MS-DOS name, with the exception of the IBM variant. IBM continued to develop their version, PC DOS, for the IBM PC. Digital Research became aware that an operating system similar to CP/M was being sold by IBM and threatened legal action.
IBM responded by offering an agreement: they would give PC consumers a choice of PC DOS or CP/M-86, Kildall's 8086 version. Sold side by side, CP/M cost $200 more than PC DOS, and sales were low. CP/M faded, with MS-DOS and PC DOS becoming the widely marketed operating systems for PCs and PC compatibles. Microsoft originally sold MS-DOS only to original equipment manufacturers (OEMs). One major reason for this was that not all early machines were 100% IBM PC compatible. DOS was structured such that there was a separation between the system-specific device driver code and the DOS kernel, and Microsoft provided an OEM Adaptation Kit which allowed OEMs to customize the device driver code to their particular system. By the early 1990s, most PCs adhered to IBM PC standards, so Microsoft began selling MS-DOS at retail with MS-DOS 5.0. In the mid-1980s Microsoft had developed a multitasking version of DOS; this version is referred to as "European MS-DOS 4" because it was developed for ICL and licensed to several European companies. This version of DOS supports preemptive multitasking, shared memory, device helper services and New Executable format executables.
None of these features were used in later versions of DOS, but they were used to form the basis of the OS/2 1.0 kernel. This version of DOS is distinct from the widely released PC DOS 4.0, which was developed by IBM and based upon DOS 3.3. Digital Research attempted to regain the market lost to MS-DOS and PC DOS, first with Concurrent DOS, FlexOS and DOS Plus, and later with Multiuser DOS and DR DOS. Digital Research was bought by Novell, and DR DOS became Novell DOS 7. Gordon Letwin wrote in 1995 that "DOS was, when we first wrote it, a one-time throw-away product intended to keep IBM happy so that they'd buy our languages". Microsoft expected it to be an interim solution; the company planned to improve MS-DOS over time so it would be indistinguishable from single-user Xenix, or XEDOS, which would also run on the Motorola 68000, Zilog Z-8000 and LSI-11. IBM, however, did not want to replace DOS. After AT&T began selling Unix, Microsoft and IBM began developing OS/2 as an alternative; the two companies later had a series of disagreements over two successor operating systems to DOS, OS/2 and Windows.
They split development of their DOS systems as a result. The last retail version of MS-DOS was MS-DOS 6.22; the last retail version of PC DOS was PC DOS 2000, though IBM did later develop PC DOS 7.10 for OEMs and internal use. The FreeDOS project began on 26 June 1994, when Microsoft announced it would no longer sell or support MS-DOS. Jim Hall posted a manifesto proposing the development of an open-source replacement. Within a few weeks, other programmers including Pat Villani and Tim Norman joined the project. A kernel, the COMMAND.COM command-line interpreter and core utilities were created by pooling code they had written
IBM Personal Computer
The IBM Personal Computer, commonly known as the IBM PC, is the original version and progenitor of the IBM PC compatible hardware platform. It is IBM model number 5150 and was introduced on August 12, 1981. It was created by a team of engineers and designers under the direction of Don Estridge of the IBM Entry Systems Division in Boca Raton, Florida. The generic term "personal computer" was in use years before 1981, applied as early as 1972 to Xerox PARC's Alto, but because of the success of the IBM Personal Computer, the term "PC" came to mean more specifically a desktop microcomputer compatible with IBM's Personal Computer branded products. Since the machine was based on open architecture, within a short time of its introduction third-party suppliers of peripheral devices, expansion cards and software proliferated; "IBM compatible" became an important criterion for sales growth. International Business Machines, one of the world's largest companies, had a 62% share of the mainframe computer market in 1982. In the late 1970s the new personal computer industry was dominated by the Commodore PET, the Atari 8-bit family, the Apple II, Tandy Corporation's TRS-80 and various CP/M machines.
With $150 million in sales by 1979 and projected annual growth of more than 40% in the early 1980s, the microcomputer market was large enough for IBM's attention. Other large technology companies, such as Hewlett-Packard, Texas Instruments and Data General, had entered it, and some large IBM customers were buying Apples, so the company saw introducing its own personal computer as both an experiment in a new market and a defense against rivals large and small. In 1980 and 1981 rumors spread of an IBM personal computer, possibly a miniaturized version of the IBM System/370, while Matsushita acknowledged that it had discussed with IBM the possibility of manufacturing a personal computer for the American company. The Japanese project, codenamed "Go", ended before the 1981 release of the American-designed IBM PC, codenamed "Chess", but the two simultaneous projects further confused rumors about the forthcoming product. Data General's and TI's small computers were not successful, but observers expected AT&T to soon enter the computer industry, and other large companies such as Exxon, Montgomery Ward and Sony were designing their own microcomputers.
Xerox rushed out the 820 to introduce a personal computer before IBM, becoming the second Fortune 500 company after Tandy to do so, despite having its Xerox PARC laboratory's far more sophisticated technology. Whether IBM had waited too long to enter an industry in which Tandy and others were already successful was unclear. One observer stated that "IBM bringing out a personal computer would be like teaching an elephant to tap dance." The successful microcomputer company Vector Graphic's fiscal 1980 revenue was $12 million. A single IBM computer in the early 1960s cost as much as $9 million, occupied a quarter acre of air-conditioned space and required a staff of 60 people. The "Colossus of Armonk" sold only through its own sales force, had no experience with resellers or retail stores, and did not introduce the first product designed to work with non-IBM equipment until 1980. Another observer claimed that IBM made decisions so slowly that, when tested, "what they found is that it would take at least nine months to ship an empty box"; as with other large computer companies, its new products typically required about four to five years for development.
IBM had to learn how to quickly develop, mass-produce and market new computers. While the company traditionally let others pioneer a new market (IBM released its first commercial computer a year after Remington Rand's UNIVAC in 1951, but within five years had 85% of the market), personal-computer development and pricing cycles were much faster than for mainframes, with products designed in a few months and becoming obsolete quickly. Many in the microcomputer industry resented IBM's power and wealth, and disliked the perception that an industry founded by startups needed a latecomer so staid that it had a strict dress code and an employee songbook. The potential importance to microcomputers of a company so prestigious that a popular saying in American companies stated "No one got fired for buying IBM" was nonetheless clear. InfoWorld, which described itself as "The Newsweekly for Microcomputer Users", stated that "for my grandmother, for millions of people like her, IBM and computer are synonymous". Byte stated in an editorial just before the announcement of the IBM PC: Rumors abound about personal computers to come from giants such as Digital Equipment Corporation and the General Electric Company.
But there is no contest. IBM's new personal computer... is far and away the media star, not because of its features, but because it exists at all. When the number eight company in the Fortune 500 enters the field, news... The influence of a personal computer made by a company whose name has come to mean "computer" to most of the world is hard to contemplate. The editorial acknowledged that "some factions in our industry have looked upon IBM as the 'enemy'", but concluded with optimism: "I want to see personal computing take a giant step." Desktop-sized programmable calculators by HP had evolved into the HP 9830 BASIC-language computer by 1972. In 1972–1973 a team led by Dr. Paul Friedl at the IBM Los Gatos Scientific Center developed a portable computer prototype called SCAMP (Special Computer APL Machine Portable).