The GNU Manifesto was written by Richard Stallman and published in March 1985 in Dr. Dobb's Journal of Software Tools as an explanation of the goals of the GNU Project and a call for support and participation in developing GNU, a free software computer operating system. It is held in high regard within the free software movement as a fundamental philosophical source. The full text is included with GNU software such as Emacs and is publicly available. Some parts of the GNU Manifesto began as an announcement of the GNU Project posted by Richard Stallman on September 27, 1983, in the form of an email on Usenet newsgroups. The project's aim was to give computer users freedom and control over their computers by collaboratively developing and providing software, based on Stallman's idea of software freedom. The manifesto was written as a way to familiarize more people with these concepts and to find more support in the form of work, money, and hardware. The GNU Manifesto took its name and full form in 1985, and was updated in minor ways in 1987.
The GNU Manifesto opens with an explanation of what the GNU Project is and what progress had been made, at the time, in creating the GNU operating system. The system, although based on and compatible with Unix, is meant by the author to have many improvements over it, which are listed in detail in the manifesto. One of the major driving points behind the GNU Project, according to Stallman, was the rapid trend toward Unix and its various components becoming proprietary software. The manifesto lays a philosophical basis for launching the project and the importance of bringing it to fruition: proprietary software is a way to divide users, who are no longer able to help each other. Stallman refuses to write proprietary software as a sign of solidarity with them. The author provides many reasons for why the project and software freedom are beneficial to users, although he agrees that their wide adoption will make the work of programmers less profitable. A large part of the GNU Manifesto is focused on rebutting possible objections to the GNU Project's goals.
They include the programmer's need to make a living, the issue of advertising and distributing free software, and the perceived need for a profit incentive. See also: History of free and open-source software; Open Letter to Hobbyists.
Massachusetts Institute of Technology
The Massachusetts Institute of Technology (MIT) is a private research university in Cambridge, Massachusetts. Founded in 1861 in response to the increasing industrialization of the United States, MIT adopted a European polytechnic university model and stressed laboratory instruction in applied science and engineering. The Institute is a land-grant, sea-grant, and space-grant university, with a campus that extends more than a mile alongside the Charles River. Its influence in the physical sciences and architecture, and more recently in biology, linguistics, social science, and art, has made it one of the most prestigious universities in the world. MIT is ranked among the world's top universities. As of March 2019, 93 Nobel laureates, 26 Turing Award winners, and 8 Fields Medalists have been affiliated with MIT as alumni, faculty members, or researchers. In addition, 58 National Medal of Science recipients, 29 National Medals of Technology and Innovation recipients, 50 MacArthur Fellows, 73 Marshall Scholars, 45 Rhodes Scholars, 41 astronauts, and 16 Chief Scientists of the US Air Force have been affiliated with MIT.
The school has a strong entrepreneurial culture; the aggregated annual revenues of companies founded by MIT alumni would rank as the tenth-largest economy in the world. MIT is a member of the Association of American Universities. In 1859, a proposal was submitted to the Massachusetts General Court to use newly filled lands in Back Bay, Boston, for a "Conservatory of Art and Science", but the proposal failed. A charter for the incorporation of the Massachusetts Institute of Technology, proposed by William Barton Rogers, was signed by the governor of Massachusetts on April 10, 1861. Rogers, a professor from the University of Virginia, wanted to establish an institution to address rapid scientific and technological advances. He did not wish to found a professional school, but a combination with elements of both professional and liberal education, proposing that: "The true and only practicable object of a polytechnic school is, as I conceive, the teaching, not of the minute details and manipulations of the arts, which can be done only in the workshop, but the inculcation of those scientific principles which form the basis and explanation of them, and along with this, a full and methodical review of all their leading processes and operations in connection with physical laws."
The Rogers Plan reflected the German research university model, emphasizing an independent faculty engaged in research, as well as instruction oriented around seminars and laboratories. Two days after MIT was chartered, the first battle of the Civil War broke out. After a long delay through the war years, MIT's first classes were held in the Mercantile Building in Boston in 1865. The new institute was founded as part of the Morrill Land-Grant Colleges Act to fund institutions "to promote the liberal and practical education of the industrial classes" and was a land-grant school. In 1863, under the same act, the Commonwealth of Massachusetts founded the Massachusetts Agricultural College, which developed as the University of Massachusetts Amherst. In 1866, the proceeds from land sales went toward new buildings in the Back Bay. MIT was informally called "Boston Tech". The institute adopted the European polytechnic university model and emphasized laboratory instruction from an early date. Despite chronic financial problems, the institute saw growth in the last two decades of the 19th century under President Francis Amasa Walker.
Programs in electrical, chemical, and sanitary engineering were introduced, new buildings were built, and the size of the student body increased to more than one thousand. The curriculum drifted to a vocational emphasis, with less focus on theoretical science. The fledgling school still suffered from chronic financial shortages, which diverted the attention of the MIT leadership. During these "Boston Tech" years, MIT faculty and alumni rebuffed Harvard University president Charles W. Eliot's repeated attempts to merge MIT with Harvard College's Lawrence Scientific School. There would be at least six attempts to absorb MIT into Harvard. In its cramped Back Bay location, MIT could not afford to expand its overcrowded facilities, driving a desperate search for a new campus and funding. Eventually, the MIT Corporation approved a formal agreement to merge with Harvard, over the vehement objections of MIT faculty and alumni. However, a 1917 decision by the Massachusetts Supreme Judicial Court put an end to the merger scheme. In 1916, the MIT administration and the MIT charter crossed the Charles River on the ceremonial barge Bucentaur, built for the occasion, to signify MIT's move to a spacious new campus consisting of filled land on a mile-long tract along the Cambridge side of the Charles River.
The neoclassical "New Technology" campus was designed by William W. Bosworth and had been funded by anonymous donations from a mysterious "Mr. Smith", starting in 1912. In January 1920, the donor was revealed to be the industrialist George Eastman of Rochester, New York, who had invented methods of film production and processing, founded Eastman Kodak. Between 1912 and 1920, Eastman donated $20 million in cash and Kodak stock to MIT. In the 1930s, President Karl Taylor Compton and Vice-President Vannevar Bush emphasized the importance of pure sciences like physics and chemistry and reduced the vocational practice required in shops and drafting studios; the Compton reforms "renewed confidence in the ability of the Institute to develop leadership in science as well as in engineering". Unlike Ivy League schools, MIT catered more to middle-class families, depended more on tuition than on endow
Symbolics
Symbolics refers to two companies: the now-defunct computer manufacturer Symbolics, Inc., and a privately held company that acquired the assets of the former company and continues to sell and maintain the Open Genera Lisp system and the Macsyma computer algebra system. The symbolics.com domain was registered on March 15, 1985, making it the first .com domain in the world. In August 2009, it was sold to napkin.com Investments. Symbolics, Inc. was a computer manufacturer headquartered in Cambridge, Massachusetts, and later in Concord, with manufacturing facilities in Chatsworth, California. Its first CEO and founder was Russell Noftsker. Symbolics designed and manufactured a line of Lisp machines, single-user computers optimized to run the Lisp programming language. Symbolics made significant advances in software technology and offered one of the premier software development environments of the 1980s and 1990s, now sold commercially as Open Genera for Tru64 UNIX on the HP Alpha. The Lisp Machine was the first commercially available "workstation".
Symbolics was a spinoff from the MIT AI Lab, one of two companies founded by AI Lab staffers and associated hackers for the purpose of manufacturing Lisp machines. The other was Lisp Machines, Inc. (LMI), although Symbolics attracted most of the hackers and more funding. Symbolics' initial product, the LM-2, was a repackaged version of the MIT CADR Lisp machine design. The operating system and software development environment, over 500,000 lines, was written in Lisp from the microcode up, based on MIT's Lisp Machine Lisp. The software bundle was renamed ZetaLisp to distinguish Symbolics' product from those of other vendors who had licensed the MIT software. Symbolics' Zmacs text editor, a variant of Emacs, was implemented in a text-processing package named "ZWEI", an acronym for "Zwei was Eine initially", with "Eine" being an acronym for "Eine Is Not Emacs". Both are recursive acronyms and puns on the German words for "two" and "one". The Lisp Machine system software was copyrighted by MIT and was licensed to both Symbolics and LMI.
Until 1981, Symbolics shared all its copyrighted enhancements to the source code with MIT and kept them on an MIT server. According to Richard Stallman, Symbolics engaged in a business tactic in which it forced MIT to make all of Symbolics' copyrighted fixes and improvements to the Lisp Machine OS available only to Symbolics, thereby choking off its competitor LMI, which at that time had insufficient resources to independently maintain or develop the OS and environment. At that point, Symbolics began using its own copy of the software, located on its company servers; Stallman says that Symbolics did this to prevent its Lisp improvements from flowing to Lisp Machines, Inc. From that base, Symbolics made extensive improvements to every part of the software and continued to deliver all the source code to its customers. However, the policy prohibited MIT staff from distributing the Symbolics version of the software to others. With the end of open collaboration came the end of the MIT hacker community.
As a reaction to this, Stallman initiated the GNU Project to build a new community. Copyleft and the GNU General Public License would ensure that a hacker's software could remain free software. In this way, Symbolics played a key, albeit adversarial, role in instigating the free software movement. In 1983, a year later than planned, Symbolics introduced the 3600 family of Lisp machines. Code-named the "L-machine" internally, the 3600 family was an innovative new design, inspired by the CADR architecture but sharing few of its implementation details. The main processor had a 36-bit word. Memory words were 44 bits, with the additional 8 bits used for error-correcting code (ECC). The instruction set was that of a stack machine. The 3600 architecture provided 4,096 hardware registers, of which half were used as a cache for the top of the control stack. Hardware support was provided for virtual memory, which was common for machines in its class, and for garbage collection, which was unique. The original 3600 processor was a microprogrammed design like the CADR and was built on several large circuit boards from standard TTL integrated circuits, both features being common for commercial computers in its class at the time.
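The idea behind those extra ECC bits can be illustrated with a toy single-error-correct, double-error-detect (SECDED) extended Hamming code. The Python sketch below works at Hamming(8,4) scale (4 data bits, 4 check bits) purely for readability; ECC memory like the 3600's 44-bit words applies the same idea to wider words. Symbolics' actual code is not documented here, so treat this as an illustration of the technique, not their design.

def encode(nibble: int) -> list:
    """Encode 4 data bits into an 8-bit extended Hamming codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]      # d1..d4, LSB first
    p1 = d[0] ^ d[1] ^ d[3]                        # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                        # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                        # covers positions 4,5,6,7
    word = [p1, p2, d[0], p3, d[1], d[2], d[3]]    # codeword positions 1..7
    parity = 0
    for bit in word:                               # overall parity bit turns
        parity ^= bit                              # SEC into SECDED
    return word + [parity]

def decode(word):
    """Return (nibble, status): corrects any 1-bit error, detects 2-bit errors."""
    w = list(word)
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)          # position of a single error
    overall = 0
    for bit in w:
        overall ^= bit
    if syndrome and overall:                       # one flipped bit: fix it
        w[syndrome - 1] ^= 1
        status = "corrected"
    elif syndrome:                                 # parity consistent: two errors
        return None, "double error detected"
    elif overall:                                  # the parity bit itself flipped
        w[7] ^= 1
        status = "corrected"
    else:
        status = "ok"
    nibble = w[2] | (w[4] << 1) | (w[5] << 2) | (w[6] << 3)
    return nibble, status

word = encode(0b1011)
word[4] ^= 1                                       # simulate one memory bit flip
print(decode(word))                                # -> (11, 'corrected')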
CPU clock speed varied depending on the particular instruction being executed, but was around 5 MHz. Many Lisp primitives could be executed in a single clock cycle. Disk I/O was handled by multitasking at the microcode level. A 68000 processor started the main computer up and handled the slower peripherals during normal operation. An Ethernet interface was standard equipment, replacing the Chaosnet interface of the LM-2. The 3600 was the size of a household refrigerator. This was due in part to the size of the processor (the cards were spaced to allow wire-wrap prototype cards to fit without interference) and in part to the limitations of disk drive technology in the early 1980s. At the 3600's introduction, the smallest disk that could support the ZetaLisp software was 14 inches across. The 3670 and 3675 were shorter in height, but were essentially the same machine packed a little tighter. The advent of 8 inches (2
Ethernet
Ethernet is a family of computer networking technologies commonly used in local area networks (LANs), metropolitan area networks (MANs) and wide area networks (WANs). It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3, and has since retained a good deal of backward compatibility and been refined to support higher bit rates and longer link distances. Over time, Ethernet has replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET. The original 10BASE5 Ethernet uses coaxial cable as a shared medium, while the newer Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 megabits per second to the latest 400 gigabits per second. The Ethernet standards comprise several wiring and signaling variants of the OSI physical layer in use with Ethernet. Systems communicating over Ethernet divide a stream of data into shorter pieces called frames. Each frame contains source and destination addresses, and error-checking data so that damaged frames can be detected and discarded.
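That frame structure is simple enough to show concretely. The following Python sketch parses an Ethernet II frame (6-byte destination and source MAC addresses, a 16-bit EtherType, a payload, and a trailing 32-bit frame check sequence) and verifies the checksum; the sample frame at the end is fabricated for illustration.

import struct
import zlib

def parse_ethernet_frame(frame: bytes) -> dict:
    """Parse an Ethernet II frame: 6-byte destination and source MACs,
    2-byte EtherType, payload, and a 4-byte frame check sequence (FCS)."""
    if len(frame) < 18:                      # 14-byte header + 4-byte FCS
        raise ValueError("frame too short")
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    payload, fcs = frame[14:-4], frame[-4:]
    # The FCS is the CRC-32 of everything before it; zlib.crc32 uses the
    # same polynomial, and the result is compared little-endian here.
    expected = struct.pack("<I", zlib.crc32(frame[:-4]) & 0xFFFFFFFF)
    return {
        "dst": dst.hex(":"),
        "src": src.hex(":"),
        "ethertype": hex(ethertype),         # e.g. 0x0800 = IPv4
        "payload_len": len(payload),
        "fcs_ok": fcs == expected,
    }

# Fabricated example: broadcast destination, made-up source address.
header = bytes.fromhex("ffffffffffff" "020000000001" "0800")
payload = b"hello, ethernet"                 # real frames pad payloads to 46 bytes
frame = header + payload + struct.pack("<I", zlib.crc32(header + payload))
print(parse_ethernet_frame(frame))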
As per the OSI model, Ethernet provides services up to and including the data link layer. Features such as the 48-bit MAC address and Ethernet frame format have influenced other networking protocols, including Wi-Fi wireless networking technology. Ethernet is used in both homes and industry; the Internet Protocol is commonly carried over Ethernet, so it is considered one of the key technologies that make up the Internet. Ethernet was developed at Xerox PARC between 1973 and 1974, inspired by ALOHAnet. The idea was first documented in a memo that Robert Metcalfe wrote on May 22, 1973, where he named it after the luminiferous aether once postulated to exist as an "omnipresent, completely-passive medium for the propagation of electromagnetic waves". In 1975, Xerox filed a patent application listing Metcalfe, David Boggs, Chuck Thacker and Butler Lampson as inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper. That same year, Ron Crane, Bob Garner and Roy Ogus facilitated the upgrade from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was released to the market in 1980.
Metcalfe left Xerox in June 1979 to form 3Com. He convinced Digital Equipment Corporation (DEC), Intel and Xerox to work together to promote Ethernet as a standard; as part of that process, Xerox agreed to relinquish their "Ethernet" trademark. The first standard was published in September 1980 as "The Ethernet, A Local Area Network. Data Link Layer and Physical Layer Specifications". This so-called DIX standard (for Digital/Intel/Xerox) specified 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit EtherType field. Version 2 was published in November 1982 and defines what has become known as Ethernet II. Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983. Ethernet initially competed with Token Ring and other proprietary protocols, but was able to adapt to market realities and shift to inexpensive thin coaxial cable and ubiquitous twisted pair wiring. By the end of the 1980s, Ethernet was the dominant network technology. In the process, 3Com became a major company.
3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, and that year started selling adapters for PDP-11s and VAXes, as well as Multibus-based Intel and Sun Microsystems computers. This was followed by DEC's Unibus to Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it one of the largest computer networks in the world at that time. An Ethernet adapter card for the IBM PC was released in 1982, and, by 1985, 3Com had sold 100,000. Parallel port based Ethernet adapters were produced, with drivers for DOS and Windows. By the early 1990s, Ethernet became so prevalent that it was a must-have feature for modern computers, and Ethernet ports began to appear on some PCs and most workstations. This process was sped up with the introduction of 10BASE-T and its small modular connector, at which point Ethernet ports appeared even on low-end motherboards. Since then, Ethernet technology has evolved to meet new bandwidth and market requirements.
In addition to computers, Ethernet is now used to interconnect appliances and other personal devices. As Industrial Ethernet, it is used in industrial applications and is replacing legacy data transmission systems in the world's telecommunications networks. By 2010, the market for Ethernet equipment amounted to over $16 billion per year. In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks. The "DIX-group", with Gary Robinson, Phil Arst and Bob Printis, submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN specification. In addition to CSMA/CD, Token Ring and Token Bus were considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal. Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products.
With such business implications in mind, David Liddle an
Laser printing
Laser printing is an electrostatic digital printing process. It produces high-quality text and graphics by passing a laser beam back and forth over a negatively charged cylinder called a "drum" to define a differentially charged image. The drum then selectively collects electrically charged powdered ink (toner) and transfers the image to paper, which is heated in order to permanently fuse the text, imagery, or both. As with digital photocopiers, laser printers employ a xerographic printing process. However, laser printing differs from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer's photoreceptor; this enables laser printing to copy images more quickly than most photocopiers. Invented at Xerox PARC in the 1970s, laser printers were introduced for the office and home markets in subsequent years by IBM, Xerox, Hewlett-Packard and many others. Over the decades, quality and speed have increased as price has fallen, and the once cutting-edge printing devices are now ubiquitous. In the 1960s, the Xerox Corporation held a dominant position in the photocopier market.
In 1969, Gary Starkweather, who worked in Xerox's product development department, had the idea of using a laser beam to "draw" an image of what was to be copied directly onto the copier drum. After transferring to the newly formed Palo Alto Research Center (PARC) in 1971, Starkweather adapted a Xerox 7000 copier to create SLOT (Scanned Laser Output Terminal). In 1972, Starkweather worked with Butler Lampson and Ronald Rider to add a control system and character generator, resulting in a printer called EARS, which later became the Xerox 9700 laser printer. 1973: The Xerox 1200 was "the first commercial laser printer." A Xerox 2012 lookback described it as the "first commercial non-impact Xerographic printer for computer output." Input came directly from a mainframe computer; the technology came from the Xerox 3600 copier. 1976: The first commercial implementation of a laser printer was the IBM 3800 in 1976, designed for data centers. The IBM 3800 was used for high-volume printing on continuous stationery, and achieved speeds of 215 pages per minute at a resolution of 240 dots per inch.
Over 8,000 of these printers were sold. 1977: The Xerox 9700 was brought to market in 1977. Unlike the IBM 3800, the Xerox 9700 was not targeted to replace any particular existing printers; it excelled at printing high-value documents on cut-sheet paper with varying content. 1979: In 1979, inspired by the Xerox 9700's commercial success, the Japanese camera and optics company Canon developed a low-cost, desktop laser printer: the Canon LBP-10. Canon then began work on a much-improved print engine, the Canon CX, resulting in the LBP-CX printer. Having no experience in selling to computer users, Canon sought partnerships with three Silicon Valley companies: Diablo Data Systems, Hewlett-Packard and Apple Computer. 1981: The first laser printer designed for office use reached market in 1981: the Xerox Star 8010. The system used a desktop metaphor that went unsurpassed in commercial sales until the Apple Macintosh. Although it was innovative, the Star workstation was a prohibitively expensive system, affordable only to a fraction of the businesses and institutions at which it was targeted.
1984: The first laser printer intended for mass-market sales was the HP LaserJet, released in 1984. The LaserJet was followed by printers from Brother Industries, IBM and others. First-generation machines had large photosensitive drums, of circumference greater than the loaded paper's length. Once faster-recovery coatings were developed, the drums could touch the paper multiple times in a pass, and could therefore be smaller in diameter. 1985: Apple introduced the LaserWriter, which also used the Canon CX engine but drove it with the newly released PostScript page-description language. Up until this point, each manufacturer had used its own proprietary page-description language, making the supporting software complex and expensive. PostScript allowed the use of text, graphics and color independent of the printer's brand or resolution. PageMaker, written by Aldus for the Macintosh and LaserWriter, was released in 1985, and the combination became popular for desktop publishing. Laser printers brought exceptionally fast and high-quality text printing, in multiple fonts on a page, to the business and consumer markets.
No other available printer during this era could offer this combination of features. 1995: Xerox ran magazine print ads headlined "Who invented the laser printer?" and answered "it's Xerox." In the laser printing process, a laser beam projects an image of the page to be printed onto an electrically charged, selenium-coated, cylindrical drum. Photoconductivity allows the charge to fall away from the areas exposed to light. Powdered ink (toner) particles are then electrostatically attracted to the charged areas of the drum that have not been laser-beamed. The drum transfers the image onto paper by direct contact. The paper is then passed onto a finisher, which uses heat to fuse the toner that represents the image onto the paper. There are seven steps involved in
ARPANET
The Advanced Research Projects Agency Network (ARPANET) was an early packet-switching network and the first network to implement the TCP/IP protocol suite. Both technologies became the technical foundation of the Internet. The ARPANET was founded by the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. The packet-switching methodology employed in the ARPANET was based on concepts and designs by Leonard Kleinrock, Paul Baran, Donald Davies and Lawrence Roberts. The TCP/IP communications protocols were developed for the ARPANET by computer scientists Robert Kahn and Vint Cerf, and incorporated concepts from the French CYCLADES project directed by Louis Pouzin. As the project progressed, protocols for internetworking were developed, by which multiple separate networks could be joined into a network of networks. Access to the ARPANET was expanded in 1981, when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet protocol suite was introduced as the standard networking protocol on the ARPANET.
In the early 1980s, the NSF funded the establishment of national supercomputing centers at several universities and provided interconnectivity in 1986 with the NSFNET project, which created network access to the supercomputer sites in the United States for research and education organizations. The ARPANET was decommissioned in 1990. Historically, voice and data communications were based on methods of circuit switching, as exemplified in the traditional telephone network, wherein each telephone call is allocated a dedicated, end-to-end, electronic connection between the two communicating stations; such stations might be computers. The temporarily dedicated line comprises many intermediary lines which are assembled into a chain that reaches from the originating station to the destination station. With packet switching, by contrast, a network could share a single communication link for communication between multiple pairs of receivers and transmitters, as the sketch below illustrates. The earliest ideas for a computer network intended to allow general communications among computer users were formulated by computer scientist J. C. R. Licklider of Bolt Beranek and Newman (BBN), in April 1963, in memoranda discussing the concept of the "Intergalactic Computer Network".
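The sketch referenced above: a minimal Python illustration of the packet-switching idea, in which two hosts' messages are cut into numbered packets, interleaved on one shared link, and reassembled at their destinations. The host names and packet format are invented for the example; this shows the concept, not any historical implementation.

from itertools import zip_longest

PACKET_SIZE = 8      # bytes of payload per packet (tiny, for illustration)

def packetize(src, dst, message):
    """Cut a message into (src, dst, sequence number, payload) packets."""
    return [(src, dst, seq, message[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(message), PACKET_SIZE))]

def interleave(*flows):
    """Statistically multiplex several packet flows onto one shared link."""
    for round_ in zip_longest(*flows):
        for packet in round_:
            if packet is not None:
                yield packet

def reassemble(packets, dst):
    """Pick out one destination's packets and restore their order."""
    mine = sorted((p for p in packets if p[1] == dst), key=lambda p: p[2])
    return b"".join(p[3] for p in mine)

# Two conversations share one link; their packets arrive interleaved.
flow_a = packetize("ucla", "sri", b"we tried to send LOGIN")
flow_b = packetize("mit", "bbn", b"hello from the other coast")
link = list(interleave(flow_a, flow_b))
print(reassemble(link, "sri"))   # -> b'we tried to send LOGIN'
print(reassemble(link, "bbn"))   # -> b'hello from the other coast'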
Those ideas encompassed many of the features of the contemporary Internet. In October 1963, Licklider was appointed head of the Behavioral Sciences and Command and Control programs at the Defense Department's Advanced Research Projects Agency. He convinced Ivan Sutherland and Bob Taylor that this network concept was important and merited development, although Licklider left ARPA before any contracts were assigned for development. Sutherland and Taylor continued their interest in creating the network, in part to allow ARPA-sponsored researchers at various corporate and academic locales to utilize computers provided by ARPA, and in part to distribute new software and other computer science results. Taylor had three computer terminals in his office, each connected to separate computers, which ARPA was funding: one for the System Development Corporation (SDC) Q-32 in Santa Monica, one for Project Genie at the University of California, Berkeley, and another for Multics at the Massachusetts Institute of Technology.
Taylor recalls the circumstance: "For each of these three terminals, I had three different sets of user commands. So, if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, log into the other terminal and get in touch with them. I said, oh man, it's obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go. That idea is the ARPANET." Meanwhile, since the early 1960s, Paul Baran at the RAND Corporation had been researching systems that could survive nuclear war and developed the idea of distributed adaptive message block switching. Donald Davies at the United Kingdom's National Physical Laboratory (NPL) independently invented the same concept in 1965. His work, presented by a colleague, caught the attention of ARPANET developers at a conference in Gatlinburg, Tennessee, in October 1967. He gave the first public demonstration of packet switching, having coined the term, on 5 August 1968, and incorporated it into the NPL network in England.
Elizabeth Feinler created the first Resource Handbook for ARPANET in 1969, which led to the development of the ARPANET directory. The directory, built by Feinler and a team, made it possible to navigate the ARPANET. Larry Roberts at ARPA applied Davies' concepts of packet switching for the ARPANET. The NPL network, followed by the ARPANET, were the first two networks in the world to use packet switching, and the two were themselves connected together in 1973. Bob Taylor convinced ARPA's Director Charles M. Herzfeld to fund a network project in February 1966, and Herzfeld transferred a million dollars from a ballistic missile defense program to Taylor's budget. Taylor hired Larry Roberts as a program manager in the ARPA Information Processing Techniques Office in January 1967 to work on the ARPANET. In April 1967, Roberts held a design session on technical standards. The initial standards for identification and authentication of users, transmission of characters, and error checking and retransmission procedures were discussed.
At the meeting, Wesley Clark proposed that minicomputers called Interface Message Processors (IMPs) should be used to interface to the network, rather than the large mainframes that would be the nodes of the ARPANET. Roberts modified the ARPANET plan to incorporate Clark's suggestion. The plan was presented at the ACM Symposium in Gatlinburg, Tennessee, in October 1967. Donald Davies' work on packet switching
Cable television
Cable television is a system of delivering television programming to consumers via radio frequency (RF) signals transmitted through coaxial cables or, in more recent systems, light pulses through fiber-optic cables. This contrasts with broadcast television, in which the television signal is transmitted over the air by radio waves and received by a television antenna attached to the television. FM radio programming, high-speed Internet, telephone services and similar non-television services may also be provided through these cables. Analog television was standard in the 20th century, but since the 2000s cable systems have been upgraded to digital cable operation. A "cable channel" is a television network available via cable television; when available through satellite television, including direct broadcast satellite providers such as DirecTV, Dish Network and Sky, as well as via IPTV providers such as Verizon FiOS and AT&T U-verse, it is referred to as a "satellite channel". Alternative terms include "non-broadcast channel" or "programming service", the latter being used in legal contexts.
Examples of cable/satellite channels and cable networks available in many countries are HBO, Cinemax, MTV, Cartoon Network, AXN, E!, FX, Discovery Channel, Canal+, Fox Sports, Disney Channel, Nickelodeon, CNN International and ESPN. The abbreviation CATV is often used for cable television; it originally stood for Community Access Television or Community Antenna Television, from cable television's origins in 1948. In areas where over-the-air TV reception was limited by distance from transmitters or mountainous terrain, large "community antennas" were constructed, and cable was run from them to individual homes. The origins of cable broadcasting for radio are even older, as radio programming was distributed by cable in some European cities as far back as 1924. To receive cable television at a given location, cable distribution lines must be available on the local utility poles or underground utility lines. Coaxial cable brings the signal to the customer's building through a service drop, an overhead or underground cable. If the subscriber's building does not have a cable service drop, the cable company will install one.
The standard cable used in the U.S. is RG-6, which has a 75 ohm impedance and connects with a type F connector. The cable company's portion of the wiring ends at a distribution box on the building exterior, and built-in cable wiring in the walls distributes the signal to jacks in different rooms to which televisions are connected. Multiple cables to different rooms are split off the incoming cable with a small device called a splitter. There are two standards for cable television: older analog cable and newer digital cable. All cable companies in the United States have switched to or are in the course of switching to digital cable television since it was first introduced in the late 1990s. Most cable companies require a set-top box, or a slot on one's TV set for conditional access module cards, to view their cable channels, even on newer televisions with digital cable QAM tuners, because most digital cable channels are now encrypted, or "scrambled", to reduce cable service theft. A cable from the jack in the wall is attached to the input of the box, and an output cable from the box is attached to the television, usually the RF-IN or composite input on older TVs.
Since the set-top box only decodes the single channel being watched, each television in the house requires a separate box. Some unencrypted channels, usually traditional over-the-air broadcast networks, can be displayed without a receiver box. The cable company will provide set-top boxes based on the level of service a customer purchases, from basic set-top boxes with a standard-definition picture connected through the standard coaxial connection on the TV, to high-definition wireless DVR receivers connected via HDMI or component. Older analog television sets are "cable ready" and can receive the old analog cable without a set-top box. To receive digital cable channels on an analog television set, even unencrypted ones, requires a different type of box, a digital television adapter supplied by the cable company. A newer distribution method takes advantage of low-cost, high-quality DVB distribution to residential areas and uses TV gateways to convert the DVB-C or DVB-C2 stream to IP for distribution of TV over an IP network in the home.
In the most common system, multiple television channels are distributed to subscriber residences through a coaxial cable, which comes from a trunkline supported on utility poles originating at the cable company's local distribution facility, called the "headend". Many channels can be transmitted through one coaxial cable by a technique called frequency-division multiplexing: at the headend, each television channel is translated to a different frequency. By giving each channel a different frequency "slot" on the cable, the separate television signals do not interfere with each other (the sketch below makes this slot assignment concrete). At an outdoor cable box on the subscriber's residence, the company's service drop cable is connected to cables distributing the signal to different rooms in the building. At each television, the subscriber's television or a set-top box provided by the cable company translates the desired channel back to its original frequency, and it is displayed onscreen. Due to widespread cable theft in earlier analog systems, the signals are encrypted on modern digital cable systems.
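A short Python sketch of the frequency-slot idea: each network gets a fixed 6 MHz slot (the North American cable channel width), and a tuner recovers a program by locating that slot. The starting frequency and channel lineup here are illustrative assumptions, not any real operator's plan.

CHANNEL_WIDTH_MHZ = 6.0   # one North American cable channel slot
FIRST_SLOT_MHZ = 54.0     # illustrative start of the lineup (an assumption)

def slot_edges(slot: int):
    """(low, high) frequency edges of a channel slot, in MHz."""
    low = FIRST_SLOT_MHZ + slot * CHANNEL_WIDTH_MHZ
    return low, low + CHANNEL_WIDTH_MHZ

def build_lineup(networks):
    """Give every network its own non-overlapping frequency slot; the
    disjoint slots are what keep the signals from interfering."""
    return {name: slot_edges(slot) for slot, name in enumerate(networks)}

def tune(lineup, name):
    """Conceptually what a tuner or set-top box does: locate the channel's
    slot so it can translate that band back down to the original frequency."""
    return lineup[name]

lineup = build_lineup(["HBO", "MTV", "CNN International", "ESPN"])
print(tune(lineup, "ESPN"))    # -> (72.0, 78.0): the fourth 6 MHz slot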