A registered jack is a standardized telecommunication network interface for connecting voice and data equipment to a service provided by a local exchange carrier or long-distance carrier. Registration interfaces were first defined in the Universal Service Ordering Code system of the Bell System in the United States to comply with the registration program for customer-supplied telephone equipment mandated by the Federal Communications Commission (FCC) in the 1970s; they were subsequently codified in Title 47 of the Code of Federal Regulations, Part 68. The specification includes physical construction and signal semantics. Registered jacks are named with the letters RJ followed by two digits that express the type; letter suffixes indicate minor variations. For example, RJ11, RJ14, and RJ25 are the most commonly used interfaces for telephone connections with one-, two-, and three-line service, respectively. Although these standards are legal definitions in the United States, some interfaces are used worldwide.
The connectors used for registered jack installations are the modular connector and the 50-pin miniature ribbon connector. For example, RJ11 uses a six-position, two-conductor modular connector, RJ14 uses a six-position, four-conductor modular jack, and RJ21 uses a 25-pair miniature ribbon connector. The registered jack designations originated in the standardization processes of the Bell System in the United States and describe application circuits, not just the physical geometry of the connectors; the same modular connector type may be used for different registered jack applications. Registered jack refers to both the female physical connector and its wiring, but the term is used loosely to refer to modular connectors regardless of wiring or gender, such as in Ethernet over twisted pair. There is much confusion over these connection standards: the same six-position plug and jack used for telephone line connections may be used for RJ11, RJ14, or RJ25, all of which are names of interface standards that use this physical connector.
The RJ11 standard dictates a single wire-pair connection, RJ14 is a configuration for two lines, and RJ25 uses all six wires for three telephone lines. The RJ designations pertain only to the wiring of the jack, hence the name registered jack. Modular connectors were developed to replace older telephone installation methods that used either hardwired cords or bulkier varieties of telephone plugs. The common nomenclature for modular connectors includes the number of contact positions and the number of wires connected; for example, 6P indicates a six-position modular plug or jack. A six-position modular plug with conductors in the middle two positions and the other four positions unused has the designation 6P2C. RJ11 uses a 6P2C connector. The connectors could be supplied with more pins, but if more pins are wired, the interface is not an RJ11. Registration interfaces were created by the Bell System under a 1976 Federal Communications Commission order for the standard interconnection between telephone company equipment and customer premises equipment.
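This naming scheme lends itself to a compact summary. The following minimal Python sketch (illustrative only, not a normative table; the figures simply restate the RJ11/RJ14/RJ25 descriptions above) maps each designation to its connector geometry and line count:

# Illustrative summary of the common telephone registered jacks
# described above: connector positions, wired conductors, lines.
REGISTERED_JACKS = {
    # name: (positions, conductors, telephone_lines)
    "RJ11": (6, 2, 1),
    "RJ14": (6, 4, 2),
    "RJ25": (6, 6, 3),
}

def describe(rj: str) -> str:
    positions, conductors, lines = REGISTERED_JACKS[rj]
    return f"{rj}: {positions}P{conductors}C connector, {lines} line(s)"

for name in REGISTERED_JACKS:
    print(describe(name))
# RJ11: 6P2C connector, 1 line(s)
# RJ14: 6P4C connector, 2 line(s)
# RJ25: 6P6C connector, 3 line(s)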
These interfaces used newly standardized jacks and plugs based on miniature modular connectors. The wired communications provider is responsible for delivery of services to a minimum point of entry (MPOE); the MPOE is a utility box containing surge-protective circuitry, which connects the wiring on the customer's property to the communication provider's network. Customers are responsible for all jacks and equipment on their side of the MPOE. The intent was to establish a universal standard for wiring and interfaces and to separate ownership of in-home telephone wiring from the wiring owned by the service provider. In the Bell System, following the Communications Act of 1934, the telephone companies owned all telecommunications equipment and did not allow interconnection of third-party equipment. Telephones were hardwired, but might be installed with Bell System connectors to permit portability. The legal case Hush-A-Phone v. United States and the Federal Communications Commission's Carterfone decision brought changes to this policy, requiring the Bell System to allow some interconnection, which culminated in the development of registered interfaces using new types of miniature connectors.
Registered jacks replaced the use of protective couplers provided by the telephone company. The new modular connectors were much smaller and cheaper to produce than the earlier, bulkier connectors that had been used in the Bell System since the 1930s. The Bell System issued specifications for the modular connectors and their wiring as Universal Service Order Codes (USOC), which were the only standards at the time. Large customers of telephone services use the USOC to specify the interconnection type and, when necessary, pin assignments when placing service orders with a network provider. When the U.S. telephone industry was reformed to foster competition in the 1980s, the connection specifications became federal law, ordered by the FCC and codified in the Code of Federal Regulations, Title 47, Part 68, Subpart F, later superseded by T1.TR5-1999. In January 2001, the FCC delegated responsibility for standardizing connections to the telephone network to a new private industry organization, the Administrative Council for Terminal Attachments (ACTA).
For this delegation, the FCC removed Subpart F from the CFR and added Subpart G.
A modem is a hardware device that converts data between transmission media so that it can be transmitted from computer to computer. The goal is to produce a signal that can be transmitted and decoded to reproduce the original digital data. Modems can be used with almost any means of transmitting analog signals, from light-emitting diodes to radio. A common type of modem is one that turns the digital data of a computer into a modulated electrical signal for transmission over telephone lines, to be demodulated by another modem at the receiver side to recover the digital data. Modems are classified by the maximum amount of data they can send in a given unit of time, usually expressed in bits per second or bytes per second. Modems can also be classified by their symbol rate, measured in baud; the baud unit denotes symbols per second, or the number of times per second the modem sends a new signal. For example, the ITU V.21 standard used audio frequency-shift keying with two possible frequencies, corresponding to two distinct symbols, to carry 300 bits per second using 300 baud.
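The arithmetic behind these figures is worth making explicit: the bit rate equals the symbol rate (baud) multiplied by the bits carried per symbol, which is the base-2 logarithm of the number of distinct symbols. A minimal Python sketch, using the V.21 figures above and the V.22 figures discussed next:

import math

def bit_rate(baud: float, distinct_symbols: int) -> float:
    # bits/s = symbols/s * bits per symbol (log2 of the alphabet size)
    return baud * math.log2(distinct_symbols)

print(bit_rate(300, 2))  # V.21: 300 baud, 2 FSK tones  -> 300.0 bit/s
print(bit_rate(600, 4))  # V.22: 600 baud, 4 PSK phases -> 1200.0 bit/s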
By contrast, the original ITU V.22 standard, which could transmit and receive four distinct symbols, transmitted 1,200 bits per second by sending 600 symbols per second using phase-shift keying. News wire services in the 1920s used multiplex devices that satisfied the definition of a modem. However, the modem function was incidental to the multiplexing function, so they are not included in the history of modems. Modems grew out of the need to connect teleprinters over ordinary phone lines instead of the more expensive leased lines, which were used for current loop–based teleprinters and automated telegraphs. In 1941, the Allies developed a voice encryption system called SIGSALY, which used a vocoder to digitize speech, encrypted the speech with a one-time pad, and encoded the digital data as tones using frequency-shift keying. Mass-produced modems in the United States began as part of the SAGE air-defense system in 1958, connecting terminals at various airbases, radar sites, and command-and-control centers to the SAGE director centers scattered around the United States and Canada.
SAGE modems were described by AT&T's Bell Labs as conforming to their newly published Bell 101 dataset standard. While they ran on dedicated telephone lines, the devices at each end were no different from commercial acoustically coupled Bell 101, 110-baud modems. The 201A and 201B Data-Phones were synchronous modems using two-bit-per-baud phase-shift keying. The 201A operated half-duplex at 2,000 bit/s over normal phone lines, while the 201B provided full-duplex 2,400 bit/s service on four-wire leased lines, the send and receive channels each running on their own set of two wires. The famous Bell 103A dataset standard was introduced by AT&T in 1962. It provided full-duplex service at 300 bit/s over normal phone lines. Frequency-shift keying was used, with the call originator transmitting at 1,070 or 1,270 Hz and the answering modem transmitting at 2,025 or 2,225 Hz. The readily available 103A2 gave an important boost to the use of remote low-speed terminals such as the Teletype Model 33 ASR and KSR and the IBM 2741.
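As a rough illustration of this frequency plan, the sketch below synthesizes the originate-side FSK waveform for a few bits; the 8 kHz sample rate and the integer samples-per-bit shortcut are assumptions made for brevity, not part of the Bell 103 standard:

import math

SAMPLE_RATE = 8000  # Hz; assumed for illustration

# Bell 103 tone pairs from the text: the originator sends 1,070 Hz
# (space) or 1,270 Hz (mark); the answerer sends 2,025 or 2,225 Hz.
ORIGINATE = {0: 1070.0, 1: 1270.0}
ANSWER = {0: 2025.0, 1: 2225.0}

def fsk_samples(bits, tones, baud=300):
    """Yield sine samples encoding `bits` with the given tone map."""
    samples_per_bit = SAMPLE_RATE // baud  # ~26 samples per bit
    phase = 0.0
    for bit in bits:
        step = 2 * math.pi * tones[bit] / SAMPLE_RATE
        for _ in range(samples_per_bit):
            yield math.sin(phase)
            phase += step

wave = list(fsk_samples([1, 0, 1, 1], ORIGINATE))
print(len(wave))  # 4 bits x 26 samples = 104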
AT&T reduced modem costs by introducing the answer-only 113B/C modems. For many years, the Bell System maintained a monopoly on the use of its phone lines and what devices could be connected to them. However, in its seminal Carterfone decision of 1968, the FCC concluded that electronic devices could be connected to the telephone system as long as they used an acoustic coupler. Since most handsets were supplied by Western Electric and thus of a standard design, acoustic couplers were easy to build. Acoustically coupled Bell 103A-compatible 300 bit/s modems were common during the 1970s. Well-known models included the Novation CAT and the Anderson-Jacobson, the latter spun off from an in-house project at Stanford Research Institute. A lower-cost option was the Pennywhistle modem, designed to be built using parts from electronics scrap and surplus stores. In December 1972, Vadic introduced the VA3400, notable for full-duplex operation at 1,200 bit/s over the phone network. Like the 103A, it used different frequency bands for transmit and receive.
In November 1976, AT&T introduced the 212A modem to compete with Vadic. It was similar in design, but used the lower frequency set for transmission. One could also use the 212A with a 103A modem at 300 bit/s. According to Vadic, the change in frequency assignments made the 212 intentionally incompatible with acoustic coupling, thereby locking out many potential modem manufacturers. In 1977, Vadic responded with the VA3467 triple modem, an answer-only modem sold to computer center operators that supported Vadic's 1,200 bit/s mode, AT&T's 212A mode, and 103A operation. The Hush-A-Phone decision applied only to mechanical connections, but the Carterfone decision of 1968 led the FCC to introduce a rule setting stringent, AT&T-designed tests for electronically coupling a device to the phone lines. This opened the door to direct-connect modems that plugged directly into the phone line rather than via a handset. However, the cost of passing the tests was considerable, and acoustically coupled modems remained common into the early 1980s.
The falling prices of electronics in the late 1970s led to an increasing number of direct-connect models around 1980. In spite of being directly connected, these modems were operated like their earlier acoustic versions: dialing and other phone-control operations were completed by hand, using an attached handset.
A desktop computer is a personal computer designed for regular use at a single location on or near a desk or table, owing to its size and power requirements. The most common configuration has a case that houses the power supply, motherboard, and disk storage; the case may be oriented horizontally or vertically and placed either underneath, beside, or on top of a desk. Prior to the widespread use of microprocessors, a computer that could fit on a desk was considered remarkably small. Early computers took up the space of a whole room, and minicomputers fit into one or a few refrigerator-sized racks; it was not until the 1970s that programmable computers appeared that could fit on top of a desk. 1970 saw the introduction of the Datapoint 2200, a "smart" computer terminal complete with keyboard and monitor that was designed to connect with a mainframe computer, though that did not stop owners from using its built-in computational abilities as a standalone desktop computer. The HP 9800 series, which started out as programmable calculators in 1971 but was programmable in BASIC by 1972, used a smaller version of a minicomputer design based on ROM memory, had small one-line LED alphanumeric displays, and could display graphics with a plotter.
The Wang 2200 of 1973 had cassette tape storage. The IBM 5100 in 1975 had a small CRT display and could be programmed in BASIC and APL; these were expensive, specialized computers sold for business or scientific uses. The Apple II, TRS-80, and Commodore PET, launched in 1977, were first-generation personal home computers aimed at the consumer market rather than at businesses or computer hobbyists. Byte magazine referred to these three as the "1977 Trinity" of personal computing. Throughout the 1980s and 1990s, desktop computers became the predominant type, the most popular being the IBM PC and its clones, followed by the Apple Macintosh, with the third-placed Commodore Amiga having some success in the mid-1980s but declining by the early 1990s. Early personal computers, like the original IBM Personal Computer, were enclosed in a "desktop case", horizontally oriented so that the display screen could be placed on top, saving space on the user's actual desk; these cases had to be sturdy enough to support the weight of the CRT displays that were widespread at the time.
Over the course of the 1990s, desktop cases became less common than the more accessible tower cases, which may be located on the floor under or beside a desk rather than on it. Not only do tower cases have more room for expansion, they also free up desk space for monitors, which were becoming larger every year. Desktop cases, particularly compact form factors, remain popular for corporate computing environments and kiosks; some computer cases can be positioned either horizontally or upright. Influential games such as Doom and Quake during the 1990s pushed gamers and enthusiasts to upgrade to the latest CPUs and graphics cards for their desktops in order to run these applications, though this has slowed since the late 2000s as the growing popularity of Intel integrated graphics forced game developers to scale back. Creative Technology's Sound Blaster series was the de facto standard for sound cards in desktop PCs during the 1990s until the early 2000s, when it was reduced to a niche product, as OEM desktop PCs came with sound boards integrated directly onto the motherboard.
While desktops have long been the most common configuration for PCs, by the mid-2000s the growth shifted from desktops to laptops. Notably, while desktops were mainly produced in the United States, laptops had long been produced by contract manufacturers based in Asia, such as Foxconn; this shift led to the closure of many desktop assembly plants in the United States by 2010. Another trend around this time was the increasing proportion of inexpensive base-configuration desktops being sold, hurting PC manufacturers such as Dell, whose build-to-order customization of desktops relied on upselling added features to buyers. Battery-powered portable computers had just 2% worldwide market share in 1986, but laptops have since become increasingly popular, both for business and personal use. Around 109 million notebook PCs shipped worldwide in 2007, a growth of 33% compared to 2006. In 2008, it was estimated that 145.9 million notebooks were sold, and that the number would grow to 177.7 million in 2009. The third quarter of 2008 was the first time worldwide notebook PC shipments exceeded desktops, with 38.6 million units versus 38.5 million units.
Apple's sales breakdown of the Macintosh shows sales of desktop Macs staying roughly constant while being surpassed by Mac notebooks, whose sales have grown considerably. This change in form-factor sales is due to the desktop iMac moving from affordable to upscale pricing, with subsequent releases considered premium all-in-ones. By contrast, the MSRP of the MacBook laptop lines has dropped through successive generations, such that the MacBook Air and MacBook Pro constitute the lowest price of entry to a Mac, with the exception of the even less expensive Mac Mini.
In computing, an expansion card, expansion board, adapter card, or accessory card is a printed circuit board that can be inserted into an electrical connector, or expansion slot, on a computer motherboard, backplane, or riser card to add functionality to a computer system via the expansion bus. An expansion bus is a computer bus that moves information between the internal hardware of a computer system and peripheral devices; it is a collection of protocols that allows for the expansion of a computer. Vacuum-tube-based computers had modular construction, but individual functions for peripheral devices filled a whole cabinet, not just a printed circuit board. Processor, memory, and I/O cards became feasible with the development of integrated circuits. Expansion cards allowed a processor system to be adapted to the needs of the user, allowing variations in the type of devices connected, additions to memory, or optional features to the central processor. Minicomputers, starting with the PDP-8, were made of multiple cards, all powered by and communicating through a passive backplane.
The first commercial microcomputer to feature expansion slots was the Micral N, in 1973. The first de facto standard was established by the Altair 8800, developed in 1974-1975, whose expansion bus became a multi-manufacturer standard, the S-100 bus. Many of these computers were passive backplane designs, where all elements of the computer plugged into a card cage that passively distributed signals and power between the cards. Proprietary bus implementations for systems such as the Apple II co-existed with multi-manufacturer standards. IBM introduced what would retroactively be called the Industry Standard Architecture (ISA) bus with the IBM PC in 1981; at that time, the technology was called the PC bus. The IBM XT, introduced in 1983, used the same bus. The 8-bit PC and XT bus was extended with the introduction of the IBM AT in 1984, which used a second connector for extending the address and data bus over the XT, but remained backward compatible. Industry Standard Architecture became the designation for the IBM AT bus after other types were developed.
Users of the ISA bus had to have in-depth knowledge of the hardware they were adding to properly connect the devices, since memory addresses, I/O port addresses, and DMA channels had to be configured by switches or jumpers on the card to match the settings in driver software. IBM's MCA bus, developed for the PS/2 in 1987, was a competitor to ISA, also of IBM's design, but fell out of favor due to ISA's industry-wide acceptance and IBM's licensing terms for MCA. EISA, the 32-bit extended version of ISA championed by Compaq, was used on some PC motherboards until 1997, when Microsoft declared it a "legacy" subsystem in the PC 97 industry white paper. Proprietary local buses and then the VESA Local Bus standard were late-1980s expansion buses that were tied, but not exclusive, to the 80386 and 80486 CPU bus; the PC/104 bus is an embedded bus. Intel launched their PCI bus chipsets along with the P5-based Pentium CPUs in 1993; the PCI bus had been introduced in 1991 as a replacement for ISA. The standard is found on PC motherboards to this day.
The PCI standard supports bus bridging: as many as ten daisy-chained PCI buses have been tested. CardBus, using the PCMCIA connector, is a PCI format that attaches peripherals to the host PCI bus via a PCI-to-PCI bridge. CardBus is being supplanted by the ExpressCard format. Intel introduced the AGP bus in 1997 as a dedicated video acceleration solution. AGP devices are logically attached to the PCI bus over a PCI-to-PCI bridge. Though termed a bus, AGP supports only a single card at a time. Since 2005, PCI Express has been replacing both PCI and AGP; this standard, approved in 2004, implements the logical PCI protocol over a serial communication interface. PC/104 or Mini PCI connectors are sometimes added for expansion on small form factor boards such as Mini-ITX. For their 1000 EX and 1000 HX models, Tandy Computer designed the PLUS expansion interface, an adaptation of the XT bus supporting cards of a smaller form factor; because it is electrically compatible with the XT bus, a passive adapter can be made to connect XT cards to a PLUS expansion connector.
Another feature of PLUS cards is that they are stackable. Another bus that offered stackable expansion modules was the "sidecar" bus used by the IBM PCjr, which may have been electrically comparable to the XT bus. Again, PCjr sidecars are not technically expansion cards but expansion modules, the only difference being that the sidecar is an expansion card enclosed in a plastic box. Most other computer lines, including those from Apple Inc., Tandy, Commodore, and Atari, offered their own expansion buses; the Amiga used Zorro II. Apple used a proprietary system with seven 50-pin slots for Apple II peripheral cards, then later used NuBus for its Macintosh series until 1995, when it switched to the PCI bus. PCI expansion cards will function on any CPU platform if there is a software driver for that card type. PCI video cards and other cards that contain a BIOS are problematic, although video cards conforming to VESA standards may be used for secondary monitors. DEC Alpha, IBM PowerPC, and NEC MIPS workstations used PCI bus connectors.
Ethernet is a family of computer networking technologies used in local area networks, metropolitan area networks, and wide area networks. It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3, and has since retained a good deal of backward compatibility while being refined to support higher bit rates and longer link distances. Over time, Ethernet has replaced competing wired LAN technologies such as Token Ring, FDDI, and ARCNET. The original 10BASE5 Ethernet uses coaxial cable as a shared medium, while the newer Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 megabits per second to the latest 400 gigabits per second. The Ethernet standards comprise several wiring and signaling variants of the OSI physical layer in use with Ethernet. Systems communicating over Ethernet divide a stream of data into shorter pieces called frames; each frame contains source and destination addresses, and error-checking data so that damaged frames can be detected and discarded.
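As a hedged sketch of that frame layout, the following Python assembles a minimal Ethernet II frame: 6-byte destination and source addresses, a 2-byte type field, a payload padded to the 46-byte minimum, and a CRC-32 frame check sequence. The addresses are hypothetical, and real hardware applies the CRC with specific bit ordering, so this is a structural illustration rather than a wire-accurate implementation:

import struct
import zlib

def ethernet_frame(dst: bytes, src: bytes, ethertype: int,
                   payload: bytes) -> bytes:
    if len(payload) < 46:                    # pad to minimum payload
        payload = payload.ljust(46, b"\x00")
    header = dst + src + struct.pack("!H", ethertype)
    fcs = struct.pack("<I", zlib.crc32(header + payload))
    return header + payload + fcs

frame = ethernet_frame(bytes.fromhex("ffffffffffff"),  # broadcast
                       bytes.fromhex("020000000001"),  # made-up source
                       0x0800,                         # IPv4 EtherType
                       b"hello")
print(len(frame))  # 64 bytes: the minimum Ethernet frame size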
As per the OSI model, Ethernet provides services up to and including the data link layer. Features such as the 48-bit MAC address and the Ethernet frame format have influenced other networking protocols, including the Wi-Fi wireless networking technology. Ethernet is used in both home and industry; the Internet Protocol is frequently carried over Ethernet, so it is considered one of the key technologies that make up the Internet. Ethernet was developed at Xerox PARC between 1973 and 1974 and was inspired by ALOHAnet. The idea was first documented in a memo that Robert Metcalfe wrote on May 22, 1973, where he named it after the luminiferous aether once postulated to exist as an "omnipresent, completely-passive medium for the propagation of electromagnetic waves." In 1975, Xerox filed a patent application listing Metcalfe, David Boggs, Chuck Thacker, and Butler Lampson as inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper; that same year, Ron Crane, Bob Garner, and Roy Ogus facilitated the upgrade from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was released to the market in 1980.
Metcalfe left Xerox in June 1979 to form 3Com. He convinced Digital Equipment Corporation, Intel, and Xerox to work together to promote Ethernet as a standard; as part of that process Xerox agreed to relinquish its 'Ethernet' trademark. The first standard was published in September 1980 as "The Ethernet, A Local Area Network. Data Link Layer and Physical Layer Specifications". This so-called DIX standard specified 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit Ethertype field. Version 2 was published in November 1982 and defines what has become known as Ethernet II. Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983. Ethernet competed with Token Ring and other proprietary protocols, but was able to adapt to market realities and shift to inexpensive thin coaxial cable and ubiquitous twisted pair wiring. By the end of the 1980s, Ethernet was the dominant network technology, and in the process 3Com became a major company.
3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, and that year started selling adapters for PDP-11s and VAXes, as well as Multibus-based Intel and Sun Microsystems computers. This was followed by DEC's Unibus-to-Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it one of the largest computer networks in the world at that time. An Ethernet adapter card for the IBM PC was released in 1982, and by 1985 3Com had sold 100,000. Parallel-port-based Ethernet adapters were also produced, with drivers for DOS and Windows. By the early 1990s, Ethernet became so prevalent that it was a must-have feature for modern computers, and Ethernet ports began to appear on some PCs and most workstations; this process was sped up with the introduction of 10BASE-T and its small modular connector, at which point Ethernet ports appeared even on low-end motherboards. Since then, Ethernet technology has evolved to meet new bandwidth and market requirements.
In addition to computers, Ethernet is now used to interconnect appliances and other personal devices. As Industrial Ethernet it is used in industrial applications, and it is replacing legacy data transmission systems in the world's telecommunications networks. By 2010, the market for Ethernet equipment amounted to over $16 billion per year. In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks. The "DIX group", with Gary Robinson, Phil Arst, and Bob Printis, submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN specification. In addition to CSMA/CD, Token Ring and Token Bus were also considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal. Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products.
With such business implications in mind, David Liddle of Xerox and other industry backers pushed to speed Ethernet's standardization.
Bluetooth is a wireless technology standard for exchanging data between fixed and mobile devices over short distances using short-wavelength UHF radio waves in the industrial, scientific and medical (ISM) radio bands, from 2.400 to 2.485 GHz, and for building personal area networks. It was originally conceived as a wireless alternative to RS-232 data cables. Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 30,000 member companies in the areas of telecommunication, computing, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1, but no longer maintains the standard. The Bluetooth SIG oversees development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market a device as a Bluetooth device. A network of patents applies to the technology. The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO at Ericsson Mobile in Lund, Sweden, and by Johan Ullman. The purpose was to develop wireless headsets, according to two inventions by Johan Ullman, SE 8902098-6, issued 1989-06-12, and SE 9202239, issued 1992-07-24.
Nils Rydbeck tasked Tord Wingren with specifying the technology and Jaap Haartsen and Sven Mattisson with developing it; both were working for Ericsson in Lund. The technology was invented by Dutch electrical engineer Jaap Haartsen, working for telecommunications company Ericsson, in 1994. The first consumer Bluetooth device launched in 1999: a hands-free mobile headset that earned the technology the "Best of Show Technology Award" at COMDEX. The first Bluetooth mobile phone was the Ericsson T36, but it was the revised T39 model that made it to store shelves in 2001. The name Bluetooth is an Anglicised version of the Scandinavian Blåtand/Blåtann, the epithet of the tenth-century king Harald Bluetooth, who united dissonant Danish tribes into a single kingdom; the implication is that Bluetooth similarly unites communication protocols. The idea for the name was proposed in 1997 by Jim Kardach of Intel, who had developed a system that would allow mobile phones to communicate with computers. At the time of this proposal he was reading Frans G. Bengtsson's historical novel The Long Ships, about Vikings and King Harald Bluetooth.
The Bluetooth logo is a bind rune merging the Younger Futhark runes ᚼ (Hagall) and ᛒ (Bjarkan), Harald's initials. Bluetooth operates at frequencies between 2402 and 2480 MHz, or 2400 and 2483.5 MHz including guard bands 2 MHz wide at the bottom end and 3.5 MHz wide at the top. This is in the globally unlicensed industrial, scientific and medical 2.4 GHz short-range radio frequency band. Bluetooth uses a radio technology called frequency-hopping spread spectrum: it divides transmitted data into packets and transmits each packet on one of 79 designated Bluetooth channels, each with a bandwidth of 1 MHz. It performs 1600 hops per second, with adaptive frequency-hopping enabled. Bluetooth Low Energy uses 2 MHz spacing, which accommodates 40 channels. Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous bit rate of 1 Mbit/s is possible; the term Enhanced Data Rate (EDR) is used to describe π/4-DQPSK and 8-DPSK schemes, giving 2 and 3 Mbit/s respectively.
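A small sketch of the channel plan and gross air rates just described; the 1 Msymbol/s signaling rate is inferred from the 1 Mbit/s basic-rate figure above rather than stated in this text:

# BR/EDR channel plan: 79 channels of 1 MHz, f = 2402 + k MHz.
channels_mhz = [2402 + k for k in range(79)]
assert channels_mhz[0] == 2402 and channels_mhz[-1] == 2480

# Gross air rate per modulation scheme, assuming 1 Msymbol/s:
bits_per_symbol = {"GFSK (basic rate)": 1,
                   "pi/4-DQPSK (EDR)": 2,
                   "8-DPSK (EDR)": 3}
for scheme, bits in bits_per_symbol.items():
    print(f"{scheme}: {bits} Mbit/s")  # 1, 2, and 3 Mbit/s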
The combination of these modes in Bluetooth radio technology is classified as a BR/EDR radio. Bluetooth is a packet-based protocol with a master/slave architecture. One master may communicate with up to seven slaves in a piconet; all devices share the master's clock. Packet exchange is based on the basic clock, defined by the master, which ticks at 312.5 µs intervals. Two clock ticks make up a slot of 625 µs, and two slots make up a slot pair of 1250 µs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots; the slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases the master's transmission begins in even slots and the slave's in odd slots. The above excludes Bluetooth Low Energy, introduced in the 4.0 specification, which uses the same spectrum but somewhat differently. A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet, though not all devices reach this maximum; the devices can switch roles by agreement, and the slave can become the master.
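The slot arithmetic lends itself to a short worked example; this sketch encodes the tick, slot, and slot-pair durations from the paragraph above and the even/odd alternation for single-slot packets:

TICK_US = 312.5             # basic clock tick
SLOT_US = 2 * TICK_US       # 625 us slot
SLOT_PAIR_US = 2 * SLOT_US  # 1250 us slot pair

def single_slot_transmitter(slot_number: int) -> str:
    """Master transmits in even slots, slave in odd slots."""
    return "master" if slot_number % 2 == 0 else "slave"

print(SLOT_US, SLOT_PAIR_US)  # 625.0 1250.0
print([single_slot_transmitter(n) for n in range(4)])
# ['master', 'slave', 'master', 'slave']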
The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices play the master role in one piconet and the slave role in another. At any given time, data can be transferred between the master and one other device; the master chooses which slave device to address. Since it is the master that chooses which slave to address, whereas a slave is supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; the specification is vague as to the required behavior in scatternets. Bluetooth is a standard wire-replacement communications protocol.
A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over wired media such as copper wires or optical cables, or wireless media such as Wi-Fi. Network computer devices that originate, route, and terminate the data are called network nodes. Nodes are identified by network addresses and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, as well as many others.
Computer networks differ in the transmission medium used to carry their signals, communications protocols to organize network traffic, the network's size, traffic control mechanisms, and organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes: In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the ALOHA network developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of one gigabit per second. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology, or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications by allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls, and video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows sharing of files.