A transmission medium is a material substance that can propagate energy waves. For example, the transmission medium for sound is usually a gas, but solids and liquids may also act as transmission media for sound; the absence of a material medium in a vacuum may itself constitute a transmission medium for electromagnetic waves such as light and radio waves. While a material substance is not required for electromagnetic waves to propagate, such waves are affected by the transmission media they pass through, for instance by absorption or by reflection or refraction at the interfaces between media; the term transmission medium also refers to a technical device that employs a material substance to transmit or guide waves. Thus, an optical fiber or a copper cable is a transmission medium, and such media also guide the transmission of signals in networks. A transmission medium can be classified as a linear medium if different waves at any particular point in the medium can be superposed. Electromagnetic radiation can be transmitted through an optical medium, such as optical fiber, or through twisted pair wires, coaxial cable, or dielectric-slab waveguides.
It may pass through any physical material that is transparent to the specific wavelength, such as water, glass, or concrete. Sound is, by definition, the vibration of matter, so it requires a physical medium for transmission, as do other kinds of mechanical waves and heat energy. Historically, science incorporated various aether theories to explain the transmission medium. However, it is now known that electromagnetic waves do not require a physical transmission medium and so can travel through the "vacuum" of free space. Regions of the insulative vacuum can become conductive for electrical conduction through the presence of free electrons, holes, or ions. A physical medium in data communications is the transmission path over which a signal propagates. Many transmission media are used as communications channels. For telecommunications purposes in the United States, under Federal Standard 1037C, transmission media are classified as one of the following: guided, in which waves are guided along a solid medium such as a transmission line; and wireless, in which transmission and reception are achieved by means of an antenna.
One of the most common physical media used in networking is copper wire. Copper wire can carry signals over long distances using relatively small amounts of power; the unshielded twisted pair is eight strands of copper wire, organized into four pairs. Another example of a physical medium is optical fiber, which has emerged as the most commonly used transmission medium for long-distance communications. Optical fiber is a thin strand of glass. Four major factors favor optical fiber over copper: data rates, distance, installation, and cost. Optical fiber can carry huge amounts of data compared to copper, and it can be run for hundreds of miles without the need for signal repeaters, in turn reducing maintenance costs and improving the reliability of the communication system, because repeaters are a common source of network failures. Glass is lighter than copper, reducing the need for specialized heavy-lifting equipment when installing long-distance optical fiber. Optical fiber for indoor applications costs about a dollar a foot, the same as copper.
Multimode and single mode are the two types of optical fiber in use. Multimode fiber uses LEDs as the light source and can carry signals over shorter distances, about 2 kilometers. Single mode fiber can carry signals over distances of tens of miles. Wireless media may carry surface waves or skywaves, either longitudinally or transversely, and are classified accordingly. In both cases, communication takes the form of electromagnetic waves. With guided transmission media, the waves are guided along a physical path. Unguided transmission media are methods that allow the transmission of data without the use of physical means to define the path the data takes; examples include radio and infrared. Unguided media provide no such guidance. The term direct link refers to the transmission path between two devices in which signals propagate directly from transmitters to receivers with no intermediate devices other than amplifiers or repeaters used to increase signal strength; this term can apply to unguided media as well. A transmission may be simplex, half-duplex, or full-duplex.
In simplex transmission, signals are transmitted in only one direction. In half-duplex operation, both stations may transmit, but only one at a time. In full-duplex operation, both stations may transmit simultaneously; in the latter case, the medium carries signals in both directions at the same time. There are two types of transmission media: guided and unguided. Guided media include unshielded twisted pair, shielded twisted pair, coaxial cable, and optical fiber. Unguided transmission media carry data signals that flow through the air; they are not bound to a channel to follow. Unguided media used for data communication include radio transmission and microwave transmission. Transmission and reception of data are performed in four steps; the data is coded as binary numbers at the sender end. A carrie
Ethernet is a family of computer networking technologies used in local area networks, metropolitan area networks and wide area networks. It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3, and has since retained a good deal of backward compatibility and been refined to support higher bit rates and longer link distances. Over time, Ethernet has replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET; the original 10BASE5 Ethernet uses coaxial cable as a shared medium, while the newer Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 megabits per second to the latest 400 gigabits per second. The Ethernet standards comprise several wiring and signaling variants of the OSI physical layer in use with Ethernet. Systems communicating over Ethernet divide a stream of data into shorter pieces called frames; each frame contains source and destination addresses, as well as error-checking data so that damaged frames can be detected and discarded.
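As a concrete illustration of the frame structure just described, the following Python sketch assembles a minimal Ethernet II frame: destination and source MAC addresses, an EtherType, a padded payload, and a CRC-32 frame check sequence as the error-checking data. The function name, example addresses, and the simplified handling of padding and FCS byte order are illustrative assumptions, not a reference implementation.

```python
import struct
import zlib

def build_ethernet_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble a minimal Ethernet II frame: header + payload + FCS (CRC-32)."""
    # Pad the payload to the 46-byte minimum so the whole frame reaches 64 bytes.
    if len(payload) < 46:
        payload = payload + bytes(46 - len(payload))
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    fcs = zlib.crc32(header + payload)               # error-checking data over header and payload
    return header + payload + struct.pack("<I", fcs)

# Hypothetical addresses, for illustration only.
frame = build_ethernet_frame(
    dst_mac=bytes.fromhex("ffffffffffff"),   # broadcast destination
    src_mac=bytes.fromhex("020000000001"),   # locally administered source address
    ethertype=0x0800,                        # IPv4 payload
    payload=b"hello",
)
print(len(frame), "bytes")   # 64: 14-byte header + 46-byte padded payload + 4-byte FCS
```

A receiver recomputes the CRC over the header and payload and discards the frame if it does not match the received FCS, which is how damaged frames are detected.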
As per the OSI model, Ethernet provides services up to and including the data link layer. Features such as the 48-bit MAC address and the Ethernet frame format have influenced other networking protocols, including Wi-Fi wireless networking technology. Ethernet is used in both homes and industry; the Internet Protocol is carried over Ethernet, and so it is considered one of the key technologies that make up the Internet. Ethernet was developed at Xerox PARC between 1973 and 1974 and was inspired by ALOHAnet. The idea was first documented in a memo that Robert Metcalfe wrote on May 22, 1973, where he named it after the luminiferous aether once postulated to exist as an "omnipresent, completely-passive medium for the propagation of electromagnetic waves." In 1975, Xerox filed a patent application listing Metcalfe, David Boggs, Chuck Thacker and Butler Lampson as inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper; that same year, Ron Crane, Bob Garner and Roy Ogus facilitated the upgrade from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was released to the market in 1980.
Metcalfe left Xerox in June 1979 to form 3Com. He convinced Digital Equipment Corporation, Intel and Xerox to work together to promote Ethernet as a standard; as part of that process, Xerox agreed to relinquish their 'Ethernet' trademark. The first standard was published in September 1980 as "The Ethernet, A Local Area Network. Data Link Layer and Physical Layer Specifications"; this so-called DIX standard specified 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit Ethertype field. Version 2 was published in November 1982 and defines what has become known as Ethernet II. Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983. Ethernet initially competed with Token Ring and other proprietary protocols, but it was able to adapt to market realities and shift to inexpensive thin coaxial cable and ubiquitous twisted pair wiring. By the end of the 1980s, Ethernet was the dominant network technology. In the process, 3Com became a major company.
3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, and that year started selling adapters for PDP-11s and VAXes, as well as Multibus-based Intel and Sun Microsystems computers. This was followed by DEC's Unibus to Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it one of the largest computer networks in the world at that time. An Ethernet adapter card for the IBM PC was released in 1982, and, by 1985, 3Com had sold 100,000. Parallel port based Ethernet adapters were also produced, with drivers for DOS and Windows. By the early 1990s, Ethernet became so prevalent that it was a must-have feature for modern computers, and Ethernet ports began to appear on some PCs and most workstations; this process was sped up with the introduction of 10BASE-T and its small modular connector, at which point Ethernet ports appeared even on low-end motherboards. Since then, Ethernet technology has evolved to meet new bandwidth and market requirements.
In addition to computers, Ethernet is now used to interconnect appliances and other personal devices. As Industrial Ethernet it is used in industrial applications, and it is replacing legacy data transmission systems in the world's telecommunications networks. By 2010, the market for Ethernet equipment amounted to over $16 billion per year. In February 1980, the Institute of Electrical and Electronics Engineers started project 802 to standardize local area networks; the "DIX group", with Gary Robinson, Phil Arst and Bob Printis, submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN specification. In addition to CSMA/CD, Token Ring and Token Bus were considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal. Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products.
With such business implications in mind, David Liddle an
In telecommunications, transmission is the process of sending and propagating an analogue or digital information signal over a physical point-to-point or point-to-multipoint transmission medium, whether wired, optical fiber or wireless. One example of transmission is the sending of a signal with limited duration, for example a block or packet of data, a phone call, or an email. Transmission technologies and schemes refer to physical layer protocol duties such as modulation, line coding, error control, bit synchronization and multiplexing, but the term may also involve higher-layer protocol duties, for example digitizing an analog message signal or data compression. Transmission of a digital message, or of a digitized analog signal, is known as digital communication.
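To make one of those physical layer duties concrete, the sketch below line-codes a byte string using Manchester encoding, under the common IEEE 802.3-style convention in which a 0 bit is sent as high-then-low and a 1 bit as low-then-high; the function name and output format are illustrative only.

```python
def manchester_encode(data: bytes) -> list[int]:
    """Line-code a byte string with Manchester encoding (IEEE 802.3-style convention):
    a 0 bit becomes high-then-low, a 1 bit becomes low-then-high."""
    signal = []
    for byte in data:
        for i in range(7, -1, -1):              # most significant bit first
            bit = (byte >> i) & 1
            signal += [0, 1] if bit else [1, 0]
    return signal

# Each data bit occupies two signal intervals, so the line rate is twice the bit rate.
print(manchester_encode(b"\xA5")[:8])   # first four bits of 0xA5 (1010...) -> [0, 1, 1, 0, 0, 1, 1, 0]
```

The mid-bit transition in every symbol is what lets the receiver recover bit timing from the signal itself, which is the usual motivation for this kind of line coding.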
Packet switching is a method of grouping data transmitted over a digital network into packets. Packets are made of a header and a payload. Data in the header are used by networking hardware to direct the packet to its destination, where the payload is extracted and used by application software. Packet switching is the primary basis for data communications in computer networks worldwide. In the early 1960s, American computer scientist Paul Baran developed the concept of distributed adaptive message block switching, with the goal of providing a fault-tolerant, efficient routing method for telecommunication messages, as part of a research program at the RAND Corporation funded by the US Department of Defense; this concept contrasted with and contradicted the then-established principles of pre-allocation of network bandwidth, fortified by the development of telecommunications in the Bell System. The new concept found little resonance among network implementers until the independent work of British computer scientist Donald Davies at the National Physical Laboratory in 1965.
Davies is credited with coining the modern term packet switching and inspiring numerous packet switching networks in the decade following, including the incorporation of the concept in the early ARPANET in the United States. A simple definition of packet switching is: the routing and transferring of data by means of addressed packets so that a channel is occupied during the transmission of the packet only; upon completion of the transmission the channel is made available for the transfer of other traffic. Packet switching allows delivery of variable bit rate data streams, realized as sequences of packets, over a computer network which allocates transmission resources as needed using statistical multiplexing or dynamic bandwidth allocation techniques. As they traverse networking hardware, such as switches and routers, packets are received, buffered and retransmitted, resulting in variable latency and throughput depending on the link capacity and the traffic load on the network. Packets are forwarded by intermediate network nodes asynchronously using first-in, first-out buffering, but may be forwarded according to some scheduling discipline for fair queuing, traffic shaping, or for differentiated or guaranteed quality of service, such as weighted fair queuing or leaky bucket.
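The splitting of a data stream into addressed packets can be sketched as follows; the header fields (source, destination, sequence number) and the 512-byte payload limit are arbitrary illustrative choices rather than any real protocol's format.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # source address
    dst: str        # destination address
    seq: int        # sequence number, used to restore the original order
    payload: bytes  # the data carried by this packet

def packetize(data: bytes, src: str, dst: str, mtu: int = 512) -> list[Packet]:
    """Split a byte stream into packets whose payload does not exceed the MTU."""
    return [Packet(src, dst, seq, data[i:i + mtu])
            for seq, i in enumerate(range(0, len(data), mtu))]

def reassemble(packets: list[Packet]) -> bytes:
    """Rebuild the original stream; packets may have arrived out of order."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

packets = packetize(b"x" * 2000, src="host-a", dst="host-b")
print(len(packets))                                           # 4 packets of up to 512 bytes each
assert reassemble(list(reversed(packets))) == b"x" * 2000     # order is restored from seq
```

Because each packet carries its own addressing and sequence information, intermediate nodes can buffer and forward packets independently, and the receiver can restore the original order.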
Packet-based communication may be implemented without intermediate forwarding nodes. In the case of a shared physical medium, the packets may be delivered according to a multiple access scheme. Packet switching contrasts with another principal networking paradigm, circuit switching, a method which pre-allocates dedicated network bandwidth for each communication session, each having a constant bit rate and latency between nodes. In cases of billable services, such as cellular communication services, circuit switching is characterized by a fee per unit of connection time, even when no data is transferred, while packet switching may be characterized by a fee per unit of information transmitted, such as characters, packets, or messages. The concept of switching small blocks of data was first explored independently by Paul Baran at the RAND Corporation starting in the late 1950s in the US and by Donald Davies at the National Physical Laboratory in the UK. In the late 1950s, the US Air Force established a wide area network for the Semi-Automatic Ground Environment radar defense system.
They sought a system that might survive a nuclear attack to enable a response, thus diminishing the attractiveness of the first-strike advantage to enemies. Baran developed the concept of distributed adaptive message block switching in support of the Air Force initiative; the concept was first presented to the Air Force in the summer of 1961 as briefing B-265, then published as RAND report P-2626 in 1962 and in report RM 3420 in 1964. Report P-2626 described a general architecture for a large-scale, survivable communications network; the work focuses on three key ideas: the use of a decentralized network with multiple paths between any two points, dividing user messages into message blocks, and delivery of these messages by store-and-forward switching. Davies developed a similar message routing concept in 1965; he called it packet switching and proposed building a nationwide network in the UK. He gave a talk on the proposal in 1966, after which a person from the Ministry of Defence told him about Baran's work. Roger Scantlebury, a member of Davies' team, met Lawrence Roberts at the 1967 ACM Symposium on Operating System Principles and suggested it for use in the ARPANET.
Davies had chosen some of the same parameters for his original network design as did Baran, such as a packet size of 1024 bits. In 1966, Davies proposed that a network should be built at the laboratory to serve the needs of NPL and prove the feasibility of packet switching. After a pilot experiment in 1967, the NPL Data Communications Network entered service in 1969. Leonard Kleinrock conducted early research in queueing theory for his doctoral dissertation at MIT in 1961-62 and published it as a book in 1964, in the field of digital message switching. Following the 1967 ACM Symposium, Lawrence Roberts asked Kleinrock to carry out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET; the NPL team also carried out simulation work on packet networks. The French CYCLADES network, designed by Louis Pouzin in the early 1970s, was the first to employ what came to be known as the end-to-end principle, making the hosts responsible for the reliable delivery of data on a packet-switched network, rather than this being a centralized service of the network itself.
Wi-Fi is a technology for radio wireless local area networking of devices based on the IEEE 802.11 standards. Wi‑Fi is a trademark of the Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to products that successfully complete interoperability certification testing. Devices that can use Wi-Fi technologies include, among others, desktops and laptops, video game consoles, tablets, smart TVs, digital audio players, digital cameras and drones. Wi-Fi compatible devices can connect to the Internet via a wireless access point; such an access point has a range of about 20 meters indoors and a greater range outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves, or as large as many square kilometres, achieved by using multiple overlapping access points. Different versions of Wi-Fi exist, with different radio bands and speeds. Wi-Fi most commonly uses the 2.4 gigahertz UHF and 5 gigahertz SHF ISM radio bands. Each channel can be time-shared by multiple networks.
These wavelengths work best for line-of-sight use. Many common materials absorb or reflect them, which further restricts range but can help minimise interference between different networks in crowded environments. At close range, some versions of Wi-Fi, running on suitable hardware, can achieve speeds of over 1 Gbit/s. Anyone within range with a wireless network interface controller can attempt to access a network. Wi-Fi Protected Access is a family of technologies created to protect information moving across Wi-Fi networks and includes solutions for personal and enterprise networks. Security features of WPA have included stronger protections and new security practices as the security landscape has changed over time. In 1971, ALOHAnet connected the Hawaiian Islands with a UHF wireless packet network. ALOHAnet and the ALOHA protocol were early forerunners to Ethernet and the IEEE 802.11 protocols, respectively. A 1985 ruling by the U.S. Federal Communications Commission released the ISM band for unlicensed use.
These frequency bands are the same ones used by equipment such as microwave ovens and are subject to interference. In 1991, NCR Corporation with AT&T Corporation invented the precursor to 802.11, intended for use in cashier systems, under the name WaveLAN. The Australian radio-astronomer Dr John O'Sullivan, with his colleagues Terence Percival, Graham Daniels, Diet Ostry and John Deane, developed a key patent used in Wi-Fi as a by-product of a Commonwealth Scientific and Industrial Research Organisation research project, "a failed experiment to detect exploding mini black holes the size of an atomic particle". Dr O'Sullivan and his colleagues are credited with inventing Wi-Fi. In 1992 and 1996, CSIRO obtained patents for a method used in Wi-Fi to "unsmear" the signal. The first version of the 802.11 protocol was released in 1997 and provided up to 2 Mbit/s link speeds. This was updated in 1999 with 802.11b to permit 11 Mbit/s link speeds, which proved to be popular. In 1999, the Wi-Fi Alliance formed as a trade association to hold the Wi-Fi trademark under which most products are sold.
Wi-Fi uses a large number of patents held by many different organizations. In April 2009, 14 technology companies agreed to pay CSIRO $1 billion for infringements on CSIRO patents; this led to Australia labeling Wi-Fi as an Australian invention, though this has been the subject of some controversy. CSIRO won a further $220 million settlement for Wi-Fi patent infringements in 2012, with global firms in the United States required to pay CSIRO licensing rights estimated to be worth an additional $1 billion in royalties. In 2016, the wireless local area network Test Bed was chosen as Australia's contribution to the exhibition A History of the World in 100 Objects held in the National Museum of Australia. The name Wi-Fi, commercially used at least as early as August 1999, was coined by the brand-consulting firm Interbrand. The Wi-Fi Alliance had hired Interbrand to create a name that was "a little catchier than 'IEEE 802.11b Direct Sequence'." Phil Belanger, a founding member of the Wi-Fi Alliance who presided over the selection of the name "Wi-Fi", has stated that Interbrand invented Wi-Fi as a pun on the word hi-fi, a term for high-quality audio technology.
Interbrand also created the Wi-Fi logo. The yin-yang Wi-Fi logo indicates the certification of a product for interoperability; the Wi-Fi Alliance used the advertising slogan "The Standard for Wireless Fidelity" for a short time after the brand name was created. While inspired by the term hi-fi, the name was never officially "Wireless Fidelity", although the Wi-Fi Alliance was called the "Wireless Fidelity Alliance Inc" in some publications. Non-Wi-Fi technologies intended for fixed points, such as Motorola Canopy, are described as fixed wireless. Alternative wireless technologies include mobile phone standards such as 2G, 3G, 4G and LTE. The name is sometimes written as WiFi, Wifi, or wifi, but these forms are not approved by the Wi-Fi Alliance. IEEE is a separate, but related, organization, and its website has stated "WiFi is a short name for Wireless Fidelity". To connect to a Wi-Fi LAN, a computer has to be equipped with a wireless network interface controller; the combination of a computer and an interface controller is called a station.
A service set is the set of all the devices associated with a particular Wi-Fi network. The service set can be local, extended or mesh; each service set has an associated identifier, the 32-byte Service Set Identifier, which identifies the particular network.
Randomness is the lack of pattern or predictability in events. A random sequence of events, symbols or steps has no order and does not follow an intelligible pattern or combination. Individual random events are by definition unpredictable, but in many cases the frequency of different outcomes over a large number of events is predictable. For example, when throwing two dice, the outcome of any particular roll is unpredictable, but a sum of 7 will occur twice as often as a sum of 4. In this view, randomness is a measure of uncertainty of an outcome, rather than haphazardness, and applies to concepts of chance and information entropy; the fields of mathematics and statistics use formal definitions of randomness. In statistics, a random variable is an assignment of a numerical value to each possible outcome of an event space; this association facilitates the calculation of probabilities of the events. Random variables can appear in random sequences. A random process is a sequence of random variables whose outcomes do not follow a deterministic pattern, but follow an evolution described by probability distributions.
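The two-dice claim above can be checked empirically: there are six ways to roll a sum of 7 but only three ways to roll a 4 out of 36 equally likely outcomes, so 7 should appear about twice as often over many rolls. The short simulation below uses Python's pseudorandom generator, which is sufficient for illustration.

```python
import random
from collections import Counter

rolls = Counter(random.randint(1, 6) + random.randint(1, 6) for _ in range(100_000))
# Individual rolls are unpredictable, but the observed frequencies converge to the probabilities:
print(rolls[7] / 100_000)   # approaches 6/36, about 0.167
print(rolls[4] / 100_000)   # approaches 3/36, about 0.083, roughly half as often as 7
```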
These and other constructs are useful in probability theory and the various applications of randomness. Randomness is most often used in statistics to signify well-defined statistical properties. Monte Carlo methods, which rely on random input, are important techniques in science, for instance in computational science. By analogy, quasi-Monte Carlo methods use quasirandom number generators. Random selection, when narrowly associated with a simple random sample, is a method of selecting items from a population where the probability of choosing a specific item is the proportion of those items in the population. For example, with a bowl containing just 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10. Note that a random selection mechanism that selected 10 marbles from this bowl would not necessarily result in 1 red and 9 blue, as the sketch below illustrates. In situations where a population consists of items that are distinguishable, a random selection mechanism requires equal probabilities for any item to be chosen.
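A minimal sketch of the marble example, assuming Python's random module as the selection mechanism: each draw of 10 marbles is a simple random sample, so the red count varies from draw to draw, while the long-run fraction of red marbles drawn approaches 1/10.

```python
import random
from collections import Counter

bowl = ["red"] * 10 + ["blue"] * 90

# One simple random sample of 10 marbles without replacement; the red count varies by draw.
print(Counter(random.sample(bowl, 10)))

# Averaged over many draws, the fraction of red marbles approaches 10/100 = 0.1.
reds = sum(random.sample(bowl, 10).count("red") for _ in range(10_000))
print(reds / (10_000 * 10))
```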
That is, if the selection process is such that each member of a population, say of research subjects, has the same probability of being chosen, then we can say the selection process is random. In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threw dice to determine fate, and this later evolved into games of chance. Most ancient cultures used various methods of divination to attempt to circumvent randomness and fate; the Chinese of 3000 years ago were perhaps the earliest people to formalize odds and chance. The Greek philosophers discussed randomness at length, but only in non-quantitative forms; it was only in the 16th century that Italian mathematicians began to formalize the odds associated with various games of chance. The invention of the calculus had a positive impact on the formal study of randomness. In the 1888 edition of his book The Logic of Chance, John Venn wrote a chapter on "The conception of randomness" that included his view of the randomness of the digits of the number pi, using them to construct a random walk in two dimensions.
The early part of the 20th century saw a rapid growth in the formal analysis of randomness, as various approaches to the mathematical foundations of probability were introduced. In the mid- to late-20th century, ideas of algorithmic information theory introduced new dimensions to the field via the concept of algorithmic randomness. Although randomness had been viewed as an obstacle and a nuisance for many centuries, in the 20th century computer scientists began to realize that the deliberate introduction of randomness into computations can be an effective tool for designing better algorithms. In some cases such randomized algorithms outperform the best deterministic methods. Many scientific fields are concerned with randomness: In the 19th century, scientists used the idea of random motions of molecules in the development of statistical mechanics to explain phenomena in thermodynamics and the properties of gases. According to several standard interpretations of quantum mechanics, microscopic phenomena are objectively random.
That is, in an experiment that controls all causally relevant parameters, some aspects of the outcome still vary randomly. For example, if a single unstable atom is placed in a controlled environment, it cannot be predicted how long it will take for the atom to decay—only the probability of decay in a given time. Thus, quantum mechanics does not specify the outcome of individual experiments but only the probabilities. Hidden variable theories reject the view that nature contains irreducible randomness: such theories posit that in the processes that appear random, properties with a certain statistical distribution are at work behind the scenes, determining the outcome in each case; the modern evolutionary synthesis ascribes the observed diversity of life to random genetic mutations followed by natural selection. The latter retains some random mutations in the gene pool due to the systematically improved chance for survival and reproduction that those mutated genes confer on individuals who possess them.
Several authors claim that evolution and sometimes development require a specific form of randomness, namely the introduction of qualitatively new behaviors. Instead of the choice of one possibility among several pre-given ones, this randomness corresponds to the formation of new possibilities; the characteristics of an organism arise to some extent deterministically and to som
Time-division multiple access
Time-division multiple access is a channel access method for shared-medium networks. It allows several users to share the same frequency channel by dividing the signal into different time slots; the users transmit one after the other, each using its own time slot. This allows multiple stations to share the same transmission medium while using only a part of its channel capacity. TDMA is used in the digital 2G cellular systems such as Global System for Mobile Communications, IS-136, Personal Digital Cellular and iDEN, and in the Digital Enhanced Cordless Telecommunications standard for portable phones; it is also used extensively in satellite systems, combat-net radio systems, and passive optical networks for upstream traffic from premises to the operator. For usage of Dynamic TDMA packet mode communication, see below. TDMA is a type of time-division multiplexing, with the special point that instead of having one transmitter connected to one receiver, there are multiple transmitters. In the case of the uplink from a mobile phone to a base station this becomes difficult because the mobile phone can move around and vary the timing advance required to make its transmission match the gap in transmission from its peers.
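A minimal sketch of the slot-sharing idea described above, with illustrative values (the frame length happens to match GSM's roughly 4.615 ms frame, but the station names and slot assignment are invented): each frame is divided into a fixed number of slots, and a station may transmit only during its own slot.

```python
# A minimal TDMA schedule sketch: one frame is divided into N time slots,
# and each user may transmit only during the slot assigned to it.
SLOTS_PER_FRAME = 8
FRAME_DURATION_MS = 4.615                      # roughly the length of a GSM frame
SLOT_DURATION_MS = FRAME_DURATION_MS / SLOTS_PER_FRAME

users = ["phone-a", "phone-b", "phone-c"]      # hypothetical stations
schedule = {slot: users[slot % len(users)] for slot in range(SLOTS_PER_FRAME)}

def owner_at(t_ms: float) -> str:
    """Return which station may transmit at time t_ms, measured from the frame start."""
    slot = int((t_ms % FRAME_DURATION_MS) / SLOT_DURATION_MS)
    return schedule[slot]

print(round(SLOT_DURATION_MS, 3))   # ~0.577 ms per slot
print(owner_at(1.0))                # the station allowed to transmit 1 ms into a frame
```

Each station uses only SLOTS_PER_FRAME-th of the channel's capacity, which is the trade-off the text describes.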
TDMA has several characteristic features: it shares a single carrier frequency with multiple users; its non-continuous transmission makes handoff simpler; slots can be assigned on demand in dynamic TDMA; and it requires less stringent power control than CDMA due to reduced intra-cell interference. On the other hand, it has higher synchronization overhead than CDMA; advanced equalization may be necessary for high data rates if the channel is "frequency selective" and creates intersymbol interference; cell breathing is more complicated than in CDMA; frequency/slot allocation adds complexity; and the pulsating power envelope can cause interference with other devices. Most 2G cellular systems, with the notable exception of IS-95, are based on TDMA. GSM, D-AMPS, PDC, iDEN and PHS are examples of TDMA cellular systems. GSM combines TDMA with frequency hopping and wideband transmission to minimize common types of interference. In the GSM system, the synchronization of the mobile phones is achieved by sending timing advance commands from the base station, which instruct the mobile phone to transmit earlier and by how much.
This compensates for the propagation delay resulting from the finite speed of radio waves. The mobile phone is not allowed to transmit for its entire time slot; there is a guard interval at the end of each time slot. As the transmission moves into the guard period, the mobile network adjusts the timing advance to synchronize the transmission. Initial synchronization of a phone requires even more care. Before a mobile transmits, there is no way to know the offset required. For this reason, an entire time slot has to be dedicated to mobiles attempting to contact the network; the mobile attempts to broadcast at the beginning of the time slot. If the mobile is located next to the base station, there will be no time delay and this will succeed. If, however, the mobile phone is just less than 35 km from the base station, the time delay will mean the mobile's broadcast arrives at the very end of the time slot. In that case, the mobile will be instructed to broadcast its messages starting nearly a whole time slot earlier than would otherwise be expected.
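The 35 km figure follows from the timing advance arithmetic. Assuming the usual GSM parameters (a bit period of about 3.69 microseconds and a maximum timing advance of 63 such steps, which must cover the round trip to the base station and back), a rough check in Python:

```python
# Rough arithmetic behind the ~35 km GSM cell limit (assumes the usual GSM figures:
# timing advance signalled in bit periods of ~3.69 us, maximum value 63).
C = 299_792_458            # speed of light, m/s
BIT_PERIOD_S = 3.69e-6     # one GSM bit period
MAX_TIMING_ADVANCE = 63    # largest value the base station can signal

# Each timing-advance step covers one bit period of round-trip delay,
# so the one-way distance per step is c * bit_period / 2.
metres_per_step = C * BIT_PERIOD_S / 2
max_range_km = MAX_TIMING_ADVANCE * metres_per_step / 1000

print(round(metres_per_step))    # ~553 m per timing-advance step
print(round(max_range_km, 1))    # ~34.8 km, i.e. the roughly 35 km limit quoted above
```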
If the mobile is beyond the 35 km cell range in GSM, the RACH transmission will arrive in a neighbouring time slot and be ignored. It is this feature, rather than limitations of power, that limits the range of a GSM cell to 35 km when no special extension techniques are used. By changing the synchronization between the uplink and downlink at the base station, this limitation can be overcome. Although most major 3G systems are based upon CDMA, time-division duplexing, packet scheduling and packet-oriented multiple access schemes are available in 3G form, combined with CDMA to take advantage of the benefits of both technologies. While the most popular form of the UMTS 3G system uses CDMA and frequency-division duplexing instead of TDMA, TDMA is combined with CDMA and time-division duplexing in the two standard UMTS UTRA TDD variants. The ITU-T G.hn standard, which provides high-speed local area networking over existing home wiring, is based on a TDMA scheme. In G.hn, a "master" device allocates "Contention-Free Transmission Opportunities" (CFTXOPs) to other "slave" devices in the network.
Only one device can use a CFTXOP at a time. The FlexRay protocol, a wired network used for safety-critical communication in modern cars, also uses the TDMA method for data transmission control. In radio systems, TDMA is usually used alongside frequency-division multiple access and frequency-division duplex; this is the case in both GSM and IS-136, for example. Exceptions to this include the DECT and Personal Handy-phone System micro-cellular systems, the UMTS-TDD variant of UMTS, and China's TD-SCDMA, which use time-division duplexing, where different time slots are allocated for the base station and handsets on the same frequency. A major advantage of TDMA is that the radio part of the mobile only needs to listen and broadcast for its own time slot. For the rest of the time, the mobile can carry out measurements on the network, detecting surrounding transmitters on different frequencies; this allows safe inter-frequency handovers, something that is difficult in CDMA systems, not supported at all in IS-95, and supported through complex system additions in the Universal Mobile Telecommunications System.
This in turn allows fo