An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, usage in 2017 was up to 70% for Google's Android. According to third-quarter 2016 data, Android on smartphones was dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a decrease in market share of 5.2 percent per year, while other operating systems amounted to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently; this is achieved by time-sharing, in which the available processor time is divided between multiple processes. Each of these processes is interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or cooperative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux, as well as non-Unix-like ones, such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to give time to the other processes in a defined manner; 16-bit versions of Microsoft Windows used cooperative multi-tasking.
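As a rough illustration of the cooperative model just described, the following Python sketch (a hypothetical example, not the scheduling code of any real operating system) uses generators as tasks that voluntarily yield control back to a simple round-robin scheduler; a task that never yields would stall every other task, which is exactly the weakness that preemptive multitasking removes.

    from collections import deque

    def task(name, steps):
        """A cooperative task: does a little work, then yields control."""
        for i in range(steps):
            print(f"{name}: step {i}")
            yield  # voluntarily hand the CPU back to the scheduler

    def run(tasks):
        """Round-robin scheduler: each task runs until it yields or finishes."""
        ready = deque(tasks)
        while ready:
            current = ready.popleft()
            try:
                next(current)          # resume the task
                ready.append(current)  # it yielded, so queue it again
            except StopIteration:
                pass                   # task finished; drop it

    run([task("A", 3), task("B", 2)])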
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with it at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system and then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines, such as PDAs, with less autonomy. They are able to operate with a limited number of resources and are compact and efficient by design. Windows CE and Minix 3 are examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or on external events, while time-sharing operating systems switch tasks based on clock interrupts.
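A minimal sketch of the event-driven idea follows, in Python and with invented task names (this is an illustration of priority-ordered dispatch, not the scheduler of any real RTOS): pending events carry a priority, and the dispatcher always services the most urgent event next rather than waiting for a clock tick.

    import heapq

    # Each queued entry is (priority, sequence, handler); lower number = higher priority.
    event_queue = []
    counter = 0

    def post_event(priority, handler):
        """Add an event to the queue; the dispatcher picks the most urgent one first."""
        global counter
        heapq.heappush(event_queue, (priority, counter, handler))
        counter += 1

    def dispatch():
        """Run handlers strictly in priority order until no events remain."""
        while event_queue:
            priority, _, handler = heapq.heappop(event_queue)
            handler()

    post_event(2, lambda: print("log sensor reading"))    # low urgency
    post_event(0, lambda: print("handle emergency stop"))  # highest urgency
    post_event(1, lambda: print("update display"))
    dispatch()  # runs: emergency stop, then display, then sensor reading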
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single address space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled the use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri
A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over cable media, such as wires or optical cables, or wireless media, such as Wi-Fi. Network computer devices that originate and terminate the data are called network nodes. Nodes are identified by network addresses and can include hosts, such as personal computers and servers, as well as networking hardware, such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other, more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers and fax machines, and the use of email and instant messaging applications, as well as many others.
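As an illustration of application-specific protocols layered over more general ones, the sketch below (Python standard library only; example.com is used purely as a placeholder host) sends a plain HTTP request, an application-level protocol, over a TCP socket, which in turn rides on IP and the underlying link; only the topmost layer changes from application to application.

    import socket

    # Application protocol (HTTP) layered over TCP, which is layered over IP.
    HOST = "example.com"  # placeholder host used only for illustration

    with socket.create_connection((HOST, 80), timeout=10) as sock:
        request = (
            "GET / HTTP/1.1\r\n"
            f"Host: {HOST}\r\n"
            "Connection: close\r\n"
            "\r\n"
        )
        sock.sendall(request.encode("ascii"))

        response = b""
        while chunk := sock.recv(4096):  # read until the server closes the connection
            response += chunk

    print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"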
Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, the traffic control mechanism and organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes: In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment. In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed and later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the ALOHA network developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of a gigabit per second. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls and video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows sharing of files, and
GNU General Public License
The GNU General Public License (GPL) is a widely used free software license which guarantees end users the freedom to run, study and modify the software. The license was written by Richard Stallman of the Free Software Foundation for the GNU Project and grants the recipients of a computer program the rights of the Free Software Definition. The GPL is a copyleft license, which means that derivative works can only be distributed under the same license terms. This is in distinction to permissive free software licenses, of which the BSD licenses and the MIT License are widely used examples. The GPL was the first copyleft license for general use, and the GPL license family has been one of the most popular software licenses in the free and open-source software domain. Prominent free-software programs licensed under the GPL include the Linux kernel and the GNU Compiler Collection. David A. Wheeler argues that the copyleft provided by the GPL was crucial to the success of Linux-based systems, giving the programmers who contributed to the kernel the assurance that their work would benefit the whole world and remain free, rather than being exploited by software companies that would not have to give anything back to the community.
In 2007, the third version of the license was released to address some perceived problems with the second version that were discovered during its long period of use. To keep the license up to date, it includes an optional "any version" clause, allowing users to choose between the original terms or the terms of new versions as updated by the FSF; developers can omit the clause. The GPL was written by Richard Stallman in 1989, for use with programs released as part of the GNU Project. The original GPL was based on a unification of similar licenses used for early versions of GNU Emacs, the GNU Debugger and the GNU C Compiler. These licenses contained provisions similar to those of the modern GPL, but were specific to each program, rendering them incompatible with one another despite containing essentially the same terms. Stallman's goal was to produce one license that could be used for any project, thus making it possible for many projects to share code. The second version of the license, version 2, was released in 1991. Over the following 15 years, members of the free software community became concerned over problems in the GPLv2 license that could let someone exploit GPL-licensed software in ways contrary to the license's intent.
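In practice, the "any version" clause appears in the per-file license notice that accompanies a program. The sketch below shows the conventional wording for a program offered under "GPLv2 or any later version", formatted here as a Python source comment; the copyright holder and year are placeholders.

    # Example license notice for "GPLv2 or any later version" (placeholder names):
    #
    # Copyright (C) 2024  Example Author
    #
    # This program is free software; you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation; either version 2 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    # GNU General Public License for more details.

Dropping the "or (at your option) any later version" phrase is how a developer omits the clause, leaving the program under that single version of the license only.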
These problems included tivoization, compatibility issues similar to those of the Affero General Public License, and patent deals between Microsoft and distributors of free and open-source software, which some viewed as an attempt to use patents as a weapon against the free software community. Version 3 was developed to attempt to address these concerns and was released on 29 June 2007. Version 1 of the GNU GPL, released on 25 February 1989, prevented the two main ways that software distributors restricted the freedoms that define free software. The first problem was that distributors might publish binary files only: executable, but not readable or modifiable by humans. To prevent this, GPLv1 stated that anyone copying and distributing copies or any portion of the program must also make the human-readable source code available under the same licensing terms. The second problem was that distributors might add restrictions, either to the license or by combining the software with other software that had other restrictions on distribution.
The union of two sets of restrictions would apply to the combined work, thus adding unacceptable restrictions. To prevent this, GPLv1 stated that modified versions, as a whole, had to be distributed under the terms in GPLv1. Therefore, software distributed under the terms of GPLv1 could be combined with software under more permissive terms, as this would not change the terms under which the whole could be distributed. However, software distributed under GPLv1 could not be combined with software distributed under a more restrictive license, as this would conflict with the requirement that the whole be distributable under the terms of GPLv1. According to Richard Stallman, the major change in GPLv2 was the "Liberty or Death" clause, as he calls it – Section 7; the section says that licensees may distribute a GPL-covered work only if they can satisfy all of the license's obligations, despite any other legal obligations they might have. In other words, the obligations of the license may not be severed due to conflicting obligations.
This provision is intended to discourage any party from using a patent infringement claim or other litigation to impair users' freedom under the license. By 1990, it was becoming apparent that a less restrictive license would be strategically useful for the C library and for software libraries that did the job of existing proprietary ones; the version numbers diverged in 1999 when version 2.1 of the LGPL was released, which renamed it the GNU Lesser General Public License to reflect its place in the philosophy. Most users of the license state "GPLv2 or any later version", to allow upgrading to GPLv3. In late 2005, the Free Software Foundation announced work on version 3 of the GPL. On 16 January 2006, the first "discussion draft" of GPLv3 was published and the public consultation began; the public consultation was planned for ni
A hotspot is a physical location where people may obtain Internet access using Wi-Fi technology, via a wireless local area network using a router connected to an Internet service provider. Public hotspots may be created by a business for use by customers, such as coffee shops and hotels. Public hotspots are created from wireless access points configured to provide Internet access, controlled to some degree by the venue. In its simplest form, a venue that has broadband Internet access can create public wireless access by configuring an access point (AP), in conjunction with a router, and connecting the AP to the Internet connection. A single wireless router combining these functions may suffice. Private hotspots may be configured on a smartphone or tablet with a mobile network data plan to allow Internet access to other devices, via Bluetooth pairing or if both the hotspot device and the device(s) accessing it are connected to the same Wi-Fi network. The public can use a laptop or other suitable portable device to access the wireless connection provided.
Of the estimated 150 million laptops, 14 million PDAs and other emerging Wi-Fi devices sold per year for the last few years, most include the Wi-Fi feature. The iPass 2014 interactive map, which shows data provided by the analysts Maravedis Rethink, shows that in December 2014 there were 46,000,000 hotspots worldwide and more than 22,000,000 roamable hotspots. More than 10,900 hotspots are on trains and in airports, and more than 8,500,000 are "branded" hotspots. The region with the largest number of public hotspots is Europe, followed by North America and Asia. Libraries throughout the United States are implementing hotspot lending programs to extend access to online library services to users at home who cannot afford in-home Internet access or do not have access to Internet infrastructure; the New York Public Library's was the largest such program. Similar programs have existed in Kansas and Oklahoma. Security is a serious concern in connection with hotspots. There are three possible attack scenarios. First, there is the wireless connection between the client and the access point, which needs to be encrypted so that the connection cannot be eavesdropped on or attacked by a man-in-the-middle attack.
Second, there is the hotspot itself: the WLAN encryption ends at the access point's interface, and the traffic then travels through its network stack unencrypted. Third, the traffic travels over the wired connection up to the BRAS of the ISP. Depending upon the set-up of a public hotspot, the provider of the hotspot has access to the metadata and content accessed by users of the hotspot. The safest method when accessing the Internet over a hotspot with unknown security measures is end-to-end encryption; examples of strong end-to-end encryption are HTTPS and SSH. Some hotspots authenticate users. Some vendors provide a download option, which conflicts with enterprise configurations that have solutions specific to their internal WLAN. In order to provide robust security to hotspot users, the Wi-Fi Alliance is developing a new hotspot program that aims to encrypt hotspot traffic with WPA2 security; the program was scheduled to launch in the first half of 2012. The Opportunistic Wireless Encryption standard provides encrypted communication in open Wi-Fi networks, alongside the WPA3 standard.
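As a small illustration of relying on end-to-end encryption rather than on the hotspot itself, the sketch below (Python standard library only; the URL is a placeholder) fetches a page over HTTPS with certificate verification left on, so the traffic stays encrypted from the client all the way to the web server regardless of what the access point or the ISP's wired segment does with it.

    import ssl
    import urllib.request

    URL = "https://example.com/"  # placeholder; any HTTPS site works the same way

    # The default context verifies the server certificate against the system trust
    # store, which is what protects against a man-in-the-middle on an untrusted hotspot.
    context = ssl.create_default_context()

    with urllib.request.urlopen(URL, context=context, timeout=10) as response:
        print(response.status, response.geturl())
        body = response.read()

    print(f"received {len(body)} bytes over an end-to-end encrypted connection")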
Public hotspots are found at airports, coffee shops, department stores, fuel stations, hospitals, public pay phones, restaurants, RV parks and campgrounds, train stations and other public places. Additionally, many schools and universities have wireless networks on their campuses. Free hotspots operate in two ways. Using an open public network is the easiest way to create a free hotspot; all that is needed is a Wi-Fi router. When users of private wireless routers turn off their authentication requirements, opening their connection intentionally or not, they permit piggybacking by anyone in range. Closed public networks use a hotspot management system to control access to hotspots; this software runs on the router itself or on an external computer, allowing operators to authorize only specific users to access the Internet. Providers of such hotspots associate the free access with a menu, membership, or purchase limit. Operators may limit each user's available bandwidth to ensure that everyone gets a good quality of service.
This is done through service-level agreements. A commercial hotspot may feature:
- a captive portal / login screen / splash page that users are redirected to for authentication and/or payment (the captive portal or splash page sometimes includes social login buttons);
- a payment option using a credit card, iPass, PayPal, or another payment service;
- a walled garden feature that allows free access to certain sites;
- service-oriented provisioning to allow for improved revenue;
- data analytics and data capture tools, to analyze and export data from Wi-Fi clients.
Many services provide payment services to hotspot providers, for a monthly fee or a commission from the end-user income. For example, Amazingports can be used to set up hotspots that intend to offer both fee-based and free Internet access, and ZoneCD is a Linux distribution that provides payment services for hotspot providers who wish to deploy their own service. Major airports and business hotels are more likely to charge for service, though most hotels provide free service to guests.
Retail shops, public ve
Wi-Fi is a technology for radio wireless local area networking of devices based on the IEEE 802.11 standards. Wi‑Fi is a trademark of the Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to products that successfully complete interoperability certification testing. Devices that can use Wi-Fi technologies include, among others, laptops, video game consoles and tablets, smart TVs, digital audio players, digital cameras and drones. Wi-Fi compatible devices can connect to the Internet via a wireless access point; such an access point has a range of about 20 meters indoors and a greater range outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves, or as large as many square kilometres achieved by using multiple overlapping access points. Different versions of Wi-Fi exist, with different radio bands and speeds. Wi-Fi most commonly uses the 2.4 gigahertz UHF and 5 gigahertz SHF ISM radio bands. Each channel can be time-shared by multiple networks.
These wavelengths work best for line-of-sight use. Many common materials absorb or reflect them, which further restricts range, but this tends to help minimise interference between different networks in crowded environments. At close range, some versions of Wi-Fi, running on suitable hardware, can achieve speeds of over 1 Gbit/s. Anyone within range with a wireless network interface controller can attempt to access a network. Wi-Fi Protected Access (WPA) is a family of technologies created to protect information moving across Wi-Fi networks and includes solutions for personal and enterprise networks. Security features of WPA have included stronger protections and new security practices as the security landscape has changed over time. In 1971, ALOHAnet connected the Hawaiian Islands with a UHF wireless packet network. ALOHAnet and the ALOHA protocol were early forerunners to Ethernet and the IEEE 802.11 protocols, respectively. A 1985 ruling by the U.S. Federal Communications Commission released the ISM band for unlicensed use.
These frequency bands are the same ones used by equipment such as microwave ovens and are subject to interference. In 1991, NCR Corporation with AT&T Corporation invented the precursor to 802.11, intended for use in cashier systems, under the name WaveLAN. The Australian radio-astronomer Dr John O'Sullivan, with his colleagues Terence Percival, Graham Daniels, Diet Ostry and John Deane, developed a key patent used in Wi-Fi as a by-product of a Commonwealth Scientific and Industrial Research Organisation (CSIRO) research project, "a failed experiment to detect exploding mini black holes the size of an atomic particle". Dr O'Sullivan and his colleagues are credited with inventing Wi-Fi. In 1992 and 1996, CSIRO obtained patents for a method used in Wi-Fi to "unsmear" the signal. The first version of the 802.11 protocol was released in 1997 and provided up to 2 Mbit/s link speeds. This was updated in 1999 with 802.11b to permit 11 Mbit/s link speeds, and this proved to be popular. In 1999, the Wi-Fi Alliance formed as a trade association to hold the Wi-Fi trademark under which most products are sold.
Wi-Fi uses a large number of patents held by many different organizations. In April 2009, 14 technology companies agreed to pay CSIRO $1 billion for infringements on CSIRO patents; this led to Australia labeling Wi-Fi as an Australian invention, though this has been the subject of some controversy. CSIRO won a further $220 million settlement for Wi-Fi patent infringements in 2012, with global firms in the United States required to pay CSIRO licensing rights estimated to be worth an additional $1 billion in royalties. In 2016, the wireless local area network Test Bed was chosen as Australia's contribution to the exhibition A History of the World in 100 Objects, held in the National Museum of Australia. The name Wi-Fi, commercially used at least as early as August 1999, was coined by the brand-consulting firm Interbrand. The Wi-Fi Alliance had hired Interbrand to create a name that was "a little catchier than 'IEEE 802.11b Direct Sequence'." Phil Belanger, a founding member of the Wi-Fi Alliance who presided over the selection of the name "Wi-Fi", has stated that Interbrand invented Wi-Fi as a pun on the word hi-fi, a term for high-quality audio technology.
Interbrand created the Wi-Fi logo. The yin-yang Wi-Fi logo indicates the certification of a product for interoperability. The Wi-Fi Alliance used the advertising slogan "The Standard for Wireless Fidelity" for a short time after the brand name was created. While inspired by the term hi-fi, the name was never "Wireless Fidelity", although the Wi-Fi Alliance was called the "Wireless Fidelity Alliance Inc" in some publications. Non-Wi-Fi technologies intended for fixed points, such as Motorola Canopy, are described as fixed wireless. Alternative wireless technologies include mobile phone standards such as 2G, 3G, 4G and LTE. The name is sometimes written as WiFi, Wifi, or wifi, but these are not approved by the Wi-Fi Alliance. IEEE is a separate, but related, organization and its website has stated "WiFi is a short name for Wireless Fidelity". To connect to a Wi-Fi LAN, a computer has to be equipped with a wireless network interface controller; the combination of a computer and an interface controller is called a station.
A service set is the set of all the devices associated with a particular Wi-Fi network. The service set can be local, extended or mesh; each service set has an associated identifier, the 32-byte Service Set Identifier, which identifies the partic
Dial-up Internet access
Dial-up Internet access is a form of Internet access that uses the facilities of the public switched telephone network to establish a connection to an Internet service provider by dialing a telephone number on a conventional telephone line. The user's computer or router uses an attached modem to encode and decode information into and from audio-frequency signals, respectively. In 1979, Tom Truscott and Steve Bellovin, graduates of Duke University, created an early predecessor to dial-up Internet access called USENET. USENET was a UNIX-based system that used a dial-up connection to transfer data through telephone modems. Dial-up Internet has been around since the 1980s via public providers such as NSFNET-linked universities and was first offered commercially in July 1992 by Sprint. Despite losing ground to broadband since the mid-2000s, dial-up is still used where other forms are not available or where the cost is too high, such as in some rural or remote areas. Dial-up connections to the Internet require no infrastructure other than the telephone network and the modems and servers needed to make and answer the calls.
Where telephone access is available, dial-up may be the only choice available for rural or remote areas, where broadband installations are not prevalent due to low population density and high infrastructure cost. Dial-up access may also be an alternative for users on limited budgets, as it is offered free by some ISPs, though broadband is available at lower prices in many countries due to market competition. Dial-up requires time to establish a telephone connection and perform configuration for protocol synchronization before data transfers can take place. In locales with telephone connection charges, each connection incurs an incremental cost. If calls are time-metered, the duration of the connection incurs costs. Dial-up access is a transient connection, because either the user, the ISP or the phone company terminates the connection. Internet service providers will set a limit on connection durations to allow sharing of resources and will disconnect the user, requiring reconnection and the costs and delays associated with it.
Technically inclined users find a way to disable the auto-disconnect program so that they can remain connected for days at a time. A 2008 Pew Research Center study stated that only 10% of US adults still used dial-up Internet access. The study found that users cited lack of infrastructure as a reason less often than stating that they would never upgrade to broadband. That number had fallen to 6% by 2010 and to 3% by 2013. The CRTC estimated that there were 336,000 Canadian dial-up users in 2010. Broadband Internet access via cable, digital subscriber line, satellite and FTTx has replaced dial-up access in many parts of the world. Broadband connections offer speeds of 700 kbit/s or higher for two-thirds more than the price of dial-up on average. In addition, broadband connections are always on, thus avoiding the need to connect and disconnect at the start and end of each session. Broadband does not require exclusive use of a phone line, and so one can access the Internet and at the same time make and receive voice phone calls without having a second phone line.
However, many rural areas still remain without high-speed Internet despite the eagerness of potential customers. This can be attributed to population, location, or sometimes ISPs' lack of interest due to little chance of profitability and the high costs of building the required infrastructure. Some dial-up ISPs have responded to the increased competition by lowering their rates and making dial-up an attractive option for those who want email access or basic web browsing. Dial-up Internet access has undergone a precipitous fall in usage and approaches extinction as modern users turn towards broadband. In contrast to the year 2000, when about 34% of the U.S. population used dial-up, this dropped to 3% in 2013. One contributing factor to the extinction of dial-up is the bandwidth requirements of newer computer programs, like antivirus software, which automatically download sizable updates in the background when a connection to the Internet is first made. These background downloads can take several minutes or longer and, until all updates are completed, they can impact the amount of bandwidth available to other applications like web browsers.
Since an "always on" broadband is the norm expected by most newer applications being developed, this automatic upload trend in the background is expected to continue to eat away at dial-up's available bandwidth to the detriment of dial-up users' applications. Many newer websites now assume broadband speeds as the norm and when confronted with slower dial-up speeds may drop these slower connections to free up communication resources. On websites that are designed to be more dial-up friendly, use of a reverse proxy prevents dial-ups from being dropped as but can introduce long wait periods for dial-up users caused by the buffering used by a reverse proxy to bridge the different data rates. Modern dial-up modems have a maximum theoretical transfer speed of 56 kbit/s, although in most cases, 40–50 kbit/s is the norm. Factors such as phone line noise as well as the quality of the modem itself play a large part in determining connection speeds; some connections may be as low as 20 kbit/s in noisy environments, such as in a hotel room where the phone line is shared with many extensions, or in a rural area, many miles from the phone exchange.
Other factors such as long loops, loading coils, pair gain, electric fences, digital loop carriers can slow con