Signal processing is a subfield of mathematics and electrical engineering that concerns the analysis and modification of signals, which are broadly defined as functions conveying "information about the behavior or attributes of some phenomenon", such as sound and biological measurements. For example, signal processing techniques are used to improve signal transmission fidelity, storage efficiency and subjective quality, and to emphasize or detect components of interest in a measured signal. According to Alan V. Oppenheim and Ronald W. Schafer, the principles of signal processing can be found in the classical numerical analysis techniques of the 17th century. Oppenheim and Schafer further state that the digital refinement of these techniques can be found in the digital control systems of the 1940s and 1950s. Analog signal processing is for signals that have not been digitized, as in legacy radio, telephone and television systems; it involves linear electronic circuits as well as non-linear ones. The former include, for instance, passive filters, active filters, additive mixers and delay lines.
Non-linear circuits include compandors, voltage-controlled filters, voltage-controlled oscillators and phase-locked loops. Continuous-time signal processing is for signals that vary continuously in time; its methods include time-domain, frequency-domain and complex-frequency-domain techniques. This field covers the modeling of linear time-invariant continuous systems, the integral of a system's zero-state response, setting up the system function, and the continuous-time filtering of deterministic signals. Discrete-time signal processing is for sampled signals, defined only at discrete points in time; as such they are quantized in time, but not in magnitude. Analog discrete-time signal processing is a technology based on electronic devices such as sample-and-hold circuits, analog time-division multiplexers, analog delay lines and analog feedback shift registers. This technology was a predecessor of digital signal processing and is still used in advanced processing of gigahertz signals. The concept of discrete-time signal processing also refers to a theoretical discipline that establishes a mathematical basis for digital signal processing, without taking quantization error into consideration.
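To make the time-versus-magnitude distinction concrete, the following Python sketch (with illustrative frequency and sampling-rate values) samples a continuous-time sinusoid at discrete instants: the resulting sequence is quantized in time, but its amplitudes remain at full numerical precision.

```python
import numpy as np

f = 5.0     # signal frequency in Hz (illustrative value)
fs = 100.0  # sampling rate in Hz, well above the Nyquist rate of 2*f

# Discrete sampling instants: the signal becomes quantized in time...
t = np.arange(0.0, 1.0, 1.0 / fs)
# ...but each sample keeps full floating-point precision, i.e. it is not
# quantized in magnitude; that step belongs to digitization proper.
x = np.sin(2 * np.pi * f * t)

print(len(x), "samples; first five:", x[:5])
```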
Digital signal processing is the processing of digitized discrete-time sampled signals. Processing is done by general-purpose computers or by digital circuits such as ASICs, field-programmable gate arrays or specialized digital signal processors. Typical arithmetical operations include fixed-point and floating-point, real-valued and complex-valued, multiplication and addition. Other typical operations supported by the hardware are circular buffers and lookup tables. Examples of algorithms are the fast Fourier transform (FFT), finite impulse response (FIR) filters, infinite impulse response (IIR) filters, and adaptive filters such as the Wiener and Kalman filters. Nonlinear signal processing involves the analysis and processing of signals produced by nonlinear systems and can be in the time, frequency, or spatio-temporal domains. Nonlinear systems can produce complex behaviors including bifurcations, chaos and subharmonics which cannot be produced or analyzed using linear methods. Statistical signal processing is an approach which treats signals as stochastic processes, utilizing their statistical properties to perform signal processing tasks.
Statistical techniques are used in many signal processing applications. For example, one can model the probability distribution of the noise incurred when photographing an image and construct techniques based on this model to reduce the noise in the resulting image. Application fields include:
Audio signal processing – for electrical signals representing sound, such as speech or music
Speech signal processing – for processing and interpreting spoken words
Image processing – in digital cameras and various imaging systems
Video processing – for interpreting moving pictures
Wireless communication – waveform generation, filtering, equalization
Control systems
Array processing – for processing signals from arrays of sensors
Process control – a variety of signals are used, including the industry-standard 4-20 mA current loop
Seismology
Financial signal processing – analyzing financial data using signal processing techniques for prediction purposes
Typical goals of signal processing include:
Feature extraction, such as image understanding and speech recognition
Quality improvement, such as noise reduction, image enhancement and echo cancellation
Compression, including audio compression, image compression and video compression
Genomics, as in genomic signal processing
In communication systems, signal processing may occur at OSI layer 1 of the seven-layer OSI model, the physical layer. Typical devices include:
Filters – for example analog or digital
Samplers and analog-to-digital converters for signal acquisition and reconstruction, which involves measuring a physical signal, storing or transferring it as a digital signal, and later rebuilding the original signal or an approximation thereof
Signal compressors
Digital signal processors
Mathematical methods applied in signal processing include:
Differential equations
Recurrence relations
Transform theory
Time-frequency analysis – for processing non-stationary signals
Spectral estimation – for determining the spectral content of a signal
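As a minimal sketch tying several of the building blocks above together (a statistically modeled noise source, an FIR filter, and Welch spectral estimation), the following Python example uses numpy and scipy; the tone frequency, noise level and 21-tap moving-average filter are arbitrary illustrative choices, not a prescribed design.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1000.0                     # sampling rate in Hz (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)

# Statistical noise model: a clean 50 Hz tone corrupted by Gaussian noise.
clean = np.sin(2 * np.pi * 50.0 * t)
noisy = clean + rng.normal(scale=0.5, size=t.size)

# FIR filtering: a 21-tap moving average, the simplest FIR low-pass filter.
taps = np.ones(21) / 21.0
denoised = signal.lfilter(taps, 1.0, noisy)

# Spectral estimation: Welch's method estimates the power spectral density,
# and its peak recovers the tone frequency from the noisy observation.
freqs, psd = signal.welch(noisy, fs=fs, nperseg=256)
print("PSD peak near", freqs[np.argmax(psd)], "Hz")  # close to 50 Hz
```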
Border controls are measures taken by a country or a bloc of countries to monitor its borders in order to regulate the movement of people and goods. States and rulers have always regarded the ability to determine who enters or remains in their territories as a key test of their sovereignty, but prior to World War I, border controls were only sporadically implemented. In medieval Europe, for example, the boundaries between rival countries and centres of power were symbolic or consisted of amorphous borderlands, ‘marches’ and ‘debatable lands’ of indeterminate or contested status. The real ‘borders’ consisted of the fortified walls that surrounded towns and cities, where the authorities could exclude undesirable or incompatible people at the gates, from vagrants and the ‘wandering poor’ to ‘masterless women’, Gypsies or Jews. The concept of a travel document such as a passport needed to clear border controls in the modern sense has been traced back to the reign of Henry V of England, as a means of helping his subjects prove who they were in foreign lands.
The earliest reference to these documents is found in a 1414 Act of Parliament. In 1540, granting travel documents in England became a role of the Privy Council of England, and it was around this time that the term "passport" came into use. In 1794, issuing British passports became the job of the Office of the Secretary of State. Earlier, the 1548 Imperial Diet of Augsburg had required the public to hold imperial documents for travel, at the risk of permanent exile. During World War I, European governments introduced border passport requirements for security reasons and to control the emigration of people with useful skills; these controls remained in place after the war, becoming a standard, though controversial, procedure. British tourists of the 1920s complained about attached photographs and physical descriptions, which they considered led to a "nasty dehumanisation". One of the earliest systematic attempts by a modern nation state to implement border controls restricting the entry of particular groups was the Chinese Exclusion Act of 1882 in the United States.
This act aimed to implement discriminatory immigration controls on East Asians. The strict and racist border control policies, which lasted for about thirty years, had a negative impact not only on the Chinese but on whites and other groups as well, and the American economy suffered a great loss as a result of the Act. The Act was also a mark of injustice and unfair treatment of Chinese workers, as the jobs they engaged in were mostly menial. A similarly discriminatory approach to border control was taken in Canada through the Chinese Immigration Act of 1885, imposing what came to be called the Chinese head tax. Decolonisation during the twentieth century saw the emergence of mass emigration from nations in the Global South, leading former colonial occupiers to introduce stricter border controls. In the United Kingdom this process took place in stages, with British nationality law shifting from recognising all Commonwealth citizens as British subjects to today's complex law, which distinguishes between British citizens, British subjects in the modern sense, British Overseas citizens and British Nationals (Overseas), with each non-standard category created in an attempt to balance border control with the need to mitigate statelessness.
This aspect of the rise of border control in the 20th century has proven controversial. The British Nationality Act 1981 has been criticised by experts, as well as by the United Nations Committee on the Elimination of Racial Discrimination, on the grounds that the different classes of British nationality it created are, in fact, related to the ethnic origins of their holders. The creation of British National (Overseas) status, for instance, was met with criticism from many Hong Kong residents who felt that British citizenship would have been more appropriate in light of the "moral debt" owed to them by the UK, and some British politicians and magazines likewise criticised the creation of BN(O) status. Ethnic tensions created during colonial occupation also resulted in discriminatory policies being adopted in newly independent African nations, such as Uganda under Idi Amin, which expelled Asians from Uganda, creating a mass exodus of the country's Asian community. Such ethnically driven border control policies took forms ranging from anti-Asian sentiment in East Africa to the Apartheid policies of South Africa and Namibia, which created bantustans and pass laws to segregate and impose border controls against non-whites, and encouraged immigration of whites at the expense of Blacks as well as Indians and other Asians.
Whilst border controls in Europe and east of the Pacific have tightened over time, they have been liberalised in Africa, from Yoweri Museveni's reversal of Idi Amin's anti-Asian border controls to the fall of Apartheid in South Africa. The development of border control policies over the course of the 20th century saw the standardisation of refugee travel documents under the Convention Relating to the Status of Refugees of 1951, and of the 1954 Convention travel document for stateless people under the similar 1954 statelessness convention. There are multiple aspects of border control. Quarantine policies exist to control the spread of disease; when applied as a component of border control, such policies focus on mitigating the entry of infected individuals, plants, or animals into a country. Each country also has its own laws and regulations for the import and export of goods into and out of the country, which its customs authority enforces.
Health care or healthcare is the maintenance or improvement of health via the prevention and treatment of disease, illness and other physical and mental impairments in people. Health care is delivered by health professionals in allied health fields; physicians and physician associates are among these health professionals. Dentistry, nursing, optometry, pharmacy, occupational therapy, physical therapy and other health professions are all part of health care. It includes work done in providing primary care, secondary care and tertiary care, as well as in public health. Access to health care may vary across countries and individuals, influenced by social and economic conditions as well as health policies. Health care systems are organizations established to meet the health needs of targeted populations. According to the World Health Organization, a well-functioning health care system requires a financing mechanism, a well-trained and adequately paid workforce, reliable information on which to base decisions and policies, and well-maintained health facilities to deliver quality medicines and technologies.
An efficient health care system can contribute a significant part of a country's economy and industrialization. Health care is conventionally regarded as an important determinant in promoting the general physical and mental health and well-being of people around the world. An example of this was the worldwide eradication of smallpox in 1980, declared by the WHO as the first disease in human history to be eliminated by deliberate health care interventions. The delivery of modern health care depends on groups of trained professionals and paraprofessionals coming together as interdisciplinary teams. This includes professionals in medicine, physiotherapy, dentistry and allied health, along with many others such as public health practitioners, community health workers and assistive personnel, who systematically provide personal and population-based preventive and rehabilitative care services. While the definitions of the various types of health care vary depending on the different cultural, political and disciplinary perspectives, there appears to be some consensus that primary care constitutes the first element of a continuing health care process and may include the provision of secondary and tertiary levels of care.
Health care can be defined as either public or private. Primary care refers to the work of health professionals who act as a first point of consultation for all patients within the health care system. Such a professional would be a primary care physician, such as a general practitioner or family physician; another would be a licensed independent practitioner such as a physiotherapist, or a non-physician primary care provider such as a physician assistant or nurse practitioner. Depending on the locality and health system organization, the patient may see another health care professional first, such as a pharmacist or nurse. Depending on the nature of the health condition, patients may then be referred for secondary or tertiary care. Primary care is the term for the health care services that play a role in the local community. It can be provided in different settings, such as urgent care centers which provide same-day appointments or services on a walk-in basis. Primary care involves the widest scope of health care, including all ages of patients, patients of all socioeconomic and geographic origins, patients seeking to maintain optimal health, and patients with all types of acute and chronic physical and social health issues, including multiple chronic diseases.
A primary care practitioner must possess a wide breadth of knowledge in many areas. Continuity is a key characteristic of primary care, as patients prefer to consult the same practitioner for routine check-ups, preventive care and health education, and every time they require an initial consultation about a new health problem. The International Classification of Primary Care is a standardized tool for understanding and analyzing information on interventions in primary care based on the reason for the patient's visit. Common chronic illnesses treated in primary care include, for example, hypertension, asthma, COPD, depression and anxiety, back pain, arthritis and thyroid dysfunction. Primary care also includes many basic maternal and child health care services, such as family planning services and vaccinations. In the United States, the 2013 National Health Interview Survey found that skin disorders, joint disorders, back problems, disorders of lipid metabolism and upper respiratory tract disease were the most common reasons for accessing a physician.
In the United States, primary care physicians have begun to deliver primary care outside of the managed care system through direct primary care, a subset of the more familiar concierge medicine. Physicians in this model bill patients directly for services, either on a pre-paid monthly, quarterly, or annual basis, or bill for each service in the office. Examples of direct primary care practices include Foundation Health in Colorado and Qliance in Washington. In the context of global population aging, with increasing numbers of older adults at greater risk of chronic non-communicable diseases, increasing demand for primary care services is expected in both developed and developing countries. The World Health Organization regards the provision of essential primary care as an integral component of an inclusive primary health care strategy. Secondary care includes acute care: necessary treatment for a short period of time for a brief but serious illness, injury, or other health condition.
In the broadest definition, a sensor is a device, module, or subsystem whose purpose is to detect events or changes in its environment and send the information to other electronics, frequently a computer processor. A sensor is always used together with other electronics. Sensors are used in everyday objects such as touch-sensitive elevator buttons and lamps which dim or brighten by touching the base, besides innumerable applications of which most people are never aware. With advances in micromachinery and easy-to-use microcontroller platforms, the uses of sensors have expanded beyond the traditional fields of temperature, pressure or flow measurement, for example into MARG sensors. Moreover, analog sensors such as potentiometers and force-sensing resistors are still widely used. Applications include manufacturing and machinery, aerospace, medicine and many other aspects of day-to-day life. A sensor's sensitivity indicates how much the sensor's output changes when the input quantity being measured changes. For instance, if the mercury in a thermometer moves 1 cm when the temperature changes by 1 °C, the sensitivity is 1 cm/°C.
Some sensors can affect what they measure; for this reason, sensors are designed to have as small an effect as possible on what is measured. Technological progress allows more and more sensors to be manufactured on a microscopic scale as microsensors using MEMS technology. In most cases, a microsensor reaches a higher speed and sensitivity compared with macroscopic approaches. A good sensor obeys the following rules: it is sensitive to the measured property; it is insensitive to any other property likely to be encountered in its application; and it does not influence the measured property. Most sensors have a linear transfer function; the sensitivity is then defined as the ratio between the output signal and the measured property. For example, if a sensor measures temperature and has a voltage output, the sensitivity is a constant with the units [V/K], and it is the slope of the transfer function. Converting the sensor's electrical output to the measured units requires dividing the electrical output by the slope. In addition, an offset is added or subtracted.
For example, -40 must be added to the output if an output of 0 V corresponds to a -40 °C input. For an analog sensor signal to be processed or used in digital equipment, it needs to be converted to a digital signal using an analog-to-digital converter. Since sensors cannot replicate an ideal transfer function, several types of deviations can occur which limit sensor accuracy. Since the range of the output signal is always limited, the output signal will reach a minimum or maximum when the measured property exceeds the limits; the full scale range defines the maximum and minimum values of the measured property. The sensitivity may in practice differ from the value specified; this is called a sensitivity error, and it is an error in the slope of a linear transfer function. If the output signal differs from the correct value by a constant, the sensor has an offset error or bias; this is an error in the y-intercept of a linear transfer function. Nonlinearity is the deviation of a sensor's transfer function from a straight-line transfer function; it is defined by the amount the output differs from ideal behavior over the full range of the sensor, often noted as a percentage of the full range.
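A minimal sketch of the conversion just described, and of how sensitivity and offset errors distort readings, assuming a hypothetical linear temperature sensor with a sensitivity of 0.010 V/°C whose 0 V output corresponds to -40 °C (all values are illustrative, not a real device's datasheet):

```python
# Hypothetical linear temperature sensor (illustrative values only):
# sensitivity 0.010 V/degC, with 0 V output corresponding to -40 degC.
SLOPE = 0.010    # V per degC: the slope of the transfer function
OFFSET = -40.0   # degC at 0 V output

def to_celsius(volts: float) -> float:
    # Invert the transfer function: divide by the slope, then add the offset.
    return volts / SLOPE + OFFSET

print(to_celsius(0.4))   # 0.0 degC
print(to_celsius(1.0))   # 60.0 degC

# A sensitivity error scales the slope; an offset error (bias) shifts
# every reading by a constant amount.
def faulty_output(true_temp: float, slope_err: float = 1.02,
                  bias: float = 0.005) -> float:
    return (true_temp - OFFSET) * SLOPE * slope_err + bias

print(to_celsius(faulty_output(25.0)))  # reads high: about 26.8 degC
```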
Deviation caused by rapid changes of the measured property over time is a dynamic error. This behavior is described with a Bode plot showing sensitivity error and phase shift as functions of the frequency of a periodic input signal. If the output signal changes independently of the measured property, this is defined as drift; long-term drift over months or years is caused by physical changes in the sensor. Noise is a random deviation of the signal. A hysteresis error causes the output value to vary depending on the previous input values: if a sensor's output differs depending on whether a specific input value was reached by increasing or by decreasing the input, the sensor has a hysteresis error. If the sensor has a digital output, the output is essentially an approximation of the measured property; this error is called quantization error. If the signal is monitored digitally, the sampling frequency can cause a dynamic error, and if the input variable or added noise changes periodically at a frequency near a multiple of the sampling rate, aliasing errors may occur.
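The last two error types are easy to demonstrate numerically. In this Python sketch (frequencies and grid coarseness chosen purely for illustration), a 90 Hz input sampled at 100 Hz is indistinguishable from a sign-inverted 10 Hz tone, and rounding to a coarse output grid exhibits quantization error:

```python
import numpy as np

fs = 100.0                        # sampling rate in Hz (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)

# Aliasing: 90 Hz exceeds the Nyquist frequency fs/2 = 50 Hz, so at these
# sample instants it coincides exactly with a sign-inverted 10 Hz tone.
tone_90 = np.sin(2 * np.pi * 90.0 * t)
tone_10 = np.sin(2 * np.pi * 10.0 * t)
print("max sample difference:", np.max(np.abs(tone_90 + tone_10)))  # ~0

# Quantization error: rounding to a coarse seven-level (~3-bit) grid;
# the error is bounded by half of one quantization step.
step = 1.0 / 3.0                  # grid spacing for levels -1, -2/3, ..., 1
quantized = np.round(tone_10 / step) * step
print("max quantization error:", np.max(np.abs(quantized - tone_10)))
```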
The sensor may to some extent also be sensitive to properties other than the property being measured. For example, most sensors are influenced by the temperature of their environment. All these deviations can be classified as systematic errors or random errors. Systematic errors can sometimes be compensated for by means of some kind of calibration strategy. Noise is a random error that can be reduced by signal processing, such as filtering, usually at the expense of the dynamic behavior of the sensor. The resolution of a sensor is the smallest change it can detect in the quantity that it is measuring; the resolution of a sensor with a digital output is the resolution of the digital output. The resolution is related to the precision with which the measurement is made.
A wireless network is a computer network that uses wireless data connections between network nodes. Wireless networking is a method by which homes, telecommunications networks and business installations avoid the costly process of introducing cables into a building or between various equipment locations. Wireless telecommunications networks are generally implemented and administered using radio communication; this implementation takes place at the physical level of the OSI model network structure. Examples of wireless networks include cell phone networks, wireless local area networks, wireless sensor networks, satellite communication networks and terrestrial microwave networks. The first professional wireless network was developed under the name ALOHAnet in 1969 at the University of Hawaii and became operational in June 1971. The first commercial wireless network was the WaveLAN product family, developed by NCR in 1986. Later milestones include the first 2G cell phone network in 1991, the first release of the 802.11 "Wi-Fi" protocol in June 1997, and 802.11 VoIP integration in 1999. Terrestrial microwave – terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes.
Terrestrial microwaves are in the low gigahertz range, which limits all communications to line-of-sight; relay stations are spaced approximately 48 km apart. Communications satellites – satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically in geosynchronous orbit 35,400 km above the equator. These Earth-orbiting systems are capable of receiving and relaying voice and TV signals. Cellular and PCS systems use several radio communications technologies; the systems divide the region covered into multiple geographic areas, and each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next. Radio and spread spectrum technologies – wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.
Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices. Wireless personal area networks (WPANs) connect devices within a small area, within a person's reach. For example, both Bluetooth radio and invisible infrared light provide a WPAN for interconnecting a headset to a laptop. ZigBee also supports WPAN applications. Wi-Fi PANs are becoming commonplace as equipment designers start to integrate Wi-Fi into a variety of consumer electronic devices; Intel "My WiFi" and Windows 7 "virtual Wi-Fi" capabilities have made Wi-Fi PANs simpler and easier to set up and configure. A wireless local area network links two or more devices over a short distance using a wireless distribution method, usually providing a connection through an access point for internet access. The use of spread-spectrum or OFDM technologies may allow users to move around within a local coverage area and still remain connected to the network.
Products using the IEEE 802.11 WLAN standards are marketed under the Wi-Fi brand name. Fixed wireless technology implements point-to-point links between computers or networks at two distant locations, using dedicated microwave or modulated laser light beams over line-of-sight paths. It is often used in cities to connect networks in two or more buildings without installing a wired link. To connect to Wi-Fi, devices such as a router are sometimes used, or a hotspot connection can be shared from a mobile smartphone. A wireless ad hoc network, also known as a wireless mesh network or mobile ad hoc network, is a wireless network made up of radio nodes organized in a mesh topology; each node forwards messages on behalf of the other nodes and each node performs routing. Ad hoc networks can "self-heal", automatically re-routing around a failed node. Various network layer protocols are needed to realize ad hoc mobile networks, such as Destination-Sequenced Distance Vector routing, Associativity-Based Routing, Ad hoc On-demand Distance Vector routing and Dynamic Source Routing.
Wireless metropolitan area networks are a type of wireless network that connects several wireless LANs; WiMAX, described by the IEEE 802.16 standard, is one example. Wireless wide area networks are wireless networks that cover large areas, such as between neighbouring towns and cities, or a city and its suburbs. These networks can be used to connect branch offices of a business or as a public Internet access system. The wireless connections between access points are usually point-to-point microwave links using parabolic dishes on the 2.4 GHz and 5.8 GHz bands, rather than the omnidirectional antennas used with smaller networks. A typical system contains access points and wireless bridging relays. Other configurations are mesh systems; when combined with renewable energy systems such as photovoltaic solar panels or wind systems, they can be stand-alone systems. A cellular network or mobile network is a radio network distributed over land areas called cells, each served by at least one fixed-location transceiver, known as a cell site or base station.
In a cellular network, each cell characteristically uses a different set of radio frequencies from its immediate neighbouring cells, to avoid any interference. When joined together, these cells provide radio coverage over a wide geographic area; this enables a large number of portable transceivers (e.g. mobile phones) to communicate with each other and with fixed transceivers and telephones anywhere in the network.
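A minimal sketch of this frequency-reuse idea, assuming an idealized hexagonal cell layout in axial coordinates and a reuse factor of 3 (real networks use far more elaborate radio planning):

```python
# Idealized hexagonal cell grid in axial coordinates (q, r).
# With a reuse factor of 3, assigning each cell the frequency group
# (q + 2r) mod 3 guarantees that no two adjacent cells share a group.

NEIGHBOR_OFFSETS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def frequency_group(q: int, r: int) -> int:
    return (q + 2 * r) % 3

# Verify the no-shared-group property on a small patch of the grid.
for q in range(-3, 4):
    for r in range(-3, 4):
        for dq, dr in NEIGHBOR_OFFSETS:
            assert frequency_group(q, r) != frequency_group(q + dq, r + dr)
print("no two neighbouring cells use the same frequency group")
```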
Environmental monitoring describes the processes and activities that need to take place to characterise and monitor the quality of the environment. Environmental monitoring is used in the preparation of environmental impact assessments, as well as in many circumstances in which human activities carry a risk of harmful effects on the natural environment. All monitoring strategies and programmes have reasons and justifications, which are often designed to establish the current status of an environment or to establish trends in environmental parameters. In all cases, the results of monitoring will be analysed statistically and published; the design of a monitoring programme must therefore have regard to the final use of the data before monitoring starts. Air pollutants are atmospheric substances—both naturally occurring and anthropogenic—which may have a negative impact on the environment and organism health. With the evolution of new chemicals and industrial processes has come the introduction or elevation of pollutants in the atmosphere, as well as environmental research and regulations, increasing the demand for air quality monitoring.
Air quality monitoring is challenging to enact as it requires the effective integration of multiple environmental data sources, which often originate from different environmental networks and institutions. These challenges require specialized observation equipment and tools to establish air pollutant concentrations, including sensor networks, geographic information system models, and the Sensor Observation Service, a web service for querying real-time sensor data. Air dispersion models that combine topographic and meteorological data to predict air pollutant concentrations are helpful in interpreting air monitoring data. Additionally, consideration of anemometer data in the area between sources and the monitor provides insights on the source of the air contaminants recorded by an air pollution monitor. Air quality monitors are operated by citizens, regulatory agencies and researchers to investigate air quality and the effects of air pollution. Interpretation of ambient air monitoring data involves a consideration of the spatial and temporal representativeness of the data gathered, and of the health effects associated with exposure to the monitored levels.
If the interpretation reveals concentrations of multiple chemical compounds, a unique "chemical fingerprint" of a particular air pollution source may emerge from analysis of the data. Passive or "diffusive" air sampling depends on meteorological conditions such as wind to diffuse air pollutants to a sorbent medium. Passive samplers have the advantage of being small and easy to deploy, and they are particularly useful in air quality studies that determine key areas for future continuous monitoring. Air pollution can also be assessed by biomonitoring with organisms that bioaccumulate air pollutants, such as lichens, mosses and other biomass. One of the benefits of this type of sampling is that quantitative information can be obtained via measurements of accumulated compounds, representative of the environment from which they came. However, careful consideration must be given to choosing the particular organism, how it is dispersed, and its relevance to the pollutant. Other sampling methods include the use of denuders, needle trap devices and microextraction techniques.
Soil monitoring involves the collection and/or analysis of soil and its associated quality and physical status to determine or guarantee its fitness for use. Soil faces many threats, including compaction, organic material loss, biodiversity loss, slope stability issues, erosion and acidification. Soil monitoring helps characterize these and other potential risks to the soil, surrounding environments, animal health and human health. Assessing these and other risks to soil can be challenging due to a variety of factors, including soil's heterogeneity and complexity, the scarcity of toxicity data, a lack of understanding of a contaminant's fate, and variability in soil screening levels. This requires a risk assessment approach and analysis techniques that prioritize environmental protection, risk reduction and, if necessary, remediation methods. Soil monitoring plays a significant role in that risk assessment, not only aiding in the identification of at-risk and affected areas but also in the establishment of baseline background values for soil.
Soil monitoring has historically focused on more classical conditions and contaminants, including toxic elements and persistent organic pollutants. Testing for these and other aspects of soil has had its own set of challenges, as sampling in most cases is destructive in nature, requiring multiple samples over time. Additionally, analytical errors may be introduced due to variability among references and methods over time. However, as analytical techniques evolve and new knowledge about ecological processes and contaminant effects disseminates, the focus of monitoring will likely broaden over time and the quality of monitoring will continue to improve. The two primary types of soil sampling are grab sampling and composite sampling. Grab sampling involves the collection of an individual sample at a specific time and place, while composite sampling involves the collection of a homogenized mixture of multiple individual samples, taken either at a specific place over different times or at multiple locations at a specific time. Soil sampling may occur both at shallow ground levels or deep in the ground, with collection methods varying by the level collected from.
Scoops, core barrel and solid-tube samplers, and other tools are used at shallow ground levels, whereas split-tube, solid-tube, or hydraulic methods may be used at greater depths.
Internet access is the ability of individuals and organizations to connect to the Internet using computer terminals and other devices. Internet access is sold by Internet service providers delivering connectivity at a wide range of data transfer rates via various networking technologies. Many organizations, including a growing number of municipal entities, also provide cost-free wireless access. Availability of Internet access was once limited, but has grown rapidly. In 1995, only 0.04 percent of the world's population had access, with well over half of those living in the United States, and consumer use was through dial-up. By the first decade of the 21st century, many consumers in developed nations used faster broadband technology, and by 2014, 41 percent of the world's population had access, broadband was ubiquitous worldwide, and global average connection speeds exceeded one megabit per second. The Internet developed from the ARPANET, funded by the US government to support projects within the government and at universities and research laboratories in the US, but it grew over time to include most of the world's large universities and the research arms of many technology companies.
Use by a wider audience only came in 1995, when restrictions on the use of the Internet to carry commercial traffic were lifted. In the early to mid-1980s, most Internet access was from personal computers and workstations directly connected to local area networks or from dial-up connections using modems and analog telephone lines. LANs typically operated at 10 Mbit/s, while modem data rates grew from 1200 bit/s in the early 1980s to 56 kbit/s by the late 1990s. Dial-up connections were initially made from terminals or computers running terminal emulation software to terminal servers on LANs; these dial-up connections did not support end-to-end use of the Internet protocols and only provided terminal-to-host connections. The introduction of network access servers supporting the Serial Line Internet Protocol and the Point-to-Point Protocol extended the Internet protocols and made the full range of Internet services available to dial-up users. Broadband Internet access, often shortened to just broadband, is defined as "Internet access, always on, faster than the traditional dial-up access" and so covers a wide range of technologies.
Broadband connections are typically made using a computer's built-in Ethernet networking capabilities, or by using a NIC expansion card. Most broadband services provide a continuous "always on" connection. Broadband provides improved access to Internet services such as faster World Wide Web browsing; faster downloading of documents, photographs and other large files; telephony, radio and videoconferencing; virtual private networks and remote system administration; and online gaming, especially massively multiplayer online role-playing games, which are interaction-intensive. In the 1990s, the National Information Infrastructure initiative in the U.S. made broadband Internet access a public policy issue. In 2000, most Internet access to homes was provided using dial-up, while many businesses and schools were using broadband connections. In 2000 there were just under 150 million dial-up subscriptions in the 34 OECD countries and fewer than 20 million broadband subscriptions. By 2004, broadband had grown and dial-up had declined, so that the number of subscriptions was equal at 130 million each.
In 2010, in the OECD countries, over 90% of Internet access subscriptions used broadband: broadband had grown to more than 300 million subscriptions, while dial-up subscriptions had declined to fewer than 30 million. The broadband technologies in widest use are ADSL and cable Internet access. Newer technologies include VDSL and optical fibre extended closer to the subscriber in both telephone and cable plants. Fibre-optic communication, while only recently being used in fibre-to-the-premises and fibre-to-the-curb schemes, has played a crucial role in enabling broadband Internet access by making transmission of information at high data rates over longer distances much more cost-effective than copper wire technology. In areas not served by ADSL or cable, some community organizations and local governments are installing Wi-Fi networks. Wireless and satellite Internet are often used in rural, undeveloped, or other hard-to-serve areas where wired Internet is not available. Newer technologies being deployed for fixed and mobile broadband access include WiMAX, LTE and fixed wireless, e.g. Motorola Canopy.
Starting in 2006, mobile broadband access has been available at the consumer level through "3G" and "4G" technologies such as HSPA, EV-DO, HSPA+ and LTE. In addition to access from home and the workplace, Internet access may be available from public places such as libraries and Internet cafes, where computers with Internet connections are available. Some libraries provide stations for physically connecting users' laptops to local area networks. Wireless Internet access points are available in public places such as airport halls, in some cases just for brief use while standing. Some access points may also provide coin-operated computers. Various terms are used, such as "public Internet kiosk", "public access terminal" and "Web payphone". Many hotels also have public terminals, usually fee-based. Coffee shops, shopping malls and other venues offer wireless access to computer networks, referred to as hotspots, for users who bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based.