An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although application code is executed directly by the hardware and either makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android accounted for up to 70% of use in 2017; according to third-quarter 2016 data, Android on smartphones was dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a 5.2 percent per-year decrease in market share, while other operating systems amounted to just 0.3 percent.
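To make the system-call relationship concrete, here is a minimal sketch in Python, whose os module exposes thin wrappers around the underlying kernel calls; the filename is illustrative only:

```python
import os

# Application code runs directly on the CPU, but asks the kernel to perform
# privileged work (here, file I/O) through system calls.
fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT, 0o644)  # open(2)
os.write(fd, b"written via a system call\n")                   # write(2)
os.close(fd)                                                   # close(2)
```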
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently; this is achieved by time-sharing, in which the available processor time is divided between multiple processes. Each of these processes is interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux, as well as non-Unix-like ones, such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to yield time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking, while 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking.
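As a rough illustration of preemptive time slicing, the following is a toy round-robin simulation in Python, not how any real kernel is implemented; the task names and tick counts are invented:

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate preemptive multitasking: run each task for at most one
    quantum, then interrupt it and move on to the next ready task."""
    ready = deque(tasks)                  # entries are (name, remaining ticks)
    while ready:
        name, remaining = ready.popleft()
        ran = min(quantum, remaining)
        print(f"{name} runs for {ran} tick(s)")
        if remaining > ran:
            ready.append((name, remaining - ran))  # preempted: back of queue
        else:
            print(f"{name} finished")

round_robin([("A", 5), ("B", 2), ("C", 4)], quantum=2)
```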
Single-user operating systems have no facilities to distinguish users but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and made to communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In a distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system and then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines with less autonomy, such as PDAs. Able to operate with a limited number of resources, they are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
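A minimal sketch of the event-driven case follows, using a Python priority queue; the priorities and task names are hypothetical. The point is only that the next task is chosen by priority when an event arrives, rather than by a clock interrupt:

```python
import heapq

ready = []  # min-heap of (priority, task); lower number = higher priority

def on_event(priority, task):
    """An external event makes a task ready to run."""
    heapq.heappush(ready, (priority, task))

on_event(2, "log sensor data")
on_event(0, "handle emergency stop")   # arrives later but runs first
on_event(1, "update display")

while ready:
    priority, task = heapq.heappop(ready)
    print(f"running {task} (priority {priority})")
```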
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri
Global Positioning System
The Global Positioning System, also known as Navstar GPS, is a satellite-based radionavigation system owned by the United States government and operated by the United States Air Force. It is a global navigation satellite system that provides geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. Obstacles such as mountains and buildings block the weak GPS signals. The GPS does not require the user to transmit any data, and it operates independently of any telephonic or internet reception, though these technologies can enhance the usefulness of the GPS positioning information. The GPS provides critical positioning capabilities to military and commercial users around the world; the United States government created the system, maintains it, and makes it accessible to anyone with a GPS receiver. The GPS project was launched by the U.S. Department of Defense in 1973 for use by the United States military and became operational in 1995.
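The four-satellite requirement mentioned above follows from the unknowns involved: three coordinates of position plus the receiver's clock error. The following is a didactic Gauss-Newton sketch in Python with synthetic satellite positions and pseudoranges, not real ephemeris or signal handling:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sats, pseudoranges, iters=10):
    """Gauss-Newton estimate of receiver position (x, y, z) and clock
    bias b from at least four satellite positions and pseudoranges."""
    x = np.zeros(4)                                   # [x, y, z, b]
    for _ in range(iters):
        d = np.linalg.norm(sats - x[:3], axis=1)      # geometric ranges
        residual = pseudoranges - (d + C * x[3])
        J = np.hstack([-(sats - x[:3]) / d[:, None],  # d(range)/d(position)
                       np.full((len(sats), 1), C)])   # d/d(clock bias)
        x += np.linalg.lstsq(J, residual, rcond=None)[0]
    return x

# Synthetic scene: four satellites roughly 20,200 km up, a receiver at a
# made-up location, and a 1 ms receiver clock error.
sats = np.array([[20.2e6, 0, 0], [0, 20.2e6, 0],
                 [0, 0, 20.2e6], [12e6, 12e6, 12e6]], float)
receiver, bias = np.array([1.2e6, -0.3e6, 0.5e6]), 1e-3
rho = np.linalg.norm(sats - receiver, axis=1) + C * bias
print(solve_position(sats, rho))  # ≈ [1.2e6, -0.3e6, 0.5e6, 1e-3]
```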
It was allowed for civilian use in the 1980s. Advances in technology and new demands on the existing system have led to efforts to modernize the GPS and implement the next generation of GPS Block IIIA satellites and the Next Generation Operational Control System. Announcements from Vice President Al Gore and the White House in 1998 initiated these changes, and in 2000 the U.S. Congress authorized the modernization effort, GPS III. During the 1990s, GPS quality was degraded by the United States government in a program called "Selective Availability". The GPS is provided by the United States government, which can selectively deny access to the system, as happened to the Indian military in 1999 during the Kargil War, or degrade the service at any time. As a result, several countries have developed or are in the process of setting up other global or regional satellite navigation systems. The Russian Global Navigation Satellite System (GLONASS) was developed contemporaneously with GPS but suffered from incomplete coverage of the globe until the mid-2000s.
GLONASS reception can be added to GPS devices, making more satellites available and enabling positions to be fixed more quickly and accurately, to within two meters. China's BeiDou Navigation Satellite System is due to achieve global reach in 2020. There are also the European Union's Galileo positioning system and India's NAVIC. Japan's Quasi-Zenith Satellite System is a GPS satellite-based augmentation system to enhance GPS's accuracy. When Selective Availability was lifted in 2000, GPS had about a five-meter accuracy. The latest stage of accuracy enhancement uses the L5 band and is now deployed. GPS receivers released in 2018 that use the L5 band can have much higher accuracy, pinpointing to within 30 centimetres (11.8 inches). The GPS project was launched in the United States in 1973 to overcome the limitations of previous navigation systems, integrating ideas from several predecessors, including classified engineering design studies from the 1960s. The U.S. Department of Defense developed the system, which used 24 satellites; it was developed for use by the United States military and became operational in 1995.
Civilian use was allowed from the 1980s. Roger L. Easton of the Naval Research Laboratory, Ivan A. Getting of The Aerospace Corporation, and Bradford Parkinson of the Applied Physics Laboratory are credited with inventing it; the work of Gladys West is credited as instrumental in the development of computational techniques for detecting satellite positions with the precision needed for GPS. The design of GPS is based partly on similar ground-based radio-navigation systems, such as LORAN and the Decca Navigator, developed in the early 1940s. Friedwardt Winterberg proposed a test of general relativity: detecting time slowing in a strong gravitational field using accurate atomic clocks placed in orbit inside artificial satellites. Special and general relativity predict that the clocks on the GPS satellites would be seen by the Earth's observers to run 38 microseconds faster per day than clocks on the Earth; uncorrected, the GPS calculated positions would drift into error, accumulating to about 10 kilometers per day. This was corrected for in the design of GPS.
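The quoted drift figure can be sanity-checked with one line of arithmetic: a timing error accumulating at 38 microseconds per day corresponds to a ranging error of roughly the speed of light times the offset.

```python
c = 299_792_458           # speed of light, m/s
dt_per_day = 38e-6        # relativistic clock offset, s/day
print(c * dt_per_day / 1000)   # ≈ 11.4 km of pseudorange error per day
```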
When the Soviet Union launched the first artificial satellite in 1957, two American physicists, William Guier and George Weiffenbach, at Johns Hopkins University's Applied Physics Laboratory (APL) decided to monitor its radio transmissions. Within hours they realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit. The Director of the APL gave them access to their UNIVAC to do the heavy calculations required. Early the next year, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem: pinpointing the user's location, given that of the satellite. This led them and APL to develop the TRANSIT system. In 1959, ARPA also played a role in TRANSIT. TRANSIT was first tested in 1960; it used a constellation of five satellites and could provide a navigational fix once per hour. In 1967, the U.S. Navy developed the Timation satellite, which proved the feasibility of placing accurate clocks in space, a technology required for GPS.
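The Doppler insight can be sketched numerically: the received frequency is shifted by the satellite's velocity component toward the receiver, and the shift crosses zero at the point of closest approach. All numbers below are illustrative, not the actual parameters of the 1957 satellite:

```python
import numpy as np

f0, c = 400e6, 299_792_458.0            # beacon frequency (Hz), light speed
v, d0 = 7_500.0, 1_200e3                # orbital speed (m/s), closest range (m)
t = np.linspace(-300, 300, 601)         # seconds around closest approach
along = v * t                           # along-track offset from closest point
r = np.hypot(d0, along)                 # slant range to the receiver
range_rate = v * along / r              # positive receding, negative approaching
doppler = -f0 * range_rate / c          # observed frequency shift
print(t[np.argmin(np.abs(doppler))])    # ≈ 0 s: zero crossing marks closest pass
```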
In the 1970s, the ground-based OMEGA navigation system, based on phase comparison of signal transmission from pairs of stations
Wi-Fi
Wi-Fi is a technology for radio wireless local area networking of devices based on the IEEE 802.11 standards. Wi‑Fi is a trademark of the Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to products that successfully complete the 802.11 interoperability certification testing. Devices that can use Wi-Fi technologies include, among others, desktops and laptops, video game consoles, tablets, smart TVs, digital audio players, digital cameras and drones. Wi-Fi compatible devices can connect to the Internet via a wireless access point; such an access point has a range of about 20 meters indoors and a greater range outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves, or as large as many square kilometres, achieved by using multiple overlapping access points. Different versions of Wi-Fi exist, with different radio bands and speeds. Wi-Fi most commonly uses the 2.4 gigahertz UHF and 5 gigahertz SHF ISM radio bands. Each channel can be time-shared by multiple networks.
These wavelengths work best for line-of-sight use. Many common materials absorb or reflect them, which further restricts range but can help minimise interference between different networks in crowded environments. At close range, some versions of Wi-Fi, running on suitable hardware, can achieve speeds of over 1 Gbit/s. Anyone within range with a wireless network interface controller can attempt to access a network. Wi-Fi Protected Access (WPA) is a family of technologies created to protect information moving across Wi-Fi networks and includes solutions for personal and enterprise networks. Security features of WPA have included stronger protections and new security practices as the security landscape has changed over time. In 1971, ALOHAnet connected the Hawaiian Islands with a UHF wireless packet network. ALOHAnet and the ALOHA protocol were early forerunners to Ethernet and the IEEE 802.11 protocols, respectively. A 1985 ruling by the U.S. Federal Communications Commission released the ISM band for unlicensed use.
These frequency bands are the same ones used by equipment such as microwave ovens and are subject to interference. In 1991, NCR Corporation, together with AT&T Corporation, invented the precursor to 802.11, intended for use in cashier systems, under the name WaveLAN. The Australian radio-astronomer Dr John O'Sullivan, with his colleagues Terence Percival, Graham Daniels, Diet Ostry and John Deane, developed a key patent used in Wi-Fi as a by-product of a Commonwealth Scientific and Industrial Research Organisation (CSIRO) research project, "a failed experiment to detect exploding mini black holes the size of an atomic particle". Dr O'Sullivan and his colleagues are credited with inventing Wi-Fi. In 1992 and 1996, CSIRO obtained patents for a method later used in Wi-Fi to "unsmear" the signal. The first version of the 802.11 protocol was released in 1997 and provided up to 2 Mbit/s link speeds. This was updated in 1999 with 802.11b to permit 11 Mbit/s link speeds, which proved popular. In 1999, the Wi-Fi Alliance formed as a trade association to hold the Wi-Fi trademark under which most products are sold.
Wi-Fi uses a large number of patents held by many different organizations. In April 2009, 14 technology companies agreed to pay CSIRO $1 billion for infringements on CSIRO patents; this led to Australia labeling Wi-Fi as an Australian invention, though this has been the subject of some controversy. CSIRO won a further $220 million settlement for Wi-Fi patent infringements in 2012, with global firms in the United States required to pay CSIRO licensing rights estimated to be worth an additional $1 billion in royalties. In 2016, the wireless local area network Test Bed was chosen as Australia's contribution to the exhibition A History of the World in 100 Objects, held in the National Museum of Australia. The name Wi-Fi, commercially used at least as early as August 1999, was coined by the brand-consulting firm Interbrand. The Wi-Fi Alliance had hired Interbrand to create a name "a little catchier than 'IEEE 802.11b Direct Sequence'". Phil Belanger, a founding member of the Wi-Fi Alliance who presided over the selection of the name "Wi-Fi", has stated that Interbrand invented Wi-Fi as a pun on the word hi-fi, a term for high-quality audio technology.
Interbrand also created the Wi-Fi logo. The yin-yang Wi-Fi logo indicates the certification of a product for interoperability. The Wi-Fi Alliance used the advertising slogan "The Standard for Wireless Fidelity" for a short time after the brand name was created, but while inspired by the term hi-fi, the name was never officially "Wireless Fidelity". The Wi-Fi Alliance was, however, called the "Wireless Fidelity Alliance Inc" in some publications. Non-Wi-Fi technologies intended for fixed points, such as Motorola Canopy, are described as fixed wireless. Alternative wireless technologies include mobile phone standards such as 2G, 3G, 4G and LTE. The name is sometimes written as WiFi, Wifi or wifi, but these spellings are not approved by the Wi-Fi Alliance. IEEE is a separate but related organization, and its website has stated "WiFi is a short name for Wireless Fidelity". To connect to a Wi-Fi LAN, a computer has to be equipped with a wireless network interface controller; the combination of a computer and an interface controller is called a station.
A service set is the set of all the devices associated with a particular Wi-Fi network. The service set can be local, extended or mesh; each service set has an associated identifier, the 32-byte Service Set Identifier, which identifies the partic
Universal Mobile Telecommunications System
The Universal Mobile Telecommunications System (UMTS) is a third-generation mobile cellular system for networks based on the GSM standard. Developed and maintained by the 3GPP, UMTS is a component of the International Telecommunication Union's IMT-2000 standard set and compares with the CDMA2000 standard set for networks based on the competing cdmaOne technology. UMTS uses wideband code division multiple access radio access technology to offer greater spectral efficiency and bandwidth to mobile network operators. UMTS specifies a complete network system, which includes the radio access network, the core network and the authentication of users via SIM cards. The technology described in UMTS is sometimes referred to as Freedom of Mobile Multimedia Access or 3GSM. Unlike EDGE and CDMA2000, UMTS requires new base stations and new frequency allocations. UMTS supports maximum theoretical data transfer rates of 42 Mbit/s when Evolved HSPA is implemented in the network. Users in deployed networks can expect a transfer rate of up to 384 kbit/s for Release '99 handsets and 7.2 Mbit/s for High-Speed Downlink Packet Access handsets in the downlink connection.
These speeds are faster than the 9.6 kbit/s of a single GSM error-corrected circuit-switched data channel, multiple 9.6 kbit/s channels in High-Speed Circuit-Switched Data and 14.4 kbit/s for cdmaOne channels. Since 2006, UMTS networks in many countries have been or are in the process of being upgraded with High-Speed Downlink Packet Access (HSDPA), sometimes known as 3.5G. HSDPA enables downlink transfer speeds of up to 21 Mbit/s. Work is also progressing on improving the uplink transfer speed with High-Speed Uplink Packet Access. Longer term, the 3GPP Long Term Evolution project plans to move UMTS to 4G speeds of 100 Mbit/s down and 50 Mbit/s up, using a next-generation air interface technology based upon orthogonal frequency-division multiplexing. The first national consumer UMTS networks launched in 2002 with a heavy emphasis on telco-provided mobile applications such as mobile TV and video calling. The high data speeds of UMTS are now most often utilised for Internet access: experience in Japan and elsewhere has shown that user demand for video calls is not high, and telco-provided audio/video content has declined in popularity in favour of high-speed access to the World Wide Web, either directly on a handset or connected to a computer via Wi-Fi, Bluetooth or USB.
UMTS combines three different terrestrial air interfaces, GSM's Mobile Application Part core and the GSM family of speech codecs. The air interfaces are called UMTS Terrestrial Radio Access (UTRA). All air interface options are part of ITU's IMT-2000. In the most popular variant for cellular mobile telephones, W-CDMA is used; it is called the "Uu interface", as it links User Equipment to the UMTS Terrestrial Radio Access Network. Note that the terms W-CDMA, TD-CDMA and TD-SCDMA are misleading: while they suggest covering just a channel access method, they are the common names for the whole air interface standards. W-CDMA or WCDMA, along with UMTS-FDD, UTRA-FDD or IMT-2000 CDMA Direct Spread, is an air interface standard found in 3G mobile telecommunications networks. It supports conventional cellular voice, text and MMS services, but can also carry data at high speeds, allowing mobile operators to deliver higher-bandwidth applications including streaming and broadband Internet access. W-CDMA uses the DS-CDMA channel access method with a pair of 5 MHz-wide channels.
In contrast, the competing CDMA2000 system uses one or more available 1.25 MHz channels for each direction of communication. W-CDMA systems are criticized for their large spectrum usage, which delayed deployment in countries that acted slowly in allocating new frequencies for 3G services. The specific frequency bands defined by the UMTS standard are 1885–2025 MHz for the mobile-to-base (uplink) direction and 2110–2200 MHz for the base-to-mobile (downlink) direction. In the US, 1710–1755 MHz and 2110–2155 MHz are used instead, as the 1900 MHz band was already in use. While UMTS2100 is the most widely deployed UMTS band, some countries' UMTS operators use the 850 MHz and/or 1900 MHz bands, notably in the US by AT&T Mobility, in New Zealand by Telecom New Zealand on the XT Mobile Network and in Australia by Telstra on the Next G network. Some carriers, such as T-Mobile, use band numbers to identify the UMTS frequencies: for example, Band I, Band IV and Band V. UMTS-FDD is an acronym for Universal Mobile Telecommunications System frequency-division duplexing, a 3GPP-standardized version of UMTS networks that makes use of frequency-division duplexing over a UMTS Terrestrial Radio Access air interface.
W-CDMA is the basis of Japan's NTT DoCoMo's FOMA service and the most commonly used member of the Universal Mobile Telecommunications System family, and it is sometimes used as a synonym for UMTS. It uses the DS-CDMA channel access method and the FDD duplexing method to achieve higher speeds and support more users compared to most previously used time division multiple access and time division duplex schemes. While not an evolutionary upgrade on the airside, it uses the same core network as the 2G GSM networks deployed worldwide, allowing dual mode mobile operation al
System on a chip
A system on a chip (SoC) or system on chip is an integrated circuit that integrates all components of a computer or other electronic system. These components include a central processing unit, input/output ports and secondary storage, all on a single substrate or microchip the size of a coin. It may contain digital, mixed-signal and radio frequency signal processing functions, depending on the application. Because they are integrated on a single substrate, SoCs consume much less power and take up much less area than multi-chip designs with equivalent functionality; because of this, SoCs are common in the mobile computing and edge computing markets. Systems on chip are also used in embedded systems and the Internet of Things. Systems on chip stand in contrast to the common traditional motherboard-based PC architecture, which separates components based on function and connects them through a central interfacing circuit board. Whereas a motherboard houses and connects detachable or replaceable components, SoCs integrate all of these components into a single integrated circuit, as if all these functions were built into the motherboard.
An SoC will integrate a CPU, graphics and memory interfaces, hard-disk and USB connectivity, and random-access and read-only memories and secondary storage on a single circuit die, whereas a motherboard would connect these modules as discrete components or expansion cards. More tightly integrated computer system designs improve performance and reduce power consumption, as well as the semiconductor die area needed for an equivalent design composed of discrete modules, at the cost of reduced replaceability of components. By definition, SoC designs are fully or nearly fully integrated across different component modules. For these reasons, there has been a general trend towards tighter integration of components in the computer hardware industry, in part due to the influence of SoCs and lessons learned from the mobile and embedded computing markets. Systems on chip can be viewed as part of a larger trend towards embedded computing and hardware acceleration. An SoC integrates a microcontroller or microprocessor with advanced peripherals like a graphics processing unit, a Wi-Fi module, or one or more coprocessors.
Similar to how a microcontroller integrates a microprocessor with peripheral circuits and memory, an SoC can be seen as integrating a microcontroller with more advanced peripherals. For an overview of integrating system components, see system integration. In general, distinguishable types of SoCs include those built around a microcontroller and those built around a microprocessor, such as the SoCs found in mobile phones. Systems on chip can be applied to any computing task. However, they are most commonly used in mobile computing devices such as tablets, smartphones and netbooks, as well as in embedded systems and in applications where microcontrollers would be used. Where previously only microcontrollers could be used, SoCs are rising to prominence in the embedded systems market. Tighter system integration offers better reliability and mean time between failures, and SoCs offer more advanced functionality and computing power than microcontrollers. Applications include AI acceleration, embedded machine vision, data collection, vector processing and ambient intelligence.
Embedded systems on chip target the internet of things, industrial internet of things and edge computing markets. Mobile computing based SoCs bundle processors, memories, on-chip caches, wireless networking capabilities and digital camera hardware and firmware. With increasing memory sizes, high-end SoCs will often have no built-in memory or flash storage; instead, the memory and flash memory are placed right next to, or above, the SoC. Some examples of mobile computing SoCs include: Apple's A series (such as the A12 Bionic), used in iPhones and iPads, and its S series and W series, used in Apple Watches; the Apple T series, used in the 2016 and 2017 MacBook Pro touch bars and fingerprint scanners; Samsung Electronics' ARM-based Exynos, used by Samsung's Galaxy series of smartphones; and Qualcomm's Snapdragon, used in many LG, Google Pixel, HTC and Samsung Galaxy smartphones. In 2018, Snapdragon SoCs began being used as the backbone of laptop computers running Windows 10, marketed as "Always Connected PCs".
As long ago as 1992, Acorn Computers produced the A3010, A3020 and A4000 range of personal computers with the ARM250 system on a chip. It combined the original Acorn ARM2 processor with a memory controller, video controller and I/O controller. In previous Acorn ARM-powered computers, these were four discrete chips. The ARM7500 chip was their second-generation system on a chip, based on the ARM700, VIDC20 and IOMD controllers, and was licensed into embedded devices such as set-top boxes, as well as Acorn personal computers. Systems on chip are being applied to mainstream personal computers as of 2018; they are applied to laptops and tablet PCs. Tablet and laptop manufacturers have learned lessons from the embedded systems and smartphone markets about reduced power consumption and better performance and reliability from tighter integration of hardware and firmware modules, and LTE and other wireless network communications integrated on chip. ARM based: Qualcomm S
Bluetooth
Bluetooth is a wireless technology standard for exchanging data between fixed and mobile devices over short distances using short-wavelength UHF radio waves in the industrial, scientific and medical (ISM) radio bands, from 2.400 to 2.485 GHz, and for building personal area networks. It was conceived as a wireless alternative to RS-232 data cables. Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 30,000 member companies in the areas of telecommunication, computing and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1 but no longer maintains the standard. The Bluetooth SIG oversees development of the specification, manages the qualification program and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market a device as a Bluetooth device. A network of patents applies to the technology. The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO at Ericsson Mobile in Lund, Sweden, and by Johan Ullman. The purpose was to develop wireless headsets, according to two inventions by Johan Ullman, SE 8902098-6, issued 1989-06-12, and SE 9202239, issued 1992-07-24.
Nils Rydbeck tasked Tord Wingren with specifying and Jaap Haartsen and Sven Mattisson with developing the technology. Both were working for Ericsson in Lund. The technology was invented by Dutch electrical engineer Jaap Haartsen, working for telecommunications company Ericsson, in 1994. The first consumer Bluetooth device launched in 1999: a hands-free mobile headset that earned the technology the "Best of Show Technology Award" at COMDEX. The first Bluetooth mobile phone was the Ericsson T36, but it was the revised T39 model that made it to store shelves in 2001. The name Bluetooth is an Anglicised version of the Scandinavian Blåtand/Blåtann, the epithet of the tenth-century king Harald Bluetooth, who united dissonant Danish tribes into a single kingdom; the implication is that Bluetooth similarly unites communication protocols. The idea of this name was proposed in 1997 by Jim Kardach of Intel, who developed a system that would allow mobile phones to communicate with computers. At the time of this proposal he was reading Frans G. Bengtsson's historical novel The Long Ships, about Vikings and King Harald Bluetooth.
The Bluetooth logo is a bind rune merging the Younger Futhark runes Hagall (ᚼ) and Bjarkan (ᛒ), Harald's initials. Bluetooth operates at frequencies between 2402 and 2480 MHz, or 2400 and 2483.5 MHz including guard bands 2 MHz wide at the bottom end and 3.5 MHz wide at the top. This is in the globally unlicensed industrial, scientific and medical 2.4 GHz short-range radio frequency band. Bluetooth uses a radio technology called frequency-hopping spread spectrum: it divides transmitted data into packets and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1 MHz. Bluetooth performs 1600 hops per second, with adaptive frequency-hopping enabled. Bluetooth Low Energy uses 2 MHz spacing. Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate mode, where an instantaneous bit rate of 1 Mbit/s is possible; the term Enhanced Data Rate (EDR) is used to describe the π/4-DQPSK and 8-DPSK schemes, giving 2 and 3 Mbit/s respectively.
The combination of these modes in Bluetooth radio technology is classified as a BR/EDR radio. Bluetooth is a packet-based protocol with a master/slave architecture. One master may communicate with up to seven slaves in a piconet. All devices share the master's clock. Packet exchange is based on the basic clock, defined by the master, which ticks at 312.5 µs intervals. Two clock ticks make up a slot of 625 µs, and two slots make up a slot pair of 1250 µs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots; the slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3 or 5 slots long, but in all cases the master's transmission begins in even slots and the slave's in odd slots. The above excludes Bluetooth Low Energy, introduced in the 4.0 specification, which uses the same spectrum but somewhat differently. A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet, though not all devices reach this maximum. The devices can switch roles by agreement, and the slave can become the master.
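The slot arithmetic above is easy to verify; a few lines of Python restating the figures in the text:

```python
tick = 312.5e-6                  # basic clock period, s
slot = 2 * tick                  # one slot
pair = 2 * slot                  # one slot pair
print(slot * 1e6, pair * 1e6)    # 625.0 1250.0 (µs, as stated)
print(1 / slot)                  # 1600.0 slots per second (the hop rate)
for n in range(4):               # even slots: master; odd slots: slave
    who = "master" if n % 2 == 0 else "slave"
    print(f"slot {n} starts at {n * slot * 1e6:.0f} µs: {who} transmits")
```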
The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices play the master role in one piconet and the slave role in another. At any given time, data can be transferred between the master and one other device; the master chooses which slave device to address. Since it is the master that chooses which slave to address, whereas a slave is supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; the specification is vague as to required behavior in scatternets. Bluetooth is a standard wire-replacement communications proto
Multi-touch
In computing, multi-touch is a technology that enables a surface to recognize the presence of more than one point of contact with the surface. The origins of multi-touch began at CERN, MIT, the University of Toronto, Carnegie Mellon University and Bell Labs in the 1970s. Multi-touch was in use as early as 1985. Apple popularized the term "multi-touch" in 2007. Plural-point awareness may be used to implement additional functionality, such as pinch to zoom or activating certain subroutines attached to predefined gestures. The two different uses of the term resulted from the quick developments in this field, with many companies using the term to market older technology, called gesture-enhanced single-touch or several other terms by other companies and researchers. Several other similar or related terms attempt to differentiate between whether a device can determine or only approximate the location of different points of contact, to further differentiate between the various technological capabilities, but they are used as synonyms in marketing.
The use of touchscreen technology predates the personal computer. Early synthesizer and electronic instrument builders like Hugh Le Caine and Robert Moog experimented with using touch-sensitive capacitance sensors to control the sounds made by their instruments.
IBM began building the first touch screens in the late 1960s. In 1972, Control Data released the PLATO IV computer, a terminal used for educational purposes, which employed single-touch points in a 16×16 array user interface. These early touchscreens only registered one point of touch at a time. On-screen keyboards were thus awkward to use, because key rollover and holding down a shift key while typing another key were not possible. An exception was a multi-touch reconfigurable touchscreen keyboard/display developed at the Massachusetts Institute of Technology in the early 1970s. In 1977, one of the early implementations of mutual capacitance touchscreen technology was developed at CERN, based on the capacitance touch screens developed in 1972 by Danish electronics engineer Bent Stumpe. This technology was used to develop a new type of human-machine interface for the control room of the Super Proton Synchrotron particle accelerator. In a handwritten note dated 11 March 1972, Stumpe presented his proposed solution: a capacitive touch screen with a fixed number of programmable buttons presented on a display.
The screen was to consist of a set of capacitors etched into a film of copper on a sheet of glass, each capacitor being constructed so that a nearby flat conductor, such as the surface of a finger, would increase the capacitance by a significant amount. The capacitors were to consist of fine lines etched in copper on a sheet of glass, fine enough and sufficiently far apart to be invisible. In the final device, a simple lacquer coating prevented the fingers from actually touching the capacitors. In 1976, MIT described a keyboard with variable graphics capable of multi-touch detection, likely the first multi-touch screen. In the early 1980s, the University of Toronto's Input Research Group were among the earliest to explore the software side of multi-touch input systems. A 1982 system at the University of Toronto used a frosted-glass panel with a camera placed behind the glass; when a finger or several fingers pressed on the glass, the camera would detect the action as one or more black spots on an otherwise white background, allowing it to be registered as an input.
Since the size of a dot depended on pressure, the system was somewhat pressure-sensitive as well. Of note, this system was not able to display graphics. In 1983, Bell Labs at Murray Hill published a comprehensive discussion of touch-screen based interfaces, though it made no mention of multiple fingers. In the same year, the video-based Video Place/Video Desk system of Myron Krueger was influential in the development of multi-touch gestures such as pinch to zoom, though this system had no touch interaction itself. By 1984, both Bell Labs and Carnegie Mellon University had working multi-touch-screen prototypes, covering both input and graphics, that could respond interactively to multiple finger inputs. The Bell Labs system was based on capacitive coupling of fingers, whereas the CMU system was optical. In 1985, the canonical multi-touch pinch-to-zoom gesture was demonstrated, with coordinated graphics, on CMU's system. In October 1985, Steve Jobs signed a non-disclosure agreement to tour CMU's Sensor Frame multi-touch lab.
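The detection principle common to these early designs, whether the sensor is a capacitor grid or a camera watching for dark spots, can be sketched in a few lines of Python: sample a grid of sensor values and report every cell that departs from its baseline by more than a threshold as a contact point. The numbers below are invented for illustration:

```python
baseline = 10.0                         # nominal sensor reading, arbitrary units
threshold = 2.0                         # rise that counts as a finger

def touch_points(grid):
    """Return (row, col) of every cell whose reading rose past the threshold."""
    return [(row, col)
            for row, cells in enumerate(grid)
            for col, value in enumerate(cells)
            if value - baseline > threshold]

frame = [[10.1, 10.0, 13.2],            # finger near (0, 2)
         [10.0, 12.9, 10.2],            # finger near (1, 1)
         [10.0, 10.1, 10.0]]
print(touch_points(frame))              # [(0, 2), (1, 1)] -> two simultaneous touches
```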
In 1990, Sears et al. published a review of academic research on single and multi-touch touchscreen human–computer interaction of the time, describing single touch gestures such as rotating knobs, swiping the screen to activate a switch, an