In computing, a server is a computer program or a device that provides functionality for other programs or devices, called "clients". This architecture is called the client–server model, in which a single overall computation is distributed across multiple processes or devices. Servers can provide various functionalities, called "services", such as sharing data or resources among multiple clients or performing computation for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device, or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers. Client–server systems are today most often implemented by the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client with a result or acknowledgement (a minimal sketch follows below). Designating a computer as "server-class hardware" implies that it is specialized for running servers on it.
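As a concrete illustration of the request–response cycle, the following minimal Python sketch runs a toy TCP server in a background thread while a client sends one request and reads the response; the port number and message contents are arbitrary choices for the example.

    import socket
    import threading
    import time

    def serve(host="127.0.0.1", port=5000):
        # The server: accept one connection, read the request, send a response.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen()
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024)          # the client's request
                conn.sendall(b"ACK: " + request)   # result or acknowledgement

    threading.Thread(target=serve, daemon=True).start()
    time.sleep(0.2)  # give the server a moment to start listening

    # The client: send a request and wait for the server's response.
    with socket.create_connection(("127.0.0.1", 5000)) as cli:
        cli.sendall(b"GET /resource")
        print(cli.recv(1024).decode())             # prints "ACK: GET /resource"

The same cycle underlies real protocols such as HTTP, with the request and response formats standardized.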
Server-class hardware is typically more powerful and reliable than standard personal computers, but alternatively, large computing clusters may be composed of many simple, replaceable server components. The use of the word server in computing comes from queueing theory, where it dates to the mid-20th century, being notably used in Kendall (1953), the paper that introduced Kendall's notation. In earlier papers, such as Erlang (1909), more concrete terms such as "operators" are used. In computing, "server" dates at least to RFC 5, one of the earliest documents describing ARPANET, where it is contrasted with "user", distinguishing two types of host: "server-host" and "user-host". The use of "serving" dates to early documents such as RFC 4, which contrasts "serving-host" with "using-host". The Jargon File defines "server" in the common sense of a process performing a service for requests, usually remote, with the 1981 version reading: SERVER n. A kind of DAEMON which performs a service for the requester, which often runs on a computer other than the one on which the server runs.
Strictly speaking, the term server refers to a computer program or process. Through metonymy, it also refers to a device used for running one or more server programs. On a network, such a device is called a host. In addition to server, the words serve and service are used, though servicer and servant are not; the word service may refer either to the abstract form of functionality, e.g. a Web service, or to a computer program that turns a computer into a server, e.g. a Windows service. Originally used as "servers serve users", in the sense of "obey", today one says that "servers serve data", in the same sense as "give". For instance, web servers "serve web pages to users" or "service their requests". The server is part of the client–server model, in which the communication between client and server takes the form of request and response; this is in contrast with the peer-to-peer model. In principle, any computerized process that can be used or called by another process is a server, and the calling process or processes are clients; thus any general-purpose computer connected to a network can host servers.
For example, if files on a device are shared by some process, that process is a file server. Web server software can run on any capable computer, so even a laptop or a personal computer can host a web server. While request–response is the most common client–server design, there are others, such as the publish–subscribe pattern. In the publish–subscribe pattern, clients register with a pub–sub server, subscribing to specified types of messages; thereafter, the pub–sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request–response (a minimal sketch follows below). The purpose of a server is to share data as well as to distribute work, and a server computer can serve its own computer programs as well. The entire structure of the Internet is based upon the client–server model: high-level root nameservers, DNS servers, and routers direct the traffic on the Internet. There are millions of servers connected to the Internet, running continuously throughout the world, and every action taken by an ordinary Internet user requires one or more interactions with one or more servers, though there are exceptions.
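The contrast with request–response can be seen in a minimal in-process sketch of the publish–subscribe pattern; the broker class below stands in for a pub–sub server, and a real system would deliver the messages over a network.

    from collections import defaultdict

    class Broker:
        """Toy pub-sub server: routes published messages to subscribers."""
        def __init__(self):
            self.subscribers = defaultdict(list)   # message type -> callbacks

        def subscribe(self, topic, callback):
            self.subscribers[topic].append(callback)

        def publish(self, topic, message):
            for callback in self.subscribers[topic]:
                callback(message)                  # push to clients, no request needed

    broker = Broker()
    broker.subscribe("weather", lambda m: print("client A got:", m))
    broker.subscribe("weather", lambda m: print("client B got:", m))
    broker.publish("weather", "storm warning")     # both subscribers receive it

Note that the publisher never addresses the clients directly; the broker delivers to whichever clients have registered interest.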
Hardware requirements for servers vary widely, depending on the server's purpose and its software. Since servers are usually accessed over a network, many run unattended without a computer monitor, input devices, audio hardware, or USB interfaces. Many servers do not have a graphical user interface and are instead managed remotely. Remote management can be conducted via various methods, including Microsoft Management Console, PowerShell, SSH, and browser-based out-of-band management systems such as Dell's iDRAC or HP's iLO (a scripted example follows below). Large traditional single servers would need to run for long periods without interruption.
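As a sketch of scripted remote management over SSH, the following assumes an OpenSSH client on the administrator's machine, key-based authentication already configured, and a reachable server; the host name, user name, and command are illustrative.

    import subprocess

    # Run a diagnostic command on a remote, headless server over SSH.
    result = subprocess.run(
        ["ssh", "admin@server01", "uptime"],   # hypothetical user and host
        capture_output=True, text=True, timeout=30,
    )
    print(result.stdout.strip() or result.stderr.strip())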
The first publicly available description of HTML, a document called "HTML Tags", describes 18 elements comprising the initial, simple design of HTML. Except for the hyperlink tag, these were influenced by SGMLguid, an in-house Standard Generalized Markup Language (SGML)-based documentation format at CERN. Eleven of these elements still exist in HTML 4. HTML is a markup language that web browsers use to interpret and compose text and other material into visual or audible web pages. Default characteristics for every item of HTML markup are defined in the browser, and these characteristics can be altered or enhanced by the web page designer's additional use of CSS. Many of the text elements are found in the 1988 ISO technical report TR 9537, Techniques for using SGML, which in turn covers the features of early text formatting languages such as that used by the RUNOFF command developed in the early 1960s for the CTSS operating system; these formatting commands were derived from the commands used by typesetters to manually format documents. However, the SGML concept of generalized markup is based on elements rather than print effects, with the separation of structure and markup.
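To see markup treated as nested elements rather than print effects, the following sketch uses Python's standard html.parser module to walk the tags and text of a small document; the sample markup is invented for the example.

    from html.parser import HTMLParser

    class ElementLister(HTMLParser):
        # Report each element (tag plus attributes) and each run of text.
        def handle_starttag(self, tag, attrs):
            print("element:", tag, dict(attrs))

        def handle_data(self, data):
            if data.strip():
                print("text:", data.strip())

    ElementLister().feed(
        '<h1>Title</h1><p>A <a href="https://example.org">link</a>.</p>'
    )

A browser performs the same kind of parse, then applies its default characteristics (and any CSS) to each element when rendering.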
Berners-Lee considered HTML to be an application of SGML. It was formally defined as such by the Internet Engineering Task Force (IETF) with the mid-1993 publication of the first proposal for an HTML specification, the "Hypertext Markup Language" Internet Draft by Berners-Lee and Dan Connolly, which included an SGML document type definition (DTD) to define the grammar. The draft expired after six months, but was notable for its acknowledgment of the NCSA Mosaic browser's custom tag for embedding in-line images, reflecting the IETF's philosophy of basing standards on successful prototypes. Dave Raggett's competing Internet Draft, "HTML+", from late 1993, suggested standardizing already-implemented features like tables and fill-out forms. After the HTML and HTML+ drafts expired in early 1994, the IETF created an HTML Working Group, which in 1995 completed "HTML 2.0", the first HTML specification intended to be treated as a standard against which future implementations should be based. Further development under the auspices of the IETF was stalled by competing interests.
Since 1996, the HTML specifications have been maintained, with input from commercial software vendors, by the World Wide Web Consortium (W3C). In 2000, HTML also became an international standard (ISO/IEC 15445:2000). HTML 4.01 was published in late 1999, with further errata published through 2001. In 2004, development began on HTML5 in the Web Hypertext Application Technology Working Group (WHATWG), which became a joint deliverable with the W3C in 2008; it was completed and standardized on 28 October 2014. On November 24, 1995, HTML 2.0 was published as RFC 1866. Supplemental RFCs added capabilities:

- November 25, 1995: RFC 1867 (form-based file upload)
- May 1996: RFC 1942 (tables)
- August 1996: RFC 1980 (client-side image maps)
- January 1997: RFC 2070 (internationalization)

On January 14, 1997, HTML 3.2 was published as a W3C Recommendation. It was the first version developed and standardized exclusively by the W3C, as the IETF had closed its HTML Working Group on September 12, 1996. Code-named "Wilbur", HTML 3.2 dropped math formulas entirely, reconciled overlap among various proprietary extensions, and adopted most of Netscape's visual markup tags.
Netscape's blink element and Microsoft's marquee element were omitted due to a mutual agreement between the two companies.
A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over wired media such as copper wires or fiber-optic cables, or wireless media such as Wi-Fi. Network devices that originate, route, and terminate the data are called network nodes. Nodes are identified by network addresses and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other, more general communications protocols (see the sketch after this paragraph); this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, among many others.
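As a sketch of this layering, the following hand-writes an HTTP request (an application-specific protocol) and carries it over a raw TCP socket (the more general transport protocol beneath it); example.org is a public test host.

    import socket

    # HTTP (application layer) carried over TCP (transport layer).
    with socket.create_connection(("example.org", 80)) as sock:
        sock.sendall(
            b"GET / HTTP/1.1\r\n"
            b"Host: example.org\r\n"
            b"Connection: close\r\n\r\n"
        )
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    print(response.split(b"\r\n", 1)[0].decode())   # e.g. "HTTP/1.1 200 OK"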
Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, the traffic control mechanisms, and the organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes: In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network (illustrated in the sketch after this paragraph). Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.
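The packet-switching idea can be illustrated with a toy sketch: a message is split into independently deliverable packets that may arrive out of order and are reassembled by sequence number at the destination; the packet size and contents are arbitrary.

    import random

    def packetize(message, size=4):
        # Split a message into (sequence number, payload) packets.
        return [(seq, message[i:i + size])
                for seq, i in enumerate(range(0, len(message), size))]

    packets = packetize("packets can take different routes")
    random.shuffle(packets)                        # simulate out-of-order arrival
    reassembled = "".join(data for _, data in sorted(packets))
    print(reassembled)                             # the original message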
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, and were later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. Also in 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the Aloha network developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices, and in 1979 Robert Metcalfe pursued making Ethernet an open standard. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s, and by 1998 Ethernet supported transmission speeds of one gigabit per second. Subsequently, higher speeds of up to 400 Gbit/s were added. The ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology, or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls, and video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by other devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows the sharing of files and data.
A 19-inch rack is a standardized frame or enclosure for mounting multiple electronic equipment modules. Each module has a front panel 19 inches wide; the 19-inch dimension includes the edges, or "ears", that protrude on each side and allow the module to be fastened to the rack frame with screws. Common uses include computer servers, broadcast video, lighting, and scientific lab equipment. Equipment designed to be placed in a rack is described as rack-mount, a rack-mount instrument, a rack-mounted system, a rack-mount chassis, rack-mountable, or simply shelf. The height of the electronic modules is standardized as multiples of 1.752 inches, or one rack unit or "U". The industry-standard rack cabinet is 42U tall. The term relay rack appeared first in the world of telephony, and by 1911 it was also being used in railroad signaling, although there is little evidence that the dimensions of these early racks were standardized. The 19-inch rack format with rack units of 1.75 inches was established as a standard by AT&T around 1922 in order to reduce the space required for repeater and termination equipment for toll cables.
The earliest repeaters from 1914 were installed in an ad-hoc fashion on shelves and in wooden boxes and cabinets. Once serial production started, they were built into custom-made racks, one per repeater. But in light of the rapid growth of the toll network, the engineering department of AT&T undertook a systematic redesign, resulting in a family of modular factory-assembled panels all "designed to mount on vertical supports spaced 19½ inches between centers. The height of the different panels will vary... but... in all cases to be a whole multiple of 1¾ inches". By 1934, it was an established standard with holes tapped for 12-24 screws with alternating spacings of 1.25 inches and 0.5 inches. The EIA standard was revised again in 1992 to comply with the 1988 public law 100-418, setting the standard U as 15.9 mm + 15.9 mm + 12.7 mm, making each "U" 44.50 millimetres. The 19-inch rack format has remained constant while the technology mounted within it has changed, and the set of fields to which racks are applied has expanded.
The 19-inch standard rack arrangement is used throughout the telecommunication, audio, video, and other industries, though the Western Electric 23-inch standard, with holes on 1-inch centers, is still used in legacy ILEC/CLEC facilities. Nineteen-inch racks in two-post or four-post form hold most equipment in modern data centers, ISP facilities, and professionally designed corporate server rooms; they allow for dense hardware configurations without occupying excessive floorspace or requiring shelving. Nineteen-inch racks are also often used to house professional audio and video equipment, including amplifiers, effects units, headphone amplifiers, and small-scale audio mixers. A third common use for rack-mounted equipment is industrial power and automation hardware. A piece of equipment being installed has a front panel height 1⁄32 inch less than the allotted number of Us. Thus, a 1U rackmount computer is 1.721 inches tall, and 2U would be 3.473 inches instead of 3.504 inches. This gap allows a bit of room above and below an installed piece of equipment so it may be removed without binding on the adjacent equipment.
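The panel-height arithmetic above can be stated compactly; this sketch simply reproduces the figures from the paragraph, using 1.752 inches per U less the 1/32-inch clearance.

    RACK_UNIT_IN = 1.752     # height of one rack unit, in inches
    CLEARANCE_IN = 1 / 32    # gap so panels can be removed without binding

    def panel_height(units):
        return units * RACK_UNIT_IN - CLEARANCE_IN

    for u in (1, 2):
        print(f"{u}U panel: {panel_height(u):.3f} in "
              f"(allotted slot: {u * RACK_UNIT_IN:.3f} in)")
    # 1U panel: 1.721 in (allotted slot: 1.752 in)
    # 2U panel: 3.473 in (allotted slot: 3.504 in)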
Originally, the mounting holes were tapped with a particular screw thread. When rack rails are too thin to tap, rivnuts or other threaded inserts can be used, and when the particular class of equipment to be mounted is known in advance, some of the holes can be omitted from the mounting rails. Threaded mounting holes in racks where the equipment is changed frequently are problematic because the threads can be damaged or the mounting screws can break off, and tapping large numbers of holes that may never be used is expensive. Nevertheless, tapped-hole racks are still found where the mounted equipment rarely changes; examples include telephone exchanges, network cabling panels, broadcast studios, and some government and military applications. The tapped-hole rack was first replaced by clearance-hole racks, whose holes are large enough to permit a bolt to be inserted through without binding; bolts are fastened in place using cage nuts. In the event of a nut being stripped out or a bolt breaking, the nut can be removed and replaced with a new one. Production of clearance-hole racks is less expensive because tapping the holes is eliminated and replaced with fewer, less expensive cage nuts.
The next innovation in rack design was the square-hole rack. Square-hole racks allow boltless mounting, such that the rack-mount equipment only needs to be inserted through and hooked down into the lip of the square hole. Installation and removal of hardware in a square-hole rack is easy and boltless: the weight of the equipment and small retention clips are all that is necessary to hold the equipment in place. Older equipment meant for round-hole or tapped-hole racks can still be used, with the aid of cage nuts made for square-hole racks. Rack-mountable equipment is traditionally mounted by bolting or clipping its front panel to the rack. Within the IT industry, it is common for network/communications equipment to have multiple mounting positions, including table-top and wall mounting, so rack-mountable equipment will often feature L-brackets that must be screwed or bolted to the equipment prior to mounting in a 19-inch rack.
The European Organization for Nuclear Research, known as CERN, is a European research organization that operates the largest particle physics laboratory in the world. Established in 1954, the organization is based in a northwest suburb of Geneva on the Franco–Swiss border and has 23 member states. Israel is the only non-European country granted full membership. CERN is an official United Nations Observer. The acronym CERN is also used to refer to the laboratory itself, which in 2016 had 2,500 scientific and administrative staff members and hosted about 12,000 users. In the same year, CERN generated 49 petabytes of data. CERN's main function is to provide the particle accelerators and other infrastructure needed for high-energy physics research; as a result, numerous experiments have been constructed at CERN through international collaborations. The main site at Meyrin hosts a large computing facility, used to store and analyse data from experiments as well as to simulate events. Because researchers need remote access to these facilities, the lab has been a major wide area network hub.
CERN is the birthplace of the World Wide Web. The convention establishing CERN was ratified on 29 September 1954 by 12 countries in Western Europe. The acronym CERN represented the French words for Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research), a provisional council for building the laboratory established by 12 European governments in 1952. The acronym was retained for the new laboratory after the provisional council was dissolved, even though the name changed to the current Organisation Européenne pour la Recherche Nucléaire (European Organization for Nuclear Research) in 1954. According to Lew Kowarski, a former director of CERN, when the name was changed, the abbreviation could have become the awkward OERN, and Werner Heisenberg said that this could "still be CERN even if the name is [not]". CERN's first president was Sir Benjamin Lockspeiser. Edoardo Amaldi was the general secretary of CERN at its early stages when operations were still provisional, while the first Director-General was Felix Bloch. The laboratory was originally devoted to the study of atomic nuclei, but it was soon applied to higher-energy physics, concerned with the study of interactions between subatomic particles.
Therefore, the laboratory operated by CERN is commonly referred to as the European laboratory for particle physics, which better describes the research being performed there. At the sixth session of the CERN Council, which took place in Paris from 29 June to 1 July 1953, the convention establishing the organization was signed, subject to ratification, by 12 states. The convention was subsequently ratified by the 12 founding member states: Belgium, Denmark, France, the Federal Republic of Germany, Greece, Italy, the Netherlands, Norway, Sweden, Switzerland, the United Kingdom, and Yugoslavia. Several important achievements in particle physics have been made through experiments at CERN; they include the 1973 discovery of neutral currents in the Gargamelle bubble chamber. In September 2011, CERN attracted media attention when the OPERA Collaboration reported the detection of faster-than-light neutrinos; further tests showed that the results were flawed due to an incorrectly connected GPS synchronization cable. The 1984 Nobel Prize for Physics was awarded to Carlo Rubbia and Simon van der Meer for the developments that resulted in the discoveries of the W and Z bosons.
The 1992 Nobel Prize for Physics was awarded to CERN staff researcher Georges Charpak "for his invention and development of particle detectors, in particular the multiwire proportional chamber". The 2013 Nobel Prize for Physics was awarded to François Englert and Peter Higgs for the theoretical description of the Higgs mechanism, in the year after the Higgs boson was found by CERN experiments. The World Wide Web began as a CERN project named ENQUIRE, initiated by Tim Berners-Lee in 1989 and Robert Cailliau in 1990. Berners-Lee and Cailliau were jointly honoured by the Association for Computing Machinery in 1995 for their contributions to the development of the World Wide Web. Based on the concept of hypertext, the project was intended to facilitate the sharing of information between researchers. The first website was activated in 1991, and on 30 April 1993 CERN announced that the World Wide Web would be free to anyone. A copy of the original first webpage, created by Berners-Lee, is still published on the World Wide Web Consortium's website as a historical document.
Prior to the Web's development, CERN had pioneered the introduction of Internet technology, beginning in the early 1980s. More recently, CERN has become a facility for the development of grid computing, hosting projects including Enabling Grids for E-sciencE and the LHC Computing Grid. It also hosts the CERN Internet Exchange Point, one of the two main internet exchange points in Switzerland. CERN operates a network of accelerators and a decelerator; each machine in the chain increases the energy of particle beams before delivering them to experiments or to the next, more powerful accelerator.
Dell is an American multinational computer technology company based in Round Rock, Texas, that develops, sells, and supports computers and related products and services. Named after its founder, Michael Dell, the company is one of the largest technology corporations in the world, employing more than 145,000 people in the U.S. and around the world. Dell sells personal computers, data storage devices, network switches, computer peripherals, HDTVs, printers, MP3 players, and electronics built by other manufacturers. The company is well known for its innovations in supply chain management and electronic commerce, particularly its direct-sales model and its "build-to-order" or "configure to order" approach to manufacturing, delivering individual PCs configured to customer specifications. Dell was a pure hardware vendor for much of its existence, but with the acquisition of Perot Systems in 2009, Dell entered the market for IT services. The company has since made additional acquisitions in storage and networking systems, with the aim of expanding its portfolio from offering computers only to delivering complete solutions for enterprise customers.
Dell was listed at number 51 in the Fortune 500 list until 2014; after going private in 2013, the newly confidential nature of its financial information prevents the company from being ranked by Fortune. In 2015, it was the third largest PC vendor in the world after Lenovo and HP, and Dell is the largest shipper of PC monitors worldwide. Dell is the sixth largest company in Texas by total revenue, according to Fortune magazine; it is the second largest non-oil company in Texas, behind AT&T, and the largest company in the Greater Austin area. It was a publicly traded company, as well as a component of the NASDAQ-100 and S&P 500, until it was taken private in a leveraged buyout which closed on October 30, 2013. In 2015, Dell announced its acquisition of the enterprise technology firm EMC Corporation. Dell traces its origins to 1984, when Michael Dell created Dell Computer Corporation, which at the time did business as PC's Limited, while a student at the University of Texas at Austin. The dorm-room-headquartered company sold IBM PC-compatible computers built from stock components.
Dell dropped out of school to focus full-time on his fledgling business after getting $1,000 in expansion capital from his family. In 1985, the company produced the first computer of its own design, the Turbo PC, which sold for $795. PC's Limited advertised its systems in national computer magazines for sale directly to consumers and custom-assembled each ordered unit according to a selection of options. The company grossed more than $73 million in its first year of operation. In 1986, Michael Dell brought in Lee Walker, a 51-year-old venture capitalist, as president and chief operating officer, to serve as Dell's mentor and implement Dell's ideas for growing the company. Walker was instrumental in recruiting members to the board of directors when the company went public in 1988. Walker retired in 1990 for health reasons, and Michael Dell hired Morton Meyerson, former CEO and president of Electronic Data Systems, to transform the company from a fast-growing medium-sized firm into a billion-dollar enterprise.
The company dropped the PC's Limited name in 1987 to become Dell Computer Corporation and began expanding globally. In June 1988, Dell's market capitalization grew from $30 million to $80 million following its June 22 initial public offering of 3.5 million shares at $8.50 a share. In 1992, Fortune magazine included Dell Computer Corporation in its list of the world's 500 largest companies, making Michael Dell the youngest CEO of a Fortune 500 company ever. In 1993, to complement its own direct sales channel, Dell planned to sell PCs at big-box retail outlets such as Wal-Mart, which would have brought in an additional $125 million in annual revenue. Bain consultant Kevin Rollins persuaded Michael Dell to pull out of these deals, believing they would be money losers in the long run. Margins at retail were thin at best, and Dell left the reseller channel in 1994. Rollins would soon join Dell full-time and eventually become the company's president and CEO. Dell did not emphasize the consumer market at the time, due to the higher costs and unacceptably low profit margins in selling to individuals and households.
While the industry's average selling price to individuals was going down, Dell's was going up, as second- and third-time computer buyers who wanted powerful computers with multiple features and did not need much technical support were choosing Dell. Dell found an opportunity among PC-savvy individuals who liked the convenience of buying direct, customizing their PCs to their means, and having them delivered in days. In early 1997, Dell created an internal sales and marketing group dedicated to serving the home market and introduced a product line designed for individual users. From 1997 to 2004, Dell enjoyed steady growth, and it gained market share from competitors even during industry slumps. During the same period, rival PC vendors such as Compaq, Gateway, IBM, Packard Bell, and AST Research struggled and eventually left the market or were bought out. Dell surpassed Compaq to become the largest PC manufacturer in 1999. Operating costs made up only 10 percent of Dell's $35 billion in revenue in 2002, compared with 21 percent of revenue at Hewlett-Packard, 25 percent at Gateway, and 46 percent at Cisco.
In 2002, when Compaq merged with Hewlett-Packard, the newly combined Hewlett-Packard took the top spot but struggled, and Dell soon regained its lead. Dell grew the fastest in the early 2000s.
A webcam is a video camera that feeds or streams its image in real time to or through a computer to a computer network. When "captured" by the computer, the video stream may be saved, viewed, or sent on to other networks via systems such as the internet, or e-mailed as an attachment; when sent to a remote location, the video stream may be viewed or sent onward from there. Unlike an IP camera, a webcam is connected by a USB cable or similar cable, or built into computer hardware, such as laptops. The term "webcam" may also be used in its original sense of a video camera connected to the Web continuously for an indefinite time, rather than for a particular session, supplying a view for anyone who visits its web page over the Internet. Some of these, for example those used as online traffic cameras, are expensive, rugged professional video cameras. Webcams are known for their low manufacturing cost and their high flexibility, making them the lowest-cost form of videotelephony. Despite the low cost, the resolution offered at present is rather impressive, with low-end webcams offering resolutions of 320×240, medium webcams offering 640×480, and high-end webcams offering 1280×720 or 1920×1080.
They have become a source of security and privacy issues, as some built-in webcams can be remotely activated by spyware. The most popular use of webcams is the establishment of video links, permitting computers to act as videophones or videoconference stations. Other popular uses include security surveillance, computer vision, video broadcasting, and the recording of social videos. The video streams provided by webcams can be used for a number of purposes, each using appropriate software. For example, most modern webcams are capable of capturing arterial pulse rate by the use of a simple algorithmic trick, with researchers making claims for its accuracy. Webcams may be installed at places such as childcare centres, offices, and private areas to monitor security and general activity. Webcams have also been used for augmented reality experiences online; one such function has the webcam act as a "magic mirror" to allow an online shopper to view a virtual item on themselves, with the Webcam Social Shopper being one example of such software. Webcams can be added to instant messaging and text chat services such as AOL Instant Messenger, and to VoIP services such as Skype; with this, one-to-one live video communication over the Internet has reached millions of mainstream PC users worldwide.
Improved video quality has helped webcams encroach on traditional video conferencing systems. New features such as automatic lighting controls, real-time enhancements, automatic face tracking, and autofocus assist users by providing substantial ease of use, further increasing the popularity of webcams. Webcam features and performance can vary by program, computer operating system, and the computer's processor capabilities. Video calling support has also been added to several popular instant messaging programs. Webcams can be used as security cameras: software is available to allow PC-connected cameras to watch for movement and sound, recording both when they are detected, and these recordings can be saved to the computer, e-mailed, or uploaded to the Internet (a minimal motion-detection sketch follows below). In one well-publicised case, a computer e-mailed images of the burglar during the theft of the computer, enabling the owner to give police a clear picture of the burglar's face even after the computer had been stolen. Unauthorized access of webcams can present significant privacy issues.
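A minimal motion-detection sketch along these lines, assuming the OpenCV package (cv2) is installed and a camera is available at index 0; the threshold values are arbitrary tuning parameters.

    import cv2

    cap = cv2.VideoCapture(0)                       # the webcam
    ok, frame = cap.read()                          # first frame as the baseline
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if ok else None

    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)              # pixelwise change between frames
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > 5000:           # enough pixels changed: motion
            cv2.imwrite("motion.jpg", frame)        # record the triggering frame
            break
        prev = gray

    cap.release()

A real security application would keep recording, timestamp events, and upload or e-mail the captures, as described above.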
In December 2011, Russia announced that 290,000 webcams would be installed in 90,000 polling stations to monitor the 2012 Russian presidential election. Webcams can be used to take video clips and still pictures. Various software tools in wide use can be employed for this, such as PicMaster, Photo Booth, or Cheese; for a more complete list, see Comparison of webcam software. Special software can use the video stream from a webcam to assist or enhance a user's control of applications and games. Video features, including faces, shapes, and colors, can be observed and tracked to produce a corresponding form of control. For example, the position of a single light source can be tracked and used to emulate a mouse pointer; a head-mounted light would enable hands-free computing and would improve computer accessibility (a minimal tracking sketch follows below). This can be applied to games, providing additional control, improved interactivity, and immersiveness. FreeTrack is a free webcam motion-tracking application for Microsoft Windows that can track a special head-mounted model in up to six degrees of freedom and output data to mouse, keyboard, and FreeTrack-supported games.
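A minimal sketch of light-source tracking of this kind, again assuming OpenCV (cv2) and a camera at index 0: the brightest pixel in a blurred grayscale frame stands in for the tracked light, and its coordinates could then be mapped to a pointer position.

    import cv2

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (11, 11), 0)      # suppress single-pixel noise
        _, max_val, _, max_loc = cv2.minMaxLoc(gray)    # brightest point = the light
        print(f"light source at {max_loc}, intensity {max_val}")
    cap.release()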
By removing the webcam's IR filter, IR LEDs can be used as the tracked light source instead, which has the advantage of being invisible to the naked eye, removing a distraction from the user. TrackIR is a commercial version of this technology. The EyeToy for the PlayStation 2, the PlayStation Eye for the PlayStation 3, and the Xbox Live Vision camera and Kinect motion sensor for the Xbox 360 are color digital cameras that have been used as control input devices by some games. Small webcam-based PC games are available as either standalone executables or inside web browser windows using Adobe Flash. With very-low-light capability, a few specific models of webcams are popular among astronomers and astrophotographers for photographing the night sky; these are manual-focus cameras that contain an old CCD array instead of the comparatively newer CMOS array. The lenses of the cameras are removed, and the cameras are attached to telescopes to record images.