A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over wired media such as copper wires or optical fiber, or over wireless media such as Wi-Fi. Network devices that originate, route and terminate the data are called network nodes. Nodes, which are identified by network addresses, can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other, more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video and audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, among many others.
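The idea of layering an application-specific protocol over a more general transport can be sketched in a few lines. The example below is purely illustrative (not from the article): an HTTP-style request, an application-layer convention, is carried as opaque bytes over a generic byte-stream socket; a local socket pair stands in for a real TCP connection between two nodes.

```python
import socket

# A local socket pair stands in for a TCP connection between two networked
# nodes; the transport layer just moves opaque bytes in both directions.
client, server = socket.socketpair()

# Application layer: compose an HTTP-style request line and headers.
request = b"GET /index.html HTTP/1.1\r\nHost: example.test\r\n\r\n"

# Transport layer: send the bytes; the socket knows nothing about HTTP.
client.sendall(request)

# The peer receives raw bytes and interprets them at the application layer.
received = server.recv(4096)
method, path, version = received.split(b"\r\n")[0].split(b" ")

print(method.decode(), path.decode(), version.decode())  # GET /index.html HTTP/1.1
client.close()
server.close()
```

The same transport could carry any other application protocol unchanged, which is the point of the layered design.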
Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, traffic control mechanisms and organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes the following. In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system SABRE (Semi-Automatic Business Research Environment) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara and the University of Utah.
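The core packet-switching idea can be made concrete with a toy sketch. The example below is illustrative and not tied to any historical implementation: a message is split into independently deliverable chunks tagged with sequence numbers, which may arrive in any order, and the receiver reorders them to rebuild the original message.

```python
import random

def packetize(message: bytes, size: int):
    """Split a message into (sequence_number, chunk) packets."""
    count = (len(message) + size - 1) // size
    return [(i, message[i * size:(i + 1) * size]) for i in range(count)]

def reassemble(packets):
    """Reorder packets by sequence number and rebuild the message."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"packet switching moves data as independently routed blocks"
packets = packetize(message, 8)
random.shuffle(packets)          # packets may traverse different routes
assert reassemble(packets) == message
```

Because each packet carries enough information to be handled on its own, intermediate nodes can route them independently, which is what distinguishes packet switching from circuit switching.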
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed and were later used as underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the Aloha network, developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of these related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls and video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows sharing of files, data and other types of information.
Zooming user interface
In computing, a zooming user interface or zoomable user interface (ZUI) is a graphical environment where users can change the scale of the viewed area in order to see more or less detail, and to browse through different documents. A ZUI is a type of graphical user interface. Information elements appear directly on an infinite virtual desktop, instead of in windows, and users can zoom into objects of interest. For example, as the user zooms into a text object it may be represented first as a small dot, then as a thumbnail of a page of text, then as a full-sized page, and finally as a magnified view of the page. ZUIs use zooming as the main metaphor for browsing through multivariate information. Objects present inside a zoomed page can in turn be zoomed themselves to reveal further detail, allowing for recursive nesting and an arbitrary level of zoom. When the level of detail present in the resized object changes to fit the relevant information into the current size, instead of being a proportional view of the whole object, it is called semantic zooming.
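Semantic zooming amounts to mapping the current zoom level to a discrete representation rather than scaling one image proportionally. The sketch below is a minimal illustration of that idea; the threshold values and representation names are invented for the example and not drawn from any particular ZUI toolkit.

```python
def semantic_representation(zoom: float) -> str:
    """Pick a representation for a text object based on zoom level.

    Thresholds are illustrative only: a real ZUI would tune them per
    object type and available screen area.
    """
    if zoom < 0.1:
        return "dot"
    elif zoom < 0.5:
        return "thumbnail"
    elif zoom < 2.0:
        return "full page"
    else:
        return "magnified page"

# As the user zooms in, the same object is re-rendered at increasing
# levels of detail rather than simply scaled up.
print([semantic_representation(z) for z in (0.05, 0.3, 1.0, 4.0)])
# ['dot', 'thumbnail', 'full page', 'magnified page']
```

A proportional (geometric) zoom would instead render the same full-page layout at every scale, which becomes unreadable when small and wasteful when large.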
Some consider the ZUI paradigm a flexible and realistic successor to the traditional windowing GUI, being a Post-WIMP interface. Ivan Sutherland presented the first program for zooming through and creating graphical structures with constraints and instancing, on a CRT, in his Sketchpad program in 1962. A more general interface was developed by the Architecture Machine Group at MIT in the 1970s: hand tracking, touchscreens and voice control were employed to control an infinite plane of projects, contacts and interactive programs. One of the instances of this project was called Spatial Dataland. Another GUI environment of the 1970s which used the zooming idea was Smalltalk at Xerox PARC, which had infinite "desktops" that could be zoomed in upon from a bird's-eye view after the user had recognized a miniature of the window setup for the project. The longest-running effort to create a ZUI has been the Pad++ project, started by Ken Perlin, Jim Hollan and Ben Bederson at New York University and continued at the University of New Mexico under Hollan's direction.
After Pad++, Bederson developed Jazz and later Piccolo (now Piccolo2D) at the University of Maryland, College Park, maintained in Java and C#. More recent ZUI efforts include Archy by the late Jef Raskin, ZVTM developed at INRIA, and the simple ZUI of the Squeak Smalltalk programming environment and language. The term ZUI itself was coined by Franklin Servan-Schreiber and Tom Grauman while they worked together at the Sony Research Laboratories. They were developing the first zooming user interface library based on Java 1.0, in partnership with Prof. Ben Bederson of the University of New Mexico and Prof. Ken Perlin of New York University. GeoPhoenix, a Cambridge, MA, startup associated with the MIT Media Lab, founded by Julian Orbanes, Adriana Guzman and Max Riesenhuber, released the first mass-marketed commercial zoomspace in 2002-3 on the Sony CLIÉ PDA handheld, with Ken Miura of Sony. In 2006, Hillcrest Labs introduced the HoME television navigation system, the first graphical zooming interface for television. In 2007, Microsoft's Live Labs released a zooming UI for web browsing called Microsoft Live Labs Deepfish for the Windows Mobile 5 platform.
Apple's iPhone uses a stylized form of ZUI, in which panning and zooming are performed through a touch interface. A more realised ZUI is present in the iOS home screen, with zooming from the home screen into folders and into apps; the photo app zooms out to collections and to years, and the calendar app zooms between day and year views. It is not a full ZUI implementation, however, since these operations are applied to bounded spaces and have a limited range of zooming and panning. Franklin Servan-Schreiber founded Zoomorama, based on work he did at the Sony Research Laboratories in the mid-nineties. Its Zooming Browser for Collage of High Resolution Images was released in alpha in October 2007. Zoomorama's browser was entirely Flash-based; development of the project was stopped in 2010. In 2017, bigpictu.re offered an infinite notepad as a web application based on one of the first ZUI open-source libraries. Zircle UI offers an open-source UI library that uses zoomable navigation and circular shapes.
IllumiRoom is a Microsoft Research project that augments a television screen with images projected onto the wall and surrounding objects. The current proof-of-concept uses a Kinect sensor and a video projector: the Kinect sensor captures the geometry and colors of the area of the room that surrounds the television, and the projector displays video around the television that corresponds to a video source on the television, such as a video game or movie. IllumiRoom was first introduced at the 2013 Consumer Electronics Show, where Microsoft, with Samsung, showed a video presentation of the system. At CHI 2013, Microsoft presented more details of the system, including a paper written with a researcher at the University of Illinois at Urbana–Champaign. The system prototype uses a Kinect for Windows sensor: the Kinect captures the color and geometry of the room environment and the projector renders images onto the depth map acquired by the sensor. The IllumiRoom concept is based on prior work and research using focus-plus-context screens and projection mapping.
The focus-plus-context technology uses a high-resolution screen surrounded by a lower-resolution display. Microsoft's CHI 2013 research paper cites Philips' Ambilight as an example of a focus-plus-context display. In the case of IllumiRoom, the television represents the high-resolution screen and the surrounding projection is the lower-resolution display. The purpose of this technology is to provide the user with additional visual information in the visual periphery, both simulating and taking advantage of peripheral vision: while the center of a person's gaze perceives high resolution and is sensitive to color and detail, peripheral vision is less sensitive to color and detail but more sensitive to movement. IllumiRoom combines the focus-plus-context concept with real-time projection mapping; this allows the system to be used in any room, not just one where a television is surrounded by flat, white walls. The Kinect sensor is used to calibrate the projection: the projector displays a series of gray patterns, and the Kinect camera reads the size of the pattern across the projection in order to map the 3D environment.
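Gray-code structured light is a common way such projected pattern sequences are used to map a scene; the toy sketch below illustrates that general technique and is not taken from the IllumiRoom paper. Each projector column is encoded as a sequence of black/white bits across successive patterns; a camera pixel that collects its observed bit sequence can decode which projector column illuminates it.

```python
def gray_encode(n: int) -> int:
    """Binary-reflected Gray code: adjacent columns differ in one bit."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Invert the Gray code back to a plain binary index."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

NUM_BITS = 4  # 16 projector columns in this toy example

def patterns_for_column(col: int):
    """Bit sequence (one bit per projected pattern) observed by a camera
    pixel that sees projector column `col`, most significant bit first."""
    g = gray_encode(col)
    return [(g >> b) & 1 for b in reversed(range(NUM_BITS))]

def decode_observation(bits):
    """Camera side: fold the observed bits back into a column index."""
    g = 0
    for bit in bits:
        g = (g << 1) | bit
    return gray_decode(g)

# Every column round-trips through projection and decoding.
for col in range(1 << NUM_BITS):
    assert decode_observation(patterns_for_column(col)) == col
```

Gray codes are preferred over plain binary here because only one bit changes between adjacent columns, so a pixel near a stripe boundary can be off by at most one column rather than jumping to a distant index.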
Once calibrated, the Kinect sensor is no longer needed for the IllumiRoom system and can be used for gaming. IllumiRoom was developed with the open-source first-person shooter Red Eclipse as its prototype application. The system can display video game content in one of several modes. The following modes require the system to have access to the game's rendering process:
Focus + Context Full: the full game content is projected around the television.
Focus + Context Edges: only high-contrast edges are projected around the television.
Focus + Context Segmented: game content is projected onto only a segment of the surrounding environment, most often the flat wall.
Focus + Context Selective: only select game content is projected around the television.
Without access to the game's rendering, several other projection modes are available:
Peripheral Flow: the system displays a grid or starfield that moves with the video game camera around the television.
Color Augmentation: the system changes the appearance of physical objects in the room to match the theme or look of the game by saturating colors, making them appear black and white, or creating a cartoon appearance.
Texture Displacement: the illusion of distortion of physical objects in the room is created by the projector. The radial wobble effect, for example, creates the illusion that objects in the room are being affected by a rippling force field emanating from the television.
Lighting: since the projector provides the lighting for the room, it can project lighting effects that match the lighting from the video game.
Physical Interaction: objects within the game can directly interact with the room environment; for example, a ball may fall onto objects in the physical environment.
Although IllumiRoom was expected to be used in an Xbox application, the researchers have stated that the technology is, for now, only a research project and not ready for commercial use. RoomAlive, a related Microsoft Research project, uses a depth camera and video projector in a projector-camera, or "procam", setup. It is a scalable system for dynamic, real-time interactive projection mapping in which multiple such procams can be used together in a room to generate an immersive, unified projection mapping, automatically adapted to the room environment, which users can interact with physically.
Unlike IllumiRoom, which implements focus-plus-context visual presentation centered on a television screen, RoomAlive focuses on spatial augmented reality applications. In April 2015, Microsoft released the RoomAlive Toolkit, an open-source, MIT-licensed software development kit for calibrating a network of video projectors and Kinect sensors, which can be used to develop systems like those of the RoomAlive and IllumiRoom projects. The source code is available.
Microsoft Live Labs
Microsoft Live Labs was a partnership between MSN and Microsoft Research that focused on applied research for Internet products and services at Microsoft. Live Labs was headed by Dr. Gary William Flake, who prior to joining Microsoft was a principal scientist at Yahoo! Research Lab and former head of research at the Web portal's Overture Services division. Live Labs' focus was on applied research and practical applications of computer science areas including natural language processing, machine learning, information retrieval, data mining, computational linguistics and distributed computing. Microsoft Live Labs was formed on January 24, 2006. On October 8, 2010, Microsoft announced the shutdown of Live Labs and the transition of its remaining team of 68 to Microsoft Bing; as a consequence, Live Labs' original founder and leader, Dr. Gary William Flake, resigned from Microsoft.
Livestation was a platform for distributing live television and radio broadcasts over a data network. It was developed by Skinkers Ltd. and later became an independent company called Livestation Ltd. The service was based on peer-to-peer technology acquired from Microsoft Research. In late 2016, the service closed down without notice. Livestation aggregated international news channels online and offered them in a number of ways. Free to watch: a number of channels could be watched for free on the Livestation website or on its desktop player, a downloadable video application that presented all the channels through one interface. Premium service: some of the free channels were available on a subscription basis, in higher quality and with lower delay, delivered via an international content distribution network for higher reliability. Mobile: Livestation launched BBC World News on the iPhone in 16 European countries and Al Jazeera English globally.
The apps were available in the iPhone App Store and streamed the live TV channels 24/7 over both Wi-Fi and 3G connections. Livestation broadcast streams encoded in VC-1 format, with playback controls overlaid on top of the video stream. Unlike services such as Joost, which offered video-on-demand channels, Livestation streamed live broadcasts. Livestation provided a website, a mobile website and native applications for iOS, Android and BlackBerry handsets. Early models of Samsung TVs were supported, and desktop software was available for Windows and Linux; the cross-platform compatibility of the desktop software was facilitated by the Qt framework. Social networking features were added, including the ability to chat with other viewers and to find out what others were watching through a user-generated rating system. Users could search and select the available channels either from the website or from within the software. Livestation reported growth of 1,047 percent in the first quarter of 2011, resulting in the first profitable quarter in its history.
Between mid-June and mid-July 2013, Livestation suffered a prolonged series of technical issues and was unavailable to some users. In early 2015, Livestation re-branded its entire site, changing which channels were offered and bringing in an interactive feature; some stations on the app were not on the website, and vice versa. Stations available until closure, including former live TV news channels in the global offering, included as of 2016: ABS-CBN News Channel, Al Aan TV, Al-Alam News Network, Al Arabiya, Al Jazeera, Al Jazeera English, Al Jazeera Mubasher, Al Mayadeen, Al Nabaa TV, BBC Arabic, BBC Persian, BBC World News, BBC World Service Radio, CNBC, CNBC Arabiya, Bloomberg TV, BBC News Channel, CCTV News, CNC World, CNN International, C-SPAN, Democratic Voice of Burma, Deutsche Welle TV and radio, eNCA, Euronews, Espreso TV, Fox News Radio, France 24, HispanTV, i24news, Kurdast News, Libya TV, NASA TV, NHK World News, One News, Press TV, RFI Afrique and Monde, Reuters TV, Russia Today, SAMAA TV, Sky News Arabia, Sky News International, TeleSUR, United Nations Television, UNHCR TV and VOA Persian. The Livestation site is closed.
Mobile browser
WAP 2.0 specifies XHTML Mobile Profile plus WAP CSS, subsets of the W3C's standard XHTML and CSS with minor mobile extensions. Newer mobile browsers are full-featured web browsers capable of HTML, CSS and ECMAScript, as well as mobile technologies such as WML, i-mode HTML or cHTML. To accommodate small screens, they use Post-WIMP interfaces. The first mobile browser for a PDA was PocketWeb for the Apple Newton, created at TecO in 1994, followed by the first commercial product, NetHopper, released in August 1996. The so-called "microbrowser" technologies such as WAP, NTT DoCoMo's i-mode platform and Openwave's HDML platform fueled the first wave of interest in wireless data services. The first deployment of a mobile browser on a mobile phone was in 1997, when Unwired Planet put their UP.Browser on AT&T handsets to give users access to HDML content. A British company, STNC Ltd. developed a mobile browser, HitchHiker, in 1997, intended to present the entire device UI. The demonstration platform for this mobile browser had 1 MIPS of total processing power.
This was a single-core platform, running the GSM stack on the same processor as the application stack. In 1999, STNC was acquired by Microsoft, and HitchHiker became Microsoft Mobile Explorer 2.0, not related to the primitive Microsoft Mobile Explorer 1.0. HitchHiker is believed to be the first mobile browser with a unified rendering model, handling HTML and WAP along with ECMAScript, WMLScript, POP3 and IMAP mail in a single client. Although it was not used in practice, it was possible to combine HTML and WAP in the same page, though this would render the page invalid for any other device. Mobile Explorer 2.0 was available on the Benefon Q and the Sony CMD-Z5, CMD-J5, CMD-MZ5, CMD-J6, CMD-Z7, CMD-J7 and CMD-J70. With the addition of a messaging kernel and a driver model, it was powerful enough to be the operating system for certain embedded devices; one such device was the Amstrad e-m@iler and e-m@iler 2. This code formed the basis for MME3. Multiple companies offered browsers for the Palm OS platform. The first HTML browser for Palm OS 1.0 was HandWeb by Smartcode Software, released in 1997.
HandWeb included its own TCP/IP stack; Smartcode was acquired by Palm in 1999. Mobile browsers for the Palm OS platform multiplied after the release of Palm OS 2.0, which included a TCP/IP stack. A freeware browser for the Palm OS was Palmscape, written in 1998 by Kazuho Oku in Japan, who went on to found Ilinx; it was still in limited use as late as 2003. Qualcomm developed the Eudora Web browser and launched it with the Palm OS-based QCP smartphone. ProxiWeb was a proxy-based web browsing solution, developed by Ian Goldberg and others at the University of California, Berkeley, and acquired by PumaTech. Released in 2001, Mobile Explorer 3.0 added i-mode compatibility plus numerous proprietary schemes. By imaginatively combining these proprietary schemes with WAP protocols, MME3.0 implemented over-the-air database synchronisation, push email, push information clients and PIM functionality. The cancelled Sony Ericsson CMD-Z700 was to feature heavy integration with MME3.0. Although Mobile Explorer was ahead of its time in the mobile phone space, development was stopped in 2002.
In 2002, Palm, Inc. offered Web Pro on Tungsten PDAs, based upon a Novarra browser. PalmSource offered a competing web browser based on Access NetFront. Opera Software pioneered with its Small-Screen Rendering and Medium-Screen Rendering technology; the Opera web browser is able to reformat regular web pages for optimal fit on small and medium-sized screens. It was the first available mobile browser to support Ajax and the first mobile browser to pass the Acid2 test. Distinct from a mobile browser is a web-based emulator, which uses a "virtual handset" to display WAP pages on a computer screen, implemented either in Java or as an HTML transcoder. Because some mobile browsers are miniaturized web browsers, some mobile device providers also provide browsers for desktop and laptop computers. Mobile transcoders reformat and compress web content for mobile devices and must be used in conjunction with built-in or user-installed mobile browsers. The following are several leading mobile transcoding services.
Openwave Web Adapter - used by Vodacom
Vision Mobile Server
Skweezer - used by Orange, JumpTap, Medio and others
Teashark
Opera Mini
Loband by Aptivate
Google Mobilizer - defunct since February 2016; replaced with Google Web Light
Smart