World Wide Web
The World Wide Web, commonly known as the Web, is an information space in which documents and other web resources are identified by Uniform Resource Locators (URLs), may be interlinked by hypertext, and are accessible over the Internet. Users access the resources of the WWW through a software application called a web browser. The English scientist Tim Berners-Lee invented the World Wide Web in 1989 and wrote the first web browser in 1990 while employed at CERN, near Geneva, Switzerland. The browser was released outside CERN in 1991, first to other research institutions in January and then to the general public in August. The World Wide Web has been central to the development of the Information Age and is the primary tool billions of people use to interact on the Internet. Web resources may be any type of downloaded media, but web pages are hypertext documents formatted in Hypertext Markup Language (HTML); such formatting allows for embedded hyperlinks that contain URLs and permit users to navigate to other web resources.
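As an illustration of how a URL identifies a resource, Python's standard-library `urllib.parse` can split an address into the parts a browser uses to locate it. The example URL below is the address of the historical CERN project page; the snippet is a sketch for illustration only.

```python
# Illustration: splitting a URL into the components a web browser uses.
# Uses only Python's standard library; the example URL is the address of
# the page that described the WorldWideWeb project itself.
from urllib.parse import urlparse

url = "http://info.cern.ch/hypertext/WWW/TheProject.html"
parts = urlparse(url)

print(parts.scheme)  # "http"  -> protocol used to fetch the resource
print(parts.netloc)  # "info.cern.ch" -> host name of the web server
print(parts.path)    # "/hypertext/WWW/TheProject.html" -> resource path
```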
In addition to text, web pages may contain images, video, and software components that are rendered in the user's web browser as coherent pages of multimedia content. Multiple web resources with a common theme, a common domain name, or both make up a website. Websites are stored on computers running a program called a web server, which responds to requests made over the Internet from web browsers running on users' computers. Website content can be provided by a publisher or contributed interactively by users, or the content may depend upon the users or their actions. Websites may be provided for myriad informative, commercial, governmental, or non-governmental purposes. Tim Berners-Lee's vision of a global hyperlinked information system became a possibility by the second half of the 1980s. By 1985, the global Internet had begun to proliferate in Europe and the Domain Name System had come into being. In 1988 the first direct IP connection between Europe and North America was made, and Berners-Lee began to discuss the possibility of a web-like system at CERN.
While working at CERN, Berners-Lee became frustrated with the inefficiency and difficulty of finding information stored on different computers. On March 12, 1989, he submitted a memorandum titled "Information Management: A Proposal" to the management at CERN for a system called "Mesh". It referenced ENQUIRE, a database and software project he had built in 1980 that used the term "web", and described a more elaborate information management system based on links embedded in text: "Imagine the references in this document all being associated with the network address of the thing to which they referred, so that while reading this document, you could skip to them with a click of the mouse." Such a system, he explained, could be referred to using one of the existing meanings of the word hypertext, a term that he says was coined in the 1950s. There is no reason, the proposal continues, why such hypertext links could not encompass multimedia documents including graphics and video; Berners-Lee accordingly goes on to use the term hypermedia.
With help from his colleague and fellow hypertext enthusiast Robert Cailliau, he published a more formal proposal on 12 November 1990 to build a "Hypertext project" called "WorldWideWeb" as a "web" of "hypertext documents" to be viewed by "browsers" using a client–server architecture. At this point HTML and HTTP had been in development for about two months, and the first web server was about a month from completing its first successful test. The proposal estimated that a read-only web would be developed within three months and that it would take six months to achieve "the creation of new links and new material by readers, authorship becomes universal" as well as "the automatic notification of a reader when new material of interest to him/her has become available". While the read-only goal was met, accessible authorship of web content took longer to mature, arriving with the wiki concept, WebDAV, Web 2.0, and RSS/Atom. The proposal was modelled after the SGML reader Dynatext by Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University.
The Dynatext system, licensed by CERN, was a key player in the extension of SGML ISO 8879:1986 to hypermedia within HyTime, but it was considered too expensive and to have an inappropriate licensing policy for use in the general high-energy-physics community, namely a fee for each document and each document alteration. Berners-Lee used a NeXT Computer as the world's first web server and to write the first web browser, WorldWideWeb, in 1990. By Christmas 1990, he had built all the tools necessary for a working Web: the first web browser and the first web server. The first web site, which described the project itself, was published on 20 December 1990. The first web page may be lost, but Paul Jones of UNC-Chapel Hill in North Carolina announced in May 2013 that Berners-Lee had given him what he says is the oldest known web page during a 1991 visit to UNC; Jones stored it on his NeXT computer. On 6 August 1991, Berners-Lee published a short summary of the World Wide Web project on the newsgroup alt.hypertext.
This date is sometimes confused with the public availability of the first web servers, which had occurred months earlier. As another example of such confusion, several news media reported that the first photo on the Web was published by Berners-Lee in 1992, an image of the CERN house band Les Horribles Cernettes taken by Silvano de Gennaro.
Camera phone
A camera phone is a mobile phone that is able to capture photographs and record video using one or more built-in digital cameras. The first camera phone, a Sharp J-SH04 J-Phone model, was sold in Japan in 2000, although some argue that the SCH-V200 and the Kyocera VP-210 Visual Phone, introduced months earlier in South Korea and Japan respectively, were the first camera phones. Most camera phones are simpler than separate digital cameras; their usual fixed-focus lenses and smaller sensors limit their performance in poor lighting. Lacking a physical shutter, some have a long shutter lag. Photoflash is provided by an LED source, which illuminates less intensely over a much longer exposure time than a bright and near-instantaneous flash strobe. Optical zoom and tripod screws are rare, and none has a hot shoe for attaching an external flash; some lack a USB connection or a removable memory card. Most have Bluetooth and WiFi and can take geotagged photographs. Some of the more expensive camera phones have only a few of these technical disadvantages, and with bigger image sensors their capabilities approach those of low-end point-and-shoot cameras.
In the smartphone era, the steady sales increase of camera phones caused point-and-shoot camera sales to peak about 2010 and decline thereafter. Most model lines improve their cameras every year or two. Most modern smartphones have only a menu choice to start a camera application program and an on-screen button to activate the shutter; some have a separate camera button for quickness and convenience. A few camera phones are designed to resemble separate low-end digital compact cameras in appearance, and to some degree in features and picture quality, and are branded as both mobile phones and cameras. The principal advantages of camera phones are cost and compactness. Smartphones that are camera phones may run mobile applications to add capabilities such as geotagging and image stitching. Smartphones can use their touch screens to direct the camera to focus on a particular object in the field of view, giving an inexperienced user a degree of focus control exceeded only by seasoned photographers using manual focus. However, the touch screen, being a general-purpose control, lacks the agility of a separate camera's dedicated buttons and dials.
Nearly all camera phones use CMOS image sensors, due to their reduced power consumption compared with CCD sensors, which are used in only a few camera phones. Some camera phones use the more expensive back-side-illuminated CMOS, which uses less energy than conventional CMOS, although it is more expensive than both CMOS and CCD. As camera phone technology has progressed over the years, lens design has evolved from a simple double Gauss or Cooke triplet to many molded plastic aspheric lens elements made with varying dispersions and refractive indexes. The latest generation of phone cameras apply distortion and various optical aberration corrections to the image before it is compressed into the JPEG format. Most camera phones have a digital zoom feature; a few have optical zoom. An external camera can be added, coupled wirelessly to the phone by Wi-Fi; such cameras are compatible with most smartphones. Images are saved in the JPEG file format, except on some high-end camera phones that offer RAW capture, a facility also available since Android 5.0 Lollipop.
Windows Phones can be configured to operate as a camera even if the phone is asleep. An external flash can be employed. Phones store pictures and video in a directory called /DCIM in internal memory; some can store this media in external memory. Camera phones can share pictures instantly and automatically via a sharing infrastructure integrated with the carrier network. Early developers, including Philippe Kahn, envisioned a technology that would enable service providers to "collect a fee every time anyone snaps a photo". The resulting technologies, Multimedia Messaging Service and Sha-Mail, were developed in parallel with, and in competition with, open Internet-based mobile communication provided by GPRS and 3G networks. The first commercial camera phone complete with infrastructure was the J-SH04, made by Sharp Corporation. The first commercial deployment of camera phones in North America came in 2004, when the Sprint wireless carrier deployed over one million camera phones manufactured by Sanyo, launched with the PictureMail infrastructure developed and managed by LightSurf.
While early phones had Internet connectivity, working web browsers, and email programs, the phone menu offered no way of including a photo in an email or uploading it to a web site, and connecting cables or removable media that would enable local transfer of pictures were usually missing. Modern smartphones, by contrast, have extensive connectivity and transfer options along with photograph attachment features. During 2003, some phones in Europe without built-in cameras supported MMS and external cameras that could be connected with a small cable or directly to the data port at the base of the phone; these external cameras were comparable in quality to those fitted to regular camera phones at the time, offering VGA resolution. One example was the Nokia Fun Camera, announced together with the Nokia 3100 in June 2003; the idea was for it to be used on devices without a built-in camera.
Digital camera
A digital camera or digicam is a camera that captures photographs in digital memory. Most cameras produced today are digital. While dedicated digital cameras still exist, many more cameras are now incorporated into mobile devices such as portable touchscreen computers, which can, among many other purposes, use their cameras to initiate live videotelephony and directly edit and upload imagery to others. High-end, high-definition dedicated cameras, however, are still used by professionals. Digital and movie cameras share an optical system that uses a lens with a variable diaphragm to focus light onto an image pickup device; the diaphragm and shutter admit the correct amount of light to the imager, just as with film, but the image pickup device is electronic rather than chemical. Unlike film cameras, however, digital cameras can display images on a screen immediately after they are recorded and can store and delete images from memory. Many digital cameras can also record moving videos with sound, and some can perform other elementary image editing.
The history of the digital camera began with Eugene F. Lally of the Jet Propulsion Laboratory, who thought about how to use a mosaic photosensor to capture digital images; his 1961 idea was to take pictures of the planets and stars while travelling through space to give information about the astronauts' position. As with Texas Instruments employee Willis Adcock's filmless camera in 1972, the technology had yet to catch up with the concept. The Cromemco Cyclops was an all-digital camera introduced as a commercial product in 1975; its design was published as a hobbyist construction project in the February 1975 issue of Popular Electronics magazine, and it used a 32×32 metal-oxide-semiconductor sensor. Steven Sasson, an engineer at Eastman Kodak, built the first self-contained electronic camera that used a charge-coupled device image sensor in 1975. Early uses were military and scientific. In 1986, the Japanese company Nikon introduced the first digital single-lens reflex camera, the Nikon SVC. In the mid-to-late 1990s, DSLR cameras became common among consumers.
By the mid-2000s, DSLR cameras had largely replaced film cameras. In 2000, Sharp introduced the J-SH04 J-Phone in Japan, and by the mid-2000s higher-end cell phones had an integrated digital camera; by the beginning of the 2010s, all smartphones had one. The two major types of digital image sensor are CCD and CMOS. A CCD sensor has one amplifier for all the pixels, while each pixel in a CMOS active-pixel sensor has its own amplifier; compared with CCDs, CMOS sensors use less power. Cameras with a small sensor use a back-side-illuminated CMOS sensor. Overall final image quality depends more on the image processing capability of the camera than on the sensor type. The resolution of a digital camera is limited by the image sensor that turns light into discrete signals: the brighter the image at a given point on the sensor, the larger the value read for that pixel. Depending on the physical structure of the sensor, a color filter array may be used, which requires demosaicing to recreate a full-color image.
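The demosaicing step just mentioned can be illustrated with a deliberately minimal sketch: under an RGGB Bayer filter each photosite records only one color channel, and the two missing channels at every pixel are filled in by averaging the nearest same-color samples. This is a hypothetical, simplified interpolation, not any camera's actual pipeline; real pipelines use far more sophisticated, edge-aware algorithms.

```python
# Minimal sketch of demosaicing an RGGB Bayer mosaic by averaging the
# nearest same-color samples. Illustration only; real camera pipelines
# use edge-aware interpolation.

def bayer_color(y, x):
    """Color channel recorded at (y, x) in an RGGB pattern."""
    if y % 2 == 0:
        return 'R' if x % 2 == 0 else 'G'
    return 'G' if x % 2 == 0 else 'B'

def demosaic(raw):
    """Fill in the two missing channels at every pixel by averaging the
    same-color samples in the surrounding 3x3 neighborhood.
    raw is a list of lists of ints; returns (R, G, B) tuples."""
    h, w = len(raw), len(raw[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            acc = {'R': [], 'G': [], 'B': []}
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        acc[bayer_color(ny, nx)].append(raw[ny][nx])
            # Any 3x3 window over a Bayer mosaic contains all three
            # colors, so none of these lists is empty.
            row.append(tuple(sum(v) // len(v)
                             for v in (acc['R'], acc['G'], acc['B'])))
        out.append(row)
    return out
```

For example, a single 2x2 RGGB tile with raw samples `[[100, 50], [50, 200]]` reconstructs to the same `(100, 50, 200)` color at all four pixels, since every pixel sees the same one R, two G, and one B sample.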
The number of pixels in the sensor determines the camera's "pixel count". In a typical sensor, the pixel count is the product of the number of rows and the number of columns: for example, a 1,000 by 1,000 pixel sensor has 1 megapixel. The final quality of an image depends on all the optical transformations in the chain that produces it; as Carl Zeiss points out, the weakest link in an optical chain determines the final outcome. In the case of a digital camera, a simplistic way of expressing this is that the lens determines the maximum sharpness of the image while the image sensor determines the maximum resolution; the illustration on the right can be said to compare a lens with poor sharpness on a camera with high resolution to a lens with good sharpness on a camera with lower resolution. Since the first digital backs were introduced, there have been three main methods of capturing the image, each based on the hardware configuration of the sensor and color filters. Single-shot capture systems use either one sensor chip with a Bayer filter mosaic or three separate image sensors which are exposed to the same image via a beam splitter.
Multi-shot capture exposes the sensor to the image in a sequence of three or more openings of the lens aperture. There are several ways of applying the multi-shot technique; the most common was to use a single image sensor with three filters passed in front of the sensor in sequence to obtain the additive color information. Another multiple-shot method, called microscanning, uses a single sensor chip with a Bayer filter and physically moves the sensor on the focus plane of the lens to construct a higher-resolution image than the native resolution of the chip. A third variation combines the two methods without a Bayer filter on the chip. The third main method is called scanning because the sensor moves across the focal plane much like the sensor of an image scanner. The linear or tri-linear sensors in scanning cameras use only a single line of photosensors, or three lines for the three colors; scanning may also be accomplished by rotating the whole camera, and a digital rotating line camera offers images of high total resolution.
The choice of method for a given capture is determined by the subject matter: it is inappropriate to attempt to capture a moving subject with anything but a single-shot system.
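The pixel-count arithmetic given earlier (rows times columns, conventionally quoted in megapixels) is simple enough to sketch directly; the helper function name below is hypothetical.

```python
# Hypothetical helper for the pixel-count arithmetic described earlier:
# a sensor's pixel count is the number of rows times the number of
# columns, conventionally quoted in megapixels (millions of pixels).

def megapixels(rows: int, cols: int) -> float:
    """Return the pixel count of a rows x cols sensor in megapixels."""
    return rows * cols / 1_000_000

# The document's example: a 1,000 by 1,000 pixel sensor has 1 megapixel.
print(megapixels(1000, 1000))  # 1.0
# A 4000 x 3000 sensor has 12 megapixels.
print(megapixels(4000, 3000))  # 12.0
```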