A screenshot, also called a screen capture or screen grab, is a digital image of what is displayed on a monitor, television, or other visual output device. A common screenshot is created by software running on the device, but a screenshot may also be created by taking a photo of the screen; the first screenshots were created this way with the first interactive computers around 1960. Through the 1980s, computer operating systems did not universally have built-in functionality for capturing screenshots. Sometimes text-only screens could be dumped to a text file, but the result would capture only the content of the screen, not its appearance, nor were graphics screens preservable this way. Some systems had a BSAVE command that could be used to capture the area of memory where screen data was stored, but this required access to a BASIC prompt. Systems with composite video output could be connected to a VCR, and entire screencasts could be preserved this way. Screenshot kits were available for standard cameras that included a long antireflective hood to attach between the screen and the camera lens, as well as a close-up lens for the camera.
Polaroid film was popular for capturing screenshots because of the instant results and close-focusing capability of Polaroid cameras. In 1988, Polaroid introduced Spectra film with a 9.2 × 7.3 cm image size better suited to the 4:3 aspect ratio of CRT screens. Screenshot support was added to Android in version 4.0. In older versions, some devices supported screenshot functionality with one of the following combinations: press and hold Home+Power; press and hold Back+Power; press and hold Back and double-tap Home. Screenshots taken by pressing Volume Down+Power are saved in the "Screenshot" folder in the gallery after a short sound and visual effect. On certain devices that use modified Android, when a keyboard is connected via USB-OTG, pressing the Print Screen key will take a screenshot. There is no direct way to take screenshots programmatically in non-system apps; however, on most devices, apps may use the system screenshot functionality without special permissions. On Amazon Kindle devices, one can take a screenshot as follows:
- Kindle Paperwhite – touch and hold the top-left and bottom-right corners of the screen.
The screen will flash and the image will be saved.
- Kindle or Kindle Touch – press and hold Home and tap anywhere on the screen.
- Kindle Keyboard – press Alt+⇧ Shift+G.
- Kindle Fire 2 and Kindle Fire HD – press and hold Volume Down+Power at the same time, then open the Photos app to access the pictures.
- Kindle Fire – connect the Kindle Fire to a computer with the Kindle SDK installed and take a screenshot through the development environment.
On Chromebook and related devices with the Chrome OS keyboard layout, pressing the equivalent of Ctrl+F5 on a standard keyboard will capture the entire screen, while the equivalent of Ctrl+⇧ Shift+F5 will turn the mouse into a rectangle-select tool for capturing a custom portion of the screen. Screenshots of the HP webOS can also be taken: for webOS phones, press Orange/Gray Key+Sym+P; for the HP TouchPad, press Home Key+Power. In either case, screenshots will be saved to the "Screen captures" folder in the "Photos" app. On KDE or GNOME, the PrtScr key behaves much as it does on Windows.
In addition, the following screenshot utilities are bundled with Linux distributions:
- GIMP: a raster graphics editor that can also take screenshots
- gnome-screenshot: the default screen-grabbing utility in GNOME
- ImageMagick: has an "import" command-line tool that captures screenshots in a variety of formats; type import -window root ~/screenshot.png to capture the entire screen to your home directory
- KSnapshot: the default screen-grabbing utility in KDE
- Shutter: a screenshot utility written in Perl
- scrot: allows selecting arbitrary areas of the X screen and windows
- xwd: the screen-capture utility of the X Window System
A screenshot can be taken on iOS by pressing the Home button and the Lock button together; on the iPhone X, it is achieved by pressing the Volume Up and Lock buttons. The screen will flash and the picture will be stored in PNG format in the "Camera Roll" if the device has a camera, or in "Saved Photos" if the device does not. From the iOS 11 update, a small preview pops up in the bottom-left corner, which can be swiped left to save or tapped to open an editor where the screenshot can be cropped or doodled on before being saved or shared.
The screenshot feature has been available since early versions of iOS. The same ⌘ Cmd+⇧ Shift+3 shortcut used on Mac OS takes a screenshot in iOS, with ⌘ Cmd+⇧ Shift+4 bringing the screenshot directly into iOS's editing window in iOS 11 and later. Third-party Bluetooth keyboards may have a key or function-key command devoted to taking a screenshot. On macOS, a user can take a screenshot of an entire screen by pressing ⌘ Cmd+⇧ Shift+3, or of a chosen area of the screen with ⌘ Cmd+⇧ Shift+4; the screenshot is saved as one PNG file per attached monitor. If the user holds down Ctrl while doing either, the screenshot is copied to the clipboard instead. Beginning with Mac OS X Panther, it is possible to take a screenshot of an active application window: after pressing ⌘ Cmd+⇧ Shift+4, pressing the Spacebar turns the cross-hair cursor into a small camera icon; the window under the cursor is highlighted, and a click of the mouse or trackpad captures a screenshot of the entire highlighted element. A provided application called Grab will capture a chosen area of the screen.
A presentation program is a software package used to display information in the form of a slide show. It has three major functions: an editor that allows text to be inserted and formatted, a method for inserting and manipulating graphic images, and a slide-show system to display the content. Presentation software can be viewed as enabling a functionally specific category of electronic media, with its own distinct culture and practices compared to traditional presentation media. Presentations in this mode of delivery are pervasive in all aspects of business communication and business planning, as well as in academic and professional conference settings and in the knowledge economy, where ideas are a primary work output. Presentations may also feature prominently in political settings and workplace politics, where persuasion is a central determinant of group outcomes. Most modern meeting rooms and conference halls are configured to include presentation electronics, such as projectors suitable for displaying presentation slides driven by the presenter's own laptop, under direct control of the presentation program used to develop the presentation.
The presenter delivers a lecture using the slides as a visual aid for both the presenter and the audience. In presentations, the visual material is considered supplemental to a strong aural presentation that accompanies the slide show, but in many cases, such as statistical graphics, it is difficult to convey essential information other than by visual means. Endemic over-reliance on slides with low information density and a poor accompanying lecture has given presentation software a negative reputation as sometimes functioning as a crutch for the poorly informed or the poorly prepared. Early presentation graphics software ran on computer workstations, such as those manufactured by Trollman, Genigraphics and Dicomed. Compared with traditional pasteup, these systems made it quite easy to make last-minute changes and much easier to produce a large number of slides in a small amount of time. However, the workstations required skilled operators, and a single workstation represented an investment of $50,000 to $200,000.
In the mid-1980s, developments in the personal computer world changed this. Inexpensive, specialized applications now made it possible for anyone with a PC to create professional-looking presentation graphics; these programs were used to generate 35 mm slides, to be presented using a slide projector. As these programs became more common in the late 1980s, several companies set up services that would accept the shows on diskette and create slides using a film recorder, or print transparencies. In the 1990s, dedicated LCD-based screens that could be placed on overhead projectors started to replace the transparencies, and by the late 1990s they had all been replaced by video projectors. The first commercial computer software intended for creating WYSIWYG presentations was developed at Hewlett-Packard in 1979 and called BRUNO, later HP-Draw. The first microcomputer-based presentation software was Cromemco's Slidemaster, developed by John F. Dunn and released by Cromemco in 1981; the first software to display a presentation on a personal computer screen was VCN ExecuVision, developed in 1982.
This program allowed users to choose from a library of images to accompany the text of their presentation. PowerPoint was introduced for the Macintosh computer in 1987. A presentation program is meant to help both the speaker, with easier access to his or her ideas, and the participants, with visual information that complements the talk. There are many different types of presentations, including professional, educational, and those for general communication. Presentation programs can either supplement or replace the use of older visual-aid technology, such as pamphlets, chalkboards, flip charts, posters and overhead transparencies. Text, graphics and other objects are positioned on individual pages called "slides" or "foils"; the "slide" analogy is a reference to the slide projector, a device that has become somewhat obsolete due to the use of presentation software. Slides can be printed, or navigated through at the command of the presenter. An entire presentation can be saved in video format, and the slides can be saved as images in common file formats for future reference.
Transitions between slides can be animated in a variety of ways, as can the emergence of elements on a slide itself. A presentation has many constraints, the most important being the limited time available to present consistent information. Many presentation programs come with pre-designed images and/or have the ability to import graphic images, such as Visio and Edraw Max; some tools can search and import images from Flickr or Google directly. Custom graphics can be created in other programs, such as Adobe Photoshop or GIMP, and exported; the concept of clip art originated with the image library that came as a complement to VCN ExecuVision, beginning in 1983. With the growth of digital photography and video, many programs that handle these types of media include presentation functions for displaying them in a similar "slide show" format, for example iPhoto; these programs allow groups of digital photos to be displayed in a slide show with options such as selecting transitions and choosing whether or not the show stops at the end or continues.
Electronic paper or e-paper, sometimes called electronic ink or e-ink, is a class of display devices that mimic the appearance of ordinary ink on paper. Unlike conventional backlit flat-panel displays that emit light, electronic paper displays reflect light like paper; this may make them more comfortable to read and may provide a wider viewing angle than most light-emitting displays. The contrast ratio of electronic displays available as of 2008 approaches that of newspaper, and newly developed displays are better. An ideal e-paper display can be read in direct sunlight without the image appearing to fade. Many electronic paper technologies hold static text and images indefinitely without electricity. Flexible electronic paper uses plastic substrates and plastic electronics for the display backplane. There is ongoing competition among manufacturers to provide full-color capability. Applications of electronic visual displays include electronic pricing labels in retail shops, digital signage, timetables at bus stations, electronic billboards, smartphone displays, and e-readers able to display digital versions of books and magazines.
Electronic paper was first developed in the 1970s by Nick Sheridon at Xerox's Palo Alto Research Center. The first electronic paper, called Gyricon, consisted of polyethylene spheres between 75 and 106 micrometers across. Each sphere is a Janus particle composed of negatively charged black plastic on one side and positively charged white plastic on the other. The spheres are embedded in a transparent silicone sheet, with each sphere suspended in a bubble of oil so that it can rotate freely; the polarity of the voltage applied to each pair of electrodes determines whether the white or black side is face-up, thus giving the pixel a white or black appearance. At the FPD 2008 exhibition, the Japanese company Soken demonstrated a wall with electronic wallpaper using this technology. In 2007, the Estonian company Visitret Displays was developing this kind of display using polyvinylidene fluoride as the material for the spheres, improving the video speed and decreasing the control voltage. In the simplest implementation of an electrophoretic display, titanium dioxide particles one micrometer in diameter are dispersed in a hydrocarbon oil.
A dark-colored dye is added to the oil, along with surfactants and charging agents that cause the particles to take on an electric charge. This mixture is placed between two parallel, conductive plates separated by a gap of 10 to 100 micrometers. When a voltage is applied across the two plates, the particles migrate electrophoretically to the plate that bears the opposite charge from that on the particles. When the particles are located at the front side of the display, it appears white, because light is scattered back to the viewer by the high-index titania particles; when the particles are located at the rear side of the display, it appears dark, because the incident light is absorbed by the colored dye. If the rear electrode is divided into a number of small picture elements, an image can be formed by applying the appropriate voltage to each region of the display to create a pattern of reflecting and absorbing regions. Electrophoretic displays are considered prime examples of the electronic paper category because of their paper-like appearance and low power consumption.
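The migration logic just described can be sketched as a toy model. The charge sign and voltages below are assumptions chosen for illustration, not any manufacturer's actual drive scheme:

```python
# Toy model of a single electrophoretic pixel. Assumed for illustration:
# the white titania particles carry a positive charge, so they migrate
# toward whichever plate is held at a negative voltage.

def pixel_appearance(front_plate_voltage: float) -> str:
    """Return what the viewer sees for a given front-plate voltage.

    A negative front plate attracts the (assumed positive) white
    particles forward, scattering light back to the viewer -> "white".
    A positive front plate pushes them to the rear, leaving the dark
    dye visible -> "dark".
    """
    return "white" if front_plate_voltage < 0 else "dark"

# Dividing the rear electrode into small picture elements gives an image:
row_voltages = [-5, +5, +5, -5]          # one row of four pixels
print([pixel_appearance(v) for v in row_voltages])
# → ['white', 'dark', 'dark', 'white']
```

Real drive electronics are considerably more involved (waveforms are shaped to avoid ghosting and d.c. imbalance), but the white-forward/dark-rear logic is the core of the technique.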
Examples of commercial electrophoretic displays include the high-resolution active-matrix displays used in the Amazon Kindle, Barnes & Noble Nook, Sony Librie, Sony Reader, Kobo eReader and iRex iLiad e-readers. These displays are constructed from an electrophoretic imaging film manufactured by E Ink Corporation. A mobile phone that used the technology is the Motorola Fone. Electrophoretic display technology has also been developed by SiPix and Bridgestone/Delta. SiPix is now part of E Ink; the SiPix design uses a flexible 0.15 mm Microcup architecture, instead of E Ink's 0.04 mm diameter microcapsules. Bridgestone Corp.'s Advanced Materials Division cooperated with Delta Optoelectronics Inc. in developing Quick Response Liquid Powder Display technology. Electrophoretic displays can be manufactured using the Electronics on Plastic by Laser Release process developed by Philips Research to enable existing AM-LCD manufacturing plants to create flexible plastic displays. An electrophoretic display forms images by rearranging charged pigment particles with an applied electric field.
In the 1990s, another type of electronic ink, based on a microencapsulated electrophoretic display, was conceived and prototyped by a team of undergraduates at MIT, as described in their Nature paper. J. D. Albert, Barrett Comiskey, Joseph Jacobson, Jeremy Rubin and Russ Wilcox co-founded E Ink Corporation in 1997 to commercialize the technology. E Ink subsequently formed a partnership with Philips Components two years later to develop and market the technology. In 2005, Philips sold the electronic paper business, as well as its related patents, to Prime View International. "It has for many years been an ambition of researchers in display media to create a flexible low-cost system, the electronic analogue of paper. In this context, microparticle-based displays have long intrigued researchers. Switchable contrast in such displays is achieved by the electromigration of scattering or absorbing microparticles, quite distinct from the molecular-scale properties that govern the behaviour of the more familiar liquid-crystal displays.
Micro-particle-based displays possess intrinsic bistability, exhibit low-power d.c. field addressing and have demonstrated high contrast and reflectivity. These features, combined with a near-lambertian viewing characteristic, result in an 'ink on paper' look, but such displays have to date suffered from
WYSIWYG is an acronym for "what you see is what you get". In computing, a WYSIWYG editor is a system in which content can be edited in a form resembling its appearance when printed or displayed as a finished product, such as a printed document, web page, or slide presentation. WYSIWYG implies a user interface that allows the user to view something very similar to the end result while the document is being created. In general, WYSIWYG implies the ability to directly manipulate the layout of a document without having to type or remember the names of layout commands. The actual meaning depends on the user's perspective: in presentation programs, compound documents and web pages, WYSIWYG means the display represents the appearance of the page as shown to the end-user, but does not necessarily reflect how the page will be printed unless the printer is matched to the editing program, as it was with the Xerox Star and early versions of the Apple Macintosh. In word processing and desktop publishing applications, WYSIWYG means that the display simulates the appearance and represents the effect of fonts and line breaks on the final pagination using a specific printer configuration, so that, for example, a citation on page 1 of a 500-page document can accurately refer to a reference three hundred pages later.
WYSIWYG also describes ways to manipulate 3D models in stereochemistry, computer-aided design and 3D computer graphics. Modern software does a good job of optimizing the screen display for a particular type of output. For example, a word processor is optimized for output to a typical printer; the software often emulates the resolution of the printer in order to get as close as possible to WYSIWYG. However, that is not the main attraction of WYSIWYG, which is the ability of the user to visualize what they are producing. In many situations, the subtle differences between what the user sees and what the user gets are unimportant. In fact, applications may offer multiple WYSIWYG modes with different levels of "realism", including a composition mode, in which the user sees something somewhat similar to the end result but with additional information useful while composing, such as section breaks and non-printing characters, and which uses a layout more conducive to composing than to final layout; and a layout mode, in which the user sees something similar to the end result but with some additional information useful in ensuring that elements are properly aligned and spaced, such as margin lines.
There may also be a preview mode, in which the application attempts to present a representation as close to the final result as possible. Before the adoption of WYSIWYG techniques, text appeared in editors using the system's standard typeface and style, with little indication of layout. Users were required to enter special non-printing control codes to indicate that some text should be in boldface, italics, or a different typeface or size. In this environment there was little distinction between text editors and word processors. These applications used an arbitrary markup language to define the codes/tags; each program had its own special way to format a document, and it was a difficult and time-consuming process to change from one word processor to another. The use of markup tags and codes remains popular today in some applications because of their ability to store complex formatting information. When the tags are made visible in the editor, however, they occupy space in the unformatted text and so can disrupt the desired layout and flow.
Bravo, a document preparation program for the Alto produced at Xerox PARC by Butler Lampson, Charles Simonyi and colleagues in 1974, is considered the first program to incorporate WYSIWYG technology, displaying text with formatting. The Alto monitor was designed so that one full page of text could be seen and then printed on the first laser printers. When the text was laid out on the screen, 72 PPI font metric files were used, but when printed, 300 PPI files were used; as a result, one would find characters and words slightly off, a problem that continues to this day. Bravo was never released commercially, but the software included in the Xerox Star can be seen as a direct descendant of it. In parallel with, but independent of, the work at Xerox PARC, Hewlett-Packard developed and released in late 1978 the first commercial WYSIWYG software application for producing overhead slides, or what today are called presentation graphics. The first release, named BRUNO, ran on the HP 1000 minicomputer, taking advantage of HP's first bitmapped computer terminal, the HP 2640.
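The 72 PPI versus 300 PPI metric mismatch described above can be made concrete with a small sketch; the glyph width and line length here are hypothetical numbers chosen only to show the rounding drift:

```python
# Why layout at 72 PPI and printing at 300 PPI disagree: each device
# rounds a glyph's advance width (specified in points) to whole pixels,
# and the rounding errors accumulate differently on each device.

def advance_px(width_pt: float, ppi: int) -> int:
    """Whole-pixel advance for a glyph width given in points (1 pt = 1/72 in)."""
    return round(width_pt * ppi / 72)

def line_width_inches(char_widths_pt, ppi: int) -> float:
    """Physical width of a line after per-glyph rounding on a device."""
    return sum(advance_px(w, ppi) for w in char_widths_pt) / ppi

# A hypothetical 40-character line of 6.4-point-wide glyphs:
line = [6.4] * 40
screen = line_width_inches(line, 72)    # per-glyph advance rounds down to 6 px
printed = line_width_inches(line, 300)  # per-glyph advance rounds up to 27 px
print(screen, printed)                   # ~3.33 in on screen vs 3.6 in on paper
```

The same line of text ends up roughly a quarter-inch longer on paper than on screen, which is exactly the kind of character-and-word drift the text describes.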
BRUNO was later ported to the HP-3000 and re-released as "HP Draw". By 1981, MicroPro advertised that its WordStar word processor had WYSIWYG, but its display was limited to showing styled text on screen. In 1983, the Weekly Reader advertised its Stickybear educational software with the slogan "what you see is what you get", with photographs of its Apple II graphics. However, home computers of the 1970s and early 1980s lacked the sophisticated graphics capabilities necessary to display WYSIWYG documents, meaning that such applications were confined to limited-purpose, high-end workstations that were too expensive for the general public to afford. Towards the mid-1980s, things began to change: improving technology allowed the production of cheaper bitmapped displays, and WYSIWYG software started to appear for more popular computers, including LisaWrite
Desktop publishing is the creation of documents using page-layout skills on a personal computer, primarily for print. Desktop publishing software can generate layouts and produce typographic-quality text and images comparable to traditional typography and printing; this technology allows individuals and organizations to self-publish a wide range of printed matter. Desktop publishing is also a main tool of digital typography. When used skillfully, desktop publishing allows the user to produce a wide variety of materials, from menus to magazines and books, without the expense of commercial printing. Desktop publishing combines a personal computer and WYSIWYG page layout software to create publication documents on a computer, for either large-scale publishing or small-scale local multifunction-peripheral output and distribution. Desktop publishing methods provide more control over design and typography than word processing. However, word processing software has evolved to include some, though by no means all, capabilities previously available only with professional printing or desktop publishing.
The same DTP skills and software used for common paper and book publishing are sometimes used to create graphics for point-of-sale displays, promotional items, trade show exhibits, retail package designs and outdoor signs. Although what is classified as "DTP software" is usually limited to print and PDF publications, DTP skills are not limited to print; the content produced by desktop publishers may be exported and used for electronic media. Job descriptions that include "DTP", such as DTP artist, require skills using software for producing e-books, web content and web pages, which may involve web design or user interface design for a graphical user interface. Desktop publishing was first developed at Xerox PARC in the 1970s. A contradictory claim states that desktop publishing began in 1983 with a program developed by James Davise at a community newspaper in Philadelphia; that program, Type Processor One, ran on a PC using a graphics card for a WYSIWYG display and was offered commercially by Best Info in 1984.
The Macintosh computer platform was introduced by Apple with much fanfare in 1984, but at the beginning the Mac lacked DTP capabilities. The DTP market exploded in 1985 with the introduction in January of the Apple LaserWriter printer and in July of PageMaker software from Aldus, which rapidly became the standard software application for desktop publishing. With its advanced layout features, PageMaker relegated word processors like Microsoft Word to the composition and editing of purely textual documents. The term "desktop publishing" is attributed to Aldus founder Paul Brainerd, who sought a marketing catchphrase to describe the small size and relative affordability of this suite of products, in contrast to the expensive commercial phototypesetting equipment of the day. Before the advent of desktop publishing, the only option available to most people for producing typed documents was a typewriter, which offered only a handful of typefaces and one or two font sizes. Indeed, one popular desktop publishing book was entitled The Mac is not a typewriter, and it had to explain how a Mac could do so much more than a typewriter.
The ability to create WYSIWYG page layouts on screen and then print pages containing text and graphical elements at crisp 300 dpi resolution was revolutionary for both the typesetting industry and the personal computer industry. Early desktop publishing was a primitive affair: users of the PageMaker-LaserWriter-Macintosh 512K system endured frequent software crashes, a cramped display on the Mac's tiny 512 x 342 1-bit monochrome screen, the inability to control letter-spacing and other typographic features, and discrepancies between the screen display and printed output. However, it was a revolutionary combination at the time and was received with considerable acclaim. Behind-the-scenes technologies developed by Adobe Systems set the foundation for professional desktop publishing applications: the LaserWriter and LaserWriter Plus printers included high-quality, scalable Adobe PostScript fonts built into their ROM. The LaserWriter's PostScript capability allowed publication designers to proof files on a local printer and then print the same file at DTP service bureaus using optical-resolution 600+ ppi PostScript printers such as those from Linotronic.
The Macintosh II, when released, was much more suitable for desktop publishing because of its greater expandability, support for large color multi-monitor displays, and its SCSI storage interface, which allowed fast, high-capacity hard drives to be attached to the system. Macintosh-based systems continued to dominate the market into 1986, when the GEM-based Ventura Publisher was introduced for MS-DOS computers. PageMaker's pasteboard metaphor simulated the process of creating layouts manually, but Ventura Publisher automated the layout process through its use of tags and style sheets and automatically generated indices and other body matter; this made it suitable for manuals and other long-format documents. Desktop publishing moved into the home market in 1986 with Professional Page for the Amiga, Publishing Partner for the Atari ST, GST's Timeworks Publisher on the PC and Atari ST, and Calamus for the Atari TT030. Software was published for 8-bit computers like the A
A computer monitor is an output device that displays information in pictorial form. A monitor comprises the display device, circuitry and a power supply. The display device in modern monitors is typically a thin-film-transistor liquid crystal display with LED backlighting, which has replaced cold-cathode fluorescent lamp backlighting; older monitors used a cathode ray tube. Monitors are connected to the computer via VGA, Digital Visual Interface, HDMI, DisplayPort, low-voltage differential signaling or other proprietary connectors and signals. Originally, computer monitors were used for data processing while television receivers were used for entertainment. From the 1980s onwards, computers have been used for both data processing and entertainment, while televisions have implemented some computer functionality. The common aspect ratio of televisions and computer monitors has changed from 4:3 to 16:10 and then to 16:9. Modern computer monitors are largely interchangeable with conventional television sets. However, as computer monitors do not necessarily include components such as a television tuner and speakers, it may not be possible to use a computer monitor as a television without external components.
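For a fixed diagonal size, the shift from 4:3 to 16:10 to 16:9 changes the usable width and height, which follow from the Pythagorean theorem; the 20-inch diagonal below is an arbitrary example:

```python
# Width and height of a screen from its diagonal and aspect ratio:
# diagonal^2 = width^2 + height^2, with width:height fixed by the ratio.
from math import hypot

def screen_dimensions(diagonal: float, aspect_w: int, aspect_h: int):
    """Return (width, height) in the same units as the diagonal."""
    unit = diagonal / hypot(aspect_w, aspect_h)
    return aspect_w * unit, aspect_h * unit

# The same 20-inch diagonal under the three common aspect ratios:
for w, h in ((4, 3), (16, 10), (16, 9)):
    width, height = screen_dimensions(20, w, h)
    print(f"{w}:{h} -> {width:.1f} x {height:.1f} inches")
```

The wider ratios trade height for width at the same nominal diagonal, which is why a 20-inch 16:9 screen has noticeably less area than a 20-inch 4:3 one.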
Early electronic computers were fitted with a panel of light bulbs where the state of each particular bulb would indicate the on/off state of a particular register bit inside the computer. This allowed the engineers operating the computer to monitor the internal state of the machine, so this panel of lights came to be known as the 'monitor'. As early monitors were only capable of displaying a limited amount of information and were very transient, they were rarely considered for program output; instead, a line printer was the primary output device, while the monitor was limited to keeping track of the program's operation. As technology developed, engineers realized that the output of a CRT display was more flexible than a panel of light bulbs and, by giving control of what was displayed to the program itself, the monitor became a powerful output device in its own right. Computer monitors were formerly known as visual display units, but this term had mostly fallen out of use by the 1990s. Multiple technologies have been used for computer monitors.
Until the 21st century, most computer monitors used cathode ray tubes, but they have since been superseded by LCD monitors. The first computer monitors used cathode ray tubes. Prior to the advent of home computers in the late 1970s, it was common for a video display terminal using a CRT to be physically integrated with a keyboard and other components of the system in a single large chassis; the display was monochrome and far less sharp and detailed than on a modern flat-panel monitor, necessitating the use of large text and limiting the amount of information that could be displayed at one time. High-resolution CRT displays were developed for specialized military and scientific applications, but they were far too costly for general use. Some of the earliest home computers were limited to monochrome CRT displays, but color display capability was a standard feature of the pioneering Apple II, introduced in 1977, and the specialty of the more graphically sophisticated Atari 800, introduced in 1979. Either computer could be connected to the antenna terminals of an ordinary color TV set or used with a purpose-made CRT color monitor for optimum resolution and color quality.
Lagging several years behind, in 1981 IBM introduced the Color Graphics Adapter, which could display four colors at a resolution of 320 x 200 pixels, or two colors at 640 x 200 pixels. In 1984 IBM introduced the Enhanced Graphics Adapter, capable of producing 16 colors at a resolution of 640 x 350. By the end of the 1980s, color CRT monitors that could display 1024 x 768 pixels were available and affordable. During the following decade, maximum display resolutions increased and prices continued to fall. CRT technology remained dominant in the PC monitor market into the new millennium, because it was cheaper to produce and offered viewing angles close to 180 degrees. CRTs still offer some image quality advantages over LCDs, but improvements to the latter have made them much less obvious. The dynamic range of early LCD panels was poor, and although text and other motionless graphics were sharper than on a CRT, an LCD characteristic known as pixel lag caused moving graphics to appear noticeably smeared and blurry.
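The memory arithmetic behind the CGA and EGA modes mentioned above is straightforward pixels-times-bits-per-pixel; this back-of-the-envelope sketch ignores the planar and interleaved memory layouts real adapters actually used:

```python
# Framebuffer size for a display mode: each pixel needs ceil(log2(colors))
# bits, and the total bit count is divided by 8 to get bytes.
from math import ceil, log2

def framebuffer_bytes(width: int, height: int, colors: int) -> int:
    bits_per_pixel = ceil(log2(colors))
    return width * height * bits_per_pixel // 8

print(framebuffer_bytes(320, 200, 4))    # CGA four-color mode: 16000 bytes
print(framebuffer_bytes(640, 200, 2))    # CGA two-color mode: 16000 bytes
print(framebuffer_bytes(640, 350, 16))   # EGA 16-color mode: 112000 bytes
```

Note that CGA's two graphics modes need the same amount of memory: doubling the horizontal resolution while halving the bits per pixel is an even trade, which is why both fit the same 16 KB framebuffer.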
Multiple LCD technologies exist. Throughout the 1990s, the primary use of LCD technology in computer monitors was in laptops, where the lower power consumption, lighter weight and smaller physical size of LCDs justified the higher price versus a CRT. The same laptop would be offered with an assortment of display options at increasing price points: monochrome, passive color, or active-matrix color. As volume and manufacturing capability improved, the monochrome and passive color technologies were dropped from most product lines. TFT-LCD is a variant of LCD that is now the dominant technology used for computer monitors. The first standalone LCD monitors appeared in the mid-1990s, selling for high prices; as prices declined over a period of years, they became more popular, and by 1997 they were competing with CRT monitors. Among the first desktop LCD computer monitors were the Eizo L66 in the mid-1990s, the Apple Studio Display in 1998, and the Apple Cinema Display in 1999. In 2003, TFT-LCDs outsold CRTs for the first time, becoming the primary technology used for computer monitors.
The main advantages of LCDs over CRT displays are that LCDs consume less power, take up much less space, and are considerably lighter.
In product development, an end user is a person who uses or is intended to use a product. The end user stands in contrast to users who support or maintain the product, such as sysops, system administrators, database administrators, information technology experts, software professionals, and computer technicians. End users typically do not possess the technical understanding or skill of the product designers, a fact that is easy for designers to forget or overlook, leading to features with which the customer is dissatisfied. In information technology, end users are not "customers" in the usual sense; they are typically employees of the customer. For example, if a large retail corporation buys a software package for its employees to use, the corporation is the "customer" that purchased the software, but the end users are the employees of the company who will use the software at work. Certain American defense-related products and information require export approval from the United States Government under the ITAR and EAR.
In order to obtain a license to export, the exporter must specify both the end user and the end use by means of an end-user certificate. In end-user license agreements, the end user is distinguished from the value-added reseller that installs the software and from the organization that purchases and manages the software. Analogous documents exist in the UK. End users are one of the three major factors contributing to the complexity of managing information systems. The end user's position has changed from one in the 1950s, when end users did not interact with computers directly and specialists programmed and operated them, to one in the 2010s, where the end user collaborates with and advises the management information system (MIS) and information technology department about his or her needs regarding the system or product. This raises new questions, such as: Who manages each resource? What is the role of the MIS department? What is the optimal relationship between the end user and the MIS department? The concept of "end user" first surfaced in the late 1980s and has since raised many debates. One challenge is the need both to give the user more freedom, by adding advanced features and functions, and to impose more constraints that protect the integrity of the system.
This phenomenon appeared as a consequence of the "consumerization" of software. In the 1960s and 1970s, computer users were generally programming experts and computer scientists. However, beginning in the 1980s, and especially from the mid- to late 1990s through the early 2000s, ordinary people began using computer devices and software for personal and work use. IT specialists have had to cope with this trend in various ways. In the 2010s, users want to have more control over the systems they operate, so that they can solve their own problems and can change, customize, and "tweak" the systems to suit their needs; the drawback is the risk of corrupting the systems and data under the user's control, owing to his or her lack of knowledge about how to properly operate the computer or software at an advanced level. To appeal to users, companies take care to accommodate and consider end users in their new products, software launches, and updates. A partnership needs to be formed between the programmer-developers and the everyday end users so that both parties can make the most of the products.
Public libraries have been affected by new technologies in many ways, ranging from the digitalization of their card catalogs to the shift to e-books and e-journals and the offering of online services. Libraries have had to undergo many changes in order to cope, including training existing librarians in Web 2.0 and database skills and hiring IT and software experts. The aim of end-user documentation is to help the user understand certain aspects of a system and to provide all the answers in one place. A great deal of documentation is available to help users understand and properly use a given product or service. Because the available information is often vast, inconsistent, or ambiguous, many users suffer from information overload and become unable to take the right course of action; this needs to be kept in mind when developing products and services and the necessary documentation for them. Well-written documentation is needed for a user to reference. Some key aspects of such documentation are:
- Specific titles and subtitles for subsections, to aid the reader in finding sections
- Use of videos, annotated screenshots, and links to help the reader understand how to use the device or program
- Structured provision of information, progressing from the most basic instructions, written in plain language without specialist jargon or acronyms, to the information that intermediate or advanced users will need
- A help guide that is easy to search, so that readers can find and access information quickly
- Clear descriptions of the end results the reader should expect
- Detailed, numbered steps, enabling users with a range of proficiency levels to work step by step through installing and troubleshooting the product or service
- A unique Uniform Resource Locator (URL), so that the user can go to the product website to find additional help and resources
At times users do not refer to the documentation available to the