1.
Firefox 3.0
–
Mozilla Firefox 3.0 is a version of the Firefox web browser released on June 17, 2008 by the Mozilla Corporation. Firefox 3.0 uses version 1.9 of the Gecko layout engine for displaying web pages; this version fixes many bugs, improves standards compliance, and implements many new web APIs compared to Firefox 2.0. Other new features include a redesigned download manager and a new Places system for storing bookmarks and history. Firefox 3.0 had over 8 million unique downloads the day it was released, and by July 2008 held over 5.6% of the recorded usage share of web browsers. Estimates of Firefox 3.0's global market share as of February 2010 were generally in the range of 4–5%, partially a result of users moving to newer versions; between mid-December 2009 and the end of January 2010, Firefox 3.5 was the most popular browser, passing Internet Explorer 7. Mozilla ended support for Firefox 3.0 on March 30, 2010 with the release of 3.0.19. Firefox 3.0 was developed under the codename Gran Paradiso, which, like other Firefox codenames, is the name of an actual place. Planning began in October 2006, when the development team asked users to submit feature requests that they wished to see included in Firefox 3. The first release candidate was announced on May 16, 2008, followed by another release candidate on June 4, 2008. Mozilla shipped the final release on June 17, 2008; on its release date, Firefox 3 was featured in popular culture, mentioned on The Colbert Report, among others. One of the big changes in Firefox 3 is the implementation of Gecko 1.9; the new version fixes many bugs, improves standards compliance, and implements new web APIs. In particular, it makes Firefox 3 the first official release of a Mozilla browser to pass the Acid2 test, and it also gets a better score on the Acid3 test than Firefox 2. Other new features include APNG support and EXSLT support, and a new internal memory allocator, jemalloc, is used rather than the default libc one. 
Gecko 1.9 uses Cairo as a graphics backend, allowing for improved graphics performance and better consistency of look across platforms. Similarly, the Mac version of Firefox 3 runs only on Mac OS X 10.4 or higher. As for the front-end changes, Firefox 3 features a redesigned download manager with built-in search and the ability to resume downloads, and a new manager is included in the add-ons window. Microformats are supported for use by software that can understand their use in documents to store data in a machine-readable form. The password manager in Firefox 3 asks the user whether they would like it to remember the password after the log-on attempt rather than before; this way, users avoid storing an incorrect password in the manager after a failed log-on attempt. Firefox 3 uses a Places system for storing bookmarks and history in an SQLite backend. The new system stores more information about the user's history and bookmarks, in particular letting the user tag pages.
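The idea behind Places — history and bookmarks in a relational store, with tags joined to pages — can be sketched with Python's built-in sqlite3 module. The two-table schema below is a simplified illustration, not Firefox's actual places.sqlite layout:

```python
import sqlite3

# Toy, in-memory schema loosely inspired by the Places idea of tagging
# history entries; the real places.sqlite layout differs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (id INTEGER PRIMARY KEY, url TEXT, title TEXT)")
conn.execute("CREATE TABLE tags (place_id INTEGER, tag TEXT)")
conn.execute("INSERT INTO places VALUES (1, 'https://mozilla.org', 'Mozilla')")
conn.execute("INSERT INTO tags VALUES (1, 'browser')")

# Find every page the user tagged 'browser'.
rows = conn.execute(
    "SELECT p.url, p.title FROM places p JOIN tags t ON p.id = t.place_id "
    "WHERE t.tag = ?", ("browser",)
).fetchall()
print(rows)  # [('https://mozilla.org', 'Mozilla')]
```

Storing tags in a separate table, as here, is what lets one page carry many tags and one tag cover many pages.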
2.
Finder (software)
–
The Finder is the default file manager and graphical user interface shell used on all Macintosh operating systems. Described in its About window as "The Macintosh Desktop Experience", it is responsible for the launching of other applications and for the overall management of files and disks. It was introduced with the first Macintosh computer and also exists as part of GS/OS on the Apple IIGS; it was rewritten completely with the release of Mac OS X in 2001. In a tradition dating back to the Classic Mac OS of the 1980s and 1990s, the Finder uses a view of the file system that is rendered using a desktop metaphor; that is, files and folders are represented as appropriate icons. It uses an interface similar to Apple's Safari browser, where the user can click on a folder to move to it. Like Safari, the Finder uses tabs to allow the user to view multiple folders, and there is a favorites sidebar of commonly used and important folders on the left of the Finder window. The modern Finder uses macOS graphics APIs to display previews of a range of files, such as images, applications and PDF files. The Quick Look feature allows users to examine documents and images in more detail from the Finder by pressing the space bar, without opening them in a separate application. The modern Finder displays some aspects of the system outside its windows: mounted external volumes and disk image files can be displayed on the desktop, and there is a trash can on the Dock in macOS, to which files can be dragged to mark them for deletion and to which drives can be dragged for ejection. When a volume icon is being dragged, the Trash icon in the Dock changes to an eject icon to indicate this functionality. The Finder can also record files to optical media from the sidebar. The classic Mac OS Finder uses a spatial metaphor quite different from the more browser-like approach of the modern macOS Finder. 
In the classic Finder, opening a folder opens that location in a new window; these windows must then be closed individually. Holding down the Option key when opening a folder would also close its parent, but this trick was not discoverable and remained under the purview of power users. The classic Finder also allows extensive customization, with the user being able to give folders custom icons matching their content. Introducing Mac OS X in 2000, Steve Jobs criticized the original Finder, saying that it "generates a ton of windows". Ars Technica columnist John Siracusa has been a long-standing defender of the spatial interface of the classic Mac OS Finder and a critic of the new design. Daring Fireball blog author John Gruber has voiced similar criticisms, arguing that the modern Finder tries to combine the browser-like approach with the spatial metaphor from the original Mac Finder, and it ends up doing neither one very well. Third-party macOS software developers offer Finder replacements that run as stand-alone applications, such as Path Finder and Xfile; these replacements are shareware or freeware and aim to include and supersede the functionality of the Finder. Since Mac OS X 10.4 Tiger, the UNIX command-line file management tools understand resource forks. There are minor differences between Finder versions in the Classic OS era up to System 7
3.
Computing
–
Computing is any goal-oriented activity requiring, benefiting from, or creating a mathematical sequence of steps known as an algorithm — e.g. through computers. The field of computing includes computer engineering, software engineering, computer science, and information systems. The ACM Computing Curricula 2005 defined computing as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers." For example, an information systems specialist will view computing somewhat differently from a software engineer; regardless of the context, doing computing well can be complicated and difficult. Because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The fundamental question underlying all computing is "What can be automated?" The term computing is also synonymous with counting and calculating; in earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. Computing is intimately tied to the representation of numbers, but long before abstractions like the number arose, there were mathematical concepts to serve the purposes of civilization. These concepts include one-to-one correspondence and comparison to a standard. The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC. Its original style of usage was by lines drawn in sand with pebbles; abaci of a more modern design are still used as calculation tools today. This was the first known computer and the most advanced system of calculation known to date, preceding Greek methods by 2,000 years. The first recorded idea of using electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. 
Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program. The program has an executable form that the computer can use directly to execute the instructions. The same program in its source code form enables a programmer to study and develop its algorithms. Because the instructions can be carried out in different types of computers, a single set of source instructions converts to machine instructions according to the CPU type. The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer; they trigger sequences of simple actions on the executing machine, and those actions produce effects according to the semantics of the instructions. Computer software, or just software, is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more programs and data held in the storage of the computer for some purposes. In other words, software is a set of programs, procedures, algorithms, and its documentation. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software
4.
Computer keyboard
–
In computing, a computer keyboard is a typewriter-style device which uses an arrangement of buttons or keys to act as mechanical levers or electronic switches. Following the decline of punch cards and paper tape, interaction via teleprinter-style keyboards became the main input method for computers. A keyboard typically has characters engraved or printed on the keys; however, producing some symbols requires pressing and holding several keys simultaneously or in sequence. While most keyboard keys produce letters, numbers or signs, other keys or simultaneous key presses can produce actions or execute computer commands. In normal usage, the keyboard is used as a text entry interface to type text and numbers into a word processor or other program. In a modern computer, the interpretation of key presses is generally left to the software: a computer keyboard distinguishes each physical key from every other and reports all key presses to the controlling software. Keyboards are also used for computer gaming, either with regular keyboards or with keyboards featuring special gaming features, which can expedite frequently used keystroke combinations. A keyboard is also used to give commands to the operating system of a computer, such as Windows' Control-Alt-Delete combination. A command-line interface is a type of user interface operated entirely through a keyboard; it was through such devices that modern computer keyboards inherited their layouts. Earlier models were developed separately by individuals such as Royal Earl House; earlier still, Herman Hollerith developed the first keypunch devices, which by the 1930s had evolved to include keys for text and number entry akin to normal typewriters. From the 1940s until the late 1960s, typewriters were the main means of data entry. The keyboard remained the primary, most integrated computer peripheral well into the era of personal computing, until the introduction of the mouse as a consumer device in 1984. 
By this time, text-only user interfaces with sparse graphics had given way to comparatively graphics-rich icons on screen. One factor determining the size of a keyboard is the presence of duplicate keys, such as a separate numeric keypad, for convenience. A keyboard with few keys is called a keypad. Another factor determining the size of a keyboard is the size and spacing of the keys. Reduction is limited by the consideration that the keys must be large enough to be easily pressed by fingers; alternatively, a tool is used for pressing small keys. Standard alphanumeric keyboards have keys on three-quarter-inch centers and a key travel of at least 0.150 inches. Desktop computer keyboards, such as the 101-key US traditional keyboards or the 104-key Windows keyboards, include alphabetic characters, punctuation symbols and numbers. The internationally common 102/104-key keyboards have a smaller left shift key, and the enter key is usually shaped differently as well. Computer keyboards are similar to electric-typewriter keyboards but contain additional keys, such as the command or Windows keys
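The idea that simultaneous key presses can execute commands, described above, can be sketched as a lookup from the set of currently held keys to an action. The key names and command names below are made up for illustration; real systems receive scan codes from the keyboard controller:

```python
# Hypothetical mapping from sets of simultaneously held keys to commands,
# illustrating how combinations like Ctrl+Alt+Delete trigger actions.
COMMANDS = {
    frozenset({"ctrl", "alt", "delete"}): "secure-attention",
    frozenset({"ctrl", "c"}): "copy",
    frozenset({"ctrl", "v"}): "paste",
}

def interpret(keys_down):
    """Return the command bound to the currently held keys, if any."""
    return COMMANDS.get(frozenset(keys_down))

print(interpret(["ctrl", "c"]))              # copy
print(interpret(["ctrl", "alt", "delete"]))  # secure-attention
```

Using a frozenset makes the lookup order-independent, mirroring the fact that it does not matter in which order the keys of a combination are pressed down.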
5.
Software
–
Computer software, or simply software, is that part of a computer system that consists of data or computer instructions, in contrast to the physical hardware from which the system is built. In computer science and software engineering, computer software is all information processed by computer systems, programs and data. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. Computer hardware and software require each other, and neither can be used on its own. At the lowest level, executable code consists of machine language instructions specific to an individual processor—typically a central processing unit. A machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also cause something to appear on a display of the computer system—a state change which should be visible to the user. The processor carries out the instructions in the order they are provided, unless it is instructed to jump to a different instruction. The majority of software is written in high-level programming languages that are easier and more efficient for programmers, because they are closer to natural languages. High-level languages are translated into machine language using a compiler, an interpreter, or a combination of the two. An outline for what would have been the first piece of software was written by Ada Lovelace in the 19th century, for the planned Analytical Engine. However, neither the Analytical Engine nor any software for it were ever created. The first theory about software—prior to the creation of computers as we know them today—was proposed by Alan Turing in his 1935 essay "On Computable Numbers, with an Application to the Entscheidungsproblem". 
This eventually led to the creation of the academic fields of computer science and software engineering; computer science is more theoretical, whereas software engineering focuses on practical concerns. However, prior to 1946, software as we now understand it—programs stored in the memory of stored-program digital computers—did not yet exist. The first electronic computing devices were instead rewired in order to reprogram them. On virtually all computer platforms, software can be grouped into a few broad categories. There are many different types of software, because the range of tasks that can be performed with a modern computer is so large—see list of software. System software includes operating systems, which are collections of software that manage resources and provide common services for other software that runs on top of them; supervisory programs, boot loaders, shells and window systems are parts of operating systems. In practice, an operating system comes bundled with additional software so that a user can potentially do some work with a computer that only has an operating system. System software also includes device drivers, which operate or control a particular type of device that is attached to a computer, and utilities, which are computer programs designed to assist users in the maintenance and care of their computers
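The translation from high-level source code to machine-oriented instructions described above can be glimpsed directly in Python, whose compiler turns source into bytecode for an interpreter rather than native machine code. The standard dis module disassembles that bytecode:

```python
import dis

def add(a, b):
    return a + b

# Show the bytecode the CPython compiler produced for the function.
# The exact opcodes vary between Python versions.
dis.dis(add)
```

The listing shows the same pattern as machine language: small instructions that load values, combine them, and return, executed in order by the (virtual) processor.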
6.
Operating system
–
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function. Operating systems are found on many devices that contain a computer, from cellular phones to web servers. The dominant desktop operating system is Microsoft Windows with a market share of around 83.3%; macOS by Apple Inc. is in second place, and the varieties of Linux are in third position. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run only one program at a time. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs; Unix-like operating systems, e.g. Solaris and Linux, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner: 16-bit versions of Microsoft Windows used cooperative multi-tasking, while 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine: when computers in a group work in cooperation, they form a distributed system. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems. 
They are designed to operate on small machines like PDAs with less autonomy, and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design; Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it may use specialized scheduling algorithms so that a deterministic nature of behavior is achieved. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing
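Cooperative multitasking, mentioned above, relies on each task voluntarily handing control back. That hand-off can be sketched with Python generators standing in for processes and a round-robin loop standing in for the operating system's scheduler; real schedulers are, of course, far more involved:

```python
# A minimal sketch of cooperative multitasking: each "process" is a
# generator that voluntarily yields control back to the scheduler.
def process(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"   # yield = voluntarily give up the CPU

def scheduler(procs):
    """Round-robin over processes until all have finished."""
    log = []
    while procs:
        proc = procs.pop(0)
        try:
            log.append(next(proc))
            procs.append(proc)      # re-queue: it gets another turn later
        except StopIteration:
            pass                    # process finished; drop it
    return log

log = scheduler([process("A", 2), process("B", 3)])
print(log)  # ['A step 0', 'B step 0', 'A step 1', 'B step 1', 'B step 2']
```

Note the defining weakness of the cooperative scheme: if one generator never yielded, the scheduler and every other "process" would be stuck, which is exactly why preemptive multitasking took over.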
7.
Event (computing)
–
In computing, an event is an action or occurrence recognized by software that may be handled by the software. Computer events can be generated or triggered by the system, by the user, or in other ways. Typically, events are handled synchronously with the program flow; that is, the software may have one or more dedicated places where events are handled, frequently an event loop. Sources of events include the user, who may interact with the software by way of, for example, keystrokes or mouse movements. Another source is a hardware device such as a timer. Software can also trigger its own set of events into the event loop. Software that changes its behavior in response to events is said to be event-driven, often with the goal of being interactive. Event-driven systems are used when there is some asynchronous external activity that needs to be handled by a program. An event-driven system typically runs an event loop that keeps waiting for such activities; when one occurs, it collects data about the event and dispatches the event to the event handler software that will deal with it. A program can choose to ignore events, and there may be libraries to dispatch an event to multiple handlers that may be programmed to listen for a particular event. Events are typically used in user interfaces, where actions in the outside world are handled by the program as a series of events. Programs written for many windowing environments consist predominantly of event handlers. Events can also be used at the instruction set level, where they complement interrupts; compared to interrupts, events are handled synchronously: the program explicitly waits for an event to be serviced. A common variant in object-oriented programming is the delegate event model. C# uses events as special delegates that can only be fired by the class that declares them; this allows for better abstraction. In computer programming, an event handler is a callback subroutine that handles inputs received in a program. 
Each event is a piece of information from the underlying framework. GUI events include key presses, mouse movement and action selections; on a lower level, events can represent the availability of new data for reading from a file or network stream. Event handlers are a central concept in event-driven programming. The events are created by the framework based on interpreting lower-level inputs; for example, mouse movements and clicks are interpreted as menu selections. The events initially originate from actions on the operating system level, such as interrupts generated by hardware devices, software interrupt instructions, or state changes in polling. On this level, interrupt handlers and signal handlers correspond to event handlers. Created events are first processed by an event dispatcher within the framework
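The loop-and-dispatch structure described above can be sketched in a few lines: handlers register for named events, and a dispatcher delivers queued events synchronously, ignoring events nobody listens for. The event names here are invented for illustration:

```python
from collections import deque

# A minimal event loop: handlers register for named events, and the
# dispatcher delivers queued events synchronously, one at a time.
handlers = {}

def listen(event_name, handler):
    """Register a callback for a named event; several may listen."""
    handlers.setdefault(event_name, []).append(handler)

def dispatch(queue, results):
    """Drain the queue, invoking every handler registered for each event."""
    while queue:
        name, data = queue.popleft()
        for handler in handlers.get(name, []):  # unhandled events are ignored
            results.append(handler(data))

results = []
listen("keypress", lambda key: f"got key {key}")
queue = deque([("keypress", "a"), ("mouse", (3, 4))])
dispatch(queue, results)
print(results)  # ['got key a'] - the mouse event had no listener
```

This mirrors the text in both respects: handling is synchronous with the loop's flow, and a program is free to ignore events it has not registered for.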
8.
User (computing)
–
A user is a person who uses a computer or network service. Users generally use a system or a product without the technical expertise required to fully understand it. Power users use advanced features of programs, though they are not necessarily capable of computer programming. A user often has a user account and is identified to the system by a username. Other terms for username include login name, screen name, nickname and handle. Some software products provide services to other systems and have no direct end users. End users are the ultimate users of a software product; the term is used to abstract and distinguish those who use the software from the developers of the system. This abstraction is primarily useful in designing the user interface. In user-centered design, personas are created to represent the types of users, and it is sometimes specified for each persona which types of user interfaces it is comfortable with; in this context, graphical user interfaces are usually preferred to command-line interfaces for the sake of usability. The end-user development discipline blurs the distinction between users and developers: it designates activities or techniques in which people who are not professional developers create automated behavior. Systems whose actor is another system or a software agent have no direct end users. To log in to an account, a user is required to authenticate oneself with a password or other credentials for the purposes of accounting, security and logging. Once the user has logged in, the system will often use an identifier such as an integer to refer to them, rather than their username. In Unix systems, the username is correlated with a user identifier or user id. Computer systems operate in one of two modes based on what kind of users they have: single-user systems do not have a concept of several user accounts, while multi-user systems have such a concept, and require users to identify themselves before using the system. 
Each user account on a multi-user system typically has a home directory, in which to store files pertaining exclusively to that user's activities. User accounts often contain a public profile, which contains basic information provided by the account's owner. Various computer operating systems and applications expect or enforce different rules for the format of user names. In some cases, a user may be better known by their username than by their real name, such as CmdrTaco, founder of the website Slashdot
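The correlation between username, numeric user id and home directory described above is visible in the classic colon-separated record format used by Unix's /etc/passwd. The sample line below is made up for illustration; parsing it is a simple split:

```python
# Parse a record in the /etc/passwd format to show how a username is
# correlated with a numeric user id and a home directory.
# The sample line below is invented for illustration.
sample = "alice:x:1001:1001:Alice Example:/home/alice:/bin/bash"

fields = sample.split(":")
account = {
    "username": fields[0],
    "uid": int(fields[2]),   # numeric identifier the system uses internally
    "gid": int(fields[3]),
    "home": fields[5],       # the user's home directory
    "shell": fields[6],
}
print(account["username"], account["uid"], account["home"])
```

Once logged in, the system refers to this user by the integer 1001 rather than by the string "alice", exactly as the text describes.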
9.
Toolbar
–
In computer interface design, a toolbar is a graphical control element on which on-screen buttons, icons, menus, or other input or output elements are placed. Toolbars are seen in many types of software, such as office suites and graphics editors. Several user interface elements are derived from toolbars. An address bar accepts uniform resource locators or file system addresses; address bars are found in web browsers and file managers. A breadcrumb or breadcrumb trail allows users to keep track of their locations within programs or documents; breadcrumbs are toolbars whose contents dynamically change to indicate the navigation path. Ribbon was the original name for the toolbar, but the term has been re-purposed to refer to a complex user interface which consists of toolbars on tabs. A taskbar is a toolbar provided by an operating system to launch and monitor applications; a taskbar may hold other sub-toolbars. A search box is not ipso facto a toolbar but may appear on a toolbar, as is the case with the address bar. Toolbars may appear in many different kinds of software; some web browsers allow additional toolbars to be added through plug-ins, and many antivirus companies refer to such programs as grayware or potentially unwanted programs
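The breadcrumb behavior described above — contents that change dynamically to indicate the navigation path — amounts to deriving one clickable crumb per level of the current location. A minimal sketch, using a file-system-style path purely as an example:

```python
# A sketch of how a breadcrumb trail can be derived from a location in
# a hierarchy: each crumb is a label plus the path it would link to.
def breadcrumbs(path):
    parts = [p for p in path.split("/") if p]
    trail = []
    for i, part in enumerate(parts):
        trail.append((part, "/" + "/".join(parts[: i + 1])))
    return trail

print(breadcrumbs("/home/alice/docs"))
# [('home', '/home'), ('alice', '/home/alice'), ('docs', '/home/alice/docs')]
```

Because the trail is recomputed from the current location, navigating elsewhere automatically replaces the toolbar's contents, which is what distinguishes a breadcrumb from a static row of buttons.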
10.
Process state
–
In a multitasking computer system, processes may occupy a variety of states. These distinct states may not be recognized as such by the operating system kernel; however, they are a useful abstraction for the understanding of processes. The following typical process states are possible on computer systems of all kinds; in most of these states, processes are stored in main memory. When a process is first created, it occupies the created or new state. In this state, the process awaits admission to the ready state. Admission will be approved or delayed by a long-term, or admission, scheduler. Typically in most desktop computer systems, this admission will be approved automatically; however, for real-time operating systems this admission may be delayed. In a real-time system, admitting too many processes to the ready state may lead to oversaturation and overcontention for the system's resources. A ready or waiting process has been loaded into main memory and is awaiting execution on a CPU; a ready queue or run queue is used in computer scheduling. Modern computers are capable of running many different programs or processes at the same time; however, the CPU is only capable of handling one process at a time. Processes that are ready for the CPU are kept in a queue of ready processes. Other processes that are waiting for an event to occur, such as loading information from a drive or waiting on an internet connection, are not in the ready queue. A process moves into the running state when it is chosen for execution. The process's instructions are executed by one of the CPUs or cores of the system; there is at most one running process per CPU or core. A process can run in either of two modes, namely kernel mode or user mode. Processes in kernel mode can access both kernel and user addresses; kernel mode allows unrestricted access to hardware, including execution of privileged instructions. Various instructions are privileged and can be executed only in kernel mode. 
A system call from a user program leads to a switch to kernel mode. Processes in user mode can access their own instructions and data, but not kernel instructions and data. When the computer system is executing on behalf of a user application, the system is in user mode; however, when a user application requests a service from the operating system, the system must transition from user to kernel mode to fulfill the request
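The life cycle described above is essentially a state machine, and it can be sketched as one. The set of transitions below covers only the states discussed here (created, ready, running, blocked, terminated); real kernels track more:

```python
# A sketch of the process life cycle as a state machine; only the
# transitions described in the text are allowed.
ALLOWED = {
    ("created", "ready"),       # admitted by the long-term scheduler
    ("ready", "running"),       # chosen for execution
    ("running", "ready"),       # preempted by the scheduler
    ("running", "blocked"),     # waiting for I/O or another event
    ("blocked", "ready"),       # the awaited event occurred
    ("running", "terminated"),  # finished or killed
}

class Process:
    def __init__(self):
        self.state = "created"

    def move(self, new_state):
        if (self.state, new_state) not in ALLOWED:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process()
for s in ["ready", "running", "blocked", "ready", "running", "terminated"]:
    p.move(s)
print(p.state)  # terminated
```

Attempting an illegal step, such as moving a newly created process straight to running without admission, raises an error — mirroring the rule that only the scheduler-mediated transitions occur.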
11.
Menu (computing)
–
In computing and telecommunications, a menu or menu bar is a graphical control element. It is a list of options or commands presented to an operator by a computer or communications system. Entering the appropriate short-cut selects a menu item; a more sophisticated solution offers navigation using the cursor keys or the mouse, where the current selection is highlighted and can be activated by pressing the enter key. A computer using a graphical user interface presents menus with a combination of text and symbols to represent choices; by clicking on one of the symbols or text, the operator selects the instruction that the symbol represents. A context menu is a menu in which the choices presented to the operator are automatically modified according to the current context in which the operator is working. A common use of menus is to provide convenient access to various operations such as saving or opening a file and quitting a program. Most widget toolkits provide some form of pull-down or pop-up menu. According to traditional human interface guidelines, menu names were always supposed to be verbs, such as file, edit and so on; this has been ignored in subsequent user interface developments. A single-word verb, however, is sometimes unclear, and later designs allow for multiple-word menu names. Menus are now seen in consumer electronics, starting with TV sets and VCRs that gained on-screen displays in the early 1990s; menus allow the control of settings like tint, brightness, contrast, bass and treble. Other more recent electronics in the 2000s also have menus, such as digital media players. Menus are sometimes hierarchically organized, allowing navigation through different levels of the menu structure: selecting a menu entry with an arrow will expand it, showing a second menu with options related to the selected entry. Usability of sub-menus has been criticized as difficult, because of the narrow height that must be crossed by the pointer. 
The steering law predicts that this movement will be slow
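The basic menu mechanism described above — a list of options, each selectable by an appropriate shortcut — can be sketched in a few lines. The option labels and shortcut keys are invented for illustration:

```python
# A minimal text menu: each option has a shortcut key, and entering the
# shortcut selects the corresponding command.
MENU = {
    "o": "open file",
    "s": "save file",
    "q": "quit program",
}

def render(menu):
    """Return the menu as text presented to the operator."""
    return "\n".join(f"[{key}] {label}" for key, label in menu.items())

def select(menu, keystroke):
    """Return the command for a shortcut, or None if it is not on the menu."""
    return menu.get(keystroke.lower())

print(render(MENU))
print(select(MENU, "S"))  # save file
```

A hierarchical menu follows naturally by letting a value be another menu dictionary instead of a command string.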
12.
Pointing device
–
A pointing device is an input interface that allows a user to input spatial data to a computer. Movements of the device are echoed on the screen by movements of the pointer and other visual changes. Common gestures are point and click, and drag and drop. While the most common pointing device by far is the mouse, many more devices have been developed; the mouse is commonly used as a metaphor for devices that move the cursor. For most pointing devices, Fitts's law can be used to predict the speed with which users can point at a target position. To classify pointing devices, a number of features can be considered: for example, the movement, control, positioning or resistance of the device. The following points provide an overview of the different classifications. Direct vs. indirect input: in the case of a direct-input pointing device, the on-screen pointer is at the same physical position as the pointing device; an indirect-input pointing device is not at the same physical position as the pointer. Absolute vs. relative movement: an absolute-movement input device provides a consistent mapping between a point in the input space and a point in the output space, while a relative-movement input device maps displacement in the input space to displacement in the output space; it therefore controls the position of the cursor relative to its initial position. Isotonic vs. elastic vs. isometric: an isotonic pointing device is movable and measures its displacement, whereas an isometric device is fixed and measures the applied force; an elastic device increases its force resistance with displacement. Position control vs. rate control: a position-control input device directly changes the absolute or relative position of the on-screen pointer, while a rate-control input device changes the speed and direction of the movement of the on-screen pointer. Translation vs. rotation: another classification is the differentiation between whether the device is physically translated or rotated. 
Degrees of freedom: different pointing devices have different degrees of freedom. A computer mouse has two degrees of freedom, namely its movement along the x- and y-axes, while the Wiimote has six degrees of freedom: translation along the x-, y- and z-axes plus rotation around each of them. Possible states: pointing devices can also be in different states; examples of these states are out of range, tracking, or dragging. As an example of the classification, a computer mouse is an indirect, relative, isotonic, position-control, translational input device with two degrees of freedom and three states
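Fitts's law, mentioned above, can be made concrete. In its common Shannon formulation the predicted movement time is MT = a + b · log2(D/W + 1), where D is the distance to the target and W its width; the constants a and b below are purely illustrative, since real values are fitted empirically per device and user:

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time in seconds under the Shannon formulation
    of Fitts's law: MT = a + b * log2(D/W + 1).
    The constants a and b are illustrative placeholders, not measured values."""
    return a + b * math.log2(distance / width + 1)

# A distant, small target takes longer to acquire than a near, large one.
print(round(fitts_time(distance=800, width=20), 3))
print(round(fitts_time(distance=100, width=100), 3))
```

The logarithmic term, called the index of difficulty, is why doubling a target's size helps about as much as halving its distance — a fact interface designers exploit with large, edge-of-screen targets.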
13.
User interface
–
The user interface, in the industrial design field of human–computer interaction, is the space where interactions between humans and machines occur. Examples of this concept of user interfaces include the interactive aspects of computer operating systems, hand tools and heavy machinery operator controls. The design considerations applicable when creating user interfaces are related to, or involve, such disciplines as ergonomics and psychology. Generally, the goal of user interface design is to produce a user interface which makes it easy and efficient to operate a machine; this generally means that the user needs to provide minimal input to achieve the desired output. Other terms for user interface are man–machine interface (MMI) and, when the machine in question is a computer, human–computer interface. The user interface or human–machine interface is the part of the machine that handles the human–machine interaction. Membrane switches, rubber keypads and touchscreens are examples of the part of the human–machine interface which we can see. In complex systems, the human–machine interface is typically computerized; the term human–computer interface refers to this kind of system. In the context of computing, the term typically extends as well to the software dedicated to controlling the physical elements used for human–computer interaction. The engineering of human–machine interfaces is enhanced by considering ergonomics; the corresponding disciplines are human factors engineering and usability engineering, which is part of systems engineering. Tools used for incorporating human factors in the interface design are developed based on knowledge of computer science, such as computer graphics, operating systems and programming languages. Nowadays, the graphical user interface is the usual human–machine interface on computers. There is a difference between a user interface and an operator interface or a human–machine interface. 
A human–machine interface is typically local to one machine or piece of equipment, whereas an operator interface is the method by which multiple pieces of equipment, linked by a host control system, are accessed or controlled. A system may expose several user interfaces to serve different kinds of users; for example, a computerized library database might provide two user interfaces, one for library patrons and the other for library personnel. The user interface of a mechanical system, a vehicle or an industrial installation is sometimes referred to as the human–machine interface (HMI). HMI is a modification of the original term MMI; in practice, the abbreviation MMI is still frequently used, although some may claim that MMI now stands for something different. Another abbreviation is HCI, but this is more commonly used for human–computer interaction
14.
Command-line interface
–
A program which handles this interface is called a command language interpreter or shell. A command-line interface is implemented with a command line shell, which is a program that accepts commands as text input. Command-line interfaces to computer operating systems are less widely used by casual computer users, who favor graphical user interfaces. Alternatives to the command line include, but are not limited to, text user interface menus and keyboard shortcuts; examples of these include Windows versions 1, 2, 3, 3.1, and 3.11, DosShell, and Mouse Systems PowerPanel. Command-line interfaces are often preferred by more advanced computer users, as they often provide a more concise and powerful means to control a program or operating system. Programs with command-line interfaces are also generally easier to automate via scripting. A program that implements such a text interface is often called a command-line interpreter, command processor or shell. Under most operating systems, it is possible to replace the default shell program with alternatives; examples include 4DOS for DOS and 4OS2 for OS/2. For example, the default Windows GUI is a shell program named EXPLORER.EXE; such programs are shells, but not CLIs. Application programs may also have command-line interfaces, in several ways: a program may be launched with parameters from an OS command line shell; after launch, a program may provide an operator with an independent means to enter commands in the form of text (an interactive command-line session); and, since most operating systems support some means of inter-process communication, command lines from client processes may be redirected to a CLI program by one of these methods. Some applications support only a CLI, presenting a CLI prompt to the user; some examples of CLI-only applications are DEBUG, Diskpart, Ed, Edlin, Fdisk and Ping. Some computer programs support both a CLI and a GUI. In some cases, a GUI is simply a wrapper around a separate CLI executable file; in other cases, a program may provide a CLI as an optional alternative to its GUI. 
CLIs and GUIs often support different functionality; for example, all features of MATLAB, a numerical analysis computer program, are available via the CLI, whereas the MATLAB GUI exposes only a subset of features. The early Sierra games, like the first three King's Quest games, used commands typed at a command line to move the character around in the graphic window. Early computer systems often used teleprinter machines as the means of interaction with a human operator: the computer became one end of the human-to-human teleprinter model, so instead of a human communicating with another human over a teleprinter, a human communicated with a computer. In time, the actual mechanical teleprinter was replaced by a glass tty (a keyboard and screen emulating the teleprinter), and then by a smart terminal
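The command-line-interpreter behavior described above (read a line of text, split it into a command name and arguments, dispatch to a handler) can be sketched in a few lines. This is a toy shell with hypothetical commands, not a real OS shell:

```python
import shlex

def make_shell(commands):
    """Return a function that interprets one command line via a dispatch table."""
    def interpret(line):
        parts = shlex.split(line)          # split respecting quotes, like a real shell
        if not parts:
            return ""
        name, args = parts[0], parts[1:]
        handler = commands.get(name)
        if handler is None:
            return f"{name}: command not found"
        return handler(*args)
    return interpret

# A dispatch table of illustrative commands.
shell = make_shell({
    "echo": lambda *args: " ".join(args),  # print its arguments back
    "rev": lambda s: s[::-1],              # reverse one argument
})
```

Calling `shell("echo hello world")` returns `"hello world"`; unknown names produce a "command not found" message, mirroring how real shells report unresolved commands.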
15.
Adobe Photoshop
–
Adobe Photoshop is a raster graphics editor developed and published by Adobe Systems for macOS and Windows. Photoshop was created in 1988 by Thomas and John Knoll. It can edit and compose raster images in multiple layers and supports masks, alpha compositing and several color models, including RGB, CMYK, CIELAB, spot color and duotone. Photoshop has vast support for graphic file formats but also uses its own PSD format. In addition to raster graphics, it has limited abilities to edit or render text, vector graphics and 3D graphics. Photoshop's feature set can be expanded by Photoshop plug-ins, programs developed and distributed independently of Photoshop that can run inside it. Photoshop's naming scheme was initially based on version numbers; Photoshop CS3 through CS6 were also distributed in two different editions, Standard and Extended. In June 2013, with the introduction of Creative Cloud branding, Photoshop's licensing scheme was changed to a software-as-a-service rental model and the CS suffixes were replaced with CC. Historically, Photoshop was bundled with software such as Adobe ImageReady, Adobe Fireworks, Adobe Bridge and Adobe Device Central. Alongside Photoshop, Adobe also develops and publishes Photoshop Elements, Photoshop Lightroom and Photoshop Express; collectively, they are branded as The Adobe Photoshop Family. It is currently licensed software. Photoshop was developed in 1987 by the American brothers Thomas and John Knoll, who sold the distribution license to Adobe Systems Incorporated in 1988. Thomas Knoll, a PhD student at the University of Michigan, began writing a program on his Macintosh Plus to display images on a monochrome display. This program, called Display, caught the attention of his brother John Knoll, an Industrial Light & Magic employee, and Thomas took a six-month break from his studies in 1988 to collaborate with his brother on the program. 
Thomas renamed the program ImagePro, but the name was already taken. During this time, John traveled to Silicon Valley and gave a demonstration of the program to engineers at Apple and to Russell Brown, art director at Adobe. Both showings were successful, and Adobe decided to purchase the license to distribute the program in September 1988. While John worked on plug-ins in California, Thomas remained in Ann Arbor writing code. Photoshop 1.0 was released on 19 February 1990 for Macintosh exclusively; the earlier Barneyscan version had included advanced color editing features that were stripped from the first Adobe-shipped version. The handling of color slowly improved with each release from Adobe. At the time Photoshop 1.0 was released, digital retouching on dedicated high-end systems, such as the Scitex, cost around $300 an hour for basic photo retouching. Photoshop files have the default file extension .PSD, which stands for Photoshop Document. A PSD file stores an image with support for most imaging options available in Photoshop, including layers with masks, transparency, text, alpha channels and spot colors, and clipping paths; this is in contrast to many other file formats that restrict content to provide streamlined, predictable functionality. A PSD file has a maximum height and width of 30,000 pixels
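The fixed 26-byte PSD file header, as documented by Adobe, is big-endian and carries the "8BPS" signature, version, channel count, image dimensions, bit depth and color mode; it can be read with a short script. The sample bytes below are synthetic, not from a real file:

```python
import struct

def read_psd_header(data):
    """Parse the 26-byte PSD header: signature, version, channels, size, depth, color mode."""
    sig, version, _reserved, channels, height, width, depth, mode = \
        struct.unpack(">4sH6sHIIHH", data[:26])
    if sig != b"8BPS":
        raise ValueError("not a PSD file")
    return {"version": version, "channels": channels,
            "height": height, "width": width,
            "depth": depth, "color_mode": mode}

# Synthetic header: version 1, 3 channels, 600x800 pixels, 8 bits/channel, RGB (mode 3).
sample = struct.pack(">4sH6sHIIHH", b"8BPS", 1, b"\x00" * 6, 3, 600, 800, 8, 3)
```

The 30,000-pixel dimension limit mentioned above applies to the height and width fields of this header (the newer PSB variant raises it).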
16.
IBM Lotus Freelance Graphics
–
Lotus Freelance Graphics is an information graphics and presentation program developed by Lotus Software following its acquisition of Graphic Communications Inc in 1986. Lotus Freelance Graphics is part of the Lotus SmartSuite office suite for Microsoft Windows. The pre-Windows version was an entirely keyboard-driven graphics package, although it would also work with a mouse. In a throwback to its developer, the executable for Freelance was named GCIFL long after the Lotus acquisition of Graphic Communications Inc. It was a points-and-vector graphics package and was easier to use accurately without the mouse; even with the mouse, keyboard control dominated for its speed, anticipating what Windows later introduced as keyboard shortcuts alongside slower ways of achieving the same thing with the mouse. The Windows-compatible version allows users to create and compile text, digital images, diagrams and basic drawings. The program was originally a drawing tool but was enhanced to include charting features, either by manually inputting data or by importing data from the Lotus 1-2-3 spreadsheet program. Freelance worked within DOS to produce slides and was a competitor to PowerPoint. Lotus opted to develop for OS/2 and got left behind on Windows, giving Microsoft a head start. IBM then acquired Lotus, and Microsoft used an issue with IBM to delay Lotus. With the general lack of adoption of OS/2, Freelance became a little-used system; it was eventually grafted into a new version of 1-2-3 for Windows, but by then PowerPoint and Excel had become dominant. The quality of the Freelance product started to deteriorate as IBM gave little support to SmartSuite while Microsoft continued to develop Office.
17.
Function key
–
On some keyboards/computers, function keys may have default actions, accessible on power-on. Function keys may have default actions printed on or beside them, or they may have the more common F-number designations. The Singer/Friden 2201 Flexowriter Programatic, introduced in 1965, had a cluster of 13 function keys, labeled F1 to F13, to the right of the main keyboard. Although the Flexowriter could be used as a computer terminal, this electromechanical typewriter was primarily intended as a stand-alone word processing system; the interpretation of the keys was determined by the programming of a plugboard inside the back of the machine. Soft keys date to the avionics multi-function displays of military planes of the late 1960s and early 1970s. The HP 9830A was an early desktop computer, and one of the earliest specifically computing uses of function keys; HP continued their use in the HP 2640, which used screen-labeled function keys, placing the keys close to the screen. NEC's PC-8001, introduced in 1979, featured five function keys at the top of the keyboard. Their modern use may have been popularized by IBM keyboards, first on the IBM 3270 terminals, then on the IBM PC. Later models replaced this with a keypad, and moved the function keys to 24 keys at the top of the keyboard. The original IBM PC keyboard had 10 function keys in a 2×5 matrix at the left of the keyboard; this was replaced by 12 keys in 3 blocks of 4 at the top of the keyboard in the Model M. Since the introduction of the Apple Extended Keyboard with the Macintosh II, keyboards with function keys have been available for Macs; they have not traditionally been a major part of the Mac user interface, however, and are generally only used by cross-platform programs. According to the Macintosh Human Interface Guidelines, they are reserved for customization by the user; current Mac keyboards include specialized function keys for controlling sound volume. 
The most recent Mac keyboards include 19 function keys, but keys F1–F4 and F7–F12 by default control features such as volume and media control. Former keyboards and the Apple Keyboard with Numeric Keypad have the F1–F19 keys. Apple Macintosh notebooks: function keys were not standard on Apple notebook hardware until the introduction of the PowerBook 5300; for the most part, Mac laptops have keys F1 through F12, with pre-defined actions for some, including controlling sound volume and screen brightness. Atari 8-bit family: four dedicated keys at the right-hand side or on the top of the keyboard; the Atari 1200XL had four additional keys labeled F1 through F4 with pre-defined actions. Atari ST: ten parallelogram-shaped keys in a horizontal row across the top of the keyboard, inset into the keyboard frame instead of popping up like normal keys. BBC Micro: red/orange keys F0 to F9 in a row above the number keys on top of the computer/keyboard; the Break, arrow, and Copy keys could function as F10–F15, and the case included a transparent plastic strip above them to hold a function key reference card. Coleco Adam: six dark brown keys in a row above the number keys
18.
Esc key
–
On computer keyboards, the Esc key is a key used to generate the escape character. It is now usually placed at the top left corner of the keyboard. The keyboard symbol for the Esc key is standardized in ISO/IEC 9995-7 as symbol 29; this symbol is encoded in Unicode as U+238B broken circle with northwest arrow. The Escape key's creation is credited to Bob Bemer, a programmer who worked for IBM; he created the key in 1960 to allow programmers working with different machines to switch from one type of code to another. Much later, printers and computer terminals that used escape sequences would often take more than one following byte as part of a special sequence. In Microsoft Windows, before keyboards included a Windows key, Ctrl+Esc was used to open the Start menu; this key combination still works as of Windows 10. Microsoft Windows makes use of Esc for many key shortcuts; many of these shortcuts have been present since Windows 3.0, through Windows XP, Windows Vista, Windows 7, Windows 8, and Windows 10. In OS X, Esc usually closes or cancels a dialog box or sheet; the ⌘ Command+⌥ Option+⎋ Esc combination opens the Force Quit dialog box, allowing users to end non-responsive programs. Another use for the Esc key, in combination with the Command key, is switching to Front Row, if installed. In most computer games, the key is used as a pause button and/or as a way to bring up the in-game menu. In the vi family of text editors, Escape is used to switch modes; this usage is due to Escape being conveniently placed in what is now the Tab position on the ADM-3A terminal keyboard used to develop vi, though it is now inconveniently placed. Although such devices are long out of use, standard processing of ANSI escape sequences, very similar to that of the 1970s VT100, is implemented in both ANSI
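The escape character that the Esc key generates is simply byte 0x1B (decimal 27), and an ANSI escape sequence is that byte followed by a control string. A minimal illustration:

```python
ESC = "\x1b"          # the character the Esc key generates (decimal 27)

# An ANSI "Select Graphic Rendition" sequence: ESC [ 31 m switches to red text,
# and ESC [ 0 m resets all attributes.
red = ESC + "[31m"
reset = ESC + "[0m"

# On a terminal that honors ANSI sequences, this prints "error" in red;
# repr() shows the otherwise-invisible escape bytes.
print(repr(red + "error" + reset))
```

The multi-byte structure after the escape character is what the text above means by terminals taking "more than one following byte as part of a special sequence".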
19.
Graphical user interface
–
GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces, which require commands to be typed on a computer keyboard. The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices and smartphones, and in smaller household, office and industrial controls. Designing the visual composition and temporal behavior of a GUI is an important part of application programming in the area of human–computer interaction; its goal is to enhance the efficiency and ease of use of the underlying logical design of a stored program. Methods of user-centered design are used to ensure that the visual language introduced in the design is well-tailored to the tasks. The visible graphical interface features of an application are sometimes referred to as chrome or GUI. Typically, users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold; the widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller architecture allows a flexible structure in which the interface is independent of, and indirectly linked to, application functions; this allows users to select or design a different skin at will. Good user interface design relates more to users, and less to system architecture. Large widgets, such as windows, usually provide a frame or container for the main presentation content such as a web page; smaller ones usually act as a user-input tool. A GUI may be designed for the requirements of a vertical market as an application-specific graphical user interface. By the 1990s, cell phones and handheld game systems also employed application-specific touchscreen GUIs; newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation-multimedia center combinations. 
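The model–view–controller separation mentioned above can be sketched minimally: the model holds state, the view renders it, and the controller translates user actions into model updates; swapping the view changes only the "skin". The class names here are illustrative:

```python
class CounterModel:
    """Holds application state, independent of any interface."""
    def __init__(self):
        self.value = 0

class CounterView:
    """Renders the model; replacing this class changes only the presentation."""
    def render(self, model):
        return f"count = {model.value}"

class CounterController:
    """Translates user actions (here, a click) into model updates."""
    def __init__(self, model):
        self.model = model
    def on_click(self):
        self.model.value += 1

model = CounterModel()
view = CounterView()
controller = CounterController(model)
controller.on_click()   # a simulated user action routed through the controller
```

Because the view reads the model rather than the controller, a second view (say, a graphical gauge) could render the same state without touching the other two classes.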
A GUI uses a combination of technologies and devices to provide a platform that users can interact with; a series of elements conforming to a visual language have evolved to represent information stored in computers. This makes it easier for people with few computer skills to work with and use computer software. The most common combination of such elements in GUIs is the windows, icons, menus, pointer (WIMP) paradigm, especially in personal computers. The WIMP style of interaction uses a virtual input device to represent the position of a pointing device, most often a mouse. Available commands are compiled together in menus, and actions are performed by making gestures with the pointing device; a window manager facilitates the interactions between windows, applications, and the windowing system. The windowing system handles hardware devices such as pointing devices and graphics hardware; window managers and other software combine to simulate the desktop environment with varying degrees of realism. Smaller mobile devices such as personal digital assistants and smartphones typically use the WIMP elements with different unifying metaphors, due to constraints in space
20.
Vega Strike
–
Vega Strike is a first-person space trading and combat simulator, developed for Microsoft Windows, Linux, FreeBSD and OS X systems. Many of the game mechanics of Vega Strike are indirectly inspired by Elite; other games, such as Wing Commander: Privateer, influenced the original developer. Vega Strike is programmed in C/C++ over the OpenGL 3D graphics API and performs internal scripting written in Python and XML. Released under the GNU General Public License, Vega Strike is free software. An unofficial remake of Wing Commander: Privateer entitled Wing Commander: Privateer - Gemini Gold was made using the Vega Strike engine and released in 2005. Financial gains allow the player to buy upgrades and/or better vehicles, thus enabling them to advance into more dangerous areas. The player can have varying levels of relations with factions: negative relations can form if the player kills some of a given faction's ships, and positive relations can be formed if the player destroys ships belonging to an enemy of a given faction. Players can either buy and sell cargo, or accept missions from the Mission Computer; in the tradition of some precursor games, individuals of significant plot importance are often found in bars. There is also a campaign in the game which assigns certain missions to the player, and the player can continue playing the game after the campaign is finished. To travel quickly to and from different planets and space stations in the same system, the player can engage a drive that multiplies the engine speed of the player's spacecraft, causing the ship to reach high speeds and allowing the player to quickly travel to different locations in a solar system. However, the factor by which it multiplies the speed is limited by gravity: the closer the player's ship is to a planet or space station, the lower the multiplier. To travel to different star systems, the player needs to go to weak points in space known as jump points. When the player equips their ship with a jump drive, the player needs to go to a jump point. 
When the player's ship is close enough to the jump point, the player can activate the jump drive. In typical solar systems there is an assortment of jump points, and the player may have to go through multiple systems and jump points to get to the destination system. Mission types include: Bounty, in which players are advised to be careful in their choice of targets, as every faction has its own friends; Patrol, in which a number of targets within a system must be scanned in detail by visiting each in turn; Clean Sweep, similar to patrol, but any hostiles encountered on the way must be eliminated; Defense, in which a target in the system is being attacked by enemy forces and the player must eliminate the attacking forces and keep the target from being destroyed (the target can range from a small merchant ship being attacked by some light forces, to a space station under assault by a large, well-planned attack force); and Rescue, in which a player must rescue a pilot and will be rewarded with credits
21.
XML
–
In computing, Extensible Markup Language (XML) is a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable. The W3C's XML 1.0 Specification and several other related specifications, all of them free open standards, define XML. The design goals of XML emphasize simplicity, generality, and usability across the Internet. It is a textual data format with strong support via Unicode for different human languages. Although the design of XML focuses on documents, the language is widely used for the representation of arbitrary data structures such as those used in web services. Several schema systems exist to aid in the definition of XML-based languages, and hundreds of document formats using XML syntax have been developed, including RSS, Atom, SOAP, SVG, and XHTML. XML-based formats have become the default for many office-productivity tools, including Microsoft Office, OpenOffice.org and LibreOffice; XML has also provided the base language for communication protocols such as XMPP. Applications for the Microsoft .NET Framework use XML files for configuration, and Apple has an implementation of a registry based on XML. XML has come into common use for the interchange of data over the Internet. IETF RFC 7303 gives rules for the construction of Internet media types for use when sending XML; it also defines the media types application/xml and text/xml, which say only that the data is in XML, and nothing about its semantics. The use of text/xml has been criticized as a potential source of encoding problems. RFC 7303 also recommends that XML-based languages be given media types ending in +xml. Further guidelines for the use of XML in a networked context appear in RFC 3470, also known as IETF BCP 70, a document covering many aspects of designing and deploying an XML-based language. 
The material in this section is based on the XML Specification; this is not an exhaustive list of all the constructs that appear in XML, but an introduction to the key constructs most often encountered in day-to-day use. Character: an XML document is a string of characters, and almost every legal Unicode character may appear in an XML document. Processor and application: the processor analyzes the markup and passes structured information to an application; the specification places requirements on what an XML processor must and must not do, but the application is outside its scope. The processor is often referred to colloquially as an XML parser. Markup and content: the characters making up an XML document are divided into markup and content, which may be distinguished by the application of simple syntactic rules. Generally, strings that constitute markup either begin with the character < and end with a >, or they begin with the character & and end with a semicolon; strings of characters that are not markup are content. However, in a CDATA section, the delimiters <![CDATA[ and ]]> are classified as markup, while the text between them is classified as content; in addition, whitespace before and after the outermost element is classified as markup. Tag: a tag is a markup construct that begins with < and ends with >
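The markup/content distinction described above can be seen with Python's standard xml.etree parser: element names, attributes and entity references come from markup, while the text between tags is content. The tiny document below is illustrative:

```python
import xml.etree.ElementTree as ET

doc = '<note lang="en"><to>Alice</to><body>5 &lt; 6</body></note>'
root = ET.fromstring(doc)

# Tag names and attributes are taken from the markup.
assert root.tag == "note"
assert root.attrib["lang"] == "en"

# .text holds the content between a start-tag and its end-tag.
assert root.find("to").text == "Alice"

# The entity reference &lt; (markup) is resolved to the character it names.
assert root.find("body").text == "5 < 6"
```

Note how the literal `<` could not appear in content directly, so the markup construct `&lt;` stands in for it, exactly as the syntactic rules above require.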
22.
Macintosh operating systems
–
The Macintosh operating systems are a family of operating systems developed by Apple Inc. In 1984, Apple debuted the operating system that is now known as the classic Mac OS with its release of the original Macintosh System Software. The system, rebranded Mac OS in 1996, was preinstalled on every Macintosh until 2002; noted for its ease of use, it was also criticized for its lack of modern technologies compared to its competitors. The current Mac operating system is macOS, originally named Mac OS X until 2012. The current macOS is preinstalled on every Mac and is updated annually, and it is the basis of Apple's current system software for its other devices, iOS and watchOS. Apple's effort to expand upon and develop a replacement for its classic Mac OS in the 1990s led to a few cancelled projects, code-named Star Trek and Taligent, among others. The Macintosh is credited with having popularized the graphical user interface concept. The classic Mac OS is the original Macintosh operating system that was introduced in 1984 alongside the first Macintosh and remained in primary use on Macs through 2001. Apple released the original Macintosh on January 24, 1984; its system software was partially based on the Lisa OS and the Xerox PARC Alto computer. It was originally named System Software, or simply System; Apple rebranded it as Mac OS in 1996, due in part to its Macintosh clone program, which ended a year later. Mac OS is characterized by its monolithic system design. Nine major versions of the classic Mac OS were released; the name Classic that now signifies the system as a whole is a reference to a compatibility layer that helped ease the transition to Mac OS X, although that system was marketed as simply version 10 of Mac OS. Precursors to the release of Mac OS X include OpenStep and Apple's Rhapsody project. macOS makes use of the BSD codebase and the XNU kernel; the first desktop version of the system was released on March 24, 2001, supporting the Aqua user interface. 
Since then, several more versions adding newer features and technologies have been released; since 2011, new releases have been offered on an annual basis. The initial release was followed by several more official server-based releases; server functionality has instead been offered as an add-on for the desktop system since 2011. In 1988, Apple released its first Unix-based OS, A/UX, which was a Unix operating system with the Mac OS look and feel; the first version of the system was ready for use in February 1988. It was not very competitive for its time, due in part to the crowded Unix market, and A/UX had most of its success in sales to the U.S. government, where POSIX compliance was a requirement that Mac OS could not meet. The Macintosh Application Environment (MAE) was a software package introduced by Apple in 1994 that allowed users of certain Unix-based computer workstations to run Apple Macintosh application software; MAE used the X Window System to emulate a Macintosh Finder-style graphical user interface
23.
Microsoft Windows
–
Microsoft Windows is a metafamily of graphical operating systems developed, marketed, and sold by Microsoft. It consists of several families of operating systems, each of which caters to a certain sector of the computing industry, with the OS typically associated with IBM PC compatible architecture. Active Windows families include Windows NT, Windows Embedded and Windows Phone; defunct Windows families include Windows 9x and Windows Mobile. Windows 10 Mobile is an active product, unrelated to the defunct family Windows Mobile. Microsoft introduced an operating environment named Windows on November 20, 1985. Microsoft Windows came to dominate the world's personal computer market with over 90% market share, overtaking Mac OS, which had been introduced in 1984. Apple came to see Windows as an encroachment on their innovation in GUI development as implemented on products such as the Lisa. On PCs, Windows is still the most popular operating system; however, in 2014, Microsoft admitted losing the majority of the overall operating system market to Android, because of the massive growth in sales of Android smartphones. In 2014, the number of Windows devices sold was less than 25% that of Android devices sold; this comparison, however, may not be fully relevant, as the two operating systems traditionally target different platforms. As of September 2016, the most recent version of Windows for PCs, tablets and smartphones is Windows 10; the most recent version for server computers is Windows Server 2016. A specialized version of Windows also runs on the Xbox One game console. Microsoft, the developer of Windows, has registered several trademarks, each of which denotes a family of Windows operating systems that target a specific sector of the computing industry. It now consists of three operating system subfamilies that are released almost at the same time and share the same kernel. Windows: the operating system for personal computers and tablets. 
The latest version is Windows 10; the main competitors of this family are macOS by Apple Inc. for personal computers and Android for mobile devices. Windows Server: the operating system for server computers; the latest version is Windows Server 2016. Unlike its client sibling, it has adopted a strong naming scheme; the main competitor of this family is Linux. Windows PE: a lightweight version of its Windows sibling, meant to operate as an operating system used for installing Windows on bare-metal computers; the latest version is Windows PE 10.0.10586.0. Windows Embedded: initially, Microsoft developed Windows CE as a general-purpose operating system for every device that was too resource-limited to be called a full-fledged computer. The following Windows families are no longer being developed: Windows 9x, whose consumer market Microsoft now addresses with Windows NT; and Windows Mobile, the predecessor to Windows Phone, which was a mobile operating system
24.
Unix
–
Among these is Apple's macOS, which is the Unix version with the largest installed base as of 2014. Many Unix-like operating systems have arisen over the years, of which Linux is the most popular. Unix was originally meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmer users. The system grew larger as it spread through academic circles and users added their own tools to the system. Unix was designed to be portable, multi-tasking and multi-user in a time-sharing configuration; these design concepts are collectively known as the Unix philosophy. By the early 1980s, users began seeing Unix as a potential universal operating system. Under Unix, the operating system consists of many utilities along with the master control program, the kernel. To mediate access to the hardware, the kernel has special rights, reflected in the division between user space and kernel space; the microkernel concept was later introduced in an effort to reverse the trend towards larger kernels and return to a system in which most tasks were completed by smaller utilities. In an era when a standard computer consisted of a hard disk for storage and a data terminal for input and output, the Unix file model worked quite well, as most I/O was linear. However, modern systems include networking and other new devices, and as graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues, and semaphores. In microkernel implementations, functions such as network protocols could even be moved out of the kernel. Multics introduced many innovations, but also had many problems. Frustrated by the size and complexity of Multics, but not by its aims, individual researchers at Bell Labs started withdrawing from the project; the last of them to leave Multics, Ken Thompson, Dennis Ritchie, M. D. McIlroy, and J. F. Ossanna, decided to redo the work on a much smaller scale. 
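Of the IPC mechanisms listed above, Unix domain sockets are directly usable from Python's standard library on Unix-like systems; socketpair() returns two already-connected endpoints, which keeps the sketch self-contained in one process:

```python
import socket

# Create a pair of connected sockets (a Unix domain socket pair on POSIX systems).
parent, child = socket.socketpair()

parent.sendall(b"ping")   # bytes written to one endpoint ...
msg = child.recv(4)       # ... are read from the other
child.sendall(b"pong")
reply = parent.recv(4)

parent.close()
child.close()
```

In real use, the two endpoints would typically be split across processes (for example, around os.fork()), which is exactly the client/server redirection of command lines and data that the text describes.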
The name Unics, a pun on Multics, was suggested for the project in 1970; Peter H. Salus credits Peter Neumann with the pun, while Brian Kernighan claims the coining for himself. In 1972, Unix was rewritten in the C programming language. Bell Labs produced several versions of Unix that are collectively referred to as Research Unix. In 1975, the first source license for UNIX was sold to faculty at the University of Illinois Department of Computer Science; UIUC graduate student Greg Chesson was instrumental in negotiating the terms of this license. During the late 1970s and early 1980s, the influence of Unix in academic circles led to the adoption of Unix by commercial startups, including Sequent, and to variants such as HP-UX, Solaris and AIX. In the late 1980s, AT&T Unix System Laboratories and Sun Microsystems developed System V Release 4, and in the 1990s, Unix-like systems grew in popularity as Linux and BSD distributions were developed through collaboration by a worldwide network of programmers
25.
Command key
–
The Command key, also historically known as the Apple key, clover key, open-Apple key, splat key, pretzel key, or propeller key, is a modifier key present on Apple keyboards. The Command key's purpose is to allow the user to enter keyboard commands in applications. An extended Macintosh keyboard, the most common type, has two Command keys, one on each side of the space bar; some compact keyboards have one only on the left. The ⌘ symbol was chosen by Susan Kare after Steve Jobs decided that the use of the Apple logo in the menu system would be an over-use of the logo. Apple's adaptation of the symbol, encoded in Unicode at U+2318 ⌘, was derived in part from its use in Scandinavian countries to denote places of interest; the symbol is known by various other names, including Saint John's Arms and Bowen knot. Apple's computers up through the 1979 Apple II Plus did not have a command key. The first model on which it appeared was the 1980 Apple III; this arrangement allowed for flexible combinations of a modifier key and base key with just a few extra wires and no ROM changes, since the Apple II could only register one key press at a time. In all these cases, the left Apple key had an outlined open Apple logo; the Apple Lisa had only the closed Apple logo. The ⌘ symbol thus appears in the Macintosh menus as the primary modifier key symbol. The original Macintosh also had an Option key, which was used primarily for entering extended characters. In 1986, the Apple IIGS was introduced; like the newer Macintosh computers to come, such as the Macintosh SE, it used the Apple Desktop Bus for its keyboard and mouse. However, it was still an Apple II. The Apple symbol was removed in the keyboard's 2007 redesign, making room for the key's name to appear: in the US, the keyboard now uses the label "command"; European keyboards use "cmd". On the keyboard of the NeXT Computer, that key was marked Command in green, and the menus were not marked with a symbol denoting the command key. 
Besides being used as a modifier for keyboard shortcuts, it was also used to alter the function of some keys: Command+⇧ Shift toggles alpha lock, Command+Return sends Enter, and Command+Volume-down toggles Mute. These functions were printed in green on the front side of the modified keys, as was also done on the Z, X, C and V keys. Command-Alternate-* triggers an uncatchable hardware reset, thereby hard-rebooting the computer. The Macintosh Human Interface Guidelines have always recommended that developers use the Command key for keyboard shortcuts. A small set of commands is standard across nearly all applications. If an application needs more shortcuts than can be obtained with the letters of the Latin alphabet
26.
Window (computing)
–
In computing, a window is a graphical control element. It consists of a visual area containing some of the graphical user interface of the program it belongs to and is framed by a window decoration. It usually has a rectangular shape that can overlap with the area of other windows. It displays the output of, and may allow input to, one or more processes. Windows are primarily associated with graphical displays, where they can be manipulated with a pointer by employing some kind of pointing device. Text-only displays can also support windowing, as a way to maintain multiple independent display areas; text windows are generally controlled by keyboard, though some also respond to the mouse. A graphical user interface that uses windows as one of its main metaphors is called a windowing system, whose main components are the display server and the window manager. The idea was developed at the Stanford Research Institute; its earliest systems supported multiple windows, but there was no obvious way to indicate boundaries between them. Research continued at Xerox Corporation's Palo Alto Research Center (PARC); during the 1980s the term WIMP, which stands for window, icon, menu, pointer, was coined at PARC. Apple had worked with PARC briefly at that time and developed an interface based on PARC's. It was first used on Apple's Lisa and later Macintosh computers. Microsoft was developing office applications for the Mac at that time, and some speculate that this gave it access to Apple's OS before it was released. Windows are two-dimensional objects arranged on a plane called the desktop. In a modern full-featured windowing system they can be resized, moved, hidden and restored. Windows usually include other graphical objects, possibly including a menu-bar, toolbars, controls, icons and often a working area. In the working area, the document, image, folder contents or other main object is displayed. Around the working area, within the window, there may be other smaller window areas, sometimes called panes or panels. 
The working area of a single document interface holds only one main object. Child windows in multiple document interfaces, and tabs, for example in many web browsers, can make several similar documents or main objects available within a single main application window. Some windows in Mac OS X have a feature called a drawer, a pane that slides out from an edge of the window. Applications that can run either under a graphical user interface or in a text user interface may use different terminology
27.
Boldface
–
In typography, emphasis is the strengthening of words in a text with a font in a different style from the rest of the text, in order to make them stand out. It is the equivalent of prosodic stress in speech. The most common methods in Western typography fall under the general technique of emphasis through a change or modification of font: italics, boldface and small caps. Other methods include the alteration of letter case and spacing as well as color. The human eye is very receptive to differences in brightness within a text body; therefore, one can differentiate types of emphasis according to whether the emphasis changes the "blackness" of the text. With one or the other of these techniques, words can be highlighted without making them stand out much from the rest of the text. This was used for marking passages that have a different context, such as words from foreign languages or book titles. By contrast, a bold font weight makes text darker than the surrounding text; for example, printed dictionaries often use boldface for their keywords, and the names of entries can conventionally be marked in bold. Small capitals are also used for emphasis, especially for the first line of a section, sometimes accompanied by or instead of a drop cap. If the text body is typeset in a serif typeface, it is also possible to highlight words by setting them in a sans-serif face; this is possible with some font superfamilies, which come with matching serif and sans-serif variants. In Japanese typography, the reduced legibility of heavier Minchō type has led to other methods of emphasis being preferred. Of these methods, italics, small capitals and capitalisation are the oldest, with bold type a much later invention. The house styles of many publishers in the United States use all-caps text for chapter and section headings, newspaper headlines, publication titles, warning messages and words of important meaning. Capitalization is used less commonly today by British publishers. All-uppercase letters are a form of emphasis where the medium lacks support for boldface, such as old typewriters, plain-text email and SMS. 
Culturally, all-caps text has become an indication of shouting, for example when quoting speech, and it was also once often used by American lawyers to indicate important points in a legal text. Another means of emphasis is to increase the spacing between the letters, rather than making them darker, while still achieving a distinction in blackness. This results in an effect the reverse of boldface: the emphasized text becomes lighter than its environment. This is often used in typesetting and in typewriter manuscripts. On typewriters a full space was used between the letters of an emphasized word and also one before and one after the word
28.
Italic type
–
In typography, italic type is a cursive font based on a stylised form of calligraphic handwriting. Owing to the influence of calligraphy, italics normally slant slightly to the right. Italics are a way to emphasise key points in a printed text, or, when quoting a speaker, a way to show which words they stressed. One manual of English usage described italics as the print equivalent of underlining. The name comes from the fact that calligraphy-inspired typefaces were first designed in Italy, to replace documents traditionally written in a handwriting style called chancery hand. Aldus Manutius and Ludovico Arrighi were the type designers involved in this process at the time. Different glyph shapes from roman type are usually used – another influence from calligraphy. An alternative is oblique type, in which the type is slanted but the letterforms do not change shape; this less elaborate approach is used by many sans-serif typefaces. Italic type was first used by Aldus Manutius and his press in Venice in 1500. Manutius used italic not for emphasis but for the text of small, easily carried editions of popular books, replicating the style of handwritten manuscripts of the period. The choice of using italic type, rather than the roman type in general use at the time, was apparently made to suggest informality in editions designed for leisure reading. Manutius's italic type, cut and conceived by his employee, the punchcutter Francesco Griffo, replicated the handwriting of the period, following the style of Niccolò de' Niccoli. The first use in a full volume was a 1501 edition of Virgil dedicated to Italy. Manutius's italic was different in some ways from modern italics: unlike the italic type of today, the capital letters were upright capitals on the model of Roman square capitals, shorter than the ascending lower-case italic letters. While modern italics are often more condensed than roman types, historian Harry Carter describes Manutius's italic as about the same width as roman type. 
To replicate handwriting, Griffo cut at least sixty-five tied letters (ligatures) in the Aldine Dante; italic typefaces of the following century used varying but reduced numbers of ligatures. Manutius's type rapidly became popular in its own day and was widely imitated. The Venetian Senate gave Aldus exclusive right to its use, a patent confirmed by three successive Popes, but it was widely counterfeited as early as 1502. Griffo, who had left Venice in a dispute, cut a version for the printer Girolamo Soncino. The Italians called the character Aldino, while others called it Italic. Italics spread rapidly; historian Hendrik Vervliet dates the first production of italics in Paris to 1512. Chancery italics faded as a style over the course of the sixteenth century, although revivals were made beginning in the twentieth century. Chancery italics may have backward-pointing serifs or round terminals pointing forwards on the ascenders; Vervliet comments that among punchcutters the main name associated with the change is Granjon's
29.
Underline
–
An underline, also called an underscore, is a more or less horizontal line immediately below a portion of writing. Single, and occasionally double, underlining is used in hand-written or typewritten documents as a way to emphasise key text; in printed documents underlining is generally avoided, with italics or small caps often used instead, or capitalization or bold type. Underlines are sometimes used as a diacritic, to indicate that a letter has a different pronunciation from its non-underlined form. In web browsers, default settings typically distinguish hyperlinks by underlining them. The HTML semantic inline element <ins>, denoting inserted text, is often presented as underlined text; HTML also has a presentational element <u>, denoting underlined text. Similar elements may also exist in other markup languages, such as MediaWiki markup. The Text Encoding Initiative provides a selection of related elements for marking editorial activity. Wikipedia's style guide recommends that text should never be underlined; however, if it is necessary to underline text on a MediaWiki-server page, CSS can be used, and it is also possible to enclose the text in the HTML tags <u> (to begin underlining) and </u> (to end it). Unicode has the combining diacritic "combining low line" at U+0332 ◌̲, which results in an underline when run together with other characters; it is not to be confused with the combining macron below. For example, the combining low line can be appended after each letter of a word to render that word underlined. Underlining is often used by spell checkers to denote misspelled or otherwise incorrect text. In Chinese, the underline is a punctuation mark for proper names; its meaning is somewhat akin to capitalization in English, and it should never be used for emphasis, even if the influence of English computing makes the latter sometimes occur. A wavy underline serves a similar function, but marks names of literary works instead of proper names. 
In the case of two or more adjacent proper names, each proper name is separately underlined so there should be a slight gap between the underlining of each proper name
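The Unicode mechanism described above can be sketched in a few lines of Python. The helper names `underline` and `underline_names` below are illustrative, not standard library functions: the first appends U+0332 COMBINING LOW LINE after each character, and the second underlines adjacent proper names separately so that a gap remains between them, as in the Chinese proper-name convention.

```python
U_LOW_LINE = "\u0332"  # COMBINING LOW LINE

def underline(text: str) -> str:
    """Follow each character with U+0332 so renderers draw a low line under it."""
    return "".join(ch + U_LOW_LINE for ch in text)

def underline_names(names: list[str]) -> str:
    """Underline each proper name separately, leaving the joining spaces bare."""
    return " ".join(underline(name) for name in names)

print(underline("Beijing"))
print(underline_names(["Sun", "Yat-sen"]))
```

Whether the result displays as a continuous line depends on the font and renderer, which is one reason markup-level underlining is usually preferred over combining characters.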
30.
Computer file
–
A computer file is a computer resource for recording data discretely in a computer storage device. Just as words can be written to paper, so can information be written to a computer file. There are different types of computer files, designed for different purposes: a file may be designed to store a picture, a message or a video, and some types of files can store several types of information at once. By using computer programs, a person can open, read, change, and close a computer file, and computer files may be reopened, modified, and copied any number of times. Typically, computer files are organised in a file system, which keeps track of where the files are located. The word file derives from the Latin filum ("a thread"). An early use of the word in the context of computer storage appeared in a 1950 RCA advertisement for a "memory tube": "such a file now exists... Electronically it retains figures fed into calculating machines, holds them in storage while it memorizes new ones - speeds intelligent solutions through mazes of mathematics." In 1952, file denoted, inter alia, information stored on punched cards. In early use, the underlying hardware, rather than the contents stored on it, was denominated a file; for example, the IBM 350 disk drives were denominated disk files. Although the contemporary register file demonstrates the early concept of files, its use has greatly decreased. On most modern operating systems, files are organized into one-dimensional arrays of bytes. For example, the bytes of a plain text file are associated with either ASCII or UTF-8 characters, while the bytes of image, video, and audio files are interpreted otherwise. Most file types also allocate a few bytes for metadata, which allows a file to carry some basic information about itself. Some file systems can store arbitrary file-specific data outside of the file format; on other file systems this can be done via sidecar files or software-specific databases. All those methods, however, are more susceptible to loss of metadata than are container formats. 
At any instant in time, a file has a size, normally expressed as a number of bytes; in most modern operating systems the size can be any non-negative whole number of bytes up to a system limit. Many older operating systems kept track only of the number of blocks or tracks occupied by a file on a storage device; in such systems, software employed other methods to track the exact byte count. The general definition of a file does not require that its size have any real meaning, however, unless the data within the file happens to correspond to data within a pool of persistent storage. A special case is a zero-byte file; these files can be newly created files that have not yet had any data written to them, or may serve as some kind of flag in the file system, or are accidents
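The byte-array view of files described above can be illustrated with a short Python sketch; the file names used here are arbitrary examples, and a temporary directory stands in for any storage device.

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    # A file is a one-dimensional array of bytes; its size is the byte count.
    data_path = os.path.join(d, "example.bin")
    with open(data_path, "wb") as f:
        f.write(b"hello")                  # write five bytes
    print(os.path.getsize(data_path))      # 5

    # A zero-byte file: created but never written to, usable as a flag.
    empty_path = os.path.join(d, "empty.flag")
    open(empty_path, "wb").close()
    print(os.path.getsize(empty_path))     # 0
```

How those five bytes are interpreted (as ASCII text, part of an image, and so on) is entirely up to the programs that read the file, exactly as the passage above describes.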
31.
Printing
–
Printing is a process for reproducing text and images using a master form or template. The earliest examples include cylinder seals and other objects such as the Cyrus Cylinder. The earliest known form of printing came from China, dating to before 220 AD. Later developments in printing include movable type, first developed by Bi Sheng in China around 1040 AD. Johannes Gutenberg introduced mechanical movable-type printing to Europe in the 15th century. Modern large-scale printing is typically done using a printing press, while small-scale printing is done free-form with a digital printer. Though paper is the most common material, printing is frequently done on metals, plastics and cloth. On paper it is carried out as a large-scale industrial process and is an essential part of publishing. Woodblock printing is a technique for printing text, images or patterns that was used widely throughout East Asia. It originated in China in antiquity as a method of printing on textiles and later on paper. As a method of printing on cloth, the earliest surviving examples from China date to before 220 AD; the earliest surviving woodblock-printed fragments are from China, of silk printed with flowers in three colours from the Han Dynasty. The earliest example of woodblock printing on paper appeared in the mid-seventh century in China. By the ninth century, printing on paper had taken off; by the tenth century, 400,000 copies of some sutras and pictures were printed, and the Confucian classics were in print. A skilled printer could print up to 2,000 double-page sheets per day. Printing spread early to Korea and Japan, which also used Chinese logograms, but the technique was also used in Turpan and Vietnam using a number of other scripts. The technique then spread to Persia and Russia, and was transmitted to Europe via the Islamic world, where by around 1400 it was being used on paper for old master prints and playing cards. 
However, Arabs never used this technique to print the Quran because of limits imposed by Islamic doctrine. Block printing, called tarsh in Arabic, developed in Arab Egypt during the ninth and tenth centuries, mostly for prayers and amulets. There is some evidence to suggest that these print blocks were made from non-wood materials, possibly tin or lead; the techniques employed are uncertain, however, and they appear to have had very little influence outside of the Muslim world. Though Europe adopted woodblock printing from the Muslim world, initially for fabric, block printing went out of use in Islamic Central Asia after movable-type printing was introduced from China. Block printing first came to Europe as a method for printing on cloth, and images printed on cloth for religious purposes could be quite large and elaborate. When paper became relatively easily available, around 1400, the medium transferred very quickly to small religious images
32.
File manager
–
A file manager or file browser is a computer program that provides a user interface to manage files and folders. Folders and files may be displayed in a hierarchical tree based on their directory structure. Some file managers contain features inspired by web browsers, including forward and back navigation buttons. Some file managers provide network connectivity via protocols such as FTP, NFS, SMB or WebDAV; this is achieved either by allowing the user to browse for a file server or by providing full client implementations for file-server protocols. A term that predates the usage of file manager is directory editor. The term was used by various developers, including Jay Lepreau, who wrote the dired program in 1980, which ran on BSD; this was in turn inspired by an older program with the same name running on TOPS-20. Dired inspired other programs, including the Emacs directory-editor script of the same name. File-list file managers are lesser known and older than orthodox file managers. One such file manager is flist, which was first used in 1981 on the Conversational Monitor System; it is a variant of fulist, which originated before late 1978, according to comments by its author, Theo Alkema. The flist program provided a list of files in the user's minidisk, and file attributes could be passed to scripts or function-key definitions, making it simple to use flist as part of CMS EXEC, EXEC2 or XEDIT scripts. Orthodox file managers, or command-based file managers, are text-menu-based file managers. They are one of the longest-running families of file managers, preceding graphical user interface-based types. Developers create applications that duplicate and extend the interface that was introduced by PathMinder; the concept dates back more than thirty years, as PathMinder was released in 1984 and Norton Commander version 1.0 was released in 1986. 
Despite the age of this concept, file managers based on Norton Commander are actively developed, and dozens of implementations exist for DOS, Unix and Microsoft Windows. Nikolai Bezroukov publishes his own set of criteria for an OFM standard. An orthodox file manager typically has three windows. Two of the windows are called panels and are positioned symmetrically at the top of the screen; the third is the command line, which is essentially a minimized command window that can be expanded to full screen. Only one of the panels is active at a given time; the active panel contains the file cursor. Panels are resizable and can be hidden. Files in the active panel serve as the source of file operations performed by the manager: for example, files can be copied or moved from the active panel to the location represented in the passive panel. This scheme is most effective for systems in which the keyboard is the primary or sole input device. The active panel shows information about the current working directory and the files that it contains
33.
String searching algorithm
–
Formally, both the pattern and searched text are vectors of elements of an alphabet Σ. Σ may be a usual human alphabet; other applications may use a binary alphabet or a DNA alphabet in bioinformatics. In practice, how the string is encoded can affect the feasible string-search algorithms; in particular, if a variable-width encoding is in use then it is slow to find the Nth character, which will significantly slow down many of the more advanced search algorithms. A possible solution is to search for the sequence of code units instead. The various algorithms can be classified by the number of patterns each uses. In the following, let m be the length of the pattern and n the length of the searchable text; asymptotic times are expressed using O, Ω, and Θ notation. The Boyer–Moore string search algorithm has been the standard benchmark for the practical string-search literature. Algorithms for searching with a finite set of patterns include the Aho–Corasick string matching algorithm, the Commentz-Walter algorithm, Set-BOM, and the Rabin–Karp string search algorithm. Where the patterns cannot be enumerated finitely, they are usually represented by a regular grammar or regular expression. One common classification uses preprocessing as the main criterion: deterministic finite automata are expensive to construct (they are usually created using the powerset construction) but are very quick to use. For example, a DFA can be built that recognizes the word MOMMY, and this approach is frequently generalized in practice to search for arbitrary regular expressions. Baeza–Yates keeps track of whether the previous j characters were a prefix of the search string; the bitap algorithm is an application of the Baeza–Yates approach. Faster search algorithms are based on preprocessing of the text: after building a substring index, for example a suffix tree or suffix array, the occurrences of a pattern can be found quickly. The latter can be accomplished by running a DFS algorithm from the root of the suffix tree. Some search methods, for instance trigram search, are intended to find a closeness score between the search string and the text rather than a match/non-match. 
These are sometimes called fuzzy searches. See also: sequence alignment, pattern matching, compressed pattern matching. R. S. Boyer and J. S. Moore, "A Fast String Searching Algorithm", Communications of the ACM, 1977. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest et al., Introduction to Algorithms, chapter 32, "String Matching", pp. 985–1013
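As an illustration of the single-pattern algorithms surveyed above, here is a minimal Rabin–Karp sketch in Python. The base and modulus below are arbitrary illustrative choices (production implementations select them more carefully, often randomizing the modulus), and the function name is ours, not a library API. It maintains a rolling hash over an m-character window and verifies candidates by direct comparison to rule out hash collisions.

```python
def rabin_karp(text: str, pattern: str, base: int = 256, mod: int = 1_000_003) -> list[int]:
    """Return the start index of every occurrence of pattern in text."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    # base^(m-1) mod mod, used to remove the window's leading character.
    high = pow(base, m - 1, mod)
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        # On a hash match, compare the actual substrings to discard collisions.
        if p_hash == t_hash and text[i:i + m] == pattern:
            hits.append(i)
        if i < n - m:
            # Roll the hash: drop text[i], append text[i + m].
            t_hash = ((t_hash - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return hits

print(rabin_karp("MOMMY MOM", "MOM"))  # [0, 6]
```

The rolling-hash update gives O(n + m) expected time, in contrast to the O(nm) worst case of the naive method; the verification step keeps the output exact even when two windows collide under the hash.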
34.
QWERTY
–
QWERTY is a keyboard layout for Latin script. The name comes from the order of the first six keys on the top left letter row of the keyboard. The QWERTY design is based on a layout created for the Sholes and Glidden typewriter and sold to Remington in 1873; it became popular with the success of the Remington No. 2. The QWERTY layout was devised and created in the early 1870s by Christopher Latham Sholes, a newspaper editor and printer who lived in Milwaukee, Wisconsin. In October 1867, Sholes filed a patent application for the writing machine he developed with the assistance of his friends, including Carlos Glidden. The machine had two significant defects. Firstly, characters were mounted on arms or typebars, which would clash and jam if neighboring keys were struck at the same time or in rapid succession. Secondly, its printing point was located beneath the paper carriage, invisible to the operator. Consequently, jams were especially serious, because the typist could only discover the mishap by raising the carriage to inspect what had been typed. The solution was to place commonly used letter-pairs so that their typebars were not neighboring, avoiding jams. Sholes struggled for the next five years to perfect his invention. The study of bigram frequency by educator Amos Densmore, brother of the financial backer James Densmore, is believed to have influenced the arrangement of letters; others suggest instead that the letter arrangement evolved from telegraph operators' feedback. In November 1868 he changed the arrangement of the latter half of the alphabet, O to Z. Later adjustments included placing the R key in the position previously allotted to the period key. Apocryphal claims that this change was made to let salesmen impress customers by pecking out the brand name TYPE WRITER from one keyboard row are not formally substantiated. Vestiges of the original alphabetical layout remained in the home-row sequence DFGHJKL. The QWERTY layout became popular with the success of the Remington No. 2 of 1878, the first typewriter to include both upper and lower case letters, using a shift key. 
The 0 and 1 keys were omitted to simplify the design and reduce manufacturing and maintenance costs; these digits were chosen specifically because they were redundant and could be recreated using other keys. Typists who learned on these machines learned the habit of using the lowercase letter l for the digit one. In early designs, some characters were produced by printing two symbols with the carriage in the same position. For instance, the exclamation point, which shares a key with the numeral 1 on modern keyboards, could be reproduced by using a three-stroke combination of an apostrophe, a backspace, and a period.