A web browser is a software application for accessing information on the World Wide Web. Each individual web page, image, and video is identified by a distinct Uniform Resource Locator (URL), enabling browsers to retrieve these resources from a web server and display them on the user's device. A web browser is not the same thing as a search engine, though the two are often confused. For a user, a search engine is just a website, such as google.com, that stores searchable data about other websites. But to connect to a website's server and display its web pages, a user needs to have a web browser installed on their device; the most popular browsers are Chrome, Safari, Internet Explorer, and Edge. The first web browser, called WorldWideWeb, was invented in 1990 by Sir Tim Berners-Lee, who then recruited Nicola Pellow to write the Line Mode Browser, which displayed web pages on dumb terminals. 1993 was a landmark year with the release of Mosaic, credited as "the world's first popular browser". Its innovative graphical interface made the World Wide Web easy to use and thus more accessible to the average person.
This, in turn, sparked the Internet boom of the 1990s, when the Web grew at a rapid rate. Marc Andreessen, the leader of the Mosaic team, soon started his own company, which released the Mosaic-influenced Netscape Navigator in 1994. Navigator became the most popular browser. Microsoft debuted Internet Explorer in 1995. Microsoft was able to gain a dominant position for two reasons: it bundled Internet Explorer with its popular Microsoft Windows operating system, and it did so as freeware with no restrictions on usage. Internet Explorer's market share peaked at over 95% in 2002. In 1998, desperate to remain competitive, Netscape launched what would become the Mozilla Foundation to create a new browser using the open-source software model; this work evolved into Firefox, first released by Mozilla in 2004. Firefox reached a 28% market share in 2011. Apple released its Safari browser in 2003; it remains the dominant browser on Apple platforms. The last major entrant to the browser market was Google; its Chrome browser, which debuted in 2008, has been a huge success.
Once a web page has been retrieved, the browser's rendering engine displays it on the user's device, including any video formats the browser supports. Web pages contain hyperlinks to other pages and resources; each link contains a URL, and when it is clicked, the browser navigates to the new resource. Thus the process of bringing content to the user begins again. To implement all of this, modern browsers combine numerous software components. Web browsers can be configured through a built-in menu. Depending on the browser, the menu may be named Options or Preferences, and it offers different types of settings. For example, users can change their home page and default search engine, and they can change default web page colors and fonts. Various network connectivity and privacy settings are also usually available. During the course of browsing, cookies received from various websites are stored by the browser; some of them contain login credentials or site preferences. However, others are used for tracking user behavior over long periods of time, so browsers provide settings for removing cookies automatically when the browser is closed.
Finer-grained management of cookies requires a browser extension. The most popular browsers have a number of features in common: they allow users to browse in a private mode, they can be customized with extensions, and some of them provide a synchronization service. Most browsers have these user interface features: the ability to open multiple pages at the same time, either in different browser windows or in different tabs of the same window; back and forward buttons to go to the previous or the next page; a refresh or reload button to reload the current page; a stop button to cancel loading the page; a home button to return to the user's home page; an address bar to display and edit the URL of the current page; and a search bar to input terms into a search engine. There are also niche browsers with distinct features, for example text-only browsers, which can benefit people with slow Internet connections or with visual impairments.
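To make the retrieval cycle described above concrete, the following sketch fetches the resource named by a URL, collects the hyperlinks on the page, and keeps whatever cookies the server sets. It is only an illustration using the Python standard library, not a description of any real browser's implementation; the URL and class name are assumptions made for the example.

```python
# Illustrative sketch: roughly what a browser does on each navigation.
import urllib.request
from http.cookiejar import CookieJar
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href attributes from <a> tags, i.e. the hyperlinks on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

cookies = CookieJar()  # cookies received from sites are stored here
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cookies))

with opener.open("https://example.com/") as response:   # retrieve the resource named by the URL
    html = response.read().decode("utf-8", errors="replace")

parser = LinkCollector()
parser.feed(html)
print("Links found:", parser.links)               # following one of these starts the cycle again
print("Cookies stored:", [c.name for c in cookies])
```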
Mainframe computers or mainframes are computers used by large organizations for critical applications. They are larger and have more processing power than some other classes of computers, such as minicomputers, servers, and personal computers. The term originally referred to the large cabinets called "main frames" that housed the central processing unit and main memory of early computers; later it was used to distinguish high-end commercial machines from less powerful units. Most large-scale computer system architectures were established in the 1960s but continue to evolve. Mainframe computers are often used as servers. Modern mainframe design is characterized less by raw computational speed and more by: redundant internal engineering resulting in high reliability and security; extensive input-output facilities with the ability to offload processing to separate engines; strict backward compatibility with older software; high hardware and computational utilization rates through virtualization to support massive throughput; and hot-swapping of hardware, such as processors and memory.
Their high stability and reliability enable these machines to run uninterrupted for long periods of time, with mean time between failures measured in decades. Mainframes have high availability, one of the primary reasons for their longevity, since they are used in applications where downtime would be costly or catastrophic; the term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning and implementation are required to realize these features. In addition, mainframes are more secure than other computer types: the NIST vulnerabilities database, US-CERT, rates traditional mainframes such as IBM Z, Unisys Dorado and Unisys Libra as among the most secure, with vulnerabilities in the low single digits as compared with thousands for Windows, UNIX, and Linux. Software upgrades require setting up the operating system or portions thereof, and are non-disruptive only when using virtualizing facilities such as IBM z/OS and Parallel Sysplex, or Unisys XPCL, which support workload sharing so that one system can take over another's application while it is being refreshed.
In the late 1950s, mainframes had only a rudimentary interactive interface and used sets of punched cards, paper tape, or magnetic tape to transfer data and programs. They operated in batch mode to support back office functions such as payroll and customer billing, most of which were based on repeated tape-based sorting and merging operations followed by line printing to preprinted continuous stationery. When interactive user terminals were introduced, they were used exclusively for applications rather than program development. Typewriter and Teletype devices were common control consoles for system operators through the early 1970s, although they were eventually supplanted by keyboard/display devices. By the early 1970s, many mainframes acquired interactive user terminals operating as timesharing computers, supporting hundreds of users along with batch processing. Users gained access through keyboard/typewriter terminals and specialized text terminal CRT displays with integral keyboards, or from personal computers equipped with terminal emulation software.
By the 1980s, many mainframes supported graphic display terminals and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the 1990s due to the advent of personal computers provided with GUIs. After 2000, modern mainframes partially or entirely phased out classic "green screen" and color display terminal access for end-users in favour of Web-style user interfaces. Infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM claimed that its newer mainframes reduced data center energy costs for power and cooling and reduced physical space requirements compared to server farms. Modern mainframes can run multiple different instances of operating systems at the same time; this technique of virtual machines allows applications to run as if they were on physically distinct computers. In this role, a single mainframe can replace higher-functioning hardware services available to conventional servers.
While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication. Mainframes can add or hot-swap system capacity without disrupting system function, with specificity and granularity to a level of sophistication not available with most server solutions. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer two levels of virtualization: logical partitions (LPARs) and virtual machines. Many mainframe customers run two machines: one in their primary data center and one in their backup data center (fully active, partially active, or on standby) in case there is a catastrophe affecting the first building. Test, development and production workloads for applications and databases can run on a single machine, except for large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages.
In practice, many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD, or with shared, geographically dispersed storage provided by EMC.
A content format is an encoded format for converting a specific type of data to displayable information. Content formats are used in recording and transmission to prepare data for observation or interpretation; this includes digitized content. Content formats may be read by either natural or manufactured tools and mechanisms. In addition to converting data to information, a content format may include the encryption and/or scrambling of that information. Multiple content formats may be contained within a single section of a storage medium or transmitted via a single channel of a transmission medium. With multimedia, multiple tracks containing multiple content formats are presented simultaneously. Content formats may either be recorded in secondary signal processing methods such as a software container format or recorded in the primary format. Observable data is known as raw data, or raw content. A primary raw content format may be directly observable or physical data which only requires hardware to display it, such as a phonographic needle and diaphragm or a projector lamp and magnifying glass.
There have been countless content formats throughout history. The following are examples of some common content formats and content format categories: communication representation; content carrier signals; content multiplexing formats; content transmission, including wireless content transmission; data storage devices and recording formats; encoders; and analog television standards such as NTSC, PAL and SECAM.
A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over cable media such as wires or optic cables, or wireless media such as Wi-Fi. Network computer devices that originate, route and terminate the data are called network nodes. Nodes are identified by network addresses and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers, printers and fax machines, and use of email and instant messaging applications, as well as many others.
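The layering just described, in which an application-specific protocol rides on a more general one, can be sketched briefly in Python. This is only an illustrative sketch, not taken from the article: the one-line request/reply protocol, the 127.0.0.1 address and the port number are invented for the example, and TCP (itself layered over IP) is supplied by the standard socket library.

```python
# Two nodes exchange an application-layer message over a TCP connection.
import socket
import threading

HOST, PORT = "127.0.0.1", 5000        # hypothetical node address: IP address + TCP port

srv = socket.create_server((HOST, PORT))   # bind and listen before any client connects

def serve_once():
    """The node that terminates the data: reads one line, answers, and exits."""
    conn, addr = srv.accept()
    with conn:
        request = conn.recv(1024).decode().strip()      # application-layer message
        conn.sendall(f"HELLO {request}\n".encode())     # application-layer reply
    srv.close()

threading.Thread(target=serve_once, daemon=True).start()

# The node that originates the data opens a connection and exchanges messages over it.
with socket.create_connection((HOST, PORT)) as client:
    client.sendall(b"node-A\n")
    print(client.recv(1024).decode().strip())           # -> "HELLO node-A"
```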
Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, the traffic control mechanisms, and organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes: In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with his student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, and later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the Aloha network developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of one gigabit per second. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls, and video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows sharing of files, data, and other types of information, giving authorized users the ability to access information stored on other computers on the network.
A bank is a financial institution that accepts deposits from the public and creates credit. Lending activities can be performed either directly or indirectly through capital markets. Due to their importance in the financial stability of a country, banks are regulated in most countries. Most nations have institutionalized a system known as fractional reserve banking, under which banks hold liquid assets equal to only a portion of their current liabilities. In addition to other regulations intended to ensure liquidity, banks are subject to minimum capital requirements based on an international set of capital standards known as the Basel Accords. Banking in its modern sense evolved in the 14th century in the prosperous cities of Renaissance Italy but in many ways was a continuation of ideas and concepts of credit and lending that had their roots in the ancient world. In the history of banking, a number of banking dynasties – notably, the Medicis, the Fuggers, the Welsers, the Berenbergs, and the Rothschilds – have played a central role over many centuries.
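As a rough illustration of the fractional reserve mechanism mentioned above, the short sketch below uses invented figures (a 10% reserve ratio and a 1,000-unit initial deposit, neither taken from the article) to show how repeated lending and redepositing expands total deposits to roughly the initial deposit divided by the reserve ratio, with reserves equal to the required fraction of those deposits.

```python
# Hedged illustration of deposit expansion under fractional reserve banking.
reserve_ratio = 0.10          # banks hold 10% of each deposit as liquid reserves
initial_deposit = 1_000.0     # hypothetical first deposit

deposit, total_deposits, total_reserves = initial_deposit, 0.0, 0.0
for _ in range(1000):                       # iterate until the amounts become negligible
    total_deposits += deposit
    total_reserves += deposit * reserve_ratio
    deposit *= (1 - reserve_ratio)          # the lent-out portion is redeposited elsewhere

print(round(total_deposits))                # ~10000: initial_deposit / reserve_ratio
print(round(total_reserves))                # ~1000: the required fraction of total deposits
```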
The oldest existing retail bank is Banca Monte dei Paschi di Siena, while the oldest existing merchant bank is Berenberg Bank. The concept of banking may have begun in ancient Assyria and Babylonia, with merchants offering loans of grain as collateral within a barter system. Lenders in ancient Greece and during the Roman Empire added two important innovations: they accepted deposits and changed money. Archaeology from this period in ancient China and India shows evidence of money lending. More modern banking can be traced to medieval and early Renaissance Italy, to the rich cities in the centre and north like Florence, Siena and Genoa. The Bardi and Peruzzi families dominated banking in 14th-century Florence, establishing branches in many other parts of Europe. One of the most famous Italian banks was the Medici Bank, set up by Giovanni di Bicci de' Medici in 1397; the earliest known state deposit bank, Banco di San Giorgio, was founded in 1407 in Genoa, Italy. Modern banking practices, including fractional reserve banking and the issue of banknotes, emerged in the 17th and 18th centuries.
Merchants started to store their gold with the goldsmiths of London, who possessed private vaults and charged a fee for that service. In exchange for each deposit of precious metal, the goldsmiths issued receipts certifying the quantity and purity of the metal they held as a bailee. The goldsmiths began to lend the money out on behalf of the depositor, which led to the development of modern banking practices, and they paid interest on these deposits. Since the promissory notes were payable on demand, while the advances to the goldsmiths' customers were repayable over a longer time period, this was an early form of fractional reserve banking. The promissory notes developed into an assignable instrument which could circulate as a safe and convenient form of money backed by the goldsmith's promise to pay, allowing goldsmiths to advance loans with little risk of default. Thus, the goldsmiths of London became the forerunners of banking by creating new money based on credit; the Bank of England was the first to begin the permanent issue of banknotes, in 1695.
The Royal Bank of Scotland established the first overdraft facility in 1728. By the beginning of the 19th century a bankers' clearing house was established in London to allow multiple banks to clear transactions; the Rothschilds pioneered international finance on a large scale, financing the purchase of the Suez Canal for the British government. The word bank was taken into Middle English from Middle French banque, from Old Italian banco, meaning "table", from Old High German banc, bank "bench, counter". Benches were used as makeshift desks or exchange counters during the Renaissance by Jewish Florentine bankers, who used to make their transactions atop desks covered by green tablecloths. The definition of a bank varies from country to country; see the relevant country pages for more information. Under English common law, a banker is defined as a person who carries on the business of banking by conducting current accounts for their customers, paying cheques drawn on them, and collecting cheques for their customers.
In most common law jurisdictions there is a Bills of Exchange Act that codifies the law in relation to negotiable instruments, including cheques, and this Act contains a statutory definition of the term banker: "banker includes a body of persons, whether incorporated or not, who carry on the business of banking". Although this definition seems circular, it is functional, because it ensures that the legal basis for bank transactions such as cheques does not depend on how the bank is structured or regulated. The business of banking is in many English common law countries not defined by statute but by common law, the definition above. In other English common law jurisdictions there are statutory definitions of the business of banking or banking business; when looking at these definitions it is important to keep in mind that they are defining the business of banking for the purposes of the legislation, not in general. In particular, most of the definitions are from legislation that has the purpose of regulating and supervising banks rather than regulating the actual business of banking.
However, in many cases the statutory definition mirrors the common law one. Examples of statutory definitions: "banking business" means the business of receiving money on current or deposit account and collecting cheques drawn by or paid in by customers, the making
History of personal computers
The history of the personal computer as a mass-market consumer electronic device began with the microcomputer revolution of the 1970s. A personal computer is one intended for interactive individual use, as opposed to a mainframe computer where the end user's requests are filtered through operating staff, or a time-sharing system in which one large processor is shared by many individuals. After the development of the microprocessor, individual personal computers were low enough in cost that they became affordable consumer goods. Early personal computers – called microcomputers – were sold in electronic kit form and in limited numbers, and were of interest to hobbyists and technicians. An early use of the term "personal computer" appeared in a 3 November 1962 New York Times article reporting John W. Mauchly's vision of future computing, as detailed at a recent meeting of the Institute of Industrial Engineers. Mauchly stated, "There is no reason to suppose the average boy or girl cannot be master of a personal computer".
In 1968, a manufacturer took the risk of referring to their product this way, when Hewlett-Packard advertised their "Powerful Computing Genie" as "The New Hewlett-Packard 9100A personal computer". This advertisement was deemed too extreme for the target audience and was replaced with a much drier ad for the HP 9100A programmable calculator. Over the next seven years, the phrase had gained enough recognition that Byte magazine referred to its readers in its first edition as being in "the personal computing field", and Creative Computing defined the personal computer as a "non-shared system containing sufficient processing power and storage capabilities to satisfy the needs of an individual user." In 1977, three new pre-assembled small computers hit the market, which Byte would refer to as the "1977 Trinity" of personal computing. The Apple II and the PET 2001 were advertised as personal computers, while the TRS-80 was described as a microcomputer used for household tasks including "personal financial management".
By 1979, over half a million microcomputers had been sold and the youth of the day had a new concept of the personal computer. As a mass-market consumer electronic device, the personal computer arrived in 1977 with the introduction of microcomputers, although some mainframes and minicomputers had been applied as single-user systems much earlier. Computer terminals were used for time-sharing access to central computers. Before the introduction of the microprocessor in the early 1970s, computers were large, costly systems owned by large corporations, government agencies, and similar-sized institutions.
End users did not directly interact with the machine, but instead would prepare tasks for the computer on off-line equipment, such as card punches. A number of assignments for the computer would be processed in batch mode. After the job had completed, users could collect the results. In some cases, it could take hours or days between submitting a job to the computing center and receiving the output. A more interactive form of computer use developed commercially by the middle 1960s. In a time-sharing system, multiple computer terminals let many people share the use of one mainframe computer processor; this was common in science and engineering. A different model of computer use was foreshadowed by the way in which early, pre-commercial, experimental computers were used, where one user had exclusive use of a processor. In places such as Carnegie Mellon University and MIT, students with access to some of the first computers experimented with applications that would today be typical of a personal computer.
Some of the first computers that might be called "personal" were early minicomputers such as the LINC and PDP-8, and later the VAX and larger minicomputers from Digital Equipment Corporation, Data General, Prime Computer, and others. By today's standards, they were large and cost-prohibitive. However, they were much smaller, less expensive, and simpler to operate than many of the mainframe computers of the time, and were therefore accessible to individual laboratories and research projects. Minicomputers freed these organizations from the batch processing and bureaucracy of a commercial or university computing center. In addition, minicomputers were interactive and soon had their own operating systems. The minicomputer Xerox Alto was a landmark step in the development of personal computers because of its graphical user interface, bit-mapped high-resolution screen, large internal and external memory storage, and special software. In 1945, Vannevar Bush published an essay called "As We May Think" in which he outlined a possible solution to the growing problem of information storage and retrieval.
In 1968, SRI researcher Douglas Engelbart gave what was called The Mother of All Demos, in which he offered a preview of things that have become the staples of daily working life in the 21st century, such as e-mail, hypertext, word processing, video conferencing, and the mouse.
Abstraction (computer science)
In software engineering and computer science, abstraction is the process of removing physical, spatial, or temporal details or attributes in the study of objects or systems in order to attend more closely to details of greater interest. Abstraction, in general, is a fundamental concept in software development; the process of abstraction can also be referred to as modeling and is related to the concepts of theory and design. Models can be considered types of abstractions in that they generalize aspects of reality. Abstraction in computer science is closely related to abstraction in mathematics due to their common focus on building abstractions as objects, but it is also related to other notions of abstraction used in other fields such as art. Abstractions may refer to vehicles, features, or rules of computational systems or programming languages that carry or utilize features of abstraction itself, such as the use of data types to perform data abstraction, decoupling the usage of a data structure from its working representation within a program.
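A minimal sketch of the data abstraction idea just described, with illustrative names not taken from the article: client code uses the Queue only through its operations and never depends on the working representation hidden inside it.

```python
# Data abstraction: the public operations define the data type; the representation is hidden.
from collections import deque

class Queue:
    def __init__(self):
        self._items = deque()        # internal representation; could be swapped for another
    def enqueue(self, item):
        self._items.append(item)
    def dequeue(self):
        return self._items.popleft()
    def __len__(self):
        return len(self._items)

q = Queue()
q.enqueue("a"); q.enqueue("b")
print(q.dequeue(), q.dequeue())      # "a b" - usage is unchanged if the internals change
```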
Computing operates independently of the concrete world. The hardware implements a model of computation that is interchangeable with others. The software is structured in architectures to enable humans to create enormous systems by concentrating on a few issues at a time; these architectures are made of specific choices of abstractions. Greenspun's Tenth Rule is an aphorism on how such an architecture is both inevitable and complex. A central form of abstraction in computing is language abstraction: new artificial languages are developed to express specific aspects of a system. Modeling languages help in planning, while computer languages can be processed with a computer. An example of this abstraction process is the generational development of programming languages from machine language to assembly language to high-level language; each stage can be used as a stepping stone for the next stage. Language abstraction continues, for example, in scripting languages and domain-specific programming languages. Within a programming language, some features let the programmer create new abstractions.
These include subroutines, modules, and software components. Some other abstractions, such as software design patterns and architectural styles, remain invisible to a translator and operate only in the design of a system. Some abstractions try to limit the range of concepts a programmer needs to be aware of, by hiding the abstractions that they in turn are built on. The software engineer and writer Joel Spolsky has criticised these efforts by claiming that all abstractions are leaky, that is, they can never entirely hide the details below. Some abstractions are designed to inter-operate with other abstractions; for example, a programming language may contain a foreign function interface for making calls to a lower-level language. In simple terms, abstraction is the removal of irrelevant data. Different programming languages provide different types of abstraction, depending on the intended applications for the language. For example, in object-oriented programming languages such as C++, Object Pascal, or Java, the concept of abstraction has itself become a declarative statement, using the keywords virtual or abstract and interface.
After such a declaration, it is the responsibility of the programmer to implement a class to instantiate the object of the declaration. Functional programming languages exhibit abstractions related to functions, such as lambda abstractions and higher-order functions. Modern members of the Lisp programming language family such as Clojure and Common Lisp support macro systems to allow syntactic abstraction. Other programming languages such as Scala have macros, or similar metaprogramming features; these can allow a programmer to eliminate boilerplate code, abstract away tedious function call sequences, implement new control flow structures, and implement domain-specific languages, which allow domain-specific concepts to be expressed in concise and elegant ways. All of these, when used properly, improve both the programmer's efficiency and the clarity of the code by making the intended purpose more explicit. A consequence of syntactic abstraction is that any Lisp dialect, and in fact any programming language, can, in principle, be implemented in any modern Lisp with reduced effort when compared to "more traditional" programming languages such as Python, C or Java.
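The two styles mentioned above can be illustrated with a short, hedged Python sketch (all names are invented for the example): an abstract method declaration, analogous to the abstract and interface keywords in Java or virtual in C++, that a subclass is then responsible for implementing, and a higher-order function that abstracts over behaviour passed in as a lambda.

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self) -> float:
        """Declared but not implemented; subclasses must provide it."""

class Square(Shape):
    def __init__(self, side: float):
        self.side = side
    def area(self) -> float:
        return self.side * self.side

# Higher-order function: it abstracts over behaviour passed in as a function.
def total(shapes, measure):
    return sum(measure(s) for s in shapes)

print(total([Square(2), Square(3)], lambda s: s.area()))   # 13.0
```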
Analysts have developed various methods to formally specify software systems. Some known methods include: Abstract-model based method.