World Wide Web
The World Wide Web, commonly known as the Web, is an information space where documents and other web resources are identified by Uniform Resource Locators (URLs), may be interlinked by hypertext, and are accessible over the Internet. The resources of the WWW may be accessed by users through a software application called a web browser. The English scientist Tim Berners-Lee invented the World Wide Web in 1989 and wrote the first web browser in 1990 while employed at CERN near Geneva, Switzerland. The browser was released outside CERN in 1991, first to other research institutions starting in January 1991 and then to the general public in August 1991. The World Wide Web has been central to the development of the Information Age and is the primary tool billions of people use to interact on the Internet. Web resources may be any type of downloaded media, but web pages are hypertext documents formatted in Hypertext Markup Language (HTML); such formatting allows for embedded hyperlinks that contain URLs and permit users to navigate to other web resources.
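The hyperlink mechanism described above can be sketched in code: an HTML page embeds URLs in anchor tags, and a client can extract them to decide where to navigate next. The page content and URLs below are invented purely for illustration.

```python
# Toy link extraction: pull the href of every <a> tag out of a page.
from html.parser import HTMLParser

PAGE = """<html><body>
<p>See the <a href="https://example.org/spec">spec</a> and
the <a href="https://example.org/faq">FAQ</a>.</p>
</body></html>"""

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

parser = LinkExtractor()
parser.feed(PAGE)
print(parser.links)
# → ['https://example.org/spec', 'https://example.org/faq']
```

A real browser performs the same extraction as part of rendering, then fetches whichever URL the user clicks.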
In addition to text, web pages may contain images, video and software components that are rendered in the user's web browser as coherent pages of multimedia content. Multiple web resources with a common theme, a common domain name, or both make up a website. Websites are stored on computers running a program called a web server, which responds to requests made over the Internet from web browsers running on users' computers. Website content can be provided by a publisher or contributed interactively, where users add content or the content depends upon the users or their actions. Websites may be provided for a myriad of informative, commercial, governmental, or non-governmental reasons. Tim Berners-Lee's vision of a global hyperlinked information system became a possibility by the second half of the 1980s. By 1985, the global Internet began to proliferate in Europe and the Domain Name System came into being. In 1988 the first direct IP connection between Europe and North America was made, and Berners-Lee began to discuss the possibility of a web-like system at CERN.
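The request/response cycle between a browser and a web server can be sketched with Python's standard library. The handler, port choice and page content below are arbitrary illustrations, not any particular production setup.

```python
# Minimal request/response cycle: a web server returns an HTML page
# when a client asks for it. Port 0 lets the OS pick a free port.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><h1>Hello, Web</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), HelloHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

page = urlopen(f"http://127.0.0.1:{port}/").read()  # the "browser" side
server.shutdown()
server.server_close()
```

A real web server speaks the same HTTP protocol, just with many more headers, methods and concurrent connections.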
While working at CERN, Berners-Lee became frustrated with the inefficiencies and difficulties posed by finding information stored on different computers. On March 12, 1989, he submitted a memorandum, titled "Information Management: A Proposal", to the management at CERN for a system called "Mesh" that referenced ENQUIRE, a database and software project he had built in 1980, which used the term "web" and described a more elaborate information management system based on links embedded as text: "Imagine the references in this document all being associated with the network address of the thing to which they referred, so that while reading this document, you could skip to them with a click of the mouse." Such a system, he explained, could be referred to using one of the existing meanings of the word hypertext, a term that he says was coined in the 1950s. There is no reason, the proposal continues, why such hypertext links could not encompass multimedia documents including graphics and video, and Berners-Lee accordingly goes on to use the term hypermedia.
With help from his colleague and fellow hypertext enthusiast Robert Cailliau, he published a more formal proposal on 12 November 1990 to build a "Hypertext project" called "WorldWideWeb" as a "web" of "hypertext documents" to be viewed by "browsers" using a client–server architecture. At this point HTML and HTTP had been in development for about two months and the first Web server was about a month from completing its first successful test. The proposal estimated that a read-only web would be developed within three months and that it would take six months to achieve "the creation of new links and new material by readers, authorship becomes universal" as well as "the automatic notification of a reader when new material of interest to him/her has become available". While the read-only goal was met, accessible authorship of web content took longer to mature, with the wiki concept, WebDAV, Web 2.0 and RSS/Atom. The proposal was modelled after the SGML reader Dynatext by Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University.
The Dynatext system, licensed by CERN, was a key player in the extension of SGML ISO 8879:1986 to Hypermedia within HyTime, but it was considered too expensive and had an inappropriate licensing policy for use in the general high energy physics community, namely a fee for each document and each document alteration. A NeXT Computer was used by Berners-Lee as the world's first web server and to write the first web browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the first web browser and the first web server. The first web site, which described the project itself, was published on 20 December 1990. The first web page may have been lost, but Paul Jones of UNC-Chapel Hill in North Carolina announced in May 2013 that Berners-Lee gave him what he says is the oldest known web page during a 1991 visit to UNC. Jones stored it on his NeXT computer. On 6 August 1991, Berners-Lee published a short summary of the World Wide Web project on the newsgroup alt.hypertext.
This date is sometimes confused with the public availability of the first web servers, which had occurred months earlier. As another example of such confusion, several news media reported that the first photo on the Web was published by Berners-Lee in 1992, an image of the CERN house band Les Horribles Cernettes taken by Silvano de Gennaro.
Opera (web browser)
Opera is a web browser for the Microsoft Windows, Android, iOS, macOS, and Linux operating systems. Opera Ltd. is publicly listed on the NASDAQ stock exchange, with majority ownership and control belonging to the Chinese businessman Yahui Zhou through Beijing Kunlun Tech, which specialises in mobile games, and the cybersecurity specialist Qihoo 360. Opera is a Chromium-based browser using the Blink layout engine and differentiates itself from other browsers through its features. Opera was conceived at Telenor as a research project in 1994 and was bought by Opera Software in 1995; it originally used its own proprietary Presto layout engine. The Presto versions of Opera received many awards, but Presto development ended after the transition to Chromium in 2013. There are three mobile versions: Opera Mobile, Opera Touch and Opera Mini. Opera began in 1994 as a research project at Telenor, the largest Norwegian telecommunications company. In 1995, it branched out into a separate company named Opera Software. Opera was first publicly released in 1996 with version 2.10.
In an attempt to capitalize on the emerging market for Internet-connected handheld devices, a project to port Opera to mobile device platforms was started in 1998. Opera 4.0, released in 2000, included a new cross-platform core that facilitated the creation of editions of Opera for multiple operating systems and platforms. Up to this point, Opera had to be purchased after the trial period ended. Version 5.0 saw the end of this requirement. Instead, Opera became ad-sponsored: these versions gave the user the choice of seeing banner ads or targeted text advertisements from Google. With version 8.5 the advertisements were removed and the primary financial support for the browser came from revenue generated through Google. Among the new features introduced in version 9.1 was fraud protection using technology from GeoTrust, a digital certificate provider, and PhishTank, an organization that tracks known phishing web sites. This feature was further improved and expanded in version 9.5, when GeoTrust was replaced with Netcraft and malware protection from Haute Secure was added.
Many distinctive Opera features of the previous versions were dropped, and Opera Mail was separated into a standalone application derived from Opera 12. In November 2016, the original Norwegian owner of Opera sold his stake in the business to a Chinese consortium under the name Golden Brick Capital Private Equity Fund I Limited Partnership for $600 million. An earlier deal had not been approved by regulators. In January 2017, the source code of Opera 12.15 was leaked. To demonstrate how radically different a browser could look, Opera Neon, dubbed a "concept browser", was released in January 2017. PC World compared it to the demo models that automakers and hardware vendors release to show their visions of the future. Instead of a Speed Dial, it displays the accessed websites to resemble a desktop with computer icons scattered across it in artistic formation. Opera has originated features later adopted by other web browsers, including Speed Dial, pop-up blocking, re-opening closed pages, private browsing, and tabbed browsing.
Opera includes a download manager. Opera has "Speed Dial", which allows the user to add an unlimited number of pages, shown in thumbnail form on a page displayed when a new tab is opened; Speed Dial allows the user to navigate more quickly to the selected web pages. It is also possible to control some aspects of the browser using keyboard shortcuts. Page zooming allows text and other content, such as Adobe Flash Player, Java platform and Scalable Vector Graphics content, to be increased or decreased in size to help those with impaired vision. Opera Software claims that when the Opera Turbo mode is enabled, the compression servers compress requested web pages by up to 50%, depending upon the content, before sending them to the users; this process reduces the amount of data transferred, making web pages load faster, and is useful for crowded or slow network connections or when there are costs dependent on the total amount of data usage. This technique is used in Opera Mini for mobile devices and smartwatches. One security feature is the option to delete private data, such as HTTP cookies and browsing history.
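Opera's server-side compression scheme is proprietary, so the following is only a generic illustration of the underlying idea using Python's zlib: compressing a page before transfer can substantially reduce the bytes sent, especially for repetitive HTML.

```python
# Generic illustration with zlib (not Opera's actual scheme): repetitive
# HTML compresses well, so far fewer bytes cross the network.
import zlib

page = b"<html><body>" + b"<p>Lorem ipsum dolor sit amet.</p>" * 200 + b"</body></html>"
compressed = zlib.compress(page, level=9)

print(f"{len(page)} bytes -> {len(compressed)} bytes "
      f"({len(compressed) / len(page):.0%} of original)")
```

The achievable ratio depends heavily on the content, which is why Opera's "up to 50%" figure is qualified.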
Wiki
A wiki is a website on which users collaboratively modify content and structure directly from the web browser. In a typical wiki, text is written using a simplified markup language or edited with the help of a rich-text editor. A wiki is run using wiki software, otherwise known as a wiki engine. A wiki engine is a type of content management system, but it differs from most other such systems, including blog software, in that content is created without any defined owner or leader, and in that wikis have little inherent structure, allowing structure to emerge according to the needs of the users. There are dozens of different wiki engines in use, both standalone and embedded in other software, such as bug tracking systems; some wiki engines are open source. Some permit control over different functions, while others may permit access without enforcing access control; other rules may be imposed to organize content. The online encyclopedia project Wikipedia is the most popular wiki-based website and is one of the most viewed sites in the world, having been ranked in the top ten since 2007.
Wikipedia is not a single wiki but rather a collection of hundreds of wikis, with each one pertaining to a specific language. In addition to Wikipedia, there are tens of thousands of other wikis in use, both public and private, including wikis functioning as knowledge management resources, notetaking tools, community websites and intranets; the English-language Wikipedia has the largest collection of articles. Ward Cunningham, the developer of the first wiki software, WikiWikiWeb, described wiki as "the simplest online database that could possibly work". "Wiki" is a Hawaiian word meaning "quick". Ward Cunningham and co-author Bo Leuf, in their book The Wiki Way: Quick Collaboration on the Web, described the essence of the wiki concept as follows: a wiki invites all users—not just experts—to edit any page or to create new pages within the wiki Web site, using only a standard "plain-vanilla" Web browser without any extra add-ons, and it promotes meaningful topic associations between different pages by making page link creation intuitively easy and showing whether an intended target page exists or not.
A wiki is not a crafted site created by experts and professional writers and designed for casual visitors. Instead, it seeks to involve the typical visitor/user in an ongoing process of creation and collaboration that changes the website landscape. A wiki enables communities of contributors to write documents collaboratively. All that people require to contribute is a computer, Internet access, a web browser, and a basic understanding of a simple markup language. A single page in a wiki website is referred to as a "wiki page", while the entire collection of pages, which are well-interconnected by hyperlinks, is "the wiki". A wiki is essentially a database for creating and searching through information. A wiki allows non-linear, evolving and networked text, while allowing for editor argument and interaction regarding the content and formatting. A defining characteristic of wiki technology is the ease with which pages can be created and updated. There is no review by a moderator or gatekeeper before modifications are accepted and take effect on the website.
Many wikis are open to alteration by the general public without requiring registration of user accounts. Many edits can be made in real time and appear almost instantly online, but this feature facilitates abuse of the system. Private wiki servers require user authentication to edit pages, and sometimes even to read them. Maged N. Kamel Boulos, Cito Maramba and Steve Wheeler write that open wikis produce a process of Social Darwinism: "'Unfit' sentences and sections are ruthlessly culled and replaced if they are not considered 'fit', which results in the evolution of a higher quality and more relevant page. While such openness may invite 'vandalism' and the posting of untrue information, this same openness makes it possible to correct or restore a 'quality' wiki page." Some wikis have an Edit button or link directly on the page being viewed, if the user has permission to edit the page. This can lead to a text-based editing page where participants can structure and format wiki pages with a simplified markup language, sometimes known as Wikitext, Wiki markup or Wikicode.
An example of this is the VisualEditor on Wikipedia. WYSIWYG controls do not always provide all of the features available in the underlying markup, however.
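The link behaviour described above (making page links easy to create and signalling whether the target page exists) can be sketched with a toy engine; the markup convention, page names and CSS class names below are invented for illustration and do not correspond to any real wiki engine.

```python
# Toy wiki link rendering: [[Title]] becomes a link, and links to pages
# that do not yet exist get a different CSS class, inviting creation.
import re

def render(text, existing_pages):
    def link(match):
        title = match.group(1)
        cls = "existing" if title in existing_pages else "missing"
        return f'<a class="{cls}" href="/wiki/{title}">{title}</a>'
    return re.sub(r"\[\[([^\]]+)\]\]", link, text)

html = render("Welcome to the [[Sandbox]] and the [[HelpPage]].", {"HelpPage"})
print(html)
```

Real wiki engines add escaping, namespaces and page caching on top of this basic substitution, but the red-link/blue-link idea is the same.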
Spell checker
In software, a spell checker is a software feature that checks for misspellings in a text. Spell-checking features are often embedded in software such as a word processor, email client, electronic dictionary, or search engine. A basic spell checker carries out the following processes: it scans the text and extracts the words contained in it, then compares each word against a known list of correctly spelled words (a dictionary). The dictionary might contain just a list of words, or it might also contain additional information, such as hyphenation points or lexical and grammatical attributes. An additional step is a language-dependent algorithm for handling morphology. Even for a lightly inflected language like English, the spell checker will need to consider different forms of the same word, such as plurals, verbal forms and possessives. For many other languages, such as those featuring agglutination and more complex declension and conjugation, this part of the process is more complicated. It is unclear whether morphological analysis (allowing for many different forms of a word depending on its grammatical role) provides a significant benefit for English, though its benefits for synthetic languages such as German, Hungarian or Turkish are clear.
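The basic process described above (extract the words, compare each against a dictionary) can be sketched in a few lines of Python; the word list here is a tiny invented stand-in for a real dictionary.

```python
# Minimal sketch of the basic process: extract the words from a text and
# flag any that are absent from a (here, tiny, invented) word list.
import re

DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def misspellings(text, dictionary=DICTIONARY):
    words = re.findall(r"[a-z']+", text.lower())
    return [word for word in words if word not in dictionary]

print(misspellings("The quik brown fox jumps over the lazy dog"))
# → ['quik']
```

A real checker would add the morphology step discussed above, so that "jumped" and "foxes" are accepted without listing every inflected form explicitly.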
As an adjunct to these components, the program's user interface allows users to approve or reject replacements and modify the program's operation. An alternative type of spell checker uses statistical information, such as n-grams, to recognize errors instead of a list of correctly spelled words; this approach requires considerable effort to obtain sufficient statistical information, but its key advantages include needing less runtime storage and the ability to correct errors in words that are not included in a dictionary. In some cases spell checkers use a fixed list of misspellings and suggestions for those misspellings. Clustering algorithms combined with phonetic information have also been used for spell checking. In 1961, Les Earnest, who headed the research on this budding technology, found it necessary to include the first spell checker, which accessed a list of 10,000 acceptable words. Ralph Gorin, a graduate student under Earnest at the time, created the first true spelling checker program written as an application program for general English text: SPELL for the DEC PDP-10 at Stanford University's Artificial Intelligence Laboratory, in February 1971.
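The n-gram idea can be illustrated with character trigrams: words whose trigrams rarely occur in known-good text score low, with no dictionary of whole words stored. The corpus and scoring rule below are toy choices for illustration only.

```python
# Toy n-gram spell checking: score words by how common their character
# trigrams are in known-good text; improbable words score low.
from collections import Counter

corpus = "the quick brown fox jumps over the lazy dog " * 50

def trigrams(s):
    padded = f"  {s} "  # pad so word boundaries become trigrams too
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

counts = Counter(trigrams(corpus))

def score(word):
    grams = trigrams(word)
    return sum(counts[g] for g in grams) / len(grams)

# A plausible word shares trigrams with the corpus; a garbled one barely does.
print(score("quick"), score("qzxck"))
```

A production system would use a far larger corpus and a probabilistic model rather than raw counts, but the storage advantage over a full dictionary is already visible.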
Gorin wrote SPELL in assembly language for speed. Gorin made SPELL publicly accessible, as was done with most SAIL programs, and it soon spread around the world via the new ARPAnet, about ten years before personal computers came into general use. SPELL, with its algorithms and data structures, inspired the Unix ispell program. The first spell checkers became widely available on mainframe computers in the late 1970s. A group of six linguists from Georgetown University developed the first spell-check system for the IBM corporation, and Henry Kučera invented one for the VAX machines of Digital Equipment Corp in 1981. The first spell checkers for personal computers appeared in 1980, such as "WordCheck" for Commodore systems, released in late 1980 in time for advertisements to go to print in January 1981. Developers such as Maria Mariani and Random House rushed OEM packages and end-user products into the expanding software market, primarily for the PC but also for the Apple Macintosh, VAX and Unix. On PCs, these spell checkers were standalone programs, many of which could be run in TSR mode from within word-processing packages on PCs with sufficient memory.
However, the market for standalone packages was short-lived: by the mid-1980s, developers of popular word-processing packages like WordStar and WordPerfect had incorporated spell checkers, licensed from the above companies, into their packages, and those companies expanded support from just English to European and even Asian languages. This required increasing sophistication in the morphology routines of the software, particularly with regard to heavily agglutinative languages like Hungarian and Finnish. Although the size of the word-processing market in a country like Iceland might not have justified the investment of implementing a spell checker, companies like WordPerfect nonetheless strove to localize their software for as many national markets as possible as part of their global marketing strategy. The web browser Firefox 2.0 has spell-check support for user-written content, such as when editing wikitext or writing on many webmail sites and social networking websites. The web browsers Google Chrome and Opera, the email client KMail and the instant messaging client Pidgin also offer spell-checking support, transparently using GNU Aspell or Hunspell as their engine.
Mac OS X now has spell check system-wide, extending the service to all bundled and third-party applications. Some spell checkers have separate support for medical dictionaries to help prevent medical errors. The first spell checkers were "verifiers" rather than "correctors": they offered no suggestions for incorrectly spelled words. This was helpful for typos but less so for logical or phonetic errors. The challenge the developers faced was the difficulty of offering useful suggestions for misspelled words, which requires applying pattern-matching algorithms. It might seem logical that where spell-checking dictionaries are concerned, "the bigger, the better," so that correct words are not marked as incorrect.
Public domain
The public domain consists of all the creative works to which no exclusive intellectual property rights apply. Those rights may have been forfeited, expressly waived, or may be inapplicable. The works of William Shakespeare and Beethoven, and most early silent films, are in the public domain either by virtue of their having been created before copyright existed or because their copyright term has expired. Some works are not covered by copyright and are therefore in the public domain; among them are the formulae of Newtonian physics, cooking recipes, and all computer software created prior to 1974. Other works are dedicated by their authors to the public domain. The term public domain is not applied to situations where the creator of a work retains residual rights, in which case use of the work is referred to as "under license" or "with permission". As rights vary by country and jurisdiction, a work may be subject to rights in one country and be in the public domain in another. Some rights depend on registrations on a country-by-country basis, and the absence of registration in a particular country, if required, gives rise to public-domain status for a work in that country.
The term public domain may be used interchangeably with other imprecise or undefined terms such as the "public sphere" or "commons", including concepts such as the "commons of the mind", the "intellectual commons" and the "information commons". Although the term "domain" did not come into use until the mid-18th century, the concept "can be traced back to the ancient Roman Law, as a preset system included in the property right system." The Romans had a large proprietary rights system in which they defined "many things that cannot be owned" as res nullius, res communes, res publicae and res universitatis. The term res nullius was defined as things not yet appropriated; the term res communes was defined as "things that could be enjoyed by mankind, such as air and ocean." The term res publicae referred to things that were shared by all citizens, and the term res universitatis meant things that were owned by the municipalities of Rome. From a historical perspective, one could say the construction of the idea of "public domain" sprouted from the concepts of res communes, res publicae and res universitatis in early Roman law.
When the first copyright law was established in Britain with the Statute of Anne in 1710, the term public domain did not appear. However, similar concepts were developed by French jurists in the 18th century. Instead of "public domain", they used terms such as publici juris or propriété publique to describe works that were not covered by copyright law. The phrase "fall in the public domain" can be traced to mid-19th-century France, where it described the end of the copyright term. The French poet Alfred de Vigny equated the expiration of copyright with a work falling "into the sink hole of public domain", and if the public domain receives any attention from intellectual property lawyers today, it is still treated as little more than what is left over when intellectual property rights, such as copyright and trademarks, expire or are abandoned. In this historical context Paul Torremans describes copyright as a "little coral reef of private right jutting up from the ocean of the public domain." Copyright law differs by country, and the American legal scholar Pamela Samuelson has described the public domain as being "different sizes at different times in different countries".
Definitions of the boundaries of the public domain in relation to copyright, or intellectual property more generally, regard the public domain as a negative space. According to James Boyle this definition underlines common usage of the term public domain and equates the public domain to public property, and works in copyright to private property. However, the usage of the term public domain can be more granular, including for example uses of works in copyright permitted by copyright exceptions; such a definition regards works in copyright as private property subject to fair-use rights and limitations on ownership. A conceptual definition comes from Lange, who focused on what the public domain should be: "it should be a place of sanctuary for individual creative expression, a sanctuary conferring affirmative protection against the forces of private appropriation that threatened such expression". Patterson and Lindberg described the public domain not as a "territory", but rather as a concept: "[T]here are certain materials – the air we breathe, rain, life, thoughts, ideas, numbers – not subject to private ownership.
The materials that compose our cultural heritage must be free for all living to use no less than matter necessary for biological survival." A public-domain book is a book with no copyright, a book created without a license, or a book whose copyrights have expired or been forfeited. In most countries the term of copyright protection lasts until 1 January, 70 years after the death of the latest living author. The longest copyright term is in Mexico, which has life plus 100 years for all deaths since July 1928. A notable exception is the United States, where every book and tale published prior to 1924 is in the public domain.
The WYSIWYG view is achieved by embedding a layout engine, which may be custom-written or based upon one used in a web browser. The goal is that, at all times during editing, the rendered result should represent what will later be seen in a typical web browser. WYSIWYM is an alternative paradigm to WYSIWYG editing: instead of focusing on the format or presentation of the document, it preserves the intended meaning of each element. For example, page headers, paragraphs, etc. are labeled as such in the editing program and then displayed appropriately in the browser. A given HTML document will have an inconsistent appearance on various platforms and computers for several reasons. Different browsers and applications render the same markup differently: the same page may display slightly differently in Internet Explorer and Firefox on a high-resolution screen, and it will look very different in the valid, text-only Lynx browser. It needs to be rendered differently again on a PDA, an internet-enabled television and on a mobile phone.
Usability in a speech or braille browser, or via a screen reader working with a conventional browser, places demands on quite different aspects of the underlying HTML; all an author can do is design pages that accommodate this variation. Web browsers, like all computer software, have bugs, and it is hopeless to try to design Web pages around all of the common browsers' current bugs: each time a new version of each browser comes out, a significant proportion of the World Wide Web would need re-coding to suit the new bugs and the new fixes. It is considered much wiser to design to standards, staying away from 'bleeding edge' features until they settle down, and to wait for the browser developers to catch up to your pages rather than the other way round. For instance, few would argue that CSS is still 'cutting edge', as there is now widespread support available in common browsers for all its major features, even if many WYSIWYG and other editors have not yet caught up. A single visual style can also represent multiple semantic meanings. Semantic meaning, derived from the underlying structure of the HTML document, is important for search engines and for various accessibility tools.
On paper we can tell from context and experience whether bold text represents a title, or emphasis, or something else, but it is difficult to convey this distinction in a WYSIWYG editor: making a piece of text bold in a WYSIWYG editor is not sufficient to tell the reader *why* the text is bold, that is, what the boldness represents semantically. Modern web sites are also often constructed in ways that make WYSIWYG editing less useful: many use a Content Management System or some other template-processor-based means of constructing pages on the fly from content stored in a database. Individual pages are then never stored in a filesystem as files that could be designed and edited in a WYSIWYG editor, so some form of abstracted template-based layout is inevitable, removing one of the main benefits of using a WYSIWYG editor. HTML is a structured markup language, and there are certain rules on how HTML must be written if it is to conform to W3C standards for the World Wide Web. Following these rules means that web sites are accessible on all types and makes of computer, to able-bodied people and to people with disabilities.
Operating system
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android accounted for up to 70% of use in 2017; according to third-quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a per-year decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently; this is achieved by time-sharing, where the available processor time is divided between multiple processes. Each of these processes is interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems such as Solaris and Linux, as well as non-Unix-like systems such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to give time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
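The two multitasking styles differ in who decides when a task stops running. A toy cooperative scheduler can be sketched with Python generators (purely illustrative; real kernels work at the level of CPU context switches): each task runs until it voluntarily yields control, and the scheduler takes tasks round-robin.

```python
# Toy cooperative round-robin scheduler built on Python generators.
# Each task runs until it voluntarily yields; nothing preempts it.
from collections import deque

def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"  # hand control back to the scheduler

def run(tasks):
    queue = deque(tasks)
    trace = []
    while queue:
        current = queue.popleft()
        try:
            trace.append(next(current))  # let the task run one slice
            queue.append(current)        # it yielded, so requeue it
        except StopIteration:
            pass                         # task finished; drop it
    return trace

print(run([task("A", 2), task("B", 2)]))
# → ['A:0', 'B:0', 'A:1', 'B:1']
```

If task A never yielded, B would never run; that fragility is exactly why preemptive multitasking, in which the OS forcibly interrupts tasks on a timer, displaced the cooperative model.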
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with it at the same time. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed- and cloud-computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines with less autonomy, such as PDAs; they are able to operate with a limited number of resources and are compact and efficient by design. Windows CE and Minix 3 are examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri