A metaphor is a figure of speech that, for rhetorical effect, directly refers to one thing by mentioning another. It may identify hidden similarities between two ideas. Metaphors are often compared with other types of figurative language, such as antithesis, hyperbole and simile. One of the most cited examples of a metaphor in English literature is the "All the world's a stage" monologue from As You Like It. This quotation expresses a metaphor because the world is not literally a stage. By asserting that the world is a stage, Shakespeare uses points of comparison between the world and a stage to convey an understanding about the mechanics of the world and the behavior of the people within it. The Philosophy of Rhetoric by rhetorician I. A. Richards describes a metaphor as having two parts: the tenor and the vehicle. The tenor is the subject to which attributes are ascribed; the vehicle is the object whose attributes are borrowed. In the previous example, "the world" is compared to a stage, describing it with the attributes of "the stage". Other writers employ the general terms ground and figure to denote the tenor and the vehicle.
Cognitive linguistics uses the terms target and source, respectively. Psychologist Julian Jaynes contributed the terms metaphrand, metaphier and paraphier to the understanding of how metaphors evoke meaning, thereby adding two additional terms to the common set of two basic terms. Metaphrand is equivalent to the metaphor-theory terms tenor and ground; metaphier is equivalent to the metaphor-theory terms vehicle and source. A paraphier is any attribute, characteristic, or aspect of a metaphier, whereas a paraphrand is a selected paraphier which has conceptually become attached to a metaphrand through understanding or comprehension of a metaphor. For example, if a reader encounters the metaphor "Pat is a tornado," the metaphrand is "Pat" and the metaphier is "tornado." The paraphiers, or characteristics, of the metaphier "tornado" would include storm, wind, danger, destruction, etc. However, the metaphoric use of those attributes or characteristics of a tornado is not one-for-one. The English word metaphor derives from the 16th-century Old French word métaphore, which comes from the Latin metaphora, "carrying over", in turn from the Greek μεταφορά, "transfer", from μεταφέρω, "to carry over", "to transfer", and that from μετά, "after, across", and φέρω, "to bear", "to carry".
Metaphors are most often compared with similes. It is said, for instance, that a metaphor is 'a condensed analogy' or 'analogical fusion', or that metaphors 'operate in a similar fashion' or are 'based on the same mental process', or yet that 'the basic processes of analogy are at work in metaphor'. It is also pointed out that 'a border between metaphor and analogy is fuzzy' and that 'the difference between them might be described as the distance between the things being compared'. A simile can be considered a specific type of metaphor, one that makes the comparison explicit. A metaphor asserts the objects in the comparison are identical on the point of comparison, while a simile merely asserts a similarity. For this reason a common-type metaphor is generally considered more forceful than a simile. The metaphor category contains these specialized types: Allegory: An extended metaphor wherein a story illustrates an important attribute of the subject. Antithesis: A rhetorical contrast of ideas by means of parallel arrangements of words, clauses, or sentences. Catachresis: A mixed metaphor, sometimes used by accident.
Hyperbole: Excessive exaggeration to illustrate a point. Metonymy: A figure of speech using the name of one thing in reference to a different thing with which the first is associated. In the phrase "lands belonging to the crown", the word "crown" is a metonym for the monarch. Parable: An extended metaphor told as an anecdote to illustrate or teach a moral or spiritual lesson, such as in Aesop's fables or Jesus' teaching method as told in the Bible. Pun: Similar to a metaphor, a pun alludes to another term. However, the main difference is that a pun is a frivolous allusion between two different things, whereas a metaphor is a purposeful allusion between two different things. Metaphor, like other types of analogy, can be distinguished from metonymy as one of two fundamental modes of thought. Metaphor and analogy work by bringing together concepts from different conceptual domains, while metonymy uses one element from a given domain to refer to another related element. A metaphor creates new links between otherwise distinct conceptual domains, while a metonymy relies on the existing links within them.
A dead metaphor is a metaphor in which the sense of a transferred image has become absent. The phrases "to grasp a concept" and "to gather what you've understood" use physical action as a metaphor for understanding; the audience does not need to visualize the action. Some distinguish between a dead metaphor and a cliché; others use "dead metaphor" to denote both. A mixed metaphor is a metaphor that leaps from one identification to a second inconsistent with the first, e.g.: "I smell a rat but I'll nip him in the bud" – Irish politician Boyle Roche. This form is often used as a parody of metaphor itself: "If we can hit that bull's-eye the rest of the dominoes will fall like a house of cards... Checkmate." An extended metaphor, or conceit, sets up a principal subject with several subsidiary subjects or comparisons.
Google LLC is an American multinational technology company that specializes in Internet-related services and products, which include online advertising technologies, a search engine, cloud computing and hardware. It is considered one of the Big Four technology companies, alongside Amazon, Apple and Facebook. Google was founded in 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University in California. Together they own about 14 percent of its shares and control 56 percent of the stockholder voting power through supervoting stock. They incorporated Google as a privately held company on September 4, 1998. An initial public offering took place on August 19, 2004, and Google moved to its headquarters in Mountain View, California, nicknamed the Googleplex. In August 2015, Google announced plans to reorganize its various interests as a conglomerate called Alphabet Inc. Google is Alphabet's leading subsidiary and will continue to be the umbrella company for Alphabet's Internet interests. Sundar Pichai was appointed CEO of Google.
The company's rapid growth since incorporation has triggered a chain of products and partnerships beyond Google's core search engine. It offers services designed for work and productivity, email and time management, cloud storage, instant messaging and video chat, language translation and navigation, video sharing, note-taking, and photo organizing and editing. The company leads the development of the Android mobile operating system, the Google Chrome web browser, and Chrome OS, a lightweight operating system based on the Chrome browser. Google has also moved into hardware and has experimented with becoming an Internet carrier. Google.com is the most visited website in the world. Several other Google services figure in the top 100 most visited websites, including YouTube and Blogger. Google is the most valuable brand in the world as of 2017, but has received significant criticism involving issues such as privacy concerns, tax avoidance, antitrust and search neutrality. Google's mission statement is "to organize the world's information and make it universally accessible and useful".
The company's unofficial slogan "Don't be evil" was removed from the company's code of conduct around May 2018. Google began in January 1996 as a research project by Larry Page and Sergey Brin when they were both PhD students at Stanford University in Stanford, California. While conventional search engines ranked results by counting how many times the search terms appeared on the page, the two theorized about a better system that analyzed the relationships among websites. They called this new technology PageRank. Page and Brin originally nicknamed their new search engine "BackRub", because the system checked backlinks to estimate the importance of a site; they later changed the name to Google. The domain name for Google was registered on September 15, 1997, and the company was incorporated on September 4, 1998. It was based in the garage of a friend in California. Craig Silverstein, a fellow PhD student at Stanford, was hired as the first employee. Google was funded by an August 1998 contribution of $100,000 from Andy Bechtolsheim, co-founder of Sun Microsystems.
Google received money from three other angel investors in 1998: Amazon.com founder Jeff Bezos, Stanford University computer science professor David Cheriton, and entrepreneur Ram Shriram. Between these initial investors, friends and family, Google raised around $1 million, which allowed them to open their original shop in Menlo Park, California. After some additional, smaller investments through the end of 1998 to early 1999, a new $25 million round of funding was announced on June 7, 1999, with major investors including the venture capital firms Kleiner Perkins and Sequoia Capital. In March 1999, the company moved its offices to Palo Alto, home to several prominent Silicon Valley technology start-ups. The next year, Google began selling advertisements associated with search keywords, against Page and Brin's initial opposition toward an advertising-funded search engine. To maintain an uncluttered page design, advertisements were text-based. In June 2000, it was announced that Google would become the default search engine provider for Yahoo!, one of the most popular websites at the time, replacing Inktomi.
In 2003, after outgrowing two other locations, the company leased an office complex from Silicon Graphics, at 1600 Amphitheatre Parkway in Mountain View, California. The complex became known as the Googleplex, a play on the word googolplex, the number one followed by a googol of zeroes. Three years later, Google bought the property from SGI for $319 million. By that time, the name "Google
Relevance is the concept of one topic being connected to another topic in a way that makes it useful to consider the second topic when considering the first. The concept of relevance is studied in many different fields, including cognitive sciences and library and information science. Most fundamentally, however, it is studied in epistemology. Different theories of knowledge have different implications for what is considered relevant, and these fundamental views have implications for all other fields as well. "Something (A) is relevant to a task (T) if it increases the likelihood of accomplishing the goal (G), which is implied by T." A thing might be relevant; a document or a piece of information may be relevant. The basic understanding of relevance does not depend on whether we speak of "things" or "information". For example, the Gandhian principles are of great relevance in today's world. If you believe that schizophrenia is caused by bad communication between mother and child, then family interaction studies become relevant.
If, on the other hand, you subscribe to a genetic theory of schizophrenia, the study of genes becomes relevant. If you subscribe to the epistemology of empiricism, only intersubjectively controlled observations are relevant. If, on the other hand, you subscribe to feminist epistemology, the sex of the observer becomes relevant. Epistemology is not just one domain among others: epistemological views are always at play in any domain, and those views influence what is regarded as relevant. In formal reasoning, relevance has proved an elusive concept. It is important because the solution of any problem requires the prior identification of the relevant elements from which a solution can be constructed. It is elusive because the meaning of relevance appears to be difficult or impossible to capture within conventional logical systems. The obvious suggestion that q is relevant to p if q is implied by p breaks down because, under standard definitions of material implication, a false proposition implies all other propositions.
However, though 'iron is a metal' may be implied by 'cats lay eggs', it does not seem to be relevant to it in the way that 'cats are mammals' and 'mammals give birth to living young' are relevant to each other. If one states "I love ice cream," and another person responds "I have a friend named Brad Cook," these statements are not relevant. However, if one states "I love ice cream," and another person responds "I have a friend named Brad Cook who likes ice cream," this statement now becomes relevant because it relates to the first person's idea. More recently, a number of theorists have sought to account for relevance in terms of "possible world logics" in intensional logic. The idea is that necessary truths are true in all possible worlds, contradictions are true in no possible worlds, and contingent propositions can be ordered in terms of the number of possible worlds in which they are true. Relevance is argued to depend upon the "remoteness relationship" between an actual world in which relevance is being evaluated and the set of possible worlds within which it is true.
During the 1960s, relevance became a fashionable buzzword, meaning roughly 'relevance to social concerns', such as racial equality, social justice, world hunger, world economic development, and so on. The implication was that some subjects, e.g. the study of medieval poetry and the practice of corporate law, were not worthwhile because they did not address pressing social issues. The economist John Maynard Keynes saw the importance of defining relevance to the problem of calculating risk in economic decision-making. He suggested that the relevance of a piece of evidence, such as a true proposition, should be defined in terms of the changes it produces in estimations of the probability of future events. Keynes proposed that new evidence e is irrelevant to a proposition x, given old evidence q, if and only if P(x | e ∧ q) = P(x | q); otherwise, the proposition is relevant. There are technical problems with this definition; for example, the relevance of a piece of evidence can be sensitive to the order in which other pieces of evidence are received.
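Keynes's criterion can be checked numerically. The sketch below is an illustrative toy, not Keynes's own formalism: all the probability values are made-up assumptions, and the distribution is deliberately constructed so that the new evidence e depends only on the old evidence q. Conditioning on e then leaves the probability of x unchanged, which is exactly the irrelevance condition P(x | e ∧ q) = P(x | q):

```python
from itertools import product

# Toy joint distribution over three binary variables:
# x = the proposition of interest, q = old evidence, e = new evidence.
# e depends only on q, not on x, so e should come out irrelevant to x given q.
def joint(x, q, e):
    p_q = 0.6 if q else 0.4
    p_x_given_q = 0.7 if q else 0.2
    p_e_given_q = 0.5 if q else 0.9
    return (p_q
            * (p_x_given_q if x else 1 - p_x_given_q)
            * (p_e_given_q if e else 1 - p_e_given_q))

def prob(pred):
    # Total probability of all outcomes satisfying pred.
    return sum(joint(x, q, e)
               for x, q, e in product([True, False], repeat=3)
               if pred(x, q, e))

def cond(target, given):
    # Conditional probability P(target | given).
    return prob(lambda x, q, e: target(x, q, e) and given(x, q, e)) / prob(given)

p_x_given_q = cond(lambda x, q, e: x, lambda x, q, e: q)
p_x_given_eq = cond(lambda x, q, e: x, lambda x, q, e: q and e)
print(round(p_x_given_q, 6), round(p_x_given_eq, 6))  # prints 0.7 0.7
```

Because the two conditional probabilities coincide, e is irrelevant to x given q under Keynes's definition; changing `p_e_given_q` to depend on x would make them differ, i.e. make e relevant.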
In 1986, Dan Sperber and Deirdre Wilson drew attention to the central importance of relevance decisions in reasoning and communication. They proposed an account of the process of inferring relevant information from any given utterance. To do this work, they used what they called the "Principle of Relevance": namely, the position that any utterance addressed to someone automatically conveys the presumption of its own optimal relevance. The central idea of Sperber and Wilson's theory is that all utterances are encountered in some context, and the correct interpretation of a particular utterance is the one that allows the most new implications to be made in that context on the basis of the least amount of information necessary to convey it. For Sperber and Wilson, relevance is conceived as relative or subjective, as it depends upon the state of knowledge of a hearer when they encounter an utterance. Sperber and Wilson stress that this theory is not intended to account for every intuitive application of the English word "relevance".
Relevance, as a technical term, is restricted to relationships between utterances and interpretations, so the theory cannot account for intuitions such as the one that relevance relationships obtain in problems involving physical objects. If a plumber needs to fix a leaky faucet, for example, some objects and tools are relevant and others are not.
In computing, a plug-in is a software component that adds a specific feature to an existing computer program. When a program supports plug-ins, it enables customization. Web browsers have historically allowed executables as plug-ins, though they are now deprecated. Two plug-in examples are the Adobe Flash Player for playing videos and a Java virtual machine for running applets. A theme or skin is a preset package containing additional or changed graphical appearance details, which can be applied to specific software and websites to suit the purpose, topic, or tastes of different users, customizing the look and feel of a piece of computer software or an operating system front-end GUI. Applications support plug-ins for many reasons. Some of the main ones are: to enable third-party developers to create abilities which extend an application; to support easily adding new features; to reduce the size of an application; and to separate source code from an application because of incompatible software licenses.
Types of applications and why they use plug-ins: Audio editors use plug-ins to generate, process or analyze sound; Ardour and Audacity are examples of such editors. Digital audio workstations use plug-ins to generate and process sound. Examples include Pro Tools. Email clients use plug-ins to encrypt email; Pretty Good Privacy is an example of such a plug-in. Video game console emulators use plug-ins to modularize the separate subsystems of the devices they seek to emulate. For example, the PCSX2 emulator makes use of video, optical, etc. plug-ins for those respective components of the PlayStation 2. Graphics software uses plug-ins to support file formats and process images. Media players use plug-ins to apply filters; Foobar2000, GStreamer, Quintessential, VST, Winamp and XMMS are examples of such media players. Packet sniffers use plug-ins to decode packet formats; OmniPeek is an example of such a packet sniffer. Remote sensing applications use plug-ins to process data from different sensor types. Text editors and integrated development environments use plug-ins to support programming languages or to enhance the development process; e.g. Visual Studio, RAD Studio, IntelliJ IDEA, jEdit and MonoDevelop support plug-ins.
Visual Studio itself can be plugged into other applications via Visual Studio Tools for Office and Visual Studio Tools for Applications. Web browsers have used executables as plug-ins, though they are now deprecated. Examples include Java SE, QuickTime, Microsoft Silverlight and Unity. The host application provides services which the plug-in can use, including a way for plug-ins to register themselves with the host application and a protocol for the exchange of data with plug-ins. Plug-ins depend on the services provided by the host application and do not work by themselves. Conversely, the host application operates independently of the plug-ins, making it possible for end-users to add and update plug-ins dynamically without needing to make changes to the host application. Programmers typically implement plug-in functionality using shared libraries, which get dynamically loaded at run time, installed in a place prescribed by the host application. HyperCard supported a similar facility, but more commonly included the plug-in code in the HyperCard documents themselves.
Thus the HyperCard stack became a self-contained application in its own right, distributable as a single entity that end-users could run without the need for additional installation steps. Programs may also implement plug-ins by loading a directory of simple script files written in a scripting language like Python or Lua. In Mozilla Foundation definitions, the words "add-on", "extension" and "plug-in" are not synonyms. "Add-on" can refer to anything that extends the functions of a Mozilla application. Extensions comprise a subtype, albeit the most powerful one. Mozilla applications come with integrated add-on managers that, similar to package managers, install and manage extensions. The term "plug-in", however, refers to NPAPI-based web content renderers, and such plug-ins are being deprecated. Plug-ins appeared as early as the mid-1970s, when the EDT text editor running on the Unisys VS/9 operating system using the UNIVAC Series 90 mainframe computers provided the ability to run a program from the editor and to allow such a program to access the editor buffer, thus allowing an external program to access an edit session in memory.
The plug-in program could make calls to the editor to have it perform text-editing services upon the buffer that the editor shared with the plug-in. The Waterloo Fortran compiler used this feature to allow interactive compilation of Fortran programs edited by EDT. Early PC software applications to incorporate plug-in functionality included HyperCard and QuarkXPress on the Macintosh, both released in 1987. In 1988, Silicon Beach Software included plug-in functionality in Digital Darkroom and SuperPaint, and Ed Bomke coined the term plug-in. See also: Applet, Browser extension.
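The directory-of-script-files approach mentioned earlier can be sketched in Python. This is a minimal illustration, not any particular application's plug-in API: the `register(host)` convention, the function names and the `*.py`-per-plug-in layout are all assumptions of the sketch.

```python
import importlib.util
import pathlib

def load_plugins(plugin_dir):
    """Load every *.py file in plugin_dir as a plug-in module.

    Each plug-in is expected to expose a register(host) function
    through which the host hands it services; that convention is an
    assumption of this sketch, not a standard.
    """
    plugins = []
    for path in sorted(pathlib.Path(plugin_dir).glob("*.py")):
        # Import the file as a module named after its file stem.
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        # Only keep modules that follow the plug-in convention.
        if hasattr(module, "register"):
            plugins.append(module)
    return plugins
```

A host application would call `load_plugins("plugins/")` at startup and then invoke each module's `register` with whatever host object exposes its services; this mirrors the register-then-exchange-data protocol described above.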
PageRank is an algorithm used by Google Search to rank web pages in their search engine results. PageRank was named after Larry Page, one of the founders of Google. PageRank is a way of measuring the importance of website pages. According to Google: "PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites." PageRank is not the only algorithm used by Google to order search results, but it is the first algorithm that was used by the company, and it is the best known. PageRank is a link analysis algorithm and it assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is referred to as the PageRank of E and denoted by PR(E).
A PageRank results from a mathematical algorithm based on the webgraph, created by all World Wide Web pages as nodes and hyperlinks as edges, taking into consideration authority hubs such as cnn.com or usa.gov. The rank value indicates the importance of a particular page. A hyperlink to a page counts as a vote of support. The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it. A page that is linked to by many pages with high PageRank receives a high rank itself. Numerous academic papers concerning PageRank have been published since Page and Brin's original paper. In practice, the PageRank concept may be vulnerable to manipulation. Research has been conducted into identifying falsely influenced PageRank rankings; the goal is to find an effective means of ignoring links from documents with falsely influenced PageRank. Other link-based ranking algorithms for Web pages include the HITS algorithm invented by Jon Kleinberg, the IBM CLEVER project, the TrustRank algorithm and the Hummingbird algorithm.
The eigenvalue problem was suggested in 1976 by Gabriel Pinski and Francis Narin, who worked on scientometrics ranking scientific journals; in 1977 by Thomas Saaty in his concept of the Analytic Hierarchy Process, which weighted alternative choices; and in 1995 by Bradley Love and Steven Sloman as a cognitive model for concepts, the centrality algorithm. Larry Page and Sergey Brin developed PageRank at Stanford University in 1996 as part of a research project about a new kind of search engine. Sergey Brin had the idea that information on the web could be ordered in a hierarchy by "link popularity": a page ranks higher as there are more links to it. Rajeev Motwani and Terry Winograd co-authored with Page and Brin the first paper about the project, describing PageRank and the initial prototype of the Google search engine, published in 1998; shortly after, Page and Brin founded Google Inc., the company behind the Google search engine. While just one of many factors that determine the ranking of Google search results, PageRank continues to provide the basis for all of Google's web-search tools.
The name "PageRank" plays off of the name of developer Larry Page, as well as of the concept of a web page. The word is a trademark of Google, and the PageRank process has been patented. However, the patent is assigned to Stanford University and not to Google. Google has exclusive license rights on the patent from Stanford University; the university received 1.8 million shares of Google in exchange for use of the patent. PageRank was influenced by citation analysis, developed early on by Eugene Garfield in the 1950s at the University of Pennsylvania, and by Hyper Search, developed by Massimo Marchiori at the University of Padua. In the same year PageRank was introduced, Jon Kleinberg published his work on HITS. Google's founders cite Garfield and Kleinberg in their original papers. A small search engine called "RankDex" from IDD Information Services, designed by Robin Li, had been exploring a similar strategy for site-scoring and page-ranking since 1996. Li patented the technology in RankDex in 1999 and used it when he founded Baidu in China in 2000.
Larry Page referenced Li's work in some of his U.S. patents for PageRank. The PageRank algorithm outputs a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value. A probability is expressed as a numeric value between 0 and 1; a 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to the document with the 0.5 PageRank. Assume a small universe of four web pages: A, B, C and D. Links from a page to itself are ignored.
Multiple outbound links from one page to another page are treated as a single link. PageRank is initialized to the same value for all pages. In the original form of PageRank, the sum of PageRank over all pages was the total number of pages on the web at that time, so each page in this example would have an initial value of 1; however, versions of PageRan
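The iterative computation described above can be sketched for the four-page universe. The link structure below is an illustrative assumption (the text does not specify one), and the values are normalized as a probability distribution summing to 1 rather than to the number of pages:

```python
# Power-iteration sketch of PageRank for four pages A, B, C, D.
# Assumed link structure: B and C each link to A; D links to A, B and C.
links = {"A": [], "B": ["A"], "C": ["A"], "D": ["A", "B", "C"]}
pages = list(links)
n = len(pages)
d = 0.85                           # damping factor from the original paper
pr = {p: 1.0 / n for p in pages}   # start from a uniform distribution

for _ in range(100):               # iterate until the values stabilize
    # Pages with no outbound links ("dangling" pages, like A here)
    # redistribute their rank evenly across all pages.
    dangling = sum(pr[p] for p in pages if not links[p])
    new = {}
    for p in pages:
        # Each page q passes pr[q] split evenly among its outbound links.
        incoming = sum(pr[q] / len(links[q]) for q in pages if p in links[q])
        new[p] = (1 - d) / n + d * (incoming + dangling / n)
    pr = new

print({p: round(v, 3) for p, v in pr.items()})  # A receives the highest rank
```

Each iteration keeps the total rank at 1 (the damped mass plus the redistributed remainder), and A, which every other page links to, ends up with the highest PageRank, matching the intuition that links act as votes of support.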
A web search engine or Internet search engine is a software system designed to carry out web search, which means to search the World Wide Web in a systematic way for particular information specified in a web search query. The search results are presented in a line of results referred to as search engine results pages. The information may be a mix of web pages, videos, articles, research papers and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines maintain real-time information by running an algorithm on a web crawler. Internet content that is not capable of being searched by a web search engine is described as the deep web. Internet search engines themselves predate the debut of the Web in December 1990: the Whois user search dates back to 1982, and the Knowbot Information Service multi-network user search was first implemented in 1989. The first well documented search engine that searched content files, namely FTP files, was Archie, which debuted on 10 September 1990.
Prior to September 1993, the World Wide Web was indexed by hand. There was a list of webservers hosted on the CERN webserver. One snapshot of the list in 1992 remains, but as more and more web servers went online the central list could no longer keep up. On the NCSA site, new servers were announced under the title "What's New!" The first tool used for searching content on the Internet was Archie; the name stands for "archive" without the "v". It was created by Alan Emtage, Bill Heelan and J. Peter Deutsch, computer science students at McGill University in Montreal, Canada. The program downloaded the directory listings of all the files located on public anonymous FTP sites, creating a searchable database of file names. The rise of Gopher led to two new search programs, Veronica and Jughead. Like Archie, they searched the file titles stored in Gopher index systems. Veronica provided a keyword search of most Gopher menu titles in the entire Gopher listings. Jughead was a tool for obtaining menu information from specific Gopher servers.
While the name of the search engine "Archie Search Engine" was not a reference to the Archie comic book series, "Veronica" and "Jughead" are characters in the series, thus referencing their predecessor. In the summer of 1993, no search engine existed for the web, though numerous specialized catalogues were maintained by hand. Oscar Nierstrasz at the University of Geneva wrote a series of Perl scripts that periodically mirrored these pages and rewrote them into a standard format; this formed the basis for W3Catalog, the web's first primitive search engine, released on September 2, 1993. In June 1993, Matthew Gray, then at MIT, produced what was probably the first web robot, the Perl-based World Wide Web Wanderer, and used it to generate an index called "Wandex". The purpose of the Wanderer was to measure the size of the World Wide Web, which it did until late 1995. The web's second search engine, Aliweb, appeared in November 1993. Aliweb did not use a web robot, but instead depended on being notified by website administrators of the existence at each site of an index file in a particular format.
JumpStation used a web robot to find web pages and to build its index, and used a web form as the interface to its query program. It was thus the first WWW resource-discovery tool to combine the three essential features of a web search engine (crawling, indexing and searching) as described below. Because of the limited resources available on the platform it ran on, its indexing and hence searching were limited to the titles and headings found in the web pages the crawler encountered. One of the first "all text" crawler-based search engines was WebCrawler, which came out in 1994. Unlike its predecessors, it allowed users to search for any word in any webpage, which has become the standard for all major search engines since. It was also the first search engine to be widely known by the public. In 1994, Lycos was launched and became a major commercial endeavor. Soon after, many search engines vied for popularity; these included Magellan, Infoseek, Northern Light and AltaVista. Yahoo! was among the most popular ways for people to find web pages of interest, but its search function operated on its web directory, rather than its full-text copies of web pages.
Information seekers could browse the directory instead of doing a keyword-based search. In 1996, Netscape was looking to give a single search engine an exclusive deal as the featured search engine on Netscape's web browser. There was so much interest that instead Netscape struck deals with five of the major search engines: for $5 million a year, each search engine would be in rotation on the Netscape search engine page. The five engines were Yahoo!, Magellan, Lycos, Infoseek and Excite. Google adopted the idea of selling search terms in 1998, from a small search engine company named goto.com. This move had a significant effect on the search engine business, which went from struggling to one of the most profitable businesses on the Internet. Search engines were known as some of the brightest stars in the Internet investing frenzy that occurred in the late 1990s. Several