1.
Tuplet
–
In music, a tuplet is any rhythm that involves dividing the beat into a different number of equal subdivisions from that usually permitted by the time signature. This is indicated by a number indicating the fraction involved; the notes involved are also often grouped with a bracket or a slur. The most common type is the triplet. An alternative modern term, irrational rhythm, was originally borrowed from Greek prosody, where it referred to a syllable having a metrical value not corresponding to its actual time-value, or to a metrical foot containing such a syllable. The term would be incorrect if used in the mathematical sense, or in the more general sense of unreasonable, utterly illogical, absurd. Alternative terms found occasionally are artificial division, abnormal divisions, and irregular rhythm. The term polyrhythm, sometimes incorrectly used of tuplets, actually refers to the simultaneous use of opposing time signatures. Besides the triplet, the terms duplet, quadruplet, quintuplet, sextuplet, and septuplet are in common use; the terms nonuplet, decuplet, undecuplet, dodecuplet, and tredecuplet had been suggested, but up until 1925 had not caught on. By 1964 the terms nonuplet and decuplet were usual, while subdivisions by greater numbers were commonly described as "group of eleven notes", "group of twelve notes", and so on. The most common tuplet is the triplet: three triplet eighth notes are equal in duration to one quarter note. If several note values appear under the bracket, they are all affected the same way. If the notes of the tuplet are beamed together, the bracket may be omitted. For other tuplets, the number indicates a ratio to the next lower normal value in the prevailing meter. Some numbers are used inconsistently: for example, septuplets usually indicate 7 notes in the duration of 4 (or, in compound meter, 7 for 6), but may sometimes be used to mean 7 notes in the duration of 8.
Thus, a septuplet lasting a whole note can be written with either quarter notes or eighth notes. A French alternative is to write pour or de in place of the colon, or above the bracketed irregular number. This reflects the French usage of, for example, six-pour-quatre as a name for the sextolet. There are disagreements about the sextuplet, which is also called sestole, sestolet, or sextole; some go so far as to call the latter, when written with a numeral 6, a false sextuplet. In compound meter, even-numbered tuplets can indicate that a note value is changed in relation to the dotted version of the next higher note value. Thus, two duplet eighth notes take the time normally totaled by three eighth notes, equal to a dotted quarter note. Four quadruplet eighth notes would also equal a dotted quarter note. The duplet eighth note is thus exactly the same duration as a dotted eighth note, but the duplet notation is far more common in compound meters. A duplet in compound time is more often written as 2:3 than as 2:1 1⁄2.
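The duration relationships described above amount to simple fraction arithmetic. As a rough sketch (the helper function and its names are illustrative, not part of any notation standard):

```python
from fractions import Fraction

def tuplet_duration(normal_value, tuplet_count, normal_count):
    """Duration of one tuplet note: `tuplet_count` notes are played
    in the time that `normal_count` notes of `normal_value` would take."""
    return Fraction(normal_value) * normal_count / tuplet_count

# Triplet eighths: three notes in the time of two eighth notes.
triplet_eighth = tuplet_duration(Fraction(1, 8), 3, 2)
assert 3 * triplet_eighth == Fraction(1, 4)  # together equal one quarter note

# Septuplet at 7:4 quarter notes, spanning a whole note.
septuplet_quarter = tuplet_duration(Fraction(1, 4), 7, 4)
assert 7 * septuplet_quarter == Fraction(1, 1)
```

Exact rational arithmetic (rather than floats) keeps the duration identities exact, which is why `Fraction` is used here.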
2.
Octuple scull
–
An octuple scull is a racing shell, or rowing boat, used in the sport of competitive rowing. The octuple is directed by a coxswain and propelled by eight rowers who move the boat by sculling with two oars, one in each hand. Like a coxed eight, an octuple is typically 65.2 feet long and weighs 211.2 pounds. Racing boats are long, narrow, and broadly semi-circular in cross-section in order to reduce drag to a minimum. They usually have a fin towards the rear to prevent roll. Originally made from wood, shells are now almost always made from composite materials for strength. The riggers in sculling apply the forces symmetrically to each side of the boat; when there are eight rowers in a boat, each with only one sweep oar and rowing on opposite sides, the combination is referred to as a coxed eight. In sweep-oared racing the rigging means the forces are staggered alternately along the boat; the symmetrical forces in sculling make the boat more efficient, and so the octuple scull is faster than the coxed eight.
3.
Twelve-tone technique
–
All 12 notes are thus given more or less equal importance, and the music avoids being in a key. Over time, the technique increased greatly in popularity and eventually became widely influential on 20th-century composers; many important composers who had originally not subscribed to or even actively opposed the technique, such as Aaron Copland and Igor Stravinsky, eventually adopted it in their music. Schoenberg himself described the system as a "Method of composing with twelve tones which are related only with one another", and it is commonly considered a form of serialism. Schoenberg's countryman and contemporary Josef Matthias Hauer also developed a system using unordered hexachords, or tropes, but with no connection to Schoenberg's twelve-tone technique. Other composers have created systematic use of the chromatic scale. The twelve-tone technique was preceded by the freely atonal pieces of 1908–23, and by nondodecaphonic serial composition used independently in the works of Alexander Scriabin, Igor Stravinsky, Béla Bartók, Carl Ruggles, and others. Oliver Neighbour argues that Bartók was the first composer to use a group of twelve notes consciously for a structural purpose. Essentially, Schoenberg and Hauer systematized and defined for their own dodecaphonic purposes a pervasive technical feature of modern musical practice, the ostinato; Hauer's Nomos was a breakthrough piece in this regard. Schoenberg's idea in developing the technique was for it to "replace those structural differentiations provided formerly by tonal harmonies". Some of these composers extended the technique to control aspects other than the pitches of notes, thus producing serial music; some even subjected all elements of music to the serial process. The basis of the twelve-tone technique is the tone row, an ordered arrangement of the twelve notes of the chromatic scale.
There are four postulates or preconditions to the technique which apply to the row on which a work or section is based: no note is repeated within the row; the row may be subjected to interval-preserving transformations, that is, it may appear in inversion, retrograde, or retrograde-inversion; and the row, in any of its four transformations, may begin on any degree of the chromatic scale, in other words it may be freely transposed. A particular transformation together with a choice of transpositional level is referred to as a set form or row form; every row thus has up to 48 different row forms. However, not all prime series will yield so many variations, because transposed transformations may be identical to each other. A simple case is the chromatic scale, the retrograde inversion of which is identical to the prime form. In the above example, as is typical, the retrograde inversion contains three points where the sequence of two pitches is identical to the prime row; thus the generative power of even the most basic transformations is both unpredictable and inevitable. Motivic development can be driven by such internal consistency. Note that rules 1–4 above apply to the construction of the row itself, and not to the interpretation of the row in the composition. While a row may be expressed literally on the surface as thematic material, it need not be; individual composers have constructed more detailed systems in which matters such as these are also governed by systematic rules. The tone row chosen as the basis of the piece is called the prime series; untransposed, it is notated as P0.
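The four transformations and the count of up to 48 row forms can be sketched with pitch-class arithmetic. A small illustrative script, with inversion taken about the row's first pitch:

```python
# A tone row is a permutation of the twelve pitch classes 0-11.
def transpose(row, n):
    return [(p + n) % 12 for p in row]

def inversion(row):
    # Invert each interval about the first pitch of the row.
    return [(2 * row[0] - p) % 12 for p in row]

def retrograde(row):
    return row[::-1]

def row_forms(row):
    """All distinct forms: P, I, R, RI, each at 12 transposition levels."""
    bases = [row, inversion(row), retrograde(row), retrograde(inversion(row))]
    return {tuple(transpose(b, n)) for b in bases for n in range(12)}

chromatic = list(range(12))
print(len(row_forms(chromatic)))  # 24: for the chromatic scale, RI duplicates P
```

This reproduces the text's observation: the retrograde inversion of the chromatic scale is a transposition of its prime form, so only 24 of the 48 possible forms are distinct.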
4.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope. Mathematicians seek out patterns and use them to formulate new conjectures; they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said: "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality". Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind; there is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, and painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right, with Greek mathematics. Mathematics has since been extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of published works contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω, while the modern Greek equivalent is μαθαίνω, both meaning "to learn". In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study", even in Classical times.
5.
Element (mathematics)
–
In mathematics, an element, or member, of a set is any one of the distinct objects that make up that set. Writing A = {1, 2, 3, 4} means that the elements of the set A are the numbers 1, 2, 3 and 4. Sets consisting of elements of A are subsets of A. For example, consider the set B = {1, 2, {3, 4}}. The elements of B are not 1, 2, 3, and 4; rather, there are three elements of B, namely the numbers 1 and 2, and the set {3, 4}. The elements of a set can be anything; for example, C = {red, green, blue} is the set whose elements are the colors red, green and blue. The relation "is an element of", also called set membership, is denoted by the symbol ∈; writing x ∈ A means that x is an element of A. Equivalent expressions are "x is a member of A", "x belongs to A", "x is in A" and "x lies in A". Another possible notation for the same relation is A ∋ x, meaning "A contains x", though it is used less often. The negation of set membership is denoted by the symbol ∉; writing x ∉ A means that x is not an element of A. The symbol ϵ was first used by Giuseppe Peano in 1889 in his work Arithmetices principia, nova methodo exposita. Here he wrote on page X: "Signum ϵ significat est. Ita a ϵ b legitur a est quoddam b", which means "The symbol ϵ means is. So a ϵ b is read as a is a b." The symbol itself is a stylized lowercase Greek letter epsilon, the first letter of the word ἐστί ("is"). The Unicode characters for these symbols are U+2208 (∈), U+220B (∋) and U+2209 (∉); the equivalent LaTeX commands are \in, \ni and \notin, and Mathematica has the characters \[Element] and \[NotElement]. The number of elements in a set is a property known as cardinality; informally, this is the size of the set. In the above examples the cardinality of the set A is 4. An infinite set is a set with an infinite number of elements, while a finite set is a set with a finite number of elements. The above examples are examples of finite sets; an example of an infinite set is the set of positive integers {1, 2, 3, 4, ...}.
Using the sets defined above, namely A = {1, 2, 3, 4}, B = {1, 2, {3, 4}} and C = {red, green, blue}: 2 ∈ A; {3, 4} ∈ B; 3, 4 ∉ B (only the set {3, 4} is a member of B); and yellow ∉ C. A set with finitely many elements has finite cardinality, while a set such as the set of positive integers has infinite cardinality.
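Python's set type mirrors these membership examples directly. An illustrative snippet (frozenset stands in for the inner set {3, 4}, since Python set elements must be hashable):

```python
A = {1, 2, 3, 4}
B = {1, 2, frozenset({3, 4})}
C = {"red", "green", "blue"}

assert 2 in A                     # 2 ∈ A
assert frozenset({3, 4}) in B     # the set {3, 4} ∈ B
assert 3 not in B and 4 not in B  # 3, 4 ∉ B: only the set {3, 4} is a member
assert "yellow" not in C          # yellow ∉ C
assert len(A) == 4                # the cardinality of A is 4
```

Note that B has three elements, exactly as in the prose: the numbers 1 and 2, and one set.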
6.
Sequence
–
In mathematics, a sequence is an enumerated collection of objects in which repetitions are allowed. Like a set, it contains members, and the number of elements is called the length of the sequence. Unlike a set, order matters, and exactly the same elements can appear multiple times at different positions in the sequence. Formally, a sequence can be defined as a function whose domain is either the set of the natural numbers (for an infinite sequence) or the set of the first n natural numbers (for a sequence of finite length n). The position of an element in a sequence is its rank or index; whether the first element has index 0 or 1 depends on the context or on a specific convention. For example, a sequence of letters can be formed with the letter "M" first; likewise, a sequence which contains the number 1 at two different positions is a valid sequence. Sequences can be finite, as in these examples, or infinite. The empty sequence is included in most notions of sequence, but may be excluded depending on the context. A sequence can be thought of as a list of elements with a particular order. Sequences are useful in a number of mathematical disciplines for studying functions, spaces, and other mathematical structures using the convergence properties of sequences. In particular, sequences are the basis for series, which are important in differential equations. Sequences are also of interest in their own right, and can be studied as patterns or puzzles, such as in the study of prime numbers. There are a number of ways to denote a sequence, some of which are useful for specific types of sequences. One way to specify a sequence is to list the elements; for example, the first four odd numbers form the sequence (1, 3, 5, 7). This notation can be used for infinite sequences as well; for instance, the sequence of positive odd integers can be written (1, 3, 5, 7, ...). Listing is most useful for sequences with a pattern that can be easily discerned from the first few elements.
Other ways to denote a sequence are discussed after the examples. The prime numbers are the natural numbers bigger than 1 that have no divisors but 1 and themselves. Taking these in their natural order gives the sequence (2, 3, 5, 7, 11, 13, ...). The prime numbers are widely used in mathematics, and specifically in number theory. The Fibonacci numbers are the integer sequence whose elements are the sum of the previous two elements. The first two elements are either 0 and 1 or 1 and 1, so that the sequence is (0, 1, 1, 2, 3, 5, 8, ...). For a large list of examples of integer sequences, see the On-Line Encyclopedia of Integer Sequences.
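Viewing a sequence as a function on the natural numbers suggests a direct rendering in code; an illustrative sketch of the two sequences just described, as Python generators:

```python
from itertools import islice

def primes():
    """Yield the prime numbers 2, 3, 5, 7, 11, 13, ... in natural order."""
    n = 2
    while True:
        if all(n % d != 0 for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def fibonacci(a=0, b=1):
    """Yield the Fibonacci sequence; each element is the sum of the previous two."""
    while True:
        yield a
        a, b = b, a + b

print(list(islice(primes(), 6)))     # [2, 3, 5, 7, 11, 13]
print(list(islice(fibonacci(), 7)))  # [0, 1, 1, 2, 3, 5, 8]
```

Starting `fibonacci(1, 1)` instead gives the alternative convention in which the sequence begins 1, 1, 2, ...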
7.
Integer
–
An integer is a number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75 and 5 1⁄2 are not. The set of integers consists of zero, the positive natural numbers (also called whole numbers or counting numbers), and their additive inverses. This set is often denoted by a boldface Z or blackboard bold ℤ, standing for the German word Zahlen ("numbers"). ℤ is a subset of the sets of rational and real numbers and, like the set of natural numbers, is countably infinite. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes called rational integers to distinguish them from the more general algebraic integers; in fact, the rational integers are the algebraic integers that are also rational numbers. Like the natural numbers, Z is closed under the operations of addition and multiplication; that is, the sum and product of any two integers is an integer. However, with the inclusion of the negative natural numbers and, importantly, 0, Z is also closed under subtraction. The integers form a ring which is the most basic one, in the following sense: for any unital ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely to be an initial object in the category of rings, characterizes the ring Z. Z is not closed under division, since the quotient of two integers need not be an integer; and although the natural numbers are closed under exponentiation, the integers are not. The following lists some of the properties of addition and multiplication for any integers a, b and c. In the language of algebra, the first five properties listed above for addition say that Z under addition is an abelian group. As a group under addition, Z is a cyclic group; in fact, Z under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to Z. The first four properties listed above for multiplication say that Z under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse; e.g., there is no integer x such that 2x = 1, because the left-hand side is even.
This means that Z under multiplication is not a group. All the rules from the above property table, except for the last, taken together say that Z together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. Only those equalities of expressions that are true in any unital commutative ring are true in Z for all values of the variables. Note that certain non-zero integers map to zero in certain rings. The lack of zero-divisors in the integers means that the commutative ring Z is an integral domain.
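These closure and group properties are easy to spot-check numerically over a small range of integers; a minimal illustrative sketch:

```python
from itertools import product

sample = range(-5, 6)
for a, b in product(sample, repeat=2):
    # Z is closed under addition, multiplication, and subtraction ...
    assert isinstance(a + b, int) and isinstance(a * b, int) and isinstance(a - b, int)
    # ... and addition is commutative.
    assert a + b == b + a

for a in sample:
    assert a + 0 == a     # 0 is the additive identity
    assert a + (-a) == 0  # every integer has an additive inverse

print(7 / 2)    # 3.5: Z is not closed under division
print(2 ** -1)  # 0.5: nor under exponentiation with a negative exponent
```

A finite check like this is not a proof, of course; it only illustrates the properties the text states.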
8.
Recursive definition
–
A recursive definition, in mathematical logic and computer science, is used to define the elements in a set in terms of other elements in the set. A recursive definition of a function defines values of the function for some inputs in terms of the values of the function for other inputs. For example, the factorial function n! is defined by the rules 0! = 1 and (n+1)! = (n+1)·n!. This definition is valid for all n, because the recursion eventually reaches the base case of 0. The definition may also be thought of as giving a procedure describing how to construct the function n!, starting from n = 0; the recursion theorem states that such a definition indeed defines a function. An inductive definition of a set describes the elements in a set in terms of other elements in the set. For example, one definition of the set N of natural numbers is: (1) 1 is in N; (2) if an element n is in N, then n + 1 is in N; (3) N is the intersection of all sets satisfying (1) and (2). There are many sets that satisfy (1) and (2); for example, any set containing N together with extraneous members satisfies them. However, condition (3) specifies the set of natural numbers by removing the sets with extraneous members. Properties of recursively defined functions and sets can often be proved by an induction principle that follows the recursive definition. Most recursive definitions have two foundations: a base case and an inductive clause. In contrast, a circular definition may have no base case; such a situation would lead to an infinite regress. That recursive definitions are valid (meaning that a recursive definition identifies a unique function) is a theorem of set theory. More generally, recursive definitions of functions can be made whenever the domain is a well-ordered set, using the principle of transfinite recursion. The formal criteria for what constitutes a valid recursive definition are more complex for the general case; an outline of the general proof and the criteria can be found in Munkres.
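The factorial definition above translates line for line into a recursive function; a minimal sketch:

```python
def factorial(n):
    """Recursive definition: 0! = 1 and (n+1)! = (n+1) * n!"""
    if n == 0:
        return 1                 # base case: the recursion bottoms out here
    return n * factorial(n - 1)  # inductive clause

print(factorial(5))  # 120
```

Each call reduces n by one, so the computation is guaranteed to reach the base case for every natural number, just as the text argues.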
It is chiefly in logic and computer programming that recursive definitions are found. For example, a well-formed formula (wff) can be defined as: (1) a symbol which stands for a proposition, like p meaning "Connor is a lawyer"; (2) the negation symbol followed by a wff, like Np meaning "It is not true that Connor is a lawyer"; or (3) any of the four binary connectives followed by two wffs. The symbol K means "both are true", so Kpq may mean "Connor is a lawyer, and Mary likes music". The value of such a recursive definition is that it can be used to determine whether any particular string of symbols is well formed. Kpq is well formed, because it is K followed by the atomic wffs p and q; NKpq is well formed, because it is N followed by Kpq, which is in turn a wff.
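The wff grammar can be checked mechanically by a recursive parser. A sketch, assuming the Polish-notation convention in which lowercase letters are atoms, N is negation, and K, A, C, E are the four binary connectives (the source names only N and K; the other letters are a conventional assumption):

```python
def is_wff(s):
    """Return True if s is a well-formed formula in Polish notation."""
    def parse(i):
        # Parse one wff starting at index i; return the index just past it, or None.
        if i >= len(s):
            return None
        c = s[i]
        if c.islower():   # an atomic proposition such as p or q
            return i + 1
        if c == "N":      # negation: N followed by one wff
            return parse(i + 1)
        if c in "KACE":   # a binary connective followed by two wffs
            j = parse(i + 1)
            return parse(j) if j is not None else None
        return None
    return parse(0) == len(s)

assert is_wff("Kpq")     # K followed by the atomic wffs p and q
assert is_wff("NKpq")    # N followed by the wff Kpq
assert not is_wff("Kp")  # K needs two wffs, but only one follows
```

The parser's structure mirrors the recursive definition: one branch per clause, with the atomic case as the base.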
9.
Set (mathematics)
–
In mathematics, a set is a well-defined collection of distinct objects, considered as an object in its own right. For example, the numbers 2, 4, and 6 are distinct objects when considered separately, but when considered collectively they form a single set of size three, written {2, 4, 6}. Sets are one of the most fundamental concepts in mathematics. Developed at the end of the 19th century, set theory is now a ubiquitous part of mathematics. In mathematics education, elementary topics such as Venn diagrams are taught at a young age. The German word Menge, rendered as "set" in English, was coined by Bernard Bolzano in his work The Paradoxes of the Infinite. A set is a collection of distinct objects. The objects that make up a set can be anything: numbers, people, letters of the alphabet, other sets, and so on. Sets are conventionally denoted with capital letters. Sets A and B are equal if and only if they have precisely the same elements. Cantor's definition turned out to be inadequate; instead, the notion of a set is taken as a primitive notion in axiomatic set theory. There are two ways of describing, or specifying the members of, a set. One way is by intensional definition, using a rule or semantic description: A is the set whose members are the first four positive integers; B is the set of colors of the French flag. The second way is by extension, that is, listing each member of the set. An extensional definition is denoted by enclosing the list of members in curly brackets: C = {4, 2, 1, 3} and D = {blue, white, red}. One often has the choice of specifying a set either intensionally or extensionally; in the examples above, for instance, A = C and B = D. There are two important points to note about sets. First, in an extensional definition, a set member can be listed two or more times. However, per extensionality, two definitions of sets which differ only in that one of the definitions lists set members multiple times define, in fact, the same set. Hence, a set whose definition lists a member twice is identical to the set listing that member once.
The second important point is that the order in which the elements of a set are listed is irrelevant. We can illustrate these two important points with an example: {2, 4, 6} = {6, 2, 4} = {2, 2, 4, 6}. For sets with many elements, the enumeration of members can be abbreviated. For instance, the set of the first thousand positive integers may be specified extensionally as {1, 2, 3, ..., 1000}, where the ellipsis indicates that the list continues in the obvious way. Ellipses may also be used where sets have infinitely many members; thus the set of positive even numbers can be written as {2, 4, 6, 8, ...}.
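Python sets behave the same way, which makes the two points easy to demonstrate in a short illustrative snippet:

```python
# Repeated listings and different orders define the same set.
assert {2, 4, 6} == {6, 2, 4} == {2, 2, 4, 6, 6}

# The first thousand positive integers, i.e. {1, 2, 3, ..., 1000}.
first_thousand = set(range(1, 1001))
assert len(first_thousand) == 1000

# A finite slice of the positive even numbers {2, 4, 6, 8, ...}.
evens = {2 * k for k in range(1, 5)}
assert evens == {2, 4, 6, 8}
```

The infinite set of all positive even numbers cannot itself be materialized, of course; only finite slices of it can be built this way.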
10.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the discipline's fields can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on the challenges of implementing computation. Human–computer interaction considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity; further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. Charles Babbage started developing his programmable mechanical computer in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information. When the machine was finished, some hailed it as "Babbage's dream come true".
During the 1940s, new and more powerful computing machines were developed; as it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as an academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953; the first computer science program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own rights, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with the IBM machine was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has seen significant improvements in the usability and effectiveness of computing technology; modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
11.
Lisp (programming language)
–
Lisp is a family of computer programming languages with a long history and a distinctive, fully parenthesized prefix notation. Originally specified in 1958, Lisp is the second-oldest high-level programming language in use today; only Fortran is older, by one year. Lisp has changed since its early days, and many dialects have existed over its history. Today, the best-known general-purpose Lisp dialects are Common Lisp and Scheme. Lisp was originally created as a practical mathematical notation for computer programs, influenced by the notation of Alonzo Church's lambda calculus. It quickly became the favored programming language for artificial intelligence research. The name LISP derives from "LISt Processor": linked lists are one of Lisp's major data structures, and Lisp source code is itself made of lists. Thus, Lisp programs can manipulate source code as a data structure, and this interchangeability of code and data gives Lisp its instantly recognizable syntax. All program code is written as s-expressions, or parenthesized lists. Lisp was invented by John McCarthy in 1958 while he was at the Massachusetts Institute of Technology (MIT). McCarthy published its design in a paper in Communications of the ACM in 1960, entitled "Recursive Functions of Symbolic Expressions and Their Computation by Machine"; he showed that with a few simple operators and a notation for functions, one can build a Turing-complete language for algorithms. Information Processing Language was the first AI language, from 1955 or 1956, and already included many of the concepts, such as list-processing and recursion, that came to be used in Lisp. McCarthy's original notation used bracketed M-expressions that would be translated into S-expressions; as an example, the M-expression car[cons[A,B]] is equivalent to the S-expression (car (cons A B)). Once Lisp was implemented, programmers rapidly chose to use S-expressions, and M-expressions were abandoned.
M-expressions surfaced again with short-lived attempts such as MLISP by Horace Enea. Lisp was first implemented by Steve Russell on an IBM 704 computer. Russell had read McCarthy's paper and realized that the Lisp eval function could be implemented in machine code; the result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, "evaluate Lisp expressions". Two assembly language macros for the IBM 704 became the primitive operations for decomposing lists: car (Contents of the Address part of Register) and cdr (Contents of the Decrement part of Register). From the context, it is clear that the term "register" is used here to mean "memory register". Lisp dialects still use car and cdr for the operations that return the first item in a list and the rest of the list, respectively. The first complete Lisp compiler, written in Lisp, was implemented in 1962 by Tim Hart and Mike Levin at MIT. This compiler introduced the Lisp model of incremental compilation, in which compiled and interpreted functions can intermix freely; the language used in Hart and Levin's memo is much closer to modern Lisp style than McCarthy's earlier code. Lisp was a difficult system to implement with the compiler techniques of the time.
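The list model built from cons cells, with car and cdr as the decomposing operations, can be sketched in a few lines of Python (tuples standing in for cons cells and None for the empty list; the names mirror the Lisp primitives, but this is an illustration, not Lisp itself):

```python
def cons(a, d):  # build a cell holding a value and the rest of the list
    return (a, d)

def car(cell):   # return the first item of the list
    return cell[0]

def cdr(cell):   # return the rest of the list
    return cell[1]

# The list (1 2 3) as nested cons cells.
lst = cons(1, cons(2, cons(3, None)))
assert car(lst) == 1
assert car(cdr(lst)) == 2
assert cdr(cdr(cdr(lst))) is None  # end of the list
```

Walking the chain with repeated cdr calls is exactly how Lisp traverses its linked lists.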
12.
Relational database
–
A relational database is a digital database whose organization is based on the relational model of data, as proposed by E. F. Codd in 1970. The various software systems used to maintain relational databases are known as relational database management systems (RDBMS). Virtually all relational database systems use SQL as the language for querying and maintaining the database. This model organizes data into one or more tables of columns and rows, with a unique key identifying each row. Rows are also called records or tuples. Generally, each table/relation represents one entity type, the rows representing instances of that type of entity and the columns representing values attributed to that instance. Each row in a table has its own unique key; rows in a table can be linked to rows in other tables by adding a column for the unique key of the linked row. Codd showed that data relationships of arbitrary complexity can be represented by a simple set of concepts. Part of this processing involves consistently being able to select or modify one and only one row; therefore, most physical implementations have a unique primary key (PK) for each table. When a new row is written to the table, a new value for the primary key is generated. System performance is optimized for PKs. Other, more natural keys may also be identified and defined as alternate keys (AK); often several columns are needed to form an AK. Both PKs and AKs have the ability to uniquely identify a row within a table. Additional technology may be applied to ensure a globally unique identifier across the world. The primary keys within a database are used to define the relationships among the tables; when a PK migrates to another table, it becomes a foreign key in the other table. When each cell can contain only one value and the PK migrates into a regular entity table, this design pattern can represent either a one-to-one or one-to-many relationship. Relationships are a logical connection between different tables, established on the basis of interaction among these tables; a database management system must maintain these relationships in order to operate efficiently and accurately.
Most of the programming within an RDBMS is accomplished using stored procedures; often, procedures can be used to greatly reduce the amount of information transferred within and outside of a system. For increased security, the system design may grant access only to the stored procedures and not directly to the tables. Fundamental stored procedures contain the logic needed to insert new data and update existing data; more complex procedures may be written to implement additional rules and logic related to processing or selecting the data.
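The key concepts above (a unique primary key per table, a foreign key referencing it, and relationships traversed via joins) can be sketched with Python's built-in sqlite3 module. The artist/album tables and their columns are hypothetical examples, not from the source:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

# Each table has a unique primary key identifying each row.
conn.execute("CREATE TABLE artist (artist_id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
# album.artist_id is a foreign key: the artist PK migrated into another table.
conn.execute("""CREATE TABLE album (
                    album_id  INTEGER PRIMARY KEY,
                    title     TEXT NOT NULL,
                    artist_id INTEGER NOT NULL REFERENCES artist(artist_id))""")

conn.execute("INSERT INTO artist VALUES (1, 'Holst')")
conn.execute("INSERT INTO album VALUES (1, 'The Planets', 1)")

# The relationship between the tables is traversed with a join on the key.
row = conn.execute("""SELECT artist.name, album.title
                      FROM album JOIN artist USING (artist_id)""").fetchone()
print(row)  # ('Holst', 'The Planets')
```

With foreign-key enforcement on, inserting an album that references a nonexistent artist_id raises an IntegrityError, which is the consistency guarantee the text describes.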
13.
Semantic Web
–
The Semantic Web is an extension of the Web through standards by the World Wide Web Consortium (W3C). The standards promote common data formats and exchange protocols on the Web; according to the W3C, "The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries." The term was coined by Tim Berners-Lee for a web of data that can be processed by machines. While its critics have questioned its feasibility, proponents argue that applications in industry, biology and human sciences research have already proven the validity of the original concept. The 2001 Scientific American article by Berners-Lee, Hendler, and Lassila described an expected evolution of the existing Web to a Semantic Web; in 2006, Berners-Lee and colleagues stated that "This simple idea…remains largely unrealized." In 2013, more than four million Web domains contained Semantic Web markup. In the following example, the text "Paul Schuster was born in Dresden" on a Website will be annotated, connecting a person with their place of birth. The HTML fragment shows how a small graph is being described, in RDFa syntax using a schema.org vocabulary; the triples result in the graph shown in the given figure. One of the advantages of using Uniform Resource Identifiers (URIs) is that they can be dereferenced using the HTTP protocol. According to the so-called Linked Open Data principles, such a dereferenced URI should result in a document that offers further data about the given URI. This enables automated agents to access the Web more intelligently and perform tasks on behalf of users. Berners-Lee defines the Semantic Web as "a web of data that can be processed directly and indirectly by machines". Many of the technologies proposed by the W3C already existed before they were positioned under the W3C umbrella. In addition, other technologies with similar goals have emerged, such as microformats.
A Semantic Web, which makes this possible, has yet to emerge; but when it does, the intelligent agents people have touted for ages will finally materialize. The Semantic Web is regarded as an integrator across different content, information applications and systems. It has applications in publishing, blogging, and many other areas. Many files on a typical computer can also be loosely divided into human-readable documents and machine-readable data. Documents like mail messages, reports, and brochures are read by humans. Data, such as calendars, address books, playlists, and spreadsheets, are presented using an application program that lets them be viewed, searched and combined. Metadata tags provide a method by which computers can categorise the content of web pages. With HTML, for example, there is no way to express the meaning of a page's content: HTML can only say that the span of text "X586172" is something that should be positioned near "Acme Gizmo" and "€199", etc. There is no way to say "this is a catalog", or even to establish that "Acme Gizmo" is a kind of title or that "€199" is a price. There is also no way to express that these pieces of information are bound together in describing a discrete item. Semantic HTML refers to the traditional HTML practice of markup following intention: for example, the use of <em> denoting emphasis rather than <i>, which merely specifies italics
14.
Resource Description Framework
–
The Resource Description Framework (RDF) is a family of World Wide Web Consortium (W3C) specifications originally designed as a metadata data model. It is also used in knowledge management applications. RDF was adopted as a W3C recommendation in 1999. The RDF 1.0 specification was published in 2004, and the RDF 1.1 specification in 2014. The RDF data model is similar to classical conceptual modeling approaches. It is based upon the idea of making statements about resources in the form of subject–predicate–object expressions. The subject denotes the resource, and the predicate denotes traits or aspects of the resource and expresses a relationship between the subject and the object. For example, one way to represent the notion "the sky has the color blue" in RDF is as the triple: a subject denoting "the sky", a predicate denoting "has the color", and an object denoting "blue". Therefore, RDF swaps object for subject in contrast to the approach of an entity–attribute–value model in object-oriented design: entity (sky), attribute (color), value (blue). RDF is an abstract model with several serialization formats, so the particular encoding for resources or triples varies from format to format. RDF's simple data model and ability to model disparate, abstract concepts have led to its increasing use in knowledge management applications unrelated to Semantic Web activity. A collection of RDF statements intrinsically represents a labeled, directed multi-graph, and this theoretically makes an RDF data model better suited to certain kinds of knowledge representation than other relational or ontological models. However, in practice, RDF data is often persisted in relational databases or native representations. As RDFS and OWL demonstrate, one can build additional ontology languages upon RDF. Early RDF design drew on work by Ramanathan V. Guha at Apple and Tim Bray at Netscape. The W3C published a specification of RDF's data model and an XML serialization as a recommendation in February 1999. One early misunderstanding was that RDF was an XML format, rather than RDF being a data model and only the RDF/XML serialisation being XML-based.
RDF saw little take-up in this period, but there was significant work carried out in Bristol, around ILRT at Bristol University and HP Labs. RSS 1.0 and FOAF became exemplar applications for RDF in this period. Several common serialization formats are in use, including: Turtle, a compact, human-friendly format; N-Triples, a very simple, easy-to-parse, line-based format that is not as compact as Turtle; N-Quads, a superset of N-Triples, for serializing multiple RDF graphs; N3 or Notation3, a non-standard serialization that is very similar to Turtle but has some additional features, such as the ability to define inference rules; and RDF/XML, an XML-based syntax that was the first standard format for serializing RDF. RDF/XML is sometimes misleadingly called simply RDF because it was introduced among the other W3C specifications defining RDF and it was historically the first W3C standard RDF serialization format. However, it is important to distinguish the RDF/XML format from the abstract RDF model itself. With a little effort, virtually any arbitrary XML may also be interpreted as RDF using GRDDL (Gleaning Resource Descriptions from Dialects of Languages). RDF triples may be stored in a type of database called a triplestore. The subject of an RDF statement is either a uniform resource identifier (URI) or a blank node, both of which denote resources
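As an illustration of why N-Triples is described as a simple, line-based format, here is a small Python sketch; the URIs are invented for the example, and the URI-vs-literal test is deliberately naive. Each triple becomes one line terminated by a period:

```python
def to_ntriples(triples):
    """Serialize (subject, predicate, object) tuples in N-Triples style:
    one triple per line, ending in ' .'. Objects that look like URIs are
    written as <references>, everything else as a quoted literal.
    (Naive sketch: real N-Triples also handles escaping, language tags,
    datatypes and blank nodes.)"""
    lines = []
    for s, p, o in triples:
        obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

doc = to_ntriples([
    ("http://example.org/sky", "http://example.org/hasColor", "blue"),
])
```

The line-per-triple shape is what makes the format trivial to stream and to parse, at the cost of repeating full URIs that Turtle would abbreviate with prefixes.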
15.
Linguistics
–
Linguistics is the scientific study of language, and involves an analysis of language form, language meaning, and language in context. Linguists traditionally analyse human language by observing an interplay between sound and meaning. Phonetics is the study of speech and non-speech sounds, and delves into their acoustic and articulatory properties. While the study of semantics typically concerns itself with truth conditions, pragmatics deals with how situational context influences the production of meaning. Grammar is a system of rules which governs the production and use of utterances in a given language. These rules apply to sound as well as meaning, and include componential sub-sets of rules, such as those pertaining to phonology and morphology. Modern theories that deal with the principles of grammar are largely based within Noam Chomsky's ideological school of generative grammar. In the early 20th century, Ferdinand de Saussure distinguished between the notions of langue and parole in his formulation of structural linguistics. According to him, parole is the utterance of speech, whereas langue refers to an abstract phenomenon that theoretically defines the principles and system of rules that govern a language. This distinction resembles the one made by Noam Chomsky between competence and performance in his theory of transformative or generative grammar. According to Chomsky, competence is an innate capacity and potential for language, while performance is the specific way in which it is used by individuals, groups, and communities. The study of parole is the domain of sociolinguistics, the sub-discipline that comprises the study of a system of linguistic facets within a certain speech community. Discourse analysis further examines the structure of texts and conversations emerging out of a speech community's usage of language. Stylistics also involves the study of written, signed, or spoken discourse through varying speech communities, genres, and editorial or narrative formats in the mass media.
In the 1960s, Jacques Derrida, for instance, further distinguished between speech and writing by proposing that written language be studied as a linguistic medium of communication in itself. Palaeography is therefore the discipline that studies the evolution of written scripts in language. Linguistics also deals with the social, cultural, historical and political factors that influence language. Research on language through the sub-branches of historical and evolutionary linguistics also focuses on how languages change and grow, particularly over an extended period of time. Language documentation combines anthropological inquiry with linguistic inquiry, in order to describe languages. Lexicography involves the documentation of words that form a vocabulary. Such a documentation of a vocabulary from a particular language is usually compiled in a dictionary. Computational linguistics is concerned with the statistical or rule-based modeling of natural language from a computational perspective. Specific knowledge of language is applied by speakers during the act of translation and interpretation, as well as in language education – the teaching of a second or foreign language. Policy makers work with governments to implement new plans in education. Related areas of study also include the disciplines of semiotics, literary criticism, translation, and speech-language pathology. Before the 20th century, the term philology, first attested in 1716, was commonly used to refer to the science of language
16.
Philosophy
–
Philosophy is the study of general and fundamental problems concerning matters such as existence, knowledge, values, reason, mind, and language. The term was probably coined by Pythagoras. Philosophical methods include questioning, critical discussion, rational argument and systematic presentation. Classic philosophical questions include: Is it possible to know anything and to prove it? However, philosophers might also pose more practical and concrete questions, such as: Is it better to be just or unjust? Historically, philosophy encompassed any body of knowledge. From the time of Ancient Greek philosopher Aristotle to the 19th century, natural philosophy encompassed astronomy, medicine and physics. For example, Newton's 1687 Mathematical Principles of Natural Philosophy later became classified as a book of physics. In the 19th century, the growth of modern research universities led academic philosophy and other disciplines to professionalize and specialize. In the modern era, some investigations that were traditionally part of philosophy became separate academic disciplines, including psychology and sociology. Other investigations closely related to art, science, politics, or other pursuits remained part of philosophy. For example: Is beauty objective or subjective? Are there many scientific methods or just one? Is political utopia a hopeful dream or a hopeless fantasy? Major sub-fields of academic philosophy include metaphysics, epistemology, ethics, aesthetics, political philosophy, logic and philosophy of science. Since the 20th century, professional philosophers have contributed to society primarily as professors, researchers and writers. Traditionally, the term referred to any body of knowledge. In this sense, philosophy is closely related to religion, mathematics, natural science and education. This division is not obsolete but has changed: natural philosophy has split into the various natural sciences, especially astronomy, physics, chemistry, biology and cosmology.
Moral philosophy has birthed the social sciences, but still includes value theory. Metaphysical philosophy has birthed formal sciences such as logic, mathematics and philosophy of science, but still includes epistemology, cosmology and others. Many philosophical debates that began in ancient times are still debated today. Colin McGinn and others claim that no philosophical progress has occurred during that interval. Chalmers and others, by contrast, see progress in philosophy similar to that in science. In one general sense, philosophy is associated with wisdom, intellectual culture and a search for knowledge. In that sense, all cultures and literate societies ask philosophical questions, such as "How are we to live?" A broad and impartial conception of philosophy, then, finds a reasoned inquiry into such matters as reality, morality and life in all world civilizations. Socrates was an influential philosopher, who insisted that he possessed no wisdom but was a pursuer of wisdom
17.
Latin
–
Latin is a classical language belonging to the Italic branch of the Indo-European languages. The Latin alphabet is derived from the Etruscan and Greek alphabets. Latin was originally spoken in Latium, in the Italian Peninsula. Through the power of the Roman Republic, it became the dominant language, initially in Italy and subsequently throughout the Roman Empire. Vulgar Latin developed into the Romance languages, such as Italian, Portuguese, Spanish, French, and Romanian. Latin, Italian and French have contributed many words to the English language. Latin and Ancient Greek roots are used in theology, biology, and medicine. By the late Roman Republic, Old Latin had been standardised into Classical Latin. Vulgar Latin was the colloquial form spoken during the same time and attested in inscriptions and the works of comic playwrights like Plautus and Terence. Late Latin is the written language from the 3rd century. Later, Early Modern Latin and Modern Latin evolved. Latin was used as the language of international communication, scholarship, and science until well into the 18th century, when it began to be supplanted by vernaculars. Ecclesiastical Latin remains the language of the Holy See and the Roman Rite of the Catholic Church. Today, many students, scholars and members of the Catholic clergy speak Latin fluently, and it is taught in primary, secondary and postsecondary educational institutions around the world. The language has been passed down through various forms. Some inscriptions have been published in an internationally agreed, monumental, multivolume series, the Corpus Inscriptionum Latinarum. Authors and publishers vary, but the format is about the same: volumes detailing inscriptions with a critical apparatus stating the provenance. The reading and interpretation of these inscriptions is the subject matter of the field of epigraphy. The works of several hundred ancient authors who wrote in Latin have survived in whole or in part, and they are in part the subject matter of the field of classics.
Books translated into Latin include The Cat in the Hat and a book of fairy tales; additional resources include phrasebooks and resources for rendering everyday phrases and concepts into Latin, such as Meissner's Latin Phrasebook. The Latin influence in English has been significant at all stages of its insular development. From the 16th to the 18th centuries, English writers cobbled together huge numbers of new words from Latin and Greek words, dubbed "inkhorn terms", as if they had spilled from a pot of ink. Many of these words were used once by the author and then forgotten, but many of the most common polysyllabic English words are of Latin origin through the medium of Old French. Romance words make up respectively 59%, 20% and 14% of the English, German and Dutch vocabularies, and those figures can rise dramatically when only non-compound and non-derived words are included. Accordingly, Romance words make up roughly 35% of the vocabulary of Dutch. Roman engineering had the same effect on scientific terminology as a whole
18.
Complex number
–
A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit, satisfying the equation i² = −1. In this expression, a is the real part and b is the imaginary part of the complex number. If z = a + bi, then ℜz = a and ℑz = b. Complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part. The complex number a + bi can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary, whereas a complex number whose imaginary part is zero is a real number. In this way, the complex numbers are a field extension of the ordinary real numbers. As well as their use within mathematics, complex numbers have applications in many fields, including physics, chemistry, biology, economics and electrical engineering. The Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers; he called them "fictitious" during his attempts to find solutions to cubic equations in the 16th century. Complex numbers allow solutions to certain equations that have no solutions in real numbers. For example, the equation (x + 1)² = −9 has no real solution. Complex numbers provide a solution to this problem. The idea is to extend the real numbers with the imaginary unit i, where i² = −1. According to the fundamental theorem of algebra, all polynomial equations with real or complex coefficients in a single variable have a solution in complex numbers. A complex number is a number of the form a + bi; for example, −3.5 + 2i is a complex number. The real number a is called the real part of the complex number a + bi, and the real number b is called its imaginary part. By this convention, the imaginary part does not include the imaginary unit: hence b, not bi, is the imaginary part. The real part of a complex number z is denoted by Re(z) or ℜ(z), and the imaginary part by Im(z) or ℑ(z). For example, Re(−3.5 + 2i) = −3.5 and Im(−3.5 + 2i) = 2. Hence, in terms of its real and imaginary parts, a complex number z is equal to Re(z) + Im(z) ⋅ i.
This expression is known as the Cartesian form of z. A real number a can be regarded as a complex number a + 0i, whose imaginary part is 0
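These definitions map directly onto Python's built-in complex type, which writes the imaginary unit as j rather than i; a small sketch:

```python
import cmath

z = -3.5 + 2j           # the complex number -3.5 + 2i from the text
real_part = z.real      # -3.5
imag_part = z.imag      # 2.0 -- by convention, the unit j is not included

# The defining property of the imaginary unit: i * i = -1
i_squared = 1j * 1j     # (-1+0j)

# (x + 1)^2 = -9 has no real solution, but two complex ones:
root = cmath.sqrt(-9)              # 3i, up to floating-point rounding
x1, x2 = -1 + root, -1 - root
check = (x1 + 1) * (x1 + 1)        # squaring recovers -9
```

Note that .real and .imag return the real numbers a and b themselves, exactly matching the convention above that the imaginary part is b, not bi.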
19.
Quaternion
–
In mathematics, the quaternions are a number system that extends the complex numbers. They were first described by Irish mathematician William Rowan Hamilton in 1843. A feature of quaternions is that multiplication of two quaternions is noncommutative. Hamilton defined a quaternion as the quotient of two directed lines in a three-dimensional space, or equivalently as the quotient of two vectors. Quaternions are generally represented in the form a + bi + cj + dk, where a, b, c, and d are real numbers, and i, j, and k are the fundamental quaternion units. In practical applications, they can be used alongside other methods, such as Euler angles and rotation matrices, or as an alternative to them. In modern mathematical language, quaternions form a four-dimensional associative normed division algebra over the real numbers; in fact, the quaternions were the first noncommutative division algebra to be discovered. The algebra of quaternions is often denoted by H, or in blackboard bold by ℍ, and it can also be given by the Clifford algebra classifications Cℓ0,2 ≅ Cℓ03,0. These rings are also Euclidean Hurwitz algebras, of which the quaternions are the largest associative algebra. The unit quaternions can be thought of as a choice of a group structure on the 3-sphere S3 that gives the group Spin(3). Quaternion algebra was introduced by Hamilton in 1843. Carl Friedrich Gauss had also discovered quaternions in 1819, but this work was not published until 1900. Hamilton knew that the complex numbers could be interpreted as points in a plane. Points in space can be represented by their coordinates, which are triples of numbers; however, Hamilton had been stuck on the problem of multiplication and division for a long time. He could not figure out how to calculate the quotient of the coordinates of two points in space. The great breakthrough in quaternions finally came on Monday 16 October 1843 in Dublin. As Hamilton walked along the towpath of the Royal Canal with his wife, the concepts behind quaternions were taking shape in his mind.
When the answer dawned on him, Hamilton could not resist the urge to carve the formula for the quaternions, i² = j² = k² = ijk = −1, into the stone of Brougham Bridge as he paused on it. On the following day, Hamilton wrote a letter to his friend and fellow mathematician John T. Graves; this letter was later published in the London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, vol. xxv, pp. 489–95. In the letter, Hamilton states: "And here there dawned on me the notion that we must admit, in some sense, a fourth dimension of space … An electric circuit seemed to close, and a spark flashed forth." Hamilton called a quadruple with these rules of multiplication a quaternion. Hamilton's treatment is more geometric than the modern approach, which emphasizes quaternions' algebraic properties.
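Hamilton's defining relations can be checked mechanically. Below is a small sketch of quaternion multiplication on 4-tuples (a, b, c, d) standing for a + bi + cj + dk; the expansion follows from i² = j² = k² = ijk = −1 together with distributivity:

```python
def qmul(p, q):
    """Hamilton product of quaternions represented as 4-tuples
    (a, b, c, d), meaning a + b*i + c*j + d*k."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
k = (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

# Hamilton's bridge formula: i^2 = j^2 = k^2 = ijk = -1
squares = (qmul(i, i), qmul(j, j), qmul(k, k))
ijk = qmul(qmul(i, j), k)

# Noncommutativity: ij = k but ji = -k
ij, ji = qmul(i, j), qmul(j, i)
```

The last two lines show the feature the article singles out: reversing the factors flips the sign, so quaternion multiplication is noncommutative.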
20.
Octonion
–
In mathematics, the octonions are a normed division algebra over the real numbers, usually represented by the capital letter O, using boldface O or blackboard bold 𝕆. There are three lower-dimensional normed division algebras over the reals: the real numbers R themselves, the complex numbers C, and the quaternions H. The octonions have eight dimensions, twice the number of dimensions of the quaternions, of which they are an extension. They are noncommutative and nonassociative, but satisfy a weaker form of associativity, namely alternativity. Octonions are not as well known as the quaternions and complex numbers. Despite this, they have interesting properties and are related to a number of exceptional structures in mathematics. Additionally, octonions have applications in fields such as string theory and special relativity. The octonions were discovered in 1843 by John T. Graves. The octonions were discovered independently by Cayley and are sometimes referred to as Cayley numbers or the Cayley algebra. Hamilton described the history of Graves's discovery. Hamilton invented the word "associative" so that he could say that octonions were not associative. The octonions can be thought of as octets of real numbers. Every octonion is a real linear combination of the unit octonions. Addition and subtraction of octonions is done by adding and subtracting corresponding terms and hence their coefficients. Multiplication is distributive over addition, so the product of two octonions can be calculated by summing the products of all the terms, again like quaternions. The definition above, though, is not unique; it is but one of 480 possible definitions for octonion multiplication with e0 = 1. The others can be obtained by permuting and changing the signs of the basis elements. The 480 different algebras are isomorphic, and there is rarely a need to consider which particular multiplication rule is used.
Each of these 480 definitions is invariant up to signs under some 7-cycle of the points; a common choice is to use the definition invariant under the 7-cycle with e1e2 = e4, as it is particularly easy to remember the multiplication. A variation of this sometimes used is to label the elements of the basis by the elements ∞, 0, 1, 2, …, 6 of the projective line over the finite field of order 7. The multiplication is then given by e∞ = 1 and e1e2 = e4. These are the nonzero codewords of the quadratic residue code of length 7 over the field of 2 elements
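The doubling that produces the octonions from the quaternions (and each algebra in the sequence from the previous one) can be sketched with the Cayley–Dickson construction. The code below uses one common doubling rule, (a, b)(c, d) = (ac − d*b, da + bc*), where * is conjugation, on nested pairs of numbers. It is an illustration that associativity fails exactly at the octonion level, not a full octonion library, and the basis ordering it produces is just one of the many equivalent conventions discussed above:

```python
def neg(x):
    return -x if not isinstance(x, tuple) else (neg(x[0]), neg(x[1]))

def conj(x):
    """Cayley-Dickson conjugate; real numbers are self-conjugate."""
    return x if not isinstance(x, tuple) else (conj(x[0]), neg(x[1]))

def add(x, y):
    if not isinstance(x, tuple):
        return x + y
    return (add(x[0], y[0]), add(x[1], y[1]))

def mul(x, y):
    """One common doubling rule: (a,b)(c,d) = (ac - conj(d)b, da + b conj(c))."""
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def unit(dim, i):
    """The i-th basis unit of the dim-dimensional algebra, as nested pairs."""
    coeffs = [0] * dim
    coeffs[i] = 1
    def pack(c):
        return c[0] if len(c) == 1 else (pack(c[:len(c)//2]), pack(c[len(c)//2:]))
    return pack(coeffs)

quat = [unit(4, i) for i in range(4)]   # quaternion units
octo = [unit(8, i) for i in range(8)]   # octonion units

quat_assoc = all(mul(mul(x, y), z) == mul(x, mul(y, z))
                 for x in quat for y in quat for z in quat)
octo_assoc = all(mul(mul(x, y), z) == mul(x, mul(y, z))
                 for x in octo for y in octo for z in octo)
```

Checking all basis triples is enough: since multiplication is bilinear, associativity on the basis would imply associativity everywhere, so a single failing basis triple witnesses the nonassociativity of the octonions.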
21.
Medieval Latin
–
Despite the clerical origin of many of its authors, medieval Latin should not be confused with Ecclesiastical Latin. There is no consensus on the exact boundary where Late Latin ends and medieval Latin begins. Medieval Latin had an enlarged vocabulary, which freely borrowed from other sources. Greek provided much of the technical vocabulary of Christianity. The various Germanic languages spoken by the Germanic tribes, who invaded southern Europe, were major sources of new words. Germanic leaders became the rulers of parts of the Roman Empire that they conquered. Other more ordinary words were replaced by coinages from Vulgar Latin or Germanic sources because the classical words had fallen into disuse. Latin was also spread to areas such as Ireland and Germany. Works written in those lands, where Latin was a language with no relation to the local vernacular, also influenced the vocabulary. English words like abstract, subject, communicate, matter and probable acquired the meanings given to them in medieval Latin. The high point of the development of medieval Latin as a literary language came with the Carolingian renaissance, a rebirth of learning kindled under the patronage of Charlemagne, king of the Franks. On the other hand, strictly speaking there was no single form of medieval Latin. Every Latin author in the period spoke Latin as a second language, with varying degrees of fluency, and syntax, grammar and vocabulary were often influenced by an author's native language. For instance, rather than following the classical Latin practice of placing the verb at the end, medieval writers would often follow the conventions of their own native language instead. Unlike classical Latin, where esse was the only auxiliary verb, medieval Latin writers might use habere as an auxiliary, similar to constructions in the Germanic languages. The accusative and infinitive construction in classical Latin was often replaced by a subordinate clause introduced by quod or quia. This is almost identical, for example, to the use of que in similar constructions in French.
In every age from the late 8th century onwards, there were learned writers who were familiar enough with classical syntax to be aware that these forms and usages were "wrong"; however, the use of quod to introduce subordinate clauses was especially pervasive and is found at all levels. That resulted in two features of Medieval Latin compared with Classical Latin. First, many authors attempted to show off their knowledge of Classical Latin by using rare or archaic constructions
22.
Greek language
–
Greek is an independent branch of the Indo-European family of languages, native to Greece and other parts of the Eastern Mediterranean. It has the longest documented history of any living language, spanning 34 centuries of written records. Its writing system has been the Greek alphabet for the major part of its history; other systems, such as Linear B and the Cypriot syllabary, were used previously. The alphabet arose from the Phoenician script and was in turn the basis of the Latin, Cyrillic, Armenian, Coptic, Gothic and many other writing systems. Together with the Latin texts and traditions of the Roman world, the Greek texts and traditions of antiquity form the basis of the discipline of classics. During antiquity, Greek was a widely spoken lingua franca in the Mediterranean world and many places beyond. It would eventually become the official parlance of the Byzantine Empire. The language is spoken by at least 13.2 million people today in Greece, Cyprus, Italy, Albania, Turkey, and the Greek diaspora. Greek roots are often used to coin new words for other languages; Greek and Latin are the predominant sources of international scientific vocabulary. Greek has been spoken in the Balkan peninsula since around the 3rd millennium BC. The earliest written evidence is a Linear B clay tablet found in Messenia that dates to between 1450 and 1350 BC, making Greek the world's oldest recorded living language. Among the Indo-European languages, its date of earliest written attestation is matched only by the now-extinct Anatolian languages. The Greek language is conventionally divided into the following periods. Proto-Greek: the unrecorded but assumed last ancestor of all known varieties of Greek; the unity of Proto-Greek would have ended as Hellenic migrants entered the Greek peninsula sometime in the Neolithic era or the Bronze Age. Mycenaean Greek: the language of the Mycenaean civilisation, recorded in the Linear B script on tablets dating from the 15th century BC onwards. Ancient Greek: in its various dialects, the language of the Archaic and Classical periods of the ancient Greek civilisation.
It was widely known throughout the Roman Empire; after the Roman conquest of Greece, an unofficial bilingualism of Greek and Latin was established in the city of Rome, and Koine Greek became a first or second language in the Roman Empire. The origin of Christianity can also be traced through Koine Greek. Medieval Greek, also known as Byzantine Greek: the continuation of Koine Greek in Byzantine Greece, up to the demise of the Byzantine Empire in the 15th century. Much of the written Greek that was used as the official language of the Byzantine Empire was an eclectic middle-ground variety based on the tradition of written Koine. Modern Greek: stemming from Medieval Greek, Modern Greek usages can be traced in the Byzantine period. It is the language used by the modern Greeks, and, apart from Standard Modern Greek, there are several dialects of it. In the modern era, the Greek language entered a state of diglossia. The historical unity and continuing identity between the various stages of the Greek language are often emphasised. Greek speakers today still tend to regard literary works of ancient Greek as part of their own language rather than a foreign language, and it is also often stated that the historical changes have been relatively slight compared with some other languages. According to one estimation, "Homeric Greek is probably closer to demotic than 12th-century Middle English is to modern spoken English". Greek is spoken by about 13 million people, mainly in Greece, Albania and Cyprus, but also worldwide by the large Greek diaspora. Greek is the official language of Greece, where it is spoken by almost the entire population
23.
Multiset
–
In mathematics, a multiset is a generalization of the concept of a set that, unlike a set, allows multiple instances of the multiset's elements. For example, {a, a, b} and {a, b} are different multisets although they are the same set. However, order does not matter, so {a, a, b} and {a, b, a} are the same multiset. The multiplicity of an element is the number of instances of the element in a specific multiset. The use of multisets, however, predates the word "multiset" by many centuries. Knuth attributes the first study of multisets to the Indian mathematician Bhāskarāchārya. Knuth also lists other names that were proposed or used for multisets, including list, bunch, bag, heap, sample, weighted set, collection, and suite. The number of times an element belongs to the multiset is the multiplicity of that member. The total number of elements in a multiset, including repeated memberships, is the cardinality of the multiset. For example, in the multiset {a, a, b, b, b, c} the multiplicities of the members a, b, and c are respectively 2, 3, and 1, and the cardinality of the multiset is 6. To distinguish between sets and multisets, a notation that incorporates square brackets is sometimes used: the multiset {a, a, b} can be represented as [a, a, b]. In multisets, as in sets and in contrast to tuples, the order of elements is irrelevant: the multisets {a, b} and {b, a} are equal. Wayne Blizard traced multisets back to the very origin of numbers, arguing that in ancient times the number n was often represented by a collection of n strokes, tally marks, or units. This shows that people implicitly used multisets even before mathematics emerged, and that the necessity of this structure has always been so urgent that multisets have been several times rediscovered and appeared in literature under different names. For instance, they were referred to as bags by James Lyle Peterson in 1981. A multiset has also been called an aggregate, heap, bunch, sample, weighted set, occurrence set, and fireset.
Although multisets were implicitly utilized from ancient times, their explicit exploration happened much later. The first known study of multisets is attributed to the Indian mathematician Bhāskarāchārya circa 1150, who described permutations of multisets. The work of Marius Nizolius contains another early reference to the concept of multisets. Athanasius Kircher found the number of multiset permutations when one element can be repeated. Jean Prestet published a rule for multiset permutations in 1675, and John Wallis explained this rule in detail in 1685. In explicit form, multisets appeared in the work of Richard Dedekind. Other mathematicians formalized multisets and began to study them as a precise mathematical object in the 20th century. One of the simplest and most natural examples is the multiset of prime factors of a number n; here the underlying set of elements is the set of prime divisors of n. For example, the number 120 has the prime factorization 120 = 2³ · 3¹ · 5¹, which gives the multiset {2, 2, 2, 3, 5}. A related example is the multiset of solutions of an algebraic equation. A quadratic equation, for example, has two solutions; however, in some cases they are both the same number.
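These notions map directly onto Python's collections.Counter, a ready-made multiset in which the keys form the underlying set and the values record multiplicities; a short sketch:

```python
from collections import Counter

# The multiset {a, a, b, b, b, c}: multiplicities 2, 3 and 1
m = Counter({"a": 2, "b": 3, "c": 1})
mult_b = m["b"]                 # multiplicity of b: 3
cardinality = sum(m.values())   # total count, repeats included: 6
underlying = set(m)             # the underlying set {'a', 'b', 'c'}

# The multiset of prime factors of 120 = 2 * 2 * 2 * 3 * 5
factors = Counter()
n = 120
p = 2
while n > 1:
    while n % p == 0:           # record each prime with its multiplicity
        factors[p] += 1
        n //= p
    p += 1
```

Because a Counter is keyed rather than ordered, {a, a, b} and {a, b, a} build the same Counter, matching the point above that order is irrelevant while multiplicity is not.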
24.
Function (mathematics)
–
In mathematics, a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x². The output of a function f corresponding to an input x is denoted by f(x). In this example, if the input is −3, then the output is 9, and we may write f(−3) = 9. Likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. The input variable(s) are sometimes referred to as the argument(s) of the function. Functions of various kinds are the central objects of investigation in most fields of modern mathematics. There are many ways to describe or represent a function. Some functions may be defined by a formula or algorithm that tells how to compute the output for a given input. Others are given by a picture, called the graph of the function. In science, functions are sometimes defined by a table that gives the outputs for selected inputs. A function could be described implicitly, for example as the inverse to another function or as a solution of a differential equation. Sometimes the codomain is called the function's range, but more commonly the word "range" is used to mean, instead, specifically the set of outputs. For example, we could define a function using the rule f(x) = x² by saying that the domain and codomain are the real numbers. The image of this function is the set of non-negative real numbers. In analogy with arithmetic, it is possible to define addition, subtraction and multiplication of functions. Another important operation defined on functions is function composition, where the output from one function becomes the input to another function. Linking each shape to its color is a function from X to Y: each shape is linked to a color, there is no shape that lacks a color, and no shape has more than one color. This function will be referred to as the "color-of-the-shape" function. The input to a function is called the argument and the output is called the value.
The set of all permitted inputs to a function is called the domain of the function. Thus, the domain of the color-of-the-shape function is the set of the four shapes. The concept of a function does not require that every possible output is the value of some argument. A second example of a function is the following: the domain is chosen to be the set of natural numbers, and the codomain is the set of integers. The function associates to any number n the number 4 − n. For example, to 1 it associates 3 and to 10 it associates −6. A third example of a function has the set of polygons as domain and the set of natural numbers as codomain.
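As a concrete sketch, the two example functions above can be written directly in Python (the names f and g are just illustrative labels, not part of the article):

```python
def f(x):
    """The squaring function: associates each number x with x**2."""
    return x ** 2

def g(n):
    """Associates to any number n the number 4 - n."""
    return 4 - n

# Each input is related to exactly one output, but two different
# inputs may share the same output:
print(f(-3), f(3))   # 9 9
print(g(1), g(10))   # 3 -6
```

Note how f(−3) and f(3) agree: a function must be single-valued in its output, not injective.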
25.
Set theory
–
Set theory is a branch of mathematical logic that studies sets, which informally are collections of objects. Although any type of object can be collected into a set, set theory is applied most often to objects that are relevant to mathematics; the language of set theory can be used in the definitions of nearly all mathematical objects. The modern study of set theory was initiated by Georg Cantor. Set theory is commonly employed as a foundational system for mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Beyond its foundational role, set theory is a branch of mathematics in its own right; contemporary research into set theory includes a diverse collection of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals. Mathematical topics typically emerge and evolve through interactions among many researchers. Set theory, however, was founded by a single paper in 1874 by Georg Cantor, "On a Property of the Collection of All Real Algebraic Numbers". Mathematicians had grappled with the notion of infinity since the 5th century BC, beginning with Greek mathematician Zeno of Elea in the West and early Indian mathematicians in the East; especially notable is the work of Bernard Bolzano in the first half of the 19th century. Modern understanding of infinity began in 1867–71, with Cantor's work on number theory; an 1872 meeting between Cantor and Richard Dedekind influenced Cantor's thinking and culminated in Cantor's 1874 paper. Cantor's work initially polarized the mathematicians of his day: while Karl Weierstrass and Dedekind supported Cantor, Leopold Kronecker, now seen as a founder of mathematical constructivism, did not. The utility of set theory led to the article "Mengenlehre", contributed in 1898 by Arthur Schoenflies to Klein's encyclopedia; in 1899 Cantor had himself posed the question "What is the cardinal number of the set of all sets?"
Russell used his paradox as a theme in his 1903 review of continental mathematics, The Principles of Mathematics. In 1906 English readers gained the book Theory of Sets of Points by William Henry Young and his wife Grace Chisholm Young, published by Cambridge University Press. The momentum of set theory was such that debate on the paradoxes did not lead to its abandonment; the work of Zermelo in 1908 and Abraham Fraenkel in 1922 resulted in the set of axioms ZFC, which became the most commonly used set of axioms for set theory. The work of analysts such as Henri Lebesgue demonstrated the great mathematical utility of set theory. Set theory is used as a foundational system, although in some areas category theory is thought to be a preferred foundation. Set theory begins with a fundamental binary relation between an object o and a set A. If o is a member of A, the notation o ∈ A is used; since sets are objects, the membership relation can relate sets as well. A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set A are also members of set B, then A is a subset of B; for example, {1, 2} is a subset of {1, 2, 3}, and so is {2}, but {1, 4} is not. As implied by this definition, a set is a subset of itself; for cases where this possibility is unsuitable or would make sense to be rejected, the term proper subset is defined.
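Python's built-in sets mirror the membership and inclusion relations directly; a small sketch, using illustrative sets:

```python
A = {1, 2}
B = {1, 2, 3}

print(1 in B)    # membership, 1 ∈ B: True
print(A <= B)    # A is a subset of B: True
print(B <= B)    # every set is a subset of itself: True
print(A < B)     # A is a proper subset of B: True
print(B < B)     # no set is a proper subset of itself: False
```

The `<=` and `<` operators correspond exactly to subset and proper subset as defined above.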
26.
Discrete mathematics
–
Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. Discrete mathematics therefore excludes topics in "continuous mathematics" such as calculus. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets. However, there is no exact definition of the term "discrete mathematics". Indeed, discrete mathematics is described less by what is included than by what is excluded: continuously varying quantities. The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deal with finite sets. Research in discrete mathematics increased in the latter half of the twentieth century partly due to the development of digital computers; conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems. Although the main objects of study in discrete mathematics are discrete objects, analytic methods from continuous mathematics are often employed as well. In university curricula, "Discrete Mathematics" appeared in the 1980s, initially as a computer science support course; some high-school-level discrete mathematics textbooks have appeared as well. At this level, discrete mathematics is often seen as a preparatory course. The Fulkerson Prize is awarded for outstanding papers in discrete mathematics. The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852. In logic, the second problem on David Hilbert's list of open problems presented in 1900 was to prove that the axioms of arithmetic are consistent. Gödel's second incompleteness theorem, proved in 1931, showed that this was not possible, at least not within arithmetic itself. Hilbert's tenth problem was to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution.
In 1970, Yuri Matiyasevich proved that this could not be done. At the same time, military requirements motivated advances in operations research. The Cold War meant that cryptography remained important, with fundamental advances such as public-key cryptography being developed in the following decades; operations research remained important as a tool in business and project management, with the critical path method being developed in the 1950s. The telecommunication industry has also motivated advances in discrete mathematics, particularly in graph theory. Formal verification of statements in logic has been necessary for the development of safety-critical systems. Computational geometry has been an important part of the computer graphics incorporated into modern video games. Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP.
27.
Combinatorics
–
Combinatorics is a branch of mathematics concerning the study of finite or countable discrete structures. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed. One of the oldest and most accessible parts of combinatorics is graph theory. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms. A mathematician who studies combinatorics is called a combinatorialist or a combinatorist. Basic combinatorial concepts and enumerative results appeared throughout the ancient world. The Greek historian Plutarch discusses an argument between Chrysippus and Hipparchus over a rather delicate enumerative problem, which was later shown to be related to Schröder–Hipparchus numbers. In the Ostomachion, Archimedes considers a tiling puzzle. In the Middle Ages, combinatorics continued to be studied, largely outside of the European civilization. The Indian mathematician Mahāvīra provided formulae for the number of permutations and combinations; later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations. During the Renaissance, combinatorics enjoyed a rebirth together with the rest of mathematics and the sciences, and works of Pascal, Newton, Jacob Bernoulli and Euler became foundational in the emerging field. In modern times, the works of J. J. Sylvester and Percy MacMahon helped lay the foundation for enumerative and algebraic combinatorics. Graph theory also enjoyed an explosion of interest at the same time, especially in connection with the four color problem. In the second half of the 20th century, combinatorics enjoyed a rapid growth; in part, the growth was spurred by new connections and applications to other fields, ranging from algebra to probability, from functional analysis to number theory, etc.
These connections blurred the boundaries between combinatorics and parts of mathematics and theoretical computer science, but at the same time led to a partial fragmentation of the field. Enumerative combinatorics is the most classical area of combinatorics and concentrates on counting the number of certain combinatorial objects. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise in applications have a relatively simple combinatorial description; the Fibonacci numbers are the basic example of a problem in enumerative combinatorics. The twelvefold way provides a unified framework for counting permutations, combinations and partitions. Analytic combinatorics concerns the enumeration of combinatorial structures using tools from complex analysis. In contrast with enumerative combinatorics, which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae. Partition theory studies various enumeration and asymptotic problems related to integer partitions; originally a part of number theory and analysis, it is now considered a part of combinatorics or an independent field. It incorporates the bijective approach and various tools in analysis and analytic number theory. Graphs are basic objects in combinatorics.
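A minimal sketch of enumerative combinatorics in Python, using the standard library's counting helpers (math.comb and math.perm, available since Python 3.8) alongside the Fibonacci numbers mentioned above:

```python
from math import comb, perm

def fib(n):
    """n-th Fibonacci number, computed iteratively (fib(0) = 0)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(10))      # 55
print(perm(5, 3))   # ordered selections of 3 items from 5: 60
print(comb(5, 3))   # unordered selections of 3 items from 5: 10
```

perm and comb count two of the twelve cases organized by the twelvefold way: sequences without repetition and subsets, respectively.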
28.
Probability theory
–
Probability theory is the branch of mathematics concerned with probability, the analysis of random phenomena. Although it is not possible to predict precisely the results of random events, sequences of such events exhibit certain statistical patterns; two representative mathematical results describing such patterns are the law of large numbers and the central limit theorem. As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of large sets of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. Christiaan Huygens published a book on the subject in 1657, and the theory was developed considerably during the 19th century. Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of continuous variables into the theory, and this culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, with measure theory. This became the mostly undisputed axiomatic basis for modern probability theory. Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately; the more mathematically advanced measure theory-based treatment of probability covers the discrete and continuous cases alike. Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling an honest die produces one of six possible results; one collection of possible results corresponds to getting an odd number. Thus, the subset {1, 3, 5} is an element of the power set of the sample space of die rolls.
In this case, {1, 3, 5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, that event is said to have occurred. Probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results be assigned a value of one. For example, the probability of the event {1, 2, 3, 4, 6} is 5/6; this event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1, 2, 3, 4, 5, 6} has a probability of 1, that is, absolute certainty. Discrete probability theory deals with events that occur in countable sample spaces. Modern definition: the modern definition starts with a finite or countable set called the sample space, which relates to the set of all possible outcomes in the classical sense, denoted by Ω.
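The die-rolling example can be sketched with exact fractions; prob here is an illustrative helper that assigns each event (a subset of the sample space) its probability under a fair die:

```python
from fractions import Fraction

omega = frozenset({1, 2, 3, 4, 5, 6})   # sample space of a fair die

def prob(event):
    """Probability of an event, i.e. a subset of the sample space."""
    return Fraction(len(event), len(omega))

print(prob({1, 3, 5}))     # the die falls on an odd number: 1/2
print(prob(omega - {5}))   # any number except five: 5/6
print(prob({5}))           # the mutually exclusive event: 1/6
print(prob(omega))         # the event of all possible results: 1
```

Using Fraction rather than floating point keeps the probabilities exact, matching the 5/6 and 1/6 values in the text.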
29.
Rule of product
–
In combinatorics, the rule of product or multiplication principle is a basic counting principle. Stated simply, it is the idea that if there are a ways of doing something and b ways of doing another thing, then there are a × b ways of performing both actions. For example, if there are 3 ways of doing the first thing and 2 ways of doing the second, the rule says: multiply 3 by 2, getting 6. The sets in such an example are often disjoint sets, but that is not necessary: the number of ways to choose a member of a 3-element set, and then to do so again, in choosing an ordered pair each of whose components is in the set, is 3 × 3 = 9. As another example, when you decide to order pizza, you must first choose the type of crust (2 choices); next, you choose one topping: cheese, pepperoni, or sausage (3 choices). Using the rule of product, you know there are 2 × 3 = 6 possible combinations of ordering a pizza. In set theory, this multiplication principle is often taken to be the definition of the product of cardinal numbers. We have |S1| ⋅ |S2| ⋯ |Sn| = |S1 × S2 × ⋯ × Sn|, where × is the Cartesian product operator. These sets need not be finite, nor is it necessary to have only finitely many factors in the product. The rule of sum is another basic counting principle.
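A sketch of the pizza count with itertools.product (the crust names are illustrative; the text only fixes their number at 2):

```python
from itertools import product

crusts = ["thin", "deep dish"]
toppings = ["cheese", "pepperoni", "sausage"]

pizzas = list(product(crusts, toppings))
print(len(pizzas))   # 2 * 3 = 6 possible pizzas

# Ordered pairs with both components drawn from one 3-element set:
S = ["a", "b", "c"]
print(len(list(product(S, S))))   # 3 * 3 = 9
```

The length of a Cartesian product is always the product of the lengths of its factors, which is exactly the rule of product.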
30.
Cardinality
–
In mathematics, the cardinality of a set is a measure of the number of elements of the set. For example, the set A = {2, 4, 6} contains 3 elements, and therefore has a cardinality of 3. There are two approaches to cardinality: one which compares sets directly using bijections and injections, and another which uses cardinal numbers. The cardinality of a set is also called its size, when no confusion with other notions of size is possible. The cardinality of a set A is usually denoted |A|, with a vertical bar on each side; this is the same notation as absolute value. Alternatively, the cardinality of a set A may be denoted by n(A), card(A), or #A. While the cardinality of a finite set is just the number of its elements, extending the notion to infinite sets usually starts with defining the notion of comparison of arbitrary sets. Two sets A and B have the same cardinality if there exists a bijection between them; such sets are said to be equipotent, equipollent, or equinumerous. This relationship can also be denoted A ≈ B or A ~ B. For example, the set E = {0, 2, 4, 6, …} of non-negative even numbers has the same cardinality as the set N = {0, 1, 2, 3, …} of natural numbers, since the function f(n) = 2n is a bijection from N to E. A has cardinality less than or equal to the cardinality of B if there exists an injective function from A into B. A has cardinality strictly less than the cardinality of B if there is an injective, but no bijective, function from A to B. If |A| ≤ |B| and |B| ≤ |A| then |A| = |B|; the axiom of choice is equivalent to the statement that |A| ≤ |B| or |B| ≤ |A| for every A, B. Historically, the cardinality of a set was not defined as an object itself. However, such an object can be defined as follows. The relation of having the same cardinality is called equinumerosity, and this is an equivalence relation on the class of all sets. The equivalence class of a set A under this relation then consists of all sets which have the same cardinality as A. There are two ways to define the cardinality of a set: the cardinality of a set A may be defined as its equivalence class under equinumerosity.
Alternatively, a representative set may be designated for each equivalence class; the most common choice is the initial ordinal in that class. This is usually taken as the definition of cardinal number in axiomatic set theory. Assuming the axiom of choice, the cardinalities of the infinite sets are denoted ℵ0 < ℵ1 < ℵ2 < …. For each ordinal α, ℵα+1 is the least cardinal number greater than ℵα.
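The bijection f(n) = 2n from N to the non-negative even numbers can only be checked on a finite initial segment in code, but a small sketch makes the pairing concrete:

```python
def f(n):
    """The bijection n -> 2n from N to the non-negative evens E."""
    return 2 * n

N = range(10)              # a finite initial segment of N
E = [f(n) for n in N]
print(E)                   # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

# f is injective: distinct inputs give distinct outputs.
assert len(set(E)) == len(E)
# Every output is a non-negative even number.
assert all(e >= 0 and e % 2 == 0 for e in E)
```

On the full infinite sets, f is also surjective onto E (every even number 2k is hit by k), which is what makes N and E equinumerous despite E being a proper subset of N.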
31.
Cartesian product
–
In set theory, a Cartesian product is a mathematical operation that returns a set from multiple sets. That is, for sets A and B, the Cartesian product A × B is the set of all ordered pairs (a, b) where a ∈ A and b ∈ B. Products can be specified using set-builder notation, e.g. A × B = {(a, b) | a ∈ A and b ∈ B}. A table can be created by taking the Cartesian product of a set of rows and a set of columns: if the Cartesian product rows × columns is taken, the cells of the table contain ordered pairs of the form (row value, column value). More generally, a Cartesian product of n sets, also known as an n-fold Cartesian product, can be represented by an array of n dimensions; an ordered pair is a 2-tuple or couple. The Cartesian product is named after René Descartes, whose formulation of analytic geometry gave rise to the concept. An illustrative example is the standard 52-card deck. The standard playing card ranks form a 13-element set; the card suits form a four-element set. The Cartesian product of these sets returns a 52-element set consisting of 52 ordered pairs: Ranks × Suits returns a set of pairs of the form (rank, suit), while Suits × Ranks returns a set of pairs of the form (suit, rank). Both sets are distinct, even disjoint. The main historical example is the Cartesian plane in analytic geometry; usually, such a pair's first and second components are called its x and y coordinates, respectively (cf. picture). The set of all such pairs is thus assigned to the set of all points in the plane. A formal definition of the Cartesian product from set-theoretical principles follows from a definition of ordered pair. The most common definition of ordered pairs, the Kuratowski definition, is (x, y) = {{x}, {x, y}}. Note that, under this definition, X × Y ⊆ P(P(X ∪ Y)), where P represents the power-set operator. Therefore, the existence of the Cartesian product of any two sets in ZFC follows from the axioms of pairing, union, and power set. Let A, B, C, and D be sets. The Cartesian product is not associative: (A × B) × C ≠ A × (B × C). If for example A = {1}, then (A × A) × A = {((1, 1), 1)} ≠ {(1, (1, 1))} = A × (A × A). The Cartesian product behaves nicely with respect to intersections (cf. left picture).
We have (A ∩ B) × (C ∩ D) = (A × C) ∩ (B × D). In most cases the above statement is not true if we replace intersection with union (cf. middle picture). Other properties are related to subsets: if A ⊆ B then A × C ⊆ B × C. The cardinality of a set is the number of elements of the set. For example, defining two sets A = {a, b} and B = {5, 6}, both set A and set B consist of two elements each. Their Cartesian product, written as A × B, results in a new set with the following elements: A × B = {(a, 5), (a, 6), (b, 5), (b, 6)}. Each element of A is paired with each element of B, and each pair makes up one element of the output set. The number of values in each element of the resulting set is equal to the number of sets whose Cartesian product is being taken, 2 in this case.
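The 52-card deck example can be sketched in Python; itertools.product computes exactly the Cartesian product Ranks × Suits:

```python
from itertools import product

ranks = ["A", "2", "3", "4", "5", "6", "7",
         "8", "9", "10", "J", "Q", "K"]
suits = ["spades", "hearts", "diamonds", "clubs"]

deck = list(product(ranks, suits))    # Ranks x Suits
print(len(deck))                      # 13 * 4 = 52
print(deck[0])                        # ('A', 'spades')

# Ranks x Suits and Suits x Ranks are distinct, even disjoint,
# since the pairs have their components in opposite order:
assert set(product(ranks, suits)).isdisjoint(set(product(suits, ranks)))
```

The suit names are spelled out here for readability; the text itself only fixes the sets' sizes (13 and 4).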
32.
Programming language
–
A programming language is a formal computer language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs to control the behavior of a machine or to express algorithms. From the early 1800s, programs were used to direct the behavior of machines such as Jacquard looms. Thousands of different programming languages have been created, mainly in the computer field. Many programming languages require computation to be specified in an imperative form, while other languages use other forms of program specification such as the declarative form. The description of a language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document, while other languages have a dominant implementation that is treated as a reference. Some languages have both, with the language defined by a standard and extensions taken from the dominant implementation being common. A programming language is a notation for writing programs, which are specifications of a computation or algorithm. Some, but not all, authors restrict the term "programming language" to those languages that can express all possible algorithms. For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of such a machine. In most practical contexts, a programming language involves a computer. Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The theory of computation classifies languages by the computations they are capable of expressing (their expressive power); all Turing complete languages can implement the same set of algorithms.
ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete. Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages. Programming languages may, however, share syntax with markup languages if a computational semantics is defined; XSLT, for example, is a Turing complete XML dialect. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset. The term computer language is sometimes used interchangeably with programming language.
33.
Projection (mathematics)
–
In mathematics, a projection is a mapping of a set into a subset which is equal to its square under composition of mappings, that is, which is idempotent. The restriction of a projection to a subspace is also called a projection, even if the idempotence property is lost. An everyday example of a projection is the casting of shadows onto a plane: the projection of a point is its shadow on the paper sheet, and the shadow of a point that already lies on the sheet is that point itself. The shadow of a sphere is a closed disk. Originally, the notion of projection was introduced in Euclidean geometry to denote the projection of three-dimensional Euclidean space onto a plane in it, like the shadow example. Two main kinds occur: the projection from a point C onto a plane, under which the points P such that the line CP is parallel to the plane do not have any image, and the projection of the point C itself is not defined; and the projection parallel to a direction D onto a plane (see Affine space § Projection for an accurate definition, generalized to any dimension). The concept of projection in mathematics is a very old one. This rudimentary idea was refined and abstracted, first in a geometric context; over time differing versions of the concept developed, but today, in a sufficiently abstract setting, we can unify these variations. In cartography, a map projection is a map of a part of the surface of the Earth onto a plane, which, in some cases but not always, is defined by a 3D projection; such 3D projections are also at the basis of the theory of perspective. The need for unifying the two kinds of projections and for defining the image under a central projection of any point different from the center of projection is at the origin of projective geometry. In projective geometry, however, a projective transformation is a bijection of a projective space. In an abstract setting we can say that a projection is a mapping of a set which is idempotent. A projection may also refer to a mapping which has a left inverse; both notions are strongly related, as follows. Let p be an idempotent map from a set A into itself, and let B = p(A) be its image. If we denote by π the map p viewed as a map from A onto B and by i the injection of B into A, then we have π ∘ i = IdB.
Conversely, π ∘ i = IdB implies that i ∘ π is idempotent. A mapping that takes an element to its equivalence class under a given equivalence relation is known as the canonical projection. The evaluation map sends a function f to the value f(x) for a fixed x; the space of functions Y^X can be identified with the Cartesian product ∏ i ∈ X Y i, and the evaluation map is then a projection map from that Cartesian product.
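A small Python sketch of idempotence and the π/i factorization, using an illustrative idempotent map (projecting a number onto the non-negative half-line); the names p, pi and i mirror the symbols above and are not from the article:

```python
def p(x):
    """An idempotent map on the reals: project x onto the
    non-negative half-line (negative numbers go to 0)."""
    return max(x, 0.0)

# Idempotence: applying p twice is the same as applying it once.
for x in [-2.5, -1.0, 0.0, 3.7]:
    assert p(p(x)) == p(x)

# Split p as i after pi: pi maps A onto the image B = p(A),
# and i injects B back into A.
pi = p                  # p viewed as a map onto its image B
i = lambda b: b         # the injection of B into A
b = 4.2                 # a point of the image B (non-negative)
assert pi(i(b)) == b    # pi composed with i is the identity on B
print("ok")
```

The final assertion is exactly π ∘ i = IdB: restricted to its image, an idempotent map acts as the identity.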