1.
Linguistics
–
Linguistics is the scientific study of language, and involves an analysis of language form, language meaning, and language in context. Linguists traditionally analyse human language by observing an interplay between sound and meaning. Phonetics is the study of speech and non-speech sounds, and delves into their acoustic and articulatory properties, while the study of semantics typically concerns itself with truth conditions. Grammar is a system of rules which governs the production and use of utterances in a given language. These rules apply to sound as well as meaning, and include componential sub-sets of rules, such as those pertaining to phonology and morphology. Modern theories that deal with the principles of grammar are largely based within Noam Chomsky's ideological school of generative grammar. In the early 20th century, Ferdinand de Saussure distinguished between the notions of langue and parole in his formulation of structural linguistics. According to him, parole is the specific utterance of speech, whereas langue refers to an abstract phenomenon that theoretically defines the principles and system of rules that govern a language. This distinction resembles the one made by Noam Chomsky between competence and performance in his theory of transformative or generative grammar. According to Chomsky, competence is an innate capacity and potential for language, while performance is the specific way in which it is used by individuals, groups, and communities. The study of parole is the domain of sociolinguistics, the sub-discipline that comprises the study of a system of linguistic facets within a certain speech community. Discourse analysis further examines the structure of texts and conversations emerging out of a speech community's usage of language. Stylistics also involves the study of written, signed, or spoken discourse through varying speech communities, genres, and editorial or narrative formats in the mass media. 
In the 1960s, Jacques Derrida, for instance, further distinguished between speech and writing, by proposing that language be studied as a linguistic medium of communication in itself. Palaeography is therefore the discipline that studies the evolution of written scripts in language. Linguistics also deals with the social, cultural, historical and political factors that influence language, through which linguistic and language-based context is often determined. Research on language through the sub-branches of historical and evolutionary linguistics also focuses on how languages change and grow, particularly over an extended period of time. Language documentation combines anthropological inquiry with linguistic inquiry in order to describe languages. Lexicography involves the documentation of words that form a vocabulary. Such a documentation of a vocabulary from a particular language is usually compiled in a dictionary. Computational linguistics is concerned with the statistical or rule-based modeling of natural language from a computational perspective. Specific knowledge of language is applied by speakers during the act of translation and interpretation, as well as in language education – the teaching of a second or foreign language. Policy makers work with governments to implement new plans in education. Related areas of study include the disciplines of semiotics, literary criticism, translation, and speech-language pathology. Before the 20th century, the term philology, first attested in 1716, was commonly used to refer to the science of language.
2.
Noun phrase
–
A noun phrase or nominal phrase is a phrase which has a noun as its head word, or which performs the same grammatical function as such a phrase. Noun phrases are very common cross-linguistically, and they may be the most frequently occurring phrase type. Noun phrases often function as verb subjects and objects, as predicative expressions, and as the complements of prepositions. Noun phrases can be embedded inside each other. In some modern theories of grammar, noun phrases with determiners are analyzed as having the determiner rather than the noun as their head; they are then referred to as determiner phrases. Some examples of noun phrases are underlined in the sentences below; the head noun appears in bold. The election-year politics are annoying for many people. Almost every sentence contains at least one noun phrase. Current economic weakness may be a result of energy prices. Noun phrases can be identified by the possibility of pronoun substitution. This sentence contains two noun phrases. The subject noun phrase that is present in this sentence is long. Noun phrases can be embedded in other noun phrases. They can be embedded in them. A string of words that can be replaced by a single pronoun without rendering the sentence grammatically unacceptable is a noun phrase. As to whether the string must contain at least two words, see the following section. Traditionally, a phrase is understood to contain two or more words. The traditional progression in the size of syntactic units is word < phrase < clause. However, many schools of syntax – especially those that have been influenced by X-bar theory – make no such restriction. Here many single words are judged to be phrases, based on a desire for theory-internal consistency. A phrase is deemed to be a word or a combination of words that appears in a set syntactic position. On this understanding of phrases, the nouns and pronouns in bold in the following sentences are noun phrases: He saw someone. 
The words in bold are called phrases since they appear in the positions where multiple-word phrases can appear. This practice takes the syntactic position, rather than the words themselves, to be definitive of phrasehood. The word he, for instance, functions as a pronoun, but within the sentence it also functions as a noun phrase. The phrase structure grammars of the Chomskyan tradition are primary examples of theories that apply this understanding of phrases. Other grammars, for instance dependency grammars, are likely to reject this approach to phrases; for them, phrases must contain two or more words.
3.
Recursion
–
Recursion occurs when a thing is defined in terms of itself or of its type. Recursion is used in a variety of disciplines ranging from linguistics to logic. The most common application of recursion is in mathematics and computer science, where a function being defined is applied within its own definition. While this apparently defines an infinite number of instances, it is often done in such a way that no loop or infinite chain of references can occur. The ancestors of one's ancestors are also one's ancestors. The Fibonacci sequence is a classic example of recursion: Fib(0) = 0 as base case 1, Fib(1) = 1 as base case 2, and for all integers n > 1, Fib(n) = Fib(n − 1) + Fib(n − 2). Many mathematical axioms are based upon recursive rules. For example, the formal definition of the natural numbers by the Peano axioms can be described as: 0 is a natural number, and each natural number has a successor, which is also a natural number. By this base case and recursive rule, one can generate the set of all natural numbers. Recursively defined mathematical objects include functions, sets, and especially fractals. There are various more tongue-in-cheek definitions of recursion; see recursive humor. Recursion is the process a procedure goes through when one of the steps of the procedure involves invoking the procedure itself. A procedure that goes through recursion is said to be recursive. To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps based on a set of rules, while the running of a procedure involves actually following the rules and performing the steps. An analogy: a procedure is like a recipe; running a procedure is like actually preparing the meal. Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure. 
For instance, a recipe might refer to cooking vegetables, which is another procedure that in turn requires heating water, and so forth. A recursive procedure, by contrast, is one in which at least one of the steps calls for a new instance of the very same procedure; for this reason recursive definitions are very rare in everyday situations. An example could be the following procedure to find a way through a maze: proceed forward until reaching either an exit or a branching point. If the point reached is an exit, terminate. Otherwise try each branch in turn, using the procedure recursively; if every trial fails by reaching only dead ends, return on the path that led to this branching point. Whether this actually defines a terminating procedure depends on the nature of the maze. In any case, executing the procedure requires carefully recording all currently explored branching points, and which of their branches have already been exhaustively tried. Recursion in language can be understood in terms of a recursive definition of a syntactic category, such as a sentence. A sentence can have a structure in which what follows the verb is another sentence: Dorothy thinks witches are dangerous. So a sentence can be defined recursively as something with a structure that includes a noun phrase, a verb, and optionally another sentence.
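As a minimal sketch in Python, the Fibonacci definition above becomes a procedure one of whose steps invokes the procedure itself, with the two base cases stopping the chain of self-references:

```python
def fib(n):
    """Recursive Fibonacci, following the definition in the text:
    Fib(0) = 0 and Fib(1) = 1 are the base cases; for n > 1 the
    procedure invokes itself on smaller arguments."""
    if n == 0:
        return 0            # base case 1
    if n == 1:
        return 1            # base case 2
    return fib(n - 1) + fib(n - 2)

# The first few values of the sequence.
assert [fib(n) for n in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]
```

Because every recursive call receives a strictly smaller argument, the procedure terminates: there is no infinite chain of references.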
4.
Countable set
–
In mathematics, a countable set is a set with the same cardinality as some subset of the set of natural numbers. A countable set is either a finite set or a countably infinite set. Some authors use countable set to mean countably infinite alone. To avoid this ambiguity, the term at most countable may be used when finite sets are included, and countably infinite, enumerable, or denumerable otherwise. Georg Cantor introduced the term countable set, contrasting sets that are countable with those that are uncountable. Today, countable sets form the foundation of a branch of mathematics called discrete mathematics. A set S is countable if there exists an injective function f from S to the natural numbers N = {0, 1, 2, 3, ...}. If such an f can be found that is also surjective (and therefore bijective), then S is called countably infinite; in other words, a set is countably infinite if it has a one-to-one correspondence with the natural number set, N. As noted above, this terminology is not universal: some authors use countable to mean what is here called countably infinite, and do not include finite sets. Alternative formulations of the definition in terms of a bijective function or a surjective function can also be given. In 1874, in his first set theory article, Cantor proved that the set of real numbers is uncountable. In 1878, he used one-to-one correspondences to define and compare cardinalities. In 1883, he extended the natural numbers with his infinite ordinals, and used sets of ordinals to produce an infinity of sets having different infinite cardinalities. A set is a collection of elements, and may be described in many ways. One way is simply to list all of its elements; for example, the set consisting of the integers 3, 4, and 5 may be denoted {3, 4, 5}. This is only effective for small sets, however; for larger sets, listing every member becomes impractical. Even in this case, however, it is still possible in principle to list all the elements, because the set is finite. 
Some sets are infinite; these sets have more than n elements for any integer n. For example, the set of natural numbers has infinitely many elements, and we cannot use any natural number to give its size. Nonetheless, it turns out that infinite sets do have a well-defined notion of size. To understand what this means, we first examine what it does not mean. For example, there are infinitely many odd integers, infinitely many even integers, and infinitely many integers overall. However, it turns out that the number of even integers is the same as the number of integers overall. This is because we can arrange things such that, for every integer, there is a distinct corresponding even integer: n → 2n. However, not all infinite sets have the same cardinality.
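The pairing n → 2n can be sketched in Python; this is only a check on a finite prefix, but it illustrates why the evens are countably infinite even though they form a proper subset of the integers:

```python
def pair(n):
    """Match the natural number n with the even number 2n."""
    return 2 * n

# Injective: distinct naturals map to distinct evens.
images = [pair(n) for n in range(100)]
assert len(set(images)) == len(images)

# Onto the evens: every even number 2k in this range is hit by k.
assert all(pair(k) == e for k, e in zip(range(100), range(0, 200, 2)))
```

Every even number is reached and no even number is reached twice, which is exactly the one-to-one correspondence the definition of countably infinite requires.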
5.
Infinity
–
Infinity is an abstract concept describing something without any bound, or something larger than any number. In mathematics, infinity is often treated as a number, but it is not the same sort of number as the natural or real numbers. Georg Cantor formalized many ideas related to infinity and infinite sets during the late 19th century. In the theory he developed, there are infinite sets of different sizes. For example, the set of integers is countably infinite, while the set of real numbers is uncountable. Ancient cultures had various ideas about the nature of infinity. The ancient Indians and Greeks did not define infinity in precise formalism as does modern mathematics, and instead approached infinity as a philosophical concept. The earliest recorded idea of infinity comes from Anaximander, a pre-Socratic Greek philosopher who lived in Miletus; he used the word apeiron, which means infinite or limitless. However, the earliest attestable accounts of mathematical infinity come from Zeno of Elea, whom Aristotle called the inventor of the dialectic. He is best known for his paradoxes, described by Bertrand Russell as immeasurably subtle. Recent readings of the Archimedes Palimpsest have found that Archimedes had an understanding about actual infinite quantities. The Jain mathematical text Surya Prajnapti classifies all numbers into three sets: enumerable, innumerable, and infinite. On both physical and ontological grounds, a distinction was made between asaṃkhyāta and ananta, between rigidly bounded and loosely bounded infinities. European mathematicians started using infinite numbers in a systematic fashion in the 17th century. John Wallis first used the notation ∞ for such a number; Euler used the notation i for an infinite number, and exploited it by applying the binomial formula to the i-th power, and infinite products of i factors. In 1699 Isaac Newton wrote about equations with an infinite number of terms in his work De analysi per aequationes numero terminorum infinitas. 
The infinity symbol ∞ is a symbol representing the concept of infinity. The symbol is encoded in Unicode at U+221E ∞ INFINITY and in LaTeX as \infty. It was introduced in 1655 by John Wallis, and, since its introduction, has also been used outside mathematics in modern mysticism and literary symbology. Leibniz, one of the co-inventors of infinitesimal calculus, speculated widely about infinite numbers. In real analysis, the symbol ∞, called infinity, is used to denote an unbounded limit: x → ∞ means that x grows without bound, and x → −∞ means the value of x is decreasing without bound. The statement ∑_{i=0}^{∞} f(i) = ∞ means that the sum of the series diverges in the specific sense that the partial sums grow without bound. Infinity can be used not only to define a limit but as a value in the affinely extended real number system.
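A small sketch of these two uses of ∞, using Python's math.inf as a model of the unbounded value (an assumption of this sketch; the mathematical ∞ of real analysis is a limit notion, not a float):

```python
import math

# As a value: math.inf compares greater than every finite float,
# and -math.inf smaller, like the extended real number system.
assert math.inf > 1e308
assert -math.inf < -1e308

# "Partial sums grow without bound": for a divergent series such as
# 1 + 2 + 3 + ..., the partial sums eventually exceed any threshold.
threshold = 100
total, n = 0, 1
while total <= threshold:
    total += n        # partial sums 1, 3, 6, 10, ... are unbounded
    n += 1
```

The loop terminates for any finite threshold, which is precisely the sense in which the partial sums "grow without bound".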
6.
Truth value
–
In logic and mathematics, a truth value, sometimes called a logical value, is a value indicating the relation of a proposition to truth. In classical logic, with its intended semantics, the truth values are true and false. This set of two values is called the Boolean domain. The corresponding semantics of logical connectives are truth functions, whose values are expressed in the form of truth tables; logical biconditional becomes the equality binary relation, and negation becomes a bijection which permutes true and false. Conjunction and disjunction are dual with respect to negation, which is expressed by De Morgan's laws. Assigning values to propositional variables is referred to as valuation. In intuitionistic logic, and more generally in constructive mathematics, statements are assigned a truth value only if they can be given a constructive proof. It starts with a set of axioms, and a statement is true if one can build a proof of the statement from those axioms; a statement is false if one can deduce a contradiction from it. This leaves open the possibility of statements that have not yet been assigned a truth value. Unproven statements in intuitionistic logic are not given an intermediate truth value; indeed, one can prove that they have no third truth value. There are various ways of interpreting intuitionistic logic, including the Brouwer–Heyting–Kolmogorov interpretation; see also Intuitionistic Logic – Semantics. Multi-valued logics allow for more than two truth values, possibly containing some internal structure. For example, on the unit interval such structure is a total order. Not all logical systems are truth-valuational in the sense that logical connectives may be interpreted as truth functions, but even non-truth-valuational logics can associate values with logical formulae, as is done in algebraic semantics. The algebraic semantics of intuitionistic logic is given in terms of Heyting algebras, and intuitionistic type theory uses types in the place of truth values. 
Topos theory uses truth values in a special sense: the truth values of a topos are the global elements of the subobject classifier. Having truth values in this sense does not make a logic truth-valuational.
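The classical two-valued semantics described above can be sketched directly: each connective is a truth function over the Boolean domain, and De Morgan's laws can be checked by enumerating its four valuations.

```python
from itertools import product

# Connectives as truth functions over the Boolean domain {False, True}.
conj = lambda p, q: p and q     # conjunction
disj = lambda p, q: p or q      # disjunction
neg  = lambda p: not p          # negation permutes true and false

# A truth table: the value of the connective at every valuation.
table = {(p, q): conj(p, q) for p, q in product([False, True], repeat=2)}
assert table[(True, True)] is True and table[(True, False)] is False

# De Morgan's laws: conjunction and disjunction are dual under negation.
for p, q in product([False, True], repeat=2):
    assert neg(conj(p, q)) == disj(neg(p), neg(q))
    assert neg(disj(p, q)) == conj(neg(p), neg(q))
```

Exhaustive checking works here only because the Boolean domain is finite; a multi-valued logic on the unit interval would need an algebraic argument instead.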
7.
Function (mathematics)
–
In mathematics, a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x². The output of a function f corresponding to an input x is denoted by f(x). In this example, if the input is −3, then the output is 9, and likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. The input variable(s) are sometimes referred to as the argument(s) of the function. Functions of various kinds are the central objects of investigation in most fields of modern mathematics. There are many ways to describe or represent a function. Some functions may be defined by a formula or algorithm that tells how to compute the output for a given input. Others are given by a picture, called the graph of the function. In science, functions are sometimes defined by a table that gives the outputs for selected inputs. A function could be described implicitly, for example as the inverse to another function or as a solution of a differential equation. Sometimes the codomain is called the function's range, but more commonly the word range is used to mean, instead, specifically the set of outputs. For example, we could define a function using the rule f(x) = x² by saying that the domain and codomain are the real numbers. The image of this function is the set of non-negative real numbers. In analogy with arithmetic, it is possible to define addition, subtraction, and multiplication of functions. Another important operation defined on functions is function composition, where the output from one function becomes the input to another function. Linking each shape to its color is a function from X to Y: each shape is linked to a color; there is no shape that lacks a color and no shape that has more than one color. This function will be referred to as the color-of-the-shape function. The input to a function is called the argument and the output is called the value. 
The set of all permitted inputs to a function is called the domain of the function. Thus, the domain of the color-of-the-shape function is the set of the four shapes. The concept of a function does not require that every possible output is the value of some argument. A second example of a function is the following: the domain is chosen to be the set of natural numbers, and the codomain is the set of integers. The function associates to any natural number n the number 4 − n. For example, to 1 it associates 3 and to 10 it associates −6. A third example of a function has the set of polygons as domain and the set of natural numbers as codomain.
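The two examples above can be sketched in Python. The particular shapes and colors are illustrative stand-ins (the text does not list them); what matters is that every element of the domain is related to exactly one output:

```python
# A function as a finite table: each shape has exactly one color,
# so a dictionary from shapes to colors models it directly.
# Shape and color names here are hypothetical examples.
color_of_shape = {
    "triangle": "red",
    "square": "green",
    "circle": "blue",
    "star": "yellow",
}
assert len(color_of_shape) == 4          # domain: the four shapes

def f(n):
    """The second example: naturals to integers, n -> 4 - n."""
    return 4 - n

assert f(1) == 3      # to 1 it associates 3
assert f(10) == -6    # to 10 it associates -6
```

Note that f's outputs include negative integers, so its codomain (the integers) is strictly larger than its image, matching the remark that not every possible output need be the value of some argument.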
8.
Indicator function
–
In mathematics, an indicator function (or characteristic function) of a subset indicates membership of an element in that subset, taking the value 1 for members and 0 for non-members. It is usually denoted by a symbol 1 or I, sometimes in boldface or blackboard boldface, with a subscript describing the set. The indicator function of a subset A of a set X is a function 1_A : X → {0, 1} defined as 1_A(x) = 1 if x ∈ A, and 1_A(x) = 0 if x ∉ A. The Iverson bracket allows the equivalent notation [x ∈ A] to be used instead of 1_A(x). The function 1_A is sometimes denoted I_A, χ_A, K_A or even just A. The set of all indicator functions on X can be identified with P(X), the power set of X. This is a special case of the notation Y^X for the set of all functions f : X → Y. The notation 1_A is also used to denote the identity function of A, and the notation χ_A is also used to denote the characteristic function in convex analysis. A related concept in statistics is that of a dummy variable. The term characteristic function has an unrelated meaning in probability theory. The indicator or characteristic function of a subset A of some set X maps X to {0, 1}, and this mapping is surjective only when A is a non-empty proper subset of X. If A ≡ X, then 1_A = 1; by a similar argument, if A ≡ Ø then 1_A = 0. In the following, the dot represents multiplication, 1·1 = 1, 1·0 = 0 etc., + and − represent addition and subtraction, and ∩ and ∪ are intersection and union, respectively. More generally, suppose A₁, …, Aₙ is a collection of subsets of X. For any x ∈ X, ∏_{k ∈ I} (1 − 1_{A_k}(x)) is clearly a product of 0s and 1s. This product has the value 1 at precisely those x ∈ X that belong to none of the sets A_k and is 0 otherwise; that is, ∏_{k ∈ I} (1 − 1_{A_k}) = 1_{X − ⋃_k A_k} = 1 − 1_{⋃_k A_k}. This is one form of the principle of inclusion-exclusion. As suggested by the previous example, the indicator function is a useful notational device in combinatorics. This identity is used in a proof of Markov's inequality. In many cases, such as order theory, the inverse of the indicator function may be defined. This is commonly called the generalized Möbius function, as a generalization of the inverse of the indicator function in elementary number theory, the Möbius function. Given a probability space (Ω, F, P) with A ∈ F, the random variable 1_A : Ω → R is defined by 1_A(ω) = 1 if ω ∈ A, and 1_A(ω) = 0 otherwise. 
The mean, variance, and covariance of indicator random variables satisfy E(1_A) = P(A), Var(1_A) = P(A)(1 − P(A)), and Cov(1_A, 1_B) = P(A ∩ B) − P(A)P(B). Kurt Gödel described the representing function in his 1934 paper On Undecidable Propositions of Formal Mathematical Systems: there shall correspond to each class or relation R a representing function φ = 0 if R and φ = 1 if ¬R. Note that this convention is the reverse of the indicator convention above; for example, the product of representing functions φ₁·φ₂·…·φₙ is 0 precisely when at least one of the relations holds.
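The basic algebra of indicator functions can be sketched in Python; the sets X, A, and B below are illustrative choices, not from the text:

```python
def indicator(A):
    """Return the indicator function 1_A of the set A."""
    return lambda x: 1 if x in A else 0

X = set(range(10))            # the ambient set, chosen for illustration
A, B = {1, 2, 3}, {3, 4}
one_A, one_B = indicator(A), indicator(B)

# Pointwise product corresponds to intersection: 1_A * 1_B = 1_{A ∩ B}.
assert all(one_A(x) * one_B(x) == indicator(A & B)(x) for x in X)

# Inclusion-exclusion for two sets: 1_{A ∪ B} = 1_A + 1_B - 1_A * 1_B.
assert all(
    indicator(A | B)(x) == one_A(x) + one_B(x) - one_A(x) * one_B(x)
    for x in X
)

# With the uniform probability on X, the mean of 1_A is P(A) = |A| / |X|.
mean = sum(one_A(x) for x in X) / len(X)
assert mean == len(A) / len(X)
```

The last check is the finite, uniform case of E(1_A) = P(A).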
9.
Set (mathematics)
–
In mathematics, a set is a well-defined collection of distinct objects, considered as an object in its own right. For example, the numbers 2, 4, and 6 are distinct objects when considered separately, but when considered collectively they form a single set of size three, written {2, 4, 6}. Sets are one of the most fundamental concepts in mathematics. Developed at the end of the 19th century, set theory is now a ubiquitous part of mathematics. In mathematics education, elementary topics such as Venn diagrams are taught at a young age. The German word Menge, rendered as set in English, was coined by Bernard Bolzano in his work The Paradoxes of the Infinite. A set is a collection of distinct objects. The objects that make up a set can be anything: numbers, people, letters of the alphabet, other sets, and so on. Sets are conventionally denoted with capital letters. Sets A and B are equal if and only if they have precisely the same elements. Cantor's definition turned out to be inadequate; instead, the notion of a set is taken as an undefined primitive notion in axiomatic set theory. There are two ways of describing, or specifying the members of, a set. One way is by intensional definition, using a rule or semantic description: A is the set whose members are the first four positive integers; B is the set of colors of the French flag. The second way is by extension – that is, listing each member of the set. An extensional definition is denoted by enclosing the list of members in curly brackets: A = {1, 2, 3, 4} and B = {blue, white, red}. One often has the choice of specifying a set either intensionally or extensionally; the intensional and extensional definitions just given specify the same sets. There are two important points to note about sets. First, in an extensional definition, a set member can be listed two or more times, for example, {1, 2, 2}. However, per extensionality, two definitions of sets which differ only in that one of the definitions lists set members multiple times define, in fact, the same set. Hence, the set {1, 2, 2} is identical to the set {1, 2}. 
The second important point is that the order in which the elements of a set are listed is irrelevant. We can illustrate these two important points with an example: {2, 4, 6} = {6, 4, 2} = {2, 2, 4, 6}. For sets with many elements, the enumeration of members can be abbreviated. For instance, the set of the first thousand positive integers may be specified extensionally as {1, 2, 3, ..., 1000}, where the ellipsis indicates that the list continues in the obvious way. Ellipses may also be used where sets have infinitely many members; thus the set of positive even numbers can be written as {2, 4, 6, 8, ...}.
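Both points carry over directly to Python's built-in set type, which can serve as a quick sketch of extensional definitions:

```python
# Repeated members collapse and listing order is irrelevant.
A = {2, 4, 6}
assert A == {6, 4, 2}           # order does not matter
assert A == {2, 2, 4, 6, 6}     # duplicates define the same set
assert len(A) == 3              # the set still has size three

# A finite prefix of an abbreviated listing such as {2, 4, ..., 1000}.
evens_up_to_1000 = set(range(2, 1001, 2))
assert A <= evens_up_to_1000    # A is a subset of the larger set
```

Python sets cannot, of course, hold infinitely many members, so a listing like {2, 4, 6, 8, ...} can only be modeled up to some finite bound.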
10.
Predicate (grammar)
–
There are two competing notions of the predicate in theories of grammar. The competition between the two concepts has generated confusion concerning the use of the term predicate in theories of grammar. This article considers both of these notions. The second notion was derived from work in predicate calculus and is prominent in modern theories of syntax and grammar. In this approach, the predicate of a sentence mostly corresponds to the main verb and any auxiliaries that accompany the main verb. The predicate in traditional grammar is inspired by propositional logic of antiquity: a predicate is seen as a property that a subject has or is characterized by. A predicate is therefore an expression that can be true of something. Thus, the expression is moving is true of anything that is moving. This is also the understanding of predicates in English-language dictionaries. The predicate is one of the two main parts of a sentence. The predicate must contain a verb, and the verb requires or permits other elements to complete the predicate; these elements are objects, predicatives, and adjuncts:
She dances. – verb-only predicate
Ben reads the book. – verb-plus-direct-object predicate
Ben's mother, Felicity, gave me a present. – verb-plus-indirect-object-plus-direct-object predicate
She listened to the radio. – verb-plus-prepositional-object predicate
They elected him president. – verb-plus-object-plus-predicative-noun predicate
She met him in the park. – verb-plus-object-plus-adjunct predicate
She is in the park. – verb-plus-predicative-prepositional-phrase predicate
The predicate provides information about the subject, such as what the subject is or what the subject is doing. The relation between a subject and its predicate is sometimes called a nexus. A predicative nominal is a noun phrase, such as the king of England in George III is the king of England. 
The subject and predicative nominal must be connected by a linking verb. A predicative adjective is an adjective, such as attractive in Ivano is attractive, attractive being the predicative adjective. The subject and predicative adjective must also be connected by a copula. This traditional understanding of predicates has a concrete reflex in all phrase structure theories of syntax. These theories divide the generic declarative sentence into a noun phrase and a verb phrase, the subject NP and the predicate VP. Most modern theories of syntax and grammar take their inspiration for the theory of predicates from predicate calculus as associated with Gottlob Frege. This understanding sees predicates as relations or functions over arguments: the predicate serves either to assign a property to a single argument or to relate two or more arguments to each other.
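The Fregean view can be sketched as code: a predicate is a function from its argument(s) to a truth value. The predicate names and the small "facts" set below are illustrative inventions, using the example sentences from the text:

```python
# Hypothetical world knowledge for the three-place predicate below.
facts = {("Felicity", "me", "a present")}

def is_moving(thing):
    """One-place predicate: assigns a property to a single argument.
    'is moving' is true of anything that is moving."""
    return thing.get("moving", False)

def gave(giver, recipient, gift):
    """Three-place predicate: relates its three arguments to each other,
    as in 'Ben's mother, Felicity, gave me a present'."""
    return (giver, recipient, gift) in facts

assert is_moving({"moving": True})
assert gave("Felicity", "me", "a present")
assert not gave("Ben", "me", "a present")
```

The arity of the function mirrors the valency of the verb: a verb-only predicate takes one argument (the subject), while a ditransitive verb like gave relates three.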
11.
Noun
–
A noun is a word that functions as the name of some specific thing or set of things, such as living creatures, objects, places, actions, qualities, states of existence, or ideas. Linguistically, a noun is a member of a large, open part of speech whose members can occur as the main word in the subject of a clause. Lexical categories are defined in terms of the ways in which their members combine with other kinds of expressions. The syntactic rules for nouns differ from language to language. In English, nouns are words which can occur with articles and attributive adjectives. Word classes were described by Sanskrit grammarians from at least the 5th century BC. In Yāska's Nirukta, the noun is one of the four main categories of words defined. The Ancient Greek equivalent was ónoma, referred to by Plato in the Cratylus dialog; the term used in Latin grammar was nōmen. All of these terms for noun were also words meaning name. The English word noun is derived from the Latin term, through the Anglo-Norman noun. The word classes were defined partly by the grammatical forms that they take. In Sanskrit, Greek and Latin, for example, nouns are categorized by gender and inflected for case and number. Because adjectives share these three grammatical categories, adjectives are placed in the same class as nouns. Similarly, the Latin nōmen includes both nouns and adjectives, as originally did the English word noun, the two types being distinguished as nouns substantive and nouns adjective. Many European languages use a cognate of the word substantive as the basic term for noun. Nouns in the dictionaries of such languages are demarked by the abbreviation s. or sb. instead of n., which may be used for proper nouns or neuter nouns instead. In English, some authors use the word substantive to refer to a class that includes both nouns and noun phrases. It can also be used as a counterpart to attributive when distinguishing between a noun being used as the head of a noun phrase and a noun being used as a noun adjunct. 
For example, the word knee can be said to be used substantively in my knee hurts. Nouns have sometimes been defined in terms of the grammatical categories to which they are subject. Such definitions tend to be language-specific, since nouns do not have the same categories in all languages. Nouns are frequently defined, particularly in informal contexts, in terms of their semantic properties: nouns are described as words that refer to a person, place, thing, event, substance, quality, or quantity. However, this type of definition has been criticized by contemporary linguists as being uninformative.
12.
Adjective
–
In linguistics, an adjective is a describing word, the main syntactic role of which is to qualify a noun or noun phrase, giving more information about the object signified. Adjectives are one of the English parts of speech, although historically they were classed together with the nouns. Certain words that were traditionally considered to be adjectives, including the, this, my, etc., are today usually classed separately, as determiners. Adjective comes from Latin adjectīvum, additional, a calque of Ancient Greek. In the grammatical tradition of Latin and Greek, because adjectives were inflected for gender, number, and case like nouns, they were considered a subtype of noun. The words that are typically called nouns today were then called substantive nouns. The terms noun substantive and noun adjective were formerly used in English. In English, attributive adjectives usually precede their nouns in simple phrases, but often follow their nouns when the adjective is modified or qualified by a phrase acting as an adverb. For example, I saw three happy kids, and I saw three kids happy enough to jump up and down with glee. Predicative adjectives are linked via a copula or other linking mechanism to the noun or pronoun they modify; for example, happy is a predicative adjective in they are happy. Nominal adjectives act almost as nouns. One way this can happen is if a noun is elided and an attributive adjective is left behind. In the sentence, I read two books to them; he preferred the sad book, but she preferred the happy, happy is a nominal adjective, short for happy one or happy book. Another way this can happen is in phrases like out with the old, in with the new, where the old means that which is old or all that is old, and similarly with the new. In such cases, the adjective functions either as a mass noun or as a plural count noun, as in The meek shall inherit the Earth. 
Adjectives feature as a part of speech in most languages. In some languages, the words that serve the semantic function of adjectives may be categorized together with some other class, such as nouns or verbs; such an analysis is possible for the grammar of Standard Chinese. Different languages do not always use adjectives in exactly the same situations. For example, where English uses the adjectival to be hungry, Dutch uses honger hebben and French uses avoir faim (literally to have hunger); similarly, where Hebrew uses the adjective זקוק zaqūq, English uses the verb to need. In languages which have adjectives as a word class, they are usually an open class. However, Bantu languages are known for having only a small closed class of adjectives. Many languages, including English, distinguish between adjectives, which qualify nouns and pronouns, and adverbs, which mainly modify verbs, adjectives, and other adverbs. Not all languages have exactly this distinction, and many languages, including English, have words that can function as both. For example, in English fast is an adjective in a fast car but an adverb in he drove fast. In Dutch and German, adjectives and adverbs are usually identical in form and many grammarians do not make the distinction, but patterns of inflection can suggest a difference: Eine kluge neue Idee (a clever new idea).
13.
Identity function
–
In mathematics, an identity function, also called an identity relation, identity map, or identity transformation, is a function that always returns the same value that was used as its argument. In equations, the function is given by f(x) = x. Formally, if M is a set, the identity function f on M is defined to be the function with domain and codomain M which satisfies f(x) = x for all elements x in M. In other words, the value f(x) in M is always the same element x of M that was used as input. The identity function on M is clearly an injective function as well as a surjective function, so it is also bijective. The identity function f on M is often denoted by idM. In set theory, where a function is defined as a particular kind of binary relation, the identity function is given by the identity relation, or diagonal, of M. If f : M → N is any function, then we have f ∘ idM = f = idN ∘ f. In particular, idM is the identity element of the monoid of all functions from M to M. Since the identity element of a monoid is unique, one can alternatively define the identity function on M to be this identity element. Such a definition generalizes to the concept of an identity morphism in category theory. The identity function is a linear operator when applied to vector spaces. The identity function on the positive integers is a completely multiplicative function. In an n-dimensional vector space the identity function is represented by the identity matrix In. In a metric space the identity is trivially an isometry; an object without any symmetry has as its symmetry group the trivial group containing only this isometry. In a topological space, the identity function is always continuous.
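The law f ∘ idM = f = idN ∘ f above can be illustrated directly. The following is a minimal sketch (the names identity, compose, and double are illustrative, not part of any standard library):

```python
def identity(x):
    """The identity function: f(x) = x for every argument."""
    return x

def compose(f, g):
    """Return the composition f ∘ g, i.e. the function x -> f(g(x))."""
    return lambda x: f(g(x))

def double(x):
    """An arbitrary function used to test the identity laws."""
    return 2 * x

# f ∘ id and id ∘ f both behave exactly like f itself:
assert compose(double, identity)(21) == double(21) == 42
assert compose(identity, double)(21) == double(21) == 42
```

Because composing with identity changes nothing, identity plays the role of the identity element in the monoid of functions from a set to itself.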
14.
Monotonic function
–
In mathematics, a monotonic function is a function between ordered sets that preserves or reverses the given order. This concept first arose in calculus, and was later generalized to the more abstract setting of order theory. In calculus, a function f defined on a subset of the real numbers with real values is called monotonic if it is either entirely non-decreasing or entirely non-increasing. That is, as per Fig. 1, a function that increases monotonically does not have to increase exclusively; it simply must never decrease. A function is called monotonically increasing if for all x and y such that x ≤ y one has f(x) ≤ f(y), so f preserves the order. Likewise, a function is called monotonically decreasing if, whenever x ≤ y, then f(x) ≥ f(y), so it reverses the order. If the order ≤ in the definition of monotonicity is replaced by the strict order <, then one obtains a stronger requirement; a function with this property is called strictly increasing. Again, by inverting the order symbol, one finds the corresponding concept of strictly decreasing. The terms non-decreasing and non-increasing should not be confused with the negative qualifications not decreasing and not increasing. For example, the function of figure 3 first falls, then rises, then falls again; it is therefore not decreasing and not increasing, but it is neither non-decreasing nor non-increasing. The term monotonic transformation can also cause some confusion, because it refers to a transformation by a strictly increasing function; notably, this is the case in economics with respect to the ordinal properties of a utility function being preserved across a monotonic transform. A function f is said to be absolutely monotonic over an interval if the derivatives of all orders of f are all nonnegative, or all nonpositive, at all points on the interval. A monotonic function f can only have jump discontinuities, and it can only have countably many discontinuities in its domain; the discontinuities, however, do not necessarily consist of isolated points. These properties are the reason why monotonic functions are useful in technical work in analysis.
In addition, this result cannot be improved to countable: see the Cantor function. If f is a monotonic function defined on an interval, then f is Riemann integrable. An important application of monotonic functions is in probability theory. If X is a random variable, its cumulative distribution function F_X(x) = Prob(X ≤ x) is a monotonically increasing function. A function is unimodal if it is monotonically increasing up to some point (the mode) and monotonically decreasing thereafter. When f is a strictly monotonic function, then f is injective on its domain, and if T is the range of f, then there is an inverse function on T for f. In topology, a map f : X → Y is said to be monotone if each of its fibers is connected, i.e. for each element y in Y the set f−1(y) is connected. In functional analysis, a subset G of X × X∗ is said to be a monotone set if for every pair (x1, w1) and (x2, w2) in G, ⟨w1 − w2, x1 − x2⟩ ≥ 0. G is said to be maximal monotone if it is maximal among all monotone sets in the sense of set inclusion.
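The calculus definitions above can be checked mechanically on finite samples of a function. The sketch below tests the non-strict and strict conditions on a sequence of sampled values (the function names are illustrative):

```python
def is_monotone_increasing(values):
    """f(x) <= f(y) whenever x <= y: order is preserved (non-decreasing)."""
    return all(a <= b for a, b in zip(values, values[1:]))

def is_strictly_increasing(values):
    """The stronger requirement with strict order: f(x) < f(y) when x < y."""
    return all(a < b for a, b in zip(values, values[1:]))

samples = [1, 2, 2, 5]                      # a plateau at 2 is allowed
assert is_monotone_increasing(samples)       # monotone: it never decreases
assert not is_strictly_increasing(samples)   # but 2 < 2 fails, so not strict
```

Note how the plateau illustrates the text's point: a monotonically increasing function does not have to increase exclusively, it merely must not decrease.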
15.
Entailment
–
Logical consequence (entailment) is a fundamental concept in logic, which describes the relationship between statements that holds true when one statement logically follows from one or more other statements. A valid logical argument is one in which the conclusion is entailed by the premises. The philosophical analysis of logical consequence involves the questions: In what sense does a conclusion follow from its premises? And what does it mean for a conclusion to be a consequence of premises? All of philosophical logic is meant to provide accounts of the nature of logical consequence and the nature of logical truth. Logical consequence is necessary and formal, and is explicated by way of formal proof and models of interpretation. A sentence is said to be a logical consequence of a set of sentences, for a given language, if and only if the sentence must be true whenever every sentence in the set is true. The most widely prevailing view on how best to account for logical consequence is to appeal to formality; this is to say that whether statements follow from one another logically depends on the structure or logical form of the statements, without regard to the contents of that form. Syntactic accounts of logical consequence rely on schemes using inference rules. For instance, we can express the logical form of a valid argument as: All A are B. All C are A. Therefore, all C are B. This argument is formally valid, because every instance of an argument constructed using this scheme is valid. This is in contrast to an argument like "Fred is Mike's brother's son; therefore, Fred is Mike's nephew", which is valid in virtue of the meanings of its words rather than its form. If you know that Q follows logically from P, no information about the possible interpretations of P or Q will affect that knowledge; our knowledge that Q is a consequence of P cannot be influenced by empirical knowledge. Deductively valid arguments can be known to be so without recourse to experience. However, formality alone does not guarantee that logical consequence is not influenced by empirical knowledge, so the a priori property of logical consequence is considered to be independent of formality.
The two prevailing techniques for providing accounts of logical consequence involve expressing the concept in terms of proofs and in terms of models. The study of syntactic consequence is called proof theory, whereas the study of semantic consequence is called model theory. A formula A is a syntactic consequence within some formal system FS of a set Γ of formulas if there is a formal proof in FS of A from the set Γ; this is written Γ ⊢FS A. Syntactic consequence does not depend on any interpretation of the formal system. A formula A is a semantic consequence of Γ, written Γ ⊨ A, if and only if there is no interpretation under which all members of Γ are true and A is false; or, in other words, the set of the interpretations that make all members of Γ true is a subset of the set of the interpretations that make A true. Modal accounts of logical consequence are variations on the following basic idea: Γ ⊢ A is true if and only if it is necessary that if all of the elements of Γ are true, then A is true. Alternatively, Γ ⊢ A is true if and only if it is impossible for all of the elements of Γ to be true and A false. Such accounts are called modal because they appeal to the modal notions of logical necessity and logical possibility. Consider the modal account in terms of the argument given as an example above: the conclusion is a logical consequence of the premises because we cannot imagine a possible world where all frogs are green, Kermit is a frog, and Kermit is not green.
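For propositional logic, the semantic definition of Γ ⊨ A can be checked by brute force: enumerate every truth assignment and verify that none makes all premises true while falsifying the conclusion. A minimal sketch, assuming formulas are represented as Python functions from an assignment to a Boolean (all names here are illustrative):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Gamma |= A: no assignment makes all premises true and the conclusion false."""
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a countermodel
    return True

P = lambda env: env["P"]
Q = lambda env: env["Q"]
P_implies_Q = lambda env: (not env["P"]) or env["Q"]

# Modus ponens is semantically valid: {P, P -> Q} |= Q ...
assert entails([P, P_implies_Q], Q, ["P", "Q"])
# ... but the conditional alone does not entail its consequent.
assert not entails([P_implies_Q], Q, ["P", "Q"])
```

This mirrors the subset formulation above: the assignments satisfying all of Γ must be a subset of those satisfying A.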
16.
Intersection (set theory)
–
In mathematics, the intersection A ∩ B of two sets A and B is the set that contains all elements of A that also belong to B, but no other elements. For an explanation of the symbols used in this article, refer to the table of mathematical symbols. The intersection of A and B is written A ∩ B; formally, A ∩ B = {x : x ∈ A and x ∈ B}, that is, x ∈ A ∩ B if and only if x ∈ A and x ∈ B. For example, the intersection of the sets {1, 2, 3} and {2, 3, 4} is {2, 3}; the number 9 is not in the intersection of the set of prime numbers and the set of odd numbers, because 9 is odd but not prime. More generally, one can take the intersection of several sets at once; the intersection of A, B, C, and D, for example, is A ∩ B ∩ C ∩ D. Intersection is an associative operation; thus, A ∩ (B ∩ C) = (A ∩ B) ∩ C. Additionally, intersection is commutative; thus A ∩ B = B ∩ A. Inside a universe U one may define the complement Ac of A to be the set of all elements of U not in A. We say that A intersects B at an element x if x belongs to both A and B; A intersects B if their intersection is inhabited. We say that A and B are disjoint if A does not intersect B; in plain language, they have no elements in common. A and B are disjoint if their intersection is empty, denoted A ∩ B = ∅. For example, the sets {1, 2} and {3, 4} are disjoint, while the set of even numbers intersects the set of multiples of 3 at 0, 6, 12, 18, and other numbers. The most general notion is the intersection of a nonempty collection of sets. If M is a nonempty set whose elements are themselves sets, then x is an element of the intersection of M if and only if, for every element A of M, x is an element of A. The notation for this last concept can vary considerably: set theorists will sometimes write ⋂M, while others will instead write ⋂A∈M A. The latter notation can be generalized to ⋂i∈I Ai, which refers to the intersection of the collection {Ai : i ∈ I}. Here I is a nonempty index set, and Ai is a set for every i in I. In the case that the index set I is the set of natural numbers, notation analogous to that of an infinite series may be seen. When formatting is difficult, this can also be written A1 ∩ A2 ∩ A3 ∩ ⋯, even though strictly speaking this denotes A1 ∩ (A2 ∩ (A3 ∩ ⋯)).
Finally, let us note that whenever the symbol ∩ is placed before other symbols instead of between them, it should be of a larger size. Note that in the previous section we excluded the case where M was the empty set.
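The definitions above translate directly into Python's built-in set operations. A short sketch (the sets chosen here match the examples in the text):

```python
from functools import reduce

A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

assert A & B == {2, 3}               # A ∩ B: elements common to both
assert A & B == B & A                # commutativity
assert A & (B & C) == (A & B) & C    # associativity
assert {1, 2}.isdisjoint({3, 4})     # disjoint sets: intersection is empty

# Intersection of a nonempty collection M of sets: x is in ⋂M iff
# x belongs to every set in M.
M = [A, B, C]
assert reduce(set.intersection, M) == {3}
```

Note that reduce requires M to be nonempty, which mirrors the exclusion of the empty collection in the text: without a fixed universe, the intersection of no sets at all is undefined.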
17.
Cardinality
–
In mathematics, the cardinality of a set is a measure of the number of elements of the set. For example, a set such as A = {1, 2, 3} contains 3 elements, and therefore has cardinality 3. There are two approaches to cardinality: one which compares sets directly using bijections and injections, and another which uses cardinal numbers. The cardinality of a set is also called its size, when no confusion with other notions of size is possible. The cardinality of a set A is usually denoted |A|, with a vertical bar on each side; this is the same notation as for absolute value. Alternatively, the cardinality of a set A may be denoted by n(A), card(A), or #A. While the cardinality of a finite set is just the number of its elements, extending the notion to infinite sets usually starts with defining the notion of comparison of arbitrary sets. Two sets A and B have the same cardinality if there exists a bijection, that is, a one-to-one correspondence, from A to B; such sets are said to be equipotent, equipollent, or equinumerous. This relationship can also be denoted A ≈ B or A ~ B. For example, the set E = {0, 2, 4, 6, …} of non-negative even numbers has the same cardinality as the set N = {0, 1, 2, 3, …} of natural numbers, since the function f(n) = 2n is a bijection from N to E. A has cardinality less than or equal to the cardinality of B if there exists an injective function from A into B. A has cardinality strictly less than the cardinality of B if there is an injective function, but no bijection, from A to B. If |A| ≤ |B| and |B| ≤ |A|, then |A| = |B| (the Schröder–Bernstein theorem); the axiom of choice is equivalent to the statement that |A| ≤ |B| or |B| ≤ |A| for every A and B. Above, the cardinality of a set was not defined as an object itself; however, such an object can be defined as follows. The relation of having the same cardinality is called equinumerosity, and this is an equivalence relation on the class of all sets. The equivalence class of a set A under this relation then consists of all sets which have the same cardinality as A. There are two ways to define the cardinality of a set. The first is that the cardinality of a set A is defined as its equivalence class under equinumerosity.
The second is that a representative set is designated for each equivalence class; the most common choice is the initial ordinal in that class. This is usually taken as the definition of cardinal number in axiomatic set theory. Assuming the axiom of choice, the cardinalities of the infinite sets are denoted ℵ0 < ℵ1 < ℵ2 < …. For each ordinal α, ℵα+1 is the least cardinal number greater than ℵα.
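The bijection f(n) = 2n from N to E can be exhibited on any finite prefix of the natural numbers. A small sketch (the finite range stands in for the infinite sets, which cannot be enumerated in full):

```python
def f(n):
    """The bijection from the text: pairs n with the even number 2n."""
    return 2 * n

naturals = list(range(10))             # a finite prefix of N = {0, 1, 2, ...}
evens = [f(n) for n in naturals]       # the matching prefix of E = {0, 2, 4, ...}

assert evens == [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
# f is injective: distinct inputs always give distinct outputs,
# so no element of E is paired with two naturals.
assert len(set(evens)) == len(naturals)
```

Every natural number is paired with exactly one even number and vice versa, which is precisely what it means for N and E to be equinumerous despite E being a proper subset of N.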
18.
Adverb
–
An adverb is a word that modifies a verb, adjective, another adverb, determiner, noun phrase, clause, or sentence. Adverbs typically express manner, place, time, frequency, degree, or level of certainty. This function is called the adverbial function, and may be realised by single words or by multi-word expressions. Adverbs are traditionally regarded as one of the parts of speech. The term implies that the principal function of adverbs is to act as modifiers of verbs or verb phrases; an adverb used in this way may provide information about the manner, place, time, frequency, or certainty of the action. The major exception is the function of modifier of nouns, which is performed instead by adjectives. When the function of an adverb is performed by an expression consisting of more than one word, it is called an adverbial phrase or adverbial clause. In English, adverbs of manner are often formed by adding -ly to adjectives, and other languages often have similar methods for deriving adverbs from adjectives. Many other adverbs, however, are not related to adjectives in this way; they may be derived from other words or phrases, or may be single morphemes. Examples of such adverbs in English include here, there, together, yesterday, aboard, very, and almost. Where the meaning permits, adverbs may undergo comparison, taking comparative and superlative forms. In English this is usually done by adding more and most before the adverb, although there are a few adverbs that take inflected forms, such as well, for which better and best are used. For more information about the formation and use of adverbs in English, see the article on English grammar; for other languages, see § In specific languages below, and the articles on individual languages and their grammars. Adverbs are considered a part of speech in traditional English grammar; however, modern grammarians recognize that words traditionally grouped together as adverbs serve a number of different functions.
Some describe adverbs as a catch-all category that includes all words that do not belong to one of the other parts of speech. A more logical approach to dividing words into classes relies on recognizing which words can be used in a certain context. For example, the only type of word that can be inserted in the following template to form a grammatical sentence is a noun: The _____ is red. When this approach is taken, it is seen that adverbs fall into a number of different categories. For example, some adverbs can be used to modify an entire sentence, whereas others cannot; and even when a sentential adverb has other functions, the meaning is not the same. Words like very afford another example: we can say Perry is very fast, but not Perry very won the race. These words can modify adjectives but not verbs. On the other hand, there are words like here and there that cannot modify adjectives: we can say The sock looks good there, but not It is a there beautiful sock. The fact that many adverbs can serve more than one of these functions can confuse the issue; however, the distinction can be useful, especially when considering adverbs like naturally that have different meanings in their different functions.
19.
Richard Montague
–
Richard Merritt Montague was an American mathematician and philosopher. Montague wrote on the foundations of logic and set theory, as would befit a student of Tarski; among his results, he proved that ZFC cannot be finitely axiomatized. He pioneered an approach to natural language semantics which became known as Montague grammar. This approach to language has been influential among certain computational linguists, perhaps more so than among more traditional philosophers of language. Montague was also an accomplished organist and a real estate investor. He died violently in his own home, and the crime is unsolved to this day. Anita Feferman and Solomon Feferman note that he usually went to bars cruising and bringing people home with him, and that on the day he was murdered he had brought several people home for some kind of soirée. See also: American philosophy; List of American philosophers. References: Feferman, Anita, and Solomon Feferman, 2004. Kalish, Donald, and Montague, Richard, 1964. Kalish, Donald, Montague, Richard, and Mar, Gary, 1980. Formal Philosophy: Selected Papers of Richard Montague, edited and with an introduction by Richmond H. Thomason. Partee, Barbara H., 2006, "Richard Montague", in Brown, Keith, ed., Encyclopedia of Language and Linguistics, Vol. 8, 2nd ed., Oxford: Elsevier; includes a bibliography of the secondary literature on Montague and his eponymous grammar. External links: Richard Montague at the Mathematics Genealogy Project; "That's Just Semantics"; Montague Semantics at the Stanford Encyclopedia of Philosophy.
20.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before then. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN identification format was devised in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966; it was devised in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The United Kingdom continued to use the 9-digit SBN code until 1974; the ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces; separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
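The SBN-to-ISBN conversion described above is easy to verify in code, using the standard modulus-11 check-digit rule for ISBN-10 (each of the first nine digits is weighted 10 down to 2, and the check digit makes the weighted sum divisible by 11, with X standing for 10). A minimal sketch; the function name is illustrative:

```python
def isbn10_check_digit(first9):
    """Check digit for the first 9 digits of an ISBN-10 ('X' stands for 10)."""
    total = sum(int(d) * (10 - i) for i, d in enumerate(first9))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

sbn = "340013818"          # the SBN from the example in the text
isbn10 = "0" + sbn         # prefix the digit 0 to convert SBN -> ISBN-10

# The check digit of 0-340-01381-8 is still 8: prefixing 0 adds nothing
# to the weighted sum, so no re-calculation is needed.
assert isbn10_check_digit(isbn10[:9]) == "8"
```

This also shows why the text says the check digit "does not need to be re-calculated": the new leading 0 contributes 0 × 10 to the weighted sum, leaving the modulus-11 result unchanged.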