1.
Colorless green ideas sleep furiously
–
The sentence was originally used in Chomsky's 1955 thesis The Logical Structure of Linguistic Theory and in his 1956 paper "Three Models for the Description of Language". Although the sentence is grammatically correct, no obvious understandable meaning can be derived from it, and it was used to show the inadequacy of the then-popular probabilistic models of grammar. The full passage reads: "(1) Colorless green ideas sleep furiously. (2) Furiously sleep ideas green colorless. ... It is fair to assume that neither sentence (1) nor (2) has ever occurred in an English discourse. Hence, in any statistical model for grammaticalness, these sentences will be ruled out on identical grounds as equally 'remote' from English. Yet (1), though nonsensical, is grammatical, while (2) is not." Although the meaninglessness of the sentence is often considered fundamental to Chomsky's point, Chomsky was relying only on the sentences' having never been spoken before; the example served as a counterexample to the idea that the human speech engine is based on statistical models, such as a Markov chain. The sentence can nevertheless be partially interpreted through polysemy: both "green" and "colorless" have figurative meanings, which allow "colorless" to be interpreted as "nondescript" and "green" as "immature", so the sentence can be construed as "nondescript immature ideas have violent nightmares", a phrase with less oblique semantics. In particular, the phrase can have legitimate meaning if "green" is understood to mean "newly formed" and "sleep" is taken figuratively, since, unlike animals or humans, ideas do not truly sleep. Writers have attempted to provide the sentence with meaning through context; the first such attempt was written by the Chinese linguist Yuen Ren Chao. In 1985, a literary competition was held at Stanford University in which contestants were invited to make Chomsky's sentence meaningful using not more than 100 words of prose or 14 lines of verse. An example entry from the competition, from C. M. Street, reads in part: "... and it is a marvel to me that under this cover they are labouring unseen at such a rate within to give us the sudden awesome beauty of spring flowering bulbs.
While winter reigns the earth reposes but these colourless green ideas sleep furiously." A statistical model of this kind defines a similarity metric, whereby sentences which are more like those within a corpus in certain respects are assigned higher values than sentences less alike. However, it is not clear that such a model assigns every ungrammatical sentence a lower probability than every grammatical sentence; that is, "colorless green ideas sleep furiously" may still be statistically more "remote" from English than some ungrammatical sentences. To this it may be argued that no current theory of grammar is capable of distinguishing all grammatical English sentences from ungrammatical ones. The pioneering French syntactician Lucien Tesnière came up with the French-language sentence "Le silence vertébral indispose la voile licite" ("The vertebral silence indisposes the licit sail"). The game of exquisite corpse is a method for generating nonsense sentences; it was named after the first sentence generated in the game in 1925, "Le cadavre exquis boira le vin nouveau" ("The exquisite corpse will drink the new wine"). The humor of the game lies in the generation of sentences which are grammatical but meaningless, and the game also tends to generate humorous double entendres. Some philosophers who were not logical positivists made the same point: Bertrand Russell used the sentence "Quadruplicity drinks procrastination" in his "An Inquiry into Meaning and Truth" (1940) to make a similar point, as did W. V. Quine.
2.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope and definition. Mathematicians seek out patterns and use them to formulate new conjectures, and they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said: "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, mathematics for its own sake, without having any application in mind; there is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a systematic study of mathematics in its own right. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today, and the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω (manthano), "to learn", while the modern Greek equivalent is μαθαίνω (mathaino); in Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.
3.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the discipline's fields can be divided into a variety of theoretical and practical subfields. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on the challenges of implementing computation; human–computer interaction, for instance, considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity, and algorithms for performing computations have likewise existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist for, among other reasons, documenting the binary number system. Charles Babbage started developing his Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched-card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information. In 1937, Howard Aiken persuaded IBM to develop his giant programmable calculator, the Harvard Mark I; when the machine was finished, some hailed it as "Babbage's dream come true".
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to the study of computation in general. Computer science began to be established as a distinct academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953; the first computer science degree program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right. The now well-known IBM brand formed part of the computer science revolution during this time: IBM released the IBM 704 and later the IBM 709 computers. Still, working with an IBM computer was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has since seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
4.
Linguistics
–
Linguistics is the scientific study of language, and involves an analysis of language form, language meaning, and language in context. Linguists traditionally analyse human language by observing the interplay between sound and meaning. Phonetics is the study of speech and non-speech sounds and delves into their acoustic and articulatory properties, while the study of semantics typically concerns itself with truth conditions. Grammar is a system of rules which governs the production and use of utterances in a given language. These rules apply to sound as well as meaning, and include componential sub-sets of rules, such as those pertaining to phonology, morphology, and syntax. Modern theories that deal with the principles of grammar are largely based within Noam Chomsky's school of generative grammar. In the early 20th century, Ferdinand de Saussure distinguished between the notions of langue and parole in his formulation of structural linguistics: according to him, parole is the specific utterance of speech, whereas langue refers to an abstract phenomenon that theoretically defines the principles of a language. This distinction resembles the one made by Noam Chomsky between competence and performance in his theory of transformative, or generative, grammar: according to Chomsky, competence is an innate capacity and potential for language, while performance is the specific way in which it is used by individuals and groups. The study of parole is the domain of sociolinguistics, the sub-discipline that comprises the study of a system of linguistic facets within a certain speech community. Discourse analysis further examines the structure of texts and conversations emerging out of a speech community's usage of language, and stylistics involves the study of written, signed, or spoken discourse through varying speech communities, genres, and editorial or narrative formats in the mass media.
In the 1960s, Jacques Derrida further distinguished between speech and writing by proposing that written language be studied as a linguistic medium of communication in itself; palaeography is the discipline that studies the evolution of written scripts. Linguistics also deals with the social, cultural, historical and political factors that influence language. Research in the sub-branches of historical and evolutionary linguistics focuses on how languages change and grow, particularly over an extended period of time. Language documentation combines anthropological inquiry with linguistic inquiry in order to describe languages, while lexicography involves the documentation of the words that form a vocabulary; such a documentation of the vocabulary of a particular language is usually compiled in a dictionary. Computational linguistics is concerned with the statistical or rule-based modeling of natural language from a computational perspective. Specific knowledge of language is applied by speakers during the act of translation and interpretation, as well as in language education – the teaching of a second or foreign language. Policy makers work with governments to implement new language plans in education. Related areas of study include the disciplines of semiotics, literary criticism, translation, and speech-language pathology. Before the 20th century, the term philology, first attested in 1716, was commonly used to refer to the science of language.
5.
Set (mathematics)
–
In mathematics, a set is a well-defined collection of distinct objects, considered as an object in its own right. For example, the numbers 2, 4, and 6 are distinct objects when considered separately, but when considered collectively they form a single set of size three, written {2, 4, 6}. Sets are one of the most fundamental concepts in mathematics. Developed at the end of the 19th century, set theory is now a ubiquitous part of mathematics, and in mathematics education elementary topics such as Venn diagrams are taught at a young age. The German word Menge, rendered as "set" in English, was coined by Bernard Bolzano in his work The Paradoxes of the Infinite. A set is a collection of distinct objects; the objects that make up a set can be anything: numbers, people, letters of the alphabet, even other sets. Sets are conventionally denoted with capital letters. Sets A and B are equal if and only if they have precisely the same elements. Cantor's original definition turned out to be inadequate; instead, the notion of a set is taken as a primitive notion in axiomatic set theory. There are two ways of describing, or specifying the members of, a set. One way is by intensional definition, using a rule or semantic description: A is the set whose members are the first four positive integers; B is the set of colors of the French flag. The second way is by extension, that is, listing each member of the set. An extensional definition is denoted by enclosing the list of members in curly brackets: C = {4, 2, 1, 3}; D = {blue, white, red}. One often has the choice of specifying a set either intensionally or extensionally; in the examples above, for instance, A = C and B = D. There are two important points to note about sets. First, in an extensional definition, a set member can be listed two or more times; however, per extensionality, two definitions of sets which differ only in that one of the definitions lists set members multiple times define, in fact, the same set. Hence, for example, the set {11, 6, 6} is identical to the set {11, 6}.
The second important point is that the order in which the elements of a set are listed is irrelevant. We can illustrate these two points with an example: {6, 11} = {11, 6} = {11, 6, 6, 11}. For sets with many elements, the enumeration of members can be abbreviated: for instance, the set of the first thousand positive integers may be specified extensionally as {1, 2, 3, ..., 1000}, where the ellipsis indicates that the list continues in the obvious way. Ellipses may also be used where sets have infinitely many members; thus the set of positive even numbers can be written as {2, 4, 6, 8, ...}.
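These points translate directly into any language with a built-in set type. The sketch below uses Python's set; the concrete members chosen for A through D mirror the examples in the text, and the numeric values are assumptions for illustration.

```python
# Intensional definition: a rule picking out the members.
A = {n for n in range(1, 5)}            # the first four positive integers
B = {"blue", "white", "red"}            # the colors of the French flag

# Extensional definition: listing each member.
C = {4, 2, 1, 3}
D = {"red", "blue", "white"}

# Equality depends only on membership, not on how the set was specified:
assert A == C and B == D

# Listing a member twice, or listing members in a different order,
# defines the same set:
assert {11, 6, 6} == {6, 11} == {11, 6, 6, 11}
```

Python's set literal already enforces both points from the text: duplicates collapse and order is discarded the moment the set is constructed.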
6.
String (computer science)
–
In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable; the latter may allow its elements to be mutated and the length changed. A string is generally considered a data type and is often implemented as an array of bytes that stores a sequence of elements, typically characters, using some character encoding. "String" may also denote more general arrays or other sequence data types and structures. When a string appears literally in source code, it is known as a string literal or an anonymous string. Strings are also studied in formal languages, which are used in mathematical logic and theoretical computer science. Let Σ be a non-empty finite set of symbols, called the alphabet; no assumption is made about the nature of the symbols. A string over Σ is any finite sequence of symbols from Σ. For example, if Σ = {0, 1}, then 01011 is a string over Σ. The length of a string s is the number of symbols in s and can be any non-negative integer; it is often denoted |s|. The empty string is the unique string over Σ of length 0, often denoted ε. The set of all strings over Σ of length n is denoted Σn. For example, if Σ = {0, 1}, then Σ2 = {00, 01, 10, 11}. Note that Σ0 = {ε} for any alphabet Σ. The set of all strings over Σ of any length is the Kleene closure of Σ and is denoted Σ*. In terms of Σn, Σ* = ⋃_{n ∈ ℕ ∪ {0}} Σn. Although the set Σ* itself is countably infinite, each element of Σ* is a string of finite length. A set of strings over Σ is called a formal language over Σ. For example, if Σ = {0, 1}, the set of strings with an even number of zeros is a formal language over Σ. Concatenation is an important binary operation on Σ*: for any two strings s and t in Σ*, their concatenation, denoted st, is defined as the sequence of symbols in s followed by the sequence of symbols in t. For example, if Σ is the set of lowercase letters, s = bear, and t = hug, then st = bearhug. String concatenation is an associative, but non-commutative, operation.
The empty string ε serves as the identity element: εs = sε = s for any string s. Therefore, the set Σ* and the concatenation operation form a monoid, the free monoid generated by Σ. In addition, the length function defines a monoid homomorphism from Σ* to the non-negative integers, since |st| = |s| + |t|. A string s is said to be a substring or factor of t if there exist (possibly empty) strings u and v such that t = usv.
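The monoid properties above can be checked directly with Python strings, which behave exactly like strings over an alphabet of Unicode characters. The words chosen below are the ones from the text; the helper strings u and v are assumptions for illustration.

```python
s, t = "bear", "hug"

# Concatenation is associative but not commutative:
assert (s + t) + s == s + (t + s)
assert s + t != t + s               # "bearhug" vs "hugbear"

# The empty string is the identity element of the monoid:
empty = ""
assert empty + s == s + empty == s

# len is a monoid homomorphism to the non-negative integers:
assert len(s + t) == len(s) + len(t)

# t is a substring (factor) of usv for any strings u and v:
u, v = "big", "s"
assert t in u + t + v               # "hug" occurs inside "bighugs"
```

The `in` operator on Python strings tests exactly the substring relation defined in the text: it succeeds precisely when some decomposition t = usv exists.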
7.
Symbol (formal)
–
A logical symbol is a fundamental concept in logic; tokens of a symbol may be marks or a configuration of marks which form a particular pattern. In logic, symbols are used to illustrate ideas, but symbols of a formal language need not be symbols of anything: for instance, there are logical constants which do not refer to any idea. Symbols of a formal language must be capable of being specified without any reference to any interpretation of them. A symbol or string of symbols may comprise a well-formed formula if it is consistent with the formation rules of the language, and in a formal system a symbol may be used as a token in formal operations. The set of symbols in a formal language is referred to as an alphabet. A formal symbol as used in first-order logic may be a variable, a constant, a function symbol, or a predicate symbol. Formal symbols are thought of as purely syntactic structures, composed into larger structures using a formal grammar. The move to view units in natural language as formal symbols was initiated by Noam Chomsky; the generative grammar model looked upon syntax as autonomous from semantics. "On this point I differ from a number of philosophers, but agree, I believe, with Chomsky," wrote Richard Montague, and this is the philosophical premise underlying Montague grammar. See also: List of mathematical symbols; List of logic symbols.
8.
Well-formed formula
–
In mathematical logic, a well-formed formula, abbreviated wff, often simply formula, is a finite sequence of symbols from a given alphabet that is part of a formal language. A formal language can be identified with the set of formulas in the language. A formula is a syntactic object that can be given a semantic meaning by means of an interpretation. Two key uses of formulas are in propositional logic and predicate logics such as first-order logic. In those contexts, a formula is a string of symbols φ for which it makes sense to ask "is φ true?", once any free variables in φ have been instantiated. In formal logic, proofs can be represented by sequences of formulas with certain properties. Although the term "formula" may be used for written marks, it is more precisely understood as the sequence of symbols being expressed, with the marks being a token instance of the formula; thus the same formula may be written more than once. Formulas are given meanings by interpretations: for example, in a propositional formula, each propositional variable may be interpreted as a concrete proposition, so that the overall formula expresses a relationship between these propositions. A formula need not be interpreted, however, to be considered solely as a formula. The formulas of propositional calculus, also called propositional formulas, are expressions such as (A ∧ (B ∨ C)). Their definition begins with the choice of a set V of propositional variables. The alphabet consists of the letters in V along with the symbols for the propositional connectives and parentheses, and the formulas are certain expressions over this alphabet. The formulas are inductively defined as follows: each propositional variable is, on its own, a formula; if φ is a formula, then ¬φ is a formula; and if φ and ψ are formulas and • is any binary connective, then (φ • ψ) is a formula. Here • could be any of the usual operators ∨, ∧, →, or ↔.
A sequence of symbols such as p)) is not a formula, because it does not conform to the grammar. A complex formula may be difficult to read, owing to, for example, the proliferation of parentheses. To alleviate this, precedence rules are assumed among the operators, making some parentheses unnecessary: for example, assuming the precedence (from most binding to least binding) 1. ¬, 2. →, 3. ∧, 4. ∨, the formula (((p → q) ∧ (r → s)) ∨ (¬q ∧ ¬s)) may be abbreviated as p → q ∧ r → s ∨ ¬q ∧ ¬s. This is, however, only a convention for writing formulas; if the precedence were instead assumed, for example, to be left-right associative in the order 1. ¬, 2. ∧, 3. ∨, 4. →, then the same abbreviated string would be parsed as a different formula. The definition of a formula in first-order logic QS is relative to the signature of the theory at hand; this signature specifies the constant symbols, relation symbols, and function symbols of the theory. The definition of a formula comes in several parts: first, the set of terms is defined recursively; terms, informally, are expressions that represent objects from the domain of discourse.
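The inductive definition above translates directly into a recursive membership test. The sketch below is a minimal checker for fully parenthesized propositional formulas; the choice of single lowercase letters for variables and of ASCII stand-ins for the connectives (~ for ¬, & for ∧, | for ∨, > for →, = for ↔) is an assumption made only so the example is typeable.

```python
CONNECTIVES = {"&", "|", ">", "="}   # ASCII stand-ins for ∧, ∨, →, ↔

def is_formula(s: str) -> bool:
    """Check the three inductive clauses: variable, ¬φ, or (φ • ψ)."""
    if len(s) == 1:
        return s.islower()                       # a propositional variable
    if s.startswith("~"):
        return is_formula(s[1:])                 # ¬φ is a formula if φ is
    if s.startswith("(") and s.endswith(")"):
        depth = 0
        for i, ch in enumerate(s):               # find the top-level connective
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif depth == 1 and ch in CONNECTIVES:
                return is_formula(s[1:i]) and is_formula(s[i + 1:-1])
        return False
    return False

assert is_formula("(p>(q&r))")       # well-formed
assert not is_formula("p))")         # the malformed sequence from the text
```

Because the grammar here requires full parenthesization, the checker needs no precedence rules; dropping parentheses, as discussed above, would require building precedence into the parser instead.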
9.
Syntax
–
In linguistics, syntax is the set of rules, principles, and processes that govern the structure of sentences in a given language, specifically word order. The term syntax is also used to refer to the study of such principles and processes; the goal of many syntacticians is to discover the syntactic rules common to all languages. In mathematics, syntax refers to the rules governing the notation of mathematical systems, such as the formal languages used in logic. The word syntax comes from Ancient Greek σύνταξις "coordination", which consists of σύν syn, "together", and τάξις táxis, "an ordering". A basic feature of a language's syntax is the sequence in which the subject (S), verb (V), and object (O) usually appear in sentences. Over 85% of languages usually place the subject first, either in the sequence SVO or the sequence SOV; the other possible sequences are VSO, VOS, OVS, and OSV, the last three of which are rare. In the West, the school of thought that came to be known as traditional grammar began with the work of Dionysius Thrax. For centuries, work in syntax was dominated by a framework known as grammaire générale. This system took as its basic premise the assumption that language is a direct reflection of thought processes and therefore there is a single, most natural way to express a thought. However, it eventually became apparent that there was no such thing as the most natural way to express a thought. The Port-Royal grammar modeled the study of syntax upon that of logic: syntactic categories were identified with logical ones, and all sentences were analyzed in terms of "Subject – Copula – Predicate". Initially, this view was adopted even by the early comparative linguists such as Franz Bopp. The central role of syntax within theoretical linguistics became clear only in the 20th century. There are a number of theoretical approaches to the discipline of syntax. One school of thought, founded in the works of Derek Bickerton, sees syntax as a branch of biology; other linguists take a more Platonistic view, since they regard syntax to be the study of an abstract formal system.
Yet others consider syntax a taxonomical device to reach broad generalizations across languages. The hypothesis of generative grammar is that language is a structure of the human mind, and the goal of generative grammar is to make a complete model of this inner language. This model could be used to describe all human language and to predict the grammaticality of any given utterance. This approach to language was pioneered by Noam Chomsky, and most generative theories assume that syntax is based upon the constituent structure of sentences. Generative grammars are among the theories that focus primarily on the form of a sentence rather than its communicative function. In categorial grammar, by contrast, the category of an intransitive verb is a complex category notated as (NP\S) instead of V; NP\S is read as "a category that searches to the left for an NP and outputs a sentence". The category of a transitive verb is defined as an element that requires two NPs (its subject and its direct object) to form a sentence.
10.
Programming language
–
A programming language is a formal computer language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs to control the behavior of a machine or to express algorithms. From the early 1800s, "programs" were used to direct the behavior of machines such as Jacquard looms. Thousands of different programming languages have been created, mainly in the computer field. Many programming languages require computation to be specified in an imperative form, while other languages use other forms of program specification such as the declarative form. The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document, while other languages have a dominant implementation that is treated as a reference; some languages have both, with the language defined by a standard and extensions taken from the dominant implementation being common. A programming language is a notation for writing programs, which are specifications of a computation or algorithm. Some, but not all, authors restrict the term "programming language" to those languages that can express all possible algorithms. For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine, and it is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language. In most practical contexts, a programming language involves a computer. Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. As for expressive power, the theory of computation classifies languages by the computations they are capable of expressing; all Turing complete languages can implement the same set of algorithms.
ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete. Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages. Programming languages may, however, share the syntax with markup languages if a computational semantics is defined; XSLT, for example, is a Turing complete XML dialect. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset. The term "computer language" is sometimes used interchangeably with "programming language".
11.
Computational complexity theory
–
Computational complexity theory classifies computational problems according to their inherent difficulty. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity) and the number of gates in a circuit (used in circuit complexity). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. Closely related fields in computer science are analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance; the input string for a computational problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved, whereas an instance of the problem is a rather concrete utterance, which can serve as the input. For example, consider the problem of primality testing: the instance is a number, and the solution is "yes" if the number is prime. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances. When considering computational problems, a problem instance is a string over an alphabet; usually, the alphabet is taken to be the binary alphabet {0, 1}, so that, as in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. One tries to keep the discussion independent of the choice of encoding, and this can be achieved by ensuring that different representations can be transformed into each other efficiently.
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration; if the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input. An example of a decision problem is the following: the input is an arbitrary graph, and the problem consists in deciding whether the given graph is connected.
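The accept/reject framing can be sketched with an even simpler decision problem: primality of a number encoded as a binary string. The trial-division test and the choice of binary encoding below are assumptions made for illustration; any efficiently convertible encoding would define the same formal language.

```python
def is_prime(n: int) -> bool:
    """Trial division: test every possible nontrivial factor."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def decide(instance: str) -> str:
    """The problem instance is a number written in binary; the decider
    accepts (returns 'yes') exactly when that number is prime."""
    n = int(instance, 2)
    return "yes" if is_prime(n) else "no"

assert decide("111") == "yes"    # 7 is prime: the string is accepted
assert decide("1000") == "no"    # 8 is composite: the string is rejected
```

The set of accepted strings — here, the binary representations of primes — is exactly the formal language associated with the decision problem.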
12.
Decision problem
–
In computability theory and computational complexity theory, a decision problem is a question in some formal system that can be posed as a yes-no question, dependent on the input values. For example, "given two numbers x and y, does x evenly divide y?" is a decision problem; the answer can be either yes or no, and depends upon the values of x and y. A method for solving a decision problem, given in the form of an algorithm, is called a decision procedure for that problem. A decision procedure for the problem "given two numbers x and y, does x evenly divide y?" would give the steps for determining whether x evenly divides y. One such algorithm is long division, taught to school children: if the remainder is zero, the answer produced is yes, otherwise it is no. A decision problem which can be solved by an algorithm, such as this example, is called decidable. The field of computational complexity categorizes decidable decision problems by how difficult they are to solve; "difficult", in this sense, is described in terms of the computational resources needed by the most efficient algorithm for a certain problem. The field of computability theory, meanwhile, categorizes undecidable decision problems by Turing degree. A decision problem is any arbitrary yes-or-no question on a set of inputs; because of this, it is traditional to define the decision problem equivalently as the set of inputs for which the problem returns yes. These inputs can be natural numbers, but may also be values of some other kind, such as strings over the binary alphabet or over some other finite set of symbols. The subset of strings for which the problem returns yes is a formal language. Alternatively, using an encoding such as Gödel numbering, any string can be encoded as a natural number, via which a decision problem can be defined as a subset of the natural numbers. A classic example of a decidable decision problem is the set of prime numbers: it is possible to decide whether a given natural number is prime by testing every possible nontrivial factor.
Although much more efficient methods of primality testing are known, the existence of any such method is enough to establish decidability. A decision problem A is called decidable or effectively solvable if A is a recursive set; a problem is called partially decidable, semidecidable, solvable, or provable if A is a recursively enumerable set. Problems that are not decidable are called undecidable; the halting problem is an important undecidable decision problem (for more examples, see the list of undecidable problems). Decision problems can be ordered according to many-one reducibility and related feasible reductions such as polynomial-time reductions.
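The divisibility question used as the running example above can be written out as an explicit decision procedure. The sketch below reduces long division to its core, repeated subtraction, and assumes positive x and non-negative y for simplicity.

```python
def divides(x: int, y: int) -> str:
    """Decision procedure for 'given two numbers x and y, does x evenly
    divide y?'. Assumes x >= 1 and y >= 0."""
    remainder = y
    while remainder >= x:       # repeatedly subtract x, as in long division
        remainder -= x
    # If the remainder is zero the answer is yes, otherwise it is no.
    return "yes" if remainder == 0 else "no"

assert divides(3, 12) == "yes"
assert divides(5, 12) == "no"
```

Because this procedure halts on every valid input with a yes-or-no answer, the divisibility problem is decidable; equivalently, the set of pairs (x, y) with x dividing y is a recursive set.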
13.
Complexity class
–
In computational complexity theory, a complexity class is a set of problems of related resource-based complexity. A typical complexity class has a definition of the form: the set of problems that can be solved by an abstract machine M using O(f(n)) of resource R. Complexity classes are concerned with the rate of growth of the requirement in resources as the input size n increases. This is an abstract measurement, and does not give time or space requirements in terms of seconds or bytes. The O is read as "order of". For the purposes of computational complexity theory, some of the details of the function f can be ignored; for instance, many possible polynomials can be grouped together as a class. The resource in question can be either time (essentially the number of operations on an abstract machine) or memory space. The simplest complexity classes are defined by the following factors: the type of computational problem (the most commonly used problems are decision problems, but complexity classes can also be defined based on function problems, counting problems, optimization problems, and promise problems), and the resources that are being bounded together with the bounds. These two properties are usually stated together, such as polynomial time, logarithmic space, or constant depth. Many complexity classes can also be characterized in terms of the logic needed to express them. Bounding the computation time above by some concrete function f often yields complexity classes that depend on the chosen machine model. For instance, a language that can be solved in a given time on a multi-tape Turing machine may require more time on a single-tape machine. If we allow polynomial variations in running time, the Cobham–Edmonds thesis states that the complexities in any two reasonable and general models of computation are polynomially related. This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP. The Blum axioms can be used to define complexity classes without referring to a concrete computational model.
Many important complexity classes can be defined by bounding the time or space used by the algorithm; some important complexity classes of decision problems are defined in this manner. It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch's theorem. #P is an important complexity class of counting problems, and classes like IP and AM are defined using interactive proof systems. ALL is the class of all decision problems. Many complexity classes are also defined using the concept of a reduction.
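The growth-rate idea behind complexity classes can be illustrated with a small step-counting sketch (a toy example of my own, not from the article): complexity classes abstract away machine details and constants, keeping only how the step count grows with the input size n.

```python
# Count abstract "steps" for two toy algorithms, illustrating the
# O(n) versus O(n^2) growth that complexity classes group problems by.

def linear_scan_steps(n):
    """Steps to scan n items once: grows as O(n)."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def pairwise_compare_steps(n):
    """Steps to compare all pairs of n items: grows as O(n^2)."""
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

# Doubling the input doubles the linear cost but quadruples the quadratic one.
assert linear_scan_steps(200) == 2 * linear_scan_steps(100)
assert pairwise_compare_steps(200) == 4 * pairwise_compare_steps(100)
```

Only this rate of growth, not the seconds or bytes on any particular machine, is what a complexity class measures.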
14.
Logic
–
Logic, originally meaning "the word" or "what is spoken", is generally held to consist of the systematic study of the form of arguments. A valid argument is one where there is a relation of logical support between the assumptions of the argument and its conclusion. Historically, logic has been studied in philosophy and mathematics; more recently it has also been studied in computer science, linguistics, and psychology. The concept of logical form is central to logic: the validity of an argument is determined by its logical form. Traditional Aristotelian syllogistic logic and modern symbolic logic are examples of formal logic. Informal logic is the study of natural language arguments, and the study of fallacies is an important branch of informal logic, since much informal argument is not, strictly speaking, deductive. On some conceptions of logic, formal logic is the study of inference with purely formal content. An inference possesses a purely formal content if it can be expressed as an application of a wholly abstract rule, that is, a rule that is not about any particular thing or property. The works of Aristotle contain the earliest known study of logic, and modern formal logic follows and expands on Aristotle. In many definitions of logic, logical inference and inference with purely formal content are the same. This does not render the notion of informal logic vacuous, because no formal logic captures all of the nuances of natural language. Symbolic logic is the study of symbolic abstractions that capture the formal features of logical inference; it is divided into two main branches, propositional logic and predicate logic. Mathematical logic is an extension of symbolic logic into other areas, in particular to the study of model theory, proof theory, and set theory. Logic is generally considered formal when it analyzes and represents the form of any valid argument type. The form of an argument is displayed by representing its sentences in the formal grammar and symbolism of a logical language to make its content usable in formal inference.
Simply put, formalising means translating English sentences into the language of logic; this is called showing the logical form of the argument. It is necessary because indicative sentences of ordinary language show a considerable variety of form. Second, certain parts of the sentence must be replaced with schematic letters. Thus, for example, the expression "all Ps are Qs" shows the logical form common to the sentences "all men are mortals", "all cats are carnivores", "all Greeks are philosophers", and so on. The schema can further be condensed into the formula A, where the letter A indicates the judgement "all - are -". The importance of form was recognised from ancient times.
15.
Formalism (philosophy of mathematics)
–
In playing this game, one can prove that the Pythagorean theorem is valid because the string representing the Pythagorean theorem can be constructed using only the stated rules. According to formalism, the truths expressed in logic and mathematics are not about numbers, sets, triangles, or any other subject matter; in fact, they are not "about" anything at all. They are syntactic forms whose shapes and locations have no meaning unless they are given an interpretation. Formalism is associated with rigorous method. In common use, a formalism means the out-turn of the effort towards formalisation of a given limited area; in other words, matters can be formally discussed once captured in a formal system. Complete formalisation is in the domain of computer science. Formalism stresses axiomatic proofs using theorems, and is specifically associated with David Hilbert. A formalist is an individual who belongs to the school of formalism, a certain mathematical-philosophical doctrine descending from Hilbert. Formalists are relatively tolerant and inviting to new approaches to logic, non-standard number systems, new set theories, and the like: the more games they study, the better. However, in all three of these examples, motivation is drawn from existing mathematical or philosophical concerns; the games are usually not arbitrary. Because of their close connection with computer science, such games are also advocated by mathematical intuitionists and constructivists in the computability tradition. Another version of formalism is known as deductivism. In deductivism, the Pythagorean theorem is not an absolute truth but a relative one: if meaning is assigned to the strings in such a way that the rules of the game become true, then one has to accept the theorem. Under deductivism, the same view is held to be true for all other statements of formal logic and mathematics. Thus, formalism need not mean that these deductive sciences are nothing more than meaningless symbolic games; it is usually hoped that there exists some interpretation in which the rules of the game hold. Taking the deductivist view allows the working mathematician to suspend judgement on the philosophical questions.
Many formalists would say that in practice, the axiom systems to be studied are suggested by the demands of the particular science. A major early proponent of formalism was David Hilbert, whose program was intended to be a complete and consistent axiomatization of all of mathematics. Hilbert aimed to show the consistency of mathematical systems from the assumption that finitary arithmetic was consistent. The way that Hilbert tried to show that a system was consistent was by formalizing it using a particular language. In order to formalize a system, you must first choose a language in which you can express and perform operations within that system. This language must include five components: it must include variables such as x.
16.
Gottlob Frege
–
Friedrich Ludwig Gottlob Frege was a German philosopher, logician, and mathematician. Considered a major figure in mathematics, he is responsible for the development of modern logic, and he is also understood by many to be the father of analytic philosophy, where he concentrated on the philosophy of language and mathematics. Though largely ignored during his lifetime, Giuseppe Peano and Bertrand Russell introduced his work to later generations of logicians. Frege was born in 1848 in Wismar, Mecklenburg-Schwerin. His father, Carl Alexander Frege, was the co-founder and headmaster of a girls' high school until his death. In childhood, Frege encountered philosophies that would guide his future scientific career. Frege studied at a gymnasium in Wismar and graduated in 1869; his teacher Gustav Adolf Leo Sachse, who was a poet, played the most important role in determining Frege's future scientific career. Frege matriculated at the University of Jena in the spring of 1869 as a citizen of the North German Confederation. In the four semesters of his studies he attended approximately twenty courses of lectures; his most important teacher was Ernst Karl Abbe. Abbe was more than a teacher to Frege: he was a trusted friend, and after Frege's graduation they came into closer correspondence. His other notable university teachers were Christian Philipp Karl Snell and Hermann Karl Julius Traugott Schaeffer. Frege married Margarete Katharina Sophia Anna Lieseberg on 14 March 1887. Though his education and early mathematical work focused primarily on geometry, Frege's work soon turned to logic. His Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens (Halle a/S, 1879) broke new ground, including a rigorous treatment of the ideas of functions and variables. Previous logic had dealt with logical constants such as "and" and "or".
Frege's conceptual notation, however, can represent such inferences. One of Frege's stated purposes was to isolate genuinely logical principles of inference, so that in the proper representation of mathematical proof one would at no point appeal to intuition. If there was an intuitive element, it was to be isolated and represented separately as an axiom; from there on, the proof was to be purely logical and without gaps. Already in the 1879 Begriffsschrift, important preliminary theorems, for example a generalized form of the law of trichotomy, were derived within what Frege understood to be pure logic. This idea was formulated in non-symbolic terms in his The Foundations of Arithmetic. Later, in his Basic Laws of Arithmetic, Frege attempted to derive, by use of his symbolism, the laws of arithmetic from axioms he asserted as logical. Most of these axioms were carried over from his Begriffsschrift, though not without some significant changes. The one truly new principle was one he called Basic Law V; the crucial case of the law may be formulated in modern notation as follows. Let {x | Fx} denote the extension of the predicate Fx, i.e., the set of all Fs, and similarly for Gx. Then Basic Law V says that the predicates Fx and Gx have the same extension iff ∀x(Fx ↔ Gx).
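In modern notation, the biconditional just stated can be written out as follows (using set-builder notation for extensions, a common contemporary rendering rather than Frege's own symbolism):

```latex
\{\, x \mid Fx \,\} = \{\, x \mid Gx \,\}
\;\leftrightarrow\;
\forall x \, \bigl( Fx \leftrightarrow Gx \bigr)
```

Reading left to right: the extensions of F and G are identical exactly when F and G hold of precisely the same objects.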
17.
Begriffsschrift
–
Begriffsschrift is a book on logic by Gottlob Frege, published in 1879, and also the formal system set out in that book. Begriffsschrift is usually translated as "concept writing" or "concept notation"; the full title of the book identifies it as a formula language, modeled on that of arithmetic, of pure thought. Frege's motivation for developing his formal approach to logic resembled Leibniz's motivation for his calculus ratiocinator. Frege went on to employ his logical calculus in his research on the foundations of mathematics, carried out over the next quarter century. The calculus contains the first appearance of quantified variables, and is essentially classical bivalent second-order logic with identity. It is bivalent in that sentences or formulas denote either True or False; second order because it includes relation variables in addition to object variables and allows quantification over both. The modifier "with identity" specifies that the language includes the identity relation. Frege presents his calculus using idiosyncratic two-dimensional notation: connectives and quantifiers are written using lines connecting formulas, rather than the symbols ¬, ∧, and ∀ in use today. For example, the judgement that B materially implies A is drawn so as to signify that the third of the four possibilities (A denied, B affirmed) does not obtain, but one of the three others does. So if we negate that stroke, it means the third possibility is valid, i.e. we negate A and affirm B. Frege declared nine of his propositions to be axioms, and justified them by arguing informally that, given their intended meanings, they express self-evident truths: some govern material implication, some negation, and some identity. One expresses Leibniz's indiscernibility of identicals, and another asserts that identity is a reflexive relation. One of his rules of inference is much harder to articulate precisely than the two preceding rules, and Frege invokes it in ways that are not obviously legitimate. The main results of the third chapter, titled "Parts from a general series theory", concern what is now called the ancestral of a relation.
Frege applied the results from the Begriffsschrift, including those on the ancestral of a relation, in his later work The Foundations of Arithmetic. Thus, if we take xRy to be the relation y = x + 1, then 0R*y is the predicate "y is a natural number". One result says that if x and y are natural numbers, then one of the following must hold: x < y, x = y, or y < x; this is the so-called law of trichotomy. How the Begriffsschrift was reviewed in the German mathematical literature has been the subject of careful recent study; some reviewers, especially Ernst Schröder, were on the whole favorable. Some vestige of Frege's notation survives in the turnstile symbol ⊢, derived from his Urteilsstrich (judgement stroke) │. Frege used these symbols in the Begriffsschrift in the unified form ├─ for declaring that a proposition is true; in his later Grundgesetze he revises slightly his interpretation of the ├─ symbol. In the Begriffsschrift, the Definitionsdoppelstrich │├─ indicates that a proposition is a definition. Furthermore, the negation sign ¬ can be read as a combination of the horizontal Inhaltsstrich (content stroke) with a vertical negation stroke. This negation symbol was reintroduced by Arend Heyting in 1930 to distinguish intuitionistic from classical negation, and it also appears in Gerhard Gentzen's doctoral dissertation. In the Tractatus Logico-Philosophicus, Ludwig Wittgenstein pays homage to Frege by employing the term Begriffsschrift as a synonym for logical formalism. Frege's 1892 essay "On Sense and Reference" recants some of the conclusions of the Begriffsschrift about identity. See also: ancestral relation, Frege's propositional calculus, Gottlob Frege.
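The ancestral R* of a relation R, mentioned above, corresponds to what is now called the transitive closure. A small sketch over a finite domain (the function names are illustrative, not Frege's notation):

```python
# Compute the ancestral (transitive closure) of a finite relation,
# given as a set of (a, b) pairs; a naive fixed-point iteration.

def ancestral(pairs):
    """Return the transitive closure R* of a finite relation R."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Take xRy to be the relation y = x + 1 on an initial segment of the
# naturals; then 0 R* y holds exactly for the successors of 0.
R = {(x, x + 1) for x in range(5)}
Rstar = ancestral(R)
assert (0, 3) in Rstar          # 3 is reachable from 0 by repeated +1
assert (3, 1) not in Rstar      # the ancestral is not symmetric
```

With xRy as "y = x + 1", membership of (0, y) in R* captures the predicate "y is a (positive) natural number", just as the text describes.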
18.
Axel Thue
–
Axel Thue was a Norwegian mathematician, known for highly original work in diophantine approximation and combinatorics. He stated in 1914 the so-called word problem for semigroups, or Thue problem. His only known PhD student was Thoralf Skolem. The esoteric programming language Thue is named after him. External links: Axel Thue at the MacTutor History of Mathematics archive, University of St Andrews; Axel Thue at the Mathematics Genealogy Project. An Axel Thue private archive exists at the NTNU University Library Dorabiblioteket.
19.
Alphabet
–
An alphabet is a standard set of letters that is used to write one or more languages, based upon the general principle that the letters represent phonemes of the spoken language. This is in contrast to other types of writing systems, such as syllabaries and logographies. The Proto-Canaanite script, later known as the Phoenician alphabet, is the first fully phonemic script; thus the Phoenician alphabet is considered to be the first alphabet. The Phoenician alphabet is the ancestor of most modern alphabets, including Arabic, Greek, Latin, Cyrillic, Hebrew, and possibly Brahmic. Under a terminological distinction promoted by Peter T. Daniels, an alphabet is a script that represents both vowels and consonants as letters equally; in this narrow sense of the word the first true alphabet was the Greek alphabet. In other alphabetic scripts such as the original Phoenician, Hebrew, or Arabic, letters predominantly or exclusively represent consonants; such a script is also called an abjad. A third type, called abugida or alphasyllabary, is one where vowels are shown by diacritics or modifications of consonantal letters, as in Devanagari. The Khmer alphabet is the longest, with 74 letters. There are dozens of alphabets in use today, the most popular being the Latin alphabet. Many languages use modified forms of the Latin alphabet, with additional letters formed using diacritical marks. While most alphabets have letters composed of lines, there are also exceptions, such as the alphabets used in Braille. Alphabets are usually associated with a standard ordering of their letters. This makes them useful for purposes of collation, specifically by allowing words to be sorted in alphabetical order; it also means that their letters can be used as an alternative method of numbering ordered items, in such contexts as numbered lists and number placements.
The English word alphabet came into Middle English from the Late Latin word alphabetum; the underlying Greek word was made from the first two letters, alpha and beta. The names for the Greek letters came from the first two letters of the Phoenician alphabet: aleph, which also meant ox, and bet, which also meant house. In the alphabet song in English, the term ABCs is used instead of the word alphabet. Knowing one's ABCs, in general, can be used as a metaphor for knowing the basics about anything. The history of the alphabet started in ancient Egypt. These glyphs were used as pronunciation guides for logograms and to write grammatical inflections. Based on letter appearances and names, the Proto-Sinaitic script is believed to be based on Egyptian hieroglyphs. This script had no characters representing vowels, although originally it probably was a syllabary from which unneeded symbols were discarded. An alphabetic cuneiform script with 30 signs, including three that indicate the following vowel, was invented in Ugarit before the 15th century BC. This script was not used after the destruction of Ugarit. The Proto-Sinaitic script eventually developed into the Phoenician alphabet, which is conventionally called Proto-Canaanite before ca. 1050 BC. The oldest text in Phoenician script is an inscription on the sarcophagus of King Ahiram. This script is the parent script of all western alphabets.
20.
ASCII
–
ASCII, abbreviated from American Standard Code for Information Interchange, is a character encoding standard. ASCII codes represent text in computers, telecommunications equipment, and other devices; most modern character-encoding schemes are based on ASCII, although they support many additional characters. ASCII was developed from telegraph code; its first commercial use was as a seven-bit teleprinter code promoted by Bell data services. Work on the ASCII standard began on October 6, 1960. The first edition of the standard was published in 1963, underwent a major revision during 1967, and experienced its most recent update during 1986. Compared to earlier telegraph codes, the proposed Bell code and ASCII were both ordered for more convenient sorting of lists, and added features for devices other than teleprinters. Originally based on the English alphabet, ASCII encodes 128 specified characters into seven-bit integers as shown by the ASCII chart above. The characters encoded are numbers 0 to 9, lowercase letters a to z, uppercase letters A to Z, basic punctuation symbols, control codes that originated with Teletype machines, and the space character. For example, lowercase j would become binary 1101010 and decimal 106. ASCII includes definitions for 128 characters: 33 are non-printing control characters that affect how text and space are processed, and 95 are printable characters. The IANA encourages use of the name US-ASCII for Internet uses of ASCII. The ASA became the United States of America Standards Institute and ultimately the American National Standards Institute. There was some debate at the time whether there should be more control characters rather than the lowercase alphabet. The X3.2.4 task group voted its approval for the change to ASCII at its May 1963 meeting. The X3 committee made other changes as well, including other new characters, renaming some control characters, and moving or removing others. ASCII was subsequently updated as USAS X3.4-1967, then USAS X3.4-1968, and ANSI X3.4-1977. The committee also proposed a 9-track standard for magnetic tape, and attempted to deal with some punched card formats. The X3.2 subcommittee designed ASCII based on the earlier teleprinter encoding systems. Like other character encodings, ASCII specifies a correspondence between digital bit patterns and character symbols. This allows digital devices to communicate with each other and to process, store, and communicate character-oriented information. Before ASCII was developed, the encodings in use included 26 alphabetic characters and 10 numerical digits. Codes such as ITA2 were in turn based on the 5-bit telegraph code Émile Baudot invented in 1870 and patented in 1874. The committee debated the possibility of a shift function, which would allow more than 64 codes to be represented by a six-bit code. In a shifted code, some character codes determine choices between options for the following character codes. It allows compact encoding, but is less reliable for data transmission. The standards committee decided against shifting, and so ASCII required at least a seven-bit code. The committee considered an eight-bit code, since eight bits would allow two four-bit patterns to efficiently encode two digits with binary-coded decimal.
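The seven-bit code values described above, including the lowercase-j example, can be inspected directly; a minimal sketch using Python's built-in ord and format:

```python
# Inspect ASCII code points, echoing the lowercase-j example in the text.
def ascii_code(ch):
    """Return (decimal value, 7-bit binary string) for an ASCII character."""
    code = ord(ch)
    if code > 127:
        raise ValueError("not a 7-bit ASCII character")
    return code, format(code, "07b")

assert ascii_code("j") == (106, "1101010")
# Uppercase and lowercase letters differ only in one bit (value 32),
# a property of the ASCII layout that made case conversion cheap.
assert ord("J") ^ 0b0100000 == ord("j")
```

The single-bit distance between the upper- and lowercase alphabets is a consequence of where the committee placed the lowercase letters in the code chart.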
21.
Unicode
–
Unicode is a computing industry standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems. As of June 2016, the most recent version is Unicode 9.0. The standard is maintained by the Unicode Consortium. Unicode's success at unifying character sets has led to its widespread adoption; the standard has been implemented in many recent technologies, including modern operating systems, XML, Java, and the .NET Framework. Unicode can be implemented by different character encodings; the most commonly used encodings are UTF-8, UTF-16, and the now-obsolete UCS-2. UTF-8 uses one byte for any ASCII character, all of which have the same code values in both UTF-8 and ASCII encoding, and up to four bytes for other characters. UCS-2 uses a 16-bit code unit for each character but cannot encode every character in the current Unicode standard. UTF-16 extends UCS-2, using one 16-bit unit for the characters that were representable in UCS-2 and two 16-bit units to handle each of the additional characters. Many traditional character encodings share a common problem in that they allow bilingual computer processing, but not multilingual computer processing. Unicode, in intent, encodes the underlying characters (graphemes and grapheme-like units) rather than the variant glyphs for such characters. In the case of Chinese characters, this sometimes leads to controversies over distinguishing the underlying character from its variant glyphs. In text processing, Unicode takes the role of providing a unique code point (a number) for each character. In other words, Unicode represents a character in an abstract way and leaves the visual rendering to other software, such as a web browser or word processor. This simple aim becomes complicated, however, because of concessions made by Unicode's designers in the hope of encouraging a more rapid adoption of Unicode. The first 256 code points were made identical to the content of ISO-8859-1 so as to make it trivial to convert existing western text.
For other examples, see duplicate characters in Unicode. Joe Becker explained that the name Unicode is intended to suggest a unique, unified, universal encoding. In this document, entitled Unicode 88, Becker outlined a 16-bit character model: Unicode could be roughly described as "wide-body ASCII" that has been stretched to 16 bits to encompass the characters of all the world's living languages. In a properly engineered design, 16 bits per character are more than sufficient for this purpose; Unicode aims in the first instance at the characters published in modern text, whose number is undoubtedly far below 2^14 = 16,384. By the end of 1990, most of the work on mapping existing character encoding standards had been completed. The Unicode Consortium was incorporated in California on January 3, 1991, and in October 1991 the first volume of the Unicode standard was published. The second volume, covering Han ideographs, was published in June 1992. In 1996, a surrogate character mechanism was implemented in Unicode 2.0, so that Unicode was no longer restricted to 16 bits. The Microsoft TrueType specification version 1.0 from 1992 used the name Apple Unicode instead of Unicode for the Platform ID in the naming table. Unicode defines a codespace of 1,114,112 code points in the range 0x0 to 0x10FFFF. Normally a Unicode code point is referred to by writing U+ followed by its hexadecimal number: for code points in the Basic Multilingual Plane (BMP), four digits are used; for code points outside the BMP, five or six digits are used, as required. Code points in Planes 1 through 16 are accessed as surrogate pairs in UTF-16. Within each plane, characters are allocated within named blocks of related characters.
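The surrogate-pair mechanism for code points outside the BMP follows a fixed arithmetic rule, which can be sketched as follows (the function name is my own; the constants are the standard UTF-16 surrogate ranges):

```python
# Split a supplementary code point (above U+FFFF) into the two 16-bit
# units UTF-16 uses for it, per the standard surrogate arithmetic.

def utf16_surrogates(code_point):
    """Return the (high, low) surrogate pair for a code point > U+FFFF."""
    assert 0x10000 <= code_point <= 0x10FFFF
    v = code_point - 0x10000            # 20 bits remain
    high = 0xD800 + (v >> 10)           # top 10 bits -> high surrogate
    low = 0xDC00 + (v & 0x3FF)          # low 10 bits -> low surrogate
    return high, low

# U+1F600 lies in Plane 1, so UTF-16 needs two 16-bit units for it.
assert utf16_surrogates(0x1F600) == (0xD83D, 0xDE00)
# UTF-8, by contrast, uses one byte for ASCII and up to four otherwise.
assert len("A".encode("utf-8")) == 1
assert len("\U0001F600".encode("utf-8")) == 4
```

This is why UCS-2, which has no surrogate mechanism, cannot represent characters outside the BMP while UTF-16 can.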
22.
Countable set
–
In mathematics, a countable set is a set with the same cardinality as some subset of the set of natural numbers. A countable set is either a finite set or a countably infinite set. Some authors use countable set to mean countably infinite alone; to avoid this ambiguity, the term at most countable may be used when finite sets are included, and countably infinite, enumerable, or denumerable otherwise. Georg Cantor introduced the term countable set, contrasting sets that are countable with those that are uncountable. Today, countable sets form the foundation of a branch of mathematics called discrete mathematics. A set S is countable if there exists an injective function f from S to the natural numbers N = {0, 1, 2, 3, ...}. If such an f can be found that is also surjective, then S is called countably infinite; in other words, a set is countably infinite if it has a one-to-one correspondence with the natural number set, N. As noted above, this terminology is not universal: some authors use countable to mean what is here called countably infinite, and do not include finite sets. Alternative formulations of the definition in terms of a bijective function or a surjective function can also be given. In 1874, in his first set theory article, Cantor proved that the set of real numbers is uncountable. In 1878, he used one-to-one correspondences to define and compare cardinalities. In 1883, he extended the natural numbers with his infinite ordinals, and used sets of ordinals to produce an infinity of sets having different infinite cardinalities. A set is a collection of elements, and may be described in many ways. One way is simply to list all of its elements; for example, the set consisting of the integers 3, 4, and 5 may be denoted {3, 4, 5}. This is only effective for small sets, however; for larger sets, listing becomes impractical. Even in this case, however, it is still possible to list all the elements, because the set is finite.
Some sets are infinite; these sets have more than n elements for any integer n. For example, the set of natural numbers, denotable by {0, 1, 2, 3, ...}, has infinitely many elements, and we cannot use any natural number to give its size. Nonetheless, it turns out that infinite sets do have a well-defined notion of size. To understand what this means, we first examine what it does not mean. For example, there are infinitely many odd integers, infinitely many even integers, and infinitely many integers overall. However, it turns out that the number of even integers is the same as the number of integers overall. This is because we can arrange things such that, for every integer, there is a distinct corresponding even integer: n → 2n. However, not all infinite sets have the same cardinality.
23.
Subset
–
In mathematics, especially in set theory, a set A is a subset of a set B (or equivalently, B is a superset of A) if A is contained inside B, that is, if all elements of A are also elements of B. The relationship of one set being a subset of another is called inclusion, or sometimes containment. The subset relation defines a partial order on sets, and the algebra of subsets forms a Boolean algebra in which the relation is called inclusion. For any set S, the inclusion relation ⊆ is a partial order on the set P(S) of all subsets of S, defined by A ≤ B ⟺ A ⊆ B. We may also partially order P(S) by reverse set inclusion, by defining A ≤ B ⟺ B ⊆ A. When quantified, A ⊆ B is represented as ∀x (x ∈ A → x ∈ B). For some authors the symbol ⊂ means the same as ⊆; so, for example, for these authors it is true of every set A that A ⊂ A. Other authors prefer to use the symbols ⊂ and ⊃ to indicate proper subset and proper superset, respectively; this usage makes ⊆ and ⊂ analogous to the inequality symbols ≤ and <. For example, if x ≤ y then x may or may not equal y, but if x < y, then x definitely does not equal y, and is strictly less than y. Similarly, using the convention that ⊂ means proper subset, if A ⊆ B, then A may or may not equal B. If a set A is a proper subset of a set B, then both expressions A ⊆ B and A ⊊ B are true; if a set D is equal to a set E, then D ⊆ E is true but D ⊊ E is not. Any set is a subset of itself, but not a proper subset. The empty set, denoted by ∅, is a subset of any given set X; it is also always a proper subset of any set except itself. There are examples in which both the subset and the whole set are infinite, and the subset has the same cardinality as the whole. The set of rational numbers is a proper subset of the set of real numbers; in this example, both sets are infinite, but the latter set has a larger cardinality than the former set. Another example can be shown in an Euler diagram. Inclusion is the canonical partial order in the sense that every partially ordered set is isomorphic to some collection of sets ordered by inclusion.
The ordinal numbers are a simple example: if each ordinal n is identified with the set [n] of all ordinals less than or equal to n, then a ≤ b if and only if [a] ⊆ [b]. For the power set P(S) of a set S, the inclusion partial order is the Cartesian product of k = |S| copies of the partial order on {0, 1} for which 0 < 1. This can be illustrated by enumerating S = {s1, s2, …, sk} and associating with each subset T ⊆ S the k-tuple from {0, 1}^k of which the ith coordinate is 1 if and only if si is a member of T.
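The characteristic-vector correspondence just described can be sketched concretely (the set S and helper names below are illustrative):

```python
# Each subset T of S = {s1, ..., sk} corresponds to a k-tuple of 0s and
# 1s, and subset inclusion matches coordinatewise <= on those tuples.
from itertools import product

S = ["a", "b", "c"]

def to_tuple(T):
    """Characteristic vector: ith coordinate is 1 iff S[i] is in T."""
    return tuple(1 if s in T else 0 for s in S)

def is_subset(T, U):
    return all(t <= u for t, u in zip(to_tuple(T), to_tuple(U)))

assert to_tuple({"a", "c"}) == (1, 0, 1)
assert is_subset({"a"}, {"a", "c"})
assert not is_subset({"b"}, {"a", "c"})
# The power set has 2**k elements, one per 0/1 k-tuple.
assert len(list(product([0, 1], repeat=len(S)))) == 2 ** len(S)
```

Comparing tuples coordinatewise is exactly the Cartesian-product order on k copies of {0, 1} that the text describes.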
24.
Regular language
–
Alternatively, a regular language can be defined as a language recognized by a finite automaton. The equivalence of regular expressions and finite automata is known as Kleene's theorem. In the Chomsky hierarchy, regular languages are defined to be the languages that are generated by Type-3 grammars. Regular languages are very useful in input parsing and programming language design. The collection of regular languages over an alphabet Σ is defined recursively as follows: the empty language Ø and the empty string language {ε} are regular languages; for each a ∈ Σ, the singleton language {a} is a regular language; if A and B are regular languages, then A ∪ B (union), A • B (concatenation), and A* (Kleene star) are regular languages; no other languages over Σ are regular. See regular expression for its syntax and semantics; note that the above cases are in effect the defining rules of regular expressions. All finite languages are regular; in particular, the empty string language {ε} = Ø* is regular. A simple example of a language that is not regular is the set of strings {a^n b^n | n ≥ 0}. Intuitively, it cannot be recognized with a finite automaton, since a finite automaton has finite memory. Techniques to prove this fact rigorously are given below. Some characterizations of regular languages are purely algebraic; a similar set of statements can be formulated for a monoid M ⊂ Σ*, in which case equivalence over M leads to the concept of a recognizable language. Some authors use one of the above properties, different from the first, as an alternative definition of regular languages. Some of the equivalences, particularly those among the first four formalisms, are called Kleene's theorem in textbooks; precisely which one is called such varies between authors. One textbook calls the equivalence of regular expressions and NFAs Kleene's theorem; another textbook calls the equivalence of regular expressions and DFAs Kleene's theorem. Two other textbooks first prove the equivalence of NFAs and DFAs.
Other authors simply define rational expression and regular expression as synonymous, and do the same with rational languages and regular languages. Regular languages are closed under the regular operations: union K ∪ L, concatenation K ∘ L, and Kleene star L*. They are also closed under the trio operations: string homomorphism, inverse string homomorphism, and intersection with regular languages. As a consequence they are closed under arbitrary finite state transductions, like quotient K / L with a regular language; even more, regular languages are closed under quotients with arbitrary languages. Given two deterministic finite automata A and B, it is decidable whether they accept the same language (equivalence). Disjointness: is LA ∩ LB = ∅? Membership: given a ∈ Σ*, is a ∈ LB? For regular expressions, the universality problem is NP-complete already for a singleton alphabet; for larger alphabets, that problem is PSPACE-complete.
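A deterministic finite automaton of the kind discussed above can be sketched as a transition table (an illustrative example of my own, recognizing binary strings with an even number of 1s, which is a regular language):

```python
# Table-driven DFA: finite states, one transition per (state, symbol),
# acceptance determined by the state reached at the end of the input.

def make_dfa(transitions, start, accepting):
    def accepts(s):
        state = start
        for ch in s:
            state = transitions[(state, ch)]
        return state in accepting
    return accepts

even_ones = make_dfa(
    transitions={("even", "0"): "even", ("even", "1"): "odd",
                 ("odd", "0"): "odd", ("odd", "1"): "even"},
    start="even",
    accepting={"even"},
)

assert even_ones("1010")      # two 1s
assert not even_ones("111")   # three 1s
assert even_ones("")          # zero 1s is even
```

The automaton needs only two states of memory regardless of input length, which is exactly the finite-memory property that non-regular languages such as {a^n b^n} exceed.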
25.
Natural number
–
In mathematics, the natural numbers are those used for counting and ordering. In common language, words used for counting are cardinal numbers and words used for ordering are ordinal numbers. Texts that exclude zero from the natural numbers sometimes refer to the natural numbers together with zero as the whole numbers, but in other writings that term is used instead for the integers. These chains of extensions make the natural numbers canonically embedded in the other number systems. Properties of the natural numbers, such as divisibility and the distribution of prime numbers, are studied in number theory. Problems concerning counting and ordering, such as partitioning and enumerations, are studied in combinatorics.

The most primitive method of representing a natural number is to put down a mark for each object. Later, a set of objects could be tested for equality, excess or shortage by striking out a mark. The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. The ancient Egyptians developed a powerful system of numerals with distinct hieroglyphs for 1, 10, and all the powers of 10 up to over 1 million. A stone carving from Karnak, dating from around 1500 BC and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones, and similarly for the number 4,622.

A much later advance was the development of the idea that 0 can be considered as a number, with its own numeral. The use of a 0 digit in place-value notation dates back as early as 700 BC by the Babylonians. The Olmec and Maya civilizations used 0 as a separate number as early as the 1st century BC, but this usage did not spread beyond Mesoamerica. The use of a numeral 0 in modern times originated with the Indian mathematician Brahmagupta in 628. The first systematic study of numbers as abstractions is usually credited to the Greek philosophers Pythagoras and Archimedes.
Some Greek mathematicians treated the number 1 differently than larger numbers. Independent studies also occurred at around the same time in India, China, and Mesoamerica. In 19th-century Europe, there was mathematical and philosophical discussion about the exact nature of the natural numbers. A school of Naturalism stated that the natural numbers were a direct consequence of the human psyche. Henri Poincaré was one of its advocates, as was Leopold Kronecker, who summarized his belief as "God made the integers, all else is the work of man." In opposition to the Naturalists, the constructivists saw a need to improve the logical rigor in the foundations of mathematics. In the 1860s, Hermann Grassmann suggested a recursive definition for natural numbers, thus stating they were not really natural but a consequence of definitions. Later, two classes of such formal definitions were constructed; still later, they were shown to be equivalent in most practical applications. The second class of definitions was introduced by Giuseppe Peano and is now called Peano arithmetic. It is based on an axiomatization of the properties of ordinal numbers: each natural number has a successor and every non-zero natural number has a unique predecessor. Peano arithmetic is equiconsistent with several weak systems of set theory.
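The recursive flavor of Grassmann's and Peano's definitions can be sketched directly: a natural number is either zero or the successor of a natural number, and addition is defined by recursion on the second argument. The class names Zero and Succ are illustrative choices, not from the source.

```python
from dataclasses import dataclass

# A minimal sketch of Peano-style naturals: zero, a successor operation, and
# addition defined by Grassmann's recursion: a + 0 = a, a + S(b) = S(a + b).

class Nat:
    pass

@dataclass(frozen=True)
class Zero(Nat):
    pass

@dataclass(frozen=True)
class Succ(Nat):
    pred: Nat  # every non-zero natural number has a unique predecessor

def add(a: Nat, b: Nat) -> Nat:
    if isinstance(b, Zero):
        return a                  # a + 0 = a
    return Succ(add(a, b.pred))   # a + S(b) = S(a + b)

def to_int(n: Nat) -> int:
    # Interpret a Peano numeral as an ordinary Python int by counting Succs.
    count = 0
    while isinstance(n, Succ):
        count += 1
        n = n.pred
    return count

two = Succ(Succ(Zero()))
three = Succ(two)
```

Here 2 + 3 unfolds to S(S(S(S(S(0)))))), i.e. 5, purely by the two recursion equations.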
26.
Empty set
–
In mathematics, and more specifically set theory, the empty set is the unique set having no elements; its size or cardinality is zero. Some axiomatic set theories ensure that the empty set exists by including an axiom of empty set; in other theories, its existence can be deduced. Many possible properties of sets are vacuously true for the empty set. Null set was once a synonym for empty set, but is now a technical term in measure theory. The empty set may also be called the void set.

Common notations for the empty set include { }, ∅, and Ø. The latter two symbols were introduced by the Bourbaki group in 1939, inspired by the letter Ø in the Danish and Norwegian alphabets. Although now considered an improper use of notation, in the past 0 was occasionally used as a symbol for the empty set. The empty-set symbol ∅ is found at Unicode point U+2205; in LaTeX, it is coded as \emptyset or \varnothing.

In standard axiomatic set theory, by the principle of extensionality, two sets with the same elements are equal; hence there is but one empty set, and we speak of "the empty set" rather than "an empty set". In this context, zero is modelled by the empty set.

For any property: for every element of ∅, the property holds (vacuously); and there is no element of ∅ for which the property holds. Conversely, if for some property and some set V the two statements hold (for every element of V the property holds, and there is no element of V for which the property holds), then V = ∅. By the definition of subset, the empty set is a subset of any set A. That is, every element x of ∅ belongs to A; indeed, since there are no elements of ∅ at all, there is no element of ∅ that is not in A. Any statement that begins "for every element of ∅" is not making any substantive claim; this is often paraphrased as "everything is true of the elements of the empty set." When speaking of the sum of the elements of a finite set, one is inevitably led to the convention that the sum of the elements of the empty set is zero; the reason for this is that zero is the identity element for addition.
Similarly, the product of the elements of the empty set should be considered to be one, since one is the identity element for multiplication. A derangement of a set is a permutation of the set that leaves no element in its original position. The empty set is a derangement of itself, as no element can be found that retains its original position. Since the empty set has no members, when it is considered as a subset of any ordered set, every member of that set will be an upper bound for the empty set. For example, when the empty set is considered as a subset of the real numbers, with their usual ordering represented by the real number line, every real number is an upper bound for it.
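The conventions above (vacuous truth, empty sum zero, empty product one, the empty set as a subset of every set) can be observed directly in Python's built-in aggregation functions. This is an illustrative sketch, not anything prescribed by the source.

```python
import math

# Vacuous truth and empty aggregations: every claim about the elements of the
# empty set holds, the empty sum is 0 (additive identity), and the empty
# product is 1 (multiplicative identity).

empty = []

all_hold = all(x > 0 for x in empty)   # vacuously True: no counterexample
any_hold = any(x > 0 for x in empty)   # False: no witness exists either
empty_sum = sum(empty)                 # 0
empty_product = math.prod(empty)       # 1

# The empty set is a subset of any set.
is_subset = set().issubset({1, 2, 3})  # True
```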
27.
Turing machine
–
A Turing machine is a mathematical model of computation that manipulates symbols on a tape according to a table of rules. Despite the model's simplicity, given any computer algorithm, a Turing machine can be constructed that is capable of simulating that algorithm's logic. The machine operates on an infinite memory tape divided into discrete cells; the machine positions its head over a cell and reads the symbol there. The Turing machine was invented in 1936 by Alan Turing, who called it an a-machine (automatic machine). Turing machines prove fundamental limitations on the power of mechanical computation. Turing completeness is the ability of a system of instructions to simulate a Turing machine.

A Turing machine is a general example of a CPU that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine capable of enumerating some arbitrary subset of valid strings of an alphabet. Assuming a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program; this is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing.

The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways; this is famously demonstrated through lambda calculus. A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine. The Church–Turing thesis states that Turing machines indeed capture the informal notion of effective methods in logic and mathematics. Studying their abstract properties yields many insights into computer science and complexity theory.

At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol; however, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine.
Any symbol on the tape may therefore eventually have an innings. The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time. In the original article, Turing imagines not a mechanism but a person, whom he calls the "computer", who executes these deterministic mechanical rules slavishly.

If δ is not defined on the current state and the current tape symbol, the machine halts; q0 ∈ Q is the initial state and F ⊆ Q is the set of final or accepting states. The initial tape contents is said to be accepted by M if it eventually halts in a state from F. Anything that operates according to these specifications is a Turing machine.

The 7-tuple for the 3-state busy beaver looks like this: Q = {A, B, C, HALT}; Γ = {0, 1}; b = 0 (the blank symbol); Σ = {1}; q0 = A (the initial state); F = {HALT}; δ = see the state table below. Initially all tape cells are marked with 0.

In the words of van Emde Boas (p. 6): "The set-theoretical object provides only partial information on how the machine will behave and what its computations will look like." For instance, there will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely.
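The formal 7-tuple can be exercised with a short simulator. The sketch below, written under the assumptions of the busy-beaver example above (states A, B, C, HALT; tape alphabet {0, 1}; blank symbol 0), runs transitions until δ is undefined for the current configuration; the function name is an illustrative choice.

```python
from collections import defaultdict

# A minimal Turing machine simulator, run on the 3-state, 2-symbol busy beaver.
# delta[(state, scanned symbol)] = (symbol to write, head move, next state).

def run_turing_machine(delta, start):
    """Execute transitions until no rule applies; return (tape, step count)."""
    tape = defaultdict(int)        # unmarked cells read as the blank symbol 0
    state, head, steps = start, 0, 0
    while (state, tape[head]) in delta:
        write, move, state = delta[(state, tape[head])]
        tape[head] = write
        head += 1 if move == 'R' else -1
        steps += 1
    return tape, steps

busy_beaver = {
    ('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'C'),
    ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'B'),
    ('C', 0): (1, 'L', 'B'), ('C', 1): (1, 'R', 'HALT'),
}

tape, steps = run_turing_machine(busy_beaver, 'A')
# Halts in state HALT after 13 steps with six 1s written on the tape.
```

Because δ contains no entry for the HALT state, the loop condition itself serves as the halting test, matching the convention that the machine halts when δ is undefined.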
28.
Regular expression
–
A regular expression (regex or regexp) is, in theoretical computer science and formal language theory, a sequence of characters that defines a search pattern. Usually this pattern is used by string-searching algorithms for "find" or "find and replace" operations on strings. The concept arose in the 1950s when the American mathematician Stephen Cole Kleene formalized the description of a regular language; it came into common use with Unix text-processing utilities. Today, different syntaxes for writing regular expressions exist, one being the POSIX standard and another, widely used, being the Perl syntax.

Regular expressions are used in search engines, in search and replace dialogs of word processors and text editors, and in text-processing utilities such as sed and AWK. Many programming languages provide regex capabilities, built in or via libraries.

The phrase regular expressions is often used to mean the specific, standard textual syntax for representing patterns that matching text needs to conform to. Each character in a regular expression is either a metacharacter, with a special meaning, or a regular character, with a literal meaning. For example, in the regex a., a is a literal character which matches just 'a', while . is a metacharacter which matches every character except a newline. Therefore this regex would match, for example, 'a ' or 'ax' or 'a0'. Together, metacharacters and literal characters can be used to identify textual material of a given pattern, or process a number of instances of it. Pattern matches can vary from a precise equality to a very general similarity: . is a very general pattern, while a is a precise pattern. Wildcards could also achieve this, but are more limited in what they can pattern; the usual context of wildcard characters is in globbing similar names in a list of files. For example, the regex ^[ \t]+|[ \t]+$ matches excess whitespace at the beginning or end of a line. An advanced regex can match any numeral; see the Examples section for more examples.
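The metacharacter behavior and the whitespace-trimming pattern just described can be checked with Python's re module; this sketch uses only patterns quoted above.

```python
import re

# 'a' matches literally; '.' matches any single character except a newline.
assert re.match(r"a.", "ax") is not None
assert re.match(r"a.", "a0") is not None
assert re.match(r"a.", "a") is None        # '.' needs one more character

# Strip excess whitespace at the beginning or end of a line.
trimmed = re.sub(r"^[ \t]+|[ \t]+$", "", "   hello world\t")
# trimmed is now "hello world"
```

Using the same pattern for both search (re.match) and replacement (re.sub) illustrates the "find" versus "find and replace" uses mentioned above.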
A regex processor translates a regular expression in the above syntax into an internal representation which can be executed and matched against a string representing the text being searched in. The picture shows the NFA scheme N obtained from the regular expression s*, where s denotes a simpler regular expression in turn.

Regular expressions originated in 1956, when mathematician Stephen Cole Kleene described regular languages using his mathematical notation called regular sets. These arose in theoretical computer science, in the subfields of automata theory and formal languages. Other early implementations of pattern matching include the SNOBOL language, which did not use regular expressions. Regular expressions entered popular use from 1968 in two uses: pattern matching in a text editor and lexical analysis in a compiler.
29.
Automata theory
–
Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in computer science and discrete mathematics. The word automata comes from the Greek word αὐτόματα, which means "self-acting".

The figure at right illustrates a finite-state machine, which belongs to a well-known type of automaton. This automaton consists of states and transitions. As the automaton sees a symbol of input, it makes a transition to another state, according to its transition function, which takes the current state and the recent symbol as its inputs.

Automata theory is closely related to formal language theory. An automaton is a finite representation of a formal language that may be an infinite set. Automata play a major role in the theory of computation, compiler construction, artificial intelligence, and parsing.

Following is a definition of one type of automaton, which attempts to help one grasp the essential concepts involved in automata theory. An automaton is supposed to run on some given sequence of inputs in discrete time steps. At every time step, an automaton gets one input that is picked up from a set of symbols or letters. At any time, the symbols so far fed to the automaton as input form a finite sequence of symbols. An automaton contains a finite set of states. At each instant in time of some run, the automaton is in one of its states. At each time step when the automaton reads a symbol, it jumps or transitions to another state that is decided by a function that takes the current state and the symbol just read as its parameters. This function is called the transition function. The automaton reads the symbols of the input word one after another and transitions from state to state according to the transition function until the word is read completely. Once the input word has been read, the automaton is said to have stopped. Depending on the final state, it is said that the automaton either accepts or rejects the input word.
There is a subset of states of the automaton which is defined as the set of accepting states; if the final state is an accepting state, then the automaton accepts the word. The set of all the words accepted by an automaton is called the language of that automaton. Any subset of the language of an automaton is a language recognized by that automaton.
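The run-and-accept procedure described above can be sketched in a few lines. The example language (words over {0, 1} with an even number of 1s) and the state names are illustrative assumptions, not part of the source.

```python
# Run the transition function over an input word; accept iff the automaton
# stops in one of its accepting states.

def accepts(delta, start, accepting, word):
    state = start
    for symbol in word:                 # read the word one symbol at a time
        state = delta[(state, symbol)]  # transition function: (state, symbol)
    return state in accepting           # stopped: accept or reject

# A two-state automaton accepting words with an even number of 1s.
delta = {
    ('even', '0'): 'even', ('even', '1'): 'odd',
    ('odd', '0'): 'odd',   ('odd', '1'): 'even',
}

accepts(delta, 'even', {'even'}, '1100')  # True: the word contains two 1s
accepts(delta, 'even', {'even'}, '1')     # False: one 1 is odd
```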
30.
Finite-state machine
–
A finite-state machine (FSM) or finite-state automaton, finite automaton, or simply a state machine, is a mathematical model of computation. It is a machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to external inputs; an FSM is defined by a list of its states, its initial state, and the conditions for each transition. The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented.

The finite-state machine has less computational power than some other models of computation such as the Turing machine; this distinction means there are tasks that a Turing machine can do but an FSM cannot. This is because an FSM's memory is limited by the number of states it has. FSMs are studied in the more general field of automata theory.

An example of a mechanism that can be modeled by a state machine is a turnstile. A turnstile, used to control access to subways and amusement park rides, is a gate with three rotating arms at waist height, one across the entryway. Initially the arms are locked, blocking the entry and preventing patrons from passing through. Depositing a coin or token in a slot on the turnstile unlocks the arms, allowing a single customer to push through. After the customer passes through, the arms are locked again until another coin is inserted.

Considered as a state machine, the turnstile has two possible states: Locked and Unlocked. There are two inputs that affect its state: putting a coin in the slot (coin) and pushing the arm (push). In the locked state, pushing on the arm has no effect; no matter how many times the input push is given, it stays in the locked state. Putting a coin in, that is, giving the machine a coin input, shifts the state from Locked to Unlocked. In the unlocked state, putting additional coins in has no effect; however, a customer pushing through the arms, giving a push input, shifts the state back to Locked.
Each state is represented by a node, and edges show the transitions from one state to another. Each arrow is labeled with the input that triggers that transition. An input that doesn't cause a change of state is represented by a circular arrow returning to the original state. The arrow into the Locked node from the black dot indicates it is the initial state.
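The turnstile's state table can be sketched directly as a transition dict; the function name run is an illustrative choice.

```python
# The turnstile state machine: (current state, input) -> next state.
TRANSITIONS = {
    ('Locked', 'coin'):   'Unlocked',  # a coin unlocks the arms
    ('Locked', 'push'):   'Locked',    # pushing a locked arm has no effect
    ('Unlocked', 'coin'): 'Unlocked',  # additional coins have no effect
    ('Unlocked', 'push'): 'Locked',    # passing through relocks the arms
}

def run(events, state='Locked'):
    """Feed a sequence of inputs to the turnstile; return the final state."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

run(['push', 'coin', 'push'])  # a full cycle ends back in 'Locked'
run(['coin', 'coin'])          # 'Unlocked': the second coin changes nothing
```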
31.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem.

In English, the word was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it was not until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals.

An informal definition could be a set of rules that precisely defines a sequence of operations, which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. No one can write out all members of an enumerably infinite set, but humans can do something equally useful in the case of certain enumerably infinite sets: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability.
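The idea of "explicit instructions for determining the nth member of the set" can be sketched with a trivial example; the choice of the set of perfect squares is an illustrative assumption.

```python
# An enumerably infinite set given by an explicit rule: the nth member of
# {1, 4, 9, 16, ...} is computed directly, for arbitrary finite n, even
# though the whole set can never be written out.

def nth_square(n: int) -> int:
    """Explicit instructions for the nth member of the set of squares."""
    return n * n

# The one-to-one correspondence with the positive integers:
first_five = [nth_square(n) for n in range(1, 6)]  # [1, 4, 9, 16, 25]
```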
That notion is central for explaining how formal systems come into being, starting from a set of axioms. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to our customary physical dimension. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term.

Algorithms are essential to the way computers process data. Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute. As Gurevich observed, Turing's informal argument in favor of his thesis justifies a stronger thesis; according to Savage, an algorithm is a computational process defined by a Turing machine.

Typically, when an algorithm is associated with processing information, data can be read from an input source and written to an output device. Stored data are regarded as part of the internal state of the entity performing the algorithm; in practice, the state is stored in one or more data structures. For some such computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be systematically dealt with, case by case.