1.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope. Mathematicians seek out patterns and use them to formulate new conjectures, and they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences. 
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a systematic study of mathematics in its own right. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today, and the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word "mathematics" derives from the Greek máthēma, from the verb μανθάνω (manthano); the modern Greek equivalent is μαθαίνω (mathaino), both meaning "to learn". In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.

2.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the field can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on the challenges of implementing computation; human-computer interaction, for example, considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity; indeed, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist. Charles Babbage started developing his programmable calculating machine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched-card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the machine was finished, some hailed it as "Babbage's dream come true". 
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as an academic discipline in the 1950s. The world's first computer science program was the Cambridge Diploma in Computer Science; the first computer science program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with the IBM machines was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.

3.
Hierarchy
–
A hierarchy is an arrangement of items in which the items are represented as being above, below, or at the same level as one another. A hierarchy can link entities either directly or indirectly, and either vertically or diagonally; indirect hierarchical links can extend vertically upwards or downwards via multiple links in the same direction, following a path. This is akin to two co-workers or colleagues: each reports to a common superior, but they have the same relative amount of authority. Organizational forms exist that are alternative and complementary to hierarchy. Informally, a hierarchy is a system or organization in which people or groups are ranked one above the other according to status or authority. Hierarchies have their own special vocabulary, and most hierarchies use a more specific vocabulary pertaining to their subject. For example, with data structures, objects are known as nodes, superiors are called parents, and subordinates are called children; in a business setting, a superior is a supervisor (boss) and a peer is a colleague. Degree of branching refers to the number of direct subordinates or children an object (a node) has. Hierarchies can be categorized based on the maximum degree, the highest degree present in the system as a whole. Categorization in this way yields two broad classes: linear and branching. In a linear hierarchy, the maximum degree is 1; in other words, all of the objects can be visualized in a lineup. Note that this refers to the objects, not the levels: every hierarchy has this property with respect to levels, but normally each level can have an infinite number of objects. An example of a linear hierarchy is the hierarchy of life. In a branching hierarchy, one or more objects has a degree of 2 or more. For many people, the word "hierarchy" automatically evokes an image of a branching hierarchy. 
Branching hierarchies are present within numerous systems, including organizations and classification schemes, and the broad category of branching hierarchies can be further subdivided based on the degree. A flat hierarchy is a branching hierarchy in which the maximum degree approaches infinity. Most often, systems intuitively regarded as hierarchical have at most a moderate span; therefore, a flat hierarchy is often not viewed as a hierarchy at all. For example, diamonds and graphite are flat hierarchies of numerous carbon atoms, which can in turn be decomposed into subatomic particles. An overlapping hierarchy is a branching hierarchy in which at least one object has two parent objects. Pseudo-Dionysius used the related Greek word both in reference to the celestial hierarchy and the ecclesiastical hierarchy.
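The linear-versus-branching distinction above can be made concrete in a few lines. This is a minimal sketch (not from the source; the function names and the dict-of-children representation are our own choices): a hierarchy is linear exactly when no node's degree of branching exceeds 1.

```python
# Sketch: classifying a hierarchy by its maximum degree of branching.
# Each node maps to the list of its direct children (its degree).
def max_degree(children):
    """Highest number of direct children any node has."""
    return max((len(kids) for kids in children.values()), default=0)

def classify(children):
    return "linear" if max_degree(children) <= 1 else "branching"

# A linear hierarchy: every node has at most one child.
linear = {"a": ["b"], "b": ["c"], "c": []}
# A branching hierarchy: the root has two children.
branching = {"root": ["left", "right"], "left": [], "right": []}

assert classify(linear) == "linear"
assert classify(branching) == "branching"
```

A flat hierarchy, in this representation, is simply one where some node's child list is very long relative to the depth of the structure.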

4.
Tree structure
–
A tree structure or tree diagram is a way of representing the hierarchical nature of a structure in graphical form. A tree structure is conceptual, and appears in several forms; for a discussion of tree structures in specific fields, see tree (data structure) for computer science and, insofar as it relates to graph theory, tree (graph theory) or tree (set theory). Other related pages are listed below. The tree elements are called nodes, and the lines connecting elements are called branches. Nodes without children are called leaf nodes, end-nodes, or leaves. Every finite tree structure has a member that has no superior; this member is called the root or root node. The root is the starting node, but the converse is not true: infinite tree structures may or may not have a root node. The names of relationships between nodes model the kinship terminology of family relations. The gender-neutral names "parent" and "child" have largely displaced the older "father" and "son" terminology, although the term "uncle" is still used for other nodes at the same level as the parent. A node's parent is a node one step higher in the hierarchy. Sibling nodes share the same parent node, and a node's uncles are siblings of that node's parent. A node that is connected to all lower-level nodes is called an ancestor; the connected lower-level nodes are descendants of the ancestor node. In the example, "encyclopedia" is the parent of "science" and "culture", its children. "Art" and "craft" are siblings, and children of "culture", which is their parent; also, "encyclopedia", as the root of the tree, is the ancestor of "science", "culture", "art" and "craft". Finally, "science", "art" and "craft", as leaves, are ancestors of no other node. The Oxford English Dictionary records use of both the terms "tree structure" and "tree-diagram" from 1965 in Noam Chomsky's Aspects of the Theory of Syntax. In a tree there is one and only one path from any point to any other point. 
Computer science uses tree structures extensively (for a formal definition see set theory); examples include the Usenet hierarchy and the logical structure of the Document Object Model. Trees can be drawn in many ways, but almost always these boil down to variations, or combinations, of a few basic styles: classical node-link diagrams; nested sets that use enclosure/containment to show parenthood (examples include TreeMaps and fractal maps); layered "icicle" diagrams that use alignment/adjacency; and lists or diagrams that use indentation, sometimes called outlines or tree views. A correspondence to nested parentheses was first noticed by Sir Arthur Cayley. Trees can also be represented radially.
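The encyclopedia example above can be sketched directly in code. This is a minimal illustration (the node names follow the example in the text; the parent-link representation and helper functions are our own, hypothetical choices):

```python
# Sketch of the encyclopedia example: each node records its parent,
# and the kinship terms (children, ancestors) are derived from that.
parent = {
    "science": "encyclopedia",
    "culture": "encyclopedia",
    "art": "culture",
    "craft": "culture",
}

def ancestors(node):
    """Walk parent links up toward the root."""
    out = []
    while node in parent:
        node = parent[node]
        out.append(node)
    return out

def children(node):
    return [c for c, p in parent.items() if p == node]

assert children("culture") == ["art", "craft"]       # siblings
assert ancestors("craft") == ["culture", "encyclopedia"]
assert ancestors("encyclopedia") == []               # the root has no superior
```

Because every node other than the root has exactly one parent, there is one and only one upward path from any node to the root, matching the text's remark about unique paths.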

5.
Theoretical computer science
–
It is not easy to circumscribe the theoretical areas precisely, but work in this field is often distinguished by its emphasis on mathematical technique. Despite this broad scope, the "theory people" in computer science self-identify as different from the "applied people". Some characterize themselves as doing the science underlying the field of computing, while other theory-applied people suggest that it is impossible to separate theory and application. This means that the theory people regularly draw on experimental science done in less-theoretical areas such as software system research. It also means that there is more cooperation than mutually exclusive competition between theory and application. These foundational developments led to the modern study of logic and computability. Information theory was added to the field with Claude Shannon's 1948 mathematical theory of communication; in the same decade, Donald Hebb introduced a mathematical model of learning in the brain. With mounting biological data supporting this hypothesis with some modification, the field of neural networks was established. In 1971, Stephen Cook and, working independently, Leonid Levin proved that there exist practically relevant problems that are NP-complete, a landmark result in computational complexity theory. With the development of quantum mechanics at the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction; in other words, one could compute functions on multiple states simultaneously. Modern theoretical computer science research is based on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed. An algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning; an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. 
The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. A data structure is a particular way of organizing data in a computer so that it can be used efficiently. Different kinds of structures are suited to different kinds of applications; for example, databases use B-tree indexes for small percentages of data retrieval. Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms, and some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and secondary memory. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. Complexity theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage.
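The claim that "efficient data structures are key to designing efficient algorithms" can be demonstrated with a toy measurement. This is a sketch (not from the source): the same membership question asked of a list, which must scan every element, versus a set, which hashes the key.

```python
# Sketch: identical membership queries against two data structures.
# A list scan is O(n); a hash-based set lookup is O(1) on average,
# so the choice of structure drives the algorithm's running time.
import timeit

data = list(range(100_000))
as_list = data
as_set = set(data)

target = 99_999  # worst case for the list: the last element
assert (target in as_list) == (target in as_set) == True

list_time = timeit.timeit(lambda: target in as_list, number=100)
set_time = timeit.timeit(lambda: target in as_set, number=100)
assert set_time < list_time  # the hash-based structure wins
```

The answer is the same either way; only the resource cost (here, time) differs, which is exactly the distinction complexity theory quantifies.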

6.
Partially ordered set
–
In mathematics, especially order theory, a partially ordered set formalizes and generalizes the intuitive concept of an ordering, sequencing, or arrangement of the elements of a set. A poset consists of a set together with a binary relation indicating that, for certain pairs of elements in the set, one of the elements precedes the other. The word "partial" in "partial order" or "partially ordered set" is used as an indication that not every pair of elements need be comparable; that is, there may be pairs of elements for which neither element precedes the other in the poset. Partial orders thus generalize total orders, in which every pair is comparable. To be a partial order, a binary relation must be reflexive, antisymmetric, and transitive. One familiar example of a partially ordered set is a collection of people ordered by genealogical descendancy: some pairs of people bear the descendant-ancestor relationship, but other pairs of people are incomparable. A poset can be visualized through its Hasse diagram, which depicts the ordering relation. A partial order is a binary relation ≤ over a set P satisfying particular axioms which are discussed below; when a ≤ b, we say that a is related to b. The axioms for a partial order state that the relation ≤ is reflexive, antisymmetric, and transitive. That is, for all a, b, and c in P, it must satisfy: a ≤ a (reflexivity); if a ≤ b and b ≤ a, then a = b (antisymmetry); and if a ≤ b and b ≤ c, then a ≤ c (transitivity). In other words, a partial order is an antisymmetric preorder. A set with a partial order is called a partially ordered set. The term "ordered set" is also used, as long as it is clear from the context that no other kind of order is meant; in particular, totally ordered sets can also be referred to as ordered sets. For elements a, b of a partially ordered set P, if a ≤ b or b ≤ a, then a and b are comparable. A partial order under which every pair of elements is comparable is called a total order or linear order, and a totally ordered set is also called a chain. A subset of a poset in which no two elements are comparable is called an antichain. 
A more concise definition can be given using the strict order < corresponding to ≤: an element x is covered by an element y if x < y and there is no z with x < z < y. Standard examples of posets arising in mathematics include the real numbers ordered by the standard less-than-or-equal relation ≤, and the set of subsets of a given set ordered by inclusion.
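The subset-inclusion example above can be checked mechanically against the three axioms. This is a minimal sketch (the helper names are ours, not from the source), verifying that ⊆ on the subsets of {1, 2} is reflexive, antisymmetric, and transitive, and that it is only a partial order because some pairs are incomparable:

```python
# Sketch: verifying the partial-order axioms for set inclusion
# on all subsets of {1, 2}.
from itertools import combinations

base = {1, 2}
subsets = [frozenset(c) for r in range(len(base) + 1)
           for c in combinations(base, r)]

leq = lambda a, b: a <= b   # a <= b on frozensets means "a is a subset of b"

reflexive = all(leq(a, a) for a in subsets)
antisymmetric = all(a == b for a in subsets for b in subsets
                    if leq(a, b) and leq(b, a))
transitive = all(leq(a, c) for a in subsets for b in subsets for c in subsets
                 if leq(a, b) and leq(b, c))
assert reflexive and antisymmetric and transitive

# {1} and {2} are incomparable: neither contains the other,
# so inclusion is a partial order, not a total order.
assert not leq(frozenset({1}), frozenset({2}))
assert not leq(frozenset({2}), frozenset({1}))
```

The pair {1}, {2} forms an antichain in this poset, in the sense defined above.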

7.
Tree (data structure)
–
A tree is a data structure made up of nodes (or vertices) and edges without any cycle. Alternatively, a tree can be defined abstractly as a whole, as an ordered tree with a value assigned to each node. Both perspectives are useful: a tree can be analyzed mathematically as a whole, or represented and manipulated node by node. The tree with no nodes is called the null or empty tree; a tree that is not empty consists of a root node and potentially many levels of additional nodes that form a hierarchy. The common terminology:
Root – the top node in a tree.
Child – a node directly connected to another node when moving away from the root.
Parent – the converse notion of a child.
Siblings – a group of nodes with the same parent.
Descendant – a node reachable by repeatedly proceeding from parent to child.
Ancestor – a node reachable by repeatedly proceeding from child to parent.
Leaf – a node with no children.
Branch (internal node) – a node with at least one child.
Degree – the number of subtrees of a node.
Edge – the connection between one node and another.
Path – a sequence of nodes and edges connecting a node with a descendant.
Level – the level of a node is defined by 1 + the number of connections between the node and the root.
Height of node – the number of edges on the longest path between that node and a leaf.
Height of tree – the height of its root node.
Depth – the number of edges from the root node to the node.
Forest – a set of n ≥ 0 disjoint trees.
There is a distinction between a tree as an abstract data type and as a concrete data structure, analogous to the distinction between a list and a linked list. To allow finite trees, one must either allow the list of children to be empty, or allow trees to be empty, in which case the list of children can be of fixed size. As a data structure, a tree is a group of nodes, where each node has a value and a list of references to other nodes. This data structure actually defines a graph, because it may have loops or several references to the same node. 
Thus there is also the requirement that no two references point to the same node; a tree that violates this is corrupt. For example, rather than an empty tree, one may use a null reference, so that a tree proper is always non-empty. In fact, every node other than the root must have exactly one parent.
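The glossary terms "height" and "depth" above are easy to confuse; a short sketch makes the difference concrete (the dict-of-children representation and function names are our own, not from the source):

```python
# Sketch: depth counts edges from the root down to a node; height
# counts edges on the longest path from a node down to a leaf.
tree = {"root": ["a", "b"], "a": ["c"], "b": [], "c": []}

def height(node):
    kids = tree[node]
    return 0 if not kids else 1 + max(height(k) for k in kids)

def depth(node, current="root", d=0):
    if current == node:
        return d
    for k in tree[current]:
        found = depth(node, k, d + 1)
        if found is not None:
            return found
    return None

assert height("root") == 2   # root -> a -> c is the longest path
assert depth("c") == 2
assert depth("root") == 0    # so the level of the root is 1 + 0
```

The height of the tree equals the height of its root node, and a leaf always has height 0, matching the definitions in the glossary.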

8.
Decision problem
–
In computability theory and computational complexity theory, a decision problem is a question in some formal system that can be posed as a yes-no question, dependent on the input values. For example, "given two numbers x and y, does x evenly divide y?" is a decision problem: the answer can be yes or no, and depends upon the values of x and y. A method for solving such a problem, given in the form of an algorithm, is called a decision procedure for that problem. A decision procedure for the problem "given two numbers x and y, does x evenly divide y?" would give the steps for determining whether x evenly divides y. One such algorithm is long division, taught to schoolchildren: if the remainder is zero, the answer produced is "yes"; otherwise it is "no". A decision problem which can be solved by an algorithm, such as this example, is called decidable. The field of computational complexity categorizes decidable decision problems by how difficult they are to solve; "difficult", in this sense, is described in terms of the computational resources needed by the most efficient algorithm for a given problem. The field of computability theory, meanwhile, categorizes undecidable decision problems by Turing degree. A decision problem is any arbitrary yes-or-no question on a set of inputs; because of this, it is traditional to define the decision problem equivalently as the set of inputs for which the answer is yes. These inputs can be natural numbers, but may also be values of some other kind, such as strings over the binary alphabet or over some other finite set of symbols. The subset of strings for which the problem returns "yes" is a formal language; alternatively, using an encoding such as Gödel numbering, any string can be encoded as a natural number, via which a decision problem can be defined as a subset of the natural numbers. A classic example of a decision problem is the set of prime numbers. It is possible to decide whether a given natural number is prime by testing every possible nontrivial factor. 
Although much more efficient methods of primality testing are known, the existence of any method is enough to establish decidability. A decision problem A is called decidable or effectively solvable if A is a recursive set; a problem is called partially decidable, semidecidable, solvable, or provable if A is a recursively enumerable set. Problems that are not decidable are called undecidable; the halting problem is an important undecidable decision problem (for more examples, see the list of undecidable problems). Decision problems can be ordered according to many-one reducibility and related feasible reductions such as polynomial-time reductions.
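The two decision procedures discussed above, long division for divisibility and trial division for primality, can be written out directly. This is a sketch (the function names are ours): trial division is deliberately naive, since the point in the text is only that some effective method exists, which suffices for decidability.

```python
# Sketch: two decision procedures from the text.
def divides(x, y):
    """Does x evenly divide y? Yes iff the remainder is zero."""
    return y % x == 0

def is_prime(n):
    """Decide primality by testing every possible nontrivial factor."""
    if n < 2:
        return False
    return all(not divides(d, n) for d in range(2, n))

assert divides(3, 12) is True    # remainder 0 -> "yes"
assert divides(5, 12) is False   # remainder 2 -> "no"
assert [n for n in range(2, 20) if is_prime(n)] == [2, 3, 5, 7, 11, 13, 17, 19]
```

Viewed as a set of inputs, `is_prime` decides membership in the set of prime numbers, the classic example named in the text.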

9.
Order theory
–
Order theory is a branch of mathematics which investigates the intuitive notion of order using binary relations. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". This article introduces the field and provides basic definitions; a list of order-theoretic terms can be found in the order theory glossary. Orders are everywhere in mathematics and related fields like computer science. The first order often discussed in primary school is the standard order on the natural numbers, e.g. "2 is less than 3" or "10 is greater than 5". This intuitive concept can be extended to orders on other sets of numbers, such as the integers. The idea of being greater than or less than another number is one of the basic intuitions of number systems in general. Other familiar examples of orderings are the alphabetical order of words in a dictionary and the genealogical property of lineal descent within a group of people. The notion of order is very general, extending beyond contexts that have an immediate, intuitive feel of sequence or relative quantity. In other contexts orders may capture notions of containment or specialization; abstractly, this type of order amounts to the subset relation, e.g. "pediatricians are physicians". However, many other orders do not have such an intuitive counterpart, and those orders, like the subset-of relation, for which there exist incomparable elements are called partial orders; orders for which every pair of elements is comparable are total orders. Order theory captures the intuition of orders that arises from such examples in a general setting. This is achieved by specifying properties that a relation ≤ must have to be a mathematical order. This more abstract approach makes sense, because one can derive numerous theorems in the general setting; these insights can then be transferred to many less abstract applications. Driven by the wide usage of orders, numerous special kinds of ordered sets have been defined. In addition, order theory does not restrict itself to the classes of ordering relations. 
A simple example of an order-theoretic property for functions comes from analysis, where monotone functions are frequently found. This section introduces ordered sets by building upon the concepts of set theory, arithmetic, and binary relations. Suppose that P is a set and that ≤ is a relation on P; a set with a partial order on it is called a partially ordered set, poset, or just an ordered set if the intended meaning is clear. By checking these properties, one sees that the well-known orders on natural numbers, integers and rational numbers are all orders in this sense.
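"Checking these properties" for a given relation is a mechanical exercise. As a sketch (not from the source), take divisibility on {1, ..., 12}, a standard order-theoretic example: reading "a ≤ b" as "a divides b" yields a relation that is reflexive, antisymmetric, and transitive, yet only a partial order, since 4 and 6 are incomparable.

```python
# Sketch: divisibility on {1,...,12} as an ordering relation.
nums = range(1, 13)
leq = lambda a, b: b % a == 0   # "a <= b" means "a divides b"

assert all(leq(a, a) for a in nums)                            # reflexive
assert all(a == b for a in nums for b in nums
           if leq(a, b) and leq(b, a))                         # antisymmetric
assert all(leq(a, c) for a in nums for b in nums for c in nums
           if leq(a, b) and leq(b, c))                         # transitive
assert not leq(4, 6) and not leq(6, 4)                         # incomparable
```

This is an order of the "containment or specialization" kind mentioned above: the divisors of a number sit below it, much as pediatricians sit below physicians in the specialization order.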

10.
Polynomial hierarchy
–
In computational complexity theory, the polynomial hierarchy is a hierarchy of complexity classes that generalize the classes P, NP and co-NP to oracle machines. It is a resource-bounded counterpart to the arithmetical hierarchy and analytical hierarchy from mathematical logic. There are multiple equivalent definitions of the classes of the polynomial hierarchy. If any Σ_k^P = Σ_{k+1}^P, or if any Σ_k^P = Π_k^P, then the hierarchy collapses to level k; in particular, if P = NP, then the hierarchy collapses completely. The union of all classes in the hierarchy is the complexity class PH. It is known that PH is contained within PSPACE, but it is not known whether the two classes are equal. If the polynomial hierarchy has any complete problems, then it has only finitely many distinct levels; since there are PSPACE-complete problems, we know that if PSPACE = PH, then the hierarchy must collapse. Each class in the hierarchy contains ≤_m^P-complete problems. Furthermore, each class in the hierarchy is closed under ≤_m^P-reductions. These two facts together imply that if K_i is a complete problem for Σ_i^P, then Σ_{i+1}^P = NP^{K_i}; for instance, Σ_2^P = NP^SAT. In other words, if a language is defined based on some oracle in C, then we can assume that it is defined based on a complete problem for C; complete problems therefore act as representatives of the class for which they are complete. The Sipser–Lautemann theorem states that the class BPP is contained in the second level of the polynomial hierarchy. Kannan's theorem states that for any fixed k, Σ_2 is not contained in SIZE(n^k). Toda's theorem states that the polynomial hierarchy is contained in P^#P.
See also: EXPTIME, the exponential hierarchy, the arithmetical hierarchy.
References:
A. R. Meyer and L. J. Stockmeyer. 
"The Equivalence Problem for Regular Expressions with Squaring Requires Exponential Space", in Proceedings of the 13th IEEE Symposium on Switching and Automata Theory, pp. 125–129, 1972. The paper that introduced the polynomial hierarchy.
L. J. Stockmeyer. "The polynomial-time hierarchy", Theoretical Computer Science, vol. 3, pp. 1–22, 1976.
Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness.
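The oracle-machine definition alluded to above can be written out explicitly. These are the standard inductive definitions of the levels (not spelled out in the text itself):

```latex
\Sigma_0^P = \Pi_0^P = \Delta_0^P = \mathrm{P}, \\
\Delta_{k+1}^P = \mathrm{P}^{\Sigma_k^P}, \qquad
\Sigma_{k+1}^P = \mathrm{NP}^{\Sigma_k^P}, \qquad
\Pi_{k+1}^P = \mathrm{coNP}^{\Sigma_k^P}, \\
\mathrm{PH} = \bigcup_{k} \Sigma_k^P .
```

Here a class with an oracle superscript denotes the corresponding machines given free access to an oracle for some language in the superscripted class; in particular Σ₁^P = NP and Π₁^P = co-NP, which is the sense in which the hierarchy "generalizes P, NP and co-NP to oracle machines".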

11.
Chomsky hierarchy
–
In the formal language theory of computer science and linguistics, the Chomsky hierarchy is a containment hierarchy of classes of formal grammars. This hierarchy of grammars was described by Noam Chomsky in 1956; it is also named after Marcel-Paul Schützenberger, who played a crucial role in the development of the theory of formal languages. A rule may be applied by replacing an occurrence of the symbols on its left-hand side with those that appear on its right-hand side. A sequence of rule applications is called a derivation, and such a grammar defines a formal language: all words consisting solely of terminal symbols which can be reached by a derivation from the start symbol. Nonterminals are often represented by uppercase letters, terminals by lowercase letters. For example, consider the grammar with terminals {a, b}, nonterminals {S, A, B}, production rules S → AB, S → ε, A → aS, B → b, and start symbol S. Grammars of the same kind over English words can derive sequences such as "ideas hate great linguists" and "ideas generate"; while these sentences are nonsensical, they are syntactically correct, and a syntactically incorrect sentence cannot be derived from such a grammar. The following table summarizes each of Chomsky's four types of grammars: the class of language it generates, the type of automaton that recognizes it, and the form its rules must have. Note that the set of grammars corresponding to recursive languages is not a member of this hierarchy. Every regular language is context-free, every context-free language is context-sensitive, and every context-sensitive language is recursive. Type-0 grammars include all formal grammars. They generate exactly all languages that can be recognized by a Turing machine; these languages are also known as the recursively enumerable or Turing-recognizable languages. Note that this is different from the recursive languages, which can be decided by an always-halting Turing machine. 
Type-1 grammars generate the context-sensitive languages. These grammars have rules of the form αAβ → αγβ with A a nonterminal and α, β and γ strings of terminals and/or nonterminals. The strings α and β may be empty, but γ must be nonempty; the rule S → ε is allowed if S does not appear on the right side of any rule. The languages described by these grammars are exactly all languages that can be recognized by a linear bounded automaton. Type-2 grammars generate the context-free languages. These are defined by rules of the form A → γ with A a nonterminal and γ a string of terminals and/or nonterminals. These languages are exactly all languages that can be recognized by a non-deterministic pushdown automaton. Often a subset of context-free grammars is used to make parsing easier, such as by an LL parser. Type-3 grammars generate the regular languages. Such a grammar restricts its rules to a single nonterminal on the left-hand side and a right-hand side consisting of a single terminal, possibly followed by a single nonterminal. Alternatively, the right-hand side can consist of a single terminal, possibly preceded by a single nonterminal.
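The example grammar above (S → AB, S → ε, A → aS, B → b) can be explored by brute force. This is a sketch (not from the source; the function and variable names are ours): it exhaustively applies the rules to sentential forms up to a length bound and collects the fully terminal words, showing that the grammar generates strings of the form aⁿbⁿ.

```python
# Sketch: brute-force derivation for the grammar
#   S -> AB | ε,  A -> aS,  B -> b
# Uppercase symbols are nonterminals; lowercase are terminals.
rules = {"S": ["AB", ""], "A": ["aS"], "B": ["b"]}

def derive(max_len):
    """All terminal words of length <= max_len reachable from S."""
    seen, frontier, words = set(), {"S"}, set()
    while frontier:
        form = frontier.pop()
        if form in seen or len(form) > max_len + 2:
            continue  # bound intermediate sentential forms
        seen.add(form)
        if form == "" or form.islower():
            words.add(form)  # only terminal symbols remain
            continue
        for i, sym in enumerate(form):
            if sym in rules:  # replace one nonterminal occurrence
                for rhs in rules[sym]:
                    frontier.add(form[:i] + rhs + form[i + 1:])
    return {w for w in words if len(w) <= max_len}

assert derive(4) == {"", "ab", "aabb"}  # a^n b^n for n = 0, 1, 2
```

Since every rule here has a single nonterminal on the left-hand side, this is a Type-2 (context-free) grammar in the classification above.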