1.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope. Mathematicians seek out patterns and use them to formulate new conjectures, and they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as the Queen of the Sciences. Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind; there is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω, while the modern Greek equivalent is μαθαίνω, both meaning "to learn". In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.

2.
Graph theory
–
In mathematics, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called arcs or lines). Graphs are one of the prime objects of study in discrete mathematics. Refer to the glossary of graph theory for basic definitions; the following are some of the more basic ways of defining graphs and related mathematical structures. To avoid ambiguity, this type of graph may be described precisely as undirected and simple. Other senses of graph stem from different conceptions of the edge set. In one more generalized notion, V is a set together with a relation of incidence that associates with each edge two vertices. In another generalized notion, E is a multiset of unordered pairs of vertices; many authors call this type of object a multigraph or pseudograph. All of these variants and others are described more fully below. The vertices belonging to an edge are called the ends or end vertices of the edge; a vertex may exist in a graph and not belong to an edge. V and E are usually taken to be finite, and many of the well-known results are not true for infinite graphs because many of the arguments fail in the infinite case. The order of a graph is |V|, its number of vertices; the size of a graph is |E|, its number of edges. The degree or valency of a vertex is the number of edges that connect to it; for an edge {x, y}, graph theorists usually use the somewhat shorter notation xy. Graphs can be used to model many types of relations and processes in physical, biological, and social systems, and many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the term network is sometimes used to mean a graph in which attributes are associated with the nodes and/or edges. In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, etc.
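As a minimal sketch (the dictionary representation and names are illustrative, not from the text), an undirected simple graph can be stored as a mapping from each vertex to its neighbour set, from which the order, size, and vertex degrees defined above follow directly:

```python
# A simple undirected graph: each vertex maps to the set of its neighbours.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}

def order(g):
    """Order |V|: the number of vertices."""
    return len(g)

def size(g):
    """Size |E|: each undirected edge is stored twice, so halve the degree sum."""
    return sum(len(nbrs) for nbrs in g.values()) // 2

def degree(g, v):
    """Degree (valency) of v: the number of edges that connect to it."""
    return len(g[v])

print(order(graph), size(graph), degree(graph, "c"))  # 4 4 3
```

The halving in size() is the handshaking observation: summing all degrees counts every edge once from each end.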
For instance, the structure of a website can be represented by a directed graph in which the vertices represent web pages and the directed edges represent links from one page to another. A similar approach can be taken to problems in media, travel, biology, computer chip design, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science; the transformation of graphs is often formalized and represented by graph rewrite systems. Graph-theoretic methods, in various forms, have proven particularly useful in linguistics. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the principle of compositionality.

3.
Adjacency matrix
–
In graph theory and computer science, an adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph. In the special case of a finite simple graph, the adjacency matrix is a (0,1)-matrix with zeros on its diagonal; if the graph is undirected, the matrix is symmetric. The relationship between a graph and the eigenvalues and eigenvectors of its adjacency matrix is studied in spectral graph theory. The adjacency matrix should be distinguished from the incidence matrix for a graph. The diagonal elements of the matrix are all zero, since edges from a vertex to itself (loops) are not allowed in simple graphs; it is also sometimes useful in algebraic graph theory to replace the nonzero elements with algebraic variables. Loops may be counted either once or twice, as long as a consistent convention is followed; undirected graphs often use the latter convention of counting loops twice, whereas directed graphs typically use the former convention. For a bipartite graph, a smaller matrix B uniquely represents the graph; B is sometimes called the biadjacency matrix. Formally, let G be a bipartite graph with parts U = {u_1, ..., u_r} and V = {v_1, ..., v_s}. The biadjacency matrix is the r × s 0–1 matrix B in which b_{i,j} = 1 if and only if u_i and v_j are adjacent. If G is a multigraph or weighted graph, then the elements b_{i,j} are instead the number of edges between the vertices or the weight of the edge, respectively. An (a, b)-adjacency matrix A of a graph has A_{i,j} = a if (i, j) is an edge and b if it is not. The Seidel adjacency matrix is a (−1, 1, 0)-adjacency matrix; this matrix is used in studying strongly regular graphs and two-graphs. The distance matrix has in position (i, j) the distance between vertices v_i and v_j; the distance is the length of a shortest path connecting the vertices. Unless lengths of edges are explicitly provided, the length of a path is the number of edges in it. The distance matrix resembles a high power of the adjacency matrix. The convention followed here is that each edge adds 1 to the appropriate cell in the matrix, and each loop adds 2.
This allows the degree of a vertex to be found by taking the sum of the values in either its respective row or column in the adjacency matrix. In directed graphs, with the convention that entry (i, j) records an edge from vertex i to vertex j, the in-degree of a vertex can be computed by summing the entries of the corresponding column, and the out-degree by summing the entries of the corresponding row.
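A short sketch of the conventions described above, using an assumed example graph on vertices 0..3 with edges {0,1}, {0,2}, {1,2}, {2,3} (the edge list is illustrative, not from the text):

```python
# Build the adjacency matrix of a simple undirected graph:
# A[i][j] = 1 if and only if {i, j} is an edge.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1  # undirected, so the matrix comes out symmetric

# Zero diagonal: simple graphs have no loops.
assert all(A[i][i] == 0 for i in range(n))
# Symmetry of the undirected case.
assert all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

# The degree of each vertex is the sum of its row (equivalently, its column).
deg = [sum(row) for row in A]
print(deg)  # [2, 2, 3, 1]
```

For a directed graph one would drop the symmetric assignment and sum rows for out-degrees and columns for in-degrees.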

4.
Complement graph
–
In graph theory, the complement or inverse of a graph G is a graph H on the same vertices such that two distinct vertices of H are adjacent if and only if they are not adjacent in G. That is, to generate the complement of a graph, one fills in all the missing edges required to form a complete graph and removes all the edges that were previously there. It is not, however, the set complement of the graph; only the edges are complemented. Let G = (V, E) be a graph and let K consist of all 2-element subsets of V. Then H = (V, K \ E) is the complement of G, where K \ E is the complement of E in K. The complement is not defined for multigraphs. In graphs that allow self-loops, the complement of G may be defined by adding a self-loop to every vertex that does not have one in G, and otherwise using the same formula as above; this operation is, however, different from the one for simple graphs. Several graph-theoretic concepts are related to each other via complement graphs. The complement of an edgeless graph is a complete graph, and vice versa. Any induced subgraph of the complement graph of a graph G is the complement of the corresponding induced subgraph in G. An independent set in a graph is a clique in the complement graph; this is a special case of the previous two properties, as an independent set is an edgeless induced subgraph and a clique is a complete induced subgraph. The complement of every triangle-free graph is a claw-free graph, although the reverse is not true. The vertices of the Kneser graph are the k-subsets of an n-set, with edges between disjoint sets; its complement, with the edges between intersecting sets, is the Johnson graph. A self-complementary graph is a graph that is isomorphic to its own complement; examples include the four-vertex path graph and the five-vertex cycle graph. Several classes of graphs are self-complementary, in the sense that the complement of any graph in one of these classes is another graph in the same class.
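The formal definition above, H = (V, K \ E) where K is the set of all 2-element subsets of V, can be sketched directly (the example edge list is the four-vertex path and is illustrative):

```python
from itertools import combinations

def complement(vertices, edges):
    """Return the edge set K \\ E of the complement graph."""
    K = {frozenset(p) for p in combinations(vertices, 2)}  # all 2-subsets of V
    E = {frozenset(e) for e in edges}
    return K - E  # the missing edges of the complete graph

V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "d")]  # the four-vertex path a-b-c-d
H = complement(V, E)
print(sorted(tuple(sorted(e)) for e in H))  # [('a', 'c'), ('a', 'd'), ('b', 'd')]
```

Note that the result is again a three-edge path (b-d-a-c), illustrating the later remark that the four-vertex path graph is self-complementary.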
Perfect graphs are the graphs in which, for every induced subgraph, the chromatic number equals the size of the largest clique. The fact that the complement of a perfect graph is also perfect is the perfect graph theorem of László Lovász. Cographs are defined as the graphs that can be built up from single vertices by disjoint union and complementation operations; they form a self-complementary family of graphs, as the complement of any cograph is another cograph. An equivalent, self-complementary definition is that cographs are the graphs with no induced subgraph in the form of a four-vertex path. Another self-complementary class of graphs is the class of split graphs, the graphs in which the vertices can be partitioned into a clique and an independent set; the same partition gives an independent set and a clique in the complement graph. The threshold graphs are the graphs formed by repeatedly adding either an isolated vertex or a universal vertex; these two operations are complementary, and they generate a self-complementary class of graphs. Algorithms that simulate searches on the complement graph without constructing it explicitly can also be used to compute other properties concerning the connectivity of the complement graph.

5.
Multiset
–
In mathematics, a multiset is a generalization of the concept of a set that, unlike a set, allows multiple instances of the multiset's elements. For example, {a, a, b} and {a, b} are different multisets although they are the same set. However, order does not matter, so {a, a, b} and {a, b, a} are the same multiset. The multiplicity of an element is the number of instances of the element in a specific multiset. The use of multisets predates the word multiset by many centuries: Knuth attributes the first study of multisets to the Indian mathematician Bhāskarāchārya. Knuth also lists other names that were proposed or used for multisets, including list, bunch, bag, heap, sample, weighted set, collection, and suite. The number of times an element belongs to the multiset is the multiplicity of that member; the total number of elements in a multiset, including repeated memberships, is the cardinality of the multiset. For example, in the multiset {a, a, b, b, b, c} the multiplicities of the members a, b, and c are respectively 2, 3, and 1, and the cardinality is 6. To distinguish between sets and multisets, a notation that incorporates square brackets is sometimes used: the multiset {a, a, b} can be represented as [a, a, b]. In multisets, as in sets and in contrast to tuples, the order of elements is irrelevant: the multisets [a, a, b] and [a, b, a] are equal. Wayne Blizard traced multisets back to the very origin of numbers, arguing that "in ancient times, the number n was often represented by a collection of n strokes, tally marks, or units." This shows that people implicitly used multisets even before mathematics emerged, and that the need for this structure has always been so urgent that multisets were rediscovered several times and appeared in the literature under different names. For instance, they were referred to as bags by James Lyle Peterson in 1981; a multiset has also been called an aggregate, heap, bunch, sample, weighted set, occurrence set, and fireset.
Although multisets were implicitly utilized from ancient times, their explicit exploration happened much later. The first known study of multisets is attributed to the Indian mathematician Bhāskarāchārya circa 1150, who described permutations of multisets. The work of Marius Nizolius contains another early reference to the concept of multisets. Athanasius Kircher found the number of multiset permutations when one element can be repeated, and Jean Prestet published a rule for multiset permutations in 1675. John Wallis explained this rule in detail in 1685. In explicit form, multisets appeared in the work of Richard Dedekind; other mathematicians formalized multisets and began to study them as a precise mathematical object in the 20th century. One of the simplest and most natural examples is the multiset of prime factors of a number n; here the underlying set of elements is the set of prime divisors of n. For example, the number 120 has the prime factorization 120 = 2^3 · 3 · 5, which gives the multiset {2, 2, 2, 3, 5}. A related example is the multiset of solutions of an algebraic equation. A quadratic equation, for example, has two solutions; however, in some cases they are both the same number.
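The prime-factor example above can be sketched with Python's collections.Counter, which behaves as a multiset (the helper function name is an assumption for illustration):

```python
from collections import Counter

def prime_factor_multiset(n):
    """Return the multiset of prime factors of n as a Counter (trial division)."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1  # each division bumps the multiplicity of prime d
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1      # whatever remains is itself prime
    return factors

m = prime_factor_multiset(120)   # 120 = 2^3 * 3 * 5
print(sorted(m.elements()))      # [2, 2, 2, 3, 5]
print(m[2], sum(m.values()))     # multiplicity of 2 is 3; cardinality is 5
```

Here m[2] is the multiplicity of an element and sum(m.values()) the cardinality, matching the terms defined in the text.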

6.
Eigenvalue
–
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector whose direction does not change when that linear transformation is applied to it. This condition can be written as the equation T(v) = λv, where λ is a scalar known as the eigenvalue associated with the eigenvector v. There is a correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space to itself; for this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations. Geometrically, an eigenvector corresponding to a real eigenvalue points in a direction that is stretched by the transformation. If the eigenvalue is negative, the direction is reversed. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen for "proper", "inherent", "own", "individual", "special", "specific", "peculiar", or "characteristic". In essence, an eigenvector v of a linear transformation T is a non-zero vector that does not change direction when T is applied to it. Applying T to the eigenvector only scales the eigenvector by the scalar value λ; this condition can be written as the equation T(v) = λv, referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar; for example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. The Mona Lisa example pictured at right provides a simple illustration: each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping; the vectors pointing to each point in the original image are tilted right or left and made longer or shorter by the transformation.
Notice that points along the horizontal axis do not move at all when this transformation is applied; therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces. Alternatively, the transformation could take the form of an n-by-n matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix. The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace or characteristic space of T. If the set of eigenvectors of T forms a basis of the domain of T, then T is represented by a diagonal matrix with respect to that basis. Eigenvalues are often introduced in the context of linear algebra or matrix theory; historically, however, they arose in the study of quadratic forms. In the 18th century, Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.
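The shear-mapping observation above can be sketched numerically with NumPy (the specific 2-by-2 shear matrix is an assumed example of the transformation the text describes):

```python
import numpy as np

# A horizontal shear: the top of the image slides right,
# while the horizontal axis stays fixed.
T = np.array([[1.0, 1.0],
              [0.0, 1.0]])

v = np.array([1.0, 0.0])   # a vector along the horizontal axis
w = T @ v                  # applying T leaves its direction (and length) unchanged
print(w)                   # [1. 0.] -> eigenvector of T with eigenvalue 1

eigenvalues, eigenvectors = np.linalg.eig(T)
print(eigenvalues)         # both eigenvalues of the shear equal 1
```

Any vector with a non-zero vertical component, by contrast, is tilted by T and so is not an eigenvector of this mapping.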

7.
Two-graph
–
In mathematics, a two-graph is a set of triples chosen from a finite vertex set X, such that every quadruple from X contains an even number of triples of the two-graph. A regular two-graph has the property that every pair of vertices lies in the same number of triples of the two-graph. A two-graph is not a graph and should not be confused with other objects called 2-graphs in graph theory, such as 2-regular graphs. Given a simple graph G = (V, E), the set of triples of the vertex set V whose induced subgraph has an odd number of edges forms a two-graph on the set V. Every two-graph can be represented in this way; this example is referred to as the standard construction of a two-graph from a simple graph. As a more complex example, let T be a tree with edge set E; the set of all triples of E that are not contained in a path of T forms a two-graph on the set E. A two-graph is equivalent to a switching class of graphs and also to a switching class of signed complete graphs. Switching a graph with respect to a subset of its vertices reverses the adjacencies between the subset and its complement; the edges whose endpoints are both in the set, or both not in the set, are not changed. Graphs are switching equivalent if one can be obtained from the other by switching. An equivalence class of graphs under switching is called a switching class. Switching was introduced by van Lint & Seidel and developed by Seidel; it has been called graph switching or Seidel switching, partly to distinguish it from switching of signed graphs. Let Γ be a two-graph on the set X. For any element x of X, define a graph Γx with vertex set X having vertices y and z adjacent if and only if {x, y, z} is in Γ. In this graph, x will be an isolated vertex, and the two-graph is called the extension of Γx by x in design-theoretic language. In a given switching class of graphs of a regular two-graph, Γx is the unique graph having x as an isolated vertex; that is, the two-graph is the extension of Γx by x. In the first example above of a regular two-graph, Γx is a 5-cycle for any choice of x. To a graph G there corresponds a signed complete graph Σ on the same vertex set, whose edges are signed negative if in G and positive if not in G; the two-graph of G can also be defined as the set of triples of vertices that support a negative triangle in Σ.
Two signed complete graphs yield the same two-graph if and only if they are equivalent under switching. Switching of G and of Σ are related: switching the same vertices in both yields a graph H and its corresponding signed complete graph. The adjacency matrix of a two-graph is the adjacency matrix of the corresponding signed complete graph; thus it is symmetric and is zero on the diagonal. If G is the graph corresponding to the signed complete graph Σ, this matrix is called the Seidel matrix of G. The Seidel matrix has zero entries on the diagonal, −1 entries for adjacent vertices, and +1 entries for non-adjacent vertices.
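The Seidel matrix described above (0 on the diagonal, −1 for adjacent pairs, +1 for non-adjacent pairs) can be sketched directly; the four-vertex path used as input is an assumed example:

```python
def seidel_matrix(n, edges):
    """Seidel matrix of a simple graph on vertices 0..n-1:
    0 on the diagonal, -1 for adjacent pairs, +1 for non-adjacent pairs."""
    adjacent = {frozenset(e) for e in edges}
    return [[0 if i == j else (-1 if frozenset((i, j)) in adjacent else 1)
             for j in range(n)]
            for i in range(n)]

S = seidel_matrix(4, [(0, 1), (1, 2), (2, 3)])  # the four-vertex path
for row in S:
    print(row)
# [0, -1, 1, 1]
# [-1, 0, -1, 1]
# [1, -1, 0, -1]
# [1, 1, -1, 0]
```

Equivalently, S = J − I − 2A in terms of the ordinary adjacency matrix A, the all-ones matrix J, and the identity I.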

8.
Strongly regular graph
–
In graph theory, a strongly regular graph is defined as follows. Let G = (V, E) be a regular graph with v vertices and degree k. G is said to be strongly regular if there are also integers λ and μ such that every two adjacent vertices have λ common neighbours and every two non-adjacent vertices have μ common neighbours. A graph of this kind is sometimes said to be an srg(v, k, λ, μ). Strongly regular graphs were introduced by Raj Chandra Bose in 1963. The complement of an srg is also strongly regular. A strongly regular graph with μ > 0 is a graph with diameter 2. Pick any node as the root node, in Level 0. Then its k neighbour nodes lie in Level 1, and all other nodes lie in Level 2. Nodes in Level 1 are directly connected to the root, hence they must have λ other neighbours in common with the root, and these common neighbours must also be in Level 1. Since each node has degree k, there are k − λ − 1 edges remaining for each Level 1 node to connect to nodes in Level 2; therefore, there are k × (k − λ − 1) edges between Level 1 and Level 2. Nodes in Level 2 are not directly connected to the root, hence they must have μ common neighbours with the root, and these common neighbours must all be in Level 1. There are v − k − 1 nodes in Level 2, and each is connected to μ nodes in Level 1. Therefore the number of edges between Level 1 and Level 2 is (v − k − 1) × μ. Equating the two expressions for the edges between Level 1 and Level 2, the relation k(k − λ − 1) = (v − k − 1)μ follows. Let I denote the identity matrix and let J denote the matrix whose entries all equal 1. The adjacency matrix A of a strongly regular graph satisfies two equations. First, AJ = JA = kJ, which is a restatement of the vertex degree requirement. Second, A^2 + (μ − λ)A + (μ − k)I = μJ, where the first term gives the number of 2-step paths from each vertex to all vertices, the second term the 1-step paths, and the third term the 0-step paths. For the vertex pairs connected by an edge, the equation reduces to the number of such 2-step paths being equal to λ.
For the vertex pairs not directly connected by an edge, the equation reduces to the number of such 2-step paths being equal to μ; for the trivial self-pairs, the equation reduces to the degree being equal to k. Conversely, a graph which is not a complete or null graph and whose adjacency matrix satisfies both of the conditions is a strongly regular graph.
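As a small sanity-check sketch, the matrix identity A^2 + (μ − λ)A + (μ − k)I = μJ can be verified entrywise on the 5-cycle, which is an srg(5, 2, 0, 1): adjacent vertices share no neighbours, non-adjacent vertices share exactly one. (The choice of the 5-cycle as the test graph is illustrative, not from the text.)

```python
# The 5-cycle as an srg(5, 2, 0, 1).
n, k, lam, mu = 5, 2, 0, 1
# Adjacency matrix: vertex i is adjacent to i-1 and i+1 modulo n.
A = [[1 if (i - j) % n in (1, n - 1) else 0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

A2 = matmul(A, A)  # entry (i, j) counts 2-step paths from i to j

# Check A^2 + (mu - lam) A + (mu - k) I = mu J, entry by entry.
ok = all(A2[i][j] + (mu - lam) * A[i][j] + (mu - k) * (i == j) == mu
         for i in range(n) for j in range(n))
print(ok)  # True
```

On the diagonal the identity reduces to the degree k, on adjacent pairs to λ common neighbours, and on non-adjacent pairs to μ, exactly as the text explains.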

9.
Combinatorics
–
Combinatorics is a branch of mathematics concerning the study of finite or countable discrete structures. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general methods were developed. One of the oldest and most accessible parts of combinatorics is graph theory. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms. A mathematician who studies combinatorics is called a combinatorialist or a combinatorist. Basic combinatorial concepts and enumerative results appeared throughout the ancient world. The Greek historian Plutarch discusses an argument between Chrysippus and Hipparchus over a rather delicate enumerative problem, which was later shown to be related to the Schröder–Hipparchus numbers. In the Ostomachion, Archimedes considers a tiling puzzle. In the Middle Ages, combinatorics continued to be studied, largely outside of European civilization. The Indian mathematician Mahāvīra provided formulae for the number of permutations and combinations. Later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations. During the Renaissance, combinatorics developed together with the rest of mathematics and the sciences, and the works of Pascal, Newton, Jacob Bernoulli and Euler became foundational in the emerging field. In modern times, the works of J. J. Sylvester and Percy MacMahon helped lay the foundation for enumerative combinatorics. Graph theory also enjoyed an explosion of interest at the same time, especially in connection with the four color problem. In the second half of the 20th century, combinatorics enjoyed rapid growth; in part, the growth was spurred by new connections and applications to other fields, ranging from algebra to probability, from functional analysis to number theory, etc.
These connections blurred the boundaries between combinatorics and parts of mathematics and theoretical computer science, but at the same time led to a partial fragmentation of the field. Enumerative combinatorics is the most classical area of combinatorics and concentrates on counting the number of certain combinatorial objects. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise have a relatively simple combinatorial description; the Fibonacci numbers are the basic example of a problem in enumerative combinatorics. The twelvefold way provides a unified framework for counting permutations, combinations and partitions. Analytic combinatorics concerns the enumeration of combinatorial structures using tools from complex analysis. In contrast with enumerative combinatorics, which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae. Partition theory studies various enumeration and asymptotic problems related to integer partitions; originally a part of number theory and analysis, it is now considered a part of combinatorics or an independent field. It incorporates the bijective approach and various tools in analysis and analytic number theory. Graphs are basic objects in combinatorics.
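As a small enumerative sketch of the Fibonacci example mentioned above (the tiling interpretation is a standard illustration, not taken from this text): the number of ways to tile a 1 × n strip with unit squares and 1 × 2 dominoes follows the Fibonacci recurrence, because the last tile is either a square or a domino.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def tilings(n):
    """Count tilings of a 1 x n strip by squares and dominoes."""
    if n <= 1:
        return 1  # one way to tile an empty or unit-length strip
    # The last tile is a square (leaving n-1) or a domino (leaving n-2).
    return tilings(n - 1) + tilings(n - 2)

print([tilings(n) for n in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```

The resulting sequence is the Fibonacci numbers, a typical instance of the kind of counting problem enumerative combinatorics studies.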