1.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope. Mathematicians seek out patterns and use them to formulate new conjectures, and they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind; there is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω, while the modern Greek equivalent is μαθαίνω. In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.

2.
Graph theory
–
In mathematics, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called arcs or lines). Graphs are one of the prime objects of study in discrete mathematics. Refer to the glossary of graph theory for basic definitions in graph theory; the following are some of the more basic ways of defining graphs and related mathematical structures. To avoid ambiguity, this type of graph may be described precisely as undirected; other senses of "graph" stem from different conceptions of the edge set. In one more generalized notion, E is a set together with a relation of incidence that associates with each edge two vertices. In another generalized notion, E is a multiset of unordered pairs of vertices; many authors call this type of object a multigraph or pseudograph. All of these variants and others are described more fully below. The vertices belonging to an edge are called the ends or end vertices of the edge, and a vertex may exist in a graph and not belong to an edge. V and E are usually taken to be finite, and many of the well-known results are not true for infinite graphs because many of the arguments fail in the infinite case. The order of a graph is |V|, its number of vertices; the size of a graph is |E|, its number of edges. The degree or valency of a vertex is the number of edges that connect to it. For an edge {x, y}, graph theorists usually use the somewhat shorter notation xy. Graphs can be used to model many types of relations and processes in physical, biological, social and information systems, and many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the term network is sometimes used to mean a graph in which attributes are associated with the nodes and/or edges. In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, etc.
For instance, the structure of a website can be represented by a directed graph in which the vertices represent web pages and directed edges represent links from one page to another. A similar approach can be taken to problems in media, travel, biology, computer chip design, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science; the transformation of graphs is often formalized and represented by graph rewrite systems. Graph-theoretic methods, in various forms, have proven particularly useful in linguistics. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the principle of compositionality.
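The basic quantities defined above (order, size, degree) can be sketched with a dict of adjacency sets; the vertex and edge names below are illustrative only.

```python
# Minimal sketch of an undirected graph as a dict of adjacency sets.
edges = {("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")}
vertices = {v for e in edges for v in e}
adjacency = {v: set() for v in vertices}
for x, y in edges:
    adjacency[x].add(y)
    adjacency[y].add(x)

order = len(vertices)                       # |V|, the number of vertices
size = len(edges)                           # |E|, the number of edges
degree = {v: len(adjacency[v]) for v in vertices}

print(order, size, degree["c"])             # 4 4 3
```

Vertex "c" has degree 3 because it is an endpoint of three of the four edges.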

3.
Graph (discrete mathematics)
–
In mathematics, and more specifically in graph theory, a graph is a structure amounting to a set of objects in which some pairs of the objects are in some sense "related". The objects correspond to mathematical abstractions called vertices, and each of the related pairs of vertices is called an edge. Typically, a graph is depicted in diagrammatic form as a set of dots for the vertices, joined by lines or curves for the edges. Graphs are one of the objects of study in discrete mathematics. The edges may be directed or undirected. For example, if the vertices represent people at a party and an edge connects two people who shake hands, then this graph is undirected, because shaking hands is mutual. In contrast, if any edge from a person A to a person B corresponds to A's admiring B, then this graph is directed, because admiration is not necessarily reciprocated. The former type of graph is called an undirected graph and the edges are called undirected edges, while the latter type of graph is called a directed graph. Graphs are the subject studied by graph theory. The word "graph" was first used in this sense by J. J. Sylvester in 1878. The following are some of the more basic ways of defining graphs and related mathematical structures. In one very common sense of the term, a graph is an ordered pair G = (V, E) comprising a set V of vertices (nodes or points) together with a set E of edges (arcs or lines), which are 2-element subsets of V. To avoid ambiguity, this type of graph may be described precisely as undirected; other senses of "graph" stem from different conceptions of the edge set. In one more general conception, E is a set together with a relation of incidence that associates with each edge two vertices. In another generalized notion, E is a multiset of unordered pairs of vertices; many authors call these types of object multigraphs or pseudographs. All of these variants and others are described more fully below. The vertices belonging to an edge are called the ends or end vertices of the edge, and a vertex may exist in a graph and not belong to an edge. V and E are usually taken to be finite, and many of the well-known results are not true for infinite graphs because many of the arguments fail in the infinite case.
Moreover, V is often assumed to be non-empty, but E is allowed to be the empty set. The order of a graph is |V|, its number of vertices; the size of a graph is |E|, its number of edges. The degree or valency of a vertex is the number of edges that connect to it, where an edge that connects the vertex to itself (a loop) is counted twice. For an edge {x, y}, graph theorists usually use the shorter notation xy. As stated above, in different contexts it may be useful to refine the term graph with different degrees of generality. Whenever it is necessary to draw a strict distinction, the following terms are used.
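The loop convention above can be sketched directly: counting edge endpoints at a vertex makes a loop contribute 2 to the degree. The edge list is illustrative.

```python
# Degree of a vertex, with a loop ("w", "w") counted twice.
edges = [("u", "v"), ("v", "w"), ("w", "w")]

def degree(vertex, edges):
    """Count edge endpoints at `vertex`, so a loop contributes 2."""
    return sum((x == vertex) + (y == vertex) for x, y in edges)

print(degree("v", edges), degree("w", edges))  # 2 3
```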

4.
Adjacency matrix
–
In graph theory and computer science, an adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph. In the special case of a finite simple graph, the adjacency matrix is a (0,1)-matrix with zeros on its diagonal; if the graph is undirected, the matrix is symmetric. The relationship between a graph and the eigenvalues and eigenvectors of its adjacency matrix is studied in spectral graph theory. The adjacency matrix should be distinguished from the incidence matrix for a graph. The diagonal elements of the matrix are all zero, since edges from a vertex to itself (loops) are not allowed in simple graphs. It is also sometimes useful in algebraic graph theory to replace the nonzero elements with algebraic variables. Loops may be counted either once or twice, as long as a consistent convention is followed; undirected graphs often use the latter convention of counting loops twice, whereas directed graphs typically use the former convention. For a bipartite graph, a smaller matrix B uniquely represents the graph; B is sometimes called the biadjacency matrix. Formally, let G = (U, V, E) be a bipartite graph with parts U = {u1, …, ur} and V = {v1, …, vs}. The biadjacency matrix is the r × s 0–1 matrix B in which bi,j = 1 if and only if (ui, vj) ∈ E. If G is a multigraph or weighted graph, then the elements bi,j are taken to be the number of edges between the vertices or the weight of the edge, respectively. An (a, b)-adjacency matrix A of a graph has Ai,j = a if (i, j) is an edge, and b if it is not. The Seidel adjacency matrix is a (−1, 1, 0)-adjacency matrix; this matrix is used in studying strongly regular graphs and two-graphs. The distance matrix has in position (i, j) the distance between vertices vi and vj, where the distance is the length of a shortest path connecting the vertices. Unless lengths of edges are explicitly provided, the length of a path is the number of edges in it. The distance matrix resembles a high power of the adjacency matrix. The convention followed here is that each edge adds 1 to the appropriate cell in the matrix, and each loop adds 2.
This allows the degree of a vertex to be found by taking the sum of the values in either its respective row or column in the adjacency matrix. In directed graphs (under the convention that Ai,j = 1 means an edge from i to j), the out-degree of a vertex can be computed by summing the entries of the corresponding row, and the in-degree by summing the entries of the corresponding column.
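The row-sum rule above can be sketched with the adjacency matrix of a small graph; the 4-cycle below is an illustrative choice.

```python
# Degrees from an adjacency matrix: for an undirected graph each row
# (or column) sum gives the degree of the corresponding vertex.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])     # adjacency matrix of the cycle C4

assert np.array_equal(A, A.T)    # undirected, hence symmetric
degrees = A.sum(axis=1)          # row sums
print(degrees.tolist())          # [2, 2, 2, 2]
```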

5.
Complement graph
–
In graph theory, the complement or inverse of a graph G is a graph H on the same vertices such that two distinct vertices of H are adjacent if and only if they are not adjacent in G. That is, to generate the complement of a graph, one fills in all the missing edges required to form a complete graph and removes all the edges that were previously there. It is not, however, the set complement of the graph; only the edges are complemented. Let G = (V, E) be a simple graph and let K consist of all 2-element subsets of V. Then H = (V, K \ E) is the complement of G, where K \ E is the relative complement of E in K. The complement is not defined for multigraphs. In graphs that allow self-loops, the complement of G may be defined by adding a self-loop to every vertex that does not have one in G, and otherwise using the same formula as above; this operation is, however, different from the one for simple graphs. Several graph-theoretic concepts are related to each other via complement graphs. The complement of an edgeless graph is a complete graph, and vice versa. Any induced subgraph of the complement graph of a graph G is the complement of the corresponding induced subgraph in G. An independent set in a graph is a clique in the complement graph; this is a special case of the previous two properties, as an independent set is an edgeless induced subgraph and a clique is a complete induced subgraph. The complement of every triangle-free graph is a claw-free graph, although the reverse is not true. The vertices of the Kneser graph KG(n, k) are the k-subsets of an n-set, with edges joining disjoint sets; in its complement the edges join intersecting sets, and for k = 2 this complement is the Johnson graph J(n, 2). A self-complementary graph is a graph that is isomorphic to its own complement; examples include the four-vertex path graph and the five-vertex cycle graph. Several classes of graphs are self-complementary, in the sense that the complement of any graph in one of these classes is another graph in the same class.
Perfect graphs are the graphs in which, for every induced subgraph, the chromatic number equals the size of the largest clique. The fact that the complement of a perfect graph is also perfect is the perfect graph theorem of László Lovász. Cographs are defined as the graphs that can be built up from single vertices by disjoint union and complementation operations. They form a self-complementary family of graphs: the complement of any cograph is another, possibly different, cograph. Another, self-complementary, definition is that they are the graphs with no induced subgraph in the form of a four-vertex path. Another self-complementary class of graphs is the class of split graphs, the graphs in which the vertices can be partitioned into a clique and an independent set; the same partition gives an independent set and a clique in the complement graph. The threshold graphs are the graphs formed by repeatedly adding either an isolated vertex or a universal vertex; these two operations are complementary, and they generate a self-complementary class of graphs. It is also possible to use these simulations to compute other properties concerning the connectivity of the complement graph.
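The definition H = (V, K \ E) above translates directly into code; the four-vertex path below is an illustrative choice, and, as noted above, it is self-complementary.

```python
# Complement of a simple graph: K is the set of all 2-element subsets of V,
# and the complement's edge set is K \ E.
from itertools import combinations

V = {1, 2, 3, 4}
E = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]}    # path graph 1-2-3-4

K = {frozenset(p) for p in combinations(V, 2)}           # all 2-element subsets
complement_E = K - E                                     # K \ E

print(sorted(tuple(sorted(e)) for e in complement_E))    # [(1, 3), (1, 4), (2, 4)]
```

The complement's edges form the path 3–1–4–2, so the result is again a four-vertex path.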

6.
Multiset
–
In mathematics, a multiset (or bag) is a generalization of the concept of a set that, unlike a set, allows multiple instances of its elements. For example, {a, a, b} and {a, b} are different multisets although they are the same set. However, order does not matter, so {a, a, b} and {a, b, a} are the same multiset. The multiplicity of an element is the number of instances of the element in a specific multiset. The use of multisets predates the word "multiset" by many centuries: Knuth attributes the first study of multisets to the Indian mathematician Bhāskarāchārya. Knuth also lists other names that were proposed or used for multisets, including list, bunch, bag, heap, sample, weighted set, collection, and suite. The number of times an element belongs to the multiset is the multiplicity of that member; the total number of elements in a multiset, including repeated memberships, is the cardinality of the multiset. For example, in the multiset {a, a, b, b, b, c} the multiplicities of the members a, b, and c are respectively 2, 3, and 1, and the cardinality of the multiset is 6. To distinguish between sets and multisets, a notation that incorporates square brackets is sometimes used: the multiset {a, a, b} can be represented as [a, a, b]. In multisets, as in sets and in contrast to tuples, the order of elements is irrelevant: the multisets {a, b} and {b, a} are equal. Wayne Blizard traced multisets back to the very origin of numbers, arguing that "in ancient times, the number n was often represented by a collection of n strokes, tally marks, or units". This shows that people implicitly used multisets even before mathematics emerged, and that the need for this structure has been so persistent that multisets have been rediscovered several times and appeared in literature under different names. For instance, they were referred to as bags by James Lyle Peterson in 1981, and a multiset has also been called an aggregate, heap, bunch, sample, weighted set, occurrence set, and fireset.
Although multisets were implicitly utilized from ancient times, their explicit exploration happened much later. The first known study of multisets is attributed to the Indian mathematician Bhāskarāchārya circa 1150, who described permutations of multisets. The work of Marius Nizolius contains another early reference to the concept of multisets. Athanasius Kircher found the number of multiset permutations when one element can be repeated, Jean Prestet published a rule for multiset permutations in 1675, and John Wallis explained this rule in detail in 1685. In explicit form, multisets appeared in the work of Richard Dedekind; other mathematicians formalized multisets and began to study them as a precise mathematical object in the 20th century. One of the simplest and most natural examples is the multiset of prime factors of a number n; here the underlying set of elements is the set of prime divisors of n. For example, the number 120 has the prime factorization 120 = 2³ · 3¹ · 5¹, which gives the multiset {2, 2, 2, 3, 5}. A related example is the multiset of solutions of an algebraic equation: a quadratic equation, for example, has two solutions, although in some cases they are both the same number.
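The prime-factor multiset of 120 discussed above can be sketched with the standard library's collections.Counter, a common way to model multisets in Python.

```python
# A multiset of prime factors, modeled as a Counter (element -> multiplicity).
from collections import Counter

def prime_factor_multiset(n):
    """Return the multiset of prime factors of n, with multiplicities."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

m = prime_factor_multiset(120)    # 120 = 2^3 * 3 * 5
print(dict(m))                    # {2: 3, 3: 1, 5: 1}
print(sum(m.values()))            # cardinality, counting repeats: 5
```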

7.
Eigenvalues and eigenvectors
–
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector whose direction does not change when that linear transformation is applied to it. This condition can be written as the equation T(v) = λv, where v is the eigenvector and λ is a scalar known as the eigenvalue associated with it. There is a correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space to itself; for this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations. Geometrically, an eigenvector corresponding to a real eigenvalue points in a direction that is stretched by the transformation, and if the eigenvalue is negative, the direction is reversed. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen for "proper", "inherent", "own", "individual", "special", "specific", "peculiar", or "characteristic". In essence, an eigenvector v of a linear transformation T is a non-zero vector that does not change direction when T is applied to it; applying T to the eigenvector only scales the eigenvector by the scalar value λ. This condition can be written as the equation T(v) = λv, referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar; for example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. The Mona Lisa example pictured at right provides a simple illustration: each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping; the vectors pointing to each point in the original image are therefore tilted right or left and made longer or shorter by the transformation.
Notice that points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. Alternatively, the linear transformation could take the form of an n-by-n matrix, in which case the eigenvectors are n-by-1 column vectors. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix. The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace or characteristic space of T. If the set of eigenvectors of T forms a basis of the domain of T, then T is diagonalizable. Eigenvalues are often introduced in the context of linear algebra or matrix theory; historically, however, they arose in the study of quadratic forms and differential equations. In the 18th century, Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.
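The shear-mapping observation above can be checked numerically: for a horizontal shear matrix, a vector pointing directly to the right keeps its direction, and both eigenvalues equal one. The shear amount 0.5 is an arbitrary illustrative choice.

```python
# Eigenvectors of a horizontal shear: vectors along the horizontal axis
# are unchanged, so they are eigenvectors with eigenvalue 1.
import numpy as np

S = np.array([[1.0, 0.5],
              [0.0, 1.0]])       # horizontal shear mapping

v = np.array([1.0, 0.0])         # points directly to the right
print(S @ v)                     # [1. 0.] -- unchanged, so v is an eigenvector

eigenvalues = np.linalg.eigvals(S)
print(eigenvalues)               # both eigenvalues equal 1
```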

8.
J. H. van Lint
–
Jacobus Hendricus van Lint was a Dutch mathematician, professor at the Eindhoven University of Technology, of which he was rector magnificus from 1991 till 1996. He gained his Ph.D. from Utrecht University in 1957 under the supervision of Fred van der Blij, and he was professor of mathematics at Eindhoven University of Technology from 1959 to 1997, having been appointed at the age of 26. His field of research was initially number theory, but he worked mainly in combinatorics. Van Lint was honored with a great number of awards. His books include Introduction to Coding Theory, Springer, Graduate Texts in Mathematics, 1982; with Peter Cameron, Designs, Graphs, Codes and their Links, London Mathematical Society Lecture Notes, Cambridge University Press, 1980; with Richard M. Wilson, A Course in Combinatorics, Cambridge University Press, 1992; and with Gerard van der Geer, Introduction to Coding Theory and Algebraic Geometry, Birkhäuser, 1988. See also his personal web site and O'Connor, John J.; Robertson, Edmund F., "Jacobus Hendricus van Lint", MacTutor History of Mathematics archive, University of St Andrews.

9.
Two-graph
–
In mathematics, a two-graph is a set of (unordered) triples chosen from a finite vertex set X, such that every (unordered) quadruple from X contains an even number of triples of the two-graph. A regular two-graph has the property that every pair of vertices lies in the same number of triples of the two-graph. A two-graph is not a graph and should not be confused with other objects called 2-graphs in graph theory, such as 2-regular graphs. Given a simple graph G = (V, E), the set of triples of the vertex set V whose induced subgraph has an odd number of edges forms a two-graph on the set V. Every two-graph can be represented in this way; this example is referred to as the standard construction of a two-graph from a simple graph. As a more complex example, let T be a tree with edge set E; the set of all triples of E that are not contained in a path of T forms a two-graph on the set E. A two-graph is equivalent to a switching class of graphs and also to a switching class of signed complete graphs. Switching a set of vertices reverses the adjacency of each pair of vertices with one member in the set and the other not; the edges whose endpoints are both in the set, or both not in the set, are not changed. Graphs are switching equivalent if one can be obtained from the other by switching. An equivalence class of graphs under switching is called a switching class. Switching was introduced by van Lint & Seidel and developed by Seidel; it has been called graph switching or Seidel switching, partly to distinguish it from switching of signed graphs. Let Γ be a two-graph on the set X. For any element x of X, define a graph Γx with vertex set X having vertices y and z adjacent if and only if {x, y, z} is in Γ. In this graph, x will be an isolated vertex; in design-theoretic language, the two-graph is the extension of Γx by x. In the first example above of a regular two-graph, Γx is a 5-cycle for any choice of x. Given a graph G, the corresponding signed complete graph Σ on the same vertex set has its edges signed negative if in G and positive if not in G; the two-graph of G can also be defined as the set of triples of vertices that support a negative triangle in Σ.
Two signed complete graphs yield the same two-graph if and only if they are equivalent under switching. Switching of G and of Σ are related: switching the same vertices in both yields a graph H and its corresponding signed complete graph. The adjacency matrix of a two-graph is the Seidel adjacency matrix of the corresponding signed complete graph; thus it is symmetric, is zero on the diagonal, and has entries ±1 off the diagonal. If G is the graph corresponding to the signed complete graph Σ, the Seidel matrix of G has zero entries on the diagonal, −1 entries for adjacent vertices, and +1 entries for non-adjacent vertices.
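The standard construction above can be sketched and checked directly: take the triples whose induced subgraph has an odd number of edges, then verify that every quadruple contains an even number of them. The 5-cycle is the illustrative input (it yields a regular two-graph, as mentioned above).

```python
# Standard construction of a two-graph from a simple graph, plus a check
# of the defining parity condition on every quadruple of vertices.
from itertools import combinations

V = range(5)
E = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]}  # 5-cycle

def induced_edges(triple):
    """Number of edges of the subgraph induced by a triple of vertices."""
    return sum(frozenset(p) in E for p in combinations(triple, 2))

two_graph = {t for t in combinations(V, 3) if induced_edges(t) % 2 == 1}

# Every 4-subset of V must contain an even number of triples of the two-graph.
ok = all(sum(set(t) <= set(q) for t in two_graph) % 2 == 0
         for q in combinations(V, 4))
print(ok)  # True
```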

10.
Strongly regular graph
–
In graph theory, a strongly regular graph is defined as follows. Let G = (V, E) be a regular graph with v vertices and degree k. G is said to be strongly regular if there are also integers λ and μ such that every two adjacent vertices have λ common neighbours and every two non-adjacent vertices have μ common neighbours. A graph of this kind is sometimes said to be an srg(v, k, λ, μ). Strongly regular graphs were introduced by Raj Chandra Bose in 1963. The complement of an srg is also strongly regular. A strongly regular graph with μ > 0 is a graph with diameter 2. To see the basic relation among the parameters, pick any node as the root, in Level 0. Its k neighbour nodes then lie in Level 1, and all other nodes lie in Level 2. Nodes in Level 1 are directly connected to the root, hence they must have λ other neighbours in common with the root, and these common neighbours must also be in Level 1. Since each node has degree k, there are k − λ − 1 edges remaining for each Level 1 node to connect to nodes in Level 2; therefore, there are k × (k − λ − 1) edges between Level 1 and Level 2. Nodes in Level 2 are not directly connected to the root, hence they must have μ common neighbours with the root, and these common neighbours must all be in Level 1. There are v − k − 1 nodes in Level 2, and each is connected to μ nodes in Level 1, so the number of edges between Level 1 and Level 2 is (v − k − 1) × μ. Equating the two expressions for the edges between Level 1 and Level 2, the relation k(k − λ − 1) = (v − k − 1)μ follows. Let I denote the identity matrix and let J denote the matrix whose entries all equal 1. The adjacency matrix A of a strongly regular graph satisfies two equations. First, AJ = JA = kJ, which is a restatement of the vertex degree requirement; incidentally, this shows that k is an eigenvalue of the adjacency matrix, with the all-ones vector as eigenvector. Second, A² + (μ − λ)A + (μ − k)I = μJ: the first term gives the number of 2-step paths from each vertex to all vertices, the second term the 1-step paths, and the third term the 0-step paths. For the vertex pairs directly connected by an edge, the equation reduces to the number of such 2-step paths being equal to λ.
For the vertex pairs not directly connected by an edge, the equation reduces to the number of such 2-step paths being equal to μ, and for the trivial self-pairs the equation reduces to the degree being equal to k. Conversely, a graph which is not a complete or null graph and whose adjacency matrix satisfies both of these conditions is a strongly regular graph.
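The matrix equation above can be verified numerically on a well-known example, the Petersen graph, which is an srg(10, 3, 0, 1); it is built here as the Kneser graph on 2-subsets of a 5-set.

```python
# Verify A^2 + (mu - lambda) A + (mu - k) I = mu J for the Petersen graph.
from itertools import combinations
import numpy as np

# Petersen graph: vertices are 2-subsets of {0..4}, adjacent when disjoint.
vertices = [frozenset(p) for p in combinations(range(5), 2)]
A = np.array([[1 if not u & w else 0 for w in vertices] for u in vertices])

v, k, lam, mu = 10, 3, 0, 1
I = np.eye(v, dtype=int)
J = np.ones((v, v), dtype=int)

print(np.array_equal(A @ A + (mu - lam) * A + (mu - k) * I, mu * J))  # True
```

For these parameters the equation reads A² + A − 2I = J, which the check confirms entrywise.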

11.
Matrix (mathematics)
–
In mathematics, a matrix is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns; for example, a matrix with two rows and three columns has dimensions 2 × 3. The individual items in an m × n matrix A, often denoted by ai,j where 1 ≤ i ≤ m and 1 ≤ j ≤ n, are called its elements or entries. Provided that they have the same size, two matrices can be added or subtracted element by element. The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second. Any matrix can be multiplied element-wise by a scalar from its associated field. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x; the product of two matrices is a matrix that represents the composition of the two corresponding linear transformations. Another application of matrices is in the solution of systems of linear equations. If the matrix is square, it is possible to deduce some of its properties by computing its determinant; for example, a square matrix has an inverse if and only if its determinant is not zero. Insight into the geometry of a linear transformation is obtainable from the matrix's eigenvalues. Applications of matrices are found in most scientific fields. In computer graphics, they are used to manipulate 3D models and project them onto a 2-dimensional screen. Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions, and matrices are used in economics to describe systems of economic relationships. A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations. Matrix decomposition methods simplify computations, both theoretically and practically, and algorithms that are tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, expedite computations in the finite element method and other applications.
Infinite matrices occur in planetary theory and in atomic theory; a simple example of an infinite matrix is the matrix representing the derivative operator, which acts on the Taylor series of a function. A matrix is a rectangular array of numbers or other mathematical objects for which operations such as addition and multiplication are defined. Most commonly, a matrix over a field F is a rectangular array of scalars, each of which is a member of F. Most of this article focuses on real and complex matrices, that is, matrices whose elements are real numbers or complex numbers, respectively; more general types of entries are discussed below.
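The multiplication rule stated above can be sketched with small arrays: an m × n matrix times an n × p matrix is defined exactly when the inner dimensions match, and the product is m × p. The entries below are illustrative.

```python
# Matrix multiplication: a 2x3 matrix times a 3x2 matrix gives a 2x2 matrix.
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])       # 2 x 3
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])          # 3 x 2

C = A @ B                       # defined: 3 columns of A match 3 rows of B
print(C.shape)                  # (2, 2)
print(C.tolist())               # [[4, 5], [10, 11]]
```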

12.
Block matrix
–
In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices. Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned. Block matrix algebra arises in general from biproducts in categories of matrices. For example, a 4 × 4 matrix P can be partitioned into four 2 × 2 blocks P11, P12, P21, P22, and the partitioned matrix can then be written in terms of these blocks. It is possible to use a block partitioned matrix product that involves only algebra on submatrices of the factors; the partitioning of the factors is not arbitrary, however, and must be conformable. The blocks of the resulting matrix C are calculated by multiplying: C αβ = ∑γ=1..s A αγ B γβ, or, using the Einstein notation that implicitly sums over repeated indices, C αβ = A αγ B γβ. If a matrix is partitioned into four blocks, it can be inverted blockwise, where the blocks A, B, C and D have arbitrary size. A block diagonal matrix is a square block matrix whose main-diagonal blocks are square matrices and whose off-diagonal blocks are zero matrices. A block diagonal matrix A has the form A = diag(A1, …, An), where each Ak is a square matrix; in other words, it is the direct sum of A1, …, An. It can also be indicated as A1 ⊕ A2 ⊕ … ⊕ An or diag(A1, A2, …, An). Any square matrix can trivially be considered a block diagonal matrix with only one block. For the determinant and trace, the following properties hold: det A = det A1 × … × det An, and tr A = tr A1 + ⋯ + tr An. The inverse of a block diagonal matrix is another block diagonal matrix, composed of the inverse of each block. The eigenvalues and eigenvectors of A are simply those of A1, A2, …, and An combined. A block tridiagonal matrix is essentially a tridiagonal matrix but has submatrices in places of scalars: it has square sub-matrices Ak, Bk and Ck on the lower, main, and upper block diagonals. Block tridiagonal matrices are often encountered in numerical solutions of engineering problems.
Optimized numerical methods for LU factorization are available, and hence efficient solution algorithms exist for equation systems with a block tridiagonal matrix as coefficient matrix. The Thomas algorithm, used for the efficient solution of equation systems involving a tridiagonal matrix, can also be applied using matrix operations to block tridiagonal matrices. A block Toeplitz matrix is another special block matrix, which contains blocks that are repeated down the diagonals of the matrix; the individual block matrix elements Aij must also be Toeplitz matrices. For any arbitrary matrices A and B, the direct sum of A and B, denoted by A ⊕ B, is defined as the block diagonal matrix with A and B as its diagonal blocks. This operation generalizes naturally to arbitrarily dimensioned arrays. Note that any element in the direct sum of two vector spaces of matrices can be represented as a direct sum of two matrices. In linear algebra terms, the use of a block matrix corresponds to having a linear mapping thought of in terms of corresponding bunches of basis vectors, and that again matches the idea of having distinguished direct sum decompositions of the domain and range.
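The direct sum and the block diagonal determinant property above can be sketched together; the entries below are illustrative.

```python
# Direct sum A ⊕ B as a block diagonal matrix [[A, 0], [0, B]], with a
# check of the property det(A ⊕ B) = det(A) * det(B).
import numpy as np

def direct_sum(A, B):
    """Return the block diagonal matrix with A and B as diagonal blocks."""
    m, n = A.shape
    p, q = B.shape
    out = np.zeros((m + p, n + q))
    out[:m, :n] = A
    out[m:, n:] = B
    return out

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
D = direct_sum(A, B)

print(np.isclose(np.linalg.det(D), np.linalg.det(A) * np.linalg.det(B)))  # True
```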

13.
Centrosymmetric matrix
–
In mathematics, especially in linear algebra and matrix theory, a centrosymmetric matrix is a matrix which is symmetric about its center. More precisely, an n × n matrix A is centrosymmetric when its entries satisfy Ai,j = An−i+1,n−j+1 for 1 ≤ i, j ≤ n. If J denotes the n × n matrix with 1 on the counterdiagonal and 0 elsewhere, then a matrix A is centrosymmetric if and only if AJ = JA. The matrix J is sometimes referred to as the exchange matrix. All 2 × 2 centrosymmetric matrices have the form [a b; b a], and all 3 × 3 centrosymmetric matrices have the form [a b c; d e d; c b a]. If A and B are centrosymmetric matrices over a given field K, then so are A + B and cA for any scalar c in K. In addition, the matrix product AB is centrosymmetric, since JAB = AJB = ABJ. Since the identity matrix is also centrosymmetric, it follows that the set of n × n centrosymmetric matrices over K is a subalgebra of the associative algebra of all n × n matrices. An n × n matrix A is said to be skew-centrosymmetric if its entries satisfy Ai,j = −An−i+1,n−j+1; equivalently, A is skew-centrosymmetric if AJ = −JA, where J is the exchange matrix defined above. The centrosymmetric relation AJ = JA lends itself to a natural generalization, where J is replaced with an involutory matrix K. Symmetric centrosymmetric matrices are sometimes called bisymmetric matrices, and a similar result holds for Hermitian centrosymmetric and skew-centrosymmetric matrices.
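The characterization above (A is centrosymmetric iff AJ = JA for the exchange matrix J) can be checked on a small example; the 3 × 3 matrix below follows the [a b c; d e d; c b a] pattern.

```python
# Centrosymmetric check: AJ = JA, where J is the exchange matrix
# (1s on the counterdiagonal, 0 elsewhere).
import numpy as np

n = 3
J = np.fliplr(np.eye(n, dtype=int))               # exchange matrix

A = np.array([[1, 2, 3],
              [4, 5, 4],
              [3, 2, 1]])                          # symmetric about its center

print(np.array_equal(A @ J, J @ A))                # True
print(np.array_equal(A, np.flipud(np.fliplr(A))))  # equivalent entrywise condition
```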

14.
Conference matrix
–
In mathematics, a conference matrix is a square matrix C with 0 on the diagonal and +1 and −1 off the diagonal, such that CTC is a multiple of the identity matrix I. Thus, if the matrix has order n, CTC = (n−1)I. Some authors use a more general definition, which requires there to be a single 0 in each row and column but not necessarily on the diagonal. Conference matrices first arose in connection with a problem in telephony; they were first described by Vitold Belevitch, who also gave them their name. Belevitch was interested in constructing ideal telephone conference networks from ideal transformers and discovered that such networks were represented by conference matrices. Other applications are in statistics, and another is in elliptic geometry. For n > 1, there are two kinds of conference matrix. Let us normalize C by first rearranging the rows so that all the zeros are on the diagonal, and then negating any row or column whose first entry is negative. Thus, a normalized conference matrix has all 1s in its first row and column, except for a 0 in the top left corner. Let S be the matrix that remains when the first row and column are removed. Then either n is evenly even (divisible by 4) and S is antisymmetric, or n is oddly even (congruent to 2 mod 4) and S is symmetric. In the symmetric case, n will always be the sum of two squares if n − 1 is a prime power. Given a symmetric conference matrix, the matrix S can be viewed as the Seidel adjacency matrix of a graph. The graph has n − 1 vertices, corresponding to the rows and columns of S, and it is strongly regular, of the type called a conference graph. The existence of conference matrices of orders n allowed by these restrictions is known only for some values of n; order 66 seems to be an open problem. The conference matrix of order 6 is essentially unique: all other conference matrices of order 6 are obtained from it by flipping the signs of some rows and/or columns. Antisymmetric conference matrices can also be produced by the Paley construction. Let q be a prime power with residue 3 mod 4.
Then there is a Paley digraph of order q which leads to an antisymmetric conference matrix of order n = q + 1. The matrix S is obtained by taking the q × q matrix that has a +1 in position (i, j) and a −1 in position (j, i) if there is an arc of the digraph from i to j, and zeros on the diagonal. Then C, constructed as above from S but with the first row all negative, is an antisymmetric conference matrix. This construction solves only a part of the problem of deciding for which evenly even numbers n there exist antisymmetric conference matrices of order n. Under the more general definition mentioned above, the zero element is no longer required to lie on the diagonal
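The Paley construction above can be sketched numerically; here we use q = 7 and border with +1s in the first row (the normalized form, rather than the all-negative variant the text mentions), then verify the defining property CTC = (n−1)I:

```python
import numpy as np

q = 7  # a prime with q ≡ 3 (mod 4); example choice is ours

def chi(a, q):
    """Quadratic-residue character mod prime q: 0, +1 or -1 (Euler's criterion)."""
    a %= q
    if a == 0:
        return 0
    return 1 if pow(a, (q - 1) // 2, q) == 1 else -1

# Core q x q matrix S[i, j] = chi(i - j); antisymmetric since chi(-1) = -1
S = np.array([[chi(i - j, q) for j in range(q)] for i in range(q)])
assert np.array_equal(S.T, -S)

# Border S with a row of +1s and a column of -1s to obtain C of order n = q + 1
n = q + 1
C = np.zeros((n, n), dtype=int)
C[0, 1:] = 1
C[1:, 0] = -1
C[1:, 1:] = S

# Defining property of a conference matrix of order n
assert np.array_equal(C.T @ C, (n - 1) * np.eye(n, dtype=int))
```

The check succeeds because the rows and columns of S each sum to zero and S·S^T = qI − J, standard facts about the Paley construction.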

15.
DFT matrix
–
In applied mathematics, a DFT matrix is an expression of a discrete Fourier transform as a transformation matrix, which can be applied to a signal through matrix multiplication. An N-point DFT is expressed as the multiplication X = W x, where x is the input signal and W is the N-by-N square DFT matrix with entries ω^(jk)/√N, where ω = e^(−2πi/N). This is the Vandermonde matrix for the roots of unity, up to the normalization factor. Note that the normalization factor in front of the sum and the sign of the exponent in ω are merely conventions and differ in some treatments. All of the discussion applies regardless of the convention, with at most minor adjustments; the only important thing is that the forward and inverse transforms have opposite-sign exponents. The 1/√N choice here makes the resulting DFT matrix unitary, which is convenient in many circumstances. Fast Fourier transform algorithms utilize the symmetries of the matrix to reduce the time of multiplying a vector by this matrix. Similar techniques can be applied for multiplications by matrices such as the Hadamard matrix and the Walsh matrix. The two-point DFT is a simple case, in which the first entry is the DC: the first row performs the sum, and the second row performs the difference; the factor of 1/√2 makes the transform unitary. The four-point DFT matrix is as follows: W = (1/2) [[1, 1, 1, 1], [1, −i, −1, i], [1, −1, 1, −1], [1, i, −1, −i]], where ω = e^(−2πi/4) = −i. The top row is all ones, so it measures the DC component in the input signal. Recall that a matched filter compares the signal with a reversed version of whatever we're looking for; a row whose phasor advances by −1/8 of a cycle per sample is therefore measuring a negative frequency. In this way, it could be said that the top rows of the matrix measure positive frequency content in the signal and the bottom rows measure negative frequency content. The DFT is then a unitary transform, i.e. one that preserves energy. The appropriate choice of scaling to achieve unitarity is 1/√N, so that the energy in the physical domain will be the same as the energy in the Fourier domain. 
For other properties of the DFT matrix, including its eigenvalues, connection to convolutions, and applications, see the discrete Fourier transform. A rectangular portion of the continuous Fourier operator can be displayed as an image, analogous to the DFT matrix, as shown at right, where greyscale pixel value denotes numerical quantity
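A minimal sketch of the unitary DFT matrix for N = 4, checking that it preserves energy and agrees (up to the √N scaling convention) with NumPy's unnormalized FFT:

```python
import numpy as np

N = 4

# Unitary DFT matrix: W[j, k] = ω^(jk) / sqrt(N), with ω = exp(-2πi/N)
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
W = np.exp(-2j * np.pi * j * k / N) / np.sqrt(N)

# Unitarity: W† W = I, so the transform preserves energy (Parseval)
assert np.allclose(W.conj().T @ W, np.eye(N))

x = np.array([1.0, 2.0, 3.0, 4.0])
X = W @ x
assert np.isclose(np.linalg.norm(x), np.linalg.norm(X))

# NumPy's fft uses the unnormalized convention: fft(x) = sqrt(N) * (W @ x)
assert np.allclose(np.fft.fft(x), np.sqrt(N) * X)
```

Swapping the sign of the exponent gives the inverse transform's matrix, W's conjugate transpose, illustrating the opposite-sign convention noted above.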

16.
Markov matrix
–
For a matrix whose elements are themselves random, see random matrix. In mathematics, a stochastic matrix is a square matrix used to describe the transitions of a Markov chain; each of its entries is a nonnegative real number representing a probability. It has found use in probability theory, statistics, mathematical finance and linear algebra, as well as computer science. There are several different definitions and types of stochastic matrices: a right stochastic matrix is a real square matrix with each row summing to 1; a left stochastic matrix is a real square matrix with each column summing to 1; a doubly stochastic matrix is a square matrix of nonnegative real numbers with each row and each column summing to 1. In the same vein, one may define a stochastic vector as a vector whose elements are nonnegative real numbers which sum to 1. Thus, each row of a right stochastic matrix is a stochastic vector. A right stochastic matrix P describes a Markov chain X_t over a finite state space S, with the (i, j) entry giving the probability of moving from state i to state j in one step; since the total transition probability from a state i to all other states must be 1, each row sums to 1. The product of two right stochastic matrices is also right stochastic; in particular, the k-th power P^k of a right stochastic matrix P is also right stochastic. The probability of transitioning from i to j in two steps is given by the (i, j)-th element of the square of P; in general, the k-step transition probabilities of a finite Markov chain given by the matrix P are given by P^k. An initial distribution is given as a row vector. The spectral radius of every right stochastic matrix is clearly at most 1. Additionally, every right stochastic matrix has an obvious column eigenvector associated to the eigenvalue 1: the vector whose entries are all 1. Finally, the Brouwer fixed point theorem implies that there is some left eigenvector which is also a stationary probability vector. On the other hand, the Perron–Frobenius theorem ensures that every irreducible stochastic matrix has such a vector. 
However, this theorem cannot be applied directly to arbitrary stochastic matrices because they need not be irreducible, and in general there may be several such vectors. When the stationary vector is unique, among other things, this says that the long-run probability of being in a state j is independent of the initial state i. If the transition matrix is applied repeatedly, the distribution converges to a stationary distribution for the Markov chain
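The convergence to a stationary distribution can be illustrated with a small two-state chain (the transition probabilities here are our example values):

```python
import numpy as np

# Right stochastic matrix: each row sums to 1
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
assert np.allclose(P.sum(axis=1), 1.0)

# Powers of P remain right stochastic; row i of P^k holds
# the k-step transition probabilities out of state i
assert np.allclose(np.linalg.matrix_power(P, 3).sum(axis=1), 1.0)

# Repeatedly applying P to an initial row-vector distribution
# converges to the stationary distribution pi with pi P = pi
dist = np.array([1.0, 0.0])   # start surely in state 0
for _ in range(100):
    dist = dist @ P

assert np.allclose(dist, dist @ P)       # stationary: left eigenvector for 1
assert np.allclose(dist, [5/6, 1/6])     # exact stationary vector for this P
```

Starting from the other initial distribution [0, 1] converges to the same vector, illustrating independence from the initial state for this irreducible chain.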

17.
Permutation matrix
–
In mathematics, particularly in matrix theory, a permutation matrix is a square binary matrix that has exactly one entry of 1 in each row and each column and 0s elsewhere. Each such matrix, say P, represents a permutation of m elements: it can be obtained by permuting either the columns or the rows of the identity matrix. Both methods of defining permutation matrices appear in the literature, and the properties expressed in one representation can be easily converted to the other. This article will deal with just one of these representations. For example, in the permutation matrix Pπ corresponding to a permutation π of 5 elements, the jth column of the I5 identity matrix appears as the π(j)th column of Pπ. The other representation is obtained by permuting the rows of the identity matrix Im. The column representation of a permutation matrix is used throughout this section, except when otherwise indicated. Multiplying Pπ times a column vector g will permute the rows of the vector. As permutation matrices are orthogonal matrices, the inverse matrix exists. Notice also that e_k Pπ = e_π(k) for a row vector e_k. Given two permutations π and σ of m elements, the corresponding permutation matrices Pπ and Pσ acting on column vectors are composed with Pσ Pπ g = P(π∘σ) g. The same matrices acting on row vectors compose according to the same rule h Pσ Pπ = h P(π∘σ). To be clear, the above formulas use the prefix notation for permutation composition, that is, (π∘σ)(k) = π(σ(k)). Let Qπ be the permutation matrix corresponding to π in its row representation. The properties of this representation can be determined from those of the column representation, since Qπ = Pπ^T = Pπ^(−1). In particular, Qπ e_k^T = Pπ^(−1) e_k^T = e_π(k)^T. From this it follows that Qσ Qπ g = Q(σ∘π) g. Similarly, h Qσ Qπ = h Q(σ∘π). If ι denotes the identity permutation, then Pι is the identity matrix. Let Sn denote the symmetric group, or group of permutations. Since there are n! permutations, there are n! 
permutation matrices. By the formulas above, the n × n permutation matrices form a group under matrix multiplication with the identity matrix as the identity element. The map Sn → A ⊂ GL is a faithful representation. A permutation matrix is itself a doubly stochastic matrix, but it also plays a special role in the theory of these matrices: the Birkhoff polytope, the set of doubly stochastic matrices, is the convex hull of the set of permutation matrices. The trace of a permutation matrix is the number of fixed points of the permutation. If the permutation has fixed points, so that it can be written in cycle form as π = (a1)(a2)⋯(ak)σ where σ has no fixed points, then e_a1, e_a2, …, e_ak are eigenvectors of the permutation matrix
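These group properties can be sketched in NumPy; we use the convention P e_j = e_π(j) on 0-indexed columns (the helper name perm_matrix is ours, and the article's two representations differ from this only by a transpose):

```python
import numpy as np

def perm_matrix(pi):
    """Permutation matrix with P @ e_j = e_{pi[j]}, for pi given as a
    0-indexed list where pi[j] is the image of j."""
    n = len(pi)
    P = np.zeros((n, n), dtype=int)
    for j, i in enumerate(pi):
        P[i, j] = 1   # column j of the identity moves to row position pi[j]
    return P

pi    = [1, 2, 0, 3]   # 0 -> 1, 1 -> 2, 2 -> 0, 3 fixed
sigma = [0, 3, 2, 1]   # swaps 1 and 3

P, S = perm_matrix(pi), perm_matrix(sigma)

# Inverse is the transpose: permutation matrices are orthogonal
assert np.array_equal(P.T @ P, np.eye(4, dtype=int))

# Composition under this convention: P_pi @ P_sigma = P_{pi ∘ sigma},
# with prefix notation (pi ∘ sigma)(j) = pi(sigma(j))
comp = [pi[sigma[j]] for j in range(4)]
assert np.array_equal(P @ S, perm_matrix(comp))

# The trace counts fixed points: pi fixes only the element 3
assert P.trace() == 1
```

Since the inverse is just the transpose, solving P y = g for a permuted vector g costs only a transposed multiplication, one reason permutation matrices are cheap to work with in factorizations such as PLU.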