1.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope and definition. Mathematicians seek out patterns and use them to formulate new conjectures, and they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules; rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind; there is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today, and the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from the Greek μανθάνω ("to learn"), while the modern Greek equivalent is μαθαίνω; in Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.

2.
J. H. van Lint
–
Jacobus Hendricus van Lint was a Dutch mathematician and professor at the Eindhoven University of Technology, of which he was rector magnificus from 1991 till 1996. He gained his Ph.D. from Utrecht University in 1957 under the supervision of Fred van der Blij, and he was professor of mathematics at Eindhoven University of Technology from 1959 to 1997, having been appointed there at the age of 26. His field of research was initially number theory, but he worked mainly in combinatorics. Van Lint was honored with a great number of awards. His books include: Introduction to Coding Theory (Springer, Graduate Texts in Mathematics, 1982); with Peter Cameron, Designs, Graphs, Codes and their Links (London Mathematical Society Lecture Notes, Cambridge University Press, 1980); with Richard M. Wilson, A Course in Combinatorics (Cambridge University Press, 1992); and with Gerard van der Geer, Introduction to Coding Theory and Algebraic Geometry (Birkhäuser, 1988). See also his personal web site, and O'Connor, John J.; Robertson, Edmund F., "Jacobus Hendricus van Lint", MacTutor History of Mathematics archive, University of St Andrews.

3.
Eigenvalues and eigenvectors
–
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector whose direction does not change when that linear transformation is applied to it. This condition can be written as the equation T(v) = λv. There is a correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space to itself; for this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations. Geometrically, an eigenvector corresponding to a real eigenvalue points in a direction that is stretched by the transformation; if the eigenvalue is negative, the direction is reversed. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen for "proper", "inherent", "own", "individual", "special", "specific", "peculiar", or "characteristic". In essence, an eigenvector v of a linear transformation T is a vector whose direction T leaves unchanged: applying T to the eigenvector only scales it by the scalar value λ. This condition can be written as the equation T(v) = λv, referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar; for example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. The Mona Lisa example pictured at right provides a simple illustration: each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping; the vectors pointing to each point in the original image are tilted right or left and made longer or shorter by the transformation.
Notice that points along the horizontal axis do not move at all when this transformation is applied; therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take different forms, mapping vectors in a variety of vector spaces; alternatively, the transformation could take the form of an n-by-n matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix. The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace or characteristic space of T. If the set of eigenvectors of T forms a basis of the domain of T, then T is diagonalizable. Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms; in the 18th century Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.
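The shear-mapping example above can be checked numerically. The following is a minimal sketch in plain Python; the shear factor k = 0.5 and the sample vectors are made-up values for illustration, not taken from the text.

```python
# Horizontal shear mapping: (x, y) -> (x + k*y, y).
# Vectors along the horizontal axis are eigenvectors with eigenvalue 1.

def apply_2x2(matrix, vector):
    """Apply a 2x2 matrix, given as ((a, b), (c, d)), to a 2D vector."""
    (a, b), (c, d) = matrix
    x, y = vector
    return (a * x + b * y, c * x + d * y)

k = 0.5
shear = ((1.0, k), (0.0, 1.0))

# A purely horizontal vector keeps its direction and length: an eigenvector.
v = (3.0, 0.0)
print(apply_2x2(shear, v))  # (3.0, 0.0)

# A vector with a vertical component is tilted, so it is not an eigenvector.
w = (1.0, 2.0)
print(apply_2x2(shear, w))  # (2.0, 2.0)
```

The unchanged output for v shows T(v) = 1·v, matching the observation that the horizontal eigenvectors of a shear all have eigenvalue one.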

4.
Graph theory
–
In mathematics, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called arcs or lines). Graphs are one of the prime objects of study in discrete mathematics. Refer to the glossary of graph theory for basic definitions; the following are some of the more basic ways of defining graphs and related mathematical structures. To avoid ambiguity, this type of graph may be described precisely as undirected. Other senses of graph stem from different conceptions of the edge set. In one more generalized notion, V is a set together with a relation of incidence that associates an edge with each two vertices. In another generalized notion, E is a multiset of unordered pairs of vertices; many authors call this type of object a multigraph or pseudograph. All of these variants and others are described more fully below. The vertices belonging to an edge are called the ends or end vertices of the edge; a vertex may exist in a graph and not belong to an edge. V and E are usually taken to be finite, and many of the well-known results are not true for infinite graphs because many of the arguments fail in the infinite case. The order of a graph is |V|, its number of vertices; the size of a graph is |E|, its number of edges. The degree or valency of a vertex is the number of edges that connect to it. For an edge {x, y}, graph theorists usually use the somewhat shorter notation xy. Graphs can be used to model many types of relations and processes in physical, biological and social systems, and many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the term network is sometimes defined to mean a graph in which attributes are associated with the nodes and/or edges. In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, etc.
For instance, the structure of a website can be represented by a directed graph, in which the vertices represent web pages. A similar approach can be taken to problems in media, travel, biology and computer chip design. The development of algorithms to handle graphs is therefore of major interest in computer science; the transformation of graphs is often formalized and represented by graph rewrite systems. Graph-theoretic methods, in various forms, have proven particularly useful in linguistics. Traditionally, syntax and compositional semantics follow tree-based structures, whose power lies in the principle of compositionality.
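The basic quantities defined above (order, size, and degree) can be sketched in a few lines of plain Python. The example graph below is made up for illustration, with an undirected edge set stored as frozensets so that {x, y} and {y, x} are the same edge.

```python
# A small undirected graph: order = |V|, size = |E|, degree = incident edges.

vertices = {"a", "b", "c", "d"}
edges = {frozenset(e) for e in [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]}

order = len(vertices)   # |V|, the number of vertices
size = len(edges)       # |E|, the number of edges

def degree(v):
    """Number of edges that connect to vertex v."""
    return sum(1 for e in edges if v in e)

print(order, size)      # 4 4
print(degree("c"))      # 3: c is joined to a, b and d
```

Storing edges as a set of frozensets mirrors the definition of an undirected simple graph: no edge direction and no repeated edges.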

5.
DFT matrix
–
In applied mathematics, a DFT matrix is an expression of a discrete Fourier transform as a transformation matrix, which can be applied to a signal through matrix multiplication. An N-point DFT is expressed as the multiplication X = W x, where x is the input signal, X is the output, and W is the N-by-N square DFT matrix. This is the Vandermonde matrix for the roots of unity, up to the normalization factor. Note that the normalization factor in front of the sum and the sign of the exponent in ω are merely conventions, and differ in some treatments. All of the discussion applies regardless of the convention, with at most minor adjustments; the only important thing is that the forward and inverse transforms have opposite-sign exponents. The 1/√N choice here makes the resulting DFT matrix unitary, which is convenient in many circumstances. Fast Fourier transform algorithms utilize the symmetries of the matrix to reduce the time of multiplying a vector by this matrix; similar techniques can be applied for multiplications by matrices such as the Hadamard matrix and the Walsh matrix. The two-point DFT is a simple case, in which the first entry is the DC component: the first row performs the sum, and the second row performs the difference. The factor of 1/√2 is to make the transform unitary. The four-point DFT matrix uses ω = e^(−πi/2) = −i. Its top row is all ones, so it measures the DC component in the input signal. Recall that a matched filter compares the signal with a reversed version of whatever we're looking for. A row that oscillates in the opposite rotational sense measures a negative frequency; in this way, it could be said that the top rows of the matrix measure positive frequency content in the signal and the bottom rows measure negative frequency content. The DFT is a unitary transform, i.e. one that preserves energy. The appropriate choice of scaling to achieve unitarity is 1/√N, so that the energy in the physical domain will be the same as the energy in the Fourier domain.
For other properties of the DFT matrix, including its eigenvalues, connection to convolutions, and applications, see the article on the discrete Fourier transform. A rectangular portion of the continuous Fourier operator can be displayed as an image, analogous to the DFT matrix, as shown at right, where greyscale pixel value denotes numerical quantity. See also: multidimensional transform; clock and shift matrices; The Transform and Data Compression Handbook by P. C. Yip and K. Ramamohan Rao (see chapter 2 for a treatment of the DFT based largely on the DFT matrix); Fourier Operator and Decimation In Time.
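The unitary DFT matrix described above can be built directly from its definition using only the standard library. The sketch below uses the e^(−2πi/N) exponent convention and the 1/√N scaling from the text; the unitarity check multiplies W by its conjugate transpose and compares with the identity.

```python
# Build the N-point unitary DFT matrix and verify that it is unitary.
import cmath

def dft_matrix(n):
    """Return the n-by-n unitary DFT matrix as a list of rows."""
    omega = cmath.exp(-2j * cmath.pi / n)  # primitive n-th root of unity
    scale = 1 / n ** 0.5                   # 1/sqrt(N) makes W unitary
    return [[scale * omega ** (j * k) for k in range(n)] for j in range(n)]

n = 4
W = dft_matrix(n)

# For N = 4, omega = exp(-pi*i/2) = -i, as in the text.
print(cmath.exp(-1j * cmath.pi / 2))  # approximately -1j

# Unitarity: W times its conjugate transpose is the identity matrix.
for i in range(n):
    for j in range(n):
        s = sum(W[i][k] * W[j][k].conjugate() for k in range(n))
        assert abs(s - (1 if i == j else 0)) < 1e-12
print("W is unitary")
```

The top row of W consists of equal entries 1/√N, matching the observation that it measures the DC component of the input signal.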

6.
Block matrix
–
In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices. Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned. Block matrix algebra arises in general from biproducts in categories of matrices. For example, a 4×4 matrix P can be partitioned into four 2×2 blocks P11, P12, P21, P22, and the partitioned matrix can then be written in terms of these blocks. It is possible to use a block partitioned matrix product that involves only algebra on submatrices of the factors; the partitioning of the factors is not arbitrary, however. The blocks of the resulting matrix C are calculated by C_αβ = Σ_{γ=1}^{s} A_αγ B_γβ, or, using the Einstein notation that implicitly sums over repeated indices, C_αβ = A_αγ B_γβ. If a matrix is partitioned into four blocks, it can be inverted blockwise, where the blocks A, B, C and D may have arbitrary size. A block diagonal matrix is a block matrix that is a square matrix whose main-diagonal blocks are square matrices and whose off-diagonal blocks are zero matrices. A block diagonal matrix A has the form A = diag(A1, …, An), where each Ak is a square matrix; in other words, it is the direct sum of A1, …, An. It can also be indicated as A1 ⊕ A2 ⊕ … ⊕ An. Any square matrix can trivially be considered a block diagonal matrix with only one block. For the determinant and trace, the following properties hold: det A = det A1 × … × det An, and tr A = tr A1 + ⋯ + tr An. The inverse of a block diagonal matrix is another block diagonal matrix, composed of the inverse of each block. The eigenvalues and eigenvectors of A are simply those of A1, A2, …, and An combined. A block tridiagonal matrix is essentially a tridiagonal matrix but has submatrices in places of scalars; it has blocks Ak, Bk and Ck that are square sub-matrices of the lower, main, and upper diagonals, respectively. Block tridiagonal matrices are often encountered in numerical solutions of engineering problems.
Optimized numerical methods for LU factorization are available, and hence efficient solution algorithms exist for equation systems with a block tridiagonal matrix as coefficient matrix. The Thomas algorithm, used for efficient solution of equation systems involving a tridiagonal matrix, can also be applied using matrix operations to block tridiagonal matrices. A block Toeplitz matrix is another special block matrix, which contains blocks that are repeated down the diagonals of the matrix; the individual block matrix elements Aij must also be Toeplitz matrices. For any arbitrary matrices A and B, the direct sum of A and B, denoted by A ⊕ B, is the block diagonal matrix with A in the upper-left block and B in the lower-right block, and zero blocks elsewhere. This operation generalizes naturally to arbitrary dimensioned arrays. Note that any element in the direct sum of two vector spaces of matrices can be represented as a direct sum of two matrices. In linear algebra terms, the use of a block matrix corresponds to having a linear mapping thought of in terms of corresponding bunches of basis vectors, and that again matches the idea of having distinguished direct sum decompositions of the domain and range.
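The block diagonal determinant property above, det A = det A1 × … × det An, can be checked on a tiny example. The sketch below uses two made-up 2×2 blocks and compares the determinant of the assembled 4×4 block diagonal matrix (computed by cofactor expansion) with the product of the block determinants.

```python
# Verify det(diag(A1, A2)) = det(A1) * det(A2) on a small example.

def det2(m):
    """Determinant of a 2x2 matrix given as ((a, b), (c, d))."""
    (a, b), (c, d) = m
    return a * d - b * c

A1 = ((1, 2), (3, 4))   # det = -2
A2 = ((0, 1), (5, 6))   # det = -5

# Assemble the 4x4 block diagonal matrix diag(A1, A2):
A = [
    [1, 2, 0, 0],
    [3, 4, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 5, 6],
]

def det4(m):
    """Determinant of a 4x4 matrix by cofactor expansion along row 0."""
    def det3(n):
        return (n[0][0] * (n[1][1] * n[2][2] - n[1][2] * n[2][1])
                - n[0][1] * (n[1][0] * n[2][2] - n[1][2] * n[2][0])
                + n[0][2] * (n[1][0] * n[2][1] - n[1][1] * n[2][0]))
    total = 0
    for j in range(4):
        minor = [[m[r][c] for c in range(4) if c != j] for r in range(1, 4)]
        total += (-1) ** j * m[0][j] * det3(minor)
    return total

print(det4(A), det2(A1) * det2(A2))  # 10 10
```

Both values agree, as the block diagonal property predicts: (-2) × (-5) = 10.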

7.
Sparse matrix
–
In numerical analysis, a sparse matrix is a matrix in which most of the elements are zero. By contrast, if most of the elements are nonzero, then the matrix is considered dense. The number of zero-valued elements divided by the total number of elements is called the sparsity of the matrix. Conceptually, sparsity corresponds to systems which are loosely coupled. Consider a line of balls connected by springs from one to the next: this is a sparse system, as only adjacent balls are coupled. By contrast, if the line of balls had springs connecting each ball to all the other balls, the system would correspond to a dense matrix. The concept of sparsity is useful in combinatorics and application areas such as network theory. Large sparse matrices often appear in scientific or engineering applications when solving partial differential equations. Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices, as processing and memory are wasted on the zero elements; sparse data is by nature more easily compressed and thus requires significantly less storage. Some very large sparse matrices are infeasible to manipulate using standard dense-matrix algorithms. A matrix is typically stored as a two-dimensional array. Each entry in the array represents an element a_{i,j} of the matrix and is accessed by the two indices i and j; conventionally, i is the row index, numbered from top to bottom, and j is the column index, numbered from left to right. For an m × n matrix, the amount of memory required to store the matrix in this format is proportional to m × n. In the case of a sparse matrix, substantial memory requirement reductions can be realized by storing only the non-zero entries. Depending on the number and distribution of the non-zero entries, different data structures can be used; the trade-off is that accessing the individual elements becomes more complex. Formats can be divided into two groups. The first group supports efficient modification, such as DOK (dictionary of keys), LIL (list of lists), or COO (coordinate list); these are typically used to construct the matrices.
The second group supports efficient access and matrix operations, such as CSR (compressed sparse row) or CSC (compressed sparse column). DOK consists of a dictionary that maps (row, column) pairs to the value of the elements; elements that are missing from the dictionary are taken to be zero. The format is good for incrementally constructing a sparse matrix in random order, but poor for iterating over non-zero values in lexicographical order. One typically constructs a matrix in this format and then converts to another, more efficient format for processing. LIL stores one list per row, with each entry containing the column index and the value; typically, these entries are kept sorted by column index for faster lookup. This is another format that is good for incremental matrix construction.
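The DOK-then-convert workflow described above can be sketched in plain Python: a dictionary keyed by (row, column) pairs for incremental construction, followed by conversion to a LIL-style structure with one column-sorted list per row. The matrix dimensions and entries below are made-up illustration values.

```python
# DOK: dict mapping (row, column) -> value; missing entries are zero.
rows, cols = 3, 4
dok = {}

# Incremental construction, in arbitrary order:
dok[(2, 1)] = 7.0
dok[(0, 3)] = 1.5
dok[(0, 0)] = 2.0

def get(i, j):
    """Entries missing from the dictionary are taken to be zero."""
    return dok.get((i, j), 0.0)

print(get(0, 0), get(1, 2))   # 2.0 0.0

# Convert to LIL: one list of (column, value) pairs per row, sorted by column.
lil = [[] for _ in range(rows)]
for (i, j), v in sorted(dok.items()):
    lil[i].append((j, v))
print(lil)                    # [[(0, 2.0), (3, 1.5)], [], [(1, 7.0)]]
```

Sorting the dictionary items by key yields the lexicographical (row-major) order that the DOK format itself cannot provide cheaply, which is exactly why one converts before processing.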

8.
Adjacency matrix
–
In graph theory and computer science, an adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph. In the special case of a finite simple graph, the adjacency matrix is a (0,1)-matrix with zeros on its diagonal; if the graph is undirected, the matrix is symmetric. The relationship between a graph and the eigenvalues and eigenvectors of its adjacency matrix is studied in spectral graph theory. The adjacency matrix should be distinguished from the incidence matrix for a graph, which is a different matrix representation. The diagonal elements of the matrix are all zero, since edges from a vertex to itself are not allowed in simple graphs. It is also sometimes useful in algebraic graph theory to replace the nonzero elements with algebraic variables. Loops may be counted either once or twice, as long as a consistent convention is followed; undirected graphs often use the latter convention of counting loops twice, whereas directed graphs typically use the former convention. For a bipartite graph, a smaller matrix B uniquely represents the graph; B is sometimes called the biadjacency matrix. Formally, let G = (U, V, E) be a bipartite graph with parts U = {u1, …, ur} and V = {v1, …, vs}. The biadjacency matrix is the r × s 0–1 matrix B in which b_{i,j} = 1 if and only if (ui, vj) ∈ E. If G is a multigraph or weighted graph, then the elements b_{i,j} are instead taken to be the number of edges between the vertices or the weight of the edge, respectively. A generalized adjacency matrix A of a graph has A_{i,j} = a if (i, j) is an edge and b if it is not. The Seidel adjacency matrix is one such matrix, with a = −1, b = +1, and zeros on the diagonal; this matrix is used in studying strongly regular graphs and two-graphs. The distance matrix has in position (i, j) the distance between vertices vi and vj; the distance is the length of a shortest path connecting the vertices. Unless lengths of edges are explicitly provided, the length of a path is the number of edges in it. The distance matrix resembles a high power of the adjacency matrix. The convention followed here is that each edge adds 1 to the appropriate cell in the matrix, and each loop adds 2.
This allows the degree of a vertex to be found by taking the sum of the values in either its respective row or column in the adjacency matrix. In directed graphs, the in-degree of a vertex can be computed by summing the entries of the corresponding row or column, depending on the orientation convention used for the matrix.
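The degree-from-row-sum property above can be demonstrated in a few lines of plain Python for an undirected simple graph. The vertex count and edge list below are made-up illustration values; each edge sets two symmetric cells, so the matrix is symmetric and the row sum of a vertex equals its degree.

```python
# Adjacency matrix of an undirected simple graph on 4 vertices.
n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

adj = [[0] * n for _ in range(n)]
for i, j in edges:
    adj[i][j] = 1
    adj[j][i] = 1          # undirected: the matrix is symmetric

def degree(v):
    """Degree of v = sum of the values in its row (or column)."""
    return sum(adj[v])

print(degree(2))           # 3: vertex 2 is adjacent to 0, 1 and 3
print(all(adj[i][j] == adj[j][i] for i in range(n) for j in range(n)))  # True
```

Because the matrix is symmetric, summing the column of vertex 2 gives the same result as summing its row.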

9.
Matrix (mathematics)
–
In mathematics, a matrix is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. For example, a matrix with two rows and three columns has dimensions 2 × 3. The individual items in an m × n matrix A, often denoted a_{i,j} with 1 ≤ i ≤ m and 1 ≤ j ≤ n, are called its elements or entries. Provided that they have the same size, two matrices can be added or subtracted element by element. The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second. Any matrix can be multiplied element-wise by a scalar from its associated field. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. The product of two matrices is a matrix that represents the composition of the two corresponding linear transformations. Another application of matrices is in the solution of systems of linear equations. If the matrix is square, it is possible to deduce some of its properties by computing its determinant; for example, a square matrix has an inverse if and only if its determinant is not zero. Insight into the geometry of a linear transformation is obtainable from the matrix's eigenvalues. Applications of matrices are found in most scientific fields. In computer graphics, they are used to manipulate 3D models and project them onto a 2-dimensional screen. Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions, and matrices are used in economics to describe systems of economic relationships. A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations. Matrix decomposition methods simplify computations, both theoretically and practically. Algorithms that are tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, expedite computations in the finite element method and other computations.
Infinite matrices occur in planetary theory and in atomic theory; a simple example of an infinite matrix is the matrix representing the derivative operator, which acts on the Taylor series of a function. A matrix is a rectangular array of numbers or other mathematical objects for which operations such as addition and multiplication are defined. Most commonly, a matrix over a field F is a rectangular array of scalars, each of which is a member of F. Most of this article focuses on real and complex matrices, that is, matrices whose elements are real numbers or complex numbers, respectively. More general types of entries are discussed below; for instance, a matrix whose entries are all real numbers is a real matrix.
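The multiplication rule stated above, that an m × n matrix can only multiply an n × p matrix, can be made concrete with a short sketch in plain Python. The 2 × 3 and 3 × 2 example matrices are made-up illustration values; entry (i, j) of the product is the dot product of row i of the first factor with column j of the second.

```python
# Matrix product: (m x n) times (n x p) gives an (m x p) matrix.

def matmul(a, b):
    m, n = len(a), len(a[0])
    n2, p = len(b), len(b[0])
    assert n == n2, "columns of the first factor must equal rows of the second"
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2, 3],
     [4, 5, 6]]            # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]             # 3 x 2

print(matmul(A, B))        # [[58, 64], [139, 154]] -- a 2 x 2 result
```

Attempting matmul(A, A) would fail the dimension assertion, since A has 3 columns but only 2 rows, which is exactly the restriction described in the text.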