1.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the field can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on the challenges of implementing computation. Human–computer interaction considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity; further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623, and in 1673 Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. Charles Babbage started developing his Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched-card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the machine was finished, some hailed it as Babbage's dream come true.
During the 1940s, as new and more powerful computing machines were developed, it became clear that computers could be used for more than just mathematical calculations, and the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953. The first computer science program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with these machines was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has since seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
2.
Connectivity (graph theory)
–
It is closely related to the theory of network flow problems. The connectivity of a graph is an important measure of its resilience as a network. A graph is connected when there is a path between every pair of vertices; in a connected graph, there are no unreachable vertices. A graph that is not connected is disconnected. A graph G is said to be disconnected if there exist two nodes in G such that no path in G has those nodes as endpoints. A graph with just one vertex is connected, while an edgeless graph with two or more vertices is disconnected. In an undirected graph G, two vertices u and v are called connected if G contains a path from u to v; otherwise, they are called disconnected. If the two vertices are connected by a path of length 1, i.e. by a single edge, they are also called adjacent. A graph is said to be connected if every pair of vertices in the graph is connected. A connected component is a maximal connected subgraph of G; each vertex belongs to exactly one connected component, as does each edge. A directed graph is called weakly connected if replacing all of its directed edges with undirected edges produces a connected graph. It is connected if it contains a directed path from u to v or a directed path from v to u for every pair of vertices u, v. It is strongly connected, diconnected, or simply strong if it contains a directed path from u to v and a directed path from v to u for every pair of vertices u, v. The strong components are the maximal strongly connected subgraphs. A cut, vertex cut, or separating set of a connected graph G is a set of vertices whose removal renders G disconnected. The connectivity or vertex connectivity κ(G) is the size of a minimal vertex cut. A graph is called k-connected or k-vertex-connected if its vertex connectivity is k or greater. In particular, a complete graph with n vertices, denoted Kn, has no vertex cuts at all. A vertex cut for two vertices u and v is a set of vertices whose removal from the graph disconnects u and v. The local connectivity κ(u, v) is the size of a smallest vertex cut separating u and v.
Local connectivity is symmetric for undirected graphs, that is, κ(u, v) = κ(v, u). A graph G which is connected but not 2-connected is sometimes called separable. Analogous concepts can be defined for edges. In the simple case in which cutting a single, specific edge would disconnect the graph, that edge is called a bridge. More generally, an edge cut of G is a set of edges whose removal renders the graph disconnected. A graph is called k-edge-connected if its edge connectivity is k or greater. If u and v are vertices of a graph G, then a collection of paths between u and v is called independent if no two of them share a vertex other than u and v themselves.
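The definitions above lend themselves to direct computation. The following Python sketch (illustrative only; the function name is hypothetical) finds the connected components of an undirected graph by breadth-first search, so a graph is connected exactly when it has a single component.

```python
from collections import deque

def connected_components(vertices, edges):
    """Return the connected components of an undirected graph as sets of vertices."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, components = set(), []
    for start in vertices:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    comp.add(w)
                    queue.append(w)
        components.append(comp)
    return components

# A disconnected graph with two components: a triangle and a single edge.
comps = connected_components([1, 2, 3, 4, 5], [(1, 2), (2, 3), (1, 3), (4, 5)])
```

Removing a cut vertex or an edge cut would show up here as an increase in the number of returned components.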
3.
Eigenvalues and eigenvectors
–
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector whose direction does not change when that linear transformation is applied to it. This condition can be written as the equation T(v) = λv, where λ is a scalar called the eigenvalue associated with v. There is a correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space to itself; for this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations. Geometrically, an eigenvector corresponding to a real eigenvalue points in a direction that is stretched by the transformation. If the eigenvalue is negative, the direction is reversed. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen for proper, inherent, own, individual, special, specific, peculiar, or characteristic. In essence, an eigenvector v of a linear transformation T is a non-zero vector whose direction is unchanged by T: applying T to the eigenvector only scales the eigenvector by the scalar value λ. This condition can be written as the equation T(v) = λv, referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar; for example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. The Mona Lisa example pictured at right provides a simple illustration: each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping; the vectors pointing to each point in the original image are therefore tilted right or left and made longer or shorter by the transformation.
Notice that points along the horizontal axis do not move at all when this transformation is applied; therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces. Alternatively, the transformation could take the form of an n-by-n matrix acting on column vectors. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix. The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace or characteristic space of T. If the set of eigenvectors of T forms a basis of the domain of T, then T is diagonalizable. Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms; in the 18th century Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.
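As a concrete illustration of the eigenvalue equation T(v) = λv, the pure-Python sketch below (illustrative only, not part of the article) approximates the dominant eigenvalue and eigenvector of a small matrix by power iteration, assuming the matrix has a unique dominant real eigenvalue.

```python
def power_iteration(A, steps=100):
    """Approximate the dominant eigenpair of a small square matrix by
    repeatedly applying A and normalizing the result."""
    n = len(A)
    v = [1.0] * n
    for _ in range(steps):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)   # scale to keep entries bounded
        v = [x / norm for x in w]
    # Rayleigh quotient gives the eigenvalue estimate for the final v.
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(Av[i] * v[i] for i in range(n)) / sum(x * x for x in v)
    return lam, v

# The symmetric matrix [[2, 1], [1, 2]] has eigenvalues 3 and 1;
# power iteration converges to the dominant eigenvalue 3.
lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```

The returned vector satisfies Av ≈ 3v, matching the eigenvalue equation with λ = 3.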
4.
Graph theory
–
In mathematics, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called arcs or lines). Graphs are one of the prime objects of study in discrete mathematics. Refer to the glossary of graph theory for basic definitions in graph theory; the following are some of the more basic ways of defining graphs and related mathematical structures. To avoid ambiguity, this type of graph may be described precisely as undirected. Other senses of graph stem from different conceptions of the edge set. In one more generalized notion, V is a set together with a relation of incidence that associates with each edge two vertices. In another generalized notion, E is a multiset of unordered pairs of vertices; many authors call this type of object a multigraph or pseudograph. All of these variants and others are described more fully below. The vertices belonging to an edge are called the ends or end vertices of the edge; a vertex may exist in a graph and not belong to an edge. V and E are usually taken to be finite, and many of the well-known results are not true for infinite graphs because many of the arguments fail in the infinite case. The order of a graph is |V|, its number of vertices; the size of a graph is |E|, its number of edges. The degree or valency of a vertex is the number of edges that connect to it. For an edge {x, y}, graph theorists usually use the somewhat shorter notation xy. Graphs can be used to model many types of relations and processes in physical, biological, and social systems, and many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the term network is sometimes defined to mean a graph in which attributes are associated with the nodes and/or edges. In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, etc.
For instance, the structure of a website can be represented by a directed graph in which the vertices represent web pages and the directed edges represent links from one page to another. A similar approach can be taken to problems in media, travel, biology, computer chip design, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science; the transformation of graphs is often formalized and represented by graph rewrite systems. Graph-theoretic methods, in various forms, have proven particularly useful in linguistics. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the principle of compositionality.
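The basic quantities defined above (order, size, and degree) are simple to compute. The following minimal Python class is an illustrative sketch, not a library API.

```python
class Graph:
    """A minimal undirected simple graph, storing edges as frozensets."""

    def __init__(self, vertices, edges):
        self.vertices = set(vertices)
        self.edges = set(map(frozenset, edges))

    def order(self):
        """The order of the graph: |V|, its number of vertices."""
        return len(self.vertices)

    def size(self):
        """The size of the graph: |E|, its number of edges."""
        return len(self.edges)

    def degree(self, v):
        """The degree of v: the number of edges that connect to it."""
        return sum(1 for e in self.edges if v in e)

# A path on three vertices: order 3, size 2, middle vertex of degree 2.
g = Graph(["x", "y", "z"], [("x", "y"), ("y", "z")])
```

Storing edges as frozensets makes the edge {x, y} identical to {y, x}, matching the undirected-graph convention in the text.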
5.
Nauru graph
–
In the mathematical field of graph theory, the Nauru graph is a symmetric bipartite cubic graph with 24 vertices and 36 edges. It was named by David Eppstein after the twelve-pointed star in the flag of Nauru. It has chromatic number 2, chromatic index 3, diameter 4, and radius 4, and it is also a 3-vertex-connected and 3-edge-connected graph. The Nauru graph requires at least eight crossings in any drawing of it in the plane; it is one of five non-isomorphic graphs tied for being the smallest cubic graph that requires eight crossings. Another of these five graphs is the McGee graph, also known as the (3,7)-cage. The Nauru graph is Hamiltonian and can be described by the LCF notation [5, −9, 7, −7, 9, −5]^4. There is also a combinatorial construction of the Nauru graph: take three distinguishable objects and place them in four boxes, no more than one object per box. There are 24 ways to so distribute the objects, corresponding to the 24 vertices of the graph; moving one object into the currently empty box gives the edges, and the resulting state-transition graph is the Nauru graph. The automorphism group of the Nauru graph is a group of order 144. It is isomorphic to the direct product of the symmetric groups S4 and S3 and acts transitively on the vertices, on the edges, and on the arcs of the graph. Therefore the Nauru graph is a symmetric graph: it has automorphisms that take any vertex to any other vertex and any edge to any other edge. According to the Foster census, the Nauru graph is the only cubic symmetric graph on 24 vertices. The generalized Petersen graph G(n, k) is vertex-transitive if and only if n = 10 and k = 2 or if k² ≡ ±1 (mod n), and is edge-transitive only in the following seven cases: (n, k) = (4, 1), (5, 2), (8, 3), (10, 2), (10, 3), (12, 5), (24, 5). So the Nauru graph is one of only seven symmetric generalized Petersen graphs; among these seven graphs are the cubical graph G(4, 1), the Petersen graph G(5, 2), the Möbius–Kantor graph G(8, 3), the dodecahedral graph G(10, 2), and the Desargues graph G(10, 3). The Nauru graph is a Cayley graph of S4, the group of permutations on four elements.
The characteristic polynomial of the Nauru graph is equal to (x − 3)(x − 2)^6 (x − 1)^3 x^4 (x + 1)^3 (x + 2)^6 (x + 3), making it an integral graph, that is, a graph whose spectrum consists entirely of integers. The Nauru graph has two symmetric embeddings on orientable surfaces. One of these two forms a torus, so the Nauru graph is a toroidal graph: this embedding consists of 12 hexagonal faces together with the 24 vertices and 36 edges of the Nauru graph. The dual graph of this embedding is a symmetric 6-regular graph with 12 vertices and 36 edges. The other symmetric embedding of the Nauru graph has six dodecagonal faces and forms a surface of genus 4. Its dual is not a simple graph, since each face shares three edges with four other faces, but a multigraph.
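The description of the Nauru graph as the generalized Petersen graph G(12, 5) can be checked directly. The sketch below (an illustrative helper, not a library function) builds the edge list of G(n, k) and confirms the stated vertex count, edge count, and 3-regularity for G(12, 5).

```python
def generalized_petersen(n, k):
    """Edge list of the generalized Petersen graph G(n, k): an outer
    n-cycle on vertices u_i, an inner star polygon joining v_i to
    v_{i+k}, and spokes u_i - v_i."""
    edges = []
    for i in range(n):
        edges.append((("u", i), ("u", (i + 1) % n)))  # outer cycle
        edges.append((("v", i), ("v", (i + k) % n)))  # inner star polygon
        edges.append((("u", i), ("v", i)))            # spoke
    return edges

# The Nauru graph is G(12, 5): 24 vertices, 36 edges, cubic (3-regular).
edges = generalized_petersen(12, 5)
verts = {v for e in edges for v in e}
```

Since 2k = 10 is nonzero modulo 12, each inner edge is generated exactly once, so the list contains 36 distinct edges.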
6.
Cayley graph
–
In mathematics, a Cayley graph, also known as a Cayley colour graph, Cayley diagram, group diagram, or colour group, is a graph that encodes the abstract structure of a group. Its definition is suggested by Cayley's theorem and uses a specified, usually finite, set of generators for the group. It is a central tool in combinatorial and geometric group theory. Suppose that G is a group and S is a generating set. The Cayley graph Γ = Γ(G, S) is a colored directed graph constructed as follows: each element g of G is assigned a vertex, so the vertex set V(Γ) is identified with G; each generator s of S is assigned a color c_s; and for any g ∈ G, s ∈ S, the vertices corresponding to the elements g and gs are joined by a directed edge of colour c_s. Thus the edge set E(Γ) consists of pairs of the form (g, gs). In geometric group theory, the set S is usually assumed to be finite, symmetric, and not containing the identity element of the group. In this case, the uncolored Cayley graph is an ordinary graph: its edges are not oriented. Suppose that G = Z is the infinite cyclic group and the set S consists of the standard generator 1 and its inverse; then the Cayley graph is an infinite path. Similarly, if G = Zn is the cyclic group of order n and S again consists of a generator and its inverse, then the Cayley graph is the cycle Cn. More generally, the Cayley graphs of cyclic groups are exactly the circulant graphs. The Cayley graph of the product of groups is the cartesian product of the corresponding Cayley graphs. A Cayley graph of the dihedral group D4 on two generators a and b is depicted to the left; red arrows represent composition with a. Since b is self-inverse, the blue lines, which represent composition with b, are undirected. Therefore the graph is mixed: it has eight vertices, eight arrows, and four edges. The Cayley table of the group D4 can be derived from the group presentation ⟨ a, b | a 4 = b 2 = e, a b = b a 3 ⟩. A different Cayley graph of Dih4 is shown on the right; b is still the horizontal reflection, represented by blue lines, and c is a diagonal reflection, represented by green lines.
As both reflections are self-inverse, the Cayley graph on the right is completely undirected; this graph corresponds to the presentation ⟨ b, c | b 2 = c 2 = e, b c b c = c b c b ⟩. The Cayley graph of the free group on two generators a, b, corresponding to the set S = {a, b, a⁻¹, b⁻¹}, is depicted at the top of the article. Travelling along an edge to the right represents right multiplication by a. Since the free group has no relations, its Cayley graph has no cycles. This Cayley graph is a key ingredient in the proof of the Banach–Tarski paradox. A Cayley graph of the discrete Heisenberg group is depicted to the right.
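The cyclic-group example can be made concrete. The sketch below (a hypothetical helper for small finite groups, not a standard construction API) builds the directed Cayley edge set of Z6 with the generating set {1, 5}, i.e. the standard generator and its inverse; the uncolored, undirected version is the cycle C6.

```python
from itertools import product

def cayley_graph(elements, generators, op):
    """Directed edge set {(g, g*s)} of the Cayley graph of a finite group,
    given its elements, a generating set, and the group operation."""
    return {(g, op(g, s)) for g, s in product(elements, generators)}

# Z6 under addition mod 6, with generating set {1, 5} (i.e. +1 and -1).
edges = cayley_graph(range(6), [1, 5], lambda g, s: (g + s) % 6)

# Forgetting orientation merges each pair of opposite arrows into one edge.
undirected = {frozenset(e) for e in edges}
```

Each vertex has exactly two undirected neighbours, so the result is the 6-cycle, in line with the claim that Cayley graphs of cyclic groups are exactly the circulant graphs.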
7.
Square matrix
–
In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. Square matrices are used to represent simple linear transformations, such as shearing or rotation. For example, if R is a square matrix representing a rotation and v is a column vector describing the position of a point, the product Rv yields another column vector describing the position after the rotation; if v is a row vector, the same transformation can be obtained using vRT, where RT denotes the transpose of R. The entries aii form the main diagonal of a square matrix; they lie on the line which runs from the top left corner to the bottom right corner of the matrix. For instance, the diagonal of the 4-by-4 matrix above contains the elements a11 = 9, a22 = 11, a33 = 4. The diagonal of a square matrix from the top right to the bottom left corner is called the antidiagonal or counterdiagonal. If all entries outside the main diagonal are zero, A is called a diagonal matrix; if all entries above the main diagonal are zero, A is called a lower triangular matrix. The identity matrix In of size n is the matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0. It is a square matrix of order n, and also a special kind of diagonal matrix. It is called the identity matrix because multiplication with it leaves a matrix unchanged. A square matrix A that is equal to its transpose, i.e. A = AT, is a symmetric matrix; if instead A is equal to the negative of its transpose, i.e. A = −AT, then A is a skew-symmetric matrix. By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis, i.e. every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real, and this theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns; see below. A square matrix A is called invertible or non-singular if there exists a matrix B such that AB = BA = In. If B exists, it is unique and is called the inverse matrix of A, denoted A−1.
A symmetric n×n matrix A is called positive-definite if, for all nonzero vectors x ∈ Rn, the associated quadratic form given by Q(x) = xTAx takes only positive values.
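For the 2-by-2 symmetric case, positive-definiteness can be tested by Sylvester's criterion: the matrix is positive-definite exactly when both leading principal minors are positive. The Python check below is an illustrative sketch for this special case only.

```python
def is_positive_definite_2x2(A):
    """Sylvester's criterion for a symmetric 2x2 matrix A = [[a, b], [c, d]]:
    positive-definite iff a > 0 and det(A) = a*d - b*c > 0."""
    (a, b), (c, d) = A
    assert b == c, "matrix must be symmetric"
    return a > 0 and a * d - b * c > 0

# x^T A x = 2*x1^2 + 2*x1*x2 + 2*x2^2 is positive for all nonzero x.
pd = is_positive_definite_2x2([[2, 1], [1, 2]])

# Here det = 1 - 4 < 0, so the quadratic form takes negative values too.
not_pd = is_positive_definite_2x2([[1, 2], [2, 1]])
```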
8.
Degree (graph theory)
–
In graph theory, the degree of a vertex of a graph is the number of edges incident to the vertex, with loops counted twice. The degree of a vertex v is denoted deg(v). The maximum degree of a graph G, denoted by Δ(G), is the largest degree over its vertices, and the minimum degree, denoted δ(G), is the smallest; in the graph on the right, the maximum degree is 5 and the minimum degree is 0. In a regular graph, all degrees are the same. The degree sum formula states that, given a graph G = (V, E), ∑v∈V deg(v) = 2|E|. The formula implies that in any graph, the number of vertices with odd degree is even; this statement is known as the handshaking lemma. The latter name comes from a mathematical problem: to prove that in any group of people, the number of people who have shaken hands with an odd number of other people from the group is even. The degree sequence of a graph is the non-increasing sequence of its vertex degrees. The degree sequence is a graph invariant, so isomorphic graphs have the same degree sequence. However, the degree sequence does not, in general, uniquely identify a graph; in some cases, non-isomorphic graphs have the same degree sequence. The degree sequence problem is the problem of finding some or all graphs with the degree sequence being a given non-increasing sequence of positive integers. A sequence which is the degree sequence of some graph, i.e. for which the degree sequence problem has a solution, is called a graphic or graphical sequence. As a consequence of the degree sum formula, any sequence with an odd sum cannot be realized as the degree sequence of a graph. The converse is true for multigraphs: if a sequence has an even sum, it is the degree sequence of some multigraph, and the construction of such a graph is straightforward; for example, connect vertices with odd degrees in pairs by a matching. The question of whether a given degree sequence can be realized by a simple graph is more challenging. This problem is also called the graph realization problem and can be solved either by the Erdős–Gallai theorem or the Havel–Hakimi algorithm. The problem of finding or estimating the number of graphs with a given degree sequence is a problem from the field of graph enumeration.
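The degree sum formula is easy to verify computationally. The following sketch (illustrative; the function name is hypothetical) computes the degree sequence of a graph from its edge list and checks that the degrees sum to 2|E|.

```python
from collections import Counter

def degree_sequence(edges):
    """Non-increasing sequence of vertex degrees for a loop-free edge list.
    Each edge (u, v) contributes 1 to the degree of each endpoint."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(deg.values(), reverse=True)

# A graph with 4 edges; the handshaking lemma says the degree sum is 8.
edges = [(1, 2), (1, 3), (1, 4), (2, 3)]
seq = degree_sequence(edges)
```

Note that the number of odd entries in the resulting sequence is even, as the handshaking lemma requires.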
A vertex with degree 0 is called an isolated vertex. A vertex with degree 1 is called a leaf vertex or end vertex, and the edge incident with that vertex is called a pendant edge. In the graph on the right, the edge incident to the vertex of degree 1 is a pendant edge. This terminology is common in the study of trees in graph theory, and especially of trees as data structures. A vertex with degree n − 1 in a graph on n vertices is called a dominating vertex. If each vertex of the graph has the same degree k, the graph is called a k-regular graph and the graph itself is said to have degree k.
9.
Bipartite graph
–
In the mathematical field of graph theory, a bipartite graph is a graph whose vertices can be divided into two disjoint sets U and V such that every edge connects a vertex in U to one in V. Vertex sets U and V are usually called the parts of the graph. Equivalently, a bipartite graph is a graph that does not contain any odd-length cycles. One often writes G = (U, V, E) to denote a bipartite graph whose partition has the parts U and V. If |U| = |V|, the graph is called balanced; if all vertices on the same side of the bipartition have the same degree, then G is called biregular. When modelling relations between two different classes of objects, bipartite graphs very often arise naturally. A third example is in the academic field of numismatics: ancient coins are made using two positive impressions of the design, and the charts numismatists produce to represent the production of coins are bipartite graphs. More abstract examples include the following: every tree is bipartite; cycle graphs with an even number of vertices are bipartite; every planar graph whose faces all have even length is bipartite. Special cases of this are grid graphs and squaregraphs, in which every inner face consists of 4 edges and every inner vertex has four or more neighbors. In the complete bipartite graph Km,n, every vertex of one part, of size m, is adjacent to every vertex of the other part, of size n; it follows that Km,n has mn edges. Closely related to the complete bipartite graphs are the crown graphs, formed from complete bipartite graphs by removing the edges of a perfect matching. Hypercube graphs, partial cubes, and median graphs are bipartite; in these graphs, the vertices may be labeled by bitvectors, in such a way that two vertices are adjacent if and only if the corresponding bitvectors differ in a single position. A bipartition may be formed by separating the vertices whose bitvectors have an even number of ones from the vertices with an odd number of ones. Trees and squaregraphs form examples of median graphs, and every median graph is a partial cube. Bipartite graphs may be characterized in several different ways: for example, a graph is bipartite if and only if it contains no odd cycle.
A graph is bipartite if and only if it is 2-colorable, and the spectrum of a graph is symmetric if and only if it is a bipartite graph. In bipartite graphs, the size of a minimum vertex cover is equal to the size of a maximum matching (König's theorem). An alternative and equivalent form of this theorem is that the size of the maximum independent set plus the size of the maximum matching is equal to the number of vertices. In any graph without isolated vertices, the size of the minimum edge cover plus the size of a maximum matching equals the number of vertices. Perfection of bipartite graphs is easy to see, but perfection of the complements of bipartite graphs is less trivial; this was one of the results that motivated the initial definition of perfect graphs. The bipartite graphs, line graphs of bipartite graphs, and their complements form four out of the five basic classes of perfect graphs used in the proof of the strong perfect graph theorem.
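The 2-colorability characterization gives a direct test for bipartiteness. The Python sketch below (illustrative only) attempts to 2-color a graph by breadth-first search; the attempt succeeds exactly when the graph contains no odd cycle.

```python
from collections import deque

def is_bipartite(vertices, edges):
    """2-color the graph by BFS; return True iff no edge joins two
    vertices of the same color (equivalently, no odd cycle exists)."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = {}
    for s in vertices:
        if s in color:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]   # alternate colors along edges
                    queue.append(w)
                elif color[w] == color[u]:
                    return False              # odd cycle found
    return True

# An even cycle is bipartite; a triangle (an odd cycle) is not.
square = is_bipartite(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)])
triangle = is_bipartite(range(3), [(0, 1), (1, 2), (2, 0)])
```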
10.
Distance matrix
–
In mathematics, computer science and especially graph theory, a distance matrix is a square matrix containing the distances, taken pairwise, between the elements of a set. Depending upon the application involved, the distance being used to define this matrix may or may not be a metric. If there are N elements, this matrix will have size N×N. In graph-theoretic applications the elements are often referred to as points. When distance is defined as a metric, as for example in the Euclidean distance matrix, the distance matrix inherits the properties of that metric; in coding theory, where the Hamming distance is used, the smallest non-zero entry in the distance matrix measures the error-correcting and error-detecting capability of the code. For a weighted graph, the path-distance function, while well defined, is not necessarily a metric: there need be no restrictions on the weights other than the need to be able to combine and compare them, so negative weights are used in some applications, and since paths may be directed, symmetry cannot be guaranteed. An algebraic formulation of the above can be obtained by using the min-plus algebra. Note that the elements that are not connected directly will need to be set to infinity or a suitably large value for the min-plus operations to work correctly; a zero in these locations would be interpreted as an edge with no distance or cost. If W is an n × n matrix containing the edge weights of a graph G, the distance matrix of G can be computed from W by repeated min-plus multiplication. A distance matrix is also necessary for hierarchical clustering. Distance matrices are used in phylogenetic analysis. In bioinformatics, distance matrices are used to represent protein structures in a coordinate-independent manner, as well as the pairwise distances between two sequences in sequence space; they are used in structural and sequential alignment, and for the determination of structures from NMR or X-ray crystallography. Sometimes it is convenient to express data as a similarity matrix instead.
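The min-plus formulation corresponds to the classic Floyd–Warshall iteration. The sketch below (illustrative, assuming a weight matrix with zeros on the diagonal and infinity for absent edges) computes the full distance matrix from a weight matrix W.

```python
INF = float("inf")

def distance_matrix(W):
    """All-pairs distance matrix from weight matrix W via Floyd-Warshall:
    repeatedly relax D[i][j] against paths through an intermediate k."""
    n = len(W)
    D = [row[:] for row in W]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# Three vertices in a path: 0 -(3)- 1 -(4)- 2; no direct 0-2 edge.
W = [[0, 3, INF],
     [3, 0, 4],
     [INF, 4, 0]]
D = distance_matrix(W)
```

The absent edge is set to infinity, exactly as the text requires; after the iteration, D[0][2] holds the length of the shortest path through vertex 1.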
A similarity matrix is used, for example, to define the distance correlation. For example, suppose these data are to be analyzed, where pixel Euclidean distance is the distance metric. The distance matrix would then be as shown; these data can be viewed in graphic form as a heat map, in which black denotes a distance of 0 and white is maximal distance.
See also: data clustering, computer vision, min-plus matrix multiplication.