Asymmetric graph
In graph theory, a branch of mathematics, an undirected graph is called an asymmetric graph if it has no nontrivial symmetries. Formally, an automorphism of a graph is a permutation p of its vertices with the property that any two vertices u and v are adjacent if and only if p(u) and p(v) are adjacent; the identity mapping of a graph onto itself is always an automorphism, and is called the trivial automorphism of the graph. An asymmetric graph is a graph whose only automorphism is the trivial one. The smallest asymmetric non-trivial graphs have 6 vertices, and the smallest asymmetric regular graphs have ten vertices. One of the two smallest asymmetric cubic graphs is the twelve-vertex Frucht graph discovered in 1939. According to a strengthened version of Frucht's theorem, there are infinitely many asymmetric cubic graphs. The class of asymmetric graphs is closed under complements: a graph G is asymmetric if and only if its complement is. Any n-vertex asymmetric graph can be made symmetric by adding and removing a total of at most n/2 + o(n) edges. The proportion of graphs on n vertices with a nontrivial automorphism tends to zero as n grows, which is informally expressed as "almost all finite graphs are asymmetric".
In contrast, again informally, "almost all infinite graphs are symmetric." Moreover, countably infinite random graphs in the Erdős–Rényi model are, with probability 1, isomorphic to the highly symmetric Rado graph. The smallest asymmetric tree has seven vertices: it consists of three paths of lengths 1, 2, and 3, linked at a common endpoint. In contrast to the situation for graphs, almost all trees are symmetric. In particular, if a tree is chosen uniformly at random among all trees on n labeled nodes, then with probability tending to 1 as n increases, the tree will contain some two leaves adjacent to the same node, and will have symmetries exchanging these two leaves.
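Because an automorphism is just an adjacency-preserving permutation of the vertices, asymmetry of a small graph can be checked by brute force. The sketch below (helper names are illustrative, not from the source, and the approach is only feasible for very small n) enumerates all vertex permutations of the seven-vertex asymmetric tree described above and confirms that only the trivial automorphism survives:

```python
from itertools import permutations

def automorphisms(n, edges):
    """Return all permutations of range(n) that map the edge set to itself."""
    edge_set = {frozenset(e) for e in edges}
    return [p for p in permutations(range(n))
            if {frozenset((p[u], p[v])) for u, v in edges} == edge_set]

# The smallest asymmetric tree: paths of lengths 1, 2, and 3,
# glued together at the common endpoint 0.
tree = [(0, 1), (0, 2), (2, 3), (0, 4), (4, 5), (5, 6)]
print(len(automorphisms(7, tree)))  # 1: only the trivial automorphism
```

By comparison, the three-vertex path has two automorphisms (the identity and the swap of its endpoints), so the same function reports it as symmetric.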
John Horton Conway
John Horton Conway is an English mathematician active in the theory of finite groups, knot theory, number theory, combinatorial game theory, and coding theory. He has also contributed to many branches of recreational mathematics, most notably the invention of the cellular automaton called the Game of Life. Conway spent the first half of his long career at the University of Cambridge, in England, and the second half at Princeton University in New Jersey, where he now holds the title Professor Emeritus. Conway was born in Liverpool, the son of Cyril Horton Conway and Agnes Boyce, and he became interested in mathematics at an early age. By the age of eleven his ambition was to become a mathematician. After leaving sixth form, Conway entered Caius College, Cambridge to study mathematics. Conway, a "terribly introverted adolescent" in school, interpreted his admission to Cambridge as an opportunity to transform himself into a new person: an "extrovert". He was awarded his Bachelor of Arts degree in 1959 and began to undertake research in number theory supervised by Harold Davenport.
Having solved the open problem posed by Davenport on writing numbers as sums of fifth powers, Conway became interested in infinite ordinals. It appears that his interest in games began during his years studying for the Cambridge Mathematical Tripos, where he became an avid backgammon player, spending hours playing the game in the common room. He was awarded his doctorate in 1964 and was appointed as College Fellow and Lecturer in Mathematics at the University of Cambridge. After leaving Cambridge in 1986, he took up the appointment to the John von Neumann Chair of Mathematics at Princeton University. Conway is especially known for the invention of the Game of Life, one of the early examples of a cellular automaton. His initial experiments in that field were done with pen and paper, long before personal computers existed. Since the game was introduced by Martin Gardner in Scientific American in 1970, it has spawned hundreds of computer programs, web sites, and articles, and it is a staple of recreational mathematics.
There is an extensive wiki devoted to cataloging the various aspects of the game. From the earliest days, it has been a favorite in computer labs, both for its theoretical interest and as a practical exercise in programming and data display. At times Conway has said he hates the Game of Life, largely because it has come to overshadow some of the other deeper and more important things he has done. Nevertheless, the game helped launch a new branch of mathematics, the field of cellular automata. The Game of Life is now known to be Turing complete. Conway's career is intertwined with that of mathematics popularizer and Scientific American columnist Martin Gardner. When Gardner featured Conway's Game of Life in his Mathematical Games column in October 1970, it became the most read of all his columns and made Conway an instant celebrity. Gardner and Conway had first corresponded in the late 1950s, and over the years Gardner wrote about many recreational aspects of Conway's work. For instance, he discussed Conway's game of Sprouts and his angel and devil problem.
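The rules of Life are simple enough to state in a few lines of code: a live cell with two or three live neighbors survives, a dead cell with exactly three live neighbors is born, and every other cell dies or stays dead. The following sketch (function and variable names are illustrative, not from any particular source) computes one generation on an unbounded grid:

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life by one generation.
    `live` is a set of (x, y) coordinates of live cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0))
    return {cell for cell, count in neighbor_counts.items()
            if count == 3 or (count == 2 and cell in live)}

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 0), (1, 0), (2, 0)}
print(life_step(blinker))                        # {(1, -1), (1, 0), (1, 1)}
print(life_step(life_step(blinker)) == blinker)  # True: period 2
```

Counting each live cell's contribution to its eight neighbors with a `Counter` avoids fixing a grid size, which matches the unbounded plane on which the game is defined.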
In the September 1976 column he reviewed Conway's book On Numbers and Games and introduced the public to Conway's surreal numbers. Conferences called Gathering 4 Gardner are held every two years to celebrate the legacy of Martin Gardner, and Conway himself has been a featured speaker at these events, discussing various aspects of recreational mathematics. Conway is widely known for his contributions to combinatorial game theory (CGT), a theory of partisan games. He developed the theory with Elwyn Berlekamp and Richard Guy, and with them co-authored the book Winning Ways for your Mathematical Plays. He also wrote the book On Numbers and Games, which lays out the mathematical foundations of CGT. He is one of the inventors of sprouts, as well as philosopher's football. He developed detailed analyses of many other games and puzzles, such as the Soma cube, peg solitaire, and Conway's soldiers, and he came up with the angel problem, which was solved in 2006. He invented a new system of numbers, the surreal numbers, which are related to certain games and have been the subject of a mathematical novel by Donald Knuth.
He also invented a nomenclature for exceedingly large numbers, the Conway chained arrow notation. Much of this is discussed in the 0th part of ONAG. In the mid-1960s, with Michael Guy, son of Richard Guy, Conway established that there are sixty-four convex uniform polychora excluding two infinite sets of prismatic forms; in the process they discovered the grand antiprism, the only non-Wythoffian uniform polychoron. Conway has also suggested a system of notation dedicated to describing polyhedra, called Conway polyhedron notation. In the theory of tessellations, he devised the Conway criterion, which describes rules for deciding whether a prototile will tile the plane. He investigated lattices in higher dimensions, and was the first to determine the symmetry group of the Leech lattice. In knot theory, Conway formulated a new variation of the Alexander polynomial and produced a new invariant now called the Conway polynomial. After lying dormant for more than a decade, this concept became central to work in the 1980s on the novel knot polynomials.
Conway further developed tangle theory and invented a system of notation for tabulating knots, nowadays known as Conway notation, while correcting a number of errors in the 19th-century knot tables and extending them to include all but four of the non-alternating primes with 11 crossings (see Topology Proceedings 7 118). He was also the primary author of the ATLAS of Finite Groups, giving properties of many finite simple groups.
Locally linear graph
In graph theory, a locally linear graph is an undirected graph in which the neighborhood of every vertex is an induced matching. That is, for every vertex v, every neighbor of v should be adjacent to exactly one other neighbor of v. Equivalently, every edge uv of such a graph belongs to a unique triangle uvw. Locally linear graphs have also been called locally matched graphs. Examples of locally linear graphs include the triangular cactus graphs, the line graphs of 3-regular triangle-free graphs, and the Cartesian products of smaller locally linear graphs. Certain Kneser graphs, and certain regular graphs, are also locally linear. Some locally linear graphs have a near-quadratic number of edges; the question of how dense these graphs can be is one of the formulations of the Ruzsa–Szemerédi problem. The densest planar graphs that can be locally linear are also known. Several constructions for locally linear graphs are known. The friendship graphs, graphs formed by gluing together a collection of triangles at a single shared vertex, are locally linear.
They are the only finite graphs having the stronger property that every pair of vertices shares exactly one common neighbor. More generally, every triangular cactus graph, a graph in which all simple cycles are triangles and all pairs of simple cycles have at most one vertex in common, is locally linear. Let G and H be any two locally linear graphs, select a triangle from each of them, and glue the two graphs together by identifying corresponding pairs of vertices in these triangles; the resulting graph remains locally linear. The Cartesian product of any two locally linear graphs also remains locally linear, because any triangles in the product come from triangles in one or the other of its factors. For instance, the nine-vertex Paley graph is the Cartesian product of two triangles, and the Hamming graphs H(d,3) are products of d triangles and again are locally linear. If G is a 3-regular triangle-free graph, its line graph L(G) is the graph formed by expanding each edge of G into a new vertex, and making two vertices adjacent in L(G) when the corresponding edges of G share an endpoint.
These graphs are locally linear, and every 4-regular locally linear graph can be constructed in this way. For instance, the graph of the cuboctahedron can be formed in this way as the line graph of a cube, and the nine-vertex Paley graph is the line graph of the utility graph K3,3. The line graph of the Petersen graph, another graph of this type, has a property analogous to the cages, in that it is the smallest possible graph in which the largest clique has three vertices, each vertex is in two edge-disjoint cliques, and the shortest cycle with edges from distinct cliques has length five. A more complicated expansion process applies to planar graphs. Let G be a planar graph embedded in the plane in such a way that every face is a quadrilateral, such as the graph of a cube. If G has n vertices, it has n − 2 faces. Gluing a square antiprism onto each face of G, and then deleting the original edges of G, produces a new locally linear planar graph with 5n − 8 vertices and 12n − 24 edges. For instance, the cuboctahedron can again be produced in this way, from the two faces (the inside and the outside) of a 4-cycle.
A Kneser graph KG(a,b) has one vertex for each of the b-element subsets of an a-element set; two vertices are adjacent when the corresponding subsets are disjoint. When a = 3b the resulting graph is locally linear, because for each two disjoint b-element subsets there is exactly one other b-element subset disjoint from both of them. The resulting locally linear graph has (1/2)·C(3b,b)·C(2b,b) edges. For instance, for b = 2 the Kneser graph KG(6,2) is a locally linear graph with 15 vertices and 45 edges.
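The defining condition, that every edge lies in exactly one triangle, is straightforward to test directly: the common neighborhood of the two endpoints of each edge must have size exactly one. A minimal sketch (helper names are hypothetical), applied to the Cartesian product of two triangles, i.e. the 3×3 rook's graph description of the nine-vertex Paley graph given above:

```python
from itertools import combinations

def is_locally_linear(vertices, edges):
    """True if every edge of the graph lies in exactly one triangle."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return all(len(adj[u] & adj[v]) == 1 for u, v in edges)

# Cartesian product of two triangles: vertices are pairs (i, j),
# adjacent exactly when they agree in one coordinate and differ in the other.
verts = [(i, j) for i in range(3) for j in range(3)]
edges = [(p, q) for p, q in combinations(verts, 2)
         if (p[0] == q[0]) != (p[1] == q[1])]
print(is_locally_linear(verts, edges))  # True
```

A complete graph K4, in which every edge lies in two triangles, fails the same test, which matches the uniqueness requirement in the definition.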
Zero-symmetric graph
In the mathematical field of graph theory, a zero-symmetric graph is a connected graph in which all vertices are symmetric to each other, each vertex has three incident edges, and these three edges are not symmetric to each other. More precisely, it is a connected vertex-transitive cubic graph whose edges are partitioned into three different orbits by the automorphism group. In these graphs, for every two vertices u and v, there is exactly one graph automorphism that takes u into v. The name for this class of graphs was coined by R. M. Foster in a 1966 letter to H. S. M. Coxeter. The smallest zero-symmetric graph is a nonplanar graph with 18 vertices; its LCF notation is [5,−5]^9. Among planar graphs, the truncated cuboctahedral and truncated icosidodecahedral graphs are also zero-symmetric. These examples are all bipartite graphs; however, there also exist larger examples of zero-symmetric graphs that are not bipartite. Every finite zero-symmetric graph is a Cayley graph, a property that does not always hold for cubic vertex-transitive graphs more generally, and that helps in the solution of combinatorial enumeration tasks concerning zero-symmetric graphs.
There are 97687 zero-symmetric graphs on up to 1280 vertices. These graphs form 89% of the cubic Cayley graphs and 88% of all connected vertex-transitive cubic graphs on the same numbers of vertices. All known finite connected zero-symmetric graphs contain a Hamiltonian cycle, but it is unknown whether every finite connected zero-symmetric graph is Hamiltonian. This is a special case of the Lovász conjecture that every finite connected vertex-transitive graph and every finite Cayley graph is Hamiltonian. See also: semi-symmetric graphs, graphs that have symmetries between every two edges but not between every two vertices.
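LCF notation describes a cubic Hamiltonian graph compactly: the vertices form a Hamiltonian cycle, and each vertex i additionally receives one chord, whose other endpoint is offset by a jump taken from a repeating pattern. The sketch below (a hypothetical helper, and assuming [5, −5]^9 as the LCF notation of the smallest zero-symmetric graph) builds such a graph and checks that it is cubic:

```python
from collections import Counter

def lcf_graph(pattern, reps):
    """Build a cubic Hamiltonian graph from LCF notation [pattern]^reps.
    Vertices 0..n-1 form a Hamiltonian cycle; vertex i also gets a chord
    to (i + jumps[i]) mod n, where jumps is the repeated pattern."""
    jumps = pattern * reps
    n = len(jumps)
    edges = {frozenset((i, (i + 1) % n)) for i in range(n)}          # cycle
    edges |= {frozenset((i, (i + jumps[i]) % n)) for i in range(n)}  # chords
    return n, edges

# [5, -5]^9: assumed LCF notation for the smallest zero-symmetric graph.
n, g = lcf_graph([5, -5], 9)
degrees = Counter(v for e in g for v in e)
print(n, len(g))                               # 18 27
print(all(d == 3 for d in degrees.values()))   # True: the graph is cubic
```

Each chord is generated twice (once from each endpoint, since +5 from vertex i matches −5 from vertex i+5), so storing edges as a set of frozensets deduplicates them and yields 18 cycle edges plus 9 chords.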
Biregular graph
In graph-theoretic mathematics, a biregular graph or semiregular bipartite graph is a bipartite graph G = (U, V, E) for which every two vertices on the same side of the given bipartition have the same degree as each other. If the degree of the vertices in U is x and the degree of the vertices in V is y, the graph is said to be (x, y)-biregular. Every complete bipartite graph K_{a,b} is (b, a)-biregular. The rhombic dodecahedron, whose graph is (3, 4)-biregular, is another example. An (x, y)-biregular graph G = (U, V, E) must satisfy the equation x|U| = y|V|. This follows from a simple double counting argument: the number of endpoints of edges in U is x|U|, the number of endpoints of edges in V is y|V|, and each edge contributes the same amount (one endpoint) to both numbers. Every regular bipartite graph is also biregular. Every edge-transitive graph that is not also vertex-transitive must be biregular; in particular, every edge-transitive graph is either regular or biregular. The Levi graphs of geometric configurations are biregular.
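The double counting argument can be checked mechanically: summing degrees on either side of the bipartition counts every edge exactly once from each end. A minimal sketch with hypothetical helper names, using K_{2,3} as the example:

```python
def biregular_degrees(U, V, edges):
    """Return (x, y) if the bipartite graph is (x, y)-biregular, else None."""
    deg = {v: 0 for v in list(U) + list(V)}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    side_u = {deg[u] for u in U}
    side_v = {deg[v] for v in V}
    if len(side_u) == 1 and len(side_v) == 1:
        return side_u.pop(), side_v.pop()
    return None

# The complete bipartite graph K_{2,3} is (3, 2)-biregular.
U, V = ['u0', 'u1'], ['v0', 'v1', 'v2']
edges = [(u, v) for u in U for v in V]
x, y = biregular_degrees(U, V, edges)
print(x, y)                      # 3 2
print(x * len(U) == y * len(V))  # True: x|U| = y|V| = number of edges = 6
```

Removing a single edge breaks the equal-degree condition on both sides, and the helper then returns None, illustrating that biregularity is strictly stronger than bipartiteness.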
Vertex-transitive graph
In the mathematical field of graph theory, a vertex-transitive graph is a graph G such that, given any two vertices v1 and v2 of G, there is some automorphism f : V → V such that f(v1) = v2. In other words, a graph is vertex-transitive if its automorphism group acts transitively upon its vertices. A graph is vertex-transitive if and only if its graph complement is, since the group actions are identical. Every symmetric graph without isolated vertices is vertex-transitive, and every vertex-transitive graph is regular. However, not all vertex-transitive graphs are symmetric, and not all regular graphs are vertex-transitive. Finite vertex-transitive graphs include the symmetric graphs; the finite Cayley graphs are also vertex-transitive, as are the graphs of the vertices and edges of the Archimedean solids. Potočnik, Spiga, and Verret have constructed a census of all connected cubic vertex-transitive graphs on at most 1280 vertices. Although every Cayley graph is vertex-transitive, there exist other vertex-transitive graphs that are not Cayley graphs.
The most famous example is the Petersen graph, but others can be constructed, including the line graphs of edge-transitive non-bipartite graphs with odd vertex degrees. The edge-connectivity of a vertex-transitive graph is equal to the degree d, while the vertex-connectivity will be at least 2(d + 1)/3. If the degree is 4 or less, or the graph is also edge-transitive, or the graph is a minimal Cayley graph, then the vertex-connectivity will be equal to d. Infinite vertex-transitive graphs include: infinite paths, infinite regular trees (e.g. the Cayley graph of the free group), graphs of uniform tessellations (including all tilings by regular polygons), infinite Cayley graphs, and the Rado graph. Two countable vertex-transitive graphs are called quasi-isometric if the ratio of their distance functions is bounded from below and from above. A well-known conjecture stated that every infinite vertex-transitive graph is quasi-isometric to a Cayley graph. A counterexample was proposed by Diestel and Leader in 2001. In 2005, Eskin, Fisher, and Whyte confirmed the counterexample.
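Vertex-transitivity of a small finite graph can be verified directly from the definition: compute the automorphism group by brute force and check that the orbit of one vertex covers all vertices. A sketch with illustrative names, feasible only for very small graphs since it enumerates all n! permutations:

```python
from itertools import permutations

def is_vertex_transitive(n, edges):
    """Brute-force check that the automorphism group has a single vertex orbit."""
    edge_set = {frozenset(e) for e in edges}
    autos = [p for p in permutations(range(n))
             if {frozenset((p[u], p[v])) for u, v in edges} == edge_set]
    orbit_of_0 = {p[0] for p in autos}   # orbit of vertex 0 under the group
    return len(orbit_of_0) == n

cycle = [(i, (i + 1) % 5) for i in range(5)]   # the 5-cycle: vertex-transitive
path = [(0, 1), (1, 2)]                        # the 3-vertex path: not
print(is_vertex_transitive(5, cycle))  # True
print(is_vertex_transitive(3, path))   # False
```

Checking the orbit of a single vertex suffices: a group action is transitive exactly when the orbit of any one element is the whole set.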
See also: Edge-transitive graph, Lovász conjecture, Semi-symmetric graph, Zero-symmetric graph. Weisstein, Eric W. "Vertex-transitive graph". MathWorld. Potočnik, Primož; Spiga, Pablo; Verret, Gabriel (2012), A census of small connected cubic vertex-transitive graphs.
Eigenvalues and eigenvectors
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector that changes by only a scalar factor when that linear transformation is applied to it. More formally, if T is a linear transformation from a vector space V over a field F into itself and v is a vector in V that is not the zero vector, then v is an eigenvector of T if T(v) is a scalar multiple of v. This condition can be written as the equation T(v) = λv, where λ is a scalar in the field F, known as the eigenvalue, characteristic value, or characteristic root associated with the eigenvector v. If the vector space V is finite-dimensional, the linear transformation T can be represented as a square matrix A, and the vector v by a column vector, rendering the above mapping as a matrix multiplication on the left-hand side and a scaling of the column vector on the right-hand side in the equation Av = λv. There is a direct correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space to itself, given any basis of the vector space.
For this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations. Geometrically, an eigenvector corresponding to a real nonzero eigenvalue points in a direction that is stretched by the transformation, and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen for "proper" or "characteristic". Originally used to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization. In essence, an eigenvector v of a linear transformation T is a non-zero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue.
This condition can be written as the equation T(v) = λv, referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. The Mona Lisa example pictured at right provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping: points in the top half are moved to the right, and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting. The vectors pointing to each point in the original image are therefore tilted right or left, and made longer or shorter, by the transformation. Notice that points along the horizontal axis do not move at all. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction.
Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like d/dx, in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as (d/dx)e^{λx} = λe^{λx}. Alternatively, the linear transformation could take the form of an n-by-n matrix, in which case the eigenvectors are n-by-1 matrices that are also referred to as eigenvectors. If the linear transformation is expressed in the form of an n-by-n matrix A, the eigenvalue equation above for a linear transformation can be rewritten as the matrix multiplication Av = λv, where the eigenvector v is an n-by-1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix, for example by diagonalizing it. Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them: the set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation.
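The shear example can be reproduced numerically. Using NumPy (an illustrative choice of library, not mentioned in the text, with an arbitrary shear factor of 0.5), the shear matrix has the repeated eigenvalue 1, and a horizontal vector is left unchanged while a vertical one is not:

```python
import numpy as np

# Shear mapping as in the Mona Lisa example: each point moves
# horizontally in proportion to its height above the central axis.
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # [1. 1.]: both eigenvalues equal one

v = np.array([1.0, 0.0])             # horizontal vector: an eigenvector
print(np.allclose(A @ v, 1.0 * v))   # True: Av = λv with λ = 1

w = np.array([0.0, 1.0])             # vertical vector: tilted by the shear
print(np.allclose(A @ w, w))         # False: not an eigenvector
```

The matrix is defective (its eigenvalue 1 has only a one-dimensional eigenspace), which is why every eigenvector of this shear points along the horizontal axis.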
The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace or characteristic space of T. If the set of eigenvectors of T forms a basis of the domain of T, this basis is called an eigenbasis. Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations. In the 18th century, Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.