Graph drawing is an area of mathematics and computer science that combines methods from geometric graph theory and information visualization to derive two-dimensional depictions of graphs arising from applications such as social network analysis, cartography, and bioinformatics. A drawing of a graph or network diagram is a pictorial representation of the vertices and edges of a graph. This drawing should not be confused with the graph itself: different layouts can correspond to the same graph. In the abstract, all that matters is which pairs of vertices are connected by edges. In the concrete, however, the arrangement of these vertices and edges within a drawing affects its understandability, fabrication cost, and aesthetics. The problem worsens if the graph changes over time by adding and deleting edges and the goal is to preserve the user's mental map. Graphs are frequently drawn as node–link diagrams in which the vertices are represented as disks, boxes, or textual labels and the edges are represented as line segments, polylines, or curves in the Euclidean plane.
Node–link diagrams can be traced back to the 13th-century work of Ramon Llull, who drew diagrams of this type for complete graphs in order to analyze all pairwise combinations among sets of metaphysical concepts. In the case of directed graphs, arrowheads are a commonly used graphical convention to show their orientation. Upward planar drawing uses the convention that every edge is oriented from a lower vertex to a higher vertex, making arrowheads unnecessary. Alternative conventions to node–link diagrams include adjacency representations such as circle packings, in which vertices are represented by disjoint regions in the plane and edges are represented by adjacencies between regions. Many different quality measures have been defined for graph drawings, in an attempt to find objective means of evaluating their aesthetics and usability. In addition to guiding the choice between different layout methods for the same graph, some layout methods attempt to directly optimize these measures. The crossing number of a drawing is the number of pairs of edges that cross each other.
If the graph is planar, it is often convenient to draw it without any edge intersections. However, nonplanar graphs frequently arise in applications, so graph drawing algorithms must generally allow for edge crossings. The area of a drawing is the size of its smallest bounding box, relative to the closest distance between any two vertices. Drawings with smaller area are generally preferable to those with larger area, because they allow the features of the drawing to be shown at greater size and therefore more legibly; the aspect ratio of the bounding box may also be important. Symmetry display is the problem of finding symmetry groups within a given graph and finding a drawing that displays as much of the symmetry as possible; some layout methods automatically lead to symmetric drawings. It is important that edges have shapes that are as simple as possible, to make it easier for the eye to follow them. In polyline drawings, the complexity of an edge may be measured by its number of bends, and many methods aim to provide drawings with few total bends or few bends per edge.
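The crossing number of a straight-line drawing can be computed directly by testing every pair of independent edges for a proper intersection. A minimal sketch, assuming a standard counter-clockwise orientation test; the function names and the K4 example coordinates are illustrative choices, not any standard API:

```python
def ccw(a, b, c):
    """Sign of the cross product (b - a) x (c - a): >0 left turn, <0 right turn."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True if the open segments p1p2 and q1q2 properly intersect."""
    d1 = ccw(q1, q2, p1)
    d2 = ccw(q1, q2, p2)
    d3 = ccw(p1, p2, q1)
    d4 = ccw(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def crossing_count(edges, pos):
    """Number of pairs of edges that cross in a straight-line drawing."""
    count = 0
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            (a, b), (c, d) = edges[i], edges[j]
            if len({a, b, c, d}) < 4:  # shared endpoint: not a proper crossing
                continue
            if segments_cross(pos[a], pos[b], pos[c], pos[d]):
                count += 1
    return count

# K4 drawn with its vertices on a circle: only the two diagonals cross.
pos = {0: (0, 1), 1: (1, 0), 2: (0, -1), 3: (-1, 0)}
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```

With these coordinates the drawing of K4 has exactly one crossing, which is optimal since K4 is planar only in other layouts.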
For spline curves the complexity of an edge may be measured by the number of control points on the edge. Several commonly used quality measures concern lengths of edges: it is generally desirable to minimize the total length of the edges as well as the maximum length of any edge. Additionally, it may be preferable for the lengths of edges to be uniform rather than highly varied. Angular resolution is a measure of the sharpest angles in a graph drawing. If a graph has vertices with high degree then it will necessarily have small angular resolution, but the angular resolution can be bounded below by a function of the degree. The slope number of a graph is the minimum number of distinct edge slopes needed in a drawing with straight line segment edges. Cubic graphs have slope number at most four, but graphs of degree five may have unbounded slope number. There are many different graph layout strategies. In force-based layout systems, the graph drawing software modifies an initial vertex placement by continuously moving the vertices according to a system of forces based on physical metaphors related to systems of springs or molecular mechanics.
These systems typically combine attractive forces between adjacent vertices with repulsive forces between all pairs of vertices, in order to seek a layout in which edge lengths are small while vertices are well-separated. These systems may perform gradient descent based minimization of an energy function, or they may translate the forces directly into velocities or accelerations for the moving vertices. Spectral layout methods use as coordinates the eigenvectors of a matrix such as the Laplacian derived from the adjacency matrix of the graph.
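The force-based scheme described above can be sketched in a few dozen lines. This is a toy spring embedder in the spirit of the Fruchterman–Reingold method, not a faithful reimplementation of any particular system; the function name, the force laws (repulsion k²/d, attraction d²/k), and the bounded-step update are all assumptions made for illustration:

```python
import math
import random

def force_layout(nodes, edges, iters=200, k=1.0, max_step=0.1, seed=1):
    """Toy spring embedder: adjacent vertices attract, all pairs repel,
    and each vertex moves a bounded step along its net force."""
    rng = random.Random(seed)
    pos = {v: [rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)] for v in nodes}
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        # repulsion between every pair of vertices keeps them well separated
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[u][0] += f * dx / d; disp[u][1] += f * dy / d
                disp[v][0] -= f * dx / d; disp[v][1] -= f * dy / d
        # attraction along edges keeps edge lengths short
        for u, v in edges:
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[u][0] -= f * dx / d; disp[u][1] -= f * dy / d
            disp[v][0] += f * dx / d; disp[v][1] += f * dy / d
        # translate the net force into a bounded displacement per iteration
        for v in nodes:
            dlen = math.hypot(disp[v][0], disp[v][1]) or 1e-9
            step = min(dlen, max_step)
            pos[v][0] += disp[v][0] / dlen * step
            pos[v][1] += disp[v][1] / dlen * step
    return pos
```

Running it on a 4-cycle settles the vertices into a roughly square arrangement, since edge attraction balances the all-pairs repulsion.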
In the psychology of motivation, balance theory is a theory of attitude change, proposed by Fritz Heider. It conceptualizes the cognitive consistency motive as a drive toward psychological balance; the consistency motive is the urge to maintain one's beliefs over time. Heider proposed that "sentiment" or liking relationships are balanced if the affect valence in a system multiplies out to a positive result. In social network analysis, balance theory is the extension proposed by Frank Harary and Dorwin Cartwright; it was the framework for the discussion at a Dartmouth College symposium in September 1975. For example: a Person (P) who likes an Other person (O) will be balanced by the same valence attitude on behalf of the other. Symbolically, P (+) → O and O (+) → P results in psychological balance. This can be extended to objects (X) as well, thus introducing triadic relationships. If a person P likes object X but dislikes other person O, what does P feel upon learning that person O created the object X? This is symbolized as: P (+) → X, P (−) → O, O (+) → X. Cognitive balance is achieved when there are three positive links or two negatives with one positive.
Two positive links and one negative, as in the example above, creates imbalance or cognitive dissonance. Multiplying the signs shows that the person will perceive imbalance in this relationship and will be motivated to correct the imbalance somehow. The Person can either: decide that O isn't so bad after all, decide that X isn't as great as first thought, or conclude that O couldn't really have made X. Any of these will result in psychological balance, thus resolving the dilemma and satisfying the drive. To predict the outcome of a situation using Heider's balance theory, one must weigh the effects of all the potential results, and the one requiring the least amount of effort will be the likely outcome. Determining whether the triad is balanced is simple math: (+)(+)(+) = +; (−)(+)(−) = +; (−)(+)(+) = −. Balance theory is also useful in examining how celebrity endorsement affects consumers' attitudes toward products. If a person likes a celebrity and perceives that said celebrity likes a product, said person will tend to like the product more, in order to achieve psychological balance.
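The sign arithmetic above reduces to a one-line predicate. A minimal sketch, where the function name and the +1/−1 encoding of liking and disliking are illustrative conventions:

```python
def triad_balanced(p_o, p_x, o_x):
    """A P-O-X triad is balanced iff the product of its three signs is positive.

    Each argument is +1 (likes / positive sentiment) or -1 (dislikes):
    p_o is P's attitude toward O, p_x is P's toward X, o_x is O's toward X.
    """
    return p_o * p_x * o_x > 0
```

For the example in the text (P likes X, P dislikes O, O created X), `triad_balanced(-1, +1, +1)` is false: the triad is imbalanced and P is motivated to change one of the signs.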
However, if the person already had a dislike for the product being endorsed by the celebrity, they may begin disliking the celebrity instead, again to achieve psychological balance. Heider's balance theory can also explain why holding the same negative attitudes toward others promotes closeness. Frank Harary and Dorwin Cartwright looked at Heider's triads as 3-cycles in a signed graph; the sign of a path in a graph is the product of the signs of its edges. They considered cycles in a signed graph representing a social network. A balanced signed graph has only cycles of positive sign. Harary proved that a balanced graph is polarized, that is, it decomposes into two entirely positive subgraphs that are joined by negative edges. In the interest of realism, a weaker property was suggested by Davis: no cycle has exactly one negative edge. Graphs with this property may decompose into more than two entirely positive subgraphs, called clusters; the property has been called the clusterability axiom. Balanced graphs are recovered by additionally assuming the parsimony axiom: the subgraph of positive edges has at most two components.
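Harary's polarization theorem suggests a direct algorithmic test: try to 2-color the vertices into two camps so that every positive edge stays inside a camp and every negative edge crosses between camps. This is a sketch of that idea (the function name and edge encoding are my own; it is the standard BFS 2-coloring technique, not code from the original papers):

```python
from collections import deque

def is_balanced(n, signed_edges):
    """Harary: a signed graph is balanced iff it splits into two camps with
    positive edges inside camps and negative edges between them.

    signed_edges is a list of (u, v, sign) with sign +1 or -1 and
    vertices numbered 0..n-1. Checked by BFS 2-coloring, where a
    negative edge forces its endpoints onto opposite sides.
    """
    adj = [[] for _ in range(n)]
    for u, v, sign in signed_edges:
        adj[u].append((v, sign))
        adj[v].append((u, sign))
    side = [None] * n
    for s in range(n):
        if side[s] is not None:
            continue
        side[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v, sign in adj[u]:
                want = side[u] if sign > 0 else 1 - side[u]
                if side[v] is None:
                    side[v] = want
                    queue.append(v)
                elif side[v] != want:
                    return False  # some cycle has negative sign
    return True
```

A triangle of three mutual enemies fails the test, while two allies sharing a common enemy passes, matching the cycle-sign characterization.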
The significance of balance theory for social dynamics was expressed by Anatol Rapoport: the hypothesis implies that attitudes of the group members will tend to change in such a way that one's friends' friends will tend to become one's friends, one's enemies' enemies will tend to become one's friends, and one's enemies' friends and one's friends' enemies will tend to become one's enemies; moreover, these changes tend to operate even across several removes. Note that a triangle of three mutual enemies makes a clusterable graph but not a balanced one. Therefore, in a clusterable network one cannot conclude that the enemy of my enemy is my friend, although this aphorism is a fact in a balanced network. Claude Flament expressed a limit to balance theory imposed by reconciling weak ties with relationships of stronger force such as family bonds: One might think that a valued algebraic graph is necessary to represent psycho-social reality, if it is to take into account the degree of intensity of interpersonal relationships.
But in fact it seems hardly possible to de…
A small-world network is a type of mathematical graph in which most nodes are not neighbors of one another, but the neighbors of any given node are likely to be neighbors of each other, and most nodes can be reached from every other node by a small number of hops or steps. Specifically, a small-world network is defined to be a network where the typical distance L between two randomly chosen nodes grows proportionally to the logarithm of the number of nodes N in the network, that is L ∝ log N, while the clustering coefficient is not small. In the context of a social network, this results in the small world phenomenon of strangers being linked by a short chain of acquaintances. Many empirical graphs show the small-world effect, including social networks, the underlying architecture of the Internet, wikis such as Wikipedia, and gene networks. It is also the inspiration for many network-on-chip architectures in contemporary computer hardware. A certain category of small-world networks was identified as a class of random graphs by Duncan Watts and Steven Strogatz in 1998.
They noted that graphs could be classified according to two independent structural features, namely the clustering coefficient and the average node-to-node distance (average shortest path length). Purely random graphs, built according to the Erdős–Rényi model, exhibit a small average shortest path length along with a small clustering coefficient. Watts and Strogatz measured that in fact many real-world networks have a small average shortest path length, but a clustering coefficient significantly higher than expected by random chance. Watts and Strogatz then proposed a novel graph model, now named the Watts–Strogatz model, with a small average shortest path length and a large clustering coefficient. The crossover in the Watts–Strogatz model between a "large world" and a small world was first described by Barthelemy and Amaral in 1999. This work was followed by a large number of studies, including exact results. Braunstein found that for weighted Erdős–Rényi networks where the weights have a very broad distribution, the optimal path becomes longer and scales as N^(1/3). Small-world networks tend to contain cliques and near-cliques, meaning sub-networks which have connections between almost any two nodes within them.
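The Watts–Strogatz construction, starting from a ring lattice and rewiring each edge with probability p, can be sketched as follows. This is a simplified illustration under my own naming; the collision handling is cruder than in published implementations:

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each joined to its k nearest neighbours
    (k even), then each edge is rewired with probability p."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add((i, (i + j) % n))  # connect to the j-th clockwise neighbour
    rewired = set()
    for u, v in sorted(edges):
        if rng.random() < p:
            w = rng.randrange(n)
            # avoid self-loops and duplicating an already-rewired edge
            while w == u or (u, w) in rewired or (w, u) in rewired:
                w = rng.randrange(n)
            rewired.add((u, w))
        else:
            rewired.add((u, v))
    return rewired
```

With p = 0 the result is the pure ring lattice (high clustering, long paths); even a small p introduces shortcuts that sharply reduce the average path length while clustering stays high, which is the crossover Barthelemy and Amaral analyzed.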
This follows from the defining property of a high clustering coefficient. Secondly, most pairs of nodes will be connected by at least one short path; this follows from the defining property that the mean shortest path length is small. Several other properties are often associated with small-world networks. Typically there is an over-abundance of hubs, nodes in the network with a high number of connections; these hubs serve as the common connections mediating the short path lengths between other nodes. By analogy, the small-world network of airline flights has a small mean path length because many flights are routed through hub cities. This property is often analyzed by considering the fraction of nodes in the network that have a particular number of connections going into them. Networks with a greater than expected number of hubs will have a greater fraction of nodes with high degree, and consequently the degree distribution will be enriched at high degree values; this is known colloquially as a fat-tailed distribution. Graphs of very different topology qualify as small-world networks as long as they satisfy the two definitional requirements above.
Network small-worldness has been quantified by a small-world coefficient, σ, calculated by comparing the clustering and path length of a given network to an equivalent random network with the same average degree: σ = (C/Cr)/(L/Lr). If σ > 1, the network is small-world. However, this metric is known to perform poorly because it is heavily influenced by the network's size. Another method for quantifying network small-worldness utilizes the original definition of the small-world network, comparing the clustering of a given network to an equivalent lattice network and its path length to an equivalent random network. The small-world measure is defined as ω = Lr/L − C/Cℓ, where the characteristic path length L and clustering coefficient C are calculated from the network being tested, Cℓ is the clustering coefficient for an equivalent lattice network, and Lr is the characteristic path length for an equivalent random network. A minor transformation of ω yields a value that more conveniently ranges between 0 and 1. Still another method for quantifying small-worldness normalizes both the network's clustering and path length relative to these characteristics in equivalent lattice and random networks.
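Both coefficients are simple ratios once C, L and their lattice/random reference values are measured. A minimal sketch; the function names are my own, and the numeric values in the usage note are hypothetical measurements, not data from any real network:

```python
def sigma(C, C_rand, L, L_rand):
    """Small-world coefficient: sigma = (C/Cr) / (L/Lr); sigma > 1
    suggests a small-world network (but is sensitive to network size)."""
    return (C / C_rand) / (L / L_rand)

def omega(L, L_rand, C, C_latt):
    """Small-world measure: omega = Lr/L - C/Cl. Values near 0 indicate
    small-worldness; near -1, lattice-like; near +1, random-like."""
    return L_rand / L - C / C_latt
```

For instance, hypothetical measurements C = 0.5, Cr = 0.1, L = 3.2, Lr = 3.0, Cℓ = 0.6 give σ ≈ 4.7 and ω ≈ 0.10, both consistent with small-world structure.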
The Small World Index is defined as SWI = ((L − Lℓ)/(Lr − Lℓ)) × ((C − Cr)/(Cℓ − Cr)), where Lℓ is the characteristic path length of the equivalent lattice network and Cr is the clustering coefficient of the equivalent random network.
A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. That is, the fraction P(k) of nodes in the network having k connections to other nodes goes for large values of k as P(k) ∼ k^(−γ), where γ is a parameter whose value is typically in the range 2 < γ < 3, although occasionally it may lie outside these bounds. Many networks have been reported to be scale-free, although statistical analysis has refuted many of these claims and seriously questioned others. Preferential attachment and the fitness model have been proposed as mechanisms to explain conjectured power law degree distributions in real networks. In studies of the networks of citations between scientific papers, Derek de Solla Price showed in 1965 that the number of links to papers (i.e. the number of citations they receive) had a heavy-tailed distribution following a Pareto distribution or power law, and thus that the citation network is scale-free. He did not, however, use the term "scale-free network", which was not coined until some decades later.
In a paper in 1976, Price proposed a mechanism to explain the occurrence of power laws in citation networks, which he called "cumulative advantage" but which is today better known under the name preferential attachment. Recent interest in scale-free networks started in 1999 with work by Albert-László Barabási and colleagues at the University of Notre Dame who mapped the topology of a portion of the World Wide Web, finding that some nodes, which they called "hubs", had many more connections than others and that the network as a whole had a power-law distribution of the number of links connecting to a node. After finding that a few other networks, including some social and biological networks, also had heavy-tailed degree distributions, Barabási and collaborators coined the term "scale-free network" to describe the class of networks that exhibit a power-law degree distribution. However, studying seven examples of networks in social, technological and physical systems, Amaral et al. were not able to find a scale-free network among these seven examples.
Only one of these examples, the movie-actor network, had a degree distribution P(k) following a power law regime for moderate k, though this power law regime was followed by a sharp cutoff showing exponential decay for large k. Barabási and Albert proposed a generative mechanism to explain the appearance of power-law distributions, which they called "preferential attachment" and which is essentially the same as that proposed by Price. Analytic solutions for this mechanism were presented in 2000 by Dorogovtsev and Samukhin and independently by Krapivsky and Leyvraz, and later rigorously proved by mathematician Béla Bollobás. Notably, however, this mechanism only produces a specific subset of networks in the scale-free class, and many alternative mechanisms have been discovered since. The history of scale-free networks also includes some disagreement. On an empirical level, the scale-free nature of several networks has been called into question. For instance, the three brothers Faloutsos concluded that the Internet had a power law degree distribution on the basis of traceroute data.
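The preferential attachment mechanism can be sketched in a few lines: each new node attaches m edges to existing nodes chosen with probability proportional to their current degree. This is a simplified illustration under assumed names, using the common trick of sampling from a list in which each node appears once per unit of degree; it is not the exact procedure of Barabási and Albert's paper:

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow a network to n nodes; each newcomer attaches (up to) m edges
    to existing nodes with probability proportional to their degree."""
    rng = random.Random(seed)
    edges = []
    targets = list(range(m))  # start by wiring the first newcomer to m seed nodes
    repeated = []             # each node appears here once per unit of its degree
    for new in range(m, n):
        for t in set(targets):          # de-duplicate sampled targets
            edges.append((new, t))
            repeated.extend([new, t])   # record one degree unit per endpoint
        # degree-proportional sampling: uniform choice from the repeated list
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges
```

Running this for a few hundred nodes already produces pronounced hubs whose degree far exceeds the average, the signature of a heavy-tailed degree distribution.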
On a theoretical level, refinements to the abstract definition of scale-free have been proposed. For example, Li et al. offered a potentially more precise "scale-free metric". Let G be a graph with edge set E, and denote the degree of a vertex v by deg(v). Define s(G) = ∑_{(u,v) ∈ E} deg(u)·deg(v); this quantity is maximized when high-degree vertices are connected to other high-degree vertices. Now define S(G) = s(G)/s_max, where s_max is the maximum value of s(H) over all graphs H with a degree distribution identical to that of G. This gives a metric between 0 and 1, where a graph G with small S(G) is "scale-rich", and a graph G with S(G) close to 1 is "scale-free". This definition captures the notion of self-similarity implied in the name "scale-free". The most notable characteristic in a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and are thought to serve specific purposes in their networks, although this depends greatly on the domain. The scale-free property strongly correlates with the network's robustness to failure. It turns out that the major hubs are closely followed by smaller ones.
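The quantity s(G) is a straightforward sum over edges. A minimal sketch (the function name and the star/path examples are illustrative; note these two example graphs have different degree distributions, so this only demonstrates how s is computed, not the normalized metric S):

```python
def s_metric(edges):
    """s(G) = sum over edges (u, v) of deg(u) * deg(v).
    High-degree vertices linking to each other drive s(G) up."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(deg[u] * deg[v] for u, v in edges)

star = [(0, i) for i in range(1, 5)]        # one hub of degree 4, four leaves
path = [(0, 1), (1, 2), (2, 3), (3, 4)]     # five nodes in a line, no hub
```

The star scores 4·1 on each of its four edges (s = 16), while the path scores 1·2 + 2·2 + 2·2 + 2·1 (s = 12): concentrating degree on a hub that its edges touch raises s.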
These smaller hubs, in turn, are followed by other nodes with an even smaller degree, and so on. This hierarchy allows for a fault-tolerant behavior.
A transport network, or transportation network, is a realisation of a spatial network, describing a structure which permits either vehicular movement or flow of some commodity. Examples include but are not limited to road networks, air routes, pipelines and power lines. Transport network analysis is used to determine the flow of vehicles through a transport network using mathematical graph theory; it may combine different modes of transport, for example walking and car, to model multi-modal journeys. Transport network analysis falls within the field of transport engineering. Traffic has been studied extensively using statistical physics methods. A real transport network of Beijing was studied using a network approach and percolation theory; the research showed that one can characterize the quality of traffic in a city at each time in the day using the percolation threshold, see Fig. 1. See also: Braess's paradox, flow network, heuristic routing, Interplanetary Transport Network, network science, percolation theory.
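The percolation view of traffic can be illustrated on a toy road grid: treat each road segment as functional with probability p (e.g. its speed is above some threshold) and track the fraction of intersections in the largest connected cluster. This is a generic bond-percolation sketch under assumed names, not the actual methodology of the Beijing study:

```python
import random
from collections import deque

def giant_component_fraction(n, p, seed=0):
    """Bond percolation on an n x n grid of intersections: keep each road
    segment with probability p, then return the fraction of intersections
    in the largest connected cluster."""
    rng = random.Random(seed)
    adj = {(x, y): [] for x in range(n) for y in range(n)}
    for x in range(n):
        for y in range(n):
            # candidate segments to the east and to the north
            for nx, ny in ((x + 1, y), (x, y + 1)):
                if nx < n and ny < n and rng.random() < p:
                    adj[(x, y)].append((nx, ny))
                    adj[(nx, ny)].append((x, y))
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best / (n * n)
```

Sweeping p from 0 to 1 shows the giant cluster appearing abruptly near the percolation threshold, which is the quantity the traffic study uses to characterize network-wide congestion at each time of day.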
In the context of network theory, a complex network is a graph with non-trivial topological features—features that do not occur in simple networks such as lattices or random graphs but often occur in graphs modelling real systems. The study of complex networks is a young and active area of scientific research inspired by the empirical study of real-world networks such as computer networks, technological networks, brain networks and social networks. Most social and technological networks display substantial non-trivial topological features, with patterns of connection between their elements that are neither purely regular nor purely random. Such features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure, and hierarchical structure. In the case of directed networks these features also include reciprocity, triad significance profile and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features.
The most complex structures can be realized by networks with a medium number of interactions; this corresponds to the fact that the maximum information content is obtained for medium probabilities. Two well-known and much studied classes of complex networks are scale-free networks and small-world networks, whose discovery and definition are canonical case-studies in the field. Both are characterized by specific structural features: power-law degree distributions for the former and short path lengths and high clustering for the latter. However, as the study of complex networks has continued to grow in importance and popularity, many other aspects of network structure have attracted attention as well. The study of complex networks has also been expanded to networks of networks. If those networks are interdependent, they become significantly more vulnerable to random failures and targeted attacks and exhibit cascading failures and first-order percolation transitions. Furthermore, the collective behavior of a network in the presence of node failure and recovery has been studied.
It was found that such a network can have spontaneous recoveries. The field continues to develop at a brisk pace, and has brought together researchers from many areas including mathematics, electric power systems, climate, computer science, sociology and others. Ideas from network science and engineering have been applied to the analysis of metabolic and genetic regulatory networks. Research on networks is published in the most visible scientific journals and obtains vigorous funding in many countries. Network theory has been found useful to identify bottlenecks in city traffic. Network science is the topic of many conferences in a variety of different fields, and has been the subject of numerous books both for the lay person and for the expert. A network is named scale-free if its degree distribution, i.e. the probability that a node selected uniformly at random has a certain number of links, follows a particular mathematical function called a power law. The power law implies that the degree distribution of these networks has no characteristic scale. In contrast, networks with a single well-defined scale are somewhat similar to a lattice in that every node has roughly the same degree.
Examples of networks with a single scale include the Erdős–Rényi random graph and hypercubes. Models of networks that produce a scale-invariant degree distribution include the Barabási–Albert model and the fitness model. In a network with a scale-free degree distribution, some vertices have a degree that is orders of magnitude larger than the average; these vertices are often called "hubs", although this language is a bit misleading as there is no inherent threshold above which a node can be viewed as a hub. If there were such a threshold, the network would not be scale-free. Interest in scale-free networks began in the late 1990s with the reporting of discoveries of power-law degree distributions in real world networks such as the World Wide Web, the network of Autonomous Systems, some networks of Internet routers, protein interaction networks, email networks, etc. Most of these reported "power laws" fail when challenged with rigorous statistical testing, but the more general idea of heavy-tailed degree distributions—which many of these networks do genuinely exhibit—remains: such distributions are very different from what one would expect if edges existed independently and at random.
There are many different ways to build a network with a power-law degree distribution. The Yule process is a canonical generative process for power laws, and has been known since 1925. However, it is known by many other names due to its frequent reinvention, e.g. the Gibrat principle by Herbert A. Simon, the Matthew effect, cumulative advantage, and preferential attachment by Barabási and Albert for power-law degree distributions. Hyperbolic geometric graphs have been suggested as yet another way of constructing scale-free networks. Some networks with a power-law degree distribution can be highly resistant to the random deletion of vertices, i.e. the vast majority of vertices remains connected together in a giant component. Such networks can however be quite sensitive to targeted attacks aimed at fracturing the network quickly.
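The contrast between random failures and targeted attacks can be made concrete on a toy hub-dominated network. A minimal sketch: the topology (five hubs in a ring, twenty leaves each) and all names are hypothetical constructions for illustration:

```python
from collections import deque

def largest_component(kept, edges):
    """Size of the largest connected component among the kept nodes,
    ignoring edges incident to removed nodes."""
    adj = {v: set() for v in kept}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        seen.add(s)
        queue, size = deque([s]), 0
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best

# Hypothetical hub-dominated network: 5 hubs in a ring, each with 20 leaves.
hubs = list(range(5))
leaves = list(range(5, 105))
edges = [(h, (h + 1) % 5) for h in hubs]
edges += [(h, 5 + h * 20 + i) for h in hubs for i in range(20)]
```

Deleting the five hubs (under 5% of the nodes) shatters the network into isolated leaves, while deleting five arbitrary leaves leaves the giant component essentially intact, illustrating robustness to random deletion alongside fragility to targeted attack.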