1.
Network science
–
The study of networks has emerged in diverse disciplines as a means of analyzing complex relational data. The earliest known paper in this field is the famous Seven Bridges of Königsberg, written by Leonhard Euler in 1736. Euler's mathematical description of vertices and edges was the foundation of graph theory, a branch of mathematics that studies the properties of pairwise relations in a network structure. The field of graph theory continued to develop and found applications in chemistry. In the 1930s Jacob Moreno, a psychologist in the Gestalt tradition, arrived in the United States. He developed the sociogram and presented it to the public in April 1933 at a convention of medical scholars; Moreno claimed that before the advent of sociometry no one knew what the interpersonal structure of a group precisely looked like. The sociogram was a representation of the structure of a group of elementary school students. The boys were friends of boys and the girls were friends of girls, with the exception of one boy who said he liked a single girl, and this network representation of social structure was found so intriguing that it was printed in The New York Times. The sociogram has found many applications and has grown into the field of network analysis. Probabilistic theory in network science developed as an offshoot of graph theory with Paul Erdős. For social networks, the exponential random graph model, or p*, is a notational framework used to represent the probability space of a tie occurring in a social network. In 1998, David Krackhardt and Kathleen Carley introduced the idea of a meta-network with the PCANS Model, suggesting that all organizations are structured along three domains: individuals, tasks, and resources. Their paper introduced the concept that networks occur across multiple domains, and this field has grown into another sub-discipline of network science called dynamic network analysis.
More recently, other network science efforts have focused on mathematically describing different network topologies. Duncan Watts reconciled empirical data on networks with mathematical representation, describing the small-world network; many networks, such as the Internet, appear to maintain this aspect. The U.S. military first became interested in network-centric warfare as an operational concept based on network science in 1996. As a result, the BAST issued an NRC study in 2005, titled Network Science, that defined a new field of research in network science for the Army. In order to better instill the tenets of network science among its cadre of future leaders, the USMA has also instituted a five-course undergraduate minor in Network Science, under the tutelage of Dr. Moxley and the USMA faculty. In 2006, the U.S. and the U.K. formed a collaborative alliance; the goal of the alliance is to perform basic research in support of Network-Centric Operations across the needs of both nations. Networks often have attributes that can be calculated to analyze the properties and characteristics of the network, and these network properties often define network models and can be used to analyze how certain models contrast with each other. Many of the definitions for terms used in network science can be found in the Glossary of graph theory. The density D of a network is defined as the ratio of the number of edges E to the number of possible edges among N nodes, given by the binomial coefficient C(N, 2), so that D = 2E / (N(N − 1)).
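As an illustrative sketch in plain Python (the four-node edge list is a made-up example), the density of an undirected simple network is the number of edges divided by the number of possible edges, C(N, 2):

```python
def density(num_nodes, edges):
    """Density D = E / C(N, 2) = 2E / (N(N-1)) for an undirected simple graph."""
    possible = num_nodes * (num_nodes - 1) / 2  # binomial coefficient C(N, 2)
    return len(edges) / possible

# Hypothetical 4-node network with 3 edges
edges = [(0, 1), (1, 2), (2, 3)]
print(density(4, edges))  # 3 / 6 = 0.5
```

A complete graph has density 1; sparse real-world networks typically sit far below that.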


2.
Network theory
–
Network theory is the study of graphs as a representation of either symmetric relations or, more generally, of asymmetric relations between discrete objects. In computer science and network science, network theory is a part of graph theory. Applications of network theory include logistical networks, the World Wide Web, the Internet, gene regulatory networks, metabolic networks, social networks, and epistemological networks; see the List of network theory topics for more examples. Euler's solution of the Seven Bridges of Königsberg problem is considered to be the first true proof in the theory of networks. Network problems that involve finding an optimal way of doing something are studied under the name combinatorial optimization. Social network analysis examines the structure of relationships between social entities; these entities are often persons, but may also be groups, organizations, nation states, web sites, or scholarly publications. Among many other applications, social network analysis has been used to understand the diffusion of innovations and news; similarly, it has been used to examine the spread of both diseases and health-related behaviors. It has also been applied to the study of markets, where it has been used to examine the role of trust in exchange relationships. Similarly, it has been used to study recruitment into political movements, to conceptualize scientific disagreements, and to study academic prestige. More recently, network analysis has gained significant use in military intelligence. With the recent explosion of publicly available high-throughput biological data, the analysis of molecular networks has gained significant interest; the type of analysis in this context is closely related to social network analysis. For example, network motifs are small subgraphs that are over-represented in the network; similarly, activity motifs are patterns in the attributes of nodes and edges that are over-represented given the network structure.
The analysis of networks with respect to diseases has led to the development of the field of network medicine. Recent examples of the application of network theory in biology include applications to understanding the cell cycle. The interactions between physiological systems like the brain, heart, and eyes can be regarded as a physiological network. The automatic parsing of textual corpora has enabled the extraction of actors and their relational networks on a vast scale. Link analysis is a subset of network analysis that explores associations between objects; it provides the crucial relationships and associations between very many objects of different types that are not apparent from isolated pieces of information. Links can also be derived from similarity of time behavior at two nodes; examples include climate networks, where the link between two locations is determined, for example, by the similarity of the rainfall or temperature fluctuations at both sites. The structural robustness of networks is studied using percolation theory: when a critical fraction of nodes is removed, the network becomes fragmented into small disconnected clusters.
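Motif analysis begins with counting occurrences of small subgraphs. A minimal stdlib-only sketch that counts the simplest undirected motif, the triangle, in a hypothetical adjacency dictionary:

```python
from itertools import combinations

def count_triangles(adj):
    """Count triangles (3-cliques), the simplest motif that is often
    over-represented in real networks relative to random graphs."""
    count = 0
    for u, v, w in combinations(adj, 3):
        if v in adj[u] and w in adj[u] and w in adj[v]:
            count += 1
    return count

# Hypothetical undirected graph as an adjacency dict
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
print(count_triangles(adj))  # one triangle: 0-1-2
```

Deciding whether a count is "over-represented" then requires comparing it against an ensemble of randomized networks with the same degree sequence.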


Figure: A small example network with eight vertices and ten edges.

3.
Complex network
–
Such features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure, and hierarchical structure. In the case of directed networks these features also include reciprocity and the triad significance profile. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features. The most complex structures can be realized by networks with a medium number of interactions; this corresponds to the fact that the maximum information content is obtained for medium probabilities. Two well-known and much studied classes of networks are scale-free networks and small-world networks. Both are characterized by specific structural features: power-law degree distributions for the former, and short path lengths with high clustering for the latter. However, as the study of complex networks has continued to grow in importance and popularity, many other aspects of network structure have attracted attention as well. Recently, the study of networks has been expanded to networks of networks: if those networks are interdependent, they become more vulnerable to random failures and targeted attacks and exhibit cascading failures. Furthermore, the behavior of a network in the presence of node failures has been studied; it was found that such a network can have spontaneous failures. Research on networks is regularly published in the most visible scientific journals and obtains vigorous funding in many countries. Network theory has been found useful for identifying bottlenecks in city traffic, and network science is the topic of many conferences in a variety of different fields. The power law implies that the degree distribution of these networks has no characteristic scale. In contrast, networks with a single well-defined scale are similar to a lattice in that every node has roughly the same degree.
Examples of networks with a single scale include the Erdős–Rényi random graph; if there were such a characteristic threshold on degree, the network would not be scale-free. There are many different ways to build a network with a power-law degree distribution. The Yule process is a canonical generative process for power laws, although it is known by other names due to its frequent reinvention: the Gibrat principle by Herbert A. Simon, the Matthew effect, cumulative advantage, and preferential attachment by Barabási. Recently, hyperbolic geometric graphs have been suggested as yet another way of constructing scale-free networks.
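The contrast with a single-scale network is easy to see numerically: in an Erdős–Rényi graph the degrees concentrate around the mean (n − 1)p. A plain-Python sketch with illustrative parameters:

```python
import random

def erdos_renyi_degrees(n, p, seed=0):
    """Sample an ER graph G(n, p) and return its degree sequence."""
    rng = random.Random(seed)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:  # each pair is an edge independently with prob p
                deg[i] += 1
                deg[j] += 1
    return deg

degs = erdos_renyi_degrees(1000, 0.01)
print(sum(degs) / len(degs))  # close to the expected degree (n - 1) * p ≈ 10
```

The resulting degree distribution is approximately Poisson, with a well-defined scale; no node's degree strays far from the mean, unlike the hubs of a scale-free network.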


4.
Small-world network
–
In the context of a social network, this results in the small-world phenomenon of strangers being linked by a short chain of acquaintances. Many empirical graphs show the effect, e.g. social networks, the underlying architecture of the Internet, and wikis such as Wikipedia. A certain category of small-world networks was identified as a class of graphs by Duncan Watts and Steven Strogatz. They noted that graphs could be classified according to two independent structural features, namely the clustering coefficient and the average node-to-node distance. Purely random graphs, built according to the Erdős–Rényi model, exhibit a small average shortest path length along with a small clustering coefficient. Watts and Strogatz measured that in fact many real-world networks have a small average shortest path length but also a clustering coefficient significantly higher than expected by random chance. Watts and Strogatz then proposed a novel graph model, currently named the Watts–Strogatz model, with a small average shortest path length and a large clustering coefficient. The crossover in the Watts–Strogatz model between a large world and a small world was first described by Barthelemy and Amaral in 1999; this work was followed by a number of studies, including exact results. Braunstein et al. found that for weighted ER networks where the weights have a very broad distribution, the optimal paths are significantly longer than in the unweighted case. Small-world networks tend to contain cliques and near-cliques, meaning sub-networks which have connections between almost any two nodes within them; this follows from the defining property of a high clustering coefficient. Secondly, most pairs of nodes will be connected by at least one short path; this follows from the defining property that the mean shortest path length be small. Several other properties are associated with small-world networks: typically there is an over-abundance of hubs, nodes in the network with a large number of connections.
These hubs serve as the common connections mediating the short path lengths between other nodes; by analogy, the small-world network of airline flights has a small mean path length because many flights are routed through hub cities. This property is often analyzed by considering the fraction of nodes in the network that have a particular number of connections going into them. Networks with a greater than expected number of hubs will have a larger fraction of nodes with high degree; this is known colloquially as a fat-tailed distribution. Graphs of very different topology qualify as small-world networks as long as they satisfy the two definitional requirements above. Network small-worldness has been quantified by comparing the clustering and path length of a network to those of an equivalent random network with the same average degree. One such small-world measure is defined as ω = L_rand/L − C/C_latt, where L and C are the characteristic path length and clustering coefficient of the network, L_rand is the path length of an equivalent random network, and C_latt is the clustering coefficient of an equivalent lattice network. Cultural networks and word co-occurrence networks have also been shown to be small-world networks.
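The two structural features used in this classification can be computed directly. A stdlib-only sketch of the average clustering coefficient and average shortest path length, applied to a hypothetical 4-cycle:

```python
from collections import deque

def avg_clustering(adj):
    """Mean local clustering coefficient over all nodes (nodes of degree < 2 count as 0)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs, via BFS (assumes a connected graph)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# Hypothetical 4-node ring: average distance is 4/3, and with no triangles the clustering is 0
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(avg_path_length(adj), avg_clustering(adj))
```

A small-world network scores high on the first quantity's smallness and, unlike this ring, also high on clustering.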


Figure: Small-world network example; hubs are bigger than other nodes. Average degree = 3.833; average shortest path length = 1.803; clustering coefficient = 0.522.

5.
Scale-free network
–
A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. Many networks have been reported to be scale-free, although statistical analysis has refuted many of these claims. Preferential attachment and the fitness model have been proposed as mechanisms to explain conjectured power-law degree distributions in real networks. In studies of the networks of citations between scientific papers, Derek de Solla Price showed in 1965 that the number of links to papers, i.e. the number of citations they receive, had a heavy-tailed distribution following a Pareto distribution or power law; he did not, however, use the term scale-free network, which was not coined until some decades later. Amaral et al. showed that most real-world networks can be classified into two categories according to the decay of the degree distribution P(k) for large k. Notably, however, this mechanism only produces a specific subset of networks in the scale-free class. The history of scale-free networks also includes some disagreement: on an empirical level, the scale-free nature of several networks has been called into question, and on a theoretical level, refinements to the definition of scale-free have been proposed. For example, Li et al. recently offered a more precise scale-free metric. Briefly, let G be a graph with edge set E, and define s(G) = Σ_{(u,v) ∈ E} deg(u) · deg(v). This is maximized when high-degree nodes are connected to other high-degree nodes. Now define S(G) = s(G)/s_max, where s_max is the maximum value of s(H) over all graphs H with a degree distribution identical to that of G. This gives a metric between 0 and 1, where a graph G with small S is scale-rich and a graph G with S close to 1 is scale-free; this definition captures the notion of self-similarity implied in the name scale-free. The most notable characteristic in a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average.
The highest-degree nodes are called hubs, and are thought to serve specific purposes in their networks. The scale-free property strongly correlates with the network's robustness to failure: it turns out that the major hubs are closely followed by smaller ones, and these smaller hubs, in turn, are followed by other nodes with an even smaller degree. This hierarchy allows for fault-tolerant behavior: since failures occur at random and the vast majority of nodes are those with small degree, the likelihood that a hub would be affected is almost negligible. Even if a hub failure occurs, the network will generally not lose its connectedness.
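The s(G) quantity in Li et al.'s metric can be sketched in a few lines; the star graph below is a hypothetical example (in a star, every edge joins the hub to a degree-one leaf):

```python
def s_metric(adj):
    """s(G) = sum over edges (u, v) of deg(u) * deg(v); it is large when
    high-degree nodes tend to connect to other high-degree nodes."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    return sum(deg[u] * deg[v] for u in adj for v in adj[u] if u < v)

# Hypothetical star graph: one hub of degree 3, three leaves
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(s_metric(star))  # 3 edges, each contributing 3 * 1 = 9
```

Computing the normalized S(G) additionally requires s_max over all graphs with the same degree sequence, which is a harder combinatorial step omitted here.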


6.
Community structure
–
A network is said to have community structure if its nodes can be grouped into sets that are densely connected internally, but overlapping communities are also allowed. A related but different problem is community search, where the goal is to find a community that a certain vertex belongs to. This inhomogeneity of connections suggests that the network has certain natural divisions within it. Communities are often defined in terms of a partition of the set of vertices, that is, each node is put into one and only one community, just as in the figure. This is a useful simplification, and most community detection methods find this type of community structure. However, in some cases a better representation could be one where vertices are in more than one community; the use of cliques for community detection discussed below is just one example of how such overlapping community structure can be found. Some networks may not have any meaningful community structure: many basic network models, such as the random graph and the Barabási–Albert model, do not display community structure. Nevertheless, community structures are common in real networks; social networks, for instance, include communities based on common location, interests, or occupation. Finding an underlying community structure in a network, if it exists, is important for a number of reasons. Communities allow us to create a large-scale map of a network, since individual communities act like meta-nodes in the network, which makes its study easier. Individual communities also shed light on the function of the system represented by the network, since communities often correspond to functional units of the system. Similarly, citation networks form communities by research topic. Being able to identify these sub-structures within a network can provide insight into how network function and topology affect each other.
Such insight can be useful in improving some algorithms on graphs, such as spectral clustering. Another very important reason communities matter is that they often have very different properties than the average properties of the network; thus, concentrating only on the average properties usually misses many important features. For example, in a given social network, both gregarious and reticent groups might exist simultaneously. The existence of communities also generally affects various processes, like rumour spreading or epidemic spreading, happening on a network; hence, to properly understand such processes, it is important to detect communities and also to study how they affect the spreading processes in various settings. Finally, an important application that community detection has found is the prediction of missing links: during the measurement process, some links may not get observed for a number of reasons, and similarly, some links could falsely enter the data because of errors in the measurement. Both these cases are handled well by community detection algorithms, since they allow one to assign a probability of existence to an edge between a given pair of nodes. Finding communities within a network can be a computationally difficult task.
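One standard way of scoring a candidate partition, which many of the detection methods discussed here optimize in some form, is Newman modularity; this is an assumption of this sketch rather than something stated above, and the two-triangle graph is a made-up example:

```python
def modularity(adj, communities):
    """Modularity Q of a node -> community assignment for an undirected graph:
    Q = (1/2m) * sum over same-community pairs of [A_ij - k_i * k_j / (2m)]."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    two_m = sum(deg.values())  # 2m: each edge contributes to two degrees
    q = 0.0
    for u in adj:
        for v in adj:
            if communities[u] != communities[v]:
                continue
            a = 1 if v in adj[u] else 0
            q += a - deg[u] * deg[v] / two_m
    return q / two_m

# Hypothetical graph: two triangles joined by a single bridge edge
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
part = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(round(modularity(adj, part), 3))  # ≈ 0.357: the natural split scores well above 0
```

High Q means many more intra-community edges than a random graph with the same degrees would produce; putting all nodes in one community always yields Q = 0.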


7.
Percolation theory
–
In statistical physics and mathematics, percolation theory describes the behaviour of connected clusters in a random graph. The applications of percolation theory to materials science and other domains are discussed in the article percolation. A representative question is as follows: assume that some liquid is poured on top of some porous material. Will the liquid be able to make its way from hole to hole and reach the bottom? In other words, for a given p, what is the probability that an open path exists from the top to the bottom? The behavior for large n is of primary interest. This problem, now called bond percolation, was introduced in the literature by Broadbent & Hammersley. In a slightly different mathematical model for obtaining a random graph, a site is occupied with probability p or empty with probability 1 − p; the question is the same: for a given p, what is the probability that a path exists between top and bottom? Of course the same questions can be asked for any lattice dimension. As is quite typical, it is actually easier to examine infinite networks than just large ones. In this case the question is: does an infinite open cluster exist? That is, is there a path of connected points of infinite length through the network? By Kolmogorov's zero–one law, for any given p, the probability that an infinite cluster exists is either zero or one. Since this probability is an increasing function of p, there must be a critical p, denoted pc, below which the probability is always 0 and above which it is 1. In practice, this criticality is very easy to observe: even for n as small as 100, the probability of an open path from the top to the bottom increases sharply from very close to zero to very close to one in a short span of values of p. In some cases pc may be calculated explicitly; a limit case for lattices in many dimensions is given by the Bethe lattice, whose threshold is at pc = 1/(z − 1) for a coordination number z.
For most infinite lattice graphs, pc cannot be calculated exactly. Universality means that for a given dimension, the various critical exponents and the fractal dimension of the clusters at pc are independent of the lattice type and percolation type (bond or site). The main fact in the subcritical phase is exponential decay: when p < pc, the probability that a given point is contained in an open cluster of size r decays to zero exponentially in r. This was proved for percolation in three and more dimensions by Menshikov and independently by Aizenman & Barsky; in two dimensions, it formed part of Kesten's proof that pc = 1/2. The dual graph of the square lattice Z² is also the square lattice, and it follows that, in two dimensions, the supercritical phase is dual to a subcritical percolation process.
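The sharp rise of the spanning probability near pc is easy to reproduce numerically. A site-percolation sketch on an n × n grid, with illustrative parameters well below and well above the square-lattice site threshold pc ≈ 0.593:

```python
import random
from collections import deque

def percolates(n, p, seed=0):
    """Site percolation on an n x n grid: each site is open with probability p;
    return True if an open path connects the top row to the bottom row."""
    rng = random.Random(seed)
    open_site = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    q = deque((0, c) for c in range(n) if open_site[0][c])  # start from all open top sites
    seen = set(q)
    while q:
        r, c = q.popleft()
        if r == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and (nr, nc) not in seen and open_site[nr][nc]:
                seen.add((nr, nc))
                q.append((nr, nc))
    return False

# Far below threshold spanning is essentially impossible; far above, essentially certain
print(percolates(50, 0.1), percolates(50, 0.9))
```

Sweeping p between these extremes and averaging over many seeds reproduces the sharp transition described above.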

Figure: A three-dimensional site percolation graph.

8.
Evolving networks
–
Evolving networks are networks that change as a function of time. They are a natural extension of network science, since almost all real-world networks evolve over time. Evolving network concepts build on established network theory and are now being introduced into the study of networks in many diverse fields. The study of networks traces its foundations to the development of graph theory; probabilistic network theory then developed with the help of eight famous papers studying random graphs written by Paul Erdős and Alfréd Rényi. The Erdős–Rényi (ER) model supposes that a graph is composed of N labeled nodes where each pair of nodes is connected with probability p. While the ER model's simplicity has helped it find many applications, it fails to generate local clustering and triadic closures as often as they are found in real-world networks. Therefore, the Watts and Strogatz model was proposed, whereby a network is constructed as a ring lattice in which some edges are rewired at random. This produces a locally clustered network and dramatically reduces the average path length. Despite this achievement, both the ER and the Watts and Strogatz models fail to account for the formation of hubs as observed in real-world networks: the degree distribution of the ER model follows a Poisson distribution, while the Watts and Strogatz model produces graphs that are homogeneous in degree. Hubs were accounted for by the Barabási–Albert (BA) model, which incorporates preferential attachment and growth: nodes are added to the network over time and are more likely to link to other nodes that already have high degree. The BA model was first applied to degree distributions on the web: new web pages are added over time, and each new page is more likely to link to highly visible hubs like Google, which have very high degree, than to nodes with only a few links. However, the model makes only the simplest assumptions necessary for a scale-free network to emerge, namely that there is linear growth and linear preferential attachment. This minimal model does not capture variations in the shape of the degree distribution or variations in the degree exponent.
Therefore, the model has since been modified to more fully capture the properties of evolving networks by introducing a few new properties. One limitation is that all nodes are treated as equally attractive apart from their degree; this can be alleviated by introducing a fitness for each node, so that the probability of attaching to node i becomes Π_i = η_i k_i / Σ_j η_j k_j, where η_i is the fitness of node i, which may also depend on time, and k_i is its degree. Additionally, existing links may be destroyed and new links between existing nodes may be created; the probability of these actions occurring may depend on time and may also be related to a node's fitness. Probabilities can be assigned to these events by studying the characteristics of the network in question, in order to grow a network with identical properties.
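Preferential-attachment growth can be sketched with the standard trick of keeping a list in which each node appears once per incident edge, so that uniform sampling from the list is degree-proportional. All parameters below are illustrative:

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow a graph by preferential attachment: each new node attaches m edges to
    existing nodes chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = [(0, j) for j in range(1, m + 1)]  # small seed star to start from
    repeated = [u for e in edges for u in e]   # node appears once per edge endpoint
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(repeated))   # degree-proportional sampling
        for t in chosen:
            edges.append((new, t))
        repeated.extend([new] * m)
        repeated.extend(chosen)
    return edges

edges = barabasi_albert(200, 2)
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1
print(max(deg.values()))  # the largest hub ends up far above the mean degree of ~4
```

The fitness variant replaces the uniform choice from `repeated` with sampling weighted by η_i k_i, as in the formula above.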

Figure: Watts–Strogatz graph.

Figure: Animation of an evolving network according to the initial Barabási–Albert model.

9.
Network controllability
–
Network controllability is concerned with the structural controllability of a network. Controllability describes our ability to guide a dynamical system from any initial state to any desired final state in finite time. This definition agrees well with our intuitive notion of control. The controllability of general directed and weighted complex networks has recently been the subject of intense study by a number of groups worldwide. Consider the canonical linear time-invariant dynamics on a complex network, Ẋ(t) = A·X(t) + B·u(t), where the vector X(t) = (x₁(t), …, x_N(t))ᵀ captures the state of a system of N nodes at time t. The N × N matrix A describes the system's wiring diagram, and the N × M matrix B identifies the nodes controlled by an outside controller. The system is controlled through the time-dependent input vector u(t) = (u₁(t), …, u_M(t))ᵀ that the controller imposes on the system. From this result, a framework based on the in-out degree distribution was developed to predict the fraction of driver nodes n_D = N_D / N for scale-free networks. It is also notable that Liu et al.'s formulation would predict the same value of n_D for a chain graph and for certain graphs with very different in- and out-degree distributions, which calls into question whether controllability is decided mainly by degree; they argued that this might be due to the effect of degree correlations. However, it has been shown that network controllability can be altered only by using betweenness centrality and closeness centrality. The concept of structural controllability was first introduced by Lin, then extended by Shields and Pearson, and alternatively derived by Glover. The main question is whether the lack of controllability or observability is generic with respect to the system parameters. In the framework of structural control, the system parameters are either independent free variables or fixed zeros; this is consistent for models of physical systems, since parameter values are never known exactly. In graph theory, a matching is a set of edges without common vertices. Liu et al.
extended this definition to directed graphs, where a matching is a set of directed edges that do not share start or end vertices. It is easy to check that a matching of a directed graph is composed of a set of vertex-disjoint simple paths and cycles. For undirected graphs, analytical solutions for the size and number of maximum matchings have been studied using the cavity method developed in statistical physics; Liu et al. extended the calculations to directed graphs by calculating the maximum matchings of a range of real networks. They also calculated the number of driver nodes for a network ensemble with arbitrary degree distribution using the cavity method.
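The driver-node count can be sketched via maximum matching; the bipartite augmenting-path routine below is a generic textbook implementation, not Liu et al.'s cavity-method calculation, and the chain graph is a hypothetical example:

```python
def max_matching(successors, nodes):
    """Maximum matching of a directed graph via its bipartite representation:
    each node has an 'out' copy and an 'in' copy, and edge u -> v links them."""
    match_in = {}  # in-copy of v -> out-copy of u, for matched edges u -> v

    def try_augment(u, visited):
        for v in successors.get(u, ()):
            if v in visited:
                continue
            visited.add(v)
            # v is free, or the node currently matched to v can be re-routed
            if v not in match_in or try_augment(match_in[v], visited):
                match_in[v] = u
                return True
        return False

    for u in nodes:
        try_augment(u, set())
    return match_in

def driver_nodes(successors, nodes):
    """Unmatched nodes must be directly controlled; N_D = max(N - |matching|, 1)."""
    matched = max_matching(successors, nodes)
    return max(len(nodes) - len(matched), 1)

# Hypothetical directed chain 0 -> 1 -> 2 -> 3: a perfect matching exists,
# so a single driver node (the chain's head) suffices
chain = {0: [1], 1: [2], 2: [3]}
print(driver_nodes(chain, [0, 1, 2, 3]))  # 1
```

On a directed star {0: [1, 2, 3]} the hub can match only one successor, so three driver nodes are needed, illustrating how the matching structure, not size alone, sets N_D.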


Figure: Controlling a simple network.

10.
Graph drawing
–
A drawing of a graph or network diagram is a pictorial representation of the vertices and edges of a graph. This drawing should not be confused with the graph itself: very different layouts can correspond to the same graph. In the abstract, all that matters is which pairs of vertices are connected by edges; in the concrete, however, the arrangement of vertices and edges within a drawing affects its understandability, usability, and fabrication cost. The problem gets worse if the graph changes over time by adding and deleting edges. Upward planar drawing uses the convention that every edge is oriented from a lower vertex to a higher vertex. Many different quality measures have been defined for graph drawings, in an attempt to find objective means of evaluating their aesthetics; in addition to guiding the choice between different layout methods for the same graph, some layout methods attempt to directly optimize these measures. The crossing number of a drawing is the number of pairs of edges that cross each other. If the graph is planar, then it is often convenient to draw it without any edge intersections; that is, in this case, the crossing number is zero. However, nonplanar graphs frequently arise in applications, so graph drawing algorithms must generally allow for edge crossings. The area of a drawing is the size of its smallest bounding box, relative to the closest distance between any two vertices. Drawings with smaller area are generally preferable to those with larger area, because they allow the features of the drawing to be shown at greater size; the aspect ratio of the bounding box may also be important. Symmetry display is the problem of finding symmetry groups within a given graph. Some layout methods automatically lead to symmetric drawings; alternatively, some drawing methods start by finding symmetries in the input graph and using them to construct a drawing.
It is important that edges have shapes that are as simple as possible. In polyline drawings, the complexity of an edge may be measured by its number of bends, and many methods aim to provide drawings with few total bends or few bends per edge; similarly, for spline curves the complexity of an edge may be measured by the number of control points on the edge. Several commonly used quality measures concern the lengths of edges: it is desirable to minimize the total length of the edges as well as the maximum length of any edge. Additionally, it may be preferable for the lengths of edges to be uniform rather than highly varied. Angular resolution is a measure of the sharpest angles in a graph drawing. If a graph has vertices with high degree then it necessarily will have small angular resolution, but the angular resolution can be bounded below by a function of the degree.
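The crossing number of a given straight-line drawing can be counted with a standard segment-orientation test. A sketch, where the square layout of K4 is a made-up example and degenerate collinear cases are ignored:

```python
def crossings(points, edges):
    """Count pairs of edges that properly cross in a straight-line drawing.
    `points` maps vertex -> (x, y); edges sharing an endpoint are skipped."""
    def orient(a, b, c):
        # sign of the cross product (b - a) x (c - a): +1 left turn, -1 right turn
        val = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (val > 0) - (val < 0)

    count = 0
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            (a, b), (c, d) = edges[i], edges[j]
            if {a, b} & {c, d}:
                continue  # edges meeting at a shared vertex do not count as crossings
            pa, pb, pc, pd = points[a], points[b], points[c], points[d]
            if orient(pa, pb, pc) != orient(pa, pb, pd) and \
               orient(pc, pd, pa) != orient(pc, pd, pb):
                count += 1
    return count

# K4 drawn with its vertices on a square: only the two diagonals cross
pts = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
k4 = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
print(crossings(pts, k4))  # 1
```

Moving one vertex outside the square lets the same K4 be drawn with zero crossings, since K4 is planar; this is exactly the kind of measure a layout method may try to minimize.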

Figure: Graphic representation of a minute fraction of the WWW, demonstrating hyperlinks.

11.
Social capital
–
It is generally seen as a form of capital that produces public goods for a common good. During the 1990s and 2000s the concept became popular in a wide range of social science disciplines. The term social capital was in intermittent use from about 1890. In the first half of the 19th century, Alexis de Tocqueville made observations about American life that seemed to outline and define social capital: he observed that Americans were prone to meeting at as many gatherings as possible to discuss all possible issues of state and economics. The high levels of transparency caused greater participation from the people, and the French writer also highlighted that the level of social participation in American society was directly linked to the equality of conditions. John Dewey may have made the first direct mainstream use of the term, in his monograph The School and Society (1899), though he did not offer a definition. Jane Jacobs used the term early in the 1960s; although she did not explicitly define social capital, her usage referred to the value of networks. Sociologist Pierre Bourdieu used the term in 1972 in his Outline of a Theory of Practice, and clarified the term some years later in contrast to cultural, economic, and symbolic capital. Sociologists James Coleman, and Barry Wellman & Scot Wortley, adopted Glenn Loury's 1977 definition in developing and popularising the concept. The power of community governance has been stressed by many philosophers from antiquity to the 18th century, from Aristotle to Thomas Aquinas and Edmund Burke. This vision was criticised at the end of the 18th century, with the development of the idea of Homo economicus. The debate of community versus modernization of society and individualism has been the most discussed topic among the founding fathers of sociology; they were convinced that industrialisation and urbanization were transforming social relationships in an irreversible way.
They observed a breakdown of traditional bonds and the development of anomie. After Tönnies's and Weber's works, reflection on social links in modern society continued with interesting contributions in the 1950s and 1960s, proposing themes similar to those of the founding fathers, with a more pessimistic emphasis on the development of society. All these reflections contributed remarkably to the development of the social capital concept in the following decades. He also transforms social capital from a resource possessed by individuals to an attribute of collectives, focusing on norms. Mahyar Arefi identifies consensus building as a direct positive indicator of social capital: consensus implies shared interest and agreement among various actors and stakeholders to induce collective action, and collective action is thus an indicator of increased social capital. First, social capital is not equally available to all, in much the way that other forms of capital are differently available.

Social capital
–

Sociology
12.
Link analysis
–
In network theory, link analysis is a data-analysis technique used to evaluate relationships between nodes. Relationships may be identified among various types of nodes, including organizations and people. Link analysis has been used for investigation of criminal activity, computer security analysis, search engine optimization, market research, medical research, and art. Knowledge discovery is an iterative and interactive process used to identify and analyze patterns in data; network analysis, link analysis, and social network analysis are all methods of knowledge discovery, each a corresponding subset of the prior method. Once data is collected, it needs to be transformed into a format that can be used by both human and computer analyzers. Manual or computer-generated visualizations may be mapped from the data, and several algorithms exist to help with analysis of the data, such as Dijkstra's algorithm, breadth-first search, and depth-first search. Link analysis focuses on analysis of relationships among nodes through visualization methods. Klerks categorized link analysis tools into three generations. The first generation was introduced in 1975 as the Anacapa chart of Harper and Harris; this method requires extensive domain knowledge and is extremely time-consuming when reviewing vast amounts of data. In addition to the association matrix, the activities matrix can be used to produce actionable information. The activities matrix, as the term might imply, centers on actions, whereas the association matrix focuses on the relationships between people, organizations, and/or properties. The distinction between the two types of matrices, while minor, is nonetheless significant in terms of the output of the analysis completed or rendered. Second-generation tools consist of automatic graphics-based analysis tools such as IBM i2 Analyst's Notebook, Netmap, SVAT and Watson.
The third generation of link-analysis tools allows the visualization of linkages between elements in a data set, which can then serve as the canvas for further exploration or manual updates. Data-analysis techniques are required to make effective and efficient use of the data. Palshikar classifies data-analysis techniques into two categories: statistical and artificial-intelligence techniques. Bolton & Hand define statistical data analysis as either supervised or unsupervised methods. Supervised learning methods require that rules are defined within the system to establish what is expected or unexpected behavior, while unsupervised learning methods review data in comparison to the norm and detect statistical outliers. Supervised learning methods are limited in the scenarios that can be handled, as this approach requires that training rules are established based on previous patterns. Unsupervised learning methods can provide detection of broader issues; however, the data itself has inherent problems, including integrity and continuous change. Data may contain "errors of omission and commission" because of faulty collection or handling, and Sparrow highlights incompleteness, fuzzy boundaries, and dynamic changes as the three primary problems with data analysis. Even once data is transformed into a usable format, issues such as open texture can remain.
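The breadth-first search mentioned above is the workhorse for finding the shortest chain of associations between two entities. A minimal sketch in Python, using a small hypothetical association graph (the entities and links here are invented for illustration):

```python
from collections import deque

# Hypothetical association data: an undirected graph of entities
# (people or organizations) and their observed relationships.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def shortest_chain(graph, start, goal):
    """Breadth-first search: return the shortest chain of
    associations linking two entities, or None if unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

print(shortest_chain(links, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Because BFS explores paths in order of increasing length, the first chain it returns is guaranteed to be a shortest one, which is why it (alongside Dijkstra's algorithm for weighted links) is a standard building block in link-analysis tooling.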

Link analysis
–
Association Matrix

13.
Combinatorial optimization
–
In applied mathematics and theoretical computer science, combinatorial optimization is a topic that consists of finding an optimal object from a finite set of objects. In many such problems exhaustive search is not feasible. Combinatorial optimization operates on the domain of those optimization problems in which the set of feasible solutions is discrete or can be reduced to a discrete set, and in which the goal is to find the best solution. Some common problems involving combinatorial optimization are the travelling salesman problem, the minimum spanning tree problem, and the knapsack problem. Combinatorial optimization is a subset of mathematical optimization that is related to operations research, algorithm theory, and computational complexity theory. It has important applications in several fields, including artificial intelligence, machine learning, mathematics, and auction theory, and it often involves determining the best way to allocate resources used to find solutions to mathematical problems. Some examples of combinatorial optimization problems that fall into this framework are shortest paths and shortest-path trees, flows and circulations, spanning trees, and matching problems. However, generic search algorithms are not guaranteed to find an optimal solution, nor are they guaranteed to run quickly. Since some discrete optimization problems, such as the traveling salesman problem, are NP-complete, exact algorithms for them are expected to require exponential time in the worst case.
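The infeasibility of exhaustive search is easy to see concretely. A brute-force solver for the travelling salesman problem enumerates all (n−1)! tours, which is only workable for tiny instances; the distance matrix below is a made-up example:

```python
from itertools import permutations

# Symmetric distance matrix for 4 cities (hypothetical data).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def tsp_brute_force(dist):
    """Exhaustively search all tours starting and ending at city 0.
    Feasible only for very small n; the point of combinatorial
    optimization is to do better than this (n-1)! enumeration."""
    n = len(dist)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

print(tsp_brute_force(dist))  # ((0, 1, 3, 2, 0), 18)
```

Already at n = 15 this enumeration examines over 87 billion tours, which is why specialized exact and approximation algorithms dominate the field.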

Combinatorial optimization
–
A minimum spanning tree of a weighted planar graph. Finding a minimum spanning tree is a common problem involving combinatorial optimization.

14.
Reciprocity (network science)
–
In network science, reciprocity is a measure of the likelihood of vertices in a directed network to be mutually linked. Like the clustering coefficient, scale-free degree distribution, or community structure, it is a quantitative measure used to study complex networks. In real network problems, people are interested in determining the likelihood of double links occurring between vertex pairs. This problem is fundamental for several reasons. First, in networks that transport information or material, mutual links facilitate the transportation process. Finally, detecting nontrivial patterns of reciprocity can reveal possible underlying mechanisms. Real networks have an intermediate reciprocity value between 0 and 1. However, this definition of reciprocity has some defects: it cannot tell the relative difference of reciprocity compared with a purely random network with the same number of vertices and edges. The useful information from reciprocity is not the value itself. If all the links occur in reciprocal pairs, ρ = 1; if r = 0, ρ = ρ_min.
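The two quantities above can be computed directly. Here r is the traditional fraction of links that have a reciprocal partner, and ρ is the corrected measure in the style of Garlaschelli and Loffredo, ρ = (r − ā)/(1 − ā) with ā = L/[N(N−1)], which compares r with the density of a random graph with the same number of vertices and edges (that formula is an assumption here, since the text does not spell it out):

```python
# Directed edges of a small hypothetical network.
edges = {(0, 1), (1, 0), (1, 2), (2, 0), (0, 2)}
nodes = {u for e in edges for u in e}

L = len(edges)
L_reciprocal = sum(1 for (u, v) in edges if (v, u) in edges)

# Traditional definition: fraction of links lying on reciprocated pairs.
r = L_reciprocal / L

# Correction: compare r with the link density a_bar, the reciprocity
# expected in a random network with the same N and L.
N = len(nodes)
a_bar = L / (N * (N - 1))
rho = (r - a_bar) / (1 - a_bar)

print(r, rho)  # 0.8 and roughly -0.2
```

A negative ρ, as here, indicates the network is less reciprocal than a random graph of the same density, which is exactly the relative comparison the raw value r cannot provide.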

Reciprocity (network science)

15.
Balance theory
–
In the psychology of motivation, balance theory is a theory of attitude change proposed by Fritz Heider. It conceptualizes the cognitive consistency motive as a drive toward psychological balance; the consistency motive is the urge to maintain one's values and beliefs over time. Heider proposed that sentiment or liking relationships are balanced if the affect valence in a system multiplies out to a positive result. In social network analysis, balance theory is the extension proposed by Frank Harary. It was the framework for the discussion at a Dartmouth College symposium in September 1975. For example, a Person who likes an Other person will be balanced by the same valence attitude on behalf of the other: symbolically, P > O and P < O results in psychological balance. This can be extended to things as well, thus introducing triadic relationships. If a person P likes object X but dislikes other person O, while O likes X, the triad has two positive links and one negative. Balance is achieved when there are three positive links or two negatives with one positive; two positive links and one negative, as in the example above, create imbalance. Multiplying the signs shows that the person will perceive imbalance in this relationship and will be motivated to correct the imbalance somehow. The Person can either decide that O isn't so bad after all, or decide that X isn't as great as originally thought; either of these will result in psychological balance, thus resolving the dilemma and satisfying the drive. Balance theory is useful in examining how celebrity endorsement affects consumers' attitudes toward products. If a person likes a celebrity and perceives that said celebrity likes a product, said person will tend to like the product more, in order to achieve psychological balance. However, if the person already had a dislike for the product being endorsed by the celebrity, they may begin disliking the celebrity instead. Heider's balance theory can explain why holding the same negative attitudes toward others promotes closeness.
Frank Harary and Dorwin Cartwright looked at Heider's triads as 3-cycles in a signed graph; the sign of a path in a graph is the product of the signs of its edges. They considered cycles in a graph representing a social network. A balanced signed graph has only cycles of positive sign. Harary proved that a balanced graph is polarized, that is, it decomposes into two wholly positive subgraphs that are joined by negative edges. In the interest of realism, a weaker property was suggested by Davis: no cycle has exactly one negative edge. Graphs with this property may decompose into more than two positive subgraphs, called clusters, and the property has been called the clusterability axiom. Balanced graphs are then recovered by adding the parsimony axiom: the subgraph of positive edges has at most two components. Note that a triangle of three mutual enemies makes a clusterable graph but not a balanced one.
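The sign-multiplication rule for cycles can be written in a few lines of Python; a cycle (or triad) is balanced exactly when the product of its edge signs is positive:

```python
# Signed relationships: +1 = likes/ally, -1 = dislikes/enemy.
# A cycle is balanced when the product of its edge signs is positive.
def is_balanced(*signs):
    product = 1
    for s in signs:
        product *= s
    return product > 0

# P likes X, P dislikes O, O likes X: two positives, one negative.
print(is_balanced(+1, -1, +1))  # False (imbalanced)

# Three mutual enemies: not balanced (but clusterable, as noted above).
print(is_balanced(-1, -1, -1))  # False

# Two enemies sharing a common enemy: balanced.
print(is_balanced(-1, -1, +1))  # True
```

The last case is the "enemy of my enemy is my friend" configuration, the same reasoning used in the celebrity-endorsement example.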

Balance theory
–
Heider's P-O-X model

16.
Network effect
–
In economics and business, a network effect is the effect that one user of a good or service has on the value of that product to other people. When a network effect is present, the value of a product or service is dependent on the number of others using it. The classic example is the telephone: the more people who own telephones, the more valuable the telephone is to each owner. This creates a positive externality, because a user may purchase a telephone without intending to create value for other users but does so anyway. Online social networks work in the same way, with sites like Twitter and Facebook becoming more attractive as more users join. The expression network effect is applied most commonly to positive network externalities, as in the case of the telephone. Negative network externalities can also occur, where more users make a product less valuable; this is more commonly referred to as congestion. Over time, positive network effects can create a bandwagon effect as the network becomes more valuable. Network effects were a theme in the arguments of Theodore Vail. In 1908, when he presented the concept in Bell's annual report, there were over 4,000 local and regional telephone exchanges, most of which were eventually merged into the Bell System. The economic theory of the network effect was advanced significantly between 1985 and 1995 by researchers Michael L. Katz, Carl Shapiro, and Joseph Farrell. Network effects were popularized by Robert Metcalfe, stated as Metcalfe's law. Metcalfe was one of the co-inventors of Ethernet and a co-founder of the company 3Com. In selling the product, Metcalfe argued that customers needed Ethernet cards to grow above a critical mass if they were to reap the benefits of their network. This was expressed algebraically as the network having a cost proportional to N but a value proportional to N². Network effects become significant after a certain subscription percentage has been achieved, called critical mass.
At the critical mass point, the value obtained from the good or service is greater than or equal to the price paid for it. A key business concern must then be how to attract users prior to reaching critical mass. One way is to rely on extrinsic motivation, such as a payment, a fee waiver, or a request for friends to sign up. A more natural strategy is to build a system that has enough value without network effects; then, as the number of users increases, the system becomes even more valuable and is able to attract a wider user base. Beyond critical mass, however, the number of subscribers cannot grow indefinitely: after a certain point, most networks become either congested or saturated. The applicable analogy is that of a telephone network: while the number of users is below the congestion point, each additional user adds additional value to every other customer.
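Metcalfe's law rests on a simple count: a network of N users can form N(N−1)/2 distinct pairwise connections, so the potential value grows roughly as N² while the cost grows as N. A quick sketch:

```python
def potential_connections(n):
    """Metcalfe's law counts the distinct pairs an n-user network
    can form: n*(n-1)/2, which grows roughly as n squared."""
    return n * (n - 1) // 2

for n in (2, 10, 100):
    print(n, potential_connections(n))
# 2 users -> 1 pair, 10 users -> 45 pairs, 100 users -> 4950 pairs
```

Doubling the user base from 10 to 20 more than quadruples the pair count (45 to 190), which is the quantitative intuition behind chasing critical mass.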

Network effect
–
The dominant rail gauge in each country shown

17.
Computer network
–
A computer network or data network is a telecommunications network which allows nodes to share resources. In computer networks, networked computing devices exchange data with each other using a data link. The connections between nodes are established using either cable media or wireless media; the best-known computer network is the Internet. Network computer devices that originate, route and terminate the data are called network nodes. Nodes can include hosts such as personal computers, phones, and servers, as well as networking hardware. Two such devices can be said to be networked together when one device is able to exchange information with the other device. Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, and the network's size, topology, and organizational intent. In most cases, application-specific communications protocols are layered over other more general communications protocols, and this formidable collection of information technology requires skilled network management to keep it all running reliably. The chronology of significant computer-network developments includes the following. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. J. C. R. Licklider developed a working group he called the Intergalactic Computer Network. In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Leonard Kleinrock, Paul Baran, and Donald Davies independently developed network systems that used packets to transfer information between computers over a network. In 1965, Thomas Marill and Lawrence G. Roberts created the first wide area network.
This was a precursor to the ARPANET, of which Roberts became program manager. Also in 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1972, commercial services using X.25 were deployed, and later used as an underlying infrastructure for expanding TCP/IP networks. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks", and in 1979 Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s, and by 1998 Ethernet supported transmission speeds of a gigabit. Subsequently, higher speeds of up to 100 Gbit/s were added; the ability of Ethernet to scale easily is a contributing factor to its continued use. Providing access to information on shared storage devices is an important feature of many networks: a network allows sharing of files, data, and other types of information, giving authorized users the ability to access information stored on other computers on the network.

Computer network

Computer network
–

Fiber optic cables are used to transmit light from one computer/network node to another

Computer network
–
Computers are very often connected to networks using wireless links

Computer network
–
An

ATM network interface in the form of an accessory card. A lot of network interfaces are built-in.

18.
Telecommunications network
–
A telecommunications network is a collection of terminal nodes and links that are connected so as to enable telecommunication between the terminals. The transmission links connect the nodes together, and the nodes use circuit switching, message switching or packet switching to pass the signal through the correct links and nodes to reach the correct destination terminal. Each terminal in the network usually has an address so messages or connections can be routed to the correct recipients; the collection of addresses in the network is called the address space. Businesses, for example, need a greater telecommunications network if they plan to expand their company. With Internet, computer, and telephone networks, businesses can allocate their resources efficiently. These core types of networks are discussed below. Computer network: a computer network consists of computers and devices connected to one another, so that information can be transferred from one device to the next. For example, an office filled with computers can share files together on each separate device. Computer networks can range from a local area network to a wide area network; the difference between the types of networks is their size. These types of computer networks work at certain speeds, also known as broadband. Internet network: the Internet network connects computers worldwide, and access to the network allows users to use many resources. Over time the Internet network may come to replace books, enabling users to discover information almost instantly and apply concepts to different situations. The Internet can be used for recreational, governmental, and educational purposes; businesses in particular use the Internet network for research or to service customers and clients. Telephone network: the telephone network connects people to one another. This network can be used in a variety of ways; many businesses use the telephone network to route calls and/or service their customers.
Some businesses use a telephone network on a greater scale through a private branch exchange, a system where a specific business focuses on routing and servicing calls for another business. The majority of the time, though, the telephone network is used around the world for recreational purposes. In general, every telecommunications network conceptually consists of three parts, or planes: the data plane carries the network users' traffic, the actual payload; the control plane carries control information; and the management plane carries the operations and administration traffic required for network management. The management plane is sometimes considered a part of the control plane. The data network is used throughout the world to connect individuals and organizations.

Telecommunications network

Telecommunications network
–
Example of how nodes may be interconnected with links to form a telecommunications network. This example is tree-like but many networks have loops.

Telecommunications network

Telecommunications network
–
The extracted channels at a site are connected to local devices through

muxponder or

transponder cards that can split or combine 40G channels to 4x 10G channels or 8x 2.5G channels.

19.
Social network
–
A social network is a social structure made up of a set of social actors, sets of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities as well as a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns and locate influential entities. Social networks and the analysis of them is an inherently interdisciplinary academic field which emerged from social psychology, sociology, and statistics. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology; together with other complex networks, it forms part of the nascent field of network science. The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies. The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the social contacts of that unit. This theoretical approach is, necessarily, relational. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice.
Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and beliefs, or as impersonal, formal, and instrumental social links. Major developments in the field can be seen in the 1930s by several groups in psychology and anthropology working independently. In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups. In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski and Alfred Radcliffe-Brown. In sociology, the work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. In general, social networks are self-organizing, emergent, and complex, and these patterns become more apparent as network size increases. However, a network analysis of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative.

Social network

Social network
–
Social network diagram, meso-level

Social network
–
Diagram: section of a large-scale social network

20.
Biological network
–
A biological network is any network that applies to biological systems. A network is any system with sub-units that are linked into a whole; biological networks provide a mathematical representation of connections found in ecological, evolutionary, and physiological studies, such as neural networks. The analysis of biological networks with respect to human diseases has led to the field of network medicine. Complex biological systems may be represented and analyzed as computable networks. For example, ecosystems can be modeled as networks of interacting species, or a protein can be modeled as a network of amino acids. Breaking a protein down further, amino acids can be represented as a network of connected atoms, such as carbon, nitrogen, and oxygen. Nodes and edges are the basic components of a network: nodes represent units in the network, while edges represent the interactions between the units. Nodes can represent a wide array of biological units, from individual organisms to individual neurons in the brain. Two important properties of a network are degree and betweenness centrality. Degree is the number of edges that connect a node, while betweenness is a measure of how central a node is in a network; nodes with high betweenness essentially serve as bridges between different portions of the network. In social networks, nodes with high degree or high betweenness may play important roles in the overall composition of a network. As early as the 1980s, researchers started viewing DNA or genomes as the storage of a language system with precise computable finite states represented as a finite state machine. Such theoretical studies have revealed that biological networks share many features with other networks such as the Internet or social networks. Many protein–protein interactions in a cell form protein interaction networks (PINs), where proteins are nodes and their interactions are edges. PINs are the most intensely analyzed networks in biology, and there are dozens of PPI detection methods to identify such interactions.
The yeast two-hybrid system is a commonly used technique for the study of binary interactions. Recent studies have indicated conservation of molecular networks through deep evolutionary time. Moreover, it has been discovered that proteins with high degrees of connectedness are more likely to be essential for survival than proteins with lesser degrees; this suggests that the overall composition of the network is important for the functioning of an organism. The activity of genes is regulated by transcription factors, proteins that typically bind to DNA. Most transcription factors bind to multiple binding sites in a genome; as a result, all cells have complex gene regulatory networks. For instance, the human genome encodes on the order of 1,400 DNA-binding transcription factors that regulate the expression of more than 20,000 human genes.
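The degree and betweenness measures described above can be computed by brute force on a small graph. The interaction graph below is invented for illustration, and betweenness is computed here naively by enumerating all shortest paths per node pair (fine for toy networks; real PINs need Brandes-style algorithms):

```python
from collections import deque
from itertools import combinations

# Hypothetical protein-interaction graph (undirected).
graph = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C", "E"},
    "E": {"D", "F"},
    "F": {"E"},
}

# Degree: number of edges incident to each node.
degree = {v: len(nbrs) for v, nbrs in graph.items()}

def all_shortest_paths(graph, s, t):
    """Enumerate every shortest path from s to t (BFS over paths)."""
    paths, best = [], None
    queue = deque([[s]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for nbr in graph[node]:
            if nbr not in path:
                queue.append(path + [nbr])
    return paths

def betweenness(graph, v):
    """Unnormalised betweenness: for each pair of other nodes, the
    fraction of their shortest paths passing through v, summed."""
    score = 0.0
    for s, t in combinations(graph, 2):
        if v in (s, t):
            continue
        paths = all_shortest_paths(graph, s, t)
        score += sum(v in p for p in paths) / len(paths)
    return score

print(degree["D"], betweenness(graph, "D"))  # 3 and 6.0
```

Node D has only moderate degree, but every shortest path between the {A, B, C} cluster and the {E, F} tail runs through it, so it scores highest on betweenness, exactly the "bridge" role the text describes.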

Biological network

21.
Artificial neural network
–
Each neural unit is connected with many others, and links can enhance or inhibit the activation state of adjoining neural units. Each individual neural unit computes using a summation function; there may be a threshold function or limiting function on each connection and on the unit itself, such that the signal must surpass the limit before propagating to other neurons. These systems are self-learning and trained, rather than explicitly programmed. Neural networks typically consist of multiple layers or a cube design, and the signal path traverses from the first to the last layer of neural units. Back-propagation is the use of forward stimulation to reset weights on the "front" neural units. More modern networks are a bit more free-flowing in terms of stimulation and inhibition, with connections interacting in a more chaotic way. Dynamic neural networks are the most advanced, in that they can dynamically form new connections based on rules. The goal of the neural network is to solve problems in the same way that the human brain would, although several neural networks are more abstract. New brain research often stimulates new patterns in neural networks; one new approach is using connections which span much further and link processing layers rather than always being localized to adjacent neurons. Neural networks are based on real numbers, with the state of a unit typically represented as a value between 0.0 and 1.0. An interesting facet of these systems is that they are unpredictable in their success with self-learning: after training, some become great problem solvers and others don't perform as well. In order to train them, several thousand cycles of interaction typically occur. Warren McCulloch and Walter Pitts created a computational model for neural networks based on mathematics. This model paved the way for neural network research to split into two distinct approaches: one focused on biological processes in the brain, and the other on the application of neural networks to artificial intelligence.
This work led to Kleene's paper on nerve networks: "Representation of Events in Nerve Nets and Finite Automata," in Automata Studies, ed. C. E. Shannon, Annals of Mathematics Studies, no. 34, Princeton University Press, Princeton, N.J., 1956. In the late 1940s, psychologist Donald Hebb created a hypothesis of learning based on the mechanism of neural plasticity that is now known as Hebbian learning; Hebbian learning is considered to be a typical unsupervised learning rule. Researchers started applying these ideas to computational models in 1948 with Turing's B-type machines. Farley and Wesley A. Clark first used computational machines, then called calculators, to simulate a Hebbian network. Other neural network computational machines were created by Rochester, Holland, Habit, and Duda. Frank Rosenblatt created the perceptron, an algorithm for pattern recognition based on a computer learning network using simple addition and subtraction.
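The weighted summation, threshold, and error-driven weight updates described above come together in Rosenblatt's perceptron. A minimal sketch learning the AND function (the learning rate, epoch count, and integer weights are illustrative choices, not Rosenblatt's originals):

```python
# A minimal Rosenblatt-style perceptron learning the AND function,
# using only weighted summation, a hard threshold, and additive
# error-driven weight updates.
def train_perceptron(samples, epochs=10, lr=1):
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Weighted summation followed by a hard threshold.
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            # Adjust weights by simple addition/subtraction of the error.
            error = target - output
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)
for (x1, x2), _ in and_gate:
    print(x1, x2, "->", predict(w, b, x1, x2))
```

Because AND is linearly separable, the update rule converges within a few epochs; the same single-layer unit famously cannot learn XOR, which is what later motivated multi-layer networks and back-propagation.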

Artificial neural network
–
Confidence analysis of a neural network

Artificial neural network

22.
Interdependent networks
–
The study of interdependent networks is a subfield of network science dealing with phenomena caused by the interactions between complex networks. Though there may be a wide variety of interactions between networks, dependency focuses on the scenario in which the nodes in one network require support from nodes in another network. In nature, networks rarely appear in isolation; they are typically elements in larger systems and can have non-trivial effects on one another. For example, infrastructure networks exhibit interdependency to a large degree: the power stations which form the nodes of the power grid require fuel delivered via a network of roads or pipes and are also controlled via the nodes of the communications network. Though the transportation network does not depend on the power network to function, if the networks were treated in isolation this important feedback effect would not be seen and predictions of network robustness would be greatly overestimated. Links in a standard network represent connectivity, providing information about how one node can be reached from another. Dependency links represent a need for support from one node to another; this relationship is often, though not necessarily, mutual, and thus the links can be directed or undirected. Crucially, a node loses its ability to function as soon as the node it is dependent on ceases to function, while it may not be so severely affected by losing a node it is connected to. In percolation theory, a node is considered active as long as it is connected to the giant component; the introduction of dependency links adds another condition, namely that the node it depends on must also be active. Dependency can be defined between different networks and also within the same network. Interdependent networks have markedly different percolation properties than single networks.
In isolation, percolation is a continuous second-order transition, a result established for ER networks, lattices, and other standard topologies. However, when multiple networks are interdependent, cascading failures emerge due to the positive feedback caused by dependency links. This family of processes causes a discontinuous, or first-order, phase transition, which has been observed for random networks as well as lattices. Furthermore, for embedded interdependent networks the transition is particularly precipitous, without even a critical exponent for p > p_c. A high degree, which is an asset in single networks, can be a liability in interdependent networks, because the hubs which increase robustness in single networks can be dependent on vulnerable low-degree nodes; the removal of such a node then removes the hub and all of its links. A typical cascading failure in a system of two interdependent networks A and B can be described as follows: each node A_i in A relies on a critical resource provided by a node B_i in B, and vice versa, so if A_i stops functioning, B_i will also stop functioning. The failure is triggered by the removal of a fraction 1 − p of nodes from A, along with the links in A which were attached to each of those nodes. Since every node in B depends on a node in A, the failures propagate into B, and in network theory we assume that only nodes which are part of the largest connected component can continue to function.
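The cascade just described can be simulated directly: repeatedly restrict each network to its giant component among surviving labels, letting the mutual dependency A_i ↔ B_i couple the two, until a fixed point is reached. The two small networks below are invented to show a multi-step cascade:

```python
from collections import deque

def giant_component(graph, alive):
    """Largest connected set of node labels among the survivors."""
    best, seen = set(), set()
    for start in alive:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        while queue:
            for nbr in graph[queue.popleft()]:
                if nbr in alive and nbr not in comp:
                    comp.add(nbr)
                    queue.append(nbr)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

def cascade(A, B, removed_from_A):
    """Mutual dependency A_i <-> B_i: a label survives only while it
    sits in the giant component of BOTH networks. Iterate the
    alternating pruning until it reaches a fixed point."""
    alive = set(A) - set(removed_from_A)
    while True:
        alive_next = giant_component(B, giant_component(A, alive))
        if alive_next == alive:
            return alive
        alive = alive_next

# Network A is a chain 0-1-2-3-4-5; B couples the same labels differently.
A = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
B = {0: [2, 3], 1: [3], 2: [0], 3: [0, 1], 4: [5], 5: [4]}

# Removing the single node 2 from A cascades until only {4, 5} survive.
print(cascade(A, B, [2]))
```

Removing node 2 splits A into {0, 1} and {3, 4, 5}; the smaller piece dies, then node 3 is isolated in B and dies too, illustrating how a single removal amplifies through the dependency coupling.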

Interdependent networks

23.
Semantic network
–
A semantic network, or frame network, is a network that represents semantic relations between concepts. It is often used as a form of knowledge representation: a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts. Typical standardized semantic networks are expressed as semantic triples. Semantic nets were first invented for computers by Richard H. Richens of the Cambridge Language Research Unit in 1956 as an interlingua for machine translation of natural languages, and they later featured prominently in the work of Allan M. Collins. In the subsequent decades, the distinction between semantic networks and knowledge graphs was blurred; in 2012, Google gave their knowledge graph the name Knowledge Graph. A semantic network is used when one has knowledge that is best understood as a set of concepts that are related to one another. Most semantic networks are cognitively based; they consist of arcs and nodes which can be organized into a taxonomic hierarchy. Semantic networks contributed ideas of spreading activation, inheritance, and nodes as proto-objects. In a Lisp implementation, for example, one would use the assoc function with a key of canary to extract all the information about the canary type. An example of a semantic network is WordNet, a lexical database of English: it groups English words into sets of synonyms called synsets and provides short, general definitions. Some of the most common semantic relations defined are meronymy, holonymy, hyponymy, hypernymy, synonymy and antonymy. WordNet properties have been studied from a network theory perspective and compared to other semantic networks created from Roget's Thesaurus; from this perspective all three have a small-world structure. It is also possible to represent logical descriptions using semantic networks, such as the graphs of Charles Sanders Peirce or the related conceptual graphs of John F. Sowa.
These have expressive power equal to or exceeding that of standard first-order predicate logic. Unlike WordNet and other lexical or browsing networks, semantic networks using these representations can be used for reliable automated logical deduction, and some automated reasoners exploit the features of the networks during processing. Other examples of semantic networks are Gellish models. Gellish English, with its Gellish English dictionary, is a language that is defined as a network of relations between concepts and names of concepts. Gellish English is a formal subset of natural English, just as Gellish Dutch is a formal subset of Dutch. Other Gellish networks consist of knowledge models and information models that are expressed in the Gellish language. A Gellish network is a network of relations between things; each relation in the network is an expression of a fact that is classified by a relation type.
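As a concrete illustration of the triple representation and the assoc-style lookup mentioned above, here is a toy semantic network in Python. The canary example loosely follows the classic taxonomic-hierarchy demonstrations; the facts and relation names are illustrative, not drawn from any standard dataset.

```python
# A semantic network as a list of (subject, relation, object) triples,
# with property lookup that inherits upward along "is_a" links.
triples = [
    ("canary", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("canary", "color", "yellow"),
    ("bird", "can", "fly"),
    ("animal", "has", "skin"),
]

def assoc(concept):
    """Return all facts whose subject is `concept` (cf. the Lisp assoc idiom)."""
    return [(r, o) for s, r, o in triples if s == concept]

def lookup(concept, relation):
    """Find a value for `relation`, walking up the is_a hierarchy if needed."""
    node = concept
    while node is not None:
        for r, o in assoc(node):
            if r == relation:
                return o
        parents = [o for r, o in assoc(node) if r == "is_a"]
        node = parents[0] if parents else None
    return None
```

`lookup("canary", "can")` returns "fly" by inheritance from bird, demonstrating the inheritance idea credited to semantic networks above.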

Semantic network

24.
Spatial network
–
A spatial network is a graph in which the vertices or edges are spatial elements associated with geometric objects, i.e. the nodes are located in a space equipped with a certain metric. Characterizing and understanding the structure, resilience and evolution of spatial networks is crucial for many different fields, ranging from urbanism to epidemiology. An urban spatial network can be constructed by abstracting intersections as nodes and streets as links, which yields a transportation network; one might think of such a map as the negative image of the standard map. Planar networks form an important subclass of spatial networks, but not every spatial network is planar: the airline passenger network is a non-planar example, since all airports in the world are connected through direct flights. What distinguishes a spatial network is the way it is embedded in space. Social networks, for instance, connect individuals through friendship relations, and space intervenes in that the connection probability between two individuals decreases with the distance between them. A spatial network can also be represented by a Voronoi tessellation; the dual graph of a Voronoi diagram is the Delaunay triangulation of the same set of points. Voronoi tessellations are interesting for spatial networks in that they provide a natural representation model against which one can compare a real-world network. Another approach mixes space and topology, examining the topology of the nodes and edges themselves. Many physical phenomena have been studied on these structures; examples include the Ising model for spontaneous magnetization and diffusion phenomena modeled as random walks. For example, new links representing friendships in social networks form in a somewhat random manner. 
It is therefore natural to model spatial networks with stochastic processes; in many cases the spatial Poisson process is used to approximate data sets of processes on spatial networks. It can be difficult to decide what a spatial element should be in complex spaces involving large open areas or many interconnected paths. The originators of space syntax, Bill Hillier and Julienne Hanson, use axial lines and convex spaces as the spatial elements: loosely, an axial line is the longest line of sight and access through open space, and a convex space is the maximal convex polygon that can be drawn in open space. Each of these elements is defined by the geometry of the boundary in different regions of the space map. Decomposition of a space map into a complete set of intersecting axial lines or overlapping convex spaces produces the axial map or the overlapping convex map, respectively. Objects of study in geography include, inter alia, locations, activities and flows of individuals, but many important points long remained unclear, partly because datasets of large networks and sufficient computer capabilities were lacking. Recently, spatial networks have also become a subject of study in statistics, connecting probabilities and stochastic processes with networks in the real world.
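A minimal way to experiment with spatial networks is to generate one in which geometry directly determines the links. The sketch below builds a random geometric graph: nodes scattered uniformly in the unit square, with an edge between any two nodes closer than a cutoff radius under the Euclidean metric. This is an illustrative toy model (not the space-syntax or Voronoi constructions discussed above), and the parameter values are arbitrary.

```python
import math
import random

def random_geometric_graph(n, radius, seed=0):
    """Place n nodes uniformly in the unit square and connect every pair
    closer than `radius` -- a simple spatial network."""
    rng = random.Random(seed)
    pos = {i: (rng.random(), rng.random()) for i in range(n)}
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pos[i], pos[j]) < radius:
                edges.add((i, j))
    return pos, edges

pos, edges = random_geometric_graph(100, 0.2, seed=1)
```

Because connections exist only below the cutoff distance, degree and clustering in such a graph are constrained by the embedding space, which is the defining feature of spatial networks.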

Spatial network

25.
Dependency network
–
The dependency network approach provides a system-level analysis of the activity and topology of directed networks. The approach extracts causal topological relations between the nodes and provides an important step towards inference of causal activity relations between the network nodes. The methodology was originally introduced for the study of financial data; it has since been extended and applied to other systems, such as the immune system and semantic networks. In the case of network activity, the analysis is based on partial correlations. In simple words, the partial correlation is a measure of the effect of a given node, say j, on the correlation between another pair of nodes, say i and k. Using this concept, the dependency of one node on another is calculated for the entire network; this results in a directed, weighted adjacency matrix of a fully connected network. Partial-correlation-based dependency networks are thus a new class of correlation-based networks. The methodology was first presented at the end of 2010, in research headed by Dror Y. Kenett and his Ph.D. supervisor Prof. Eshel Ben-Jacob, in collaboration with Dr. Michele Tumminello. They quantitatively uncovered hidden information about the underlying structure of the U.S. stock market, information that was not present in the standard correlation networks, and were thus able for the first time to show the dependency relationships between the different economic sectors. Following this work, the dependency network methodology has been applied to the study of the immune system; as such, it is applicable to any complex system. To be more specific, the partial correlation of the pair (i, k) given j is their correlation after proper subtraction of the correlations between i and j and between k and j. Defined this way, the difference between the correlation and the partial correlation provides a measure of the influence of node j on the correlation between i and k. 
Therefore, we define the influence of node j on node i, or equivalently the dependency of node i on node j, D(i,j). (In the case of network topology, the analysis is instead based on the effect of node deletion on the shortest paths between the network nodes.) The correlations for all pairs of nodes define a symmetric correlation matrix whose element C(i,j) is the correlation between nodes i and j. Next we use the resulting node correlations to compute the partial correlations; the first-order partial correlation coefficient is a statistical measure indicating how a third variable affects the correlation between two other variables. The partial correlation between nodes i and k with respect to a third node j, PC(i,k∣j), is defined as

PC(i,k∣j) = (C(i,k) − C(i,j)·C(k,j)) / √[(1 − C²(i,j))·(1 − C²(k,j))],

where C(i,j), C(i,k) and C(k,j) are the node correlations defined above. The influence of node j on the correlation between i and k is then d(i,k∣j) = C(i,k) − PC(i,k∣j); this quantity can be viewed either as the influence of node j on the correlation C(i,k), or as the dependency of that correlation on node j. The node activity dependencies define a dependency matrix D whose element D(i,j) is the dependency of node i on node j, obtained by averaging d(i,k∣j) over all other nodes k. Note that while the correlation matrix is symmetric, the dependency matrix is not; for this reason, some of the methods used in the analysis of the correlation matrix have to be replaced or are less efficient.
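The construction above can be sketched numerically. The following code computes a dependency matrix from a data matrix using the first-order partial correlation formula; the averaging convention (over all k ≠ i, j) is one reasonable reading of the method, so treat this as an illustrative sketch rather than a faithful reimplementation of the published procedure.

```python
import numpy as np

def dependency_matrix(X):
    """Dependency matrix D from data X (rows = nodes, columns = observations).
    D[i, j] estimates the influence of node j on node i, averaged over the
    correlations of i with every other node k (k != i, j)."""
    C = np.corrcoef(X)                 # node-node correlations C(i, k)
    n = C.shape[0]
    D = np.zeros((n, n))
    with np.errstate(divide="ignore", invalid="ignore"):
        for j in range(n):
            # first-order partial correlation PC(i, k | j) for all pairs at once
            denom = np.sqrt(np.outer(1.0 - C[:, j] ** 2, 1.0 - C[:, j] ** 2))
            PC = (C - np.outer(C[:, j], C[:, j])) / denom
            d = C - PC                 # influence of j on each pair correlation
            for i in range(n):
                ks = [k for k in range(n) if k != i and k != j]
                D[i, j] = d[i, ks].mean()
    np.fill_diagonal(D, 0.0)           # self-dependency is not defined
    return D
```

Note that D is in general asymmetric (D[i, j] ≠ D[j, i]), which is exactly what turns the symmetric correlation matrix into a directed network.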

Dependency network
–
Dependency Network of financial data, for 300 of the S&P500 stocks, traded between 2001-2003. Stocks are grouped by economic sectors, and the arrow points in the direction of influence. The hub of the network, the most influencing sector, is the Financial sector. Reproduction from Kenett et al., PLoS ONE 5(12), e15032 (2010)

Dependency network
–
Dependency Network of specific antibodies activity, measured for a group of mothers. Panel (a) presents the Dependency Network, and panel (b) the standard correlation network. Reproduction from Madi et al., Chaos 21, 016109 (2011)

Dependency network
–
Example of Dependency Network of associations, constructed from a full semantic network. Reproduction from Kenett et al., PLoS ONE 6(8): e23912 (2011)

26.
Flow network
–
In graph theory, a flow network is a directed graph in which each edge has a capacity and each edge receives a flow; the amount of flow on an edge cannot exceed the capacity of the edge. In operations research, a directed graph is often called a network, the vertices are called nodes, and the edges are called arcs. A network is a graph G = (V, E) together with a capacity function c : V × V → ℝ∞. Without loss of generality, it may be assumed that if (u, v) ∈ E then (v, u) is also a member of E: if (v, u) ∉ E, it may be added to E with c(v, u) = 0. If two nodes in G are distinguished, a source s and a sink t, then (G, c, s, t) is called a flow network. There are various notions of a flow function that can be defined in a flow graph. The simplest is known as a pseudo-flow, which need only satisfy the capacity constraint: an arc's flow cannot exceed its capacity, that is, f(u, v) ≤ c(u, v). Given a pseudo-flow f in a flow network, it is often useful to consider the net flow entering a given node v, that is, the sum of the flows entering v. The excess function x_f : V → ℝ is defined by x_f(u) = ∑_{v ∈ V} f(v, u); a node u is said to be active if x_f(u) > 0, deficient if x_f(u) < 0, and conserving if x_f(u) = 0. A pre-flow satisfies x_f(v) ≥ 0 for all v ∈ V \ {s, t}, while a feasible flow satisfies x_f(v) = 0 for all v ∈ V \ {s, t}. The value of a flow f, denoted |f|, is the net flow into the sink t of the flow network. In the context of flow analysis, there is only an interest in how units are transferred between nodes in a holistic sense; for this reason, the single capacity function c : V × V → ℝ∞ suffices to describe the network. These simplifications of the model aren't always immediately intuitive, but they are convenient when it comes time to analyze flows. The capacity constraint simply ensures that a flow on any one arc within the network cannot exceed the capacity of that arc. The residual capacity of an arc with respect to a pseudo-flow f, denoted c_f, is the difference between the arc's capacity and its flow: c_f(u, v) = c(u, v) − f(u, v). From this we can construct a residual network, denoted G_f. 
More formally, given a flow network G, the residual network G_f has the same node set V and arc set E_f = {(u, v) : c_f(u, v) > 0}. This concept is used in the Ford–Fulkerson algorithm, which computes the maximum flow in a flow network. Note that there can be an augmenting path from u to v in the residual network even though there is none in the original network: since flows in opposite directions cancel out, decreasing the flow from v to u is the same as increasing the flow from u to v.
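The residual-network idea above is exactly what the Ford–Fulkerson family of algorithms exploits. Below is a compact sketch of the Edmonds–Karp variant, which repeatedly finds a shortest augmenting path by breadth-first search in the residual network; the dict-of-dicts capacity format is an arbitrary choice for the example.

```python
from collections import deque

def edmonds_karp(capacity, s, t):
    """Maximum flow via shortest augmenting paths in the residual network.
    `capacity` is a dict of dicts: capacity[u][v] = arc capacity."""
    nodes = set(capacity) | {v for u in capacity for v in capacity[u]}
    # residual capacities c_f, including reverse arcs initialised to 0
    cf = {u: {} for u in nodes}
    for u in capacity:
        for v, c in capacity[u].items():
            cf[u][v] = cf[u].get(v, 0) + c
            cf[v].setdefault(u, 0)
    flow = 0
    while True:
        # BFS for an augmenting path s -> t with positive residual capacity
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cf[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow            # no augmenting path left: flow is maximum
        # bottleneck (minimum residual capacity) along the path found
        b, v = float("inf"), t
        while parent[v] is not None:
            b = min(b, cf[parent[v]][v])
            v = parent[v]
        # augment: decrease forward residuals, increase reverse ones
        v = t
        while parent[v] is not None:
            u = parent[v]
            cf[u][v] -= b
            cf[v][u] += b
            v = u
        flow += b
```

Increasing the reverse residual `cf[v][u]` is what lets a later path "cancel" flow sent the wrong way, as described above.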

Flow network
–
A flow network showing flow and capacity

27.
Clique (graph theory)
–
Cliques are one of the basic concepts of graph theory and are used in many other mathematical problems and constructions on graphs; they also have many applications in the sciences, particularly in bioinformatics. A clique C in an undirected graph G = (V, E) is a subset of the vertices, C ⊆ V, such that every two distinct vertices in C are adjacent; this is equivalent to the condition that the subgraph of G induced by C is a complete graph. In some cases, the term clique may also refer to the subgraph directly. A maximal clique is a clique that cannot be extended by including one more adjacent vertex; some authors define cliques in a way that requires them to be maximal, and use other terminology for complete subgraphs that are not maximal. A maximum clique of a graph G is a clique with as many vertices as possible, and the clique number ω(G) of a graph G is the number of vertices in a maximum clique of G. The intersection number of G is the smallest number of cliques that together cover all edges of G, while the clique cover number of G is the smallest number of cliques of G whose union covers V. A maximum clique transversal of a graph is a subset of vertices with the property that each maximum clique of the graph contains at least one vertex in the subset. The opposite of a clique is an independent set, in the sense that every clique corresponds to an independent set in the complement graph; the clique cover problem concerns finding as few cliques as possible that include every vertex in the graph. A related concept is a biclique, a complete bipartite subgraph; the bipartite dimension of a graph is the minimum number of bicliques needed to cover all its edges. Mathematical results concerning cliques include the following. Turán's theorem gives a lower bound on the size of a clique in dense graphs: if a graph has sufficiently many edges, it must contain a large clique. For instance, every graph with n vertices and more than ⌊n/2⌋ ⋅ ⌈n/2⌉ edges must contain a three-vertex clique. 
Ramsey's theorem states that every graph or its complement contains a clique with at least a logarithmic number of vertices. According to a result of Moon & Moser, a graph with 3n vertices can have at most 3^n maximal cliques; the graphs meeting this bound are the Moon–Moser graphs, a special case of the Turán graphs arising as the extremal cases in Turán's theorem. Hadwiger's conjecture, still unproven, relates the size of the largest clique minor in a graph to its chromatic number, and the Erdős–Faber–Lovász conjecture is another unproven statement relating graph coloring to cliques. Several important classes of graphs may be defined or characterized by their cliques. A block graph is a graph whose biconnected components are cliques. A cograph is a graph all of whose induced subgraphs have the property that any maximal clique intersects any maximal independent set in a single vertex.
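Maximal cliques as defined above can be enumerated with the classic Bron–Kerbosch recursion, shown here in its basic form without pivoting. Consistent with the Moon–Moser bound, its running time is exponential in the worst case, so this is only a sketch for small graphs.

```python
def bron_kerbosch(adj):
    """Enumerate all maximal cliques of an undirected graph given as an
    adjacency dict {vertex: set_of_neighbours}. R is the growing clique,
    P the candidate vertices, X the already-processed vertices."""
    cliques = []

    def expand(R, P, X):
        if not P and not X:
            cliques.append(R)      # R cannot be extended: maximal clique
            return
        for v in list(P):
            expand(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)

    expand(set(), set(adj), set())
    return cliques
```

On a triangle {1, 2, 3} with a pendant vertex 4 attached to 3, the algorithm reports exactly the maximal cliques {1, 2, 3} and {3, 4}; the clique number ω is then the size of the largest clique found.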

Clique (graph theory)
–
23 × 1-vertex cliques (the vertices).

28.
Connected component (graph theory)
–
For example, the graph shown in the illustration has three connected components. A vertex with no incident edges is itself a connected component, and a graph that is itself connected has exactly one connected component, consisting of the whole graph. An alternative way to define connected components involves the equivalence classes of an equivalence relation defined on the vertices of the graph. In an undirected graph, a vertex v is reachable from a vertex u if there is a path from u to v; in this definition, a single vertex is counted as a path of length zero. Reachability is an equivalence relation, since it is reflexive (there is a trivial path of length zero from any vertex to itself), it is symmetric (a path from u to v may be reversed to give a path from v to u), and it is transitive (a path from u to v and a path from v to w may be concatenated to form a path from u to w). The connected components are then the induced subgraphs formed by the equivalence classes of this relation. The number of connected components is an important topological invariant of a graph: in topological graph theory it can be interpreted as the zeroth Betti number of the graph, in algebraic graph theory it equals the multiplicity of 0 as an eigenvalue of the Laplacian matrix of the graph, and it is also the index of the first nonzero coefficient of the chromatic polynomial of the graph. Numbers of connected components also play a key role in the Tutte theorem characterizing graphs that have perfect matchings. It is straightforward to compute the connected components of a graph in linear time using either breadth-first search or depth-first search: in either case, a search that begins at some vertex v will find the entire connected component containing v before returning. Hopcroft and Tarjan describe essentially this algorithm, and state that at that point it was already well known. There are also efficient algorithms to dynamically track the connected components of a graph as vertices and edges are added, as a straightforward application of disjoint-set data structures. 
For forests, the cost can be reduced further, with low amortized cost per edge deletion; Reingold finally succeeded in finding an algorithm for solving this connectivity problem in logarithmic space, showing that L = SL.

References: Papadimitriou, Christos H., "Symmetric space-bounded computation", Theoretical Computer Science 19, 161–187. Reingold, Omer, "Undirected connectivity in log-space", Journal of the ACM 55, Article 17, 24 pages, doi:10.1145/1391289.1391291.
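The linear-time computation mentioned above amounts to a few lines of breadth-first search; the adjacency-dict input format is an illustrative choice.

```python
from collections import deque

def connected_components(adj):
    """Connected components of an undirected graph (adjacency dict) by
    repeated BFS -- linear in the number of vertices plus edges."""
    seen, components = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, queue = [], deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        components.append(comp)
    return components
```

As the text notes, each search started at an unvisited vertex discovers exactly one whole component before returning, so the number of BFS restarts equals the number of components.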

Connected component (graph theory)
–
A graph with three connected components.

29.
Cut (graph theory)
–
In graph theory, a cut is a partition of the vertices of a graph into two disjoint subsets. Any cut determines a cut-set, the set of edges that have one endpoint in each subset of the partition; these edges are said to cross the cut. In a connected graph, each cut-set determines a unique cut, and in some cases cuts are identified with their cut-sets rather than with their vertex partitions. In a flow network, an s–t cut is a cut that requires the source and the sink to be in different subsets, and the capacity of an s–t cut is defined as the sum of the capacities of the edges in its cut-set. Formally, a cut C = (S, T) is a partition of the vertex set V of a graph G = (V, E) into two subsets S and T; the cut-set of C is the set of edges that have one endpoint in S and the other endpoint in T. If s and t are specified vertices of the graph G, an s–t cut is a cut in which s belongs to S and t belongs to T. In an unweighted undirected graph, the size or weight of a cut is the number of edges crossing the cut; in a weighted graph, it is the sum of the weights of the edges crossing the cut. A bond is a cut-set that does not have any other cut-set as a proper subset. A cut is minimum if its size or weight is not larger than that of any other cut; the illustration shows a minimum cut, whose size is 2. The max-flow min-cut theorem states that the maximum network flow equals the sum of the cut-edge weights of any minimum cut that separates the source from the sink. There are polynomial-time methods to solve the min-cut problem, notably the Edmonds–Karp algorithm. A cut is maximum if its size is not smaller than the size of any other cut; the illustration shows a maximum cut, whose size is 5. In general, finding a maximum cut is computationally hard: the max-cut problem is one of Karp's 21 NP-complete problems, and it is also APX-hard, meaning that there is no polynomial-time approximation scheme for it unless P = NP. 
However, it can be approximated to within a constant approximation ratio using semidefinite programming. Note that min-cut and max-cut are not dual problems in the linear programming sense, even though one gets from one problem to the other by changing min to max in the objective function; rather, the max-flow problem is the dual of the min-cut problem. The sparsest cut problem is to bipartition the vertices so as to minimize the ratio of the number of edges across the cut to the number of vertices in the smaller half of the partition; this objective function favors solutions that are both sparse and balanced.
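Since max-cut is NP-hard, the only straightforward exact method is exhaustive search over bipartitions, which is viable only for very small graphs. The sketch below does exactly that, counting crossing edges for every subset S; it is a brute-force illustration of the definition, not a practical algorithm.

```python
from itertools import combinations

def max_cut(vertices, edges):
    """Brute-force maximum cut: try every bipartition (S, V \\ S) and count
    the edges crossing it. Exponential in |V|, so small graphs only."""
    best_size, best_S = 0, set()
    vs = list(vertices)
    for r in range(len(vs) + 1):
        for subset in combinations(vs, r):
            S = set(subset)
            size = sum(1 for u, v in edges if (u in S) != (v in S))
            if size > best_size:
                best_size, best_S = size, S
    return best_size, best_S
```

On a 5-cycle the maximum cut has size 4: an odd cycle is not bipartite, so not all of its edges can cross a single cut.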

Cut (graph theory)
–
A minimum cut.

30.
Graph (abstract data type)
–
In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from mathematics. A graph data structure consists of a finite (and possibly mutable) set of vertices, together with a set of unordered pairs of these vertices for an undirected graph or ordered pairs for a directed graph. These pairs are known as edges, arcs, or lines for an undirected graph and as arrows, directed edges, or directed arcs for a directed graph. The vertices may be part of the graph structure, or may be external entities represented by integer indices or references. A graph data structure may also associate with each edge some edge value. Common representations include the following. Adjacency list: vertices are stored as records or objects, and every vertex stores a list of adjacent vertices; this data structure allows the storage of additional data on the vertices, and additional data can be stored on edges if edges are also stored as objects, in which case each vertex stores its incident edges. Adjacency matrix: a two-dimensional matrix in which the rows represent source vertices and the columns represent destination vertices; data on edges and vertices must be stored externally, and only the cost for one edge can be stored between each pair of vertices. Incidence matrix: a two-dimensional Boolean matrix in which the rows represent the vertices and the columns represent the edges; the entries indicate whether the vertex at a row is incident to the edge at a column. The following table gives the time complexity cost of performing operations on graphs, for each of these representations, with |V| the number of vertices. In the matrix representations, the entries encode the cost of following an edge, and the cost of edges that are not present is assumed to be ∞. Adjacency lists are generally preferred because they efficiently represent sparse graphs.
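As a sketch of the adjacency-list representation described above, here is a minimal undirected graph ADT in Python. The method names loosely follow the usual abstract operations (adjacent, neighbors, add_edge, remove_edge), but the interface and the decision to store edge values in a separate dict are illustrative choices.

```python
class Graph:
    """A minimal undirected graph ADT backed by an adjacency list
    (a dict mapping each vertex to the set of its neighbours)."""

    def __init__(self):
        self._adj = {}
        self._edge_value = {}

    def add_vertex(self, x):
        self._adj.setdefault(x, set())

    def add_edge(self, x, y, value=None):
        self.add_vertex(x)
        self.add_vertex(y)
        self._adj[x].add(y)
        self._adj[y].add(x)          # undirected: store both directions
        self._edge_value[frozenset((x, y))] = value

    def adjacent(self, x, y):
        return y in self._adj.get(x, set())

    def neighbors(self, x):
        return set(self._adj.get(x, set()))

    def edge_value(self, x, y):
        return self._edge_value.get(frozenset((x, y)))

    def remove_edge(self, x, y):
        self._adj[x].discard(y)
        self._adj[y].discard(x)
        self._edge_value.pop(frozenset((x, y)), None)
```

Testing adjacency and enumerating neighbors are constant- and degree-time here, which is why adjacency lists suit sparse graphs, as noted above.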

Graph (abstract data type)
–
A graph with three vertices and three edges.