1.
Computer network
–
A computer network or data network is a telecommunications network which allows nodes to share resources. In computer networks, networked computing devices exchange data with each other using a data link. The connections between nodes are established using either cable media or wireless media; the best-known computer network is the Internet. Network computer devices that originate, route and terminate the data are called network nodes. Nodes can include hosts such as personal computers, phones and servers, as well as networking hardware. Two such devices can be said to be networked together when one device is able to exchange information with the other device. Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, topology and organizational intent. In most cases, application-specific communications protocols are layered over other more general communications protocols, and this formidable collection of information technology requires skilled network management to keep it all running reliably. The chronology of significant computer-network developments includes the following. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. J. C. R. Licklider developed a group he called the Intergalactic Computer Network. In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Leonard Kleinrock, Paul Baran, and Donald Davies independently developed network systems that used packets to transfer information between computers over a network. In 1965, Thomas Marill and Lawrence G. Roberts created the first wide area network.
This was a precursor to the ARPANET, of which Roberts became program manager. Also in 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1972, commercial services using X.25 were deployed, and were later used as an underlying infrastructure for expanding TCP/IP networks. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks", and in 1979 Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s, and by 1998 Ethernet supported transmission speeds of a gigabit. Subsequently, higher speeds of up to 100 Gbit/s were added; the ability of Ethernet to scale easily is a contributing factor to its continued use. Providing access to information on shared storage devices is an important feature of many networks: a network allows sharing of files, data, and other types of information, giving authorized users the ability to access information stored on other computers on the network.
2.
Network motif
–
All networks, including biological networks, social networks, technological networks and more, can be represented as graphs, which include a wide variety of subgraphs. One important local property of networks is the presence of so-called network motifs: recurrent sub-graphs that repeat themselves in a specific network or even among various networks. Each of these sub-graphs, defined by a particular pattern of interactions between vertices, may reflect a framework in which particular functions are achieved efficiently. Indeed, motifs are of notable importance largely because they may reflect functional properties, and they have recently gathered much attention as a useful concept to uncover the structural design principles of complex networks; network motifs may thus provide insight into a network's functional abilities. Let G = (V, E) and G′ = (V′, E′) be two graphs. Graph G′ is a sub-graph of graph G if V′ ⊆ V and E′ ⊆ E ∩ (V′ × V′). If G′ ⊆ G and G′ contains all of the edges ⟨u, v⟩ ∈ E with u, v ∈ V′, then G′ is an induced sub-graph of G. We call G′ and G isomorphic if there exists a bijection f: V′ → V with ⟨u, v⟩ ∈ E′ ⇔ ⟨f(u), f(v)⟩ ∈ E. The mapping f is called an isomorphism between G and G′. When G″ ⊂ G and there exists an isomorphism between the sub-graph G″ and a graph G′, this mapping represents an appearance of G′ in G. The number of appearances of graph G′ in G is called the frequency F_G of G′ in G, and a graph is called recurrent (or frequent) in G when its frequency is above a predefined threshold or cut-off value. The terms pattern and frequent sub-graph are used interchangeably in this review. There is an ensemble Ω of random graphs corresponding to the null-model associated with G. One chooses N random graphs uniformly from Ω, calculates the frequency of a particular frequent sub-graph G′ in each of them, and compares these frequencies with the frequency of G′ in G by means of a Z-score; the larger the Z-score, the more significant is the sub-graph G′ as a motif.
A sub-graph with a P-value less than a threshold will be treated as a significant pattern. Another statistical measurement has been defined for evaluating network motifs, but it is rarely used in known algorithms; this measurement, introduced by Picard et al. in 2008, uses the Poisson distribution. In addition, three specific concepts of sub-graph frequency have been proposed. The first frequency concept F1 considers all matches of a graph in the original network; this definition is similar to what we have introduced above. The second concept F2 is defined as the number of edge-disjoint instances of a given graph in the original network. And finally, the frequency concept F3 entails matches with disjoint edges and nodes. Therefore, the two concepts F2 and F3 restrict the usage of elements of the graph, and as can be inferred, the frequency of a sub-graph declines by imposing restrictions on network element usage. As a result, a network motif detection algorithm would pass over more candidate sub-graphs if we insist on frequency concepts F2 and F3.
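The F1 frequency and Z-score described above can be sketched in a few lines. This is a minimal illustration, assuming networkx is available; the function names (`subgraph_frequency`, `motif_z_score`) are ours, not from any standard library, and the degree-preserving randomization via double edge swaps is one common choice of null model, not the only one.

```python
from itertools import combinations
import random
import statistics

import networkx as nx

def subgraph_frequency(G, motif):
    """F1 frequency: count induced k-node subgraphs of G isomorphic to `motif`."""
    k = motif.number_of_nodes()
    return sum(
        1 for nodes in combinations(G.nodes, k)
        if nx.is_isomorphic(G.subgraph(nodes), motif)
    )

def motif_z_score(G, motif, n_random=20, seed=0):
    """Z-score of the motif frequency against degree-preserving random graphs."""
    f_real = subgraph_frequency(G, motif)
    rnd = random.Random(seed)
    freqs = []
    for _ in range(n_random):
        R = G.copy()
        # Randomize while keeping every node's degree intact.
        nx.double_edge_swap(R, nswap=2 * G.number_of_edges(),
                            max_tries=10_000, seed=rnd.randint(0, 10**6))
        freqs.append(subgraph_frequency(R, motif))
    mu = statistics.mean(freqs)
    sigma = statistics.stdev(freqs)
    return (f_real - mu) / sigma if sigma > 0 else 0.0
```

The brute-force enumeration over all k-node subsets is exponential in practice; real motif-detection tools use far more efficient enumeration, but the definition of F1 is exactly this count.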
3.
Dependency network
–
The dependency network approach provides a system-level analysis of the activity and topology of directed networks. The approach extracts causal topological relations between the nodes, and provides an important step towards inference of causal activity relations between the network nodes. This methodology was originally introduced for the study of financial data; it has since been extended and applied to other systems, such as the immune system. In the case of network activity, the analysis is based on partial correlations. In simple words, the partial correlation is a measure of the effect of a given node, say j, on the correlations between another pair of nodes, say i and k. Using this concept, the dependency of one node on another node is calculated for the entire network, and this results in a directed weighted adjacency matrix of a fully connected network. The partial-correlation-based dependency networks are a new class of correlation-based networks. This original methodology was first presented at the end of 2010. This research, headed by Dror Y. Kenett and his Ph.D. supervisor Prof. Eshel Ben-Jacob, in collaboration with Dr. Michele Tumminello, quantitatively uncovered hidden information about the underlying structure of the U.S. stock market, information that was not present in the standard correlation networks. Thus, they were able for the first time to show the dependency relationships between the different economic sectors. Following this work, the dependency network methodology has been applied to the study of the immune system; as such, this methodology is applicable to any complex system. To be more specific, the partial correlation of the pair (i, k) given j is the correlation between them after accounting for the correlations of each of them with j. Defined this way, the difference between the correlations and the partial correlations provides a measure of the influence of node j on the correlation.
Therefore, we define the influence of node j on node i, or the dependency of node i on node j, as D(i, j). In the case of network topology, the analysis is based on the effect of node deletion on the shortest paths between the network nodes. Note that the correlations for all pairs of nodes define a symmetric correlation matrix whose (i, j) element is the correlation between nodes i and j. Next we use the resulting node correlations to compute the partial correlations; the first-order partial correlation coefficient is a statistical measure indicating how a third variable affects the correlation between two other variables. The partial correlation between nodes i and k with respect to a third node j, PC(i, k | j), is defined as

PC(i, k | j) = (C(i, k) − C(i, j) C(k, j)) / sqrt((1 − C²(i, j)) (1 − C²(k, j)))

where C(i, k), C(i, j) and C(k, j) are the node correlations defined above. We note that this quantity can be viewed as the dependency of the correlation C(i, k) on node j. The node activity dependencies define a dependency matrix D whose (i, j) element is the dependency of node i on node j. Unlike the correlation matrix, the dependency matrix is not symmetric; for this reason, some of the methods used in the analyses of the correlation matrix have to be replaced or are less efficient.
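The first-order partial correlation above, and the dependency matrix built from it, can be sketched directly from a correlation matrix. This is a minimal numpy illustration; the helper names are ours, and the averaging used in `dependency_matrix` (the mean difference between correlation and partial correlation over all third nodes k) follows the description in the text rather than any particular library API.

```python
import numpy as np

def partial_correlation(C, i, k, j):
    """First-order partial correlation PC(i, k | j) from a correlation matrix C."""
    num = C[i, k] - C[i, j] * C[k, j]
    den = np.sqrt((1.0 - C[i, j] ** 2) * (1.0 - C[k, j] ** 2))
    return num / den

def dependency_matrix(C):
    """D[i, j]: average influence of node j on the correlations of node i."""
    n = C.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Average C(i,k) - PC(i,k|j) over all remaining nodes k.
            terms = [C[i, k] - partial_correlation(C, i, k, j)
                     for k in range(n) if k not in (i, j)]
            D[i, j] = np.mean(terms)
    return D
```

Note that D is in general asymmetric (D[i, j] ≠ D[j, i]), which is exactly why it can be read as a directed, weighted adjacency matrix.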
4.
Social network
–
A social network is a social structure made up of a set of social actors, sets of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities as well as a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns and locate influential entities. Social networks and the analysis of them form an inherently interdisciplinary academic field which emerged from social psychology, sociology and statistics. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the web of group affiliations. Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology; together with other complex networks, it forms part of the nascent field of network science. The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies. The term is used to describe a structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the social contacts of that unit. This theoretical approach is, necessarily, relational. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice.
Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and beliefs, or as impersonal, formal, and instrumental social links. Major developments in the field can be seen in the 1930s by several groups in psychology and anthropology. In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups. In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski and Alfred Radcliffe-Brown. In sociology, the work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. In general, social networks are self-organizing, emergent, and complex, and these patterns become more apparent as network size increases. However, a network analysis of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative.
5.
Assortativity
–
Assortativity, or assortative mixing, is a preference for a network's nodes to attach to others that are similar in some way. Though the specific measure of similarity may vary, network theorists often examine assortativity in terms of a node's degree; the addition of this characteristic to network models more closely approximates the behaviors of many real-world networks. Correlations between nodes of similar degree are often found in the mixing patterns of many observable networks. For instance, in social networks, nodes tend to be connected with other nodes with similar degree values, and this tendency is referred to as assortative mixing, or assortativity. On the other hand, technological and biological networks typically show disassortative mixing, or disassortativity. Assortativity is often operationalized as a correlation between two nodes; however, there are several ways to capture such a correlation. The two most prominent measures are the assortativity coefficient and the neighbor connectivity, and these measures are outlined in more detail below. The assortativity coefficient is the Pearson correlation coefficient of degree between pairs of linked nodes. Positive values of r indicate a correlation between nodes of similar degree, while negative values indicate relationships between nodes of different degree. In general, r lies between −1 and 1. When r = 1, the network is said to have perfect assortative mixing patterns; when r = 0 the network is non-assortative; at r = −1 the network is completely disassortative. The assortativity coefficient is given by

r = (Σ_{jk} jk (e_{jk} − q_j q_k)) / σ_q²

The term q_k is the distribution of the remaining degree. This captures the number of edges leaving the node other than the one that connects the pair. The distribution of this term is derived from the degree distribution p_k as

q_k = ((k + 1) p_{k+1}) / (Σ_{j ≥ 1} j p_j)

Finally, e_{jk} refers to the joint probability distribution of the remaining degrees of the two vertices.
This quantity is symmetric on an undirected graph, and follows the sum rules Σ_{jk} e_{jk} = 1 and Σ_j e_{jk} = q_k. In a directed graph, in-assortativity and out-assortativity measure the tendencies of nodes to connect with nodes that have similar in- and out-degrees as themselves. Extending this further, four types of assortativity can be considered. Adopting the notation of that article, it is possible to define the four metrics r(in, in), r(in, out), r(out, in), and r(out, out). Let (α, β) be one of the in/out word pairs. Let E be the number of edges in the network, and suppose we label the edges 1, …, E. Given edge i, let j_i^α be the α-degree of the source node of the edge, and k_i^β be the β-degree of the target node of edge i.
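The (undirected) assortativity coefficient described above is available directly in networkx, which makes a quick check of the extreme cases easy. This is a small illustrative sketch: a star graph joins a single high-degree hub to degree-one leaves on every edge, which is the canonical perfectly disassortative case (r = −1).

```python
import networkx as nx

# One hub connected to five leaves: every edge joins a degree-5 node
# to a degree-1 node, so degrees at the two ends are perfectly
# anti-correlated and the assortativity coefficient is -1.
G = nx.star_graph(5)
r = nx.degree_assortativity_coefficient(G)
```

For directed graphs, the same function accepts `x` and `y` arguments selecting "in" or "out" degrees, corresponding to the four metrics r(α, β) above.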
6.
Modularity (networks)
–
Modularity is one measure of the structure of networks or graphs. It was designed to measure the strength of division of a network into modules: networks with high modularity have dense connections between the nodes within modules but sparse connections between nodes in different modules. Modularity is often used in methods for detecting community structure in networks. However, it has been shown that modularity suffers a resolution limit and is therefore unable to detect small communities. Biological networks, including animal brains, exhibit a degree of modularity. Many scientifically important problems can be represented and empirically studied using networks, and most of these networks possess a certain community structure that has substantial importance in building an understanding of the dynamics of the network. For instance, a closely connected social community will imply a faster rate of transmission of information or rumor among its members than a loosely connected community. Hence, it may be imperative to identify the communities in networks, since the communities may have properties such as node degree, clustering coefficient, and betweenness that differ from those of the average network. Modularity is one such measure, which when maximized leads to the appearance of communities in a given network. Modularity is the fraction of the edges that fall within the given groups minus the expected fraction if edges were distributed at random; the value of the modularity lies in the range [−1/2, 1). It is positive if the number of edges within groups exceeds the number expected on the basis of chance. There are different methods for calculating modularity. In the most common version of the concept, the randomization of the edges is done so as to preserve the degree of each vertex. Let us consider a graph with n nodes and m edges, such that the graph can be partitioned into two communities using a membership variable s. If a node v belongs to community 1, s_v = 1; if v belongs to community 2, s_v = −1.
Let the adjacency matrix for the network be represented by A; for simplicity, we consider an undirected network. The expected number of edges is computed using the concept of configuration models: the configuration model is a randomized realization of a particular network that preserves its degree sequence. Thus, even though the degree distribution of the graph remains intact, the configuration model results in a completely random network. In this setting the modularity of a two-community partition takes the standard form Q = (1/4m) Σ_ij (A_ij − k_i k_j / 2m) s_i s_j, where k_i is the degree of node i. It is important to note that this expression (Eq. 3) holds good for partitioning into two communities only; hierarchical partitioning is a possible approach to identify multiple communities in a network.
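The two-community spin formulation Q = (1/4m) Σ_ij (A_ij − k_i k_j / 2m) s_i s_j can be computed directly. This is a naive O(n²) sketch for illustration, assuming networkx; the function name is ours, and a practical implementation would use the community-sum form rather than the full double loop.

```python
import networkx as nx

def two_community_modularity(G, s):
    """Q = (1/4m) * sum_ij (A_ij - k_i k_j / 2m) * s_i * s_j, with s_i in {+1, -1}."""
    m = G.number_of_edges()
    deg = dict(G.degree())
    nodes = list(G.nodes)
    Q = 0.0
    for i in nodes:
        for j in nodes:
            a_ij = 1.0 if G.has_edge(i, j) else 0.0
            # Subtract the configuration-model expectation k_i * k_j / 2m.
            Q += (a_ij - deg[i] * deg[j] / (2.0 * m)) * s[i] * s[j]
    return Q / (4.0 * m)
```

For two triangles joined by a single bridge edge, splitting at the bridge gives Q = 5/14 ≈ 0.357, the same value the standard community-fraction formula produces.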
7.
Artificial neural network
–
Each neural unit is connected with many others, and links can enhance or inhibit the activation state of adjoining neural units. Each individual neural unit computes using a summation function. There may be a threshold function or limiting function on each connection and on the unit itself, such that the signal must surpass the limit before propagating to other neurons. These systems are self-learning and trained, rather than explicitly programmed. Neural networks typically consist of multiple layers or a cube design, and the signal path traverses from the first to the last layer of neural units. Backpropagation is the use of stimulation to reset weights on the "front" neural units. More modern networks are a bit more free-flowing in terms of stimulation and inhibition, with connections interacting in a more chaotic way. Dynamic neural networks are the most advanced, in that they can dynamically form new connections based on rules. The goal of the neural network is to solve problems in the same way that the human brain would, although several neural networks are more abstract. New brain research often stimulates new patterns in neural networks. One new approach is using connections which span much further and link processing layers, rather than always being localized to adjacent neurons. Neural networks are based on real numbers. An interesting facet of these systems is that they are unpredictable in their success with self-learning: after training, some become great problem solvers and others don't perform as well. In order to train them, several thousand cycles of interaction typically occur. Warren McCulloch and Walter Pitts created a model for neural networks based on mathematics. This model paved the way for neural network research to split into two distinct approaches: one approach focused on biological processes in the brain, and the other focused on the application of neural networks to artificial intelligence.
This work led to the paper by Kleene on nerve networks: "Representation of events in nerve nets and finite automata", in Automata Studies, ed. C. E. Shannon, Annals of Mathematics Studies, no. 34, Princeton University Press, Princeton, N.J., 1956. In the late 1940s, psychologist Donald Hebb created a hypothesis of learning based on the mechanism of neural plasticity that is now known as Hebbian learning; Hebbian learning is considered to be a typical unsupervised learning rule. Researchers started applying these ideas to computational models in 1948 with Turing's B-type machines. Farley and Wesley A. Clark first used computational machines, then called "calculators", to simulate a Hebbian network. Other neural network computational machines were created by Rochester, Holland, Habit, and Duda. Frank Rosenblatt created the perceptron, an algorithm for pattern recognition based on a computer learning network using simple addition and subtraction.
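Rosenblatt's "simple addition and subtraction" learning rule can be shown in a few lines: the perceptron adds or subtracts the input from its weights whenever its prediction is wrong. This is an illustrative sketch in plain Python (the function names and the AND-gate task are our choices, not from Rosenblatt's original setup), trained on a linearly separable problem where the rule is guaranteed to converge.

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    """Rosenblatt-style perceptron: add/subtract the input on each mistake."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # +1, 0, or -1
            w[0] += lr * err * x1       # weight update is pure addition/subtraction
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the AND gate: output 1 only when both inputs are 1.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the loop settles on a separating line; for non-separable problems such as XOR, a single perceptron never converges, which historically motivated multi-layer networks.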
8.
Scale-free network
–
A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. Many networks have been reported to be scale-free, although statistical analysis has refuted many of these claims. Preferential attachment and the fitness model have been proposed as mechanisms to explain conjectured power-law degree distributions in real networks. In studies of the networks of citations between scientific papers, Derek de Solla Price showed in 1965 that the number of links to papers, i.e. the number of citations they receive, had a heavy-tailed distribution following a Pareto distribution or power law; he did not, however, use the term scale-free network, which was not coined until some decades later. Amaral et al. showed that most real-world networks can be classified into two categories according to the decay of the degree distribution P(k) for large k. Notably, however, this mechanism only produces a specific subset of networks in the scale-free class, and the history of scale-free networks also includes some disagreement. On an empirical level, the scale-free nature of several networks has been called into question. On a theoretical level, refinements to the definition of scale-free have been proposed. For example, Li et al. recently offered a more precise scale-free metric. Briefly, let G be a graph with edge set E, and define s(G) = Σ_{(u,v) ∈ E} deg(u) · deg(v). This is maximized when high-degree nodes are connected to other high-degree nodes. Now define S(G) = s(G) / s_max, where s_max is the maximum value of s(H) over all graphs H with a degree distribution identical to that of G. This gives a metric between 0 and 1, where a graph G with small S is scale-rich, and a graph G with S close to 1 is scale-free; this definition captures the notion of self-similarity implied in the name scale-free. The most notable characteristic in a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average.
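The quantity s(G) from the Li et al. metric is just a sum of degree products over edges and is easy to compute; finding s_max is the hard part, since it requires maximizing over all graphs with the same degree distribution. The sketch below (our own helper name, assuming networkx) computes only s(G).

```python
import networkx as nx

def s_metric_sum(G):
    """s(G) = sum over edges (u, v) of deg(u) * deg(v)."""
    deg = dict(G.degree())
    return sum(deg[u] * deg[v] for u, v in G.edges())
```

For a star on five leaves, every edge joins the degree-5 hub to a degree-1 leaf, so s(G) = 5 edges x (5 x 1) = 25; graphs that rewire to connect high-degree nodes to each other push s(G), and hence S(G), upward.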
The highest-degree nodes are called hubs, and are thought to serve specific purposes in their networks. The scale-free property strongly correlates with the network's robustness to failure, and it turns out that the major hubs are closely followed by smaller ones. These smaller hubs, in turn, are followed by other nodes with a smaller degree. This hierarchy allows for fault-tolerant behavior: if failures occur at random and the vast majority of nodes are those with small degree, the likelihood that a hub would be affected is almost negligible. Even if a hub failure occurs, the network will not lose its connectedness.
9.
Centrality
–
In graph theory and network analysis, indicators of centrality identify the most important vertices within a graph. Applications include identifying the most influential person in a network and key infrastructure nodes in the Internet or urban networks. Centrality concepts were first developed in social network analysis, and many of the terms used to measure centrality reflect their sociological origin. They should not be confused with node influence metrics, which seek to quantify the influence of every node in the network. Centrality indices are answers to the question "What characterizes an important vertex?" The answer is given in terms of a function on the vertices of a graph. The word importance has a number of meanings, leading to many different definitions of centrality. Two categorization schemes have been proposed. Importance can be conceived in relation to a type of flow or transfer across the network; this allows centralities to be classified by the type of flow they consider important. Importance can alternately be conceived as involvement in the cohesiveness of the network; this allows centralities to be classified based on how they measure cohesiveness. Both of these approaches divide centralities into distinct categories. A further conclusion is that a centrality which is appropriate for one category will often get it wrong when applied to a different category. When centralities are categorized by their approach to cohesiveness, it becomes apparent that the majority of centralities inhabit one category: those that count the number of walks starting from a given vertex, differing only in how walks are defined and counted. Restricting consideration to this group allows for a soft characterization which places centralities on a spectrum from walks of length one to infinite walks. The observation that many centralities share this familial relationship perhaps explains the high rank correlations between these indices.
A network can be considered a description of the paths along which something flows, and this allows a characterization based on the type of flow and the type of path encoded by the centrality. A flow can be based on transfers, where each indivisible item goes from one node to another. A second case is serial duplication, in which the item is replicated as it goes to the next node, so that both the source and the target have it. The last case is parallel duplication, with the item being duplicated to several links at the same time. Likewise, the type of path can be constrained to geodesics, paths, trails, or walks. An alternate classification can be derived from how the centrality is constructed. This again splits into two classes: centralities are either radial or medial. Radial centralities count walks which start or end at the given vertex; the degree and eigenvalue centralities are examples of radial centralities, counting the number of walks of length one or length infinity.
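The radial/medial split can be made concrete with networkx on a small path graph: degree centrality is a radial measure (walks of length one from each vertex), while betweenness centrality is the classic medial measure (shortest paths passing through a vertex). This is an illustrative sketch; the choice of a 5-node path is ours.

```python
import networkx as nx

G = nx.path_graph(5)  # 0 - 1 - 2 - 3 - 4

# Radial: degree centrality counts walks of length one from each vertex,
# normalized by n - 1.
deg_c = nx.degree_centrality(G)

# Medial: betweenness counts shortest paths that pass through a vertex.
btw = nx.betweenness_centrality(G, normalized=False)
```

On the path, the middle vertex 2 has degree centrality 2/4 = 0.5 and unnormalized betweenness 4, since exactly the four pairs (0,3), (0,4), (1,3), (1,4) route their shortest paths through it.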
10.
Balance theory
–
In the psychology of motivation, balance theory is a theory of attitude change proposed by Fritz Heider. It conceptualizes the cognitive consistency motive as a drive toward psychological balance; the consistency motive is the urge to maintain one's values and beliefs over time. Heider proposed that sentiment or liking relationships are balanced if the affect valence in a system multiplies out to a positive result. In social network analysis, balance theory is the extension proposed by Frank Harary. It was the framework for the discussion at a Dartmouth College symposium in September 1975. For example, a Person who likes an Other person will be balanced by the same valence attitude on behalf of the other. Symbolically, P > O and P < O results in psychological balance. This can be extended to things as well, thus introducing triadic relationships: a person P may like object X but dislike other person O, symbolized as P > X, P > O, O > X. Balance is achieved when there are three positive links or two negatives with one positive. Two positive links and one negative, as in the example above, creates imbalance. Multiplying the signs shows that the person will perceive imbalance in this relationship, and will be motivated to correct the imbalance somehow. The Person can either decide that O isn't so bad after all, or decide that X isn't as great as originally thought. Either of these will result in psychological balance, thus resolving the dilemma and satisfying the drive. Balance theory is useful in examining how celebrity endorsement affects consumers' attitudes toward products. If a person likes a celebrity and perceives that said celebrity likes a product, said person will tend to like the product more, in order to achieve psychological balance. However, if the person already had a dislike for the product being endorsed by the celebrity, they may begin disliking the celebrity instead. Heider's balance theory can explain why holding the same negative attitudes of others promotes closeness.
Frank Harary and Dorwin Cartwright looked at Heider's triads as 3-cycles in a signed graph; the sign of a path in a graph is the product of the signs of its edges. They considered cycles in a signed graph representing a social network. A balanced signed graph has only cycles of positive sign. Harary proved that a balanced graph is polarized, that is, it decomposes into two positive subgraphs that are joined by negative edges. In the interest of realism, a weaker property was suggested by Davis: graphs with this property may decompose into more than two positive subgraphs, called clusters, and the property has been called the clusterability axiom. Balanced graphs are then recovered by assuming the parsimony axiom: the subgraph of positive edges has at most two components. Note that a triangle of three mutual enemies makes a clusterable graph but not a balanced one.
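The sign-multiplication rule above is trivially mechanizable: a cycle (or Heider triad) is balanced exactly when the product of its edge signs is positive. A minimal sketch, with signs encoded as +1/-1 and the helper names our own:

```python
def cycle_sign(signs):
    """Sign of a cycle: the product of its edge signs (+1 or -1 each)."""
    prod = 1
    for s in signs:
        prod *= s
    return prod

def triad_balanced(p_o, p_x, o_x):
    """P-O-X triad: balanced iff the product of the three signs is positive."""
    return cycle_sign([p_o, p_x, o_x]) > 0
```

Three positive links are balanced, as is one positive with two negatives; two positives and one negative are not, and a triangle of three mutual enemies (three negatives) multiplies to a negative sign, matching the remark that it is clusterable but not balanced.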
11.
Network science
–
The study of networks has emerged in diverse disciplines as a means of analyzing complex relational data. The earliest known paper in this field is the famous Seven Bridges of Königsberg, written by Leonhard Euler in 1736. Euler's mathematical description of vertices and edges was the foundation of graph theory, a branch of mathematics that studies the properties of pairwise relations in a network structure. The field of graph theory continued to develop and found applications in chemistry. In the 1930s Jacob Moreno, a psychologist in the Gestalt tradition, arrived in the United States. He developed the sociogram and presented it to the public in April 1933 at a convention of medical scholars. Moreno claimed that before the advent of sociometry no one knew what the interpersonal structure of a group precisely looked like. The sociogram was a representation of the social structure of a group of elementary school students: the boys were friends of boys and the girls were friends of girls, with the exception of one boy who said he liked a single girl. This network representation of social structure was found so intriguing that it was printed in The New York Times. The sociogram has found many applications and has grown into the field of social network analysis. Probabilistic theory in network science developed as an offshoot of graph theory with Paul Erdős and Alfréd Rényi's papers on random graphs. For social networks, the exponential random graph model or p* is a notational framework used to represent the probability space of a tie occurring in a social network. In 1998, David Krackhardt and Kathleen Carley introduced the idea of a meta-network with the PCANS Model. They suggest that all organizations are structured along three domains: Individuals, Tasks, and Resources. Their paper introduced the concept that networks occur across multiple domains, and this field has grown into another sub-discipline of network science called dynamic network analysis.
More recently, other network science efforts have focused on mathematically describing different network topologies. Duncan Watts reconciled empirical data on networks with mathematical representation, describing the small-world network; many networks, such as the Internet, appear to maintain this aspect. The U.S. military first became interested in network-centric warfare as an operational concept based on network science in 1996. As a result, the BAST issued the NRC study in 2005 titled Network Science that defined a new field of research in network science for the Army. In order to better instill the tenets of network science among its cadre of future leaders, the USMA, under the tutelage of Dr. Moxley and its faculty, has also instituted a five-course undergraduate minor in Network Science. In 2006, the U.S. Army and the UK formed the Network and Information Science International Technology Alliance; the goal of the alliance is to perform basic research in support of Network-Centric Operations across the needs of both nations. Often, networks have certain attributes that can be calculated to analyze the properties and characteristics of the network, and these network properties often define network models and can be used to analyze how certain models contrast with each other. Many of the definitions for terms used in network science can be found in the Glossary of graph theory. The density D of a network is defined as the ratio of the number of edges E to the number of possible edges in a network with N nodes, given by the binomial coefficient (N choose 2); explicitly, D = 2E / (N(N − 1)).
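The density formula D = 2E / (N(N − 1)) is a one-liner; the sketch below (our helper name, assuming networkx for the test graphs) mirrors what networkx's built-in `nx.density` computes for undirected simple graphs.

```python
import networkx as nx

def density(G):
    """D = 2E / (N(N - 1)): edges as a fraction of the C(N, 2) possible edges."""
    n = G.number_of_nodes()
    e = G.number_of_edges()
    return 2.0 * e / (n * (n - 1)) if n > 1 else 0.0
```

A complete graph has density 1, and a 4-node path (3 of the 6 possible edges) has density 0.5.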
12.
Graph drawing
–
A drawing of a graph or network diagram is a pictorial representation of the vertices and edges of a graph. This drawing should not be confused with the graph itself: very different layouts can correspond to the same graph. In the abstract, all that matters is which pairs of vertices are connected by edges. In the concrete, however, the arrangement of vertices and edges within a drawing affects its understandability, usability and fabrication cost. The problem gets worse if the graph changes over time by adding and deleting edges. Upward planar drawing uses the convention that every edge is oriented from a lower vertex to a higher vertex. Many different quality measures have been defined for graph drawings, in an attempt to find objective means of evaluating their aesthetics. In addition to guiding the choice between different layout methods for the same graph, some layout methods attempt to directly optimize these measures. The crossing number of a drawing is the number of pairs of edges that cross each other. If the graph is planar, then it is convenient to draw it without any edge intersections; that is, in this case, a graph drawing represents a graph embedding. However, nonplanar graphs frequently arise in applications, so graph drawing algorithms must generally allow for edge crossings. The area of a drawing is the size of its smallest bounding box, relative to the closest distance between any two vertices. Drawings with smaller area are generally preferable to those with larger area, because they allow the features of the drawing to be shown at greater size. The aspect ratio of the bounding box may also be important. Symmetry display is the problem of finding symmetry groups within a given graph. Some layout methods automatically lead to symmetric drawings; alternatively, some drawing methods start by finding symmetries in the input graph and using them to construct a drawing.
It is important that edges have shapes that are as simple as possible. In polyline drawings, the complexity of an edge may be measured by its number of bends, and many methods aim to provide drawings with few total bends or few bends per edge; similarly, for spline curves the complexity of an edge may be measured by the number of control points on the edge. Several commonly used quality measures concern lengths of edges: it is desirable to minimize the total length of the edges as well as the maximum length of any edge. Additionally, it may be preferable for the lengths of edges to be uniform rather than highly varied. Angular resolution is a measure of the sharpest angles in a graph drawing. If a graph has vertices with high degree then it necessarily will have small angular resolution, but the angular resolution can be bounded below by a function of the degree.
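The edge-length measures above are simple to compute once a drawing assigns coordinates to the vertices. The sketch below (our helper name; the hand-made unit-square layout is ours, though a real layout would typically come from something like `nx.spring_layout`) computes the total and maximum Euclidean edge length of a layout.

```python
import math
import networkx as nx

def edge_length_stats(G, pos):
    """Total and maximum Euclidean edge length for a layout `pos`: node -> (x, y)."""
    lengths = [math.dist(pos[u], pos[v]) for u, v in G.edges()]
    return sum(lengths), max(lengths)

G = nx.cycle_graph(4)
# A unit-square drawing of the 4-cycle: all four edges have length 1.
pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
total, longest = edge_length_stats(G, pos)
```

Comparing `total` and `longest` across candidate layouts of the same graph is one crude way to rank drawings by these quality measures; uniformity could likewise be scored from the spread of the `lengths` list.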