1.
Graph theory
–
In mathematics, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called arcs or lines). Graphs are one of the prime objects of study in discrete mathematics. Refer to the glossary of graph theory for basic definitions in graph theory; the following are some of the more basic ways of defining graphs and related mathematical structures. To avoid ambiguity, this type of graph may be described precisely as undirected; other senses of graph stem from different conceptions of the edge set. In one more generalized notion, E is a set together with a relation of incidence that associates with each edge two vertices. In another generalized notion, E is a multiset of unordered pairs of vertices; many authors call this type of object a multigraph or pseudograph. All of these variants and others are described more fully below. The vertices belonging to an edge are called the ends or end vertices of the edge, and a vertex may exist in a graph and not belong to an edge. V and E are usually taken to be finite, and many of the well-known results are not true for infinite graphs because many of the arguments fail in the infinite case. The order of a graph is |V|, its number of vertices; the size of a graph is |E|, its number of edges. The degree or valency of a vertex is the number of edges that connect to it. For an edge {x, y}, graph theorists usually use the somewhat shorter notation xy. Graphs can be used to model many types of relations and processes in physical, biological, social, and information systems, and many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the term network is sometimes used to mean a graph in which attributes are associated with the nodes and/or edges. In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. 
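The basic definitions above (order, size, degree) can be sketched in a few lines of Python; the graph data here is a made-up example, not taken from the text.

```python
# A small undirected graph: V is the vertex set, E the edge set.
# Edges are stored as frozensets so {x, y} and {y, x} are the same edge.
V = {1, 2, 3, 4}
E = {frozenset(p) for p in [(1, 2), (1, 3), (2, 3), (3, 4)]}

order = len(V)  # |V|, the number of vertices
size = len(E)   # |E|, the number of edges

def degree(v):
    """The degree (valency) of v: the number of edges that connect to it."""
    return sum(1 for e in E if v in e)
```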
For instance, the structure of a website can be represented by a directed graph in which the vertices represent web pages and directed edges represent links from one page to another. A similar approach can be taken to problems in media, travel, biology, and computer chip design. The development of algorithms to handle graphs is therefore of major interest in computer science; the transformation of graphs is often formalized and represented by graph rewrite systems. Graph-theoretic methods, in various forms, have proven particularly useful in linguistics. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the principle of compositionality.
2.
Directed graph
–
In mathematics, and more specifically in graph theory, a directed graph is a graph made up of a set of vertices connected by edges, where the edges have a direction associated with them. It differs from an ordinary or undirected graph in that the latter is defined in terms of unordered pairs of vertices. When multiple arrows between the same pair of vertices are allowed, these entities are addressed as directed multigraphs. On the other hand, the definition allows a directed graph to have loops; more specifically, directed graphs without loops are addressed as simple directed graphs. Symmetric directed graphs are directed graphs where all edges are bidirected. Simple directed graphs are directed graphs that have no loops and no multiple arrows with the same source and target nodes; as already introduced, in the case of multiple arrows the entity is usually addressed as a directed multigraph. Some authors describe digraphs with loops as loop-digraphs. Complete directed graphs are directed graphs where each pair of vertices is joined by a symmetric pair of directed arrows; it follows that a complete digraph is symmetric. Oriented graphs are directed graphs having no bidirected edges; it follows that a graph is an oriented graph if and only if it has no 2-cycles. Tournaments are oriented graphs obtained by choosing a direction for each edge in an undirected complete graph. Directed acyclic graphs (DAGs) are directed graphs with no directed cycles; multitrees are DAGs in which no two directed paths from a single starting vertex meet back at the same ending vertex. Oriented trees or polytrees are DAGs formed by orienting the edges of undirected acyclic graphs; rooted trees are oriented trees in which all edges of the underlying undirected tree are directed away from the root. Rooted directed graphs are digraphs in which a vertex has been distinguished as the root. Control flow graphs are rooted digraphs used in computer science as a representation of the paths that might be traversed through a program during its execution. 
Signal-flow graphs are directed graphs in which nodes represent system variables and branches represent functional connections between pairs of nodes; flow graphs are digraphs associated with a set of linear algebraic or differential equations. State diagrams are directed multigraphs that represent finite state machines. Representations of a quiver label its vertices with vector spaces and its edges compatibly with linear transformations between them, and transform via natural transformations. If a path leads from x to y, then y is said to be a successor of x and reachable from x; the arrow (y, x) is called the inverted arrow of (x, y). The adjacency matrix of a directed graph is unique up to identical permutation of rows and columns. Another matrix representation for a directed graph is its incidence matrix. For a vertex, the number of head ends adjacent to it is called the indegree of the vertex, and the number of tail ends adjacent to it is its outdegree; the indegree of v is denoted deg−(v) and its outdegree is denoted deg+(v).
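The indegree/outdegree definitions just given can be sketched directly; the arc list below is an invented example.

```python
# A directed graph as a list of arcs (tail, head).
arcs = [(1, 2), (1, 3), (2, 3), (3, 1)]

def indegree(v):
    """Number of head ends adjacent to v, written deg−(v)."""
    return sum(1 for (_, head) in arcs if head == v)

def outdegree(v):
    """Number of tail ends adjacent to v, written deg+(v)."""
    return sum(1 for (tail, _) in arcs if tail == v)
```

Summed over all vertices, the indegrees and the outdegrees each count every arc exactly once, so both sums equal the number of arcs.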
3.
Operations research
–
Operations research, or operational research in British usage, is a discipline that deals with the application of advanced analytical methods to help make better decisions. The term operational analysis is used in the British military as an intrinsic part of capability development and management. In particular, operational analysis forms part of the Combined Operational Effectiveness and Investment Appraisals, and the discipline is often considered to be a sub-field of applied mathematics. The terms management science and decision science are sometimes used as synonyms. Operations research is concerned with determining the maximum or minimum of some real-world objective. Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries; nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, OR also has ties to computer science. In the decades after the two world wars, the techniques were more widely applied to problems in business and industry. Early work in operational research was carried out by individuals such as Charles Babbage. Percy Bridgman brought operational research to bear on problems in physics in the 1920s. Modern operational research originated at the Bawdsey Research Station in the UK in 1937 and was the result of an initiative of the station's superintendent, A. P. Rowe. Rowe conceived the idea as a means to analyse and improve the working of the UK's early warning radar system. Initially, he analysed the operating of the radar equipment and its communication networks, expanding later to include the operating personnel's behaviour. This revealed unappreciated limitations of the CH network and allowed action to be taken. Scientists in the United Kingdom, including Patrick Blackett, Cecil Gordon, and Solly Zuckerman, applied these methods during the Second World War; other names for the field included operational analysis and quantitative management. 
During the Second World War close to 1,000 men and women in Britain were engaged in operational research; about 200 operational research scientists worked for the British Army. Patrick Blackett worked for several different organizations during the war. After first working with RAF Coastal Command, Blackett moved from the RAE to the Navy in 1941. Blackett's team at Coastal Command's Operational Research Section included two future Nobel prize winners and many other people who went on to be pre-eminent in their fields. They undertook a number of analyses that aided the war effort. Convoys travel at the speed of the slowest member, so small convoys can travel faster; it was also argued that small convoys would be harder for German U-boats to detect.
4.
Function (mathematics)
–
In mathematics, a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x². The output of a function f corresponding to an input x is denoted by f(x). In this example, if the input is −3, then the output is 9, and we may write f(−3) = 9; likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. The input variable is sometimes referred to as the argument of the function. Functions of various kinds are the central objects of investigation in most fields of modern mathematics. There are many ways to describe or represent a function: some functions may be defined by a formula or algorithm that tells how to compute the output for a given input; others are given by a picture, called the graph of the function. In science, functions are sometimes defined by a table that gives the outputs for selected inputs. A function could also be described implicitly, for example as the inverse to another function or as a solution of a differential equation. Sometimes the codomain is called the function's range, but more commonly the word range is used to mean, instead, specifically the set of outputs. For example, we could define a function using the rule f(x) = x² by saying that the domain and codomain are the real numbers. The image of this function is the set of non-negative real numbers. In analogy with arithmetic, it is possible to define addition, subtraction, and multiplication of functions. Another important operation defined on functions is function composition, where the output from one function becomes the input to another function. Linking each shape to its color is a function from X to Y: each shape is linked to a color, there is no shape that lacks a color, and no shape has more than one color. This function will be referred to as the color-of-the-shape function. The input to a function is called the argument and the output is called the value. 
The set of all permitted inputs to a function is called the domain of the function. Thus, the domain of the color-of-the-shape function is the set of the four shapes. The concept of a function does not require that every possible output is the value of some argument. A second example of a function is the following: the domain is chosen to be the set of natural numbers, and the codomain is the set of integers. The function associates to any number n the number 4 − n. For example, to 1 it associates 3 and to 10 it associates −6. A third example of a function has the set of polygons as domain and the set of natural numbers as codomain.
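The two worked examples above (the squaring function and n ↦ 4 − n) can be written directly, here in Python for concreteness:

```python
def f(x):
    """The function relating each real number x to its square."""
    return x * x

def g(n):
    """Domain: natural numbers; codomain: integers; n is sent to 4 - n."""
    return 4 - n

# Two different inputs may share one output (f(-3) == f(3) == 9),
# but each input is related to exactly one output.
values = [f(-3), f(3), g(1), g(10)]
```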
5.
Flow network
–
In graph theory, a flow network is a directed graph where each edge has a capacity and each edge receives a flow. The amount of flow on an edge cannot exceed the capacity of the edge. Often in operations research, a directed graph is called a network, the vertices are called nodes, and the edges are called arcs. A network is a graph G = (V, E), together with a capacity function c : V × V → ℝ∞. Without loss of generality, it is assumed that if (u, v) ∈ E then (v, u) is also a member of E, since if (v, u) ∉ E then (v, u) may be added to E by setting c(v, u) = 0. If two nodes in G are distinguished, a source s and a sink t, then (G, c, s, t) is called a flow network. There are various notions of a flow function that can be defined in a flow graph. The simplest example of a flow function is known as a pseudo-flow. Capacity constraint: an arc's flow cannot exceed its capacity, that is, f(u, v) ≤ c(u, v). Given a pseudo-flow f in a flow network, it is often useful to consider the net flow entering a given node v, that is, the sum of the flows entering v. The excess function xf : V → ℝ is defined by xf(u) = ∑_{v ∈ V} f(v, u). A node u is said to be active if xf(u) > 0, deficient if xf(u) < 0, or conserving if xf(u) = 0. A pre-flow satisfies xf(v) ≥ 0 for all v ∈ V \ {s}, and a flow additionally satisfies the conservation condition xf(v) = 0 for all v ∈ V \ {s, t}. The value of a flow f, denoted | f |, is the net flow into the sink t of the flow network. In the context of flow analysis, there is only an interest in considering how units are transferred between nodes in a holistic sense; for this reason, the capacity is taken to be a function c : V × V → ℝ∞ defined on all pairs of nodes. These simplifications of the model aren't always immediately intuitive, but they are convenient when it comes time to analyse flows. The capacity constraint simply ensures that a flow on any one arc within the network cannot exceed the capacity of that arc. The residual capacity of an arc with respect to a pseudo-flow f, denoted cf, is the difference between the arc's capacity and its flow: cf(u, v) = c(u, v) − f(u, v). From this we can construct a residual network, denoted Gf. 
More formally, given a flow network G, the residual network Gf has the node set V and arc set Ef = {(u, v) : cf(u, v) > 0}. This concept is used in the Ford–Fulkerson algorithm, which computes the maximum flow in a flow network. Note that there can be a path from u to v in the residual network even when there is none in the original network: since flows in opposite directions cancel out, decreasing the flow from v to u is the same as increasing the flow from u to v.
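A minimal sketch of residual capacities and the residual arc set Ef, using an invented three-node network (the names s, a, t and all numbers are illustrative):

```python
# Capacities and a pseudo-flow, keyed by arc (u, v); missing keys mean 0.
c = {('s', 'a'): 4, ('a', 't'): 3}
f = {('s', 'a'): 2, ('a', 't'): 2}
nodes = ['s', 'a', 't']

def cf(u, v):
    """Residual capacity: unused capacity on (u, v) plus flow on (v, u).
    The f.get((v, u), 0) term is the cancellation described above: flow
    already sent from v to u can be undone by pushing from u to v."""
    return c.get((u, v), 0) - f.get((u, v), 0) + f.get((v, u), 0)

# The residual network keeps exactly the arcs with positive residual capacity.
Ef = {(u, v) for u in nodes for v in nodes if u != v and cf(u, v) > 0}
```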
6.
Maximum flow problem
–
In optimization theory, maximum flow problems involve finding a feasible flow through a single-source, single-sink flow network that is maximum. The maximum flow problem can be seen as a special case of more complex network flow problems. The maximum value of an s-t flow is equal to the minimum capacity of an s-t cut in the network. The maximum flow problem was first formulated in 1954 by T. E. Harris; in 1955, Lester R. Ford, Jr. and Delbert R. Fulkerson created the first known algorithm, the Ford–Fulkerson algorithm. The electrical flow algorithm of Christiano, Kelner, Madry, and Spielman finds an approximately optimal maximum flow. Let N = (V, E) be a network with s, t ∈ V being the source and the sink of N respectively. The capacity of an edge is a mapping c : E → ℝ⁺; it represents the maximum amount of flow that can pass through an edge. The value of flow is defined by | f | = ∑_{v : (s, v) ∈ E} f(s, v), and it represents the amount of flow passing from the source to the sink. The maximum flow problem is to maximize | f |, that is, to route as much flow as possible from s to t. We can define the residual graph, which provides a systematic way to search for forward-backward operations in order to find the maximum flow. Given a flow network G and a flow f on G, the node set of Gf is the same as that of G. Each edge e = (u, v) of Gf has a capacity of c(e) − f(e), and each reverse edge e′ = (v, u) of Gf has a capacity of f(e). The following table lists algorithms for solving the maximum flow problem. The integral flow theorem states that if each edge in a network has integral capacity, then there exists an integral maximum flow. Given a network N = (V, E) with a set of sources S and a set of sinks T instead of a single source and sink, the problem can be reduced to the single-source, single-sink case. Given a directed acyclic graph G = (V, E), we are to find the minimum number of vertex-disjoint paths to cover each vertex in V; we can construct a bipartite graph G′ = (Vout ∪ Vin, E′) from G. Given a bipartite graph G = (X ∪ Y, E), we are to find a maximum cardinality matching in G, that is, a matching that contains the largest possible number of edges. This problem can be transformed into a maximum flow problem by constructing a network N = (X ∪ Y ∪ {s, t}, E′). 
With (s, x) ∈ E′ for each x ∈ X and (y, t) ∈ E′ for each y ∈ Y, the value of the maximum flow in N is equal to the size of the maximum matching in G. In other words, the amount of flow passing through a vertex cannot exceed its capacity.
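The Ford–Fulkerson scheme mentioned above can be sketched compactly; this version finds augmenting paths with breadth-first search (the Edmonds–Karp variant), and the small network in the test is a made-up example.

```python
from collections import defaultdict, deque

def max_flow(edges, s, t):
    """edges: {(u, v): capacity}. Returns the value of a maximum s-t flow."""
    res = defaultdict(int)   # residual capacities; reverse arcs start at 0
    adj = defaultdict(set)
    for (u, v), cap in edges.items():
        res[(u, v)] += cap
        adj[u].add(v)
        adj[v].add(u)        # reverse arc, so flow can be cancelled later
    total = 0
    while True:
        # Breadth-first search for an augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total     # no augmenting path left: the flow is maximum
        # Walk back from t to s, find the bottleneck, then augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[arc] for arc in path)
        for (u, v) in path:
            res[(u, v)] -= push
            res[(v, u)] += push   # residual bookkeeping for cancellation
        total += push
```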
7.
Pipe network analysis
–
In fluid dynamics, pipe network analysis is the analysis of fluid flow through a hydraulics network containing several or many interconnected branches. The aim is to determine the flow rates and pressure drops in the individual sections of the network. This is a common problem in hydraulic design. To direct water to users, municipal water supplies often route it through a water supply network. A major part of this network will consist of interconnected pipes, and this network creates a special class of problems in hydraulic design, with solution methods typically referred to as pipe network analysis. Water utilities generally make use of specialized software to solve these problems. However, many problems can also be addressed with simpler methods, like a spreadsheet equipped with a solver. Once the friction factors of the pipes are obtained, we can consider how to calculate the flow rates and head losses on the network; this is equivalent mathematically to the statement that on any closed loop in the network, the head loss around the loop must vanish. If there are sufficient known flow rates, the system of equations given above is closed. The classical approach for solving these networks is to use the Hardy Cross method. In this formulation, first you go through and create guess values for the flows in the network. These initial guesses must satisfy the Kirchhoff laws; that is, if Q7 enters a junction and Q6 and Q4 leave the same junction, then the initial guess must satisfy Q7 = Q6 + Q4. After the initial guess is made, a loop is considered so that we can evaluate our second condition: given a starting node, we work our way around the loop in a clockwise fashion, as illustrated by Loop 1. To satisfy Kirchhoff's second law, we should end up with zero net head loss about each loop at the steady-state solution. 
ΔQ = − (∑ head loss_C − ∑ head loss_CC) / (n · ∑ head loss / Q), where the clockwise specifier (C) means only the flows that are moving clockwise in our loop, while the counter-clockwise specifier (CC) means only the flows that are moving counter-clockwise. This adjustment doesn't solve the problem on its own, since most networks have several loops. It is okay to use this adjustment, however, because the flow changes won't alter condition 1, and therefore the other loops still satisfy condition 1. However, we should use the results from the first loop before we progress to other loops. An adaptation of this method is needed to account for water reservoirs attached to the network, which are joined in pairs by the use of pseudo-loops in the Hardy Cross scheme; this is discussed further in the Hardy Cross method article. The modern method is simply to create a set of conditions from the above Kirchhoff laws, then use a root-finding algorithm to find Q values that satisfy all the equations.
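One Hardy Cross correction step for a single loop can be sketched as follows; the resistance coefficients, the exponent n = 2, and the flow guesses are all invented example data.

```python
n = 2.0  # head loss exponent: head loss = r * Q**n for flow Q in a pipe

# One loop: (r, sign) per pipe, sign +1 if the assumed flow direction is
# clockwise around the loop, -1 if counter-clockwise.
loop = [(1.0, +1), (2.0, +1), (1.5, -1), (0.5, -1)]
Q = [3.0, 1.0, 2.0, 2.0]  # current guesses for each pipe's flow (all > 0)

def correction(loop, Q):
    """One Hardy Cross adjustment:
    dQ = -(sum of clockwise head losses - counter-clockwise head losses)
         / (n * sum(head loss / Q)), as in the formula above."""
    num = sum(s * r * q ** n for (r, s), q in zip(loop, Q))
    den = sum(n * r * q ** (n - 1) for (r, _), q in zip(loop, Q))
    return -num / den
```

The correction is then added to the clockwise flows and subtracted from the counter-clockwise ones, after which the next loop is processed with the updated values.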
8.
Electric power distribution
–
Electric power distribution is the final stage in the delivery of electric power; it carries electricity from the transmission system to individual consumers. Distribution substations connect to the transmission system and lower the transmission voltage to medium voltage ranging between 2 kV and 35 kV with the use of transformers. Primary distribution lines carry this medium voltage power to distribution transformers located near the customer's premises. Distribution transformers again lower the voltage to the utilization voltage of household appliances and typically feed several customers through secondary distribution lines at this voltage. Commercial and residential customers are connected to the secondary distribution lines through service drops. Customers demanding a larger amount of power may be connected directly to the primary distribution level or the subtransmission level. Electric power distribution only became necessary in the 1880s when electricity started being generated at power stations; before that, electricity was usually generated where it was used. Early electric lighting systems were supplanting gas lighting systems, with arc lighting taking over large area and street lighting. The Edison DC system needed thick copper conductor cables, and the generating plants needed to be within about 1.5 miles of the farthest customer to avoid excessively large and expensive conductors. Alternating current came to dominate, with much cheaper transmission costs and the economies of scale of having large generating plants supply whole cities and regions. Edison's propaganda campaign was short lived, with his company switching over to AC in 1892. In the first half of the 20th century, the electric power industry was vertically integrated, meaning that one company did generation, transmission, distribution, metering and billing. 
Starting in the 1970s and 1980s, nations began the process of deregulation and privatisation; the distribution system would remain regulated, but generation, retail, and sometimes transmission systems were transformed into competitive markets. Electric power begins at a generating station, where the potential difference can be as high as 13,800 volts. Power is normally transmitted as alternating current; however, high-voltage DC can be advantageous for isolating alternating-current systems or controlling the quantity of electricity transmitted. For example, Hydro-Québec has a direct-current line which goes from the James Bay region to Boston. From the generating station the power goes to the generating station's switchyard, where a step-up transformer increases the voltage to a level suitable for transmission. Once in the transmission system, electricity from each generating station is combined with electricity produced elsewhere. Electricity is consumed as soon as it is produced; it is transmitted at a very high speed, close to the speed of light. Substation transformers step down transmission voltages, 35 kV or more, down to primary distribution voltages; these are medium voltage circuits, usually 600–35,000 V. From the transformer, power goes to the busbar, which can split the power off in multiple directions. The bus distributes power to distribution lines, which fan out to customers. Urban distribution is mainly underground, sometimes in common utility ducts.
9.
Kirchhoff's circuit laws
–
See also Kirchhoff's laws for other laws named after Gustav Kirchhoff. Kirchhoff's circuit laws are two equalities that deal with the current and potential difference in the lumped element model of electrical circuits. They were first described in 1845 by German physicist Gustav Kirchhoff. This generalized the work of Georg Ohm and preceded the work of Maxwell. Widely used in electrical engineering, they are also called Kirchhoff's rules or simply Kirchhoff's laws. Both of Kirchhoff's laws can be understood as corollaries of the Maxwell equations in the low-frequency limit; they are accurate for DC circuits, and for AC circuits at frequencies where the wavelengths of electromagnetic radiation are very large compared to the circuits. The current law is also called Kirchhoff's first law, Kirchhoff's point rule, or Kirchhoff's junction rule: at any node, the algebraic sum of the currents is zero, ∑_{k=1}^{n} I_k = 0, where n is the number of branches at the node. The formula is also valid for complex currents: ∑_{k=1}^{n} Ĩ_k = 0. The law is based on the conservation of charge, whereby the charge is the product of the current and the time. A matrix version of Kirchhoff's current law is the basis of most circuit simulation software; Kirchhoff's current law combined with Ohm's law is used in nodal analysis. KCL is applicable to any lumped network irrespective of the nature of the network: whether unilateral or bilateral, active or passive. The voltage law is also called Kirchhoff's second law, Kirchhoff's loop rule, and Kirchhoff's second rule. Similarly to KCL, it can be stated as ∑_{k=1}^{n} V_k = 0, where n is the total number of voltages measured. The voltages may also be complex: ∑_{k=1}^{n} Ṽ_k = 0. This law is based on the conservation of energy, whereby voltage is defined as the energy per unit charge. The total amount of energy gained per unit charge must equal the amount of energy lost per unit charge; as a consequence, in the low-frequency limit, the voltage drop around any loop is zero. 
This includes imaginary loops arranged arbitrarily in space, not limited to the loops delineated by the circuit elements; in the low-frequency limit, this is a corollary of Faraday's law of induction. This has practical application in situations involving static electricity. KCL and KVL both depend on the lumped element model being applicable to the circuit in question; when the model is not applicable, the laws do not apply. KCL, in its usual form, is dependent on the assumption that current flows only in conductors, and that whenever current flows into one end of a conductor it immediately flows out the other end. This is not a safe assumption for high-frequency AC circuits, where the lumped element model is no longer applicable. It is often possible to improve the applicability of KCL by considering parasitic capacitances distributed along the conductors. Significant violations of KCL can occur even at 60 Hz, which is not a very high frequency. In other words, KCL is valid only if the total electric charge, Q, remains constant in the region being considered. In practical cases this is always so when KCL is applied at a geometric point; when investigating a finite region, however, it is possible that the charge density within the region may change.
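Both laws can be checked numerically on the simplest possible example, a single loop with made-up values (a 10 V source driving two series resistors):

```python
V_src, R1, R2 = 10.0, 2.0, 3.0   # invented example values
I = V_src / (R1 + R2)            # the single mesh current
V1, V2 = I * R1, I * R2          # Ohm's law gives the two voltage drops

# KVL: the directed sum of voltages around the loop is zero.
kvl_sum = V_src - V1 - V2
# KCL: at the node between R1 and R2, current in equals current out.
kcl_sum = I - I
```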
10.
Ecology
–
Ecology is the scientific analysis and study of interactions among organisms and their environment. It is an interdisciplinary field that includes biology and geography. Ecology includes the study of interactions that organisms have with each other, with other organisms, and with abiotic components of their environment. Ecosystems are composed of dynamically interacting parts including organisms and the communities they make up. Ecosystem processes, such as primary production, pedogenesis, and nutrient cycling, regulate the flux of energy and matter through an environment. These processes are sustained by organisms with specific life history traits. Biodiversity, which refers to the varieties of species, genes, and ecosystems, enhances certain ecosystem services. Ecology is not synonymous with environment, environmentalism, or natural history; it is closely related to evolutionary biology, genetics, and ethology. An important focus for ecologists is to improve the understanding of how biodiversity affects ecological function. Ecology is a human science as well. For example, the Circles of Sustainability approach treats ecology as more than the environment "out there"; it is not treated as separate from humans. Organisms and resources compose ecosystems which, in turn, maintain biophysical feedback mechanisms that moderate processes acting on living and non-living components of the planet. The word ecology was coined in 1866 by the German scientist Ernst Haeckel. Ecological thought is derivative of established currents in philosophy, particularly from ethics and politics; ancient Greek philosophers such as Hippocrates and Aristotle laid the foundations of ecology in their studies on natural history. Modern ecology became a more rigorous science in the late 19th century. Evolutionary concepts relating to adaptation and natural selection became the cornerstones of modern ecological theory. The scope of ecology contains a wide array of interacting levels of organization spanning micro-level to planetary scale phenomena. 
Ecosystems, for example, contain abiotic resources and interacting life forms; an ecosystem's area can vary greatly, from tiny to vast. A single tree is of little consequence to the classification of a forest ecosystem, but critically relevant to the organisms living in and on it. Several generations of an aphid population can exist over the lifespan of a single leaf. Each of those aphids, in turn, supports diverse bacterial communities. Biodiversity describes the diversity of life from genes to ecosystems and spans every level of biological organization. The term has several interpretations, and there are many ways to index, measure, and characterize it.
11.
Food web
–
A food web is the natural interconnection of food chains and a graphical representation of what-eats-what in an ecological community. Another name for food web is consumer-resource system. Ecologists can broadly lump all life forms into one of two categories called trophic levels: 1) the autotrophs, and 2) the heterotrophs. To maintain their bodies, grow, develop, and reproduce, autotrophs produce organic matter from inorganic substances. These chemical reactions require energy, which mainly comes from the sun, largely by photosynthesis, although a very small amount comes from hydrothermal vents and hot springs. The linkages in a food web illustrate the feeding pathways, such as where heterotrophs obtain organic matter by feeding on autotrophs; the food web is a simplified illustration of the various methods of feeding that link an ecosystem into a unified system of exchange. There are different kinds of feeding relations that can be divided into herbivory, carnivory, and scavenging. Some of the organic matter eaten by heterotrophs, such as sugars, provides energy. Autotrophs and heterotrophs come in all sizes, from microscopic to many tonnes: from cyanobacteria to giant redwoods, and from viruses and bdellovibrio to blue whales. Charles Elton pioneered the concepts of food cycles and food chains. Elton organized species into functional groups, which was the basis for Raymond Lindeman's classic paper on trophic dynamics. Lindeman emphasized the important role of decomposer organisms in a trophic system of classification. Even earlier, in 1768, John Bruckner described nature as "one continued web of life". Ecologists use these simplifications in quantitative models of trophic or consumer-resource system dynamics. Using these models they can measure and test for generalized patterns in the structure of food web networks. Ecologists have identified non-random properties in the structure of food webs. Published examples that are used in analysis are of variable quality, with omissions. 
However, the number of studies on community webs is on the rise. Scaling laws, for example, predict a relationship between the topology of food web predator-prey linkages and levels of species richness. Links in food webs map the feeding connections in an ecological community. Food cycle is an older term that is synonymous with food web. Ecologists can broadly group all life forms into one of two trophic layers, the autotrophs and the heterotrophs. Autotrophs produce more energy than they consume, either chemically without the sun's energy or by capturing the sun's energy in photosynthesis.
12.
Information theory
–
Information theory studies the quantification, storage, and communication of information. A key measure in information theory is entropy: entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a coin flip provides less information than specifying the outcome from a roll of a die. Some other important measures in information theory are mutual information, channel capacity, and error exponents. Applications of fundamental topics of information theory include lossless data compression, lossy data compression, and channel coding. The field is at the intersection of mathematics, statistics, computer science, physics, and neurobiology. Information theory studies the transmission, processing, utilization, and extraction of information. Abstractly, information can be thought of as the resolution of uncertainty. Information theory is a broad and deep mathematical theory, with equally broad and deep applications, amongst which is the vital field of coding theory. These codes can be subdivided into data compression and error-correction techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible. A third class of information theory codes are cryptographic algorithms; concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis. See the article ban for a historical application. Information theory is also used in information retrieval, intelligence gathering, gambling, statistics, and even in musical composition. Prior to Shannon's paper, limited information-theoretic ideas had been developed at Bell Labs. One early unit of information was the decimal digit, much later renamed the hartley in Ralph Hartley's honour as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German second world war Enigma ciphers. 
Much of the mathematics behind information theory with events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann. Information theory is based on probability theory and statistics. Information theory often concerns itself with measures of information of the distributions associated with random variables. Important quantities of information are entropy, a measure of the information in a single random variable, and mutual information, a measure of the information in common between two random variables. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit, based on the binary logarithm; other units include the nat, which is based on the natural logarithm, and the hartley, which is based on the common logarithm. In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0. This is justified because lim_{p→0+} p log p = 0 for any logarithmic base.
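The entropy measure, the choice of unit by logarithmic base, and the p log p = 0 convention can be sketched together; the distributions at the end are the coin and die examples from the text.

```python
from math import log

def entropy(probs, base=2):
    """Shannon entropy of a distribution; base 2 gives bits, e gives nats,
    10 gives hartleys. Zero-probability terms are skipped, implementing the
    convention that p * log(p) = 0 when p = 0."""
    return -sum(p * log(p, base) for p in probs if p > 0)

coin = entropy([0.5, 0.5])     # fair coin: 1 bit of uncertainty
die = entropy([1 / 6.0] * 6)   # fair die: log2(6), about 2.585 bits
```

The die carries more entropy than the coin, matching the comparison above: specifying a die roll conveys more information than identifying a coin flip.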
13.
Thermodynamics
–
Thermodynamics is a branch of science concerned with heat and temperature and their relation to energy and work. The behavior of these quantities is governed by the four laws of thermodynamics, which are explained in terms of microscopic constituents by statistical mechanics. Thermodynamics applies to a variety of topics in science and engineering, especially physical chemistry and chemical engineering. The initial application of thermodynamics to mechanical heat engines was extended early on to the study of chemical compounds; chemical thermodynamics studies the role of entropy in chemical reactions and has provided the bulk of the field's expansion and knowledge. Other formulations of thermodynamics emerged in the following decades: statistical thermodynamics, or statistical mechanics, concerns itself with statistical predictions of the collective motion of particles from their microscopic behavior, and in 1909 Constantin Carathéodory presented a mathematical approach to the field in his axiomatic formulation of thermodynamics. A description of any thermodynamic system employs the four laws of thermodynamics, which form an axiomatic basis; the first law specifies that energy can be exchanged between physical systems as heat and work. In thermodynamics, interactions between large ensembles of objects are studied and categorized; central to this are the concepts of the thermodynamic system and its surroundings. A system is composed of particles whose average motions define its properties, and these properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.
With these tools, thermodynamics can be used to describe how systems respond to changes in their environment. This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. This article is focused mainly on classical thermodynamics, which primarily studies systems in thermodynamic equilibrium; non-equilibrium thermodynamics is often treated as an extension of the classical treatment, but statistical mechanics has brought many advances to that field. Otto von Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that nature abhors a vacuum. Shortly after Guericke, the English physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with the English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume; in time, Boyle's Law was formulated, which states that pressure and volume are inversely proportional. Later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine; he did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time. Black and Watt performed experiments together, but it was Watt who conceived the idea of the separate condenser, which resulted in a large increase in steam engine efficiency. Drawing on all the previous work led Sadi Carnot, the father of thermodynamics, to publish Reflections on the Motive Power of Fire.
14.
Matching (graph theory)
–
In the mathematical discipline of graph theory, a matching or independent edge set in a graph is a set of edges without common vertices. It may also be an entire graph consisting of edges without common vertices. Bipartite matching is a special case of a network flow problem. Given a graph G = (V, E), a matching M in G is a set of pairwise non-adjacent edges; a vertex is matched if it is an endpoint of one of the edges in the matching. A matching M of a graph G is maximal if every edge in G has a non-empty intersection with at least one edge in M; the following figure shows examples of maximal matchings in three graphs. A maximum matching is a matching that contains the largest possible number of edges; there may be many maximum matchings. The matching number ν of a graph G is the size of a maximum matching. Note that every maximum matching is maximal, but not every maximal matching is a maximum matching. The following figure shows examples of maximum matchings in the same three graphs. A perfect matching is a matching which matches all vertices of the graph; that is, every vertex of the graph is incident to exactly one edge of the matching. The figure above shows an example of a perfect matching. Every perfect matching is maximum and hence maximal. In some literature, the term complete matching is used. In the above figure, only one of the parts shows a perfect matching. A perfect matching is also an edge cover; thus ν ≤ ρ, that is, the size of a maximum matching is no larger than the size of a minimum edge cover. A near-perfect matching is one in which exactly one vertex is unmatched; this can only occur when the graph has an odd number of vertices. In the above figure, one part shows a near-perfect matching. If, for every vertex in a graph, there is a near-perfect matching that omits only that vertex, the graph is called factor-critical. Given a matching M, an alternating path is a path that begins with an unmatched vertex and whose edges belong alternately to the matching and not to the matching. An augmenting path is an alternating path that starts from and ends on free (unmatched) vertices.
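The distinction between maximal and maximum matchings can be made concrete with a small Python sketch (brute force, suitable only for tiny graphs; function names are illustrative):

```python
from itertools import combinations

def is_matching(edges, matching):
    """A matching is a subset of the edges with no repeated endpoint."""
    endpoints = [v for e in matching for v in e]
    return set(matching) <= set(edges) and len(endpoints) == len(set(endpoints))

def is_maximal(edges, matching):
    """Maximal: no edge of the graph can be added without sharing a vertex."""
    matched = {v for e in matching for v in e}
    return all(u in matched or v in matched for u, v in edges)

def maximum_matching(edges):
    """Maximum: largest possible matching, found here by exhaustive search."""
    for k in range(len(edges), 0, -1):
        for cand in combinations(edges, k):
            if is_matching(edges, cand):
                return list(cand)
    return []

# On the path a-b-c-d, {(b, c)} is maximal (nothing can be added)
# but not maximum: {(a, b), (c, d)} has two edges.
path = [('a', 'b'), ('b', 'c'), ('c', 'd')]
```

On this path graph the matching number ν is 2, while the maximal matching {(b, c)} has size 1, illustrating that maximal does not imply maximum.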
15.
Max-flow min-cut theorem
–
The max-flow min-cut theorem is a special case of the duality theorem for linear programs and can be used to derive Menger's theorem and the König–Egerváry theorem. Let N = (V, E) be a network with s, t ∈ V being the source and the sink. The capacity of an edge is a mapping c : E → R+, denoted by c_uv or c(u, v); it represents the maximum amount of flow that can pass through an edge. A flow is a mapping f : E → R+, denoted by f_uv or f(u, v), subject to two constraints: 1. Capacity constraint: for every (u, v) ∈ E, f_uv ≤ c_uv. 2. Conservation of flows: for every v ∈ V ∖ {s, t}, ∑_u f_uv = ∑_u f_vu. The value of a flow is defined by |f| = ∑_{v ∈ V} f_sv, where s is the source of N; it represents the amount of flow passing from the source to the sink. The max-flow problem is to maximize |f|, that is, to route as much flow as possible from s to t. An s-t cut C = (S, T) is a partition of V such that s ∈ S and t ∈ T. The cut-set of C is the set of edges {(u, v) ∈ E : u ∈ S, v ∈ T}; note that if the edges in the cut-set of C are removed, |f| = 0. The min-cut problem is to minimize c(S, T), that is, to determine S and T such that the capacity of the s-t cut is minimal. The theorem states that the maximum value of an s-t flow is equal to the minimum capacity over all s-t cuts. The max-flow problem and min-cut problem can be formulated as two primal-dual linear programs. Note that for the given s-t cut C = (S, T), p_i = 1 if i ∈ S and 0 otherwise; therefore p_s should be 1 and p_t should be zero. The figure on the right is a network having a value of flow of 7; the vertex in white and the vertices in grey form the subsets S and T of an s-t cut. In a network with vertex capacities, the amount of flow passing through a vertex cannot exceed its capacity. Define an s-t cut to be a set of vertices and edges such that any path from s to t contains a member of the cut; in this case, the capacity of the cut is the sum of the capacities of each edge and vertex in it. In this new definition, the generalized max-flow min-cut theorem states that the maximum value of an s-t flow is equal to the minimum capacity of an s-t cut in the new sense.
In the undirected edge-disjoint paths problem, we are given an undirected graph G = (V, E); Menger's theorem states that the maximum number of edge-disjoint s-t paths in an undirected graph is equal to the minimum number of edges in an s-t cut-set. In the project selection problem, there are n projects and m machines; each project p_i yields revenue r(p_i) and each machine q_j costs c(q_j) to purchase. Each project requires a number of machines, and each machine can be shared by several projects. The problem is to determine which projects and machines should be selected and purchased, respectively, so that the profit is maximized. An edge with infinite capacity is added if project p_i requires machine q_j; the s-t cut-set represents the projects and machines in P and Q respectively.
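The equality stated by the theorem can be checked computationally with a minimal Edmonds–Karp sketch (a BFS-based specialization of the Ford–Fulkerson method; the function and variable names are illustrative):

```python
from collections import defaultdict, deque

def max_flow(capacity, s, t):
    """Edmonds-Karp: repeatedly augment along a shortest residual s-t path.
    capacity: dict of dicts, capacity[u][v] = c(u, v)."""
    # build residual adjacency, including zero-capacity reverse edges
    cap = defaultdict(dict)
    for u in capacity:
        for v, c in capacity[u].items():
            cap[u][v] = cap[u].get(v, 0) + c
            cap[v].setdefault(u, 0)
    flow = defaultdict(int)

    def shortest_augmenting_path():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] - flow[(u, v)] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None  # no augmenting path: flow is maximum

    total = 0
    while (parent := shortest_augmenting_path()) is not None:
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] - flow[(u, v)] for u, v in path)
        for u, v in path:
            flow[(u, v)] += bottleneck
            flow[(v, u)] -= bottleneck  # residual back edge
        total += bottleneck
    return total

# s->a (3), s->b (2), a->b (1), a->t (2), b->t (3):
# the cut ({s}, {a, b, t}) has capacity 5, and the max flow is also 5.
net = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}}
```

In the example network the cut separating s from everything else has capacity 3 + 2 = 5, matching the maximum flow value, as the theorem guarantees.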
16.
Cut (graph theory)
–
In graph theory, a cut is a partition of the vertices of a graph into two disjoint subsets. Any cut determines a cut-set, the set of edges that have one endpoint in each subset of the partition; these edges are said to cross the cut. In a connected graph, each cut-set determines a unique cut, and in some cases cuts are identified with their cut-sets rather than with their vertex partitions. In a flow network, an s–t cut is a cut that requires the source and the sink to be in different subsets, and its capacity is defined as the sum of the capacities of the edges in the cut-set. A cut C = (S, T) is a partition of the vertex set V of a graph G = (V, E) into two subsets S and T; the cut-set of C is the set of edges that have one endpoint in S and the other endpoint in T. If s and t are specified vertices of the graph G, an s–t cut is a cut in which s belongs to S and t belongs to T. In an unweighted undirected graph, the size or weight of a cut is the number of edges crossing the cut; in a weighted graph, it is defined by the sum of the weights of the edges crossing the cut. A bond is a cut-set that does not have any other cut-set as a proper subset. A cut is minimum if the size or weight of the cut is not larger than the size of any other cut. The illustration shows a minimum cut; the size of this cut is 2. The max-flow min-cut theorem proves that the maximum network flow and the sum of the cut-edge weights of any minimum cut that separates the source and the sink are equal. There are polynomial-time methods to solve the min-cut problem, notably the Edmonds–Karp algorithm. A cut is maximum if the size of the cut is not smaller than the size of any other cut. The illustration shows a maximum cut; the size of this cut is equal to 5. In general, finding a maximum cut is computationally hard. The max-cut problem is one of Karp's 21 NP-complete problems; it is also APX-hard, meaning that there is no polynomial-time approximation scheme for it unless P = NP.
However, the max-cut problem can be approximated to within a constant approximation ratio using semidefinite programming. Note that min-cut and max-cut are not dual problems in the linear programming sense, even though one gets from one problem to the other by changing min to max in the objective function; the max-flow problem is the dual of the min-cut problem. The sparsest cut problem is to bipartition the vertices so as to minimize the ratio of the number of edges across the cut divided by the number of vertices in the smaller half of the partition. This objective function favors solutions that are both sparse (few edges crossing the cut) and balanced (close to a bisection).
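The definitions of cut size, minimum cut, and maximum cut can be made concrete with a brute-force Python sketch (exponential in the number of vertices, so only for tiny graphs; function names are illustrative):

```python
from itertools import combinations

def cut_size(edges, S):
    """Number of edges crossing the cut (S, V \\ S) in an unweighted graph."""
    return sum((u in S) != (v in S) for u, v in edges)

def min_and_max_cut(vertices, edges):
    """Enumerate all cuts; fix vs[0] in S to skip mirror-image partitions."""
    vs = list(vertices)
    best_min = best_max = None
    for k in range(len(vs)):
        for extra in combinations(vs[1:], k):
            S = {vs[0], *extra}
            if len(S) == len(vs):
                continue  # T must be nonempty
            c = cut_size(edges, S)
            best_min = c if best_min is None else min(best_min, c)
            best_max = c if best_max is None else max(best_max, c)
    return best_min, best_max

# On the 4-cycle a-b-c-d-a, any single vertex gives a minimum cut of size 2,
# while the "alternating" partition {a, c} vs {b, d} cuts all 4 edges.
cycle = [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')]
```

The exhaustive search mirrors the hardness contrast described above: the minimum cut has polynomial-time algorithms via max-flow, while max-cut is NP-complete in general.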
17.
Minimum cost flow problem
–
The minimum-cost flow problem is an optimization and decision problem to find the cheapest possible way of sending a certain amount of flow through a flow network. A typical application involves finding the best delivery route from a factory to a warehouse where the road network has some capacity and cost associated with it. The cost of sending a flow f along an edge with per-unit cost a is f ⋅ a, and the problem requires an amount of flow d to be sent from source s to sink t. A variant could be called the minimum-cost maximum-flow problem and is useful for finding minimum cost maximum matchings. With some solutions, finding the minimum cost maximum flow instead is straightforward; if not, one can find the maximum flow by performing a binary search on d. A related problem is the minimum cost circulation problem, which can be used for solving minimum cost flow. The minimum cost flow problem can be solved by linear programming, since we optimize a linear function and all constraints are linear. Apart from that, many algorithms exist; see the literature for a comprehensive survey. Some of them are generalizations of maximum flow algorithms, while others use different approaches. Well-known fundamental algorithms include: cycle canceling, a primal method; minimum mean cycle canceling, a strongly polynomial algorithm; successive shortest path and capacity scaling, dual methods which can be viewed as generalizations of the Ford–Fulkerson algorithm; cost scaling, a primal-dual approach which can be viewed as a generalization of the push-relabel algorithm; the network simplex algorithm, a specialized version of the linear programming simplex method; and the out-of-kilter algorithm by D. R. Fulkerson. Given a bipartite graph G = (A ∪ B, E), let w : E → R be a weight function on the edges of E. The minimum weight bipartite matching problem, or assignment problem, is to find a perfect matching M ⊆ E whose total weight is minimized. The idea is to reduce this problem to a network flow problem.
Assign the capacity of all the edges in E′ to 1, add a source vertex s and connect it to all the vertices in A′, and add a sink vertex t and connect all vertices inside group B′ to this vertex. The capacity of all the new edges is 1 and their cost is 0.
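The reduction just described can be sketched in Python. The construction function builds the flow network as stated above; the solver is a brute-force stand-in for a real min-cost flow algorithm, suitable only for tiny instances (all names are illustrative):

```python
from itertools import permutations

def assignment_network(A, B, w):
    """Build the min-cost flow network for the assignment problem:
    unit-capacity, zero-cost edges s->A and B->t, plus the weighted
    unit-capacity edges of the bipartite graph itself."""
    edges = [('s', a, 1, 0) for a in A]                # (u, v, capacity, cost)
    edges += [(a, b, 1, w[(a, b)]) for (a, b) in w]
    edges += [(b, 't', 1, 0) for b in B]
    return edges

def min_weight_perfect_matching(A, B, w):
    """Brute force over all perfect matchings; a real solver would instead
    push |A| units of min-cost flow through the network above."""
    return min(sum(w[(a, b)] for a, b in zip(A, perm))
               for perm in permutations(B))

# Two workers, two jobs: the cheap assignment pairs a1-b2 (1) and a2-b1 (2).
w = {('a1', 'b1'): 4, ('a1', 'b2'): 1, ('a2', 'b1'): 2, ('a2', 'b2'): 6}
```

Because every edge has capacity 1, an integral min-cost flow of value |A| through this network saturates exactly one edge per vertex of A and B, i.e. a minimum-weight perfect matching.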
18.
Braess's paradox
–
Braess' paradox or Braess's paradox is a proposed explanation for the observation that adding capacity to a road network can impede traffic through it. The paradox may have analogues in electrical grids and biological systems. It has been suggested that, in theory, the improvement of a network could be accomplished by removing certain parts of it. Dietrich Braess, a mathematician at Ruhr University, Germany, noticed that the flow in a road network could be impeded by adding a new road. More formally, the idea behind Braess's discovery is that the Nash equilibrium may not equate with the best overall flow through a network. The paradox is stated as follows: "For each point of a road network, let there be given the number of cars starting from it. Under these conditions one wishes to estimate the distribution of traffic flow. Whether one street is preferable to another depends not only on the quality of the road, but also on the density of the flow. If every driver takes the path that looks most favorable to him, the resultant running times need not be minimal. Furthermore, it is indicated by an example that an extension of the road network may cause a redistribution of the traffic that results in longer individual running times." Adding extra capacity to a network when the moving entities selfishly choose their route can in some cases reduce overall performance, because the Nash equilibrium of such a system is not necessarily optimal. The network change induces a new game structure which leads to a prisoner's dilemma. In a Nash equilibrium, drivers have no incentive to change their routes; while the system is not in a Nash equilibrium, individual drivers are able to improve their respective travel times by changing the routes they take. In the case of Braess's paradox, drivers continue to switch until they reach Nash equilibrium, despite the reduction in overall performance. If the latency functions are linear, adding an edge can never make total travel time at equilibrium worse by a factor of more than 4/3.
As a corollary, they obtain that Braess's paradox is about as likely to occur as not to occur; their result applies to random rather than planned networks. In 1968, Dietrich Braess showed that an extension of the road network may cause a redistribution of the traffic that results in longer individual running times. This paradox has a counterpart in the case of a reduction of the road network. In Seoul, South Korea, a speeding up of traffic around the city was seen when a motorway was removed as part of the Cheonggyecheon restoration project. In Stuttgart, Germany, after investments into the road network in 1969, the traffic situation did not improve until a section of newly built road was closed to traffic again. In 1990 the temporary closing of 42nd Street in New York City for Earth Day reduced the amount of congestion in the area. In 2009, New York experimented with closures of Broadway at Times Square and Herald Square, which resulted in improved traffic flow and permanent pedestrian plazas.
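The paradox can be computed directly for the classic textbook network: 4000 drivers, two symmetric routes each consisting of a congested link (n/100 minutes for n drivers) followed by a fixed 45-minute link. The numbers below are the standard illustrative values, not taken from the excerpt above:

```python
def braess_times(n=4000):
    """Per-driver equilibrium travel time in the classic four-road example,
    without and with a zero-cost shortcut joining the two congested links."""
    # Without the shortcut, drivers split evenly between the two routes.
    without_link = (n / 2) / 100 + 45
    # With the shortcut, taking congested link -> shortcut -> congested link
    # is every driver's dominant strategy, so all n drivers use both
    # congested links.
    with_link = n / 100 + 0 + n / 100
    return without_link, with_link
```

Here every driver needs 65 minutes without the new road but 80 minutes with it: the Nash equilibrium after the extension is strictly worse for everyone, exactly the prisoner's-dilemma structure described above (and within the 4/3 bound for linear latencies, since 80/65 < 4/3).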
19.
Centrality
–
In graph theory and network analysis, indicators of centrality identify the most important vertices within a graph. Applications include identifying the most influential person in a social network and key infrastructure nodes in the Internet or urban networks. Centrality concepts were first developed in social network analysis, and many of the terms used to measure centrality reflect their sociological origin. They should not be confused with node influence metrics, which seek to quantify the influence of every node in the network. Centrality indices are answers to the question "What characterizes an important vertex?" The answer is given in terms of a real-valued function on the vertices of a graph. The word "importance" has a number of meanings, leading to many different definitions of centrality. Two categorization schemes have been proposed. Importance can be conceived in relation to a type of flow or transfer across the network; this allows centralities to be classified by the type of flow they consider important. Importance can alternately be conceived as involvement in the cohesiveness of the network; this allows centralities to be classified based on how they measure cohesiveness. Both of these approaches divide centralities into distinct categories. A further conclusion is that a centrality which is appropriate for one category will often get it wrong when applied to a different category. When centralities are categorized by their approach to cohesiveness, it becomes apparent that the majority of centralities inhabit one category. The centralities in this group differ only in how walks starting from a given vertex are defined and counted; restricting consideration to this group allows for a soft characterization which places centralities on a spectrum from walks of length one to infinite walks. The observation that many centralities share these familial relationships perhaps explains the high rank correlations between these indices.
A network can be considered a description of the paths along which something flows, and this allows a characterization based on the type of flow and the type of path encoded by the centrality. A flow can be based on transfers, where each indivisible item goes from one node to another. A second case is serial duplication, in which a replication of the item goes to the next node, so both the source and the target have it. The last case is parallel duplication, with the item being duplicated to several links at the same time. Likewise, the type of path can be constrained to geodesics, paths, trails, or walks. An alternate classification can be derived from how the centrality is constructed; this again splits into two classes: centralities are either radial or medial. Radial centralities count walks which start or end from the given vertex; the degree and eigenvalue centralities are examples of radial centralities, counting the number of walks of length one or length infinity.
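The simplest radial centrality, degree centrality, sits at the walks-of-length-one end of the spectrum described above and can be computed in a few lines of Python (the function name is illustrative):

```python
from collections import Counter

def degree_centrality(edges):
    """Degree centrality counts the walks of length one (incident edges)
    at each vertex -- the simplest radial centrality."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return dict(deg)

# In a star graph the hub touches every edge and is the most central vertex.
star = [('hub', 'a'), ('hub', 'b'), ('hub', 'c')]
```

Eigenvector centrality, at the other end of the spectrum, would instead count arbitrarily long walks by iterating adjacency-matrix multiplications; degree centrality is the length-one special case.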
20.
Dinic's algorithm
–
Dinic's algorithm or Dinitz's algorithm is a strongly polynomial algorithm for computing the maximum flow in a flow network, conceived in 1970 by the Israeli computer scientist Yefim A. Dinitz. The algorithm runs in O(V²E) time and is similar to the Edmonds–Karp algorithm; the introduction of the concepts of the level graph and blocking flow enables Dinic's algorithm to achieve its performance. Yefim Dinitz invented this algorithm in response to an exercise in Adelson-Velsky's algorithms class. At the time he was not aware of the basic facts regarding the Ford–Fulkerson algorithm. Dinitz mentions inventing his algorithm in January 1969; it was published in 1970 in the journal Doklady Akademii nauk SSSR. In 1974, Shimon Even and Alon Itai at the Technion in Haifa were very curious about the algorithm, but it was hard for them to decipher the two papers, each being limited to four pages to meet the restrictions of the journal Doklady Akademii nauk SSSR. Even did not give up, and after three days of effort managed to understand both papers except for the layered network maintenance issue. Over the next couple of years, Even gave lectures on Dinic's algorithm; Even and Itai also contributed to the algorithm by combining BFS and DFS, which is how the current version of the algorithm works. For about 10 years after the Ford–Fulkerson algorithm was invented, it was unknown whether it could be made to terminate in polynomial time in the general case of irrational edge capacities; this caused a lack of any known polynomial-time algorithm to solve the max flow problem in generic cases. Let G = (V, E) be a network with c(u, v) and f(u, v) the capacity and the flow of the edge (u, v), respectively. The residual capacity is a mapping c_f : V × V → R+ defined as c_f(u, v) = c(u, v) − f(u, v) if (u, v) ∈ E, c_f(u, v) = f(v, u) if (v, u) ∈ E, and c_f(u, v) = 0 otherwise. The residual graph is the graph G_f = (V, E_f), where E_f = {(u, v) ∈ V × V : c_f(u, v) > 0}. An augmenting path is an s–t path in the residual graph G_f. Define dist(v) to be the length of the shortest path from s to v in G_f; then the level graph of G_f is the graph G_L = (V, E_L), where E_L = {(u, v) ∈ E_f : dist(v) = dist(u) + 1}. A blocking flow is an s–t flow f such that the graph G′ = (V, E_L′) with E_L′ = {(u, v) : f(u, v) < c_f(u, v)} contains no s–t path. Dinic's algorithm: Input: a network G = (V, E) with source s, sink t, and capacities c.
Output: an s–t flow f of maximum value. 1. Set f(e) = 0 for each e ∈ E. 2. Construct the level graph G_L from the residual graph G_f of G; if dist(t) = ∞, stop and output f. 3. Find a blocking flow f′ in G_L. 4. Augment the flow f by f′ and go back to step 2. The level graph G_L can be constructed by breadth-first search in O(E) time, and a blocking flow can be found in O(VE) time in each of the O(V) phases; hence, the running time of Dinic's algorithm is O(V²E). Using a data structure called dynamic trees, the time for finding a blocking flow in each phase can be reduced to O(E log V), so the overall running time becomes O(VE log V). In networks with unit capacities, a much stronger time bound holds: each blocking flow can be found in O(E) time, and it can be shown that the number of phases does not exceed O(√E) and O(V^(2/3)).
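The steps above can be sketched as a compact adjacency-list implementation in Python, with BFS building the level graph and a current-arc DFS saturating a blocking flow (class and variable names are illustrative):

```python
from collections import deque

class Dinic:
    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]  # edge record: [to, capacity, rev-index]

    def add_edge(self, u, v, cap):
        self.adj[u].append([v, cap, len(self.adj[v])])
        self.adj[v].append([u, 0, len(self.adj[u]) - 1])  # residual back edge

    def _bfs(self, s, t):
        """Build the level graph; return whether t is reachable in G_f."""
        self.level = [-1] * self.n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v, cap, _ in self.adj[u]:
                if cap > 0 and self.level[v] < 0:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] >= 0

    def _dfs(self, u, t, pushed):
        """Send flow along level-increasing edges (current-arc optimization)."""
        if u == t:
            return pushed
        while self.it[u] < len(self.adj[u]):
            e = self.adj[u][self.it[u]]
            v, cap, rev = e
            if cap > 0 and self.level[v] == self.level[u] + 1:
                d = self._dfs(v, t, min(pushed, cap))
                if d > 0:
                    e[1] -= d
                    self.adj[v][rev][1] += d
                    return d
            self.it[u] += 1
        return 0

    def max_flow(self, s, t):
        flow = 0
        while self._bfs(s, t):            # step 2: rebuild the level graph
            self.it = [0] * self.n
            while (f := self._dfs(s, t, float('inf'))):
                flow += f                  # step 3/4: saturate a blocking flow
        return flow
```

Each outer iteration corresponds to one phase of the algorithm: one level graph, one blocking flow.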
21.
Oriented matroid
–
An oriented matroid is a mathematical structure that abstracts the properties of directed graphs and of arrangements of vectors in a vector space over an ordered field. All oriented matroids have an underlying matroid; thus, results on ordinary matroids can be applied to oriented matroids. However, the converse is false: some matroids cannot become an oriented matroid by orienting an underlying structure. The distinction between matroids and oriented matroids is discussed further below. Matroids are often useful in areas such as dimension theory and algorithms. Because an oriented matroid includes details about the oriented nature of a structure, its usefulness extends further into several areas including geometry. In order to abstract the concept of orientation on the edges of a graph to general sets, the following definition of signed sets is used. A signed set X combines a set of objects X̲ with a partition of that set into two subsets, X+ and X−; the members of X+ are called the positive elements, and the members of X− the negative elements. The set X̲ = X+ ∪ X− is called the support of X, and the empty signed set ∅ is defined in the obvious way. In this way, a signed set is just a set with negative signs added to distinguished elements. This will make sense as a direction only when we consider orientations of larger structures; then the sign of each element will encode its direction relative to this orientation. As with ordinary matroids, several equivalent systems of axioms exist; we refer to E as the ground set. Let C be a collection of signed sets, each of which is supported by a subset of E. If the following axioms hold for C, then C is the set of signed circuits of an oriented matroid on E: ∅ ∉ C; for all X ∈ C, −X ∈ C; for all X, Y ∈ C, if the support of X is contained in the support of Y, then X = Y or X = −Y; and for all X, Y ∈ C with X ≠ −Y and an e ∈ X+ ∩ Y−, there is a Z ∈ C such that Z+ ⊆ (X+ ∪ Y+) ∖ {e} and Z− ⊆ (X− ∪ Y−) ∖ {e}. A chirotope of rank r is a function χ : E^r → {−1, 0, 1} that satisfies the following axioms.
χ is not identically zero. For any permutation σ and x_1, …, x_r ∈ E, χ(x_σ(1), …, x_σ(r)) = sgn(σ) χ(x_1, …, x_r), where sgn(σ) is the sign of the permutation. For any x_1, …, x_r, y_1, …, y_r ∈ E such that χ(y_i, x_2, …, x_r) χ(y_1, …, y_{i−1}, x_1, y_{i+1}, …, y_r) ≥ 0 for each i, we also have χ(x_1, …, x_r) χ(y_1, …, y_r) ≥ 0. Every chirotope of rank r gives rise to a set of bases of a matroid on E consisting of those r-element subsets to which χ assigns a nonzero value; the chirotope can then sign the circuits of that matroid.
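For a configuration of vectors, the chirotope is simply the sign of the determinant of the chosen r-tuple. A rank-2 Python sketch makes the alternating property concrete (function names are illustrative):

```python
def sign(x):
    """Return -1, 0, or 1 according to the sign of x."""
    return (x > 0) - (x < 0)

def chirotope2(vectors):
    """Rank-2 chirotope of a configuration of plane vectors:
    chi(i, j) = sign of det [v_i; v_j], the orientation of the pair."""
    def chi(i, j):
        (a, b), (c, d) = vectors[i], vectors[j]
        return sign(a * d - b * c)
    return chi

# Three vectors in the plane, no two parallel:
chi = chirotope2([(1, 0), (0, 1), (1, 1)])
# chi is alternating: chi(i, j) = -chi(j, i) and chi(i, i) = 0.
# The bases of the underlying matroid are exactly the pairs with chi != 0.
```

Since no two of these vectors are parallel, every pair gets a nonzero sign, so every 2-element subset is a basis of the underlying matroid, as in the final statement above.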
22.
Shortest path problem
–
In graph theory, the shortest path problem is the problem of finding a path between two vertices in a graph such that the sum of the weights of its constituent edges is minimized. The problem of finding the shortest path between two intersections on a road map may be modeled as a special case of the shortest path problem in graphs. The shortest path problem can be defined for graphs whether undirected, directed, or mixed. It is defined here for undirected graphs; for directed graphs the definition of path requires that consecutive vertices be connected by an appropriate directed edge. Two vertices are adjacent when they are both incident to a common edge. A path in an undirected graph is a sequence of vertices P = (v_1, v_2, …, v_n) ∈ V × V × ⋯ × V such that v_i is adjacent to v_{i+1} for 1 ≤ i < n. Such a path P is called a path of length n − 1 from v_1 to v_n. Let e_{i,j} be the edge incident to both v_i and v_j. When each edge in the graph has unit weight, finding the shortest path is equivalent to finding a path with the fewest edges. Several variants exist: the single-destination shortest path problem, in which we have to find shortest paths from all vertices in the directed graph to a single destination vertex v (this can be reduced to the single-source shortest path problem by reversing the arcs in the directed graph), and the all-pairs shortest path problem, in which we have to find shortest paths between every pair of vertices v, v′ in the graph. These generalizations have significantly more efficient algorithms than the approach of running a single-pair shortest path algorithm on all relevant pairs of vertices. The most important algorithms for solving this problem are: Dijkstra's algorithm, which solves the single-source shortest path problem with non-negative edge weights; the Bellman–Ford algorithm, which solves the single-source problem if edge weights may be negative; the A* search algorithm, which solves the single-pair shortest path problem using heuristics to try to speed up the search; and the Floyd–Warshall algorithm, which solves all pairs of shortest paths.
Johnson's algorithm solves all pairs of shortest paths, and may be faster than Floyd–Warshall on sparse graphs; the Viterbi algorithm solves the shortest stochastic path problem with an additional probabilistic weight on each node. Additional algorithms and associated evaluations may be found in Cherkassky, Goldberg & Radzik. An algorithm using topological sorting can solve the single-source shortest path problem in linear time, Θ(E + V), in weighted DAGs. The following table is taken from Schrijver; a green background indicates an asymptotically best bound in the table. The all-pairs shortest path problem finds the shortest paths between every pair of vertices v, v′ in the graph. Shortest path algorithms are applied to find directions between physical locations, such as driving directions on web mapping websites like MapQuest or Google Maps; for this application fast specialized algorithms are available. In a networking or telecommunications mindset, this shortest path problem is sometimes called the min-delay path problem and is usually tied with a widest path problem; for example, the algorithm may seek the shortest widest path. A more lighthearted application is the games of "six degrees of separation" that try to find the shortest path in graphs like movie stars appearing in the same film.
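A minimal Dijkstra sketch in Python shows the single-source case for non-negative weights, using a binary heap as the priority queue (the graph below and the function name are illustrative):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest path distances for non-negative edge weights.
    graph: {u: [(v, weight), ...]} adjacency lists."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# s -> a -> b -> t (1 + 2 + 1 = 4) beats the direct s -> b -> t (4 + 1 = 5).
g = {'s': [('a', 1), ('b', 4)], 'a': [('b', 2), ('t', 6)], 'b': [('t', 1)]}
```

The non-negativity assumption is what justifies finalizing a vertex when it is popped; with negative weights one would switch to Bellman–Ford, as noted above.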
23.
National Institute of Standards and Technology
–
The National Institute of Standards and Technology (NIST) is a measurement standards laboratory and a non-regulatory agency of the United States Department of Commerce. Its mission is to promote innovation and industrial competitiveness. In 1821, John Quincy Adams had declared, "Weights and measures may be ranked among the necessities of life to every individual of human society." From 1830 until 1901, the role of overseeing weights and measures was carried out by the Office of Standard Weights and Measures. President Theodore Roosevelt appointed Samuel W. Stratton as the first director. The budget for the first year of operation was $40,000; a laboratory site was constructed in Washington, DC, and instruments were acquired from the national physical laboratories of Europe. In addition to weights and measures, the Bureau developed instruments for electrical units. In 1905 a meeting was called that would be the first National Conference on Weights and Measures. Quality standards were developed for products including some types of clothing, automobile brake systems and headlamps, and antifreeze. During World War I, the Bureau worked on multiple problems related to war production, even operating its own facility to produce optical glass when European supplies were cut off. Between the wars, Harry Diamond of the Bureau developed a blind-approach radio aircraft landing system. In 1948, financed by the Air Force, the Bureau began design and construction of SEAC, the Standards Eastern Automatic Computer. The computer went into operation in May 1950 using a combination of vacuum tubes and solid-state diode logic. About the same time the Standards Western Automatic Computer (SWAC) was built at the Los Angeles office of the NBS and used for research there. A mobile version, DYSEAC, was built for the Signal Corps in 1954. Due to a changing mission, the National Bureau of Standards became the National Institute of Standards and Technology in 1988.
Following 9/11, NIST conducted the official investigation into the collapse of the World Trade Center buildings. NIST had a budget for fiscal year 2007 of about $843.3 million. NIST's 2009 budget was $992 million, and it also received $610 million as part of the American Recovery and Reinvestment Act. NIST employs about 2,900 scientists, engineers, technicians, and support and administrative personnel, and about 1,800 NIST associates complement the staff. In addition, NIST partners with 1,400 manufacturing specialists and staff at nearly 350 affiliated centers around the country. NIST publishes Handbook 44, which provides the specifications, tolerances, and other technical requirements for weighing and measuring devices. The Congress of 1866 made use of the metric system in commerce a legally protected activity through the passage of the Metric Act of 1866. NIST is headquartered in Gaithersburg, Maryland, and operates a facility in Boulder, Colorado. NIST's activities are organized into laboratory programs and extramural programs. Effective October 1, 2010, NIST was realigned by reducing the number of NIST laboratory units from ten to six. NIST's Boulder laboratories are best known for NIST‑F1, an atomic clock that serves as the source of the United States' official time. NIST also operates a neutron science user facility, the NIST Center for Neutron Research (NCNR); the NCNR provides scientists access to a variety of neutron scattering instruments, which they use in many research fields.
24.
O'Reilly Media
–
O'Reilly Media is an American media company established by Tim O'Reilly that publishes books and Web sites and produces conferences on computer technology topics. Their distinctive brand features a woodcut of an animal on many of their book covers. The company began in 1978 as a private consulting firm doing technical writing, based in the Cambridge, Massachusetts area. In 1984, it began to retain publishing rights on manuals created for Unix vendors. A few 70-page Nutshell Handbooks were well received, but the focus remained on the consulting business until 1988, when a conference displaying O'Reilly's preliminary Xlib manuals attracted significant attention. In 1992, O'Reilly Media published one of the first popular books about the Internet, Ed Krol's Whole Internet User's Guide and Catalog. O'Reilly Media also created the first web portal, the Global Network Navigator, in 1993; it was sold to AOL in 1995. GNN was the first site on the World Wide Web to feature paid advertising. The firm is now recognized for the conferences and summits it organizes. In 1997, O'Reilly launched The Perl Conference to raise the profile of the Perl programming language. Many of the company's other software bestsellers were also on topics that were off the radar of the commercial software industry. In 1998, O'Reilly invited many of the leaders of free software projects to a meeting, which became known as the Open Source Summit. The O'Reilly Open Source Convention is now one of O'Reilly's flagship events. Other key events include the Strata Conference on big data, the Velocity Conference on Web Performance and Operations, and FOO Camp. Past events of note include the O'Reilly Emerging Technology Conference and the Web 2.0 Summit. Overall, O'Reilly describes its business not as publishing or conferences, but as changing the world by spreading the knowledge of innovators.
In 2001, O'Reilly launched Safari Books Online, a subscription-based service providing access to ebooks, as a joint venture with the Pearson Technology Group. In 2013, O'Reilly acquired Pearson's interest in the joint venture. In 2003, after the dot-com bust, O'Reilly's corporate goal was to reignite enthusiasm in the computer industry. Dale Dougherty, an executive at O'Reilly, coined the phrase "Web 2.0" during a brainstorming session, and this then became the name of the Web 2.0 Summit run by O'Reilly Media and TechWeb. In May 2006, CMP Media learned of an event called the "Web 2.0 Half day conference." Concerned over their obligation to take reasonable means to enforce their trade and service marks, CMP sent a cease-and-desist letter; this attempt to restrict the use of the term through legal mechanisms was criticized by some. In 2004, the company named the Maker Movement with the launch of Make magazine. Today, the flagship Maker Faire in San Mateo, CA, and other Faires around the world collectively draw millions of attendees.
25.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before then. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration was devised in 1967, based upon the 9-digit Standard Book Numbering (SBN) code created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN was devised in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8: 340 indicating the publisher and 01381 their serial number. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Numbers (EAN-13).
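The conversions described above can be sketched in Python. This is an illustrative sketch, not a standard library: the function names are invented here, and it assumes the well-known check-digit rules (ISBN-10 uses a weighted sum mod 11, with X standing for 10; ISBN-13 uses alternating weights 1 and 3 mod 10).

```python
def isbn10_is_valid(isbn10: str) -> bool:
    """Check an ISBN-10: digits weighted 10..1 must sum to 0 mod 11."""
    digits = isbn10.replace("-", "").replace(" ", "")
    if len(digits) != 10:
        return False
    total = 0
    for i, ch in enumerate(digits):
        # The final position may be 'X', representing the value 10.
        value = 10 if (ch in "Xx" and i == 9) else int(ch)
        total += value * (10 - i)
    return total % 11 == 0

def sbn_to_isbn10(sbn: str) -> str:
    """An SBN becomes an ISBN-10 by prefixing '0'; the check digit is unchanged."""
    return "0" + sbn.replace("-", "").replace(" ", "")

def isbn10_to_isbn13(isbn10: str) -> str:
    """Prefix 978, drop the old check digit, and recompute it mod 10."""
    body = "978" + isbn10.replace("-", "").replace(" ", "")[:9]
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(body))
    return body + str((10 - total % 10) % 10)

# The Hodder SBN cited above:
isbn10 = sbn_to_isbn10("340013818")   # -> "0340013818"
print(isbn10, isbn10_is_valid(isbn10))
print(isbn10_to_isbn13(isbn10))
```

Note that prefixing 0 leaves the ISBN-10 check digit valid because the added leading zero contributes nothing to the weighted sum, whereas moving to the 978 prefix changes every digit's weight, so the ISBN-13 check digit must be recomputed.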
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Determining how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency responsible for a country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, where the service is provided by non-government-funded organisations, issuing ISBNs requires the payment of a fee. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
26.
Gary Chartrand
–
Gary Theodore Chartrand is an American-born mathematician who specializes in graph theory. He is known for his textbooks on introductory graph theory and for the concept of a highly irregular graph. Chartrand was born in 1936. He was raised in Sault Ste. Marie, Michigan, and attended J. W. Sexton High School in Lansing, Michigan. He earned his B.S. from Michigan State University, which also awarded him a Master of Science and, in 1964, a PhD for his work in graph theory. Chartrand became the first doctoral student of Edward Nordhaus and the first doctoral student at Michigan State University to research graph theory; his dissertation was Graphs and Their Associated Line-Graphs. Chartrand worked with Frank Harary at the University of Michigan, where he spent a year as a Research Associate. The topic of highly irregular graphs was introduced by Chartrand, Paul Erdős, and Ortrud R. Oellermann. Other contributions that Chartrand has made involve dominating sets and distance in graphs. During his career at Western Michigan University, he advised 22 doctoral students in their research on aspects of graph theory. Chartrand is currently a professor emeritus of mathematics at Western Michigan University. His books include: 1977, Graphs as Mathematical Models, Prindle, Weber & Schmidt; 1993, Applied and Algorithmic Graph Theory, McGraw Hill, MR1211413; 2008, Chromatic Graph Theory, CRC Press, MR2450569; 2010, Graphs & Digraphs, 5th edition, CRC Press, MR2766107; 2012, Mathematical Proofs: A Transition to Advanced Mathematics, 3rd edition, Pearson; 2012, A First Course in Graph Theory, Dover Publications; 2015, The Fascinating World of Graph Theory, Princeton University Press, MR3307972. See also Chartrand's web page at Western Michigan University and Gary Theodore Chartrand at the Mathematics Genealogy Project.
27.
Ron Rivest
–
Ronald Linn Rivest is a cryptographer and an Institute Professor at MIT. He is a member of MIT's Department of Electrical Engineering and Computer Science, and he was a member of the Election Assistance Commission's Technical Guidelines Development Committee, tasked with assisting the EAC in drafting the Voluntary Voting System Guidelines. Rivest is one of the inventors of the RSA algorithm. He is the inventor of the symmetric-key encryption algorithms RC2, RC4, and RC5, and co-inventor of RC6. The RC stands for Rivest Cipher, or alternatively, Ron's Code. He also authored the MD2, MD4, MD5, and MD6 cryptographic hash functions. He designed the ThreeBallot voting system; most importantly, this system does not rely on cryptography at all. Stating "Our democracy is too important," he simultaneously placed ThreeBallot in the public domain. Rivest earned a Bachelor's degree in Mathematics from Yale University in 1969. He is a co-author of Introduction to Algorithms, a standard textbook on algorithms, with Thomas H. Cormen, Charles E. Leiserson, and Clifford Stein. He is a member of the MIT Computer Science and Artificial Intelligence Laboratory in the Theory of Computation Group, and he was also a founder of RSA Data Security, Verisign, and Peppercoin. Professor Rivest has research interests in cryptography and computer and network security. Together with Adi Shamir and Len Adleman, he has been awarded the 2000 IEEE Koji Kobayashi Computers and Communications Award and the Secure Computing Lifetime Achievement Award. He also shared with them the Turing Award. Rivest has received an honorary degree from the Sapienza University of Rome. In 2005, he received the MITX Lifetime Achievement Award. Rivest was named the 2007 Marconi Fellow, and on May 29, 2008, he gave the Chesley lecture at Carleton College. He was named an Institute Professor at MIT in June 2015. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford: Introduction to Algorithms.
28.
Clifford Stein
–
Stein is chair of the Industrial Engineering and Operations Research Department at Columbia University. Prior to joining Columbia, Stein was a professor at Dartmouth College in New Hampshire. His work has been funded by the National Science Foundation and the Sloan Foundation. As of November 1, 2015, his publications have been cited over 46,000 times. He is also the co-author of two textbooks: Introduction to Algorithms, with T. Cormen, C. Leiserson, and R. Rivest, which is currently the best-selling textbook in algorithms and has been translated into 8 languages; and Discrete Math for Computer Science, with Ken Bogart and Scot Drysdale, a newer textbook that covers discrete math at an undergraduate level. About 39,500 of Stein's 46,000 citations are to Introduction to Algorithms. In recent years, Stein has built up ties with the Norwegian research community, which earned him an honorary doctorate from the University of Oslo. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford: Introduction to Algorithms.