Alexander Schrijver is a Dutch mathematician and computer scientist, a professor of discrete mathematics and optimization at the University of Amsterdam and a fellow at the Centrum Wiskunde & Informatica (CWI) in Amsterdam. Since 1993 he has been co-editor-in-chief of the journal Combinatorica. Schrijver earned his Ph.D. in 1977 from the Vrije Universiteit in Amsterdam under the supervision of Pieter Cornelis Baayen. He worked at the Centrum Wiskunde & Informatica in pure mathematics from 1973 to 1979 and was a professor at Tilburg University from 1983 to 1989. In 1989 he rejoined the Centrum Wiskunde & Informatica, and in 1990 he became a professor at the University of Amsterdam. In 2005, he stepped down from management at CWI and instead became a CWI Fellow. Schrijver was one of the winners of the Delbert Ray Fulkerson Prize of the American Mathematical Society in 1982 for his work with Martin Grötschel and László Lovász on applications of the ellipsoid method to combinatorial optimization. He won the INFORMS Frederick W. Lanchester Prize in 1986 for his book Theory of Linear and Integer Programming, and again in 2004 for his book Combinatorial Optimization: Polyhedra and Efficiency.
In 2003, he won the George B. Dantzig Prize of the Mathematical Programming Society and SIAM for "deep and fundamental research contributions to discrete optimization". In 2006, he was a joint winner of the INFORMS John von Neumann Theory Prize with Grötschel and Lovász for their work in combinatorial optimization, in particular for their joint work in the book Geometric Algorithms and Combinatorial Optimization showing the polynomial-time equivalence of separation and optimization. In 2008, his work with Adri Steenbeek on scheduling the Dutch train system was honored with INFORMS' Franz Edelman Award for Achievement in Operations Research and the Management Sciences, and he won the SIGMA prize of the Dutch SURF foundation for a mathematics education project. In 2015 he won the EURO Gold Medal, the highest distinction in operations research in Europe. In 2005 Schrijver won the Spinoza Prize of the NWO, the highest scientific award in the Netherlands, for his research in combinatorics and algorithms. In the same year he became a Knight of the Order of the Netherlands Lion.
In 2002, Schrijver received an honorary doctorate from the University of Waterloo in Canada, and in 2011 he received another from Eötvös Loránd University in Hungary. Schrijver became a member of the Royal Netherlands Academy of Arts and Sciences in 1995, a corresponding member of the North Rhine-Westphalia Academy for Sciences and Arts in 2005, a member of the German Academy of Sciences Leopoldina in 2006, and a member of the Academia Europaea in 2008. In 2012 he became a fellow of the American Mathematical Society. His books include Theory of Linear and Integer Programming, Geometric Algorithms and Combinatorial Optimization (with Grötschel and Lovász), Combinatorial Optimization, and Combinatorial Optimization: Polyhedra and Efficiency.
Simulated annealing is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic for approximating global optimization in a large search space for an optimization problem; it is often used when the search space is discrete. For problems where finding an approximate global optimum is more important than finding a precise local optimum in a fixed amount of time, simulated annealing may be preferable to alternatives such as gradient descent. The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects; both are attributes of the material that depend on its thermodynamic free energy, and heating and cooling the material affect both the temperature and the thermodynamic free energy. The simulation of annealing can be used to find an approximation of a global minimum for a function with a large number of variables. The notion of slow cooling implemented in the simulated annealing algorithm is interpreted as a slow decrease in the probability of accepting worse solutions as the solution space is explored.
Accepting worse solutions is a fundamental property of metaheuristics because it allows a more extensive search for the global optimal solution. In general, simulated annealing algorithms work as follows. At each time step, the algorithm randomly selects a solution close to the current one, measures its quality, and decides whether to move to it or to stay with the current solution according to one of two probabilities, chosen depending on whether the new solution is better or worse than the current one. During the search, the temperature is progressively decreased from an initial positive value to zero and affects the two probabilities: at each step, the probability of moving to a better new solution is either kept at 1 or changed towards a positive value, while the probability of moving to a worse new solution is progressively decreased towards zero. The simulation can be performed either by solving kinetic equations for density functions or by stochastic sampling. The method is an adaptation of the Metropolis–Hastings algorithm, a Monte Carlo method to generate sample states of a thermodynamic system, published by N. Metropolis et al. in 1953.
The state of some physical systems, and the function E(s) to be minimized, is analogous to the internal energy of the system in that state. The goal is to bring the system from an arbitrary initial state to a state with the minimum possible energy. At each step, the simulated annealing heuristic considers some neighboring state s* of the current state s and probabilistically decides between moving the system to state s* or staying in state s. These probabilities ultimately lead the system to move to states of lower energy. This step is repeated until the system reaches a state that is good enough for the application, or until a given computation budget has been exhausted. Optimization of a solution involves evaluating the neighbours of a state of the problem, which are new states produced by conservatively altering a given state. For example, in the travelling salesman problem each state is defined as a permutation of the cities to be visited, and its neighbours are the set of permutations produced by reversing the order of any two successive cities.
The well-defined way in which the states are altered to produce neighbouring states is called a "move", and different moves give different sets of neighbouring states. These moves result in minimal alterations of the last state, in an attempt to progressively improve the solution by iteratively improving its parts. Simple heuristics like hill climbing, which move by finding better neighbour after better neighbour and stop when they reach a solution with no better neighbours, cannot guarantee to lead to any of the existing better solutions; their outcome may be only a local optimum, while the actual best solution would be a global optimum that could be different. Metaheuristics use the neighbours of a solution as a way to explore the solution space, and although they prefer better neighbours, they also accept worse neighbours in order to avoid getting stuck in local optima. The probability of making the transition from the current state s to a candidate new state s′ is specified by an acceptance probability function P(e, e′, T) that depends on the energies e = E(s) and e′ = E(s′) of the two states and on a global time-varying parameter T called the temperature.
States with a smaller energy are better than those with a greater energy. The probability function P must be positive even when e′ is greater than e; this feature prevents the method from becoming stuck at a local minimum that is worse than the global one. When T tends to zero, the probability P(e, e′, T) must tend to zero if e′ > e and to a positive value otherwise.
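The acceptance rule above can be written down compactly. The following Python sketch is a minimal illustration, assuming a geometric cooling schedule and the standard Metropolis acceptance probability exp((e − e′)/T); the function and parameter names are illustrative choices, not prescribed by the method.

```python
import math
import random

def simulated_annealing(initial, neighbour, energy,
                        t_start=1.0, cooling=0.995, steps=20_000):
    """Minimise `energy` by simulated annealing.

    `neighbour` proposes a candidate state close to the current one.
    A better candidate is always accepted; a worse one is accepted
    with probability exp((e - e_new) / T), which tends to 0 as T -> 0.
    """
    state, e = initial, energy(initial)
    best, best_e = state, e
    t = t_start
    for _ in range(steps):
        candidate = neighbour(state)
        e_new = energy(candidate)
        # Metropolis acceptance criterion (exp only evaluated when e_new >= e).
        if e_new < e or random.random() < math.exp((e - e_new) / t):
            state, e = candidate, e_new
            if e < best_e:
                best, best_e = state, e
        t *= cooling  # slow, geometric decrease of the temperature
    return best, best_e

# Example move for the travelling salesman problem mentioned above:
# reverse the order of two successive cities in the permutation.
def reverse_adjacent(tour):
    i = random.randrange(len(tour) - 1)
    new_tour = list(tour)
    new_tour[i], new_tour[i + 1] = new_tour[i + 1], new_tour[i]
    return new_tour
```

With the tour length as the energy function, reverse_adjacent realises the neighbour relation for the travelling salesman problem described earlier.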
Linear programming relaxation
In mathematics, the relaxation of an integer linear program is the problem that arises by removing the integrality constraint of each variable. For example, in a 0-1 integer program, all constraints are of the form xi ∈ {0, 1}; the relaxation of the original integer program instead uses a collection of linear constraints 0 ≤ xi ≤ 1. The resulting relaxation is a linear program, hence the name. This relaxation technique transforms an NP-hard optimization problem into a related problem that is solvable in polynomial time. Consider the set cover problem, the linear programming relaxation of which was first considered by Lovász. In this problem, one is given as input a family of sets F = {S1, S2, …, Sn}. To formulate this as a 0-1 integer program, form an indicator variable xi for each set Si that takes the value 1 when Si belongs to the chosen subfamily and 0 when it does not. A valid cover can be described by an assignment of values to the indicator variables satisfying the constraints xi ∈ {0, 1} and, for each element ej of the union of F, ∑i : ej ∈ Si xi ≥ 1.
The minimum set cover corresponds to the assignment of indicator variables satisfying these constraints and minimizing the linear objective function ∑i xi. The linear programming relaxation of the set cover problem describes a fractional cover in which the input sets are assigned weights such that the total weight of the sets containing each element is at least one and the total weight of all sets is minimized. As a specific example of the set cover problem, consider the instance F = {{a, b}, {b, c}, {a, c}}. There are three optimal set covers, each of which includes two of the three given sets. Thus, the optimal value of the objective function of the corresponding 0-1 integer program is 2, the number of sets in the optimal covers. However, there is a fractional solution in which each set is assigned the weight 1/2, for which the total value of the objective function is 3/2. Thus, in this example, the linear programming relaxation has a value differing from that of the unrelaxed 0-1 integer program. The linear programming relaxation of an integer program may be solved using any standard linear programming technique.
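As an illustration, the following Python sketch (assuming SciPy's linprog is available; the variable names are illustrative) solves the relaxation of the example instance above and compares it with the 0-1 optimum found by brute force.

```python
from itertools import product
from scipy.optimize import linprog

sets = [{"a", "b"}, {"b", "c"}, {"a", "c"}]
elements = sorted(set().union(*sets))

# Covering constraints: for each element, the total weight of the sets
# containing it is at least 1; written as -sum(...) <= -1 for linprog's
# A_ub @ x <= b_ub form.
A_ub = [[-1.0 if e in s else 0.0 for s in sets] for e in elements]
b_ub = [-1.0] * len(elements)
c = [1.0] * len(sets)  # objective: total weight of the chosen sets

lp = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * len(sets))
print(lp.x, lp.fun)  # fractional optimum x = (1/2, 1/2, 1/2), value 3/2

# Brute force over 0-1 assignments recovers the integer optimum, 2.
covers = (x for x in product((0, 1), repeat=len(sets))
          if all(any(x[i] and e in s for i, s in enumerate(sets))
                 for e in elements))
print(min(sum(x) for x in covers))
```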
If the optimal solution to the linear program happens to have all variables either 0 or 1, it will also be an optimal solution to the original integer program. However, this is in general not true, except in some special cases (for instance, problems whose constraint matrix is totally unimodular). In all cases, though, the solution quality of the linear program is at least as good as that of the integer program, because any integer program solution is also a valid linear program solution. That is, in a maximization problem, the relaxed program has a value greater than or equal to that of the original program, while in a minimization problem such as the set cover problem the relaxed program has a value smaller than or equal to that of the original program. Thus, the relaxation provides an optimistic bound on the integer program's solution. In the example instance of the set cover problem described above, in which the relaxation has an optimal solution value of 3/2, we can deduce that the optimal solution value of the unrelaxed integer program is at least as large. Since the set cover problem has solution values that are integers, the optimal solution quality must be at least as large as the next larger integer, 2.
Thus, in this instance, despite having a different value from the unrelaxed problem, the linear programming relaxation gives us a tight lower bound on the solution quality of the original problem. Linear programming relaxation is a standard technique for designing approximation algorithms for hard optimization problems. In this application, an important concept is the integrality gap, the maximum ratio between the solution quality of the integer program and of its relaxation. In an instance of a minimization problem, if the real minimum is Mint and the relaxed minimum is Mfrac, the integrality gap of that instance is IG = Mint / Mfrac. In a maximization problem the fraction is reversed. The integrality gap is always at least 1. In the example above, the instance F = {{a, b}, {b, c}, {a, c}} shows an integrality gap of 4/3. Typically, the integrality gap translates into the approximation ratio achievable by an approximation algorithm based on the relaxation.
George Lann Nemhauser is an American operations researcher, the A. Russell Chandler III Chair and Institute Professor of Industrial and Systems Engineering at the Georgia Institute of Technology and a former president of the Operations Research Society of America. Nemhauser was born in The Bronx, New York, and did his undergraduate studies at the City College of New York, graduating with a degree in chemical engineering in 1958. He earned his Ph.D. in operations research in 1961 from Northwestern University under the supervision of Jack Mitten. He taught at Johns Hopkins University from 1961 to 1969, then moved to Cornell University, where he held the Leon C. Welch endowed chair in operations research, and moved to the Georgia Institute of Technology in 1985. He was president of ORSA in 1981, chair of the Mathematical Programming Society, and founding editor of the journal Operations Research Letters. Nemhauser's research concerns large mixed integer programming problems and their applications; he is one of the co-inventors of the branch and price method for solving integer linear programs.
He contributed important early studies of approximation algorithms for facility location problems and for submodular optimization. Nemhauser, together with Leslie Trotter, showed in 1975 that in the weighted vertex cover problem there is an optimal solution containing all the nodes that have a value of 1 in the linear programming relaxation as well as some of the nodes that have a value of 0.5; in this relaxation, every variable takes one of the values 0, 0.5, and 1. Nemhauser is the author of the books Introduction to Dynamic Programming, Integer Programming, and Integer and Combinatorial Optimization. Nemhauser was elected as a member of the National Academy of Engineering in 1986, a fellow of INFORMS in 2002, and a fellow of the Society for Industrial and Applied Mathematics in 2008. He has won five awards from INFORMS: the George E. Kimball Medal for distinguished service to INFORMS and to the profession in 1988, the Frederick W. Lanchester Prize in 1977 for a paper on approximation algorithms for facility location and again in 1989 for his textbook Integer and Combinatorial Optimization, the Philip McCord Morse Lectureship Award in 1992, the first Optimization Society Khachiyan Prize for Life-time Accomplishments in Optimization in 2010, and the John von Neumann Theory Prize in 2012.
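A minimal sketch of the half-integrality behind the Nemhauser–Trotter result, assuming SciPy is available: for the vertex cover linear program of the triangle graph, the unique optimum assigns every vertex the value 0.5.

```python
from scipy.optimize import linprog

# Vertex cover LP for the triangle graph: minimise x_u + x_v + x_w
# subject to x_u + x_v >= 1 for each edge, with 0 <= x <= 1.
edges = [(0, 1), (1, 2), (0, 2)]
A_ub = [[-1.0 if v in e else 0.0 for v in range(3)] for e in edges]
res = linprog([1.0] * 3, A_ub=A_ub, b_ub=[-1.0] * 3, bounds=[(0, 1)] * 3)
print(res.x)  # (0.5, 0.5, 0.5): every variable is 0, 0.5, or 1
```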
Travelling salesman problem
The travelling salesman problem (TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in operations research and theoretical computer science. The travelling purchaser problem and the vehicle routing problem are both generalizations of TSP. In the theory of computational complexity, the decision version of the TSP belongs to the class of NP-complete problems. Thus, it is possible that the worst-case running time for any algorithm for the TSP increases superpolynomially with the number of cities. The problem was first formulated in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods. Though the problem is computationally difficult, a large number of heuristics and exact algorithms are known, so that some instances with tens of thousands of cities can be solved completely and even problems with millions of cities can be approximated within a small fraction of 1%.
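For small instances, exact methods are easy to state. The following Python sketch (an illustrative implementation, not one of the large-scale solvers alluded to above) is the classical Held–Karp dynamic program, which solves the problem exactly in O(n² 2ⁿ) time rather than the O(n!) of brute-force enumeration.

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP via the Held-Karp dynamic program, O(n^2 * 2^n).

    dist[i][j] is the distance from city i to city j; the tour
    starts and ends at city 0.
    """
    n = len(dist)
    # dp[(S, j)] = length of the shortest path that starts at city 0,
    # visits exactly the cities in frozenset S (0 not in S), and ends at j.
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                dp[(S, j)] = min(dp[(S - {j}, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# Example with a hypothetical 4-city asymmetric distance matrix:
dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(held_karp(dist))  # 21 (the tour 0 -> 2 -> 3 -> 1 -> 0)
```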
The TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept "city" represents, for example, customers, soldering points, or DNA fragments, and the concept "distance" represents travelling times or cost, or a similarity measure between DNA fragments. The TSP also appears in astronomy, as astronomers observing many sources will want to minimize the time spent moving the telescope between the sources. In many applications, additional constraints such as limited resources or time windows may be imposed. The origins of the travelling salesman problem are unclear. A handbook for travelling salesmen from 1832 mentions the problem and includes example tours through Germany and Switzerland, but contains no mathematical treatment. The travelling salesman problem was mathematically formulated in the 1800s by the Irish mathematician W. R. Hamilton and by the British mathematician Thomas Kirkman.
Hamilton's Icosian Game was a recreational puzzle based on finding a Hamiltonian cycle. The general form of the TSP appears to have been first studied by mathematicians during the 1930s in Vienna and at Harvard, notably by Karl Menger, who defines the problem, considers the obvious brute-force algorithm, and observes the non-optimality of the nearest neighbour heuristic: "We denote by messenger problem the task to find, for finitely many points whose pairwise distances are known, the shortest route connecting the points. Of course, this problem is solvable by finitely many trials. Rules which would push the number of trials below the number of permutations of the given points are not known. The rule that one first should go from the starting point to the closest point, then to the point closest to this, etc., in general does not yield the shortest route." It was first considered mathematically in the 1930s by Merrill M. Flood, who was looking to solve a school bus routing problem. Hassler Whitney at Princeton University introduced the name "travelling salesman problem" soon after.
In the 1950s and 1960s, the problem became increasingly popular in scientific circles in Europe and the USA after the RAND Corporation in Santa Monica offered prizes for steps in solving the problem. Notable contributions were made by George Dantzig, Delbert Ray Fulkerson and Selmer M. Johnson from the RAND Corporation, who expressed the problem as an integer linear program and developed the cutting plane method for its solution. They wrote what is considered the seminal paper on the subject, in which with these new methods they solved an instance with 49 cities to optimality by constructing a tour and proving that no other tour could be shorter. Dantzig, Fulkerson and Johnson speculated that, given a near optimal solution, one may be able to find optimality or prove optimality by adding a small number of extra inequalities (cuts); they used this idea to solve their initial 49 city problem using a string model, and found that only a few cuts were needed. While this paper did not give an algorithmic approach to TSP problems, the ideas that lay within it were indispensable to creating exact solution methods for the TSP, though it would take 15 years to find an algorithmic approach to creating these cuts.
As well as cutting plane methods, Dantzig, Fulkerson and Johnson used branch and bound algorithms, perhaps for the first time. In the following decades, the problem was studied by many researchers from mathematics, computer science, chemistry and other sciences. In the 1960s, however, a new approach was created: instead of seeking optimal solutions, one would produce a solution whose length is provably bounded by a multiple of the optimal length, in doing so creating lower bounds for the problem. One method of doing this is to create a minimum spanning tree of the graph and then double all its edges, which produces the bound that the length of an optimal tour is at most twice the weight of a minimum spanning tree. Christofides made a big advance in this approach, giving an algorithm with a known worst case: the Christofides algorithm, given in 1976, produces a tour that is at worst 1.5 times longer than the optimal solution. As the algorithm was so simple and quick, many hoped it would give way to a near optimal solution method.
This remains the method with the best worst-case scenario.
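The tree-doubling bound described above admits a short sketch. The Python code below (a minimal illustration, assuming a symmetric distance matrix obeying the triangle inequality; the names are illustrative) builds a minimum spanning tree with Prim's algorithm and shortcuts a preorder walk of it into a tour of length at most twice the optimum. Christofides' refinement replaces the doubling step with a minimum-weight perfect matching on the odd-degree tree vertices.

```python
import math

def two_approx_tsp(dist):
    """Tree-doubling 2-approximation for metric TSP.

    dist must be symmetric and satisfy the triangle inequality.
    Returns a tour (as a list of city indices) and its length.
    """
    n = len(dist)
    # Prim's algorithm for a minimum spanning tree rooted at city 0.
    in_tree = [False] * n
    parent = [0] * n
    key = [math.inf] * n
    key[0] = 0.0
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: key[v])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and dist[u][v] < key[v]:
                key[v], parent[v] = dist[u][v], u
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)
    # A preorder walk of the tree; skipping repeated vertices "shortcuts"
    # the doubled tree into a Hamiltonian tour, which by the triangle
    # inequality costs at most 2 * MST weight <= 2 * optimal tour length.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return tour, length
```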
Christos Harilaos Papadimitriou is a Greek theoretical computer scientist and a professor of computer science at Columbia University. Papadimitriou studied at the National Technical University of Athens, where in 1972 he received his Bachelor of Arts degree in Electrical Engineering. He continued his studies at Princeton University, where he received his MS in Electrical Engineering in 1974 and his PhD in Electrical Engineering and Computer Science in 1976. Papadimitriou has taught at Harvard, MIT, the National Technical University of Athens, Stanford, UCSD, and the University of California, Berkeley, and is the Donovan Family Professor of Computer Science at Columbia University. Papadimitriou co-authored a paper on pancake sorting with Bill Gates, then a Harvard undergraduate. Papadimitriou recalled: "Two years later, I called to tell him our paper had been accepted to a fine math journal. He sounded eminently disinterested. He had moved to Albuquerque, New Mexico to run a small company writing code for microprocessors, of all things.
I remember thinking: 'Such a brilliant kid. What a waste.'" In 2001, Papadimitriou was inducted as a Fellow of the Association for Computing Machinery, and in 2002 he was awarded the Knuth Prize. He became a member of the U.S. National Academy of Engineering for contributions to complexity theory, database theory, and combinatorial optimization. In 2009 he was elected to the US National Academy of Sciences. During the 36th International Colloquium on Automata, Languages and Programming (ICALP), there was a special event honoring Papadimitriou's contributions to computer science. In 2012, he, along with Elias Koutsoupias, was awarded the Gödel Prize for their joint work on the concept of the price of anarchy. Papadimitriou is the author of the textbook Computational Complexity, one of the most widely used textbooks in the field of computational complexity theory. He has also co-authored the textbook Algorithms with Sanjoy Dasgupta and Umesh Vazirani, and the graphic novel Logicomix with Apostolos Doxiadis. His name was listed in the 19th position on the CiteSeer search engine academic database and digital library.
In 1997, Papadimitriou received a doctorate honoris causa from ETH Zurich; in 2011, one from the National Technical University of Athens; and in 2013, one from the École polytechnique fédérale de Lausanne. Papadimitriou was awarded the IEEE John von Neumann Medal in 2016, the EATCS Award in 2015, the Gödel Prize in 2012, the IEEE Computer Society Charles Babbage Award in 2004, and the Knuth Prize in 2002. His books include:
Elements of the Theory of Computation. Prentice-Hall, 1982 (also in a Greek edition).
Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, 1982.
The Theory of Database Concurrency Control. CS Press, 1986.
Computational Complexity. Addison-Wesley, 1994.
Turing. MIT Press, November 2003.
Life Sentence to Hackers? Kastaniotis Editions, 2004; a compilation of articles written for the Greek newspaper To Vima.
Algorithms. McGraw-Hill, September 2006.
Logicomix: An Epic Search for Truth. Bloomsbury Publishing and Bloomsbury USA, September 2009.
At UC Berkeley, in 2006, he joined a professor-and-graduate-student band called Lady X and The Positive Eigenvalues.
GSM (Global System for Mobile Communications) is a standard developed by the European Telecommunications Standards Institute (ETSI) to describe the protocols for second-generation (2G) digital cellular networks used by mobile devices such as mobile phones and tablets. It was first deployed in Finland in December 1991. As of 2014, it had become the global standard for mobile communications, with over 90% market share, operating in over 193 countries and territories. 2G networks developed as a replacement for first generation (1G) analog cellular networks, and the GSM standard originally described a digital, circuit-switched network optimized for full duplex voice telephony. This expanded over time to include data communications, first by circuit-switched transport, then by packet data transport via GPRS and EDGE. Subsequently, the 3GPP developed third-generation (3G) UMTS standards, followed by fourth-generation (4G) LTE Advanced standards, which do not form part of the ETSI GSM standard. "GSM" is a trademark owned by the GSM Association. It may also refer to the most common voice codec used, Full Rate.
In 1983, work began to develop a European standard for digital cellular voice telecommunications when the European Conference of Postal and Telecommunications Administrations (CEPT) set up the Groupe Spécial Mobile committee and provided a permanent technical-support group based in Paris. Five years later, in 1987, 15 representatives from 13 European countries signed a memorandum of understanding in Copenhagen to develop and deploy a common cellular telephone system across Europe, and EU rules were passed to make GSM a mandatory standard. The decision to develop a continental standard eventually resulted in a unified, standard-based network larger than that in the United States. In February 1987 Europe produced the first agreed GSM Technical Specification. Ministers from the four big EU countries cemented their political support for GSM with the Bonn Declaration on Global Information Networks in May, and the GSM MoU was tabled for signature in September. The MoU drew in mobile operators from across Europe to pledge to invest in new GSM networks to an ambitious common date.
In this short 38-week period the whole of Europe had been brought behind GSM in a rare unity and speed, guided by four public officials: Armin Silberhorn, Stephen Temple, Philippe Dupuis, and Renzo Failli. In 1989 the Groupe Spécial Mobile committee was transferred from CEPT to the European Telecommunications Standards Institute. In parallel, France and Germany signed a joint development agreement in 1984 and were joined by Italy and the UK in 1986. In 1986, the European Commission proposed reserving the 900 MHz spectrum band for GSM. The former Finnish prime minister Harri Holkeri made the world's first GSM call on July 1, 1991, calling Kaarina Suonio using a network built by Telenokia and Siemens and operated by Radiolinja. The following year saw the sending of the first short messaging service (SMS) message, and Vodafone UK and Telecom Finland signed the first international roaming agreement. Work began in 1991 to expand the GSM standard to the 1800 MHz frequency band, and the first 1800 MHz network, known as DCS 1800, became operational in the UK by 1993.
That year, Telecom Australia became the first network operator to deploy a GSM network outside Europe, and the first practical hand-held GSM mobile phone became available. In 1995 fax, data and SMS messaging services were launched commercially, the first 1900 MHz GSM network became operational in the United States, and GSM subscribers worldwide exceeded 10 million. In the same year, the GSM Association was formed. Pre-paid GSM SIM cards were launched in 1996, and worldwide GSM subscribers passed 100 million in 1998. In 2000 the first commercial GPRS services were launched and the first GPRS-compatible handsets became available for sale. In 2001, the first UMTS network was launched, a 3G technology that is not part of GSM, and worldwide GSM subscribers exceeded 500 million. In 2002, the first Multimedia Messaging Service (MMS) was introduced and the first GSM network in the 800 MHz frequency band became operational. EDGE services first became operational in a network in 2003, and the number of worldwide GSM subscribers exceeded 1 billion in 2004.
By 2005 GSM networks accounted for more than 75% of the worldwide cellular network market, serving 1.5 billion subscribers. In 2005, the first HSDPA-capable network became operational, and the first HSUPA network launched in 2007. Worldwide GSM subscribers exceeded three billion in 2008. The GSM Association estimated in 2010 that technologies defined in the GSM standard served 80% of the mobile market, encompassing more than 5 billion people across more than 212 countries and territories, making GSM the most ubiquitous of the many standards for cellular networks. GSM is a second-generation standard employing time-division multiple access (TDMA) spectrum-sharing, issued by the European Telecommunications Standards Institute. The GSM standard does not include the 3G Universal Mobile Telecommunications System (UMTS) code-division multiple access technology nor the 4G LTE orthogonal frequency-division multiple access technology standards issued by the 3GPP. GSM, for the first time, set a common standard for Europe for wireless networks.
It was also adopted by many countries outside Europe. This allowed subscribers to use other GSM networks that have roaming agreements with their home operator. The common standard reduced research and development costs, since hardware and software could be sold with only minor adaptations for the local market.